Web Scraping Sword Manual, Page 7 (enter a keyword, scrape Pinduoduo product listings, and save them)

Time for some hands-on practice!

We will fetch Pinduoduo product information and persist it to a CSV file.

The code is as follows:



import requests
from lxml import etree
import csv

# Get the search keyword from the user
word = input("Please enter the keyword you want to search for: ")
url = "https://youhui.pinduoduo.com/search/landing?keyword=%s"
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36 Edg/94.0.992.50"
}
# Proxies must be keyed by scheme ("http"/"https"); the value is "scheme://host:port"
proxies = {"http": "http://222.74.73.202:42055", "https": "http://222.74.73.202:42055"}
# Send the request through the proxy
response = requests.get(url % word, headers=headers, proxies=proxies).content

# Parse the HTML (use a new name so the imported etree module is not shadowed)
tree = etree.HTML(response)
# Select every product link on the search results page
commodity_list = tree.xpath('//*[@id="__next"]/div/div[2]/div/div[2]/a')

# newline="" prevents the csv module from writing extra blank lines on Windows
f = open("pinduoduo_products_(%s).csv" % word, "w", encoding="utf-8", newline="")
csv_writer = csv.writer(f)
for commodity in commodity_list:
    # Product description inside each link
    describe = commodity.xpath("./div/p/text()")

    # Price after coupon
    price = commodity.xpath('./div/div/p[1]/span/text()')

    # Original price
    cost_price = commodity.xpath('./div/div/span/text()')

    # Product image URL
    commodity_photo = commodity.xpath('./img/@src')
    csv_writer.writerow([describe, price, cost_price, commodity_photo])
f.close()
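
Note that every commodity.xpath(...) call returns a list, so the rows written above contain Python list literals such as ['...']. Below is a minimal sketch of a cleaner write step, assuming the same page structure, the word variable, and the commodity_list selected above; the first() helper is just an illustrative name for "take the first match":

import csv

def first(values, default=""):
    # XPath returns a list of matches; keep the first one (stripped) or a default
    return values[0].strip() if values else default

# Reuses word and commodity_list from the script above
with open("pinduoduo_products_(%s).csv" % word, "w", encoding="utf-8", newline="") as f:
    csv_writer = csv.writer(f)
    # Header row so the CSV columns are self-describing
    csv_writer.writerow(["description", "coupon_price", "original_price", "image_url"])
    for commodity in commodity_list:
        csv_writer.writerow([
            first(commodity.xpath("./div/p/text()")),
            first(commodity.xpath("./div/div/p[1]/span/text()")),
            first(commodity.xpath("./div/div/span/text()")),
            first(commodity.xpath("./img/@src")),
        ])

Using a with block also closes the file automatically, so the explicit f.close() is no longer needed.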







Results:

Entering the keyword (screenshot)

The generated CSV file (screenshot)

The scraped data (screenshot)
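
If you want to confirm the saved data without opening the file in Excel, the csv module can read it back; a minimal sketch, assuming the same filename pattern and word variable as above:

import csv

# Print the first few rows of the file written earlier
with open("pinduoduo_products_(%s).csv" % word, encoding="utf-8", newline="") as f:
    for i, row in enumerate(csv.reader(f)):
        print(row)
        if i >= 4:  # show only the first five rows
            break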

