The content to crawl is the Douban Book Top 250 list, as shown in the figure.
(2) Crawl all 10 pages of the Douban Book Top 250 list. Browsing manually, the first 4 pages have the following URLs:
https://book.douban.com/top250
https://book.douban.com/top250?start=25
https://book.douban.com/top250?start=50
https://book.douban.com/top250?start=75
Changing the first page's URL to https://book.douban.com/top250?start=0 still loads the page correctly, so only the number after start= needs to change; incrementing it by 25 per page yields all 10 page URLs.
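For example, a minimal sketch of this construction (the variable names here are illustrative, not part of the final script):

start_offsets = range(0, 250, 25)  # 0, 25, 50, ..., 225 -- one offset per page
page_urls = ['https://book.douban.com/top250?start={}'.format(i) for i in start_offsets]
print(page_urls[0])   # https://book.douban.com/top250?start=0
print(page_urls[-1])  # https://book.douban.com/top250?start=225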
(3) The information to crawl: book title, the book's URL, author, publisher, publication date, price, rating, and a one-line comment.
(4) Use Python's csv library to store the crawled information in a local CSV file.
from lxml import etree
import requests
import csv

# Open the output file; newline='' avoids blank rows on Windows
fp = open('C:/Users/LP/Desktop/doubanbook.csv', 'wt', newline='', encoding='utf-8')
writer = csv.writer(fp)
writer.writerow(('name', 'url', 'author', 'publisher', 'date', 'price', 'rate', 'comment'))

# Build the 10 page URLs: start = 0, 25, ..., 225
urls = ['https://book.douban.com/top250?start={}'.format(i) for i in range(0, 250, 25)]
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}

for url in urls:
    html = requests.get(url, headers=headers)
    selector = etree.HTML(html.text)
    # Each book sits in a <tr class="item"> row
    infos = selector.xpath('//tr[@class="item"]')
    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        book_url = info.xpath('td/div/a/@href')[0]
        # The info line looks like "author / publisher / date / price";
        # index from the end because the author field may itself contain '/'
        book_infos = info.xpath('td/p/text()')[0]
        parts = book_infos.split('/')
        author = parts[0].strip()
        publisher = parts[-3].strip()
        date = parts[-2].strip()
        price = parts[-1].strip()
        rate = info.xpath('td/div/span[2]/text()')[0]
        # The one-line quote may be missing for some books
        comments = info.xpath('td/p/span/text()')
        comment = comments[0] if len(comments) != 0 else 'empty'
        writer.writerow((name, book_url, author, publisher, date, price, rate, comment))

fp.close()
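To verify the output, a minimal sketch that reads the file back with the same csv module (assuming the path used above):

import csv

with open('C:/Users/LP/Desktop/doubanbook.csv', encoding='utf-8') as f:
    for row in csv.reader(f):
        print(row)  # header row first, then one list of fields per book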