I'm trying to scrape multiple pages on the web, and I've successfully retrieved the data from one page. Now I'd like to know how to implement a loop to retrieve data from several pages.
The link to the page is: category/dealers-distributors/
Here is my code:
from bs4 import BeautifulSoup
import requests
import csv

source = requests.get('https://www.diac.ca/directory/wpbdp_category/dealers-distributors/').text
soup = BeautifulSoup(source, 'lxml')

csv_file = open('scrape.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['company', 'website'])

for i in soup.find_all('div', class_='wpbdp-listing'):
    company = i.find('div', class_='listing-title').a.text
    print(company)
    website = i.find('div', class_='wpbdp-field-business_website_address').span.a.text
    print(website)
    csv_writer.writerow([company, website])

csv_file.close()
I'd really appreciate your feedback and insights. Thanks a lot!
Posted on 2019-08-13 12:30:14
One option is to look for a link under the tag with class="next". If the link exists, use it to load the next page; if it doesn't, break out of the loop:
import requests
from bs4 import BeautifulSoup

source = requests.get('https://www.diac.ca/directory/wpbdp_category/dealers-distributors/').text
soup = BeautifulSoup(source, 'lxml')

page = 1
while True:
    print('Page no. {}'.format(page))
    print('-' * 80)

    for i in soup.find_all('div', class_='wpbdp-listing'):
        company = i.find('div', class_='listing-title').a.text
        print(company)
        website = i.find('div', class_='wpbdp-field-business_website_address').span.a.text
        print(website)

    if soup.select_one('.next a[href]'):
        soup = BeautifulSoup(requests.get(soup.select_one('.next a[href]')['href']).text, 'lxml')
        page += 1
    else:
        break
Prints:
Page no. 1
--------------------------------------------------------------------------------
AMD Medicom Inc.
http://www.medicom.ca
Clinical Research Dental Supplies & Services Inc.
http://www.clinicalresearchdental.com
Coltene Whaledent
http://www.coltene.com
CompuDent Systems Inc.
http://www.compudent.ca
DenPlus Inc.
http://www.denplus.com
Dental Canada Instrumentation
http://www.mydentalcanada.com
Dental Services Group of Toronto Inc.
http://www.dsgtoronto.com
Dental Wings Inc.
http://www.dentalwings.com
Dentsply Sirona Canada
http://www.dentsplysirona.ca
DiaDent Group International Inc.
http://www.diadent.com
Page no. 2
--------------------------------------------------------------------------------
DMG America LLC
http://www.dmg-america.com
Hager Worldwide, Inc.
http://www.hagerworldwide.com
Hansamed Ltd
http://www.hansamed.net
Henry Schein Canada
http://www.henryschein.com
Heraeus Kulzer LLC
http://www.heraeus-kulzer-us.com
Johnson & Johnson Inc.
http://www.jjnjcanada.com
K-Dental Inc.
http://www.k-dental.ca
Kerr Dental
http://www.kerrdental.com
Northern Surgical & Medical Supplies Ltd.
www.northernsurgical.com
Northern Surgical and Medical Supplies Ltd.
http://www.northernsurgical.com
Page no. 3
--------------------------------------------------------------------------------
Patterson Dental/Dentaire Canada Inc.
http://www.pattersondental.ca
Procter & Gamble Oral Health
http://www.pg.com
Qwerty Dental Inc.
http://www.qwertydental.com
Sable Industries Inc.
http://www.sableindustriesinc.com
Septodont of Canada, Inc.
http://www.septodont.ca
Sure Dental Supplies of Canada Inc.
http://www.suredental.com
Swiss NF Metals Inc.
http://www.swissnf.com
The Aurum Group
http://www.aurumgroup.com
The Surgical Room Inc.
http://www.thesurgicalroom.ca
Unique Dental Supply Inc.
http://www.uniquedentalsupply.com
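If you also want to write the results to scrape.csv as in your question, here is a minimal sketch that folds the CSV writing into the same pagination loop (same selectors as above, untested beyond that; newline='' follows the csv module's recommendation to avoid blank rows on Windows):

import csv
import requests
from bs4 import BeautifulSoup

url = 'https://www.diac.ca/directory/wpbdp_category/dealers-distributors/'

with open('scrape.csv', 'w', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow(['company', 'website'])
    while url:
        soup = BeautifulSoup(requests.get(url).text, 'lxml')
        for i in soup.find_all('div', class_='wpbdp-listing'):
            company = i.find('div', class_='listing-title').a.text
            website = i.find('div', class_='wpbdp-field-business_website_address').span.a.text
            csv_writer.writerow([company, website])
        # Follow the "next" link if present, otherwise stop
        next_link = soup.select_one('.next a[href]')
        url = next_link['href'] if next_link else None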
Posted on 2019-08-13 13:04:31
The general process looks like this:
# Make soup
links = [link.get('href') for link in soup.find_all('a')]  # These are the links you want to visit next
for link in links:
    requests.get(link)
    # Do whatever / make soup again
Also helpful here is requests.Session(), which persists cookies, headers, and so on across requests:
session = requests.Session()
session.get(some_url)
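For instance, headers set once on the session are sent with every request it makes, and cookies from earlier responses are reused automatically (the URLs and header value below are illustrative placeholders):

import requests

session = requests.Session()
session.headers.update({'User-Agent': 'my-scraper/1.0'})  # sent with every request on this session

first = session.get('https://example.com/page/1')   # placeholder URL
second = session.get('https://example.com/page/2')  # reuses any cookies set by the first response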
Here's an example I just wrote that goes a bit further to show the general flow of scraping:
import requests
from bs4 import BeautifulSoup

def scrape_data(link):
    # Fetch the page and parse it before extracting entries
    soup = BeautifulSoup(requests.get(link).text, 'lxml')
    entries = soup.find_all('div', class_='data')
    return [entry.text for entry in entries]

def paginate(link):
    # Fetch the page and collect the navigation links to visit next
    soup = BeautifulSoup(requests.get(link).text, 'lxml')
    links = soup.find_all('a', class_='nav')
    return [link.get('href') for link in links]

def main(starting_link):
    data = [scrape_data(link) for link in paginate(starting_link)]
    # Export / process data here
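Hypothetically, for the site in the question you would call it like this (note that the 'data' and 'nav' selectors above are placeholders and would need to match the actual page markup):

if __name__ == '__main__':
    starting_link = 'https://www.diac.ca/directory/wpbdp_category/dealers-distributors/'
    main(starting_link)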
https://stackoverflow.com/questions/57484707