
Hello everyone, nice to see you again. I'm your friend, 全栈君.
items:

```python
import scrapy

class GiteeItem(scrapy.Item):
    link = scrapy.Field()
    desc = scrapy.Field()
```
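For the database helper below, parameterized placeholders are the safe way to insert scraped text: the driver does the quoting, so a description containing `'` or `"` cannot break the statement or inject SQL. Here is a minimal sketch using the stdlib `sqlite3` module as a stand-in for pymysql (sqlite3 uses `?` placeholders where pymysql uses `%s`, but the `cursor.execute(sql, params)` pattern is identical; the table and values are made up for illustration):

```python
import sqlite3

# In-memory stand-in for the MySQL `mindsa` database.
connect = sqlite3.connect(':memory:')
cursor = connect.cursor()
# `desc` is a reserved word, so it is quoted in the DDL as well.
cursor.execute('CREATE TABLE gitee (link TEXT, "desc" TEXT)')

def insert_gitee(item):
    # Placeholders (? here, %s in pymysql) make the driver escape the values,
    # so quotes inside the scraped text cannot break the statement.
    cursor.execute('INSERT INTO gitee (link, "desc") VALUES (?, ?)',
                   (item['link'], item['desc']))
    connect.commit()

insert_gitee({'link': 'gitee.com/example/repo',
              'desc': 'a "tricky" description with \'quotes\''})
```

With naive string formatting, the same description would have produced a broken `INSERT` statement.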
passdb:

```python
import emoji
import pymysql

connect = pymysql.connect(host='localhost', user='root', password='root',
                          db='mindsa', charset='utf8mb4')
cursor = connect.cursor()

def insertGitee(item):
    # `desc` is a reserved word in MySQL, so it stays backtick-quoted.
    # emoji.demojize() turns emoji into :shortcodes: so they store cleanly.
    # Parameterized placeholders (%s) let the driver do the quoting, which is
    # safe against quotes in the scraped text and against SQL injection.
    sql = "INSERT INTO gitee(link, `desc`) VALUES (%s, %s)"
    cursor.execute(sql, (emoji.demojize(item['link']),
                         emoji.demojize(item['desc'])))
    connect.commit()
```

pipelines:

```python
# Import path assumes passdb.py lives inside the myscrapy package.
from myscrapy.passdb import insertGitee

class GiteePipeline:
    def process_item(self, item, spider):
        insertGitee(item)
        # Return the item so any later pipelines still receive it.
        return item
```

settings:
```python
# Lower numbers (in the 0-1000 range) run first when several pipelines are enabled.
ITEM_PIPELINES = {
    'myscrapy.pipelines.GiteePipeline': 300,
}
```

GiteeSprider:

```python
import scrapy

from myscrapy.items import GiteeItem

class GiteeSprider(scrapy.Spider):
    name = 'gitee'
    # Note: the attribute is named allowed_domains, and it must be a list.
    allowed_domains = ['gitee.com']
    start_urls = ['https://gitee.com/explore/all']

    def parse(self, response, **kwargs):
        # Locate the repository entries with an absolute XPath.
        elements = response.xpath(
            '//div[@class="ui relaxed divided items explore-repo__list"]'
            '//div[@class="item"]')
        for element in elements:
            # Note: when running a second XPath on a sub-element, make the path
            # relative by prefixing a dot -- use .// rather than //.
            link = 'https://gitee.com' + element.xpath('.//h3/a/@href').get()
            desc = element.xpath('.//div[@class="project-desc"]/text()').get()
            item = GiteeItem()
            item['link'] = link
            item['desc'] = desc
            yield item
        # Note: to match on several attributes at once, join the tests with "and".
        next_href = response.xpath(
            '//div[@class="ui tiny pagination menu"]'
            '//a[@class="icon item" and @rel="next"]/@href'
        ).get()
        if next_href is not None:
            # Follow the next page if one exists.
            yield scrapy.Request('https://gitee.com' + next_href, self.parse)
```
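The two XPath notes in the spider — prefix with a dot for a relative search inside a sub-element, and join several attribute tests with `and` — can be checked in isolation. Here is a small sketch using lxml, which Scrapy's selectors are built on (via parsel); the HTML is a made-up miniature of the explore page:

```python
from lxml import html

doc = html.fromstring("""
<div>
  <div class="item"><h3><a href="/repo-a">A</a></h3></div>
  <div class="item"><h3><a href="/repo-b">B</a></h3></div>
  <div class="pager">
    <a class="icon item" href="/page1">prev</a>
    <a class="icon item" rel="next" href="/page2">next</a>
  </div>
</div>
""")

first_item = doc.xpath('//div[@class="item"]')[0]

# // restarts from the document root, so it matches the links in BOTH items...
assert len(first_item.xpath('//h3/a/@href')) == 2
# ...while .// stays inside the current element and matches only its own link.
assert first_item.xpath('.//h3/a/@href') == ['/repo-a']

# Joining attribute tests with "and": both pager links share the class,
# but only one also carries rel="next".
assert doc.xpath('//a[@class="icon item" and @rel="next"]/@href') == ['/page2']
```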
Publisher: 全栈程序员栈长. Please credit the source when reposting: https://javaforall.cn/219206.html Original link: https://javaforall.cn