
Python Web Scraping: requests + BeautifulSoup + MySQL

Wu_Candy · Published 2023-09-14 · From the column 无量测试之道

I. What is a web scraper? A scraper is a program that sends requests to a website, fetches the resources, and then parses them to extract useful data. The scraping workflow has four steps:

1. Send a request: use an HTTP library to send a Request to the target site. A Request contains request headers, a request body, and so on.

2. Get the response: if the server responds normally, you get a Response, which may contain HTML, JSON, images, video, and so on.

3. Parse the content: HTML with regular expressions (the re module) or third-party parsers such as BeautifulSoup and pyquery; JSON with the json module; binary data by writing it to a file in "wb" mode.

4. Save the data: to a database (MySQL, MongoDB, Redis) or to files. A minimal end-to-end sketch of these four steps is shown below.
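As a rough illustration of the four steps, here is a minimal, self-contained sketch. It uses example.com as a stand-in target; the h1 selector and the output file name are placeholders, not part of the original code:

Code language: python
from bs4 import BeautifulSoup
import requests

# Step 1: send the request; headers carry the "request header" part.
response = requests.get("https://example.com",
                        headers={"User-Agent": "Mozilla/5.0"}, timeout=10)

# Step 2: a normal Response carries the payload, HTML in this case.
response.raise_for_status()

# Step 3: parse the HTML with BeautifulSoup and pull out one field.
soup = BeautifulSoup(response.text, "html.parser")
title = soup.find("h1").get_text(strip=True)

# Step 4: save the extracted data, here simply to a local file.
with open("result.txt", "w", encoding="utf-8") as f:
    f.write(title)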

II. This time the scraped data comes from Lianjia: I'm planning to move and wanted to look at recent Lianjia rental listings, so I scraped the data directly. The code is as follows:

Code language: python
from bs4 import BeautifulSoup as bs
from requests.exceptions import RequestException
import requests
import re
from DBUtils import DBUtils  # DB helper class from the companion article 《Python之mysql实战》

def main(response):  # extract listing data from the page and write it to the database
    html = bs(response.text, 'lxml')
    for data in html.find_all(name='div', attrs={"class": "content__list--item--main"}):
        try:
            # print(data)  # uncomment to inspect the raw listing block
            # The listing title looks like "整租·小区名 2室1厅 南", so splitting on
            # spaces yields the community name, the layout and the orientation.
            Community_name = data.find(name="a", target="_blank").get_text(strip=True)
            name, sizes, forward = str(Community_name).split(" ")[:3]
            # Location line (district/block/floor), with separators stripped.
            flood = data.find(name="span", class_="hide").get_text(strip=True)
            flood = str(flood).replace(" ", "").replace("/", "")
            # Match the area field such as "89㎡"; a raw string avoids escape warnings.
            sqrt = re.compile(r"\d\d+㎡")
            area = str(data.find(string=sqrt)).replace(" ", "")
            # Maintenance time of the listing, e.g. "3天前维护".
            maintance = str(data.find(name="span", class_="content__list--item--time oneline").get_text(strip=True))
            # Monthly rent, e.g. "5000元/月".
            price = str(data.find(name="span", class_="content__list--item-price").get_text(strip=True))
            print(name, sizes, forward, flood, maintance, price)
            # Building SQL by concatenation is fragile and open to SQL injection;
            # a parameterized query is safer (see the note after the script).
            insertsql = "INSERT INTO test_log.`information`(Community_name,size,forward,area,flood,maintance,price) VALUES " \
                "('" + name + "','" + sizes + "','" + forward + "','" + area + "','" + flood + "','" + maintance + "','" + price + "');"
            insert_sql(insertsql)
        except Exception as e:  # a bare except would also swallow KeyboardInterrupt
            print("have an error!!!", e)

def insert_sql(sql):  # write one record to the database
    dbconn = DBUtils("test6")  # "test6" identifies the DB connection (see the companion article)
    dbconn.dbExcute(sql)

def get_one_page(urls):  # fetch one page of listing data
    try:
        headers = {"Host": "bj.lianjia.com",
              "Connection": "keep-alive",
              "Cache-Control": "max-age=0",
              "Upgrade-Insecure-Requests": "1",
              "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36",
              "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
              "Sec-Fetch-Site": "none",
              "Sec-Fetch-Mode": "navigate",
              "Sec-Fetch-User": "?1",
              "Sec-Fetch-Dest": "document",
              "Accept-Encoding": "gzip, deflate, br",
              "Accept-Language": "zh-CN,zh;q=0.9",
              "Cookie": "lianjia_uuid=fa1c2e0b-792f-4a41-b48e-78531bf89136; _smt_uid=5cfdde9d.cbae95b; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2216b3fad98fc1d1-088a8824f73cc4-e353165-2710825-16b3fad98fd354%22%2C%22%24device_id%22%3A%2216b3fad98fc1d1-088a8824f73cc4-e353165-2710825-16b3fad98fd354%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E8%87%AA%E7%84%B6%E6%90%9C%E7%B4%A2%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%22%2C%22%24latest_referrer_host%22%3A%22www.baidu.com%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC%22%7D%7D; _ga=GA1.2.1891741852.1560141471; UM_distinctid=17167f490cb566-06c7739db4a69e-4313f6b-100200-17167f490cca1e; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1588171341; lianjia_token=2.003c978d834648dbbc2d3aa4b226145cd7; select_city=110000; lianjia_ssid=fc20dfa1-6afb-4407-9552-2c4e7aeb73ce; CNZZDATA1253477573=1893541433-1588166864-https%253A%252F%252Fwww.baidu.com%252F%7C1591157903; CNZZDATA1254525948=1166058117-1588166331-https%253A%252F%252Fwww.baidu.com%252F%7C1591154084; CNZZDATA1255633284=1721522838-1588166351-https%253A%252F%252Fwww.baidu.com%252F%7C1591158264; CNZZDATA1255604082=135728258-1588168974-https%253A%252F%252Fwww.baidu.com%252F%7C1591153053; _jzqa=1.2934504416856578000.1560141469.1588171337.1591158227.3; _jzqc=1; _jzqckmp=1; _jzqy=1.1588171337.1591158227.1.jzqsr=baidu.-; _qzjc=1; _gid=GA1.2.1223269239.1591158230; _qzja=1.1313673973.1560141469311.1588171337488.1591158227148.1591158227148.1591158233268.0.0.0.7.3; _qzjto=2.1.0; srcid=eyJ0Ijoie1wiZGF0YVwiOlwiMThmMWQwZTY0MGNiNTliNTI5OTNlNGYxZWY0ZjRmMmM3ODVhMTU3ODNhNjMwODhlZjlhMGM2MTJlMDFiY2JiN2I4OTBkODA0M2Q0YTM0YzIyMWE0YzIwOTBkODczNTQwNzM0NTc1NjBlM2EyYTc3NmYwOWQ3OWQ4OWJjM2UwYzAwY2RjMTk3MTMwNzYwZDRkZTc2ODY0OTY0NTA5YmIxOWIzZWQyMWUzZDE3ZjhmOGJmMGNmOGYyMTMxZTI1MzIxMGI4NzhjNjYwOGUyNjc3ZTgxZjA2YzUzYzE4ZjJmODhmMTA1ZGVhOTMyZTRlOTcxNmNiNzllMWViMThmNjNkZTJiMTcyN2E0YzlkODMwZWIzNmVhZTQ4ZWExY2QwNjZmZWEzNjcxMjBmYWRmYjgxMDY1ZDlkYTFhMDZiOGIwMjI2NTg1ZGU4NTQyODBjODFmYTUyYzI0NDg5MjRlNWI0N1wiLFwia2V5X2lkXCI6XCIxXCIsXCJzaWduXCI6XCI2Yzk3M2U5M1wifSIsInIiOiJodHRwczovL2JqLmxpYW5qaWEuY29tL2RpdGllenVmYW5nL2xpNDY0NjExNzkvcnQyMDA2MDAwMDAwMDFsMSIsIm9zIjoid2ViIiwidiI6IjAuMSJ9"}
        response = requests.get(url=urls, headers=headers)
        response.raise_for_status()  # treat non-2xx answers as failures instead of parsing an error page
        main(response)
    except RequestException:
        return None

if __name__=="__main__":
      for i in range(64): #遍历翻页
        if(i==0):
            urls = "https://bj.lianjia.com/ditiezufang/li46461179/rt200600000001l1/"
            get_one_page(urls)
        else:
            urls = "https://bj.lianjia.com/ditiezufang/li46461179/rt200600000001l1/".replace("rt","pg"+str(i))
            get_one_page(urls)

Note: this code relies on the DBUtils class from the earlier article 《Python之mysql实战》, so please read the two articles together.
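For readers who don't have that article at hand, here is a minimal sketch of what such a DBUtils helper could look like, built on pymysql. The connection settings, the meaning of the "test6" argument, and the optional params argument are assumptions inferred from how the class is called above, not the author's actual implementation:

Code language: python
import pymysql

class DBUtils:  # hypothetical stand-in for the author's helper class
    def __init__(self, db_name):
        # Placeholder credentials; adjust them to your environment.
        self.conn = pymysql.connect(host="localhost", user="root",
                                    password="your_password",
                                    database=db_name, charset="utf8mb4")

    def dbExcute(self, sql, params=None):
        # When params is supplied, pymysql escapes the values itself,
        # which avoids the SQL injection risk of string concatenation.
        with self.conn.cursor() as cursor:
            cursor.execute(sql, params)
        self.conn.commit()

With a helper like this, the INSERT in main() could use %s placeholders and pass the scraped values as params instead of concatenating them into the SQL string.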

III. Below is what the data looks like in the database after loading (result screenshot).

Conclusion: scraping is one of the important ways to acquire data, and we should master more than one way of getting it. Machine learning learns from data, so we need to have that data ready for it. Keep at it, everyone!

Originally published on 2023-09-10 on the WeChat public account 无量测试之道.