I think this code snippet is close to working, but it is not downloading data from the site it points to. I am trying to download the table named "fs-table" and want to put each fs-table on a separate Excel worksheet.
# pip install -U multi-mechanize
import mechanize
mech = mechanize.Browser()
from mechanize import Browser
from BeautifulSoup import BeautifulSoup
from openpyxl import load_workbook
from openpyxl import Workbook
mech = Browser()
tckr = ['SBUX','MSFT','AAPL']
url = "https://finance.google.com/finance?q=NASDAQ:" + tckr + "&fstype=ii"
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html)
table = soup.find("fs-table", border=1)
url_list = [url + s for s in tckr]
for url in url_list:
    try:
        wb1 = Workbook()
        ws1 = wb1.active
        wb1 = load_workbook('C:/Users/Excel/Desktop/template.xlsx')
        wb1.create_sheet(tckr)
        with open('C:/Users/Excel/Desktop/today.csv', 'a', newline='') as f:
            for row in table.findAll('tr')[1:]:
                col = row.findAll('td')
                rank = col[0].string
                artist = col[1].string
                album = col[2].string
                cover_link = col[3].img['src']
                record = (rank, artist, album, cover_link)
                print("|".join(record))
    except HTTPError:
        print("{} - not found".format(url))
    wb1.save('C:/Users/Excel/Desktop/template.xlsx')
This is the website I am trying to work with.
Right now I am getting this message: ModuleNotFoundError: No module named 'mechanize'
However, I did install multi-mechanize!
I am using Python 3.6.1; Spyder 3.2.4
Posted on 2018-02-23 14:41:29
Try this. It will fetch the table data from that site.
from bs4 import BeautifulSoup
import requests

URL = "https://finance.google.com/finance?q=NASDAQ:{}&fstype=ii"

def Get_Table(ticker):
    response = requests.get(URL.format(ticker))
    soup = BeautifulSoup(response.text, "lxml")
    table = soup.select_one("#fs-table")
    for items in table.select("tr"):
        data = [' '.join(item.text.split()) for item in items.select("th,td")]
        print(data)

if __name__ == '__main__':
    for tckr in ['SBUX', 'MSFT', 'AAPL']:
        Get_Table(tckr)
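The question also asked for each fs-table on its own Excel worksheet, which the answer above does not cover. A minimal sketch of that step with openpyxl (the nested lists below are placeholder data standing in for the scraped rows, not real Google Finance output):

```python
from openpyxl import Workbook

# Placeholder data: assume each ticker maps to a list of rows,
# each row a list of cell strings, as produced by the scraper above.
scraped = {
    'SBUX': [['Revenue', '22,386.80'], ['Cost of Revenue', '9,034.30']],
    'MSFT': [['Revenue', '89,950.00'], ['Cost of Revenue', '34,261.00']],
}

wb = Workbook()
wb.remove(wb.active)  # drop the default empty sheet
for ticker, rows in scraped.items():
    ws = wb.create_sheet(title=ticker)  # one worksheet per ticker
    for row in rows:
        ws.append(row)  # append() writes one row per call
wb.save('fs_tables.xlsx')
```

Creating the sheet once per ticker and appending rows avoids the question's pattern of reloading the template workbook inside the loop.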
Posted on 2018-02-23 13:07:57
Replace your
from mechanize import Browser
with
import mechanize
and your
mech = Browser()
with
mech = mechanize.Browser()
By the way, in url = "https://finance.google.com/finance?q=NASDAQ:" + tckr + "&fstype=ii", tckr is not yet defined. My knowledge of Python is limited.
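On the tckr point: the question concatenates the whole list into the URL, which raises a TypeError. A sketch of building one URL per ticker instead (the mechanize lines are left as comments, since per the error above the module may not be installed; multi-mechanize is a different package):

```python
# Format one URL per ticker rather than concatenating the ticker list itself.
URL = "https://finance.google.com/finance?q=NASDAQ:{}&fstype=ii"
tickers = ['SBUX', 'MSFT', 'AAPL']
urls = [URL.format(t) for t in tickers]

print(urls[0])  # https://finance.google.com/finance?q=NASDAQ:SBUX&fstype=ii

# With the import fix from this answer applied, fetching would then be:
# import mechanize
# mech = mechanize.Browser()
# page = mech.open(urls[0])
```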
https://stackoverflow.com/questions/48948520