Requirement: for a business verification task, a structured address such as "XX省XX市XX区XXX号" has to be converted into the province / city / district (county) / town (subdistrict) / village hierarchy of the national statistics administrative divisions.
Approach:
1. Write a text-parsing routine by hand. This looks complex and would leave many cases uncovered, so it is set aside for now; if parsing worked, it would be fast.
2. Use a crawler: search Baidu for "百度百科" plus the business address and analyse the address information on the first result page. The information found there varies a lot, which makes the analysis hard, but the upside is unlimited crawling.
3. Use the Amap web-service API https://lbs.amap.com/api/webservice/guide/api/georegeo (geocoding / reverse geocoding). Individual developers get a free quota of 300,000 calls per day, which is generally enough, and it is fast.
Given the current business volume, option 3 was chosen.
Preparation:
Dependencies: requests, lxml, pandas
1. Read the Amap API parameters. The address string can be geocoded to a longitude/latitude, and reverse geocoding that position returns the province / city / district (county) / town (subdistrict). Special case: for very irregular addresses, a default city has to be prepended to the search.
2. Crawl the statistics codes for administrative divisions and urban-rural classification: http://www.stats.gov.cn/tjsj/tjbz/tjyqhdmhcxhfdm/2019/index.html, stored in the form described below. This is needed mainly because the Amap reverse-geocoding API does not go down to the village level; if it did, the NBS data would be unnecessary. The final (village) level is found by matching the village entries under the resolved subdistrict against the institution address.
3. Learn XPath parsing with the lxml library; the Amap API returns XML.
Implementation:
1. Open the Excel file with pandas. The key point is the dtype=object parameter, which keeps the data in its original form; without it, numeric-looking text (such as codes with leading zeros) is loaded as numbers.
import pandas as pd
import requests
from lxml import etree

file_name = 'data/address2test.xls'
df = pd.read_excel(file_name, dtype=object)
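A quick illustration of what dtype=object prevents, using an in-memory CSV instead of the Excel file:

```python
import io
import pandas as pd

data = '机构地址,代码\n惠州市惠城区江北街道1号,001\n'
df_default = pd.read_csv(io.StringIO(data))               # 代码 parsed as the integer 1
df_object = pd.read_csv(io.StringIO(data), dtype=object)  # 代码 kept as the text '001'
```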
city_bk = '惠州市'  # default city prepended to irregular addresses
# Build the request URL fragments
req_geo_s = 'https://restapi.amap.com/v3/geocode/geo?address='
req_geo_e = '&output=XML&key=2a8d3af7ce489cb7e219d7df54d92678'
req_regeo_s = 'https://restapi.amap.com/v3/geocode/regeo?output=xml&location='
req_regeo_e = '&key=2a8d3af7ce489cb7e219d7df54d92678&radius=1000&extensions=all'
headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; Media Center PC 6.0)',
}
list_err_url = []  # URLs that failed
# Reorder the columns: keep the first column as the address and append the
# result columns; columns that already exist keep their data. reindex returns
# a new frame and has no inplace parameter.
new_columns = [df.columns[0]] + ['执行结果', '标准地址', '国家', '省份', '城市', '县区代码', '县区', '乡镇代码', '乡镇', '街道', '乡村地址']
df = df.reindex(columns=new_columns)
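reindex keeps the data of any column that already exists and fills new columns with NaN, so re-running the script does not wipe earlier results; a toy illustration:

```python
import pandas as pd

df = pd.DataFrame({'地址': ['某地址'], '执行结果': [1]})
# Existing columns ('地址', '执行结果') keep their values; '省份' is added as NaN
df = df.reindex(columns=['地址', '执行结果', '省份'])
```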
2. Iterate over the rows with df.apply, building and issuing one Amap API request per row. (The append_address function shown below has to be defined before this snippet is executed.)
df_sel = df['执行结果'] != 1  # only rows not yet processed successfully
cols = ['执行结果', '标准地址', '国家', '省份', '城市', '县区代码', '县区', '乡镇代码', '乡镇', '街道']
results = df[df_sel].apply(append_address, axis=1)
for col, values in zip(cols, zip(*results)):
    df.loc[df_sel, col] = values
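The per-row function returns a tuple, and zip(*...) transposes the resulting sequence of tuples into one sequence per output column; a minimal sketch with a toy stand-in for append_address:

```python
import pandas as pd

df = pd.DataFrame({'addr': ['广东省惠州市', '广东省深圳市']})

# Toy stand-in for append_address: returns (result, province, city)
def split_row(x):
    s = str(x['addr'])
    return (1, s[:3], s[3:])

# zip(*...) fans the per-row tuples out into per-column sequences
df['执行结果'], df['省份'], df['城市'] = zip(*df.apply(split_row, axis=1))
```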
# Request function: geocode one row's address, then reverse geocode it
def append_address(x):
    result = 1
    url = req_geo_s + str(x.iloc[0]) + req_geo_e
    print('row:', str(x.name), 'address:', str(x.iloc[0]), 'url:', url)
    # Initialise all result fields
    location = formatted_address = country = province = city = citycode = district = ''
    adcode = township = towncode = streetNumber_street = streetNumber_number = ''
    try:
        resp = requests.get(url, timeout=5, headers=headers)  # set timeout and HTTP headers
        xml = etree.XML(resp.content)
        count = xml.xpath('/response/count/text()')[0]
        if int(count) == 0:
            # No hit: the address is very irregular, but these are usually
            # local, so retry with the default city prepended
            resp = requests.get(req_geo_s + city_bk + str(x.iloc[0]) + req_geo_e,
                                timeout=5, headers=headers)
            xml = etree.XML(resp.content)
            city = xml.xpath('/response/geocodes/geocode/city/text()')
            locations = xml.xpath('/response/geocodes/geocode/location/text()')
            # If there are several hits, prefer the one in the default city
            if locations:
                location = locations[0]
                for i in range(len(city)):
                    if city[i] == city_bk:
                        location = locations[i]
                        break
        else:
            location = xml.xpath('/response/geocodes/geocode/location/text()')[0]
    except Exception as e:
        print('req_geo_e error message:', str(e), 'error url:', url)
        list_err_url.append(url)
        result = 0
        location = ''
    # If geocoding succeeded, reverse geocode the coordinates
    if location != '' and result != 0:
        url = req_regeo_s + location + req_regeo_e
        try:
            resp = requests.get(url, timeout=5, headers=headers)
            xml = etree.XML(resp.content)

            def xp_first(path):
                # First text node at path, or '' if absent
                r = xml.xpath(path)
                return r[0] if r else ''

            # Reverse-geocoding fields
            base = '/response/regeocode/'
            comp = base + 'addressComponent/'
            formatted_address = xp_first(base + 'formatted_address/text()')
            country = xp_first(comp + 'country/text()')
            province = xp_first(comp + 'province/text()')
            city = xp_first(comp + 'city/text()')
            citycode = xp_first(comp + 'citycode/text()')
            district = xp_first(comp + 'district/text()')
            adcode = xp_first(comp + 'adcode/text()')
            township = xp_first(comp + 'township/text()')
            towncode = xp_first(comp + 'towncode/text()')
            streetNumber_street = xp_first(comp + 'streetNumber/street/text()')
            streetNumber_number = xp_first(comp + 'streetNumber/number/text()')
        except Exception as e:
            print('location error message:', str(e), 'error url:', url)
            result = 0
            list_err_url.append(url)
    # Return the result tuple
    return (result, formatted_address, country, province, city, adcode,
            district, towncode, township, streetNumber_street + streetNumber_number)
3. At this point the four upper levels are resolved; the last (village) level still has to be filled in. First, build a two-level dict-of-lists from the crawled statistics data: {adcode (first 6 digits of the statistics code): {town code (digits 7-9): [village]}}.
# Read the administrative divisions; keep only village-level rows
sdf = pd.read_csv('data/stats.csv', dtype=object)
sdf.drop(sdf[sdf['statType'] != 'village'].index, inplace=True)
sdf.drop(columns=['statName', 'statProvince', 'statCity', 'statCounty', 'statTown', 'statVillageType'], inplace=True)
# Build the administrative-division dict
d_state = {}
for i in range(len(sdf)):
    # Split the statistics code
    statCode = str(sdf.iloc[i]['statCode']).strip().replace("'", "")
    city = statCode[:6]   # county-level adcode (first 6 digits)
    town = statCode[6:9]  # town-level code (digits 7-9)
    # Produces (full village name, short name used for matching, flag)
    village_deal = deal_village(str(sdf.iloc[i]['statVillage']))
    if city not in d_state:
        d_state[city] = {}
    d_t = d_state[city]
    if town not in d_t:
        d_t[town] = []
    d_t[town].append(village_deal)
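deal_village is not shown above. A hypothetical sketch of what it might do, assuming village names carry suffixes like 村委会/居委会/社区 that should be stripped to get a short name for substring matching:

```python
def deal_village(name):
    # Hypothetical helper (assumed, not from the original post): strip a
    # common suffix to get the short name used for substring matching.
    # Returns (full name, short name, suffix flag); longest suffix first.
    for suffix in ('社区居委会', '村委会', '居委会', '社区'):
        if name.endswith(suffix):
            return (name, name[:-len(suffix)], suffix)
    return (name, name, '')
```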
4. Iterate over the standardized addresses again and match each village short name against the concrete address; where a match is found, fill it in as the final level.
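A hedged sketch of this step-4 lookup, assuming the d_state dict from step 3 and the adcode/towncode/address fields produced in step 2 (the towncode returned by Amap is 9 digits, of which digits 7-9 select the town inside the county; match_village is an assumed name, not from the original post):

```python
def match_village(d_state, adcode, towncode, address):
    # Look up candidate villages under the resolved county + town, then do a
    # simple substring match of the short name against the business address
    for full, short, _flag in d_state.get(adcode[:6], {}).get(towncode[6:9], []):
        if short and short in address:
            return full
    return ''

d_state = {'441302': {'004': [('水口村委会', '水口', '村委会')]}}
village = match_village(d_state, '441302', '441302004', '惠州市惠城区水口街道某路1号')
```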
Summary
1. The Amap API success rate is good: of the current 20,000+ records, only 28 could not be resolved, and about 5,000 needed the default city prepended before they could be found.
2. The village-level backfill, which only does simple short-name matching, is mediocre. Consider using a crawler to find the nearest community or village committee, or look for a website that can map addresses to villages and crawl that instead.