Time: 2023-06-21 05:00:01 | Source: Website Operations
Web Scraping in Practice: The Top 100 "Network Information Security" Companies

```python
url_begin = "https://www.tianyancha.com/search/p"
url_end = "?key=%E7%BD%91%E7%BB%9C%E4%BF%A1%E6%81%AF%E5%AE%89%E5%85%A8"
final_result = []
for i in range(1, 6):  # crawl the first 5 result pages
    url = url_begin + str(i) + url_end  # build the URL for each page

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"
}
```
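The long `%E7…` string in `url_end` is nothing mysterious: it is simply the search phrase "网络信息安全" percent-encoded as UTF-8, which you can verify with the standard library alone:

```python
from urllib.parse import quote, unquote

# The key parameter is the UTF-8 percent-encoding of the search phrase
encoded = quote("网络信息安全")
print(encoded)  # %E7%BD%91%E7%BB%9C%E4%BF%A1%E6%81%AF%E5%AE%89%E5%85%A8

# And decoding the article's url_end value recovers the phrase
print(unquote("%E7%BD%91%E7%BB%9C%E4%BF%A1%E6%81%AF%E5%AE%89%E5%85%A8"))  # 网络信息安全
```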
Requests strips away much of urllib's boilerplate while offering a richer API. It is one of the most popular third-party Python libraries and is used by programmers all over the world.
```python
import requests

res = requests.get(url, headers=headers)
print(res.status_code)  # 200 means the page was fetched successfully
```
The BeautifulSoup module works wonders: it turns a tangled web page into something you can browse like a book's table of contents. This project extracts six fields for each company: name, Tianyancha score, registered capital, founding date, legal representative, and business scope.
```python
from bs4 import BeautifulSoup

def parse_page(res):
    """Extract one page's worth of company records."""
    soup = BeautifulSoup(res.text, 'html.parser')
    result = []  # records from this page

    # Company name and Tianyancha score
    company = []
    score = []
    targets = soup.find_all('div', class_="search-item sv-search-company")
    for each in targets:
        company.append(each.find('a').text)
    for each in targets:
        try:
            temp = each.find('span', class_="score-num")
            score.append(int(temp.text))
        except (AttributeError, ValueError):
            score.append("N/A")

    # Legal representative
    boss = []
    targets = soup.find_all('div', class_="title -wider text-ellipsis")
    for each in targets:
        try:
            boss.append(each.a.text)
        except AttributeError:
            boss.append("undisclosed")

    # Registered capital (in units of 10,000 yuan)
    reg_money = []
    targets = soup.find_all('div', class_="title -narrow text-ellipsis")
    for each in targets:
        try:
            reg_money.append(int(each.span.text.split("万")[0]))
        except (AttributeError, ValueError):
            reg_money.append("undisclosed")

    # Founding date
    date = []
    targets = soup.find_all('div', class_="title text-ellipsis")
    for each in targets:
        try:
            date.append(each.span.text)
        except AttributeError:
            date.append("undisclosed")

    # Business scope & historical names
    service = []
    targets = soup.find_all('div', class_="search-item sv-search-company")
    for each in targets:
        try:
            temp = each.find('div', class_="match row text-ellipsis")
            service.append(temp.span.text)
        except AttributeError:
            service.append("undisclosed")

    # Combine the six fields into one record per company
    for i in range(len(company)):
        result.append([company[i], score[i], reg_money[i], date[i], boss[i], service[i]])
    return result
```
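To see how `find_all` with `class_` behaves in isolation, here is a minimal sketch against a simplified, made-up HTML fragment that reuses the class names above (the company name and score are invented, not real Tianyancha markup):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking one search-result card
html = '''
<div class="search-item sv-search-company">
  <a>Example Security Co., Ltd.</a>
  <span class="score-num">92</span>
</div>
'''
soup = BeautifulSoup(html, "html.parser")
item = soup.find("div", class_="search-item sv-search-company")
print(item.a.text)                                 # Example Security Co., Ltd.
print(item.find("span", class_="score-num").text)  # 92
```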
```python
import openpyxl

wb = openpyxl.Workbook()
ws = wb.active
ws.append(["Company", "Tianyancha score", "Registered capital (10k yuan)",
           "Founded", "Legal representative", "Other info"])
for item in final_result:
    ws.append(item)
wb.save("top100_network_information_security.xlsx")
```
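A quick way to confirm the workbook was written correctly is to load it back with `openpyxl.load_workbook` and inspect the rows; a small round-trip sketch (the sample rows here are made up):

```python
import openpyxl

wb = openpyxl.Workbook()
ws = wb.active
ws.append(["Company", "Score"])
ws.append(["ExampleSec Ltd.", 90])
wb.save("demo.xlsx")

# Read the file back; ws.values yields one tuple per row
rows = list(openpyxl.load_workbook("demo.xlsx").active.values)
print(rows[1])  # ('ExampleSec Ltd.', 90)
```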
Keywords: security, practice, network, information, web scraping