I frequently need a list of the CVEs referenced on a vendor's security bulletin page. Sometimes that's simple to copy out, but often the CVEs are mixed in with a lot of other text.
I haven't touched Python in a good while, so I thought this would be a great exercise to figure out how to extract that info – especially since I keep finding myself doing it manually.
Here's my current code:
#!/usr/bin/env python3
# REQUIREMENTS
# python3
# BeautifulSoup (pip3 install beautifulsoup4)
# python 3 certificates (Applications/Python 3.x/ Install Certificates.command) <-- this one took me forever to figure out!
import sys
if sys.version_info[0] < 3:
    raise Exception("Use Python 3: python3 " + sys.argv[0])
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
#specify/get the url to scrape
#url ='https://chromereleases.googleblog.com/2020/02/stable-channel-update-for-desktop.html'
#url = 'https://source.android.com/security/bulletin/2020-02-01.html'
url = input("What is the URL? ") or 'https://chromereleases.googleblog.com/2020/02/stable-channel-update-for-desktop.html'
print("Checking URL: " + url)
# CVE regular expression
cve_pattern = r'CVE-\d{4}-\d{4,7}'
# query the website and return the html
page = urlopen(url).read()
# parse the html returned using beautiful soup
soup = BeautifulSoup(page, 'html.parser')
count = 0
############################################################
# ANDROID === search for CVE references within <td> tags ===
# find all <td> tags
all_tds = soup.find_all("td")
#print all_tds
for td in all_tds:
    if "cve" in td.text.lower():
        print(td.text)
############################################################
# CHROME === search for CVE reference within <span> tags ===
# find all <span> tags
all_spans = soup.find_all("span")
for span in all_spans:
    # this code returns results in triplicate
    for i in re.finditer(cve_pattern, span.text):
        count += 1
        print(count, i.group())
    # this code works, but only returns the first match
    # match = re.search(cve_pattern, span.text)
    # if match:
    #     print(match.group(0))
The Android URL works fine; the problem I'm having is with the Chrome URL. They have the CVE info inside <span> tags, and I'm trying to leverage regular expressions to pull it out.
Using the re.finditer approach, I end up with results in triplicate.
Using the re.search approach, it misses CVE-2019-19925, because two CVEs are listed on that same line and re.search only returns the first match.
Can you offer any advice on the best way to get this working?
I finally worked it out myself. No need for BeautifulSoup; everything is RegEx now. To work around the duplicate/triplicate results I was seeing before, I convert the re.findall list result to a dictionary (retaining order of unique values) and back to a list.
import sys
if sys.version_info[0] < 3:
    raise Exception("Use Python 3: python3 " + sys.argv[0])
import requests
import re
# Specify/get the url to scrape (included a default for easier testing)
### there is no input validation taking place here ###
url = input("What is the URL? ") #or 'https://chromereleases.googleblog.com/2020/02/stable-channel-update-for-desktop.html'
print()
# CVE regular expression
cve_pattern = r'CVE-\d{4}-\d{4,7}'
# query the website and return the html
page = requests.get(url)
# initialize count to 0
count = 0
#search for CVE references using RegEx
cves = re.findall(cve_pattern, page.text)
# after several days of fiddling, I was still getting double and sometimes triple results on certain pages. This next line
# converts the list of objects returned from re.findall to a dictionary (which retains order) to get unique values, then back to a list.
# (thanks to https://stackoverflow.com/a/48028065/9205677)
# I found order to be important sometimes, as the most severely rated CVEs are often listed first on the page
cves = list(dict.fromkeys(cves))
# print the results to the screen
for cve in cves:
    print(cve)
    count += 1
print()
print(str(count) + " CVEs found at " + url)
print()
Related
I have been writing a script that retrieves CVSS3 scores for me when I enter a vulnerability name. I've pretty much got it working as intended, except for one minor annoying detail.
π ~/Documents/Tools/Scripts ❯ python3 CVSS3-Grabber.py
Paste Vulnerability Name: PHP 7.2.x < 7.2.21 Multiple Vulnerabilities.
Base Score: None
Vector: <re.Match object; span=(27869, 27913), match='CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:H'>
Temporal Vector: <re.Match object; span=(27986, 28008), match='CVSS:3.0/E:U/RL:O/RC:C'>
As can be seen, the output could be much neater; I would much prefer something like this:
π ~/Documents/Tools/Scripts ❯ python3 CVSS3-Grabber.py
Paste Vulnerability Name: PHP 7.2.x < 7.2.21 Multiple Vulnerabilities.
Base Score: None
Vector: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:H
However, I have been struggling to figure out how to make the output nicer. Is there a feature of the re module that I'm missing that can do this for me? Or would writing the output to a file first let me manipulate the text into the form I need?
Here is my code. I would appreciate any feedback on how to improve it, as I have recently gotten back into Python and scripting in general.
import requests
import re
from bs4 import BeautifulSoup
from googlesearch import search
def get_url():
    vuln = input("Paste Vulnerability Name: ") + "tenable"
    for url in search(vuln, tld='com', lang='en', num=1, start=0, stop=1, pause=2.0):
        return url
def get_scores(url):
    response = requests.get(url)
    html = response.text
    cvss3_temporal_v = re.search("CVSS:3.0/E:./RL:./RC:.", html)
    cvss3_v = re.search("CVSS:3.0/AV:./AC:./PR:./UI:./S:./C:./I:./A:.", html)
    cvss3_basescore = re.search("Base Score:....", html)
    print("Base Score: ", cvss3_basescore)
    print("Vector: ", cvss3_v)
    print("Temporal Vector: ", cvss3_temporal_v)
urll = get_url()
get_scores(urll)
### IMPROVEMENTS ###
# Include the base score in output
# Tidy up output
# Vulnerability list?
# modify to accept flags, i.e python3 CVSS3-Grabber.py -v VULNAME ???
# State whether it is a failing issue or Action point
Thanks!
Don't print the match object. Print the match value.
In Python the value is accessible through the .group() method. If there are no regex subgroups (or you want the entire match, like in this case), don't specify any arguments when you call it:
print("Vector: ", cvss3_v.group())
I'm trying to print out all the IP addresses from this website https://hidemy.name/es/proxy-list/#list
but nothing happens.
Code in Python 2.7:
import requests
from bs4 import BeautifulSoup
def trade_spider(max_pages):  # go through max_pages of the website starting from 1
    page = 0
    value = 0
    print('proxies')
    while page <= 18:
        value += 64
        url = 'https://hidemy.name/es/proxy-list/?start=' + str(value) + '#list'  # add page number to link
        source_code = requests.get(url)  # get website html code
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        for link in soup.findAll('td', {'class': 'tdl'}):  # get the cells of this class
            proxy = link.string  # get the string inside the cell
            print(proxy)
        page += 1
trade_spider(1)
You're not seeing any output because there are no matching elements in your soup.
I tried dumping all the variables to the output stream and figured out that this website is blocking crawlers. Try printing the plain_text variable; it will probably only contain a warning message like:
It seems you are bot. If so, please use separate API interface. It
cheap and easy to use.
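A quick way to confirm this is to print the beginning of the response (a minimal sketch; whether sending a browser-like User-Agent header gets past the block is an assumption, the site may refuse bots regardless):

import requests

url = 'https://hidemy.name/es/proxy-list/?start=64#list'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# If the site is blocking crawlers, this will show the warning text instead of the proxy table
print(response.status_code)
print(response.text[:500])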
I am new to Python as well as Scrapy.
I am trying to crawl a seed URL https://www.health.com/patients/status/. This seed URL contains many URLs, but I want to fetch only the URLs that contain Faci/Details/#somenumber. The URLs look like this:
https://www.health.com/patients/status/ ->https://www.health.com/Faci/Details/2
-> https://www.health.com/Faci/Details/3
-> https://www.health.com/Faci/Details/4
https://www.health.com/Faci/Details/2 -> https://www.health.com/provi/details/64
-> https://www.health.com/provi/details/65
https://www.health.com/Faci/Details/3 -> https://www.health.com/provi/details/70
-> https://www.health.com/provi/details/71
Inside each https://www.health.com/Faci/Details/2 page there are links like https://www.health.com/provi/details/64 and https://www.health.com/provi/details/65. Finally, I want to fetch some data from the https://www.health.com/provi/details/#somenumber URLs. How can I achieve this?
As of now I have tried the code below from the Scrapy tutorial and am able to crawl only the URLs that contain https://www.health.com/Faci/Details/#somenumber. It's not going to https://www.health.com/provi/details/#somenumber. I tried to set a depth limit in the settings.py file, but it didn't work.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from news.items import NewsItem
class MySpider(CrawlSpider):
    name = 'provdetails.com'
    allowed_domains = ['health.com']
    start_urls = ['https://www.health.com/patients/status/']

    rules = (
        Rule(LinkExtractor(allow=(r'/Faci/Details/\d+', )), follow=True),
        Rule(LinkExtractor(allow=(r'/provi/details/\d+', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        item = NewsItem()
        item['id'] = response.xpath("//title/text()").extract()
        item['name'] = response.xpath("//title/text()").extract()
        item['description'] = response.css('p.introduction::text').extract()
        filename = 'details.txt'
        with open(filename, 'wb') as f:
            f.write(item)
        self.log('Saved file %s' % filename)
        return item
Please help me to proceed further.
To be honest, the regex-based and mighty Rule/LinkExtractor often gave me a hard time. For a simple project, one approach is to extract all links on the page and then look at the href attribute. If the href matches your needs, yield a new Request object for it. For instance:
from scrapy.http import Request
from scrapy.selector import Selector
...
# follow links
for href in sel.xpath('//div[@class="contentLeft"]//div[@class="pageNavigation nobr"]//a').extract():
    linktext = Selector(text=href).xpath('//a/text()').extract_first()
    if linktext and linktext[0] == "Weiter":
        link = Selector(text=href).xpath('//a/@href').extract()[0]
        url = response.urljoin(link)
        print(url)
        yield Request(url, callback=self.parse)
Some remarks on your code:
response.xpath(...).extract()
This returns a list; you may want to have a look at extract_first(), which provides the first item (or None).
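For example, inside parse_item() the difference looks like this (a small illustration, not new functionality):

# extract() returns a list of all matching text nodes, e.g. ['Some title']
titles = response.xpath("//title/text()").extract()
# extract_first() returns only the first match, or None if nothing matches
title = response.xpath("//title/text()").extract_first()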
with open(filename, 'wb') as f:
This will overwrite the file for every item, so you only keep the last item saved. Also, you open the file in binary mode ('b'); from the filename I guess you want to read it as text? Use 'a' to append. See the open() docs.
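For instance, something along these lines inside parse_item() (a minimal sketch; which fields to write out is an assumption):

# append one line of text per item instead of overwriting a binary file each time
with open('details.txt', 'a', encoding='utf-8') as f:
    f.write('%s\t%s\n' % (item['name'], item['description']))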
An alternative is to use the -o flag to use Scrapy's facilities to store the items as JSON or CSV.
return item
It is good style to yield items instead of returning them. At the very least, if you need to create several items from one page, you have to yield them.
Another good approach is to use one parse() function per type/kind of page.
For instance, every page in start_urls will end up in parse(). From there you could extract the links and yield a Request for each /Faci/Details/N page with a callback of parse_faci_details(). In parse_faci_details() you again extract the links of interest, create Requests, and pass them via callback= to, e.g., parse_provi_details().
In that function you create the items you need. A sketch of this structure is shown below.
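A minimal sketch of that structure (the XPath expressions and item fields are assumptions, not taken from the actual pages):

import scrapy
from news.items import NewsItem


class ProviSpider(scrapy.Spider):
    name = 'provdetails'
    allowed_domains = ['health.com']
    start_urls = ['https://www.health.com/patients/status/']

    def parse(self, response):
        # start page: follow only the /Faci/Details/N links
        for href in response.xpath('//a[contains(@href, "/Faci/Details/")]/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_faci_details)

    def parse_faci_details(self, response):
        # facility page: follow only the /provi/details/N links
        for href in response.xpath('//a[contains(@href, "/provi/details/")]/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_provi_details)

    def parse_provi_details(self, response):
        # provider page: build the item here
        item = NewsItem()
        item['name'] = response.xpath("//title/text()").extract_first()
        item['description'] = response.css('p.introduction::text').extract_first()
        yield item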
I'm using lxml to scrape a site. I want to scrape a search result that contains 194 items, but my scraper is only able to scrape the first page of results. How can I scrape the rest of the search results?
import requests
from lxml import html
from lxml.cssselect import CSSSelector

url = 'http://www.alotofcars.com/new_car_search.php?pg=1&byshowroomprice=0.5-500&bycity=Gotham'
response_object = requests.get(url)
# Build DOM tree
dom_tree = html.fromstring(response_object.text)
After this there are the scraping functions:
def enter_mmv_in_database(dom_tree, engine):
    # Getting make, model, variant
    name_selector = CSSSelector('[class="secondary-cell"] p a')
    name_results = name_selector(dom_tree)
    for n in name_results:
        mmv = str(`n.text_content()`).split('\\xa0')
        make, model, variant = mmv[0][2:], mmv[1], mmv[2][:-2]
        # Now push make, model, variant in Database
        print make, model, variant
By looking at the list I receive, I can see that only the first page of search results is parsed. How can I parse the whole search result?
I've tried to navigate through that website but it seems to be offline. Still, I would like to help with the logic.
What I usually do is:
1. Make a request to the search URL (with the parameters filled in).
2. With lxml, extract the last available page number from the pagination div (see the sketch after the loop below).
3. Loop from the first page to the last one, making requests and scraping the desired data:
for page_number in range(1, last + 1):
    # make requests replacing 'page_number' in the 'pg' GET variable
    url = "http://www.alotofcars.com/new_car_search.php?pg={}&byshowroomprice=0.5-500&bycity=Gotham".format(page_number)
    response_object = requests.get(url)
    dom_tree = html.fromstring(response_object.text)
    ...
    ...
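For step 2, extracting the last page number might look something like this (a minimal sketch; the pagination markup is an assumption, since the site appears to be offline):

import requests
from lxml import html

url = "http://www.alotofcars.com/new_car_search.php?pg=1&byshowroomprice=0.5-500&bycity=Gotham"
dom_tree = html.fromstring(requests.get(url).text)

# Assumed markup: a div with class "pagination" whose last link text is the last page number
page_links = dom_tree.xpath('//div[@class="pagination"]//a/text()')
last = int(page_links[-1]) if page_links else 1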
I hope this helps. Let me know if you have any further questions.
I'm trying to do some web scraping using Python and Beautiful Soup, but the source of the web page is not the prettiest. The snippet below is a small part of the page source:
...717301758],"birthdayFriends":2,"lastActiveTimes":{"719317510":0,"719435783":0,...
I want to get the value 2 after the string 'birthdayFriends', but I have no idea how to get it. So far I have written the code below, but it only prints an empty list.
import urllib2
from bs4 import BeautifulSoup
# Create an OpenerDirector with support for Basic HTTP Authentication...
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm='PDQ Application',
                          uri='myWebpage',
                          user='myUsername',
                          passwd='myPassword')
opener = urllib2.build_opener(auth_handler)
# ...and install it globally so it can be used with urlopen.
urllib2.install_opener(opener)
page = urllib2.urlopen('myWebpage')
soup = BeautifulSoup(page.read())
bf = soup.findAll('birthdayFriends')
print bf
>> []
Suppose that somewhere in the HTML there is a script tag like the following:
<script>
var x = {"birthdayFriends":2,"lastActiveTimes":{"719317510":0,"719435783":0}}
</script>
Then your code might look something like this:
script = soup.findAll('script')[0]  # or the index at which it appears in the file
# take the json part (everything after the '=')
j = script.text.split('=')[1]
import json
# load the json string into a dictionary
d = json.loads(j, strict=False)
print d["birthdayFriends"]
In case the content of the script tag is more complicated, consider looping over the script's lines, or see How can I parse Javascript variables using python?
Also, for parsing JavaScript in Python, see pynoceros.
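If the value is buried in a larger script, one option is to find the script tag that contains the key and pull the value out with a regular expression (a minimal sketch built around the example snippet above; real pages may need a more careful split):

import re
from bs4 import BeautifulSoup

html = '<script>var x = {"birthdayFriends":2,"lastActiveTimes":{"719317510":0}}</script>'
soup = BeautifulSoup(html, 'html.parser')

for script in soup.find_all('script'):
    if script.string and 'birthdayFriends' in script.string:
        match = re.search(r'"birthdayFriends"\s*:\s*(\d+)', script.string)
        if match:
            print(match.group(1))  # -> 2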