Extraction of text using Beautiful Soup and regular expressions in 10-K EDGAR filings - regex

I want to automatically extract section "1A. Risk Factors" from around 10000 files and write it into txt files.
A sample URL with a file can be found here
The desired section lies between "Item 1A Risk Factors" and "Item 1B". The problem is that "Item", "1A", and "1B" may be written differently across these files and may appear in multiple places - not only in the long, proper section that interests me. Thus, regular expressions should be used, so that:
The longest part between "1A" and "1B" is extracted (otherwise the table of contents and other useless elements will appear)
Different variants of the expressions are taken into consideration
I tried to implement these two goals in the script, but as this is my first Python project, I just ordered the expressions I thought might work, and apparently they are in the wrong order. (I'm sure I should iterate over the "< a >" elements, add each extracted section to a list, then choose the longest one and write it to a file, but I don't know how to implement this idea.)
EDIT: Currently my method returns very little data between 1a and 1b (I think it's a page number) from the table of contents, and then it stops.
My code:
import requests
import re
import csv
from bs4 import BeautifulSoup as bs

with open('indexes.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    for line in reader:
        fn1 = line[0]
        fn2 = re.sub(r'[/\\]', '', line[1])
        fn3 = re.sub(r'[/\\]', '', line[2])
        fn4 = line[3]
        saveas = '-'.join([fn1, fn2, fn3, fn4])
        f = open(saveas + ".txt", "w+", encoding="utf-8")
        url = 'https://www.sec.gov/Archives/' + line[4].strip()
        print(url)
        response = requests.get(url)
        soup = bs(response.content, 'html.parser')
        risks = soup.find_all('a')
        regexTxt = r'item[^a-zA-Z\n]*1a.*item[^a-zA-Z\n]*1b'
        for risk in risks:
            for i in risk.findAllNext():
                i.get_text()
                sections = re.findall(regexTxt, str(i), re.IGNORECASE | re.DOTALL)
                for section in sections:
                    clean = re.compile('<.*?>')
                    # section = re.sub(r'table of contents', '', section, flags=re.IGNORECASE)
                    # section = section.strip()
                    # section = re.sub(r'\s+', '', section).strip()
                    print(re.sub(clean, '', section))
The goal is to find the longest part between "1a" and "1b" (regardless of how they exactly look) in the current URL and write it to a file.
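The longest-match idea can be sketched in isolation: collect every "1A … 1B" span with a non-greedy regex and keep the longest one. The miniature text below is a made-up stand-in for a filing, not real data:

```python
import re

# Made-up miniature of a filing's plain text: the first
# "Item 1A ... Item 1B" span mimics the table of contents,
# the second mimics the actual section body.
text = ("Item 1A Risk Factors 12 Item 1B Unresolved Staff Comments 14 "
        "Item 1A. Risk Factors. Our business faces many risks. Item 1B.")

# Non-greedy .*? keeps each span as short as possible, so the
# table-of-contents hit and the body hit become separate matches.
pattern = r"item[^a-zA-Z\n]*1a.*?item[^a-zA-Z\n]*1b"
matches = re.findall(pattern, text, re.IGNORECASE | re.DOTALL)

# Keep only the longest span -- the real section, not the TOC entry.
longest = max(matches, key=len) if matches else ""
print(longest)
```

The non-greedy quantifier matters: with a greedy `.*`, the whole stretch from the first "1A" to the last "1B" would come back as one match, and the table of contents would be glued onto the section.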

In the end I used a CSV file that contains a column HTMURL, which is the link to the htm-format 10-K. I got it from Kai Chen, who created this website. I wrote a simple script that writes plain text into files. Processing it will be a simple task now.
import requests
import csv
from pathlib import Path
from bs4 import BeautifulSoup

with open('index.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    for line in reader:
        print(line[9])
        url = line[9]
        html_doc = requests.get(url).text
        soup = BeautifulSoup(html_doc, 'html.parser')
        print(soup.get_text())
        name = line[1]
        name = name.replace('/', '')
        name = name.replace("/PA/", "")
        name = name.replace("/DE/", "")
        dir = Path(name + line[4] + ".txt")
        f = open(dir, "w+", encoding="utf-8")
        if dir.is_dir():
            break
        else:
            f.write(soup.get_text())
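As a side note, the name cleanup above can be folded into one small helper. Note the order: replacing '/' first turns the later "/PA/" and "/DE/" replacements into no-ops, so the state suffix should be stripped before the slashes. This is a sketch with a made-up sample input:

```python
import re

def safe_name(name):
    # Strip /PA/ and /DE/ style state suffixes first, then any
    # leftover path separators, then surrounding whitespace.
    name = re.sub(r'/(PA|DE)/', '', name)
    return name.replace('/', '').replace('\\', '').strip()

print(safe_name('ACME CORP /DE/'))  # → 'ACME CORP'
```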

Related

Return the content of a text file in JSON format for the matching section

There is a text file containing data in the form:
[sec1]
"ab": "s"
"sd" : "d"
[sec2]
"rt" : "ty"
"gh" : "rr"
"kk":"op"
We are supposed to return the data of the matching section in JSON format; e.g., if the user requests sec1, we send the sec1 contents.
The format you specified is very similar to the TOML format. However, TOML uses equals signs for the assignment of key-value pairs.
If your format actually uses colons for the assignment, the following example may help you.
It uses regular expressions in conjunction with a defaultdict to read the data from the file. The section to be queried is extracted from the URL using a variable rule.
If there is no hit within the loaded data, the server responds with a 404 error (NOT FOUND).
import re
from collections import defaultdict
from flask import (
    Flask,
    abort,
    jsonify
)

def parse(f):
    data = defaultdict(dict)
    section = None
    for raw in f:
        line = raw.strip()
        if re.match(r'^\[[^\]]+\]$', line):
            section = line[1:-1]  # strip the surrounding brackets
            data[section] = dict()
            continue
        m = re.match(r'^"(?P<key>[^"]+)"\s*:\s*"(?P<val>[^"]+)"$', line)
        if m:
            key, val = m.groups()
            if not section:
                raise OSError('illegal format')
            data[section][key] = val
            continue
    return dict(data)

app = Flask(__name__)

@app.route('/<string:section>')
def data(section):
    path = 'path/to/file'
    with open(path) as f:
        data = parse(f)
    if section in data:
        return jsonify(data[section])
    abort(404)
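The parsing idea (regex plus defaultdict) can be run end-to-end without Flask; here io.StringIO stands in for the file, and the sample data is the one from the question:

```python
import io
import re
from collections import defaultdict

# Sample data from the question, as one in-memory "file".
sample = '[sec1]\n"ab": "s"\n"sd" : "d"\n[sec2]\n"rt" : "ty"\n"gh" : "rr"\n"kk":"op"\n'

def parse(f):
    data = defaultdict(dict)
    section = None
    for raw in f:
        line = raw.strip()
        if re.match(r'^\[[^\]]+\]$', line):
            section = line[1:-1]  # strip the surrounding brackets
            continue
        m = re.match(r'^"(?P<key>[^"]+)"\s*:\s*"(?P<val>[^"]+)"$', line)
        if m:
            if section is None:
                raise ValueError('key/value pair before any section header')
            key, val = m.groups()
            data[section][key] = val
    return dict(data)

print(parse(io.StringIO(sample)))
# → {'sec1': {'ab': 's', 'sd': 'd'}, 'sec2': {'rt': 'ty', 'gh': 'rr', 'kk': 'op'}}
```

Stripping each line before both regex checks means the `\s*` around the colon also tolerates the `"kk":"op"` style with no spaces.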

Beautiful Soup multiple findAll classes

I'm working on a project to build a database of how many dogs get withdrawn prior to a race.
I need to scrape the data and then write it into a CSV.
My problem is that the data I'm scraping has an image instead of text (between PLC and Greyhound on the webpage).
That means I'm running two different loops to get the info I need, and the results are then hard to concatenate back into their correct places.
Here is the code:
import requests
import csv
from bs4 import BeautifulSoup

URL = "https://www.thedogs.com.au/Racing/MeetResults.aspx?meetId=255268"
page = requests.get(URL)
soup = BeautifulSoup(page.text, 'html.parser')
# soup.findAll('td', class_='ResultsCenteredCellContents'):
odds = []
dog = soup.findAll('img')
for a in dog:
    odds.append(a['src'].strip())
odds1 = []
for b in soup.findAll('td'):
    odds1.append(b.text.strip())
So if I could run all the code I need in one loop, so that it can be written to CSV, that would be great.
Yes, they are displayed as images, but if you notice, the images are named according to the number they represent: src="../Images/BoxNumber4.gif" represents 4.
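The src-to-number trick can be seen in isolation; the src values below are assumed examples following that naming pattern:

```python
# Assumed example src attributes, mirroring the BoxNumberN.gif pattern.
srcs = ["../Images/BoxNumber4.gif", "../Images/BoxNumber10.gif"]

# Take everything between "BoxNumber" and the file extension.
nums = [s.split("BoxNumber")[1].split(".")[0] for s in srcs]
print(nums)  # → ['4', '10']
```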
import requests
import csv
from bs4 import BeautifulSoup

def SaveAsCsv(list_of_rows, file_name):
    try:
        print('\nSaving CSV Result')
        with open(file_name, 'a', newline='', encoding='utf-8') as outfile:
            writer = csv.writer(outfile)
            writer.writerow(list_of_rows)
        print("results saved successfully")
    except PermissionError:
        print(f"Please make sure {file_name} is closed \n")

URL = "https://www.thedogs.com.au/Racing/MeetResults.aspx?meetId=255268"
page = requests.get(URL, headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(page.text, 'html.parser')
tables = soup.findAll('table', {'id': "gvRaceResults"})  # getting all 11 tabs
for table_index, table in enumerate(tables, 1):
    print(f'Getting tab {table_index} out of {len(tables)}')
    rows = table.findAll('tr')
    header_row = [row.text.strip() for row in rows[0].findAll('th')]
    SaveAsCsv(header_row, 'thedogs.csv')
    for index, row in enumerate(rows[1:], 1):
        print(f'Getting row {index} out of {len(rows[1:])}')
        row_list = []
        colms = row.findAll('td')
        for col in colms:
            if 'ResultsContentsNumber' in col.attrs['class']:
                image_src = col.find('img').get('src')  # src="../Images/BoxNumber4.gif"
                image_num = image_src.split('BoxNumber')[1].split('.')[0]  # 4
                row_list.append(image_num)
                continue
            row_list.append(col.text.strip())
        SaveAsCsv(row_list, 'thedogs.csv')

How to use a concatenated string for get Method of requests?

I'm trying to write a small crawler to crawl multiple Wikipedia pages.
I want to make the crawl somewhat dynamic by concatenating the hyperlink for the exact wiki page from a file which contains a list of names.
For example, the first line of "deutsche_Schauspieler.txt" says "Alfred Abel", and the concatenated string would be "https://de.wikipedia.org/wiki/Alfred Abel". Using the txt file results in heading being None, yet when I complete the link with a string inside the script, it works.
This is for python 2.x.
I already tried to switch from " to ',
tried + instead of %s,
tried to put the whole string into the txt file (so that the first line reads "http://..." instead of "Alfred Abel"),
tried to switch from "Alfred Abel" to "Alfred_Abel".
from bs4 import BeautifulSoup
import requests

file = open("test.txt", "w")
f = open("deutsche_Schauspieler.txt", "r")
content = f.readlines()
for line in content:
    link = "https://de.wikipedia.org/wiki/%s" % (str(line))
    response = requests.get(link)
    html = response.content
    soup = BeautifulSoup(html)
    heading = soup.find(id='Vorlage_Personendaten')
    uls = heading.find_all('td')
    for item in uls:
        file.write(item.text.encode('utf-8') + "\n")
f.close()
file.close()
I expect to get the content of the table "Vorlage_Personendaten", which actually works if I change line 10 to
link = "https://de.wikipedia.org/wiki/Alfred Abel"
# link = "https://de.wikipedia.org/wiki/Alfred_Abel" also works
But I want it to work using the textfile
It looks like the problem is in your text file, where you have used "Alfred Abel" with quotes; that is why you are getting the following exception:
uls = heading.find_all('td')
AttributeError: 'NoneType' object has no attribute 'find_all'
Please remove the string quotes from "Alfred Abel" and use Alfred Abel inside the text file deutsche_Schauspieler.txt. It will work as expected.
I found the solution myself.
Although there are no additional lines in the file, the content array displays like
['Alfred Abel\n'], but printing out the first index of the array results in 'Alfred Abel'. It still gets interpreted like the string in the array, thus forming a false link.
So you want to remove the last character from the current line.
A solution could look like so:
from bs4 import BeautifulSoup
import requests

file = open("test.txt", "w")
f = open("deutsche_Schauspieler.txt", "r")
content = f.readlines()
print(content)
for line in content:
    line = line[:-1]  # note how this removes the trailing '\n'
    link = "https://de.wikipedia.org/wiki/%s" % str(line)
    response = requests.get(link)
    html = response.content
    soup = BeautifulSoup(html, "html.parser")
    try:
        heading = soup.find(id='Vorlage_Personendaten')
        uls = heading.find_all('td')
        for item in uls:
            file.write(item.text.encode('utf-8') + "\n")
    except:
        print("That did not work")
        pass
f.close()
file.close()
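As a small aside, line.rstrip('\n') is a safer way to drop the trailing newline than line[:-1], because the last line of a file may have no newline at all:

```python
with_newline = "Alfred Abel\n"
without_newline = "Alfred Abel"  # e.g. the last line of the file

print(repr(with_newline[:-1]))             # 'Alfred Abel'
print(repr(without_newline[:-1]))          # 'Alfred Abe' -- last letter lost
print(repr(with_newline.rstrip("\n")))     # 'Alfred Abel'
print(repr(without_newline.rstrip("\n")))  # 'Alfred Abel'
```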

How can I use values from a csv file, to loop my python script?

I have a problem. I use Python 2.7.13 to gather some data from a web page.
I try to gather the data from multiple articles.
I use the following script to gather the data that I want to have.
import urllib2
import csv

i = 0
line = open("artikelen.csv", "r").readlines()[i]
url = 'http://shop.niemann-frey.de/cshop/product.html?SID='
SID = '1cfbf44f2a062d40b1d8dd3fd9c434ff'
curl = '&art_nr='
artnr = line
response = urllib2.urlopen(url + SID + curl + artnr)
webContent = response.read()
for item in webContent.split("</title></head>"):
    if "<html><head><title>" in item:
        artikel = item[item.find("<html><head><title>") + len("<html><head><title>"):]
print "artikelnummer is: " + artikel
with open(artikel + '.csv', 'w') as fw:
    writer = csv.writer(fw, delimiter=',')
f = open(artikel + '.csv', 'wb')
for item in webContent.split("&mode=thumb&dbname=bilder"):
    if "https://shop.niemann-frey.de/cshop/lib/progs/getmedia.php?id=" in item:
        nummer = item[item.find("https://shop.niemann-frey.de/cshop/lib/progs/getmedia.php?id=") + len("https://shop.niemann-frey.de/cshop/lib/progs/getmedia.php?id="):]
        print nummer
        f.write(nummer + ";")
f.close()
i += 1
The list I use looks more or less like this:
Artikelnummer:
1B000928
1B001279
1B001114
1B001271
etc.
I want to use these values as "artnr" and I want to perform my script for all the values in the list. How can I manage this? Thank you all in advance!
Simple as:
with open("artikelen.csv", "r") as csv:
    for artnr in csv:
        # your code here
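One caveat: each line read this way keeps its trailing newline, so strip it before building the URL. A sketch using an in-memory stand-in for artikelen.csv, with the article numbers from the question:

```python
import io

# Stand-in for artikelen.csv: one article number per line.
csv_text = "1B000928\n1B001279\n1B001114\n1B001271\n"

# Strip each line and skip blanks before using it as artnr.
artnrs = [line.strip() for line in io.StringIO(csv_text) if line.strip()]
print(artnrs)  # → ['1B000928', '1B001279', '1B001114', '1B001271']
```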

Beautifulsoup: exclude unwanted parts

I realise it's probably a very specific question, but I'm struggling to get rid of some parts of the text I get using the code below. I need plain article text, which I locate by finding "p" tags with 'class':'mol-para-with-font'. Somehow I get lots of other stuff, like the author's byline, the date stamp, and most importantly text from adverts on the page. Examining the HTML, I cannot see them carrying the same 'class':'mol-para-with-font', so I'm puzzled (or maybe I've been staring at it for too long...). I know there are lots of HTML gurus here, so I'll be grateful for your help.
My code:
import requests
import translitcodec
import codecs
from bs4 import BeautifulSoup

def get_text(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "lxml")
    # delete unwanted tags:
    for s in soup(['figure', 'script', 'style', 'table']):
        s.decompose()
    article_soup = [s.get_text(separator="\n", strip=True)
                    for s in soup.find_all(['p', {'class': 'mol-para-with-font'}])]
    article = '\n'.join(article_soup)
    text = codecs.encode(article, 'translit/one').encode('ascii', 'replace')  # replace translit with ascii
    text = u"{}".format(text)  # encode to unicode
    print text

url = 'http://www.dailymail.co.uk/femail/article-4703718/How-Alexander-McQueen-Kate-s-royal-tours.html'
get_text(url)
Only 'p'-s with class="mol-para-with-font" ?
This will give it to you:
import requests
from bs4 import BeautifulSoup as BS

url = 'http://www.dailymail.co.uk/femail/article-4703718/How-Alexander-McQueen-Kate-s-royal-tours.html'
r = requests.get(url)
soup = BS(r.content, "lxml")
for i in soup.find_all('p', class_='mol-para-with-font'):
    print(i.text)
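Why this filters out the byline and adverts: find_all('p', class_='mol-para-with-font') only matches p tags that actually carry that class, whereas the question's find_all(['p', {...}]) form effectively matches every p tag, because the list argument is treated as a list of tag names. A tiny self-contained check with made-up HTML:

```python
from bs4 import BeautifulSoup

# Made-up HTML: one article paragraph and one byline paragraph.
html = ('<p class="mol-para-with-font">Article text.</p>'
        '<p class="byline">By Someone</p>')
soup = BeautifulSoup(html, "html.parser")

# Only the paragraph with the wanted class survives the filter.
texts = [p.get_text() for p in soup.find_all('p', class_='mol-para-with-font')]
print(texts)  # → ['Article text.']
```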