For my assignment, I am trying to scrape information off the following website: https://www.blueroomcinebar.com/movies/now-showing/.
My code needs to find movie names, times and posters. The movie times and posters end up in my lists in the order they appear in the HTML; the names, however, come out in alphabetical order.
We are not allowed to use BeautifulSoup.
This is my current code for scraping movies:
from re import findall, finditer, MULTILINE, DOTALL
from urllib.request import urlopen
movies_name = []
movies_times = []
movies_image = []
movies_list = []
movies_page = urlopen("https://www.blueroomcinebar.com/movies/now-showing/").read().decode('utf-8')
#Add movies to Movies at Blue Room Screen
find_movie_names = findall(r'<h1>(.*?)</h1>', movies_page)
find_movie_times = findall(r'<p>([0-9]{1,2}:[0-9]{2} (?:AM|PM))</p>', movies_page)  # (?:AM|PM) keeps the alternation inside the time, not across the whole pattern
find_movie_image = findall(r'<div class="poster" style="background-image: url\((.*?)\)">', movies_page)
print(find_movie_names)
#Add movies to arrays
for movie in find_movie_names:
    movies_name.append(movie)
for movie in find_movie_times:
    movies_times.append(movie)
for movie in find_movie_image:
    movies_image.append(movie)
print(movies_name)
print(movies_image)
for movie in range(len(movies_name)):
    movies_list.append("{};{};{}".format(movies_name[movie], movies_times[movie], movies_image[movie - 1]))
Currently, the names are in the list in the order of
['Aladdin', 'Avengers: Endgame', 'Chandigarh Amritsar Chandigarh', 'John Wick - Parabellum', 'Long Shot', 'Pokemon Detective Pikachu', 'Poms', 'The Hustle', 'Top End Wedding']
They should be in the order:
['Avengers: Endgame', 'Long Shot', 'Pokemon Detective Pikachu', 'The Hustle', 'John Wick - Parabellum', 'Aladdin', 'Chandigarh Amritsar Chandigarh']
N.B.
There may be a movie that comes up a second time with the prefix OCAP. I'm not 100% sure why it has that, but it seems to be some kind of special screening that rotates through different movies each day.
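For what it's worth, a minimal sketch of one way to keep everything in page order: match each movie's whole block with a single pattern instead of three separate findall calls, so name, time and poster stay aligned. This assumes each poster div is followed by that movie's <h1> and its first <p> time inside the same block, which needs checking against the actual page source:
from re import findall, DOTALL
from urllib.request import urlopen
movies_page = urlopen("https://www.blueroomcinebar.com/movies/now-showing/").read().decode('utf-8')
# One match per movie block keeps the three fields aligned and in document order.
block = (r'<div class="poster" style="background-image: url\((.*?)\)">'
         r'.*?<h1>(.*?)</h1>'
         r'.*?<p>([0-9]{1,2}:[0-9]{2} (?:AM|PM))</p>')
for poster, name, time in findall(block, movies_page, DOTALL):
    print("{};{};{}".format(name, time, poster))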
I'm working on a text-mining use case in python. These are the sentences of interest:
As a result may continue to be adversely impacted, by fluctuations in foreign currency exchange rates. Certain events such as the threat of additional tariffs on imported consumer goods from China, have increased. Stores are primarily located in shopping malls and other shopping centers.
How can I extract the sentence containing the keyword "China"? I also need the sentences around it, actually at least two sentences before and after.
I've tried the below, as was answered here:
import nltk
from nltk.tokenize import word_tokenize
sents = nltk.sent_tokenize(text)
my_sentences = [sent for sent in sents if 'China' in word_tokenize(sent)]
Please help!
TL;DR
Use sent_tokenize, keep track of the index of the sentence that contains the focus word, then take a window of sentences around that index to get the desired result.
from nltk import sent_tokenize, word_tokenize
from nltk.tokenize.treebank import TreebankWordDetokenizer
word_detokenize = TreebankWordDetokenizer().detokenize

text = """As a result may continue to be adversely impacted, by fluctuations in foreign currency exchange rates. Certain events such as the threat of additional tariffs on imported consumer goods from China, have increased global economic and political uncertainty and caused volatility in foreign currency exchange rates. Stores are primarily located in shopping malls and other shopping centers, certain of which have been experiencing declines in customer traffic."""

tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)]
sent_idx_with_china = [idx for idx, sent in enumerate(tokenized_text)
                       if 'China' in sent or 'china' in sent]

window = 2  # If you want 2 sentences before and after.
for idx in sent_idx_with_china:
    start = max(idx - window, 0)
    end = min(idx + window + 1, len(tokenized_text))  # +1 because the slice end is exclusive
    result = ' '.join(word_detokenize(sent) for sent in tokenized_text[start:end])
    print(result)
Another example, pip install wikipedia first:
from nltk import sent_tokenize, word_tokenize
from nltk.tokenize.treebank import TreebankWordDetokenizer
word_detokenize = TreebankWordDetokenizer().detokenize
import wikipedia

text = wikipedia.page("Winnie The Pooh").content
tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)]
sent_idx_with_china = [idx for idx, sent in enumerate(tokenized_text)
                       if 'China' in sent or 'china' in sent]

window = 2  # If you want 2 sentences before and after.
for idx in sent_idx_with_china:
    start = max(idx - window, 0)
    end = min(idx + window + 1, len(tokenized_text))  # +1 because the slice end is exclusive
    result = ' '.join(word_detokenize(sent) for sent in tokenized_text[start:end])
    print(result)
    print()
[out]:
Ashdown Forest in England where the Pooh stories are set is a popular
tourist attraction, and includes the wooden Pooh Bridge where Pooh and
Piglet invented Poohsticks. The Oxford University Winnie the Pooh
Society was founded by undergraduates in 1982. == Censorship in China
== In the People's Republic of China, images of Pooh were censored in mid-2017 from social media websites, when internet memes comparing
Chinese president Xi Jinping to Pooh became popular. The 2018 film
Christopher Robin was also denied a Chinese release.
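If you don't need the word-level tokens for anything else, a simpler variant skips the tokenizer/detokenizer round-trip entirely. This is a sketch that assumes a plain substring test is acceptable (note it would also match e.g. 'Chinatown'):
from nltk import sent_tokenize

sents = sent_tokenize(text)
window = 2
for idx, sent in enumerate(sents):
    if 'China' in sent:
        start = max(idx - window, 0)
        end = min(idx + window + 1, len(sents))  # +1 because the slice end is exclusive
        print(' '.join(sents[start:end]))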
I need to extract useful text from news articles. I do it with BeautifulSoup, but the output runs some paragraphs together, which prevents me from analysing the text further.
My code:
import requests
from bs4 import BeautifulSoup

r = requests.get("http://www.bbc.co.uk/news/uk-england-39607452")
soup = BeautifulSoup(r.content, "lxml")

# delete unwanted tags:
for s in soup(['figure', 'script', 'style']):
    s.decompose()

article_soup = [s.get_text() for s in soup.find_all(
    'div', {'class': 'story-body__inner'})]
article = ''.join(article_soup)
print(article)
The output looks like this (just first 5 sentences):
The family of British student Hannah Bladon, who was stabbed to death in Jerusalem, have said they are "devastated" by the "senseless
and tragic attack".Ms Bladon, 20, was attacked on a tram in Jerusalem
on Good Friday.She was studying at the Hebrew University of Jerusalem
at the time of her death and had been taking part in an archaeological
dig that morning.Ms Bladon was stabbed several times in the chest and
died in hospital. She was attacked by a man who pulled a knife from
his bag and repeatedly stabbed her on the tram travelling near Old
City, which was busy as Christians marked Good Friday and Jews
celebrated Passover.
I tried adding a space after certain punctuation marks like ".", "?", and "!":
article = article.replace(".", ". ")
It works for paragraphs (although I believe there should be a smarter way of doing this) but not for the subheadings of different sections of the article, which don't have any punctuation at the end. They are structured like this:
</p>
<h2 class="story-body__crosshead">
Subtitle text
</h2>
<p>
I will be grateful for your advice.
PS: adding a space when I 'join' the article_soup doesn't help.
You can use the separator argument of get_text, which returns all the strings inside the current element joined by the given character.
article_soup = [s.get_text(separator="\n", strip=True)
                for s in soup.find_all('div', {'class': 'story-body__inner'})]
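Dropped into your script, that would look like the sketch below (the story-body__inner class comes from your question and may have changed on the live page); each paragraph and subheading then lands on its own line, so nothing sticks together:
import requests
from bs4 import BeautifulSoup

r = requests.get("http://www.bbc.co.uk/news/uk-england-39607452")
soup = BeautifulSoup(r.content, "lxml")
for s in soup(['figure', 'script', 'style']):
    s.decompose()

article_soup = [s.get_text(separator="\n", strip=True)
                for s in soup.find_all('div', {'class': 'story-body__inner'})]
article = '\n'.join(article_soup)
print(article)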
I would like to execute this and get all of the text from the title and href attributes. The code runs and prints all of the needed data, but when I try to assign the output to a variable, I only get the last instance of the matching attributes in the HTML.
from bs4 import BeautifulSoup
import urllib

r = urllib.urlopen('http://www.genome.jp/kegg-bin/show_pathway?map=hsa05215&show_description=show').read()
soup = BeautifulSoup(r, "lxml")

for area in soup.find_all('area', href=True):
    print area['href']

for area in soup.find_all('area', title=True):
    print area['title']
If it helps, I'm doing this because I will create a list with the data later. I'm just beginning to learn, so extra explanations are much appreciated.
You need to use list comprehensions. In your loops the variable is overwritten on every iteration, so once a loop finishes only the last value is left:
links = [area['href'] for area in soup.find_all('area', href=True)]
titles = [area['title'] for area in soup.find_all('area', title=True)]
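If you later want each href together with its title, one hedged option is to collect both in a single pass; .get returns None for a missing attribute, so the pairs stay aligned even when an area lacks one of them:
pairs = [(area.get('href'), area.get('title'))
         for area in soup.find_all('area')]
for href, title in pairs:
    print href, title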
I am trying to code a program in Python 2.7.9 to crawl and gather the club names, addresses and phone numbers from the website http://tennishub.co.uk/
The following code gets the job done, except that it doesn't move on to the subsequent pages for each location, such as
/Berkshire/1
/Berkshire/2
/Berkshire/3
...and so on.
import requests
from bs4 import BeautifulSoup

def tennis_club():
    url = 'http://tennishub.co.uk/'
    r = requests.get(url)
    soup = BeautifulSoup(r.text)
    for link in soup.select('div.countylist a'):
        href = 'http://tennishub.co.uk' + link.get('href')
        pages_data(href)

def pages_data(item_url):
    r = requests.get(item_url)
    soup = BeautifulSoup(r.text)
    g_data = soup.select('table.display-table')
    for item in g_data:
        print item.contents[1].text
        print item.contents[3].findAll('td')[1].text
        try:
            print item.contents[3].find_all('td', {'class': 'telrow'})[0].text
        except:
            pass
        try:
            print item.contents[5].findAll('td', {'class': 'emailrow'})[0].text
        except:
            pass
        print item_url

tennis_club()
I have tried tweaking the code to the best of my understanding, but it doesn't work at all.
Can someone please advise what I need to do so that the program goes through all the pages of a location, collects the data, and moves on to the next location, and so on?
You are going to need to put another for loop into this code:
for link in soup.select('div.countylist a'):
    href = 'http://tennishub.co.uk' + link.get('href')
    # new for loop goes here #
    pages_data(href)
If you want to brute-force it, you can just have the new for loop run as many times as the area with the most clubs (Surrey); however, you would double-, triple-, or quadruple-count the last clubs for many of the areas. That is ugly, but you can get away with it if you are inserting into a database that rejects duplicates. It is unacceptable if you are writing to a file. In that case you will need to pull the number in parentheses after each area, e.g. Berkshire (39). To get that number you can do a get_text() on the div.countylist, which would change the above to
for link in soup.select('div.countylist'):
    for endHref in link.find_all('a'):
        # the text after each link looks like " (39)"; strip the spaces and parens
        # (you may need .next_sibling or similar instead, depending on the markup)
        numClubs = int(endHref.next.strip().strip('()'))
        numPages = numClubs // 10 + 1  # add one because // gives the floor
        base = 'http://tennishub.co.uk' + endHref.get('href')
        for page in range(1, numPages + 1):
            pages_data(base + '/' + str(page))
(Disclaimer: I didn't run this through bs4, so there might still be rough edges, and you might need to use something other than .next, but the logic should help you.)
I'm trying to scrape data for Miami Heat and their opponent from a table at http://www.scoresandodds.com/grid_20111225.html. The problem I have is that the tables for the NBA, the NFL, and the other sports are all marked identically, so all the data I get is from the NFL table. Another problem is that I would like to scrape data for the entire season, and the number of tables changes, as does Miami's position within the table. This is the code I've been using for the different tables till now.
So why is this not getting the job done? Thanks for your patience; I'm a real beginner, and I've been trying to solve this problem for some days now, to no effect.
def tableSnO(htmlSnO):
    gameSections = soup.findAll('div', 'gameSection')
    for gameSection in gameSections:
        header = gameSection.find('div', 'header')
        if header.get('id') == 'nba':
            rows = gameSections.findAll('tr')

    def parse_string(el):
        text = ''.join(el.findAll(text=True))
        return text.strip()

    for row in rows:
        data = map(parse_string, row.findAll('td'))
        return data
Lately I decided to try a different approach: if I scrape the entire page and get the index of the data in question (this is where the code below stops), I could just get the next set of data from the list, since the structure of the table never changes. I could also get the opponent's team name the same way I get the htmlSnO. It feels like this is such basic stuff, and it's killing me that I can't get it right.
def tableSnO(htmlSnO):
    oddslist = soupSnO.find('table', {"width": "100%", "cellspacing": "0", "cellpadding": "0"})
    rows = oddslist.findAll('tr')

    def parse_string(el):
        text = ''.join(el.findAll(text=True))
        return text.strip()

    for row in rows:
        data = map(parse_string, row.findAll('td'))
        for teamName in data:
            if re.match("(.*)MIAMI HEAT(.*)", teamName):
                return teamName
                return data.index(teamName)
New and final answer with working code:
The section of the page you want has this:
<div class="gameSection">
<div class="header" id="nba">
This should let you get at the NBA tables:
def tableSnO(htmlSnO):
    gameSections = soup.findAll('div', 'gameSection')
    for gameSection in gameSections:
        header = gameSection.find('div', 'header')
        if header.get('id') == 'nba':
            # process this gameSection
            print gameSection.prettify()
As a complete example, here's the full code I used to test:
import urllib2
from bs4 import BeautifulSoup

f = urllib2.urlopen('http://www.scoresandodds.com/grid_20111225.html')
html = f.read()
soup = BeautifulSoup(html)

gameSections = soup.findAll('div', 'gameSection')
for gameSection in gameSections:
    header = gameSection.find('div', 'header')
    if header.get('id') == 'nba':
        table = gameSection.find('table', 'data')
        print table.prettify()
This prints the NBA data table.
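From there, a hedged sketch of pulling each NBA row out as plain text, reusing the parse_string idea from the question (the cell layout and the "next row is the opponent" assumption come from the question, not from anything I've verified):
def parse_string(el):
    return ''.join(el.findAll(text=True)).strip()

for gameSection in soup.findAll('div', 'gameSection'):
    header = gameSection.find('div', 'header')
    if header.get('id') == 'nba':
        table = gameSection.find('table', 'data')
        for row in table.findAll('tr'):
            data = map(parse_string, row.findAll('td'))
            if any('MIAMI HEAT' in cell for cell in data):
                print data  # this row; the opponent should be the adjacent row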