I have this code:
from bs4 import BeautifulSoup
import urllib
url = 'http://www.brothersoft.com/windows/mp3_audio/midi_tools/'
html = urllib.urlopen(url)
soup = BeautifulSoup(html)
for a in soup.select('div.freeText dl a[href]'):
    print "http://www.borthersoft.com"+a['href'].encode('utf-8','replace')
What I get is:
http://www.borthersoft.com/synthfont-159403.html
http://www.borthersoft.com/midi-maker-23747.html
http://www.borthersoft.com/keyboard-music-22890.html
http://www.borthersoft.com/mp3-editor-for-free-227857.html
http://www.borthersoft.com/midipiano---midi-file-player-recorder-61384.html
http://www.borthersoft.com/notation-composer-32499.html
http://www.borthersoft.com/general-midi-keyboard-165831.html
http://www.borthersoft.com/digital-music-mentor-31262.html
http://www.borthersoft.com/unisyn-250033.html
http://www.borthersoft.com/midi-maestro-13002.html
http://www.borthersoft.com/music-editor-free-139151.html
http://www.borthersoft.com/midi-converter-studio-46419.html
http://www.borthersoft.com/virtual-piano-65133.html
http://www.borthersoft.com/yamaha-9000-drumkit-282701.html
http://www.borthersoft.com/virtual-midi-keyboard-260919.html
http://www.borthersoft.com/anvil-studio-6269.html
http://www.borthersoft.com/midicutter-258103.html
http://www.borthersoft.com/softick-audio-gateway-55913.html
http://www.borthersoft.com/ipmidi-161641.html
http://www.borthersoft.com/d.accord-keyboard-chord-dictionary-28598.html
There should be 526 application links printed out, but I only get twenty. What is wrong with my code?
There are only 20 application links per page. You have to iterate over all the pages to get all of the links:
from bs4 import BeautifulSoup
import urllib
for page in range(1, 27+1):  # currently there are 27 pages
    url = 'http://www.brothersoft.com/windows/mp3_audio/midi_tools/{}.html'.format(page)
    html = urllib.urlopen(url)
    soup = BeautifulSoup(html)
    for a in soup.select('div.freeText dl a[href]'):
        print "http://www.borthersoft.com"+a['href'].encode('utf-8','replace')
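If you are on Python 3, the same page-by-page URL construction can be sketched offline, without any network access. Note that `urljoin` from the standard library is a safer way to turn a relative `href` into an absolute link than plain string concatenation (the sample `href` below is taken from the output above):

```python
from urllib.parse import urljoin

BASE = "http://www.brothersoft.com"

# Build the URL for each listing page (assuming pages are numbered
# 1.html .. 27.html, as in the answer above).
page_urls = [
    "{}/windows/mp3_audio/midi_tools/{}.html".format(BASE, page)
    for page in range(1, 27 + 1)
]
print(page_urls[0])
print(len(page_urls))

# urljoin resolves a relative href against the site root.
href = "/synthfont-159403.html"
print(urljoin(BASE, href))
```

Each absolute link built this way avoids the kind of typo ("borthersoft") that string concatenation silently lets through.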
The Python code below is not fetching the data from the given link. Please help me figure out how to make it work.
import urllib2
from bs4 import BeautifulSoup
quote_page = 'http://www.smartvidya.co.in/2016/11/ugc-net-paper-1-previous-year-questions_14.html'
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
name_box = soup.find('div', attrs={'class': 'MsoNormal'})
print name_box
Try this:
import urllib2
from bs4 import BeautifulSoup
quote_page = 'http://www.smartvidya.co.in/2016/11/ugc-net-paper-1-previous-year-questions_14.html'
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
for name_box in soup.findAll('div', attrs={'class': 'MsoNormal'}):
    print name_box.text
Hope this helps!
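The key difference in the fix above: `find` returns only the first matching element, while `findAll` (spelled `find_all` in newer bs4) returns every match. If you want to see the "collect every matching div" idea without bs4, it can be sketched with the standard library's `html.parser` (the `DivTextCollector` class name and sample HTML are my own, and nested divs are deliberately ignored for brevity):

```python
from html.parser import HTMLParser

class DivTextCollector(HTMLParser):
    """Collect the text of every <div class="MsoNormal">."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # are we inside a matching div?
        self.texts = []   # one entry per matching div

    def handle_starttag(self, tag, attrs):
        if tag == "div" and dict(attrs).get("class") == "MsoNormal":
            self.depth += 1
            self.texts.append("")

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.texts[-1] += data

parser = DivTextCollector()
parser.feed('<div class="MsoNormal">Q1</div><p>x</p><div class="MsoNormal">Q2</div>')
print(parser.texts)
```

With bs4 you get all of this (plus robust handling of nesting and broken HTML) for free, which is why the `findAll` loop is the right fix.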
I want to get all the product links from a specific category by using BeautifulSoup in Python. I have tried the following, but I don't get any results:
import lxml
import urllib2
from bs4 import BeautifulSoup
html=urllib2.urlopen("http://www.bedbathandbeyond.com/store/category/bedding/bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view=grid")
br= BeautifulSoup(html.read(),'lxml')
for links in br.findAll('a', class_='prodImg'):
    print links['href']
You are using urllib2 incorrectly.
import lxml
import urllib2
from bs4 import BeautifulSoup
#create a http request
req=urllib2.Request("http://www.bedbathandbeyond.com/store/category/bedding/bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view=grid")
# send the request
response = urllib2.urlopen(req)
# read the content of the response
html = response.read()
br= BeautifulSoup(html,'lxml')
for links in br.findAll('a', class_='prodImg'):
    print links['href']
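On Python 3, `urllib2` was folded into `urllib.request`, and the same request-then-open pattern still applies. Some sites also reject clients without a browser-like User-Agent, so attaching one to the `Request` is a common tweak. A minimal sketch that builds the request without actually sending it (the User-Agent value here is just an illustrative placeholder):

```python
import urllib.request

URL = ("http://www.bedbathandbeyond.com/store/category/bedding/"
       "bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view=grid")

# Build the request up front; headers can be attached before sending.
req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
print(req.full_url)
print(req.get_header("User-agent"))

# Actually sending it would then be:
#   html = urllib.request.urlopen(req).read()
```
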
from bs4 import BeautifulSoup
import requests
html=requests.get("http://www.bedbathandbeyond.com/store/category/bedding/bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view=grid")
br= BeautifulSoup(html.content,"lxml")
data=br.findAll('div',attrs={'class':'productShadow'})
for div in br.find_all('a'):
    print div.get('href')
Try this code.
Recently I tried to use urllib2 and BeautifulSoup to extract the source code of a web page, but it failed: the output contains improperly decoded characters.
The script is as follows (run in Python IDLE):
import urllib2
from bs4 import BeautifulSoup
web = "http://www.qq.com"
page = urllib2.urlopen(web)
soup = BeautifulSoup(page, "html.parser")
print soup.prettify()
I found that the charset of "http://www.qq.com" is gb2312, so I added an encoding argument to the above script like this:
import urllib2
from bs4 import BeautifulSoup
web = "http://www.qq.com"
page = urllib2.urlopen(web)
soup = BeautifulSoup(page, "html.parser", from_encoding="gb2312")
print soup.prettify()
But the result is frustrating. Is there any solution available?
(Screenshot of the error message omitted.)
Last weekend I added the sys module to the code above, but it prints nothing, and without any warning this time.
#coding=utf-8
import urllib2
from bs4 import BeautifulSoup
import sys
reload(sys)
sys.setdefaultencoding('gbk')
web = "http://www.qq.com"
page = urllib2.urlopen(web)
soup = BeautifulSoup(page, "html.parser")
print soup.prettify()
Can you post the error message? Or is the problem that it's just not displaying Chinese characters to the screen?
Try switching to the gb18030 encoding. Even though the page says its charset is gb2312, there must be a character that's messing up the decoding. Switching encodings turned my terminal output from garbage to Chinese characters (source).
import urllib2
from bs4 import BeautifulSoup
web = "http://www.qq.com"
page = urllib2.urlopen(web)
soup = BeautifulSoup(page, "html.parser", from_encoding="gb18030")
print soup.prettify()
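The reason this works can be seen offline: gb18030 is a superset of gb2312, so bytes encoding a character outside gb2312's repertoire decode fine with gb18030 but raise an error with gb2312. A small sketch (the sample characters are arbitrary; '㐀' is simply a hanzi that gb2312 does not cover):

```python
# '㐀' (U+3400) exists in gb18030 but not in the older gb2312 table.
text = "你好㐀"
raw = text.encode("gb18030")

print(raw.decode("gb18030"))      # round-trips cleanly

try:
    raw.decode("gb2312")
except UnicodeDecodeError as e:
    print("gb2312 failed:", e.reason)
```

This is exactly the situation on pages that declare `charset=gb2312` but actually contain gb18030-only characters.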
How can I get the name (link text) of a URL with BeautifulSoup?
I have this code:
from BeautifulSoup import BeautifulSoup
import urllib2
import re
html_page = urllib2.urlopen("http://www.youtube.com")
soup = BeautifulSoup(html_page)
list = soup.findAll('div', attrs={'class':'profileBox'})
for div in list:
    print div.a['href']
This prints the href ("/sam"), but what I need is the link's name ("sam utx"). How can I do that?
You can select the text inside the div itself with this:
div.a.string
You can read more about this here.
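In bs4 terms the answer above is all you need: `div.a.string` gives the text "sam utx". If you're curious what that extraction looks like under the hood, it can be sketched with the stdlib `html.parser` (the `FirstLinkText` class name and sample HTML are mine):

```python
from html.parser import HTMLParser

class FirstLinkText(HTMLParser):
    """Grab the href and text of the first <a> encountered."""
    def __init__(self):
        super().__init__()
        self.in_a = False
        self.href = None
        self.text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a" and self.href is None:
            self.in_a = True
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_a = False

    def handle_data(self, data):
        if self.in_a:
            self.text += data

p = FirstLinkText()
p.feed('<div class="profileBox"><a href="/sam">sam utx</a></div>')
print(p.href, p.text)
```
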
I have this code:
import urllib
import urlparse
from bs4 import BeautifulSoup
url = "http://www.downloadcrew.com/?act=search&cat=51"
pageHtml = urllib.urlopen(url)
soup = BeautifulSoup(pageHtml)
for a in soup.select("div.productListingTitle a[href]"):
    try:
        print (a["href"]).encode("utf-8","replace")
    except:
        print "no link"
But when I run it, I only get 20 links. The output should contain more than 20 links.
That's because you only download the first page of content.
Just use a loop to download all the pages:
import urllib
import urlparse
from bs4 import BeautifulSoup
for i in xrange(3):
    url = "http://www.downloadcrew.com/?act=search&page=%d&cat=51" % i
    pageHtml = urllib.urlopen(url)
    soup = BeautifulSoup(pageHtml)
    for a in soup.select("div.productListingTitle a[href]"):
        try:
            print (a["href"]).encode("utf-8","replace")
        except:
            print "no link"
If you don't know the number of pages, you can do this:
import urllib
import urlparse
from bs4 import BeautifulSoup
i = 0
while 1:
    url = "http://www.downloadcrew.com/?act=search&page=%d&cat=51" % i
    pageHtml = urllib.urlopen(url)
    soup = BeautifulSoup(pageHtml)
    has_more = 0
    for a in soup.select("div.productListingTitle a[href]"):
        has_more = 1
        try:
            print (a["href"]).encode("utf-8","replace")
        except:
            print "no link"
    if has_more:
        i += 1
    else:
        break
I ran it on my computer and it got 60 links from three pages.
Good luck~
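The stop condition in the while-loop answer ("keep fetching until a page yields no links") generalizes to any paginated site. The control flow can be tested offline by swapping the network fetch for a stub (`fetch_links` and the fake page data below are stand-ins, not part of the original answer):

```python
def fetch_links(page_number, pages):
    """Stand-in for downloading and parsing one listing page."""
    return pages[page_number] if page_number < len(pages) else []

# Three fake pages of 20 links each; page 3 comes back empty.
fake_pages = [["link%d" % (p * 20 + i) for i in range(20)] for p in range(3)]

all_links = []
page = 0
while True:
    links = fetch_links(page, fake_pages)
    if not links:        # an empty page means we've run out
        break
    all_links.extend(links)
    page += 1

print(len(all_links))    # 60 links from three pages, as in the answer
```
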