I want to crawl a blog that lists websites in several categories. Starting from the first category page, my goal is to collect every webpage by following the categories. I have collected the websites from the first category, but the spider stops there and never reaches the second category.
An example draft:
My code:
import scrapy
from scrapy.contrib.spiders import Rule, CrawlSpider
from scrapy.contrib.linkextractors import LinkExtractor
from final.items import DmozItem

class my_spider(CrawlSpider):
    name = 'heart'
    allowed_domains = ['greek-sites.gr']
    start_urls = ['http://www.greek-sites.gr/categories/istoselides-athlitismos']

    rules = (Rule(LinkExtractor(allow=(r'.*categories/.*', )), callback='parse', follow=True),)

    def parse(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        categories = response.xpath('//a[contains(@href, "categories")]/text()').extract()
        for category in categories:
            item = DmozItem()
            item['title'] = response.xpath('//a[contains(text(),"gr")]/text()').extract()
            item['category'] = response.xpath('//div/strong/text()').extract()
            return item
The problem is simple: the callback has to be different from parse, so I suggest you name your method parse_site, for example, and then you are ready to continue your scraping.
If you make the change below it will work:
rules = (Rule(LinkExtractor(allow=(r'.*categories/.*', )), callback='parse_site', follow=True),)
def parse_site(self, response):
The reason for this is described in the docs:
When writing crawl spider rules, avoid using parse as callback, since the CrawlSpider uses the parse method itself to implement its logic. So if you override the parse method, the crawl spider will no longer work.
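For completeness, a minimal sketch of the corrected spider: the callback is renamed as suggested, the imports use the non-deprecated scrapy.spiders / scrapy.linkextractors paths, and return is swapped for yield so every category produces an item (those last two adjustments are mine, not part of the question):

import scrapy
from scrapy.spiders import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor
from final.items import DmozItem

class my_spider(CrawlSpider):
    name = 'heart'
    allowed_domains = ['greek-sites.gr']
    start_urls = ['http://www.greek-sites.gr/categories/istoselides-athlitismos']

    # the callback must not be called 'parse', because CrawlSpider uses parse() internally
    rules = (Rule(LinkExtractor(allow=(r'.*categories/.*',)), callback='parse_site', follow=True),)

    def parse_site(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        categories = response.xpath('//a[contains(@href, "categories")]/text()').extract()
        for category in categories:
            item = DmozItem()
            item['title'] = response.xpath('//a[contains(text(),"gr")]/text()').extract()
            item['category'] = response.xpath('//div/strong/text()').extract()
            yield item  # yield (not return) so the loop emits one item per category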
I am attempting to produce CSV output of selected items contained in a particular class (title, link, price), parsing each item into its own column and each instance into its own row, using item loaders and the items module.
I can produce the output using a self-contained spider (without use of items module), however, I'm trying to learn the proper way of detailing the items in the items module, so that I can eventually scale up projects using the proper structure. (I will detail this code as 'Working Row Output Spider Code' below)
I have also attempted to incorporate solutions determined or discussed in related posts; in particular:
Writing Itemloader By Item to XML or CSV Using Scrapy posted by Sam
Scrapy Return Multiple Items posted by Zana Daniel
by using a for loop, as he notes at the bottom of the comments section. However, while I can get Scrapy to accept the for loop, it doesn't result in any change: the items are still grouped in single fields rather than being output into independent rows.
Below is a detail of the code contained in two project attempts -- 'Working Row Output Spider Code', which does not incorporate the items module and item loader, and 'Non Working Row Output Spider Code' -- and the corresponding output of each.
Working Row Output Spider Code: btobasics.py
import scrapy
import urlparse

class BasicSpider(scrapy.Spider):
    name = 'basic'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        titles = response.xpath('//*[@class="product_pod"]/h3//text()').extract()
        links = response.xpath('//*[@class="product_pod"]/h3/a/@href').extract()
        prices = response.xpath('//*[@class="product_pod"]/div[2]/p[1]/text()').extract()
        for item in zip(titles, links, prices):
            # create a dictionary to store the scraped info
            scraped_info = {
                'title': item[0],
                'link': item[1],
                'price': item[2],
            }
            # yield or give the scraped info to scrapy
            yield scraped_info
Run Command to produce CSV: $ scrapy crawl basic -o output.csv
Working Row Output WITHOUT STRUCTURED ITEM LOADERS
Non Working Row Output Spider Code: btobasictwo.py
import datetime
import urlparse

import scrapy
from btobasictwo.items import BtobasictwoItem
from scrapy.loader.processors import MapCompose
from scrapy.loader import ItemLoader

class BasicSpider(scrapy.Spider):
    name = 'basic'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        # Create the loader using the response
        links = response.xpath('//*[@class="product_pod"]')
        for link in links:
            l = ItemLoader(item=BtobasictwoItem(), response=response)
            # Load fields using XPath expressions
            l.add_xpath('title', '//*[@class="product_pod"]/h3//text()',
                        MapCompose(unicode.strip))
            l.add_xpath('link', '//*[@class="product_pod"]/h3/a/@href',
                        MapCompose(lambda i: urlparse.urljoin(response.url, i)))
            l.add_xpath('price', '//*[@class="product_pod"]/div[2]/p[1]/text()',
                        MapCompose(unicode.strip))
            # Log fields
            l.add_value('url', response.url)
            l.add_value('date', datetime.datetime.now())
            return l.load_item()
Non Working Row Output Items Code: btobasictwo.items.py
from scrapy.item import Item, Field

class BtobasictwoItem(Item):
    # Primary fields
    title = Field()
    link = Field()
    price = Field()
    # Log fields
    url = Field()
    date = Field()
Run Command to produce CSV: $ scrapy crawl basic -o output.csv
Non Working Row Code Output WITH STRUCTURED ITEM LOADERS
As you can see, when attempting to incorporate the items module, item loaders and a for loop to structure the data, it does not separate the instances by row, but rather puts all instances of a particular item (title, link, price) into 3 fields.
I would greatly appreciate any help on this, and apologize for the lengthy post. I just wanted to document as much as possible so that anyone wanting to assist could run the code themselves and/or fully appreciate the problem from my documentation. (Please leave a comment about post length if you feel it is not appropriate to be this lengthy.)
Thanks very much
You need to tell your ItemLoader to use another selector:
def parse(self, response):
    # Create one loader per product, using the product selector
    links = response.xpath('//*[@class="product_pod"]')
    for link in links:
        l = ItemLoader(item=BtobasictwoItem(), selector=link)
        # Load fields using XPath expressions relative to the selector
        l.add_xpath('title', './/h3//text()',
                    MapCompose(unicode.strip))
        l.add_xpath('link', './/h3/a/@href',
                    MapCompose(lambda i: urlparse.urljoin(response.url, i)))
        l.add_xpath('price', './/div[2]/p[1]/text()',
                    MapCompose(unicode.strip))
        # Log fields
        l.add_value('url', response.url)
        l.add_value('date', datetime.datetime.now())
        yield l.load_item()
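Two changes work together here: the loader is built from the per-product selector (selector=link) rather than the whole response, and the XPaths become relative (starting with .//), so each loader only sees a single product_pod. Each pass through the loop then yields its own item, which is what gives you one CSV row per book instead of three aggregated fields.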
I'm trying to collect the weather data for the year 2000 from this site.
My code for the spider is:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class weather(CrawlSpider):
    name = 'data'
    start_urls = [
        'https://www1.ncdc.noaa.gov/pub/data/uscrn/products/daily01/'
    ]
    custom_settings = {
        'DEPTH_LIMIT': '2',
    }
    rules = (
        Rule(LinkExtractor(restrict_xpaths=('//table/tr[2]/td[1]/a',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        for b in response.xpath('//table'):
            yield scrapy.request('/tr[4]/td[2]/a/@href').extract()
            yield scrapy.request('/tr[5]/td[2]/a/@href').extract()
The paths with 'yield' are the links to two text files, and I want to scrape the data from these text files and store it separately in two different files, but I don't know how to continue.
I don't usually use CrawlSpider, so I'm unfamiliar with it, but it seems like you should be able to create another xpath (preferably something more specific than "/tr[4]/td[2]/a/@href") and supply a callback function.
However, in a "typical" scrapy project using Spider instead of CrawlSpider, you would simply yield a Request with another callback function to handle extracting and storing to the database. For example:
def parse_item(self, response):
    for b in response.xpath('//table'):
        url = b.xpath('/tr[4]/td[2]/a/@href').extract_first()
        # join the relative href against the page URL before requesting it
        yield scrapy.Request(url=response.urljoin(url), callback=self.extract_and_store)
        url = b.xpath('/tr[5]/td[2]/a/@href').extract_first()
        yield scrapy.Request(url=response.urljoin(url), callback=self.extract_and_store)

def extract_and_store(self, response):
    """
    Scrape data and store it separately
    """
I'm trying to find a way to scrape and parse more pages in the signed-in area.
These are example links, accessible after signing in, that I want to parse:
#http://example.com/seller/demand/?id=305554
#http://example.com/seller/demand/?id=305553
#http://example.com/seller/demand/?id=305552
#....
I want to create a spider that can open each one of these links and then parse them.
I have created another spider which can open and parse only one of them.
When I tried to use a "for" or "while" loop to issue more requests with other links, it didn't work: I cannot put more than one return into a generator, and it raised an error. I also tried link extractors, but they didn't work for me.
Here is my code:
#!c:/server/www/scrapy
# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http import FormRequest
from scrapy.http.request import Request
from scrapy.spiders import CrawlSpider, Rule
from array import *
from stack.items import StackItem
from scrapy.linkextractors import LinkExtractor

class Spider3(Spider):
    name = "Spider3"
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/login"] #this link leads to the login page
When I am signed in, the site returns a page whose URL contains "stat", which is why I put the first "if" condition there.
When I am signed in, I request one link and call the function parse_items.
def parse(self, response):
    #when "stat" is in the url it means that I just signed in
    if "stat" in response.url:
        return Request("http://example.com/seller/demand/?id=305554", callback=self.parse_items)
    else:
        #a successful login takes me to a page whose url contains "stat"
        return [FormRequest.from_response(response,
            formdata={'ctl00$ContentPlaceHolder1$lMain$tbLogin': 'my_login', 'ctl00$ContentPlaceHolder1$lMain$tbPass': 'my_password'},
            callback=self.parse)]
The function parse_items simply parses the desired content from one desired page:
def parse_items(self, response):
    questions = Selector(response).xpath('//*[@id="ctl00_ContentPlaceHolder1_cRequest_divAll"]/table/tr')
    for question in questions:
        item = StackItem()
        item['name'] = question.xpath('th/text()').extract()[0]
        item['value'] = question.xpath('td/text()').extract()[0]
        yield item
Can you please help me update this code so it opens and parses more than one page per session?
I don't want to sign in over and over again for each request.
The session most likely depends on cookies, and scrapy manages that by itself, i.e.:
def parse_items(self, response):
    questions = Selector(response).xpath('//*[@id="ctl00_ContentPlaceHolder1_cRequest_divAll"]/table/tr')
    for question in questions:
        item = StackItem()
        item['name'] = question.xpath('th/text()').extract()[0]
        item['value'] = question.xpath('td/text()').extract()[0]
        yield item

    next_url = ''  # find url to next page in the current page
    if next_url:
        yield Request(next_url, self.parse_items)
        # scrapy will retain the session for the next page if it's managed by cookies
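Since the original blocker was "I cannot put more than one return into a generator": once you are signed in, you can yield any number of requests from the same callback, and the cookie-based session is reused for all of them. A minimal sketch built on the question's own parse method (the list of ids is illustrative, taken from the example links above):

def parse(self, response):
    if "stat" in response.url:
        # already signed in: yield one request per page instead of a single return
        for demand_id in (305554, 305553, 305552):  # illustrative ids from the question
            yield Request("http://example.com/seller/demand/?id=%d" % demand_id,
                          callback=self.parse_items)
    else:
        yield FormRequest.from_response(
            response,
            formdata={'ctl00$ContentPlaceHolder1$lMain$tbLogin': 'my_login',
                      'ctl00$ContentPlaceHolder1$lMain$tbPass': 'my_password'},
            callback=self.parse)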
I am currently working on the same problem. I use InitSpider so I can overwrite __init__ and init_request. The first is just for initialisation of custom stuff and the actual magic happens in my init_request:
def init_request(self):
    """This function is called before crawling starts."""
    # Do not start a request on error,
    # simply return nothing and quit scrapy
    if self.abort:
        return
    # Do a login
    if self.login_required:
        # Start with the login first
        return Request(url=self.login_page, callback=self.login)
    else:
        # Start with the parse function
        return Request(url=self.base_url, callback=self.parse)
My login looks like this:
def login(self, response):
    """Generate a login request."""
    self.log('Login called')
    return FormRequest.from_response(
        response,
        formdata=self.login_data,
        method=self.login_method,
        callback=self.check_login_response
    )
self.login_data is a dict with post values.
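check_login_response is referenced above but not shown; a minimal sketch of what it could look like, following the usual InitSpider pattern (the "Logout" marker is an assumption about the target page, and initialized() is the InitSpider hook that resumes crawling):

def check_login_response(self, response):
    """Check whether the login succeeded before the real crawl starts."""
    if b"Logout" in response.body:  # hypothetical marker of a signed-in page
        self.log('Login successful')
        # hand control back to InitSpider so it starts crawling base_url
        return self.initialized()
    self.log('Login failed')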
I am still a beginner with python and scrapy, so I might be doing it the wrong way. Anyway, so far I have produced a working version that can be viewed on github.
HTH:
https://github.com/cytopia/crawlpy
I am able to scrape all the stories from the first page; my problem is how to move to the next page and continue scraping stories and names. Kindly check my code below:
# -*- coding: utf-8 -*-
import scrapy
from cancerstories.items import CancerstoriesItem

class MyItem(scrapy.Item):
    name = scrapy.Field()
    story = scrapy.Field()

class MySpider(scrapy.Spider):
    name = 'cancerstories'
    allowed_domains = ['thebreastcancersite.greatergood.com']
    start_urls = ['http://thebreastcancersite.greatergood.com/clickToGive/bcs/stories/']

    def parse(self, response):
        rows = response.xpath('//a[contains(@href,"story")]')
        #loop over all links to stories
        for row in rows:
            myItem = MyItem() # Create a new item
            myItem['name'] = row.xpath('./text()').extract() # assign name from link
            story_url = response.urljoin(row.xpath('./@href').extract()[0]) # extract url from link
            request = scrapy.Request(url=story_url, callback=self.parse_detail) # create request for detail page with story
            request.meta['myItem'] = myItem # pass the item with the request
            yield request

    def parse_detail(self, response):
        myItem = response.meta['myItem'] # extract the item (with the name) from the response
        #myItem['name']=response.xpath('//h1[@class="headline"]/text()').extract()
        text_raw = response.xpath('//div[@class="photoStoryBox"]/div/p/text()').extract() # extract the story (text)
        myItem['story'] = ' '.join(map(unicode.strip, text_raw)) # clean up the text and assign to item
        yield myItem # return the item
You could change your scrapy.Spider to a CrawlSpider, and use Rule and LinkExtractor to follow the link to the next page.
For this approach you have to include the code below:
...
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
...
rules = (
Rule(LinkExtractor(allow='\.\./stories;jsessionid=[0-9A-Z]+?page=[0-9]+')),
)
...
class MySpider(CrawlSpider):
...
This way, for each page you visit, the spider will create a request for the next page (if present), follow it once it finishes executing the parse method, and repeat the process.
EDIT:
The rule I wrote is just to follow the next page link, not to extract the stories; if your first approach works, it's not necessary to change it.
Also, regarding the rule in your comment, SgmlLinkExtractor is deprecated, so I recommend using the default link extractor, and the rule itself is not well defined.
When the attrs parameter of the extractor is not defined, it searches for links by looking at the href attributes in the body, which in this case look like ../story/mother-of-4435 and not /clickToGive/bcs/story/mother-of-4435. That's the reason it doesn't find any links to follow.
You can also follow next pages manually if you use the scrapy.Spider class, for example:
next_page = response.css('a.pageLink ::attr(href)').extract_first()
if next_page:
    absolute_next_page_url = response.urljoin(next_page)
    yield scrapy.Request(url=absolute_next_page_url, callback=self.parse)
Do not forget to rename your parse method to parse_start_url if you want to use the CrawlSpider class.
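To make that rename concrete, a minimal sketch of the CrawlSpider variant (the a.pageLink selector is borrowed from the snippet above; MyItem and parse_detail stay exactly as in the question):

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'cancerstories'
    allowed_domains = ['thebreastcancersite.greatergood.com']
    start_urls = ['http://thebreastcancersite.greatergood.com/clickToGive/bcs/stories/']

    # follow pagination links and send every paginated page to parse_page
    rules = (
        Rule(LinkExtractor(restrict_css='a.pageLink'), callback='parse_page', follow=True),
    )

    def parse_start_url(self, response):
        # the start URL is not routed through the rule, so reuse the same callback for it
        return self.parse_page(response)

    def parse_page(self, response):
        # same per-story logic as the original parse() method
        for row in response.xpath('//a[contains(@href, "story")]'):
            myItem = MyItem()
            myItem['name'] = row.xpath('./text()').extract()
            story_url = response.urljoin(row.xpath('./@href').extract()[0])
            request = scrapy.Request(url=story_url, callback=self.parse_detail)
            request.meta['myItem'] = myItem
            yield request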
I'm trying to use scrapy to crawl a phpBB-based forum. My knowledge level of scrapy is quite basic (but improving).
Extracting the contents of a forum thread's first page was more or less easy. My successful scraper was this:
import scrapy
from ptmya1.items import Ptmya1Item

class bastospider3(scrapy.Spider):
    name = "basto3"
    allowed_domains = ["portierramaryaire.com"]
    start_urls = [
        "http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a"
    ]

    def parse(self, response):
        for sel in response.xpath('//div[2]/div'):
            item = Ptmya1Item()
            item['author'] = sel.xpath('div/div[1]/p/strong/a/text()').extract()
            item['date'] = sel.xpath('div/div[1]/p/text()').extract()
            item['body'] = sel.xpath('div/div[1]/div/text()').extract()
            yield item
However, when I tried to crawl using the "next page" link, I failed after a lot of frustrating hours. I would like to show you my attempts in order to ask for advice. Note: I would prefer a solution based on the SgmlLinkExtractor variants, since they are more flexible and powerful, but after so many attempts I now prioritize anything that succeeds.
First one: SgmlLinkExtractor with a restricted xpath. The 'next page' xpath is:
/html/body/div[1]/div[2]/form[1]/fieldset/a
Indeed, I tested with the shell that
response.xpath('//div[2]/form[1]/fieldset/a/@href')[1].extract()
returns a correct value for the "next page" link. However, I want to note that the cited xpath offers TWO links:
>>> response.xpath('//div[2]/form[1]/fieldset/a/@href').extract()
[u'./search.php?sid=5aa2b92bec28a93c85956e83f2f62c08', u'./viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a&sid=5aa2b92bec28a93c85956e83f2f62c08&start=15']
Thus, my failed scraper was:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from ptmya1.items import Ptmya1Item

class bastospider3(scrapy.Spider):
    name = "basto7"
    allowed_domains = ["portierramaryaire.com"]
    start_urls = [
        "http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a"
    ]

    rules = (
        Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=('//div[2]/form[1]/fieldset/a/@href')[1],), callback="parse_items", follow=True)
    )

    def parse_item(self, response):
        for sel in response.xpath('//div[2]/div'):
            item = Ptmya1Item()
            item['author'] = sel.xpath('div/div[1]/p/strong/a/text()').extract()
            item['date'] = sel.xpath('div/div[1]/p/text()').extract()
            item['body'] = sel.xpath('div/div[1]/div/text()').extract()
            yield item
Second one: SgmlLinkExtractor with allow. More primitive, and unsuccessful too:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from ptmya1.items import Ptmya1Item

class bastospider3(scrapy.Spider):
    name = "basto7"
    allowed_domains = ["portierramaryaire.com"]
    start_urls = [
        "http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a"
    ]

    rules = (
        Rule(SgmlLinkExtractor(allow=(r'viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a&start.',),), callback="parse_items", follow=True)
    )

    def parse_item(self, response):
        for sel in response.xpath('//div[2]/div'):
            item = Ptmya1Item()
            item['author'] = sel.xpath('div/div[1]/p/strong/a/text()').extract()
            item['date'] = sel.xpath('div/div[1]/p/text()').extract()
            item['body'] = sel.xpath('div/div[1]/div/text()').extract()
            yield item
Finally, I went back to the damn palaeolithic age, or rather to its first-tutorial equivalent: I tried to use the loop included at the end of the beginners' tutorial. Another failure:
import scrapy
import urlparse
from ptmya1.items import Ptmya1Item

class bastospider5(scrapy.Spider):
    name = "basto5"
    allowed_domains = ["portierramaryaire.com"]
    start_urls = [
        "http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a"
    ]

    def parse_articles_follow_next_page(self, response):
        item = Ptmya1Item()
        item['cacho'] = response.xpath('//div[2]/form[1]/fieldset/a/@href').extract()[1][1:] + "http://portierramaryaire.com/foro"
        for sel in response.xpath('//div[2]/div'):
            item['author'] = sel.xpath('div/div[1]/p/strong/a/text()').extract()
            item['date'] = sel.xpath('div/div[1]/p/text()').extract()
            item['body'] = sel.xpath('div/div[1]/div/text()').extract()
            yield item

        next_page = response.xpath('//fieldset/a[@class="right-box right"]')
        if next_page:
            cadenanext = response.xpath('//div[2]/form[1]/fieldset/a/@href').extract()[1][1:]
            url = urlparse.urljoin("http://portierramaryaire.com/foro", cadenanext)
            yield scrapy.Request(url, self.parse_articles_follow_next_page)
In all the cases, what I obtained was a cryptic error message from which I cannot get a hint for solving my problem:
2015-10-08 21:24:46 [scrapy] DEBUG: Crawled (200) <GET http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a> (referer: None)
2015-10-08 21:24:46 [scrapy] ERROR: Spider error processing <GET http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a> (referer: None)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/local/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 76, in parse
raise NotImplementedError
NotImplementedError
2015-10-08 21:24:46 [scrapy] INFO: Closing spider (finished)
I really would appreciate any advice (or better, a working solution) for the problem. I'm utterly stuck on this and no matter how much I read, I am not able to find a solution :(
The cryptic error message occurs because you do not use the parse method. That's the default entry-point of scrapy when it wants to parse a response.
However, you only defined a parse_articles_follow_next_page or a parse_item function -- which are definitely not parse functions.
And this is not because of the next page but because of the first page: Scrapy cannot parse the start_url, so your callbacks are never reached in any case. Try changing your callback to parse and run your approaches again for the palaeolithic solution.
If you are using a Rule, then you need a different spider. For that, use CrawlSpider, which you can see in the tutorials. In that case do not override the parse method, but use parse_items as you do. That's because CrawlSpider uses parse to forward the response to the callback method.
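Putting that advice into code for the palaeolithic variant, a minimal sketch: the callback is renamed to parse, the item is created inside the loop so each post yields its own item, the 'cacho' debugging field is dropped, and the next-page URL is joined against response.url so the relative './viewtopic…' href seen in the shell output resolves correctly (those adjustments are mine; the XPaths are from the question):

import scrapy
import urlparse
from ptmya1.items import Ptmya1Item

class bastospider5(scrapy.Spider):
    name = "basto5"
    allowed_domains = ["portierramaryaire.com"]
    start_urls = [
        "http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a"
    ]

    # plain Spider subclasses need a parse() method: it is the default
    # callback for the start_urls responses
    def parse(self, response):
        for sel in response.xpath('//div[2]/div'):
            item = Ptmya1Item()
            item['author'] = sel.xpath('div/div[1]/p/strong/a/text()').extract()
            item['date'] = sel.xpath('div/div[1]/p/text()').extract()
            item['body'] = sel.xpath('div/div[1]/div/text()').extract()
            yield item
        # the "next page" anchor carries the relative link to the following page
        next_page = response.xpath('//fieldset/a[@class="right-box right"]/@href').extract_first()
        if next_page:
            yield scrapy.Request(urlparse.urljoin(response.url, next_page), self.parse)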
Thanks to GHajba, the problem is solved. The solution is developed in the comments.
However, the spider doesn't return the results in order. It starts on http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a
and it should walk through "next page" urls, which are like this: http://portierramaryaire.com/foro/viewtopic.php?f=3&t=3821&st=0&sk=t&sd=a&start=15
incrementing the 'start' variable by 15 posts each time.
Indeed, the spider first returns the page produced by 'start=15', then 'start=30', then 'start=0', then again 'start=15', then 'start=45'...
I am not sure if I have to create a new question or if it would be better for future readers to develop the question here. What do you think?
Since this is 5 years old, many, many new approaches are out there.
btw: see https://github.com/Dascienz/phpBB-forum-scraper
Python-based web scraper for phpBB forums. Project can be used as a
template for building your own custom Scrapy spiders or for one-off
crawls on designated forums. Please keep in mind that aggressive
crawls can produce significant strain on web servers, so please
throttle your request rates.
The phpBB.py spider scrapes the following information from forum posts: Username, User Post Count, Post Date & Time, Post Text, Quoted Text.
If you need additional data scraped, you will have to create
additional spiders or edit the existing spider.
Edit phpBB.py and specify: allowed_domains, start_urls, username & password, and forum_login=False or forum_login=True.
See also this requests-based login example:
import requests
forum = "http://example.com/forum/"  # base URL of the phpBB forum (placeholder)
headers = {'User-Agent': 'Mozilla/5.0'}
payload = {'username': 'username', 'password': 'password', 'redirect':'index.php', 'sid':'', 'login':'Login'}
session = requests.Session()
r = session.post(forum + "ucp.php?mode=login", headers=headers, data=payload)
print(r.text)
But wait: instead of manipulating the website with requests, we can also use browser automation; mechanize, for example, offers this.
That way we don't have to manage the session ourselves and only need a few lines of code to craft each request.
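A minimal sketch of the mechanize route, assuming a standard phpBB login form (the field names username/password, the form index, and the example URLs are assumptions about the target forum):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)          # phpBB login pages are often disallowed by robots.txt
br.addheaders = [('User-Agent', 'Mozilla/5.0')]

br.open("http://example.com/forum/ucp.php?mode=login")
br.select_form(nr=0)                 # assume the login form is the first form on the page
br["username"] = "username"
br["password"] = "password"
br.submit()

# the browser object now carries the session cookies for further requests
response = br.open("http://example.com/forum/index.php")
print(response.read())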
An interesting example is on GitHub: https://github.com/winny-/sirsi/blob/317928f23847f4fe85e2428598fbe44c4dae2352/sirsi/sirsi.py#L74-L211