Trying to extract from a deeply nested node with Scrapy, results are bad - python-2.7

As a beginner I'm having a hard time, so I'm here to ask for help.
I'm trying to extract prices from an HTML page, where they sit deep in nested markup (the original post showed screenshots of the two price locations, omitted here). This is my spider:
from scrapy.spider import Spider
from scrapy.selector import Selector

from mymarket.items import MymarketItem


class MySpider(Spider):
    name = "mymarket"
    allowed_domains = ["url"]
    start_urls = [
        "http://url"
    ]

    def parse(self, response):
        sel = Selector(response)
        titles = sel.xpath('//table[@class="tab_product_list"]//tr')
        items = []
        for t in titles:
            item = MymarketItem()
            item["price"] = t.xpath('//tr//span[2]/text()').extract()
            items.append(item)
        return items
I'm trying to export the scraped prices to CSV. They do export, but they come out like this (screenshot omitted), and I want them sorted like this in the .csv (screenshot omitted).
Can anybody point out the faulty part of the XPath, or tell me how I can get the prices sorted "properly"?

It's difficult to say what's wrong with the path. Install the FirePath extension for Firefox to test your XPath queries. One note for now:
titles = sel.xpath('//table[@class="tab_product_list"]//tr')
In your screenshot you have nested tables, so //tr will give you the trs from the nested tables too.
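To see the difference, here is a minimal, self-contained check with made-up markup (note that the browser inspector may show a tbody that is not in the real page source, which is why the corrected code below mentions it):

from scrapy.selector import Selector

html = ('<table class="tab_product_list">'
        '<tr><td><table><tr><td>nested</td></tr></table></td></tr>'
        '</table>')
sel = Selector(text=html)
# //tr matches every descendant tr, including those of nested tables
print len(sel.xpath('//table[@class="tab_product_list"]//tr'))  # 2
# /tr matches only the direct tr children of the outer table
print len(sel.xpath('//table[@class="tab_product_list"]/tr'))   # 1

With that in mind, here is the corrected parse method: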
def parse(self, response):
    sel = Selector(response)
    titles = sel.xpath('//table[@class="tab_product_list"]/tr')  # or with tbody
    items = []
    for t in titles:
        item = MymarketItem()
        item["price"] = t.xpath('.//span[@style="color:red;"]/text()').extract()[0]
        items.append(item)
    return items

.extract() returns a list even if only a single node matched, so take the first element of the list with .extract()[0].
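A quick illustration with throwaway markup (hypothetical values, just to show the list-versus-string difference):

from scrapy.selector import Selector

sel = Selector(text='<div><span>Name</span><span style="color:red;">19.99</span></div>')
# extract() always returns a list of unicode strings
print sel.xpath('//span[2]/text()').extract()     # [u'19.99']
# indexing with [0] gives the bare string
print sel.xpath('//span[2]/text()').extract()[0]  # 19.99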


Scrapy: Return Each Item in a New CSV Row Using Item Loader

I am attempting to produce a CSV output of select items contained in a particular class (title, link, price) that parses each item into its own column, and each instance into its own row, using item loaders and the items module.
I can produce the output using a self-contained spider (without the items module); however, I'm trying to learn the proper way of detailing the items in the items module, so that I can eventually scale up projects using the proper structure. (I will detail this code as 'Working Row Output Spider Code' below.)
I have also attempted to incorporate solutions determined or discussed in related posts; in particular:
Writing Itemloader By Item to XML or CSV Using Scrapy, posted by Sam
Scrapy Return Multiple Items, posted by Zana Daniel
by using a for loop as he notes at the bottom of the comments section. However, while I can get Scrapy to accept the for loop, it doesn't result in any change; the items are still grouped in single fields rather than being output into independent rows.
Below is a detail of the code contained in the two project attempts, 'Working Row Output Spider Code' (which does not incorporate the items module and item loader) and 'Non Working Row Output Spider Code', with the corresponding output of each.
Working Row Output Spider Code: btobasics.py
import scrapy
import urlparse


class BasicSpider(scrapy.Spider):
    name = 'basic'
    allowed_domains = ['http://http://books.toscrape.com/']
    start_urls = ['http://books.toscrape.com//']

    def parse(self, response):
        titles = response.xpath('//*[@class="product_pod"]/h3//text()').extract()
        links = response.xpath('//*[@class="product_pod"]/h3/a/@href').extract()
        prices = response.xpath('//*[@class="product_pod"]/div[2]/p[1]/text()').extract()
        for item in zip(titles, links, prices):
            # create a dictionary to store the scraped info
            scraped_info = {
                'title': item[0],
                'link': item[1],
                'price': item[2],
            }
            # yield or give the scraped info to scrapy
            yield scraped_info
Run Command to produce CSV: $ scrapy crawl basic -o output.csv
(Screenshot omitted: working row output WITHOUT structured item loaders.)
Non Working Row Output Spider Code: btobasictwo.py
import datetime
import urlparse

import scrapy

from btobasictwo.items import BtobasictwoItem
from scrapy.loader.processors import MapCompose
from scrapy.loader import ItemLoader


class BasicSpider(scrapy.Spider):
    name = 'basic'
    allowed_domains = ['http://http://books.toscrape.com/']
    start_urls = ['http://books.toscrape.com//']

    def parse(self, response):
        # Create the loader using the response
        links = response.xpath('//*[@class="product_pod"]')
        for link in links:
            l = ItemLoader(item=BtobasictwoItem(), response=response)
            # Load fields using XPath expressions
            l.add_xpath('title', '//*[@class="product_pod"]/h3//text()',
                        MapCompose(unicode.strip))
            l.add_xpath('link', '//*[@class="product_pod"]/h3/a/@href',
                        MapCompose(lambda i: urlparse.urljoin(response.url, i)))
            l.add_xpath('price', '//*[@class="product_pod"]/div[2]/p[1]/text()',
                        MapCompose(unicode.strip))
            # Log fields
            l.add_value('url', response.url)
            l.add_value('date', datetime.datetime.now())
            return l.load_item()
Non Working Row Output Items Code: btobasictwo.items.py
from scrapy.item import Item, Field


class BtobasictwoItem(Item):
    # Primary fields
    title = Field()
    link = Field()
    price = Field()
    # Log fields
    url = Field()
    date = Field()
Run Command to produce CSV: $ scrapy crawl basic -o output.csv
(Screenshot omitted: non-working row output WITH structured item loaders.)
As you can see, when attempting to incorporate the items module, item loaders, and a for loop to structure the data, it does not separate the instances by row, but rather puts all instances of a particular item (title, link, price) into 3 fields.
I would greatly appreciate any help on this, and apologize for the lengthy post. I just wanted to document as much as possible so that anyone wanting to assist could run the code themselves and/or fully appreciate the problem from my documentation. (Please leave a comment about post length if you feel it is not appropriate to be this lengthy.)
Thanks very much.
You need to tell your ItemLoader to use another selector:
def parse(self, response):
    links = response.xpath('//*[@class="product_pod"]')
    for link in links:
        # Create the loader using the selector for this product
        l = ItemLoader(item=BtobasictwoItem(), selector=link)
        # Load fields using XPath expressions relative to that selector
        l.add_xpath('title', './/h3//text()',
                    MapCompose(unicode.strip))
        l.add_xpath('link', './/h3/a/@href',
                    MapCompose(lambda i: urlparse.urljoin(response.url, i)))
        l.add_xpath('price', './/div[2]/p[1]/text()',
                    MapCompose(unicode.strip))
        # Log fields
        l.add_value('url', response.url)
        l.add_value('date', datetime.datetime.now())
        yield l.load_item()

Scrapy not working on OBD site

I'm trying to use a Scrapy spider on oneblockdown.it to get all the products from the latest-products page and store them in a DB.
Some sites in my monitor are working, but some, such as OBD, are not working and upload nothing to the DB. This is my function:
class OneBlockDownSpider(Spider):
    name = "OneBlockDownSpider"
    allowded_domains = ["oneblockdown.it"]
    start_urls = [OneBlockDownURL]

    def __init__(self):
        logging.critical("OneBlockDown STARTED.")

    def parse(self, response):
        products = Selector(response).xpath("//div[@id='product-list']")
        for product in products:
            item = OneBlockDownItem()
            item['name'] = product.xpath('.//div[@class="catalogue-product-title"]//h3').extract.first
            item['link'] = product.xpath('.//div[@class="catalogue-product-title"]//h3/a/@href').extract.first
            # item['image'] = "http:" + product.xpath("/div[@class='catalogue-product-cover']/a[@class='catalogue-product-cover-image']/img/@src").extract()[0]
            # item['size'] = '**NOT SUPPORTED YET**'
            yield item
        yield Request(OneBlockDownURL, callback=self.parse, dont_filter=True, priority=15)
I guess I'm using the wrong XPath, but I can't solve it.
First of all, the site is Cloudflare-protected (to prevent scraping).
You also have several issues with your code:
- your products is a single node
- you're using extract.first instead of extract_first()
products = response.xpath("//div[@id='product-list']/div")
for product in products:
    item = OneBlockDownItem()
    item['name'] = product.xpath('.//div[@class="catalogue-product-title"]//h3').extract_first()
    item['link'] = product.xpath('.//div[@class="catalogue-product-title"]//h3/a/@href').extract_first()
    yield item
You should start all your XPaths with '.' when querying a relative selector like product:
item['image'] = "http:" + product.xpath("./div[@class='catalogue-product-cover']/a[@class='catalogue-product-cover-image']/img/@src").extract()[0]
Otherwise, it will evaluate the XPath from the document root, i.e. /body/div[@class='catalogue-product-cover'].
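A minimal sketch (with made-up markup) of why the leading dot matters when querying from a node selector:

from scrapy.selector import Selector

html = ('<div id="product-list">'
        '<div><img src="//cdn/a.jpg"></div>'
        '<div><img src="//cdn/b.jpg"></div>'
        '</div>')
first = Selector(text=html).xpath("//div[@id='product-list']/div")[0]
# './/' stays inside the current product node
print first.xpath('.//img/@src').extract()  # [u'//cdn/a.jpg']
# '//' ignores the node and searches the whole document again
print first.xpath('//img/@src').extract()   # [u'//cdn/a.jpg', u'//cdn/b.jpg']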

Python scrapy - yield initial items and items from callback to csv

So I've managed to write a spider that extracts the download links of "Videos" and "English Transcripts" from this site. Looking at the cmd window I can see that all the correct information has been scraped.
The issue I am having is that the output CSV file only contains the "Video" links and not the "English Transcripts" links (even though you can see they've been scraped in the cmd window).
I've tried a few suggestions from other posts, but none of them seem to work.
The following is how I'd like the output to look (screenshot of the desired CSV output omitted).
This is my current spider code:
import scrapy


class SuhbaSpider(scrapy.Spider):
    name = "suhba2"
    start_urls = ["http://saltanat.org/videos.php?topic=SheikhBahauddin&gopage={numb}".format(numb=numb)
                  for numb in range(1, 3)]

    def parse(self, response):
        yield {
            "video": response.xpath("//span[@class='download make-cursor']/a/@href").extract(),
        }
        fullvideoid = response.xpath("//span[@class='media-info make-cursor']/@onclick").extract()
        for videoid in fullvideoid:
            url = ("http://saltanat.org/ajax_transcription.php?vid=" + videoid[21:-2])
            yield scrapy.Request(url, callback=self.parse_transcript)

    def parse_transcript(self, response):
        yield {
            "transcript": response.xpath("//a[contains(@href,'english')]/@href").extract(),
        }
You are yielding two different kinds of items: one containing just the video attribute and one containing just the transcript attribute. You have to yield one kind of item composed of both attributes. For that, you have to create the item in parse and pass it to the second-level request using meta. Then, in parse_transcript, you take it from meta, fill in the additional data and finally yield the item. This general pattern is described in the Scrapy documentation.
The second thing is that you extract all videos at once using the extract() method. This yields a list where it's hard afterwards to link each individual element with the corresponding transcript. A better approach is to loop over each individual video element in the HTML and yield an item per video.
Applied to your example:
import scrapy


class SuhbaSpider(scrapy.Spider):
    name = "suhba2"
    start_urls = ["http://saltanat.org/videos.php?topic=SheikhBahauddin&gopage={numb}".format(numb=numb)
                  for numb in range(1, 3)]

    def parse(self, response):
        for video in response.xpath("//tr[@class='video-doclet-row']"):
            item = dict()
            item["video"] = video.xpath(".//span[@class='download make-cursor']/a/@href").extract_first()
            videoid = video.xpath(".//span[@class='media-info make-cursor']/@onclick").extract_first()
            url = "http://saltanat.org/ajax_transcription.php?vid=" + videoid[21:-2]
            request = scrapy.Request(url, callback=self.parse_transcript)
            request.meta['item'] = item
            yield request

    def parse_transcript(self, response):
        item = response.meta['item']
        item["transcript"] = response.xpath("//a[contains(@href,'english')]/@href").extract_first()
        yield item
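As with the other spiders above, the combined items can then be dumped straight to CSV (assuming a standard Scrapy project layout; the output file name is arbitrary):
$ scrapy crawl suhba2 -o output.csv
Each row then carries both the video and transcript columns, since they come from a single item.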

Need help understanding the output of the program

I was working on my project XYZ and I got stuck extracting text from the source. The element I'm after is a link whose visible text is "gifts" (the HTML snippet itself did not survive in the post).
I want to extract the href as well as the content.
I tried this:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from XYZ.items import XYZ


class MySpider(BaseSpider):
    name = "main"
    allowed_domains = ["XYZ"]
    start_urls = ["XYZ"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//a[@data-tracking-id='mdd-heading']")
        items = []
        for titles in titles:
            item = XYZ()
            item["title"] = titles.select("text()").extract()
            item["link"] = titles.select("@href").extract()
            items.append(item)
            print "www.xyz.com" + str(item["link"])
        return items
and the output was
www.xyz.com[u'/gifts']
I was expecting output as
www.xyz.com/gifts
What did I do wrong?
According to the documentation for Selector's extract():
extract()
Serialize and return the matched nodes as a list of unicode
strings. Percent encoded content is unquoted.
So, extract() returns a list and you need the first item from it. Use item['link'][0].
Also, there are other problems in the code:
- the for titles in titles loop doesn't make sense; you need a separate loop variable
- HtmlXPathSelector is deprecated; use Selector
- use urljoin() to join the parts of a URL
Here's the complete code with fixes and other improvements:
from urlparse import urljoin

from scrapy.spider import BaseSpider
from scrapy.selector import Selector

from XYZ.items import XYZ


class MySpider(BaseSpider):
    name = "main"
    allowed_domains = ["XYZ"]
    start_urls = ["XYZ"]

    def parse(self, response):
        titles = response.xpath("//a[@data-tracking-id='mdd-heading']")
        for title in titles:
            item = XYZ()
            item["title"] = title.xpath("text()").extract()[0]
            item["link"] = title.xpath("@href").extract()[0]
            # the base needs a scheme, or urljoin() discards it
            print urljoin("http://www.xyz.com", item["link"])
            yield item
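A quick check in a Python 2 shell shows why the base URL needs a scheme for urljoin() to behave as expected:

from urlparse import urljoin

print urljoin("http://www.xyz.com", u"/gifts")  # http://www.xyz.com/gifts
print urljoin("www.xyz.com", u"/gifts")         # /gifts -- a base without a scheme is dropped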

Can't get additional items from url

I'm scraping a few items from this site, but the spider grabs the items only from the first product and doesn't loop further. I know I'm making a simple, stupid mistake, but if you can just point out where I got this wrong, I'll appreciate it.
Here is the spider:
from scrapy.item import Item, Field
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
import re

from zoomer.items import ZoomerItem


class ZoomSpider(BaseSpider):
    name = "zoomSp"
    allowed_domains = ["zoomer.ge"]
    start_urls = [
        "http://zoomer.ge/index.php?cid=35&act=search&category=1&search_type=mobile"
    ]

    def parse(self, response):
        sel = Selector(response)
        titles = sel.xpath('//div[@class="productContainer"]/div[5]')
        items = []
        for t in titles:
            item = ZoomerItem()
            item["brand"] = t.xpath('//div[@class="productListContainer"]/div[3]/text()').re('^([\w, ]+)')
            item["price"] = t.xpath('//div[@class="productListContainer"]/div[4]/text()').extract()[0].strip()
            item["model"] = t.xpath('//div[@class="productListContainer"]/div[3]/text()').re('\s+(.*)$')[0].strip()
            items.append(item)
        return items
P.S. I also can't get the regex for the "brand" string to grab only the first word, "BlackBerry", from the string "BlackBerry P9981 Porsche Design".
The <div/> element with the class productContainer is just a container and appears only once, so it is not repeating. The repeating element, which you want to iterate over, is the one with the class productListContainer.
def parse(self, response):
    sel = Selector(response)
    titles = sel.xpath('//div[@class="productContainer"]/div[5]/div[@class="productListContainer"]')
    items = []
    for t in titles:
        item = ZoomerItem()
        item["brand"] = t.xpath('div[3]/text()').re('^([\w\-]+)')
        item["price"] = t.xpath('div[@class="productListPrice"]/div/text()').extract()
        item["model"] = t.xpath('div[3]/text()').re('\s+(.*)$')[0].strip()
        items.append(item)
    return items
This function is untested as I am not a Python guy, so you might have to fiddle around a bit.
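To sanity-check the two regexes outside Scrapy, here is a quick standalone test against the sample string from the question:

import re

s = "BlackBerry P9981 Porsche Design"
# '^([\w\-]+)' stops at the first non-word character, i.e. the first space
print re.search(r'^([\w\-]+)', s).group(1)  # BlackBerry
# '\s+(.*)$' captures everything after the first run of whitespace
print re.search(r'\s+(.*)$', s).group(1)    # P9981 Porsche Design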