How to click on an element through Selenium Python - python-2.7

I'm trying to fetch data for a Facebook account using a Selenium-driven browser in Python, but I can't figure out which element to target in order to click the Export button.
See attached screenshot.
I tried, but it seems to give me an error for the class.
def login_facebook(self, username, password):
    chrome_options = webdriver.ChromeOptions()
    preference = {"download.default_directory": self.section_value[24]}
    chrome_options.add_experimental_option("prefs", preference)
    self.driver = webdriver.Chrome(self.section_value[20], chrome_options=chrome_options)
    self.driver.get(self.section_value[25])
    username_field = self.driver.find_element_by_id("email")
    password_field = self.driver.find_element_by_id("pass")
    username_field.send_keys(username)
    self.driver.implicitly_wait(10)
    password_field.send_keys(password)
    self.driver.implicitly_wait(10)
    self.driver.find_element_by_id("loginbutton").click()
    self.driver.implicitly_wait(10)
    self.driver.get("https://business.facebook.com/select/?next=https%3A%2F%2Fbusiness.facebook.com%2F")
    self.driver.get("https://business.facebook.com/home/accounts?business_id=698597566882728")
    self.driver.get("https://business.facebook.com/adsmanager/reporting/view?act="
                    "717590098609803&business_id=698597566882728&selected_report_id=23843123660810666")
    # self.driver.get("https://business.facebook.com/adsmanager/manage/campaigns?act=717590098609803&business_id"
    #                 "=698597566882728&tool=MANAGE_ADS&date={}-{}_{}%2Clast_month".format(self.last_month,
    #                                                                                      self.first_day_month,
    #                                                                                      self.last_day_month))
    self.driver.find_element_by_id("export_button").click()
    self.driver.implicitly_wait(10)
    self.driver.find_element_by_class_name("_43rl").click()
    self.driver.implicitly_wait(10)
Can you please let me know how I can click on the Export button?

Well, I was able to resolve it by using XPath.
Here is the solution:
self.driver.find_element_by_xpath("//*[contains(@class, '_271k _271m _1qjd layerConfirm')]").click()

The element with text as Export is a dynamically generated element so to locate the element you have to induce WebDriverWait for the element to be clickable and you can use either of the locator strategies:
Using CSS_SELECTOR:
WebDriverWait(self.driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.layerConfirm>div[data-hover='tooltip'][data-tooltip-display='overflow']"))).click()
Using XPATH:
WebDriverWait(self.driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@class, 'layerConfirm')]/div[@data-hover='tooltip' and text()='Export']"))).click()
Note : You have to add the following imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

Running automation scripts against applications like Facebook or YouTube is quite hard, because they are huge corporations and their web applications are built by some of the world's best developers, but it is not impossible. Elements are sometimes generated dynamically, and sometimes hidden or inactive, so you can't simply go and click them.
One solution is to perform the click action via a relative or absolute XPath; there is no id specified as "export_button" in the resource file, so I think this might help you.
You can also find the element by class name or CSS selector. As I see in the screenshot, the class name present is "_271k _271m _1qjd layerConfirm", and you can perform the click action on that, as sketched below.
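For example, here is a hedged sketch of that click, meant to replace the export_button/_43rl lines inside login_facebook. The class string comes from the screenshot and may change whenever Facebook redeploys its UI, and compound class names cannot be passed to find_element_by_class_name, so a CSS selector is used instead:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the confirm/export control is clickable, then click it.
# "._271k._271m._1qjd.layerConfirm" mirrors the class list visible in the
# screenshot and is an assumption -- inspect the live page to confirm it.
export_confirm = WebDriverWait(self.driver, 20).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "._271k._271m._1qjd.layerConfirm"))
)
export_confirm.click()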

Related

Flask - Generated PDF can be viewed but cannot be downloaded

I recently started learning flask and created a simple webapp which randomly generates kids' math work sheets in PDF based on user input.
The PDF opens automatically in a browser and can be viewed. But when I try downloading it, both on a PC and in Chrome on iOS, I get error messages (Chrome PC: Failed - Network error / Chrome iOS: the file could not be downloaded at this time).
You can try it out here: kidsmathsheets.com
I suspect it has something to do with the way I'm generating and returning the PDF file. FYI I'm using ReportLab to generate the PDF. My code below (hosted on pythonanywhere):
from reportlab.lib.pagesizes import A4, letter
from reportlab.pdfgen import canvas
from reportlab.platypus import Table
from flask import Flask, render_template, request, Response
import io
from werkzeug import FileWrapper

    # Other code to take in input and generate data
    filename = io.BytesIO()
    if letter_size:
        c = canvas.Canvas(filename, pagesize=letter)
    else:
        c = canvas.Canvas(filename, pagesize=A4)
    pdf_all(c, p_set, answer=answers, letter=letter_size)
    c.save()
    filename.seek(0)
    wrapped_file = FileWrapper(filename)
    return Response(wrapped_file, mimetype="application/pdf", direct_passthrough=True)
else:
    return render_template('index.html')
Any idea what's causing the issue? Help is much appreciated!
Please check whether you are using an AJAX POST request to invoke the endpoint that generates your data and displays the PDF. If so, that quite probably causes the behaviour you observe. You might want to try invoking the endpoint with a GET request to /my-endpoint/some-hashed-non-reusable-id-of-my-document, where some-hashed-non-reusable-id-of-my-document tells the endpoint which document to serve without allowing users to play around with guesstimates about what other documents you might have. You might try it first like:
@app.route('/display-document/<document_id>')
def display_document(document_id):
    document = get_my_document_from_wherever_it_is(document_id)
    binary = get_binary_data_from_document(document)
    # .........
    # Prepare response here
    # .........
    return send_file(binary, mimetype="application/pdf")
Kind note: a right click and 'print to pdf' will work but this is not the solution we want
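If the goal is simply to force a download instead of inline viewing, a hedged sketch along these lines may also help (send_file on the in-memory buffer; download_name requires Flask 2.x, older versions use attachment_filename instead, and the route and file names here are purely illustrative):
import io
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/download-sheet")
def download_sheet():
    # filename would be the io.BytesIO buffer built with ReportLab in the question;
    # an empty buffer stands in for it here
    filename = io.BytesIO()
    filename.seek(0)
    return send_file(
        filename,
        mimetype="application/pdf",
        as_attachment=True,             # forces the browser's download dialog
        download_name="worksheet.pdf"   # Flask >= 2.0; use attachment_filename on older versions
    )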

Python Requests: Website asks to 'Please turn JavaScript on and reload the page' [duplicate]

I'm trying to develop a simple web scraper. I want to extract text without the HTML code. It works on plain HTML, but not in some pages where JavaScript code adds text.
For example, if some JavaScript code adds some text, I can't see it, because when I call:
response = urllib2.urlopen(request)
I get the original text without the added one (because JavaScript is executed in the client).
So, I'm looking for some ideas to solve this problem.
EDIT Sept 2021: phantomjs isn't maintained any more, either
EDIT 30/Dec/2017: This answer appears in top results of Google searches, so I decided to update it. The old answer is still at the end.
dryscape isn't maintained anymore and the library dryscape developers recommend is Python 2 only. I have found using Selenium's python library with Phantom JS as a web driver fast enough and easy to get the work done.
Once you have installed Phantom JS, make sure the phantomjs binary is available in the current path:
phantomjs --version
# result:
2.1.1
Example:
To give an example, I created a sample page with the following HTML code (link):
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Javascript scraping test</title>
</head>
<body>
<p id='intro-text'>No javascript support</p>
<script>
document.getElementById('intro-text').innerHTML = 'Yay! Supports javascript';
</script>
</body>
</html>
Without javascript it says "No javascript support", and with javascript it says "Yay! Supports javascript".
Scraping without JS support:
import requests
from bs4 import BeautifulSoup
response = requests.get(my_url)
soup = BeautifulSoup(response.text)
soup.find(id="intro-text")
# Result:
<p id="intro-text">No javascript support</p>
Scraping with JS support:
from selenium import webdriver
driver = webdriver.PhantomJS()
driver.get(my_url)
p_element = driver.find_element_by_id(id_='intro-text')
print(p_element.text)
# result:
'Yay! Supports javascript'
You can also use Python library dryscrape to scrape javascript driven websites.
Scraping with JS support:
import dryscrape
from bs4 import BeautifulSoup
session = dryscrape.Session()
session.visit(my_url)
response = session.body()
soup = BeautifulSoup(response)
soup.find(id="intro-text")
# Result:
<p id="intro-text">Yay! Supports javascript</p>
We are not getting the correct results because any javascript-generated content needs to be rendered in the DOM. When we fetch an HTML page, we fetch the initial DOM, unmodified by javascript.
Therefore we need to render the javascript content before we crawl the page.
As selenium is already mentioned many times in this thread (and how slow it gets sometimes was mentioned also), I will list two other possible solutions.
Solution 1: This is a very nice tutorial on how to use Scrapy to crawl javascript generated content and we are going to follow just that.
What we will need:
Docker installed on our machine. This is a plus over the other solutions so far, as it utilizes an OS-independent platform.
Install Splash following the instructions listed for your corresponding OS. Quoting from the Splash documentation:
Splash is a javascript rendering service. It’s a lightweight web browser with an HTTP API, implemented in Python 3 using Twisted and QT5.
Essentially we are going to use Splash to render Javascript generated content.
Run the splash server: sudo docker run -p 8050:8050 scrapinghub/splash.
Install the scrapy-splash plugin: pip install scrapy-splash
Assuming that we already have a Scrapy project created (if not, let's make one), we will follow the guide and update the settings.py:
Then go to your scrapy project’s settings.py and set these middlewares:
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
The URL of the Splash server (if you're using Windows or OSX, this should be the URL of the docker machine: How to get a Docker container's IP address from the host?):
SPLASH_URL = 'http://localhost:8050'
And finally you need to set these values too:
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
Finally, we can use a SplashRequest:
In a normal spider you have Request objects which you can use to open URLs. If the page you want to open contains JS-generated data, you have to use SplashRequest (or SplashFormRequest) to render the page. Here's a simple example:
import scrapy
from scrapy_splash import SplashRequest  # Splash-aware request class

from ..items import QuoteItem  # assuming QuoteItem lives in the project's items.py


class MySpider(scrapy.Spider):
    name = "jsscraper"
    start_urls = ["http://quotes.toscrape.com/js/"]

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(
                url=url, callback=self.parse, endpoint='render.html'
            )

    def parse(self, response):
        for q in response.css("div.quote"):
            quote = QuoteItem()
            quote["author"] = q.css(".author::text").extract_first()
            quote["quote"] = q.css(".text::text").extract_first()
            yield quote
SplashRequest renders the URL as html and returns the response, which you can use in the callback (parse) method.
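The spider above also assumes a QuoteItem defined in the project's items.py; a minimal sketch of such an item (the field names simply match the keys assigned in parse) could be:
import scrapy

class QuoteItem(scrapy.Item):
    # fields matching the keys assigned in MySpider.parse
    author = scrapy.Field()
    quote = scrapy.Field()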
Solution 2: Let's call this experimental at the moment (May 2018)...
This solution is for Python's version 3.6 only (at the moment).
Do you know the requests module (well who doesn't)?
Now it has a web crawling little sibling: requests-HTML:
This library intends to make parsing HTML (e.g. scraping the web) as simple and intuitive as possible.
Install requests-html: pipenv install requests-html
Make a request to the page's url:
from requests_html import HTMLSession
session = HTMLSession()
r = session.get(a_page_url)
Render the response to get the Javascript generated bits:
r.html.render()
Finally, the module seems to offer scraping capabilities.
Alternatively, we can try the well-documented way of using BeautifulSoup with the r.html object we just rendered.
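As a hedged sketch of that combination (r.html.html holds the rendered markup; the URL and selector below are just illustrative):
from bs4 import BeautifulSoup
from requests_html import HTMLSession

a_page_url = "http://quotes.toscrape.com/js/"  # any JavaScript-driven page

session = HTMLSession()
r = session.get(a_page_url)
r.html.render()  # execute the page's JavaScript first

# parse the *rendered* markup with BeautifulSoup
soup = BeautifulSoup(r.html.html, "lxml")
print(soup.select_one("div.quote span.text"))  # selector is just illustrative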
Maybe selenium can do it.
from selenium import webdriver
import time
driver = webdriver.Firefox()
driver.get(url)
time.sleep(5)
htmlSource = driver.page_source
If you have ever used the Requests module for python before, I recently found out that the developer created a new module called Requests-HTML which now also has the ability to render JavaScript.
You can also visit https://html.python-requests.org/ to learn more about this module, or, if you're only interested in rendering JavaScript, you can visit https://html.python-requests.org/?#javascript-support to learn directly how to use the module to render JavaScript using Python.
Essentially, once you have correctly installed the Requests-HTML module, the following example, which is shown at the above link, shows how you can use this module to scrape a website and render the JavaScript contained within it:
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('http://python-requests.org/')
r.html.render()
r.html.search('Python 2 will retire in only {months} months!')['months']
'<time>25</time>' #This is the result.
I recently learnt about this from a YouTube video. Click Here! to watch the YouTube video, which demonstrates how the module works.
It sounds like the data you're really looking for can be accessed via secondary URL called by some javascript on the primary page.
While you could try running javascript on the server to handle this, a simpler approach might be to load up the page using Firefox and use a tool like Charles or Firebug to identify exactly what that secondary URL is. Then you can just query that URL directly for the data you are interested in.
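As a hedged illustration (the URL below is a stand-in for whatever secondary endpoint you discover in the network traffic):
import requests

# stand-in for the secondary URL found via Charles/Firebug/devtools
secondary_url = "https://example.com/api/data?id=123"

res = requests.get(secondary_url)
res.raise_for_status()
print(res.json())  # or res.text, depending on what the endpoint returns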
This seems to be a good solution also, taken from a great blog post
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
from lxml import html

# Take this class for granted. Just use the result of rendering.
class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()

url = 'http://pycoders.com/archive/'
r = Render(url)
result = r.frame.toHtml()
# This step is important. Converting QString to Ascii for lxml to process
# The following returns an lxml element tree
archive_links = html.fromstring(str(result.toAscii()))
print archive_links
# The following returns an array containing the URLs
raw_links = archive_links.xpath('//div[@class="campaign"]/a/@href')
print raw_links
Selenium is the best for scraping JS and Ajax content.
Check this article for extracting data from the web using Python
$ pip install selenium
Then download Chrome webdriver.
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("https://www.python.org/")
nav = browser.find_element_by_id("mainnav")
print(nav.text)
Easy, right?
You can also execute javascript using webdriver.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get(url)
driver.execute_script('document.title')
or store the value in a variable
result = driver.execute_script('var text = document.title ; return text')
I personally prefer using scrapy and selenium and dockerizing both in separate containers. This way you can install both with minimal hassle and crawl modern websites that almost all contain javascript in one form or another. Here's an example:
Use scrapy startproject to create your scraper and write your spider; the skeleton can be as simple as this:
import scrapy


class MySpider(scrapy.Spider):
    name = 'my_spider'
    start_urls = ['https://somewhere.com']

    def start_requests(self):
        yield scrapy.Request(url=self.start_urls[0])

    def parse(self, response):
        # do stuff with results, scrape items etc.
        # now we're just checking everything worked
        print(response.body)
The real magic happens in the middlewares.py. Overwrite two methods in the downloader middleware, __init__ and process_request, in the following way:
# import some additional modules that we need
import os
from copy import deepcopy
from time import sleep

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver


class SampleProjectDownloaderMiddleware(object):

    def __init__(self):
        SELENIUM_LOCATION = os.environ.get('SELENIUM_LOCATION', 'NOT_HERE')
        SELENIUM_URL = f'http://{SELENIUM_LOCATION}:4444/wd/hub'
        chrome_options = webdriver.ChromeOptions()
        # chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
        self.driver = webdriver.Remote(command_executor=SELENIUM_URL,
                                       desired_capabilities=chrome_options.to_capabilities())

    def process_request(self, request, spider):
        self.driver.get(request.url)

        # sleep a bit so the page has time to load
        # or monitor items on page to continue as soon as page ready
        sleep(4)

        # if you need to manipulate the page content like clicking and scrolling, you do it here
        # self.driver.find_element_by_css_selector('.my-class').click()

        # you only need the now properly and completely rendered html from your page to get results
        body = deepcopy(self.driver.page_source)

        # copy the current url in case of redirects
        url = deepcopy(self.driver.current_url)

        return HtmlResponse(url, body=body, encoding='utf-8', request=request)
Don't forget to enable this middleware by uncommenting the next lines in the settings.py file:
DOWNLOADER_MIDDLEWARES = {
    'sample_project.middlewares.SampleProjectDownloaderMiddleware': 543,
}
Next for dockerization. Create your Dockerfile from a lightweight image (I'm using python Alpine here), copy your project directory to it, install requirements:
# Use an official Python runtime as a parent image
FROM python:3.6-alpine
# install some packages necessary to scrapy and then curl because it's handy for debugging
RUN apk --update add linux-headers libffi-dev openssl-dev build-base libxslt-dev libxml2-dev curl python-dev
WORKDIR /my_scraper
ADD requirements.txt /my_scraper/
RUN pip install -r requirements.txt
ADD . /scrapers
And finally bring it all together in docker-compose.yaml:
version: '2'
services:
selenium:
image: selenium/standalone-chrome
ports:
- "4444:4444"
shm_size: 1G
my_scraper:
build: .
depends_on:
- "selenium"
environment:
- SELENIUM_LOCATION=samplecrawler_selenium_1
volumes:
- .:/my_scraper
# use this command to keep the container running
command: tail -f /dev/null
Run docker-compose up -d. If you're doing this for the first time, it will take a while to fetch the latest selenium/standalone-chrome image and to build your scraper image as well.
Once it's done, you can check that your containers are running with docker ps and also check that the name of the selenium container matches that of the environment variable that we passed to our scraper container (here, it was SELENIUM_LOCATION=samplecrawler_selenium_1).
Enter your scraper container with docker exec -ti YOUR_CONTAINER_NAME sh; the command for me was docker exec -ti samplecrawler_my_scraper_1 sh. Then cd into the right directory and run your scraper with scrapy crawl my_spider.
The entire thing is on my github page and you can get it from here
A mix of BeautifulSoup and Selenium works very well for me.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup as bs

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    # waits up to 10 seconds until the element is located. Other wait conditions
    # such as visibility_of_element_located or text_to_be_present_in_element can be used
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement")))
    html = driver.page_source
    soup = bs(html, "lxml")
    dynamic_text = soup.find_all("p", {"class": "class_name"})  # or other attributes, optional
except TimeoutException:
    print("Couldn't locate element")
P.S. You can find more wait conditions here
Using PyQt5
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEnginePage
import sys
import bs4 as bs
import urllib.request


class Client(QWebEnginePage):
    def __init__(self, url):
        global app
        self.app = QApplication(sys.argv)
        QWebEnginePage.__init__(self)
        self.html = ""
        self.loadFinished.connect(self.on_load_finished)
        self.load(QUrl(url))
        self.app.exec_()

    def on_load_finished(self):
        self.html = self.toHtml(self.Callable)
        print("Load Finished")

    def Callable(self, data):
        self.html = data
        self.app.quit()

# url = ""
# client_response = Client(url)
# print(client_response.html)
You'll want to use urllib, requests, beautifulSoup and the selenium web driver in your script for different parts of the page (to name a few).
Sometimes you'll get what you need with just one of these modules.
Sometimes you'll need two, three, or all of these modules.
Sometimes you'll need to switch off the js on your browser.
Sometimes you'll need header info in your script.
No two websites can be scraped the same way, and no website can be scraped the same way forever without modifying your crawler, usually after a few months. But they can all be scraped! Where there's a will, there's a way for sure.
If you need scraped data continuously into the future, just scrape everything you need and store it in .dat files with pickle (a small sketch follows below).
Just keep researching how to combine these modules, and paste your errors into Google as you go.
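A hedged sketch of that pickle-based storage (the file name and data structure are purely illustrative):
import pickle

# purely illustrative structure for scraped results
scraped_items = [
    {"title": "Example product", "price": "9.99"},
]

# store everything for later use
with open("scraped_data.dat", "wb") as f:
    pickle.dump(scraped_items, f)

# ...and load it back later
with open("scraped_data.dat", "rb") as f:
    restored = pickle.load(f)
print(restored)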
Pyppeteer
You might consider Pyppeteer, a Python port of the Chrome/Chromium driver front-end Puppeteer.
Here's a simple example to show how you can use Pyppeteer to access data that was injected into the page dynamically:
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch({"headless": True})
    [page] = await browser.pages()

    # normally, you go to a live site...
    # await page.goto("http://www.example.com")
    # but for this example, just set the HTML directly:
    await page.setContent("""
    <body>
    <script>
    // inject content dynamically with JS, not part of the static HTML!
    document.body.innerHTML = `<p>hello world</p>`;
    </script>
    </body>
    """)
    print(await page.content())  # shows that the `<p>` was inserted

    # evaluate a JS expression in browser context and scrape the data
    expr = "document.querySelector('p').textContent"
    print(await page.evaluate(expr, force_expr=True))  # => hello world

    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
See Pyppeteer's reference docs.
Try accessing the API directly
A common scenario you'll see in scraping is that the data is being requested asynchronously from an API endpoint by the webpage. A minimal example of this would be the following site:
<body>
<script>
  fetch("https://jsonplaceholder.typicode.com/posts/1")
    .then(res => {
      if (!res.ok) throw Error(res.status);
      return res.json();
    })
    .then(data => {
      // inject data dynamically via JS after page load
      document.body.innerText = data.title;
    })
    .catch(err => console.error(err));
</script>
</body>
In many cases, the API will be protected by CORS or an access token or prohibitively rate limited, but in other cases it's publicly-accessible and you can bypass the website entirely. For CORS issues, you might try cors-anywhere.
The general procedure is to use your browser's developer tools' network tab to search the requests made by the page for keywords/substrings of the data you want to scrape. Often, you'll see an unprotected API request endpoint with a JSON payload that you can access directly with urllib or requests modules. That's the case with the above runnable snippet which you can use to practice. After clicking "run snippet", here's how I found the endpoint in my network tab:
This example is contrived; the endpoint URL will likely be non-obvious from looking at the static markup because it could be dynamically assembled, minified and buried under dozens of other requests and endpoints. The network request will also show any relevant request payload details, like an access token you may need.
After obtaining the endpoint URL and relevant details, build a request in Python using a standard HTTP library and request the data:
>>> import requests
>>> res = requests.get("https://jsonplaceholder.typicode.com/posts/1")
>>> data = res.json()
>>> data["title"]
'sunt aut facere repellat provident occaecati excepturi optio reprehenderit'
When you can get away with it, this tends to be much easier, faster and more reliable than scraping the page with Selenium, Pyppeteer, Scrapy or whatever the popular scraping libraries are at the time you're reading this post.
If you're unlucky and the data hasn't arrived via an API request that returns the data in a nice format, it could be part of the original browser's payload in a <script> tag, either as a JSON string or (more likely) a JS object. For example:
<body>
<script>
var someHardcodedData = {
userId: 1,
id: 1,
title: 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit',
body: 'quia et suscipit\nsuscipit recusandae con sequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto'
};
document.body.textContent = someHardcodedData.title;
</script>
</body>
There's no one-size-fits-all way to obtain this data. The basic technique is to use BeautifulSoup to access the <script> tag text, then apply a regex or a parser to extract the object structure, JSON string, or whatever format the data might be in. Here's a proof-of-concept on the sample structure shown above:
import json
import re
from bs4 import BeautifulSoup
# pretend we've already used requests to retrieve the data,
# so we hardcode it for the purposes of this example
text = """
<body>
<script>
var someHardcodedData = {
userId: 1,
id: 1,
title: 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit',
body: 'quia et suscipit\nsuscipit recusandae con sequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto'
};
document.body.textContent = someHardcodedData.title;
</script>
</body>
"""
soup = BeautifulSoup(text, "lxml")
script_text = str(soup.select_one("script"))
pattern = r"title: '(.*?)'"
print(re.search(pattern, script_text, re.S).group(1))
Check out these resources for parsing JS objects that aren't quite valid JSON (a small sketch follows the list):
How to convert raw javascript object to python dictionary?
How to Fix JSON Key Values without double-quotes?
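For example, a hedged sketch using the third-party json5 package (an assumption on my part; demjson offers something similar), which tolerates unquoted keys and single quotes:
import json5  # pip install json5

js_object_text = """{
    userId: 1,
    id: 1,
    title: 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit'
}"""

data = json5.loads(js_object_text)
print(data["title"])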
Here are some additional case studies/proofs-of-concept where scraping was bypassed using an API:
How can I scrape yelp reviews and star ratings into CSV using Python beautifulsoup
Beautiful Soup returns None on existing element
Extract data from BeautifulSoup Python
Scraping Bandcamp fan collections via POST (uses a hybrid approach where an initial request was made to the website to extract a token from the markup using BeautifulSoup which was then used in a second request to a JSON endpoint)
If all else fails, try one of the many dynamic scraping libraries listed in this thread.
Playwright-Python
Yet another option is playwright-python, a port of Microsoft's Playwright (itself a Puppeteer-influenced browser automation library) to Python.
Here's the minimal example of selecting an element and grabbing its text:
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://whatsmyuseragent.org/")
    ua = page.query_selector(".user-agent")
    print(ua.text_content())
    browser.close()
As mentioned, Selenium is a good choice for rendering the results of the JavaScript:
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
browser = Firefox(executable_path="/usr/local/bin/geckodriver", options=options)
url = "https://www.example.com"
browser.get(url)
And gazpacho is a really easy library to parse over the rendered html:
from gazpacho import Soup
soup = Soup(browser.page_source)
soup.find("a").attrs['href']
I recently used requests_html library to solve this problem.
Their expanded documentation at readthedocs.io is pretty good (skip the annotated version at pypi.org). If your use case is basic, you are likely to have some success.
from requests_html import HTMLSession
session = HTMLSession()
response = session.request(method="get", url="https://www.google.com/")
response.html.render()
If you are having trouble rendering the data you need with response.html.render(), you can pass some javascript to the render function to render the particular js object you need. This is copied from their docs, but it might be just what you need:
If script is specified, it will execute the provided JavaScript at
runtime. Example:
script = """
    () => {
        return {
            width: document.documentElement.clientWidth,
            height: document.documentElement.clientHeight,
            deviceScaleFactor: window.devicePixelRatio,
        }
    }
"""
Returns the return value of the executed script, if any is provided:
>>> response.html.render(script=script)
{'width': 800, 'height': 600, 'deviceScaleFactor': 1}
In my case, the data I wanted were the arrays that populated a javascript plot, but the data wasn't getting rendered as text anywhere in the html. Sometimes it's not at all clear what the object names are for the data you want if it is populated dynamically. If you can't track down the js objects directly from view-source or inspect, you can type "window" followed by ENTER in the debugger console in the browser (Chrome) to pull up a full list of objects rendered by the browser. If you make a few educated guesses about where the data is stored, you might have some luck finding it there. My graph data was under window.view.data in the console, so in the "script" variable passed to the .render() method quoted above, I used:
return {
    data: window.view.data
}
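Putting that together, a hedged sketch of the full call (window.view.data was specific to that particular site and will differ elsewhere, and the URL is a placeholder):
from requests_html import HTMLSession

session = HTMLSession()
r = session.get("https://example.com/page-with-js-plot")  # placeholder URL

# window.view.data happened to hold the plot data on that site;
# substitute whatever object you find in the browser console
script = """
    () => {
        return {
            data: window.view.data
        }
    }
"""

result = r.html.render(script=script)
print(result["data"])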
Easy and Quick Solution:
I was dealing with the same problem. I wanted to scrape some data that is built with JavaScript. If I scraped only the text from the site with BeautifulSoup, I ended up with tags in the text.
I wanted to render those tags and grab the information from them.
Also, I didn't want to use heavy frameworks like Scrapy or Selenium.
So, I found that the get method of the requests module takes URLs, and the response actually includes the script tags.
Example:
import requests
custom_User_agent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
url = "https://www.abc.xyz/your/url"
response = requests.get(url, headers={"User-Agent": custom_User_agent})
html_text = response.text
This will load the site and return the tags.
I hope this helps as a quick and easy solution for sites that are loaded with script tags.

Scraping data off flipkart using scrapy

I am trying to scrape some information from flipkart.com; for this purpose I am using Scrapy. The information I need is for every product on Flipkart.
I have used the following code for my spider
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.selector import HtmlXPathSelector
from tutorial.items import TutorialItem


class WebCrawler(CrawlSpider):
    name = "flipkart"
    allowed_domains = ['flipkart.com']
    start_urls = ['http://www.flipkart.com/store-directory']
    rules = [
        Rule(LinkExtractor(allow=['/(.*?)/p/(.*?)']), 'parse_flipkart', cb_kwargs=None, follow=True),
        Rule(LinkExtractor(allow=['/(.*?)/pr?(.*?)']), follow=True)
    ]

    @staticmethod
    def parse_flipkart(response):
        hxs = HtmlXPathSelector(response)
        item = FlipkartItem()
        item['featureKey'] = hxs.select('//td[@class="specsKey"]/text()').extract()
        yield item
My intent is to crawl through every product category page (specified by the second rule) and follow the product pages (first rule) within the category page to scrape data from the product pages.
One problem is that I cannot find a way to control the crawling and scraping.
Second, Flipkart uses AJAX on its category pages and displays more products when a user scrolls to the bottom.
I have read other answers and assessed that selenium might help solve the issue. But I cannot find a proper way to implement it into this structure.
Suggestions are welcome..:)
ADDITIONAL DETAILS
I had earlier used a similar approach
the second rule I used was
Rule(LinkExtractor(allow=['/(.?)/pr?(.?)']), 'parse_category', follow=True)

@staticmethod
def parse_category(response):
    hxs = HtmlXPathSelector(response)
    count = hxs.select('//td[@class="no_of_items"]/text()').extract()
    for page_num in range(1, count, 15):
        ajax_url = response.url + "&start=" + str(page_num) + "&ajax=true"
        return Request(ajax_url, callback="parse_category")
Now I was confused about what to use for the callback: "parse_category" or "parse_flipkart".
Thank you for your patience
Not sure what you mean when you say that you can't find a way to control the crawling and scraping. Creating a spider for this purpose is already taking it under control, isn't it? If you create proper rules and parse the responses properly, that is all you need. In case you are referring to the actual order in which the pages are scraped, you most likely don't need to do this. You can just parse all the items in whichever order, but gather their location in the category hierarchy by parsing the breadcrumb information above the item title. You can use something like this to get the breadcrumb in a list:
response.css(".clp-breadcrumb").xpath('./ul/li//text()').extract()
You don't actually need Selenium, and I believe it would be an overkill for this simple issue. Using your browser (I'm using Chrome currently), press F12 to open the developer tools. Go to one of the category pages, and open the Network tab in the developer window. If there is anything here, click the Clear button to clear things up a bit. Now scroll down until you see that additional items are being loaded, and you will see additional requests listed in the Network panel. Filter them by Documents (1) and click on the request in the left pane (2). You can see the URL for the request (3) and the query parameters that you need to send (4). Note the start parameter which will be the most important since you will have to call this request multiple times while increasing this value to get new items. You can check the response in the Preview pane (5), and you will see that the request from the server is exactly what you need, more items. The rule you use for the items should pick up those links too.
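As a hedged sketch of driving that AJAX endpoint from a spider (the start step of 15 and the &ajax=true parameter follow the pattern already used in the question; the URL, upper bound and selectors are illustrative and should be taken from your own network tab):
import scrapy


class CategoryAjaxSpider(scrapy.Spider):
    """Illustrative spider that pages through a category's AJAX endpoint."""
    name = "category_ajax"
    start_urls = ["http://www.flipkart.com/some-category/pr?sid=XYZ"]  # placeholder URL

    def parse(self, response):
        # keep requesting more items by increasing the start parameter,
        # mirroring what the page does when you scroll to the bottom
        for start in range(0, 300, 15):  # upper bound is illustrative
            yield scrapy.Request(
                url=response.url + "&start={}&ajax=true".format(start),
                callback=self.parse_items,
            )

    def parse_items(self, response):
        # parse the product links/items out of each AJAX response here
        for href in response.css("a::attr(href)").getall():
            self.logger.debug("found link: %s", href)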
For a more detailed overview of scraping with Firebug, you can check out the official documentation.
Since there is no need to use Selenium for your purpose, I shall not cover this point more than adding a few links that show how to use Selenium with Scrapy, if the need ever occurs:
https://gist.github.com/cheekybastard/4944914
https://gist.github.com/irfani/1045108
http://snipplr.com/view/66998/

Django Forbid HttpResponse

I am working on a tiny movie manager by using the out-of-the-box admin module in Django.
I added a "Play" link on the movie admin page to play the movie, passing the id of the movie. So the backend is something like this:
import subprocess

def play(request, movie_id):
    try:
        m = Movie.objects.get(pk=movie_id)
        subprocess.Popen([PLAYER_PATH, m.path + '/' + m.name])
        return HttpResponseRedirect("/admin/core/movie")
    except Movie.DoesNotExist:
        return HttpResponse(u"The movie does not exist!")
As the code above shows, every time I click the "Play" link the page is redirected to /admin/core/movie, which is the movie admin page. I don't want the backend to do this, because I may be using the "Search" functionality provided by the admin module, so the URL before clicking "Play" may be something like "/admin/core/movie/?q=gun"; if that redirect takes effect, the query criteria are lost.
So my thought is: can I suppress the HttpResponse so that I stay on the current page?
Any suggestions on this issue ?
Thanks in advance.
I used a custom admin action to implement this function.
So, in the end, I felt that actions are something like procedures, which have no return values, while requests are something like methods (views) with return values. A sketch of such an action is below.
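For reference, a hedged sketch of what such an admin action could look like (Movie, PLAYER_PATH and the field names are taken from the question; the action name and player path are illustrative):
import subprocess

from django.contrib import admin

from .models import Movie

PLAYER_PATH = "/usr/bin/vlc"  # illustrative player path


@admin.register(Movie)
class MovieAdmin(admin.ModelAdmin):
    actions = ["play_movie"]

    def play_movie(self, request, queryset):
        # launch the player for each selected movie; the admin stays on the
        # current change list, search query and filters included
        for m in queryset:
            subprocess.Popen([PLAYER_PATH, m.path + '/' + m.name])
        self.message_user(request, "Playing selected movie(s).")

    play_movie.short_description = "Play selected movies"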
Thanks !

Dynamically generating Hudson custom workspace path

I'm trying to get a Hudson job built in a custom workspace path that is automatically generated using yyyyMMdd-HHmm. I can get the $BUILD_ID variable expanded as mentioned in bug 3997, and that seems to work fine. However, the workspace path is incorrect, as it is of the format yyyy-MM-dd_HH-mm-ss. I've tried using the ZenTimestamp plugin v2.0.1, which changes the $BUILD_ID, but this only seems to take effect after the workspace is created.
Is there a method of defining a custom workspace in the manner that I want it?
You can use a groovy script to achieve that.
import hudson.model.*;
import hudson.util.*;
import java.util.*;
import java.text.*;
import java.io.*;

// Part 1 : Recover build parameters
AbstractBuild currentBuild = (AbstractBuild) Thread.currentThread().executable;
def envVars = currentBuild.properties.get("envVars");
def branchName = envVars["BRANCH_NAME"];

// Part 2 : Define new workspace path
def newWorkspace = "C:\\Build\\" + branchName;

// Part 3 : Change current build workspace
def newWorkspaceFilePath = new FilePath(new File(newWorkspace));
currentBuild.setWorkspace(newWorkspaceFilePath);