selenium webdriver tab not switching - python-2.7

Selenium WebDriver tab switching is not working. The code is:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get('https://www.google.com')
driver.implicitly_wait(2)
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL+'t')
driver.switch_to.window(driver.window_handles[-1])
driver.get('http://www.rediff.com')
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL+'t')
driver.switch_to.window(driver.window_handles[-1])
driver.get('http://www.stackoverflow.com')
driver.switch_to.window(driver.window_handles[0])
In the last line, if I change the index from [0] to [1] or [2], nothing changes.

Use Keys to have the browser go back to the tab you want. If you want to jump to tab 1, use: driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + '1')
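Note that keyboard shortcuts like CTRL+T sent to the page body are often ignored under ChromeDriver, so no new tab is actually opened and every driver.get lands in the same tab, which would also explain why changing the index makes no difference. A minimal sketch of an alternative that opens tabs with JavaScript and switches by handle (same URLs as in the question; treat it as a sketch, not a guaranteed fix):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.google.com')

# Open a second tab with JavaScript instead of sending CTRL+T to the page.
driver.execute_script("window.open('about:blank', '_blank');")
driver.switch_to.window(driver.window_handles[-1])   # focus the new tab
driver.get('http://www.rediff.com')

# Open a third tab the same way.
driver.execute_script("window.open('about:blank', '_blank');")
driver.switch_to.window(driver.window_handles[-1])
driver.get('http://www.stackoverflow.com')

# Jump back to the first tab; window_handles[0] is usually the original Google tab.
driver.switch_to.window(driver.window_handles[0])
print(driver.title)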

Python 2.7 Selenium unable to extract data

I am trying to extract data, but it returns the error:
NoSuchElementException: Message: u'Unable to locate element: {"method":"xpath","selector":"//*[@id=\'searchpopbox\']"}' ; Stacktrace:
at FirefoxDriver.findElementInternal_ (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/driver_component.js:8444)
at FirefoxDriver.findElement (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/driver_component.js:8453)
at DelayedCommand.executeInternal_/h (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10456)
at DelayedCommand.executeInternal_ (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10461)
at DelayedCommand.execute/< (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10401)
My code is below; I am trying to get the list from the link.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
profile = webdriver.FirefoxProfile()
profile.set_preference('browser.download.folderList', 2)
profile.set_preference('browser.download.manager.showWhenStarting', False)
browser = webdriver.Firefox(profile)
url = 'https://www.bursamarketplace.com/index.php?tpl=th001_search_ajax'
browser.get(url)
time.sleep(15)
a = browser.find_element_by_xpath("//*[@id='searchpopbox']")
print a
I am seeking your help to get the right xpath for the url.
This gets all the listings for that table.
from webdriver_manager.chrome import ChromeDriverManager
from selenium import webdriver
import time
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://www.bursamarketplace.com/index.php?tpl=th001_search_ajax")
time.sleep(15)
a = driver.find_element_by_xpath("//*[@id='searchpopbox']")
print(a.text)
Or, without webdriver_manager, the same thing works for Chrome or Firefox by pointing at the driver binary directly:
webdriver.Chrome(executable_path='absolutepathofchromedriver.exe')
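Since the table on this page is loaded via AJAX, an explicit wait is usually more robust than a fixed time.sleep(15). A minimal sketch under that assumption, waiting for the same id used in the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://www.bursamarketplace.com/index.php?tpl=th001_search_ajax")

# Wait up to 30 seconds for the element to show up, then read it.
box = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, "searchpopbox"))
)
print(box.text)
driver.quit()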

How can I combine my bs4 web scraping with my Selenium automation in Python into one single process that just asks for a zip code?

I am using Selenium to go to a website, click the search box, and type a zip code that I enter beforehand. For that zip code, I want the link that the web page returns to feed my web scraper built with Beautiful Soup, so that once the link comes up I can scrape the required data and produce my CSV.
What I want:
I am having trouble getting that link into the Beautiful Soup URL. I basically want to automate it so that I just have to enter a zip code and it gives me my CSV.
What I am able to do:
I am able to enter the zip code and search using Selenium, and then manually add the resulting URL to my scraper to get the CSV.
Code I am using for Selenium:
from selenium import webdriver
import time
import requests
from bs4 import BeautifulSoup
import pandas as pd

driver = webdriver.Chrome('/Users/akashgupta/Desktop/Courses and Learning/Automating Python and scraping/chromedriver')
driver.get('https://www.weather.gov/')
messageField = driver.find_element_by_xpath('//*[@id="inputstring"]')
messageField.click()
messageField.send_keys('75252')
time.sleep(3)
showMessageButton = driver.find_element_by_xpath('//*[@id="btnSearch"]')
showMessageButton.click()

# web scraping part:
url = "https://forecast.weather.gov/MapClick.php?lat=32.99802500000004&lon=-96.79775499999994#.Xo5LnFNKgWo"
res = requests.get(url)
soup = BeautifulSoup(res.content, 'html.parser')
tag = soup.find_all('div', id='seven-day-forecast-body')
weekly = soup.find_all(class_='tombstone-container')
main = soup.find_all(class_='period-name')
description = soup.find_all(class_='short-desc')
temp = soup.find_all(class_='temp')
Period_Name = []
Desc = []
Temp = []
for a in range(0, len(main)):
    Period_Name.append(main[a].get_text())
    Desc.append(description[a].get_text())
    Temp.append(temp[a].get_text())
df = pd.DataFrame(list(zip(Period_Name, Desc, Temp)), columns=['Period_Name', 'Short_Desc', 'Temperature'])
from selenium import webdriver
import time
import requests
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome('chromedriver.exe')
driver.get('https://www.weather.gov/')
messageField = driver.find_element_by_xpath('//*[@id="inputstring"]')
messageField.click()
messageField.send_keys('75252')
time.sleep(3)
showMessageButton = driver.find_element_by_xpath('//*[@id="btnSearch"]')
showMessageButton.click()
WebDriverWait(driver, 10).until(EC.url_contains("https://forecast.weather.gov/MapClick.php"))  # wait here until the URL matches the forecast page pattern
currentURL = driver.current_url
print(currentURL)
time.sleep(3)
driver.quit()

# web scraping part:
res = requests.get(currentURL)
....
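To turn this into one process that only asks for a zip code, the Selenium part and the Beautiful Soup part from the question can be wrapped in two small functions. A minimal sketch, where the helper names forecast_url_for_zip and scrape_forecast are made up for illustration and the page structure (period-name, short-desc, temp classes) is assumed to be the same one the question already scrapes:
import time
import requests
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def forecast_url_for_zip(zip_code):
    # Drive the weather.gov search with Selenium and return the forecast URL.
    driver = webdriver.Chrome('chromedriver.exe')
    try:
        driver.get('https://www.weather.gov/')
        field = driver.find_element_by_xpath('//*[@id="inputstring"]')
        field.click()
        field.send_keys(zip_code)
        time.sleep(3)
        driver.find_element_by_xpath('//*[@id="btnSearch"]').click()
        WebDriverWait(driver, 10).until(EC.url_contains("MapClick.php"))
        return driver.current_url
    finally:
        driver.quit()

def scrape_forecast(url):
    # Reuse the question's BeautifulSoup logic on whatever URL Selenium found.
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    names = [e.get_text() for e in soup.find_all(class_='period-name')]
    descs = [e.get_text() for e in soup.find_all(class_='short-desc')]
    temps = [e.get_text() for e in soup.find_all(class_='temp')]
    return pd.DataFrame(list(zip(names, descs, temps)),
                        columns=['Period_Name', 'Short_Desc', 'Temperature'])

zip_code = input("Enter a zip code: ")  # use raw_input on Python 2
df = scrape_forecast(forecast_url_for_zip(zip_code))
df.to_csv('forecast_' + zip_code + '.csv', index=False)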

chromedriver can't click when running a script, but can in shell

I have a general problem with clicking in Chromedriver when the code is run by Python. This code is used in the script:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
driver.get("https://www.marktplaats.nl/")
cook_button = WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, "//form[@method='post']/input[@type='submit']"))).click()
It just times out giving "NoSuchElementException". But if I put those lines manually in the Shell, it clicks like normal. For what it's worth, I'm using the latest 2.40 Chromedriver and Chrome v67. Running it headless doesn't make any difference.
EDIT
The program actually breaks on the third command, when it tries to find an element that doesn't exist because the click wasn't completed:
driver.get(master_link) # get the first page
wait_by_class("search-results-table")
page_2_el = driver.find_element_by_xpath("//span[@id='pagination-pages']/a[contains(@data-ga-track-event, 'gination')]")
So the page_2_el command raises this exception, but only because the earlier click, which should dismiss the cookie warning, wasn't completed successfully. And I'm sure the XPath is good, because it runs with geckodriver in Firefox but won't work here with Chromedriver.
EDIT2: See a video of the bug here: https://streamable.com/tv7w4. Notice how it flinches a bit; watch the console for when it prints "before click" and "after click".
SOLUTION
Replaced
cook_button = WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, "//form[@method='post']/input[@type='submit']"))).click()
With
N_click_attempts = 0
while 1:
    if N_click_attempts == 10:
        print "Something is wrong. "
        break
    print "Try to click."
    N_click_attempts = N_click_attempts + 1
    try:
        cook_button = WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, "//form[@method='post']/input[@type='submit']"))).click()
        time.sleep(2.0)
    except:
        time.sleep(2.0)
        break
It seems that the click is now completed. I have other clicks in the script and they work fine with element.click(); this one was problematic for some reason.
Your path is correct, but I would suggest a smaller one:
//form/input[2]
As for the NoSuchElementException: you can try adding a pause, to wait until the element loads and becomes 'visible' to Selenium. Like this:
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
driver.get("https://www.marktplaats.nl/")
cook_button = WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, "//form[@method='post']/input[@type='submit']"))).click()
time.sleep(5)  # wait 5 seconds for the DOM to reload
According to the edit in the question, I would suggest adding time.sleep(5) after clicking the button, for the same reason: after the click the whole DOM reloads, and Selenium should wait until the reload is done. On my computer it takes about 2-3 seconds to fully reload the DOM.
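If the fixed sleep feels fragile, an explicit wait for the cookie form to disappear can replace it. A minimal sketch, assuming driver is already on the page and the same XPath as above still matches the cookie form:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

cookie_submit = (By.XPATH, "//form[@method='post']/input[@type='submit']")

# Click the cookie button once it is clickable...
WebDriverWait(driver, 15).until(EC.element_to_be_clickable(cookie_submit)).click()

# ...then wait until the banner is gone instead of sleeping a fixed 5 seconds.
WebDriverWait(driver, 15).until(EC.invisibility_of_element_located(cookie_submit))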

Selenium - ChromeDriver u'unknown error: Chrome failed to start'

I'm pretty new to Selenium and I'm getting an error with ChromeDriver.
I'm using: Chrome 36, ChromeWebDriver 2.10, Windows 7
Here's my code:
from selenium import webdriver
webD = webdriver.Chrome();
But I get the response:
unknown error: Chrome failed to start
How can I fix this?
You may need to download the chrome executable driver from http://chromedriver.storage.googleapis.com/index.html and set the executable path accordingly.
Sample Python Code :
import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
chromedriver = "./chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
#driver = webdriver.Firefox()
driver.get("http://www.python.org")
print driver.title
assert "Python" in driver.title
For more information and an end-to-end script, see the ChromeDriver documentation.
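"Chrome failed to start" often comes down to a driver/browser version mismatch or ChromeDriver not finding the Chrome binary. A minimal sketch that fails early if the driver is missing and points ChromeDriver at Chrome explicitly; the chrome_binary path is only an example and must be adjusted for your machine:
import os
from selenium import webdriver

chromedriver = "./chromedriver"  # path to the ChromeDriver executable
chrome_binary = r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"  # example path, adjust for your install

# Fail early with a clear message if the driver binary is missing.
if not os.path.isfile(chromedriver):
    raise RuntimeError("chromedriver not found at %s" % chromedriver)

options = webdriver.ChromeOptions()
options.binary_location = chrome_binary  # tell ChromeDriver where Chrome itself lives

driver = webdriver.Chrome(chromedriver, chrome_options=options)
driver.get("http://www.python.org")
print driver.title
driver.quit()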

(Python,Selenium) How to minimize firefox window while running

How can I minimize a Firefox window using Selenium and Python?
I tried with:
try:
    body = None
    body = driver.find_element_by_tag_name("body")
    body.send_keys("{%+" "+N}")
    print "entered keys"
except NoSuchElementException:
    print "item body is not exists"
Code 2:
body.send_keys(Keys.CONTROL+Keys.ESCAPE+'D')
Code 3:
body.send_keys("{%" "n}")
Nothing worked for me. I want to minimize my Firefox window while the script is running (or right after invoking the browser), or run it in an invisible mode that takes no focus.
The following code should help:
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.google.com")
actionChain = ActionChains(driver).key_down(Keys.ALT)
actionChain.send_keys(Keys.SPACE)
actionChain.send_keys("n")
actionChain.perform()
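On newer Selenium and geckodriver versions there are also built-in ways to do this without keyboard tricks; a minimal sketch (whether these calls are available depends on your Selenium version, so treat this as an assumption to verify):
from selenium import webdriver

# Option 1: minimize the window via the WebDriver command.
driver = webdriver.Firefox()
driver.get("http://www.google.com")
driver.minimize_window()
driver.quit()

# Option 2: run Firefox headless, so no window is shown and nothing steals focus.
options = webdriver.FirefoxOptions()
options.add_argument("-headless")
headless_driver = webdriver.Firefox(options=options)
headless_driver.get("http://www.google.com")
print(headless_driver.title)
headless_driver.quit()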