How to retrieve the page load timeout through Selenium and Python - python-2.7

Is there any method in Python + Selenium for retrieving the webdriver's current page load timeout?
I know how to use set_page_load_timeout(), and examining the Chromedriver logs shows that it modifies the driver's internal state, so I am wondering if there is a way to query for it.
Alternatively, I will simply save the value on my side of the code. Retrieving it would be helpful to verify that the timeout was successfully set and, later on, that it is still the same.

When you initialize the WebDriver it is configured with a default page load timeout of 300000 milliseconds (300 seconds), which you can extract from the capabilities dictionary as follows:
Code Block:
from selenium import webdriver

driver = webdriver.Firefox(executable_path=r'C:\Utility\BrowserDrivers\geckodriver.exe')
# The 'timeouts' capability holds the session's script, pageLoad and implicit timeouts (in milliseconds)
timeouts = driver.capabilities['timeouts']
print(timeouts["pageLoad"])
driver.quit()
Console Output:
300000
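If the goal is mainly to verify that a timeout you set is still in effect, one option (as the question itself suggests) is to record the value at the call site. Below is a minimal sketch of that idea, not part of Selenium's API: the TimeoutTrackingDriver wrapper name is made up, it assumes geckodriver is on PATH, and it seeds its bookkeeping from the capabilities entry shown above (reported in milliseconds).
from selenium import webdriver

class TimeoutTrackingDriver:
    """Hypothetical wrapper that remembers the last page load timeout it applied."""

    def __init__(self, driver):
        self.driver = driver
        # Seed the bookkeeping from the session capabilities (milliseconds -> seconds).
        self.page_load_timeout = driver.capabilities['timeouts']['pageLoad'] / 1000.0

    def set_page_load_timeout(self, seconds):
        self.driver.set_page_load_timeout(seconds)
        self.page_load_timeout = seconds

driver = TimeoutTrackingDriver(webdriver.Firefox())
driver.set_page_load_timeout(30)
print(driver.page_load_timeout)  # 30
driver.driver.quit()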

Related

Google cloud functions missing logs issue

I have a small Python Cloud Function connected to a Pub/Sub topic that should send out some emails using the SendGrid API.
The CF can dynamically load and run functions based on an env var (CF_FUNCTION_NAME) provided (monorepo architecture):
# main.py
import logging
import os
from importlib import import_module

def get_function(function_name):
    return getattr(import_module(f"functions.{function_name}"), function_name)

def do_nothing(*args):
    return "no function"

cf_function_name = os.getenv("CF_FUNCTION_NAME", False)
disable_logging = os.getenv("CF_DISABLE_LOGGING", False)

def run(*args):
    if not disable_logging and cf_function_name:
        import google.cloud.logging

        client = google.cloud.logging.Client()
        client.get_default_handler()
        client.setup_logging()
        print("Logging enabled")
    cf = get_function(cf_function_name) if cf_function_name else do_nothing
    return cf(*args)
This works fine, except for some issues related to Stackdriver logging:
The print statement "Logging enabled" should be printed on every invocation, but it only appears once.
Exceptions raised in the dynamically loaded function are missing from the logs; instead the logs just show 'finished with status crash', which is not very useful.
Screenshot of the Stackdriver logs of multiple subsequent executions: [screenshot omitted]
Is there something I'm missing here?
Is my dynamic loading of functions somehow messing with the logging?
Thanks.
I don't see any issue here. When your function is invoked for the first time, one instance is created and logging is enabled (hence your "Logging enabled" trace). Then the instance stays up until it is evicted (which is unpredictable!).
If you want to see the trace several times, perform two calls at the same time. A Cloud Function instance can handle only one request at a time, so two parallel calls force the creation of another instance and thus a new logging initialisation.
The same goes for exceptions: if you don't catch and log them, nothing will be logged. Simply catch them!
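To make that last point concrete, here is a minimal sketch (reusing the run(), get_function() and do_nothing() names from the snippet above, so it is not standalone) of wrapping the dispatched call so the traceback is logged before the function crashes:
import logging

def run(*args):
    # ... same logging setup as above ...
    cf = get_function(cf_function_name) if cf_function_name else do_nothing
    try:
        return cf(*args)
    except Exception:
        # logging.exception records the message plus the full traceback,
        # so the error shows up in Stackdriver instead of only
        # "finished with status: 'crash'".
        logging.exception("Unhandled error in %s", cf_function_name)
        raise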
It seems there has been an issue with Cloud Functions and Python for about a month now, where errors do not get logged automatically with tracebacks and categorized correctly as "Error": GCP Cloud Functions no longer categorizes errors correctly with tracebacks

Cassandra python driver: Client request timeout

I set up a simple script to insert a new record into a Cassandra database. It works fine on my local machine, but I get timeout errors from the client after moving the database to a remote machine. How do I properly set the timeout for this driver? I have tried many things. I hacked the timeout in my IDE and got it to work without timing out, so I know for sure it's just a timeout problem.
How I setup my Cluster:
profile = ExecutionProfile(request_timeout=100000)
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile})
connection.setup(hosts=[os.getenv('CASSANDRA_SEED', None)],
                 default_keyspace=os.getenv('KEYSPACE', None),
                 consistency=int(os.getenv('CASSANDRA_SESSION_CONSISTENCY', 1)),
                 auth_provider=auth_provider,
                 connect_timeout=200)
session = self.cluster.connect()
The query I am trying to perform:
model = Model.create(buffer=_buffer, lock=False, version=self.version)
The error I get (truncated):
13..': 'Client request timeout. See Session.execute_async'}, last_host=54.213..
The record I'm inserting is 11 MB, so I can understand there being a delay; just increasing the timeout should do it, but I can't seem to figure it out.
The default request timeout is an attribute of the Session object (version 2.0.0 of the driver and later).
session = cluster.connect(keyspace)
session.default_timeout = 60
This is the simplest answer (no need to mess about with an execution profile), and I have confirmed that it works.
https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.Session
You can set request_timeout in the Cluster constructor:
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)],
                       auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile},
                       request_timeout=10)
Reference: https://datastax.github.io/python-driver/api/cassandra/cluster.html
Based on the documentation, request_timeout is an attribute of the ExecutionProfile class, and you can pass execution profiles to the Cluster constructor (this is an example).
So, you can do:
import os

from cassandra.cluster import Cluster, ExecutionProfile

execution_profile = ExecutionProfile(request_timeout=600)
profiles = {'node1': execution_profile}
cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], execution_profiles=profiles)
session = cluster.connect()
session.execute('SELECT * FROM test', execution_profile='node1')
Important: when you use execute or execute_async, you have to specify the execution_profile name.
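If you would rather not pass a profile name on every call, you can register the profile under EXEC_PROFILE_DEFAULT instead, the same constant the question's own setup code imports; execute() then picks it up automatically. A minimal sketch under that assumption, reusing the CASSANDRA_NODES environment variable from above:
import os

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT

# A 600-second request timeout applied to every statement by default.
default_profile = ExecutionProfile(request_timeout=600)
cluster = Cluster([os.getenv('CASSANDRA_NODES', None)],
                  execution_profiles={EXEC_PROFILE_DEFAULT: default_profile})
session = cluster.connect()

# No execution_profile argument needed; the default profile applies.
session.execute('SELECT * FROM test')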

How to set timeouts of db calls using flask and SQLAlchemy?

I need to set a timeout for db calls, and I looked into the Flask-SQLAlchemy documentation: http://flask-sqlalchemy.pocoo.org/2.1/config/
There are many configuration parameters, but no examples of how to use them. Could anyone show me how to use SQLALCHEMY_POOL_TIMEOUT to set a timeout for db calls? I have it in my .py file, but I don't know whether I'm using the parameter correctly.
app = Flask(__name__)
app.config["LOGGER_NAME"] = ' '.join([app.logger_name,
                                      socket.gethostname(), instance_id])
app.config["SQLALCHEMY_DATABASE_URI"] = config.sqlalchemy_database_uri
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_POOL_TIMEOUT"] = 30
The documentation only states "Specifies the connection timeout for the pool. Defaults to 10." and I don't even know the unit of this 10: is it seconds or milliseconds?
The unit is seconds, as can be seen in the later documentation: Configuration — Flask-SQLAlchemy Documentation (2.3).
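For what it's worth, SQLALCHEMY_POOL_TIMEOUT controls how long (in seconds) a request waits to check a connection out of the pool, not how long an individual query may run. A minimal sketch of the setting in context, using Flask-SQLAlchemy; the PostgreSQL URI is a placeholder standing in for the question's config object:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder URI; substitute your real database here.
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:password@localhost/mydb"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
# Wait at most 30 seconds for a free connection from the pool (value is in seconds).
app.config["SQLALCHEMY_POOL_TIMEOUT"] = 30

db = SQLAlchemy(app)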

Can I slow down Django

Simple question really
./manage.py runserver
Can I slow down localhost:8000 on my development machine so I can simulate file uploads and work on the look and feel of ajax uploading?
Depending on where you want to simulate the slowness, you could simply sleep:
from time import sleep
sleep(500)
On OS X or FreeBSD, you can use ipfw to limit bandwidth on specific ports:
sudo ipfw pipe 1 config bw 1Bytes/s delay 100ms
sudo ipfw add 1 pipe 1 src-port 8000
Do not forget to delete it when you do not need it anymore:
sudo ipfw delete 1
Credit: jaguarcy
For OS X there is also a free app that will do this:
http://slowyapp.com/
You could write a custom upload handler, or subclass the current upload handler, mainly to slow down its receive_data_chunk() method. Or set a pdb breakpoint inside receive_data_chunk() and step through the upload manually. Or, even simpler, try uploading a large file.
I'm a big fan of the Charles HTTP Proxy. It lets you throttle the connection and can simulate all sorts of network conditions.
http://www.charlesproxy.com/
Use the slow file upload handler from django-gubbins:
import time

from django.core.files.uploadhandler import FileUploadHandler

class SlowFileUploadHandler(FileUploadHandler):
    """
    This is an implementation of the Django file upload handler which will
    sleep between processing chunks in order to simulate a slow upload. This
    is intended for development when creating features such as an AJAXy
    file upload progress bar, as uploading to a local process is often too
    quick.
    """
    def receive_data_chunk(self, raw_data, start):
        time.sleep(2)
        return raw_data

    def file_complete(self, file_size):
        return None
You can either enable this globally, by adding it to FILE_UPLOAD_HANDLERS in your settings:
FILE_UPLOAD_HANDLERS = (
    "myapp.files.SlowFileUploadHandler",
    "django.core.files.uploadhandler.MemoryFileUploadHandler",
    "django.core.files.uploadhandler.TemporaryFileUploadHandler",
)
Or enable it for a specific request:
request.upload_handlers.insert(0, SlowFileUploadHandler())
Make sure the request is exempted from CSRF checking, as mentioned at https://docs.djangoproject.com/en/dev/topics/http/file-uploads/#id1
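For the per-request variant, the linked Django docs describe swapping the handler in before CSRF protection reads the request body, by exempting the outer view and protecting an inner one. A minimal sketch of that pattern; the view names and the myapp.files import path are hypothetical:
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt, csrf_protect

from myapp.files import SlowFileUploadHandler  # hypothetical module path

@csrf_exempt
def upload_view(request):
    # Insert the handler before request.POST / request.FILES are accessed;
    # CSRF checking (which reads POST) is deferred to the inner view.
    request.upload_handlers.insert(0, SlowFileUploadHandler())
    return _protected_upload_view(request)

@csrf_protect
def _protected_upload_view(request):
    uploaded = request.FILES.get("file")
    return HttpResponse("received %s" % (uploaded.name if uploaded else "nothing"))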
If you want to slow things down across all requests, a very easy way is to use ngrok (https://ngrok.com/). Use the ngrok URL for requests, then connect to a VPN in another country. That will make your requests really slow.

App Engine local datastore content does not persist

I'm running some basic test code, with web.py and GAE (Windows 7, Python27). The form enables messages to be posted to the datastore. When I stop the app and run it again, any data posted previously has disappeared. Adding entities manually using the admin (http://localhost:8080/_ah/admin/datastore) has the same problem.
I tried setting the path in the Application Settings using Extra flags:
--datastore_path=D:/path/to/app/
(I wasn't sure about the syntax there.) It had no effect. I searched my computer for *.datastore and couldn't find any files either, which seems suspect, although the data is obviously being stored somewhere for as long as the app is running.
from google.appengine.ext import db
import web

urls = (
    '/', 'index',
    '/note', 'note',
    '/crash', 'crash'
)

render = web.template.render('templates/')

class Note(db.Model):
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

class index:
    def GET(self):
        notes = db.GqlQuery("SELECT * FROM Note ORDER BY date DESC LIMIT 10")
        return render.index(notes)

class note:
    def POST(self):
        i = web.input('content')
        note = Note()
        note.content = i.content
        note.put()
        return web.seeother('/')

class crash:
    def GET(self):
        import logging
        logging.error('test')
        crash

app = web.application(urls, globals())

def main():
    app.cgirun()

if __name__ == '__main__':
    main()
UPDATE:
When I run it via command line, I get the following:
WARNING 2012-04-06 19:07:31,266 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
INFO 2012-04-06 19:07:31,778 appengine_rpc.py:160] Server: appengine.google.com
WARNING 2012-04-06 19:07:31,783 datastore_file_stub.py:513] Could not read datastore data from c:\users\amy\appdata\local\temp\dev_appserver.datastore
WARNING 2012-04-06 19:07:31,851 dev_appserver.py:3394] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging
INFO 2012-04-06 19:07:32,052 dev_appserver_multiprocess.py:647] Running application dev~palimpsest01 on port 8080: http://localhost:8080
INFO 2012-04-06 19:07:32,052 dev_appserver_multiprocess.py:649] Admin console is available at: http://localhost:8080/_ah/admin
Suggesting that the datastore... didn't install properly?
As of 1.6.4, we stopped saving the datastore after every write. This method did not work when simulating the transactional model found in the High Replication Datastore (you would lose the last couple of writes). It is also horribly inefficient. We changed it so the datastore dev stub flushes all writes and saves its state on shutdown. It sounds like the dev_appserver is not shutting down correctly. You should see:
Applying all pending transactions and saving the datastore
in the logs when shutting down the server (see the source code). If you don't, it means that the dev_appserver is not being shut down cleanly (with a TERM signal or KeyboardInterrupt).
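One more thing worth checking, hedged because flag handling differs between SDK versions: the warning in the log above falls back to a file called dev_appserver.datastore, which suggests --datastore_path should point at a file rather than a directory, for example (made-up filename):
--datastore_path=D:/path/to/app/notes.datastore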