I need to capture the Chrome console-debug.log file.
e.g.
chrome.exe --enable-logging
This dumps a console-debug.log file in the User Data folder. I tested this manually and it works.
I can't seem to get Selenium to load Chrome remotely with arguments.
I have checked out the docs and tried to follow them:
https://sites.google.com/a/chromium.org/chromedriver/capabilities
Now I have two machines, one is a linux machine that hosts my python test code and another is a Windows VM that I run Chrome on for testing against different urls.
The Windows VM has Selenium standalone server 2.45.0, and its start-up script points to the location of chromedriver.exe (version 2.23.409699). It also starts IEDriverServer, but we don't use that right now. I start it using a batch file in the Startup folder that contains the line:
java -jar C:\Users\Remote\Desktop\selenium\selenium-server-standalone-2.45.0.jar -Dwebdriver.chrome.driver=C:\Users\Remote\Desktop\selenium\webdrivers\chromedriver.exe -Dwebdriver.ie.driver=C:\Users\Remote\Desktop\selenium\webdrivers\IEDriverServer.exe
On the linux machine I am using a Python implementation, and we use webdriver.Remote to connect to Selenium standalone server. For debug purposes I'm running PyCharm so I can step through the code.
The code I use to setup the connection is:
options = webdriver.ChromeOptions()
options.add_argument("--enable-logging")
caps = options.to_capabilities()
self.selenium = webdriver.Remote(
    command_executor='http://%s:4444/wd/hub' % browser_host_ip_address,
    desired_capabilities=caps,
    proxy=proxy)
If I step through with PyCharm and inspect the caps object just before going into the webdriver.Remote code, I get an object that looks like this:
{'chromeOptions': {'args': ['--enable-logging'], 'extensions': []}, 'javascriptEnabled': True, 'platform': 'ANY', 'browserName': 'chrome', 'version': ''}
This chromeOptions object makes it all the way to the Selenium standalone server and that then outputs to the server console:
10:59:41.109 INFO - Done: [new session: Capabilities [{browserName=chrome, javascriptEnabled=true, chromeOptions={args=[--enable-logging], extensions=[]}, version=, platform=ANY}]]
This then doesn't seem to make it through to chromedriver. I created a simple C program that outputs any arguments passed to it and replaced chromedriver with it; the only argument passed was --port=xxxxx (where each x is a digit).
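For reference, a rough Python equivalent of that logging stub (the original was a small C program; a script like this would have to be frozen into an .exe, e.g. with py2exe, to stand in for chromedriver.exe):

import sys

# Append whatever arguments we were launched with to a log file,
# so we can see exactly what Selenium passes to "chromedriver".
with open("chromedriver_args.log", "a") as log:
    log.write(" ".join(sys.argv[1:]) + "\n")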
So chromeOptions makes it all the way to the Selenium server but no further. I imagine I am doing something silly, but for the life of me I can't work out what.
I have tried the latest Selenium standalone server, 3.0 beta 2, as well as 2.53.1 and 2.39.0, in case something had been introduced that stopped it working. I have also tried multiple chromedriver versions.
I am assuming my original premise that I can use chromeOptions in this way is wrong, but I'm not sure.
I need to use two different machines, so I assume I need a webdriver.Remote object, as webdriver.Chrome doesn't support remote connections. Is that true?
Any ideas or suggestions would be appreciated.
Adam
After quite a bit of trial and error and a step-by-step attempt to find solutions, I thought I'd share the problems here and answer them myself based on what I've found. There is not much documentation on this anywhere except small bits and pieces, so hopefully this will help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the below errors and further resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the celery command
celery -A your_app worker --beat -l info
or similar on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs when you make one of a number of potential mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about how they work. The default is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with celery by default. In my case that was pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the celery command, but note that gevent and beat are not supported within a single command (it will result in an error). Instead do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first line you can also specify the concurrency, like so: --concurrency=100. This is not required, but it is useful. Read up on what it does, as that goes beyond the scope of this solution.
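For example, combined with the pool option from above:
celery -A your_app worker --pool=gevent --concurrency=100 -l info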
PART 2:
In my specific case I tested the above locally (in development) first to make sure it worked. The next issue was getting it into production. I use Redis as the db/broker.
In my specific setup I have most of my celery configuration in the_main_app/celery/__init__.py, but sometimes people put it directly into the_main_app/celery.py. Whichever it is, make sure that the REDIS_URL is set correctly. For development it usually looks something like the line below, where YOUR_VAR_NAME is then set as the broker:
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "celery first steps with django" help page, but they are not relevant to what I am showing here.
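For reference, a minimal sketch of such a the_main_app/celery/__init__.py, following the layout from the "celery first steps with django" docs (the settings-module line and the last two calls are standard boilerplate from those docs, not something specific to this setup):

import os
from celery import Celery

# Standard boilerplate from the Celery/Django first-steps docs;
# 'the_main_app.settings' is the usual Django settings path.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'the_main_app.settings')

# Fall back to a local Redis instance in development.
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')

app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()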
PART 3:
When you setup your Redis Database on the App Platform (which is very simple) you will see the connection details as 'public network' and 'VPC network'.
The celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work. If you are not using a yaml file, you can simply copy-paste the entire connection string (select it from the dropdown!) from the Redis DB connection details, then set up an App-Level environment variable in your Digital Ocean project named REDIS_URL and paste in that entire string (and also encrypt it!).
The string should look something like this (note: rediss with two s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each line and pasted the commands into the Run Command.
That is basically the process step by step. Good luck!
I have an HTML file with one button. When the button is clicked, a JavaScript function "run" is called:
function run() {
    window.open("http://localhost/cgi-bin/run.py", "_self");
}
run.py simply tries to run a helloworld.exe program that outputs the string "helloworld" to a terminal, but nothing happens, and the browser keeps "waiting for localhost" indefinitely.
#!python
import sys, string, os, cgitb
cgitb.enable()
os.system("helloworld.exe")
I have tried helloworld.exe alone and it works, I have run run.py from the terminal and it worked, and I have also tested http://localhost/cgi-bin/helloworld.py in the browser, and it worked fine (helloworld.py is another script, just to check that my Apache is configured OK).
I am using WAMP.
What I am trying to build is a bigger program that allows a client to connect to a server and "interact" with a program on the server side. The program is already written in C++ and won't be translated into PHP or JavaScript.
EDIT: I have been trying with the functions subprocess.Popen, subprocess.call and os.system. I have also tested the code running .exe files created by me that live in the apache/cgi-bin folder, as well as executables like wordpad living in c:\windows. It always succeeds when the python script runs from the terminal, and it never works when run from the browser. Is it possible that it is because of the server I am using? I use the Apache from WAMP, and have added the directive "AddHandler cgi-script .exe" to the httpd.conf file.
I am sure it doesn't work locally: os.system returns the exit code of the command, not its output.
You will need to use subprocess.Popen and read the output through a pipe.
import subprocess
p = subprocess.Popen("whatever.exe",stdout=subprocess.PIPE)
print p.stdout.read()
p.wait()
Also, the output of your CGI script is not valid HTTP, and depending on your server that could be causing problems (some servers sanitize the output of scripts, others expect them to be written properly).
This code example works for me (on GNU/Linux using weborf as the server, but it should be the same elsewhere). You can try setting the content type and sending the final \r\n\r\n sequence. Servers are not expected to send that themselves, because the CGI script might want to add some more HTTP headers.
#!/usr/bin/env python
import cgitb
import subprocess
import sys
cgitb.enable()
sys.stdout.write("Content-type: text/plain\r\n\r\n")
p = subprocess.Popen("/usr/games/fortune",stdout=subprocess.PIPE)
data = p.stdout.read()
p.wait()
print data
In general, my experience has been that if something works locally but doesn't work when you install the same code on a different server, you are looking at a permissions problem. Check the permissions that visitors to your site have, and make sure that the user coming through your web server has the right permissions to run helloworld.exe and run.py.
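A quick way to check that from the browser is a tiny CGI script in the same Python 2 style as the rest of this thread (getpass.getuser() is just one way to read the current account name):

#!python
import sys, cgitb, getpass
cgitb.enable()
sys.stdout.write("Content-type: text/plain\r\n\r\n")
# Print the account the web server uses to run CGI scripts.
print getpass.getuser()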
I'm on Ubuntu 12.04, using Jetty (9_M4) and Solr (4.0.0) through django-haystack (2.0 beta), installed in a Django 1.4.2 site.
I've had to jump through a number of hoops to get this up and running, as there is very little documentation on getting Solr 4.0 running on Ubuntu with django-haystack. But how hard could it be?
My main confusion is between what Jetty is doing, and what Solr is doing.
So, I installed Jetty via this tutorial, making a small adjustment to the init file as I note in my comment on that tutorial. Jetty is now running; I can see it in the browser, even after a reboot.
Great.
Move on to installing Solr via this tutorial, again with adjustments. Instead of:
cp -R apache-solr-4.0.0/example/solr /opt
I use:
cp -R apache-solr-4.0.0/example/* /opt/solr/
and therefore add the following to /etc/default/jetty:
JAVA_OPTIONS="-Dsolr.solr.home=/opt/solr/solr $JAVA_OPTIONS"
I can't exactly remember why I did that, but there was a reason at the time. I stopped using that tutorial at that point, as I didn't understand the Solr concept of a core very well, and I was already flustered by how annoyingly difficult this was.
(For context: when I set up django-haystack 2.0 with Solr 3.5 about six months ago, it was terrifyingly easy and didn't require a separate Jetty installation - all up it took me about two hours.)
Anyway, I go back to my Django installation, create the schema.xml, make the stopwords-en.txt changes, and copy them across to /opt/solr/solr/collection1/conf.
I edit /opt/solr/solr/collection1/conf/solrconfig.xml to remove the reference to updateLog, since any attempt I made to add a version field to schema.xml failed dismally with some sort of character error. See here (lucene-solr-user mailing list) and here (django-haystack GitHub) for more info on this.
Finally, I cd into /opt/solr and run it:
sudo java -jar start.jar
Ba-da-boom! I get some results (when I go to my django site and use the search I've set up). Fantastic. This is really great. Now I just need to make the starting of solr persistent.
I create an /etc/init/solr that looks like this:
description "Solr Search Server"
# Make sure the file system and network devices have started before
# we begin the daemon
start on (filesystem and net-device-up IFACE!=lo)
# Stop the event daemon on system shutdown
stop on shutdown
# Respawn the process on unexpected termination
respawn
# The meat and potatoes
exec /usr/bin/java -jar /opt/solr/start.jar >> /var/log/solr.log 2>&1
I restart the server and - nothing. I can see Solr running, but I'm not getting any results in my Django search.
I remove the init file and try running from the cli again - yep, sweet.
So, my questions are:
What the hell have I done wrong?
How do I get Solr to start at boot, respawn if it dies accidentally, AND produce results through my Django/haystack interface?
Why do I need Jetty and Solr running simultaneously, and what is the relationship of /opt/jetty/webapps/solr.war to my /opt/solr? Am I creating conflicts?
Why was this so easy with Solr 3.5 and so difficult now? I ask this honestly - I don't want a list of excuses or explanations from the Solr developers. I want to know how my limited understanding was enough to get Solr 3.5 running in two hours, and why I now need a comprehensively deeper understanding of Jetty/Solr architecture and cli/shell-script hacking to get this one to run.
I am not promising to cover everything, but (the numbers below do not match your questions):
1) Jetty is a web server. Solr runs as a (web) application inside that web server; however:
2) Jetty can also run as an embedded web server, which is how the Solr download works. When you do java -jar start.jar, that runs Jetty with everything preconfigured, in which case you do not need a standalone Jetty. I suggest starting with the embedded Jetty, then switching to an external one. However, if only your local app talks to the local Solr server, you may be able to get quite far without needing a full Jetty.
3) You don't need all the stuff you find in the example directory - it has multiple configurations and support files and is somewhat nested (which is confusing).
4) To start, you need two things: a running Solr and your configuration directory.
5) The easiest way to get Solr running is to put the whole distribution directory (I know - it's large) somewhere, e.g. /opt/solr.
6) Your configuration directory is very simple. All you need is two files to start, three if you are picky about names:
- (wherever, but make sure Solr can read/write there)
-- solr.xml (if you are picky about the collection name, otherwise you can skip it)
-- collection1/ (that's the default name; you can change it in solr.xml)
-- collection1/conf/ (this is the configuration directory; Solr will add a data directory on the same level once you start it right)
-- collection1/conf/schema.xml
-- collection1/conf/solrconfig.xml
7) Then, you need to be in the example directory and run java -Dsolr.solr.home=<your configuration directory> -jar start.jar. This will get all the pieces up and running on port 8983. Solr 4 has a pretty new admin interface, so visit it with your browser, maybe do the tutorial, etc.
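Using the paths from the question above, that would be something like:
cd /opt/solr && java -Dsolr.solr.home=/opt/solr/solr -jar start.jar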
If you need help with minimal functioning schema/solrconfig files, ask separately, but you cannot just use the ones from the example directory, as they reference all the other files in the fieldType analysers (though you could just comment those lines out).
When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error:
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate"; otherwise you will quickly run into problems similar to those encountered with cron - namely, that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, and thus the default keychains you expect will be available; otherwise you are stuck with only the System keychain, despite the task running as your user.
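For illustration, a minimal sketch of such a plist (the label, script path, user, and schedule are all hypothetical; SessionCreate is the key that matters):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- hypothetical label and script path; adjust to your setup -->
    <key>Label</key>
    <string>com.example.nightly-build</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/builder/bin/nightly_build.sh</string>
    </array>
    <!-- run as your user, not root, so your certificates can be found -->
    <key>UserName</key>
    <string>builder</string>
    <!-- the important part: create a login session so login.keychain is available -->
    <key>SessionCreate</key>
    <true/>
    <!-- roughly cron's "0 2 * * *": every night at 02:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>

Load it once with sudo launchctl load /Library/LaunchDaemons/com.example.nightly-build.plist and the job will run on that schedule with your login keychain available.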
I found the answers here (though not the current top answer) useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see with which user the script is launched.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (it's a non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH, etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
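For example, a hypothetical set_build_env.sh might contain:

# Everything the build needs, collected in one place (values are examples).
export HOME=/home/dmitry
export PATH=/usr/local/bin:/usr/bin:/bin:$PATH
export CLASSPATH=/home/dmitry/lib/build-deps.jar:$CLASSPATH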
I am looking for a way to simplify remote deployment of a django application directly from PyCharm.
Even if deploying the files themselves works just fine with the remote host and upload, I was not able to find a way to run the additional commands on the server side (like manage.py syncdb).
I am looking for a fully automated solution, one that would work at single click (or command).
I don't know much about PyCharm, so maybe you could do something from the IDE, but I think you'll probably want to take a look at the fabric project (http://docs.fabfile.org/en/1.0.1/index.html).
It's a python deployment automation tool that's pretty great.
Here is one of my fabric script files. Note that I make a lot of assumptions (this is the one I use myself) that completely depend on how you want to set up your project: I use virtualenv, pip, and south, as well as my own personal preferences for how and where to deploy. You'll likely want to rework or simplify it to meet your needs.
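A minimal fabfile along those lines (the host, paths, and restart command are hypothetical; it assumes the virtualenv/pip/south setup just mentioned) might look like:

from fabric.api import cd, env, prefix, run, sudo

# Hypothetical host; fabric will prompt for credentials as needed.
env.hosts = ['deploy@example.com']

def deploy():
    # Hypothetical project path on the server.
    with cd('/srv/mysite'):
        run('git pull')
        # Assumes a virtualenv at env/ inside the project.
        with prefix('source env/bin/activate'):
            run('pip install -r requirements.txt')
            run('python manage.py syncdb')
            run('python manage.py migrate')  # south migrations
    # Hypothetical restart command; depends on your app server.
    sudo('service apache2 restart')

You would then run it with fab deploy from the directory containing the fabfile.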
You may use File > Settings > Tools > External Tools to run arbitrary external executables. You may write a small command that connects over SSH and issues a [set of] command(s). The configured tool is then executable directly from the IDE.
For example, in my project based on tornado, I run the instances using supervisord, which, according to answer here, cannot restart upon code change.
I ended up writing a small tool using paramiko that connects via ssh and runs supervisorctl restart. The code is below:
import paramiko
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-s",
                  action="store",
                  dest="server",
                  help="server where to execute the command")
parser.add_option("-u",
                  action="store",
                  dest="username")
parser.add_option("-p",
                  action="store",
                  dest="password")
(options, args) = parser.parse_args()

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname=options.server, port=22, username=options.username, password=options.password)

command = "supervisorctl reload"
(stdin, stdout, stderr) = client.exec_command(command)
for line in stdout.readlines():
    print line
client.close()
External Tool configuration in PyCharm:
program: <PYTHON_INTERPRETER>
parameters: <PATH_TO_SCRIPT> -s <SERVERNAME> -u <USERNAME> -p <PASSWORD>