"xml.sax._exceptions.SAXReaderNotAvailable: No parsers found" when run in jenkins - python-2.7

So I'm working towards having automated staging deployments via Jenkins and Ansible. Part of this is using a script called ec2.py from ansible in order to dynamically retrieve a list of matching servers to deploy to.
SSH-ing into the Jenkins server and running the script as the jenkins user, the script runs as expected. However, running the script from within Jenkins leads to the following error:
ERROR: Inventory script (ec2/ec2.py) had an execution error: Traceback (most recent call last):
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 1262, in <module>
Ec2Inventory()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 159, in __init__
self.do_api_calls_update_cache()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 386, in do_api_calls_update_cache
self.get_instances_by_region(region)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 417, in get_instances_by_region
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/connection.py", line 1181, in get_list
xml.sax.parseString(body, h)
File "/usr/lib/python2.7/xml/sax/__init__.py", line 43, in parseString
parser = make_parser()
File "/usr/lib/python2.7/xml/sax/__init__.py", line 93, in make_parser
raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
I don't know too much about python, so I'm not sure how to debug this issue further.

So it turns out the issue had to do with Jenkins overwriting the default LD_LIBRARY_PATH variable. By unsetting that variable before running Python, I was able to make the Python app work!
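For anyone else who hits this, the fix can go at the top of the job's "Execute shell" build step. A minimal sketch, assuming an Ansible deploy; the playbook name is a placeholder:
# Jenkins (the Bitnami stack here) exports an LD_LIBRARY_PATH that stops the system
# Python from loading its expat-based XML parser, so clear it first
unset LD_LIBRARY_PATH
# sanity check: this raises SAXReaderNotAvailable if the parser still can't be found
python -c "import xml.sax; xml.sax.make_parser()"
ansible-playbook -i ec2/ec2.py deploy.yml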

Related

trying to use cookiecutter-django, getting errors and does not create anything

I'm trying to get a Django project started using cookiecutter-django and can't seem to get it to generate anything.
I'm using Python 3.6, Django 2.0.5, and cookiecutter 1.6.0 (I then created a virtualenv and entered a new, blank directory).
So I enter this command:
cookiecutter https://github.com/pydanny/cookiecutter-django
and get this error traceback:
Traceback (most recent call last):
File "c:\python\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Python\python36\Scripts\cookiecutter.exe\__main__.py", line 9, in
<module>
File "c:\python\python36\lib\site-packages\click\core.py", line 722, in
__call__
return self.main(*args, **kwargs)
File "c:\python\python36\lib\site-packages\click\core.py", line 697, in main
rv = self.invoke(ctx)
File "c:\python\python36\lib\site-packages\click\core.py", line 895, in
invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\python\python36\lib\site-packages\click\core.py", line 535, in
invoke
return callback(*args, **kwargs)
File "c:\python\python36\lib\site-packages\cookiecutter\cli.py", line 120,
in main
password=os.environ.get('COOKIECUTTER_REPO_PASSWORD')
File "c:\python\python36\lib\site-packages\cookiecutter\main.py", line 63,
in cookiecutter
password=password
File "c:\python\python36\lib\site-packages\cookiecutter\repository.py", line
103, in determine_repo_dir
no_input=no_input,
File "c:\python\python36\lib\site-packages\cookiecutter\vcs.py", line 99, in
clone
stderr=subprocess.STDOUT,
File "c:\python\python36\lib\subprocess.py", line 336, in check_output
**kwargs).stdout
File "c:\python\python36\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone',
'https://github.com/pydanny/cookiecutter-django']' returned non-zero exit
status 128.
OK - figured out how to get this to work.
Used GitHub Desktop.
From the cookiecutter-django repository, right-click and choose "Open in Git Shell".
This opens a PowerShell window.
cd to the directory the project will be placed in, then run:
cookiecutter https://github.com/pydanny/cookiecutter-django
and it works.
Not sure exactly why this works when regular CMD and elevated CMD do not, but this was the only way I could get it to work.
This is a permissions issue with GitHub due to the need to set up SSH keys. (By the way, I'm using Ubuntu 12.)
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/ - first create a key on your machine using the instructions in the link. Once you have your SSH key, proceed to step 2. (Step 2 is indicated in the first link as the last step.)
https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account - add the generated SSH key to your GitHub account.
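In short, the commands from those two pages look roughly like this (the email address and key path are placeholders):
# generate a new key pair and load it into the ssh-agent
ssh-keygen -t rsa -b 4096 -C "you@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# print the public key, then paste it into GitHub -> Settings -> SSH and GPG keys
cat ~/.ssh/id_rsa.pub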

'No such file or directory' error after submitting a training job

I execute:
gcloud beta ml jobs submit training ${JOB_NAME} --config config.yaml
and after about 5 minutes the job errors out with this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 232, in <module> tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 228, in main run_training()
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 129, in run_training data_sets = input_data.read_data_sets(FLAGS.train_dir, FLAGS.fake_data)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 212, in read_data_sets with open(local_file, 'rb') as f: IOError: [Errno 2] No such file or directory: 'gs://my-bucket/mnist/train/train-images.gz'
The strange thing is, as far as I can tell, that file exists at that url.
This error usually indicates you are using a multi-region GCS bucket for your output. To avoid this error you should use a regional GCS bucket. Regional buckets provide stronger consistency guarantees which are needed to avoid these types of errors.
For more information about properly setting up GCS buckets for Cloud ML, please refer to the Cloud ML Docs.
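If you still need to create one, a regional bucket can be made with gsutil; a rough example (bucket name and region are placeholders, and the exact flags may vary with your gsutil version):
gsutil mb -c regional -l us-central1 gs://my-training-bucket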
Normal Python file IO does not know how to deal with GCS gs:// paths; you need TensorFlow's file_io module:
from tensorflow.python.lib.io import file_io
first_data_file = args.train_files[0]
file_stream = file_io.FileIO(first_data_file, mode='r')
# run experiment
model.run_experiment(file_stream)
But ironically, you can copy files from the gs:// bucket to your local working directory, which your program CAN then actually see:
# presentation_mplstyle_path holds the gs://... path of the style file
with file_io.FileIO(presentation_mplstyle_path, mode='r') as input_f:
    with file_io.FileIO('presentation.mplstyle', mode='w+') as output_f:
        output_f.write(input_f.read())
mpl.pyplot.style.use(['./presentation.mplstyle'])
And finally, copying a file from your working directory back to a gs:// bucket:
with file_io.FileIO(report_name, mode='r') as input_f:
    with file_io.FileIO(job_dir + '/' + report_name, mode='w+') as output_f:
        output_f.write(input_f.read())
Should be easier IMO.
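If you only need the copy and not the streaming reads, file_io also has a one-call copy helper; a small sketch (the gs:// path here is a placeholder, and you should check the function is available in your TensorFlow version):
from tensorflow.python.lib.io import file_io
# copy a single object from GCS to the local working directory (or the reverse)
file_io.copy('gs://my-bucket/presentation.mplstyle', './presentation.mplstyle', overwrite=True)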

Jenkins Job Builder Configuration in Python

While updating the jobs in Jenkins Job Builder using jenkins-jobs update, I'm getting the error below.
INFO:root:Updating jobs in ['jobs'] ([])
Traceback (most recent call last):
File "/usr/bin/jenkins-jobs", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 191, in main
execute(options, config)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 372, in execute
n_workers=options.n_workers)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 348, in update_jobs
self.load_files(input_fn)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 293, in load_files
self.parser.parse(in_file)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 128, in parse
self.parse_fp(fp)
File "/usr/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 105, in parse_fp
cls, dfn = next(iter(item.items()))
AttributeError: 'str' object has no attribute 'items'
Job-Builder Version : 1.6.1
Python Version : 2.7
OS : RHEL 7.1
I've tried this in different machines but with no luck.
The AttributeError: 'str' object has no attribute 'items' error is very common in Python; it would be more helpful if you shared the code, or at least showed where the error is appearing.
You are using Jenkins Job Builder to configure Jenkins, and you are getting the error while updating Jenkins jobs. The update command is used to deploy a job to Jenkins after you have tested the job definition, and it requires a configuration file.
You should pass that configuration file as it is, not as a string, and the job definitions inside the configuration file should also not be strings, i.e. not wrapped in ' or " (single or double quotes).
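For reference, each entry in the jobs YAML should be a mapping, roughly like the sketch below (job name and shell command are placeholders); if an entry ends up as a bare string, parse_fp receives a str instead of a dict and fails with exactly this 'items' error:
- job:
    name: example-job
    description: 'managed by Jenkins Job Builder'
    builders:
      - shell: 'echo building'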

Trying to setup a Ghost blog with Buster

I'm trying to set up a static Ghost blog with Github hosting using the Buster Static generator. I've tried various instructions including:
https://stefanscherer.github.io/setup-ghost-for-github-pages/
http://blog.sunnyg.io/2015/09/24/ghost-with-github/
But when I get to the "buster generate" command, I get the following output in the terminal.
It is running fine locally.
Can anyone point me in the right direction?
buster generate
--2016-03-07 23:53:11-- http://localhost:2368/
Resolving localhost... ::1, 127.0.0.1
Connecting to localhost|::1|:2368... failed: Connection refused.
Connecting to localhost|127.0.0.1|:2368... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4508 (4.4K) [text/html]
Saving to: '/Users/philip/Development/Node/ghost-0.7.8/static/index.html'
fixing links in /Users/philip/Development/Node/ghost-0.7.8/static/index.html
Traceback (most recent call last):
File "/usr/local/bin/buster", line 9, in <module>
load_entry_point('buster==0.1.3', 'console_scripts', 'buster')()
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/buster/buster.py", line 90, in main
newtext = fixLinks(filetext, parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/buster/buster.py", line 64, in fixLinks
d = PyQuery(bytes(bytearray(text, encoding='utf-8')), parser=parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyquery/pyquery.py", line 226, in __init__
elements = fromstring(context, self.parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyquery/pyquery.py", line 90, in fromstring
result = custom_parser(context)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml/html/__init__.py", line 867, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml/html/__init__.py", line 752, in document_fromstring
value = etree.fromstring(html, parser, **kw)
File "src/lxml/lxml.etree.pyx", line 3213, in lxml.etree.fromstring (src/lxml/lxml.etree.c:82934)
File "src/lxml/parser.pxi", line 1819, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:124533)
File "src/lxml/parser.pxi", line 1707, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:123074)
File "src/lxml/parser.pxi", line 1079, in lxml.etree._BaseParser._parseDoc (src/lxml/lxml.etree.c:117114)
File "src/lxml/parser.pxi", line 573, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:110510)
File "src/lxml/parser.pxi", line 683, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:112276)
File "src/lxml/parser.pxi", line 624, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:111367)lxml.etree.XMLSyntaxError: None
As stated in the comment, I did get it working; however, instead of just doing the 'Quick Install' recommended by many, I went the route of the 'Developer Install' guide here, https://github.com/TryGhost/Ghost, using the Stable branch.
After you run through that (with your local server running), in another terminal, run
$ buster setup
<Enter git repo>
$ buster generate --domain=localhost:2368
$ buster deploy (or as most sane people prefer, just git push)
Full instructions here: http://phil-a.github.io/getting-ghost-running-on-github-with-buster

cannot run eb push to send new version to elastic beanstalk

I have just set up a new project; the Elastic Beanstalk environment is running OK with the sample application. This was all set up with the eb CLI.
When I try to do eb push with my new application, I get the following:
Traceback (most recent call last):
File ".git/AWSDevTools/aws.elasticbeanstalk.push", line 57, in <module>
dev_tools.push_changes(opts.get("env"), opts.get("commit"))
File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 196, in push_changes
self.create_application_version(env, commit, version_label)
File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 184, in create_application_version
self.upload_file(bucket_name, archived_file)
File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 145, in upload_file
key.set_contents_from_filename(archived_file)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1315, in set_contents_from_filename
encrypt_key=encrypt_key)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1246, in set_contents_from_file
chunked_transfer=chunked_transfer, size=size)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 725, in send_file
chunked_transfer=chunked_transfer, size=size)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 914, in _send_file_internal
query_args=query_args
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/connection.py", line 633, in make_request
retry_handler=retry_handler
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 1046, in make_request
retry_handler=retry_handler)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 919, in _mexe
request.body, request.headers)
File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 815, in sender
http_conn.send(chunk)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 805, in send
self.sock.sendall(data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 229, in sendall
v = self.send(data[count:])
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 198, in send
v = self._sslobj.write(data)
socket.error: [Errno 32] Broken pipe
Cannot run aws.push for local repository HEAD:
I have another Elastic Beanstalk app that is running, and when I run eb push in that directory it works fine, so I don't think it's anything to do with Ruby or other dependencies not being installed. I also made changes and made another commit with a very simple message to make sure that wasn't causing the problem, but still no joy.
The difference between the app that can be pushed and this one is the AWS account; the user credentials for the Elastic Beanstalk app that won't push are admin credentials.
This link solved my problem; it was a result of using two different accounts:
http://www.acnenomor.com/207992p1/unable-to-deploy-to-aws-elastic-beanstalk-using-git
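For what it's worth, the AWSDevTools push scripts in the traceback go through boto, which can also pick up keys from a legacy-format credential file; a rough sketch of keeping the second account's keys separate (the file path and key values are placeholders, and your setup may resolve credentials differently):
# ~/.elasticbeanstalk/aws_credential_file (legacy key=value format)
AWSAccessKeyId=AKIA...SECOND_ACCOUNT_KEY
AWSSecretKey=SECOND_ACCOUNT_SECRET
# point this repository's push tooling at that file before running eb push / git aws.push
export AWS_CREDENTIAL_FILE=~/.elasticbeanstalk/aws_credential_file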