I'm a little stumped here. I've never seen an error from inspect.py before, but here I am trying to install some SSL certificates with certbot and an error occurs. The certbot log including the stack trace is here, but the error is:
File "/usr/lib64/python2.7/inspect.py", line 815, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <method-wrapper '__ne__' of type object at 0x1eeab80> is not a Python function
It occurs when running certbot certonly. Other commands, such as retrieving the version, are fine.
I've tried a few things (uninstalling/reinstalling, etc.) but to no avail. I'm trying to avoid installing from git or some other source (trying to stick with yum). Some more details:
CentOS 7.1.1503 (Core)
certbot 0.8.1
What's strange is that this error seems to point to a bug in the implementation, yet I'm using certbot on another CentOS 7 machine without issue.
Any help is greatly appreciated. I will open an issue on GitHub if appropriate, but figured I would ask here first.
You need to update cryptography with:
pip2 install -U cryptography
and maybe pyOpenSSL:
pip2 install -U pyOpenSSL
Or you can use one command to update certbot together with all of its requirements:
pip2 install -U certbot
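To confirm the upgrades took effect, a quick version check might look like this (both packages expose __version__; the python2 invocation matches the pip2 commands above):
python2 -c "import cryptography, OpenSSL; print cryptography.__version__, OpenSSL.__version__"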
I want to use pycurl in order to measure TTFB and TTLB, but I am unable to call pycurl from an AWS lambda.
To focus on the issue, let's say I call this simple lambda function:
import json
import pycurl
import certifi

def lambda_handler(event, context):
    client_curl = pycurl.Curl()
    client_curl.setopt(pycurl.CAINFO, certifi.where())
    client_curl.setopt(pycurl.URL, "https://www.arolla.fr/blog/author/edouard-gomez-vaez/")  # set url
    client_curl.setopt(pycurl.FOLLOWLOCATION, 1)
    client_curl.setopt(pycurl.WRITEFUNCTION, lambda x: None)  # discard the response body
    client_curl.perform()  # perform() returns None; timings come from getinfo()
    dns_time = client_curl.getinfo(pycurl.NAMELOOKUP_TIME)  # DNS time
    conn_time = client_curl.getinfo(pycurl.CONNECT_TIME)  # TCP/IP 3-way handshaking time
    starttransfer_time = client_curl.getinfo(pycurl.STARTTRANSFER_TIME)  # time-to-first-byte time
    total_time = client_curl.getinfo(pycurl.TOTAL_TIME)  # total request time
    client_curl.close()
    data = json.dumps({
        'dns_time': dns_time,
        'conn_time': conn_time,
        'starttransfer_time': starttransfer_time,
        'total_time': total_time,
    })
    return {
        'statusCode': 200,
        'body': data,
    }
I have the following error, which is understandable:
Unable to import module 'lambda_function': No module named 'pycurl'
I followed the tutorial https://aws.amazon.com/fr/premiumsupport/knowledge-center/lambda-layer-simulated-docker/ in order to create a layer, but then got the following error while generating the layer with docker (I extracted the interesting part):
Could not run curl-config: [Errno 2] No such file or directory: 'curl-config': 'curl-config'
I even tried to generate the layer by just running, on my own machine:
pip install -r requirements.txt -t python/lib/python3.6/site-packages/
zip -r mypythonlibs.zip python > /dev/null
And then uploading the zip as a layer in AWS, but I then got another error when launching the lambda:
Unable to import module 'lambda_function': libssl.so.1.0.0: cannot open shared object file: No such file or directory
It seems the layer has to be built in a target environment extended with the right system libraries.
After a couple of hours scratching my head, I managed to resolve this issue.
TL;DR: build the layer using a docker image inherited from the AWS one, but with the needed libraries installed, for instance libcurl-devel, openssl-devel, python36-devel. Have a look at the trick in Note 3 :).
The detailed way:
Prerequisite: having Docker installed
In an empty directory, copy your requirements.txt containing pycurl (in my case: pycurl~=7.43.0.5)
In this same directory, create the following Dockerfile (cf. Note 3):
FROM public.ecr.aws/sam/build-python3.6
RUN yum install libcurl-devel python36-devel -y
RUN yum install openssl-devel -y
ENV PYCURL_SSL_LIBRARY=openssl
RUN ln -s /usr/include /var/lang/include
Build the docker image:
docker build -t build-python3.6-pycurl .
Build the layer using this image (cf. Note 2) by running:
docker run -v "$PWD":/var/task "build-python3.6-pycurl" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.6/site-packages/; exit"
Zip the layer by running:
zip -r mylayer.zip python > /dev/null
Send the file mylayer.zip to AWS as a layer and make your lambda point to it (using the console, or following the tutorial https://aws.amazon.com/fr/premiumsupport/knowledge-center/lambda-layer-simulated-docker/).
Test your lambda and celebrate!
Note 1. If you want to use python 3.8, just replace 3.6 with 3.8 and 36 with 38.
Note 2. Do not forget to remove the python folder when regenerating the layer, using admin rights if necessary.
Note 3. Mind the symlink in the last line of the Dockerfile. Without it, gcc won't be able to find some header files, such as Python.h.
Note 4. Compile pycurl with the openssl backend, since that is the SSL backend used in the lambda execution environment. Otherwise you'll get a libcurl link-time ssl backend (openssl) is different from compile-time ssl backend error when executing the lambda.
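To verify Note 4 after building the layer, you can inspect pycurl's build string, which names the SSL backend it was compiled against. A quick check, reusing the docker run pattern from above (the PYTHONPATH value is just the layer's install directory mounted at /var/task):
docker run -v "$PWD":/var/task "build-python3.6-pycurl" /bin/sh -c "PYTHONPATH=/var/task/python/lib/python3.6/site-packages python3.6 -c 'import pycurl; print(pycurl.version)'"
The output should mention OpenSSL (e.g. PycURL/7.43.0.5 libcurl/... OpenSSL/...) rather than another backend such as NSS.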
When I try to do a pip install of Flask I get some SSL errors. If I try to add trusted-host exceptions there is no difference. I googled around and found some discussion on this from a year ago but nothing else (see https://github.com/pypa/pip/issues/5063).
(venv) pip install -U flask --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org
Collecting flask
Could not fetch URL https://pypi.python.org/simple/flask/: There was a problem confirming
the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
Could not find a version that satisfies the requirement flask (from versions: )
No matching distribution found for flask
NOTE: THIS MIGHT ALREADY BE SOMETHING THAT YOU HAVE TRIED
This is probably due to the fact that you are using --trusted-host=pypi.python.org.
In April 2018 the Python Package Index was migrated from pypi.python.org to pypi.org, which means "trusted-host" commands using the old domain no longer work.
So the command you are looking for would be pip install -U flask --trusted-host=pypi.org --trusted-host=files.pythonhosted.org.
For further details, have a look at this.
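If you'd rather not pass those flags on every install, pip can also read them from a config file. A minimal sketch, assuming a user-level config at ~/.pip/pip.conf (the location varies by platform):
[global]
trusted-host = pypi.org
               files.pythonhosted.org
With that in place, a plain pip install -U flask should hit the new domains.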
We've been using appcfg.py request_logs to download GAE logs; every once in a while it throws the error:
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
But after a few tries it works out; sometimes it also works after updating gcloud using gcloud components update. We thought it might be some kind of network throttling issue and didn't give it enough thought. Lately, though, we've been trying to figure out what is causing this.
The full command we use is:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append --no_cookies
It seems the error is related to the httplib2 library, but since it is bundled with appcfg.py we're not sure we should tamper with its internals.
Versions:
Python 2.7.13
Google Cloud SDK 196.0.0
app-engine-python 1.9.67
This has become more persistent; I haven't been able to download logs for a few days now, no matter how many times I try.
Looking at the download logs command I tried the same command again but without the --no_cookies flag to see what would happen.
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append
I got the error:
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u'e~testapp').
--- end server output ---
This led me to the answer provided here https://stackoverflow.com/a/34694577/1394228 by @ninjahoahong. This worked for me and logs were downloaded on the first try, in case someone faces the same issue.
There's also this Google Group post which I didn't try but seems like it does the same thing.
Not sure if removing the file ~/.appcfg_oauth2_tokens would have other effects, yet to find out.
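For reference, the fix from that linked answer boils down to deleting the cached OAuth2 token so appcfg.py re-authenticates on its next run:
rm ~/.appcfg_oauth2_tokens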
Update:
I also found out that my httplib2 located at /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2 was version = "0.7.5". I upgraded it to version = '0.11.3' using a target-location (directory) upgrade command:
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
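To confirm which copy the SDK now bundles, a quick check of that directory (same path as above, prepended to sys.path) might be:
python2 -c "import sys; sys.path.insert(0, '/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2'); import httplib2; print httplib2.__version__"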
Being the resident tech in the family, I'm helping launch the new family business website. My experience is extremely limited when it comes to coding and web development (I made a basic HTML/CSS website in high school), so please bear with me.
So far I have the domain, hosting and DNS working. The host is AWS Lightsail with WordPress running on Ubuntu 16.04 and Bitnami. Now I'm trying to get SSL set up, as we want to take credit card payments on the website. After a couple of days of research I've gone down the path of Let's Encrypt and I'm trying to get the certificate onto the server. Stop me if I've already made some sort of critical error.
Anyway, I'm using instructions from: https://certbot.eff.org/#ubuntuxenial-apache
and I've made some progress, up to a point. See the full paste from PuTTY:
https://pastebin.com/dhLs7c3A
To summarize, I ran the line:
root@ip-172-26-2-150:/home/bitnami# sudo certbot --apache -d profq.com.au -d www.profq.com.au
and the issue starts at:
Error while running apache2ctl graceful.
httpd not running, trying to start
Action 'graceful' failed.
Any help or advice is greatly appreciated. Thank you
Have you tried the Bitnami tool? It sounds relevant to what you described: WordPress on Lightsail.
To launch the Bitnami HTTPS Configuration Tool, execute the following command and follow the prompts:
sudo /opt/bitnami/bncert-tool
You may need to run sudo su to run as root.
This should easily fix the issue.
I ran into the same issue yesterday, and since no solution has been suggested I will write up how I fixed it.
Apparently this issue is not directly connected with the Lightsail instance or the running Apache server, but with the Bitnami stack on top of it. Here are the steps to install a Let's Encrypt certificate, taken from here.
Prerequisite
The first thing you need to do is make sure all the packages on your server are up to date. You can do that with the commands below.
sudo apt update
sudo apt upgrade
1. INSTALL CERTBOT
First, create a directory where you want to install the Certbot client and move into that directory.
sudo mkdir /opt/bitnami/letsencrypt
cd /opt/bitnami/letsencrypt
Now go ahead and install the Certbot client from the official certbot distribution. You also need to make sure the script has execute permission.
sudo wget https://dl.eff.org/certbot-auto
sudo chmod a+x ./certbot-auto
Now run the certbot-auto script to complete the installation. The script might show some errors, but you can ignore them. It will run and download all the dependencies it needs.
sudo ./certbot-auto
2. GENERATE CERTIFICATE
Once the Certbot client is installed, you can go ahead and generate the certificate for your domain.
sudo ./certbot-auto certonly --webroot -w /opt/bitnami/apache2/htdocs/{example} -d www.example.com -d example.com
{example} above is only needed if you don't store the files in the htdocs folder itself. www.example.com and example.com should be your domain name.
I ran into an issue after running this command, since I didn't have a CNAME record set for the www. version of my site. The error was:
DNS problem: NXDOMAIN looking up A for www.example.com
To fix it, go to your Lightsail page, open the Networking tab and select the DNS zone for your site. Click on Add record under DNS records, select CNAME, enter just www in the subdomain field, and in the maps to field enter your domain without the www. prefix. After doing that, running the above command should pass without any issues.
If you need to get certificates for multiple domains, follow this guide. It basically adds a new -w webroot path for each domain's home directory, resulting in the following command:
certbot certonly --webroot -w /opt/bitnami/apache2/htdocs/example -d www.example.com -d example.com -w /opt/bitnami/apache2/htdocs/other -d www.other.net -d other.net
3. Link Let's Encrypt SSL Certificate to Apache
You could just copy your SSL certificate files to these locations and restart Apache to pick up the new files. But with this approach you will have to copy the files again every time you renew your certificate.
So the better approach is to create symbolic links to your certificate files. Whenever you renew your certificate, the renewal takes effect without this extra step.
You can use the below commands to create a symbolic link.
sudo ln -s /etc/letsencrypt/live/[DOMAIN]/fullchain.pem /opt/bitnami/apache2/conf/server.crt
sudo ln -s /etc/letsencrypt/live/[DOMAIN]/privkey.pem /opt/bitnami/apache2/conf/server.key
Make sure the certificate file names and paths are correct. If you receive an error that a file already exists, use the below commands to rename the existing files, then rerun the above two commands.
mv /opt/bitnami/apache2/conf/server.key /opt/bitnami/apache2/conf/serverkey.old
mv /opt/bitnami/apache2/conf/server.crt /opt/bitnami/apache2/conf/servercrt.old
Once your symbolic links are in place, restart the Apache server to put them into effect. Use the below command to restart the Apache server. You can restart it from the Lightsail page as well.
sudo /opt/bitnami/ctlscript.sh restart apache
That's it. After this, going to https://example.com should work and you should see your certificate.
Notice: the certificate is valid for 3 months only, so you need to renew it every 3 months, either manually or with a cron job (a sample crontab entry is sketched after these commands). To renew it manually when the time comes, run the below commands:
sudo apt update
sudo apt upgrade
cd /opt/bitnami/letsencrypt
sudo ./certbot-auto renew
sudo /opt/bitnami/ctlscript.sh restart apache
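For the cron route, a minimal sketch, assuming certbot-auto lives in /opt/bitnami/letsencrypt as above; add it to root's crontab with sudo crontab -e:
# Attempt renewal on the 1st of every month at 03:30; 'renew' only replaces
# certificates close to expiry, so running it monthly is safe.
30 3 1 * * /opt/bitnami/letsencrypt/certbot-auto renew --quiet && /opt/bitnami/ctlscript.sh restart apache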
When I try to push incremental changes to the AWS Elastic Beanstalk environment I get the following:
$ git aws.push
Updating the AWS Elastic Beanstalk environment None...
Error: Failed to get the Amazon S3 bucket name
I've already added FULLS3Access to my AWS user's policies.
I had a similar issue today; here are the steps I followed to investigate:
I modified line 133 of .git/AWSDevTools/aws/dev_tools.py to print the exception, like so:
except Exception, e:
    print e
* Please be careful with the spacing, as Python breaks on incorrect indentation.
I ran the command git aws.push again, and here is the exception printed:
BotoServerError: 403 Forbidden
{"Error":{"Code":"SignatureDoesNotMatch","Message":"Signature not yet current: 20150512T181122Z is still later than 20150512T181112Z (20150512T180612Z + 5 min.)","Type":"Sender"},"
The issue was a time difference between the server and my machine; I corrected it and it started working fine.
Basically the printed exception helps you find the exact root cause; it may be related to the secret key as well.
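For reference, one way to sync the server clock on Ubuntu/Debian (assuming the ntpdate package is installed) is:
sudo ntpdate pool.ntp.org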
It may have something to do with the boto library (related thread). If you are on Ubuntu/Debian try this:
Remove old version of boto
sudo apt-get remove python-boto
Install newer version
sudo apt-get install python-pip
sudo pip install -U boto
Other systems (e.g. Mac)
Via easy_install
sudo easy_install pip
pip install boto
Or simply build from source
git clone git://github.com/boto/boto.git
cd boto
python setup.py install
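To verify which boto gets picked up after installing, a quick check (Python 2, matching the era of these tools) might be:
python -c "import boto; print boto.__version__"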
Had the same problem a moment ago.
Note:
I just noticed your environment is called None. Did you follow all the instructions and execute eb config / eb init?
One more try:
Add export PATH=$PATH:<path to unzipped eb CLI package>/AWSDevTools/Linux/ to your PATH and execute AWSDevTools-RepositorySetup.sh; maybe something is wrong with your repository setup (notice the None weirdness). Other possible solutions:
Double-check your AWS credentials (maybe you are using different key IDs or the wrong credentials-file format)
Old/mismatching versions of the eb client & Python (check with eb -v and python -V) (current client is this)
Use Amazon's policy validator to double-check that your AWS user is allowed to perform all actions
If all that doesn't help, I'm out of options. Good luck.