I have a Django application and am using AWS servers to host the app. Before, I used to do
git add .
git commit -m 'made changes'
git aws.push
and it used to work perfectly fine. All of a sudden, I did it again after a few weeks and now it says
Error: Failed to get the Amazon S3 bucket name
. When I do
eb status
it says
routines: SSL3_GET_SERVER_CERTIFICATE: certificate verify failed
Why is it giving me these errors when it used to work perfectly fine a few weeks ago? I never changed my IAM user's password. Does it change automatically? Because I still have the credentials file from when I created the IAM user, and the password in that file is the same password I am using.
Is there any error log where I can get further information to debug this issue?
It's a trite answer, but I'd suggest updating eb. They've made some improvements to the API since I last updated earlier this week, and some changes to their architecture for Python apps (now defaulting to Python 3). Running pip install awsebcli --upgrade may do the trick if Amazon has made potentially breaking changes, or if the Boto library is out of date on your machine.
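If it helps, the whole upgrade-and-retest sequence is just a few commands (a sketch; run it in whatever environment the EB CLI was installed into):
pip install --upgrade awsebcli
eb --version
eb status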
Related
I've got a Django project which works great. Previously we just cloned down and used password authentication. I changed the remote to git@bitbucket.org:myteam/our_repo.git
Recently we started requiring 2FA, so now we can only clone down over SSH.
For this project, I created an access key (read-only, which is all I need for cloning down on a staging server) and I was able to clone down the repo (git clone git@bitbucket.org:myteam/our_repo.git) without issue and get it all set up. This appeared to have worked.
The other server admin remoted in and tried to run git pull origin master, and he got a permissions error. His Windows user is part of the Administrators group, but for some reason that didn't matter: his local user had to be added to the directory with full access before he could run git pull origin master.
It appears that this permission issue is causing other issues, too. File uploads (from the Django admin) are no longer actually uploading the files into the directory on the server - my guess is that this is related to the permissions issue, too. Nothing was changed to impact this - the project was just cloned down over SSH.
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more? This wasn't an issue before - only since we've switched over to SSH.
Any feedback is helpful!
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more?
No, it does not change anything locally.
And 2FA only impacts HTTPS URLs (where your password must be a PAT, a Personal Access Token).
It has no bearing on SSH URLs.
Check first the output of ssh -Tv git@bitbucket.org.
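If that SSH test succeeds but git pull still fails, it is worth confirming the remote really uses the SSH form of the URL; a minimal sketch, assuming the remote is named origin and using the repo path from the question:
git remote -v
git remote set-url origin git@bitbucket.org:myteam/our_repo.git
ssh -T git@bitbucket.org
git pull origin master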
I'm having the same problem as Vaclav. I've followed the GCR quick start to the letter, which entailed creating a new project (called gcr-project) and copying the code for a Flask (Python) app.
After building the docker image, I entered the commands:
gcloud auth configure-docker
docker tag quickstart-image gcr.io/gcr-project/quickstart-image:tag1
docker push gcr.io/gcr-project/quickstart-image:tag1
The response was:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So it would be nice to know whether the issue is with the credentials (the Cloud SDK works fine for my other projects) or with permissions. The documentation here suggests you need storage-admin rights, but the project already has it; see screen cap here.
Would appreciate any tips for troubleshooting this, as I was looking forward to using GCR, but this problem is a hard stop for me.
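For anyone triaging the same thing, a quick way to separate a credentials problem from a permissions problem is to ask the SDK what it is actually using (a sketch, using the project ID from the quick start):
gcloud auth list
gcloud config get-value project
gcloud services list --enabled --project gcr-project | grep -i containerregistry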
UPDATE:
I tried the same process with the cloud shell
me@cloudshell:~ (gcr-project-XXXXXX)$ docker push gcr.io/gcr-project/quickstart-image:tag1
The push refers to repository [gcr.io/gcr-project/quickstart-image]
4399528b7213: Preparing
1d10b1eeca74: Preparing
75156020d862: Preparing
c5697656a146: Preparing
2a435270de82: Preparing
c35f70b5c25a: Waiting
28e260baaf1b: Waiting
556c5fb0d91b: Waiting
denied: Token exchange failed for project 'gcr-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr-project before performing this operation.
me@cloudshell:~ (gcr-project-XXXXXX)$
This prompted me to check the API & Services dashboard to confirm the container-registry API was enabled - It is.
UPDATE 2:
I'm having these problems on a machine running Ubuntu 19.04. Per the comments below, I was able to do a push via the cloud shell. So I then went through the same exercise on a MacBook Pro - worked, no problems.
So I then uninstalled the Cloud SDK per the docs, having used the standard Linux install instructions previously, and re-installed using the Debian/Ubuntu install instructions (version 274.0.1-0)... STILL no go.
When I do a docker pull of the image (because the push worked on the MBP) I get this error: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
And when I do a push I get this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So at this stage, given the success on the MBP and the lack thereof on the Linux/Ubuntu machine, the problem is constrained to Linux/Ubuntu installs.
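One thing worth checking on the Ubuntu box is whether gcloud auth configure-docker actually registered the credential helper; a sketch, assuming Docker's default config location:
cat ~/.docker/config.json
# expect a credHelpers entry mapping gcr.io to gcloud; if it's missing, re-run:
gcloud auth configure-docker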
UPDATE 3:
I got onto a separate Ubuntu server, did a clean install with sudo snap install google-cloud-sdk --classic, did everything else per the docs, and still had the exact same problem. So I reckon this is a Linux-specific Cloud SDK problem.
Is there anyone out there in Ubuntu land who has been able to install and use the Cloud SDK with GCR recently?
I was able to replicate this issue on multiple Ubuntu machines. I tried again after the most recent Cloud SDK update (276.0.0) but had no luck.
In the end, as a workaround, I went with the JSON key file authentication described in the docs here, which worked fine.
I'm working with a website running on Laravel. The site works fine on my local through Homestead; no problems.
Recently, I pushed the git repo up to a server that never had this site running on it before. I set everything up right (had some nginx config issues for a while, but got those all sorted out). Nginx has the public folder set as the site root, so it hits the proper index page when you load the page.
What I'm getting is a 500 error. My error logs reveal the following is the reason:
site_root/public/../bootstrap/autoload.php - Failed to open stream: permission denied in site_root/public/index.php on line 22
I can confirm that the bootstrap folder and the autoload.php file are both accessible by the web user, and have permissions that should allow access.
I've read a few cases online of people solving this issue with a 'composer install'. I tried updating composer, doing an install, and dumping its cache. I also tried removing the vendor folder (which had been a part of the git repo), and running composer install to regenerate it. None of these have worked. Happy to supply any info that will help. This is Laravel 5.2 running on Ubuntu Server 14.04 with nginx, all on an AWS box.
Solved it. This was actually an issue with site-wide permissions. They were set to 770 instead of 775. I suspect that I can and should restrict them further. For now, I'm just happy to have it loading again.
Moral of the story: check your permissions site-wide, not just on the file named in the fatal error. You may continue to get the same fatal error despite permissions being wide open on the mentioned file. If so, look for permissions issues elsewhere.
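For reference, the kind of reset that fixes this is along these lines; a sketch only, assuming the deploy user is ubuntu, nginx runs as www-data, and the project lives at /var/www/site:
sudo chown -R ubuntu:www-data /var/www/site
sudo find /var/www/site -type d -exec chmod 775 {} \;
sudo find /var/www/site -type f -exec chmod 664 {} \;
# Laravel also needs these two locations writable by the web user:
sudo chmod -R ug+rwx /var/www/site/storage /var/www/site/bootstrap/cache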
I am trying to deploy an application version but eb deploy command fails with:
ERROR: Update environment operation is complete, but with errors. For
more information, see troubleshooting documentation.
I checked the logs, made some changes to the code, committed and deployed again, and guess what, it failed again. The logs indicate the same error, disregarding my changes. The error occurs in a file under /var/app/ondeck/app/; when I go check, I can see the previous version is there.
I tried deploying using the Elastic Beanstalk dashboard, but somehow the instance is not receiving the new version. Can someone help me with this? Thanks.
Just had the same problem and noticed this in the documentation:
"Note
If you have initialized a git repository in your project folder, the EB CLI will always deploy the latest commit, even if you have pending changes. Commit your changes prior to running eb deploy to deploy them to your environment."
Made the commits and it worked fine.
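In other words (a sketch; the commit message is just a placeholder):
git add -A
git commit -m "changes I actually want deployed"
eb deploy
Newer versions of the EB CLI also accept eb deploy --staged, which deploys whatever is in the git staging area without requiring a commit.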
I am following the tutorial for deploying a django project on AWS elastic beanstalk here:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html
My app works when I test locally but when I deploy, I'm getting a 404 error. Looking at the event logs, I see this message:
Error running user's commands : An error occurred running '. /opt/python/ondeck/env && PYTHONPATH=/opt/python/ondeck/app: django-admin.py syncdb --noinput' (rc: 127) /bin/sh: django-admin.py: command not found
That leads me to believe that the tutorial is missing a part about installing Django on the server, or at least about configuring my project to recognize django-admin.py. I have Django installed on my local machine, so it works there.
I know Python support is brand new for Elastic Beanstalk, but has anyone deployed Django to it?
I believe you don't need to put container_commands in your .config file, because there is no database or table at this moment.
Did you do this step: freeze the requirements.txt file?
(djangodev)$ pip freeze > requirements.txt
Note
Make sure your requirements.txt file contains the following:
Django==1.4.1
MySQL-python==1.2.3
I had the same problem because I skipped that step. Once I did it, then added, committed, and pushed, it worked!
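For completeness, the whole sequence from that era of the tutorial looks roughly like this (a sketch; the virtualenv name and commit message are just placeholders):
(djangodev)$ pip freeze > requirements.txt
(djangodev)$ git add requirements.txt
(djangodev)$ git commit -m "added requirements.txt"
(djangodev)$ git aws.push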
I followed the same tutorial recently and had a similar result.
At step 6, upon seeing the default django 'congrats' page render locally, I deployed to EB as instructed and got a 404 instead of the default 'congrats' page.
I decided to use the code up to that point as a foundation for following the 'getting started with django' tutorial, which led me to a successful rendering of a 'home' view. This is a much more useful place to be anyway. I do agree that there is something wrong with the AWS tutorial, and I posted to the AWS forums here.
If you can, you should try to access the log file; it might give you a better idea of what's going on. Here's a link that might help:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html
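With current versions of the EB CLI you can also pull the logs straight from the command line instead of going through S3 (a sketch):
eb logs
eb logs --all
The first tails the most recent log entries from the environment's instances; the second retrieves the full log bundle locally.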