Travis - gcloud crashed (AttributeError): '_RSAPrivateKey' object has no attribute 'sign' - google-cloud-platform

UPDATE
Here is part of the Travis file:
before_install:
  #openssl stuff regarding credentials.tar.gz
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d $HOME/google-cloud-sdk/bin ]; then rm -rf $HOME/google-cloud-sdk; curl https://sdk.cloud.google.com | bash; fi
  - tar -xzf credentials.tar.gz
  - "$HOME/google-cloud-sdk/bin/gcloud components update"
  - "pip install -U cryptography"
  - "$HOME/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file travis-credentials.json" # ERROR HAPPENS HERE
  #ERROR IS = Gcloud crashed (AttributeError): '_RSAPrivateKey' object has no attribute 'sign'
  - "$HOME/google-cloud-sdk/bin/gcloud auth configure-docker"
And this is what I can't understand. I would assume that if this were caused by a sudden gcloud version upgrade that is incompatible with cryptography, then lots of applications would have failed and my fix attempts would have worked. But this used to work until I got the aforementioned email, so I suspect something got messed up after that email, though that is just a wild guess.
The full Travis file:
language: python
python: 2.7
branches:
  only:
    - master
services:
  - docker
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
env:
  - GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  #unrelated stuff
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d $HOME/google-cloud-sdk/bin ]; then rm -rf $HOME/google-cloud-sdk; curl https://sdk.cloud.google.com | bash; fi
  - tar -xzf credentials.tar.gz
  - "$HOME/google-cloud-sdk/bin/gcloud components update"
  - "pip install -U cryptography"
  - "$HOME/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file travis-credentials.json"
  - "$HOME/google-cloud-sdk/bin/gcloud auth configure-docker"
install:
  #push image to gcr
script:
  - echo "done"
The same question is asked here, but updating the cryptography module didn't resolve the issue (I tried 3 different versions, from the latest down to the one listed in the answer, 2.6.1). 3 days ago I received an email from Google which said the following.
Hello Cloud Shell user,
It's been over 120 days since you opened Cloud Shell from the Google
Cloud Platform console. In 7 days, your Cloud Shell home directory
will be automatically scheduled for deletion.
To keep your Cloud Shell home directory and its data, just log in and
open Cloud Shell.
I opened the shell to keep it activated, but when I tried to deploy my Django application with Travis, I got the following error while executing a gcloud command.
$HOME/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file travis-credentials.json
ERROR: gcloud crashed (AttributeError): '_RSAPrivateKey' object has no attribute 'sign'
I tried 2.6.1, 2.8 (my previous version), and 3.4.1 (the most recent version), but none of them worked. Any idea how to fix this? My last build was a month ago and it worked successfully without any configuration changes.

Apparently, this issue is related to the gcloud version I was using. I always fetch the latest version (currently 331.0.0). Although it is not desirable, downgrading the gcloud SDK to 330.0.0 resolved the issue.
gcloud components update --version 330.0.0
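In the Travis config above, that means replacing the plain components update step with the pinned version, roughly like this (a sketch; the path matches the cached SDK location used earlier):
before_install:
  # pin the SDK to a known-good release instead of always updating to the latest
  - "$HOME/google-cloud-sdk/bin/gcloud components update --version 330.0.0"
  - "pip install -U cryptography"
  - "$HOME/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file travis-credentials.json"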

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
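For reference, the systemd workaround mentioned above looks roughly like this on my machines (the drop-in file name is arbitrary and the credential values are placeholders); it simply hands the standard AWS SDK environment variables to the Docker daemon:
# /etc/systemd/system/docker.service.d/aws-credentials.conf (example drop-in)
[Service]
Environment="AWS_ACCESS_KEY_ID=<access-key-id>"
Environment="AWS_SECRET_ACCESS_KEY=<secret-access-key>"
Environment="AWS_REGION=us-east-1"
followed by sudo systemctl daemon-reload && sudo systemctl restart docker.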
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the filesystem root folder (/), not in the $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!

Deploying Rails 6.0 to AWS EB, Webpacker requires Node.js version error

I am attempting to deploy a Rails app that was recently updated from Rails 5.2 to 6 to AWS Elastic Beanstalk. We had someone else working on this, but with the pandemic he had to step away, and now our site is kind of in limbo and I have not been able to update it. I have searched many different variations of my problem but no solutions have worked yet.
The app was working on EB with Rails 5.2. I have the app running in 6.0 locally. When I run eb deploy I get this error:
MacBook-Pro:app $ eb deploy
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
Finished processing application version app-0e294-200420_110159
2020-04-21 00:22:24 INFO Environment update is starting.
2020-04-21 00:23:07 INFO Deploying new version to instance(s).
2020-04-21 00:27:59 ERROR [Instance: i-0e613ac1fe175f3f6] Command failed on instance. Return code: 1 Output: (TRUNCATED)...-- : Writing /var/app/ondeck/public/assets/application-06fe3df6175ba0def3d0e732489f883d0c09de.css.gz
Webpacker requires Node.js ">=10.13.0" and you are using v6.17.1
Please upgrade Node.js https://nodejs.org/en/download/
Exiting!.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/11_asset_compilation.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2020-04-21 00:27:59 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-04-21 00:28:00 ERROR Unsuccessful command execution on instance id(s) 'i-0e613ac1fe175f3f6'. Aborting the operation.
2020-04-21 00:28:00 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
Before this it was giving me a bundler error, which I was able to fix by adding a file to .ebextensions that installs the correct version of bundler. I figured the solution to this would be similar.
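For context, that bundler fix was an .ebextensions file along these lines (the file name and bundler version are illustrative placeholders, not necessarily the exact ones I used; on some platform versions you may need the full path to gem):
# .ebextensions/00_install_bundler.config (illustrative)
commands:
  01_install_bundler:
    command: gem install bundler -v 2.1.4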
This post was close to my problem:
Deploy rails react app with webpacker gem on AWS elastic beanstalk
So I added this file to my .ebextensions based off the selected answer of that:
01_update_note.config
commands:
  01_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
  02_install_nodejs:
    command: yum -y install nodejs
However, it did not appear to do anything; I still get the same error. I tried a couple of variations of the file based on a few other blog posts about the issue, but the error remains. Is anyone able to point me in the right direction or offer any insight into the problem? I apologize for not being very familiar with AWS or EB yet, but I will do my best to answer additional questions.
Maybe it is caused by yarn install running later.
I tried the following scripts, removed yarn install, set RAILS_SKIP_ASSET_COMPILATION=false, and it worked for me.
commands:
  01_install_yarn:
    command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash - && sudo yum install yarn -y"
  02_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash -
  03_install_nodejs:
    command: yum -y install nodejs
  04_install_packages:
    command: sudo yum install -y yarn
This is how I did it on Amazon Linux 2:
Create this file in .platform/hooks/prebuild/yarn_config.sh:
#!/usr/bin/env bash
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
sudo yum -y install yarn
yarn install
Give it the right permission: chmod +x .platform/hooks/prebuild/yarn_config.sh
And the error is gone, while your assets still compile (unlike with the accepted answer).

AWS CodeBuild as non-root user

Is there a way to drop root user on AWS CodeBuild?
We are building a Yocto project that fails on CodeBuild if we're root (Bitbake sanity check).
Our desperate approach doesn't work either:
...
build:
  commands:
    - chmod -R 777 $(pwd)/ && chown -R builder $(pwd)/ && su -c "$(pwd)/make.sh" -s /bin/bash builder
...
Fails with:
bash: /codebuild/output/src624711770/src/.../make.sh: Permission denied
Any idea how we could run this as a non-root user?
I succeeded in using a non-root user in AWS CodeBuild.
It takes much more than knowing some CodeBuild options to come up with a practical solution.
Everyone should spot the run-as option quite easily.
The next question is "which user?"; you cannot just put any word as a username.
In order to find out which users are available, the next clue is in the Docker images provided by CodeBuild section. There, you'll find a link to each image definition.
For me, the link leads to this page on GitHub.
After inspecting the source code of the Dockerfile, we learn that there is a user called codebuild-user available. And we can use this codebuild-user for run-as in the buildspec.
Then we face a whole lot of other problems, because the standard image installs each language's runtime for root only.
This is as far as generic explanations can go.
For me, I wanted to use the Ruby runtime, so my only concern is the Ruby runtime.
If you use CodeBuild for something else, you are on your own now.
In order to use the Ruby runtime as codebuild-user, we have to expose it from the root user. To do that, I change the required permissions and the owner of the .rbenv directory used by the CodeBuild image with the following commands.
chmod +x ~
chown -R codebuild-user:codebuild-user ~/.rbenv
Bundler (Ruby's dependency management tool) still wants to access the home directory, which is not writable. We have to set an environment variable to make it use another, writable location as its home directory.
The environment variable is BUNDLE_USER_HOME.
Putting everything together, my buildspec looks like:
version: 0.2
env:
  variables:
    RAILS_ENV: test
    BUNDLE_USER_HOME: /tmp/bundle-user
    BUNDLE_SILENCE_ROOT_WARNING: true
run-as: codebuild-user
phases:
  install:
    runtime-versions:
      ruby: 2.x
    run-as: root
    commands:
      - chmod +x ~
      - chown -R codebuild-user:codebuild-user ~/.rbenv
      - bundle config set path 'vendor/bundle'
      - bundle install
  build:
    commands:
      - bundle exec rails spec
cache:
  paths:
    - vendor/bundle/**/*
My points are:
It is, indeed, possible.
I've shown how I did it for my use case.
Thank you for this feature request. Currently you cannot run as a non-root user in CodeBuild, I have passed it to the team for further review. Your feedback is very much appreciated.
To run CodeBuild as non-root you need to specify a Linux username using the run-as tag in your buildspec.yaml, as shown in the docs:
version: 0.2
run-as: Linux-user-name
env:
  variables:
    key: "value"
    key: "value"
  parameter-store:
    key: "value"
    key: "value"
phases:
  install:
    run-as: Linux-user-name
    runtime-versions:
      runtime: version
What we ended up doing was the following:
Create a Dockerfile which contains everything needed to build a Yocto/Bitbake project, in which we ADD the required sources and create a user builder which we use to build our project.
FROM ubuntu:16.04
RUN apt-get update && apt-get -y upgrade
# Required Packages for the Host Development System
RUN apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
xz-utils debianutils iputils-ping vim
# Additional host packages required by poky/scripts/wic
RUN apt-get install -y curl dosfstools mtools parted syslinux tree
# Create a non-root user that will perform the actual build
RUN id builder 2>/dev/null || useradd --uid 30000 --create-home builder
RUN apt-get install -y sudo
RUN echo "builder ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
# Fix error "Please use a locale setting which supports utf-8."
# See https://wiki.yoctoproject.org/wiki/TipsAndTricks/ResolvingLocaleIssues
RUN apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
echo 'LANG="en_US.UTF-8"'>/etc/default/locale && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
WORKDIR /home/builder/
ADD ./ ./
USER builder
ENTRYPOINT ["/bin/bash", "-c", "./make.sh"]
We build this Docker image during the CodeBuild pre_build step and run the actual build in the ENTRYPOINT (in make.sh) when we run the image. After the container has exited, we copy the artifacts to the CodeBuild host and put them on S3:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir ./images
      - docker build -t bob .
  build:
    commands:
      - docker run bob:latest
  post_build:
    commands:
      # copy the last exited container's images onto the host as build artifacts
      - docker cp $(docker container ls -a | head -2 | tail -1 | awk '{ print $1 }'):/home/builder/yocto-env/build/tmp/deploy/images ./images
      - tar -cvzf artifacts.tar.gz ./images/*
artifacts:
  files:
    - artifacts.tar.gz
The only drawback of this approach is that we can't (easily) use CodeBuild's caching functionality. But the build is sufficiently fast for us, since we do local builds during the day and basically one rebuild from scratch at night, which takes about 90 minutes (on the most powerful CodeBuild instance).
Sigh, so I came across this question and I am disappointed that there is no good or simple answer to this problem. There are many, many processes that strongly discourage running as root, like composer, and others that will flat-out refuse, like wp-cli. If you are using the Ubuntu "standard image" provided by AWS, then there appears to be an existing user in the /etc/passwd file, dockremap:x:1000:1000::/home/dockremap:/bin/sh. I think this user is for userns-remap in Docker and I am not sure about its availability. The other option that astonishingly hasn't been mentioned is running useradd -N -G users develop to create a new user in the container, as sketched below. It is far simpler than spinning up a custom container for something so trivial.
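A minimal sketch of that useradd approach, assuming the stock Ubuntu standard image (the username develop and the copy to /tmp are arbitrary choices; working from /tmp may sidestep the parent-directory permission problem seen in the question, and make.sh is the script from the question):
version: 0.2
phases:
  install:
    commands:
      # create an unprivileged user; -N skips creating a same-named group
      - useradd -N -G users develop
  build:
    commands:
      # copy the source to a location the new user can traverse, then build as that user
      - cp -r "$(pwd)" /tmp/src && chown -R develop /tmp/src
      - su -s /bin/bash develop -c "cd /tmp/src && ./make.sh"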

Installing gcloud on Travis CI

I'm following this tutorial on how to use Travis CI with Google Cloud for Continuous Deployments:
https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
When Travis builds, it tells me that the gcloud command is not found. Here's my .travis file:
sudo: false
language: python
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
env:
  - GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - openssl aes-256-cbc -K $encrypted_404aa45a170f_key -iv $encrypted_404aa45a170f_iv -in credentials.tar.gz.enc -out credentials.tar.gz -d
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
  - tar -xzf credentials.tar.gz
  - mkdir -p lib
  - gcloud auth activate-service-account --key-file client-secret.json
install:
  - gcloud config set project continuous-deployment-192112
  - gcloud -q components update gae-python
  - pip install -r requirements.txt -t lib/
script:
  - python test_main.py
  - gcloud -q preview app deploy app.yaml --promote
  - python e2e_test.py
This is the same file provided by the example repository from the tutorial. The line that fails is:
- gcloud auth activate-service-account --key-file client-secret.json
Even though it has already checked for the SDK and installed it if it wasn't there.
I've already tried adding - source ~/.bash_profile after the install, but this doesn't work.
Am I missing a command somewhere?
I ran into the same issue and this has worked for me:
- if [ ! -d "$HOME/google-cloud-sdk" ]; then
    export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)";
    echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list;
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ;
    sudo apt-get update && sudo apt-get install google-cloud-sdk;
  fi
The only issue, however, is that since it needs sudo, it will run on GCE, which is much slower than EC2:
https://docs.travis-ci.com/user/reference/overview/#Virtualisation-Environment-vs-Operating-System
Updated:
This is the best solution -
How to install Google Cloud SDK on Travis?
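The gist of the linked approach is to install the SDK into the cached directory and then source the path helper it ships with, so gcloud ends up on PATH for the rest of the build. A rough sketch, assuming the default install location already used in the question:
before_install:
  - if [ ! -d "$HOME/google-cloud-sdk/bin" ]; then rm -rf "$HOME/google-cloud-sdk"; curl https://sdk.cloud.google.com | bash > /dev/null; fi
  # path.bash.inc is shipped with the SDK and puts gcloud on PATH for this shell
  - source "$HOME/google-cloud-sdk/path.bash.inc"
  - gcloud version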

Why can't I export PATH from an .ebextensions config?

Hi, I want to use goofys on an AWS Elastic Beanstalk PHP 7.0 environment.
I created .ebextensions/00_install_goofy.config.
(I install golang from a binary because the golang version available via yum is old.)
packages:
  yum:
    fuse: []
commands:
  100_install_golang_01:
    command: wget https\://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
  100_install_golang_02:
    command: tar -C /usr/local -xzf go1.9.linux-amd64.tar.gz
  100_install_golang_03:
    command: export GOROOT=/usr/local/go
    test: [ -z "$GOROOT" ]
  100_install_golang_04:
    command: export GOPATH=/home/ec2-user/go
    test: [ -z "$GOPATH" ]
  100_install_golang_05:
    command: export PATH=$PATH\:$GOROOT/bin\:$GOPATH/bin
  100_install_golang_06:
    command: echo $GOPATH > gopath
But 100_install_golang_03 does not work well...
Test for Command 100_install_golang_03
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_yubion_website] : Completed activity.
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild] : Activity execution failed, because: [Errno 2] No such file or directory (ElasticBeanstalk::ExternalInvocationError)
I can't export environment variables and PATH. Can I set PATH in .ebextensions?
Or is there a better way to install goofys on Elastic Beanstalk automatically?
Finally I found that commands defined in .ebextensions run with NO ENVIRONMENT VALUES set.
They run in a sandbox-like environment.
So the scope of an "export" is limited to the single "command" entry it appears in.
If you want to use PATH in commands, you have to add the export to every command that needs it.
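In practice that means folding the exports and the command that needs them into one command entry, for example (a sketch based on the config above; the go paths are the ones from the original config):
commands:
  100_setup_golang:
    command: export GOROOT=/usr/local/go && export GOPATH=/home/ec2-user/go && export PATH=$PATH:$GOROOT/bin:$GOPATH/bin && go version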
Additionally, if you want to use PATH after eb deploy, see the following link.
How can I add PATH on Elastic Beanstalk