aws s3 ls: An HTTP Client raised an unhandled exception: Invalid header value

I'm trying to implement a pipeline that packages and copies Python code to S3 using GitLab CI.
Here is the job that is causing the problem:
package:
  stage: package
  image: python:3.8
  script:
    - apt-get update && apt-get install -y zip unzip jq
    - pip3 install awscli
    - aws s3 ls
    - ./cicd/scripts/copy_zip_to_s3.sh
  only:
    refs:
      - developer
I want to mention that in the before_script section of .gitlab-ci.yml, I've already exported the AWS credentials (AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, etc.) from the GitLab environment variables.
I've checked my credentials thousands of times and they are totally correct. I also want to mention that the same script works perfectly for another project under the same group in GitLab.
Here is the error:
$ aws s3 ls
An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=AKIAZXXXXXXXXXX\n/2020XX2/us-east-1/sts/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=ab53XX6eb72XXXXXX2152e4XXXX93b104XXXXXXX363b1da6f9XXXXX'
ERROR: Job failed: exit code 1
./cicd/scripts/copy_zip_to_s3.sh does the packaging and the copy; the same error occurs when executing it, which is why I added the simple aws s3 ls command, to show that even a plain 'ls' is not working.
Any solutions, please? Thank you all in advance.

This was because of an additional line (a stray newline) appended to the AWS_ACCESS_KEY_ID variable; it shows up as the \n in the signed header above.
Thanks to @jordanm
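If you can't immediately fix the variable in the GitLab UI, a minimal workaround sketch (using the same variable names the question already exports) is to strip stray carriage returns and newlines in before_script before any aws call:
before_script:
  # Drop any trailing \r or \n that slipped into the CI/CD variables
  - export AWS_ACCESS_KEY_ID="$(printf '%s' "$AWS_ACCESS_KEY_ID" | tr -d '\r\n')"
  - export AWS_SECRET_ACCESS_KEY="$(printf '%s' "$AWS_SECRET_ACCESS_KEY" | tr -d '\r\n')"
The proper fix is still to re-save the variable without the extra line.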

I had a similar issue when running a bash script under Cygwin on Windows. The fix was removing the \r\n from the end of the values I was putting into environment variables.
Here's my whole script if anyone is interested. It assumes an AWS role, sets those credentials in environment variables, then opens a new bash shell that will respect the variables just set.
#!/bin/bash

# Abort if the AWS CLI is not installed
hash aws 2>/dev/null
if [ $? -ne 0 ]; then
    echo >&2 "'aws' command line tool required, but not installed. Aborting."
    exit 1
fi

# Start from a clean slate so stale credentials don't interfere
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN

ROLEID=123918273981723
TARGETARN="arn:aws:iam::${ROLEID}:role/OrganizationAccountAccessRole"
COMMAND="aws sts assume-role --role-arn $TARGETARN --role-session-name damien_was_here"
RESULT=$($COMMAND)

# The calls to tr -d '\r\n' are the important part with regard to this question.
AccessKeyId=$(echo -n "$RESULT" | jq -r '.Credentials.AccessKeyId' | tr -d '\r\n')
SecretAccessKey=$(echo -n "$RESULT" | jq -r '.Credentials.SecretAccessKey' | tr -d '\r\n')
SessionToken=$(echo -n "$RESULT" | jq -r '.Credentials.SessionToken' | tr -d '\r\n')

export AWS_ACCESS_KEY_ID=$AccessKeyId
export AWS_SECRET_ACCESS_KEY=$SecretAccessKey
export AWS_SESSION_TOKEN=$SessionToken

echo "Running a new bash shell with the environment variables set."
bash
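To use it, run the script and then check, inside the new shell it opens, which credentials are active (the filename here is hypothetical; aws sts get-caller-identity is a standard way to verify the assumed role):
./assume_role.sh
aws sts get-caller-identity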

Related

Error deploying AWS CDK stacks with Azure Pipelines (using python in CDK)

Here is an app.py example of how my stacks are defined (with some information changed, as you can imagine):
Stack1(app, "Stack1", env=cdk.Environment(account='123456789', region='eu-west-1'))
In my Azure pipeline I'm trying to do a cdk deploy:
- task: AWSShellScript@1
  inputs:
    awsCredentials: 'Service_connection_name'
    regionName: 'eu-west-1'
    scriptType: 'inline'
    inlineScript: |
      sudo bash -c "cdk deploy '*' -v --ci --require-approval-never"
  displayName: "Deploying CDK stacks"
but getting errors. I have the service connection to AWS configured, but the first error was
[Stack_Name] failed: Error: Need to perform AWS calls for account [Account_number], but no credentials have been configured
Stack_Name and Account_Number have been redacted
After this error, I decided to add a step to my pipeline and manually create the files .aws/config and .aws/credentials
- script: |
    echo "Preparing for CDK"
    echo "Creating directory"
    sudo bash -c "mkdir -p ~/.aws"
    echo "Writing to files"
    sudo bash -c "echo -e '[default]\nregion = $AWS_REGION\noutput = json' > ~/.aws/config"
    sudo bash -c "echo -e '[default]\naws_access_key_id = $AWS_ACCESS_KEY_ID\naws_secret_access_key = $AWS_SECRET_ACCESS_KEY' > ~/.aws/credentials"
  displayName: "Setting up files for CDK"
After this I believed the credentials issue would be fixed, but it still failed. The verbose option revealed the following error amongst the output:
Setting "CDK_DEFAULT_REGION" environment variable to
So instead of setting the region to "eu-west-1", it is being set to nothing.
I imagine I'm missing something, so please educate me and help me get this working.
This happens because you're launching the command through sudo bash: sudo starts a separate shell that does not carry over the credential environment variables the AWSShellScript task populates.
To fix the credentials issue, replace the inline script with just cdk deploy '*' -v --ci --require-approval never
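As a sketch, here is the task from the question with only that change applied (same service connection and region names as above), so the credentials the task injects stay visible to cdk:
- task: AWSShellScript@1
  inputs:
    awsCredentials: 'Service_connection_name'
    regionName: 'eu-west-1'
    scriptType: 'inline'
    inlineScript: |
      cdk deploy '*' -v --ci --require-approval never
  displayName: "Deploying CDK stacks"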

Building container from Windows (local) or Linux (AWS EC2) has different effects

I have been playing around with AWS Batch, and I am having some trouble understanding why everything works when I build a Docker image from my local Windows machine and push it to ECR, while it doesn't work when I do the same from an Ubuntu EC2 instance.
What I show below is adapted from this tutorial.
The Dockerfile is very simple:
FROM python:3.6.10-alpine
RUN apk add --no-cache --upgrade bash
COPY ./ /usr/local/aws_batch_tutorial
RUN pip3 install -r /usr/local/aws_batch_tutorial/requirements.txt
WORKDIR /usr/local/aws_batch_tutorial
Where the local folder contains the following bash script (run_job.sh):
#!/bin/bash

error_exit () {
    echo "${BASENAME} - ${1}" >&2
    exit 1
}

################################################################################
###### Convert environment variables to command line arguments ########
pat="--([^ ]+).+"
arg_list=""
while IFS= read -r line; do
    # Check if line contains a command line argument
    if [[ $line =~ $pat ]]; then
        E=${BASH_REMATCH[1]}
        # Check that a matching environment variable is declared
        if [[ ! ${!E} == "" ]]; then
            # Make sure the argument isn't already included in the argument list
            if [[ ! ${arg_list} =~ "--${E}=" ]]; then
                # Add to argument list
                arg_list="${arg_list} --${E}=${!E}"
            fi
        fi
    fi
done < <(python3 script.py --help)
################################################################################

python3 -u script.py ${arg_list} | tee "${save_name}.txt"
aws s3 cp "./${save_name}.p" "s3://bucket/${save_name}.p" || error_exit "Failed to upload results to s3 bucket."
aws s3 cp "./${save_name}.txt" "s3://bucket/logs/${save_name}.txt" || error_exit "Failed to upload logs to s3 bucket."
It also contains a requirements.txt file with three packages (awscli, boto3, botocore), and a dummy Python script (script.py) that simply lists the files in an S3 bucket and saves the list in a file that is then uploaded to S3.
Both in my local Windows environment and on the EC2 instance I have set up my AWS credentials with aws configure, and in both cases I can successfully build the image, tag it, and push it to ECR.
The problem arises when I submit the job on AWS Batch, which should run the ECR container using the command ["./run_job.sh"]:
if AWS Batch uses the ECR image pushed from Windows, everything works fine
if it uses the image pushed from the EC2 Linux instance, the job fails, and the only info I can get is this:
Status reason: Task failed to start
I was wondering if anyone has any idea of what might be causing the error.
I think I fixed the problem.
The run_job.sh script in the Docker image has to have execute permission to be run by AWS Batch (but I think this is true in general).
For some reason, when the image is built from Windows the script has this permission, but it doesn't when the image is built from Linux (the AWS EC2 Ubuntu instance).
I fixed the problem by adding the following line in the Dockerfile:
RUN chmod u+x run_job.sh
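For reference, a sketch of where that line could go in the Dockerfile from the question (after WORKDIR, so the relative path resolves; an absolute path right after the COPY would work just as well):
FROM python:3.6.10-alpine
RUN apk add --no-cache --upgrade bash
COPY ./ /usr/local/aws_batch_tutorial
RUN pip3 install -r /usr/local/aws_batch_tutorial/requirements.txt
WORKDIR /usr/local/aws_batch_tutorial
RUN chmod u+x run_job.sh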

EC2 user data is not executed

I am setting up a web app through CodePipeline. My CloudFormation script creates an EC2 instance. In the EC2 user data I have written logic to fetch the code from S3, copy it onto the instance, and start the server. The web app uses the Python Pyramid framework.
CodePipeline is connected to GitHub. It creates a zip file and uploads it to the S3 bucket (that is all in a buildspec.yml file).
When I change the user data script and run the pipeline, it works fine.
But when I change a web app file (my code base) and re-run the pipeline, that change is not reflected.
This is for an Ubuntu EC2 instance.
#cloud-boothook
#!/bin/bash -xe
echo "hello "
exec > /etc/setup_log.txt 2> /etc/setup_err.txt
sleep 5s
echo "User_Data starts"
rm -rf /home/ubuntu/c
mkdir /home/ubuntu/c
key=`aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}'`
aws s3 cp s3://bucket-name/$key /home/ubuntu/c/
cd /home/ubuntu/c
zipname="$(cut -d'/' -f3 <<<"$key")"
echo $zipname
mv /home/ubuntu/c/$zipname /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
echo $?
python3 -m venv venv
venv/bin/pip3 install -e .
rm -rf c.zip
aws configure set default.region us-east-1
venv/bin/pserve development.ini http_port=5000 &
The expected result is that the user data script executes every time I run the pipeline.
Any suggestions are welcome.
The user data script gets executed exactly once, at instance creation. If you want to periodically synchronize your code changes to the instance, you should think about setting up a cron job from your user data script, or use a service like AWS CodeDeploy to deploy new versions (this is the preferred approach).
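A minimal sketch of the cron approach, assuming the fetch-and-restart logic from the user data above is moved into a standalone script (the path /home/ubuntu/sync_code.sh is hypothetical):
# Install a cron entry from user data that re-syncs the code every 5 minutes
cat > /etc/cron.d/sync-app <<'EOF'
*/5 * * * * ubuntu /home/ubuntu/sync_code.sh >> /home/ubuntu/sync_code.log 2>&1
EOF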
CodePipeline uses a different S3 object key for each pipeline execution artifact, so you can't hard-code a reference to it. You could publish the artifact to a fixed location instead. You might also want to consider using CodeDeploy to deploy the latest version of your application.
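A hedged sketch of the fixed-location idea, assuming you control the buildspec.yml mentioned in the question (the artifact and key names here are hypothetical):
phases:
  post_build:
    commands:
      # Also copy the freshly built zip to a stable key the user data can always fetch
      - aws s3 cp myapp.zip s3://bucket-name/pipeline-name/MyApp/latest.zip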

How to set an environment variable in Amazon EC2

I created a tag on the AWS console for one of my EC2 instances.
However, when I look on the server, no such environment variable is set.
The same thing works with Elastic Beanstalk: env shows the tags I created in the console.
$ env
[...]
DB_PORT=5432
How can I set environment variables in Amazon EC2?
You can retrieve this information from the instance metadata and then run your own commands to set environment variables.
You can get the instance ID from the metadata (see here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval):
curl http://169.254.169.254/latest/meta-data/instance-id
Then you can call describe-tags using the pre-installed AWS CLI (or install it on your AMI):
aws ec2 describe-tags --filters "Name=resource-id,Values=i-5f4e3d2a" "Name=key,Values=DB_PORT"
Then you can use the OS command to set the environment variable:
export DB_PORT=/what/you/got/from/the/previous/call
You can run all that in your user-data script. See here for details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
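Putting those steps together, a minimal sketch (it assumes the instance profile is allowed to call ec2:DescribeTags and that a default region is configured; the --query expression is my addition):
# Look up this instance's ID, read its DB_PORT tag, and export it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
DB_PORT=$(aws ec2 describe-tags \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=DB_PORT" \
  --query 'Tags[0].Value' --output text)
export DB_PORT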
Lately, it seems AWS Systems Manager Parameter Store is a better solution.
Now there is even Secrets Manager, which automatically manages sensitive configuration such as database keys.
See this script using SSM Parameter Store, based on the previous solutions by Guy and PJ Bergeron:
https://github.com/lezavala/ec2-ssm-env
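A minimal sketch of that approach, assuming a parameter named /myapp/DB_PORT exists in Parameter Store and the instance may call ssm:GetParameter (the parameter name is hypothetical):
# Fetch a (possibly encrypted) parameter and expose it as an environment variable
export DB_PORT=$(aws ssm get-parameter --name /myapp/DB_PORT --with-decryption \
  --query 'Parameter.Value' --output text)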
I used a combination of the following tools:
Install jq library (sudo apt-get install -y jq)
Install the EC2 Instance Metadata Query Tool
Here's the gist of the code below in case I update it in the future: https://gist.github.com/marcellodesales/a890b8ca240403187269
######
# Author: Marcello de Sales (marcello.desales@gmail.com)
# Description: Create Environment Variables in EC2 Hosts from EC2 Host Tags
#
### Requirements:
# * Install jq library (sudo apt-get install -y jq)
# * Install the EC2 Instance Metadata Query Tool (http://aws.amazon.com/code/1825)
#
### Installation:
# * Add the Policy EC2:DescribeTags to a User
# * aws configure
# * Source it in the ~/.profile of the user that has permissions
####
# Reboot and verify the result of $(env).

# Loads the tags of the current instance
getInstanceTags () {
    # http://aws.amazon.com/code/1825 EC2 Instance Metadata Query Tool
    INSTANCE_ID=$(./ec2-metadata | grep instance-id | awk '{print $2}')

    # Describe the tags of this instance
    aws ec2 describe-tags --region sa-east-1 --filters "Name=resource-id,Values=$INSTANCE_ID"
}

# Convert the tags to environment variables.
# Based on https://github.com/berpj/ec2-tags-env/pull/1
tags_to_env () {
    tags=$1

    for key in $(echo $tags | /usr/bin/jq -r ".[][].Key"); do
        value=$(echo $tags | /usr/bin/jq -r ".[][] | select(.Key==\"$key\") | .Value")
        key=$(echo $key | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
        echo "Exporting $key=$value"
        export $key="$value"
    done
}

# Execute the commands
instanceTags=$(getInstanceTags)
tags_to_env "$instanceTags"
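As the installation notes above suggest, one way to have the variables present in every login shell is to source the script (saved here under a hypothetical path) from that user's ~/.profile:
echo "source /usr/local/bin/ec2-tags-env.sh" >> ~/.profile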
If you are using Linux or macOS on your EC2 instance:
Go to your home directory and run:
vim .bash_profile
You can now see your .bash_profile file; press 'i' to insert a line, then add:
export DB_PORT="5432"
After adding this line you need to save the file, so press 'Esc', then ':' followed by 'w' to write the file without exiting.
To exit, press ':' again and type 'quit'. To check whether your environment variable is set, start a new login shell (or source ~/.bash_profile) and run the commands below:
python
>>> import os
>>> os.environ.get('DB_PORT')
'5432'
Following the instructions given by Guy, I wrote a small shell script. This script uses AWS CLI and jq. It lets you import your AWS instance and AMI tags as shell environment variables.
I hope it can help a few people.
https://github.com/12moons/ec2-tags-env

AWS CLI command completion with fish shell

Has anybody been able to set up auto-completion for the AWS CLI with the fish shell? The AWS documentation only offers a guide for bash, tcsh, and zsh.
Bash exports the variables COMP_LINE and COMP_POINT, which are used by the aws_completer script provided by Amazon. Is there any equivalent for fish? I'm new to the fish shell and I'm giving it a try.
Building upon David Roussel's answers I cooked up the following:
function __fish_complete_aws
    env COMP_LINE=(commandline -pc) aws_completer | tr -d ' '
end

complete -c aws -f -a "(__fish_complete_aws)"
Put this in a file $HOME/.config/fish/completions/aws.fish so fish can autoload it when necessary.
aws_completer appends a space after every option it prints, and that space gets escaped as \, so trimming it solves the trailing backslashes.
Now we can test the completion with the following:
> complete -C'aws co'
codebuild
codecommit
codepipeline
codestar
cognito-identity
cognito-idp
cognito-sync
comprehend
comprehendmedical
connect
configure
configservice
Using commandline's -c (cut at cursor) option helps if you move the cursor back, since it cuts the command line at the cursor so aws_completer can offer the right completions.
I also want to get this to work, and I've made some progress, but it's not perfect.
First I took some advice from here, which helps to show how to emulate the bash environment variables that aws_completer expects.
Putting it together I get this:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); /usr/local/bin/aws_completer; end)'
That mostly works, but I get spurious extra backslashes, so if I try to complete "aws ec2 describe-instances --" I get:
dave#retino ~> aws ec2 describe-instances --
--ca-bundle\ --color\ --filters\ --no-dry-run\ --output\ --region\
--cli-connect-timeout\ --debug\ --generate-cli-skeleton --no-paginate\ --page-size\ --starting-token\
--cli-input-json\ --dry-run\ --instance-ids\ --no-sign-request\ --profile\ --version\
--cli-read-timeout\ --endpoint-url\ --max-items\ --no-verify-ssl\ --query\
It looks to me like there is a trailing whitespace character, but I tried to remove it using sed:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') ; end)'
But this doesn't seem to help. It seems that fish expects a different output format than bash for its completer. And indeed the fish documentation for the complete builtin does say that it expects a space-separated list.
So I tried joining the lines with xargs:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') | xargs echo ; end)'
But this doesn't work either; I just get one completion.
This is annoying. I'm so close, but it doesn't work!
While this doesn't answer the question about fish directly, I intend to provide an answer that helps in the context of auto-completion and shells.
Amazon has launched a new CLI-based tool forked from the AWS CLI.
aws-shell is a command-line shell program that provides convenience
and productivity features to help both new and advanced users of the
AWS Command Line Interface. Key features include the following.
Fuzzy auto-completion
Commands (e.g. ec2, describe-instances, sms, create-queue)
Options (e.g. --instance-ids, --queue-url)
Resource identifiers (e.g. Amazon EC2 instance IDs, Amazon SQS queue URLs, Amazon SNS topic names)
Dynamic in-line documentation
Documentation for commands and options are displayed as you type
Execution of OS shell commands
Use common OS commands such as cat, ls, and cp and pipe inputs and outputs without leaving the shell
Export executed commands to a text editor
To find out more, check out the related blog post on the AWS Command Line Interface blog.
Add this line to your .config/fish/config.fish
complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
In case you want to make sure that aws-cli is installed:
test -x (which aws_completer); and complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
All credits belong to this issue thread and a comment by an awesome SO contributor, @scooter-dangle.
It's actually possible to map bash's completion to fish's.
See the npm completions.
However it's probably still better to write a real fish script (it's not hard!).
The command I use in my virtualenv/bin/activate is this:
complete -C aws_completer aws
Looks like aws-cli has fish support too. There is a script bundled alongside aws-cli that might be worth checking out: activate.fish. I found it in the same bin directory as the aws command.
For example:
ubuntu@ip-xxx-xx-x-xx:/data/src$ tail -n1 ~/venv/bin/activate
complete -C aws_completer aws
ubuntu@ip-xxx-xx-x-xx:/data/src$ source ~/venv/bin/activate
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3 <- hitting TAB here
cp ls mb mv presign rb rm sync website
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3