I'm deploying the front-end of my website to Amazon S3 via GitLab pipelines. My previous deployments worked successfully, but the most recent ones do not. Here's the error:
Completed 12.3 MiB/20.2 MiB (0 Bytes/s) with 1 file(s) remaining
upload failed: dist/vendor.bundle.js.map to s3://<my-s3-bucket-name>/vendor.bundle.js.map Unable to locate credentials
Under my secret variables I have defined four. They are S3 credential variables (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) for two different buckets. One pair is for the testing branch and the other is for the production branch.
Note - the production environment variables are protected and the other variables are not.
Here's the deploy script that I run:
#!/bin/bash
#upload files
aws s3 cp ./dist s3://my-url-$1 --recursive --acl public-read
So why am I getting this "Unable to locate credentials" error? Surely it should just pick up the (unprotected) environment variables automatically and use them for the deployment. Do I need to define the variables in the job and refer to them?
(I've encountered this issue many times - adding another answer for people who hit the same error for other reasons.)
A quick checklist.
Go to Settings -> CI/CD -> Variables and check:
Whether both the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY environment variables exist.
Whether both names are spelled correctly.
Whether they are marked as protected - protected variables are only exposed to pipelines running on protected branches (like master).
If the error still occurs:
Make sure the access keys still exist and are active on your AWS account.
Delete the current environment variables and replace them with newly generated access keys, and make sure AWS_SECRET_ACCESS_KEY doesn't contain any special characters (these can lead to strange errors).
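One extra debugging trick that has helped me: add a throwaway step to the job's script section that confirms whether the variables are present at all, without ever printing their values (a sketch - adapt the commands to your own job):
# print only whether each variable is set, never its value
test -n "$AWS_ACCESS_KEY_ID" && echo "AWS_ACCESS_KEY_ID is set" || echo "AWS_ACCESS_KEY_ID is MISSING"
test -n "$AWS_SECRET_ACCESS_KEY" && echo "AWS_SECRET_ACCESS_KEY is set" || echo "AWS_SECRET_ACCESS_KEY is MISSING"
# ask AWS which identity the credentials resolve to, if they are picked up at all
aws sts get-caller-identity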
The actual problem was a collision in the naming of the variables. For both branches the variables were called AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. However, simply renaming them wasn't enough, as the pipeline still didn't pick them up.
I printed the password to the logs to determine which value was being picked up by which branch, but found that neither was. The solution was to give each branch's credentials a unique name (e.g. PRODUCTION_ACCESS_KEY_ID and TESTING_ACCESS_KEY_ID) and to refer to them in the build script:
deploy_app_production:
  environment:
    name: production
    url: <url>
  before_script:
    - echo "Installing ruby & dpl"
    - apt-get update && apt-get install -y ruby-full
    - gem install dpl
  stage: deploy
  tags:
    - nv1
  script:
    - echo "Deploying to production"
    - sh deploy.sh production $PRODUCTION_ACCESS_KEY_ID $PRODUCTION_SECRET_ACCESS_KEY
  only:
    - master
And in deploy.sh I referred to the passed-in variables (though I did end up switching to dpl):
dpl --provider=s3 --access-key-id=$2 --secret-access-key=$3 --bucket=<my-bucket-name>-$1 --region=eu-west-1 --acl=public_read --local-dir=./dist --skip_cleanup=true
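For reference, the same argument-passing works without switching to dpl; a minimal sketch of deploy.sh that keeps the AWS CLI (the positional parameters match the job above):
#!/bin/bash
# $1 = environment suffix, $2 = access key id, $3 = secret access key
export AWS_ACCESS_KEY_ID="$2"
export AWS_SECRET_ACCESS_KEY="$3"
aws s3 cp ./dist "s3://my-url-$1" --recursive --acl public-read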
Have you tried running the Docker image you use in your GitLab pipelines script locally, in interactive mode?
That way, you could verify that the environment variables not being picked up are indeed the problem. (I.e. if you set the same environment variables locally, and it works, then yes, that's the problem.)
I suspect that the credentials may be picked up just fine, and that they simply don't have all the permissions required for the requested operation. I know the error message says otherwise, but S3 error messages tend to be quite misleading when it comes to permission problems.
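A rough sketch of that local check, assuming the job image has the AWS CLI installed (the image name and key values are placeholders for whatever your .gitlab-ci.yml uses):
docker run -it --rm \
  -e AWS_ACCESS_KEY_ID=<testing key id> \
  -e AWS_SECRET_ACCESS_KEY=<testing secret key> \
  -v "$(pwd)/dist:/dist" \
  <your-ci-image> /bin/sh
# then, inside the container, retry the same upload:
aws s3 cp /dist s3://<my-s3-bucket-name> --recursive --acl public-read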
For anyone still struggling after reading the other answers: make sure your branch is protected if you are trying to read protected variables. GitLab doesn't expose those variables to non-protected branches, which is why you might see the error.
https://docs.gitlab.com/ee/ci/variables/index.html#add-a-cicd-variable-to-a-project
To protect your branch, go to Settings -> Repository -> Protected Branches and mark the required branch as protected. (This navigation may change in future GitLab versions.)
Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the Secret between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list; a one-liner for this is sketched right after this list).
Once you have the list, convert it textually to a list in the format "secret_a" "secret_b" ...
The latest version of each secret is the one that gets copied, so it must not be in a "disabled" state, or the script won't be able to move it.
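If you'd rather not build that list by hand, something like this should print just the secret names (a sketch; --format is standard gcloud output formatting):
gcloud secrets list --project [SOURCE_PROJECT] --format="value(name)"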
Then you can run:
# point gcloud at the source project
gcloud config set project [SOURCE_PROJECT]
# list of secret names to migrate
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  SECRET_NAME="${i}_env_file"
  # read the latest version of the secret from the source project
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  # write it to a temporary file and create the secret in the target project from that file
  echo "$SECRET_VALUE" > secret_migrate
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then, one secret at a time, read the value, save it to a file, and upload it to the target project.
The file is rewritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used the version below, which uses the secret name as-is and also adds labels based on it:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  SECRET_NAME="${i}"
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  echo "$SECRET_VALUE" > secret_migrate
  # create the secret in the target project, with labels derived from its name
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
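A variant I'd also consider (a sketch, not part of the original script): pipe the value straight from one project to the other so the secret never touches disk; gcloud accepts --data-file=- to read the payload from stdin.
gcloud secrets versions access "latest" --secret="${SECRET_NAME}" --project [SOURCE_PROJECT] \
  | gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=-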
All "secrets" MUST be decrypted and compiled in order to be processed by a CPU as hardware decryption isn't practical for commercial use. Because of this getting your passwords/configuration (in PLAIN TEXT) is as simple as logging into one of your deployments that has the so called "secrets" (plain text secrets...) and typing 'env' a command used to list all environment variables on most Linux systems.
If your secret is a text file just use the program 'cat' to read the file. I haven't found a way to read these tools from GCP directly because "security" is paramount.
GCP has methods of exec'ing into a running container but you could also look into kubectl commands for this too. I believe the "PLAIN TEXT" secrets are encrypted on googles servers then decrypted when they're put into your cluser/pod.
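For example, with kubectl (the pod name and mount path are placeholders):
# list the environment variables visible to the running container
kubectl exec -it my-pod -- env
# read a secret that is mounted into the container as a file
kubectl exec -it my-pod -- cat /path/to/mounted/secret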
I have a CloudFormation template with all the resources and details for the project.
I have cfn-lint set up locally and it runs perfectly fine. However, when I push code changes, the build fails at the deployment stage because cfn-nag reports some simple issues that could be fixed.
I'm using a Windows machine and I need a way to run cfn-nag locally, so that I can check this just like cfn-lint and fix issues locally instead of waiting 40 minutes for the build to reach the deployment stage.
I referred to several posts online and found the two below helpful:
https://stelligent.com/2018/03/23/validating-aws-cloudformation-templates-with-cfn_nag-and-mu/
https://github.com/stelligent/cfn_nag
What is the difference between cfn-nag and cfn-lint, and why doesn't lint fail on what cfn-nag is complaining about?
The links above have some instructions involving Ruby and Brew, but I'm using Node.js and felt lost. Please help.
CFN-Nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure.
Ex:
IAM rules that are too permissive (wildcards),
Security group rules that are too permissive (wildcards),
Access logs that aren’t enabled,
Encryption that isn’t enabled,
CFN-Lint scans the AWS CloudFormation template by processing a collection of rules, where every rule handles a specific check or validation of the template. It validates against the AWS CloudFormation resource specification.
This collection of rules can be extended with custom rules using the --append-rules argument.
Ex: whitespace, alignment (YAML), type checks, valid values for resource properties, and other best practices.
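To make that concrete for local use: cfn-lint is a Python tool, so it installs and runs directly on Windows, while cfn-nag is a Ruby gem, which is why the Docker route below is the easier option if you don't want a Ruby toolchain (template.yaml is a placeholder for your own file):
pip install cfn-lint
cfn-lint template.yaml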
Those two links you provided have all the information needed, just not presented directly for a Node.js developer on a Windows machine.
Step 1: Pull the Docker image stelligent/cfn-nag.
Step 2: Add the script for cfn-nag to your package.json.
Ex:
"scripts" : {
"cfn:nag": "cfn-nag"
}
If you're using docker-compose.yml, add the cfn-nag image details to it like below:
cfn-nag:
  image: "stelligent/cfn-nag"
  volumes:
    - ./path_of_cfn_file_to_copy:/path_to_copy_to
  command: ${COMMAND:-/path_to_copy_to/cfn_file}
Then just set the script in package.json to run via docker-compose:
"cfn:nag": "docker-compose run --rm cfn-nag"
Currently our singleton application, consisting of 5 containers, goes through an AWS pipeline into CodeBuild and then CodeDeploy into ECS services. During CodeBuild, based on an ENV variable set in CodeBuild ($STAGE, which can be dev, prod or staging), it loads a specific config file which contains all the ENV variables each container needs. See below:
build:
  commands:
    # Get commit id
    - "echo STAGE $STAGE"
    - "export STAGE=$STAGE"
    # Assigning AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needs to be done in two steps, otherwise it ends up in "Partial credentials found in env" error
    - "export ANSIBLE_VARS=\"\
      USE_EXISTING_VPC=true \
      DISABLE_BASIC_AUTH=true\""
    - "export DOCKER_ARGS=\"-e COMMIT_ID=$GIT_COMMIT -e APP_ENV=$STAGE
Problem 1: these config files live in the repo and anybody can modify them. So there are lots of human errors, like the production redirect URL pointing to the wrong place, or a new ENV variable not being set.
So I want to move away from loading different config files and let AWS handle the ENV variables instead - something like loading them from Parameter Store during CodeBuild. Is this the correct way?
Problem 2: there are lots of ENV variables. Is the only option to list them one by one in the CloudFormation template? Is there a better way to load all of the ENV variables into DOCKER_ARGS in the build command above?
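For what it's worth, a hedged sketch of the Parameter Store direction for Problem 1: fetch the value inside the build phase instead of committing it to the repo (the parameter path /myapp/$STAGE/DB_HOST is purely illustrative):
export DB_HOST=$(aws ssm get-parameter --name "/myapp/$STAGE/DB_HOST" --with-decryption --query Parameter.Value --output text)
CodeBuild also supports an env/parameter-store mapping in buildspec.yml that exposes parameters as environment variables without any shell commands.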
We have some docker images we build with sbt-native-packager that need to interact with AWS services. When running them outside of AWS, we need to explicitly provide credentials.
I know we can explicitly pass environment variables containing the AWS credentials. Doing this complicates keeping our credentials secret. One option is to provide them via the command line, typically storing them into our shell history (yes I know this can be avoided by adding a space to the start of the command, but that is easy to forget) and putting them at higher risk of accidental copy/paste sharing. Alternatively, we can provide them via an env-file. But this exposes us to possibly checking them into version control or pushing them to another server unintentionally.
We've found that the ideal practice is to mount our local ~/.aws/ directory into the running user's home directory for the docker container. However, our attempts at getting this to work with the sbt-native-packager images have been unsuccessful.
One unique detail of the sbt-native-packager images (compared to our others) is that they are built using Docker's ENTRYPOINT instead of CMD to start the application. I don't know if this has any bearing on the problem.
So the question: Is it possible to provide AWS credentials to a docker container created by sbt-native-packager by mounting the AWS credentials folder via command line parameters at startup?
The problem I was running into was related to permissions. The .aws files have very restricted access on my machine, and the default user within the sbt-native-packager image is daemon. This user does not have access to read my files when mounted into the container.
I am able to obtain the behavior I desire by adding the following flags to my docker run command: -v ~/.aws/:/root/.aws/ --user=root
I was able to discover this by using the --entrypoint=ash flag when running the container, to look at the HOME environment variable (the location where the /.aws/ folder should be mounted) and to attempt to cat the contents of the mounted folder.
Now I just need to understand what security vulnerabilities I'm opening myself up to by running docker containers in this way.
I'm not entirely sure why mounting ~/.aws would be a problem - typically it could be related to read permissions on that directory and the different UID between the host system and the container.
That said, I can suggest a couple of workarounds:
Use an environment variable file instead of explicitly specifying the variables on the command line. With docker run, you can do this by specifying --env-file. To me this sounds like the simplest approach.
Mount a different credentials file and provide the AWS_CONFIG_FILE environment variable to specify its location.
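A sketch of that second workaround, mounting the credentials at a custom path inside the container and pointing the SDK at them (my-app:latest stands in for whatever image sbt-native-packager produced, and AWS_SHARED_CREDENTIALS_FILE is the standard companion variable for the credentials file; note the file-permission caveat from the accepted answer still applies unless the mounted files are readable by the container user):
docker run --rm \
  -v ~/.aws/:/aws-creds/:ro \
  -e AWS_CONFIG_FILE=/aws-creds/config \
  -e AWS_SHARED_CREDENTIALS_FILE=/aws-creds/credentials \
  my-app:latest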
I'm looking to automatically deploy my app once we release a new version. We use CircleCI, so firing these commands shouldn't be a big deal.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u myuser -p seret
cf push myapp
However, I don't want to expose my personal credentials (Passport account) in our git repository. Is it possible to generate an API key for that purpose?
How do you handle this? I might also need to SSH into the instance to run some migration scripts after the deployment; the same applies there.
Currently Swisscom's Application Cloud does not offer technical accounts, but you can easily create an additional account. Then add it to your org/space as a developer and it should be able to fulfill your needs.
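For example, once the additional account exists, granting it access looks like this (the org, space and e-mail are placeholders):
cf set-space-role ci-user@example.com MY-ORG MY-SPACE SpaceDeveloper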
CircleCI documentation has a section about handling secrets: Using CircleCI Environment Variables
Setting environment variables for all commands without adding them to git
Occasionally, you'll need to add an API key or some other secret as an environment variable. You might not want to add the value to your git history. Instead, you can add environment variables using the Project settings > Environment Variables page of your project.
This documentation describes how to store encrypted stuff within your VCS.
If you prefer to keep your sensitive environment variables checked into git, but encrypted, you can follow the process outlined at circleci/encrypted-files.
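Putting it together, the CircleCI deployment step would then reference those project-level variables instead of literal credentials (CF_USER and CF_PASSWORD are names you would choose yourself when adding the variables under Project settings > Environment Variables):
cf login -a https://api.lyra-836.appcloud.swisscom.com -u "$CF_USER" -p "$CF_PASSWORD"
cf push myapp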