Running CDK bootstrap against LocalStack fails with credentials error

I'm running LocalStack in Docker and trying to deploy CDK resources into it.
LocalStack seems to run OK:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cbda0d0c6c5 localstack/localstack:latest "docker-entrypoint.sh" 2 days ago Up 2 days 127.0.0.1:53->53/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:4510-4530->4510-4530/tcp, 127.0.0.1:4566->4566/tcp, 127.0.0.1:4571->4571/tcp, 127.0.0.1:53->53/udp, 5678/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp localstack_main
I can successfully deploy resources to it using awslocal:
infra> awslocal s3api create-bucket --bucket my-bucket
Location: /my-bucket
infra> awslocal s3api list-buckets
Buckets:
- CreationDate: '2021-11-15T10:28:03+00:00'
Name: my-bucket
Owner:
DisplayName: webfile
ID: bcaf1ffd86f41161ca5fb16fd081034f
Credentials are stored in a named profile:
infra> echo $AWS_PROFILE
LS
infra> cat ~/.aws/config
[default]
region=eu-west-2
output=yaml
[profile LS]
region=eu-west-2
output=yaml
infra> cat ~/.aws/credentials
[default]
aws_access_key_id=test
aws_secret_access_key=test
[LS]
aws_access_key_id=test
aws_secret_access_key=test
However, the problem I'm facing is when I try to introduce CDK to this. My stack is not using an environment; I want to keep it environment-agnostic.
const app = new cdk.App();
new InfrastructureStack(app, 'my-stack', {});
When I run cdklocal bootstrap or cdklocal bootstrap --profile LS, it returns the following error:
Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
From the docs I am expecting an environment-agnostic stack to deploy the bootstrap resources into the default account and region.
I've also tried explicitly using the account 000000000000, as I've seen some people do, with cdklocal bootstrap --profile LS aws://000000000000/eu-west-2, which results in this different error:
⏳ Bootstrapping environment aws://000000000000/eu-west-2...
❌ Environment aws://000000000000/eu-west-2 failed bootstrapping: Error: Need to perform AWS calls for account 000000000000, but no credentials have been configured
at SdkProvider.forEnvironment (/Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/lib/api/aws-auth/sdk-provider.ts:149:46)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Function.lookup (/Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/lib/api/bootstrap/deploy-bootstrap.ts:30:17)
at Bootstrapper.legacyBootstrap (/Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/lib/api/bootstrap/bootstrap-environment.ts:60:21)
at /Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:463:24
at async Promise.all (index 0)
at CdkToolkit.bootstrap (/Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:460:5)
at initCommandLine (/Users/willem/.nvm/versions/node/v14.16.1/lib/node_modules/aws-cdk/bin/cdk.ts:267:9)
Need to perform AWS calls for account 000000000000, but no credentials have been configured
EDIT: Worth noting that the same issues occur if I bypass bootstrapping altogether and just run cdklocal deploy or cdklocal deploy --profile LS. I've also specified the environment in the CDK source code like this:
const {
  CDK_DEFAULT_ACCOUNT = '000000000000',
  CDK_DEFAULT_REGION = 'eu-west-2',
} = process.env;
new InfrastructureStack(app, 'my-stack', {
  env: { account: CDK_DEFAULT_ACCOUNT, region: CDK_DEFAULT_REGION },
});
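For reference, this mirrors the pattern the CDK docs show for resolving the account and region from the default credential chain at synth time, minus my hardcoded fallbacks:

new InfrastructureStack(app, 'my-stack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});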
Context:
Mac OS Big Sur
ZSH 5.8
AWS version: aws-cli/2.2.40 Python/3.8.8 Darwin/20.6.0 exe/x86_64 prompt/off
CDK version 1.132.0

I've just spent a few hours debugging this same issue.
When I ran cdk bootstrap --profile XXX -v (the -v flag shows more log info), I saw an error where it was trying to get a default AWS account from a cache file located at .cdk/cache/account_partitions.json
This file had entries for the other profiles in the following format:
"AccessKey": {
  "accountId": "awsAccountNumber",
  "partition": "aws"
}
When I added the info for my profile there, the bootstrap action completed.
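For a LocalStack setup like the one above, I'd guess the entry would look something like this (an assumption on my part: the key is the profile's access key ID, here the dummy value test, mapped to LocalStack's default account 000000000000):

"test": {
  "accountId": "000000000000",
  "partition": "aws"
}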
I haven't figured out when and how this cache file is updated, but at least it resolved the first problem.
I know this is an old post, but it might help someone else...

Related

UnrecognizedClientException when running `aws ecr get-login-password --region eu-west-3` from gitlab CI

I'm trying to run the following command from GitLab CI:
$ aws ecr get-login-password --region eu-west-3
Here's what the job in the .gitlab-ci.yml looks like:
publish-job:
  stage: publish
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  script:
    - aws configure set aws_access_key_id MY_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key MY_SECRET_ACCESS_KEY
    - aws configure set default.region eu-west-3
    - aws ecr get-login-password --region eu-west-3
And at the last step I get the following error:
$ aws ecr get-login-password --region eu-west-3
An error occurred (UnrecognizedClientException) when calling the GetAuthorizationToken operation: The security token included in the request is invalid.
I know there's a similar question on Stack Overflow, but I think it's not the same problem. In that question the issue has to do with permissions. In my case I'm pretty sure it isn't, for two reasons:
I gave the user associated with the access key AdministratorAccess (temporarily, in order to rule out the possibility that I'm dealing with a permissions issue)
I performed the exact same steps (by copying and pasting) with docker and it works, so it's not the credentials. Here's the Dockerfile:
FROM amazon/aws-cli:latest
RUN aws configure set aws_access_key_id THE_SAME_ACCESS_KEY_ID
RUN aws configure set aws_secret_access_key THE_SAME_SECRET_ACCESS_KEY
RUN aws configure set default.region eu-west-3
RUN aws ecr get-login-password --region eu-west-3
Then I ran $ docker build --progress=plain . and the last step returned a hash
Any idea why those steps give inconsistent results? And how to fix the CI?
I had declared an AWS_DEFAULT_REGION environment variable that was preventing the CLI from executing the command (even though I had hardcoded the credentials at this stage). When I removed the environment variable, everything started working properly.
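If the variable is defined at the project or group level and can't simply be deleted, one sketch (assuming the variable name above) is to unset it inside the job's script before calling the CLI:

publish-job:
  stage: publish
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  script:
    - unset AWS_DEFAULT_REGION
    - aws configure set aws_access_key_id MY_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key MY_SECRET_ACCESS_KEY
    - aws ecr get-login-password --region eu-west-3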

AWS CDK deploy from circleCi fails with credential error but other aws services do not

I am running a cdk deploy build on CircleCI, and when the CDK deploy step comes it gives me "Need to perform AWS calls for account ************, but no credentials have been configured".
For troubleshooting I tried other commands as well, like
aws s3 ls
aws cloudformation list-stacks
The above commands were working fine, and I was also able to create a CloudFormation stack with the same config, but not able to run cdk deploy. The access key and secret I am using have Admin access.
Set the creds with a profile name using the aws-cli orb in CircleCI, and
try using the below command to deploy with CDK:
cdk deploy --all --profile cdkprofile
For reference, in CircleCI
orbs:
  aws-cli: circleci/aws-cli@2.0.3
commands:
  env-setup:
    description: AWS Env Setup
    steps:
      - aws-cli/setup:
          profile-name: cdkprofile
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
And the assumption is that AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set as CircleCI env variables
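A job wiring this together might look like the following sketch (the job name, the use of the orb's default executor, and the install step are my assumptions, not from the original answer):

jobs:
  deploy:
    executor: aws-cli/default
    steps:
      - checkout
      - env-setup
      - run: npm install -g aws-cdk   # assumes Node is available in the executor image
      - run: cdk deploy --all --profile cdkprofile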
As a starting note: the best way to troubleshoot is with cdk [command] --verbose (see the CLI ref)
CDK has an internal mechanism for finding credentials and does not go through the AWS CLI (the AWS CLI is not a requirement for CDK to run)
In a similar situation with a CI tool, the issue was simply that the ~/.aws/credentials file did not exist (the AWS CLI does not need it, but in this situation CDK required it)
Credit to this issue reporting: https://github.com/aws/aws-cdk/issues/6947#issue-586402006
Solution tested for the above:
For an EC2 instance running the CI tool, with an EC2 IAM role
Where ~/.aws/config exists and defines profile(s) with:
credential_source = Ec2InstanceMetadata
role_arn = arn:aws:iam:::role/role-to-assume-in-acctId
Create an empty ~/.aws/credentials file (see the sketch below)
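A minimal sketch of that layout on the instance (the profile name is a placeholder matching the error below; the role ARN is from the lines above):

mkdir -p ~/.aws
touch ~/.aws/credentials    # empty file; its mere presence is what CDK needed
cat ~/.aws/config
[profile myprofile]
credential_source = Ec2InstanceMetadata
role_arn = arn:aws:iam:::role/role-to-assume-in-acctId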
Example error for the problem solved above (from verbose output)
Resolving default credentials
Notices refreshed
Unable to determine the default AWS account: ProcessCredentialsProviderFailure: Profile myprofile did not include credential process
Other causes found in other issues/comments could relate to:
Duplicate profiles
Having credential_process in the profile, set to empty
Needing --profile parameter to be added

How do you specify AWS credentials when running AWS CLI from a Dockerfile in an AWS SAM pipeline?

I have an app using:
SAM
AWS S3
AWS Lambda based on Docker
AWS SAM pipeline
GitHub Actions
In the Dockerfile I have:
RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
Resulting in the error message:
Step 6/8 : RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
---> Running in 786873b916db
fatal error: Unable to locate credentials
Error: InferenceFunction failed to build: The command '/bin/sh -c aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz' returned a non-zero code: 1
I need to find a way to store the credentials in a secure manner. Is it possible with GitHub secrets or something?
Thanks
My solution may be a bit longer, but I feel it solves your problem, and:
It does not expose any secrets
It does not require any manual work
It is easy to change your AWS keys later if required.
Steps:
You can add the environment variables in GitHub Actions (since you already mentioned GitHub Actions) as secrets.
In your GitHub CI/CD flow, when you build the Dockerfile, you can create an AWS credentials file.
- name: Configure AWS credentials
  env:
    ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
    SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: |
    echo "
    [default]
    aws_access_key_id = $ACCESS_KEY
    aws_secret_access_key = $SECRET_ACCESS_KEY
    " > credentials
In your Dockerfile, you can add instructions to COPY this credentials file and move it into place:
COPY credentials credentials
RUN mkdir -p ~/.aws
RUN mv credentials ~/.aws/credentials
Changing your credentials then requires just changing your GitHub Actions secrets.
Docker by default does not have access to the .aws folder on the host machine. You could either pass the AWS credentials as environment variables to the Docker image:
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=...
Keep in mind, hardcoding AWS credentials in a Dockerfile is bad practice. To avoid it, you can pass the environment variables at runtime with docker run -e MYVAR1 or docker run --env MYVAR2=foo arguments. Another solution would be to use an .env file for the environment variables.
A more involved solution would be to map a volume for the ~/.aws folder from the host machine into the container.
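For example, a sketch of the volume approach (read-only mount; the image name is a placeholder):

docker run --rm -v ~/.aws:/root/.aws:ro my-image aws s3 ls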

`aws: error: argument --region: expected one argument` when running Kubernetes on AWS

I'm following this guide to set up Kubernetes on an Ubuntu 14.04 image on AWS.
sudo apt-get update
sudo apt-get install curl
sudo apt-get install awscli
aws configure # enter credentials, etc.
# fix `locale` errors
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export KUBE_AWS_ZONE=us-east-1b
export NUM_NODES=2
export MASTER_SIZE=t2.micro
export NODE_SIZE=t2.micro
export AWS_S3_BUCKET=my.s3.bucket.kube
export AWS_S3_REGION=us-east-1b
export INSTANCE_PREFIX=k8s
export KUBERNETES_PROVIDER=aws
curl -sS https://get.k8s.io | bash
This fails, however...
ubuntu@ip-172-31-24-216:~$ curl -sS https://get.k8s.io | bash
Downloading kubernetes release v1.2.4 to /home/ubuntu/kubernetes.tar.gz
--2016-05-21 17:01:20-- https://storage.googleapis.com/kubernetes-release/release/v1.2.4/kubernetes.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.29.128, 2607:f8b0:400d:c03::80
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.29.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 496696744 (474M) [application/x-tar]
Saving to: ‘kubernetes.tar.gz’
100%[======================================>] 496,696,744 57.4MB/s in 8.2s
2016-05-21 17:01:29 (58.1 MB/s) - ‘kubernetes.tar.gz’ saved [496696744/496696744]
Unpacking kubernetes release v1.2.4
Creating a kubernetes on aws...
... Starting cluster in us-east-1b using provider aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: jessie
Uploading to Amazon S3
+++ Staging server tars to S3 Storage: my.s3.bucket.kube/devel
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument --region: expected one argument
I tried editing cluster/aws/util.sh to print out s3_bucket_location (following advice from this question), and I get an empty string. I'm guessing that's why it fails?
The docs say an empty string for US East is normal, but I tried changing the region (with everything else remaining the same) and I still got an empty string.
The s3 bucket does get created.
Any help would be appreciated.
Looks to me like you are getting region and zone confused.
Use the ec2-describe-regions command as follows to describe your regions.
PROMPT> ec2-describe-regions
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
..
Use the ec2-describe-availability-zones command as follows to describe your Availability Zones within the us-east-1 region.
PROMPT> ec2-describe-availability-zones --region us-east-1
AVAILABILITYZONE us-east-1a available us-east-1
AVAILABILITYZONE us-east-1b available us-east-1
AVAILABILITYZONE us-east-1c available us-east-1
AVAILABILITYZONE us-east-1d available us-east-1
Be sure to use a region (not an Availability Zone) in export AWS_S3_REGION=, as shown below.
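Applied to the exports in the question, that would be (us-east-1b is an Availability Zone; us-east-1 is the region):

export KUBE_AWS_ZONE=us-east-1b
export AWS_S3_REGION=us-east-1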

AWS CodeDeploy - Error deploying - ApplicationDoesNotExistException

I want to deploy a project in AWS using:
$ aws --region eu-central-1 deploy push --application-name DemoApp --s3-location s3://paquirrin-codedeploy/Project1.zip --ignore-hidden-files --source .
But I got this error:
A client error (ApplicationDoesNotExistException) occurred when calling the RegisterApplicationRevision operation: Applications not found for 289558260222
but the application exists:
$ aws deploy list-applications
{
"applications": [
"DemoApp"
]
}
and CodeDeploy agent is running
[root@ip-171-33-54-212 ~]# /etc/init.d/codedeploy-agent status
The AWS CodeDeploy agent is running as PID 2649
but I haven't found the folder deployment-root inside /opt/codedeploy-agent!
You are deploying to region eu-central-1, but you may not be listing the applications in eu-central-1 when using the following command:
aws deploy list-applications
Instead, use the following command to ensure that the application exists:
aws deploy list-applications --region eu-central-1
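Alternatively, one way to avoid repeating the flag is to set a default region for your profile first:

aws configure set region eu-central-1
aws deploy list-applications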