How to use ECSCredentials with AWS SDK? - amazon-web-services

I have a React application that runs in an ECS container. I am trying to provide AWS credentials via the ECS container so that I can make calls to other AWS services from within the application.
This is what I have currently tried:
const chain = new AWS.CredentialProviderChain();
chain.providers.push(new AWS.ECSCredentials());

const secretsmanager = new AWS.SecretsManager({
  region: 'us-east-2',
  credentialProvider: chain,
});
However, when I run this code I get the following error:
ECSCredentials is not a constructor
I have also tried not including the ECSCredentials directly, but then I get an error stating that there are missing credentials in the config.
Does anyone know what I am doing wrong and why it is saying that this is not a constructor? I have followed the SDK documentation as best as I can tell.
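For reference, this is the kind of setup I understand should work, based on my reading of the docs. It is only a sketch, assuming the AWS SDK for JavaScript v2 and the full Node.js build of aws-sdk (not a browser bundle):

const AWS = require('aws-sdk');

// No explicit credentials: in the Node.js build, the default provider chain
// should fall through to the ECS container credentials exposed via
// AWS_CONTAINER_CREDENTIALS_RELATIVE_URI when a task role is attached.
const secretsmanager = new AWS.SecretsManager({ region: 'us-east-2' });

// Sanity check that some credentials were actually resolved.
AWS.config.getCredentials((err) => {
  if (err) console.error('Missing credentials in config:', err.message);
  else console.log('Credentials resolved via', AWS.config.credentials.constructor.name);
});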

Related

Fargate tasks not calling aws services without aws configure

I am unable to call AWS services from Fargate tasks - Secrets Manager and SNS.
I want these services to be invoked from inside the Docker image, which is hosted on ECR. When I run the pipeline, everything loads and runs correctly except when the script inside the Docker container is invoked; it throws an error. The script makes a call to either Secrets Manager or SNS. The error thrown is:
Unable to locate credentials. You can configure credentials by running "aws configure".
If I do aws configure then the error goes away and everything works smoothly. But I do not want to store the AWS credentials anywhere.
When I open task definitions I can see two roles - pipeline-task and ecsTaskExecutionRole
Although I have given full administrator rights to both of these roles, the pipeline still throws an error. Is there any place I am missing where I can assign roles/policies, etc.? I want to completely avoid using aws configure.
If the script with the issue is not the PID 1 process (the one used to start and stop the container), it will not automatically read the task role (pipeline-task-role). From your description, this sounds like the case.
Add this to your Dockerfile:
RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profile
The AWS SDK from the script should know where to pick up the credentials from after that.
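As a quick way to confirm this from the script's side, you can check whether the variable is visible to the process at all. This is just a diagnostic sketch, assuming Node.js happens to be available in the container; the same check can be done in whatever language the script uses:

// If this prints '(not set)', the script's process was not started with the
// ECS credentials URI in its environment, which matches the non-PID-1 case above.
console.log(process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI || '(not set)');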
I don't know if my problem was the same as yours, but I also experienced this kind of problem: I had set the task role, yet the container didn't get the right permissions. After spending a few days on it, I discovered that if you set any of the AWS environment variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
or AWS_DEFAULT_REGION
in the task definition, the credential provider chain will stop at the static-credentials step. The SDK your script is using will then look for the other credentials in the ~/.aws/credentials file, and since it can't find them there, it throws Unable to locate credentials.
If you want to know more about the credential provider chain, you can read about it at https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html
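As a rough way to see this from inside the container, here is a diagnostic sketch, assuming Node.js and the AWS SDK for JavaScript v2 happen to be available (the same chain behaviour applies to the other SDKs). It prints any statically set AWS variables and which provider the chain ends up resolving:

const AWS = require('aws-sdk');

// Any of these being set in the task definition can make the chain stop
// before it ever reaches the ECS task-role credentials.
['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_DEFAULT_REGION'].forEach((name) => {
  if (process.env[name]) console.log(`${name} is set in the environment`);
});

new AWS.CredentialProviderChain().resolve((err, creds) => {
  if (err) console.error('Unable to locate credentials:', err.message);
  else console.log('Chain resolved credentials via', creds.constructor.name);
});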

Terraform deployment of Docker Containers to aws ecr

I am having issues deploying my Docker images to AWS ECR as part of a Terraform deployment, and I am trying to think through the best long-term strategy.
At the moment I have a Terraform remote backend in S3 and DynamoDB on, let's call it, my master account. I then have dev/test etc. environments in separate accounts. The Terraform deployment is currently run from my local machine (a Mac) and uses the AWS 'master' account and its credentials, which in turn assumes a role in the target deployment account to create the resources, as per:
provider "aws" { // tell terraform which SDK it needs to load
alias = "target"
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.deployment_account}:role/${var.provider_env_deployment_role_name}"
}
}
I am creating a number of ECS services with Fargate deployments. The container images are built in separate repos by GitHub Actions and saved as GitHub Packages. These packages (name and version) are deployed after the creation of the ECR and the service (maybe that's not ideal, thinking about it), and this is where the problems arise.
The process is to pull the image from GitHub Packages, retag it and upload it to the ECR using multiple executions of a null_resource local-exec. This works fine standalone but has problems as part of the Terraform process. I think the reason is that the other resources use the above provider to get permissions, but since null_resource does not accept a provider, it cannot get permissions this way. So I have been passing the AWS credential values into the shell. I'm not convinced this is really secure, but that's currently moot as it isn't working either. I get this error:
Error saving credentials: error storing credentials - err: exit status 1, out: `error storing credentials - err: exit status 1, out: `The specified item already exists in the keychain.``
Part of me thinks this is the wrong approach, and that as I migrate to deploying via a GitHub Action I can separate the infrastructure deployment (via Terraform) from what is really the application deployment, and just use GitHub secrets to set the credential values and then run the script.
Alternatively, maybe the keychain thing just goes away and my process will work fine? Secure ??
That's fine for this scenario but it isn't really a generic approach for all my use cases.
I am shortly going to start deploying multiple AWS Lambda functions with Docker containers. I haven't done it before, but it looks like the process is going to be: create the ECR, deploy the container, deploy the Lambda function. This really implies that the container deployment should be integral to the Terraform deployment, which loops back to my issue with the local-exec.
I found Actions to deploy to ECR, which would imply splitting the deployments into multiple files, but that seems inelegant and potentially brittle.
Maybe there is a simple solution, but given where I am trying to go with this, what is my best approach?
I know this isn't a complete answer, but you should be pulling your AWS creds from environment variables. I don't really understand whether you need credentials for different accounts, but if you do, swap them during the course of your action. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html. Terraform should pick these up and automatically use them for AWS access.
Instead of those hard-coded access key/secret access keys, I'd suggest making use of GitHub/AWS's ability to assume a role through temporary credentials with OIDC: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
You'd likely only define one initial role that you'd authenticate into, and from there assume into the other accounts you're deploying into.
These assume-role credentials are only good for an hour and do not have the operational overhead of having to be rotated.
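To make the idea concrete, here is a rough illustration of assuming a role in another account with short-lived credentials, written with the AWS SDK for JavaScript v2 purely as a sketch (the role ARN and session name are hypothetical; in a GitHub Actions workflow the aws-actions/configure-aws-credentials action would normally do this step for you):

const AWS = require('aws-sdk');

const sts = new AWS.STS();
sts.assumeRole(
  {
    RoleArn: 'arn:aws:iam::123456789012:role/deploy-role', // hypothetical target-account role
    RoleSessionName: 'deploy-session',
    DurationSeconds: 3600, // credentials expire after an hour
  },
  (err, data) => {
    if (err) throw err;
    // Temporary credentials for the target account; nothing long-lived to rotate.
    AWS.config.credentials = new AWS.Credentials(
      data.Credentials.AccessKeyId,
      data.Credentials.SecretAccessKey,
      data.Credentials.SessionToken
    );
    console.log('Assumed role; credentials expire at', data.Credentials.Expiration);
  }
);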
As suggested by Kevin Buchs' answer...
My primary issue was related to deploying from a Mac and the use of the keychain. As this was not on the critical path, I went around it and set up a GitHub Action.
The Action loaded environment variables from GitHub secrets for my 'master' AWS account credentials.
AWS_ACCESS_KEY_ID: ${{ secrets.NK_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.NK_AWS_SECRET_ACCESS_KEY }}
I also loaded the target account's credentials into environment variables in the same way, but with the prefix TF_VAR_.
TF_VAR_DEVELOP_AWS_ACCESS_KEY_ID: ${{ secrets.DEVELOP_AWS_ACCESS_KEY_ID }}
TF_VAR_DEVELOP_AWS_SECRET_ACCESS_KEY: ${{ secrets.DEVELOP_AWS_SECRET_ACCESS_KEY }}
I then declared Terraform variables, which are automatically populated from the environment variables.
variable "DEVELOP_AWS_ACCESS_KEY_ID" {
description = "access key for the dev account"
type = string
}
variable "DEVELOP_AWS_SECRET_ACCESS_KEY" {
description = "secret access key for the dev account"
type = string
}
And then I run a shell script with a local-exec provisioner:
resource "null_resource" "image-upload-to-importcsv-ecr" {
provisioner "local-exec" {
command = "./ecr-push.sh ${var.DEVELOP_AWS_ACCESS_KEY_ID} ${var.DEVELOP_AWS_SECRET_ACCESS_KEY} "
}
}
Within the script I can then use these arguments to set the credentials, e.g.:
AWS_ACCESS=$1
AWS_SECRET=$2
.....
export AWS_ACCESS_KEY_ID=${AWS_ACCESS}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET}
and the script now has credentials to do whatever.
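For example, any AWS call made from the script will now authenticate with those exported variables and nothing else. A hypothetical Node.js step, assuming aws-sdk v2 is installed and us-east-1 is the target region:

const AWS = require('aws-sdk');

// Picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
const ecr = new AWS.ECR({ region: 'us-east-1' });

ecr.getAuthorizationToken({}, (err, data) => {
  if (err) throw err;
  console.log('ECR auth token expires at', data.authorizationData[0].expiresAt);
});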

Serverless Error: The security token included in the request is invalid

When I type serverless deploy, this error appears:
ServerlessError: The security token included in the request is invalid.
I had to specify sls deploy --aws-profile in my serverless deploy commands like this:
sls deploy --aws-profile common
Can you provide more information?
Make sure that you've got the correct credentials in ~/.aws/config and ~/.aws/credentials. You can set these up by running aws configure. More info here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration
Also make sure that the IAM user in question has an attached security policy that allows access to everything you need, such as CloudFormation.
Create a new user in AWS (don't use the root key).
In the SSH keys for AWS CodeCommit, generate a new Access Key.
Copy the values and run this:
serverless config credentials --overwrite --provider aws --key bar --secret foo
sls deploy
In my case, the localstack entry was missing from the serverless file.
I had everything that should be inside it, but it was all inside custom (instead of custom.localstack).
In my case, I added region to the provider. I suppose it's not read from the credentials file.
provider:
  name: aws
  runtime: nodejs12.x
  region: cn-northwest-1
In my case, multiple credentials are stored in the ~/.aws/credentials file.
And serverless is picking the default credentials.
So, I kept the new credentials under [default] and removed the previous credentials. And that worked for me.
To run the function on AWS you need to configure AWS with an access_key_id and secret_access_key, but you might get this error if you only want to run the function locally. In that case, use this command:
sls invoke local -f functionName
It will run the function locally, not on AWS.
If none of these answers work, it may be because you need to add a provider in your Serverless account and add your AWS keys.

Is it possible to provide my AWS credentials in the docker.withRegistry call in jenkins pipeline?

In my Jenkinsfile, I am trying to push the image that I have built using the docker plugin, as follows:
docker.withRegistry('https://<my-id>.dkr.ecr.us-east-1.amazonaws.com/', 'ecr:us-east-1:awscreds') {
  docker.image('image').push('latest')
}
The pipeline fails every time with the message ERROR: Could not find credentials matching ecr:us-east-1:awscreds but I do have my AWS key ID and secret key in my Jenkins credentials with the ID "awscreds".
What could be a potential fix for this?
Alternatively, can I provide my credentials directly instead of mentioning the credential ID in the call?
I had the same error message. Make sure the Amazon ECR plugin is installed and up to date, and that you restart Jenkins after the installation.

AWS credentials not working - ~/.aws/credentials

I'm having a problem with my AWS credentials. I used the credentials file that I created at ~/.aws/credentials, just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried moving my credentials file to the /var/www directory, even though it is not my web server directory. Nothing worked; I was still getting the same error.
As a second solution, I saw that we could call the CredentialProvider directly and indicate the path on the client.
https://forums.aws.amazon.com/thread.jspa?messageID=583216&#583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I also saw that we could use the default provider of the CredentialProvider instead of indicating a path.
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried and I keep getting the same error:
Cannot read credentials from /.aws/credentials
Just in case you need this information, I'm using aws/aws-sdk-php (3.2.5). The service I'm trying to use is the AWS Elastic Transcoder. My EC2 instance is an Ubuntu 14.04. It runs a Symfony application deployed using Capifony.
Before trying it on this production server, I tried it on a development server, where it works perfectly with only the ~/.aws/credentials file. This development server is exactly a copy of the production server. However, it doesn't use Capifony for the deployment; it is just a normal git clone of the project. And it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owners of the credentials file were the same on both servers, and they are. I tried a 777 to see if it would change something, but nothing.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services, and in fact you should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role later. You can't assign a role after the instance has been launched.
You can now add a role to an instance after it has been launched.
It is still considered a best practice to not deploy actual credentials to an EC2 instance.
If this can help someone, I managed to make my .ini file work by doing it this way:
$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region'      => 'eu-west-1',
    'version'     => '2012-09-25',
    'credentials' => $provider
));
The CredentialProvider is explained on this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server but can on the other.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
Two ways of solving this problem worked for me in Node.js.
The first gets my credentials from /home/{USER}/.aws/credentials using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({ profile: 'default' });
...
The hardcoded way
var lambda = new aws.Lambda({
  region: 'us-east-1',
  accessKeyId: '<KEY>',
  secretAccessKey: '<KEY>'
});