We are using DynamoDB Local for integration testing. It is launched inside a container, and from within that container we need to connect to DynamoDB Local. Here is how the DocumentClient is initialized:
const AWS = require('aws-sdk');

const doc = new AWS.DynamoDB.DocumentClient({
  region: 'localhost',
  endpoint: 'http://localhost:5000/'
});
However, when I try a batchWrite, like so: doc.batchWrite(buildSetData).promise(), the promise is never fulfilled. For those wondering, the batchWrite call is in JavaScript, and .promise() just returns a JS promise.
However, when I run my setup locally (outside of the Docker container), everything works perfectly.
TL;DR: Why can't I connect to DynamoDB Local inside my container?
The problem was due to the Docker environment not having credentials. I assumed that dynamodb-local would not need AWS credentials, but even though it is not making a connection to AWS, dynamodb-local still needs them (in fact, they can even be nonsensical credentials, as long as the keys are present).
TL;DR: If anyone else has this problem, just define the following keys in your Docker environment (a code sketch follows the list):
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
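If you would rather keep the workaround in code than in the Docker environment, the same effect can be had by passing placeholder credentials straight into the client config. This is only a minimal sketch under that assumption; the placeholder strings are arbitrary:
const AWS = require('aws-sdk');

// DynamoDB Local only checks that the credential keys are present, not that
// they are valid against AWS, so any placeholder strings will do.
const doc = new AWS.DynamoDB.DocumentClient({
  region: 'localhost',
  endpoint: 'http://localhost:5000/',
  accessKeyId: 'fake',
  secretAccessKey: 'fake'
});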
Related
I am running a MinIO Docker instance in my local environment for integration testing of my SAM (Lambda) application.
MinIO provides all the standard AWS S3 functionality locally, and it can be connected to locally as follows:
// Assumes the aws and session packages from aws-sdk-go are imported
// (github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/aws/session).
awsSession := session.Must(session.NewSession(&aws.Config{
    S3ForcePathStyle: aws.Bool(true),
    // TODO: In production this should be picked automatically as per the region
    Endpoint:   aws.String("host.docker.internal:9000"),
    DisableSSL: aws.Bool(true),
}))
I want to find out whether there is an environment variable that the SDK config can use to pick up the endpoint. In my case, I would then be able to set that environment variable to host.docker.internal:9000.
Using the aws-sdk for Node, I am initializing SQS:
sqs = new AWS.SQS({
region: 'us-east-1',
});
In the prod environment this works great because the role running this has the required permissions to connect with SQS. However, running locally is a problem because those prod permissions are not applied to my local dev environment.
I can fix this problem by adding them in
sqs = new AWS.SQS({
region: 'us-east-1',
accessKeyId: MY_AWS_ACCESS_KEY_ID,
secretAccessKey: MY_AWS_SECRET_ACCESS_KEY,
});
But for local development, I don't make any requests to AWS since I'm using a localstack queue.
How can I initialize AWS.SQS so that it continues to function in prod without specifying the AWS keys for local development?
The AWS SDK and CLI read credentials from multiple locations in a well-defined order.
For example, you can create a local credentials file, and the SQS client will automatically use credentials from it without any changes to your original production code.
% cat ~/.aws/credentials
[default]
aws_access_key_id=AAA
aws_secret_access_key=BBB
If you have multiple environments, you can specify them in this file by name. The SDK and CLI will typically read the $AWS_PROFILE environment variable and use the specified profile from your credentials file (or [default] if the environment variable is missing).
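In Node, for example, you can point the SDK at a named profile explicitly. A minimal sketch, assuming a profile named local exists in ~/.aws/credentials (the profile name is only an example):
const AWS = require('aws-sdk');

// Load the [local] profile from ~/.aws/credentials; in prod you would skip
// this and let the default chain (role, env vars, etc.) supply credentials.
const credentials = new AWS.SharedIniFileCredentials({ profile: 'local' });

const sqs = new AWS.SQS({
  region: 'us-east-1',
  credentials
});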
I am not using AWS resources when developing locally, so it should not be necessary to provide AWS key credentials at all.
If you want to work around this problem, you can just set bogus values.
const SQS = new AWS.SQS({
region: 'us-east-1',
accessKeyId: 'na',
secretAccessKey: 'na',
});
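If you want the production path left completely untouched, another option is to inject the bogus values (and, if you use one, the LocalStack endpoint) only when a local flag is set. This is just a sketch; the IS_LOCAL variable and the endpoint URL are assumptions you would adapt to your own setup:
const AWS = require('aws-sdk');

// IS_LOCAL is a hypothetical flag set only in local development. In prod
// nothing extra is passed, so the SDK falls back to the role's credentials.
const localOnly = process.env.IS_LOCAL
  ? { accessKeyId: 'na', secretAccessKey: 'na', endpoint: 'http://localhost:4566' }
  : {};

const sqs = new AWS.SQS({ region: 'us-east-1', ...localOnly });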
I am having issues deploying my Docker images to AWS ECR as part of a Terraform deployment, and I am trying to think through the best long-term strategy.
At the moment I have a Terraform remote backend in S3 and DynamoDB on, let's call it, my master account. I then have dev/test etc. environments in separate accounts. The Terraform deployment is currently run from my local machine (a Mac) and uses the AWS 'master' account and its credentials, which in turn assume a role in the target deployment account to create the resources, as per:
provider "aws" { // tell terraform which SDK it needs to load
alias = "target"
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.deployment_account}:role/${var.provider_env_deployment_role_name}"
}
}
I am creating a number of ECS services with Fargate deployments. The container images are built in separate repos by GitHub Actions and saved as GitHub Packages. These package names and versions are being deployed after the creation of the ECR repository and the service (maybe that's not ideal, thinking about it), and this is where the problems arise.
The process is to pull the image from GitHub Packages, retag it, and upload it to ECR using multiple executions of a null_resource local-exec. It works fine stand-alone but has problems as part of the Terraform run. I think the reason is that the other resources use the above provider to get permissions, but as null_resource does not accept a provider, it cannot get permissions this way. So I have been passing the AWS credential values into the shell. I'm not convinced this is really secure, but that's currently moot as it isn't working either. I get this error:
Error saving credentials: error storing credentials - err: exit status 1, out: `error storing credentials - err: exit status 1, out: `The specified item already exists in the keychain.``
Part of me thinks this is the wrong approach, and that as I migrate to deploying via a GitHub Action I can separate the infrastructure deployment via Terraform from what is really the application deployment, and just use GitHub secrets to set the credential values and then run the script.
Alternatively, maybe the keychain issue just goes away and my process will work fine? And securely?
That's fine for this scenario but it isn't really a generic approach for all my use cases.
I am shortly going to start deploying multiple AWS Lambda functions as Docker containers. I haven't done it before, but it looks like the process is going to be: create the ECR repository, deploy the container, deploy the Lambda function. This really implies that the container deployment should be integral to the Terraform deployment, which loops back to my issue with the local-exec.
I found Actions to deploy to ECR, which would imply splitting the deployments into multiple files, but that seems inelegant and potentially brittle.
Maybe there is a simple solution, but given where I am trying to go with this, what is my best approach?
I know this isn't a complete answer, but you should be pulling your AWS creds from environment variables. I don't really understand whether you need credentials for different accounts, but if you do, then swap them during the course of your action. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html. Terraform should pick these up and automatically use them for AWS access.
Instead of those hard-coded access key/secret access keys, I'd suggest making use of GitHub and AWS's ability to assume a role through temporary credentials with OIDC: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
You'd likely only define one initial role that you'd authenticate into and from there assume into the other accounts you're deploying into.
These assume-role credentials are only good for an hour and do not have the operational overhead of having to rotate them.
As suggested by Kevin Buchs' answer...
My primary issue was related to deploying from a Mac and the use of the keychain. As this was not on the critical path, I went around it and set up a GitHub Action.
The Action loads environment variables from GitHub secrets for my 'master' AWS account credentials:
AWS_ACCESS_KEY_ID: ${{ secrets.NK_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.NK_AWS_SECRET_ACCESS_KEY }}
I also load the target account's credentials into environment variables in the same way, BUT with the prefix TF_VAR:
TF_VAR_DEVELOP_AWS_ACCESS_KEY_ID: ${{ secrets.DEVELOP_AWS_ACCESS_KEY_ID }}
TF_VAR_DEVELOP_AWS_SECRET_ACCESS_KEY: ${{ secrets.DEVELOP_AWS_SECRET_ACCESS_KEY }}
I then declare Terraform variables, which will be automatically populated from the environment variables:
variable "DEVELOP_AWS_ACCESS_KEY_ID" {
description = "access key for the dev account"
type = string
}
variable "DEVELOP_AWS_SECRET_ACCESS_KEY" {
description = "secret access key for the dev account"
type = string
}
And then I run a shell script via a local-exec:
resource "null_resource" "image-upload-to-importcsv-ecr" {
provisioner "local-exec" {
command = "./ecr-push.sh ${var.DEVELOP_AWS_ACCESS_KEY_ID} ${var.DEVELOP_AWS_SECRET_ACCESS_KEY} "
}
}
Within the script I can then use these arguments to set the credentials, e.g.:
AWS_ACCESS=$1
AWS_SECRET=$2
.....
export AWS_ACCESS_KEY_ID=${AWS_ACCESS}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET}
and the script now has credentials to do whatever.
I have a React application that runs in an ECS container. I am trying to provide AWS credentials via the ECS container so that I can make calls to other AWS services from within the application.
This is what I have currently tried:
const chain = new AWS.CredentialProviderChain();
chain.providers.push(new AWS.ECSCredentials());

const secretsmanager = new AWS.SecretsManager({
  region: 'us-east-2',
  credentialProvider: chain,
});
However, when I run this code I get the following error:
ECSCredentials is not a constructor
I have also tried not including the ECSCredentials directly, but then I get an error stating that there are missing credentials in the config.
Does anyone know what I am doing wrong and why it is saying that this is not a constructor? I have followed the SDK documentation as best as I can tell.
I'm having a problem with my AWS credentials. I created the credentials file at ~/.aws/credentials just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried to move my credentials file to the /var/www directory, even though it is not my web server directory. Nothing worked. I was still getting the same error.
As a second solution, I saw that we could call the CredentialProvider directly and indicate the path on the client:
https://forums.aws.amazon.com/thread.jspa?messageID=583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I also saw that we could use the default provider of the CredentialProvider instead of indicating a path:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried and I keep getting the same error:
Cannot read credentials from /.aws/credentials
Just in case you need this information, I'm using aws/aws-sdk-php (3.2.5). The service I'm trying to use is the AWS Elastic Transcoder. My EC2 instance is an Ubuntu 14.04. It runs a Symfony application deployed using Capifony.
Before trying it on this production server, I tried it on a development server, where it works perfectly with only the ~/.aws/credentials file. This development server is an exact copy of the production server. However, it doesn't use Capifony for deployment; it is just a normal git clone of the project. And it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owners of the credentials file were the same on both servers, and they are. I tried a 777 to see if it would change something, but nothing.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services, and in fact you should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role later. Originally you could not assign a role after the instance had been launched, but you can now add a role to an instance after it has been launched.
It is still considered a best practice to not deploy actual credentials to an EC2 instance.
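To illustrate the principle (the question's own code is PHP, so this Node sketch is only an illustration, not the asker's stack): with an IAM role attached to the instance, the client is constructed with no keys at all and the SDK resolves temporary credentials from the instance profile automatically.
const AWS = require('aws-sdk');

// No keys anywhere in code or on disk: the SDK's default credential chain
// pulls temporary credentials from the instance metadata service.
const transcoder = new AWS.ElasticTranscoder({ region: 'eu-west-1' });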
In case this helps someone, I managed to make my .ini file work this way:
$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region'      => 'eu-west-1',
    'version'     => '2012-09-25',
    'credentials' => $provider
));
The CredentialProvider is explained in this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server but can on the other.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
Two ways of solving this problem for me in Node.js.
This gets my credentials from /home/{USER}/.aws/credentials using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({ profile: 'default' });
...
The hardcoded way
var lambda = new aws.Lambda({
  region: 'us-east-1',
  accessKeyId: <KEY>,
  secretAccessKey: <KEY>
});