I'm currently using Traefik and Lego to get HTTPS connections for my Docker containers (as mentioned here).
In the documentation, it's mentioned that I need to use the lightsail provider to do the DNS challenge.
But I get this error:
AccessDeniedException: User: arn:aws:sts::<USER_ID>:assumed-role/AmazonLightsailInstanceRole/<AN_ID> is not authorized to perform: lightsail:CreateDomainEntry on resource: arn:aws:lightsail:us-east-1:<INSTANCE_ID>:*
and another one for DeleteDomainEntry, even though the IAM user used for configuration has the lightsail:* permission on Resource: *.
If I understand correctly, Lightsail is managed separately from the other AWS services, and thus we need to use STS to connect to it (tell me if I'm wrong). So my question is: how can I set the permissions for the temporary token so that it can do CreateDomainEntry and DeleteDomainEntry?
Further information
My instance's region is eu-west-3 (I tried changing the region in the Lego config; it doesn't work)
The <USER_ID> seen in the error does not correspond to the ID found in the ARN of the domain. It corresponds to the first number in the supportCode returned by aws lightsail get-domains --region us-east-1 in the CLI.
Lego and Traefik do not call AssumeRole directly and do not create the temporary token (I checked the source code)
I'm using AWS_ACCESS_KEY_ID_FILE and AWS_SECRET_ACCESS_KEY_FILE in Traefik environment configuration.
The error message indicates that Lego made the request using the IAM role assigned to your Lightsail instance. I guess your instance lacks the permissions to modify DNS settings for Lightsail.
You should create a new user in AWS IAM and enable programmatic access in order to obtain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Then, pass those values as environment variables to your containers running Lego. Lego will use those env vars to authenticate with Lightsail APIs in us-east-1. [1]
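For example, assuming a plain docker run deployment (a sketch; the key values and service name are placeholders for your setup):
docker run -e AWS_ACCESS_KEY_ID=<your-key-id> -e AWS_SECRET_ACCESS_KEY=<your-secret-key> traefik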
My instance's region is eu-west-3 (I tried changing the region in the Lego config; it doesn't work)
Your Lego instance must call AWS APIs in us-east-1, see [2][3].
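For reference, such a call has to target us-east-1 no matter where the instance lives; a hedged CLI sketch (the domain and record values are placeholders, and the exact quoting of the TXT target may vary):
aws lightsail create-domain-entry --region us-east-1 --domain-name example.com --domain-entry name=_acme-challenge.example.com,type=TXT,target='"<validation-token>"'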
Lego and Traefik do not call the AssumeRole directly and do not create the temporary token
I guess Traefik/Lego assume the lightsail instance role automatically using EC2 instance metadata service, see [4]:
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials—the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token.
I'm using AWS_ACCESS_KEY_ID_FILE and AWS_SECRET_ACCESS_KEY_FILE in Traefik environment configuration.
I could not find those env vars in the Lego source code [1]. Make sure that Lego is actually using your configured AWS credentials. The error message posted above suggests it's not using them and falls back to the instance profile instead.
[1] https://github.com/go-acme/lego/blob/master/providers/dns/lightsail/lightsail.go#L81
[2] https://docs.aws.amazon.com/cli/latest/reference/lightsail/create-domain-entry.html#examples
[3] https://github.com/go-acme/lego/blob/master/providers/dns/lightsail/lightsail.go#L69
[4] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
There were too many confusing things. Martin Löper's answer and the answer on the GitHub issue I opened helped me clear things up.
Here is what was confusing:
Lego's lightsail provider documentation lists the environment variables and then says The environment variable names can be suffixed by _FILE to reference a file instead of a value. It turns out Lego's code never calls its getOrFile method on the credentials. Furthermore, the AWS SDK does not check variables with the _FILE suffix for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
The error message from AWS is a little bit misleading. All that time I thought it was a permission problem, but in fact it was an authentication problem (a little bit different, in my opinion).
And here is how to solve it (a little bit different from what was proposed):
I use the AWS_SHARED_CREDENTIALS_FILE environment variable (mentioned here) so that I can use Docker secrets, by pointing it at the /run/secrets/aws_shared_credentials file. This is more secure (more info here). The AWS SDK automatically detects this env variable and initializes the session correctly.
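Concretely, it looks something like this (a sketch; the names are from my setup and may differ in yours). The secret file mounted at /run/secrets/aws_shared_credentials is a standard shared credentials file:
[default]
aws_access_key_id = <key id of the IAM user>
aws_secret_access_key = <secret key of the IAM user>
and Traefik's environment then points the SDK at it:
AWS_SHARED_CREDENTIALS_FILE=/run/secrets/aws_shared_credentials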
Related
I'm trying to use aws-sdk-go in my application, which runs on an EC2 instance. Now, in the Configuring Credentials section of the docs, https://docs.aws.amazon.com/sdk-for-go/api/, it says it will look in:
* Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.
* Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.
* EC2 Instance Role Credentials - Use EC2 Instance Role to assign credentials to application running on an EC2 instance. This removes the need to manage credential files in production.
Wouldn't the best order be the reverse? But my main question is: do I need to ask the instance whether it has a role, and then use that role to set up the credentials? This is where I'm not sure what I need to do, or how.
I did try a simple test: creating an almost empty config, essentially only setting the region, and running it on the instance with the role. It seems to have "worked", but in this case I am not sure whether I need to explicitly set the role or not.
awsSDK.Config{
    Region:     awsSDK.String(a.region),
    MaxRetries: awsSDK.Int(maxRetries),
    HTTPClient: http.DefaultClient,
}
I just want to confirm whether this is the proper way of doing it or not. My thinking is that I need to do something like the following:
role = use sdk call to get role on machine
set awsSDK.Config { Credentials: credentials form of role,
...
}
issue service command with returned client.
Any more docs/pointers would be great!
I have never used the Go SDK, but the AWS SDKs I have used automatically use the EC2 instance role if credentials are not found in any other source.
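So you should not need the explicit role lookup from your pseudocode. A minimal sketch with aws-sdk-go (assuming the v1 session and s3 packages; note that no credentials are configured anywhere in the code):
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // No credentials set here: the SDK walks its default chain (env vars,
    // shared credentials file, then the EC2 instance role) automatically.
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-east-1"),
    }))

    svc := s3.New(sess)
    out, err := svc.ListBuckets(&s3.ListBucketsInput{})
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println("buckets:", len(out.Buckets))
}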
Here's an AWS blog post explaining the approach AWS SDKs follow when fetching credentials: https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/. In particular, see this:
If you use code like this, the SDKs look for the credentials in this order:
1. In environment variables. (Not the .NET SDK, as noted earlier.)
2. In the central credentials file (~/.aws/credentials or %USERPROFILE%\.aws\credentials).
3. In an existing default, SDK-specific configuration file, if one exists. This would be the case if you had been using the SDK before these changes were made.
4. For the .NET SDK, in the SDK Store, if it exists.
5. If the code is running on an EC2 instance, via an IAM role for Amazon EC2. In that case, the code gets temporary security credentials from the instance metadata service; the credentials have the permissions derived from the role that is associated with the instance.
In my apps, when I need to connect to AWS resources, I tend to use an access key and secret key that have specific predefined IAM permissions. Assuming I have those two, the code I use to create a session is:
awsCredentials := credentials.NewStaticCredentials(awsAccessKeyID, awsSecretAccessKey, "")
awsSession = session.Must(session.NewSession(&aws.Config{
    Credentials: awsCredentials,
    Region:      aws.String(awsRegion),
}))
When I use this, the two keys are usually specified as environment variables (e.g. when I deploy to a Docker container).
A complete example: https://github.com/retgits/flogo-components/blob/master/activity/amazons3/activity.go
I have an EC2 instance.
I'm trying to call aws s3 from it, but I'm getting an error:
Unable to locate credentials
I tried aws configure, which shows everything as empty.
I can see an IAM role with full S3 permissions assigned to this instance.
Do I need any additional configuration?
If you run aws on an Amazon EC2 instance that has an assigned role, then it should find the credentials automatically.
You can also use this to view the credentials from the instance:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
The role name should be listed. Then append it to the command:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME/
You should then be presented with AccessKeyId, SecretAccessKey, etc.
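The response looks roughly like this (illustrative placeholder values only):
{
  "Code" : "Success",
  "LastUpdated" : "2019-01-01T00:00:00Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA...",
  "SecretAccessKey" : "<secret>",
  "Token" : "<session token>",
  "Expiration" : "2019-01-01T06:00:00Z"
}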
If that information does not appear, it suggests that your role is not correctly attached to the instance. Try unassigning the role and then assigning it to the instance again.
First read and follow John Rotenstein's advice.
The SDK (from which the CLI is built) searches for credentials in the following order:
1. Environment variables (AWS_ACCESS_KEY_ID, ...)
2. Default profile (credentials file)
3. ECS container credentials (if running on ECS)
4. Instance profile
After verifying that your EC2 instance has credentials in the metadata, double-check 1 and 2 to make sure that there are no other credentials present, even empty ones.
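A quick way to see which source the CLI is actually picking up (the exact output can vary by CLI version):
aws configure list
The Type column shows values such as env, shared-credentials-file, or iam-role, which tells you which of the sources above won.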
Note: This link argues against my last point (an empty credentials file). I never create (store) credentials on an EC2 instance; I only use IAM Roles. See also: Instance Metadata.
For different AWS services, I need different IAM users to secure the access control. Sometimes, I even need to use different IAM user credentials within a single project on an EC2 instance. What's the proper way to manage this, and how can I deploy/attach these IAM user credentials to a single EC2 instance?
While I fully agree with the accepted answer that using static credentials is one way of solving this problem, I would like to suggest some improvements over it (and over the proposed Secrets Manager).
What I would advise as an architectural step forward is dockerizing the application and running it on AWS Elastic Container Service (ECS); this achieves full isolation of credentials, keeps them dynamic, and avoids storing them in a central place (the Secrets Manager proposed above). This way you can assign a different IAM role to each ECS Task, as sketched below.
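As an illustration, a task definition references a task role like this (a trimmed, hypothetical sketch; the ARNs and names are placeholders):
{
  "family": "my-app",
  "taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/my-app-task-role",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
    }
  ]
}
The SDK inside the container then picks up the role's temporary credentials automatically, much like an EC2 instance profile.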
Benefits over the Secrets Manager solution
- The use case of someone tampering with credentials in Secrets Manager is fully avoided, as the credentials are dynamic (temporary, and automatically assumed through the SDKs)
- Credentials are managed on the AWS side for you
- Only the ECS Service can assume this IAM role, meaning you can't have an actual person stealing the credentials, or a developer connecting to the production environment from their local machine with these credentials
AWS Official Documentation for Task Roles
The normal way to provide credentials to applications running on an Amazon EC2 instance is to assign an IAM Role to the instance. Temporary credentials associated with the role will then be provided via Instance Metadata. The AWS SDKs will automatically use these credentials.
However, this only works for one set of credentials. If you wish to use more than one set of credentials, you will need to provide them in a credentials file.
The AWS credentials file can contain multiple profiles, eg:
[default]
aws_access_key_id = AKIAaaaaa
aws_secret_access_key = abcdefg
[user2]
aws_access_key_id = AKIAbbbb
aws_secret_access_key = xyzzzy
As a convenience, this can also be configured via the AWS CLI:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAbbbb
AWS Secret Access Key [None]: xyzzy
Default region name [None]: us-east-1
Default output format [None]: text
The profile to use can be set via an Environment Variable:
Linux: export AWS_PROFILE="user2"
Windows: set AWS_PROFILE="user2"
Alternatively, when calling AWS services via an SDK, simply specify the Profile to use. Here is an example with Python from Credentials — Boto 3 documentation:
session = boto3.Session(profile_name='user2')
# Any clients created from this session will use credentials
# from the [user2] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
There is an equivalent capability in the SDKs for other languages, too.
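For instance, with aws-sdk-go the equivalent looks roughly like this (a sketch assuming the v1 session and s3 packages):
// assuming: import "github.com/aws/aws-sdk-go/aws/session" and
// "github.com/aws/aws-sdk-go/service/s3"
sess := session.Must(session.NewSessionWithOptions(session.Options{
    Profile: "user2", // uses credentials from the [user2] section
}))
s3Client := s3.New(sess)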
I have been given an access key and secret key through IAM, but I am restricted from opening IAM through my AWS console.
After setting the environment variables for the access key, secret key, and region, I executed ./ec2.py --list, which gives a 403 Forbidden error. What could the problem be?
And I have seen my policy. The policy structure of my IAM user is:
Statements:
    Effect: allow
    Resource: ec2:*
Sorry, I cannot copy my policy structure. I run behind a proxy, but I don't think the proxy is the problem, because I am getting a response.
The AWS console can only be connected to through a remote desktop gateway and a server. Could this be a problem? But I have my access key ID and secret key.
It sounds like you either do not have your environment set up correctly, or have the incorrect permissions to list metadata about your EC2 instances. If it's the former, you need to export your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, e.g.:
export AWS_SECRET_ACCESS_KEY=your-aws-secret-key
export AWS_ACCESS_KEY_ID=your-aws-access-key
If you are referring to permissions on the remote host for making calls to EC2, then you can do this by creating IAM roles, which delegate various rights to the instances that belong to the role.
In the end, the mistake was the role assignment and the corporate proxy.
For some reason Packer fails to authenticate to AWS; using the plain aws client works, though, and my environment variables are correctly set:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating using aws-saml, and Packer gives me the following:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies within the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When creating a config using aws-saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if those credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws-saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file would not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file, then Packer was happy to use our environment variables.
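In other words, the ~/.aws/config we ended up with looked something like this (values are placeholders):
[default]
region = us-east-1
aws_access_key_id = <key id required by aws-saml>
aws_secret_access_key = <secret key required by aws-saml>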
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can have the same authentication behavior; feel free to vote it up if you have the issue too: https://github.com/mitchellh/goamz/issues/171