I am trying to work with the new Instance Metadata Service Version 2 (IMDSv2) API.
It works as expected when I query the metadata manually, as described in Retrieve instance metadata - Amazon Elastic Compute Cloud.
However, if I query for the instance tags, it fails with the error message:
Couldn't find AWS credentials in environment, credentials file, or IAM role
The tags query is done by the Rusoto SDK that I am using, which works when I set --http-tokens to optional as described in Configure the instance metadata options - Amazon Elastic Compute Cloud.
I don't fully understand why setting the machine to work with IMDSv2 would affect the DescribeTags request; I believe it doesn't use the same API, so I am guessing that's a side effect.
If I try to do a manual query using curl (instead of using the SDK):
curl "https://ec2.amazonaws.com/?Action=DescribeTags&Filter.1.Name=resource-id&Filter.1.Value.1=ami-1a2b3c4d"
I get:
The action DescribeTags is not valid for this web service
Thanks :)
The library that I was using (Rusoto SDK 0.47.0) doesn't support fetching the credentials needed when the host is set to work with IMDSv2.
The workaround was to manually query for the IAM role credentials.
First, get a session token. Note that this is a PUT request, not a GET, and the TTL header is required:
curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"
Next, send the token in the "X-aws-ec2-metadata-token" header and query for the name of the attached IAM role:
curl -H "X-aws-ec2-metadata-token: <token>" http://169.254.169.254/latest/meta-data/iam/security-credentials/
Afterwards, use the role name from the previous query (and don't forget to set the token header again):
curl -H "X-aws-ec2-metadata-token: <token>" http://169.254.169.254/latest/meta-data/iam/security-credentials/<role name>
This will provide the following data, which I deserialize into:
use serde::Deserialize;

#[derive(Deserialize)]
struct SecurityCredentials {
    #[serde(rename = "AccessKeyId")]
    access_key_id: String,
    #[serde(rename = "SecretAccessKey")]
    secret_access_key: String,
    #[serde(rename = "Token")]
    token: String,
}
Then what I needed to do was build a custom credentials provider using that data (but this part is library-specific).
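My workaround was in Rust against Rusoto, but the flow is plain HTTP, so here is a minimal standalone sketch of the same three steps in Go; the 21600-second TTL and the error handling are just illustrative choices:

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

const imdsBase = "http://169.254.169.254"

// imdsGet performs a GET against the metadata service with the
// IMDSv2 session token attached.
func imdsGet(c *http.Client, path, token string) (string, error) {
    req, err := http.NewRequest(http.MethodGet, imdsBase+path, nil)
    if err != nil {
        return "", err
    }
    req.Header.Set("X-aws-ec2-metadata-token", token)
    resp, err := c.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    return string(body), err
}

func main() {
    c := &http.Client{}

    // Step 1: get a session token (PUT, not GET; the TTL header is required).
    req, err := http.NewRequest(http.MethodPut, imdsBase+"/latest/api/token", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
    resp, err := c.Do(req)
    if err != nil {
        panic(err)
    }
    tok, _ := io.ReadAll(resp.Body)
    resp.Body.Close()

    // Step 2: get the name of the IAM role attached to the instance.
    role, err := imdsGet(c, "/latest/meta-data/iam/security-credentials/", string(tok))
    if err != nil {
        panic(err)
    }

    // Step 3: get the temporary credentials for that role.
    creds, err := imdsGet(c, "/latest/meta-data/iam/security-credentials/"+strings.TrimSpace(role), string(tok))
    if err != nil {
        panic(err)
    }
    fmt.Println(creds) // JSON with AccessKeyId, SecretAccessKey, Token, Expiration
}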
Related
This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform / Terragrunt, and I find the workflow of obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field suggests that I (apparently) must generate a service account key and keep its JSON file somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively I can also use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using a gcloud command.
So why does the Terraform provider require a JSON file of a service account? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error: asn1: syntax error: sequence truncated
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
Read here for more on the topic of service accounts vs user accounts. For what it's worth, the Terraform docs explicitly advise against using application-default login:
This approach isn't recommended - some APIs are not compatible with credentials obtained through gcloud
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
Change the credentials to point directly to the file location. Everything else looks good.
Example: credentials = "/home/scott/gcp/FILE_NAME"
Still, it is not recommended to use gcloud auth application-default login. The best approaches are described here:
https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1
I'm trying to use aws-sdk-go in my application, which runs on an EC2 instance. The Configuring Credentials section of the doc, https://docs.aws.amazon.com/sdk-for-go/api/, says it will look in:
* Environment Credentials - Set of environment variables that are useful when sub-processes are created for specific roles.
* Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.
* EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to an application running on an EC2 instance. This removes the need to manage credential files in production.
Wouldn't the best order be the reverse? But my main question is: do I need to ask the instance whether it has a role, and then use that role to set up the credentials? This is where I'm not sure what I need to do and how.
I did try a simple test of creating an empty config, essentially only setting the region, and running it on the instance with the role, and it seems to have "worked" - but in this case I am not sure whether I need to explicitly set the role or not.
awsSDK.Config{
Region: awsSDK.String(a.region),
MaxRetries: awsSDK.Int(maxRetries),
HTTPClient: http.DefaultClient,
}
I just want to confirm whether this is the proper way of doing it or not. My thinking is that I need to do something like the following:
role = use an SDK call to get the role on the machine
set awsSDK.Config{
    Credentials: credentials formed from the role,
    ...
}
issue the service command with the returned client
Any more docs/pointers would be great!
I have never used the Go SDK, but the AWS SDKs I have used automatically fall back to the EC2 instance role if credentials are not found from any other source.
Here's an AWS blog post explaining the approach AWS SDKs follow when fetching credentials: https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/. In particular, see this:
If you use code like this, the SDKs look for the credentials in this order:
1. In environment variables. (Not the .NET SDK, as noted earlier.)
2. In the central credentials file (~/.aws/credentials or %USERPROFILE%\.aws\credentials).
3. In an existing default, SDK-specific configuration file, if one exists. This would be the case if you had been using the SDK before these changes were made.
4. For the .NET SDK, in the SDK Store, if it exists.
5. If the code is running on an EC2 instance, via an IAM role for Amazon EC2. In that case, the code gets temporary security credentials from the instance metadata service; the credentials have the permissions derived from the role that is associated with the instance.
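To make the last point concrete for aws-sdk-go: on an instance with a role attached, you can build a session without specifying any credentials and the default chain falls through to the instance role. A minimal sketch (the region is just an example); calling sts:GetCallerIdentity is a cheap way to confirm which identity was actually resolved:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

func main() {
    // No Credentials field: the default chain (env vars, shared file,
    // then the EC2 instance role) resolves them automatically.
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-east-1"),
    }))

    out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("running as:", aws.StringValue(out.Arn))
}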
In my apps, when I need to connect to AWS resources, I tend to use an access key and secret key that have specific predefined IAM roles. Assuming I have those two, the code I use to create a session is:
awsCredentials := credentials.NewStaticCredentials(awsAccessKeyID, awsSecretAccessKey, "")
awsSession = session.Must(session.NewSession(&aws.Config{
Credentials: awsCredentials,
Region: aws.String(awsRegion),
}))
When I use this, the two keys are usually supplied as environment variables (e.g. when I deploy to a Docker container).
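Worth noting: the SDK's default chain already reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment on its own, so the static-credentials call above is only needed if you want to control the fallback explicitly. A hypothetical helper (the name and structure are mine, not from the SDK docs):

import (
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
)

// newSession prefers keys from the environment; if they are absent it
// leaves Credentials unset so the default chain (shared credentials
// file, instance role, ...) takes over.
func newSession(region string) *session.Session {
    cfg := &aws.Config{Region: aws.String(region)}
    key := os.Getenv("AWS_ACCESS_KEY_ID")
    secret := os.Getenv("AWS_SECRET_ACCESS_KEY")
    if key != "" && secret != "" {
        cfg.Credentials = credentials.NewStaticCredentials(key, secret, "")
    }
    return session.Must(session.NewSession(cfg))
}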
A complete example: https://github.com/retgits/flogo-components/blob/master/activity/amazons3/activity.go
In my Jenkinsfile, I am trying to push the image that I have built using the Docker plugin, as follows:
docker.withRegistry('https://<my-id>.dkr.ecr.us-east-1.amazonaws.com/', 'ecr:us-east-1:awscreds') {
docker.image('image').push('latest')
}
The pipeline fails every time with the message ERROR: Could not find credentials matching ecr:us-east-1:awscreds but I do have my AWS key ID and secret key in my Jenkins credentials with the ID "awscreds".
What could be a potential fix for this?
Alternatively, can I provide my credentials directly instead of mentioning the credential ID in the call?
I had the same error message. Make sure the Amazon ECR plugin is installed and up to date, and restart Jenkins after the installation.
I'm running Spark 2 in local mode on an Amazon EC2 instance; when I try to read data from S3 I get the following exception:
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively)
I could set the AccessKey and the SecretKey manually in the code, but I'd rather not, for security reasons.
The EC2 instance is set up with an IAM role that allows it full access to the relevant S3 bucket. For every other Amazon API call this is sufficient, but Spark seems to ignore it.
Can I make Spark use this IAM role instead of the AccessKey and the SecretKey?
Switch to using the s3a:// scheme (with the Hadoop 2.7.x JARs on your classpath) and this happens automatically. The "s3://" scheme with non-EMR versions of Spark/Hadoop is not the connector you want (it's old, non-interoperable, and has been removed from recent versions).
I am using hadoop-2.8.0 and spark-2.2.0-bin-hadoop2.7.
Spark-S3-IAM integration works well with the following AWS packages on the driver.
spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 ...
Scala code snippet:
sc.textFile("s3a://.../file.gz").count()
Is it possible to get AWS instance info, local to the instance, without using credentials? I know the command-line tool can do it, but it needs credentials. There are also the metadata commands, but those don't seem to return tags, which is what I need.
I thought there was a way to curl an IP and get back json, but I can't find it.
It is not possible to retrieve tags directly from within the EC2 instance via the local metadata service as the metadata service does not know the tags. You have (at least) two options:
launch the instance with an IAM role (or somehow provide other credentials to the instance) that includes permission to call ec2:DescribeTags, and then retrieve the tags dynamically - you'll need the instance ID for this, which you can get from the metadata service (see the sketch after this list)
if the tags are known at launch time and are not going to change after launch, you could simply pass them into the EC2 instance as part of the userdata (e.g. as environment variables or written to a text file at launch).
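As a sketch of the first option using aws-sdk-go (the region is an example value, and the instance role must allow ec2:DescribeTags):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/ec2metadata"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession())

    // Ask the metadata service for this instance's ID (no credentials needed).
    id, err := ec2metadata.New(sess).GetMetadata("instance-id")
    if err != nil {
        log.Fatal(err)
    }

    // Use the instance role's credentials to describe this instance's tags.
    svc := ec2.New(sess, aws.NewConfig().WithRegion("us-east-1"))
    out, err := svc.DescribeTags(&ec2.DescribeTagsInput{
        Filters: []*ec2.Filter{{
            Name:   aws.String("resource-id"),
            Values: []*string{aws.String(id)},
        }},
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, t := range out.Tags {
        fmt.Printf("%s=%s\n", aws.StringValue(t.Key), aws.StringValue(t.Value))
    }
}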
Unfortunately, you'll need credentials to retrieve tags. I do this by creating an IAM user that only has ec2:Describe* permissions; it can then enumerate the instances in your account and retrieve their tags, with ec2-describe-tags or similar.
You can use the metadata API to retrieve the current instance ID, then pass that to ec2-describe-tags to retrieve the tags for the current instance:
ec2-describe-tags -O YOUR_IAM_KEY -W YOUR_IAM_SECRET --filter="resource-id=`curl -s http://169.254.169.254/latest/meta-data/instance-id`"
Yes, you can get the EC2 instance tags without supplying credentials yourself. You do this using the EC2 roles / instance profiles for the EC2 instance. I know that this has already been mentioned, but I'd like to expand on it a little. Technically you're not actually doing anything without credentials - credentials are always involved unless you're just making queries to the metadata.
What Boto and other similar frameworks do is query the EC2 instance metadata to get the credentials for the role. Just replace the last part, s3access, with the name of your profile / role:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
Returns
{
"Code" : "Success",
"LastUpdated" : "2012-04-26T16:39:16Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"Token" : "token",
"Expiration" : "2012-04-27T22:39:16Z"
}
This response includes the access credentials required to make the API request. When the credentials expire the framework will request a new set of credentials using the same method and repeat this process as many times as necessary.
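For example, a client could parse the document above and decide when to refresh. A minimal sketch in Go (the five-minute safety margin is an arbitrary choice):

import (
    "encoding/json"
    "time"
)

// securityCredentials mirrors the JSON document shown above.
type securityCredentials struct {
    Code            string
    AccessKeyId     string
    SecretAccessKey string
    Token           string
    Expiration      time.Time // RFC 3339, parsed by encoding/json automatically
}

// needsRefresh reports whether a fresh set of credentials should be
// requested from the metadata service.
func needsRefresh(raw []byte) (bool, error) {
    var c securityCredentials
    if err := json.Unmarshal(raw, &c); err != nil {
        return false, err
    }
    return time.Until(c.Expiration) < 5*time.Minute, nil
}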
I highly recommend using a framework, because making the requests directly to the REST API requires that you perform the authentication yourself. If that's the direction you decide to go, here are some more resources to help you out.
Signature Version 2
Describe Tags API