Terraform: Deleted AWS keys before running Terraform Destroy

I created some resources in AWS with Terraform, authenticating with an AWS access key and secret key. I accidentally deleted those keys, not realising I would need them for terraform destroy.
I have created new keys, but I'm guessing Terraform requires the original ones.
I get this error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
How do I get around this?
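Terraform does not need the original key pair, only working credentials for an identity that can manage the resources and the state. SignatureDoesNotMatch usually means the access key ID and secret key the provider is picking up don't belong together, for example a stale environment variable mixed with the new key. A minimal sketch, assuming the new keys were created for the same IAM user (all values below are placeholders):

# Placeholder values: use the new key pair you created.
export AWS_ACCESS_KEY_ID="<new access key ID>"
export AWS_SECRET_ACCESS_KEY="<new secret access key>"
unset AWS_SESSION_TOKEN   # clear any stale session token left over from the old credentials

# Confirm the credentials resolve to the expected identity, then destroy.
aws sts get-caller-identity
terraform destroy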

Related

Azure DevOps YAML pipeline error for AWS keys

I am getting the error below while running an Azure DevOps pipeline. I have added the correct AWS 'Access key' and 'Secret Access key', but it is still failing.
I checked from the backend Windows server: it works fine when I run it manually, but it gives the error below when I run the pipeline. I'm not sure what is missing; can you please suggest?
Note: I am using an IAM role to access the AWS environment and providing the 'keys' of the same role.
An error occurred (SignatureDoesNotMatch) when calling the AssumeRole operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
Cannot index into a null array.
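One way to narrow this down (a sketch, not the pipeline's actual task) is to sign the same sts:AssumeRole call from a shell using exactly the key pair stored in the pipeline variables; SignatureDoesNotMatch on AssumeRole generally means the secret key does not match the access key ID, and a truncated or whitespace-padded pipeline variable is a common cause. The key values and role ARN below are placeholders:

# Placeholder values copied from the pipeline's variables.
export AWS_ACCESS_KEY_ID="<pipeline access key>"
export AWS_SECRET_ACCESS_KEY="<pipeline secret key>"

# Does the base key pair sign requests correctly?
aws sts get-caller-identity

# Can it assume the role the pipeline uses? (placeholder role ARN)
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/ExamplePipelineRole \
  --role-session-name signature-check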

Issues using an AWS SSM parameter for private repository authentication in a task definition

I am using GHCR as my container registry and have some images there. I am trying to deploy those images using AWS ECS and task definitions through Terraform.
In the task definition, I have specified the image URL and repository credentials. To access my image, I need to provide my GHCR username and token. I have stored the GHCR username and token as a JSON object in both AWS Secrets Manager and AWS SSM Parameter Store:
{"username": "xxx", "password": "xxx"}
If I use the AWS Secrets Manager secret ARN as credentialsParameter, it works. If I use the AWS SSM parameter ARN there, it gives an error.
How can I use an SSM parameter whose value is the JSON object above in credentialsParameter? Is there any way or workaround to do that, or should I only use a Secrets Manager secret ARN?
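As far as I know, the ECS repositoryCredentials.credentialsParameter field only accepts a Secrets Manager secret ARN, which matches the behaviour described above. A hedged workaround sketch is to copy the same JSON into a Secrets Manager secret and reference that ARN instead; the secret name below is hypothetical:

# Hypothetical secret name; the JSON mirrors what is currently stored in SSM.
aws secretsmanager create-secret \
  --name ghcr-pull-credentials \
  --secret-string '{"username": "xxx", "password": "xxx"}'

# Use the returned ARN as repositoryCredentials.credentialsParameter in the
# container definition (or in the Terraform resource that renders it).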

An error occurred (UnrecognizedClientException) when calling the ListTopicRules operation: The security token included in the request is invalid

I got this error when using the AWS CLI. Following the tutorial linked below to configure the AWS CLI, I can only set up the AWS Access Key ID and AWS Secret Access Key. Where can I set the security token?
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
I found the answer to this, and it might seem very simple.
Basically, you were following the tutorial literally, which means you most probably entered the exact AWS Access Key ID and AWS Secret Access Key shown in the tutorial.
However, those keys (shown in red) are just examples. What you should use are the keys from your own AWS account, i.e. from My Security Credentials.
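On the original question of where the security token goes: if you are using temporary credentials (for example from an assumed role), the CLI reads them from aws_session_token; with plain IAM user keys from My Security Credentials no token is needed. A sketch with placeholder values:

# Enter your own keys (not the tutorial's example values) when prompted.
aws configure

# Only needed for temporary credentials; placeholder value shown.
aws configure set aws_session_token "<your session token>"

# Verify which identity the CLI is actually using.
aws sts get-caller-identity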

Spark credential chain ordering - S3 Exception Forbidden

I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key/secret key/token in the sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider as "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider".
When I try to read a dataset from s3 (using s3a, which is also set in the hadoop config), I get an error that says
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that spark has some sort of hidden credential chain ordering and even though I have set the credentials in the hadoop config, it is still grabbing credentials from somewhere else (my instance profile???). But I have no way to diagnose that.
Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your environment variables and set them as the fs.s3a access, secret, and session keys, overwriting any you've already set.
If you only want to use the instance's IAM credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it will then be the only provider used.
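A sketch of that second option, assuming the job is launched with spark-submit: Spark copies spark.hadoop.* properties into the Hadoop configuration, so the provider can be pinned at submit time instead of in code (class and jar names are placeholders):

spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  --class com.example.MyJob \
  my-job.jar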
Further Reading: Troubleshooting S3A

Verifying AWS Command Line Interface credentials are configured correctly

I seem to have problems running a command to verify that my credentials are configured correctly and that I can connect to AWS, as described here: https://docs.aws.amazon.com/cli/latest/userguide/tutorial-ec2-ubuntu.html
When running:
$ aws ec2 describe-regions --output table
I get the following output:
An error occurred (AuthFailure) when calling the DescribeRegions operation: AWS was not able to validate the provided access credentials
What am I missing?
After installing the AWS CLI (on a Fedora machine), I ran
$ aws configure
and entered values for AWS Access Key ID and AWS Secret Access Key:
I went to the AWS website and created an IAM user. For that user, I went to the Security credentials tab and created a new access key, which is a key-value pair of an Access key ID and a Secret access key.
I used those values for AWS Access Key ID and AWS Secret Access Key, but I keep getting the above error message.
What am I missing? Thanks in advance.
You need to pass the --profile parameter. This link from AWS has more details.
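For example, assuming the keys were saved under a named profile rather than the default (the profile name is a placeholder):

# Store the keys under a named profile, then reference it explicitly.
aws configure --profile myprofile
aws ec2 describe-regions --output table --profile myprofile

# Quick check that the profile resolves to the intended IAM user.
aws sts get-caller-identity --profile myprofile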