AWS Sagemaker on local machine: Invalid security token included in the request

I am trying to get AWS Sagemaker to run locally. I found this Jupyter notebook:
https://gitlab.com/juliensimon/aim410/-/blob/master/local_training.ipynb
I logged into AWS via saml2aws, so I have valid credentials. I entered my specific region as well as the Sagemaker Execution Role ARN, and below I specify the particular image I want to pull.
However, when starting .fit() I get the following ClientError:
ClientError: An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.
Can someone give me a hint or suggestion on how to solve this issue?
Thanks!

Try to verify that your AWS credentials are set up properly, bypassing Boto3, by running a cell with something like:
!aws sagemaker list-endpoints
If this fails, then your AWS CLI credentials aren't set up correctly, your saml2aws process is broken, or your role has no SageMaker permissions.
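You can also run the same check from Python without leaving the notebook. This is a minimal sketch using boto3's STS client, i.e. the very GetCallerIdentity call that .fit() is failing on (assuming boto3 picks up the credentials saml2aws wrote):

import boto3

# Ask STS which identity the default credential chain resolves to; if the
# saml2aws session is missing or expired, this raises InvalidClientTokenId.
sts = boto3.client("sts")
print(sts.get_caller_identity())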

Related

Azure DevOps YAML pipeline error for AWS keys

I am getting the below error while running an Azure DevOps pipeline. I have added the correct AWS 'Access key' and 'Secret Access key', but it is still failing.
I checked from the backend Windows server: it works fine manually, but gives the below error when I run the pipeline. Not sure what is missing; can you please suggest?
Note: I am using an IAM role to access the AWS environment and providing the 'keys' of the same role.
An error occurred (SignatureDoesNotMatch) when calling the AssumeRole operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
Cannot index into a null array.

How to fix expired token in AWS s3 copy command?

I need to run the command aws s3 cp <filename> <bucketname> from an EC2 RHEL instance to copy a file from the instance to an S3 bucket.
When I run this command, I receive this error: An error occurred (ExpiredToken) when calling the PutObject operation: The provided token has expired
I also found that this same error occurs when trying to run many other CLI commands from the instance.
I do not want to change my IAM role because the command was previously working perfectly fine and IAM policy changes must go through an approval process. I have double checked the IAM role the instance is assuming and it still contains the correct configuration for allowing PutObject on the correct resources.
What can I do to allow AWS CLI commands to work again in my instance?
AWS API tokens are time-sensitive, and VMs in the cloud tend to suffer from clock drift.
Check that the time is accurate on the RHEL instance, and use NTP servers to make sure any drift is regularly corrected.
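If you want to measure the drift before fixing it, here is a minimal sketch; it assumes the third-party ntplib package (pip install ntplib) and the public pool.ntp.org servers, neither of which is part of the original answer:

import ntplib

# Compare the local clock against an NTP server; an offset of more than a
# few minutes is enough for AWS to reject signed requests as expired.
response = ntplib.NTPClient().request("pool.ntp.org", version=3)
print(f"clock offset: {response.offset:.2f} seconds")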

Spark credential chain ordering - S3 Exception Forbidden

I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key / secret key / session token in sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider".
When I try to read a dataset from S3 (using s3a, which is also set in the Hadoop config), I get an error that says:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that Spark has some sort of hidden credential chain ordering, and even though I have set the credentials in the Hadoop config, it is still grabbing credentials from somewhere else (my instance profile?). But I have no way to diagnose that.
Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your environment variables and set them as the fs.s3a access, secret, and session keys, overwriting any you've already set.
If you only want to use the IAM instance credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it'll be the only provider used.
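As a concrete illustration, here is a minimal PySpark sketch of that setting (the bucket/key placeholders are from the question; whether this provider class is on your classpath depends on the AWS SDK bundled with your Spark 2.4 build, which is an assumption):

from pyspark.sql import SparkSession

# Pin S3A to instance-profile credentials only, so stray AWS_* environment
# variables can no longer override what is set in the Hadoop config.
spark = (
    SparkSession.builder.appName("s3a-iam-only")
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.InstanceProfileCredentialsProvider")
    .getOrCreate()
)

myData = spark.read.parquet("s3a://myBucket/myKey")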
Further Reading: Troubleshooting S3A

CLI command "describe-instances" throw error "An error occurred (AuthFailure) when calling the

I was able to install the CLI on a Windows Server 2016 AWS instance. When I try the "aws ec2 describe-instances" CLI command, I get the following error:
CLI command "describe-instances" throw error "An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials"
In .aws\config file I have following content:
[default]
region = us-west-2
How can authorization fail when it took my access key ID and secret access key without any issue?
Verify that your date and time are in sync.
Use: ntpdate ntp.server
I deleted my two configuration files from the .aws directory and re-ran "aws configure".
That fixed the problem for me.
My steps:
Go to your .aws directory under Users, e.g. "c:\Users\Joe\.aws"
There are two files: config and credentials. Delete both files.
Rerun configure: "aws configure"
Note that when you run aws configure you will need the AWS access key and secret key. If you don't have them, you can just create a new pair.
Steps:
Go to "My Security Credentials" under your account name in the AWS Console.
Expand the Access Keys panel.
Create New Access Key.
When you first ran aws configure, it just populated the local credentials in %UserProfile%\.aws\credentials; it didn't validate them with AWS.
(aws-cli doesn't know what rights your user has until it tries to do an operation -- all of the access control happens on AWS's end. It just tries to do what you ask, and tells you if it doesn't have access, like you saw.)
That said, if you're running the CLI from an AWS instance, you might want to consider applying a role to that instance, so you don't have to store your keys on the instance.
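If you want to see which source in that chain actually supplied your credentials, here is a minimal boto3 sketch (using boto3 for the check is my addition; it shares the same credential files and resolution order as the CLI):

import boto3

# Report where the default credential chain found credentials: environment
# variables, the shared credentials file, or an attached instance role.
creds = boto3.Session().get_credentials()
print(creds.method if creds else "no credentials found")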
My access and secret keys are correct. My server time was good. I got the error while using the ap-south-1 region. After I changed my region to us-west-2, it worked without any problem.
I tried setting that on my Windows environment too; it didn't work and I was still getting the error above.
So I set the credentials in my environment:
SET AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
SET AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
and then ran a command like "aws ec2 describe-instances".
I tried many things. Finally, just uninstalling and reinstalling (not repairing) did the trick. Just make sure to save a copy of your credentials (access key ID and secret key) to use later when calling aws configure.

AWS SDK Error: Failed to get the Amazon S3 bucket name

I am attempting to set up AWS with an Elastic Beanstalk instance I have previously created. I have entered the various details into the config file; however, when I try to use aws.push I get the error message:
Updating the AWS Elastic Beanstalk environment x-xxxxxx...
Error: Failed to get the Amazon S3 bucket name
I have checked the credentials in IAM and I have full administrator privileges. When I run eb status (which should show green), I get the following message:
InvalidClientTokenId. The security token included in the request is invalid.
Run aws configure again, and re-enter your credentials.
It's likely you're using old (disabled or deleted) access/secret keys, or you accidentally swapped the access key with the secret key.
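One quick way to catch the swapped-keys case is that the two values have different shapes. A rough heuristic sketch (the 20-character and 40-character patterns reflect the usual format of AWS access key IDs and secret keys, not an official validation rule):

import re

# Access key IDs are 20 uppercase alphanumerics (commonly starting "AKIA");
# secret access keys are 40 characters of base64-like text.
def looks_swapped(access_key: str, secret_key: str) -> bool:
    is_id = re.fullmatch(r"[A-Z0-9]{20}", access_key)
    is_secret = re.fullmatch(r"[A-Za-z0-9/+=]{40}", secret_key)
    return not (is_id and is_secret)

# AWS's documented example key pair -- correctly ordered, so this prints False.
print(looks_swapped("AKIAIOSFODNN7EXAMPLE",
                    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))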
For me it was that my system clock was off by more than 5 minutes. Updating the time fixed the issue.