aws access s3 from spark using IAM role - amazon-web-services
I want to access S3 from Spark without configuring any access and secret keys; I want to access it by configuring an IAM role instead, so I followed the steps given in s3-spark.
But it is still not working from my EC2 instance (which is running standalone Spark).
It works when I test it with the AWS CLI:
[ec2-user@ip-172-31-17-146 bin]$ aws s3 ls s3://testmys3/
2019-01-16 17:32:38 130 e.json
but it did not work when I tried the following:
scala> val df = spark.read.json("s3a://testmys3/*")
I am getting the below error
19/01/16 18:23:06 WARN FileStreamSink: Error while looking for metadata directory.
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: E295957C21AFAC37, AWS Error Code: null, AWS Error Message: Bad Request
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:616)
This config worked:
./spark-shell \
--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 \
--conf spark.hadoop.fs.s3a.endpoint=s3.us-east-2.amazonaws.com \
--conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
--conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
--conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
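Roughly the same thing can be done from inside an already-running spark-shell; the following is an untested sketch that assumes the same bucket and region as above, and note that a system property set this way only affects the driver JVM, so the extraJavaOptions flags are still needed for the executors.
// Same settings as the --conf flags above, applied programmatically
// (assumes a fresh session, so the S3A FileSystem for this bucket is not cached yet).
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")
sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")
// V4 signing on Hadoop 2.7.x is only switchable via the JVM system property.
System.setProperty("com.amazonaws.services.s3.enableV4", "true")
val df = spark.read.json("s3a://testmys3/*")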
"400 Bad Request" is fairly unhelpful, and not only does S3 not provide much, the S3A connector doesn't date print much related to auth either. There's a big section on troubleshooting the error
The fact it got as far as making a request means that it has some credentials, only the far end doesn't like them
Possibilities
your IAM role doesn't have the permission for s3:ListBucket. See "IAM role permissions for working with s3a" (a quick SDK listing check is sketched after this list)
your bucket name is wrong
There are some settings in fs.s3a.* or the AWS_* env vars which take priority over the IAM role, and they are wrong.
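To separate role/bucket problems from S3A configuration problems, here is a rough sketch that lists the bucket directly with the AWS SDK classes already pulled in by the --packages line above (SDK 1.7.4, instance profile only, no keys); if this fails too, the problem is the role permissions or the bucket name rather than the Spark wiring. Bucket and endpoint are the ones from the question.
import com.amazonaws.auth.InstanceProfileCredentialsProvider
import com.amazonaws.services.s3.AmazonS3Client
import scala.collection.JavaConverters._

// Uses only the EC2 instance profile credentials.
val s3 = new AmazonS3Client(new InstanceProfileCredentialsProvider())
s3.setEndpoint("s3.us-east-2.amazonaws.com")
s3.listObjects("testmys3").getObjectSummaries.asScala.foreach(o => println(o.getKey))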
You should automatically have IAM auth as an authentication mechanism with the S3A connector; it's the one checked last, after config and env vars.
Have a look at what is set in fs.s3a.aws.credentials.provider - it must be unset or contain the option com.amazonaws.auth.InstanceProfileCredentialsProvider.
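A one-line check from the spark-shell (a sketch; an empty or null result means the default provider chain is in use, which ends with the EC2 instance profile):
println(spark.sparkContext.hadoopConfiguration.get("fs.s3a.aws.credentials.provider"))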
Assuming you also have hadoop on the command line, grab storediag:
hadoop jar cloudstore-0.1-SNAPSHOT.jar storediag s3a://testmys3/
it should dump what it is up to regarding authentication.
Update
As the original poster commented, it was due to v4 authentication being required for the specific S3 endpoint. This can be enabled on the 2.7.x version of the S3A client, but only via Java system properties. For 2.8+ there are fs.s3a. options you can set instead.
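For Hadoop 2.8+, a hedged sketch of those fs.s3a. options; the property names are as documented for hadoop-aws 2.8+, so verify them against the exact version you run.
// Regional endpoint plus explicit V4 signer; on 2.8+ these replace the enableV4 system property.
spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.signing-algorithm", "AWSS3V4SignerType")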
Step 1: configure the properties below in your cluster framework's core-site.xml (e.g. for YARN), then restart YARN.
fs.s3a.aws.credentials.provider = com.cloudera.com.amazonaws.auth.InstanceProfileCredentialsProvider
fs.s3a.endpoint = s3-ap-northeast-2.amazonaws.com
fs.s3.impl = org.apache.hadoop.fs.s3a.S3AFileSystem
Step 2: test from the spark shell as follows.
val rdd=sc.textFile("s3a://path/file")
rdd.count()
rdd.take(10).foreach(println)
It works for me.
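If you cannot (or prefer not to) edit core-site.xml, here is a hedged sketch of passing the same three properties as spark.hadoop.* keys when building the session; the Cloudera-shaded provider class and the ap-northeast-2 endpoint are copied from the answer above.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("s3a-iam-test")
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
    "com.cloudera.com.amazonaws.auth.InstanceProfileCredentialsProvider")
  .config("spark.hadoop.fs.s3a.endpoint", "s3-ap-northeast-2.amazonaws.com")
  .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .getOrCreate()

// Same smoke test as step 2.
spark.sparkContext.textFile("s3a://path/file").count()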
Related
Flink job cannot fetch EC2 IAM Role
I am trying to use an AWS S3 bucket as a file source for my Flink streaming job, so I need to set an IAM role or AWS credentials (flink 1.13 docs). Unfortunately I always get an error saying that it cannot fetch the security details at http://169.254.169.254/latest/meta-data/iam/security-credentials/. If I curl that URL on the Flink worker I get the role name as a response, and when I append the role name (curl http://169.254.169.254/latest/meta-data/iam/security-credentials/{role_name}) I can get the temporary credentials for the role. So here is my question: how can I tell Flink which role it should use? I don't see any properties where I can tell Flink the name of the IAM role. Or am I doing something wrong? Locally it works fine with setting the AWS credentials, but I want to solve this with IAM roles for the EC2 instances because it is much cleaner. I cannot find any description of this process in either the flink 1.13 docs or the presto docs. I use Flink 1.13 and the s3-presto library.
2021-05-06 10:17:22,910 WARN org.apache.flink.runtime.taskmanager.Task [] - Source: Custom File Source (1/1)#1 (9bb80a7b4f4aafd734c926e90b02d318) switched from RUNNING to FAILED with failure cause:
com.amazonaws.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
    at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:89)
    at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:70)
    at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:75)
    at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
    at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
    at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
    at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
    at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
    at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:166)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5062)
    at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5850)
    at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5823)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5046)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5008)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1338)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1312)
    at com.facebook.presto.hive.s3.PrestoS3FileSystem.lambda$getS3ObjectMetadata$2(PrestoS3FileSystem.java:563)
    at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138)
    at com.facebook.presto.hive.s3.PrestoS3FileSystem.getS3ObjectMetadata(PrestoS3FileSystem.java:560)
    at com.facebook.presto.hive.s3.PrestoS3FileSystem.getFileStatus(PrestoS3FileSystem.java:311)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734)
    at org.apache.flink.fs.s3presto.common.HadoopFileSystem.exists(HadoopFileSystem.java:165)
    at org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.exists(PluginFileSystemFactory.java:148)
    at org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction.run(ContinuousFileMonitoringFunction.java:215)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269)
Okay, I was just being silly. The request for the IAM role was taking place on the Flink manager and not the Flink worker. I simply added the IAM role to the EC2 instance of the Flink manager and it worked!
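For anyone debugging the same split, a small probe sketch (assumes the AWS SDK v1 on the classpath, e.g. in a Scala REPL) that shows whether the JVM it runs in can actually reach the instance-profile credentials; running it on both the manager and the worker hosts would have exposed the mismatch.
import com.amazonaws.auth.InstanceProfileCredentialsProvider

// Throws if this instance has no role attached or cannot reach the metadata service.
val creds = InstanceProfileCredentialsProvider.getInstance().getCredentials
println("got instance-profile credentials, access key starts with: " + creds.getAWSAccessKeyId.take(4))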
Spark credential chain ordering - S3 Exception Forbidden
I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key / secret key / token in sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider as "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider". When I try to read a dataset from S3 (using s3a, which is also set in the hadoop config), I get an error that says:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
Read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that Spark has some sort of hidden credential chain ordering and, even though I have set the credentials in the hadoop config, it is still grabbing credentials from somewhere else (my instance profile?). But I have no way to diagnose that. Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your env vars and set them as the fs.s3a access + secret + session key, overwriting any you've already set. If you only want to use the IAM credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it'll be the only one used. Further reading: Troubleshooting S3A
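A minimal sketch of that suggestion, assuming the session object from the question; it pins the provider and clears any per-key fs.s3a settings so that only the instance profile is used.
val hc = sparkSession.sparkContext.hadoopConfiguration
hc.set("fs.s3a.aws.credentials.provider",
  "com.amazonaws.auth.InstanceProfileCredentialsProvider")
// Drop any keys that spark-submit may have copied in from the environment.
hc.unset("fs.s3a.access.key")
hc.unset("fs.s3a.secret.key")
hc.unset("fs.s3a.session.token")

val myData = sparkSession.read.parquet("s3a://myBucket/myKey")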
Serverless Error: The security token included in the request is invalid
When I type serverless deploy, this error appears: ServerlessError: The security token included in the request is invalid.
I had to specify sls deploy --aws-profile in my serverless deploy commands like this: sls deploy --aws-profile common
Can you provide more information? Make sure that you've got the correct credentials in ~/.aws/config and ~/.aws/credentials. You can set these up by running aws configure. More info here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration Also make sure that the IAM user in question has an attached security policy that allows access to everything you need, such as CloudFormation.
Create a new user in AWS (don't use the root key). In the SSH keys for AWS CodeCommit, generate a new Access Key. Copy the values and run this:
serverless config credentials --overwrite --provider aws --key bar --secret foo
sls deploy
In my case it was missing the localstack entry in the serverless file. I had everything that should be inside it, but it was all inside custom (instead of custom.localstack).
In my case, I added region to the provider. I suppose it's not read from the credentials file.
provider:
  name: aws
  runtime: nodejs12.x
  region: cn-northwest-1
In my case, multiple credentials are stored in the ~/.aws/credentials file. And serverless is picking the default credentials. So, I kept the new credentials under [default] and removed the previous credentials. And that worked for me.
To run the function on AWS you need to configure AWS with access_key_id and secret_access_key, but you might get this error if you want to run the function locally. For that, use this command: sls invoke local -f functionName. It will run the function locally, not on AWS.
If none of these answers work, it's maybe because you need to add a provider in your serverless account and add your AWS keys.
How to configure Spark running in local-mode on Amazon EC2 to use the IAM role for S3
I'm running Spark 2 in local mode on an Amazon EC2 instance. When I try to read data from S3 I get the following exception:
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively)
I can, but I would rather not, manually set the AccessKey and the SecretKey from the code because of security issues. The EC2 instance is set up with an IAM role that allows it full access to the relevant S3 bucket. For every other Amazon API call this is sufficient, but it seems that Spark is ignoring it. Can I make Spark use this IAM role instead of the AccessKey and the SecretKey?
Switch to using the s3a:// scheme (with the Hadoop 2.7.x JARs on your classpath) and this happens automatically. The "s3://" scheme with non-EMR versions of spark/hadoop is not the connector you want (it's old, non-interoperable and has been removed from recent versions)
I am using hadoop-2.8.0 and spark-2.2.0-bin-hadoop2.7. Spark-S3-IAM integration is working well with the following AWS packages on the driver:
spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 ...
Scala code snippet:
sc.textFile("s3a://.../file.gz").count()
The AWS Access Key Id does not exist in our records
I created a new Access Key and configured that in the AWS CLI with aws configure. It created the .ini file in ~/.aws/config. When I run aws s3 ls it gives: A client error (InvalidAccessKeyId) occurred when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records. AmazonS3FullAccess policy is also attached to the user. How to fix this?
It might be that you have the old keys exported via environment variables (bash_profile), and since env variables have higher precedence than credential files it gives the error "the access key id does not exist". Remove the old keys from the bash_profile and you should be good to go. It happened to me once when I forgot I had credentials in bash_profile, and it gave me a headache for quite some time :)
It looks like some values have already been set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. If that is the case, you will see values when executing the commands below:
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_ACCESS_KEY_ID
You need to reset these variables if you are using aws configure. To reset them, execute the commands below:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
You need to add aws_session_token to the credentials file, along with aws_access_key_id and aws_secret_access_key.
None of the up-voted answers worked for me. Finally I passed the credentials inside the Python script, using the client API:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN)
Please notice that the aws_session_token argument is optional. Not recommended for public work, but it makes life easier for a simple trial.
For me, I was relying on IAM EC2 roles to give access to our machines to specific resources. I didn't even know there was a credentials file at ~/.aws/credentials, until I rotated/removed some of our accessKeys at the IAM console to tighten our security, and that suddenly made one of the scripts stop working on a single machine. Deleting that credentials file fixed it for me.
I made the mistake of setting my variables with quotation marks like this: AWS_ACCESS_KEY_ID="..."
You may have configured AWS credentials correctly, but using these credentials you may be connecting to some specific S3 endpoint (as was the case with me). Instead of using:
aws s3 ls
try using:
aws --endpoint-url=https://<your_s3_endpoint_url> s3 ls
Hope this helps those facing a similar problem.
If you are using multiple profiles, you can configure them in the ~/.aws/credentials file like this:
[<profile_name>]
aws_access_key_id = <access_key>
aws_secret_access_key = <access_key_secret>
Then use:
aws s3 ls --profile <profile_name>
You may need to set the AWS_DEFAULT_REGION environment variable.
In my case, I was trying to provision a new bucket in the Hong Kong region, which is not enabled by default, according to this: https://docs.aws.amazon.com/general/latest/gr/s3.html It's not totally related to the OP's question, but to the topic per se, so if anyone else like myself finds themselves trapped in this edge case: I had to enable that region manually before operating on that AWS S3 region, following this guide: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html
I have been looking for information about this problem and I found this post. I know it is old, but I would like to leave this here in case anyone else has problems. I installed the AWS CLI and opened it; it seems that you need to run aws configure to add the current credentials. Once that was changed, I could access S3.
Looks like ~/.aws/credentials was not created. Try creating it manually with this content:
[default]
aws_access_key_id = sdfesdwedwedwrdf
aws_secret_access_key = wedfwedwerf3erfweaefdaefafefqaewfqewfqw
(On my test box, if I run an aws command without having a credentials file, the error is: Unable to locate credentials. You can configure credentials by running "aws configure".)
Can you try running these two commands from the same shell you are trying to run aws in:
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
and then try the aws command.
Another thing that can cause this, even if everything is set up correctly, is running the command from a Makefile. For example, I had a rule:
awssetup:
        aws configure
        aws s3 sync s3://mybucket.whatever .
When I ran make awssetup I got the error: fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records. But running it from the command line worked.
Adding one more answer since all the above cases didn't work for me. In AWS console, check your credentials(My Security Credentials) and see if you have entered the right credentials. Thanks to this discussion: https://forums.aws.amazon.com/message.jspa?messageID=771815
This could happen because there's an issue with your AWS Secret Access Key. After messing around with AWS Amplify, I ran into this issue. The quickest way is to create a new pair of AWS Access Key ID and AWS Secret Access Key and run aws configure again. It worked for me. I hope this helps.
To those of you who run aws s3 ls and get this exception: make sure you have permissions to all regions under the provided AWS account. When running aws s3 ls you try to pull all the S3 buckets under the AWS account; therefore, if you don't have permissions to all regions, you'll get this exception: An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records. See Describing your Regions using the AWS CLI for more info.
I had the same problem on Windows, using the aws-sdk module for JavaScript. I had changed my IAM credentials and the problem persisted even if I gave the new credentials through the update method, like this:
s3.config.update({
    accessKeyId: 'ACCESS_KEY_ID',
    secretAccessKey: 'SECRET_ACCESS_KEY',
    region: 'REGION',
});
After a while I found that the aws-sdk module had created a file inside the User folder on Windows, at C:\Users\User\.aws\credentials. The credentials inside this file take precedence over the other data passed through the update method. The solution for me was to write the new credentials into C:\Users\User\.aws\credentials, not via s3.config.update.
Kindly export the below variables from the credentials file in the directory below.
path = .aws/
filename = credentials
export aws_access_key_id = AK###########GW
export aws_secret_access_key = g#############################J
Hopefully this saves others from hours of frustration: call AWS.config.update({ ... }) before initializing s3.
const AWS = require('aws-sdk');
AWS.config.update({
    accessKeyId: 'AKIAW...',
    secretAccessKey: 'ptUGSHS....'
});
const s3 = new AWS.S3();
Credits to this answer: https://stackoverflow.com/a/61914974/11110509
I tried the below steps and it worked:
1. cd ~
2. cd .aws
3. vi credentials
4. Delete the aws_access_key_id = and aws_secret_access_key = lines by placing the cursor on each line and pressing dd (the vi command to delete a line). Delete both lines and check again.
If you have an AWS Educate account and you get this problem: "An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records." The solution is here: go to your C:/ drive and search for the .aws folder inside your main user folder in Windows. Inside that folder you will find the "credentials" file; open it with Notepad. Paste the whole key credential from your AWS account into that file and save it. Now you are ready to use your AWS Educate account.
Assuming you already checked the Access Key ID and Secret... you might want to check the file team-provider-info.json, which can be found under the amplify/ folder:
"awscloudformation": {
    "AuthRoleName": "<role identifier>",
    "UnauthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "AuthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "Region": "us-east-1",
    "DeploymentBucketName": "<role identifier>",
    "UnauthRoleName": "<role identifier>",
    "StackName": "amplify-test-dev",
    "StackId": "arn:aws:cloudformation:<stack identifier>",
    "AmplifyAppId": "<id>"
}
The IAM role being referred to here should be active in the IAM console.
If you get this error in an Amplify project, check that "awsConfigFilePath" is not configured in amplify/.config/local-aws-info.json. In my case I had to remove it, so my environment looked like the following:
{
  // **INCORRECT**
  // This will not use your profile in ~/.aws/credentials, but instead the
  // specified config file path
  // "dev": {
  //   "configLevel": "project",
  //   "useProfile": false,
  //   "awsConfigFilePath": "/Users/dev1/.amplify/awscloudformation/cEclTB7ddy"
  // },

  // **CORRECT**
  "dev": {
    "configLevel": "project",
    "useProfile": true,
    "profileName": "default"
  }
}
Maybe you need to activate your API keys in the web console; I just saw that mine were inactive for some reason...
Thanks, everyone. This helped to solve it. Somehow the keys had changed and I didn't realize, since everything was working fine until I connected to S3 from Spark... then the error started coming from the command line as well, even for aws s3 ls.
Steps to solve:
1. Run aws configure to check if keys are set up (verify from the last 4 characters and just keep pressing enter).
2. In the AWS console go to Users --> click on the user --> go to Security credentials --> check if the key is the same one showing up in aws configure.
3. If they are not the same, generate a new key and download the csv.
4. Run aws configure and set up the new keys.
5. Try aws s3 ls now.
6. Change the keys in all places; in my case it was the configs in Cloudera.
I couldn't figure out how to get the system to accept my Vocareum credentials so I took advantage of the fact that if you configure your instance to use IAM roles, the SDK automatically selects the IAM credentials for your application, eliminating the need to manually provide credentials. Once a role with appropriate permissions was applied to the EC2 instance, I didn't need to provide any credentials.
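A hedged sketch of what "the SDK automatically selects the IAM credentials" looks like in code with AWS SDK v1; the bucket name is a placeholder.
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import scala.collection.JavaConverters._

// No keys anywhere: the default credentials chain falls through to the EC2 instance profile.
val s3 = AmazonS3ClientBuilder.defaultClient()
s3.listObjects("my-example-bucket").getObjectSummaries.asScala.foreach(o => println(o.getKey))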
Open the ~/.bash_profile file and edit the info with the new values that you received at the time of creating the new user:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=us-east-1
Afterward, run the command:
source ~/.bash_profile
This will enable the new keys for the local machine. Now we will need to configure the info in the terminal as well. Run the command:
aws configure
Provide the new values as requested and you are good to go.
In my case, I was using aws configure. However, I had hand-edited the .aws/config file to export the KeyID and key environment variables. This apparently caused a silent error, and I saw the error listed above. I solved this by deleting the .aws directory and running aws configure again.
I encountered this issue when trying to export RDS Postgres data to S3, following this official guide.
TL;DR troubleshooting tips:
Reset the RDS credentials using:
DROP EXTENSION aws_s3 CASCADE;
DROP EXTENSION aws_commons CASCADE;
CREATE EXTENSION aws_s3 CASCADE;
Delete and re-add the DB instance role used for the s3Export feature. Optionally reset the RDS credentials (previous point) once again after that.
Below you will find more details on my case. In particular, I encountered:
[XX000] ERROR: could not upload to Amazon S3 Details: Amazon S3 client returned 'The AWS Access Key Id you provided does not exist in our records.'.
To be able to perform an export to S3, the RDS DB instance should be configured to assume a role with permission to write to the S3 bucket; the guide describes these steps. The reason for the error was the aws_s3.query_export_to_s3 Postgres procedure using some (cached?) invalid assumed credentials. I am still not aware which credentials it had been using, but I managed to reproduce the same behaviour using the AWS CLI: I assumed a role (aws sts assume-role), and then tried to perform another action (aws s3 cp in particular) with these credentials but without the session token (only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, without AWS_SESSION_TOKEN). This resulted in the same error from the AWS CLI: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
In short: hard resetting the RDS credentials helped.
I just found another cause/remedy for this error/situation. I was getting the error running a PowerShell script. The error was happening on an execution of Write-S3Object. I have been working with AWS for a while now and have been running this script with success, but had not run it in a while. My usual method of setting AWS credentials is: Set-AWSCredential -ProfileName <THE_PROFILE_NAME> I tried the "aws configure" command and every other recommendation in this forum post. No luck. Well, I am aware of the .aws\credentials file and took a look in there. I have only three profiles, with one being [default]. Everything was looking good, but then I noticed a new element in there, present in all 3 profiles, that I had not seen before: toolkit_artifact_guid=64GUID3-GUID-GUID-GUID-004GUID236 (GUID redacting added by me) Then I noticed this element differed between the profile I was running with and the [default] profile, which was the same profile, except for that. On a hunch I changed the toolkit_artifact_guid in the [default] to match it to my target profile, and no more error. I have no idea why.