I ran aws configure and entered my access key and secret key. I have checked that my account exists by running:
aws iam list-account-aliases
and my alias appeared.
However, when I try to upload a file to AWS, I am receiving this error:
/Users/kchen/campaiyn-web/node_modules/skipper-s3/node_modules/knox/lib/client.js:197
if (!options.key) throw new Error('aws "key" required');
^
Error: aws "key" required
at new Client (/Users/kchen/campaiyn-web/node_modules/skipper-s3/node_modules/knox/lib/client.js:197:27)
at Function.exports.createClient (/Users/kchen/campaiyn-web/node_modules/skipper-s3/node_modules/knox/lib/client.js:925:10)
at Writable.onFile (/Users/kchen/campaiyn-web/node_modules/skipper-s3/index.js:248:22)
at doWrite (_stream_writable.js:292:12)
at writeOrBuffer (_stream_writable.js:278:5)
at Writable.write (_stream_writable.js:207:11)
at Transform.ondata (_stream_readable.js:528:20)
at emitOne (events.js:77:13)
at Transform.emit (events.js:169:7)
at readableAddChunk (_stream_readable.js:146:16)
at Transform.Readable.push (_stream_readable.js:110:10)
at Transform.push (_stream_transform.js:128:32)
at /Users/kchen/campaiyn-web/node_modules/sails/node_modules/skipper/standalone/Upstream/build-renamer-stream.js:49:19
at Object.opts.saveAs (/Users/kchen/campaiyn-web/node_modules/sails/node_modules/skipper/standalone/Upstream/prototype.upload.js:71:7)
at determineBasename (/Users/kchen/campaiyn-web/node_modules/sails/node_modules/skipper/standalone/Upstream/build-renamer-stream.js:32:17)
at Transform.__renamer__._transform (/Users/kchen/campaiyn-web/node_modules/sails/node_modules/skipper/standalone/Upstream/build-renamer-stream.js:40:7)
I suspect my key was not configured correctly, or am I looking at this incorrectly?
The library you are using (skipper) does not know how to pick up the credentials that aws configure saves.
I would recommend either passing the credentials in explicitly or switching to the Node.js AWS SDK, which definitely supports this.
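A minimal sketch of the explicit approach, assuming a Sails controller action and the documented skipper-s3 options (the field name, bucket name, and env variable names are placeholders for wherever you keep your credentials):

// Pass the credentials to skipper-s3 directly instead of relying on `aws configure`
req.file('avatar').upload({
  adapter: require('skipper-s3'),
  key: process.env.AWS_ACCESS_KEY_ID,        // AWS access key id
  secret: process.env.AWS_SECRET_ACCESS_KEY, // AWS secret access key
  bucket: 'your-bucket-name'                 // placeholder bucket
}, function (err, uploadedFiles) {
  if (err) return res.serverError(err);
  return res.ok({ files: uploadedFiles });
});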
I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, etc. These are all standard, basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two in), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this ...
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created for the error show this:
error.json
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-20T01:20:01.876Z",
  "requestId": "DRFVQWYWJAHWZ8JR",
  "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 60.622127086356855
}
I have two profiles in my .aws/credentials file. One is the default (which is my work account). The other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same error.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct Arn. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
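For reference, the docs show exporting all three values together, something like this (placeholders, not my real values; the docs take them from the assume-role output):

export AWS_ACCESS_KEY_ID=<AccessKeyId from the assume-role output>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the assume-role output>
export AWS_SESSION_TOKEN=<SessionToken from the assume-role output>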
Running amplify push under this new profile still results in an error.
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this returns the correct settings, but amplify push still fails for the same reason.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually without dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to profile. I'm trying to change it back to Amplify Studio and have not been able to crack the code on updating it. Documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain state for your local configuration. Delete these and you can re-init the project and re-authenticate the Amplify CLI for the environment, as sketched below.
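A sketch of those steps, assuming you run them from the project root (back the two files up first):

# Remove local Amplify CLI state; project-config.json stays in place
rm amplify/.config/local-aws-info.json amplify/.config/local-env-info.json
# Re-initialize the project and choose the authentication method again
amplify init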
I am facing this weird scenario. I generate my AWS AccessKeyId, SecretAccessKey, and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I can run the command "aws s3 ls" and see all the S3 buckets. Similarly, I can run any AWS command to view objects, and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on my colleagues' computers. We all have access to AWS through the same role; there are no users in the IAM console.
Here is the sample code, though I doubt anything is wrong with the code itself, because it works fine on other users' computers.
// Uses the SDK's default credential resolution; no explicit keys are passed
var _ssmClient = new AmazonSimpleSystemsManagementClient();

// Synchronously fetch and decrypt a parameter from SSM Parameter Store
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why running commands through the CLI works but API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file?
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Take a backup of all files and folders, then delete everything from that location.
This solution applies only if you can run commands like "aws s3 ls" and get results successfully, but you get the error "The provided token has expired" when running the same calls from the .NET API libraries.
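For example, in PowerShell (renaming rather than deleting, so you keep the backup; this assumes the Toolkit cache at that path is what is feeding the SDK the stale session):

# Move the AWS Toolkit cache aside; the SDK should fall back to .aws\credentials
Rename-Item "$env:USERPROFILE\AppData\Local\AWSToolkit" "AWSToolkit.bak"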
I am working through the Build On Serverless | S2 E4 video, and I've gotten to the point of creating an authenticated HTTP data source using the AWS CLI. I'm getting this error:
Parameter validation failed:
Unknown parameter in httpConfig: "authorizationConfig", must be one of: endpoint
I think I'm using the same information provided in the video, repository, and gist, updated for my own AWS account. It seems like some kind of formatting or missing-information error, but I'm just not seeing the problem.
When I remove the "authorizationConfig" property from state-machine-datasource.json, the command works.
I've reviewed the code against the information in the video as well as the documentation and examples here and here provided by AWS.
This is the command I'm running:
aws appsync create-data-source --api-id {my app sync app id} --name ProcessBookingStateMachine \
    --type HTTP --http-config file://src/backend/booking/state-machine-datasource.json \
    --service-role-arn arn:aws:iam::{my account}:role/AppSyncProcessBookingState --profile default
This is my state-machine-datasource.json:
{
  "endpoint": "https://states.us-east-2.amazonaws.com",
  "authorizationConfig": {
    "authorizationType": "AWS_IAM",
    "awsIamConfig": {
      "signingRegion": "us-east-2",
      "signingServiceName": "states"
    }
  }
}
Thanks,
I needed to update my AWS CLI to the latest version. The authenticated HTTP data source is something fairly new, I guess.
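A quick way to confirm before retrying (the upgrade path itself depends on how the CLI was installed; see the AWS install docs):

# Check the installed CLI version
aws --version
# After upgrading, verify the parameter is now recognized
aws appsync create-data-source help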
I am having a hard time setting up the credentials for AWS S3 usage via aws-php-sdk within Media Temple.
I continue to receive the error: Cannot read credentials from /.aws/credentials
I tried to follow the guide to install the AWS CLI via https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html. I then used the following to set the credentials via https://docs.aws.amazon.com/cli/latest/userguide/tutorial-ec2-ubuntu.html#configure-cli
... But I get that error still.
I then had a chat with Media Temple support, who created the .aws/credentials file in root, but then the error message changed to:
Warning: is_readable(): open_basedir restriction in effect. File(/.aws/credentials) is not within the allowed path(s)
MT advised me not to change the basedir settings. They also advised me to simply change where the credentials are read from, if possible.
Anyone successfully use AWS credentials on MT?
Trying to do this with the AWS CLI via SSH was like beating my head against a brick wall on Media Temple.
I then tried to set the credentials via environment variables, but that was a no-go.
I then got the idea to put the credentials file within a directory that PHP could access. However, I had to tell aws-php-sdk where to look for it. I found an environment variable in some documentation and tried to set it via PHP's putenv() function. No dice.
I then searched the aws-php-sdk source for the initial error I was seeing and backtracked until I could find where the credentials file location was being set. It turns out the documentation was wrong, and the correct environment variable name was HOME.
In the end, all that was needed was to set HOME prior to using AWS. Easy enough, but should have been 100x easier to figure out. Something along these lines:
// Set environment variable for credentials location
putenv('HOME=../');

// Set bucket name
$this->bucket = $bucket;

// Create an S3Client
$this->s3Client = new Aws\S3\S3Client([
    'profile' => $this->profile,
    'version' => $this->version,
    'region'  => $this->region
]);
I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine, and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but rather passing the credentials directly in the method call, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even hard-coded the credentials. I'm using Python v3.4.4.
Is there maybe a header that also should be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
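A slightly fuller sketch, assuming credentials come from the usual env variables; the table name and key attribute are placeholders:

import boto3
from boto3.dynamodb.conditions import Key

# No endpoint_url for the live service: boto3 builds the regional endpoint
# itself and signs requests with credentials from env vars / ~/.aws/credentials.
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')

table = dynamoConnection.Table('MyTable')  # placeholder table name
response = table.query(KeyConditionExpression=Key('pk').eq('example-id'))  # placeholder key
print(response['Items'])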
It's a sign that your system time is off. Maybe you can check your:
1. Time zone
2. Time settings
If automatic time synchronization is available, you should enable it to fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.
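If it helps, the rotation can also be done from the CLI (the user name and old key id are placeholders; deactivate the old key once the new one is in place):

# Create a replacement access key pair for the user
aws iam create-access-key --user-name my-user
# Deactivate the old key after switching the app over
aws iam update-access-key --user-name my-user --access-key-id AKIAOLDKEYID --status Inactive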