I'm trying to test some Node.js code from my local machine for use in an AWS Lambda function. This involves signing a request with Signature Version 4.
I've signed in with my access key using the AWS CLI, but when I try to make a request using the following code, I get this error at signer.addAuthorization. What step am I missing? It works fine from a Lambda function.
Code:
const AWS = require('aws-sdk');
// Load credentials from the AWS_* environment variables
const creds = new AWS.EnvironmentCredentials('AWS');
...
// Sign the request for the 'es' (Elasticsearch Service) endpoint
const signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());
Error:
TypeError [ERR_INVALID_ARG_TYPE]: The "key" argument must be one of type string, TypedArray, or DataView. Received type undefined
at new Hmac
According to the documentation of EnvironmentCredentials,
By default, this class will look for the matching environment variables prefixed by a given envPrefix
Therefore you need to set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN environment variables before invoking your code.
In the AWS Lambda environment these variables are already set by the runtime, which is why the same code works there.
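For a quick local sanity check, here is a minimal sketch (assuming aws-sdk v2 and that the variables are exported in the shell that launches Node):
// Before starting Node, export the variables the 'AWS' prefix resolves to, e.g.
//   export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=...
const AWS = require('aws-sdk');
const creds = new AWS.EnvironmentCredentials('AWS');
creds.get((err) => {
  if (err) {
    // An empty environment ends up here, and later surfaces as the Hmac "key ... undefined" TypeError
    console.error('Could not load credentials from the environment:', err);
  } else {
    console.log('Loaded access key:', creds.accessKeyId);
  }
});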
Related
I am trying to use the Glue Schema Registry service in AWS with Scala (Java would also be useful) and I tested two ways to assume a role, but both result in an error:
"Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain(credentialsProviders=[SystemPropertyCredentialsProvider(), EnvironmentVariableCredentialsProvider(), WebIdentityTokenCredentialsProvider(), ProfileCredentialsProvider(), ContainerCredentialsProvider(), InstanceProfileCredentialsProvider()])"
I don't want to use environment variables, so I tried using STS to assume a role with the following code:
import java.util.UUID
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.glue.GlueClient
import software.amazon.awssdk.services.sts.StsClient
import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest

val assumeRoleRequest = AssumeRoleRequest.builder.roleSessionName(UUID.randomUUID.toString).roleArn("roleArn").build
val stsClient = StsClient.builder.region(Region.EU_CENTRAL_1).build
val stsAssumeRoleCredentialsProvider = StsAssumeRoleCredentialsProvider.builder.stsClient(stsClient).refreshRequest(assumeRoleRequest).build
val glueClient = GlueClient.builder
  .region(Region.EU_CENTRAL_1)
  .credentialsProvider(stsAssumeRoleCredentialsProvider)
  .build
Based on https://stackoverflow.com/a/62930761/17221117
The second approach I tried uses the code from the official AWS documentation, but it fails as well... I don't understand whether this generates a token that I should use, or whether just executing this code should work.
Can anyone help me with this?
I have the following TypeScript code that generates a key that I need to store in KMS:
import { LocalCryptUtils } from 'crypt-util';
import * as fs from 'fs';

const cryptUtils = new LocalCryptUtils();
cryptUtils.createMasterPrivateKey();

// Write the derived public key and address to a JSON file
const file = fs.createWriteStream('../../config/derived.txt');
file.write('{\n');
file.write(`  "public_key_derived" : "${cryptUtils.derivePublicKey(0, 0)}", \n`);
file.write(`  "public_address_derived" : "${cryptUtils.deriveAddress(0, 0)}" \n`);
file.end('}\n');

console.log(`{ "masterKey" : "${cryptUtils.exportMasterPrivateKey()}" }`);
All the resources I found on the internet explain how to read KMS or Parameter Store values from within a Lambda function. My requirement is the reverse: I need a Lambda function that generates a key and stores it in KMS, and another, smaller function that generates a key and stores it in Parameter Store, so that the values can then be used from ECS Fargate.
I would appreciate your guidance with this.
PS. I'm using Terraform as the IaC tool.
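Not an authoritative answer, but a minimal sketch of the Parameter Store half, assuming aws-sdk v2, a hypothetical parameter name, and a Lambda role allowed to call ssm:PutParameter:
import * as AWS from 'aws-sdk';
import { LocalCryptUtils } from 'crypt-util';

const ssm = new AWS.SSM();

// Hypothetical handler: generate the master key and store it as a SecureString
export const handler = async (): Promise<void> => {
  const cryptUtils = new LocalCryptUtils();
  cryptUtils.createMasterPrivateKey();
  await ssm.putParameter({
    Name: '/myapp/master-key', // hypothetical parameter name
    Value: cryptUtils.exportMasterPrivateKey(),
    Type: 'SecureString', // encrypted at rest with a KMS key
    Overwrite: true,
  }).promise();
};
By default a SecureString is encrypted with the account's aws/ssm KMS key, and a Fargate task can then read it with ssm:GetParameter.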
I am using CDK to set up code pipelines in AWS. The pipeline stage needs to download the source code from GitHub, so it uses an OAuth token to authenticate the request. I would like to be able to access the token from AWS Parameter Store and NOT from AWS Secrets Manager when setting the value in the stage of the pipeline.
There are plenty of examples using Secrets Manager to do this. However, there are no examples using the Parameter Store or hardcoding the token in plain text within the CDK project.
We are using TypeScript with CDK 1.3.0.
I have tried storing the token in the Parameter Store. When it is stored as a SecureString, you additionally need to specify the version when retrieving the value. However, I cannot then cast the result to the SecretValue that is required to set the oauthToken property in the pipeline stage.
Get the value from the Parameter Store:
// get the secureString
const secureString = ssm.StringParameter.fromSecureStringParameterAttributes(construct,'MySecretParameter', {
parameterName: 'my-secure-parameter-name',
version: 1,
});
I need to convert the secureString to a cdk.SecretValue to then use it to set the oauthToken, but I cannot see how to do this.
const sourceAction = new codepipelineactions.GitHubSourceAction({
actionName: 'Source',
owner: owner,
repo: repository,
oauthToken: githubOAuthAccessToken,
output: sourceOutput,
branch: branch,
trigger: codepipelineactions.GitHubTrigger.WEBHOOK,
});
The CDK documentation says that it is advisable to store tokens in Secrets Manager:
"It is recommended to use a Secret Manager SecretString to obtain the token"
It does not say that tokens cannot be retrieved from other sources and used. I would be grateful if the situation could be clarified, and if anyone stores tokens outside Secrets Manager and is still able to use them to set the token in the source stage of a pipeline.
You can use cdk.SecretValue.ssmSecure or cdk.SecretValue.plainText:
oauthToken: cdk.SecretValue.ssmSecure('param-name', 'version');
// OR
oauthToken: cdk.SecretValue.plainText('oauth-token-here');
From the doc for plainText:
Do not use this method for any secrets that you care about. The only reasonable use case for using this method is when you are testing.
The previous answer by @jogold does partially work. However, at the time of this writing SecretValue.ssmSecure is not supported by CloudFormation and you will get an error such as: FAILED, SSM Secure reference is not supported in: .
There is an open issue on the CDK roadmap: https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/227. The plaintext option is not truly viable, as the secret will be exposed in the CFN template.
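Until that issue is resolved, the option that keeps the token out of the template is the Secrets Manager route the CDK docs recommend. A minimal sketch, assuming CDK v1 and a hypothetical secret named github-token:
import * as cdk from '@aws-cdk/core';

// Resolved by CloudFormation at deploy time; the token never appears in the synthesized template
const githubOAuthAccessToken = cdk.SecretValue.secretsManager('github-token');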
[AWS s3 undefined 0.006s 0 retries] headObject({ Bucket: 'mypicturebank', Key: 'testing' })
There was an error creating your album: TypeError [ERR_INVALID_ARG_TYPE]: The "key" argument must be one of type string, TypedArray, or DataView. Received type undefined
@Ken G: I'm not sure what language or framework you are using, but I ran across a similar error message (The "key" argument must be one of type string, TypedArray, or DataView. Received type undefined) just now.
In my case, and for anyone else who stumbles across this, it was because I was providing the PascalCase credential keys returned from AWS STS.assumeRole, where camelCase keys are needed:
import AWS from 'aws-sdk'
// Bad
const s3 = new AWS.S3({
credentials: {
AccessKeyId: process.env.AWS_ACCESS_KEY,
SecretAccessKey: process.env.AWS_SECRET_KEY,
},
...
})
// Good
const s3 = new AWS.S3({
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY,
},
...
})
I am using the aws-sdk for JavaScript on macOS with Node v10.16.2, with the goal of using s3.createPresignedPost.
Possibly related, on an AWS Lambda instance running node 10.x I was seeing a different error message for the same code: Key must be a buffer.
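For reference, a minimal sketch of the mapping when the credentials come straight from STS (aws-sdk v2; note the case change between the assumeRole response and the S3 client options):
import AWS from 'aws-sdk';

const sts = new AWS.STS();

async function s3WithAssumedRole(roleArn: string): Promise<AWS.S3> {
  // assumeRole returns PascalCase fields: AccessKeyId, SecretAccessKey, SessionToken
  const { Credentials } = await sts
    .assumeRole({ RoleArn: roleArn, RoleSessionName: 'example-session' })
    .promise();

  // The S3 constructor expects camelCase keys
  return new AWS.S3({
    credentials: {
      accessKeyId: Credentials!.AccessKeyId,
      secretAccessKey: Credentials!.SecretAccessKey,
      sessionToken: Credentials!.SessionToken,
    },
  });
}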
This issue can also be caused by referencing an environment variable in your code that does not exist in your env file or is undefined there. In my case I was using the Filestack API, which requires a shared key that was missing from my env file.
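In other words, fail fast on missing variables before handing them to an SDK. A tiny sketch with a hypothetical variable name:
// FILESTACK_SHARED_KEY is a hypothetical name; substitute whatever your code reads
const sharedKey = process.env.FILESTACK_SHARED_KEY;
if (!sharedKey) {
  throw new Error('FILESTACK_SHARED_KEY is not set; check your .env file');
}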
I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried passing them directly to the method rather than via env variables, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It's a sign that your system time is off. Check your:
1. Time zone
2. Time settings
If your system has automatic time settings, enable them to fix the clock; otherwise, "sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.