I'm getting
AccessDeniedException: User: arn:aws:iam::[ACCOUNTID]:user/esc-user-name is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:eu-central-1:[ACCOUNTID]:* because no identity-based policy allows the ssm:GetParametersByPath action
exception while reading the second page, while the first page of variables is fetched successfully.
While deploying a Next.js application via ECS, I need to fetch the relevant env variables during instance startup. The ECS-related role is permitted to read params on a particular path, like
/my-nextjs-app/stg/
I have no problems getting the expected result for the first page of params with
(code simplified for clarity)
const initialCommand = new GetParametersByPathCommand({
  Path: "/my-nextjs-app/stg/",
  WithDecryption: true,
  Recursive: true,
});
const response = await ssmClient.send(initialCommand);
As soon as I receive a NextToken in the response, I try to use it to fetch the next page, roughly like this:
const nextCommand = new GetParametersByPathCommand({
  Path: "/my-nextjs-app/stg/",
  WithDecryption: true,
  Recursive: true,
  NextToken: response.NextToken,
});
await ssmClient.send(nextCommand);
And I get the permission-denied error mentioned above.
It feels like when NextToken is set on the command, SSMClient simply ignores the Path param and uses the token as the source of all required data (I guess the path is somehow encoded into it, along with the pagination state).
Granting permission to the whole arn:aws:ssm:eu-central-1:[ACCOUNTID]:* is not an option for security reasons, and feels "dirty" anyway. My assumption is that if SSMClient was able to fetch the first page successfully, it should be able to fetch the following pages with no additional permissions.
Meanwhile, using boto3 with the same user/role, everything works.
Is this worth a bug report to @aws-sdk/client-ssm, or is there anything I've missed?
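For completeness, the full pagination loop is essentially the following. It is sketched against a plain `send` function so the control flow is visible without AWS credentials; in real use, `send` would be `(input) => ssmClient.send(new GetParametersByPathCommand(input))` from @aws-sdk/client-ssm (my wrapping, not part of the original snippet):

```javascript
// Fetch every page of parameters under a path, passing NextToken
// back in until the service stops returning one.
async function fetchAllParameters(send, path) {
  const parameters = [];
  let nextToken;
  do {
    const response = await send({
      Path: path,
      WithDecryption: true,
      Recursive: true,
      ...(nextToken ? { NextToken: nextToken } : {}),
    });
    parameters.push(...(response.Parameters || []));
    nextToken = response.NextToken;
  } while (nextToken);
  return parameters;
}
```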
Related
In a Docker container, the scality/s3server image is running. I am connecting to it from Node.js using the @aws-sdk/client-s3 API.
The S3Client setup looks like this:
const s3Client = new S3Client({
  region: undefined, // See comment below
  endpoint: 'http://127.0.0.1:8000',
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
})
Region undefined: this answer to a similar question suggests leaving the region out, but accessing the region with await s3Client.config.region() still returns eu-central-1, the value I passed to the constructor in a previous version. Although I changed it to undefined, the old configuration still takes effect. Could that be connected to the issue?
It was possible to successfully create a bucket (test), and it could be listed by running a ListBucketsCommand (await s3Client.send(new ListBucketsCommand({}))).
However, as mentioned in the title, uploading content or streams to the bucket with
const bucketParams = {
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}
await s3Client.send(new PutObjectCommand(bucketParams))
does not work; instead I get a DNS resolution error (which seems odd, since I manually typed the IP address, not localhost).
Anyway, here is the error message:
Error: getaddrinfo EAI_AGAIN test.127.0.0.1
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26) {
errno: -3001,
code: 'EAI_AGAIN',
syscall: 'getaddrinfo',
hostname: 'test.127.0.0.1',
'$metadata': { attempts: 1, totalRetryDelay: 0 }
}
Do you have any idea
why the region is still configured, and/or
why the DNS lookup happens and then fails, but only when uploading, not when creating buckets or retrieving bucket metadata?
For the second question, I found a workaround:
Instead of specifying the IP address directly, using endpoint: http://localhost:8000 (the hostname instead of the IP address) fixes the DNS lookup exception. However, there is no obvious reason why this should happen.
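A likely explanation worth noting: the SDK defaults to virtual-hosted-style addressing, which prepends the bucket name to the endpoint hostname (hence test.127.0.0.1). The documented forcePathStyle option on S3Client keeps the bucket in the URL path instead. A sketch of the client config, reusing the values from the setup above (the commented-out construction assumes @aws-sdk/client-s3 is installed):

```javascript
// With forcePathStyle, PutObject goes to
// http://127.0.0.1:8000/test/test.txt rather than http://test.127.0.0.1:8000/,
// so no bucket-prefixed hostname ever has to resolve.
const clientConfig = {
  endpoint: 'http://127.0.0.1:8000',
  forcePathStyle: true,
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
};
// const s3Client = new S3Client(clientConfig);
```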
So I am attempting to create a Lambda function inside of my VPC that requires S3 access. Most of the time it goes off without a hitch; however, sometimes it just hangs on s3.getObject until the function times out, with no error when this happens. I have set up a VPC endpoint, the endpoint is in the route table for the (private) subnet, access to the endpoint is not blocked by either the security group or the NACL, and IAM permissions all seem to be in order (though if that were the issue, one would expect an error message).
I've boiled my code down to a simple get/put for the purposes of debugging this issue, but here it is in case I am missing the incredibly obvious. I've spent hours googling this and tried everything suggested or that I could think of, and am basically out of ideas at this point, so I cannot emphasize enough how much I appreciate any help.
Update: I have run my code from an EC2 instance inside the same VPC/subnet/security group as the Lambda, and it does not have the same problem, so the issue seems to be with the Lambda configuration rather than the network configuration.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  try {
    const getParams = {
      Bucket: 'MY_BUCKET',
      Key: '/path/to/file'
    };
    console.log('************** about to get', getParams);
    const getObject = await s3.getObject(getParams).promise();
    console.log('************** gotObject', getObject);
    const uploadParams = {
      Bucket: 'MY_BUCKET',
      Key: '/new/path/to/file',
      Body: getObject.Body
    };
    console.log('************** about to put', uploadParams);
    const putObject = await s3.putObject(uploadParams).promise();
    console.log('*************** object was put', putObject);
  } catch (err) {
    console.log('***************** error', err);
  }
};
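One way to make the hang diagnosable (a debugging suggestion, not a guaranteed fix) is to give the SDK client tight timeouts so the call fails fast with a visible error instead of silently running into the Lambda timeout. In aws-sdk v2 this is done via the client's httpOptions; the values below are illustrative:

```javascript
// aws-sdk v2 client options: bound both connection setup and socket
// inactivity so a stalled request surfaces as an error quickly.
const s3Options = {
  httpOptions: {
    connectTimeout: 2000, // ms allowed to establish the TCP connection
    timeout: 5000,        // ms of socket inactivity before aborting
  },
  maxRetries: 2,
};
// const s3 = new AWS.S3(s3Options); // assuming aws-sdk v2, as in the snippet
```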
I am having a very peculiar issue regarding User creation in AWS Cognito.
When I am in the same region (e.g. US East AWS and creating the user from NA), it works like a charm. But when someone is in another region (e.g. US East AWS and creating the user from the EU with a slow internet connection), I am getting UserNotFoundException.
Below is how I am handling user creation in my Lambda function (JS):
await cognitoidentityserviceprovider.adminCreateUser(params).promise()
if (customer.MFA === true) {
  await toggleCognitoUsersMFA(jsonBody.email, true)
}
Check your MessageAction value in your params.
If the value is set to RESEND, and you're creating a new user, you'll get this User Not Found message.
The current AWS docs indicate that either a RESEND or SUPPRESS value is required; this is not true, as you can actually omit the value.
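In other words, for a brand-new user the params can simply leave MessageAction out. A hypothetical params shape (pool id and username are placeholders, not from the original post):

```javascript
// adminCreateUser params for a first-time invitation: omitting
// MessageAction lets Cognito send the initial invite, whereas
// MessageAction: 'RESEND' is only valid for an already-existing user.
const params = {
  UserPoolId: 'us-east-1_EXAMPLE',   // placeholder
  Username: 'user@example.com',      // placeholder
  DesiredDeliveryMediums: ['EMAIL'],
};
// await cognitoidentityserviceprovider.adminCreateUser(params).promise();
```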
When updating a CloudFront distribution using the AWS Node SDK, I get the following exception:
message:
'The If-Match version is missing or not valid for the resource.',
code: 'InvalidIfMatchVersion',
time: 2020-08-23T19:37:22.291Z,
requestId: '43a7707f-7177-4396-8013-4a704b7b11ea',
statusCode: 400,
retryable: false,
retryDelay: 57.05741843025996 }
What am I doing wrong here?
This error occurs because you need to first get the current distribution config with cloudfront.getDistributionConfig and then copy the ETag and DistributionConfig fields into your cloudfront.updateDistribution call. You modify the DistributionConfig as needed to achieve the actual config update you're trying to do.
The docs do a poor job of explaining this (the instructions are for the REST API instead of the actual SDK you're using).
The following is an example of disabling a CloudFront distribution by pulling the latest distribution config, modifying it, and then performing an update with it:
async function disable_cloudfront_distribution(dist_id) {
  // We need to pull the previous distribution config to update it.
  const previous_distribution_config = await cloudfront.getDistributionConfig({
    Id: dist_id
  }).promise();
  const e_tag = previous_distribution_config.ETag;
  // Update config to be disabled
  previous_distribution_config.DistributionConfig.Enabled = false;
  // Create update distribution request with distribution ID
  // and the copied config along with the returned ETag for IfMatch.
  const params = {
    Id: dist_id,
    DistributionConfig: previous_distribution_config.DistributionConfig,
    IfMatch: e_tag
  };
  return cloudfront.updateDistribution(params).promise();
}
The following function (in Javascript) is supposed to accept accessKey and secretKey and check whether they are correct:
function checkKeys(accessKey, secretKey) {
  var cred = new AWS.Credentials(accessKey, secretKey, null);
  cred.get(function(err) {
    if (err) {
      console.log("ERROR!")
    } else {
      console.log("Keys are OK")
    }
  })
}
I'd expect the get() method to return an error for incorrect credentials. I don't know why, but no matter what credentials I give, I never get an error, and the console always prints "Keys are OK".
You cannot really check that credentials are "correct". Think about user roles: a user may be authorized to make one API call but not another. The only generic thing you can do is verify that the ID + key pair you have are valid AWS credentials, via STS.getCallerIdentity() (see the SDK docs for details). Note that it will return caller details even if the caller has no access to any services, so take the result with a grain of salt.
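A sketch of that check, written against an injected async call so the flow is clear without credentials; in real use, getCallerIdentity would be something like `() => new AWS.STS({ credentials }).getCallerIdentity().promise()` (my wrapping, based on SDK v2 as used in the question):

```javascript
// getCallerIdentity is expected to perform the real STS call and
// reject when the credentials are invalid.
async function checkKeys(getCallerIdentity) {
  try {
    const identity = await getCallerIdentity();
    return { ok: true, arn: identity.Arn }; // valid AWS credentials
  } catch (err) {
    return { ok: false, error: err };       // keys were rejected
  }
}
```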
You're just storing credentials in a local object and then retrieving them; no request ever leaves your machine. You need to trigger an actual AWS API call in order to verify that the credentials are valid.