SavingsPlans: Credential should be scoped to a valid region - amazon-web-services

I am trying to list all available Savings Plans in my account using:
var savingsPlans = new AWS.SavingsPlans({endpoint: 'savingsplans.amazonaws.com', region: region});
const listResponse: AWS.SavingsPlans.DescribeSavingsPlansResponse = await savingsPlans.describeSavingsPlans().promise();
But I am getting the following error in some scenarios, while in others it works fine. What could be the issue? Please help.
"error": {
"message": "Credential should be scoped to a valid region, not 'us-west-2'. ",
"code": "InvalidSignatureException",
"time": "2022-03-04T09:47:56.535Z",
"statusCode": 403,
"retryable": false,
"retryDelay": 97.23145392059766
}
I am running the above code for all regions in my account. For some regions I get no error, but for others I do. For example, I get the error for region ap-northeast-1 but not for us-east-1. When I run the same code with the Go SDK, it works for both regions, so I am not sure what is wrong here.

This issue is solved when we use the v3 aws-sdk:
import { SavingsplansClient, DescribeSavingsPlansCommand } from "@aws-sdk/client-savingsplans";
const savingsPlans = new SavingsplansClient();
const command = new DescribeSavingsPlansCommand({});
const listResponse = await savingsPlans.send(command);
Seems there are some issues in v2 for Savings Plans.
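If staying on v2 is necessary, a possible workaround is to pin the client's region instead of iterating over regions. This sketch rests on the assumption that savingsplans.amazonaws.com is a single global endpoint whose requests must be signed for us-east-1, which would also explain why only some regions fail:

// Sketch for AWS SDK for JavaScript v2. Assumption: the global endpoint
// savingsplans.amazonaws.com expects SigV4 signatures scoped to us-east-1,
// so the client region is pinned rather than set per iterated region.
import AWS from 'aws-sdk';

const savingsPlans = new AWS.SavingsPlans({
  endpoint: 'savingsplans.amazonaws.com',
  region: 'us-east-1',
});

const listResponse = await savingsPlans.describeSavingsPlans().promise();
console.log(listResponse.savingsPlans);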

Related

How to configure lifecycle for S3 incomplete multi-part upload

I've observed that for a failed multipart upload (e.g. a crash or stop in the middle), the partially uploaded parts still exist in storage.
I want to configure lifecycle rules for these incomplete objects via either MinIO or the S3 C++ SDK.
I want to configure something like:
{
  "ID": "bucket-lifecycle-incomplete-chunk-upload",
  "Status": "Enabled",
  "NoncurrentVersionExpiration": {
    "NoncurrentDays": 1
  },
  "AbortIncompleteMultipartUpload": {
    "DaysAfterInitiation": 1
  }
},
My C++ code looks like the following:
Aws::S3::Model::AbortIncompleteMultipartUpload incomplete_upload_config;
incomplete_upload_config.SetDaysAfterInitiation(days);
Aws::S3::Model::NoncurrentVersionExpiration version_expire;
version_expire.SetNoncurrentDays(1);
auto status = Aws::S3::Model::ExpirationStatus::Enabled;
Aws::S3::Model::LifecycleRule rule;
rule.SetID("bucket-lifecycle-incomplete-chunk-upload");
rule.SetStatus(std::move(status));
rule.SetNoncurrentVersionExpiration(std::move(version_expire));
rule.SetAbortIncompleteMultipartUpload(std::move(incomplete_upload_config));
Aws::S3::Model::BucketLifecycleConfiguration bkt_config;
bkt_config.AddRules(std::move(rule));
Aws::S3::Model::PutBucketLifecycleConfigurationRequest config_req{};
config_req.SetBucket(bucket);
config_req.SetLifecycleConfiguration(std::move(bkt_config));
auto outcome = client->PutBucketLifecycleConfiguration(config_req);
And I get the following result:
Received HTTP return code: 400; Failed to update config for bucket <bucket-name> because MalformedXML: Unable to parse ExceptionName: MalformedXML Message:
The pain point of this error is that I cannot tell which additional or missing fields lead to it.
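One thing that may be worth checking, as a guess rather than a confirmed diagnosis: the rule above carries neither a Filter nor a Prefix, and S3 commonly rejects lifecycle rules without one as MalformedXML. In the C++ SDK that would mean also populating rule.SetFilter(...) before adding the rule. For comparison, a rough sketch of the same rule in the JavaScript v3 SDK with an empty-prefix filter; the bucket name is a placeholder:

import { S3Client, PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

// Sketch only: "my-bucket" is a placeholder. The Filter element is the part
// absent from the C++ snippet above; its absence is a frequently reported
// cause of MalformedXML from PutBucketLifecycleConfiguration.
const client = new S3Client({});

await client.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: "my-bucket",
  LifecycleConfiguration: {
    Rules: [
      {
        ID: "bucket-lifecycle-incomplete-chunk-upload",
        Status: "Enabled",
        Filter: { Prefix: "" }, // apply to the whole bucket
        NoncurrentVersionExpiration: { NoncurrentDays: 1 },
        AbortIncompleteMultipartUpload: { DaysAfterInitiation: 1 },
      },
    ],
  },
}));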

AWS CDK access Denied when trying to create connection AWS glue

I have written CDK code for creating a connection, but I am getting an error while creating it:
User: arn:aws:iam::XXXXXXX:root is not authorized to perform: glue:CreateConnection on resource: arn:aws:glue:us-east-2:Connectionnew:catalog (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: a8702efb-4467-4ffb-8fe0-18468f336299)
Below is my simple Code:
glue_connection = glue.CfnConnection(self, "Connectionnew",
    catalog_id = "Connectionnew",
    connection_input = {
        "connectionType": "JDBC",
        "Name": "JDBCConnection",
        "connectionProperties": {
            "JDBC_CONNECTION_URL": "jdbc:redshift://non-prod-royalties2.xxxxxxx.us-east-1.redshift.amazonaws.com:xxx/xxxxx",
            "USERNAME": "xxxxxx",
            "Password": "xxxxxxxx"
        }
    }
)
Please help me with this
Being that you are using the root account (which is not advisable), it's not an issue of your active AWS user having the incorrect permissions.
Likely, the connection details you are providing are incorrect. The username/password might be correct but the formatting of the JSON is questionable. I'd check to see if the JDBC keys are case-sensitive because that could be your issue.
I was able to get this issue resolved by putting in the AWS account number as below:
glue_connection = glue.CfnConnection(self, "Connectionnew",
    catalog_id = "AWSAccountNumber",
    connection_input = {
        "connectionType": "JDBC",
        "Name": "JDBCConnection",
        "connectionProperties": {
            "JDBC_CONNECTION_URL": "jdbcredshiftlink",
            "USERNAME": "xxxxxx",
            "PASSWORD": "xxxxxxxx"
        }
    }
)
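If you would rather not hard-code the account number, CDK can resolve it from the stack itself. A sketch in TypeScript rather than the Python used above; the connection properties are placeholders:

import * as cdk from "aws-cdk-lib";
import * as glue from "aws-cdk-lib/aws-glue";

export class GlueConnectionStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new glue.CfnConnection(this, "Connectionnew", {
      // catalogId must be the AWS account ID, not the construct name;
      // this.account resolves to the account the stack is deployed into.
      catalogId: this.account,
      connectionInput: {
        connectionType: "JDBC",
        name: "JDBCConnection",
        connectionProperties: {
          // Placeholder values mirroring the question.
          JDBC_CONNECTION_URL: "jdbc:redshift://example.us-east-1.redshift.amazonaws.com:5439/dev",
          USERNAME: "xxxxxx",
          PASSWORD: "xxxxxxxx",
        },
      },
    });
  }
}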

Terraform "primary workGroup could not be created"

I'm trying to execute a query on my table in Amazon Athena, but I can't execute any query; I get this error message:
Before you run your first query, you need to set up a query result location in Amazon S3.
Your query has the following error(s):
No output location provided. An output location is required either through the Workgroup result configuration setting or as an API input. (Service: AmazonAthena; Status Code: 400; Error Code: InvalidRequestException; Request ID: b6b9aa41-20af-4f4d-91f6-db997e226936)
So I'm trying to add a workgroup, but I have this problem:
Error: error creating Athena WorkGroup: InvalidRequestException: primary workGroup could not be created
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "c20801a0-3c13-48ba-b969-4e28aa5cbf86"
  },
  AthenaErrorCode: "INVALID_INPUT",
  Message_: "primary workGroup could not be created"
}
My code:
resource "aws_s3_bucket" "tony" {
bucket = "tfouh"
}
resource "aws_athena_workgroup" "primary" {
name = "primary"
depends_on = [aws_s3_bucket.tony]
configuration {
enforce_workgroup_configuration = false
publish_cloudwatch_metrics_enabled = true
result_configuration {
output_location = "s3://${aws_s3_bucket.tony.bucket}/"
encryption_configuration {
encryption_option = "SSE_S3"
}
}
}
}
Please, is there a solution?
This probably happens because you already have a primary workgroup, so you can't create a new one with the same name. Just create a workgroup with a different name if you want:
name = "primary2"
@Marcin suggested a valid approach, but what may be closer to what you are looking for would be to import the existing workgroup into the state:
terraform import aws_athena_workgroup.primary primary
Once the state knows about the already existing resource, it can plan and apply possible changes.

s3 SignedUrl x-amz-security-token

const AWS = require('aws-sdk');

export function main (event, context, callback) {
  const s3 = new AWS.S3();
  const data = JSON.parse(event.body);
  const s3Params = {
    Bucket: process.env.mediaFilesBucket,
    Key: data.name,
    ContentType: data.type,
    ACL: 'public-read',
  };
  const uploadURL = s3.getSignedUrl('putObject', s3Params);
  callback(null, {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({ uploadURL: uploadURL }),
  })
}
When I test it locally it works fine, but after deployment the signed URL includes x-amz-security-token, and then I get an access denied response. How can I get rid of this x-amz-security-token?
I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
I just solved a very similar, probably the same issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but you may not be, so this may or may not apply; if you are using temporary credentials, it will.
I initially used the method you use above, but then tried the npm module aws-signature-v4 to see if it was different, and got the same error you are getting.
You will need the token: it is required whenever you sign a request with temporary credentials. In Lambda's case the credentials are in the runtime, including the session token, which you need to pass. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
Buried in the docs (and sorry, I cannot find the place this is stated) it is pointed out that some services require the session token to be processed with the other canonical query params. The module I'm using was tacking it on at the end, as the sig v4 instructions seem to imply, so I modified it so the token is canonical, and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change, and now it works nicely for signing your S3 requests.
Signing is discussed here.
I would use the module I did, as I have a feeling the SDK is doing the wrong thing for some reason.
Usage example (this is wrapped in a multipart upload, hence the part number and upload ID):
const sig4 = require('aws-signature-v4');

function createBaseUrl( bucketName, uploadId, partNumber, objectKey ) {
  let url = sig4.createPresignedS3URL( objectKey, {
    method: "PUT",
    bucket: bucketName,
    expires: 21600,
    query: `partNumber=${partNumber}&uploadId=${uploadId}`
  });
  return url;
}
I was facing the same issue. I'm creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST, with content-type 'multipart/form-data'.
Create a client like this:
import boto3

# Do not hard-code credentials in real code.
s3_client = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended.
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Return the response:
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # file you want to upload
response = s3_client.generate_presigned_post(bucket_name,
                                             file_path,
                                             Fields={"Content-Type": ""},
                                             Conditions=[acl,
                                                         {"Content-Type": ""},
                                                         ["starts-with", "$success_action_status", ""],
                                                         ],
                                             ExpiresIn=3600)
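For completeness, a different route not mentioned in the answers above: with the JavaScript v3 SDK, the request presigner signs with whatever credentials the runtime provides, session token included, so inside Lambda the main requirement is that the execution role has s3:PutObject on the bucket. A rough sketch, with bucket, key, and content type as placeholders:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

// Sketch only: assumes the Lambda execution role has s3:PutObject on the bucket.
export async function presignUpload(bucket: string, key: string, contentType: string): Promise<string> {
  const command = new PutObjectCommand({ Bucket: bucket, Key: key, ContentType: contentType });
  // The URL will carry X-Amz-Security-Token automatically when the runtime
  // credentials are temporary; that token by itself is not the cause of a 403.
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}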

How to use AWS Video Rekognition with an unauthenticated identity?

I have followed the steps in the documentation but I received:
User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/CognitoRkUnauth_Role/CognitoIdentityCredentials is not authorized to perform: iam:PassRole on resource: arn:aws:iam::xxxxxxxxxx:role/CognitoRkUnauth_Role
The code fails on NotificationChannel. Without it, I receive the JobId correctly.
var params = {
  Video: {
    S3Object: {
      Bucket: 'mybucket',
      Name: 'myvideoa1.mp4'
    }
  },
  ClientRequestToken: 'LabelDetectionToken',
  MinConfidence: 70,
  NotificationChannel: {
    SNSTopicArn: 'arn:aws:sns:us-east-1:xxxxxxxx:RekognitionVideo',
    RoleArn: 'arn:aws:iam::xxxxxx:role/CognitoRkUnauth_Role'
  },
  JobTag: "DetectingLabels"
}
I set the configuration on CognitoRkUnauth_Role instead of an IAM user; translation worked this way.
For RoleArn I created another role, but it fails too.
I am not the root user.
I know I need to give more information, but if someone can guide me, I will start the configuration again.
I am a beginner in AWS and I don't understand several things yet.
(English is not my first language.)
Well, I had to create a new user. I think there is no reason for doing what I wanted to do :p
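As a side note rather than part of the answer above: the error message says the unauthenticated Cognito role lacks iam:PassRole on the role it passes in NotificationChannel.RoleArn. A minimal sketch of granting that permission follows; the policy name is invented, the ARN is the placeholder from the question, and widening an unauthenticated role's permissions like this deserves careful thought.

import { IAMClient, PutRolePolicyCommand } from "@aws-sdk/client-iam";

const iam = new IAMClient({});

// Sketch only: "AllowPassRekognitionNotificationRole" is an invented policy name
// and the role ARN below is the placeholder value from the question.
await iam.send(new PutRolePolicyCommand({
  RoleName: "CognitoRkUnauth_Role",
  PolicyName: "AllowPassRekognitionNotificationRole",
  PolicyDocument: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: "iam:PassRole",
      Resource: "arn:aws:iam::xxxxxxxxxx:role/CognitoRkUnauth_Role",
    }],
  }),
}));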