I'm trying to call the LocalStack S3 service with the AWS SDK. It works fine with the AWS CLI, but the AWS SDK behaves strangely: it prepends the bucket name to the front of the URL and then fails with "unable to connect".
[![INTELLIJ debug][1]][1]
Code is as below:
public void testS3() {
    final String localStackS3URL = "http://localhost:4566";
    final String REGION = "us-east-1";
    final AwsClientBuilder.EndpointConfiguration endpoint =
            new AwsClientBuilder.EndpointConfiguration(localStackS3URL, REGION);

    final AmazonS3 client = AmazonS3ClientBuilder.standard()
            .withEndpointConfiguration(endpoint)
            .build();

    if (!client.doesBucketExistV2("test")) {
        client.createBucket("test");
    }
}
Can anyone tell me what is wrong here? It works with the AWS CLI, but the AWS SDK is prefixing the bucket name strangely.
[![cmd aws cli][2]][2]
Thanks in advance
[1]: https://i.stack.imgur.com/wMI8D.png
[2]: https://i.stack.imgur.com/L0jLV.png
Try adding the HTTP client parameter while building the S3 client; it worked for me:
httpClient(UrlConnectionHttpClient.builder().build())
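For context, the bucket-in-front behaviour is S3's virtual-hosted-style addressing: by default the SDK moves the bucket into the hostname, producing something like http://test.localhost:4566, and that hostname may not resolve. Enabling path-style access on the client (e.g. withPathStyleAccessEnabled(true) in SDK v1) is another common fix for LocalStack. A minimal sketch of the two addressing styles (in Python for brevity; bucket and endpoint names are illustrative):

```python
# Sketch of the two S3 addressing styles (names are illustrative).
def virtual_hosted_url(endpoint_host, bucket, key):
    # SDK default: the bucket becomes part of the hostname
    return f"http://{bucket}.{endpoint_host}/{key}"

def path_style_url(endpoint_host, bucket, key):
    # Path-style: the bucket stays in the path, hostname untouched
    return f"http://{endpoint_host}/{bucket}/{key}"

print(virtual_hosted_url("localhost:4566", "test", "file.txt"))
# http://test.localhost:4566/file.txt  <- "test.localhost" may not resolve
print(path_style_url("localhost:4566", "test", "file.txt"))
# http://localhost:4566/test/file.txt  <- this is what the CLI-style URL looks like
```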
I'm trying to migrate several Spring Boot services to EKS. They can't retrieve AWS credentials from the credentials chain, and the pods are failing with the following error: Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain
This is what I've tried so far.
I'm using a web identity token from AWS STS for credentials retrieval.
@Bean
public AWSCredentialsProvider awsCredentialsProvider() {
    if (System.getenv("AWS_WEB_IDENTITY_TOKEN_FILE") != null) {
        return WebIdentityTokenCredentialsProvider.builder().build();
    }
    return new DefaultAWSCredentialsProviderChain();
}
@Bean
public SqsClient sqsClient(AWSCredentialsProvider awsCredentialsProvider) {
    return SqsClient.builder()
            .credentialsProvider(() -> (AwsCredentials) awsCredentialsProvider.getCredentials())
            .region(Region.EU_WEST_1)
            .build();
}

@Bean
public SnsClient snsClient(AWSCredentialsProvider awsCredentialsProvider) {
    return SnsClient.builder()
            .credentialsProvider(() -> (AwsCredentials) awsCredentialsProvider.getCredentials())
            .region(Region.EU_WEST_1)
            .build();
}
The services also have the aws-java-sdk-sts Maven dependency packaged.
The IAM role for the services is also fine, and AWS_WEB_IDENTITY_TOKEN_FILE is automatically created within the pod after each Jenkins build, based on the K8s manifest file.
From the pod I can make GET and POST requests to SNS and SQS without any problem.
The problem was fixed.
The main issue was a conflicting AWS SDK BOM version with individual modules. Also, the previous version of the BOM I was using didn't support AWS SDK v2.x.
These are the main takeaways from the issue:
1. The AWS SDK authenticates services using a credentials provider chain. The default credential provider chain of the AWS SDK for Java 2.x searches for credentials in your environment using a predefined sequence.
1.1 As of AWS SDK for Java 2.x, the web identity token from AWS STS is part of the default provider chain.
1.2 As long as you are using v2 of the SDK and have the STS dependency, explicit configuration of the web identity token is redundant.
1.3 Make sure the candidate service is using AWS SDK v2, as it reduces the configuration code to a minimum.
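The chain behaviour described above can be sketched as follows (a simplified illustration in Python; the provider names and order are illustrative, not the SDK's exact list):

```python
# Sketch of the credentials-provider-chain pattern: try each provider in
# order and return the first non-None result.
class CredentialsChain:
    def __init__(self, providers):
        self.providers = providers

    def resolve(self):
        for provider in self.providers:
            creds = provider()
            if creds is not None:
                return creds
        raise RuntimeError(
            "Unable to load credentials from any of the providers in the chain")

# Illustrative providers: env vars unset, web identity token present.
env_provider = lambda: None
web_identity_provider = lambda: "web-identity-credentials"

chain = CredentialsChain([env_provider, web_identity_provider])
print(chain.resolve())  # web-identity-credentials
```

The pods failed because every provider in the chain came up empty, which is exactly the exception case in the sketch.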
2. If a candidate service is using AWS SDK v1, the following configuration should be added, as the web identity token isn't in the default provider chain for v1:
@Bean
public AWSCredentialsProvider awsCredentialsProvider() {
    if (System.getenv("AWS_WEB_IDENTITY_TOKEN_FILE") != null) {
        return WebIdentityTokenCredentialsProvider.builder().build();
    }
    return new DefaultAWSCredentialsProviderChain();
}
Last but not least, try to use the latest AWS SDK BOM dependency. (Currently all modules have the same version, but this may not always be the case.)
You should set the roleArn, roleSessionName, and web identity token file details when building the credentials provider.
Try this:
return WebIdentityTokenCredentialsProvider.builder()
        .roleArn(System.getenv("AWS_ROLE_ARN"))
        .roleSessionName(System.getenv("AWS_ROLE_SESSION_NAME"))
        .webIdentityTokenFile(System.getenv("AWS_WEB_IDENTITY_TOKEN_FILE"))
        .build();
rather than just returning WebIdentityTokenCredentialsProvider.builder().build().
You can try to create the file:
Windows: C:\Users\[username]\.aws\config
Mac: /Users/[username]/.aws/config
Linux: /home/[username]/.aws/config
and add AWS credentials to it.
Ex:
[default]
aws_access_key_id = key_value
aws_secret_access_key = secret_value
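The file is plain INI format, so a standard parser can sanity-check it before the SDK reads it (a quick sketch; the key values are the placeholders from the example above):

```python
import configparser

# Same content as the example credentials file above (placeholder values).
sample = """\
[default]
aws_access_key_id = key_value
aws_secret_access_key = secret_value
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)  # raises configparser.Error on malformed INI

print(cfg["default"]["aws_access_key_id"])      # key_value
print(cfg["default"]["aws_secret_access_key"])  # secret_value
```

A malformed file here (a missing `[default]` header, for instance) is a common cause of the SDK silently falling through to other providers.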
We recently migrated from AWS SDK 1.x to 2.x for Java, and the put operation to AWS S3 is failing. The S3 bucket has encryption enabled (using an AWS KMS key).
Below is the code I am trying to use, but I am getting the error mentioned below:
Server Side Encryption with AWS KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms (Service: S3, Status Code: 400, Request ID: xxx, Extended Request ID: xxx/rY9ydIzxi3NROPiM=)
Update: I figured it out myself. For anyone who wants to connect to an AWS S3 bucket using a KMS key and AWS SDK 2.x for Java, use the code below and it should work:
Map<String, String> metadata = new HashMap<>();
metadata.put("x-amz-server-side-encryption", "aws:kms");

// serverSideEncryption(...) is what sets the required
// x-amz-server-side-encryption header on the request
PutObjectRequest request = PutObjectRequest.builder()
        //.bucketKeyEnabled(true)
        .bucket(bucketName)
        .key(Key)
        .metadata(metadata)
        .serverSideEncryption(ServerSideEncryption.AWS_KMS)
        .ssekmsKeyId("arn:aws:kms:xxxxx")
        .build();

// Write a local copy of the data (independent of the upload below)
File outputFile = new File("filename");
try (PrintWriter pw = new PrintWriter(outputFile)) {
    data.stream().forEach(pw::println);
}

awsS3Client.putObject(request, RequestBody.fromBytes(
        String.join(System.lineSeparator(), data).getBytes(StandardCharsets.UTF_8)));
I am trying to send a 'hello world' message to an AWS IoT endpoint.
The Amazon documentation at
https://docs.aws.amazon.com/panorama/latest/dev/applications-awssdk.html
has this simple code sample:
import boto3
iot_client=boto3.client('iot-data')
topic = "panorama/panorama_my-appliance_Thing_a01e373b"
iot_client.publish(topic=topic, payload="my message")
This code works fine when I put it inside a Lambda function.
But when I try to run this code on my PC in a stand-alone Python application, I get the error message:
certificate verify failed: unable to get local issuer certificate
(_ssl.c:1125)
I do have an .aws/credentials file with entries like
[default]
aws_access_key_id = xxxxxxxxxx
aws_secret_access_key = xxxxxxxxxx
I checked that the endpoint is correct: the
aws iot describe-endpoint
command returns a valid -ats endpoint like:
"endpointAddress": "xxxxxxx-ats.iot.us-east-2.amazonaws.com"
If I specify this endpoint while creating the client:
iot_client = boto3.client('iot-data',
                          region_name='us-east-2',
                          endpoint_url='xxxxxxx-ats.iot.us-east-2.amazonaws.com')
I get the error:
ValueError: Invalid endpoint: xxxxxx-ats.iot.us-east-2.amazonaws.com
What am I missing? Do I need to download any certificate files? If so, this code does not seem to use any certificates.
The same setup is working with S3 or DynamoDB:
s3 = boto3.resource('s3')
and
dynamodb = boto3.resource('dynamodb')
are working fine on my PC.
I had this same issue, and adding https:// fixed it for me:
iot_client = boto3.client('iot-data',
                          region_name='us-east-2',
                          endpoint_url='https://xxxxxxx-ats.iot.us-east-2.amazonaws.com')
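The scheme matters because botocore validates endpoint_url as a full URL; without a scheme the string doesn't even parse as a network location, which is why boto3 rejects it as invalid. A quick stdlib illustration:

```python
from urllib.parse import urlparse

# Without a scheme, the host ends up in .path and .netloc is empty,
# so the string is not usable as an endpoint URL.
bare = urlparse("xxxxxxx-ats.iot.us-east-2.amazonaws.com")
full = urlparse("https://xxxxxxx-ats.iot.us-east-2.amazonaws.com")

print(bare.scheme, repr(bare.netloc))  # ''  ''  <- no scheme, no host parsed
print(full.scheme, full.netloc)        # https  xxxxxxx-ats.iot.us-east-2.amazonaws.com
```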
I have a boto3 client like this:
client = boto3.client('rekognition', region_name="us-east-1")
I am using this client to detect text in images. The code is deployed in an AWS region where the Rekognition API is not available, but I passed a region where it is available as the region_name in the client. On executing/testing the Lambda function, it gives:
errorMessage": "Could not connect to the endpoint URL: \"https://rekognition.ap-south-1.amazonaws.com/"
Why is it picking ap-south-1 when I provided us-east-1 in the client?
client = boto3.client('rekognition', region_name="us-east-1")
When I run the code locally (from ap-south-1, with us-east-1 in the client), it runs perfectly, but it does not run on AWS Lambda. It does run successfully when both regions are the same (us-east-1).
It would be great if anyone could provide a suggestion; help needed soon!
As of March 15th, 2018, AWS Rekognition is not supported in Mumbai (ap-south-1).
See supported regions: Amazon Rekognition - Available Regions
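For what it's worth, boto3 resolves the region in a fixed precedence order: an explicit region_name wins, then environment variables, then the config-file default. In a Lambda, the AWS_REGION environment variable is set to the function's own region, so if the deployed code doesn't actually pass region_name, the client follows the Lambda's region. A simplified sketch of that order (illustrative, not botocore's exact code):

```python
import os

def resolve_region(explicit=None, env=None, config_default=None):
    # Simplified precedence: explicit region_name > AWS_REGION /
    # AWS_DEFAULT_REGION environment variables > config-file default.
    env = env if env is not None else os.environ
    return (explicit
            or env.get("AWS_REGION")
            or env.get("AWS_DEFAULT_REGION")
            or config_default)

# Explicit region wins even inside a Lambda running in ap-south-1:
print(resolve_region(explicit="us-east-1", env={"AWS_REGION": "ap-south-1"}))  # us-east-1
# Without an explicit region, the Lambda's own region is used:
print(resolve_region(env={"AWS_REGION": "ap-south-1"}))                        # ap-south-1
```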
I'm trying to test out AWS S3 in Eclipse using Java. I'm just trying to run the Amazon S3 sample, but it doesn't recognise my credentials, even though I'm sure my credentials are legitimate. It gives me the following error:
===========================================
Getting Started with Amazon S3
===========================================
Listing buckets
Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.
Error Message: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 057D91D336C1FASC, AWS Error Code: InvalidAccessKeyId, AWS Error Message: The AWS Access Key Id you provided does not exist in our records.
HTTP Status Code: 403
AWS Error Code: InvalidAccessKeyId
Error Type: Client
Request ID: 057D91D336C1FASC
A little update here:
AWS creates a credentials file on your system; in my case it was /Users/macbookpro/.aws/credentials.
The file in this place determines the default accessKeyId and related settings, so go ahead and update it.
So I ran into the same issue, but I think I figured it out.
I was using Node.js, but I think the problem should be the same, since the issue is how they have structured their config object.
In JavaScript, if you run this in the backend:
var aws = require('aws-sdk');
aws.config.accessKeyId = "Key bablbalab";
console.log(aws.config.accessKeyId)
you will find it prints out something different, because the correct way of setting the accessKeyId isn't what's provided in the official website tutorial:
aws.config.accessKeyId="balbalb"
or
aws.config.loadFromPath = ('./awsConfig.json')
or any of that.
If you log the entire aws.config, you will find the correct way is:
console.log(aws.config)
console.log(aws.config.credentials.secretAccessKey)
aws.config.credentials.secretAccessKey = "Key balbalab"
Do you see the structure of the object? There's the inconsistency.