I have an application instance running in EKS with the following variables set:
declare -x AWS_DEFAULT_REGION="us-west-2"
declare -x AWS_REGION="us-west-2"
declare -x AWS_ROLE_ARN="xxxxx"
declare -x AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
As I understand it, the default Java SDK credentials chain contains com.amazonaws.auth.WebIdentityTokenCredentialsProvider, which builds a com.amazonaws.services.securitytoken.AWSSecurityTokenService under the hood.
But I can't see how this circular dependency is resolved: you need to specify credentials when creating the AWSSecurityTokenService, yet the credentials are produced by that very service.
I have a practical reason for asking: I want to customize the endpoint of the STS client, but I can't because of the circular dependency. What I would like to write is:
AWSSecurityTokenServiceClientBuilder.standard()
.withCredentials(new STSAssumeRoleWithWebIdentitySessionCredentialsProvider.Builder(
"arn",
"session",
"tokenfile")
.withStsClient(xxxx)
.build())
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", null))
.build()
It was easy. The inner STS client is simply built with anonymous auth (https://github.com/aws/aws-sdk-java/blob/1.11.792/aws-java-sdk-sts/src/main/java/com/amazonaws/auth/STSAssumeRoleWithWebIdentitySessionCredentialsProvider.java#L122-L125):
return AWSSecurityTokenServiceClientBuilder.standard()
.withClientConfiguration(clientConfiguration)
.withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
.build();
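So the circular dependency is broken by bootstrapping the inner STS client with anonymous credentials: AssumeRoleWithWebIdentity is an unsigned call, so no real credentials are needed. That also means you can build the inner STS client yourself, point it at a custom endpoint, and hand it over via withStsClient. A minimal sketch, reusing the EKS environment variables from above (the session name is a placeholder):

AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
        // anonymous credentials: AssumeRoleWithWebIdentity requests are not signed
        .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", "us-west-2"))
        .build();

AWSCredentialsProvider credentialsProvider =
        new STSAssumeRoleWithWebIdentitySessionCredentialsProvider.Builder(
                System.getenv("AWS_ROLE_ARN"),
                "my-session",                                   // placeholder session name
                System.getenv("AWS_WEB_IDENTITY_TOKEN_FILE"))
            .withStsClient(sts)
            .build();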
I am trying to access S3 using the AWS SDK in a Java service, but I get the following exception.
The code I use is as follows.
First, configure the builder:
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setRetryPolicy( PredefinedRetryPolicies.getDefaultRetryPolicyWithCustomMaxRetries(5));
AmazonS3ClientBuilder s3ClientCommonBuilder = AmazonS3ClientBuilder.standard()
.withClientConfiguration(clientConfiguration)
.withForceGlobalBucketAccessEnabled(true)
.withPathStyleAccessEnabled(true);
s3ClientCommonBuilder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsKey, awsSecret)));
return s3ClientCommonBuilder;
Then create the AWS S3 client:
AmazonS3ClientBuilder internalS3ClientBuilder =
createS3ClientCommonBuilder(s3bucket, awsKey, awsSecret);
if (internalServiceEndpoint != null && !internalServiceEndpoint.isEmpty()) {
internalS3ClientBuilder.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(internalServiceEndpoint, Region.EU_Ireland.getFirstRegionId()));
}
Then at run time, when the client attempts to connect to S3, the following exception is thrown:
1) Error in custom provider, com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
while locating AwsS3ClientProvider
at org.test.ApiServerBootstrap.configure(ApiServerBootstrap.java:103)
while locating org.test.AWSS3Client
for the 3rd parameter of org.test.DataUploader.<init>(DataUploader.java:60)
at org.test.DataUploader.class(DataUploader.java:49)
while locating org.test.DataUploader
for the 3rd parameter of org.test.StoreService.<init>(StoreService.java:56)
at org.test.services.v3.StoreService.class(StoreService.java:53)
while locating org.test.StoreService
for the 1st parameter of org.test.verticles.EventBusConsumerVerticle.<init>(EventBusConsumerVerticle.java:35)
at org.test.verticles.EventBusConsumerVerticle.class(EventBusConsumerVerticle.java:35)
while locating org.test.verticles.EventBusConsumerVerticle
for the 3rd parameter of org.test.ApiServer.<init>(ApiServer.java:52)
while locating org.test.ApiServer
1 error
at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1053)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1086)
at org.test.ApiServer.main(ApiServer.java:46)
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:462)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:424)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at org.test.AWSS3Client.create(AWSS3Client.java:99)
I have tested this on some other computers to verify that it is my AWS client configuration that is wrong.
My .aws/config file looks as follows
$ cat .aws/config
[default]
region = eu-west-1
output = json
My .aws/credentials looks as follows:
$ cat .aws/credentials
[default]
aws_access_key_id=XXXXXXXX.....
aws_secret_access_key=XXXXXXXX
My AWS CLI version is as follows:
$ aws --version
aws-cli/2.2.40 Python/3.8.8 Linux/5.11.0-34-generic exe/x86_64.ubuntu.20 prompt/off
I'd appreciate it if someone could figure out what I am missing for the AWS client to connect to S3.
You want to make sure you are using the DefaultAWSCredentialsProviderChain() class when building your S3Client.
Also, you can set the environment variable AWS_REGION=eu-west-1.
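The missing piece in the code above is a region: when internalServiceEndpoint is empty, neither withEndpointConfiguration nor withRegion is called, so the builder falls back to the region provider chain and fails. A minimal sketch of the fix (region taken from your .aws/config; note that withRegion and withEndpointConfiguration are mutually exclusive):

AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain());
if (internalServiceEndpoint != null && !internalServiceEndpoint.isEmpty()) {
    builder.withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(internalServiceEndpoint, "eu-west-1"));
} else {
    builder.withRegion(Regions.EU_WEST_1); // explicit region, no provider-chain lookup
}
AmazonS3 s3 = builder.build();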
I am trying to create a Lambda S3 listener leveraging Lambda as a native image. The point is to get the S3 event and then do some work by pulling the file, etc. To get the file I am using the AWS SDK 2.x S3 client as below:
S3Client.builder().build();
This code results in
2020-03-12 19:45:06,205 ERROR [io.qua.ama.lam.run.AmazonLambdaRecorder] (Lambda Thread) Failed to run lambda: software.amazon.awssdk.core.exception.SdkClientException: Unable to load an HTTP implementation from any provider in the chain. You must declare a dependency on an appropriate HTTP implementation or pass in an SdkHttpClient explicitly to the client builder.
To resolve this I added the AWS Apache HTTP client dependency and updated the code to the following:
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .build();

S3Client.builder().httpClient(httpClient).build();
I also had to add the following to the native image's dynamic proxy configuration:
[
  ["org.apache.http.conn.HttpClientConnectionManager",
   "org.apache.http.pool.ConnPoolControl",
   "software.amazon.awssdk.http.apache.internal.conn.Wrapped"]
]
After this I am now getting the following stack trace:
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:86)
... 76 more
I am running Quarkus 1.2.0 on GraalVM 19.3.1. I am building this via Maven and the provided Docker container for Quarkus. I thought the trust store was added by default (in the build command it looks to be), but am I missing something? Is there another way to get this to run without setting the HTTP client on the S3 client?
There is a PR, under review at the moment, that introduces an AWS S3 extension for both JVM and native mode. The AWS clients are fully Quarkified, meaning they are configured via application.properties and enabled for dependency injection. So stay tuned, as it will most probably be available in Quarkus 1.5.0.
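In the meantime, the trustAnchors error typically means SSL support was not compiled into the native image, so the trust store is empty at run time. A minimal application.properties sketch (assuming the standard Quarkus flag, which makes the build register the default trust store):

quarkus.ssl.native=true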
I want to test my AWS code locally, so I have to set a proxy on an AWS client.
There is a proxy host (http://user:pass#my-corporate-proxy.com:8080) set in my environment via the variable HTTPS_PROXY.
I didn't find a way to set the proxy as a whole, so I came up with this code:
AmazonSNS sns = AmazonSNSClientBuilder.standard()
.withClientConfiguration(clientConfig(System.getenv("HTTPS_PROXY")))
.withRegion(Regions.fromName(System.getenv("AWS_REGION")))
.withCredentials(new DefaultAWSCredentialsProviderChain())
.build();
ClientConfiguration clientConfig(String proxy) {
ClientConfiguration configuration = new ClientConfiguration();
if (proxy != null && !proxy.isEmpty()) {
Matcher matcher = Pattern.compile("(\\w{3,5})://((\\w+):(\\w+)#)?(.+):(\\d{1,5})").matcher(proxy);
if (!matcher.matches()) {
throw new IllegalArgumentException("Proxy not valid: " + proxy);
}
configuration.setProxyHost(matcher.group(5));
configuration.setProxyPort(Integer.parseInt(matcher.group(6)));
configuration.setProxyUsername(matcher.group(3));
configuration.setProxyPassword(matcher.group(4));
}
return configuration;
}
The whole clientConfig method is just boilerplate code.
Is there a more elegant way to achieve this?
As far as I can tell, when using AWS SDK v1 (1.11.840), if you have environment variables such as HTTP(S)_PROXY or http(s)_proxy set at runtime, or properties like http(s).proxyHost, proxyPort, proxyUser, and proxyPassword passed to your application, you don't have to set any of that: it gets automatically read into the newly created ClientConfiguration.
As such, you'd only want to set the ProxyAuthenticationMethod, if needed:
ClientConfiguration clientConfig(ProxyAuthenticationMethod authMethod) {
ClientConfiguration conf = new ClientConfiguration();
List<ProxyAuthenticationMethod> proxyAuthentication = new ArrayList<>(1);
proxyAuthentication.add(authMethod);
conf.setProxyAuthenticationMethods(proxyAuthentication);
return conf;
}
ProxyAuthenticationMethod can be ProxyAuthenticationMethod.BASIC, DIGEST, KERBEROS, NTLM, or SPNEGO.
I can confirm that setting the parameters http(s).proxyHost (and the others) works out of the box; you do, however, need to specify a port, otherwise the AWS SDK (v1) will not pick the proxy up.
java -Dhttps.proxyHost=proxy.company.com -Dhttps.proxyPort=8080 -Dhttps.proxyUser=myUsername -Dhttps.proxyPassword=myPassword <app>
Username & password are optional.
See for more info:
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/net/doc-files/net-properties.html
and
What Java properties to pass to a Java app to authenticate with a http proxy
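Putting it together with the SNS example from the question, the clientConfig boilerplate then disappears entirely; a sketch relying on the auto-detection described above:

// Launched with: java -Dhttps.proxyHost=my-corporate-proxy.com -Dhttps.proxyPort=8080 <app>
AmazonSNS sns = AmazonSNSClientBuilder.standard()
        .withRegion(Regions.fromName(System.getenv("AWS_REGION")))
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .build(); // proxy host/port/user/password are read from the system properties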
How do we create an AWS service client (e.g. EC2, Auto Scaling) without using a session, and instead directly use the shared credentials, like in boto3?
Using a session like this works:
sess := session.New(&aws.Config{
Region: aws.String("us-east-1"),
Credentials: credentials.NewSharedCredentials("", profile),
})
svc := ec2.New(sess)
However, this does not work:
svc := ec2.New(&aws.Config{
Region: aws.String("us-east-1"),
Credentials: credentials.NewSharedCredentials("", profile),
})
Error:
cannot use aws.Config literal (type *aws.Config) as type client.ConfigProvider in argument to ec2.New:
    *aws.Config does not implement client.ConfigProvider (missing ClientConfig method)
How to directly create a client with Go AWS SDK without session?
You can't in v1: ec2.New takes a client.ConfigProvider, which *aws.Config does not implement, but session.Session does. The SDK needed to avoid a circular dependency, and to do this it introduced the session.Session abstraction. V2, however, gets rid of this abstraction by flattening some of the packages :)
I'm trying to read an S3 bucket from Spark, and up until today Spark has always complained that the request returns 403:
hadoopConf = spark_context._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3a.access.key", "ACCESSKEY")
hadoopConf.set("fs.s3a.secret.key", "SECRETKEY")
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
logs = spark_context.textFile("s3a://mybucket/logs/*")
Spark was saying .... Invalid Access key [ACCESSKEY]
However with the same ACCESSKEY and SECRETKEY this was working with aws-cli
aws s3 ls mybucket/logs/
and in python boto3 this was working
resource = boto3.resource("s3", region_name="us-east-1")
resource.Object("mybucket", "logs/text.py") \
.put(Body=open("text.py", "rb"),ContentType="text/x-py")
So my credentials are NOT invalid, and the problem is definitely something with Spark.
Today I decided to turn on DEBUG logging for all of Spark, and to my surprise... Spark is NOT using the [SECRETKEY] I have provided but instead... adds a random one???
17/03/08 10:40:04 DEBUG request: Sending Request: HEAD https://mybucket.s3.amazonaws.com / Headers: (Authorization: AWS ACCESSKEY:[RANDON-SECRET-KEY], User-Agent: aws-sdk-java/1.7.4 Mac_OS_X/10.11.6 Java_HotSpot(TM)_64-Bit_Server_VM/25.65-b01/1.8.0_65, Date: Wed, 08 Mar 2017 10:40:04 GMT, Content-Type: application/x-www-form-urlencoded; charset=utf-8, )
This is why it still returns 403! Spark is not using the key I provide with fs.s3a.secret.key but instead invents a random one??
For the record I'm running this locally on my machine (OSX) with this command
spark-submit --packages com.amazonaws:aws-java-sdk-pom:1.11.98,org.apache.hadoop:hadoop-aws:2.7.3 test.py
Could someone enlighten me on this?
(updated as my original one was downvoted as clearly considered unacceptable)
The AWS auth protocol doesn't send your secret over the wire. It signs the message. That's why what you see isn't what you passed in.
For further information, please reread.
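To illustrate (a minimal sketch, not the SDK's actual code): with the old AWS V2-style header seen in that debug log, the value after the colon is HMAC(secret, string-to-sign), a per-request signature derived from your secret key, which itself never leaves the machine:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class SignatureSketch {
    // Only this derived digest goes over the wire, never the secret key itself.
    static String sign(String stringToSign, String secretKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Yields something like the "Authorization: AWS ACCESSKEY:..." header above
        String stringToSign = "HEAD\n\n\nWed, 08 Mar 2017 10:40:04 GMT\n/mybucket/";
        System.out.println("AWS ACCESSKEY:" + sign(stringToSign, "SECRETKEY"));
    }
}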
I ran into a similar issue. Requests that were using valid AWS credentials returned a 403 Forbidden, but only on certain machines. Eventually I found out that the system time on those particular machines were 10 minutes behind. Synchronizing the system clock solved the problem.
Hope this helps!
This random passkey is very intriguing. Maybe the AWS SDK is getting the password from the OS environment.
In Hadoop 2.8, the default AWS provider chain contains the following providers, in order:
BasicAWSCredentialsProvider
EnvironmentVariableCredentialsProvider
SharedInstanceProfileCredentialsProvider
Order, of course, matters! The AWSCredentialsProviderChain returns the keys from the first provider that supplies them:
if (credentials.getAWSAccessKeyId() != null &&
credentials.getAWSSecretKey() != null) {
log.debug("Loading credentials from " + provider.toString());
lastUsedProvider = provider;
return credentials;
}
See the code in "GrepCode for AWSCredentialProviderChain".
I faced a similar problem using profile credentials. The SDK was ignoring the credentials inside ~/.aws/credentials (as good practice, I encourage you not to store credentials inside the program in any way).
My solution...
Set the credentials provider to use ProfileCredentialsProvider
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") # yes, I am using central eu server.
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.profile.ProfileCredentialsProvider')
Folks, go for the IAM configuration based on roles... that will open up the S3 access policies that should be added to the EMR default role.