I am trying to connect to Amazon Kendra to query an already created data source, and I am getting an unmarshall error from the SDK.

Here is the code snippet :
String strIndexRole = "arn:aws:iam::<my acct no>:role/Kendra-CloudwatchRole";
AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .withEndpointConfiguration(new EndpointConfiguration("console.aws.amazon.com/kendra/home?region=us-east-1", "us-east-1"))
        .build();
AssumeRoleRequest roleRequest = new AssumeRoleRequest()
        .withRoleArn(strIndexRole).withDurationSeconds(7200);
AssumeRoleResult roleResponse = stsClient.assumeRole(roleRequest);
This is the exception:
15:38:30.301 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 0][route: {s}->https://console.aws.amazon.com:443][total available: 1; route allocated: 1 of 50; total allocated: 1 of 50]
Exception in thread "main" com.amazonaws.SdkClientException: Unable to unmarshall response (ParseError at [row,col]:[19,24]
Message: The reference to entity "state" must end with the ';' delimiter.). Response Code: 200, Response Text: OK
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1750)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleSuccessResponse(AmazonHttpClient.java:1446)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1368)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.doInvoke(AWSSecurityTokenServiceClient.java:1719)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1686)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1675)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.executeAssumeRole(AWSSecurityTokenServiceClient.java:589)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRole(AWSSecurityTokenServiceClient.java:561)
at com.aws.kendra.trial.SampleKendraTrial.main(SampleKendraTrial.java:73)
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[19,24]
Message: The reference to entity "state" must end with the ';' delimiter.
at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:604)
at com.sun.xml.internal.stream.XMLEventReaderImpl.peek(XMLEventReaderImpl.java:276)
at com.amazonaws.transform.StaxUnmarshallerContext.nextEvent(StaxUnmarshallerContext.java:220)
at com.amazonaws.services.securitytoken.model.transform.AssumeRoleResultStaxUnmarshaller.unmarshall(AssumeRoleResultStaxUnmarshaller.java:40)
at com.amazonaws.services.securitytoken.model.transform.AssumeRoleResultStaxUnmarshaller.unmarshall(AssumeRoleResultStaxUnmarshaller.java:28)
at com.amazonaws.http.StaxResponseHandler.handle(StaxResponseHandler.java:106)
at com.amazonaws.http.StaxResponseHandler.handle(StaxResponseHandler.java:42)
at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:69)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1726)
... 16 more

I think part of the problem here is the way you are configuring your AWSSecurityTokenService: the endpoint is set to the AWS console URL (console.aws.amazon.com/kendra/home?...), not to an STS endpoint such as sts.us-east-1.amazonaws.com. The console responds with an HTML page, and the SDK then fails while trying to unmarshal that HTML as the XML AssumeRole response; that is most likely what the ParseError about the "state" entity missing its ';' delimiter is telling you. The failure surfaces at this line of the stack trace you posted:
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRole(AWSSecurityTokenServiceClient.java:561)
Please refer to the AWS documentation example on how to assume an IAM role, get temporary credentials, and invoke an AWS service in Java (there, S3 is the service being called with the temporary credentials). You can use the same concept to invoke Kendra APIs: from that example, take the cue on how to build BasicSessionCredentials and use them to build a Kendra client, just as the AmazonS3 client was built with AmazonS3ClientBuilder. Once you have built the Kendra client, refer to the Kendra query example for how to query your index.
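Here is a minimal sketch of that flow, assuming the AWS SDK for Java v1 with the aws-java-sdk-sts and aws-java-sdk-kendra modules on the classpath; the role ARN, index ID, and session name are placeholders, not values from your account:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kendra.AWSkendra;
import com.amazonaws.services.kendra.AWSkendraClientBuilder;
import com.amazonaws.services.kendra.model.QueryRequest;
import com.amazonaws.services.kendra.model.QueryResult;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.AssumeRoleResult;
import com.amazonaws.services.securitytoken.model.Credentials;

public class KendraQuerySketch {
    public static void main(String[] args) {
        String roleArn = "arn:aws:iam::<my acct no>:role/Kendra-CloudwatchRole";

        // Let the builder pick the regional STS endpoint; do not point it at the console URL.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion("us-east-1")
                .build();

        // RoleSessionName is a required AssumeRole parameter; the duration must not
        // exceed the role's configured maximum session duration.
        AssumeRoleResult roleResult = sts.assumeRole(new AssumeRoleRequest()
                .withRoleArn(roleArn)
                .withRoleSessionName("kendra-query-session")
                .withDurationSeconds(3600));
        Credentials tmp = roleResult.getCredentials();

        // Wrap the temporary credentials, then build the Kendra client with them.
        BasicSessionCredentials sessionCreds = new BasicSessionCredentials(
                tmp.getAccessKeyId(), tmp.getSecretAccessKey(), tmp.getSessionToken());
        AWSkendra kendra = AWSkendraClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(sessionCreds))
                .withRegion("us-east-1")
                .build();

        // Query the existing index; "<index id>" is a placeholder for your index ID.
        QueryResult result = kendra.query(new QueryRequest()
                .withIndexId("<index id>")
                .withQueryText("example search"));
        result.getResultItems().forEach(item -> System.out.println(item.getDocumentId()));
    }
}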

Related

AWS SecretsManager works in Eclipse, can't connect to Service Endpoint in ColdFusion

I have the following class written in Java using Eclipse on my Amazon EC2 instance.
import java.nio.ByteBuffer;
import com.amazonaws.auth.*;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.secretsmanager.*;
import com.amazonaws.services.secretsmanager.model.*;

public class SMtest {

    public static void main(String[] args) {
        String x = getSecret();
        System.out.println(x);
    }

    public SMtest() {
    }

    public static String getSecret() {
        String secretName = "mysecret";
        String endpoint = "secretsmanager.us-east-1.amazonaws.com";
        String region = "us-east-1";
        String result = "";

        //BasicAWSCredentials awsCreds = new BasicAWSCredentials("mypublickey", "mysecretkey");
        AwsClientBuilder.EndpointConfiguration config = new AwsClientBuilder.EndpointConfiguration(endpoint, region);
        AWSSecretsManagerClientBuilder clientBuilder = AWSSecretsManagerClientBuilder.standard()
                .withCredentials(new InstanceProfileCredentialsProvider(false));
                //.withCredentials(new AWSStaticCredentialsProvider(awsCreds));
        clientBuilder.setEndpointConfiguration(config);
        AWSSecretsManager client = clientBuilder.build();

        String secret;
        ByteBuffer binarySecretData;
        GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
                .withSecretId(secretName).withVersionStage("AWSCURRENT");
        GetSecretValueResult getSecretValueResult = null;

        try {
            getSecretValueResult = client.getSecretValue(getSecretValueRequest);
        } catch (ResourceNotFoundException e) {
            System.out.println("The requested secret " + secretName + " was not found");
        } catch (InvalidRequestException e) {
            System.out.println("The request was invalid due to: " + e.getMessage());
        } catch (InvalidParameterException e) {
            System.out.println("The request had invalid params: " + e.getMessage());
        }

        // Return early on failure instead of falling through and dereferencing null.
        if (getSecretValueResult == null) {
            return result;
        }

        // Depending on whether the secret was a string or binary, one of these fields will be populated
        if (getSecretValueResult.getSecretString() != null) {
            secret = getSecretValueResult.getSecretString();
            result = secret;
            //System.out.println(secret);
        } else {
            binarySecretData = getSecretValueResult.getSecretBinary();
            result = binarySecretData.toString();
        }
        return result;
    }
}
When I execute it from within Eclipse, it works just fine. But when I compile the class to use it in ColdFusion (2021) on the same EC2 instance with the following code:
<cfscript>
obj = CreateObject("java","SMtest");
obj.init();
result = obj.getSecret();
</cfscript>
<cfoutput>#result#</cfoutput>
I get a "Failed to connect to service endpoint" error. I believe I have all the IAM credentials set up properly since it is working in straight Java. However, when I change the credentials to use the Basic Credentials with my AWS Public and Secret Key (shown in comments in the code above), it works in both Java and ColdFusion.
I created an AWS Policy that manages the Secrets Manager permissions. I also added AmazonEC2FullAccess. I also tried to create a VPC Endpoint, but this had no effect.
Why would it be working in Java and not in ColdFusion (which is based on Java) ? What roles/policies would I have to add to get it to work in ColdFusion when it is already working in Java?
STACK TRACE ADDED:
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1266)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:842)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:792)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.doInvoke(AWSSecretsManagerClient.java:2454)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.invoke(AWSSecretsManagerClient.java:2421)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.invoke(AWSSecretsManagerClient.java:2410)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.executeGetSecretValue(AWSSecretsManagerClient.java:943)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.getSecretValue(AWSSecretsManagerClient.java:912)
at SMtest.getSecret(SMtest.java:52)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at coldfusion.runtime.java.JavaProxy.invoke(JavaProxy.java:106)
at coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:4254)
at coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:4217)
at cfsmtest2ecfm1508275519.runPage(D:\ColdFusion2021\cfusion\wwwroot\smtest.cfm:4)
at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:257)
at coldfusion.tagext.lang.IncludeTag.handlePageInvoke(IncludeTag.java:749)
at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:578)
at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:573)
at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:43)
at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
at coldfusion.filter.PathFilter.invoke(PathFilter.java:162)
at coldfusion.filter.IpFilter.invoke(IpFilter.java:45)
at coldfusion.filter.LicenseFilter.invoke(LicenseFilter.java:30)
at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:97)
at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:81)
at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:60)
at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
at coldfusion.filter.CachingFilter.invoke(CachingFilter.java:62)
at coldfusion.CfmServlet.service(CfmServlet.java:231)
at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:311)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:228)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:46)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:47)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at coldfusion.inspect.weinre.MobileDeviceDomInspectionFilter.doFilter(MobileDeviceDomInspectionFilter.java:57)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:47)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:373)
at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:462)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1723)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.MalformedURLException: Unknown protocol: http
at java.base/java.net.URL.<init>(URL.java:708)
at java.base/java.net.URL.fromURI(URL.java:748)
at java.base/java.net.URI.toURL(URI.java:1139)
at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:83)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80)
... 80 more
Caused by: java.lang.IllegalStateException: Unknown protocol: http
at org.apache.felix.framework.URLHandlersStreamHandlerProxy.parseURL(URLHandlersStreamHandlerProxy.java:373)
at java.base/java.net.URL.<init>(URL.java:703)
... 84 more
After noticing that the bottom of the stack trace says "Unknown protocol: http" and "IllegalStateException: Unknown protocol", I changed the endpoint in my class above to:
String endpoint = "https://secretsmanager.us-east-1.amazonaws.com";
and I am running the web site via https now. It still works with no problem in Eclipse, and I am still getting the same error when using ColdFusion (i.e., when using a browser).
Since the basic credentials work and the error is not an access denial, there is likely no issue with your IAM setup. Instead, your process most likely failed to fetch the EC2 instance metadata: the Caused by entries in your stack trace ("MalformedURLException: Unknown protocol: http", thrown from the Felix OSGi URL handler that ColdFusion bundles) show the SDK failing on its call to the metadata endpoint before it ever reaches Secrets Manager, hence the "Failed to connect to service endpoint" error.
One way around this is to retrieve the access key and secret key manually by calling the instance metadata API yourself, for example using cfhttp to populate the variables.
Here is the example curl flow from the docs: retrieve /api/token and use it to retrieve /meta-data/iam/security-credentials/<rolename>.
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
Output:
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2017-05-17T15:09:54Z"
}
From here, if you still face any errors, at least it should be clearer to you which step has failed and why.
Just a note: AWS recommends caching the credentials until near expiry instead of querying for every transaction, to avoid throttling.
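If you would rather do the same two-step fetch from Java (which ColdFusion can then call, just like SMtest above), here is a minimal sketch; the role name s3access is the docs' example name, and Java 9+ is assumed for readAllBytes():
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ImdsFetchSketch {

    // One HTTP call against the instance metadata service, returning the body as text.
    static String call(String method, String url, String headerName, String headerValue) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod(method);
        conn.setRequestProperty(headerName, headerValue);
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String base = "http://169.254.169.254/latest";

        // Step 1: get an IMDSv2 session token.
        String token = call("PUT", base + "/api/token",
                "X-aws-ec2-metadata-token-ttl-seconds", "21600");

        // Step 2: get the temporary credentials for the instance role.
        String json = call("GET", base + "/meta-data/iam/security-credentials/s3access",
                "X-aws-ec2-metadata-token", token);
        System.out.println(json);

        // Parse AccessKeyId, SecretAccessKey and Token out of the JSON, then build
        // a BasicSessionCredentials and pass it to AWSSecretsManagerClientBuilder.
    }
}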

AWS unable to get query result because of ResourceNotFoundException

I'm trying to run a CloudWatch Logs Insights query with boto3, but I'm getting a ResourceNotFoundException.
import boto3

if __name__ == "__main__":
    client = boto3.client('logs')
    response = client.start_query(
        logGroupName='/aws/lambda/My-Stack-Name-SE349DJ',
        startTime=123,
        endTime=123,
        queryString="fields #message",
        limit=1
    )
I ran the code above, and the error message is as follows.
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the StartQuery operation: Log group '/aws/lambda/My-Stack-Name-SE349DJ' does not exist for account ID '11111111' (Service: AWSLogs; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: xxxxx-xxxx-xxx; Proxy: null)
Here is what I tested:
The log group exists. I verified it with Logs Insights on the AWS console, and I also tested after pasting the log group name exactly as it appears there.
I added backslashes to test whether '/' is the problem (e.g. '\/aws\/lambda\/My-Stack-Name-SE349DJ'), and an InvalidParameterException appears instead.
The AWS account has administrator access privileges on the log group.
I got the same error message when I tested with aws cli.
An error occurred (ResourceNotFoundException) when calling the StartQuery operation: Log group 'XXXXXXXXXXXXXX' does not exist for account ID '11111111' (Service: AWSLogs; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: xxxxx-xxxx-xxx; Proxy: null)
How can I solve this problem?
The reason I'm trying this in the first place is that I need to get more than 500,000 records from the filtered log group, but 10,000 per query is the maximum. I think it's better to pull the data out in slices by changing the start time and end time.
There is a good chance that certain time windows contain too much data, so I think it would be better to drive this with boto3 rather than doing it manually. Is there an easy way to extract more than 500,000 records, from the console or by some other method?
As @Marcin commented, it was because of the region configuration.
I added these lines before creating the AWS client:
from botocore.config import Config
...
my_config = Config(
    region_name='us-east-2',
)
...
client = boto3.client('logs', config=my_config)

Getting 400 Bad Request Error while creating S3 Batch Job from Java Code

As per the doc, I am trying to create a batch job from Java code.
I am able to create a job from the console with the same role and Lambda ARN, but from code I am getting a 400 Bad Request. Also, I don't see any error message, contrary to what this doc describes.
Here is my code snippet -
JobOperation jobOperation = new JobOperation().withLambdaInvoke(new LambdaInvokeOperation()
        .withFunctionArn("arn:aws:lambda:eu-west-1:<account_id>:function:s3BatchOperarationsPOCLambda"));
JobManifest manifest = new JobManifest()
        .withSpec(new JobManifestSpec().withFormat(JobManifestFormat.S3InventoryReport_CSV_20161130)
                .withFields(new String[] { "Bucket", "Key" }))
        .withLocation(new JobManifestLocation().withObjectArn("arn:aws:s3:::<bucket_name>/manifest.csv")
                .withETag("e55392fa1ad40a08e40b13b3c000a0aa"));
JobReport jobReport = new JobReport().withBucket(reportBucketName).withPrefix("testreport")
        .withFormat(JobReportFormat.Report_CSV_20180820).withEnabled(true).withReportScope("AllTasks");
AWSS3Control s3ControlClient = AWSS3ControlClientBuilder.standard().withRegion(Regions.US_WEST_1).build();
String roleArn = "arn:aws:iam::<account_id>:role/S3-Batch-Role";
String accountId = "<account_id>";
try {
    s3ControlClient.createJob(new CreateJobRequest().withAccountId(accountId).withOperation(jobOperation)
            .withManifest(manifest).withPriority(12).withRoleArn(roleArn).withReport(jobReport)
            .withClientRequestToken(uuid).withDescription("S3 job").withConfirmationRequired(false));
} catch (AmazonServiceException e) {
    // The call was transmitted successfully, but Amazon S3 couldn't process
    // it and returned an error response.
    e.printStackTrace();
} catch (SdkClientException e) {
    System.out.println("test2" + e.getMessage());
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    e.printStackTrace();
}
The role has full IAM and S3 Batch Operations permissions, and the Lambda has an access permission for S3.
The trust policy for Batch Operations is also defined.
Here is my error log -
(Service: AWSS3Control; Status Code: 400; Error Code: 400 Bad Request; Request ID: null; Proxy: null)
com.amazonaws.services.s3control.model.AWSS3ControlException: null (Service: AWSS3Control; Status Code: 400; Error Code: 400 Bad Request; Request ID: null; Proxy: null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1811)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1395)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1371)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3control.AWSS3ControlClient.doInvoke(AWSS3ControlClient.java:1532)
at com.amazonaws.services.s3control.AWSS3ControlClient.invoke(AWSS3ControlClient.java:1499)
at com.amazonaws.services.s3control.AWSS3ControlClient.invoke(AWSS3ControlClient.java:1488)
at com.amazonaws.services.s3control.AWSS3ControlClient.executeCreateJob(AWSS3ControlClient.java:265)
at com.amazonaws.services.s3control.AWSS3ControlClient.createJob(AWSS3ControlClient.java:236)
at com.code.platformintegrationsscheduler.handlers.test.createS3Job(test.java:68)
at com.code.platformintegrationsscheduler.handlers.test.main(test.java:27)
I was stuck with the same issue today, and after some debugging and trying the same operation from the CLI, I found that
new JobReport().withBucket(reportBucketName)
takes a bucket ARN instead of a bucket name.
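Under that assumption, the report object would be built with the ARN (a sketch reusing the variable names from the question):
// withBucket() wants the bucket ARN, e.g. "arn:aws:s3:::my-report-bucket",
// not the bare bucket name held in reportBucketName.
JobReport jobReport = new JobReport()
        .withBucket("arn:aws:s3:::" + reportBucketName)
        .withPrefix("testreport")
        .withFormat(JobReportFormat.Report_CSV_20180820)
        .withEnabled(true)
        .withReportScope("AllTasks");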
The actual issue might be different in your case. I suggest you serialize your request from code, try the same operation in the CLI, and compare the two requests.
AWS error messages are often not very helpful right when we actually need them.
I found the issue: it was related to the Gradle dependency versions. We need to make sure that all the AWS SDK modules use the same version.
In my case -
compile group: 'com.amazonaws', name: 'aws-java-sdk-dynamodb', version: '1.11.844'
compile group: 'com.amazonaws', name: 'aws-java-sdk-iam', version: '1.11.844'
compile group: 'com.amazonaws', name: 'aws-java-sdk-events', version: '1.11.844'
compile group: 'com.amazonaws', name: 'aws-java-sdk-s3', version: '1.11.844'
compile group: 'com.amazonaws', name: 'aws-java-sdk-batch', version: '1.11.844'
compile group: 'com.amazonaws', name: 'aws-java-sdk-s3control', version:'1.11.844'

Exception in thread "JavaFX Application Thread" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied

What is the problem and how do I fix this? The only other similar question on Stack Overflow did not help.
I am running the code from the aws-samples/aws-cognito-java-desktop-app repository on GitHub, in Eclipse IDE for Enterprise Java Developers Version: 2019-03 (4.11.0), Build id: 20190314-1200, as a Maven project.
I get the following error:
(The significant error message is "Access Denied".)
Exception in thread "JavaFX Application Thread" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: FA4BD704133C23A1; S3 Extended Request ID: xmTClQgfl5OVYUeQ45mIBkbe/CA1GvSVmM+VmpT9cwTMox+n+fry6GjB1OdhTpOyRFfmjLBwUQ8=), S3 Extended Request ID: xmTClQgfl5OVYUeQ45mIBkbe/CA1GvSVmM+VmpT9cwTMox+n+fry6GjB1OdhTpOyRFfmjLBwUQ8=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4319)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4266)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4260)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:926)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:932)
at com.amazonaws.sample.cognitoui.CognitoHelper.ListBucketsForUser(CognitoHelper.java:324)
at com.amazonaws.sample.cognitoui.MainForm.ShowUserBuckets(MainForm.java:31)
at com.amazonaws.sample.cognitoui.MainForm.lambda$1(MainForm.java:97)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:86)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:49)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Node.fireEvent(Node.java:8411)
at javafx.scene.control.Button.fire(Button.java:185)
at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:182)
at com.sun.javafx.scene.control.skin.BehaviorSkinBase$1.handle(BehaviorSkinBase.java:96)
at com.sun.javafx.scene.control.skin.BehaviorSkinBase$1.handle(BehaviorSkinBase.java:89)
at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
at javafx.event.Event.fireEvent(Event.java:198)
at javafx.scene.Scene$MouseHandler.process(Scene.java:3757)
at javafx.scene.Scene$MouseHandler.access$1500(Scene.java:3485)
at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1762)
at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2494)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:394)
at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:295)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$358(GlassViewEventHandler.java:432)
at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:389)
at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:431)
at com.sun.glass.ui.View.handleMouseEvent(View.java:555)
at com.sun.glass.ui.View.notifyMouse(View.java:937)
This is what is in the Console before the error message:
bucketslist = ===========Credentials Details.===========
Accesskey = ASIA2IPIEEKPEGGO3EW2
Secret = 25FIoW9eJPczpWKCyaaKHw2Iq+xhM3SdzxAsrslh
SessionToken = AgoJb3JpZ2luX2VjEBwaCXVzLWVhc3QtMSJGMEQCIHqf/tWxrF7qQJuAv7MUtVCQLcomUA3cgPS/rEEKaMi4AiBQxay1snMP98nPCt6LDZZEqVhIyi/3Xn56lkMc4iCRDCqmBQhlEAAaDDcwNTM5ODMxNzcyNiIM20llBNVbiMH2FXa+KoMFqDEQyC/l9JtcMKAg+OUyUPozec7Xfb7uzLPExCEp47lacWIg5DELHrmgCQHTAjFPthaZyyzt1JG0RCi4DCthacWb2TBvxo56e0hBJVLEXeSZ+LFwnNYyiidJrFs7yQxJQfuDYk4JIuY6AEfFm5R1ZfbQ0PJewx+hdAMdJIsnk8mJZ2BBWClGq+9TzPtRj/0pW5/iVGtv7NySKhUU3Cj4DWMAJsFEz6Zw+BWoNHp6EJRQa14cXkDBLvlhZTokGaC5AQD+SUOw5lXB/ScS0zbTcTqpySqoW6HknRIwph1qzWXjdgQPMppPxNGmmI3eI+Rt53ZC4mOVeKi6vSEdWIFRo0a4P7OMgdrJ2RoEI6+pIPuOkCDC/0BcS1K/W/ar7V0SRV5Ae5UnKoasRkTho2+B5juSJBJd9LH+LliVqxAGLdmVwC4AG51c6P530yYOQ7Njlc0HSloYC3oUBu8BcWIgG4JpA+m7s0ts7UVQSs/sbDksOmj8IMsg+oPtCUcnnPIQzO1CKcPMieB/BApLWKLSzLGfpz+GpRFRou42fvnfAtaH+EsHk8/+xuVxO1jzKT6Z4WF1k84r1zN7fO9UOE3exZCLzXhhQHsOyR69mb652N88OGjotxAVBJMfcGWeUYhYYNMHxQUz/lPMH5BxUWb50Qy5/XvJ8b/IH0OSTWhfS6lD6HsUit+c+C66KRAcMhB2ENOzb26LKtl4XyD0AQcBgSnx2rTfqi8zKIDkZSRgq3w09M6ngI4JXMRkHUwyQi3sNlheyB1l6amhqPeQPMWYlLYFgWtWON1nyvJg9Niq0WZvc+7qvFuhK7GRdIOeLY1kGJxqAXqZHiY584zpG1BEfxeEqzCu3tnoBTq1ASXqR+O/61OZNZ4TsL/hy7iD4JYMAdblU++/fi9DP4xNfwrv48q/tu2DrilR8kZzrzS9Eq+TVX68orsXhs1coCMgSRFRRHMQEYq0MpAmN3kKhQ0+M4QfqHkqkjKPu3LDd1kGA7uk1i/3bShnyaO9zV2JbcPBUVXne8AWPeo9e3kHgJepw9PklcJu9rS806zbiVz+lkLlnaBRiXuytgjYbObTv4SjZpGjmL2RO3L6LMT60+GDQcE=
============Bucket Lists===========
Hence I figured out that the error arises at the place in this code marked "// error here":
/**
 * This method returns the details of the user and bucket lists.
 *
 * @param credentials Credentials to be used for displaying buckets
 * @return
 */
String ListBucketsForUser(Credentials credentials) {
    BasicSessionCredentials awsCreds = new BasicSessionCredentials(credentials.getAccessKeyId(), credentials.getSecretKey(), credentials.getSessionToken());
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
            .withRegion(Regions.fromName(REGION))
            .build();
    StringBuilder bucketslist = new StringBuilder();
    bucketslist.append("===========Credentials Details.=========== \n");
    bucketslist.append("Accesskey = " + credentials.getAccessKeyId() + "\n");
    bucketslist.append("Secret = " + credentials.getSecretKey() + "\n");
    bucketslist.append("SessionToken = " + credentials.getSessionToken() + "\n");
    bucketslist.append("============Bucket Lists===========\n");
    // error here
    System.out.println("bucketslist = " + bucketslist.toString());
    for (Bucket bucket : s3Client.listBuckets()) {
        bucketslist.append(bucket.getName());
        bucketslist.append("\n");
        System.out.println(" - " + bucket.getName());
    }
    return bucketslist.toString();
}
The role I assigned to the federated identity did not have enough access to do what it needed to do; the listBuckets call fails with AccessDenied when the role lacks the s3:ListAllMyBuckets permission. I gave the role more access in the AWS IAM console.
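For reference, a minimal policy statement that would let the federated role list buckets (a sketch; your role may need more than this for the rest of the sample):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}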

S3A hadoop aws jar always return AccessDeniedException

Could anyone please help me figure out why I get the exception below? All I'm trying to do is read some data from a local file in my Spark program and write it to S3. I have the correct secret key and access key specified like this -
Do you think it's related to version mismatch of some library?
SparkConf conf = new SparkConf();
// add more spark related properties
AWSCredentials credentials = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
conf.set("spark.hadoop.fs.s3a.access.key", credentials.getAWSAccessKeyId());
conf.set("spark.hadoop.fs.s3a.secret.key", credentials.getAWSSecretKey());
The java code is plain vanilla -
protected void process() throws JobException {
    JavaRDD<String> linesRDD = _sparkContext.textFile(_jArgs.getFileLocation());
    linesRDD.saveAsTextFile("s3a://my.bucket/" + Math.random() + "final.txt");
}
This is my code and gradle.
Gradle
ext.libs = [
    aws: [
        lambda: 'com.amazonaws:aws-lambda-java-core:1.2.0',
        // The AWS SDK will dynamically import the X-Ray SDK to emit subsegments for downstream calls made by your
        // function
        //recorderCore: 'com.amazonaws:aws-xray-recorder-sdk-core:1.1.2',
        //recorderCoreAwsSdk: 'com.amazonaws:aws-xray-recorder-sdk-aws-sdk:1.1.2',
        //recorderCoreAwsSdkInstrumentor: 'com.amazonaws:aws-xray-recorder-sdk-aws-sdk-instrumentor:1.1.2',
        // https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk
        javaSDK: 'com.amazonaws:aws-java-sdk:1.11.311',
        recorderSDK: 'com.amazonaws:aws-java-sdk-dynamodb:1.11.311',
        // https://mvnrepository.com/artifact/com.amazonaws/aws-lambda-java-events
        lambdaEvents: 'com.amazonaws:aws-lambda-java-events:2.0.2',
        snsSDK: 'com.amazonaws:aws-java-sdk-sns:1.11.311',
        // https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-emr
        emr: 'com.amazonaws:aws-java-sdk-emr:1.11.311'
    ],
    //jodaTime: 'joda-time:joda-time:2.7',
    //guava: 'com.google.guava:guava:18.0',
    jCommander: 'com.beust:jcommander:1.71',
    //jackson: 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.8.8',
    jackson: 'com.fasterxml.jackson.core:jackson-databind:2.8.0',
    apacheCommons: [
        lang3: "org.apache.commons:commons-lang3:3.3.2",
    ],
    spark: [
        core: 'org.apache.spark:spark-core_2.11:2.3.0',
        hadoopAws: 'org.apache.hadoop:hadoop-aws:2.8.1',
        //hadoopClient: 'org.apache.hadoop:hadoop-client:2.8.1',
        //hadoopCommon: 'org.apache.hadoop:hadoop-common:2.8.1',
        jackson: 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.8.8'
    ]
]
Exception
2018-04-10 22:14:22.270 | ERROR | | | |c.f.d.p.s.SparkJobEntry-46
Exception found in job for file type : EMAIL
java.nio.file.AccessDeniedException: s3a://my.bucket/0.253592564392344final.txt: getFileStatus on
s3a://my.bucket/0.253592564392344final.txt:
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service:
Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID:
62622F7F27793DBA; S3 Extended Request ID: BHCZT6BSUP39CdFOLz0uxkJGPH1tPsChYl40a32bYglLImC6PQo+LFtBClnWLWbtArV/z1SOt68=), S3 Extended Request ID: BHCZT6BSUP39CdFOLz0uxkJGPH1tPsChYl40a32bYglLImC6PQo+LFtBClnWLWbtArV/z1SOt68=
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:158) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:101) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1568) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1436) ~[hadoop-common-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:2040) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131) ~[hadoop-mapreduce-client-core-2.6.5.jar:na]
at org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil.assertConf(SparkHadoopWriter.scala:283) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:71) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) ~[spark-core_2.11-2.3.0.jar:2.3.0]
Once you are working with the Hadoop Configuration classes directly, you need to strip out the spark.hadoop prefix, so just use fs.s3a.access.key, etc.
All of the options are defined in the class org.apache.hadoop.fs.s3a.Constants; if you reference them you'll avoid typos too.
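A small sketch of that, assuming a JavaSparkContext named _sparkContext as in the question and the credentials object from your snippet:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.Constants;

// Set the options on the Hadoop Configuration itself: no "spark.hadoop." prefix here.
Configuration hadoopConf = _sparkContext.hadoopConfiguration();
hadoopConf.set(Constants.ACCESS_KEY, credentials.getAWSAccessKeyId()); // "fs.s3a.access.key"
hadoopConf.set(Constants.SECRET_KEY, credentials.getAWSSecretKey());   // "fs.s3a.secret.key"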
One thing to consider: all the source for Spark and Hadoop is public, so there's nothing to stop you taking that stack trace, setting some breakpoints, and trying to run this in your IDE. That's what we normally do ourselves when things get bad.