In the AWS SDK for Java (https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/kms/package-summary.html), KMS has two different types of clients: a regular client and a client builder. What is the purpose of each? When do you choose one over the other?
I'm trying to implement envelope encryption using KMS. I want to be able to hit the KMS endpoint and encrypt the payload. Which client library should I be using?
There is only one client type: KmsClient.
It can be created in two ways:
Using the KmsClientBuilder returned by KmsClient.builder() to modify properties and ultimately call .build() for your customised version of the client. This KmsClientBuilder is an instance of DefaultKmsClientBuilder, which is currently the only class that implements the builder interface.
Using the KmsClient returned by KmsClient.create(), which is exactly equivalent to (and a shortcut for) new DefaultKmsClientBuilder().build(). This method returns a client with the region and credentials already loaded from the respective default provider chains, for applications that don't require further customisation.
This is how the above looks in code:
final KmsClient defaultKmsClient = KmsClient.create();
final KmsClient alsoDefaultKmsClientButLonger = KmsClient.builder().build();
final KmsClient customisedKmsClient = KmsClient.builder()
        .region(...)
        .credentialsProvider(...)
        .httpClient(...)
        .endpointOverride(...)
        ...
        .build();
In conclusion, use KmsClient.create() if you do not require any particular configuration, as the default region and credentials should be sufficient in most cases.
If not, customise the client via an instance of the builder (which can only be obtained via the KmsClient.builder() method, since KmsClientBuilder is an interface).
They are not 'different': the builder is ultimately what creates the client.
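As for the envelope-encryption part of the question: the plain KmsClient is all you need. Here is a minimal sketch of the usual pattern, assuming a placeholder key id ("my-key-id") and AES-GCM for the local encryption step; put it in a method that declares throws Exception:

import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DataKeySpec;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyResponse;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

final byte[] payload = "my payload".getBytes(StandardCharsets.UTF_8);

try (KmsClient kms = KmsClient.create()) {
    // Ask KMS for a fresh data key under your CMK ("my-key-id" is a placeholder)
    final GenerateDataKeyResponse dataKey = kms.generateDataKey(GenerateDataKeyRequest.builder()
            .keyId("my-key-id")
            .keySpec(DataKeySpec.AES_256)
            .build());

    // Encrypt the payload locally with the plaintext copy of the data key...
    final byte[] iv = new byte[12];
    new SecureRandom().nextBytes(iv);
    final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE,
            new SecretKeySpec(dataKey.plaintext().asByteArray(), "AES"),
            new GCMParameterSpec(128, iv));
    final byte[] ciphertext = cipher.doFinal(payload);

    // ...then persist only the encrypted copy of the key alongside the ciphertext.
    final byte[] encryptedDataKey = dataKey.ciphertextBlob().asByteArray();
}

To decrypt later, call kms.decrypt(...) on the stored encrypted data key to recover the plaintext key, then reverse the local step.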
We are working on FHIR (Fast Healthcare Interoperability Resources).
We have followed "FHIR Works on AWS" and deployed the CloudFormation template given by AWS in our AWS environment. Following is the template that we have deployed:
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific/customized IDs as the primary key in the server.
Problem: the server is not allowing us to override or maintain client-specific (customized) IDs as the primary key. In fact, at runtime it generates its own IDs and ignores the IDs provided by us.
Could you please let us know if there is any way to post a FHIR resource with client-specific IDs into the FHIR server (DynamoDB)?
We have observed that by using a "PUT" call (https://hl7.org/fhir/http.html#upsert) we might be able to create the resource with customized IDs as primary keys, but there is a precondition that the "CapabilityStatement.rest.resource.updateCreate" flag be set to "true".
Is there any way to update the "CapabilityStatement.rest.resource.updateCreate" flag through the AWS console or by any manual process?
The FHIR spec allows you to define your own IDs when using "update as create". This is when you create a new resource in the server, but use a PUT (update) request to the ID you want to create, such as Patient/1, instead of a POST (create) request to the resource URL. The server should return a 201 Created status instead of 200 OK. For more information see https://hl7.org/fhir/http.html#upsert
Not every FHIR server supports this, but if AWS does, this is likely how it would work. The field in the CapabilityStatement for this feature is CapabilityStatement.rest.resource.updateCreate.
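For illustration, an update-as-create request would look something like the sketch below (the server URL, bearer token, and ID are placeholders, the server must have updateCreate enabled, and the code belongs in a method that declares throws Exception):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// PUT the resource to the ID you want to create, rather than POSTing to /Patient.
// The URL, bearer token, and ID below are placeholders.
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://my-fhir-server.example.com/Patient/my-custom-id"))
        .header("Content-Type", "application/fhir+json")
        .header("Authorization", "Bearer <access-token>")
        .PUT(HttpRequest.BodyPublishers.ofString(
                "{\"resourceType\": \"Patient\", \"id\": \"my-custom-id\"}"))
        .build();

HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
// 201 Created => the resource was created with your ID (update-as-create)
// 200 OK      => an existing resource at that ID was updated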
EDIT:
This is possible by modifying the parameters passed to the DynamoDbDataService constructor in the deployment repo's src/config.ts.
By default, supportUpdateCreate, the second parameter, is set to false:
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, false, { enableMultiTenancy });
but you can set it to true to enable this functionality:
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, true, { enableMultiTenancy });
I ran into issues when I implemented a custom domain on my AWS API and generated the Android SDK. Now when I make authenticated calls to my API, the SDK shows an error as follows:
Region isn't specified and can't be deduced from endpoint
What should I do to resolve this issue? I am sure it's due to the custom domain implementation, because if I remove the custom domain mapping and then generate the SDK, all calls work again.
Since you use a custom domain, the region isn't part of the endpoint; therefore you have to provide the region to the ApiClientFactory explicitly.
Something like:
ApiClientFactory f = new ApiClientFactory()
.credentialsProvider(credentialsProvider)
.region("us-east-1") // or whatever region you have :)
.endpoint("https://myendpoint");
I am completely new to Amazon Web Services; however, I did get an account and I am able to browse our list of servers. I am trying to create a database backup programmatically using .NET. I have installed the AWS SDK for .NET, and I have built and run the sample Empty console program.
I can see that I can create an instance of the RDS service with the following line:
AmazonRDS rds = AWSClientFactory.CreateAmazonRDSClient(RegionEndpoint.USEast1);
However, I notice that the rds.CreateDBSnapshot(); needs a request object but I don't see anything like CreateDBSnapshotRequest in the reference .dll, can anyone help with a working example?
Like you said, CreateDBSnapshotRequest is the parameter you have to pass to this function.
CreateDBSnapshotRequest is defined in the Amazon.RDS.Model namespace within the AWSSDK.dll assembly (version 1.5.25.0)
Within CreateDBSnapshotRequest you must pass the DB Instance Identifier (for example mydbinstance-1) that you defined when you invoked CreateDBInstance (or one of its related methods), and the identifier for the snapshot you wish to generate (for example my-snapshot-id) for this DB Instance.
edit / example
Well, there are a couple of ways to achieve this; here's one example. Hope it clears up your doubts:
using Amazon.RDS;
using Amazon.RDS.Model;
...
...
// gets the credentials from the default configuration
AmazonRDS rdsClient = AWSClientFactory.CreateAmazonRDSClient();
CreateDBSnapshotRequest dbSnapshotRequest = new CreateDBSnapshotRequest();
dbSnapshotRequest.DBInstanceIdentifier = "my-oracle-instance";
dbSnapshotRequest.DBSnapshotIdentifier = "daily-snapshot";
rdsClient.CreateDBSnapshot(dbSnapshotRequest);
Don't forget that the DB Instance (in the example, my-oracle-instance) must exist (duh :) and must be in the available state.
How do I specify a key pair using the AWS Java SDK when creating a job flow? I need to specify the key pair so that I can later SSH into the master node.
I use the RunJobFlowRequest class, but it does not have a way to specify the key pair. The RunInstancesRequest class provides an API (setKeyName) for this, but I want to specifically create a Job Flow.
I know how to create a job flow using the console thereby specifying the key pair. But I'm looking to automate this so I would like to figure out how to do this with the Java SDK.
thanks
Check out the setInstances method on RunJobFlowRequest:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/index.html
That method allows you to pass in a JobFlowInstancesConfig object:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/JobFlowInstancesConfig.html
Inside that JobFlowInstancesConfig object, you can use the setEc2KeyName method to specify which EC2 key pair to enable for logging in to the instances as the hadoop user.
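Putting that together, here is a minimal sketch of the approach, assuming SDK for Java v1 and placeholder values for the credentials, key pair name, and instance settings:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;

// "ACCESS_KEY" / "SECRET_KEY" are placeholders for your credentials
AmazonElasticMapReduce emr = new AmazonElasticMapReduceClient(
        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

// "my-key-pair" is the name of an existing EC2 key pair in the same region
JobFlowInstancesConfig instances = new JobFlowInstancesConfig()
        .withEc2KeyName("my-key-pair")
        .withInstanceCount(3)
        .withMasterInstanceType("m1.small")
        .withSlaveInstanceType("m1.small")
        .withKeepJobFlowAliveWhenNoSteps(true);

RunJobFlowRequest request = new RunJobFlowRequest()
        .withName("my-job-flow")
        .withInstances(instances);

RunJobFlowResult result = emr.runJobFlow(request);

Once the master node is up, you can SSH in with the private key that matches "my-key-pair".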