Using DeveloperCredentials for AWS S3 Client in C++

I am writing a C++ application on Windows using the AWS C++ SDK and need help with Developer Authenticated Identities in order to upload/download files to/from S3 in my application.
We have a backend application using Cognito to get the temporary credentials for AWS (IdentityID & OpenIDToken). I know the ProviderName and the IdentityPoolID.
Rather than describe what I've attempted, I believe it's easier to show.
I am trying to do the equivalent of "RefreshIdentity", which is available in the C# and Java SDKs and defines an IdentityState(IdentityID, ProviderName, OpenIDToken, fromCacheFlag).
Based on some documentation I found, I created a new class (MyInheritedCognitoIdentityClient) derived from CognitoIdentityClient which overrides the GetId & GetOpenIdToken functions to return the IdentityID and OpenIDToken received back from the server application. GetId is called, but GetOpenIdToken is not called as the documentation implies; instead there is a call to GetCredentialsForIdentity (which fails with NotAuthorizedException).
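For reference, here is roughly what my derived client looks like - a minimal sketch, assuming the virtual signatures of the SDK version I'm building against; _cognitoID and _openIdToken are the values received from our server application:
#include <aws/cognito-identity/CognitoIdentityClient.h>
#include <aws/cognito-identity/model/GetIdRequest.h>
#include <aws/cognito-identity/model/GetIdResult.h>
#include <aws/cognito-identity/model/GetOpenIdTokenRequest.h>
#include <aws/cognito-identity/model/GetOpenIdTokenResult.h>

class MyInheritedCognitoIdentityClient : public Aws::CognitoIdentity::CognitoIdentityClient
{
public:
    MyInheritedCognitoIdentityClient(const Aws::String& cognitoID,
                                     const Aws::String& openIdToken)
        : _cognitoID(cognitoID), _openIdToken(openIdToken) {}

    // Return the identity ID received from the backend instead of calling Cognito.
    Aws::CognitoIdentity::Model::GetIdOutcome GetId(
        const Aws::CognitoIdentity::Model::GetIdRequest& /*request*/) const override
    {
        Aws::CognitoIdentity::Model::GetIdResult result;
        result.SetIdentityId(_cognitoID);
        return Aws::CognitoIdentity::Model::GetIdOutcome(std::move(result));
    }

    // Return the OpenID token received from the backend; this is the override
    // I expected the credentials provider to call.
    Aws::CognitoIdentity::Model::GetOpenIdTokenOutcome GetOpenIdToken(
        const Aws::CognitoIdentity::Model::GetOpenIdTokenRequest& /*request*/) const override
    {
        Aws::CognitoIdentity::Model::GetOpenIdTokenResult result;
        result.SetIdentityId(_cognitoID);
        result.SetToken(_openIdToken);
        return Aws::CognitoIdentity::Model::GetOpenIdTokenOutcome(std::move(result));
    }

private:
    Aws::String _cognitoID;   // IdentityID from the server application
    Aws::String _openIdToken; // OpenIDToken from the server application
};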
Below is my code snippet:
{
    std::shared_ptr<MyInheritedPersistentCognitoIdentityProvider> identityProvider =
        std::make_shared<MyInheritedPersistentCognitoIdentityProvider>();
    identityProvider->setIdentityPool(myIdentityPoolID); // us-east-1:c373a2ca-b912-3839-a65c-8d4ce53d512e -> not real
    identityProvider->setAccountId(myProviderName);      // login.mycompany.net

    std::shared_ptr<MyInheritedCognitoIdentityClient> cognitoIdentityClient =
        std::make_shared<MyInheritedCognitoIdentityClient>(); // _cognitoID and _openIdToken

    std::shared_ptr<Aws::Auth::AWSCredentialsProvider> cognitoCachCredProvider =
        std::make_shared<Aws::Auth::CognitoCachingAuthenticatedCredentialsProvider>(identityProvider, cognitoIdentityClient);

    Aws::S3::S3Client s3Client(cognitoCachCredProvider);

    /* attempt to upload/download here */
}

You need to call GetCredentialsForIdentity server-side; otherwise anyone can log in to your self-created provider (I'm assuming login.mycompany.net is a custom provider).
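A minimal sketch of that server-side exchange using the AWS C++ SDK (assuming identityId and openIdToken come from your backend's GetOpenIdTokenForDeveloperIdentity call; developer-authenticated tokens use the fixed logins key "cognito-identity.amazonaws.com"):
#include <aws/core/auth/AWSCredentials.h>
#include <aws/cognito-identity/CognitoIdentityClient.h>
#include <aws/cognito-identity/model/GetCredentialsForIdentityRequest.h>

// Exchange the OpenID token for temporary AWS credentials on the server,
// then hand only the resulting credentials to the client application.
// Assumes Aws::InitAPI has already been called.
Aws::Auth::AWSCredentials GetTempCredentials(const Aws::String& identityId,
                                             const Aws::String& openIdToken)
{
    Aws::CognitoIdentity::CognitoIdentityClient client;

    Aws::CognitoIdentity::Model::GetCredentialsForIdentityRequest request;
    request.SetIdentityId(identityId);
    request.AddLogins("cognito-identity.amazonaws.com", openIdToken);

    auto outcome = client.GetCredentialsForIdentity(request);
    if (!outcome.IsSuccess())
    {
        return Aws::Auth::AWSCredentials(); // empty credentials on failure
    }

    const auto& c = outcome.GetResult().GetCredentials();
    return Aws::Auth::AWSCredentials(c.GetAccessKeyId(), c.GetSecretKey(), c.GetSessionToken());
}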

Related

How to build request handler class in lambda function for a Springboot CRUD Application?

I am trying to invoke a Lambda function via the new function URLs feature (https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html), which doesn't require any trigger events.
I am able to get the output for a simple Java application using the approach written in the documentation, using Java as the runtime environment and passing a JAR file from my local machine.
Now I want to test the same Lambda functionality for a Spring Boot application that has RESTful endpoints for GET, POST, PUT, and DELETE for different CRUD operations, connecting to a MySQL DB. The problem I am facing is how to tell the handler class to route to these endpoints and trigger the Lambda functions.
Any help/suggestions appreciated here. Thanks in Advance !
It probably won't work with the function URLs feature, but it allows handling Lambda events similarly to standard Spring web requests.
You can check this one: https://github.com/MelonProjectCom/lambda-http-router
Example usage:
@PathPostMapping("/example/test")
public String example(@Body String body) {
    return "Hello World! - body: " + body;
}

Is it possible to connect to the Google IOTCore MQTT Bridge via Javascript?

I've been trying to use the JavaScript version of the Eclipse Paho MQTT client to access the Google IoT Core MQTT Bridge, as suggested here:
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
However, whatever I do, any attempt to connect with known good credentials (working with other clients) results in this connection error:
errorCode: 7, errorMessage: "AMQJS0007E Socket error:undefined."
Not much to go on there, so I'm wondering if anyone has ever been successful connecting to the MQTT Bridge via Javascript with Eclipse Paho, the client implementation suggested by Google in their documentation.
I've gone through their troubleshooting steps, and things seem to be on the up and up, so no help there either.
https://cloud.google.com/iot/docs/troubleshooting
I have noticed that in their docs they have sample code for Java/Python, etc, but not Javascript, so I'm wondering if it's simply not supported and their documentation just fails to mention as such.
I've simplified my code to just use the 'Hello World' example in the Paho documentation, and as far as I can tell I've done things correctly (including using my device path as the ClientID, the JWT token as the password, specifying an 'unused' userName field and explicitly requiring MQTT v3.1.1).
In the meantime I'm falling back to polling via their HTTP bridge, but that has obvious latency and network traffic shortcomings.
// Create a client instance
client = new Paho.MQTT.Client("mqtt.googleapis.com", Number(8883), "projects/[my-project-id]/locations/us-central1/registries/[my registry name]/devices/[my device id]");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({
    mqttVersion: 4, // maps to MQTT v3.1.1, required by IoT Core
    onSuccess: onConnect,
    onFailure: onFailure,
    userName: 'unused', // suggested by Google for this field
    password: '[My Confirmed Working JWT Token]' // working JWT token
});

// called when the connection attempt fails
function onFailure(resp) {
    console.log(resp);
}

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
I'm a Googler (but I don't work in Cloud IoT).
Your code looks good to me and it should work. I will try it for myself this evening or tomorrow and report back to you.
I've spent the past day working on a Golang version of the samples published on Google's documentation. Like you, I was disappointed to not see all Google's regular languages covered by samples.
Are you running the code from a browser or is it running on Node.JS?
Do you have a package.json (if Node) that you would share too please?
Update
Here's a Node.js (JavaScript, but non-browser) sample that connects to Cloud IoT, subscribes to /devices/${DEVICE}/config and publishes to /devices/${DEVICE}/events.
https://gist.github.com/DazWilkin/65ad8890d5f58eae9612632d594af2de
Place all the files in the same directory
In index.js, replace the values for the location of Google's CA and your key
Replace the [[YOUR-X]] values in config.json
Run "npm install" to pull the packages
Run "node index.js"
You should be able to pull messages from the Pub/Sub subscription and you should be able to send config messages to the device.
Short answer is no. Google Cloud IoT Core doesn't support WebSockets.
Browser-based JavaScript MQTT libraries all use WebSockets, because in-browser JavaScript is restricted to performing HTTP requests and WebSocket connections only.

What's the best way to store token signing certificate for an AWS web app?

I am using IdentityServer4 with .NET Core 2.0 on AWS's ElasticBeanstalk. I have a certificate for signing tokens. What's the best way to store this certificate and retrieve it from the application? Should I just stick it with the application files? Throw it in an environment variable somehow?
Edit: just to be clear, this is a token signing certificate, not an SSL certificate.
I don't really like the term 'token signing certificate' because it sounds so benign. What you have is a private key (as part of the certificate), and everyone knows you should secure your private keys!
I wouldn't store this in your application files. If someone gets your source code, they shouldn't also get the keys to your sensitive data (if someone has your signing cert, they can generate any token they like and pretend to be any of your users).
I would consider storing the certificate in the AWS Parameter Store. You could paste the certificate into a parameter, which can be encrypted at rest. You then lock down the parameter with an AWS policy so only admins and the application can get the cert - your naughty devs don't need it! Your application would pull the parameter string when needed and turn it into your certificate object.
This is how I store secrets in my application. I can provide more examples/details if required.
Edit -- This was the final result from Stu's guidance
The project needs two AWS packages from NuGet:
AWSSDK.Extensions.NETCORE.Setup
AWSSDK.SimpleSystemsManagement
Create two parameters in the AWS SSM Parameter Store:
A plain string named /MyApp/Staging/SigningCertificate whose value is a Base64-encoded .pfx file
An encrypted string named /MyApp/Staging/SigningCertificateSecret whose value is the password to the above .pfx file
This is the relevant code:
// In Startup class
private X509Certificate2 GetSigningCertificate()
{
    // Configuration is the IConfiguration built by the WebHost in my Program.cs and injected into the Startup constructor
    var awsOptions = Configuration.GetAWSOptions();
    var ssmClient = awsOptions.CreateServiceClient<IAmazonSimpleSystemsManagement>();

    // This is blocking because it is called during synchronous startup operations of the WebHost -- Startup.ConfigureServices()
    var res = ssmClient.GetParametersByPathAsync(new Amazon.SimpleSystemsManagement.Model.GetParametersByPathRequest()
    {
        Path = "/MyApp/Staging",
        WithDecryption = true
    }).GetAwaiter().GetResult();

    // Decode the certificate
    var base64EncodedCert = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificate")?.Value;
    var certificatePassword = res.Parameters.Find(p => p.Name == "/MyApp/Staging/SigningCertificateSecret")?.Value;
    byte[] decodedPfxBytes = Convert.FromBase64String(base64EncodedCert);
    return new X509Certificate2(decodedPfxBytes, certificatePassword);
}

public void ConfigureServices(IServiceCollection services)
{
    // ...
    var identityServerBuilder = services.AddIdentityServer();
    var signingCertificate = GetSigningCertificate();
    identityServerBuilder.AddSigningCredential(signingCertificate);
    // ...
}
Last, you may need to set an IAM role and/or policy to your EC2 instance(s) that gives access to these SSM parameters.
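For reference, a minimal policy sketch (the region, account ID, and key ID below are placeholders; the kms:Decrypt statement is only needed if the parameter is encrypted with a customer-managed KMS key):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParametersByPath",
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/MyApp/Staging*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/[your-key-id]"
    }
  ]
}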
Edit: I have been moving my web application SSL termination from my load balancer to my elastic beanstalk instance this week. This requires storing my private key in S3. Details from AWS here: Storing Private Keys Securely in Amazon S3

Does AWS CPP S3 SDK support "Transfer acceleration"

I enabled "Transfer acceleration" on my bucket. But I dont see any improvement in speed of Upload in my C++ application. I have waited for more than 20 minutes that is mentioned in AWS Documentation.
Does the SDK support "Transfer acceleration" by default or is there a run time flag or compiler flag? I did not spot anything in the SDK code.
thanks
Currently, there isn't a configuration option that simply turns on transfer acceleration. You can, however, use the endpoint override in the client configuration to set the accelerated endpoint.
What I did to enable a (working) transfer acceleration:
Set "Transfer Acceleration" to enabled in the bucket configuration in the AWS console.
Add the s3:PutAccelerateConfiguration permission to the IAM user that I use inside my C++ application.
Add the following to the S3 transfer configuration (bucket_ is your bucket name; the final URL must match the one shown in the AWS console under "Transfer Acceleration"):
Aws::Client::ClientConfiguration config;
/* other configuration options */
config.endpointOverride = bucket_ + ".s3-accelerate.amazonaws.com";
Ask the bucket for acceleration before the transfer (see the PutBucketAccelerateConfiguration docs):
auto s3Client = Aws::MakeShared<Aws::S3::S3Client>("Uploader",
    Aws::Auth::AWSCredentials(id_, key_), config);
Aws::S3::Model::PutBucketAccelerateConfigurationRequest bucket_accel;
bucket_accel.SetAccelerateConfiguration(
    Aws::S3::Model::AccelerateConfiguration().WithStatus(
        Aws::S3::Model::BucketAccelerateStatus::Enabled));
bucket_accel.SetBucket(bucket_);
s3Client->PutBucketAccelerateConfiguration(bucket_accel);
You can check in the detailed AWS SDK logs that your code is using the accelerated endpoint, and also that before the transfer starts there is a call to /?accelerate.
What worked for me:
Enabling S3 Transfer Acceleration within AWS console
When configuring the client, use only the accelerated endpoint:
clientConfig->endpointOverride = "s3-accelerate.amazonaws.com";
@gabry - your solution was extremely close. I think the reason it wasn't working for me was SDK changes since it was originally posted, as the change is relatively small. Or maybe it's because I am constructing put object templates for requests used with the transfer manager.
Looking through the logs (Debug level), I saw that the SDK automatically concatenates the bucket used in transferManager::UploadFile() with the overridden endpoint. I was getting unresolved-host errors, as the requested host looked like:
[DEBUG] host: myBucket.myBucket.s3-accelerate.amazonaws.com
This way I could still keep the same S3_BUCKET macro name, while only selectively overriding the endpoint when instantiating a new configuration for upload.
e.g.
// ...
auto putTemplate = new Aws::S3::Model::PutObjectRequest();
putTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->putObjectTemplate = *putTemplate;

auto multiTemplate = new Aws::S3::Model::CreateMultipartUploadRequest();
multiTemplate->SetStorageClass(STORAGE_CLASS);
transferConfig->createMultipartUploadTemplate = *multiTemplate;

transferMgr = Aws::Transfer::TransferManager::Create(*transferConfig);
auto transferHandle = transferMgr->UploadFile(localFile, S3_BUCKET, s3File);
// ...

How to invoke an AWS Lambda function asynchronously

Does anyone know the current and correct way to invoke Amazon AWS Lambda functions asynchronously instead of synchronously?
The InvokeAsync API in the AWS Java SDK is still available but marked as deprecated, and they suggest you use the Invoke API. I can't figure out why they would be forcing us to use sync. I have a web frontend that dispatches some batch jobs. I can't expect the frontend to keep a connection open for several minutes while it waits for the response (which is actually e-mailed to the user after about 4-5 minutes of processing).
Ideally I'm trying to figure out how to do this with their API Endpoints rather than the Java SDK because the environment (GAE) that I'm running my backend in doesn't support AWS's use of HttpClient.
I'm looking at the latest API docs here, and it looks like only AWSLambdaAsyncClient.invokeAsyncAsync() is deprecated. The AWSLambdaAsyncClient.invokeAsync() method is not marked as deprecated. It looks like they are just doing some code cleanup by removing the need for the InvokeAsyncRequest and InvokeAsyncResult classes and the extra invokeAsyncAsync() methods.
You should be able to use the AWSLambdaAsyncClient.invokeAsync() method which uses InvokeRequest and returns InvokeResult. You might have to set the InvocationType on the InvokeRequest to InvocationType.Event. It's not clear if that's needed if you are using the Async client.
Regarding your second question about calling Lambda functions asynchronously without using the SDK, I would look into using API Gateway as a service proxy. This is the recommended way to expose Lambda functions for asynchronous calls.
The below code can be used to invoke the Lambda asynchronously from another Lambda
AWSLambdaAsyncClient client = new AWSLambdaAsyncClient();
client.withRegion(Regions.fromName(region));
InvokeRequest request = new InvokeRequest();
request.setInvocationType("Event");
request.withFunctionName(functionName).withPayload(payload);
InvokeResult invoke = client.invoke(request);
The approach given in the accepted answer is now deprecated. The answer given by the user @dassum is the approach to follow, but that answer lacks a bit of explanation.
When creating the InvokeRequest, set the InvocationType as "Event" for asynchronous invocation and "RequestResponse" for synchronous invocation.
AWSLambda lambda = /* create your lambda client here */;
lambda.invoke(new InvokeRequest()
        .withFunctionName(LAMBDA_FUNCTION_NAME)
        .withInvocationType(InvocationType.Event) // asynchronous
        .withPayload(payload));
Reference to docs:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambda.html#invoke-com.amazonaws.services.lambda.model.InvokeRequest-
You can try the following:
YourCustomRequestBean request = new YourCustomRequestBean();
request.setData1(data1);
request.setData2(data2);
request.setData3(data3);
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setSocketTimeout(xxxx);
clientConfiguration.setRequestTimeout(xxxx);
ObjectMapper objectMapper = new ObjectMapper();
String jsonStr = objectMapper.writeValueAsString(request);
AWSLambdaAsync awsLambdaAsync = AWSLambdaAsyncClientBuilder.standard()
        .withRegion(<mention your region here>)
        .withClientConfiguration(clientConfiguration)
        .build();
InvokeRequest invokeRequest = new InvokeRequest()
.withFunctionName(lambda function urn)
.withPayload(jsonStr);
awsLambdaAsync.invokeAsync(invokeRequest);