AWS Query Can't save to a mutableList

I'm trying out AWS with Android Studio using Kotlin.
For the life of me, I cannot understand why my query results are not being saved into a local mutableList. When I output the list inside the while loop, everything is saved correctly, but once the query returns, my mutableList is empty. Any help would be appreciated. Thank you.
Code
var userContent: MutableList<MutableList<String>> = arrayListOf()

Amplify.DataStore.query(UserInfo::class.java,
    Where.matches(UserInfo.USER.eq("$myID")),
    { posts ->
        while (posts.hasNext()) {
            val post = posts.next()
            userContent.add(mutableListOf(post.content, post.username, post.password))
            Log.i("Testing1", "$userContent")
        }
    },
    { Log.e("Auth", "Error", it) }
)
Log.i("Testing2", "$userContent")

I see you are using Amplify, which is fine. However, as an Android developer, you now have another option: the new AWS SDK for Kotlin.
This SDK is built for use with Android Studio. Using it, you can build native Android apps that invoke AWS services such as Amazon DynamoDB.
Here is an AWS tutorial that walks you through creating an Android app that invokes Amazon DynamoDB and SNS.
Creating your first Native Android application using the AWS SDK for Kotlin
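As to why the list appears empty: Amplify's DataStore.query delivers its results asynchronously, so the result callback runs after the line with Log.i("Testing2", ...) has already executed. The same effect can be reproduced without any Amplify dependency; in this sketch, queryAsync is a hypothetical stand-in for the Amplify call:

```kotlin
import java.util.concurrent.CountDownLatch
import kotlin.concurrent.thread

// Hypothetical stand-in for Amplify.DataStore.query: it returns immediately
// and delivers its results later, on another thread.
fun queryAsync(onResult: (List<String>) -> Unit) {
    thread {
        Thread.sleep(100) // simulate query latency
        onResult(listOf("post1", "post2"))
    }
}

fun main() {
    val userContent = mutableListOf<String>()
    val done = CountDownLatch(1)

    queryAsync { posts ->
        userContent.addAll(posts) // runs later, once the "query" completes
        done.countDown()
    }

    // Runs before the callback fires: the list is still empty here,
    // just like the Log.i("Testing2", ...) line in the question.
    println("after the call: $userContent") // almost always prints []

    done.await() // wait for the callback before reading the results
    println("after the callback: $userContent") // [post1, post2]
}
```

The practical fix is therefore to consume userContent inside the callback (or to signal completion, as the latch does here), not on the line immediately after the query call.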

Related

Spring-Cloud-AWS vs AWS-SDK-Java 2

Ours is a Spring-Boot-based application. For integration with AWS SNS and SQS, we have a couple of options:
Use Spring-Cloud-AWS
Use AWS-SDK-Java 2
I wanted to know if there is any advantage in using one or the other.
When I ask AWS guys, they tell me that AWS SDK gets updated regularly and integration with SNS and SQS is not difficult. Hence, there is no need to integrate with Spring-Cloud-AWS.
I tried searching the Gitter channel for Spring Cloud and could not find any relevant information. The documentation does state that I can update the AWS SDK version, but it does not give any compelling reason for not using the AWS SDK directly.
If anyone has some insights, please share.
From the AWS Spring Team:
"From now on, our efforts focus on 3.0 - based on AWS SDK 2.0."
So, if you need AWS SDK 2.0, you probably want to go directly with the SDK.
https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available
For more on what's new on AWS Java SDK 2.0:
https://aws.amazon.com/blogs/developer/aws-sdk-for-java-2-0-developer-preview/
The main advantage over the AWS Java SDK is the Spring-style convenience and the head start you get by using the Spring project. As per the project documentation (https://cloud.spring.io/spring-cloud-aws/reference/html/#using-amazon-web-services):
Using the SDK, application developers still have to integrate the SDK
into their application with a considerable amount of infrastructure
related code. Spring Cloud AWS provides application developers already
integrated Spring-based modules to consume services and avoid
infrastructure related code as much as possible.

How to use realm with AWS serverless lambda?

So I'm currently using AWS serverless to make a createObject endpoint, which takes the values from the http body and uses the key-values to store an item in DynamoDB. I am doing this via AWS Lambda. We recently started using Realm (Realm.io) and we see that they have a Javascript library for node. Is it possible to invoke a realm function to store the same object, via AWS Lambda?
As I understand it, the Node SDK for Realm is only for the Professional and Enterprise editions of the Realm Object Server (ROS). The ROS can only be deployed on your own Linux instance.
Please see this for more details: https://realm.io/docs/realm-object-server/pe-ee/
Exactly as Clifton Labrum says, the Professional and Enterprise editions of the Realm Mobile Platform have a Node.js SDK to be used on the server to listen for changes in Realms.
At every change in a Realm, you will get an event, which you can then process as you like. For instance, you can store the objects in DynamoDB, or you can simply leave them in Realms.
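The listener flow described above can be sketched without any Realm dependency. Here a hypothetical ChangeFeed stands in for the stream of Realm change events, and a plain map stands in for the DynamoDB table:

```kotlin
// Illustrative sketch (plain Kotlin, no Realm or AWS dependency): a listener
// receives each change event and forwards it to another store.
class ChangeFeed<T> {
    private val listeners = mutableListOf<(T) -> Unit>()
    fun onChange(listener: (T) -> Unit) { listeners.add(listener) }
    fun publish(change: T) = listeners.forEach { it(change) }
}

fun main() {
    val feed = ChangeFeed<Pair<String, String>>()      // (id, payload) change events
    val dynamoStandIn = mutableMapOf<String, String>() // pretend DynamoDB table

    // Register the handler: every change gets written through to the store.
    feed.onChange { (id, payload) -> dynamoStandIn[id] = payload }

    feed.publish("user-1" to "{\"name\":\"Ada\"}")
    println(dynamoStandIn) // {user-1={"name":"Ada"}}
}
```

In the real setup, the handler body would call DynamoDB's put-item operation instead of writing to a map.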

Serverless compute on Windows in AWS

I've got a piece of code that I need to make available over the 'Net. It's a perfect fit for an AWS Lambda with an HTTP API on top - a stateless, side effect free, rather CPU intensive function, blob in, blob out. It's written in C#/.NET, but it's not pure .NET, it makes use of the UWP API, therefore requires Windows Server 2016.
AWS Lambdas only run on Linux hosts, even C# ones. Is there any way to deploy this piece in the Amazon cloud in a serverless manner, maybe something other than a Lambda? I know I can go with an EC2 VM, but this is exactly the kind of thing serverless architecture was invented for.
Lambda is the only option for serverless computing on AWS and Lambda functions run only on Linux machines.
If you need to run serverless functions on a Windows machine, try Azure Functions; that's the Lambda equivalent in the Microsoft cloud. I'm not sure whether it runs on Windows Server 2016 and couldn't find any reference to the underlying platform, but as it is a brand-new service, I would expect it to run on Microsoft's latest stack.
To confirm whether the platform is what you need, try this function:
using System.Linq;
using System.Management;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Get the OS friendly name
    // http://stackoverflow.com/questions/577634/how-to-get-the-friendly-os-version-name
    var caption = (from x in new ManagementObjectSearcher("SELECT Caption FROM Win32_OperatingSystem").Get().Cast<ManagementObject>()
                   select x.GetPropertyValue("Caption")).FirstOrDefault();
    string name = caption != null ? caption.ToString() : "Unknown";

    // the function response
    return req.CreateResponse(HttpStatusCode.OK, name);
}
I think you can achieve this with a combination of AWS CodeDeploy and AWS CodePipeline.
Refer to this article:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-windows.html
to learn how to deploy code via CodeDeploy. Then see this article:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html
to learn how to configure CodePipeline to call CodeDeploy and then execute your batch job on the created Windows machine (note: you will probably want to use S3 instead of GitHub, which CodePipeline supports).
I would consider bootstrapping the whole configuration via a script using the AWS CLI; that way you can clean up your resources easily:
aws codepipeline delete-pipeline --name "MyJob"
Of course, you can also configure the pipeline via the AWS web console and leave it configured to run your code on a regular basis.

Googlescript and AWS SDK

I want to interact with Amazon Web Services DynamoDB in a Google Sheet via code in a GoogleScript. However, I cannot figure out how to integrate the AWS SDK. I am hoping to avoid having to write a library to handle the integration via the AWS HTTP API, as there are JavaScript and Java SDKs available for the SDK. Help?
(I've done some pretty extensive Google and Stack Overflow searches. This is the closest thing I've found to an answer, but that is the Google App Engine, not the Google Apps Script.)
Thanks!
I just made a function that does the basic authentication for any api request you want to make. You still might have to get some of the headers in order but it does most of the hard work for you.
For example:
function myFunction() {
  AWS.init("MY_ACCESS_KEY", "MY_SECRET_KEY");
  var instanceXML = AWS.request('ec2', 'us-east-1', 'DescribeInstances', {"Version": "2015-10-01"});
  ...
}
I put it in a repo with some documentation for use. Here's the link: https://github.com/smithy545/aws-apps-scripts
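The core of the authentication such a helper performs is AWS Signature Version 4. Its signing-key derivation is a short chain of HMAC-SHA256 calls, sketched below in Kotlin purely for illustration (in Apps Script the same HMACs are available via Utilities.computeHmacSha256Signature):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// One HMAC-SHA256 step: MAC the UTF-8 bytes of `data` under `key`.
fun hmacSha256(key: ByteArray, data: String): ByteArray {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(key, "HmacSHA256"))
    return mac.doFinal(data.toByteArray(Charsets.UTF_8))
}

// AWS SigV4 signing-key derivation: chain the secret through the
// date, region, and service, ending with the literal "aws4_request".
fun signingKey(secretKey: String, dateStamp: String, region: String, service: String): ByteArray {
    val kDate = hmacSha256("AWS4$secretKey".toByteArray(Charsets.UTF_8), dateStamp)
    val kRegion = hmacSha256(kDate, region)
    val kService = hmacSha256(kRegion, service)
    return hmacSha256(kService, "aws4_request")
}
```

The resulting 32-byte key is then used to sign the canonical request string; the repos linked above handle that canonicalization for you.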
I recently added my own derivative of the "AWS via Google Apps Script" here: https://github.com/neilobremski/gas/blob/main/GasAWS.js
The JavaScript code here is a bit more modern and specifically uses the Utilities available in Google Apps Script for the SHA-256 stuff rather than the abandoned (but excellent!) Crypto-JS library.
Usage is the same except I moved the region parameter and have a global default. I did this because often resources are in the same region and repeatedly specifying it is a pain.
Responding in case anyone else finds this useful: one of the answers above gave me a good hint to get started, but I wasn't able to add a comment to it.
The steps in the tutorial below walk through setting up API Gateway, Lambda, and DynamoDB. Once they're set up, you can call your API Gateway endpoint directly with UrlFetch from GAS.
http://docs.aws.amazon.com/lambda/latest/dg/with-on-demand-https-example.html
The code at the AWS link is complete and needs no additional setup to work directly with URL Fetch. You may, however, want to enable security on the API Gateway.
On the GAS side:
var headers = {
  "x-api-key": AWS_KEY
};
var event = {};
event.operation = "create";
event.tableName = AWS_DB_NAME;
event.payload = { Item: yourDataObjwithKey }; // initialize payload before setting Item
var options = {
  "method": "post",
  "headers": headers,
  "payload": JSON.stringify(event)
};
var url = AWS_GW;
var response = UrlFetchApp.fetch(url, options);
I am about to implement this, and it looks like the best way is to use a Lambda function to query DynamoDB via API Gateway.
It's really not that hard if you know NodeJS.

How to view table data in DynamoDB

AWS Console seems to indicate my tables have some data from my test put_item() calls, but I would like to actually see the data. Is there a means to do this in the AWS Console? I've read about AWS Explorer, which can be installed as a plugin to Eclipse or Visual Studio, but I'm a PHP developer who doesn't use Eclipse, so it seems silly to install a whole IDE just to verify that the correct data is being entered.
How can I check the data in my DynamoDB tables?
[UPDATE]
Amazon just launched "Explore Table" for DynamoDB in the AWS Management Console.
If you are using Visual Studio or Eclipse, you can use the AWS Explorer to see all of your tables and data.
In Eclipse, the AWS Explorer shows your tables in a tree view; opening a table displays its items in a grid.