DynamoDB + Flutter - amazon-web-services

I am trying to create an app that uses AWS services. I already use the Cognito plugin for Flutter, but I can't get it to work with DynamoDB. Should I use a Lambda function and point to it, or is it possible to get data from a table directly from Flutter? If so, which URL should I use?
I am new to AWS services and don't know whether it is possible to access a DynamoDB table with a URL, or whether I should just use a Lambda function.

Since this is kind of an open-ended question and you mentioned Lambdas, I would suggest checking out the Serverless Framework. They have a couple of template applications in various languages/frameworks. Serverless makes it really easy to spin up Lambdas behind an API Gateway, and you can start with the default proxy+ resource. You can also define DynamoDB tables to be auto-created/destroyed when you deploy/destroy your serverless application. When you successfully deploy using the command 'serverless deploy', it will output the URL of your API Gateway, which will trigger your Lambda seamlessly.
Then once you have a basic "hello-world" type API hosted on AWS, you can just follow the docs for how to set up the DynamoDB library/SDK for your given framework/language.
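For example, here is a rough TypeScript sketch (not the framework's own template) of what the Lambda behind that API Gateway URL might look like once the DynamoDB SDK is wired in; the table name and key shape are placeholders:
// Hedged sketch only: a Node/TypeScript Lambda handler reading one item from
// DynamoDB. 'my-table' and the 'id' key are placeholders for your own schema.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Path parameters arrive on the event; the parameter name depends on your route.
  const id = event.pathParameters?.id ?? '';
  const result = await ddb.send(new GetCommand({
    TableName: 'my-table', // placeholder table name
    Key: { id },
  }));
  return { statusCode: 200, body: JSON.stringify(result.Item ?? {}) };
};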
Let me know if you have any questions!
PS: Later on, I would also recommend using an API Gateway Authorizer against your Cognito User Pool. Since you already have auth in the Flutter app, all you have to do is pass through the token. The Authorizer can also be easily set up via the Serverless Framework. Then your API will be authenticated at the Gateway level, leaving AWS to do all the hard work :)

If you want to read directly from DynamoDB, it is actually pretty easy.
First add this package to your project.
Then create the models you want to read and write, along with conversion methods.
class Parent {
  late String name;
  late List<Child> children;

  Parent.fromDBValue(Map<String, AttributeValue> dbValue) {
    name = dbValue["name"]!.s!;
    children = dbValue["children"]!.l!.map((e) => Child.fromDBValue(e.m!)).toList();
  }

  Map<String, AttributeValue> toDBValue() {
    Map<String, AttributeValue> dbMap = Map();
    dbMap["name"] = AttributeValue(s: name);
    dbMap["children"] = AttributeValue(
        l: children.map((e) => AttributeValue(m: e.toDBValue())).toList());
    return dbMap;
  }
}
(AttributeValue comes from the package)
Then you can consume the DynamoDB API as per normal.
Create a Dynamo service:
class DynamoService {
  final service = DynamoDB(
      region: 'af-south-1',
      credentials: AwsClientCredentials(
          accessKey: "someAccessKey",
          secretKey: "somesecretkey"));

  Future<List<Map<String, AttributeValue>>?> getAll(
      {required String tableName}) async {
    var result = await service.scan(tableName: tableName);
    return result.items;
  }

  Future insertNewItem(Map<String, AttributeValue> dbData, String tableName) async {
    await service.putItem(item: dbData, tableName: tableName);
  }
}
Then you can convert when getting all the data from DynamoDB:
Future<List<Parent>> getAllParents() async {
  List<Map<String, AttributeValue>>? parents =
      await dynamoService.getAll(tableName: "parents");
  return parents!.map((e) => Parent.fromDBValue(e)).toList();
}
You can check all Dynamo operations from here

Related

Multiple Firestore connection

I need help tackling a scenario where I need to connect to multiple Firestores in different Google Cloud projects.
Right now, I am using NestJS to retrieve data from my Firestore, connecting to it using a JSON key generated from a Service Account.
I am planning to make this primary Firestore store data that tells the app which database to connect to. However, I don't know how to do the switching of service accounts/JSON keys, since, from what I understand so far, one JSON key is for one Firestore. I also think it's not good practice to store those JSON key files.
What are my possible options here?
You can use Secret Manager to store your Firestore configurations. To start:
Create a secret by navigating to Cloud Console > Secret Manager. You could also click this link.
You should enable the Secret Manager API if you haven't done so.
Click Create Secret.
Fill in the Name, e.g. FIRESTORE.
For Secret value, you can either upload the JSON file or paste the secret value.
Click Create Secret.
After creating a secret, go to your project and install @google-cloud/secret-manager:
npm i @google-cloud/secret-manager
then initialize it like this:
import {SecretManagerServiceClient} from '@google-cloud/secret-manager';
const client = new SecretManagerServiceClient();
You can now use the stored configuration in your project. See the code below for reference:
import { initializeApp } from "firebase/app";
import { getFirestore, doc, getDoc } from "firebase/firestore";
import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

// Must follow the expected format: projects/*/secrets/*/versions/*
// You can always use `latest` if you want to use the latest uploaded version.
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest'

async function accessSecretVersion() {
  const [version] = await client.accessSecretVersion({
    name: name,
  });

  // Extract the payload as a string.
  // WARNING: Do not print the secret in a production environment - it is only
  // parsed here so it can be passed to initializeApp.
  const payload = version?.payload?.data?.toString();
  const config = JSON.parse(payload);

  const firebaseApp = initializeApp({
    apiKey: config.apiKey,
    authDomain: config.authDomain,
    databaseURL: config.databaseURL,
    projectId: config.projectId,
    storageBucket: config.storageBucket,
    messagingSenderId: config.messagingSenderId,
    appId: config.appId,
    measurementId: config.measurementId
  });

  const db = getFirestore(firebaseApp);

  const docRef = doc(db, "cities", "SF");
  const docSnap = await getDoc(docRef);
  if (docSnap.exists()) {
    console.log("Document data:", docSnap.data());
  } else {
    // docSnap.data() will be undefined in this case
    console.log("No such document!");
  }
}

accessSecretVersion();
You should also create secrets in your other projects and make sure the projects' IAM permissions are set up so they can access each other. You can easily choose/switch your Firestore by modifying the secret name here:
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest'
For convenience, you can name the secrets identically, given that they are in different projects. You can then just change the PROJECT-ID of the Firestore you want to access.
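As a rough illustration of that switch (assuming, as above, each project stores its config under an identically named FIRESTORE secret), the project ID can simply become a parameter:
// Hypothetical helper, not production code: pick a Firestore by project ID.
import { SecretManagerServiceClient } from '@google-cloud/secret-manager';
import { initializeApp } from 'firebase/app';
import { getFirestore, Firestore } from 'firebase/firestore';

const client = new SecretManagerServiceClient();

async function firestoreForProject(projectId: string): Promise<Firestore> {
  const [version] = await client.accessSecretVersion({
    name: `projects/${projectId}/secrets/FIRESTORE/versions/latest`,
  });
  const config = JSON.parse(version?.payload?.data?.toString() ?? '{}');
  // Name the app after the project; cache the result in real code, since
  // initializing the same named app twice throws.
  const app = initializeApp(config, projectId);
  return getFirestore(app);
}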
Creating and accessing secrets
Managing Secrets
Managing Secret Versions
API Reference Documentation
You may also want to check out Secret Manager Best Practices.

AWS DynamoDB read newly inserted record

Since I am new to AWS and its other services, I prepared a DynamoDB use case for hands-on practice. Whenever a record is inserted into DynamoDB, that record should move to S3 for further processing. I have written the code snippet below in Java using the KCL.
public static void main(String... args) {
    KinesisClientLibConfiguration workerConfig = createKCLConfiguration();
    StreamsRecordProcessorFactory recordProcessorFactory = new StreamsRecordProcessorFactory();
    System.out.println("Creating worker");
    Worker worker = createKCLCWorker(workerConfig, recordProcessorFactory);
    System.out.println("Starting worker");
    worker.run();
}

public class StreamsRecordProcessorFactory implements IRecordProcessorFactory {
    public IRecordProcessor createProcessor() {
        return new StreamRecordsProcessor();
    }
}
The processRecord method in the StreamRecordsProcessor class:
private void processRecord(Record record) {
    if (record instanceof RecordAdapter) {
        com.amazonaws.services.dynamodbv2.model.Record streamRecord = ((RecordAdapter) record)
                .getInternalObject();
        if ("INSERT".equals(streamRecord.getEventName())) {
            Map<String, AttributeValue> attributes
                    = streamRecord.getDynamodb().getNewImage();
            System.out.println(attributes);
            System.out.println(
                    "New item name: " + attributes.get("name").getS());
        }
    }
}
From my local environment, I am able to see the record whenever we add records to DynamoDB, but I have a few questions.
How can I deploy this project to AWS?
What is the procedure, and is any configuration required on the AWS side?
Please share your thoughts.
You should be able to use AWS Lambda as the integration point: the Lambda function reads data from the DynamoDB stream and pushes it into a Kinesis Firehose delivery stream, which ultimately deposits it in S3. Here is an AWS blog article that can serve as a high-level guide for doing this. It tells you which AWS components you can use to build this, and additional research on each component can help you put the pieces together.
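To make the shape of that Lambda concrete, here is a minimal TypeScript sketch (not the blog's exact code); the delivery stream name is a placeholder:
// Hedged sketch: forward INSERT events from a DynamoDB stream to a Kinesis
// Firehose delivery stream, which then delivers to S3. Stream name is a placeholder.
import { DynamoDBStreamEvent } from 'aws-lambda';
import { FirehoseClient, PutRecordBatchCommand } from '@aws-sdk/client-firehose';

const firehose = new FirehoseClient({});

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  const records = event.Records
    .filter((r) => r.eventName === 'INSERT')
    .map((r) => ({
      // Newline-delimited JSON makes the objects in S3 easier to process later.
      Data: Buffer.from(JSON.stringify(r.dynamodb?.NewImage) + '\n'),
    }));

  if (records.length === 0) return;

  await firehose.send(new PutRecordBatchCommand({
    DeliveryStreamName: 'my-delivery-stream', // placeholder
    Records: records,
  }));
};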
Give that a try; if you get stuck anywhere, please add a comment and I'll respond in due time.

How to unit test a lambda function that includes a dynamoDB query

I have a function in my Alexa skill's lambda function that I am trying to do a unit test for using the aws-lambda-mock-context node package. The method I am trying to test includes a call to DynamoDB to check if an item exists in my table.
At the moment, my test immediately fails with CredentialsError: Missing credentials in config. Following this blog, I tried to manually enter my Amazon IAM credentials into a .aws/credentials file. Testing with the credentials leads to the test running for 30+ seconds before timing out, with no success or failure result from DynamoDB. I am not sure where to go from here.
The function I am looking to unit test looks like this:
helper.prototype.checkForItem = function(alexa) {
  var registration_id = 123;
  var params = {
    TableName: 'registrations',
    Key: {
      // numeric values are sent as strings in the low-level API
      id: {"N": registration_id.toString()}
    }
  };
  return this.getItemFromDB(params).then(function(data) {
    //...
  });
};
And the call to DynamoDB:
helper.prototype.getItemFromDB = function(params) {
  return new Promise(function(fulfill, reject) {
    dynamoDB.getItem(params, function(err, data) {
      if (err == null) {
        console.log("fulfilled");
        fulfill(data);
      } else {
        console.log("error receiving data " + err);
        reject(null);
      }
    });
  });
};
You can use SAM Local to test your Lambda:
AWS SAM is a fast and easy way of deploying your serverless
applications, allowing you to write simple templates to describe your
functions and their event sources (Amazon API Gateway, Amazon S3,
Kinesis, and so on). Based on AWS SAM, SAM Local is an AWS CLI tool
that provides an environment for you to develop, test, and analyze
your serverless applications locally before uploading them to the
Lambda runtime. Whether you're developing on Linux, Mac, or Microsoft
Windows, you can use SAM Local to create a local testing environment
that simulates the AWS runtime environment. Doing so helps you address
issues such as performance. Working with SAM Local also allows faster,
iterative development of your Lambda function code because there is no
need to redeploy your application package to the AWS Lambda runtime.
For more information, see Building a Simple Application Using SAM
Local.
If you want to do unit testing, you can mock the DynamoDB endpoint using a mocking library like nock. You can also inspect in Fiddler the requests/responses your app is making to the DynamoDB endpoint and troubleshoot accordingly.
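For example, a rough Jest-style sketch of mocking the DynamoDB endpoint with nock might look like this (the helper path, region, and item shape are all assumptions):
// Hedged sketch: nock intercepts the DynamoDB HTTP endpoint so no real table,
// network, or credentials are needed. './helper' is a hypothetical path.
import nock from 'nock';
import * as AWS from 'aws-sdk';

const Helper = require('./helper'); // hypothetical path to the module under test

// The SDK still checks for credentials before sending, so give it dummy ones.
AWS.config.update({
  region: 'us-east-1',
  credentials: new AWS.Credentials('testKey', 'testSecret'),
});

describe('getItemFromDB', () => {
  it('fulfills with the item returned by DynamoDB', async () => {
    nock('https://dynamodb.us-east-1.amazonaws.com')
      .post('/') // every DynamoDB API call is a POST to the service root
      .reply(200, { Item: { id: { N: '123' } } });

    const data = await new Helper().getItemFromDB({
      TableName: 'registrations',
      Key: { id: { N: '123' } },
    });

    expect(data.Item.id.N).toBe('123');
  });
});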

DynamoDB to 'vanilla' JSON

I'm starting out using some of the managed services in AWS. One thing that seems like it should be easy is to use API Gateway to secure and expose calls to DynamoDB.
I've got this working. However, it seems a little clunky. DynamoDB returns something like this:
{
  "id": { "N": "3" }
  // Lots of other fields
}
When really I (and most other consumers out there) would like something like this:
{
  "id": "3"
  // Lots of other fields
}
The way I see it, I've got two options.
1) Add a response mapping field by field in the AWS API UI. This seems laborious and error prone:
#set($inputRoot = $input.path('$'))
{
  "Id": "$elem.Id.N"
  // Lots of other fields
}
2) Write a specific Lambda between API Gateway and DynamoDB that does this mapping, like https://stackoverflow.com/a/42231827/2012130. This adds another thing in the mix to maintain.
Is there a better way? Am I missing something? Seems to be so close to awesome.
// The DocumentClient marshals and unmarshals attribute values for you, so reads
// and writes use plain JSON instead of the {"N": "3"} wrappers.
const AWS = require('aws-sdk');

var db = new AWS.DynamoDB.DocumentClient({
  region: 'us-east-1',
  apiVersion: '2012-08-10'
});
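For what it's worth, here is a small sketch (the table name and key are made up) showing that a read through the DocumentClient already comes back as vanilla JSON, which can be returned straight through the proxy integration:
// Assumed table 'things' with a numeric 'id' key; purely illustrative.
import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient({ region: 'us-east-1' });

db.get({ TableName: 'things', Key: { id: 3 } }, (err, data) => {
  if (err) { console.error(err); return; }
  // data.Item is already { id: 3, ... } rather than { "id": { "N": "3" } }
  console.log(data.Item);
});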
You can use an ODM like dynogels:
https://github.com/clarkie/dynogels
We use it heavily without having to deal with DynamoDB's attribute syntax.
This brings a Lambda and a language runtime into the mix, but it is much easier to handle the mapping when an object grows larger.
Hope this helps.
There's another option added today. It'll still involve a lambda step, but..
"The Amazon DynamoDB DataMapper for JavaScript is a high-level client for writing and reading structured data to and from DynamoDB, built on top of the AWS SDK for JavaScript."
https://aws.amazon.com/blogs/developer/introducing-the-amazon-dynamodb-datamapper-for-javascript-developer-preview/
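Roughly, and hedged against the developer-preview examples (the table name, attributes, and region are placeholders, and decorators need "experimentalDecorators" enabled in tsconfig), usage looks like this:
// Hedged sketch based on the DataMapper developer preview; placeholders throughout.
import { DataMapper } from '@aws/dynamodb-data-mapper';
import { attribute, hashKey, table } from '@aws/dynamodb-data-mapper-annotations';
import { DynamoDB } from 'aws-sdk';

@table('my_table')
class MyRecord {
  @hashKey()
  id?: string;

  @attribute()
  title?: string;
}

const mapper = new DataMapper({ client: new DynamoDB({ region: 'us-east-1' }) });

// get() resolves to a MyRecord with plain values, not { S: ... } wrappers.
mapper.get(Object.assign(new MyRecord(), { id: '3' }))
  .then((record) => console.log(record.title));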

Allow 3rd party app to upload file to AWS s3

I need a way to allow a 3rd party app to upload a txt file (350KB and slowly growing) to an S3 bucket in AWS. I'm hoping for a solution involving an endpoint they can PUT to, with some authorization key or the like in the header. The bucket can't be public to all.
I've read this: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
and this: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
but can't quite seem to find the solution I'm seeking.
I'd suggest using a combination of AWS API Gateway, a Lambda function, and finally S3.
Your clients will call the API Gateway endpoint.
The endpoint will execute an AWS Lambda function that will then write the file out to S3.
Only the Lambda function will need rights to the bucket, so the bucket will remain non-public and protected.
If you already have an EC2 instance running, you could replace the Lambda piece with custom code running on your EC2 instance, but using Lambda will give you a 'serverless' solution that scales automatically and has no minimum monthly cost.
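A minimal TypeScript sketch of that Lambda (the bucket name and key scheme are placeholders, and authorization is left to the API Gateway layer) could look like:
// Hedged sketch: take the request body from API Gateway and write it to a
// private bucket. Only this function's role needs s3:PutObject on the bucket.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-upload-bucket',       // placeholder bucket name
    Key: `uploads/${Date.now()}.txt`, // naming scheme is an assumption
    Body: event.body ?? '',
    ContentType: 'text/plain',
  }));
  return { statusCode: 200, body: 'uploaded' };
};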
I ended up using the AWS SDK. It's available for Java, .NET, PHP, and Ruby, so there's a very high probability the 3rd party app is using one of those. See here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpNET.html
In that case, it's just a matter of them using the SDK to upload the file. I wrote a sample version in .NET running on my local machine. First, install the AWSSDK NuGet package. Then, here is the code (taken from an AWS sample):
C#:
var bucketName = "my-bucket";
var keyName = "what-you-want-the-name-of-S3-object-to-be";
var filePath = "C:\\Users\\scott\\Desktop\\test_upload.txt";
var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

try
{
    PutObjectRequest putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
    PutObjectResponse response2 = client.PutObject(putRequest2);
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.ErrorCode != null &&
        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
        ||
        amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
    {
        Console.WriteLine("Check the provided AWS Credentials.");
        Console.WriteLine(
            "For service sign up go to http://aws.amazon.com/s3");
    }
    else
    {
        Console.WriteLine(
            "Error occurred. Message:'{0}' when writing an object",
            amazonS3Exception.Message);
    }
}
Web.config:
<add key="AWSAccessKey" value="your-access-key"/>
<add key="AWSSecretKey" value="your-secret-key"/>
You get the access key and secret key by creating a new user in your AWS account. When you do so, they'll be generated for you and provided for download. You can then attach the AmazonS3FullAccess policy to that user, and the document will be uploaded to S3.
NOTE: this was a POC. In the actual 3rd party app using this, they won't want to hardcode the credentials in the web config for security purposes. See here: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-creds.html