How to unit test a Lambda function that includes a DynamoDB query

I have a function in my Alexa skill's lambda function that I am trying to do a unit test for using the aws-lambda-mock-context node package. The method I am trying to test includes a call to DynamoDB to check if an item exists in my table.
At the moment, my test immediately fails with CredentialsError: Missing credentials in config. Following this blog, I tried to manually enter my Amazon IAM credentials into a .aws/credentials file. Testing with the credentials leads to the test running for 30+ seconds before timing out, with no success or fail result from DynamoDB. I am not sure where to go from here.
The function I am looking to unit test looks like this:
helper.prototype.checkForItem = function(alexa) {
  var registration_id = 123;
  var params = {
    TableName: 'registrations',
    Key: {
      id: { "N": registration_id.toString() } // the low-level API expects number values as strings
    }
  };
  return this.getItemFromDB(params).then(function(data) {
    //...
  });
};
And the call to DynamoDB:
helper.prototype.getItemFromDB = function(params) {
  return new Promise(function(fulfill, reject) {
    dynamoDB.getItem(params, function(err, data) {
      if (err == null) {
        console.log("fulfilled");
        fulfill(data);
      }
      else {
        console.log("error receiving data " + err);
        reject(null);
      }
    });
  });
};

You can use SAM Local to test your Lambda:
AWS SAM is a fast and easy way of deploying your serverless
applications, allowing you to write simple templates to describe your
functions and their event sources (Amazon API Gateway, Amazon S3,
Kinesis, and so on). Based on AWS SAM, SAM Local is an AWS CLI tool
that provides an environment for you to develop, test, and analyze
your serverless applications locally before uploading them to the
Lambda runtime. Whether you're developing on Linux, Mac, or Microsoft
Windows, you can use SAM Local to create a local testing environment
that simulates the AWS runtime environment. Doing so helps you address
issues such as performance. Working with SAM Local also allows faster,
iterative development of your Lambda function code because there is no
need to redeploy your application package to the AWS Lambda runtime.
For more information, see Building a Simple Application Using SAM
Local.
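For example, assuming your function is declared in a template.yaml under a logical name like AlexaSkillFunction (that name and the event file below are placeholders, not from the original question), you could run it locally against a sample request:

sam local invoke "AlexaSkillFunction" -e event.json

sam local generate-event can help you produce a sample event.json to start from.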

If you want to do unit testing, you can mock the DynamoDB endpoint with a mocking library like nock. You can also use a proxy such as Fiddler to inspect the request/response your app is making to the DynamoDB endpoint and troubleshoot accordingly.
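As a minimal sketch (the region, table and key values are assumptions for illustration, not taken from the question), intercepting the DynamoDB endpoint with nock could look like this:

var nock = require('nock');

// Intercept the regional DynamoDB endpoint the SDK client posts to and
// return a canned GetItem response, so no real table or network is needed.
nock('https://dynamodb.us-east-1.amazonaws.com')
  .post('/')
  .reply(200, { Item: { id: { N: '123' } } });

// Any dynamoDB.getItem call made by the code under test now resolves with
// the canned Item, so the promise returned by getItemFromDB can be asserted.

Note that the SDK still signs requests before sending them, so you may need to supply dummy credentials (for example via environment variables) to get past the CredentialsError even though nothing actually reaches AWS.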

Related

How to retrieve the schema.graphql file from @aws-cdk/aws-appsync before deployment

I am using the code-first approach in @aws-cdk/aws-appsync for generating my GraphQL schema. For TypeScript code generation purposes I need a way to retrieve the schema.graphql before deployment (maybe somehow extracting it from the cdk synth command?).
I am not sure if we are facing the same issue. Basically I wanted to access the GQL schema in my client react app which was in a different repository than the cdk app in which the infrastructure is defined.
I ended up using the aws-cli to extract the appsync schema using the following command:
aws appsync get-introspection-schema --api-id [API-ID] --format SDL --no-include-directives outfile=[OUTFILE]
For getting the automatically created schema before deployment, use this script:
const { readFile, writeFile } = require('fs'); // the script uses the Node fs callbacks

readFile('cdk.out/<YOUR-APP-NAME>.template.json', 'utf8', (err, data) => {
  if (err) {
    throw err;
  }
  const definition = JSON.parse(data)
    .Resources['<YOUR-SCHEMA-ID>'] // replace with your schema's logical id
    .Properties
    .Definition;
  writeFile('lib/domain/generated.graphql', definition, (error) => {
    if (error) throw error;
  });
});
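Assuming you save the script as extract-schema.js (the file name is just an example), synthesize first so that cdk.out contains the generated template, then run the script:

npx cdk synth
node extract-schema.js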

How to have multiple codepipeline triggers in aws?

How can I have AWS CodePipeline be triggered by multiple sources? Imagine I want the same pipeline to be triggered whenever I push to either of two different repositories. Plus, the build stage must know which repository triggered the pipeline and pull from the right repository.
Well, it will depend on the pipeline itself. AWS says CodePipeline pipelines are made per project only. However, one way you could tackle this problem is by:
building a Lambda function that triggers CodeBuild
giving the Lambda function as many triggers as the number of repositories that should start the same pipeline
having the Lambda function pass environment variables to CodeBuild and trigger its execution
letting CodeBuild work out which repo to pull from depending on the value of the environment variable
How-To:
To begin with, log into the AWS console and head to Lambda functions
Create a new function - or edit one, if you prefer
Choose to create a new function from scratch if you created the function in the step above
Choose the runtime, which in this example is going to be Node.js 10.x, but you can pick the one you prefer
Use the default permission settings, or an already created role for this function if you have one. This is important because we will be editing these permissions later
Click create function to create it. - If you already have a function you can jump to the next step!
On the function's configuration page, click on "Add trigger"
After that you must choose your provider. This could be any of the options, including CodeCommit. However, if you want another service to be your trigger and it is not listed here, you can always create an SNS topic, have that service publish to it, and then make the SNS topic the trigger for your function. That is going to be the subject of another tutorial later on...
In this case it is going to be CodeCommit, and you should choose it from the list. You will then be prompted with a screen to configure your preferences
One thing to keep in mind is that the Event name is quite crucial, since we are going to use it in our function to choose what is going to happen. Thus, choose it carefully
After that the trigger should show up on your function. Now it's time to code our function
Since we are going to have multiple repositories trigger the same CodeBuild project, you can always refer to the AWS SDK CodeBuild documentation here
The method we are going to use is the CodeBuild StartBuild method
To configure it properly we need to know which region our build project is in. You can see it by going to your project in the AWS console and looking at the URL prefix here
Coming back to our Lambda function, we are going to create a JSON file that will store all of the data to be transferred to CodeBuild when the function runs. It is important to name it WITH THE EXTENSION. The values of the environment variables will differ from trigger to trigger, and that is why the trigger name is so important: it is going to be the key of our selector. The selector takes the event trigger name and uses it to look into the JSON and define the environment variables
In the repositories.json file we are going to put all the data we want; if you know a little bit of JSON, you can store whatever you want and have the function pass it on to CodeBuild as an environment variable. A sample layout is sketched below
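For illustration only (the trigger names and repository URLs below are invented), repositories.json could be keyed by the trigger name like this:

{
  "my-first-repo-trigger": {
    "url": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/my-first-repo"
  },
  "my-second-repo-trigger": {
    "url": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/my-second-repo"
  }
}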
The code is as follows:
const AWS = require('aws-sdk'); // importing aws sdk
const fs = require('fs'); // importing fs to read our json file

const handler = (event, context) => {
  AWS.config.update({region: 'us-east-2'}); // your region can vary from mine
  var codebuild = new AWS.CodeBuild(); // creating codebuild instance

  // This is just so you can see the aws sdk loaded properly, so we have it print
  // its credentials. This is not required but shows us that the sdk loaded correctly.
  AWS.config.getCredentials(function(err) {
    if (err) console.log(err.stack); // credentials not loaded
    else {
      console.log("Access key:", AWS.config.credentials.accessKeyId);
      console.log("Secret access key:", AWS.config.credentials.secretAccessKey);
    }
  });

  var repositories = JSON.parse(fs.readFileSync('repositories.json').toString());
  var selectedRepo = event.Records[0].eventTriggerName; // the trigger name configured earlier

  var params = {
    projectName: 'lib-patcher-build', /* required */
    artifactsOverride: {
      type: 'CODEPIPELINE' /* required */
    },
    environmentVariablesOverride: [
      {
        name: 'name-of-the-environment-variable', /* required */
        value: 'its-value', /* required */
        type: 'PLAINTEXT'
      },
      {
        name: 'repo-url', /* required */
        value: repositories[selectedRepo].url, /* required: look up the URL for the repo that fired the trigger */
        type: 'PLAINTEXT'
      }
      /* more items */
    ]
  };

  codebuild.startBuild(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data); // successful response
  });
};

exports.handler = handler;
Then, in our build project, we can run a quick test to see if our function worked by printing the environment variable that we passed from our json file, through our function, and on to the CodeBuild project (see the sketch below).
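As a rough sketch (this assumes the variable is named REPO_URL rather than repo-url, since names without dashes are easier to reference in shell), the project's buildspec could simply echo it:

version: 0.2
phases:
  build:
    commands:
      - echo "Building from repository $REPO_URL"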
For our function to be able to start the build of our CodeBuild project, we must grant it access. So:
Go back to your Lambda and click on the permissions tab
Click on Manage these permissions
Click your policy name, but keep in mind it will NOT be the same as mine
Edit your function's policy
Click to add permissions
Click to choose a service and choose CodeBuild
Add the permission for our function to start the build
Now you have two choices: you can either grant this permission on all build projects or restrict it to specific projects. In our case we are going to restrict it, since we have only one build project
To restrict it, click the dropdown button on Resources, choose Specific and click on Add ARN
Then we need to open a new tab, go to our CodeBuild project and copy the project's ARN
Paste the ARN on the permissions tab you were previously on and click on save changes
Review the policy
Click to save the changes
Now, if you push anything to the repository you configured, the build project should get triggered and, in the build logs section, you should see the value of the variable you set in your function

How to get SAM template of lambda function deployed via UI?

I wrote a bash script similar to this one: Download an already uploaded Lambda function
Everything is fine with all lambda functions that have been deployed via SAM template files. However, when I retrieve the deployment package of a lambda function (application) that has been deployed via the web UI of AWS, all I get is the index.js file in the deployment package of that function.
Anyway, it is possible to generate a SAM yaml file that describes the architecture of the given lambda application by selecting it over the Lambda Management Console via Actions > Export Function > Download AWS SAM file. Consequently, there should be a possibility to do this via aws-cli or is that not possible at all?
You can get the function's configuration with awscli https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function-configuration.html, and the response of the related get-function call will contain a Code section with a link to the function package https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionCodeLocation.html
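For example (the function name is a placeholder):

aws lambda get-function --function-name my-function --query 'Code.Location' --output text

This prints a presigned S3 URL from which you can download the deployment package, but it does not emit the exported SAM template.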
Also, you can create a CloudFormation stack from the existing infrastructure with CloudFormer https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html
Having a CloudFormation template you can get the resource description. https://docs.aws.amazon.com/cli/latest/reference/cloudformation/describe-stack-resources.html with the link to the function source code on S3.
See more on this in https://stackoverflow.com/a/55764927/6628583
This is a clue, not a complete answer, sorry. There should be, but there is not yet, an aws-cli command to get this SAM content. The URI in the browser is obtuse, with some obscure network calls, e.g.
https://us-west-2.console.aws.amazon.com/p/log/1/lambda/1/OP/&k0=feevc&m0=1&d0=%7B%22s_fid%22:%2233SNIPPEDC66-34SNIPPED7FE5B%22%7D&p0=exportSAM&cb=1595621538378&proxy-rid=7c6a6a31SNIPPEDf1a400b457cc
There seems to be no easy way to construct that URI to GET the exportSAM download.
The browser lambda.js is not much help, e.g.
var A = Object(p.connect)((function(e) {
return {
blueprint: Object(v.b)(e).query.exportBp,
integrationConfigs: h.c.getNodes(e),
downloadMessages: h.c.getDownloadMessages(e),
isOpen: !!Object(v.b)(e).query.exportModal,
exporting: h.c.getExporting(e)
}
}
), (function(e) {
return {
downloadSam: function(t) {
return e((n = t,
{
type: b.f.EXPORT_BLUEPRINT_TO_FLOURISH,
blueprintName: n
}));
var n
},
close: function() {
return e(Object(m.c)({
query: {
exportModal: void 0,
exportBp: void 0
}
}, {
persistQuery: !0
}))
}
}
}
), (function(e, t, n) {
return S(S(S(S({}, n), e), t), {}, {
downloadSam: function() {
return t.downloadSam(e.blueprint)
}
})
}

DynamoDB + Flutter

I am trying to create an app that uses AWS services. I already use the Cognito plugin for Flutter but can't get it to work with DynamoDB. Should I use a Lambda function and point to it, or is it possible to get data from a table directly from Flutter? If that's the case, which URL should I use?
I am new to AWS services and don't know whether it is possible to access a DynamoDB table with a URL or whether I should just use a Lambda function.
Since this is kind of an open-ended question and you mentioned Lambdas, I would suggest checking out the Serverless framework. They have a couple of template applications in various languages/frameworks. Serverless makes it really easy to spin up Lambdas configured to an API Gateway, and you can start with the default proxy+ resource. You can also define DynamoDB tables to be auto-created/destroyed when you deploy/destroy your serverless application. When you successfully deploy using the command 'serverless deploy' it will output the URL to access your API Gateway which will trigger your Lambda seamlessly.
Then once you have a basic "hello-word" type API hosted on AWS, you can just follow the docs along for how to set up the DynamoDB library/sdk for your given framework/language.
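As a rough sketch of what that looks like (the service, function and table names below are invented), a minimal serverless.yml might be:

service: flutter-backend

provider:
  name: aws
  runtime: nodejs18.x

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get

resources:
  Resources:
    ItemsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: items
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH

Running serverless deploy on a project like this prints the API Gateway URL that your Flutter app can call.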
Let me know if you have any questions!
-PS: I would also, later on, recommend using the API Gateway Authorizer against your Cognito User Pool, since you already have auth on the Flutter app, then all you have to do is pass through the token. The Authorizer can also be easily set up via the Serverless Framework! Then your API will be authenticated at the Gateway level, leaving AWS to do all the hard work :)
If you want to read directly from DynamoDB, it is actually pretty easy.
First add this package to your project.
Then create the models you want to read and write, along with conversion methods.
class Parent {
  String name;
  List<Child> children;

  Parent(this.name, this.children);

  factory Parent.fromDBValue(Map<String, AttributeValue> dbValue) {
    return Parent(
      dbValue["name"]!.s!,
      dbValue["children"]!.l!.map((e) => Child.fromDB(e)).toList(),
    );
  }

  Map<String, AttributeValue> toDBValue() {
    Map<String, AttributeValue> dbMap = Map();
    dbMap["name"] = AttributeValue(s: name);
    dbMap["children"] = AttributeValue(
        l: children.map((e) => AttributeValue(m: e.toDBValue())).toList());
    return dbMap;
  }
}
(AttributeValue comes from the package)
Then you can consume the DynamoDB API as per normal.
Create a Dynamo service:
class DynamoService {
  final service = DynamoDB(
      region: 'af-south-1',
      credentials: AwsClientCredentials(
          accessKey: "someAccessKey", secretKey: "somesecretkey"));

  Future<List<Map<String, AttributeValue>>?> getAll(
      {required String tableName}) async {
    var result = await service.scan(tableName: tableName);
    return result.items;
  }

  Future insertNewItem(Map<String, AttributeValue> dbData, String tableName) async {
    return service.putItem(item: dbData, tableName: tableName);
  }
}
Then you can convert the items when getting all data from Dynamo.
Future<List<Parent>> getAllParents() async {
  List<Map<String, AttributeValue>>? parents =
      await dynamoService.getAll(tableName: "parents");
  return parents!.map((e) => Parent.fromDBValue(e)).toList();
}
You can check all Dynamo operations from here

Allow 3rd party app to upload file to AWS s3

I need a way to allow a 3rd party app to upload a txt file (350KB and slowly growing) to an s3 bucket in AWS. I'm hoping for a solution involving an endpoint they can PUT to with some authorization key or the like in the header. The bucket can't be public to all.
I've read this: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
and this: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
but can't quite seem to find the solution I'm seeking.
I'd suggest using a combination of AWS API Gateway, a Lambda function and, finally, S3.
Your clients will call the API Gateway endpoint.
The endpoint will execute an AWS lambda function that will then write out the file to S3.
Only the lambda function will need rights to the bucket, so the bucket will remain non-public and protected.
If you already have an EC2 instance running, you could replace the lambda piece with custom code running on your EC2 instance, but using lambda will allow you to have a 'serverless' solution that scales automatically and has no min. monthly cost.
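As a minimal sketch of that Lambda piece (the bucket name, key scheme, and the assumption of an API Gateway proxy integration are mine, not part of the original answer), using the Node.js aws-sdk:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // With a proxy integration the request body arrives on the event,
  // base64-encoded for binary payloads.
  const body = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')
    : event.body;

  await s3.putObject({
    Bucket: 'my-upload-bucket',            // hypothetical bucket name
    Key: 'uploads/' + Date.now() + '.txt', // hypothetical key scheme
    Body: body,
    ContentType: 'text/plain'
  }).promise();

  return { statusCode: 200, body: 'Uploaded' };
};

The third party then PUTs to the API Gateway endpoint with whatever authorization you enforce at the gateway (an API key or a custom authorizer, for example), and only the function's execution role needs rights to the bucket.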
I ended up using the AWS SDK. It's available for Java, .NET, PHP, and Ruby, so there's a very high probability the 3rd party app is using one of those. See here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpNET.html
In that case, it's just a matter of them using the SDK to upload the file. I wrote a sample version in .NET running on my local machine. First, install the AWSSDK Nuget package. Then, here is the code (taken from AWS sample):
C#:
var bucketName = "my-bucket";
var keyName = "what-you-want-the-name-of-S3-object-to-be";
var filePath = "C:\\Users\\scott\\Desktop\\test_upload.txt";
var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);

try
{
    PutObjectRequest putRequest2 = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
    };
    putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
    PutObjectResponse response2 = client.PutObject(putRequest2);
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.ErrorCode != null &&
        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
         ||
         amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
    {
        Console.WriteLine("Check the provided AWS Credentials.");
        Console.WriteLine(
            "For service sign up go to http://aws.amazon.com/s3");
    }
    else
    {
        Console.WriteLine(
            "Error occurred. Message:'{0}' when writing an object",
            amazonS3Exception.Message);
    }
}
Web.config:
<add key="AWSAccessKey" value="your-access-key"/>
<add key="AWSSecretKey" value="your-secret-key"/>
You get the access key and secret key by creating a new user in your AWS account. When you do so, they'll be generated for you and provided for download. You can then attach the AmazonS3FullAccess policy to that user and the document will be uploaded to S3.
NOTE: this was a POC. In the actual 3rd party app using this, they won't want to hardcode the credentials in the web config for security purposes. See here: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-creds.html