I am using AWS Amplify to set up an AppSync GraphQL API. I have a schema with an @model directive and I am trying to write a Lambda resolver that will read/write to the DynamoDB table that @model generates. However, when I attempt to test locally using amplify mock, my JS function throws
error { UnknownEndpoint: Inaccessible host: `dynamodb.us-east-1-fake.amazonaws.com'. This service may not be available in the `us-east-1-fake' region.
I can't seem to find much documentation around this use case at all (most examples of lambda resolvers read from other tables / APIs that are not part of the amplify app) so any pointers are appreciated. Is running this type of setup even supported or do I have to push to AWS in order to test?
New Answer:
Amplify now has documentation on this use case: https://docs.amplify.aws/cli/usage/mock#connecting-to-a-mock-model-table
You can set environment variables for mock that will point the DDB client in the mock lambda to the local DDB instance
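For reference, the documented approach looks roughly like the following sketch. MOCK_DYNAMODB_ENDPOINT is the endpoint variable described on that page (verify the exact name against the linked docs before relying on it); when it is absent, the client falls back to the SDK's normal resolution:

```javascript
// Build DynamoDB client options from the environment: point at DynamoDB Local
// when running under `amplify mock`, otherwise use the real service defaults.
function ddbOptionsFromEnv(env) {
  if (env.MOCK_DYNAMODB_ENDPOINT) {
    return {
      endpoint: env.MOCK_DYNAMODB_ENDPOINT,
      region: 'us-fake-1',
      accessKeyId: 'fake',
      secretAccessKey: 'fake',
    };
  }
  return {}; // live: let the SDK resolve region and credentials normally
}

// Usage in the Lambda (aws-sdk v2):
// const AWS = require('aws-sdk');
// const ddb = new AWS.DynamoDB.DocumentClient(ddbOptionsFromEnv(process.env));
```

The same function then works unchanged in the deployed Lambda, because the mock-only variable is simply not set there.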
=====================================================================
Original Answer:
After some digging into the Amplify CLI code, I have found a solution that will work for now.
Here is where amplify mock initializes DynamoDB Local. As you can see, it does not set the --sharedDb flag which based on the docs means that the created database files will be prefixed with the access key id of the request and then the region. The access key id of requests from Amplify will be "fake" and the region is "us-fake-1" as defined here. Furthermore, the port of the DynamoDB Local instance started by Amplify is 62224 defined here.
Therefore, to connect to the tables that are created by Amplify, the following DynamoDB config is needed:
const AWS = require('aws-sdk');

const ddb = new AWS.DynamoDB({
  region: 'us-fake-1',
  endpoint: 'http://172.16.123.1:62224/', // address where the local DynamoDB instance is reachable
  accessKeyId: 'fake',
  secretAccessKey: 'fake'
});
If you want to use the AWS CLI with the tables created by Amplify, you'll have to create a new profile with the region and access keys above.
I'll still need to do some additional work to figure out a good way to have those config values switch between the local mock values and the actual ones, but this unblocks local testing for now.
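One simple way to do that switch is a small helper keyed on a flag you set yourself when mocking (a sketch; IS_LOCAL is a hypothetical variable name, not something Amplify defines):

```javascript
// Return DynamoDB client options for either the amplify mock instance or live AWS.
function dynamoConfig(isLocal) {
  return isLocal
    ? {
        region: 'us-fake-1',
        endpoint: 'http://172.16.123.1:62224/',
        accessKeyId: 'fake',
        secretAccessKey: 'fake',
      }
    : {}; // live: fall back to the SDK's default region/credential resolution
}

// const AWS = require('aws-sdk');
// const ddb = new AWS.DynamoDB(dynamoConfig(process.env.IS_LOCAL === 'true'));
```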
As for another question that I had about where AWS::Region of "us-east-1-fake" was being set, that gets set here but it does not appear to be used anywhere else. ie, it gets set as a placeholder value when running amplify mock but using it as a region in other places for testing locally doesn't seem to work.
Please try the setting below; it's working fine for me:
const AWS = require('aws-sdk');

// Local
const dynamoDb = new AWS.DynamoDB.DocumentClient({
  region: 'us-fake-1',
  endpoint: 'http://localhost:62224/',
  accessKeyId: 'fake',
  secretAccessKey: 'fake'
});

// Live
// const dynamoDb = new AWS.DynamoDB.DocumentClient();
Your DynamoDB host is incorrect: dynamodb.us-east-1-fake is not a valid host. Please update it with a real DynamoDB host name.
If you are running locally, set up aws configure on the CLI first.
Related
Hello, I'm starting out with AWS and Amplify. I would like to view the mock tables that amplify mock creates, but when I try to access them it fails; AppSync, however, works fine. I'm following this tutorial and it says I should be able to access DynamoDB using localhost.
this is the doc link https://docs.amplify.aws/cli/usage/mock/#api-mocking-setup
Have you checked inside the amplify/mock-data directory?
For DynamoDB storage, setup is done automatically when creating a GraphQL API; no action is needed on your part. Resources for the mocked data, such as the DynamoDB Local database or objects uploaded using the local S3 endpoint, are stored inside your project under amplify/mock-data.
Is there a way to retrieve ARN of a resource through AWS SDK for Go? I created a couple of tables in DynamoDB and I want to retrieve the ARNs.
The ARN format:
arn:aws:service:region:account-id:resource-type:resource-id
How to retrieve the account-id and region via SDK for Go?
There is no generic way to get the region from the AWS SDK. By "generic" I mean simple code that returns the correct AWS region for your service no matter which environment it is deployed to.
AWS assumes the opposite process: as a client, you are expected to know where your AWS resources are deployed, and you have to inject the region into the app that connects to AWS.
Think about your code running on your local machine in Europe, accessing an AWS DynamoDB table deployed in the us-east-2 region, or code that needs to copy data from a DB in region1 to a DB in region2. In both cases, the application cannot determine the correct region without a hint.
In many cases, though, the environment where your code is deployed can provide that hint.
A few examples:
For a local environment, you can configure a default region for the AWS SDK - https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-region. Your service picks up that region if you create the client using config.LoadDefaultConfig.
Another example is running your service on EC2. EC2 provides instance metadata that includes the current region and account. The current region can be requested through the metadata client, and the GetInstanceIdentityDocument API returns both the region and the account ID.
If you control how your service is deployed, you can try to get the current region and account ID from the environment; otherwise, a common practice is to set environment variables with the region and account ID when you deploy your code.
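For the account-id piece specifically, one option not mentioned above is the STS GetCallerIdentity call, which returns the account ID for any valid credentials (the Go SDK exposes it as sts.GetCallerIdentity). With region and account in hand, assembling the ARN is string formatting; note that DynamoDB table ARNs use the table/<name> resource form. A sketch, in JavaScript for brevity:

```javascript
// Assemble a DynamoDB table ARN from its parts.
function dynamoTableArn(region, accountId, tableName) {
  return `arn:aws:dynamodb:${region}:${accountId}:table/${tableName}`;
}

// Getting the account ID via STS (aws-sdk v2; the Go SDK call is analogous):
// const AWS = require('aws-sdk');
// new AWS.STS().getCallerIdentity({}).promise()
//   .then((r) => console.log(dynamoTableArn('us-east-2', r.Account, 'Books')));
```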
You can change the data source of the AppSync service in the AWS console, but I am not sure if that will still work after I run the command
amplify push api
I also haven't found a way of changing the data source with aws-amplify.
There isn't a direct way, no, as AppSync doesn't have native Postgres data sources. You could theoretically do anything with a Lambda data source, though, very much including Postgres.
An AppSync dev created a sample app that shows a way to do this, via what Amplify calls custom resolvers. You can find that repo here: https://github.com/mikeparisstuff/amplify-cli-nested-api-sample
I'm beginning to work with Secrets Manager and created my first secret in AWS. During the process, it gave me some sample code to work with. I put that in a small application and ran it. The code:
string region = "us-east-1";
string secret = "";
MemoryStream memoryStream = new MemoryStream();

IAmazonSecretsManager client = new AmazonSecretsManagerClient(
    RegionEndpoint.GetBySystemName(region));

GetSecretValueRequest request = new GetSecretValueRequest();
request.SecretId = "MySecretNameExample";

GetSecretValueResponse response = client.GetSecretValue(request);
The problem is that:
I was able to successfully retrieve the secret that I created and
nowhere am I creating a Credentials object with any valid AWS credential data
Where is this code getting the credential information from?
If you refer to the documentation for the API for this line of code:
IAmazonSecretsManager client = new AmazonSecretsManagerClient(
RegionEndpoint.GetBySystemName(region));
AmazonSecretsManagerClient
You will find the following description:
Constructs AmazonSecretsManagerClient with the credentials loaded from
the application's default configuration, and if unsuccessful from the
Instance Profile service on an EC2 instance.
This means that you are either running on an EC2 or ECS service (or a related service such as Beanstalk) with a role assigned to the instance, or you have configured your credentials in the standard way in a credentials file. The AWS SDK is helping you locate credentials.
This document link will explain in more detail how AWS credentials are managed and selected.
Working with AWS Credentials
I have seen a lot of developers get the little details wrong with how credentials work and how they are used within the SDKs. Given that AWS credentials hold the keys to the AWS kingdom, managing and protecting them is vitally important.
The AWS SDK uses a resolution strategy that looks in a number of locations until it finds credentials it can use. Typically the DefaultProviderChain class is responsible for performing the resolution. More information is here, but the gist is that the lookup is performed in the following order (for Java; other languages are similar):
environment variables
Java system properties
credentials file (e.g. in the home directory)
instance profile credentials (only available when running in AWS)
When you run within AWS infrastructure, you can assign a profile or role to the resource that's running your code. Doing that makes credentials automatically available to your code. The idea is that they've made it easy to avoid putting credentials directly into your code.
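To make the order concrete, the chain behaves like "the first provider that yields credentials wins". A toy model of that behavior (an illustration only, not the SDK's actual implementation):

```javascript
// Toy model of the default provider chain: each provider returns credentials
// or null; the first non-null result wins, otherwise resolution fails.
function resolveCredentials(providers) {
  for (const provider of providers) {
    const creds = provider();
    if (creds) return creds;
  }
  throw new Error('Unable to load AWS credentials from any provider');
}

// Order mirrors the list above:
// resolveCredentials([fromEnvVars, fromSystemProperties, fromCredentialsFile, fromInstanceProfile]);
```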
The following is how I'm connecting. Is there any way in Lambda to set environment variables that aren't visible and that would ideally be shared across several Lambdas? Ideally stored in another AWS service that I could access with the SDK and use across all AWS services?
var MYSQL = require('mysql');
var AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
  var connection = MYSQL.createConnection({
    host: '127.0.0.1',
    port: '3306',
    user: 'myuser',
    password: 'mypass',
    database: 'mydb'
  });

  connection.connect();
  // ... queries go here ...
};
I am afraid the answers given before are somewhat lacking in relevance. The AWS-recommended way of storing this sort of data, such as connection strings for Lambda, is the Systems Manager Parameter Store:
AWS Systems Manager provides a centralized store to manage your
configuration data, whether plain-text data such as database strings
or secrets such as passwords. This enables you to separate your
secrets and configuration data from your code.
See also: https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/
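A minimal sketch of reading a parameter from Lambda with the JavaScript SDK (getParameter and its WithDecryption flag are the real aws-sdk v2 API; the parameter name is an example). Caching the value outside the handler avoids a Parameter Store round trip on every warm invocation; the fetcher is injected so the cache logic is easy to test:

```javascript
// Cache resolved parameters for the lifetime of the Lambda container.
const paramCache = {};

async function getParam(name, fetcher) {
  if (!(name in paramCache)) {
    paramCache[name] = await fetcher(name);
  }
  return paramCache[name];
}

// Real fetcher (aws-sdk v2):
// const AWS = require('aws-sdk');
// const ssm = new AWS.SSM();
// const fetchFromSsm = (name) =>
//   ssm.getParameter({ Name: name, WithDecryption: true }).promise()
//     .then((r) => r.Parameter.Value);
//
// Inside the handler:
// const password = await getParam('/myapp/db/password', fetchFromSsm);
```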
I would store that in either a DynamoDB table or an S3 bucket. You can assign an IAM role to the Lambda to allow access to these - maybe read-only.
Alternatively, Lambdas now have environment variables, like Elastic Beanstalk, and you could set it that way. They can be encrypted, though that adds some complexity too.
You can now use encrypted environment variables.
Behind the scenes they will use KMS to encrypt the values and then will allow your Lambda functions to use them as if they were plain text within the container context.
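One caveat: if you additionally use the console's encryption helpers, the variable arrives as base64-encoded ciphertext and your code decrypts it itself. A sketch (kms.decrypt is the real aws-sdk v2 call; DB_PASSWORD is an example variable name):

```javascript
// Decode the base64 ciphertext that the console's encryption helpers store
// in the environment variable.
function ciphertextFromEnv(env, name) {
  return Buffer.from(env[name], 'base64');
}

// Decrypting with KMS (aws-sdk v2), cached across warm invocations:
// const AWS = require('aws-sdk');
// const kms = new AWS.KMS();
// let dbPassword;
// async function getDbPassword() {
//   if (!dbPassword) {
//     const out = await kms.decrypt({
//       CiphertextBlob: ciphertextFromEnv(process.env, 'DB_PASSWORD'),
//     }).promise();
//     dbPassword = out.Plaintext.toString('utf8');
//   }
//   return dbPassword;
// }
```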