connecting AWS SAM Local with dynamodb in docker - amazon-web-services

I've set up an api gateway/aws lambda pair using AWS sam local and confirmed I can call it successfully after running
sam local start-api
I've then added a local dynamodb instance in a docker container and created a table on it using the aws cli
But, having added the code to the lambda to write to the dynamodb instance I receive:
2018-02-22T11:13:16.172Z ed9ab38e-fb54-18a4-0852-db7e5b56c8cd error:
could not write to table: {"message":"connect ECONNREFUSED
0.0.0.0:8000","code":"NetworkingError","errno":"ECONNREFUSED","syscall":"connect","address":"0.0.0.0","port":8000,"region":"eu-west-2","hostname":"0.0.0.0","retryable":true,"time":"2018-02-22T11:13:16.165Z"}
writing event from command:
{"name":"test","geolocation":"xyz","type":"createDestination"} END
RequestId: ed9ab38e-fb54-18a4-0852-db7e5b56c8cd
I read online that the containers might need to be on the same docker network, so I created one with docker network create lambda-local and changed my start commands to:
sam local start-api --docker-network lambda-local
and
docker run -v "$PWD":/dynamodb_local_db -p 8000:8000 --network=lambda-local cnadiminti/dynamodb-local:latest
but still receive the same error
sam local is printing out 2018/02/22 11:12:51 Connecting container 98b19370ab92f3378ce380e9c840177905a49fc986597fef9ef589e624b4eac3 to network lambda-local
I'm creating the dynamodbclient using:
const AWS = require('aws-sdk')
const dynamodbURL = process.env.dynamodbURL || 'http://0.0.0.0:8000'
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID || '1234567'
const awsAccessKey = process.env.AWS_SECRET_ACCESS_KEY || '7654321'
const awsRegion = process.env.AWS_REGION || 'eu-west-2'
console.log('initialising dynamodb in region:', awsRegion)
let dynamoDbClient
const makeClient = () => {
  dynamoDbClient = new AWS.DynamoDB.DocumentClient({
    endpoint: dynamodbURL,
    accessKeyId: awsAccessKeyId,
    secretAccessKey: awsAccessKey,
    region: awsRegion
  })
  return dynamoDbClient
}
module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
and inspecting the DynamoDB client my code creates shows:
DocumentClient {
options:
{ endpoint: 'http://0.0.0.0:8000',
accessKeyId: 'my-key',
secretAccessKey: 'my-secret',
region: 'eu-west-2',
attrValue: 'S8' },
service:
Service {
config:
Config {
credentials: [Object],
credentialProvider: [Object],
region: 'eu-west-2',
logger: null,
apiVersions: {},
apiVersion: null,
endpoint: 'http://0.0.0.0:8000',
httpOptions: [Object],
maxRetries: undefined,
maxRedirects: 10,
paramValidation: true,
sslEnabled: true,
s3ForcePathStyle: false,
s3BucketEndpoint: false,
s3DisableBodySigning: true,
computeChecksums: true,
convertResponseTypes: true,
correctClockSkew: false,
customUserAgent: null,
dynamoDbCrc32: true,
systemClockOffset: 0,
signatureVersion: null,
signatureCache: true,
retryDelayOptions: {},
useAccelerateEndpoint: false,
accessKeyId: 'my-key',
secretAccessKey: 'my-secret' },
endpoint:
Endpoint {
protocol: 'http:',
host: '0.0.0.0:8000',
port: 8000,
hostname: '0.0.0.0',
pathname: '/',
path: '/',
href: 'http://0.0.0.0:8000/' },
_clientId: 1 },
attrValue: 'S8' }
Should this setup work? How do I get them talking to each other?
---- edit ----
Based on a twitter conversation it's worth mentioning (maybe) that I can interact with dynamodb at the CLI and in the web shell

Many thanks to Heitor Lessa, who answered me on Twitter with an example repo, which pointed me at the answer...
dynamodb's docker container is on 127.0.0.1 from the context of my
machine (which is why I could interact with it)
SAM local's docker container is on 127.0.0.1 from the context of my
machine
But they aren't on 127.0.0.1 from each other's context
So: https://github.com/heitorlessa/sam-local-python-hot-reloading/blob/master/users/users.py#L14
Pointed me at changing my connection code to:
const AWS = require('aws-sdk')
const awsRegion = process.env.AWS_REGION || 'eu-west-2'
let dynamoDbClient
const makeClient = () => {
  const options = {
    region: awsRegion
  }
  if (process.env.AWS_SAM_LOCAL) {
    options.endpoint = 'http://dynamodb:8000'
  }
  dynamoDbClient = new AWS.DynamoDB.DocumentClient(options)
  return dynamoDbClient
}
module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
with the important lines being:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://dynamodb:8000'
}
From the context of the SAM Local docker container, the dynamodb container is exposed via its name.
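That branch can be pulled out into a small helper for testing. A minimal sketch, assuming the container name dynamodb and port 8000 used in this setup:

```javascript
// Resolve the DynamoDB endpoint for the current execution context.
// Inside SAM Local's container, the dynamodb container is reachable
// by its --name on the shared docker network; deployed Lambdas use
// the SDK default (no endpoint override).
function resolveDynamoEndpoint(env) {
  if (env.AWS_SAM_LOCAL) {
    return 'http://dynamodb:8000'
  }
  return undefined
}

module.exports = { resolveDynamoEndpoint }
```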
My two startup commands ended up as:
docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network lambda-local --name dynamodb cnadiminti/dynamodb-local
and
AWS_REGION=eu-west-2 sam local start-api --docker-network lambda-local
with the only change here being to give the dynamodb container a name

If you're using sam-local on a Mac like a lot of devs, you should be able to just use
options.endpoint = "http://docker.for.mac.localhost:8000"
Or on newer installs of docker https://docs.docker.com/docker-for-mac/release-notes/#docker-community-edition-18030-ce-mac59-2018-03-26
options.endpoint = "http://host.docker.internal:8000"
instead of having to run multiple commands like Paul showed above (though his approach might be more platform-agnostic?).
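Which alias applies depends on the Docker release and platform. A hedged helper that picks one (the 18.03 cutoff is when host.docker.internal appeared on Docker for Mac; plain Linux engines have neither alias by default, so this returns null there):

```javascript
// Pick a hostname a container can use to reach a service published
// on the host machine. host.docker.internal is built in on Docker
// for Mac/Windows 18.03+; older Docker for Mac releases used
// docker.for.mac.localhost.
function hostAliasFor(dockerVersion, platform) {
  if (platform !== 'darwin' && platform !== 'win32') return null
  const [major, minor] = dockerVersion.split('.').map(Number)
  if (major > 18 || (major === 18 && minor >= 3)) return 'host.docker.internal'
  return platform === 'darwin' ? 'docker.for.mac.localhost' : null
}
```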

The other answers were overly complicated or unclear for me. Here is what I came up with.
Step 1: use docker-compose to get DynamoDB local running on a custom network
docker-compose.yml
Note the network name abp-sam-backend, service name dynamo and that dynamo service is using the backend network.
version: '3.5'
services:
  dynamo:
    container_name: abp-sam-nestjs-dynamodb
    image: amazon/dynamodb-local
    networks:
      - backend
    ports:
      - '8000:8000'
    volumes:
      - dynamodata:/home/dynamodblocal
    working_dir: /home/dynamodblocal
    command: '-jar DynamoDBLocal.jar -sharedDb -dbPath .'
networks:
  backend:
    name: abp-sam-backend
volumes:
  dynamodata: {}
Start the DynamoDB local container via:
docker-compose up -d dynamo
Step 2: Write your code to handle local DynamoDB endpoint
import { DynamoDB, Endpoint } from 'aws-sdk';
const ddb = new DynamoDB({ apiVersion: '2012-08-10' });
if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}
Note that I'm using the hostname alias dynamo. This alias is auto-created for me by docker inside the abp-sam-backend network. The alias name is just the service name.
Step 3: Launch the code via sam local
sam local start-api -t sam-template.yml --docker-network abp-sam-backend --skip-pull-image --profile default --parameter-overrides 'ParameterKey=StageName,ParameterValue=local ParameterKey=DDBTableName,ParameterValue=local-SingleTable'
Note that I'm telling sam local to use the existing network abp-sam-backend that was defined in my docker-compose.yml
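The --parameter-overrides string is fiddly to maintain by hand; a hypothetical helper that renders it from an object (plain string formatting, nothing SAM-specific):

```javascript
// Format an object as a sam-cli --parameter-overrides string, e.g.
// { StageName: 'local' } -> 'ParameterKey=StageName,ParameterValue=local'
function formatParameterOverrides(params) {
  return Object.entries(params)
    .map(([key, value]) => `ParameterKey=${key},ParameterValue=${value}`)
    .join(' ')
}
```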
End-to-end example
I made a working example (plus a bunch of other features) that can be found at https://github.com/rynop/abp-sam-nestjs

SAM starts a lambci/lambda docker container under the hood. If you have another container hosting dynamodb, for example, or any other service you want your lambda to connect to, both should be on the same network.
Suppose you start dynamodb like this (notice --name; this becomes the endpoint hostname):
docker run -d -p 8000:8000 --name DynamoDBEndpoint amazon/dynamodb-local
This will result in something like this
0e35b1c90cf0....
To know which network this was created inside:
docker inspect 0e35b1c90cf0
It should give you something like
...
"Networks": {
    "services_default": {   // this is the <<myNetworkName>>
        ....
If you already know which network you want the container in, you can skip the steps above and do it in one command when starting the container, using the --network option:
docker run -d -p 8000:8000 --network myNetworkName --name DynamoDBEndpoint amazon/dynamodb-local
Important: your lambda code should now use DynamoDBEndpoint as the dynamo endpoint.
For example:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://DynamoDBEndpoint:8000'
}
Testing everything out:
Using lambci/lambda:
This should list all the tables inside your other dynamodb container (the chain is wrapped in sh -c so that all three commands run inside the container, and the endpoint port matches the 8000 that dynamodb-local listens on):
docker run -ti --rm --network myNetworkName lambci/lambda:build-go1.x sh -c \
  'aws configure set aws_access_key_id "xxx" && \
   aws configure set aws_secret_access_key "yyy" && \
   aws --endpoint-url=http://DynamoDBEndpoint:8000 --region=us-east-1 dynamodb list-tables'
Or to invoke a function (Go example; same idea for NodeJS):
#Golang
docker run --rm -v "$PWD":/var/task lambci/lambda:go1.x handlerName '{"some": "event"}'
#Same for NodeJS
docker run --rm -v "$PWD":/var/task lambci/lambda:nodejs10.x index.handler
More Info about lambci/lambda can be found here
Using SAM (which uses the same container, lambci/lambda):
sam local invoke --event myEventData.json --docker-network myNetworkName MyFuncName
You can always use --debug option in case you want to see more details.
Alternatively, you can also use http://host.docker.internal:8000 without the hassle of playing with docker networks. This URL is reserved internally and gives you access to your host machine, but make sure you expose port 8000 when you start the dynamodb container. Although it is quite easy, it doesn't work on all operating systems. For more details about this feature, please check the docker documentation.

As @Paul mentioned, it is about configuring the network between the docker containers - lambda and database.
Another approach that worked for me (using docker-compose).
docker-compose:
version: '2.1'
services:
  db:
    image: ...
    ports:
      - "3306:3306"
    networks:
      - my_network
    environment:
      ...
    volumes:
      ...
networks:
  my_network:
Then, after docker-compose up, running docker network ls will show:
NETWORK ID          NAME             DRIVER    SCOPE
7eb440d5c0e6        dev_my_network   bridge    local
My docker container name is dev_db_1.
My js code is:
const connection = mysql.createConnection({
  host: "dev_db_1",
  port: 3306,
  ...
});
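Hard-coding dev_db_1 ties the code to compose's container-naming scheme. A sketch that prefers an environment override (the DB_HOST / DB_PORT names are my own, not from the original code):

```javascript
// Build mysql connection options, preferring env overrides so the
// same code runs inside and outside the compose network. Falls back
// to the compose-generated container name.
function dbConnectionOptions(env) {
  return {
    host: env.DB_HOST || 'dev_db_1',
    port: Number(env.DB_PORT || 3306),
  }
}
```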
Then, running the sam command:
sam local invoke --docker-network dev_my_network -e my.json
Stack:
Docker: 18.03.1-ce
Docker-compose: 1.21.1
MacOS HighSierra 10.13.6

If you are using LocalStack to run DynamoDB, I believe the correct command to use the LocalStack network for SAM is:
sam local start-api --env-vars env.json --docker-network localstack_default
And in your code, the LocalStack hostname should be localstack_localstack_1
const dynamoDbDocumentClient = new AWS.DynamoDB.DocumentClient({
  endpoint: process.env.AWS_SAM_LOCAL
    ? 'http://localstack_localstack_1:4569'
    : undefined,
});
However, I launched LocalStack using docker-compose up. Using the pip CLI tool to launch LocalStack may result in different identifiers.
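That ternary can be isolated into a function so both branches are easy to test; a minimal sketch using the LocalStack hostname and port from above:

```javascript
// DocumentClient options: point at the LocalStack container when
// running under SAM Local, otherwise let the SDK pick the real
// AWS endpoint (endpoint left undefined).
function documentClientOptions(env) {
  return {
    endpoint: env.AWS_SAM_LOCAL ? 'http://localstack_localstack_1:4569' : undefined,
  }
}
```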

This may be helpful for anyone still facing the same issue:
I also faced the same problem recently. I followed all the steps mentioned by rynop (thanks @rynop).
I fixed the issue (on my Windows machine) by replacing the endpoint (http://localhost:8000) with my private IP address (i.e. http://192.168.8.101:8000) in the following code:
import { DynamoDB, Endpoint } from 'aws-sdk';
const ddb = new DynamoDB({ apiVersion: '2012-08-10' });
if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}

Related

.NET 6 Web API: enable HTTPS certificate on AWS EC2

I have a problem: I can't generate the certificates on AWS EC2 (Amazon Linux).
I'm trying to execute this command over SSH: docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 $MY ECR CONTAINER HERE$
I also tried: docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 -v ASPNETCORE_Kestrel__Certificates__Default__Password=$MY PW$ -v ASPNETCORE_Kestrel__Certificates__Default__Path=%USERPROFILE%/aspnet/https/aspnetapp.pfx $MY CONTAINER$
Error on SSH
My Dockerfile
My Launch Settings
DOTNET INFO ON LINUX AWS
AWS CERTIFICATE MANAGER
It works perfectly on HTTP 80, but not on HTTPS 443; the docker container needs a certificate.
What do I need to do to generate this certificate on AWS Linux?
Edit*
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {f37427eb-3dc8-4d33-9177-92caadc2c880} may be persisted to storage in unencrypted form.
After a lot of searching I found the following answers and my project is now live.
1º I edited my Program.cs so that it uses HTTPS redirection and HSTS, and configured the forwarded headers.
The code follows.
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json",
        "Api Documentation for MyLandingApp");
});
app.UseHsts();
app.UseHttpsRedirection();
app.UseCors("MyLandingAppPolicy");
app.UseForwardedHeaders();
app.Use(async (context, next) =>
{
    if (context.Request.IsHttps || context.Request.Headers["X-Forwarded-Proto"] == Uri.UriSchemeHttps)
    {
        await next();
    }
    else
    {
        string queryString = context.Request.QueryString.HasValue ? context.Request.QueryString.Value : string.Empty;
        var https = "https://" + context.Request.Host + context.Request.Path + queryString;
        context.Response.Redirect(https);
    }
});
app.UseAuthentication();
app.UseAuthorization();
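The redirect decision in that middleware is just string logic; the same decision as a small JavaScript function (a hypothetical shape, kept in JavaScript since the rest of this page's snippets use it), which may help when testing or porting it:

```javascript
// Decide whether a request needs an HTTPS redirect, honouring the
// X-Forwarded-Proto header a load balancer sets. Returns the
// redirect URL, or null when the request is already secure.
function httpsRedirectTarget(req) {
  const proto = req.headers['x-forwarded-proto'] || (req.isHttps ? 'https' : 'http')
  if (proto === 'https') return null
  return 'https://' + req.host + req.path + (req.queryString || '')
}
```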
2º I added some stuff in my appsettings.json:
"https_port": 3001,
3º I changed my Dockerfile to create a self-signed certificate and enable HTTPS on docker run.
Dockerfile
4º I changed the docker container execution string and removed the HTTP port that I wouldn't use anyway; I'll explain later.
docker run --rm -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Production" -e ASPNETCORE_URLS=https://+:3001 $MY CONTAINER IN ESR$
5º I configured the load balancer like this:
HTTP 80 - Load Balancer http80
HTTPS 443 - Load Balancer https443
But there's a trick...
You need to create the target group pointing to the main server, then take the private IP and create a new target group.
Target Group
With this you will have the redirection and certificate configuration done for your API.
Remember that the HTTPS 443 listener needs a valid certificate.

No response from invoke container for FunctionName

Trying to port forward dockerized Lambda to my localhost using command:
$ sam local start-api --docker-network host
Error every time trying to access Lambda:
No response from invoke container for FunctionName
I also tried using host.docker.internal and host.docker.local networks, with no success.
Any ideas? Workarounds?
That doesn't seem to work, but using your host computer's IP address does...
Say your host computer's IP address is 192.168.1.111. You can use that from your Lambda to hit your host.
You can make this configurable:
template.yml:
...
Environment:
  Variables:
    ENDPOINT_URL: null
env.json:
{
  "Parameters": {
    "ENDPOINT_URL": "http://192.168.1.111:5000"
  }
}
lambda_function.py:
...
default_sns_endpoint = f'https://sns.{os.environ["AWS_REGION"]}.amazonaws.com'
endpoint_url = os.environ.get("ENDPOINT_URL", "") or default_sns_endpoint
sns = boto3.client("sns", endpoint_url=endpoint_url)
...
start SAM:
sam local invoke --env-vars env.json
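The empty-string fallback in lambda_function.py is the important detail (the variable may be present but blank). The same selection logic, sketched in JavaScript like the rest of this page's snippets:

```javascript
// Pick the SNS endpoint: an explicit override wins; an empty or
// missing ENDPOINT_URL falls through to the regional default.
function snsEndpoint(env) {
  const fallback = `https://sns.${env.AWS_REGION}.amazonaws.com`
  return env.ENDPOINT_URL || fallback
}
```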

Configuring Concourse CI to use AWS Secrets Manager

I have been trying to figure out how to configure the docker version of Concourse (https://github.com/concourse/concourse-docker) to use AWS Secrets Manager. I added the following environment variables to the docker-compose file, but from the logs it doesn't look like it ever reaches out to AWS to fetch the creds. Am I missing something, or should this happen automatically when these environment variables are added under environment in the docker-compose file? Here are the docs I have been looking at: https://concourse-ci.org/aws-asm-credential-manager.html
version: '3'
services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database
  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["9090:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://XXX.XXX.XXX.XXX:9090
      CONCOURSE_ADD_LOCAL_USER: "test:test"
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_AWS_SECRETSMANAGER_REGION: us-east-1
      CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
      CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
pipeline.yml example:
jobs:
- name: build-ui
plan:
- get: web-ui
trigger: true
- get: resource-ui
- task: build-task
file: web-ui/ci/build/task.yml
- put: resource-ui
params:
repository: updated-ui
force: true
- task: e2e-task
file: web-ui/ci/e2e/task.yml
params:
UI_USERNAME: ((ui-username))
UI_PASSWORD: ((ui-password))
resources:
- name: cf
type: cf-cli-resource
source:
api: https://api.run.pivotal.io
username: ((cf-username))
password: ((cf-password))
org: Blah
- name: web-ui
type: git
source:
uri: git#github.com:blah/blah.git
branch: master
private_key: ((git-private-key))
When storing parameters for concourse pipelines in AWS Secrets Manager, they must follow this syntax:
/concourse/TEAM_NAME/PIPELINE_NAME/PARAMETER_NAME
If you have common parameters that are used across the team in multiple pipelines, use this syntax to avoid creating redundant parameters in secrets manager
/concourse/TEAM_NAME/PARAMETER_NAME
The highest level that is supported is concourse team level.
Global parameters are not possible. Thus these variables in your compose environment will not be supported.
CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
Unless you want to change the prefix /concourse, these parameters shall be left to their defaults.
And when retrieving these parameters in the pipeline, no changes are required in the template. Just pass the PARAMETER_NAME; concourse will handle the lookup in secrets manager according to the team and pipeline name.
...
params:
UI_USERNAME: ((ui-username))
UI_PASSWORD: ((ui-password))
...
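The lookup order implied above (pipeline-scoped first, then team-scoped, under the default /concourse prefix) can be sketched as a tiny helper; the names here are illustrative only:

```javascript
// Candidate Secrets Manager names Concourse tries for ((param))
// with the default templates: pipeline-scoped first, then team-scoped.
function secretLookupPaths(team, pipeline, param) {
  return [
    `/concourse/${team}/${pipeline}/${param}`,
    `/concourse/${team}/${param}`,
  ]
}
```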

Why is the UTC time in C++ 8 seconds ahead of the actual UTC time? [duplicate]

I have a simple Node app which sends messages to AWS SQS. For local development I am providing AWS SDK with region, queueUrl, accessKeyId, secretAccessKey.
Everything works fine until I dockerise the app and run as a container. Then whenever SQS wants to do something I get the following error
{ SignatureDoesNotMatch: Signature expired: 20161211T132303Z is now earlier than 20161211T142227Z (20161211T143727Z - 15 min.)
If I add correctClockSkew: true it corrects the problem.
What is docker doing that requires correctClockSkew: true, when running Node directly on macOS doesn't?
Node app
process.env.TZ = 'Europe/London';
const AWS = require('aws-sdk');
AWS.config.update({
  region: 'eu-west-1',
  correctClockSkew: true // this has to be set when running inside a docker container?
});
const sqs = new AWS.SQS({
  apiVersion: '2012-11-05',
});
sqs.sendMessage({
  QueueUrl: 'https://sqs.eu-west-1.amazonaws.com/522682236448/logback-paddle-prod-errors',
  MessageBody: 'HelloSQS',
}, (err, data) => {
  if (err) throw err;
});
Dockerfile
FROM node
RUN mkdir -p /usr/lib/app
WORKDIR /usr/lib/app
COPY app/ /usr/lib/app/
RUN npm install
CMD ["node", "index.js"]
docker run -d user/image
Edit
Originally I created the question because I kept getting AWS incorrect-time errors; now I am getting them from ElasticSearch too. Why is my container reliably out of sync with the host by about 15 minutes?
Docker runs inside of a VM on Windows and MacOS, and the clock of that VM can get out of sync with that of your laptop's OS. There are quite a few solutions I've seen, mostly one off commands including:
docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)
And from this answer there's:
docker-machine ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
The best solution I've seen for this is to run an ntp container in privileged mode so it can constantly adjust the time on your docker host:
docker run -d --restart unless-stopped --name ntp --privileged tutum/ntpd
See the docker hub repo for more details: https://hub.docker.com/r/tutum/ntpd/
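For reference, what correctClockSkew roughly does under the hood: the SDK reads the server's Date header from a failed response and offsets its own clock when signing retries. A minimal sketch of that arithmetic (not the SDK's actual code), using the timestamps from the error above:

```javascript
// Offset between the server's clock (from a response Date header)
// and the local clock, in milliseconds.
function clockOffsetMs(serverDateHeader, localNow) {
  return new Date(serverDateHeader).getTime() - localNow.getTime()
}

// Apply the offset to get a timestamp acceptable to the server.
function skewedNow(localNow, offsetMs) {
  return new Date(localNow.getTime() + offsetMs)
}
```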

Hyperledger fabric node sdk deploy.js example failing

I'm following these instructions for setting up hyperledger fabric
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
but when I run deploy.js
info: Returning a new winston logger with default configurations
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric CA service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a CryptoKeyStore to save keys, using the store: {"opts":{"path":"/home/ubuntu/.hfc-key-store"}}
I'm able to use the docker CLI but not the node SDK.
Failed to load user "admin" from local key value store
How do I store the admin user?
Fixed after installing couchdb.
docker pull couchdb
docker run -d -p 5984:5984 --name my-couchdb couchdb
The certificate authority services in the docker compose yaml file have a volumes section, e.g.:
ccenv_latest:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
  volumes:
    - ./tmp/ca:/.fabric-ca
You need to make sure the local path is valid, so in the above configuration you need to have ./ccenv and ./tmp/ca directories at the same level as the docker-compose yaml file.
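A quick way to catch an invalid host path before compose starts is to split each mapping and check the host side; a hypothetical helper (parsing only; the actual existence check with fs is left to the caller):

```javascript
// Extract the host side of a compose volume mapping. Only bind
// mounts (host side starting with . or /) refer to local paths
// that must exist; named volumes return null.
function hostPathOf(volumeMapping) {
  const host = volumeMapping.split(':')[0]
  return host.startsWith('.') || host.startsWith('/') ? host : null
}
```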