LocalStack DynamoDB is not working

I am trying to test my AWS resources locally. I found a very nice Docker image (LocalStack) that has almost all the services available and can be used for local testing. However, the DynamoDB service is not working, and I need it for my application.
I have googled a lot, but I cannot find the root cause. Below are my Docker container logs:
docker run -it -p 4567-4578:4567-4578 -p 8080:8080 localstack/localstack
2018-07-30T12:49:17:ERROR:localstack.services.generic_proxy: Error forwarding request: expected string or buffer
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 181, in forward
path=path, data=data, headers=forward_headers)
File "/opt/code/localstack/localstack/services/dynamodb/dynamodb_listener.py", line 35, in forward_request
TypeError: expected string or buffer

I think you are using the wrong port. The answer can be found in LocalStack's issue list: https://github.com/localstack/localstack/issues/675
LocalStack's DynamoDB GUI runs on port 4564. Run the following command, and you will be able to access the GUI at localhost:4564/shell:
docker run -d -p 4569:4569 -p 4564:4564 localstack/localstack:latest
Connection code:
const dynamoose = require('dynamoose');
const AWS = require('aws-sdk');

dynamoose.local('http://localhost:4569');
dynamoose.AWS.config.update({
  region: 'us-east-1',
});

const Purchase = dynamoose.model('test', {
  test: {
    type: String,
    hashKey: true,
  }
}, {
  update: true,
});
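To sanity-check the connection, a quick round trip with the Purchase model defined above can confirm the local table is reachable. This is a minimal sketch under the same setup (the item value 'hello-local' is just an illustrative placeholder):

// Minimal sketch: write and re-read one item with the Purchase model above
// to confirm the local DynamoDB endpoint is actually reachable.
const item = new Purchase({ test: 'hello-local' });

item.save(function (err) {
  if (err) { return console.error('save failed:', err); }
  Purchase.get({ test: 'hello-local' }, function (err, found) {
    if (err) { return console.error('get failed:', err); }
    console.log('round trip OK:', found);
  });
});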

Related

Test an AWS lambda locally using custom container image

I am trying to test the newly added feature of running/invoking a Lambda with a custom container image, so I am building a very simple image from the AWS python:3.8 base image as follows:
FROM public.ecr.aws/lambda/python:3.8
COPY myfunction.py ./
CMD ["myfunction.py"]
And here is myfunction.py
import json
import sys

def lambda_handler(event, context):
    print("Hello AWS!")
    print("event = {}".format(event))
    return {
        'statusCode': 200,
    }
My question is the following: after my build is done:
docker build --tag custom .
how can I now invoke my Lambda, given that I do not expose any web endpoints? I am assuming I am spinning up my custom container successfully, although the empty handler= part of the output below is a little unsettling in terms of whether I have configured the handler appropriately:
▶ docker run -p 9000:8080 -it custom
INFO[0000] exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
A simple curl of course fails
▶ curl -XGET http://localhost:9000
404 page not found
It turns out I have to invoke this extremely non-intuitive URL:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
However, I am still getting this error:
WARN[0149] Cannot list external agents error="open /opt/extensions: no such file or directory"
START RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Version: $LATEST
Traceback (most recent call last):
Handler 'py' missing on module 'myfunction'
END RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0
REPORT RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Init Duration: 1.08 ms Duration: 248.05 ms Billed Duration: 300 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
edit: Solved by changing the CMD from
CMD ["myfunction.py"]
to
CMD ["myfunction.lambda_handler"]

How to solve the error with deploying a model in aws sagemaker?

I have to deploy a custom Keras model in AWS SageMaker. I have created a notebook instance and I have the following files:
AmazonSagemaker-Codeset16
-ann
-nginx.conf
-predictor.py
-serve
-train.py
-wsgi.py
-Dockerfile
I now open the AWS terminal, build the Docker image, and push it to the ECR repository. Then I open a new Jupyter Python notebook and try to fit and deploy the model. Training completes correctly, but while deploying I get the following error:
"Error hosting endpoint sagemaker-example-2019-10-25-06-11-22-366: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..."
When I check the logs, I find the following:
2019/11/11 11:53:32 [crit] 19#19: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 10.32.0.4, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "model.aws.local:8080"
and
Traceback (most recent call last):
  File "/usr/local/bin/serve", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/sagemaker_containers/cli/serve.py", line 19, in main
    server.start(env.ServingEnv().framework_module)
  File "/usr/local/lib/python2.7/dist-packages/sagemaker_containers/_server.py", line 107, in start
    module_app,
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
I tried to deploy the same model with these files from my local computer and it was deployed successfully, but inside AWS I am facing this problem.
Here is my serve file code:
from __future__ import print_function
import multiprocessing
import os
import signal
import subprocess
import sys

cpu_count = multiprocessing.cpu_count()

model_server_timeout = os.environ.get('MODEL_SERVER_TIMEOUT', 60)
model_server_workers = int(os.environ.get('MODEL_SERVER_WORKERS', cpu_count))

def sigterm_handler(nginx_pid, gunicorn_pid):
    try:
        os.kill(nginx_pid, signal.SIGQUIT)
    except OSError:
        pass
    try:
        os.kill(gunicorn_pid, signal.SIGTERM)
    except OSError:
        pass
    sys.exit(0)

def start_server():
    print('Starting the inference server with {} workers.'.format(model_server_workers))

    # link the log streams to stdout/err so they will be logged to the container logs
    subprocess.check_call(['ln', '-sf', '/dev/stdout', '/var/log/nginx/access.log'])
    subprocess.check_call(['ln', '-sf', '/dev/stderr', '/var/log/nginx/error.log'])

    nginx = subprocess.Popen(['nginx', '-c', '/opt/ml/code/nginx.conf'])
    gunicorn = subprocess.Popen(['gunicorn',
                                 '--timeout', str(model_server_timeout),
                                 '-b', 'unix:/tmp/gunicorn.sock',
                                 '-w', str(model_server_workers),
                                 'wsgi:app'])

    signal.signal(signal.SIGTERM, lambda a, b: sigterm_handler(nginx.pid, gunicorn.pid))

    # If either subprocess exits, so do we.
    pids = set([nginx.pid, gunicorn.pid])
    while True:
        pid, _ = os.wait()
        if pid in pids:
            break

    sigterm_handler(nginx.pid, gunicorn.pid)
    print('Inference server exiting')

# The main routine just invokes the start function.
if __name__ == '__main__':
    start_server()
I deploy the model using the following:
predictor = classifier.deploy(1, 'ml.t2.medium', serializer=csv_serializer)
Kindly let me know what mistake I am making while deploying.
Using SageMaker Script Mode can be much simpler than dealing with low-level container and nginx details like you are doing here. Have you considered that?
You only need to provide the Keras training script:
With Script Mode, you can use training scripts similar to those you would use outside SageMaker, together with SageMaker's prebuilt containers for various deep learning frameworks such as TensorFlow, PyTorch, and Apache MXNet.
https://github.com/aws-samples/amazon-sagemaker-script-mode/blob/master/tf-sentiment-script-mode/sentiment-analysis.ipynb
You should ensure that your container can respond to GET /ping requests: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-algo-ping-requests
From the traceback, it looks like the server is failing to start when the container is launched within SageMaker. I would look further up the stack trace and see why the server is failing to start.
You can also try running your container locally to debug any issues. SageMaker starts your container with docker run <image> serve, so you could run the same command and debug your container: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-code-run-image
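If you do run the image locally (for example with docker run -p 8080:8080 <image> serve), you can reproduce the health check yourself before pushing to SageMaker. A minimal sketch, assuming the container listens on port 8080 as the nginx log above suggests:

// Minimal sketch: reproduce SageMaker's GET /ping health check against a
// locally running inference container (assumed to be mapped to port 8080).
const http = require('http');

http.get('http://localhost:8080/ping', (res) => {
  console.log('GET /ping ->', res.statusCode);  // SageMaker expects a 200 here
  res.resume();
}).on('error', (err) => {
  console.error('ping failed (is the server up?):', err.message);
});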
You don't have gunicorn installed; that's why you get the error /tmp/gunicorn.sock failed (2: No such file or directory). You need to add pip install gunicorn and apt-get install nginx to your Dockerfile.

AWS SDK "Signature expired" error (clock skew) when running a Node app in a Docker container

I have a simple Node app which sends messages to AWS SQS. For local development I am providing AWS SDK with region, queueUrl, accessKeyId, secretAccessKey.
Everything works fine until I dockerise the app and run as a container. Then whenever SQS wants to do something I get the following error
{ SignatureDoesNotMatch: Signature expired: 20161211T132303Z is now earlier than 20161211T142227Z (20161211T143727Z - 15 min.)
If I add correctClockSkew: true it corrects the problem.
What is Docker doing that requires correctClockSkew: true, when running Node directly on macOS does not?
Node app
process.env.TZ = 'Europe/London';

const AWS = require('aws-sdk');

AWS.config.update({
  region: 'eu-west-1',
  correctClockSkew: true, // this has to be set when running inside a docker container?
});

const sqs = new AWS.SQS({
  apiVersion: '2012-11-05',
});

sqs.sendMessage({
  QueueUrl: 'https://sqs.eu-west-1.amazonaws.com/522682236448/logback-paddle-prod-errors',
  MessageBody: 'HelloSQS',
}, (err, data) => {
  if (err) throw err;
});
Dockerfile
FROM node
RUN mkdir -p /usr/lib/app
WORKDIR /usr/lib/app
COPY app/ /usr/lib/app/
RUN npm install
CMD ["node", "index.js"]
docker run -d user/image
Edit
Originally I created this question because I kept getting AWS incorrect-time errors; now I am getting them with Elasticsearch too. Why is my container reliably out of sync with the host by about 15 minutes?
Docker runs inside a VM on Windows and macOS, and the clock of that VM can get out of sync with your laptop's OS clock. There are quite a few solutions I've seen, mostly one-off commands, including:
docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)
And from this answer there's:
docker-machine ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
The best solution I've seen for this is to run an ntp container in privileged mode so it can constantly adjust the time on your Docker host:
docker run -d --restart unless-stopped --name ntp --privileged tutum/ntpd
See the docker hub repo for more details: https://hub.docker.com/r/tutum/ntpd/
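If you just want to see how far the container's clock has drifted, you can compare the local time with the Date header returned by an AWS endpoint, which is essentially the signal correctClockSkew uses. A minimal sketch (the SQS hostname is just an example endpoint):

// Minimal sketch: estimate clock skew by diffing the local clock against the
// Date header returned by an AWS endpoint.
const https = require('https');

https.get('https://sqs.eu-west-1.amazonaws.com', (res) => {
  const serverTime = new Date(res.headers.date).getTime();
  const skewSeconds = Math.round((Date.now() - serverTime) / 1000);
  console.log('clock skew vs AWS (seconds):', skewSeconds);
  res.resume();
}).on('error', (err) => console.error('request failed:', err.message));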

Connecting AWS SAM Local with DynamoDB in Docker

I've set up an API Gateway / AWS Lambda pair using AWS SAM Local and confirmed I can call it successfully after running
sam local start-api
I've then added a local DynamoDB instance in a Docker container and created a table on it using the AWS CLI.
But, having added code to the Lambda to write to the DynamoDB instance, I receive:
2018-02-22T11:13:16.172Z ed9ab38e-fb54-18a4-0852-db7e5b56c8cd error:
could not write to table: {"message":"connect ECONNREFUSED
0.0.0.0:8000","code":"NetworkingError","errno":"ECONNREFUSED","syscall":"connect","address":"0.0.0.0","port":8000,"region":"eu-west-2","hostname":"0.0.0.0","retryable":true,"time":"2018-02-22T11:13:16.165Z"}
writing event from command:
{"name":"test","geolocation":"xyz","type":"createDestination"} END
RequestId: ed9ab38e-fb54-18a4-0852-db7e5b56c8cd
I saw online that you might need to connect the containers to the same Docker network, so I created one with docker network create lambda-local and changed my start commands to:
sam local start-api --docker-network lambda-local
and
docker run -v "$PWD":/dynamodb_local_db -p 8000:8000 --network=lambda-local cnadiminti/dynamodb-local:latest
but still receive the same error
sam local is printing out 2018/02/22 11:12:51 Connecting container 98b19370ab92f3378ce380e9c840177905a49fc986597fef9ef589e624b4eac3 to network lambda-local
I'm creating the DynamoDB client using:
const AWS = require('aws-sdk')

const dynamodbURL = process.env.dynamodbURL || 'http://0.0.0.0:8000'
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID || '1234567'
const awsAccessKey = process.env.AWS_SECRET_ACCESS_KEY || '7654321'
const awsRegion = process.env.AWS_REGION || 'eu-west-2'

console.log(awsRegion, 'initialising dynamodb in region: ')

let dynamoDbClient

const makeClient = () => {
  dynamoDbClient = new AWS.DynamoDB.DocumentClient({
    endpoint: dynamodbURL,
    accessKeyId: awsAccessKeyId,
    secretAccessKey: awsAccessKey,
    region: awsRegion
  })
  return dynamoDbClient
}

module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
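As a quick connectivity check, independent of the Lambda itself, the same configuration can be used with the low-level DynamoDB client to list tables against the local endpoint. This is a minimal sketch using the same environment variables and fallback values as above; whether it succeeds depends on where it runs (host vs. inside the SAM container):

// Minimal sketch: confirm the local DynamoDB endpoint is reachable from
// wherever this code actually executes.
const AWS = require('aws-sdk')

const ddb = new AWS.DynamoDB({
  endpoint: process.env.dynamodbURL || 'http://0.0.0.0:8000',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID || '1234567',
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || '7654321',
  region: process.env.AWS_REGION || 'eu-west-2'
})

ddb.listTables({}, (err, data) => {
  if (err) { return console.error('cannot reach local DynamoDB:', err.code) }
  console.log('tables:', data.TableNames)
})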
and inspecting the DynamoDB client my code creates shows:
DocumentClient {
options:
{ endpoint: 'http://0.0.0.0:8000',
accessKeyId: 'my-key',
secretAccessKey: 'my-secret',
region: 'eu-west-2',
attrValue: 'S8' },
service:
Service {
config:
Config {
credentials: [Object],
credentialProvider: [Object],
region: 'eu-west-2',
logger: null,
apiVersions: {},
apiVersion: null,
endpoint: 'http://0.0.0.0:8000',
httpOptions: [Object],
maxRetries: undefined,
maxRedirects: 10,
paramValidation: true,
sslEnabled: true,
s3ForcePathStyle: false,
s3BucketEndpoint: false,
s3DisableBodySigning: true,
computeChecksums: true,
convertResponseTypes: true,
correctClockSkew: false,
customUserAgent: null,
dynamoDbCrc32: true,
systemClockOffset: 0,
signatureVersion: null,
signatureCache: true,
retryDelayOptions: {},
useAccelerateEndpoint: false,
accessKeyId: 'my-key',
secretAccessKey: 'my-secret' },
endpoint:
Endpoint {
protocol: 'http:',
host: '0.0.0.0:8000',
port: 8000,
hostname: '0.0.0.0',
pathname: '/',
path: '/',
href: 'http://0.0.0.0:8000/' },
_clientId: 1 },
attrValue: 'S8' }
Should this setup work? How do I get them talking to each other?
---- edit ----
Based on a Twitter conversation, it's worth mentioning (maybe) that I can interact with DynamoDB from the CLI and in the web shell.
Many thanks to Heitor Lessa, who answered me on Twitter with an example repo,
which pointed me at the answer...
DynamoDB's Docker container is on 127.0.0.1 from the context of my machine (which is why I could interact with it).
SAM Local's Docker container is on 127.0.0.1 from the context of my machine.
But they aren't on 127.0.0.1 from each other's context.
So: https://github.com/heitorlessa/sam-local-python-hot-reloading/blob/master/users/users.py#L14
Pointed me at changing my connection code to:
const AWS = require('aws-sdk')

const awsRegion = process.env.AWS_REGION || 'eu-west-2'

let dynamoDbClient

const makeClient = () => {
  const options = {
    region: awsRegion
  }
  if (process.env.AWS_SAM_LOCAL) {
    options.endpoint = 'http://dynamodb:8000'
  }
  dynamoDbClient = new AWS.DynamoDB.DocumentClient(options)
  return dynamoDbClient
}

module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
with the important lines being:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://dynamodb:8000'
}
From the context of the SAM Local Docker container, the DynamoDB container is exposed via its name.
My two startup commands ended up as:
docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network lambda-local --name dynamodb cnadiminti/dynamodb-local
and
AWS_REGION=eu-west-2 sam local start-api --docker-network lambda-local
with the only change here being to give the dynamodb container a name
If you're using SAM Local on a Mac, like a lot of devs, you should be able to just use
options.endpoint = "http://docker.for.mac.localhost:8000"
Or on newer installs of docker https://docs.docker.com/docker-for-mac/release-notes/#docker-community-edition-18030-ce-mac59-2018-03-26
options.endpoint = "http://host.docker.internal:8000"
This avoids having to run multiple commands like Paul showed above (though that approach might be more platform-agnostic?).
The other answers were overly complicated or unclear for me. Here is what I came up with.
Step 1: use docker-compose to get DynamoDB local running on a custom network
docker-compose.yml
Note the network name abp-sam-backend, the service name dynamo, and that the dynamo service uses the backend network.
version: '3.5'
services:
  dynamo:
    container_name: abp-sam-nestjs-dynamodb
    image: amazon/dynamodb-local
    networks:
      - backend
    ports:
      - '8000:8000'
    volumes:
      - dynamodata:/home/dynamodblocal
    working_dir: /home/dynamodblocal
    command: '-jar DynamoDBLocal.jar -sharedDb -dbPath .'
networks:
  backend:
    name: abp-sam-backend
volumes:
  dynamodata: {}
Start the DynamoDB Local container via:
docker-compose up -d dynamo
Step 2: Write your code to handle local DynamoDB endpoint
import { DynamoDB, Endpoint } from 'aws-sdk';

const ddb = new DynamoDB({ apiVersion: '2012-08-10' });

if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}
Note that I'm using the hostname alias dynamo. This alias is auto-created for me by docker inside the abp-sam-backend network. The alias name is just the service name.
Step 3: Launch the code via sam local
sam local start-api -t sam-template.yml --docker-network abp-sam-backend --skip-pull-image --profile default --parameter-overrides 'ParameterKey=StageName,ParameterValue=local ParameterKey=DDBTableName,ParameterValue=local-SingleTable'
Note that I'm telling sam local to use the existing network abp-sam-backend that was defined in my docker-compose.yml
End-to-end example
I made a working example (plus a bunch of other features) that can be found at https://github.com/rynop/abp-sam-nestjs
SAM starts a Docker container (lambci/lambda) under the hood. If you have another container hosting DynamoDB, for example, or any other service you want your Lambda to connect to, both should be on the same network.
Suppose you start DynamoDB like this (notice --name; this is the endpoint now):
docker run -d -p 8000:8000 --name DynamoDBEndpoint amazon/dynamodb-local
This will result in something like this
0e35b1c90cf0....
To know which network this was created inside:
docker inspect 0e35b1c90cf0
It should give you something like
...
Networks: {
"services_default": {//this is the <<myNetworkName>>
....
If you know your networks and want to put the container on a specific network, you can skip the steps above and do it in one command when starting the container, using the --network option:
docker run -d -p 8000:8000 --network myNetworkName --name DynamoDBEndpoint amazon/dynamodb-local
Important: your Lambda code should now point its DynamoDB endpoint at DynamoDBEndpoint.
For example:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://DynamoDBEndpoint:8000'
}
Testing everything out:
Using lambci/lambda
This should simply list all the tables inside your other DynamoDB container:
docker run -ti --rm --network myNetworkName lambci/lambda:build-go1.x \
aws configure set aws_access_key_id "xxx" && \
aws configure set aws_secret_access_key "yyy" && \
aws --endpoint-url=http://DynamoDBEndpoint:8000 --region=us-east-1 dynamodb list-tables
Or to invoke a function: (Go Example, same as NodeJS)
#Golang
docker run --rm -v "$PWD":/var/task lambci/lambda:go1.x handlerName '{"some": "event"}'
#Same for NodeJS
docker run --rm -v "$PWD":/var/task lambci/lambda:nodejs10.x index.handler
More Info about lambci/lambda can be found here
Using SAM (which uses the same lambci/lambda container):
sam local invoke --event myEventData.json --docker-network myNetworkName MyFuncName
You can always use --debug option in case you want to see more details.
Alternatively, you can use http://host.docker.internal:8000 without the hassle of playing with Docker networks. This hostname is reserved internally and gives you access to your host machine, but make sure you expose port 8000 when you start the DynamoDB container. Although this is quite easy, it doesn't work on all operating systems; for more details about this feature, please check the Docker documentation.
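Combining this with the AWS_SAM_LOCAL check used in the other answers, the endpoint selection might look like the following sketch (host.docker.internal is the Docker-provided alias discussed above; the fallbacks are placeholders):

// Minimal sketch: choose the DynamoDB endpoint depending on where the code runs.
const AWS = require('aws-sdk')

let endpoint
if (process.env.AWS_SAM_LOCAL) {
  // Inside the SAM-launched container: reach the host through Docker's alias.
  endpoint = 'http://host.docker.internal:8000'
} else {
  // Running directly on the host, e.g. plain `node index.js`.
  endpoint = 'http://localhost:8000'
}

const documentClient = new AWS.DynamoDB.DocumentClient({ endpoint, region: 'eu-west-2' })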
As @Paul mentioned, it is about configuring the network between the Docker containers (Lambda and database).
Another approach that worked for me (using docker-compose).
docker-compose:
version: '2.1'
services:
  db:
    image: ...
    ports:
      - "3306:3306"
    networks:
      - my_network
    environment:
      ...
    volumes:
      ...
networks:
  my_network:
Then, after docker-compose up, running docker network ls will show:
NETWORK ID NAME DRIVER SCOPE
7eb440d5c0e6 dev_my_network bridge local
My docker container name is dev_db_1.
My js code is:
const connection = mysql.createConnection({
  host: "dev_db_1",
  port: 3306,
  ...
});
Then, running the sam command:
sam local invoke --docker-network dev_my_network -e my.json
Stack:
Docker: 18.03.1-ce
Docker-compose: 1.21.1
MacOS HighSierra 10.13.6
If you are using LocalStack to run DynamoDB, I believe the correct command to use the LocalStack network for SAM is:
sam local start-api --env-vars env.json --docker-network localstack_default
And in your code, the LocalStack hostname should be localstack_localstack_1
const dynamoDbDocumentClient = new AWS.DynamoDB.DocumentClient({
  endpoint: process.env.AWS_SAM_LOCAL
    ? 'http://localstack_localstack_1:4569'
    : undefined,
});
However, I launched LocalStack using docker-compose up. Using the pip CLI tool to launch LocalStack may result in different identifiers.
This may be helpful for anyone who is still facing the same issue:
I also faced the same problem recently and followed all the steps mentioned by rynop (thanks @rynop).
I fixed the issue (on my Windows machine) by replacing the endpoint (http://localhost:8000) with my private IP address (i.e. http://192.168.8.101:8000) in the following code:
import { DynamoDB, Endpoint } from 'aws-sdk';

const ddb = new DynamoDB({ apiVersion: '2012-08-10' });

if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}

Hyperledger Fabric Chaincode Deployment - Dockerfile missing

I would like to check whether you have encountered the following issue while testing the asset_management_with_roles example.
I'm running a 4-node validating-peer setup with 1 membersrvc, all as Docker containers. All setup steps have been followed, but this still does not work.
I also saw in the code that a default Docker image, hyperledger/fabric-baseimage, is needed for chaincode; I built that from source as well, but to no avail.
On deploying the chaincode, the "docker-compose up" console shows the following message.
CURL command to deploy:
curl -XPOST -d '{"jsonrpc": "2.0", "method": "deploy", "params": {"type": 1,"chaincodeID": { "name":"myam1","path": "github.com/hyperledger/fabric/examples/chaincode/go/asset_management_with_roles","language": "GOLANG"}, "ctorMsg": { "args": ["init"] }, "metadata":[97, 115, 115, 105, 103, 110, 101, 114] ,"secureContext": "assigner"} ,"id": 0}' http://192.168.99.100:7050/chaincode
----------- Error Message on Deploy --------------------
vp2_1 | 07:50:51.447 [dockercontroller] deployImage -> ERRO 049 Error building images: API error (500): {"message":"Cannot locate specified Dockerfile: Dockerfile"}
I think the REST interface was discontinued in Hyperledger Fabric 1.0, so the above command will not work.
The issue was related to a different version of the file at the container level. Instead of rebuilding from source, I downloaded the latest Docker container images, and everything went fine after that.