No response from invoke container for FunctionName - amazon-web-services

Trying to port forward dockerized Lambda to my localhost using command:
$ sam local start-api --docker-network host
Error every time trying to access Lambda:
No response from invoke container for FunctionName
I also tried using host.docker.internal and host.docker.local as the network, with no success.
Any ideas? Workarounds?

That doesn't seem to work, but using your host computer's IP address does.
Say your host computer's IP address is 192.168.1.111. You can use that from your Lambda to hit your host.
You can make this configurable:
template.yml:
...
Environment:
  Variables:
    ENDPOINT_URL: null
env.json:
{
  "Parameters": {
    "ENDPOINT_URL": "http://192.168.1.111:5000"
  }
}
lambda_function.py:
...
default_sns_endpoint = f'https://sns.{os.environ["AWS_REGION"]}.amazonaws.com'
endpoint_url = os.environ.get("ENDPOINT_URL", "") or default_sns_endpoint
sns = boto3.client("sns", endpoint_url=endpoint_url)
...
start SAM:
sam local invoke --env-vars env.json
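The same flag also works with start-api, which is what the question was running:
sam local start-api --env-vars env.json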

Related

How to connect to Mosquitto MQTT Broker, that is running on a Google Cloud Virtual Machine Instance, using mqtt.js

What I am trying to achieve: I have a Mosquitto MQTT broker running on a Google Cloud virtual machine (Ubuntu), and I want to be able to connect to it from my local PC using mqtt.js.
My setup
I have created a VM instance in Google Cloud, running Ubuntu 20.04 LTS.
Some of the settings:
Firewall – allow HTTPS and allow HTTP
Firewall rule – opens port 1883
I installed Mosquitto MQTT Broker (version 1.6.9) on this VM.
I was able to verify the installation and that it was running by opening two SSH terminals, one to publish and one to subscribe:
mosquitto_sub -t test
mosquitto_pub -t test -m "hello"
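(The same clients can also be pointed at the VM from another machine with the -h flag, e.g. mosquitto_sub -h <vm_public_ip_address> -t test, which is a quick way to confirm that port 1883 is reachable.)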
Then I read that when I want to connect to VMs using third-party tools, I must create and upload my own SSH keys to VMs:
ssh-keygen -t rsa -f C:\keys\VM_KEYFILE -b 2048 pwd: ****
I got two files now, the private and public keys:
VM_KEYFILE
VM_KEYFILE.pub
I then used icacls to modify the private key’s permissions:
icacls.exe VM_KEYFILE /reset
icacls.exe VM_KEYFILE /grant:r "$($env:username):(r)"
icacls.exe VM_KEYFILE /inheritance:r
I then successfully connected to the VM from a Windows console:
ssh -i "VM_KEYFILE" username@vm_public_ip_address
So now I want to try and connect using node.js
I already have a javascript file that uses mqtt.js to connect to some of the public MQTT brokers, e.g. HiveMQ
Some of its settings are:
let broker_host = 'broker.hivemq.com';
let broker_port = 1883;
let client_id = 'my_client_1';
const connection_options = {
  port: broker_port,
  host: broker_host,
  clientId: client_id,
  clean: true,
  keepalive: false
};
My question: How would I modify this JavaScript file to connect to the MQTT broker that is running in the Google Cloud VM?
There is no username/password/authentication set up for the broker itself, just the VM.
I tried something like this, but I have no idea how to use the SSH key
let broker_host_gcm_vm = 'https://<vm_public_ip_address>';
UPDATE
I can connect to the broker from both (a) MQTT Explorer and (b) the MQTTX desktop app.
All I have to enter for the connection details is:
Host: mqtt://<ip address>
Port: 1883
Then I can publish / subscribe successfully.
I tried changing my JavaScript connection to the following, but I still can't connect from here:
let broker_host_gcm_vm1 = 'mqtt://<ip address>';
I found the problem.
Let's say the host IP address is 11.22.33.44
The host was none of these:
let broker_host = 'http://11.22.33.44';
let broker_host = 'https://11.22.33.44';
let broker_host = 'mqtt://11.22.33.44';
let broker_host = 'mqtts://11.22.33.44';
But was simply this:
let broker_host = '11.22.33.44';
Simple when you know how :)
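For reference, here is a minimal end-to-end sketch of that connection in mqtt.js (assuming mqtt.js v4+; 11.22.33.44 stands in for the VM's public IP, and the topic and client id are just examples):
const mqtt = require('mqtt');

const client = mqtt.connect({
  host: '11.22.33.44', // bare IP or hostname: no scheme prefix in the options form
  port: 1883,
  clientId: 'my_client_1',
  clean: true
});

client.on('connect', () => {
  // subscribe first, then publish, so the message comes back to us
  client.subscribe('test');
  client.publish('test', 'hello');
});

client.on('message', (topic, message) => {
  console.log(`${topic}: ${message.toString()}`);
});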

How to reference SSM parameter store parameters from within an in-vpc codebuild?

I have an in-VPC CodeBuild which is set up using an ELB as a proxy server (for limited internet access). The buildspec of that CodeBuild references a parameter from the Parameter Store. However, when the build is run, it fails with:
Decrypted Variables Error: RequestError: send request failed caused by: Post https://ssm.ap-northeast-1.amazonaws.com/: dial tcp 52.179.283.42:443: i/o timeout
The proxy server has access to all amazonaws.com endpoints, all HTTP_PROXY variables are set up properly, and in the buildspec I have also mentioned the proxy settings (upload logs/artifacts: true). I am not sure how to fix this issue, or whether it is even allowed to access SSM parameters from an in-VPC CodeBuild.
Can you try adding the env variables in your buildspec in lowercase as well? Go programs generally look for the lowercase versions:
env:
  variables:
    HTTP_PROXY: "http://<proxy_server_hostname>:9480"
    HTTPS_PROXY: "http://<proxy_server_hostname>:9480"
    NO_PROXY: "169.254.169.254,169.254.170.2"
    http_proxy: "http://<proxy_server_hostname>:9480"
    https_proxy: "http://<proxy_server_hostname>:9480"
    no_proxy: "169.254.169.254,169.254.170.2"
phases:
  build:
    commands:
      - curl -v https://ssm.ap-northeast-1.amazonaws.com

Ansible deployment to windows host behind bastion

I am currently successfully using Ansible to run tasks on hosts that are in a private subnet in AWS, which the below group_vars is setting up:
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ec2-user@bastionhost.example.com"'
This is working fine.
For Windows instances not in a private subnet the following group_vars works:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Now, trying to get Ansible to deploy to a Windows server behind the bastion by just using the ProxyCommand won't work - which I understand.
I believe though that there is a new protocol/module I can use called psrp.
I imagine that my group_vars for my Windows hosts needs to change to something like this:
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
If I run with just the above changes against instances that are publicly available (and not trying to connect via a bastion), my task seems to work fine:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/win_shell.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
PSRP: EXEC (via pipeline wrapper)
I know there must be more changes before I can try this on a windows server behind a bastion, but ran it anyway to see what errors I get to give me clues on what to do next. Here is the result when running this on an instance behind a bastion server:
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/windows/setup.ps1
<10.100.11.14> ESTABLISH PSRP CONNECTION FOR USER: Administrator ON PORT 5986 TO 10.100.11.14
The full traceback is:
...
ConnectTimeout: HTTPSConnectionPool(host='10.100.11.14', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x110bbfbd0>, 'Connection to 10.100.11.14 timed out. (connect timeout=30)'))
It seems like Ansible is ignoring the ProxyCommand in my group_vars; I'm not sure if that's expected.
I'm also not sure on what the next steps are to enable Ansible to deploy to Windows servers behind a bastion.
What config am I missing?
The docs say the ansible_ssh_common_args setting is appended to sftp, scp, and ssh commands, so it sounds normal to me that it is not taken into account when the ansible_connection is winrm or psrp.
As explained in the link provided by Pouyan in the comments, the ansible_psrp_proxy variable can be used to provide the proxy information:
ansible_connection: psrp
ansible_psrp_proxy: socks5h://localhost:1234
More info on the creation of the socks proxy can be found on: https://www.bloggingforlogging.com/2018/10/14/windows-host-through-ssh-bastion-on-ansible/
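Putting the pieces together, the Windows group_vars from the question would end up looking something like this (a sketch; it assumes a SOCKS tunnel through the bastion is already open on local port 1234, e.g. via ssh -D 1234 -N ec2-user@bastionhost.example.com):
---
ansible_user: "AnsibleUser"
ansible_password: "Password"
ansible_port: 5986
ansible_connection: psrp
ansible_psrp_cert_validation: ignore
ansible_psrp_proxy: socks5h://localhost:1234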

connecting AWS SAM Local with dynamodb in docker

I've set up an api gateway/aws lambda pair using AWS sam local and confirmed I can call it successfully after running
sam local start-api
I've then added a local dynamodb instance in a docker container and created a table on it using the aws cli
But, having added the code to the lambda to write to the dynamodb instance, I receive:
2018-02-22T11:13:16.172Z ed9ab38e-fb54-18a4-0852-db7e5b56c8cd error:
could not write to table: {"message":"connect ECONNREFUSED
0.0.0.0:8000","code":"NetworkingError","errno":"ECONNREFUSED","syscall":"connect","address":"0.0.0.0","port":8000,"region":"eu-west-2","hostname":"0.0.0.0","retryable":true,"time":"2018-02-22T11:13:16.165Z"}
writing event from command:
{"name":"test","geolocation":"xyz","type":"createDestination"} END
RequestId: ed9ab38e-fb54-18a4-0852-db7e5b56c8cd
I saw online that you might need to connect to the same docker network, so I created one with docker network create lambda-local and changed my start commands to:
sam local start-api --docker-network lambda-local
and
docker run -v "$PWD":/dynamodb_local_db -p 8000:8000 --network=lambda-local cnadiminti/dynamodb-local:latest
but still receive the same error
sam local is printing out 2018/02/22 11:12:51 Connecting container 98b19370ab92f3378ce380e9c840177905a49fc986597fef9ef589e624b4eac3 to network lambda-local
I'm creating the dynamodbclient using:
const AWS = require('aws-sdk')
const dynamodbURL = process.env.dynamodbURL || 'http://0.0.0.0:8000'
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID || '1234567'
const awsAccessKey = process.env.AWS_SECRET_ACCESS_KEY || '7654321'
const awsRegion = process.env.AWS_REGION || 'eu-west-2'
console.log(awsRegion, 'initialising dynamodb in region: ')
let dynamoDbClient
const makeClient = () => {
  dynamoDbClient = new AWS.DynamoDB.DocumentClient({
    endpoint: dynamodbURL,
    accessKeyId: awsAccessKeyId,
    secretAccessKey: awsAccessKey,
    region: awsRegion
  })
  return dynamoDbClient
}
module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
and inspecting the dynamodbclient my code is creating shows
DocumentClient {
  options: {
    endpoint: 'http://0.0.0.0:8000',
    accessKeyId: 'my-key',
    secretAccessKey: 'my-secret',
    region: 'eu-west-2',
    attrValue: 'S8'
  },
  service: Service {
    config: Config {
      credentials: [Object],
      credentialProvider: [Object],
      region: 'eu-west-2',
      logger: null,
      apiVersions: {},
      apiVersion: null,
      endpoint: 'http://0.0.0.0:8000',
      httpOptions: [Object],
      maxRetries: undefined,
      maxRedirects: 10,
      paramValidation: true,
      sslEnabled: true,
      s3ForcePathStyle: false,
      s3BucketEndpoint: false,
      s3DisableBodySigning: true,
      computeChecksums: true,
      convertResponseTypes: true,
      correctClockSkew: false,
      customUserAgent: null,
      dynamoDbCrc32: true,
      systemClockOffset: 0,
      signatureVersion: null,
      signatureCache: true,
      retryDelayOptions: {},
      useAccelerateEndpoint: false,
      accessKeyId: 'my-key',
      secretAccessKey: 'my-secret'
    },
    endpoint: Endpoint {
      protocol: 'http:',
      host: '0.0.0.0:8000',
      port: 8000,
      hostname: '0.0.0.0',
      pathname: '/',
      path: '/',
      href: 'http://0.0.0.0:8000/'
    },
    _clientId: 1
  },
  attrValue: 'S8'
}
Should this setup work? How do I get them talking to each other?
---- edit ----
Based on a Twitter conversation it's worth mentioning (maybe) that I can interact with dynamodb at the CLI and in the web shell.
Many thanks to Heitor Lessa, who answered me on Twitter with an example repo, which pointed me at the answer...
- dynamodb's docker container is on 127.0.0.1 from the context of my machine (which is why I could interact with it)
- SAM local's docker container is on 127.0.0.1 from the context of my machine
- but they aren't on 127.0.0.1 from each other's context
So https://github.com/heitorlessa/sam-local-python-hot-reloading/blob/master/users/users.py#L14 pointed me at changing my connection code to:
const AWS = require('aws-sdk')
const awsRegion = process.env.AWS_REGION || 'eu-west-2'
let dynamoDbClient
const makeClient = () => {
  const options = {
    region: awsRegion
  }
  if (process.env.AWS_SAM_LOCAL) {
    options.endpoint = 'http://dynamodb:8000'
  }
  dynamoDbClient = new AWS.DynamoDB.DocumentClient(options)
  return dynamoDbClient
}
module.exports = {
  connect: () => dynamoDbClient || makeClient()
}
with the important lines being:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://dynamodb:8000'
}
From the context of the SAM local docker container, the dynamodb container is exposed via its name.
My two startup commands ended up as:
docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network lambda-local --name dynamodb cnadiminti/dynamodb-local
and
AWS_REGION=eu-west-2 sam local start-api --docker-network lambda-local
with the only change here being to give the dynamodb container a name
If you're using sam local on a Mac, like a lot of devs, you should be able to just use
options.endpoint = "http://docker.for.mac.localhost:8000"
or, on newer installs of Docker (see https://docs.docker.com/docker-for-mac/release-notes/#docker-community-edition-18030-ce-mac59-2018-03-26),
options.endpoint = "http://host.docker.internal:8000"
instead of having to run multiple commands like Paul showed above (though that approach might be more platform agnostic).
The other answers were overly complicated or unclear for me. Here is what I came up with.
Step 1: use docker-compose to get DynamoDB local running on a custom network
docker-compose.yml
Note the network name abp-sam-backend, the service name dynamo, and that the dynamo service is using the backend network.
version: '3.5'
services:
  dynamo:
    container_name: abp-sam-nestjs-dynamodb
    image: amazon/dynamodb-local
    networks:
      - backend
    ports:
      - '8000:8000'
    volumes:
      - dynamodata:/home/dynamodblocal
    working_dir: /home/dynamodblocal
    command: '-jar DynamoDBLocal.jar -sharedDb -dbPath .'
networks:
  backend:
    name: abp-sam-backend
volumes:
  dynamodata: {}
Start the DynamoDB local container via:
docker-compose up -d dynamo
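At this point you can sanity-check that the local DynamoDB is reachable from the host (this assumes the AWS CLI is installed; DynamoDB local accepts any dummy credentials/region):
aws dynamodb list-tables --endpoint-url http://localhost:8000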
Step 2: Write your code to handle local DynamoDB endpoint
import { DynamoDB, Endpoint } from 'aws-sdk';

const ddb = new DynamoDB({ apiVersion: '2012-08-10' });
if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}
Note that I'm using the hostname alias dynamo. This alias is auto-created for me by docker inside the abp-sam-backend network. The alias name is just the service name.
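(If you want to double-check the alias, docker network inspect abp-sam-backend lists the containers attached to the network along with their names.)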
Step 3: Launch the code via sam local
sam local start-api -t sam-template.yml --docker-network abp-sam-backend --skip-pull-image --profile default --parameter-overrides 'ParameterKey=StageName,ParameterValue=local ParameterKey=DDBTableName,ParameterValue=local-SingleTable'
Note that I'm telling sam local to use the existing network abp-sam-backend that was defined in my docker-compose.yml
End-to-end example
I made a working example (plus a bunch of other features) that can be found at https://github.com/rynop/abp-sam-nestjs
Under the hood, SAM starts a lambci/lambda docker container. If you have another container hosting dynamodb, for example, or any other service you want to connect your lambda to, both should be in the same network.
Suppose you start dynamodb like this (notice --name: that name becomes the endpoint hostname):
docker run -d -p 8000:8000 --name DynamoDBEndpoint amazon/dynamodb-local
This will result in something like this
0e35b1c90cf0....
To know which network this was created inside:
docker inspect 0e35b1c90cf0
It should give you something like
...
"Networks": {
    "services_default": {  // this is the <<myNetworkName>>
        ...
If you already know your networks and want to put the docker container inside a specific network from the start, you can skip the steps above and do it in one command, using the --network option when starting the container:
docker run -d -p 8000:8000 --network myNetworkName --name DynamoDBEndpoint amazon/dynamodb-local
Important: your lambda code should now use DynamoDBEndpoint as the dynamodb endpoint hostname. For example:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = 'http://DynamoDBEndpoint:8000'
}
Testing everything out:
Using lambci/lambda:
This should simply list all the tables inside your other dynamodb container:
docker run -ti --rm --network myNetworkName lambci/lambda:build-go1.x sh -c '
  aws configure set aws_access_key_id "xxx" &&
  aws configure set aws_secret_access_key "yyy" &&
  aws --endpoint-url=http://DynamoDBEndpoint:8000 --region=us-east-1 dynamodb list-tables'
Or to invoke a function (Go example; same idea for NodeJS):
#Golang
docker run --rm -v "$PWD":/var/task lambci/lambda:go1.x handlerName '{"some": "event"}'
#Same for NodeJS
docker run --rm -v "$PWD":/var/task lambci/lambda:nodejs10.x index.handler
More info about lambci/lambda can be found in the project's documentation.
Using SAM (which uses the same lambci/lambda container under the hood):
sam local invoke --event myEventData.json --docker-network myNetworkName MyFuncName
You can always use the --debug option in case you want to see more details.
Alternatively, you can use http://host.docker.internal:8000 without the hassle of playing with docker networks; this URL is reserved internally and gives you access to your host machine, but make sure you expose port 8000 when you start the dynamodb container. Although it is quite easy, it doesn't work on all operating systems. For more details about this feature, please check the docker documentation.
As @Paul mentioned, it is about configuring your network between the docker containers: lambda and database.
Another approach that worked for me (using docker-compose).
docker-compose:
version: '2.1'
services:
  db:
    image: ...
    ports:
      - "3306:3306"
    networks:
      - my_network
    environment:
      ...
    volumes:
      ...
networks:
  my_network:
Then, after docker-compose up, running docker network ls will show:
NETWORK ID          NAME                DRIVER              SCOPE
7eb440d5c0e6        dev_my_network      bridge              local
My docker container name is dev_db_1.
My js code is:
const connection = mysql.createConnection({
  host: "dev_db_1",
  port: 3306,
  ...
});
Then, running the sam command:
sam local invoke --docker-network dev_my_network -e my.json
Stack:
Docker: 18.03.1-ce
Docker-compose: 1.21.1
MacOS HighSierra 10.13.6
If you are using LocalStack to run DynamoDB, I believe the correct command to use the LocalStack network for SAM is:
sam local start-api --env-vars env.json --docker-network localstack_default
And in your code, the LocalStack hostname should be localstack_localstack_1:
const dynamoDbDocumentClient = new AWS.DynamoDB.DocumentClient({
  endpoint: process.env.AWS_SAM_LOCAL
    ? 'http://localstack_localstack_1:4569'
    : undefined,
});
However, I launched LocalStack using docker-compose up. Using the pip CLI tool to launch LocalStack may result in different identifiers.
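If you're unsure what name your LocalStack container actually got, you can list the running container names with:
docker ps --format '{{.Names}}'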
This may be helpful for someone who is still facing the same issue.
I also faced the same problem recently and followed all the steps mentioned by rynop (thanks @rynop).
I fixed the issue (on my Windows machine) by replacing the endpoint (http://localhost:8000) with my (private) IP address (e.g. http://192.168.8.101:8000) in the following code:
import { DynamoDB, Endpoint } from 'aws-sdk';

const ddb = new DynamoDB({ apiVersion: '2012-08-10' });
if (process.env['AWS_SAM_LOCAL']) {
  ddb.endpoint = new Endpoint('http://dynamo:8000');
} else if ('local' == process.env['APP_STAGE']) {
  // Use this when running code directly via node. Much faster iterations than using sam local
  ddb.endpoint = new Endpoint('http://localhost:8000');
}

How to configure git to work with a proxy server

How can someone tweak around the Heroku Toolbelt, i.e. git, and use a proxy server to connect to Heroku? I am trying to connect but it keeps telling me the machine actively denied it access. Help!
You can set the Git proxy using the below commands in the git bash. Set it for both the HTTP and HTTPS proxies.
git config --global http.proxy http://username:password@proxy.server.com:8080
git config --global https.proxy http://username:password@proxy.server.com:8080
// Replace username with your proxy username
// Replace password with your proxy password
// Replace proxy.server.com with the proxy domain URL
// Replace 8080 with the proxy port number configured on the proxy server
Check How to configure Git proxy and How to unset the Git Proxy for more details
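For completeness, the same settings can be removed again with:
git config --global --unset http.proxy
git config --global --unset https.proxy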
Set the HTTP_PROXY and HTTPS_PROXY system environment variables in the following format if the proxy server requires authentication:
HTTP_PROXY=http://username:password@proxy.server.com:portNumber
HTTPS_PROXY=https://username:password@proxy.server.com:portNumber
Use the HTTP_PROXY environment variable, e.g.:
HTTP_PROXY=http://localhost:9999
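On a Unix-like shell these are typically exported before running git (placeholder address):
export HTTP_PROXY=http://localhost:9999
export HTTPS_PROXY=http://localhost:9999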