I'm running rake send_events locally and heroku run rake send_events on Heroku.
values = {
  sendSmsRequest: {
    from: "ABC",
    to: "5581999999999",
    msg: "msg",
    callbackOption: "NONE",
    id: "c_1541"
  }
}
headers = {
  :content_type => 'application/json',
  :authorization => 'Basic xxxxxxxxxxxxxxxxxxxxxxxx',
  :accept => 'application/json'
}
RestClient.post 'https://api-rest.zenvia360.com.br/services/send-sms', values.to_json, headers
The log prints:
RestClient::Exceptions::OpenTimeout
Thanks.
OpenTimeout means that rest-client timed out trying to open a connection to the server.
Are you sure that the server is up and reachable from heroku? I'm not able to reach it from my computer or any server that I tried with.
$ nc -w 10 -v api-rest.zenvia360.com.br 443
nc: connect to api-rest.zenvia360.com.br port 443 (tcp) timed out
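If the host is unreachable at the TCP level, raising rest-client's timeouts will not help. As a sanity check from Ruby itself you can mirror the nc test; this is a minimal sketch, and reachable? is a hypothetical helper, not part of rest-client:

```ruby
require 'socket'

# Hypothetical helper mirroring `nc -w 10 -v host 443`: returns true only if
# a TCP connection to host:port can be opened within the timeout.
def reachable?(host, port, timeout: 10)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue StandardError
  false
end

# e.g. reachable?('api-rest.zenvia360.com.br', 443) came back false when I tried
```

If this also returns false when run from a Heroku dyno (e.g. via heroku run console), the problem is network reachability, not rest-client. If the endpoint is reachable but merely slow, rest-client's :open_timeout and :read_timeout options (accepted by RestClient::Request.execute) let you raise the limits.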
I am migrating a Ruby 2.7 Lambda from zip file deployment to container image deployment.
When I deploy my container to AWS Lambda, the container behaves as I hope that it would.
When I attempt to test the same image locally using docker run {my-image-name}, I encounter two issues:
The parameters are not accessible from the event object in the same manner
The returned headers and status code are not honored the way they are when run in Lambda
My Questions
Do I need to bundle something else into my dockerfile to assist with Lambda simulation?
Do I need to use a different entrypoint in my dockerfile?
I found documentation for the "AWS Lambda Runtime Interface Emulator", but I am unclear if it is designed to assist with the problems I am encountering.
Here is a simplified version of what I am attempting to test.
Dockerfile
FROM public.ecr.aws/lambda/ruby:2.7
COPY * ./
RUN bundle install
CMD [ "lambda_function.LambdaFunctions::Handler.process" ]
Gemfile
source "https://rubygems.org"
lambda_function.rb
require 'json'

module LambdaFunctions
  class Handler
    def self.process(event:, context:)
      json = {
        message: 'This is a placeholder for your lambda code',
        event: event
      }.to_json
      {
        headers: {
          'Access-Control-Allow-Origin': '*'
        },
        statusCode: 200,
        body: json
      }
    rescue => e
      {
        headers: {
          'Access-Control-Allow-Origin': '*'
        },
        statusCode: 500,
        body: { error: e.message }.to_json
      }
    end
  end
end
Running with AWS Lambda
We have a load balancer making this Lambda available
curl -v https://{my-load-balancer-url}/owners?path=owners
Response - note that the load balancer has converted my GET request to a POST request
{
  "message": "This is a placeholder for your lambda code",
  "event": {
    "requestContext": {
      "elb": {...}
    },
    "httpMethod": "POST",
    "path": "/owners",
    "queryStringParameters": {
      "path": "owners"
    },
    "headers": {
      "accept": "*/*",
      "accept-encoding": "gzip, deflate, br",
      "cache-control": "no-cache",
      "connection": "keep-alive",
      "content-length": "0",
      ...
    },
    "body": "",
    "isBase64Encoded": false
  }
}
Running in docker
docker run --rm --name lsample -p 9000:8080 -d {my-image-name}
POST request to local docker
curl -v http://localhost:9000/2015-03-31/functions/function/invocations -d '{"path": "owners"}'
Note that the headers are returned as part of the response body rather than being applied to the HTTP response
Response Headers
< HTTP/1.1 200 OK
< Date: Fri, 18 Dec 2020 19:06:56 GMT
< Content-Length: 166
< Content-Type: text/plain; charset=utf-8
Response Body (formatted)
{
  "headers": {
    "Access-Control-Allow-Origin": "*"
  },
  "statusCode": 200,
  "body": "{\"message\":\"This is a placeholder for your lambda code\",\"event\":{\"path\":\"owners\"}}"
}
GET request to local docker
curl -v http://localhost:9000/2015-03-31/functions/function/invocations?path=owners
No content is returned
< HTTP/1.1 200 OK
< Date: Fri, 18 Dec 2020 19:10:08 GMT
< Content-Length: 0
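Independent of the emulator, the handler's return shape can be checked in plain Ruby. This is a minimal sketch that reproduces the question's handler inline (slightly condensed) so it runs standalone; it is often enough for a quick local check before involving Docker:

```ruby
require 'json'

# The question's handler, reproduced here so the snippet is self-contained.
module LambdaFunctions
  class Handler
    def self.process(event:, context:)
      {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': '*' },
        body: { message: 'This is a placeholder for your lambda code',
                event: event }.to_json
      }
    end
  end
end

# Hand-build an ALB-style event and call the handler directly.
event = { 'path' => '/owners', 'queryStringParameters' => { 'path' => 'owners' } }
response = LambdaFunctions::Handler.process(event: event, context: nil)
puts response[:statusCode]                          # 200
puts JSON.parse(response[:body])['event']['path']   # /owners
```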
Create a Sinatra App to simulate the actions of the Application Load Balancer
Based on the following document, I believe that I need to simulate the actions performed by the AWS Application Load Balancer that fronts my lambda code. See https://docs.aws.amazon.com/lambda/latest/dg/services-alb.html
simulate-lambda-alb/alb_simulate.rb
require 'rubygems'
require 'bundler/setup'
require 'sinatra'
require 'sinatra/base'
require 'json'
require 'httpclient'
# ruby app.rb -o 0.0.0.0
set :port, 8091
set :bind, '0.0.0.0'
get '/web' do
  send_file "../web/index.html"
end

get '/web/' do
  send_file "../web/index.html"
end

get '/web/:filename' do |filename|
  send_file "../web/#{filename}"
end

get '/lambda*' do
  path = params['splat'][0]
  path = path.gsub(/^lambda\//, '')
  event = { path: path, queryStringParameters: params }.to_json
  cli = HTTPClient.new
  url = "#{ENV['LAMBDA_DOCKER_HOST']}/2015-03-31/functions/function/invocations"
  resp = cli.post(url, event)
  body = JSON.parse(resp.body)
  status body['statusCode']
  headers body['headers']
  body['body']
end
simulate-lambda-alb/Dockerfile
FROM ruby:2.7
RUN gem install bundler
COPY Gemfile Gemfile
RUN bundle install
COPY . .
EXPOSE 8091
CMD ["ruby", "alb_simulate.rb"]
docker-compose.yml
version: '3.7'
networks:
  mynet:
services:
  lambda-container:
    container_name: lambda-container
    # your container image goes here, or you can test with the following
    image: cdluc3/mysql-ruby-lambda
    stdin_open: true
    tty: true
    ports:
    - published: 8090
      target: 8080
    networks:
      mynet:
  alb-simulate:
    container_name: alb-simulate
    # this image contains the code in this answer
    image: cdluc3/simulate-lambda-alb
    networks:
      mynet:
    environment:
      LAMBDA_DOCKER_HOST: http://lambda-container:8080
    ports:
    - published: 8091
      target: 8091
    depends_on:
    - lambda-container
Run Sinatra to simulate Application Load Balancer
docker-compose up
Output
curl "http://localhost:8091/lambda?path=test&foo=bar"
{"message":"This is a placeholder for your lambda code","event":{"path":"","queryStringParameters":{"path":"test","foo":"bar","splat":[""]}}}
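One artifact is visible in the output above: Sinatra's wildcard routing adds a "splat" key to params, which leaks into queryStringParameters. A small, hypothetical refinement of the /lambda* route would filter it out before building the event:

```ruby
# params as Sinatra builds them for: GET /lambda?path=test&foo=bar
params = { 'path' => 'test', 'foo' => 'bar', 'splat' => [''] }

# Drop Sinatra's routing artifact so the event's queryStringParameters
# match what a real ALB would send.
query = params.reject { |k, _| k == 'splat' }
puts query.size  # 2
```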
I am trying to connect to a local Redis database on an EC2 instance from a Lambda function. However, when I try to execute the code, I get the following error in the logs:
{
  "errorType": "Error",
  "errorMessage": "Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
  "code": "ECONNREFUSED",
  "stack": [
    "Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
    "    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)"
  ],
  "errno": "ECONNREFUSED",
  "syscall": "connect",
  "address": "127.0.0.1",
  "port": 6379
}
The security group has the following entries
Type: Custom TCP Rule
Port: 6379
Source: <my security group name>
Type: Custom TCP Rule
Port: 6379
Source: 0.0.0.0/0
My Lambda function has the following code.
'use strict';
const Redis = require('redis');

module.exports.hello = async event => {
  var redis = Redis.createClient({
    port: 6379,
    host: '127.0.0.1',
    password: ''
  });
  redis.on('connect', function() {
    console.log("Redis client connected");
  });
  redis.set('age', 38, function(err, reply) {
    console.log(err);
    console.log(reply);
  });
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'The lambda function is called..!!',
        input: event,
        redis: redis.get('age')
      },
      null,
      2
    ),
  };
};
Please let me know where I am going wrong.
First, your Lambda is trying to connect to localhost, so this will not work: inside the Lambda environment, 127.0.0.1 refers to the Lambda itself, not to your EC2 instance. You have to use the public or private IP of the Redis instance instead.
You also need to make sure of the following:
The Lambda should be in the same VPC as your EC2 instance
The Lambda's security group should allow outbound traffic
The Lambda should be assigned a subnet
Your instance's security group should allow the Lambda to connect to Redis
const redis = require('redis');
const redis_client = redis.createClient({
  host: 'your_instance_IP',
  port: 6379
});

exports.handler = (event, context, callback) => {
  redis_client.set("foo", "bar");
  redis_client.get("foo", function(err, reply) {
    redis_client.unref();
    callback(null, reply);
  });
};
You can also look into this related question: how-should-i-connect-to-a-redis-instance-from-an-aws-lambda-function
On Ubuntu Server 20.04 LTS I was seeing a similar error after a reboot of the EC2 server. For our use case, the server runs an Express app via a cron job; the Node.js app (installed with nvm) uses passport.js with sessions stored in Redis:
Redis error: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379
}
Since my Node.js app was running as the ubuntu user, I needed to make the nvm binaries available to cron. What resolved it for me was extending the PATH within /etc/crontab:
sudo nano /etc/crontab
Comment out the original PATH in there so you can switch back if required (my original PATH was PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin) and append the location of the bin directory you need to refer to, in the format:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin
After that, the error disappeared for me.
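For reference, a sketch of the resulting /etc/crontab, assuming the nvm path shown above; the @reboot entry and its paths are hypothetical and stand in for whatever job starts your app:

```
# /etc/crontab -- original PATH kept commented out for easy rollback
# PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin

# hypothetical entry: start the Express app at boot as the ubuntu user
@reboot ubuntu cd /home/ubuntu/app && node server.js >> /var/log/app.log 2>&1
```

Note that /etc/crontab is the system crontab, so each entry names the user (here ubuntu) to run as.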
// redisInit.js
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);

const { redisSecretKey } = process.env;
const redisClient = redis.createClient();

redisClient.on('error', (err) => {
  console.log('Redis error: ', err);
});

const redisSession = session({
  secret: redisSecretKey,
  name: 'some_redis_store_name',
  resave: true,
  saveUninitialized: true,
  cookie: { secure: false },
  store: new RedisStore({
    host: 'localhost', port: 6379, client: redisClient, ttl: 86400
  })
});

module.exports = redisSession;
I am struggling to send emails from my server hosted on AWS Elastic Beanstalk (with certificates from CloudFront). I am using Nodemailer to send emails; it works in my local environment but fails once deployed to AWS.
Email Code:
const transporter = nodemailer.createTransport({
  host: 'mail.email.co.za',
  port: 587,
  auth: {
    user: 'example#email.co.za',
    pass: 'email#22'
  },
  secure: false,
  tls: { rejectUnauthorized: false },
  debug: true
});

const mailOptions = {
  from: 'example#email.co.za',
  to: email,
  subject: 'Password Reset OTP',
  text: `${OTP}`
};

try {
  const response = await transporter.sendMail(mailOptions);
  return { error: false, message: 'OTP successfully sent', response };
} catch (e) {
  return { error: true, message: 'Problems sending OTP, Please try again' };
}
Error from AWS:
504 ERROR
The request could not be satisfied.
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.
NB: The code runs fine on my local machine.
If I have an nginx server managed with Puppet, how would I go about using nginx as a proxy for any path that starts with hostname.com/api, and forward it to hostname.com:8000/api?
Example puppet configuration for server:
nginx::resource::server { "${site_name}":
  listen_port  => 80,
  www_root     => "/var/www/frontend",
  ssl_redirect => false,
  ssl          => true,
  ssl_cert     => "/etc/letsencrypt/live/${site_name}/fullchain.pem",
  ssl_key      => "/etc/letsencrypt/live/${site_name}/privkey.pem",
  ssl_port     => 443,
}
I tried this, but it doesn't seem to be working (it still loads the React app instead of the API's front-facing template from Django):
nginx::resource::location { '/api':
  server => $site_name,
  ssl    => true,
  proxy  => "https://localhost:8000",
}
I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
nodemiral:sess:52.41.84.125 copy file - src: /Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I had been able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my amazon server directly from the terminal. My mup file is:
module.exports = {
  servers: {
    one: {
      host: '##.##.##.###',
      username: 'ec2-user',
      pem: '/Users/Olivia/.ssh/oz-pair.pem'
      // password:
      // or leave blank to authenticate via ssh-agent
    }
  },
  meteor: {
    name: 'CanDu',
    path: '/Users/Olivia/repos/bene_candu_v2',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
      mobileSettings: {
        public: {
          "astronomer": {
            "appId": "<key>",
            "disableUserTracking": false,
            "disableRouteTracking": false,
            "disableMethodTracking": false
          },
          "googleMaps": "<key>",
          "facebook": {
            "permissions": ["email", "public_profile", "user_friends"]
          }
        }
      }
    },
    env: {
      ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://. . .'
    },
    /*ssl: {
      crt: '/opt/keys/server.crt', // this is a bundle of certificates
      key: '/opt/keys/server.key', // this is the private key of the certificate
      port: 443, // 443 is the default value and it's the standard HTTPS port
      upload: false
    },*/
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 60
  }
};
I have checked to make sure there are no trailing commas and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!