I am struggling to send emails via my server hosted on AWS Elastic Beanstalk, behind CloudFront (using certificates from CloudFront). I am using Nodemailer to send the emails. It works in my local environment but fails once deployed to AWS.
Email Code:
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: 'mail.email.co.za',
  port: 587,
  auth: {
    user: 'example#email.co.za',
    pass: 'email#22'
  },
  secure: false,
  tls: { rejectUnauthorized: false },
  debug: true
});
const mailOptions = {
  from: 'example#email.co.za',
  to: email,
  subject: 'Password Reset OTP',
  text: `${OTP}`
};
try {
  const response = await transporter.sendMail(mailOptions);
  return { error: false, message: 'OTP successfully sent', response };
} catch (e) {
  return { error: true, message: 'Problems sending OTP, Please try again' };
}
Error from AWS:
504 ERROR: The request could not be satisfied. CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.
NB: the code runs fine on my local machine.
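A quick way to narrow this down, independent of the HTTP request that CloudFront is timing out on, is to check whether the Elastic Beanstalk instance can reach the SMTP host at all. This is a small sketch (not part of the original code) that reuses the transporter defined above; if verify() hangs or fails on the instance but succeeds locally, outbound traffic to port 587 (security group / network ACL) is the likely culprit, and that hang is what surfaces as CloudFront's 504:

// Sketch: confirm SMTP connectivity from the deployed environment.
transporter.verify((err, success) => {
  if (err) {
    console.error('SMTP connection failed:', err);
  } else {
    console.log('SMTP server is reachable and ready:', success);
  }
});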
I have configured hosted zones in Route 53 with an external domain.
I have uploaded and deployed an Express app with Elastic Beanstalk.
const express = require("express")
const cors = require('cors');
const app = express()
const PORT = process.env.PORT || 8000
connection()
app.use(cors({
origin: '*'
}));
app.get('/', (req, res) => {
res.send('Hello World')
})
app.listen(PORT, () => console.log(`Listen on port ${PORT}`))
module.exports = app
I have successfully created a certificate in AWS Certificate Manager.
In Elastic Beanstalk > Configuration > Load balancer, I added a listener:
443 | HTTPS: selected my certificate
When I make a request over HTTP (port 80), it works.
But when I make a request over HTTPS, I get a timeout error.
For information, my app works on Heroku with HTTPS.
EDIT:
The problem came from the hosted zones. Thanks for your help.
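Unrelated to the hosted-zone fix above, but a common companion setting when TLS terminates at the Elastic Beanstalk load balancer is to let Express trust the proxy headers, for example to redirect plain HTTP to HTTPS. A hedged sketch extending the app from the snippet above, assuming the load balancer sets the usual X-Forwarded-Proto header:

// Sketch: behind the load balancer the app only ever sees plain HTTP;
// X-Forwarded-Proto carries the scheme the client actually used.
app.enable('trust proxy');

app.use((req, res, next) => {
  if (req.secure || req.headers['x-forwarded-proto'] === 'https') {
    return next();
  }
  // Redirect plain-HTTP requests to HTTPS (assumes the 443 listener is configured).
  res.redirect(`https://${req.headers.host}${req.originalUrl}`);
});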
I have to services (b & c) that i want to connect together via a api request. Service B send the request to service C.
Both services are connected to the same VPC via a VPC Connector. Both are set to route all traffic via the VPC. Service C's ingress is set to allow internal traffic only. Service B's ingress is set to all. Both allow unauthenticated.
When I send a request to B which should forward it to C the request is send but ends up in an error with message connect ETIMEDOUT {ip}:443
The code that sends the request:
const { GoogleAuth } = require('google-auth-library');
const axios = require('axios');

const auth = new GoogleAuth();
const url = process.env.SERVICE_C;

const client = await auth.getIdTokenClient(url);
const clientHeaders = await client.getRequestHeaders();

const result = await axios.post(process.env.SERVICE_C, req.body, {
  headers: {
    'Content-Type': 'application/json',
    'Authorization': clientHeaders['Authorization'],
  },
});
The env variable SERVICE_C is the URL of service C.
What did I not configure correctly yet?
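For what it's worth, the ID token client returned by google-auth-library can also issue the request itself, so the Authorization header does not need to be copied into axios by hand. A minimal sketch of that variant, using the same SERVICE_C environment variable; it does not change the networking picture, since the ETIMEDOUT points at the VPC connector / ingress path rather than at authentication:

// Sketch: let the IdTokenClient attach the identity token itself.
const client = await auth.getIdTokenClient(process.env.SERVICE_C);

const response = await client.request({
  url: process.env.SERVICE_C,
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  data: req.body, // forward the incoming body, as in the snippet above
});

console.log(response.status, response.data);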
I have set up Firebase Functions to receive HTTP requests and have verified that this works. Now I'm trying to send an HTTP request to Firebase from an AWS Lambda function, but there is no response in either the AWS Lambda logs or the Firebase Functions logs. This is my AWS Lambda code:
const https = require('https');

const postData = JSON.stringify({
  "queryresult": {
    "parameters": {
      "on": "1",
      "device": "1",
      "off": ""
    }
  }
});

const options = {
  hostname: 'https://<the firebase function endpoint>',
  port: 443,
  path: '',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(postData)
  }
};

const req = https.request(options, postData)
  .then((response) => {
    console.log(response);
  })
  .catch((err) => {
    console.log(err);
  });

// Write data to request body
req.write(postData);
req.end();
The promise part here is supposed to execute the console logs, but it is not getting executed. Is there something I'm missing here? The hostname is the URL that we obtain when we deploy a function. Or is there some Firebase- or AWS-related plan problem? I'm using the Spark plan in Firebase. Thank you.
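Two things stand out in the snippet, independent of any Firebase plan limits: options.hostname must be a bare host name (no https:// prefix; the path goes in options.path), and https.request does not return a promise, so the .then/.catch chain never runs. A hedged sketch of the callback form is below; the host and path are placeholders for the deployed function's URL, and the Content-Type is switched to JSON to match the body being sent:

const https = require('https');

const postData = JSON.stringify({
  queryresult: { parameters: { on: '1', device: '1', off: '' } }
});

const options = {
  hostname: '<region>-<project-id>.cloudfunctions.net', // bare host, no scheme
  port: 443,
  path: '/<functionName>',                              // path of the deployed function
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',                 // the body above is JSON
    'Content-Length': Buffer.byteLength(postData)
  }
};

const req = https.request(options, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(res.statusCode, body));
});

req.on('error', (err) => console.error(err));

req.write(postData);
req.end();

In a Lambda handler the invocation also has to stay alive until the response arrives, for example by wrapping the request in a Promise and awaiting it before returning.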
I am trying to connect to a local Redis database on an EC2 instance from a Lambda function. However, when I try to execute the code, I get the following error in the logs:
{
  "errorType": "Error",
  "errorMessage": "Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
  "code": "ECONNREFUSED",
  "stack": [
    "Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
    " at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)"
  ],
  "errno": "ECONNREFUSED",
  "syscall": "connect",
  "address": "127.0.0.1",
  "port": 6379
}
The security group has the following entries
Type: Custom TCP Rule
Port: 6379
Source: <my security group name>
Type: Custom TCP Rule
Port: 6379
Source: 0.0.0.0/0
My Lambda function has the following code.
'use strict';
const Redis = require('redis');

module.exports.hello = async event => {
  var redis = Redis.createClient({
    port: 6379,
    host: '127.0.0.1',
    password: ''
  });

  redis.on('connect', function () {
    console.log("Redis client connected");
  });

  redis.set('age', 38, function (err, reply) {
    console.log(err);
    console.log(reply);
  });

  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'The lambda function is called..!!',
        input: event,
        redis: redis.get('age')
      },
      null,
      2
    ),
  };
};
Please let me know where I am going wrong.
First thing: your Lambda is trying to connect to localhost, so this will not work. You have to use the public or private IP of the Redis instance instead.
But you still need to make sure of these things:
The Lambda should be in the same VPC as your EC2 instance.
The Lambda's security group should allow outbound traffic.
Assign a subnet to the Lambda.
Your instance's security group should allow the Lambda to connect to Redis.
const redis = require('redis');

const redis_client = redis.createClient({
  host: 'your_instance_IP',
  port: 6379
});

exports.handler = (event, context, callback) => {
  redis_client.set("foo", "bar");
  redis_client.get("foo", function (err, reply) {
    redis_client.unref();
    callback(null, reply);
  });
};
You can also look into this: how-should-i-connect-to-a-redis-instance-from-an-aws-lambda-function
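One more note on the question's return value: with the callback-based node_redis v3 API, redis.get('age') used directly inside the response body does not return the stored value. A small sketch of one way to return the value from an async handler, assuming node_redis v3 and a reachable Redis host (the REDIS_HOST variable is a placeholder):

'use strict';
const { promisify } = require('util');
const Redis = require('redis');

module.exports.hello = async (event) => {
  const client = Redis.createClient({
    host: process.env.REDIS_HOST, // private IP of the EC2 instance, not 127.0.0.1
    port: 6379
  });

  // Wrap the callback-style commands so they can be awaited.
  const setAsync = promisify(client.set).bind(client);
  const getAsync = promisify(client.get).bind(client);

  await setAsync('age', '38');
  const age = await getAsync('age');
  client.quit();

  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'The lambda function is called..!!', redis: age }, null, 2)
  };
};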
On an Ubuntu Server 20.04 LTS EC2 instance I was seeing a similar error after a reboot. In our case a cron job starts an Express app (Node.js installed with nvm) that uses Passport.js and stores sessions in Redis:
Redis error: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 6379
}
What resolved it for me (my Node.js app runs as the ubuntu user, so I needed to make that path available) was to add the Node binary location to the PATH inside /etc/crontab:
sudo nano /etc/crontab
Comment out the original PATH line so you can switch back if required (my original PATH was PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin) and append the location of the bin directory you need to refer to, in the format:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin
And the error disappeared for me.
// redisInit.js
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);
const { redisSecretKey } = process.env;
const redisClient = redis.createClient();
redisClient.on('error', (err) => {
console.log('Redis error: ', err);
});
const redisSession = session({
secret: redisSecretKey,
name: 'some_redis_store_name',
resave: true,
saveUninitialized: true,
cookie: { secure: false },
store: new RedisStore(
{
host: 'localhost', port: 6379, client: redisClient, ttl: 86400
}
)
});
module.exports = redisSession;
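For context, this is roughly how the module above is meant to be wired into the Express app; a sketch only, since the app file is not part of the answer (the file name and mount order here are assumptions):

// app.js (sketch): mount the Redis-backed session middleware before the routes.
const express = require('express');
const passport = require('passport');
const redisSession = require('./redisInit');

const app = express();

app.use(redisSession);          // sessions stored in Redis via connect-redis
app.use(passport.initialize());
app.use(passport.session());    // passport reads/writes the session created above

// ... routes go here ...

app.listen(process.env.PORT || 3000);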
I have a React application linked to a Django backend on two separate servers. I am using DRF for Django and I allowed CORS using django-cors-headers. For some reason, when I POST to the backend with curl, the request goes through. However, when I POST with axios, I get an error: the axios POST request is marked as failed and takes more than 10 seconds to complete. My code was working locally (both the React and Django code), but when I deployed to an AWS EC2 Ubuntu instance, the axios requests stopped working.
Console error logs
OPTIONS http://10.0.3.98:8000/token-auth/ net::ERR_CONNECTION_TIMED_OUT
{
  "config": {
    "transformRequest": {},
    "transformResponse": {},
    "timeout": 0,
    "xsrfCookieName": "XSRF-TOKEN",
    "xsrfHeaderName": "X-XSRF-TOKEN",
    "maxContentLength": -1,
    "headers": {
      "Accept": "application/json, text/plain, */*",
      "Content-Type": "application/json;charset=UTF-8",
      "Access-Control-Allow-Origin": "*"
    },
    "method": "post",
    "url": "http://10.0.3.98:8000/token-auth/",
    "data": "{\"username\":\"testaccount\",\"password\":\"testpassword\"}"
  },
  "request": {}
}
Here is my request code
axios.post('http://10.0.3.98:8000/token-auth/',
  JSON.stringify(data),
  {
    mode: 'no-cors',
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*'
    },
  },
).then(res =>
  console.log(JSON.stringify(res))
).catch(err =>
  console.log(JSON.stringify(err))
);
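Two details in this snippet are worth flagging, although they are not the root cause of the timeout (the updates and answers below point at the network path): mode: 'no-cors' is a fetch() option that axios ignores, and Access-Control-Allow-Origin is a response header set by the server (here by django-cors-headers), not something the client should send. A trimmed sketch of the same call:

// Sketch: same request without the fetch-only option and the server-side CORS header.
// axios serializes a plain object to JSON itself, so JSON.stringify is not needed.
axios.post('http://10.0.3.98:8000/token-auth/', data, {
  headers: { 'Content-Type': 'application/json' },
}).then(res => {
  console.log(res.data);
}).catch(err => {
  console.log(err.message);
});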
My curl command that worked:
curl -d '{"username":"testaccount", "password":"testpassword"}' -H "Content-Type: application/json" -X POST http://10.0.3.98:8000/token-auth/
UPDATE 1
On Firefox I am getting the warning:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://10.0.3.98:8000/token-auth/. (Reason: CORS request did not succeed).
UPDATE 2
Perhaps it has something to do with my AWS VPC and subnets? My Django server is in a private subnet while my React app is in a public subnet.
UPDATE 3 - my idea of what the problem is
I think the reason my axios requests aren't working is that the requests I'm making set the Origin request header to http://18.207.204.70:3000 (the public/external IP address) instead of the private/internal IP address, which is http://10.0.2.219:3000. I've read that Origin is a forbidden header, so it can't be changed. How can I set the origin then? Do I have to use a proxy, and how would I do that?
Try this HTTP client instead of axios. It's called superagent (https://www.npmjs.com/package/superagent); just install it in your React app via npm:
npm i superagent
and use this instead of axios:
import request from 'superagent'

const payload = {
  "1": this.state.number,
  "2": this.state.message
}

request.post('LINK HERE')
  .set('Content-Type', 'application/x-www-form-urlencoded')
  .send(payload)
  .end((err, res) => {          // arrow function keeps `this` bound to the component
    if (res.text === 'success') {
      this.setState({
        msgAlert: 'Message Sent!',
      })
    } else {
      console.log('message failed/error')
    }
  });
The issue here is that the request is being made from the client browser. You need to either use a reverse proxy or send the request directly to the API server. You cannot rely on local SSH forwarding either.
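A minimal sketch of the reverse-proxy idea, assuming the public instance serves the React app from Node/Express and that http-proxy-middleware is installed; the browser then only talks to the public host, which forwards API calls to the Django server in the private subnet (the path and IP below are the ones from the question):

// Sketch: reverse proxy on the public instance, same origin as the React app.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Forward API paths to the Django server in the private subnet.
app.use('/token-auth', createProxyMiddleware({
  target: 'http://10.0.3.98:8000', // private IP from the question
  changeOrigin: true,              // rewrite the Host header to the target
}));

// Serve the built React app from the same origin, e.g.:
// app.use(express.static('build'));

app.listen(3000, () => console.log('Proxy and frontend listening on port 3000'));

With this in place the browser requests /token-auth/ on the same origin it loaded the app from, so no cross-origin preflight ever has to reach the private subnet.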