AWS Express timed out on port 443 (HTTPS) - amazon-web-services

I have configured a hosted zone in Route 53 with an external domain.
I have uploaded and deployed an Express app with Elastic Beanstalk.
const express = require("express")
const cors = require('cors');

const app = express()
const PORT = process.env.PORT || 8000

connection() // presumably opens the database connection; defined elsewhere in the project

app.use(cors({
  origin: '*'
}));

app.get('/', (req, res) => {
  res.send('Hello World')
})

app.listen(PORT, () => console.log(`Listen on port ${PORT}`))

module.exports = app
I have created an AWS Certificate Manager certificate successfully.
In Elastic Beanstalk > Configuration > Load balancer, I added a listener:
443 | HTTPS : selected my certificate
When I make a request over HTTP (port 80), it works.
But when I make a request over HTTPS, I get a timeout error.
For information, my app works on Heroku with HTTPS.
EDIT:
the problem came from the hosted zone. Thanks for your help
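Since the fix turned out to be in the hosted zone, here is a minimal sketch of the kind of record such a setup typically needs: an alias A record pointing the custom domain at the environment's load balancer. All names and IDs below are hypothetical placeholders, not values from the question.

```javascript
// Hedged sketch: shape of a Route 53 change batch for an alias A record
// that points a custom domain at an Elastic Beanstalk / ELB endpoint.
// HostedZoneId and DNSName are hypothetical placeholders.
const changeBatch = {
  Changes: [
    {
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: 'app.example.com',
        Type: 'A',
        AliasTarget: {
          HostedZoneId: 'ZEXAMPLE12345', // zone ID of the *load balancer*, not your own hosted zone
          DNSName: 'my-env.eu-west-1.elasticbeanstalk.com', // hypothetical EB environment CNAME
          EvaluateTargetHealth: false,
        },
      },
    },
  ],
};
```

The same JSON shape is what `aws route53 change-resource-record-sets --change-batch file://batch.json` expects, so a misconfigured record here produces exactly the "HTTP works, HTTPS times out" symptom when the domain resolves to the wrong endpoint.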

Related

How to configure HTTPS traffic from ALB to Backend IIS Website?

I have a Windows EC2 instance hosting a website in IIS, behind an ALB. I have the following configuration in AWS:
https://mywebsite.mydomain.com -> Application Load Balancer -> Listener (443) -> Target Group -> Windows EC2 Instance (IIS)
I have an SSL certificate configured on the ALB so that all requests from users are always over HTTPS. However, the internal traffic from the ALB to EC2 is on port 80 (HTTP).
I want to configure the internal traffic (ALB -> EC2) over HTTPS using a self-signed certificate. I can create a self-signed certificate and configure it in IIS. However, I am not sure what base route the target group uses to forward traffic to the instance. Does it use the private IP, the machine name, or something else? What should the value of the DNS name for the certificate be?
$dnsname = "xxxxxx"
$cert = New-SelfSignedCertificate -DnsName "$dnsname" -CertStoreLocation "cert:\LocalMachine\My"
Here is the Terraform for the target group:
resource "aws_lb_target_group" "https" {
  name                 = "my-tg-https"
  port                 = 443
  protocol             = "HTTPS"
  target_type          = "instance"
  vpc_id               = var.vpc_id
  deregistration_delay = var.deregistration_delay

  health_check {
    path                = "/"
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 5
    matcher             = "200"
    port                = 443
    protocol            = "HTTPS"
  }

  lifecycle {
    create_before_destroy = true
  }
}

SSL certificates and HTTPS for an AWS-hosted Django site

I have a Django site hosted on Elastic Beanstalk. I have obtained an AWS SSL certificate, and it has been associated with the load balancer's 443 HTTPS port.
In my config file I have:
MIDDLEWARE = [
    ...
    "django.middleware.csrf.CsrfViewMiddleware",
]
CSRF_COOKIE_HTTPONLY = False
SESSION_COOKIE_HTTPONLY = True
With this setup I am able to log in to the site, but the browser displays 'Not secure' in the address bar, and if I prepend 'https://' to the URLs I get a page stating the connection is not private.
If I add
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
Then it becomes impossible to log in (the login page just reloads), or if I go to an incognito browser I get a 'CSRF verification failed. Request aborted.' message.
Apologies for the long question; I've just tried to include any detail that may be relevant.
In settings.py, add your IP and domain to the ALLOWED_HOSTS list.
You can put '*', but it is not recommended: '*' means all hosts are allowed.
ALLOWED_HOSTS = ['your_ip']
I had my load balancer listener configured wrong for port 4443: the instance port and instance protocol were 443 and HTTPS, whereas they should be 80 and HTTP.

Cloud run service-to-service connect ETIMEDOUT

I have two services (B and C) that I want to connect via an API request: service B sends the request to service C.
Both services are connected to the same VPC via a VPC connector, and both are set to route all traffic through the VPC. Service C's ingress is set to allow internal traffic only; service B's ingress is set to all. Both allow unauthenticated invocations.
When I send a request to B, which should forward it to C, the request is sent but ends in an error with the message connect ETIMEDOUT {ip}:443
The code that sends the request:
const auth = new GoogleAuth();
const url = process.env.SERVICE_C;
const client = await auth.getIdTokenClient(url);
const clientHeaders = await client.getRequestHeaders();
const result = await axios.post(process.env.SERVICE_C, req.body, {
  headers: {
    'Content-Type': 'application/json',
    'Authorization': clientHeaders['Authorization'],
  },
});
The env variable is the URL of service C.
What did I not configure correctly?
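For reference, the snippet above reduces to sending a bearer token header to Service C. A small sketch (function name hypothetical) of what Service B ends up sending, assuming `getRequestHeaders()` returned the ID token from `getIdTokenClient` as in the question:

```javascript
// Hedged sketch: the headers Service B sends to Service C. `idToken` is
// assumed to come from google-auth-library's getIdTokenClient, as above;
// getRequestHeaders() already returns it in "Bearer <token>" form.
function buildServiceHeaders(idToken) {
  return {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${idToken}`,
  };
}
```

An ETIMEDOUT here (rather than 401/403) suggests the request never reaches Cloud Run at all, i.e. a routing/ingress problem rather than an auth-header problem.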

Need help: Node.js app deployed on Google Cloud Run, unable to connect to Google Cloud SQL instance

I have the Node.js code below, which connects to a PostgreSQL db using a connection pool:
const express = require("express");
const path = require("path");
var bodyParser = require("body-parser");
var cors = require("cors");

const app = express();
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());
app.use(cors());

const Pool = require("pg").Pool;
const pool = new Pool({
  user: "",
  host: "",
  database: "",
  password: "",
  port: 5432,
});

app.get("/get_que", (req, res) => {
  pool.query(
    "SELECT json FROM surveys where available = TRUE;",
    (error, results) => {
      if (error) {
        console.log(error);
        return res.status(500).json({ error: "query failed" }); // don't fall through: results is undefined on error
      }
      res.json({ data: results.rows });
    }
  );
});
The app above is deployed on Google Cloud Run with a Google Cloud SQL (PostgreSQL) connection configured, but it is unable to connect from the Cloud Run app.
Using the Cloud SQL proxy locally, I am able to connect to the Cloud SQL instance.
I tried the public IP of the Cloud SQL instance in the "host" parameter of the app above, but it does not connect.
How do I connect to the Google Cloud SQL instance from this Node.js app deployed on Google Cloud Run?
Even though the public IP is enabled on the Cloud SQL instance, by default no networks are allowed. See Adding authorized networks.
There is a very detailed guide in Connecting from Cloud Run.
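Following that guide, the usual approach on Cloud Run is to connect over the Unix socket that Cloud Run mounts at /cloudsql/&lt;INSTANCE_CONNECTION_NAME&gt; instead of the instance's public IP. A hedged sketch of the pool configuration (the instance connection name and env var names are hypothetical placeholders):

```javascript
// Hedged sketch: pg Pool config for Cloud Run -> Cloud SQL over the
// mounted Unix socket. The instance connection name below is a
// hypothetical placeholder in "project:region:instance" form.
const instanceConnectionName = 'my-project:us-central1:my-instance';

const poolConfig = {
  user: process.env.DB_USER,     // hypothetical env var names
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  host: `/cloudsql/${instanceConnectionName}`, // Unix socket path; no port needed
};
// const pool = new (require('pg').Pool)(poolConfig);
```

This requires the Cloud SQL connection to be added to the Cloud Run service (so the socket is mounted) and the service account to have the Cloud SQL Client role.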

ERROR: The request could not be satisfied CloudFront

I am struggling to send emails from my server hosted on AWS Elastic Beanstalk, using certificates from CloudFront. I am using Nodemailer to send emails; it works in my local environment but fails once deployed to AWS.
Email Code:
const transporter = nodemailer.createTransport({
  host: 'mail.email.co.za',
  port: 587,
  auth: {
    user: 'example@email.co.za',
    pass: 'email#22'
  },
  secure: false,
  tls: { rejectUnauthorized: false },
  debug: true
});

const mailOptions = {
  from: 'example@email.co.za',
  to: email,
  subject: 'Password Reset OTP',
  text: `${OTP}`
}

try {
  const response = await transporter.sendMail(mailOptions)
  return { error: false, message: 'OTP successfully sent', response }
} catch (e) {
  return { error: true, message: 'Problems sending OTP, Please try again' } // was error:false, which masked failures
}
Error from AWS:
504 ERROR: The request could not be satisfied. CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.
NB: The code runs fine on my local environment.
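One way to make this failure mode visible (a hedged sketch, not a fix from the thread): give the SMTP transport explicit timeouts. If the Elastic Beanstalk security group or network blocks outbound port 587, the connection attempt can otherwise hang long enough for CloudFront to give up and return the 504. connectionTimeout, greetingTimeout, and socketTimeout are standard Nodemailer SMTP transport options.

```javascript
// Hedged sketch: transport options with explicit timeouts so a blocked
// outbound SMTP port fails fast with a clear Nodemailer error instead of
// hanging until the CloudFront 504. Host mirrors the question's placeholder.
const transportOptions = {
  host: 'mail.email.co.za',
  port: 587,
  secure: false,            // STARTTLS upgrade happens on port 587
  connectionTimeout: 10000, // ms to establish the TCP connection
  greetingTimeout: 10000,   // ms to wait for the SMTP server greeting
  socketTimeout: 30000,     // ms of socket inactivity before erroring
};
// const transporter = nodemailer.createTransport({ ...transportOptions, auth: { user, pass } });
```

If sendMail then fails quickly with a connection timeout, the problem is outbound connectivity from the instance, not the mail code itself.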