Cloud Run service-to-service connect ETIMEDOUT - google-cloud-platform

I have two services (B & C) that I want to connect via an API request: service B sends the request to service C.
Both services are connected to the same VPC via a VPC connector, and both are set to route all traffic through the VPC. Service C's ingress is set to allow internal traffic only; service B's ingress is set to all. Both allow unauthenticated invocations.
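For reference, that configuration corresponds roughly to the following deploy flags (a sketch only; the service and connector names are placeholders, not taken from the post):

gcloud run deploy service-c \
  --ingress internal \
  --allow-unauthenticated \
  --vpc-connector my-connector \
  --vpc-egress all-traffic

gcloud run deploy service-b \
  --ingress all \
  --allow-unauthenticated \
  --vpc-connector my-connector \
  --vpc-egress all-traffic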
When I send a request to B, which should forward it to C, the request is sent but ends in an error: connect ETIMEDOUT {ip}:443
The code that sends the request:
const { GoogleAuth } = require('google-auth-library');
const axios = require('axios');

const auth = new GoogleAuth();
const url = process.env.SERVICE_C;

// Mint an ID token whose audience is service C's URL.
const client = await auth.getIdTokenClient(url);
const clientHeaders = await client.getRequestHeaders();

const result = await axios.post(url, req.body, {
  headers: {
    'Content-Type': 'application/json',
    'Authorization': clientHeaders['Authorization'],
  },
});
The env variable contains the URL of service C.
What have I not configured correctly?

Related

SignatureDoesNotMatch on AWS signed v4 PUT request presigned url

Currently, I have an issue creating a valid Signature V4 presigned URL for a PUT request.
The URLs are generated on the server side and are then provided to clients.
The clients should use the URLs to upload a file through an API Gateway into an Amazon S3 bucket.
To authenticate the request, API Gateway IAM authentication is used.
For my use case, a direct upload into an S3 bucket via an S3 presigned URL is not possible.
The following code describes the generation of the presigned URL and is written in TypeScript. The generation of the Signature V4 URL is based on the AWS-provided package @aws-sdk/signature-v4.
import { SignatureV4 } from "@aws-sdk/signature-v4";
import { Sha256 } from "@aws-crypto/sha256-js";
import { formatUrl } from "@aws-sdk/util-format-url";

const createSignedUrl = async (credentials: {
  accessKeyId: string,
  secretAccessKey: string,
  sessionToken: string,
}, requestParams: {
  method: "GET" | "PUT",
  host: string,
  protocol: string,
  path: string,
}) => {
  const sigv4 = new SignatureV4({
    service: "execute-api",
    region: process.env.AWS_REGION!,
    credentials: {
      accessKeyId: credentials.accessKeyId,
      secretAccessKey: credentials.secretAccessKey,
      sessionToken: credentials.sessionToken,
    },
    sha256: Sha256,
    applyChecksum: false,
  });
  const signedUrlRequest = await sigv4.presign({
    method: requestParams.method,
    hostname: requestParams.host,
    path: requestParams.path,
    protocol: requestParams.protocol,
    headers: {
      host: requestParams.host,
    },
  }, {
    expiresIn: EXPIRES_IN, // defined elsewhere, in seconds
  });
  const signedUrl = formatUrl(signedUrlRequest);
  return signedUrl;
};
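For context, a minimal usage sketch; the credentials and host below are placeholders I made up, not values from the question:

// Inside an async context:
const url = await createSignedUrl({
  accessKeyId: "ASIA...",    // placeholder temporary credentials
  secretAccessKey: "...",
  sessionToken: "...",
}, {
  method: "PUT",
  host: "abc123.execute-api.eu-west-1.amazonaws.com", // placeholder API Gateway host
  protocol: "https:",
  path: "/prod/upload",
});
console.log(url); // contains X-Amz-Signature, X-Amz-Credential, etc.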
I use Postman to test the presigned URLs.
If I generate a presigned URL for a GET request, everything works fine.
If I generate a presigned URL for a PUT request and don't set a body in Postman for the PUT request, everything works fine. But I have an empty file in my bucket ;-(.
If I generate a presigned URL for a PUT request and set a body in Postman (via Body -> binary -> [select file]), it fails!
Error message:
The request signature we calculated does not match the signature you provided. ...
The AWS documentation https://docs.aws.amazon.com/general/latest/gr/create-signed-request.html describes that the payload has to be hashed within the canonical request. But I don't have the payload at that time.
Is there also an UNSIGNED-PAYLOAD option if I want to generate a presigned URL for a PUT request that is sent to an API Gateway, as described in the documentation for the Amazon S3 service?
How do I configure the SignatureV4 object or the presign(...) call to generate a valid PUT request URL with UNSIGNED-PAYLOAD?
I was able to compare my generated canonical requests with the canonical request expected by Amazon API Gateway.
API Gateway always expects a hash of the payload, no matter whether I add the query param X-Amz-Content-Sha256=UNSIGNED-PAYLOAD to the URL or not.
Thus UNSIGNED-PAYLOAD as the canonical-request hash value is not possible with API Gateway IAM authentication, as it would be with the Amazon S3 service.
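For contrast, this is the S3 case where unsigned-payload presigning does work, using the high-level presigner from @aws-sdk/s3-request-presigner; a minimal sketch with made-up bucket and key names:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Placeholder bucket/key; the resulting PUT URL is signed with
// UNSIGNED-PAYLOAD, so any body can be uploaded against it.
const s3 = new S3Client({ region: process.env.AWS_REGION });
const putUrl = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: "my-bucket", Key: "upload.bin" }),
  { expiresIn: 300 }
);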

AWS: How to invoke SSM Param Store using a private DNS endpoint from a Lambda function (Node.js)

I have a requirement where credentials need to be stored in SSM Parameter Store and read by a Lambda function that sits inside a VPC, and all the subnets inside my VPC are public subnets.
So when I call SSM Parameter Store using the code below, I get a timed-out error.
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'us-east-1',
});

const parameterStore = new AWS.SSM();

exports.handler = async (event, context, callback) => {
  console.log('calling param store');
  const param = await getParam('/my/param/name');
  console.log('param : ', param);
  // Send API response
  return {
    statusCode: '200',
    body: JSON.stringify('able to connect to param store'),
    headers: {
      'Content-Type': 'application/json',
    },
  };
};

const getParam = param => {
  return new Promise((res, rej) => {
    parameterStore.getParameter({
      Name: param,
    }, (err, data) => {
      if (err) {
        return rej(err);
      }
      return res(data);
    });
  });
};
So I created a VPC endpoint for Secrets Manager with private DNS names enabled.
I still get the timed-out error with the code above.
Do I need to change the Lambda code to specify the private DNS endpoint?
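With private DNS enabled, no code change should be needed, since the default ssm.us-east-1.amazonaws.com hostname already resolves to the endpoint inside the VPC. If private DNS were disabled, the endpoint could be set explicitly; a sketch, where the endpoint URL is hypothetical:

// Only needed if private DNS is disabled; copy the real DNS name
// from the VPC endpoint's details.
const parameterStore = new AWS.SSM({
  endpoint: 'https://vpce-0123abcd-efgh5678.ssm.us-east-1.vpce.amazonaws.com',
});

// aws-sdk v2 requests can also be awaited directly inside the async handler:
// const data = await parameterStore.getParameter({ Name: '/my/param/name' }).promise();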
The image below shows the outbound rules for the subnet NACL.
The image below shows the outbound rules for the security group.
I managed to fix this issue. The root cause was that all the subnets were public. Since VPC endpoints are accessed privately, without going over the internet, the subnets associated with the Lambda function should be private.
Here are the steps I took to fix the issue:
1. Created a NAT gateway inside the VPC and assigned an Elastic IP to it.
2. Created a new route table that points all traffic to the NAT gateway created in step 1.
3. Attached the new route table to a couple of subnets (which made them private).
4. Attached only those private subnets to the Lambda function.
Other than this, the IAM role associated with the Lambda function needs the below two policies to access SSM Parameter Store:
AmazonSSMReadOnlyAccess
AWSLambdaVPCAccessExecutionRole

Need help: Node.js app deployed on Google Cloud Run, unable to connect to Google Cloud SQL instance

I have the Node.js code below, which connects to a PostgreSQL database using a connection pool:
const express = require("express");
const path = require("path");
var bodyParser = require("body-parser");
var cors = require("cors");

const app = express();
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());
app.use(cors());

const Pool = require("pg").Pool;
const pool = new Pool({
  user: "",
  host: "",
  database: "",
  password: "",
  port: 5432,
});

app.get("/get_que", (req, res) => {
  pool.query(
    "SELECT json FROM surveys WHERE available = TRUE;",
    (error, results) => {
      if (error) {
        console.log(error);
        // Bail out here so we don't read rows from an undefined result.
        return res.status(500).json({ error: error.message });
      }
      res.json({ data: results.rows });
    }
  );
});
The above app is deployed on Google Cloud Run with a Google Cloud SQL (PostgreSQL) connection configured, but it is unable to connect.
Using the Cloud SQL proxy locally, I am able to connect to the Cloud SQL instance.
I tried the public IP of the Cloud SQL instance in the "host" parameter of the above app, but it does not connect.
How do I connect to the Google Cloud SQL instance from this Node.js app deployed on Google Cloud Run?
Even though the public IP is enabled on the Cloud SQL instance, by default no networks are authorized. See adding authorized networks.
There is a very detailed guide in Connecting from Cloud Run.
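The usual pattern from that guide is to attach the Cloud SQL instance to the Cloud Run service and connect over a Unix socket rather than an IP; a minimal sketch, with a made-up instance connection name:

const { Pool } = require("pg");

// Requires the instance to be attached to the service, e.g.
//   gcloud run deploy ... --add-cloudsql-instances my-project:us-central1:my-db
const pool = new Pool({
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  // node-postgres connects over a Unix socket when "host" is a filesystem path.
  host: "/cloudsql/my-project:us-central1:my-db",
});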

ERROR: The request could not be satisfied CloudFront

I am struggling to send emails from my server hosted on AWS Elastic Beanstalk, using certificates from CloudFront. I am using nodemailer to send the emails; it works in my local environment but fails once deployed to AWS.
Email Code:
const transporter = nodemailer.createTransport({
  host: 'mail.email.co.za',
  port: 587,
  auth: {
    user: 'example@email.co.za',
    pass: 'email#22'
  },
  secure: false,
  tls: { rejectUnauthorized: false },
  debug: true
});

const mailOptions = {
  from: 'example@email.co.za',
  to: email,
  subject: 'Password Reset OTP',
  text: `${OTP}`
};

try {
  const response = await transporter.sendMail(mailOptions);
  return { error: false, message: 'OTP successfully sent', response };
} catch (e) {
  // "error" should be true on the failure path.
  return { error: true, message: 'Problems sending OTP, Please try again' };
}
Error from AWS:
504 ERROR: The request could not be satisfied. CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.
NB: The code runs fine on my local machine.
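One detail worth noting: a 504 from CloudFront means the origin did not answer in time, and transporter.sendMail can hang for a long while if the instance cannot reach the SMTP host on port 587. As an illustration only (not a confirmed fix), nodemailer's timeout options can make that failure surface quickly instead:

const transporter = nodemailer.createTransport({
  host: 'mail.email.co.za',
  port: 587,
  auth: { user: 'example@email.co.za', pass: 'email#22' },
  secure: false,
  // Fail fast instead of letting CloudFront time out while SMTP hangs.
  connectionTimeout: 10 * 1000, // ms to establish the TCP connection
  greetingTimeout: 10 * 1000,   // ms to wait for the server greeting
  socketTimeout: 20 * 1000      // ms of inactivity before aborting
});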

AWS Elasticsearch Service - IAM role (refresh session token - Node.js)

I'm using the AWS Elasticsearch Service and have written a Node.js wrapper (it runs within an ECS dockerized container). IAM roles are used to create a singleton connection with Elasticsearch.
I'm trying to refresh my session token by checking AWS.config.credentials.needsRefresh() before every request; however, it always returns true, even after the session has expired. Obviously AWS is complaining with a 403 error. Any ideas will be greatly appreciated.
var AWS = require('aws-sdk');
var config = require('config');
var connectionClass = require('http-aws-es');
var elasticsearch = require('elasticsearch');

AWS.config.getCredentials(function() {
  AWS.config.update({
    credentials: new AWS.Credentials(
      AWS.config.credentials.accessKeyId,
      AWS.config.credentials.secretAccessKey,
      AWS.config.credentials.sessionToken
    ),
    region: 'us-east-1'
  });
});

var client = new elasticsearch.Client({
  host: `${config.get('elasticSearch.host')}`,
  log: 'debug',
  connectionClass: connectionClass,
  amazonES: {
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

module.exports = client;
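For reference, a minimal sketch (not the poster's code) of one way to keep the token fresh with aws-sdk v2: credential objects expose getPromise(), which resolves cached credentials and refreshes them only when they are expired or unset, so it can be awaited before each request instead of checking needsRefresh() manually.

async function freshCredentials() {
  // getPromise() refreshes the credentials if they are expired,
  // otherwise it resolves immediately with the cached ones.
  await AWS.config.credentials.getPromise();
  return AWS.config.credentials;
}

// Usage sketch: call before each Elasticsearch request.
// const creds = await freshCredentials();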