Problem with AWS Web Application Firewall

I have an AWS EC2 instance (a single machine) + an Elastic IP + Route 53 (which points the domain name at my EC2 machine).
I wanted to add AWS Web Application Firewall, so the new configuration is:
EC2 instance + Application Load Balancer with Web Application Firewall + Route 53 (which points the domain name at the load balancer).
However, with this new configuration, my Ajax scripts fail to execute. For example, the code
'use strict';

var url = require('url');
var target = 'https://DOMAIN_NAME';

exports.handler = function(event, context, callback) {
    var urlObject = url.parse(target);
    // Strip the trailing ':' from the protocol to require 'http' or 'https'
    var mod = require(
        urlObject.protocol.substring(0, urlObject.protocol.length - 1)
    );
    console.log('[INFO] - Checking ' + target);
    var req = mod.request(urlObject, function(res) {
        res.setEncoding('utf8');
        res.on('data', function(chunk) {
            console.log('[INFO] - Read body chunk');
        });
        res.on('end', function() {
            console.log('[INFO] - Response end');
            callback();
        });
    });
    req.on('error', function(e) {
        console.log('[ERROR] - ' + e.message);
        callback(e);
    });
    req.end();
};
fails with "This combination of host and port requires TLS".
Some other Ajax requests fail too.
I've opened ports 80, 22, and 443 on the EC2 instance, and port 443 is open on the Load Balancer.
I don't understand why these scripts fail with the new configuration. With the old configuration everything works fine.
UPDATE: It seems that one of the rules in the AWS WAF Core Rule Set is blocking my Ajax requests. I guess I'll just need to find out which one.
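One way to narrow down the offending rule is to pull the sampled requests that WAF recorded for the web ACL and see which rule matched each blocked request. Below is a minimal sketch using the AWS SDK v3 for JavaScript; the region, web ACL ARN, and rule-group metric name are placeholders you would replace with your own, and it assumes a regional web ACL attached to the ALB:

'use strict';

const { WAFV2Client, GetSampledRequestsCommand } = require('@aws-sdk/client-wafv2');

const client = new WAFV2Client({ region: 'us-east-1' }); // placeholder region

async function findBlockingRule() {
    const now = new Date();
    const oneHourAgo = new Date(now.getTime() - 60 * 60 * 1000);
    const result = await client.send(new GetSampledRequestsCommand({
        // Placeholder ARN - use your web ACL's ARN
        WebAclArn: 'arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abcd1234',
        // Metric name of the Core Rule Set rule group as configured in the web ACL
        RuleMetricName: 'AWS-AWSManagedRulesCommonRuleSet',
        Scope: 'REGIONAL',
        TimeWindow: { StartTime: oneHourAgo, EndTime: now },
        MaxItems: 100
    }));
    // Each sample records the action taken and the rule inside the group that matched
    for (const sample of result.SampledRequests || []) {
        console.log(sample.Action, sample.RuleNameWithinRuleGroup, sample.Request.URI);
    }
}

findBlockingRule().catch(console.error);

Once the rule is identified, it can be switched to Count mode in the web ACL so the Ajax requests pass while you verify the fix.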

Related

Google Cloud Function with different endpoints, scheduled with Google Cloud Scheduler

I have created a project in Express:
const express = require('express');
const app = express();
const PORT = 5555;

app.listen(PORT, () => {
    console.log(`Server running on port ${PORT}`);
});

app.get('/tr', (req, res, next) => {
    res.json({ status: 200, data: 'tr' });
});

app.get('/po', (req, res, next) => {
    res.json({ status: 200, data: 'po' });
});

module.exports = {
    app
};
I deployed it as a Cloud Function named my-transaction, and I am scheduling it with Google Cloud Scheduler, giving a URL like:
http://url/my-transaction/po
When I deployed it without authentication, the scheduler ran the job successfully, but when I deploy it with authentication, it fails.
Similarly, if I create a sample project like the one below:
exports.helloHttp = (req, res) => {
    res.json({ status: 200, data: 'test hello' });
};
and deploy it the same way, configured as above with authentication, it works.
The only difference is that in the last function the function name matches the entry point, whereas in the project above the entry point is app with different endpoints.
Any help appreciated. Thanks.
This is because you need to add auth information to the HTTP requests that Cloud Scheduler sends.
First you need to create a service account with the role Cloud Functions Invoker.
When you have created the service account, you can see that it has an email address associated with it, for example:
cfinvoker@fakeproject.iam.gserviceaccount.com
After that, you can create a new scheduler job with auth information by following these steps:
Select target HTTP.
Write the URL (the Cloud Function URL).
Click on "Show more".
Select Auth header > Add OIDC token.
Write the full email address of the service account.
The new scheduler job will send its HTTP requests with the auth information needed to execute your Cloud Function successfully.
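The same job can also be created programmatically. Here is a minimal sketch using the Node.js client library @google-cloud/scheduler; the project ID, region, schedule, and function URL are placeholders, and the service account email is the example one from above:

const { CloudSchedulerClient } = require('@google-cloud/scheduler');

const client = new CloudSchedulerClient();

async function createAuthenticatedJob() {
    // Placeholder project and region
    const parent = client.locationPath('my-project', 'us-central1');
    const [job] = await client.createJob({
        parent,
        job: {
            httpTarget: {
                uri: 'https://us-central1-my-project.cloudfunctions.net/my-transaction/po',
                httpMethod: 'GET',
                // The OIDC token is what authenticates the scheduler's
                // request against a function that requires authentication
                oidcToken: {
                    serviceAccountEmail: 'cfinvoker@fakeproject.iam.gserviceaccount.com'
                }
            },
            schedule: '*/5 * * * *', // placeholder: every five minutes
            timeZone: 'Etc/UTC'
        }
    });
    console.log('Created job:', job.name);
}

createAuthenticatedJob().catch(console.error);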

AWS Lambda in public subnet cannot access the internet

I'm trying to get a Lambda running inside a public subnet to communicate with the internet. I'm able to get the Lambda to hit www.google.com without a VPC (the docs say Lambda runs in one behind the scenes) but not if I run the Lambda in my own VPC.
Repro steps:
Create a Lambda (Node.js 12.x) with the following code. I named the Lambda 'curlGoogle'.
Run it to verify it succeeds and can fetch from www.google.com. There should be no VPC specified.
Go to the VPC Dashboard and use the VPC Wizard to create a VPC with a public subnet. I've tried a few values for the IPv4 CIDR block (e.g. 10.1.0.0/16), the IPv6 CIDR block, and the AZ. I usually leave 'Enable DNS hostnames' set to Yes.
Change the Lambda to use the newly created VPC, subnet, and security group.
Verify this does not reach Google and times out.
I've tried modifications of this approach without any success (e.g. explicitly associating the subnet with the VPC, loosening all of the settings on the security group and network ACLs).
I originally tried following the docs for one public and one private subnet and failed to get that working.
Any ideas? Thanks!
- Dan
const http = require('http');

exports.handler = async (event) => {
    return httprequest().then((data) => {
        const response = {
            statusCode: 200,
            body: JSON.stringify(data),
        };
        return response;
    });
};

function httprequest() {
    return new Promise((resolve, reject) => {
        const options = {
            host: 'www.google.com',
            path: '/',
            port: 80,
            method: 'GET'
        };
        const req = http.request(options, (res) => {
            if (res.statusCode < 200 || res.statusCode >= 300) {
                return reject(new Error('statusCode=' + res.statusCode));
            }
            var body = [];
            res.on('data', function(chunk) {
                body.push(chunk);
            });
            res.on('end', function() {
                try {
                    body = Buffer.concat(body).toString();
                } catch (e) {
                    reject(e);
                }
                resolve(body);
            });
        });
        req.on('error', (e) => {
            reject(e.message);
        });
        // send the request
        req.end();
    });
}
AWS Lambda functions are never assigned a public IP address when in a VPC, even if they are in a public subnet. So they can never access the Internet directly when running in a VPC. You have to place Lambda functions in a private subnet with a route to a NAT Gateway in order to give them access to the Internet from within your VPC.
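For reference, here is a minimal sketch of that layout using the AWS CDK v2 for JavaScript; the stack and construct names and the asset path are placeholders, and it assumes aws-cdk-lib is installed:

const cdk = require('aws-cdk-lib');
const ec2 = require('aws-cdk-lib/aws-ec2');
const lambda = require('aws-cdk-lib/aws-lambda');

class LambdaEgressStack extends cdk.Stack {
    constructor(scope, id, props) {
        super(scope, id, props);

        // By default the CDK creates public subnets (for the NAT gateway)
        // and private subnets whose default route goes through the NAT gateway
        const vpc = new ec2.Vpc(this, 'Vpc', {
            natGateways: 1
        });

        // Placing the function in the private subnets gives it a route
        // to the Internet via the NAT gateway
        new lambda.Function(this, 'CurlGoogle', {
            runtime: lambda.Runtime.NODEJS_18_X,
            handler: 'index.handler',
            code: lambda.Code.fromAsset('lambda'), // placeholder asset path
            vpc,
            vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }
        });
    }
}

module.exports = { LambdaEgressStack };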

Problems running puppeteer inside EC2 instance

I am using AWS CloudFormation to deploy my application to AWS.
I'm using a t2.2xlarge EC2 instance inside an ECS cluster with load balancing.
I have a microservice written in Node.js that processes some HTML, converts it to PDF, and uploads the output to S3. That's where I use Puppeteer.
The problem is that whenever I execute the application inside the EC2 instance, the code reaches the point where it opens a new page and then hangs: the promise never resolves. Honestly, I don't know what is happening.
This is part of the code snippet that it executes:
const browser = await puppeteer.launch({
    args: [
        '--no-sandbox',
        '--disable-setuid-sandbox',
        '--disable-dev-shm-usage',
        '--disable-web-security'
    ]
});
console.log('Puppeteer before launch ...');
console.log(await browser.version());
console.log('Puppeteer launched ...');

const page = await browser.newPage();

console.log('Fetching contents ...');
const URL = `http://${FRONTEND_ENDPOINT}/Invoice/${invoiceData.id}`;
await page.goto(URL, {
    waitUntil: ['networkidle0']
});
await page.content();
const bodyHandle = await page.$('body');
await page.evaluate(body => body.innerHTML, bodyHandle);
await bodyHandle.dispose();

console.log('Saving pdf file ...');
await page.emulateMedia('screen');
await page.pdf({
    path: path.join(__dirname, '../tmp/page1.pdf'),
    format: 'A4',
    printBackground: true
});
await browser.close();
I am basically crawling a page, getting its HTML contents and converting it to PDF.
These are my logs:
Puppeteer before launch ...
Puppeteer launched ...
And it does not print anything else.
UPDATE:
This is the output of the logs using the DEBUG flag:
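If the DEBUG output is not enough, one option (a sketch, assuming a reasonably recent Puppeteer) is to forward Chromium's own stdout/stderr into the Node.js process with the dumpio launch option, which often surfaces the reason a headless browser hangs on an EC2 instance:

// Run with: DEBUG=puppeteer:* node index.js
const browser = await puppeteer.launch({
    dumpio: true, // pipe browser stdout/stderr into this process
    args: [
        '--no-sandbox',
        '--disable-setuid-sandbox',
        '--disable-dev-shm-usage'
    ]
});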

How to make SSL requests with an AWS Lambda function?

I have an AWS Lambda function that gets triggered after a file is placed into an S3 bucket.
This is my Lambda function:
var http = require('http');

exports.handler = function (event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;
    var newKey = key.split('.')[0].slice(8);
    var url = 'https://xxxxx.herokuapp.com/api/messages/' + newKey + '/processingFinished';
    console.log("URL:" + url);
    http.get(url, function (result) {
        console.log('!!!! Success, with: ' + result.statusCode);
        context.done(null);
    }).on('error', function (err) {
        console.log('Error, with: ' + err.message);
        context.done("Failed");
    });
};
In the CloudWatch log files I see complaints that https is not supported:
2017-07-27T10:38:04.735Z 8428136e-72b7-11e7-a5b9-bd0562c862a0 Error: Protocol "https:" not supported. Expected "http:"
    at new ClientRequest (_http_client.js:54:11)
    at Object.exports.request (http.js:31:10)
    at Object.exports.get (http.js:35:21)
    at exports.handler (/var/task/index.js:18:8)
But the listed URL can be opened with any web browser. The server accepts SSL and the whole API works over SSL.
Why is AWS denying SSL requests? How can this be solved?
Per the Node.js documentation (https://nodejs.org/api/https.html):
HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a separate module.
Use
var http = require('https');
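Applied to the handler above, that one-line change looks like this (a sketch of the same function with the https module swapped in; the Heroku URL placeholder is kept as-is):

// Use the https module, since the target URL uses the https: protocol
var https = require('https');

exports.handler = function (event, context) {
    var key = event.Records[0].s3.object.key;
    var newKey = key.split('.')[0].slice(8);
    var url = 'https://xxxxx.herokuapp.com/api/messages/' + newKey + '/processingFinished';
    https.get(url, function (result) {
        console.log('Success, with: ' + result.statusCode);
        context.done(null);
    }).on('error', function (err) {
        console.log('Error, with: ' + err.message);
        context.done('Failed');
    });
};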

How Do I Enable HTTPS on my AWS Elastic Beanstalk Environment for Parse Server Example?

I am trying to enable SSL/TLS on my Parse Server on AWS so that I can receive webhooks from Stripe.
I created a self-signed certificate using openssl, but when I tried to send a webhook with Stripe I received the following error:
Invalid TLS
My Parse Server index.js is:
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var path = require('path');

var databaseUri = process.env.DATABASE_URI || process.env.MONGODB_URI;

if (!databaseUri) {
    console.log('DATABASE_URI not specified, falling back to localhost.');
}

var api = new ParseServer({
    databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
    cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
    appId: process.env.APP_ID || 'myAppId',
    masterKey: process.env.MASTER_KEY || '', // Add your master key here. Keep it secret!
    serverURL: process.env.SERVER_URL || 'http://localhost:1337/parse', // Don't forget to change to https if needed
    // push: pushConfig,
    // filesAdapter: filesAdapter,
    push: {
        ios: {
            pfx: 'xxxxxxxxxxxxxxxxxx', // P12 file only
            bundleId: 'xxxxxxxxxxxxxxxx', // change to match bundleId
            production: false // dev certificate
        }
    },
    liveQuery: {
        classNames: ["Posts", "Comments"] // List of classes to support for query subscriptions
    }
});

// Client-keys like the javascript key or the .NET key are not necessary with parse-server
// If you wish you require them, you can set them as options in the initialization above:
// javascriptKey, restAPIKey, dotNetKey, clientKey

var app = express();

// Serve static assets from the /public folder
app.use('/public', express.static(path.join(__dirname, '/public')));

// Serve the Parse API on the /parse URL prefix
var mountPath = process.env.PARSE_MOUNT || '/parse';
app.use(mountPath, api);

// Parse Server plays nicely with the rest of your web routes
app.get('/', function(req, res) {
    res.status(200).send('I dream of being a website. Please star the parse-server repo on GitHub!');
});

// There will be a test page available on the /test path of your server url
// Remove this before launching your app
app.get('/test', function(req, res) {
    res.sendFile(path.join(__dirname, '/public/test.html'));
});

var port = process.env.PORT || 1337;
var httpServer = require('http').createServer(app);
httpServer.listen(port, function() {
    console.log('parse-server-example running on port ' + port + '.');
});

ParseServer.createLiveQueryServer(httpServer);
How can I enable HTTPS?
You need to get a certificate from a trusted source; otherwise even browsers will flag it as untrusted. Also, while setting up the HTTPS server, you need to include this code:
https.createServer({
    key: fs.readFileSync('Your-private-key.pem'),
    cert: fs.readFileSync('your-crt-file.crt')
}, app).listen(3001, function() {
    console.log('https server started on port 3001');
});
Also, if you want to enforce HTTPS, I would suggest you look into express-sslify, as sketched below.
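A minimal express-sslify sketch for this setup (assuming the app runs behind a load balancer, such as Elastic Beanstalk's, that terminates TLS and sets the X-Forwarded-Proto header):

var enforce = require('express-sslify');

// Redirect plain-HTTP requests to HTTPS before other middleware runs.
// trustProtoHeader makes the check use X-Forwarded-Proto, which is what
// a TLS-terminating load balancer forwards to the instance.
app.use(enforce.HTTPS({ trustProtoHeader: true }));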