Forward EC2 traffic from 1 instance to another? - amazon-web-services

I set up Countly analytics on the free tier AWS EC2, but stupidly did not set up an elastic IP with it. Now the traffic is so great that I can't even log into the analytics, as the CPU is constantly running at 100%.
I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in the future.
In the meantime, is it possible for me to set up a second instance and forward all the traffic from the current one to the new one?
I found this (http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/); will this work from one EC2 instance to another?
Thanks
EDIT ---
Countly log
/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
throw err;
^ ReferenceError: liveApi is not defined
at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
at null. (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
at g (events.js:175:14)
at EventEmitter.emit (events.js:106:17)
at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20

You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
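A minimal sketch of what that could look like, assuming the old instance lets the new one SSH in and that you want to hand off traffic on port 8080 (the key path, addresses and port are placeholders, not from the original post):
# Run this on the NEW instance. It asks the OLD instance to listen on port 8080 and
# tunnel every connection back to port 8080 on the new instance ("localhost" here
# is the new box, not the old one).
# Binding 0.0.0.0 on the remote side requires "GatewayPorts yes" in the old
# instance's /etc/ssh/sshd_config; ports below 1024 would also need a root login.
ssh -i ~/.ssh/old-instance-key.pem -N \
    -R 0.0.0.0:8080:localhost:8080 \
    ubuntu@OLD_INSTANCE_PUBLIC_IP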
Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? If it is mongod, issue the mongotop command to see the most time-consuming collection accesses. We can go from there.
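For a quick check, something along these lines should do (assuming the standard MongoDB tools are installed on the instance):
# Confirm whether mongod is the process pinning the CPU
top -b -n 1 | head -20
# Sample the most time-consuming collection accesses every 5 seconds
mongotop 5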

Yes, it is possible. I use nginx with a Node.js app. I wanted to redirect traffic from one instance to another. The instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.
Step 1: Go to /etc/nginx/sites-enabled and open the default.conf file. Your configuration might be in a different file.
Step 2: Change proxy_pass to your chosen IP/domain/sub-domain:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        ...
        proxy_pass your_ip;  # You can put a domain or sub-domain with protocol (http/https)
    }
}
Step 3: Then restart nginx:
sudo systemctl restart nginx
This works for any external instance and for instances in a different VPC.
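As a rough follow-up to Step 3 (just a sketch; your_domain.com is the placeholder from the config above), you can validate the configuration before restarting and then check that the proxy answers:
# Syntax-check the nginx configuration, then restart and probe it
sudo nginx -t
sudo systemctl restart nginx
curl -I -H "Host: your_domain.com" http://localhost/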

Related

Nextjs 404s on buildManifest across multiple EC2 instances

Context: I have a simple Next.js and KeystoneJS app. I've made duplicate deployments on 2 AWS EC2 instances. Each instance also has an Nginx reverse proxy routing port 80 to 3000 (my app's port). The 2 instances are also behind an application load balancer.
Problem: When routing to my default URL, my application attempts to fetch the buildManifest for the Next.js application. This, however, 404s most of the time.
My Guess: Because the requests are coming in so close together, my load balancer is routing the second request for the buildManifest to the other instance. Since I did a separate yarn build on that instance, the build ids are different, and therefore it is not fetching the correct build. This request 404s and my site is broken.
My Question: Is there a way to ensure that all requests made from instance A get routed to instance A? Or is there a better way to do my builds on each instance such that their ids are the same? Is this a use case for Docker?
I have had a similar problem with our load balancer, and specifying a custom build id seems to have fixed it. Here's the dedicated issue, and this is how my next.config.js looks now:
const execSync = require("child_process").execSync;

const lastCommitCommand = "git rev-parse HEAD";

module.exports = {
  async generateBuildId() {
    return execSync(lastCommitCommand).toString().trim();
  },
};
If you are using a custom build directory in your next.config.js file, remove it and use the default build directory.
For example, remove a line like:
distDir: "build"
from your next.config.js file.
Credits: https://github.com/serverless-nextjs/serverless-next.js/issues/467#issuecomment-726227243
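If it helps, one way to confirm the fix took effect is to compare the build id each instance actually serves. This is only a sketch; the hostnames and app path are assumptions:
# Next.js writes the generated build id to .next/BUILD_ID; both instances should match
ssh ec2-user@instance-a 'cat /path/to/app/.next/BUILD_ID'
ssh ec2-user@instance-b 'cat /path/to/app/.next/BUILD_ID'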

AWS ECR endpoint no_proxy issue

I have been working on ECR.
I created an endpoint to be able to pull and push Docker containers without leaving the VPC.
My problem is that I am behind a proxy.
My http-proxy.conf looks like this:
[Service]
Environment= "http_proxy=http://x.x.x.x:8080"
Environment= "https_proxy=http://x.x.x.x:8080"
Environment= "no_proxy=.dkr.ecr.us-west2.amazonaws.com"
For some reason, when I do a docker pull of one of my images from ECR, it is really slow because it goes through the proxy instead of bypassing it.
If I remove the first two lines (http and https), it is really fast.
Any ideas?
Okay, after a couple of days I found the issue: I had to add the S3 endpoint to no_proxy. It looks like when I did a docker pull, because S3 was not reachable, the traffic went outside the network and back in. Now I am able to pull images within the VPC!
[Service]
Environment= "http_proxy=http://x.x.x.x:8080"
Environment= "https_proxy=http://x.x.x.x:8080"
Environment= "no_proxy=.dkr.ecr.us-east-2.amazonaws.com,.s3.us-east-2.amazonaws.com"
You can exempt any specific URL from the proxy with noProxy and the * wildcard.
As per the Docker documentation:
Configure Docker to use a proxy server | Docker Documentation
On the Docker client, create or edit the file ~/.docker/config.json in the home directory of the user which starts containers. Add JSON such as the following, substituting the type of proxy with httpsProxy or ftpProxy if necessary, and substituting the address and port of the proxy server. You can configure multiple proxy servers at the same time.
You can optionally exclude hosts or ranges from going through the proxy server by setting a noProxy key to one or more comma-separated IP addresses or hosts. Using the * character as a wildcard is supported, as shown in this example.
{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:3001",
      "httpsProxy": "http://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}
Save the file.
When you create or start new containers, the environment variables are set automatically within the container.
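A quick sanity check of that behaviour, as a sketch (any small image works; alpine is just an example):
# The Docker client injects the proxy settings from ~/.docker/config.json into containers
docker run --rm alpine env | grep -i proxy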

How to prevent Rails 6 from blocking my AWS EC2 instances?

I have some Rails 6 applications, deployed on AWS via OpsWorks.
After upgrading to Rails 6, the app blocks the health check of its own instance, which causes the load balancer to take the instance down.
I would like to know how to whitelist all my EC2 instances (which have dynamic IP addresses) automatically, instead of adding them one by one to config/application.rb.
Thanks
Rails.application.configure do
  # Whitelist one hostname
  config.hosts << "hostname"

  # Whitelist a test domain
  config.hosts << /application\.local\Z/

  # config.hosts.clear
end
The workaround that worked for me was:
config.hosts.clear
I posted this question a while back. A safer solution would be reading the IP addresses from environment variables that can be set from the AWS console:
config.hosts << ENV["INSTANCE_IP"]
config.hosts << ENV["INSTANCE_IP2"]
...
config.hosts << ENV["INSTANCE_IPn"]
At least this way it does not require a new git commit every time the IP address changes on an instance with a dynamic IP.
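A sketch of how such a variable could be populated at boot from instance metadata instead of being set by hand (the metadata URL is the standard IMDSv1 endpoint; the target file and variable name are assumptions):
# Fetch the instance's private IP and expose it as INSTANCE_IP for the Rails app
INSTANCE_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
echo "INSTANCE_IP=${INSTANCE_IP}" | sudo tee -a /etc/environment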
A simple solution is to allow the health checker user agent; add this to your production.rb:
config.host_authorization = {
  exclude: ->(request) { request.user_agent =~ /ELB-HealthChecker/ }
}
It looks like this has been resolved in the latest versions; at least it works on 6.1 and above:
https://guides.rubyonrails.org/configuring.html#actiondispatch-hostauthorization
You can exclude certain requests from Host Authorization checks by setting config.host_authorization.exclude:
# Exclude requests for the /healthcheck/ path from host checking
Rails.application.config.host_authorization = {
  exclude: ->(request) { request.path =~ /healthcheck/ }
}
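To check an exclusion like this locally, something along these lines might do (the port and path are assumptions; ELB-HealthChecker is the user agent the AWS health checker sends):
# Should return 200 even though the Host header is just "localhost"
curl -i -A "ELB-HealthChecker/2.0" http://localhost:3000/healthcheck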

Enabling HA namenodes on a secure cluster in Cloudera Manager fails

I am running a CDH4.1.2 secure cluster and it works fine with the single namenode+secondarynamenode configuration, but when I try to enable High Availability (quorum based) from the Cloudera Manager interface it dies at step 10 of 16, "Starting the NameNode that will be transitioned to active mode namenode ([my namenode's hostname])".
Digging into the role log file gives the following fatal error:
Exception in namenode join
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: [my namenode's fqhn]:[my namenode's fqhn]:0 at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:206) at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:158) at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:147) at
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:143) at
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:547) at
org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:480) at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443) at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608) at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589) at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140) at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
How can I resolve this?
It looks like you have two problems:
1. The NameNode's IP address is resolving to "my namenode's fqhn" instead of a regular hostname. Check your /etc/hosts file to fix this.
2. You need to configure dfs.https.port. With Cloudera Manager free edition, you must have had to add the appropriate configs to the safety valves to enable security. As part of that, you need to configure dfs.https.port.
Given that this code path is traversed even in the non-HA mode, I'm surprised that you were able to get your secure NameNode to start up correctly before enabling HA. In case you haven't already, I recommend that you first enable security, test that all HDFS roles start up correctly and then enable HA.
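For the first problem, a quick way to sanity-check the NameNode host's name resolution is just a sketch like this, run on the NameNode itself:
hostname -f                      # should print the fully-qualified hostname
getent hosts "$(hostname -f)"    # should map that FQDN to the expected IP, per /etc/hosts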

Using an AWS ELB behind Varnish - is it possible?

I'm trying to put a set of EC2 instances behind a couple of Varnish servers. Our Varnish configuration very seldom changes (once or twice a year) but we are always adding/removing/replacing web backends for all kinds of reasons (updates, problems, load spikes). This creates problems because we always have to update our Varnish configuration, which has led to mistakes and heartbreak.
What I would like to do is manage the set of backend servers simply by adding or removing them from an Elastic Load Balancer. I've tried specifying the ELB endpoint as a backend, but I get this error:
Message from VCC-compiler:
Backend host "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
123.123.123.1
63.123.23.2
31.13.67.3
('input' Line 2 Pos 17)
.host = "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
The only consistent public interface ELB provides is its DNS name. The set of IP addresses this DNS name resolves to changes over time and with load.
In this case I would rather NOT specify one exact address - I would like to round-robin between whatever comes back from the DNS. Is this possible? Or could someone suggest another solution that would accomplish the same thing?
Thanks,
Sam
You could use an NGINX web server to deal with the CNAME resolution problem:
User -> Varnish -> NGINX (cache section) -> ELB -> EC2 instances (application section)
You have a configuration example in this post: http://blog.domenech.org/2013/09/using-varnish-proxy-cache-with-amazon-web-services-elastic-load-balancer-elb.html
Juan
I wouldn't recommend putting an ELB behind Varnish.
The problem lies in the fact that Varnish is resolving the name assigned to the ELB, and it's caching the IP addresses until the VCL gets reloaded. Because of the dynamic nature of the ELB, the IPs linked to the CNAME can change at any time, resulting in Varnish routing traffic to an IP which is not linked to the correct ELB anymore.
This is an interesting article you might like to read.
Yes, you can.
In your default.vcl, put:
include "/etc/varnish/backends.vcl";
and set the backend to:
set req.backend = default_director;
Then run this script to create backends.vcl:
#!/bin/bash

FILE_CURRENT_IPS='/tmp/elb_current_ips'
FILE_OLD_IPS='/tmp/elb_old_ips'
TMP_BACKEND_CONFIG='/tmp/tmp_backends.vcl'
BACKEND_CONFIG='/etc/varnish/backends.vcl'
ELB='XXXXXXXXXXXXXX.us-east-1.elb.amazonaws.com'

# Resolve the ELB name to its current set of IPs
IPS=($(dig +short $ELB | sort))

if [ ! -f $FILE_OLD_IPS ]; then
    touch $FILE_OLD_IPS
fi

echo ${IPS[@]} > $FILE_CURRENT_IPS

DIFF=`diff $FILE_CURRENT_IPS $FILE_OLD_IPS | wc -l`

cat /dev/null > $TMP_BACKEND_CONFIG

# Only regenerate the backends file if the IP set changed
if [ $DIFF -gt 0 ]; then
    COUNT=0
    for i in ${IPS[@]}; do
        let COUNT++
        IP=$i
        cat <<EOF >> $TMP_BACKEND_CONFIG
backend app_$COUNT {
    .host = "$IP";
    .port = "80";
    .connect_timeout = 10s;
    .first_byte_timeout = 35s;
    .between_bytes_timeout = 5s;
}
EOF
    done

    COUNT=0
    echo 'director default_director round-robin {' >> $TMP_BACKEND_CONFIG
    for i in ${IPS[@]}; do
        let COUNT++
        cat <<EOF >> $TMP_BACKEND_CONFIG
    { .backend = app_$COUNT; }
EOF
    done
    echo '}' >> $TMP_BACKEND_CONFIG

    echo 'NEW BACKENDS'
    mv -f $TMP_BACKEND_CONFIG $BACKEND_CONFIG
fi

mv $FILE_CURRENT_IPS $FILE_OLD_IPS
I wrote this script to have a way to auto-update the VCL when a new instance comes up or goes down.
It requires that the VCL includes backends.vcl.
This script is just part of the solution; the tasks should be:
1. Get the new server name and IP (auto scaling). You can use AWS API commands to do that, also via bash.
2. Update the VCL (this script).
3. Reload Varnish (see the sketch below).
The script is here:
http://felipeferreira.net/?p=1358
Other people did it in different ways:
http://blog.cloudreach.co.uk/2013/01/varnish-and-autoscaling-love-story.html
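A sketch of how tasks 2 and 3 from the list above could be chained; the script path and VCL name are assumptions, not part of the original post:
#!/bin/bash
# Regenerate backends.vcl with the script above, then load and activate a fresh VCL
# without restarting Varnish (so the cache is preserved).
/usr/local/bin/update_backends.sh
NOW=$(date +%s)
varnishadm vcl.load "reload_${NOW}" /etc/varnish/default.vcl && \
varnishadm vcl.use "reload_${NOW}"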
You wouldn't get to 10K requests if Varnish had to resolve an IP for each one. Varnish resolves IPs at startup and does not refresh them unless it is restarted or reloaded. Indeed, Varnish refuses to start if it finds two IPs for a DNS name in a backend definition, as with the IPs returned for multi-AZ ELBs.
So we solved a similar issue by placing Varnish in front of nginx: nginx can define an ELB as a backend, so the Varnish backend is a local nginx and the nginx backend is the ELB.
But I don't feel comfortable with this solution.
You could create the ELB in your private VPC so that it has a local IP. That way you don't have to use DNS CNAMEs or anything else that Varnish does not support as easily.
Using an internal ELB does not help, because it usually has two internal IPs!
Backend host "internal-XXX.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
10.30.10.134
10.30.10.46
('input' Line 13 Pos 12)
What I am not sure about is whether these IPs will always remain the same or whether they can change. Anyone?
In my previous answer (more than three years ago) I hadn't solved this issue; my [nginx -> varnish -> nginx] -> ELB setup worked only until the ELB changed its IPs.
For some time now, though, we have been using the same setup but with nginx compiled with the jdomain plugin.
The idea is to place an nginx on the same host as Varnish and configure the upstream like this:
resolver 10.0.0.2;  ## IP of the AWS resolver on the subnet

upstream backend {
    jdomain internal-elb-dns-name port=80;
}
That upstream will automatically reconfigure the upstream IPs if the ELB changes its addresses.
It might not be a solution that uses only Varnish, but it works as expected.
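As a small aside, you can watch the rotation that makes static resolution break, assuming dig and watch are available (the ELB name is a placeholder):
# The set of A records behind the ELB name changes over time
watch -n 60 'dig +short internal-elb-dns-name'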