Context: I have a simple Next.js and KeystoneJS app. I've made duplicate deployments on two AWS EC2 instances. Each instance also runs an Nginx reverse proxy routing port 80 to 3000 (my app's port). The two instances sit behind an application load balancer.
Problem: When I visit my default URL, my application attempts to fetch the buildManifest for the Next.js application. This, however, 404s most of the time.
My Guess: Because the requests are coming in so close together, my load balancer is routing the second request (for the buildManifest) to the other instance. Since I did a separate yarn build on that instance, the build IDs are different, and therefore it is not fetching the correct build. This request 404s and my site is broken.
My Question: Is there a way to ensure that requests from a session that started on instance A keep getting routed to instance A? Or is there a better way to do my builds on each instance such that their IDs are the same? Is this a use case for Docker?
I have had a similar problem with our load balancer, and specifying a custom build ID seems to have fixed it. Here's the dedicated issue, and this is how my next.config.js looks now:
const execSync = require("child_process").execSync;

// Use the latest git commit hash as the build ID, so every instance
// built from the same commit produces the same ID.
const lastCommitCommand = "git rev-parse HEAD";

module.exports = {
  async generateBuildId() {
    return execSync(lastCommitCommand).toString().trim();
  },
};
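To verify that two instances really share a build, you can compare the generated IDs directly; a quick check, assuming the default .next build directory (Next.js writes the ID to .next/BUILD_ID):

# Run on each instance after `yarn build`; the output should be identical
# (the git commit hash, given the generateBuildId above).
cat .next/BUILD_ID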
If you are using a custom build directory in your next.config.js file, remove it and use the default build directory. That is, if you have a line like:

distDir: "build"

remove it from your next.config.js file.
Credits: https://github.com/serverless-nextjs/serverless-next.js/issues/467#issuecomment-726227243
I have been working with ECR.
I created a VPC endpoint to be able to pull and push Docker containers without leaving the VPC.
My problem is that I am behind a proxy.
My http-proxy.conf looks like this:
[Service]
Environment="http_proxy=http://x.x.x.x:8080"
Environment="https_proxy=http://x.x.x.x:8080"
Environment="no_proxy=.dkr.ecr.us-west-2.amazonaws.com"
For some reason, when I do a docker pull of one of my images from ECR, it is really slow because it goes through the proxy instead of bypassing it.
If I remove the first two lines (http_proxy and https_proxy), it is really fast.
Any ideas?
Okay, after a couple of days I found the issue; I had to add the S3 endpoint to no_proxy. A docker pull from ECR downloads the image layers from S3, so while the S3 endpoint was not excluded from the proxy, the traffic went outside the network and back in. Now I am able to pull images within the VPC!
[Service]
Environment="http_proxy=http://x.x.x.x:8080"
Environment="https_proxy=http://x.x.x.x:8080"
Environment="no_proxy=.dkr.ecr.us-east-2.amazonaws.com,.s3.us-east-2.amazonaws.com"
You can also exclude any specific URL from the proxy with noProxy, including * wildcards.
As per the Docker documentation:
Configure Docker to use a proxy server | Docker Documentation
On the Docker client, create or edit the file ~/.docker/config.json in the home directory of the user which starts containers. Add JSON such as the following, substituting the type of proxy with httpsProxy or ftpProxy if necessary, and substituting the address and port of the proxy server. You can configure multiple proxy servers at the same time.
You can optionally exclude hosts or ranges from going through the proxy server by setting a noProxy key to one or more comma-separated IP addresses or hosts. Using the * character as a wildcard is supported, as shown in this example.
{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:3001",
      "httpsProxy": "http://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}
Save the file.
When you create or start new containers, the environment variables are set automatically within the container.
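To confirm the settings took effect, you can inspect the environment of a fresh container; a quick sketch, assuming the alpine image is available:

# The proxy variables from ~/.docker/config.json should show up automatically.
docker run --rm alpine env | grep -i proxy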
I would like to upload files to S3 using boto3.
The code will run on a server without DNS configured, and I want the upload to be routed through a specific network interface.
Any idea if there's any way to solve these issues?
1) Add the S3 endpoint addresses to /etc/hosts; see the list at http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
2) Configure a specific route to the network interface; see this info on Super User:
https://superuser.com/questions/181882/force-an-application-to-use-a-specific-network-interface
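A minimal sketch of both steps, assuming a hypothetical endpoint IP 203.0.113.10 for s3.us-east-1.amazonaws.com and a second interface named eth1:

# 1) Pin the S3 endpoint hostname to a known IP, since the box has no DNS.
echo "203.0.113.10 s3.us-east-1.amazonaws.com" | sudo tee -a /etc/hosts

# 2) Route traffic for that IP out of the desired interface.
sudo ip route add 203.0.113.10/32 dev eth1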
As for setting a network interface, I did a workaround that allows setting the source IP for each connection made by boto.
Just change the AWSHTTPConnection class in botocore's awsrequest.py as follows:
a) Before __init__() of AWSHTTPConnection, add a class attribute:

source_address = None

b) Inside __init__(), add:

if AWSHTTPConnection.source_address is not None:
    kwargs["source_address"] = AWSHTTPConnection.source_address

Now, in your own code, do the following before you start using boto:

from botocore.awsrequest import AWSHTTPConnection
AWSHTTPConnection.source_address = (source_ip_str, source_port)

Use source_port = 0 to let the OS choose a random port (you probably want this option; see the Python socket docs for details).
I am writing a generic application deployment tool. It takes an application from the user and deploys it to Elastic Beanstalk. That part is working. The issue is that the users want to compose the use of the deployment tool with other operations, and right now my tool reports success when it has told the Beanstalk APIs to start the application.
Unfortunately, it is thus returning before the application itself has started. So the user is forced to write polling logic themselves to await the starting of their application.
Looking at the AWS Elastic Beanstalk API, I cannot see any method that returns an indication of such a state. The closest I can find is DescribeEvents, which looks hopeful; however, it seems from the examples that the granularity of the application / application container starting within the environment is not part of that API:
<DescribeEventsResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <DescribeEventsResult>
    <Events>
      <member>
        <Message>Successfully completed createEnvironment activity.</Message>
        <EventDate>2010-11-17T20:25:35.191Z</EventDate>
        <VersionLabel>New Version</VersionLabel>
        <RequestId>bb01fa74-f287-11df-8a78-9f77047e0d0c</RequestId>
        <ApplicationName>SampleApp</ApplicationName>
        <EnvironmentName>SampleAppVersion</EnvironmentName>
        <Severity>INFO</Severity>
      </member>
Note: the INFO-level event says only that the environment was created; nothing at the lower level of the application container starting within the environment appears to be reported.
I could mandate that the applications deployed with this tool expose a status REST endpoint, but that puts restrictions on the application.
Is there some API that I am missing that will report when the application container (e.g. Tomcat, Node, etc.) is started, or better yet, when the application deployed within the container is started? I can live with just the application container.
Every application is supposed to expose a health URL (otherwise Beanstalk/ELB will have problems in any case: it will think the instances are not responding, and might replace them). This is typically a HEAD request expecting a 200 OK.
Since this is expected to be there in all apps anyway, you can hit this URL to check that the deployment is OK. I guess the Beanstalk console itself uses this method.
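A quick sketch of such a check, assuming a hypothetical environment CNAME and that the app answers HEAD requests on /:

# Expect a 200 once the application is actually serving traffic.
curl -s -o /dev/null -I -w "%{http_code}\n" http://my-env.elasticbeanstalk.com/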
You can also poll using the DescribeEnvironments API call, which will give you the environment CNAME (the URL to check), the Health of the environment (RED, GREEN), and the Status (Launching | Updating | Ready | Terminating | Terminated). This API takes the environment name as an argument, so you can get the description of just one environment.
API Documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/APIReference/API_DescribeEnvironments.html
Explanation of Environment Description in response: http://docs.aws.amazon.com/elasticbeanstalk/latest/APIReference/API_EnvironmentDescription.html
Sample Response Below:
<DescribeEnvironmentsResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <DescribeEnvironmentsResult>
    <Environments>
      <member>
        <VersionLabel>Version1</VersionLabel>
        <Status>Available</Status>
        <ApplicationName>SampleApp</ApplicationName>
        <EndpointURL>elasticbeanstalk-SampleApp-1394386994.us-east-1.elb.amazonaws.com</EndpointURL>
        <CNAME>SampleApp-jxb293wg7n.elasticbeanstalk.amazonaws.com</CNAME>
        <Health>Green</Health>
        <EnvironmentId>e-icsgecu3wf</EnvironmentId>
        <DateUpdated>2010-11-17T04:01:40.668Z</DateUpdated>
        <SolutionStackName>32bit Amazon Linux running Tomcat 7</SolutionStackName>
        <Description>EnvDescrip</Description>
        <EnvironmentName>SampleApp</EnvironmentName>
        <DateCreated>2010-11-17T03:59:33.520Z</DateCreated>
      </member>
    </Environments>
  </DescribeEnvironmentsResult>
  <ResponseMetadata>
    <RequestId>44790c68-f260-11df-8a78-9f77047e0d0c</RequestId>
  </ResponseMetadata>
</DescribeEnvironmentsResponse>
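Building on that, here is a minimal polling sketch with the AWS CLI rather than raw XML; the environment name my-env is a placeholder:

# Poll until the environment reports Ready/Green, giving up after ~10 minutes.
for i in $(seq 1 60); do
    STATUS=$(aws elasticbeanstalk describe-environments \
        --environment-names my-env \
        --query 'Environments[0].Status' --output text)
    HEALTH=$(aws elasticbeanstalk describe-environments \
        --environment-names my-env \
        --query 'Environments[0].Health' --output text)
    echo "status=$STATUS health=$HEALTH"
    if [ "$STATUS" = "Ready" ] && [ "$HEALTH" = "Green" ]; then
        break
    fi
    sleep 10
done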
In your case you may want to read the following documentation:
Monitoring Application Health
You can also configure an Application Health Check URL for your environment. By default, AWS Elastic Beanstalk uses a TCP:80 check on your instances, but with the Application Health Check URL option you can override this health check to use HTTP:80, as described here.
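The same option can be set from the CLI; a hedged sketch, assuming an environment named my-env and a hypothetical /health path:

# Point the health check at an HTTP path instead of the default TCP:80 check.
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings 'Namespace=aws:elasticbeanstalk:application,OptionName=Application Healthcheck URL,Value=/health'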
Using Status/Health from DescribeEnvironments you can check if the application has been deployed.
I set up Countly analytics on a free-tier AWS EC2 instance, but stupidly did not set up an Elastic IP with it. Now the traffic is so great that I can't even log into the analytics, as the CPU is constantly running at 100%.
I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in the future.
In the meantime, is it possible for me to set up a second instance and forward all the traffic from the current one to the new one?
I found this: http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/ Will this work from one EC2 instance to another?
Thanks
EDIT ---
Countly log
/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
throw err;
^ ReferenceError: liveApi is not defined
at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
at null.<anonymous> (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
at g (events.js:175:14)
at EventEmitter.emit (events.js:106:17)
at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20
You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? In case it is mongod, issue the mongotop command to see the most time-consuming collection accesses. We can go from there.
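If it is mongod, a quick look, assuming the MongoDB tools are installed on the instance:

# Print per-collection read/write time, sampling every 15 seconds.
mongotop 15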
Yes, it is possible. I use nginx with a Node.js app. I wanted to redirect traffic from one instance to another; the instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.
Step 1: Go to /etc/nginx/sites-enabled and open the default.conf file. Your configuration might be in a different file.
Step 2: Change proxy_pass to your chosen IP/domain/sub-domain:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        ...
        proxy_pass your_ip;  # can be a domain or sub-domain, with protocol (http/https)
    }
}
Step 3: Then restart nginx:
sudo systemctl restart nginx
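Before restarting, it can be worth validating the configuration; a standard check:

# Parses the config and reports syntax errors without touching the running server.
sudo nginx -t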
This works for any external instance, as well as for instances in different VPCs.
I'm trying to put a set of EC2 instances behind a couple of Varnish servers. Our Varnish configuration very seldom changes (once or twice a year) but we are always adding/removing/replacing web backends for all kinds of reasons (updates, problems, load spikes). This creates problems because we always have to update our Varnish configuration, which has led to mistakes and heartbreak.
What I would like to do is manage the set of backend servers simply by adding or removing them from an Elastic Load Balancer. I've tried specifying the ELB endpoint as a backend, but I get this error:
Message from VCC-compiler:
Backend host "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
123.123.123.1
63.123.23.2
31.13.67.3
('input' Line 2 Pos 17)
.host = "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
The only consistent public interface ELB provides is its DNS name. The set of IP addresses this DNS name resolves to changes over time and with load.
In this case I would rather NOT specify one exact address - I would like to round-robin between whatever comes back from the DNS. Is this possible? Or could someone suggest another solution that would accomplish the same thing?
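For context, you can watch this rotation yourself with dig (the hostname below is the question's placeholder):

# Run a few minutes apart; the returned address set will often differ.
dig +short XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com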
Thanks,
Sam
You could use an NGINX web server to deal with the CNAME resolution problem:

User -> Varnish -> NGINX -> ELB -> EC2 instances
        (cache section)    (application section)
You have a configuration example in this post: http://blog.domenech.org/2013/09/using-varnish-proxy-cache-with-amazon-web-services-elastic-load-balancer-elb.html
Juan
I wouldn't recommend putting an ELB behind Varnish.
The problem lies in the fact that Varnish resolves the name assigned to the ELB and caches the IP addresses until the VCL gets reloaded. Because of the dynamic nature of the ELB, the IPs linked to the CNAME can change at any time, resulting in Varnish routing traffic to an IP which is no longer linked to the correct ELB.
This is an interesting article you might like to read.
Yes, you can.
In your default.vcl, put:

include "/etc/varnish/backends.vcl";

and set the backend to:

set req.backend = default_director;

Then run this script to create backends.vcl:
#!/bin/bash
# Rebuild /etc/varnish/backends.vcl whenever the ELB's address set changes.

FILE_CURRENT_IPS='/tmp/elb_current_ips'
FILE_OLD_IPS='/tmp/elb_old_ips'
TMP_BACKEND_CONFIG='/tmp/tmp_backends.vcl'
BACKEND_CONFIG='/etc/varnish/backends.vcl'
ELB='XXXXXXXXXXXXXX.us-east-1.elb.amazonaws.com'

# Current set of IPs behind the ELB, sorted so diffs are stable.
IPS=($(dig +short $ELB | sort))

if [ ! -f $FILE_OLD_IPS ]; then
    touch $FILE_OLD_IPS
fi

echo ${IPS[@]} > $FILE_CURRENT_IPS

DIFF=`diff $FILE_CURRENT_IPS $FILE_OLD_IPS | wc -l`

cat /dev/null > $TMP_BACKEND_CONFIG

if [ $DIFF -gt 0 ]; then
    # One backend definition per ELB address.
    COUNT=0
    for i in ${IPS[@]}; do
        let COUNT++
        IP=$i
        cat <<EOF >> $TMP_BACKEND_CONFIG
backend app_$COUNT {
    .host = "$IP";
    .port = "80";
    .connect_timeout = 10s;
    .first_byte_timeout = 35s;
    .between_bytes_timeout = 5s;
}
EOF
    done

    # Round-robin director over all the backends defined above.
    COUNT=0
    echo 'director default_director round-robin {' >> $TMP_BACKEND_CONFIG
    for i in ${IPS[@]}; do
        let COUNT++
        cat <<EOF >> $TMP_BACKEND_CONFIG
    { .backend = app_$COUNT; }
EOF
    done
    echo '}' >> $TMP_BACKEND_CONFIG

    echo 'NEW BACKENDS'
    mv -f $TMP_BACKEND_CONFIG $BACKEND_CONFIG
fi

mv $FILE_CURRENT_IPS $FILE_OLD_IPS
I wrote this script to have a way to auto-update the VCL whenever an instance comes up or goes down.
It requires that the .vcl includes backends.vcl.
This script is just one part of the solution; the full set of tasks should be:
1. Get the new server names and IPs (auto scaling); you can use AWS API commands to do that, also via bash.
2. Update the VCL (this script).
3. Reload Varnish (see the sketch below).
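A minimal sketch of step 3, assuming varnishadm can reach the management interface and that the main VCL lives at /etc/varnish/default.vcl (both are assumptions about your setup):

# Load the updated configuration under a fresh name, then switch traffic to it.
NAME="reload_$(date +%s)"
varnishadm vcl.load "$NAME" /etc/varnish/default.vcl
varnishadm vcl.use "$NAME"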
The script is here
http://felipeferreira.net/?p=1358
Other people have done it in different ways:
http://blog.cloudreach.co.uk/2013/01/varnish-and-autoscaling-love-story.html
You don't get to 10K requests per second if you have to resolve an IP on each one. Varnish resolves IPs on start and does not refresh them unless it is restarted or reloaded. Indeed, Varnish refuses to start if it finds two IPs for a DNS name in a backend definition, like the addresses returned for multi-AZ ELBs.
So we solved a similar issue by placing Varnish in front of nginx. Nginx can define an ELB as a backend, so Varnish's backend is a local nginx, and nginx's backend is the ELB.
But I don't feel comfortable with this solution.
You could make the ELB internal in your private VPC so that it would have a local IP. This way you don't have to use any kind of DNS CNAMEs, which Varnish does not support as easily.
Using an internal ELB does not help with the problem, because it usually has two internal IPs!
Backend host "internal-XXX.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
10.30.10.134
10.30.10.46
('input' Line 13 Pos 12)
What I am not sure about is whether these IPs will always remain the same or whether they can change. Anyone?
In my previous answer (more than three years ago) I hadn't solved this issue; my [nginx -> varnish -> nginx] -> ELB solution worked until the ELB changed IPs.
For some time now we have been using the same setup, but with nginx compiled with the jdomain plugin.
The idea is to place an nginx on the same host as Varnish and configure its upstream like this:
resolver 10.0.0.2;  # IP of the AWS resolver on the subnet

upstream backend {
    jdomain internal-elb-dns-name port=80;
}
That upstream will automatically re-resolve and update the upstream IPs if the ELB changes its addresses.
It might not be a pure Varnish solution, but it works as expected.