React + Spring Boot deployed as Multi-container Docker on Elastic Beanstalk, not able to communicate - amazon-web-services

I created two separate Docker images for React (port 80) and Spring Boot (port 8080). When running docker-compose locally, everything works fine, but when I deploy as Multi-container Docker on Elastic Beanstalk, the containers are not able to communicate.
I followed this guide: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html

OK, finally got the solution. I created an nginx server whose default.conf sets up a proxy_pass, e.g.:
location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://servername:8080/api/v1/;
}
I also made some changes in the React application to call the REST API. After that, it works in the Elastic Beanstalk Multi-container Docker environment.
For reference: https://github.com/km2411/flask_template/blob/master/nginx/default.conf
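For anyone reconstructing this, here is a minimal sketch of what such a complete default.conf could look like, assuming the React build is baked into the nginx image and the Spring Boot container is reachable on the Docker network as backend (the hostname and paths are illustrative, not the asker's exact values):

server {
    listen 80;

    # Serve the React production build copied into the nginx image.
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the Spring Boot container.
    # "backend" is an assumed service name on the compose/EB network.
    location /api/ {
        proxy_pass http://backend:8080/api/;
        proxy_set_header Host $host;
    }
}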

Related

Using an Nginx container to proxy to a React container on Elastic Beanstalk with docker-compose: is that a valid approach?

I'm hoping for some general feedback on my approach to trying to get a multi-container app up on Elastic Beanstalk via docker-compose (choosing "docker running on 64bit amazon linux 2" as the platform). Two main questions:
1. Am I right that Elastic Beanstalk listens on port 80 by default when we use docker-compose, and so we need to treat port 80 as the entry point to our application? (Reasons for thinking so below.)
2. I'm using an nginx container (listening on port 80 of the host machine) to route traffic to my front-end container (which talks to a back-end container). Is that approach to getting multiple containers running on EB valid? Or is there some other much more common/straightforward way of doing this with docker-compose? (Something like "yes! that's a valid way to do this," or "no, you are going about this the totally wrong way, go in this other direction" would be helpful.)
Why Elastic Beanstalk instead of ECS? I'm just trying to understand the basic functionality of Elastic Beanstalk first.
More context:
I have three containers: 1) a react front end, 2) an express back end, and 3) an nginx proxy. Why did I add nginx? Because it's my understanding that Elastic Beanstalk sets the proxy server configuration to none when you use docker-compose (see here, under "Environments with Docker Compose"), so what we have to do is get nginx to listen on the default point of entry for an Elastic Beanstalk app, port 80, and then forward requests to other containers on the bridge network created by docker-compose. In other words, when you go to the link associated with an EB app, it will default to whatever is listening on port 80. And if that's an Nginx server, you will be routed according to the configuration of the Nginx server.
I'll paste my docker-compose.yml file below, along with my nginx Dockerfile and its default.conf file. At this point, it mostly runs on my local machine (I can get to the front and back end directly on their respective ports), but I still get an "Invalid Host header" error when I go to localhost:80/, where I should be proxied (by nginx) to the React app homepage. Any errors noticed would be helpful, but I'm not necessarily looking for a solution to that specific problem. Rather, my main two questions are up above, looking for some confirmation/correction of my general approach.
Thank you!
docker-compose.yml
version: "3"
services:
  nginx:
    image: mfatigati/shop-proxy
    ports:
      - "80:80"
    depends_on:
      - client
  server:
    image: mfatigati/shop-server-amd
    container_name: shop-server
    ports:
      - "4001:4000"
  client:
    image: mfatigati/shop-client-amd
    container_name: shop-client
    depends_on:
      - server
    ports:
      - "3001:3000"
Dockerfile for nginx
FROM nginx
EXPOSE 80
RUN rm -r /usr/share/nginx/html*
COPY configs/default.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
default.conf for nginx
upstream shop-client {
    server shop-client:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://shop-client;
    }
}
I think it might be fine; I have a similar approach on ECS. However, you will need nginx to serve as a reverse proxy, otherwise you will run into CORS issues as soon as you run it anywhere other than localhost.
On ECS you have the option to run all containers in the same task or in different ones. If you choose the latter (for better scalability, amongst other reasons), you will need to set up service discovery as well.
On Beanstalk there might be options I do not know about, but at the very least, using nginx should be fine.
You can set Beanstalk to use different ports if you want, but you will still need something acting as the reverse proxy, so I'd just leave it at 80.
EDIT: addition on reverse proxy:
The problem to solve is the CORS issues you would otherwise encounter.
The upstream directive is used for load balancing/routing (see this question and its answers).
The general mechanism is to have several locations in your nginx conf that point specific requests to other servers. This way, all requests can be made to the same origin (scheme, domain, port), and nginx forwards them to the servers, which can then happily reply.
Something like:
server {
    listen 80;
    listen [::]:80;
    server_name my-app.com;

    location /api/ {
        proxy_pass http://api-service:8080;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
}
You can have as many locations as necessary. This is a very basic setup and you might want to look into some other features, but it covers the essentials.
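On the asker's "Invalid Host header" error: that message usually comes from the React development server's host checking, which rejects Host headers it doesn't recognize (by default, nginx passes the upstream name, here shop-client). One common workaround, assuming the dev server is what nginx is proxying to, is to forward the browser's original Host header, along the lines of:

# Sketch: forward the original Host header so the React dev server's
# host check sees "localhost" instead of the upstream name.
location / {
    proxy_pass http://shop-client;
    proxy_set_header Host $host;
}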

Configuring nginx and docker on EC2 (JetBrains)

I have set up JetBrains Upsource and TeamCity on the same EC2 instance for evaluation purposes. If I expose each container on port 80 separately, I can reach it from the outside world. I want to know how to set up nginx so that I can reach each container over a subdomain, e.g. "upsource.example.aws.com" and "teamcity.example.aws.com". I exposed the containers on ports 8080 and 8111 to the host. Is it even possible to achieve this? If so, I don't know how to start. I have read about ways to host multiple domains on one machine for a Node web app by exposing the static context, but I have no idea how to make it work with the preconfigured Docker images.
Can this be achieved via the nginx conf file?
If not, do I have to use two instances, or is there another possibility within AWS?
It's possible with nginx. You have to use something called a reverse proxy.
You can expose both containers on different ports and route to them with the help of the nginx configuration.
For example, if you have containers running on ports 8000 and 8001 on 127.0.0.1, you can route like this:
location /1 {
    proxy_pass http://127.0.0.1:8000;
}

location /2 {
    proxy_pass http://127.0.0.1:8001;
}
Updated Answer
You need three containers. The nginx server should be running on port 80. The other two containers will host the sites on, say, port 8000 (upsource.example.aws.com) and port 8111 (teamcity.example.aws.com).
Update the configuration file with the location settings as shown above. Make sure that location / forwards to port 8000 and location /teamcity forwards to port 8111 on your host. More details on how to configure nginx are on Docker Hub.
Working
When you go to blabla.com, the nginx server forwards it to port 8000, and when you go to blabla.com/teamcity, it goes to port 8111.
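Since the question asks for subdomains rather than URL paths, the closer fit is name-based virtual hosting: one server block per subdomain, each proxying to the corresponding host port. A minimal sketch, assuming DNS for both subdomains points at the instance and the containers are published on host ports 8080 (Upsource) and 8111 (TeamCity) as described in the question:

# Hypothetical name-based virtual hosts: one server block per subdomain.
server {
    listen 80;
    server_name upsource.example.aws.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name teamcity.example.aws.com;

    location / {
        proxy_pass http://127.0.0.1:8111;
        proxy_set_header Host $host;
    }
}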

django-uWSGI-nginx deployment issue: No public access behind VPN and proxy

I am trying to deploy a Django application with uWSGI and nginx.
I followed this tutorial to the end and everything seems to be working fine, except for accessing the app from an external IP.
I do not own a domain name yet, so my relevant nginx settings look like:
server {
    listen 80;
    server_name "";
}
Also, I have disabled the firewall:
iptables -S gives:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
Now, the CentOS 6.7 machine I am deploying on has no direct access to the internet, as it is behind a proxy (i.e. I need to do pip install django --http_proxy=myproxy.mynet:myport for it to work).
Among other things, I have tried using proxy_pass in my nginx configuration. One attempt:
http {
    upstream corporate_proxy {
        server myproxy.mynet:myport;
    }

    server {
        listen 80;
        rewrite_log on;

        location / {
            proxy_set_header Host "";
            proxy_pass http://corporate_proxy;
        }
    }
}
Finally, let me add that I cannot access the machine from outside the network either, as it is inside a VPN. So in order to SSH onto it, I first need to connect to the VPN (through the FortiClient console).
It is not quite clear to me how all this VPN/proxying really works and how it affects the communication between nginx and external requests. Any solutions or ways to troubleshoot further are more than welcome!

Can't get server up and running - DigitalOcean / Django

I am very confused about how to set up my server because nothing seems to be working right. (I am a novice to all this)
I have the domain name dreamof.science with the registrar AlpNames, and cloud hosting through DigitalOcean.
On DigitalOcean I have a droplet with nginx and Django installed over SSH as a secondary user (not root). Onto it I added an app, started from a book I am reading that teaches Django, pulled from my GitHub into the directory sites/stratosphere.dreamof.science/source/django.
I have been reading about this for days, and the more I read, the more confused I get. A records, AAAA records, CNAMEs, PTR records, subdomains... I just want to know how the heck to get this server up and my app to run.
On my registrar I have my name servers pointed to the ones that DigitalOcean gave me for dreamof.science (xx1.digitalocean.com, xx2, etc.). The registrar also shows two A records, dreamof.science and www.dreamof.science, which both point to the same IP address: the IP of my droplet.
I also have a CNAME for stratosphere.dreamof.science. I am under the impression that this is my subdomain, because you're not supposed to run apps on the bare domain... or something like that. Hence my files live under stratosphere.dreamof.science, and my nginx server config points at stratosphere.dreamof.science.
Now when I go to dreamof.science it just says "this webpage is unavailable." Same thing with stratosphere.dreamof.science, and even when I go directly to the server IP, nothing shows up. The server is running through the DigitalOcean console and the droplet is active.
What is wrong here?
First, try creating a basic (empty) Django project somewhere like /var/www/myproject. Start a debug server on port 8000 that accepts all connections, using the runserver command like so:
python manage.py runserver 0.0.0.0:8000
Then navigate to http://dreamof.science:8000/ and see if you get a successful debug screen. That will tell you whether the domain resolves correctly.
Now try setting a basic Nginx config, similar to the following:
server {
    server_name dreamof.science;
    listen 80;

    location / {
        proxy_pass http://localhost:8000;
    }
}
Make sure that you don't have a firewall set to reject connections on that port.
Now try visiting http://dreamof.science/ to see if Nginx is running and set up to proxy the root domain to port 8000.
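If the app should ultimately live on the stratosphere.dreamof.science subdomain mentioned in the question, the same pattern applies with that name as the server_name. A sketch, assuming the CNAME resolves to the droplet's IP:

# Hypothetical server block for the subdomain from the question.
server {
    server_name stratosphere.dreamof.science;
    listen 80;

    location / {
        proxy_pass http://localhost:8000;
    }
}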

Django app running on EC2, but visiting the Elastic IP URL returns page not found

I'm just starting out with EC2. I've pulled down a Git repo that I started on my local machine, so I know it works when running the server from there, and it seems to work when I run the server from the EC2 instance as well. But for some reason, when I visit the Elastic IP address of that instance, I get a page not found. Any idea why that might be?
So, I've now started using nginx, and made a conf file following the instructions here: https://code.djangoproject.com/wiki/DjangoAndNginx. It is as follows:
server {
    listen 80;
    server_name ec2-54-242-149-154.compute-1.amazonaws.com;
    access_log /var/log/nginx/USBag.access.log;
    error_log /var/log/nginx/USBag.error.log;

    location /basicMap/ {
        alias /home/www/ec2-54-242-149-154.compute-1.amazonaws.com/basicMap/;
        expires 30d;
    }

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8080;
    }
}
basicMap is a URL I have already defined in my Django app, and the linked EC2 hostname is the one my server is running on. I am having a lot of difficulty finding documentation on how to proceed, or on how to determine whether my conf file is correct. Using the standard python manage.py runserver doesn't work, however. Advice on how to proceed?
There is a lot of info out there about setting up a production Django server, and I'll give you my personal preferences below, but before all that, let's back up and see if we can get any response at all from the production server.
To start the development server on your EC2 instance run:
manage.py runserver 0.0.0.0:8000
That command will cause runserver to bind to all interfaces and serve files to the external world. You'll never want to do this outside of development, but it is a good way to test that your Django app is set up before complicating things. Now try hitting your EC2 instance and see if you get a response.
If that's still not working, make sure you allow incoming connections on the server's port (8000 in the command above, 80 once live). You could test that the port is open using netcat (nc -l).
Once you are satisfied that your app is set up, I'd recommend using nginx as your front-end webserver and gunicorn as your Django webserver in production. You'll likely want to look into setting up a virtualenv, supervisord, etc. for your production setup (here is a tutorial: http://senko.net/en/django-nginx-gunicorn/), but all that depends on the specifics of your project.
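To make the nginx + gunicorn recommendation concrete, here is a minimal sketch of the nginx side, assuming gunicorn is bound to 127.0.0.1:8000 and static files were collected into /var/www/myproject/static (both paths are illustrative, not from the original question):

# Hypothetical nginx site config fronting a gunicorn-served Django app.
server {
    listen 80;
    server_name ec2-54-242-149-154.compute-1.amazonaws.com;

    # Serve collected static files directly.
    location /static/ {
        alias /var/www/myproject/static/;
    }

    # Everything else goes to gunicorn.
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}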
Once you are satisfied that you have your app setup, I'd recommend you use nginx as your front end webserver and gunicorn as your django webserver in production. You'll likely want to look into setting up a virtualenv, supervisord etc for your production setup (here is a tutorial: http://senko.net/en/django-nginx-gunicorn/), but all that depends on the specifics of your project.