Strapi AWS Elastic Beanstalk - Deploy success but cannot access URL

I'm trying to deploy Strapi 4 to AWS Elastic Beanstalk.
After deploying, the app is not accessible at its URL (e.g. http://lrd-api.ap-southeast-1.elasticbeanstalk.com).
The instance is running, and docker logs show a successful deployment. The Strapi app is running and connected to the RDS database, but I am unable to access it through the URL.
Are there any additional steps I'm unaware of?
My server.ts file (config/server.ts for Strapi):
import cronTasks from './functions/cron_tasks';

export default ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  app: {
    keys: env.array('APP_KEYS'),
  },
  webhooks: {
    populateRelations: env.bool('WEBHOOKS_POPULATE_RELATIONS', false),
  },
  cron: {
    enabled: true,
    tasks: cronTasks,
  },
});
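For reference, a sketch of the environment variables this config reads; the values are placeholders, and env.array() expects a comma-separated list:

HOST=0.0.0.0
PORT=1337
APP_KEYS=key1,key2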
I have tried changing the EB environment to a load-balanced one listening on both ports 80 and 443.

For anyone facing this same issue, this was the problem:
The Elastic Beanstalk load balancer was forwarding its port 80 to instance port 80, but the app itself was listening on port 1337.
To make it work, I changed the port mapping in my docker-compose.yml: instead of 1337:1337, I used 80:1337 (see the sketch below).
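A minimal docker-compose.yml sketch of that mapping; the service name, image, and env_file are placeholders, not from the original setup:

services:
  strapi:
    image: my-strapi-app:latest   # placeholder image name
    env_file: .env                # supplies APP_KEYS, HOST, PORT, etc.
    ports:
      - "80:1337"                 # host port 80 (what the LB targets) -> container port 1337 (Strapi)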

Related

My Docker container running inside AWS Elastic Beanstalk is not able to connect with the host

My application runs on port 5000 and I have exposed port 5000 in the Dockerfile.
This is my docker-compose.yml file:
"services":
"backend":
"image": "<imageURL>"
"ports":
- "5000:8080"
Container port and application port: 5000
Server port: 8080
The security groups have also been configured properly, and the application is able to connect with the database, but nothing responds when I try to hit the server's IP.
My application has a ping API.
Not sure what you are referring to with an "EBS" server, as EBS is Elastic Block Store, not a compute service.
If you're using AWS ECS, you need to configure "PortMapping" to map external ports to the container ports:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
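For illustration, a portMappings entry in an ECS task definition could look like the following; publishing container port 5000 on host port 80 is an assumption for this setup:

"portMappings": [
  {
    "containerPort": 5000,
    "hostPort": 80,
    "protocol": "tcp"
  }
]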
If you're using EC2, make sure that your service is listening on all interfaces, using the netstat command:
netstat -anlp | grep [your port]
and that the security group's inbound and outbound rules are configured properly.

Docker container deployed via Beanstalk cannot connect to the database on RDS

I'm new to both Docker and AWS. I just created my very first Docker image. The application is a backend microservice with REST controllers persisting data in a MySQL database. I've manually created the database in RDS, and after running the container locally, the REST APIs work fine in Postman.
Here is the Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER alireza.online
COPY ./target/Practice-1-1.0-SNAPSHOT.jar /myApplication/
COPY ./target/libs/ /myApplication/libs/
EXPOSE 8080
CMD ["java", "-jar", "./myApplication/Practice-1-1.0-SNAPSHOT.jar"]
Then I deployed the docker image via AWS Beanstalk. Here is the Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "aliam/backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
And everything went well. But now I'm getting "502 Bad Gateway" in Postman when trying to hit "backend.us-east-2.elasticbeanstalk.com/health".
I checked the logs on Beanstalk and realized that the application has a problem connecting to the RDS database:
"Could not create connection to database server. Attempted reconnect 3 times. Giving up."
What I tried in order to solve the problem:
1. I tried assigning the same security group the EC2 instance uses to my RDS instance, but it didn't work.
2. I tried adding more inbound rules to the security group for the public and private IPs of the EC2 instance, but I was not sure which port and CIDR to define and couldn't make it work.
Any comment would be highly appreciated.
Here are the resources in your stack:
LoadBalancer -> EC2 instance(s) -> MySQL database
All of them need security groups assigned, each allowing inbound connections on the right ports from the resource in front of it.
So, if you assign the sg-1234 security group to your EC2 instances and sg-5678 to your RDS database, there must be a rule in sg-5678 allowing inbound connections from sg-1234 (no need for CIDRs; you can open a connection from SG to SG). The typical MySQL port is 3306.
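As a sketch using the example group IDs above, that SG-to-SG rule can be created with the AWS CLI:

# Allow the EC2 instances' group (sg-1234) to reach MySQL (port 3306)
# in the RDS instance's group (sg-5678); SG-to-SG, no CIDR needed
aws ec2 authorize-security-group-ingress \
  --group-id sg-5678 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-1234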
Similarly, the load balancer (which Elastic Beanstalk creates for you automatically) must have access to port 8080 on your EC2 instances. Furthermore, if you want to reach your instances at "backend.us-east-2.elasticbeanstalk.com/health", the load balancer has to listen on port 80 and have a target group of your instances on port 8080.
Hope this helps!

Meteor on AWS using Mup - SSL with ELB

I'm migrating my Meteor app to AWS and want to use an ACM-issued SSL cert attached to the ELB.
My current setup is:
ELB with the ACM SSL cert (verified that load balancing and HTTPS work with a simple HTTP server inside the EC2 Ubuntu machine)
The Meteor app is deployed on the EC2 machine using Mup (please see my mup.js below, which works well with an SSL cert physically available on the file system)
I want to stop using the reverse proxy from the mup.js config completely and let the ELB handle all the SSL. The problem is that the ELB is not able to communicate with the Meteor app.
I have tried different ROOT_URLs but none are working:
EC2 Elastic IP with HTTP and HTTPS
(i.e. ROOT_URL: 'https://my-ec2-elastic-ip.com', ROOT_URL: 'http://my-ec2-elastic-ip.com')
ELB domain name with HTTP and HTTPS
What should I put for ROOT_URL, and is it decisive in accepting requests? i.e., if I have the wrong ROOT_URL, will Meteor still be able to accept incoming requests?
Mup version: 1.4.3
Meteor version: 1.6.1
Mup config
module.exports = {
  servers: {
    one: {
      host: 'ec2-111111.compute-1.amazonaws.com',
      username: 'ubuntu',
      pem: 'path to pem'
    }
  },
  meteor: {
    name: 'my-app',
    path: 'path',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'https://ec2-111111.compute-1.amazonaws.com',
      MONGO_URL: 'mongo url',
    },
    dockerImage: 'abernix/meteord:node-8.9.1-base',
    deployCheckWaitTime: 30,
  },
  proxy: {
    domains: 'ec2-111111.compute-1.amazonaws.com,www.ec2-111111.compute-1.amazonaws.com',
    ssl: {
      crt: './cert.pem',
      key: './key.pem'
    }
  }
};
Resolved. The first and more general issue was that I was using a classic ELB, which doesn't support WebSockets and was preventing the DDP connection. The newer Application Load Balancer, which supports WebSockets and sticky sessions, helped. More on the difference here: https://aws.amazon.com/elasticloadbalancing/details/#details
Another issue, more specific to my use case, was having no endpoint for the ELB health check: I was hiding/securing everything behind basic_auth, so the health check was getting 403 Unauthorized, failing, and never registering the EC2 instance with the ELB. Make sure you have an endpoint for the health check that returns 200 OK, and also revisit your security groups: check the inbound rules and make sure the ELB has access to the corresponding ports on the EC2 instance (80, 443, etc.).
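If the reverse proxy in front of the app is nginx, a minimal sketch of exempting a health-check path from basic_auth (the /health path is an assumption, not from the original setup):

location /health {
    auth_basic off;     # let the ELB health check through without credentials
    return 200 'OK';    # answer 200 without touching the app
}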

IP Address specification in deployment of Spring Cloud microservice

I am trying to develop a Spring Cloud microservice. I built a sample demo Spring Cloud project using a Zuul proxy, a Eureka server, and Hystrix. I added my service as a client of the Eureka server and set up the routing. All of this works well. Now I need to deploy it to my AWS EC2 machine. Locally, I added the default zone URL in the application.properties file like the following:
eureka.client.serviceUrl.defaultZone=http://localhost:8071/eureka/
When I move to my EC2 machine, or use AWS ECS, how can I change this IP address to the proper configuration for the cloud? I also use localhost with ports like 8090 and 8091 for the Zuul and Turbine dashboard projects, etc. How do I change these URLs when deploying to the cloud?
We use domains. So you would point an A record for api.yourdomain.com at the IP address or load balancer alias that fronts your services.
Why? When we decide to change infrastructure, we can change a DNS entry rather than modify all of our microservices' configurations. We recently moved from Eureka/Zuul to AWS's ALB. Using domains allowed us to run both environments in parallel and cut over with no downtime. In the event of a failure in the new environment, the old one was still running and we could cut back with a simple A record change.
In your application.yml file you can configure different profiles, so that you can test locally and then, in ECS, define the profile to use when creating the task definition.
First, here is an example of how you can configure your application.yml file to run with different profiles:
############# for running locally ################
server:
  port: 1234
logging:
  file: logs/example.log
  level:
    com.example: INFO
endpoints:
  health:
    sensitive: true
spring:
  datasource:
    url: jdbc:mysql://example.us-east-1.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
security:
  oauth2:
    client:
      clientId: example
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
    resource:
      userInfoUri: http://localhost:9999/uaa/user
---
########## For deployment in Docker containers/ECS ########
spring:
  profiles: prod
  datasource:
    url: jdbc:mysql://example.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
prodnetwork:
  ipAddress: api.yourdomain.com
security:
  oauth2:
    client:
      clientId: exampleid
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/token
      userAuthorizationUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/authorize
    resource:
      userInfoUri: https://${prodnetwork.ipAddress}/v1/uaa/user
Second: set up ECS to use your prod profile.
When you build your Docker container, tag it with your new profile's name, in this case "prod".
Third: create a task definition, and set your Docker tag in the repo URL and your new profile in your container run command.
Now when you work on your application on your local machine you can run it with "localhost", and when you deploy it to ECS you can set your new domain/IP in the run command in your container definition, as sketched below.
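As a sketch, either of the following activates the prod profile in the container; the jar name is a placeholder:

# Via the container run command in the task definition
java -jar app.jar --spring.profiles.active=prod

# Or via an environment variable in the container definition
SPRING_PROFILES_ACTIVE=prod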

Deploying Docker to AWS Elastic Beanstalk -- how to forward port to host? (port binding)

I have a project set up with CircleCI that I am using to auto-deploy to Elastic Beanstalk. My EB environment is a single-container, auto-scaling, web environment. I am trying to run a service that listens on raw socket port 8080.
My Dockerfile:
FROM golang:1.4.2
...
EXPOSE 8080
My Dockerrun.aws.json.template:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "<bucket>",
    "Key": "<key>"
  },
  "Image": {
    "Name": "project/hello:<TAG>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
I have made sure to expose port 8080 on the "role" assigned to my project environment.
I used the exact deployment script from the CircleCI tutorial linked above (except with changed names).
On the EC2 instance that is running my EB application, I can see that the Docker container has run successfully, except that Docker did not forward the exposed port to the host. I have encountered this in the past when I ran docker run ... without the -P flag.
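For context, a quick illustration of the difference (the image name is a placeholder):

# No publish flags: EXPOSEd ports stay private to the container
docker run my-image

# -P publishes all EXPOSEd ports to random high host ports
docker run -P my-image

# -p publishes an explicit mapping: host port 8080 -> container port 8080
docker run -p 8080:8080 my-image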
Here is an example session after SSH-ing into the machine:
[ec2-user@ip-xxx-xx-xx-xx ~]$ sudo docker ps
CONTAINER ID   IMAGE                              COMMAND                CREATED      STATUS      PORTS      NAMES
a036bb061aea   aws_beanstalk/staging-app:latest   "/bin/sh -c 'go run"   3 days ago   Up 3 days   8080/tcp   boring_hoover
[ec2-user@ip-xxx-xx-xx-xx ~]$ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
What I expect to see is something like ->8080 in the PORTS column, indicating that the port is forwarded onto the host.
When I do docker inspect on my container, I also see that these two configurations are not what I want:
"PortBindings": {},
"PublishAllPorts": false,
How can I trigger a port binding in my application?
Thanks in advance.
It turns out I had a misunderstanding of how Docker's networking stack works. When a port is exposed but not published, it is still reachable on the local network interface through the Docker container's private IP address. You can obtain this IP address by checking docker inspect <container>.
Rather than doing curl localhost:8080, I could do curl <containerIP>:8080.
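A sketch of scripting that, for a container on the default bridge network:

# Print the container's private IP on the default bridge network
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>

# Then hit the exposed port directly
curl "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>):8080"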
In my EB deploy, nginx was automatically set up to forward (HTTP) traffic from port 80 to this internal private port as well.
I had the same problem in a Rails container (port 3000, using Puma). By default, rails server binds only to localhost; I had to use the -b option to bind to 0.0.0.0, and that solved the problem.
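A minimal sketch of that invocation:

# Bind the Rails server to all interfaces instead of localhost only
rails server -b 0.0.0.0 -p 3000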
With React I didn't have the same problem, because the npm serve package binds to all interfaces by default.