Spring Cloud Config Server requires restart on AWS - amazon-web-services

I am running a spring cloud config server through AWS Elastic Beanstalk. Everything seems to work fine, but after some time (around 2 weeks) I encounter problems accessing the configurations. I get the error
Loading configuration failed
without further details, on Spring Boot's "Whitelabel Error Page" (when accessing the configuration through the browser; access through a client just times out).
When I restart the config server instance on Elastic Beanstalk I can access the configuration normally again (no repository/code changes, just restart).
I suppose this is not expected behavior - I don't believe there is a "timeout" on the config server after which a restart is required.
Could it have something to do with AWS?
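One thing worth ruling out (assuming the server is backed by a Git repository): Spring Cloud Config keeps a local clone of the repo on disk, by default under a temporary directory, and if the OS cleans that directory up while the server is running, requests can start failing until a restart re-clones it. Below is a minimal sketch of properties that keep the clone in a stable location and force a fresh pull on each fetch; the repo URL and basedir path are placeholders, not the poster's actual configuration.

# application.properties of the config server (sketch)
spring.cloud.config.server.git.uri=https://example.com/your/config-repo.git
# keep the local clone outside the temp directory so OS cleanup cannot remove it
spring.cloud.config.server.git.basedir=/var/config-repo
# clone at startup and force-pull so a stale or dirty local copy is repaired automatically
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.force-pull=true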

Related

aws ec2 instance suddenly not reachable

I have an EC2 Linux server running a Spring Boot application behind an nginx reverse proxy. It has been running for months.
Today I tried accessing the server and it was not reachable, neither via SSH nor via HTTP.
I went to CloudWatch and saw these metrics:
I don't know how to interpret this: EBSReadBytes rose to 8 GB for an hour.
To regain access to the server I had to restart the instance. After that I went through the nginx access logs:
Last Java logs:
I don't see anything unusual in the nginx logs. (The redacted lines are from legitimate users.)
For the Java app, I usually get this kind of log on my development computer when I let it go to sleep and come back, but I have never had it on my Linux server.
Do you have an idea why this happened?
I am now pretty sure it was a resource leak due to not closing a file input stream.
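For completeness, here is a minimal sketch of the usual fix for that kind of leak: wrap the stream in try-with-resources so it is always closed, even when an exception is thrown. The class name and file path are made up for illustration.

import java.io.FileInputStream;
import java.io.IOException;

public class ReadFileExample {
    public static void main(String[] args) throws IOException {
        // try-with-resources closes the stream automatically, even on exceptions,
        // so file descriptors are not leaked over time
        try (FileInputStream in = new FileInputStream("/tmp/example.txt")) {
            int b;
            while ((b = in.read()) != -1) {
                System.out.write(b);
            }
            System.out.flush();
        }
    }
}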

request times out when pinging aws load balancer

I have a dockerized Node.js Express application that I am migrating to AWS from Google Cloud. I had successfully done this on the same project before deciding that Cloud Run was more cost-effective because of its free tier. Now I want to switch back to Fargate, but I am unable to do it again, due to what I'm guessing is a missing crucial step. For a minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ Essentially, it uses docker compose up with an AWS context and a project name to deploy to ECS and Fargate.
The Load Balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined port 5002 in my Docker container. I know the issue is not related to exposed ports or to anything code-related, since I had this running successfully on Google Cloud Run. When I try to hit any of my Express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out
Note: I have already verified that my inbound security rules have been set to all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking CloudWatch Logs I found that Fargate can't read environment variables defined inside the docker-compose file. Instead, they need to be defined in a .env file and read into docker compose through the --env-file flag. My code was then trying to listen on a port that came from an environment variable but was undefined, so it was logging the error below in CloudWatch.
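As a rough sketch of that setup (the service name, image, and the PORT variable are just examples): put the values in a .env file, reference them from the compose file, and pass the file with --env-file when deploying.

# .env
PORT=5002

# docker-compose.yml (excerpt)
services:
  web:
    image: my-express-app
    ports:
      - "5002:5002"
    environment:
      - PORT=${PORT}   # substituted from the file passed via --env-file

# deploy with the aws context
docker compose --env-file .env up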

AWS EBS application timing out when changed to a single instance environment

I have a web application running on Elastic Beanstalk in a load-balanced environment; however, when I changed the configuration to a "single instance" environment, the application returns a 408 Request Timeout for every HTTPS browser request to the server (custom domain).
The environment health in my AWS console shows everything is running okay so I am baffled by what could be causing the problem. When I change the configuration back to 'load balanced' everything works fine again.
Since you are using HTTPS with a custom domain, the HTTPS functionality is lost when you switch to a single instance. To make HTTPS work on a single instance, you need to obtain a new SSL certificate (AWS ACM can't be used) and deploy it on your instance through a re-configured Nginx:
How to Setup SSL(HTTPS) on Elastic Beanstalk Single Instance Environment
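The linked guide covers the steps in detail; the core of it is an nginx server block on the instance that terminates TLS with a certificate you install yourself. The domain, certificate paths, and upstream port below are placeholders.

server {
    listen 443 ssl;
    server_name example.com;                             # your custom domain
    ssl_certificate     /etc/pki/tls/certs/server.crt;   # certificate obtained outside ACM (e.g. Let's Encrypt)
    ssl_certificate_key /etc/pki/tls/certs/server.key;

    location / {
        proxy_pass http://127.0.0.1:5000;                # port your application listens on
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}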

JPA cannot connect to AWS RDS from Beanstalk but it works locally

I'm deploying a Java 8 Spring Boot web app to AWS Elastic Beanstalk. I have an associated RDS MySQL instance and configured the relevant connection details.
The connection works when running the app locally, on my machine, because I set the following routing configuration for the RDS server:
As outlined, rules are also added for the security groups associated with my EC2 instances.
Therefore, running the mysql client on the EC2 machine works and the database can be reached.
The issue appears when deploying the app to Beanstalk, where it gets deployed onto the EC2 instances. The app crashes because it gets connection refused errors when trying to connect to the MySQL RDS instance:
This doesn't seem to make any sense.
The database is accessible from both the EC2 instance (verified via the mysql command) and outside AWS, so the only remaining cause would be having misconfigured the Spring Boot app properties.
This doesn't seem to be the problem either, because when running it locally, on my machine, the app has no issues connecting to the RDS instance and runs normally against the production MySQL server.
I have separate application-development.properties and application-production.properties files, but I set the relevant properties to the same values:
spring.datasource.url = jdbc:mysql://XXXXXX.rds.amazonaws.com:3306/ebdb?useSSL=false&allowPublicKeyRetrieval=true&useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC
spring.datasource.username = XXXXXX
spring.datasource.password = XXXXXX
spring.datasource.driver-class-name = com.mysql.cj.jdbc.Driver
Any pointers as to why my app could be running locally but not when deployed to Beanstalk?
Recreating both the Beanstalk environment and RDS instance seemed to fix the issue.
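A side note in case it helps others with the same setup: if the RDS instance is the one created through, and coupled to, the Beanstalk environment, Beanstalk exposes its connection details to the application as environment properties (RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME, RDS_PASSWORD), so the datasource can reference those instead of hard-coded values. A sketch of what that looks like:

spring.datasource.url = jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}?useSSL=false
spring.datasource.username = ${RDS_USERNAME}
spring.datasource.password = ${RDS_PASSWORD}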

Elastic Beanstalk not deploying and cannot find error in error logs

I am having an issue getting my application to deploy on Elastic Beanstalk. It works flawlessly on my local machine; however, when I try to deploy to Beanstalk it fails and rolls back to the sample application. Then when I check the error.log file it is empty. Also, when I check the node.js log the only output is
> Elastic-Beanstalk-Sample-App#0.0.1 start /var/app/current
> node app.js
Server running at http://127.0.0.1:8081/
I don't understand how to find the error it sends when deploying to the server. Where should I be looking?
The reason it was deleting my logs is that I had "rolling with additional batch" deployments enabled, which deleted my logs on rollback. As far as the error was concerned, it was an error in my code that caused it to require a module that didn't exist.
It appears that you are running the app with the HOST set to localhost (127.0.0.1). localhost is accessible only to internal processes on your Beanstalk instance. Change the host to 0.0.0.0 so that the app can be accessed from other IP addresses.
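In code, that means passing the host explicitly to listen, roughly like this (a sketch; the PORT fallback value is just an example):

const express = require('express');
const app = express();

const port = process.env.PORT || 8081;

// Bind to 0.0.0.0 so the app accepts connections from outside the instance,
// not only from processes on the instance itself
app.listen(port, '0.0.0.0', () => {
  console.log(`Server running at http://0.0.0.0:${port}/`);
});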