I am working on a microservices app built with JHipster and I want to deploy it to AWS Elastic Beanstalk. I have tried to deploy the Registry to Elastic Beanstalk in several different ways, all unsuccessfully: the Registry does not start, does not work, and does not connect to the microservices.
When I run the app locally everything works fine; the problem only appears when I deploy to AWS Elastic Beanstalk.
I have searched for and tried all the different fixes I could find, but I cannot locate the problem with the Registry. I tried these solutions without success:
Change the Nginx port (5000, 8081)
Verify that the Maven package was built with the production profile
Configure the security group on the EC2 instance
Open all ports on the EC2 instance
This is the log:
2017-05-04 14:25:51.705 ERROR 25337 --- [et_localhost-15] c.n.e.cluster.ReplicationTaskProcessor : Network level connection to peer localhost; retrying after delay
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.apache4.ApacheHttpClient4Handler.handle(ApacheHttpClient4Handler.java:187)
at com.netflix.eureka.cluster.DynamicGZIPContentEncodingFilter.handle(DynamicGZIPContentEncodingFilter.java:48)
at com.netflix.discovery.EurekaIdentityHeaderFilter.handle(EurekaIdentityHeaderFilter.java:27)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:570)
at com.netflix.eureka.transport.JerseyReplicationClient.submitBatchUpdates(JerseyReplicationClient.java:116)
at com.netflix.eureka.cluster.ReplicationTaskProcessor.process(ReplicationTaskProcessor.java:71)
at com.netflix.eureka.util.batcher.TaskExecutors$BatchWorkerRunnable.run(TaskExecutors.java:187)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:158)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:82)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:271)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
I had multiple problems deploying the JHipster Registry on AWS EB. In the end I got it working by deploying it as a Docker image. Try these steps:
Run mvn package docker:build
Copy the Dockerfile and the war file to a folder and zip both files together
In EB, select Docker as the web server from the dropdown, with a single-instance environment (JHipster recommends a single instance for the jhipster-registry, so avoid a load balancer)
If you're using a VPC, make sure you select at least one public subnet (it took me days to figure out this step) ref: link
I have the following AWS infrastructure:
VPC
ECS Fargate container running an ASP.NET app
MemoryDB Redis Cluster
Elastic Beanstalk Spring Boot Web App using Spring Boot Redis
The problem is that the Elastic Beanstalk Spring Boot app cannot connect to the MemoryDB cluster. It always fails with the following error:
o.s.d.r.l.RedisMessageListenerContainer : Connection failure occurred.
I am able to connect to my local Redis from the Spring Boot app without any issues. Also, the ASP.NET app running in ECS Fargate can connect to the MemoryDB cluster without any issues. Based on this I figured it would be relatively easy to get a Spring Boot app running in Elastic Beanstalk to connect to a MemoryDB cluster; boy, was I wrong. I have gone over everything configuration-related in the VPC and the Elastic Beanstalk application, over and over. They are in the same VPC, they use the same subnet, and they have the necessary inbound and outbound rules in the attached security groups. I've added the security group created by the Elastic Beanstalk app to the MemoryDB cluster. I've tried connecting through the JedisConnectionFactory with both standalone and cluster configurations, and I've tried with SSL on and off. I've read over the following Stack Overflow questions:
Connect to Redis (AWS) from Elastic Beanstalk instance
Connecting to AWS MemoryDB
How to connect AWS Elasticache Redis cluster to Spring Boot app?
How run docker redis in cluster mode?
aws elastic beanstalk with spring boot app
Use java Jedis connect to aws elasticache redis
What format to use when entering an IP address into an EC2 Security Group rule?
JedisCluster : redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
I did not find anything helpful in any of these. It seems to me that there is no documentation anywhere that outlines a simple base example of how to connect a Spring Boot app running in Elastic Beanstalk to an AWS MemoryDB Redis cluster.
My properties file:
My RedisConfig Jedis connection factory creation functions:
At this point it seems to me the problem is very likely AWS-related, but I cannot be certain, and I don't know what else to try, since I can connect without issue from ECS Fargate, and from my local development environment to a local Redis DB. Does anyone have any suggestions, or a very simple tutorial on how to connect a Spring Boot app running in Elastic Beanstalk to a MemoryDB cluster?
EDIT 1
After looking at the Elastic Beanstalk stdout web log, I'm seeing this error when I use the RedisClusterConfiguration connection method:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: No reachable node in cluster; nested exception is redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster] with root cause
Jul 4 17:15:53 ip-172-31-26-91 web: redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
Jul 4 17:15:53 ip-172-31-26-91 web: at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:86) ~[jedis-3.3.0.jar!/:na]
Jul 4 17:15:53 ip-172-31-26-91 web: at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:103) ~[jedis-3.3.0.jar!/:na]
Also, in the nginx error log from Beanstalk, I'm seeing this:
connect() failed (111: Connection refused) while connecting to upstream, client: 73.33.4.162, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "odyssey-env.eba-yq3g2trt.us-east-1.elasticbeanstalk.com"
EDIT 2
When I use RedisStandaloneConfiguration to connect, I get this error:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool] with root cause
Jul 4 17:59:26 ip-172-31-26-91 web: java.net.ConnectException: Connection refused (Connection refused)
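One low-level check that separates a network problem (security groups, subnets) from a client-configuration problem is a raw TCP probe run from the Beanstalk instance itself. This is only a sketch; the endpoint name below is a placeholder, not the asker's real MemoryDB cluster endpoint.

```shell
# Placeholder endpoint: substitute your real MemoryDB cluster endpoint.
ENDPOINT=${ENDPOINT:-clustercfg.my-cluster.xxxxxx.memorydb.us-east-1.amazonaws.com}
PORT=${PORT:-6379}

# Bash's /dev/tcp opens a raw TCP connection. If this times out or is
# refused, the problem is network-level (security groups, subnet routing,
# NACLs) rather than anything in the Jedis configuration.
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$ENDPOINT/$PORT"; then
  echo "TCP reachable"
else
  echo "TCP unreachable: check security groups and subnet routing"
fi
```

MemoryDB enables TLS by default, so once the TCP probe succeeds, running redis-cli -h "$ENDPOINT" -p 6379 --tls ping (redis-cli 6 or newer) from the same instance checks the Redis layer; if that works but Jedis still fails, the problem is in the client configuration rather than in AWS networking.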
I have a dockerized Node.js Express application that I am migrating to AWS from Google Cloud. I had done this successfully on the same project before deciding Cloud Run was more cost-effective because of its free tier. Now I want to switch back to Fargate, but I am unable to get it working again, due to what I'm guessing is a missed crucial step. For a minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ In essence, it uses docker compose up with an AWS context and a project name to deploy to ECS and Fargate.
The load balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined port 5002 in my Docker container. I know the issue is not related to exposing port numbers or to anything code-related, since I had this running successfully in Google Cloud Run. When I try to hit any of my Express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out.
Note: I have already verified that my inbound security rules are set to allow all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking the CloudWatch logs I found that Fargate can't read environment variables defined inside the docker-compose file. Instead, they need to be defined in a .env file and passed to docker compose through the --env-file flag. My code was trying to listen on a port taken from an environment variable that was undefined, which produced the below error in CloudWatch.
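The fix described in the edit can be sketched like this (PORT is just the variable name used in this question; the compose invocation is illustrative):

```shell
# Variables that used to live in docker-compose.yml move into a .env file:
cat > .env <<'EOF'
PORT=5002
EOF

# docker-compose.yml then references the value as ${PORT}, and the ECS
# integration is pointed at the file explicitly (commented out here,
# since it needs the AWS context from the guide):
#   docker compose --env-file .env up

# The same file can be sourced locally to sanity-check the values:
set -a; . ./.env; set +a
echo "app will listen on port $PORT"
```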
I want to set up a self-managed private Docker registry on an EC2 instance without using the AWS ECR/ECS services, i.e. using the docker registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with Docker for Windows installed.
Please note:
The EC2 instance is hosted in a private subnet.
I have already created an AWS ALB with an OpenSSL self-signed certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed the Docker registry using the command below:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think pre-routing of 443 to 8080 is working, because when I hit https:///v2/_catalog in the browser I get output in JSON format.
Currently the catalog is empty because no image has been pushed to the registry yet.
I expect this Docker registry hosted on the AWS EC2 instance to be accessible remotely, i.e. from a remote Windows machine as well.
Any references/suggestions/steps to achieve my task would be really helpful.
Hoping for a quick resolution.
Thanks and Regards,
Rohan Shetty
I have resolved the issue by following the steps below:
Added the --insecure-registry parameter to the docker.service file
Created a new directory certs.d/<my-domain-name> under /etc/docker (please note: here the domain name is the one at which the docker registry is to be accessed)
Placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory
Restarted Docker
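The certificate steps can be sketched as below. The domain name is a placeholder, and the demo writes into a local ./certs.d directory; on the actual Docker host the directory is /etc/docker/certs.d/<domain> and the restart needs root.

```shell
DOMAIN=registry.example.com               # placeholder for the real domain name
CERT_DIR=${CERT_ROOT:-./certs.d}/$DOMAIN  # on the Docker host: /etc/docker/certs.d
mkdir -p "$CERT_DIR"

# Self-signed certificate and key for the domain (mirroring the one on the ALB):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=$DOMAIN" \
  -keyout "$CERT_DIR/domain.key" \
  -out "$CERT_DIR/ca.crt" 2>/dev/null

# Then restart the daemon so it picks the certificate up:
#   sudo systemctl restart docker
ls "$CERT_DIR"
```

Client machines (the Windows laptops) need the same ca.crt trusted under the matching certs.d path for Docker for Windows, otherwise docker push will reject the self-signed certificate.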
I am trying to deploy an application to an EC2 instance from an S3 bucket. I created an instance with the required S3 permissions and also a CodeDeploy application with the required EC2 permissions.
When I try to deploy, though, I get:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
I SSHed into the EC2 instance to check the CodeDeploy log, and this is what I get:
2018-08-18 20:52:11 INFO [codedeploy-agent(2704)]: On Premises config file does not exist or not readable
2018-08-18 20:52:11 ERROR [codedeploy-agent(2704)]: booting child: error during start or run: Errno::ENETUNREACH - Network is unreachable - connect(2) - /usr/share/ruby/net/http.rb:878:in `initialize'
I tried changing the permissions, restarting the CodeDeploy agent, and creating a brand new CodeDeploy application. Nothing seems to work.
In order for the agent to pick up commands from CodeDeploy, your host needs network access to the internet, which can be restricted by your EC2 security groups, VPC configuration, host configuration, etc. To see if you have access, try pinging the CodeDeploy endpoint:
ping codedeploy.us-west-2.amazonaws.com
Though you should use the endpoint for the region your host is in; see here.
If you've configured the agent to use the proxy config, you may have to restart the agent like here.
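The reachability check can be scripted; since ICMP ping is often blocked even when HTTPS works, probing TCP 443 is a more reliable test. The region below is a placeholder for whatever region your host is in.

```shell
REGION=${REGION:-us-west-2}   # substitute the region your host is in
HOST=codedeploy.$REGION.amazonaws.com

# ICMP may be blocked even when HTTPS works, so probe TCP 443 as well:
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$HOST/443"; then
  echo "$HOST is reachable on 443"
else
  echo "$HOST is unreachable: check security groups, NAT, and proxy settings"
fi

# After fixing connectivity or the proxy config, restart the agent:
#   sudo service codedeploy-agent restart
```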
I've been following the instructions for deploying Cloud Foundry on OpenStack and had a problem with the step that uploads the BOSH stemcell:
$ bosh upload stemcell http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-latest-openstack-kvm-ubuntu.tgz
...
Error 100: Unable to connect to the OpenStack Compute API. Check task debug log for details.
...
E, [2013-09-21T09:02:11.359958 #2587] [task:1] ERROR -- : No route to host - connect(2) (Errno::EHOSTUNREACH) (Excon::Errors::SocketError)
I can ssh into the instance running Micro BOSH and confirm that it can ping the compute host, but it can't connect via TCP/HTTP.
I've described the error in more detail here:
http://openstack.redhat.com/forum/discussion/625/ingress-issue-from-spawned-instance-to-compute-host#Item_1
It appears to basically be an OpenStack firewall/iptables configuration issue between the spawned instance running Micro BOSH and the controller/compute host running the Compute API, which I can only fix temporarily via iptables. But I was surprised not to find any other Cloud Foundry related posts pointing to this issue, and was wondering if anyone has seen it and found a workaround?