When I try to create an AWS Elastic Beanstalk environment, I receive the following error:
LaunchWaitCondition failed. The expected number of EC2 instances were not initialized
within the given time. Rebuild the environment. If this persists, contact support.
What's the issue here?
I use Elastic Beanstalk for our Node.js application. Sometimes an error message like this appears when I update configuration settings such as capacity:
Incorrect application version "release#016a2cd2-598187446" (deployment 34). Expected version "release#016a2cd2-598187446" (deployment 33).
I don't understand the error. The application version is the same. How can I fix this problem?
I have tried terminating the EC2 instance and rebuilding the environment, but the problem remains.
A few possible solutions:
Check that you are using the correct Elastic Beanstalk URL. The URL for your Elastic Beanstalk environment will be in the format "http://xxxxxx-env.elasticbeanstalk.com". Make sure that you are using this URL, and not the URL for your S3 bucket (which will be in the format "http://xxxxxx.s3.amazonaws.com").
If you are using a custom domain name for your Elastic Beanstalk environment, make sure that you have updated your DNS settings to point to the correct Elastic Beanstalk URL.
Make sure that you have deployed your application to the correct Elastic Beanstalk environment. You can check this by going to the "Elastic Beanstalk" tab in the AWS Management Console and selecting the environment that you want to deploy to. If you have multiple environments, make sure that you select the one that matches the URL that you are using.
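For instance, you can confirm the environment's actual CNAME (and what your own DNS record points to) from the command line; the environment and domain names below are placeholders:
aws elasticbeanstalk describe-environments --environment-names xxxxxx-env --query "Environments[].CNAME" --output text
dig +short www.yourdomain.com CNAME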
I am trying to deploy an application to an EC2 instance from an S3 bucket. I created an instance with the required S3 permissions and also a CodeDeploy application with the required EC2 permissions.
When I try to deploy, though, I get:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
I SSHed into the EC2 instance to check the CodeDeploy agent log, and this is what I found:
2018-08-18 20:52:11 INFO [codedeploy-agent(2704)]: On Premises config file does not exist or not readable
2018-08-18 20:52:11 ERROR [codedeploy-agent(2704)]: booting child: error during start or run: Errno::ENETUNREACH - Network is unreachable - connect(2) - /usr/share/ruby/net/http.rb:878:in `initialize'
I tried changing the permissions, restarting the CodeDeploy agent, and creating a brand-new CodeDeploy application. Nothing seems to work.
In order for the agent to pick up commands from CodeDeploy, your host needs to have network access to the internet, which can be restricted by your EC2 security groups, VPC, configuration on your host, etc. To see if you have access, try pinging the CodeDeploy endpoint:
ping codedeploy.us-west-2.amazonaws.com
Though you should use the endpoint for the region your host is in - see here.
If you've configured the agent to use a proxy, you may have to restart the agent as described here.
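As a rough sketch (assuming an Amazon Linux host and the us-west-2 endpoint; adjust the region to match yours), you can test HTTPS connectivity and then restart the agent. Note that ICMP may be blocked even when HTTPS works, so curl over port 443 is often a more reliable check than ping:
# any HTTP response at all means the network path to the endpoint works
curl -v https://codedeploy.us-west-2.amazonaws.com
# restart the agent and confirm it is running (Amazon Linux service name)
sudo service codedeploy-agent restart
sudo service codedeploy-agent status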
I am hosting a Django site on Elastic Beanstalk. I haven't yet linked it to a custom domain and used to access it through the Beanstalk environment domain name like this: http://mysite-dev.eu-central-1.elasticbeanstalk.com/
Today I did some stress tests on the site which led it to spin up several new EC2 instances. Shortly afterwards I deployed a new version to the Beanstalk environment via my local command line while 3 instances were still running in parallel. The update failed due to a timeout. Once the environment had terminated all but one instance, I tried the deployment again. This time it worked. But since then I cannot access the site through the EB environment domain name anymore. I always get a "took too long to respond" error.
I can access it through my EC2 instance's IP address as well as through my load balancer's DNS. The Beanstalk environment is healthy and the logs are not showing any errors. The Beanstalk environment's domain is also part of my allowed hosts setting in Django. So my first assumption was that there is something wrong in the security group settings.
Since the load balancer is getting through, it seems that the issue is with the Beanstalk environment's domain. As I understand it, the Beanstalk domain name points to the load balancer, which then forwards requests to the instances? So could it be that the environment update, combined with new instances spinning up, has somehow corrupted that connection? If yes, how do I fix this, and if not, what else could be the cause?
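For what it's worth, comparing what the two names resolve to is one way to sanity-check that assumption (the load balancer DNS name below is a placeholder for mine):
dig +short mysite-dev.eu-central-1.elasticbeanstalk.com
dig +short my-load-balancer-1234567890.eu-central-1.elb.amazonaws.com
If both return the same addresses, the environment's CNAME still points at the right load balancer, and the problem is more likely DNS caching or propagation.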
Being a developer and a newbie to cloud hosting, my understanding is fairly limited in this respect. My issue seems to be similar to this one: Elastic Beanstalk URL root not working - EC2 Elastic IP and Elastic IP Public DNS working, but it hasn't helped me further.
Many Thanks!
Update: After one day, everything is back to normal. The environment URL works as before, as if whatever was broken had recovered overnight.
Obviously a server can experience downtime, but since the site worked fine when accessing the EC2 instance IP and the load balancer DNS directly, I am still a bit puzzled about what's going on here.
If anyone has an explanation for this behaviour, I'd love to hear it.
Otherwise, for those experiencing similar issues after a botched update: before tearing out your hair in desperation, try just leaving the patient alone overnight and letting the AWS ecosystem work its magic.
I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
One way to resolve this is to make your hosted zone public. kops will still create a VPC for you automatically (unless configured otherwise), and you will be able to access the API from your computer.
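As a quick check (the hosted zone ID and zone name below are placeholders based on the domain in the question), you can confirm whether the zone is private and whether it is resolvable from the public internet:
# returns true if the zone is private
aws route53 get-hosted-zone --id Z0000000000000 --query "HostedZone.Config.PrivateZone"
# the zone's NS records should be resolvable from the public internet
dig +short NS dt-api-k8s.com
If you end up recreating the cluster, kops create cluster also has a --dns flag to choose between public and private DNS.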
I encountered this last night using a kops-based cluster creation script that had worked previously. I thought maybe switching regions would help, but it didn't. This morning it is working again. This feels like an intermittency on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether or not it was necessary to start from scratch -- I hope not.
This is all I had to run:
kops export kubecfg <cluster name> --admin
This regenerates the kubeconfig needed to access the kops cluster.
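After that, assuming kubectl is installed and the exported context is the active one, you can confirm access with:
kubectl get nodes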
I came across this problem on an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
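For example, the entry would look roughly like this (the IP address below is a placeholder; use the address that the A record for the API in your hosted zone points to):
203.0.113.10    api.ucla.dt-api-k8s.com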
Here is how I resolved the issue:
It looks like there is a bug in the kops library: it still shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api
when you run kops validate cluster even after waiting 10-15 minutes. Behind the scenes, the Kubernetes cluster is actually up! You can verify this by SSHing into the master node of your Kubernetes cluster as follows:
Go to the EC2 console page where your k8s instances are running.
Copy the "Public IPv4 address" of your master k8s node.
From your command prompt, log in to the master node as below:
ssh ubuntu@<public IPv4 address of your master k8s node>
Verify that you can see all the nodes of the k8s cluster with the command below; it should list your master node and worker nodes:
kubectl get nodes --all-namespaces
I have an Elastic Beanstalk app running on Docker, set up with autoscaling. When another instance is added to my environment as a result of autoscaling, it will 502 while the instance goes through the deployment process. If I SSH into the relevant box, I can see (via docker ps) that Docker is in the process of setting itself up.
How can I prevent my load balancer from directing traffic to the instance until after the instance deployment has actually completed? I found this potentially related question on SuperUser, but I think my health check URL is set up properly -- I have it set up to point at the root of the domain, which definitely 502s when I navigate to it in my browser, so I suspect that's not the cause of my problem.
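For reference, this is roughly how the health check URL is set on my environment (the environment name below is a placeholder):
aws elasticbeanstalk update-environment --environment-name my-docker-env --option-settings "Namespace=aws:elasticbeanstalk:application,OptionName=Application Healthcheck URL,Value=/"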