AWS Elastic Beanstalk rolling back to old configuration

I have two memcached clusters (cache.t2.small and cache.t2.micro).
When I updated my EB environment to point to the new memcached cluster, the cache.t2.micro one, it rolled back to the old cluster configuration, cache.t2.small, and gave me a warning and an error message about the load balancer.
The environment status went to Degraded, then back to OK, but with the old configuration.
Has anyone experienced the same issue?
I tried a second time, but it is still giving me an error.
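One way to rule out console quirks is to apply the endpoint change through the API and watch the resulting events. A minimal sketch, assuming the cluster endpoint is passed to the application as an environment property; the region, environment name, property name, and endpoint value below are all hypothetical placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # placeholder region

response = eb.update_environment(
    EnvironmentName="my-env",  # placeholder environment name
    OptionSettings=[
        {
            # Standard namespace for application environment properties
            "Namespace": "aws:elasticbeanstalk:application:environment",
            "OptionName": "CACHE_ENDPOINT",  # hypothetical property name
            "Value": "my-cluster.xxxxxx.cfg.use1.cache.amazonaws.com:11211",  # placeholder
        }
    ],
)
print(response["Status"])  # should report "Updating" while the change rolls out
```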

Elastic Beanstalk Environment is down: Showing certificate errors

I have a Nuxt app (Vue client with an Express server) that has been working for a couple of years. I made a change, committed it, and now my app is down. I use Elastic Beanstalk and CodePipeline, with Route 53 as DNS and ACM (Certificate Manager).
I went straight to my Elastic Beanstalk console to check the environment and saw that the health was Severe. The logs showed no errors, so I went to Events and looked there, and found this:
Creating Load Balancer listener failed. Reason: Resource handler returned message: "Certificate 'arn:aws:acm:us-east-2:019538876777:certificate/ee8af09b-c6d7-4636-b2a6-099792f66caf' not found"
I went straight to AWS Certificate Manager and saw that one certificate for my domain had expired, but that another had been issued. That means I shouldn't have to request a new one, which is what I had thought I'd need to do.
Where's the disconnect, then? If I have a newly reissued certificate, why is my Elastic Beanstalk environment still throwing an error? Thanks for any helpful tips.
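A common cause here is that the environment's configuration still stores the old certificate ARN, so Beanstalk keeps trying to recreate the listener with the deleted certificate even though ACM has issued a fresh one. A minimal sketch of pointing the HTTPS listener at the new ARN, assuming an Application Load Balancer; the environment name and certificate ARN are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-2")

eb.update_environment(
    EnvironmentName="my-nuxt-env",  # placeholder environment name
    OptionSettings=[
        {
            # ALB HTTPS listener settings live under this namespace
            "Namespace": "aws:elbv2:listener:443",
            "OptionName": "SSLCertificateArns",
            # Placeholder ARN: substitute the ARN of the newly issued certificate
            "Value": "arn:aws:acm:us-east-2:123456789012:certificate/NEW-CERT-ID",
        }
    ],
)
```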

Elastic Beanstalk URL cannot access Website after successful environment update

I am hosting a Django site on Elastic Beanstalk. I haven't yet linked it to a custom domain and used to access it through the Beanstalk environment domain name like this: http://mysite-dev.eu-central-1.elasticbeanstalk.com/
Today I ran some stress tests on the site, which led it to spin up several new EC2 instances. Shortly afterwards I deployed a new version to the Beanstalk environment from my local command line while 3 instances were still running in parallel. The update failed due to a timeout. Once the environment had terminated all but one instance, I tried the deployment again. This time it worked. But since then I cannot access the site through the EB environment domain name anymore; I always get a "took too long to respond" error.
I can access it through my EC2 instance's IP address as well as through my load balancer's DNS name. The Beanstalk environment is healthy and the logs show no errors. The Beanstalk environment's domain is also part of my ALLOWED_HOSTS setting in Django. So my first assumption was that something is wrong in the security group settings.
Since requests through the load balancer get through, the issue seems to be with the Beanstalk environment's domain. As I understand it, the Beanstalk domain name points to the load balancer, which then forwards requests to the instances? So could it be that the environment update, combined with new instances spinning up, has somehow corrupted the connection? If yes, how do I fix it, and if no, what else could be the cause?
Being a developer and a newbie to cloud hosting, my understanding is fairly limited in this respect. My issue seems similar to this one: Elastic Beanstalk URL root not working - EC2 Elastic IP and Elastic IP Public DNS working, but it hasn't helped me further.
Many Thanks!
Update: After one day everything is back to normal. The environment URL works as before, as if the dependencies had recovered overnight.
Obviously a server can experience downtime, but since the site worked fine when accessing the EC2 instance IP and the load balancer DNS directly, I am still a bit puzzled about what was going on here.
If anyone has an explanation for this behaviour, I'd love to hear it.
Otherwise, for those experiencing similar issues after a botched update: before tearing your hair out in desperation, try just leaving the patient alone overnight and letting the AWS ecosystem work its magic.
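An overnight recovery like this is consistent with a stale DNS record for the environment CNAME. If it happens again, a quick check along these lines can confirm whether DNS is the culprit; a minimal sketch, with placeholder hostnames:

```python
import socket

EB_DOMAIN = "mysite-dev.eu-central-1.elasticbeanstalk.com"
ELB_DOMAIN = "awseb-my-lb-123456.eu-central-1.elb.amazonaws.com"  # placeholder

def resolve(host):
    """Return the sorted set of addresses a hostname resolves to."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(host, 80)})
    except socket.gaierror as exc:
        return f"resolution failed: {exc}"

print("EB domain resolves to: ", resolve(EB_DOMAIN))
print("ELB domain resolves to:", resolve(ELB_DOMAIN))
# If the two address sets differ, or the EB domain fails to resolve,
# the problem is stale DNS rather than security groups or the app itself.
```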

Weird unhealthy errors on Elastic Beanstalk Worker environment

I've seen some weird errors happening in my worker environments only (this never happens in the web environments).
I can see this on the "Health" page:
Following services are not running: aws-sqsd.
clock out of sync...
Instance ELB health is not available
Regarding this last error: I realized that when I create a new single-instance worker environment, the instance is not attached to the load balancer, and I have to attach it manually. Can anyone tell me why this is happening?
Has anyone experienced something similar?
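Until the root cause is clear, the manual attachment can at least be scripted. A minimal sketch using the Classic Load Balancer API; the load balancer name and instance ID are placeholders (for an Application Load Balancer you would register the instance with its target group instead):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB API

elb.register_instances_with_load_balancer(
    LoadBalancerName="awseb-my-worker-lb",              # placeholder
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder
)
```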

Amazon Elastic Beanstalk is taking forever to update environment

I have battled with deploying my app on Elastic Beanstalk and RDS for over two days now, and just when I figured out the issue, Elastic Beanstalk has been updating the environment for over 10 hours. I stopped the action earlier on and restarted it; it has now been running again for over 5 hours. This started yesterday. Before, it took roughly 5 minutes or less to update my environment. The update was triggered after I added my RDS host, user, password, etc.
I am running instance type t1.micro, proxy server nginx, Node version 4.4.3.
Has anyone else encountered this issue before? How do I address it?
I solved this by rebuilding the environment; before doing that, I saved my configuration.
Good luck!
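For reference, the same recovery can be scripted. A minimal sketch assuming boto3; all names and IDs are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Abort the stuck update first, if one is still in progress.
eb.abort_environment_update(EnvironmentName="my-node-env")  # placeholder name

# Save the running environment's configuration so the rebuild can't lose it.
eb.create_configuration_template(
    ApplicationName="my-app",           # placeholder
    TemplateName="pre-rebuild-backup",  # placeholder
    EnvironmentId="e-abcd1234ef",       # placeholder
)

# Rebuild the environment (terminates and recreates its resources).
eb.rebuild_environment(EnvironmentName="my-node-env")
```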

Diagnosing occasional HTTP 5xx errors in Elastic Beanstalk and Elastic Load Balancer

My monitoring tab in Elastic Beanstalk is showing occasional HTTP 5xx errors, both from the EB instance and the ELB that performs its load balancing.
The trouble is that I generally only see these a few hours after they occur, and by the time I log into the EB instance the logs have rotated and show no trace of the error.
What's the best way to record the request and response associated with these errors for later viewing?
The best and cheapest option to achieve this is to set up a cron job on the EC2 instance that moves the logs to an AWS S3 bucket every 15 minutes or so. In other words, store the logs in AWS S3 so you can analyze them whenever you want (a sketch of such a script follows below).
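A minimal sketch of the script such a cron job could run, assuming boto3 is installed on the instance and nginx logs live under /var/log/nginx; the bucket name and paths are placeholders:

```python
import os
import socket
import time

import boto3

BUCKET = "my-eb-log-archive"  # placeholder bucket name
LOG_DIR = "/var/log/nginx"    # adjust to your proxy's log directory
PREFIX = f"{socket.gethostname()}/{time.strftime('%Y/%m/%d/%H%M')}"

s3 = boto3.client("s3")

# Copy every log file in the directory to S3 under a timestamped prefix,
# so rotated logs survive for later analysis.
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    if os.path.isfile(path):
        s3.upload_file(path, BUCKET, f"{PREFIX}/{name}")
```

A crontab entry like `*/15 * * * * /usr/bin/python3 /opt/scripts/ship_logs.py` (the script path is hypothetical) would then run it every 15 minutes.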
Here are some things I've found out in the past few weeks (I'll maybe edit this into a more coherent answer later):
Consider the layering here: we've got ELB -> httpd -> Tomcat (in my example). I'd forgotten about httpd (Apache 2.2 at the moment).
You can enable ELB logging into an S3 bucket of your choice. This allows you to see the responses returned to the client (a sketch of enabling this follows at the end of this answer).
From there, trace through to httpd to see if there are any errors in /var/log/httpd.
And then from there, trace through to the Tomcat logs to see if the same errors pop up there.
I was seeing errors in ELB and httpd that weren't showing up in Tomcat.
I was also seeing a number of error messages similar to:
"proxy: error reading status line from remote server"
"(103)Software caused connection abort: proxy: pass request body failed"
Reading around, these may be caused by bugs in mod_proxy.
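For the ELB access logging mentioned above, here is a minimal sketch of enabling it on a Classic ELB from code; the load balancer and bucket names are placeholders, and the bucket policy must allow the regional ELB account to write to it:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB API

elb.modify_load_balancer_attributes(
    LoadBalancerName="awseb-my-env-lb",  # placeholder
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # placeholder
            "EmitInterval": 5,  # minutes; Classic ELB accepts 5 or 60
            "S3BucketPrefix": "prod",
        }
    },
)
```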