We have a single ColdFusion 10 server (Tomcat behind Apache) in a multi-instance configuration with four instances: cfusion and cfusion2 serve visitors in a single cluster, and the other two are a Search instance and a Scheduled Job instance. We have a couple of questions.
The cluster does not seem to be obeying its rules and moving clients from instance to instance. cfusion is clearly running, but cfusion2 seems to be stopped, and I seem to have no way of getting it restarted. What am I doing wrong there?
Question 2: how can we be assured that scheduled jobs will run only from the Scheduled Job instance and not be touched by the other three instances? I don't seem to be able to delete or disable their reference in the other instances.
I have two clusters in my Amazon Elastic Container Service, one for production and one as a testing environment.
Each cluster has three different services with one task each, so there should be 6 tasks running.
To update a task, I have always pushed my new Docker image to the Elastic Container Registry and restarted the service with the new image.
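For reference, the restart step can be done with a minimal boto3 sketch like the one below; the region, cluster, and service names are placeholders, and forceNewDeployment assumes the task definition points at a mutable tag such as :latest so the restarted tasks pull the freshly pushed image.

```python
# Minimal sketch of the "restart the service with the new image" step.
# Assumes boto3 credentials are configured; names and region are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="eu-central-1")  # placeholder region
ecs.update_service(
    cluster="production",      # placeholder cluster name
    service="my-service",      # placeholder service name
    forceNewDeployment=True,   # redeploy tasks so they pull the new image
)
```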
For about two weeks now, I have only been able to start 2 tasks in total. It doesn't depend on the cluster; it is 2 tasks overall.
It looks like the tasks that should start are stuck in the "In Progress" rollout state.
Has anybody had a similar problem, or does anyone know how to fix this?
I wrote to support about this issue, and they replied:
"After a review, I have noticed that the XXXXXXX region has not yet been activated. In order to activate the region you will have to launch an instance; I recommend a Free Tier EC2 instance. After the EC2 instance has been launched you can terminate it thereafter."
I don't know why, but it's working.
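In case it helps anyone, here is a minimal boto3 sketch of what the support suggestion amounts to; the region and AMI ID are placeholders (look up a current Amazon Linux AMI for your region):

```python
# Launch a Free Tier instance in the not-yet-activated region, wait until it
# is running, then terminate it. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")  # placeholder region
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",          # Free Tier eligible
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
ec2.terminate_instances(InstanceIds=[instance_id])
```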
I am currently working on monitoring my EC2 instances on AWS.
I have already installed NetData on a master instance that receives metrics from slave instances; everything is working fine on that front.
NetData's default history is set to 36000 seconds.
At the moment, only 4 nodes are streaming to the master, every 15 seconds.
The master instance started out as a t3.medium (ambitious, I know!).
Its RAM usage approached 100% within a few minutes, so I upgraded it to a t3.large.
Tomorrow, 8 more nodes will go into production and stream to the master instance.
Does anyone know an instance type that is optimized for this use case, or have any other recommendations?
Thanks.
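For anyone trying to size this, here is a back-of-the-envelope sketch based on NetData's documented figure of roughly 4 bytes per collected sample in the default in-memory database; the number of dimensions per node and the collection interval are assumptions, so check the actual values on one of your slaves:

```python
# Rough RAM estimate for the NetData master's metric storage.
history_seconds = 36000   # history from the question
update_every = 1          # ASSUMPTION: default 1-second collection interval
dims_per_node = 2000      # ASSUMPTION: typical dimensions (metrics) per node
bytes_per_sample = 4      # NetData stores roughly 4 bytes per sample

per_node = dims_per_node * (history_seconds // update_every) * bytes_per_sample
for nodes in (4, 12):     # current fleet and after tomorrow's additions
    print(f"{nodes} nodes -> ~{nodes * per_node / 2**30:.1f} GiB for metrics")
```

Under those assumptions, the 12-node fleet needs roughly 3 GiB for metric storage alone, before the OS and NetData itself, which points towards a memory-optimized instance type or a shorter history rather than a bigger t3.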
I am trying to achieve zero-downtime redeploys on AWS Elastic Beanstalk.
I basically have two instances in my environment, coupled with Jenkins for CI (using Tomcat).
What I am trying to achieve is that each time I trigger a redeploy from Jenkins, only one instance of the environment is redeployed; then, after a timeout that allows the new instance to load the application, the second instance is redeployed.
In order to achieve that timeout I am setting both the "Pause time" and "Command timeout" options, but unfortunately it is as though this limit is not honored.
The first instance is redeployed, but after around 1 minute the second instance is redeployed regardless of the timeout value I set.
Has anyone achieved this? Any insights on how to do it?
"Pause Time" relates to environmental configuration made to instances. "Command timeouts" relates to commands executed to building the environment (for example if you've customised the container). Neither have anything to do with rolling application updates or zero downtime deployments. The documentation around this stuff is confusing and fragmented.
For zero downtime application deployments, AWS EB give you two options:
Batch application updates
Running two environments and cutting over
Option 1 feels like a lot less work, but in my testing it hasn't been truly zero-downtime: there is a hard-coded timeout after which traffic is routed to instances after 1 minute, regardless of whether the load balancer health check passes.
But if you still want to go ahead, then running two instances and setting a batch size of 50% or 1 should give you what you want; a sketch of the relevant config follows.
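For what it's worth, a minimal .ebextensions sketch of that setup might look like this (the file name is illustrative, e.g. .ebextensions/deploy.config):

```yaml
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling
    BatchSizeType: Percentage   # or Fixed, with BatchSize: 1
    BatchSize: 50
```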
I am writing a Django app which I plan to deploy to AWS via Elastic Beanstalk. I am trying to understand why I would need to specify 'leader_only' for a container command I want to run for my app. More details about this can be found here.
It says:
Additionally, you can use leader_only. One instance is chosen to be the leader in an Auto Scaling group. If the leader_only value is set to true, the command runs only on the instance that is marked as the leader.
If I have several instances running my app because I want to scale it, wouldn't using 'leader_only' run the command on only one instance and not affect the rest? I am probably misunderstanding its purpose, but that seems non-ideal, because the environment on the leader may differ from that of the other instances, and the end user may get different results depending on which instance they happen to connect to.
From a technical point of view, Elastic Beanstalk is an Auto Scaling group, and when you deploy something you need to assume that your commands can potentially be executed simultaneously on several EC2 instances.
The main goal of the leader_only option is to make sure that your commands are executed on only one EC2 instance. It is useful for use cases such as running db migration scripts, creating a database, etc., which should be executed just once, on one instance. So leader_only is just a marker that some commands will be executed on this instance only.
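For the Django case in the question, an illustrative .ebextensions snippet would be something like the following; the exact command is an assumption about your migration step:

```yaml
container_commands:
  01_migrate:
    command: "python manage.py migrate --noinput"  # runs once, on the leader
    leader_only: true
```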
However, you need to keep in mind that the leader attribute is set once, when your environment is created; if the leader dies and is replaced by a new instance, you can end up in a situation where there is no leader in the Auto Scaling group.
I've done considerable testing of this recently, with both leader_only and EB_IS_COMMAND_LEADER, on both Apache 1 and Apache 2 setups.
The two named values above can be found in many discussions, guides, and documents, but the situation is basically this:
You cannot trust being able to reliably detect a leader in a multi-instance environment, except during deployment and scale-up.
That means you cannot use a test of either of the values above to confirm that a command will run on exactly one instance (not zero, not two or more) as part of a cron job or scheduled task.
Recent improvements and changes to the way leader status is managed may well mean that a leader is always available during deployments and scale-up, but at other times, including after instance replacement, there may be no leader instance to be found.
There are two main options if you really need to run a scheduled task exactly once while managing multiple instances:
A worker environment dedicated to scheduled tasks, or another external service such as Lambda with EventBridge (CloudWatch Events).
Set up crons to run across all instances in your deployment configs, and include a small amount of code before the cron runs which connects to the AWS API, gets the list of current instances, and checks the ID of the first one returned against its own ID to decide whether it should run; see the sketch below.
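A minimal sketch of that pre-cron check, assuming boto3, an instance profile with ec2:DescribeInstances permission, and placeholder environment/region names (IMDSv1 metadata shown for brevity; IMDSv2 needs a token first):

```python
# Decide whether this instance should run the scheduled task: only the
# lexicographically first running instance in the environment runs it.
import boto3
from urllib.request import urlopen

METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"

def should_run(environment_name="my-env", region="us-east-1"):
    # placeholder environment name and region
    my_id = urlopen(METADATA_URL, timeout=2).read().decode()
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:elasticbeanstalk:environment-name",
             "Values": [environment_name]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = sorted(
        inst["InstanceId"]
        for res in resp["Reservations"]
        for inst in res["Instances"]
    )
    return bool(ids) and ids[0] == my_id

if __name__ == "__main__" and should_run():
    print("this instance runs the task")
```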
I have a question about cluster usage in AWS. If I have 10 instances running, do I have one master instance? And when I run a threaded application on one instance, is the application able to use all instances, as it would use multiple cores?
I have seen the tutorials on the website, but I can't figure out how these clusters work. If I run one application, it counts as one job even if it's threaded, right? So will only one instance be used?
Thank you in advance.
In an AWS EMR cluster, you have one master instance and one to many core instances.
The master instance schedules and leads the job, but the core instances do all the work.
There is also the option of task instances, which provide extra processing capacity when it comes to running tasks; see the sketch below.
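To make the roles concrete, here is an illustrative boto3 sketch of creating an EMR cluster with master, core, and task instance groups; all names, types, counts, and the release label are placeholders:

```python
# An EMR cluster is composed of instance groups with distinct roles.
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # placeholder region
emr.run_job_flow(
    Name="example-cluster",        # placeholder name
    ReleaseLabel="emr-6.3.0",      # placeholder release
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
            {"Name": "task", "InstanceRole": "TASK",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default EMR roles, assumed to exist
    ServiceRole="EMR_DefaultRole",
)
```

Note that a distributed framework such as Hadoop MapReduce or Spark has to split the work into tasks before the core/task instances can help; a plain multi-threaded application running on one instance will not use the others.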
Cheers,