This is the only doc I have found for Task Groups, and it doesn't explain how or where to create one.
I can't find any docs that adequately explain what a Task Group actually is, with an example of how to create and use one. It sounds like it's a way for a service to run multiple different task definitions, which would be useful to me.
For example, I added a container to a task definition, and the service is balancing multiple instances of it across the cluster. I have another container I want to deploy alongside the first one, but I only want a single instance of it to run. I can't add it to the same task definition, because that would create multiple instances of it and consume unnecessary resources. It seems like this is what Task Groups are for.
You are indeed correct: there is no proper documentation on this (I opened a support case with our AWS team to verify!).
However, all is not lost. A solution to your conundrum does exist, and it's one we use every day. You don't have to use the Task Group, whatever that is (we don't actually know yet; an AWS engineer is writing up some docs for me, and I will post them here when I get them).
All you need are placement constraints (covered in the same doc you linked), which are easy enough to set up. If you have a launch configuration, you can add something like the following to the Advanced > User Data section so that it runs during boot. You can also add it when launching your instance manually, or, if you're feeling exceptionally hacky, log on to your instance and run the command by hand (for science and stuff):
echo ECS_INSTANCE_ATTRIBUTES={\"env\": \"prod\",\"primary\": \"app1\",\"secondary\": \"app2\"} >> /etc/ecs/ecs.config
Everything in quotes is arbitrary and defined by you, so use whatever tags and values make sense for your use case. If you go this route, make sure you add the following flag to your Docker launch command: --env-file=/etc/ecs/ecs.config
So now that you have a properly tagged instance (make sure it's only the single instance you want, which probably means a dedicated launch configuration for this specific type of instance), you can go ahead and create your ECS service as you were planning. However, make sure you set up your Task Placement correctly, so it matches the roles now configured on your instances:
In the example above, the service is configured to launch this task only on instances configured with both env == prod and secondary == app2. Since your other two instances aren't configured with secondary == app2, they're not allowed to host this task.
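If you'd rather script the service creation than click through the console, the same placement rule can be passed to create-service. This is only a sketch: the cluster, service, and task definition names are placeholders, and the expression must mirror the attributes you wrote into ecs.config.

```shell
# Sketch only: names are placeholders; the expression mirrors the custom
# "env" and "secondary" attributes set on the instance above.
constraint='type=memberOf,expression="attribute:env == prod and attribute:secondary == app2"'
echo "$constraint"
# aws ecs create-service --cluster prod-cluster --service-name app2-service \
#   --task-definition app2-td:1 --desired-count 1 \
#   --placement-constraints "$constraint"
```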
It can be confusing at first, and it took us a while to get right, but I hope this helps!
Response from AWS Support
I looked into the procedure for using Task Groups, and here are my findings:
- The assumption is that you already have a task group (say, one named "databases") if you have existing tasks launched from the RunTask/StartTask API.
- When you launch a task using the RunTask or StartTask action, you can specify the name of the task group for the task. If you don't specify one, the default name is the family name of the task definition (for example, family:my-task-definition).
- So to create a Task Group, either define one (say, webserver) while creating a task in the console, or use the following command:
$ aws ecs run-task --cluster <ecs-cluster> --task-definition taskGroup-td:1 --group webserver
Once created, you will notice a task running with group: webserver.
Now you can use the following placement constraint in your task definition to place your tasks only on container instances that are already running tasks from this Task Group.
"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "task:group == webserver"
  }
]
If you try to run a task with the above placement constraint while no task with task group webserver is running, you will receive the following error: Run tasks failed. Reasons: ["memberOf constraint unsatisfied"].
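Putting the pieces together, a RunTask call that both names the group and applies the constraint might look like the following. This is a sketch; the cluster name is a placeholder, and the task definition is the one from the example above.

```shell
# Sketch: the constraint keeps the new task on instances already running
# tasks from the "webserver" group; the cluster name is a placeholder.
constraint='type=memberOf,expression="task:group == webserver"'
echo "$constraint"
# aws ecs run-task --cluster my-cluster --task-definition taskGroup-td:1 \
#   --group webserver --placement-constraints "$constraint"
```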
References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html
Related
Let's say I have a task definition on AWS ECS and want to schedule it to run as multiple instances with different env variables (~20 parallel tasks). I have some ideas about how to do that, but I'm not sure which one is correct.
Create multiple task definitions with different env variables, which sounds ineffective and silly.
Create multiple targets for scheduled tasks, but their number is limited to 5, which doesn't work for me.
Create 20 container overrides, but I haven't found a way to do that through the user interface, and I'm not sure it's correct either.
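For option 3, the closest I've found is passing overrides through the CLI rather than the user interface. An untested sketch; the cluster, task definition, env variable name, and the container name "app" are all placeholders:

```shell
# Untested sketch: one RunTask per env value; "app" must match the
# container name inside the task definition.
for i in $(seq 1 20); do
  overrides="{\"containerOverrides\":[{\"name\":\"app\",\"environment\":[{\"name\":\"WORKER_ID\",\"value\":\"$i\"}]}]}"
  echo "$overrides"
  # aws ecs run-task --cluster my-cluster --task-definition my-task:1 \
  #   --overrides "$overrides"
done
```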
Could you please give me an idea? Thanks
I am trying to move the execution of automated scripts (Jenkins jobs) in my organization from a local server to AWS. I have a Jenkins job that brings up an EC2 instance from a snapshot. The instance has 5 user profiles, so a total of 5 automation jobs can log in to the instance and execute their scripts.
My problem is that the instance I bring up is terminated once the jobs have been executed (since we run those scripts around 3 to 4 times a month). So each time I bring up an instance, it has a different IP address, which needs to be passed to the other Jenkins jobs so they can log in and execute the scripts. My questions are as follows.
How do I pass the details of the instance to Jenkins, making the process dynamic?
Are there other ways this problem can be solved (any solution you might have implemented at your organization)?
There are two things that you want to configure in your Jenkins in order to pass parameters to your jobs.
Enable "Trigger build remotely" in the Build Triggers. Read more about it here.
Enable "This project is parameterized" and configure your parameters. Read more about it here.
Now you can start your jobs remotely and pass parameters in your URI like so:
http://server/job/myjob/buildWithParameters?token=TOKEN&PARAMETER=Value
this is also explained in the second link.
Then all that's left to do is add the parameter to the command-line args for your script. So if you use an Execute Shell build step, it would look like this:
python /path/to/file/script.py --some_arg ${PARAMETER}
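To connect this back to the question: the job that brings the instance up can look up the new IP and pass it along as a parameter. A sketch, assuming the downstream job defines an INSTANCE_IP parameter; the instance ID, Jenkins URL, job name, and token are all placeholders:

```shell
# Sketch: fetch the fresh instance's public IP, then trigger the
# parameterized downstream job with it. All names here are placeholders.
INSTANCE_ID="i-0123456789abcdef0"
# IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
#        --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
IP="203.0.113.10"   # stubbed so the sketch runs without AWS credentials
URL="http://jenkins.example.com/job/run-scripts/buildWithParameters?token=TOKEN&INSTANCE_IP=${IP}"
echo "$URL"
# curl -fsS "$URL"
```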
Is there a way to add additional templates to the 'default' EC2 Scheduler (https://aws.amazon.com/answers/infrastructure-management/ec2-scheduler/)?
So, say I want two separate functions/tags:
start VM at 8am on a weekday
stop VM at 8pm on a weekday
There's a bit of confusion where I work, with VMs not starting up because we only have a stop-VM schedule, and custom start tag values being set up wrong or not at all.
Going by the doco, it seems like I need to set up one or the other (or a single instance that starts and stops VMs), then use custom values in the individual tags of the VMs to assign a custom start or stop time.
But I want something simpler, e.g. add:
Ec2Scheduler:startAt8 - true
Ec2Scheduler:stopAt8 - true
Do I need two instances of the scheduler running, or can I add another row to the DynamoDB table?
The doco.pdf is not very good at explaining this.
I would suggest using a tool with a simple scheduler, like CloudTiming. Unfortunately it's not free, but it is pretty cheap. You can set up a schedule for any resources in any region. From my perspective, Amazon's interface is not user-friendly for such a simple action.
I've followed the Getting Started guide, specifically the two-shards / two-replicas scenario, and everything worked perfectly.
Then I started using the Collections API, which is the preferred way of managing collections, shards, and replication.
I launched two instances locally (and afterwards on AWS; same problem).
I created a new collection with two shards using the following command:
/admin/collections?action=CREATE&name=collection1&numShards=2&collection.configName=collection
This successfully created two shards, one on each instance.
Then I launched another instance, expecting it to automatically set itself as a replica for the first shard, just like in the example. That didn't happen.
Is there something I'm missing?
There were two ways I was able to achieve this:
Manually, using the Collections API, I added a replica to shard1 and then another to shard2.
This is not good enough, as I will need this to happen automatically with Auto Scaling; otherwise I'd have to micro-manage each server's "role" (which replicas of which shards of which collections it handles), which complicates things a lot.
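For reference, the manual calls from the first approach looked roughly like this. A sketch: host and port are placeholders, and the ADDREPLICA action requires a reasonably recent Solr (4.8 or later):

```shell
# Sketch: add one replica to a shard via the Collections API.
# localhost:8983 is a placeholder for whatever host/port you run Solr on.
url='http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1'
echo "$url"
# curl "$url"
# then repeat with shard=shard2 for the second replica
```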
The second way, which I couldn't find documentation for, is to launch an instance with a folder named "collectionX" containing a file named core.properties. In it, the following line:
collection=collection1
Each instance I launched that way was automatically added as a replica in a round-robin way.
(This also works with several collections.)
That's actually not a bad way at all, as I can pass parameters when I launch an AMI/instance in AWS.
Thanks everyone.
Amir
1) You are running the wrong command; the complete command is as follows:
curl 'http://localhost:8080/solr/admin/collections?action=CREATE&name=corename&numShards=2&replicationFactor=2&maxShardsPerNode=10'
Here I have specified a replication factor, which is what causes replicas to be created for your shards.
I have an AWS EC2 Windows (2008 R2) instance which I want to start and stop using a command/script from my local machine, scheduled as per my usage.
I also want a couple of programs running on the EC2 instance to start when the instance starts. These programs are currently started using a .bat file present on the instance.
I have done the following so far:
1- I created an AWS user in AWS IAM and am using that user's credentials for the EC2 APIs and command-line utilities.
2- To start and stop the instance, I'm using the command-line utilities from the EC2 API tools.
start ->ec2-start-instances i-instanceID
stop ->ec2-stop-instances i-instanceID
3- To schedule it I've added this to my windows scheduler.
4- Added user data for the instance in the AWS management console. My user data looks like this:
<script>
C:\Services\my_application.lnk
</script>
5- I can see the user data is present on my EC2 instance at C:\Program Files\Amazon\Ec2ConfigService\Scripts\UserScript
6- In C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml, I changed the values of Ec2SetPassword and Ec2HandleUserData to Enabled, and added true where required as well.
I'm facing following issues:
1- The user data script does not execute every time the instance is started, and I'm not able to figure out why.
2- The changes made in Ec2ConfigService\Settings\config.xml revert to the default values when the instance is restarted.
I feel this is a common use case, and I would like to know the best practices and approaches for automating EC2 operations.
I also need help with starting programs on my instance: where am I going wrong or what am I missing, and what else needs to be done?
User data is only executed the very first time the instance is created. This is by design.
You've got a couple of options, all of which use your user data script:
Copy the my_application.lnk to the startup folder
Register the application under the registry "Run" startup key (http://blogs.msdn.com/b/powershell/archive/2006/04/25/how-to-access-or-modify-startup-items-in-the-window-registry.aspx)
Register it with the Task Scheduler so it is configured to execute on startup (http://technet.microsoft.com/en-us/library/bb490996.aspx)
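For the first option, the user data from the question could be extended along these lines. A sketch only: the .lnk path is taken from the question, and the all-users Startup folder path shown is the Server 2008 R2 default.

```
<script>
copy "C:\Services\my_application.lnk" "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup\"
</script>
```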