EC2 Scheduler - add additional templates

Is there a way to add additional templates to the 'default' EC2 Scheduler (https://aws.amazon.com/answers/infrastructure-management/ec2-scheduler/)?
Say I want two separate functions/tags:
start VM # 8am on a weekday
stop VM # 8pm on a weekday
There's a bit of confusion where I work, with VMs not starting up because we only have a stop-VM schedule and custom start tag values being set up wrong or not at all.
Going by the doco, it seems like I need to set up one or the other (or a single instance of the scheduler that starts and stops VMs), then use custom values in the individual tags of the VMs to assign a custom start or stop time.
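If I'm reading the doco right, that means one custom tag per VM whose value packs the start time, stop time, time zone and days together, roughly like this (format hedged from the solution's docs, so possibly off):
scheduler:ec2-startstop = 0800;2000;utc;all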
But I want something simpler, e.g. add:
eg add Ec2Scheduler:startAt8 - true
Ec2Scheduler:StopAt8 - true
Do I need to have two instances of the scheduler running, or can I just add another row to the DynamoDB table?
The doco PDF is not very good at explaining this.

I would suggest using a tool with a simple scheduler, like CloudTiming. Unfortunately it's not free, but it is pretty cheap, and you can set up schedules for any resources in any region. From my perspective, Amazon's interface is not very user-friendly for such a simple task.

Related

GCP cloud run send a request to all running instances

I have a REST API running on Cloud Run that implements a cache, which needs to be cleared maybe once a week when I update a certain property in the database. Is there any way to send an HTTP request to all running instances of my application? Right now my understanding is that even if I send multiple requests and there are 5 instances, they could all go to one instance. So is there a way to do this?
Let's go back to basics:
Cloud Run instances start based on a revision/image.
In your case, suppose you have 5 instances running and you suddenly need to restart them all, because restarting the instances is what resolves your use case (clearing/rebuilding the cache). What you need to do is:
Trigger a change in the service/config, so a new revision gets created.
This will automatically replace (stop and relaunch) all your instances on the fly.
You have a couple of options here, choose which is suitable for you:
If you have your services defined as YAML files, the easiest is to run the replace service command:
gcloud beta run services replace myservice.yaml
Otherwise, add an environment variable (for example a date that you increase); this will yield a new revision, since a change in env vars means a new config and therefore a new revision.
gcloud run services update SERVICE --update-env-vars KEY1=VALUE1,KEY2=VALUE2
As these operations are executed, you will see a new revision created, and your active instances will be replaced on their next request with fresh new instances that will build the new cache.
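For example, a minimal sketch of the env-var route, using a timestamp as the dummy variable (the service name my-api and region us-central1 are placeholders):
# force a new revision by bumping a dummy env var; instances get replaced as traffic shifts to it
gcloud run services update my-api \
  --region us-central1 \
  --update-env-vars CACHE_BUST="$(date +%s)"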
You can't directly reach all the active instances; that's the magic (and the tradeoff) of serverless: you don't really know what is running! If you implement a cache on Cloud Run, you need a way to invalidate it:
Either based on duration: when it expires, refresh it.
Or by explicit invalidation, but you can't do that on Cloud Run.
The other way to see this use case is that you have a cache shared between all your instances, and thus you need a shared cache, something like Memorystore. Then a single Cloud Run instance can invalidate and rebuild it, and all the other instances will use it.

Automating copy of Google Cloud Compute instances between projects

I need to move more than 50 compute instances from a Google Cloud project to another one, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
Get all instances in source project
For each instance get machine sizing and the list of attached disks
For each disk create a disk-image
Create a new instance, of type machine sizing, in target project using the first disk-image as source
Attach remaining disk-images to new instance (in the same order they were created)
I've been checking both Terraform and Ansible, but I have the feeling that neither of them supports creating disk images, meaning that I could only use them for the last 2 steps.
I'd like to avoid writing a shell script because it doesn't seem a robust option, but I can't find tools that can help me with the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems you can't do that on already-created machines; you need to clone them to change the network.
There is no GCP tool to migrate instances from one project to another.
I was able to find, however, an Ansible module to create images.
In Ansible, you can specify the source_disk when creating a gcp_compute_image resource, as mentioned in the module documentation.
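If Ansible only covers part of the flow, the image-and-recreate steps can also be sketched with plain gcloud; every name below (projects, zone, machine type, disk, image, instance) is a placeholder, and the target project needs permission to use images from the source project:
# create an image from the source instance's disk, in the source project
gcloud compute images create app-disk-image \
  --project source-project \
  --source-disk app-instance-disk \
  --source-disk-zone europe-west1-b
# create the new instance in the target project from that image
gcloud compute instances create app-instance-copy \
  --project target-project \
  --zone europe-west1-b \
  --machine-type n1-standard-2 \
  --image app-disk-image \
  --image-project source-project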
Frederic

What is an ECS Task Group and how do I create one?

This is the only doc I have found for Task Group, and it doesn't explain how or where to create one.
I can't find any docs that adequately explain what a Task Group actually is, with an example of how to create and use one. It sounds like it's a way for a service to run multiple different task definitions, which would be useful to me.
For example, I added a container to a task definition and the service is balancing multiple instances of it on the cluster. But I have another container I want to deploy along with the first one, and I only want a single instance of it to run. So I can't add it to the same task definition, because I'd be creating multiple instances of it and consuming unnecessary resources. It seems like this is what Task Groups are for.
You are indeed correct, there exists no proper documentation on this (I opened a support case with our AWS team to verify!).
However, all is not lost. A solution to your conundrum does indeed exist, and is a solution we use every day. You don't have to use the task group, whatever that is (since we don't actually know yet (AWS engineer is writing up some docs for me, will post them here when I get them)).
All you need, though, are placement constraints (your same doc), which are easy enough to set up. If you have a launch configuration, you can add something like this to the Advanced > User Data section so that it gets run during boot (or just add it when launching your instance manually, or, if you're feeling exceptionally hacky, log on to your instance and run the commands by hand... for science and stuff):
echo ECS_INSTANCE_ATTRIBUTES={\"env\": \"prod\",\"primary\": \"app1\",\"secondary\": \"app2\"} >> /etc/ecs/ecs.config
Everything in quotes is arbitrarily defined by you, so use whatever tags and values make sense for your use case. If you go this route, make sure you add the following line to your docker launch command: --env-file=/etc/ecs/ecs.config
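Once the instance has registered, you can sanity-check that the attributes actually made it to the cluster (the cluster name is a placeholder):
aws ecs list-attributes --cluster my-cluster --target-type container-instance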
So now that you have an instance that's properly tagged (and make sure it's only the single instance you want, which means you probably need a dedicated launch configuration for this specific type of instance), you can go ahead and create your ECS service like you were wanting to do. However, make sure you set up your Task Placement correctly, to match the roles that are now configured for your instances.
For example, a service configured this way will only launch its task on instances that carry both env == prod and secondary == app2; since your other two instances aren't configured with secondary == app2, they're not allowed to host this task.
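The same placement can also be expressed when creating the service from the CLI instead of the console (cluster, service and task definition names are placeholders; custom attributes are referenced as attribute:<name> in the expression):
aws ecs create-service \
  --cluster my-cluster \
  --service-name app2-service \
  --task-definition app2-td:1 \
  --desired-count 1 \
  --placement-constraints 'type=memberOf,expression=attribute:env == prod and attribute:secondary == app2'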
It can be confusing at first, and took us a while to get right, but I hope this helps!
Response from AWS Support
I looked into the procedure for how to use Task Groups, and here are my findings:
- The assumption is that you already have a task group named "databases" if you had existing tasks launched from the RunTask/StartTask API.
- When you launch a task using the RunTask or StartTask action, you can specify the name of the task group for the task. If you don't specify a task group for the task, the default name is the family name of the task definition (for example, family:my-task-definition).
- So to create a Task Group, either define a group (say, webserver) while creating a task in the console, or use the following command:
$ aws ecs run-task --cluster <ecs-cluster> --task-definition taskGroup-td:1 --group webserver
Once created, you will notice a task running with group: webserver.
Now you can use the following placement constraint in the task definition to place your tasks only on the container instances that are running tasks from this task group.
"placementConstraints":
[
{
"expression": "task:group == webserver", "type": "memberOf"
}
]
If you try to run a task with the above placementConstraint but do not have any task running with task group webserver, you will receive the following error: Run tasks failed. Reasons: ["memberOf constraint unsatisfied"].
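For completeness, a hedged example of the run-task form with the same constraint supplied inline (cluster and task definition names are placeholders):
aws ecs run-task \
  --cluster my-cluster \
  --task-definition follower-td:1 \
  --placement-constraints 'type=memberOf,expression=task:group == webserver'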
References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html

How to set environment variables on the VMs at managed instance group level

We are using queue-based managed instance scaling. We need to set up environment variables on the VMs per instance group (so that the same VM image can be used to subscribe to different queues in different instance groups). I don't see an option to define environment variables when I create an instance group.
Is there a way to use the same image across multiple instance groups and still achieve different VM behavior based on either different environment variable at instance group level or some other way?
Example: create two managed instance groups with the same VM image. One has the environment variable 'queue-name' set to 'queue-1' and the other has 'queue-name' set to 'queue-2'. The application deployed to the VMs in the first instance group pulls tasks from Pub/Sub queue 'queue-1', and in the other group it pulls tasks from 'queue-2'.
Using two templates with the same VM image
In order to create two instance groups with the same VM image but different behaviour, you can definitely make use of two different instance templates.
In this way you will be able to change the networking configuration, startup and shutdown scripts, or metadata.
For example, you could use per-template metadata plus a startup script to set the different environment variables and connect to the right queue that way.
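A minimal sketch of that approach, assuming template, image and project names of my own choosing (all placeholders):
# two templates from the same image, differing only in one metadata value
gcloud compute instance-templates create worker-queue-1 \
  --image-family my-app-image --image-project my-project \
  --metadata queue-name=queue-1
gcloud compute instance-templates create worker-queue-2 \
  --image-family my-app-image --image-project my-project \
  --metadata queue-name=queue-2
# inside the VM (e.g. in its startup script), turn the metadata value into an env variable
QUEUE_NAME=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/queue-name")
export QUEUE_NAME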
Using the same template and the same VM image
On the other hand, if you cannot make use of two different templates, I would propose a small hack, though I guess there are several ways to do it.
As you noticed, there is no direct way to do it (since the possibility to customise is already there at template creation).
I would add to the startup script a small piece of code that, making use of the gcloud command (or the metadata server), works out the name of the instance group the VM belongs to and, based on that, sets the environment variable differently.
This way you merely need to follow some kind of naming pattern for your instances, but I am sure you can find a more elegant solution.
Or you could even base the decision on the hostname of the machine (but I like this solution even less).
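A rough sketch of that hack, assuming instance names follow a pattern of my own choosing (e.g. the MIG's base instance name starts with the queue name, so hostnames look like queue-1-group-xxxx):
# startup-script fragment: derive the queue from the VM's own name
INSTANCE_NAME=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/name")
case "$INSTANCE_NAME" in
  queue-1-*) export QUEUE_NAME=queue-1 ;;
  queue-2-*) export QUEUE_NAME=queue-2 ;;
esac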

Can I run 1000 AWS micro instances simultaneously?

I have a simple pure C program that takes an integer as input, runs for a while (let's say an hour) and then produces a text file. I want to run this program 1000 times with input integers from 1 to 1000.
Currently I'm running this program in parallel (4 processors), which takes 250 hours. The program is such that it fits in an AWS micro instance (I've tested it). Would it be possible to use 1000 micro instances in AWS to do the whole job in about an hour, at a cost of ~$20 ($0.02/instance-hour)?
If it is possible, does anybody have some guidelines on how to do that?
If it is not, does anybody have an also low-budget alternative to that?
Thanks a lot!
In order to achieve this, you will need to:
Create an S3 bucket to store your bootstrapping script, application, input data and output data.
Custom lightweight AMI: you might want to create a custom lightweight AMI which knows how to download the bootstrapping script.
Bootstrapping script: downloads your software from your S3 bucket, parses a custom instance tag which contains the integer [1..1000], and downloads any additional data.
Your application: does the processing.
End-of-processing script: uploads the result to another (results) S3 bucket and terminates the instance; you might also want to send an SNS notification to communicate the end-of-processing status.
If you need result consolidation, you might want to create another instance and use it as a coordinator, waiting for all the "end of processing" notifications in order to finish the job. In that case you might also consider, in the future, Amazon's Hadoop map-reduce engine, since it will do almost all of this heavy lifting for free.
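A rough sketch of such a bootstrapping script, assuming the integer is passed in as an instance tag named input and that the bucket and file names are placeholders (the instance also needs an IAM role allowing S3 access and ec2:DescribeTags):
#!/bin/bash
# read this instance's id and region from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
# look up the "input" tag holding the integer 1..1000
INPUT=$(aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=input" \
  --query 'Tags[0].Value' --output text)
# fetch the worker, run it, push the result back, then shut the instance down
aws s3 cp s3://my-job-bucket/worker ./worker && chmod +x ./worker
./worker "$INPUT" > "result-$INPUT.txt"
aws s3 cp "result-$INPUT.txt" s3://my-job-results/
shutdown -h now   # terminates the instance if shutdown behaviour is set to terminate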
You didn't specify what language you'd like to do this in, and since the app you use to deploy your program doesn't have to be in the same language, I would recommend C#. There are a number of examples around the web on how to programmatically spawn new Amazon instances using the SDK. For example:
How to start an Amazon EC2 instance programmatically in .NET
You can create your own AMIs with the program already present, but that might end up being a pain if you want to make adjustments to it, since it will require you to recreate the entire AMI. I'd recommend creating an extra instance, or simply hosting the program in a location accessible from a public URL. Then I would create some kind of service, installed on the AMI ahead of time, that lets me specify the URL to download the app from along with whatever command-line parameters I want for that particular instance.
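If you'd rather not write SDK code at all, the same idea can be sketched with the AWS CLI, passing the integer through user data; the AMI id below is a placeholder and you would batch the loop to stay within API and instance limits:
# launch one worker per input value; each instance reads INPUT from its user data
for i in $(seq 1 1000); do
  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --instance-initiated-shutdown-behavior terminate \
    --user-data "INPUT=$i"
done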
Hope this helps.
Be careful: by default you can only spawn 20 instances per region. You need to ask Amazon in order to use more instances.
If you want low cost and don't mind some delay, you should use spot instances.