Limiting number of AWS EC2 instances a user can create - amazon-web-services

AWS IAM provides fairly granular permissions with regard to the specific instance types that can be launched by a specific user.
However, I would like to know whether it is possible to create a custom policy that sets an upper limit on the number of EC2 instances that can be created by an individual user (not by the account as a whole).

AWS doesn't store which user has launched which machine, so there is no built-in per-user limit you can express in a policy.
One workaround I used recently was to externalize the logic into a Rundeck job: the job called a Python script that checked how many instances the user had already launched before actually creating a new machine (or not). The username was taken from the Rundeck user running the script (Rundeck was plugged into Active Directory) and stored in AWS through tags.
Hope this helps.
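For illustration, here is a minimal sketch of that kind of check in Python with boto3, assuming each instance is tagged at launch with a (hypothetical) launched_by tag and that the per-user limit is an arbitrary constant:

    # Sketch only: count a user's tagged instances before launching a new one.
    import boto3

    MAX_INSTANCES_PER_USER = 5  # assumed limit, not an AWS setting

    def launch_if_allowed(username, ami_id, instance_type):
        ec2 = boto3.client("ec2")
        # Count the user's instances that are not yet terminated.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:launched_by", "Values": [username]},
                {"Name": "instance-state-name",
                 "Values": ["pending", "running", "stopping", "stopped"]},
            ]
        )["Reservations"]
        count = sum(len(r["Instances"]) for r in reservations)
        if count >= MAX_INSTANCES_PER_USER:
            raise RuntimeError(f"{username} already has {count} instances")
        # Launch and tag the new instance so future checks can see it.
        ec2.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "launched_by", "Value": username}],
            }],
        )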

Related

Check if a user has privilege to start/stop/reboot EC2 instances

I am trying to programmatically stop, start and reboot my EC2 instances via the methods below:
Ec2AsyncClient.stopInstances(..)
Ec2AsyncClient.startInstances(..)
Ec2AsyncClient.rebootInstances(..)
What is the right way to check whether the user has privileges to perform these actions on the given EC2 instances?
Apart from actually executing the commands, another way to check is AWS's IAM Policy Simulator:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html
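Another programmatic option is EC2's dry-run support, which validates permissions without actually performing the action (the Java SDK requests accept an equivalent dryRun flag). A rough boto3 sketch of the idea:

    # Sketch: a dry-run call fails with "DryRunOperation" if (and only if)
    # the caller is authorized to perform the real action.
    import boto3
    from botocore.exceptions import ClientError

    def can_stop(instance_id):
        ec2 = boto3.client("ec2")
        try:
            ec2.stop_instances(InstanceIds=[instance_id], DryRun=True)
        except ClientError as e:
            return e.response["Error"]["Code"] == "DryRunOperation"
        return True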

How to add some new code to an existing EC2 instance

Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a Linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify it? How do I do this?
EC2 instances are just virtual machines, so you can SSH in and use SCP/SFTP to copy files to and from them. You can also use the AWS CLI tools to copy things from S3. Dealer's choice...
Now, to get into this instance: in the web console you can find its IP(s), its security groups (firewall rules), and the key pair name. Hopefully they gave you the keys, because you need them to SSH in.
You'll also want to check that there's a security group applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys, you'll have to create an AMI image of the instance so you can launch a new one with a key pair you do have.
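A hedged sketch of that recovery path with boto3; the instance ID, AMI name, instance type, and key pair name below are all placeholders:

    # Sketch: image the existing instance, then launch a copy with your own key pair.
    import boto3

    ec2 = boto3.client("ec2")

    # create_image reboots the instance by default (pass NoReboot=True to avoid that).
    image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                             Name="backend-recovery-ami")

    # Wait until the AMI is available, then launch from it with a key pair you hold.
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t3.micro",   # assumed size
        KeyName="my-new-keypair",  # a key pair you created yourself
        MinCount=1,
        MaxCount=1,
    )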
Amazon has a set of tools for this in the AWS CodeSuite.
The tool used for "deploying" the code is AWS CodeDeploy. With this service you install an agent onto your host; when triggered, it pulls down an artifact of a code base and installs it on the matching hosts. You can even run additional commands through its hook system.
If you also want to trigger this to happen automatically, CodeDeploy can be orchestrated using the AWS CodePipeline tool.
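As a rough illustration only, a deployment could then be kicked off programmatically; the application name, deployment group, and S3 artifact below are hypothetical:

    # Sketch: trigger a CodeDeploy deployment of an artifact stored in S3.
    import boto3

    codedeploy = boto3.client("codedeploy")
    codedeploy.create_deployment(
        applicationName="django-backend",    # hypothetical application
        deploymentGroupName="production",    # hypothetical deployment group
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-artifacts-bucket",
                "key": "releases/backend.zip",
                "bundleType": "zip",
            },
        },
    )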

Multiuser public Jupyter notebook on AWS SageMaker

I know there is a good tutorial on how to create Jupyter notebooks on AWS SageMaker "the easy way".
Do you know if it is possible to allow 10 students who do not have AWS accounts to create Jupyter notebooks, and also allow them to edit those notebooks?
Enabling multiple users to use the same notebook (in this case, without authentication) will involve managing your Security Groups to enable open access. You can filter access to a known IP address range if your students are connecting from a classroom or campus, for example.
Tips for this are available in this answer and this page from the documentation, diving into network configurations for SageMaker hosted notebook instances.
As for enabling students to spin up their own notebooks, I'm not sure it's possible to provision AWS-level resources completely unauthenticated. However, once you've spun up a single managed notebook instance yourself, students can create their own notebooks directly from the browser in Jupyter once they've navigated to the publicly available IP. You may need to attach a SageMaker IAM role that enables notebook creation (amongst other things, depending on the workload requirements). Depending on the computational needs (number, duration, and types of concurrent workloads), different combinations of instance count and instance type will be needed to prevent computational bottlenecks.
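For the security-group filtering described above (assuming the notebook sits behind a VPC security group you control), a minimal boto3 sketch might look like this; the group ID, port, and CIDR range are placeholders:

    # Sketch: allow HTTPS access to the notebook only from a known IP range.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",  # e.g. a campus range
                          "Description": "classroom access"}],
        }],
    )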

Bootstrapping AWS auto-scaling instances

We are discussing at a client how to bootstrap auto-scaling AWS instances. Essentially, an instance comes up with hardly anything on it. It has a generic startup script that asks somewhere, "what am I supposed to do next?"
I'm thinking we can use Amazon tags and have the instance itself ask AWS, using the awscli tool set, to find out its role. This could provide Puppet info, environment info (dev/stage/prod, for example), and so on. This should be doable with just the DescribeTags privilege. I'm facing resistance, however.
I am looking for suggestions on how a fresh AWS instance can find out about its own purpose, whether from AWS or perhaps from a service broker of some sort.
EC2 instances offer a feature called User Data that is meant to solve this problem. User Data is typically a shell script that runs on new instances at first boot to perform provisioning. A common pattern is to use the User Data to download or clone a configuration management source repository, such as Chef, Puppet, or Ansible code, and run it locally on the box to perform more complete provisioning.
As @e-j-brennan states, it's also common to prebundle an AMI that has already been provisioned. This approach is faster since no provisioning needs to happen at boot time, but it is perhaps less flexible since the instance isn't customized at launch.
You may also be interested in instance metadata, which exposes data such as network details and tags via a URL path accessible only from the instance itself.
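As a sketch of the tag-lookup idea from the question (assuming IMDSv1 is reachable and the instance role has the DescribeTags privilege; the role and environment tag keys are made up):

    # Sketch: an instance discovers its own ID, then reads its tags to learn its purpose.
    import urllib.request
    import boto3

    METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"
    instance_id = urllib.request.urlopen(METADATA_URL, timeout=2).read().decode()

    ec2 = boto3.client("ec2")
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
    config = {t["Key"]: t["Value"] for t in tags}
    role = config.get("role")                # e.g. "webserver" (assumed tag key)
    environment = config.get("environment")  # e.g. "dev", "stage", "prod"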
An instance doesn't have to come up with 'hardly anything on it', though. You can/should build your own custom AMI (Amazon Machine Image) with any and all software you need to have running on it, and when you need to auto-scale, you boot the new instance from the AMI you previously created and saved.
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-create-custom-ami.html
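For illustration, a hedged sketch of wiring such a prebuilt AMI into an Auto Scaling group with boto3; every name and ID below is a placeholder:

    # Sketch: register a launch template for the custom AMI, then scale from it.
    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    ec2.create_launch_template(
        LaunchTemplateName="app-from-custom-ami",
        LaunchTemplateData={"ImageId": "ami-0123456789abcdef0",
                            "InstanceType": "t3.micro"},
    )

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchTemplate={"LaunchTemplateName": "app-from-custom-ami"},
        MinSize=1,
        MaxSize=4,
        VPCZoneIdentifier="subnet-0123456789abcdef0",
    )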
I would recommend using AWS Elastic Beanstalk for creating these instances; it makes things easier since it creates the Auto Scaling groups and Launch Configurations (boot-up code) for you, which you can edit later. Also, you only pay for the EC2 instances, and you can manage most things from the Beanstalk console.

Tag Nodes With Chef Roles Using CloudFormation

So my goal is to launch, say, 100 nodes in the cloud using CloudFormation, and I would like to tag the nodes with Chef roles within my CloudFormation script instead of using knife. I have set up my CloudFormation nodes to automatically register themselves with the Chef server, and I want them to report their role to the Chef server so that the Chef server installs the proper cookbooks on each node (depending on the node's roles). I know this is possible with knife, but I want to bury the node role within my CloudFormation script.
How can I do so?
I do this with Chef. I usually put a JSON file in S3 which describes the roles the machine needs to use. I create an IAM user in CloudFormation which can access the S3 bucket. Then, in my user data script, I first grab the file from S3 and then run chef-client -j /path/to/json/file. I do the same thing with the validation key, fwiw, so that the node can register itself.
HTH
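A minimal Python sketch of that user-data flow; the bucket, object keys, and local paths are placeholders:

    # Sketch: fetch the role JSON and validation key from S3, then converge with Chef.
    import subprocess
    import boto3

    s3 = boto3.client("s3")
    s3.download_file("my-bootstrap-bucket", "roles/webserver.json",
                     "/etc/chef/first-boot.json")
    s3.download_file("my-bootstrap-bucket", "validation.pem",
                     "/etc/chef/validation.pem")

    # Register with the Chef server and run the downloaded run list.
    subprocess.run(["chef-client", "-j", "/etc/chef/first-boot.json"], check=True)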
I use Puppet, which is of course slightly different to Chef, but the same theory should apply. I send a JSON object as the user data when launching a new instance (also via CloudFormation), then access this data in Puppet to do dynamic configuration.
Puppet handles a lot of this automatically - e.g. it will automatically set the FACTER_EC2_USER_DATA environment variable for me, so I just need to parse the JSON into variables such as $role and $environment, at which point I can dynamically decide which role the instance should be assigned.
So as long as you can find some way to access the user data within Chef, the same approach should work.
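As a sketch, reading that JSON user data from inside the instance might look like this in Python (assuming IMDSv1; IMDSv2 would require fetching a session token first):

    # Sketch: parse a JSON user-data payload such as {"role": "webserver", "environment": "prod"}.
    import json
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/latest/user-data"
    raw = urllib.request.urlopen(USER_DATA_URL, timeout=2).read().decode()

    config = json.loads(raw)
    role = config.get("role")
    environment = config.get("environment")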