Is there any API to automatically spin up an AWS server?

I might be naive, but I'm looking for a good solution to automatically spin up an AWS server with an API.
The use case is to create AWS EC2 instances at the click of a button and maintain the deployments. Ansible is a probable candidate, but I'm looking for the core mechanism for spinning up a new EC2 machine.
Appreciate your help.

Rather than directly calling an API (e.g. from Java, .NET, Python, etc.), you can also use the AWS Command-Line Interface (CLI).
The command you want is run-instances, which will launch a new Amazon EC2 instance.
See: AWS CLI documentation for run-instances
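As a minimal sketch, a launch from the CLI might look like the following; the AMI ID, key pair name, security group ID and tag value are placeholders you would replace with your own:

# Launch a single t2.micro instance from a placeholder AMI
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --count 1 \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-instance}]'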

Related

What is the difference between an AWS Systems Manager Document of Type Automation and Command?

They seem to be serving the same purpose. They can both be broken down into steps, each step being a script.
Both a Command document and an Automation document can also be part of SSM Associations in State Manager.
So my question is simple: in which case would I need to create a Command document instead of an Automation document?
From the documentation:
Using Run Command, a capability of AWS Systems Manager, you can remotely and securely manage the configuration of your managed nodes.
So with Command documents you are executing commands on your managed instances (e.g. yum update).
Automation, a capability of AWS Systems Manager, simplifies common maintenance, deployment, and remediation tasks for AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and many more.
With an Automation document you can interact with any AWS service to execute actions (e.g. launch an EC2 instance, create an AMI from a running instance, create an RDS snapshot, etc.).
Moreover, you can define retries and create process branches (e.g. when a step fails, take a different path than when it succeeds).
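As a rough illustration of the difference: a Command document is sent to specific managed instances, while an Automation document is executed against AWS APIs. Assuming a placeholder instance ID, the two invocations might look like this (AWS-RunShellScript and AWS-RestartEC2Instance are AWS-managed documents):

# Command document: run a shell command on the managed instance itself
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids i-0123456789abcdef0 \
    --parameters 'commands=["sudo yum update -y"]'

# Automation document: orchestrate AWS API actions (here: stop and start the instance)
aws ssm start-automation-execution \
    --document-name "AWS-RestartEC2Instance" \
    --parameters "InstanceId=i-0123456789abcdef0"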

AWS CloudFormation: How to automate EC2 instance cloning / snapshots

Automating "Cloning" / "snapshot" of an already existing AWS EC2 instance.
I am able to create a AWS EC2 instance manually through Cloud Formation within the console. Alternatively , from Jenkins too I was able to perform the same operation.
Clone / Snapshot : Manually , through the options of "Snapshot" / "Create Image" I was able to spin up a new instance from the existing one. My question is can this be automated through Jenkins or script etc? The solution should be able to use either the snapshot or create image or any other options available and create a new instance from an existing one.
If the process can be automated , my request to please guide / provide steps / scripts / documents that can help me achieve the same.
Absolutely everything on AWS can be automated in multiple ways, including:
AWS Command-Line Interface (CLI)
SDKs and Programming Toolkits for AWS for multiple languages
IT management tools like Chef, Jenkins, Ansible, etc. (which use the SDKs to call AWS services on your behalf)
Please note that AWS CloudFormation is a service for deploying services, such as networking, compute and database in an automatic and reproducible manner. It is not typically used for operational activities like taking snapshots.
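For instance, the manual "Create Image" flow described in the question maps to two CLI calls, create-image and run-instances, which Jenkins (or any scheduler) could run; the instance ID, AMI ID and image name below are placeholders:

# Create an AMI from the existing instance (drop --no-reboot if a reboot is acceptable)
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "clone-$(date +%Y%m%d%H%M%S)" \
    --no-reboot

# Later, launch a new instance from the resulting AMI
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --count 1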

Accessing files in EC2 from Lambda

I have a few EC2 servers in AWS. Whenever the disk space exceeds a limit, I want to delete some files (maybe the logs folder) on the EC2 instance automatically. I am planning to use Lambda and CloudWatch for this. Can I use Lambda to interact with EC2? If not, what is an alternative approach to achieve this functionality?
This is not an appropriate use-case for an AWS Lambda function.
AWS Lambda is suitable for tasks where compute is required in response to an event. Your use-case, however, is to manipulate information on an EC2 instance, which does not need cloud compute.
You could run a script on each instance, triggered by a Scheduled Task.
Alternatively, you could use the Systems Manager Run Command (also known as the EC2 Run Command), which allows you to run commands on multiple Amazon EC2 instances and view the results. This could be used to trigger a local script, or it could pass the whole command to run (including the script). It is purpose-built for the type of task you describe.
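As a rough sketch of that approach (the instance ID, log path and retention period are assumptions), this is the kind of call a Lambda function via an SDK, or you from the CLI, could make:

# Ask SSM Run Command to delete application logs older than 7 days on the target instance
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids i-0123456789abcdef0 \
    --parameters 'commands=["find /var/log/myapp -type f -mtime +7 -delete"]'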
AWS Lambda can reach your instances if they are accessible over the internet. If they are not, you can give Lambda access by placing it in your VPC, with a NAT instance or gateway as needed.
The problem is: access to your instance does not mean access to the instance's filesystem. To delete the files from Lambda you have two alternatives:
1. Configure a network filesystem service on your instances and connect to it from your Lambda function. On Windows you would just "share" your disks, but in that case you would need an SMB library in your Lambda code, since Lambda (I think) has no native SMB support. Just keep in mind that your security guy will scream out loud when you propose this alternative.
2. Create an "agent" on your EC2 instances, keep it running as a Windows service, and call this agent from your Lambda function. In that case, the Lambda starts the agent, which is then responsible for the file deletion.
Another option is to follow Ramesh's suggestion: create a PowerShell script and configure a cron (scheduled) job. To make this easier, you can create an image with the script baked in and use that image to initialize each instance. The same applies to the "agent" solution in the Lambda alternatives.
I think that, in any case, you will need to change something on your 150 servers. Using a customized image can help you simplify this a little, but you will not get a solution without some changes.
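For the scheduled-cleanup variant on a Linux instance, a minimal sketch is a single crontab entry (the path, 90% threshold and 3-day retention are assumptions); a Windows instance would use an equivalent Task Scheduler job with a PowerShell script:

# Every 15 minutes: if the root volume is more than 90% full, delete logs older than 3 days
*/15 * * * * [ "$(df --output=pcent / | tail -1 | tr -dc '0-9')" -gt 90 ] && find /var/log/myapp -type f -mtime +3 -delete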
According to the following thread, you cannot access files inside an EC2 VM unless you expose them through some other mechanism.
AWS Forum
Quoting from the forum:
If you are talking about the underlying EC2 instance, the answer is no, you cannot access those files.
However, as a solution for your problem, you can use a scheduled job to clean up your files depending on your usage. You can use a service or a cron job.

Runnable jar in AWS EC2

I have a requirement to run a runnable jar from AWS Lambda. One option is to create a Docker image and use ECS to achieve the desired result.
I am looking for an alternative approach using EC2. Is it possible to deploy a runnable jar in EC2 and then invoke it from AWS Lambda?
Yes, it's possible using EC2 Run Command. You could use your favorite AWS SDK flavor (Java, Python, etc.) to run a command on your EC2 instance from your Lambda function. Here's a good tutorial: http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html
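As a sketch, assuming the jar is already deployed on the instance (the jar path and instance ID are placeholders), the Lambda function would issue the SDK equivalent of this CLI call and can poll for the result afterwards:

# Ask SSM to run the jar on the target instance and capture the command ID
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids i-0123456789abcdef0 \
    --parameters 'commands=["java -jar /opt/app/myapp.jar"]' \
    --query 'Command.CommandId' --output text

# Check the outcome later with:
# aws ssm list-command-invocations --command-id <command-id> --details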

Is using the AWS SDK to launch an instance and the AWS CLI to manage it a good approach?

I've just started with AWS and I have some questions.
First, I followed the official documentation on how to launch an instance using the AWS SDK for Java, like this:
// Create the EC2 client (AWS SDK for Java v1)
AmazonEC2 ec2Client = new AmazonEC2Client(awsCreds);

// Create a security group and allow inbound SSH (port 22) from anywhere
CreateSecurityGroupRequest csgr = new CreateSecurityGroupRequest();
csgr.withGroupName("Azzouz_group").withDescription("My security group");
ec2Client.createSecurityGroup(csgr);

IpPermission ipPermission = new IpPermission();
ipPermission.withIpRanges("0.0.0.0/0").withIpProtocol("tcp").withFromPort(22).withToPort(22);

AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest = new AuthorizeSecurityGroupIngressRequest();
authorizeSecurityGroupIngressRequest.withGroupName("Azzouz_group").withIpPermissions(ipPermission);
ec2Client.authorizeSecurityGroupIngress(authorizeSecurityGroupIngressRequest);

// Launch a single instance using the imported key pair and the new security group
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
runInstancesRequest.withImageId("ami-4b814f22")
        .withInstanceType("m1.small")
        .withMinCount(1)
        .withMaxCount(1)
        .withKeyName("azzouz_key")
        .withSecurityGroups("Azzouz_group");

RunInstancesResult runInstancesResult = ec2Client.runInstances(runInstancesRequest);
String instanceId = runInstancesResult.getReservation().getInstances().get(0).getInstanceId();
I didn't use the CreateKeyPairRequest part because I want to upload my public key to Amazon, so that when I SSH into the instance I don't have to add -i path/to/key.pem and only have to mention the key name ("azzouz_key") in my Java code. In the following lines, $USER contains azzouz_key:
keypair=$USER  # just a name
publickeyfile=$HOME/.ssh/id_rsa.pub
regions=$(aws ec2 describe-regions \
    --output text \
    --query 'Regions[*].RegionName')

for region in $regions; do
    echo $region
    aws ec2 import-key-pair \
        --region "$region" \
        --key-name "$keypair" \
        --public-key-material "file://$publickeyfile"
done
What I want now is to connect to the instance and automate some stuff, so I'm planning to call a shell script from inside the Java code. The script gets an instance ID as a parameter, gets the IP address (using aws ec2 describe-instances), SSHes into the instance and does some stuff.
I want to authorize SSH connections to the instance from any IP (0.0.0.0/0) just as a start, and I'm not sure if this is what I'm supposed to do.
So, my questions are: Is this the best approach? Should I just use the AWS CLI to create and manage the instance? Does just mentioning the key pair name fit with the mechanism of uploading the public SSH key to Amazon?
Please, I'm just starting; I'm an intern and I don't yet have access to an Amazon account so I can test my work. I'm just working all of this out in my mind. THANK YOU VERY MUCH!
My advice is to set up an account on AWS and start using the AWS Free Tier options.
All in all, it is there and it is free (just pay attention to what you launch or use in the service).
Apart from that, regarding your question about how to authorize SSH connections from everywhere: this is done with security groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html); a short CLI sketch follows this answer.
About what the best option is, this really depends on you.
If you only ever need to launch 2 instances on AWS, then the console is good enough. If you want to orchestrate your hybrid setup, then your way is probably the best.
The CLI is an excellent solution for daily operations too.
In simple words, there is no best way or good or bad approach. It all depends on your needs.
I hope this helps somehow.
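A sketch of the security-group step mentioned above, reusing the Azzouz_group name from the question; the instance ID is a placeholder, and opening port 22 to 0.0.0.0/0 is only reasonable for a short-lived test instance:

# Allow inbound SSH from anywhere on the existing security group
aws ec2 authorize-security-group-ingress \
    --group-name Azzouz_group \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0

# Look up the instance's public IP and SSH in with the imported key
ip=$(aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
ssh ec2-user@"$ip"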
Automation is a huge topic. If you want to extend AWS automation using scripts, then before touching the API/SDK you must first design your own tag naming scheme for AWS resources.
Tag names are an implicit way to reference AWS resources without explicitly specifying the resource ID (e.g. VPC ID, EC2 instance ID, interface ID, etc.). In addition, for resources such as EC2 instances that don't let you use tags immediately during creation, you need to study the usage of the "client token" (see the sketch after this answer).
The AWS CLI lets you do a lot of automation; however, to manipulate the response results you need shell scripting skills. I suggest you pick the AWS SDK language that you are familiar with.
Cloud configuration management tools (there is limited support from tools like Ansible, SaltStack, Puppet) can be the next step, if you plan to extend this to whole source deployments and server configuration.
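A hedged sketch of the tag/client-token idea: the client token makes the launch idempotent (retrying with the same token will not create a second instance), and a Name tag lets later scripts find the instance without hard-coding its ID. The token string, AMI ID and tag values are assumptions:

# Launch with an idempotency token, then tag the new instance by name
instance_id=$(aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --client-token my-app-web-01 \
    --query 'Instances[0].InstanceId' --output text)

aws ec2 create-tags \
    --resources "$instance_id" \
    --tags Key=Name,Value=my-app-web-01

# Later, resolve the instance by tag instead of by hard-coded ID
aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=my-app-web-01"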
You may want to consider starting off with Infrastructure as Code. CloudFormation with CodePipeline will ensure automated and consistent environment launches and makes you highly valuable in the marketplace.
Both can be launched and managed via the AWS CLI. As your capabilities and the complexity of your IaC increase, it may be worth looking into Terraform, due to the modularity available compared to CloudFormation.
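For example, assuming you have written a CloudFormation template named ec2.yml (a hypothetical file) that declares an EC2 instance, deploying and later deleting the stack from the CLI might look like this; the stack name is a placeholder:

# Create or update the stack from a local CloudFormation template
aws cloudformation deploy \
    --template-file ec2.yml \
    --stack-name demo-ec2-stack

# Tear everything down again in one call
aws cloudformation delete-stack --stack-name demo-ec2-stack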