How to reassign the security groups of an instance in Step Functions - amazon-web-services

I am running a Step Functions workflow, but there is no predefined state to reassign a security group to a specific instance using the instance id. I already have 2 security groups created (the default one and the one I want to assign).
What are the ways this can be done?
I tried running it as a task with the following API parameters:
{
  "command": [
    "aws ec2 modify-instance-attribute --instance-id $instance_id --groups $security_group_id"
  ]
}

I believe that you are looking for the resource arn:aws:states:::aws-sdk:ec2:modifyInstanceAttribute. This uses AWS SDK Service Integrations to call the EC2 API Action.
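For reference, a Task state using that integration might look roughly like this (the state name and the input paths $.InstanceId and $.SecurityGroupIds are placeholders; the Parameters keys follow the EC2 ModifyInstanceAttribute API, and Groups expects a list of security group IDs):
"ReassignSecurityGroups": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:ec2:modifyInstanceAttribute",
  "Parameters": {
    "InstanceId.$": "$.InstanceId",
    "Groups.$": "$.SecurityGroupIds"
  },
  "End": true
}
Note that the state machine's execution role also needs permission to call ec2:ModifyInstanceAttribute.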
To make it easier to find the supported service integrations for Step Functions, I suggest using Step Functions Workflow Studio. It makes it really easy to search for available API actions and to build your workflows.
And if you'd like to learn more about AWS SDK Service Integrations, check out the module in the Step Functions Workshop.

Related

AWS - Conditionally run a script on EC2 instances

I am looking for a way to conditionally run a script on every existing / new EC2 instance.
For example, in Azure you can create an Azure Policy that is applied to every existing / new VM, and when a set of conditions applies to that VM, you can deploy a VM extension or run a DSC script.
I am looking for the equivalent service in AWS.
From AWS Systems Manager Run Command - AWS Systems Manager:
Using Run Command, a capability of AWS Systems Manager, you can remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. You can use Run Command from the AWS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs.
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is removed from an Auto Scaling group, and join instances to a Windows domain.
You will need to trigger the Run Command to execute on nominated EC2 instances. It will not automatically run for every 'new' instance.
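As a rough illustration (the tag filter and script path are placeholders), triggering Run Command from the AWS CLI looks something like this:
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=Prod" \
  --parameters 'commands=["/opt/scripts/configure.sh"]'
You could invoke the same call from an EventBridge rule or a Lambda function that fires on instance launch if you want it to run for every new instance.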
Alternatively, there is Evaluating Resources with AWS Config Rules - AWS Config:
Use AWS Config to evaluate the configuration settings of your AWS resources. You do this by creating AWS Config rules, which represent your ideal configuration settings. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant.
For example, when an EC2 volume is created, AWS Config can evaluate the volume against a rule that requires volumes to be encrypted. If the volume is not encrypted, AWS Config flags the volume and the rule as noncompliant. AWS Config can also check all of your resources for account-wide requirements. For example, AWS Config can check whether the number of EC2 volumes in an account stays within a desired total, or whether an account uses AWS CloudTrail for logging.
You can create an AWS Config custom rule that triggers a process when a non-compliant resource is found. This way, an automated action could correct the situation.
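For example, the encrypted-volumes check described above is available as an AWS managed rule and can be enabled from the CLI (a sketch only; a remediation action would be configured separately):
aws configservice put-config-rule \
  --config-rule '{
    "ConfigRuleName": "encrypted-volumes",
    "Source": { "Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES" }
  }'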
You can also use an AWS managed service such as OpsWorks (Managed Chef/Puppet).
This can give you a way of running the commands in an organized way by allowing you to create defined sets of instances and associated resources.

Terraform: find out if an AWS service is supported in the targeted region

We are using CodePipeline to deploy our application onto AWS EC2 nodes.
However, CodePipeline is not supported in all AWS Regions, which causes our Terraform deployment to fail.
I would like to use a user data script on the EC2 nodes in the AWS Regions that lack support for AWS CodePipeline.
Is there any way to detect/find out through Terraform whether the CodePipeline service is supported in the targeted region?
AWS lists the endpoints for CodePipeline in this documentation - https://docs.aws.amazon.com/general/latest/gr/codepipeline.html
My logical/hypothetical solution is below:
Run a curl command via local-exec, or use the http get data source, to hit the endpoint in the targeted region; the endpoints follow the pattern https://codepipeline.<InsertTargetedRegion>.amazonaws.com
From the result of step 1, make a logical decision: if the endpoint is reachable, create AWS CodePipeline and its downstream resources; if it is not reachable, create an EC2 LC with a user data script and drop the AWS CodePipeline.
The other solution (which is a little clumsy) that I can think of is to maintain a Terraform list of the regions which do not support CodePipeline and make the logical decision based on that.
However, this clumsy solution requires human effort (checking whether a region supports AWS CodePipeline and updating the list) and updating the Terraform configuration every now and then.
I am wondering if there is any other way to know whether the targeted region supports CodePipeline or not.
Thank You.
I think that having a static list of supported regions is simply the easiest and most direct way of knowing where the script can run. Then the logic is quite easy: if the current region is supported, continue; if not, error and stop. Any other logic will be cumbersome and unnecessary.
Yes, you can use a static file, but is that a scalable solution? How can you track when a new region is added? I think this link will help you:
https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/
With the AWS CLI you can query service availability by region.
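For example, the public parameters described in that post can be queried like this (a sketch; substitute the service name you care about):
# Regions where CodePipeline is available
aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/services/codepipeline/regions \
  --query 'Parameters[].Value' \
  --output text
Newer versions of the Terraform AWS provider can read the same parameters (for example with the aws_ssm_parameters_by_path data source) and feed the result into a count or for_each condition, which avoids maintaining a hand-written region list.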

Update terraform resource after provisioning

So I recently asked a question about how to provision instances that depend on each other. The answer I got was that I could instantiate the 3 instances, then have a null resource with a remote-exec provisioner that would update each instance.
It works great, except that in order to work my instances need to be configured to allow ssh. And since they are in a private subnet, I first need to allow ssh on a public instance that will then bootstrap my 3 instances. This bootstrap operation requires allowing ssh on 4 instances that really don't need it once the bootstrap is complete. This is not that bad, as I can still restrict the traffic to a known IP/subnet, but I still thought it was worth asking if there is some way to avoid that problem.
Can I update the security group of running instances in a single Terraform plan? Example: instantiate 3 instances with security_group X, provision them through ssh, then update the instances with security_group Y, thus disallowing ssh. If so, how? If not, are there any other solutions to this problem?
Thanks.
Based on the comments.
Instead of ssh, you could use AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
This would require your instances to be recognized by AWS Systems Manager (SSM), which requires three things:
1. Network connectivity to the SSM service. Since your instances are in a private subnet, they have to reach the SSM service either through a NAT gateway or through VPC interface endpoints for SSM.
2. The SSM Agent installed and running. This is usually not an issue, as most official AMIs on AWS already have it set up.
3. An instance role with the AmazonSSMManagedInstanceCore AWS managed policy.
Since Run Command is not natively supported by Terraform, you either have to use local-exec to run the command through the AWS CLI, or invoke it from a Lambda function using aws_lambda_invocation.
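A rough sketch of the local-exec variant (aws_instance.app and the bootstrap script path are hypothetical; waiting for the command to finish and error handling are omitted):
resource "null_resource" "bootstrap" {
  # Re-run the bootstrap whenever the instance is replaced
  triggers = {
    instance_id = aws_instance.app.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --instance-ids ${aws_instance.app.id} \
        --parameters 'commands=["/opt/bootstrap.sh"]'
    EOT
  }
}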

Installing AWS CLI on EC2 instances via Spinnaker/Terraform

Are there any security considerations in installing the AWS CLI as part of baking the base AMI?
I can see the following ways in which the AWS CLI can be installed:
1. Via image baking (i.e. making the AWS CLI part of the base AMI itself)
2. Via cloud-init
3. Installing it as a prerequisite just before your service bootstraps.
I see a strong NO (from the internal community) on the above, for the reason that the AWS instance (Spinnaker managed) can do more than just access cloud-native resources and is very powerful. So in this case, if we tighten the Spinnaker IAM role with which it deploys instances, should it be fine?
It really depends on what you are going to do with the AWS CLI in each baked EC2 instance. Is it for debugging purposes? Is it part of the functionality of your system?
If it is for debugging only, you can install the AWS CLI but attach an IAM role with minimum permissions. You could have a role with special permissions that you attach to your desired instances, access the instance, and perform your debugging actions.
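As an illustration only (the actions listed here are placeholders; scope them to whatever you actually need to inspect), such a debugging role might carry a read-only policy along these lines:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    }
  ]
}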
Other than that, it is really not recommended to use the AWS CLI or package managers inside instances.

Is using AWS SDK to launch an instance and aws cli to manage it a good approach?

I've just started with AWS and I have some questions.
First, I followed the official documentation on how to launch an instance using the AWS SDK for Java, like this:
AmazonEC2 client = new AmazonEC2Client(awsCreds);

// Create the security group and open SSH (TCP 22) to the world
CreateSecurityGroupRequest csgr = new CreateSecurityGroupRequest();
csgr.withGroupName("Azzouz_group").withDescription("My security group");
client.createSecurityGroup(csgr);

IpPermission ipPermission = new IpPermission();
ipPermission.withIpRanges("0.0.0.0/0").withIpProtocol("tcp").withFromPort(22).withToPort(22);

AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest = new AuthorizeSecurityGroupIngressRequest();
authorizeSecurityGroupIngressRequest.withGroupName("Azzouz_group").withIpPermissions(ipPermission);
client.authorizeSecurityGroupIngress(authorizeSecurityGroupIngressRequest);

// Launch a single instance with the imported key pair and the new security group
RunInstancesRequest runInstancesRequest = new RunInstancesRequest();
runInstancesRequest.withImageId("ami-4b814f22")
        .withInstanceType("m1.small")
        .withMinCount(1)
        .withMaxCount(1)
        .withKeyName("azzouz_key")
        .withSecurityGroups("Azzouz_group");

RunInstancesResult runInstancesResult = client.runInstances(runInstancesRequest);
String instanceId = runInstancesResult.getReservation().getInstances().get(0).getInstanceId();
I didn't use the CreateKeyPairRequest part because I want to upload my existing public key to Amazon, so that when I ssh into the instance I don't have to add -i path/to/key.pem and only have to mention the key name ("azzouz_key") in my Java code. In the following lines, $USER contains azzouz_key:
keypair=$USER # just a name
publickeyfile=$HOME/.ssh/id_rsa.pub
regions=$(aws ec2 describe-regions \
  --output text \
  --query 'Regions[*].RegionName')

for region in $regions; do
  echo $region
  aws ec2 import-key-pair \
    --region "$region" \
    --key-name "$keypair" \
    --public-key-material "file://$publickeyfile"
done
What I want now is to connect to the instance and automate some stuff. So I'm planning to call a shell script from inside the Java code; the script gets an instance id as a parameter, then gets the IP address (using aws ec2 describe-instances), sshes into the instance and does some work.
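Something along these lines is presumably what that helper script would look like (a sketch only; it assumes the instance has a public IP and that ec2-user is the correct login user):
#!/bin/bash
# Usage: ./run-on-instance.sh <instance-id>
instance_id=$1

# Look up the instance's public IP address
ip=$(aws ec2 describe-instances \
  --instance-ids "$instance_id" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)

# Run whatever automation is needed over ssh
ssh ec2-user@"$ip" 'uname -a'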
I wanted to authorize ssh connections to the instance from any IP (0.0.0.0/0) just as a start, and I'm not sure if this is what I'm supposed to do.
So, my question is: is this the best approach? Should I just use the AWS CLI to create and manage the instance? Does mentioning just the key pair name fit with the mechanism of uploading the public ssh key to Amazon?
Please, I'm just starting; I'm an intern and I don't yet have access to an Amazon account to test my work. I'm just working all of this out in my mind. THANK YOU VERY MUCH!
My advice is to set up an account on AWS and start using the AWS Free Tier options.
All in all, it is there and it is free (just pay attention to what you launch or use in the service).
Apart from that, regarding your question about how to authorize SSH connections from everywhere: this is done with security groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).
About what is the best option, this really depends on you.
If you only ever need to launch 2 instances on AWS, then the console is good enough. If you want to orchestrate your hybrid setup, then your way is probably the best.
CLI is an excellent solution for daily operations too.
In simple words, there is no best way or a good or bad approach. It all depends on your needs.
I hope this helps somehow.
Automation is a huge topic. If you want to extend AWS automation using scripts, then before touching the API/SDK you should first design your own AWS resource tag naming scheme.
Tag naming is an implicit way to reference AWS resources without explicitly specifying the resource id (e.g. VPC id, EC2 instance id, interface id, etc.). In addition, for resources such as EC2 instances that don't allow tags to be used immediately at creation, you need to study the usage of the "client token".
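For illustration (the AMI id, token and tag value are placeholders), a client token makes the launch call idempotent, and the instance can then be tagged immediately afterwards:
# Launch with an idempotency token, then tag the new instance
instance_id=$(aws ec2 run-instances \
  --image-id ami-4b814f22 \
  --instance-type t2.micro \
  --client-token azzouz-web-01 \
  --query 'Instances[0].InstanceId' \
  --output text)

aws ec2 create-tags --resources "$instance_id" --tags Key=Name,Value=azzouz-web-01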
The AWS CLI allows you to do lots of automation; however, you need shell scripting skills to manipulate the response results. I suggest you pick an AWS SDK language that you are familiar with.
Cloud configuration management tools (there is limited support from tools like Ansible, SaltStack and Puppet) can be the next step, if you plan to extend to full source deployment and server configuration.
You may want to consider starting off with Infrastructure as Code. CloudFormation with CodePipeline will ensure automated and consistent environment launches and makes you highly valuable in the marketplace.
Both can be launched and managed via the awscli. As your capabilities and the complexity of your IaC increase it may be worth looking into Terraform due to the modularity available compared to CloudFormation.