Retrieve existing resource data using AWS CloudFormation - amazon-web-services

I need to retrieve the existing data/properties of a given resource by using an AWS CloudFormation template. Is it possible? If it is, how can I do it?
Example 1:
Output: Security group ID which allows traffic on port 22
Example 2:
Output: Instance ID which uses the default VPC

AWS CloudFormation is used to deploy infrastructure from a template in a repeatable manner. It cannot provide information on any resources created by any methods outside of CloudFormation.
Your requirements seem more relevant to AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2) instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud (VPC).
Using your examples, AWS Config can list EC2 instances and any resources that are connected to them, such as security groups and VPCs. You can easily click through these relationships and view the configurations. It is also possible to view how these configurations have changed over time, such as:
When an EC2 instance changed state (e.g. stopped, running)
When rules changed on security groups
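If you do go down the AWS Config route, a minimal boto3 sketch of listing discovered instances and pulling a configuration history could look like the following; it assumes AWS Config is already recording in the account, and the instance ID is a placeholder.

```python
import boto3

config = boto3.client("config")

# List the EC2 instances that AWS Config has discovered in this account/region.
discovered = config.list_discovered_resources(resourceType="AWS::EC2::Instance")
for resource in discovered["resourceIdentifiers"]:
    print(resource["resourceId"])

# Configuration history shows how a resource (and its relationships) changed over time.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",  # placeholder instance ID
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```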
Alternatively, you can simply make API calls to AWS services to obtain the current configuration of resources, such as calling DescribeInstances to obtain a list of Amazon EC2 instances and their configurations.
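For example, a rough boto3 sketch covering the two outputs asked for in the question (security groups that allow port 22, and instances in the default VPC); it uses only the public EC2 describe APIs and skips pagination for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

# Example 1: security group IDs that allow inbound traffic on port 22.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        # Rules with no FromPort/ToPort (protocol "-1") allow all ports, including 22.
        if rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 65535):
            print(sg["GroupId"])
            break

# Example 2: instance IDs that use the default VPC.
default_vpc_ids = [v["VpcId"] for v in ec2.describe_vpcs()["Vpcs"] if v["IsDefault"]]
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if instance.get("VpcId") in default_vpc_ids:
            print(instance["InstanceId"])
```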

Related

How to migrate AWS ECS from one account to another (in a different Region/AZ)?

Docker containers are hosted with AWS ECS inside a VPC in a particular region. How do I migrate them to a different VPC in a different region?
Unfortunately, there isn't a straightforward method to migrate a service from one region to another. To accomplish this, you'll need to ensure that you have a VPC and ECS cluster set up in the target region. Then, you can create the service within that cluster and VPC.
If you're using CloudFormation or Terraform for configuration as code, simply update the region and relevant definitions, then redeploy. Otherwise, you can use the AWS CLI to extract the full definition of your cluster and service, and then recreate it in the target region. For more information, see the AWS CLI ECS reference: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
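As a rough illustration of the CLI/SDK route, here is a hedged boto3 sketch that reads a service definition from the source region and re-creates it in the target region; the cluster and service names are placeholders, and the target cluster, VPC and networking are assumed to already exist.

```python
import boto3

SOURCE_REGION = "us-east-1"   # placeholder
TARGET_REGION = "eu-west-1"   # placeholder

src = boto3.client("ecs", region_name=SOURCE_REGION)
dst = boto3.client("ecs", region_name=TARGET_REGION)

# Pull the existing service and its task definition from the source region.
service = src.describe_services(cluster="my-cluster", services=["my-service"])["services"][0]
task_def = src.describe_task_definition(taskDefinition=service["taskDefinition"])["taskDefinition"]

# Re-register the task definition in the target region, keeping only writable fields.
register_args = {k: v for k, v in task_def.items()
                 if k in ("family", "containerDefinitions", "volumes",
                          "requiresCompatibilities", "cpu", "memory",
                          "networkMode", "executionRoleArn", "taskRoleArn")
                 and v}
new_task_def = dst.register_task_definition(**register_args)["taskDefinition"]

# Create the service in the target cluster; networking details (subnets, security
# groups) differ per VPC and are omitted here.
dst.create_service(
    cluster="my-cluster",            # must already exist in the target region
    serviceName="my-service",
    taskDefinition=new_task_def["taskDefinitionArn"],
    desiredCount=service["desiredCount"],
)
```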
Also, make sure that any Docker images stored in a private registry are accessible in the target region. Best of luck with your migration!

Is the AWS CLI missing data for the "ec2 describe-instances" method?

As of the date of this question I'm using the most recent version of the AWS CLI (2.4.6) running on macOS. According to the v2 docs, the Instances that are returned should include properties like InstanceLifecycle, Licenses, MetadataOptions -> PlatformDetails and several others that are missing for me. While I'm getting back most data, some fields are absent. I've tried this in two separate AWS accounts, and I have admin IAM creds that I'm using locally, so why does the aws ec2 describe-instances call not return all of the fields listed in the docs?
Not all outputs are available for every EC2 instance; it depends on how your EC2 instances were provisioned.
For example:
InstanceLifecycle: only returned if you provisioned the EC2 instance as a Spot Instance (or a Scheduled Instance).
Licenses: only returned if you brought your own license (BYOL) when provisioning the EC2 instance.
And so on. The docs describe every possible output of the EC2 API, but what you actually get back depends on the parameters of each provisioned EC2 instance.
For example, try provisioning a Spot Instance and querying its instance lifecycle.
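To make that concrete, here is a small boto3 sketch (the instance ID is a placeholder) that shows which of these optional fields are actually present for a given instance:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID; swap in one of your own.
reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"]
instance = reservations[0]["Instances"][0]

# Optional fields only appear when they apply to how the instance was provisioned,
# so use .get() rather than assuming they are in the response.
for field in ("InstanceLifecycle", "Licenses", "PlatformDetails", "MetadataOptions"):
    print(field, "->", instance.get(field, "<not returned for this instance>"))
```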

Update terraform resource after provisioning

So I recently asked a question about how to provision instances that depend on each other. The answer I got was that I could instantiate the 3 instances, then have a null resource with a remote-exec provisioner that would update each instance.
It works great, except that in order to work, my instances need to be configured to allow SSH. And since they are in a private subnet, I first need to allow SSH on a public instance that will then bootstrap my 3 instances. This bootstrap operation requires allowing SSH on 4 instances that really don't need it once the bootstrap is complete. This is not that bad, as I can still restrict the traffic to a known IP/subnet, but I still thought it was worth asking if there are ways to avoid this problem.
Can I update the security group of running instances in a single Terraform plan? Example: instantiate 3 instances with security_group X, provision them through SSH, then update the instances with security_group Y, thus disallowing SSH. If so, how? If not, are there any other solutions to this problem?
Thanks.
Based on the comments.
Instead of ssh, you could use AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
This requires your instances to be recognized by AWS Systems Manager (SSM), which in turn requires three things:
Network connectivity to the SSM service. Since your instances are in a private subnet, they have to reach SSM either through a NAT gateway or through VPC interface endpoints for SSM.
The SSM Agent installed and running. This is usually not an issue, as most official AMIs on AWS already have it set up.
An instance role with the AmazonSSMManagedInstanceCore AWS managed policy attached.
Since Run Command is not supported natively by Terraform, you either have to use local-exec to run the command through the AWS CLI, or invoke a Lambda function using aws_lambda_invocation.
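If you go the Lambda (or plain scripting) route, a minimal boto3 sketch of the Run Command call could look like this; the instance IDs and the bootstrap script path are placeholders.

```python
import time
import boto3

ssm = boto3.client("ssm")

# Placeholder instance IDs of the private instances to bootstrap.
instance_ids = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2", "i-0ccccccccccccccc3"]

# Run a shell script on the instances via SSM instead of SSH.
response = ssm.send_command(
    InstanceIds=instance_ids,
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["/opt/bootstrap.sh"]},  # placeholder bootstrap script
)
command_id = response["Command"]["CommandId"]

# Poll for completion on one instance (simplified; real code should retry and handle errors).
time.sleep(5)
result = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_ids[0])
print(result["Status"])
```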

Move AWS EC2 Instance to another account

I created an Amazon AWS EC2 instance under my account and made a website/FTP server on it. Now a new partner wants to move the instance under his company account so his company can pay the bills.
We can't change the instance IP because banks in the region are communicating with the server.
How can I move the instance to a different account without having to change anything on the configuration?
The short answer is: no, you cannot move a running instance from one account to another, unless of course AWS Technical Support has some magic available behind the curtains.
You can, however, create an AMI from this instance and share that AMI with other users/accounts. Refer to: http://aws.amazon.com/articles/530
To share or migrate EC2 instances from a source account to a target account, follow these steps:
Create a custom Amazon Machine Image (AMI) from the instance you want to share or migrate. Be sure to include all required EBS data volumes in the AMI.
Note: Data stored on instance store volumes isn't preserved in AMIs, and won't be on the instance store volumes of the instances that you launch from the AMI.
Share the AMI with the target account using either the EC2 console or the AWS Command Line Interface (CLI).
From the target account, find the AMI using the EC2 console or the AWS CLI.
Launch a new instance from the shared AMI on the target account.
Note: The private IP address of VPC instances will be different in the new account, unless you specifically set it during launch.
Related information
Changing the Encryption State of Your Data
AWS CLI Command Reference (EC2)
Source: Transfer Amazon EC2 Instance
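For reference, a hedged boto3 sketch of those steps; the instance ID, AMI name, target account ID, and CLI profile name are all placeholders.

```python
import boto3

SOURCE_REGION = "us-east-1"        # placeholder region
TARGET_ACCOUNT_ID = "123456789012" # placeholder target account

ec2 = boto3.client("ec2", region_name=SOURCE_REGION)

# Step 1: create an AMI from the running instance (attached EBS volumes are included).
image_id = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="migration-ami")["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Step 2: share the AMI with the target account (encrypted AMIs also need KMS key sharing).
ec2.modify_image_attribute(
    ImageId=image_id,
    LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT_ID}]},
)

# Steps 3-4 happen with the *target* account's credentials.
target_ec2 = boto3.Session(profile_name="target-account").client("ec2", region_name=SOURCE_REGION)
target_ec2.run_instances(ImageId=image_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)
```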
This is not possible.
AWS Support does not have access to copy Amazon EC2 resources or manipulate any configuration options in AWS accounts. You can't separate an AWS account from an Amazon.com account or transfer resources between AWS accounts. It is possible to manually migrate Amazon EC2 resources from one account to another by completing the steps described here.
Source: https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-ec2-instance/
I'm working with several hundred EC2 instances in several AWS regions and accounts. You can move an EC2 instance to another AWS account; however, you can't move the Elastic IP, and it will take about 16 steps with the AWS CLI if you want to migrate tags and clone the security groups. I wrote a detailed post with the whole process at https://medium.com/@gmusumeci/how-to-move-an-ec2-instance-to-another-aws-account-e5a8f04cef21.
There are more than 10 steps involved in doing this kind of move. I would suggest you use infrastructure as configuration (Terraform and CloudFormation) or infrastructure as real code (Pulumi and CDK).
However, if you want to give it a go, there is a nice tool I found called KopiCloud. Please feel welcome to try it and leave your comments below. It is good if you need to move instances in a quick lift-and-shift scenario.
You can re-think the design of having the banks in the region communicate with your servers via IP.
If the banks communicate using DNS names, you have much more flexibility to move your servers around.
You can also achieve improvements in high availability and resiliency by moving to DNS connections.
So a plan might be:
Set up a DNS record for your existing server
Get the banks who connect to your server to connect via the DNS name
Set up your new server in the other account (other answers describe this)
Cut the banks over to your new server in the new account simply by updating the DNS record
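If the DNS record lives in Route 53, the final cutover step might look something like this boto3 sketch; the hosted zone ID, record name, and IP address are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Point the record that the banks use at the new server's IP in the new account.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Cut over to the server in the new account",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "banking.example.com.",  # placeholder record name
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # new server IP (placeholder)
            },
        }],
    },
)
```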
I haven't tried load balancing across accounts, but that may be another option, which would give you HA as a bonus. By registering your current instance and the new instance in the other account as targets with a load balancer, and getting your clients to connect to the load balancer, you could cut over to the other account. The only part I haven't tried is registering targets in different accounts, but it looks like this should be possible with an AWS Network Load Balancer.

Create AWS cache clusters in VPC with CloudFormation

I am creating an AWS stack inside a VPC using CloudFormation and need to create ElastiCache clusters in it. I have investigated, and there is no support in CloudFormation for creating cache clusters in VPCs.
Our "workaround" was to create the cache cluster when some "fixed" instance (like a bastion, for example) bootstraps, using CloudInit and the AWS AmazonElastiCacheCli tools (elasticache-create-cache-subnet-group, elasticache-create-cache-cluster). Then, when the front-end machines bootstrap (we are using autoscaling), they use elasticache-describe-cache-clusters to get the cache cluster nodes and update their configuration.
I would like to know if you have different solutions to this problem.
VPC support has now been added for ElastiCache in CloudFormation templates.
To launch an AWS::ElastiCache::CacheCluster in your VPC, create an AWS::ElastiCache::SubnetGroup that defines which subnets in your VPC you want ElastiCache to use, and assign it to the CacheSubnetGroupName property of the AWS::ElastiCache::CacheCluster.
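As a sketch, a minimal template along those lines could be deployed with boto3 like this; the subnet ID, security group ID, and node type are placeholders for resources in your VPC.

```python
import json
import boto3

# Minimal template sketch: a cache subnet group plus a cluster that uses it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CacheSubnetGroup": {
            "Type": "AWS::ElastiCache::SubnetGroup",
            "Properties": {
                "Description": "Subnets for the cache cluster",
                "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder subnet in your VPC
            },
        },
        "CacheCluster": {
            "Type": "AWS::ElastiCache::CacheCluster",
            "Properties": {
                "Engine": "redis",
                "CacheNodeType": "cache.t3.micro",
                "NumCacheNodes": 1,
                "CacheSubnetGroupName": {"Ref": "CacheSubnetGroup"},
                "VpcSecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="elasticache-in-vpc",
    TemplateBody=json.dumps(template),
)
```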
Your workaround is a reasonable one (and shows that you seem to be in control of your AWS operations already).
You could improve on your custom solution eventually by means of the dedicated CustomResource type, which is a special AWS CloudFormation resource that provides a way for a template developer to include resources in an AWS CloudFormation stack that are provided by a source other than Amazon Web Services. The AWS CloudFormation Custom Resource Walkthrough provides a good overview of what this is all about, how it works, and what's required to implement your own.
The benefit of using this facade for a custom resource (i.e. the Amazon ElastiCache cluster in your case) is that its entire lifecycle (create/update/delete) can be handled in a similar and controlled fashion, just like any officially supported CloudFormation resource type; e.g. resource creation failures would be handled transparently from the perspective of the entire stack.
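As a rough sketch of what such a custom resource handler could look like (the endpoint could be an SNS-subscribed process or, these days, a Lambda function), with the actual ElastiCache provisioning stubbed out and the response format taken from the walkthrough:

```python
import json
import urllib.request

def handler(event, context):
    """Custom resource handler sketch for an ElastiCache cluster."""
    request_type = event["RequestType"]  # "Create", "Update" or "Delete"
    physical_id = event.get("PhysicalResourceId", "my-cache-cluster")  # placeholder

    try:
        if request_type == "Create":
            pass  # create the cache cluster here (e.g. via the boto3 elasticache client)
        elif request_type == "Update":
            pass  # modify the cluster
        elif request_type == "Delete":
            pass  # delete the cluster
        status, reason = "SUCCESS", ""
    except Exception as exc:
        status, reason = "FAILED", str(exc)

    # Signal the result back to CloudFormation via the pre-signed ResponseURL.
    body = json.dumps({
        "Status": status,
        "Reason": reason,
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }).encode()
    request = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                     headers={"Content-Type": ""})
    urllib.request.urlopen(request)
```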
However, for the use case at hand you might actually just want to wait for official support becoming available:
AWS has announced VPC support for ElastiCache in the context of the recent major Amazon EC2 Update - Virtual Private Clouds for Everyone!, which boils down to Default VPCs for (Almost) Everyone.
We want every EC2 user to be able to benefit from the advanced networking and other features of Amazon VPC that I outlined above. To enable this, starting soon, instances for new AWS customers (and existing customers launching in new Regions) will be launched into the "EC2-VPC" platform. [...]
You don't need to create a VPC beforehand - simply launch EC2 instances or provision Elastic Load Balancers, RDS databases, or ElastiCache clusters like you would in EC2-Classic and we'll create a VPC for you at no extra charge. We'll launch your resources into that VPC [...] [emphasis mine]
This update sort of implies that any new services will likely also be available in VPC right away going forward (else the new EC2-VPC platform wouldn't work automatically for new customers as envisioned).
Accordingly I'd expect the CloudFormation team to follow suit and complete/amend their support for deployment to VPC going forward as well.
My solution for this has been to have a controller process that polls a message queue, which is subscribed to the SNS topic that I send CloudFormation events to (click Advanced in the console when you create a CloudFormation stack to send notifications to an SNS topic).
I pass the required parameters as tags on the AWS::EC2::Subnet and have the controller pick them up when the subnet is created. I execute the setup when an AWS::CloudFormation::WaitConditionHandle is created, and use its PhysicalResourceId to cURL with PUT to satisfy an AWS::CloudFormation::WaitCondition.
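For reference, the PUT that satisfies the wait condition is just a small JSON document sent to the handle's pre-signed URL; a Python equivalent of the cURL call might look like this (the URL is whatever PhysicalResourceId the controller picked up).

```python
import json
import urllib.request

def signal_wait_condition(handle_url: str) -> None:
    """Send the success signal that an AWS::CloudFormation::WaitCondition is waiting for."""
    body = json.dumps({
        "Status": "SUCCESS",
        "Reason": "ElastiCache cluster configured",
        "UniqueId": "elasticache-setup-1",  # any unique string
        "Data": "ok",
    }).encode()
    request = urllib.request.Request(handle_url, data=body, method="PUT",
                                     headers={"Content-Type": ""})
    urllib.request.urlopen(request)
```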
It works somewhat, but doesn't handle resource deletion in ElastiCache, because there is no AWS::CloudFormation::WaitCondition analogue for stack deletion. That's a manual operation procedure with my approach.
The CustomResource approach looks more polished, but requires an endpoint, which I don't have. If you can put together an endpoint, that looks like the way to go.