Cloud9 and CloudFormation - access EC2 instance - amazon-web-services

I'm creating a cloud9 instance in cloudformation as follows:
Resources:
  DevEnv:
    Type: 'AWS::Cloud9::EnvironmentEC2'
    Properties:
      SubnetId: !Ref PublicSubnet
      Name: MyEnvironment
      InstanceType: !Ref Cloud9InstanceType
Subsequently, I want to run a bash script on that EC2 instance (within the cloudformation script) to set it up properly. How do I do that?

Unfortunately you can't. AWS::Cloud9::EnvironmentEC2 doesn't expose a UserData property (or anything equivalent), and you cannot run SSM documents against Cloud9 instances.
The closest you can get is using the Repositories property to clone an AWS CodeCommit repository into the instance, and putting a setup script in that repository that you run manually the first time you connect.
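As a sketch of that workaround (the repository URL and path below are placeholders, not values from the question):

```yaml
Resources:
  DevEnv:
    Type: 'AWS::Cloud9::EnvironmentEC2'
    Properties:
      SubnetId: !Ref PublicSubnet
      Name: MyEnvironment
      InstanceType: !Ref Cloud9InstanceType
      Repositories:
        # Clones the repo into the IDE under the given path on first launch;
        # a setup script inside it must still be run by hand.
        - PathComponent: /setup
          RepositoryUrl: 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-setup-repo'
```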

Related

Is there a way to use resources created by ec2 user data in my cloudformation stack for another resource in the stack?

I'm working on a template that deploys an EC2 instance. In the instance's user data, the instance pulls a script from a git repo and uses that script to create an AMI. I would like to refer to that newly created AMI's ID in another resource in the same CloudFormation stack, either by using !Ref or some other way.
So far I have placed this line in the user data to get the ID of the AMI:
export AMIID=$(aws ec2 describe-images --filters "Name=name,Values=ami-name" | jq -r ".Images[].ImageId")
and this line to put the AMI ID in Parameter Store:
aws ssm put-parameter --name aminame --type String --value "$AMIID"
In the cloudformation stack I have a parameter here
AMI:
  Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
  Default: aminame
and in the resource block I have a reference to the AMI that looks something like this:
EC2Instance:
  Type: "AWS::EC2::Instance"
  CreationPolicy:
    ResourceSignal:
      Timeout: PT120M
  Properties:
    ImageId: !Ref AMI
    UserData:
      Fn::Base64: |
        #!/bin/bash
So far when I run this I get an error stating that the parameter cannot be found, which makes sense (the parameter doesn't exist until the instance has run). Is there any other way to do something like this?
Technically you can do that using a Lambda-backed custom resource:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
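A minimal sketch of that approach, assuming the instance has already written the AMI ID to the aminame SSM parameter (the resource names and the omitted IAM role here are made up for illustration):

```yaml
AmiLookupFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: python3.9
    # Role granting ssm:GetParameter plus basic Lambda logging (definition not shown)
    Role: !GetAtt AmiLookupRole.Arn
    Code:
      ZipFile: |
        import boto3
        import cfnresponse
        def handler(event, context):
            ami = ''
            if event['RequestType'] != 'Delete':
                # Read the AMI ID the instance stored in Parameter Store
                ssm = boto3.client('ssm')
                ami = ssm.get_parameter(Name='aminame')['Parameter']['Value']
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {'ImageId': ami})

AmiLookup:
  Type: Custom::AmiLookup
  Properties:
    ServiceToken: !GetAtt AmiLookupFunction.Arn

EC2Instance:
  Type: "AWS::EC2::Instance"
  Properties:
    # The custom resource's return value replaces the SSM-typed parameter
    ImageId: !GetAtt AmiLookup.ImageId
```

The catch remains ordering: the custom resource only sees the parameter if it runs after the instance has written it, so a DependsOn (or a wait condition signaled from the instance's user data) is still needed.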

passing file to ec2 in cloud formation

I have a CloudFormation script that creates some EC2 instances and later attaches them to an ELB.
I have a Python server script that I would like to have on the EC2 instances as soon as they are created.
Right now, after the CloudFormation script finishes, I use SCP to copy the script to the EC2 instances.
I was wondering if there was a way to do this within CloudFormation, maybe under UserData?
I should point out I am very new to CloudFormation. I have gone over the documentation, but have not been able to do this yet.
[EDIT] I think it's important to state that I have a deploy.sh script that I run to create the stack. The script sits in the same dir as my Python server script. I AM NOT USING THE AWS CONSOLE.
This is my instance code in the CloudFormation script:
EC2Instance2:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref InstanceType
    SecurityGroupIds:
      - !Ref InstanceSecurityGroup
    KeyName: !Ref KeyName
    ImageId: !Ref LatestAmiId
    UserData:
      Fn::Base64:
        !Sub |
          #!/bin/bash
          sleep 20
          sudo apt-get update
          sudo apt-get install python3-pip -y
          sudo apt-get install python3-flask -y
I was wondering if there was a way to do this within CloudFormation, maybe under UserData?
Yes, UserData would be the way to do it. You could store your file in S3. For that to work, you would need to attach an instance role with S3 permissions to your instance. Then you would use the AWS CLI in UserData to copy your file from S3 to the instance.
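A sketch of what that could look like (the bucket name, object key, and role/profile names are assumptions, not values from the question):

```yaml
InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal: { Service: ec2.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: s3-read
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: s3:GetObject
              Resource: arn:aws:s3:::my-bucket/server.py

InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles: [!Ref InstanceRole]

EC2Instance2:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile: !Ref InstanceProfile
    # ...other properties as before...
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        apt-get update
        apt-get install -y python3-pip awscli
        # Pull the server script from S3 using the instance role's credentials
        aws s3 cp s3://my-bucket/server.py /home/ubuntu/server.py
```

The deploy.sh script would then upload the current server.py to the bucket (aws s3 cp) before calling CloudFormation, so the instances always fetch the latest copy.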

Can we dynamically create Keypair through AWS Cloudformation and copy the .PEM file to EC2 Linux instance

My requirement is to create an EC2 instance whose key pair is created dynamically by the same CloudFormation template. As of now, I am creating the key pair from the AWS console and assigning it to the EC2 instance through CloudFormation by taking the input from the user.
I have checked the AWS documentation and found that the key pair can be created from the AWS console.
Is there any way the key pair can be created from CloudFormation, with the .PEM file copied to the instance?
It's about private key management.
An EC2 key pair has two components: public and private. The public key is what AWS stores and pushes to the instance on instance creation. The private key is never stored at AWS. The moment you create a key pair, either with the console or via the CLI, you have one and only one chance to store it on your machine.
CloudFormation has no way of storing the private key on your machine as part of the stack initialization.
You might consider two-step approach here:
1) Either create the key or import one from your machine. Either way, you and only you have access to the private key part.
aws ec2 import-key-pair
or
aws ec2 create-key-pair
2) Use this newly created key in cloudformation.
SshKeyParameter:
  Description: SSH Keypair to login to the instance
  Type: AWS::EC2::KeyPair::KeyName
...
KeyName: !Ref SshKeyParameter
I followed Anton's answer and it works fine. I wrote a shell script which launches a CloudFormation template; if the key is not present, the script will create it and upload it to an S3 bucket.
#!/bin/bash
Region=eu-central-1
key=myapp-engine-$Region

Available_key=$(aws ec2 describe-key-pairs --key-name "$key" --region "$Region" 2>/dev/null | grep KeyName | awk -F\" '{print $4}')
if [ "$key" = "$Available_key" ]; then
  echo "Key is available."
else
  echo "Key is not available: creating new key"
  # --query KeyMaterial extracts just the PEM body; without it the file
  # would contain the CLI's full JSON response, not a usable private key
  aws ec2 create-key-pair --key-name "$key" --region "$Region" --query 'KeyMaterial' --output text > "$key.pem"
  aws s3 cp "$key.pem" "s3://mybucket/$key.pem"
fi

##### create stack #########
/usr/local/bin/aws cloudformation deploy --stack-name myapp-engine --template-file ./lc.yml --parameter-overrides file://./config.json --region "$Region"
Below is an example of a CloudFormation launch configuration stack where you can pass the key.
Resources:
  renderEnginelc:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId:
        Ref: "AMIID"
      SecurityGroups:
        - Fn::ImportValue:
            !Sub "${SGStackName}-myapp"
      InstanceType:
        Ref: InstanceType
      LaunchConfigurationName: !Join [ "-", [ !Ref Environment, !Ref ApplicationName, lc ] ]
      KeyName: !Join [ "-", [ !Ref KeyName, !Ref "AWS::Region" ] ]
The value passed for the KeyName parameter is "myapp-engine"; the region suffix is appended automatically via AWS::Region.
Anton's answer is correct.
However, there are alternatives using other tools. Usually they allow you to automate the public key import.
Ansible: https://docs.ansible.com/ansible/latest/collections/amazon/aws/ec2_key_module.html#ansible-collections-amazon-aws-ec2-key-module
Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/key_pair
Or even: https://binx.io/blog/2017/10/25/deploying-private-key-pairs-with-aws-cloudformation/

AWS CloudFormation: update-stack fails if VolumeAttachment specified

We're encountering a strange issue with AWS CloudFormation.
We're using CloudFormation to automate the deployment of some of our machines. Our CloudFormation YAML describes the deployment, which includes a persistent EBS volume that was created outside the stack and that we don't want to remove or recreate along with the stack (it contains a lot of our application's state).
The relevant CloudFormation yml snippet is:
DataVolumeAttachment01:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    Device: "/dev/xvdm"
    InstanceId: !Ref EC2Instance01
    VolumeId: !Ref DataVolumeId

EC2Instance01:
  Type: "AWS::EC2::Instance"
  Properties:
    ImageId: "ami-6f587e1c"
    KeyName: !Ref "KeyName"
    InstanceType: !Ref "InstanceType"
    BlockDeviceMappings:
      # Root device
      - DeviceName: "/dev/sda1"
        Ebs:
          VolumeType: "gp2"
          DeleteOnTermination: "true"
          VolumeSize: 20
So the root device is "transient" (every time the stack is updated, that volume is deleted and reprovisioned via user data), while /dev/xvdm should contain our persistent data; that device gets mounted at the end of the user data and added to fstab.
Following AWS's own documentation, I created a script that unmounts the volume from inside the VM, and I even tried detaching the volume from the EC2 instance, something like:
${SSH_CMD} "cd /home/application && application stop && sudo umount /data && echo data volume unmounted"
echo "detaching data volume"
VOLID=$(aws ec2 describe-volumes --filters Name=tag-key,Values="Name" Name=tag-value,Values=persistent-volume --query 'Volumes[*].{ID:VolumeId}' --output text)
aws ec2 detach-volume --volume-id "${VOLID}"
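One refinement worth considering (not part of the original script): detach-volume returns immediately while the detach is still in progress, so waiting for the volume to report as available before running update-stack avoids a race:

```
# Assumes $VOLID is set as in the script above
aws ec2 detach-volume --volume-id "$VOLID"
# Block until the detach has actually completed
aws ec2 wait volume-available --volume-ids "$VOLID"
```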
I have verified the umount and the detach succeed.
The creation of a new stack with my template and parameters succeeds.
And yet, when I launch
aws cloudformation update-stack --capabilities CAPABILITY_IAM --stack-name $STACK_NAME --template-body file://single_ec2_instance.yml --parameters file://$AWS_PARAMETERS_FILE
The update fails, with this error:
Update to resource type AWS::EC2::VolumeAttachment is not supported.
Even though I'm not changing anything within such resource.
What's up? How can I solve or work around?
It seems the thing was a non-issue.
Either CloudFormation is affected by exhausted t2 CPU credits (which we had exhausted; we were trying to change the instance type for that exact reason, to move to m3 or m4), or it was just a bad day for EC2/CloudFormation in Ireland. Today, with the same exact setup, every update succeeded.
I just had this problem and found the solution was to terminate my stack which creates EC2 instances and then redeploy.

Run bash script when Stack Created

I'm designing an AWS stack that contains multiple instances that run a handful of services comprised of a few tasks each. One of these services uses NFS to store configuration, and this configuration needs to be setup ONCE when the stack is created.
I've come up with a way to run a configuration script ONCE when the stack is created:
1) Configure the service that needs to configure itself to run a single task.
2) When the task starts, check if the configuration exists. If it doesn't, run a configuration script and then update the desired task count so that other instances are created.
(1) is necessary to avoid a race condition.
Although this works well, it strikes me as a very round-about way to achieve something simple: run a bash script ONCE when my stack is created. Is there a better way to do this?
You can run a one-off Bash script using an AWS::EC2::Instance resource with an InstanceInitiatedShutdownBehavior property of terminate (to terminate the instance after the script executes), and a DependsOn attribute set to the last-created resource in your stack (so the EC2 instance gets created and the Bash script gets executed at the end):
Description: Run a bash script once when a stack is created.

Mappings:
  # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
  RegionMap:
    us-east-1:
      "64": "ami-9be6f38c"

Resources:
  MyResource:
    Type: AWS::SNS::Topic

  WebServer:
    Type: AWS::EC2::Instance
    DependsOn: MyResource
    Properties:
      ImageId: !FindInMap [ RegionMap, !Ref "AWS::Region", 64 ]
      InstanceType: m3.medium
      InstanceInitiatedShutdownBehavior: terminate
      UserData:
        "Fn::Base64":
          !Sub |
            #!/bin/bash
            # [Contents of bash script here...]
            shutdown -h now