I have an Ansible playbook that runs a CloudFormation template. The playbook should be able to run over and over to create an unlimited number of servers on AWS, just like running a CloudFormation create-stack each time, but when it is run more than once it keeps updating the same resource it created; it just changes the Name. I have been trying to fix this for two days. I need a way to create a NEW server in AWS no matter how many times I run the playbook. I believe the issue is the instance ID: since it sees one has been created, it doesn't attempt to create a new one. Here is my CloudFormation template, uploaded to S3.
Parameters:
  KeyPair:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Connects to this
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-04681a1dbd79675a5
      InstanceType: t2.micro
      KeyName: !Ref KeyPair
And this is my Ansible playbook that provisions the EC2 server from the template in the S3 bucket, run like ansible-playbook provision.yml. Please help.
- hosts: localhost
  tasks:
    - name: first Task Provision ec2
      cloudformation:
        stack_name: 'provisiong-ec2'
        state: present
        region: us-east-1
        disable_rollback: true
        template_url: https://s3.amazonaws.com/randombuckets/ansy2-cloudformation.template
        template_parameters:
          KeyPair: rabbit
It's not creating a new instance because the stack_name has not changed and your CFT only builds one host (which is already built).
Your immediate options are:
Create your instances using an AutoScalingGroup (ASG) within CloudFormation. You can pass in the minimum number of hosts (MinSize) as a parameter and the ASG will take care of the rest. You'll need to build in some logic to increment the count by one on each iteration (see the sketch after this list).
(not advised) Change the stack name every time you run the Ansible playbook
(not advised) Add another host to your CFT every time you want to run Ansible
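A minimal sketch of the ASG route (the LaunchConfig/ServerGroup names and the InstanceCount parameter are illustrative; the AMI and key pair come from your template):

Parameters:
  KeyPair:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Connects to this
  InstanceCount:
    Type: Number
    Default: 1
Resources:
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-04681a1dbd79675a5
      InstanceType: t2.micro
      KeyName: !Ref KeyPair
  ServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones: !GetAZs ""   # all AZs in the stack's region
      LaunchConfigurationName: !Ref LaunchConfig
      MinSize: !Ref InstanceCount
      MaxSize: !Ref InstanceCount
      DesiredCapacity: !Ref InstanceCount

Each Ansible run would then pass a larger value under template_parameters (e.g. InstanceCount: 2), and the stack update grows the group instead of replacing the single instance.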
I am working on creating an AWS CloudFormation stack in which we create resources through a template.yaml, and also create a folder for each resource in the project to hold the files that will go into that resource once it is created.
For example, I create a Lambda function in the template.yaml with the name "count_calories", create a folder in the project called "count_calories", and put a .py file with the Lambda handler and a requirement.txt file in it.
In a similar way, I now have to create a SageMaker notebook instance through the template.yaml and then upload Jupyter notebooks to that notebook instance each time the stack is created from that CloudFormation template.
I have created the SageMaker notebook instance with the following template code:
NotebookInstance: # SageMaker notebook instance
  Type: AWS::SageMaker::NotebookInstance
  Properties:
    InstanceType: ml.t3.medium
    NotebookInstanceName: !Sub Calorie-NotebookInstance-${EnvVar}
    RoleArn: <RoleARN>
    RootAccess: Enabled
    VolumeSizeInGB: 200
I have 4 Jupyter notebooks and a data file that should go into this notebook instance once it is created. I want to do the upload through code, not from the AWS console. Please suggest the right way to do it, or point me to an example I can follow.
Many thanks
You're on the right path using the resource type AWS::SageMaker::NotebookInstance.
Follow along with the example here to create a SageMaker notebook using CFT.
Consider using AWS::SageMaker::NotebookInstanceLifecycleConfig: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-notebookinstancelifecycleconfig.html
First, you need to reference the LifecycleConfigName in the AWS::SageMaker::NotebookInstance resource, by name; that's why I'm using the !GetAtt function and not !Ref.
Then, you need to create the AWS::SageMaker::NotebookInstanceLifecycleConfig resource that you referenced in the previous step.
Finally, on the Fn::Base64: line you insert the commands for the code/file download. I'm using wget in this example, but you can use other bash commands, or even download a more complex script and run it. Note that the script must complete in no more than 5 minutes: https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html
Please see the following code example:
JupyterNotebookInstance:
  Type: AWS::SageMaker::NotebookInstance
  Properties:
    InstanceType: ml.t3.medium
    RoleArn: !GetAtt JupyterNotebookIAMRole.Arn
    NotebookInstanceName: !Ref NotebookInstanceName
    LifecycleConfigName: !GetAtt JupyterNotebookInstanceLifecycleConfig.NotebookInstanceLifecycleConfigName

JupyterNotebookInstanceLifecycleConfig:
  Type: "AWS::SageMaker::NotebookInstanceLifecycleConfig"
  Properties:
    OnStart:
      - Content:
          Fn::Base64: "cd /home/ec2-user/SageMaker/ && wget <your_files_url_here>"
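If your four notebooks and the data file are stored in S3, an OnCreate script can pull them all in one step when the instance is first created. A sketch under that assumption (my-notebooks-bucket is a placeholder, and the notebook's execution role needs read access to it):

JupyterNotebookInstanceLifecycleConfig:
  Type: "AWS::SageMaker::NotebookInstanceLifecycleConfig"
  Properties:
    OnCreate:   # runs once, at instance creation
      - Content:
          Fn::Base64: |
            #!/bin/bash
            # Copy the notebooks and the data file into the Jupyter working directory
            aws s3 cp s3://my-notebooks-bucket/ /home/ec2-user/SageMaker/ --recursive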
When I create a Beanstalk environment using a saved configuration, it works fine but creates a new security group for no reason and attaches it to the instances. I already provide a security group that allows SSH access to the instances from VPC sources.
I followed this thread and tried to restrict this behaviour with the following config inside .ebextensions:
Resources:
  AWSEBSecurityGroup: { "CmpFn::Remove": {} }
  AWSEBAutoScalingLaunchConfiguration:
    Properties:
      SecurityGroups:
        - sg-07f419c62e8c4d4ab
Now the creation process gets stuck at:
Creating application version archive "app-210517_181530".
Uploading stage/app-210517_181530.zip to S3. This may take a while.
Upload Complete.
Environment details for: restrict-sg-poc
Application name: stage
Region: ap-south-1
Deployed Version: app-210517_181530
Environment ID: e-pcpmj9mdjb
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Tomcat 8.5 with Corretto 11 running on 64bit Amazon Linux 2/4.1.8
Tier: WebServer-Standard-1.0
CNAME: UNKNOWN
Updated: 2021-05-17 12:45:35.701000+00:00
Printing Status:
2021-05-17 12:45:34 INFO createEnvironment is starting.
2021-05-17 12:45:35 INFO Using elasticbeanstalk-ap-south-1-############ as Amazon S3 storage bucket for environment data.
How can I do this properly so that my SG is added to the instances and no new SGs are created?
PS: I am using a shared ALB, so the SG created for load balancers is not a problem right now.
I would like to perform the following operations in order with CloudFormation.
1. Start up an EC2 instance.
2. Give it privileges to access the full internet using security group A.
3. Download particular versions of Java and Python.
4. Remove its internet privileges by removing security group A and adding security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a configured userdata script that itself downloads Java/Python and the awscli, as necessary, and then uses the awscli to switch security groups for the current EC2 instance.
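A rough sketch of that userdata approach (SecurityGroupA, SecurityGroupB, and SwapProfile are assumed to be defined elsewhere in the template, and the instance profile must allow ec2:ModifyInstanceAttribute):

AppInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-xxxxxxxxx            # <-- your AMI
    InstanceType: t2.micro
    SecurityGroupIds:
      - !Ref SecurityGroupA           # starts with full internet access
    IamInstanceProfile: !Ref SwapProfile   # hypothetical profile allowing ec2:ModifyInstanceAttribute
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Install the runtimes while security group A still allows internet access
        yum install -y java-1.8.0-openjdk python3 awscli
        # Look up this instance's ID from the instance metadata service,
        # then swap security group A for security group B
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
          --groups ${SecurityGroupB} --region ${AWS::Region}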
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
The best way out is to use a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need. This Lambda function can then be called as a custom resource in the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function, and code the Lambda function to use the AWS SDK and make the modifications that you need.
I have leveraged this to post updates to my web server about the progress of a CloudFormation template. Below is a sample of that template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']

MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and processes them to calculate the percentage progress; based on that, it sends a request to my web server. For all we care, this Lambda function can do potentially anything you want it to do.
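Applied to your security-group use case, the template side might look like the following (Custom::ModifyInstanceSecurityGroup, MyInstance, and SecurityGroupB are illustrative names; the Lambda behind ServiceToken would call ec2:ModifyInstanceAttribute with these values and then signal success back to CloudFormation):

SwapSecurityGroup:
  Type: 'Custom::ModifyInstanceSecurityGroup'
  Version: '1.0'
  DependsOn: MyInstance
  Properties:
    ServiceToken: !Ref CustomResourceArn   # ARN of your Lambda function
    InstanceId: !Ref MyInstance
    NewSecurityGroupId: !Ref SecurityGroupB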
I'm using an AWS CodeStar setup and I would like to add a database.config to my .ebextensions folder in my Rails project.
If you're wondering why I'm not adding the database through the console: CodeStar's pipeline fails at the final ExecuteChangeSet stage for CloudFormation changes and throws a 404 error; I assume CodePipeline is looking for the previous instance.
In the error message I've been receiving, AWS suggests I edit Elastic Beanstalk directly. I'm somewhat lost as to how I can add a database to my project using Elastic Beanstalk while not breaking CodeStar's CodePipeline ExecuteChangeSet.
You specified the 'AWSEBRDSDBInstance' resource in your configuration to create a database instance,
without the corresponding database security group 'AWSEBRDSDBSecurityGroup'. For a better way to add
and configure a database to your environment, use 'eb create --db' or the Elastic Beanstalk console
instead of using a configuration file.
My .ebextensions/database.config file so far.
Resources:
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: 5
      DBInstanceClass: db.t2.micro
      DBName: phctest
      Engine: postgres   # note: the RDS engine name is "postgres", not "postgresql"
      EngineVersion: 10.4
      MasterUsername: username
      MasterUserPassword: password
I could also make a separate RDS database on its own; I thought about that, but I'd like to leave it to Elastic Beanstalk.
I am quite new to AWS and want to know how to achieve the following task with CloudFormation.
I want to spin up an EC2 instance with Tomcat and deploy a Java application on it. This Java application will perform some operation. Once the operation is done, I want to delete all the resources created by this CloudFormation stack.
All these activities should be automatic. For example, I will create the CloudFormation stack JSON file. At a particular time of day, a job should be kicked off (I don't know where in AWS to configure such a job, or how). I do know that through Jenkins we can create a CloudFormation stack that will create all the resources.
Then, after some time (let's say 2 hours), another job should kick off and delete all the resources created by CloudFormation.
Is this possible in AWS? If yes, any hints on how to do this?
Just to confirm, what you intend to do is have an EC2 instance get created on a schedule, and then have it shut down after 2 hours. The common way of accomplishing that is to use an Auto-Scaling Group (ASG) with a ScheduledAction to scale up and a ScheduledAction to scale down.
ASGs have a "desired capacity" (the number of instances in the ASG). You would want this to be "0" by default, change it to "1" at your desired time, and change it back to "0" two hours after that. What that will do is automatically start and subsequently terminate your EC2 instance on your schedule.
They also use a LaunchConfiguration, which is a template for your EC2 instances that will start on the schedule.
MyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    LaunchConfigurationName: !Ref MyLaunchConfiguration
    MaxSize: 1
    MinSize: 0
    DesiredCapacity: 0

ScheduledActionUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 1
    Recurrence: "0 7 * * *"

ScheduledActionDown:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 0
    Recurrence: "0 9 * * *"

MyLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-xxxxxxxxx # <-- Specify the AMI ID that you want
    InstanceType: t2.micro # <-- Change the instance size if you want
    KeyName: my-key # <-- Change to the name of an EC2 SSH key that you've added
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y aws-cfn-bootstrap
        # ...
        # ... run some commands to set up the instance, if you need to
        # ...
  Metadata:
    AWS::CloudFormation::Init:
      config:
        files:
          "/etc/something/something.conf":
            mode: "000600"
            owner: root
            group: root
            content: !Sub |
              #
              # Add the content of a config file, if you need to
              #
Depending on what you want your instances to interact with, you might also need to add a Security Group and/or an IAM Instance Profile along with an IAM Role.
If you're using Jenkins to deploy the program that will run, you would add a step to bake an AMI, build and push a docker image, or take whatever other action you need to deploy your application to the place that it will be used by your instance.
I note that in your question you say that you want to delete all of the resources created by CloudFormation. Usually, when you deploy a stack like this, the stack remains deployed. The ASG will remain there until you decide to remove the stack, but it won't cost anything when you're not running EC2 instances. I think I understand your intent here, so the advice that I'm giving aligns with that.
You can use Lambda to execute events on a regular schedule.
Write a Lambda function that calls CloudFormation to create your stack of resources. You might even consider including a termination Lambda function in your CloudFormation stack and configuring it to run on a schedule (2 hours after the stack was created) to delete the stack that the termination Lambda function itself is part of (I have not tried this, but believe that it will work). Or you could trigger stack deletion from cron on the EC2 instance running your Java app, of course.
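A sketch of the scheduling piece in CloudFormation itself (CreateStackFunction is a hypothetical Lambda that calls cloudformation:CreateStack):

CreateStackSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: "cron(0 7 * * ? *)"   # every day at 07:00 UTC
    State: ENABLED
    Targets:
      - Arn: !GetAtt CreateStackFunction.Arn
        Id: CreateStackTarget
PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref CreateStackFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt CreateStackSchedule.Arn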
If all you want is an EC2 instance, it's probably easier to simply create the EC2 instance rather than a CloudFormation stack.
Something (e.g. an AWS Lambda function triggered by Amazon CloudWatch Events) calls the EC2 API to create the instance.
User Data is passed to the EC2 instance to install the desired software, OR use a custom AMI with all software pre-installed.
Have the instance terminate itself when it has finished processing -- this could be as simple as calling the operating system to shut down the machine, with the EC2 Shutdown Behavior set to Terminate; see the sketch after this list.
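Sticking with the Ansible flavour from the first question, a minimal sketch of that pattern (the AMI ID and key name are placeholders, and I'm assuming the amazon.aws.ec2_instance module with its instance_initiated_shutdown_behavior option):

- hosts: localhost
  tasks:
    - name: Launch a one-shot instance that terminates itself when done
      amazon.aws.ec2_instance:
        name: one-shot-job          # placeholder Name tag
        image_id: ami-xxxxxxxxx     # placeholder AMI with the software pre-installed
        instance_type: t2.micro
        key_name: my-key            # placeholder key pair
        region: us-east-1
        # When the OS powers itself off, EC2 terminates the instance
        instance_initiated_shutdown_behavior: terminate
        user_data: |
          #!/bin/bash
          # ... run the job here, then power off so EC2 terminates the instance
          shutdown -h now
        state: present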