So I have created a terraform script that does the following:
Spins up an EC2 instance
Copies over some files
Runs some remote commands to install packages from repos
Creates an Elasticsearch Service domain
Now I need to configure the EC2 instance with the endpoint I get back from the Elasticsearch domain, so the application uses the right endpoint (currently it has some default value).
How can I pass the endpoint value into the file and then copy it over to the EC2 instance? What would be the recommended approach?
Thanks in advance.
Terraform will derive the order automatically when you refer to an attribute of another resource. You can use the file provisioner to create files on the EC2 instance.
If the EC2 instances don't need to be created prior to the Elasticsearch domain, you can use the template provider to render the file based on the attributes of the Elasticsearch resource, then copy it up to the EC2 instances with the file provisioner.
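A minimal sketch of that pattern (Terraform 0.12-style syntax; the domain name, template file, SSH details, and destination path are assumptions, not values from the question):

```hcl
resource "aws_elasticsearch_domain" "search" {
  domain_name           = "app-search"   # placeholder name
  elasticsearch_version = "6.8"
}

# Render the application config from a template, injecting the ES endpoint.
data "template_file" "app_config" {
  template = file("${path.module}/app.conf.tpl")

  vars = {
    es_endpoint = aws_elasticsearch_domain.search.endpoint
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.small"
  key_name      = "my-key"                  # placeholder key pair

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")     # placeholder key path
  }

  # Because the rendered template refers to the domain's endpoint attribute,
  # Terraform creates the domain before provisioning this instance.
  provisioner "file" {
    content     = data.template_file.app_config.rendered
    destination = "/tmp/app.conf"
  }
}
```

The hypothetical app.conf.tpl would contain a line such as `endpoint = ${es_endpoint}`, so the application picks up the real endpoint instead of its default value.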
Related
I have a question.
So I am trying to automate my RDS connection string with Terraform, but the database identifier is randomly generated per region in each AWS account.
Is it possible to know the database identifier beforehand? And if so, is there a way I can automate it?
Below is my current script:
sudo socat TCP-LISTEN:5432,reuseaddr,fork TCP4:env0.**cvhjdfmcu7ed**.us-east-1.rds.amazonaws.com:5432
Currently I'm using the script below, with variables fed in by Terraform, in my userdata .tpl file:
sudo nohup socat TCP-LISTEN:${port},reuseaddr,fork TCP4:${name}.${connection}.${aws_region}.rds.amazonaws.com:${port}
Can someone suggest a way to automate the ${connection} variable, so that I can deploy it in any AWS account and region without having to worry about what the identifier will be?
You would use the endpoint attribute available in either the aws_db_instance or aws_rds_cluster Terraform resource to access the hostname:port RDS endpoint in Terraform.
If you are not creating/managing the RDS instance in the same Terraform configuration where you need access to the endpoint address, then you would use the appropriate Terraform data source instead of a Terraform resource, which would look up the RDS information and make the endpoint value available within the rest of your Terraform template.
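As a rough sketch (the identifier, engine, and file names here are assumptions), the endpoint attribute can be fed straight into the userdata template so the random part of the hostname never has to be known in advance:

```hcl
# RDS instance managed in this configuration:
resource "aws_db_instance" "env0" {
  identifier          = "env0"        # placeholder
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = "change-me"   # placeholder
  skip_final_snapshot = true
}

# If the database is managed elsewhere, look it up instead:
# data "aws_db_instance" "env0" {
#   db_instance_identifier = "env0"
# }

data "template_file" "userdata" {
  template = file("${path.module}/userdata.tpl")

  vars = {
    port     = 5432
    # endpoint is "hostname:port", e.g. env0.xxxx.us-east-1.rds.amazonaws.com:5432
    endpoint = aws_db_instance.env0.endpoint
  }
}
```

The userdata.tpl could then run `socat TCP-LISTEN:${port},reuseaddr,fork TCP4:${endpoint}`, so the ${connection} piece never appears anywhere.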
I have a requirement to create multiple instances of the same EC2 image from Lambda, as the EC2 image does some Windows processing that creates PDF files. Can I launch multiple instances of the same EC2 image and pass some parameters to each EC2 instance (say the name of an S3 bucket, where the names are different)?
Thanks in advance.
An AWS EC2 image (AMI) is essentially a snapshot of how the server should look.
This would include:
Any packages you need installed
Any configuration you need
If you want custom configuration applied on top, you would need to either:
Make use of UserData when you launch the instance to run those additional actions, as sketched below
Create a custom AMI with the custom configuration included
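The question launches the instances from Lambda (via the SDK's RunInstances call), but purely to illustrate the UserData option, here is a Terraform-style sketch in the same vein as the rest of this thread; the AMI ID, bucket names, and PowerShell snippet are assumptions:

```hcl
variable "buckets" {
  type    = list(string)
  default = ["pdf-input-a", "pdf-input-b"]   # hypothetical bucket names
}

resource "aws_instance" "pdf_worker" {
  count         = length(var.buckets)
  ami           = "ami-0123456789abcdef0"    # placeholder: the shared Windows image
  instance_type = "t3.medium"

  # Each copy of the same image gets a different bucket name via UserData;
  # a script baked into the image can read PDF_BUCKET at first boot.
  user_data = <<-EOF
    <powershell>
    [Environment]::SetEnvironmentVariable("PDF_BUCKET", "${var.buckets[count.index]}", "Machine")
    </powershell>
  EOF
}
```

From Lambda, the equivalent is simply to pass a different UserData string in each RunInstances call.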
Problem:
I have an EC2 instance running and I have made some modifications to the instance: installed Docker, set up directories for certs, etc. Now I want to create the same instance, but using infrastructure-as-code principles. Instead of remembering all the additions I have made and creating a template by hand, I am trying to find a way to export my current EC2 instance into a JSON or YAML format so that I can terminate this instance and create another one equivalent to the one running.
I have tried:
aws ec2 describe-instances
Reading through the AWS CLI EC2 docs
Reading through the CloudFormation docs
Searched Google
Searched SO
Since you have no knowledge of how the instance was set up, the only choice is to create an Amazon Machine Image (AMI). This will create an exact copy of the disk, so everything you have installed will be available to any new instances launched from the AMI. The CloudFormation template can then be configured to launch instances using this AMI.
If, on the other hand, you knew all the commands that needed to be run to configure the instance, then you could provide a User Data script that would run when new instances first boot. This would configure the instances automatically and is the recommended way to configure instances because it is easy to modify and allows instances to launch with the latest version of the Operating System.
Such a script can be provided as part of a CloudFormation template.
See: Running commands on your Linux instance at launch - Amazon EC2
One option would be to create an AMI from the live instance and spin up a new CloudFormation stack using the AMI.
The other would be importing the resource: https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
There is a tool (still in beta) developed by AWS called CloudFormer:
CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket.
CloudFormer is an AWS-managed template. Once you launch it, the template will create an AWS::EC2::Instance for you along with a number of other related resources. You access the instance via its URL through a browser, and an AWS wizard will guide you from there.
Its tutorial even shows how to create a CloudFormation template from an existing EC2 instance.
Import the EC2 instance into CloudFormation, then copy its template.
Read more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
I'm 100% new to AWS and I'm working on deploying my personal site. I've spun up an EB environment via the AWS EB CLI, but I would also like to be able to SSH into the EC2 instance that gets created. However, I can't locate the private key (.pem) file associated with it, which I need to chmod to permit SSH'ing in.
Does a private key file get created when you create an EC2 instance via Elastic Beanstalk? If so where can I find it? Thanks a ton.
This is a valuable question for AWS beginners.
I was also confused by this, but it became clear after a while.
I see you used the EB CLI for handling your EB environment.
With the EB CLI you don't need the .pem file for normal use, because the EB CLI has 'eb ssh' for connecting to the EC2 instances of your EB environment.
Please check out: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html
Also, you can't get the standard .pem file of your EB environment directly; there are some steps involved.
Please check out: SSH to Elastic Beanstalk instance
Elastic Beanstalk still provisions EC2 instances, and an SSH key can be assigned to them.
You have two options if you didn't attach a key to an instance at provision time or have since lost it.
1. Provision a new instance with a key attached to it.
2. Snapshot the instance, then provision a new instance with a key attached that references the snapshot ID of the old instance.
Option #1 should be easier with Elastic Beanstalk: just provision a new environment with keys attached to the instances, though you will lose data with this method.
More in-depth steps for #2 can be found here. That approach will help you retain data if need be.
eb ssh only works if you have the keys and have attached them to the instance. Private key files must be located in a folder named .ssh under your user directory.
eb init will ask if you want to SSH into your instance, then list the keys in your account in that region. If a new key was created, it should have output where the key was placed on your filesystem.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-init.html
eb create has a -k keyname option as well.
If you include this option with the eb create command, the value you provide overwrites any key name that you might have specified with eb init.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-create.html
Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/http folder.
In the CodeIgniter folder structure I have a folder named 'UPLOAD'. This folder is used for uploaded images and files.
I made an AMI image from the EC2 instance.
I have set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically. But all my data from the "UPLOAD" folder on the old EC2 instance is lost.
I want to separate "UPLOAD" folder in codeignitor from ec2 instance.
So whenever new instance is create it will get UPLOAD folder and its contents without loss.
I want to separate this upload folder. so when new instance is create then it will get this data.
how to do this.
Thanks in advance.
Note: I have used MySQL on Amazon RDS.
You can use a shared Elastic Block Storage mounted directory.
If you manually configure your stack using the AWS Console, go to the EC2 Service in the console, then go to Elastic Block Storage -> Volumes -> Create Volume. And in your launch configuration you can bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using Cloudformation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to state where its UPLOAD directory is.
As said by Mike, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means that you must change your upload code to store images in S3 via the AWS API rather than on the local filesystem.