Use user data in cloudformation for 2016 windows ami - amazon-web-services

I am trying to add user data to run a custom script when I create a Windows EC2 instance using CloudFormation (Windows Server 2016).
"UserData" : {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"<powershell> \n",
"C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeI‌​nstance.ps1 \n",
"C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\create_folder.ps1 \n",
"New-Item -Path c:\\test3 -ItemType directory",
"</powershell>"
]
]
}
},
The above script does not seem to be working.
Basically, I need to run a custom script (which I already added to my base image) plus some PowerShell commands.

By default, the UserData section is not executed in the 2016 Windows AMI.
You have to do the following steps manually:
Log in to the instance and open a PowerShell terminal.
Go to the directory C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts.
Run the
InitializeInstance.ps1 -Schedule command.
After the next start-up, the user-data section will be executed.
I believe you will then encounter another issue: you can't manually log in to the instance.
In that case, what you can do is create your own AMI by customizing the Windows 2016 AMI, adding the steps above.
Reference: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
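Per the AWS docs referenced above, EC2Launch also honors a persist tag inside the user data itself, which re-runs the script on every boot once initialization is scheduled. An untested sketch of the UserData with that tag added:

```json
"UserData" : {
  "Fn::Base64" : {
    "Fn::Join" : [ "", [
      "<powershell>\n",
      "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\create_folder.ps1\n",
      "New-Item -Path c:\\test3 -ItemType directory\n",
      "</powershell>\n",
      "<persist>true</persist>"
    ] ]
  }
}
```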

Related

startup scripts on Google cloud platform using Packer

I'm using HashiCorp's Packer to create machine images for Google Cloud (the equivalent of an AMI on Amazon). I want every instance to run a script once the instance is created in the cloud. As I understand from the Packer docs, I could use startup_script_file to do this. I got this working, but it seems that the script is only run once, at image creation, resulting in the same output on every running instance. How can I trigger this script only on instance creation, so that I can have different output for every instance?
packer config:
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "project-id",
    "source_image": "debian-9-stretch-v20200805",
    "ssh_username": "name",
    "zone": "europe-west4-a",
    "account_file": "secret-account-file.json",
    "startup_script_file": "link to file"
  }]
}
script:
#!/bin/bash
echo $((1 + RANDOM % 100)) > test.log #output of this remains the same on every created instance.

Limiting Code Deploy revisions with max_revisions value is not working

I am attempting to limit the number of successful CodeDeploy revisions that are preserved on the EC2 instances by editing the codedeployagent.yml file’s max_revisions value. I have currently set the value to :max_revisions: 2.
I believe that the issue I am having is due to the way I am setting the value. I am attempting to set it by deploying the file with the CodeDeploy package. To do this I have created a custom codedeployagent.yml file locally at the following location:
etc/codedeploy-agent/conf/codedeployagent.yml
In my appspec.yml file I am specifying the installation location of this file by the following lines:
- source: etc/codedeploy-agent/conf/codedeployagent.yml
destination: /etc/codedeploy-agent/conf
I have found that this errors out when I attempt to deploy because the file is already in place. To work around this, I have added a script, hooked on BeforeInstall in my appspec.yml, that removes the file before installing the package:
#!/bin/bash
sudo rm /etc/codedeploy-agent/conf/codedeployagent.yml
Okay, so after this I have ssh’d into the server and, sure enough, the :max_revisions: 2 value is set as expected. Unfortunately, in practice I am seeing many more than two revisions preserved on the EC2 instances.
So, to go back to the beginning of my question… Clearly this workaround is not the best way to update the codedeployagent.yml file. I should add that I am deploying to an Auto Scaling group, so this needs to be a solution that can live in the deployment scripts or CloudFormation templates rather than logging in and hardcoding the value. With all this info, what am I missing here? How can I properly limit the revisions? Thanks.
Have you restarted the agent after updating the config file? New configuration won't take effect until you restart the agent.
You may try one of the approaches below.
Take an AMI of an instance where you have already modified max_revisions to 2 and update the ASG's launch configuration with this AMI, so that scaled-out instances will also have this config.
Alternatively, add this config in your user-data section when creating the launch configuration.
Command to add in the user-data section:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"# Delete last line and add new line \n",
"sed '$ d' /etc/codedeploy-agent/conf/codedeployagent.yml > /etc/codedeploy-agent/conf/temp.yml\n",
"echo ':max_revisions: 2' >> /etc/codedeploy-agent/conf/temp.yml\n",
"rm -f /etc/codedeploy-agent/conf/codedeployagent.yml\n",
"mv /etc/codedeploy-agent/conf/temp.yml /etc/codedeploy-agent/conf/codedeployagent.yml\n",
"service codedeploy-agent restart\n"
]]}}
As per the reference, max_revisions applies to applications per deployment group. So it keeps only 2 revisions under /opt/codedeploy-agent/deployment-root/<deployment_group_id>/. If the ASG is associated with multiple applications, CodeDeploy will store 2 revisions of each application in its deployment_group_id directory.
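As a side note, the delete-last-line trick above only works if :max_revisions: happens to be the last line of the file. A sed in-place substitution is less fragile; a minimal sketch, demonstrated here on a scratch copy rather than the real /etc/codedeploy-agent/conf/codedeployagent.yml:

```shell
#!/bin/bash
# Demo on a scratch file; on a real instance you would target
# /etc/codedeploy-agent/conf/codedeployagent.yml instead.
conf=$(mktemp)
printf ':log_aws_wire: false\n:max_revisions: 5\n:verbose: false\n' > "$conf"

# Replace the :max_revisions: line wherever it appears in the file.
sed -i 's/^:max_revisions:.*/:max_revisions: 2/' "$conf"

cat "$conf"
```

Follow this with `service codedeploy-agent restart`, as in the UserData above, so the new value takes effect.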

How to start web server in aws cloudformation?

I have installed an Odoo web server using CloudFormation, but I need to start its services manually. How can I start my Odoo web server using CloudFormation?
I tried calling the script that starts the Odoo web server by passing the following command through UserData.
"UserData":{ "Fn::Base64" : "#!/bin/bash sudo /etc/init.d/odoo-server start "}
But received following error
/bin/bash: sudo /etc/init.d/odoo-server start : No such file or directory
It looks like the command you posted ends up on the same line as the #!/bin/bash shebang, so it is commented out by the #:
"UserData":{ "Fn::Base64" : "#!/bin/bash sudo /etc/init.d/odoo-server start "}
It would also help if you posted the full UserData section so we can see what commands are run before that.
What AMI are you using? What other resources are spun up in your template? The more info, the better.
How did you install Odoo using CloudFormation? Can you share the CloudFormation template? Without that information, it'll be difficult to help you, but I'll still try pointing you in the right direction.
You don't need sudo in a UserData script since that script is always executed with sudo behind the scenes.
Look at the contents of /var/log/cloud-init-output.log on the webserver. It'll contain the console output for your UserData script execution.
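Putting the shebang on its own line avoids the quoted error; an untested sketch of the corrected UserData (joining with "\n" so each command is on its own line):

```json
"UserData" : { "Fn::Base64" : { "Fn::Join" : [ "\n", [
  "#!/bin/bash",
  "/etc/init.d/odoo-server start"
] ] } }
```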

Packer post process AMI to virtualbox?

I have Packer configured to use the amazon-ebs builder to create a custom AMI from the Red Hat 6 image supplied by Red Hat. I'd really like Packer to post-process the custom AMI into a VirtualBox image for local testing. I've tried adding a simple post-processor to my Packer JSON as follows:
"post-processors": [
{
"type": "vagrant",
"keep_input_artifact": false
}
],
But all I end up with is a tiny .box file. When I add this to vagrant, it just seems to be a wrapper for my original AMI in Amazon:
$ vagrant box list
packer (aws, 0)
I was hoping to see something like this:
rhel66 (virtualbox, 0)
Can packer convert my AMI into a virtualbox image?
The post-processor in your example just gives you the Vagrant box for that image. That image was AWS, so no, it didn't change anything. To change it to VirtualBox you'd have to convert it.
Per the docs, have you tried:
{
  "type": "virtualbox",
  "only": ["virtualbox-iso"],
  "artifact_type": "vagrant.box",
  "metadata": {
    "provider": "virtualbox",
    "version": "0.0.1"
  }
}
The above is untested. AWS provides some docs on exporting here

How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)?

The Amazon Elastic Beanstalk blurb says:
Elastic Beanstalk lets you "open the hood" and retain full control ... even pass environment variables through the Elastic Beanstalk console.
http://aws.amazon.com/elasticbeanstalk/
How to pass other environment variables besides the one in the Elastic Beanstalk configuration?
As a heads up to anyone who uses the .ebextensions/*.config way: nowadays you can add, edit and remove environment variables in the Elastic Beanstalk web interface.
The variables are under Configuration → Software Configuration:
Creating the vars in .ebextensions like in Onema's answer still works.
It can even be preferable, e.g. if you will deploy to another environment later and are afraid of forgetting to manually set them, or if you are ok with committing the values to source control. I use a mix of both.
Only 5 values is limiting, or you may want a custom environment variable name. You can do this by using configuration files. Create a directory at the root of your project called
.ebextensions/
Then create a file called environment.config (this file can be called anything, but it must have the .config extension) and add the following values:
option_settings:
  - option_name: CUSTOM_ENV
    value: staging
After you deploy your application you will see this new value under
Environment Details -> Edit Configuration -> Container
for more information check the documentation here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-options
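Note that the `option_name` shorthand above maps to the `aws:elasticbeanstalk:application:environment` namespace; the same setting can also be written with the namespace spelled out (equivalent sketch, untested):

```yaml
option_settings:
  aws:elasticbeanstalk:application:environment:
    CUSTOM_ENV: staging
```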
Update
To prevent committing to your repository values like API keys, secrets and so on, you can put a placeholder value.
option_settings:
  - option_name: SOME_API_KEY
    value: placeholder-value-change-me
Later you can go to the AWS admin panel (Environment Details -> Edit Configuration -> Container) and update the values there. In my experience these values do not change after subsequent deployments.
Update 2
As #Benjamin stated in his comment, since the new look and feel was rolled out July 18, 2013 it is possible to define any number of environment variables directly from the console:
Configuration > Software Configuration > Environment Properties
In the 2016 Java 8 Tomcat 8 AMI, Elastic Beanstalk fails to set environment variables from the web configuration; it really sets JVM -D system properties instead.
--
"The following properties are passed into the application as environment variables. Learn more."
This statement is incorrect for the Java Tomcat AMI. Amazon does not set these as environment variables; they are set as system properties passed on the command line to Tomcat as -D JVM options.
The Java method for getting an environment variable is not the same as the one for getting a property:
System.getenv vs. System.getProperty
I ssh'd into the box and verified that the environment variable was never set. However, in the tomcat logs I can see the -D property is set.
I've changed my code to check for both locations now as a workaround.
AWS will interpret CloudFormation template strings in your environment variables. You can use this to access information about your EB environment inside your application:
In the AWS web interface, the following will be evaluated as the name of your environment (note the backticks):
`{ "Ref" : "AWSEBEnvironmentName" }`
Or, you can use an .ebextensions/*.config file and wrap the CloudFormation template in backticks (`):
{
  "option_settings": [
    {
      "namespace": "aws:elasticbeanstalk:application:environment",
      "option_name": "ENVIRONMENT_NAME",
      "value": "`{ \"Ref\" : \"AWSEBEnvironmentName\" }`"
    }
  ]
}
Alternatively, you could use the Elastic Beanstalk CLI to set environment variables.
To set an environment variable: eb setenv FOO=bar
To view the environment variables: eb printenv
Environment Details -> Edit Configuration -> Container
This seems to be the only way to set ENVs with dynamic values in beanstalk. I came up with a workaround that works for my multi-docker setup:
1) Add this to your Dockerfile before building + uploading to your ECS
repository:
CMD eval `cat /tmp/envs/env_file`; <base image CMD goes here>;
2) In your Dockerrun.aws.json file create a volume:
{
  "name": "env-file",
  "host": {
    "sourcePath": "/var/app/current/envs"
  }
}
3) Mount volume to your container
{
  "sourceVolume": "env-file",
  "containerPath": "/tmp/envs",
  "readOnly": true
}
4) In your .ebextensions/options.config file add a container_commands
block like so:
container_commands:
  01_create_mount:
    command: "mkdir -p envs/"
  02_create_env_file:
    command: { "Fn::Join" : [ "", [ 'echo "', "export ENVIRONMENT_NAME=", { "Ref" : "RESOURCE" }, ';" > envs/env_file;' ] ] }
5) eb deploy and your ENVs should be available in your Docker container
You can add more ENVs by adding more container_commands, like:
02_create_env_file_2:
  command: { "Fn::Join" : [ "", [ 'echo "', "export ENVIRONMENT_NAME_2=", { "Ref" : "RESOURCE2" }, ';" >> envs/env_file;' ] ] }
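The mechanics of steps 1 and 4 can be sandboxed locally; this sketch uses a hard-coded placeholder value where the deployed version would use the CloudFormation Ref:

```shell
#!/bin/bash
# Step 4 writes an export statement into the shared env file
# (placeholder value; the real deploy substitutes the Ref here).
mkdir -p envs
echo "export ENVIRONMENT_NAME=staging;" > envs/env_file

# Step 1's container CMD then evals the file before the real entrypoint runs,
# so the variable is visible to the application process.
eval "$(cat envs/env_file)"
echo "$ENVIRONMENT_NAME"   # prints: staging
```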
Hope this helps!