chef-solo explained - amazon-web-services

Can somebody help me understand chef-solo? I still don't understand whether I have to run chef-solo on my own machine to provision a new machine, or whether I first need to provision a machine and then install chef-solo on that new machine. I need to understand the end-to-end flow. Please help me understand it better.

There is a detailed explanation of how to use Chef Solo in an AWS environment in Integrating AWS CloudFormation With Opscode Chef.pdf
Chef Solo can be used to deploy Chef cookbooks and roles without a dependency on a Chef Server. Chef Solo can be installed via a Ruby Gem package; however, it requires a number of other dependent packages to be installed. By using resource metadata and the AWS CloudFormation helpers, you can deploy Chef Solo on a base AMI via Cloud-init.
You can either use the CloudFormation template that is provided in the PDF above, or you can create the files and run the scripts that are embedded in that template yourself.

You can also use chef-solo with Vagrant to test how a target Linux distro will behave on your local machine; however, your question is about an end-to-end flow with AWS, so here we go.
End to end, you need at least the following dependencies on the target machine:
Ruby, the chef gem, SSH, and git (or some other way of getting your code onto the VM).
You SSH into the machine, fetch the recipes you want to use on the target machine, and run chef-solo with parameters that specify, at a minimum, some attributes, the location of your cookbooks, and a run list containing the recipes or roles to apply to the target machine. Below is an example for getting the apt and mongo recipes (https://github.com/opscode-cookbooks/apt and https://github.com/edelight/chef-mongodb) ... I cloned those into the /opt/devops location on the target machine.
chef-solo -c solo.rb -j node.json
solo.rb contents
file_cache_path "/opt/devops/log"
cookbook_path "/opt/devops/cookbooks"
node.json contents
{
  "node": {
    "vm_ip": [ "192.168.33.10" ],
    "myProject": {
      "git_revision": "bzrDevel"
    }
  },
  "run_list": [ "recipe[apt]", "recipe[mongodb]" ]
}
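A rough end-to-end sketch of that flow on a fresh instance might look like the following (the SSH user, the instance IP placeholder and the apt-based package installation are illustrative assumptions; adjust for your distro):
ssh ubuntu@<new-instance-ip>                     # 1. connect to the freshly provisioned machine
sudo apt-get update && sudo apt-get install -y ruby ruby-dev build-essential git
sudo gem install chef                            # 2. install chef-solo via the chef gem
sudo mkdir -p /opt/devops/cookbooks /opt/devops/log
cd /opt/devops/cookbooks                         # 3. fetch the cookbooks referenced in the run list
sudo git clone https://github.com/opscode-cookbooks/apt.git apt
sudo git clone https://github.com/edelight/chef-mongodb.git mongodb
cd /opt/devops                                   # 4. create solo.rb and node.json as above, then converge
sudo chef-solo -c solo.rb -j node.json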

Related

MQ Custom Docker Image - MQM Group Not Found

Description: I am getting the following error when running a docker build. I thought the mqm group would be created automatically by default; the site linked below doesn't mention otherwise. Can someone else try this?
System notes: VS Code Docker build, on a Windows machine.
Error:
useradd: group 'mqm' does not exist
Reference site for instructions:
IBM MQ Custom Docker Image Instructions
Docker File:
FROM ibmcom/mq
USER root
RUN useradd alice -G mqm && \
    echo alice:passw0rd | chpasswd
USER mqm
COPY 20-config.mqsc /etc/mqm/
Duplicate of ibmcom/mq docker image backward compatibility issue
From 9.1.5 the container does not use OS-based users or groups, in order to conform to cloud best practices. Instead a file-based system is used, so that when you roll the container out into production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/.
For development, if you are not going to create new users, you can use the 9.1.5 container. If you want to create new users, you can use 9.1.4 or earlier, or use htpasswd with bcrypt to create the users.
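For example, creating a user with a bcrypt hash could look roughly like this (a sketch; it assumes the htpasswd utility from apache2-utils/httpd-tools, and the exact target file name inside /etc/mqm/ should be taken from the mq-container documentation):
# -B = use bcrypt, -b = take the password from the command line, -c = create the file
htpasswd -b -B -c mq.htpasswd alice passw0rd
# then COPY the resulting file into /etc/mqm/ in your image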
I was apparently using a deprecated site that's linked in the Docker repo. I guess it's a problem with Docker and they can't remove it. Please follow the instructions here instead; I had no issues.
https://github.com/ibm-messaging/mq-container

Commit .elasticbeanstalk/config.yml in Elastic Beanstalk

Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
We want to deploy using our CI and so we can not use the interactive eb init.
What we are thinking now is to define our dev, uat and prod inside that config.yml (if possible) and to point to that environment using eb deploy.
We saw that we could pass all the necessary parameters to eb init in ebcli version 2, but this no longer seems possible in version 3. So has the approach changed?
Can someone explain how to deploy EB for multiple environments, without interaction?
We want to deploy using our CI and so we can not use the interactive eb init
You can suppress the interactive mode as follows:
eb init --platform <platform-name> --region <region-name> <application-name>
Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
Can someone explain how to deploy EB for multiple environments, without interaction?
By design, the EBCLI avoids committing the .elasticbeanstalk/ directory, since it can contain developer-specific information which can cause confusion when committed to version control, so it's usually best kept out of it. That said, you are free to commit it to version control; just ensure there's no sensitive information in it. Logs and saved configurations are usually stored in .elasticbeanstalk/.
1. You can copy pertinent portions of the .elasticbeanstalk/config.yml file into a root-level file from which CI could read information such as the environment name to use.
2. Locally, you could create a pre-commit Git hook that reads the default environment name from the .elasticbeanstalk/config.yml file into that root-level file -- let's call it .environment_config.sh. It could be a statement as simple as export BEANSTALK_ENVIRONMENT_NAME=<environment name from .elasticbeanstalk/config.yml>
3. On the CI server:
3.1. Ensure the working directory is a Git checkout. Systems such as Jenkins usually check out the necessary branch already, so CI can simply source .environment_config.sh at this point and load the name of the environment to deploy.
3.2. eb init --platform <platform-name> --region <region-name> <application-name>
3.3. eb use $BEANSTALK_ENVIRONMENT_NAME
3.4. eb deploy
(You could combine 3.3. and 3.4. by performing eb deploy $BEANSTALK_ENVIRONMENT_NAME instead; I just wanted to demonstrate the use of eb use)
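Put together, the CI-side steps could be a small script along these lines (a sketch only; platform, region and application name are placeholders exactly as in the command above):
# load the environment name written by the pre-commit hook (step 2 above)
source .environment_config.sh
eb init --platform <platform-name> --region <region-name> <application-name>
eb use "$BEANSTALK_ENVIRONMENT_NAME"
eb deploy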
The EB CLI is really meant to be used from a workstation. I think you'd be better off scripting your CI with the AWS CLI.
A deployment with eb deploy will archive your code in S3 (or CodeCommit), create a new application version then update the environment with the new version label. All of those operations are supported with AWS CLI commands.
Or, you could write your own deployment script in Python with boto3. That's an easy option too. That's basically what the EB CLI is.
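For example, the equivalent AWS CLI calls look roughly like this (bucket, application, environment and version names are placeholders):
# upload the build artifact
aws s3 cp app.zip s3://my-deploy-bucket/app-v42.zip
# register it as a new application version
aws elasticbeanstalk create-application-version \
    --application-name my-application \
    --version-label v42 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=app-v42.zip
# point the environment at the new version
aws elasticbeanstalk update-environment \
    --environment-name my-application-prod \
    --version-label v42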

Jenkins pushes app to wrong target

We continuously build our apps with Jenkins and deploy them to our different spaces:
...
cf login -a https://api.lyra-836.appcloud.swisscom.com -u ...
cf target -s development
cf push scs-flux-monitoring-development
...
Now we have noticed that the push sometimes picks the wrong space to install the app into. We think this is because another Jenkins job is doing a parallel push: as far as we can see, .cf/config.json stores the name of the space, and when another cf target is called, all pushes use that new target.
Has anyone else seen this behaviour? Any suggestions on how to solve it?
Kind regards
Josef
There are a couple options you could go with:
Don't use a CI solution that allows shared state between different jobs. Just as Cloud Foundry uses containers to isolate apps, there are CI solutions out there that use containers to isolate builds. One great example is Concourse CI which is actually the main solution used by the core Cloud Foundry development teams.
Have every Jenkins job use a different location for CF_HOME so they don't all share ~jenkins/.cf:
$ cf help | grep CF_HOME
CF_HOME=path/to/dir/ Override path to default config directory
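For example, a Jenkins build step could give each job its own config directory (a sketch; the credentials variables and org name are placeholders, while the endpoint, space and app name are the ones from the question):
export CF_HOME="$WORKSPACE/.cf"               # per-job config instead of the shared ~jenkins/.cf
mkdir -p "$CF_HOME"
cf login -a https://api.lyra-836.appcloud.swisscom.com -u "$CF_USER" -p "$CF_PASSWORD" -o "$CF_ORG"
cf target -s development
cf push scs-flux-monitoring-development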

Best way to deploy play2 app using Amazon Beanstalk

I found fragmented instructions here and in some other places about deploying a Play 2 app on Amazon EC2, but I did not find any neat way to deploy it using Beanstalk.
Play is a nice framework and AWS Elastic Beanstalk is one of the most popular services, so why are there no official instructions for doing this?
Has anyone found any better solution?
Deploying a Play2 app on elastic beanstalk is now easy with Docker Containers in combination with sbt's experimental docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works:
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an elastic beanstalk Dockerrun.aws.json file with the following contents:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to aws beanstalk console and create a new application using the m3.medium or any instance type with enough memory for the jvm to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
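For reference, the local preparation steps above boil down to roughly this (using the zip name from the example):
$ sbt docker:stage
$ cd target/docker
# create Dockerrun.aws.json as shown above, then bundle everything for upload
$ zip -r play2-test-docker.zip .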
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a file named Procfile with the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file into Elastic Beanstalk and it will automatically deploy the app. You can also put whatever .ebextensions folders and configuration files you require for Elastic Beanstalk configuration into the dist folder. E.g. I have dist/.ebextensions/nginx/conf.d/proxy.conf for NGINX reverse proxy settings and dist/.ebextensions/env.config for environment variables.
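As a concrete sketch (the app name and version in the zip file name are placeholders; this relies on the dist folder being bundled into the zip that activator dist produces, as this answer assumes):
$ mkdir -p dist/.ebextensions
$ cat dist/Procfile
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
$ activator dist
$ ls target/universal/
your-app-1.0-SNAPSHOT.zip        # upload this zip in the Elastic Beanstalk console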
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I could deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}
// [...]
dockerCommands := Seq(
  Cmd("FROM", "java:openjdk-8-jre"),
  Cmd("MAINTAINER", "myname"),
  Cmd("EXPOSE", "9000"),
  Cmd("ADD", "stage /"),
  Cmd("WORKDIR", "/opt/docker"),
  Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"),
  Cmd("RUN", "[\"chmod\", \"+x\", \"bin/myapp\"]"),
  Cmd("USER", "daemon"),
  Cmd("ENTRYPOINT", "[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
  ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
In the same target/docker directory, I tested the result, built, checked and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file contained the Dockerfile, Dockerrun.aws.json and the stage/* files.
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.* apps on AWS EC2 is quite painful until you find a better way to do it, and Ansible promises a great solution for that. It still requires some work to set up Ansible and write its playbook, but that effort should be worth it.
I found these reads only recently and have yet to apply them in my project. I hope they will help you learn more:
Ansible + play + aws ec2
Read it to learn more about using Ansible to deploy Play on AWS
Thanks!
Hope this helps you kick-start your deployment. Please do share any knowledge you gain during the procedure, or any simpler way to solve this complicated deployment problem.

Vagrant Rsync Error before provisioning

So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error is caused by the fact that ssh-ing in manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not be "faster" to run (to update /etc/sudoers.d/ for !requiretty) than Vagrant is trying to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
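If you want to script the snapshot step, the AWS CLI can create the custom AMI from the patched instance (the instance ID and image name below are placeholders):
$ aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "amazon-linux-requiretty-fix" \
    --description "Stock Amazon Linux with the !requiretty fix applied for vagrant-aws"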
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs, bootcmd is run "very early" in the startup cycle of an EC2 instance -- the idea being that bootcmd instructions would run earlier than Vagrant tries to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of the current Amazon Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).