I don't really understand how to install something from the GCP Marketplace onto a Compute Engine VM that has already been created (Windows Server). For instance, I need to deploy Jenkins to practice CI, but when I choose that solution from the Marketplace it just deploys as a new entry below my VM in the list, like a separate process, whereas I need it on the exact machine I reach over RDP.
It is unlikely there is a good Marketplace based solution for your use case.
Depending on the type of solution you pick off the Marketplace, you'll get different behavior. Many of the solutions in the marketplace are self-contained -- they'll install the infrastructure they need to run, such as additional VMs. This is done via Deployment Manager. They won't install on VMs you already have provisioned. (This also lets the software and infrastructure be easily removed).
Others just provide a container which you can place on an already-running VM (for example, a Jenkins container package). These will require more work on your part to manage and keep updated, of course (and you would obviously need to find a container that works on your Windows machine if this is the route you want to go). I don't currently see an obvious candidate in the Marketplace for Jenkins.
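That said, if you do find or build a suitable image, running Jenkins as a container on an existing Linux VM is a one-liner. A hedged sketch using the community image from Docker Hub (not a Marketplace package; assumes Docker is already installed, and note this is a Linux container, so it won't run on a Windows Server VM):

    # Run the Jenkins LTS container, persisting its home in a named volume.
    $ docker run -d --name jenkins \
        -p 8080:8080 -p 50000:50000 \
        -v jenkins_home:/var/jenkins_home \
        jenkins/jenkins:lts

Jenkins would then be reachable on port 8080 of that VM, firewall rules permitting.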
A third type of Marketplace package is "click to deploy". These bring up a GKE cluster to run the containers on, but this likely isn't what you're looking for if you don't want additional VMs.
I am developing a cloud solution. I have no experience with it, so I want to ask professionals about best practices. The current question is mostly related to auto-scaling group functionality.
I've read a lot of how-tos and guides and came to the conclusion that the only ways to provision/configure instances in an ASG are:
to pre-bake an AMI;
to use the user_data field.
So, let's assume I have an auto-scaling group, and I want to configure the instances it launches, for example using chef-solo (or ansible-local, but as I understand it, Chef is the better option for AWS).
I see only two ways to do this:
Use Packer to pre-bake the image locally (using the chef-solo provisioner), then update the ASG launch configuration with the freshly created AMI;
Use a base Amazon AMI and configure instances at launch via a user_data script: install chef-solo, fetch cookbooks from Git, and run chef-solo on the machine.
Which is the better choice in your opinion, and why? I am also interested in how to update already-running instances in the ASG when my Chef cookbook configuration changes.
Also, if you know of better options, please share them. I am open to discussion.
It depends on your use case.
A pre-baked AMI may be quicker to launch when scaling up, but if you need to make even small changes to the code or configuration, you'll need to bake another AMI. Using user data (whether straight OS commands, Chef, or something else) may take longer if you're installing application servers and deploying applications, and you may also be introducing external dependencies into scaling: what if the GitHub repository is offline, or a necessary download is blocked?
So, if speed of scale-up is important, consider a pre-baked AMI. If you can tolerate a reasonable scale-up hit, look at a hybrid approach:
Bake into your AMI the Chef DK and any other large objects you need. For example, you might bake your application server installation into the AMI and then just have Chef configure it through user data (a sketch follows this list).
Make sure your dependencies, scripts and deployables such as WAR files are in reliable repositories such as S3.
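As a minimal sketch of the user-data half of this hybrid approach (assuming Chef and the AWS CLI are already baked into the AMI; the bucket name, cookbook archive, and run list are hypothetical):

    #!/bin/bash
    # user_data: fetch cookbooks from a reliable store (S3) and converge
    # with chef-solo; the heavy installs were already baked into the AMI.
    set -euo pipefail

    aws s3 cp s3://my-artifacts/cookbooks.tar.gz /tmp/cookbooks.tar.gz
    mkdir -p /var/chef/cookbooks
    tar -xzf /tmp/cookbooks.tar.gz -C /var/chef/cookbooks

    echo "cookbook_path '/var/chef/cookbooks'" > /tmp/solo.rb
    chef-solo -c /tmp/solo.rb -o 'recipe[myapp::default]'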
The best advice is to try both approaches to get some metrics and see how these fit your use cases.
I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.
The k8s documentation points towards kops for Debian, Ubuntu, CentOS, and RHEL or kube-aws for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.
But wait, there's more.
bootkube seems to be the next iteration of the CoreOS deployment technology and is on the roadmap for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?
Heptio recently announced a Quickstart architecture for deploying k8s in AWS. This is the newest, and so probably the least mature, approach, but it does seem to have gained traction within AWS.
Lastly, there's kubeadm, and I'm not really sure where it fits into all of this.
There are probably more approaches that I'm missing too.
Given the number of options with overlapping intent it's very difficult to choose a path forward. I'm not interested in a proof-of-concept. I want to be able to deploy a secure, highly-available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.
What did you choose and how did you decide?
I'd say pick whatever fits your needs (see also Picking the right solution)...
Those needs could be:
Speed of the cluster setup
Integration in your existing toolchain
e.g. kops integrates with Terraform, which might be a good fit for some people (see the sketch after this list)
Experience within your team/company/...
e.g. how comfortable are you with the related Linux distribution
Required maturity of the tool itself
some tools are very alpha; are you willing to play the role of an early adopter?
Ability to upgrade between Kubernetes versions
kubeadm has this on its agenda; some others prefer to throw away clusters instead of upgrading
Required integration into external tools (monitoring, logging, auth, ...)
Supported cloud providers
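To make one of these concrete: a hedged sketch of the kops/Terraform integration mentioned above (the cluster name, state bucket, and zones are placeholders):

    # Emit Terraform config for the cluster instead of creating it directly.
    $ kops create cluster \
        --name=k8s.example.com \
        --state=s3://my-kops-state \
        --zones=us-east-1a,us-east-1b \
        --target=terraform \
        --out=./k8s-terraform

    # Review and apply with your existing Terraform toolchain.
    $ cd k8s-terraform && terraform init && terraform apply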
With your specific requirements I'd pick the Heptio or kubeadm approach.
Heptio if you can live with the given constraints (e.g. predefined OS)
kubeadm if you need more flexibility; everything done with kubeadm can be transferred to other cloud providers
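As a rough sketch of what the kubeadm flow looks like (assuming hosts that already have kubeadm, kubelet, and a container runtime installed; addresses and tokens are placeholders):

    # On the control-plane node: initialize the cluster.
    $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Set up kubectl access for your user, as kubeadm's output suggests.
    $ mkdir -p $HOME/.kube
    $ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # On each worker node: join with the token printed by kubeadm init.
    $ sudo kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>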
Other options for AWS lower on my list:
Kubernetes the Hard Way - working through this might be the only true way to set up a production cluster, as it's the only way to fully understand each moving part of the system. It's lower on the list because the result from any of the tools is often more than enough, even for production.
kube-up.sh - deprecated by the community, so I wouldn't use it for new projects
kops - my team had some strange experiences with it which seemed due to our (custom) needs back then (an existing VPC); that's why it's lower on my list - it would be #1 for an environment where Terraform is used too
bootkube - lower on my list because of its limitation to CoreOS
Rancher - an interesting toolchain, but it seems to be too much for a single cluster
Off-topic: if you don't have to run on AWS, I'd also always consider running on GCE for production workloads, as it is a well-managed platform rather than something you have to build yourself.
We have multiple Amazon EC2 instances behind a load balancer. Our build script is written in Phing and is integrated with Git.
We are looking for a tool (like Jenkins or AWS CodeDeploy) which could display all the active instances currently behind the load balancer, allow us to select some of them (or a previously defined group), and then trigger either of the following (whichever is better):
a build script hosted on the same dedicated server where the tool is hosted,
or the respective build scripts hosted on the selected EC2 instances.
We should be able to do the following:
optionally specify a Git branch name when we trigger the build script for any group of instances;
roll out in batches of boxes, so as to get some time to monitor load, and then move to the next batch if all is good. The best way, I guess, would be to specify a batch size (e.g. 10), so that the process waits for a user prompt after each batch completes.
So, if we have to roll out two different Git branches to two groups of instances, we should be able to run them in two steps (if we do not specify a batch size).
I would like to hear about the experiences of people who have dealt with something similar.
CodeDeploy supports Git (more precisely, GitHub). It also allows you to deploy only to tagged EC2 instances. Combined with a custom deployment configuration (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-deployment-configuration.html), you can also control how fast to deploy (the size of each batch).
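As a sketch, such a deployment configuration can be created with the AWS CLI (the name and threshold here are examples):

    # CodeDeploy will keep at least 75% of instances healthy at any point
    # during a deployment, which effectively bounds the batch size.
    $ aws deploy create-deployment-config \
        --deployment-config-name KeepThreeQuartersHealthy \
        --minimum-healthy-hosts type=FLEET_PERCENT,value=75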
I would re-structure the question:
the choices you have for application deployment;
and whether the tool has an option to perform rolling deployments.
Jenkins is CI/CD software; to perform deployments it has to use plugins, custom scripting, or an existing orchestration software setup.
For software orchestration you have many choices; some of the better-known tools are Chef, Puppet, Ansible, etc. All of these require you to manage some kind of centralized setup, and all of them support application deployment.
You need to make a decision on whether you would want to invest in maintaining such a setup.
If you decide against such a setup, you have the option of using managed services such as AWS OpsWorks, AWS CodeDeploy, hosted Chef, etc.
In choosing any of these services, you delegate the management of orchestration software to a vendor, which will ensure the service is up all the time.
AWS CodeDeploy and AWS OpsWorks are managed services and work pretty well on AWS setups.
AWS OpsWorks uses Chef under the hood.
AWS CodeDeploy provides only a subset of what OpsWorks provides and is responsible solely for deployments. With CodeDeploy you get a convenient visualization of your software deployments through the AWS console.
With CodeDeploy, you can achieve your goal of a partial rollout to EC2 instances.
You can do the same with other tools as well, but in an AWS environment CodeDeploy will take the least amount of work.
CodeDeploy also allows you to deploy from Git (via GitHub). Please refer to the following AWS documentation:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html
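For orientation: a CodeDeploy revision is driven by an appspec.yml file at its root, which tells the agent what to copy and which lifecycle hooks to run. A minimal hedged sketch (the paths and script names are hypothetical):

    # appspec.yml
    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/myapp
    hooks:
      AfterInstall:
        - location: scripts/build.sh   # e.g. invoke your Phing build here
          timeout: 300
          runas: root
      ApplicationStart:
        - location: scripts/start.sh
          timeout: 300
          runas: root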
The pitfall with CodeDeploy is that the agent that runs on the instances has been tested on, and is supported for, only a limited number of operating systems (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-supported-oses).
Also, if you decide to move away from AWS in the future, you will have to redo the deployment-related work.
The CodeDeploy service itself is free; you are only charged for the underlying AWS resources.
Please find the link to pricing documentation below:
https://aws.amazon.com/codedeploy/pricing/
I've been tasked with updating some BOSH scripts/jobs/what have you, and developing them is costing me a heck of a lot of time.
I was finally keyed into using BOSH Lite, but I only really see how to deploy Cloud Foundry to the BOSH Lite environment.
However, I'm a bit lost as to what I need to put into my BOSH Lite release/manifest.
Can someone describe their workflow with BOSH Lite, and what types of information I need to put in the release manifest to deploy my release and test out my jobs and errands in BOSH Lite? I have been having a difficult time finding good resources in this area, and on BOSH in general.
The high level workflow is:
on your workstation, you have a repo for your BOSH release
you have a BOSH director somewhere
you work on your release, build it, and upload it to the director
you create/modify your deployment manifest that references the uploaded release
you run bosh deploy with your manifest so that the Director can create "VMs" in a "Cloud" and put the bits of software in your release on those VMs (and run the software) in the topology described in your manifest
The three main things you need to tell a Director about are the stemcell(s), the release(s), and the deployment manifest. By now you have some idea what a release is: it's basically all the software that gets run.
The stemcell is the base OS image that will be common to all your deployed VMs (you can have different stemcells within a deployment, but the most common thing is to have them all the same); this is a special image that has some things pre-baked into it to facilitate working with BOSH. Primarily, it has a BOSH agent; this is how the Director communicates with the VMs to tell them "download this package", "download this job", "start this process", and so on.
The deployment manifest is a YAML file where you specify several things (a minimal sketch follows this list):
The name of your deployment.
A list of the releases, along with specific versions, that you will be deploying as part of this deployment.
A description of the networks that you want to associate with the deployed VMs. If you're using an IaaS like AWS for example, you might be deploying into a VPC, and here is where you would specify some of your Subnet IDs.
A description of jobs, basically a list of several homogeneous clusters to be deployed, along with how many instances of VMs/nodes you want for each cluster. Say your release consists of a frontend service, a backend service, and a database service. Then you may want to deploy a frontend cluster which just runs the frontend job, and have there be 5 instances of that. And you may want 10 instances of the backend cluster, and probably just 1 instance of the database. Each job in the manifest can reference multiple jobs from multiple releases (yes, it's an unfortunate historical accident that these two things are named the same thing).
Configuration properties, e.g. your jobs might need a bunch of parameters and credentials configured, and any properties that need to be shared globally can be put in the properties section.
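A hedged sketch of such a manifest, showing only the sections described above (names, versions, network ranges, and properties are placeholders, and a real manifest also needs blocks such as resource_pools, compilation, and update):

    ---
    name: my-deployment

    releases:
      - name: my-release
        version: latest

    networks:
      - name: default
        type: manual
        subnets:
          - range: 10.244.0.0/24
            gateway: 10.244.0.1

    jobs:
      - name: frontend
        instances: 5
        templates:
          - { name: frontend-service, release: my-release }
        networks:
          - name: default

    properties:
      frontend:
        backend_password: some-secret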
BOSH-Lite is a Vagrant VM which is essentially running two things you care about:
The BOSH Director
Garden, a Linux container manager (if you've heard of Docker, Garden is similar but has been around longer and is better suited for production use cases). Garden acts as "the cloud" here: when the Director needs to create a VM, it delegates to its Cloud Provider Interface, which in turn just asks Garden to create a container.
The advantage of BOSH-Lite is that it's much cheaper and faster to launch a container within a VM on your laptop than it is to launch a real VM in AWS, vSphere, OpenStack, or another real datacenter.
First-time workflow (after starting and targeting BOSH-Lite):
$ git clone YOUR_RELEASE_REPO
$ cd YOUR_RELEASE_REPO
$ bosh create release && bosh upload release
$ # create manifest, call it manifest.yml
$ bosh -d manifest.yml deploy
Iterating:
$ # modify the code in your repo
$ bosh create release --force && bosh upload release
$ # modify your manifest if necessary
$ bosh -d manifest.yml deploy
Creating the manifest from scratch can be hard if you're not familiar with BOSH manifests. One thing you may want to consider doing is following the instructions you've found for creating the BOSH-Lite manifest for Cloud Foundry, then modifying that to suit your project.
Here is the full documentation on the schema of a deployment manifest: https://bosh.io/docs/deployment-manifest.html.
If you generate a manifest and have trouble with it, you can turn to GitHub issues or the mailing list, which may be better suited for back-and-forth help in getting your manifest working.
I'm using AWS CloudFormation to set up numerous elements of network infrastructure (VPCs, security groups, subnets, auto-scaling groups, etc.) for my web application. I want the whole process to be automated: I want to click a button and be able to fire up the whole thing.
I have successfully created a CloudFormation template that sets up all this network infrastructure. However, the EC2 instances are currently launched without any of the needed software on them. Now I'm trying to figure out how best to get that software onto them.
To do this, I'm creating AMIs using Packer.io. But some people have instead urged me to use Cloud-Init. What heuristic should I use to decide what to bake into the AMIs and/or what to configure via Cloud-Init?
For example, I want to preconfigure an EC2 instance to allow me (saqib) to log in without a password from my own laptop. Thus the EC2 instance must have a user, that user must have a home directory, and in that home directory must live a .ssh/authorized_keys file containing my public key. Should I bake these directories into the AMI, or should I use cloud-init to set them up? And how should I decide in this and other similar cases?
I like to separate out machine provisioning from environment provisioning.
In general, I use the following as a guide:
Build Phase
Build a Base Machine Image with something like Packer, including all software required to run your application. Create an AMI out of this.
Install the application(s) onto the Base Machine Image, creating an Application Image. Tag and version this artifact. Do not embed environment-specific details here, such as database connections, as this precludes you from easily reusing the AMI across different environment runtimes.
Ensure all services are stopped
Release Phase
Spin up an environment consisting of the images and infra required, using something like CFN.
Use Cloud-Init user-data to configure the application environment (database connections, log forwarders etc.) and then start the applications/services
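A sketch of that Cloud-Init step as a cloud-config passed in user data (the file paths, property values, and service name are hypothetical):

    #cloud-config
    # First boot: write environment-specific config, then start the
    # pre-installed (but stopped) application service.
    write_files:
      - path: /etc/myapp/env.conf
        permissions: '0640'
        content: |
          DB_HOST=db.prod.internal
          LOG_FORWARDER=logs.prod.internal:514

    runcmd:
      - systemctl start myapp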
This approach gives the greatest flexibility and cleanly separates out the various concerns of a continuous delivery pipeline.
One of the important factors that determines how you should assemble servers, AMIs, and infrastructure planning is to answer the question: In production, how fast will I need a new instance launched?
The answer to this question will determine how much you bake into the AMI vs. how much you build after boot.
NOTE: My experience is with Chef Server so I will use Chef terminology but the concepts are the same for any other configuration management stack.
The general rule of thumb is to treat your infrastructure as code. This means thinking about the process of launching instances, creating users on those machines, and managing authorized_keys files and SSH keys the same way you would your application code. Being able to track infrastructure changes in source code makes management, redeployment, and even CI much easier.
This Chef Introduction covers the terminology in Chef of Cookbooks, Recipes, Resources, and more. It shows you how to build a simple LAMP stack, and how you can relaunch it just as easily with one command.
So given the example in your question, at a high level I would do the following:
Launch a base Ubuntu Linux AMI (currently 14.04) with a CloudFormation script.
In the UserData section of the instance configuration, bootstrap the Chef client install process.
Run a Recipe to create a user.
Run a Recipe to create the .ssh/authorized_keys file for that user (a sketch follows this list).
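A minimal sketch of what such recipes might contain (the username comes from the question; the key and cookbook layout are placeholders):

    # e.g. cookbooks/users/recipes/default.rb
    # Creates the user, their .ssh directory, and an authorized_keys file.
    user 'saqib' do
      home '/home/saqib'
      shell '/bin/bash'
      manage_home true
    end

    directory '/home/saqib/.ssh' do
      owner 'saqib'
      group 'saqib'
      mode '0700'
    end

    file '/home/saqib/.ssh/authorized_keys' do
      owner 'saqib'
      group 'saqib'
      mode '0600'
      content 'ssh-rsa AAAA... saqib@laptop' # placeholder public key
    end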
Tools like Chef are used because you are able to break down the infrastructure into small blocks of code performing specific functions. There are numerous Cookbooks already built and available that perform the basic building blocks of creating services, installing software packages, etc.
All that being said, there are times when you have to deviate from best practices in the interest of your specific domain and requirements. There may be situations where, even given all the advantages of infrastructure management, you will still need to bake items into the AMI.
Let's pretend your application does image processing and has a requirement to use ImageMagick. Let's assume that you will need to build ImageMagick from source. If you were to do this via Chef recipes, it could add another 7 minutes of just compiling ImageMagick to the normal instance boot time. If waiting 10-12 minutes is too long for a new instance to come online, then you may want to consider baking your own AMI with ImageMagick already compiled and installed.
This is an acceptable solution, but you should keep in mind that managing your own fleet of pre-baked AMIs adds infrastructure overhead: you will need to keep your custom AMIs updated as new base AMIs are released, as you expand to different instance types, and as you move into different AWS Regions.