How to stop re-creating a service inside Cloud Foundry

New to CF. My app uses multiple services, one of which should only be created the first time the app is deployed, not on every deployment.
I'm using the code below:
cf service service-name || cf create-service service-name
However, it still creates the service each time I deploy the app. Is there a better approach?

You should only run cf create-service once. Every time you run it, it will create a new service instance.
You want to cf bind-service, or use manifest.yml to bind your services to your app. If you put the service names to bind under services: in manifest.yml, binding will just happen automatically. Otherwise you'd need to run cf bind-service manually when necessary, which is really only once when you initially push, or after making changes to the service instance.
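For example, a one-off manual bind from the CLI looks like this (a minimal sketch; my-app and my-db are placeholder names, and the restage is needed so the app picks up the new service credentials):
cf bind-service my-app my-db
cf restage my-app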
As to your command, the approach seems OK. cf service should return a non-zero exit code if it doesn't find the service name you specified, which triggers the second half of the command and creates the service. At the same time, cf create-service will definitely fail if you try to create a service instance with a name that already exists; it'll say The service instance name is taken: foo. So if you're seeing the service instance recreated by that command, something is probably deleting it beforehand. Review your deployment code and check whether something is deleting that service instance accidentally. You generally don't want to delete a service instance unless you're really, really done with it; normally it's enough to just unbind it.
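For reference, a fuller version of your guard might look like the following sketch. Note that cf create-service takes a service offering, a plan, and an instance name (your original command omits the offering and plan); p-mysql, small, and my-db are placeholder values.
# Create the service instance only if it doesn't already exist
if ! cf service my-db > /dev/null 2>&1; then
  cf create-service p-mysql small my-db
fi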

Related

What is the Run Condition for Octopus Deploy to skip Deploy Amazon ECS Service step?

So I have two steps in Octopus Deploy. The first step deploys the cluster and task definition, and the other updates the task definition and deploys a new task. The first step is only needed the first time. How do I skip that step with run conditions?
EDIT: Updated to better reflect the way Octopus's Deploy an ECS Service and Update an ECS Service steps work.
Typically, I wouldn't expect to see both the Deploy an ECS service step and the Update an ECS Service step in the same Octopus deployment or runbook process.
The reason is that the Deploy an ECS Service step creates a service and task for you from scratch in a CloudFormation stack that Octopus then manages - it helps you get started with ECS if you haven't already got something running there.
Conversely, you'd tend to use the Update an ECS Service step when the service and task already exist, and you want to deploy new versions of the task (new container images) over time. The cluster may be managed with a tool like Terraform or by hand.
You'd therefore only need one or the other step, not both, as the Deploy an ECS Service step will update a previously deployed (by Octopus) ECS service if it exists.
If you really need to use both, then one option is to add a new step that runs before the "first" and "second" steps from your scenario.
This new step would likely be a Run an AWS CLI Script step. With it, you can use one of the ecs commands, probably list-services, to see if a service with the specified name already exists in the cluster.
Once you know whether the service exists, the step would then set an Octopus output variable. You set it (assuming a PowerShell script step) like so:
# Set to true if it exists
Set-OctopusVariable -name "ECSServiceExists" -value "True"
# OR False if it doesn't
Set-OctopusVariable -name "ECSServiceExists" -value "False"
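As a rough sketch, the whole check step could instead look something like this in a Bash script step (set_octopusvariable is the Bash counterpart of Set-OctopusVariable; the cluster and service names are placeholders):
# Count service ARNs in the cluster that contain the expected service name
MATCHES=$(aws ecs list-services --cluster my-cluster \
  --query "length(serviceArns[?contains(@, 'my-service')])" --output text)
if [ "$MATCHES" -gt 0 ]; then
  set_octopusvariable "ECSServiceExists" "True"
else
  set_octopusvariable "ECSServiceExists" "False"
fi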
Then for your "first" step that creates the service, you'd set a variable run condition to something like this:
#{unless Octopus.Deployment.Error}#{RunIfServiceDoesntExist}#{/unless}
The condition is doing two things:
Only running when there isn't a general error in the deployment (indicated by Octopus.Deployment.Error)
Testing a new project variable called RunIfServiceDoesntExist to see if that evaluates to true.
For the condition to work, you need to create a new RunIfServiceDoesntExist project variable with value:
#{if Octopus.Action[Check for ECS Service].Output.ECSServiceExists == "False"}true#{/if}
Note: Replace Check for ECS Service with the name you give the new step that tests for the ECS service's existence.
You can have the if condition above directly in the run condition itself, but I find the project variable hides away the complexity a little and makes it easier to read.
Hope that helps!
Turns out I can update the existing service using the Deploy Amazon ECS Service step itself. It just takes a really long time to deploy.
Cheers!

Force configuration update on Amazon Elastic Beanstalk

I'm building a simple web app on Elastic Beanstalk (Dockerized Python/Flask). I had it running successfully on one AWS account and wanted to migrate it to a new AWS account, so I'm recreating the Beanstalk app on the AWS console and trying to deploy the same code via eb deploy.
I noticed that when pushing a configuration update, Beanstalk will attempt the change but then roll it back if the app fails to start with the new change. This is true for several different kinds of changes, and I need to make multiple changes to get my app fully working (basically I'm just recreating the Beanstalk settings I already have on my other AWS account):
Need to set a few environment variables
Need to set up a new RDS instance
Need to deploy my code (the new application version has been uploaded, but the deployed application version is still the old "sample application" that it started with)
All 3 must be done before this app will fully start. However, whenever I try one of these on its own, Beanstalk attempts the change, notices that the app fails to start up (it throws an exception on startup), and rolls the change back. The config rollback occurs even though I have "Ignore Health Check: true" under the deployment settings. (I would have thought it would at least let me force update #3 above, but apparently not.)
So I'm basically stuck because I can't do all of them at once. Is there a way to --force a configuration update, so that Beanstalk doesn't roll back no matter what happens?
My other thought was that I could potentially make all the edits at once to the JSON config, but I figured that there must be a way to force config changes so people can respond quickly in a crisis without these well-intentioned guardrails.
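(For reference, batching everything into one CLI call might look something like the sketch below - the environment name, version label, and settings file are placeholders, and settings.json would hold the environment variables and RDS options as option settings.)
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --version-label my-app-v1 \
  --option-settings file://settings.json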
Thanks for your help!

Creating EC2 instances from template with arguments

Let's say I have a webapp where users can click a button that starts a long-running task (e.g. 3 days long). The user can also select options, for example the things they want that task to do.
No matter what, the task will be the same script that runs when the instance starts. However, I would like it to somehow take arguments from the button click to change the function of the startup script.
Is there a way to do this with AWS EC2 instances?
So, you're saying that you want to pass certain parameters to some software that will be launched on an EC2 instance.
There are many ways to do this:
When the instance is launched, you can pass User Data. This is commonly used to run a startup script, but it can also be used just to pass information to the instance that can be accessed via http://169.254.169.254/latest/user-data/. So, either pass the configuration directly or pass it as part of the startup script.
Store it in tags on the instance when the instance is launched. Once the software starts up, it can retrieve the tags associated with the instance (itself) and act appropriately.
Store the configuration in a database and have the software access the database to determine what it should do.
Store the configuration in Amazon S3 and have the software retrieve the configuration file.
Personally, I like the idea of Tags. It's very Cloud.
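For example, reading a tag from inside the instance might look something like this (a sketch; it assumes the instance profile allows ec2:DescribeTags, and the TaskOptions tag key is hypothetical):
# Discover this instance's ID and region from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
# Read the value of the hypothetical TaskOptions tag
aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=TaskOptions" \
  --query "Tags[0].Value" --output text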
This behaviour isn't directly related to EC2, although EC2 could host an application that does these long-running parameterised tasks. Whether EC2 is a good choice also depends on how your tasks react to underlying failures: if the EC2 instance fails or restarts, what happens to your task?
Depending on your use case, some managed options might be:
AWS Step Functions
AWS Simple Workflow Service
AWS Batch

Capistrano and Auto-Scaling AWS

We're trying to figure out the best way to deploy to an auto-scaling AWS setup using Capistrano, and we're stuck on the best way to ensure new servers automatically get the latest code without having to rely on AMIs.
Any ideas?
Using User Data, you can have your EC2 instances pull the latest code each time a new instance is launched.
More info on user data here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
tl;dr: user data is pretty much a shell script that's executed when your EC2 instance launches. You can have it pull the latest code and run it.
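As a minimal sketch (the repo URL, paths, and start script are placeholders; a private repo would need a deploy key or instance role):
#!/bin/bash
# Runs once when each newly launched instance boots
git clone https://github.com/example/app.git /srv/app
cd /srv/app && ./bin/start.sh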
#Moe's answer (or something like it) is the right one. But just as another thought, you could write some Ruby which queries AWS on deploy to fetch the list of servers to which Capistrano will deploy. The issue with this approach is that you would have to manually deploy to all servers every time auto-scaling adds a server, which kind of defeats the purpose.

What commands/config can I use to deploy a django+postgres+nginx using ecs-cli?

I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose, so you need separate (if similar) config alongside your local docker.yml (and I'm not sure whether my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but it seems easier overall (and at least it's well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get it up in the Amazon cloud, is to make use of docker-machine with the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be checked with docker-machine ls.
One item you will have to revisit in the Amazon Mgmt Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent executions, bypassing any need to use the Mgmt Console to add the ports. You may wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your NOTES.
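A rough sketch of that flow (the machine name is a placeholder, and --amazonec2-open-port can be repeated for each port your compose file exposes):
# Provision a remote Docker host on EC2, sized up and with port 80 open
docker-machine create --driver amazonec2 \
  --amazonec2-instance-type t2.medium \
  --amazonec2-open-port 80 \
  aws-sandbox
# Point the local docker client at the remote host, then bring up the stack
eval $(docker-machine env aws-sandbox)
docker-compose up -d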
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to use the Mgmt Console to construct the definition, then grab the accompanying JSON definition it makes available and store it in your source code repository for future executions on the command line, where it can be referenced when registering task definitions and executing tasks and services within a given cluster.
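For example, once the JSON definition is in your repository, subsequent command-line runs might look like this (a sketch; the file, cluster, service, and task names are placeholders):
# Register (or re-register) the task definition from source control
aws ecs register-task-definition --cli-input-json file://taskdef.json
# Point an existing service at the latest revision
aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task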