Cloud Foundry triggers if application was created - cloud-foundry

Is there a possibility that Cloud Foundry triggers a function if a new application is pushed to the platform?
I would like to trigger some internal functions, like registration on the API gateway. I know that I can pull the information from the events API https://apidocs.cloudfoundry.org/224/events/list_all_events.html. But is it also possible to get this information pushed?

The closest thing I can think of to what you're asking is the profile script.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The note about the Java buildpack not supporting .profile scripts is incorrect. It's a platform feature, so all buildpacks support them. The difference with Java apps is that you're probably pushing a JAR or WAR file, so it's harder to make sure the file is placed in the correct location. Location of the file is everything.
When your application starts, the platform will first run the .profile script packaged with your application, if one exists. It's a standard shell script and you can do whatever you like in this file.
The only caveat is that your application will not start until this script completes successfully (i.e. exit 0). Thus you have a limited amount of time for that script to run and your application to start. How much time, you ask? That is configured by cf push -t and is in seconds. You can also set it in your manifest.yml with the timeout attribute.
Time (in seconds) allowed to elapse between starting up an app and the first healthy response from the app
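To make that concrete, here is a rough sketch; the gateway URL and payload are hypothetical, and the exact registration call would depend on your API gateway:

```bash
# .profile - shipped in the root of the files you push; the platform runs it
# before your app's start command, and the app won't start until it finishes.

# Hypothetical example: announce this app instance to an API gateway.
# VCAP_APPLICATION is a real CF environment variable containing app metadata.
app_name=$(echo "$VCAP_APPLICATION" | grep -o '"application_name":"[^"]*"' | cut -d'"' -f4)
curl -fsS -X POST "https://api-gateway.example.com/register" \
  --data "app=${app_name}"
```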
This is also something that each application needs to include. I suppose you could also use a custom buildpack to add that file, if you wanted to have it added across multiple applications. There's no easy way to add it for all apps though.
Hope that helps!

Related

Using cloud functions vs cloud run as webhook for dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function with a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
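For example, deploying from your own machine instead of the inline editor could look roughly like this; the folder layout, function name, entry point, and runtime are placeholders:

```bash
# A function folder can contain as many source files as you like, e.g.:
#   my-fulfillment/
#     index.js            (exports the entry point, requires the other files)
#     intents/welcome.js
#     intents/order.js
#     package.json
cd my-fulfillment
gcloud functions deploy dialogflow-fulfillment \
  --runtime nodejs18 \
  --trigger-http \
  --entry-point dialogflowWebhook \
  --source .
```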
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
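As a rough sketch, a source-based Cloud Run deployment could look like this; the service name and region are placeholders, and you can instead build and push your own container image if you want full control of the environment:

```bash
# Build from the current directory and deploy it as an HTTPS service:
gcloud run deploy dialogflow-fulfillment \
  --source . \
  --region us-central1 \
  --allow-unauthenticated   # Dialogflow must be able to reach the endpoint
```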
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
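For local testing, the tunnel setup is typically just this (assuming your webhook listens on port 8080):

```bash
ngrok http 8080
# ngrok prints a public https:// forwarding URL; paste that URL (plus your
# webhook path) into the Dialogflow fulfillment settings while you test.
```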

Difference between cf push and cf restage

I have an app which is pushed to CF using cf push. Now I have changed one of the environment variables and then restaged the app using cf restage. What I understand is that when we do a restage, it will compile and build the droplet again, and the same can be done by running cf push again for the app. So what I want to know is the difference between the 2 commands and how cf handles this internally.
The difference is that one uploads files and one does not.
When you run cf push, the cli is going to take bits off your local file system, upload them, stage your app and if that succeeds, run your app. This is what you want if you have made changes to files in your app and want to deploy them.
When you cf restage, the cli is not going to upload anything. It will just restage your app, and if that succeeds, run the app. This is what you want if there are no app changes, or you do not have the app source code, yet you want to force the buildpacks to run again and restart your app using the new droplet.
When you cf restart, the cli won't upload or restage; it will just stop and start the app. This is the fastest option, but it only works if you just need to pick up environment changes like a changed service, memory limit, or environment variables. It's also good if you just want to try and have your app placed onto a different Diego Cell.
If you are just making changes to environment variables, you can probably get away with a cf restart, unless those environment variables are being used by one of your buildpacks, like JBP_CONFIG_*.
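A rough sketch of the two paths; the app name, URL, and values are made up, and JBP_CONFIG_OPEN_JDK_JRE is just one example of a variable a buildpack reads at staging time:

```bash
# Plain runtime environment variable -> a restart is enough:
cf set-env my-app API_GATEWAY_URL https://gateway.example.com
cf restart my-app

# Variable consumed by a buildpack during staging -> restage instead:
cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '{ jre: { version: 11.+ } }'
cf restage my-app
```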
Hope that helps!
Regarding
So what I want to know is the difference between the 2 commands
cf push internally does the following sequence:
uploading: creating a package from your app
staging: creating a container image using the package
starting: starting the container as a long-running process, based on the container image
Some use cases where we need to restage
To introduce buildpack changes: CF operators update the buildpacks for various reasons (one example: security patches), effectively updating the runtime dependencies which our apps use.
Introduction of new software components: it might be that we need to introduce new or additional components which were not foreseen at the time of the initial push. An example: introducing a monitoring agent for your app.
When the root file system is changed.
In all three scenarios above, our application code is not changing, but we need to re-create the container image, so we need to restage. Compared with cf push, a restage skips the package-creation step.
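With a v7+ cf CLI you can see the package/droplet split directly (the app name is a placeholder):

```bash
cf packages my-app   # source packages created by cf push (the "uploading" step)
cf droplets my-app   # droplets created by staging or restaging
cf restage my-app    # builds a new droplet from the existing package;
                     # nothing is uploaded
```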

Run command on Cloud Foundry in its own container

As you can see on the official doc of Cloud Foundry
https://docs.cloudfoundry.org/devguide/using-tasks.html
A task is an application or script whose code is included as part of a deployed application, but runs independently in its own container.
I'd like to know if there's a way to run commands and manipulate files directly in the main container of an application without using an SSH connection or the manifest file.
Thanks
No. Tasks run in their own container, so they cannot affect other running tasks or running application instances. That's the designed behavior.
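For reference, this is roughly what running a task looks like with a v7+ CLI; the command and names are placeholders, and the task gets a brand-new container built from the app's droplet rather than touching a running instance:

```bash
cf run-task my-app --command "bundle exec rake db:migrate" --name migrate
cf tasks my-app           # check the task's state and exit status
cf logs my-app --recent   # task output appears in the app's log stream
```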
If you need to make changes to your application, you should look at one of the following:
Use a .profile script. This will let you execute operations prior to your application starting up. It runs for every application instance that is started (I do not believe it runs for tasks) so the operation will be consistently applied.
While not recommended, you can technically background a process from the .profile script that will continue running for the duration of your app (see the sketch at the end of this answer). It's not recommended because nothing monitors that process, and if it crashes, it won't cause your app container to restart.
Integrate with your buildpack. Some buildpacks, like the PHP buildpack, provide hooks for you to integrate and add behavior during staging. For other buildpacks, you can fork the buildpack and make it do whatever you want. This includes changing the command that's returned by the buildpack, which tells the platform how to execute your droplet and ultimately what runs in the container.
You can technically modify a running app instance with cf ssh, but I wouldn't recommend it. It's something you should really only do for troubleshooting or maybe testing. Definitely not production apps. If you feel like you need to modify a running app instance for some reason, I'd suggest looking at the reasons why and look for alternative ways of accomplishing your goals.
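For completeness, the (not recommended) backgrounding trick mentioned above looks roughly like this; my-agent is a hypothetical binary and nothing will restart it if it dies:

```bash
# .profile - runs before the app's start command.
# Start a side process and detach it so it doesn't block the app's startup.
nohup ./my-agent --config agent.yml > agent.log 2>&1 &
# No supervision: if my-agent crashes, the platform neither notices nor
# restarts the app container because of it.
```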

Cloud Foundry Change Buildpack from Command Line

I have a Jenkins app running on Cloud Foundry for a POC. Since it's Jenkins it uses a bound service for file persistence.
I had to make a change to the Java Buildpack and would like Jenkins to use the updated buildpack.
I could pull the source for Jenkins from GitHub and push it again with updated references to the new buildpack in the manifest.yml file or via a command-line option. In theory, the bound file system service's state would remain intact. However, I haven't validated this assumption and have concerns I might lose the state.
I've looked through the client CLI to see if there's a way to explicitly swap buildpacks without another push. However, I didn't see anything.
Is anyone aware of a way to change the buildpack of an existing application without re-pushing it to Cloud Foundry?
After some research I couldn't find any way to swap the buildpack without a push. I did discover that my bound file system service remained intact and I didn't lose any work.
Answer: re-push to change the buildpack.
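In other words, something along these lines; the buildpack URL and app name are placeholders, and the bound service stays bound across the push:

```bash
# Point the app at the updated buildpack and push again:
cf push jenkins -b https://github.com/example/java-buildpack.git#my-fix
# Alternatively, set it under the buildpack(s) attribute in manifest.yml
# and run a plain cf push.
```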

Is there an ideal way to move from Staging to Production for Coldfusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each of the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed and then an SVN update is run on the production server's working directory, which then triggers a code copy from the working directory to the actual live code. This worked fine, but it has many moving parts and still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea being one unit to validate with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
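A rough sketch of those steps with SVN and a shell; the repository URLs and version numbers are made up:

```bash
# 1. Export a clean copy (no .svn metadata) and tag the revision in SVN:
svn export https://svn.example.com/repo/mysite/trunk build/mysite
svn copy https://svn.example.com/repo/mysite/trunk \
         https://svn.example.com/repo/mysite/tags/1.4.2 -m "QA build 1.4.2"

# 2. Turn the export into a single unit to validate:
cd build && zip -r mysite-1.4.2.zip mysite

# 3. Hand mysite-1.4.2.zip to QA; the exact same file is what goes to production.
```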
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool that solves the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On Source Control: for the situation you describe, I would maintain a core code base plus an overlay per site. Export core, then export the site-specific overlay over it. This ensures that core updates make it in wherever the site-specific changes don't deliberately override them.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core and site combination. Track the version numbers of core and site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
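The core-plus-overlay export could be sketched like this (the repository layout is hypothetical); in practice the same steps would live in the per-site Ant script rather than a shell one-liner:

```bash
# Export the shared core first, then lay the site-specific files on top, so
# core updates come through except where the site deliberately overrides a file.
svn export https://svn.example.com/repo/core/tags/2.1 build/site-a
svn export --force https://svn.example.com/repo/site-a/trunk build/site-a
zip -r site-a-core2.1-build42.zip build/site-a
```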
This build file is what QA should validate, so validation includes deployment, configuration and functionality testing. See the deployment notes below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy and install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, along with whatever credentials and connection information are necessary to make that happen, most likely via FTP. Once transferred, the tool would then extract the build file or run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
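One way to fill in that last piece, assuming shell access to the production server is acceptable (the answer only mentions FTP, so treat scp/ssh here as a substitution):

```bash
# Push the validated build and unpack it remotely in one scripted step:
scp build/mysite-1.4.2.zip deploy@prod.example.com:/tmp/
ssh deploy@prod.example.com \
  'unzip -o /tmp/mysite-1.4.2.zip -d /var/www/mysite && rm /tmp/mysite-1.4.2.zip'
```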
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your subversion repository, transferring via FTP, compressing javascript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge for example that does some interesting stuff and calls a groovy script to do some more processing on my files during the build. If you search riaforge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.