Difference between buildpack and droplet - cloud-foundry

Here is my understanding of Cloud Foundry buildpacks and droplets:
A buildpack is runtimes (say, a JDK) + containers (say, Tomcat) + frameworks (say, Spring) + service configs (if any) + your apps (if any).
A droplet is a buildpack deployed on a Linux container.
Please correct this or add additional information.

A buildpack is a collection of three binaries: detect, compile, and release. When your app bits are pushed to Cloud Foundry, the detect script of each available buildpack is run against your bits until one succeeds. For example, the Ruby buildpack checks whether there's a Gemfile, the Python buildpack looks for a requirements.txt, and so on. Next, the compile phase turns your app bits into a single runnable package, which means compiling the code (if necessary) and bundling in anything else that's needed, such as a JDK, Tomcat, and Spring, or a Ruby interpreter and gems. That single runnable package is the droplet. Finally, the release phase presents the droplet and associated metadata so that the stager can upload them to the cloud controller.
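To make the detect step concrete, here's a minimal sketch of a classic bin/detect script in the spirit of the Ruby example above (the platform passes the app's build directory as the first argument; this is an illustration, not any particular buildpack's actual script):

    #!/usr/bin/env bash
    # bin/detect <build-dir>
    # Succeed (exit 0) only if the pushed app looks like a Ruby app.
    if [ -f "$1/Gemfile" ]; then
      echo "Ruby"   # buildpack name reported to the platform
      exit 0
    fi
    exit 1          # not mine; the platform tries the next buildpack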
To run your app, the droplet is retrieved from the cloud controller and run inside a container.
Basically, a buildpack packages your app into a droplet, which consists of your app + some other stuff, or a compiled binary version of your app, and the droplet is then run in a container.

Buildpacks are scripts that provide runtime support for your application. A buildpack is a set of scripts with instructions to detect, supply, finalize, and release. They essentially provide your application's runtime + app framework + related dependencies.
A droplet is a tarball containing your app + runtime + framework + dependencies. This package is the output of staging and is what finally gets deployed.
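For illustration, here is a minimal sketch of a classic bin/release script as well (the rackup start command is just a hypothetical Ruby example):

    #!/usr/bin/env bash
    # bin/release <build-dir>
    # Print YAML metadata telling the platform how to start the app.
    echo "---"
    echo "default_process_types:"
    echo "  web: bundle exec rackup config.ru -p \$PORT"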

CloudFoundry - How to understand the operating system(OS) environment of an app?

We push a Java app to Cloud Foundry using cf push with the manifest file below:
    applications:
    - name: xyz-api
      instances: 1
      memory: 1G
      buildpack: java_buildpack_offline
      path: target/xyz-api-0.1-SNAPSHOT.jar
I understand that PaaS (e.g., Cloud Foundry) is a layer on top of IaaS (e.g., vCenter hosting Linux and Windows VMs).
In the manifest file, the buildpack entry just describes the userspace runtime libraries required to run an app.
Coming from a non-cloud background and reading this manifest file, I would like to understand...
1) How to understand the operating system (OS) environment that an app is running in? On which operating system...
2) How is an app running on a BOSH instance different from a Docker container?
1) How to understand the operating system (OS) environment that an app is running in? On which operating system...
The stack determines the operating system on which your app will run. There is a stack attribute in the manifest or you can use cf push -s to indicate the stack.
You can run cf stacks to see all available stacks.
In most environments at the time of writing, you will have cflinuxfs2. This is Ubuntu Trusty 14.04. It will be replaced by cflinuxfs3 which is Ubuntu Bionic 18.04, because Trusty is only supported through April of 2019. You will always have some cflinuxfs* stack though, the number will just vary depending on when you read this.
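For example (illustrative commands; which stacks you see depends on your environment, as described above):

    cf stacks                      # list the stacks available in this environment
    cf push xyz-api -s cflinuxfs3  # push the app onto a specific stack

You can equivalently pin the stack with a stack: attribute in the manifest.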
In some environments you might also have a Windows based stack. The original Windows based stack is windows2012r2. This is quite old as I write this so you probably won't see it any more. What you're likely to see is windows2016 or possibly something even newer depending on when you read this.
If you need more control than that, you can always push a docker container. That would let you pick the full OS image for your app.
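For instance (a sketch; the image name is hypothetical):

    # run a prebuilt Docker image instead of staging with a buildpack
    cf push my-app --docker-image myorg/my-app:1.0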
2) How is an app running on a BOSH instance different from a Docker container?
Apps running on Cloud Foundry aren't deployed by BOSH directly. The app runs in a container. The container is scheduled and run by Diego, and the Diego cells are BOSH-deployed VMs. So there's an extra layer in there.
At the core, the difference between running your app on Cloud Foundry and running an app in a docker container is minimal. They both run in a Linux "container" which has limitations put on it by kernel namespaces & cgroups.
The difference comes in a.) how you build the container and b.) how the container is deployed.
With Cloud Foundry, you don't build the container. You provide your app to CF, and CF builds the container image based on the selected stack and the additional software added by buildpacks. The output in CF terminology is called a "droplet", but it is basically an OCI image (this will be even more so with buildpacks v3). When you need to upgrade or add new code, you just repeat the process and push again. The stack and buildpacks, which are automatically updated by the platform, will in turn provide you with a patched and up-to-date app image.
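A sketch of that upgrade flow with the standard cf CLI (app name taken from the earlier manifest for illustration):

    cf push xyz-api      # stage: stack + buildpack produce the droplet
    # later, after the platform has updated the stack or buildpack:
    cf restage xyz-api   # rebuild the droplet against the patched base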
With Docker, you manually create your image, building it up from scratch or from some trusted base image. You add your own runtimes and application code. When you need to upgrade, it's on you to pull in updates from the base image and runtimes, or worse, to update your from-scratch image.
When it comes to deployment, CF handles this all for you automatically. It can run any number of instances of your app, and it will automatically place those instances so that your app is resilient to failures in the infrastructure and in CF.
With Docker, that's on you or increasingly often on some other tool like Kubernetes.
Hope that helps!

What happens when a buildpack is updated

All apps on our team use a buildpack named ruby_latest_buildpack. It's currently a renamed version of ruby_1_7_27_buildpack. We're about to make it become ruby_1_7_28_buildpack.
What will happen to deployed and running applications when we update ruby_latest_buildpack? If we restart an application, will it continue to run under the environment that was created by the buildpack at deploy time, or will it start to pick up features provided by the updated buildpack?
Once the droplet is created (during the staging process), all the frameworks and runtimes (which are essentially provided by buildpacks) are already baked into the image. So if you just restart your application, the old buildpack's output will still be used. If you want to use the updated buildpack, you will have to restage your application.
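In cf CLI terms (app name hypothetical):

    cf restart my-app    # reuses the existing droplet (old buildpack output)
    cf restage my-app    # runs the buildpack again and builds a fresh droplet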

Cloud Foundry app built with PHP buildpack - custom extension disappears after deploy

I have a CIO Bluemix Cloud Foundry PHP app that needs some additional components.
I used https://github.com/cloudfoundry/php-buildpack for the build. I read in its documentation that I can add my own extension. I did that: I added a .tgz archive and put instructions in extension.py for how to install it.
The target location is: /home/vcap/.
I see the installation running okay, and I see the folder during the deploy stage (in the DevOps Pipelines deployment stage log & history).
But after the deployment completes and I list the folder from a deployed PHP page, it is not there. I see a "container destroyed successfully" message in the deploy log. Maybe the whole installation environment gets destroyed? Where is a safe place in the deployment file structure to install components so they remain after the deployment completes?
I'm using def compile(install): to place my Unix commands, e.g. os.system('ls') to list the installation folder's contents. They work properly.
Thx in advance!
There are two totally different environments used by your app: staging and runtime. Staging is where the buildpack runs & runtime is where the product of staging (i.e. your app) is run.
Unfortunately, paths are not the same in staging and runtime. At runtime your app lives under /app or /home/vcap/app (the former is a symlink to the latter). Staging is different. There is a /home/vcap directory but it's not used for anything.
Instead, the buildpack scripts are fed the paths to use via CLI arguments. This is all documented here.
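As an illustration of that convention, a classic bin/compile script receives its working paths as arguments rather than assuming fixed locations (a generic sketch, not the PHP buildpack's actual internals):

    #!/usr/bin/env bash
    # bin/compile <build-dir> <cache-dir>
    BUILD_DIR="$1"   # contents end up in the droplet (/home/vcap/app at runtime)
    CACHE_DIR="$2"   # persisted between stagings, never shipped with the app
    mkdir -p "$BUILD_DIR/vendor"
    # install extra components under $BUILD_DIR, not under /home/vcap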
As a PHP buildpack extension, you can access the CLI args, and many other things, by looking at the context that is maintained by the buildpack. This gets passed directly into buildpack extension methods like service_environment & service_commands. The compile extension method is slightly different: the argument passed in is not the context, but that argument does have a reference to the context (it's install.builder._ctx).
Having said all that, I would not recommend using PHP buildpack extensions at this point. The buildpack is being rewritten and that functionality is being dropped. It's not going to have a direct replacement, but the closest thing would be Composer's ability to execute scripts. My suggestion would be to see if you can use the Composer functionality. It'll be more portable, as it won't depend on buildpack-specific behavior.

Nexus 3.5.1 proxies nothing but Maven metadata files from snapshot repo

I have upgraded the Nexus repository from 2.x to 3.x through the following path:
2.4.14 -> 3.4.0 -> 3.5.1
All Nexus services were packed in Docker with the data directory mapped from the host's. For all services I use the default sonatype/nexus or sonatype/nexus3 containers. The Nexus web interface is hidden behind nginx with simple reverse proxying.
I use the Nexus service with boot-cj tools (with no credentials), which manage dependencies the same way as Maven. Anyway, the tool first downloads nexus-maven.xml with the relevant sha1 files and then tries to download the jars. It worked fine with all the 2.x versions I had.
I created a proxy repository against the remote sonatype-snapshots repo. When I start compilation I get a "Could not find artifact" error. I found that the metadata files are cached, but not the poms and jars.
I have tried to fix it by cleaning the cache with the clean_cache file trick, and more roughly with rm -rfv /srv/nexus3/nexus-data/cache/*, with no success. There are no logs about the error. I have also checked manually that the required artifact exists in the remote repository. The more obvious Rebuild index button gave no solution. I do not think it is a problem with nginx, but who knows? Leaving it overnight to run the scheduled tasks did not help either.
The expected artifact is org.eclipse.rdf4j:rdf4j:pom:2.3-20170901.145510-11.

WebPack on VSTS Hosted Build

We're using the hosted build agent on VSTS to build and release our ASP.NET Core code to Azure App service.
My question is: can we run WebPack to handle front-end tasks on this hosted build on VSTS or do we have to do it manually before checking the code into our repository?
Update:
I'm utilizing the new ASP.NET Core Build (Preview) template that's available on VSTS.
For VSTS we're working on an extension; it's currently in beta, and you can ask to have it shared with you.
Check the VSTS marketplace.
Check this github repo.
Webpack is definitely not a first-class citizen for VS2015 and VSTS. Streamlining webpack for CI/CD has been a real headache in my case, especially as webpack was introduced hastily to solve dreadful performance issues with a large monolithic SPA (ASP.NET 4.6, Kendo, 15,000 files, 2,000 folders). To cut it short, after trying many scenarios to make sure that freshly rebuilt bundles would end up in IIS and the Azure web app, I did a 2-pass build. The sequence of VSTS tasks is as follows: npm install (global), npm install (local), install webpack locally, install webpack globally, build pass 1, webpack, build pass 2, etc. This works with hosted and private agents, provided you supply the proper path for webpack, as webpack is installed in a different location on hosted and private agents (I did not find a way to choose the webpack install location for consistency). I scorch everything before starting the build. You also need to do these in the VS2015 solution: (1) unload the "built" folder, and (2) add Content Include="Built\**" in the project file. The "built" folder contains the bundles and should appear greyed out; otherwise there are more bad surprises and instabilities to deal with...
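Roughly, the sequence described above looks like this as shell steps (package names and paths are illustrative; the exact webpack path differs between hosted and private agents):

    npm install                        # restore the project's local packages
    npm install --save-dev webpack     # local webpack: predictable path on any agent
    npm install -g webpack             # global webpack: path differs per agent type
    # build pass 1 (MSBuild) ... then bundle:
    node_modules/.bin/webpack --config webpack.config.js
    # build pass 2 (MSBuild) picks up the fresh bundles from the "built" folder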
The Build-Pass #2 task in the VSTS build collects the fresh bundles generated by Build-Pass #1 and includes them automatically in the package to be published.
Without a second build pass, collecting the bundles and merging them into the zip package is a nightmare, especially when you have 15,000 files to unzip and then rezip (300 ms per file!). I did not find a file-merging capability that I could readily use in VSTS.
I have my ears to the ground listening for someone coming up with a more efficient CI/CD scheme for webpack. In the meanwhile, my 2-pass-build workaround is working flawlessly, but it is indeed slow.
I anticipate that the advances with ASP.NET Core, Angular 2, and webpack will solve this elegantly.