How to retrieve a heap dump in PCF using SMB

I need to set the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemoryError.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
Kindly help.

I need to set the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemoryError
You don't need to set these options. The Java buildpack will take care of this for you. By default, it installs a jvmkill agent which will automatically do this.
https://github.com/cloudfoundry/java-buildpack/blob/main/docs/jre-open_jdk_jre.md#jvmkill
In addition, the jvmkill agent is smart enough that if you bind a SMB or NFS volume service to your application, it will automatically save the heap dumps to that location. From the doc link above...
If a Volume Service with the string heap-dump in its name or tag is bound to the application, terminal heap dumps will be written with the pattern <CONTAINER_DIR>/<SPACE_NAME>-<SPACE_ID[0,8]>/<APPLICATION_NAME>-<APPLICATION_ID[0,8]>/<INSTANCE_INDEX>--<INSTANCE_ID[0,8]>.hprof
The key is that you name the bound volume service appropriately, i.e. the name must contain the string heap-dump.
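As a rough sketch with the cf CLI (the service offering, plan, share path, and mount point below are assumptions; check cf marketplace to see what your platform actually offers):
# create an NFS volume service whose name contains the string "heap-dump"
cf create-service nfs Existing my-heap-dump -c '{"share": "nfs.example.com/export/dumps"}'
# bind it to the app (an SMB volume service works the same way, with credentials in the bind config)
cf bind-service my-app my-heap-dump -c '{"mount": "/var/heap-dumps"}'
cf restage my-app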
You may also do the same thing with non-terminal heap dumps using the Java Memory Agent that the Java buildpack can install for you upon request.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
To retrieve the heap dumps you need to somehow access the file server. I say "somehow" because it entirely depends on what you are allowed to do in your environment.
You may be permitted to mount the SMB/NFS volume directly to your PC. You could then access the files directly (see the sketch at the end of this answer).
You may be able to retrieve the files through some other protocol like HTTP or FTP or SFTP.
You may be able to mount the SMB or NFS volume to another application, perhaps using the static file buildpack, to serve up the files for you.
You may need to request the files from an administrator with access.
Your best bet is to talk with the admin for your SMB or NFS server. He or she can inform you about the options that are available to you in your environment.
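For the first option above, a rough sketch of mounting the share on a Linux workstation might look like this (server names, share paths, and credentials are placeholders; macOS and Windows have their own equivalents):
sudo mkdir -p /mnt/heap-dumps
# NFS
sudo mount -t nfs nfs.example.com:/export/dumps /mnt/heap-dumps
# or SMB/CIFS
sudo mount -t cifs //fileserver.example.com/dumps /mnt/heap-dumps -o username=me
# then copy the dumps locally for analysis
cp /mnt/heap-dumps/*.hprof ~/analysis/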

Related

How to mount a file via CloudFoundry manifest similar to Kubernetes?

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in the /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read the content. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept that's similar to Kubernetes, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same though so you'll have to make the determination if one will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but that comes through an environment variable VCAP_SERVICES as well). This won't be a file, but perhaps you can have your UI read that instead (You didn't mention how the UI gets that file, so I can't comment further. If you elaborate on that point like if it's HTTP or reading from disk, I could perhaps expand on this option).
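As a small illustration of that option (the app and variable names are made up), setting a flag via the cf CLI could look like:
cf set-env my-app FEATURE_FLAGS '{"newCheckout": true}'
cf restage my-app   # restage/restart so the running app sees the new value
cf env my-app       # verify what the app will see at runtime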
If it absolutely needs to be a file, your application could read the environment variable contents and write it to disk when it starts. If your application isn't able to do that like if you're using Nginx, you could include a .profile script at the root of your application that reads it and generates the file. For example: echo "$CFG_VAR" > /dist/file or whatever you need to do to generate that file.
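A minimal .profile sketch along those lines (the variable name and target path are placeholders, not anything CloudFoundry defines):
# .profile at the root of the pushed app; Cloud Foundry runs it before the app starts
mkdir -p dist
echo "$FEATURE_FLAGS" > dist/feature-flags.json   # adjust the path to wherever your UI expects the file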
A couple more notes when using environment variables. There are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They are also not great for binary configuration, in which case you'd need to base64 encode your data first.
You can pull the config file from a config server and cache it locally. This can be pretty simple: the first thing your app does when it starts is reach out and download the file, place it on disk, and the file will persist there for the duration of your application's lifetime.
If you don't have a server-side application like if you're running Nginx, you can include a .profile script (can be any executable script) at the root of your application which can use curl or another tool to download and set up that configuration.
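A rough sketch of such a .profile script (the URL and file path are placeholders):
# .profile at the root of the pushed app; runs before the app starts
mkdir -p dist
curl -sfL "https://config.example.com/ui/feature-flags.json" -o dist/feature-flags.json || echo "WARNING: could not fetch feature flags" >&2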
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.

What is the easiest way to download a file out of an ECS container to a local machine?

I need to do some heap dumps, and it would be great to have an easy (and fast) way to get the files as seamlessly as possible.
The current way of doing it is (sketched below):
Create file
Optional: Upload SSH key to the EC2 instance if not yet known (depending on the security model used)
Open an SSH session (not using ASM as it has some OS X flaws)
Copy the file from the Docker container to the EC2 instance
SCP the file to the local machine
Cleanup
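In shell terms, that flow looks roughly like this (the instance address, container ID, and paths are placeholders):
# on the EC2 container instance, after SSH'ing in
docker cp 3f2a1b9c:/tmp/heapdump.hprof /tmp/heapdump.hprof
# back on the local machine
scp ec2-user@ec2-198-51-100-10.compute-1.amazonaws.com:/tmp/heapdump.hprof .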
This seems overwhelmingly complex for getting a single file. Is there a more straightforward way of doing it? As this is an on-demand use case, I'd be OK with a manual AWS Console way or using tools to do it more conveniently. Thanks!

How to only push local changes without destroying the container?

I have deployed my app (PHP Buildpack) to production with cf push app-name. After that I worked on further features and bugfixes. Now I would like to push my local changes to production. But when I do that, all the images (e.g. profile images) which were saved on the production server get lost with every push.
How do I take over only the changes in the code without losing any stored files on the production server?
It should be like a "git pull"
Your application container should be stateless. To persist data, you should use the offered services. The Swisscom Application Cloud offers an S3 compatible Dynamic Storage (e.g. for pictures or user avatars) or different database services (MongoDB, MariaDB and others). If you need to save user data, you should save it in one of these services instead of the local filesystem of the app's container. If you keep your app stateless, you can migrate and scale it more easily. You can find more information about how your app should be structured to run in a modern cloud environment here. To get more information about how to use your app with a service, please check this link.
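As a rough sketch with the cf CLI (the exact service and plan names depend on your marketplace, so treat these as placeholders):
cf marketplace                                # see which storage/database services are offered
cf create-service dynstrg usage my-bucket     # example: an S3-compatible storage offering
cf bind-service app-name my-bucket
cf restage app-name                           # the credentials then show up in VCAP_SERVICES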
Quote from Avoid Writing to the Local File System
Applications running on Cloud Foundry should not write files to the local file system for the following reasons:
Local file system storage is short-lived. When an application instance crashes or stops, the resources assigned to that instance are reclaimed by the platform, including any local disk changes made since the app started. When the instance is restarted, the application will start with a new disk image. Although your application can write local files while it is running, the files will disappear after the application restarts.
Instances of the same application do not share a local file system. Each application instance runs in its own isolated container. Thus a file written by one instance is not visible to other instances of the same application. If the files are temporary, this should not be a problem. However, if your application needs the data in the files to persist across application restarts, or the data needs to be shared across all running instances of the application, the local file system should not be used. We recommend using a shared data service like a database or blobstore for this purpose.
In the future your problem will be "solved" with Volume Services (Experimental). You will have a persistent disk for your app.
Cloud Foundry application developers may want their applications to mount one or more volumes in order to write to a reliable, non-ephemeral file system. By integrating with service brokers and the Cloud Foundry runtime, providers can offer these services to developers through an automated, self-service, and on-demand user experience.
Please subscribe to our newsletter for feature announcements. Please also monitor the CF community for upstream development.

Move a virtual machine from one vCenter to another vCenter

I have the following problem:
There are two separate vCenters (ESXi). They cannot see each other or communicate in any way.
I can create a clone of a VM in vCenter1, but then I want to move that clone to vCenter2.
Is there a way I can copy the cloned VM's files onto an external HDD and move them to the other vCenter?
I've figured out the solution to my problem:
Step 1: From within the vSphere client, while connected to vCenter1, select the VM and then from the "File" menu select "Export"->"Export OVF Template" (Note: make sure the VM is powered off, otherwise this feature is not available; it will be grayed out). This action will allow you to save the VM on your machine/laptop (as .vmdk, .ovf, and .mf files).
Step 2: Connect to vCenter2 with your vSphere client, from the "File" menu select "Deploy OVF Template...", and then select the location where the VM was saved in the previous step.
That was all!
Thanks!
Yes, you can do this.
Copy all of the cloned VM's files from its directory, and place it on its destination datastore.
In the VI client connected to the destination vCenter, go to the Inventory->Datastores view.
Open the datastore browser for the datastore where you placed the VM's files.
Find the .vmx file that you copied over and right-click it.
Choose 'Register Virtual Machine', and follow whatever prompts ensue. (Depending on your version of vCenter, this may be 'Add to Inventory' or some other variant)
The VM registration process should finish with the cloned VM usable in the new vCenter!
Good luck!
To move a virtual machine you need not clone it; just copy the VM files (after powering the VM off) to an external HDD and register them on the destination host.
A much simpler way to do this is to use vCenter Converter Standalone Client and do a P2V, but in this case a V2V. It is much faster than copying the entire set of VM files onto some storage somewhere and copying them onto your new vCenter. It takes a long time to copy or export to an OVF template and then import it. You can set vCenter Converter Standalone Client to V2V in one step and synchronize, then have it power up the VM on the new vCenter and shut it off on the old vCenter. Simple.
For me, using this method, I was able to move a VM from one vCenter to another in about 30 minutes, compared to copying or exporting, which took over 2 hours. Your results may vary.
This process below, from another responder, would work even better if you can present that datastore to the ESXi servers on the destination vCenter and then follow step 2. That eliminates having to copy all the VM files; then follow the rest of the process.
Copy all of the cloned VM's files from its directory, and place it on its destination datastore.
In the VI client connected to the destination vCenter, go to the Inventory->Datastores view.
Open the datastore browser for the datastore where you placed the VM's files.
Find the .vmx file that you copied over and right-click it.
Choose 'Register Virtual Machine', and follow whatever prompts ensue. (Depending on your version of vCenter, this may be 'Add to Inventory' or some other variant)
Copying the VM files onto an external HDD and then bringing it in to the destination will take a lot longer and requires multiple steps. Using vCenter Converter Standalone Client will do everything for you and is much faster. No external HDD required. Not sure where you got the cloning part from. vCenter Converter Standalone Client simply copies the VM files by importing and exporting from source to destination, shuts down the source VM, then registers the VM at the destination and powers it on. All in one step. It takes about a minute to set that up in vCenter Converter Standalone Client.
You don't have to export your VMs at all. You can move the VM and its clone to a TAXI host in vCenter 1. Then add that host to vCenter 2 and vMotion the VMs away to other hosts previously managed by vCenter 2. When done, you can add the TAXI host back to vCenter 1.
If you'd like to do this using the command line, you can do this if you have ESXi 6.0 (or possibly even ESXi 5.5) running, by using govc, which is a very helpful utility for interacting with both your vCenter and its associated resources.
Depending on your setup, you can run:
# setup your credentials
export GOVC_USERNAME=YOUR_USERNAME GOVC_PASSWORD=YOUR_PASSWORD
govc export.ovf -u your-vcsa-url.example.com -vm VM_NAME -dc VMS_DATACENTER export-folder
Then, you'll have your VM VM_NAME exported in the folder export-folder. From there, you can then
govc import.ovf -u your-other-vcsa-url.example.com -name NEW_VM_NAME -dc NEW_DATACENTER export-folder/VM_NAME.ovf
That'll import it into your other vCenter. You might have to specify -ds NEW_DATASTORE too, if you have more than one datastore available, but govc will tell you so if you need to.
The commands above require that govc is installed, which you should do anyway, because it's far better than ovftool.

Backup strategy for Django

I recently deployed a couple of web applications built using Django (on WebFaction).
These would be some of the first projects of this scale that I am working on, so I wanted to know what an effective backup strategy is for maintaining backups both on WebFaction and in an alternate location.
EDIT:
What do I want to back up?
Database and user uploaded media. (my code is managed via git)
I'm not sure there is a one-size-fits-all answer, especially since you haven't said what you intend to back up. My usual MO:
Source code: use source control such as svn or git. This means that you will usually have dev, deploy, and repository backups of the code (especially in a DVCS).
Database: this also depends on usage, but usually:
Have a dump_database.py management command that will introspect settings and for each db will output the correct db dump command (taking into consideration the db type and also the database name).
Have a cron job on another server that connects through SSH to the application server, executes the dump db management command, tars the SQL file with the db name + timestamp as the file name, and uploads it to another server (Amazon's S3 in my case). See the sketch after this list.
Media files: e.g. user uploads. Keep a cron job on another server that can SSH into the application server and calls rsync to copy them to another server.
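A rough shell sketch of the database and media items, run from the backup host (hostnames, paths, the dump_database command, and the bucket are all placeholders taken from the description above):
STAMP=$(date +%Y%m%d%H%M)
# dump the database on the app server via the hypothetical management command
ssh deploy@app.example.com "python manage.py dump_database > /tmp/myapp.sql"
scp deploy@app.example.com:/tmp/myapp.sql "myapp-$STAMP.sql"
tar czf "myapp-$STAMP.sql.tar.gz" "myapp-$STAMP.sql"
aws s3 cp "myapp-$STAMP.sql.tar.gz" s3://my-backup-bucket/db/
# mirror user-uploaded media to the backup host
rsync -az deploy@app.example.com:/srv/myapp/media/ ./media-backup/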
The thing to keep in mind, though, is the intended purpose of the backup.
If it's for accidental data loss (be it disk failure, a bug, or SQL injection) or simply restoring, you can keep those cron jobs on the same server.
If you also want to be safe in case the server is compromised, you cannot keep the remote backup credentials (SSH keys, Amazon secret, etc.) on the application server! Otherwise an attacker will gain access to the backup server.