How to obtain output files that a deployed application has created on Cloud Foundry? - cloud-foundry

I have deployed a Windows exe on Cloud Foundry, and now I want to retrieve the files that the exe has generated.
Is there any way of retrieving files using a batch script?

It's important to note that you shouldn't write anything critical to the local file system. The local file system is ephemeral, so if your app crashes or restarts for any reason before you obtain the files, they are gone.
Having said that, if you have an application that is doing some processing and generating output files, the recommended way to handle this is to have the process write those files somewhere external and durable, like a database, an SFTP server, or an S3-compatible service. In short, push the files out somewhere instead of trying to get into the container to retrieve them.
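For example, here is a minimal sketch of pushing generated output to an S3-compatible store with the AWS CLI; the bucket name, paths, and endpoint are assumptions for illustration, not values from your environment:
# Sketch: copy generated output off the container's ephemeral disk as soon as it's produced.
# Assumes the AWS CLI is installed and credentials are supplied via the environment.
aws s3 cp ./output/report.csv s3://my-output-bucket/reports/report.csv
# For a non-AWS, S3-compatible service, point the CLI at its endpoint:
aws s3 cp ./output/report.csv s3://my-output-bucket/reports/report.csv --endpoint-url https://s3.example.internal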
If you must pull them out of the container, you have a couple options:
Run a small HTTP server in the container and expose the files through it. Make sure you secure the server appropriately to prevent unauthorized access to your files.
Run scp or sftp and download the files. Instructions for doing that are in the Cloud Foundry SSH documentation, but roughly speaking you run cf app app-name --guid to get the app guid, cf ssh-code to get a one-time passcode, then scp -P 2222 -oUser=cf:<insert app-guid>/0 ssh.system_domain:my-remote-file.json ./ to download the file named my-remote-file.json. Obviously, change the path/file to what you want to download.
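As a rough, scriptable version of the scp approach (the app name and system domain are placeholders; substitute your own, and note that the passcode from cf ssh-code is single-use):
#!/bin/sh
# Sketch: fetch a generated file out of a running app container over scp.
# APP, SYSTEM_DOMAIN and the remote file name are assumptions - adjust them.
APP=my-app
SYSTEM_DOMAIN=system.example.com
GUID=$(cf app "$APP" --guid)
PASSCODE=$(cf ssh-code)
# scp prompts for a password; supply the one-time passcode (sshpass avoids the prompt).
sshpass -p "$PASSCODE" scp -P 2222 -oUser=cf:$GUID/0 ssh.$SYSTEM_DOMAIN:my-remote-file.json ./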

Related

How to create a folder in Google Drive Sync created cloud directory?

This question assumes you have used Google Drive Sync, or at least know what files it creates in your cloud drive.
While using rclone to sync a local Ubuntu directory to a Google Drive (a.k.a. gdrive) location, I found that rclone wasn't able to create a directory there (error googleapi: Error 500: Internal Error, internalError; the Google Cloud Platform API console revealed that the gdrive API call drive.files.create was failing).
By location I mean the root of the directory structure that the Google Drive Sync app creates on the cloud, i.e. directly under Computers/laptopName in Computers/laptopName/(syncedFolder1,syncedFolder2,...). In this case, the gdrive sync app (famously unavailable on Linux) was running on a separate Windows machine. It was in this location that rclone wasn't able to create a dir.
Forget rclone. Trying to manually create the folder in the web app also fails as follows.
Working...
Could not execute action
Why is this happening, and how can I achieve this - making a directory in the cloud location where gdrive sync has put all my synced folders?
Basically, you can't. I found an explanation here.
If I am correct in my suspicion, there are a few things you have to understand:
Even though you may be able to create folders inside the Computers isolated containers, doing so will immediately create that folder not only in your cloud, but on that computer/device. Any changes to anything inside the Computers container will automatically be synced to the device/computer the container is linked to- just like any change on the device/computer side is also synced to the cloud.
It is not possible to create anything at the "root" level of each container in the cloud. If that were permitted then the actual preferences set in Backup & Sync would have to be magically altered to add that folder to the preferences. Thus this is not done.
So while folders may be created inside the synced folders, no new modifications may be made in the "root" dir.

How to mount a file via CloudFoundry manifest similar to Kubernetes?

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in the /dist folder at deployment time?
To add more information, when we mount a file, the UI later can download the file and read the content. We are using React and any call to the server has to go through Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in equivalent of the Kubernetes mechanism for mounting configuration as a file. It does have a few similar options; they are not exactly the same, though, so you'll have to determine whether one of them works for your needs.
You can pass config through environment variables (or through user-provided service bindings, which also arrive via the VCAP_SERVICES environment variable). This won't be a file, but perhaps you can have your UI read that instead. (You didn't mention how the UI gets the file, so I can't comment further; if you elaborate on that point, e.g. whether it's fetched over HTTP or read from disk, I could expand on this option.)
If it absolutely needs to be a file, your application could read the environment variable contents and write them to disk when it starts. If your application isn't able to do that, for example if you're using Nginx, you could include a .profile script at the root of your application that reads the variable and generates the file, e.g. echo "$CFG_VAR" > /dist/file or whatever you need to do to generate that file (see the sketch after the next paragraph).
A couple more notes on environment variables: there are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K), and they are not great for binary configuration, in which case you'd need to base64-encode your data first.
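As a minimal sketch of that .profile approach - the variable name and target directory are assumptions, not anything CloudFoundry defines:
# .profile - executed in the app container before the app starts.
# CFG_VAR and dist/ are illustrative; the script runs from the app root (/home/vcap/app).
mkdir -p dist
echo "$CFG_VAR" > dist/feature-flags.json
# For binary or multi-line content, set a base64-encoded variable and decode it instead:
# echo "$CFG_VAR_B64" | base64 -d > dist/feature-flags.bin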
You can pull the config file from a config server and cache it locally. This can be pretty simple: the first thing your app does when it starts is reach out, download the file, and place it on disk, where it will persist for the duration of that application instance's lifetime.
If you don't have a server-side application, for example if you're running Nginx, you can include a .profile script (it can be any executable script) at the root of your application which uses curl or another tool to download and set up that configuration (a sketch follows below).
You can replace "config server" with an HTTP server, a Git repository, a Vault server, CredHub, a database, or really any place where you can durably store your data.
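For example, a minimal .profile sketch that pulls the file at startup; the URL and target path are assumptions for illustration:
# .profile - download configuration before the app starts and cache it on the
# (ephemeral) local disk for the lifetime of this app instance.
set -e
mkdir -p dist
curl -fsSL "${CONFIG_URL:-https://config.example.com/feature-flags.json}" -o dist/feature-flags.json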
Not recommended, but you can also push your configuration file with the application. This is as simple as including it in the directory or archive that you push. The obvious downside is that it couples your configuration to the application bits you push; depending on where you work, the policies you have to follow, and the tools you use, this may or may not matter.
There may be other variations as well. Loading the file when your application starts, or through a .profile script, is very flexible.

How to retrieve heapdump in PCF using SMB

I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in my PCF manifest yml to create a heap dump on OutOfMemoryError.
I understand I can point at an SMB or NFS path in the JVM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is no longer accessible?
Kindly help.
I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in my PCF manifest yml to create a heap dump on OutOfMemoryError
You don't need to set these options. The Java buildpack will take care of this for you. By default, it installs a jvmkill agent which will automatically do this.
https://github.com/cloudfoundry/java-buildpack/blob/main/docs/jre-open_jdk_jre.md#jvmkill
In addition, the jvmkill agent is smart enough that if you bind a SMB or NFS volume service to your application, it will automatically save the heap dumps to that location. From the doc link above...
If a Volume Service with the string heap-dump in its name or tag is bound to the application, terminal heap dumps will be written with the pattern <CONTAINER_DIR>/<SPACE_NAME>-<SPACE_ID[0,8]>/<APPLICATION_NAME>-<APPLICATION_ID[0,8]>/<INSTANCE_INDEX>--<INSTANCE_ID[0,8]>.hprof
The key is that you name the bound volume service appropriately, i.e. the name must contain the string heap-dump.
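As a rough sketch of what that binding could look like with the NFS volume service (the service offering and plan names, share, and mount path all depend on your platform and are only illustrative here):
# Sketch: create and bind a volume service whose name contains "heap-dump" so the
# jvmkill agent writes terminal heap dumps to it. Values below are placeholders.
cf create-service nfs Existing heap-dump-volume -c '{"share": "nfs.example.com/export/heapdumps"}'
cf bind-service my-app heap-dump-volume -c '{"mount": "/var/heap-dumps"}'
cf restage my-app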
You may also do the same thing with non-terminal heap dumps using the Java Memory Agent that the Java buildpack can install for you upon request.
I understand I can point at an SMB or NFS path in the JVM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is no longer accessible?
To retrieve the heap dumps you need to somehow access the file server. I say "somehow" because it entirely depends on what you are allowed to do in your environment.
You may be permitted to mount the SMB/NFS volume directly to your PC. You could then access the files directly.
You may be able to retrieve the files through some other protocol like HTTP or FTP or SFTP.
You may be able to mount the SMB or NFS volume to another application, perhaps using the static file buildpack, to serve up the files for you.
You may need to request the files from an administrator with access.
Your best bet is to talk with the admin for your SMB or NFS server. He or she can tell you which of these options are available in your environment.
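If mounting the volume directly to your PC turns out to be allowed, a sketch of pulling a dump off an NFS export might look like this (host, export, and path patterns are assumptions):
# Sketch: mount the same NFS export locally and copy the dump off. Illustrative values only.
sudo mkdir -p /mnt/heapdumps
sudo mount -t nfs nfs.example.com:/export/heapdumps /mnt/heapdumps
cp /mnt/heapdumps/my-space-*/my-app-*/*.hprof ./   # matches the .hprof naming pattern quoted above
sudo umount /mnt/heapdumps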

Physical file location in Pivotal Cloud Foundry

I am working with Spring Boot and have a requirement to interact with a legacy app using a .dll file.
The .dll file requires a configuration file which has to be in a physical location like C:/...
The .dll can only read from a physical location, not from a relative path such as one under the src folder.
I am able to successfully interact with the legacy app on localhost with the configuration file located in C:/, but when I deploy to PCF, is there any way to read the configuration file from a physical directory location in PCF?
Like in WAS, where we can upload a file to the server and use its physical location in the code - can something like this be done in PCF?
You cannot upload files out of band or in advance of running your app. A fresh container is created for you every time you start/stop/restart your application.
As already mentioned, you can bundle any files you require with your application. That's an easy way to make them available. An alternative would be to have your app download the files it needs when it starts up. Yet another option would be to create a buildpack and have it install the file, although this is more work, so I wouldn't suggest it unless you're trying to use the same installed files across many applications.
As far as referencing files, your app has access to the full file system in your container. Your app runs as the vcap user though, so you will have limited access to where you can read/write based on your user permissions. It's totally feasible to read and write to your user's home directory though, /home/vcap. You can also reference files that you uploaded with your app. Your app is located at /home/vcap/app, which is also the $HOME environment variable when your app runs.
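If you want to confirm where your bundled files end up, here is a quick sketch using cf ssh (the app name and file path are placeholders):
# Sketch: inspect the running container's file system to see where your files live.
cf ssh my-app -c 'echo $HOME && ls -l /home/vcap/app'
cf ssh my-app -c 'cat /home/vcap/app/config/legacy.cfg'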
Having said all that, the biggest challenge is going to be that you're trying to bundle and use a .dll which is a Windows shared library with a Java app. On Cloud Foundry, Java apps only run on Linux Cells. That means you won't really be able to run your shared library unless you can recompile it as a linux shared library.
Hope that helps!
If your app is packaged as a Spring Boot jar, you can access bundled files using BOOT-INF/classes/..
Just unzip your jar, look for that config file, and use that path.
Once the jar is exploded in PCF, this hierarchy is maintained.
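As a small sketch of locating that path (the jar and file names are placeholders):
# Sketch: find where a bundled config file sits inside the Spring Boot jar, so you can
# reference it after PCF explodes the jar under /home/vcap/app.
unzip -l app.jar | grep legacy.cfg
# typically shows BOOT-INF/classes/legacy.cfg, i.e. /home/vcap/app/BOOT-INF/classes/legacy.cfg at runtime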

Running .net core web app on AWS Beanstalk - file write permissions

Ok, so I've got a web application written in .NET Core which I've deployed to AWS Elastic Beanstalk. That part was pretty easy, but I've already hit a snag.
The application fetches JSON data from an external source and writes to a local file, currently to wwwroot/data/data.json under the project root. Once deployed to AWS, this functionality is throwing an access denied exception when it tries to write the file.
I've seen something about creating a folder called .ebextensions containing a config file with container commands that set permissions on certain paths/files after deployment. I've tried that, but it does not seem to do anything for me - I don't even know whether those commands are being executed, so I have no idea what's happening, if anything.
This is the config file I created under the .ebextensions folder:
{
  "container_commands": {
    "01-aclchange": {
      "command": "icacls \"C:/inetpub/AspNetCoreWebApps/app/wwwroot/data\" /grant DefaultAppPool:(OI)(CI)"
    }
  }
}
The name of the .config file matches the application name in AWS, but I also read somewhere that the name does not matter, as long as it has the .config extension.
Has anyone successfully done something like this? Any pointers appreciated.
Rather than trying to fix permission issues with writing to local storage within AWS Elastic Beanstalk, I would suggest using something like Amazon S3 for storing files. A few reasons:
Not having to worry about file permissions.
S3 files are persistent.
You may run into issues with losing local files when you republish your application.
If you ever move to using something like containers, you will lose the file every time the container is taken down.
S3 is incredibly cheap to use.