I am working with Spring Boot and have a requirement to interact with a legacy app using a .dll file.
The .dll file requires a configuration file which has to be in a physical location like C:/...
The .dll can only read the file from an absolute physical location, not a relative path like one inside the src folder.
I am able to successfully interact with the legacy app on localhost with the configuration file located in C:/, but when I deploy to PCF, is there any way to read the configuration file from a physical directory location in PCF?
In WAS (WebSphere Application Server) we can upload a file to the server and use its physical location in the code; can something like this be done in PCF?
You cannot upload files out of band or in advance of running your app. A fresh container is created for you every time you start/stop/restart your application.
As already mentioned, you can bundle any files you require with your application. That's an easy way to make them available. An alternative would be to have your app download the files it needs from somewhere when the app starts up. Yet another option would be to create a buildpack and have it install the file, although this is more work, so I wouldn't suggest it unless you're trying to use the same installed files across many applications.
As far as referencing files, your app has access to the full file system in your container. Your app runs as the vcap user though, so you will have limited access to where you can read/write based on your user permissions. It's totally feasible to read and write to your user's home directory though, /home/vcap. You can also reference files that you uploaded with your app. Your app is located at /home/vcap/app, which is also the $HOME environment variable when your app runs.
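For example, here's a minimal sketch of that path logic (shown in Python for brevity; from Java the same lookup is System.getenv("HOME")), assuming a hypothetical config/legacy-app.cfg bundled with the app:

import os

# Sketch: build an absolute path to a file bundled with the app. On Cloud
# Foundry the app is unpacked at /home/vcap/app, which $HOME points to.
# 'config/legacy-app.cfg' is a hypothetical bundled file.
home = os.environ.get('HOME', '/home/vcap/app')
config_path = os.path.join(home, 'config', 'legacy-app.cfg')

with open(config_path) as f:
    contents = f.read()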
Having said all that, the biggest challenge is going to be that you're trying to bundle and use a .dll, which is a Windows shared library, with a Java app. On Cloud Foundry, Java apps only run on Linux cells. That means you won't really be able to run your shared library unless you can recompile it as a Linux shared library.
Hope that helps!
If both are Spring Boot jars, you can access them using BOOT-INF/classes/..
Just unzip your jar, look for that config file, and use that path.
Once the jar is exploded in PCF, this hierarchy is maintained.
This question assumes you have used Google Drive Sync, or at least know what files it creates in your cloud drive.
While using rclone to sync a local Ubuntu directory to a Google Drive (a.k.a. gdrive) location, I found that rclone wasn't able to create a directory there (error googleapi: Error 500: Internal Error, internalError; the Google Cloud Platform API console revealed that the gdrive API call drive.files.create was failing).
By location I mean the root of the directory structure that the Google Drive Sync app creates on the cloud (e.g. the Computers/laptopName level of Computers/laptopName/(syncedFolder1, syncedFolder2, ...)). In the current case, the gdrive sync app (famously unavailable on Linux) was running on a separate Windows machine. It was in this location that rclone wasn't able to create a dir.
Forget rclone. Trying to manually create the folder in the web app also fails as follows:
    Working...
    Could not execute action
Why is this happening, and how can I achieve this: making a directory in the cloud location where gdrive sync has put all my synced folders?
Basically, you can't. I found an explanation here.
If I am correct in my suspicion, there are a few things you have to understand:
Even though you may be able to create folders inside the Computers isolated containers, doing so will immediately create that folder not only in your cloud, but on that computer/device. Any changes to anything inside the Computers container will automatically be synced to the device/computer the container is linked to- just like any change on the device/computer side is also synced to the cloud.
It is not possible to create anything at the "root" level of each container in the cloud. If that were permitted then the actual preferences set in Backup & Sync would have to be magically altered to add that folder to the preferences. Thus this is not done.
So while folders inside the synced folders may be created, no new modifications may be made in the "root" dir.
With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read its content. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in concept similar to Kubernetes ConfigMaps, where you can take your configuration and mount it as a file. With CloudFoundry, you do have a few similar options. They are not exactly the same though, so you'll have to determine whether one will work for your needs.
You can pass config through environment variables (or through user-provided service bindings, but that comes through an environment variable VCAP_SERVICES as well). This won't be a file, but perhaps you can have your UI read that instead (You didn't mention how the UI gets that file, so I can't comment further. If you elaborate on that point like if it's HTTP or reading from disk, I could perhaps expand on this option).
If it absolutely needs to be a file, your application could read the environment variable contents and write it to disk when it starts. If your application isn't able to do that like if you're using Nginx, you could include a .profile script at the root of your application that reads it and generates the file. For example: echo "$CFG_VAR" > /dist/file or whatever you need to do to generate that file.
A couple more notes on environment variables. There are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They are also not great for binary configuration, in which case you'd need to base64-encode your data first.
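As a minimal sketch of that write-it-at-startup idea (the destination path is illustrative, and CFG_VAR is the variable from the echo example above):

import base64
import os

# Sketch: read configuration from an environment variable at startup and
# write it to disk so the rest of the app can treat it as a regular file.
# base64-decoding is only needed if the content is binary or multiline.
dest = os.path.join(os.environ.get('HOME', '.'), 'dist', 'feature-flags.json')
os.makedirs(os.path.dirname(dest), exist_ok=True)
with open(dest, 'wb') as f:
    f.write(base64.b64decode(os.environ['CFG_VAR']))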
You can pull the config file from a config server and cache it locally. This can be pretty simple. The first thing your app does when it starts is to reach out and download the file, place it on the disk and the file will persist there for the duration of your application's lifetime.
If you don't have a server-side application like if you're running Nginx, you can include a .profile script (can be any executable script) at the root of your application which can use curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
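A minimal sketch of that startup download, assuming a hypothetical CONFIG_URL environment variable pointing at wherever you durably store the file:

import os
import urllib.request

# Sketch: fetch the config once at startup and cache it on the local disk,
# where it persists for the lifetime of the container.
url = os.environ['CONFIG_URL']
dest = os.path.join(os.environ.get('HOME', '.'), 'dist', 'feature-flags.json')
os.makedirs(os.path.dirname(dest), exist_ok=True)
urllib.request.urlretrieve(url, dest)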
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.
I have deployed a Windows exe on Cloud Foundry; now I want to get back the files that the exe has generated.
Is there any way of retrieving the files using a batch script?
It's important to note that you shouldn't write anything critical to the local file system. The local file system is ephemeral, so if your app crashes/restarts for any reason before you obtain the files, they are gone.
Having said that, if you have an application that is doing some processing and generating output files, the recommended way to handle this would be to have the process write those files somewhere external and durable, like a database, SFTP server or S3-compatible service. In short, push the files out somewhere, instead of trying to get into the container to obtain the files.
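For illustration, a minimal sketch of pushing a generated file to an S3-compatible service with boto3 (the bucket, key, endpoint, and file names are all hypothetical; credentials would come from a bound service or environment variables):

import boto3

# Sketch: push a generated file somewhere durable as soon as it is produced,
# instead of leaving it on the ephemeral container disk.
s3 = boto3.client('s3', endpoint_url='https://s3.example.com')
s3.upload_file('output/result.csv', 'my-output-bucket', 'results/result.csv')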
If you must pull them out of the container, you have a couple options:
Run a small HTTP server in the container and expose the files on it. Make sure you are appropriately securing the server to prevent unauthorized access to your files (see the sketch after this list).
Run scp or sftp and download the files. Instructions for doing that can be found here, but roughly speaking: run cf app app-name --guid to get the app GUID, cf ssh-code to get a passcode, then scp -P 2222 -oUser=cf:<insert app-guid>/0 ssh.system_domain:my-remote-file.json ./ to download the file named my-remote-file.json. Obviously, change the path/file to what you want to download.
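Here is the sketch promised for the first option: a bare-bones HTTP file server. It has no authentication at all, so treat it purely as a starting point; the output directory is hypothetical, and Cloud Foundry supplies the listen port via $PORT.

import functools
import http.server
import os

# Sketch: expose the directory containing generated files over HTTP.
# WARNING: no authentication here -- add some before using this for real.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory='/home/vcap/app/output')
port = int(os.environ.get('PORT', 8080))
http.server.HTTPServer(('0.0.0.0', port), handler).serve_forever()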
I have a Django application deployed on Google App Engine (flexible environment).
Can I create files and remove files like this:
import os

with open('myfile.xls', 'w') as file_object:
    file_object.write('some data')  # writing to file

os.remove('myfile.xls')
or use NamedTemporaryFile?
This code works now, but a few days after deploying, my app becomes slow. It is not about cold starts; redeploying fixes it. Could files that were not deleted be wasting disk space, or is there another reason?
Even in the Standard environment, this is not recommended. Python in Standard offers the location /tmp to write files, but given that App Engine scales up as needed, there is no guarantee that the file would still be there later:
Filesystem:
The runtime includes a full filesystem. The filesystem is read-only except for the location /tmp, which is a virtual disk storing data in your App Engine instance's RAM.
In the Organizing Configuration Files page there is a section about Design considerations for instance uptime, which mentions:
Your app should be "stateless" so that nothing is stored on the instance.
You should use Google Cloud Storage instead. Here you can find an example of Python Google Cloud Storage sample for Google App Engine Flexible Environment.
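As a minimal sketch of that approach (the bucket name is hypothetical; it assumes the google-cloud-storage package and application default credentials):

from google.cloud import storage

# Sketch: write the file to Cloud Storage instead of the instance's disk,
# so it survives instance restarts and scaling.
client = storage.Client()
bucket = client.bucket('my-app-files')
bucket.blob('exports/myfile.xls').upload_from_filename('/tmp/myfile.xls')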
I have Django and Django's admin set up on my production box. This means that all file uploads are stored on the production box; all the media is stored there.
I have a separate server for files now ( different box, ip ). I want to upload my files there. What are the advantages and disadvantages of these methods I've thought about, and are there "better" methods you can suggest?
Setting up a script that runs an rsync on the production box, pushing each file to the static media server after it is uploaded.
Setting up a permanent mount on the production box, by using a fileserver on the static media box such as nfs/samba/ssh-fs and then using the location of the mount as the upload_to path on the production box
Information: Both servers run debian.
EDIT: Prometheus from #django suggested http://docs.djangoproject.com/en/dev/howto/custom-file-storage/
I use Fabric. Especially of interest to you would be fabric.contrib.project.rsync_project().
To paraphrase:
    Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
First use fabric.contrib.project.upload_project() to upload the entire project directory. From then on, bank on using fabric.contrib.project.rsync_project() to sync the project with the local version. Also of special interest: this uses Unix rsync underneath, over SSH (upload_project() transfers a .tar.gz of the project via the secure scp).
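A minimal fabfile.py sketch using the Fabric 1.x API (the host and paths are illustrative):

# fabfile.py
from fabric.api import env
from fabric.contrib.project import rsync_project

env.hosts = ['media.example.com']

def sync_media():
    # Mirror the local media directory to the static-media box over SSH.
    rsync_project(remote_dir='/srv/media', local_dir='media/', delete=False)

You would then run fab sync_media after each upload or deploy.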
I guess this takes care of your needs. There might not be a need to set up a permanent mount, etc.
If your static media is derived from your development process, then Fabric is the way to go. It can automate the deployment and replication of anything-- data, application files, even static database dumps-- from anywhere to anywhere.
If your static media is derived from your application server's operation-- generated PDFs and images, uploads from your users, compiled binaries (I had a customer who wanted that, a Django app that would take in raw x86 assembly and return a compiled and linked binary)-- then you want to use Django Storages, an app that abstracts the actual storage of content for any ImageField or FileField (or anything with a Python file-like interface). It has support for storing files in databases, on Amazon S3, via FTP, and a few others.
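For illustration, a minimal sketch of wiring django-storages to S3 (the settings names come from django-storages; the bucket name is hypothetical):

# settings.py -- route all FileField/ImageField storage to S3
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-media-bucket'

# models.py -- uploads on this field now land in S3 instead of the
# production box's local disk.
from django.db import models

class Upload(models.Model):
    attachment = models.FileField(upload_to='uploads/')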