I have a Django application deployed on Google App Engine (flexible environment).
Can I create files and remove files like this:
import os

with open('myfile.xls', 'w') as file_object:
    pass  # writing to the file
os.remove('myfile.xls')
or use NamedTemporaryFile?
This code works now, but a few days after deployment my app starts running slowly. It is not a cold-start issue, and redeploying fixes it. Could files that were never deleted be wasting disk space? Or is there another reason?
Even in the Standard environment, this is not recommended. Python in Standard offers the location /tmp for writing files, but given that App Engine scales instances up and down as needed, there is no guarantee that the file will still be there later:
Filesystem:
The runtime includes a full filesystem. The filesystem is read-only except for the location /tmp, which is a virtual disk storing data in your App Engine instance's RAM.
The Organizing Configuration Files page has a section on Design considerations for instance uptime, which mentions:
Your app should be "stateless" so that nothing is stored on the instance.
You should use Google Cloud Storage instead. Here you can find a Python sample showing how to use Google Cloud Storage from the App Engine Flexible Environment.
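If you only need a scratch file for the lifetime of one request, the standard library's tempfile guarantees cleanup even if an exception interrupts the write. A minimal sketch (the file contents are placeholders):

```python
import os
import tempfile

# NamedTemporaryFile removes the file when the context exits, so nothing
# lingers on the instance (on App Engine Standard, /tmp is RAM-backed anyway).
with tempfile.NamedTemporaryFile(mode='w', suffix='.xls') as f:
    f.write('report data')  # write whatever you would have put in myfile.xls
    f.flush()
    path = f.name           # pass this path to code that needs a real file

print(os.path.exists(path))  # False: the file is gone after the block
```

This only papers over the disk-usage symptom; persistent data should still go to Cloud Storage.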
Related
I am trying to run a wagtail site on the Google Cloud App Engine standard environment using a cloud instance of mySQL. I followed the documentation provided here: https://cloud.google.com/python/django/appengine
Almost everything is working except for when users upload images. When a user uploads an image to the deployed site a 500 error is reported to the user, and the error log shows "OSError: [Errno 30] Read-only file system".
When I run the site locally using the cloud SQL proxy this error does not occur, and I am able to upload images just fine.
Can you advise me on why this would happen, and what to change to avoid this error in deployment?
Thank you in advance!
You are trying to upload the files to the file system (the same directory where you deployed the app). App Engine does not support that: as your app scales to new instances, those files are not replicated, so an app that depends on them will start to fail. This behavior ensures the security and scalability of your application.
There are 2 ways you could work around that, as you can see on this answer I have provided on a similar issue to the one you are facing:
Start using a different directory, such as /tmp. With this approach you will still face the scaling issue mentioned previously, but for temporary files it will suit your needs.
Use Cloud Storage Buckets to store persistent files that will be available for all your instances to use. This is the ideal solution for a scaling app.
You can find more details on how to create this by following this link and here you can get an example on how to upload files to Cloud Storage from your Python app.
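One way to route wagtail/Django media uploads to Cloud Storage is the django-storages package. A hedged settings sketch (assumes `pip install django-storages[google]`; the bucket name is a placeholder):

```python
# settings.py -- send all FileField/ImageField uploads to a GCS bucket
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = 'my-wagtail-media'  # placeholder: your bucket name
# On App Engine, the default service account credentials are picked up
# automatically; locally, set GOOGLE_APPLICATION_CREDENTIALS.
```

With this in place, image uploads no longer touch the read-only file system, so the Errno 30 error goes away.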
I am working with Spring Boot and have a requirement to interact with a legacy app using a .dll file.
The .dll file requires a configuration file which has to be in a physical location like C:/...
The .dll can only read from an absolute physical location, not a relative path inside the src folder.
I am able to successfully interact with the legacy app in localhost with the configuration file located in C:/, but when I have to deploy in PCF is there any possibility to read the configuration file from a physical directory location in PCF?
Like in WAS we can upload a file in the server and use its physical location in the code, can something like this be done in PCF?
You cannot upload files out of band or in advance of running your app. A fresh container is created for you every time you start/stop/restart your application.
As already mentioned, you can bundle any files you require with your application; that's an easy way to make them available. An alternative is to have your app download the files it needs at startup. Yet another option is to create a buildpack that installs the file, although this is more work, so I would not suggest it unless you need the same installed files across many applications.
As far as referencing files, your app has access to the full file system in your container. Your app runs as the vcap user though, so you will have limited access to where you can read/write based on your user permissions. It's totally feasible to read and write to your user's home directory though, /home/vcap. You can also reference files that you uploaded with your app. Your app is located at /home/vcap/app, which is also the $HOME environment variable when your app runs.
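Regardless of language, resolving paths against the container's $HOME illustrates the layout. A Python sketch (the directory names follow Cloud Foundry conventions; the config file name is hypothetical):

```python
import os

# On Cloud Foundry, the app is unpacked at /home/vcap/app and $HOME points
# there while the app runs; locally, HOME is just your own home directory.
home = os.environ.get("HOME", "/home/vcap/app")

# A config file bundled with the app is then referenced relative to $HOME.
config_path = os.path.join(home, "config", "legacy.cfg")  # hypothetical file
print(config_path)
```

The same idea applies in Java: resolve `System.getenv("HOME")` rather than hard-coding an absolute path like C:/.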
Having said all that, the biggest challenge is going to be that you're trying to bundle and use a .dll which is a Windows shared library with a Java app. On Cloud Foundry, Java apps only run on Linux Cells. That means you won't really be able to run your shared library unless you can recompile it as a linux shared library.
Hope that helps!
If both are Spring Boot jars, you can access the files under BOOT-INF/classes/...
Just unzip your jar, look for the config file, and use that path.
Once the jar is exploded in PCF, this hierarchy is maintained.
I have deployed my app (PHP buildpack) to production with cf push app-name. After that I worked on further features and bug fixes. Now I would like to push my local changes to production, but whenever I do, all the images (e.g. profile images) that were saved on the production server are lost.
How do I take over only the changes in the code without losing any stored files on the production server?
It should be like a "git pull"
Your application container should be stateless. To persist data, you should use the offered services. The Swisscom Application Cloud offers an S3 compatible Dynamic Storage (e.g. for pictures or user avatars) or different database services (MongoDB, MariaDB and others). If you need to save user data, you should save it in one of these services instead of the local filesystem of the app's container. If you keep your app stateless, you can migrate and scale it more easily. You can find more information about how your app should be structured to run in a modern cloud environment here. To get more information about how to use your app with a service, please check this link.
Quote from Avoid Writing to the Local File System
Applications running on Cloud Foundry should not write files to the local file system for the following reasons:

Local file system storage is short-lived. When an application instance crashes or stops, the resources assigned to that instance are reclaimed by the platform, including any local disk changes made since the app started. When the instance is restarted, the application will start with a new disk image. Although your application can write local files while it is running, the files will disappear after the application restarts.

Instances of the same application do not share a local file system. Each application instance runs in its own isolated container. Thus a file written by one instance is not visible to other instances of the same application. If the files are temporary, this should not be a problem. However, if your application needs the data in the files to persist across application restarts, or the data needs to be shared across all running instances of the application, the local file system should not be used. We recommend using a shared data service like a database or blobstore for this purpose.
In the future your problem will be "solved" by Volume Services (experimental), which give your app a persistent disk.
Cloud Foundry application developers may want their applications to mount one or more volumes in order to write to a reliable, non-ephemeral file system. By integrating with service brokers and the Cloud Foundry runtime, providers can offer these services to developers through an automated, self-service, and on-demand user experience.
Please subscribe to our newsletter for feature announcements. Please also monitor the CF community for upstream development.
For an application I'm converting to the Cloud Foundry platform, I have a couple of template files. These are templates for documents that will be converted to PDFs. What are my options for making them available to my application? There are no persistent system drives, so it seems I can't just upload them. Cloud Foundry suggests saving them on something like Amazon S3, Dropbox, or Box, or simply keeping them in a database as blobs, but this seems like a very curious workaround.
These templates will change separately from application files, so I'm not intending to have them in the application Jar.
Cloud Foundry suggests saving them on something like Amazon S3, Dropbox, or Box, or simply keeping them in a database as blobs, but this seems like a very curious workaround.
Why do you consider this a curious work-around?
One of the primary benefits of Cloud Foundry is elastic scalability. Once your app is running on CF, you can easily scale it up and down on demand. As you scale up, new copies of the app are started in fresh containers. As you scale down, app containers are destroyed. Only the files that are part of the original app push are put into a fresh container.
If you have files like these templates that are changing over time and you store them in the container's file system, you would need to make sure that all instances of the app have the same copies of the template file at all times as you scale up and down. When you upload new templates, you would have to make sure they get distributed to all instances, not just the one instance processing the upload. As new app instances are created on scale-up, you would need to make sure they have the latest versions of the templates.
Another benefit of CF is app health management. If an instance of your app crashes for any reason, CF will detect this and start a new instance in a fresh container. Again, only files that were part of the original app push will be dropped into the fresh container. You would need to make sure that the latest version of the template files got added to the container after startup.
Storing files like this that have a lifecycle separate from the app in a persistence store outside of the app container solves all these problems, and ensures that all instances of the app get the same versions of the files as you scale instances up and down or as crashed instances are resurrected.
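One way to decouple the templates' lifecycle from the app is to fetch them on demand from the external store and cache them on the ephemeral disk, which is safe precisely because the cache can be rebuilt in any fresh container. A minimal sketch with a hypothetical backend callable:

```python
import os
import tempfile

class TemplateStore:
    """Fetch templates from an external backend, caching them locally.

    The backend is just a callable here (hypothetical); in a real app it
    could wrap S3, Cloud Storage, or a database blob column. The local
    cache lives on ephemeral disk, which is fine: every instance can
    rebuild it independently by fetching again.
    """

    def __init__(self, fetch_fn, cache_dir=None):
        self._fetch = fetch_fn
        self._cache_dir = cache_dir or tempfile.mkdtemp()

    def get(self, name):
        """Return a local path for the named template, fetching on a miss."""
        path = os.path.join(self._cache_dir, name)
        if not os.path.exists(path):
            with open(path, 'w') as f:
                f.write(self._fetch(name))
        return path

# Usage with a stubbed backend:
store = TemplateStore(lambda name: "PDF TEMPLATE BODY")
print(store.get('invoice.tpl'))  # a path inside the temp cache dir
```

Because every instance fetches from the same store, scale-up, scale-down, and crash recovery all see the same template versions.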
I have Django and Django's admin set up on my production box, which means all file uploads and media are stored on the production box.
I have a separate server for files now ( different box, ip ). I want to upload my files there. What are the advantages and disadvantages of these methods I've thought about, and are there "better" methods you can suggest?
Setting up a script to do an rsync on the production box after a file is uploaded to the static server.
Setting up a permanent mount on the production box, by using a fileserver on the static media box such as nfs/samba/ssh-fs and then using the location of the mount as the upload_to path on the production box
Information: Both servers run debian.
EDIT: Prometheus from #django suggested http://docs.djangoproject.com/en/dev/howto/custom-file-storage/
I use Fabric. Especially of interest to you would be fabric.contrib.project.rsync_project().
To paraphrase:

Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
First use fabric.contrib.project.upload_project() to upload the entire project directory. From then on, use fabric.contrib.project.rsync_project() to sync the project with the local version. Of special interest: upload_project() transfers the project as a .tar.gz archive over the secure channel, while rsync_project() uses Unix rsync underneath.
I guess this takes care of your needs; there might not be a need to set up a permanent mount at all.
If your static media is derived from your development process, then Fabric is the way to go. It can automate the deployment and replication of anything-- data, application files, even static database dumps-- from anywhere to anywhere.
If your static media is derived from your application server's operation-- generated PDFs and images, uploads from your users, compiled binaries (I had a customer who wanted that, a Django app that would take in raw x86 assembly and return a compiled and linked binary)-- then you want to use Django Storages, an app that abstracts the actual storage of content for any ImageField or FileField (or anything with a Python File-like interface). It has support for storing them in databases, to Amazon S3, via FTP, and a few others.
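A hedged sketch of pointing Django at an external backend via django-storages (assumes `pip install django-storages boto3`; the bucket, region, and app list entries are placeholders):

```python
# settings.py -- names follow the django-storages S3 backend
INSTALLED_APPS = [
    # ... your existing apps ...
    'storages',
]

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-media-bucket'  # placeholder bucket
AWS_S3_REGION_NAME = 'eu-west-1'             # placeholder region
# Credentials are read from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
```

With this in place, every ImageField and FileField saves to the bucket instead of the local disk, with no model changes needed.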