Django + App Engine - Development workflow when storing to GCS

I'm developing a Django app on App Engine; one of its functions is to upload a file from the user to Google Cloud Storage. When deployed to production, this works well and the file is uploaded without any issue.
But I need to work on it in my dev environment as well. I have a DEBUG variable which indicates the environment. So when using the development server, I need to upload the file and store it somewhere in my local directory. However, the App Engine sandbox prevents use of the open() system call, so I can't write any file to my local disk in the dev environment.
What is the common solution to this? I could simply not write the file anywhere at all in the dev environment, but I need to read this file at various later stages in the web app.
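One common approach is to write through the App Engine cloudstorage client library: dev_appserver emulates GCS locally, so the same code path works in dev and production without calling open(). A minimal sketch, assuming the GoogleAppEngineCloudStorageClient package; the bucket name is a placeholder:

    # Hypothetical sketch: write an upload through the GAE cloudstorage client.
    # dev_appserver provides a local GCS emulation, so the same code runs in
    # both DEBUG and production. The bucket name 'my-app-bucket' is made up.
    import cloudstorage as gcs

    def store_upload(uploaded_file, object_name):
        path = '/my-app-bucket/' + object_name
        with gcs.open(path, 'w', content_type=uploaded_file.content_type) as fh:
            for chunk in uploaded_file.chunks():  # Django UploadedFile API
                fh.write(chunk)
        return path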

Related

Persistent UGC File Storage on AWS For Docker Application

I have a Docker-based Flask app that I have been developing and it's nearing completion. I am currently moving to hosting it on AWS. The app allows users to generate various forms of content (usually image files) that are saved into a UGC folder within the /static folder of the app in my dev environment. This temporary solution worked fine in dev, but it won't suffice in production, as the static/ugc folder will be destroyed with each Docker image update.
I therefore need an alternative solution and have been investigating EFS. Does anybody have experience with this service? Or with hosting persistent static files outside a Docker app container in general, and could advise?
You should probably look at using the S3 object storage service, via the boto3 Python client.
There's also a Flask extension, Flask-S3, which allows you to host general assets on S3 automatically. You'd probably need to code the logic for user-uploaded content yourself.
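For the user-uploaded content itself, a minimal boto3 sketch might look like this (bucket name and key prefix are placeholders):

    # Minimal sketch: push a user-generated file to S3 instead of static/ugc.
    # Bucket name and key prefix are hypothetical placeholders.
    import boto3

    s3 = boto3.client('s3')

    def save_ugc(file_obj, filename):
        key = 'ugc/' + filename
        # upload_fileobj streams a file-like object to S3
        s3.upload_fileobj(file_obj, 'my-ugc-bucket', key)
        return 'https://my-ugc-bucket.s3.amazonaws.com/' + key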

Serving an Angular 2 App on AWS S3

I have created an Angular 2 form which posts the form data to a Postgres DB using a REST API. Now I want to serve my Angular 2 app from AWS S3. I googled this and found that bundling with webpack is a solution, but I have not been able to create one. I want to know where to start in order to bundle my code and serve it on S3.
GitHub link for Form: https://github.com/aanirudhraj/Angular2form_signaturepad_API
Thanks for the Help!!
The quickest way is to build the app using angular-cli and then deploy the content of the 'dist' directory as a static site in S3 (an S3 bucket can be configured to host a static site; make sure you assign read permission to 'anybody' to avoid HTTP 4xx return codes).
You just need to host it as a static site on S3.
Check this: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
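If you prefer to script the setup, a boto3 sketch along these lines could configure the bucket for website hosting and upload the dist output (the bucket name is a placeholder):

    # Sketch: enable static-site hosting and upload the Angular 'dist' output.
    # The bucket name is a placeholder.
    import mimetypes
    import os
    import boto3

    s3 = boto3.client('s3')
    BUCKET = 'my-angular-site'

    # Serve index.html as both index and error document (helps SPA routing).
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            'ErrorDocument': {'Key': 'index.html'},
        },
    )

    # Upload every file in dist/ with a sensible Content-Type.
    for root, _, files in os.walk('dist'):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, 'dist').replace(os.sep, '/')
            ctype = mimetypes.guess_type(path)[0] or 'application/octet-stream'
            s3.upload_file(path, BUCKET, key, ExtraArgs={'ContentType': ctype})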
I infer from your code that you are using angular-cli.
Create a dev/production build
ng build --dev / ng build --prod
The content of your dist folder will contain the bundled files for deployment. Your primary file of reference will be 'index.html', as this is what loads your Angular app.
You need to decide what kind of server you'll be using to serve your webapp.
For development purposes, when we do ng serve, webpack-dev-server is used as a static file server (local development). For an actual deployment, I'd recommend going with the most comfortable/cost-effective option you can have; see the sketch after this list for a minimal local example.
Static file server options:
Directly hosting the website in AWS S3 as a static website.
ASP.NET Core with static file server middleware. (*)
Node.js Express with static file server middleware. (*)
A Java servlet for serving static files. (*)
(*) These approaches will also allow you to have some server-side code if you require it in the future.
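For a quick local check of the production build, a minimal static file server sketch in Python (illustrative only; the options above serve the same purpose in their own stacks):

    # Illustrative sketch: serve the 'dist' build output locally on port 8080,
    # analogous to the static file server options listed above.
    import functools
    import http.server

    Handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory='dist')

    with http.server.ThreadingHTTPServer(('', 8080), Handler) as httpd:
        print('Serving dist/ on http://localhost:8080')
        httpd.serve_forever()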
When you deploy your ng2 app, you should use AOT (ahead-of-time) compilation.
I guess you are currently using JIT (just-in-time) compilation.
From the Angular 2 guide:
With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the app first.
When you use JIT compilation, your browser has to download vendor.js, which includes the Angular 2 compiler, and it compiles your app just in time. That is slow, and your clients have to download the vendor file. With AOT you don't need to ship the compiler in the vendor bundle, so the resources are smaller.
I recommend using AOT compilation when you deploy your app, together with lazy loading to keep resource sizes down.
If you are curious about ng2 AOT compilation, read this guide:
angular2-cookbook-AOT
And here is an example angular2 app with webpack2 and lazy loading; use the file structure and config files from there.
When I tested with the example app, the files bundled with AOT were smaller than 500 KB.
angular2-webpack2-aot
When you use AOT compilation with @ngtools/webpack (or whatever tooling you prefer), just put all the files from the dist directory (compiled with AOT) into your S3 bucket, and I recommend putting an AWS CloudFront cache in front of your S3 bucket resources.
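As a hedged sketch of that last step: after syncing a new dist build to the bucket, you would typically invalidate the CloudFront cache so clients pick up the new bundles. With boto3 (the distribution ID is a placeholder):

    # Sketch: invalidate CloudFront after deploying a new build to S3 so that
    # cached bundles are refreshed. The distribution ID is a placeholder.
    import time
    import boto3

    cloudfront = boto3.client('cloudfront')

    cloudfront.create_invalidation(
        DistributionId='E1234EXAMPLE',
        InvalidationBatch={
            'Paths': {'Quantity': 1, 'Items': ['/*']},
            # CallerReference must be unique per invalidation request.
            'CallerReference': str(time.time()),
        },
    )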

Can Jetty hot redeployment work without service interruption?

I'm running a web app with Jetty 9.0.5 (I could update; 9.1.2 is the latest as I write this). I have the usual web app deployer as described in the Jetty docs and defined in etc/jetty-deploy.xml. I use a Jetty XML file to define my web app context, so when I push new code to my production server, I upload a new myapp.war file using rsync and then touch that myapp.xml file. This works pretty well, but there are a few seconds where the app throws a NullPointerException or other weirdness, and some users appear to get corrupt statically served files (.js files from the war), so that they have to flush their browser's cache for the app to work again.
Is this supposed to work perfectly, or do you expect a brief dead period like this?
I don't put the myapp.war in the webapps directory (only the myapp.xml is there), and explodeWars is true in the deployer.

Heroku Django project read-only file system

I'm deploying a Django project on Heroku. It works fine, but in the Django admin, when I try to upload an image I get this error:
OSError at /admin/blocks/block/add/
[Errno 30] Read-only file system: '/home/goldwedd'
This is by design.
Your app is compiled into a slug for fast distribution by the dyno manager. The filesystem for the slug is read-only, which means you cannot dynamically write to the filesystem for semi-permanent storage. The following types of behaviors are not supported:
Caching pages in the public directory
Saving uploaded assets to local disk (e.g. with attachment_fu or paperclip)
Writing full-text indexes with Ferret
Writing to a filesystem database like SQLite or GDBM
Accessing a git repo for an app like git-wiki
https://devcenter.heroku.com/articles/read-only-filesystem
If you want to upload files, you need to do so to S3 or any of the other storage backends supported by django-storages.
Yes, you cannot upload media files on Heroku. You can only deploy things via git, and if you deploy static or media files they will only be available with some workarounds.
For live file uploads you should consider using an external service like Amazon S3. There is an excellent library for Django to deal with it (it's also suggested by the Heroku dev site, as far as I remember): django-storages
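A minimal settings sketch for django-storages with S3 (all names and values below are placeholders):

    # settings.py sketch: route Django file uploads to S3 via django-storages.
    import os

    INSTALLED_APPS = [
        # ...
        'storages',
    ]

    # Send FileField/ImageField uploads to S3 instead of the dyno filesystem.
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    AWS_STORAGE_BUCKET_NAME = 'my-media-bucket'
    AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
    AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']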

How to fix the error 'input Web Services Description Language (WSDL) file is not valid'?

I'm facing a problem when I deploy a DTSX file to a production server.
In the DTSX file I consume a web service through the Web Service Task.
The Web Service Task requires a WSDL file that it reads from a local path.
There is no problem on my machine, but on the production server that path will never exist.
I don't think it's acceptable to ask my client for permission to access his production server and create a folder to store that WSDL file. Besides, what will happen when the WSDL changes? I would have to deploy my DTSX package again and also replace the WSDL file on the server. So I don't think that's an option.
So, my question is:
Is there a way to avoid having a physical file with the WSDL specification? Could it be deployed within the DTSX deployment package, or saved in a variable, or how else could I do this?
I've been searching a lot, but still no luck.
Any help would be really appreciated.
To achieve this, one option is to use a Script Task. With the help of the .NET class System.Net.WebClient, you can access the WSDL URL and download the contents of the WSDL file to the system's temporary folder. You can get that folder (the value of the TEMP environment variable) using the .NET method System.IO.Path.GetTempPath(). The newly generated temporary path of the WSDL file can then be stored in an SSIS package variable, which the Web Service Task can be configured to use instead of relying on a fixed local path. During development you will still need the WSDL file in a local path, but once you deploy the package to production, the WSDL file need not exist on the local drive.
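The Script Task body itself would be C# or VB.NET (System.Net.WebClient plus System.IO.Path.GetTempPath(), as described above); purely to illustrate the flow, here is the same logic sketched in Python with placeholder names:

    # Illustration of the flow only; in the real package this logic lives in an
    # SSIS Script Task using System.Net.WebClient and Path.GetTempPath().
    import os
    import tempfile
    import urllib.request

    WSDL_URL = 'http://example.com/service?wsdl'  # placeholder URL

    # Download the WSDL into the system temp folder (the TEMP directory).
    wsdl_path = os.path.join(tempfile.gettempdir(), 'service.wsdl')
    urllib.request.urlretrieve(WSDL_URL, wsdl_path)

    # In SSIS, wsdl_path would be written into a package variable that the
    # Web Service Task is configured to read, instead of a hard-coded path.
    print('WSDL saved to', wsdl_path)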
Hope that helps.