Django on Google App Engine: No such file or directory

I have a Django 2.x site with Python 3.6 on Google Cloud; the app runs on App Engine flex. (My first app!)
My app has an upload page where I ask the user to upload a JSON file (which is never kept on the site). I open it and generate another file from it.
I know that Django, depending on the size of the file, keeps the upload in memory, but I was never able to use that. So in my local environment I created a folder I called temp_reports, generated the files there, uploaded them to a bucket, and then deleted them from temp_reports.
So I was wondering: since the site is already on Google Cloud, can I create these files directly in the bucket? Or do I still need to generate them on the site and then upload them?
Right now, when generating them from my site, I keep getting the following error:
Exception Value:
[Errno 2] No such file or directory: '/home/vmagent/app/temp_reports/file_516A3E1B80334372ADB440681BB5F030.xlsx'
I have this in my app.yaml:
handlers:
- url: /temp_reports
  static_dir: temp_reports
Is there something I am missing in order to use temp_reports? Or how can I create a file directly in my bucket?

You can certainly use the storage bucket without having to upload the file manually. This can be done with the Google Cloud Storage client library (the preferred method), which lets you store and retrieve data directly from the bucket. Alternatively, you can use the Cloud Storage API for the same functionality, but it takes more effort to set up.
You want to use the upload_from_string method from google.cloud.storage.blob.Blob.
upload_from_string(data, content_type='text/plain', client=None, predefined_acl=None)
So to create a text file directly on the bucket you could do this:
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket('mybucket')
blob = bucket.blob('mytextfile.txt')
blob.upload_from_string('Text file contents', content_type='text/plain')
For more information you can refer to the following page:
https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_string
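For the xlsx reports in the question, the same idea removes the need for a temp_reports folder entirely: build the file in memory and hand a file-like object to the blob. A minimal sketch, with the bucket and blob names as placeholders:

import io
from google.cloud import storage

def upload_report(data, blob_name):
    # Build the report in memory instead of writing it to a local temp folder.
    client = storage.Client()
    bucket = client.get_bucket('my-report-bucket')  # placeholder bucket name
    blob = bucket.blob(blob_name)
    # upload_from_file accepts any file-like object, so a BytesIO works here.
    blob.upload_from_file(
        io.BytesIO(data),
        content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    )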

Related

S3 Static site downloads index.html after uploading files

I have a static site hosted on S3 at tidbitstatistics.com.
I wrote a script using boto3 to replace the files with new ones, and since then my site doesn't open; instead it downloads the index.html file.
From what I can tell, I didn't change any settings. The site was working fine before I re-uploaded the files. Since then, I have deleted all the files and re-uploaded them manually, but I still run into the same error.
I thought this might have to do with the content types, but they were the correct text/html types when re-uploading manually, and I am adjusting my script to specify content types by calling put_object instead of upload_file with boto3.
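Something like this is what I mean (a sketch; assuming the bucket is named after the site):

import boto3

s3 = boto3.client('s3')
# put_object lets me set ContentType explicitly; upload_file without
# ExtraArgs falls back to binary/octet-stream, which browsers download.
with open('index.html', 'rb') as f:
    s3.put_object(
        Bucket='tidbitstatistics.com',
        Key='index.html',
        Body=f,
        ContentType='text/html',
    )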
Static site hosting is turned on for that bucket and public read permissions are set. I'm just not sure why S3 all of a sudden won't serve my static site.
I followed the answer here, but I don't see a Content-Disposition property.
Any help would be appreciated - web development is not my strong suit!

How to use a credential file without pushing it to the repository

I am building a Django web app that is deployed on GCP (Google Cloud Platform). I need to use a Google Cloud Storage bucket to store files generated by the app, so I added this line to settings.py:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(BASE_DIR, 'credential.json')
The code refers to credential.json. Currently I keep the credential file in the project directory on my computer, and it works fine. But now I need to push the project to a public repository for a project handoff, and I can't push the credential file because it contains the private key to the storage bucket. What should I do so the program runs normally without pushing the credential file to the repository or making it accessible to other people?
Don't push the credential file; instead, tell the user how to create it in the project documentation.
A common pattern is to provide an example file with dummy data to help users understand the structure of the file.
Your documentation would then say something like:
Copy credential.example.json to credential.json and fill it with your server settings.
// file: credential.example.json
{
  "token": "obviously not my token",
  "email": "foo@example.org"
}
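You would also add the real credential.json to .gitignore so it can't be committed by accident. And as a small variation on the settings.py line from the question (a sketch, assuming the same BASE_DIR layout), you can set the variable only when the local file exists, so collaborators can point GOOGLE_APPLICATION_CREDENTIALS at their own copy via the environment:

# settings.py
import os

_cred_path = os.path.join(BASE_DIR, 'credential.json')
if os.path.exists(_cred_path):
    # setdefault keeps any value already supplied via the environment.
    os.environ.setdefault('GOOGLE_APPLICATION_CREDENTIALS', _cred_path)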

Django upload doc to S3 and generate PDF preview

I'm currently using Django's regular upload. After a doc is uploaded, a Celery task converts it to PDF using LibreOffice. Both files are publicly accessible under /media.
Now I want to start using S3 to store my files and make them private. Here are my options:
1. Upload the doc to S3. Then have the Celery task connect to S3, download the file, convert it to PDF, upload the result, and delete the local copy.
2. Upload the doc to regular storage, then have the Celery task do the conversion, upload both files to S3, and then delete them locally.
Restriction: the S3 bucket is not public. I'm planning on authenticating the user before downloading, and then redirecting to the bucket with a temporary valid access key.
Do I have any other options? I haven't actually tried this yet, but it seems I'm going to need multiple Django storage backends if I choose option 2: one for uploading (local) and one for downloading (S3). Is this even possible? (See the sketch below for what I mean.)
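From the Django docs it looks like FileField accepts a storage argument, so per-field backends should be possible. A rough sketch with django-storages (the bucket name and upload path are placeholders):

from django.core.files.storage import FileSystemStorage
from django.db import models
from storages.backends.s3boto3 import S3Boto3Storage

# Two backends: local disk for the raw upload, S3 for the private files.
local_storage = FileSystemStorage(location='/tmp/uploads')    # placeholder path
s3_storage = S3Boto3Storage(bucket_name='my-private-bucket')  # placeholder bucket

class Document(models.Model):
    original = models.FileField(storage=local_storage)
    pdf = models.FileField(storage=s3_storage, blank=True)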

How to upload a file in Django on Google App Engine?

I am new to Google App Engine. I have deployed a pure Django application on Google App Engine and it works fine. But I wrote the file upload functionality shown below:
def forhandler(list, project_id, task_id, request):
    for i in list:
        filename = i
        extension = filename.content_type
        print i
        newdoc = FileUpload(project_id=project_id, task_id=task_id, file=filename, filetype=extension)
        newdoc.save()
When I run ./manage.py runserver, the functionality above works correctly and uploads files. Under Google App Engine (dev_appserver.py my_project) everything else is fine, but when I upload a file with the code above I get the error shown below:
Exception Value: [Errno 30] Read-only file system: u'/home/nyros/Desktop/projectstat/projectstat/media/documents/2013/05/24/1354676051_chasm_fishing_w1.jpeg'
How do I upload a file using Django with Google App Engine? Please help, thanks!
The problem here is that App Engine uses a read-only filesystem, and the default Django file upload mechanism wants to store files on disk. (Regardless of whether you're using App Engine, storing images in an ordinary database is a bad idea.)
Instead, you should use App Engine's Blobstore API to save the image. This is special storage App Engine provides for large data uploaded by users.
The good news is there's a plugin that takes care of all of this for you: http://www.allbuttonspressed.com/projects/django-filetransfers
Just follow the instructions there and you should be in business.
(Note: I've only tried this with django-nonrel. I've never tried it with vanilla Django.)
The best way to upload files on Google App Engine with Python is to use a BlobstoreUploadHandler:
from google.appengine.ext.webapp import blobstore_handlers

class Upload(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # get_uploads() returns the blob records created by this request.
        for upload in self.get_uploads():
            try:
                img = Image()
                img.primary_image = upload.key()
                img.put()
            except:
                pass

django internal file sharing with privacy

I am trying to write a job board / application system, and I need the ability for clients to upload a CV and then share it with employers, but I can't figure out the best way to do this. The CV needs to be kept private except from whoever it is shared with, and clients need to be able to update the CV after submitting it to an employer.
Is there a Django app that does this already? If not, how would I go about setting up the privacy and file sharing so that the files can be copied and still remain private to just those they are shared with?
Use Apache's X-Sendfile; for an example see: Having Django serve downloadable files
Store the files in a private folder. Django authorizes the request and lets Apache serve the file using the X-Sendfile header.
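A minimal sketch of such a view, assuming mod_xsendfile is enabled and the CVs live in a private directory (the path and the authorization check are placeholders):

from django.http import HttpResponse, HttpResponseForbidden

def serve_cv(request, cv_id):
    if not request.user.is_authenticated:  # placeholder authorization check
        return HttpResponseForbidden()
    response = HttpResponse(content_type='application/pdf')
    # Apache intercepts this header and streams the file itself.
    response['X-Sendfile'] = '/srv/private/cvs/%s.pdf' % cv_id  # placeholder path
    return response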
Alternatively:
1. Use S3 and django-storages.
2. Upload the CV to S3, with the file set as private.
3. Create a view that fetches a given CV from the S3 bucket, producing an "expiring URL", or that simply fetches the raw data from S3 and passes it through to the user.
The file's privacy is completely controlled this way.
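A minimal sketch of step 3's "expiring URL" with boto3 (the bucket name is a placeholder):

import boto3

def cv_presigned_url(key, expires=300):
    # Returns a link that stops working after `expires` seconds.
    s3 = boto3.client('s3')
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-cv-bucket', 'Key': key},  # placeholder bucket
        ExpiresIn=expires,
    )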
You could also store the uploaded file outside of your project's static directory (which is assumed to be publicly accessible) and do step 3 for that.
Or, if you want to make a DBA's head explode, store the CV as a BLOB in the database and use a view in the same way.