I am building a Django web app that is deployed on GCP (Google Cloud Platform). I need to use a Google Cloud Storage bucket to store files generated by the app, so I added this line to settings.py:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(BASE_DIR, 'credential.json')
The code refers to credential.json. Currently the credential file sits in the project directory on my computer and everything works fine. But now I need to push the project to a public repository for handoff, and I can't push the credential file because it contains the private key to the storage bucket. What should I do so the program still runs normally, without pushing the credential file to the repository or making it accessible to other people?
Don't push the credential file; instead, tell users in the project documentation to create it themselves.
A common pattern is to provide an example file with dummy data to help users understand the structure of the file.
Your documentation would then say something like:
Copy credential.example.json to credential.json and fill it in with your own settings.
// file: credential.example.json
{
  "token": "obviously not my token",
  "email": "foo@example.org"
}
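To keep the real file out of the repository entirely, add credential.json to .gitignore and have settings.py respect a value already present in the deployment environment, falling back to the local file only during development. A minimal sketch (the setdefault fallback is just one way to arrange this):

# settings.py
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Respect a value already set in the deployment environment; only fall
# back to the local, gitignored credential.json during development.
os.environ.setdefault(
    "GOOGLE_APPLICATION_CREDENTIALS",
    os.path.join(BASE_DIR, "credential.json"),
)

On App Engine and most other GCP runtimes you may not need the file at all, since the client libraries can pick up the service account attached to the runtime.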
I have a Django 2.x site with Python 3.6 in Google Cloud; the app is in App Engine flex. (My first app :)
My app has an upload page where I ask the user to upload a JSON file (which is never kept on the site); what I do is open it and generate another file from it.
I know that Django, depending on the size of the file, keeps the upload in memory, but I was never able to use this functionality. So what I did in my local environment was create a folder that I called temp_reports: I created the files there, uploaded them into a bucket, and then deleted them from temp_reports.
So I was thinking, as the site is already in gcloud, can I create these files directly in the bucket? Or do I still need to generate them on the site and then upload them?
Now, if I run it from my site, I keep getting the following error:
Exception Value:
[Errno 2] No such file or directory: '/home/vmagent/app/temp_reports/file_516A3E1B80334372ADB440681BB5F030.xlsx
I had this in my app.yaml:
handlers:
- url: /temp_reports
  static_dir: temp_reports
Is there something I am missing in order to use temp_reports?
Or how can I create a file directly in my bucket?
You can certainly use the storage bucket without having to upload the file manually. This can be done with the Google Cloud Storage client library (the preferred method). It allows you to store and retrieve data directly from the storage bucket. Alternatively, you can use the Cloud Storage API for the same functionality, but it requires more effort to set up.
You want to use the upload_from_string method from google.cloud.storage.blob.Blob.
upload_from_string(data, content_type='text/plain', client=None,
predefined_acl=None)
So to create a text file directly on the bucket you could do this:
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket('mybucket')
blob = bucket.blob('mytextfile.txt')
blob.upload_from_string('Text file contents', content_type='text/plain')
For more information you can refer to the following page:
https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob.upload_from_string
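For the Excel reports in the question above, you can skip temp_reports entirely by building the workbook in memory and streaming it to the bucket with upload_from_file. A sketch assuming openpyxl generates the file (bucket and object names are illustrative):

import io

from google.cloud import storage
from openpyxl import Workbook

# Build the workbook in memory instead of under temp_reports.
workbook = Workbook()
workbook.active.append(["some", "report", "row"])

buffer = io.BytesIO()
workbook.save(buffer)
buffer.seek(0)  # rewind so the upload reads from the beginning

bucket = storage.Client().get_bucket("mybucket")
blob = bucket.blob("reports/report.xlsx")
blob.upload_from_file(
    buffer,
    content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
)

This avoids the [Errno 2] error entirely, since nothing is ever written to the App Engine filesystem.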
I am developing an Alexa Skill with the Skills Kit SDK, and I am now preparing to publish my Skill's repository on GitHub. During development I included my Skill's app ID in the corresponding index.js file and diligently committed my work with my local git.
Is there a risk involved in publishing my Skill's repository with my actual app ID? I could imagine that a malicious party might use the app ID (together with the ARN of my Skill's Lambda function) to send lots of requests and thus incur costs on AWS, but maybe there are other risks.
It seems to be good practice not to include the app ID in the public repository, since none of the example Skills from the official Amazon Alexa organization include their respective app IDs.
Commonly, people put these keys/secrets in an environment variable and, in the code, write process.env.SKILL_KIT_KEY to retrieve it.
If you make the switch, I would strongly recommend deactivating the key you've been using, since it lives in plain text in the repo's history, and obtaining a new one.
Another approach is to include a configuration file that contains all login or password information. You might name this file config.js. Then exclude this file from being checked into Git by listing it in the .gitignore file.
To help others recreate this file with their own information, provide a well-commented template version of it in the project. Append "template" to the name (e.g. config_template.js) with instructions to rename it to config.js after editing it to include their own information.
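The process.env example above is Node.js; the same pattern in Python (to match the other examples on this page) would look like the sketch below, where the variable and module names are illustrative:

import os

# Prefer the environment variable; fall back to the gitignored config
# module created from the template.
APP_ID = os.environ.get("SKILL_KIT_KEY")
if APP_ID is None:
    from config import SKILL_APP_ID as APP_ID  # config.py is in .gitignore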
I attached a WebJob to my Azure website. The WebJob prepares a file, and I want to save it in a proper folder on the website.
Environment.CurrentDirectory, read from the script, returns a path under a Temp directory: Temp\jobs\triggered\WEBJOBNAME\q0uwrohv.x5e
I tried to walk up the directory tree:
string path = Path.Combine(Environment.CurrentDirectory, @"..\..\..\..\..\Data");
But it doesn't work:
C:\DWASFiles\Sites\WEBSITENAME\Temp\jobs\triggered\WEBJOBNAME\q0uwrohv.x5e\..\..\..\..\..\Data
How do I create and save files from a WebJob to a particular path?
I don't want to use blob storage.
The path for the root of your Azure Web Site is (usually) d:\home\site\wwwroot.
d:\home is also stored in an environment setting called %HOME%.
To get more insight into the different paths you can use on your site, go to https://{sitename}.scm.azurewebsites.net; there you'll have the Debug Console, where you can browse through your site, and Environment, to see all the environment variables you can use.
Your WebJob will have access to the same paths/environment as your Web Site.
For more information on this administration site go to:
http://azure.microsoft.com/blog/2014/03/28/windows-azure-websites-online-tools-you-should-know-about-2/
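As an example, resolving the site root from %HOME% rather than the WebJob's temp working directory (shown here in Python for consistency with the other examples on this page, but the idea is the same in C#; folder and file names are illustrative):

import os

# %HOME% points at d:\home; the deployed site content lives under site\wwwroot.
home = os.environ.get("HOME", r"d:\home")
data_dir = os.path.join(home, "site", "wwwroot", "Data")

os.makedirs(data_dir, exist_ok=True)  # create the folder if it doesn't exist
with open(os.path.join(data_dir, "report.txt"), "w") as f:
    f.write("written from the WebJob")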
Try the following: instead of putting a file location in the value parameter, just put this, as I show here. You can do it in the app.config file:
<add key="app:TempFolderPath" value="~/temp/"/>
<add key="app:TempReportDirectory" value="~/temp/"/>
The WebJob will then automatically put your file in a location like:
D:\local\Temp\jobs\continuous\ImporterWebJob\yex3ad1c.3wo\~\temp\...your file...
I hope this will not give you any errors.
I am trying to write a job board / application system, and I need the ability for clients to upload a CV and then share it with employers, but I can't figure out the best way to do this. The CV needs to be kept private except to those it is shared with, and clients need the ability to update the CV after submitting it to an employer.
Is there a Django app that does this already? If not, how would I go about setting up the privacy, file sharing, etc., so that the files can be copied and still remain private to just those they are shared with?
Use Apache's X-Sendfile; for an example see: Having Django serve downloadable files.
Store the files in a private folder. Django authorizes the request and lets Apache serve the file using the X-Sendfile header.
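A minimal sketch of such a view (the CV model and its fields are hypothetical, and mod_xsendfile must be enabled and configured in Apache):

import os

from django.core.exceptions import PermissionDenied
from django.http import HttpResponse
from django.shortcuts import get_object_or_404

from .models import CV  # hypothetical model with `owner` and `shared_with` fields

def download_cv(request, cv_id):
    cv = get_object_or_404(CV, pk=cv_id)
    # Django does the authorization check...
    if request.user != cv.owner and request.user not in cv.shared_with.all():
        raise PermissionDenied
    # ...and Apache does the actual serving, triggered by the header.
    response = HttpResponse(content_type="application/pdf")
    response["X-Sendfile"] = cv.file.path  # absolute path inside the private folder
    response["Content-Disposition"] = (
        'attachment; filename="%s"' % os.path.basename(cv.file.path)
    )
    return response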
1. Use S3, and django-storages.
2. Upload the CV to S3, with the file set as private.
3. Create a view which fetches a given CV from the S3 bucket, either producing an "expiring URL" or just fetching the raw data from S3 and passing it through to the user.
The file's privacy is completely controlled this way.
You could also do this by storing the uploaded file outside of your project's static directory (which is assumed to be publicly accessible), and doing step 3 for that.
Or, if you want to make a DBA's head explode, store the CV as a BLOB in the database and serve it from a view in the same way.
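For the "expiring URL" in step 3, boto3 can sign a temporary link after your view has authorized the request (bucket and key names are illustrative):

import boto3

s3 = boto3.client("s3")

# The link stops working after 300 seconds, so only a user who just
# passed your permission check can use it.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "cv-bucket", "Key": "cvs/alice.pdf"},
    ExpiresIn=300,
)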
I'm currently using Django, and now I need to save a file uploaded by a user to another server, one which is not the server running the Django application. The file will be saved to the file system, not the database. Could somebody tell me how to do this?
Default Django behavior is to save files on the filesystem, not in the database itself.
You have several options for how to do this; the simplest one is to have a filesystem exported from your "other" machine and mounted on the machine with the Django application.
For the filesystem export you can use NFS, MogileFS, or GlusterFS (which I'm using), or many more :). If you do not need real-time save & serve, a simple rsync may be an option too.
The second option is to utilize Django's existing storage API. Different storage backends are already available (e.g. FTP) that may be helpful for you.
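For example, django-storages ships an FTP backend that can be pointed at the other machine; a sketch for settings.py (host, credentials, and path are placeholders, and the exact setting names can vary between django-storages versions):

# settings.py
DEFAULT_FILE_STORAGE = "storages.backends.ftp.FTPStorage"
FTP_STORAGE_LOCATION = "ftp://user:password@fileserver.example.com:21/uploads/"

With that in place, ordinary FileField uploads are written to the remote server instead of the local disk.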
This won't work out of the box: you need a mechanism (write some code) to queue the files that are uploaded through the Django application, then use a middleware (it can be in Python) to transfer files from the queue to your file server. So the flow is basically like this:
1. Accept the uploaded file via the Django app.
2. The Django app writes the file info (its temporary path and name) to a queue.
3. A middleware app reads the next file in the queue.
4. The middleware app uses some transfer protocol (sftp, scp, ftp, http, etc.) to copy the file to the file server.
5. The middleware app deletes the file from where the Django app is hosted, so the Django server doesn't keep a copy of the file.
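A sketch of steps 3-5 using paramiko over SFTP (host, credentials, and directories are placeholders; a real middleware would also need retries and error handling):

import os

import paramiko

def push_queued_files(queue_dir, host, username, password, remote_dir):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=username, password=password)
    sftp = ssh.open_sftp()
    try:
        for name in os.listdir(queue_dir):  # step 3: next file(s) in the queue
            local_path = os.path.join(queue_dir, name)
            sftp.put(local_path, remote_dir + "/" + name)  # step 4: copy to the file server
            os.remove(local_path)  # step 5: drop the local copy
    finally:
        sftp.close()
        ssh.close()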