I have a React application with a custom webpack config (not a CRA app). I am using Docker for the production build, which in turn runs a small bash script that generates an env.js file populating the window object (window._env = {...my vars}). The question is: how can I access a window.myVAR variable in my webpack config? The main intention is to pass my production domain URL (which depends on the environment) from the window object into the PUBLIC_URL property instead of '.'. Are there any plugins that let webpack read variables from the window object at runtime?
new InterpolateHtmlPlugin({
PUBLIC_URL: '.'
})
I have created a Pub/Sub function in the Console, and I want to upload a folder with my project using the Console (not the terminal) every time I have an update.
I use Python.
In the Docs they say I can find a button to upload ZIP, but there is nothing like this.
https://cloud.google.com/functions/docs/deploying/console
My questions are:
How do I upload my project from the Console? I can see the default source code in the Console.
Do I need to call my entry file main.py or index.py?
Do I need to set up the requirements.txt file by myself? I can't see it in my project on my machine.
You have to click the 'Edit' button to edit the function; then, in the 'Source' tab, to the left of the source there is a drop-down where you can see "Upload ZIP".
Doing this in the Terminal seems to be easier :
sudo gcloud functions deploy Project_name
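For a Python Pub/Sub function, the deploy command usually needs a few more flags; here is a sketch (the topic name, runtime version, and entry-point name are assumptions — and sudo is normally unnecessary for gcloud):

```shell
# Run from the project directory; main.py must define the entry-point function.
gcloud functions deploy my_function \
  --runtime python39 \
  --trigger-topic my-topic \
  --entry-point handle_message
```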
I am using Google Speech API in my Django web-app. I have set up a service account for it and am able to make API calls locally. I have pointed the local GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account's json file which contains all the credentials.
This is the snapshot of my Service Account's json file:
I have tried setting heroku's GOOGLE_APPLICATION_CREDENTIALS environment variable by running
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< myProjCreds.json)"
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: {
^^ It gets terminated at the first occurrence of " in the JSON file, which is immediately after {
and
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS='$(< myProjCreds.json)'
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: $(< myProjCreds.json)
^^ The command gets saved into the environment variable
I tried setting Heroku's GOOGLE_APPLICATION_CREDENTIALS env variable to the content of the service account's JSON file, but it didn't work (apparently because this variable's value needs to be an absolute path to the JSON file). I found a method which authorizes a developer account without loading a JSON file, instead using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY. Here is the GitHub discussion page for it.
I want something similar (or something different) for my Django web-app and I want to avoid uploading the json file to my Django web-app's directory (if possible) for security reasons.
Depending on which library you are using to communicate with the Speech API, you may use several approaches:
You may serialize your JSON data using base64 or something similar and set the resulting string as one environment variable. Then, during your app's boot, you may decode this data and configure your client library appropriately.
You may set each pair from the credentials file as a separate env variable and use them accordingly. Maybe the library you're using supports authentication via GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY, similar to the Ruby client you're linking to.
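The first approach (one base64-encoded env variable) can be sketched like this; the variable name GOOGLE_CREDS_B64 and the credentials payload are placeholders:

```python
import base64
import json
import os

# Hypothetical service-account payload standing in for the real JSON file.
creds = {"type": "service_account", "client_email": "svc@example.iam.gserviceaccount.com"}

# One-time, local step: encode the whole file into a single safe string, then
#   heroku config:set GOOGLE_CREDS_B64=<that string>
encoded = base64.b64encode(json.dumps(creds).encode("utf-8")).decode("ascii")

# At app boot (e.g. in Django settings), decode it back into a dict.
os.environ["GOOGLE_CREDS_B64"] = encoded  # simulating the Heroku config var
decoded = json.loads(base64.b64decode(os.environ["GOOGLE_CREDS_B64"]))
```

Base64 sidesteps the quoting problems from the question, since the encoded string contains no quotes or braces for the shell to mangle.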
EDIT:
Assuming that you are using the official Google client library, you have several options for authenticating your requests, including the one you are using (service account): https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html You may save your credentials to a temp file and pass its path to the Client object: https://google-auth.readthedocs.io/en/latest/user-guide.html#service-account-private-key-files (but this seems to me a very hacky workaround). There are a couple of other auth options that you may use.
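The temp-file workaround could look like this at app boot, assuming the raw credentials JSON was stashed in a GOOGLE_CREDS_JSON env var (that variable name is an assumption):

```python
import json
import os
import tempfile

# Hypothetical env var holding the raw credentials JSON (set it on Heroku yourself).
os.environ.setdefault("GOOGLE_CREDS_JSON", '{"type": "service_account"}')

# Write the JSON to a temp file and point GOOGLE_APPLICATION_CREDENTIALS at it,
# so any client library that honors that variable finds a real file path.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
    f.write(os.environ["GOOGLE_CREDS_JSON"])
    creds_path = f.name

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = creds_path
```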
EDIT2:
I've found one more link with a more robust approach: http://codrspace.com/gargath/using-google-auth-apis-on-heroku/. It's Ruby code, but you can certainly do something similar in Python.
Let's say the filename is key.json
First, copy the content of the key.json file and add it to the environment variable, let's say KEY_DATA.
Solution 1:
If my command to start the server is node app.js, I'll do echo "$KEY_DATA" > key.json && node app.js (note the quotes around $KEY_DATA, which preserve the newlines in the JSON).
This will create a key.json file with the data from KEY_DATA and then start the server.
Solution 2:
Save the data from the KEY_DATA env variable in some variable and then parse it as JSON, so you have an object which you can pass for authentication purposes.
Example in Node.js:
const data = process.env.KEY_DATA;
const dataObj = JSON.parse(data);
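A Python counterpart for the Django side of this question (the KEY_DATA value shown is just a placeholder):

```python
import json
import os

# Placeholder value; on Heroku, KEY_DATA would hold the real key.json content.
os.environ.setdefault("KEY_DATA", '{"type": "service_account", "project_id": "my-proj"}')

data = os.environ["KEY_DATA"]
data_obj = json.loads(data)  # a dict you can hand to your auth layer
```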
Hi, can I get some help with setting environment-specific configuration?
I have two files for datasource
server/datasources.json
server/datasources.test.json
I use the script "SET NODE_ENV=test && mocha test/**/*.test.js" on Windows to run my test cases and set the Node environment to test.
LoopBack does not load server/datasources.test.json; instead, the datasource from server/datasources.json is loaded.
I have confirmed the environment using process.env.NODE_ENV, which logs "test".
I have tried changing server/datasources.json to server/datasources.local.json, but then I get an error:
WARNING: Main config file "datasources.json" is missing.
I don't understand what I am doing wrong. Am I supposed to create all the config files for the test environment, like *.test.json?
Or is there a different config file where I have to define environment-specific files?
Please check this repo https://github.com/dhruv004/sample-loopback-example
From the code: if you run npm run test, it loads data from local.json, which is the datasource for the development environment. It should load data from test.json (the datasource for the test environment).
Looking at your repository, I can see this note from the LoopBack documentation is particularly relevant for you:
A LoopBack application can load multiple configuration files, that can potentially conflict with each other. The value set by the file with the highest priority will always take effect. The priorities are:
Environment-specific configuration, based on the value of NODE_ENV; for example, server/config.staging.json.
Local configuration file; for example, server/config.local.json.
Default configuration file; for example, server/config.json.
In your model-config.json, all models have their datasource set to db, so in your case the LoopBack application loads datasources.test.json first. It cannot find the datasource db there (only testdb), so it falls back to datasources.json. There it finds the datasource db and uses it. Try renaming testdb in datasources.test.json to db and it will take precedence.
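Concretely, the override file should reuse the db key from datasources.json; a sketch of server/datasources.test.json (the memory connector is just an example — keep your real connector settings):

```json
{
  "db": {
    "name": "db",
    "connector": "memory"
  }
}
```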
I'm trying to set a logging pattern using logging.pattern.console that needs to include the Cloud Foundry application name of a given application. I know that application names can be found in the VCAP_APPLICATION env variable under the application_name key, and that I can resolve env variables in Spring Cloud applications using the standard Spring placeholder notation in the application.yml file; but since the variable is JSON, I can't parse it or use SpEL to obtain only the requested value.
Is there any other way to obtain the application name as set on the manifest.yml file in the application.yml?
If you are using Spring Boot, you can access the application name with the property vcap.application.name. You should be able to reference this anywhere that properties are available, like @Value annotations or in application.properties.
Spring Boot's CloudFoundryVcapEnvironmentPostProcessor takes the VCAP_SERVICES and VCAP_APPLICATION environment variables and makes them available as properties through Spring's Environment API. This should happen automatically; no config or work is necessary.
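So the pattern from the question can reference that property directly in application.yml — a sketch (the unknown-app fallback default is an assumption for local runs, where VCAP_APPLICATION is absent):

```yaml
logging:
  pattern:
    console: "%d{HH:mm:ss.SSS} [${vcap.application.name:unknown-app}] %-5level %logger{36} - %msg%n"
```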
I'm working on a Laravel 5.2 application where users can send a file by POST, the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to store in a specified local folder on my machine. And when I deploy to AWS-EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard code something like \Storage::disk('s3')->put(...) because that won't work locally.
What I'm trying to do here is similar to what I was able to do with environment variables for database connectivity... I found some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as us-west-2 even though S3 shows my environment as Oregon.
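For those variables to take effect, the s3 disk in config/filesystems.php has to read them; in Laravel 5.2 the stock disks section looks roughly like this (a sketch — check your own file, as key names may differ):

```php
// config/filesystems.php (excerpt)
'disks' => [
    'local' => [
        'driver' => 'local',
        'root'   => storage_path('app'),
    ],
    's3' => [
        'driver' => 's3',
        'key'    => env('S3_KEY'),
        'secret' => env('S3_SECRET'),
        'region' => env('S3_REGION'),
        'bucket' => env('S3_BUCKET'),
    ],
],
```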
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.