I am trying to access a variable's value from a remote location (a file, etc.).
I read about input variables, but I still couldn't achieve what I am looking for.
My use case is to get the value of a variable from some remote file.
I tried something like the below:
terraform apply -var-file="https://s3-ap-northeast-1.amazonaws.com/..."
but this gives an error saying there is no such file or directory.
Is there any way to load values at runtime from a remote location?
EDIT: I am using the MySQL provider to create databases and users. For setting the user passwords, I want to use some remote location where my passwords are kept (maybe an S3 file).
PS: I saw that Keybase is available for passwords, but I wanted to check whether there are other ways of achieving this.
terraform apply can't fetch a var-file from a remote URL itself, but you can download the file first with bash and then pass the local file name. This is how we do it on our remote server:
#!/bin/bash
varfile_name=terraformvar.tfvars
# Download the var-file from S3 into the current directory
aws s3 cp s3://bucket-********/web-app/terraform/terraform_vars.tfvars "./$varfile_name"
if [ $? -eq 0 ]; then
    terraform apply -var-file="$varfile_name"
else
    echo "failed to download file from s3"
fi
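If you only need a single value (for example the MySQL user password mentioned in the edit), a variant of the same idea is to stream the object to stdout and expose it to Terraform through a TF_VAR_* environment variable. A minimal sketch, assuming a hypothetical variable named db_password and a plain-text S3 object that contains only the password:

#!/bin/bash
# Hypothetical bucket/key; "-" makes aws s3 cp write the object to stdout
db_password=$(aws s3 cp s3://my-bucket/secrets/db_password.txt -)
# Terraform reads any TF_VAR_<name> environment variable as the input variable <name>
export TF_VAR_db_password="$db_password"
terraform apply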
My requirement is to upload a PBIX file stored in an Azure storage container to the Power BI service without downloading it to a local drive, as I have to use the PowerShell script in Runbook Automation.
Normally we can upload a PBIX file by giving a local path, like below:
$pbixFilePath = "C:\PBIXFileLocation\Test.pbix"
$import = New-PowerBIReport -Path $pbixFilePath -Workspace $workspace -ConflictAction CreateOrOverwrite
$import | Select-Object *
But which path do I have to use if the PBIX file is stored in an Azure storage container, and how can the PowerShell script be created? Is it possible?
I tried to list the blobs in the container with the Get-AzStorageBlob cmdlet and passed that as the path in the above script, but ended up with an error.
If possible, please help me with a sample PowerShell script to achieve the above requirement.
Thanks in advance!
The issue can be resolved by following my similar post on the Azure platform.
AnuragSingh-MSFT is a gem who explained it to me clearly and resolved the issue.
A basic understanding of Azure Automation runbook execution should help clarify this doubt. When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox. Azure Automation assigns a worker to run each job during runbook execution in the sandbox. Please see this link for more details: Runbook execution environment. These sandboxes are isolated environments with access to only some locations/paths/directories.
The following section should help answer the question - ... which path I have to use if the PBIX file is stored in Azure storage container and how the PowerShell script can be created?
The script snippet provided by Manu above would download the blob content into the same directory inside the sandbox from which the script is running. You can access this path inside the script using "." (the current directory): for example, if the blob you are downloading is named testBlob, it will be available at .\testBlob.
Therefore, the pbixFilePath can be initialized as $pbixFilePath = ".\Test.pbix"
Another option is to use $env:temp, as mentioned in the question. It is one of the environment variables available on a local machine (your workstation), where it generally resolves to C:\Users\<username>\AppData\Local\Temp.
In the Azure Automation sandbox environment, this variable resolves to C:\Users\Client\Temp.
Therefore, you could download the blob content using the following line:
Get-AzStorageBlobContent -Blob $blob -Container $ContainerName -Context $Ctx -Destination $env:temp #Destination parameter sets the target folder. By default it is local directory (.)
In this case, you would initialize pbixFilePath as $pbixFilePath = $env:temp+"\Test.pbix"
Either case is fine as long as the Automation limits are not exceeded.
We run S3 sync commands in a SQL Job which syncs a local directory to an S3 bucket. At times, we'll get a sync "error" with an error code of 1, or sometimes 2. The documentation lists what each code means; error code 1 provides fewer details and leaves more questions about the problem. It simply states "One or more Amazon S3 transfer operations failed. Limited to S3 commands."
When I run a sync command in a PowerShell script and encounter an error (e.g. a synced document being open), the window displays an error message and shows which specific file is causing the problem.
How can I capture those details in my SQL job?
I have solved this problem...
Using a PowerShell script, we redirect the output of the aws s3 sync command to a text file:
aws s3 sync c:\source\dir s3://target/dir/ > F:\Source\s3Inventory\SyncOutput.txt
Then read the text file contents into a string variable:
$S3Output = Get-Content -Path F:\Source\s3Inventory\SyncOutput.txt -Raw
If the $LASTEXITCODE from the sync command does not equal 0 (indicating an error), then we send an email with the results:
if ($LASTEXITCODE -ne 0)
{
#Send email containing the value of string variable $S3Output
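# For example, a minimal sketch (the addresses and SMTP server below are hypothetical placeholders):
Send-MailMessage -From "s3sync@example.com" -To "dba-team@example.com" `
    -Subject "S3 sync failed with exit code $LASTEXITCODE" `
    -Body $S3Output -SmtpServer "smtp.example.com"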
}
This has been placed into production and we were finally able to determine which file/object was failing.
One could certainly attach the text file to the email, rather than reading the contents into a string.
Hope this helps others!
I have a script written in bash where I create a secret with a certain name.
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
gcloud secrets create "$secret_id" --replication-policy="automatic"
I want to be able to also add the secret value directly to my secret, so that I do not have to go into my GCP account and set it manually (which would defeat the purpose). I have seen that it is possible to attach a file through the following flag; however, there does not seem to be a similar flag for a plain secret value.
--data-file="/path/to/file.txt"
From https://cloud.google.com/sdk/gcloud/reference/secrets/create#--data-file:
--data-file=PATH
File path from which to read secret data. Set this to "-" to read the secret data from stdin.
So set --data-file to - and pass the value over stdin. Note: if you use echo, pass -n to avoid adding a trailing newline.
echo -n "$secret_value" | gcloud secrets create ... --data-file=-
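Putting it together with the variables from the question's script, a minimal sketch (printf '%s' sidesteps the echo -n caveat):

#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
# Create the secret and load its first version from stdin in one step
printf '%s' "$secret_value" | gcloud secrets create "$secret_id" \
    --replication-policy="automatic" \
    --data-file=-

If the secret already exists, the same pattern works with gcloud secrets versions add "$secret_id" --data-file=- to add a new version.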
I am using Google Speech API in my Django web-app. I have set up a service account for it and am able to make API calls locally. I have pointed the local GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account's json file which contains all the credentials.
This is the snapshot of my Service Account's json file:
I have tried setting heroku's GOOGLE_APPLICATION_CREDENTIALS environment variable by running
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< myProjCreds.json)"
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: {
^^ The value gets truncated at the first occurrence of " in the JSON file, which is immediately after {.
and
$ heroku config:set GOOGLE_APPLICATION_CREDENTIALS='$(< myProjCreds.json)'
$ heroku config
GOOGLE_APPLICATION_CREDENTIALS: $(< myProjCreds.json)
^^ The literal command string gets saved into the environment variable.
I tried setting heroku's GOOGLE_APPLICATION_CREDENTIALS env variable to the content of the service account's JSON file, but it didn't work (because apparently this variable's value needs to be an absolute path to the JSON file). I found a method which authorizes a developer account without loading the JSON file, instead using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY. Here is the GitHub discussion page for it.
I want something similar (or something different) for my Django web-app and I want to avoid uploading the json file to my Django web-app's directory (if possible) for security reasons.
Depending on which library you are using for communicating with the Speech API, you may take several approaches:
You may serialize your JSON data using base64 or something similar and set the resulting string as a single environment variable. Then, during your app's boot, you can decode this data and configure your client library appropriately (see the sketch after this list).
You may set each pair from the credentials file as a separate env variable and use them accordingly. Maybe the library that you're using supports authentication using GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY, similar to the Ruby client that you're linking to.
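A minimal sketch of the first approach (the config var name GOOGLE_CREDENTIALS_B64 is just an illustration, not something the library expects):

# Encode the whole service-account file into a single line and store it as one config var
heroku config:set GOOGLE_CREDENTIALS_B64="$(base64 < myProjCreds.json | tr -d '\n')"
# At boot (on the dyno) the app can decode it again, e.g. into a temp file,
# and point GOOGLE_APPLICATION_CREDENTIALS at that file:
echo "$GOOGLE_CREDENTIALS_B64" | base64 --decode > /tmp/google-creds.json
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/google-creds.json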
EDIT:
Assuming that you are using Google's official client library, you have several options for authenticating your requests, including the one you are already using (a service account): https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html. You may also save your credentials to a temp file and pass its path to the Client object (https://google-auth.readthedocs.io/en/latest/user-guide.html#service-account-private-key-files), but that seems to me like a very hacky workaround. There are a couple of other auth options that you may use.
EDIT2:
I've found one more link with a more robust approach: http://codrspace.com/gargath/using-google-auth-apis-on-heroku/. It uses Ruby code, but you can certainly do something similar in Python.
Let's say the filename is key.json
First, copy the content of the key.json file and add it to the environment variable, let's say KEY_DATA.
Solution 1:
If my command to start the server is node app.js, I'll do echo "$KEY_DATA" > key.json && node app.js (quoting $KEY_DATA so the JSON is written verbatim).
This will create a key.json file with the data from KEY_DATA and then start the server.
Solution 2:
Save the data from the KEY_DATA env variable in some variable and then parse it as JSON, so you have an object which you can pass for authentication purposes.
Example in Node.js:
const data = process.env.KEY_DATA; // raw JSON string taken from the environment variable
const dataObj = JSON.parse(data); // credentials object that can be passed to the client library
I'm working on a Laravel 5.2 application where users can send a file by POST; the application stores that file in a certain location and retrieves it on demand later. I'm using Amazon Elastic Beanstalk. For local development on my machine, I would like the files to be stored in a specified local folder on my machine. When I deploy to AWS EB, I would like it to automatically switch over and store the files in S3 instead. So I don't want to hard-code something like \Storage::disk('s3')->put(...), because that won't work locally.
What I'm trying to do here is similar to what I was able to do for environment variables for database connectivity... I was able to find some great tutorials where you create an .env.elasticbeanstalk file, create a config file at ~/.ebextensions/01envconfig.config to automatically replace the standard .env file on deployment, and modify a few lines of your database.php to automatically pull the appropriate variable.
How do I do something similar with file storage and retrieval?
Ok. Got it working. In /config/filesystems.php, I changed:
'default' => 'local',
to:
'default' => env('DEFAULT_STORAGE') ?: 'local',
In my .env.elasticbeanstalk file (see the original question for an explanation of what this is), I added the following (I'm leaving out my actual key and secret values):
DEFAULT_STORAGE=s3
S3_KEY=[insert your key here]
S3_SECRET=[insert your secret here]
S3_REGION=us-west-2
S3_BUCKET=cameraflock-clips-dev
Note that I had to specify my region as us-west-2 even though S3 shows my environment as Oregon.
In my upload controller, I don't specify a disk. Instead, I use:
\Storage::put($filePath, $filePointer, 'public');
This way, it always uses my "default" disk for the \Storage operation. If I'm in my local environment, that's my public folder. If I'm in AWS-EB, then my Elastic Beanstalk .env file goes into effect and \Storage defaults to S3 with appropriate credentials.