How to set the value of a secret in GCP through CLI? - google-cloud-platform

I have a script written in bash in which I create a secret with a certain name.
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
gcloud secrets create "$secret_id" --replication-policy="automatic"
I also want to add the secret value directly from the script, so that I do not have to go into my GCP account and set it manually (which would defeat the purpose). I have seen that it is possible to attach a file through the following flag, but there does not seem to be a similar option for passing a secret value directly.
--data-file="/path/to/file.txt"

From https://cloud.google.com/sdk/gcloud/reference/secrets/create#--data-file:
--data-file=PATH
File path from which to read secret data. Set this to "-" to read the secret data from stdin.
So set --data-file to - and pass the value over stdin. Note: if you use echo, add -n to avoid appending a trailing newline.
echo -n $secret_value | gcloud secrets create ... --data-file=-
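Putting it together with the script from the question, a minimal sketch (same variable names as above) could look like this:
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
# Create the secret and store its first version in one go, reading the value from stdin:
echo -n "$secret_value" | gcloud secrets create "$secret_id" --replication-policy="automatic" --data-file=-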

Related

google cloud secret is there or not

I created two secrets, one with
--data-file=-
and one without that flag.
The first was created as follows:
echo -n "Demo" | gcloud secrets create First-password --data-file=-
The second was created as:
echo -n "mySuperSecert" | gcloud secrets create xyz-password
Now when I try to retrieve xyz-password, it reports:
ERROR: (gcloud.secrets.versions.access) NOT_FOUND: Secret [projects/ProjectNumber/secrets/XYZ-password] not found or has no versions.
(The actual project number and secret name are redacted here, which is why the error shows XYZ.)
How can I access this secret now, or delete it? It shows up in gcloud secrets list, but actually fetching its value fails. The way to reproduce the issue is to not specify
--data-file=-
You have two types of resources here: the secret and its versions.
In the first case, the gcloud CLI conveniently creates the secret and its first version with the value you provided (--data-file).
In the second case, the gcloud CLI only created the secret, not a version. Therefore, when you try to access the secret value, there is no value. That is entirely expected.
You can delete the secret if you want, but you won't be able to get the mySuperSecert value back, because it was never stored.
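For completeness, a minimal sketch of how the orphaned secret could be dealt with, using the secret name from the example above: either add a first version to it, or delete it outright.
# Add a first version to the existing secret (the value is only stored at this point):
echo -n "mySuperSecert" | gcloud secrets versions add xyz-password --data-file=-
# ...or simply remove the empty secret:
gcloud secrets delete xyz-password --quiet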

Pass AWS credentials in same shell in Makefile

I want to assign the result of a shell command to a variable in the following way:
SOME_KEY = $(shell aws secretsmanager get-value ...)
And for a specific target, I want to override the AWS credentials:
get-some-key: export AWS_ACCESS_KEY_ID=$(SOME_OTHER_AWS_ACCESS_KEY_ID)
get-some-key: export AWS_SECRET_ACCESS_KEY=$(SOME_OTHER_AWS_SECRET_ACCESS_KEY)
get-some-key:
	echo $$SOME_KEY
I expected to get a value, but I don't, since the $(shell) call uses the initial AWS credentials.
What is the correct way to pass the AWS credentials to the same shell?
Thanks in advance!
I think I finally understand what you're trying to do: you have AWS credential environment variables set outside make, and you want to use overriding credentials when calling this target. The catch is that the $(shell ...) call is executed by make itself rather than by the recipe's shell, which is why it does not pick up the target-specific exports; the command has to be deferred into the recipe. If that is correct, consider the following:
id = aws sts get-caller-identity
get-id-with-creds: export AWS_ACCESS_KEY_ID=redacted
get-id-with-creds: export AWS_SECRET_ACCESS_KEY=redacted
get-id-with-creds: get-id
get-id:
	$(id)
And I verified that works:
make get-id # gives back my default user
make get-id-with-creds # gives back my user with creds in Makefile
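Applied back to the original SOME_KEY example, a sketch along the same lines might look like this (the secret id "my-secret" and the query are placeholders, and note that the actual CLI subcommand is get-secret-value):
# Defer the Secrets Manager call into the recipe so it runs with the exported credentials:
some_key_cmd = aws secretsmanager get-secret-value --secret-id my-secret --query SecretString --output text

get-some-key: export AWS_ACCESS_KEY_ID=$(SOME_OTHER_AWS_ACCESS_KEY_ID)
get-some-key: export AWS_SECRET_ACCESS_KEY=$(SOME_OTHER_AWS_SECRET_ACCESS_KEY)
get-some-key:
	@echo "$$($(some_key_cmd))"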

gcloud command output formatting to use results in another gcloud command

I'm trying to automate the deletion of SSL certificates that end with a certain text pattern on GCP projects.
For this I use the command:
gcloud compute ssl-certificates list --filter="name~'819$'" --format="(name)"
Its output looks exactly like this:
NAME
certname1-1602160819
certname2-1602160819
certname3-1602160819
...and so on
The thing is, if I want to feed the results of this command into another gcloud command that deletes each certificate, the first value I get is NAME, which is the column header and obviously not a certificate.
Here is my script:
#!/bin/bash
for oldcert in $( gcloud compute ssl-certificates list --filter="name~'819$'" --format="(NAME)")
do
gcloud compute ssl-certificates delete $oldcert
done
Do you know how I could strip the NAME header from the output, so I could feed each result directly into another command?
Thanks in advance for your advice.
@Hitobat thanks very much for your comment.
I used the csv[no-heading] option, even though the tail -n +2 option also does the job.
The following commands did the job great:
#!/bin/bash
for oldcert in $( gcloud compute ssl-certificates list --filter="name~'819$'" --format="csv[no-heading](name)")
do
gcloud compute ssl-certificates delete $oldcert --quiet
done
The right format to use is --format=value[](name).
According to the docs:
value
CSV with no heading and <TAB> separator instead of <COMMA>. Used
to retrieve individual resource values.
So it's equivalent to the --format="csv[no-heading](name)" that you used, but "more correct" (and a little more legible).
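For example, the original loop could then be written with the plain value(name) spelling (same filter as in the question, keeping the non-interactive --quiet delete):
#!/bin/bash
# List just the certificate names (no header) and delete each one:
for oldcert in $(gcloud compute ssl-certificates list --filter="name~'819$'" --format="value(name)")
do
  gcloud compute ssl-certificates delete "$oldcert" --quiet
done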

AWS CodeBuild cannot print parameter store environment variables

I keep track of a number in parameter store, and access it during CodeBuild executions. The CodeBuild environment is a Windows machine. I would like to print the environment variable.
I've tried the following:
Print the environment variable as-is: echo $NUMBER
Assign the environment variable to another variable and print: $TMP=$NUMBER; echo $TMP
Echo the environment variable to a text file and print that text file: Add-Content -Path number.txt -Value $NUMBER; Get-Content number.txt
All of them are printed as asterisks. It looks like CodeBuild automatically tries to censor environment variables it deems sensitive (maybe all Parameter Store variables? I couldn't find any documentation on this). This particular environment variable is not sensitive and we would like to print it. Is there a way to do so?
A few months back, CodeBuild implemented best-effort masking of secrets in the build logs. Since the majority use case of Parameter Store is storing sensitive information like passwords, CodeBuild masks those values in the logs. When the value is a common string, like a number or an everyday word, that string gets masked throughout the logs.
Our suggestion for simple values is to use plain-text environment variables rather than Parameter Store or Secrets Manager; Parameter Store and Secrets Manager values are masked wherever the same string appears in the log.
Security is usually not a friend of convenience, so apologies for this, but avoiding the leaking of secrets is the primary concern here.
This will be documented properly in the docs soon.
Edit1:
As per my tests, if the Parameter Store variable has the value "ABC", then anywhere "ABC" appears in the logs (even inside some other, innocent variable) it will be masked.
I guess we are back to square one with this; please use the CLI to obtain the value directly (for an actual secret value, I highly recommend continuing to use the buildspec 'parameter-store' construct):
- MY_VAR=$(aws ssm get-parameter --name BUILD_NUM --query "Parameter.Value" --output text)
- echo $MY_VAR
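For context, a rough sketch of where those lines could sit in a buildspec, assuming a non-sensitive BUILD_NUM parameter fetched via the CLI and a genuinely secret value (placeholder name) still mapped through the parameter-store construct:
version: 0.2
env:
  parameter-store:
    DB_PASSWORD: /my/app/db_password   # placeholder; this one stays masked in the logs
phases:
  build:
    commands:
      - MY_VAR=$(aws ssm get-parameter --name BUILD_NUM --query "Parameter.Value" --output text)
      - echo $MY_VAR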

access terraform variable from remote file

I am trying to access a variable's value from a remote location (a file, etc.).
I read about input variables but still couldn't achieve what I am looking for.
My use case is to get the value of the variable from some remote file.
I tried something like the below:
terraform apply -var-file="https://s3-ap-northeast-1.amazonaws.com/..."
but this gives an error saying no such file or directory.
Is there any way to load values at runtime from a remote location?
EDIT: I am using the mysql provider to create databases and users. For setting the user password, I want to use some remote location where my passwords are kept (maybe an S3 file).
PS: I saw that Keybase is available for passwords, but I wanted to check whether there are other ways of achieving this.
terraform apply cannot read a var-file from a URL directly, but you can download the file with bash first and then pass the local copy. This is how we do it on our remote server:
#!/bin/bash
varfile_name=terraformvar.tfvars
# Download the var-file from S3, then hand the local copy to terraform.
aws s3 cp s3://bucket-********/web-app/terraform/terraform_vars.tfvars ./$varfile_name
if [ $? -eq 0 ]; then
  terraform apply -var-file=$varfile_name
else
  echo "failed to download file from s3"
fi
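If only a single value (such as the database password mentioned in the edit) is needed, a variation on the same idea is to stream it from S3 straight into a -var, so no .tfvars file is written to disk. The bucket path and the variable name db_password are placeholders:
#!/bin/bash
# Stream the password object from S3 to stdout ("-") and pass it as a single Terraform variable.
db_password=$(aws s3 cp s3://bucket-********/web-app/terraform/db_password.txt -)
if [ $? -eq 0 ]; then
  terraform apply -var="db_password=${db_password}"
else
  echo "failed to download password from s3"
fi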