AWS CodeBuild cannot print parameter store environment variables - amazon-web-services

I keep track of a number in parameter store, and access it during CodeBuild executions. The CodeBuild environment is a Windows machine. I would like to print the environment variable.
I've tried the following:
Print the environment variable as-is: echo $NUMBER
Assign the environment variable to another variable and print: $TMP=$NUMBER; echo $TMP
Echo the environment variable to a text file and print that text file: Add-Content -Path number.txt -Value $NUMBER; Get-Content number.txt
All of them are printed as asterisks. It looks like CodeBuild automatically censors environment variables it deems sensitive (maybe all Parameter Store variables? I couldn't find any documentation on this). This particular env variable is not sensitive and we would like to print it. Is there a way to do so?

A few months back, CodeBuild implemented best-effort masking of secrets in the build logs. Since the majority use case of Parameter Store is storing sensitive information like passwords, CodeBuild masks those values in build logs. When the value set as a secret is a common string like a number or an everyday word, it will get masked throughout the logs.
For simple environment variables, our suggestion is to use plain-text environment variables rather than Parameter Store or Secrets Manager. Parameter Store and Secrets Manager values get masked wherever the same string is found in the log.
Security is usually not a friend of convenience, so apologies for this but avoiding the leaking of secrets is the primary concern here.
This will be documented properly in the docs soon.
Edit1:
As per my tests, if the Parameter Store variable has the value "ABC", then anywhere "ABC" appears in the logs (even in another, innocent variable) it will be masked.
I guess we are back to square one with this. Please use the CLI to obtain the value directly (for a genuinely secret value, I highly recommend continuing to use the buildspec 'parameter-store' construct):
- MY_VAR=$(aws ssm get-parameter --name BUILD_NUM --query "Parameter.Value" --output text)
- echo $MY_VAR
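Put together, a minimal buildspec sketch (reusing the BUILD_NUM parameter name from the example above; the parameter-store entry is only there to show the contrast) might look like:

```yaml
version: 0.2
env:
  parameter-store:
    # pulled via the parameter-store construct: masked in build logs
    SECRET_NUM: BUILD_NUM
phases:
  build:
    commands:
      # fetched via the CLI into a plain variable: prints unmasked
      - MY_VAR=$(aws ssm get-parameter --name BUILD_NUM --query "Parameter.Value" --output text)
      - echo $MY_VAR
```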

Related

google cloud secret is there or not

I created two secrets, one with the
--data-file=-
flag and one without it.
The first was created as follows:
echo -n "Demo" | gcloud secrets create First-password --data-file=-
Second was created as
echo -n "mySuperSecert" | gcloud secrets create xyz-password
Now when I try to retrieve xyz-password, it reports:
ERROR: (gcloud.secrets.versions.access) NOT_FOUND: Secret
[projects/ProjectNumber/secrets/XYZ-password] not found or has no
versions.
A screenshot is attached below; the actual project number and variable name are hidden since I used a real one in the example, so in the screenshot it says XYZ.
How can I access this secret now, or delete it? It shows up in 'gcloud secrets list', but actually trying to get the value fails. The way to reproduce the issue is not to specify
--data-file=-
You have 2 types of resources: the secret and its versions.
In the first case, the gcloud CLI conveniently creates the secret and its first version with the value you provided (data-file).
In the second case, the gcloud CLI only created the secret, not a version. Therefore, if you try to access the secret value, there is no value! It's totally fine.
You can delete the secret if you want, but you won't be able to get the mySuperSecert value, because it was never stored.
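If you want to keep the secret and give it a value, you can add a version yourself instead of deleting it (a sketch, reusing the names from the question):

```shell
# add a first version to the existing, empty secret
echo -n "mySuperSecert" | gcloud secrets versions add xyz-password --data-file=-

# read it back to confirm
gcloud secrets versions access latest --secret=xyz-password
```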

Powershell AWS Update-SECSecret -SecretId

I am struggling to figure out how to update a secret in AWS Secrets Manager via PowerShell when the secret has multiple key/value pairs. My code is below with the standard change, but I have two key/value pairs that I need to update. I tried a specific syntax with JSON, which did not take very well, and I overwrote the secure key with simple text. I have attached my failed code as well, but I am stumped; I cannot find references to the syntax for multiple key/value pairs.
SINGLE SECRET(WORKS GREAT):
$sm_value = "my_secret_name"
Update-SECSecret -SecretId $sm_value -SecretString $Password
MULTI VALUE KEYS (FAILED):
Update-SECSecret -SecretId $sm_value -SecretString '{"Current":$newpassword,"Previous":$mycurrent}'
EXAMPLE AWS SECRET
Team, after looking at the documentation for CloudFormation and the AWS CLI, I concluded that my JSON string was incorrect. After playing with it for an hour or so, I came up with this format, which worked:
$mycurrent = "abc1234"
$newpassword = "zxy1234"
$json = @"
{"Current":"$newpassword","Previous":"$mycurrent"}
"@
Update-SECSecret -SecretId $sm_value -SecretString $json
It seems that updating secrets with multiple key/value pairs requires passing a JSON string in order to work.
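The root cause was quoting: inside single quotes, variables are not expanded, so the failed attempt sent the literal text $newpassword as the secret. The same pitfall is easy to demonstrate in bash (a standalone sketch, no AWS call involved):

```shell
newpassword="zxy1234"
mycurrent="abc1234"

# single quotes: no expansion; the literal '$newpassword' ends up in the JSON
bad='{"Current":$newpassword,"Previous":$mycurrent}'

# double quotes with escaped inner quotes: variables are expanded
good="{\"Current\":\"$newpassword\",\"Previous\":\"$mycurrent\"}"

echo "$bad"    # {"Current":$newpassword,"Previous":$mycurrent}
echo "$good"   # {"Current":"zxy1234","Previous":"abc1234"}
```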

How to set the value of a secret in GCP through CLI?

I have script written in bash where I create a key with a certain name.
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"
gcloud config set project "$project_id"
gcloud secrets create "$secret_id" --replication-policy="automatic"
I want to be able to also directly add the secret-value to my secret, so that I do not have to go into my GCP account and set it manually (which would defeat the purpose). I have seen that it is possible to attach files through the following command, however there does not seem to be a similar command for a secret value.
--data-file="/path/to/file.txt"
From https://cloud.google.com/sdk/gcloud/reference/secrets/create#--data-file:
--data-file=PATH
File path from which to read secret data. Set this to "-" to read the secret data from stdin.
So set --data-file to - and pass the value over stdin. Note, if you use echo use -n to avoid adding a newline.
echo -n $secret_value | gcloud secrets create ... --data-file=-
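Putting that together with the script from the question (printf avoids echo's portability quirks around -n; the last line is only there to verify the stored value):

```shell
#!/bin/bash
project_id="y"
secret_id="x"
secret_value="test"

gcloud config set project "$project_id"

# create the secret and its first version in one step
printf '%s' "$secret_value" | gcloud secrets create "$secret_id" \
  --replication-policy="automatic" --data-file=-

# verify: print the stored value back
gcloud secrets versions access latest --secret="$secret_id"
```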

Assigning output to variable and joining/concatenating them in AWS CLI

I have a buildspec that is part of a CodePipeline that exports to a bucket, but I need that bucket name to be passed in as a string with the pulled account number.
I have the account number successfully pulled, but I can't seem to pass it in to a variable (accountnum) nor can I get the string (lambdaapibucket) to join with the pulled accountnum to become one string/bucket name.
Here's the latest iteration of my attempts. I have tried so many different things at this point, including backticks, quotes with exit parameters, with and without echos, piping, and who knows what else I've forgotten. Thank you in advance for any ideas or points in the right direction.
- ACCOUNTNUM= aws sts get-caller-identity --output text --query 'Account'
- LambdaAPIBucket= echo lambdaapibucket-
- LambdaAPIBucketName= concat([$LambdaAPIBucket] + [$ACCOUNTNUM])
- export BUCKET=LambdaAPIBucketName
Figured it out, in case anyone needs the answer later. For the variables, the backticks need to be placed as below, and then the variables are joined as one continuous string; there is no need to concatenate them and then assign them separately into a variable:
- ACCOUNTNUM=`aws sts get-caller-identity --output text --query 'Account'`
- LambdaAPIBucket=`echo lambdaapibucket-`
- export BUCKET=$LambdaAPIBucket$ACCOUNTNUM
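The same thing works with the more modern $( ) command substitution. Here is a sketch with a hypothetical hard-coded account number standing in for the aws sts call, so it can run anywhere:

```shell
# stand-in for: ACCOUNTNUM=$(aws sts get-caller-identity --output text --query 'Account')
ACCOUNTNUM="123456789012"

# no echo needed; assign the prefix directly and concatenate
LambdaAPIBucket="lambdaapibucket-"
BUCKET="${LambdaAPIBucket}${ACCOUNTNUM}"

echo "$BUCKET"   # lambdaapibucket-123456789012
```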

aws cli lambda `update-function-configuration` deletes existing environment variables

The documentation on the AWS CLI for Lambda states that
...You provide only the parameters you want to change...
Which I assume means that the rest of the settings would still remain the same.
However, say my lambda function has environment variables :
var1=old_val1
var2=old_val2
var3=old_val3
and then when I try doing something like this :
aws lambda update-function-configuration --function-name dummy_fun --environment '{"Variables":{"var1":"new_val1","var2":"new_val2"}}'
with the intention of updating the variables :
var1 and var2 with the new values new_val1 and new_val2 respectively. Although these two variables DO get updated, the third one, var3, gets deleted!
Am I doing something wrong? Or is there a way to make sure this doesn't happen?
I can definitely handle it with a workaround: fetch the current config, update the env variables locally, and then push the entire updated config, all through Python code or similar.
But is that the only way to do it? Or is there a simpler way?
You are misinterpreting the intention of the documentation.
You provide only the parameters you want to change.
--environment is the (singular) "parameter" that you are specifying should be changed -- not the individual variables.
The environment variables are configured en bloc so there is no concept of specifying only certain environment variables that you want to be different.
aws lambda update-function-configuration --function-name my-function-name --environment Variables="{VAR1=variable_value, VAR2=variable_value}"
Description: the above command will update the environment variables for the Lambda function in AWS.
It seems it is not easy to partially update the environment variables of a Lambda with the awscli.
But using the built-in JSON-based client-side filtering with JMESPath syntax, I found a way to achieve what I needed:
NEW_ENVVARS=$(aws lambda get-function-configuration --function-name your-func-name --query "Environment.Variables | merge(@, \`{\"ENV_VAR_TO_UPDATE\":\"value_here\"}\`)")
aws lambda update-function-configuration --function-name your-func-name --environment "{ \"Variables\": $NEW_ENVVARS }"
Of course, you can update more than one environment variable with that trick.
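If escaping backticks inside --query is too fiddly, the same read-modify-write can be sketched with jq instead (this assumes jq is installed; your-func-name is a placeholder as above):

```shell
FN="your-func-name"

# fetch the current variables and merge in the change client-side
NEW_ENVVARS=$(aws lambda get-function-configuration --function-name "$FN" \
  --query 'Environment.Variables' --output json \
  | jq '. + {"ENV_VAR_TO_UPDATE":"value_here"}')

# push the full, merged set back
aws lambda update-function-configuration --function-name "$FN" \
  --environment "{\"Variables\": $NEW_ENVVARS}"
```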
I had the same problem where I wanted to update only one env variable of a function and not touch the rest.
I ended up writing a script in node and publishing it:
https://www.npmjs.com/package/aws-lambda-update-env
It is pretty simple to use:
update-lambda-env KEY "My New Test Value" --stack-name myApplicationStack
This will change only the variable KEY in the functions located in the stack myApplicationStack.
A better solution might be to use AWS Parameter Store if your variable is going to change often.
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html