"Secret is used without being defined" Error in Google Cloud Build - google-cloud-platform

I am trying to run a Google Cloud Build with the following configuration:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/gcloud",
      "id": "Create GitHub pull request",
      "entrypoint": "bash",
      "args": [
        "-c",
        "curl -X POST -H \"Authorization:Bearer $$GH_TOKEN\" -H 'Accept:application/vnd.github.v3+json' https://api.github.com/repos/<username>/<repo> -d '{\"head\":\"main\",\"base\":\"newbranch\", \"title\":\"NEW_PR\"}'"
      ],
      "secretEnv": ["GH_TOKEN"]
    }
  ],
  "availableSecrets": {
    "secretManager": [
      {
        "versionName": "projects/PROJECT_ID/secrets/password/versions/latest",
        "env": "GH_TOKEN"
      }
    ]
  }
}
I have created a secret named password in Secret Manager. When I run the build, I get the error
invalid secrets: secretEnv "GH_TOKEN" is used without being defined
I have also checked that my Cloud Build service account is listed as a principal with a role on the secret in Secret Manager.
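For reference, the role the service account needs on the secret is Secret Manager Secret Accessor. A minimal sketch of granting it with gcloud, assuming the default Cloud Build service account and using PROJECT_NUMBER as a placeholder:

# Grant the Cloud Build service account access to the secret named "password".
gcloud secrets add-iam-policy-binding password \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"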

Related

Running a public image from AWS ECR in ECS Cluster

I have successfully pushed my 3 Docker images to ECR.
Configured an ECS cluster.
Created 3 task definitions for those 3 images stored in their respective ECR repositories.
Now I want to run a public Redis image on the same cluster as a different task. I tried creating a task definition for it using the following image URI: public.ecr.aws/ubuntu/redis:latest
But as soon as I run it as a new task I get the following error:
Essential container in task exited
Any specific reason for this error or am I doing something wrong?
OK, so the Redis image needs you to either set a password (I recommend this as well) or explicitly allow connections to Redis without a password.
To configure a password or to disable password auth, you need to set environment variables on the image. You can read the documentation under the heading Configuration.
Luckily, this is easy in ECS: you just specify the environment variable in the task definition. So either:
{
  "family": "",
  "containerDefinitions": [
    {
      "name": "",
      "image": "",
      ...
      "environment": [
        {
          "name": "ALLOW_EMPTY_PASSWORD",
          "value": "yes"
        }
      ],
      ...
    }
  ],
  ...
}
or for a password:
{
  "family": "",
  "containerDefinitions": [
    {
      "name": "",
      "image": "",
      ...
      "environment": [
        {
          "name": "REDIS_PASSWORD",
          "value": "your_password"
        }
      ],
      ...
    }
  ],
  ...
}
For more granular configuration, you should read the documentation of the Redis Docker image linked above.
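As a usage note, a task definition JSON like the ones above can be registered with the AWS CLI; the file name here is a placeholder:

# Register the task definition from a local JSON file.
aws ecs register-task-definition --cli-input-json file://redis-task.json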

gcloud alpha monitoring policies create --policy-from-file throws error "must specify a restriction on "resource.type" in the filter"

I've created a couple of alert policies using the Cloud Console, but after exporting them and changing the name (via Download JSON or the gcloud CLI), I can't import them back.
Details below:
Payload (name fields are removed after export):
{
  "displayName": "somename",
  "conditions": [
    {
      "displayName": "somename",
      "conditionAbsent": {
        "aggregations": [
          {
            "alignmentPeriod": "300s",
            "crossSeriesReducer": "REDUCE_MEAN",
            "perSeriesAligner": "ALIGN_DELTA"
          }
        ],
        "duration": "300s",
        "filter": "metric.type=\"logging.googleapis.com/user/some-metric\""
      }
    }
  ],
  "combiner": "OR",
  "enabled": true,
  "notificationChannels": [
    "projects/my-prod-dod/notificationChannels/1962880049684990238",
    "projects/my-prod-dod/notificationChannels/9131919367771592634"
  ]
}
Command:
gcloud alpha monitoring policies create --policy-from-file alert.json
Error:
Field alert_policy.conditions[0].condition_absent.filter had an invalid value of "metric.type="logging.googleapis.com/user/some-metric"": must specify a restriction on "resource.type" in the filter
Adding an additional resource.type restriction to the filter, as below, solved the problem:
"filter": "metric.type=\"logging.googleapis.com/user/celery-person\" resource.type=\"k8s_container\"",
Similar question:
Use a Stackdriver resource group's ID in a GCP Deployment Manager configuration

`aws ecs execute-command` results in `TargetNotConnectedException` `The execute command failed due to an internal error`

I am running a Docker image on an ECS cluster so I can shell into it and run some simple tests. However, when I run this:
aws ecs execute-command \
--cluster MyEcsCluster \
--task $ECS_TASK_ARN \
--container MainContainer \
--command "/bin/bash" \
--interactive
I get the error:
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.
I can confirm the task + container + agent are all running:
aws ecs describe-tasks \
--cluster MyEcsCluster \
--tasks $ECS_TASK_ARN \
| jq '.'
"containers": [
{
"containerArn": "<redacted>",
"taskArn": "<redacted>",
"name": "MainContainer",
"image": "confluentinc/cp-kafkacat",
"runtimeId": "<redacted>",
"lastStatus": "RUNNING",
"networkBindings": [],
"networkInterfaces": [
{
"attachmentId": "<redacted>",
"privateIpv4Address": "<redacted>"
}
],
"healthStatus": "UNKNOWN",
"managedAgents": [
{
"lastStartedAt": "2021-09-20T16:26:44.540000-05:00",
"name": "ExecuteCommandAgent",
"lastStatus": "RUNNING"
}
],
"cpu": "0",
"memory": "4096"
}
],
I'm defining the ECS cluster and task definition with the following CDK TypeScript code:
new Cluster(stack, `MyEcsCluster`, {
  vpc,
  clusterName: `MyEcsCluster`,
})
const taskDefinition = new FargateTaskDefinition(stack, `TestTaskDefinition`, {
  family: `TestTaskDefinition`,
  cpu: 512,
  memoryLimitMiB: 4096,
})
taskDefinition.addContainer("MainContainer", {
  image: ContainerImage.fromRegistry("confluentinc/cp-kafkacat"),
  command: ["tail", "-F", "/dev/null"],
  memoryLimitMiB: 4096,
  // Some internet searches suggested setting this flag. This didn't seem to help.
  readonlyRootFilesystem: false,
})
ECS Exec Checker should be able to figure out what's wrong with your setup. Can you give it a try?
The check-ecs-exec.sh script allows you to check and validate that both your CLI environment and your ECS cluster/task are ready for ECS Exec, by calling various AWS APIs on your behalf.
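For reference, the checker is typically invoked with the cluster name and the task ID or ARN; the exact invocation below is an assumption based on the script's documented usage, with values taken from the question:

# Validate the CLI environment, cluster, and task for ECS Exec readiness.
./check-ecs-exec.sh MyEcsCluster $ECS_TASK_ARN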
Building on @clay's comment:
I was also missing ssmmessages:* permissions.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-required-iam-permissions says a policy such as
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
should be attached to the role used as your "task role" (not the "task execution role"), although the sole ssmmessages:CreateDataChannel permission does not cut it.
The managed policies
arn:aws:iam::aws:policy/AmazonSSMFullAccess
arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy
arn:aws:iam::aws:policy/AWSCloud9SSMInstanceProfile
all contain the necessary permissions, AWSCloud9SSMInstanceProfile being the most minimalistic.
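If you'd rather attach one of those managed policies to the task role from the CLI, a minimal sketch with the role name as a placeholder:

# Attach the smallest managed policy listed above to the task role.
aws iam attach-role-policy \
  --role-name MyTaskRole \
  --policy-arn arn:aws:iam::aws:policy/AWSCloud9SSMInstanceProfile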

Newman: "Unknown encoding: latin1" pops up when running the Newman CLI on AWS CodeBuild

I set up Newman (the Postman CLI) on AWS CodeBuild a few months ago, and it was working perfectly. Then this error popped up out of nowhere: error: Unknown encoding: latin1
Running the same command locally works perfectly.
Running the same command inside Docker on an AWS EC2 instance works perfectly.
It only fails when running on AWS CodeBuild, which is part of my AWS CodePipeline.
There are no special characters in the JSON file.
Here is my buildspec for CodeBuild:
version: 0.2
env:
  variables:
    AWS_HOST: "https://api.aws.com/demo-testing"
phases:
  pre_build:
    commands:
      - npm install newman --global
  build:
    commands:
      - newman run APITesting.json -e env.json --bail
Everything is working fine except
- newman run APITesting.json -e env.json
which gives me an error that makes no sense: error: Unknown encoding: latin1
The error persists even after I replace APITesting.json with demo.json.
demo.json:
{
  "info": {
    "_postman_id": "5bc2766f-eefc-48f2-a778-f05b2b2465ef",
    "name": "A",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "GetMyProfile",
      "event": [
        {
          "listen": "test",
          "script": {
            "id": "1b46d302-7014-4c09-bac9-751d2cec959d",
            "exec": [
              "pm.test(\"Status code is 200\", function () {",
              "    pm.response.to.have.status(200);",
              "});"
            ],
            "type": "text/javascript"
          }
        },
        {
          "listen": "prerequest",
          "script": {
            "id": "f9a5dc64-33ab-42b1-9efa-f0a3614db340",
            "exec": [
              ""
            ],
            "type": "text/javascript"
          }
        }
      ],
      "request": {
        "auth": {
          "type": "noauth"
        },
        "method": "GET",
        "header": [
          {
            "key": "Content-Type",
            "value": "application/json"
          },
          {
            "key": "user",
            "value": "xxxx"
          },
          {
            "key": "email",
            "value": "xxxx@gmail.com"
          }
        ],
        "body": {
          "mode": "raw",
          "raw": ""
        },
        "url": {
          "raw": "https://api.aws.com/demo-testing/api/profile",
          "protocol": "https",
          "host": [
            "api",
            "aws",
            "com"
          ],
          "path": [
            "demo-testing",
            "api",
            "profile"
          ]
        }
      },
      "response": []
    }
  ]
}
It still complains about the unknown encoding. I tried using file -i or file -I to get the encoding of the files; all of them are encoded as utf-8 or us-ascii:
[Container] 2019/02/27 06:26:34 Running command file -i APITesting.json
APITesting.json: text/plain; charset=utf-8
[Container] 2019/02/27 06:26:34 Running command file -i env.json
env.json: text/plain; charset=us-ascii
[Container] 2019/02/27 06:26:34 Running command file -i demo.json
demo.json: text/plain; charset=utf-8
Everything is running inside a Docker container, but I don't think that matters.
I searched all the issues on the Newman GitHub repository with no luck.
I also searched for everything related to Unknown encoding: latin1 on Google, Stack Overflow, and the AWS Discussion Forums with no results.
I have already spent two days on this. Does anyone have a clue?
Thank you so much!!!
Kun
If anyone runs into this: the command below strips a UTF-8 BOM from the start of a file, converting UTF-8 with BOM back to plain UTF-8:
sed -i '1s/^\xEF\xBB\xBF//' your-file.json
This fixed the issue for us.
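To check whether a file actually starts with a BOM before stripping it, a quick sketch (assuming xxd is available):

# Print the first three bytes; a UTF-8 BOM shows up as ef bb bf.
head -c 3 your-file.json | xxd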

Connect AWS CodeDeploy to Github without using my Github user?

The AWS documentation describes how you authenticate to GitHub using your browser, while logged into GitHub as a valid user with permission to the repository you want to deploy from:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ.html#github-integ-behaviors-auth
Is there any way to set up CodeDeploy without linking my user and without needing a browser? I'd love to do this using webhooks on each repository and AWS API calls, but I'll make a GitHub 'service user' if I have to.
More examples:
http://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
I'd rather use webhooks on my repo, or set them up myself, than permit AWS access to every repository on my GitHub account.
There does not appear to be an alternative to doing the OAuth flow in your browser at this point. If you're concerned about opening your whole GitHub account up to Amazon, creating a service user is probably the best approach; unfortunately, it seems this user still needs administrative access to your repos to set up the integration.
After more research I realized my first answer was wrong: you can use the AWS CLI to create a CodePipeline using a GitHub OAuth token, and then plug in your CodeDeploy deployment from there. Here's an example configuration:
{
  "pipeline": {
    "roleArn": "arn:aws:iam::99999999:role/AWS-CodePipeline-Service",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "version": "1",
              "provider": "GitHub"
            },
            "outputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "configuration": {
              "Owner": "myusername",
              "Repo": "myrepo",
              "Branch": "master",
              "OAuthToken": "**************"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Beta",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "name": "CodePipelineDemoFleet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeDeploy"
            },
            "outputArtifacts": [],
            "configuration": {
              "ApplicationName": "CodePipelineDemoApplication",
              "DeploymentGroupName": "CodePipelineDemoFleet"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "codepipeline-us-east-1-99999999"
    },
    "name": "MySecondPipeline",
    "version": 1
  }
}
You can create the pipeline using the command:
aws codepipeline create-pipeline --cli-input-json file://input.json
Make sure that the GitHub OAuth token has the admin:repo_hook and repo permissions.
Reference: http://docs.aws.amazon.com/cli/latest/reference/codepipeline/create-pipeline.html
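As a sanity check after creation, you can fetch the pipeline back by the name used in the JSON above:

# Confirm the pipeline exists and inspect its stored definition.
aws codepipeline get-pipeline --name MySecondPipeline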
CodeDeploy's GitHub integration works via GitHub OAuth, so to use it you will have to trust the CodeDeploy GitHub application with your GitHub account. Currently this integration only works in a browser with a valid GitHub account, because the CodeDeploy application always redirects back to the CodeDeploy console to verify and finish the OAuth authentication process.
You can do it using these bash commands.
FROM LOCAL TO REMOTE
rsync --delete -azvv -e "ssh -i /path/to/pem" /path/to/local/code/* ubuntu@66.66.66.66:/path/to/remote/code
FROM REMOTE TO LOCAL
rsync --delete -azvv -e "ssh -i /path/to/pem" ubuntu@66.66.66.66:/path/to/remote/code/* /path/to/local/code
rsync checks file versions and updates only the files that need to be updated.