How to check the option values of an installed Istio

I have an Istio environment and I want to know the values of install options such as:
--set values.global.mtls.auto=true \
--set values.global.mtls.enabled=false
How do I check which option values my installed Istio is actually using?

After a lot of trial and error, I found a command that dumps the effective values:
pilot-discovery discovery -a default
Part of the output:
  ...
  "outboundTrafficPolicy": {
    "mode": "ALLOW_ANY"
  },
  "enableAutoMtls": true,
  "trustDomain": "cluster.local",
  "trustDomainAliases": [],
  "defaultServiceExportTo": ["*"],
  "defaultVirtualServiceExportTo": ["*"],
  ...
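If you only need individual fields out of a dump like that, a few lines of script can pull them from the JSON. This is just a sketch: the excerpt below is reconstructed from the output above, and the assumption that `enableAutoMtls` is what the `values.global.mtls.auto` install option maps to should be verified against your own dump.

```python
import json

# Excerpt of the pilot-discovery mesh-config dump above, wrapped into a
# complete JSON object so it parses on its own.
dump = """
{
  "outboundTrafficPolicy": {"mode": "ALLOW_ANY"},
  "enableAutoMtls": true,
  "trustDomain": "cluster.local",
  "trustDomainAliases": [],
  "defaultServiceExportTo": ["*"],
  "defaultVirtualServiceExportTo": ["*"]
}
"""

mesh = json.loads(dump)
print(mesh["enableAutoMtls"])                 # True
print(mesh["outboundTrafficPolicy"]["mode"])  # ALLOW_ANY
```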

Related

"Secret is used without being defined" Error in Google Cloud Build

I am trying to run a Google Cloud Build with the following configuration:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/gcloud",
      "id": "Create GitHub pull request",
      "entrypoint": "bash",
      "args": [
        "-c",
        "curl -X POST -H \"Authorization: Bearer $$GH_TOKEN\" -H 'Accept: application/vnd.github.v3+json' https://api.github.com/repos/<username>/<repo> -d '{\"head\":\"main\",\"base\":\"newbranch\",\"title\":\"NEW_PR\"}'"
      ],
      "secretEnv": ["GH_TOKEN"]
    }
  ],
  "availableSecrets": {
    "secretManager": [
      {
        "versionName": "projects/PROJECT_ID/secrets/password/versions/latest",
        "env": "GH_TOKEN"
      }
    ]
  }
}
I have created a secret named password in Secret Manager. When I run the build, I get the error:
invalid secrets: secretEnv "GH_TOKEN" is used without being defined
I have also checked that my Cloud Build service account is listed as a principal with a role on the secret in Secret Manager.
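One quick way to narrow this error down is to check, locally, that every `secretEnv` name a step references is also declared under `availableSecrets`. This is a hand-rolled sanity check, not an official Cloud Build tool, run here against a trimmed copy of the config above:

```python
import json

# The build config from the question, trimmed to the fields that matter here.
config = json.loads("""
{
  "steps": [
    {"name": "gcr.io/cloud-builders/gcloud", "secretEnv": ["GH_TOKEN"]}
  ],
  "availableSecrets": {
    "secretManager": [
      {"versionName": "projects/PROJECT_ID/secrets/password/versions/latest",
       "env": "GH_TOKEN"}
    ]
  }
}
""")

# Names declared under availableSecrets vs. names referenced by steps.
defined = {s["env"] for s in config.get("availableSecrets", {}).get("secretManager", [])}
used = {name for step in config["steps"] for name in step.get("secretEnv", [])}

missing = used - defined
print(missing or "every secretEnv name is defined")
```

If this prints a non-empty set, the error message is literal; if it prints nothing missing (as it does for the config above), the problem is more likely elsewhere, e.g. where the config block sits in the file or in IAM.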

Installing CloudHealth agent on EC2 instance - Latest CloudHealth agent

We have a large number of EC2 instances, both Windows and Linux, with CloudHealth v10.0.0.180 installed. I understand there are newer versions such as 10.0.0.220, but I can't find a definitive list of the versions or which one is the latest. I have an AWS custom document that pushes CloudHealth v10.0.0.180 (see below), but if I update that document to push 10.0.0.220, it reports success and yet the version does not change. These are the URLs I am using in the document for v10.0.0.180 and v10.0.0.220, followed by the full document:
https://s3.amazonaws.com/remote-collector/agent/windows/18/CloudHealthAgent.exe
https://s3.amazonaws.com/remote-collector/agent/windows/22/CloudHealthAgent.exe
{
  "description": "Download and Install CloudHealth Agents",
  "schemaVersion": "2.2",
  "mainSteps": [
    {
      "inputs": {
        "runCommand": [
          "Write-Output \"Installing CloudHealth Agent\"",
          "$url = \"https://s3.amazonaws.com/remote-collector/agent/windows/22/CloudHealthAgent.exe\"",
          "$output = \"C:\\CloudHealthAgent.exe\"",
          "$start_time = Get-Date",
          "Invoke-WebRequest -Uri $url -OutFile $output",
          "C:\\CloudHealthAgent.exe /S /v\"/l* install.log /qn CLOUDNAME=aws CHTAPIKEY=6a4290cd-116d-46f5-b8f4-eb6c6ee4bf46\"",
          "Write-Output \"Time taken: $((Get-Date).Subtract($start_time).Seconds) second(s)\""
        ]
      },
      "name": "CloudHealthAgentWindows",
      "action": "aws:runPowerShellScript",
      "precondition": {
        "StringEquals": [
          "platformType",
          "Windows"
        ]
      }
    },
    {
      "inputs": {
        "runCommand": [
          "echo \"Installing CloudHealth Agent\"",
          "sudo yum install wget -y",
          "wget https://s3.amazonaws.com/remote-collector/agent/v22/install_cht_perfmon.sh",
          "sudo sh install_cht_perfmon.sh 20 8fdf2776-eda0-441b-bca8-0566ded6daf1 aws;"
        ]
      },
      "name": "CloudHealthAgentLinux",
      "action": "aws:runShellScript",
      "precondition": {
        "StringEquals": [
          "platformType",
          "Linux"
        ]
      }
    }
  ]
}
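The `precondition` blocks are what keep each step on its own OS: SSM evaluates the `StringEquals` test against the instance's `platformType` and skips steps that don't match. The selection logic can be sketched over a trimmed copy of the document above:

```python
import json

# Trimmed copy of the SSM document above: one step per platform.
doc = json.loads("""
{
  "schemaVersion": "2.2",
  "mainSteps": [
    {"name": "CloudHealthAgentWindows", "action": "aws:runPowerShellScript",
     "precondition": {"StringEquals": ["platformType", "Windows"]}},
    {"name": "CloudHealthAgentLinux", "action": "aws:runShellScript",
     "precondition": {"StringEquals": ["platformType", "Linux"]}}
  ]
}
""")

def steps_for(platform):
    # A step runs only when its StringEquals precondition matches the platform.
    return [s["name"] for s in doc["mainSteps"]
            if s["precondition"]["StringEquals"] == ["platformType", platform]]

print(steps_for("Windows"))  # ['CloudHealthAgentWindows']
print(steps_for("Linux"))    # ['CloudHealthAgentLinux']
```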
I just went through updating the agents myself. I had to uninstall the old agent, reboot the instance, and then I was able to successfully install the new (22) version.

`aws ecs execute-command` results in `TargetNotConnectedException` `The execute command failed due to an internal error`

I am running a Docker image on an ECS cluster so I can shell into it and run some simple tests. However, when I run this:
aws ecs execute-command \
--cluster MyEcsCluster \
--task $ECS_TASK_ARN \
--container MainContainer \
--command "/bin/bash" \
--interactive
I get the error:
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.
I can confirm the task + container + agent are all running:
aws ecs describe-tasks \
--cluster MyEcsCluster \
--tasks $ECS_TASK_ARN \
| jq '.'
"containers": [
  {
    "containerArn": "<redacted>",
    "taskArn": "<redacted>",
    "name": "MainContainer",
    "image": "confluentinc/cp-kafkacat",
    "runtimeId": "<redacted>",
    "lastStatus": "RUNNING",
    "networkBindings": [],
    "networkInterfaces": [
      {
        "attachmentId": "<redacted>",
        "privateIpv4Address": "<redacted>"
      }
    ],
    "healthStatus": "UNKNOWN",
    "managedAgents": [
      {
        "lastStartedAt": "2021-09-20T16:26:44.540000-05:00",
        "name": "ExecuteCommandAgent",
        "lastStatus": "RUNNING"
      }
    ],
    "cpu": "0",
    "memory": "4096"
  }
],
I'm defining the ECS cluster and task definition with this CDK TypeScript code:
new Cluster(stack, `MyEcsCluster`, {
  vpc,
  clusterName: `MyEcsCluster`,
})

const taskDefinition = new FargateTaskDefinition(stack, `TestTaskDefinition`, {
  family: `TestTaskDefinition`,
  cpu: 512,
  memoryLimitMiB: 4096,
})

taskDefinition.addContainer("MainContainer", {
  image: ContainerImage.fromRegistry("confluentinc/cp-kafkacat"),
  command: ["tail", "-F", "/dev/null"],
  memoryLimitMiB: 4096,
  // Some internet searches suggested setting this flag. It didn't seem to help.
  readonlyRootFilesystem: false,
})
ECS Exec Checker should be able to figure out what's wrong with your setup; can you give it a try?
The check-ecs-exec.sh script checks and validates that both your CLI environment and your ECS cluster/task are ready for ECS Exec, by calling various AWS APIs on your behalf.
Building on @clay's comment:
I was also missing the ssmmessages:* permissions.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-required-iam-permissions says that a policy such as
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
should be attached to the role used as your "task role" (not the "task execution role"), although the single ssmmessages:CreateDataChannel permission on its own does cut it.
The managed policies
arn:aws:iam::aws:policy/AmazonSSMFullAccess
arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy
arn:aws:iam::aws:policy/AWSCloud9SSMInstanceProfile
all contain the necessary permissions, with AWSCloud9SSMInstanceProfile being the most minimal.
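To confirm that a candidate task-role policy document actually grants the four channel actions, a few lines of script can diff them. This is a local sketch over the policy JSON itself; it does not call IAM and won't catch permissions denied elsewhere (e.g. by a boundary or SCP):

```python
import json

# The ssmmessages policy from the docs, quoted above.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"],
     "Resource": "*"}
  ]
}
""")

required = {
    "ssmmessages:CreateControlChannel",
    "ssmmessages:CreateDataChannel",
    "ssmmessages:OpenControlChannel",
    "ssmmessages:OpenDataChannel",
}

# Collect every action allowed by the policy; "Action" may be a string or list.
granted = set()
for stmt in policy["Statement"]:
    if stmt["Effect"] == "Allow":
        actions = stmt["Action"]
        granted.update([actions] if isinstance(actions, str) else actions)

print(sorted(required - granted) or "all ssmmessages actions granted")
```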

Go-CD: How to use API to trigger a pipeline?

Is there an API that can be used to trigger a pipeline? I did not find one in the API manual. Or is there any other way I can trigger a pipeline from a Linux command line?
Thanks
Link to the docs: https://api.gocd.org/current/#scheduling-pipelines
POST /go/api/pipelines/:pipeline_name/schedule
In the request you can override environment variables and the materials to use, and you can choose to update the materials before the run starts.
Example command, taken from the documentation:
$ curl 'https://ci.example.com/go/api/pipelines/pipeline1/schedule' \
-u 'username:password' \
-H 'Accept: application/vnd.go.cd.v1+json' \
-H 'Content-Type: application/json' \
-X POST \
-d '{
  "environment_variables": [
    {
      "name": "USERNAME",
      "secure": false,
      "value": "bob"
    },
    {
      "name": "SSH_PASSPHRASE",
      "value": "some passphrase",
      "secure": true
    },
    {
      "name": "PASSWORD",
      "encrypted_value": "YEepp1G0C05SpP0fcp4Jh+kPmWwXH5Nq",
      "secure": true
    }
  ],
  "materials": [
    {
      "fingerprint": "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
      "revision": "123"
    },
    {
      "fingerprint": "7d865e959b2466918c9863afca942d0fb89d7c9ac0c99bafc3749504ded97730",
      "revision": "1058e75b18e8a645dd71702851994a010789f450"
    }
  ],
  "update_materials_before_scheduling": true
}'
/go/api/pipelines/${pipelineName}/schedule works
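The same call can be made from Python's standard library. This sketch only builds the request; the host, credentials, and pipeline name are placeholders, and the commented-out `urlopen` line is what would actually send it:

```python
import base64
import json
import urllib.request

payload = {"update_materials_before_scheduling": True}
auth = base64.b64encode(b"username:password").decode()  # placeholder credentials

req = urllib.request.Request(
    "https://ci.example.com/go/api/pipelines/pipeline1/schedule",
    data=json.dumps(payload).encode(),
    headers={
        "Accept": "application/vnd.go.cd.v1+json",
        "Content-Type": "application/json",
        "Authorization": f"Basic {auth}",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually trigger the pipeline

print(req.get_method(), req.full_url)
```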

Newman: "Unknown encoding: latin1" pops up when running the Newman CLI on AWS CodeBuild

I set up Newman (the Postman CLI) on AWS CodeBuild a few months ago and it was working perfectly. Then this error popped up out of nowhere: error: Unknown encoding: latin1
Running the same command locally works perfectly.
Running the same command inside Docker on an AWS EC2 instance works perfectly.
It only fails when running on AWS CodeBuild, which is part of my AWS CodePipeline.
There are no special characters in the JSON files.
Here is my buildspec for CodeBuild:
version: 0.2
env:
  variables:
    AWS_HOST: "https://api.aws.com/demo-testing"
phases:
  pre_build:
    commands:
      - npm install newman --global
  build:
    commands:
      - newman run APITesting.json -e env.json --bail
Everything works fine except:
- newman run APITesting.json -e env.json
It gives me this baffling error: error: Unknown encoding: latin1
It happens even after I replaced APITesting.json with this minimal demo.json:
{
  "info": {
    "_postman_id": "5bc2766f-eefc-48f2-a778-f05b2b2465ef",
    "name": "A",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "GetMyProfile",
      "event": [
        {
          "listen": "test",
          "script": {
            "id": "1b46d302-7014-4c09-bac9-751d2cec959d",
            "exec": [
              "pm.test(\"Status code is 200\", function () {",
              "    pm.response.to.have.status(200);",
              "});"
            ],
            "type": "text/javascript"
          }
        },
        {
          "listen": "prerequest",
          "script": {
            "id": "f9a5dc64-33ab-42b1-9efa-f0a3614db340",
            "exec": [
              ""
            ],
            "type": "text/javascript"
          }
        }
      ],
      "request": {
        "auth": {
          "type": "noauth"
        },
        "method": "GET",
        "header": [
          {
            "key": "Content-Type",
            "value": "application/json"
          },
          {
            "key": "user",
            "value": "xxxx"
          },
          {
            "key": "email",
            "value": "xxxx@gmail.com"
          }
        ],
        "body": {
          "mode": "raw",
          "raw": ""
        },
        "url": {
          "raw": "https://api.aws.com/demo-testing/api/profile",
          "protocol": "https",
          "host": [
            "api",
            "aws",
            "com"
          ],
          "path": [
            "demo-testing",
            "api",
            "profile"
          ]
        }
      },
      "response": []
    }
  ]
}
It still complains about the unknown encoding. I tried file -i / file -I to get the encoding of the files; they are all utf-8 or us-ascii:
[Container] 2019/02/27 06:26:34 Running command file -i APITesting.json
APITesting.json: text/plain; charset=utf-8
[Container] 2019/02/27 06:26:34 Running command file -i env.json
env.json: text/plain; charset=us-ascii
[Container] 2019/02/27 06:26:34 Running command file -i demo.json
demo.json: text/plain; charset=utf-8
Everything is running inside a Docker container, but I don't think that matters.
I searched all the issues on the Newman GitHub repo with no luck.
I also searched Google, Stack Overflow, and the AWS Discussion Forums for everything related to Unknown encoding: latin1 with no result.
I've already spent two days on it. Does anyone have any clue?
Thank you so much!!!
Kun
If anyone else runs into this: the command below strips the UTF-8 BOM from the first line (converting a UTF-8-with-BOM file back to plain UTF-8):
sed -i '1s/^\xEF\xBB\xBF//' your-file.json
This fixed the issue for us.
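What that sed one-liner does can be illustrated in a few lines of Python: write a file that starts with the three BOM bytes EF BB BF, strip them, and the file is plain UTF-8 again. The file path here is just for the demonstration.

```python
import codecs

# Write a JSON file that begins with the UTF-8 BOM (EF BB BF), as some
# editors and pipelines produce.
path = "/tmp/bom.json"
with open(path, "wb") as f:
    f.write(codecs.BOM_UTF8 + b'{"a":1}\n')

# Strip the BOM -- the same transformation the sed one-liner performs
# on the first line of the file.
with open(path, "rb") as f:
    data = f.read()
if data.startswith(codecs.BOM_UTF8):
    data = data[len(codecs.BOM_UTF8):]
with open(path, "wb") as f:
    f.write(data)

print(open(path, "rb").read())  # b'{"a":1}\n' -- the BOM is gone
```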