kubectl get all deployments with specific image tag

I have a question: how do I find all deployments in a cluster that use a specific image tag?
I want something like this:
kubectl get deployment -A -o jsonpath='{range .items[*]}{.spec.template.spec.containers[*].image}{"\n"}{end}'
but with the deployment names and filtered to a specific image tag.

I don't have a cluster available to try this.
You don't want to range but to filter, and I think (!?) you won't be able to use kubectl's JSONPath to:
filter on only ${TAG} from an image of the form ${REPO}:${TAG}, though you can filter by the full value of a field, e.g. ${REPO}:${TAG}
return the deployments' metadata.name values
IIRC you can't nest filters, so you can't write items[?(@.spec.template.spec.containers[?(@.image=="${IMAGE}")])].metadata.name
You can, however, enumerate same-level fields (e.g. the container's name) if that helps. I haven't tried this but:
IMAGE="..."
FILTER="{
.items[*].spec.template.spec.containers[?(#.image==\"${IMAGE}\")].name
}"
kubectl \
get deployments \
--all-namespaces \
--output=jsonpath="${FILTER}"
This may be better done with a tool like jq:
IMAGE="..."
FILTER="
.items[]
|select(.spec.template.spec.containers[].image==\"${IMAGE}\")
|.metadata.name
"
kubectl \
get deployments \
--all-namespaces \
--output=json \
| jq -r "${FILTER}"
NOTE: Using jq you can filter by ${TAG} alone too.
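For example, a minimal (untested) sketch that matches on the tag alone, assuming images look like ${REPO}:${TAG} with no digests:
TAG="..."
FILTER="
.items[]
|select(.spec.template.spec.containers[].image | endswith(\":${TAG}\"))
|.metadata.name
"
kubectl \
get deployments \
--all-namespaces \
--output=json \
| jq -r "${FILTER}"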

Related

gcloud command to list all project owners

I'm searching for a gcloud command to list all the active owners of a project. I have tried the command below, but it lists all the IAM policy bindings; I only need the project owner information.
gcloud projects get-iam-policy $PROJECT-ID
Try:
PROJECT="[YOUR-PROJECT-ID]"
gcloud projects get-iam-policy ${PROJECT} \
--flatten="bindings" \
--filter="bindings.role=roles/owner" \
--format="value(bindings.members[])"
This uses gcloud's --flatten, --format and --filter flags. See [this] post for a very good explanation.
It's confusing, but --filter can only be applied to lists, so --flatten is used to convert a single resource with a single bindings field into multiple documents, each rooted on one binding.
It's then possible to keep only the bindings whose role is roles/owner,
and finally format the result to include only the members.
Note: members are prefixed with their type (user:, serviceAccount:, etc.), so you may want to process these further.
Or:
PROJECT="[YOUR-PROJECT-ID]"
FILTER="
.bindings[]
|select(.role==\"roles/owner\").members"
gcloud projects get-iam-policy ${PROJECT} \
--format=json \
| jq -r "${FILTER}"
This approach requires that you're willing to use jq to process JSON: you have gcloud emit JSON with --format=json and then process it with jq.
The advantage of this approach is that you learn and use one tool (i.e. jq) to process the JSON output of any number of commands (not just gcloud).
The disadvantage is that you need two tools (gcloud and jq) instead of just one (gcloud).
In the case of jq, it's easier (!?) to write a filter that extracts the email from the member:
FILTER="
.bindings[]
|select(.role==\"roles/owner\").members[]
|split(\":\")[1]"
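For completeness, this filter is used with the same command as before (untested sketch):
PROJECT="[YOUR-PROJECT-ID]"
gcloud projects get-iam-policy ${PROJECT} \
--format=json \
| jq -r "${FILTER}"
This should print one address per line, without the user:/serviceAccount: prefix.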

What does `gcloud compute instances create` do? - POST https://compute.googleapis.com…

Some things are very easy to do with the gcloud CLI, like:
$ export network='default' instance='example-instance' firewall='ssh-http-icmp-fw'
$ gcloud compute networks create "$network"
$ gcloud compute firewall-rules create "$firewall" --network "$network" \
--allow 'tcp:22,tcp:80,icmp'
$ gcloud compute instances create "$instance" --network "$network" \
--tags 'http-server' \
--metadata \
startup-script='#! /bin/bash
# Installs apache and a custom homepage
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a start up script.</p>
</body></html>
EOF'
$ # sleep 15s
$ curl $(gcloud compute instances list --filter='name=('"$instance"')' \
--format='value(EXTERNAL_IP)')
(to be exhaustive in commands, tear down with)
$ gcloud compute instances delete -q "$instance"
$ gcloud compute firewall-rules delete -q "$firewall"
$ gcloud compute networks delete -q "$network"
…but it's not clear what the equivalent calls are on the REST API side, especially considering the huge number of options, e.g. at https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
So I was thinking of just stealing whatever gcloud does internally when I write my custom REST API client for Google Cloud's Compute Engine.
Running rg I found a bunch of these lines:
https://github.com/googleapis/google-auth-library-python/blob/b1a12d2/google/auth/transport/requests.py#L182
Specifically these 5 in lib/third_party:
google/auth/transport/{_aiohttp_requests.py,requests.py,_http_client.py,urllib3.py}
google_auth_httplib2/__init__.py
Below each of them I added _LOGGER.debug("With body: %s", body). But there seems to be some fancy batching going on because I almost never get that With body line 😞
Now messing with Wireshark to see what I can find, but I'm confident this is a bad rabbit hole to fall down. Ditto for https://console.cloud.google.com/home/activity.
How can I find out what body is being set by gcloud?
Add the command-line option --log-http to see the REST API requests and their parameters.
There is no simple static answer, as the CLI changes over time: new features are added, others removed, etc.
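For example, rerunning the instance creation from the question with --log-http (startup-script metadata dropped for brevity) should dump each HTTP request gcloud makes, including the JSON body sent to the instances.insert endpoint:
$ gcloud compute instances create "$instance" --network "$network" \
--tags 'http-server' \
--log-http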

How to get a specific tag from Google GCR with regex using gcloud

I'm tagging my images with git_short_sha, branch_name and some other tags. I'm trying to get the short_sha tag of the image tagged master using a gcloud command.
I've managed to get all tags, or a specific tag by its position, with these commands:
1. gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} --format='value(tags[1])' --filter="tags=master"
result:
76f1a2a
but I cannot be sure that the second element will always be the short_sha.
2. gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} --format='value(tags)' --filter="tags=master"
result:
1.0.0-master,76f1a2a,76f1a2a-master,master
Is it possible to get only the short_sha tag by using only gcloud command?
Try:
gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} \
--flatten="[].tags[]" \
--filter="tags=${TAG}"
This came up once before. If you encounter the bug described there, you can try a jq-based solution instead.
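A minimal (untested) sketch of that, assuming the short SHA is always a 7-character hex tag and the image is also tagged master:
gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} \
--format=json \
--filter="tags=master" \
| jq -r '.[].tags[] | select(test("^[0-9a-f]{7}$"))'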

Google Cloud Genomics Pipeline Zone and Region Specification Error

I am new to Google Cloud and was told to use Variant Transforms in order to get .vcf files into BigQuery. I did everything specified in the Variant Transforms README and copied the first block of code into a bash file:
#!/bin/bash
# Parameters to replace:
GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
INPUT_PATTERN=gs://BUCKET/*.vcf
OUTPUT_TABLE=GOOGLE_CLOUD_PROJECT:BIGQUERY_DATASET.BIGQUERY_TABLE
TEMP_LOCATION=gs://BUCKET/temp
COMMAND="/opt/gcp_variant_transforms/bin/vcf_to_bq \
--project ${GOOGLE_CLOUD_PROJECT} \
--input_pattern ${INPUT_PATTERN} \
--output_table ${OUTPUT_TABLE} \
--temp_location ${TEMP_LOCATION} \
--job_name vcf-to-bigquery \
--runner DataflowRunner"
gcloud alpha genomics pipelines run \
--project "${GOOGLE_CLOUD_PROJECT}" \
--logging "${TEMP_LOCATION}/runner_logs_$(date +%Y%m%d_%H%M%S).log" \
--zones us-west1-b \
--service-account-scopes https://www.googleapis.com/auth/cloud-platform \
--docker-image gcr.io/gcp-variant-transforms/gcp-variant-transforms \
--command-line "${COMMAND}"
I tried to run this, replacing the parameters appropriately, and got this error:
ERROR: (gcloud.alpha.genomics.pipelines.run) INVALID_ARGUMENT: Error: validating pipeline: zones and regions cannot be specified together
I have since tried specifying the region and zone on separate lines and have even changed the default region and zone. I have even tried Google's own example pipelines and they still result in the same error. Am I doing something wrong, or is there something more I need to install for this to work?
You need to use the --regions flag first and only then the --zones flag at the end. As a workaround you can set the default zone and region in your local gcloud client config. Also keep in mind that the region is "us-west1" and the zone suffix is just "b".
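A minimal sketch of that workaround, assuming the region and zone from the question (us-west1 / us-west1-b):
# Set the client-side defaults so the pipeline does not receive conflicting values
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-b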

How to migrate elasticsearch data to AWS elasticsearch domain?

I have Elasticsearch 5.5 running on a server with some data indexed in it. I want to migrate this ES data to an AWS Elasticsearch cluster. How can I perform this migration? I've learned that one way is to create a snapshot of the ES cluster, but I can't find any proper documentation for it.
The best way to migrate is by using snapshots. You will need to snapshot your data to Amazon S3 and then perform a restore from there. Documentation for snapshots to S3 can be found here. Alternatively, you can re-index your data, though this is a longer process and there are limitations depending on the version of AWS ES.
I also recommend looking at Elastic Cloud, the official hosted offering on AWS that includes the additional X-Pack monitoring, management, and security features. The migration guide for moving to Elastic Cloud also goes over snapshots and re-indexing.
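A minimal (untested) sketch of the snapshot half, run against the source cluster; it assumes a self-managed ES 5.5 node on localhost with the repository-s3 plugin installed, and my-es-snapshot-bucket is a placeholder bucket name (on the managed AWS ES domain, registering a repository additionally requires a signed request using an IAM role, which the documentation linked above covers):
# Register an S3 snapshot repository on the source cluster
curl -X PUT "localhost:9200/_snapshot/s3_repo" -H 'Content-Type: application/json' -d '
{
  "type": "s3",
  "settings": { "bucket": "my-es-snapshot-bucket" }
}'
# Take a snapshot of all indices and wait for it to complete
curl -X PUT "localhost:9200/_snapshot/s3_repo/snapshot_1?wait_for_completion=true"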
I created a shell script for this -
Github - https://github.com/vivekyad4v/aws-elasticsearch-domain-migration/blob/master/migrate.sh
#!/bin/bash
#### Make sure you have Docker engine installed on the host ####
###### TODO - Support parameters ######
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxx
export AWS_DEFAULT_REGION=ap-south-1
export AWS_DEFAULT_OUTPUT=json
export S3_BUCKET_NAME=my-es-migration-bucket
export DATE=$(date +%d-%b-%H_%M)
old_instance="https://vpc-my-es-ykp2tlrxonk23dblqkseidmllu.ap-southeast-1.es.amazonaws.com"
new_instance="https://vpc-my-es-mg5td7bqwp4zuiddwgx2n474sm.ap-south-1.es.amazonaws.com"
delete=".kibana"
es_indexes=$(curl -s "${old_instance}/_cat/indices" | awk '{ print $3 }')
es_indexes=${es_indexes//$delete/}
es_indexes=$(echo $es_indexes | tr -d '\n')
echo "indexes to be copied are - $es_indexes"
for index in $es_indexes; do
  # Export ES data to S3 (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input "${old_instance}/${index}" \
    --output "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json"
  # Import data from S3 into the new ES domain (using s3urls)
  docker run --rm -ti taskrabbit/elasticsearch-dump \
    --s3AccessKeyId "${AWS_ACCESS_KEY_ID}" \
    --s3SecretAccessKey "${AWS_SECRET_ACCESS_KEY}" \
    --input "s3://${S3_BUCKET_NAME}/${index}-${DATE}.json" \
    --output "${new_instance}/${index}"
  # Show what has been created on the new domain so far
  new_indexes=$(curl -s "${new_instance}/_cat/indices" | awk '{ print $3 }')
  echo $new_indexes
  curl -s "${new_instance}/_cat/indices"
done