GCP - get url dynamically of cloud run instances - google-cloud-platform

I have some Cloud Run services that make HTTP requests to each other. The URLs are hardcoded in the code; is there a way to resolve a URL from the Cloud Run service name or another attribute?

Another possible solution could be using Method: namespaces.services.get.
If the service name is known to you, you can make a GET HTTP request in API calls to https://{endpoint}/apis/serving.knative.dev/v1/{name} where endpoint is one of the supported endpoints and name is the name of the Cloud Run service to retrieve. For Cloud Run (fully managed), replace {namespace_id} with the project ID or number. It takes the form namespaces/{namespace}/services/{service}.
Authorization requires the following IAM permission on the specified resource name: run.services.get
For example:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/your-project/services/your-service | grep url
Output:
"url": "https://cloud-run-xxxxxxxxxx-uc.a.run.app"

There is a gcloud command to do so. You could, for instance, get the URL during your build and save it in an environment variable. The following command prints the complete URL:
gcloud run services describe YOUR_CLOUDRUN_NAME --region=INSTANCE_REGION --platform=managed --format=yaml | grep -m 1 url | awk '{print $NF}'
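As a sketch of that build-time idea (the service names, region, and image below are assumptions for illustration, not from the answer), you could resolve the target service's URL and inject it into the calling service as an environment variable at deploy time:
# Hypothetical example: resolve the URL of "target-service" and pass it to
# "caller-service" as the TARGET_URL environment variable when deploying.
TARGET_URL=$(gcloud run services describe target-service \
  --region=us-central1 --platform=managed \
  --format="value(status.url)")
gcloud run deploy caller-service \
  --image=gcr.io/YOUR_PROJECT/caller-image \
  --region=us-central1 --platform=managed \
  --set-env-vars="TARGET_URL=${TARGET_URL}"
The calling code then reads TARGET_URL from its environment instead of a hardcoded URL.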

There is no easy way for now (but Cloud Next '21 is coming; maybe there will be a great announcement on that, as it's a feature requested by many alpha testers like me).
However, you can implement a bunch of API calls to achieve that. I wrote an article where I use that approach to get the current Cloud Run service URL, but it could be any other service.
It's in Golang. Have a look at it, and let me know if you have issues translating the calls into your preferred language.
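As a rough illustration of those API calls (a sketch only, not the article's Golang code), here is what the flow can look like in shell from inside a Cloud Run container, assuming jq is available in the image; K_SERVICE is set by Cloud Run itself, and the project, region, and access token come from the metadata server:
# Project ID and region of the running container, from the metadata server.
PROJECT=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/project-id")
REGION=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/region" | awk -F/ '{print $NF}')
# Access token of the runtime service account (needs run.services.get on the service).
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" | jq -r '.access_token')
# Look up the current service through the Cloud Run Admin API and extract its URL.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://${REGION}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/${PROJECT}/services/${K_SERVICE}" \
  | jq -r '.status.url'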

You can:
URL=$(gcloud run services describe ${NAME} \
--platform=managed \
--region=${REGION} \
--project=${PROJECT} \
--format="value(status.address.url)")
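If the services call each other with authenticated requests, a quick way to try the resolved URL from a shell (an illustrative sketch, not part of the answer above) is:
# Hypothetical check: call the resolved service with an identity token.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" "${URL}/"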

Related

Why is Basic Authentication failing with Postman CLI?

I am trying to automate my collections via the Postman CLI.
I am able to run a folder (with the Postman Runner) without problems, using Basic Authentication to access many endpoints I am calling.
If I try to run the very same folder with the Postman CLI, all the protected endpoints answer with 403 Forbidden.
It seems that the requests are not using the authentication header.
Is it a known problem? Is there a workaround?
Plus, to troubleshoot better, is there a way to inspect the requests when the collection is run with the Postman CLI? I can see a recap, but I cannot see the detailed requests with all the headers, body, etc.
I am running the collection/folder with
postman collection run COLLECTION_UUID -k --verbose -e ENVIRONMENT_UUID -i FOLDER_UUID --env-var "source=X.X.X.X" -d "datafile.json"

call AWS Elasticsearch Service API with cURL --aws-sigv4

When I execute
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-*/my_type/_mapping" \
--user $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY \
--aws-sigv4 "aws:amz:ap-southeast-2:es"
where $ES_DOMAIN_ENDPOINT is my AWS Elasticsearch endpoint, I'm getting the following response:
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
I'm confident that my $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY are correct.
However, when I send the same request from Postman with AWS authentication and the parameters above, the response comes through. I compared the verbose output of both requests and they have only minor differences, such as timestamps and signature.
I'm wondering, what is wrong with the --aws-sigv4 config?
This issue happens due to the * character in the path. There is a bug report in the curl repository to fix this issue: https://github.com/curl/curl/issues/7559.
Meanwhile, to mitigate the error you should either remove the * from the path or build curl from the branch https://github.com/outscale-mgo/curl-appimage/tree/http_aws_sigv4_encoding.
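For example, with the wildcard removed (the concrete index name below is made up for illustration), the same request no longer contains the problematic character:
# Workaround sketch: target a concrete index instead of the wildcard pattern.
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-2021.01/my_type/_mapping" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     --aws-sigv4 "aws:amz:ap-southeast-2:es"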

Where can I get deep documentation of ArgoCD APIs

I need to list all applications based on some label filters.
https://argocd_domain/api/v1/applications
In order to list all apps from the Argo CD API, I want to know all the possible filters I can use.
The Argo CD API is documented in its Swagger document.
Copy and paste that JSON into the Swagger Editor, and you'll get a nicely formatted page describing the API, including the section for listing applications.
The function to handle a list-applications request calls ConvertSelectorToLabelsMap. Reading the implementation of that parsing function, you can find the expected format of the selector parameter.
At a glance, it seems the format is a comma-delimited list of key=value pairs.
Using the Swagger Editor, I generated this test URL:
curl -X GET "https://editor.swagger.io/api/v1/applications?selector=label1%3Dvalue1%2Clabel2%3Dvalue2" -H "accept: application/json"
Looks like you'll need to URL-encode the equals signs and commas.
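If you would rather not encode the selector by hand, curl can do the encoding for you. A sketch, assuming a local Argo CD server at localhost:8080 and an ARGOCD_TOKEN variable holding a valid API token (both are assumptions, not from the answer above):
# Let curl URL-encode the selector; -G appends it to the URL as a query string.
curl -G -k "https://localhost:8080/api/v1/applications" \
  -H "Authorization: Bearer ${ARGOCD_TOKEN}" \
  -H "accept: application/json" \
  --data-urlencode "selector=label1=value1,label2=value2"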
You can find the Swagger docs by appending /swagger-ui to your Argo CD server address, e.g. http://localhost:8080/swagger-ui.
You can find a hosted version of Argo's Swagger UI on https://cd.apps.argoproj.io/swagger-ui

How to delete added attestation in google cloud platform's kubernetes engine image authorization process

I have added an attestation on Google Cloud Platform to use for image signing and attestation by the attestor, and I want to remove it, but I can't find any documentation on how this is done, or even whether this is how it should be done.
I have seen documentation for removing the attestor, but none on removing or deleting the attestation. I had added it using the following command:
Official documentation version:
gcloud container binauthz attestations create \
--project=$PROJECT_ID \
--artifact-url="${CONTAINER_PATH}@${DIGEST}" \
--attestor=${ATTESTOR} \
--signature-file=./signature.pgp \
--public-key-id="$KEY_FINGERPRINT"
Online tutorial version:
gcloud beta container binauthz attestations create \
--artifact-url="CONTAINER_PATH@DIGEST" \
--attestor=ATTESTOR_ID \
--attestor-project=PROJECT_ID \
--signature-file=./signature.pgp \
--pgp-key-fingerprint="KEY_FINGERPRINT"
However, according to more recent documentation, the --attestor flag should be the full resource name, --attestor="projects/${ATTESTOR_PROJECT_ID}/attestors/${ATTESTOR}". Unfortunately, the tutorial I am following didn't use it this way and only passed the attestor ID or name. So I want to remove this version and add a new one, but I am getting a conflict error:
Resource in project [xxxx] is the subject of a conflict: occurrence ID "f5981e62-7b42-4f57-8486-b0d9518509fa" already exists in project
So, how is it to be removed?
Update: documentation used to compare to online course: https://cloud.google.com/binary-authorization/docs/making-attestations
It looks like some kind of underlying resource (the error message indicates that) still needs to be deleted.
I found some documentation on Binary Authorization that explains a complete tear-down and clean-up. It looks like, apart from the attestor, we need to delete some other connected resources as well.
Based on the official Google Cloud documentation on creating attestations and the documentation for the occurrences DELETE REST API method, I derived the following curl command to delete a specific attestation:
curl "https://containeranalysis.googleapis.com/v1beta1/projects/${ATTESTATION_PROJECT_ID}/occurrences/${OCCURRENCES_GUID}" \
--request DELETE --header "Content-Type: application/json" \
--header "Authorization: Bearer $(gcloud auth print-access-token)"
Assuming the executing user has the containeranalysis.occurrences.delete permission, as included in roles/containeranalysis.occurrences.editor, the response will be 200 with an empty JSON body. I am not sure whether the following header is required, but at the time of my testing it wasn't:
-H "X-Goog-User-Project: ${ATTESTATION_PROJECT_ID}"
I have provided feedback on the official Google Cloud documentation on creating attestations, asking for the curl command above to be included.
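The command above assumes you already know the occurrence ID (the GUID shown in the conflict error). If you don't, one possible way to find it, sketched here as a suggestion rather than taken from the answer above, is to list the project's occurrences through the same API and look for the one matching your artifact:
# Sketch: list occurrences and pick out their resource names
# ("projects/PROJECT/occurrences/GUID"); the GUID is the last path segment.
curl -s "https://containeranalysis.googleapis.com/v1beta1/projects/${ATTESTATION_PROJECT_ID}/occurrences" \
  --header "Authorization: Bearer $(gcloud auth print-access-token)" \
  | grep '"name"'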

Connect to elasticsearch in AWS using key credentials

I'm trying to post a request using curl to my es cluster in AWS using my accessKey and secretKey. I have successfully done this through postman (details here) where you can specify AWS credentials but I would like to make this work with curl. Postman can auto-generate your curl request for you but all I get are errors.
This is the generated curl request along with the response
curl -X GET \
https://search-00000000000001.eu-west-1.es.amazonaws.com/_cat/indices \
-H 'Authorization: AWS4-HMAC-SHA256 Credential=11111111111111111111/20181119/eu-west-1/es/aws4_request, SignedHeaders=cache-control;content-type;host;postman-token;x-amz-date, Signature=11111111116401882398f46011f14fdb9d55e012a4fb912706d67c1111111111' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Host: search-00000000000001.eu-west-1.es.amazonaws.com' \
-H 'Postman-Token: 00000000-0000-4001-8006-9291e208a000' \
-H 'X-Amz-Date: 20181119T220000Z' \
-H 'cache-control: no-cache'
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}%
IDs have been changed to protect the innocent.
I have checked all my keys and the region, and like I said, this works through Postman. Is it possible to access this AWS service using my keys through curl?
This is quite a long rabbit hole. Thanks to Adam for the comment that sent me in the correct direction. The link https://docs.aws.amazon.com/apigateway/api-reference/signing-requests/ really helps you understand what you need to do.
I've since found a script that follows the signing-requests method outlined above. It runs in bash, and whilst it is not written for use with Elasticsearch requests, it can be used for them.
https://github.com/riboseinc/aws-authenticating-secgroup-scripts (many thanks to https://www.ribose.com for putting this on GitHub).
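As an alternative to hand-rolled signing, recent curl releases (7.75.0 and later) can produce the SigV4 signature themselves via the --aws-sigv4 option shown in the earlier question on this page. A sketch using the endpoint from this question:
# Sketch: let curl sign the request (requires curl >= 7.75.0).
curl --aws-sigv4 "aws:amz:eu-west-1:es" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     "https://search-00000000000001.eu-west-1.es.amazonaws.com/_cat/indices"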
If your host contains ':443' remove it and try again.
This worked for me.
"My initial problem: If I access it with Postman using the same url, I get the same error, but removing the ‘:443/’, it works fine, so it’s nothing wrong with the key and secret I’m using."