Can the Grafana Tempo backend sign (SigV4) the requests it sends to AWS Managed Prometheus (AMP)?
metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: example
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: https://aps-workspaces.eu-central-1.amazonaws.com/workspaces/ws-2354ezthd34w4ter/api/v1/remote_write
        send_exemplars: true
Or is there a proxy server that can run between Tempo and Prometheus and do the signing?
aws-sigv4-proxy solves this issue for me.
docker run --name sigv4proxy -ti --rm \
  --network=host \
  public.ecr.aws/aws-observability/aws-sigv4-proxy:1.6.1 \
  -v --name aps --region eu-central-1 \
  --host aps-workspaces.eu-central-1.amazonaws.com
Now Tempo can use localhost to access AMP (AWS Managed Prometheus):
storage:
  path: /tmp/tempo/generator/wal
  remote_write:
    - url: http://localhost:8080/workspaces/ws-1d8a668e-382b-4c49-9354-ad099f2b6260/api/v1/remote_write #http://prometheus:9090/api/v1/write
      send_exemplars: true
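To sanity-check the proxy before pointing Tempo at it, you can send a test query through it to the AMP query API; a correctly signed and forwarded request returns JSON instead of a 403 signing error (the workspace ID below is a placeholder):
curl -s "http://localhost:8080/workspaces/<your-workspace-id>/api/v1/query?query=up"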
I have set up the Workload Identity on a GKE cluster, and now I am using a Kubernetes SA linked to an IAM SA with appropriate permissions. I checked that when I use the IAM SA key file, it gets the access I need.
However, it gets weird even when following the docs.
The first suggested check is to run this command to check the metadata server response:
$ curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email
<sa_name>@<project_id>.iam.gserviceaccount.com
So far, so good. The next paragraph, which describes using the Quota Project option, suggests another command that should return an access token. And it fails:
$ curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token
Unable to generate access token; IAM returned 404 Not Found: Not found; Gaia id not found for email <sa_name>@<project_id>.iam.gserviceaccount.com
The same happens when I use the .NET SDK and call this:
var oidcToken1 = await cc.GetOidcTokenAsync(
    OidcTokenOptions.FromTargetAudience(_serviceUrl),
    cancellationToken
);
_addToken = async (request, token) => {
    request.Headers.Authorization = new AuthenticationHeaderValue(
        "Bearer",
        await oidcToken1.GetAccessTokenAsync(cancellationToken: token)
    );
};
The code works fine when I use the IAM SA JSON key, but when it runs in the pod that uses the Workload Identity, I get the same message as before:
Google.Apis.Auth.OAuth2.ServiceCredential Token has expired, trying to get a new one.
Google.Apis.Http.ConfigurableMessageHandler Request[00000001] (triesRemaining=3) URI: 'http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://<service_url>&format=full'
Google.Apis.Http.ConfigurableMessageHandler Response[00000001] Response status: NotFound 'Not Found'
Google.Apis.Http.ConfigurableMessageHandler Response[00000001] An abnormal response wasn't handled. Status code is NotFound
The same happens when I use gcloud auth application-default print-access-token from the Workload Identity test pod:
ERROR: (gcloud.auth.application-default.print-access-token) There was a problem refreshing your current auth tokens: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/<sa_name>@<project_id>.iam.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Engine metadata service. Status: 404 Response:\nb'Unable to generate access token; IAM returned 404 Not Found: Not found; Gaia id not found for email <sa_name>@<project_id>.iam.gserviceaccount.com\n'", <google.auth.transport.requests._Response object at 0x7feabe712910>)
I am not sure what else can be done; it seems like the whole thing just doesn't work.
I found it. I made an idiotic mistake and mistyped the IAM service account name configured in the Kubernetes service account annotation.
Google support provided troubleshooting guidelines, which I followed; there it is important to use the output of each command as the input for the next, where applicable. When I tried to describe the IAM service account using the value from the Kubernetes service account annotation, I got an "SA not found" error, which gave me the clue to resolve the issue.
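For anyone else debugging this, the decisive check looked roughly like this (names are placeholders): read the annotation off the Kubernetes service account, then try to describe the IAM service account it points at; a typo shows up as an "SA not found" error.
# Read the GSA email from the KSA annotation, then describe that GSA.
kubectl get serviceaccount <ksa-name> --namespace=<namespace> \
  -o jsonpath='{.metadata.annotations.iam\.gke\.io/gcp-service-account}'
gcloud iam service-accounts describe <sa_name>@<project_id>.iam.gserviceaccount.com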
I suspect you either misconfigured a step (the Service Account stuff is especially gnarly) or you're hitting an edge case.
I am unable to repro your issue. It works for me.
BILLING="[YOUR-BILLING-ACCOUNT]"
Q="74552713"
PROJECT=$(whoami)-$(date +%y%m%d)-${Q}
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
  --billing-account=${BILLING}
gcloud services enable container.googleapis.com \
  --project=${PROJECT}
POOL="${PROJECT}.svc.id.goog"
CLUSTER_PROJECT="${PROJECT}"
CLUSTER_NAME="cluster"
CLUSTER_LOCATION="us-west1-c"
# Gets the latest RAPID version
CLUSTER_VERSION=$(\
  gcloud container get-server-config \
    --project=${PROJECT} \
    --zone=${CLUSTER_LOCATION} \
    --flatten=channels \
    --filter=channels.channel=RAPID \
    --format="value(channels.validVersions[0])")
# My go-to test cluster config w/ workload-pool
gcloud beta container clusters create ${CLUSTER_NAME} \
  --spot \
  --no-enable-basic-auth \
  --cluster-version=${CLUSTER_VERSION} \
  --release-channel="rapid" \
  --machine-type="e2-standard-2" \
  --image-type="COS_CONTAINERD" \
  --metadata=disable-legacy-endpoints=true \
  --num-nodes=1 \
  --addons=HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --enable-ip-alias \
  --enable-autoupgrade \
  --enable-autorepair \
  --enable-managed-prometheus \
  --enable-shielded-nodes \
  --enable-vertical-pod-autoscaling \
  --shielded-secure-boot \
  --shielded-integrity-monitoring \
  --no-enable-master-authorized-networks \
  --max-surge-upgrade=1 \
  --max-unavailable-upgrade=0 \
  --node-locations=${CLUSTER_LOCATION} \
  --zone=${CLUSTER_LOCATION} \
  --project=${CLUSTER_PROJECT} \
  --workload-pool=${POOL}
# Implicit
# gcloud container clusters get-credentials ${CLUSTER_NAME} ...
# Test that cluster's workloadPool matches expected value
GOT=$(\
  gcloud container clusters describe ${CLUSTER_NAME} \
    --zone=${CLUSTER_LOCATION} \
    --project=${CLUSTER_PROJECT} \
    --format="value(workloadIdentityConfig.workloadPool)")
WANT="${POOL}"
[ "${GOT}" == "${WANT}" ] && echo "true" || echo "false"
# Kubernetes == Google Cloud Service Account name
ACCOUNT="tester"
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
NAMESPACE="${Q}"
kubectl create namespace ${NAMESPACE}
kubectl create serviceaccount ${ACCOUNT} \
  --namespace=${NAMESPACE}
gcloud iam service-accounts create ${ACCOUNT} \
  --project=${PROJECT}
# Redundant for the purpose of this test
ROLE="roles/cloudprofiler.agent"
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="serviceAccount:${EMAIL}" \
  --role="${ROLE}"
# Allow Kubernetes robot to impersonate Cloud robot
gcloud iam service-accounts add-iam-policy-binding ${EMAIL} \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT}.svc.id.goog[${NAMESPACE}/${ACCOUNT}]"
# Updated IAM policy for serviceAccount [{EMAIL}].
# bindings:
# - members:
#   - serviceAccount:{PROJECT}.svc.id.goog[{NAMESPACE}/{ACCOUNT}]
#   role: roles/iam.workloadIdentityUser
# etag: BwXuOwsx9lk=
# version: 1
# Annotate Service Account
kubectl annotate serviceaccount ${ACCOUNT} \
  --namespace=${NAMESPACE} \
  iam.gke.io/gcp-service-account=${EMAIL}
# Update Pod specs
# spec:
#   serviceAccountName: {ACCOUNT}
#   nodeSelector:
#     iam.gke.io/gke-metadata-server-enabled: "true"
POD="workload-identity-test"
echo "
apiVersion: v1
kind: Pod
metadata:
name: \"${POD}\"
namespace: \"${NAMESPACE}\"
spec:
containers:
- image: google/cloud-sdk:slim
name: \"${POD}\"
command: [\"sleep\",\"infinity\"]
serviceAccountName: \"${ACCOUNT}\"
nodeSelector:
iam.gke.io/gke-metadata-server-enabled: \"true\"
" | kubectl apply --filename=-
# Test that proxied (!) Metadata Service Account email matches
GOT=$(\
  kubectl exec \
    --stdin --tty \
    ${POD} \
    --namespace=${NAMESPACE} \
    -- /bin/bash -c 'curl --header "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email')
WANT="${EMAIL}"
[ "${GOT}" == "${WANT}" ] && echo "true" || echo "false"
# Demonstrate that access-token is obtained
kubectl exec \
  --stdin --tty \
  ${POD} \
  --namespace=${NAMESPACE} \
  -- /bin/bash -c 'curl --header "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token' \
  | jq -r .access_token[0:15]
# ya29.c.b0Aa9Vdy
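When you're done, deleting the throwaway project tears down the cluster, service account, and bindings in one go:
# Cleanup: removes the test project and everything in it
gcloud projects delete ${PROJECT} --quiet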
I am using a Docker image to run DynamoDB on my local computer. Below is the Docker Compose file I used to launch the container.
version: "3.8"
services:
dbs:
image: amazon/dynamodb-local:1.16.0
ports:
- '8000:8000'
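Starting it is the usual Compose invocation:
docker-compose up -d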
Then I use the AWS DynamoDB CLI to create a table on the instance.
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_SESSION_TOKEN=test
aws dynamodb create-table \
  --table-name $tableName \
  --region local-env \
  --attribute-definitions AttributeName=id,AttributeType=S AttributeName=type,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH AttributeName=type,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST \
  --endpoint-url http://localhost:8000
The table is created and works perfectly, and I can list it with the command below:
$ AWS_ACCESS_KEY_ID=test aws dynamodb list-tables --region local-env --endpoint-url http://localhost:8000
{
    "TableNames": [
        "test"
    ]
}
However, if I change AWS_ACCESS_KEY_ID to any value other than test, I get an empty table list. That makes me think this instance uses AWS_ACCESS_KEY_ID as a namespace to separate tables. My question is: how can I list all the AWS_ACCESS_KEY_ID values on this instance?
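For illustration, with an arbitrary different key ("other" here, any value works) the same instance reports no tables:
$ AWS_ACCESS_KEY_ID=other aws dynamodb list-tables --region local-env --endpoint-url http://localhost:8000
{
    "TableNames": []
}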
I am using LocalStack in a local environment to schedule cron calls every minute to a dummy POST endpoint (using https://webhook.site/), which works fine. However, only the body is kept (from put-targets, not from create-connection); the request headers, query string parameters, and path parameters are all discarded. Whether I test this with the AWS CLI or with a Golang example (just to confirm the issue), the problem persists. Just wondering if anyone has come across this issue before, or am I missing something? As the documentation states, this info can be set on both the Connection and the Targets, and I did set it on both just in case.
LocalStack
version: "2.1"
services:
localstack:
image: "localstack/localstack"
container_name: "localstack"
ports:
- "4566-4599:4566-4599"
environment:
- DEBUG=1
- DEFAULT_REGION=eu-west-1
- SERVICES=events
- DATA_DIR=/tmp/localstack/data
- DOCKER_HOST=unix:///var/run/docker.sock
- LAMBDA_EXECUTOR=docker
volumes:
- "/tmp/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
CLI example
Create rule
aws --profile localstack --endpoint-url http://localhost:4566 events put-rule \
  --name http-api-cron-rule \
  --schedule-expression "cron(* * * * *)"
{
    "RuleArn": "arn:aws:events:eu-west-1:000000000000:rule/http-api-cron-rule"
}
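As an optional sanity check, the rule can be read back before wiring up the connection and target:
aws --profile localstack --endpoint-url http://localhost:4566 events list-rules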
Create connection
aws --profile localstack --endpoint-url http://localhost:4566 events create-connection \
  --name http-api-connection \
  --authorization-type Basic \
  --auth-parameters "{\"BasicAuthParameters\":{\"Username\":\"hello\",\"Password\":\"world\"},\"InvocationHttpParameters\":{\"HeaderParameters\":[{\"Key\":\"hdr\",\"Value\":\"val\",\"IsValueSecret\":false}],\"QueryStringParameters\":[{\"Key\":\"qry\",\"Value\":\"val\",\"IsValueSecret\":false}],\"BodyParameters\":[{\"Key\":\"bdy\",\"Value\":\"val\",\"IsValueSecret\":false}]}}"
{
    "ConnectionArn": "arn:aws:events:eu-west-1:000000000000:connection/http-api-connection/4c6a29cf-4665-41f1-b90f-a43e41712e5e",
    "ConnectionState": "AUTHORIZED",
    "CreationTime": "2022-01-07T17:24:57.854127+00:00",
    "LastModifiedTime": "2022-01-07T17:24:57.854127+00:00"
}
Create destination
Obtain a URL from https://webhook.site first and use below.
aws --profile localstack --endpoint-url http://localhost:4566 events create-api-destination \
  --name http-api-destination \
  --connection-arn "arn:aws:events:eu-west-1:000000000000:connection/http-api-connection/4c6a29cf-4665-41f1-b90f-a43e41712e5e" \
  --http-method POST \
  --invocation-endpoint "https://webhook.site/PUT-YOUR-OWN-UUID-HERE"
{
    "ApiDestinationArn": "arn:aws:events:eu-west-1:000000000000:api-destination/http-api-destination/c582470b-4413-4dba-bde9-e7d1aef64ac9",
    "ApiDestinationState": "ACTIVE",
    "CreationTime": "2022-01-07T17:27:27.608361+00:00",
    "LastModifiedTime": "2022-01-07T17:27:27.608361+00:00"
}
Create target
aws --profile localstack --endpoint-url http://localhost:4566 events put-targets \
  --rule http-api-cron-rule \
  --targets '[{"Id":"1","Arn":"arn:aws:events:eu-west-1:000000000000:api-destination/http-api-destination/c582470b-4413-4dba-bde9-e7d1aef64ac9","Input":"{\"bdyx\":\"val\"}","HttpParameters":{"PathParameterValues":["parx"],"HeaderParameters":{"hdrx":"val"},"QueryStringParameters":{"qryx":"val"}}}]'
{
    "Targets": [
        {
            "Id": "1",
            "Arn": "arn:aws:events:eu-west-1:000000000000:api-destination/http-api-destination/c582470b-4413-4dba-bde9-e7d1aef64ac9",
            "Input": "{\"bdyx\":\"val\"}",
            "HttpParameters": {
                "PathParameterValues": [
                    "parx"
                ],
                "HeaderParameters": {
                    "hdrx": "val"
                },
                "QueryStringParameters": {
                    "qryx": "val"
                }
            }
        }
    ]
}
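To double-check what LocalStack actually stored for the target (the HttpParameters in particular), you can read it back:
aws --profile localstack --endpoint-url http://localhost:4566 events list-targets-by-rule \
  --rule http-api-cron-rule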
Logs
localstack | DEBUG:localstack.services.events.events_listener: Notifying 1 targets in response to triggered Events rule http-cron-rule
localstack | DEBUG:localstack.utils.aws.message_forwarding: Calling EventBridge API destination (state "ACTIVE"): POST https://webhook.site/your-uuid-goes-here
Dockerfile:
FROM java:8-jre-alpine
EXPOSE 9911
VOLUME /etc/sns
ENV AWS_DEFAULT_REGION=us-east-2 \
    AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXX \
    AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
    DB_PATH=/etc/sns/db.json
# aws-cli
RUN apk -Uuv add python py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm /var/cache/apk/*
ARG VERSION=0.3.0
ADD https://github.com/s12v/sns/releases/download/$VERSION/sns-$VERSION.jar /sns.jar
CMD ["java", "-jar", "/sns.jar"]
docker-compose.yml:
version: "3"
services:
aws-sns:
build: .
image: aws-sns-test:latest
volumes:
- ./config:/etc/sns
expose:
- 9911
ports:
- 9911:9911
hostname: aws-sns
Later I also set the env variables using aws configure, but this didn't work either.
aws configure
AWS Access Key ID [****************XXXX]:
AWS Secret Access Key [****************XXXX]:
Default region name [us-east-2]:
Default output format [None]:
I also set these variables inside the SNS container (e.g. docker exec -it 39cb43921b31 sh), but I still didn't get the desired output.
OUTPUT:
aws --endpoint-url=http://localhost:9911 sns create-topic --name local_sns
{
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:local_sns"
}
EXPECTED OUTPUT:
aws --endpoint-url=http://localhost:9911 sns create-topic --name local_sns
{
    "TopicArn": "arn:aws:sns:us-east-2:123456789012:local_sns"
}
You can't change the region, as it is hard-coded into the source code:
val topic = Topic(s"arn:aws:sns:us-east-1:123456789012:$name", name)
The AWS credentials you use have no effect; they can be anything, as long as the AWS CLI does not complain. You can also use the --no-sign-request option with the AWS CLI to eliminate the need for credentials.
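For example:
aws --no-sign-request --endpoint-url=http://localhost:9911 sns create-topic --name local_sns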
I want to control Amplify deployments from GitHub Actions, because Amplify auto-build:
- doesn't provide a GitHub Environment
- doesn't watch the CI for failures and will deploy anyway
- requires me to duplicate the CI setup and re-run it in Amplify
- doesn't support running a Cypress job out-of-the-box
Turn off auto-build (in the App settings / General / Branches).
Add the following script and job:
scripts/amplify-deploy.sh
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id $1 --branch-name $2 --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
while [[ "$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
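As a sketch of the first improvement (SUCCEED is the job status value the Amplify API reports on success), you could append something like this to the script:
# Exit non-zero unless the Amplify job succeeded, so the workflow step fails too.
if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Amplify release did not succeed: $JOB_STATUS"
  exit 1
fi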
There's also amplify-cli-action but it didn't work for me.
Disable automatic builds:
Go to App settings > General in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build Settings and create a webhook, which gives you a curl command that will trigger a build.
Example: curl -X POST -d {} URL -H "Content-Type: application/json"
Save the URL in GitHub as a secret.
Add the curl call to the GitHub Actions YAML like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d {} "$URL" -H "Content-Type: application/json"
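If you'd rather script the webhook creation instead of using the console, the AWS CLI can create one too (app ID and branch are placeholders):
aws amplify create-webhook \
  --app-id xxxxxxxxxxxxx \
  --branch-name master \
  --description "GitHub Actions deploy hook"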
Similar to answer 2 here, but I used tags instead.
Create an action like ci.yml, turn off auto-build on the staging & prod environments in Amplify, and create the webhook triggers.
name: CI-Staging
on:
  release:
    types: [prereleased]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
name: CI-production
on:
  release:
    types: [released]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
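Publishing a pre-release then triggers CI-Staging, and a full release triggers CI-production; for example, with the GitHub CLI (tag name is illustrative):
# A pre-release fires the prereleased event, i.e. the staging deploy.
gh release create v1.2.3 --prerelease --notes "staging deploy"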