Issue:
I would like to steer clear of using the traditional configuration:
authenticationType: jwt
clientEmail: <Service Account Email>
defaultProject: <Default Project Name>
tokenUri: https://oauth2.googleapis.com/token
and instead use a service account JSON file from GCP. Is there any way of doing this?
Environment:
OpenShift running in GCP. ServiceAccount key is mounted.
So if I understand your comments correctly, you want to create a BigQuery data source using the Grafana API.
This is the JSON body to send with your request:
{
    "orgId": YOUR_ORG_ID,
    "name": NAME_YOU_WANT_TO_GIVE,
    "type": "doitintl-bigquery-datasource",
    "access": "proxy",
    "isDefault": true,
    "version": 1,
    "readOnly": false,
    "jsonData": {
        "authenticationType": "jwt",
        "clientEmail": EMAIL_OF_YOUR_SERVICE_ACCOUNT,
        "defaultProject": YOUR_PROJECT_ID,
        "tokenUri": "https://oauth2.googleapis.com/token"
    },
    "secureJsonData": {
        "privateKey": YOUR_SERVICE_ACCOUNT_JSON_KEY_FILE
    }
}
So there is no way to avoid the configuration you wanted to "steer clear of". However, there is no need to take the JSON key file apart: just provide its contents to privateKey, and additionally supply the service account email in clientEmail and the project ID in defaultProject. Otherwise it is no different from using the UI.
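If it helps, here is a minimal Python sketch of that API call which reads the mounted key file and fills those fields from it, so you never copy the key material by hand. The Grafana URL, API token, and mount path are placeholders for your setup, and it assumes the requests library is available.

import json
import requests

GRAFANA_URL = "https://your-grafana.example.com"   # placeholder
GRAFANA_API_TOKEN = "YOUR_API_TOKEN"                # placeholder
KEY_FILE = "/var/secrets/google/key.json"           # wherever the key is mounted

# Read the mounted service account key once and reuse its fields.
with open(KEY_FILE) as f:
    key_file_contents = f.read()
key = json.loads(key_file_contents)

payload = {
    "name": "BigQuery",
    "type": "doitintl-bigquery-datasource",
    "access": "proxy",
    "jsonData": {
        "authenticationType": "jwt",
        "clientEmail": key["client_email"],
        "defaultProject": key["project_id"],
        "tokenUri": key["token_uri"],
    },
    # Per the note above, the key file contents go into privateKey as-is.
    "secureJsonData": {"privateKey": key_file_contents},
}

resp = requests.post(
    f"{GRAFANA_URL}/api/datasources",
    headers={"Authorization": f"Bearer {GRAFANA_API_TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())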
Yesterday I saw that GitLab has enabled OIDC JWT tokens for CI/CD jobs. I know that CI_JOB_JWT_V2 is marked as an alpha feature.
I was trying to use it with Workload Identity Federation (WIF) on a GitLab runner with the gcloud CLI, but I'm getting an error. When I tried to do it through the STS API I got the same error. What am I missing?
{
    "error": "invalid_grant",
    "error_description": "The audience in ID Token [https://gitlab.com] does not match the expected audience."
}
My GitLab JWT token, after decoding, looks mostly like this (of course without the details):
{
    "namespace_id": "1111111111",
    "namespace_path": "xxxxxxx/yyyyyyyy/zzzzzzzzzzz",
    "project_id": "<project_id>",
    "project_path": "xxxxxxx/yyyyyyyy/zzzzzzzzzzz/hf_service",
    "user_id": "<user_id>",
    "user_login": "<username>",
    "user_email": "<user_email>",
    "pipeline_id": "456971569",
    "pipeline_source": "push",
    "job_id": "2019605390",
    "ref": "develop",
    "ref_type": "branch",
    "ref_protected": "true",
    "environment": "develop",
    "environment_protected": "false",
    "jti": "<jti>",
    "iss": "https://gitlab.com",
    "iat": <number>,
    "nbf": <number>,
    "exp": <number>,
    "sub": "project_path:xxxxxxx/yyyyyyyy/zzzzzzzzzzz/hf_service:ref_type:branch:ref:develop",
    "aud": "https://gitlab.com"
}
In the GCP console I have a WIF pool with one OIDC provider named gitlab, using the issuer URL from https://gitlab.com/.well-known/openid-configuration.
I have tried giving the service account access to the whole pool, but it made no difference. The config created for this SA looks like this:
{
    "type": "external_account",
    "audience": "//iam.googleapis.com/projects/<projectnumber>/locations/global/workloadIdentityPools/<poolname>/providers/gitlab",
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "token_url": "https://sts.googleapis.com/v1/token",
    "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/gitlab-deployer@<projectid>.iam.gserviceaccount.com:generateAccessToken",
    "credential_source": {
        "file": "gitlab_token",
        "format": {
            "type": "text"
        }
    }
}
By default, workload identity federation expects the aud claim to contain the URL of the workload identity pool provider. This URL looks like this:
https://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID
But your token seems to use https://gitlab.com as audience.
Either reconfigure GitLab to use the workload identity pool provider URL as the audience, or reconfigure the pool provider to allow a custom audience by running:
gcloud iam workload-identity-pools providers update-oidc ... \
--allowed-audiences=https://gitlab.com
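Before changing anything, it can be worth confirming which audience the job token actually carries. Here is a small Python sketch (run inside the CI job, assuming CI_JOB_JWT_V2 is set) that decodes the payload without verifying the signature:

import base64
import json
import os

token = os.environ["CI_JOB_JWT_V2"]

# Decode only the payload segment; no signature verification is needed
# just to inspect the claims.
payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload_b64))

print("iss:", claims.get("iss"))
print("aud:", claims.get("aud"))
# If aud prints https://gitlab.com, that value has to appear in
# --allowed-audiences on the provider (or the provider's expected
# audience has to match it), as described above.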
I am trying to refresh a PBI dataflow using an ADF Web activity, authenticating with the data factory's Managed Identity.
Here is my input to the activity:
{
    "url": "https://api.powerbi.com/v1.0/myorg/groups/1dec5b21-ba60-409b-80cb-de61272ee504/dataflows/0e256da2-8823-498c-b779-3e7a7568137f/refreshes",
    "connectVia": {
        "referenceName": "My-AzureVM-IR",
        "type": "IntegrationRuntimeReference"
    },
    "method": "POST",
    "headers": {
        "Content-Type": "application/json",
        "User-Agent": "AzureDataFactoryV2",
        "Host": "api.powerbi.com",
        "Accept": "*/*",
        "Connection": "keep-alive"
    },
    "body": "{\"notifyOption\":\"MailOnFailure\"}",
    "disableCertValidation": true,
    "authentication": {
        "type": "MSI",
        "resource": "https://analysis.windows.net/powerbi/api"
    }
}
It generates the following error when doing a debug run:
Failure type: User configuration issue
Details: {"error":{"code":"InvalidRequest","message":"Unexpected dataflow error: "}}
I have tried this exact URL in Postman using Bearer Token Authentication and it works. Our AAD Admin group said they added our ADF's Managed Identity to the permission list for the PBI API, so I am not sure what is going on here.
Just an FYI, I was able to get the ADF Managed Identity working with data flow refreshes using the HTTP request in my original post.
The key was that, after having the tenant admins add the Managed Identity to a security group with API access, I also had to add the Managed Identity to the PBI workspace access list as a Member.
Then my API call worked from ADF using the MSI. No Bearer login token needed.
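For anyone who wants to sanity-check the permissions outside ADF, here is a rough Python sketch of the same call using the azure-identity and requests packages. It only works on an Azure resource that actually has the managed identity attached (for example, the VM hosting the integration runtime), and the group/dataflow IDs below are the ones from the original post.

import requests
from azure.identity import ManagedIdentityCredential

GROUP_ID = "1dec5b21-ba60-409b-80cb-de61272ee504"
DATAFLOW_ID = "0e256da2-8823-498c-b779-3e7a7568137f"

# Acquire a token for the Power BI resource as the managed identity.
credential = ManagedIdentityCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default")

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/dataflows/{DATAFLOW_ID}/refreshes"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"notifyOption": "MailOnFailure"},
)
print(resp.status_code, resp.text)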
I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of rewrite rules -
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/, however I am unable to go any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it is not an issue with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page was just refreshing when I tried to log in through the Firebase rewrite rule; however, I thought of an alternative way to access the admin panel using my custom domain.
I have added a custom domain to the Cloud Run instance so that it uses a subdomain of my site domain, and I can access the admin panel via admin.customUrl.com rather than customUrl.com/admin/.
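If anyone else goes the subdomain route, Django itself usually needs to trust the new host as well. A hedged settings.py sketch (the hostnames are placeholders; Django 4.x wants the scheme in CSRF_TRUSTED_ORIGINS, older versions take bare hostnames):

# settings.py (sketch)
ALLOWED_HOSTS = ["admin.customUrl.com"]          # the Cloud Run custom domain
CSRF_TRUSTED_ORIGINS = ["https://admin.customUrl.com"]

# Cloud Run terminates TLS in front of Django, so trust the forwarded
# protocol header for request.is_secure() and CSRF checks.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")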
I am designing and implementing a backup plan to restore my client API keys. How should I go about this?
To speed up the recovery process, I am trying to create a backup plan for backing up the client API keys, probably to S3 or locally. I have been scratching my head for the past two days on how to achieve this. Maybe some Python script that takes the values from API Gateway and dumps them into a new S3 bucket? But I'm not sure how to implement this.
You can get the list of all API Gateway API keys using apigateway get-api-keys. Here is the full AWS CLI command:
aws apigateway get-api-keys --include-values
Remember that --include-values is a must; otherwise the actual API key values will not be included in the result.
It will display the result in the format below:
"items": [
{
"id": "j90yk1111",
"value": "AAAAAAAABBBBBBBBBBBCCCCCCCCCC",
"name": "MyKey1",
"description": "My Key1",
"enabled": true,
"createdDate": 1528350587,
"lastUpdatedDate": 1528352704,
"stageKeys": []
},
{
"id": "rqi9xxxxx",
"value": "Kw6Oqo91nv5g5K7rrrrrrrrrrrrrrr",
"name": "MyKey2",
"description": "My Key 2",
"enabled": true,
"createdDate": 1528406927,
"lastUpdatedDate": 1528406927,
"stageKeys": []
},
{
"id": "lse3o7xxxx",
"value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
"name": "MyKey3",
"description": "My Key 3",
"enabled": true,
"createdDate": 1528406609,
"lastUpdatedDate": 1528406609,
"stageKeys": []
}
}
]
To get the details of a single API key, use the AWS CLI command below:
aws apigateway get-api-key --include-value --api-key lse3o7xxxx
It should display the result below:
{
    "id": "lse3o7xxxx",
    "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
    "name": "MyKey3",
    "description": "My Key 3",
    "enabled": true,
    "createdDate": 1528406609,
    "lastUpdatedDate": 1528406609,
    "stageKeys": []
}
Similar to the get-api-keys call, --include-value is a must here; otherwise the actual API key value will not be included in the result.
Now you need to convert the output into a format that can be saved to S3 and later imported back into API Gateway.
You can import keys with import-api-keys:
aws apigateway import-api-keys --body <value> --format <value>
--body (blob)
The payload of the POST request to import API keys. For the payload format, see API Gateway API Key File Format.
--format (string)
A query parameter to specify the input format of the imported API keys. Currently, only the CSV format is supported: --format csv
The simplest style uses two fields only, e.g. Key,name:
Key,name
apikey1234abcdefghij0123456789,MyFirstApiKey
You can see the full details of the formats in API Gateway API Key File Format.
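Putting those two steps together, here is a rough Python sketch (boto3; the bucket name is a placeholder) that pulls every key with its value and writes a CSV in the import-api-keys layout to S3:

import csv
import io

import boto3

BACKUP_BUCKET = "my-apikey-backup-bucket"   # placeholder

apigw = boto3.client("apigateway")
s3 = boto3.client("s3")

# Page through all keys, keeping the values (same as --include-values).
rows = []
for page in apigw.get_paginator("get_api_keys").paginate(includeValues=True):
    for item in page["items"]:
        rows.append((item["value"], item["name"]))

# Write the two-column CSV format shown above: Key,name
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Key", "name"])
writer.writerows(rows)

s3.put_object(
    Bucket=BACKUP_BUCKET,
    Key="apigateway/api-keys-backup.csv",
    Body=buf.getvalue().encode("utf-8"),
)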
I have implemented this in Python using a Lambda for backing up API keys, using boto3 APIs similar to the above answer.
However, I am looking for a way to trigger the Lambda on an "API key added/removed" event :-)
I'm trying to populate a MySQL DB with a CSV that I have in Cloud Storage.
I'm using the API Explorer to execute the request with the following request body:
{
    "importContext": {
        "csvImportOptions": {
            "columns": [
                "col1",
                "col2",
                "col3"
            ],
            "table": "table_name"
        },
        "database": "db_name",
        "fileType": "CSV",
        "kind": "sql#importContext",
        "uri": "gs://some_bucket/somecsv.csv"
    }
}
When I hit the execute button I receive a 200 response with the following body:
{
    "kind": "sql#operation",
    "selfLink": "https://www.googleapis.com/sql/v1beta4/projects/somelink",
    "targetProject": "some-project",
    "targetId": "some-tarjet",
    "targetLink": "https://www.googleapis.com/sql/v1beta4/projects/somelink",
    "name": "some-name",
    "operationType": "IMPORT",
    "status": "PENDING",
    "user": "myuser@mydomain.com",
    "insertTime": "somedate",
    "importContext": {
        ...
    }
}
But if I go to the instance detail page in the Google console I see this message:
gs://link-to-csv: Access denied for account
oosyrcl32gnzypxg4uhqw54uab@somename.iam.gserviceaccount.com
(permission issue?)
I'm authenticated with the same account that created the Cloud Storage bucket where the CSV is, and this also happens when using the Python SDK.
You are trying to do an import from your bucket into your Cloud SQL instance, but that import is performed by a particular service account, which can be seen in the "Service account" section of your Cloud SQL instance details.
It might be that this Cloud SQL service account does not have the appropriate permissions to access the Cloud Storage bucket with the data to import.
In order to create a successful import between the SQL instance and Storage buckets, the proper permissions should be set first. You should give the service account "oosyrcl32gnzypxg4uhqw54uab@speckle-umbrella-27.iam.gserviceaccount.com" the Storage Object Viewer role.
Go to: https://console.cloud.google.com/iam-admin/iam
Click Add to add a new member.
Paste the gserviceaccount.com email address that was presented in the error message into the New Members field.
Add 2 roles:
Cloud SQL Viewer
Storage Object Admin
Click Save.
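If you would rather scope the grant to just that one bucket instead of project-level IAM, here is a rough Python sketch using the google-cloud-storage client (the bucket name and service account address are the ones from the question and the error message):

from google.cloud import storage

BUCKET_NAME = "some_bucket"  # bucket from the importContext uri
SQL_SA = "oosyrcl32gnzypxg4uhqw54uab@somename.iam.gserviceaccount.com"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Add the Cloud SQL service account as an objectViewer on this bucket only.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {f"serviceAccount:{SQL_SA}"}}
)
bucket.set_iam_policy(policy)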