I want to deploy my application to the production environment using a Bitbucket pipeline.
I followed the instructions given at https://cloud.google.com/solutions/continuous-delivery-bitbucket-app-engine, but this deploys my application to the staging environment.
My pipeline file is:
image: python:2.7

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            # Downloading the Google Cloud SDK
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-155.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            # Authenticating with the service account key file
            - echo ${GOOGLE_CLIENT_SECRET} > client-secret.json
            - gcloud auth activate-service-account --key-file client-secret.json
            # Linking to the Google Cloud project
            - gcloud config set project $CLOUDSDK_CORE_PROJECT
            - gcloud -q app deploy app.yaml
This shows me the following error:
You are about to deploy the following services:
- my-app/default/232326565655 (from [/opt/atlassian/pipelines/agent/build/app.yaml])
Deploying to URL: [https://my-app.appspot.com]
Beginning deployment of service [default]...
Some files were skipped. Pass `--verbosity=info` to see which ones.
You may also view the gcloud log file, found at
[/root/.config/gcloud/logs/2018.02.05/05.25.49.374053.log].
ERROR: gcloud crashed (UploadError): Error uploading files: HttpError accessing <https://www.googleapis.com/storage/v1/b/staging.my-app.appspot.com/o?alt=json&maxResults=1000>: response: <{'status': '403', 'content-length': '410', 'expires': 'Mon, 05 Feb 2018 05:25:52 GMT', 'vary': 'Origin, X-Origin', 'server': 'UploadServer', 'x-guploader-uploadid': 'UPLOADER_ID', 'cache-control': 'private, max-age=0', 'date': 'Mon, 05 Feb 2018 05:25:52 GMT', 'alt-svc': 'hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000; v="41,39,38,37,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"errors": [
{
"domain": "global",
"reason": "forbidden",
"message": "bitbucket-authorization#my-app.iam.gserviceaccount.com does not have storage.objects.list access to staging.my-app.appspot.com."
}
],
"code": 403,
"message": "bitbucket-authorization#my-app.iam.gserviceaccount.com does not have storage.objects.list access to staging.my-app.appspot.com."
}
}
I followed the same tutorial and, based on the errors you presented, the service account you used ("bitbucket-authorization@my-app.iam.gserviceaccount.com") doesn't have the privileges to access the staging bucket.
Make sure both the App Engine > App Engine Admin and Storage > Storage Object Admin roles are granted to this service account from the Cloud Console.
One last thing I noticed when using a new project for that tutorial: I had to manually enable the App Engine Admin API too.
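If you prefer the CLI over the Cloud Console, the role grants and the API enablement look roughly like this (a sketch; PROJECT_ID is a placeholder, and the service account email is the one from the error above):
# Grant App Engine Admin and Storage Object Admin to the pipeline's service account
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:bitbucket-authorization@my-app.iam.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:bitbucket-authorization@my-app.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
# Enable the App Engine Admin API if it isn't enabled yet
gcloud services enable appengine.googleapis.com --project=PROJECT_ID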
EDIT:
You could use the flag --bucket=gs://BUCKETNAME in the script Bitbucket uses to deploy, i.e.:
gcloud app deploy --bucket="gs://BUCKETNAME"
I decided to automate the creation of GC projects using Terraform.
One resource that Terraform will create during the run is a new GSuite user. This is done using the terraform-provider-gsuite. So I set it all up (service account, domain-wide delegation, etc.) and everything works fine when I run the Terraform steps from my command line.
Next, instead of relying on my command line, I decided to have a Cloud Build trigger that would execute Terraform init-plan-apply. As you all know, Cloud Build builds run under the identity of the GCB service account. This means we need to give that SA the permissions that Terraform might need during the execution, as sketched below. So far so good.
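Granting the extra roles to the default Cloud Build SA can be done along these lines (a sketch; PROJECT_ID, PROJECT_NUMBER and ROLE_NEEDED_BY_TERRAFORM are placeholders):
# The default Cloud Build service account is PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="ROLE_NEEDED_BY_TERRAFORM"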
So I run the build, and I see that the only resource that Terraform is not able to create is the GSuite user. Digging through the logs I found these 2 requests (and their responses):
GET /admin/directory/v1/users?alt=json&customer=my_customer&prettyPrint=false&query=email%3Alolloso-admin%40codedby.pm HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
HTTP/2.0 400 Bad Request
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 400,
    "message": "Invalid Input",
    "errors": [
      {
        "domain": "global",
        "reason": "invalid"
      }
    ]
  }
}
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
  "changePasswordAtNextLogin": true,
  "externalIds": [],
  "includeInGlobalAddressList": true,
  "name": {
    "familyName": "********",
    "givenName": "*******"
  },
  "orgUnitPath": "/",
  "password": "********",
  "primaryEmail": "*********",
  "sshPublicKeys": []
}
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 403,
    "message": "Request had insufficient authentication scopes.",
    "errors": [
      {
        "message": "Insufficient Permission",
        "domain": "global",
        "reason": "insufficientPermissions"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}
I think this is the API complaining that the Cloud Build Service Account does not have enough rights to access the Directory API. And here is where the situation gets wild.
To fix that, I thought of granting domain-wide delegation to the Cloud Build SA, but that SA is special and I could not find a way to do it.
I then tried to give the serviceAccountUser role to the Cloud Build SA on my SA (the one which has domain-wide delegation), but I did not manage to succeed; the build still throws the same insufficient-permission error.
I then tried to use my SA (with domain-wide delegation) as a custom Cloud Build service account. No luck there either.
Is it even possible, from a Cloud Build, to access resources for which one would normally use domain-wide delegation?
Thanks
UPDATE 1 (using custom build service account)
As per John's comment, I tried to use a user-specified service account to execute my build. The necessary setup info has been taken from the official guide.
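One piece of that setup, as I understand the guide, is allowing the account that submits the build to act as the custom SA; roughly (YOUR_ACCOUNT is a placeholder):
gcloud iam service-accounts add-iam-policy-binding \
  sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com \
  --member="user:YOUR_ACCOUNT" \
  --role="roles/iam.serviceAccountUser"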
This is my cloudbuild.yaml file
steps:
- id: 'tf init'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform init
- id: 'tf plan'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform plan
- id: 'tf apply'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform apply -auto-approve
logsBucket: 'gs://tf-project-creator-cloudbuild-logs'
serviceAccount: 'projects/tf-project-creator/serviceAccounts/sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com'
options:
  env:
  - 'TF_LOG=DEBUG'
where sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com is the service account which has domain-wide delegation on my Google Workspace.
I then executed the build manually
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json; gcloud builds submit --config cloudbuild.yaml
specifying the JSON private key of the same SA as above.
I would have expected the build to pass, but I still get the same error as above:
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
  "changePasswordAtNextLogin": true,
  "externalIds": [],
  "includeInGlobalAddressList": true,
  "name": {
    "familyName": "REDACTED",
    "givenName": "REDACTED"
  },
  "orgUnitPath": "/",
  "organizations": [],
  "password": "REDACTED",
  "primaryEmail": "REDACTED",
  "sshPublicKeys": []
}
-----------------------------------------------------
2021/03/06 17:26:19 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sat, 06 Mar 2021 17:26:19 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 403,
    "message": "Request had insufficient authentication scopes.",
    "errors": [
      {
        "message": "Insufficient Permission",
        "domain": "global",
        "reason": "insufficientPermissions"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}
Is there anything I am missing?
UPDATE 2 (check on active identity when submitting a build)
As deviavir pointed out in their comment, I tried:
- enabling "Service Accounts" in the GCB settings, but as suspected it did not work;
- double-checking the active identity while submitting the build. One of the limitations of using a custom build SA is that the build must be manually triggered, so using gcloud, that means
gcloud builds submit --config cloudbuild.yaml
Until now, when executing this command, I have always prepended it by setting the GOOGLE_APPLICATION_CREDENTIALS variable like this:
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json
The specified private key is the key of my build SA (the one with domain-wide delegation). While doing that, I was always logged in to gcloud with another account (the Owner of the project), which does not have the domain-wide delegation permission. I thought that by setting GOOGLE_APPLICATION_CREDENTIALS, gcloud would pick up those credentials. I still think that is the case, but I then tried to submit the build while being logged in to gcloud with that same build SA.
So I did
gcloud auth activate-service-account sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com --key-file='.secrets/sa-terraform-project-creator.json'
and right after
gcloud builds submit --config cloudbuild.yaml
Yet again, I hit the same permission problem when accessing the Directory API.
As deviavir suspected, I am starting to think that during the execution of the build, the call to the Directory API is done with the wrong credentials.
Is there a way to log the identity used while executing certain Terraform plugin API calls? That would help a lot.
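As a partial check (a sketch, assuming a step image that has gcloud and curl available), a throwaway build step that prints the identity the build itself runs as would at least confirm the build SA, even though it won't show which credentials a given Terraform provider ends up using:
# Run inside a build step to see the build's own identity
gcloud auth list
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"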
A Composer cluster went down because its airflow-worker pods needed a Docker image that was not accessible.
Access to the Docker image has now been restored, but the airflow-scheduler pod has disappeared.
I tried updating the Composer environment by setting a new environment variable, which failed with the following error:
UPDATE operation on this environment failed X minutes ago with the following error message: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({
  "Date": "recently",
  "Audit-Id": "my-own-audit-id",
  "Content-Length": "236",
  "Content-Type": "application/json",
  "Cache-Control": "no-cache, private"
})
HTTP response body: {
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"airflow-scheduler\" not found",
  "reason": "NotFound",
  "details": {
    "name": "airflow-scheduler",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 404
}
Error in Composer Agent
How can I launch an airflow-scheduler pod on my Composer cluster?
What is the .yaml configuration file I need to apply ?
I tried launching the scheduler from inside another pod with airflow scheduler, and while it effectively starts a scheduler, it's not a Kubernetes pod and will not integrate well with the managed airflow cluster.
To restart the airflow-scheduler, run the following:
# Fetch the old deployment, and pipe it into the replace command.
COMPOSER_WORKSPACE=$(kubectl get namespace | egrep -i 'composer|airflow' | awk '{ print $1 }')
kubectl get deployment airflow-scheduler --output yaml \
  --namespace=${COMPOSER_WORKSPACE} | kubectl replace --force -f -
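To confirm the scheduler came back, something like this should do (using the same namespace variable as above):
kubectl get pods --namespace=${COMPOSER_WORKSPACE} | grep airflow-scheduler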
I have a scikit-learn model saved in Cloud Storage which I am attempting to deploy with AI Platform Prediction. When I deploy this model to a regional endpoint, the deployment completes successfully:
➜ gcloud ai-platform versions describe regional_endpoint_version --model=regional --region us-central1
Using endpoint [https://us-central1-ml.googleapis.com/]
autoScaling:
  minNodes: 1
createTime: '2020-12-30T15:21:55Z'
deploymentUri: <REMOVED>
description: testing deployment to a regional endpoint
etag: <REMOVED>
framework: SCIKIT_LEARN
isDefault: true
machineType: n1-standard-4
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: READY
However, when I try to deploy the exact same model, using the same Python/runtime versions, to the global endpoint, the deployment fails, saying there was an error loading the model:
(aiz) ➜ stanford_nlp_a3 gcloud ai-platform versions describe public_object --model=global
Using endpoint [https://ml.googleapis.com/]
autoScaling: {}
createTime: '2020-12-30T15:12:11Z'
deploymentUri: <REMOVED>
description: testing global endpoint deployment
errorMessage: 'Create Version failed. Bad model detected with error: "Error loading
the model"'
etag: <REMOVED>
framework: SCIKIT_LEARN
machineType: mls1-c1-m2
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: FAILED
I tried making the .joblib object public to rule out a permissions difference between the two endpoints, but the deployment to the global endpoint still failed. I removed the deploymentUri from the post since I have been experimenting with the permissions on this model object, but the paths are identical in the two model versions.
The machine types for the two deployments have to be different, and for the regional deployment I use min nodes = 1 while for global I can use min nodes = 0, but other than that and the etags, everything else is exactly the same.
I couldn't find any information in the AI Platform Prediction regional endpoints docs page which indicated certain models could only be deployed to a certain type of endpoint. The "Error loading the model" error message doesn't give me a lot to go on since it doesn't appear to be a permissions issue with the model file.
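(A sketch of a local sanity check, with $MODEL_DIR and model.joblib as placeholder paths: download the artifact and confirm it deserializes under the same Python/scikit-learn combination the runtime uses.)
# Pull the artifact and try to load it the way the prediction service would
gsutil cp "$MODEL_DIR/model.joblib" /tmp/model.joblib
python3 -c "import joblib, sklearn; print(sklearn.__version__); print(type(joblib.load('/tmp/model.joblib')))"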
When I add the --log-http option to the create version command, I see that the error code is 3, but the message doesn't reveal any additional information:
➜ ~ gcloud ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--origin=$MODEL_DIR \
--runtime-version=2.2 \
--framework=$FRAMEWORK \
--python-version=3.7 \
--machine-type=mls1-c1-m2 --log-http
Using endpoint [https://ml.googleapis.com/]
=======================
==== request start ====
...
...
the final response from the server looks like this:
---- response start ----
status: 200
-- headers start --
<headers>
-- headers end --
-- body start --
{
  "name": "<name>",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
    "createTime": "2020-12-30T22:53:30Z",
    "startTime": "2020-12-30T22:53:30Z",
    "endTime": "2020-12-30T22:54:37Z",
    "operationType": "CREATE_VERSION",
    "modelName": "<name>",
    "version": {
      <version info>
    }
  },
  "done": true,
  "error": {
    "code": 3,
    "message": "Create Version failed. Bad model detected with error: \"Error loading the model\""
  }
}
-- body end --
total round trip time (request+response): 0.096 secs
---- response end ----
----------------------
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Error loading the model"
Can anyone explain what I am missing here?
I am trying my hand at configuring Endpoints on Cloud Functions by following the article.
I have performed the following steps:
1) Created a Google Cloud Platform (GCP) project and deployed the following Cloud Function:
export const TestPost = (async (request: any, response: any) => {
  response.send('Record created.');
});
using the following command:
gcloud functions deploy TestPost --runtime nodejs10 --trigger-http --region=asia-east2
The function is working fine up to this point.
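(A quick way to check it responds, with FUNCTIONS_PROJECT_ID as a placeholder:)
curl "https://asia-east2-FUNCTIONS_PROJECT_ID.cloudfunctions.net/TestPost"
# Expected output: Record created.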
2) Deployed the ESP container to Cloud Run using the following commands:
gcloud config set run/region us-central1
gcloud beta run deploy CLOUD_RUN_SERVICE_NAME \
  --image="gcr.io/endpoints-release/endpoints-runtime-serverless:1.30.0" \
  --allow-unauthenticated \
  --project=ESP_PROJECT_ID
The ESP container is successfully deployed as well.
3) Created an OpenAPI document that describes the API and configures the routes to the Cloud Function:
swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
  version: 1.0.0
host: HOST
schemes:
  - https
produces:
  - application/json
paths:
  /Test:
    get:
      summary: Do something
      operationId: Test
      x-google-backend:
        address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/Test
      responses:
        '200':
          description: A successful response
          schema:
            type: string
4) Deployed the OpenAPI document using the following command:
gcloud endpoints services deploy swagger.yaml
5) Configured ESP so it can find the configuration for the Endpoints service:
gcloud beta run configurations update \
  --service CLOUD_RUN_SERVICE_NAME \
  --set-env-vars ENDPOINTS_SERVICE_NAME=YOUR_SERVICE_NAME \
  --project ESP_PROJECT_ID
gcloud alpha functions add-iam-policy-binding FUNCTION_NAME \
  --member "serviceAccount:ESP_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role "roles/cloudfunctions.invoker" \
  --project FUNCTIONS_PROJECT_ID
This is done successfully.
6) Sending requests to the API works absolutely fine.
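(For reference, a request goes through the ESP host on Cloud Run, roughly like this; CLOUD_RUN_HOSTNAME is whatever hostname Cloud Run assigned to the ESP service:)
curl "https://CLOUD_RUN_HOSTNAME/Test"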
Now I wanted to implement authentication, so I made the following changes to the OpenAPI document:
swagger: '2.0'
info:
  title: Cloud Endpoints + GCF
  description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
  version: 1.0.0
host: HOST
schemes:
  - https
produces:
  - application/json
security:
  - client-App-1: [read, write]
paths:
  /Test:
    get:
      summary: Do something
      operationId: Test
      x-google-backend:
        address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/Test
      responses:
        '200':
          description: A successful response
          schema:
            type: string
securityDefinitions:
  client-App-1:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    scopes:
      read: Grants read access
      write: Grants write access
    x-google-issuer: SERVICE_ACCOUNT@PROJECT.iam.gserviceaccount.com
    x-google-jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT@PROJECT.iam.gserviceaccount.com
I created a service account using the following command:
gcloud iam service-accounts create SERVICE_ACCOUNT_NAME --display-name DISPLAY_NAME
Granted the Token Creator role to the service account using the following command:
gcloud projects add-iam-policy-binding PROJECT_ID --member serviceAccount:SERVICE_ACCOUNT_EMAIL --role roles/iam.serviceAccountTokenCreator
Redeployed the OpenAPI document:
gcloud endpoints services deploy swagger.yaml
Now when I test the API, I get the following error:
{
  "code": 16,
  "message": "JWT validation failed: BAD_FORMAT",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "auth"
    }
  ]
}
I am passing the access token generated via gcloud into the request as a bearer token.
The command for generating the access token is gcloud auth application-default print-access-token.
Can someone point out what the issue is here? Thanks...
Edit#1:
I am using Postman to connect to my APIs.
After using the following command, I am getting a different error.
Command:
gcloud auth print-identity-token SERVICE_ACCOUNT_EMAIL
Error:
{
  "code": 16,
  "message": "JWT validation failed: Issuer not allowed",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "auth"
    }
  ]
}
For ESP, you should use a JWT (identity token), not an access token. Please check this out.
Finally I managed to solve the issue.
Two things were wrong.
1) The format of the raw JWT token; it should be as follows:
{
  "iss": SERVICE_ACCOUNT_EMAIL,
  "iat": 1560497345,
  "aud": ANYTHING_WHICH_IS_SAME_AS_IN_OPENAPI_YAML_FILE,
  "exp": 1560500945,
  "sub": SERVICE_ACCOUNT_EMAIL
}
and then we need to generate a signed JWT using the following command:
gcloud beta iam service-accounts sign-jwt --iam-account SERVICE_ACCOUNT_EMAIL raw-jwt.json signed-jwt.json
2) The security definition in the YAML file should be like the following:
securityDefinitions:
  client-App-1:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    scopes:
      read: Grants read access
      write: Grants write access
    x-google-issuer: SERVICE_ACCOUNT_EMAIL
    x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL"
    x-google-audiences: ANYTHING_BUT_SAME_AS_IN_RAW_JWT_TOKEN
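With both changes in place, the call itself looks roughly like this (a sketch; signed-jwt.json is the output file of the sign-jwt command above and HOST is the Endpoints host):
# The signed JWT goes into the Authorization header as a bearer token
curl -H "Authorization: Bearer $(cat signed-jwt.json)" "https://HOST/Test"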
Following the quickstart for GCP Dataflow here, I run into the following error when executing the example script here, using this command:
declare -r PROJECT="beam-test"
declare -r BUCKET="gs://my-beam-test-bucket"
echo
set -v -e
python -m apache_beam.examples.wordcount \
  --project $PROJECT \
  --job_name $PROJECT-wordcount \
  --runner DataflowRunner \
  --staging_location $BUCKET/staging \
  --temp_location $BUCKET/temp \
  --output $BUCKET/output
which results in this error:
http_response.request_url, method_config, request)
apitools.base.py.exceptions.HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/beam-test/locations/us-central1/jobs?alt=json>: response: <{'status': '403', 'content-length': '284', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 31 Mar 2017 15:52:54 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="37,36,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 403,
"message": "(f010d95b3e221bbf): Could not create workflow; user does not have write access to project: beam-test Causes: (f010d95b3e221432): Permission 'dataflow.jobs.create' denied on project: 'beam-test'",
"status": "PERMISSION_DENIED"
I have already enabled the Dataflow API for the project, and I have authorized the gcloud CLI with the owner account of the project (which I assume has full access).
How and where do I enable write permissions?
Change $PROJECT=project-name to $PROJECT=project-id; Dataflow needs the project ID, not the display name.
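A quick way to look up the project ID:
# The PROJECT_ID column is what Dataflow expects; it can differ from the display name
gcloud projects list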
Have you tried running gcloud auth login to make sure you have a valid credential?
If yes, your default cloud project might be different than the one you're running Dataflow with. To change the default project, you can run gcloud init.
Let me know if that doesn't solve it.