How to set up GKE configuration in a yml file for Django app? - django

I am following the doc linked below to deploy a Django application on Google Kubernetes Engine.
Setting up your GKE configuration in a yml file
In the step Setting up your GKE configuration, a yml file called polls.yaml is referenced. Where should I find this file? If it doesn't exist yet, where should I create it, and from what template?

I suspect this is the polls.yaml file you were looking for.
I used the following commands
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/kubernetes_engine/django_tutorial
to locate the polls.yaml file.

Oh, I missed a step I took a week ago: the name polls refers to the polls cluster I created earlier with:
gcloud container clusters create polls \
  --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
  --num-nodes 4 --zone "us-central1-a"
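For reference, the polls.yaml in that sample repo is a standard Kubernetes Deployment plus a LoadBalancer Service. A minimal sketch of its shape (the image path, ports, and replica count here are illustrative placeholders; the real file in the sample repo is authoritative):

```yaml
# Illustrative shape of polls.yaml — a Deployment and a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: polls
  labels:
    app: polls
spec:
  replicas: 3
  selector:
    matchLabels:
      app: polls
  template:
    metadata:
      labels:
        app: polls
    spec:
      containers:
      - name: polls-app
        # Replace with the image you build in the tutorial
        image: gcr.io/PROJECT_ID/polls
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: polls
  labels:
    app: polls
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: polls
```

Applying it with kubectl apply -f polls.yaml creates the Deployment and exposes it through an external load balancer.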


gcloud beta run deploy --source . throws 412

Due to corporate restrictions, I'm required to host everything on GCP in Europe. The organisation I work for has set a restriction policy to enforce this.
When I deploy a cloud run instance from source with gcloud beta run deploy --source . --region europe-west1 it seems the command tries to store the temporary files in a storage bucket in the us, which is not allowed. The command then throws a 412 error.
➜ gcloud beta run deploy cloudrun-service-name --source . --platform managed --region=europe-west1 --allow-unauthenticated
This command is equivalent to running `gcloud builds submit --tag [IMAGE] .` and `gcloud run deploy cloudrun-service-name --image [IMAGE]`
Building using Dockerfile and deploying container to Cloud Run service [cloudrun-service-name] in project [PROJECT_ID] region [europe-west1]
X Building and deploying new service... Uploading sources.
- Uploading sources...
. Building Container...
. Creating Revision...
. Routing traffic...
. Setting IAM Policy...
Deployment failed
ERROR: (gcloud.beta.run.deploy) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
I see the Artifact Registry Repository being created in the correct region, but not the storage bucket.
To bypass this I have to create a storage bucket first in the correct region with the name PROJECT_ID_cloudbuild. Is there any other way to fix this?
The error message indicates that the bucket is forced to be created in the US regardless of the organisation policy restricting resources to Europe. As per this public issue tracker comment:
“Cloud build submit creates a [PROJECT_ID]_cloudbuild bucket in the US. This will of course not work when resource restrictions apply. What you can do as a workaround is to create that bucket yourself in another location. You should do this before your first cloud build submit.”
This has been a known issue and I found two workarounds that can help you achieve what you want.
The first workaround uses gcloud builds submit with additional flags:
Create a new bucket with the name [PROJECT_ID]_cloudbuild in the preferred location.
Pass that bucket via --gcs-source-staging-dir and --gcs-log-dir. These flags are required: if they are not set, a bucket is created in the US.
The second workaround uses a cloudbuild.yaml and the --gcs-source-staging-dir flag:
Create a bucket in the region, dual-region or multi-region you want.
Create a cloudbuild.yaml for storing build artifacts.
You can find an example of the YAML file in external documentation; note that I cannot vouch for its accuracy since it is not from GCP.
Run the command:
gcloud builds submit \
  --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" \
  --config cloudbuild.yaml
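Put together, the first workaround looks roughly like this; the region and the staging/log paths are examples, and the bucket name is derived from your project ID:

```shell
# Sketch of the first workaround — create the _cloudbuild bucket in an
# allowed region before the first build, then point both source staging
# and logs at that bucket so nothing is auto-created in the US.
PROJECT_ID=$(gcloud config get-value project)

# Create the bucket Cloud Build would otherwise auto-create in the US
gsutil mb -l europe-west1 "gs://${PROJECT_ID}_cloudbuild"

gcloud builds submit \
  --gcs-source-staging-dir="gs://${PROJECT_ID}_cloudbuild/source" \
  --gcs-log-dir="gs://${PROJECT_ID}_cloudbuild/logs" \
  --tag "eu.gcr.io/${PROJECT_ID}/my-image"
```

With the staging bucket in place, gcloud run deploy --source should also pick it up instead of trying to create one in the US.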
Please try these workarounds and let me know if it worked for you.

How to connect to composer dag folder from GCP Cloud shell

I'm new to GCP. I've gone over different documents on GCP Composer and Cloud Shell, but I can't find anywhere that explains how to connect the Cloud Shell environment to the Composer DAG folder.
Right now I'm creating the Python script outside Cloud Shell (on my local system) and uploading it manually to the DAG folder, but I want to do this from Cloud Shell only. Can anyone give me directions?
Also, when I tried import airflow in my Python file on Cloud Shell, it gave me a "module not found" error. How do I install that too?
Take a look at this GCP documentation:
Adding and Updating DAGs (workflows)
Among many other entries, you will find information like this:
Determining the storage bucket name
To determine the name of the storage bucket associated with your environment:
gcloud composer environments describe ENVIRONMENT_NAME \
--location LOCATION \
--format="get(config.dagGcsPrefix)"
where:
ENVIRONMENT_NAME is the name of the environment.
LOCATION is the Compute Engine region where the environment is located.
--format is an option to specify only the dagGcsPrefix property instead of all environment details.
The dagGcsPrefix property shows the bucket name:
gs://region-environment_name-random_id-bucket/
Adding or updating a DAG
To add or update a DAG, move the Python .py file for the DAG to the environment's dags folder in Cloud Storage.
gcloud composer environments storage dags import \
--environment ENVIRONMENT_NAME \
--location LOCATION \
--source LOCAL_FILE_TO_UPLOAD
where:
ENVIRONMENT_NAME is the name of the environment.
LOCATION is the Compute Engine region where the environment is located.
LOCAL_FILE_TO_UPLOAD is the DAG to upload.
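From Cloud Shell, the two commands above can be combined into one short workflow; ENVIRONMENT_NAME, LOCATION, and the DAG filename below are placeholders for your own values:

```shell
# Run from Cloud Shell: find the environment's DAG bucket, then upload
# a DAG file straight into it. Names below are placeholders.
ENVIRONMENT_NAME=my-composer-env
LOCATION=us-central1

# Print the gs:// prefix of the environment's dags/ folder
DAG_PREFIX=$(gcloud composer environments describe "$ENVIRONMENT_NAME" \
  --location "$LOCATION" \
  --format="get(config.dagGcsPrefix)")
echo "$DAG_PREFIX"

# Either copy with gsutil directly into the bucket...
gsutil cp my_dag.py "$DAG_PREFIX/"

# ...or use the dedicated import command
gcloud composer environments storage dags import \
  --environment "$ENVIRONMENT_NAME" \
  --location "$LOCATION" \
  --source my_dag.py
```

As for import airflow failing: Cloud Shell is just a VM, so the package is not preinstalled there; installing it locally (for example with pip3 install apache-airflow) lets you lint your DAG files, but the DAGs themselves are executed by the Composer environment, not by Cloud Shell.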

How to get my DAGs back after invalid extractor configuration upload to Airflow?

I use the Bizzflow.net ETL template within my GCP project. While working on my extractor configuration (extractor.json), I uploaded an invalid configuration into my repo. After running the git_pull DAG, my extractor-related DAGs were removed, including the git_pull DAG itself. How can I repair it?
This is a very common issue. The current release of Bizzflow does not correctly validate the configuration during the git_pull DAG run, so when you push an invalid configuration into the master branch of your project repository and run git_pull, all DAGs are removed from the Airflow UI.
The fix is easy. Just repair your broken configuration, push it into the master branch of your project repo, and run git pull directly on the vm-airflow machine. To do that, simply log into the vm-airflow machine using
gcloud auth login
gcloud compute ssh admin@vm-airflow --project <your project id> --zone <your zone id>
and run git pull command in project repository
cd /home/admin/project
git pull
After 2-3 minutes, all your DAGs will be back.
Of course, you have to have appropriate permissions to do that. Typically this fix is for project administrator with GCP Owner role assigned.

Google cloud build with Compute Engine

I want to use Cloud Build with a trigger on commit to automatically fetch updated repo and run sudo supervisorctl restart on a Compute Engine instance.
On the Cloud Build settings page, there is an option to connect Compute Engine, but so far I only found examples including Kubernetes Engine and App Engine here.
Is it possible to accomplish? Is it the right way to make updates? Or should I instead restart the instance(s) with a startup-script?
There's a repo on GitHub from the cloud-builders-community that may be what you are looking for.
As described in the aforementioned link, it connects Cloud Build to Compute Engine with the following steps:
A temporary SSH key will be created in your Container Builder workspace
An instance will be launched with your configured flags
The workspace will be copied to the remote instance
Your command will be run inside that instance's workspace
The workspace will be copied back to your Container Builder workspace
You will need to create an appropriate IAM role with create and destroy Compute Engine permissions:
export PROJECT=$(gcloud info --format='value(config.project)')
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT --format 'value(projectNumber)')
export CB_SA_EMAIL=$PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable compute.googleapis.com
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/compute.instanceAdmin.v1'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountActor'
Note that --role cannot be repeated in a single add-iam-policy-binding invocation, so each role needs its own command.
And then you can configure your build step with something similar to this:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND=sudo supervisorctl restart
You can also find more information in the examples section of the GitHub repo.

How to use gcloud to create Cloud Launcher product clusters

I'm new to Google Cloud and I'm trying to experiment with it.
I can see that preparing scripts is vital if I want to create and delete clusters every day.
For Dataproc clusters, it's easy:
gcloud dataproc clusters create spark-6-m \
--async \
--project=my-project-id \
--region=us-east1 \
--zone=us-east1-b \
--bucket=my-project-bucket \
--image-version=1.2 \
--num-masters=1 \
--master-boot-disk-size=10GB \
--master-machine-type=n1-standard-1 \
--worker-boot-disk-size=10GB \
--worker-machine-type=n1-standard-1 \
--num-workers=6 \
--initialization-actions=gs://dataproc-initialization-actions/jupyter2/jupyter2.sh
Now I'd like to create a Cassandra cluster. I see that Cloud Launcher allows doing that easily too, but I can't find a gcloud command to automate it.
Is there a way to create Cloud Launcher product clusters via gcloud?
Thanks
Cloud Launcher deployments can be replicated from the Cloud Shell using Custom Deployments [1].
Once the Cloud Launcher deployment (in this case a Cassandra cluster) is finished the details of the deployment can be seen in the Deployment Manager [2].
The deployment details have an Overview section with the configuration and the imported files used for the deployment process. Download the “Expanded Config” file; this will be the .yaml file for the custom deployment [3]. Download the import files to the same directory as the .yaml file to be able to deploy correctly [4].
These files and this configuration will create a deployment equivalent to the Cloud Launcher one.
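Concretely, once the expanded config and its imports are downloaded, the deployment can be replicated from Cloud Shell with Deployment Manager; the deployment and file names below are examples:

```shell
# Sketch: replicate a Cloud Launcher Cassandra deployment via gcloud.
# Use the expanded config you downloaded from the original deployment's
# details page; names here are examples.
gcloud deployment-manager deployments create my-cassandra-cluster \
  --config expanded-config.yaml

# Tear the cluster down again when you're done
gcloud deployment-manager deployments delete my-cassandra-cluster --quiet
```

This gives you the same create/delete scripting workflow for a Launcher product that gcloud dataproc clusters create/delete provides for Dataproc.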