How to access deployed project in gcloud shell - django

I deployed a Django project to Google Cloud in the flexible environment from my local machine using gcloud app deploy.
The changes are reflected at the live URL.
I am trying to access the deployed Django project folder through Cloud Shell, but I am not able to find it.
What am I doing wrong?

Extended from the discussion with #babygameover.
Google App Engine (GAE) is a PaaS. In GAE, you just code locally and deploy the project, while the scaling of instances and related resources is taken care of by Google Cloud. The deployed code lives inside the managed App Engine instances, not in your Cloud Shell home directory, which is why you cannot find the project folder there.
To get control over instances, the project would have to be moved to Google Compute Engine (GCE), where you get finer control over instance configuration.
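If you only need to inspect the files that were actually deployed to the flexible environment, you can SSH into one of the running App Engine instances. A minimal sketch, assuming a service named default (the real service, version, and instance IDs come from gcloud app instances list):

# List the running App Engine flexible instances ("default" is an assumption).
gcloud app instances list --service=default

# SSH into one of the listed instances and inspect the deployed files there.
gcloud app instances ssh INSTANCE_ID --service=default --version=VERSION_ID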

Related

How to deploy a container to multiple GCP projects and host with Cloud Run?

We have a requirement to deploy our application in multiple GCP projects, with new projects being provisioned by Terraform. We have Terraform-configured Cloud Build in each project, but we run into issues when Cloud Build attempts to access the Source Repo in our centralized project.
We would prefer not to clone the repo, but rather instruct Cloud Build to consume and deploy from the central repo. It is also important that we have Cloud Build update each project as new code is deployed.
You should use a central project to run a single Cloud Build trigger that builds the container image, pushes it to the registry in that project, and deploys to the Cloud Run services in the other projects.
In order for the Cloud Build trigger to be allowed to deploy to Cloud Run in other projects, follow these instructions to grant the Cloud Build service agent the appropriate permissions on the other projects.
In order for Cloud Run to be able to import images from the central project, make sure you follow these instructions for the Cloud Run service agent of each project.
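A rough sketch of the two IAM bindings involved, assuming placeholder project IDs and service account addresses (the exact account names come from your own projects):

# Let the central project's Cloud Build service account deploy Cloud Run
# services in a target project.
gcloud projects add-iam-policy-binding TARGET_PROJECT_ID \
  --member="serviceAccount:CENTRAL_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Let the target project's Cloud Run service agent pull images from the
# central project's Container Registry.
gcloud projects add-iam-policy-binding CENTRAL_PROJECT_ID \
  --member="serviceAccount:service-TARGET_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"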

How to deploy my website created with TypeScript on AWS

I have created a website using VS Code in Node.js with TypeScript.
Now I want to try to deploy it on AWS. I have read so many things about EC2, Cloud9, Elastic Beanstalk, etc.
So I'm totally lost about what to use to deploy my website.
Honestly, I'm a programmer, not a site manager or sysops person.
Right now I have created two EC2 instances, one with a key name and one without.
In Elastic Beanstalk, I have a button Upload and Deploy.
Can someone show me how to package my project so it can be uploaded and deployed?
I have never deployed a website (normally that was the sysops' job at work), so I don't know what to do to produce a correct distribution package.
Do I need to create both EC2 and Elastic Beanstalk?
Thanks
If you go with Elastic Beanstalk, it will take care of creating the EC2 instances for you.
It actually takes care of creating EC2 instances, a database, load balancers, CloudWatch alarms and more. That is pretty much what it does: it bundles multiple AWS services and offers one panel of administration.
To get started with EB you should install the EB CLI.
Then you should:
go to your project directory and run eb init application-name. This starts a wizard from the EB CLI asking in which region you want to deploy, what kind of DB you need, and so on.
after that, run eb create envname to create a new environment for your newly created application.
at this point you should head to the EB AWS panel and configure the start command for your app; it is usually something like npm run prod.
because you're using TS, there are a few steps you need to take before being able to deploy. Run npm run build, or whatever command you have for transpiling from TS to JS. You'll be deploying the compiled scripts, not your source code.
now you are ready to deploy: run eb deploy. As this is your only env it should work; when you have multiple envs, use eb deploy envname. For a list of all envs, run eb list.
There are quite a few steps to take care of before deploying, and any of them can cause issues; the sketch below condenses the flow.
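A condensed sketch of the commands above, assuming placeholder application and environment names and that npm run build is your transpile script:

pip install awsebcli            # install the EB CLI
npm run build                   # transpile TS -> JS before deploying
eb init my-app                  # interactive wizard: region, platform, DB, ...
eb create my-app-env            # create a new environment
eb deploy my-app-env            # deploy; with a single env, plain "eb deploy" works
eb list                         # list all environments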
If your website contains only static pages, you can use Amazon S3 to deploy it.
You can put your build files directly into an S3 bucket and enable static website hosting.
This will allow anyone to access your website from a URL globally; for this you also have to make your bucket public.
Alternatively, you can use CloudFront to keep your bucket private while still allowing access to it through the CloudFront URL.
You can refer to the links below for hosting a website through S3.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
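A minimal sketch of the S3 route using the AWS CLI, assuming a placeholder bucket name and that your build output lands in ./build:

aws s3 mb s3://my-website-bucket                                   # create the bucket (name is a placeholder)
aws s3 website s3://my-website-bucket \
  --index-document index.html --error-document error.html          # enable static website hosting
aws s3 sync ./build s3://my-website-bucket --acl public-read       # upload the build output and make it public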

Using a plugin in Google Composer makes it crash

I wrote a small plugin for Apache Airflow, which runs fine on my local deployment. However, when I use it on Google Composer, the user interface hangs and becomes unresponsive. Is there any way to restart the webserver in Google Composer?
(Note: This answer is currently more suggestive than finalized.)
As far as restarting the webserver goes...
What doesn't work:
I reviewed Airflow Web Interface in the docs, which describes using the webserver but not accessing it from a CLI or restarting it.
While you can also run Airflow CLI commands on Composer, I don't see a command for restarting the webserver in the Airflow CLI today.
I checked the gcloud CLI in the Google Cloud SDK but didn't find a restart related command.
Here are a few ideas that may work for restarting the Airflow webserver on Composer:
In the gcloud CLI, there's an update command to change environment properties. I would assume that it restarts the scheduler and webserver (in new containers) after you change one of these to apply the new setting. You could set an arbitrary environment variable to check, but just running the update command with no changes may work.
gcloud beta composer environments update ...
Alternatively, you can update environment properties excluding environment variables in the GCP Console.
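For example, a sketch of forcing a restart by touching an environment variable (the environment name and location are placeholders, and the assumption is that any env var change recreates the Airflow containers):

gcloud beta composer environments update my-composer-env \
  --location us-central1 \
  --update-env-variables=RESTART_MARKER=anything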
I think re-running the import plugins command would cause a scheduler/webserver restart as well.
gcloud beta composer environments storage plugins import ...
In a more advanced setup, Composer supports deploying a self-managed Airflow web server. Following the linked guide, you can: connect into your Composer instance's GKE cluster, create deployment and service Kubernetes configuration files for the webserver, and deploy both with kubectl create. Then you could run a kubectl replace or kubectl delete on the pod to trigger a fresh start.
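If you do go the self-managed webserver route, a rough sketch of triggering a fresh start (the cluster name, zone, and pod label are placeholders taken from your own deployment config):

# Point kubectl at the Composer environment's GKE cluster.
gcloud container clusters get-credentials my-composer-cluster --zone us-central1-a

# Delete the webserver pod; the Deployment recreates it, giving a fresh webserver.
kubectl delete pod -l app=airflow-webserver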
This all feels like a bit much, so hopefully documentation or a simpler way to restart the webserver emerges to supersede these workarounds.

Kubernetes Engine unable to pull image from non-private / GCR repository

I was happily deploying to Kubernetes Engine for a while, but while working on an integrated cloud container builder pipeline, I started getting into trouble.
I don't know what changed. I can not deploy to kubernetes anymore, even in ways I did before without cloud builder.
The pod rollout process gives an error indicating that it is unable to pull from the registry, which seems weird because the images exist (I can pull them using the CLI) and I granted all possibly related permissions to my user and the cloud builder service account.
I get the error ImagePullBackOff and see this in the pod events:
Failed to pull image
"gcr.io/my-project/backend:f4711979-eaab-4de1-afd8-d2e37eaeb988":
rpc error: code = Unknown desc = unauthorized: authentication required
What's going on? Who needs authorization, and for what?
In my case, my cluster didn't have the Storage read permission, which is necessary for GKE to pull an image from GCR.
My cluster didn't have proper permissions because I created the cluster through terraform and didn't include the node_config.oauth_scopes block. When creating a cluster through the console, the Storage read permission is added by default.
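For reference, the missing scope expressed as the equivalent gcloud command (cluster name and zone are placeholders; in Terraform the same scope goes into node_config.oauth_scopes):

gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only   # lets the nodes pull images from GCR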
The credentials in my project somehow got messed up. I solved the problem by re-initializing a few APIs including Kubernetes Engine, Deployment Manager and Container Builder.
The first time I tried this I didn't succeed, because to disable something you first have to disable all the APIs that depend on it. If you do this via the GCloud web UI, you'll likely see that not all of the dependent services can be disabled from the UI.
I learned that using the gcloud CLI you can list all APIs of your project and disable everything properly.
Things worked after that.
The reason I knew things were messed up is that I had a copy of the same setup as a production environment, and these problems did not exist there. The development environment had gone through a lot of iterations and messing around with credentials, so somewhere along the way things got corrupted.
These are some examples of useful commands:
gcloud projects get-iam-policy $PROJECT_ID
gcloud services disable container.googleapis.com --verbosity=debug
gcloud services enable container.googleapis.com
More info here, including how to restore service account credentials.

Deploying a custom build of Datalab to Google Cloud platform

For a project we are trying to expand Google Cloud Datalab and deploy the modified version to the Google Cloud platform. As I understand it, the deploying process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
Since the default container registry, i.e. gcr.io/cloud_datalab/datalab:<tag> is off-limits for non-Datalab contributors, we pushed the Docker image to our own container registry, i.e. to gcr.io/<project_id>/datalab:<tag>.
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specifying the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here but this never starts Datalab, so we suspect the start script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing into the temporary compute node responsible for deployment and browsing the /datalab folder. That folder contains a runtime configuration file for use with the App Engine flexible environment. Using this configuration file, the gcloud preview app deploy command (which accepts an image parameter for Docker images) will deploy the image to App Engine correctly.
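A rough sketch of that flow, assuming the runtime configuration file found in /datalab is an app.yaml and reusing the registry placeholders from the question (the modern equivalent of the preview command is gcloud app deploy with --image-url):

# Build and push the modified Datalab image to your own registry.
docker build -t gcr.io/<project_id>/datalab:<tag> .
docker push gcr.io/<project_id>/datalab:<tag>

# Deploy the flexible-environment service, pointing it at your own image.
gcloud app deploy app.yaml --image-url=gcr.io/<project_id>/datalab:<tag>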