I'm trying to create .yaml and .jinja files for a full cloud deployment, but I have become stuck on deploying Cloud Spanner. I am aware that it is a PaaS offering, so how would such a service be deployed, if it can be at all?
I am relatively new to this area and am currently experimenting with cloud technology.
Cloud Spanner does not require a configuration file for deployment. You can create a new Cloud Spanner instance using the Google Cloud Console UI and point your application to it. Follow the instructions in: https://cloud.google.com/spanner/docs/create-manage-instances
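If it helps, here is a minimal sketch of what "pointing your application to it" can look like once the instance exists, using the google-cloud-spanner Python client; the instance and database IDs below are placeholders:

    # Minimal connectivity check against an existing Spanner instance.
    # "my-instance" and "my-database" are placeholder IDs.
    from google.cloud import spanner

    client = spanner.Client()                    # uses Application Default Credentials
    instance = client.instance("my-instance")    # the instance created in the Console
    database = instance.database("my-database")  # an existing database in that instance

    # Run a trivial query to verify the application can reach Spanner.
    with database.snapshot() as snapshot:
        for row in snapshot.execute_sql("SELECT 1"):
            print(row)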
I am new to Alibaba Cloud and I have an Apache Beam application running on Google Cloud Dataflow.
Now I want to deploy the same Apache Beam pipeline to Alibaba Cloud.
I am seeking help on what setup is needed to run an Apache Beam pipeline in Alibaba Cloud.
Is there a resource in Alibaba that is equivalent to Google Cloud Platform Dataflow?
You may try Alibaba DataStudio, in the DataWorks section of the Alibaba Cloud Console, as it is similar to Dataflow on Google Cloud.
You can drag and drop nodes to create a workflow, collaborate, integrate with other Alibaba Cloud products, etc.
Here is how you can create a workflow: https://www.alibabacloud.com/help/doc-detail/85554.htm?spm=a2c63.l28256.a3.31.591c5b5aLHvJ68
It is not based on Apache Beam, but I believe Apache Beam support will be available very soon.
Hope this helps you get an idea of what is offered right now for your needs.
I'm relatively new to Google Kubernetes Engine and Google Cloud Platform.
I managed to set up and connect the following services:
Source Repositories
Cloud Build and Container Registry
Kubernetes Engine
I'm currently using Git Bash on my local machine to push my code to Cloud Source Repositories. Cloud Build then builds the image and creates a new artifact. Each time I change my app and push the changes, a new artifact is created, and I manually copy it into my Kubernetes workload via a rolling update in the GKE console.
Is there a better way to automate this, e.g. CI/CD without the manual update step?
You can set the rolling update strategy in your deployment spec from the beginning.
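For example, a Deployment manifest along these lines declares the rolling update strategy up front (the app name and image path are placeholders):

    # Sketch of a Deployment with an explicit rolling update strategy.
    # "my-app" and the image path are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: gcr.io/my-project/my-app:latest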
You can then use Cloud Build to push new images to your cluster once the image has been built, instead of manually going to the GKE console and updating the image.
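A cloudbuild.yaml along these lines is one way to wire that up; the zone, cluster name, and deployment/container names are placeholders, and the Cloud Build service account needs permission to deploy to GKE (e.g. the Kubernetes Engine Developer role):

    # Sketch: build the image, push it, then roll it out to the cluster.
    # $SHORT_SHA is available for builds started by a repository trigger.
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
    - name: 'gcr.io/cloud-builders/docker'
      args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
    - name: 'gcr.io/cloud-builders/kubectl'
      args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
      env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'

With the RollingUpdate strategy already in the Deployment spec, the kubectl set image step triggers the rollout automatically each time a new image is pushed.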
I am currently working on Google Cloud Platform to run Spark jobs in the cloud. To do so, I am planning to use Google Cloud Dataproc.
Here's the workflow I am automating:
Upload a CSV file to Google Cloud Storage, which will be the input of my Spark job
On upload, trigger a Google Cloud Function which should create the cluster, submit a job, and shut down the cluster through the HTTP API available for Dataproc
I am able to create a cluster from my Google Cloud Function using the Google APIs Node.js client (http://google.github.io/google-api-nodejs-client/latest/dataproc.html). But the problem is that I cannot see this cluster in the Dataproc cluster viewer, or even by using the gcloud SDK: gcloud dataproc clusters list.
However, I am able to see my newly created cluster on Google Api explorer : https://developers.google.com/apis-explorer/#p/dataproc/v1/dataproc.projects.regions.clusters.list.
Note that I am creating my cluster in the current project.
What could I be doing wrong that keeps that cluster from showing up when listing with the gcloud SDK?
Thank you in advance for your help.
Regards.
I bet it has to do with the "region" field. Out of the box, the Cloud SDK defaults to the "global" region [1]. Try using Dataproc Cloud SDK commands with the --region flag (e.g., gcloud dataproc clusters list --region=<region>).
[1] https://cloud.google.com/dataproc/docs/concepts/regional-endpoints
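For example, assuming the cluster was created in us-central1 (substitute whatever region you passed to the API when creating it):

    # List clusters in the region used at creation time; gcloud defaults to "global".
    gcloud dataproc clusters list --region=us-central1

    # Optionally make that region the default for future dataproc commands.
    gcloud config set dataproc/region us-central1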
For a project we are trying to extend Google Cloud Datalab and deploy the modified version to the Google Cloud Platform. As I understand it, the deployment process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
Since the default container registry, i.e. gcr.io/cloud_datalab/datalab:<tag> is off-limits for non-Datalab contributors, we pushed the Docker image to our own container registry, i.e. to gcr.io/<project_id>/datalab:<tag>.
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specifying the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here, but it never starts Datalab, so we suspect the start script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing to the temporary compute node that is responsible for the deployment and browsing the /datalab folder. This contains a runtime configuration file for use with the App Engine flexible environment. Using this configuration file, the gcloud preview app deploy command (which accepts an --image parameter for Docker images) will deploy this to App Engine correctly.
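With a recent Cloud SDK, the rough equivalent of that preview command is sketched below; <runtime-config>.yaml is a placeholder for whatever configuration file you find under /datalab, and the exact flag spelling may differ in the older preview command:

    # Hedged sketch with a current gcloud SDK; the config file name is a placeholder.
    gcloud app deploy <runtime-config>.yaml \
        --image-url=gcr.io/<project_id>/datalab:<tag>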
I am using Django 1.7, Gunicorn, and Nginx for my app. It is hosted on a GCE VM instance.
I want to store all my user-uploaded content in Google Cloud Storage, so that it is easily accessible in case the traffic increases and I have to use multiple VM instances behind an HTTP/network load balancer.
Given that Google does not allow attaching a storage disk to multiple VM instances in write mode, Google Cloud Storage looks like the only option. I want to use Google Cloud Storage as a file system or something similar to that.
Please let me know if there are any other options.
Sounds like you want to use the Google Cloud Storage Python Client Library from your Django app to access GCS.
See my other answer for a list of alternatives (the original question was about persistent disk, so GCS is one of the options, as you have already discovered):
If you want to share data between them, you need to use something other than Persistent Disk, e.g., Google's Cloud Datastore, Cloud Storage, or Cloud SQL, or you can run your own database (whether SQL or NoSQL), a distributed filesystem (Ceph, Gluster), or a file server (NFS, SAMBA), among other options.
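To illustrate the GCS option mentioned above, here is a minimal sketch using today's google-cloud-storage package (the bucket name and object path are placeholders); for Django specifically, a storage backend such as django-storages can wrap the same calls so user uploads land in GCS transparently:

    # Minimal sketch: store and fetch a user upload in GCS.
    # "my-uploads-bucket" and the object path are placeholders.
    from google.cloud import storage

    client = storage.Client()                      # uses Application Default Credentials
    bucket = client.bucket("my-uploads-bucket")    # an existing GCS bucket
    blob = bucket.blob("user_uploads/avatar.png")  # object path inside the bucket

    blob.upload_from_filename("/tmp/avatar.png")   # write the uploaded file to GCS
    data = blob.download_as_bytes()                # read it back later, from any VM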