Invalid arguments when creating new datalab instance - google-cloud-platform

I am following the Datalab quickstart tutorial here, within the GCP console. When I try to run
datalab beta create-gpu datalab-instance-name
in step 3, I receive the following error:
write() argument must be str, not bytes
Can anyone help explain why this is the case and how to fix it?
Thanks

Referring to the official documentation, before running a Datalab instance, the corresponding APIs should be enabled: the Google Compute Engine and Cloud Source Repositories APIs. To do so, visit Products -> APIs and Services -> Library and search for the APIs. Additionally, make sure that billing is enabled for your Google Cloud project.
You can also enable the APIs by typing the following command, which will prompt you to enable them:
datalab list
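Alternatively, you can enable both services directly with gcloud. The service names below are the standard identifiers for Compute Engine and Cloud Source Repositories; you can confirm them with gcloud services list --available:
gcloud services enable compute.googleapis.com sourcerepo.googleapis.com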
I did some research and found that the same issue has been reported on the GitHub page. If enabling the APIs doesn't work, the best option would be to add a comment to the mentioned GitHub issue to make it more visible to the Datalab engineering team.

Related

Dataproc custom image: Cannot complete creation

For a project, I have to create a Dataproc cluster that uses one of the outdated image versions (for example, 1.3.94-debian10) containing the Apache Log4j 2 vulnerabilities. The goal is to trigger the related alert (DATAPROC_IMAGE_OUTDATED) in order to check how SCC works (it is just for a test environment).
I tried to run the command
gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10
but got the following message:
ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster
This makes sense, in order to protect the cluster.
I did some research and discovered that I will have to create a custom image with said version and generate the cluster from that. The thing is, I have tried reading the documentation and finding a tutorial, but I still can't understand how to start or how to run the file generate_custom_image.py, for example, since I am not comfortable with Cloud Shell (I prefer the console).
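From what I can tell from the README of the custom-images tool on GitHub, the invocation is supposed to look roughly like this, though I have not verified the exact flags, and the zone and bucket values here are just placeholders:
python generate_custom_image.py \
    --image-name my-1-3-94-image \
    --dataproc-version 1.3.94-debian10 \
    --customization-script my-script.sh \
    --zone us-east1-b \
    --gcs-bucket gs://my-staging-bucket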
Can someone help? Thank you

Error: Asset 'webhooks/ActionsOnGoogleFulfillment' cannot be deployed

I wanted to build a Google Assistant app with custom actions using the Actions SDK. Since I am new to this, I followed the steps in the tutorial "Build Actions for Google Assistant using Actions SDK (Level 1)" as is, in order to build a sample assistant. However, in step 5 (Implement fulfillment), when trying to test the fulfillment by running the command
gactions deploy preview
I am getting the below output in the terminal with error
Sending configuration files...
Sending resources...
Waiting for server to respond. It could take up to 1 minute if your cloud function needs to be redeployed.
[ERROR] Server did not return HTTP 200.
{
  "error": {
    "code": 400,
    "message": "Asset 'webhooks/ActionsOnGoogleFulfillment' cannot be deployed. [An operation on function cf-_CcGD8lKs_F_LHmFYfJZsQ-name in region us-central1 in project <my-project-id> is already in progress. Please try again later.]"
  }
}
And when I checked the Google Cloud Platform -> Cloud Functions console for this project, the following is seen:
Image 1 (screenshot): Cloud Platform Cloud Functions Console
A failed deployment of the cloud function is shown with an exclamation mark. If I delete that function, a new function is immediately deployed automatically, but with a spinning-wheel symbol (loading/still deploying) instead of the exclamation mark. I cannot delete the cloud function while it is still loading/deploying. After 10-15 minutes, the spinning symbol changes to the exclamation symbol, and if I delete the function then, a new one automatically appears again, and it goes on like this.
Image 2 (screenshot): Cloud Platform Cloud Functions Console
This problem arises only when implementing the webhook/fulfillment (step 5). For static Action responses, the project deploys successfully for testing on entering the command gactions deploy preview (steps 1 to 4 are successfully implemented).
I have followed the tutorial as it is, so the code and directory structure are the same as in the tutorial (only the project ID or Actions console project name will differ).
Github Repository for Code
Since this is only for the tutorial, I am not using a billing account at present; instead, I made the following change in package.json (changed the Node version from 10 to 8):
"engines": {
"node": "8"
},
Due to this continuous automatic failed deployment, when I try to explicitly deploy the project, the error mentioned above occurs:
"An operation on function cf-_CcGD8lKs_F_LHmFYfJZsQ-name in region us-central1 in project <my-project-id> is already in progress. Please try again later".
Can anyone please suggest how to stop this continuous automatic failed deployment of the cloud functions, so that the function I deploy will be successfully deployed? Would really appreciate your help.
(Note: This is the first time I have posted a question in stack overflow, so please let me know if there are any mistakes or stack overflow question conventions I might not have followed. I will improve it.)
Posting this as Community Wiki as it's based on the comments.
As clarified, the issue seems to be the billing account: the tutorial mentions that it's necessary to have one set up for the Cloud Functions to be deployed correctly. Beyond that, it's not possible to deploy Cloud Functions (webhooks) without a billing account at all, so even though you are not using Node.js 10, you will need a billing account configured for your project.
To summarize, a billing account is needed to avoid any possible deployment failure, even if you are not using Node.js 10, as explained in the tutorial you followed.
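As a quick sanity check, you can verify from the command line whether a billing account is linked to the project; this assumes your gcloud SDK includes the beta billing commands:
gcloud beta billing projects describe <my-project-id>
The billingEnabled field in the output should read true before you deploy the webhook.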

Google Cloud Shell - Cannot open shell

I am really new to Google Cloud Shell, and I accidentally closed the tab for the shell...and I cannot find it now.
I know I need to click the highlighted button at the top of the console window to activate Google Cloud Shell, but it is now grey, and no shell is presented on the page (it should be at the bottom).
Can anyone help?
I observed the same issue.
I refreshed the console page (Ctrl + F5) and was able to see the 'Activate Cloud Shell' button again!
However, if the above does not resolve the issue, here are some tips:
From the Google docs on Google Cloud Shell limitations:
Weekly usage: Cloud Shell also has weekly usage limits. If you reach your usage limit, you'll need to wait until the specified time (listed under Usage Quota, found under the three dots menu icon) before you can use Cloud Shell again.
Also, since there are usage limits, check your email to see whether you have violated any conditions of Shell usage:
Warning: Violating the Terms of Service will result in Cloud Shell being disabled for your account. This constitutes activity that adversely impacts Google Cloud Platform services, other customers' or their end users' use of services, or the Google network used to provide these services. Coin mining and network scanning using Cloud Shell are strictly prohibited.
I had the same issue here.
You can install the command line interface in your terminal to access Google Cloud Datalab.
Here you will find the quickstart to configure the environment.
Installing datalab component:
gcloud components install datalab
Connecting with your VM Instance:
datalab connect instance-name
Opening the initial page:
http://localhost:8081
See more:
Google Cloud Datalab - Quickstart
It works now. It seems there was a service problem earlier today, so the page was not functioning properly.

Is there documentation available for Google Cloud Dataflow?

Google Cloud Dataflow was released in June 2014 (more information in this blog post), but I can't find any technical documentation in the developers section of the cloud.google.com website: https://cloud.google.com/developers/
Does anyone know where I can find more information and technical documentation about this product?
I'm really interested in how the topology works: is it static or dynamic? etc.
Google Cloud Dataflow is now in Alpha stage. The documentation is publicly available here: https://cloud.google.com/dataflow/. Follow the documentation link.
Please note that while in Alpha, access to the managed service is invite-only. You can request access via the link above, using the "Apply for Alpha" button.
The Cloud Dataflow SDK for Java has also been made public & open sourced on GitHub here: https://github.com/GoogleCloudPlatform/DataflowJavaSDK. Please note that you can download the SDK and run your Dataflow programs locally without having to execute them on the managed service. Local pipeline execution is a great way to get a feel for the programming model, but understand that the local execution is not parallelized.
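To give a feel for the programming model under local execution, here is a minimal word-count sketch written against the early Cloud Dataflow SDK for Java. The package names and signatures below follow the SDK's published examples, but details may shift between releases, so treat this as an illustrative sketch rather than canonical code:
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;

public class LocalWordCount {
  public static void main(String[] args) {
    // Default options run the pipeline on the local (direct) runner,
    // so no access to the managed service is needed.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

    p.apply(TextIO.Read.from("input.txt"))          // read a local text file
     .apply(ParDo.of(new DoFn<String, String>() {   // split each line into words
       @Override
       public void processElement(ProcessContext c) {
         for (String word : c.element().split("\\s+")) {
           if (!word.isEmpty()) {
             c.output(word);
           }
         }
       }
     }))
     .apply(Count.<String>perElement())             // count occurrences per word
     .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {
       @Override
       public void processElement(ProcessContext c) {
         c.output(c.element().getKey() + ": " + c.element().getValue());
       }
     }))
     .apply(TextIO.Write.to("word_counts"));        // write results locally

    p.run();  // executes immediately on the local runner
  }
}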
We are also moving support over to StackOverflow. Please use the tag: google-cloud-dataflow.
Cheers - Eric
Google Cloud Dataflow is currently in private beta. You can apply here. Documentation is provided upon approval.

How to add a system service to Cloud Foundry step by step

I want to add a new system service to my Micro Cloud Foundry instance, and I am following the steps specified in the document "How to add a system service to Cloud Foundry step by step" for adding an echo service.
But I don't see the specified folder structure on the system where my Micro Cloud Foundry runs.
Thanks
Saidesh
The docs are in the source tree on CloudFoundry.org. For doing development work, that's where the best information is. Here's the doc that I used: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service
One other thought, though: if you want to add a "service", then I'd suggest not using Micro Cloud Foundry, but instead setting up an Ubuntu virtual machine and installing the code base from CloudFoundry.org. Instructions for doing so can be found here: https://github.com/cloudfoundry/oss-docs/tree/master/vcap/single_and_multi_node_deployments_with_dev_setup
Hope that helps,
John