I have a Flutter + Firebase app and received an email about "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020". I updated my calls to the v1 endpoint, and at the end of the email it suggests turning off the legacy endpoints completely. I'm using Google Cloud Functions, and the email says:
If you are using App Engine Standard or Cloud Functions, set the following environment variable: DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true.
Upon further research, I found this can be done through the console (https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom). The docs say to add it as custom metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#disable-legacy-endpoints), but I'm not sure if I'm doing this right.
For additional info: the email was triggered by a few Cloud Functions I have, where I used the Firebase Admin SDK to send push notifications (via Cloud Messaging).
The custom metadata feature you mention is meant to be used with Compute Engine; it allows you to pass arbitrary values to your project or instance, and to set startup and shutdown scripts. It's a handy way to pass common environment variables to all the GCE VMs in your project. You can also use that custom metadata in App Engine Flexible instances, because they are actually Compute Engine VMs in your project running your App Engine code.
Cloud Functions and App Engine Standard are fundamentally different in that they don't run in your project but in a Google-owned project. This makes your project-wide custom metadata unreachable to them.
For this reason, for Cloud Functions you'll need to set a CF-specific environment variable by either:
using the --set-env-vars flag when deploying your Function with the gcloud functions deploy command
adding it to the environment variable section of your Function when creating it via the Developer Console
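For example, a redeploy of an existing function might look like this (the function name here is a placeholder; flags you originally deployed with stay as they were):

  gcloud functions deploy sendPushNotification \
    --set-env-vars DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true

Note that --set-env-vars replaces the function's whole set of environment variables; if you need to keep existing ones, --update-env-vars merges instead.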
I am new to Google Cloud and this is my first experience with this platform (before this, I was using Azure).
I am working on a C# project that needs to save images online, and for that I created a Cloud Storage bucket.
Now, to use the service, I found out that I have to download a service account credential file and set the path to that file in an environment variable.
That is good and working fine:
RxStorageClient = StorageClient.Create();
But the problem is that my solution is a collection of 27 different projects, there are multiple Cloud Storage accounts involved, and I also want to use them with Docker.
So I was wondering: is there any alternative to this service account system, like an API key or a connection string, such as Azure provides?
I saw that this initialization function has some other options to authenticate, but I didn't see any example:
RxStorageClient = StorageClient.Create();
Can anyone please provide a proper example of connecting to Cloud Storage services without this service-account-file system?
Instead of relying on the environment variable, you can download a credential file for each project you need to access.
So, for example, if you have three projects whose storage you want to access, you'd need code paths that initialize the StorageClient with the appropriate service account key for each of those projects.
StorageClient.Create() can take an optional GoogleCredential object to authorize it (if you don't specify one, it uses the Application Default Credentials, which the GOOGLE_APPLICATION_CREDENTIALS env var is one way of setting).
So on GoogleCredential, check out the FromFile(String) static method, where the String is the path to the service account's JSON file.
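Putting those pieces together, a minimal sketch might look like this (the key paths and project IDs are hypothetical):

  using System;
  using Google.Apis.Auth.OAuth2;
  using Google.Cloud.Storage.V1;

  public static class StorageClients
  {
      public static void ListFromTwoProjects()
      {
          // One key file per project you need to reach (paths are placeholders).
          var credentialA = GoogleCredential.FromFile("/secrets/project-a-key.json");
          var storageA = StorageClient.Create(credentialA);

          var credentialB = GoogleCredential.FromFile("/secrets/project-b-key.json");
          var storageB = StorageClient.Create(credentialB);

          // Each client is authorized as its own service account.
          foreach (var bucket in storageA.ListBuckets("project-a-id"))
              Console.WriteLine("A: " + bucket.Name);
          foreach (var bucket in storageB.ListBuckets("project-b-id"))
              Console.WriteLine("B: " + bucket.Name);
      }
  }

Because the paths are ordinary strings, they can come from configuration, which also works well with Docker secrets or mounted volumes.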
There are no examples. Service accounts are absolutely required, even if hidden from view, to deal with Google Cloud products. They're part of the IAM system for authenticating and authorizing various pieces of software for use with various products. I strongly suggest that you become familiar with the mechanisms of providing a service account to a given program. For code running outside of Google Cloud compute and serverless products, the current preferred solution involves using environment variables to point to files that contain credentials. For code running on Google Cloud (like Cloud Run, Compute Engine, or Cloud Functions), it's possible to provide service accounts by configuration so that the code doesn't need to do anything special.
I'm getting an issue calling an external API from a Dataflow job.
Dataflow is running under project A, and the API is hosted in GKE in project B, with Istio. The service account used to run Dataflow has access to resources (like GCS) in both projects A and B.
The projects don't have a default network, and in order to run Dataflow I needed to set the --use_public_ips flag to false. With that, the job runs, but the API call never reaches the API controller and returns the following error:
I/O error while reading input message; nested exception is org.apache.catalina.connector.ClientAbortException: java.net.SocketTimeoutException"
I tested the same job in a separate environment with a default network and with Dataflow and GKE hosted under the same project. In that environment using --use_public_ips=true, the API call works, and using --use_public_ips=false it doesn't.
My questions are:
1 - What exactly does the --use_public_ips flag change in terms of external access to resources, and how can we configure our services to work with it?
2 - Is there a way to run Dataflow in a project without a default network (with the subnetwork specified at runtime) without setting the --use_public_ips flag to false?
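For reference, a minimal sketch of the kind of launch command described above (pipeline file, project, region, and subnetwork names are placeholders; --no_use_public_ips is the Python SDK spelling for setting use_public_ips to false):

  python my_pipeline.py \
    --runner DataflowRunner \
    --project project-a \
    --region us-central1 \
    --subnetwork regions/us-central1/subnetworks/my-subnet \
    --no_use_public_ips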
I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a Cloud Function consisting of a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is Cloud Run suitable for my problem (creating a complex Dialogflow agent)?
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions, you create a bundle with all your source files, or have it pull from a source repository.
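A minimal sketch of how that looks in practice (file and function names are hypothetical):

  // index.js — the entry point the deploy references
  const { handleIntent } = require('./lib/intents');

  exports.dialogflowWebhook = (req, res) => {
    res.json(handleIntent(req.body));
  };

  // lib/intents.js — an ordinary local module bundled with the deploy
  exports.handleIntent = (body) => {
    return { fulfillmentText: 'Handled: ' + body.queryResult.intent.displayName };
  };

Everything in the directory (or the repository) goes up together, so you can structure the code however you like.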
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
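To make "specifying the environment" concrete: a Cloud Run service is just a container image you define yourself. A minimal, hypothetical Dockerfile for a Node.js webhook might look like:

  # Hypothetical minimal image for a Node.js webhook on Cloud Run
  FROM node:12-slim
  WORKDIR /app
  COPY package*.json ./
  RUN npm install --production
  COPY . .
  # Cloud Run tells your app which port to listen on via the PORT env variable
  CMD ["node", "index.js"]

You get full control of the runtime, but you also own the HTTP server, the base image, and keeping both updated.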
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
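For example, assuming your webhook listens locally on port 8080, something like this exposes it over a public HTTPS URL for testing:

  ngrok http 8080

ngrok prints a generated public https:// address that you can paste into the Dialogflow fulfillment settings while you develop.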
This GCloud tutorial has a "Deploying the function" section, with commands such as:
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
But the Quickstart: Using Client Libraries page doesn't mention it at all; all it needs is
npm install --save @google-cloud/storage
and then a few lines of code will work.
So I'm confused: do I need the "deploy" in order to have OCR? In other words, what do/don't I get from "deploy"?
The command
npm install --save @google-cloud/storage
is an example of installing the Google Cloud client library for Node.js in your development environment, in this case for the Cloud Storage API. This example is part of the Setting Up a Node.js Development Environment tutorial.
Once you have coded, tested, and set all the configuration for the app as described in the tutorial, the next step is deployment, in this example as a Cloud Function:
gcloud functions deploy ocr-extract --trigger-bucket YOUR_IMAGE_BUCKET_NAME --entry-point
So, note that these commands are two different steps to run OCR with Cloud Functions, Cloud Storage, and other Cloud Platform components in the tutorial example using the Node.js environment.
While Cloud Functions (CF) are easy to understand, this answers my own question specifically: what does the "deploy" actually do?
For the code to work for you, it must be deployed/uploaded to Google Cloud. For people like me who have never done GCF, this is new. My understanding was that all I needed to supply was credentials, and to satisfy whatever server/backend (sorry, cloud) settings applied when my local app called the remote Web API. That's where I got stuck. The key I missed is that the sample app itself is a set of server/backend event-handler trigger functions, and therefore Google requires them to be "deployed", just like when we deploy something during a staging or production release in a traditional corporate environment. So it's a real deploy. If you still don't get it, go to your GC admin page, menu, Cloud Functions, "Overview" tab, and you will see them there. Hence the next point:
The 3 gcloud deploy commands used in "Deploying the Functions" take ocr-extract, ocr-save and ocr-translate. These are not switches; they are function names, and you can name them anything. Now, still in the admin page, click on any of the 3, then "Source". Bang, there they are, deployed (uploaded).
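For example, a deploy with your own function name might look like this (the bucket, entry point, and runtime values are placeholders):

  gcloud functions deploy my-ocr-extract \
    --trigger-bucket YOUR_IMAGE_BUCKET_NAME \
    --entry-point processImage \
    --runtime nodejs10

The first argument is just the label the function gets in the console; the --entry-point flag is what ties it to the actual exported function in your source.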
Google: since this is a tutorial and no one has dug into the command reference book yet, I recommend adding a note telling readers those 3 ocr-* names can be anything you want.
How do we define credentials in a Java program which connects to Google Cloud Platform to execute code?
There is a standard way of setting the GOOGLE_APPLICATION_CREDENTIALS env variable, but I want to define them in code. Any suggestions?
Thanks for your response. Understood, defining credentials in code is not recommended by GCP. So I will use ADC (Application Default Credentials).
Adding more info:
Providing credentials to your application
GCP client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Kubernetes Engine, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
The following code example illustrates this strategy. The example doesn't explicitly specify the application credentials. However, ADC is able to implicitly find the credentials as long as the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, or as long as the application is running on Compute Engine, Kubernetes Engine, App Engine, or Cloud Functions.
Java Code:
static void authImplicit() {
  // If you don't specify credentials when constructing the client, the client library will
  // look for credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
  Storage storage = StorageOptions.getDefaultInstance().getService();

  System.out.println("Buckets:");
  Page<Bucket> buckets = storage.list();
  for (Bucket bucket : buckets.iterateAll()) {
    System.out.println(bucket.toString());
  }
}
You can find all these details in the GCP documentation: https://cloud.google.com/docs/authentication/production#auth-cloud-app-engine-java
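For completeness, if you really do need to point at a key file in code instead of through the environment variable, the client builders accept explicit credentials. A minimal sketch (the key path is hypothetical, and GCP still recommends ADC over this):

  import com.google.auth.oauth2.GoogleCredentials;
  import com.google.cloud.storage.Bucket;
  import com.google.cloud.storage.Storage;
  import com.google.cloud.storage.StorageOptions;
  import java.io.FileInputStream;
  import java.io.IOException;

  static void authExplicit() throws IOException {
    // Load a service account key from an explicit path instead of the env variable.
    GoogleCredentials credentials = GoogleCredentials.fromStream(
        new FileInputStream("/path/to/service-account.json"));
    Storage storage = StorageOptions.newBuilder()
        .setCredentials(credentials)
        .build()
        .getService();
    System.out.println("Buckets:");
    for (Bucket bucket : storage.list().iterateAll()) {
      System.out.println(bucket.toString());
    }
  }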