How to programmatically get the current project ID in a Google Cloud Run API - google-cloud-platform

I have an API that is containerized and running inside Cloud Run. How can I get the current project ID where my Cloud Run service is executing? I have tried:
I see it in the textPayload field in the logs, but I am not sure how to read the textPayload inside the POST function. The Pub/Sub message I receive is missing this information.
I have read up on querying the metadata API, but it is not very clear how to do that from within the API. Any links?
Is there any other way?
Edit:
After some comments below, I ended up with this code inside my .NET API running inside Cloud Run.
private string GetProjectId()
{
    var projectId = string.Empty;
    try
    {
        // Query the Cloud Run metadata server for the project ID.
        const string PATH = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
        using (var client = new HttpClient())
        {
            // The Metadata-Flavor header is required by the metadata server.
            client.DefaultRequestHeaders.Add("Metadata-Flavor", "Google");
            projectId = client.GetStringAsync(PATH).Result;
        }
        Console.WriteLine("PROJECT: " + projectId);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message + " --- " + ex.ToString());
    }
    return projectId;
}
Update: it works. My build pushes had been failing and I did not notice. Thanks everyone.

You get the project ID by sending a GET request to http://metadata.google.internal/computeMetadata/v1/project/project-id with the Metadata-Flavor: Google header.
See this documentation
In Node.js for example:
index.js:
const express = require('express');
const axios = require('axios');
const app = express();

// Pre-configured client for the metadata server; the Metadata-Flavor header is required.
const axiosInstance = axios.create({
  baseURL: 'http://metadata.google.internal/',
  timeout: 1000,
  headers: {'Metadata-Flavor': 'Google'}
});

app.get('/', (req, res) => {
  // Default to the project-id path, but allow ?path=... to query other metadata.
  let path = req.query.path || 'computeMetadata/v1/project/project-id';
  axiosInstance.get(path).then(response => {
    console.log(response.status);
    console.log(response.data);
    res.send(response.data);
  }).catch(error => {
    console.error(error);
    res.status(500).send('Failed to query the metadata server');
  });
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
package.json:
{
  "name": "metadata",
  "version": "1.0.0",
  "description": "Metadata server",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "Apache-2.0",
  "dependencies": {
    "axios": "^0.18.0",
    "express": "^4.16.4"
  }
}

Others have shown how to get the project ID via the metadata HTTP API, but in my opinion the easier, simpler, and more performant thing to do here is to just set the project ID as a run-time environment variable. To do this, pass it when you deploy the function (for a Cloud Run service, the same --set-env-vars flag works with gcloud run deploy):
gcloud functions deploy myFunction --set-env-vars PROJECT_ID=my-project-name
And then you would access it in code like:
exports.myFunction = (req, res) => {
  console.log(process.env.PROJECT_ID);
}
You would simply need to set the proper value for each environment where you deploy the function. This has the very minor downside of requiring a one-time command line parameter for each environment, and the very major upside of not making your function depend on successfully authenticating with and parsing an API response. This also provides code portability, because virtually all hosting environments support environment variables, including your local development environment.

@Steren's answer, in Python:
import os
import json
import urllib.error
import urllib.request


def get_project_id():
    # On the Python 3.7 runtime, the project ID is exposed as an environment variable.
    project_id = os.getenv("GCP_PROJECT")

    if not project_id:  # Newer runtimes: ask the metadata server (only works when deployed).
        url = "http://metadata.google.internal/computeMetadata/v1/project/project-id"
        req = urllib.request.Request(url)
        req.add_header("Metadata-Flavor", "Google")
        try:
            project_id = urllib.request.urlopen(req).read().decode()
        except urllib.error.URLError:
            project_id = None  # Not running on GCP; fall through to the local check.

    if not project_id:  # Running locally: read it from the service account key file.
        with open(os.environ["GOOGLE_APPLICATION_CREDENTIALS"], "r") as fp:
            credentials = json.load(fp)
        project_id = credentials["project_id"]

    if not project_id:
        raise ValueError("Could not get a value for PROJECT_ID")

    return project_id

I followed the Using Pub/Sub with Cloud Run tutorial.
I added the gcloud module to requirements.txt:
Flask==1.1.1
pytest==5.3.0; python_version > "3.0"
pytest==4.6.6; python_version < "3.0"
gunicorn==19.9.0
gcloud
I changed the index function in main.py:
def index():
    envelope = request.get_json()
    if not envelope:
        msg = 'no Pub/Sub message received'
        print(f'error: {msg}')
        return f'Bad Request: {msg}', 400

    if not isinstance(envelope, dict) or 'message' not in envelope:
        msg = 'invalid Pub/Sub message format'
        print(f'error: {msg}')
        return f'Bad Request: {msg}', 400

    pubsub_message = envelope['message']

    name = 'World'
    if isinstance(pubsub_message, dict) and 'data' in pubsub_message:
        name = base64.b64decode(pubsub_message['data']).decode('utf-8').strip()

    print(f'Hello {name}!')

    # Code added: the client resolves the project ID from the runtime environment.
    from gcloud import pubsub  # Or whichever service you need
    client = pubsub.Client()
    print('This is the project {}'.format(client.project))

    # Flush the stdout to avoid log buffering.
    sys.stdout.flush()

    return ('', 204)
I checked the logs:
Hello (pubsub message).
This is the project my-project-id.
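Note: the gcloud package used above is an older, now-unmaintained predecessor of the current client libraries. If all you need is the project ID, the maintained google-auth library (installed alongside most google-cloud-* packages) can resolve it for you; a minimal sketch, assuming google-auth is available:
import google.auth

# google.auth.default() resolves Application Default Credentials and, where
# possible, the project ID they belong to (it can be None in some environments).
credentials, project_id = google.auth.default()
print('This is the project {}'.format(project_id))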

Here is a snippet of Java code that fetches the current project ID from the metadata server:
// Requires java.net.URL, java.net.HttpURLConnection, java.io.InputStream,
// and java.nio.charset.StandardCharsets.
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) (new URL(url).openConnection());
conn.setRequestProperty("Metadata-Flavor", "Google");
String projectId;
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}

Or use Google's official gcp-metadata client library:
import gcpMetadata from 'gcp-metadata'
const projectId = await gcpMetadata.project('project-id')

It should be possible to use the Platform class from Google.Api.Gax (https://github.com/googleapis/gax-dotnet/blob/master/Google.Api.Gax/Platform.cs). The Google.Api.Gax package is usually installed as a dependency of the other Google .NET packages such as Google.Cloud.Storage.V1:
var projectId = Google.Api.Gax.Platform.Instance().ProjectId;
On the GAE platform, you can also simply check environment variables GOOGLE_CLOUD_PROJECT and GCLOUD_PROJECT
var projectId = Environment.GetEnvironmentVariable("GOOGLE_CLOUD_PROJECT")
?? Environment.GetEnvironmentVariable("GCLOUD_PROJECT");

Related

Google Cloud Functions Credentials for Local Development

I have a google cloud function. Within this function, I want to write files to GCS (google cloud storage), then get a signed URL of the file that is written to GCS and send that URL to the caller.
For local development, I run the functions locally using the functions-framework command:
functions-framework --source=.build/ --target=http-function --port 8082
When I want to write to GCS or get the signed URL, the cloud functions framework just tries to get the credentials from the signed-in gcloud CLI user. However, I want to point it to read the credentials from a service account. For all other gcloud development purposes, we have put the service account information in a local creds.json file and point the gcloud to read from that file.
Is there any way I can achieve this for functions? Meaning that when I start the functions locally (using functions-framework), I point it to the creds.json file to read the credentials from there?
All of Google's SDKs (for example, the GCS client) use Application Default Credentials, which you should rely on instead of explicitly passing a path to a key file. If that holds for functions-framework as well, then exporting GOOGLE_APPLICATION_CREDENTIALS to point at your creds.json before starting it should work.
The command gcloud auth application-default login is a better recommendation in that case, especially for testing the signed URL: with that local credential, as with the credential a Cloud Function gets from the metadata server, no private key is present, so the signed URL must be generated in a specific way (you provide an access token and the service account that will sign the URL).
Using gcloud auth application-default login creates Application Default Credentials that carry all the permissions of your user account and are persisted in the file {HOME}/.config/gcloud/application_default_credentials.json.
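To illustrate that last point, here is a minimal Python sketch of signing a URL without a local private key, using the IAM-based signing path of the google-cloud-storage client. The bucket, object, and service account names are placeholders, and the caller needs permission to sign as that service account (Service Account Token Creator).
import datetime

import google.auth
from google.auth.transport import requests as auth_requests
from google.cloud import storage

# Resolve Application Default Credentials (e.g. from
# `gcloud auth application-default login` or the metadata server).
credentials, project_id = google.auth.default()
credentials.refresh(auth_requests.Request())  # Obtain an access token.

client = storage.Client(credentials=credentials, project=project_id)
blob = client.bucket("my-bucket").blob("my-object.txt")  # placeholder names

# With no private key available locally, delegate signing to the IAM
# signBlob API by passing a service account email and an access token.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    service_account_email="my-sa@my-project.iam.gserviceaccount.com",  # placeholder
    access_token=credentials.token,
)
print(url)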
Cloud Function Local Development Authentication
This is what I'm doing for local development of a Google Cloud Function written in Node.js that is triggered by a Pub/Sub event. The function reads a file from Google Cloud Storage. This uses the Functions Framework for Node.js.
TL;DR:
# shell A
gcloud auth application-default login
npm start

# shell B: send a mock pub/sub event message
curl -d "@mockPubSub.json" \
  -X POST \
  -H "Content-Type: application/json" \
  http://localhost:8080
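If you prefer Python over curl for firing the mock event, a small script along these lines (assuming the requests package is installed) does the same POST:
import json

import requests  # third-party: pip install requests

# Load the mock Pub/Sub envelope and POST it to the locally running function.
with open("mockPubSub.json") as f:
    envelope = json.load(f)

resp = requests.post("http://localhost:8080", json=envelope, timeout=10)
print(resp.status_code, resp.text)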
Greater Details
Cloud Function with Functions Framework
Docs: Functions Framework Nodejs
package.json
note the --target and --signature-type
{
  ...
  "scripts": {
    "start": "npx functions-framework --target=helloPubSub --signature-type=http"
  },
  "dependencies": {
    "@google-cloud/debug-agent": "^7.0.0",
    "@google-cloud/storage": "^6.0.0"
  },
  "devDependencies": {
    "@google-cloud/functions-framework": "^3.1.2"
  }
  ...
}
Sample Node.js cloud function that downloads a file into memory:
/* modified from the sample
   index.js
*/
const {Storage} = require('@google-cloud/storage');

function log(message, severity = 'DEBUG', payload) {
  // Structured logging
  // https://cloud.google.com/functions/docs/monitoring/logging#writing_structured_logs
  if (!!payload) {
    // If payload is an Error, get the stack trace.
    if (payload instanceof Error && !!payload.stack) {
      if (!!message) {
        message = message + '\n' + payload.stack;
      } else {
        message = payload.stack;
      }
    }
  }

  const logEntry = {
    message: message,
    severity: severity,
    payload: payload
  };

  console.log(JSON.stringify(logEntry));
}

function getConfigFile(payload) {
  console.log("Get Config File from GCS");

  const bucketName = 'some-bucket-in-a-project';
  const fileName = 'config.json';

  // Creates a client
  const storage = new Storage();

  async function downloadIntoMemory() {
    // Downloads the file into a buffer in memory.
    const contents = await storage.bucket(bucketName).file(fileName).download();
    console.log(
      `Contents of gs://${bucketName}/${fileName} are ${contents.toString()}.`
    );
  }

  downloadIntoMemory().catch(console.error);
}

exports.helloPubSub = async (pubSubEvent, context) => {
  /*
    Read payload from the event and log the exception in the App project
    if the payload cannot be parsed
  */
  try {
    const payload = Buffer.from(pubSubEvent.body.message.data, 'base64').toString();
    const pubSubEventObj = JSON.parse(payload);
    console.log("name: ", pubSubEventObj.name);
    getConfigFile(pubSubEventObj);
  } catch (err) {
    log('failed to process payload', 'ERROR', err);
  }
};
Mock Message for Pub/Sub Event
I used a blog post as a reference, but I'm not using the emulator.
myJson.json
{"widget": {
"debug": "on",
"window": {
"title": "Sample Konfabulator Widget",
"name": "main_window",
"width": 500,
"height": 500
},
"image": {
"src": "Images/Sun.png",
"name": "sun1",
"hOffset": 250,
"vOffset": 250,
"alignment": "center"
},
"text": {
"data": "Click Here",
"size": 36,
"style": "bold",
"name": "text1",
"hOffset": 250,
"vOffset": 100,
"alignment": "center",
"onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
}
}}
Encode it for the Pub/Sub message (likely there's a better way; a short Python alternative follows the command below):
cat myJson.json | grep -v % | base64
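For example, a small Python alternative to that shell pipe, reading the same file, could be:
import base64

# Read the mock payload and print its base64 encoding, ready to paste
# into the "data" field of mockPubSub.json.
with open("myJson.json", "rb") as f:
    print(base64.b64encode(f.read()).decode("utf-8"))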
Take that output and put it in as the value of the data key:
mockPubSub.json
{
  "message": {
    "attributes": {
      "greeting": "Hello from the Cloud Pub/Sub Emulator!"
    },
    "data": "< put the output of the base64 from above here >",
    "messageId": "136969346945"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
Follow the steps from the TL;DR above.
Disclaimers
gcloud auth application-default login uses the permissions of the user who executes the command. So remember: in production, the service account the cloud function uses will need read access to the storage bucket.
While scrubbing this (i.e. renaming bits) and copying it over, I may have messed it up. Sorry if that is true.
This is all contrived; if you are curious, my design is to take a message from Cloud Scheduler that includes relevant details about what to read from the config.
This article explains a way to hot-reload the cloud function.
It remains unclear to me why I'm using `--signature-type=http` to mock the message, but for now, I am.

How to run Dataflow from python Google API Client Libraries on private subnetwork

I am trying to launch a Dataflow job using the Python Google API client libraries. Everything worked fine previously, until we had to migrate from the default subnetwork to another private subnetwork. Previously I was launching a Dataflow job with the following code:
request = dataflow.projects().locations().templates().launch(
    projectId=PROJECT_ID,
    location=REGION,
    gcsPath=TEMPLATE_LOCATION,
    body={
        'jobName': job_name,
        'parameters': job_parameters,
    },
)
response = request.execute()
However, the job now fails because the default subnetwork does not exist anymore, and I need to specify the data-subnet subnetwork instead.
From this documentation and also this other question, the solution would be trivial if I were to launch the script from the command line by adding the flag --subnetwork regions/$REGION/subnetworks/$PRIVATESUBNET. However, my case is different because I am trying to do it from code, and in the documentation I can't find any subnetwork parameter option.
You can specify a custom subnetwork like so in your pipeline launch request:
request = dataflow.projects().locations().templates().launch(
    projectId=PROJECT_ID,
    location=REGION,
    gcsPath=TEMPLATE_LOCATION,
    body={
        'jobName': job_name,
        'parameters': job_parameters,
        'environment': {
            'subnetwork': SUBNETWORK,
        },
    },
)
response = request.execute()
Make sure SUBNETWORK is in the form "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/subnetworks/<subnetwork-name>"
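For context, a self-contained sketch of the whole call might look like the following; the project, region, template path, and subnetwork names are placeholders, and the client built with google-api-python-client picks up Application Default Credentials:
from googleapiclient.discovery import build

# Placeholder values; replace with your own.
PROJECT_ID = 'my-project'
REGION = 'europe-west1'
TEMPLATE_LOCATION = 'gs://my-bucket/templates/my-template'
SUBNETWORK = (
    f'https://www.googleapis.com/compute/v1/projects/{PROJECT_ID}'
    f'/regions/{REGION}/subnetworks/data-subnet'
)

# The Dataflow REST API client; uses Application Default Credentials.
dataflow = build('dataflow', 'v1b3')

request = dataflow.projects().locations().templates().launch(
    projectId=PROJECT_ID,
    location=REGION,
    gcsPath=TEMPLATE_LOCATION,
    body={
        'jobName': 'my-job',
        'parameters': {},
        'environment': {
            'subnetwork': SUBNETWORK,
        },
    },
)
response = request.execute()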

gcloud codebuild sdk, trigger build from cloud function

Trying to use the @google-cloud/cloudbuild client library in a cloud function to trigger a manual build against a project, but no luck. My function runs async and does not throw an error.
Function:
exports.index = async (req, res) => {
  const json = // json that contains build steps using docker, and project id

  // Creates a client
  const cb = new CloudBuildClient();
  try {
    const result = await cb.createBuild({
      projectId: "myproject",
      build: JSON.parse(json)
    });
    return res.status(200).json(result);
  } catch (error) {
    return res.status(400).json(error);
  }
};
I am assuming from the documentation that my default service account is used implicitly and the credentials are sourced properly, or it would throw an error.
Advice appreciated.
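For reference, a minimal Python sketch of the same call with the official google-cloud-build client is below; waiting on the returned long-running operation is one way to surface failures that an un-awaited call would hide. The project, image, and build step values are placeholders.
from google.cloud.devtools import cloudbuild_v1

# Placeholder build definition, mirroring the JSON used above.
build = cloudbuild_v1.Build(
    steps=[
        cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/docker",
            args=["build", "-t", "gcr.io/my-project/my-image", "."],
        )
    ]
)

client = cloudbuild_v1.CloudBuildClient()

# create_build returns a long-running operation; waiting on it surfaces
# any build error instead of returning silently.
operation = client.create_build(project_id="my-project", build=build)
result = operation.result()
print(result.status)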

Start/Stop Google Cloud SQL instances using Cloud Functions

I am very new to Google Cloud Platform. I am looking for ways to automate starting and stopping a Cloud SQL (MySQL) instance at a predefined time.
I found that we could create a cloud function to start/stop an instance and then use the cloud scheduler to trigger this. However, I am not able to understand how this works.
I used the code that I found in GitHub.
https://github.com/chris32g/Google-Cloud-Support/blob/master/Cloud%20Functions/turn_on_cloudSQL_instance
https://github.com/chris32g/Google-Cloud-Support/blob/master/Cloud%20Functions/turn_off_CloudSQL_instance
However, I am not familiar with any of the programming languages like Node, Python, or Go. That was the reason for the confusion. Below is the code that I found on GitHub to turn on a Cloud SQL instance:
# This file uses the Cloud SQL Admin API to turn on a Cloud SQL instance.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('sqladmin', 'v1beta4', credentials=credentials)
project = 'wave24-gonchristian'  # TODO: Update placeholder value.


def hello_world(request):
    instance = 'test'  # TODO: Update placeholder value.

    # Read the current settings so the settingsVersion can be reused.
    request = service.instances().get(project=project, instance=instance)
    response = request.execute()
    j = response["settings"]
    settingsVersion = int(j["settingsVersion"])

    # Setting activationPolicy to "Always" starts the instance.
    dbinstancebody = {
        "settings": {
            "settingsVersion": settingsVersion,
            "tier": "db-n1-standard-1",
            "activationPolicy": "Always"
        }
    }

    request = service.instances().update(
        project=project,
        instance=instance,
        body=dbinstancebody)
    response = request.execute()
    # pprint(response)

    # Note: 'request' was reassigned above, so this leftover template code no
    # longer refers to the original HTTP request object.
    request_json = request.get_json()
    if request.args and 'message' in request.args:
        return request.args.get('message')
    elif request_json and 'message' in request_json:
        return request_json['message']
    else:
        return f"Hello World!"
________________________
requirements.txt
google-api-python-client==1.7.8
google-auth-httplib2==0.0.3
google-auth==1.6.2
oauth2client==4.1.3
As I mentioned earlier, I am not familiar with Python. I just found this code on GitHub. I was trying to understand what this specific part does:
dbinstancebody = {
    "settings": {
        "settingsVersion": settingsVersion,
        "tier": "db-n1-standard-1",
        "activationPolicy": "Always"
    }
}
dbinstancebody = {
    "settings": {
        "settingsVersion": settingsVersion,
        "tier": "db-n1-standard-1",
        "activationPolicy": "Always"
    }
}
The code block above specifies the Cloud SQL instance settings you would like to update, among which the most relevant for your case is activationPolicy, which allows you to stop or start the instance.
For Second Generation instances, the activation policy is used only to start or stop the instance. You change the activation policy by starting and stopping the instance. Stopping the instance prevents further instance charges.
The activation policy can have two values, Always or Never. Always will start the instance and Never will stop it.
You can use the API to amend the activationPolicy to "NEVER" to stop the server or "ALWAYS" to start it.
# PATCH
https://sqladmin.googleapis.com/sql/v1beta4/projects/{project}/instances/{instance}

# BODY
{
  "settings": {
    "activationPolicy": "NEVER"
  }
}
See this article in the Cloud SQL docs for more info: Starting, stopping, and restarting instances. You can also try out the instances.patch method in the REST API reference.
Please try the code below:
from pprint import pprint
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import os

credentials = GoogleCredentials.get_application_default()
service = discovery.build("sqladmin", "v1beta4", credentials=credentials)

project_id = os.environ.get("GCP_PROJECT")

# Set these variables via Terraform and assign their values there.
desired_policy = os.environ.get("DESIRED_POLICY")  # ALWAYS or NEVER
instance_name = os.environ.get("INSTANCE_NAME")


def cloudsql(request):
    request = service.instances().get(project=project_id, instance=instance_name)
    response = request.execute()

    state = response["state"]
    instance_state = str(state)

    x = response["settings"]
    current_policy = str(x["activationPolicy"])

    dbinstancebody = {"settings": {"activationPolicy": desired_policy}}

    if instance_state != "RUNNABLE":
        print("Instance is not in RUNNABLE STATE")
    else:
        if desired_policy != current_policy:
            request = service.instances().patch(
                project=project_id, instance=instance_name, body=dbinstancebody
            )
            response = request.execute()
            pprint(response)
        else:
            print(
                f"Instance is in RUNNABLE STATE but is also already configured "
                f"with the desired policy: {desired_policy}"
            )
In my repo you can find more information on how to set up the cloud function using Terraform. This cloud function is intended to do what you want, but it uses environment variables; if you don't want to use them, just change the variable values in the Python code.
Here is my repository: Repo

Google Cloud Functions - How to correctly setup the default credentials?

I'm using Google Cloud Functions to listen to a topic in Pub/Sub and send data to a collection in Firestore. The problem is: whenever I test the function (using the test tab that is provided in GCP) and check the logs from that function, it always throws this error:
Error: Could not load the default credentials.
Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
That link didn't help, by the way, as they say the Application Default Credentials are found automatically, but it's not the case here.
This is how I'm using Firestore, in index.js:
const admin = require('firebase-admin')
admin.initializeApp()
var db = admin.firestore()
// ...
db.collection('...').add(doc)
In my package.json, these are the dependencies (I'm using BigQuery too, which raises the same error):
{
  "name": "[function name]",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/pubsub": "^0.18.0",
    "@google-cloud/bigquery": "^4.3.0",
    "firebase-admin": "^8.6.1"
  }
}
I've already tried:
Creating a new service account and using it in the function setting;
Using the command gcloud auth application-default login in Cloud Shell;
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS via Cloud Shell to a json file (I don't even know if that makes sense);
But nothing seems to work :( How can I configure this default credential so that I don't have to ever configure it again? Like, a permanent setting for the entire project so all my functions can have access to Firestore, BigQuery, IoT Core, etc. with no problems.
This is the code that I am using:
const firebase = require('firebase');
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const serviceAccount = require("./key.json");

const config = {
  credential: admin.credential.cert(serviceAccount),
  apiKey: "",
  authDomain: "project.firebaseapp.com",
  databaseURL: "https://project.firebaseio.com",
  projectId: "project",
  storageBucket: "project.appspot.com",
  messagingSenderId: "",
  appId: "",
  measurementId: ""
};

firebase.initializeApp(config);

// The Admin SDK must be initialized separately before admin.firestore() is used.
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount)
});

const db = admin.firestore();