Cloud Function calling another Cloud Function gets "Access is forbidden" - google-cloud-platform

I'm trying to call a Cloud Function from another one and for that, I'm following this documentation.
I've created two functions. This is the code for the function that calls the other one:
const {get} = require('axios');

// TODO(developer): set these values
const REGION = 'us-central1';
const PROJECT_ID = 'my-project-######';
const RECEIVING_FUNCTION = 'hello-world';

// Constants for setting up metadata server request
// See https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature
const functionURL = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${RECEIVING_FUNCTION}`;
const metadataServerURL =
  'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=';
const tokenUrl = metadataServerURL + functionURL;

exports.proxy = async (req, res) => {
  // Fetch the token
  const tokenResponse = await get(tokenUrl, {
    headers: {
      'Metadata-Flavor': 'Google',
    },
  });
  const token = tokenResponse.data;
  console.log(`Token: ${token}`);

  // Provide the token in the request to the receiving function
  try {
    console.log(`Calling: ${functionURL}`);
    const functionResponse = await get(functionURL, {
      headers: {Authorization: `bearer ${token}`},
    });
    res.status(200).send(functionResponse.data);
  } catch (err) {
    console.error(JSON.stringify(err));
    res.status(500).send('An error occurred! See logs for more details.');
  }
};
It's almost identical to the one proposed in the documentation; I just added a couple of logs and I'm stringifying the error before logging it. Following the instructions on that page, I've also added an IAM binding on my hello-world function granting the my-project-#######@appspot.gserviceaccount.com service account the roles/cloudfunctions.invoker role:
$ gcloud functions add-iam-policy-binding hello-world \
>   --member='serviceAccount:my-project-#######@appspot.gserviceaccount.com' \
>   --role='roles/cloudfunctions.invoker'
bindings:
- members:
  - allUsers
  - serviceAccount:my-project-#######@appspot.gserviceaccount.com
  role: roles/cloudfunctions.invoker
etag: ############
version: 1
But still, when I call the code above, I get 403 Access is forbidden. I'm sure this is returned by the hello-world function, since I can see the logs from my proxy code: the token is there and the URL for the hello-world function is correct. I can also call the hello-world function directly from the GCP console. Both functions have Trigger type: HTTP; only the hello-world function has Ingress settings: Allow internal traffic only, while the other one is set to Allow all traffic.
Can someone please help me understand what's wrong?

If your hello-world function is in Allow internal traffic only mode, this means:
Only requests from VPC networks in the same project or VPC Service Controls perimeter are allowed. All other requests are rejected.
To reach the function, you have to call it through your VPC. For this:
Create a Serverless VPC Access connector in the same region as your function (take care, Serverless VPC Access connectors are not available in every region!)
Add it to your second function
Route all of that function's traffic through the connector (I'm not sure that routing only internal traffic works); see the gcloud sketch below
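Here is a minimal sketch of those steps with gcloud, in the same vein as the IAM command above. The connector name, network and IP range are placeholders to adapt to your project, and the region must match the functions:

# Create a Serverless VPC Access connector in the functions' region
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28

# Redeploy the calling function so its egress goes through the connector
# (add your usual runtime/trigger flags as needed)
gcloud functions deploy proxy \
  --region=us-central1 \
  --vpc-connector=my-connector \
  --egress-settings=all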

Related

Cloud Storage operation succeeds on local machine but fails in the cloud ("Error: caller does not have permission")

I am trying to deploy a simple containerized Express app using either GCE or Cloud Run.
It simply calls getSignedUrl with action set to 'read':
router.get('/', async function (req, res, next) {
  const gs = new Storage();
  const credentials = await gs.authClient.getCredentials();
  log(`Client email: ${credentials.client_email}`);
  const [url] = await gs.bucket(GS_BUCKET).file('0.txt').getSignedUrl({
    version: 'v4',
    action: 'read',
    expires: Date.now() + 60_000,
    contentType: 'text/plain',
  });
  res.render('index', { title: 'Express', ...credentials, url });
});
I set up my local development environment using the default service account for the project, as explained here.
Now, when I run it on my local machine, either directly (using Node.js) or in a container (using Docker), it works fine and generates a signed URL every time. When I try to build and deploy the container in the cloud (using Cloud Build + Cloud Run or GCE), however, I get the following error:
Error: The caller does not have permission
    at Gaxios._request (/workspace/node_modules/gaxios/build/src/gaxios.js:130:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Compute.requestAsync (/workspace/node_modules/google-auth-library/build/src/auth/oauth2client.js:382:18)
    at async GoogleAuth.signBlob (/workspace/node_modules/google-auth-library/build/src/auth/googleauth.js:721:21)
    at async sign (/workspace/node_modules/@google-cloud/storage/build/src/signer.js:181:35) {
The client_email property is the same in both environments: 6***********-compute@developer.gserviceaccount.com, i.e. the project's default service account, which seems to have the required permissions (as shown by the operation succeeding on my local machine).
What could cause this error and how can I find out?
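One detail worth noting from the stack trace above: the failure happens inside GoogleAuth.signBlob, i.e. the Storage client is signing the URL via the IAM Credentials signBlob API rather than with a private key held locally (which is what a downloaded key file provides on a local machine). Below is a minimal, hypothetical repro to isolate that signing step, assuming google-auth-library's auth.sign() helper takes the same code path:

// Hypothetical repro: isolate the blob-signing step that getSignedUrl depends on.
// Locally, credentials backed by a key file sign in-process; on GCE/Cloud Run the
// Compute credentials have no private key, so the library calls the IAM
// Credentials signBlob API, which is the call failing in the stack trace.
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const {client_email} = await auth.getCredentials();
  console.log(`Signing as: ${client_email}`);
  const signature = await auth.sign('test-payload'); // base64-encoded signature
  console.log(`Signature: ${signature}`);
}

main().catch((err) => console.error('Signing failed:', err));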

DNS Lookup Error when uploading to localhost (local S3 server)

In a Docker container, the scality/s3server image is running. I am connecting to it from Node.js using the @aws-sdk/client-s3 API.
The S3Client setup looks like this:
const s3Client = new S3Client({
  region: undefined, // See comment below
  endpoint: 'http://127.0.0.1:8000',
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
})
Region undefined: this answer to a similar question suggests leaving the region out, but accessing the region with await s3Client.config.region() still displays eu-central-1, which is the value I passed to the constructor in a previous version. Although I changed it to undefined, it still picks up the old configuration. Could that be connected to the issue?
It was possible to successfully create a bucket (test), and it could be listed by running a ListBucketsCommand (await s3Client.send(new ListBucketsCommand({}))).
However, as mentioned in the title, uploading content or streams to the bucket with
const bucketParams = {
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}
await s3Client.send(new PutObjectCommand(bucketParams))
does not work; instead I am getting a DNS resolution error (which seems odd, since I manually typed the IP address, not localhost).
Anyway, here is the error message:
Error: getaddrinfo EAI_AGAIN test.127.0.0.1
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26) {
  errno: -3001,
  code: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: 'test.127.0.0.1',
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
Do you have any idea
why the region is still configured, and/or
why the DNS lookup happens (and then fails), but only when uploading, not when retrieving metadata about the buckets / creating the buckets?
For the second question, I found a workaround:
Instead of specifying the IP address directly, using endpoint: http://localhost:8000 (i.e. the hostname instead of the IP address) fixes the DNS lookup exception. However, there is no obvious reason why this should happen.
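The test.127.0.0.1 hostname in the error hints at what is happening: the SDK builds a virtual-hosted-style URL, prepending the bucket name to the endpoint host, and that subdomain only resolves for some hostnames (which may be why test.localhost happens to resolve while test.127.0.0.1 does not). A sketch of one way around it, assuming @aws-sdk/client-s3's forcePathStyle option behaves the same against s3server:

const {S3Client, PutObjectCommand} = require('@aws-sdk/client-s3');

const s3Client = new S3Client({
  region: 'us-east-1', // placeholder; the local server should not care about it
  endpoint: 'http://127.0.0.1:8000',
  // Keep the bucket in the path (http://127.0.0.1:8000/test/test.txt)
  // instead of in the host (http://test.127.0.0.1:8000/test.txt).
  forcePathStyle: true,
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
});

await s3Client.send(new PutObjectCommand({
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}));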

Lambda function is (sometimes) timing out attempting to access s3

So I am attempting to create a Lambda function inside of my VPC that requires S3 access. Most of the time it goes off without a hitch; however, sometimes it will just hang on s3.getObject until the function times out, and there is no error when this happens. I have set up a VPC endpoint, the endpoint is in the route table for the (private) subnet, access to the endpoint is not blocked by either the security group or the NACL, and the IAM permissions all seem to be in order (though if that were the issue, one would expect an error message).
I've boiled my code down to a simple get/put for the purposes of debugging this issue, but here it is in case I am missing the incredibly obvious. I've spent hours googling this and tried everything suggested or that I can think of, and am basically out of ideas at this point, so I cannot emphasize enough how much I appreciate any help.
Update: I have run my code from an EC2 instance inside the same VPC/subnet/security group as the Lambda, and it does not seem to have the same problem, so the issue seems to be with the Lambda configuration rather than with any of the network configuration.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  try {
    const getParams = {
      Bucket: 'MY_BUCKET',
      Key: '/path/to/file'
    };
    console.log('************** about to get', getParams);
    const getObject = await s3.getObject(getParams).promise();
    console.log('************** gotObject', getObject);
    const uploadParams = {
      Bucket: 'MY_BUCKET',
      Key: '/new/path/to/file',
      Body: getObject.Body
    };
    console.log('************** about to put', uploadParams);
    const putObject = await s3.putObject(uploadParams).promise();
    console.log('*************** object was put', putObject);
  } catch (err) {
    console.log('***************** error', err);
  }
};
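Not an answer to the root cause, but one thing that can make this kind of hang easier to debug: the v2 SDK's default connect/socket timeouts are generous and its retries are silent, so a bad network path just looks like a stall until the Lambda itself times out. Setting them explicitly (these are standard AWS SDK for JavaScript v2 options; the values are arbitrary placeholders) at least surfaces an error you can log:

const AWS = require('aws-sdk');

// Fail fast and loudly instead of hanging until the Lambda timeout.
const s3 = new AWS.S3({
  httpOptions: {
    connectTimeout: 2000, // ms allowed to establish the TCP connection
    timeout: 5000,        // ms of socket inactivity before the request aborts
  },
  maxRetries: 2, // keep total retry time well under the Lambda timeout
});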

Programmatically invoke a specific endpoint of a webservice hosted on AWS Lambda

I have a multi-endpoint webservice written in Flask and running on API Gateway and Lambda thanks to Zappa.
I have a second, very tiny, lambda, written in Node, that periodically hits one of the webservice endpoints. I do this by configuring the little lambda to have Internet access and then using Node's https.request with these options:
const options = {
  hostname: 'XXXXXXXXXX.execute-api.us-east-1.amazonaws.com',
  port: 443,
  path: '/path/to/my/endpoint',
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${s3cretN0tSt0r3d1nTheC0de}`,
  }
};
and this works beautifully. But now I am wondering whether I should instead make the little lambda invoke the target Lambda directly using the AWS SDK. I have seen other S.O. questions on invoking lambdas from lambdas, but I did not see any examples where the target lambda was a multi-endpoint webservice. All the examples I found used new AWS.Lambda({...}) and then called invokeFunction with params.
Is there a way to pass, say, an event to the target lambda that contains the path of the specific endpoint I want to call (and the auth headers, etc.)? * * * * OR * * * * is this just a really dumb idea, given that I have working code already? My thinking is that a direct SDK lambda invocation might (is this true?) bypass API Gateway and be cheaper, BUT hitting the endpoint via API Gateway is better for logging. And since the periodic lambda runs once a day, it's probably free anyway.
If what I have now is best, that's a fine answer. A lambda-invocation answer would be cool too, since I've not been able to find a good example in which the target lambda had multiple HTTPS endpoints.
You can invoke the Lambda function directly using the invoke method in the AWS SDK.
var params = {
  ClientContext: "MyApp",
  FunctionName: "MyFunction",
  InvocationType: "Event",
  LogType: "Tail",
  Payload: <Binary String>,
  Qualifier: "1"
};
lambda.invoke(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    FunctionError: "",
    LogResult: "",
    Payload: <Binary String>,
    StatusCode: 123
  }
  */
});
Refer to the AWS JavaScript SDK's lambda.invoke documentation for more details.
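For the multi-endpoint case asked about above, keep in mind that a Lambda behind an API Gateway proxy integration (which is what Zappa sets up for the Flask app) expects an API Gateway-style event, so a direct invoke has to hand-craft the path, method and headers in the Payload. A rough sketch of what that could look like; the event shape follows the API Gateway proxy format, and whether a minimal event like this satisfies Zappa's handler is an assumption worth verifying:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({region: 'us-east-1'});

// Hand-built stand-in for the event API Gateway would normally deliver.
const proxyEvent = {
  httpMethod: 'POST',
  path: '/path/to/my/endpoint',
  headers: {Authorization: `Bearer ${process.env.API_TOKEN}`}, // hypothetical env var
  queryStringParameters: null,
  body: null,
  isBase64Encoded: false,
};

lambda.invoke({
  FunctionName: 'my-zappa-webservice', // placeholder function name
  InvocationType: 'RequestResponse',
  Payload: JSON.stringify(proxyEvent),
}, (err, data) => {
  if (err) console.error(err, err.stack);
  else console.log(JSON.parse(data.Payload)); // API Gateway-style response object
});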

Google Dataflow - Invalid Regional Endpoint - Impossible to set region on template from Node.js client

I have a pipeline stored as a template. I'm using the Node.js client to run this pipeline from a Cloud Function. Everything works fine, but when I need to run this template from different regions I get errors.
According to the documentation, I can set the region through the location parameter in the payload:
{
  projectId: 123,
  resource: {
    location: "europe-west1",
    jobName: `xxx`,
    gcsPath: 'gs://xxx'
  }
}
That gives me the following error:
The workflow could not be created, since it was sent to an invalid regional endpoint (europe-west1).
Please resubmit to a valid Cloud Dataflow regional endpoint.
I get the same error if I move the location parameter out of the resource node, such as:
{
  projectId: 123,
  location: "europe-west1",
  resource: {
    jobName: `xxx`,
    gcsPath: 'gs://xxx'
  }
}
If I set the zone in the environment and remove the location such as:
{
  projectId: 123,
  resource: {
    jobName: `xxx`,
    gcsPath: 'gs://xxx',
    environment: {
      zone: "europe-west1-b"
    }
  }
}
I do not get any errors anymore, but the Dataflow UI tells me the job is running in us-east1.
How can I run this template and provide the region/zone I want?
As explained here, there are actually two endpoints:
dataflow.projects.locations.templates.launch (API Explorer)
dataflow.projects.templates.launch (API Explorer)
For Dataflow regional endpoints to work, the first one must be used (dataflow.projects.locations.templates.launch). This way, the location parameter in the request will be accepted. Code snippet:
var dataflow = google.dataflow({
  version: "v1b3",
  auth: authClient
});

var opts = {
  projectId: project,
  location: "europe-west1",
  gcsPath: "gs://path/to/template",
  resource: {
    parameters: launchParams,
    environment: env
  }
};

dataflow.projects.locations.templates.launch(opts, (err, result) => {
  if (err) {
    throw err;
  }
  res.send(result.data);
});
I have been testing this through both the API Explorer and the Console using Google-provided templates. Using the wordcount example, I get the same generic error as you do with the API Explorer, which is the same as when the location name is incorrect. However, the Console provides more information:
Templated Dataflow jobs using Java or Python SDK version prior to 2.0 are not supported outside of the us-central1 Dataflow Regional Endpoint. The provided template uses Google Cloud Dataflow SDK for Java 1.9.1.
This is documented here, as I previously commented. Running the template confirms it is using a deprecated SDK version. I would recommend going through the same process to see if this is actually your case, too.
Choosing a different template, in my case the GCS Text to BigQuery option from the Console's drop-down menu (which uses Apache Beam SDK for Java 2.2.0) with location set to europe-west1 works fine for me (and the job actually runs in that region).
TL;DR: your request is correct in your first example but you'll need to update the template to a newer SDK if you want to use regional endpoints.