Restrict access to Google Cloud Functions to a given network? - google-cloud-platform

I'm looking through the Google Cloud Functions docs, and I wonder whether it is possible to restrict access to an HTTP cloud function to a given network. I would like to prevent anyone from exhausting the free quota.
Are there any firewall rules or similar mechanisms for Cloud Functions?

I don't believe there are any built-in security restrictions at the moment.
To avoid quota exhaustion, you could require a header or parameter carrying some kind of shared secret. Even a fixed string value would help avoid this problem.
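For illustration, a minimal sketch of that idea, assuming a Node.js HTTP function; the header name and secret value are arbitrary examples:

// Hypothetical shared secret; in practice, keep it out of source control.
const SHARED_SECRET = 'some-fixed-string';

exports.myFunction = (req, res) => {
  // Reject early so unauthorized callers burn as little quota as possible.
  if (req.get('X-Shared-Secret') !== SHARED_SECRET) {
    res.status(403).send('Forbidden');
    return;
  }
  res.status(200).send('Hello from the protected function!');
};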

You can add authentication to a cloud function by using Firebase Authentication. Here's a GitHub example of how to do it: https://github.com/firebase/functions-samples/tree/master/authorized-https-endpoint
Note however that the authentication code is executed by your function, so rejecting unauthorized access would still consume a small portion of your free resource allowance.
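Condensed from that sample, a sketch of the approach, assuming the client sends an "Authorization: Bearer <Firebase ID token>" header:

const admin = require('firebase-admin');
admin.initializeApp();

exports.authorizedEndpoint = async (req, res) => {
  const match = (req.get('Authorization') || '').match(/^Bearer (.+)$/);
  if (!match) {
    res.status(403).send('Unauthorized');
    return;
  }
  try {
    // Verify the Firebase ID token sent by the client.
    const decoded = await admin.auth().verifyIdToken(match[1]);
    res.status(200).send(`Hello ${decoded.uid}`);
  } catch (err) {
    // Note: this rejection still runs inside your function, so it consumes
    // a small amount of your resource allowance, as noted above.
    res.status(403).send('Unauthorized');
  }
};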

The Google Function Authorizer module might be what you're looking for. It provides "a simple user authentication and management system for Google Cloud HTTP Functions." It doesn't seem to have a lot of users yet, but the project seems simple enough that you could at least use it as a basis to modify or implement your own solution if you prefer.

This article was helpful for me.
https://cloud.google.com/solutions/authentication-in-http-cloud-functions
Anyone can still invoke the function, but the request must contain credentials from a user that has access to the resources the function uses.
Before that, I was doing something very simple that is probably not suitable for production but does provide a little more security than just leaving the function open to the public: I call my function with a password in the payload, and if it doesn't match one of the passwords hardcoded in the function, it simply fails with a 403.
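That simple check might look something like the following sketch (the passwords and names are illustrative):

// Hardcoded passwords, as described above; not recommended for production.
const ALLOWED_PASSWORDS = ['s3cret-one', 's3cret-two'];

exports.guardedFunction = (req, res) => {
  if (!ALLOWED_PASSWORDS.includes(req.body.password)) {
    res.status(403).send('Forbidden');
    return;
  }
  // ... the function's real work goes here ...
  res.status(200).send('OK');
};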

If you need to restrict access to an IP range, you can follow the instructions here: https://sukantamaikap.com/posts/load-balancing-cloud-functions
The Google Cloud UI has unfortunately changed, and you will need to do some searching before you get everything done, but I managed to set it up. Note, however, that the related services cost roughly 25 EUR per month at minimum.
You can estimate the pricing here:
https://cloudpricingcalculator.appspot.com/
You need to search for "Cloud Load Balancing and Network Services" and then enable "Cloud Load Balancing", "Google Cloud Armor", and "IP addresses".
Alternatively, in some cases it might be sufficient to make the name of the function, or some suffix to it, complex enough that it effectively acts as a sort of password, something like MyGoogleCloudFunc-abracadabra. This doesn't restrict the network, but outsiders would be unlikely to guess the secret name.

Related

How do I get the query quotas from Deployment Manager via the API?

Over at https://console.cloud.google.com/apis/api/deploymentmanager.googleapis.com/quotas or https://console.cloud.google.com/iam-admin/quotas?service=deploymentmanager.googleapis.com, I am able to see the query as well as the write quotas and can determine whether I'm going to hit any limits.
Unfortunately, there seems to be no way to get these values programmatically using the Deployment Manager APIs (using Go) or using gcloud.
Am I missing something here, or are there other ways of getting at these values, possibly not via the APIs directly?
Currently, there's no way to get the quotas programmatically or with gcloud (apart from the Compute Engine quotas); however, there's a feature request to get/set the project quotas via API. I suggest starring this issue to track it and to ask for updates.
Knowing of no API which could be used to do so, I guess one could only limit the quota per user; see the documentation.
There are several questions concerning other APIs (all the same).

Highly granular access control of non-AWS resources in AWS Cognito

I've got an ASP.NET Web API that is using AWS Cognito for authentication and resource access control. We've been using user pool groups up until this point to define certain entities users have access to (non-aws resources in a DB).
The problem is, now that our requirements for access control are more detailed, we are hitting the cap of 25 groups per pool. I've looked into alternatives within Cognito, such as custom attributes, but I've found that there are also limits on the number of custom attributes per pool, and that they only support string and number types, not arrays.
Another alternative I've explored is intercepting the token when it hits our API and adding claims based on permissions mapped in the DB. This works reasonably well, but it is only a server-side solution, and I'm not thrilled about needing a DB call on every request to add claims (not great for performance). We also need some of these claims client side, so this isn't a great solution.
Beyond requesting a service limit increase to the number of groups available per pool, am I missing anything obvious? Groups seem to be the suggested way to do this, based on AWS documentation. Even if we went with a multi-tenant approach using multiple pools, I think the 25-group cap would still be an issue.
https://docs.aws.amazon.com/cognito/latest/developerguide/scenario-backend.html
You can request limit increases for nearly any part of the service, and they will consider it. Sometimes this is more straightforward than building side systems, as you point out. See https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html

Restrict api key to specific functions in Serverless

I am using Serverless and have a service with two functions.
I have set up API keys to restrict access to the service.
One function does a basic task.
The 2nd function is invoked at the end of the 1st function to return a more thorough result.
I would like all API keys to be able to access the 1st function, but only some API keys to access the 2nd function, kind of like a premium feature that is activated for some users.
The only solution I can think of that works is having two separate services, but this seems like a waste of resources, as I'd have to call one API from the other. Is there a better way to do this?
You can see the usage plans associated with an API key and act on that information, see:
https://docs.aws.amazon.com/apigateway/api-reference/link-relation/usageplan-by-id/
https://docs.aws.amazon.com/apigateway/api-reference/link-relation/apikey-usageplans
So instead of requiring the mere presence of an API key, you can inspect some properties associated with it and decide to allow or disallow the usage of your function based on that. For the greatest flexibility, you will have to implement that logic inside your function.
For more info see:
https://docs.aws.amazon.com/apigateway/api-reference/resource/api-key/
especially links pointing to info on plans and marketplace.
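As a rough sketch of that logic inside the premium function (Node.js, AWS SDK v2): the plan name "premium" is an assumption, the function's role needs permission to call the API Gateway read APIs, and this relies on API Gateway exposing the caller's key id on the request context of REST APIs:

const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

exports.premiumHandler = async (event) => {
  // For REST APIs with API-key auth, API Gateway exposes the key's id here.
  const keyId = event.requestContext.identity.apiKeyId;

  // Look up the usage plans this API key is attached to.
  const { items = [] } = await apigateway.getUsagePlans({ keyId }).promise();

  // Allow the call only if the key belongs to the (hypothetical) "premium" plan.
  if (!items.some((plan) => plan.name === 'premium')) {
    return { statusCode: 403, body: 'This feature requires a premium API key.' };
  }
  return { statusCode: 200, body: 'Thorough result for premium users.' };
};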

How do I configure an Amazon AWS Lambda function to prevent tailing the log in the response?

Please see this:
http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
LogType
You can set this optional parameter to Tail in the request
only if you specify the InvocationType parameter with value
RequestResponse. In this case, AWS Lambda returns the base64-encoded
last 4 KB of log data produced by your Lambda function in the
x-amz-log-result header.
Valid Values: None | Tail
So this means any user with valid credentials for invoking a function can also read the logs this function emits?
If so, this is an obvious vulnerability that can give an attacker useful information about how invalid input is processed.
How do I configure an Amazon AWS Lambda function to prevent tailing the log in the response?
Update 1
1) Regarding the comment: "If a hacker can call your Lambda function, you have more problems than seeing log files."
Not true: Lambda functions are also meant to be called directly from client code, using the SDK.
As an example, see the illustration of this pattern in the book "AWS Lambda in Action".
2) Regarding the comment: "How is this a vulnerability exactly? Only someone you have provided AWS IAM credentials would be able to invoke the Lambda function."
Of course, clients do have some credentials most of the time (for example, from having signed in to your mobile app with their Facebook account, through Amazon Cognito). Am I supposed to trust all my users?
3) Regarding the comment: "Only if you have put some secure information to be logged."
Logs may contain sensitive information. I'm not talking about secrets like passwords, but simply information that helps the development team debug or the security team detect attacks. Applications may log all kinds of information, including why some invalid input failed, which can help an attacker learn what valid input looks like. Also, attackers can see all the information the security team is logging about their attacks. Not good. Even privacy may be at risk, depending on what you log.
Update 2
It would also solve my problem if I could somehow detect the Tail parameter in the Lambda code; then I would just fail with a "Tail not allowed" message. Unfortunately, the Context object doesn't seem to contain this information.
I don't think you can configure AWS Lambda to prevent tailing the log in the response. However, you could use your own logging component instead of the one provided by AWS Lambda, avoiding the possibility of exposing logs via the LogType parameter.
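For example, here's a rough sketch of such a component that writes application logs to a separate CloudWatch Logs stream instead of stdout, so nothing sensitive lands in the function's own log tail. The group and stream names are assumptions and must already exist, the execution role needs logs:PutLogEvents on them, and sequence-token handling is simplified:

const AWS = require('aws-sdk');
const cwl = new AWS.CloudWatchLogs();

let sequenceToken; // cached between warm invocations

async function privateLog(message) {
  const result = await cwl.putLogEvents({
    logGroupName: '/myapp/private',   // hypothetical, pre-created log group
    logStreamName: 'lambda-private',  // hypothetical, pre-created log stream
    logEvents: [{ message, timestamp: Date.now() }],
    sequenceToken,
  }).promise();
  sequenceToken = result.nextSequenceToken;
}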
Otherwise, I see your point about adding complexity, but using API Gateway is the most common way to let client applications you do not trust invoke your Lambdas.
You're right: not only is it bad practice, it obviously (as you already understood) introduces security vulnerabilities.
The book also explains that, to be more secure, client requests should hit Amazon API Gateway, which exposes a clean API interface and calls the relevant Lambda function without exposing it to the outside world; an example of such an API is demonstrated a few pages earlier.
By introducing a middle layer between the client and AWS Lambda, we take care of authentication, authorization, access, and all other points of potential vulnerability.
(This should be a comment, but I don't yet have enough Stack Overflow reputation to post one.)
Before commenting on this, please note that a Lambda invocation may result in more than one execution of your function. Per the AWS documentation:
Invocations occur at least once in response to an event and functions must be idempotent to handle this.
As LogType is documented as a valid option, I don't think you can prevent it in your backend. However, you can work around it; I can think of two options:
1) Generate a junk 4 KB tail log (with console.log(), for example) so that an attacker gets only junk; this incurs cost only when an attacker actually invokes the function (see the sketch after this list).
2) Use Step Functions. This not only hides the log but also overcomes the "invocations occur at least once" problem, giving your backend predictable execution. It incurs cost, though.
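A minimal sketch of the first workaround: since the x-amz-log-result header carries only the final 4 KB of log output, a single junk line of that size pushes any real entries out of the tail.

exports.handler = async (event) => {
  const result = await doRealWork(event); // hypothetical business logic
  // Flood the last 4 KB of the log so a LogType=Tail response reveals nothing useful.
  console.log('x'.repeat(4096));
  return result;
};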

Run untrusted code at Google Cloud Functions

I'm not actually whitelisted to use Google Cloud Functions yet, but I have this question:
Do HTTP functions have access to my Google Cloud context?
I want to run untrusted JavaScript code, so I'd like to use a function as a sandbox where users can run simple JavaScript snippets.
If I understand your request correctly, you are looking to have HTTP Cloud Functions evaluate user-provided JavaScript code on the server side.
By your description, the only real way the function could evaluate the user's code would be with eval or new Function(). To confirm the risks, I created a cloud function that simply passes the POST request body to eval. Even without any dependencies, I could issue HTTP requests on behalf of the cloud function, which could be quite bad.
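For reference, the dangerous pattern I tested looked roughly like this (a sketch, emphatically not a recommendation):

exports.runUserCode = (req, res) => {
  try {
    // eval runs the user's code with the function's full privileges,
    // including its network access and any installed dependencies.
    const result = eval(req.body.code);
    res.status(200).send(String(result));
  } catch (err) {
    res.status(400).send(err.message);
  }
};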
Given that most useful cloud functions would have "@google-cloud" as a dependency, the user could also gain access to that context. I was able to require @google-cloud and get all the information accessible to that object (application credentials, application information, etc.). Having such information available to a malicious user is considerably worse than the first test. In addition, Cloud Functions are authenticated by default, presumably with default application credentials, thus gaining all the abilities of the gcloud client library.
In the end, the safest way to run user-provided code on the server would be within a container. This would essentially lock the user's code into a Linux box where the resources and networking capabilities are entirely governed by you. On the Google Cloud Platform, your best means of accomplishing this would likely be using App Engine as a front end to handle user requests and Compute Engine VMs to create and run containers for the user code. It's more complex, but it doesn't risk destroying your Google Cloud Platform project.