I want to achieve the following configuration:
https://example.com - serve Google Cloud Storage Bucket A
https://example.com/files/* - serve Google Cloud Storage Bucket B
https://example.com/api/* - serve Google Cloud Functions -> https://us-central1-{my-app-name}.cloudfunctions.net/api
I have an issue with the Cloud Functions route: how do I specify a Cloud Functions endpoint as a backend? How do I point to a Cloud Function in the backend configuration?
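For what it's worth, the /api/* piece can be expressed as a Firebase Hosting rewrite. A minimal firebase.json sketch, assuming the function is deployed as api in the same project:

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "/api/**",
        "function": "api"
      }
    ]
  }
}
```

Note that Hosting rewrites of this kind target Cloud Functions (and Cloud Run); routing a path like /files/* to a separate storage bucket is not handled by this mechanism.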
You need to register your domain with your Firebase project. These two articles should get you off on the right foot:
Functions overview
Serverless Overview
You need to go to Hosting. After getting started, it will provide a way to attach your custom domain to your project. You will need to validate the domain (last time I did it, it took 72 hours); the steps needed can be found here.
All the information you need is there.
Related
I have a Cloud Function which I want to secure by allowing access only from my domain, for all users. I have been exploring this for days.
Google seems to limit many options, and instead you are pushed to buy and use more products. For this, for example, you need a load balancer, which is a great product but a monster for smaller businesses, and not everyone needs it (or wants to pay for it).
So, how do you secure a Function from the Console, without IAM (no sign-in needed), so that only calls from a certain domain are allowed, before you expand to a load balancer?
I do see that Google has something called organization policies for projects, which are supposed to restrict a domain, but the docs are unclear and outdated (they reference a UI that doesn't exist).
I know that Firebase has the anonymous user, which allows a Function to check the Google ID of an anonymous user, but everything online is Firebase-specific, and there is no explanation anywhere of how to do this from a plain Cloud Function in Python.
EDIT
I do use Firebase Hosting, but my Function is Python and it is managed from GCP; it is not a Firebase Function.
Solved: you can use API Gateway with an API key, restrict the key to your domain only, and upload a config with your Function's URL. You then access the Function with the API URL plus the key, and nobody else can just run it.
See here: Cloud API Gateway doesn't allow with CORS
I wish I could connect it to a domain as well, but we can't; Google seems to want everyone to use the expensive load balancer, or Firebase (which in this case is charged for a Function invocation on every website visit).
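For reference, an API Gateway config along these lines is an OpenAPI 2.0 document with a Google backend extension. A sketch, where the project, function name, and path are placeholders:

```yaml
# openapi.yaml: a sketch; replace the backend address with your function URL
swagger: "2.0"
info:
  title: my-api
  version: "1.0"
schemes:
  - https
produces:
  - application/json
paths:
  /run:
    get:
      operationId: run
      # Forward this route to the Cloud Function
      x-google-backend:
        address: https://us-central1-my-project.cloudfunctions.net/my-function
      security:
        - api_key: []
      responses:
        "200":
          description: OK
securityDefinitions:
  # Callers must pass ?key=YOUR_API_KEY
  api_key:
    type: apiKey
    name: key
    in: query
```

After deploying the config to a gateway, create an API key with an HTTP referrer restriction for your domain, and call the gateway URL with ?key=YOUR_API_KEY.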
I have a web app which connects to a Cloud Function (not Firebase, but GCP). The Cloud Function is Python, it is massive, and it connects to SQL on Google as well.
Now, I bought a domain from Google, and I need to host a simple static website that will access this Cloud Function via its function URL and show data to the client.
It needs to be fast and serve many users (it is sort of a search engine).
I would like to avoid Firebase Hosting for multiple reasons, mainly because I want to stay inside GCP, where I deploy and monitor everything.
I realized that my options to host this static(?) website with my custom domain in GCP are:
Load Balancer - an expensive, overkill solution.
Cloud Storage - which (I might be wrong) will be very limiting later if I need to manage paying users (or can I just send a user ID to the Function using parameters?).
Cloud Run - which I am not sure exactly what it does yet.
What is a solution that fits a light web app (HTML/JS) that can authenticate users but connects to a massive Cloud Function via its URL with simple REST?
Also: can I change the URL of that Cloud Function to be my domain without a load balancer? Currently it is something like project-348324.
I can't see any option anywhere to set up a custom domain for my Google Cloud Function when using HTTP Triggers. Seems like a fairly major omission. Is there any way to use a custom domain instead of their location-project.cloudfunctions.net domain or some workaround to the same effect?
I read an article suggesting putting a CDN in front of the function, with the function URL specified as the pull zone. This would work, but it would introduce unnecessary cost, and in my scenario none of the content can be cached, so using a CDN is far from ideal.
If you connect your Cloud project with Firebase, you can connect your HTTP-triggered Cloud Functions to Firebase Hosting to get vanity URLs.
Using Cloudflare Workers (CDN, reverse proxy)
Why? Because it not only lets you set up a reverse proxy over your Cloud Function, but also lets you configure things such as server-side rendering (SSR) at CDN edge locations, hydrating the API response for the initial (SPA) page load, CSRF protection, DDoS protection, advanced caching strategies, etc.
Add your domain to Cloudflare; then go to DNS settings and add an A record pointing to 192.0.2.1, with the Cloudflare proxy enabled for that record (orange icon). For example: example.com A 192.0.2.1 (proxied).
Create a Cloudflare Worker script similar to this:
function handleRequest(request) {
  const url = new URL(request.url);
  url.protocol = "https:";
  url.hostname = "us-central1-example.cloudfunctions.net";
  url.pathname = `/app${url.pathname}`;
  return fetch(new Request(url.toString(), request));
}

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});
Finally, open the Workers tab in the Cloudflare Dashboard and add a new route mapping your domain URL (pattern) to this worker script, e.g. example.com/* => proxy (script).
For a complete example, refer to GraphQL API and Relay Starter Kit (see web/workers).
Also, vote for "Allow me to put a Custom Domain on my Cloud Function" in the GCF issue tracker.
Another way to do it while avoiding Firebase is to put a load balancer in front of the Cloud Function or Cloud Run and use a "Serverless network endpoint group" as the backend for the load balancer.
Once you have the load balancer set up just modify the DNS record of your domain to point to the load balancer and you are good to go.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless
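Roughly, the gcloud steps look like this (a sketch under assumed names and region; the linked guide is the authoritative walkthrough):

```shell
# Serverless NEG pointing at the function
gcloud compute network-endpoint-groups create my-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-name=my-function

# Backend service with the NEG as its backend
gcloud compute backend-services create my-backend \
    --load-balancing-scheme=EXTERNAL --global
gcloud compute backend-services add-backend my-backend --global \
    --network-endpoint-group=my-neg \
    --network-endpoint-group-region=us-central1

# URL map, managed certificate, HTTPS proxy, and forwarding rule
gcloud compute url-maps create my-url-map --default-service=my-backend
gcloud compute ssl-certificates create my-cert --domains=example.com --global
gcloud compute target-https-proxies create my-proxy \
    --url-map=my-url-map --ssl-certificates=my-cert
gcloud compute forwarding-rules create my-rule --global \
    --target-https-proxy=my-proxy --ports=443
```

The forwarding rule's IP address is what your domain's A record should point to.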
It's been a while since this question was asked.
Yes, now you can use a custom domain for your Google Cloud Functions.
Go over to Firebase and associate your project with Firebase. What we are interested in here is Hosting. Install the Firebase CLI as per the Firebase documentation (very good and sweet docs here).
Now create your project and, as you may have noticed in the docs, to add Firebase to your project you type firebase init. Select Hosting and that's it.
Once you are done, look for the firebase.json file, then customize it like this:
{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "/myfunction/custom",
        "function": "myfunction"
      }
    ]
  }
}
By default, you get a domain like https://project-name.web.app but you can add your own domain on the console.
Now deploy your site. Since you are not interested in web hosting, you can probably leave it as is. Now your function will execute like this:
Function to execute > myfunction
Custom URL > https://example.com/myfunction/custom
If you don't mind the final appearance of the URL, you could also set up a CNAME DNS record:
function.yourdomain.com -> us-central1******.cloudfunctions.net
Then you could call it like:
function.yourdomain.com/function-1/?message=Hello+World
I created a bucket and configured static website hosting.
I want to use SSL, so instead of using
http://my-bucket.s3-website.us-east-2.amazonaws.com/
I have to use
https://s3.us-east-2.amazonaws.com/my-bucket/
The problem with this is that the static website hosting endpoint is still http://my-bucket.s3-website.us-east-2.amazonaws.com/.
I created a redirection rule on it (basically, if the requested file returns 404, then I call an API), but it is not working because (I assume) the endpoint is the wrong one: when I try to access a file that doesn't exist, instead of getting the redirection configured in the static website hosting, I get Access Denied. How do I deal with this?
Notes: I tried to use s3-website.us-east-2.amazonaws.com/my-bucket/file.jpg but I get redirected to an Amazon page.
You can do this by serving your content through CloudFront and then configuring your CloudFront distribution to use HTTPS.
I worked at getting SSL working for a static website on AWS with a custom domain for two days. Having Googled much and stopped by this posting, I finally found this excellent and concise tutorial, Example Walkthroughs - Hosting Websites on Amazon S3, at https://docs.aws.amazon.com/AmazonS3/latest/dev/hosting-websites-on-s3-examples.html. While it seems obvious now, the thing that got SSL working for me was the final step, Update the Record Sets for Your Domain and Subdomain. The guide is very to the point, well written, and easy to follow, so I thought this would help others.
Instead of using CloudFront (or other Amazon services apart from S3), you can use this tool: https://github.com/igorkasyanchuk/amazon_static_site, which allows you to publish a site and use Cloudflare. You will get HTTPS too.
To simplify life you can use a generator, then just edit the config and deploy the files to S3/Cloudflare.
I am considering whether to host my static website in an AWS S3 bucket or in Google Cloud Storage, and, as I already use GCP and it is generally cheaper, I would really like to go with that option.
My issue is that I often need to create custom 301 redirects for my site, like:
https://example.com/page -> https://anotherexample.com/another-page
S3 seems to handle this well, but I'm not finding any documentation on custom redirects from GCP.
Is this possible with GCP Storage buckets yet?
You have to use a Cloud Load Balancer in front of your Google Cloud Storage static site in order to setup redirects.
You likely want to do this anyway, as it is necessary to serve https content (as opposed to http).
You can add as many redirects as you want, but beware of a gotcha: they charge for them. As of today, pricing is:
For first 5 forwarding rules: $0.025/hour
Each additional forwarding rule: $0.01/hour
(https://cloud.google.com/vpc/network-pricing#lb)
That adds up. For example, after the first 5 rules, your 6th rule costs you a mere penny per hour. But a penny per hour = $87.60 per year. So imagine you want 100 redirects... oh my.
Short answer: GCP does not seem to support such a feature.
However, there is a harder way to do this in GCP:
You can set up an instance group of micro virtual machines running Nginx, configured to perform the redirects for you.
Then you'll need to set up a load balancer to handle all requests.
It supports forwarding rules, so you can configure it to send https://example.com/page requests to the Nginx VMs and all other requests to the Storage bucket.
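A minimal Nginx server block for such a redirect VM might look like this (a sketch; the hostnames are the question's examples):

```nginx
server {
    listen 80;
    server_name example.com;

    # Redirect one specific page permanently
    location = /page {
        return 301 https://anotherexample.com/another-page;
    }
}
```

One such location block is needed per redirect, or a map directive can be used to keep many redirects in a single lookup table.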
As pointed out by @Scalar, GCP currently does not support redirects with static website hosting in Google Cloud Storage. Just to add some more detailed information on the proposal of serving content as a backend service, let me share some documentation guidelines that might be helpful for you.
Currently, a Cloud Storage bucket can be served as a backend service (in fact a backend bucket) in order for your requests for static content made against a content-based load balancing system to be served by that bucket (while the rest of the requests are handled by your instances; although you can skip that part, as you only need the Cloud Storage service in your configuration).
In order to set up a backend bucket for your load balancer, you can follow the steps detailed in the following documentation page. It assumes that you have previously completed the creation of a content-based load balancer, so you can start from that example. Then you can set up redirections so that calls to https://example.com/page are redirected to https://anotherexample.com/another-page and are finally processed by your load balancing service, which directs them to your bucket. Redirects can be performed at the application level by the web server of your choice; to give a couple of examples, you can use NGINX's return or rewrite directives, as explained in their official documentation, or Apache's Redirect or RewriteRule directives, as detailed in their documentation too.
I am managing an 8500-page static website in GCS. I manage it using Cloudflare.com, which has a preprocessor called Workers that runs JavaScript.
You can keep either key-value pairs or an array within the code for the 301 redirects.
While Google Cloud Storage still does not support redirects, the GCP HTTP load balancer now supports it.
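As a sketch, such a redirect can be declared in the load balancer's URL map; all names below are placeholders, and the map would be uploaded with gcloud compute url-maps import:

```yaml
# url-map.yaml: redirect /page, serve everything else from the backend bucket
name: my-url-map
defaultService: global/backendBuckets/my-static-site
hostRules:
  - hosts: ["example.com"]
    pathMatcher: main
pathMatchers:
  - name: main
    defaultService: global/backendBuckets/my-static-site
    pathRules:
      - paths: ["/page"]
        urlRedirect:
          hostRedirect: anotherexample.com
          pathRedirect: /another-page
          redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
          stripQuery: true
```

Each additional redirect is another pathRules entry, so no extra forwarding rules (and their per-hour charges) are needed for more redirects on the same domain.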