How can I cache a sub-5MB config for free in a Google Cloud Function? - google-cloud-platform

I have a Google Cloud Function that uses an API to serve information to users. The API's usage guidelines say I shouldn't make an API call for every request, so I have to cache the result.
I'd like to cache it for fast access while preferably staying within Google's free tier (or any other free-tier option that works well for the job).
Thanks :)

I was not able to fully understand your question, but I think you are trying to create an in-memory cache using Cloud Functions. TL;DR: this is not possible.
Cloud Functions should be stateless; please take a look at the official documentation. The same rules also apply to Cloud Run.
However, you can use a combination of tools to achieve your goal, for example Redis via Memorystore, but that is not in the free tier.
Another option is to use Firestore to cache your results (a rough sketch follows below); however, I would check your use case first to make sure you don't run out of the free tier quickly.
Finding a free option for an in-memory cache is very difficult IMO.
Cheers
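To illustrate the Firestore option, here is a minimal cache-aside sketch for a Node.js HTTP function. The collection name apiCache, the document ID config, the TTL, and fetchFromApi() are hypothetical placeholders, not details from the question:

```js
const {Firestore} = require('@google-cloud/firestore');

const firestore = new Firestore();
const TTL_MS = 10 * 60 * 1000; // assumed 10-minute freshness window

// Hypothetical stand-in for the upstream API call the question refers to.
async function fetchFromApi() {
  return {example: 'payload'};
}

exports.getConfig = async (req, res) => {
  const docRef = firestore.collection('apiCache').doc('config');
  const snap = await docRef.get();

  // Serve the cached copy if it exists and is still fresh.
  if (snap.exists && Date.now() - snap.data().fetchedAt < TTL_MS) {
    return res.json(snap.data().payload);
  }

  // Otherwise refresh from the API and store the result for later invocations.
  const payload = await fetchFromApi();
  await docRef.set({payload, fetchedAt: Date.now()});
  res.json(payload);
};
```

One caveat: a single Firestore document is capped at about 1 MiB, so a multi-megabyte config would have to be split across documents or kept in Cloud Storage instead.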

You can either keep the API result in memory or on the /tmp disk (which is itself backed by memory). Since the minimum memory size for a Cloud Functions instance is 128MB, spending 5MB of that on cached API results seems reasonable.
As Andres answered, keep in mind that Cloud Functions are ephemeral and spin up and down as needed, so there's no telling how often a call to your Cloud Function will serve the cached result versus hitting the backend API.
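A minimal sketch of the in-memory approach, assuming a Node.js HTTP function; the cache only lives as long as the instance does, and fetchFromApi() is again a hypothetical placeholder:

```js
// Module-level state is reused across invocations on the same warm instance,
// but is lost whenever the instance is recycled.
let cached = null;
let fetchedAt = 0;
const TTL_MS = 10 * 60 * 1000; // assumed freshness window

// Hypothetical stand-in for the upstream API call.
async function fetchFromApi() {
  return {example: 'payload'};
}

exports.getConfig = async (req, res) => {
  if (!cached || Date.now() - fetchedAt > TTL_MS) {
    cached = await fetchFromApi();
    fetchedAt = Date.now();
  }
  res.json(cached);
};
```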

Related

Google Cloud Storage serve images in different sizes?

I have stored thousands of images in GCP Cloud Storage in very high resolution. I want to serve these images in an iOS/Android app and on a website. I don't want to serve the high-resolution version every time, and I wondered whether I have to create duplicates of each image in different resolutions, which seems very inefficient. The perfect solution would be to append a parameter like ?size=100 to the image URL. Is something like that natively possible with GCP Cloud Storage?
I can't find anything about it in the Cloud Storage documentation: https://cloud.google.com/storage/docs.
Several other resources link to deprecated solutions: https://medium.com/google-cloud/uploading-resizing-and-serving-images-with-google-cloud-platform-ca9631a2c556
What is the best solution to implement such functionality?
Cloud Storage does not have an imaging service yet, though a Feature Request already exists. I highly suggest that you "+1" and "star" this issue to increase its chance of being prioritized in development.
You are right that this use case is common. The Images API is a legacy App Engine API. It's no longer a recommended solution because legacy App Engine APIs are only available in older runtimes that have limited support. GCP advises developers to use the Client Libraries instead, but since the feature you're asking for isn't available yet, you'll have to use third-party imaging libraries.
In this case, developers commonly use a Cloud Function with a Cloud Storage trigger to resize uploads and create duplicate images in different resolutions. While you may find the approach inefficient, unfortunately there isn't much choice but to process the images yourself until the feature request becomes publicly available.
One good thing, though, is that Cloud Functions supports multiple runtimes, so you can write code in any supported language and pick libraries you're comfortable using. If you're using the Node runtime, feel free to check this sample that automatically creates a thumbnail when an image is uploaded to Cloud Storage.
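As a compressed sketch of that idea (not the linked sample itself), here is a Cloud Storage-triggered Node.js function that writes a resized copy under a thumbs/ prefix; the sharp library, the 200px width, and the prefix are all assumptions:

```js
const {Storage} = require('@google-cloud/storage');
const sharp = require('sharp');
const path = require('path');

const storage = new Storage();
const TARGET_WIDTH = 200; // assumed thumbnail width

// Triggered on object finalize: resize each newly uploaded image and
// write the thumbnail back to the same bucket under thumbs/.
exports.makeThumbnail = async (file) => {
  if (!file.contentType || !file.contentType.startsWith('image/')) return;
  if (file.name.startsWith('thumbs/')) return; // don't re-process our own output

  const bucket = storage.bucket(file.bucket);
  const [original] = await bucket.file(file.name).download();
  const resized = await sharp(original).resize({width: TARGET_WIDTH}).toBuffer();

  const thumbName = path.join('thumbs', path.basename(file.name));
  await bucket.file(thumbName).save(resized, {metadata: {contentType: file.contentType}});
};
```

A ?size=100 style URL would then map to serving the pre-generated variant rather than resizing on the fly.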

"Max storage size not supported" When Upgrading AWS RDS

I am using db.m5.4xlarge, but our user base has grown a lot and the server is getting too slow, so we want to upgrade the RDS instance to db.m5.8xlarge. However, when I try to upgrade, I get the error "Max storage size not supported".
I think the reason is that, unlike db.m5.4xlarge, db.m5.8xlarge does not support MySQL, per the docs.
Judging from the discussion with you, I think it might actually be more beneficial to look at creating read replicas rather than an ever-growing single instance.
The problem with increasing the instance size as you are doing now is that each time you will simply hit another bottleneck, and it remains a single point of failure.
Instead, the following strategy is more appropriate and may end up saving you cost:
Create read replicas in RDS to handle all read-only SQL queries; by doing this you are going to see performance gains over your current setup, and you might even be able to scale down the writer instance.
As your application is read-heavy, look at adding caching so your application avoids heavy read traffic against the database. AWS provides ElastiCache, a managed service running either Redis or Memcached. This again could save you money, as you won't need as many live reads (a rough sketch follows this list).
If you choose to include caching, take a look at these caching strategies to work out how you want to use it.
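For illustration only, here is what a simple cache-aside read could look like from a Node.js app against a Redis endpoint such as ElastiCache, using the redis client; the key scheme, TTL, and queryDatabase() helper are hypothetical:

```js
const {createClient} = require('redis');

// Assumes the ElastiCache Redis endpoint is supplied via REDIS_URL.
const redis = createClient({url: process.env.REDIS_URL});
const ready = redis.connect(); // connect once and reuse the promise
const TTL_SECONDS = 300; // assumed cache lifetime

// Hypothetical stand-in for the expensive read against the RDS instance.
async function queryDatabase(userId) {
  return {id: userId, name: 'example'};
}

// Cache-aside: try Redis first, fall back to the database and backfill the cache.
async function getUser(userId) {
  await ready;
  const key = `user:${userId}`;

  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit);

  const user = await queryDatabase(userId);
  await redis.set(key, JSON.stringify(user), {EX: TTL_SECONDS});
  return user;
}
```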

Google API Speeds Slow in Cloud Run / Functions?

Bottom Line: Cloud Run and Cloud Functions seem to have bizarrely limited bandwidth to the Google Drive API endpoints. Looking for advice on how to work around it, or, ideally, for Google to fix the underlying issue(s), as I won't be the only one with this use case.
Background: I have what I think is a really simple use case. We're trying to let private-domain Google Drive users take existing audio recordings, send them to the Speech API to generate a transcript on an ad hoc basis, and drop the transcript back into the same Drive folder with an email notification to the submitter. Easy, right? The only hard part is that the Speech API will only read from Google Cloud Storage, so the 'hard part' should just be moving the file over. 'Hard' doesn't really cover it...
Problem: Writing in Node.js and using the latest versions of the official modules for Drive and GCS, the file copying was going extremely slow. When we broke things down, it became apparent that the GCS speed was acceptable (mostly -- honestly, it didn't get a robust test, but it was fast enough in limited testing); it was the Drive ingress that was causing the real problem. Even the sample Google Drive download app from the repo was as slow as can be. Thinking the issue might be my code or the library, I ran the same thing from the Cloud Console, and it was fast as lightning. Same with GCE. Same locally. But in Cloud Functions or Cloud Run, it's like molasses.
Request:
Has anyone in the community run into this or a similar issue and found a workaround?
Google -- any chance that whatever the underlying performance bottleneck is, you can fix it? This is a quintessentially 'serverless' use case, and it's hard to believe that the folks who've been doing this the longest can't crack it.
Thank you all in advance!
Updated 1/4/19 -- GCS is also slow following more robust testing. The base image also makes no difference (tried nodejs10-alpine, nodejs12-slim, and nodejs12-alpine without impact), and memory limits equally do not impact results locally or on GCP (256m works fine locally; 2Gi fails in GCP).
Google Issue at: https://issuetracker.google.com/147139116
Self-inflicted wound. The Google-provided sample code tries to be asynchronous and do work in the background, and Cloud Run and Cloud Functions do not support that model (for now at least): once the response is sent, the instance no longer gets the CPU attention that background work needs. Move to promise chaining so everything completes before the response returns, and all of a sudden it works like it should. This limits what we can do with Cloud Run / Cloud Functions, but hopefully that too will evolve.
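To make the fix concrete, here is a hypothetical before/after for an HTTP function; copyDriveFileToGcs() is a stand-in for the Drive-to-GCS copy, not code from the question:

```js
// Hypothetical placeholder for the actual Drive -> Cloud Storage copy.
async function copyDriveFileToGcs(fileId) {
  return `gs://example-bucket/${fileId}`;
}

// Anti-pattern: the response is sent before the copy finishes; once the request
// ends, the instance's CPU is throttled and the background work crawls.
exports.transcribeBad = (req, res) => {
  copyDriveFileToGcs(req.body.fileId); // fire-and-forget: not awaited
  res.status(202).send('started');
};

// Fix: await (or chain) the promises so all work completes before responding.
exports.transcribeGood = async (req, res) => {
  const gcsUri = await copyDriveFileToGcs(req.body.fileId);
  res.status(200).send(`copied to ${gcsUri}`);
};
```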

Google Cloud Functions calling approach performance

I found this question about calling other modules inside Google Cloud infrastructure:
How do I call other Google APIs from a Cloud Function?
So my question is: is it possible to trigger Google Cloud Functions using this approach? And if it is possible, how performant would that solution be?
I think it could probably be used as a code-sharing mechanism, because I didn't see any information about this issue regarding GCF.
Regarding your question about triggering: Cloud Functions can be triggered in a variety of ways, including HTTP (web calls such as REST), Pub/Sub, and changes to Cloud Storage (e.g. uploaded files). The set of triggers is likely to expand over time; the latest information can be found at https://cloud.google.com/functions/docs/calling/
Regarding your question about performance: Cloud Functions, at least in the current iteration, run JavaScript inside a Node application. They auto-magically scale: new instances are spun up as demand grows. They should meet the performance needs of most use cases.
Regarding your comment on code sharing: yes, you could create a function and expose it, for example over HTTP, so that it can be used by multiple applications. You'll need to do any authentication and authorization checking per call, though (a rough sketch follows).
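As a sketch of that pattern under assumptions (the target URL is a placeholder, and the callee is deployed as an HTTP function requiring authentication), one function can call another with an ID token via google-auth-library:

```js
const {GoogleAuth} = require('google-auth-library');

// Hypothetical URL of the shared function you want to reuse.
const TARGET_URL = 'https://REGION-PROJECT.cloudfunctions.net/sharedHelper';
const auth = new GoogleAuth();

exports.caller = async (req, res) => {
  // Obtain a client that attaches an ID token scoped to the target function,
  // so the callee can require authenticated invocations.
  const client = await auth.getIdTokenClient(TARGET_URL);
  const response = await client.request({url: TARGET_URL, method: 'POST', data: req.body});
  res.json(response.data);
};
```

Note that each hop adds an HTTP round trip plus possible cold-start latency on the callee, so for pure code sharing a shared library is usually cheaper than a function-to-function call.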

Django development environment with SimpleDB

If I want to write a Django app with Amazon SimpleDB, should I install a local SimpleDB server in my development environment? If so, is there a good one around? simpledb-dev seems to be no longer maintained. Or should I access the DB in the cloud directly?
I would access SimpleDB directly; just point to a different account or a different set of domains for development.
By the way, I don't think there are any "local SimpleDB" servers. You would have to write your own test implementation, which sounds like a nightmare, but then again maybe you have a lot of free time.
Also, you are probably going to want to get a feel for SimpleDB itself, which will be easier if you are using the real thing.