Google Cloud Functions calling approach performance - google-cloud-platform

I found this question. It is about calling other modules inside Google Cloud infrastructure.
How do I call other Google APIs from a Cloud Function?
So my question is: is it possible to trigger Google Cloud Functions using this approach? And if it is possible, how performant would this solution be?
I think it could probably be used as a code-sharing mechanism, because I haven't seen any information about this issue regarding GCF.

Regarding your question about triggering: Cloud Functions can be triggered in a variety of ways, including HTTP (web calls such as REST), Pub/Sub, and changes to Cloud Storage (e.g. uploaded files). The set of triggers is likely to expand over time. The latest information can be found at https://cloud.google.com/functions/docs/calling/
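For illustration, here is a minimal sketch of the two most common trigger styles; the function names, runtime version, and topic name are placeholders, not something taken from those docs:

```javascript
// index.js - minimal sketches of an HTTP-triggered and a Pub/Sub-triggered function.
// Deploy commands (all names are placeholders):
//   gcloud functions deploy helloHttp  --runtime nodejs10 --trigger-http
//   gcloud functions deploy helloTopic --runtime nodejs10 --trigger-topic my-topic

// HTTP trigger: req and res follow the Express request/response interfaces.
exports.helloHttp = (req, res) => {
  const name = req.query.name || 'world';
  res.status(200).send(`Hello, ${name}`);
};

// Pub/Sub trigger: the message payload arrives base64-encoded.
exports.helloTopic = (message, context) => {
  const payload = Buffer.from(message.data, 'base64').toString();
  console.log(`Received message ${context.eventId}: ${payload}`);
};
```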
Regarding your question about performance: Cloud Functions, at least the current iteration, run JavaScript inside a Node application. They auto-magically scale. New instances are spun up as demand grows. They should meet the performance needs of most use cases.
Regarding your comment on code sharing: Yes. You could create a function and expose it, such as with HTTP, so that it can be used by multiple applications. You'll need to do any authentication and authorization checking per call, though.
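As a rough sketch of that sharing pattern: one HTTP function that several applications call, with a deliberately simplified per-request token check. The SERVICE_TOKEN variable and the check itself are placeholders for whatever auth scheme you actually use (Firebase ID tokens, Google-signed JWTs, etc.):

```javascript
// shared-service.js - one HTTP function reused by multiple applications.
// The token check below is a placeholder for a real verification step.
const EXPECTED_TOKEN = process.env.SERVICE_TOKEN; // hypothetical shared secret

exports.sharedLookup = (req, res) => {
  const authHeader = req.get('Authorization') || '';
  const token = authHeader.replace(/^Bearer /, '');

  // Authentication/authorization must be checked on every call - there is no session.
  if (!token || token !== EXPECTED_TOKEN) {
    res.status(401).send('Unauthorized');
    return;
  }

  // ...shared business logic goes here...
  res.json({ caller: req.query.app || 'unknown', ok: true });
};
```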

Related

Pros and Cons of Google Dataflow VS Cloud Run while pulling data from HTTP endpoint

This is a design-approach question where we are trying to pick the best option between Apache Beam / Google Dataflow and Cloud Run to pull data from HTTP endpoints (source) and push it downstream to Google BigQuery (sink).
Traditionally we have implemented similar functionality using Google Dataflow, where the sources are files in a Google Cloud Storage bucket or messages in Google Pub/Sub, etc. In those cases the data arrives in a 'push' fashion, so it makes much more sense to use a streaming Dataflow job.
However, in the new requirement, since the data is fetched periodically from an HTTP endpoint, it seems reasonable to use a Cloud Run service spun up on a schedule.
So I want to gather pros and cons of going with either of these approaches, so that we can make a sensible design for this.
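For context, the Cloud Run option would boil down to something like the sketch below, invoked on a schedule (for example by Cloud Scheduler). The source URL, dataset, and table names are made up, and a Node 18+ runtime with a global fetch is assumed:

```javascript
// Rough sketch of a Cloud Run service that pulls from an HTTP source and writes to BigQuery.
const express = require('express');
const {BigQuery} = require('@google-cloud/bigquery');

const app = express();
const bigquery = new BigQuery();
const SOURCE_URL = process.env.SOURCE_URL; // placeholder: the HTTP endpoint to pull from

app.post('/pull', async (req, res) => {
  try {
    // Fetch the current batch from the source endpoint.
    const response = await fetch(SOURCE_URL);
    const rows = await response.json();

    // Stream the rows into the BigQuery sink (placeholder dataset/table names).
    await bigquery.dataset('my_dataset').table('my_table').insert(rows);
    res.status(200).send(`Inserted ${rows.length} rows`);
  } catch (err) {
    console.error(err);
    res.status(500).send(err.message);
  }
});

app.listen(process.env.PORT || 8080);
```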
I am not sure this question is appropriate for SO, as it opens a broad discussion with differing opinions, without clear context, scope, functional and non-functional requirements, time and budget constraints (including CAPEX/OPEX), or clarity on who is going to support the solution in BAU after commissioning and how, etc.
In my personal experience - I have developed a few dozen similar pipelines using various combinations of Cloud Functions, Pub/Sub topics, Cloud Storage, Firestore (for managing the pipeline's process state) and so on, sometimes with Dataflow as well (embedded into the pipelines), but I have never used Cloud Run. So my knowledge and experience may not be relevant in your case.
The only thing I might suggest - try to prioritise your requirements (in the context of the whole solution lifecycle) and then design the solution based on those priorities. I know - it is a trivial idea, sorry to disappoint you.

Can we use a Google Cloud Function to convert an xls file to csv?

I am new to Google Cloud Functions. My requirement is to trigger a Cloud Function on receiving a Gmail message and convert the xls attachment from the email to csv.
Can we do this using GCP?
Thanks in advance!
Very briefly - that is possible, as far as I know.
But.
You might find that in order to automate this task in a reliable, robust and self-healing way, it may be necessary to use half a dozen Cloud Functions, Pub/Sub topics, maybe a Cloud Storage bucket, maybe a Firestore collection, Secret Manager, a custom service account with the relevant IAM permissions, and so on - maybe more than a dozen or two dozen different GCP resources. And, obviously, the code for those Cloud Functions has to be developed. All together, that may not be very easy or quick to implement.
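That said, the conversion step itself is small. Assuming the attachment has already been dropped into a Cloud Storage bucket by whatever Gmail-watching part of the pipeline you build, a storage-triggered function along these lines - using the npm "xlsx" (SheetJS) package - would handle the xls-to-csv part; the bucket names are placeholders:

```javascript
// Storage-triggered function (sketch): converts an uploaded .xls/.xlsx file to .csv.
// Assumes the Gmail side of the pipeline has already saved the attachment to a bucket.
const {Storage} = require('@google-cloud/storage');
const XLSX = require('xlsx'); // npm "xlsx" (SheetJS)

const storage = new Storage();
const OUTPUT_BUCKET = 'my-csv-output-bucket'; // placeholder

exports.convertXlsToCsv = async (file, context) => {
  if (!file.name.endsWith('.xls') && !file.name.endsWith('.xlsx')) return;

  // Download the uploaded workbook into memory.
  const [contents] = await storage.bucket(file.bucket).file(file.name).download();

  // Convert the first sheet to CSV.
  const workbook = XLSX.read(contents, {type: 'buffer'});
  const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
  const csv = XLSX.utils.sheet_to_csv(firstSheet);

  // Write the CSV to an output bucket, keeping the original base name.
  const csvName = file.name.replace(/\.xlsx?$/, '.csv');
  await storage.bucket(OUTPUT_BUCKET).file(csvName).save(csv);
};
```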
At the same time, I have personally seen (and contributed to the development of) a functional component, based on Cloud Functions, which did exactly what you would like to achieve. And that was in production.

How can I cache a sub-5MB config for free in a Google Cloud Function?

I have a Google Cloud Function that uses an API to serve information to users. That said, I shouldn't make an API call for each request, as specified by the good practices of the API I'm using, so I have to cache the result.
I'd like to cache it for fast access, but preferably while still staying within Google's free tier (or any other free-tier option that works well for the job).
Thanks :)
I was not able to fully understand your question, but I think you are trying to create an in-memory cache solution using a Cloud Function. TL;DR: this is not possible.
Cloud Functions should be stateless; please take a look at the official documentation. The same rules also apply to Cloud Run.
However, you can use a combination of tools to achieve your goal, for example Memorystore for Redis, but this is not in the free tier.
Another option might be to use Firestore to cache your results; however, I would first check your use case to make sure you don't run out of the free tier quickly.
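To make the Firestore option concrete, here is a rough sketch; the document path is made up and fetchConfig() stands in for your real API call. Two caveats: each read of the cached document counts against the Firestore free-tier quotas, and Firestore documents are capped at 1 MiB, so a config approaching 5MB would have to be split across several documents.

```javascript
// Sketch: cache the API result in a Firestore document alongside a timestamp.
// "cache/config" and fetchConfig() are placeholders for your own names and logic.
const Firestore = require('@google-cloud/firestore');

const db = new Firestore();
const CACHE_DOC = db.doc('cache/config');
const MAX_AGE_MS = 10 * 60 * 1000; // refresh after 10 minutes (arbitrary)

async function getConfig() {
  const snapshot = await CACHE_DOC.get();
  if (snapshot.exists && Date.now() - snapshot.get('fetchedAt') < MAX_AGE_MS) {
    return snapshot.get('payload'); // still fresh: serve the cached copy
  }
  const payload = await fetchConfig(); // hypothetical call to the upstream API
  await CACHE_DOC.set({payload, fetchedAt: Date.now()});
  return payload;
}
```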
Finding a free option for an in-memory solution is very difficult IMO.
Cheers
You can either keep the API result in memory, or on the /tmp disk (which is actually also stored in memory). Since the minimal memory size for a Cloud Functions image is 128MB, spending 5MB of that on cached API results seems reasonable.
As Andres answered: keep in mind that Cloud Functions are ephemeral and spin up and down as needed, so there's no telling how often a call to your Cloud Function will be served from the cached results versus calling the backend API.
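As a rough sketch of that in-memory approach (fetchFromApi() is a placeholder for your real API call): anything stored at module scope survives only as long as the instance stays warm, and every cold start simply repopulates it:

```javascript
// Sketch: module-scope cache inside the function instance.
// fetchFromApi() is hypothetical; the cache lives only while the instance is warm.
let cached = null;
let cachedAt = 0;
const MAX_AGE_MS = 5 * 60 * 1000; // arbitrary freshness window

exports.serveInfo = async (req, res) => {
  if (!cached || Date.now() - cachedAt > MAX_AGE_MS) {
    cached = await fetchFromApi(); // upstream call returning the sub-5MB config
    cachedAt = Date.now();
  }
  res.json(cached);
};
```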

How to build complex apps with AWS Lambda and SOA?

We currently run a Java backend which we're hoping to move away from and switch to Node running on AWS Lambda & Serverless.
Ideally during this process we want to build out a fully service orientated architecture.
My question is: if our frontend Angular app requests the current user's ordered items, then to get that information it would need to hit three services - the user service, the order service, and the item service.
Does this mean we would need to make three GET requests to these services? At the moment we have a single endpoint built for that specific request, which can take advantage of DB joins for optimal performance.
I understand the benefits of SOA, but how do we scale when performing more complex requests such as this? Are there any good resources I can take a look at?
Looking at your question, I would advise aligning your priorities first: why do you want to move away from the Java backend you're running now? Which problems do you want to overcome?
You're combining the microservices architecture and the concept of serverless infrastructure in your question. Both can be used in conjunction, but they don't have to. A lot of companies are using microservices, even bigger enterprises like Uber (on NodeJS), but serverless infrastructures like Lambda are really just getting started. I would advise you to read up on microservices especially, e.g. here are some nice articles. You'll also find answers to your question about performance and joins.
When considering an architecture based on Lambda, do consider that no state whatsoever is possible in a Lambda function. This is a step further than the stateless services we usually talk about, which generally just avoid keeping 'client state' between requests. A Lambda function cannot hold any state at all, so e.g. a persistent DB connection pool is not possible. For all the downsides, there's also a lot of stuff you don't have to deal with, which can be very beneficial, especially in terms of scalability.
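On the "joins across services" point, one common pattern (by no means the only one, and not something those articles prescribe specifically) is to put a thin aggregating function in front of the three services, so the front end still makes a single request. A sketch, assuming an API Gateway proxy integration and a Node 18+ Lambda runtime where fetch is global; the service URLs and data shapes are placeholders:

```javascript
// Sketch: an "API composition" Lambda that replaces the client making three calls itself.
// USER_SVC_URL / ORDER_SVC_URL / ITEM_SVC_URL are hypothetical internal endpoints.
const USER_SVC = process.env.USER_SVC_URL;
const ORDER_SVC = process.env.ORDER_SVC_URL;
const ITEM_SVC = process.env.ITEM_SVC_URL;

exports.handler = async (event) => {
  const userId = event.pathParameters.userId; // assumes API Gateway proxy integration

  // Fetch the user and their orders in parallel.
  const [user, orders] = await Promise.all([
    fetch(`${USER_SVC}/users/${userId}`).then(r => r.json()),
    fetch(`${ORDER_SVC}/orders?userId=${userId}`).then(r => r.json()),
  ]);

  // Then resolve the items referenced by those orders (itemIds is a placeholder field).
  const itemIds = [...new Set(orders.flatMap(o => o.itemIds))];
  const items = await Promise.all(
    itemIds.map(id => fetch(`${ITEM_SVC}/items/${id}`).then(r => r.json()))
  );

  // Compose in code what the old single endpoint did with a DB join.
  return {
    statusCode: 200,
    body: JSON.stringify({user, orders, items}),
  };
};
```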

Google Cloud Spanner: Want Java API for doing my own retries

This is really a question for the Google Cloud Spanner Java API team...
Looking at the new Google Cloud Spanner service, it appears that the only way to perform read/write transactions is by providing a callback, via the TransactionRunner interface.
I understand that the API is trying to hide the details of the need to automatically retry transactions as a convenience to the programmer, but this limitation is a serious problem, at least for me. I need to be able to manage the transaction lifecycle myself, even if that means I have to perform my own retries (e.g., based on catching some sort of "retryable" exception).
To make this problem more concrete, suppose you wanted to implement Spring's PlatformTransactionManager for Google Cloud Spanner, so as to fit in with your existing code, and use your existing retry logic. It appears impossible to do that with the current Java API.
It seems like it would be easy to augment the API in a backward compatible way, to add a method returning a TransactionContext to the user, and let the user handle the retries.
Am I missing something? Can this alternate (more traditional) transaction API style be added to the Java API?
You are right that TransactionRunner is the only way to do read-write transactions in the Java client for Cloud Spanner. We believe that most users would prefer using that to hand-rolling their own retry logic, but we realize that it might not fit the needs of all users and would love to hear about such use cases. Could you please file a feature request so we can discuss it further there?