What's the default storage for Google Cloud Run?

There is no documentation that I can find about the storage Google Cloud Run provides. For example, does it come with a few gigabytes of storage, as when we create a VM?
If not, is there a '/tmp' folder that I can put data into temporarily during a request? What are its limits, if it is available?
If neither is available, what is the recommended way to save temporary data while running on Cloud Run?

Cloud Run is a stateless service platform, and does not have any built-in storage mechanism.
Files can be temporarily stored for processing in a container instance, but this storage comes out of the available memory for the service as described in the runtime contract. Maximum memory available to a service is 8 GB.
For persistent storage the recommendation is to integrate with other GCP services that provide storage or databases.
The top services for this are Cloud Storage and Cloud Firestore.
These two are a particularly good match for Cloud Run because they have the most "serverless" compatibility: they scale horizontally to match Cloud Run's scaling capacity, and they can trigger events on state changes to plug into asynchronous, serverless architectures (via Cloud Pub/Sub, Cloud Storage's object change notifications, and Cloud Functions events & triggers).

The writable disk storage is an in-memory filesystem, which is limited by the instance's memory (up to a maximum of 8 GB). Anything written to the filesystem is not persisted between instances.
See:
https://cloud.google.com/run/quotas
https://cloud.google.com/run/docs/reference/container-contract#filesystem
https://cloud.google.com/run/docs/reference/container-contract#memory
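As a concrete illustration of that pattern, a request handler can stage scratch data under /tmp (which counts against instance memory) and then copy it to a Cloud Storage bucket so it outlives the instance. This is only a rough sketch: the bucket and object names are placeholders, and the google-cloud-storage client library must be added to the container image.

```python
# Sketch only: stage scratch data on the in-memory filesystem, then persist
# it to Cloud Storage. "my-output-bucket" is a placeholder bucket name.
import os
import tempfile

from google.cloud import storage  # pip install google-cloud-storage


def persist_scratch_data(data: bytes, object_name: str) -> str:
    # Anything written here is held in the instance's memory.
    with tempfile.NamedTemporaryFile(dir="/tmp", delete=False) as tmp:
        tmp.write(data)
        tmp_path = tmp.name

    # Copy the scratch file to Cloud Storage so it survives instance shutdown.
    bucket = storage.Client().bucket("my-output-bucket")
    bucket.blob(object_name).upload_from_filename(tmp_path)

    # Remove the temporary file to free the memory it occupies.
    os.remove(tmp_path)
    return f"gs://my-output-bucket/{object_name}"
```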

Related

How to use Cloud Scheduler to store data into Cloud Storage?

I've created a job in Google Cloud Scheduler to download data from my demo app using an HTTP GET, and it seemed to run successfully. My question is: where did it store that data, and how can I store it in Google Cloud Storage? Below is a screenshot of my job:
I'm new to Google Cloud and working with a free trial account. Please advise.
Cloud Scheduler does not process data. The data returned by your example request is discarded.
Write a Cloud Function scheduled by Cloud Scheduler. There are other services such as Firebase and Cloud Run that work well also.
You're trying to create a scheduler job that calls a GET request, and Cloud Scheduler will do exactly that, except for the part where the data is stored. Since Cloud Scheduler is a managed cron service, it doesn't matter whether the URL returns data: Cloud Scheduler calls the endpoint on schedule and that's it. The data returned by the request is discarded, as mentioned by John Hanley.
What you can do is integrate scheduling with Cloud Functions, but to be clear, your app needs to do the following first (a sketch follows these steps):
Download from external link and save the object to /tmp within the function.
The rest of the file system is read-only; /tmp is the only writable part, and any files saved in /tmp are held in the function's memory. You can clear /tmp after every successful upload to GCS.
Upload the file to Cloud Storage (using Cloud Storage client library).
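A hypothetical sketch of those two steps as a Pub/Sub-triggered Cloud Function (Python runtime); SOURCE_URL and BUCKET_NAME are placeholders you would replace with your own values:

```python
# Hedged sketch: download an external file to /tmp, then upload it to GCS.
# The function is assumed to be triggered by a Pub/Sub message published
# by a Cloud Scheduler job.
import os

import requests
from google.cloud import storage

SOURCE_URL = "https://example.com/data.json"  # placeholder
BUCKET_NAME = "my-demo-bucket"                # placeholder


def download_and_store(event, context):
    """Entry point for the Pub/Sub trigger."""
    # 1. Download to /tmp, the only writable (in-memory) path.
    local_path = "/tmp/data.json"
    response = requests.get(SOURCE_URL, timeout=60)
    response.raise_for_status()
    with open(local_path, "wb") as f:
        f.write(response.content)

    # 2. Upload the file to Cloud Storage with the client library.
    storage.Client().bucket(BUCKET_NAME).blob("data.json").upload_from_filename(local_path)

    # Clear /tmp after a successful upload to free the function's memory.
    os.remove(local_path)
```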
Now that your app can store and upload data, you can decide where to deploy it.
You can deploy your code using the Firebase CLI and use scheduled functions. The advantage is that Pub/Sub and Cloud Scheduler are taken care of automatically and the configuration lives in your code; the downside is that you are limited to the Node.js runtime, whereas GCP Cloud Functions offers many different programming languages. Learn more about scheduled functions.
Second, you can deploy through the gcloud CLI, but for this you need to set up Pub/Sub and Cloud Scheduler yourself. You can read more about this by navigating to this link.

Is there a way i can use GCP Storage as external drive?

I'm a hobby photographer and take loads of raw photos. I was wondering if it is possible to get rid of my external drive and use GCP Cloud Storage instead. I would need to access, read and write files directly from Adobe Lightroom.
Can I have a drive displayed in My PC on Windows 10? Just like I can see C: and D:, I'd like to see a gs: drive there.
Thanks for the help!
If you want to use a GCP storage service as a network drive, that's what Cloud Filestore does.
It is not Google Cloud Storage, but it can be another option.
You can mount a Cloud Filestore share on your Windows 10 filesystem.
That lets you use it like a regular drive, alongside C: and D:.
But it is a little expensive, so you should compare it with other options.
Here is the GCP pricing calculator.
There is GcsFuse-Win, which is backed by the Google Cloud Storage service. It is the first GCS FUSE for Windows and lets you mount the buckets/folders in your storage account as a local folder/drive on a Windows system. To install the service on Windows you need a GCP account. I suggest you read the limitations before deploying it in production.

How to mount persistent storage to Google Cloud Run?

I was trying to run a Docker image with Cloud Run and realised that there is no option for adding persistent storage. I found a list of services at https://cloud.google.com/run/docs/using-gcp-services#connecting_to_services_in_code, but all of them are accessed from code. I was looking to share a volume backed by persistent storage. Is there a way around this? Is it because persistent storage might not work when shared between multiple instances at the same time? Is there an alternative solution?
Cloud Run is serverless: it abstracts away all infrastructure management.
It is also a managed compute platform that automatically scales your stateless containers.
Filesystem access: the filesystem of your container is writable and is subject to the following behavior: it is an in-memory filesystem, so writing to it uses the container instance's memory, and data written to the filesystem does not persist when the container instance is stopped.
You can use Google Cloud Storage, Firestore or Cloud SQL if your application is stateful.
3 Great Options for Persistent Storage with Cloud Run
What's the default storage for Google Cloud Run?
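As a rough illustration of the Firestore option, a Cloud Run handler can keep state in a document instead of on the local filesystem; the collection and field names below are made up for the example.

```python
# Hedged sketch: persist request state to Firestore rather than the
# container's in-memory filesystem. "sessions" is an invented collection.
from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client()


def save_state(session_id: str, payload: dict) -> None:
    # Writes survive instance shutdown and are visible to every instance.
    db.collection("sessions").document(session_id).set(payload)


def load_state(session_id: str) -> dict:
    snapshot = db.collection("sessions").document(session_id).get()
    return snapshot.to_dict() or {}
```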
Cloud Run (fully managed) has a list of services that are not yet supported, and that list includes Filestore, which would otherwise provide persistent storage. However, you can consider running your Docker image on Cloud Run for Anthos, which runs on GKE; there you can use persistent volumes, which are typically backed by Compute Engine persistent disks.
Having persistent storage in (fully managed) Cloud Run should be possible now.
Cloud Run's second generation execution environment (gen2) supports network mounted file systems.
Here are some alternatives:
Cloud Run + GCS: Using Cloud Storage FUSE with Cloud Run tutorial
Cloud Run + Filestore: Using Filestore with Cloud Run tutorial
If you need help deciding between those, check this:
Design an optimal storage strategy for your cloud workload
NOTE: At the time of this answer, Cloud Run gen2 is in Preview.
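Once such a mount is configured (whether backed by Cloud Storage FUSE or Filestore), the application just performs ordinary file I/O against the mount path. A minimal sketch, assuming a hypothetical mount point of /mnt/shared set up at deploy time:

```python
# Sketch only: with a network filesystem mounted into a gen2 container,
# plain file I/O works against the mount point. "/mnt/shared" is an assumed
# path configured when the service is deployed.
from pathlib import Path

MOUNT_POINT = Path("/mnt/shared")


def append_log_line(line: str) -> None:
    # Data written here lands on the mounted bucket/share, not in instance
    # memory, so it outlives the container instance.
    with open(MOUNT_POINT / "app.log", "a") as f:
        f.write(line + "\n")
```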

Do I need internet to list and fetch bucket object on Google Cloud Storage

I have written a very simple NiFi template which first lists and then fetches an object from a bucket on Google Cloud Storage. Obviously, when fetching the object, NiFi downloads it from the bucket over the internet. My question is: if I want to ingest such an object into other Google Cloud services, such as Pub/Sub or Cloud Datastore, do I need to download these files to a separate node?
Couldn't I have another node in Google Cloud, in the same network as Google Cloud Storage, so that instead of downloading over the internet the object would just be transferred within that network?
Another question I have: do the default Dataflow templates for transferring files and objects from buckets to other services, such as Pub/Sub, follow a similar principle? That is, do they use an internet connection to transfer an object from a bucket to Pub/Sub, or do they transfer the object among network nodes?
Transfers among Google Cloud Platform services are made within the private network. As long as you have set the appropriate firewall rules, the services can communicate directly through the private network, so there is no need to download the files.
For example if you have a job where an object is downloaded from an external source to Cloud Storage and then transferred from Cloud Storage to Cloud Datastore it will use the internet to download the file to Cloud Storage and then it will use the internal private network to transfer it to Cloud Datastore.
Therefore, regarding your second question, the files and objects are transferred among network nodes for Dataflow jobs.
As described in the Dataflow Documentation - Regional endpoints:
You can minimize network latency and network transport costs by running a Cloud Dataflow job from the same region as its sources and/or sinks.
Notes about common Cloud Dataflow job sources:
Cloud Storage buckets can be regional or multi-regional resources. When using a Cloud Storage regional bucket as a source, Google recommends that you perform read operations in the same region.
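To illustrate the first point, a small Cloud Function running inside GCP can react to a new object in a bucket and forward a notification to Pub/Sub without the data leaving Google's network. This is a sketch only: the project and topic IDs are placeholders, and the function is assumed to use a google.storage.object.finalize trigger.

```python
# Hedged sketch: a bucket-triggered Cloud Function that publishes a
# notification to Pub/Sub. PROJECT_ID and TOPIC_ID are placeholders.
import json

from google.cloud import pubsub_v1

PROJECT_ID = "my-project"   # placeholder
TOPIC_ID = "new-objects"    # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)


def on_object_finalized(event, context):
    """Entry point for the object finalize trigger."""
    message = {"bucket": event["bucket"], "name": event["name"]}
    # Both the trigger and the publish happen service-to-service within GCP.
    publisher.publish(topic_path, json.dumps(message).encode("utf-8"))
```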

What are strategies for bridging Google Cloud with AWS?

Let's say a company has an application with a database hosted on AWS and also has a read replica on AWS. Then that same company wants to build out a data analytics infrastructure in Google Cloud -- to take advantage of data analysis and ML services in Google Cloud.
Is it necessary to create an additional read replica within the Google Cloud context? If not, is there an alternative strategy that is frequently used in this context to bridge the two cloud services?
While services like Amazon Relational Database Service (RDS) provide read-replica capabilities, these only work between managed database instances on AWS.
If you are replicating a database between providers, then you are probably running the database yourself on virtual machines rather than using a managed service. This means the databases appear just like any resource on the Internet, so you can connect them exactly the way you would connect two resources across the internet. However, you would be responsible for managing, monitoring, deploying, etc. This takes away from much of the benefit of using cloud services.
Replicating between storage services like Amazon S3 would be easier since it is just raw data rather than a running database. Also, Big Data is normally stored in raw format rather than being loaded into a database.
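For the object-storage case, a simple copy job can stage each S3 object locally and re-upload it to Cloud Storage. This is an illustrative sketch only: the bucket names are placeholders, and credentials for both clouds (AWS keys and GCP application default credentials) are assumed to be configured in the environment.

```python
# Hedged sketch: copy a raw object from Amazon S3 to Google Cloud Storage.
import tempfile

import boto3  # pip install boto3
from google.cloud import storage  # pip install google-cloud-storage


def copy_s3_object_to_gcs(key: str) -> None:
    s3 = boto3.client("s3")
    gcs_bucket = storage.Client().bucket("my-gcs-analytics-bucket")  # placeholder

    # Stage the object in a temporary file, then re-upload it under the same key.
    with tempfile.NamedTemporaryFile() as tmp:
        s3.download_fileobj("my-aws-source-bucket", key, tmp)  # placeholder bucket
        tmp.seek(0)
        gcs_bucket.blob(key).upload_from_file(tmp)
```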
If the existing infrastructure is on a cloud provider, then try to perform the remaining activities on the same cloud provider.