Currently, Cloud Run has a request size limit of 32 MB, which makes it impossible to upload files like videos (which are passed through unchanged to GCP Storage). Meanwhile, the All Quotas page doesn't list this limit as one you can ask support to increase. So the question is: does anyone know how to increase this limit, or how to make uploading videos and other large files possible on Cloud Run given this limitation?
Google's recommended best practice is to use signed URLs to upload files, which is likely to be more scalable and reliable (over flaky networks) for file uploads.
See this page for further information:
https://cloud.google.com/storage/docs/access-control/signed-urls
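A minimal sketch of generating such an upload URL with the google-cloud-storage Python client, assuming a service account that is allowed to sign URLs (the bucket name, object name, and content type below are placeholders):

    from datetime import timedelta
    from google.cloud import storage

    def make_upload_url(bucket_name, object_name):
        """Return a V4 signed URL a client can use to PUT the object directly
        to Cloud Storage, so the bytes never pass through Cloud Run."""
        client = storage.Client()
        blob = client.bucket(bucket_name).blob(object_name)
        return blob.generate_signed_url(
            version="v4",
            expiration=timedelta(minutes=15),
            method="PUT",
            content_type="video/mp4",  # the client must send the same Content-Type
        )

    # The client then uploads straight to the bucket, for example:
    #   curl -X PUT -H "Content-Type: video/mp4" --upload-file clip.mp4 "<signed URL>"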
As per the official GCP documentation, the 32 MB maximum request size for Cloud Run cannot be increased.
Update since the other answers were posted - the request size can be unlimited if using HTTP/2.
See https://cloud.google.com/run/quotas#cloud_run_limits, which lists the maximum HTTP/1 request size as 32 MB when using an HTTP/1 server, and no limit when using an HTTP/2 server.
Related
I am trying to take advantage of the built-in Cloud Storage edge caching feature. When a valid Cache-Control header is set, the files can be stored at edge locations. This is without having to set up Cloud Load Balancer & CDN. This built-in behavior is touched on in this Cloud Next '18 video.
What I am seeing, though, is a hard limit of 10 MB. When I store a file over 10 MB and then download it, it's missing the Age response header; a 9 MB file will have it. The 10 MB limit is mentioned in the CDN docs here, though. What doesn't make sense to me is why files over 10 MB don't get cached at the edge. After all, Cloud Storage meets all the requirements; the documentation even says: Cloud Storage supports byte range requests for most objects.
Does anyone know more about the default caching limits? I can't seem to find any limits documented for Cloud Storage.
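For context, the Cache-Control metadata referred to above can be set with the google-cloud-storage Python client roughly like this (bucket and object names are placeholders):

    from google.cloud import storage

    # Mark an object as publicly cacheable for one hour so the built-in edge
    # caching can apply; bucket and object names are placeholders.
    client = storage.Client()
    blob = client.bucket("my-bucket").blob("videos/clip.mp4")
    blob.cache_control = "public, max-age=3600"
    blob.patch()  # push the updated metadata to Cloud Storage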
At the moment, the cache limit that applies to objects served from Cloud Storage is documented in the Cloud CDN documentation, and what you describe is expected behavior.
Origin server does not support byte range requests: 10 MB (10,485,760 bytes)
In tests on my side, files of exactly 10,485,760 bytes include the Age field.
However, files above that limit, such as 10,485,770 bytes, no longer include it.
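A quick way to reproduce that check, assuming a publicly readable object URL (placeholder below), is to fetch it twice and look at the Age header on the second response:

    import requests

    url = "https://storage.googleapis.com/my-bucket/my-object"  # placeholder
    requests.get(url)                                 # first request warms the cache
    resp = requests.get(url)
    print(resp.status_code, resp.headers.get("Age"))  # an Age value suggests an edge cache hit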
I recommend you create a feature request here in order to improve the Google Cloud Storage documentation.
That way you will have direct communication with the team responsible for the documentation, and your request may be supported by other members of the community.
I have a video streaming application that streams video from a Google Cloud Storage bucket. The files in the bucket are not public. Every time a user clicks a video on the front end, I generate a signed URL via an API and load it into the HTML5 video player.
Problem
I see that if the file size is more than 100 MB, it takes around 30-40 seconds for the video to start loading on the front end.
When I searched for ways to resolve this, some articles suggested putting Cloud CDN in front of the storage bucket and caching the files. As far as I know, to cache a file it has to be publicly available, and I can't make these files public.
So my question is: are there any ways to make this scalable and reduce the initial load time?
Cloud CDN will help your latency for sure. Also, with that amount of latency it might be good to look into the actual requests that are being sent to Cloud Storage to make sure chunks are being requested and that the whole video file isn't being loaded before starting to play.
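One quick way to check that, assuming the signed URL your API hands to the player (placeholder below), is to send a Range request yourself and confirm the origin answers with 206 Partial Content:

    import requests

    signed_url = "<signed URL returned by your API>"  # placeholder
    # Ask for the first megabyte only; 206 with a Content-Range header means the
    # origin honors byte ranges and the player can fetch the video in chunks.
    resp = requests.get(signed_url, headers={"Range": "bytes=0-1048575"})
    print(resp.status_code)                  # expect 206, not 200
    print(resp.headers.get("Content-Range"))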
Caching the file does not require that the file is public. You can make the file private and add the Cloud CDN service into your Cloud Storage ACLs (https://cloud.google.com/cdn/docs/using-signed-urls#configuring_permissions). Also, as Kolban noted above, signed cookies might be better for your application to streamline the requests.
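For reference, a Cloud CDN signed URL is essentially an HMAC-SHA1 signature over the URL plus Expires and KeyName parameters; a rough sketch, assuming you have created a signing key on the backend bucket (the key name and base64 key are placeholders):

    import base64
    import hashlib
    import hmac
    import time

    def sign_cdn_url(url, key_name, base64_key, ttl_seconds=1800):
        """Sketch: append Expires/KeyName, sign with the CDN signing key,
        and append the resulting Signature parameter."""
        expires = int(time.time()) + ttl_seconds
        separator = "&" if "?" in url else "?"
        url_to_sign = "{}{}Expires={}&KeyName={}".format(url, separator, expires, key_name)
        digest = hmac.new(base64.urlsafe_b64decode(base64_key),
                          url_to_sign.encode("utf-8"), hashlib.sha1).digest()
        signature = base64.urlsafe_b64encode(digest).decode("utf-8")
        return "{}&Signature={}".format(url_to_sign, signature)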
Not an exact answer, but this site is useful for designing solutions on GCP:
https://gcp.solutions/diagram/media-transcoding
As mentioned earlier, a CDN is the right way to go for low-latency video streaming.
I have AWS API Gateway configured as a proxy for S3 to upload files to an S3 bucket. I have configured binary media types to support multipart/form-data.
I am able to upload a file of 10 MB or less without any issue. However, when the file is larger than 10 MB, I get a 413 Request Entity Too Large error.
I know that API Gateway has a hard limit of 10 MB on the payload.
Questions
1. Shouldn't adding multipart/form-data solve the 10 MB limit issue? Do I need to configure anything else?
2. Another recommended approach is to create a pre-signed URL. I assume that for this to work the client has to make a call to get the pre-signed URL and then use that URL to upload the file. Is this the only approach for uploading a large file?
Note that I have gone through several SO posts about the same issue, but most of them are old and I am curious whether there are any newer recommendations.
The 10 MB payload limit is hard and cannot be increased [1].
It seems to be possible to split the file into chunks on the client and then put it together on the server again [2] to circumvent the 10 MB limit, but I do not think this is a reasonable approach. The pre-signed URL approach seems better to me, if you do not use a client SDK which provides functionality for chunking.
Please note that if you ever decide to move away from S3, you can still implement the very same pre-signed URL interface on any server yourself. In my opinion it is therefore the way to go.
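A minimal sketch of the pre-signed URL approach with boto3 (bucket and key are placeholders); the client uploads straight to S3, so the 10 MB API Gateway limit never applies:

    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-bucket", "Key": "uploads/big-file.bin"},  # placeholders
        ExpiresIn=900,  # URL is valid for 15 minutes
    )
    # The client then uploads directly, for example:
    #   curl -X PUT --upload-file big-file.bin "<url>"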
I am trying to upload a .bak file (24 GB) to Amazon S3 using the multipart upload low-level API approach in Java. I was able to write the file successfully, but it took around 7-8 hours. I want to know what the average/ideal time to upload a file of this size is: is the time it took expected, or can it be improved? If there is scope for improvement, what approach could I take?
If you are using the default settings of Transfer Manager, then for multipart uploads the DEFAULT_MINIMUM_UPLOAD_PART_SIZE is 5 MB, which is too low for a 24 GB file. This essentially means that you'll end up with thousands of small parts uploaded to S3. Since each part is uploaded by a different worker thread, your application will spend too much time in network communication, which will not give you optimal upload speed.
You must increase the minimum upload part size to somewhere between 100 MB and 500 MB. Use this setting: setMinimumUploadPartSize
Official documentation for setMinimumUploadPartSize:
Decreasing the minimum part size will cause multipart uploads to be split into a larger number of smaller parts. Setting this value too low can have a negative effect on transfer speeds since it will cause extra latency and network communication for each part.
I am certain you'll see improvement in upload throughput by tuning this setting if you are currently using default settings. Let me know if this improves the throughput.
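The question is about the Java SDK's TransferManager; purely to illustrate the same tuning idea, the equivalent knob in the Python SDK (boto3) looks roughly like this (file, bucket, and key names are placeholders, and the values are just starting points):

    import boto3
    from boto3.s3.transfer import TransferConfig

    # Raise the multipart part size from the small default so a 24 GB upload is
    # split into a few hundred parts instead of thousands.
    config = TransferConfig(
        multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
        max_concurrency=10,                     # parallel upload threads
    )
    boto3.client("s3").upload_file(
        "backup.bak", "my-bucket", "backups/backup.bak", Config=config)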
Happy Uploading !!!
Around 90 or 100 calls per second to
pubsub_client.projects().topics().publish(topic='projects/xxxx',body=body).execute(num_retries=0)
from a Google App Engine app to Google Cloud Pub/Sub results in
HttpError: <HttpError 429 when requesting https://pubsub.googleapis.com/v1/projects/xxxx:publish?alt=json returned "Request throttled due to user QPS limit being reached.">
I know there is a 100 QPS limit on administrative operations, but surely publishing to a topic is not an administrative operation? I know Pub/Sub should support millions of operations per second, so I know there's something wrong.
Any help or insight would be appreciated. I need to get up to at least 300 publishes per second, trying to streamline an existing implementation using pubsub. I think this may be a bug with the implementation.
I am running this code on Google App Engine python 2.7 -- using the appengine runtime, not the flexible one as that's not approved for production code yet.
Note that publisher quota is not in terms of QPS, but in terms of throughput. The default limit is 100MB/s. See the Quotas documentation for more details. Depending on the message size you are sending, you may be running into these limits.
The "user QPS limit being reached" message on a publish usually means one of three things:
You are publishing at a throughput that is higher than the default 100MB/s quota. If that is the case, then you can apply for more quota by clicking on the "Apply for higher quota" on the Pub/Sub Quota page.
You are not authenticated against the correct Cloud project. If you are authenticated in or running your Google App Engine instances in a Cloud project that differs from the one your topic is defined in, the quota you run into may not be defined in the project you expect. More information can be found in the Google Application Defaults Credentials page.
You have manually set quota in the Quota page and that is the limit you are running into.
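Separately from quota, note that a single publish request can carry multiple messages, so batching is a simple way to reach 300 messages per second with far fewer calls. A rough sketch in the same client style as the question (payloads are raw bytes):

    import base64

    def publish_batch(pubsub_client, topic, payloads):
        """Pack several payloads into one publish request; the v1 publish body
        accepts a list under "messages", so hundreds of messages per second can
        be sent with only a handful of HTTP calls."""
        body = {
            "messages": [
                {"data": base64.b64encode(p).decode("utf-8")} for p in payloads
            ]
        }
        return pubsub_client.projects().topics().publish(
            topic=topic, body=body).execute(num_retries=0)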