Google Container Registry - max image size? - google-container-registry

Tried googling and reading their documentation, but I cannot find the largest image size they support. I have a 15.7 GB image, and I cannot upload it to Container Registry:
gcloud docker -- push eu.gcr.io/XXXXXX/YYYYYY:ZZZZZZ
The push refers to a repository [eu.gcr.io/XXXXXX/YYYYYY]
5efa92011d99: Retrying in 1 second
8bac40556b9d: Retrying in 8 seconds
e4990dfff478: Retrying in 14 seconds
9f8566ee5135: Retrying in 10 seconds
unknown: Bad Request.

Please contact us with the un-redacted image at gcr-contact@google.com
In general (this may not be the issue for you), the problem with large images is that the short-lived access tokens you receive via our normal token exchange expire before the upload completes, which results in failed uploads. You're going to have to explore JSON key authentication in order to enable the very long sessions needed to upload your images to GCS: https://cloud.google.com/container-registry/docs/advanced-authentication
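For example, a minimal sketch of a JSON-key docker login as described in those docs (keyfile.json is a placeholder for a downloaded service-account key; the script just shells out to docker with the documented _json_key username):
# Sketch: authenticate docker to eu.gcr.io with a service-account JSON key.
# "keyfile.json" is a placeholder path; requires a docker version with --password-stdin.
import subprocess

with open("keyfile.json", "rb") as keyfile:
    subprocess.run(
        ["docker", "login", "-u", "_json_key", "--password-stdin", "https://eu.gcr.io"],
        input=keyfile.read(),
        check=True,
    )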

Related

Bypassing Cloud Run 32mb error via HTTP2 end to end solution

I have an API query that runs during a POST request on one of my views to populate my dashboard page. I know the response size is ~35 MB (greater than the 32 MB limit set by Cloud Run). I was wondering how I could bypass this.
My configuration is set via a Hypercorn server serving my Django web app as an ASGI app. I have 2 minimum instances, 1 GB RAM, and 2 CPUs per instance. I have run this Docker container locally; I can't get around the amount of data required, and I also do not want to store the data due to costs. This seems to be the cheapest route. Any pointers or ideas would be helpful. I understand that I can bypass this via an HTTP/2 end-to-end solution but I am unable to do so currently. I haven't created any additional Hypercorn configurations. Any help appreciated!
The Cloud Run HTTP response limit is 32 MB and cannot be increased.
One suggestion is to compress the response data. Django ships gzip middleware, or you can use Python's gzip/zlib modules directly:
import gzip
data = b"Lots of content to compress"
cdata = gzip.compress(data)
# return the compressed data in the response
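To return the compressed bytes from a Django view, something along these lines should work (a sketch; the view name and payload are placeholders, not from the question):
import gzip
import json
from django.http import HttpResponse

def dashboard(request):
    payload = json.dumps({"example": "data"}).encode()   # placeholder payload
    response = HttpResponse(gzip.compress(payload), content_type="application/json")
    response["Content-Encoding"] = "gzip"                 # client decompresses transparently
    return response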
Cloud Run supports HTTP/1.1 server side streaming, which has unlimited response size. All you need to do is use chunked transfer encoding.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
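A minimal sketch of what that could look like in the Django view (the generator and its data are illustrative placeholders; Django's StreamingHttpResponse sends the body as it is produced, and without a Content-Length the response goes out with chunked transfer encoding):
import json
from django.http import StreamingHttpResponse

def dashboard_data(request):
    # Placeholder for the real query that produces the ~35 MB payload.
    rows = ({"id": i, "value": i * 2} for i in range(1000))

    def generate():
        yield "["
        for i, row in enumerate(rows):
            if i:
                yield ","
            yield json.dumps(row)
        yield "]"

    # Streaming the iterator avoids buffering the full response in memory,
    # so Cloud Run's 32 MB limit on buffered responses no longer applies.
    return StreamingHttpResponse(generate(), content_type="application/json")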

I have tested my AWS server (8 GB RAM) on which my Moodle site is hosted for 1000 users using JMeter, I am getting 0% error, what could be the issue?

My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT); I have tested from 15 to almost 1000 users, however I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there any issue with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me) you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria; for example, I would expect the response time to be not more than 5 seconds. In this case you will have > 60% failures, if this is what you're trying to achieve.
You can consider using the following test elements:
Set a reasonable response timeout using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.

Cloud Run 503 error due to high cpu usage

I just implemented Cloud Run to process/encode video for my mobile application. I have recently gotten an unknown 503 error: POST 503 Google-Cloud-Tasks: The request failed because the HTTP connection to the instance had an error.
My process starts when a user uploads a video to cloud storage, then a function is triggered and sends the video source path to cloud tasks to be enqueued for encoding. Finally cloud run downloads the video, processes it via ffmpeg, and uploads everything to a separate bucket (all downloaded temp files are deleted).
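Roughly, the Cloud Run worker step looks like this (bucket names, ffmpeg flags, and the google-cloud-storage calls below are simplified placeholders, not the exact code):
import os
import subprocess
import tempfile
from google.cloud import storage

def encode(source_bucket: str, source_blob: str, dest_bucket: str) -> None:
    client = storage.Client()
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "input.mp4")
        out = os.path.join(tmp, "output.mp4")
        # Download the uploaded video from Cloud Storage.
        client.bucket(source_bucket).blob(source_blob).download_to_filename(src)
        # Re-encode with ffmpeg (placeholder flags).
        subprocess.run(["ffmpeg", "-i", src, "-c:v", "libx264", "-preset", "fast", out], check=True)
        # Upload the result to the separate output bucket.
        client.bucket(dest_bucket).blob(source_blob).upload_from_filename(out)
        # The temporary directory (and the downloaded/encoded files) is removed on exit.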
I know video encoding is a CPU-heavy task, but my application only allows up to ~3 minute videos to be encoded (usually around 100 MB). It works perfectly fine for shorter videos, but ones on the longer end hit the 503 error after processing for 2+ minutes.
My instances are only used for video encoding and only allow 1 concurrent request per instance. Here are my service's settings:
CPU - 2
Memory - 2 GB
Concurrency - 1
Request Timeout - 900 seconds (15 minutes)
The documentation states that this is caused by CPU-heavy tasks, so it's clear the processing of heavier files is responsible, but I'm unsure what I can do to fix this given the maxed-out settings. Is it possible to set a cap on the CPU so it doesn't go overboard? Or is Cloud Run not a good solution for this kind of task?

Failing to push 500GB container every single time

So, I've created a container 430 GB in size, and the push fails every single time on the same layer.
15d907c6c4d1: Preparing
....
15d907c6c4d1: Retrying in 20 seconds
....
15d907c6c4d1: Retrying in 1 second
write tcp 10.132.0.5:50149->74.125.133.82:443: write: broken pipe
I'm doing this push from a GCP virtual machine, so the network should be fast and stable.
$ gcloud docker -- --version
Docker version 1.12.3, build 6b644ec
I'm quite lost as to how to debug the issue further.
The likely issue is that the user you're trying to push as does not have write access to the Cloud Storage destination bucket. The bucket is named in the format [region].artifacts.[PROJECT-ID].appspot.com and it uses standard GCS access controls; see the documentation for further details.
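If you want to check who can actually write to that bucket, here is a quick sketch with the google-cloud-storage Python client (the bucket name just follows the pattern above; PROJECT-ID is a placeholder):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("eu.artifacts.PROJECT-ID.appspot.com")  # placeholder project ID
policy = bucket.get_iam_policy(requested_policy_version=3)
for binding in policy.bindings:
    # Print each role and the members granted it, e.g. roles/storage.objectAdmin.
    print(binding["role"], binding["members"])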

Getting 403 Forbidden error with Google Cloud

We are getting a "Forbidden" error (403) while trying to upload data to Google Cloud when there is a time skew on my machine, i.e. my machine's clock is not synchronized/updated with the NTP server.
Why does Google not return the proper error information?
It is very likely that you are setting the "Date" field incorrectly. All (signed) API v1.0 requests must include a "Date" header, and that header must be part of the signature for the request. The Date field must be within 15 minutes of the real clock time that Google's servers receive your request. If your clock is more than 15 minutes skewed, your signed requests will be rejected.
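If you are constructing the signed request by hand, the Date header is just the current UTC time in RFC 1123 format; a quick Python sketch:
# Generate an RFC 1123 Date header from the current UTC time; it must be within
# 15 minutes of Google's server clock for the signed request to be accepted.
from email.utils import formatdate

headers = {"Date": formatdate(usegmt=True)}
print(headers["Date"])  # e.g. Mon, 01 Jan 2024 12:00:00 GMT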
For more, please see the v1.0 API documentation here: https://developers.google.com/storage/docs/reference/v1/developer-guidev1#authentication under the CanonicalHeaders section.
This is also the case with S3. See here: http://aws.amazon.com/articles/1109?_encoding=UTF8&jiveRedirect=1#04