What is the max video size when using the Google Photos Library API? - google-photos

I am trying to upload a video (MP4) of 3.9 GB using the Google Photos Library API, but it returns a 400 error.
Is there a file size limit for videos?

As mentioned in the Google developer documentation:
All media items uploaded to Google Photos using the API are stored in
full resolution at original quality. They count toward the user’s
storage.
Note: If your uploads exceed 25MB per user, your application should
remind the user that these uploads will count towards storage in their
Google Account.
So beyond 25 MB, uploads count against the user's Google Account storage. If they have enough free space, then yes, they can upload; if they don't have enough space available, they will have to upgrade their storage plan.
As for your error code, the documentation on exceeding quota limits says:
If the quota of requests to the Library API is exceeded, the API
returns an error code 429 and a message that the project has exceeded
the quota.
So a quota problem would surface as error code 429, not the 400 you are seeing.
For more detailed information, you can find it here.
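For reference, a minimal sketch of the documented two-step upload flow (a raw-bytes POST to the uploads endpoint, then mediaItems:batchCreate with the returned upload token), written here in Python with the requests library. The access token and file path are placeholders, and error handling is reduced to a status print so you can see whether the failure is a 400 or a 429:

import requests

ACCESS_TOKEN = "ya29...."   # placeholder: OAuth 2.0 token with a Photos Library scope
VIDEO_PATH = "movie.mp4"    # placeholder: the ~3.9 GB file in question

# Step 1: upload the raw bytes and receive an upload token.
with open(VIDEO_PATH, "rb") as f:
    upload_resp = requests.post(
        "https://photoslibrary.googleapis.com/v1/uploads",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-type": "application/octet-stream",
            "X-Goog-Upload-Content-Type": "video/mp4",
            "X-Goog-Upload-Protocol": "raw",
        },
        data=f,
    )
print("upload status:", upload_resp.status_code)   # a 400 vs. 429 shows up here
upload_token = upload_resp.text

# Step 2: create the media item in the library from the upload token.
create_resp = requests.post(
    "https://photoslibrary.googleapis.com/v1/mediaItems:batchCreate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"newMediaItems": [{"simpleMediaItem": {"uploadToken": upload_token}}]},
)
print(create_resp.status_code, create_resp.json())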

Related

Issue with reading millions of files from cloud storage using dataflow in Google cloud

Scenario: I am trying to read files and send the data to Pub/Sub.
Millions of files are stored in a Cloud Storage folder (GCP).
I have created a Dataflow pipeline using the "Text Files on Cloud Storage to Pub/Sub" template to publish to the Pub/Sub topic.
But the above template was not able to read millions of files and failed with the following error:
java.lang.IllegalArgumentException: Total size of the BoundedSource objects generated by split() operation is larger than the allowable limit. When splitting gs://filelocation/data/*.json into bundles of 28401539859 bytes it generated 2397802 BoundedSource objects with total serialized size of 199603686 bytes which is larger than the limit 20971520.
System configuration:
Apache Beam: 2.38 Java SDK
Machine: High performance n1-highmem-16
Any idea on how to solve this issue? Thanks in advance
According to this document (1) you can work around this by modifying your custom BoundedSource subclass so that the generated BoundedSource objects become smaller than the 20 MB limit.
(1) https://cloud.google.com/dataflow/docs/guides/common-errors#boundedsource-objects-splitintobundles
You can also use TextIO.readAll() to avoid these limitations.
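As an illustration, a minimal sketch of that workaround using the Beam Python SDK, where ReadAllFromText plays the role of the Java SDK's TextIO.readAll(); the file pattern and topic name are placeholders:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholders: adjust the pattern and topic to your project.
FILE_PATTERN = "gs://filelocation/data/*.json"
TOPIC = "projects/my-project/topics/my-topic"

with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    (
        p
        | "File pattern" >> beam.Create([FILE_PATTERN])
        # ReadAllFromText expands the pattern inside the pipeline, so the
        # runner never has to serialize millions of BoundedSource objects.
        | "Read lines" >> beam.io.ReadAllFromText()
        | "To bytes" >> beam.Map(lambda line: line.encode("utf-8"))
        # WriteToPubSub expects bytes; in the Python SDK it is intended for
        # streaming pipelines, hence streaming=True above.
        | "Publish" >> beam.io.WriteToPubSub(topic=TOPIC)
    )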

AWS service for video optimization and compression

I am trying to build a video/audio/image upload feature for a mobile application. Currently we have set the file size limit to 1 GB for video and 50 MB for audio and images. These uploaded files will be stored in an S3 bucket, and we will use the Amazon CloudFront CDN to serve them to users.
I am trying to compress/optimize the size of the media content using some AWS service after the files are stored in the S3 bucket. Ideally it would be great if I could put some restrictions on the output files, such as no video file being larger than 200 MB or of quality greater than 720p. Can someone please help me with which AWS service I should use, with some helpful links if available? Thanks.
The AWS Elemental MediaConvert service transcodes files on demand. The service supports job templates that can specify output parameters, including resolution, so guaranteeing a 720p maximum resolution is simple.
Amazon S3 supports event notifications that can trigger other AWS actions, such as running a Lambda function when a new file arrives in a bucket. The Lambda function can load and customize a job template, then submit a transcoding job to MediaConvert for the newly arrived file. See https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html for details.
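As a sketch only (the role ARN, template name and output destination are placeholders, and the exact settings depend on how your job template is built), a Lambda handler along these lines could submit the MediaConvert job when the S3 event fires:

import os
import urllib.parse
import boto3

# Placeholders: IAM role that MediaConvert assumes, and the job template
# that caps resolution at 720p, sets QVBR, output destination, etc.
MEDIACONVERT_ROLE_ARN = os.environ["MEDIACONVERT_ROLE_ARN"]
JOB_TEMPLATE = os.environ.get("JOB_TEMPLATE", "my-720p-template")

def lambda_handler(event, context):
    # MediaConvert uses a per-account endpoint, discovered here.
    endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
    mc = boto3.client("mediaconvert", endpoint_url=endpoint)

    # The S3 event notification tells us which object just arrived.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Inputs are supplied per job; the rest comes from the template.
    job = mc.create_job(
        Role=MEDIACONVERT_ROLE_ARN,
        JobTemplate=JOB_TEMPLATE,
        Settings={"Inputs": [{"FileInput": f"s3://{bucket}/{key}"}]},
    )
    return {"jobId": job["Job"]["Id"]}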
Limiting the size of an output file is not currently a feature within MediaConvert, but you could leverage other AWS tools to do this. Checking the size of a transcoded output could be done with another Lambda function when the output file arrives in a certain bucket. This second function could then decide to re-transcode the input file with more aggressive job settings (higher compression, a different codec, time clipping, etc.) in order to produce a smaller output file.
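A minimal sketch of that second function, assuming the 200 MB budget from the question and leaving the re-transcode call itself as a placeholder:

import urllib.parse
import boto3

MAX_BYTES = 200 * 1024 * 1024   # the 200 MB budget from the question

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    if size > MAX_BYTES:
        # Placeholder: resubmit the source to MediaConvert with a more
        # aggressive template (lower QVBR quality level / lower max bitrate).
        print(f"{key} is {size} bytes, over budget; re-transcode needed")
    else:
        print(f"{key} is {size} bytes, within budget")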
Since file size is a factor for you, I recommend using QVBR or VBR rate control with a maximum bitrate cap, which lets you better predict the worst-case file size for a given quality, duration and bitrate. You can allocate your 200 MB per-file budget in different ways: for example, roughly 800 seconds (~13 min) of 2 Mbps video, or 1600 seconds (~26 min) of 1 Mbps video, et cetera. You may want to consider several quality tiers, or have your job-assembly Lambda function do the math for you based on the input file's duration, which could be determined using mediainfo, ffprobe or other utilities.
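As a rough planning aid, the arithmetic behind that budgeting looks like this (audio and container overhead are ignored, so treat the results as estimates):

# Rough planning math: size_bits ~= bitrate_bps * duration_s.
BUDGET_MB = 200

def max_duration_seconds(video_mbps: float, budget_mb: int = BUDGET_MB) -> float:
    budget_megabits = budget_mb * 8
    return budget_megabits / video_mbps

def max_bitrate_mbps(duration_s: float, budget_mb: int = BUDGET_MB) -> float:
    return (budget_mb * 8) / duration_s

print(max_duration_seconds(2.0))   # 800.0  -> ~13 min of 2 Mbps video
print(max_duration_seconds(1.0))   # 1600.0 -> ~26 min of 1 Mbps video
print(max_bitrate_mbps(1800))      # max average bitrate for a 30-minute input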
FYI there are three ways customers can obtain help with AWS solution design and implementation:
[a] AWS Paid Professional Services - There is a large global AWS ProServices team able to help via paid service engagements.
The fastest way to start this dialog is by submitting the AWS Sales team 'contact me' form found here, and specifying 'Sales Support' : https://aws.amazon.com/contact-us/
[b] AWS Certified Consulting Partners -- AWS certified partners with expertise in many verticals. See search tool & listings here: https://iq.aws.amazon.com/services
[c] AWS Solutions Architects -- these services are focused on Enterprise-level AWS accounts. The Sales contact form in item [a] is the best way to engage them. Purchasing AWS Enterprise Support entitles the customer to a dedicated TAM/SA combination.

Max file count using a BigQuery data transfer job

I have about 54,000 files in my GCP bucket. When I try to schedule a BigQuery data transfer job to move the files from the GCP bucket to BigQuery, I get the following error:
Error code 9 : Transfer Run limits exceeded. Max size: 15.00 TB. Max file count: 10000. Found: size = 267065994 B (0.00 TB) ; file count = 54824.
I thought the max file count was 10 million.
I think that the BigQuery Transfer Service lists all the files matching the wildcard and then uses that list to load them. So it is the same as providing the full list to bq load ... and therefore hits the 10,000-URI limit.
This is probably necessary because the transfer service skips already-loaded files, so it needs to look at them one by one to decide which to actually load.
I think your only option is to schedule a job yourself and load the files directly into BigQuery, for example using Cloud Composer or a little Cloud Run service that can be invoked by Cloud Scheduler.
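A minimal sketch of that direct-load approach with the google-cloud-bigquery and google-cloud-storage clients, batching the URIs so each load job stays under the 10,000-URI limit. The bucket, table and file format are placeholders/assumptions, and the script could run in Cloud Composer or a small Cloud Run service triggered by Cloud Scheduler:

from google.cloud import bigquery, storage

# Placeholders: adjust to your project.
BUCKET = "my-bucket"
PREFIX = "exports/"                      # folder holding the ~54,000 files
TABLE = "my_project.my_dataset.my_table"
BATCH_SIZE = 10_000                      # stay under the per-job URI limit

bq = bigquery.Client()
gcs = storage.Client()

# List the objects ourselves instead of relying on a single wildcard.
uris = [f"gs://{BUCKET}/{blob.name}" for blob in gcs.list_blobs(BUCKET, prefix=PREFIX)]

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,  # assumption: JSON files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

for i in range(0, len(uris), BATCH_SIZE):
    batch = uris[i : i + BATCH_SIZE]
    load_job = bq.load_table_from_uri(batch, TABLE, job_config=job_config)
    load_job.result()   # wait, so the jobs run one at a time
    print(f"loaded {len(batch)} files ({i + len(batch)}/{len(uris)})")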
The error message Transfer Run limits exceeded, as mentioned before, is related to a known limit for load jobs in BigQuery. Unfortunately this is a hard limit and cannot be changed. There is an ongoing feature request to increase it, but for now there is no ETA for it to be implemented.
The main recommendation for this issue is to split the single operation into multiple processes that send data in requests that don't exceed the limit. With this we cover the main question: "Why do I see this error message and how do I avoid it?"
It is natural to ask next, "How can I automate or perform these actions more easily?" I can think of involving more products:
Dataflow, which will help you process the data that is added to BigQuery; this is where you can send multiple requests.
Pub/Sub, which will help you listen to events and automate when the processing starts.
Please take a look at this suggested implementation, where the aforementioned scenario is described in more detail.
Hope this is helpful! :)

"Android Device Verification" Service quota usage

I'm using the Android Device Verification service (SafetyNet's Attestation API) to verify whether a request is sent from the same app that I built.
We have a quota limit of 10,000 (which can be increased) on the number of requests we can make using SafetyNet's Attestation API.
Now, I want to know if my limit is breached so that I can stop using that API.
For that I was looking into Stackdriver alerting, but I couldn't find the Android Device Verification service in it (even though I was able to find it under Quotas).
You can monitor Safetynet Attestations in Stackdriver by specifying these filters and settings:
Resource Type: Consumed API
Metric: Request count (type the search term "serviceruntime.googleapis.com/api/request_count" to find the correct metric quickly)
Add Filter service = androidcheck.googleapis.com
Use aggregator "sum" to get the total count of requests.
You can set advanced aggregation options to aggregate at the daily level and compare with your quota. This can be done by setting Aligner: "sum" and Alignment period: "1440m". This gives daily sums of requests for the chart (1440m = 24h x 60m = the number of minutes per day).
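If you prefer to check the number programmatically, the same query can be expressed against the Monitoring API. A minimal sketch with the google-cloud-monitoring client, where the project ID is a placeholder and the filter mirrors the chart settings above:

import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"   # placeholder

client = monitoring_v3.MetricServiceClient()

# Same filter as the chart: request count for the androidcheck.googleapis.com consumed API.
metric_filter = (
    'metric.type = "serviceruntime.googleapis.com/api/request_count" '
    'AND resource.type = "consumed_api" '
    'AND resource.labels.service = "androidcheck.googleapis.com"'
)

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 7 * 86400}, "end_time": {"seconds": now}}
)
# The 1440m alignment from the answer is 86400 seconds, summed per day.
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 86400},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
        "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": metric_filter,
        "interval": interval,
        "aggregation": aggregation,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    for point in series.points:
        print(point.interval.end_time, point.value.int64_value)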

Doubts using Amazon S3 monthly calculator

I'm using Amazon S3 to store videos and some audio files (average size 25 MB each), and users of my web and Android apps (so far) can access them with no problem. But I want to know how much I'll pay once I exceed the S3 free tier, so I checked the S3 monthly calculator.
I saw that there are 5 fields:
Storage: I put 3 GB because right now there are 130 files (videos and audio)
PUT/COPY/POST/LIST Requests: I put 15 because I'll manually upload around 10-15 files each month
GET/SELECT and Other Requests: I put 10,000 because a projection tells me that users will watch/listen to those files around 10,000 times monthly
Data Returned by S3 Select: I put 250 GB (10,000 x 25 MB)
Data Scanned by S3 Select: I don't know what to put because I don't need Amazon to scan or analyze those files.
Am I using the calculator properly?
What do I need to put in "Data Scanned by S3 Select"?
Can I just put zero?
For audio and video, you can definitely specify 0 for S3 Select -- both data scanned and data returned.
S3 Select is an optional feature that only works with certain types of text files -- like CSV and JSON -- where you make specific requests for S3 to scan through the files and return matching values, rather than you downloading the entire file and filtering it yourself.
This would not be used with audio or video files.
Also, don't overlook "Data transfer out." In addition to the "get" requests, you're billed for bandwidth when files are downloaded, so this needs to show the total size of all the downloads. This line item is data downloaded from S3 via the Internet.
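As a rough sketch of that arithmetic (the unit prices below are placeholders, so replace them with the current figures for your region from the S3 pricing page before relying on the result):

# Back-of-the-envelope monthly estimate for the numbers in the question.
avg_file_mb = 25
monthly_views = 10_000
storage_gb = 3

data_out_gb = monthly_views * avg_file_mb / 1024   # roughly 244 GB served per month

# Assumed placeholder rates; check the current S3 pricing for your region.
price_per_gb_out = 0.09        # $/GB for internet data transfer out
price_per_gb_storage = 0.023   # $/GB-month for S3 Standard storage
price_per_1k_get = 0.0004      # $ per 1,000 GET requests

estimate = (
    data_out_gb * price_per_gb_out
    + storage_gb * price_per_gb_storage
    + (monthly_views / 1000) * price_per_1k_get
)
print(f"data out: {data_out_gb:.0f} GB, estimated monthly bill: ${estimate:.2f}")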