How to render an AE project on Google Cloud Platform

I've got a 2012 MacBook Pro that has been holding strong for most tasks to this day, though rendering video appears to be its breaking point.
I read that it is possible to render an After Effects project on Google Cloud Platform; however, I can't find any tutorials on how to do it.
I'm not really looking to purchase a $3,000 rendering station right now, which is why I am pursuing the Google Cloud option under the assumption that it would be cheaper, but please correct me if there is a better alternative.
The video I want to render is about 30 minutes long at 30 fps.
Any help is appreciated.
Thanks.

The easiest way is to use a Windows Compute Engine instance.
Depending on the video quality you want, it may be a good idea to use a GPU instance.
Google has a tutorial covering this setup.
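If it helps, a GPU-backed Windows VM can be created from the gcloud CLI along these lines. This is only a sketch: the instance name, zone, machine type, GPU model, and disk size below are example choices, and you need GPU quota in the chosen zone.

```bash
# Example only: one Windows Server VM with a single NVIDIA T4 GPU.
# GPU instances must be created with --maintenance-policy=TERMINATE.
gcloud compute instances create ae-render-1 \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=windows-2019 \
  --image-project=windows-cloud \
  --boot-disk-size=200GB
```

From there you would RDP in, install the NVIDIA driver and After Effects, render, copy the output somewhere durable (for example a Cloud Storage bucket), and then delete the instance so you only pay for the hours you actually used.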

Related

Google API Speeds Slow in Cloud Run / Functions?

Bottom Line: Cloud Run and Cloud Functions seem to have bizarrely limited bandwidth to the Google Drive API endpoints. Looking for advice on how to work around it or, ideally, for @Google support to fix the underlying issue(s), as I will not be the only one with this use case.
Background: I have what I think is a really simple use case. We're trying to automate things so that users on a private-domain Google Drive can take existing audio recordings, send them off to the Speech API to generate a transcript on an ad hoc basis, and have the transcript dropped back into the same Drive folder with an email notification to the submitter. Easy, right? The only hard part is that the Speech API will only read from Google Cloud Storage, so the 'hard part' should just be moving the file over. 'Hard' doesn't really cover it...
Problem: Writing in Node.js and using the latest versions of the official modules for Drive and GCS, the file copying was going extremely slowly. When we broke things down, it became apparent that the GCS speed was acceptable (mostly; honestly it didn't get a robust test, but it was fast enough in limited testing); it was the Drive ingress that was causing the real problem. Even the sample Google Drive download app from the repo was as slow as could be. Thinking the issue might be either my code or the library, I ran the same thing from the Cloud Console, and it was fast as lightning. Same with GCE. Same locally. But in Cloud Functions or Cloud Run, it's like molasses.
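For context, the copy step itself is essentially just streaming the Drive download into a GCS write, roughly like the simplified sketch below (using the googleapis and @google-cloud/storage Node.js clients with default credentials; copyDriveFileToGcs is only an illustrative name):

```js
// Simplified sketch of the Drive -> GCS copy step.
// Assumes default application credentials with Drive read access.
const { google } = require('googleapis');
const { Storage } = require('@google-cloud/storage');

async function copyDriveFileToGcs(fileId, bucketName, objectName) {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });
  const drive = google.drive({ version: 'v3', auth });
  const storage = new Storage();

  // Download the Drive file as a stream and pipe it straight into GCS.
  const res = await drive.files.get(
    { fileId, alt: 'media' },
    { responseType: 'stream' }
  );
  await new Promise((resolve, reject) => {
    res.data
      .pipe(storage.bucket(bucketName).file(objectName).createWriteStream())
      .on('error', reject)
      .on('finish', resolve);
  });
  return `gs://${bucketName}/${objectName}`;
}
```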
Request:
Has anyone in the community run into this or a like issue and found a workaround?
@Google: Any chance that whatever the underlying performance bottleneck is, you can fix it? This is a quintessentially 'serverless' use case, and it's hard to believe that the folks who've been doing this the longest can't crack it.
Thank you all in advance!
Updated 1/4/19: GCS is also slow following more robust testing. The base image also makes no difference (tried nodejs10-alpine, nodejs12-slim, and nodejs12-alpine without impact), and memory limits likewise do not change the results locally or on GCP (256m works fine locally; 2Gi fails in GCP).
Google Issue at: https://issuetracker.google.com/147139116
Self-inflicted wound. The Google-provided sample code tries to be asynchronous and do work in the background. Cloud Run and Cloud Functions do not support that model (for now at least): once the response is sent, anything still running in the background gets starved. Move to promise-chaining, so everything completes before the response goes out, and all of a sudden it works like it should, so long as the CPU keeps the attention it needs. This limits what we can do with Cloud Run / Cloud Functions, but hopefully that too will evolve.
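In concrete terms, the fix is to keep all of the work inside the request/response cycle instead of letting it run after the response is sent, since these platforms can throttle the CPU once the handler returns. A hedged sketch of the two patterns (Express-style; BUCKET and copyDriveFileToGcs are placeholders along the lines of the sketch earlier in this thread):

```js
// Sketch only: BUCKET and copyDriveFileToGcs are placeholders.
const express = require('express');
const app = express();
app.use(express.json());
const BUCKET = process.env.BUCKET;

// Broken pattern: the copy is kicked off but not awaited, the response goes
// out immediately, and the "background" work is then starved of CPU.
app.post('/transcribe-fire-and-forget', (req, res) => {
  copyDriveFileToGcs(req.body.fileId, BUCKET, req.body.name); // not awaited
  res.status(202).send('started');
});

// Working pattern: chain/await everything, then respond, so the instance
// keeps the CPU for the full duration of the work.
app.post('/transcribe', async (req, res) => {
  try {
    const uri = await copyDriveFileToGcs(req.body.fileId, BUCKET, req.body.name);
    res.status(200).send(`copied to ${uri}`);
  } catch (err) {
    res.status(500).send(err.message);
  }
});

app.listen(process.env.PORT || 8080);
```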

Google Cloud Vision API online vs offline pricing

I'm in need of a plug-and-play text recognition system. After trying some solutions such as Tesseract OCR, Google's Vision API seemed to produce the best results for me.
However, I have never used any of their cloud APIs before, and I've noticed it is able to work offline. How would billing work for this? As I understand it, the online version charges for every 1,000 images; wouldn't the offline library circumvent this? What is the quality difference between online and offline?
Both online and offline charge based on the features used. Here is the pricing chart: https://cloud.google.com/vision/pricing
Quality should be similar for online and offline. You could run a small experiment with your own files to verify this.
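For the plug-and-play side, the online text detection call is only a few lines with the Node.js client library. A minimal sketch, assuming @google-cloud/vision and default application credentials (sample.png is a placeholder path):

```js
// Minimal sketch of Vision API text detection from Node.js.
const vision = require('@google-cloud/vision');

async function ocr(imagePath) {
  const client = new vision.ImageAnnotatorClient();
  // TEXT_DETECTION is one of the per-feature items on the pricing chart.
  const [result] = await client.textDetection(imagePath);
  const annotations = result.textAnnotations || [];
  // The first annotation holds the full detected text block.
  return annotations.length ? annotations[0].description : '';
}

ocr('sample.png').then(console.log).catch(console.error);
```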

Can we start making a real production app using the sneak peek GDK?

At this moment we have a sneak peek GDK, and there are rumors that the final GDK will come by this summer along with a public Google Glass device.
Now, we plan to build our Google Glass app on the GDK, and at the moment we can only use the sneak peek GDK. So we basically plan to keep building the app as new GDK SDKs appear, so that this summer we can immediately publish our GDK apps once Google starts accepting them.
How safe is it to start building with the existing GDK? Can anyone confirm it will not change drastically, so we don't end up in an ever-changing loop?
I see that the Glass folks are watching this tag, so I hope one of them can give us some direction.
[Disclaimer: I am another Glass Explorer and not a Google employee ... however I have experience in several large corporations involved in software.]
I would expect to have to make minor and perhaps major adjustments in any Glassware application development that we do. In fact, as we find anomalies or other inconsistencies, I would hope that our feedback and requests would actually help shape the initial non-beta release of the GDK. If we get into a "continually updating" cycle as the GDK evolves, so be it.
Just my opinion and expectation. We will focus on modularizing and hiding important elements so changes to match a new GDK can be contained.

Can I Develop Google Glass Apps Using the Mirror API Without the Hardware?

I went through the Google docs for the Mirror API a few months ago, and they mentioned that only developers with the actual Glass hardware would be able to test apps through the online sandbox. Google took the Mirror API out of its "testing" phase last week. Does this mean you no longer need an actual pair of Glass to test out apps, or do the same restrictions apply?
That is partly correct. Anyone can add the Mirror API to a project using the API Console or Cloud Console. See also Issue #2, which was opened about this and recently closed by a project member.
"Testing" the apps, however, is still a bit tricky. Aside from reading the timeline back using the same Client ID/Secret, you have no way to know that the card has been sent. And you have no way to know what the card looks like.
Using Glass is still the only real way to fully understand and appreciate the UX.
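For what it's worth, "reading the timeline" was just a REST call against the Mirror API (which has since been retired). A rough sketch in Node.js, assuming you already have an OAuth 2.0 access token with the Glass timeline scope:

```js
// Rough sketch only: the Mirror API is retired, and obtaining
// MIRROR_ACCESS_TOKEN (an OAuth 2.0 token) is assumed to be handled elsewhere.
// Uses the fetch that is global in Node 18+.
const ACCESS_TOKEN = process.env.MIRROR_ACCESS_TOKEN;

async function listTimeline() {
  // Listing the authenticated user's timeline items was the only way to
  // confirm that a card had actually been delivered.
  const res = await fetch('https://www.googleapis.com/mirror/v1/timeline', {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Mirror API error: ${res.status}`);
  const body = await res.json();
  for (const item of body.items || []) {
    console.log(item.id, item.text);
  }
}

listTimeline().catch(console.error);
```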

Statistics based marketing campaign measurement tools

Currently using SAS as measurement engine and Business Objects as display layer. Looking to develop a new, faster, slicker solution. Has anyone developed or purchased a campaign measurement reporting system? This solution should measure everything from email stats, web stats, customer activity, lift, ROI, etc.
OK... I'm researching and finding nada. We are working with a team from India, and they want to rewrite everything from scratch. Any solutions out there at all?
If you are already using SAS, have you looked at their Marketing Automation software?
Update:
Just saw a press release from SAS about a new "Software as a Service" Campaign Management solution. Might be worth checking out for this.
When I was a consultant, we either rolled our own or used SAS (or a combination of the two).
Another vote for roll-your-own; it's mad that this area is so underserved. That said, the expense of building your own solution from the ground up and the hassle of managing a remote team make me think you may get further by integrating some existing tools.
Google Analytics has an API for web usage, there are many web log tools, and you then need to bolt in the customer figures from your end of things.
I really doubt you could do much better than SAS in this area, especially if you pick up some of their specialist packages.
You could have a look at R, which is a pretty slick open-source statistics package. Unfortunately it's not used very much for marketing; most of the examples and freely available code are geared towards biochemistry, genetics, etc.