I've encountered several fairly serious problems in my efforts to develop a speech-to-text application. Some of them (I hope) may just be my lack of experience/common-sense/in-depth-reading/etc. Here's the list:
Long (>60 sec) transcriptions -- these force me to first upload the sound file to a GS bucket. Trouble is:
a. I have to run "gcloud auth login" on each machine I need to run on, and I have well over 50 machines. This appears to be a purely manual operation: you have to copy a long URL into your browser, hit enter, click on the right account, accept the permissions, then hand-copy and paste the key presented back into the gcloud prompt, and hit enter there. While the login does appear to be persistent to some degree, it is subject to one interesting constraint: only 51 machines (maybe 50, I got tired of counting) are allowed, and the earliest logged-in machine is logged out to make room for the new login. This was very odious. All this hassle is purely for using the buckets; a shorter transcription will not use GS and completes without complaint. Really! Is there no better way? Do we really have to use gcloud auth login, manually, on every machine? And is there really a cap on the number of servers we can use with Google Storage?
Another Google Storage issue: transcription requires the bucket to be "public". We are pretty worried about security and the privacy of our customers, whose recordings will sit in that bucket, even if only briefly.
The transcription service offers transcriptions in multiple languages, but the "phone_call" model is fixed to en-US and seems to ignore the language setting. If I change the request to es-US and supply a Spanish recording, it behaves exactly the same. (Everything works fine with the "command_and_search" model.) This feature seems to still be evolving; any idea when/if they will carry the multi-language support over to the phone_call model?
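For reference, the flow I'm scripting looks roughly like this with the Python client libraries (project, bucket, and file names are placeholders; by default the storage client picks up whatever credentials gcloud auth login left behind on the machine, though a service-account JSON key would presumably avoid the interactive login). Swapping language_code seems to make no difference once the model is set to phone_call:

```python
from google.cloud import speech, storage

# Placeholder names -- substitute your own project, bucket, and recording.
# The storage client uses whatever credentials "gcloud auth login" left behind;
# storage.Client.from_service_account_json("key.json") would skip the interactive login.
storage_client = storage.Client(project="my-project")
bucket = storage_client.bucket("my-transcription-bucket")
blob = bucket.blob("recordings/call-0001.wav")
blob.upload_from_filename("call-0001.wav")  # required for recordings longer than 60 s

speech_client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # placeholder: match your recording
    sample_rate_hertz=8000,
    language_code="es-US",   # seems to be ignored by the phone_call model
    model="phone_call",      # "command_and_search" honors the language setting
)
audio = speech.RecognitionAudio(uri=f"gs://{bucket.name}/{blob.name}")

# Long recordings have to go through the asynchronous API with a GCS URI.
operation = speech_client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)
for result in response.results:
    print(result.alternatives[0].transcript)
```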
If anyone can help, Oh Wise Ones, please impart of thy wisdom!
murf
I want to develop an app for a friend's small business that will store/serve media files. However, I'm afraid of a piece of media going viral, or of getting DDoS'd. The bill could climb quite easily with a service like S3, and I really want to avoid surprise expenses like that. Ideally I'd like some kind of max-bandwidth limit.
Now, a solution for this on S3 has already been posted here
But it does require quite a few steps. So I'm wondering if there is a cloud storage solution that makes this simpler, i.e. where I don't need to create a custom microservice. I've talked to DigitalOcean support and they don't support this either.
So in the interest of saving time, and perhaps for anyone else who finds themselves in a similar dilemma, I want to ask this question here; I hope that's okay.
Thanks!
Not an out-of-the-box solution, but you could:
Keep the content private
When rendering a web page that contains the file or links to the file, have your back-end generate an Amazon S3 pre-signed URL to grant time-limited access to the object
The back-end could keep track of the "popularity" of the file and, if it exceeds a certain rate (e.g. 1,000 requests over 15 minutes), it could instead point to a small file with a "please try later" message
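A rough sketch of that idea with boto3 (the bucket name, fallback object, and rate-limit numbers are placeholders, and the in-memory counter stands in for whatever store your back-end actually uses, e.g. Redis or DynamoDB):

```python
import time
from collections import deque

import boto3

s3 = boto3.client("s3")
BUCKET = "my-private-media-bucket"   # placeholder
WINDOW_SECONDS = 15 * 60
MAX_REQUESTS = 1000

# key -> timestamps of recent requests (in-memory; use Redis/DynamoDB in practice)
recent_requests = {}

def media_url(key):
    """Return a short-lived pre-signed URL, or a fallback if the object is too hot."""
    now = time.time()
    hits = recent_requests.setdefault(key, deque())
    hits.append(now)
    while hits and hits[0] < now - WINDOW_SECONDS:
        hits.popleft()

    if len(hits) > MAX_REQUESTS:
        # Too popular: point at a tiny "please try later" object instead.
        key = "please-try-later.html"

    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # URL is valid for five minutes
    )
```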
I'm working on a project centered around API Change Management. I'm curious as to how AWS informs developers of changes to its APIs. Is it through the document history (https://docs.aws.amazon.com/apigateway/latest/developerguide/history.html)? Or do they send out emails to developers?
Regarding emails, are they sent to all developers using the API (e.g., API Gateway), or just to developers using a particular endpoint who will be affected by the change? And how frequent are the notifications for breaking changes, minor changes, and so on?
Thanks so much for your help!
For non-breaking changes, you can learn about them on the Developer Guide as you pointed out. Some of these changes are also announced on their What's New page (RSS feed). You can also follow the SDK releases which are updated often (e.g. by using the RSS feed for aws-sdk-go releases). I believe that most of the SDKs are using code generation to generate a lot of the API functionality. They push updates to these files in the SDK git repositories (ruby example, go example), but it is not clear if there is another place to find these files. It doesn't seem like they want us to consume these directly (see this developer forum thread from 2015). There's also awsapichanges.info, which appears to be built by AWS themselves.
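As a small illustration, those release feeds can also be consumed programmatically; here is a sketch using feedparser against GitHub's standard releases Atom feed for aws-sdk-go (any of the SDK repositories should work the same way):

```python
import feedparser

# GitHub exposes an Atom feed for every repository's releases page.
FEED_URL = "https://github.com/aws/aws-sdk-go/releases.atom"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Each entry corresponds to one tagged SDK release.
    print(entry.title, entry.link)
```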
AWS very rarely makes breaking changes to their API. Even SimpleDB, which is a very old AWS product, still works.
Having said that, they do make breaking changes from time to time, but they try to announce them well ahead of time. The biggest breaking change that they are trying to complete is probably their attempt to deprecate S3 path-style access. This was first quietly announced in their AWS Developer Forums, which caused a lot of panic, especially since the timeline was incredibly short. Based on the panic, AWS quickly backtracked and revised the plan, more publicly this time.
They have made other breaking changes to S3 as well. For example, S3 buckets must now have DNS-compliant names. This was only recently (March 1, 2018) enforced for new buckets in us-east-1, but in most other regions it was enforced from the start, when those regions became available. Old S3 buckets in us-east-1 may still have names that are not DNS-compliant.
Lambda is removing old runtimes once the version of the programming language stops being maintained (such as Python 2.7). This should be a known expectation for anyone who starts using the service, and there is always a new version that you can migrate to. AWS sends you email reminders as the deadline nears if you still have Lambda functions using the old runtime.
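If you would rather check proactively than wait for those reminder emails, here is a rough sketch with boto3 (the set of deprecated runtimes below is only an example; consult the Lambda runtime deprecation schedule for the real list):

```python
import boto3

# Example only -- check the Lambda runtime deprecation schedule for the real list.
DEPRECATED_RUNTIMES = {"python2.7", "nodejs8.10"}

client = boto3.client("lambda")
paginator = client.get_paginator("list_functions")

for page in paginator.paginate():
    for fn in page["Functions"]:
        # Container-image functions have no "Runtime" key, hence .get()
        if fn.get("Runtime") in DEPRECATED_RUNTIMES:
            print(f"{fn['FunctionName']} still uses {fn['Runtime']}")
```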
Here is a GitHub repository where people try to track breaking changes: https://github.com/SummitRoute/aws_breaking_changes. You can see that the list is not that long.
I have just recently started to work with Google Cloud and I am trying to wrap my head around some of its inner workings, mainly the audit logging part.
What I want to do is get the log activity for when my keys are used for anything, and also for when someone actually logs into the Google Cloud Console (it could be the Key Vault or the Key Ring, too).
I have been using PowerShell to extract these logs using gcloud logging read, and this is where I start to doubt whether I am looking in the right place. I will explain:
I have created new keys and I can see this action in the Activity Panel, and I can already extract it through gcloud logging read resource.type=cloudkms_cryptokey (there could be a typo in the command, since I am writing it off the top of my head, sorry about that!).
Although I have this information, I am rather curious whether this is the correct course of action here. I saw the CreateCryptoKey and SetIamPolicy methods in my logs, all right, but am I going to see all actions related to these keys? From reading the GCloud docs, I get the feeling I am only seeing some of the actions.
As I have said, I am trying to work my way through the GCloud documentation, but it is such an overwhelming amount of information that I am not really finding the answer I am looking for, which is why I thought about turning to this community.
So, to summarize: am I getting all the information related to my keys the way I am doing it right now? And what about the people who have access to the Google Cloud Console page: is there a way to find out who accessed it and which part (the Crypto Keys page or the Crypto Vault page, for example)? That is something I have not understood from the docs either, sadly. Perhaps someone could point me to the proper page to reference for what I am looking for? The Cloud Audit Logging page doesn't feel totally clear to me on this front (and I assume the fault could be mine; these past weeks have been harsh!).
Thanks for anyone that takes some time to answer my question!
Admin activities such as creating a key or setting IAM policy are logged by default.
Data access activities, such as listing Cloud KMS resources (key rings, keys, etc.) or performing cryptographic operations (encryption, decryption, etc.), are not logged by default. You can enable data access logging via the steps at https://cloud.google.com/kms/docs/logging. I'm not sure whether that is the topic you are referring to, or https://cloud.google.com/logging/docs/audit/.
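Once the logs you need are enabled, pulling them programmatically looks roughly like this (a sketch using the Python logging client; the project ID is a placeholder, the payload fields may vary by entry type, and the same filter should work with gcloud logging read):

```python
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project ID

# Admin Activity (and, once enabled, Data Access) audit entries for Cloud KMS keys.
log_filter = (
    'resource.type="cloudkms_cryptokey" '
    'AND logName:"cloudaudit.googleapis.com"'
)

for entry in client.list_entries(filter_=log_filter):
    # Audit entries carry a protoPayload with fields like methodName and resourceName.
    payload = entry.payload or {}
    print(entry.timestamp, payload.get("methodName"), payload.get("resourceName"))
```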
To secure an AWS account, it is good to have a virtual MFA device, such as Google Authenticator.
Usually, you can just take a picture of the QR code and use it on as many devices as you want (as suggested here: https://webapps.stackexchange.com/a/66666/188445; sorry, I couldn't comment on that answer, I don't have the reputation).
However, on AWS it asks for two codes to confirm, which makes me think it is device-specific. Is there any way to set up AWS MFA on two devices, or to use a backup if I lose my phone?
First, I'll be that guy and say: don't back up your MFA key. If you lose your device, just go through the steps of resetting it by contacting support.
While it doesn't necessarily defeat the purpose of increasing the security, and while it's also probably not likely that someone will attempt to steal your key, I don't think you're doing yourself any favors, security-wise.
But that's not what you're asking about.
When you say "on AWS it asks for two codes to confirm, which makes me think it is device-specific," I'm not sure I follow. Yes, it's device-specific, in that you need the specific device that either scanned the QR code or had the key entered into it in order to authenticate via MFA.
But just because there are two fields, it doesn't mean that there are two different QR codes or MFA keys you need - you just need the one they show you.
After you set up your authenticator, you enter the first code you see into the first field, then wait for that one to cycle out and enter the next one into the second field. Asking for two codes just ensures that your authenticator is working correctly. It's no different than other services that use an authenticator for MFA: some only ask for the first code that appears, some ask for two. (Personally I think two is better.)
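To illustrate that only one secret is involved, here is roughly what those two codes look like when generated with the pyotp library (the secret below is a made-up example; AWS shows the real base32 secret next to the QR code):

```python
import time

import pyotp

# Made-up example secret -- AWS displays the real base32 secret next to the QR code.
totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")

# The two consecutive codes AWS asks for are just two ticks of the same secret,
# thirty seconds apart -- there is no second key or second QR code.
print(totp.now())                      # code for the current 30-second window
print(totp.at(int(time.time()) + 30))  # code for the next window
```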
I have no other choice but to adopt iCloud right now. In the near future I would like to build my own cloud service. Is there any problem if the app transfers all the data from iCloud to my own cloud?
Only the data related to my app, of course.
With the user's permission.
Is Apple positive about this?
If you mean "would Apple approve an app for the store that was going to transfer the user's iCloud data to some other online service?", then, as usual, all we can do is try to gauge the odds.
None of Apple's guidelines even hint that apps may not use non-iCloud services.
Neither do they hint that there's any issue with moving data from one service to another, even if one of them is iCloud.
Apple does not look kindly on apps that transfer user data to online storage without the user's knowledge. Assuming you make it clear to users what you're doing, this is probably not an issue, but users should have the chance to opt out of your service.
Based on the information available right now, what you suggest is probably OK, so long as your app makes clear what's happening. It's unwise to try to predict Apple's app-approval actions too closely: they might change their policies tomorrow, or they might decide to reject your app for reasons that had not previously been stated. At the moment, though, switching services like that seems likely to be accepted.