Picture Manipulation with AWS

I've been a mobile developer for a few years, and I'm looking to expand to cloud integration with my apps. I'm looking into AWS solutions to fill this need. I don't know a ton about servers or cloud capabilities, so I'm trying to get pointed in the right direction, and maybe be introduced to some good resources.
My goal is to be able to upload some images to AWS and manipulate these images in the cloud. I'm sure that I'll need S3 to store my images, but is an EC2 instance the correct thing to use to perform the manipulation? This is where my lack of knowledge of servers is holding me back.
I think the best answer I could get would be a comment on whether my needs from AWS are what I listed above, and a pointer towards articles or tutorials on how to get things up and running.
Thanks much for the help!

What I ended up doing was using AWS Lambda to accomplish what I needed. Running a Node.js-based Lambda function with ffmpeg-like manipulation on the images/media I was uploading worked out quite well.
Side note: the processing I was doing was fairly lightweight, so it worked well with Lambda. If things scale up any further, I might consider switching the processing to an EC2 instance.
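For the curious, here's roughly the shape of it. My actual function was Node.js with ffmpeg-style processing; this is just a minimal sketch of the same S3-triggered pattern in Java, doing a naive resize instead, with a made-up "thumbnails/" output prefix:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.S3Event;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;

    // Fired by an S3 "object created" event; writes a resized copy of the
    // image back to the same bucket under a hypothetical "thumbnails/" prefix.
    public class ResizeHandler implements RequestHandler<S3Event, String> {
        private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        @Override
        public String handleRequest(S3Event event, Context context) {
            String bucket = event.getRecords().get(0).getS3().getBucket().getName();
            String key = event.getRecords().get(0).getS3().getObject().getKey();
            try (InputStream in = s3.getObject(bucket, key).getObjectContent()) {
                BufferedImage src = ImageIO.read(in);
                BufferedImage dst = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.drawImage(src, 0, 0, 200, 200, null); // naive resize, ignores aspect ratio
                g.dispose();

                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(dst, "png", out);
                byte[] bytes = out.toByteArray();

                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentLength(bytes.length);
                meta.setContentType("image/png");
                s3.putObject(bucket, "thumbnails/" + key,
                        new ByteArrayInputStream(bytes), meta);
                return "resized " + key;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

Whatever the processing is, the flow stays the same: the S3 upload event fires the function, the function downloads the object, processes it in memory, and writes the result back to S3.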

Related

Need guidance with AWS website backend

I have a website and I am trying to have its web form connect to a MySQL database running on Amazon's RDS to post and retrieve information. I'm an absolute beginner with code but have managed to get myself this far (creative3c.org). I've had coworkers and friends offer some help, but their knowledge doesn't extend to everything I was told I would need (AWS API Gateway, Lambda, anything else?).
I've been pulling my hair out for a week looking through tutorials, articles, and step-by-step guides, but so many presume extensive knowledge on the reader's part, or they all talk about what I don't need (like phpMyAdmin, and PHP won't work for S3 or Lambda).
Am I jumping too far into the really complex stuff? The person who told me to go the AWS route is certified and brilliant with code, but unfortunately they're fickle, busy, and not a good teacher when it comes to distilling their knowledge. I don't know if I should have gone with something simpler. If you view the website, you'll probably understand how basic it is.
I'm stuck and really stressed about finishing this website, and I'd appreciate any help getting pointed in the right direction! I feel I'm so close! I'm really good at scaling up from a small example of exactly what I need; I just need that initial example!
I'm pleased to hear that you've learnt so quickly. All the terminology floating around can be very confusing. Just remember: AWS is just the platform you deploy to. It can be as simple or as complicated as you want it to be.
I'm not an AWS expert, but here's my bird's-eye view.
You could build an entire running website on your laptop, then simply deploy it wholesale to a LAMP server that you've created in AWS. Now you have a web application running in AWS, without using any of the AWS jargon (beanstalks, lambdas...).
That's when you would follow this link to provision your server: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
Or you could put the database piece of your application into RDS (a database in the cloud), put the web application piece on a separate web server, and then configure those two servers to talk to each other.
You still have one website, but it's now running on two separate machines.
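To make "configure those two servers to talk to each other" concrete: it mostly just means pointing your app's database connection at the RDS endpoint instead of localhost. A minimal sketch in Java/JDBC, where the endpoint, database name, and credentials are all made up (in PHP it's the same idea with mysqli or PDO):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RdsConnect {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; yours is shown on the RDS console.
            String url = "jdbc:mysql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "admin", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT NOW()")) {
                while (rs.next()) {
                    System.out.println("Connected, server time: " + rs.getString(1));
                }
            }
        }
    }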
Or (I'm a bit hazy on this) for the web app you could instead deploy bits of your web code to Lambda and stick them all together.
In all cases you can apply Elastic Beanstalk to automatically grow and shrink the fleet of computers running your site.
Like I said, it can be as simple or as complicated as you want it to be, and you don't need it to be complicated, so the BlueHost option is fine.

AWS solution for a big CCTV system

I'm researching AWS for a specific situation I need to solve. I have a customer we provide a big CCTV solution for, about 800 cameras. We are now trying to migrate part of this infrastructure to a cloud solution on AWS.
It is a big step, replacing all the on-site storage they have with a cloud-based solution.
I've been researching the best way to take care of this, and found that the best option is probably to develop a solution that works with AWS Storage Gateway.
The question is: does anyone know the most efficient way to deal with heavy video storage on AWS? What is the recommended way to go?
NOTE: I hope this question is not going to be closed as too broad or opinion-based. I know it's on the edge of that.
If you're looking for very durable storage, S3 is a very good option. It offers eleven nines of durability, and you can set up lifecycle rules on it too.
Many companies, including A Cloud Guru, host all their videos on S3. It can even be connected to third-party apps, both inside and outside AWS.
Don't forget that S3 has read-after-write consistency too. And it's not only good for plain storage: you can also set up S3 to serve streaming video efficiently.
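To make the lifecycle rules concrete, here's a minimal sketch with the AWS SDK for Java (v1). The bucket name, prefix, and retention periods are all made up: it transitions footage to Glacier after 30 days and deletes it after a year.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
    import com.amazonaws.services.s3.model.StorageClass;
    import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
    import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

    import java.util.Arrays;

    public class CctvLifecycle {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Hypothetical rule: footage under "footage/" moves to Glacier
            // after 30 days and is deleted after a year.
            BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                    .withId("archive-old-footage")
                    .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("footage/")))
                    .withTransitions(Arrays.asList(new BucketLifecycleConfiguration.Transition()
                            .withDays(30)
                            .withStorageClass(StorageClass.Glacier)))
                    .withExpirationInDays(365)
                    .withStatus(BucketLifecycleConfiguration.ENABLED);

            s3.setBucketLifecycleConfiguration("my-cctv-bucket",
                    new BucketLifecycleConfiguration(Arrays.asList(rule)));
        }
    }

With 800 cameras, rules like this are what keep the storage bill under control: hot footage stays in S3 Standard, old footage ages out to cheaper tiers automatically.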

Getting data from a locally running Java app to a Google Cloud app and back

I wanted to dive into the world of distributed systems, cloud computing, IoT, etc., and I've got to be honest: I imagined everything being a little more intuitive than it turned out to be.
I had a tiny test architecture in mind that I'd like to set up with Google Cloud and its services, but I'm kind of stuck since I can't get my head around some of the concepts.
What I basically wanted to do (as a first step) is write a simple Java application that runs locally on my computer. This application should just generate random numbers and somehow send those numbers to Google Cloud. On the cloud side, I wanted another Java application that would manipulate those numbers in some way (what exactly doesn't actually matter). Afterwards, the output should of course somehow get back to me, and at the moment I don't even care how exactly. It could come back to my local app (with some kind of listener; would that even be possible?), but it could also simply be stored somewhere on Google Cloud, or maybe uploaded to my Google Drive.
I guess you've already noticed that, at some points, I don't even know exactly what I want, since I'm not sure what is possible and what isn't.
Could you provide me some help to get this set up?
The most important questions for me right now are:
Do I need to use a pub/sub system that my generated numbers are sent to, and which then forwards them to the cloud app that transforms my data?
How do I get my data from the local app to the cloud services?
Would my data-transforming app run on Google Dataflow?
Above I wrote "as a first step", because later I would also like to send config files (for example in JSON or XML format) to the cloud, and the cloud application should transform those config files. If I get the first scenario running, then I guess this would also be no problem, right?
Those are just a few of the questions on my mind at the moment, but I guess the most important ones.
Any help would be much appreciated. Sorry if the questions are not very precise, but I really need some kind of pointer in the right direction.
Thank you in advance!
I think it would be good to read up on some of the technologies you mention here:
Google Cloud Pub/Sub: Pub/Sub enables you to publish messages to a topic and consume them elsewhere in the (Google) cloud. You can see some different examples of publishers and consumers in the link. In your case you could, for example, write a Java application that writes random numbers to a Pub/Sub topic, where they are retained for up to 7 days waiting to be consumed by another component (for example, Google Cloud Dataflow). To get started developing, you can find the SDKs here (there is a Java SDK).
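As a minimal sketch of such a publisher with the google-cloud-pubsub Java client (project and topic names are made up; the topic must be created first, in the Cloud Console or via the SDK):

    import com.google.cloud.pubsub.v1.Publisher;
    import com.google.protobuf.ByteString;
    import com.google.pubsub.v1.PubsubMessage;
    import com.google.pubsub.v1.TopicName;

    import java.util.Random;
    import java.util.concurrent.TimeUnit;

    public class RandomNumberPublisher {
        public static void main(String[] args) throws Exception {
            // Hypothetical project and topic.
            Publisher publisher = Publisher.newBuilder(
                    TopicName.of("my-project", "random-numbers")).build();
            Random random = new Random();
            try {
                for (int i = 0; i < 100; i++) {
                    PubsubMessage message = PubsubMessage.newBuilder()
                            .setData(ByteString.copyFromUtf8(
                                    Integer.toString(random.nextInt(1000))))
                            .build();
                    publisher.publish(message); // asynchronous; returns an ApiFuture
                }
            } finally {
                publisher.shutdown();
                publisher.awaitTermination(1, TimeUnit.MINUTES);
            }
        }
    }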
Google Cloud Dataflow is a managed service for running Apache Beam pipelines to process your data at scale. You can learn about the different concepts here and get started designing your pipeline here. I suggest taking a look at some examples first, though, which will make it easier to grasp what is actually going on. Dataflow has a Pub/Sub connector, so in your pipeline you will be able to read from the topic you created before. In Dataflow you can, for example, multiply all your random numbers and write them to a certain sink (for example Google Cloud Storage, or even BigQuery or Pub/Sub again).
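A sketch of what such a pipeline could look like with the Beam Java SDK; the topic, bucket, and the "doubling" transform are all made up for illustration:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.TypeDescriptors;
    import org.joda.time.Duration;

    public class DoubleNumbers {
        public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            p.apply("ReadNumbers", PubsubIO.readStrings()
                            .fromTopic("projects/my-project/topics/random-numbers"))
             // Pub/Sub is an unbounded source, so window before writing files.
             .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
             .apply("DoubleThem", MapElements.into(TypeDescriptors.strings())
                            .via((String s) -> Integer.toString(Integer.parseInt(s) * 2)))
             .apply("WriteToGcs", TextIO.write()
                            .to("gs://my-bucket/output/doubled")
                            .withWindowedWrites()
                            .withNumShards(1));

            p.run();
        }
    }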
Google Cloud Storage: object storage in the cloud where you can put files, for example the output of your Dataflow pipeline. You will be able to download the files manually using the Cloud Console UI, or you can use one of the SDKs to download the output programmatically.
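For example, a tiny sketch with the google-cloud-storage Java client (bucket and object names are hypothetical, matching the pipeline sketch above):

    import com.google.cloud.storage.Blob;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    import java.nio.file.Paths;

    public class DownloadOutput {
        public static void main(String[] args) {
            Storage storage = StorageOptions.getDefaultInstance().getService();
            // Hypothetical object name written by the pipeline sketch above.
            Blob blob = storage.get("my-bucket", "output/doubled-00000-of-00001");
            blob.downloadTo(Paths.get("doubled.txt"));
        }
    }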
Hope this gives you an overview and some pointers to start. Whenever you are ready and have a more concrete use case in mind, you can start looking at some more components.

Is it possible to increase the playback rate (speed up the video) when using Amazon Elastic Transcoder?

I'm looking at speeding up a video before storing it in S3. I haven't found anything in the AWS docs about this.
Is this something that can be done with AWS Elastic Transcoder?
Thanks!
Sébastien
It's not possible. Yet.
Though you can try writing to their forum, asking for this feature...
It sounds like that's the only way to get this kind of functionality exposed in the API.
Extracted from the FAQs:
Q: Why is the codec parameter that I want to change not exposed by the API?
In designing Amazon Elastic Transcoder, we wanted to create a service that was simple to use. Therefore, we expose the most frequently used codec parameters. If there is a parameter that you require, please let us know through our forum.
Aha, apparently someone already did :}

Amazon EC2 scaling and upload temporary folder

I have a PHP-based application on one Amazon instance for uploading and transcoding audio files. The application first uploads the file, then transcodes it, and finally puts it in an S3 bucket. At the moment the application shows the progress of uploading and transcoding via repeated AJAX requests that monitor the file size in a temporary folder.
I keep wondering what happens if users rush to my service tomorrow and I need to scale it in AWS by any means possible.
A: What will happen to my upload and transcoding technique?
B: If I add more instances, does that mean I'll have different files in different temporary conversion folders in different physical places?
C: If I want to get the file size via AJAX from http://www.example.com/filesize until the process finishes, do I need the real address of each EC2 instance (I mean IP/DNS), or all of the instances' folders (or folder)?
D: When we scale, what happens to the temporary folder? Is it correct that all instances, apart from their own LAMP stacks, point to one root folder on a main instance?
I have some basic knowledge of scaling with other hosting techniques, but with Amazon these questions are on my mind.
Thanks for any advice.
It is difficult to answer your questions without knowing considerably more about your application architecture, but given that you're using temporary files, here's a guess:
A: Your ability to scale depends entirely on your architecture, and of course on having a wallet deep enough to pay.
B: Yes. If you're generating temporary files on individual machines, they won't be stored in a shared place the way you currently describe it.
C: Yes. You need some way to know where the files are stored. You might be able to get around this with an ELB stickiness policy (i.e. traffic through the ELB gets routed to the same instance), but those are kind of a pain and won't necessarily solve your problem.
D: Not quite sure what the question is here.
As it sounds like you're in the early days of your application, give this tutorial and this tutorial a peek. The first one describes a thumbnailing service built on Amazon SQS, the second a video processing one. They'll help you design with best AWS practices in mind, and help you avoid many of the issues you're worried about now.
One way you could get around scaling and session stickiness is to have the transcoding process update a database with its current progress. Any returning user then checks the database to see the progress of their upload. There's no need to keep track of where the transcoding is taking place, since the progress is stored in a single place.
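As a sketch of that idea: your app is PHP, but in Java/JDBC the shape would be something like the following, with a made-up jobs table and connection details. Each instance writes progress to the same shared database:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ProgressTracker {
        // Hypothetical shared database; every instance points at the same one.
        private static final String URL =
                "jdbc:mysql://mydb.example.us-east-1.rds.amazonaws.com:3306/transcoder";

        // Called periodically by whichever instance is doing the transcoding.
        public static void updateProgress(String jobId, int percent) throws Exception {
            try (Connection conn = DriverManager.getConnection(URL, "app", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE jobs SET progress = ? WHERE job_id = ?")) {
                ps.setInt(1, percent);
                ps.setString(2, jobId);
                ps.executeUpdate();
            }
        }
    }

The AJAX endpoint then just SELECTs the progress by job ID, so it doesn't matter which instance happens to serve the request.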
However, like Christopher said, we don't really know anything about your application, so any advice we give is really looking from the outside in, and we don't have a good idea of what would be easiest for you to do. This seems like a pretty simple solution, but I could be missing something, because I don't know anything about your application or architecture.