File upload API on EC2 with ELB and S3 - amazon-web-services

I am developing an app server with NodeJS and AWS.
I am setting up the server environment with ELB and EC2 instances.
I am using ELB as the load balancer and have attached several app server EC2 instances to it.
One more EC2 instance is used for MongoDB.
My question is about requests that include a file upload.
I think uploaded files should not live on the app servers (EC2 instances), so I plan to save them in S3 and let the app servers (EC2 instances) access them there.
The rough solution is: an app server accepts the file from the client, moves it to S3, and deletes the local copy.
But that costs some performance and doesn't feel like a clean approach.
Is this the best way, or is there another way to solve it?
I suspect uploading the file directly to S3 would be best.
But the file is uploaded together with other data (for example, a profile upload: name: String, age: Number, profileImage: File).
I need to process that other data on the app server, so the client cannot simply upload everything to S3 directly.
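To make my current approach concrete, here is a rough sketch of what I mean, using multer and the aws-sdk (the bucket name and field names are just placeholders I made up for this example):

```js
// Sketch of the "accept on the app server, then move to S3" approach.
// Assumes Express, multer and the aws-sdk npm packages; bucket/field names are placeholders.
const express = require('express');
const multer = require('multer');
const fs = require('fs');
const AWS = require('aws-sdk');

const app = express();
const upload = multer({ dest: '/tmp/uploads' }); // file lands on the app server first
const s3 = new AWS.S3();

app.post('/profile', upload.single('profileImage'), (req, res) => {
  // The other form fields (name, age) are available for normal processing
  const { name, age } = req.body;

  // Move the temporary file to S3, then delete the local copy
  const stream = fs.createReadStream(req.file.path);
  s3.upload({ Bucket: 'my-app-uploads', Key: req.file.filename, Body: stream }, (err, data) => {
    fs.unlink(req.file.path, () => {});          // clean up the app server either way
    if (err) return res.status(500).send(err.message);
    res.json({ name, age, imageUrl: data.Location });
  });
});
```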
Is there any better idea?
Please save me.
P.S.: Please let me know if anything I wrote is unclear, since I am not a native speaker; I will gladly add more explanation!

You can upload directly to S3 using temporary credentials that allow the end user to write to your bucket.
There is a good article with detailed code for doing exactly what you are trying to do with node.js here.
Answers that refer to external links are frowned upon on SO, so in a nutshell:
include the aws sdk in your application
provide it with appropriate credentials
use those credentials to generate a signed URL with a short lifespan
provide the end user with the signed URL, which they can then use to upload, preferably asynchronously with progress feedback (see the sketch below)
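A minimal sketch of the server-side part with the Node.js aws-sdk (the bucket name, region and expiry are assumptions for illustration):

```js
// Minimal sketch: issue a short-lived pre-signed PUT URL the client can upload to directly.
// Bucket name, region and expiry are assumptions; credentials come from the environment or an IAM role.
const AWS = require('aws-sdk');
const crypto = require('crypto');

const s3 = new AWS.S3({ region: 'us-east-1' });

function getUploadUrl(contentType, callback) {
  const key = crypto.randomUUID();              // server-chosen object key
  const params = {
    Bucket: 'my-app-uploads',
    Key: key,
    Expires: 60,                                // URL is valid for 60 seconds
    ContentType: contentType,
  };
  s3.getSignedUrl('putObject', params, (err, url) => callback(err, { key, url }));
}

// The client then PUTs the file to `url` (e.g. with XMLHttpRequest, showing progress)
// and afterwards tells your app server which `key` it used, along with the other form data.
```

That way the rest of the form data (name, age, ...) still goes through your app server, but the file itself never does.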

Related

Can I use AWS S3 for hosting, and EC2 to process my form submits?

So I am thinking of migrating my website to Amazon S3 since it's super cheap and fast; however, I use PHP and AJAX to submit my contact forms. Would it be possible to host my site on AWS S3 and then send all HTTP POSTs to the EC2 instance?
Yes, this is entirely possible. However, if you're running an EC2 instance anyway and your traffic is not enormous, you might as well serve your static files from the EC2 instance.
It is not possible to host a PHP site on AWS S3; only static content such as images, CSS, or JS can be put there.
For dynamic content you have to make use of an EC2 instance.
https://forums.aws.amazon.com/message.jspa?messageID=453142
Correct Usage of Amazon Web Services S3 for Server-Side Scripting
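As a rough illustration of the split described above (static page on S3, dynamic POSTs to EC2), the form submission on the S3-hosted page could look like this; the endpoint URL is a placeholder, and because the page is served from an S3 domain, the backend on EC2 has to send the appropriate CORS headers:

```js
// Sketch: contact form on an S3-hosted static page posting to a dynamic backend on EC2.
// The endpoint URL is a placeholder; the backend must allow CORS (Access-Control-Allow-Origin).
document.getElementById('contact-form').addEventListener('submit', function (e) {
  e.preventDefault();
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'https://api.example.com/contact');   // EC2 instance (or ELB) endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () {
    alert(xhr.status === 200 ? 'Message sent!' : 'Something went wrong.');
  };
  xhr.send(JSON.stringify({
    name: document.getElementById('name').value,
    message: document.getElementById('message').value,
  }));
});
```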

Not able to upload large files in amazon s3

I am trying to upload large files such as audio and video to Amazon S3, and Django is throwing a worker timeout. If I use the credentials of a different account, it works fine; simply swapping in the other account's credentials does the trick. When I revert to the original account's credentials it fails again, although small files of less than 3 MB still work. I suspect I have missed some setting in the S3 dashboard for this particular account. Could anyone help me out?

AWS Configure Tomcat's virtual directory and emailing system

I'm deploying my first application on AWS and there are a couple of things I just cannot find a solution for.
1. File system
The application uses Lucene and allows image uploads, so I'm guessing I need an S3 bucket to host the Lucene index and the images.
For testing purposes, on my local system I would place this line of code in Tomcat 7's server.xml:
<Context path="/uploads" docBase="D:/myapp/uploads" />. Now, as you probably know, all the requests starting with /uploads would be routed to D:/myapp/uploads by the server.
Furthermore, the Lucene API needs an absolute path in order to find the index directory:
FSDirectory.open(new File(ConfigUtil.getProperty("D:/myapp/index")))
My first question is about this configuration in the AWS Console. How can I obtain those `D:/aaa/bbb/` paths?
2. Emailing system
After registration, a confirmation email is sent to the user. Again, in testing I used Google's smtp.gmail.com. I would need a host, a username, and a password to make the javax.mail API work.
I have no idea how to obtain those credentials. Is that an AWS matter or a domain registrar matter (I'm using Namecheap)?
Thanks for your help!
To host the images on S3, you have two options.
Either upload to an EBS-backed EC2 instance first, as you did on your test system, and move the files to S3 asynchronously afterwards.
In this case, you can choose any path you wish on the EBS volume to temporarily store the uploaded files.
Or modify your front-end to allow submission to S3 directly.
Likewise, you can choose any path you wish on the EBS volume to store Lucene's index.
Regarding the use of javax.mail:
set smtp.gmail.com as host
create a gmail account
use the newly created account's username and password

Dynamic usage of AWS S3

I am trying to explore AWS S3, and I found out that we can store data and get a URL for a file which can be used on a website. My intention, however, is to store files on S3 and have users of my website post and retrieve files to/from S3 without my intervention. I plan to have my server and JSP/Servlet pages on EC2, where Tomcat (and a MySQL server) will be running.
Is this possible, and if so, how can I achieve it?
Thanks,
SD
Yes, it's possible. A full answer to this question is tantamount to a consulting gig, but some resources that should get you started:
The S3 API
Elastic Beanstalk for your webtier
Amazon RDS for MySQL

Upload direct to S3 or via EC2?

I would like to build a web service for an iPhone app. As for file uploads, I'm wondering what the standard procedure and most cost-effective solution is. As far as I can see, there are two possibilities:
Client > S3: I upload a file from the iPhone to S3 directly (with the AWS SDK)
Client > EC2 > S3: I upload a file to my server (EC2 running Django) and then the server uploads the file to S3 (as detailed in this post)
I'm not planning on modifying the file in any way. I only need to tell the database to add an entry. So if I were to upload a file Client > S3, I'd need to connect to the server anyways in order to do the database entry.
It seems as if EC2 > S3 doesn't cost anything as long as the two are in the same region.
I'd be interested to hear what the advantages and disadvantages are before I start implementing file uploads.
I would definitely do it through S3, for scalability reasons. True, data transfer between EC2 and S3 is fast and cheap, but uploads are long-running, not like normal web requests, so you may saturate the NIC on your EC2 instance.
Instead, return a GUID to the client, have the client upload to S3 with the key set to the GUID and the Content-Type set appropriately, and then call a web service/Ajax endpoint to create a DB record with the GUID key after the upload completes.
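A minimal sketch of that flow, shown with Node/Express instead of Django and with a pre-signed URL as one common way to let the client write to the bucket; the route names, bucket name and saveRecord() helper are made up for illustration:

```js
// Sketch of the GUID flow: (1) hand the client a GUID + signed upload URL,
// (2) create the DB record only after the client confirms the upload finished.
// Express, aws-sdk and the bucket/route names are assumptions; persistence is stubbed out.
const express = require('express');
const crypto = require('crypto');
const AWS = require('aws-sdk');

const app = express();
app.use(express.json());
const s3 = new AWS.S3();

// Stand-in for the real database insert
const saveRecord = (record) => console.log('TODO: insert into DB', record);

// 1. Client asks for an upload slot: mint a GUID and a short-lived signed PUT URL.
app.post('/uploads', (req, res) => {
  const guid = crypto.randomUUID();
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-ios-uploads',
    Key: guid,
    Expires: 300,
    ContentType: req.body.contentType,
  });
  res.json({ guid, url });
});

// 2. Client uploads to S3 itself, then confirms; only now do we touch the database.
app.post('/uploads/:guid/complete', (req, res) => {
  saveRecord({ key: req.params.guid, owner: req.body.userId });
  res.sendStatus(204);
});

app.listen(3000);
```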