S3 keys from a QT/C++ client - amazon-web-services

We have an application written in C++/Qt that sits on client machines all around the world, and the clients upload files to Amazon S3. What is the best way to authenticate with Amazon without actually including the Amazon key on every client? Is there any way to generate a unique key for each client (thousands of potential clients)?
Would it make more sense to send everything to an intermediate server or proxy and then from there upload the files to Amazon S3?

The optimal way to implement this is to follow the steps below:
Step 1: Allow your clients to upload files to an intermediate server.
Step 2: Store your AWS S3 credentials in a file or database on the intermediate server.
Step 3: Upload the files from the intermediate server to the Amazon S3 bucket (see the sketch below).
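For Step 3, here is a minimal sketch of the server-side upload in Python with boto3; the bucket name, region, key layout, and paths are assumptions for illustration, not anything from the question:
import boto3
# Assumed values -- replace with your own bucket, region, and key layout.
BUCKET = "my-app-uploads"
REGION = "us-east-1"
# Credentials live only on the intermediate server (environment variables,
# ~/.aws/credentials, or an instance role) -- never in the client binary.
s3 = boto3.client("s3", region_name=REGION)
def push_to_s3(local_path, client_id, filename):
    """Upload a file received from a client to S3, keyed per client."""
    key = f"uploads/{client_id}/{filename}"
    s3.upload_file(local_path, BUCKET, key)
# Example: after your HTTP handler has written the client's upload to disk.
# push_to_s3("/tmp/upload-123.bin", "client-0042", "report.bin")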
Hope it helps....

Related

Accessing Amazon S3 via FTP?

I have done a number of searches and can't seem to work out whether this is doable at all.
I have a data logger that has an FTP-push function. The FTP-push function has the following settings:
FTP server
Port
Upload directory
User name
Password
In general, I understand that a FileZilla client (I have a Pro edition) is able to drop files into my AWS S3 bucket, and I have done this successfully on my local PC.
Is it possible to remove the FileZilla client requirement and enter my S3 information directly into my data logger? Something like the diagram below:
Data logger ----FTP----> S3 bucket
If not, what will be the most sensible method to have my data logger JSON files drop into AWS S3 via FTP?
Frankly, you'd be better off with:
Logging to local files
Using a schedule to copy the log files to Amazon S3 using the aws s3 sync command
The schedule could be triggered by cron (Linux) or a Scheduled Task (Windows).
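If you would rather run a single scheduled script than call the CLI, a rough Python/boto3 equivalent of the copy step might look like this (the log directory, bucket name, and key prefix are assumed placeholders):
import boto3
from pathlib import Path
# Assumed placeholders -- adjust to your logger's output directory and bucket.
LOG_DIR = Path("/var/log/datalogger")
BUCKET = "my-datalogger-archive"
PREFIX = "logs/"
s3 = boto3.client("s3")
def already_uploaded(key):
    """Return True if an object with this exact key already exists in the bucket."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=key, MaxKeys=1)
    return any(obj["Key"] == key for obj in resp.get("Contents", []))
def sync_logs():
    """Upload any local JSON log file that is not yet in S3 (a very rough 'sync')."""
    for path in LOG_DIR.glob("*.json"):
        key = PREFIX + path.name
        if not already_uploaded(key):
            s3.upload_file(str(path), BUCKET, key)
if __name__ == "__main__":
    # Run this from cron (Linux) or a Scheduled Task (Windows), e.g. hourly.
    sync_logs()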
Amazon recently added FTP support to the AWS Transfer Family. This provides an integration with Amazon S3 over FTP without setting up any additional infrastructure; however, you should review the pricing before committing.
As an alternative, you could create an intermediary server that syncs between itself and Amazon S3 using the AWS CLI's aws s3 sync.

How to transfer a file from S3 to someone's SFTP server

I have a workflow need. I have a customer that does not want to deal with our S3 folders where we drop their files; they want us to send the files directly to their SFTP account. When I unload files from my backend, they automatically unload to S3 from AWS services. As this is a one-time request per customer, I don't wish to set up an automated transfer protocol in a Lambda or bash script, nor do I wish to go through the hassle of copying the file to my local server only to post it to the SFTP site. I would prefer to just right-click on the file and select to transfer it to an SFTP location. Does anyone know if AWS has any plans to add file transfer protocol support to the S3 console UI? (SFTP, FTP, etc.)
What would be even better is if AWS S3 allowed all files dropped in an S3 bucket location to be automatically transferred to the SFTP location defined -- in the scenario where the customer never wishes to deal with S3, but we need to use it.
Given the current capabilities of Amazon S3, automating a send of files from Amazon S3 to an SFTP target would require the use of an AWS Lambda function.
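As a rough illustration (not an official recipe), such a Lambda could be triggered by an S3 ObjectCreated event and push the new object to the SFTP server with paramiko. The host, credentials, and remote directory below are placeholders, and paramiko is not in the Lambda runtime by default, so it would have to be packaged with the function or as a layer:
import io
import os
import boto3
import paramiko  # not in the Lambda runtime by default; package it with the function
# Hypothetical SFTP target -- supply these through environment variables.
SFTP_HOST = os.environ.get("SFTP_HOST", "sftp.example.com")
SFTP_PORT = int(os.environ.get("SFTP_PORT", "22"))
SFTP_USER = os.environ.get("SFTP_USER", "customer")
SFTP_PASS = os.environ.get("SFTP_PASS", "")
REMOTE_DIR = os.environ.get("REMOTE_DIR", "/inbound")
s3 = boto3.client("s3")
def handler(event, context):
    """Triggered by s3:ObjectCreated:*; pushes each new object to the SFTP server."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Read the object into memory (fine for modest file sizes).
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        transport = paramiko.Transport((SFTP_HOST, SFTP_PORT))
        try:
            transport.connect(username=SFTP_USER, password=SFTP_PASS)
            sftp = paramiko.SFTPClient.from_transport(transport)
            sftp.putfo(io.BytesIO(body), REMOTE_DIR + "/" + key.split("/")[-1])
        finally:
            transport.close()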
There are a few ways to do this. Since you are looking for the easiest way, I would suggest installing s3fs-fuse on a Linux server, which lets you mount S3 as a file system. You can mount it directly on the SFTP server and copy the files locally; below is a URL describing how to use S3 as a file system.
https://cloud.netapp.com/blog/amazon-s3-as-a-file-system
The other method would be to use the AWS CLI to do a recursive copy. This would involve installing the AWS CLI and generating API keys. Below is an example of the command (add --recursive to copy an entire prefix).
aws s3 cp s3://mybucket/test.txt test2.txt
You can revoke the API keys once you are done with the transfer!

Best choice of uploading files into S3 bucket

I have to upload video files into an S3 bucket from my React web application. I am currently developing a simple React application, and from it I am trying to upload video files into an S3 bucket. I have come up with two approaches for implementing the uploading part.
1) Amazon EC2 instance: From the front end, I hit an API served by a server running on an Amazon EC2 instance, and the instance uploads the files into the S3 bucket.
2) Amazon API Gateway + Lambda: I send the local files to an S3 bucket through API Gateway and a Lambda function by calling the HTTPS URL with the data.
But I am not happy with either method because both are costly. The files I have to upload are more than 200 MB, and I don't know how I can optimize this uploading process. The video upload is essential to my application, so I need to handle it carefully while keeping it performant and cost-effective.
If someone knows a solution, please share it with me; it will be very helpful for me to continue.
Thanks in advance.
You can upload files directly from your React app to S3 using the AWS JavaScript SDK and Cognito identity pools, and for the optimization part you can use S3's multipart upload capability to upload the file in multiple parts. I'm providing links to read about it further:
AWS javascript upload image example
cognito identity pools
multipart upload to S3
Also take a look at the managed upload helper in the AWS JavaScript SDK:
aws managed upload javascript
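The links above are for the JavaScript SDK, but the multipart mechanism is the same in every SDK. Purely as an illustration of the underlying API calls, here is a sketch in Python with boto3 (bucket, key, and part size are assumptions):
import boto3
s3 = boto3.client("s3")
BUCKET = "my-video-uploads"   # hypothetical bucket name
KEY = "videos/big.mp4"        # hypothetical object key
PART_SIZE = 8 * 1024 * 1024   # 8 MB parts (S3's minimum part size is 5 MB)
def multipart_upload(local_path):
    """Upload a large file in parts using the low-level multipart API."""
    mpu = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)
    parts = []
    with open(local_path, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(PART_SIZE)
            if not chunk:
                break
            resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=mpu["UploadId"],
                                  PartNumber=part_number, Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1
    s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=mpu["UploadId"],
                                 MultipartUpload={"Parts": parts})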
In order to bypass EC2, you can use a presigned POST request to upload your content directly from the browser to the S3 bucket.
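A minimal server-side sketch of generating such a presigned POST with Python and boto3 (the bucket, key, size limit, and expiry are assumed values); the browser then submits an ordinary multipart/form-data POST containing the returned fields plus the file:
import boto3
s3 = boto3.client("s3")
BUCKET = "my-video-uploads"   # hypothetical bucket name
def presigned_post_for(key):
    """Return the URL and form fields the browser needs for a direct upload."""
    return s3.generate_presigned_post(
        Bucket=BUCKET,
        Key=key,
        Conditions=[["content-length-range", 0, 500 * 1024 * 1024]],  # up to ~500 MB
        ExpiresIn=3600,  # valid for one hour
    )
# Hand the result to the React client: POST the file to resp["url"]
# with resp["fields"] included as additional form fields.
# resp = presigned_post_for("videos/abc123.mp4")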

Simplest way to fetch the file from FTP server (on-prem) & put into S3 bucket

As per my project requirement, I want to fetch some files from an on-prem FTP server and put them into an S3 bucket. The files are 1-2 GB in size. Once a file is placed in the FTP server folder, I want it to be uploaded to the S3 bucket.
Please suggest the easiest way to achieve this.
Note: The files will mostly be put on the FTP server only once a day, so I don't want to continuously scan the FTP server. Once the files have been uploaded to S3 from the FTP server, I want to terminate any resources (like EC2) created in AWS.
These are my ideas:
I think you could create an agent on your FTP server that uploads the files every N seconds/minutes/hours using the AWS CLI. This way you avoid external access to your FTP server.
Another approach is a Lambda function for the pulling process, but as you said, the FTP server doesn't allow external access.
Create a VPN between your on-prem network and the cloud infrastructure, create a CloudWatch event, and execute the pulling process through a Lambda (you can configure a timeout on the Lambda).
Create a VPN between your on-prem network and the cloud infrastructure, and upload the files from your FTP server using the AWS CLI (pay attention to the sync option). Take a look at this link: https://aws.amazon.com/answers/networking/accessing-vpc-endpoints-from-remote-networks/
With Jenkins, create a task to execute a process that will upload the files.
You can use Storage Gateway; visit its site here: https://aws.amazon.com/es/storagegateway/
Here is how we solved it.
Enable S3 Transfer Acceleration on your S3 bucket. This is very much needed, since you are pushing large files.
If you have access to the server, install the AWS CLI and sync the folder to the S3 bucket. The AWS CLI will automatically sync your folder to the bucket, so if you change any of your existing files it will stay in sync with the S3 bucket. This is the ideal and simplest way if you have access to the server and are able to install the AWS CLI.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html#transfer-acceleration-examples-aws-cli
aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
If you want to enable it for a specific profile or the default profile:
aws configure set default.s3.use_accelerate_endpoint true
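If you prefer doing the same from code rather than the CLI, here is a rough boto3 equivalent (the bucket name and file path are placeholders):
import boto3
from botocore.config import Config
BUCKET = "my-ftp-landing-bucket"  # hypothetical bucket name
# One-time: enable Transfer Acceleration on the bucket (same as the CLI call above).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)
# Upload through the accelerated endpoint; boto3 switches to multipart for large files.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("/data/ftp/incoming/big-file.dat", BUCKET, "incoming/big-file.dat")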
If you don't have access to the FTP server on your premises, you need an external server to perform this process. In that case you need to poll or share the file system, copy the file locally, and move it to the S3 bucket. There will be a lot of failure points with this process.
Hope it helps.

Upload direct to S3 or via EC2?

I would like to build a web service for an iPhone app. As for file uploads, I'm wondering what the standard procedure and most cost-effective solution is. As far as I can see, there are two possibilities:
Client > S3: I upload a file from the iPhone to S3 directly (with the AWS SDK)
Client > EC2 > S3: I upload a file to my server (EC2 running Django) and then the server uploads the file to S3 (as detailed in this post)
I'm not planning on modifying the file in any way. I only need to tell the database to add an entry. So if I were to upload a file Client > S3, I'd need to connect to the server anyway in order to do the database entry.
It seems as if EC2 > S3 doesn't cost anything as long as the two are in the same region.
I'd be interested to hear what the advantages and disadvantages are before I start implementing file uploads.
I would definitely do it directly through S3, for scalability reasons. True, data transfer between S3 and EC2 is fast and cheap, but uploads are long-running, unlike normal web requests, so you may saturate the NIC on your EC2 instance.
Rather, return a GUID to the client, upload to S3 with the key set to the GUID and the Content-Type set appropriately, then call a web service/Ajax endpoint to create a DB record with the GUID key after the upload completes.
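The answer doesn't prescribe how the client is authorized to upload; one common option is to have the server hand back a presigned PUT URL along with the GUID, so no long-lived keys ship with the app. A rough server-side sketch in Python with boto3 (the function names, bucket, and storage are assumptions):
import uuid
import boto3
s3 = boto3.client("s3")
BUCKET = "my-iphone-uploads"  # hypothetical bucket name
def start_upload(content_type):
    """Issue a GUID and a presigned PUT URL the client can upload against."""
    guid = str(uuid.uuid4())
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": guid, "ContentType": content_type},
        ExpiresIn=3600,
    )
    return {"guid": guid, "upload_url": url}
def complete_upload(guid):
    """Called by the client after the S3 upload finishes; record it in the DB."""
    # Placeholder: insert a row keyed by the GUID into your database here.
    print("registering upload", guid)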