How to allow users to upload video files to an EC2 instance - amazon-web-services

I have an application where people can upload files to my S3 bucket. Before a file goes to S3, it needs to be stored temporarily on the server. Now that I'm hosting the application on my AWS EC2 instance, a temp file needs to be written to /var/www/html/cactusjan25/videouploads/ on the EC2 instance every time someone uploads a file, before it goes to my S3 bucket. However, for some reason files cannot be uploaded to the EC2 instance. Are there any permissions I need to set to allow anyone from anywhere in the world to upload a file to that temp directory on my EC2 instance?
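If the upload is handled by a server-side script (for example PHP running under Apache), the temp directory only needs to be writable by the web server's own user, not by anyone on the internet. A minimal sketch, assuming Apache with the www-data user (on Amazon Linux the user is typically apache):

sudo mkdir -p /var/www/html/cactusjan25/videouploads
sudo chown www-data:www-data /var/www/html/cactusjan25/videouploads   # 'apache:apache' on Amazon Linux
sudo chmod 750 /var/www/html/cactusjan25/videouploads                 # writable by the web server user only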

Related

How to un-tar a file in s3 without passing through local machine

I have a huge tar file in an S3 bucket that I want to unpack while it stays in the bucket. I do not have enough space on my local machine to download the tar file and upload it back to the S3 bucket. What's the best way to do this?
Amazon S3 does not have in-built functionality to manipulate files (such as compressing/decompressing).
I would recommend:
Launch an Amazon EC2 instance in the same region as the bucket
Login to the EC2 instance
Download the file from S3 using the AWS CLI
Untar the file
Upload desired files back to S3 using the AWS CLI
Amazon EC2 instances are charged per second, so choose a small machine (e.g. t3a.micro) and it will be rather low-cost (perhaps under 1 cent).
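As a rough sketch of those steps with the AWS CLI (the bucket and archive names are placeholders, and credentials are assumed to come from an instance role or aws configure):

aws s3 cp s3://my-bucket/archive.tar .                       # download the archive from S3
mkdir extracted
tar -xf archive.tar -C extracted                             # unpack it on the instance
aws s3 cp extracted s3://my-bucket/extracted/ --recursive    # upload the extracted files back to S3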

How to crawl images from Amazon EC2 to S3 without saving to EC2 local?

I have a bunch of images at URLs to be downloaded on Amazon EC2. I want to 1) download them, 2) rename them according to a certain rule, and 3) save them to S3. I know I could do it locally on EC2 and sync the folder to S3, but is there a good way to download and rename those images directly to S3?
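One hedged approach, assuming the AWS CLI is configured on the instance: stream each download straight into S3 with aws s3 cp reading from stdin, so nothing is kept on the EC2 instance's local disk. The URL list file, renaming rule, and bucket name below are placeholders:

# Placeholders: urls.txt (one image URL per line), my-bucket, and the renaming rule.
while read -r url; do
  newname="img-$(date +%s%N).jpg"                  # apply your own renaming rule here
  curl -sSL "$url" | aws s3 cp - "s3://my-bucket/images/$newname"
done < urls.txt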

Uploaded files in EC2 instance should be transferred to S3 automatically

I have a website hosted on an EC2 instance (Tomcat) and it has an image upload facility. My intention is to switch to CloudFront to reduce the load time of the website. Images on the website are loaded from a directory called "images" and the image names are stored in a database. When a page is loaded, the image name is read from the database and then the image is loaded. I can copy the images directory to the S3 bucket manually. However, when an image is uploaded, an entry is made in the database, but the "images" directory in the S3 bucket remains outdated. I need something so that the S3 directory updates as soon as an image is uploaded. I am new to S3 and CloudFront. Please help!
You can achieve this with the AWS CLI and a cron job that runs periodically on your EC2 instance.
Install the AWS CLI on your EC2 instance
Set up a cron job with the command below
aws s3 sync [path-to-image-directory] s3://mybucket
Your images will then be copied to S3 automatically.
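For example, a crontab entry (added via crontab -e) along these lines runs the sync every five minutes; the path to the aws binary, the image directory, and the bucket name are all placeholders to adjust for your setup:

# Placeholder paths and bucket name; use the full path to aws because cron has a minimal PATH.
*/5 * * * * /usr/bin/aws s3 sync /path/to/image-directory s3://mybucket/images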

OpenCart images on Amazon S3

I have currently set up an Elastic Beanstalk environment; my website is connected to AWS's RDS and is running smoothly on an EC2 instance behind an Elastic Load Balancer.
My problem is that OpenCart stores its images in a relative directory '/image'. Because of the non-persistent nature of the EC2 instance, any images I upload to the website are deleted after a few days. (It also takes a long time to make small updates via eb deploy due to the large quantity of image files that must be uploaded every time.)
My solution to this was to use the Amazon S3 bucket that was created when I created the Elastic Beanstalk environment. However, attempting to change the image directory to my S3 bucket http://elasticbeanstalk-ap-southeas-x-xxxxxxxxxxx.s3.amazon.com/image resulted in an error message from OpenCart in the admin panel stating that the image directory is 'not writeable'.
I guess I have two major questions here: how can I make it so the EC2 instance can properly read and write to the S3 bucket? And is there an alternative solution I should be using instead?
Prior to this, I used this opencart extension:
http://www.opencart.com/index.php?route=extension/extension/info&extension_id=7748&filter_search=cdn&page=1
However, as the author himself stated, the /image files are still stored locally; only the cached files are stored on S3.
Thanks!
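For what it's worth, an EC2 instance normally reads and writes an S3 bucket through the AWS API or CLI (usually with an IAM instance role that grants S3 permissions), rather than by treating the bucket URL as a writable local directory, which is likely why OpenCart reports it as 'not writeable'. A minimal sketch with a hypothetical bucket name, assuming the AWS CLI is installed and the instance role allows s3:PutObject and s3:ListBucket:

# Hypothetical bucket and paths; this only shows that the instance itself can read/write the bucket.
aws s3 cp /var/app/current/image/catalog/example.jpg s3://my-opencart-images/image/catalog/example.jpg
aws s3 ls s3://my-opencart-images/image/catalog/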

Autoscaling EC2 instances without loss of old data

Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/http folder.
In the CodeIgniter folder structure I have a folder named 'UPLOAD'. This folder is used for uploaded images and files.
I made an AMI image from the EC2 instance and set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically, but all the data in the 'UPLOAD' folder on the old EC2 instance is lost.
I want to separate the 'UPLOAD' folder used by CodeIgniter from the EC2 instance, so that whenever a new instance is created it gets the UPLOAD folder and its contents without any loss.
How can I do this?
Thanks in advance.
Note: I use MySQL on Amazon RDS.
You can use a shared Elastic Block Store (EBS) mounted directory.
If you manually configure your stack using the AWS Console, go to the EC2 service in the console, then go to Elastic Block Store -> Volumes -> Create Volume, and in your launch configuration you can bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using CloudFormation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to specify where its UPLOAD directory is.
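As a rough illustration of the mounting step on the instance itself (the device name /dev/xvdf, the ext4 filesystem, and the mount point are assumptions that vary by AMI and instance type):

sudo mkfs -t ext4 /dev/xvdf                      # format the volume once, on first use only
sudo mkdir -p /var/www/http/UPLOAD
sudo mount /dev/xvdf /var/www/http/UPLOAD        # mount it where CodeIgniter expects UPLOAD
echo '/dev/xvdf /var/www/http/UPLOAD ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab   # persist across reboots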
As Mike said, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store the images in S3 through the AWS API rather than on the local filesystem.
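As a hedged illustration of that approach using the AWS CLI rather than the SDK (the bucket name is hypothetical, and the instance needs an IAM role or credentials with write access to it), new uploads could be pushed to S3 like this:

aws s3 sync /var/www/http/UPLOAD s3://my-codeigniter-uploads/UPLOAD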