OpenCart images on Amazon S3 - amazon-web-services

I have currently set up an Elastic Beanstalk account; my website is connected to Amazon RDS and is running smoothly on an EC2 instance behind an Elastic Load Balancer.
My problem is that OpenCart stores its images in a relative directory, '/image'. Because of the non-persistent nature of the EC2 instance, any images I upload to the website are deleted after a few days. (It also takes a long time to make small updates via eb deploy due to the large quantity of image files that must be uploaded every time.)
My solution was to use the S3 bucket that was created along with the Elastic Beanstalk environment. However, attempting to change the image directory to my S3 bucket, http://elasticbeanstalk-ap-southeas-x-xxxxxxxxxxx.s3.amazon.com/image, resulted in an error from OpenCart's admin panel stating that the image directory is 'not writeable'.
I guess I have two major questions here: how can I make the EC2 instance properly read and write to the S3 bucket, and is there an alternative solution I should be using instead?
Prior to this, I used this OpenCart extension:
http://www.opencart.com/index.php?route=extension/extension/info&extension_id=7748&filter_search=cdn&page=1
However, as the extension's author himself states, the /image files are still stored locally; only the cached files are stored on S3.
Thanks!
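For what it's worth, OpenCart's 'not writeable' check expects a local filesystem path, so an http://...s3... URL will always fail it. One hedged workaround (not from the original thread) is to mount the bucket at the image path with s3fs-fuse, so OpenCart still sees an ordinary writeable folder; the bucket name and mount point below are placeholders, and the instance needs an IAM role with read/write access to the bucket:

```shell
# Hedged sketch: mount the S3 bucket at OpenCart's image directory
# with s3fs-fuse. Bucket name and mount point are placeholders.
BUCKET="elasticbeanstalk-ap-southeast-x-xxxxxxxxxxx"
MOUNT_POINT="/var/www/html/image"

# iam_role=auto picks up credentials from the EC2 instance profile;
# allow_other lets the web-server user write through the mount.
if command -v s3fs >/dev/null; then
    s3fs "$BUCKET" "$MOUNT_POINT" -o iam_role=auto -o allow_other \
        || echo "mount failed (requires a valid bucket and IAM role)"
fi
```

s3fs trades full POSIX semantics for convenience, so for heavy image traffic a CloudFront distribution in front of the bucket may serve better.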

Related

AWS EC2 Apache file persistence

I'm new to AWS and a little perplexed about the situation with the /var/www/html folder in an EC2 instance on which Apache has been installed.
After setting up an Elastic Beanstalk service and uploading the files, I see that they are stored in the regular /var/www/html folder of the instance.
From reading AWS documents, it seems that instances may be deleted and re-provisioned, which is why use of an S3 bucket, EFS or EBS is recommended.
Why, then, is source code stored on the EC2 instance when using an Apache server? Won't these files potentially be deleted with the instance?
If you manually uploaded files and data to /var/www/html, then of course they will be wiped out when AWS replaces or deletes the instance, e.g. due to autoscaling.
All files that you use on EB should be part of your deployment package, and all files that, e.g., your users upload should be stored outside of EB, e.g. on S3.
Even if the instance is terminated for some reason, the source code is part of your deployment package on Beanstalk, so it can provision a replacement instance with the exact same application and configuration. Basically you are losing this data, but it doesn't matter.
The data-loss concern applies to anything that is not part of the automated provisioning/deployment, i.e. any manual configuration changes or any data your application writes to ephemeral storage. That is what you would need a more persistent storage option for.
It seems that, when the app is first deployed, all files are uploaded to an S3 bucket, from which they are copied into the relevant directory of each new instance. When new instances have to be created (such as for auto-scaling) or replaced, they also pull the code from the S3 bucket. This is also how the app is re-deployed - the bucket is updated and each instance makes a fresh copy of the code.
Apologies if this is stating the obvious to some people, but I had seen a few similar queries.

Hosting a dynamic website on AWS with auto scaling

I am confused about how to host a dynamic Laravel website on AWS. Currently, I have an Auto Scaling group configured with MIN 1 and MAX 1. What I'm trying to achieve is launching a new EC2 server when the current EC2 instance goes down.
What I don't understand is where I should store my website code so that the new EC2 server can obtain it automatically. I have read about storing it in an S3 bucket, but my website is dynamic, so I'm not sure whether that is suitable.
Any guidance would be appreciated :)
You could copy the entire contents of /var/www/html and store it in an S3 bucket.
Then you can add a bootstrap script to the EC2 instance that copies the contents of the S3 bucket into its /var/www/html directory.
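A minimal sketch of that bootstrap as an EC2 user-data script (the bucket name is an assumption, and the instance needs an IAM role with read access to it):

```shell
#!/bin/bash
# Hypothetical user-data bootstrap: mirror the site code from S3 into
# Apache's document root when a new instance comes up.
BUCKET="s3://my-site-code-bucket"   # placeholder bucket name
DOCROOT="/var/www/html"

# --delete makes the docroot an exact copy of the bucket contents.
if command -v aws >/dev/null; then
    aws s3 sync "$BUCKET" "$DOCROOT" --delete \
        || echo "sync failed (requires AWS credentials and the bucket)"
fi
```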
You can use any engine for provisioning cloud environments and applications, such as Ansible or UrbanCode Deploy's blueprint designer, and configure deployment scripts there that fetch the source/artifacts from a repository, an artifact server (e.g. JFrog Artifactory), or wherever else you want.

How to crawl images from Amazon EC2 to S3 without saving to EC2 local?

I have a bunch of images at URLs to be downloaded on Amazon EC2. I want to 1) download them, 2) rename them according to a certain rule, and 3) save them to S3. I know I could do this locally on EC2 and sync the folder to S3, but is there a good way to download and rename those images directly to S3?
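One way to avoid local disk entirely: aws s3 cp can read from stdin, so each download can be streamed straight into the bucket. A hedged sketch (the bucket name, URL list file, and renaming rule are all placeholder assumptions):

```shell
# Stream each URL listed in urls.txt into S3 under a new name.
# Nothing is written to the EC2 instance's disk.
BUCKET="s3://my-image-bucket"   # placeholder bucket name
i=0
if [ -f urls.txt ] && command -v aws >/dev/null; then
    while read -r url; do
        i=$((i + 1))
        ext="${url##*.}"                          # keep the original extension
        key=$(printf 'img_%04d.%s' "$i" "$ext")   # example renaming rule
        curl -s "$url" | aws s3 cp - "$BUCKET/$key"
    done < urls.txt
fi
```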

Uploaded files in EC2 instance should be transferred to S3 automatically

I have a website hosted on an EC2 instance (Tomcat) with an image-upload facility. My intention is to switch to CloudFront to reduce the website's load time. Images on the website are served from a directory called "images", and the image names are stored in a database. When a page is loaded, the image name is read from the database and the image is loaded. I can copy the images directory to the S3 bucket manually. However, when an image is uploaded, a database entry is made, but the "images" directory in the S3 bucket remains outdated. I need something that updates the S3 directory as soon as an image is uploaded. I am new to S3 and CloudFront. Please help!
You can achieve this with the AWS CLI and a cron job that runs periodically on your EC2 instance.
Install the AWS CLI on your EC2 instance.
Schedule a cron job with the command below (note that aws s3 sync takes a directory, not a glob):
aws s3 sync [path-to-image-directory] s3://mybucket
Your images will then be uploaded to S3 automatically.
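For reference, the crontab entry might look like this (the interval, path, and bucket are placeholders; --quiet stops the job from mailing output on every run):

```shell
# Example crontab line: sync the image directory to S3 every 5 minutes.
# Add it via `crontab -e`; echoed here so the sketch is side-effect free.
CRON_LINE='*/5 * * * * aws s3 sync /var/www/images s3://mybucket/images --quiet'
echo "$CRON_LINE"
```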

Autoscaling ec2 instance without loss of old data

My website recently moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/html folder.
Inside the CodeIgniter folder I have a folder named 'UPLOAD', which is used for uploaded images and files.
I made an AMI image from the EC2 instance.
I have set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically, but all the data in the 'UPLOAD' folder on the old instance is lost.
I want to separate the 'UPLOAD' folder from the EC2 instance, so that whenever a new instance is created it gets the UPLOAD folder and its contents without loss.
How can I do this?
Thanks in advance.
Note: I use MySQL on Amazon RDS.
You can use a shared Elastic Block Store (EBS) volume mounted as a directory.
If you configure your stack manually using the AWS Console, go to the EC2 service in the console, then to Elastic Block Store -> Volumes -> Create Volume; in your launch configuration you can then bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using Cloudformation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to specify where its UPLOAD directory is.
As Mike said, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store images in S3 via the AWS API rather than the local filesystem.
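As a sketch of that change in shell terms (the file path and bucket are placeholders; a real application would normally do this in its upload handler via the AWS SDK rather than shelling out):

```shell
# After a successful upload, copy the file to S3 and keep only the
# object key in the database, not the local path.
UPLOAD="/var/www/html/UPLOAD/photo123.jpg"   # placeholder uploaded file
BUCKET="s3://my-upload-bucket"               # placeholder bucket name
KEY="uploads/$(basename "$UPLOAD")"

if [ -f "$UPLOAD" ] && command -v aws >/dev/null; then
    aws s3 cp "$UPLOAD" "$BUCKET/$KEY"
fi
echo "$KEY"   # value to store in the images table
```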