How to use Amazon Cloud Drive data with AWS EC2

I have about 35 GB of photos in my Amazon Cloud Drive. I'm trying to run a convolutional neural network using the Deep Learning Linux EC2 AMI on AWS. Is there a way I could use the images in my Amazon Cloud Drive for this purpose? Maybe access them in my Python script or something?
Or is there another way of storing the 35 GB of data so it can be used with AWS?

Interesting question, and I can see the use case. Note that S3 is the ideal choice for enterprise use cases, while Amazon Drive is an alternative to Google Drive and is more of a consumer-focused (B2C) solution. The primary purpose of Amazon Drive is to keep photos/documents in sync across a customer's devices and, secondly, to auto-upload photos as they are taken.
Having said that, of course you can use S3 to store your images, and there are SDKs in many languages through which your EC2 instance can interact with S3.
But if you want to use Amazon Drive, you can refer to https://developer.amazon.com/public/apis/experience/cloud-drive/content/restful-api
Note that because S3 is focused on enterprise use, you have far more options and easier APIs than with Amazon Drive. You could also try uploading the images to Google Drive; it has much better APIs, and your EC2 instance can talk to them.
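For example, once the photos are in an S3 bucket, a minimal sketch with boto3 (the Python SDK) could pull them down for training; the bucket name, prefix, and local directory below are placeholders:

```python
import os
import boto3

# Placeholder names - replace with your own bucket, prefix, and local path.
BUCKET = "my-training-photos"
PREFIX = "photos/"
LOCAL_DIR = "/home/ubuntu/data/photos"

s3 = boto3.client("s3")
os.makedirs(LOCAL_DIR, exist_ok=True)

# Page through the objects under the prefix and download each one locally.
# Note: this flattens any subfolders into a single directory.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):
            continue  # skip "directory" placeholder keys
        target = os.path.join(LOCAL_DIR, os.path.basename(key))
        s3.download_file(BUCKET, key, target)
```

From there, the CNN code can read the images like any other local files.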

You cannot access Amazon Cloud Drive in any practical way. Amazon does not provide public access to its API. Amazon Cloud Drive has no WebDAV or (S)FTP support; the only supported API is a poorly documented REST API, and Amazon has not been authorizing new API keys for at least a year.
Even if you got a whitelisted access key a year ago or earlier, the API and the service itself are so awkward and buggy that they create more problems than they solve: constant TooManyRequests errors even if you haven't sent a single request all day, plus weird undocumented errors.

Related

AWS Storage Gateway and vSANs

I'm learning about AWS Storage Gateway appliances, and I can't seem to find the answer to my question. It may be my lack of storage appliance knowledge, or I'm not looking in the right place, but I hope folks here can help or point me to some documentation.
I'm trying to use an AWS Storage Gateway appliance to back up/archive data from a particular application on an on-prem Microsoft server to an Amazon S3 bucket. However, there is a requirement for the storage gateway to be compatible with vSAN storage, since the site is a VMware shop. I'm not super familiar with the VMware suite, but I see in AWS that I can configure a VMware ESXi virtual machine by downloading the OVF template and deploying it. Is that all I need to set up the file share? How does the storage type being vSAN impact this approach? Any help is appreciated!
The storage appliance creates a file share, NFS or SMB, backed by S3 storage. Your application or backup application would put the data there. vSAN won't affect anything, since this would all be happening inside the VMs. It doesn't involve the underlying storage outside the VMs, so the storage system backing the datastore doesn't come into play.
Yes, the OVF is about all you need, other than a virtual network and an IP for the appliance that can reach AWS. You'll also need a PC with a browser that can connect to both AWS and the internal IP assigned to the appliance, for registering the appliance with your AWS account. All of the instructions are baked in once you click Create gateway in the AWS console.
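Once the gateway is activated, the file share itself can also be created programmatically. Here is a rough boto3 sketch assuming an SMB share with guest access; the gateway ARN, IAM role ARN, and bucket ARN are placeholders for values from your own setup:

```python
import uuid
import boto3

# Placeholder ARNs - substitute the ones for your gateway, IAM role, and bucket.
GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"
ROLE_ARN = "arn:aws:iam::123456789012:role/StorageGatewayS3Access"
BUCKET_ARN = "arn:aws:s3:::my-backup-bucket"

sgw = boto3.client("storagegateway")

# Create an SMB file share on the gateway that writes objects into the S3 bucket.
# GuestAccess assumes an SMB guest password has already been set on the gateway;
# use "ActiveDirectory" instead if the gateway is domain-joined.
response = sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN=GATEWAY_ARN,
    Role=ROLE_ARN,
    LocationARN=BUCKET_ARN,
    Authentication="GuestAccess",
)
print(response["FileShareARN"])
```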

What is an AWS Lightsail bucket for?

I am new to server stuff.
I'd like to store some images on the server (willing to use Amazon Lightsail) and load those images from a mobile app and display them.
Can I do this with Amazon Lightsail storage/buckets, and do I need to buy it?
I think I will store just a few images (probably less than 200 images, each image less than 1MB).
Not gonna upload them all at once.
I guess this is a simple question, but for beginners, it's not so simple.
Thanks in advance for any comments.
Amazon Lightsail is AWS' easy-to-use virtual private server. Lightsail offers virtual servers, storage, databases, networking, CDN, and monitoring for a low, predictable price.
The idea of the Lightsail service is to provide an easy way for you to host your application without the need to handle a lot of settings.
For your case, you simply need an S3 bucket to host the images; I do not think Lightsail would be the best option.
Simply save your files in an S3 bucket and load the images using presigned/public URLs, or via the SDK.
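As a rough sketch of the presigned-URL approach with boto3 (the bucket and key names below are placeholders), the mobile app would then fetch the image over HTTPS using the returned URL:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key - replace with your own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-images", "Key": "avatars/user-42.png"},
    ExpiresIn=3600,  # URL is valid for one hour
)
print(url)  # hand this URL to the mobile app to download the image
```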
For more information about the Lightsail service take a look here: What can I do with Lightsail?

Transporting data to Amazon Workspaces

I am managing a bunch of users on Amazon Workspaces; they have terabytes of data which they want to start playing around with on their workspaces.
I am wondering what is the best way to do the data upload process? Can everything just be downloaded from Google Drive or Dropbox? Or should I use something like AWS Snowball, which is specifically for migration?
While something like AWS Snowball is probably the safest, best bet, I'm kind of hesitant to add another AWS product to the mix, which is why I might just have everything be uploaded and then downloaded from Google Drive / Dropbox. Then again, I am building an AWS environment that will be used long term, and long term using Google Drive / Dropbox won't be a solution.
Any thoughts on how to architect this (short term and long term)?
Why would you be hesitant to include more AWS products in the mix? Generally speaking, if you aren't combining multiple AWS products to build your solutions then you aren't making very good use of AWS.
For the specific task at hand I would look into AWS WorkDocs, which is integrated very well with AWS Workspaces. If that doesn't suit your needs I would suggest placing the data files on Amazon S3.
You can use FileZilla Pro to upload your data to an AWS S3 bucket.
And use FileZilla Pro within the Workspaces instance to download the files.
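If you would rather script the transfer than use FileZilla Pro, a minimal boto3 sketch along these lines could push a local folder into S3 (bucket name and paths are placeholders); a similar script run inside the Workspace can pull the files back down:

```python
import os
import boto3

# Placeholder values - point these at your own data and bucket.
LOCAL_ROOT = "/data/to-migrate"
BUCKET = "my-workspaces-staging"

s3 = boto3.client("s3")

# Walk the local tree and upload every file, preserving the relative path as the key.
for dirpath, _dirnames, filenames in os.walk(LOCAL_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        key = os.path.relpath(path, LOCAL_ROOT).replace(os.sep, "/")
        s3.upload_file(path, BUCKET, key)
        print(f"uploaded {key}")
```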

How to setup shared persistent storage for multiple AWS EC2 instances?

I have a service hosted on Amazon Web Services. There I have multiple EC2 instances running with the exact same setup and data, managed by an Elastic Load Balancer and scaling groups.
Those instances are web servers running web applications based on PHP. So currently there are the very same files etc. placed on every instance. But when the ELB / scaling group launches a new instance based on load rules etc., the files might not be up-to-date.
Additionally, I'd rather like to use a shared file system for PHP sessions etc. than sticky sessions.
So, for those reasons and maybe more coming up in the future, I would like to have a shared file system which I can attach to my EC2 instances.
What approach would you suggest to resolve this? Are there any solutions offered by AWS directly, so I can rely on their services rather than doing it on my own with DRBD and so on? What is the easiest approach? DRBD, NFS, ...? Is S3 also feasible for these purposes?
Thanks in advance.
As mentioned in a comment, AWS has announced EFS (http://aws.amazon.com/efs/) a shared network file system. It is currently in very limited preview, but based on previous AWS services I would hope to see it generally available in the next few months.
In the meantime there are a couple of third party shared file system solutions for AWS such as SoftNAS https://aws.amazon.com/marketplace/pp/B00PJ9FGVU/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1432203627313
S3 is possible but not always ideal; the main blocker is that it does not natively support any filesystem protocols, so all interactions need to go through the AWS API or HTTP calls. Additionally, when looking at using it for session stores, the 'eventually consistent' model will likely cause issues.
That being said, if all you need is updated resources, you could create a simple script, run either as a cron job or on startup, that downloads the files from S3 (see the sketch after this answer).
Finally, in the case of static resources like CSS/images, don't store them on your web server in the first place; there are plenty of articles covering the benefits of storing and accessing static web resources directly from S3 while keeping the dynamic stuff on your server.
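A minimal sketch of such a startup/cron sync script using boto3 (bucket, prefix, and web root are assumed placeholders); it only downloads objects that are missing locally or newer in S3:

```python
import os
import boto3

# Assumed placeholders - adjust for your bucket and web root.
BUCKET = "my-webapp-assets"
PREFIX = "webroot/"
WEB_ROOT = "/var/www/html"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        rel = key[len(PREFIX):]
        if not rel or rel.endswith("/"):
            continue
        target = os.path.join(WEB_ROOT, rel)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        # Skip files that already exist locally and are at least as new as the S3 copy.
        if os.path.exists(target) and os.path.getmtime(target) >= obj["LastModified"].timestamp():
            continue
        s3.download_file(BUCKET, key, target)
```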
From what we can tell at this point, EFS is expected to provide basic NFS file sharing on SSD-backed storage. Once available, it will be a v1.0 proprietary file system. There is no encryption, and it's AWS-only. The data is completely under AWS control.
SoftNAS is a mature, proven, advanced ZFS-based NAS filer that is full-featured, including encrypted EBS and S3 storage, storage snapshots for data protection, writable clones for DevOps and QA testing, RAM and SSD caching for maximum IOPS and throughput, deduplication and compression, cross-zone HA, and a 100% uptime SLA. It supports NFS with LDAP and Active Directory authentication, CIFS/SMB with AD users/groups, iSCSI multi-pathing, FTP, and (soon) AFP. SoftNAS instances and all storage are completely under your control, and you have complete control of the EBS and S3 encryption and keys (you can use EBS encryption or any Linux-compatible encryption and key management approach you prefer or require).
The ZFS filesystem is a proven filesystem that is trusted by thousands of enterprises globally. Customers are running more than 600 million files in production on SoftNAS today - ZFS is capable of scaling into the billions.
SoftNAS is cross-platform and runs on cloud platforms other than AWS, including Azure, CenturyLink Cloud, Faction cloud, VMware vSphere/ESXi, VMware vCloud Air, and Hyper-V, so your data is not limited to or locked into AWS. More platforms are planned. It provides cross-platform replication, making it easy to migrate data between any supported public cloud, private cloud, or premises-based data center.
SoftNAS is backed by industry-leading technical support from cloud storage specialists (it's all we do), something you may need or want.
Those are some of the more noteworthy differences between EFS and SoftNAS. For a more detailed comparison chart:
https://www.softnas.com/wp/nas-storage/softnas-cloud-aws-nfs-cifs/how-does-it-compare/
If you are willing to roll your own HA NFS cluster, and be responsible for its care, feeding and support, then you can use Linux and DRBD/corosync or any number of other Linux clustering approaches. You will have to support it yourself and be responsible for whatever happens.
There's also GlusterFS. It does well up to 250,000 files (in our testing) and has been observed to suffer from an IOPS brownout when approaching 1 million files, and IOPS blackouts above 1 million files (according to customers who have used it). For smaller deployments it reportedly works reasonably well.
Hope that helps.
CTO - SoftNAS
For keeping your web server sessions in sync, you can easily switch to Redis or Memcached as your session handler. This is a simple setting in php.ini, and all the web servers can use the same Redis or Memcached server for sessions. You can use Amazon's ElastiCache, which will manage the Redis or Memcached instance for you.
http://phpave.com/redis-as-a-php-session-handler/ <- explains how to set up Redis with PHP pretty easily
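As a rough sketch, assuming the phpredis extension is installed and the endpoint below stands in for your own ElastiCache node, the php.ini change looks something like this:

```ini
; Assumed setup: phpredis extension installed; replace the host with your ElastiCache endpoint.
session.save_handler = redis
session.save_path = "tcp://my-sessions.abc123.0001.use1.cache.amazonaws.com:6379"
```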
Keeping your files in sync is a little bit more complicated.
How do I push new code changes to all my web servers?
You could use Git. When you deploy, you can set up multiple servers as remotes and push your branch (master) to all of them, so every new build goes out to every web server.
What about new machines that launch?
I would set up new machines to run an rsync script against a trusted source, your master web server. That way they sync their web folders with the master when they boot and end up identical even if the AMI had old web files in it.
What about files that change and need to be updated live?
Store any user-uploaded files in S3. If a user uploads a document on server 1, the file is stored in S3 and its location is stored in a database. Then if a different user is on server 2, they can see the same file and access it as if it were on server 2; the file is retrieved from S3 and served to the client.
GlusterFS is also an open-source distributed file system used by many to create shared storage across EC2 instances.
Until Amazon EFS hits production, the best approach in my opinion is to build a storage backend exporting NFS from EC2 instances, maybe using Pacemaker/Corosync to achieve HA.
You could create an EBS volume that stores the files and instruct Pacemaker to unmount/detach the EBS volume from the failed node and then attach/mount it on the healthy NFS cluster node.
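The detach/attach step itself can be scripted with boto3; here is a rough sketch of what such a failover action might do (the volume and instance IDs are placeholders, and unmounting/mounting on the hosts still happens separately, e.g. via Pacemaker resource agents):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs - replace with your data volume and the failed/healthy instances.
VOLUME_ID = "vol-0123456789abcdef0"
FAILED_INSTANCE = "i-0aaaaaaaaaaaaaaaa"
HEALTHY_INSTANCE = "i-0bbbbbbbbbbbbbbbb"

# Detach the data volume from the failed NFS node and wait until it is free.
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=FAILED_INSTANCE, Force=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Attach it to the healthy node; the OS there then mounts it and exports NFS.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=HEALTHY_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
```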
Hi, we currently use a product called SoftNAS in our AWS environment. It allows us to choose between EBS- and S3-backed storage. It has built-in replication as well as a high availability option. It may be something you can check out; I believe they offer a free trial on AWS.
We are using ObjectiveFS and it is working well for us. It uses S3 for storage and is straight forward to set up.
They've also written a doc on how to share files between EC2 instances.
http://objectivefs.com/howto/how-to-share-files-between-ec2-instances

How to use external data with Elastic MapReduce

From Amazon's EMR FAQ:
Q: Can I load my data from the internet or somewhere other than Amazon S3?
Yes. Your Hadoop application can load the data from anywhere on the internet or from other AWS services. Note that if you load data from the internet, EC2 bandwidth charges will apply. Amazon Elastic MapReduce also provides Hive-based access to data in DynamoDB.
What are the specifics of loading data from external (non-S3) sources? There seems to be a dearth of resources around this option, and it doesn't appear to be documented in any form.
If you want to do it "the Hadoop way", you should implement a DFS over your data source, or put references to your source URLs into some file, which will be the input for the MR job (see the sketch after this answer).
At the same time, Hadoop is about moving code to the data. Even EMR over S3 is not ideal from this perspective: EC2 and S3 are different clusters. So it is hard to imagine efficient MR processing if the data source is physically outside the data center.
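As a rough sketch of the URL-list approach, a Hadoop Streaming mapper (written here in Python; the processing and the output format are placeholders) could read one URL per input line and fetch the content directly:

```python
#!/usr/bin/env python
# Sketch of a Hadoop Streaming mapper: each input line is assumed to be a URL
# to an external (non-S3) resource; we fetch it and emit (url, byte_count).
import sys
import urllib.request

for line in sys.stdin:
    url = line.strip()
    if not url:
        continue
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
        # Replace this with whatever processing/output your job actually needs.
        print(f"{url}\t{len(data)}")
    except Exception as exc:
        # Send failures to stderr so they show up in the task logs.
        sys.stderr.write(f"failed {url}: {exc}\n")
```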
Basically, what Amazon is saying is that you can programmatically access any content from the internet or any other source via your code. For example, you can access a CouchDB instance via any HTTP-based client API.
I know that the Cassandra package for Java has a source package named org.apache.cassandra.hadoop, and there are two classes in it that are needed for getting data from Cassandra when you are running AWS Elastic MapReduce.
Essential classes: ColumnFamilyInputFormat.java and ConfigHelper.java
Go to this link to see an example of what I'm talking about.