How to send webcam video to Amazon AWS EC2 Instance

Suppose I want to stream video captured by my webcam to an Amazon AWS EC2 instance for the purposes of image processing in the cloud. How would one do this? The only means of file transfer that I am aware of is scp, to copy files to the remote host. I have no idea where to begin with streaming video to AWS EC2. Google turned up nothing for me. Any ideas?

Here is what worked. There are likely many other methods.
1) Create a free tier Amazon AWS EC2 instance with Ubuntu Server 16.04
2) Go to the security groups and modify your instance's security group to allow inbound TCP traffic to reach it
3) Note the public IPv4 address of your instance
4) Develop client code that opens a network socket and sends data over it (Python 2.7 has the socket module)
5) Develop server code that opens a network socket and listens for/accepts connections (Python 2.7 works).
6) The client side needs to grab video frames from the webcam, which is quite easy to do with OpenCV (cv2) in Python; a sketch of both ends is shown below.
A great reference was the answer posted in this thread:
Send Live Video OpenCV Python
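For anyone who wants something concrete to start from, here is a minimal sketch of both ends under the assumptions above. The IP address and port are placeholders, it assumes OpenCV and NumPy are installed, it skips error handling and encryption, and it is written for Python 3 (the original answer used Python 2.7).

```python
# --- client.py (runs on the machine with the webcam) ---
import socket
import struct

import cv2  # OpenCV

SERVER_IP = "203.0.113.10"   # placeholder: your instance's public IPv4 address
SERVER_PORT = 9999           # placeholder: a TCP port opened in the security group

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((SERVER_IP, SERVER_PORT))
cap = cv2.VideoCapture(0)    # default webcam

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)   # JPEG-encode to shrink the payload
        if not ok:
            continue
        data = jpg.tobytes()
        # Length-prefix each frame so the server knows where it ends.
        sock.sendall(struct.pack(">I", len(data)) + data)
finally:
    cap.release()
    sock.close()
```

```python
# --- server.py (runs on the EC2 instance) ---
import socket
import struct

import cv2
import numpy as np

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("client disconnected")
        buf += chunk
    return buf

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 9999))      # same port that the security group allows
srv.listen(1)
conn, addr = srv.accept()

while True:
    (length,) = struct.unpack(">I", recv_exact(conn, 4))
    frame = cv2.imdecode(np.frombuffer(recv_exact(conn, length), np.uint8),
                         cv2.IMREAD_COLOR)
    # ... run whatever image processing you need on `frame` here ...
```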

The only means for file transfer that I am aware of, is scp to copy files to the remote host.
An AWS EC2 instance can largely be treated just like any other server, just in the cloud. If you want to connect to it, install some software, open ports, and so on, all of that is doable.
I'm assuming you want to "stream" video from a webcam to the EC2 instance.
You need some kind of client software on the machine the webcam is connected to that streams the video to the EC2 instance. You would assign an Elastic IP address to the instance and configure that software to stream to that address.
You would then need to install or build something on the server to receive the stream and do something with it: save it somewhere for later processing, do some live processing and stream it somewhere else, etc.
Each of these components is a broad subject, and I can't really recommend any particular software to accomplish this. The important part is that an EC2 instance can do all of this, assuming you find or build software to handle each of these tasks.

Related

AWS Nitro Enclave Socket Connection to Database

I'd like to host an app that uses a database connection in an AWS Nitro enclave.
I understand that the Nitro enclave doesn't have access to a network or persistent storage, and the only way that it can communicate with its parent instance is through the vsock.
There are some examples showing how to configure a connection from the enclave to an external URL through a secure channel using the vsock and vsock proxy, but the examples focus on AWS KMS operations.
I'd like to know if it's possible to configure the secure channel through the vsock and vsock proxy to connect to a database like postgres/mysql etc...
If this is indeed possible, are there perhaps some example configurations somewhere?
Nitrogen is an easy solution for this, and it's completely open source (disclosure: I'm one of the contributors to Nitrogen).
You can see an example configuration for deploying Redis to a Nitro Enclave here.
And a more detailed blog post walkthrough of deploying any Docker container to a Nitro Enclave here.
Nitrogen is a command line tool with three main commands:
Setup - Spawn an EC2 instance, configure SSH, and establish a VSOCK proxy for interacting with the Nitro Enclave.
Build - Create a Docker image from an arbitrary Dockerfile, and convert it to the Enclave Image File (EIF) format expected by Nitro.
Deploy - Upload your EIF and launch it as a Nitro Enclave. You receive a hostname and port ready to proxy enclave requests to your service.
You can setup, build, and deploy any Dockerfile in a few minutes to your own AWS account.
I would recommend looking into Anjuna Security's offering: https://www.anjuna.io/amazon-nitro-enclaves
Outside of using Anjuna, you could look into the AWS Nitro SDK and use it to build a networking stack over the vsock, or modify an existing sample.
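To make the vsock piece a little more concrete, here is a rough Python sketch of the enclave side only - no attestation and no proxy setup. It assumes you run some forwarder on the parent instance, and the CID and port are placeholders; for Nitro Enclaves the parent instance is typically reachable at CID 3, but verify that for your configuration.

```python
import socket

# Assumed values -- adjust for your enclave/forwarder configuration.
PARENT_CID = 3      # the parent EC2 instance as seen from the enclave (verify)
PROXY_PORT = 8001   # placeholder port your forwarder on the parent listens on

# Inside the enclave: open a vsock stream to the parent instance.
# socket.AF_VSOCK is available in Python 3.7+ on Linux.
sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
sock.connect((PARENT_CID, PROXY_PORT))

# From here on it behaves like any stream socket: whatever protocol your
# forwarder expects (e.g. raw TCP bytes destined for the database, or a TLS
# session terminated inside the enclave) goes over this connection.
sock.sendall(b"ping")
reply = sock.recv(4096)
sock.close()
```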

Can I create a share between an EC2 instance and my local machine?

Just to give you some context: I'm new to the AWS world and all the services it provides.
I have a legacy application for which I need to share some binaries with a client, and I was trying to use an EC2 instance (Amazon Linux AMI) running Samba, mapped as a drive on a local Windows machine.
As a trial run, I was able to establish a connection from another EC2 instance (same VPC), but I wasn't able to do so from my Windows machine, or even from a Linux VM I have.
The inbound rules for this proof-of-concept EC2 instance were fully open (all traffic allowed).
Main question
Is this possible? Can a file system be shared between an EC2 instance and a local machine over the internet?
Just saying:
S3 storage isn't an option.
FSx isn't available in my region yet, and for latency reasons it's a no-go.
Please ask as many questions as you want; I'll try to answer them as quickly as I can.
Kind regards.
TL;DR - it's possible, but there's no 'simple' solution (in my opinion).
I thought of two possible solutions you can implement; here we go.
1: AWS EFS, AWS Direct Connect and Docker
A possible solution would be using AWS Elastic File System (EFS), AWS Direct Connect and a Docker Linux container.
Drawbacks
If this is the first time you've encountered the above AWS services or Docker, it's going to be a bit of a journey to learn them
EFS pricing - it's not so cheap, and you also need to consider inbound and outbound traffic; it's best to use the calculator on the pricing page
EFS performance - if you only share files then it should be okay, but if you expect high speeds, remember that it's not an EBS volume, so for higher throughput you need to pay more
AWS Direct Connect pricing - you also need to take that into consideration
Security - I'm not sure how sensitive your data is, but you need to make sure you create a very strict VPC, with Security Groups and Network Access List rules - read about the VPC Security Best Practices
Steps to implement the solution
Follow the Walkthrough: Create and Mount a File System On-Premises with AWS Direct Connect and VPN; also, here are the steps on how to combine it with Docker
(Optional) To make it a bit easier for Windows to "support" the Linux file system, you can use Windows Git Bash. If you're not sure how to install 3rd-party apps in Windows Git Bash (like aws-vault), then read this blog post
Create an EFS in AWS, and mount it to your EC2 instance, read more about it here
Use AWS Direct Connect to connect to your VPC from your local Windows machine
Install Docker for Windows on your local machine
Create a Docker volume and mount the same EFS to that volume - a good example for this step (see also the sketch after this list)
Test it - SSH to your EC2 instance, create a file on the EFS volume and then check in your local Docker Linux container that this file appears on the EFS volume
I omitted the security steps because it's up to you how strict you want your solution to be.
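As a rough illustration of the Docker-volume step above, here is a sketch using the Docker SDK for Python. The EFS DNS name is a placeholder, and the NFS mount options are assumptions you should check against the EFS mounting documentation for your setup (over Direct Connect you may need a mount-target IP instead of the DNS name).

```python
import docker  # pip install docker

# Placeholder EFS DNS name -- replace with your file system's mount target.
EFS_DNS = "fs-12345678.efs.us-east-1.amazonaws.com"

client = docker.from_env()

# Create a local volume backed by the EFS NFS export.
volume = client.volumes.create(
    name="efs-share",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr={},rw,nfsvers=4.1".format(EFS_DNS),
        "device": ":/",          # export path on the EFS file system
    },
)

# Quick test: list the share's contents from a throwaway container.
output = client.containers.run(
    "alpine",
    "ls -la /data",
    volumes={"efs-share": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
print(output.decode())
```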
2: Using S3 as a shared file-system
You can try out the tool s3fs-fuse, but you'll still need to use a Docker Linux container since you're on Windows. I haven't tested it, but it looks promising. You can read this blog post; it's a step-by-step tutorial on how to do it, and it also covers some other possible solutions.

How to setup shared persistent storage for multiple AWS EC2 instances?

I have a service hosted on Amazon Web Services. There I have multiple EC2 instances running with the exact same setup and data, managed by an Elastic Load Balancer and scaling groups.
Those instances are web servers running web applications based on PHP, so currently the very same files are placed on every instance. But when the ELB / scaling group launches a new instance based on load rules etc., the files might not be up to date.
Additionally, I'd rather use a shared file system for PHP sessions etc. than sticky sessions.
So my question is: for those reasons, and maybe more coming up in the future, I would like to have a shared file system that I can attach to my EC2 instances.
What would you suggest to resolve this? Are there any solutions offered by AWS directly, so I can rely on their services rather than doing it on my own with DRBD and so on? What is the easiest approach? DRBD, NFS, ...? Is S3 also feasible for those purposes?
Thanks in advance.
As mentioned in a comment, AWS has announced EFS (http://aws.amazon.com/efs/) a shared network file system. It is currently in very limited preview, but based on previous AWS services I would hope to see it generally available in the next few months.
In the meantime there are a couple of third party shared file system solutions for AWS such as SoftNAS https://aws.amazon.com/marketplace/pp/B00PJ9FGVU/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1432203627313
S3 is possible but not always ideal; the main blocker is that it does not natively support any filesystem protocols, so all interactions need to go via the AWS API or HTTP calls. Additionally, when looking at using it for session stores, the 'eventually consistent' model will likely cause issues.
That being said, if all you need is updated resources, you could create a simple script, run either as a cron job or on startup, that downloads the files from S3.
Finally, in the case of static resources like CSS/images, don't store them on your web server in the first place - there are plenty of articles covering the benefits of storing and serving static web resources directly from S3 while keeping the dynamic stuff on your server.
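For example, the cron/startup sync script mentioned above could be a small boto3 sketch along these lines (the bucket name, key prefix, and web root are placeholders):

```python
import os
import boto3

# Placeholders -- replace with your bucket, key prefix, and web root.
BUCKET = "my-app-assets"
PREFIX = "webroot/"
DEST = "/var/www/html"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Walk every object under the prefix and download it into the web root,
# preserving the relative key path as the local directory structure.
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):            # skip "folder" marker objects
            continue
        rel_path = key[len(PREFIX):]
        local_path = os.path.join(DEST, rel_path)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
```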
From what we can tell at this point, EFS is expected to provide basic NFS file sharing on SSD-backed storage. Once available, it will be a v1.0 proprietary file system. There is no encryption, and it's AWS-only. The data is completely under AWS control.
SoftNAS is a mature, proven, advanced ZFS-based NAS filer that is full-featured: encrypted EBS and S3 storage, storage snapshots for data protection, writable clones for DevOps and QA testing, RAM and SSD caching for maximum IOPS and throughput, deduplication and compression, cross-zone HA, and a 100% up-time SLA. It supports NFS with LDAP and Active Directory authentication, CIFS/SMB with AD users/groups, iSCSI multi-pathing, FTP and (soon) AFP. SoftNAS instances and all storage are completely under your control, and you have complete control of the EBS and S3 encryption and keys (you can use EBS encryption or any Linux-compatible encryption and key-management approach you prefer or require).
The ZFS filesystem is a proven filesystem that is trusted by thousands of enterprises globally. Customers are running more than 600 million files in production on SoftNAS today - ZFS is capable of scaling into the billions.
SoftNAS is cross-platform and runs on cloud platforms other than AWS, including Azure, CenturyLink Cloud, Faction cloud, VMware vSphere/ESXi, VMware vCloud Air and Hyper-V, so your data is not limited to or locked into AWS. More platforms are planned. It provides cross-platform replication, making it easy to migrate data between any supported public cloud, private cloud, or premises-based data center.
SoftNAS is backed by industry-leading technical support from cloud storage specialists (it's all we do), something you may need or want.
Those are some of the more noteworthy differences between EFS and SoftNAS. For a more detailed comparison chart:
https://www.softnas.com/wp/nas-storage/softnas-cloud-aws-nfs-cifs/how-does-it-compare/
If you are willing to roll your own HA NFS cluster, and be responsible for its care, feeding and support, then you can use Linux and DRBD/corosync or any number of other Linux clustering approaches. You will have to support it yourself and be responsible for whatever happens.
There's also GlusterFS. It does well up to 250,000 files (in our testing) and has been observed to suffer from an IOPS brownout when approaching 1 million files, and IOPS blackouts above 1 million files (according to customers who have used it). For smaller deployments it reportedly works reasonably well.
Hope that helps.
CTO - SoftNAS
For keeping your web server sessions in sync, you can easily switch to Redis or Memcached as your session handler. This is a simple setting in php.ini, and all servers can use the same Redis or Memcached server for sessions. You can use Amazon's ElastiCache, which will manage the Redis or Memcached instance for you.
http://phpave.com/redis-as-a-php-session-handler/ <- explains how to setup Redis with PHP pretty easily
Keeping your files in sync is a little bit more complicated.
How do I push new code changes to all my web servers?
You could use Git. When you deploy, you can set up multiple remotes and push your branch (master) to all of them, so every new build goes out to every web server.
What about new machines that launch?
I would set up new machines to run an rsync script from a trusted source, your master web server. That way they sync their web folders with the master when they boot, and they would be identical even if the AMI had old web files in it.
What about files that change and need to be updated live?
Store any user-uploaded files in S3. If a user uploads a document on server 1, the file is stored in S3 and its location is stored in a database. Then if a different user is on server 2, they can see the same file and access it as if it were on server 2: the file is retrieved from S3 and served to the client.
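For illustration, here is what that flow could look like as a small boto3 sketch in Python (the bucket name and key scheme are placeholders; a PHP application would do the same thing with the AWS SDK for PHP, and persisting the key in your database is left out):

```python
import uuid
import boto3

BUCKET = "my-app-uploads"   # placeholder bucket name
s3 = boto3.client("s3")

def store_upload(local_path, original_name):
    """Upload a user file to S3 and return the key to record in the database."""
    key = "uploads/{}/{}".format(uuid.uuid4(), original_name)
    s3.upload_file(local_path, BUCKET, key)
    return key

def url_for_download(key, expires=3600):
    """Any web server can hand the client a temporary link to the same object."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )
```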
GlusterFS is also an open-source distributed file system that many use to create shared storage across EC2 instances.
Until Amazon EFS reaches production, the best approach in my opinion is to build a storage backend exporting NFS from EC2 instances, maybe using Pacemaker/Corosync to achieve HA.
You could create an EBS volume that stores the files and instruct Pacemaker to unmount/detach and then attach/mount the EBS volume on the healthy NFS cluster node.
Hi, we currently use a product called SoftNAS in our AWS environment. It allows us to choose between both EBS- and S3-backed storage. It has built-in replication as well as a high-availability option. It may be something you can check out; I believe they offer a free trial you can try out on AWS.
We are using ObjectiveFS and it is working well for us. It uses S3 for storage and is straight forward to set up.
They've also written a doc on how to share files between EC2 instances.
http://objectivefs.com/howto/how-to-share-files-between-ec2-instances

iSCSI connected to AWS Storage Gateway, but it keeps asking for formatting

All right, I am pretty much a newbie to network and storage things, but based on my research, we need to use AWS S3 to back up data. Sounds simple enough!
So I followed the AWS Storage Gateway User Guide (API version 2013-06-30).
Below are details I could provide based on my best knowledge:
Gateway-cached
Gateway on-premises, using VMware ESXi Hypervisor
About 300 GB of cache and a 150 GB upload buffer
And
AWS gateway is deployed and activated
Cache storage and upload buffer configured on VM
A volume in Amazon S3 is created.
After all of the above was completed, I tried to use the Windows 8 iSCSI initiator to connect to the VM. It showed up as a disk, so I did an initial format. But after this, it asks for formatting again.
I followed the guide, but unfortunately it didn't work for me this time. Could anyone provide any insight on this issue? Thanks very much in advance.
It turns out that I don't understand how iSCSI works.
Quoted from Amazon Storage Gateway User Guide:
Each of your storage volumes is exposed as an iSCSI target. Connect only one iSCSI initiator to each iSCSI target
Since I thought of the iSCSI target as a network shared drive, I let multiple machines connect to the same iSCSI target, which resulted in the repeated requests for formatting.

Limitations of Amazon Machine Image (AMI)

As Amazon Web Services mentions,
An Amazon Machine Image (AMI) is a special type of pre-configured operating system and virtual application software which is used to create a virtual machine within the Amazon Elastic Compute Cloud (EC2). It serves as the basic unit of deployment for services delivered using EC2.
So an AMI is a kind of virtual machine which we can use as a server to host our applications. So my questions are:
What are the limitations of an AMI compared to a normal server?
Can we install any software on an AMI?
An AMI is not a running server; it is like a "backup of your server" on disk. Using this backup you can bring up running servers instantly.
The concept of imaging a server is used in all IT companies. Suppose an IT company gives Windows 7 laptops to new joiners with a few pieces of software pre-installed. Instead of configuring the laptop for every joiner, they create an image and simply write that image onto each new laptop before handing it over.
The same kind of image, when created on Amazon Web Services, is called an AMI. An AMI is just an image of a server: you first get your machine ready (with whatever operating system and software you need on it) and then create an image of it, which is called an AMI.
To answer your questions:
1. Not sure which limitations you are referring to, but as such there is no particular limitation.
2. Yes, you can install any software. To make it clear: you could even install a virus on the server and then create an AMI from it.