AWS Storage Gateway - amazon-web-services

Is there a way to present on-prem storage to AWS EC2 instances without copying it to S3, etc.? We have a storage array in our DC; I want to carve up LUNs and present them to our apps running on EC2 instances in AWS using Storage Gateway. Can someone suggest if this is possible at all? I don't want to pay for AWS EBS volumes when I have plenty of storage available with me. Thanks.

You will not be able to do this with Storage Gateway - it serves as an interface between your local network and S3.
If you want to expose your local disk to EC2, you will need to run a file share (NFS or Samba), set up a VPN or Direct Connect between your data center and your AWS VPC, and then mount the exported volumes on your EC2 instance.
If you don't want to pay for any EBS volumes, you should look for instance store-backed AMIs. Keep in mind that you cannot stop these instances - you can only terminate them.
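If you go the instance-store route, one way to find such AMIs is to filter on the root device type. A minimal boto3 sketch, assuming the us-east-1 region and Amazon-owned images (adjust both to taste):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# List AMIs whose root device is instance store rather than EBS.
response = ec2.describe_images(
    Owners=["amazon"],  # assumption: restrict to Amazon-owned images
    Filters=[
        {"Name": "root-device-type", "Values": ["instance-store"]},
        {"Name": "architecture", "Values": ["x86_64"]},
    ],
)

for image in response["Images"][:10]:
    print(image["ImageId"], image.get("Name"))
```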

With more recent AWS architecture and infrastructure developments, AWS cloud resources can access on-premises storage. Not only compute instances but also AWS managed services can do the same. Standard protocols such as NFS, SMB, and iSCSI are available, and you can mount the remote storage on AWS compute instances through a site-to-site VPN.
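If you do go the site-to-site VPN route mentioned in both answers, the AWS side can be provisioned with a handful of API calls. A rough boto3 sketch, where the ASN, public IP, and VPC ID are all placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Customer gateway: represents the on-premises device terminating the tunnel.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,               # placeholder ASN
    PublicIp="203.0.113.10",    # placeholder public IP of the on-prem router/firewall
    Type="ipsec.1",
)["CustomerGateway"]

# Virtual private gateway attached to the VPC hosting the EC2 instances.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGatewayId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# The VPN connection itself; tunnel configuration comes back in the response.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```

Once the tunnel is up and routes are in place, the instances mount the NFS/SMB exports (or log in to the iSCSI targets) over the private connection as usual.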

Related

Is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster?

I wanted to use some AWS EBS volumes as persistent storage for a deployment. I've configured the storage class and a PV, but I haven't been able to configure a cloud provider.
The K8s documentation (as far as I understand it) is for Kubernetes clusters running on a specific cloud provider, rather than for an on-prem cluster using cloud resources. As the title says: is it possible to have AWS EBS persistent volumes on an on-prem K8s cluster?
If so, can you add a cloud provider to your existing cluster? (Everything I've found online suggests that you add it when running kubeadm init.)
Thank you!
You cannot use EBS storage in the same manner as you would when running on the cloud, but you can use AWS Storage Gateway to store snapshots/backups of your volumes in the cloud.
AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud.
The feature you are interested in is called Volume Gateway:
The Volume Gateway presents your applications block storage volumes using the iSCSI protocol. Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes, and stored in the cloud as Amazon EBS snapshots.
Unfortunately, you might not be able to automate volume creation the way you could when running directly on AWS, so you may have to do some things manually.
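For what it's worth, taking one of those point-in-time snapshots of a Volume Gateway volume can be scripted; here is a hedged boto3 sketch where the volume ARN is a placeholder:

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # region is an assumption

# Placeholder ARN of an existing Volume Gateway volume (the iSCSI target on-prem).
volume_arn = (
    "arn:aws:storagegateway:us-east-1:123456789012:"
    "gateway/sgw-12A3456B/volume/vol-1122AABB"
)

# Take a point-in-time snapshot; it is stored in AWS as an Amazon EBS snapshot,
# which could later be turned into an EBS volume for use by EC2 workloads.
snapshot = sgw.create_snapshot(
    VolumeARN=volume_arn,
    SnapshotDescription="nightly backup of on-prem volume",
)
print(snapshot["SnapshotId"])
```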
No, you cannot, because EBS can only be mounted inside AWS (usually on EC2 instances).

List of AWS services that don’t require a VPC to run

Google failed me again, or maybe I wasn't too clear in my question.
Is there an easy way to determine, or rather how do we determine, which services are VPC-bound and which services are non-VPC?
For example, EC2 and RDS require a VPC setup;
Lambda and S3 are publicly available services and don't need a VPC setup.
The basic services that require an Amazon VPC are all related to Amazon EC2 instances, such as:
Amazon RDS
Amazon EMR
Amazon Redshift
Amazon Elasticsearch
AWS Elastic Beanstalk
etc
These resources run "on top" of Amazon EC2 and therefore connect to a VPC.
There are also other services that use a VPC, but you would only use them if you are using some of the above services, such as:
Elastic Load Balancer
NAT Gateway
So, if you wish to run "completely non-VPC", then avoid services that are "deployed". That means you would use AWS Lambda for compute, probably DynamoDB for the database, Amazon S3 for object storage, etc. This is otherwise referred to as going "serverless".
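To make the "completely non-VPC" pattern concrete, here is a minimal sketch of a Lambda handler writing to DynamoDB; the table name, item attributes, and event shape are assumptions for illustration:

```python
import json

import boto3

# Neither Lambda nor DynamoDB requires a VPC: the function runs inside the
# Lambda service and reaches DynamoDB over its AWS-managed public endpoint.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name


def handler(event, context):
    # Assumption: 'event' is an API Gateway proxy payload with a JSON body.
    body = json.loads(event.get("body", "{}"))
    table.put_item(Item={"order_id": body["order_id"], "status": "received"})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```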

Accessing AWS DocumentDB from a separate VPC using VPC Sharing?

The latest DocumentDB documentation states that a jump host is necessary for accessing the database from outside its native VPC:
By design, you access Amazon DocumentDB (with MongoDB compatibility) resources from an Amazon EC2 instance within the same Amazon VPC as the Amazon DocumentDB resources. However, suppose that your use case requires that you or your application access your Amazon DocumentDB resources from outside the cluster's Amazon VPC. In that case, you can use SSH tunneling (also known as "port forwarding") to access your Amazon DocumentDB resources.
However, VPC sharing seems to allow multiple accounts/VPCs to share the same resources.
Is it possible to use VPC sharing to access a DocumentDB resource in another VPC without having to use jump hosts?
Thank you in advance for your consideration and response.
Yes.
https://aws.amazon.com/documentdb/faqs/
Amazon DocumentDB clusters deployed within a VPC can be accessed directly by EC2 instances or other AWS services that are deployed in the same VPC. Additionally, Amazon DocumentDB can be accessed by EC2 instances or other AWS services in different VPCs in the same region or other regions via VPC peering.
We will get the documentation updated.
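As a rough illustration of the peering route (rather than VPC sharing), the two VPCs can be connected and routed with a few calls; all IDs and CIDRs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Request a peering connection from the application VPC to the DocumentDB VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",      # placeholder: VPC with your clients
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",  # placeholder: VPC hosting DocumentDB
)["VpcPeeringConnection"]

# Accept it (same account and region here; a peer account accepts on its side).
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Route traffic destined for the DocumentDB VPC's CIDR through the peering link.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder route table of the app VPC
    DestinationCidrBlock="10.1.0.0/16",    # placeholder CIDR of the DocumentDB VPC
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"],
)
```

You would also need the reverse route in the DocumentDB VPC's route table and a security-group rule on the cluster allowing its port from the peer VPC's CIDR.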

How does an EBS volume in an Availability zone get restricted only to a specific AWS account & its users?

In AWS, an EC2 instance is launched within a subnet created in an Availability Zone, which is in turn in a VPC. So the VPC can be thought of as a container to which only the AWS account and its users have access. But when creating EBS volumes, only the Availability Zone is asked for/provided, and the same EBS volume can be attached to any EC2 instance irrespective of the VPC it belongs to (only within the same AWS account, of course). My question is: how does AWS prevent other AWS accounts from seeing this EBS volume present in the AZ? Is that implementation abstracted by AWS?
An Amazon VPC is a virtual construct that is used to connect virtual computers using traditional networking concepts. Resources (e.g. EC2 instances, RDS databases) can be connected via a VPC, which determines how network traffic flows between them. It does not necessarily reflect where the resources are physically created.
An Availability Zone is a physical data center (or a group of data centers). Resources are created in an AZ, which determines their physical location. For example, an Amazon EBS volume resides in a data center, so it is in only one AZ. It can be logically connected to any EC2 instance in the same account in the same AZ.
Amazon EBS volumes are connected via a backplane that is invisible to the resources. It just magically "attaches" to the instance. It does not use the same network as a VPC.
The Amazon EBS service will only provide EBS volumes to EC2 instances in the same AWS account.
According to the AWS Shared Responsibility Model:
AWS responsibility "Security of the Cloud" - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
AWS provides isolation of all resources between accounts; this implementation is abstracted away and is part of AWS's responsibility.
In addition, it is recommended to encrypt EBS volumes; encryption is free and doesn't impact volume performance.
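If you want to enforce that recommendation, EBS encryption can be turned on by default per region, or set per volume at creation time. A small boto3 sketch, with the region and Availability Zone as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Make every newly created EBS volume in this region encrypted by default
# (uses the AWS-managed key unless a different default KMS key is configured).
ec2.enable_ebs_encryption_by_default()

# Or encrypt an individual volume when creating it.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB
    VolumeType="gp3",
    Encrypted=True,
)
print(volume["VolumeId"])
```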

Deploying a MySQL Galera cluster on AWS: EC2 vs VPC?

I am trying to deploy a Galera cluster on AWS. Is it a good idea to use a VPC, or to make a cluster with 2-3 open EC2 instances? What are the pros and cons?
Also, is there any extra billing for VPC? Any help will be great!!
I am not sure how the Galera installation differs between EC2 instances inside a VPC and plain EC2 instances.
One suggestion I would add is to consider RDS, the database-as-a-service from AWS; I don't know whether that would satisfy your need to use Galera.
Regarding pricing, the VPC itself is free; you only pay for the underlying EC2 instances, Elastic IPs, data transfer, outbound bandwidth, etc. If you are going to connect your local data center to the VPC using a VPN gateway, that is charged.
No, there is no extra cost for a VPC [only for the resources used in it].
With Galera you can have a multi-master architecture [I have not implemented it], but with RDS you cannot. I have set up a disaster-recovery plan with RDS where a multi-master architecture would have eliminated the downtime, but instead I set it up using a read replica that gets promoted to master. That's the way AWS RDS works.
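For reference, that promotion step can be scripted as part of a disaster-recovery runbook; a hedged boto3 sketch with a placeholder replica identifier:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Promote a read replica to a standalone, writable DB instance.
rds.promote_read_replica(
    DBInstanceIdentifier="mydb-replica",  # placeholder replica name
    BackupRetentionPeriod=7,              # keep 7 days of automated backups afterwards
)

# Wait until the promoted instance is available before repointing the application.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="mydb-replica")
```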