How can we share files between multiple Windows VMs in GCP? - google-cloud-platform

I have 10 Windows VMs and I want a persistent disk mounted read-write on all of them. But I have learned that a disk cannot be mounted to multiple VMs in read-write mode, so I am looking for an option that lets me access the same storage from any of those VMs. On Linux we can use Cloud Storage FUSE to mount a Cloud Storage bucket as a disk; is there any option for Windows that lets us mount a single disk or Cloud Storage bucket on multiple Windows VMs?

If it specifically has to be a GCP disk, your best option is to set up an additional Windows instance and share a disk with the other instances over SMB.
Another option, if you don't want things to get too messy, is the Filestore service (https://cloud.google.com/filestore/), which is NFS as a service, provided you have an NFS client for your Windows version.

I believe you could use Google Cloud Storage buckets as an intermediate transfer point between your instances, regardless of OS.
Upload your files from your workstation to a Cloud Storage bucket, then download those files from the bucket to your instances. When you need to transfer files in the other direction, reverse the process: upload the files from your instance, then download them to your workstation.
To achieve this, follow these steps:
1. Create a new Cloud Storage bucket, or identify an existing bucket that you want to use to transfer files.
2. Upload files to the bucket.
3. Connect to your instance using RDP.
4. Upload/download files from the bucket.
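The steps above can be sketched with the gsutil CLI; the bucket name and file paths below are placeholders, and the VM-side commands assume the Cloud SDK Shell is installed on the Windows instance:

```shell
# Create a bucket to act as the transfer point (name is a placeholder)
gsutil mb gs://my-transfer-bucket

# From your workstation: upload the files you want to share
gsutil cp ./reports/*.xml gs://my-transfer-bucket/reports/

# From inside a VM (over RDP, in the Cloud SDK Shell): download them
gsutil cp gs://my-transfer-bucket/reports/* C:\reports\

# Reverse direction: upload a result from the VM for the workstation to fetch
gsutil cp C:\output\result.csv gs://my-transfer-bucket/output/
```

Since every VM talks to the same bucket, any of the 10 instances can pick up files uploaded by any other.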
However, there are other options as well, such as running file servers on Compute Engine, or the following:
Cloud Storage
Compute Engine persistent disks
Single Node File Server
Elastifile
Quobyte
Avere vFXT
Each of these options has its own advantages and disadvantages; for more details, follow the links attached to each of them.

Related

How to use GCP bucket data on windows file system

I want to access GCP bucket data on windows 10 as file system.
GCP provide FUSE for mac and Linux, Is there any way to mount GCP bucket with Windows.
According to the Cloud Storage FUSE documentation:
Cloud Storage FUSE is an open source FUSE adapter that allows you to mount Cloud Storage buckets as file systems on Linux or macOS systems.
As a result, there is no easy way of using it on the Windows systems.
There are a few possible ways to solve it:
Rclone
Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
Install Rclone and WinFsp. Remember to add Rclone location to your PATH.
Follow the instructions to set up your remote GCP bucket. If your GCP bucket uses uniform bucket-level access, remember to set the --gcs-bucket-policy-only option to true when configuring the Rclone remote drive.
Mount the remote GCP bucket as a local disk
rclone mount remote:path/to/files X:
where X: is an unused drive letter.
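Put together, a minimal Rclone setup might look like this; the remote name `gcs` and the bucket name are placeholders, and the `bucket_policy_only` setting applies only if your bucket uses uniform bucket-level access:

```shell
# Create a remote named "gcs" pointing at Google Cloud Storage
rclone config create gcs "google cloud storage" bucket_policy_only true

# Mount a bucket (or a path inside it) as drive X:
# --vfs-cache-mode writes buffers writes locally, which is commonly
# recommended for mounts on Windows
rclone mount gcs:my-bucket/photos X: --vfs-cache-mode writes
```

The mount stays active as long as the `rclone mount` process runs, so on Windows you would typically run it as a service or scheduled task.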
GcsFuse-Win:
GcsFuse-Win is a distributed FUSE-based file system backed by the Google Cloud Storage service. It is the first open-source native version of gcsfuse on Windows. It allows you to mount buckets or folders from your storage as local folders/drives on a Windows system. It also supports cluster mode, so you can mount a bucket (or part of it) across multiple Windows nodes.
CloudBerry Drive (proprietary software):
Mount cloud storage as a network drive to your Windows workstation or Windows Server
Mount bucket with FUSE into a Linux instance and share it via network Samba/NFS.
Another three tools that may be useful here are:
NetDrive - this is paid software (~50 USD per lifetime licence). You can try it out for free, however (7-day trial), and it will actually allow you to mount whatever GCP storage you have as a filesystem in Windows.
Mountain Duck - has very similar abilities and also allows mounting GCP storage in Windows - it's slightly cheaper (~40 USD for one user), but the licence is valid for a specific major version.
CyberDuck - is the free (or libre, as the official page states) version of Mountain Duck - it doesn't allow mounting resources as a filesystem, but it still lets you access any cloud storage via its simple and intuitive interface.

Is there a way i can use GCP Storage as external drive?

I'm a hobby photographer and take load of raw photos. I was wondering if there is a possibility to get rid of my external drive and use GCP Cloud Storage instead. I would require to access, read, write files directly from Adobe LightRoom.
Can I have a drive displayed in My PC on Windows 10? Just like I can see C: and D:, I'd like to see a gs: drive there.
Thanks for the help!
If you want to use a GCP storage service as a network drive, that's what Cloud Filestore does.
This is not Google Cloud Storage, but it can be another option.
You can mount a Cloud Filestore file share on your Windows 10 filesystem.
This allows you to use it as you would your usual C: and D: drives.
But it is a little bit expensive, so you need to compare it with other options.
Here is the GCP price calculator.
There is a GcsFuse-Win version which is backed by the Google Cloud Storage service. This is the first gcsfuse for Windows; it allows you to mount buckets/folders from your storage as local folders/drives on a Windows system. To install the service on Windows you need a GCP account. I suggest you read about the limitations before deploying it in production.

Is it possible to create multiple google cloud instances and upload files to and download files from all of them at once?

I just started using Google Cloud. I want to create 10 virtual machines and upload files to them to run various scripts.
I have been doing it manually one by one. Is it possible to automate creating the servers all at the same time?
I have already tried using managed instance groups, but they are always on and they scale automatically, I need to control them individually.
Also, can I use a tool to upload files to all of them at once and download all the files from them at the same time?
You can automate creating VMs in various ways:
Use the Google Cloud SDK (either installed locally or through Cloud Shell). You can choose between public images or create a custom image to suit your needs. Or, if you just need some minor changes, use startup scripts when creating a VM to install/configure your apps.
Use Deployment Manager - create some templates of the VMs to deploy, and with a single command spin up a number of them.
Both solutions give you complete control over the parameters of the VMs created. You can even upload files you need (or download them directly to the new VMs).
Uploading files to the VMs is also possible via the gcloud utility - you can create a script to upload any files to the VMs (basic shell scripting experience required).
Lastly - here you can read more about the storage solutions available in GCP - but I'm guessing you will be using persistent disks and buckets. You can easily connect a bucket to your VM, mount it as a filesystem, or just copy files to/from your VMs.
Ultimately you can read here about various ways of transferring files to the GCP instances.
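As a rough sketch, creating 10 identical VMs and pushing files to all of them could look like this with gcloud; the instance names, zone, image, and file paths are placeholders:

```shell
# Create 10 identical VMs in a single command (gcloud accepts multiple names)
gcloud compute instances create vm-1 vm-2 vm-3 vm-4 vm-5 vm-6 vm-7 vm-8 vm-9 vm-10 \
    --zone=us-central1-a \
    --image-family=debian-11 --image-project=debian-cloud

# Copy a script to every instance
for i in $(seq 1 10); do
    gcloud compute scp ./run.sh "vm-$i":~/run.sh --zone=us-central1-a
done

# Later, fetch results back from every instance
for i in $(seq 1 10); do
    gcloud compute scp "vm-$i":~/results.txt "./results-$i.txt" --zone=us-central1-a
done
```

Unlike a managed instance group, VMs created this way do not autoscale, so you can stop, start, or delete each one individually.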

AWS storage choice for file system S3 or EFS?

I want to use file system to store xml files being received from SFTP connection to EC2 instance. Which storage to choose S3 or EFS? Once files are stored, I want to read the files and process data.
My understanding is that we should choose EFS as S3 is not recommended to mount a file system. Also, it is easy to manage directories and sub-directories permission with EFS.
The decision should depend on the budget and requirements as well.
If you want to read the files and process data, then you can choose EFS:
Amazon EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazon Cloud. With a few clicks in the AWS Management Console, you can create file systems that are accessible to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) and supports full file system access semantics (such as strong consistency and file locking).
Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousands of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provides consistent performance to each Amazon EC2 instance. Amazon EFS is designed to be highly durable and highly available. With Amazon EFS, there are no minimum fee or setup costs, and you pay only for the storage you use.
And S3 would be an alternative solution if you want to download/upload the files/objects from different client platforms like Android, iOS, the web, etc.
It's hard to tell since you didn't specify the average file size, estimated storage requirements and the file usage pattern. The price difference between S3 and EFS is also an essential factor to consider.
Example:
EC2 instance receives a file, processes it immediately and store results to the database. XML is just stored for backup afterward and should be long-term archived for audit or recovery purposes.
In this case, I would recommend S3 and lifecycle policies to migrate data to the Glacier service for long-term archiving automatically.
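A lifecycle rule along those lines could be sketched with the AWS CLI; the bucket name, prefix, and 30-day timing below are placeholder assumptions:

```shell
# Transition processed XML backups to Glacier 30 days after creation
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-xml-backups \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-xml",
        "Filter": {"Prefix": "processed/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
      }]
    }'
```

After that, archiving happens automatically: the EC2 instance just writes the processed file under `processed/` and S3 handles the migration to Glacier.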
Yes, for your use case it would be better if you choose EFS as it is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily.
When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
https://aws.amazon.com/documentation/efs/

How to setup shared persistent storage for multiple AWS EC2 instances?

I have a service hosted on Amazon Web Services. There I have multiple EC2 instances running with the exact same setup and data, managed by an Elastic Load Balancer and scaling groups.
Those instances are web servers running web applications based on PHP. So currently there are the very same files etc. placed on every instance. But when the ELB / scaling group launches a new instance based on load rules etc., the files might not be up-to-date.
Additionally, I'd rather like to use a shared file system for PHP sessions etc. than sticky sessions.
So, for those reasons (and maybe more coming up in the future), I would like to have a shared file system that I can attach to my EC2 instances.
What way would you suggest to resolve this? Are there any solutions offered by AWS directly, so I can rely on their services rather than doing it on my own with DRBD and so on? What is the easiest approach - DRBD, NFS, ...? Is S3 also feasible for this purpose?
Thanks in advance.
As mentioned in a comment, AWS has announced EFS (http://aws.amazon.com/efs/) a shared network file system. It is currently in very limited preview, but based on previous AWS services I would hope to see it generally available in the next few months.
In the meantime there are a couple of third party shared file system solutions for AWS such as SoftNAS https://aws.amazon.com/marketplace/pp/B00PJ9FGVU/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1432203627313
S3 is possible but not always ideal, the main blocker being it does not natively support any filesystem protocols, instead all interactions need to be via an AWS API or via http calls. Additionally when looking at using it for session stores the 'eventually consistent' model will likely cause issues.
That being said - if all you need is updated resources, you could create a simple script to run either as a cron or on startup that downloads the files from s3.
Finally in the case of static resources like css/images don't store them on your webserver in the first place - there are plenty of articles covering the benefit of storing and accessing static web resources directly from s3 while keeping the dynamic stuff on your server.
From what we can tell at this point, EFS is expected to provide basic NFS file sharing on SSD-backed storage. Once available, it will be a v1.0 proprietary file system. There is no encryption, and it's AWS-only. The data is completely under AWS control.
SoftNAS is a mature, proven advanced ZFS-based NAS Filer that is full-featured, including encrypted EBS and S3 storage, storage snapshots for data protection, writable clones for DevOps and QA testing, RAM and SSD caching for maximum IOPS and throughput, deduplication and compression, cross-zone HA and a 100% up-time SLA. It supports NFS with LDAP and Active Directory authentication, CIFS/SMB with AD users/groups, iSCSI multi-pathing, FTP and (soon) AFP. SoftNAS instances and all storage is completely under your control and you have complete control of the EBS and S3 encryption and keys (you can use EBS encryption or any Linux compatible encryption and key management approach you prefer or require).
The ZFS filesystem is a proven filesystem that is trusted by thousands of enterprises globally. Customers are running more than 600 million files in production on SoftNAS today - ZFS is capable of scaling into the billions.
SoftNAS is cross-platform, and runs on cloud platforms other than AWS, including Azure, CenturyLink Cloud, Faction cloud, VMware vSphere/ESXi, VMware vCloud Air and Hyper-V, so your data is not limited or locked into AWS. More platforms are planned. It provides cross-platform replication, making it easy to migrate data between any supported public cloud, private cloud, or premises-based data center.
SoftNAS is backed by industry-leading technical support from cloud storage specialists (it's all we do), something you may need or want.
Those are some of the more noteworthy differences between EFS and SoftNAS. For a more detailed comparison chart:
https://www.softnas.com/wp/nas-storage/softnas-cloud-aws-nfs-cifs/how-does-it-compare/
If you are willing to roll your own HA NFS cluster, and be responsible for its care, feeding and support, then you can use Linux and DRBD/corosync or any number of other Linux clustering approaches. You will have to support it yourself and be responsible for whatever happens.
There's also GlusterFS. It does well up to 250,000 files (in our testing) and has been observed to suffer from an IOPS brownout when approaching 1 million files, and IOPS blackouts above 1 million files (according to customers who have used it). For smaller deployments it reportedly works reasonably well.
Hope that helps.
CTO - SoftNAS
For keeping your webserver sessions in sync, you can easily switch to Redis or Memcached as your session handler. This is a simple setting in php.ini, and all servers can use the same Redis or Memcached server for sessions. You can use Amazon's ElastiCache, which will manage the Redis or Memcached instance for you.
http://phpave.com/redis-as-a-php-session-handler/ <- explains how to setup Redis with PHP pretty easily
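The php.ini change itself is small. A minimal sketch, assuming the phpredis extension is installed and Apache on Amazon Linux; the ElastiCache endpoint is a placeholder:

```shell
# Point PHP sessions at a shared Redis instance (endpoint is a placeholder)
sudo tee /etc/php.d/redis-session.ini > /dev/null <<'EOF'
session.save_handler = redis
session.save_path = "tcp://my-sessions.abc123.use1.cache.amazonaws.com:6379"
EOF

sudo systemctl restart httpd   # or php-fpm, depending on your stack
```

Apply the same setting on every webserver and all of them will read and write sessions from the one Redis instance, making sticky sessions unnecessary.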
Keeping your files in sync is a little bit more complicated.
How do I push new code changes to all my webservers?
You could use Git. When you deploy, you can set up multiple remotes and push your branch (master) to all of the servers, so every new build goes out to every webserver.
What about new machines that launch?
I would set up new machines to run an rsync script from a trusted source, your master web server. That way they sync their web folders with the master when they boot, and would be identical even if the AMI had old web files in it.
What about files that change and need to be live updated?
Store any user-uploaded files in S3. So if a user uploads a document on server 1, the file is stored in S3 and its location is stored in a database. Then if a different user is on server 2, they can see the same file and access it as if it were on server 2. The file would be retrieved from S3 and served to the client.
GlusterFS is also an open-source distributed file system used by many to create shared storage across EC2 instances.
Until Amazon EFS hits production the best approach in my opinion is to build a storage backend exporting NFS from EC2 instances, maybe using Pacemaker/Corosync to achieve HA.
You could create an EBS volume that stores the files, and instruct Pacemaker to unmount/detach and then attach/mount the EBS volume on the healthy NFS cluster node.
We currently use a product called SoftNAS in our AWS environment. It allows us to choose between both EBS- and S3-backed storage. It has built-in replication as well as a high-availability option, so it may be something you can check out. I believe they offer a free trial you can try on AWS.
We are using ObjectiveFS and it is working well for us. It uses S3 for storage and is straight forward to set up.
They've also written a doc on how to share files between EC2 instances.
http://objectivefs.com/howto/how-to-share-files-between-ec2-instances