Upload or download files to Filestore mounted on GKE

I have mounted a Filestore instance in my Kubernetes cluster. Can I upload some files from my local computer to the Filestore instance? It would be like uploading files to google drive, but in Filestore.

Connecting to Filestore from outside the VPC network is not possible.
But it is possible to upload/download files from a machine outside the VPC network to Google Cloud Storage.
So the solution is to upload the files to Cloud Storage and then copy them into the cluster using a pod that executes gsutil rsync gs://bucket mount-directory, as sketched below.
The pod can be replaced by a VM connected to the VPC network.
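A minimal sketch of the two steps, assuming a hypothetical bucket named my-bucket and a pod that already has the Filestore volume mounted at /mnt/filestore with the Cloud SDK available:

    # From your local machine: upload the files to Cloud Storage
    gsutil -m rsync -r ./local-dir gs://my-bucket/files

    # From inside the pod (or a VM in the VPC): copy them onto the Filestore share
    gsutil -m rsync -r gs://my-bucket/files /mnt/filestore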

Because the files you would like to upload are configuration and credentials files for your app, Filestore doesn't look like the best choice. Filestore could be justified if you need some data mounted as ReadWriteMany, but its minimum volume size is 1 TiB, which is far too much for configs and credentials.
Instead, a better way to achieve your goal would be to use ConfigMaps to store configuration files and Secrets to store credentials (see the sketch after the quote). This should also be the cheaper option:
You are charged based on the provisioned capacity, not based on the capacity used. For example, if you create a 1 TiB instance and store 100 GiB of data on it, you incur charges for the entire 1 TiB.
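A minimal sketch of the ConfigMap/Secret approach, assuming hypothetical object and file names:

    # Store a configuration file as a ConfigMap
    kubectl create configmap app-config --from-file=app.properties

    # Store a credentials file as a Secret
    kubectl create secret generic app-credentials --from-file=credentials.json

Both can then be mounted into pods as volumes or exposed as environment variables, with no Filestore instance involved.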

Related

What service should I use if I want to share a volume across multiple Windows EC2 instances?

I've looked at EFS and EBS Multi-Attach, but neither support Windows. I've looked at S3, but it's blob storage (I guess?), not a proper volume.
I just want a simple volume that I can mount from multiple instances. I realize I could just set up a share on one of the instances, but I'd prefer it not be instance-dependent (i.e. if I do scaling, etc.).
Ideally I'd be able to access it as a mounted disk, but I'd also be fine with a mapped network drive.
Is there an AWS service that will do this?
I think you can mount an S3 bucket on EC2 instances through an S3 File Gateway.
Reference:
https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/
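A minimal sketch of the Linux side, assuming a hypothetical File Gateway at private IP 10.0.0.5 exposing the bucket my-bucket as an NFS share (the gateway can also expose it as an SMB share, which Windows can map as a network drive):

    # Mount the gateway's NFS share backed by the S3 bucket
    sudo mkdir -p /mnt/s3
    sudo mount -t nfs -o nolock,hard 10.0.0.5:/my-bucket /mnt/s3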

What is the concept of attached disk and the default min/max disk space provided by the AWS Lightsail container service?

I have created an AWS Lightsail container service (Micro). Uploaded a docker image and made the deployment.
The project is a web API with a POST endpoint that expects a file: it saves the file in a folder, extracts the keywords from the file, deletes the file, and replies with a JSON response.
I think that when the container service restarts (for example due to an API crash or whatever reason), it will create a new instance of the image and hence the disk will get reset.
I am not able to locate any documentation regarding the concept of an attached disk, or the default min/max disk space provided by the AWS Lightsail container service.
when the container service restarts (for example due to API crash or whatever reason), then the container service will create a new instance of the image and hence the disk will get reset.
The container service has only ephemeral (temporary) storage by default.
When you want to persist a filesystem in a container service, you have to use some persistent storage.
If you want to use a "local" filesystem, you can mount EFS storage. Another option is to store the files directly in an S3 bucket, if that is feasible for you (see the sketch below).
S3 and EFS are elastic - practically without any size limits.
If you want to mount block storage (EBS - a disk), I'm not sure (I doubt) that it is possible in a Lightsail container.
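A minimal sketch of the S3 option from inside the container, assuming a hypothetical bucket named my-keyword-uploads and the AWS CLI (the same calls exist in every AWS SDK):

    # Upload the received file to S3 instead of relying on the container's disk
    aws s3 cp ./uploaded.pdf s3://my-keyword-uploads/incoming/uploaded.pdf

    # After extracting the keywords, delete the object
    aws s3 rm s3://my-keyword-uploads/incoming/uploaded.pdf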

AWS Auto-Scaling

I'm trying AWS auto-scaling for the first time. As far as I understand, it creates instances if, for example, my CPU utilization reaches a critical level that I define.
So I am curious: after I launch my instance I spend a fair amount of time configuring it and copying the data, so if AWS auto-scales my instance, how will it configure the new instances and move the data to them?
You can't store any data that you want to keep on an instance that is part of an autoscaling group (well you can, but you will lose it).
There are (at least) two ways to answer your question:
Create a 'golden image': in other words, spin up an instance, configure it, install the software etc., and then save it as an AMI (Amazon Machine Image). Then tell the autoscaling group to use that AMI each time an instance starts - it will be pre-configured when it starts.
Put a script on the instance that tells the instance how to configure itself when it starts up (in the user data). So basically each time an instance scales up, it runs the script and does all the steps it needs to configure itself, as sketched below.
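A minimal sketch of such a user data script, with hypothetical package, bucket, and path names:

    #!/bin/bash
    # Runs once on first boot of each newly launched instance
    yum update -y
    yum install -y httpd

    # Pull the app's configuration from S3 (hypothetical bucket/path)
    aws s3 cp s3://my-config-bucket/app.conf /etc/httpd/conf.d/app.conf

    systemctl enable --now httpd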
As for your data, best practice would be to store any data you want to keep in a database or object store that is not on the instance - so something like RDS, DynamoDB or even S3 objects.
You could also use Amazon EFS: store the data/scripts that the EC2 instances will be sharing on it, and mount it automatically every time a new EC2 instance is created via /etc/fstab.
Once you have configured the EFS file system to be mounted on the EC2 instance (/etc/fstab), you should create a new AMI, and use this new AMI to create a new Launch Configuration and Auto Scaling group, so that the new instances automatically mount your EFS file system and are able to consume that shared data.
https://aws.amazon.com/efs/faq/
Q. What use cases is Amazon EFS intended for?
Amazon EFS is designed to provide performance for a broad spectrum of workloads and applications, including Big Data and analytics, media processing workflows, content management, web serving, and home directories.
Q. When should I use Amazon EFS vs. Amazon Simple Storage Service (S3) vs. Amazon Elastic Block Store (EBS)?
Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads.
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.
https://docs.aws.amazon.com/efs/latest/ug/mount-fs-auto-mount-onreboot.html
You can use the file fstab to automatically mount your Amazon EFS file system whenever the Amazon EC2 instance it is mounted on reboots. There are two ways to set up automatic mounting. You can update the /etc/fstab file in your EC2 instance after you connect to the instance for the first time, or you can configure automatic mounting of your EFS file system when you create your EC2 instance.
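A minimal sketch of such an /etc/fstab entry, assuming a hypothetical file system ID and the amazon-efs-utils mount helper installed:

    # /etc/fstab: mount the EFS file system at /mnt/efs on every boot
    fs-12345678:/ /mnt/efs efs _netdev,tls 0 0

Running sudo mount -a afterwards verifies the entry without a reboot.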
I recommend using a shared data store if the data is updated and the updated data is needed by all instances that might be spinning up.
If it is database data, or you could store the needed data in a database, I would consider using RDS.
If it is static data only used to configure the instances, like dumps or configuration files which are not updated by running instances, then I would recommend pulling them from Cloudflare or S3 if it is not possible to pull them from a repository.
Good luck

Comparative Application ebs vs s3

I am new to the cloud environment. I understand the definitions of the storage types EBS and S3, but I want to understand the applications of EBS as compared to S3.
I understand EBS looks like a device for heavy-throughput operations, but I cannot find any application where it would be used in preference to S3. I could think of putting server logs on EBS magnetic storage, as one EBS volume can be attached to one instance.
With S3 you can use the scaling property to add some heavy data and expand in real time, and we can deploy our self-managed DBs on this service.
Please correct me if I am wrong, and help me understand what is best suited for what, and their applications in comparison with one another.
As you stated, they are primarily different types of storage:
Amazon Elastic Block Store (EBS) is a persistent disk-storage service, which provides storage volumes to a virtual machine (similar to VMDK files in VMWare)
Amazon Simple Storage Service (S3) is an object store system that stores files as objects and optionally makes them available across the Internet.
So, how do people choose which to use? It's quite simple... If they need a volume mounted on an Amazon EC2 instance, they need to use Amazon EBS. It gives them a C:, D: drive, etc in Windows and a mountable volume in Linux. Computers traditionally expect to have locally-attached disk storage. Put simply: If the operating system or an application running on an Amazon EC2 instance wants to store data locally, it will use EBS.
EBS volumes are actually stored on two physical devices in case of failure, but an EBS volume appears as a single volume. The volume size must be selected when the volume is created. The volume exists in a single Availability Zone and can only be attached to EC2 instances in the same Availability Zone. EBS volumes persist even when the attached EC2 instance is Stopped; when the instance is Started again, the disk remains attached and all data has been preserved.
Amazon S3, however, is something quite different. It is a storage service that allows files to be uploaded/downloaded (PutObject, GetObject) and files are replicated across at least three data centers. Files can optionally be accessed via the Internet via HTTP/HTTPS without requiring a web server. There are no limits on the amount of data that can be stored. Access can be granted per-object, per-bucket via a Bucket Policy, or via IAM Users and Groups.
Amazon S3 is a good option when data needs to be shared (both locally and across the Internet), retained for long periods, backed up (or even used for storing backups) and made accessible to other systems. However, applications need to be specifically coded to use Amazon S3, and many traditional applications expect to store data on a local drive rather than on a separate storage service.
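A minimal sketch of the difference from a program's point of view, with a hypothetical bucket name:

    # EBS: ordinary file I/O on a locally mounted volume
    echo "hello" > /data/hello.txt

    # S3: explicit API calls (PutObject / GetObject), here via the AWS CLI
    aws s3 cp /data/hello.txt s3://my-example-bucket/hello.txt
    aws s3 cp s3://my-example-bucket/hello.txt /tmp/hello.txt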
While Amazon S3 has many benefits, there are still situations where Amazon EBS is a better storage choice:
When using applications that expect to store data locally
For storing temporary files
When applications want to partially update files, because the smallest storage unit in S3 is a file and updating a part of a file requires re-uploading the whole file
For very High-IO situations, such as databases (EBS Provisioned IOPS can provide volumes up to 20,000 IOPS)
For creating volume snapshots as backups
For creating Amazon Machine Images (AMIs) that can be used to boot EC2 instances
Bottom line: They are primarily different types of storage and each have their own usage sweet-spot, just like a Database is a good form of storage depending upon the situation.

Persistent storage on Elastic Beanstalk

How can I attach persistent storage on Elastic Beanstalk?
I know I need to have a .config file where I set the parameters of the environment to run every time an instance is created.
My goal is to have a volume, let's say 100 GB, such that even if the instances get deleted/terminated, I keep this volume with persistent data that all instances can access and read from.
I could use S3 to store this data, but it would require changes to the application, and latency could be a problem.
This way I could access the filesystem like on any common server.
AWS now offers a solution called Amazon Elastic File System (Amazon EFS) that lets multiple instances access a shared file store.
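A minimal sketch of the commands an .ebextensions .config file could run to mount such a file system, assuming a hypothetical file system ID and the amazon-efs-utils mount helper:

    # Install the EFS mount helper, then mount the shared file system
    yum install -y amazon-efs-utils
    mkdir -p /mnt/efs
    mount -t efs fs-12345678:/ /mnt/efs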
If your desire is to have a central data repository that all EC2 instances can access, then Amazon S3 would be your best option.
Normal disk volumes are provided via Elastic Block Store (EBS). EBS volumes can only be mounted to one EC2 instance at a time. Therefore, to share data that is contained on an EBS volume, you will need to use normal network sharing methods (such as an NFS or SMB share) to expose it to other instances.
However, if your goal is to provide shared access without one specific instance sharing a volume to the others, then it is better to use S3, because it is accessible from all instances. It would likely be worth the effort of modifying your application to take advantage of S3.