I am running samtools on a Google VM with 8 CPUs. It seems that when the process is finished, the program crashes with the error below. At the same time, there is a problem with the bucket, showing the message below. Any ideas? Is this a problem with saving the file?
Error:
username@instance-1:~/my_bucket$ /usr/local/bin/bin/samtools view -@20 -O sam -f 4 file_dedup.realigned.cram > file.unmapped.sam
samtools view: error closing standard output: -1
Also, this comes up when typing ls in the bucket directory:
ls: cannot open directory '.': Transport endpoint is not connected
As we discovered in the comment section, this issue is related to the differences between FUSE and POSIX file systems.
You can solve this issue in two ways:
Increase the disk space on your VM instance (by following the documentation Resize the disk and Resize the file system and partitions) and stop using the Google Cloud Storage bucket mounted via FUSE.
Save the data produced by samtools to the VM's disk first and then move it to the Google Cloud Storage bucket mounted via FUSE, as sketched below.
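A minimal sketch of the second option, reusing the samtools command from the question (the local path and bucket name here are hypothetical placeholders):
samtools view -@20 -O sam -f 4 file_dedup.realigned.cram > /tmp/file.unmapped.sam   # write to the VM's local disk, not the FUSE mount
gsutil cp /tmp/file.unmapped.sam gs://my_bucket/   # upload to the bucket once samtools has finished
rm /tmp/file.unmapped.sam   # free the local disk space after a successful upload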
You can estimate the cost of each scenario with the Google Cloud Pricing Calculator.
Keep in mind that persistent disks have restrictions, among them:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
Most instances can have up to 128 persistent disks and up to 257 TB of total persistent disk space attached. Total persistent disk space for an instance includes the size of the boot persistent disk.
In addition, please have a look at the Quotas & limits for Google Cloud Storage.
We have around 11 TB of images in local storage, and the same has been copied to a Google Cloud bucket. We have a requirement to sync all images incrementally, i.e. only updated files. Currently we are syncing files using the gsutil command below.
gsutil -m rsync -r -C /mnt/Test/ gs://test_images/test-H/
The issue we are facing is that it takes around 6 days to copy, and most of that time is spent scanning the disk. Please let me know if there is any method to copy the updated data in at most 24 hours.
To increase the transfer speed, here are some tips (see also the incremental-copy sketch after the list):
Use regional storage, the closest to your VM
Use a VM with at least 8 vCPUs to maximise the bandwidth, as described in the quota documentation.
The egress rate depends on the machine type of the VM:
All shared-core machine types are limited to 1 Gbps.
2 Gbps per vCPU, up to 32 Gbps per VM for machine types that use the Skylake or later CPU platforms with 16 or more vCPUs. This egress rate is also available for ultramem machine types.
2 Gbps per vCPU, up to 16 Gbps per VM for all other machine types with eight or more vCPUs.
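If most of the 6 days is spent scanning the disk rather than transferring, one hedged alternative to a full rsync is to copy only files modified in the last 24 hours; a minimal sketch, assuming the modification times on /mnt/Test/ are reliable:
cd /mnt/Test
find . -type f -mtime -1 | while read -r f; do   # files changed within the last 24 hours
  gsutil cp "$f" "gs://test_images/test-H/${f#./}"   # preserve the relative path in the bucket
done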
We increased the size of the VM instance to n1-standard-4, as it provides more CPU power and network performance on the GCP network. We had noticed in Stackdriver that the server was at times hitting 100% CPU utilization, as well as being limited to the maximum speeds allowed for GCP network transfers due to the compute sizing. We also mounted the bucket on the same server and executed the script there. Below are the commands we used to mount the bucket and sync the files.
Below is the command used to authenticate to the Google bucket.
gcloud auth application-default login
Mount the bucket using the command below.
gcsfuse --implicit-dirs Bucketname Mountpoint
Sync the files using the rsync command, as sketched below.
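For completeness, a minimal sketch of that final step, reusing the source directory from the question and the mount point from the gcsfuse command above (gcsfuse does not support full POSIX metadata, so the conservative -rt flags are safer than -a here):
rsync -rtv /mnt/Test/ Mountpoint/   # copies only new or changed files into the mounted bucket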
I have built a script that scrapes a few thousand PDF files. I want to build a t2 instance that runs the script for at least 2 weeks continuously and saves the downloaded files in an S3 bucket. I read this tutorial, but I have a doubt:
If I set the download folder to the mounted drive location, does mounting here imply that data will be stored in both EBS and S3, or that the files will be saved in the S3 bucket directly?
I need this clarification because, while building the instance, I'll keep the storage low (~75 GB) and use an S3 bucket, since the total size of the scraped files is going to exceed 300 GB.
Thanks!
Yes, a mounted drive doesn't take up your local storage, so you could spin up an instance with only 8 GB. For the mounting tool I'd recommend https://github.com/kahing/goofys (very actively developed) instead of s3fs, which seems to be slow and drives up CPU usage pretty badly if you have large files. I've been using goofys for years with my micro instance plus a 300 GB mounted drive without any slowness or issues.
Another, even better, solution is to use the AWS CLI to transfer files directly to S3 without requiring any mounting technique. You can simply write a Python script with boto3 that first downloads the PDF, then copies it to S3, and then removes the PDF locally (that takes only a few seconds, even for large files).
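A minimal sketch of that flow using the AWS CLI alone (the URL, bucket, and file names are hypothetical placeholders); aws s3 mv uploads the file and then deletes the local copy in one step:
curl -sSL -o report.pdf https://example.com/report.pdf   # scrape one PDF (hypothetical URL)
aws s3 mv report.pdf s3://my-scrape-bucket/pdfs/report.pdf   # copy to S3, then remove the local file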
https://cloudkul.com/blog/mounting-s3-bucket-linux-ec2-instance/
An S3 bucket can be mounted in an AWS instance as a file system using s3fs. s3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network-attached drive: it does not store anything on the Amazon EC2 instance, but the user can access the data on S3 from the EC2 instance.
The key point to take away from this is "network-attached drive": it will not use any disk space on your EC2 instance aside from the dependencies you will need to install.
If the script you are using copies the file directly to a directory on the s3fs mount, it will not take up any space on the EBS volume.
If the script copies the PDF locally first, anywhere outside the s3fs mount, and then MOVES it to s3fs, that is still fine. It will only take up space in the S3 bucket.
If the script copies the PDF locally first, anywhere outside the s3fs mount, and then COPIES it to s3fs, it will still leave a copy on EBS and take up space there as well. So you need to check: are you copying or moving to s3fs?
If you are copying, replace it with a move, or delete the source after a successful copy, as sketched below.
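A minimal sketch of the difference, assuming the bucket is mounted via s3fs at /mnt/s3 (a hypothetical mount point):
cp scraped.pdf /mnt/s3/pdfs/scraped.pdf   # uploads to S3 but leaves a copy consuming EBS space
mv scraped.pdf /mnt/s3/pdfs/scraped.pdf   # uploads to S3 and frees the EBS space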
So even 8 GB of space should be enough for the instance.
I have an AWS Windows instance with SQL Server running on it.
I took a database backup, and the resulting file is 175 GB in size.
What is the fastest and most efficient way of downloading this file from AWS to my local machine?
Network bandwidth varies with the size of Amazon EC2 instances. Put simply, larger instances have larger bandwidth.
Your own Internet bandwidth will also be a limiting factor.
To fully utilize the available bandwidth, you could use the Tsunami UDP protocol. It is similar in concept to BitTorrent in that it uses large windows and does not wait for error correction.
Amazon S3 actually supports the BitTorrent protocol, so you could copy the file to S3 and then use BitTorrent to download it. This would be good at recovering from transmission errors. However, it means you are sending the file twice through constrained resources (from the EC2 instance to S3, then from S3 to your computer), which would be less efficient.
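For reference, a minimal sketch of the copy-through-S3 route using the AWS CLI instead of BitTorrent (bucket and file names are hypothetical); the CLI automatically splits large transfers into parallel multipart chunks:
aws s3 cp backup.bak s3://my-backup-bucket/backup.bak   # run on the EC2 instance
aws s3 cp s3://my-backup-bucket/backup.bak backup.bak   # run on your local machine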
As per the title of this question, what are the practical differences between AWS EFS, EBS and S3?
My understanding of each:
S3 is a storage facility accessible anywhere
EBS is a device you can mount onto EC2
EFS is a file system you can mount onto EC2
So why would I use EBS over EFS? They seem to have the same use cases but minor semantic differences? Although EFS is replicated across AZs, whereas EBS is just a mounted device. I guess my understanding of EBS is lacking, hence I'm unable to distinguish them.
Why choose S3 over EFS? They both store files, scale, and are replicated. I guess with S3 you have to use the SDK, whereas with EFS, being a file system, you can use standard I/O methods from your programming language of choice to create files. But is that the only real difference?
One word answer: MONEY :D
Cost to store 1 GB in US-East-1:
(Updated at 2016.dec.20)
Glacier: $0.004/Month (Note: Major price cut in 2016)
S3: $0.023/Month
S3-IA (announced in 2015.09): $0.0125/Month (+$0.01/GB retrieval charge)
EBS: $0.045-0.1/Month (depends on speed - SSD or not) + IOPS costs
EFS: $0.3/Month
Further storage options, which may be used for temporary storing data while/before processing it:
SNS
SQS
Kinesis stream
DynamoDB, SimpleDB
The costs above are just samples. They can differ by region and can change at any time. There are also extra costs for data transfer (out to the internet). However, they show the ratio between the prices of the services.
There are a lot more differences between these services:
EFS is:
Generally Available (out of preview), but may not yet be available in your region
Network file system (which means it may have higher latency, but it can be shared across several instances, even between regions)
It is expensive compared to EBS (~10x the price), but it gives you extra features.
It's a highly available service.
It's a managed service
You can attach EFS storage to an EC2 instance
Can be accessed by multiple EC2 instances simultaneously
Since 2016.dec.20 it's possible to attach your EFS storage directly to on-premise servers via Direct Connect.
EBS is:
A block storage (so you need to format it). This means you are able to choose which type of file system you want.
As it's block storage, you can use RAID 1 (or 0 or 10) with multiple block devices
It is really fast
It is relatively cheap
With the new announcements from Amazon, you can store up to 16 TB per volume on SSDs.
You can snapshot an EBS volume (while it's in use) for backup purposes
But it only exists in a particular region. Although you can migrate it to another region, you cannot access it across regions directly (unless you share it via an EC2 instance, but that means you have a file server)
You need an EC2 instance to attach it to
New feature (2017.Feb.15): You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.
S3 is:
An object store (not a file system).
You can store files and "folders", but you can't have locks, permissions, etc. like you would with a traditional file system
This means that, by default, you can't just mount S3 and use it as your web server
But it's perfect for storing your images and videos for your website
Great for short term archiving (e.g. a few weeks). It's good for long term archiving too, but Glacier is more cost efficient.
Great for storing logs
You can access the data from every region (extra costs may apply)
Highly available and redundant. Data loss is basically not possible (99.999999999% durability, 99.9% availability SLA)
Much cheaper than EBS.
You can serve the content directly to the internet; you can even have a full (static) website working directly from S3, without an EC2 instance, as sketched below
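As a hedged illustration of that last point, a static site can be served straight from a bucket with two AWS CLI calls (bucket name and paths are hypothetical; the bucket policy must also allow public reads):
aws s3 sync ./my-site s3://my-site-bucket/   # upload the static files
aws s3 website s3://my-site-bucket/ --index-document index.html --error-document error.html   # enable website hosting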
Glacier is:
Long term archive storage
Extremely cheap to store
Potentially very expensive to retrieve
Takes up to 4 hours to "read back" your data (so only store items you know you won't need to retrieve for a long time)
As mentioned in JDL's comment, there are several interesting aspects to the pricing. For example, Glacier, S3, and EFS allocate storage for you based on your usage, while with EBS you need to predefine the allocated storage, which means you need to overestimate. (Although it's easy to add more storage to your EBS volumes, it requires some engineering, which means you always "overpay" for your EBS storage, making it even more expensive.)
Source: AWS Storage Update – New Lower Cost S3 Storage Option & Glacier Price Reduction
I wonder why people are not highlighting the MOST compelling reason in favor of EFS: EFS can be mounted on more than one EC2 instance at the same time, enabling simultaneous access to the files from all of them.
(Edit 2020 May: EBS now supports mounting to multiple EC2 instances at the same time as well; see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html)
Fixing the comparison:
S3 is a storage facility accessible anywhere
EBS is a device you can mount onto EC2
EFS is a file system you can mount onto several EC2 instances at the same time
At this point it's a little premature to compare EFS and EBS: the performance of EFS isn't known yet, nor is its reliability.
Why would you use S3?
You don't have a need for the files to be 'local' to one or more EC2 instances.
(effectively) infinite capacity
built-in web serving, authentication
Apart from price and features, the throughput also varies greatly (as mentioned by user1677120):
EBS
Taken from EBS docs:
| EBS volume | Throughput | Throughput                     |
| type       | MiB/s      | dependent on..                 |
|------------|------------|--------------------------------|
| gp2 (SSD)  | 128-160    | volume size                    |
| io1 (SSD)  | 0.25-500   | IOPS (256 KiB/s per IOPS)      |
| st1 (HDD)  | 20-500     | volume size (40 MiB/s per TiB) |
| sc1 (HDD)  | 6-250      | volume size (12 MiB/s per TiB) |
Note that for io1, st1, and sc1 you can burst to at least 125 MiB/s, and up to 500 MiB/s, depending on volume size.
You can further increase throughput by, e.g., deploying EBS volumes as RAID 0.
EFS
Taken from EFS docs:
| Filesystem | Base | Burst |
| Size | Throughput | Throughput |
| GiB | MiB/s | MiB/s |
|------------|------------|------------|
| 10 | 0.5 | 100 |
| 256 | 12.5 | 100 |
| 512 | 25.0 | 100 |
| 1024 | 50.0 | 100 |
| 1536 | 75.0 | 150 |
| 2048 | 100.0 | 200 |
| 3072 | 150.0 | 300 |
| 4096 | 200.0 | 400 |
The base throughput is guaranteed; burst throughput uses up credits you gathered while staying below the base throughput (so you'll only have it for a limited time; see here for more details).
S3
S3 is a totally different thing, so it cannot really be compared to EBS and EFS. Also, there are no published throughput metrics for S3. You can improve throughput by downloading in parallel (I read somewhere that AWS states you get essentially unlimited throughput this way), or by adding CloudFront to the mix.
To add to the comparison: (burst) read/write performance on EFS depends on gathered credits, and gathering credits depends on the amount of data you store. More data -> more credits. That means that when you only need a few GB of storage that is read or written often, you will run out of credits very soon and throughput drops to about 50 KB/s.
The only way to fix this (in my case) was to add large dummy files to increase the rate at which credits are earned. However, more storage -> more cost; a sketch follows.
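A minimal sketch of that workaround, assuming the file system is mounted at /mnt/efs (a hypothetical path); per the table above, baseline throughput scales at roughly 50 MiB/s per TiB stored, so ballast raises the rate at which credits accrue:
dd if=/dev/zero of=/mnt/efs/ballast.bin bs=1M count=102400   # write ~100 GiB of zeros as ballast (this is billed as stored data)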
AWS (Amazon Web Services) is well-known for its extensive product line. There are (probably) a few Amazon Web Services ninjas who know exactly how and when to use which Amazon product for which task. The rest of us are in desperate need of assistance.
AWS offers three common storage services: S3, Elastic Block Store (EBS), and Elastic File System (EFS), all of which function differently and provide various levels of performance, cost, availability, and scalability. We'll compare the performance, cost, and accessibility to stored data of these storage options, as well as their use cases.
AWS Storage Options:
Amazon S3 is a basic object storage service that can be used to host website images and videos, as well as for data analytics and for mobile and web applications. Data is managed as objects in object storage, which means that all data types are stored in their native formats. With object storage, there is no hierarchy of file relationships, and data objects can be spread across many machines. You can use the S3 service from any computer with an internet connection.
AWS EBS offers persistent block-level data storage. Block storage systems are more versatile and provide better capacity than standard file storage, since files are stored in several volumes called blocks, which serve as separate hard drives. An EBS volume must be attached to an Amazon EC2 instance. Business continuity, software testing, and database management are examples of use cases.
AWS EFS is a shared, elastic file storage framework that expands and contracts in response to file additions and deletions. It follows the conventional file storage model, with data organized into folders and subdirectories. EFS is useful for content management systems and SaaS applications. EFS can be mounted on several EC2 instances at once.
Which AWS Cloud Storage Service Is Best?
As always, it depends.
For data storage alone, Amazon S3 is the cheapest choice. S3, on the other hand, has a range of other pricing criteria, including cost per upload, S3 Analytics, and data transfer out of S3 per gigabyte. The cost structure of EFS is the most straightforward.
Amazon S3 is a cloud storage service that can be accessed from anywhere. AWS EBS is only accessible from instances in the same Availability Zone, while EFS file systems can share files across multiple instances and even regions.
EBS and EFS both outperform Amazon S3 in terms of IOPS and latency.
With a single API call, EBS can be scaled up or down. Because it is less expensive than EFS, you can use EBS for database backups and other low-latency interactive applications that need reliable, predictable performance.
Large amounts of data, such as large analytic workloads, are better served by EFS. Data at this scale cannot be stored on a single EC2 instance with EBS, so users must break up the data and distribute it between EBS volumes. The EFS service allows thousands of EC2 instances to access it at the same time, allowing vast volumes of data to be processed and analyzed in real time.
EBS is simple: block-level storage that can be attached to an instance in the same AZ, and that survives independently of the instance's lifetime.
However, the more interesting difference is between EFS and S3, and identifying the proper use cases for each.
Cost: EFS is approximately 10 times as costly as S3.
Use cases:
Whenever we have thousands of instances that need to process files simultaneously, EFS is recommended over S3.
Also note that S3 is object-based storage, while EFS is file-based. This implies that whenever we have a requirement for files that are updated (refreshed) continuously, we should use EFS.
S3 is eventually consistent, while EFS is strongly consistent. If you can't afford eventual consistency, you should use EFS.
In simple words
Amazon EBS provides block-level storage.
Amazon EFS provides network-attached shared file storage.
Amazon S3 provides object storage.
AWS EFS, EBS, and S3: from a functional standpoint, here is the difference.
EFS:
Network file system: it can be shared across several servers, even between regions. The same is not available in the EBS case.
It can be used, in particular, for storing ETL programs without security risk.
Highly available, scalable service.
Good for running any application that has a high workload, requires scalable storage, and must produce output quickly.
It can provide higher throughput and match sudden file-system growth, even for workloads of up to 500,000 IOPS or 10 GB per second.
Lift-and-shift application support: EFS is elastic, available, and scalable, and enables you to move enterprise applications easily and quickly without needing to re-architect them.
Analytics for big data: It has the ability to run big data applications, which demand significant node throughput, low-latency file access, and read-after-write operations.
EBS:
For NoSQL databases: EBS offers NoSQL databases the low-latency performance and dependability they need for peak performance.
S3:
Robust performance, scalability, and availability: Amazon S3 scales storage resources free from resource procurement cycles or investments upfront.
Data lake and big data analytics: create a data lake to hold raw data in its native format, then use machine learning and analytics tools to draw insights.
Backup and restoration: Secure, robust backup and restoration solutions
Data archiving
S3 is an object store good at storing vast numbers of backups or user files. Unlike EBS or EFS, S3 is not limited to EC2. Files stored within an S3 bucket can be accessed programmatically or directly from services such as AWS CloudFront. Many websites use it to hold their content and media files, which may be served efficiently via AWS CloudFront.
The main difference between EBS and EFS is that EBS is only accessible from a single EC2 instance in your particular AWS region, while EFS allows you to mount the file system across multiple regions and instances.
Amazon EBS provides block-level storage: you create a file system on it and store files (a minimal sketch follows this answer).
Amazon EFS is a shared storage system, similar to NAS/SAN. You need to mount it on a Unix server to use it.
Amazon S3 is object-based storage, where each item is stored with an HTTP URL.
One of the differences: EBS can be attached to one instance at a time, while EFS can be attached to multiple instances, which is why it is shared storage.
S3, being plain object storage, cannot be mounted.
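A minimal sketch of the EBS part of that comparison, assuming a newly attached volume shows up as /dev/xvdf (device name and mount point are hypothetical):
sudo mkfs -t ext4 /dev/xvdf   # create a file system on the raw block device
sudo mkdir -p /data
sudo mount /dev/xvdf /data   # from here on it behaves like a local disk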
EFS and S3 have the same purpose: you can store any kind of object or file.
But for me the main difference is that EFS gives you a traditional file system for your VMs (EC2) in the cloud, with more flexibility, e.g. you can attach it to multiple instances.
S3, on the other hand, is a separate, flexible, and elastic service for your objects. It can be used for your static files, images, and videos, or even for hosting a static app (JS).
EBS is obviously for block storage, where you can install an OS or anything related to your OS.
This question has been well answered by other people. I just want to make one point: whenever deciding on any AWS service, understand the use case for each and consider the solution the service provides in terms of the Well-Architected Framework. Do you need high availability, fault tolerance, or cost optimization? This will help you decide on any kind of service to use.