EC2 Windows C: Drive Size Increase stuck - amazon-web-services

I have extended the size of the C: drive for my EC2 Windows VM from the AWS console. I can see the unallocated disk space in Windows Disk Management, but the option to extend it is greyed out.
Also, as a side note, the Volumes page in the console shows my volume as "optimizing". Could this be the issue? Can I only extend it in Windows Disk Management after optimizing completes?
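For what it's worth, the EBS documentation says the new size is usable as soon as the modification reaches the "optimizing" state; "Extend Volume" being greyed out usually just means Windows hasn't rescanned the disk yet (Disk Management > Action > Rescan Disks, or `rescan` inside `diskpart`). A minimal sketch of the check, using a hypothetical sample of what `aws ec2 describe-volumes-modifications` returns:

```python
# Sketch: decide whether an EBS resize is far enough along to extend the
# partition in Windows. Per the EBS docs, the new size is usable once the
# modification reaches "optimizing" (or "completed"); Windows then needs a
# disk rescan before Disk Management offers "Extend Volume".
# SAMPLE_RESPONSE is a hypothetical example of the API's output.

SAMPLE_RESPONSE = {
    "VolumesModifications": [
        {
            "VolumeId": "vol-0123456789abcdef0",  # hypothetical volume ID
            "ModificationState": "optimizing",
            "OriginalSize": 50,
            "TargetSize": 100,
        }
    ]
}

def ready_to_extend(response: dict) -> bool:
    """True when every modification has left the initial 'modifying' state."""
    states = [m["ModificationState"] for m in response["VolumesModifications"]]
    return all(s in ("optimizing", "completed") for s in states)

print(ready_to_extend(SAMPLE_RESPONSE))  # True: safe to rescan and extend
```

If this returns True but the option is still greyed out, a rescan in Disk Management is typically all that is needed.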

Related

Windows Server automatically extending volumes after expanding EBS volume in AWS

I noticed that when I expanded the EBS volume size for my Windows EC2 instances, the disk volumes are automatically extended inside the OS within seconds. However, this does not occur for all instances, even though they are all running Server 2019. Is this something specific to AWS? Or a Windows setting that needs to be adjusted?
I thought this might be a new feature in Windows Server 2019, but I cannot find any settings that control it. I checked the registry and could not find anything that seems to control this behavior.

Program crashes on VM just when finishing

I am running samtools on a Google VM with 8 CPUs. It seems that when the process finishes, the program crashes, giving the error below. At the same time, there is a problem with the bucket, showing this. Any ideas? Problems with saving the file?
Error:
username@instance-1:~/my_bucket$ /usr/local/bin/bin/samtools view -@20 -O sam -f 4 file_dedup.realigned.cram > file.unmapped.sam
samtools view: error closing standard output: -1
Also, this comes up when typing ls in the bucket directory:
ls: cannot open directory '.': Transport endpoint is not connected
As we discovered in the comments, this issue is related to the difference between FUSE and POSIX file systems.
You can solve this issue in two ways:
Increase disk space on your VM instance (by following the documentation: Resize the disk and Resize the file system and partitions) and stop using the Google Cloud Storage bucket mounted via FUSE.
Save the data received from samtools to the VM's disk first, and then move it to the Google Cloud Storage bucket mounted via FUSE.
You can estimate cost for each scenario with Google Cloud Pricing Calculator.
Keep in mind that persistent disks have restrictions, among them:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
Most instances can have up to 128 persistent disks and up to 257 TB of total persistent disk space attached. Total persistent disk space for an instance includes the size of the boot persistent disk.
In addition, please have a look at the Quotas & limits for Google Cloud Storage.
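The second option above (write locally, then move to the bucket) can be sketched as follows. The paths are hypothetical, and the FUSE mount point is simulated here with a plain temporary directory; in practice the write step would be the samtools command redirecting its output to the local directory:

```python
# Sketch of option 2: write output to the VM's local disk first, then move the
# finished file onto the FUSE-mounted bucket. Both directories below are
# temporary stand-ins -- MOUNT_DIR would really be the gcsfuse mount point.
import shutil
import tempfile
from pathlib import Path

LOCAL_DIR = Path(tempfile.mkdtemp())   # stands in for fast local disk
MOUNT_DIR = Path(tempfile.mkdtemp())   # stands in for the FUSE-mounted bucket

local_file = LOCAL_DIR / "file.unmapped.sam"
local_file.write_text("@HD\tVN:1.6\n")  # samtools output would land here

# Move the file to the bucket only once it is completely written, so the FUSE
# layer sees one sequential upload instead of many partial writes.
shutil.move(str(local_file), str(MOUNT_DIR / local_file.name))

print((MOUNT_DIR / "file.unmapped.sam").exists())  # True
```

The key point is that samtools only ever sees a normal POSIX file system, and the FUSE layer only ever handles a single finished file.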

How to fix ec2 instance when I can't login via ssh after installing clamscan/clamav?

After installing clamscan/clamav on an Ubuntu 18.04 AWS EC2 instance, I can't log in to my AWS server with SSH. My website on that server doesn't show up in the browser either. I have rebooted, but it's not working. How do I fix this?
Common reasons for this are exhausted memory and a corrupted file system.
Since you are using a t2.micro, which only has 1GB of RAM and by default an 8GB disk, it's possible that your instance is simply too small to run your workloads. In such a situation, a common solution is to upgrade it to, e.g., a t2.medium (2GB of RAM), but such a change will be outside of the free tier.
Alternatively, you can reinstall your application on a new t2.micro, but this time set up the CloudWatch Agent to monitor RAM and disk use. By default these things are not monitored. If you monitor them on a new instance, it can give you insight into how much RAM, disk, or other resources are used by your applications.
The metrics collected in CloudWatch will help you better judge what is causing the freeze of your instance.
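A minimal sketch of the agent configuration this suggests, collecting memory and disk utilization. The section and metric names (`mem_used_percent`, `used_percent`) follow the CloudWatch agent's documented configuration schema; the collection interval is an arbitrary example:

```python
# Minimal CloudWatch agent configuration collecting the RAM and disk metrics
# recommended above. Built as a dict and dumped to JSON, since the agent reads
# a JSON config file.
import json

agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent"],
                "metrics_collection_interval": 60,
            },
            "disk": {
                "measurement": ["used_percent"],
                "resources": ["/"],
                "metrics_collection_interval": 60,
            },
        }
    }
}

# On the instance this would be written to the agent's config location and
# loaded with amazon-cloudwatch-agent-ctl.
print(json.dumps(agent_config, indent=2))
```

With this in place, memory and disk usage show up as custom metrics in the CWAgent namespace, so you can see whether the instance is running out of either resource before it freezes.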

Jenkins running on AWS EC2 running out of disk space despite upgrading the memory size

I have Jenkins running on an EC2 instance. The main build server had gone offline due to "Out of disk space". I upgraded the EC2 instance to a larger instance type (moving from a .large instance with 4GB of memory to a .xlarge with 8GB of memory).
However, after upgrading the instance to have more memory, the Free Disk Space still showed the same amount, and instead I reduced the Free Space Threshold to enable the master node to get back online. (As outlined here: how to solve jenkins 'Disk space is too low' issue?)
Why did the Free Disk Space remain the same despite increasing the memory of the instance? Is there a way I can allocate more memory to Jenkins via some server settings?
RAM and Disk Space are not the same thing. You will need to resize the EBS volume, then expand the partition/filesystem to use the additional space.
Expanding the partition/filesystem is OS-specific. Here is the procedure for Linux. I am assuming you are running Linux on your server based on the screenshot.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html
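On Linux, the procedure in that guide boils down to two commands after the EBS volume itself has been resized: grow the partition, then grow the filesystem. A dry-run sketch, with hypothetical device names (check yours with `lsblk` and `df -T` first):

```python
# Sketch of the Linux side of the procedure: after growing the EBS volume,
# grow the partition table entry, then the filesystem on it. Nothing is
# executed here -- this just lists the commands in order. Device names are
# hypothetical examples.

def expansion_commands(disk: str, partition_number: int, partition_dev: str):
    """Return the commands to run, in order (dry run)."""
    return [
        ["growpart", disk, str(partition_number)],  # grow the partition
        ["resize2fs", partition_dev],               # grow an ext4/ext3 filesystem
    ]

for cmd in expansion_commands("/dev/xvda", 1, "/dev/xvda1"):
    print(" ".join(cmd))
```

Note that `resize2fs` is for ext filesystems; on XFS (the default on some distributions) you would use `xfs_growfs` on the mount point instead.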

Create CloudWatch monitoring for EC2 RAM

I always wonder why Amazon has not provided CloudWatch monitoring for EC2 RAM, when they are able to do it for CPU. I understand that Amazon does not have visibility into the guest OS; only what is visible to the hypervisor can be monitored. But isn't CPU utilization also part of the guest OS, invisible to the Xen hypervisor? Then why is only RAM monitoring excluded?
I think my understanding isn't clear here. Could someone help?
It's possible to monitor EC2 RAM with CloudWatch.
This link shows how: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/mon-scripts.html
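Under the hood, scripts like those read memory figures from /proc/meminfo and push a utilization percentage to CloudWatch as a custom metric. A sketch of just the calculation step, using a hypothetical /proc/meminfo excerpt:

```python
# Sketch of the calculation the monitoring scripts perform: derive a memory
# utilization percentage from /proc/meminfo fields. The sample text below is
# a hypothetical excerpt; on a real instance you would read /proc/meminfo and
# publish the result to CloudWatch as a custom metric.

SAMPLE_MEMINFO = """\
MemTotal:        1014648 kB
MemFree:           98292 kB
MemAvailable:     507324 kB
"""

def mem_used_percent(meminfo_text: str) -> float:
    fields = {}
    for line in meminfo_text.splitlines():
        key, value = line.split(":")
        fields[key] = int(value.strip().split()[0])  # values are in kB
    total, available = fields["MemTotal"], fields["MemAvailable"]
    return 100.0 * (total - available) / total

print(round(mem_used_percent(SAMPLE_MEMINFO), 1))  # 50.0
```

So RAM monitoring is possible; it simply requires an in-guest agent or script to report the number, because the hypervisor cannot see how the guest uses its memory, whereas CPU time is scheduled by the hypervisor itself and is therefore visible from outside.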