List all devices, partitions and volumes in PowerShell

I have multiple volumes (as nearly everybody does nowadays). On Windows they show up as C:, D: and so on. How do I list them all in PowerShell, the way "ls /mnt/" would on a Unix machine?

To get all of the file system drives, you can use the following command:
gdr -PSProvider 'FileSystem'
gdr is an alias for Get-PSDrive, which includes all of the "virtual drives" for the registry, etc.

Get-Volume
You will get:
DriveLetter, FileSystemLabel, FileSystem, DriveType, HealthStatus, SizeRemaining and Size.

In Windows PowerShell:
Get-PSDrive
[System.IO.DriveInfo]::getdrives()
wmic diskdrive
wmic volume
Also the utility dskwipe: http://smithii.com/dskwipe
dskwipe.exe -l

Firstly, on Unix you use mount, not ls /mnt: many things are not mounted in /mnt.
Anyhow, there's the mountvol DOS command, which continues to work in Powershell, and there's the Powershell-specific Get-PSDrive.

Though this isn't PowerShell-specific, you can easily list the drives and partitions using diskpart's list volume command:
PS C:\Dev> diskpart
Microsoft DiskPart version 6.1.7601
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: Box
DISKPART> list volume
Volume ###  Ltr  Label        Fs    Type       Size    Status    Info
----------  ---  -----------  ----  ---------  ------  --------  --------
Volume 0     D                      DVD-ROM       0 B  No Media
Volume 1     C   = System     NTFS  Partition  100 MB  Healthy   System
Volume 2     G   C = Box      NTFS  Partition  244 GB  Healthy   Boot
Volume 3     H   D = Data     NTFS  Partition  687 GB  Healthy
Volume 4     E   System Rese  NTFS  Partition  100 MB  Healthy

Run the command:
Get-PSDrive -PSProvider FileSystem
For more info see:
PsDrive Documentation
PSProvider Documentation

This is pretty old, but I found the following worth noting:
PS N:\> (measure-command {Get-WmiObject -Class Win32_LogicalDisk|select -property deviceid|%{$_.deviceid}|out-host}).totalmilliseconds
...
928.7403
PS N:\> (measure-command {gdr -psprovider 'filesystem'|%{$_.name}|out-host}).totalmilliseconds
...
169.474
Without filtering properties it was 4319.4196 ms versus 1777.7237 ms on my test system. Unless I need a PSDrive object returned, I'll stick with WMI.
EDIT:
I think we have a winner:
PS N:\> (measure-command {[System.IO.DriveInfo]::getdrives()|%{$_.name}|out-host}).totalmilliseconds
110.9819

We have multiple volumes per drive (some are mounted on subdirectories of the drive). This code shows a list of the mount points and volume labels; obviously you can also extract free space and so on:
gwmi win32_volume | where-object {$_.filesystem -match "ntfs"} | sort {$_.name} | foreach-object {
    echo "$($_.name) [$($_.label)]"
}

You can also do it on the CLI with
net use

You can use the following to find the total disk size on each drive as well.
Get-CimInstance -ComputerName yourhostname win32_logicaldisk | foreach-object {write " $($_.caption) $('{0:N2}' -f ($_.Size/1gb)) GB total, $('{0:N2}' -f ($_.FreeSpace/1gb)) GB free "}

Microsoft has a way of doing this as part of its az vm repair scripts (see: Repair a Windows VM by using the Azure Virtual Machine repair commands).
It is available under MIT license at: https://github.com/Azure/repair-script-library/blob/51e60cf70bba38316394089cee8e24a9b1f22e5f/src/windows/common/helpers/Get-Disk-Partitions.ps1

If the device is present, but not (yet) mounted, this helps:
Get-PnpDevice -PresentOnly -InstanceId SCSI*

Related

Why df -h and lsblk show different sizes of my one and only xvda1?

When I run lsblk, it shows my xvda1 size as 10 GB, but when I run df -h, it shows xvda1 as 7.7 GB.
Because the commands work differently:
lsblk: use lsblk to view your available disk devices and their mount points (if applicable), to help you determine the correct device name to use. The output of lsblk removes the /dev/ prefix from full device paths. It tells you the size of the volume and partition (in your case, both 10 GB).
df -h: use df -h to check the size of the file system on each volume. Sometimes the file system is still at its original (smaller) size, in which case you need to extend it using resize2fs or xfs_growfs, depending on the type of file system.
For more details please check: recognize-expanded-volume-linux

How to execute a command on multiple servers

I have a set of 150 servers for logging, and a command (to get disk space). How can I execute this command on each server?
If the script takes 1 minute to get the report from a single server, how can I produce a report for all the servers every 10 minutes?
use strict;
use warnings;
use Net::SSH::Perl;
use Filesys::DiskSpace;

# I have more than 100 servers..
my %hosts = (
    'localhost' => {
        user     => "z",
        password => "qumquat",
    },
    '129.221.63.205' => {
        user     => "z",
        password => "aardvark",
    },
);

# file system /home or /dev/sda5
my $dir = "/home";
my $cmd = "df $dir";

foreach my $host (keys %hosts) {
    my $ssh = Net::SSH::Perl->new($host, port => 22, debug => 1, protocol => '2,1');
    $ssh->login($hosts{$host}{user}, $hosts{$host}{password});
    my ($out) = $ssh->cmd($cmd);
    print "$out\n";
}
It has to report the disk space for each server.
Is there a reason this needs to be done in Perl? There is an existing tool, dsh, which provides precisely this functionality: using ssh to run a shell command on multiple hosts and report the output from each. It also has the ability, with the -c (concurrent) switch, to run the command on all hosts at the same time rather than waiting for each one to complete before going on to the next, which you will need if you want to monitor 150 machines every 10 minutes but it takes 1 minute to check each host.
To use dsh, first create a file in ~/.dsh/group/ containing a list of your servers. I'll put mine in ~/.dsh/group/test-group with the content:
galera-1
galera-2
galera-3
Then I can run the command
dsh -g test-group -c 'df -h /'
And get back the result:
galera-3: Filesystem Size Used Avail Use% Mounted on
galera-3: /dev/mapper/debian-system 140G 36G 99G 27% /
galera-1: Filesystem Size Used Avail Use% Mounted on
galera-1: /dev/mapper/debian-system 140G 29G 106G 22% /
galera-2: Filesystem Size Used Avail Use% Mounted on
galera-2: /dev/mapper/debian-system 140G 26G 109G 20% /
(They're out of order because I used -c, so the command was sent to all three servers at once and the results were printed in the order the responses were received. Without -c, they would appear in the same order the servers are listed in the group file, but then it would wait for each response before connecting to the next server.)
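If installing dsh isn't an option, the same concurrent fan-out can be sketched with plain ssh and shell background jobs. In this self-contained sketch the ssh call is stubbed out with a local function so it runs anywhere; in real use check() would be ssh "$1" 'df -h /':

```shell
check() { echo "$1: disk ok"; }   # stand-in for: ssh "$1" 'df -h /'
for host in galera-1 galera-2 galera-3; do
  check "$host" > "/tmp/report_$host.txt" &  # all hosts at once, like dsh -c
done
wait                                         # wait for every background job
cat /tmp/report_galera-1.txt                 # -> galera-1: disk ok
```

Writing each host's output to its own file keeps the concurrent results from interleaving, which is the same problem dsh solves by prefixing each line with the host name.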
But, really, with the talk of repeating this check every 10 minutes, it sounds like what you really want is a proper monitoring system such as Icinga (a high-performance fork of the better-known Nagios), rather than just a way to run commands remotely on multiple machines (which is what dsh provides). Unfortunately, configuring an Icinga monitoring system is too involved for me to provide an example here, but I can tell you that monitoring disk space is one of the checks that are included and enabled by default when using it.
There is a ready-made tool called Ansible for exactly this purpose. It lets you define your list of servers, group them, and execute commands on all of them.

Getting an error while trying to create a file system on my partition

I am running CentOS 7 in a virtual machine.
I have /dev/sdb, 1 GB in size.
I used the following commands to create an extended partition:
fdisk /dev/sdb
n -- add a new partition
e -- extended
Partition number -- 1
First sector -- default
Last sector -- +512M
entered w to write the table
partprobe /dev/sdb
mkfs.ext4 /dev/sdb1 /mnt
Got this error:
mke2fs 1.42.9 (28-Dec-2013)
mkfs.ext4: invalid blocks '/mnt' on device '/dev/sdb1'
I'm not able to figure it out.
I created a directory /data and tried it in place of /mnt; it still didn't work.
According to man mkfs.ext4, you need to either drop the last argument (/mnt), because the argument after the device is fs-size, not a path, or provide an actual fs-size, like 512m. mkfs only creates the file system; mounting it (e.g. mount /dev/sdb1 /mnt) is a separate step afterwards.
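The error is easy to reproduce with a file-backed image, no root needed (the image path here is made up):

```shell
truncate -s 16M /tmp/sdb1.img
mkfs.ext4 -q -F /tmp/sdb1.img                # correct: just the device (size optional)
mkfs.ext4 -q -F /tmp/sdb1.img /mnt 2>/dev/null \
  || echo "rejected: a mount point is not a size"
# mounting is a separate step that comes after mkfs:
#   mount /dev/sdb1 /mnt
```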

performance grep vs perl -ne

I made a performance test with a really surprising result: Perl is more than 20 times faster!
Is this normal?
Does it result from my regular expression?
Is egrep far slower than grep?
... I tested on a current Cygwin and a current OpenSuSE 13.1 in VirtualBox.
The fastest test, with Perl:
time zcat log.gz \
  | perl -ne 'print if ($_ =~ /^\S+\s+\S+\s+(ERROR|WARNING|SEVERE)\s/ )' \
  | tail
2014-06-24 14:51:43,929 SEVERE ajp-0.0.0.0-8009-13 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:01,031 ERROR HFN SI ThreadPool(4)-442 CepEventUnmarshaler Unmarshaled Events Duration: 111
2014-06-24 14:52:03,556 ERROR HFN SI ThreadPool(4)-444 CepEventUnmarshaler Unmarshaled Events Duration: 52
2014-06-24 14:52:06,789 SEVERE ajp-0.0.0.0-8009-1 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:06,792 SEVERE ajp-0.0.0.0-8009-1 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:07,371 SEVERE ajp-0.0.0.0-8009-9 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:07,373 SEVERE ajp-0.0.0.0-8009-9 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:07,780 SEVERE ajp-0.0.0.0-8009-11 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 14:52:07,782 SEVERE ajp-0.0.0.0-8009-11 SessionDataUpdateManager cannot register active data when window has no name
2014-06-24 15:06:24,119 ERROR HFN SI ThreadPool(4)-443 CepEventUnmarshaler Unmarshaled Events Duration: 117
real 0m0.151s
user 0m0.062s
sys 0m0.139s
fine!
A far slower test, with egrep:
time zcat log.gz \
  | egrep '^\S+\s+\S+\s+(ERROR|WARNING|SEVERE)\s' \
  | tail
...
real 0m2.454s
user 0m2.448s
sys 0m0.092s
(Output was same as above...)
Finally, the even slower grep with a different notation (my first try):
time zcat log.gz \
  | egrep '^[^\s]+\s+[^\s]+\s+(ERROR|WARNING|SEVERE)\s' \
  | tail
...
real 0m4.295s
user 0m4.272s
sys 0m0.138s
(Output was same as above...)
The file is about 2,000,000 lines and roughly 500 MBytes un-gzip-ped; the number of matching lines is very small.
My tested versions:
OpenSuSE with grep (GNU grep) 2.14
Cygwin with grep (GNU grep) 2.16
Perhaps a bug in newer grep versions?
You should be able to make the Perl a little bit faster by making the parentheses non-capturing:
(?:ERROR|WARNING|SEVERE)
Also, it's unnecessary to match against $_. $_ is assumed if there is nothing specified. That's why it exists.
perl -ne 'print if /^\S+\s+\S+\s+(?:ERROR|WARNING|SEVERE)\s/'
You're getting tricked by your operating system's cache. When reading and grepping files, several layers between you and the data are involved:
the hard drive's own cache
the OS read cache
To really know what's going on, it's a good idea to warm up these caches by running some tests whose results you throw away. Only after these warm-up runs should you take your timings.
As Chris Hamel commented, without the "|" atom grep becomes about 10+ times faster; still slower than Perl.
time zcat log.gz \
  | egrep '^[^\s]+\s+[^\s]+\s+(ERROR)\s' \
  | tail
...
real 0m0.216s
user 0m0.062s
sys 0m0.123s
So with two "|" atoms, the grep run gets more than 3 times slower than running three greps one after another. Sounds like a bug to me... any earlier grep versions to test with? I have a RedHat 5 box too... grep seems similarly slow there.

RedHat Kickstart with Multiple Hard Drives (VirtualBox Implementation)

I am setting up a kickstart server using the Apache web server. I followed this tutorial and everything works fine there. However, I am now writing the ks.cfg file, trying to mount separate disks onto separate partitions, which is beyond that post's scope. Say I have 10 disks, and I want to use the first disk for /, /boot, swap, etc., with /dev/sdb mounted on /data1, /dev/sdc on /data2, and so on.
I am testing it using VirtualBox, but it doesn't like my ks.cfg file.
My partitioning section looks like this:
# Wipe all partitions and build them with the info below
clearpart --all --drives=sda,sdb,sdc,sdd --initlabel
# Create the bootloader in the MBR with drive sda being the drive to install it on
bootloader --location=mbr --driveorder=sda,sdb,sdc,sdd
part /boot --fstype ext3 --size=100 --onpart=sda
part / --fstype ext3 --size=3000 --onpart=sda
part swap --size=2000 --onpart=sda
part /home --fstype ext3 --size=100 --grow
part /data1 --fstype=ext4 --onpart=sdb --grow --asprimary --size=200
part /data2 --fstype=ext4 --onpart=sdc --grow --asprimary --size=200
part /data3 --fstype=ext4 --onpart=sdd --grow --asprimary --size=200
Can anyone tell me what is wrong with the part section? Also, there is a sample ks.cfg that I referenced.
I ran into a similar problem attempting to mount a second hard drive (sdb) during the install. What worked for me was to run the install once, then log in to a root-accessible account and format and partition the sdb drive. Afterwards, when re-installing the image, kickstart recognized and mounted sdb. Hope this helps.
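For what it's worth, a hedged guess at the ks.cfg problem itself: part's --onpart option expects an existing partition (e.g. sda1), while --ondisk is the option that places a new partition on a given disk. A sketch of the data-disk lines under that assumption (untested):

```
part /data1 --fstype=ext4 --ondisk=sdb --size=200 --grow --asprimary
part /data2 --fstype=ext4 --ondisk=sdc --size=200 --grow --asprimary
part /data3 --fstype=ext4 --ondisk=sdd --size=200 --grow --asprimary
```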