Get disk space using a WMI class: only the size, nothing else - wmi

I use a simple WMI call:
(get-wmiobject win32_logicaldisk).Size
(get-wmiobject win32_logicaldisk).Freespace
I only want the size, in MB or GB; I do not want a table or anything like that (solutions for that are already available).
How can this be accomplished in a one-liner?
PS C:\Users\XXXXXXXX> (get-wmiobject win32_logicaldisk).size
2550860308484992762052608
I would love to see 250 GB or so here; again, one value per disk, no table.

Divide by 1024 / 1024 / 1024.
There are 1024 bytes in a kilobyte, 1024 kilobytes in a megabyte, etc., so keep dividing by 1024 until the number is readable. In PowerShell you can also use the built-in 1GB literal (which equals 1024^3), for example:
(Get-WmiObject Win32_LogicalDisk).Size | ForEach-Object { [math]::Round($_ / 1GB, 1) }
Edit
I'm not sure what that number is (it looks like the sizes of several disks run together), but the number of bytes in 250 GB (binary convention) is
268435456000
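A quick way to sanity-check the arithmetic (plain Python here, purely illustrative; PowerShell's built-in 1KB/1MB/1GB literals use the same 1024-based factors):

```python
size_bytes = 268_435_456_000           # 250 GB, binary convention

kb = size_bytes / 1024                 # kilobytes
mb = size_bytes / 1024 / 1024          # megabytes
gb = size_bytes / 1024 / 1024 / 1024   # gigabytes

print(f"{gb:.0f} GB")                  # 250 GB
```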

Related

LevelDB limit testing - limiting the memory used by a program

I'm currently benchmarking an application built on LevelDB. I want to configure it in such a way that the key-values are always read from disk, never from memory.
For that, I need to limit the memory consumed by the program.
I'm using 100,000 key-value pairs of 100 bytes each, which makes their total size 10 MB. If I set the virtual memory limit to less than 10 MB using ulimit, I can't even run make.
1) How can I configure the application so that the key-value pairs are always fetched from disk?
2) What does ulimit -v mean? Does limiting the virtual memory translate to limiting the memory the program uses in RAM?
Perhaps there is no need to reduce available memory; simply disable the cache, as described here:
leveldb::ReadOptions options;
options.fill_cache = false;  // don't add the blocks read by this scan to the block cache
leveldb::Iterator* it = db->NewIterator(options);
for (it->SeekToFirst(); it->Valid(); it->Next()) {
  // ... process it->key() / it->value()
}
delete it;  // the caller must release the iterator

How will HDFS divide a file into blocks?

I know that HDFS divides a file into 128 MB blocks.
Now suppose a file of 1034 MB is divided into blocks: that's 8 full 128 MB blocks with 10 MB left over. Will that last 10 MB be allocated a whole 128 MB block, or just a 10 MB block?
If it gets a whole block, the remaining 118 MB is wasted. How can this issue be resolved?
I have just started learning Hadoop, so please pardon me if my question is naive.
Thanks
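For what it's worth, the block size is only an upper bound: HDFS stores just the actual bytes in the final block, so nothing is wasted. The split for a 1034 MB file can be sketched like this (plain Python, numbers only):

```python
MB = 1024 * 1024
block_size = 128 * MB            # default HDFS block size
file_size = 1034 * MB

full, remainder = divmod(file_size, block_size)
blocks = [block_size] * full + ([remainder] if remainder else [])

print(len(blocks))               # 9 blocks in total
print(blocks[-1] // MB)          # the last block holds only 10 MB on disk
```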

Converting B to MB properly, or entirely wrong

I'm not very experienced in these things, so please try not to jump to conclusions right out of the gate. I've got a number in bytes that I've been trying to convert to MB with little consistency or success. For example, a directory of mine comes back as 191,919,191 bytes (191.919 MB) when I 'get info'.
I was curious about how to convert it myself, so here's what I learned:
Google:
1 KB = 1000 B
1 MB = 1000 KB
1 GB = 1000 MB
So far so good...
1024000 B in KB = 1024
1024 KB in MB = 1.024
This seems perfectly logical...
191,919,191 B to MB = 191.919 MB
This looks correct too, but when I go to convert my bytes to MB using almost any code sample out there, I end up with something far different from friendly ol' Google.
According to Princeton
SYNOPSIS:
Converting between bytes, kilobytes, megabytes, and gigabytes.
SOLUTION:
1 byte = 1 character
1 kilobyte (kb) = 1024 bytes
1 megabyte (Mb) = 1024 kb = 1,048,576 bytes
1 gigabyte (Gb) = 1024 Mb = 1,048,576 kb = 1,073,741,824 bytes
So with this information:
191,919,191 B / 1,024,000 = 187.421 MB
I've also seen conversions like this:
191,919,191 B / (1024 * 1024) = 183.028 MB
WTF? Is this stuff just made up as we go along, or is there some standard way of getting the real file size in MB from bytes? I'm completely lost and confused by this conflicting information. I have no real idea who is right or wrong here, or if I'm just completely missing something.
I have code like this:
UInt32 fileSize = 191919191;            // size in bytes
UInt32 mbSize = fileSize / 1024000;     // integer division truncates
printf("%u MB", (unsigned int)mbSize);  // print the result
Which outputs:
187 MB
So how in the world can 191,919,191 bytes = 191 MB?
Just to summarise...
The official, SI standardised, correct use of the prefixes is that kilo = 10^3 = 1000, mega = 10^6 = 1000000 and so on. The abbreviations are K, M, G etc.
There is a separate set of prefixes for the digital world where kibi = 2^10 = 1024, mebi = 2^20 = 1048576 and so on. The abbreviations are Ki, Mi, Gi and so on.
In popular usage the abbreviations K, M, G etc are slightly vague, and sometimes understood to mean one thing, sometimes the other.
The bottom line is that whenever it matters, you have to take extra care to know which you're using. Some documentation will not have taken that care, and can be very misleading. I don't see this ambiguity changing any time soon.
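A short sketch of the two conventions described above; it reproduces exactly the two conflicting numbers from the question:

```python
size_b = 191_919_191

mb_si = size_b / 1000**2    # SI megabytes (what Google and 'get info' use)
mib   = size_b / 1024**2    # binary mebibytes

print(f"{mb_si:.3f} MB")    # 191.919 MB
print(f"{mib:.3f} MiB")     # 183.028 MiB
```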

WriteFile failure with error code 87 on a disk with 4096 bytes per sector

A WriteFile() Win32 call with a buffer size of 512 bytes fails when I try to write to a disk that has 4096 bytes per sector (a 3 TB disk). The same WriteFile with a buffer size of 4096 works fine.
Can anybody explain this behavior?
For low-level (unbuffered) I/O operations, your buffer size must be an integer multiple of the sector size: in your case, k * 4096 (error 87 is ERROR_INVALID_PARAMETER). Most likely your hard drive was manufactured recently; such drives are called "Advanced Format" and have 4096 bytes per sector. Mine doesn't mind if I use 512 because it's old. Try using the GetDiskFreeSpace function to learn the sector size of your drive.
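The fix is simply to round the write size up to the next multiple of the sector size reported for the drive. A small sketch of that rounding (Python, just the arithmetic; align_up is a hypothetical helper name):

```python
def align_up(n: int, sector: int) -> int:
    """Round n up to the next multiple of the sector size."""
    return ((n + sector - 1) // sector) * sector

print(align_up(512, 4096))    # 4096: a 512-byte write must be padded
print(align_up(4096, 4096))   # 4096: already aligned
print(align_up(5000, 4096))   # 8192
```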

Reading binary files, Linux Buffer Cache

I am busy writing something to test the read speeds for disk IO on Linux.
At the moment I have something like this to read the files:
Edited to change code to this:
const int segsize = 1048576;   // read 1 MiB at a time
char buffer[segsize];
ifstream file;
file.open(sFile.c_str(), ios::binary);
// Note: readsome() only returns data already in the stream's internal
// buffer, so this loop can stop before the end of the file.
while(file.readsome(buffer,segsize)) {}
For foo.dat, which is 150 GB, the first time I read it, it takes around 2 minutes.
However, if I run it again within 60 seconds of the first run, it takes around 3 seconds. How is that possible? Surely the only place it could be read from that fast is the buffer cache in RAM, and the file is too big to fit in RAM.
The machine has 50 GB of RAM, and the drive is an NFS mount with all the default settings. Where could I look to confirm that this file is actually being read at this speed? Is my code wrong? It appears to take a correct amount of time the first time the file is read.
Edited to Add:
Found out that my files were only being read up to a random point. I managed to fix this by changing segsize from 1048576 down to 1024. I have no idea why this change lets the ifstream read the whole file instead of stopping at a random point.
Thanks for the answers.
On Linux, you can do this for a quick throughput test:
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.863904 s, 243 MB/s
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.0748273 s, 2.8 GB/s
$ sync && echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/md0 of=/dev/null bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.919688 s, 228 MB/s
echo 3 > /proc/sys/vm/drop_caches will flush the page cache properly.
in_avail doesn't give the length of the file, but a lower bound on what is available without blocking (in particular, once the buffer has been filled, it returns the number of characters left in it).
unsigned int is most probably unable to hold a length of more than 4 GB, so what is read can very well fit in the cache.
C++0x stream positioning may be interesting to you if you are using large files.
in_avail returns a lower bound on how much is available in the stream's read buffer, not the size of the file. To read the whole file via the stream, keep calling the stream's read() method and checking how much was read with the gcount() method; when that returns zero, you have read everything. readsome() can return 0 before the end of the file, which is why your loop stopped early.
It appears to take a correct amount of time the first time the file is read.
On that first read, you're reading 150GB in about 2 minutes. That works out to about 10 gigabits per second. Is that what you're expecting (based on the network to your NFS mount)?
One possibility is that the file could be at least in part sparse. A sparse file has regions that are truly empty - they don't even have disk space allocated to them. These sparse regions also don't consume much cache space, and so reading the sparse regions will essentially only require time to zero out the userspace pages they're being read into.
You can check with ls -lsh. The first column will be the on-disk size - if it's less than the file size, the file is indeed sparse. To de-sparse the file, just write to every page of it.
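The same check can be done programmatically; a sketch (assuming Linux, where st_blocks counts 512-byte units), creating a sparse file with truncate and comparing allocated bytes to the logical size:

```python
import os
import tempfile

# truncate() extends the file's size without writing any data,
# which produces a sparse file on most Linux filesystems.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.truncate(10 * 1024 * 1024)     # logical size: 10 MiB

st = os.stat(path)
allocated = st.st_blocks * 512       # bytes actually on disk
print(st.st_size, allocated)         # allocated should be far smaller

os.remove(path)
```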
If you would like to test for true disk speeds, one option would be to use the O_DIRECT flag to open(2) to bypass the cache. Note that all IO using O_DIRECT must be page-aligned, and some filesystems do not support it (in particular, it won't work over NFS). Also, it's a bad idea for anything other than benchmarking. See some of Linus's rants in this thread.
Finally, to drop all caches on a linux system for testing, you can do:
echo 3 > /proc/sys/vm/drop_caches
If you do this on both client and server, you will force the file out of memory. Of course, this will have a negative performance impact on anything else running at the time.