How can I get the physical size of all hard disks on the current computer using C++ and the Qt framework on Windows? Just to be clear, if I have a 640 GB HDD, I want the application to show 640 GB, not 596 GB of available space.
I know that Qt probably doesn't have a function I could use, because this has to be platform-specific, so I guess I need something from the Win32 API. Unfortunately I can't use GetDiskFreeSpaceEx(), because that would only give me the free/available disk space. I've read about using WMI, but I can't seem to find any usable code examples for this purpose.
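The closest I've gotten so far is something like the sketch below, using DeviceIoControl with IOCTL_DISK_GET_LENGTH_INFO on a physical-drive handle (drive 0 hardcoded here, and opening \\.\PhysicalDrive0 seems to need administrator rights), but I'm not sure whether this is the right approach:

```cpp
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main() {
    // Open the first physical drive; loop over PhysicalDrive1, 2, ... for all disks.
    HANDLE hDrive = CreateFileW(L"\\\\.\\PhysicalDrive0", GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_WRITE,
                                nullptr, OPEN_EXISTING, 0, nullptr);
    if (hDrive == INVALID_HANDLE_VALUE)
        return 1;

    GET_LENGTH_INFORMATION lengthInfo{};
    DWORD bytesReturned = 0;
    if (DeviceIoControl(hDrive, IOCTL_DISK_GET_LENGTH_INFO,
                        nullptr, 0, &lengthInfo, sizeof(lengthInfo),
                        &bytesReturned, nullptr)) {
        // Report in "label" gigabytes (1 GB = 10^9 bytes), like the manufacturer does.
        double gb = static_cast<double>(lengthInfo.Length.QuadPart) / 1e9;
        printf("PhysicalDrive0: %.0f GB\n", gb);
    }
    CloseHandle(hDrive);
    return 0;
}
```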
I think this issue is mainly cosmetic, a result of inconsistencies between the units used by operating systems and those used by hard drive manufacturers. Check this wikipedia page for more information. Perhaps find a way to do the math while treating 1 kilobyte as 1000 bytes (instead of 1024), 1 megabyte as 1000 * 1000 bytes and so on, instead of 1 kilobyte as 1024 bytes, etc.
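For example, a small helper along these lines (just a sketch) shows both conventions side by side:

```cpp
#include <cstdint>
#include <cstdio>

// Format a byte count the way drive manufacturers do: 1 GB = 1000^3 bytes,
// rather than the 1 GiB = 1024^3 bytes most OS dialogs use.
void printDriveSize(uint64_t bytes) {
    double gb  = bytes / 1e9;                             // decimal "marketing" gigabytes
    double gib = bytes / (1024.0 * 1024.0 * 1024.0);      // binary gibibytes
    printf("%.1f GB (decimal) = %.1f GiB (binary)\n", gb, gib);
}

// Example: a "640 GB" drive
// printDriveSize(640000000000ULL);  // -> 640.0 GB (decimal) = 596.0 GiB (binary)
```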
Related
We have a 64-bit application running on Windows, and we know for a fact that it is leaking a small number of bytes of memory in the C++ code. But on a setup with 16 GB of physical RAM and a 32 GB pagefile.sys, Resource Monitor is showing 22 GB of commit memory and a 900 MB working set used by our process.
I know that for every process the OS creates a virtual address space organized in pages, and that the number of addresses depends on whether the process is 32-bit or 64-bit. I also know that the OS will swap pages to disk (pagefile.sys) to run other apps. On Windows I think the page size is 4 KB. What I want to know is: if one byte has leaked in a 4 KB page of physical RAM, then after that page is swapped to disk, will the process show 4 KB used instead of one byte?
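For reference, this is roughly how I read the two numbers from inside the process (a sketch using GetProcessMemoryInfo from psapi; as far as I understand, PrivateUsage is what Resource Monitor calls commit):

```cpp
#include <windows.h>
#include <psapi.h>   // link with Psapi.lib
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc{};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc))) {
        // WorkingSetSize: pages currently resident in physical RAM.
        // PrivateUsage:   committed private bytes ("Commit" in Resource Monitor).
        printf("Working set: %llu KB\n",
               static_cast<unsigned long long>(pmc.WorkingSetSize) / 1024);
        printf("Commit:      %llu KB\n",
               static_cast<unsigned long long>(pmc.PrivateUsage) / 1024);
    }
    return 0;
}
```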
I am in the process of creating a C++ application that measures disk usage. I've been able to retrieve current disk usage (read and write speeds) by reading /proc/diskstats at regular intervals.
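For reference, this is roughly how I sample /proc/diskstats (a simplified sketch; the field positions follow the documented layout, and a sector is 512 bytes, so two samples plus the elapsed time give a throughput):

```cpp
#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>

// Returns the cumulative sectors read/written for one device (e.g. "sda").
// Sample twice, subtract, multiply by 512 and divide by the interval to get bytes/s.
bool readDiskStats(const std::string& device,
                   uint64_t& sectorsRead, uint64_t& sectorsWritten) {
    std::ifstream in("/proc/diskstats");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        unsigned major, minor;
        std::string name;
        uint64_t f[11] = {};
        ss >> major >> minor >> name;
        for (int i = 0; i < 11 && ss; ++i) ss >> f[i];
        if (name == device) {
            sectorsRead    = f[2];  // 3rd field after the name: sectors read
            sectorsWritten = f[6];  // 7th field after the name: sectors written
            return true;
        }
    }
    return false;
}
```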
I would now like to be able to display this usage as a percentage (I find it is more user-friendly than raw numbers, which can be hard to interpret). Therefore, does anyone know of a method for retrieving maximum (or nominal) disk I/O speed programmatically on Linux (API call, reading a file, etc)?
I am aware of various answers about measuring disk speeds (e.g. https://askubuntu.com/questions/87035/how-to-check-hard-disk-performance), but all of them do it through testing. I would like to avoid such methods, as they take some time to run and entail heavy disk I/O while running (thus potentially degrading the performance of other running applications).
At the dawn of the IBM PC era, there was a great DOS utility, I forget its name, but it measured the speed of the computer (maybe Speedtest? whatever). There was a bar about two-thirds of the way down the screen that represented the speed of the CPU. If you had a 4.0 MHz (not GHz!) machine, the bar occupied 10% of the screen.
Two or three years later, '386 computers appeared, and the speed indicator bar outgrew not just the line but the screen, and it looked crappy.
So, there is no such thing as 100% disk speed, CPU speed, etc.
The best you can do: if your program runs for a while, you can remember the highest value observed and treat it as 100%. You could also save that value to a temp file.
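A minimal sketch of that idea (the observed throughput would come from whatever sampling you already do, e.g. /proc/diskstats):

```cpp
#include <algorithm>
#include <cstdint>

// Running-maximum normalizer: treats the highest throughput seen so far as 100%.
class ThroughputGauge {
public:
    double percent(uint64_t observedBytesPerSec) {
        maxSeen_ = std::max(maxSeen_, observedBytesPerSec);
        return maxSeen_ ? 100.0 * observedBytesPerSec / maxSeen_ : 0.0;
    }
private:
    uint64_t maxSeen_ = 0;  // could be loaded from / saved to a temp file between runs
};
```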
I am working on the B-L475E-IOT01A2, an STM32L475-series Discovery IoT kit with an ARM Cortex-M4. It has two banks of flash memory of 512 KB each. I am implementing two applications along with a bootloader, and all of them are stored in flash. Since there is very little space, the bootloader, the 1st application and part of the 2nd application are stored in the 1st bank, whereas the 2nd bank contains the remaining part of the 2nd application. So at a point in the bootloader program, I need to swap both applications.
The problem is that only some part of both applications is getting swapped, because the 2nd application is stored partly in both banks. Only one page (2 KB) of flash memory can be written at once. Both applications have a size of 384 KB, which works out to 192 pages each, but after running the swapping program only 72 pages were swapped.
Here are the addresses of the applications and the bootloader.
BOOTLOADER_ADDRESS   0x08000000 (Size = 48 KB)
APPLICATION1_ADDRESS 0x0800F000 (Size = 384 KB)
APPLICATION2_ADDRESS 0x0806F800 (Size = 384 KB)
So what should I do to ensure proper swapping? Should I enable dual bank mode or store the 2nd Application in the 2nd bank or do something else?
Your help will be highly appreciated.
Thanks,
Shetu
One possible workaround/different approach is to integrate the bootloader functionality into both application 1 and application 2, and to keep each application in its own flash bank (1 and 2). Using dual-bank mode makes switching back and forth between applications much easier. I have used this approach with an STM32F7 device.
When the device boots, it is configured to boot from flash bank 1 or 2 depending on several device option bytes/settings. If your code in the bootloader/application decides to boot into the other application, it can do so by modifying some option bytes and then performing a soft reset. Also, while running the bootloader/application from one flash bank, the other flash bank can be updated.
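As a rough illustration only, assuming the STM32L4 HAL (check the exact option-byte constants against the HAL headers for your part), switching the boot bank looks roughly like this:

```cpp
#include "stm32l4xx_hal.h"

// Set the BFB2 user option bit so the MCU boots from flash bank 2.
// Reloading the option bytes triggers a system reset, after which execution
// starts from the other bank.
void BootFromBank2(void)
{
    FLASH_OBProgramInitTypeDef ob = {};

    HAL_FLASH_Unlock();
    HAL_FLASH_OB_Unlock();

    ob.OptionType = OPTIONBYTE_USER;
    ob.USERType   = OB_USER_BFB2;      // "boot from bank 2" user option bit
    ob.USERConfig = OB_BFB2_ENABLE;
    HAL_FLASHEx_OBProgram(&ob);

    // Forces the option-byte reload and the accompanying reset.
    HAL_FLASH_OB_Launch();
}
```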
If using this approach to do firmware updates, you must be especially careful that new firmware versions do not break the firmware update functionality of the bootloader.
I'm working on a "search" project. The main idea is how to build an index that responds to search requests as fast as possible. The input is a query, such as "termi termj"; the output is the docs where both termi and termj appear.
The index file looks like this (each line is called a postlist, which is a sorted array of unsigned ints and can be compressed with a good compression ratio):
term1:doc1, doc5, doc8, doc10
term2:doc10, doc51, doc111, doc10000
...
termN:doc2, doc4, doc10
The 3 main time-consuming steps are:
seek to termi's and termj's postlists in the file (random disk read)
decode the postlists (CPU)
calculate the intersection of the 2 postlists (CPU) -- see the sketch below
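Step 3 is currently a plain two-pointer merge over the sorted postlists, roughly like this (simplified):

```cpp
#include <cstdint>
#include <vector>

// Two-pointer intersection of sorted postlists. Runs in O(|a| + |b|);
// for very skewed list sizes, galloping/binary search on the longer list can be faster.
std::vector<uint32_t> intersect(const std::vector<uint32_t>& a,
                                const std::vector<uint32_t>& b) {
    std::vector<uint32_t> out;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j])      ++i;
        else if (b[j] < a[i]) ++j;
        else { out.push_back(a[i]); ++i; ++j; }  // doc appears in both lists
    }
    return out;
}
```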
My question is: how can I know that the application can't be made more efficient because it has a disk I/O bottleneck? How can I measure whether my computer is using its disk at 100 percent? Are there any tools on Linux to help? Is there any tool that can measure disk I/O as well as the Google CPU profiler measures CPU?
My development environment is Ubuntu 14.04.
CPU: 8 cores, 2.6 GHz
Disk: SSD
The benchmark is currently about 2000 queries/second, but I don't know how to improve it.
Any suggestion will be appreciated! Thank you very much!
Experienced Mac / OS X developer here, working with libCinder, a cross-platform C++ toolkit for graphics, looking for some guidance on optimizing Windows disk access.
I am in the process of optimizing disk access for reading (large, high-resolution) image sequences. I've implemented my optimizations on my Mac running OS X 10.10 and have almost quadrupled my disk performance to match synthetic disk benchmarks (yay).
Testing the same code on Windows results in no performance increase (boo!).
I want to explain what I am doing and why, and what little I understand.
Current state of my code:
My code has 4 threads:
Main thread spools up OpenGL and renders, handles events, etc.
Thread 1 reads from disk and loads my images directly into memory in their on-disk representation (tiff header and all) and dumps them into a concurrent circular buffer of Cinder buffer objects (producer).
Thread 2 reads from my circular buffer of buffers (consumer) and 'decodes' them into raw Cinder surface objects (uncompressed image data), then adds them to a second concurrent circular buffer of surfaces (producer).
Thread 3 reads from my surface buffer, submits them to a secondary GL context as textures, and notifies main that there is a new texture available.
I separated my threads out this way after measuring performance and hotspots, which indicated that disk access + file decoding was the limiting factor, and by decoupling them I was able to make gains.
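For clarity, the hand-off between the threads is a bounded blocking buffer; I actually use Cinder's ci::ConcurrentCircularBuffer, but a generic stand-in looks roughly like this (simplified sketch):

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Minimal bounded producer/consumer queue, standing in for the concurrent
// circular buffers between the pipeline threads.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {                       // producer side: blocks while full
        std::unique_lock<std::mutex> lock(m_);
        notFull_.wait(lock, [&] { return q_.size() < capacity_; });
        q_.push_back(std::move(item));
        notEmpty_.notify_one();
    }

    T pop() {                                 // consumer side: blocks while empty
        std::unique_lock<std::mutex> lock(m_);
        notEmpty_.wait(lock, [&] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        notFull_.notify_one();
        return item;
    }

private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::deque<T> q_;
    std::size_t capacity_;
};
```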
Now, within Thread 1, which does the disk reads, I've tried a few different methods of disk access:
Using library provided disk access (Mac + Win) via ci::DataSourcePath
fread (Mac + Win),
CreateFile (Win Only)
mmap (Mac only for now)
On OS X, I see that using fread (with no-cache and read-ahead flags via fcntl) results in slightly better sustained disk read performance than using Cinder's provided ci::DataSourcePath object, but not by much. I am able to almost saturate my MacBook Pro's SSD reads, getting roughly 750 MB/s via both methods. Interestingly, memory-mapped file access (with madvise) was not as fast (400-500 MB/s), but that's why we benchmark.
On Windows, CreateFile, fread (with no cache-disabling flags available, to my knowledge) and ci::DataSourcePath all result in similar performance; however, it's 200 MB/s, and on my hardware it should be possible to get near 8 GB/s (yes, seriously, we have RAIDed Intel PCI SSDs).
That is dismal!
Some questions for folks more familiar than I with Windows disk IO:
Research indicates that FILE_FLAG_NO_BUFFERING + FILE_FLAG_OVERLAPPED for Windows CreateFile was (is?) the way to go (a rough sketch of what I mean follows after this list).
Other info indicates I should be using I/O completion ports and async I/O.
Is Boost::ASIO viable for high performance, unbuffered disk access? Most posts indicate it being used for socket stuff.
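For concreteness, what I'm picturing for the CreateFile route is roughly the following (untested sketch; with FILE_FLAG_NO_BUFFERING the buffer address, file offset and read size all have to be sector-size multiples, and the file name is just a placeholder):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD kChunk = 1 << 20;   // 1 MiB per request, a multiple of the sector size

    // Open for unbuffered, overlapped (asynchronous) reads.
    HANDLE h = CreateFileW(L"big_image.tif", GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the alignment rule.
    void* buf = VirtualAlloc(nullptr, kChunk, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

    OVERLAPPED ov{};
    ov.hEvent = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    ov.Offset = 0;                  // sector-aligned file offset

    if (!ReadFile(h, buf, kChunk, nullptr, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        return 1;                   // read failed outright
    }
    // The read may still be in flight; decoding of previous chunks could overlap here.
    DWORD bytesRead = 0;
    GetOverlappedResult(h, &ov, &bytesRead, TRUE);  // wait for completion
    printf("read %lu bytes\n", bytesRead);

    CloseHandle(ov.hEvent);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}
```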
I'd like to avoid too much cross-platform complexity (overlapped and async I/O seem non-trivial), and it seems crazy to me that the exact same x64 architecture results in such drastically different performance. What am I missing? Why doesn't my decoupling work on Windows? What API should I be using in 2015?
Any advice is very appreciated.
TL;DR - Experienced Mac dev tried some basic optimizations for cross-platform disk IO and failed when it comes to Windows. Windows IO is weird; what API should I be using in 2015 to get fast disk reads on Windows?
Thanks.