I need a library for my C++ program.
But the problem is, I don't know the name of the data type I want.
I have an NPAPI plugin (I know this API is deprecated and removed from modern browsers) which issues HTTP range requests to a server.
Requests are asynchronous and the data may arrive in any order, in chunks of any size.
So I need to track the ranges I have already requested from the server.
For example, if initially I requested bytes [10-20] (inclusively), then requested [30-40], the data type I need should keep them as two intervals:
[10-20],[30-40]
But if I then request [21-29], or even [15-35], they should be merged into one interval:
[10-20],[30-40] + [15-35] = [10-40]
Also, I need subtraction for when a requested block arrives:
[10-40] - [20-30] = [10-19],[31-40]
(requested - arrived = what we're still waiting for)
I had a look at the boost::numeric::interval library, but at first glance it is too big for this task (1583 files, 13 MB of sources after './dist/bin/bcp numeric/interval ~/boost').
Also, GNU ddrescue has some similar arithmetic inside, but the code there isn't a library; it is coupled too tightly with the application's specifics.
UPDATE:
Here is what I've found on my way:
A container for integer intervals, such as RangeSet, for C++
https://en.wikipedia.org/wiki/Interval_tree
Boost.ICL
NCBI C++ Toolkit, CIntervalTree
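For what it's worth, the Boost.ICL interval_set listed above covers exactly these merge/subtract semantics. Here is a minimal sketch, assuming closed integer intervals as in the examples (the variable name pending is mine):

// Minimal sketch using Boost.ICL (header-only), closed integer intervals.
#include <boost/icl/interval_set.hpp>
#include <boost/icl/discrete_interval.hpp>
#include <iostream>

int main() {
    namespace icl = boost::icl;
    icl::interval_set<int> pending;          // byte ranges still awaited

    // [10-20] and [30-40] stay as two separate intervals
    pending += icl::discrete_interval<int>::closed(10, 20);
    pending += icl::discrete_interval<int>::closed(30, 40);

    // adding [15-35] bridges the gap: the set collapses to [10-40]
    pending += icl::discrete_interval<int>::closed(15, 35);

    // a block [20-30] arrives: subtract it, leaving [10-19] and [31-40]
    pending -= icl::discrete_interval<int>::closed(20, 30);

    for (const auto& iv : pending)
        std::cout << '[' << icl::first(iv) << '-' << icl::last(iv) << "] ";
    std::cout << '\n';                       // prints: [10-19] [31-40]
}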
I am attempting to read a gzip-compressed file from multiple threads.
I was thinking this would significantly speed up the decompression process, as my gzread calls in multiple threads start from different file offsets (using gzseek), hence they read different parts of the file.
The simplified code is like this:
// in threads
auto gf = gzopen("file.gz",xxx);
gzseek(gf, offset, SEEK_SET);
gzread(xx);
gzclose(gf);
To my surprise, the multi-threaded version of my program does not speed up at all. The 20-thread version takes exactly as long as the single-threaded version. I am pretty sure this is nowhere near the disk bottleneck.
I guess the zlib inflate functionality may need to decompress the entire file even to read a small part, but I failed to find any clue in the manual.
Does anyone have an idea how to speed this up in my case?
Short answer: due to the serial nature of a deflate stream, gzseek() must decode all of the compressed data from the start up to the requested seek point. So you can't get any gain with what you are trying to do. In fact, the total cycles spent will increase with the square of the length of the compressed data! So don't do that.
tl;dr: zlib isn't designed for random access. It seems possible to implement, though requiring a complete read-through to build an index, so it might not be helpful in your case.
Let's look into the zlib source. gzseek is a wrapper around gzseek64, which contains:
/* if within raw area while reading, just go there */
if (state->mode == GZ_READ && state->how == COPY &&
state->x.pos + offset >= 0) {
"Within raw area" doesn't sound quite right if we're processing a gzipped file. Let's look up the meaning of state->how in gzguts.h:
int how; /* 0: get header, 1: copy, 2: decompress */
Right. At the end of gz_open, a call to gz_reset sets how to 0. Returning to gzseek64, we end up with this modification to the state:
state->seek = 1;
state->skip = offset;
gzread, when called, processes this with a call to gz_skip:
if (state->seek) {
state->seek = 0;
if (gz_skip(state, state->skip) == -1)
return -1;
}
Following this rabbit hole just a bit further, we find that gz_skip calls gz_fetch until gz_fetch has processed enough input for the desired seek. gz_fetch, on its first loop iteration, calls gz_look which sets state->how = GZIP, which causes gz_fetch to decompress data from the input. In other words, your suspicion is right: zlib does decompress the entire file up to that point when you use gzseek.
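To make the cost concrete, here is my own rough sketch (not zlib's code) of what a forward gzseek amounts to from the caller's point of view: inflate and throw away every byte before the target offset, so the work grows linearly with the offset.

// Sketch only: seeking forward in a gzip stream is equivalent to inflating
// and discarding everything up to the target offset.
#include <zlib.h>

static int skip_to(gzFile gf, z_off_t target) {
    char discard[32768];
    z_off_t pos = gztell(gf);
    while (pos < target) {
        z_off_t want = target - pos;
        if (want > (z_off_t)sizeof(discard)) want = (z_off_t)sizeof(discard);
        int got = gzread(gf, discard, (unsigned)want);  // inflates, then throws away
        if (got <= 0) return -1;                        // error or EOF before target
        pos += got;
    }
    return 0;
}

Every thread that opens the file and seeks pays this full price independently, which is why the 20-thread version is no faster than the single-threaded one.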
The zlib implementation has no multithreading (http://www.zlib.net/zlib_faq.html#faq21 - "Is zlib thread-safe? - Yes. ... Of course, you should only operate on any given zlib or gzip stream from a single thread at a time.") and will decompress the "entire file" up to the seek position.
And the zlib format has poor alignment (bit alignment) and no offset fields (in the deflate format) that would enable parallel decompression or seeking.
You may try other implementations of deflate/inflate, for example http://zlib.net/pigz/ (or switch from this ancient single-core-era compression format to modern, non-zlib parallel formats: xz/lzma or something from Google).
pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries. To compile and use pigz, please read the README file in the source code distribution. You can read the pigz manual page here.
The manual page is http://zlib.net/pigz/pigz.pdf and it has useful information.
It uses a format compatible with zlib, but adapted for parallel compression:
Each partial raw deflate stream is terminated by an empty stored block ... in order to end that partial bit stream at a byte boundary.
Still, the DEFLATE format is bad for parallel decompression:
Decompression can’t be parallelized, at least not without specially prepared deflate streams for that purpose. As a result, pigz uses a single thread (the main thread) for decompression, but will create three other threads for reading, writing, and check calculation, which can speed up decompression under some circumstances.
Referring to the docs, you can specify the number of concurrent connections when pushing large files to Amazon Web Services S3 using the multipart uploader. While it does say the concurrency defaults to 5, it does not specify a maximum, or whether the size of each chunk is derived from the total filesize / concurrency.
I trawled the source code and the comment is pretty much the same as the docs:
Set the concurrency level to use when uploading parts. This affects
how many parts are uploaded in parallel. You must use a local file as
your data source when using a concurrency greater than 1
So my functional build looks like this (the vars are defined, by the way; this is just condensed for the example):
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($file)
    ->setBucket($bucket)
    ->setKey($file)
    ->setConcurrency(30)
    ->setOption('CacheControl', 'max-age=3600')
    ->build();
Works great, except that a 200 MB file takes 9 minutes to upload... with 30 concurrent connections? That seems suspicious to me, so I upped the concurrency to 100 and the upload time was 8.5 minutes. Such a small difference could just be the connection and not the code.
So my question is whether there's a concurrency maximum, what it is, and whether you can specify the size of the chunks or whether chunk size is calculated automatically. My goal is to get a 500 MB file to transfer to AWS S3 within 5 minutes; however, I have to optimize that if possible.
Looking through the source code, it looks like 10,000 is the maximum number of concurrent connections. There is no automatic calculation of chunk sizes based on concurrent connections, but you can set those yourself if needed for whatever reason.
I set the chunk size to 10 MB with 20 concurrent connections and it seems to work fine. On a real server I got a 100 MB file to transfer in 23 seconds. Much better than the 3.5 to 4 minutes it was getting in the dev environments. Interesting, but those are the stats, should anyone else come across this same issue.
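As a rough sanity check using those numbers (my own arithmetic, assuming throughput stays roughly constant): 100 MB in 23 s is about 4.3 MB/s, so a 500 MB file should take on the order of 500 / 4.3 ≈ 115 s, i.e. roughly two minutes, comfortably inside the 5-minute target; with a 10 MB minimum part size that is about 50 parts.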
This is what my builder ended up being:
$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource($file)
    ->setBucket($bucket)
    ->setKey($file)
    ->setConcurrency(20)
    ->setMinPartSize(10485760)
    ->setOption('CacheControl', 'max-age=3600')
    ->build();
I may need to up that max cache, but as of yet this works acceptably. The key was moving the processing code to the server and not relying on the weakness of my dev environments, no matter how powerful the machine or how high-class the internet connection.
We can halt all operations and abort the upload at any point during the upload. We can also set the concurrency and the minimum part size.
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.mov')
    ->setBucket('mybucket')
    ->setKey('my-object-key')
    ->setConcurrency(3)
    ->setMinPartSize(10485760)
    ->setOption('CacheControl', 'max-age=3600')
    ->build();

try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
}
I just spent a day creating an abstraction layer over kyotodb to remove global locks from my code. I was busy porting my algorithms to this new abstraction layer when I discovered that scan_parallel isn't really parallel: it only maxes out one core. For jollies I stuck a billion-int-countdown spin-loop into my code (empty stubs as I port) to try to simulate some processing time. Still only one core maxed. Do I need to move to Berkeley DB or LevelDB? I thought kyotodb was meant for internet-scale problems :/ I must be doing something wrong or missing some gotchas.
top or iostat never went above 100% / 25% (in iostat, one CPU maxed = 1/number of cores * 100) :/ on a quad-core i5.
The source DB is a 10 GB corpus of protocol-buffer-encoded data (TreeDB) with the following flags (picked these up from the documentation).
index_db.tune_options(TreeDB::TLINEAR | TreeDB::TCOMPRESS); // linear collision chaining + page compression
index_db.tune_buckets(1LL * 1000);                          // number of buckets of the underlying hash table
index_db.tune_defrag(8);                                    // auto-defragmentation unit step
index_db.tune_page(32768);                                  // page size in bytes
EDIT:
Do not remove the IR TAG. Please think before you wave around the detag bat.
This IS an IR-related question: it's about creating GINORMOUS (40 GB+) inverted files ONLINE. Inverted indices are the basis of IR data access methods, and inverted index creation has a unique transactional profile. By removing the IR tag you rob me of the wisdom of IR researchers who have used a database library to create such large database files.
Trying to read the sizes of discs that were created in multiple sessions using GetDiskFreeSpaceEx() gives the size of the last session only. How do I correctly read the number and sizes of all sessions in C/C++?
Thanks.
You might want to look at the DeviceIoControl API function. See here for control codes. Here is a code example that retrieves the size of a CD disk. Substitute
CreateFile(TEXT("\\\\.\\PhysicalDrive0")
with, e.g.,
CreateFile(TEXT("\\\\.\\F:") /* Drive is F: */
if you wish.
Note: The page says that DeviceIoControl can be used to "retrieve information about a floppy disk drive, hard disk drive, tape drive, or CD-ROM drive", but I have also tested it on a DVD, and it seemed to work perfectly. I did not have access to any multisession DVDs to test, so you'll have to test if that works yourself. If it doesn't work, I'd try some of the other control codes, at least IOCTL_DISK_GET_DRIVE_GEOMETRY_EX, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, IOCTL_DISK_GET_LENGTH_INFO and IOCTL_DISK_GET_PARTITION_INFO_EX.
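For reference, here is a minimal sketch of the DeviceIoControl call pattern using IOCTL_DISK_GET_LENGTH_INFO (this returns whatever total length the driver reports; I have not verified its behaviour on multisession discs, so treat it only as a skeleton to plug the other control codes into):

// Minimal sketch: query a drive's reported length via DeviceIoControl.
// Error handling is reduced to the bare minimum on purpose.
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main() {
    HANDLE hDevice = CreateFile(TEXT("\\\\.\\F:"),            /* drive is F: */
                                GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_WRITE,
                                NULL, OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE) {
        std::printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    GET_LENGTH_INFORMATION info = {};
    DWORD bytesReturned = 0;
    if (DeviceIoControl(hDevice, IOCTL_DISK_GET_LENGTH_INFO,
                        NULL, 0, &info, sizeof(info),
                        &bytesReturned, NULL)) {
        std::printf("Reported length: %lld bytes\n",
                    static_cast<long long>(info.Length.QuadPart));
    } else {
        std::printf("DeviceIoControl failed: %lu\n", GetLastError());
    }

    CloseHandle(hDevice);
    return 0;
}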
If all fails with DeviceIoControl, you could possibly make use of the Windows Image Mastering API (IMAPI). You'll need v2 of the API (included with Vista and later; can be added to XP & 2003 too, see here: What's new in IMAPIv2) for DVD support. This API is primarily for CD burning, but perhaps contains some functionality for retrieving disc size; I'd find it weird if it didn't. In particular, this example seems interesting. I do not know if this one works for multisession discs either, but since it can create them, I guess it's likely.
Here are some resources for IMAPI:
MSDN - IMAPI
MSDN - IMAPI interfaces
MSDN - Creating multisession disks with IMAPI (note: example with VB, not C or C++)
Hey, I've got at least 2 solutions for you:
1) Download dvd+rw-mediainfo.exe from http://fy.chalmers.se/~appro/linux/DVD+RW/tools/win32/, it's a tool that reads info about your disc. Then just make a system call from your app and parse the results. Here's example output:
D:\Downloads>"dvd+rw-mediainfo.exe" f:
INQUIRY: [HL-DT-ST][DVDRAM GT30N ][1.01]
GET [CURRENT] CONFIGURATION:
Mounted Media: 10h, DVD-ROM
Current Write Speed: 1.0x1385=1385KB/s
Write Speed #0: 8.0x1385=11080KB/s
Write Speed #1: 4.0x1385=5540KB/s
Write Speed #2: 2.0x1385=2770KB/s
Write Speed #3: 1.0x1385=1385KB/s
Speed Descriptor#0: 00/2292991 R#8.0x1385=11080KB/s W#8.0x1385=11080KB/s
READ DVD STRUCTURE[#0h]:
Media Book Type: 01h, DVD-ROM book [revision 1]
Legacy lead-out at: 2292992*2KB=4696047616
READ DISC INFORMATION:
Disc status: complete
Number of Sessions: 1
State of Last Session: complete
Number of Tracks: 1
READ TRACK INFORMATION[#1]:
Track State: complete
Track Start Address: 0*2KB
Free Blocks: 0*2KB
Track Size: 2292992*2KB
Last Recorded Address: 2292991*2KB
FABRICATED TOC:
Track#1 : 17#0
Track#AA : 17#2292992
Multi-session Info: #1#0
READ CAPACITY: 2292992*2048=4696047616
2) Investigate mciSendString from [DllImport("winmm.dll", EntryPoint = "mciSendStringA", CharSet = CharSet.Ansi)], I suspect you can send some command and get the desired results.
PS: of course you may download the dvd+rw-mediainfo.exe sources from here and investigate further; I am just giving you ideas to think about.
UPDATE
Link to source code updated, thanks @oystein
There are many ways to do this, since DVD drives have several interfaces for this due to legacy and backward-compatibility issues.
You could send an IOCTL_SCSI_PASS_THROUGH_DIRECT command to the DVD drive (the physical device handle for it). With it you issue SCSI commands that will be answered by the drive. You can read session information, disc information, disc capacity and more.
I believe that dvd+rw-mediainfo.exe issues these.
Unfortunately, the interface is a bit tricky and obscure, since it is a command within a command. The pass-through structure has a byte buffer you will have to fill in yourself with the command structure.
Or you can call IOCTL_CDROM_READ_TOC_EX:
http://www.osronline.com/ddkx/storage/k306_2cs2.htm
I also believe that the exact set of IOCTLs / commands that will work depends on the drive and its firmware.
Older drives will not support the newer interfaces, and some of the newer drives will not support legacy interfaces.
Thus, some of the libraries & tools might use one or more of these interfaces.
Accessing the older sessions is all quite messy, really, since most OSes will not care about them, only the most recent one.
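If you go the IOCTL_CDROM_READ_TOC_EX route, a rough sketch could look like this (a hedged illustration based on my reading of the ntddcdrm.h structures, not verified against a real multisession disc; the session format should at least give you the first and last complete session numbers to iterate over):

// Hedged sketch of the IOCTL_CDROM_READ_TOC_EX route (session format).
// Verify the structure layout against your own SDK headers.
#include <windows.h>
#include <winioctl.h>
#include <ntddcdrm.h>
#include <cstdio>

int main() {
    HANDLE hDrive = CreateFile(TEXT("\\\\.\\F:"), GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDrive == INVALID_HANDLE_VALUE) {
        std::printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    CDROM_READ_TOC_EX tocEx = {};
    tocEx.Format = CDROM_READ_TOC_EX_FORMAT_SESSION;   // ask for session info
    tocEx.Msf = 0;                                      // addresses as LBA, not MSF

    CDROM_TOC_SESSION_DATA session = {};
    DWORD bytes = 0;
    if (DeviceIoControl(hDrive, IOCTL_CDROM_READ_TOC_EX,
                        &tocEx, sizeof(tocEx),
                        &session, sizeof(session),
                        &bytes, NULL)) {
        std::printf("First complete session: %u\n", (unsigned)session.FirstCompleteSession);
        std::printf("Last complete session:  %u\n", (unsigned)session.LastCompleteSession);
    } else {
        std::printf("IOCTL_CDROM_READ_TOC_EX failed: %lu\n", GetLastError());
    }

    CloseHandle(hDrive);
    return 0;
}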
I want to implement a progress bar in my C++ Windows application for downloading a file using WinHTTP. Any idea how to do this? It looks as though WinHttpSetStatusCallback is what I want to use, but I don't see which notification to look for... or how to get the "percent downloaded"...
Help!
Thanks!
Per the docs:
WINHTTP_CALLBACK_STATUS_DATA_AVAILABLE
Data is available to be retrieved with WinHttpReadData. The lpvStatusInformation parameter points to a DWORD that contains the number of bytes of data available. The dwStatusInformationLength parameter itself is 4 (the size of a DWORD).
and
WINHTTP_CALLBACK_STATUS_READ_COMPLETE
Data was successfully read from the server. The lpvStatusInformation parameter contains a pointer to the buffer specified in the call to WinHttpReadData. The dwStatusInformationLength parameter contains the number of bytes read.
There may be other relevant notifications, but these two seem to be the key ones. Getting "percent" is not necessarily trivial because you may not know how much data you're getting (not all downloads have content-length set...); you can get the headers with:
WINHTTP_CALLBACK_STATUS_HEADERS_AVAILABLE
The response header has been received and is available with WinHttpQueryHeaders. The lpvStatusInformation parameter is NULL.
and if Content-Length IS available then the percentage can be computed by keeping track of the total number of bytes at each "data available" notification, otherwise your guess is as good as mine;-).
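To tie the pieces together, here is a hedged sketch of such a status callback (the g_totalBytes / g_receivedBytes globals are my own illustration, and error handling is omitted): it reads Content-Length on WINHTTP_CALLBACK_STATUS_HEADERS_AVAILABLE and accumulates bytes on WINHTTP_CALLBACK_STATUS_READ_COMPLETE to derive a percentage.

// Sketch of a WinHTTP status callback driving a progress value.
// Assumes an asynchronous request handle; link against winhttp.lib.
#include <windows.h>
#include <winhttp.h>
#include <cstdio>

static DWORD g_totalBytes = 0;      // from Content-Length (0 = unknown)
static DWORD g_receivedBytes = 0;   // accumulated so far

void CALLBACK StatusCallback(HINTERNET hRequest, DWORD_PTR /*context*/,
                             DWORD status, LPVOID /*info*/, DWORD infoLen) {
    switch (status) {
    case WINHTTP_CALLBACK_STATUS_HEADERS_AVAILABLE: {
        DWORD length = 0, size = sizeof(length);
        // Content-Length may be absent (e.g. chunked transfer); then total stays 0.
        if (WinHttpQueryHeaders(hRequest,
                                WINHTTP_QUERY_CONTENT_LENGTH | WINHTTP_QUERY_FLAG_NUMBER,
                                WINHTTP_HEADER_NAME_BY_INDEX,
                                &length, &size, WINHTTP_NO_HEADER_INDEX)) {
            g_totalBytes = length;
        }
        break;
    }
    case WINHTTP_CALLBACK_STATUS_READ_COMPLETE:
        // infoLen is the number of bytes just read into the WinHttpReadData buffer.
        g_receivedBytes += infoLen;
        if (g_totalBytes > 0) {
            std::printf("progress: %u%%\r",
                        (unsigned)(100ull * g_receivedBytes / g_totalBytes));
        }
        break;
    default:
        break;
    }
}

// Registration (before WinHttpSendRequest), e.g.:
//   WinHttpSetStatusCallback(hRequest, StatusCallback,
//                            WINHTTP_CALLBACK_FLAG_ALL_NOTIFICATIONS, 0);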