A friend of mine just used plain-old cat to concatenate two mp3 files,...
cat file1.mp3 file2.mp3 > out.mp3
...and the resulting file plays perfectly well: one song, then the next.
What is this black magic? What about headers and metadata? How can this work? Even the duration is displayed correctly.
An MP3 file is nothing more than raw MPEG Audio Layer III stream data; there is no file-level header structure carrying, for example, the duration, original source, or encoding info. An MP3 stream is made of frames, each starting with a synchronization marker (FF Fx), so arbitrary data such as ID3 tags can be placed almost anywhere and will not affect the audio. Players either guess the duration from the bitrate and file size (if ID3 tags don't list it) or do a full scan of the file to calculate it accurately.
Don't forget that players are typically prepared to handle variable bitrate encodings, so each frame is liable to have a different bitrate anyway.
As for metadata, that's an odd duck: even though the ID3 tags from both tracks end up in the new file, most players only look for tags in the expected places (ID3v2 at the start of the file, ID3v1 at the end) for display to the user, and simply skip over tags embedded in the middle of a file as known 'not-music' content. Some players might play garbage or crash, but they wouldn't stay popular if they were that brittle.
And note that the MP3 frame headers don't encode any information about overall file size -- that's all calculated at runtime. (Perhaps through magic.)
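That runtime "magic" is just a frame walk. Below is a minimal sketch in Python; it handles only MPEG1 Layer III headers (real files also contain MPEG2/2.5 frames, CRC flags, and Xing/VBR headers), but it shows how a player can sum per-frame durations while resyncing past non-frame bytes such as ID3 tags. The function names are mine; the table values are the standard MPEG1 Layer III ones.

```python
BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]  # kbps
SAMPLERATES = [44100, 48000, 32000]  # Hz (MPEG1)

def parse_frame(buf, i):
    """Return (frame_length_bytes, duration_seconds) for a frame at buf[i], or None."""
    if i + 4 > len(buf):
        return None
    b0, b1, b2 = buf[i], buf[i + 1], buf[i + 2]
    if b0 != 0xFF or (b1 & 0xE0) != 0xE0:
        return None                      # no sync marker here
    version = (b1 >> 3) & 3              # 3 = MPEG1
    layer = (b1 >> 1) & 3                # 1 = Layer III
    if version != 3 or layer != 1:
        return None                      # sketch handles MPEG1 Layer III only
    br_idx = (b2 >> 4) & 0xF
    sr_idx = (b2 >> 2) & 3
    padding = (b2 >> 1) & 1
    if br_idx in (0, 15) or sr_idx == 3:
        return None                      # free-format or invalid values
    bitrate = BITRATES[br_idx] * 1000
    samplerate = SAMPLERATES[sr_idx]
    length = 144 * bitrate // samplerate + padding
    return length, 1152.0 / samplerate   # 1152 samples per MPEG1 Layer III frame

def scan_duration(buf):
    """Walk frame to frame, skipping non-frame bytes, and sum the duration."""
    i, total = 0, 0.0
    while i + 4 <= len(buf):
        frame = parse_frame(buf, i)
        if frame:
            length, dur = frame
            total += dur
            i += length
        else:
            i += 1                       # resync past tags/garbage
    return total
```

Because the walker resyncs on every non-frame byte, concatenated files (and embedded ID3 blocks) fall out naturally, which is why cat "just works".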
Back when I was trying to learn German by listening to streaming radio stations, I frequently used dd to split apart giant streams by guessing how far into the track I wanted to start and stop cuts... inelegant, but no re-encoding, and my player handled it fine.
In a C++ program, I receive multiple chunks of PCM data, and I am currently using libmp3lame to encode this data into MP3 files. The PCM chunks are produced one after another. However, instead of waiting until the PCM data stream is finished, I'd like to encode the data as early as possible into multiple MP3 chunks, so the client can either play the pieces individually or append them together.
As far as I understand, MP3 files consist of frames, and files can be split along frame boundaries and published in isolation. Moreover, no length information is needed in advance, so the format is suitable for streaming. However, when I use libmp3lame to generate MP3 files from partial data, the products cannot be interpreted by audio players after being concatenated. I deactivated the bit reservoir, so I expect the frames to be independent.
Based on this article, I wrote a Python script that extracts and lists the frames in an MP3 file. I generated an MP3 file with libmp3lame by first collecting the whole PCM data and then applying libmp3lame to it. Then I took the first n frames from this file and put them into another file. But the result was unplayable as well.
How can I encode only chunks of the audio, which library is suitable for this, and what is the minimum size of a chunk?
I examined the source code of lame, and the file lame_main.c helped me come to a solution. This file implements the lame command-line utility, which can also encode multiple WAV files so that they can be appended into a single MP3 file without gaps.
My mistake was to initialize lame every single time I called my encode function, i.e., once for each new segment. This caused short interruptions in the output MP3. Initializing lame once and re-using it for subsequent calls solved the problem. Additionally, I call lame_init_bitstream at the start of each encode and set lame_set_nogap_currentindex and lame_set_nogap_total appropriately. Now the output fragments can be combined seamlessly.
I'm trying to do some file carving on a disk in C++. I can't find any resources on the web about the on-disk structure of a PDF file. The thing is that I can find the %PDF-1.x token at the start of a cluster, but I can't find the size of the PDF file anywhere.
Let's say, hypothetically, that the file system entry for this particular document is lost. I find the start of the document and keep reading until I run into the "startxref number %%EOF" sequence. The thing is that I don't know when to stop, since there can be multiple "%%EOF" markers within the content of a document.
I've tried stopping after reading, say, 10 clusters without finding any PDF-specific keyword like "obj", "stream", "trailer", or "xref". But that's quite arbitrary and not a deterministic method of finding the end of the document so I can determine its size.
I've also seen some "/Length number" entries at the start of some "obj"s, but the number doesn't really fit most of the time.
Any ideas on what I can try next? Is there a way to determine the exact size of the entire document? I'm interested in recovering documents programmatically.
Since PDFs are "free format" (pretty much like text files, but less obvious to humans when it comes to "reading" the content), it's probably hard to piece them together if the pieces aren't in order.
A stream does have a /Length entry, which is the key to where the endstream goes (there is a blank line before and after the stream data itself). Streams are used to introduce bitmaps and similar things (fonts, line-art data in compressed form, etc.) into the document. But if you have several 4KB segments that could each fit as the same block in the middle of a stream, there's no way to tell which one belongs there, other than pasting them in and seeing which look sane and which don't. Similarly, if there are several segments of streams and objects, you can't really tell which goes where.
Of course, this applies to almost all types of files with "variable content": you can find the first few kilobytes of a JPG, but knowing what the rest of the file is won't be easy. Only by visually inspecting the content can you determine which blocks of bytes belong where; if you get it wrong, you'll probably just get some random garbage.
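Building on the /Length idea, here is a hedged Python sketch of a carver that skips stream bodies by honoring their /Length entries, so that %%EOF bytes occurring inside stream data are not mistaken for the end of the document. It is only a sketch: real PDFs may use indirect /Length references (e.g. "/Length 12 0 R"), incremental updates with several legitimate %%EOF markers, and cross-reference streams, none of which this handles.

```python
import re

STREAM_RE = re.compile(rb'\bstream\r?\n')    # \b avoids matching 'endstream'
LENGTH_RE = re.compile(rb'/Length\s+(\d+)')

def carve_pdf_end(buf):
    """Return the offset just past the first %%EOF that lies outside any
    stream body, or None.  buf is assumed to start at the %PDF- header."""
    i = 0
    while True:
        eof = buf.find(b'%%EOF', i)
        if eof == -1:
            return None
        m = STREAM_RE.search(buf, i)
        if m is None or eof < m.start():
            return eof + len(b'%%EOF')
        # Look for the /Length entry in the dictionary just before 'stream'
        # and jump over the stream body so embedded '%%EOF' bytes are ignored.
        lengths = LENGTH_RE.findall(buf[max(0, m.start() - 256):m.start()])
        i = m.end() + (int(lengths[-1]) if lengths else 0)
```

The 256-byte look-back window for the dictionary is an arbitrary choice for illustration; a real carver would parse the object dictionary properly.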
The open-source tool bulk_extractor has a module called scan_pdf that does pretty much what you are describing here. It can recognize the individual parts of a PDF file on a drive, automatically decompresses the compressed regions, and extracts text using two strategies. It will recover data from fragments of PDFs even if the xref table cannot be found.
I need to implement a feature that transmits parts of a large MP3 file over TCP/IP in a way that lets the user listen to each part without having the entire file (using libmpg123). I would like to allow transmitting parts as small as possible without re-encoding the stream, because I don't want the sound quality to degrade with each transmission. Each time I want to cut the MP3 I have the splitting coordinates in samples ("from what sample to what sample"), so each time I need to translate these into the IDs of MP3 frames. So my questions are:
Does each MP3 frame have enough information (bitrate / sample rate / bits per sample / channels) to play it without the rest of the MP3 file, just by feeding it to an MP3 decoder?
Is there any BSD/MIT-licensed small library that can work as an MP3 splitter using sample coordinates and supporting VBR?
You can just cut the binary file!
The only problem with this solution is the tags.
Or try this: http://www.codeproject.com/Articles/8295/MPEG-Audio-Frame-Header
Each MP3 frame is stand-alone and can survive by itself, so you don't have to worry about it.
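To translate sample coordinates into frame IDs, you can rely on the fixed decoded frame size: an MPEG1 Layer III frame always decodes to 1152 samples (MPEG2/2.5 Layer III uses 576). A minimal sketch (the function name is mine); note that finding the byte offsets of those frames in a VBR file still requires walking the frame headers, since frame byte lengths vary:

```python
SAMPLES_PER_FRAME = 1152  # MPEG1 Layer III; MPEG2/2.5 Layer III decodes 576

def sample_range_to_frames(first_sample, last_sample):
    """Map an inclusive sample range to the inclusive range of frame indices
    that must be transmitted.  Cuts land on frame boundaries, so the receiver
    may get up to one frame (~26 ms at 44.1 kHz) of extra audio per side."""
    return (first_sample // SAMPLES_PER_FRAME,
            last_sample // SAMPLES_PER_FRAME)
```

Also be aware that with the bit reservoir enabled, frames are not fully self-contained; encoders can be told to disable it for clean cut points.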
I'm just trying to read a video stream from an IP camera (a Basler BIP-1280c).
The stream I want is saved in a buffer on the camera, is 40 seconds long, and is encoded as MJPEG.
Now, if I access the stream via my web browser, it shows me the 40 seconds without any problems.
But actually I need an application which is capable of downloading and saving the stream by itself.
The camera is accessed via HTTP, so I am using libcurl to access it. This works fine, and I can also download the stream without any trouble. I have chosen to save the stream data into an *.avi file (hope that's correct…?).
But now to the problem: I can open the video (tried with Totem Video Player and VLC) and also view everything that was recorded, BUT it plays way too fast. The whole video lasts about 5 seconds (instead of 40). Is there anything in an MJPEG header for information like the total video length or the fps? I mean, there must be some information missing for the video players, since they play it way too fast.
Update:
As suggested in the answers, I opened the file with a hex editor, and what I found was this:
--myboundary..Content-Type: image/jpeg..Content-Length: 39050.........*Exif..II*...............V...........................2...................0210................FrameNr=000398732
6.AOI=(0800x0720)#(0240,0060)/(1280x0720).Motion=00000 (no)
[00000 | 00000 | 00000 | 00000 | 00000].Alarm=0000 (no) .IO
=000.RtTrigger=0...Basler..BIP2-1280c..1970:01:05 23:08:10.8
98286......JFIF.................................. ....&"((
This header recurs throughout the file (followed each time by a lot of bytes of binary data). This is actually okay, since I read in the camera manual that every MJPEG picture gets this header.
More interesting is the JFIF in the last line. As suggested in the answers, this is maybe the indicator of the file format. But AFAIK, JFIF is a single-picture format, just like JPG. So does this mean that the whole video file is just some "brainless" chain of pictures? And my player just assumes that it should show these pictures one after another, without any knowledge of the frame rate?
There is not a single format to use with MJPEG. From Wikipedia:
[...] there is no document that defines a single exact format that is
universally recognized as a complete specification of “Motion JPEG”
for use in all contexts.
The formats differ by vendor. My advice would be to closely inspect the file you download. Check whether it really is an AVI container. (Some cameras send the frames wrapped in a MIME container.)
Once the container format is clear, you can check out that container's documentation and compare your file against one that has the same format and the desired fps. Then you can start adjusting your downloaded file to get the desired effect.
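Checking the container is a matter of looking at the first few bytes. An AVI file is a RIFF container whose header reads 'RIFF', a 4-byte little-endian size, then 'AVI '. A quick sketch in Python (the function name is mine):

```python
def looks_like_avi(head):
    """True if the buffer starts with a RIFF/AVI header:
    bytes 0-3 'RIFF', bytes 4-7 little-endian chunk size, bytes 8-11 'AVI '."""
    return head[:4] == b'RIFF' and head[8:12] == b'AVI '
```

If this returns False for your download (as it will for a MIME-wrapped dump starting with --myboundary), the file is not really an AVI, no matter what extension you gave it.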
You might also find this project useful: http://mjpeg.sourceforge.net/
Edit:
According to your sample data, your camera sends the frames packed into a MIME container. (The first line is the boundary, then come the headers until you encounter an empty line, then the file data itself, followed by the boundary again, and so on.)
These are JPEG files as the header suggests: image/jpeg. JFIF is the standard file format to store JPEG data.
I recommend you to:
Extract the contents of the file into multiple JPEG files (with munpack, for instance), then
use ffmpeg or mplayer to create a movie file out of the series of JPEGs.
This way you can specify the desired frame rate too.
Things can get more complicated if the camera dynamically changes the AOI (area of interest), meaning it can send only a smaller part of the image where a change occurred. But you should first check whether the simple approach works.
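The extraction step can also be done without munpack. Here is a hedged Python sketch that splits a MIME-wrapped capture like the one in the question on its boundary and pulls out each JPEG body using the part's Content-Length header (the boundary string and the presence of Content-Length are assumptions based on the hex dump above):

```python
import re

def split_mjpeg(buf, boundary=b'--myboundary'):
    """Return the individual JPEG byte strings from a MIME-wrapped MJPEG dump."""
    jpegs = []
    for part in buf.split(boundary)[1:]:
        header_end = part.find(b'\r\n\r\n')      # headers end at a blank line
        if header_end == -1:
            continue
        m = re.search(rb'Content-Length:\s*(\d+)', part[:header_end])
        if not m:
            continue
        body = part[header_end + 4:header_end + 4 + int(m.group(1))]
        if body.startswith(b'\xff\xd8'):         # JPEG SOI marker
            jpegs.append(body)
    return jpegs
```

After writing the parts out as frame_00001.jpg, frame_00002.jpg, ..., something like ffmpeg -framerate 25 -i frame_%05d.jpg out.mp4 (assuming 25 fps; use the camera's real rate) rebuilds a correctly timed movie.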
On Un*x systems (Linux, OS X, ...), you can use the file command-line tool to make a (usually good) guess about the file format.
--myboundary is an indication that the stream is regular M-JPEG streamed as multipart content over HTTP. There is no well-known file format that can hold this stream "as is" and be playable (that is, if you rename it to .avi, it is not supposed to play back).
The format itself is a sequence of (boundary, subheader, JPEG image) triplets. The stream does not carry timestamps, so the playback speed depends entirely on the player.
edit:
Sorry, I guess my question was vague. I'd like a way to check whether a file is not an image without wasting time loading the whole image; the rest of the loading can then happen later. I don't want to just check the file extension.
The application just views the images. By 'checking the validity', I meant 'detecting and skipping the non-image files' in the directory. If the pixel data is corrupt, I'd still like to treat the file as an image.
I assign page numbers and pair up these images. Some images are the single left or right page. Some images are wide and are the "spread" of the left and right pages. For example, pagesAt(3) and pagesAt(4) could return the same std::pair of images or a std::pair of the same wide image.
Sometimes, there is an odd number of 'thin' images, and the first image is to be displayed on its own, similar to a wide image. An example would be a single cover page.
Not knowing which files in the directory are non-images means I can't confidently assign page numbers and pair up the files for display. Also, the user may decide to jump to page X, and when I later discover and remove a non-image file and reassign page numbers accordingly, page X could suddenly be a different image.
original:
In case it matters, I'm using c++ and QImage from the Qt library.
I'm iterating through a directory and using the QImage constructor on the paths to the images. This is, of course, pretty slow and makes the application feel unresponsive. However, it does allow me to detect invalid image files and ignore them early on.
I could just save only the paths to the images while going through the directory and actually load them only when they're needed, but then I wouldn't know if the image is invalid or not.
I'm considering a combination of the two: while iterating through the directory, read only the headers of the images to check validity, and then load the image data when needed.
So,
Will loading just the image headers be much faster than loading the whole images? Or does doing a bit of I/O to read the header mean I might as well finish loading the image in full? Later on, I'll be uncompressing images from archives as well, so this also applies to uncompressing just the header vs. uncompressing the whole file.
Also, I don't know how to load/read just the image headers. Is there a library that can read just the headers of images? Otherwise, I'd have to open each file as a stream and write image header readers for all the file types myself.
The Unix file tool (which has been around since almost forever) does exactly this. It is a simple tool that uses a database of known file headers and binary signatures to identify the type of the file (and potentially extract some simple information).
The database is a simple text file (which gets compiled for efficiency) that describes a plethora of binary file formats, using a simple structured format (documented in man magic). The source is in /usr/share/file/magic (in Ubuntu). For example, the entry for the PNG file format looks like this:
0 string \x89PNG\x0d\x0a\x1a\x0a PNG image
!:mime image/png
>16 belong x \b, %ld x
>20 belong x %ld,
>24 byte x %d-bit
>25 byte 0 grayscale,
>25 byte 2 \b/color RGB,
>25 byte 3 colormap,
>25 byte 4 gray+alpha,
>25 byte 6 \b/color RGBA,
>28 byte 0 non-interlaced
>28 byte 1 interlaced
You could extract the signatures for just the image file types, and build your own "sniffer", or even use the parser from the file tool (which seems to be BSD-licensed).
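For instance, a tiny Python sniffer built from a handful of entries of that database might look like this (the signature list is deliberately incomplete; extend it with whatever formats you need):

```python
# Magic-byte signatures for a few common image formats (taken from the
# same signatures the `file` magic database uses).
SIGNATURES = [
    (b'\x89PNG\r\n\x1a\n', 'png'),
    (b'\xff\xd8\xff', 'jpeg'),
    (b'GIF87a', 'gif'),
    (b'GIF89a', 'gif'),
    (b'BM', 'bmp'),
]

def sniff_image(path):
    """Read only the first few bytes of a file and return its image type,
    or None if it matches no known signature."""
    with open(path, 'rb') as f:
        head = f.read(16)
    for magic, fmt in SIGNATURES:
        if head.startswith(magic):
            return fmt
    return None
```

Since only 16 bytes are read per file, scanning a whole directory this way is vastly cheaper than constructing a QImage per file; note that a matching signature proves only that the header looks right, not that the pixel data decodes.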
Just to add my 2 cents: you can use QImageReader to get information about image files without actually loading them.
For example, with the format() method you can check a file's image format.
From the official Qt doc ( http://qt-project.org/doc/qt-4.8/qimagereader.html#format ):
Returns the format QImageReader uses for reading images. You can call
this function after assigning a device to the reader to determine the
format of the device. For example:

QImageReader reader("image.png");
// reader.format() == "png"

If the reader cannot read any image from the device (e.g., there is no
image there, or the image has already been read), or if the format is
unsupported, this function returns an empty QByteArray().
I don't know the answer about loading just the header, and it likely depends on the image type you are trying to load. You might consider using QtConcurrent to go through the images while allowing the rest of the program to continue, if that's possible. In that case, you would probably initially represent all entries as being in an unknown state, and then change each one to image or not-an-image once its verification is done.
If you're talking about image files in general, and not just a specific format, I'd be willing to bet there are cases where the image header is valid but the image data isn't. You haven't said much about your application; is there no way you could add a background thread that keeps a few images in RAM and swaps them in and out depending on what the user may load next? E.g., a slide-show app would load one or two images ahead of and behind the current one. Or maybe display a question mark next to the image name until the background thread can verify the validity of the data.
While opening and reading the header of a file on a local filesystem should not be too expensive, it can be expensive if the file is on a remote (networked) filesystem. Even worse, if you are accessing files saved with hierarchical storage management, reading a file can be very expensive.
If this app is just for you, then you can decide not to worry about those issues. But if you are distributing your app to the public, reading the file before you absolutely have to will cause problems for some users.
Raymond Chen wrote an article about this for his blog The Old New Thing.