Streaming MP3 from Internet with FMOD

I thought this would be a relatively simple task with something like FMOD, but I can't get it to work. Even the netstream example code doesn't seem to do the trick. No matter what MP3 I try to play with the netstream example program, I get this error:
FMOD error! (20) Couldn't perform seek operation. This is a limitation of the medium (ie netstreams) or the file format.
I don't really understand what this means. Isn't this exactly what the netstream example program is for: streaming a file from the internet?
I can't get past the createSound method:
result = system->createSound(argv[1], FMOD_HARDWARE | FMOD_2D | FMOD_CREATESTREAM | FMOD_NONBLOCKING, 0, &sound);
EDIT:
This is what I modified after reading Mathew's answer
FMOD_CREATESOUNDEXINFO soundExInfo;
memset(&soundExInfo, 0, sizeof(FMOD_CREATESOUNDEXINFO));
soundExInfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
soundExInfo.suggestedsoundtype = FMOD_SOUND_TYPE_MPEG;
result = system->createSound(argv[1], FMOD_HARDWARE | FMOD_2D | FMOD_CREATESTREAM | FMOD_NONBLOCKING | FMOD_IGNORETAGS, &soundExInfo, &sound);
I get two different errors depending on which files I use.
Test 1 URL: http://kylegobel.com/test.mp3
Test 1 Error: (25) Unsupported file or audio format.
Test 2 URL: http://kylegobel.com/bullet.mp3
Test 2 Error: (20) Couldn't perform seek operation. This is a limitation of the medium (ie netstreams) or the file format.
Before I made the change, I could use netstream to play "C:\test.mp3", which is the same file as the test.mp3 on the web, but that no longer works with the above changes. Maybe these files are just in the wrong format or something? Sorry for my lack of knowledge in this area; I really don't know much about audio, but I'm trying to figure it out.
Thanks,
Kyle

It's possible the MP3 has a large amount of tags at the start, so FMOD reads them then tries to seek back to the start (which it can't do because it's a net stream). Can you try using FMOD_IGNORETAGS and perhaps FMOD_CREATESOUNDEXINFO with suggestedsoundtype set to FMOD_SOUND_TYPE_MPEG?
If that doesn't work, could you post the URL of a known not-working MP3 stream?
EDIT:
The file in question has around 60 KB of tag data. FMOD is happy to read over that, but for the MPEG codec to work it needs to do some small seeks. Since you cannot seek a netstream, all the seeks must be contained inside the low-level file buffer. If you make the file buffer a bit larger, you can overcome this restriction; see the "blockalign" parameter of System::setFileSystem.

How do I read text chunks quicker with libpng?

With libpng, I'm trying to extract text chunks from a 44-megabyte PNG image (and preferably validate that the PNG data is not malformed, e.g. lacking IEND). I could do that with png_read_png and png_get_text, but it took way too long for me (0.47 seconds), which I'm pretty sure is because of the massive number of IDAT chunks the image has. How do I do this more quickly?
I didn’t need the pixels, so I tried to make libpng ignore the IDAT chunks.
To have libpng ignore IDAT chunks, I tried:
Calling png_read_info(p_png, p_png_information), then png_read_image(p_png, nullptr), then png_read_end(p_png, p_png_information) to skip the IDAT chunks; it crashed and failed.
Calling png_set_keep_unknown_chunks to make libpng treat IDAT as unknown, plus png_set_read_user_chunk_fn(p_png, nullptr, discard_an_unknown_chunk) (discard_an_unknown_chunk is a function that simply does return 1;) to discard unknown chunks; a weird CRC error occurred on the first IDAT chunk and it failed.
Edit
Running as a Node.js C++ addon, mostly written in C++, on Windows 10, with an i9-9900K CPU @ 3.6 GHz and gigabytes of memory.
I read the image file from an SSD with fs.readFileSync, a Node.js method returning a Buffer, and tossed it to libpng to process.
Yes, at first, I blamed libpng for the prolonged computation. Now I see there might be other reasons causing the delay. (If that’s the case, this question would be a bad one with an XY problem.) Thank you for your comments. I’ll check my code out again more thoroughly.
Edit 2
With every step for feeding the PNG data to the C++ addon kept the same, I ended up manually picking and decoding only the text chunks, with some C pointer magic and some C++ magic. The performance was impressive (0.0020829 seconds of processing), almost immediate. I don't know why or how, though.
B:\__A2MSUB\image-processing-utility>npm run test
> image-processing-utility#1.0.0 test B:\__A2MSUB\image-processing-utility
> node tests/test.js
----- “read_png_text_chunks (manual decoding, not using libpng.)” -----
[
  {
    type: 'tEXt',
    keyword: 'date:create',
    language_tag: null,
    translated_keyword: null,
    content: '2020-12-13T22:01:22+09:00',
    the_content_is_compressed: false
  },
  {
    type: 'tEXt',
    keyword: 'date:modify',
    language_tag: null,
    translated_keyword: null,
    content: '2020-12-13T21:53:58+09:00',
    the_content_is_compressed: false
  }
]
----- “read_png_text_chunks (manual decoding, not using libpng.)” took 0.013713 seconds.
B:\__A2MSUB\image-processing-utility>
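For reference, the manual decoding described above amounts to walking the chunk list (4-byte big-endian length, 4-byte type, data, 4-byte CRC) and splitting each tEXt payload at its NUL separator. A minimal sketch of the idea (illustrative only, not the actual addon code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// Walk the chunk list of an in-memory PNG and collect tEXt chunks.
// Each tEXt payload is "keyword\0text"; split at the NUL separator.
std::vector<std::pair<std::string, std::string>>
read_text_chunks(const std::vector<unsigned char>& png) {
    std::vector<std::pair<std::string, std::string>> out;
    size_t pos = 8;                                  // skip the 8-byte signature
    while (pos + 8 <= png.size()) {
        uint32_t len = ((uint32_t)png[pos] << 24) | ((uint32_t)png[pos+1] << 16)
                     | ((uint32_t)png[pos+2] << 8) |  (uint32_t)png[pos+3];
        if (pos + 8 + len + 4 > png.size()) break;   // truncated chunk
        const unsigned char* type = &png[pos + 4];
        if (std::memcmp(type, "tEXt", 4) == 0) {
            const char* data = (const char*)&png[pos + 8];
            const char* nul = (const char*)std::memchr(data, '\0', len);
            if (nul)                                  // keyword, NUL, then text
                out.emplace_back(std::string(data, nul),
                                 std::string(nul + 1, data + len));
        }
        if (std::memcmp(type, "IEND", 4) == 0) break;
        pos += 8u + len + 4u;                         // header + data + CRC
    }
    return out;
}
```

Because this never touches IDAT data, the cost is proportional to the number of chunks, not the image size.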
I had to do something similar, but where I wanted libpng to do all of the metadata chunk parsing (e.g. the eXIf, gAMA, pHYs, zTXt, and cHRM chunks). Some of these chunks can appear after the IDAT data, which means the metadata can't be read with just png_read_info. (The only way to get to them would be to do a full decode of the image, which is expensive, and then call png_read_end.)
My solution was to create a synthetic PNG byte stream that is fed to libpng via the read callback set using png_set_read_fn. In that callback, I skip all IDAT chunks in the source PNG file, and when I get to an IEND chunk, I instead emit a zero-length IDAT chunk.
Now I call png_read_info: it parses all of the metadata in all of the chunks it sees, stopping at the first IDAT, which in my synthetic PNG stream is really the end of the source PNG image. Now I have all of the metadata and can query libpng for it via the png_get_xxx functions.
The read callback that creates the synthetic PNG stream is a little complicated due to it being called by libpng multiple times, each for small sections of the stream. I solved that using a simple state machine that processes the source PNG progressively, producing the synthetic PNG stream on-the-fly. You could avoid those complexities if you produce the synthetic PNG stream up-front in memory before calling png_read_info: without any real IDATs, your full synthetic PNG stream is bound to be small...
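The up-front variant is a plain chunk walk over the byte stream. Here is a rough, self-contained sketch of that idea (my own illustration, not the actual code from this answer): copy the 8-byte signature and every non-IDAT chunk, and when IEND is reached emit a zero-length IDAT instead, which is exactly where png_read_info will stop.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// CRC-32 as used by PNG (polynomial 0xEDB88320), simple bitwise variant.
uint32_t png_crc32(const unsigned char* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

void put_be32(std::vector<unsigned char>& out, uint32_t v) {
    out.push_back((unsigned char)(v >> 24));
    out.push_back((unsigned char)(v >> 16));
    out.push_back((unsigned char)(v >> 8));
    out.push_back((unsigned char)v);
}

// Copy the 8-byte signature and every chunk except IDAT; when IEND is
// reached, emit a zero-length IDAT instead and stop. png_read_info() on
// the result parses all metadata and halts at that empty IDAT.
std::vector<unsigned char> strip_idat(const std::vector<unsigned char>& png) {
    std::vector<unsigned char> out(png.begin(), png.begin() + 8);
    size_t pos = 8;
    while (pos + 8 <= png.size()) {
        uint32_t len = ((uint32_t)png[pos] << 24) | ((uint32_t)png[pos+1] << 16)
                     | ((uint32_t)png[pos+2] << 8) |  (uint32_t)png[pos+3];
        const unsigned char* type = &png[pos + 4];
        size_t whole = 8u + len + 4u;               // header + data + CRC
        if (pos + whole > png.size()) break;        // truncated input
        if (std::memcmp(type, "IEND", 4) == 0) {
            put_be32(out, 0);                       // zero-length IDAT
            out.insert(out.end(), {'I', 'D', 'A', 'T'});
            put_be32(out, png_crc32((const unsigned char*)"IDAT", 4));
            break;
        }
        if (std::memcmp(type, "IDAT", 4) != 0)      // drop real IDATs
            out.insert(out.end(), png.begin() + pos, png.begin() + pos + whole);
        pos += whole;
    }
    return out;
}
```

Because all real IDATs are dropped, the synthetic stream stays tiny even for a multi-gigabyte source image.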
While I don't have benchmarks to share here, the final solution is fast because IDATs are skipped entirely and not decoded. (I use a file seek to skip each IDAT in the source PNG after reading the 32-bit chunk length.)
You can check that all the correct PNG chunks are in a file, in the correct order, not repeated, and with correct checksums using pngcheck. It is open source, so you could look at how it works.
If you add the parameter -7, you can not only check the structure but also extract the text:
pngcheck -7 a.png
Output
File: a.png (60041572 bytes)
date:create:
2020-12-24T13:22:41+00:00
date:modify:
2020-12-24T13:22:41+00:00
OK: a.png (10000x1000, 48-bit RGB, non-interlaced, -0.1%).
I generated a 60MB PNG and the above check takes 0.067s on my MacBook Pro.

MoSync - Edit video

After making a small video recorder application and getting that video to play back, I would like to add the ability to pick X seconds from the video and put them into a new .MP4 file (or overwrite the old one; that would be even better).
I am using the MoSync C++ Native UI and VideoViewer. I know I can get the current position, and that part all works according to the MoSync documentation:
char buf[BUFFER_SIZE];
maWidgetGetProperty(videoViewHandle,
                    MAW_VIDEO_VIEW_CURRENT_POSITION,
                    buf,
                    BUFFER_SIZE);
int seconds = 5;
//So here I need to make a new file ranging from buf to buf + seconds
However, I have absolutely no clue where to look for this. Should I work with the MP4 headers and create my own MP4 file? (How is that even done, and is it cross-platform?)
Will appreciate any advice/help you can offer!

FSCTL_GET_RETRIEVAL_POINTERS failure on very small file on a NT File System

My question is: how can I get a file's disk offset when the file (a very important one) is small (less than one cluster, only a few bytes)?
Currently I use this Windows API function:
DeviceIOControl(FileHandle, FSCTL_GET_RETRIEVAL_POINTERS, @InBuffer, SizeOf(InBuffer), @OutBuffer, SizeOf(OutBuffer), Num, Nil);
FirstExtent.Start := OutBuffer.Pair[0].LogicalCluster;
It works perfectly with files bigger than a cluster but it just fails with smaller files, as it always returns a null offset.
What is the procedure to follow with small files? Where are they located on an NTFS volume? Is there an alternative way to find a file's offset? This subtlety doesn't seem to be documented anywhere.
Note: the question is tagged as Delphi but C++ samples or examples would be appreciated as well.
The file is probably resident, meaning that its data is small enough to fit in its MFT entry. See here for a slightly longer description:
http://www.disk-space-guide.com/ntfs-disk-space.aspx
So you'd basically need to find the location of the MFT entry in order to know where the data is on disk. Do you control this file? If so the easiest thing to do is make sure that it's always larger than the size of an MFT entry (not a documented value, but you could always just do 4K or something).
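If you do control the file, the padding workaround is only a few lines. A sketch of the idea (the 4096-byte target is an assumption chosen to comfortably exceed a typical 1 KB MFT record, not a documented NTFS constant):

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>

// Append zero bytes until the file is at least `min_size` bytes, so NTFS
// stores the data in regular clusters instead of resident in the MFT record.
bool pad_file(const char* path, long min_size = 4096) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return false;
    long size = (long)in.tellg();
    in.close();
    if (size >= min_size) return true;               // already big enough
    std::ofstream out(path, std::ios::binary | std::ios::app);
    for (long i = size; i < min_size; ++i)           // pad with zero bytes
        out.put('\0');
    return (bool)out;
}
```

Obviously this only helps if whatever consumes the file tolerates the trailing zero bytes.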

How do I read a huge .gz file (more than 5 GB uncompressed) in C

I have some .gz compressed files which are around 5-7 GB uncompressed.
These are flat files.
I've written a program that takes an uncompressed file and reads it line by line, which works perfectly.
Now I want to be able to open the compressed files in memory and run my little program.
I've looked into zlib but I can't find a good solution.
Loading the entire file is impossible using gzread(gzFile, void*, unsigned) because of the 32-bit unsigned int limitation.
I've tried gzgets, but this almost doubles the execution time vs. reading with gzread (I tested on a 2 GB sample).
I've also looked into "buffering", such as splitting the gzread process into multiple 2 GB chunks, finding the last newline using strrchr, and then setting the position with gzseek.
But gzseek will emulate a total file decompression, which is very slow.
I fail to see any sane solution to this problem.
I could always do some checking of whether or not the current line actually has a newline (which should only occur in the last partially read line), and then read more data from the point in the program where this occurs.
But this could get very ugly.
Does anyone have any suggestions?
thanks
edit:
I don't need to have the entire file at once, just one line at a time, but I've got a fairly huge machine, so if that were the easiest way I'd have no problem.
For all those who suggest piping from stdin: I've experienced extreme slowdowns compared to opening the file directly. Here is a small code snippet I made some months ago that illustrates it.
time ./a.out 59846/59846.txt
# 59846/59846.txt
18255221
real 0m4.321s
user 0m2.884s
sys 0m1.424s
time ./a.out <59846/59846.txt
18255221
real 1m56.544s
user 1m55.043s
sys 0m1.512s
And the source code
#include <iostream>
#include <fstream>
#include <cstdio>
#define LENS 10000

int main(int argc, char **argv){
    std::istream *pFile;
    if(argc == 2)       // if a filename argument was supplied
        pFile = new std::ifstream(argv[1], std::ios::in);
    else                // otherwise read from stdin
        pFile = &std::cin;

    char line[LENS];
    if(argc == 2)       // if we are using a filename, print it
        printf("#\t%s\n", argv[1]);
    if(!*pFile){        // check the stream state, not the pointer
        printf("Do you have permission to open the file?\n");
        return 0;
    }
    int numRow = 0;
    while(pFile->getline(line, LENS))   // loop on the read, not on eof()
        numRow++;
    if(argc == 2)
        delete pFile;
    printf("%d\n", numRow);
    return 0;
}
Thanks for your replies; I'm still waiting for the golden apple.
edit2:
Using C-style FILE pointers instead of C++ streams is much, much faster, so I think this is the way to go.
Thanks for all your input.
gzip -cd compressed.gz | yourprogram
Just go ahead and read it line by line from stdin; it arrives uncompressed.
EDIT: In response to your remarks about performance: you're saying reading stdin line by line is slow compared to reading an uncompressed file directly. The difference lies in buffering. Normally a pipe will yield to stdin as soon as output becomes available (no, or very little, buffering there). You can do "buffered block reads" from stdin and parse the read blocks yourself to gain performance.
You can achieve the same result with possibly better performance by using gzread() as well. (Read a big chunk, parse the chunk, read the next chunk, repeat)
gzread only reads chunks of the file; you loop on it as you would with a normal read() call.
Do you need to read the entire file into memory ?
If what you need is to read lines, you'd gzread() a sizable chunk (say 8192 bytes) into a buffer, loop through that buffer to find all '\n' characters, and process those as individual lines. You'd have to save the last piece in case it is only part of a line, and prepend it to the data you read next time.
You could also read from stdin and invoke your app like
zcat bigfile.gz | ./yourprogram
in which case you can use fgets and similar on stdin. This is also beneficial in that the decompression runs on one processor while you process the data on another. :-)
I don't know if this will be an answer to your question, but I believe it's more than a comment:
Some months ago I discovered that the contents of Wikipedia can be downloaded in much the same way as the StackOverflow data dump. Both decompress to XML.
I came across a description of how the multi-gigabyte compressed dump file could be parsed. It was done with Perl scripts, actually, but the relevant part for you is that Bzip2 compression was used.
Bzip2 is a block compression scheme, and the compressed file could be split into manageable pieces, and each part uncompressed individually.
Unfortunately, I don't have a link to share with you, and I can't suggest how you would search for it, except to say that it was described on a Wikipedia 'data dump' or 'blog' page.
EDIT: Actually, I do have a link

Edit the frame rate of an avi file

Is it possible to change the frame rate of an AVI file using the Video for Windows library? I tried the following steps but did not succeed:
AVIFileInit
AVIFileOpen(OF_READWRITE)
pavi1 = AVIFileGetStream
avi_info = AVIStreamInfo
avi_info.dwRate = 15
EditStreamSetInfo(dwRate) returns -2147467262.
I'm pretty sure the AVIFile* APIs don't support this. (Disclaimer: I was the one who defined those APIs, but it was over 15 years ago...)
You can't just call EditStreamSetInfo on a plain AVIStream, only on one returned from CreateEditableStream.
You could use AVISave, but that would obviously re-copy the whole file.
So, yes, you would probably want to do this by parsing the AVI file header enough to find the one DWORD you want to change. There are lots of documents on the RIFF and AVI file formats out there, such as http://www.opennet.ru/docs/formats/avi.txt.
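For the frame rate specifically, the DWORD in question is dwRate in the 'strh' (stream header) chunk, paired with dwScale (frames per second = dwRate / dwScale). A rough sketch of patching it in a fully loaded buffer; the naive byte scan here stands in for a proper RIFF list walk, so treat it as an illustration and verify the offsets against the AVISTREAMHEADER layout before writing real files:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Find the first 'strh' chunk whose fccType is 'vids' and overwrite its
// dwRate field. Layout assumed: 4-byte 'strh' tag, 4-byte chunk size, then
// AVISTREAMHEADER, whose dwRate sits 24 bytes in (after fccType, fccHandler,
// dwFlags, wPriority, wLanguage, dwInitialFrames, dwScale).
// Returns true if a video stream header was found and patched.
bool set_avi_video_rate(std::vector<unsigned char>& avi, uint32_t new_rate) {
    for (size_t i = 0; i + 36 <= avi.size(); ++i) {
        if (std::memcmp(&avi[i], "strh", 4) != 0) continue;
        if (std::memcmp(&avi[i + 8], "vids", 4) != 0) continue;  // video only
        size_t off = i + 8 + 24;                                 // dwRate
        avi[off]     = (unsigned char)( new_rate        & 0xFF); // little-endian
        avi[off + 1] = (unsigned char)((new_rate >> 8)  & 0xFF);
        avi[off + 2] = (unsigned char)((new_rate >> 16) & 0xFF);
        avi[off + 3] = (unsigned char)((new_rate >> 24) & 0xFF);
        return true;
    }
    return false;
}
```

dwScale sits 4 bytes earlier (offset 8 + 20) if you need to adjust both halves of the ratio.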
I don't know anything about VfW, but you could always try hex-editing the file. The frame rate is probably a field somewhere in the header of the AVI file.
Otherwise, you can script a tool like mencoder [1] to copy the stream to a new file at a different frame rate.
[1] http://www.mplayerhq.hu/
HRESULT: 0x80004002 (2147500034)
Name: E_NOINTERFACE
Description: The requested COM interface is not available
Severity code: Failed
Facility Code: FACILITY_NULL (0)
Error Code: 0x4002 (16386)
Does it work if you DON'T call EditStreamSetInfo?
Can you post up the code you use to set the stream info?