How can we output sound as PCM WAV, little endian (mono)?

Can you please help me get the output WAV into PCM format, but little endian (mono)?
You will find my source code attached.
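Since the attached code isn't shown here, a hedged sketch of the general idea: if you already have raw 16-bit signed samples in memory, outputting a mono, little-endian PCM WAV only needs the canonical 44-byte header in front of the sample data. The sample rate and the sample buffer below are placeholders, not taken from your code.

/* Minimal sketch: write 16-bit little-endian mono PCM samples as a WAV file.
 * Assumes the samples are already mono 16-bit signed; sample_rate is a placeholder. */
#include <stdio.h>
#include <stdint.h>

static void write_le32(FILE *f, uint32_t v)
{
    uint8_t b[4] = { (uint8_t)v, (uint8_t)(v >> 8), (uint8_t)(v >> 16), (uint8_t)(v >> 24) };
    fwrite(b, 1, 4, f);
}

static void write_le16(FILE *f, uint16_t v)
{
    uint8_t b[2] = { (uint8_t)v, (uint8_t)(v >> 8) };
    fwrite(b, 1, 2, f);
}

int write_wav_mono16(const char *path, const int16_t *samples, uint32_t nsamples, uint32_t sample_rate)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;

    uint32_t data_bytes = nsamples * 2;           /* 16-bit mono => 2 bytes per sample */
    fwrite("RIFF", 1, 4, f);
    write_le32(f, 36 + data_bytes);               /* RIFF chunk size */
    fwrite("WAVE", 1, 4, f);

    fwrite("fmt ", 1, 4, f);
    write_le32(f, 16);                            /* fmt chunk size */
    write_le16(f, 1);                             /* audio format 1 = PCM */
    write_le16(f, 1);                             /* channels = 1 (mono) */
    write_le32(f, sample_rate);
    write_le32(f, sample_rate * 2);               /* byte rate = rate * block align */
    write_le16(f, 2);                             /* block align */
    write_le16(f, 16);                            /* bits per sample */

    fwrite("data", 1, 4, f);
    write_le32(f, data_bytes);
    for (uint32_t i = 0; i < nsamples; i++)       /* write each sample low byte first */
        write_le16(f, (uint16_t)samples[i]);

    fclose(f);
    return 0;
}

If your source material is stereo or a different bit depth, you would have to downmix/convert first (for example with libswresample); the header above only describes the mono 16-bit case.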

Related

GStreamer: send 16-bit raw video over RTP

I have a 16-bit greyscale video stream from an LWIR (thermal) camera and I want to forward the stream over RTP without any compression.
The GStreamer format is: video/x-raw,format=GRAY16_LE,width=640,height=520,framerate=9/1
But I can't find any plugin to transmit the data over RTP.
https://gstreamer.freedesktop.org/documentation/rtp/index.html?gi-language=c
Do you have an idea?
Thanks, Martin
Check the specs for uncompressed video data over RTP:
https://www.rfc-editor.org/rfc/rfc4175
As you will notice, your specific format is not covered by the specification.
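Because GRAY16_LE is not one of the samplings RFC 4175 defines, rtpvrawpay will not accept it. One possible workaround, assuming both ends run GStreamer, is the GStreamer-specific rtpgstpay/rtpgstdepay payloaders, which wrap buffers with arbitrary caps in RTP; this is not interoperable with non-GStreamer receivers. A rough, untested C sketch of the sender, with videotestsrc and the host/port standing in for the real camera source:

/* Rough sketch: send GRAY16_LE frames over RTP using the GStreamer-specific
 * rtpgstpay payloader (not RFC 4175; the receiver must use rtpgstdepay). */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! "
        "video/x-raw,format=GRAY16_LE,width=640,height=520,framerate=9/1 ! "
        "rtpgstpay ! udpsink host=127.0.0.1 port=5000",
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message arrives on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

For standards-compliant RTP you would have to repack the 16-bit data into one of the samplings RFC 4175 does list, which costs either precision or a custom payload format.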

Encoding uncompressed AVI using the RAWVIDEO codec and RGB24

I coded an encoder using FFmpeg (C++). The requirements for this encoder are:
The output format should be uncompressed AVI,
Preferably using the RGB24/YUV444 pixel format, since we do not want chroma subsampling.
Most standard players should support the format (Windows Media Player (WMP), VLC).
Using the encoder I wrote, I can write a number of file types right now:
Lossless H.264 encoded video using the YUV420p pixel format and AVI container. (Obviously not uncompressed and chroma subsampled, however both WMP and VLC play without any problem.)
MPEG4 encoded video using the YUV420p pixel format and AVI container.(Obviously not uncompressed and chroma subsampled, however both WMP and VLC play without any problem.)
AYUV encoded video using the YUVA444P pixel format. (uncompressed as far as I understand and not chroma subsampled. However, VLC does not play this.)
FFV1 encoded video using the YUV444P pixel format. (lossless, and not chroma subsampled. However, WMP does not play this.)
The above is derived from this very useful post.
So I am now looking into the RAWVIDEO encoder from FFmpeg. I can't get it to work, nor can I find an example in the FFmpeg documentation of how to use this encoder for writing video. Can somebody point me in the right direction or supply sample code for this?
Also, if there is another direction I should follow to meet my requirements feel free to point me to it.
Thanks in advance
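Not knowing which FFmpeg version is in use, a hedged sketch of the rawvideo path: the encoder is selected with AV_CODEC_ID_RAWVIDEO and essentially copies frames through, so most of the work is picking a pixel format that the AVI muxer and the players accept (for raw RGB in AVI that is conventionally BGR24 rather than RGB24). The dimensions and the frame fill below are placeholders, and muxing into the AVI container is omitted.

/* Sketch: encode one BGR24 frame with FFmpeg's rawvideo encoder (no muxing shown).
 * Width/height/frame contents are placeholders; error handling is minimal. */
#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

int main(void)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_RAWVIDEO);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width     = 640;
    ctx->height    = 480;
    ctx->time_base = (AVRational){1, 25};
    ctx->pix_fmt   = AV_PIX_FMT_BGR24;          /* raw RGB in AVI is conventionally BGR order */
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return 1;

    AVFrame *frame = av_frame_alloc();
    frame->format = ctx->pix_fmt;
    frame->width  = ctx->width;
    frame->height = ctx->height;
    av_frame_get_buffer(frame, 0);
    /* Fill the frame with solid grey as placeholder picture data. */
    for (int y = 0; y < ctx->height; y++)
        memset(frame->data[0] + y * frame->linesize[0], 128, ctx->width * 3);
    frame->pts = 0;

    AVPacket *pkt = av_packet_alloc();
    avcodec_send_frame(ctx, frame);
    if (avcodec_receive_packet(ctx, pkt) == 0) {
        /* pkt->data now holds the raw BGR24 picture; normally you would hand it
         * to av_interleaved_write_frame() on an AVI output context. */
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&ctx);
    return 0;
}

Whether WMP plays the result still depends on the container stream headers the muxer writes, so treat this only as the encoder half of the pipeline.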

Resample 8 kHz audio to 44.1 kHz using swr_convert [FFmpeg]

Has anybody tried upsampling an audio stream from 8 kHz to 44.1 kHz?
I need to resample an 8 kHz input audio stream to 44.1 kHz, since the default audio output device on Mac OS X supports a minimum sampling rate of 44.1 kHz.
I tried upsampling with the FFmpeg swr_convert() API, but it converts with lots of noise, which is not good.
If anybody has successfully upsampled 8 kHz to 44.1 or 48 kHz, please share how.
A solution with C/C++ library code is preferable. I haven't tried the Core Audio samples.
I tried the swr_convert() code from the following link: https://www.ffmpeg.org/doxygen/2.1/group__lswr.html#details
Thanks,
Ramanand
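One common source of noise with swr_convert() is sizing the output buffer from the input count alone and ignoring the samples libswresample buffers internally, so audio gets dropped or repeated at every call. A hedged sketch (mono S16, 8 kHz to 44.1 kHz, using the older swr_alloc_set_opts() API that the linked doc describes; the placeholder input is silence, and the loop feeding real decoded data is left out):

/* Sketch: upsample mono S16 audio from 8 kHz to 44.1 kHz with libswresample. */
#include <stdint.h>
#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mathematics.h>

int main(void)
{
    SwrContext *swr = swr_alloc_set_opts(NULL,
        AV_CH_LAYOUT_MONO, AV_SAMPLE_FMT_S16, 44100,   /* output */
        AV_CH_LAYOUT_MONO, AV_SAMPLE_FMT_S16, 8000,    /* input  */
        0, NULL);
    if (!swr || swr_init(swr) < 0)
        return 1;

    int16_t in[8000] = {0};                 /* one second of placeholder input */
    const uint8_t *in_planes[1] = { (const uint8_t *)in };
    int in_count = 8000;

    /* Size the output for the resampler's internal delay plus this input chunk. */
    int out_count = (int)av_rescale_rnd(swr_get_delay(swr, 8000) + in_count,
                                        44100, 8000, AV_ROUND_UP);
    uint8_t *out_buf = NULL;
    av_samples_alloc(&out_buf, NULL, 1, out_count, AV_SAMPLE_FMT_S16, 0);

    int got = swr_convert(swr, &out_buf, out_count, in_planes, in_count);
    /* ...write 'got' samples... then flush the buffered tail at end of stream: */
    int flushed = swr_convert(swr, &out_buf, out_count, NULL, 0);
    (void)got; (void)flushed;

    av_freep(&out_buf);
    swr_free(&swr);
    return 0;
}

Only the 'got' (and later 'flushed') samples returned by each call are valid output; writing out_count samples regardless, or converting each chunk with a fresh context, is a typical cause of the noise described.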

FFmpeg: get JPEG data buffer from MJPEG stream?

I'm using FFmpeg to decode a video stream from an IP camera. I have example code that can decode a video stream with any codec into YUV frames.
But my case is special, as I will describe below.
The IP camera stream is MJPEG and I want to use FFmpeg, but I don't want to decode the frames into YUV; I want to get the frames in JPEG format and save those JPEG buffers into image files (*.jpg).
So far, I can do it by converting the YUV frames (after decoding) back to JPEG, but this gives bad performance. Since the video stream is MJPEG, I think I can get the JPEG data before it is decoded to YUV, but I don't know how to do it.
Can someone help me?
Many thanks,
T&T
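For MJPEG, each compressed packet that av_read_frame() returns is normally a complete JPEG image already, so you can write the packet bytes straight to a .jpg file and skip decoding entirely. A hedged sketch (the camera URL is a placeholder; some MJPEG streams omit the Huffman tables and then need the mjpeg2jpeg bitstream filter, which is not shown here):

/* Sketch: dump each MJPEG packet from a stream straight to frame_NNN.jpg without decoding.
 * The input URL is a placeholder; error handling is minimal. */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(void)
{
    const char *url = "http://camera.local/mjpeg";   /* placeholder stream address */
    AVFormatContext *fmt = NULL;

    avformat_network_init();
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt, NULL);

    int video_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

    AVPacket *pkt = av_packet_alloc();
    int n = 0;
    while (av_read_frame(fmt, pkt) >= 0 && n < 100) {
        if (pkt->stream_index == video_idx) {
            char name[64];
            snprintf(name, sizeof(name), "frame_%03d.jpg", n++);
            FILE *f = fopen(name, "wb");
            if (f) {
                fwrite(pkt->data, 1, pkt->size, f);  /* the packet already holds a JPEG image */
                fclose(f);
            }
        }
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&fmt);
    return 0;
}

If the saved files do not open, the stream is likely one of the table-less MJPEG variants and the packets need the mjpeg2jpeg bitstream filter applied before writing.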

How to get PCM data from recorded sound for Fourier analysis

I've been working on C++ code that will take in sound and output its fundamental frequency, like a guitar tuner. I can generate my own randomized sine wave and successfully perform the FFT from a text file that is just amplitude vs. time. I just don't know how to produce usable data from either a microphone or a sound file.
Is there a simple way to sample sound and have it output the data in an amplitude vs. time text file?
I've looked into the WAV file format and how the various chunks work but it's a bit above my level. Any help is really appreciated.
If you can ensure that your WAV is mono, uncompressed, 16-bit and at a known sample rate, you can either skip the WAV/RIFF/whatever header or suck it in as if it were samples (that shouldn't affect your FFT results much if the file is long).
Other than that, uncompressed WAV isn't a horribly complex format. With some more effort you'll parse it.
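Following that advice, a hedged sketch that assumes the canonical 44-byte header (mono, uncompressed, 16-bit, with the sample rate hard-coded as a placeholder) and dumps amplitude vs. time to a text file you can feed to your FFT:

/* Sketch: read 16-bit mono PCM samples from a WAV file and write "time amplitude" lines.
 * Assumes the canonical 44-byte header and a known sample rate; not a general WAV parser. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const double sample_rate = 44100.0;           /* assumed; read it from the header if unsure */
    FILE *in  = fopen("input.wav", "rb");
    FILE *out = fopen("amplitude_vs_time.txt", "w");
    if (!in || !out) return 1;

    fseek(in, 44, SEEK_SET);                      /* skip the canonical RIFF/fmt/data header */

    uint8_t bytes[2];
    long i = 0;
    while (fread(bytes, 1, 2, in) == 2) {
        /* Samples are stored little-endian: low byte first. */
        int16_t sample = (int16_t)(bytes[0] | (bytes[1] << 8));
        fprintf(out, "%f %d\n", i / sample_rate, sample);
        i++;
    }

    fclose(in);
    fclose(out);
    return 0;
}

If the header is not exactly 44 bytes (extra chunks before "data"), you would scan for the "data" chunk instead; for a long file the few garbage values read from the header barely affect the FFT, as the answer above notes.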