Standalone AGC (auto gain control) in a WebRTC application - C++

I'm trying to create a standalone AGC using the WebRTC library (input: a wav file, output: a wav file with adjusted gain), but I'm running into some problems.
I'm using the functions declared in gain_control.h. When I call WebRtcAgc_Process(....) I get a constant gain applied to the whole signal, rather than a nonlinear gain that depends on the input signal magnitude.
Maybe I should use other functions for this purpose? How can I implement AGC with the WebRTC library?

The AGC's main purpose is to provide a recommended system mic volume which the user is expected to set through the OS. If you would like to apply a purely digital gain, you can configure it in one of two modes (from modules/audio_processing/include/audio_processing.h, but gain_control.h has analogous modes):
// Adaptive mode intended for situations in which an analog volume control
// is unavailable. It operates in a similar fashion to the adaptive analog
// mode, but with scaling instead applied in the digital domain. As with
// the analog mode, it additionally uses a digital compression stage.
kAdaptiveDigital,
// Fixed mode which enables only the digital compression stage also used by
// the two adaptive modes.
//
// It is distinguished from the adaptive modes by considering only a
// short time-window of the input signal. It applies a fixed gain through
// most of the input level range, and compresses (gradually reduces gain
// with increasing level) the input signal at higher levels. This mode is
// preferred on embedded devices where the capture signal level is
// predictable, so that a known gain can be applied.
kFixedDigital
You can set these through WebRtcAgc_Init(), though unless you need to avoid the overhead, I'd recommend just using the AudioProcessing class.
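For reference, a minimal sketch of the legacy C API route (names as found in gain_control.h of older WebRTC trees; the exact signatures have changed between revisions, so treat this as illustrative rather than authoritative). minLevel/maxLevel describe the analog mic-volume range and are effectively ignored by the digital-only modes:

#include "gain_control.h"  // legacy AGC interface; path varies by WebRTC revision

void* agcInst = NULL;
WebRtcAgc_Create(&agcInst);
// 16 kHz sample rate; 0..255 is a nominal mic volume range, unused in digital mode
WebRtcAgc_Init(agcInst, 0, 255, kAgcModeAdaptiveDigital, 16000);

WebRtcAgc_config_t config;
config.targetLevelDbfs = 3;    // target peak level of roughly -3 dBFS
config.compressionGaindB = 9;  // maximum digital gain applied by the compression stage
config.limiterEnable = 1;      // avoid clipping after the gain is applied
WebRtcAgc_set_config(agcInst, config);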

Refer to http://osxr.org/android/source/external/webrtc/src/modules/audio_processing/agc/interface/gain_control.h#0133:
The gain adjustments are done only during active periods of speech. The input speech length can be either 10ms or 20ms and the output is of the same length.
A quick overview of WebRtcAgc_Process():
int WebRtcAgc_Process(void* agcInst,
                      const WebRtc_Word16* inNear,
                      const WebRtc_Word16* inNear_H,
                      WebRtc_Word16 samples,
                      WebRtc_Word16* out,
                      WebRtc_Word16* out_H,
                      WebRtc_Word32 inMicLevel,
                      WebRtc_Word32* outMicLevel,
                      WebRtc_Word16 echo,
                      WebRtc_UWord8* saturationWarning);
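A hedged usage sketch built around that signature: run the file through the AGC in 10 ms blocks. It assumes agcInst was created and initialized as shown earlier (e.g. with kAgcModeAdaptiveDigital), and that a 16 kHz mono signal has already been decoded from the wav file into input (a std::vector<WebRtc_Word16>); wav I/O, error checks and WebRtcAgc_Free() are omitted. In the digital modes the mic-level arguments are still required, so a dummy level is passed through.

const int kFs = 16000;
const WebRtc_Word16 kFrame = kFs / 100;      // 160 samples = 10 ms
WebRtc_Word16 out[160];
WebRtc_Word16 highBand[160] = {0};           // high band unused at 16 kHz, but keep the pointers valid
WebRtc_Word16 outHighBand[160];
WebRtc_Word32 micLevelIn = 0, micLevelOut = 0;
WebRtc_UWord8 saturationWarning = 0;

for (size_t i = 0; i + kFrame <= input.size(); i += kFrame) {
    WebRtcAgc_Process(agcInst,
                      &input[i], highBand, kFrame,
                      out, outHighBand,
                      micLevelIn, &micLevelOut,
                      0,                     // echo: no AEC state available
                      &saturationWarning);
    micLevelIn = micLevelOut;                // feed the recommended level back in
    // append `out` (kFrame samples) to the output wav here
}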

Related

Is there an API that will run on iOS in order to change the frames per second of an existing video?

I am looking for a way to take any video supported on iOS as input and save a new video on the device with a new frames-per-second rate. The motivation is to decrease the video size and make it as lightweight as possible.
Tried using the ffmpeg library from the command line (I need it to run directly from the application)
Tried working with SDAVAssetExportSessionDelegate, but only managed to change the bits per second (each frame's quality is lower)
Thought about working with OpenCV - but would prefer something lighter and built-in if possible
Objective-C:
compressionEncoder.videoSettings = @{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: [NSNumber numberWithInt:width],   // set your resolution width here
    AVVideoHeightKey: [NSNumber numberWithInt:height], // set your resolution height here
    AVVideoCompressionPropertiesKey: @{
        AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitRateKey], // lower values give a smaller size
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
        // Does not change - quality setting, not related to playback framerate!
        //AVVideoMaxKeyFrameIntervalKey: @800,
    },
};
compressionEncoder.audioSettings = @{
    AVFormatIDKey: @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: @2,
    AVSampleRateKey: @44100,
    AVEncoderBitRateKey: @128000,
};
Expected: a video with fewer frames per second, with each frame at the same quality - similar to a brief thumbnail summary of the video.
The type of conversion you are doing will be time and power consuming on a mobile device, but I am guessing you are already aware of that.
Given your end goal is to reduce size, while presumably maintaining a reasonable quality, you may find you want to experiment with different settings etc in the encodings.
For this type of video manipulation, ffmpeg is a good choice, as you probably saw from your command-line usage. To use ffmpeg from an application, a common approach is to use a well-supported 'ffmpeg wrapper' - this effectively runs the ffmpeg command-line commands from within your application.
The advantage is that all the usual syntax should work and you can leverage the vast amount of information on ffmpeg command-line syntax on the web. The downside is that ffmpeg was not designed to be wrapped like this, so you may see some issues, although with a well-supported wrapper you should either find help or find that others have already worked around the issues.
Some examples of popular iOS ffmpeg wrappers:
https://github.com/tanersener/mobile-ffmpeg
https://github.com/sunlubo/SwiftFFmpeg
Get MobileFFMpeg up and running:
https://stackoverflow.com/a/59325680/1466453
Once you can make MobileFFmpeg calls in your iOS code, changing the frame rate is pretty straightforward with this code:
[MobileFFmpeg execute: @"-i -filter:v fps=fps=30 "];

C++/C Multiple threads to read gz file simultaneously

I am attempting to read a gzip-compressed file from multiple threads.
I was thinking this would significantly speed up the decompression process, as my gzread calls in multiple threads start from different file offsets (using gzseek), so they read different parts of the file.
The simplified code looks like this:
// in threads
auto gf = gzopen("file.gz",xxx);
gzseek(gf,offset);
gzread(xx);
gzclose(gf);
To my surprise, my multi-threaded version does not speed things up at all. The 20-thread version takes exactly the same time as the single-thread version. I am pretty sure this is far from being disk-bottlenecked.
I guess the zlib inflation functionality may need to decompress the entire file to read even a small part of it, but I failed to get any clue from the manual.
Does anyone have an idea how to speed things up in my case?
Short answer: due to the serial nature of a deflate stream, gzseek() must decode all of the compressed data from the start up to the requested seek point. So you can't get any gain with what you are trying to do. In fact, the total cycles spent will increase with the square of the length of the compressed data! So don't do that.
tl;dr: zlib isn't designed for random access. It seems possible to implement, though requiring a complete read-through to build an index, so it might not be helpful in your case.
Let's look into the zlib source. gzseek is a wrapper around gzseek64, which contains:
/* if within raw area while reading, just go there */
if (state->mode == GZ_READ && state->how == COPY &&
state->x.pos + offset >= 0) {
"Within raw area" doesn't sound quite right if we're processing a gzipped file. Let's look up the meaning of state->how in gzguts.h:
int how; /* 0: get header, 1: copy, 2: decompress */
Right. At the end of gz_open, a call to gz_reset sets how to 0. Returning to gzseek64, we end up with this modification to the state:
state->seek = 1;
state->skip = offset;
gzread, when called, processes this with a call to gz_skip:
if (state->seek) {
    state->seek = 0;
    if (gz_skip(state, state->skip) == -1)
        return -1;
}
Following this rabbit hole just a bit further, we find that gz_skip calls gz_fetch until gz_fetch has processed enough input for the desired seek. gz_fetch, on its first loop iteration, calls gz_look which sets state->how = GZIP, which causes gz_fetch to decompress data from the input. In other words, your suspicion is right: zlib does decompress the entire file up to that point when you use gzseek.
The zlib implementation has no multithreading (http://www.zlib.net/zlib_faq.html#faq21 - "Is zlib thread-safe? - Yes. ... Of course, you should only operate on any given zlib or gzip stream from a single thread at a time.") and will decompress the "entire file" up to the seeked position.
The zlib format also has bad alignment (bit alignment) and no offset fields (deflate format) that would enable parallel decompression or seeking.
You may try other implementations of the deflate/inflate algorithms, for example http://zlib.net/pigz/ (or switch from this ancient single-core-era compression to modern parallel formats: xz/lzma/something from Google).
pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries. To compile and use pigz, please read the README file in the source code distribution. You can read the pigz manual page here.
The manual page is http://zlib.net/pigz/pigz.pdf and it has useful information.
It uses a format compatible with zlib, but adapted for parallel compression:
Each partial raw deflate stream is terminated by an empty stored block ... in order to end that partial bit stream at a byte boundary.
Still, the DEFLATE format is bad for parallel decompression:
Decompression can't be parallelized, at least not without specially prepared deflate streams for that purpose. As a result, pigz uses a single thread (the main thread) for decompression, but will create three other threads for reading, writing, and check calculation, which can speed up decompression under some circumstances.
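Given the above, a pattern that does help (and matches what pigz does for decompression) is to keep the gzip decompression on a single thread and parallelize only the work done on the already-decompressed chunks. A rough C++ sketch, assuming the per-chunk work (process_chunk here, a hypothetical stand-in) is order-independent:

#include <zlib.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

static void process_chunk(const std::vector<char>& chunk) {
    (void)chunk;  // hypothetical CPU-bound work on decompressed data
}

int main() {
    std::queue<std::vector<char>> work;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Worker threads consume decompressed chunks in parallel.
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&] {
            for (;;) {
                std::vector<char> chunk;
                {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [&] { return done || !work.empty(); });
                    if (work.empty()) return;  // done and queue drained
                    chunk = std::move(work.front());
                    work.pop();
                }
                process_chunk(chunk);
            }
        });
    }

    // Single reader thread: sequential gzread, no gzseek.
    gzFile gf = gzopen("file.gz", "rb");
    if (gf) {
        std::vector<char> buf(1 << 20);  // 1 MiB per chunk
        int n;
        while ((n = gzread(gf, buf.data(), static_cast<unsigned>(buf.size()))) > 0) {
            std::lock_guard<std::mutex> lk(m);
            work.emplace(buf.begin(), buf.begin() + n);
            cv.notify_one();
        }
        gzclose(gf);
    }

    {
        std::lock_guard<std::mutex> lk(m);
        done = true;
    }
    cv.notify_all();
    for (auto& t : workers) t.join();
    return 0;
}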

Synchronizing input pins in directshow

I am creating a DirectShow filter whose purpose is to take 3 input pins and create a video that alternately shows video from the first source, the second source and the third source, in a fixed time interval.
So if I have three webcams connected to my filter, I want the final video, for example, to show 5 seconds of the first cam, 5 seconds of the second cam, and so on...
I have tried two approaches:
Approach one
I use a class TimeManager. This class has a function isItPinsTurn(pinname). This function returns true or false depending on whether the pin is supposed to send a sample to the output. To do this the TimeManager creates a new thread which sleeps for x seconds.
After it has slept, it changes the current active input pin to the next one.
The result is that every x seconds the isItPinsTurn(pinname) function returns a different pin. This way every pin only sends output to the output pin when it is its turn, hence I get the desired video with x-second intervals between the input cams.
The problem with this approach
Sleep doesn't seem to work in DirectShow filters. I get a runtime error:
abort() has been called
Approach two
I use the sample's GetMediaTime method and a buffer which keeps track of how much video, in terms of media time, has already been sent to the output pin. This is best illustrated with code:
void MyFilter::acceptFilterInput(LPCWSTR pinname, IMediaSample* sample)
{
    mylogger->LogDebug("In acceptFIlterInput", L"D:\\TEMP\\yc.log");
    if (wcscmp(pinname, this->currentInputPin) == 0)
    {
        outpin->Deliver(sample);
        LONGLONG timestart;
        LONGLONG timeend;
        sample->GetTime(&timestart, &timeend);
        *mediaTimeBuffer += timeend - timestart;
        if (*mediaTimeBuffer > this->MEDIATIME)
        {
            this->SetNextPinActive(pinname);
            *mediaTimeBuffer = 0;
        }
    }
}
When the filter starts, currentInputPin is set to pin0 (the first). Calls to acceptFilterInput (which is called by each input pin's Receive function) adjust mediaTimeBuffer by the duration of the media sample's media time. If this buffer is higher than MEDIATIME (which can, for example, be 5 seconds), the buffer is set back to zero and the next pin is set active.
Problems with this approach
I am not even sure whether CMediaSample::GetMediaTime returns the data I need, as it seems to return negative numbers, which doesn't make much sense. I didn't find useful information about the return value of GetMediaTime on the web.
You are expected to block execution (incoming calls to IPin::Receive) on input streams so that the other streams can catch up on their own streaming threads. You typically achieve this either by using wait/synchronization APIs and functions, or by holding references on media samples so that the input peer blocks on an empty allocator, waiting for a media sample (buffer) to become available.
Yes, Sleep works well, although polling is the worst of the possible options.
Approach two does not make sense to me because I don't see any real synchronization there: there is no execution blocking, and there is no making a pin active. You cannot force data onto an input pin; you can only wait to get called with a new media sample. So you should block accepting data on one input stream/pin until you get data on another.
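To make the blocking idea concrete, here is a rough sketch (names are hypothetical, not from your filter) of a gate that each input pin's Receive() waits on until it is that pin's turn; flush/stop handling, which you must add to avoid deadlocks on graph stop, is omitted:

#include <condition_variable>
#include <mutex>

class TurnGate {
public:
    explicit TurnGate(int pinCount) : active_(0), pinCount_(pinCount) {}

    // Called from a pin's streaming thread inside Receive(); blocks until it is our turn.
    void WaitForTurn(int pinIndex) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return active_ == pinIndex; });
    }

    // Called after this pin has delivered enough media time downstream.
    void AdvanceTurn() {
        {
            std::lock_guard<std::mutex> lk(m_);
            active_ = (active_ + 1) % pinCount_;
        }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int active_;
    int pinCount_;
};

// Inside a hypothetical input pin's Receive():
//   gate.WaitForTurn(myIndex);           // blocks this pin's streaming thread
//   outPin->Deliver(pSample);            // deliver while it is our turn
//   if (deliveredMediaTime >= kSliceDuration)
//       gate.AdvanceTurn();              // let the next pin run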
Some useful relevant information on multiplexing:
How to make a DirectShow Muxer Filter - Part 1
How to make a DirectShow Muxer Filter - Part 2
GDCL MPEG-4 Multiplexer - available in source, and can multiplex data from 2+ streams

Reading and writing structs remotely

I'm currently building a robot which has some sensors attached to it. The control unit on the robot is an ARM Cortex-M3, all sensors are attached to it and it is connected via Ethernet to the "ground station".
Now I want to read and write settings on the robot via the ground station. Therefore I thought about implementing a "virtual register" on the robot that can be manipulated by the ground station.
It could be made up of structs and look like this:
// accelerometer register
struct accel_reg {
    // accelerations
    int32_t accelX;
    int32_t accelY;
    int32_t accelZ;
};

// infrared distance sensor register
struct ir_reg {
    uint16_t dist; // distance
};

// robot's register table
struct {
    uint8_t status;          // current state
    uint32_t faultFlags;     // some fault flags
    accel_reg accelerometer; // accelerometer register
    ir_reg ir_sensors[4];    // 4 IR sensors connected
} robot;

// usage example:
robot.accelerometer.accelX = -981;
robot.ir_sensors[1].dist = 1024;
On the robot the registers will be constantly filled with new values and configuration settings are set by the ground station and applied by the robot.
The ground station and the robot will be written in C++ so they both can use the same struct datatype.
The question I have now is how to encapsulate the read/write operations in a protocol without writing tons of metadata?
Let's say I want to read the register robot.ir_sensors[2].dist. How would I address this register in my protocol?
I already thought about transmitting a relative offset in bytes (i.e the relative position in memory inside the struct) but I think memory alignment and padding may cause problems, especially because the ground station runs on a x86_64 architecture and the robot runs on a 32-bit ARM processor.
Thanks for any hints! :)
I'm also going to suggest Google Protocol Buffers.
In the simplest case, you could implement one message RobotState like this:
message RobotState {
    optional int32 status = 1;
    optional int32 distance = 2;
    optional int32 accelX = 3;
    ...
}
Then when the robot receives the message, it will take new values from any optional field that is present. It will then reply with a message containing the current value of all fields.
This way it is quite easy to implement field update using the "merge message" functionality of most protobuf implementations. Also you can keep it very simple at start because you only have one message type, but if you need to expand later you can add submessages.
It is true that protobuf does not support 8- or 16-bit integer types such as int8_t or int16_t. Just use int32 instead.
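As a rough illustration of the merge-based update on the robot side (assuming the RobotState message above has been compiled with protoc into robot_state.pb.h, and that buffer/len come from your Ethernet receive path):

#include <string>
#include "robot_state.pb.h"  // hypothetical generated header

void OnGroundStationPacket(RobotState& current, const void* buffer, int len) {
    RobotState update;
    if (!update.ParseFromArray(buffer, len))
        return;  // ignore malformed packets

    // Only the optional fields present in the packet overwrite current values.
    current.MergeFrom(update);

    // Reply with the full current state.
    std::string reply;
    current.SerializeToString(&reply);
    // send_to_ground_station(reply);  // hypothetical transport call
}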
I think Google protocol buffers are an excellent session/presentation-layer tool to use. Actually, Google protocol buffers do not support the syntax I was thinking of, so I will change this part of my answer to recommend XSD by Code Synthesis. Although it is primarily used with XML, it supports different presentation layers such as XDR and may be more efficient than protocol buffers with large amounts of optional data. The generated code is also very nice to work with. XSD is free to use with open-source software and even for commercial use with limited message structures.
I don't believe you want to read/write register sets at random. You can prefix a message with an enum that denotes a message type, such as IR update, distance, accel, etc. These are register groups. Then the robot responds with the register set. All the registers you've given so far are sensors; the write ones must be motor control?
You want to think about what control you want to perform and the type of telemetry you would like to receive, then come up with a message structure and bundle the information together. You could use sequence diagrams and remote procedure APIs like SOA/SOAP, RPC, REST, etc. I don't mean these RPC frameworks directly, but the concepts, such as request/response and perhaps messages that are just sent periodically (telemetry) without specific requests. So there would be a telemetry request from the ground station with some sort of interval, and then the robot would respond periodically with unsolicited data. You always need a message id (the enum above), unless your protocol is going to be stateful, which I would discourage for robustness reasons.
You haven't described how the control system might work or whether you wish to do this remotely. Describing that may lead to more ideas on the protocol. I believe we are talking about layers 5, 6 and 7 of OSI. Have fun.
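For the enum-prefixed framing described above, a minimal illustrative header might look like this (names, field widths and byte order are only an example, not a prescription):

#include <cstdint>

enum class MsgId : uint8_t {
    kStatusRequest  = 1,
    kIrTelemetry    = 2,
    kAccelTelemetry = 3,
    kMotorCommand   = 4,
};

#pragma pack(push, 1)       // avoid padding differences between x86_64 and ARM
struct MsgHeader {
    MsgId    id;            // which register group / command follows
    uint16_t payloadLen;    // payload length in bytes (agree on one byte order)
};
#pragma pack(pop)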

How to use ALSA's snd_pcm_writei()?

Can someone explain how snd_pcm_writei
snd_pcm_sframes_t snd_pcm_writei(snd_pcm_t *pcm, const void *buffer,
snd_pcm_uframes_t size)
works?
I have used it like so:
for (int i = 0; i < 1; i++) {
    f = snd_pcm_writei(handle, buffer, frames);
    ...
}
Full source code at http://pastebin.com/m2f28b578
Does this mean that I shouldn't give snd_pcm_writei() the number of
all the frames in the buffer, but only
sample_rate * latency = frames
?
So if I e.g. have:
sample_rate = 44100
latency = 0.5 [s]
all_frames = 100000
The number of frames that I should give to snd_pcm_writei() would be
sample_rate * latency = frames
44100*0.5 = 22050
and the number of iterations the for-loop should be?:
(int) 100000/22050 = 4; with frames=22050
and one extra, but only with
100000 mod 22050 = 11800
frames?
Is that how it works?
Louise
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html#gf13067c0ebde29118ca05af76e5b17a9
frames should be the number of frames (samples) you want to write from the buffer. Your system's sound driver will start transferring those samples to the sound card right away, and they will be played at a constant rate.
The latency is introduced in several places. There's latency from the data buffered by the driver while waiting to be transferred to the card. There's at least one buffer full of data that's being transferred to the card at any given moment, and there's buffering on the application side, which is what you seem to be concerned about.
To reduce latency on the application side you need to write the smallest buffer that will work for you. If your application performs a DSP task, that's typically one window's worth of data.
There's no advantage in writing small buffers in a loop - just go ahead and write everything in one go - but there's an important point to understand: to minimize latency, your application should write to the driver no faster than the driver is writing data to the sound card, or you'll end up piling up more data and accumulating more and more latency.
For a design that makes producing data in lockstep with the sound driver relatively easy, look at jack (http://jackaudio.org/) which is based on registering a callback function with the sound playback engine. In fact, you're probably just better off using jack instead of trying to do it yourself if you're really concerned about latency.
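As a concrete, simplified sketch of such a write loop - handle, buffer (assumed here to hold interleaved 16-bit samples) and all_frames are taken to be set up as in your code, with the device already opened and configured - partial writes and underruns can be handled like this:

#include <alsa/asoundlib.h>

snd_pcm_uframes_t remaining = all_frames;
const short *ptr = buffer;                 // interleaved 16-bit samples
const unsigned channels = 1;               // adjust to your stream

while (remaining > 0) {
    snd_pcm_sframes_t written = snd_pcm_writei(handle, ptr, remaining);
    if (written == -EPIPE) {               // underrun: recover and retry
        snd_pcm_prepare(handle);
        continue;
    }
    if (written < 0)                       // other error: give up
        break;
    ptr += written * channels;             // advance by frames * channels
    remaining -= written;
}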
I think the reason for the "premature" device closure is that you need to call snd_pcm_drain(handle); prior to snd_pcm_close(handle); to ensure that all data is played before the device is closed.
I did some testing to determine why snd_pcm_writei() didn't seem to work for me, using several examples I found in the ALSA tutorials, and what I concluded was that the simple examples were calling snd_pcm_close() before the sound device could play the complete stream sent to it.
I set the rate to 11025, used a 128-byte random buffer, and called snd_pcm_writei() in a loop 11025/128 times for each second of sound. Two seconds required 86*2 calls to snd_pcm_writei() to get two seconds of sound.
In order to give the device sufficient time to convert the data to audio, I used a for loop after the snd_pcm_writei() loop to delay execution of the snd_pcm_close() function.
After testing, I had to conclude that the sample code didn't supply enough samples to overcome the device latency before the snd_pcm_close function was called, which implies that the close function has less latency than snd_pcm_writei().
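In other words, instead of delaying with a busy loop, the usual pattern is simply:

snd_pcm_drain(handle);   // blocks until everything queued has been played
snd_pcm_close(handle);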
If the ALSA driver's start threshold is not set properly (if in your case it is about 2s), then you will need to call snd_pcm_start() to start the data rendering immediately after snd_pcm_writei().
Or you may set an appropriate start threshold in the SW params of the ALSA device.
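A small sketch of that SW-params route (handle is an already opened and hardware-configured snd_pcm_t*, period_size is whatever you negotiated in the HW params; error checks omitted):

snd_pcm_sw_params_t *sw;
snd_pcm_sw_params_alloca(&sw);
snd_pcm_sw_params_current(handle, sw);
// Start playback automatically once one period's worth of frames has been written.
snd_pcm_sw_params_set_start_threshold(handle, sw, period_size);
snd_pcm_sw_params(handle, sw);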
ref:
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m___s_w___params.html