Concatenate data in an array in C++

I'm working on real-time audio processing software in C++ with Qt, and I need to keep the processing requirements to a minimum.
I define a 40 ms temporary buffer; with the device running at a sampling frequency Fs = 8000 Hz, a function called DataProcessing() is invoked every 320 samples.
The idea is to have a global buffer that stores the last 10 s recorded, i.e. 80000 samples.
On each iteration this buffer discards the oldest 320 samples and appends the 320 new samples at the end. The buffer is thus kept up to date and the user can watch a real-time graphical representation of the recorded signal.
At first I thought of using QVector (Qt's equivalent of std::vector) for this, which reduces the process to a few lines of code:
int NUM_POINTS = 320;
DatosTemporales.erase(DatosTemporales.begin(), DatosTemporales.begin() + NUM_POINTS);
DatosTemporales += DatosNuevos; // DatosNuevos has a size of NUM_POINTS
Each iteration effectively rebuilds an 80000-sample vector, in addition to freeing some positions, so it requires some processing time. An alternative I opted to try was using a double* and shifting the samples with a loop:
for (int i = 0; i < 80000; i++) {
    if (i < 80000 - NUM_POINTS) {
        aux = DatosTemporales[i];
        DatosTemporales[i + NUM_POINTS] = aux;
    } else {
        DatosTemporales[i] = DatosNuevos[i - NUM_POINTS];
    }
}
This fails. I think the best way is to use dynamic memory, implementing this process with pointers. Could anyone give me some idea of how to implement it?

It sounds like what you are looking for is a circular buffer.
https://www.google.com/search?q=qcircularbuffer
https://qt.gitorious.org/qt/qtbase/merge_requests/60
And it looks like you only need the header file and you should be good to go.
A similar tool that already ships with Qt is found here:
http://doc.qt.io/qt-5/qcontiguouscache.html#details
The advantage of structures like these is that they don't need to allocate dynamic memory on every update; they just move the head and tail pointers.
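For illustration, a minimal fixed-size ring buffer along these lines might look as follows. This is only a sketch, not the QCircularBuffer/QContiguousCache API; the class name and sizes are assumptions based on the question.
#include <vector>
#include <cstddef>

// Fixed-capacity ring buffer: pushing new samples overwrites the oldest ones,
// with no per-update allocation.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity) : data_(capacity, 0.0), head_(0) {}

    // Append a block of samples, overwriting the oldest data.
    void push(const double* samples, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            data_[head_] = samples[i];
            head_ = (head_ + 1) % data_.size(); // wrap around at the end
        }
    }

    // Read sample i, counted from the oldest stored value.
    double at(std::size_t i) const {
        return data_[(head_ + i) % data_.size()];
    }

private:
    std::vector<double> data_;
    std::size_t head_; // next write position == index of the oldest sample
};
For the question's numbers, RingBuffer buf(80000); followed by buf.push(DatosNuevos, 320); on each iteration would replace the erase/append pair without any reallocation.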
Hope that helps.


Monitor buffers in GNU Radio

I have a question regarding buffering between blocks in GNU Radio. I know that each block in GNU Radio (including custom blocks) has buffers to store items that are going to be sent, or to store received items. In my project, there is a certain sequence I have to maintain to synchronize events between blocks. I am using GNU Radio on the Xilinx ZC706 FPGA platform with the FMCOMMS5.
In GNU Radio Companion I created a custom block that controls a GPIO output port on the board. In addition, I have an independent source block that is feeding information into the FMCOMMS GNU Radio block. The sequence I am trying to maintain is: first, I send data to the FMCOMMS block; second, I make sure that the data got consumed by the FMCOMMS block (essentially by checking its buffer); then, finally, I control the GPIO output.
From my observations, the source block's buffer doesn't seem to send the items until it's full. This will cause a major issue in my project, because it means the GPIO data will be sent before, or in parallel with, the items going to the other GNU Radio blocks. That's because I'm setting the GPIO value through direct access to its address in the 'work' function of my custom block.
I tried to use pc_output_buffers_full() in the 'work' function of my custom source in order to monitor the buffer, but I'm always getting 0.00. I'm not sure whether it's supposed to be used in custom blocks, or whether the 'buffer' in this case is something different from where the output items are stored. Here's a small code snippet which shows the problem:
char level_count = 0, level_val = 1;
vector<float> buff(1, 0.0000);
for (int i = 0; i < noutput_items; i++)
{
    if (level_count < 20)
    {
        // Hold the current level for 20 output items.
        out[i] = gr_complex((float)level_val, 0);
        level_count++;
    }
    else
    {
        // Toggle the level and start a new run.
        level_count = 0;
        level_val ^= 1;
        out[i] = gr_complex((float)level_val, 0);
    }
    // Query how full each output buffer is (always prints 0.00 here).
    buff = pc_output_buffers_full();
    for (size_t n = 0; n < buff.size(); n++)
        cout << fixed << setw(5) << setprecision(2) << setfill('0') << buff[n] << " ";
    cout << "\n";
}
Is there a way to monitor the buffer so that I can determine when my first part of data bits has been sent? Or is there a way to make sure that each single output item is sent like a continuous stream to the next block(s)?
GNU Radio Companion version: 3.7.8
OS: Linaro 14.04 image running on the FPGA
Or is there a way to make sure that each single output item is sent like a continuous stream to the next block(s)?
Nope, that's not how GNU Radio works (at all!):
A while back I wrote an article that explains how GNU Radio deals with buffers, and what these actually are. While the in-memory architecture of GNU Radio buffers might be of lesser interest to you, let me quickly summarize the dynamics:
The buffers that (general_)work functions are called with behave, for all practical purposes, like linearly addressable ring buffers. You get a random number of samples at once (restrictable to minimum counts or to multiples of a count), and anything you don't consume will be handed to you again the next time work is called.
These buffers hence keep track of how much you've consumed, and thus how much free space is in a buffer.
The input buffer a block sees is actually the output buffer of the "upstream" block in the flow graph.
GNU Radio's computation is backpressure-controlled: any block's work method will immediately be called in an endless loop as long as:
There's enough input for the block to do work, and
There's enough output buffer space to write to.
Therefore, as soon as one block finishes its work call, the upstream block is informed that there's new free output space, typically leading to it running again.
That leads to a high degree of parallelism, since even adjacent blocks can run simultaneously without conflict.
This architecture favors large chunks of input items, especially for blocks that take a relatively long time to compute: while the block is still working, its input buffer is already being filled with chunks of samples; when it's finished, chances are it's immediately called again with all the available input buffer already filled with new samples.
This architecture is asynchronous: even if two blocks are "parallel" in your flow graph, there's no defined temporal relation between the numbers of items they produce.
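To make the consume/return bookkeeping concrete, here is a hypothetical pass-through general_work (a sketch only, not code from the question; std::min needs <algorithm>):
int general_work(int noutput_items,
                 gr_vector_int& ninput_items,
                 gr_vector_const_void_star& input_items,
                 gr_vector_void_star& output_items)
{
    const gr_complex* in = (const gr_complex*)input_items[0];
    gr_complex* out = (gr_complex*)output_items[0];

    // Process only as many items as both the input and the output buffer allow.
    int n = std::min(noutput_items, ninput_items[0]);
    for (int i = 0; i < n; i++)
        out[i] = in[i];

    consume_each(n); // anything we don't consume is handed to us again next call
    return n;        // tells the scheduler how many output items we produced
}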
I'm not even convinced that switching GPIOs based on the timing of computations in this completely non-deterministic data flow model is a good idea to start with. Maybe you'd rather want to calculate "timestamps" at which GPIOs should be switched, and send (timestamp, gpio state) command tuples to some entity in your FPGA that keeps absolute time? On the scale of radio propagation and high-rate signal processing, CPU timing is really inaccurate, and you should use the fact that you have an FPGA to actually implement deterministic timing, and use the software running on the CPU (i.e. GNU Radio) to determine when that should happen.
Is there a way to monitor the buffer so that I can determine when my first part of data bits has been sent?
Other than that, a method to asynchronously tell another block that, yes, you've processed N samples, would be either to have a single block that observes the outputs of both blocks you want to synchronize and consumes an identical number of samples from both inputs, or to implement something using message passing. Again, my suspicion is that this is not a solution to your actual problem.
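A minimal sketch of such an observer block (the class name sync_observer is an assumption; it relies on the fact that a sync block consumes the same number of items from every input, which throttles both upstream blocks in lockstep):
#include <gnuradio/sync_block.h>
#include <gnuradio/io_signature.h>
#include <gnuradio/gr_complex.h>

// Sink with two inputs and no outputs: because it is a sync_block, GNU Radio
// consumes an identical number of items from both inputs on every call.
class sync_observer : public gr::sync_block
{
public:
    sync_observer()
        : gr::sync_block("sync_observer",
                         gr::io_signature::make(2, 2, sizeof(gr_complex)),
                         gr::io_signature::make(0, 0, 0)) {}

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items)
    {
        // Inspect input_items[0] / input_items[1] here if needed; returning
        // noutput_items consumes that many items from each input.
        return noutput_items;
    }
};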

SDL_Mixer: is playing a single chunk over itself possible?

I'm having trouble with SDL_Mixer (my lack of experience). Chunks and music play just fine (using Mix_PlayChannel and Mix_PlayMusic), and playing two different chunks simultaneously isn't an issue.
My problem is that I would like to play some chunk1, and then play a second iteration of chunk1 overlapping the first. I am trying to play a single chunk in rapid succession, but it instead plays the sound repeatedly at a much longer interval (not as quickly as I want). I've checked the console output, and my method of playing/looping is not at fault, since I can see the console messages printing in a loop at the right speed.
I have an array of chunks that I load during initialization using Mix_LoadWAV():
Mix_Chunk *sounds[32];
I also have a function reserved for playing these chunks:
void PlaySound(int snd_id)
{
    if (snd_id >= 0 && snd_id < 32)
    {
        // -1 picks the first free unreserved channel.
        if (Mix_PlayChannel(-1, sounds[snd_id], 0) == -1)
        {
            printf("Mix_PlayChannel: %s\n", Mix_GetError());
        }
    }
}
Attempting to play a single sound several times in rapid succession (say, with a 100 ms delay, i.e. 10 plays per second), I instead hear the sound at a fixed, slower interval (some 500 ms or so, i.e. about 2 plays per second), despite the function being called at the faster rate.
I have already called Mix_AllocateChannels(16); to ensure I have channels allocated (let me know if I'm using that incorrectly), and still a single chunk from the array refuses to play at the requested rate.
Any ideas/help is appreciated, as well as critique on how I posted this question.
As said in the documentation of SDL_mixer (https://www.libsdl.org/projects/SDL_mixer/docs/SDL_mixer_28.html):
"... -1 for the first free unreserved channel."
So if your chunk is longer than 1.6 seconds (16 channels * 100 ms), you'll run out of channels after 1.6 seconds, and you won't be able to play new chunks until one of the channels finishes playing.
So there are basically two solutions (see the sketch below):
Allocate more channels (more than ChunkDuration (in s) / Delay (in s)).
Stop a channel so that you can reuse it. To do it properly, don't pass -1 as the channel; use a variable that you increment each time you play a chunk (and set back to 0 when it equals your number of channels).
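A minimal sketch of the second option (NUM_CHANNELS and PlaySoundRotating are hypothetical names; this assumes Mix_AllocateChannels(NUM_CHANNELS) was called during initialization):
#include <SDL_mixer.h>
#include <stdio.h>

#define NUM_CHANNELS 16

static int next_channel = 0;

void PlaySoundRotating(Mix_Chunk* chunk)
{
    // Halt whatever is playing on this channel (a no-op if it is free),
    // then reuse it, so the oldest playback is the one that gets cut off.
    Mix_HaltChannel(next_channel);
    if (Mix_PlayChannel(next_channel, chunk, 0) == -1)
    {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
    next_channel = (next_channel + 1) % NUM_CHANNELS; // wrap back to channel 0
}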

Correct use of memcpy

I have some problems with a project I'm doing. Basically I'm just using memcpy the wrong way. I know the theory of pointers/arrays/references and should know how to do this; nevertheless I've spent two days now without any progress. I'll try to give a short code overview, and maybe someone will see a fault. I would be very thankful.
The setup: I'm using an ATSAM3x microcontroller together with a uC for signal acquisition. I receive the data over SPI.
I have an interrupt receiving the data whenever the uC has data available. The data is then stored in a buffer (int32_t buffer[1024 or 2048]). There is a counter that counts from 0 to the buffer size minus 1 and determines the place where each data point is stored. Currently I receive a test signal that is internally generated by the uC:
// ch1: receive 24-bit data in 8-bit chunks and sign-extend into an int32_t
ch1 = ch1 | (SPI.transfer(PIN_CS, 0x00, SPI_CONTINUE) << 24) >> 8; // bits 23..16 (plus sign extension)
ch1 = ch1 | (SPI.transfer(PIN_CS, 0x00, SPI_CONTINUE) << 16) >> 8; // bits 15..8
ch1 = ch1 | (SPI.transfer(PIN_CS, 0x00, SPI_CONTINUE) << 8) >> 8;  // bits 7..0
if (/* not important */) {
    _ch1Buffer[_ch1SampleCount] = ch1;
    _ch1SampleCount++;
    if (_ch1SampleCount > SAMPLE_BUFFER_SIZE - 1) _ch1SampleCount = 0;
}
This ISR is active all the time. Since I need raw data for the signal processing, and the buffer is changed by the ISR whenever a new data point is available, I want to copy parts of the buffer into a temporary "storage".
To do so, I have another global counter which is incremented within the ISR. In the main loop, whenever that counter reaches a certain size, I call a method to get some of the buffer data (about 30 samples).
The method acquires the current position in the buffer:
int ch1Pos = _ch1SampleCount;
and then, depending on that position, I try to use memcpy to get my samples. Depending on the position in the buffer, there has to be a "wrap-around" to get the full set of samples:
if (ch1Pos >= (RAW_BLOCK_SIZE - 1)) {
    memcpy(&ch1[0], &_ch1Buffer[ch1Pos - (RAW_BLOCK_SIZE - 1)], RAW_BLOCK_SIZE * sizeof(int32_t));
} else {
    memcpy(&ch1[RAW_BLOCK_SIZE - 1 - ch1Pos], &_ch1Buffer[0], ch1Pos * sizeof(int32_t));
    memcpy(&ch1[0], &_ch1Buffer[SAMPLE_BUFFER_SIZE - 1 - (RAW_BLOCK_SIZE - ch1Pos)], (RAW_BLOCK_SIZE - ch1Pos) * sizeof(int32_t));
}
_ch1Buffer is the buffer containing the raw data.
SAMPLE_BUFFER_SIZE is the size of that buffer.
ch1 is the array which is supposed to hold the set of samples.
RAW_BLOCK_SIZE is the size of that array.
ch1Pos is the position of the last data point written to the buffer by the ISR at the time this method is called.
Technically I'm aware of the requirements, but apparently that's not enough ;-).
I know that the data received over the SPI interface is correct. The problem is that this is not the case for the extracted samples. There are a lot of spikes in the data that indicate I've been reading something I wasn't supposed to read. I've changed the memcpy commands so often that I've completely lost the overview. The code sample above is one version of many, and while you're reading this I'm sure I've changed everything again.
I would appreciate every hint!
Thanks & greetings!
EDIT
I've written everything down (again) on a sheet of paper and tested some configurations. This is the updated code for the memcpy part:
if (ch1Pos >= (RAW_BLOCK_SIZE - 1)) {
    memcpy(&ch1[0], &_ch1Buffer[ch1Pos - (RAW_BLOCK_SIZE - 1)], RAW_BLOCK_SIZE * sizeof(int32_t));
} else {
    memcpy(&ch1[RAW_BLOCK_SIZE - 1 - ch1Pos], &_ch1Buffer[0], (ch1Pos + 1) * sizeof(int32_t));
    memcpy(&ch1[0], &_ch1Buffer[SAMPLE_BUFFER_SIZE - (RAW_BLOCK_SIZE - 1 - ch1Pos)], (RAW_BLOCK_SIZE - 1 - ch1Pos) * sizeof(int32_t));
}
This already made it a lot better; with all the earlier changes, everything had become messed up. Now there is just one error left: a periodic spike. I'll try to get more information, but I think it is a wrong access while wrapping around.
I've also changed if (_ch1SampleCount > SAMPLE_BUFFER_SIZE - 1) _ch1SampleCount = 0; to if (_ch1SampleCount >= SAMPLE_BUFFER_SIZE) _ch1SampleCount = 0;.
EDIT II
To answer the questions of @David Schwartz:
SPI.transfer returns a single byte.
The buffer is initialized once at startup: memset(_ch1Buffer, 0, sizeof(int32_t) * SAMPLE_BUFFER_SIZE);
EDIT III
Sorry for the frequent updates; the comment section is getting too big.
I managed to get rid of a bunch of zero values at the beginning of the stream by decreasing ch1Pos (taken as int ch1Pos = _ch1SampleCount;). Now there is just one periodic "spike" (wrong value) left. It must be something with the split memcpy command. I'll continue looking. If anyone has an idea ... :-)
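For reference, one way to write the wrap-around copy is to compute the start index modulo the buffer size, which avoids most off-by-one pitfalls. This is only a sketch under the definitions above, assuming ch1Pos is the index of the most recently written sample; note the ISR can still race with this copy, so real code would also need to guard against concurrent writes (e.g. by briefly disabling the interrupt):
// Copy the RAW_BLOCK_SIZE most recent samples (oldest first) out of the
// ring buffer _ch1Buffer into ch1, ending at index ch1Pos.
int start = (ch1Pos + 1 + SAMPLE_BUFFER_SIZE - RAW_BLOCK_SIZE) % SAMPLE_BUFFER_SIZE;
if (start + RAW_BLOCK_SIZE <= SAMPLE_BUFFER_SIZE) {
    // Contiguous case: a single copy suffices.
    memcpy(&ch1[0], &_ch1Buffer[start], RAW_BLOCK_SIZE * sizeof(int32_t));
} else {
    // Wrapped case: copy the tail of the buffer first, then the beginning.
    int firstPart = SAMPLE_BUFFER_SIZE - start;
    memcpy(&ch1[0], &_ch1Buffer[start], firstPart * sizeof(int32_t));
    memcpy(&ch1[firstPart], &_ch1Buffer[0], (RAW_BLOCK_SIZE - firstPart) * sizeof(int32_t));
}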

OpenCL SHA1 Throughput Optimisation

Hoping someone more experienced in OpenCL usage may be able to help me here! I'm doing a project (to help me learn a bit more crypto and to try my hand at GPGPU programming) where I'm trying to implement my own SHA-1 algorithm.
Ultimately my question is about maximizing my throughput rate. At present I'm seeing something like 56.1 MH/s, which compares very badly to open-source programs I've looked at, such as John the Ripper and OCLHashcat, which achieve 1,000 and 1,500 MH/s respectively (heck, I'd be well chuffed with a third of that!).
So, here's what I'm doing:
I've written a SHA-1 implementation in an OpenCL kernel and a C++ host application to load data onto the GPU (using the CL 1.2 C++ wrapper). I'm generating blocks of candidate data to hash in a threaded fashion on the CPU and loading this data into global GPU memory using the C++ wrapper's enqueueWriteBuffer (using uchars to represent the bytes to hash):
errorCode = dispatchQueue->enqueueWriteBuffer(
    inputBuffer,
    CL_FALSE, // CL_TRUE,
    0,
    sizeof(cl_uchar) * inputBufferSize,
    passwordBuffer,
    NULL,
    &dispatchDelegate);
I'm enqueuing the work using enqueueNDRangeKernel in the following manner (where the global work size is a user-defined variable; at present I've set this to my GPU's maximum flattened global work size of 16.777 million per run):
errorCode = dispatchQueue->enqueueNDRangeKernel(
    *kernel,
    NullRange,
    NDRange(globalWorkgroupSize, 1),
    NullRange,
    NULL,
    NULL);
This means that (per dispatch) I load 16.777 million items in a 1D array and index into this from my kernel using get_global_id(0).
My kernel signature:
__kernel void sha1Crack(__global uchar* out, __global uchar* in,
                        __constant int* passLen, __constant int* targetHash,
                        __global bool* collisionFound)
{
    // Kernel-instance global GPU memory IO mapping:
    __private int id = get_global_id(0);
    __private int passwordLen = *passLen;
    __private int inputIndexStart = id * passwordLen;
    __private uchar inputMem[64]; // assumed upper bound on candidate length

    // Select password input from the key space:
    #pragma unroll
    for (int i = 0; i < passwordLen; i++)
    {
        inputMem[i] = in[inputIndexStart + i];
    }

    // SHA1 code omitted for brevity...
}
So, given all this: am I doing something fundamentally wrong in the way I'm loading data? I.e. one call to enqueueNDRangeKernel for 16.7 million kernel executions over a 1D input vector? Should I be using a 2D space and subdividing into local workgroup ranges? I tried playing with this, but it didn't seem any quicker.
Or, perhaps just as likely, is my algorithm itself the source of the slowness? I've spent a good while optimizing it and manually unrolling all of the loop stages using preprocessor directives.
I've read about memory coalescing on the hardware. Could that be my issue? :S
Any advice at all is appreciated! If I've missed anything important please let me know and I'll update.
Thanks in advance! ;)
Update: 16,777,216 is the device's maximum reported workgroup size: 256^3. The global array of boolean values is a single boolean. It's set to false at the start of the kernel enqueue, and a branching statement sets it to true only if a collision is found. Will that force a convergence? passwordLen is the length of the current input value, and targetHash is an int[4]-encoded hash to check against.
Your 'maximum flattened global worksize' should be multiplied by passwordLen. It is the number of work items you can run, not the maximal length of an input array. You can most likely send much more data than this to the GPU.
Other potential issues: the 'generating blocks of candidate data to hash in a threaded fashion on the CPU'. Try doing this in advance of the kernel iterations, to see whether the delay is in the generation of the data blocks or in the processing of the kernels. Your SHA-1 algorithm is the other obvious potential issue. I'm not sure how much you've really optimised it by 'unrolling' the loops; usually the bigger optimisation issue is 'if' statements (if a single kernel instance within a workgroup tests true, then all of the lockstepped workgroup instances must follow that branch in parallel).
And DarkZeros is correct: you should manually play with the local workgroup size, making it the highest common factor of the global size and the number of kernels which can run at once on the card. The easiest way to do this is to round the global work size up to the next multiple of the card capacity, and use an if {} statement in the kernel that only runs the kernel body for global_id less than the actual number of kernels you want to run (as sketched below).
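A sketch of that round-up-and-guard pattern (localSize and actualWorkItems are assumed names; the guard is cheap because whole workgroups past the limit return together):
// Host side: round the global size up to a multiple of the chosen local size,
// then pass actualWorkItems to the kernel (e.g. as another __constant argument).
size_t localSize = 256; // assumed; choose per device capabilities
size_t rounded = ((actualWorkItems + localSize - 1) / localSize) * localSize;
errorCode = dispatchQueue->enqueueNDRangeKernel(
    *kernel, NullRange, NDRange(rounded, 1), NDRange(localSize, 1), NULL, NULL);
and inside the kernel:
if (get_global_id(0) >= actualWorkItems)
    return; // padding work item: do nothing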
Dave.

Silence between played buffers in OpenAL?

I use alSourceQueueBuffers to stream buffers into an AL sound source. I have buffers of different sizes that need to be played one after another. So far so good; however, between some buffers I need a variable amount of silence. How can I add it programmatically?
Perhaps the easiest way would be to generate buffers that hold silence of the length needed and queue them appropriately. You just need to make an array full of zeros, sized according to the sample rate and the desired length of silence, and pass it into the buffer.
If you want things to be more complicated, then you can't queue all of the buffers up front: you queue the one that needs to play right now and set a timer for when it will be done (and the amount of silent time has also passed); then you can queue the next buffer. Or you can poll the source to see if it has stopped, and when it does, start counting down the silent time. You could also use the streaming functionality...
Edit:
This worked for me. The sample rate needs to be the same as the other buffers queued on your source. You could also have a 'greatest common denominator' length buffer and just queue it up multiple times.
int sampleRate = 22050;           // must match the other buffers on the source
double sTime = 2.5;               // how long to maintain silence, in seconds
int sampleCount = int(sTime * sampleRate);
int byteCount = sampleCount * sizeof(short);
short* silence = (short*)malloc(byteCount);
memset(silence, 0, byteCount);    // zeroed PCM samples == silence
ALuint silenceBuffer;
alGenBuffers(1, &silenceBuffer);  // create the buffer (if not created elsewhere)
alBufferData(silenceBuffer, AL_FORMAT_MONO16, silence, byteCount, sampleRate);
alSourceQueueBuffers(mySource, 1, &silenceBuffer);
free(silence);                    // safe: alBufferData copies the data