image buffer for video processing [closed] - c++

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I want to develop an application with Qt and OpenCV in order to process all the frames coming from a camera.
I have two QThreads, one for capturing images and the other for processing.
The processing thread is a little slow, so in order to process all the frames I need a frame buffer.
I have no idea how to implement a simple frame buffer.
Any help would be appreciated.

You'll want your threads to run asynchronously. When you capture an image, add it to a std::queue from the capture thread, and let your processing thread pull from that queue. Use pointers (or shared pointers) for the images where you can, to cut down on memory use and copying. Make sure the queue is thread safe by locking a std::mutex where appropriate.
Since you are using Qt, you could use QQueue for the queue and QMutex for the mutex.

If your processing thread is slower than the frame capture rate, your code will eventually run out of memory. You should consider decreasing the capture frame rate, dropping frames, or decreasing the frame resolution.
As for the buffer, go for a thread-safe circular queue of frames, where the capturing thread is the producer and the processing thread is the consumer. When the queue is full (and it eventually will be), you have two options: (1) remove the oldest frame (one not currently being processed) and add the new one, or (2) just drop the newest frame, which is easier to implement.

Related

Handle multiple events on GPIO [closed]

Closed 3 years ago.
I'm new to embedded programming and I apologise in advance for any confusion.
I need to handle multiple events from different devices connected via GPIO. These events need to be monitored continually: after one event is generated and handled, the code needs to keep monitoring the device for further events.
I understand the concept of interrupts and polling in Linux (the kernel gets an interrupt and dispatches it to the handler, which passes it on up to the caller of an epoll sitting inside an infinite while(1)-style loop).
This is fine for one-time, single-event toy models. But in an embedded system with limited resources, such as the AT91SAM9x5 that runs at 400 MHz and has 128 MB of RAM, what can I do? I believe the while(1)-style pattern isn't the best choice. I've heard good things about thread-pool solutions, but at the heart of each thread don't we find a while(1) anyway?
What are my options for attacking this problem?
Thank you in advance !
For an embedded system, the AT91SAM is actually quite resource-rich rather than resource-limited. The idea is the same as if you were writing it under Linux: you set up a pin interrupt, and in your interrupt handler you do some minimal processing and perhaps set some global data, so that your main while(1) loop can detect the situation and then process the information in a non-interrupt context. Basically, you want the interrupt handler to finish as quickly as possible so that it can handle the next interrupt.
In most systems, interrupts can be pended or nested. On systems that allow nested interrupts, you have to make sure a nested interrupt does not trash the context of the previous interrupt that is still being handled.
The actual communication scheme between the interrupt handler and the main code depends on your requirements. You can even use an RTOS with support for such requirements.
It depends a lot on what your application is and what your constraints are, but here are some of the common methods for monitoring GPIO pins for events.
In many newer controllers, all GPIO pins are capable of generating a combined interrupt. You can use this to trigger an ISR call on any change on any of the pins, and then detect inside the ISR which specific pin triggered it.
If there is nothing else your controller should be doing, then there is nothing wrong with a while(1) loop that continuously monitors all port pins and triggers the relevant actions.
If none of the above solutions are acceptable, you can try loading a small OS like FreeRTOS on your controller and then use different tasks to monitor the port pins.
A lighter version of the above method is to configure a timer interrupt and poll all the port pins inside it. You can then save the state of the pins in a global variable and use it in the main loop to take the relevant actions.

System Architecture using Amazon SQS [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am modeling a system (A) that notifies system (B) when something changes on system (C).
My system (A) just polls system (C) using an API, and when something changes it notifies system (B).
The system (C) stores status about an order. This status can be X, Y, Z.
It can take minutes to hours for the order to transition from state X to Y. And it can take days for the order to transition from Y to Z.
If I model my system using just one queue, I think I am going to have problems.
At some point I could have many more messages for the Y-->Z transition than for X-->Y, because Y-->Z takes much longer to happen than X-->Y.
So it would take a long time to get around to processing X-->Y. To prevent this, I think I would have to clone each message and re-enqueue it, putting it back at the end of the queue.
Instead of one queue, I could have two queues, one for each status transition.
I have simplified the problem here; in the original problem I can have multiple statuses (I think it is 5).
Do you guys have a suggestion?
Keeping a separate queue for each status is the best option for your use case, since you don't have to tune a single queue to handle every status. A few things to keep in mind:
Queue systems are not fail-safe. The queue consumer must be able to handle an application crash.
If required, implement a process to make sure messages are not processed twice (a program might read a message but fail to consume it after processing).
Implement a fail-safe housekeeping process that republishes work if the queue server goes down and wipes all its messages with it.
Amazon SQS's maximum message retention period is 14 days. Even with your own MQ server, it is risky to store messages long-term.
Whether it is your X-->Y or your Y-->Z process that takes longer doesn't matter. If you publish the message to the queue as soon as the change is triggered (say the Y-->Z change happens after 24 days), there should be no waiting issue. IMHO, I see no reason for the "re-enqueue" idea.

Image processing and saving using multithreading

I've written software in C++ for processing the video stream coming from a camera, using the OpenCV libraries.
I would like to save the video frames while processing them, in order to be able to run the code many times offline with the exact same video as input.
I was thinking of using multithreading with the producer/consumer pattern.
My idea would be to have one producer (the frame grabber) and two consumers (one for processing the image and one for saving the frames to file, as video).
I don't have experience with multithreaded programming, so I've searched for tutorials on the internet.
All the tutorials I've found cover one producer and one consumer, but what I need is slightly different: a producer that sends the same image to both consumers and, after both consumers have finished their work, goes ahead with the next frame. The point is that the producer would have one queue where it stores the frames, while both consumers would each need to read the same element once from that same queue.
Do you have any suggestion?
Do you think that the pattern I've chosen fits my need?
Thanks.
Producer-consumer works. In your case, the producer could "produce" twice: first placing the frame in the processing queue, then placing a second copy (or a shared pointer to the same frame) in the saving queue.

Global mouse hook on click events in X11 [closed]

Closed 8 years ago.
I've read a lot of information about the X11 graphics system and found a lot of questions about this issue without answers. So let me ask one more time.
I need a classic implementation of a hook mechanism (like SetWindowsHookEx) or any other approach on UNIX-like operating systems, with ONLY ONE CONDITION: the ability to listen to events without blocking the original event (the way XGrabButton and XUngrabButton do).
P.S. Ben, this is Danila. I need help!
I ended up grabbing the source code from Xnee, which allows recording all input events, including keyboard and mouse, with non-blocking logic. The only restriction is that I have to poll for events in a loop at a 100 ms interval, but that's OK for me; there is no processor load at all.
It's not possible to do globally (all events/all windows) unless you read the low-level communication (using pcap, or replacing the real X server with a proxy that gives you all the data).
To get notifications for a particular window, you change the event mask of that window. The server keeps a separate copy of the event mask for a window per client, and notifies each client interested in events matching its mask. For example, if you add the PointerMotion bit to the root window's event mask from your connection, you'll get pointer events when the mouse moves over the root window (given it's visible).

Microphone-specific signal to activate an interrupt [closed]

Closed 9 years ago.
A heads up: I am very new to audio, so please bear with me =)
I am trying to feed audio signals into an AVR (a classic myAVR MK2 board). Normally the interrupt signal is some kind of switch: if I press this switch, go into that interrupt.
My goal is to feed microphone audio signals into the board and have the board react to them. My first question is: when sending the microphone signal, do I have to put it through the A/D converter, since technically it is an analog signal?
My second and more complicated question is: how would I actually interpret the incoming audio signal?
For example, if I scream "GREEN", then whatever the program was doing should be stopped, the interrupt should be called, and the green LED should come on. Now, the mic is pretty much always on... how do I make sure the interrupt signal is sent only when "GREEN" is said? I don't want it constantly going in and out of interrupts just because someone made some noise...
Would I have to save "GREEN" as a bit combination somewhere and compare the incoming signal with the saved bits... or?
Some answers:
...do I have to put it through the A/D converter, since technically it is an analog signal?
Yes; digital chips may fry when exposed to analog signals.
Be aware that you may have to wait some time after starting the ADC before the readings are accurate.
how would i actually interpret the audio signal coming in?
Basically you have digital values coming in at a frequency. You will need to store those values and then analyze them. You must trade memory capacity/usage for accuracy. The more samples you take, the better your data and results; but this occupies more memory.
You will also need to filter out noise from the signal and from layered sounds.
You may get some benefit from researching FFTs.
You should compare using "fuzzy logic" because in the real world, nothing is exact; for example your voice signal could be +/- 30 counts and still be "correct".