It's a bit difficult to track the code flow in GStreamer. Plain C code executes sequentially, so you always know which statement runs after which. GStreamer, however, is callback-driven, so there is no single obvious sequence, and most of the time the flow is hard to follow just by reading the source.
Capture a log at debug level 9 and search it for change_state; that will show you where the state transitions for the elements in a pipeline start. From there, work through the code alongside the debug logs, i.e. follow the code with respect to the state transitions of each element.
The approach above is useful if you are only trying to resolve a bug. If you want to understand GStreamer thoroughly, first learn GObject, which is essentially C with object-oriented concepts. Then study the class hierarchy of the element you want to debug: understand inheritance in GObject, how virtual functions are overridden, and the chaining mechanism. Once you understand these basic mechanisms, the GStreamer code flow reads much like any other C/C++ code.
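To make the overriding and chaining concrete, here is a minimal, hypothetical element skeleton for GStreamer 1.x (the MyFilter/my_filter names are made up for illustration): the class_init replaces the change_state virtual method, and the override chains up to the parent class, which is exactly the pattern you will see when you follow change_state through the debug logs:

```
#include <gst/gst.h>

/* Hypothetical element type, just to illustrate the GObject pattern. */
typedef struct _MyFilter      { GstElement parent; }            MyFilter;
typedef struct _MyFilterClass { GstElementClass parent_class; } MyFilterClass;

/* G_DEFINE_TYPE sets up the inheritance and a my_filter_parent_class pointer. */
G_DEFINE_TYPE (MyFilter, my_filter, GST_TYPE_ELEMENT)

static GstStateChangeReturn
my_filter_change_state (GstElement *element, GstStateChange transition)
{
  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* element-specific work (open devices, allocate resources, ...) */
      break;
    default:
      break;
  }

  /* Chain up so the parent class performs the generic part of the state
   * change; this is the call chain you see in the debug log. */
  return GST_ELEMENT_CLASS (my_filter_parent_class)->change_state (element,
      transition);
}

static void
my_filter_class_init (MyFilterClass *klass)
{
  /* "Overriding" in GObject: replace the virtual method pointer. */
  GST_ELEMENT_CLASS (klass)->change_state = my_filter_change_state;
}

static void
my_filter_init (MyFilter *self)
{
}
```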
I'm reading the code of TensorFlow v1.3 so that I can use the framework more precisely.
However, there are a lot of complicated parts.
Specifically, I'm looking at how a graph is actually executed.
My understanding is that when a node's numerical computation is done, its output is added to a ready queue.
With that picture in mind, I traced the TensorFlow code. However, in the PropagateOutputs function (tensorflow/core/common_runtime/executor.cc), the cases are divided into four kinds of node: enter, exit, next iteration, and none.
At this point I have no idea which nodes are enter, exit, and so on. I also can't grasp what "frame" and "iteration" mean, even after reading the comments in the TF code.
Can someone explain these concepts, or point me to a reference for studying the architecture of TensorFlow?
Thanks.
Merge and Switch are concepts taken from the dataflow-processing architectures of the 1970s. In TensorFlow's executor, loops run inside "frames": an Enter node moves a tensor into a child frame (one frame per while-loop), Exit returns a value from that frame to the parent frame, NextIteration forwards a value to the next iteration within the same frame, and "none" covers every ordinary node; the iteration count simply tracks which pass through the loop body a value belongs to.
(from Advances in Computers, 1992)
See section 4.4 of https://arxiv.org/pdf/1603.04467.pdf and the discussion in https://github.com/tensorflow/tensorflow/issues/4762
The following code may also be informative:
A replacement for Switch:
https://github.com/tensorflow/tensorflow/pull/9189
Adding gradients to while loop:
https://github.com/tensorflow/tensorflow/commit/301b14c2
I am creating a user interface in Qt and attaching it to my C/C++ motion application, using shared memory as the form of inter-process communication.
I have a class in my motion application with many members. Most of these members are used to update data on the UI, and some of them are updated about 20 to 50 times a second, so it is pretty fast (the application is tracking motion). My problem is that the data is not refreshed on the UI frequently; it only updates every few seconds. I was able to get this to work for variables in plain structures from my application by declaring them volatile, but that does not seem to work for the members of my class. I know the problem is not on the UI (Qt) side, because the member data itself was not being updated in my application, even though I issue commands to update it every cycle.
I was fairly sure the problem was some optimization occurring because the members were not declared volatile like the ones in my structures, but making them volatile still did not help. I also found that when I add a print statement in the function that updates my motion data within my motion application, the UI updates much more frequently, as if printing the message stops the compiler from optimizing something away.
Has anyone experienced this problem, or does anyone have a possible solution?
Your help is greatly appreciated. Thanks ahead of time!
EDIT:
The interface does not freeze completely; it just updates every few seconds instead of continuously, as I intended it to. Through various tests I know that the problem is not on the GUI or shared-memory side; it lies strictly in the motion application. The function I am calling is below:

```
int motionUpdate(MOTION_STAT *stat)
{
    positionUpdate(&stat->traj);
    return 0;  /* the function is declared int, so return a status */
}
```

where

```
positionUpdate() { stat->Position = motStatus.pos_fb; }
```

Position is a class member that contains x, y, and z. The function does not seem to update the position values unless I put a print statement before positionUpdate(). I don't watch for changes in shared memory to update the UI; I just update the UI every cycle.
Especially given that you are using Qt, I would strongly advise against using "native" shared memory; use signals instead. Concurrency based on message passing (signals/slots is one such mechanism) is much, much easier to reason about and debug than trying to share memory.
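As a minimal sketch of that idea (assuming the motion code can run in a thread of the Qt application rather than a separate process; MotionWorker, positionChanged, motionThread and MainWindow::updatePosition are hypothetical names), the worker just emits the latest sample and Qt delivers it to the GUI thread through a queued connection:

```
#include <QObject>
#include <QThread>

// Hypothetical worker living in the motion/acquisition thread.
class MotionWorker : public QObject
{
    Q_OBJECT
signals:
    // Emitted once per motion cycle with the latest position sample.
    void positionChanged(double x, double y, double z);

public:
    void cycle(double x, double y, double z)
    {
        emit positionChanged(x, y, z);   // no shared state, just a message
    }
};

// Setup (GUI thread), sketched:
//   MotionWorker *worker = new MotionWorker;
//   worker->moveToThread(&motionThread);
//   QObject::connect(worker, &MotionWorker::positionChanged,
//                    mainWindow, &MainWindow::updatePosition);
//   motionThread.start();
// Because sender and receiver live in different threads, the connection is
// queued and updatePosition always runs safely in the GUI thread.
```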
I would expect your problem with updating is that the UI isn't being called enough of the time, so there is a backlog of updating to do.
Try putting in some code that throws away updates if they happen less than 0.3 seconds apart and see if that helps. You may wish to tune that number but start at the larger end.
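A rough sketch of that throttling on the receiving side (the 300 ms figure and the MainWindow::updatePosition slot are placeholders, and this pairs with the signal/slot sketch above):

```
#include <QElapsedTimer>

// GUI-side slot that receives the position updates.
void MainWindow::updatePosition(double x, double y, double z)
{
    static QElapsedTimer sinceLastRepaint;

    if (!sinceLastRepaint.isValid()) {
        sinceLastRepaint.start();                // first sample: always accept
    } else if (sinceLastRepaint.elapsed() < 300) {
        return;                                  // too soon after the last repaint: drop it
    } else {
        sinceLastRepaint.restart();
    }

    // ... refresh the widgets with x, y, z here ...
}
```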
Secondly, make sure there aren't any "notspots" in your app, in which the GUI thread is not being given the chance to run. If there are, consider putting code into another thread or, alternatively, calling processEvents() within that part of the code.
If the above really isn't what's happening, I would suggest adding more info about the architecture of your program.
Question Intro
I'm running an OpenCV project in Visual Studio 2010 and have implemented CUDA support (refer to my previous question for precise info on my set-up). All CUDA functionality is working fine, to the best of my knowledge, and is indeed speeding up the image processing.
However, I now also want to attempt to speed up the video-writing function in this project by replacing the current cv::VideoWriter with gpu::VideoWriter_GPU. The reason is that cv::VideoWriter seems to somehow slow down processing outside of the scope in which the VideoWriter is called, so that images available at the DirectShow driver are dropped by the VideoCapture function, which messes up an algorithm I've implemented.
Problem
To attempt to solve this issue, I've now replaced the VideoWriter calls with VideoWriter_GPU functionality (and the corresponding syntax), but when I run my project (compile & run in Debug mode), I get the following error message (originating directly at the call to gpu::VideoWriter_GPU):

```
OpenCV Error: The function/feature is not implemented (The called functionality
is disabled for current build or platform) in unknown function, file
c:slave\builds\wininstallermegapack\opencv\modules\gpu\src\precomp.hpp, line 131.
```
and the program then ends with
code -529697949 (0xe06d7363)
I've purposely not included any of my code, because the error message so clearly originates from the call to gpu::VideoWriter_GPU that I don't think it's a coding or syntax problem. (Please comment if you feel my code is necessary for answering this question.)
My steps so far
I lack the natural gift of understanding precisely what this message means or how to interpret it. Does my OpenCV v2.4.4 simply not support what I want? Does this function simply not work on my Windows 7, 64-bit system?
I've checked as many Google hits as I could find (relating to this error message and combinations of search terms like "opencv, gpu, VideoWriter_GPU, disabled for current build") but have not been able to work out what the problem is or how to solve it.
The corresponding header file and error message can also be found here.
This post and this post suggest the error message is telling me that OpenCV simply does not provide the function or functionality I am aiming to use, or maybe even that CUDA is not supported at all. But that contradicts my experience, as every single OpenCV gpu function I've tried so far has seemed to work fine.
Question
Could someone please explain to me why this is not working for me, and more importantly share with me what I should do to make the VideoWriter_GPU work?
Many thanks!
Maybe this link can give you a little idea of what the problem is: VideoReader_GPU not available, but built with NVCUVID?
It seems that the problem is the CUDA_DISABLER variable, i.e. that part of the gpu module was disabled when the OpenCV build you are using was compiled.
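As a quick way to check this on your own machine, here is a minimal sketch (OpenCV 2.4.x; the output file name, frame size and fps are placeholder values) that prints the CUDA device count and the build information, then wraps the writer in a try/catch. If the build-information output shows that the GPU video-encoding support was not compiled in, the only fixes are rebuilding OpenCV with it enabled or staying with a CPU-side writer:

```
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // 0 here means the library was built without CUDA or no usable device was found.
    std::cout << "CUDA devices: " << cv::gpu::getCudaEnabledDeviceCount() << std::endl;

    // Lists which optional components were enabled when this build was compiled.
    std::cout << cv::getBuildInformation() << std::endl;

    try
    {
        cv::gpu::VideoWriter_GPU writer;
        writer.open("out.avi", cv::Size(640, 480), 25.0);   // placeholder parameters
        std::cout << "VideoWriter_GPU opened fine" << std::endl;
    }
    catch (const cv::Exception& e)
    {
        // With a build where this feature is disabled, the same
        // "disabled for current build or platform" error lands here.
        std::cerr << "VideoWriter_GPU not available: " << e.what() << std::endl;
    }
    return 0;
}
```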
WebKit's Remote Debugging Protocol went 1.0 recently and I've been playing around with it a little, mostly out of curiosity and interest. I've thrown together a very basic recreation of Chrome's developer tools console as a replacement front-end, but I'm a little confused as to how I can execute code in a specific frame/window like Chrome's Dev Tools allow you to.
At the moment, I'm using the Runtime.evaluate method to execute my console input. This seems inadequate because of the problem mentioned above, and it doesn't provide the command-line API. I've discovered the Debugger.evaluateOnCallFrame method, which requires a callFrameId parameter. The only problem is that there doesn't seem to be a way to remotely acquire a list of call frame objects to pass to this method.
I have a feeling I'm completely missing something here. Does anyone know the solution?
Have a look at the Debugger.paused event: its payload contains an array of the current call frames, and each frame carries a callFrameId that you can pass to Debugger.evaluateOnCallFrame.
The overall program is too complex to display here. Basically, just pay attention to the green highlights in my recent git commit. I am very new to DirectInput, so I expect I've made several errors. I have very carefully studied the MSDN documentation, so I promise I'm not just throwing this out there and stamping FIX IT FOR ME on it. :)
Basically, I think I have narrowed down my problem to the area of code around Engine::getEvent (line 238+). I do not understand how these functions work, and I've messed with certain pieces to achieve different results. My goal here is to simply read in keyboard events directly and output those raw numbers to the screen (I will deal with the numbers' meaning later). The problem here relates to KEYBOARD_BUFFER_SIZE. If I make it small, the program seems to run fine, but it outputs no events. If I make it big, it runs a bit better, but it starts to slow down and then freeze (the OpenGL window just has a rotating color cube). How do I properly capture keyboard events?
I checked the return values on all the setup steps higher in the code. They all return DI_OK just fine.
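For comparison, here is a minimal, hypothetical sketch of the usual buffered-read pattern in DirectInput 8 (function names like pollKeyboard are made up; this is not your code, which is only linked): set the buffer size once with DIPROP_BUFFERSIZE, then each frame ask GetDeviceData for at most that many DIDEVICEOBJECTDATA items and reacquire the device if it was lost:

```
#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>

static const DWORD KEYBOARD_BUFFER_SIZE = 32;      // number of items, not bytes

// One-time setup, after SetDataFormat(&c_dfDIKeyboard) and SetCooperativeLevel.
bool setKeyboardBuffer(IDirectInputDevice8 *keyboard)
{
    DIPROPDWORD dipdw;
    dipdw.diph.dwSize       = sizeof(DIPROPDWORD);
    dipdw.diph.dwHeaderSize = sizeof(DIPROPHEADER);
    dipdw.diph.dwObj        = 0;
    dipdw.diph.dwHow        = DIPH_DEVICE;
    dipdw.dwData            = KEYBOARD_BUFFER_SIZE;
    return SUCCEEDED(keyboard->SetProperty(DIPROP_BUFFERSIZE, &dipdw.diph));
}

// Called once per frame: drain whatever events are queued.
void pollKeyboard(IDirectInputDevice8 *keyboard)
{
    DIDEVICEOBJECTDATA events[KEYBOARD_BUFFER_SIZE];
    DWORD count = KEYBOARD_BUFFER_SIZE;            // in: capacity, out: items read

    HRESULT hr = keyboard->GetDeviceData(sizeof(DIDEVICEOBJECTDATA),
                                         events, &count, 0);
    if (hr == DIERR_INPUTLOST || hr == DIERR_NOTACQUIRED) {
        keyboard->Acquire();                       // try again next frame
        return;
    }
    if (FAILED(hr))
        return;   // DI_BUFFEROVERFLOW is a success code and still delivers data

    for (DWORD i = 0; i < count; ++i) {
        DWORD key     = events[i].dwOfs;           // DIK_* scan code
        bool  pressed = (events[i].dwData & 0x80) != 0;
        // ... print key / pressed here ...
        (void)key; (void)pressed;
    }
}
```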
Your code seems to be okay (according to this tutorial, which I have used in the past). The use of several stack-based arrays is questionable, but shouldn't be too much of an issue (unless you start having lots of concurrent getEvent calls running).
However, your best bet would be to stop using DirectInput and start using Windows Raw Input. It's best to make this switch early (i.e., now) rather than realise later on that you really need something other than DI to get the results you want.
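If you make that switch, a minimal Raw Input sketch (Win32; the registerRawKeyboard/handleRawKeyboard names are made up for illustration) looks roughly like this: register the keyboard once, then decode WM_INPUT messages in your window procedure:

```
#include <windows.h>

// One-time registration, typically right after creating the window.
bool registerRawKeyboard(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;      // generic desktop controls
    rid.usUsage     = 0x06;      // keyboard
    rid.dwFlags     = 0;         // deliver input while the window has focus
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// Call this from the WM_INPUT case of your window procedure:
//   case WM_INPUT: handleRawKeyboard(lParam); break;
void handleRawKeyboard(LPARAM lParam)
{
    RAWINPUT input;
    UINT size = sizeof(input);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &input, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;

    if (input.header.dwType == RIM_TYPEKEYBOARD) {
        USHORT vkey  = input.data.keyboard.VKey;                      // virtual-key code
        bool   keyUp = (input.data.keyboard.Flags & RI_KEY_BREAK) != 0;
        // ... print vkey / keyUp here ...
        (void)vkey; (void)keyUp;
    }
}
```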