What are the possible causes of glbadcontext?
Can it be related to the OpenGL version, the GPU, the Mesa libraries (on Linux), memory corruption, or something else?
I'm not experienced in OpenGL and I want to develop a clear understanding of that error.
There is no "bad context" error in OpenGL. There is the GL_CONTEXT_LOST error. What's this error about?
One of the consequences of programmability is that people can write bad programs for programmable hardware. So as GPUs have become more programmable, they have become susceptible to issues that arise when a GPU program does something stupid. In a modern OS, when a CPU process does something wrong, the OS kills the process; when a GPU "process" starts doing the wrong thing (accessing memory it's not allowed to, infinite loops, other brokenness), the OS resets the GPU.
The difference is that a GPU reset, depending on the reason for it and the particular hardware, often affects all programs using the GPU, not just the one that did a bad thing. OpenGL reports such a scenario by declaring that the OpenGL context has been lost.
The function glGetGraphicsResetStatus can be used to query the party responsible for a GPU reset. But even that is a half-measure, because all it tells you is whether it was your context or someone else's that caused the reset. And there's no guarantee that it will tell you that, since glGetGraphicsResetStatus can return GL_UNKNOWN_CONTEXT_RESET, which represents not being able to determine who was at fault.
Ultimately, a GPU reset could happen for any number of reasons. Outside of making sure your code doesn't do something that causes one, all you can do is accept that they can happen and deal with them when they do.
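For context, here is a minimal sketch of how an application might watch for a reset, assuming a context requested with the robustness hint through GLFW and function pointers loaded with GLAD (those two libraries, and the exit-on-reset handling, are assumptions made for the example, not part of the answer):

```cpp
// Sketch: detect a GPU reset / lost context in the render loop.
// Assumes GLFW for windowing and GLAD for loading GL 4.5 functions.
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;

    // Ask for a "robust" context so reset notifications are delivered to us.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 5);
    glfwWindowHint(GLFW_CONTEXT_ROBUSTNESS, GLFW_LOSE_CONTEXT_ON_RESET);

    GLFWwindow* win = glfwCreateWindow(640, 480, "reset demo", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);
    gladLoadGLLoader(reinterpret_cast<GLADloadproc>(glfwGetProcAddress));

    while (!glfwWindowShouldClose(win)) {
        // After a reset the driver reports who was responsible, if it knows.
        switch (glGetGraphicsResetStatus()) {
            case GL_NO_ERROR:
                break;                                         // context is fine
            case GL_GUILTY_CONTEXT_RESET:
                std::puts("our context caused the reset");     return 2;
            case GL_INNOCENT_CONTEXT_RESET:
                std::puts("another context caused the reset"); return 2;
            case GL_UNKNOWN_CONTEXT_RESET:
            default:
                std::puts("reset of unknown origin");          return 2;
        }

        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(win);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
```

In a real application you would destroy the lost context, create a new one, and re-upload your GPU resources instead of exiting.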
After a lot of research I could not find any solution to my question (if I had, I wouldn't be here...).
I'm looking for ways to reduce the flash memory used by my program.
I'm writing an embedded C++ program. When I flash my board in release mode everything is fine, because the image does not overflow the flash memory, but that is not the case in debug mode... I want to know whether there are techniques (my goal is to do it without shrinking the code itself) that could reduce flash usage. I already thought about defragmentation, but I can't find how to do that on an embedded target, and I don't even know whether it is possible... I also tried gcc's -Os option, but without any big success.
So I'll take any advice or pointers, and I'll answer any questions about my issue ;)
Thanks!
Look at your map file. Is there something there you don't expect? Functions you weren't expecting (floating-point support, exception handling, etc.) or something unreasonably large? (See the sketch after this list for a common example.)
Turn on optimization for everything except the file you're currently debugging.
Make sure you've actually got optimizations turned on (look at the build log and ensure that you've got -Os being passed to each compile step)
Consider link-time optimization (LTO), but don't expect miracles.
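One common instance of that kind of surprise, as a hedged illustration (the function names and the millivolt scaling are made up; actual sizes depend entirely on your toolchain): on many newlib-based embedded toolchains, formatting a float with printf pulls the whole floating-point formatting path into the image, whereas an integer-only version stays much smaller.

```cpp
// Sketch: an "unexpected" flash consumer and a leaner alternative.
#include <cstdio>
#include <cstdint>

// Using %f links in the floating-point formatting machinery, which can cost
// tens of kilobytes of flash on small targets.
void report_voltage_float(float volts) {
    std::printf("V = %.3f\n", static_cast<double>(volts));
}

// Leaner alternative: scale to millivolts and print integers only
// (assumes a non-negative reading, for brevity).
void report_voltage_fixed(std::uint32_t millivolts) {
    std::printf("V = %lu.%03lu\n",
                static_cast<unsigned long>(millivolts / 1000u),
                static_cast<unsigned long>(millivolts % 1000u));
}

int main() {
    report_voltage_fixed(3300u);   // prints "V = 3.300"
    return 0;
}
```

The map file will show the size difference between the two variants immediately, which is exactly what the first bullet above is getting at.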
Welcome to embedded programming. 90% of the job is figuring out how to stuff the never ending requirements into the memory available. Rinse and repeat.
My application has a huge memory leak which instantly eats all my memory, and I can't debug it because it freezes the computer...
Do you guys have any technical solution for that kind of issue?
Edit: I am using Qt Creator on Windows 7 with the MSVC compiler.
Cheers
You cannot just freeze a computer with a single instruction. If you allocate a lot of memory, it will either crash the program or be served from virtual memory without actually consuming physical memory.
Thus, if you debug it further, maybe in smaller steps, I am sure you will find your solution.
There are many debugging tools that you can try, depending on your working environment. Assuming you are working under Linux, the simplest one is the command-line debugger gdb, which lets you execute code line by line. A more advanced tool, tailored specifically to memory problems, is valgrind.
In the comments you ask whether there is a way for the OS to artificially limit the memory available to a program/process. You can start with this question:
https://unix.stackexchange.com/questions/44985/limit-memory-usage-for-a-single-linux-process
however, given the little information you provided, I am not convinced it will solve your problem.
If you have global variables that allocate memory immediately, i.e. before the first line of main() is reached (for instance in class constructors), consider placing your breakpoints not on the first line of main() but on those constructors. Just a hint based on a similar previous experience...
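A contrived sketch of what that looks like (the class name and the allocation size are made up for the illustration):

```cpp
// Sketch: a global object whose constructor allocates long before main() runs.
#include <vector>

struct Cache {
    std::vector<char> data;
    Cache() : data(50u * 1024u * 1024u) {}   // 50 MB grabbed during static initialization
};

Cache g_cache;   // constructed before main() is entered

int main() {
    // A breakpoint on this line is already too late to catch the allocation
    // above; put one on Cache::Cache() instead.
    return 0;
}
```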
How can I give a process from the Task Manager (like notepad.exe) as an input process to my Banker's Algorithm (deadlock detection)?
It's going to be hard, and probably infeasible, to keep track of all the OS and external conditions needed to implement a real deadlock-prevention algorithm on a real application. Modern OSes (when we're not talking about RT-aware systems) prefer not to implement such algorithms because of their overwhelming complexity and cost.
In other words, in the worst case you can escape a Windows deadlock with a simple reboot, and given how infrequently this happens, it isn't deemed a huge problem in the desktop OS market.
So I recommend writing a simple test case with a dummy application (a minimal sketch follows the list) that will either:
Serve your purpose
Allow you to know exactly what's being used by your application and let you manage the complexity
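A minimal, self-contained sketch of such a dummy test case, using the usual textbook-style matrices (all numbers below are invented example data, not measurements of real processes):

```cpp
// Sketch: Banker's algorithm safety check on a dummy workload.
#include <cstddef>
#include <iostream>
#include <vector>

// Returns true if the state (allocation, maximum demand, available resources)
// admits a safe sequence, i.e. every process can run to completion in some order.
bool isSafe(const std::vector<std::vector<int>>& alloc,
            const std::vector<std::vector<int>>& max,
            std::vector<int> avail)
{
    const std::size_t n = alloc.size();   // number of processes
    const std::size_t m = avail.size();   // number of resource types
    std::vector<bool> finished(n, false);
    std::size_t done = 0;

    while (done < n) {
        bool progressed = false;
        for (std::size_t p = 0; p < n; ++p) {
            if (finished[p]) continue;
            bool canRun = true;
            for (std::size_t r = 0; r < m; ++r) {
                if (max[p][r] - alloc[p][r] > avail[r]) { canRun = false; break; }
            }
            if (canRun) {
                // This process can finish and release everything it holds.
                for (std::size_t r = 0; r < m; ++r) avail[r] += alloc[p][r];
                finished[p] = true;
                ++done;
                progressed = true;
            }
        }
        if (!progressed) return false;    // no process can proceed: unsafe state
    }
    return true;
}

int main() {
    // Dummy "processes" standing in for real applications.
    std::vector<std::vector<int>> alloc = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};
    std::vector<std::vector<int>> max   = {{7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3}};
    std::vector<int> avail = {3, 3, 2};

    std::cout << (isSafe(alloc, max, avail) ? "safe" : "unsafe") << '\n';
    return 0;
}
```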
As a side note: applications like notepad.exe are not real-time processes, even if you give them "Real Time" priority in the Windows Task Manager (and not even soft real-time). Real real-time processes have time constraints (i.e. deadlines) that they MUST observe. This isn't true in any desktop OS, since they're built with a different concept in mind (time sharing). Linux has some RT patches (e.g. Xenomai) to turn the kernel's scheduler into a real real-time one, but I'm not aware of the current status of that work.
I need some unbiased views from experts. I bought BobCAD a couple of months ago. It ran fine while I was evaluating it and also after installation. Now, after some use, it has started crashing with multiple "null pointer" exceptions when closing the simulation mode.
Tech support is telling me that it is the graphics card that behaves (I quote) "unpredictable". They say an integrated graphics card is only good for word processing and internet browsing.
However, BobCAD ran fine before, and I can play games, use other CAD software, and run other applications on my computer without crashing it, so I have a hard time believing this. BobCAD does not use a lot of resources, contrary to what they claim: there is no lagging and no sign that my computer is being pushed to the limit of what it is capable of.
From what I know, you do not program the graphics card directly anymore, and certainly not in a CAM application, so those problems with graphics cards should be gone.
From what I can see, BobCAD is a WPF application, presumably written in C++.
Please tell me, are they right? Is my suspicion that they are not very competent wrong?
Help me out with your experiences.
Best Regards
Leo
An expensive dedicated graphics card is usually better than an integrated one, but that doesn't mean integrated ones can't do any real work.
Graphics cards are still programmed directly, even today (usage is even rising). But probably not in a WPF application...
Anyway, none of that is an excuse for null pointer exceptions reaching the user. That's simply a programming error, no matter what your graphics card is capable of. If the program said "the graphics card is too weak", that would be one thing, but crashing is unacceptable.
(And incompetent support people are nothing unusual, sadly.)
I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory and more processor time. I like the idea of the Chaos Monkey for figuring out whether your system is fault tolerant; however, with limited resources I'm not sure how feasible it is to just keep adding code. Are there certain design considerations, whether in the architecture or otherwise, that would improve the fault-handling capabilities without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) from accidentally assigning rather than comparing against a constant value is to write the constant on the left-hand side of the comparison. That way, if you mistype an assignment where you meant a comparison with, say, the number 5, the compiler will complain and you're likely to find the issue right away (a small sketch follows).
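A small sketch of that idiom (the variable name is made up for the example; the commented-out lines show what the idiom catches):

```cpp
// Sketch: the "constant on the left" (Yoda condition) idiom.
#include <iostream>

int main() {
    int mode = 3;

    // if (mode = 5) { ... }   // typo: assigns 5 to mode and the condition is always true
    // if (5 = mode) { ... }   // same typo, but now a hard compile error: can't assign to a literal

    if (5 == mode) {           // intended comparison, written Yoda-style
        std::cout << "mode is 5\n";
    } else {
        std::cout << "mode is " << mode << '\n';
    }
    return 0;
}
```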
Are there architectural or design decisions that can be made early on to prevent possible redundancy/reliability issues in a similar way?
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon