Scientific application suddenly slowed down on Linux [closed] - c++

I do scientific computing using C++ with a couple of basic Fortran routines on a Xubuntu 12.10 distribution. Things have been running well for years. All of a sudden today, while working on my code, the time to complete each iteration jumped drastically halfway through a run. Figuring I had made a mistake, I reverted to an old git version and remade the whole thing, but still had the same issue. I've run the code on other computers and the time per iteration remains constant. What could possibly be the issue?

Best guess: your CPU is overheating. When that happens, the processor throttles itself to prevent damage. Your code itself is likely what makes the temperature spike, so by the time you are "half way through a run" the CPU is warm enough that it decides it needs to slow down.
Check that your case fans, CPU fan, and any other cooling on your machine are working correctly. Simply turning the machine off for a while to let it cool down, then rebooting, may resolve the issue.
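If you want to confirm the throttling theory while the program is running, a quick diagnostic along these lines can help. This is only a sketch: the exact sysfs paths (thermal_zone0, cpu0/cpufreq) vary between machines and kernel versions, so adjust them to whatever your system exposes.

```cpp
// Log the CPU temperature and current frequency during a run to see whether
// thermal throttling coincides with the slowdown. The sysfs paths are the
// common defaults on desktop Linux kernels, but they are not guaranteed.
#include <fstream>
#include <iostream>
#include <string>

static long read_long(const std::string& path) {
    std::ifstream f(path);
    long value = 0;
    if (!(f >> value)) return -1;   // file missing or unreadable
    return value;
}

int main() {
    long temp_milli_c = read_long("/sys/class/thermal/thermal_zone0/temp");
    long freq_khz = read_long("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");

    std::cout << "CPU temperature: " << temp_milli_c / 1000.0 << " C\n"
              << "cpu0 frequency:  " << freq_khz / 1000.0 << " MHz\n";
}
```

Run it periodically (or put the reads inside your iteration loop) and watch whether the reported frequency drops as the temperature climbs.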

Related

Running the same code, are old computers as fast as new ones? [closed]

I've been learning about parallel programming in C++ and came across materials from a university. In the lecture they state:
"With old code, a computer from 2021 is not any faster than a computer from 2000. In this course, we will learn how to write new code that is designed with modern computers in mind." LINK
While waiting for an answer from the professor, I thought I would post my questions here.
Is that true? And under what conditions? What do they mean by old code? Sequential code?
In the lecture they talk about the clock speed of the CPU and mention that it hasn't changed since the 2000s. Is that enough to say that an old computer is as fast as a new one?
Is that true? And under what conditions?
It is true that the clock speed of processors hasn't increased since roughly 2005 (and even went down in the meantime).
That isn't to say that single-core wall-clock performance hasn't improved. Even well before then, it stopped being the case that each instruction took a single clock cycle to process: there is a pipeline of instructions, and multiple calculations are "in flight" at a time. A recent processor takes fewer cycles to execute the same instruction stream than an older one does.
Memory has also become faster, and processors have more on-die cache. Programs and data that didn't fit in the caches of a Pentium 4 might fit in those of a current-generation Core, and when they do have to fetch from RAM they wait less.
What is also true is that the major improvement in processors since that era is the multiplication of cores on a single die, and using that performance is not as simple as waiting for next year's faster processor.
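To illustrate that last point, here is a rough sketch of what "new code written with modern computers in mind" can look like: the same summation split across however many hardware threads the machine reports, using only standard C++ threads (C++11 and later). The function name and chunking scheme are just illustrative choices.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Splits the summation of `data` across all reported hardware threads.
// A purely sequential version of this loop would keep one core busy and
// leave the rest of a modern CPU idle.
long long parallel_sum(const std::vector<int>& data) {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(n_threads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / n_threads;

    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&partial, &data, t, begin, end] {
            // Each worker sums its own slice into its own slot, so no
            // locking is needed.
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(10000000, 1);
    std::cout << parallel_sum(data) << '\n';   // prints 10000000
}
```

On a 2000-era single-core machine the parallel version buys you nothing; on a modern multi-core machine it is the difference the lecture is pointing at.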

Reduce the flash memory used by an embedded C++ program [closed]

After a lot of research I could not find any solution to my question (if I had, I wouldn't be here...).
I'm looking for ways to reduce the flash memory used by my program.
I'm writing an embedded C++ program. When I flash my board in release mode everything is fine, because the image doesn't overflow the flash memory, but that is not the case when I build in debug mode... I want to know whether there are techniques (my goal is to do it without cutting code) that could reduce flash usage. I thought about defragmentation, but I couldn't find how to do that on an embedded target, and I don't even know whether it's possible... I also tried gcc's -Os option, but without any big success.
So I'll take any advice or support, and I'll be around to answer any questions about my issue ;)
Thanks!
Look at your map file. Is there anything there you don't expect? Functions you weren't expecting (floating-point support, exception handling, etc.) or something unreasonably large? (A short sketch of two common culprits follows these suggestions.)
Turn on optimization except for the file you're interested in.
Make sure you've actually got optimizations turned on (look at the build log and ensure that you've got -Os being passed to each compile step)
Consider link time optimizations, but don't expect miracles
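As an illustration of the first point (unexpected floating-point or I/O code showing up in the map file), here is a rough sketch of keeping formatted output lean on a small target. The uart_write_byte and report_temperature names are hypothetical stand-ins for whatever your board support package provides; the stub below writes to stdout so the example also builds on a host.

```cpp
// Print a sensor value without pulling in the C++ iostream machinery,
// which can add tens of kilobytes of flash on some toolchains.
#include <cstdint>
#include <cstdio>   // snprintf typically links far less code than <iostream>

// Placeholder for the board support package's UART routine; here it just
// writes to stdout so the sketch also runs on a host machine.
static void uart_write_byte(uint8_t b) {
    std::putchar(b);
}

static void uart_write_str(const char* s) {
    while (*s != '\0') {
        uart_write_byte(static_cast<uint8_t>(*s++));
    }
}

// Report the reading in fixed point (milli-degrees) instead of float, which
// on an FPU-less core would drag in the soft-float support routines.
void report_temperature(int32_t milli_celsius) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "T=%ld.%03ld C\r\n",
                  static_cast<long>(milli_celsius / 1000),
                  static_cast<long>(milli_celsius % 1000));
    uart_write_str(buf);
}

int main() {
    report_temperature(23450);   // prints "T=23.450 C"
}
```

Comparing the map file before and after a change like this shows exactly which support routines disappeared.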
Welcome to embedded programming. 90% of the job is figuring out how to stuff the never ending requirements into the memory available. Rinse and repeat.

Banker's Algorithm with a real-time process [closed]

How can I give a process from Task Manager (like notepad.exe) as input to my Banker's Algorithm (deadlock detection) implementation?
It's going to be hard, and probably infeasible, to keep track of all the OS and external conditions needed to implement a real deadlock-prevention algorithm for a real application. Modern OSes (when we're not talking about RT-aware systems) prefer not to implement such algorithms because of their overwhelming complexity and cost.
In other words, you can recover from a Windows deadlock, in the worst case, with a simple reboot. And given how often that actually happens, it isn't deemed a huge problem in the desktop OS market.
Thus I recommend writing a simple test case with a dummy application (a minimal sketch follows this list) that will both
Serve your purpose
Allow you to know exactly what's being used by your application, so you can manage the complexity
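As a concrete starting point for such a dummy test case, here is a minimal sketch of the Banker's safety check itself, run on hard-coded textbook matrices rather than on data pulled from Task Manager. All of the numbers are illustrative.

```cpp
#include <iostream>
#include <vector>

// Banker's Algorithm safety check: returns true if there is some order in
// which every process can acquire its remaining need and finish.
bool is_safe(const std::vector<int>& available,
             const std::vector<std::vector<int>>& allocation,
             const std::vector<std::vector<int>>& need) {
    const std::size_t n = allocation.size();   // number of processes
    const std::size_t m = available.size();    // number of resource types
    std::vector<int> work = available;
    std::vector<bool> finished(n, false);

    for (std::size_t done = 0; done < n; ) {
        bool progressed = false;
        for (std::size_t i = 0; i < n; ++i) {
            if (finished[i]) continue;
            bool can_run = true;
            for (std::size_t j = 0; j < m; ++j)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                // Process i finishes and releases everything it held.
                for (std::size_t j = 0; j < m; ++j) work[j] += allocation[i][j];
                finished[i] = true;
                ++done;
                progressed = true;
            }
        }
        if (!progressed) return false;   // no process can finish: unsafe state
    }
    return true;
}

int main() {
    // Classic textbook example: 5 dummy processes, 3 resource types.
    std::vector<int> available = {3, 3, 2};
    std::vector<std::vector<int>> allocation = {
        {0, 1, 0}, {2, 0, 0}, {3, 0, 2}, {2, 1, 1}, {0, 0, 2}};
    std::vector<std::vector<int>> maximum = {
        {7, 5, 3}, {3, 2, 2}, {9, 0, 2}, {2, 2, 2}, {4, 3, 3}};

    std::vector<std::vector<int>> need(allocation.size(),
                                       std::vector<int>(available.size()));
    for (std::size_t i = 0; i < allocation.size(); ++i)
        for (std::size_t j = 0; j < available.size(); ++j)
            need[i][j] = maximum[i][j] - allocation[i][j];

    std::cout << (is_safe(available, allocation, need) ? "safe" : "unsafe") << '\n';
}
```

Once this works on made-up matrices, you can decide what "resources" and "maximum demand" would even mean for a real process like notepad.exe, which is the genuinely hard part.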
As a side note: applications like notepad.exe are not real-time processes, even if you give them "Realtime" priority in the Windows Task Manager (they aren't even soft real-time). Real real-time processes have time constraints (i.e. deadlines) that they MUST meet. That isn't true in any desktop OS, since those are built with a different concept in mind (time sharing). Linux has some real-time extensions (e.g. Xenomai) that make the kernel's scheduling truly real-time, but I'm not aware of the current status of that work.

How do threads get handled? [closed]

Am I right in thinking that if you have, for example, four CPUs and four threads, the threads get distributed one per CPU? And when you have five threads, one CPU has to handle two threads at once?
Thanks in advance :)
The only guarantee you get is that the threads will run independently of each other. Scheduling is done by the operating system and the OS usually tries to make all cores busy, but since there is a lot more running on your computer than just your program, there's no guarantee that your four threads will always run on the four cores.
On Windows, you can pin threads to a processor core, but this is neither standard nor cross-platform, and not always to the benefit of your program.
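For a concrete picture, here is a minimal sketch that launches one worker per reported hardware thread using standard C++ threads. Nothing in it guarantees a one-to-one thread-to-core mapping; placement stays entirely up to the OS scheduler.

```cpp
#include <iostream>
#include <string>
#include <thread>
#include <vector>

int main() {
    // May report 0 if the implementation cannot determine the count.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        workers.emplace_back([i] {
            // Each worker runs independently; which core it lands on (and
            // whether it shares that core) is decided by the OS scheduler.
            std::string msg = "worker " + std::to_string(i) + " running\n";
            std::cout << msg;   // single insertion, so lines rarely interleave
        });
    }
    for (auto& t : workers) t.join();
}
```

Run it a few times under a system monitor and you will see the threads migrate between cores as the scheduler balances everything else that is running.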

Considering the Chaos Monkey in Designing and Architecting an Embedded System [closed]

I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory and more processor time. I like the idea of the Chaos Monkey for figuring out whether your system is fault-tolerant, but with limited resources I'm not sure how feasible it is to just keep adding code. Are there certain design considerations, whether in the architecture or otherwise, that would improve the fault-handling capabilities without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) from assigning rather than comparing against a constant value is to write the constant on the left-hand side of the comparison; that way, if you accidentally write an assignment to, say, the number 5, the compiler will complain and you'll find the issue right away (see the sketch after this question).
Are there architectural or design decisions that can be made early on that prevent possible redundancy/reliability issues in a similar way?
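For reference, a minimal sketch of the "constant on the left" comparison style mentioned in the question (the variable names are made up):

```cpp
#include <cstdio>

int main() {
    int mode = 3;

    if (5 == mode) {            // constant on the left: a comparison, as intended
        std::puts("mode is 5");
    }

    // if (5 = mode) { }        // typo is caught: "lvalue required as left operand"
    // if (mode = 5) { }        // the same typo the other way around compiles
                                // (at best with a warning) and silently sets mode to 5

    return 0;
}
```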
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon