C++ Core Guidelines Checker REALLY slow on small project

I'm using an A8-5600K, so I'm used to waiting a bit for CPU-heavy tasks to complete; nothing too bad, but as far as CPUs go it's not the greatest. I have 12 GB of RAM and am running the very latest Visual Studio 2017 (15.6.6, iirc) as of this morning.
I want to run static analysis simply because I believe it could save me time chasing edge cases and bugs while also making the code look nicer. I've got a relatively small project (single file, 5000 lines) which uses wxWidgets, OpenCV, the standard library, chrono, threads, the Windows API, and probably a few other things. I'm only running a single rule set (Style), as I figured it would be faster than running every check. It's been running for an hour, and in the meantime my CPU is more or less tapped out and I can't compile anything.
I presume that by this point the Checker has got through my code in no time and is spending ages trying to tell me how to fix OpenCV or something. I understand that, without knowing how OpenCV works, the Checker might have difficulty understanding some of what's going on in my code.
What can I do? Is the Checker simply not meant for consumer-grade CPUs? If I ran this overnight, would it finish? Does the number of rules have that much of an effect, or should I just have them all checked for (i.e. is it 95% crawling my project and 5% analysis)?
Also, the posts I've seen regarding third-party libraries seem content with simply filtering out the warnings post-check (the words on the blackboard are still written, merely erased afterwards, so to speak); this likely won't speed things up for me.
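For reference, this is the kind of post-check filtering those posts describe; the warning numbers below are only illustrative examples, and as far as I can tell this just hides the output rather than stopping the Checker from crawling the headers:

    // Suppress Core Guidelines warnings coming from third-party headers.
    // The C26xxx numbers are only examples; a real project would list whichever
    // warnings the headers actually trigger.
    #pragma warning(push)
    #pragma warning(disable : 26439 26451 26495)
    #include <wx/wx.h>
    #include <opencv2/opencv.hpp>
    #pragma warning(pop)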

Related

Tips for preventing Xcode 9.4 from Indexing indefinitely

For fun, I have started to develop games with Unreal and with that comes learning C++ and using an actual IDE. My past experience has been with web development, so something like Atom or Sublime text was all that was needed to get the job done.
Something that has been a nuisance is the indefinite indexing that can occur after builds in Xcode. I realize that this is a little out of my control, since it would require Apple to fix these issues. Maybe they will and maybe they won't, but until then I would like to spend more of my time coding and less time waiting for Xcode to restart.
For reference, the restart is done because the clang process (from my understanding, it is the compiler responsible for the indexing in Xcode) is eating up at least 95% of my CPU.
I would like to code and create game worlds more efficiently, and not have to deal with this indexing issue so much. Since I can't fix the issue, maybe there is a way to avoid it, and I was hoping that some insight could be shared in this regard. These are the two things I have noticed that can set it off:
If there is an error or a warning during the build, it can trigger the indexing to run indefinitely. I can fix the issue and re-run the build, and the indexing still continues indefinitely :(. If there are no issues or errors during the build, the indexing actually completes in a timely manner. I don't see any way to avoid this other than never producing errors or warnings (which, I can tell you, is not going to happen, because I will make errors).
The second trigger, which seems to be easier to avoid, is that if I do any clicking, button pushing, etc. in Xcode while it is building, this can also set off the indefinite indexing.
I have read several posts, forum discussions, etc. on this issue and tried several of the suggestions, e.g. removing the DerivedData folder from Xcode. It looks like you can even turn indexing off. That shuts down the auto-complete and refactoring features, which might in the end be worth it, since (Refactor -> Extract Function) hasn't exactly been kind to me either.
Any workflow suggestions on things to do and things NOT to do in this kind of scenario would be appreciated!
Long post, but I thought this could be good for anyone else in similar shoes, so I wanted to include details.
When this happened to me, I thought it might be because iCloud Drive was stalling for some reason (as I mentioned in my comment). I didn't really need the project to be synced, so I just moved the project directory outside of iCloud Drive, and the infinite indexing problem went away.
I'm not sure if you're using iCloud or not, but hopefully this answer helps someone anyway.

What is the performance analyzer in vs2012 doing differently?

I decided to try out the performance analyzer thingy in VS2012. To my surprise, the test code (way too big to post) runs about 15% faster under analysis than in the default release configuration, over a run of ~1 minute. What could be the reason for this? Is it using different compiler flags or something?
To elaborate a bit more on the code: it's a specialized spatial sorting algorithm (most similar to counting sort) that operates on relatively simple POD classes and is looped 10k times; IO time is excluded from the measurement.
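To give a rough idea of the shape of the measured code (a simplified sketch with made-up names, not the actual algorithm), the timing brackets only the sorting loop and leaves the IO outside it:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    struct Point { int cell; float x, y; };   // simple POD, stand-in for the real class

    // Counting-sort-style pass: bucket the points by cell index.
    void spatial_sort(const std::vector<Point>& in, std::vector<Point>& out,
                      std::vector<int>& counts, int cell_count) {
        counts.assign(cell_count + 1, 0);
        for (const Point& p : in) ++counts[p.cell + 1];
        for (int i = 0; i < cell_count; ++i) counts[i + 1] += counts[i];
        out.resize(in.size());
        for (const Point& p : in) out[counts[p.cell]++] = p;
    }

    int main() {
        std::vector<Point> data;   // loaded from disk beforehand, not timed
        std::vector<Point> sorted;
        std::vector<int> counts;

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < 10000; ++i)            // the 10k-iteration hot loop
            spatial_sort(data, sorted, counts, 1024);
        auto stop = std::chrono::steady_clock::now();

        std::printf("%lld ms\n", (long long)std::chrono::duration_cast<
                        std::chrono::milliseconds>(stop - start).count());
        // results are written out afterwards, also not timed
    }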
Well, I think I finally got a clue. When I run the program from the IDE, I think the thingy that allows for breakpoints in code (the attached debugger) is slowing it down; you can't break in the analyzer. To confirm, I ran the .exe directly instead, and sure enough it was even faster than under the analyzer (not by much, but still), probably because it didn't have the sampler poking at it. Mystery solved!

What is the best approach for coding in a slow compilation environment?

I used to code in C# in a TDD style: write or change a small chunk of code, recompile the whole solution in 10 seconds, re-run the tests, and repeat. Easy...
That development methodology worked very well for me for a few years, until last year, when I had to go back to C++ coding, and it really feels like my productivity has dramatically decreased since. C++ as a language is not the problem; I have quite a lot of C++ development experience... but it was a while ago.
My productivity is still OK for small projects, but it gets worse as the project size increases, and once compilation time hits 10+ minutes it gets really bad. And if I find an error, I have to start the compilation again, etc. That is just purely frustrating.
Thus I concluded that working in small chunks (as before) is not acceptable. Any recommendations on how I can get myself back into the long-gone habit of coding for an hour or so, reviewing the code manually (without relying on a fast C# compiler), and only recompiling/re-running unit tests once every couple of hours?
With C# and TDD it was very easy to write code in an evolutionary way: after a dozen iterations, whatever crap I started with ended up as good code, but that just does not work for me anymore (in a slow compilation environment).
Would really appreciate your input and recommendations.
P.S. Not sure how to tag the question; anyone is welcome to re-tag it appropriately.
Cheers.
I've found that recompiling and testing sort of pulls me out of the "zone", so in order to have the benefits of TDD, I commit fairly often into a git repository, and run a background process that checks out any new commit, runs the full test suite and annotates the commit object in git with the result. When I get around to it (usually in the evening), I then go back to the test results, fix any issues and "rewrite history", then re-run the tests on the new history. This way I don't have to interrupt my work even for the short times it takes to recompile (most of) my projects.
Sometimes you can avoid the long compile. Aside from improving the quality of your build files/process, you may be able to pick just a small thing to build. If the file you're working on is a .cpp file, just compile that one TU and unit-test it in isolation from the rest of the project. If it's a header (perhaps containing inline functions and templates), do the same with a small number of TUs that between them reference most of the functionality (if no such set of TUs exists, write unit tests for the header file and use those). This lets you quickly detect obvious stupid errors (like typos) that don't compile, and runs the subset of tests you believe to be relevant to the changes you're making. Once you have something that might vaguely work, do a proper build/test of the project to ensure you haven't broken anything you didn't realise was relevant.
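As a tiny illustration of that kind of isolated test (file and function names here are hypothetical), the test is just another small TU compiled and linked against only the file under change:

    // test_geometry.cpp -- built together with geometry.cpp alone, e.g.
    //   g++ geometry.cpp test_geometry.cpp -o test_geometry
    // so the rest of the project never has to compile.
    #include <cassert>
    #include "geometry.h"   // hypothetical header being changed

    int main() {
        // Exercise just the functionality being worked on.
        assert(clamp_to_box(5.0, 0.0, 10.0) == 5.0);
        assert(clamp_to_box(-1.0, 0.0, 10.0) == 0.0);
        assert(clamp_to_box(42.0, 0.0, 10.0) == 10.0);
        return 0;
    }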
Where a long compile/test cycle is unavoidable, I work on two things at once. For this to be efficient, one of them needs to be simple enough that it can just be dropped when the main task is ready to be resumed, and picked up again immediately when the main task's compile/test cycle is finished. This takes a bit of planning. And of course the secondary task has its own build/test cycle, so sometimes you want to work in separate checked-out copies of the source so that errors in one don't block the other.
The secondary task could for example be, "speed up the partial compilation time of the main task by reducing inter-component dependencies". Even so you may have hit a hard limit once it's taking 10 minutes just to link your program's executable, since splitting the thing into multiple dlls just as a development hack probably isn't a good idea. The key thing to avoid is for the secondary task to be, "hit SO", or this.
Since a simple change triggers a 10-minute recompilation, that means you have a bad build system. Your build should recompile only changed files and the files depending on the changed files.
Other than that, there are further techniques to speed up the build itself (for example, remove unneeded includes and, where possible, use a forward declaration instead of including a header; see the sketch below), but the speed-up from these matters less than what gets recompiled on a change.
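A small illustration of the include-versus-forward-declaration point (the class names are made up): if a header only uses a type by pointer or reference, a declaration is enough, and editing widget.h no longer forces everything that includes renderer.h to rebuild.

    // renderer.h
    #ifndef RENDERER_H
    #define RENDERER_H

    class Widget;                     // forward declaration instead of #include "widget.h"

    class Renderer {
    public:
        void draw(const Widget& w);   // pointer/reference use only needs the declaration
    private:
        Widget* current_ = nullptr;
    };

    #endif

    // renderer.cpp
    #include "renderer.h"
    #include "widget.h"               // the full definition is only needed here
    void Renderer::draw(const Widget& w) { /* ... */ }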
I don't see why you can't use TDD with C++. I used CppUnit back in 2001, so I assume it's still in place.
You don't say what IDE or build tool you're using, so I can't comment on how those affect your pace. But small, incremental compiles and running unit tests are both still possible.
Perhaps looking into Cruise Control, Team City, or another hands-off build and test process would be your cup of tea. You can just check in as fast as you can and let the automated build happen on another server.

C++ visual studio inline

When building projects in Visual Studio (I'm using 2008 SP1) there is an optimization option called "Enable link-time code generation". As far as I understand, this allows specific inlining techniques to be used, and that sounds pretty cool.
Still, using this option dramatically increases the size of the static libraries built. In my case it went from something like 40 MB to 250 MB, and obviously the build process can become REALLY slow if you have even 5-6 libraries that huge.
So my question is: is it worth it? Is the effect of link-time code generation measurable enough that I should leave it turned on and suffer the slooooooooooooow builds?
Thank you.
How are we supposed to know? You're the one suffering the slower link times.
If you can live with the slower builds, then it speeds up your code, which is good.
If you want faster builds, you lose out on optimizations, making your code run slower.
Is it worth it? That depends on you and nothing else. How patient are you? How long can you wait for a build?
It can significantly speed up your code though. If you need the speed, it is a very valuable optimization.
It's up to you. This is rather a subjective question. Here are a few things to go over to help you make that determination:
Benchmark the performance with and without this feature (a minimal sketch follows this list). Sometimes smaller code runs faster, sometimes more inlining wins. It's not always so clear-cut.
Is performance critical? Will your client reject your application at its current speed unless you find a way to improve things on that front?
How slow is acceptable in the build process? Do you have to keep this on while you yourself build it, or can you push it off to the test environment / continuous build machine?
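A minimal benchmarking sketch for the first point, assuming a hypothetical hot_path() standing in for the code whose speed you care about; build it twice, once with link-time code generation (/GL plus /LTCG) and once without, and compare the timings:

    #include <chrono>
    #include <cstdio>

    // Stand-in for the real work; replace with a call into the code you care about.
    double hot_path(int n) {
        double acc = 0;
        for (int i = 1; i <= n; ++i) acc += 1.0 / i;
        return acc;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        volatile double sink = 0;          // keep the optimizer from dropping the work

        auto start = clock::now();
        for (int i = 0; i < 1000; ++i)
            sink = sink + hot_path(100000);
        auto stop = clock::now();

        std::printf("%lld ms (sink=%f)\n",
            (long long)std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count(),
            (double)sink);
    }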
Personally, I'd go with whatever helped me develop faster and then worry about the optimizations later. Make sure that it does what it needs to do first.

Simplifying algorithm testing for researchers.

I work in a group that does a large mix of research development and full shipping code.
Half the time I develop processes that run on our real-time system (somewhere between soft real-time and hard real-time; medium real-time?).
The other half I write or optimize processes for our researchers who don't necessarily care about the code at all.
Currently I'm working on a process which I have to fork into two different branches.
There is a research version for one group, and a production version that will need to occasionally be merged with the research code to get the latest and greatest into production.
To test these processes you need to set up a semi-complicated testing environment that will send the data we analyze to the process at the correct times (it's a real-time system).
I was thinking about how I could make the:
Idea
Implement
Test
GOTO #1
cycle as easy, fast, and pain-free as possible for my colleagues.
One idea I had was to embed a scripting language inside these long-running processes, so that as the process runs they could tweak the actual algorithm and its parameters.
Off the bat I looked at embedding:
Lua (useful guide)
Python (useful guide)
These both seem doable and might actually fully solve the given problem.
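As a very rough sketch of what the Lua route could look like (the script name, parameter names, and processing loop are all made up, and a real version would need proper error handling), the long-running process just re-runs a small script each cycle and reads the resulting globals back out:

    extern "C" {
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>
    }
    #include <cstdio>

    struct Params { double gain = 1.0; int window = 64; };   // hypothetical tuning knobs

    // Re-run tuning.lua and pull the globals back out; on error, keep the old values.
    bool reload_params(lua_State* L, Params& p) {
        if (luaL_dofile(L, "tuning.lua") != 0) {
            std::fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
            return false;
        }
        lua_getglobal(L, "gain");
        if (lua_isnumber(L, -1)) p.gain = lua_tonumber(L, -1);
        lua_pop(L, 1);
        lua_getglobal(L, "window");
        if (lua_isnumber(L, -1)) p.window = (int)lua_tointeger(L, -1);
        lua_pop(L, 1);
        return true;
    }

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);
        Params p;
        for (;;) {                       // stand-in for the real processing loop
            reload_params(L, p);         // researchers edit tuning.lua while this runs
            // process_next_chunk(p);    // hypothetical: the actual algorithm step
            break;                       // sketch only
        }
        lua_close(L);
    }

Parameters get picked up on the next cycle without a recompile or redeploy; whether that is enough depends on how much of the algorithm itself (and not just its parameters) the researchers need to change on the fly.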
Any other bright ideas out there?
Recompiling after a 1-2 line change, redeploying to the test environment and restarting just sucks.
The system is fairly complicated and hopefully I explained it half decently.
If you can change enough of the program through a script to be useful, without a full recompile, maybe you should think about breaking the system up into smaller parts. You could have a "server" that handles data loading etc and then the client code that does the actual processing. Each time the system loads new data, it could check and see if the client code has been re-compiled and then use it if that's the case.
I think there would be a couple of advantages here, the largest of which would be that the whole system would be much less complex. Now you're working in one language instead of two. There is less chance of people messing things up when switching from Python or Lua mode to C++ mode in their heads. By embedding another language in the system you also run the risk of becoming dependent on it. If you use Python or Lua to tweak the program, those languages either become a dependency when it comes time to deploy, or you need to port things back to C++. And if you choose to port things to C++, there's another chance for bugs to crop up during the switch.
Embedding Lua is much easier than embedding Python.
Lua was designed from the start to be embedded; Python's embeddability was grafted on after the fact.
Lua is about 20x smaller and simpler than Python.
You don't say much about your build process, but building and testing can be simplified significantly by using a really powerful version of make. I use Andrew Hume's mk, but you would probably be even better off investing the time to master Glenn Fowler's nmake, which can add dependencies on the fly and eliminate the need for a separate configuration step. I don't ordinarily recommend nmake because it is somewhat complicated, but it is very clear that Fowler and his group have built into nmake solutions for lots of scaling and portability problems. For your particular situation, it may be worth the effort to master it.
Not sure I understand your system, but if the build and deployment are too complicated, maybe you could automate them? If deployment were completely automatic, would that solve the problem?
I don't understand how a scripting language would solve the problem. If you change your algorithm, you still need to restart the calculation from the beginning, don't you?
It kind of sounds like what you need is CruiseControl or something similar; every time you touch the baseline code, it rebuilds and reruns the tests.