Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
After learning about C++ from a couple of different sources, I found conflicting advice concerning the use of cout versus printf(). One source said of printf(), and I quote:
... does not provide type safety, so it is easy to inadvertently tell it to display an integer as if it were a character and vice versa. printf() also does not support classes, and so it is not possible to teach it how to print your class data; you must feed each class member to printf() one by one.
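For instance, here is a minimal sketch of the type-safety point (my own illustration, not code from that source):

```cpp
#include <cstdio>
#include <iostream>

int main() {
    double price = 2.5;

    // printf: the format string must match the argument types. "%d" expects
    // an int, so passing a double is undefined behavior, yet it still compiles
    // (most compilers only issue a warning).
    std::printf("%d\n", price);

    // cout: operator<< is overloaded per type, so the correct formatting
    // is chosen at compile time.
    std::cout << price << '\n';   // prints 2.5
}
```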
So, the more important factor to me would be the readability of using printf(). On the other hand, another source mentioned that cout, with its use of the overloaded operator <<, takes more instructions to execute and can therefore be more expensive in terms of memory over a large program. That said, the person who made this claim was a systems programmer, a field where every bit of performance is vital. But say I wanted to go into game or application development.
Would the performance differences between printf() and cout matter all that much?
In general, does it really matter what I choose to use in an application program?
Thank you for any input.
You would measure the differences on your particular implementation for your specific use case and determine that for yourself.
I would say both lines of reasoning in the question have merit, but you can't generalise about performance.
If you want to go into game or application programming, you won't need printf/cout much; in those cases their main use is for debugging.
If you really do need to use printf/cout a lot, the difference will only show up when writing a huge amount of data; otherwise you don't need to bother.
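If you do want to check this for your own workload, a rough sketch of such a measurement (my own illustration; run it with output redirected to a file so terminal rendering doesn't dominate the timing) could look like this:

```cpp
#include <chrono>
#include <cstdio>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int kLines = 1000000;

    auto t0 = clock::now();
    for (int i = 0; i < kLines; ++i)
        std::printf("line %d\n", i);

    auto t1 = clock::now();
    for (int i = 0; i < kLines; ++i)
        std::cout << "line " << i << '\n';

    auto t2 = clock::now();
    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    // Report on stderr so the timing output doesn't mix with the test data.
    std::cerr << "printf: " << ms(t1 - t0) << " ms, cout: " << ms(t2 - t1) << " ms\n";
}
```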
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have two functions performing the same task with different techniques, and I need to know, at a large scale, which technique is faster than the other (and maybe in the future there will be more techniques available). So my question is: how can I do that, specifically in C++? Is there a particular method or header to be used to perform this task?
More details:
For example, isLargest() takes three parameters and has two versions: one uses nested if statements and the other uses initializers and fewer if statements. So if I need to know which one is faster, how can I do that?
Try your code in the real world and measure.
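As a crude first step, you can time the two versions yourself with std::chrono. The isLargest variants below are hypothetical stand-ins, since the question doesn't show the real code:

```cpp
#include <chrono>
#include <iostream>

// Hypothetical stand-ins for the two versions mentioned in the question.
bool isLargestV1(int a, int b, int c) { if (a > b) { if (a > c) return true; } return false; }
bool isLargestV2(int a, int b, int c) { return a > b && a > c; }

template <typename F>
long long timeIt(F f) {
    auto start = std::chrono::steady_clock::now();
    volatile bool sink = false;   // discourage the compiler from dropping the calls
    for (int i = 0; i < 10000000; ++i)
        sink = f(i % 7, i % 5, i % 3);
    (void)sink;
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}

int main() {
    std::cout << "v1: " << timeIt(isLargestV1) << " us\n"
              << "v2: " << timeIt(isLargestV2) << " us\n";
}
```

Be aware that measurements like this are easily distorted by optimization and by unrealistic input data, which is exactly why the profiler and benchmark-library suggestions below are the better long-term answer.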
There is a tool called a profiler that is meant to solve this problem. Broadly speaking, there are two kinds (note: some are a mix between the two):
Sampling profilers.
Instrumenting profilers.
It's worth learning about what each does and their pros/cons, but if you don't know what to use go with a sampling profiler.
There are many sampling profilers, but support depends on your platform. If you're on Windows, Visual Studio comes with a really nice sampling profiler and I recommend you start there!
If you go down this route, it's important to make sure you use your functions as you would "for real" when you're profiling them, as there are many subtle factors that can affect the result.
An alternative
If you don't want to try your code running in a real program, perhaps if you're just trying to understand general characteristics of the function, there are libraries to help you do this such as Google Benchmark.
Benchmarking code can be surprisingly difficult to get right, so I would strongly recommend using existing benchmarking tools like Google Benchmark wherever possible.
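For completeness, a minimal Google Benchmark sketch for this kind of comparison might look like the following (the isLargest variants are hypothetical stand-ins, since the question doesn't include them):

```cpp
#include <benchmark/benchmark.h>

// Hypothetical stand-ins for the two versions being compared.
static bool isLargestV1(int a, int b, int c) { if (a > b) { if (a > c) return true; } return false; }
static bool isLargestV2(int a, int b, int c) { return a > b && a > c; }

static void BM_IsLargestV1(benchmark::State& state) {
    for (auto _ : state)
        benchmark::DoNotOptimize(isLargestV1(3, 1, 2));  // keep the call from being optimized away
}
BENCHMARK(BM_IsLargestV1);

static void BM_IsLargestV2(benchmark::State& state) {
    for (auto _ : state)
        benchmark::DoNotOptimize(isLargestV2(3, 1, 2));
}
BENCHMARK(BM_IsLargestV2);

BENCHMARK_MAIN();
```

The library handles iteration counts, warm-up, and statistical reporting for you, which is most of what is hard to get right by hand.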
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I stumbled across polymorphic engines and I don't know anything about them. However, I am curious about how they are written. Every example that I've looked up writes them in assembly, and my assembly is not good at all; I know just a few instructions here and there, but not that well. On the other hand, I am good with C and C++.
I am familiar with the concept of polymorphism in C++ but after reading about polymorphic engines, I am assuming that they are different from the polymorphism in C++.
How can techniques such as using the virtual keyword in C++ be used to obfuscate or encrypt the code in an application?
If a program has to be modified, you can go one of two ways: modify the source code or modify the compiled executable.
The first approach is awful (in my opinion) because:
A source file is subject to a lot of optimizations during the compilation process, so two source files that differ slightly from each other could produce the same object code.
If you need your program to be self-modifying, you will have to carry along all the tools needed to build it. (Something like carrying a candy factory with you on a trip, just in case you want a candy of a different flavor.)
...
Notice that I'm talking here about compiled languages, as the use of C or C++ in your question suggests. For interpreted languages, the first approach is the obvious one.
In your case, the second approach makes more sense, but it is tied strictly to the machine code of the target machine.
So my point is: if you want to implement a program or routine that can produce a modified version of another program, or a modified version of itself, you can implement it in assembly, C, C++, or any other language, but in all cases you have to be proficient in your target machine's assembly language and machine code.
I recommend you research this more; the topic is broad. If you do decide to go on, I can say that assembly won't be the biggest dragon to beat.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
So, in a data-structures class (in C++), my professor wanted us to manipulate a linked list of playing cards. That doesn't matter much, however; what I am interested in is why she did not use an enumeration to represent the suits.
In her code, she used strings to hold a card's suit. This seemed inefficient because she wanted us to sort the cards by suit, and under the circumstances it would have been considerably easier if she had used an enumerated type instead of a string. The string did not offer any help in printing either, because she output a Unicode character for the suit, roughly doubling the length of her function, simply because she did not use an enum.
Is there any reason for her to have done this, or does she simply have strange preferences when it comes to code style?
If you really want to know what your professor's reasoning is, you have to ask your professor. I can only speculate.
But if I were to speculate, I would guess that there are two possible reasons why your professor chose to use strings as descriptors for these attributes.
She is trying to keep the code simple and easy for newbie C++ programmers to understand. Whether the means meet the goal is debatable.
(Personal bias alert) Professors and others in academia, with no real-world experience, often do and teach things that I would consider to be highly sub-optimal.
My guess would be that she either had not considered that approach or that she wanted to test your ability to work with sorting strings.
Code examples might help in that they might clarify what she did and what you think she should have done.
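To make that concrete, here is a sketch (my own illustration, not your professor's code) of the enum-based approach you seem to have in mind; sorting becomes a plain integer comparison and printing is a small mapping function:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Suits as an enum: ordering for sorting is just the underlying integer order.
enum class Suit { Clubs, Diamonds, Hearts, Spades };

struct Card {
    Suit suit;
    int  rank;
};

// Printing maps each enum value to its Unicode symbol in one place.
const char* suitSymbol(Suit s) {
    switch (s) {
        case Suit::Clubs:    return "\u2663";
        case Suit::Diamonds: return "\u2666";
        case Suit::Hearts:   return "\u2665";
        case Suit::Spades:   return "\u2660";
    }
    return "?";
}

int main() {
    std::vector<Card> hand{{Suit::Hearts, 10}, {Suit::Clubs, 4}, {Suit::Spades, 1}};
    std::sort(hand.begin(), hand.end(),
              [](const Card& a, const Card& b) { return a.suit < b.suit; });
    for (const auto& c : hand)
        std::cout << suitSymbol(c.suit) << c.rank << ' ';
    std::cout << '\n';
}
```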
The likely answer is that she just didn't think about the problem she was using to demonstrate whatever she is trying to teach you. That is, she wanted you to focus on sorting (for example), and probably took some code written by someone else and just adapted it to that purpose without much thought.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am supposed to build a program for storing and handling huge integers. I know that there are many answers out there, but I need ideas that I can implement easily, bearing in mind that I can use any of the basic concepts of C/C++.
How should I go about it?
This is the first time I am asking a question here so please correct me if I am wrong about anything.
Edit: Actually, what I wanted to know was how I should go about storing a huge integer... Obviously an array is what comes to mind at first glance, but are there any other methods out there at a basic level?
EDIT 2: I came across a very nice solution to this problem a while ago, but was a bit lazy about posting it here. We can use the concept of number systems to deal with huge numbers: declare an array that holds the coefficients of powers of 256, thus obtaining a base-256 system. We can then use the fundamental arithmetic of positional number systems to obtain the required results.
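A minimal sketch of that base-256 idea (illustration only; a real implementation would need the other operations too): each element is one "digit" in base 256, least significant first, and addition carries between digits exactly as in ordinary long addition.

```cpp
#include <cstdint>
#include <vector>

// A huge unsigned integer stored as base-256 digits, least significant first.
using BigNum = std::vector<std::uint8_t>;

BigNum add(const BigNum& a, const BigNum& b) {
    BigNum result;
    unsigned carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry != 0; ++i) {
        unsigned sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        result.push_back(static_cast<std::uint8_t>(sum & 0xFF));  // digit in base 256
        carry = sum >> 8;                                         // carry into the next digit
    }
    return result;
}
```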
Matt McCutchen has a Big Integer Library
If you want to do this yourself, his code would be a great starting point. Since you can overload arithmetic operators in C++, it is not too difficult to write a new BigInteger class and have it handle any number of bits per integer.
There is also a Stack Overflow answer to this question: here
I consider this a question about theory. As such, I suggest browsing the internet using the right keywords to find documents and articles, or taking a peek at well-tested libraries that implement this feature. Such projects also tend to offer a mailing list or a forum where developers communicate, which can be a good place to start discussing this stuff.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I work on embedded systems with limited memory and throughput. Making a system more robust requires more memory and more processor time. I like the idea of the Chaos Monkey for figuring out whether your system is fault tolerant; however, with limited resources I'm not sure how feasible it is to just keep adding code. Are there certain design considerations, whether in the architecture or otherwise, that would improve the fault-handling capabilities without necessarily adding "more code" to a system?
One technique I have seen for preventing an if statement in C (or C++) from accidentally assigning rather than comparing against a constant value is to write the constant on the left-hand side of the comparison. That way, if you mistype the comparison against, say, the number 5 as an assignment, the compiler will complain and you're likely to find the issue right away.
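For example, a minimal sketch of that idea:

```cpp
#include <iostream>

int main() {
    int mode = 3;

    // Typo: '=' instead of '=='. This compiles (usually with only a warning),
    // silently assigns 5 to mode, and the condition is always true.
    if (mode = 5)
        std::cout << "oops, mode is now " << mode << '\n';

    // Constant on the left: the same typo (5 = mode) is a compile error,
    // so the mistake is caught immediately.
    if (5 == mode)
        std::cout << "mode is 5\n";
}
```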
Are there architectural or design decisions that can be made early on to prevent redundancy/reliability issues in a similar way?
Yes, many other techniques can be used. You'd do well to purchase and read "Code Complete".
Code Complete on Amazon