A lot of C++ programs use the time() function, and as you probably know, on systems where time_t is a signed 32-bit integer this stops working after the year 2038, when the value overflows and turns negative. That could leave a lot of programs completely unusable, so I'm just wondering: what is the solution going to be, and is anybody worried about this? Is there actually an alternative out there right now?
Also, do you think this is going to be a major problem, or is it not really something to worry about?
One question is answerable:
Is there actually an alternative out there right now?
Yes, since C++11 the std::chrono library provides time types that are specified to be good for roughly 500 years. Since they're nicely encapsulated, it shouldn't be too difficult to extend their range, if anything recognisable as C++ is still in use by then.
On most modern platforms, time_t is 64 bits wide, so even with the C API the problem can be avoided, as long as you're careful to always store the results in time_t variables, not int or whatever.
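For illustration, a minimal sketch of the std::chrono approach (the 30-year offset is just an arbitrary example): keep the arithmetic in chrono types and only convert to time_t at the C-API boundary.

#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    // std::chrono time points are not tied to a 32-bit epoch counter.
    const auto now = std::chrono::system_clock::now();
    const auto later = now + std::chrono::hours(24) * 365 * 30;  // ~30 years ahead

    // Convert back to time_t only at the C-API boundary, and check its width.
    std::time_t t = std::chrono::system_clock::to_time_t(later);
    std::cout << "sizeof(time_t) = " << sizeof(std::time_t) << " bytes\n"
              << "time_t in ~30 years: " << t << '\n';
}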
The other questions are purely speculative. I suspect the problem will be similar to Y2K - most programs will already do the right thing; others can be easily changed; and there will be some ancient systems churning away long after the developers have retired, the compilers discontinued, and the source code lost.
The time function returns a time_t value, and the standard doesn't specify how big the time_t type must be. Implementations will probably just change the time_t typedef so that it is at least 64 bits in size; I believe this is already the case on most (or all) 64-bit machines. There is a chance this could break programs that depend on time_t being narrower than 64 bits, but such a dependency seems very unlikely for a type like time_t.
After learning about C++ through a couple of different sources, I found conflicting advice concerning the use of cout versus printf(). One source said of printf(), and I quote:
... does not provide type safety, so it is easy to inadvertently tell it to display an integer as if it were a character and vice versa. printf() also does not support classes, and so it is not possible to teach it how to print your class data; you must feed each class member to printf() one by one.
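For example (as I understand it), this is the kind of mismatch the quote means:

#include <cstdio>
#include <iostream>

int main() {
    int n = 65;
    // printf trusts the format string: "%c" tells it to display the
    // integer as a character, so this prints 'A', not 65.
    std::printf("%c\n", n);

    // cout picks the overload from the argument's type, so this prints 65.
    std::cout << n << '\n';
}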
So the more important thing to me would be the readability factor of using printf(). On the other hand, another source mentioned that cout, with its use of the overloaded operator <<, takes more instructions to execute and can therefore be more expensive, in code size and speed, over a large program. Granted, the person who said this was a systems programmer, for whom every bit of performance is vital. But say I wanted to go into game or application development.
Would the performance differences between printf() and cout matter all that much?
In general, does it really matter what I choose to use in an application program?
Thank you for any input.
You would measure the differences on your particular implementation for your specific use case and determine that for yourself.
I would say both lines of reasoning in the question have merit, but you can't generalise about performance.
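For example, a rough micro-benchmark along these lines (the iteration count is arbitrary, and you'd normally redirect the output to a file or /dev/null) would tell you more about your platform than any general claim:

#include <chrono>
#include <cstdio>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    const int N = 1000000;

    auto t0 = clock::now();
    for (int i = 0; i < N; ++i)
        std::printf("%d\n", i);
    auto t1 = clock::now();

    // Note: cout is synchronised with C stdio by default; calling
    // std::ios::sync_with_stdio(false) first often changes the result a lot.
    for (int i = 0; i < N; ++i)
        std::cout << i << '\n';
    auto t2 = clock::now();

    using ms = std::chrono::milliseconds;
    std::cerr << "printf: " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
              << "cout:   " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
}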
If you go into game or application programming, you won't need printf/cout very much anyway; in those domains their main use is debugging.
If you really do use printf/cout a lot, the difference will only show up when writing a huge amount of data; otherwise you don't need to bother.
Are there some low-level features of C++ that are growing extinct because of the increasing availability of computer resources like memory, disk space, CPU power, etc.?
Not per se a C++ feature (it's shared with C), but the register specifier doesn't do much any more. It used to be a recommendation to the compiler to keep some variable in a register, but it's not really useful these days. When I learned C, the chapter on loops was full of
for (register int i = 0; i < n; i++)
Compilers develop, but the language as such is likely to stay the same (at least old language standards will remain), because otherwise, old code will break.
The inline keyword no longer means "inline this function"; its remaining semantics concern multiple definitions of the same function across translation units (the final binary gets exactly ONE copy of the function, rather than several).
This is an effect of compilers becoming cleverer about when to inline: most modern compilers will, for example, inline any function that is static and called only once, regardless of size.
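A minimal sketch of what inline still buys you (the file name is just for illustration): the same definition can appear in every translation unit that includes the header, and the linker folds them into one copy.

// util.h -- hypothetical header included by several .cpp files
#ifndef UTIL_H
#define UTIL_H

// Without 'inline', including this header in two translation units
// would give a multiple-definition error at link time.
inline int square(int x) {
    return x * x;
}

#endif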
Obviously, with more hardware resources, the solution may change: if you can write something in Python and it's reasonably speedy, why write it in C or C++? 20 years ago, that project may not even have been possible with hand-written assembler...
Bitfields are often pointless nowadays. They typically save only a few bytes per object, but accessing them is slower, so if you're not running out of memory, your program is probably faster without them. (They make a lot of sense when they can prevent your program from swapping to disk; disk access is orders of magnitude slower than RAM.)
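For illustration, a minimal sketch of the trade-off (the exact sizes are implementation-defined):

#include <cstdint>
#include <iostream>

// Packed version: flags squeezed into a few bits each.
struct PackedFlags {
    std::uint8_t visible : 1;
    std::uint8_t dirty   : 1;
    std::uint8_t level   : 4;  // values 0..15
};

// Plain version: a whole byte per field, but direct loads and stores.
struct PlainFlags {
    std::uint8_t visible;
    std::uint8_t dirty;
    std::uint8_t level;
};

int main() {
    // The packed struct is smaller, but every bitfield access costs
    // extra mask/shift instructions.
    std::cout << sizeof(PackedFlags) << " vs " << sizeof(PlainFlags) << '\n';
}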
What is the best way to perform basic arbitrary-precision arithmetic in an arbitrary base, with the best performance?
I was thinking about switching to binary and then working with some inline assembly, but I actually need the best-performing way to do it, and I am not sure that this is it.
EDIT: I do not want to use any library except the standard C++ one.
The problem is that the "best" performance attainable with multiprecision numeric algorithms depends very strongly on the data you're working with (such as the average magnitude of the numbers you need to calculate with). Consider the discussion of algorithm selection used by GNU GMP as an example:
https://gmplib.org/manual/Algorithms.html
GNU GMP code is also used inside glibc (in particular, in the precise floating-point conversion code), so in a sense it is part of a "standard C" library.
Speaking from personal experience, it is extremely difficult to beat GMP's performance figures (in fact, it is rather difficult even to get within a factor of 2 of GMP's performance in the general case, so if performance is an absolute priority you may want to reconsider your design goals). Performance in multiprecision calculations does not depend strongly on implementation technique (you're not going to win anything by using assembly instead of something like Java here, if your numbers are reasonably long); the algorithmic complexities will necessarily dominate. In fact, it makes sense to start with the highest-level language available and optimize from there.
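In that spirit, here is a minimal sketch of schoolbook addition on little-endian 32-bit limbs, using only the standard library (the names are my own; a real implementation would also need subtraction, multiplication, normalisation, and so on):

#include <algorithm>
#include <cstdint>
#include <vector>

// Limbs are stored least-significant first, base 2^32.
using BigNum = std::vector<std::uint32_t>;

BigNum add(const BigNum& a, const BigNum& b) {
    BigNum result;
    std::uint64_t carry = 0;
    const std::size_t n = std::max(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i) {
        std::uint64_t sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        result.push_back(static_cast<std::uint32_t>(sum));  // low 32 bits
        carry = sum >> 32;                                  // propagated carry
    }
    if (carry) result.push_back(static_cast<std::uint32_t>(carry));
    return result;
}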
And just in case, you should definitely go through chapter 4, volume 2 of Knuth's TAoCP if you haven't done so already.
I know this is probably not the answer you're looking for, but it's longer than a comment.
So, in the process of taking a data structures class (in C++), my professor wanted us to manipulate a linked list of playing cards. That doesn't matter, however; what I am interested in is why she did not use an enumeration to represent the suits.
In her code, she used strings to hold the suit of a card. This seemed inefficient: she wanted us to sort the cards by suit, and under the circumstances it would have been considerably easier if she had used an enumerated type instead of a string. The string did not offer any help either, because to print the suit she output a Unicode character, roughly doubling the length of her function, simply because she did not use an enum.
Is there any reason for her to have done this, or does she simply have strange preferences when it comes to code style?
If you really want to know what your professor's reasoning is, you have to ask your professor. I can only speculate.
But if I were to speculate, I would guess that there are two possible reasons why your professor chose to use strings as descriptors for these attributes.
She is trying to keep the code simple and easy for newbie C++ programmers to understand. Whether the means meet the goal is debatable.
(Personal bias alert) Professors and others in academia, with no real-world experience, often do and teach things that I would consider to be highly sub-optimal.
My guess would be that she either had not considered that approach or that she wanted to test your ability to work with sorting strings.
Code examples might help in that they might clarify what she did and what you think she should have done.
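For instance, a hypothetical enum-based version of what you describe (names invented for illustration) might look like this:

#include <iostream>

enum class Suit { Clubs, Diamonds, Hearts, Spades };

// Printing becomes a simple lookup, and sorting can compare the
// underlying integer values directly.
const char* toSymbol(Suit s) {
    switch (s) {
        case Suit::Clubs:    return "\u2663";
        case Suit::Diamonds: return "\u2666";
        case Suit::Hearts:   return "\u2665";
        case Suit::Spades:   return "\u2660";
    }
    return "?";
}

int main() {
    std::cout << toSymbol(Suit::Hearts) << '\n';
    std::cout << (Suit::Clubs < Suit::Spades) << '\n';  // usable for sorting
}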
The likely answer is that she just didn't think about the problem she was using to demonstrate whatever she is trying to teach you. That is, she wanted you to focus on sorting (for example), and probably took some code written by someone else and just adapted it to that purpose without much thought.
I am supposed to build a program for storing and handling huge integers. I know that there are many answers out there but I need ideas that I can implement easily, bearing in mind that I can use any of the basic concepts of C/C++.
How should I go about it?
This is the first time I am asking a question here so please correct me if I am wrong about anything.
Edit: Actually, what I wanted to know was how I should go about storing a huge integer... Obviously an array is what comes to mind at first glance, but are there any other methods out there at the basic level?
EDIT 2: I came across a very nice solution to this problem a while ago, but was just a bit lazy about putting it on here. We can use the concept of number systems to deal with huge numbers: declare an array that holds the coefficients of powers of 256, thus obtaining a base-256 system, and then apply the fundamental rules of positional number systems to obtain the required results.
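A minimal sketch of that base-256 idea (the names are illustrative): store the digits least-significant first, and build the number from its decimal representation with a repeated multiply-by-ten-and-add step.

#include <cstdint>
#include <string>
#include <vector>

// Digits are coefficients of powers of 256, least significant first.
using Digits = std::vector<std::uint8_t>;

// number = number * 10 + d, carried out digit by digit in base 256.
void mulTenAdd(Digits& number, unsigned d) {
    unsigned carry = d;
    for (std::uint8_t& digit : number) {
        unsigned v = digit * 10u + carry;
        digit = static_cast<std::uint8_t>(v & 0xFF);  // keep the low 8 bits
        carry = v >> 8;                               // propagate the rest
    }
    while (carry) {
        number.push_back(static_cast<std::uint8_t>(carry & 0xFF));
        carry >>= 8;
    }
}

Digits fromDecimal(const std::string& s) {
    Digits number;
    for (char c : s)
        mulTenAdd(number, static_cast<unsigned>(c - '0'));
    return number;
}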
Matt McCutchen has a Big Integer Library
If you want to do this yourself, his code would be a great starting point. Since you can overload arithmetic operators in C++, it is not too difficult to write a new BigInteger class and have it handle any number of bits per integer.
There is also a stack overflow answer to this question: here
I consider this a question about theory, so I suggest browsing the internet with the right keywords for documents and articles, or taking a peek at well-tested libraries that implement this feature. Such projects also tend to offer a mailing list or a forum where the developers communicate, which can be a good place to start asking about this stuff.