What is the difference between fundamental and built-in types in C++?

I'm reading my notes for my C++ class at my college. They state that types can be classified into categories based on their relationship to the underlying hardware facilities:
fundamental types - correspond directly to the hardware facilities
built-in types - reflect the capabilities of the hardware facilities directly and efficiently
I understand that fundamental types are int, bool, char, double and so forth.
I always thought fundamental types are built-in types, as they are built into the C++ language. Or am I wrong? What is the difference between fundamental and built-in?

There is no such dichotomy in C++. Instead, there are fundamental types and compound types. Fundamental types are also informally known as built-in types.

built-in types - reflect the capabilities of the hardware facilities
directly and efficiently
The only reference I can find is the senecac.on.ca Overview, which is about an object-oriented language in general, not specifically C++.
C++, as others have pointed out, makes no distinction between "fundamental types" and "built-in types"; even "intrinsic types" and "primitive types" are all synonyms.
Trying to figure out what the author of that sentence is trying to explain, I can think of the size_t type. It's not something that a CPU can use "as is": it's an unsigned integer type, but which one is implementation-defined. Once the implementation defines it, it fits that "built-in types" definition.

Related

Using standard layout types to communicate with other languages

This draft of the standard contains a note at 11.2.6 regarding standard layout types :
[Note 3: Standard-layout classes are useful for communicating with code written in other programming languages. Their layout is specified in [class.mem]. — end note]
Following the link to class.mem we find rules regarding the layout of standard-layout types starting here, but it is not clear to me what about them makes them useful for communicating with other languages. It all seems to be about layout-compatible types and the common initial sequence, but I see no indication that these compatibility requirements extend beyond a given implementation.
I always assumed that standard layout types could not have arbitrary padding imposed by an implementation and had to follow an "intuitive" layout which would make them easy to use from other languages. But I can't seem to find any such rules.
What does this note mean? Did I miss any rules that force standard layout types to at least be consistent across a given platform?
The standard can’t meaningfully speak about other languages and implementations: even if one could unambiguously define “platform”, all it can do is constrain a C++ implementation, possibly in a fashion that would be impossible to satisfy for whatever arbitrary choices that other software makes. That said, the ABI can define such things, and standard-layout types are those that don’t have anything “C++-specific” (like references, base class subobjects, or a virtual table pointer) that would presumably fail to map into some other environment. In practice that “other environment” is just C, or some other language that itself follows C rules (e.g., ctypes in Python).

Resolve (u)int_fastX_t at compile time

Implementations of the C++ standard typedef the (u)int_fastX_t types to one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.
Wouldn't it increase performance to resolve such types at compile time, choosing the optimal type for the actual use? The compiler would analyze the use of a _fast variable and then choose the optimal type. Factors coming into play could be alignment and the kind of operations used with the variable.
This would effectively make those types a language feature.
This could introduce bugs when the compiler suddenly decides to choose another width for such a variable. But one shouldn't use a _fast type in use cases where the behaviour depends on the width anyway.
Is such compile-time resolution permitted by the standard?
If yes, why isn't it implemented as of today?
If no, why isn't it in the standard?
No, this is not permitted by the standard. Keep in mind the C++ standard defers to C for this particular area, for example, C++11 defers to C99, as per C++11 1.1 /2. Specifically, C++11 18.4.1 Header <cstdint> synopsis /2 states:
The header defines all functions, types, and macros the same as 7.18 in the C standard.
So let's get your first contention out of the way, you state:
Implementations of the C++ standard typedef the (u)int_fastX_t types to one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.
The C standard has this to say, in c99 7.18.1.3 Fastest minimum-width integer types (my italics):
Each of the following types designates an integer type that is usually fastest to operate with among all integer types that have at least the specified width.
The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
So you're indeed correct that a type cannot be fastest for all possible uses, but this seems not to be what the authors had in mind when defining these types.
The introduction of the fixed-width types was (in my opinion) to solve the problem all those developers had in having different int widths across the various implementations.
Similarly, once a developer knows the range of values they want, the fast minimum-width types give them a way to do arithmetic on those values at the maximum possible speed.
Covering your three specific questions in your final paragraph (in bold below):
(1) Is such compile time resolution permitted by the standard?
I don't believe so. The relevant part of the C standard has this little piece of text:
For each type described herein that the implementation provides, <stdint.h> shall declare that typedef name and define the associated macros.
That seems to indicate that it must be a typedef provided by the implementation and, since there are no "variable" typedefs, it has to be fixed.
There may be wiggle room because it could be possible to provide a different typedef depending on certain environmental considerations but the difficulty in actually implementing this seems very high (see my answer to your third question below).
Chief amongst these is that these adaptable types, should they have external linkage, would require agreement amongst all the compiled translation units when linked together. Having one unit with a 16-bit type and another with a 32-bit type is going to cause all sorts of problems.
(2) If yes, why isn't it implemented as of today?
I'm pushing "no" as an answer to your first question so I'm not going to speculate on this other than by referring you to the answer to the third question below (it's probably not implemented because it's very hard, with dubious benefits).
(3) If no, why isn't it in the standard?
A standard is a contract between the implementor and the user and describes what the implementor will provide. It's usual that the standards committees tend to be more populated by the former (who aren't that keen on making too much extra work for themselves) than the latter.
For example, I would love to have all the you-beaut C++ data structures in C but this would have the consequence that standards versions would be decades apart rather than years :-)

What's the difference between C++ "type deduction" and Haskell "type inference"?

In English semantics, is "type deduction" equal to "type inference"?
I'm not sure whether
this is just an idiom preference chosen by different language designers, or
there is computer science theory that gives a strict definition of "type deduction" that differs from "type inference".
Thanks.
The C++ specification and working drafts use "type deduction" extensively in reference to the type of expressions that don't have a type declaration; for example, this working draft on concepts uses it when talking about auto-declared variables, and I remember lots of books using it when talking about templates way back when I had to learn (and then subsequently forget most of) C++. Type inference, however, has its own Wikipedia page and is also the name of a significant field of study in programming-language theory. If you say type inference, people will immediately think of modern typed functional programming languages. You can even use it as a ruler to compare languages; some might say that their language X or their library Y is easier to do type inference on and is therefore better or friendlier.
I would say that type inference is the more specific, more precise, and more widely used term. Type deduction as a phrase probably only holds cachet in the C++ community. The terms are close cousins, but the contexts they've been used in have given them different shades of meaning.
As mentioned, type inference has been studied for years in theory. It is also a common feature of several programming languages, not only Haskell.
On the other hand, type deduction is the general name of several processes defined in the C++ specification, including return type deduction, placeholder type deduction, and the origin of them all, template (type) argument deduction. It is about identifying the type implied by a term (within a C++ simple-type-specifier or template-argument) whose type is not yet known and must be deduced. It is similar to type inference, which is about typing (determining the type of a term), but more restricted.
The major differences are:
Type deduction is the process of getting the type of terms in certain restricted forms. Normally a C++ expression is typed without type deduction rules: in most cases the static type of an expression is determined by the specific semantic rules forming the expression. Type inference, in contrast, can be used as the one truly general method of static typing.
As the term "type" has a more restricted meaning in C++ compared to that in type theory, programming-language theory, and many contemporary programming languages, the domain of the result is also more restricted: type deduction must deduce a well-formed C++ type, not a higher-order type (which is not expressible as a C++ type).
These restrictions are unlikely to change without a massive redesign of the C++ type system, so they can be considered essential.

Does C++ standard address the concept "TYPE"?

I have been reading Design Patterns(GOF), and it presents a clear distinction between the class and the type of an object as specified below.
The TYPE of an object is defined by its interface (the set of methods it can handle), and the CLASS of the object defines its implementation.
I have read in many books on C++ that a class is a user-defined type, and nothing more has been mentioned about the concept of TYPE (not even as GOF mentions it).
I just want to know: does the C++ standard mention the concept of TYPE in any way, if not in the way that GOF does?
Or is it assumed that this difference is too basic to mention?
C++ defines several kinds of types. Class types are just one such kind of type; others are integral types, floating-point types, pointer types, array types, function types, and so forth. The concept of "type" is well defined in C++.
The C++ standard discusses types in section 3.9 [basic.types] (in the 2011 ISO C++ standard; the section number may be different in other editions).
The Design Patterns book is not language-specific, and it's using the words "type" and "class" in a different way than the C++ standard uses them.

What is a type in C++?

Which constructs (class, struct, union) classify as types in C++? Can anyone explain the rationale behind calling and qualifying certain C++ constructs as a "type"?
$3.9/1 - "There are two kinds of types: fundamental types and compound types. Types describe objects (1.8), references (8.3.2), or functions (8.3.5)."
Fundamental types are char, int, bool and so on.
Compound types are arrays, enums, classes, references, unions, etc.
A variable contains a value.
A type is a specification of the value. (eg, number, text, date, person, truck)
All variables must have a type, because they must hold strictly defined values.
Types can be built-in primitives (such as int), custom types (such as enums and classes), or some other things.
Other answers address the kinds of types C++ makes available, so I'll address the motivation part. Note that C++ didn't invent the notion of a data type. Quoting from the Wikipedia entry on Type system:
a type system may be defined as "a tractable syntactic framework for classifying phrases according to the kinds of values they compute"
Another interesting definition from the Data Type page:
a data type (or datatype) is a classification identifying one of various types of data, such as floating-point, integer, or Boolean, stating the possible values for that type, the operations that can be done on that type, and the way the values of that type are stored
Note that this last one is very close to what C++ means by "type". This is perhaps obvious for built-in (fundamental) types like bool:
possible values are true and false
operations - per the definition of the operators that can accept bool as an argument
the way it's stored - actually not mandated by the C++ standard, but one can guess that on some systems a type requiring only a single bit can be stored efficiently (although I think most C++ systems don't do this optimization).
For more complex, user-created types, the situation is more difficult. Consider enum types: you know exactly the range of values a variable of an enum type can take. What about struct and class? There, too, your type declaration tells the compiler what possible values the struct can have, what operations you can do on it (operator overloading and functions accepting objects of this type), and it will even infer how to store it.
Regarding the range of values: although huge, remember it's finite. Even a struct with N 32-bit integers has a finite range of possible values, namely 2^(32N).
Quoting from the book "Bjarne Stroustrup - Programming Principles and Practice Using C++", page 77, chapter 3.8:
A type defines a set of possible values and a set of operations (for an object).
An object is some memory that holds a value of a given type.
A value is a set of bits in memory interpreted according to a type.
A variable is a named object.
A declaration is a statement that gives a name to an object.
A definition is a declaration that sets aside memory for an object.
Sounds like a matter of semantics to me... A type refers to something with a construct that can be used to describe it in a way that conforms to traditional object-oriented concepts (properties and methods). Anything that isn't called a type is probably created with a less robust construct.