Is it possible to convert foo from float to long (and vice versa)?
auto foo = float(1234567891234.1234);
cout << "foo: " << foo << endl;
foo = long(1234567891234.1234);
cout << "foo: " << foo << endl;
The output is always:
foo: 1.23457e+12
foo: 1.23457e+12
Not in the way you wrote it. First,
auto foo = float(1234567891234.1234);
uses auto type deduction rules to infer the type of the RHS, and the result is float. Once this is done, the type of foo is float and it is set in stone (C++ is statically typed, unlike e.g. Python). When you next write
foo = long(1234567891234.1234);
the type of foo is still float and it is not magically changed to long.
If you want to emulate a "change" of type you can at most perform a cast:
cout << "foo (as long): " << static_cast<long>(foo) << endl;
or use an additional variable
long foo_long = foo; // again you may have a loss of precision
but be aware of possible precision loss due to floating point representation.
If you have access to a C++17 compiler, you can use a std::variant<long, float>, which is a type-safe union, to switch between types (a short sketch follows the union example below). If not, you can just use a plain old union like
#include <iostream>
union Foo
{
float f;
long l;
};
int main()
{
Foo foo;
foo.f = float(1234567891234.1234); // we set up the float member
std::cout << "foo: " << foo.f << std::endl;
foo.l = long(1234567891234.1234); // we set up the long member
std::cout << "foo: " << foo.l << std::endl;
}
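For the C++17 std::variant route mentioned above, a minimal sketch (my addition, assuming a 64-bit long for the cast) could look like this:
#include <iostream>
#include <variant>
int main()
{
    std::variant<long, float> foo = 1234567891234.1234f; // the float alternative is active
    std::cout << "foo: " << std::get<float>(foo) << '\n';
    foo = static_cast<long>(1234567891234.1234); // now the long alternative is active
    std::cout << "foo: " << std::get<long>(foo) << '\n';
}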
Or, you can use a type-erasure technique like
#include <iostream>
int main()
{
void *foo; // we will store the object via this pointer
foo = new int{42};
std::cout << *(int*)foo << '\n';
operator delete(foo); // don't do delete foo, it is undefined behaviour
foo = new float{42.42f}; // note the f suffix: brace-initializing a float from a double literal is a narrowing conversion
std::cout << *(float*)foo << '\n';
operator delete(foo); // don't do delete foo, it is undefined behaviour
}
The modern version of the code above can be re-written with a std::shared_ptr like
#include <iostream>
#include <memory>
int main()
{
std::shared_ptr<void> foo{new int{42}};
std::cout << *(int*)foo.get() << '\n';
foo.reset(new float{42.42f}); // f suffix again to avoid a narrowing conversion in the braces
std::cout << *(float*)foo.get() << '\n';
}
A std::unique_ptr<void> won't work out of the box, because only std::shared_ptr type-erases its deleter; a unique_ptr would need a hand-written deleter that knows the real type (see the sketch below).
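For illustration, a minimal sketch (my addition, not part of the quoted answer) of what such a hand-written deleter could look like:
#include <memory>
int main()
{
    // std::unique_ptr<void> with the default deleter cannot work: it would have to
    // call delete on a void*, which is not allowed.
    // With a type-erased deleter supplied by hand it becomes usable:
    std::unique_ptr<void, void (*)(void*)> foo{
        new int{42},
        [](void* p) { delete static_cast<int*>(p); } // the lambda remembers the real type
    };
}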
Of course, if you don't really care about storage size etc, just use 2 separate variables.
I am not sure I understand why the first test evaluates to true and the second to false. I know that the information from typeid().name() is usually not reliable, but my main problem is with the typeid itself. I don't understand why the type of *test is not Location<1>, or what else is wrong. Any thoughts? Is there some wrapper around a type here that I don't see? Thanks in advance, and apologies if the answer is obvious.
#include <iostream>
#include <utility>
#include <typeinfo>
class LocationAbstract
{
virtual void get_() = 0;
};
template<int i>
class Location : public LocationAbstract
{
public:
static constexpr int test = i;
virtual void get_() override
{
return;
}
};
template <int i>
Location<i> LocationGenerator()
{
Location<i> test{};
return test;
}
int main()
{
LocationAbstract *table[10];
table[0] = new decltype(LocationGenerator<0>());
table[1] = new decltype(LocationGenerator<1>());
Location<1> *test;
try
{
std::cout << "Casting\n";
test = dynamic_cast<Location<1>*>(table[1]);
}
catch (std::bad_cast &e)
{
std::cout << "Bad cast\n";
}
// test1, evaluates to true
std::cout << (typeid(*test) == typeid(*dynamic_cast<Location<1>*>(table[1]))) << "\n";
std::cout << typeid(*test).name() << "\n";
std::cout << typeid(*dynamic_cast<Location<1>*>(table[1])).name() << "\n----\n";
// test2, why does this evaluate to false while the above evaluates to true ?
std::cout << (typeid(Location<1>()) == typeid(*dynamic_cast<Location<1>*>(table[1]))) << "\n";
std::cout << typeid((Location<1>())).name() << "\n";
std::cout << typeid(*dynamic_cast<Location<1>*>(table[1])).name() << "\n";
auto test1 = Location<1>();
auto test2 = *dynamic_cast<Location<1>*>(table[1]);
std::cout << typeid(test1).name() << " and " << typeid(test2).name() << "\n";
return 0;
}
An extra set of () makes all the difference here. In typeid(Location<1>()) and typeid((Location<1>())), Location<1>() actually means two totally different things.
In typeid(Location<1>()), Location<1>() is interpreted as a function type that returns a Location<1> and takes no parameters.
In typeid((Location<1>())), Location<1>() is interpreted as value-initializing an anonymous Location<1> object.
The typeid operator can work on either types or expressions. That is, you can say typeid(int) as well as typeid(42). Since Location<1>() can be interpreted as a type, the language does so. (Location<1>()) cannot be interpreted as a type though, so it must be interpreted as an expression. The only thing Location<1>() can mean as part of an expression is to value-initialize an anonymous Location<1> object, so typeid gives you the type of that object.
Let this be yet another reason to prefer uniform-initialization syntax when creating temporary objects; Location<1>{} would not have this ambiguity.
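To see the two interpretations side by side, here is a minimal sketch (my addition, using a hypothetical Widget type in place of Location<1>):
#include <iostream>
#include <typeinfo>
struct Widget {};
int main()
{
    // Parsed as a type: a function taking no arguments and returning Widget.
    std::cout << typeid(Widget()).name() << '\n'; // e.g. "F6WidgetvE" with the Itanium ABI
    // The extra parentheses force an expression: a value-initialized temporary Widget.
    std::cout << typeid((Widget())).name() << '\n'; // e.g. "6Widget"
    // Braces can never form a function type, so there is no ambiguity.
    std::cout << typeid(Widget{}).name() << '\n'; // e.g. "6Widget"
}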
Examine these two lines:
std::cout << (typeid(Location<1>()) == typeid(*dynamic_cast<Location<1>*>(table[1]))) << "\n";
std::cout << typeid((Location<1>())).name() << "\n";
In the first line, you use typeid(Location<1>()). typeid can take types as well as expressions, and Location<1>() is a function type with no parameters and a return type of Location<1>.
So why does the name print the same? That's because of the second line: typeid((Location<1>())). By wrapping the argument in parentheses, it is no longer a valid type, so it is treated as an expression and the name of typeid(Location<1>) is printed. Removing the extra parentheses prints F8LocationILi1EEvE under the same mangling scheme.
To avoid the ambiguity, you can also use the type directly (typeid(Location<1>)) or use braces: typeid(Location<1>{}).
I should start by saying that my knowledge of C++ is pretty limited. I have some understanding of templates and specialization, but I'm by no means an experienced C++ programmer. For example, I've today learnt about "aliases", which are not quite the same as "typedefs", and this is completely new to me.
I've been reading up a bit on alias template functions, but I have to admit that I find most examples very cryptic, so I've come up with a very simple use case, see below.
#include <iostream>
#include <cstdint> // for the fixed-width integer types (int16_t, int32_t) used below
// A fictitious 24-bit type, that can be manipulated like any other type but can be distinguished from the underlying 32-bit type
using int24_t = int32_t;
template<typename T>
void foo(T x)
{
std::cout << "x = " << x << std::endl;
}
template<>
void foo<int24_t>(int24_t x)
{
std::cout << "24-bit specialization - x = " << x << std::endl;
}
int main(void)
{
foo<int16_t>(0);
foo<int24_t>(1);
foo<int32_t>(2); // Indistinguishable from 24-bit
}
Is it possible to do what I want, i.e. have a specialization of foo<int24_t> but also have a general purpose implementation of foo<int32_t> ?
When you've made an alias (whether with typedef or using), that alias is indistinguishable from the original type. You could consider making int24_t an enum with a fixed underlying type instead.
Example:
#include <cstdint>
#include <iostream>
enum int24_t : std::int32_t {};
template<typename T>
void foo(T v) {
std::cout << "base " << v << '\n';
}
template<>
void foo<std::int32_t>(std::int32_t v) {
std::cout << "int32_t " << v << '\n';
}
template<>
void foo<int24_t>(int24_t v) {
std::cout << "int24_t " << v << '\n';
}
int main() {
int24_t a{1};
std::int32_t b{2};
unsigned c{3};
foo(a);
foo(b);
foo(c);
a = static_cast<int24_t>(b); // needed to assign an int32_t to the enum type
foo(a);
}
Output
int24_t 1
int32_t 2
base 3
int24_t 2
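To back up the point that a plain alias is literally the same type while the enum is a distinct one, here is a small compile-time check (my addition, with hypothetical names int24_alias and int24_enum):
#include <cstdint>
#include <type_traits>
using int24_alias = std::int32_t; // a plain alias: the very same type
enum int24_enum : std::int32_t {}; // an enum with a fixed underlying type: a distinct type
static_assert(std::is_same<int24_alias, std::int32_t>::value,
              "an alias cannot be told apart from the original type");
static_assert(!std::is_same<int24_enum, std::int32_t>::value,
              "the enum is a separate type, so it can select its own specialization");
int main() {}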
I have read many posts about variadic templates and std::bind but I think I am still not understanding how they work together. I think my concepts are a little hazy when it comes to using variadic templates, what std::bind is used for and how they all tie together.
In the following code my lambda uses the dot operator on objects of type TestClass, but even when I pass in objects wrapped with std::ref it still works. How is this exactly? How does the implicit conversion happen?
#include <iostream>
using std::cout;
using std::endl;
#include <functional>
#include <utility>
using std::forward;
class TestClass {
public:
TestClass(const TestClass& other) {
this->integer = other.integer;
cout << "Copy constructed" << endl;
}
TestClass() : integer(0) {
cout << "Default constructed" << endl;
}
TestClass(TestClass&& other) {
cout << "Move constructed" << endl;
this->integer = other.integer;
}
int integer;
};
template <typename FunctionType, typename ...Args>
void my_function(FunctionType function, Args&&... args) {
cout << "in function" << endl;
auto bound_function = std::bind(function, args...);
bound_function();
}
int main() {
auto my_lambda = [](const auto& one, const auto& two) {
cout << one.integer << two.integer << endl;
};
TestClass test1;
TestClass test2;
my_function(my_lambda, std::ref(test1), std::ref(test2));
return 0;
}
More specifically, I pass in two instances of a reference_wrapper with the two TestClass objects test1 and test2, but when I pass them to the lambda the . operator works magically. I would expect that you have to use the ::get() function on the reference_wrapper to make this work, but the call to the .integer data member just works.
The reference unwrapping is performed by the result of std::bind():
If the argument is of type std::reference_wrapper<T> (for example, std::ref or std::cref was used in the initial call to bind), then the reference T& stored in the bound argument is passed to the invocable object.
Corresponding standardese can be found in N4140 draft, [func.bind.bind]/10.
It is important to note that with std::bind:
The arguments to bind are copied or moved, and are never passed by reference unless wrapped in std::ref or std::cref.
The "passed by reference" above is achieved because std::ref provides a result of std::reference_wrapper that is a value type that "wraps" the reference provided.
std::reference_wrapper is a class template that wraps a reference in a copyable, assignable object. It is frequently used as a mechanism to store references inside standard containers (like std::vector) which cannot normally hold references.
By way of an example of what bind's unwrapping of the reference does (without the bind):
#include <iostream>
#include <utility>
#include <functional>
int main()
{
using namespace std;
int a = 1;
auto b = std::ref(a);
int& c = b;
cout << a << " " << b << " " << c << " " << endl; // prints 1 1 1
c = 2;
cout << a << " " << b << " " << c << " " << endl; // prints 2 2 2
}
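And for completeness, a small sketch (my addition, not part of the quoted answer) of the same unwrapping done by std::bind itself, contrasting an argument that is copied with one wrapped in std::ref:
#include <functional>
#include <iostream>
void increment(int& n) { ++n; }
int main()
{
    int by_value = 0;
    int by_ref = 0;
    auto f1 = std::bind(increment, by_value); // the argument is copied into the bound object
    f1(); // increments the internal copy, not by_value
    auto f2 = std::bind(increment, std::ref(by_ref)); // stores a reference_wrapper
    f2(); // bind unwraps it and passes an int&, so by_ref itself is incremented
    std::cout << by_value << " " << by_ref << '\n'; // prints "0 1"
}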
I am fiddling with a code like following:
union Data {
int i;
double x;
std::string str;
~Data(){}
};
union Data var = {.x = 31293.932};
std::cout << var.x << "\n";
std::cout << var.str << "\n";
std::cout << var.i << "\n";
As far as I know, the union has some 64-bit value written into it after I set the x member to a floating-point number. Then I want to see the corresponding string, assuming those bytes are treated as chars. But I am getting a segmentation fault when I try to print it as a string. Why is that? I initialized the union, so I assume var.str must be initialized as well.
str is not constructed. If you want to use str, you must either construct it in a constructor of the union or construct it via placement new. A full example below:
#include <iostream>
#include <vector>
using namespace std;
union Data
{
int i;
double x;
std::string str;
Data() {}
Data(std::string st) : str(st) {}
~Data() {}
};
int main()
{
Data var;
var.x = 31293.932;
new (&var.str) std::string("Hello World!");
std::cout << var.x << "\n"; // note: str is now the active member, so reading x here is technically UB
std::cout << var.str << "\n";
std::cout << var.i << "\n"; // likewise, i is not the active member
//destroy it
var.str.std::string::~string();
}
EDIT:
Just to expand my answer a bit...
MSDN seems to have a more beginner-friendly explanation of unions than cppreference does. So, check: Unions - MSDN and Unions - cppreference
You should be using char to access the bytes in the union. std::string is not a POD type and can't be used in this way.
Try this instead:
union Data {
int i;
double x;
char bytes[sizeof(double)];
~Data(){}
};
union Data var = {.x = 31293.932};
std::cout << var.x << "\n";
std::cout.write(var.bytes, sizeof(var.bytes));
std::cout << "\n" << var.i << "\n";
The full definition of what a POD type is is extensive. In very simple terms, it is a basic data type without an explicitly defined copy constructor, destructor, or virtual member functions, and one that does not itself contain any such types if it is an aggregate (a struct, class, or union).
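If the goal is just to inspect the bytes of the double, here is a sketch (my addition) that avoids the union entirely by copying the object representation with std::memcpy:
#include <cstring>
#include <iomanip>
#include <iostream>
int main()
{
    double x = 31293.932;
    unsigned char bytes[sizeof x];
    std::memcpy(bytes, &x, sizeof x); // well-defined way to look at the object representation
    for (unsigned char b : bytes)
        std::cout << std::hex << std::setw(2) << std::setfill('0') << static_cast<unsigned>(b) << ' ';
    std::cout << '\n';
}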
As I understand it, both decltype and auto will attempt to figure out what the type of something is.
If we define:
int foo () {
return 34;
}
Then both declarations are legal:
auto x = foo();
cout << x << endl;
decltype(foo()) y = 13;
cout << y << endl;
Could you please tell me what the main difference between decltype and auto is?
decltype gives the declared type of the expression that is passed to it. auto does the same thing as template type deduction. So, for example, if you have a function that returns a reference, auto will still be a value (you need auto& to get a reference), but decltype will be exactly the type of the return value.
#include <iostream>
int global{};
int& foo()
{
return global;
}
int main()
{
decltype(foo()) a = foo(); //a is an `int&`
auto b = foo(); //b is an `int`
b = 2;
std::cout << "a: " << a << '\n'; //prints "a: 0"
std::cout << "b: " << b << '\n'; //prints "b: 2"
std::cout << "---\n";
decltype(foo()) c = foo(); //c is an `int&`
c = 10;
std::cout << "a: " << a << '\n'; //prints "a: 10"
std::cout << "b: " << b << '\n'; //prints "b: 2"
std::cout << "c: " << c << '\n'; //prints "c: 10"
}
Also see David Rodríguez's answer about the places in which only one of auto or decltype is possible.
auto (in the context where it infers a type) is limited to defining the type of a variable for which there is an initializer. decltype is a broader construct that will deduce the type of any expression, at the cost of your having to spell that expression out.
In the cases where auto can be used, it is more concise than decltype, as you don't need to provide the expression from which the type will be inferred.
auto x = foo(); // more concise than `decltype(foo()) x`
std::vector<decltype(foo())> v{ foo() }; // cannot use `auto`
The keyword auto is also used in a completely unrelated context, when using trailing return types for functions:
auto foo() -> int;
There auto is only a leader so that the compiler knows that this is a declaration with a trailing return type. While the example above can be trivially converted to old style, in generic programming it is useful:
template <typename T, typename U>
auto sum( T t, U u ) -> decltype(t+u);
Note that in this case (before C++14's return type deduction), auto alone cannot be used to define the return type; the trailing decltype is what names it.
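As a side note (not part of this answer), with a C++14 compiler the return type can be deduced directly, so the trailing decltype is no longer required; roughly:
#include <iostream>
template <typename T, typename U>
auto sum(T t, U u) // C++14: deduces the value type of t + u
{
    return t + u;
}
template <typename T>
decltype(auto) first(T& t) // C++14: preserves the reference, much like decltype would
{
    return t;
}
int main()
{
    std::cout << sum(1, 2.5) << '\n'; // prints 3.5 (a double)
    int x = 0;
    first(x) = 42; // first(x) is an int&, so this assigns to x
    std::cout << x << '\n'; // prints 42
}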
Here is my thinking about auto and decltype:
The most obvious practical difference between the two is this:
when deducing the type of an expression, decltype keeps the exact type (in particular, an lvalue expression yields an lvalue reference), while auto defaults to a plain value type.
We need a "data stream" model in mind before the difference makes sense.
In our code, a function call can be viewed as a data stream (a bit like in functional programming): the function being called is the data receiver, and the caller is the data provider. Obviously the data type must be decided by the data receiver, otherwise the data in the stream cannot be handled consistently.
Look at this:
template<typename T>
void foo(T t){
// do something.
}
Here T will be deduced as a value type, regardless of what you pass in.
If you want a reference type, you have to ask for it explicitly (T& or T&& here, or auto& / auto&& with auto); that is what I mean by saying the data type is decided by the data receiver.
Let's return to auto:
auto does type deduction for the initializing expression, giving the data receiver a suitable interface for receiving the data.
auto a = some_expr; // a is the data receiver, and the expr is the provider
So why does auto ignore the reference modifier?
Because that should be decided by the receiver.
Why do we need decltype then?
The answer is: auto is not true type deduction for an expression; it will not give you the exact type of the expr, only a type suitable for the receiver to take the data.
So, we need decltype to get the correct type.
Modifying @Mankarse's example code, I think the version below is clearer:
#include <iostream>
int global = 0;
int& foo()
{
return global;
}
int main()
{
decltype(foo()) a = foo(); //a is an `int&`
auto b = foo(); //b is an `int`
b = 2;
std::cout << "a: " << a << '\n'; //prints "a: 0"
std::cout << "b: " << b << '\n'; //prints "b: 2"
std::cout << "global: " << global << '\n'; //prints "global: 0"
std::cout << "---\n";
//a is an `int&`
a = 10;
std::cout << "a: " << a << '\n'; //prints "a: 10"
std::cout << "b: " << b << '\n'; //prints "b: 2"
std::cout << "global: " << global << '\n'; //prints "global: 10"
return 0;
}
I consider auto to be a purely simplifying feature whereas the primary purpose of decltype is to enable sophisticated metaprogramming in foundation libraries. They are, however, very closely related when looked at from a language-technical point of view.
From HOPL20 4.2.1, Bjarne Stroustrup.
Generally, if you need a type for a variable you are going to initialize, use auto. decltype is better used when you need the type for something that is not a variable, like a return type.
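A tiny sketch of that rule of thumb (my own illustration, with hypothetical names):
#include <iostream>
#include <map>
#include <string>
std::map<std::string, int> table;
// decltype names a type where nothing is being initialized: here, a return type.
decltype(table)::mapped_type& at_or_insert(const std::string& key)
{
    return table[key];
}
int main()
{
    at_or_insert("answer") = 42;
    // auto: there is an initializer, so just let the compiler deduce the type.
    auto it = table.find("answer");
    if (it != table.end())
        std::cout << it->second << '\n'; // prints 42
}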