Trying to provide &[u8] as an argument to a function requiring Read doesn't work as I expected, as illustrated by the example below.
use std::io::Read;

fn main() {
    let bytes: &[u8] = &[1, 2, 3, 4];
    print_reader(&bytes);
}

fn print_reader(reader: &(Read + Sized)) {
    for byte in reader.bytes() {
        println!("{}", byte.unwrap());
    }
}
Compiler error:
error: the trait bound `std::io::Read + Sized: std::marker::Sized` is not satisfied [--explain E0277]
--> <anon>:9:24
9 |> for byte in reader.bytes() {
|> ^^^^^
note: `std::io::Read + Sized` does not have a constant size known at compile-time
error: the trait bound `std::io::Read + Sized: std::marker::Sized` is not satisfied [--explain E0277]
--> <anon>:9:5
9 |> for byte in reader.bytes() {
|> ^
note: `std::io::Read + Sized` does not have a constant size known at compile-time
note: required because of the requirements on the impl of `std::iter::Iterator` for `std::io::Bytes<std::io::Read + Sized>`
error: aborting due to 2 previous errors
Rust playground
The following trait implementation can be found in the std::slice documentation:
impl<'a> Read for &'a [u8].
I think this is a rather unhelpful error message. I'll try to explain:
First: you can't have a trait object &Sized. This violates the first object safety rule and it doesn't really make sense either. The only reason to add the Sized trait bound is to use the special property of all Sized types (e.g. saving it on the stack). Look at this example trying to use the property:
fn foo(x: &Sized) {
    let y = *x;
}
What size would y have? The compiler can't know, as with any other trait object. So we're not able to use the only purpose of Sized with trait objects. Thus a trait object &Sized is useless and can't really exist.
In this case the error message at least kind of tells us the correct thing:
error: the trait `std::marker::Sized` cannot be made into an object [--explain E0038]
--> <anon>:7:1
7 |> fn foo(x: &Sized) {
|> ^
note: the trait cannot require that `Self : Sized`
Furthermore: I suspect you added the + Sized bound to work around the same error, which already showed up when you had the argument reader: &Read. Here is one important insight from the detailed error description:
Generally, Self : Sized is used to indicate that the trait should not be used as a trait object.
This restriction on Read::bytes does make sense: the Bytes iterator calls Read::read() once for every single byte, and if that call were virtual/dynamic, the overhead of the function call would dwarf the actual work of reading the byte.
So... why do you need to have Read as a trait object anyway? Often it's sufficient (and in any case much faster) to handle this via generics:
fn print_reader<R: Read>(reader: R) {
    for byte in reader.bytes() {
        println!("{}", byte.unwrap());
    }
}
This avoids dynamic dispatch and works nicely with the type checker and the optimizer.
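If dynamic dispatch is genuinely needed (say, to store readers of different concrete types behind one interface), a `&mut dyn Read` works too, because `Read` is implemented for `&mut R` and the mutable reference itself is `Sized`. A minimal sketch (`read_all_dyn` is a name made up for this example):

```rust
use std::io::Read;

// `&mut dyn Read` is Sized, and std provides `impl<R: Read + ?Sized> Read
// for &mut R`, so calling `bytes()` on the reference is allowed even though
// `dyn Read` itself can't satisfy the `Self: Sized` bound on `bytes()`.
fn read_all_dyn(reader: &mut dyn Read) -> Vec<u8> {
    reader.bytes().map(|b| b.unwrap()).collect()
}

fn main() {
    // `&mut &[u8]` coerces to `&mut dyn Read` because `&[u8]: Read`.
    let mut data: &[u8] = &[1, 2, 3, 4];
    println!("{:?}", read_all_dyn(&mut data));
}
```

This pays the dynamic-dispatch cost per read, so the generic version above remains the better default.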
I'm trying to work with Active Directory from Rust by following the C++ examples Microsoft posts for the ADSI API, using the windows-rs crate. I'm not quite understanding what is going on here:
https://learn.microsoft.com/en-us/windows/win32/api/adshlp/nf-adshlp-adsopenobject
They create an uninitialized pointer to IADs (drawing on my C# knowledge, it looks like an interface); then, when it comes time to use it, they have a double pointer that is cast as void. I tried to replicate this behavior in Rust, but I think I'm just not understanding exactly what is happening. This is what I've tried so far:
// bindings omitted
use windows::Interface;
use libc::c_void;

fn main() -> windows::Result<()> {
    let mut pads: *mut IADs = ptr::null_mut();
    let ppads: *mut *mut c_void = pads as _;
    unsafe {
        let _ = CoInitialize(ptr::null_mut());
        let mut ldap_root: Vec<u16> = "LDAP://rootDSE\0".encode_utf16().collect();
        let hr = ADsOpenObject(
            ldap_root.as_mut_ptr() as _,
            ptr::null_mut(),
            ptr::null_mut(),
            ADS_AUTHENTICATION_ENUM::ADS_SECURE_AUTHENTICATION.0 as _,
            &IADs::IID,
            ppads,
        );
        if !hr.is_err() {
            // ...
        }
    }
    Ok(())
}
First, I'm probably wrong to be creating a null pointer, because that's not what they're doing in the example, but the problem is that Rust doesn't permit the use of an uninitialized variable, so I'm not sure what the equivalent is.
Second, presumably the pADs variable is where the output is supposed to go, but I'm not understanding the interaction of having a pointer, then a double pointer, to an object that doesn't have an owner. Even if that were possible in rust, I get the feeling that it's not what I'm supposed to do.
Third, once I have the pointer updated by the FFI call, how do I tell Rust what the resulting output type is so that we can do more work with it? Doing as _ won't work because it's a struct, and I have a feeling that using transmute is bad.
Pointer parameters are often used in FFIs as a way to return data alongside the return value itself. The idea is that the pointer should point to some existing object that the call will populate with the result. Since the Windows API functions often return HRESULTs to indicate success and failure, they use pointers to return other stuff.
In this case, the ADsOpenObject wants to return a *void (the requested ADs interface object), so you need to give it a pointer to an existing *void object for it to fill:
let mut pads: *mut c_void = std::ptr::null_mut();
let ppads = &mut pads as *mut *mut c_void;

// or inferred inline
let hr = ADsOpenObject(
    // ...
    &mut pads as _,
);
I changed pads to *mut c_void to simplify this demonstration and match the ADsOpenObject parameters. After a successful call, you can cast pads to whatever you need.
The key difference is casting pads vs &mut pads. What you were doing before was making ppads the same value as pads and thus telling the function that the *void result should be written at null. No good. This makes the parameter point to pads instead.
And the uninitialized vs null difference is fairly moot because the goal of the function is to overwrite it anyways.
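The out-parameter pattern itself can be exercised without the Windows API. Below is a sketch where `fake_open` is a made-up stand-in for a call like `ADsOpenObject`: it writes a heap pointer through the pointer-to-pointer out-parameter and returns an integer status:

```rust
use std::os::raw::c_void;

// Hypothetical stand-in for an FFI function that returns a status code and
// hands the real result back through a pointer-to-pointer out-parameter.
unsafe fn fake_open(out: *mut *mut c_void) -> i32 {
    let obj = Box::new(42u32); // pretend this is the requested interface object
    *out = Box::into_raw(obj) as *mut c_void; // write the result where `out` points
    0 // "S_OK"
}

fn main() {
    // A null pointer for the callee to overwrite...
    let mut pads: *mut c_void = std::ptr::null_mut();
    unsafe {
        // ...and the address *of* `pads`, so the callee knows where to write.
        let hr = fake_open(&mut pads as *mut *mut c_void);
        if hr == 0 {
            // After a successful call, cast back to the concrete type.
            let value = Box::from_raw(pads as *mut u32);
            println!("got {}", value); // got 42
        }
    }
}
```

Passing `pads as _` instead of `&mut pads as _` would reproduce the original bug: the callee would try to write its result at address null.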
This question already has answers here:
The size for values of type `T` cannot be known at compilation time when using mem::size_of::<T> as an array length
What expressions are allowed as the array length N in [_; N]?
I'm having a bit of trouble understanding the problem with this code:
fn doesnt_compile<T>() {
    println!("{}", std::mem::size_of::<[T; std::mem::size_of::<T>()]>());
}

fn main() {
    doesnt_compile::<i32>();
}
When run in the playground (or on my machine) the compiler seems to ignore the implicit trait bound 'Sized' for T.
This is the error:
error[E0277]: the size for values of type `T` cannot be known at compilation time
--> src/main.rs:2:64
|
2 | println!("{}", std::mem::size_of::<[T; std::mem::size_of::<T>()]>());
| ^ doesn't have a size known at compile-time
|
= help: the trait `std::marker::Sized` is not implemented for `T`
= note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
= help: consider adding a `where T: std::marker::Sized` bound
I stared at it for a while and tried to rewrite it in different ways, but I can't figure out why it shouldn't compile. I find it especially confusing since the following code works just fine:
fn compiles<T>() {
    println!("{}", std::mem::size_of::<T>());
}

fn main() {
    compiles::<i32>();
}
Is there something I'm missing? Is it a compiler bug?
This is the result of a known compiler bug (#43408). Array length expressions cannot currently have type parameters, and apparently it isn't even possible to improve the error message without major refactoring.
There currently isn't a good workaround for this in general, though there might be one for your specific use case.
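For instance, if the goal was only the numeric value of `size_of::<[T; size_of::<T>()]>()`, the identity `size_of::<[T; K]>() == K * size_of::<T>()` lets you compute it without ever putting a type parameter into an array-length expression. A sketch, assuming the value is all you need:

```rust
// size_of::<[T; K]>() is K * size_of::<T>() (arrays have no padding between
// elements), so the problematic size_of::<[T; size_of::<T>()]>() is simply
// size_of::<T>() squared, computed here without a generic array length.
fn size_of_square<T>() -> usize {
    std::mem::size_of::<T>() * std::mem::size_of::<T>()
}

fn main() {
    println!("{}", size_of_square::<i32>()); // 16
}
```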
According to D Language Reference static initialization of associative arrays an associative array (AA) can be initialized this way:
immutable long[string] aa = [
    "foo": 5,
    "bar": 10,
    "baz": 2000
];

void main()
{
    import std.stdio : writefln;
    writefln("(aa = %s)", aa);
}
However, the example doesn't compile with a reasonably recent DMD:
$ dmd --version
DMD64 D Compiler v2.083.0
Copyright (C) 1999-2018 by The D Language Foundation, All Rights Reserved written by Walter Bright
$ dmd -de -w so_003.d
so_003.d(3): Error: non-constant expression ["foo":5L, "bar":10L, "baz":2000L]
A bit of googling seems to indicate this is a long standing bug (?) in the language:
Cannot initialize associative array
What is the syntax for declaring a constant string[char] AA?
Error in Defining an associative array in D
So I know how to work around that with a static constructor. But considering the issue has existed for about 10 years, has this in practice turned into a feature?
In fact, that's just a prelude to my actual question:
Is it possible to initialize an associative array at compile time?
In the example below I can initialize the module-level string[] doubleUnits with a generator function that runs at compile time (with CTFE), as proven by pragma(msg). And I can initialize int[string] doubleUnitMap at run time. But how can I initialize the AA at compile time?
import std.stdio : writefln;

immutable char[] units = ['a', 'b', 'c'];

immutable string[] doubleUnits = generateDoubleUnits(units);
pragma(msg, "compile time: ", doubleUnits);

string[] generateDoubleUnits(immutable char[] units)
    pure
{
    import std.format : format;

    string[] buffer;
    foreach (unit; units) {
        buffer ~= format("%s%s", unit, unit);
    }
    return buffer;
}

immutable int[string] doubleUnitMap;
// pragma(msg) below triggers the following compilation error:
//   Error: static variable doubleUnitMap cannot be read at compile time
//   while evaluating pragma(msg, "compile time: ", doubleUnitMap)
// pragma(msg, "compile time: ", doubleUnitMap);

shared static this() {
    doubleUnitMap = generateDoubleUnitMap(units);
}

int[string] generateDoubleUnitMap(immutable char[] units)
    pure
{
    import std.format : format;

    int[string] buffer;
    foreach (unit; units) {
        string key = format("%s%s", unit, unit);
        buffer[key] = 1;
    }
    return buffer;
}

void main()
{
    writefln("(doubleUnits = %s)", doubleUnits);
    writefln("(doubleUnitMap = %s)", doubleUnitMap);
}
It is not possible to have the built-in AAs initialized at compile time, because the compiler is ignorant of the runtime format. It knows the runtime interface and it knows the compile-time memory layout... but the runtime memory layout is delegated to the library, so the compiler doesn't know how to form it. Hence the error.
But if you were to implement your own AA type, then you could write the CTFE code to lay it out, and the compiler could make it at compile time.
Many years ago, this was proposed as a fix - replace the built-in magic implementation with a library AA that happens to fit the compiler's interface. Then it could do it all. The problem was library types cannot express all the magic the built in associative arrays do. I don't remember the exact problems, but I think it was about const and other attribute interaction.
But that said, even if it failed as a 100% replacement, your own implementation of a 90% replacement may well be good enough for you. The declarations will look different (MyAA!(string, int) instead of int[string]), and the literals for it are different (though possibly makeMyAA(["foo" : 10]), a helper CTFE function that takes a built-in literal and converts it to your format), but the usage will be basically the same thanks to operator overloading.
Of course, implementing your own AA can be a bit of code and maybe not worth it, but it is the way to make it work if CT initialization is a must have.
(personally I find the static constructor to be plenty good enough...)
At the moment, that is not possible (as described in the language specification document). I've submitted a change in the spec with a note that the feature is not yet implemented. It is definitely planned, but not yet implemented...
I'm writing a library in TypeScript, and I want to check that my type definitions are correct. Often, I want to check that a variable has a certain static type. I usually do it like this:
let expectedToBeString : string = Api.callFunction("param1", 2, []);
But sometimes, a type might be widened to any without me knowing about it, so the above expression would still compile. So I'd want to make sure it's not any by writing an expression that will intentionally fail type checking.
Sometimes I also want to check that my set of overloads works for legal types, but not for illegal ones, but the only way to make sure of that is to raise a compilation error.
How can I verify that a compilation error is being raised when it should be?
Interesting issue. When conditional types are released in TypeScript v2.8, coming out supposedly sometime this month (March 2018), or available now via typescript@next, you will be able to do something like this:
type ReplaceAny<T, R> = 0 extends (1 & T) ? R : T
The ReplaceAny<T, R> type will be T unless T is any, in which case it will be R. No normal type T should satisfy 0 extends (1 & T), since 1 & T should be at least as narrow as 1, and 0 is not a subtype of 1. But the any type in TypeScript breaks the rules: it's considered to be both a supertype of and a subtype of every other type (more or less). Which means that 1 & any becomes any, and 0 extends any is true. So 0 extends (1 & T) behaves like an any detector.
Now we can make a convenience function like this:
const replaceAny = <R>() => <T>(x: T): ReplaceAny<T,R> => x as any;
If you call replaceAny<{}>(), it produces a function which will take any input and return a value of type {} if that input is of type any.
So let's examine some scenarios:
declare const Api: {
    callFunctionS(...args: any[]): string,
    callFunctionN(...args: any[]): number,
    callFunctionA(...args: any[]): any,
}

let expectedToBeString: string;

expectedToBeString =
    replaceAny<{}>()(Api.callFunctionS("param1", 2, []));
// okay

expectedToBeString =
    replaceAny<{}>()(Api.callFunctionN("param1", 2, []));
// error, number not assignable to string

expectedToBeString =
    replaceAny<{}>()(Api.callFunctionA("param1", 2, []));
// error, {} not assignable to string
The first two behave as you expect, where expectedToBeString is happy with callFunctionS() but angry about callFunctionN(). The new behavior is that it is also angry about callFunctionA(), since replaceAny<{}>() causes the return value to be of type {} instead of any, and {} is not assignable to string.
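The same `0 extends (1 & T)` trick can also be packaged as a pure type-level predicate (the name `IsAny` is my own, not from any library), handy for compile-time assertions in a test file:

```typescript
// IsAny<T> resolves to the literal type true only when T is any,
// using the same 0 extends (1 & T) detector described above.
type IsAny<T> = 0 extends (1 & T) ? true : false;

// These assignments only type-check if the types resolve as claimed:
// assigning false to IsAny<string> works because IsAny<string> is false,
// and assigning true to IsAny<any> works because IsAny<any> is true.
const stringIsNotAny: IsAny<string> = false;
const anyIsAny: IsAny<any> = true;

console.log(stringIsNotAny, anyIsAny); // false true
```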
Hope that helps; good luck!
I developed a generic "Unsigned" class, or really a class template Unsigned<size_t N>, that models the C/C++ built-in unsigned types using the number of uint8_ts as a parameter. For example Unsigned<4> is identical to a uint32_t, and Unsigned<32> would be identical to a uint256_t -- if it existed.
So far I have managed to follow most if not all of the semantics expected from a built-in unsigned -- in particular sizeof(Unsigned<N>) == N, Unsigned<N>(-1) == "max_value_all_bits_1" == ~Unsigned<N>(0), compatibility with abs(), sign(), div (using a custom div_t structure), ilogb() (exclusive to GCC it seems) and numeric_limits<>.
However I'm facing the issue that, since (1) a class template is just a template, so its instantiations are unrelated types, and (2) a non-type template parameter requires a compile-time constant, which is far stricter than "a const", I'm essentially unable to create an Unsigned given an N unknown at compile time.
In other words, I can't have code like this:
...
// ... assuming all adequate headers are included ...
using namespace std;
using lpp::Unsigned;

std::string str;
cout << "Enter an arbitrarily long integer (end it with <ENTER>) :>";
getline(cin, str, '\n');

const int digits10 = log10(str.length()) + 1;
const int digits256 = (digits10 + 1) * ceil(log(10)/log(256)); // from "10×10^D = 256^T"

// at this point, I "should" be able to, semantically, do this:
Unsigned<digits256> num;      // <-- THIS I CAN'T -- num would be guaranteed
                              // big enough to hold str's binary expression,
                              // no more space is needed
Unsigned::from_str(num, str); // somehow converts (essentially a base-change algo)

// now I could do whatever I wanted with num "as if" a builtin.
std::string str_b3 = change_base(num, 3); // a generic implemented somehow
cout << "The number above, in base 3, is: " << str_b3 << endl;
...
(A/N -- This is part of the test suite for Unsigned, which reads a "slightly large number" (I have tried up to 120 digits -- after setting N accordingly) and does things like expressing it in other bases, which in and of itself exercises all the arithmetic functions already.)
In looking for possible ways to bypass or otherwise alleviate this limitation, I have been running into some concepts that I'd like to try and explore, but I wouldn't like to spend too much effort into an alternative that is only going to make things more complicated or that would make the behaviour of the class(es) deviate too much.
The first thing I thought was that if I wasn't able to pick up a Unsigned<N> of my choice, I could at least pick up from a set of pre-selected values of N which would lead to the adequate constructor being called at runtime, but depending on a compile-time value:
???? GetMeAnUnsigned(size_t S) {
    switch (S) {
        case 0: throw something(); // we can't have a zero-size number, right?
        case 1: case 2: case 3: case 4: return Unsigned<4>();
        case 5: case 6: case 7: case 8: return Unsigned<8>();
        case 9: case 10: case 11: case 12:
        case 13: case 14: case 15: case 16: return Unsigned<16>();
        // ....
        default: return Unsigned<128>(); // wow, a 1 Kib number!
    } // end switch
    exit(1); // this point *shouldn't* be reachable!
} // end function
I personally like the approach. However, I don't know what I can use to specify the return type. It doesn't actually "solve" the problem, it only degrades its severity by a degree. I'm sure the trick with the switch would work, since the instantiations are from compile-time constants; it only changes which of them will take place.
The only viable help to declare the return type seems to be the new C++0x/C++11 decltype construct, which would allow me to obtain the adequate type, something like this (if I understood the feature correctly):
decltype(Unsigned<N>) GetMeAnUnsigned(size_t S) {
    // .. do some choices that originate an N
    return Unsigned<N>();
}
... or something like that. I haven't gone into C++0x beyond auto (for iterators) yet, so the first question would be: would features like decltype or auto help me achieve what I want (runtime selection of the instantiation, even if limited)?
For an alternative, I was thinking that if the problem was the relation between my classes, I could make them all a "kind-of" Base by deriving the template itself:

template <size_t N>
class Unsigned : private UnsignedCommon { // ...

... but I left that approach on the back burner because, well, one doesn't do that (make everything a "kind-of") with built-ins; plus, for the cases where one does actually treat them as a common class, it requires initializing statics, returning pointers, and leaving the client to destruct, if I recall correctly. Second question then: did I go wrong in discarding this alternative too early?
In a nutshell, your problem is no different from that of the built-in integral types. Given a short, you can't store large integers in it. And you can't decide at runtime which type of integer to use, unless you use a switch or similar to choose between several predefined options (short, int, long, long long, for example; or in your case, Unsigned<4>, Unsigned<8>, Unsigned<256>). The size cannot be computed dynamically at runtime, in any way.
You have to either define a dynamically sized type (similar to std::vector), where the size is not a template parameter, so that a single type can store any type of integer (and then accept the loss of efficiency that implies), or accept that the size must be chosen at compile-time, and the only option you have for handling "arbitrary" integers is to hardcode a set of predefined sizes and choose between them at runtime.
decltype won't solve your problem either. It is fairly similar to auto, it works entirely at compile-time, and just returns the type of an expression. (The type of 2+2 is int and the compiler knows this at compiletime, even though the value 4 is only computed at runtime)
The problem you are facing is quite common. Templates are resolved at compile time, while you need to change your behavior at runtime. As much as you might want to do that with the mythical one extra layer of indirection the problem won't go away: you cannot choose the return type of your function.
Since you need to perform the operations based on runtime information you must fall back to using dynamic polymorphism (instead of the static polymorphism that templates provide). That will imply using dynamic allocation inside the GetMeAnUnsigned method and possibly returning a pointer.
There are some tricks that you can play, like hiding the pointer inside a class that offers the public interface and delegates to an internal allocated object, in the same style as boost::any, so that the user sees a single type even if the actual object is chosen at runtime. That will make the design harder; I am not sure how much more complex the code will be, but you will need to really think about what minimal interface the internal class hierarchy must offer to fulfill the requirements of the external interface -- this seems like a really interesting problem to tackle...
You can't directly do that. Each unsigned with a separate number has a separate type, and the compiler needs to know the return type of your method at compile time.
What you need to do is have an Unsigned_base base class, from which the Unsigned<N> instances derive. You can then have your GetMeAnUnsigned method return a pointer to Unsigned_base. That can then be cast using something like dynamic_cast<Unsigned<8>*>().
You might be better off having your function return a union of the possible unsigned<n> types, but that's only going to work if your type meets the requirements of being a union member.
EDIT: Here's an example:
struct UnsignedBase
{
    virtual ~UnsignedBase() {}
};

template <std::size_t c>
class Unsigned : public UnsignedBase
{
    // Implementation goes here.
};

std::unique_ptr<UnsignedBase> GiveMeAnUnsigned(std::size_t i)
{
    std::unique_ptr<UnsignedBase> result;
    switch (i)
    {
    case 42:
        result.reset(new Unsigned<23>());
        break; // without this break, the default case would overwrite the result
    default:
        result.reset(new Unsigned<2>());
    }
    return result;
}
It's a very common problem indeed, last time I saw it was with matrices (dimensions as template parameters and how to deal with runtime supplied value).
It's unfortunately an intractable problem.
The issue is not specific to C++ per se, it's specific to strong typing coupled with compile-time checking. For example Haskell could exhibit a similar behavior.
There are two ways to deal with this:
1. You use a switch, not to create the type, but to launch the full computation: main is almost empty and only serves to read the input value.
2. You use boxing: you put the actual type in a generic container (either a hand-crafted class or boost::any or boost::variant) and then, when necessary, unbox the value for specific treatment.
I personally prefer the second approach.
The easiest way to do this is to use a base class (interface):
struct UnsignedBase: boost::noncopyable
{
    virtual ~UnsignedBase() {}

    virtual UnsignedBase* clone() const = 0;
    virtual size_t bytes() const = 0;

    virtual void add(UnsignedBase const& rhs) = 0;
    virtual void substract(UnsignedBase const& rhs) = 0;
};
Then you wrap this class in a simple manager to ease memory management for clients (you hide the fact that you rely on heap allocation + unique_ptr):
class UnsignedBox
{
public:
    explicit UnsignedBox(std::string const& integer);

    template <size_t N>
    explicit UnsignedBox(Unsigned<N> const& integer);

    size_t bytes() const { return mData->bytes(); }

    void add(UnsignedBox const& rhs) { mData->add(*rhs.mData); }
    void substract(UnsignedBox const& rhs) { mData->substract(*rhs.mData); }

private:
    std::unique_ptr<UnsignedBase> mData;
};
Here the virtual dispatch takes care of unboxing (somewhat); you can also unbox manually using a dynamic_cast (or a static_cast if you know the number of digits):
void func(UnsignedBase* i)
{
    if (Unsigned<2>* ptr = dynamic_cast<Unsigned<2>*>(i))
    {
        // ...
    }
    else if (Unsigned<4>* ptr = dynamic_cast<Unsigned<4>*>(i))
    {
        // ...
    }
    // ...
    else
    {
        throw UnableToProceed(i);
    }
}