I want to compute some information in parallel and use the result outside the cobegin.
To be more precise, my requirement is to retrieve a domain (and other non-primitive types) like this:
var a, b: domain(1, stridable=true);
cobegin {
  a = foo1();
  b = foo2();
}
foo3(a, b);
I am aware of sync/single types, but they do not work for domains.
Note:
Using out in the procedure parameters also doesn't work.
In order to make potential race conditions more explicit, variables used in parallel constructs are treated as though they were passed to a function with blank intent. For most types, this means they can be read, but not written to.
To make the variables modifiable within the parallel statement, you can use a task intent clause to give them ref intent.
cobegin with (ref a, ref b) {
  a = foo1();
  b = foo2();
}
The legal task intents are ref, in, const, const in and const ref. The out and inout intents are not supported as task intents because each task would copy a value out in an unspecified order, resulting in a race condition.
See the subsection "Task Intents" in the "Task Parallelism and Synchronization" section of the Chapel language spec for more details.
Related
Can anyone explain how lambda functions are represented in std::function? Is there an implicit conversion by the compiler and std::function used as a container?
I recently asked a slightly different question, which was marked as a duplicate of this one. The answer there was that the type of a lambda is not defined and is unspecified. I have found some code, included below, that appears to provide a container for a lambda. I have also included a Stroustrup quote that seems to contradict the claim that lambdas do not have a defined type, saying instead that a lambda has a closure type. This only confuses the matter further.
Update:
Partial answer regarding implementation of function<> is here.
Appreciate your guidance.
#include <iostream>
#include <vector>
#include <functional>

using namespace std;

static vector<function<void(int)>> cbl;   // callbacks returning void
static vector<function<int(int)>> cblr;   // callbacks returning int

class GT {
public:
    int operator()(int x) { return x; }   // a plain function object, for comparison
};

void f()
{
    auto op = [](int x) { cout << x << endl; };
    cbl.push_back(op);                    // closure stored in std::function<void(int)>
    for (auto& p : cbl)
        p(1);

    auto op2 = [](int x) { return x; };
    cblr.push_back(op2);                  // a closure and...
    cblr.push_back(GT());                 // ...a function object share one container
    for (auto& p : cblr)
        cout << p(99) << endl;
}

int main(int argc, char *argv[])
{
    f();
    return 0;
}
Compilation and result:
g++ -pedantic -Wall test130.cc && ./a.out
1
99
99
Stroustrup talks about this in The C++ Programming Language, 4th Ed., page 297:
To allow for optimized versions of lambda expressions, the type of a lambda expression is not defined. However, it is defined to be the type of a function object in the style presented in §11.4.1. This type, called the closure type, is unique to the lambda, so no two lambdas have the same type. Had two lambdas had the same type, the template instantiation mechanism might have gotten confused. A lambda is of a local class type with a constructor and a const member function operator()().
The type is there; you just don't know in advance what it is. Lambdas have a type - the standard simply says nothing about what that type is; it only gives the contracts that the type has to fulfill. It's up to the compiler implementers to decide what that type really is, and they don't have to tell you. It's not useful to know.
So you can deal with it just like you would deal with storage of any “generic” type. Namely: provide suitably aligned storage, then use placement new to copy-construct or move-construct the object in that storage. None of it is specific to std::function. If your job was to write a class that can store an arbitrary type, you’d do just that. And it’s what std::function has to do as well.
std::function implementers usually employ the small-object optimization. The class leaves some unused room in its body. If the object to be stored is of an alignment for which the empty room is suitable, and if it will fit in that unused room, then the storage will come from within the std::function object itself. Otherwise, it’ll have to dynamically allocate the memory for it. That means that e.g. capture of intrinsic vector types (AVX, Neon, etc) - if such is possible - will usually make a lambda unfit for small object optimization storage within std::function.
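A minimal sketch of both ideas, not real std::function internals: placement new into suitably aligned storage, with an inline buffer used when the callable fits and a heap allocation otherwise (the names are made up here, copying is omitted, and over-aligned types are ignored for brevity):

#include <cstddef>
#include <new>
#include <utility>

class callback {                        // stores any callable with signature void()
    static const std::size_t buf_size = 2 * sizeof(void*);
    alignas(std::max_align_t) unsigned char buf_[buf_size]; // small-object storage
    void* obj_;                         // points into buf_, or at heap memory
    bool  heap_;
    void (*invoke_)(void*);
    void (*destroy_)(void*, bool);

public:
    template<typename F>
    callback(F f)
        : heap_(sizeof(F) > buf_size || alignof(F) > alignof(std::max_align_t))
    {
        void* mem = heap_ ? ::operator new(sizeof(F)) : static_cast<void*>(buf_);
        obj_ = new (mem) F(std::move(f));          // placement new into chosen storage
        invoke_  = [](void* p) { (*static_cast<F*>(p))(); };
        destroy_ = [](void* p, bool heap) {
            static_cast<F*>(p)->~F();              // manual destructor call
            if (heap) ::operator delete(p);
        };
    }

    ~callback() { destroy_(obj_, heap_); }
    callback(const callback&) = delete;            // copy/move omitted for brevity
    callback& operator=(const callback&) = delete;

    void operator()() { invoke_(obj_); }
};

// e.g. callback cb([] { /* work */ }); cb();

A small capture-free lambda would land in buf_, while a lambda capturing a large array would take the heap path; that is the small-object decision described above.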
I'm making no claims as to whether or if the capture of intrinsic vector types is allowed, fruitful, sensible, or even possible. It's sometimes a minefield of corner cases and subtle platform bugs. I don't suggest anyone go and do it without full understanding of what's going on, and the ability to audit the resulting assembly as/when needed (implication: under pressure, typically at 4AM on the demo day, with the suits about to wake up soon - and already irked that they have to interrupt their golf play so early in the day just to watch the presenter sweat).
There is a difference between "does not have a type" and "type is unspecified". Unspecified means that it exists, but you don't get to know what it is. That is, it doesn't have a type name you can key in, but it does have a type.
op in your example is a variable with a type. You don't know what valid combination of letters would identify that type's name (and indeed, no valid combination of letters will identify that type's name). But the type can be computed via decltype(op).
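A small illustration of that point: the name is unutterable, but decltype recovers the type just fine (a sketch, not from the question):

#include <type_traits>

int main()
{
    auto op = [](int x) { return x + 1; };
    decltype(op) op2 = op;   // the closure type, recovered without ever naming it
    static_assert(std::is_same<decltype(op), decltype(op2)>::value,
                  "same unnamed type");
    return op2(41);          // returns 42
}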
A lambda has a type. You can write a function template that takes a lambda and calls it:
template<typename T>
class F {
    T t;
public:
    F(const T& lambda) : t(lambda) {}
    void call() { t(); }
};
That's a very basic approximation of what std::function does.
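Usage might look like this; make_f is a hypothetical helper added here because, pre-C++17, there is no class template argument deduction and the closure type cannot be spelled out:

#include <iostream>

// Hypothetical helper: deduces the unnamed closure type for F.
template<typename T>
F<T> make_f(const T& lambda) { return F<T>(lambda); }

int main()
{
    auto g = make_f([] { std::cout << "called\n"; });
    g.call();
}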
"Is there an implicit conversion by the compiler and std::function used as a container?" No, you don't need to convert them.
You keep claiming that lambdas "don't have a type", but that's not what "unspecified" means, nor what anybody said it means in the last three questions.
Lambdas have a type, you just don't know the types' names.
The standard doesn't specify the name of the type of any lambda. If such a type even has a name, the internal workings of the compiler know it, typeid().name() may reveal it, and error messages often reveal it.
But you don't, and cannot, know in advance what the name of the type of any given lambda is going to be. That is by design. So you cannot write that type out in your source code. That's fine; why would you want to?
Templates and auto don't need the name, because that's how templates work. The inner workings of std::function don't need to know the name of any lambda type; they just need to know whether the expression thing(args) is valid.
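A sketch of that idea, assuming nothing about the callable except that calling it with an int is a valid expression (expression SFINAE does the check; no type name is required):

#include <utility>

// Accepts any callable f for which f(42) is a valid expression; the closure's
// type name never appears, only the validity of the call expression matters.
template<typename F>
auto call_with_42(F&& f) -> decltype(std::forward<F>(f)(42))
{
    return std::forward<F>(f)(42);
}

// e.g. call_with_42([](int x) { return x * 2; }) == 84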
"What are explicit declaration & implicit declaration of variables in programming language concepts and their advantages and disadvantages?"
An explicit declaration is when you declare the variable first and give it a value later, e.g.:
String name; name = "yourname";
The advantage is that you can fill the variable using any algorithm or logic to produce the value. The disadvantage is that if you use the variable before giving it a value, there will be an error.
An implicit declaration is when you create the variable and give it a value in one step, e.g.:
String name = "yourname";
The advantage is that it is more practical in some situations.
Explicit means declaring a variable with its type, as in C. Implicit declaration is what happens when you assign a variable in Python. With explicit typing you have to cast between types; with implicit typing there is no need for casting.
Explicit variable declaration means that the type of the variable is declared before or when the variable is set. Implicit variable declaration means the type of the variable is assumed by the operators, but any data can be put in it.
In C,
int x = 5;
printf("%d\n", x - 5);
x = "test";            /* assigning a string to an int */
printf("%d\n", x - 5);
produces a compile-time error when you set x to "test",
but in Python,
x = 5; print(x-5); x = "test"; print(x-5);
will "compile" (python doesn't compile, but it will run the program) and give you a runt time error when you try to subtract from the string.
One advantage of implicit variables is that it makes it easier to write code without worrying about the behind-the-scenes data type; the compiler should pick the appropriate one based on future usage.
Another advantage is that you can flexibly type a variable to hold different things that may not even share a parent class. Doing this is risky, as you have no guarantees that the objects will be interpreted correctly by following code.
One disadvantage is that implicit variables come with no guarantees of what they are. A function that computes the difference between two numbers will not raise a compile-time error if the variables have strings in them. You passed in two variables; it is up to you to ensure they are the right type. It also makes reading code harder in some ways. var nextLocation = LeftHandedSmokeShifter(3.3) is completely legitimate code, but you have to look up the function to even guess what it is doing. string nextLocation = LeftHandedSmokeShifter(3.3) at least tells me that I should not be using the output for mathematical operations.
Strongly typed languages are always explicitly declared and typed, while weakly typed languages are mostly implicitly typed. If you can declare a variable with var, it is likely an implicitly typed language.
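For illustration (sticking with C++, which sits at the explicit end of that spectrum), auto shows that a compiler can infer a type from the initializer while the variable remains statically typed either way:

int  count = 5;           // explicit: the programmer writes the type out
auto total = count * 2;   // inferred: the compiler deduces int from the initializer
// total = "test";        // error either way: the static type cannot change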
The basic question:
Edit: v-The question-v
class foo {
public:
    constexpr foo() { }
    constexpr int operator()(const int& i) { return int(i); }
};
Performance is a non-trivial issue. How does the compiler actually compile the above? I know how I want it to be resolved, but how does the specification actually specify it will be resolved?
1) Seeing that the type int has a constexpr constructor, create an int object and compile the string of bytes that make up the value directly into the code?
2) Replace any calls to the overload with a call to int's constructor - which for some unknown reason isn't itself constexpr? (Inlining the call.)
3) Create a function, call the function, and have that function call int's constructor?
Why I want to know, and how I plan to use the knowledge
edit:v-Background only-v
The real library I'm working with uses template arguments to decide how a given type should be passed between functions. That is, by reference or by value because the exact size of the type is unknown. It will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user friendly as I can sanely make them.
I expect a simple single-byte character to be passed around, in which case it should be passed by value. But I do not bar a 300-megabyte behemoth that does several minutes of recalculation every time its copy constructor is invoked, in which case passing by reference makes more sense. I have only a list of requirements that a type must comply with, not a set cap on what a type can or cannot do.
Why I want to know the answer to my question is so I can in good faith make a function object that accepts this unknown template argument and then decides how, when, or even how much of an object should be copied - via a virtual member function and a pointer allocated with new if so required. If the compiler resolves constexpr badly, I need to know so I can abandon this line of thought and/or find a new one. Again, it will be a user's responsibility to work within the limits I give them, but I want these limits to be as light and user-friendly as I can sanely make them.
Edit: Thank you for your answers. The only real question was the second sentence, and it has now been answered. Everything else is background. If more background is required, allow me to restate the above:
I have a template with four arguments. The goal of the template is a routing protocol, be that TCP/IP (unlikely) or node-to-node within a game (possible). The first two arguments are for data storage; they have no requirements beyond a list of operators each must support. The last two define how the data is passed within the template. By default this is by reference; for performance and freedom of use, this can be changed to pass information by value at a user's request.
Each is expected to be a single byte long, but in the case of a metric for an EIGRP- or OSPF-like protocol, the second template argument could be a compound of a dozen or more different variables, each taking a non-trivial time to copy or recompute.
For ease of use, I am investigating a function object that accepts the third and fourth template arguments to handle special cases and polymorphic classes that would otherwise fail to function or copy correctly. The goal is not to force a user to rebuild their objects from scratch. This would require planning for virtual functions to perform deep copies, or any number of other unknown oddities. The usefulness of the function object depends on how sanely a compiler can be depended on not to generate a cascade of function calls.
More helpful I hope?
The C++11 standard doesn't say anything about how constexpr will be compiled down to machine instructions. The standard just says that expressions that are constexpr may be used in contexts where a compile time constant value is required. How any particular compiler chooses to translate that to executable code is an implementation issue.
Now in general, with optimizations turned on you can expect a reasonable compiler to not execute any code at runtime for many uses of constexpr but there aren't really any guarantees. I'm not really clear on what exactly you're asking about in your example so it's hard to give any specifics about your use case.
constexpr expressions are not special. For all intents and purposes, they're basically const unless the context they're used in is constexpr and all variables/functions involved are also constexpr. How the compiler chooses to handle this is implementation-defined; the Standard never deals with implementation details because it speaks in abstract terms.
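A minimal sketch of that distinction (square is a hypothetical function, not from the question): the same constexpr function is forced to fold at compile time in one context and is an ordinary call in the other:

constexpr int square(int x) { return x * x; }

int arr[square(4)];       // constexpr context: square(4) must be a constant expression

int at_runtime(int n)
{
    return square(n);     // ordinary call; evaluated at run time (or inlined)
}                         // however the implementation sees fit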
As you probably know, C++11 introduces the constexpr keyword.
C++11 introduced the keyword constexpr, which allows the user to guarantee that a function or object constructor is a compile-time constant.
[...]
This allows the compiler to understand, and verify, that [function name] is a compile-time constant.
My question is why there are such strict restrictions on the form of the functions that can be declared constexpr. I understand the desire to guarantee that the function is pure, but consider this:
The use of constexpr on a function imposes some limitations on what that function can do. First, the function must have a non-void return type. Second, the function body cannot declare variables or define new types. Third, the body may only contain declarations, null statements and a single return statement. There must exist argument values such that, after argument substitution, the expression in the return statement produces a constant expression.
That means that this pure function is illegal:
constexpr int maybeInCppC1Y(int a, int b)
{
if (a>0)
return a+b;
else
return a-b;
//can be written as return (a>0) ? (a+b):(a-b); but that isnt the point
}
Also you can't define local variables... :(
So I'm wondering: is this a design decision, or do compilers suck when it comes to proving that a function is pure?
The reason you'd need to write statements instead of expressions is that you want to take advantage of the additional capabilities of statements, particularly the ability to loop. But to be useful, that would require the ability to declare variables (also banned).
If you combine a facility for looping, with mutable variables, with logical branching (as in if statements) then you have the ability to create infinite loops. It is not possible to determine if such a loop will ever terminate (the halting problem). Thus some sources would cause the compiler to hang.
By using recursive pure functions it is possible to cause infinite recursion, which can be shown to be equivalently powerful to the looping capabilities described above. However, C++ already has that problem at compile time - it occurs with template expansion - and so compilers already have to have a switch for "template stack depth" so they know when to give up.
So the restrictions seem designed to ensure that this problem (of determining if a C++ compilation will ever finish) doesn't get any thornier than it already is.
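For instance, compile-time recursion already exists through template instantiation, which is why compilers grew a depth switch (e.g. -ftemplate-depth in GCC and Clang); a classic illustration:

template<int N> struct Fact { static const int value = N * Fact<N - 1>::value; };
template<>      struct Fact<0> { static const int value = 1; };

static_assert(Fact<5>::value == 120, "instantiated entirely at compile time");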
The rules for constexpr functions are designed such that it's impossible to write a constexpr function that has any side-effects.
By requiring constexpr to have no side-effects it becomes impossible for a user to determine where/when it was actually evaluated. This is important since constexpr functions are allowed to happen at both compile time and run time at the discretion of the compiler.
If side-effects were allowed then there would need to be some rules about the order in which they would be observed. That would be incredibly difficult to define - even harder than the static initialisation order problem.
A relatively simple set of rules for guaranteeing these functions to be side-effect free is to require that they be just a single expression (with a few extra restrictions on top of that). This sounds limiting initially and rules out the if statement, as you noted. While that particular case would have no side-effects, it would have introduced extra complexity into the rules, and given that you can write the same things using the ternary operator or recursion, it's not really a huge deal.
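For example, here is how a loop-shaped computation gets written under the single-expression rule, with the conditional operator replacing if and recursion replacing iteration (a sketch, not from the question):

// C++11-style constexpr: one return expression, recursion instead of a loop.
constexpr int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(5) == 120, "evaluated at compile time");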
n2235 is the paper that proposed the constexpr addition to C++. It discusses the rationale for the design - the relevant quote seems to be this one, from a discussion on destructors, but it applies generally:
The reason is that a constant-expression is intended to be evaluated by the compiler at translation time just like any other literal of built-in type; in particular no observable side-effect is permitted.
Interestingly, the paper also mentions that a previous proposal suggested that the compiler figure out automatically which functions were constexpr, without the new keyword, but this was found to be unworkably complex - which seems to support my suggestion that the rules were designed to be simple.
(I suspect there will be other quotes in the references cited in the paper, but this covers the key point of my argument about the no side-effects)
Actually, the C++ standardization committee is thinking about removing several of these constraints for C++14. See the following working document: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3597.html
The restrictions could certainly be lifted quite a bit without enabling code which cannot be executed during compile time, or which cannot be proven to always halt. However, I guess it wasn't done because:
it would complicate the compiler for minimal gain; C++ compilers are quite complex as is
specifying exactly how much is allowed without violating the restrictions above would have been time-consuming, and given that desired features had already been postponed in order to get the standard out of the door, there probably was little incentive to add more work (and further delay) for little gain
some of the restrictions would have been either rather arbitrary or rather complicated (especially on loops, given that C++ doesn't have the concept of a native incrementing for loop: both the end condition and the increment code have to be explicitly specified in the for statement, making it possible to use arbitrary expressions for them)
Of course, only a member of the standards committee could give an authoritative answer whether my assumptions are correct.
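For what it's worth, under the relaxed rules that working document proposes (and C++14 adopted), the function from the question becomes legal essentially as written; a sketch assuming a C++14 compiler:

constexpr int maybeInCpp14(int a, int b)
{
    int result = 0;           // local variable: allowed in C++14
    if (a > 0)                // branching: allowed in C++14
        result = a + b;
    else
        result = a - b;
    return result;
}

static_assert(maybeInCpp14(1, 2) == 3, "still a constant expression");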
I think constexpr is just for const objects. I mean: you can now have static const objects like String::empty_string constructed statically (without hacking!). This may reduce the time spent before main is called. And static const objects may have member functions like .length(), operator==, and so on - that is why the 'expr' part is needed. In C you can create static constant structs like below:
static const Foos foo = { .a = 1, .b = 2, };
The Linux kernel has tons of structs of this kind. In C++ you can now do the same with constexpr.
Note: I dunno why, but it seems to me the code below should not be accepted either, just like the if version:
constexpr int maybeInCppC1Y(int a, int b) { return (a > 0) ? (a + b) : (a - b); }
I am looking for a tool able to detect ordered function call pairs, in a nested fashion, as shown below:
f()      // depth 0
    f()  // depth 1
    g()
g()
At each depth, a call to f() must be matched by a call to g(), forming a function call pair. This is particularly important for critical-section entry and exit.
In C++, one option is to wrap the calls to f() and g() in the constructor and destructor of a class and only call those functions by instantiating an instance of that class. For example,
struct FAndGCaller
{
    FAndGCaller()  { f(); }
    ~FAndGCaller() { g(); }
};
This can then be used in any scope block like so:
{
    FAndGCaller call_f_then_later_g; // calls f()
} // calls g()
Obviously in real code you'd want to name things more appropriately, and often you'll simply want to have the contents of f() and g() in the constructor and destructor bodies, rather than in separate functions.
This idiom of Scope-Bound Resource Management (SBRM), more commonly referred to as Resource Acquisition Is Initialization (RAII), is quite common.
You may abuse a for-loop for this.
#define SAVETHEDAY for (bool seen = ((void)f(), true); seen; seen = ((void)g(), false))
The comma operator ensures that your function f is executed before the dependent statement and g afterwards, e.g.:
SAVETHEDAY {
    SAVETHEDAY {
    }
}
Pros:
- Makes nesting levels clear.
- Works for C++ and C99.
- The pseudo for-loop will be optimized away by any decent compiler.
Cons:
- You might have surprises with break, return and continue inside the blocks, so g might not be called in such a situation.
- For C++, this is not safe against a throw inside; again, g might not be called.
- Will be frowned upon by many people, since it is in some sense extending the language(s).
- Will be frowned upon by many people, especially for C++, since macros that hide code are generally thought to be evil.
The problem with continue can be repaired by doing things a bit more cleverly.
The first two cons can be circumvented in C++ by using a dummy type as the for-loop variable, one that just has f and g in its constructor and destructor.
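A sketch of what that repair might look like, assuming the f() and g() from the question are in scope (FThenG is a made-up name); the destructor fires however the block is exited, covering break, return, and throw:

struct FThenG
{
    bool armed = true;
    FThenG()  { f(); }
    ~FThenG() { g(); }   // runs on any exit from the block, including exceptions
};

#define SAVETHEDAY for (FThenG tag; tag.armed; tag.armed = false)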
Scan through the code (that's the hard part) and every time you see an invocation of f(), increment a counter. Every time you see an invocation of g(), decrement the counter. At the end, the counter should be back to zero. If it ever goes negative, that's a problem as well (you had a call to g() that wasn't preceded by a matching call to f()).
Scanning the code accurately is the hard part, though - with C and (especially) C++, writing code to understand source code is extremely difficult. Offhand, I don't know of an existing tool for this particular job. You could undoubtedly get clang (for one example) to do it; while that would be a lot easier than doing it entirely on your own, it still won't be trivial.
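To make the counting idea concrete, here is a deliberately naive sketch that treats the source as a flat string; everything that makes the real job hard (comments, strings, macros, longer identifiers) will defeat it:

#include <iostream>
#include <string>

// Naive counter check: ++ on each literal "f()", -- on each literal "g()".
// A real tool needs an actual parser, which is exactly the hard part above.
bool balanced(const std::string& src)
{
    long depth = 0;
    for (std::string::size_type i = 0; i < src.size(); ++i) {
        if (src.compare(i, 3, "f()") == 0) ++depth;
        else if (src.compare(i, 3, "g()") == 0 && --depth < 0)
            return false;            // g() with no matching earlier f()
    }
    return depth == 0;               // every f() closed by a g()
}

int main()
{
    std::cout << balanced("f() f() g() g()") << '\n'; // 1: properly nested
    std::cout << balanced("f() g() g()") << '\n';     // 0: unmatched g()
}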
The Coccinelle tool for semantic searching and patching of C code is designed for this sort of task (see also this LWN article on the tool).