How does Clojure's optimiser work, and where is it?

I am new to Clojure, but not to lisp. A few of the design decisions look strange to me - specifically requiring a vector for function parameters and explicitly requesting tail calls using recur.
Translating lists to vectors (and vice versa) is a standard operation for an optimiser. Tail calls can be converted to iteration by rewriting to equivalent Clojure before compiling to bytecode. The [] and recur syntax suggest that neither of these optimisations is present in the current implementation.
I would like a pointer to where in the implementation I can find any/all source-to-source transformation passes. I don't speak Java very well so am struggling to navigate the codebase.
If there isn't any optimisation before the function-by-function translation to the JVM's bytecode, I'd be interested in the design rationale for this. Perhaps it is to achieve faster compilation?
Thank you.

There is no explicit optimizer package in the compiler code. Any optimizations are done "inline". Some can be enabled or disabled via compiler flags.
Observe that literal vectors for function parameters are a syntactic choice about how functions are represented in source code. Whether the parameters are written as a vector, a list, or anything else does not affect runtime behaviour, so there is nothing there to optimize.
Regarding automatic recur, Rich Hickey explained his decision as follows:
When speaking about general TCO, we are not just talking about
recursive self-calls, but also tail calls to other functions. Full TCO
in the latter case is not possible on the JVM at present whilst
preserving Java calling conventions (i.e without interpreting or
inserting a trampoline etc).
While making self tail-calls into jumps would be easy (after all,
that's what recur does), doing so implicitly would create the wrong
expectations for those coming from, e.g. Scheme, which has full TCO.
So, instead we have an explicit recur construct.
Essentially it boils down to the difference between a mere
optimization and a semantic promise. Until I can make it a promise,
I'd rather not have partial TCO.
Some people even prefer 'recur' to the redundant restatement of the
function name. In addition, recur can enforce tail-call position.

specifically requiring a vector for function parameters
Most other lisps build structures out of syntactic lists. For an associative "map", for example, you build a list of lists. For a "vector", you make a list. For a conditional switch-like expression, you make a list of lists of lists. Lots of lists, lots of parentheses.
Clojure has made it an obvious goal to make the syntax of lisp more readable and less redundant. A map, set, list, and vector each have their own syntactic delimiters, so they jump out at the eye, while also providing specific functionality that you would otherwise have to request explicitly with a function call if they were all lists. In addition to these structural primitives, other forms like cond minimize the parentheses by removing one layer of nesting for each test/result pair in the expression, rather than wrapping each pair in yet another set of parentheses. This philosophy is widespread throughout the language and its core library, so the code is more readable and elegant.
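As a hedged illustration of that point (the example function is made up):

{:a 1 :b 2}   ; a map literal
[1 2 3]       ; a vector literal
#{1 2 3}      ; a set literal
'(1 2 3)      ; a list literal

;; cond takes flat test/result pairs rather than wrapping each pair
;; in its own parentheses:
(defn sign [x]
  (cond
    (neg? x) :negative
    (pos? x) :positive
    :else    :zero))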
Function parameters as a vector are just part of this syntax. It's not about whether the language can convert a list to a vector easily, it's about how the language requires the placement of function parameters in a function definition -- and it does so by explicitly requiring a vector. And in fact, you can see this clearly in the source for defn:
https://github.com/clojure/clojure/blob/clojure-1.7.0/src/clj/clojure/core.clj#L296
It's just a requirement for how a function is written, that's all.

Related

Are there any more useful use-cases of functors?

I am trying to understand cases that require using functors. Most of the answers on Stack Overflow and other websites emphasize being able to define different adders or multipliers when discussing the benefits of functors.
Can the use of functors go beyond them? What are some other uses of functors?
More often than not, functors are used with other API calls that need some kind of function object. For example, sorting vectors of user-defined objects which don't have operator() or operator< (etc.) defined.
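For instance, a minimal sketch of that common case (the types and names here are illustrative):

#include <algorithm>
#include <string>
#include <vector>

struct Person {
    std::string name;
    int age;
};

struct ByAge {  // the functor: an ordinary type with operator()
    bool operator()(const Person& a, const Person& b) const {
        return a.age < b.age;
    }
};

void sort_people(std::vector<Person>& people) {
    std::sort(people.begin(), people.end(), ByAge{});
    // or, since C++11, the equivalent lambda:
    // std::sort(people.begin(), people.end(),
    //           [](const Person& a, const Person& b) { return a.age < b.age; });
}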
There are some cases where a set of functors may prove useful. One such case comes when you have several algorithms which functionally do the same thing, but achieve varying levels of accuracy. This happens a lot with some numeric optimization problems: given the general form of a matrix, we might use a different technique to find the solution of a linear system (e.g., sparse vs. dense problem matrices can employ different algorithms to invert the matrix).
In particular, you should consider functors versus lambdas. In modern versions of C++, there really isn't a need to specify a functor unless you're implementing a function/method that needs a functor (or lambda) as an argument. There are some cases to consider: Do you need a unit-test? Is the functor itself a prototype of future functionality? etc.
ADDENDUM: The key thing to consider is that the use of a functor/lambda ultimately boils down to a design decision. As @t.niese noted in the comments, you could just use functions in combination with template arguments. In addition to the considerations above, consider whether you can make a compile-time or run-time assessment of the needed functionality.
Additionally, as you make design decisions, you may want to consider "Is there a need for this function to be used outside of this specific context?" If the answer is no, that's a compelling argument to choose a lambda over a free function. With regard to functors specifically, this was an important pattern before lambdas were added to the standard. Typically they are defined in a somewhat private context (frequently in implementation files, so once compiled into a library they are effectively hidden from users of the API). Now with lambdas, you can simply define them within another function or even inline as a function argument, instead of pre-defining them before they are needed.

Why haven't modern compilers removed the need for expression templates?

The standard pitch for expression templates in C++ is that they increase efficiency by removing unnecessary temporary objects. Why can't C++ compilers already remove these unnecessary temporary objects?
This is a question that I think I already know the answer to but I want to confirm since I couldn't find a low-level answer online.
Expression templates essentially allow/force an extreme degree of inlining. However, even with inlining, compilers cannot optimize out calls to operator new and operator delete because they treat those calls as opaque since those calls can be overridden in other translation units. Expression templates completely remove those calls for intermediate objects.
These superfluous calls to operator new and operator delete can be seen in a simple example where we only copy:
#include <array>
#include <vector>

std::vector<int> foo(std::vector<int> x)
{
    std::vector<int> y{x};
    std::vector<int> z{y};
    return z;
}

std::array<int, 3> bar(std::array<int, 3> x)
{
    std::array<int, 3> y{x};
    std::array<int, 3> z{y};
    return z;
}
In the generated code, we see that foo() compiles to a relatively lengthy function with two calls to operator new and one call to operator delete while bar() compiles to only a transfer of registers and doesn't do any unnecessary copying.
Is this analysis correct?
Could any C++ compiler legally elide the copies in foo()?
However, even with inlining, compilers cannot optimize out calls to operator new and operator delete because they treat those calls as opaque since those calls can be overridden in other translation units.
Since C++14, this is no longer true: allocation calls can be optimized out or reused under certain conditions:
[expr.new#10] An implementation is allowed to omit a call to a replaceable global allocation function. When it does so, the storage is instead provided by the implementation or provided by extending the allocation of another new-expression. [conditions follow]
So foo() may be legally optimized to something equivalent to bar() nowadays ...
Expression templates essentially allow/force an extreme degree of inlining
IMO the point of expression templates is not so much inlining per se; it's rather about exploiting the symmetries of the type system of the domain-specific language the expression models.
For example, when you multiply three, say, Hermitian matrices, an expression template can use a space-time optimized algorithm exploiting the fact that the product is associative and that Hermitian matrices are adjoint-symmetric, resulting in a reduction of the total operation count (and possibly even better accuracy). And all of this occurs at compile time.
Conversely, a compiler cannot know what a Hermitian matrix is; it is constrained to evaluating the expression the brute-force way (according to your implementation's floating-point semantics).
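To make the mechanism itself concrete, here is a minimal, purely illustrative expression-template sketch (none of this comes from a real library): the overloaded operator builds a lightweight expression object, and evaluation is deferred until assignment, where everything fuses into one loop with no intermediate vectors and no intermediate allocations.

#include <cstddef>
#include <vector>

// CRTP base so that operator+ only matches expression types.
template <typename E>
struct Expression {
    const E& self() const { return static_cast<const E&>(*this); }
};

template <typename L, typename R>
struct Sum : Expression<Sum<L, R>> {
    const L& l;
    const R& r;
    Sum(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec : Expression<Vec> {
    std::vector<double> data;
    explicit Vec(std::size_t n) : data(n) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    template <typename E>
    Vec& operator=(const Expression<E>& e) {  // evaluation happens here, once
        for (std::size_t i = 0; i < size(); ++i) data[i] = e.self()[i];
        return *this;
    }
};

template <typename L, typename R>
Sum<L, R> operator+(const Expression<L>& l, const Expression<R>& r) {
    return Sum<L, R>(l.self(), r.self());
}

// usage: Vec a(3), b(3), c(3), out(3);
//        out = a + b + c;   // builds Sum<Sum<Vec, Vec>, Vec>; one fused loop

A real linear algebra library layers the domain knowledge (shapes, symmetries, algorithm selection) on top of this same skeleton.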
There are two kinds of expression templates.
One kind is about domain specific languages embedded directly in C++. Boost.Spirit turns expressions into recursive descent parsers. Boost.Xpressive turns them into regular expressions. Good old Boost.Lambda turns them into function objects with argument placeholders.
Obviously there is nothing a compiler can do to get rid of that need. It would take special-purpose language extensions to add the capabilities that the eDSL adds, like lambdas were added to C++11. But it's not productive to do that for every eDSL written; it would make the language gigantic and impossible to comprehend, among other problems.
The second kind is the expression templates that keep high-level semantics the same but optimize execution. They apply domain-specific knowledge to transform expressions into more efficient execution paths, while keeping the semantics the same. A linear algebra library might do that as Massimiliano explained in his answer, or a SIMD library like Boost.Simd might translate multiple operations into a single fused operation like multiply-add.
These libraries provide services that a compiler could, in theory, perform without modifying the language specification. However, in order to do so, the compiler would have to recognize the domain in question and have all the built-in domain knowledge to do the transformations. This approach is way too complex and would make compilers huge and even slower than they are.
An alternative approach to expression templates for these kinds of libraries would be compiler plugins, i.e. instead of writing a special matrix class that has all the expression template magic, you write a plugin for the compiler that knows about the matrix type and transforms the AST that the compiler uses. The problem with this approach is that either compilers would have to agree on a plugin API (not going to happen, they work too differently internally) or the library author has to write a separate plugin for every compiler they want their library to be usable (or at least performant) with.

What are 'if', 'define', 'lambda' in Scheme?

We have a course whose project is to implement a micro-scheme interpreter in C++. In my implementation, I treat 'if', 'define', 'lambda' as procedures, so it is valid in my implementation to eval 'if', 'define' or 'lambda', and it is also fine to write expressions like '(apply define (quote (a 1)))', which will bind 'a' to 1.
But I find that in Racket and in MIT Scheme, 'if', 'define', and 'lambda' are not evaluable; trying to evaluate them on their own is an error.
It seems that they are not procedures, but I cannot figure out what they are and how they are implemented.
Can someone explain these to me?
In the terminology of Lisp, expressions to be evaluated are forms. Compound forms (those which use list syntax) are divided into special forms (headed by special operators like let), macro forms, and function call forms.
The Scheme report doesn't use this terminology. It calls functions "procedures". Scheme's special forms are called "syntax". Macros are "derived expression types", individually introduced as "library syntax". (The motivation for this may be a conscious decision to blend into the CS academic mainstream by scrubbing some unfamiliar Lisp terminology. Algol has procedures and a BNF-defined syntax, Scheme has procedures and a BNF-defined syntax. That ticks off some sort of familiarity checkbox.)
Special forms (or "syntax") are recognized by interpreters and compilers as a set of special cases. The interpreter or compiler may handle these forms via function-like bindings in some internal table keyed on symbols, but it's not the program-visible binding namespace.
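Since the question is about a micro-Scheme written in C++, here is a hedged sketch of that dispatch (all types and names are illustrative toys; values are just numbers, and lambda is omitted for brevity):

#include <cstddef>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// A deliberately tiny model: an expression is a number, a symbol, or a
// compound form.
struct Expr {
    enum Kind { Number, Symbol, Form } kind;
    double num = 0;
    std::string sym;
    std::vector<Expr> form;   // only used when kind == Form
};

struct Env { std::map<std::string, double> vars; };

double eval(const Expr& e, Env& env);

double eval_form(const std::vector<Expr>& f, Env& env) {
    const std::string& head = f[0].sym;

    // Special forms are dispatched on the head symbol *before* any
    // argument is evaluated; they live in the evaluator, not in the
    // program-visible binding namespace.
    if (head == "if")                        // (if test then else)
        return eval(f[1], env) != 0 ? eval(f[2], env) : eval(f[3], env);
    if (head == "define") {                  // (define name expr)
        double val = eval(f[2], env);
        env.vars[f[1].sym] = val;
        return val;
    }

    // Ordinary calls evaluate all of their arguments first (toy: only '+').
    if (head == "+") {
        double sum = 0;
        for (std::size_t i = 1; i < f.size(); ++i) sum += eval(f[i], env);
        return sum;
    }
    throw std::runtime_error("unknown operator: " + head);
}

double eval(const Expr& e, Env& env) {
    switch (e.kind) {
        case Expr::Number: return e.num;
        case Expr::Symbol: return env.vars.at(e.sym);
        default:           return eval_form(e.form, env);
    }
}

The point is that define and if never appear in env.vars at all; applying them like procedures simply is not a case the evaluator's call path can reach.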
Setting up these associations in the regular namespace isn't necessarily wrong, but it could be problematic. If you want both a compiler and interpreter, but let has only one top-level binding, that will be an issue: who gets to install their procedure into that binding: the interpreter or compiler? (One way to resolve that is simple: make the binding values cons pairs: the car can be the interpreter function, the cdr the compiler function. But then these bindings are not procedures any more that you can apply.)
Exposing these bindings to the application is problematic anyway, because the semantics are so different between interpretation and compilation. If your implementation is interpreted, then calling the define binding as a function is possible; it has the effect of performing the definition. But in a compiled implementation, code depending on this won't work; define will be a function that doesn't actually define anything, but rather compiles: it calculates and returns a compiled fragment written in some intermediate representation.
About your implementation, the fact that (apply define (quote (a 1))) works in your implementation raises a bit of a red flag. Either you've made the environment parameter of the function optional, or it doesn't take one. Functions implementing special operators (or "syntax") need an environment parameter, not just the piece of syntax. (At least if we are developing a lexically scoped Scheme or Lisp!)
The fact that (apply define (quote (a 1))) works also suggests that your define function is taking quote and (a 1) as arguments. While that is workable, the usual approach for these kinds of syntax procedures is to take the whole form as one argument (and a lexical environment as another argument). If such a function can be called, the invocation looks something like (apply define (list '(define a 1) (null-environment 5))). The procedure itself will do any necessary destructuring of the syntax, and check it for validity: are there too many or too few parameters, and so on.

Are there any C++ language obstacles that prevent adopting D ranges?

This is a C++ / D cross-over question. The D programming language has ranges that, in contrast to C++ libraries such as Boost.Range, are not based on iterator pairs. The official C++ Ranges Study Group seems to have been bogged down in nailing down a technical specification.
Question: does the current C++11 or the upcoming C++14 Standard have any obstacles that prevent adopting D ranges, as well as a suitably rangefied version of <algorithm>, wholesale?
I don't know D or its ranges well enough, but they seem lazy and composable as well as capable of providing a superset of the STL's algorithms. Given their claim to success for D, it would seem very nice to have as a library for C++. I wonder how essential D's unique features (e.g. string mixins, uniform function call syntax) were for implementing its ranges, and whether C++ could mimic that without too much effort (e.g. C++14 constexpr seems quite similar to D compile-time function evaluation)
Note: I am seeking technical answers, not opinions whether D ranges are the right design to have as a C++ library.
I don't think there is any inherent technical limitation in C++ which would make it impossible to define a system of D-style ranges and corresponding algorithms in C++. The biggest language-level problem would be that C++ range-based for loops require that begin() and end() can be used on the ranges, but assuming we would go to the length of defining a library using D-style ranges, extending range-based for loops to deal with them seems a marginal change.
The main technical problem I have encountered when experimenting with algorithms on D-style ranges in C++ was that I couldn't make the algorithms as fast as my iterator (actually, cursor) based implementations. Of course, this could just be my algorithm implementations, but I haven't seen anybody providing a reasonable set of D-style range based algorithms in C++ which I could profile against. Performance is important, and the C++ standard library shall provide, at least, weakly efficient implementations of algorithms (a generic implementation of an algorithm is called weakly efficient if it is at least as fast when applied to a data structure as a custom implementation of the same algorithm using the same data structure and the same programming language). I wasn't able to create weakly efficient algorithms based on D-style ranges, and my objective is actually strongly efficient algorithms (similar to weakly efficient, but allowing any programming language and only assuming the same underlying hardware).
When experimenting with D-style range based algorithms I found the algorithms a lot harder to implement than iterator-based algorithms and found it necessary to deal with kludges to work around some of their limitations. Of course, not everything in the current way algorithms are specified in C++ is perfect either. A rough outline of how I want to change the algorithms and the abstractions they work with is on my STL 2.0 page. This page doesn't really deal much with ranges, however, as this is a related but somewhat different topic. I would rather envision iterator (well, really cursor) based ranges than D-style ranges, but the question wasn't about that.
One technical problem all range abstractions in C++ do face is having to deal with temporary objects in a reasonable way. For example, consider this expression:
auto result = ranges::unique(ranges::sort(std::vector<int>{ read_integers() }));
Independent of whether ranges::sort() or ranges::unique() are lazy or not, the representation of the temporary range needs to be dealt with. Merely providing a view of the source range isn't an option for either of these algorithms because the temporary object will go away at the end of the expression. One possibility could be to move the range if it comes in as an r-value, requiring different result types for ranges::sort() and ranges::unique() to distinguish the cases of the actual argument being either a temporary object or an object kept alive independently. D doesn't have this particular problem because it is garbage collected and the source range would, thus, be kept alive in either case.
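A hedged sketch of why a mere view is not an option for the temporary (every type and name here is hypothetical):

#include <vector>

// Hypothetical lazy result: a non-owning view over the input range.
template <typename Range>
struct view {
    Range* underlying;   // does not own or extend the lifetime of *underlying
};

template <typename Range>
view<Range> lazy_sort(Range&& r) { return { &r }; }   // hypothetical

int main() {
    auto v = lazy_sort(std::vector<int>{3, 1, 2});
    // The vector temporary was destroyed at the end of the previous
    // full-expression; v.underlying is now dangling. An owning result for
    // r-value arguments (or garbage collection, as in D) avoids this.
    (void)v;
}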
The sort/unique example above also shows one of the problems with possibly lazily evaluated algorithms: since any type, including types which can't be spelled out otherwise, can be deduced by auto variables or templated functions, there is nothing forcing the lazy evaluation at the end of an expression. Thus, the results from the expression templates can be obtained and the algorithm isn't really executed. That is, if an l-value is passed to an algorithm, it needs to be made sure that the expression is actually evaluated to obtain the actual effect. For example, any sort() algorithm mutating the entire sequence clearly does the mutation in place (if you want a version that doesn't do it in place, just copy the container and apply the in-place version; if you only have a non-in-place version you can't avoid the extra sequence, which may be an immediate problem, e.g., for gigantic sequences). Assuming it is lazy in some way, the l-value access to the original sequence provides a peek into the current status, which is almost certainly a bad thing. This may imply that lazy evaluation of mutating algorithms isn't such a great idea anyway.
In any case, there are some aspects of C++ which make it impossible to immediately adopt D-style ranges, although the same considerations also apply to other range abstractions. I'd think these considerations are, thus, somewhat out of scope for the question, too. Also, the obvious "solution" to the first of the problems (add garbage collection) is unlikely to happen. I don't know if there is a solution to the second problem in D. There may emerge a solution to the second problem (tentatively dubbed operator auto), but I'm not aware of a concrete proposal or of what such a feature would actually look like.
BTW, the Ranges Study Group isn't really bogged down by any technical details. So far, we have merely tried to find out what problems we are actually trying to solve and to scope out, to some extent, the solution space. Also, groups generally don't get any work done at all! The actual work is always done by individuals, often by very few individuals. Since a major part of the work is actually designing a set of abstractions, I would expect that the foundations of any results of the Ranges Study Group will be done by 1 to 3 individuals who have some vision of what is needed and of what it should look like.
My C++11 knowledge is much more limited than I'd like it to be, so there may be newer features which improve things that I'm not aware of yet, but there are three areas that I can think of at the moment which are at least problematic: template constraints, static if, and type introspection.
In D, a range-based function will usually have a template constraint on it indicating which type of ranges it accepts (e.g. forward range vs random-access range). For instance, here's a simplified signature for std.algorithm.sort:
auto sort(alias less = "a < b", Range)(Range r)
    if(isRandomAccessRange!Range &&
       hasSlicing!Range &&
       hasLength!Range)
{...}
It checks that the type being passed in is a random-access range, that it can be sliced, and that it has a length property. Any type which does not satisfy those requirements will not compile with sort, and when the template constraint fails, it makes it clear to the programmer why their type won't work with sort (rather than just giving a nasty compiler error from the middle of the templated function when it fails to compile with the given type).
Now, while that may just seem like a usability improvement over just giving a compilation error when sort fails to compile because the type doesn't have the right operations, it actually has a large impact on function overloading as well as type introspection. For instance, here are two of std.algorithm.find's overloads:
R find(alias pred = "a == b", R, E)(R haystack, E needle)
    if(isInputRange!R &&
       is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{...}

R1 find(alias pred = "a == b", R1, R2)(R1 haystack, R2 needle)
    if(isForwardRange!R1 && isForwardRange!R2 &&
       is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
       !isRandomAccessRange!R1)
{...}
The first one accepts a needle which is only a single element, whereas the second accepts a needle which is a forward range. The two are able to have different parameter types based purely on the template constraints and can have drastically different code internally. Without something like template constraints, you can't have templated functions which are overloaded on attributes of their arguments (as opposed to being overloaded on the specific types themselves), which makes it much harder (if not impossible) to have different implementations based on the genre of range being used (e.g. input range vs forward range) or other attributes of the types being used. Some work has been done in this area in C++ with concepts and similar ideas, but AFAIK, C++ is still seriously lacking in the features necessary to overload templates (be they templated functions or templated types) based on the attributes of their argument types rather than specializing on specific argument types (as occurs with template specialization).
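For comparison, the closest standard C++11 mechanism is tag dispatch on iterator categories, in the style in which std::advance is typically implemented; a minimal, hedged sketch:

#include <iterator>

template <typename It>
void my_advance_impl(It& it, int n, std::random_access_iterator_tag) {
    it += n;                           // O(1) when random access is available
}

template <typename It>
void my_advance_impl(It& it, int n, std::input_iterator_tag) {
    for (; n > 0; --n) ++it;           // O(n) fallback for weaker iterators
}

template <typename It>
void my_advance(It& it, int n) {
    my_advance_impl(it, n,
        typename std::iterator_traits<It>::iterator_category());
}

The selection happens through an extra function argument rather than declaratively in the signature, which is part of why attribute-based overloading reads so much better with D's template constraints.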
A related feature would be static if. It's the same as if, except that its condition is evaluated at compile time, and whether it's true or false will actually determine which branch is compiled in as opposed to which branch is run. It allows you to branch code based on conditions known at compile time. e.g.
static if(isDynamicArray!T)
{}
else
{}
or
static if(isRandomAccessRange!Range)
{}
else static if(isBidirectionalRange!Range)
{}
else static if(isForwardRange!Range)
{}
else static if(isInputRange!Range)
{}
else
    static assert(0, Range.stringof ~ " is not a valid range!");
static if can to some extent obviate the need for template constraints, as you can essentially put the overloads for a templated function within a single function. e.g.
R find(alias pred = "a == b", R, E)(R haystack, E needle)
{
    static if(isInputRange!R &&
              is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
    {...}
    else static if(isForwardRange!R && isForwardRange!E &&
                   is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
                   !isRandomAccessRange!R)
    {...}
}
but that still results in nastier errors when compilation fails and actually makes it so that you can't overload the template (at least with D's implementation), because overloading is determined before the template is instantiated. So, you can use static if to specialize pieces of a template implementation, but it doesn't quite get you enough of what template constraints get you to not need template constraints (or something similar).
Rather, static if is excellent for doing stuff like specializing only a piece of your function's implementation or for making it so that a range type can properly inherit the attributes of the range type that it's wrapping. For instance, if you call std.algorithm.map on an array of integers, the resultant range can have slicing (because the source range does), whereas if you called map on a range which didn't have slicing (e.g. the ranges returned by std.algorithm.filter can't have slicing), then the resultant range won't have slicing. In order to do that, map uses static if to compile in opSlice only when the source range supports it. Currently, map's code that does this looks like
static if (hasSlicing!R)
{
    static if (is(typeof(_input[ulong.max .. ulong.max])))
        private alias opSlice_t = ulong;
    else
        private alias opSlice_t = uint;

    static if (hasLength!R)
    {
        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return typeof(this)(_input[low .. high]);
        }
    }
    else static if (is(typeof(_input[opSlice_t.max .. $])))
    {
        struct DollarToken{}
        enum opDollar = DollarToken.init;

        auto opSlice(opSlice_t low, DollarToken)
        {
            return typeof(this)(_input[low .. $]);
        }

        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return this[low .. $].take(high - low);
        }
    }
}
This is code in the type definition of map's return type, and whether that code is compiled in or not depends entirely on the results of the static ifs, none of which could be replaced with template specializations based on specific types without having to write a new specialized template for map for every new type that you use with it (which obviously isn't tenable). In order to compile in code based on attributes of types rather than with specific types, you really need something like static if (which C++ does not currently have).
The third major item which C++ is lacking (and which I've more or less touched on throughout) is type introspection. The fact that you can do something like is(typeof(binaryFun!pred(haystack.front, needle)) : bool) or isForwardRange!Range is crucial. Without the ability to check whether a particular type has a particular set of attributes or that a particular piece of code compiles, you can't even write the conditions which template constraints and static if use. For instance, std.range.isInputRange looks something like this
template isInputRange(R)
{
    enum bool isInputRange = is(typeof(
    {
        R r = void;       // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range
    }));
}
It checks that a particular piece of code compiles for the given type. If it does, then that type can be used as an input range. If it doesn't, then it can't. AFAIK, it's impossible to do anything even vaguely like this in C++. But to sanely implement ranges, you really need to be able to do stuff like have isInputRange or test whether a particular type compiles with sort - is(typeof(sort(myRange))). Without that, you can't specialize implementations based on what types of operations a particular range supports, you can't properly forward the attributes of a range when wrapping it (and range functions wrap their arguments in new ranges all the time), and you can't even properly protect your function against being compiled with types which won't work with it. And, of course, the results of static if and template constraints also affect the type introspection (as they affect what will and won't compile), so the three features are very much interconnected.
Really, the main reasons that ranges don't work very well in C++ are the same reasons that metaprogramming in C++ is primitive in comparison to metaprogramming in D. AFAIK, there's no reason that these features (or similar ones) couldn't be added to C++ and fix the problem, but until C++ has metaprogramming capabilities similar to those of D, ranges in C++ are going to be seriously impaired.
Other features such as mixins and Uniform Function Call Syntax would also help, but they're nowhere near as fundamental. Mixins would help primarily with reducing code duplication, and UFCS helps primarily with making it so that generic code can just call all functions as if they were member functions so that if a type happens to define a particular function (e.g. find) then that would be used instead of the more general, free function version (and the code still works if no such member function is declared, because then the free function is used). UFCS is not fundamentally required, and you could even go the opposite direction and favor free functions for everything (like C++11 did with begin and end), though to do that well, it essentially requires that the free functions be able to test for the existence of the member function and then call the member function internally rather than using their own implementations. So, again you need type introspection along with static if and/or template constraints.
As much as I love ranges, at this point, I've pretty much given up on attempting to do anything with them in C++, because the features to make them sane just aren't there. But if other folks can figure out how to do it, all the more power to them. Regardless of ranges though, I'd love to see C++ gain features such as template constraints, static if, and type introspection, because without them, metaprogramming is way less pleasant, to the point that while I do it all the time in D, I almost never do it in C++.

Conditional execution using if or logic

When using objects I sometimes test for their existence, e.g.
if(object)
    object->Use();
Could I just use
(object && object->Use());
and what differences are there, if any?
They're the same assuming object->Use() returns something that's valid in a boolean context; if it returns void the compiler will complain that a void return isn't being ignored like it should be, and other return types that don't fit will give you something like no match for 'operator&&'
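A minimal sketch of the failing case (Widget is a made-up type):

struct Widget {
    void Use() {}                     // note: returns void
};

void f(Widget* object) {
    if (object)
        object->Use();                // fine
    // (object && object->Use());    // error: no viable '&&' with a
    //                               // 'void' right-hand operand
}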
One enormous difference is that the two function very differently if operator&& has been overloaded. Short circuit evaluation is only provided for the built in operators. In the case of an overloaded operator, both sides will be evaluated [in an unspecified order; operator&& also does not define a sequence point in this case], and the results passed to the actual function call.
If object and the return type of object->Use() are both primitive types, then you're okay. But if either are of class type, then it is possible object->Use() will be called even if object evaluates to false.
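A hedged sketch of that trap, using an illustrative class type:

#include <iostream>

struct Tracer { bool value; };

// A user-provided operator&&: both operands are evaluated before the call,
// so the built-in short-circuiting is lost.
Tracer operator&&(Tracer a, Tracer b) { return Tracer{a.value && b.value}; }

Tracer check()  { std::cout << "check\n";  return Tracer{false}; }
Tracer action() { std::cout << "action\n"; return Tracer{true};  }

int main() {
    check() && action();   // prints both lines: action() runs even though
                           // check() yielded false
}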
They are effectively the same thing but the second is not as clear as your first version, whose intent is obvious. Execution speed is probably no different, either.
Functionally they are the same, and a decent compiler should be able to optimize both equally well. However, writing an expression with operators like this and not checking the result is very odd. Perhaps if this style were common, it would be considered concise and easy to read, but it's not - right now it's just weird. You may get used to it and it could make perfect sense to you, but to others who read your code, their first impression will be, "What the heck is this?" Thus, I recommend going with the first, commonly used version if only to avoid making your fellow programmers insane.
When I was younger I think I would have found that appealing. I always wanted to trim down lines of code, but I realized later on that when you deviate too far from the norm, it'll bite you in the long run when you start working with a team. If you want to achieve zen-programming with minimum lines of code, focus on the logic more than the syntax.
I wouldn't do that. If you overloaded operator&& for the pointer type and the class type returned by object->Use(), all bets are off and there is no short-circuit evaluation.
Yes, you can. You see, the C language, as well as C++, is a mix of two fairly independent worlds, or realms, if you will. There's the realm of statements and the realm of expressions. Each one can be seen as a separate sub-language in itself, with its own implementations of basic programming constructs.
In the realm of statements, sequencing is achieved by the ; at the end of a single statement or by the } at the end of a compound statement. In the realm of expressions, sequencing is provided by the , operator.
Branching in the realm of statements is implemented by the if statement, while in the realm of expressions it can be implemented by either the ?: operator or by use of the short-circuit evaluation properties of the && and || operators (which is what you just did, assuming your expression is valid).
The realm of expressions has no loops, but it has recursion that can replace them (recursion requires function calls, though, which inevitably forces us to switch to statements).
Obviously these realms are far from being equivalent in their power. C and C++ are languages dominated by statements. However, often one can implement fairly complex constructs using the language of expressions alone.
What you did above does implement equivalent branching in the language of expressions. Keep in mind that many people will find it hard to read in normal code (mostly because, once again, they are used to statement-dominated C and C++ code). But it often comes in very handy in some specific contexts, like template metaprogramming, for one example.
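As a hedged illustration of branching and sequencing carried out entirely in the expression realm:

#include <cstdio>

int main() {
    int x = 5;
    // branching via ?: and sequencing via the ',' operator, with no
    // statements inside the expression itself:
    int y = x > 0 ? (std::printf("positive\n"), x * 2)
                  : (std::printf("non-positive\n"), 0);
    return y == 10 ? 0 : 1;
}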