Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I recently did a code test for a company that is using C (and sort of C++) to write their own language. I was somewhat appalled at all the if statements in the code they sent me that had no braces. Initially I just thought they were hacks, but then I wondered whether they did it that way because it is actually (minimally) faster. Also, if you have seen the bit of code that caused the recent security breach in iOS, you'll note that curly braces would have thwarted the bug. Are they writing for speed as well?
This question is open to any language with C-style syntax, as I imagine there could be some differences.
Braces have nothing to do with speed in a compiled language.
In cases where it is optional, it is just a style preference, albeit one with a higher potential for mistakes (e.g. Apple's faux pas).
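For illustration, here is a simplified, hypothetical sketch of the kind of brace-less mistake being referred to (the same pattern as Apple's "goto fail" bug, not the actual code): the second goto is indented as if it were guarded by the if, but without braces it always executes, so the remaining checks are silently skipped.

int do_more_checks() { return -1; }   // hypothetical further validation

int verify(int err)
{
    if (err != 0)
        goto fail;
        goto fail;               // always runs, even when err == 0

    return do_more_checks();     // unreachable: the extra checks are skipped
fail:
    return err;                  // with err == 0 this still reports "success"
}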
All of these languages are compiled. The brace itself is not an instruction of any sort, it is simply a higher level syntactic element that you use to tell the compiler that a group of statements forms a coherent block of some kind. (The fact that it is a curly brace in many languages is probably more a matter of tradition than anything else.) It is similar in spirit to semicolons, parentheses, colons, etc. It is nothing more than a grammatical symbol used to help you express your program accurately to the compiler.
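As a minimal sketch of that point: the two functions below differ only in braces, and a compiler will generate identical machine code for them (easy to verify by comparing the assembly output, e.g. with g++ -S or Compiler Explorer).

// Both functions compile to the same machine code; the braces are purely
// syntactic and leave no trace in the output.
int without_braces(int x)
{
    if (x > 0)
        return 1;
    return 0;
}

int with_braces(int x)
{
    if (x > 0) {
        return 1;
    }
    return 0;
}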
As far as I know there is no processor or virtual machine that has the equivalent of an fyi_curly_brace_was_here instruction.
This question is akin to asking if white-space or extra semicolons affect performance in compiled languages - these are all either optional formatting or necessary syntactic elements.
The reason we mention "compiled" languages is that certain interpreted languages, where the code is parsed as it is executed, could conceivably incur a modest speed penalty just due to parsing, but even in those types of languages, the effect would likely be completely negligible compared to whatever else the code itself is doing.
In compiled languages like C or C++, the existence or non-existence of brackets cannot make the actual program faster.
My guess: They just hacked it in faster without them.
No. Compiled code is going to be the same.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am a beginner Clojure programmer.
A book I am reading says that an advantage of Clojure is that code which is not evaluated is data.
But I do not understand what that means.
So I would like some example code and an explanation to help me understand.
I am Korean, so I would appreciate your understanding if my sentences are awkward or if I have failed to observe proper etiquette.
I think the question you are asking is: why is it an advantage for a programming language that its source code is available within the language as data (and in particular as structured data, not just as strings)?
The reason it is an advantage is simple: once you have access to a data structure representing the source of a program you can manipulate that structure: you can write programs that manipulate other programs.
A particularly good case is where the surface form of the language and the data structure which represents it is rather 'low-commitment': it doesn't encode much meaning of the program, just its syntactic form.
Then you can write programs in this low-commitment language which include constructs that don't yet exist in the language, and other programs which take those programs and turn them into programs that use only constructs which already exist in the language.
And you can keep doing this. So you can start with a language which is whatever language you are given, and incrementally build it into a language which is the language you want.
Of course you can do this with almost any language: almost any language lets you read files into strings, parse those strings into some representation of a language with extensions, and then process that representation into the original language, which you then hand off to the compiler or interpreter.
But Lisp-family languages make this much easier for you:
they do the parsing-into-a-data-structure for you, so 'source code is data';
they have an intentionally low-commitment source form, leaving it up to you what given constructs mean;
they provide facilities to let you do this processing-of-source-code-into-other-source-code in a fairly painless way, by defining macros.
That's the advantage of having source code available as structured data in the language: it's a crucial part of the things you need to make implementing your own programming language (which is often really a programming jargon – a language which inherits all of a base language but adds something of its own on top of it) easy.
I think it is in "Mastering Clojure Macros" that I saw an example of "code is data" that made a lot of sense (at least to me).
Syntactically speaking this is valid Clojure code: it's a simple list of numbers and a symbol.
(1 + 1)
But it is not a valid program! If you evaluate this in your REPL it will throw an error because 1 isn't a function.
When Clojure reads this text, it produces a list (the same list) and allows things such as a macro to receive it (unevaluated), transform it and send it back. Perhaps that macro could simply swap the first two elements and return (+ 1 1)?
This would not have been possible in JavaScript for example:
var a = + 1 1; // Syntax error
The engine would have exploded way before you could try anything!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I know it's a trick to do boolean conversion. My question is primarily about the resource cost of writing it this way. Will the compiler just ignore the "!!" and do the implicit boolean conversion directly?
If you have any doubts you can check the generated assembly, noting that at the assembly level there is no such thing as a boolean type anyway. So yes, it's probably all optimised out.
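A quick way to convince yourself, assuming GCC or Clang: put both forms in a file and compare the assembly.

// bool_test.cpp - compile with: g++ -O2 -S bool_test.cpp
// Both functions typically produce identical assembly: the compiler sees
// through the double negation and emits a single comparison against zero.
bool with_bangs(int x) { return !!x; }
bool with_cast(int x)  { return static_cast<bool>(x); }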
As a rule of thumb, code that mixes types, and therefore necessitates type conversions, will run slower, although that is trumped by another rule of thumb: write clear code.
It depends.
If you limit attention just to basic types that are convertible to bool and can be an operand of the ! operator, then it depends on the compiler. Depending on the target system, the compiler may emit a sequence of instructions that gives the required effect, but not in the way you envisage. Also, a given compiler may treat things differently with different optimisation settings (e.g. compiling for debugging versus release).
The only way to be sure is to examine the code emitted by the compiler. In practice, it is unlikely to make much difference. As others have commented, you would be better off worrying about getting your code clear and working correctly than about the merits of premature optimisation techniques. If you have a real need (e.g. the operation is in a hotspot identified by a profiler) then you will have data to understand what the need is, and can identify realistic options to do something about it. Practically, I doubt there are many real-world cases where there would be any difference.
In C++, with user-defined types, all bets are off. There are many possibilities, such as a class that has an operator!() that returns a class type, or a class that has an operator!() but not an operator bool(). The list goes on, and there are many permutations.

There are cases where the compiler would be incorrect in doing such a transformation. For example, !!x would be expected to be equivalent to x.operator!().operator!(), but there is no actual requirement (coding guidelines aside) for that sequence to give the same net effect as x.operator bool(). Practically, I wouldn't expect many compilers to even attempt to identify an opportunity in such cases: the analysis would be non-trivial and would probably not give many practical benefits (optimising single instructions is rarely where the gains are to be made in compiler optimisation).

Again, it is better for the programmer to focus on getting code clear and correct rather than worrying about how the compiler optimises single expressions like this. For example, if calling an operator bool() is intended, then it is better to provide that operator AND write an expression that uses it (e.g. bool(x)) rather than hoping the compiler will convert a hack like !!x into a call of x.operator bool().
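As a hypothetical sketch of the user-defined-type problem (the class and its members are made up for illustration): here operator!() deliberately does not agree with operator bool(), so !!x and bool(x) give different results and a compiler must not "optimise" one into the other.

#include <iostream>

struct Odd {
    int v;
    // Deliberately inconsistent, but perfectly legal C++:
    bool operator!() const { return v > 0; }            // not the usual "v == 0"
    explicit operator bool() const { return v != 0; }
};

int main() {
    Odd x{5};
    std::cout << !!x << ' ' << bool(x) << '\n';   // prints "0 1"
}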
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 months ago.
Let me just say right off the bat that I'm not a programmer. I'm just a guy with an idea taking his first steps to make it a reality. I'm no stranger to programming, mind you, but some of the concepts and terminology here are way over my head, so I apologize in advance if this question has been answered before (i.e. Convert Python program to C/C++ code?).
I have an idea to create a simple A.I. network to analyze music data sent from a phone via cloud computing (I've got a guy for the cloud stuff). It will require a lot of memory and needs to be fast for the hard number-crunching. I had planned on doing it in Python, but have since learned that might not be such a good idea (Is Python faster and lighter than C++?).
Since Python is really the only gun I have in my holster, I was thinking of using a Python-to-C++ converter. But nothing comes without a price:
Is this an advantageous way to keep my code fast?
What's the give-and-take for using a converter?
Am I missing anything? I'm still new to this so I'm not even sure what questions to ask.
Thanks in advance.
Generally it's an awful way to write code, and does not guarantee that it will be any faster. Things which are simple and fast in one language can be complex and slow in another. You're better off either learning how to write fast Python code or learning C++ directly than fighting with a translator and figuring out how to make the generated code run acceptably.
If you want C++, use C++. Note, however, that PyPy has a bunch of benchmarks showing that it can be much faster than C; and with NumPy, which uses compiled extensions, numerical work becomes much faster and easier.
If you want to program in something statically compiled and a bit like Python, there's RPython.
Finally, you can do what NumPy does: use extensions written in C or C++ for most of your heavy computational lifting, where that appears to be appropriate, either because profiling shows a hotspot or because you need an extension to more easily do something involving Python's internals. Note that this will tie your code to a particular implementation.
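As a hedged sketch of that approach, assuming a binding library such as pybind11 (the module name fastmath and the function dot are made up for illustration):

// fastmath.cpp - a minimal pybind11 extension module. Build it as a Python
// extension (e.g. with setuptools or CMake) and call it as fastmath.dot(a, b).
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // automatic list <-> std::vector conversion
#include <vector>

// The hot inner loop lives in C++ ...
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
        s += a[i] * b[i];
    return s;
}

// ... and is exposed to Python here.
PYBIND11_MODULE(fastmath, m) {
    m.def("dot", &dot, "Dot product of two sequences of doubles");
}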
Similar to what was already stated, C++ may be faster in some areas and slower in others, and Python is exactly the same. In the end, any language is converted into machine code, and it is really up to the compiler to make it as efficient as it knows how. That said, it is better to pick one language and learn how to write fast and efficient code to do what you want.
No, because a significant part of good C++ performance comes from the possibility of choosing a better-performing architecture. It does not come magically just because "it is C++".
A simple, line-by-line translation from Python into C++ is unlikely to increase performance any more than just using something like Cython, so I think it is more reasonable to use Cython. Either can still be much worse than what a good developer can do with C++ from scratch. C++ simply provides more control over everything: the possibility of defining data types of exactly the needed width, fixed-size arrays on the stack, turning off array bounds checking in production, and the like.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
So I want to make a small (64k) demo, nothing super impressive, just for the coding experience. I've been wondering: what exactly counts towards the byte count? For example, I could embed Lua as a scripting language once I have a simple demo engine running, but since Python comes on pretty much every *nix computer, can I use its interpreter at no cost?
Some might argue that it's not in the spirit of the demoscene, but I do think it counts as milking every last byte. Plus, Lua is 50k and I don't want to write a smaller, custom interpreter (which would be buggy).
The general spirit of the thing is that, as a piece of art, any random person ought to be able to download and view your demo. So it's the base default install of the platform that you're concerned with. This is why most of the best demos target Windows; DirectX is universally available, and the ability to use those libraries dramatically reduces the amount of code in the demo executable.
The same is true of OSX, but other Linux/UNIX variants are really problematic, because there's often no such thing as a standard install. And good luck as far as drivers for hardware-accelerated 3D go.
That said, it's really up to the individual group or competition that you plan to submit your demo to. You'd be best off contacting one of the members or organizers to see what their rules are. If you're just doing this for yourself, to post on the web, then you get to decide what seems fair. The more restrictions you place on yourself, the more impressive the demo ends up being.
If you're really serious about a 64k demo, though, you'll use assembly, not an interpreted language. You only benefit from something like Python if you can get short text to expand into a very complicated function in the stdlib. Most of the places where that matters for a demo are related to graphics and sound, and Python's stdlib doesn't provide much (nor should it) in either regard.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
Are there any C++ code parsers that look for boolean expressions that can be simplified using boolean algebra?
I know that compilers do this already, but it would be nice to have a tool that points out such things so that one can actually improve the code's readability.
Humans.
You want to improve readability, and since readability is mostly a human thing it should be taught by a human.
Ask more experienced developers to review your expressions and give tips.
For example, see my answer here: What is the best way (performance-wise) to test whether a value falls within a threshold?
Although it doesn't work directly on C/C++ boolean expressions, one tool I've found very useful for simplifying complex boolean logic is Logic Friday. Unfortunately it's Windows-only, but fortunately it's free.
You can make the code more efficient by reducing the number of "if"s it needs to check, but simplification and better readability can't be achieved automatically.
http://www.freewarepalm.com/educational/booleanfunctionsimplificationtool.shtml
might be worth a try.
However, it is usually better to do this yourself, as readability and understandability are more important, and to let the compiler do the simplification.
This is such a bad idea! Programmers write code that reflects their thought processes. So boolean expressions as written by humans are already automatically optimised for human comprehension. Any attempt to improve on this programmatically is doomed to failure. The only context in which it might make sense is post-processing of tool-generated source code.
What you need is a tool that can parse C++, determine the meaning of its symbols, pick out boolean equations, and apply boolean simplification rules to them that don't violate the semantics.
A tool that can do this is our DMS Software Reengineering Toolkit with its C++ Front End. DMS is designed to carry out program analyses and source-to-source transformations on code. Using the C++ Front End, it can parse C++ to ASTs, build up symbol tables, infer the type of an expression, and apply rewrite rules.
One can code rewrite rules like this:
domain Cpp. -- tell DMS to use the C++ front end
rule factor_common_and_term(e1: condition, e2: condition, e3: condition):
  disjunctive_expression -> disjunctive_expression =
    " \e1 && \e2 || \e1 && \e3 " -> " \e1 && ( \e2 || \e3 ) "
    if no_side_effects(e1) /\ no_side_effects(e2);
to factor out a common condition. The rule has the name "factor_common_and_term" to distinguish it from the often hundreds of other rules one might write (e.g. "distribute_term", etc.). The e1, e2, e3 are metavariables representing arbitrary subexpressions (of type "condition" according to the grammar rules). The rewrite operates only on disjunctive_expressions; you could make this be just "expression", but then you would not get disjunctions nested inside other conditional expressions.

The rewrite has a pattern (left) and a replacement (right), both wrapped in meta-quotes (") to distinguish the C++ code in the patterns from the rule-language syntax surrounding it. The \e1 are escapes from C++ syntax, and indicate where a metavariable can match. Metavariables match any syntax of the corresponding category, so what \e1 matches can be an arbitrarily complicated "condition". The fact that e1 is mentioned twice in the pattern forces the occurrences to be identical.
One can write a set of rewrite rules that encode knowledge about simplifying arbitrarily complex boolean equations; a few dozen rules sort of does it. We've applied these to systems of non-C++ boolean equations with hundreds of thousands of terms, and to C and C++ preprocessor conditionals.
For C++, you need a check that the rewrite is safe to do, which it is not if e1 has a side effect, or e2 has a side effect. This check is made with an auxiliary function call that has to determine this answer in a conservative way. The determination that there is no side effect is in fact pretty complex for C++: you have to know what all the elements of the expression are, and that none of them have side effects.
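A small C++ illustration of why that check matters (the function noisy() is made up for illustration): in the pattern above, e1 appears twice before the rewrite and only once after it, so if e1 has a side effect the transformed code behaves differently even when the boolean result is the same.

#include <iostream>

int calls = 0;
bool noisy() { ++calls; return true; }   // plays the role of e1, with a side effect

int main() {
    bool b = false, c = true;

    calls = 0;
    bool before = (noisy() && b) || (noisy() && c);    // noisy() runs twice here
    std::cout << before << " calls=" << calls << '\n';  // prints "1 calls=2"

    calls = 0;
    bool after = noisy() && (b || c);                  // factored form: runs once
    std::cout << after << " calls=" << calls << '\n';   // prints "1 calls=1"
}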
One can do this check with DMS's attribute grammar (an organized tree crawl) that inspects all expression elements. Simple constants and variables (you need a symbol table for this) do not have side effects. Function calls (including constructors, etc.) may; their definitions have to be found (again the need for a symbol table) and processed similarly. It is possible that an expression element calls a separately compiled function; the conservative answer in this case is "don't know", and therefore "assume it has a side effect". DMS can actually read multiple compilation units at the same time, so a separately compiled function can be found, parsed/symbol-resolved, and crawled if you want to go that far.
So the boolean rewrite part is pretty easy; the side effect analysis is not.
We've used DMS to carry out massive changes on C++ code; we often cheat a bit by making assumptions about complex analyses like this. Usually we get surprised the same ways programmers get surprised ("What do you mean, that has a side effect?"). Mostly it works pretty well. We have done side-effect analysis in detail on C systems of 25 million lines; we're not quite there for C++ yet.
The side effect analysis only matters if some subexpression might be evaluated more than once.
OP's example, given in a comment, doesn't need them, and can be handled by the following rules:
rule not_on_disjunction(e1: condition, e2: condition):
  condition -> condition =
    " ! (\e1 || \e2) " -> " !\e1 && !\e2 ";

rule double_not(e: condition):
  condition -> condition =
    " ! ! \e " -> " \e ";
A complete, but simple worked example with more detailed description is this example of algebraic simplification of conventional algebra and some calculus.
There's clearly controversy as to whether a particular code transformation will make code more readable. IMHO, that's because the shape of code is often an art judgement, and we all seem to disagree about art. This isn't any different than letting somebody else modify your code.