Is anybody using the named boolean operators? (C++)

Or are we all sticking to our taught "&&, ||, !" way?
Any thoughts on why we should use one or the other?
I'm just wondering because several answers state that code should be as natural as possible, but I haven't seen a lot of code using "and, or, not", even though it reads more naturally.

I like the idea of the not operator because it is more visible than the ! operator. For example:
if (!foo.bar()) { ... }
if (not foo.bar()) { ... }
I suggest that the second one is more visible and readable. I don't think the same argument necessarily applies to the and and or forms, though.

"What's in a name? That which we call &&, || or !
By any other name would smell as sweet."
In other words, natural depends on what you are used to.

Those were not supported in the old days, and even now you need to pass a special switch to some compilers to enable these keywords. That's probably because an old code base may have had functions or variables named "and", "or" or "not".

One problem with using them (for me anyway) is that in MSVC you have to include iso646.h or use the (mostly unusable) /Za switch.
The main problem I have with them is the Catch-22 that they're not commonly used, so they require my brain to actively process the meaning, where the old-fashioned operators are more or less ingrained (kind of like the difference between reading a learned language vs. your native language).
Though I'm sure I'd overcome that issue if their use became more universal. If that happened, then I'd have the problem that some boolean operators have keywords while others don't, so if alternate keywords were used, you might see expressions like:
if ((x not_eq y) and (y == z) or (z <= something)) {...}
when it seems to me they should have alternate tokens for all the operators (at least the comparison ones):
if ((x not_eq y) and (y eq z) or (z lt_eq something)) {...}
That's because the alternate keywords (and digraphs and trigraphs) were not provided to make expressions more readable; they exist because historically there have been (and maybe still are) keyboards and/or code pages in some locales that lack certain punctuation characters. For example, the invariant part of the ISO 646 code page is (surprise) missing the '|', '^' and '~' characters, among others.
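For anyone curious, a minimal sketch of the alternative tokens in action. In standard C++ they are part of the core language; on older MSVC you may need the iso646.h include mentioned above, which supplies them as macros instead:

#include <iso646.h>   // defines nothing on conforming compilers; enables the names as macros on old MSVC

bool in_range(int x, int lo, int hi, bool excluded)
{
    // Equivalent to: x >= lo && x <= hi && !excluded
    return x >= lo and x <= hi and not excluded;
}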

Although I've been programming C++ for quite some time, I did not know that the keywords "and", "or" and "not" were allowed, and I've never seen them used.
I searched through my C++ book and found a small section mentioning alternative representations for the normal operators "&&", "||" and "!", where it explains that they are available for people with non-standard keyboards that do not have the "&!|" symbols.
A bit like trigraphs in C.
Basically, I would be confused by their use, and I think I would not be the only one.
Using a non-standard representation should really require a good reason.
And if used, it should be used consistently in the code and described in the coding standard.

The digraph and trigraph operators were actually designed more for systems that didn't carry the standard ASCII character set - such as IBM mainframes (which use EBCDIC). In the olden days of mechanical printers, there was this thing called a "48-character print chain" which, as its name implied, only carried 48 characters. A-Z (uppercase), 0-9 and a handful of symbols. Since one of the missing symbols was an underscore (which rendered as a space), this could make working with languages like C and PL/1 a real fun activity (is this 2 words or one word with an underscore???).
Conventional C/C++ is coded with the symbols and not the digraphs, although I have been known to #define a "NOT", since it makes the meaning of a boolean expression more obvious and is visually harder to miss than a skinny little "!".
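If it helps to picture it, a hypothetical sketch of such a macro (the answer doesn't show the exact definition used):

#include <vector>

#define NOT !   // hypothetical spelling of the macro described above

bool needs_refill(const std::vector<int>& stock)
{
    return NOT stock.empty();   // harder to miss than: return !stock.empty();
}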

I wish I could use || and && in normal speech. People try very hard to misunderstand when I say "and" or "or"...

I personally like operators to look like operators. It's all maths, and unless you start using "add" and "subtract" operators too it starts to look a little inconsistent.
I think some languages suit the word-style and some suit the symbols if only because it's what people are used to and it works. If it ain't broke, don't fix it.
There is also the question of precedence, which seems to be one of the reasons for introducing the new operators, but who can be bothered to learn more rules than they need to?

In cases where I program with names directly mapped to the real world, I tend to use 'and' and 'or', for example:
if(isMale or isBoy and age < 40){}
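One thing worth noting with that style: the named forms keep the same precedence as && and ||, so the condition above groups as isMale or (isBoy and age < 40). A minimal sketch with hypothetical values, parenthesized to make the grouping explicit:

#include <iostream>

int main()
{
    bool isMale = false, isBoy = true;   // hypothetical values
    int age = 12;
    if (isMale or (isBoy and age < 40)) {   // 'and' binds tighter than 'or'
        std::cout << "matched\n";
    }
}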

It's nice to use 'em in Eclipse+gcc, as they are highlighted. But then, the code doesn't compile with some compilers :-(

Using these operators is harmful. Notice that "and" and "or" are logical operators, whereas the similar-looking "xor" is a bitwise operator. Thus, arguments to "and" and "or" are normalized to 0 and 1, whereas those to "xor" aren't.
Imagine something like
char *p, *q; // Set somehow
if(p and q) { ... } // Both non-NULL
if(p or q) { ... } // At least one non-NULL
if(p xor q) { ... } // Exactly one non-NULL
Bzzzt, you have a bug. In the last case you're testing whether at least one of the bits in the pointers is different, which probably isn't what you thought you were doing because then you would have written p != q.
This example is not hypothetical. I was working together with a student one time and he was fond of these literate operators. His code failed every now and then for reasons that he couldn't explain. When he asked me, I could zero in on the problem because I knew that C++ doesn't have a logical xor operator, and that line struck me as very odd.
BTW the way to write a logical xor in C++ is
!a != !b
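A minimal sketch of the mismatch, using ints (for which bitwise ^ is at least well defined):

#include <cstdio>

int main()
{
    int a = 1, b = 2;   // both logically "true"

    if (a and b) std::puts("and: true");    // logical: both non-zero
    if (a or b)  std::puts("or: true");     // logical: at least one non-zero
    if (a xor b) std::puts("xor: true");    // bitwise: 1 ^ 2 == 3, so "true" here,
                                            // but 1 ^ 1 would be 0 even though both are non-zero
    if (!a != !b) std::puts("logical xor: true");   // not printed: both operands are true
}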

I like the idea, but don't use them. I'm so used to the old way that doing it either way provides no advantage to me. The same holds true for the rest of our group; however, I do wonder whether we might want to switch, to help keep future programmers from stumbling over the old symbols.

So to summarize: they're not used a lot because of the following combination:
old code where it was not used
habit (more standard)
taste (more math-like)
Thanks for your thoughts

Related

Is it a good style to write constants on the left of equal to == in If statement in C++? [duplicate]

Okay, we know that the following two lines are equivalent -
(0 == i)
(i == 0)
Also, the first method was encouraged in the past because that would have allowed the compiler to give an error message if you accidentally used '=' instead of '=='.
My question is - in today's generation of pretty slick IDE's and intelligent compilers, do you still recommend the first method?
In particular, this question popped into my mind when I saw the following code -
if(DialogResult.OK == MessageBox.Show("Message")) ...
In my opinion, I would never recommend the above. Any second opinions?
I prefer the second one, (i == 0), because it feels much more natural when reading it. You ask people, "Are you 21 or older?", not, "Is 21 less than or equal to your age?"
It doesn't matter in C# whether you put the variable first or last, because an assignment doesn't evaluate to a bool (or to anything castable to bool), so the compiler catches errors like "if (i = 0) EntireCompanyData.Delete()".
So, in the C# world at least, it's a matter of style rather than desperation. And putting the variable last is unnatural to English speakers. Therefore, for more readable code, put the variable first.
If you have a list of ifs that can't be represented well by a switch (because of a language limitation, maybe), then I'd rather see:
if (InterestingValue1 == foo) { } else
if (InterestingValue2 == foo) { } else
if (InterestingValue3 == foo) { }
because it allows you to quickly see which are the important values you need to check.
In particular, in Java I find it useful to do:
if ("SomeValue".equals(someString)) {
}
because someString may be null, and in this way you'll never get a NullPointerException. The same applies if you are comparing constants that you know will never be null against objects that may be null.
(0 == i)
I will always pick this one. It is true that most compilers today do not allow the assignment of a variable in a conditional statement, but the truth is that some do. In programming for the web today, I have to use a myriad of languages on a system. By using 0 == i, I always know that the conditional statement will be correct, and I am not relying on the compiler/interpreter to catch my mistake for me. Now if I have to jump from C# to C++ or JavaScript, I know that I am not going to have to track down assignment errors in conditional statements in my code. For something this small, to have it save that amount of time, it's a no-brainer.
I used to be convinced that the more readable option (i == 0) was the better way to go.
Then we had a production bug slip through (not mine thankfully), where the problem was a ($var = SOME_CONSTANT) type bug. Clients started getting email that was meant for other clients. Sensitive type data as well.
You can argue that Q/A should have caught it, but they didn't, that's a different story.
Since that day I've always pushed for the (0 == i) version. It basically removes the problem. It feels unnatural, so you pay attention, so you don't make the mistake. There's simply no way to get it wrong here.
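A minimal sketch of the difference, with a hypothetical variable:

int main()
{
    int i = 42;   // hypothetical value

    if (i = 0) { }      // typo for '==': compiles (a warning at best), sets i to 0,
                        // and the branch is never taken
    // if (0 = i) { }   // the same typo with the constant first refuses to compile:
                        // a literal cannot be assigned to
    return i;
}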
It's also a lot easier to catch that someone didn't reverse the if statement in a code review than it is that someone accidentally assigned a value in an if. If the format is part of the coding standards, people look for it. People don't typically debug code during code reviews, and the eye seems to scan over a (i = 0) vs an (i == 0).
I'm also a much bigger fan of the java "Constant String".equals(dynamicString), no null pointer exceptions is a good thing.
You know, I always use the if (i == 0) format of the conditional and my reason for doing this is that I write most of my code in C# (which would flag the other one anyway) and I do a test-first approach to my development and my tests would generally catch this mistake anyhow.
I've worked in shops where they tried to enforce the 0==i format but I found it awkward to write, awkward to remember and it simply ended up being fodder for the code reviewers who were looking for low-hanging fruit.
Actually, the DialogResult example is a place where I WOULD recommend that style. It places the important part of the if() toward the left, where it can be seen. If it's on the right and the MessageBox call has more parameters (which is likely), you might have to scroll right to see it.
OTOH, I never saw much use in the "(0 == i)" style. If you can remember to put the constant first, you can remember to use two equals signs.
I always try to use the first case (0 == i), and this has saved my life a few times!
I think it's just a matter of style. And it does help with accidentally using assignment operator.
I absolutely wouldn't ask the programmer to grow up though.
I prefer (i == 0), but I still sort of make a "rule" for myself to do (0 == i), and then break it every time.
"Eh?", you think.
Well, if I'm making a conscious decision to put an lvalue on the left, then I'm paying enough attention to what I'm typing to notice if I type "=" for "==". I hope. In C/C++ I generally use -Wall for my own code, which makes gcc warn about most "=" for "==" errors anyway. I don't recall seeing that warning recently, perhaps because the longer I program the more reflexively paranoid I am about errors I've made before...
if(DialogResult.OK == MessageBox.Show("Message"))
seems misguided to me. The point of the trick is to avoid accidentally assigning to something.
But who is to say whether DialogResult.OK is more, or less likely to evaluate to an assignable type than MessageBox.Show("Message")? In Java a method call can't possibly be assignable, whereas a field might not be final. So if you're worried about typing = for ==, it should actually be the other way around in Java for this example. In C++ either, neither or both could be assignable.
(0==i) is only useful because you know for absolute certain that a numeric literal is never assignable, whereas i just might be.
When both sides of your comparison are assignable you can't protect yourself from accidental assignment in this way, and that goes for when you don't know which is assignable without looking it up. There's no magic trick that says "if you put them the counter-intuitive way around, you'll be safe". Although I suppose it draws attention to the issue, in the same way as my "always break the rule" rule.
I use (i == 0) for the simple reason that it reads better. It makes a very smooth flow in my head. When you read through the code back to yourself for debugging or other purposes, it simply flows like reading a book and just makes more sense.
My company has just dropped the requirement to do if (0 == i) from its coding standards. I can see how it makes a lot of sense but in practice it just seems backwards. It is a bit of a shame that by default a C compiler probably won't give you a warning about if (i = 0).
Third option - disallow assignment inside conditionals entirely:
In high-reliability situations, you are not allowed (without a good explanation in the preceding comments) to assign to a variable in a conditional statement - it eliminates this question entirely because you either turn it off at the compiler or with LINT, and only under very controlled situations are you allowed to use it.
Keep in mind that generally the same code is generated whether the assignment occurs inside the conditional or outside - it's simply a shortcut to reduce the number of lines of code. There are always exceptions to the rule, but it never has to be in the conditional - you can always write your way out of that if you need to.
So another option is merely to disallow such statements, and where needed use the comments to turn off the LINT checking for this common error.
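As a sketch of "writing your way out" of an assignment in a conditional, using the classic character-reading loop as an example:

#include <cstdio>

int main()
{
    // The compact form such rules disallow:
    // while ((c = std::getchar()) != EOF) { /* process c */ }

    // The same loop with the assignment pulled out of the condition:
    for (;;) {
        int c = std::getchar();
        if (c == EOF)
            break;
        /* process c */
    }
}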
-Adam
I'd say that (i == 0) would sound more natural if you attempted to phrase a line in plain (and ambiguous) English. It really depends on the coding style of the programmer or the standards they are required to adhere to, though.
Personally I don't like (1) and always do (2); however, that reverses for readability when dealing with dialog boxes and other methods that can be extra long. It doesn't look bad as it is now, but if you expand the MessageBox out to its full length, you have to scroll all the way right to figure out what kind of result you are returning.
So while I agree with your assertions of the simplistic comparison of value types, I don't necessarily think it should be the rule for things like message boxes.
Both are equal, though I would prefer the 0 == i variant slightly.
When comparing strings, it is less error-prone to compare "MyString".equals(getDynamicString()),
since getDynamicString() might return null.
To be more consistent, write 0 == i.
Well, it depends on the language and the compiler in question. Context is everything.
In Java and C#, the "assignment instead of comparison" typo ends up with invalid code apart from the very rare situation where you're comparing two Boolean values.
I can understand why one might want to use the "safe" form in C/C++ - but frankly, most C/C++ compilers will warn you if you make the typo anyway. If you're using a compiler which doesn't, you should ask yourself why :)
The second form (variable then constant) is more readable in my view - so anywhere that it's definitely not going to cause a problem, I use it.
Rule 0 for all coding standards should be "write code that can be read easily by another human." For that reason I go with (most-rapidly-changing value) test-against (less-rapidly-changing value, or constant), i.e. "i == 0" in this case.
Even where this technique is useful, the rule should be "avoid putting an lvalue on the left of the comparison", rather than the "always put any constant on the left", which is how it's usually interpreted - for example, there is nothing to be gained from writing
if (DateClass.SATURDAY == dateObject.getDayOfWeek())
if getDayOfWeek() is returning a constant (and therefore not an lvalue) anyway!
I'm lucky (in this respect, at least) in that these days I'm mostly coding in Java and, as has been mentioned, if (someInt = 0) won't compile.
The caveat about comparing two booleans is a bit of a red-herring, as most of the time you're either comparing two boolean variables (in which case swapping them round doesn't help) or testing whether a flag is set, and woe-betide-you if I catch you comparing anything explicitly with true or false in your conditionals! Grrrr!
In C, yes, but you should already have turned on all warnings and be compiling warning-free, and many C compilers will help you avoid the problem.
I rarely see much benefit from a readability POV.
Code readability is one of the most important things for code larger than a few hundred lines, and definitely i == 0 reads much easier than the reverse
Maybe not an answer to your question.
I try to use === (checking for identical values) instead of plain equality. This way no type conversion is done, and it forces the programmer to make sure the right type is passed.
You are right that placing the important component first helps readability, as readers tend to browse the left column primarily, and putting important information there helps ensure it will be noticed.
However, never talk down to a co-worker, and implying that would be your action even in jest will not get you high marks here.
I always go with the second method. In C#, writing
if (i = 0) {
}
results in a compiler error (cannot convert int to bool) anyway, so the possibility of making that mistake is not actually an issue. If you test a bool, the compiler still issues a warning, and you shouldn't be comparing a bool to true or false anyway. Now you know why.
I personally prefer the use of the variable-operand-value format, in part because I have been using it so long that it feels "natural" and in part because it seems to be the predominant convention. There are some languages that make use of assignment statements such as the following:
:1 -> x
So in the context of those languages it can become quite confusing to see the following even if it is valid:
:if(1=x)
So that is something to consider as well. I do agree that the message box response is one scenario where using a value-operand-variable format works better from a readability standpoint, but if you are looking for consistency then you should forgo its use.
This is one of my biggest pet peeves. There is no reason to decrease code readability (if (0 == i), what? how can the value of 0 change?) to catch something that any C compiler written in the last twenty years can catch automatically.
Yes, I know, most C and C++ compilers don't turn this on by default. Look up the proper switch to turn it on. There is no excuse for not knowing your tools.
It really gets on my nerves when I see it creeping into other languages (C#,Python) which would normally flag it anyway!
I believe the only factor to ever force one over the other is if the tool chain does not provide warnings to catch assignments in expressions. My preference as a developer is irrelevant. An expression is better served by presenting business logic clearly. If (0 == i) is more suitable than (i == 0) I will choose it. If not I will choose the other.
Many constants in expressions are represented by symbolic names. Some style guides also limit the parts of speech that can be used for identifiers. I use these as a guide to help shape how the expression reads. If the resulting expression reads loosely like pseudo code then I'm usually satisfied. I just let the expression express itself, and if I'm wrong it'll usually get caught in a peer review.
We might go on and on about how good our IDEs have gotten, but I'm still shocked by the number of people who turn the warning levels on their IDE down.
Hence, for me, it's always better to ask people to use (0 == i), as you never know which programmer is doing what.
It's better to be "safe than sorry"
if(DialogResult.OK == MessageBox.Show("Message")) ...
I would always recommend writing the comparison this way. If the result of MessageBox.Show("Message") can possibly be null, then you risk an NPE/NRE if the comparison is the other way around.
Mathematical and logical operations aren't reflexive in a world that includes NULLs.


Ampersand and square brackets priority

I see a lot of programmers using brackets around an expression, e.g. :
&(tab[i]) /* I use `&tab[i]`. */
I don't think it's necessary, because the [] operator has higher precedence than the & operator. So why do they use brackets?
For the sake of clarity. Not everyone has all the operator precedences memorized.
I'd say there are basically two reasons:
The programmer isn't sure about operator precedence and codes defensively
To make it absolutely clear to every reader (who may not have the precedence memorized) what is going on
It is indeed not necessary because postfix operators always have higher precedence than unary operators.
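A minimal sketch of that precedence rule:

#include <cassert>

int main()
{
    int tab[4] = {10, 20, 30, 40};
    int i = 2;

    assert(&tab[i] == &(tab[i]));   // [] binds tighter than unary &, so the parens change nothing

    // By contrast, (&tab)[i] would treat &tab as a pointer to the whole array
    // and index past it: a different (and here meaningless) expression.
}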
The people that use parens in &(tab[i]) use them for the same reason they use them in the expression (8 * 5) / 2.
In a recent question, someone had to decipher code like --p---> x < 0. Just like your code, that code is unambiguous as per C++ parsing rules.
However, humans don't always remember all of these complex rules, so many programmers make it a habit to use parens in situations that might not look totally clear to others (or to themselves). It documents the real intention of the code.
This is a Good Thing To Do™
Because they find it clearer? Especially for something like this:
int *a[];
Is that a pointer to an array of ints or an array of pointers to int?
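For what it's worth, a quick sketch of the answer to that rhetorical question (sizes added so the declarations compile):

int *a[10];     // an array of 10 pointers to int: [] binds tighter than *
int (*b)[10];   // a pointer to an array of 10 ints: the parentheses force the other grouping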

C++ style guide: why have non-lvalues on the left side?

In one C++ coding style guide,
I found one particular recommendation (page 41, recommendation number 53):
Always have non-lvalues on the left side (0 == i instead of i == 0).
And I don't understand what this is good for. Are you sticking to this practice?
I'm not, and I don't know why this is a good practice. The only advantage I can think of is that it will avoid mistaking an unintentional assignment for a comparison (if (foo = 0){} versus if (foo == 0){}).
Have you got any other ideas why should I use it?
Yes, you guessed it right. It's the good, old Yoda condition!!!
As you say, the reason some people use it is to occasionally avoid typing = when they mean ==.
Since it only catches some cases, where you're comparing an lvalue with a constant or rvalue, and every compiler I know of will warn you if you make that mistake, there's very little point in doing it.
At least to native English speakers, it makes the code read as if it's written backwards; so a "Yoda condition" some call it do. Like many rules in corporate style guides, it dates back to a time when dealing with unforgiving compilers was a higher priority than writing readable code.

Is it feasible to ascribe pronunciations to distinct source code concepts?

I frequently tutor fellow students in programming, most often in C++ or Java.
It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal.
In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it.
Would it not then be feasible to ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised.
Instead of creating new "words" to describe them, for things such as "include" you could simply prefix it with "keyword" when saying it aloud. You could use words/phrases commonly known to say other parts as well. As with any new programmer, you have to literally describe everything anyway, so I don't think that requires special attention. I think creating new words is the harder method...
So, for example:
#include <iostream>
int main()
{
if (1 < 2)
return 1;
else
return 0;
}
Could be read out as:
(keyword) include iostream new-line
(keyword) int main no params start
block if number 1 (operator) less than
number 2 new-line (keyword) return
number 1 new-line (keyword) else
new-line (keyword) return number 0 end
block
Treat words in () as optional descriptive words, most likely to be used in more complex code. You could use the word 'literal' if you want them to actually write the descriptive word. For example
(keyword) if literal number (operator)
less than literal keyword
becomes
if (number < keyword)
Other words could be given defined meanings as well, such as 'split-line' when you want them to continue on the next line, without closing any currently open parenthesis, etc.
I personally find this method quite simple to use and easy to teach. YMMV, as always.
Of course, this doesn't solve the internationalisation issue, but at worst, would result in 'new words' being used in the non-English languages, which is no worse than the proposed solution you offered.
As a blind developer, programming since I was 13, I found this question really interesting. First of all, as mentioned by other people, learning a new language to be able to understand code is not a practical solution, as it would probably take longer to learn the spoken utterances than it would to learn the actual programming language.
Reading the question/answers, two further points occurred to me:
Firstly, you'd be surprised how important "thinking time" is. I have previously programmed in C/C++/Java and now use C# as my primary language, and consider myself very competent. But when I did a couple of projects in Python, I found the reduced punctuation robbed me of my "thinking time" - subconsciously, I was using the punctuation to digest what I'd just heard - fascinating... However, the situation is a bit different when it comes to identifiers, as these aren't well known by the listener - I personally find it hard to listen to code with acronym variables (RGXRatio, RGVRatio) as I don't have time to figure out what they mean. On the flip side, Hungarian notation and initial underscores make code hard to listen to, as the length of the variables (in terms of time taken to speak) is much longer than the more important operations being performed on those variables.
Another thing to consider is that the length of the audio stream is an end result, but not the root cause. The reason the audio is so long is because audio is a one-dimensional medium, whereas reading text is a 2D medium with the ability to jump around and skip past irrelevant/familiar text. It wouldn't work for a face-to-face lecture, but what if there were keyboard commands for controlling the speech? In text documents my screen reader lets me jump to the next line, but what if this were adapted to the semantics of a programming language? Some research, such as that by T. V. Raman at Google, includes using different voices for syntax highlighting, and audio cues to mark metadata like capitals.
I know the original question specifically related to a lecture given to a class, but if, like myself, you have to listen to entire files of source code, I also find the structure of the code makes a huge difference. I personally read code like a story - left to right, top to bottom - so it's very hard to trace through unfamiliar code when it's written bottom-up.
So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised
Perhaps, but you've lost sight of your goal. The premise was that the person listening did not already know the language. If he does, we can simply say "include iostream" when we mean #include <iostream>, or "vector of int" when we mean std::vector<int>.
Your premise was that the person listening is not familiar enough with the language to understand what you read out loud unless you read out exactly what it says.
Now, inventing a whole new language just to describe the primitives that occur in your source code doesn't solve the problem. Instead, you still have to read out every syntactic token (with simpler, more "standardized" pronunciations, yes, but they still have to be read out loud), and the person listening still won't understand you, because if they don't know C++ well enough to understand "include iostream", they won't understand your standardized pronunciation either. And if you're going to teach them your pronunciation, why bother, when you could've just taught them to understand C++ syntax directly instead?
There's also the root problem that C++ code tends to consist of a lot of syntactic tokens. Take a line as simple as this:
std::vector<int> v;
I count 9 tokens. Not one of them can be omitted. If the person listening does not understand the code and syntax well enough to understand a high-level description such as "declare a vector of int, named v", then you'll have to read out all 9 tokens in some form. Even if you come up with simpler names than "namespace resolution operator" and "less than sign", you still have to list 9 token names. Which is a lot of work.
In short, no, I don't think it'd work. First, it's still too cumbersome, and second, it's presuming prior knowledge on the part of the person listening, when the motivation for this was that the person listening was a student without the prior knowledge that made it possible to understand a high-level description of the code.