Type safety in Clojure

I want to ask: what sort of type-safety language constructs are there in Clojure?
I've read 'Practical Clojure' by Luke VanderHart and Stuart Sierra several times now, but I still have the distinct impression that Clojure (like other Lisps) doesn't take compile-time validation checking very seriously. Type safety is just one (very popular) strategy for doing compile-time checking of correct semantics.
I'm asking this question because I'm aching to be proven wrong: what sort of design patterns are available in Clojure to validate (at compile time, not at run time) that a function that expects a string doesn't get called with, say, a list of integers?
Also, I've read very smart people like Paul Graham openly argue that Lisp allows you to implement everything from lower-level languages on top of it (most would say that the languages themselves end up being reimplemented on top of it). If that assertion is true, then trivially something like type checking should be a piece of cake. So do you feel that there exist type systems (or the ability to implement such type systems) in Clojure or other Lisps that give the programmer the ability to shift validation checking from run time to compile time, or even better, design time?

Compilation units in Clojure are very small - a single function. Lispers tend to change small portions of running programs while they develop. Introducing static type checking into this style of development is problematic - for a deeper discussion of why, I recommend the post Types are Anti-Modular by Gilad Bracha. Thus Clojure prefers pre/post-conditions, which jibe better with Lisp's highly REPL-oriented style of development.
That said, it's certainly desirable and possible to build an a la carte type system for Clojure. This trail has been blazed by Qi/Shen and Typed Racket. This functionality could easily be provided as a library. I'm hoping to build something like that in the future with core.logic - https://github.com/clojure/core.logic.

Since Clojure is a dynamic language, the whole idea is not to check the types (or much of anything) at compile time.
Even when you add type hints to your functions, they do not get checked at compile time.
Since Clojure is a Lisp, you can do whatever you want at compile time with macros, and macros are powerful enough that you can write your own type system. Some people have built type systems for Lisps, for example Typed Racket and Qi. These type systems can be just as powerful as the type system in any "normal" language.
OK, so we now know that it is possible, but does Clojure have such an optional type system? The answer is currently no, but there is a logic engine (core.logic) that could be used to implement a type system - the author just hasn't worked in that direction (yet).

There is a library that adds an optional type system to Clojure: http://typedclojure.org/
Rationale
Static typing has well known benefits. For example, statically typed languages catch many common programming errors at the earliest time possible: compile time. Types also serve as an excellent form of (machine checkable) documentation that almost always augment existing hand-written documentation.
Languages without static type checking (dynamically typed) bring other benefits. Without the strict rigidity of mandatory static typing, they can provide more flexible and forgiving idioms that can help in rapid prototyping. Often the benefits of static type checking are desired as the program grows.
This work adds static type checking (and some of its benefits) to Clojure, a dynamically typed language, while still preserving idioms that characterise the language. It allows static and dynamically typed code to be mixed so the programmer can use whichever is more appropriate.

Related

What Language Features Can Be Added To Clojure Through Libraries?

For example, pattern matching is a programming language feature that can be added to the Clojure language through macros: http://www.brool.com/index.php/pattern-matching-in-clojure
What other language features can be added to the language?
Off the top of my head I have two examples, but I'm sure there are more.
Contracts programming: https://github.com/fogus/trammel
Declarative logic: https://github.com/jduey/mini-kanren
I think it's a stupid question to ask what can be added; what you should ask is what you can't add. Macros allow you to hook into the compiler, which means you can do almost anything.
At the moment you can't add your own syntax to the language. Clojure does not have a user-extensible reader, which means you don't have any reader macros (http://dorophone.blogspot.com/2008/03/common-lisp-reader-macros-simple.html). This is not because of a technical problem but rather a decision by Rich Hickey (the Clojure creator).
What you cannot do is implement features that need virtual machine support, like adding tail call semantics or goto.
If you want to see some stuff that has been done: Are there any Clojure DSLs?
Note that this list is not 100% up to date.
Edit:
Since you took pattern matching as an example (it is a really good example of the power of macros), you should really look at the match library. It's probably the best and fastest pattern-matching library in Clojure. http://vimeo.com/27860102
You can effectively add any language features you like.
This follows from the ability of macros to construct arbitrary code at compile time: as long as you can figure out what code you need to generate in order to implement your language features, it can be achieved with macros.
Some examples I've seen:
Query languages (Korma)
Logic programming (core.logic)
Image synthesis DSL (clisk)
Infix notation for arithmetic
Algebraic manipulation
Declarative definition of realtime data flows (Storm, Aleph)
Music programming (Overtone, Music As Data)
There are a few caveats:
If the feature isn't supported directly by the JVM (e.g. tail call optimisation in the mutually recursive case) then you'll have to emulate it. Not a big deal, but may have some performance impact.
If the feature requires a syntax not supported by the Clojure reader, you'll need to provide your own reader (since Clojure lacks an extensible reader at present). As a result, it's much easier if you stick to Clojure syntax/forms.
If you do anything too unusual / unidiomatic, it probably won't get picked up by others. There is a lot of value in sticking to standard Clojure conventions.
Beware of using macros where they are not needed. Often, just using normal functions (perhaps higher order functions) is sufficient to implement many new language features. The general rule is: "don't use macros unless you absolutely need to".

Safe C++ in mission-critical realtime apps

I'd like to hear various opinions on how to safely use C++ in mission-critical realtime applications.
More precisely, it is probably possible to create some macros/templates/class library for safe data manipulation (checking for overflows, having divide-by-zero produce infinity values, or making division possible only for a special "nonzero" data type), arrays with bounds checking and foreach loops, safe smart pointers (similar to Boost's shared_ptr, for instance) and even a safe multithreading/distributed model (message passing and lightweight processes like the ones defined in the Erlang language).
Then we prohibit some dangerous C/C++ constructs such as raw pointers, some raw types, the native "new" operator and native C/C++ arrays (for application programmers, not for library writers, of course). Ideally, we would create a special preprocessor/checker; at the very least we would need some formal checking procedure that can be applied to the sources by some tool or manually by a reviewer.
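To make the intent concrete, here is a minimal sketch (invented for illustration, not taken from any existing library) of the kind of wrappers meant above - a bounds-checked array and a division that only accepts a proven non-zero divisor:

    #include <array>
    #include <cstddef>
    #include <stdexcept>

    // Hypothetical bounds-checked fixed-size array: every access is range-checked.
    template <typename T, std::size_t N>
    class checked_array {
    public:
        T& at(std::size_t i) {
            if (i >= N) throw std::out_of_range("checked_array: index out of range");
            return data_[i];
        }
        std::size_t size() const { return N; }
    private:
        std::array<T, N> data_{};
    };

    // Hypothetical "nonzero" wrapper: division is only expressible with a non-zero divisor.
    class nonzero {
    public:
        explicit nonzero(double v) : value_(v) {
            if (v == 0.0) throw std::invalid_argument("nonzero: value is zero");
        }
        double value() const { return value_; }
    private:
        double value_;
    };

    inline double safe_div(double numerator, nonzero divisor) {
        return numerator / divisor.value();   // the divisor cannot be zero here
    }

    int main() {
        checked_array<int, 8> a;
        a.at(3) = 42;                              // OK
        double r = safe_div(10.0, nonzero(4.0));   // OK
        // a.at(9) or nonzero(0.0) would throw instead of corrupting memory
        (void)r;
        return 0;
    }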
So, my questions:
1) Are there any existing libraries/projects that use such an idea? (Embedded C++ is apparently not the desired kind.)
2) Is it a good idea at all, or not? Might it be useful only for prototyping some other hypothetical language? Or is it totally unusable?
3) Any other thoughts (or links) on this matter are also welcome.
Sorry if this question is not actually a question, is off-topic, a duplicate, etc., but I haven't found a more appropriate place to ask it.
For good rules on how to write C++ for mission-critical real-time applications, have a look at the Joint Strike Fighter coding standards. Many of the rules there are based on the MISRA C coding standards, which I believe are proprietary. PC-Lint is a C++ code checker with rule sets like what you want (including the MISRA rules). I believe you can customize your own rules as well.
We use C++ in mission-critical real-time applications, although I suppose we have it easy (in theory) because we have to only provide real-time guarantees as good as the hardware our clients use. Thus, sufficient profiling lets us get by without mlockall() or stack pre-loading or any other RT traditions. As for the language itself, I think everyday modern C++ coding practices (ones that discourage C concepts) are entirely sufficient to write robust applications that can be used in RT contexts, given 21st century hardware.
Unit tests and QA should be the main focus of effort, instead of in-house libraries that duplicate existing language features.
If you're writing critical high-performance realtime software in C++, you probably need every microsecond you can get out of the hardware. As such, I wouldn't necessarily suggest implementing all the extra checks such as the ones you mentioned, at least not the ones with overhead implications for program execution. You can obviously mask floating-point exceptions to prevent divide-by-zero from crashing the program.
Some observations:
Peer review all code (possibly multiple reviewers). This will go a long way to improving quality without requiring lots of runtime checks.
DO make use of diagnostic tools and non-release-only asserts.
Do make use of simulation systems to test on non-embedded hardware.
C++ was specifically designed without things like bounds checking for performance reasons.
In general I don't suggest arbitrarily restricting the language, although making use of RAII and smart pointers should have minimal overhead and provide a nice benefit (see the sketch below).
Someone else pointed out that if you want Ada, just use Ada.
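To illustrate the RAII observation above, a minimal sketch (assuming only the standard C stdio API; the wrapper class itself is invented): the resource is released deterministically when the object leaves scope, at essentially no run-time cost.

    #include <cstdio>
    #include <stdexcept>

    // Minimal RAII wrapper around a C FILE*: the destructor always runs when the
    // object leaves scope, so the handle cannot leak, even if an exception is thrown.
    class File {
    public:
        File(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
            if (!f_) throw std::runtime_error("failed to open file");
        }
        ~File() { std::fclose(f_); }
        File(const File&) = delete;              // non-copyable: exactly one owner
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    private:
        std::FILE* f_;
    };

    void log_reading(double value) {
        File log("sensor.log", "a");             // acquired here
        std::fprintf(log.get(), "%f\n", value);
    }                                            // released here, deterministically

    int main() {
        log_reading(3.14);
        return 0;
    }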

Functional Programming and Type Systems

I have been learning about various functional languages for some time now including Haskell, Scala and Clojure. Haskell has a very strict and well-defined static type system. Scala is also statically typed. Clojure on the other hand, is dynamically typed.
So my questions are
What role does the type system play in a functional language?
Is it necessary for a language to have a type system for it to be functional?
How is the "functional" level of a language related to the kind of the type system of the language?
A language does not need to be typed to be functional - at the heart of functional programming is the lambda calculus, which comes in untyped and typed variants.
The type system plays two roles:
it provides a guarantee at compile time that a class of errors cannot occur at run-time. The class of errors usually includes things like trying to add two strings together, or trying to apply an integer as a function.
it has some efficiency benefits, in that objects at runtime do not need to carry their types around, because the types have already been established at compile-time. This is known as type erasure.
In advanced type systems like Haskell's, the type system can provide more benefits:
overloading: using one identifier to refer to operations on different types
it allows a library to automatically choose an optimised implementation based on what type it is used at (using Type Families)
it allows powerful invariants to be proven at compile time, such as the invariant in a red-black tree (using Generalised Algebraic Datatypes)
What role does the type system play in a functional language?
To Simon Marlow's excellent answer, I would add that a type system, especially one that includes algebraic data types, makes it easier to write programs:
Software designs, which in object-oriented languages are sometimes expressed using UML diagrams, are very clearly expressed using types. This clarity manifests especially when not only values have types, but also modules have types, as in Objective Caml or Standard ML.
When a person is writing code, a couple of simple heuristics make it very, very easy to write pure functions based on the types:
A value of function type can always be created with a lambda.
A value of function type can always be consumed by applying it.
A value of an algebraic data type can be created by applying any of the type's constructors.
A value of an algebraic data type can be consumed by scrutinizing it with a case expression.
Based on these observations, and on the simple rule that unless there's a good reason, a function should consume each of its arguments, it's pretty easy to cut down the space of possible code you could write to a very small number of candidates. For example, there just aren't that many sensible functions of type (using Haskell notation)
forall a . (a -> Bool) -> [a] -> Bool
The art of using types to create code is called type-directed programming. When it works well, you hear functional programmers say things like "once we got the types right, the code practically wrote itself." Since the types are usually much smaller than the code, this is a big win.
Same as in any programming language: it helps you avoid/find errors in your code. In the case of static typing, a good type system prevents programs with certain kinds of errors from compiling.
No. The untyped lambda calculus is what you could call the prototype of functional programming languages and it is, as the name suggests, entirely untyped.
In a functional language (as well as any other language where a function can be used as a value) the type system needs to know what the type of a function is. Other than that there is nothing special about type systems for functional languages.
In a purely functional language you need to abstract side-effects, so you'd want the type system to somehow be able to support that. For example if you want to have a world type like in Clean, you'd want the type system to support uniqueness types to ensure proper usage.
If you want to have an IO monad like in Haskell, you'd need an IO type (though a monad typeclass like Haskell's is not required in order to have an IO monad, so you don't need a type system that supports that).
1: Same as any other, it stops you from doing operations that are either ill-defined, or whose result would be 'nonsensical' to humans. Like float addition on integers.
2: Nope, the oldest programming language in the world, the (untyped) lambda calculus, is both functional and untyped.
3: Hardly, functional just means no side effects, no mutations, referential transparency et cetera.
Just remember that the oldest functional language, the untyped lambda calculus has no type system.

What are the good and bad points of C++ templates? [closed]

I've been talking with friends, and some completely agree that templates in C++ should be used, while others disagree entirely.
Some of the good things are:
They are safer to use (type safety).
They are a good way of doing generalizations for APIs.
What other good things can you tell me about C++ templates?
What bad things can you tell me about C++ templates?
Edit: One of the reasons I'm asking this is that I am studying for an exam and at the moment I am covering the topic of C++ templates. So I am trying to understand a bit more on them.
Templates are a very powerful mechanism which can simplify many things. However, using them properly requires a lot of time and experience in order to decide when their use is appropriate.
For me the most important advantages are:
reducing the repetition of code (generic containers, algorithms)
reducing the repetition of code, advanced version (MPL and Fusion)
static polymorphism (=performance) and other compile time calculations
policy based design (flexibility, reusability, easier changes, etc)
increasing safety at no cost (e.g. dimensional analysis via Boost Units, static assertions, concept checks; see the sketch after this list)
functional programming (Phoenix), lazy evaluation, expression templates (we can create Domain-specific embedded languages in C++, we have great Proto library, we have Blitz++)
other less spectacular tools and tricks used in everyday life:
STL and the algorithms (what's the difference between for and for_each)
bind, lambda (or Phoenix) ( write clearer code, simplify things)
Boost Function (makes writing callbacks easier)
tuples (how to generically hash a tuple? Use Fusion, for example...)
TBB (parallel_for and other STL like algorithms and containers)
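As a taste of the "safety at no cost" item above, a minimal sketch using plain C++11 static_assert (the PodBuffer and Sample names are invented): the constraint is enforced entirely at compile time and costs nothing at run time.

    #include <type_traits>

    // A buffer whose element type is constrained at compile time: misuse fails
    // to compile instead of failing at run time.
    template <typename T, unsigned N>
    struct PodBuffer {
        static_assert(std::is_trivially_copyable<T>::value,
                      "PodBuffer requires a trivially copyable element type");
        static_assert(N > 0, "PodBuffer must not be empty");
        T data[N];
    };

    struct Sample { int id; double value; };    // trivially copyable: OK

    int main() {
        PodBuffer<Sample, 64> samples;          // compiles
        // PodBuffer<Sample, 0> empty;          // rejected at compile time
        (void)samples;
        return 0;
    }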
Can you imagine C++ without templates? Yes, I can; in the early days you couldn't use them because of compiler limitations.
Would you write in C++ without templates? No, as I would lose many of the advantages mentioned above.
Downsides:
Compilation time (for example, throw in Spirit, Phoenix, MPL and some Fusion and you can go for a coffee)
People who can use and understand templates are not that common (and these people are useful)
People who think that they can use and understand templates are quite common (and these people are dangerous, as they can make a hell out of your code. However most of them after some education/mentoring will join the group mentioned in the previous point)
template export support (lack of)
error messages could be less cryptic (after some learning you can find what you need, but still...)
I highly recommend the following books:
C++ Templates: The Complete Guide by David Vandevoorde and Nicolai Josuttis (thorough introduction to the subject of templates)
Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu (a less known way of using templates to simplify your code, make development easier and make the resulting code robust to changes)
C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy (again - a different way of using templates)
More C++ Idioms from Wikibooks presents some nice ideas.
On the positive side, C++ templates:
Allow for generalization of type
Decrease the amount of redundant code you need to type
Help to build type-safe code
Are evaluated at compile-time
Can increase performance (as an alternative to polymorphism)
Help to build very powerful libraries
On the negative side:
Can get complicated quickly if one isn't careful
Most compilers give cryptic error messages
It can be difficult to use/debug highly templated code
Have at least one syntactic quirk (the >> operator can interfere with templates; see the sketch below)
Help make C++ very difficult to parse
All in all, careful consideration should be used as to when to use templates.
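To make the type-safety and compile-time points concrete, a small sketch (not from the question; names invented) contrasting a function template with the macro it replaces, plus the >> quirk mentioned above:

    #include <string>
    #include <vector>

    // A macro has no notion of types; it substitutes text and evaluates its arguments twice.
    #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

    // The template is instantiated and type-checked at compile time for each T used.
    template <typename T>
    const T& max_of(const T& a, const T& b) {
        return (a > b) ? a : b;
    }

    int main() {
        int i = max_of(2, 5);                    // instantiates max_of<int>
        int j = MAX_MACRO(2, 5);                 // pure text substitution, no type involved
        std::string s = max_of(std::string("a"), std::string("b"));
        // max_of(2, std::string("b"));          // rejected: no single T can be deduced

        // The syntactic quirk mentioned above: before C++11, the closing >> here
        // had to be written "> >" or the parser read it as a right-shift operator.
        std::vector<std::vector<int>> grid;
        (void)i; (void)j; (void)s; (void)grid;
        return 0;
    }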
My 2c are rather negative.
C++ types were never designed to perform compile-time calculations. The notion of using types to achieve computational goals is very clearly a hack - and moreover, one that was never sought but rather stumbled upon.
..
The reward for using MP in your code is the moment of satisfaction of having solved a hard riddle. You did stuff in 100 lines that would have otherwise taken 200. You ground your way through incomprehensible error messages to get to a point where, if you needed to extend the code to a new case, you would know the exact 3-line template function to overload. Your maintainers, of course, would have to invest infinitely more to achieve the same.
Good points: powerful; allows you to:
prescribe compile-time attributes and computation
describe generic algorithms and datastructures
do many other things that would otherwise be repetitive, boring, and mistake-prone
do them in-language, without macros (which can be far more hazardous and obscure!)
Bad points: powerful; allows you to:
provoke compile-time errors that are verbose, misleading, and obscure (though not as obscure and misleading as macros...)
create obscure and hazardous misdesigns (though not as readily as macros...)
cause code bloat if you're not careful (just like macros!)
Templates vastly increase the viable design space, which is not necessarily a bad thing, but it does make them that much harder to use well. Template code needs maintainers who understand not just the language features, but the design consequences of the language features; practically speaking, this means many developer groups avoid all but the simplest and most institutionalized applications of C++ templates.
In general, templates make the language much more complicated (and difficult to implement correctly!). Templates were not intentionally designed to be Turing-complete, but they are anyway -- thus, even though they can do just about anything, using them may turn out to be more trouble than it's worth.
Templates should be used sparingly.
"Awful to debug" and "hard to read" aren't great arguments against good template uses with good abstractions.
Better negative arguments would go towards the fact that the STL has a lot of "gotchas", and using templates for purposes the STL already covers is reinventing the wheel. Templates also increase link time, which can be a concern for some projects, and have a lot of idiosyncrasies in their syntax that can be arcane to people.
But the positives with generic code reuse, type traits, reflection, smart pointers, and even metaprograms often outweigh the negatives. The thing you have to be sure of is that templates are always used carefully and sparingly. They're not the best solution in every case, and often not even the second or third best solution.
You need people with enough experience writing them that they can avoid all the pitfalls and have a good radar for when the templates will complicate things more than helping.
One of the disadvantages I haven't seen mentioned yet is the subtle semantic differences between regular classes and instantiations of class templates. I can think of:
typedefed typenames in ancestor types aren't inherited by template classes.
The need to sprinkle typename and template keywords in appropriate places.
Member function templates cannot be virtual.
These things can usually be overcome, but they're a pain; the sketch below shows the first two.
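A minimal sketch of those first two quirks (the class names are invented):

    template <typename T>
    struct Base {
        typedef T value_type;
        template <typename U> void convert(U) {}
    };

    template <typename T>
    struct Derived : Base<T> {
        // 1. The typedef from the dependent base is not found automatically;
        //    it has to be named explicitly and marked as a type with 'typename'.
        typedef typename Base<T>::value_type value_type;

        void use() {
            // 2. Calling a member template of a dependent base needs the 'template'
            //    keyword so the '<' is parsed as a template argument list.
            this->template convert<int>(42);
        }
    };

    int main() {
        Derived<double> d;
        d.use();
        return 0;
    }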
Some people hate templates (I do) because:
From a maintainability point of view, the wrong use of templates can have a negative effect ten times stronger than the initial time advantage they were supposed to bring.
From an optimization point of view, the compiler optimizations they allow are nothing compared to an optimal algorithm and the use of multithreading.
From a compile-time point of view, wrong use of templates can have a very negative effect on the parsing, compilation and linking phases, when poorly written template declarations bring tons of useless parasite declarations into each compilation unit (this is how 200 lines of code can produce a 1 MB .obj).
To me templates are like a chainsaw with an integrated flame thrower that can also launch grenades. One time in my life I may have a specific need of that. But most of the time, I'm using a regular hammer and a simple saw to build things and I'm doing a pretty good job that way.
Advantage: generic data types can be created.
Disadvantage: code bloat.
I don't see how they are hard to read. What is unreadable about
vector <string> names;
for example? What would you replace it with?
Reusable code is made with templates; how far they are applied depends on the profile of each project.

What can C++ do that is too hard or messy in any other language?

I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here; please, if you have strong opinions about not liking C++, don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it.
I'm particularly interested in aspects of C++ that are little known, or underutilised.
RAII / deterministic finalization. No, garbage collection is not just as good when you're dealing with a scarce, shared resource.
Unfettered access to OS APIs.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language.
Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
Shooting oneself in the foot.
No other language offers such a creative array of tools. Pointers, multiple inheritance, templates, operator overloading and a preprocessor.
A wonderfully powerful language that also provides abundant opportunities for foot shooting.
Edit: I apologize if my lame attempt at humor has offended some. I consider C++ to be the most powerful language that I have ever used -- with abilities to code at the assembly language level when desired, and at a high level of abstraction when desired. C++ has been my primary language since the early '90s.
My answer was based on years of experience of shooting myself in the foot. At least C++ allows me to do so elegantly.
Deterministic object destruction leads to some magnificent design patterns. For instance, while RAII is not as general a technique as garbage collection, it leads to some impressive capabilities which you cannot get with GC.
C++ is also unique in that its compile-time machinery (templates, together with the preprocessor) is Turing-complete. This allows you to prefer (as in the opposite of defer) a lot of code tasks to compile time instead of run time. For instance, in real code you might have an assert() statement to test for a never-happen. The reality is that it will sooner or later happen... and happen at 3:00am when you're on vacation. A C++ compile-time assert (static_assert, or its template-based equivalents) does the same test at compile time. Compile-time asserts fail between 8:00am and 5:00pm while you're sitting in front of the computer watching the code build; run-time asserts fail at 3:00am when you're asleep in Hawai'i. It's pretty easy to see the win there.
In most languages, strategy patterns are done at run time and throw exceptions in the event of a type mismatch. In C++, strategies can be done at compile time through templates and can be guaranteed typesafe.
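A rough sketch of such a compile-time strategy (the policy and class names are invented; this is the usual template-parameter technique): the strategy is resolved when the template is instantiated, and a mismatch is a compile error rather than a run-time exception.

    #include <cstdio>
    #include <cstdlib>

    // Two interchangeable strategies with the same compile-time interface.
    struct CrashOnError {
        static void handle(const char* msg) { std::fprintf(stderr, "%s\n", msg); std::abort(); }
    };

    struct LogAndContinue {
        static void handle(const char* msg) { std::fprintf(stderr, "warning: %s\n", msg); }
    };

    // The strategy is a template parameter: it is selected and type-checked when
    // Sensor<Policy> is instantiated, with no virtual dispatch at run time.
    template <typename ErrorPolicy>
    class Sensor {
    public:
        double read(double raw) {
            if (raw < 0.0) ErrorPolicy::handle("negative raw reading");
            return raw;
        }
    };

    int main() {
        Sensor<LogAndContinue> dev;   // swap in CrashOnError without touching Sensor
        dev.read(-1.0);
        return 0;
    }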
Write inline assembly (MMX, SSE, etc.).
Deterministic object destruction. I.e. real destructors. Makes managing scarce resources easier. Allows for RAII.
Easier access to structured binary data. It's easier to cast a memory region as a struct than to parse it and copy each value into a struct (see the sketch after this list).
Multiple inheritance. Not everything can be done with interfaces. Sometimes you want to inherit actual functionality too.
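A small sketch of the structured-binary-data point (the packet layout is invented; a real protocol would also need to deal with endianness and packing):

    #include <cstdint>
    #include <cstring>

    // A fixed-layout message as it arrives off the wire or out of shared memory.
    struct SensorPacket {
        std::uint32_t id;
        std::uint32_t timestamp;
        float         value;
    };

    // Overlay the struct on a raw buffer. memcpy avoids alignment/aliasing issues
    // and compilers optimise it away; no field-by-field parsing is needed.
    SensorPacket decode(const unsigned char* buffer) {
        SensorPacket p;
        std::memcpy(&p, buffer, sizeof p);
        return p;
    }

    int main() {
        unsigned char raw[sizeof(SensorPacket)] = {};   // pretend this came off the wire
        SensorPacket p = decode(raw);
        (void)p;
        return 0;
    }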
I think I'm just going to praise C++ for its ability to use templates to capture expressions and evaluate them lazily when needed. For those not knowing what this is about, here is an example.
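(A bare-bones sketch of the idea, with invented names; real expression-template libraries are far more elaborate.) The + below builds a lightweight expression object, and nothing is computed until the result is actually requested:

    #include <cstdio>

    // Expression template in miniature: Sum<L, R> records its operands instead of
    // adding them, and the addition only happens when value() is finally called.
    template <typename L, typename R>
    struct Sum {
        L lhs;
        R rhs;
        double value() const { return lhs.value() + rhs.value(); }
    };

    struct Scalar {
        double v;
        double value() const { return v; }
    };

    template <typename L, typename R>
    Sum<L, R> operator+(const L& lhs, const R& rhs) { return Sum<L, R>{lhs, rhs}; }

    int main() {
        Scalar a{1.0}, b{2.0}, c{3.0};
        auto expr = a + b + c;              // builds Sum<Sum<Scalar,Scalar>,Scalar>; no arithmetic yet
        std::printf("%f\n", expr.value());  // 6.0: evaluated lazily, on demand
        return 0;
    }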
Template mixins provide reuse that I haven't seen elsewhere. With them you can build up a large object with lots of behaviour as though you had written the whole thing by hand. But all these small aspects of its functionality can be reused, it's particularly great for implementing parts of an interface (or the whole thing), where you are implementing a number of interfaces. The resulting object is lightning-fast because it's all inlined.
Speed may not matter in many cases, but when you're writing component software, and users may combine components in unthought-of complicated ways to do things, the speed of inlining and C++ seems to allow much more complex structures to be created.
Absolute control over the memory layout, alignment, and access when you need it. If you're careful enough you can write some very cache-friendly programs. For multi-processor programs, you can also eliminate a lot of slowdowns from cache coherence mechanisms.
(Okay, you can do this in C, assembly, and probably Fortran too. But C++ lets you write the rest of your program at a higher level.)
This will probably not be a popular answer, but I think what sets C++ apart are its compile-time capabilities, e.g. templates and #define. You can do all sorts of text manipulation on your program using these features, much of which has been abandoned in later languages in the name of simplicity. To me that's way more important than any low-level bit fiddling that's supposedly easier or faster in C++.
C#, for instance, doesn't have a real macro facility. You can't #include another file directly into the source, or use #define to manipulate the program as text. Think about any time you had to mechanically type repetitive code and you knew there was a better way. You may even have written a program to generate code for you. Well, the C++ preprocessor automates all of these things.
The "generics" facility in C# is similarly limited compared to C++ templates. C++ lets you apply the dot operator to a template type T blindly, calling (for example) methods that may not exist, and checks-for-correctness are only applied once the template is actually applied to a specific class. When that happens, if all the assumptions you made about T actually hold, then your code will compile. C# doesn't allow this... type "T" basically has to be dealt with as an Object, i.e. using only the lowest common denominator of operations available to everything (assignment, GetHashCode(), Equals()).
C# has done away with the preprocessor, and real generics, in the name of simplicity. Unfortunately, when I use C#, I find myself reaching for substitutes for these C++ constructs, which are inevitably more bloated and layered than the C++ approach. For example, I have seen programmers work around the absence of #include in several bloated ways: dynamically linking to external assemblies, re-defining constants in several locations (one file per project) or selecting constants from a database, etc.
As Mrs. Krabappel from The Simpsons once said, this is "pretty lame, Milhouse."
In terms of Computer Science, these compile-time features of C++ enable things like call-by-name parameter passing, which is known to be more powerful than call-by-value and call-by-reference.
Again, this is perhaps not the popular answer- any introductory C++ text will warn you off of #define, for example. But having worked with a wide variety of languages over many years, and having given consideration to the theory behind all of this, I think that many people are giving bad advice. This seems especially to be the case in the diluted sub-field known as "IT."
Passing POD structures across processes with minimum overhead. In other words, it allows us to easily handle blobs of binary data.
C# and Java force you to put your 'main()' function in a class. I find that weird, because it dilutes the meaning of a class.
To me, a class is a category of objects in your problem domain. A program is not such an object. So there should never be a class called 'Program' in your program. This would be equivalent to a mathematical proof using a symbol to notate itself -- the proof -- alongside symbols representing mathematical objects. It'll be just weird and inconsistent.
Fortunately, unlike C# and Java, C++ allows global functions. That lets your main() function exist outside of any class. Therefore C++ offers a simpler, more consistent and perhaps truer implementation of the object-oriented idiom. Hence, this is one thing C++ can do that C# and Java cannot.
I think operator overloading is quite a nice feature. Of course it can be very much abused (like in Boost lambda).
Tight control over system resources (esp. memory) while offering powerful abstraction mechanisms optionally. The only language I know of that can come close to C++ in this regard is Ada.
C++ provides complete control over memory and as a result makes the flow of program execution much more predictable.
Not only can you say precisely at what time allocations and deallocations of memory occur, you can define your own heaps, have multiple heaps for different purposes and say precisely where in memory data is allocated (see the sketch after this list). This is frequently useful when programming on embedded/real-time systems, such as games consoles, cell phones, mp3 players, etc., which:
have strict upper limits on memory that are easy to reach (contrast with a PC, which just gets slower as you run out of physical memory)
frequently have a non-homogeneous memory layout. You may want to allocate objects of one type in one piece of physical memory, and objects of another type in another piece.
have real time programming constraints. Unexpectedly calling the garbage collector at the wrong time can be disastrous.
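A small sketch of the "say precisely where data is allocated" point (the arena and Particle types are invented): a class-specific operator new serves objects from a fixed static buffer, so allocation is deterministic and never touches the general-purpose heap.

    #include <cstddef>
    #include <new>

    // A fixed-size pool carved out of a static buffer: allocation is a pointer
    // bump, deallocation is a no-op, and the address range is known up front.
    class FrameArena {
    public:
        void* allocate(std::size_t n) {
            offset_ = (offset_ + 15) & ~std::size_t(15);   // align to 16 bytes for simplicity
            if (offset_ + n > sizeof buffer_) throw std::bad_alloc();
            void* p = buffer_ + offset_;
            offset_ += n;
            return p;
        }
        void reset() { offset_ = 0; }   // free everything at once, e.g. per frame
    private:
        alignas(16) unsigned char buffer_[64 * 1024];
        std::size_t offset_ = 0;
    };

    static FrameArena particle_arena;

    struct Particle {
        float x, y, z;
        // Particles always come from the arena, never from the global heap.
        static void* operator new(std::size_t n) { return particle_arena.allocate(n); }
        static void operator delete(void*) { /* reclaimed in bulk by reset() */ }
    };

    int main() {
        Particle* p = new Particle{1.0f, 2.0f, 3.0f};   // allocated inside buffer_
        (void)p;
        particle_arena.reset();                          // all particles gone at once
        return 0;
    }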
AFAIK, C and C++ are the only sensible option for doing this kind of thing.
Well, to be quite honest, you can do just about anything if you're willing to write enough code.
So to answer your question: no, there is nothing C++ can do that you can't do in another language. It's just a question of how much patience you have and whether you're willing to devote the long sleepless nights to getting it to work.
There are things that C++ wrappers make easy to do (because they can read the header files), like Office development. But again, that's because someone wrote lots of code to "wrap" it for you in an RCW or "Runtime Callable Wrapper".
EDIT: You also realize this is a loaded question.