Alternatives to C and asm on microcontrollers [closed] - c++

My background is like this: embedded/C, then C++, then higher level OO languages (Java, Scala, Ruby, Groovy, etc.), and now I am doing a small project involving MSP430 microcontroller. Meanwhile, inspired by that, I am contemplating a number of potential pet embedded system projects (meshes and/or RTLS look appealing). So my question is focused primarily on MSP430 for now, though, as an aside, I'd love to have a broader picture, too, involving other microcontrollers.
I was a bit surprised finding out that, after so many years, I might need to go back to C, with its macros, naming conventions, and all. My brain used to be wired for C, but that was many, many years ago.
So what alternatives are available?
C++ feels much more agreeable to me, and, fortunately, seems doable: http://stonepile.fi/object-oriented-approach-to-embedded-programming-with-c/
So if I am to program C++, I just need to inline a lot, avoid virtual functions when possible, and I should be good, right? (at least, memory-wise; they did not benchmark for performance at the above link).
However, if it's so easy, why do people program C? I must be missing something.
The above link also seems to provide a wrapper library for pico]OS. Has anyone used pico]OS on MSP430? How reliable is it, and how many resources does it take?
What are the pros and cons of Energia for a simple MSP430 project? I tried it, and it seems very intuitive and self-documenting, but does it result in equally neat code under the hood? For instance, does Energia initialize unused GPIOs to the off state to save energy? Does it initialize unused interrupts? What is the overhead in terms of memory and speed? Etc.
Edit: As a long-time Eclipse person, I'd love to use CCS. I saw that Energia sketches can be imported into CCS. Does that mean CCS has full support for Energia and can be used as an Energia IDE?
Has anyone used Java Grinder http://hackaday.com/2014/02/10/java-grinder-spits-out-dspic-and-msp430-assembly-code/ ? It seems appealing, but because it spits out assembly and not C/C++ code, it's a bit scary to commit to it: what if I get locked into it and it's not ready for prime time? If it generated C code, I could easily drop it if it did not work.
I mentioned Java and my question was deleted, as it's self-evident that other than grinder-like syntactic sugar (not that I mind syntactic sugar!), Java can't run on MSP430. I guess I'll ask another question re WHERE Java can run. This has already grown too long.
What other languages/environments are out there that fill the niche between low- and high-level languages?

You seem to have several questions here, so I shall go through them in the order you numbered them.
Most micros will indeed run C++ (assuming the manufacturer or an open source project provides a compiler back-end); however, you have to be wary of a number of drawbacks. C++ is less deterministic: it provides a significantly higher level of abstraction, which you likely do not want on a resource-constrained embedded system, and which is by and large not needed either, since embedded systems are rarely powerful enough to usefully run the enormously complex algorithms that warrant a high-level language like C++. It is also likely to cause a wide range of hard-to-track bugs; given the difficulty of debugging code on an embedded system, having bugs that are simple and easy to trace is very much nicer. Very importantly, the C++ standard libraries are enormous: they will use excessive RAM and very likely waste much of your limited memory space. Thus, even if you do use C++, you won't be able to use any of the techniques that make it powerful.
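To make the "stick to the cheap parts of C++" point concrete, here is a minimal sketch of the kind of zero-overhead abstraction that does work well on a tiny micro: a template pin class whose calls collapse to direct register writes, with no objects in RAM and no virtual dispatch. The register address and pin assignment are hypothetical stand-ins for whatever the MSP430 device header actually defines.

    #include <cstdint>

    // Compile-time description of one GPIO pin: the port address and bit number
    // are template parameters, so every call below boils down to a single
    // read-modify-write of the register.
    template <std::uintptr_t PortAddress, std::uint8_t Bit>
    struct Pin {
        static volatile std::uint8_t& port() {
            return *reinterpret_cast<volatile std::uint8_t*>(PortAddress);
        }
        static void set()    { port() |= std::uint8_t(1u << Bit); }
        static void clear()  { port() &= std::uint8_t(~(1u << Bit)); }
        static void toggle() { port() ^= std::uint8_t(1u << Bit); }
    };

    // Hypothetical wiring: bit 0 of the port mapped at 0x0021 drives an LED.
    using RedLed = Pin<0x0021, 0>;

    int main() {
        RedLed::set();      // compiles to one register write
        RedLed::toggle();
        for (;;) {}
    }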
Simply put, I have not used it. However, like any RTOS, it is useful if you want a slightly higher-level interface, although for a micro the tiny size of the MSP430 it seems like overkill; I cannot imagine you doing anything on there that warrants an RTOS. If you need multitasking, it would be better to provide simple cooperative tasking yourself, as sketched below.
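A minimal sketch of what that cooperative tasking can look like: a fixed table of short-running task functions polled in a round-robin loop. The task names are illustrative, and each task must return quickly rather than block.

    #include <cstddef>

    using Task = void (*)();

    void pollSensor()    { /* e.g. start/read an ADC conversion */ }
    void updateDisplay() { /* e.g. push the latest reading to an LCD */ }

    int main() {
        const Task tasks[] = { pollSensor, updateDisplay };   // hypothetical task list
        for (;;) {
            for (std::size_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); ++i)
                tasks[i]();   // each task runs to completion and "yields" by returning
            // a real system would typically sleep in a low-power mode here
            // until the next timer interrupt
        }
    }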
Unfortunately I have not used that platform either; however, given that it is based on Wiring, my guess is that it does not provide high levels of hardware-specific optimization. If you want that, I recommend using it for the bulk of your code but making calls into lower-level libraries when needed. Beyond that, it does provide a lovely, self-documenting interface, and I strongly encourage you to try it. It will also make your code many times easier to port if you switch to another micro later (many systems from many companies provide Wiring bindings).
You really answer this yourself here: it could be very powerful but is still very immature. I would avoid it purely because of the lock-in until it becomes more mature; then it is worth re-assessing.
Java works nicely on more powerful ARM chips; that is the only place I have seen it in wide use and implemented fairly efficiently on a micro (ARM provides hardware assistance specifically for Java). Other than this, Java is a poor fit for the micro world. At one point it appeared it might go somewhere, but this was largely unrealized; for now, C-likes are the way to go for smaller micros.
Unfortunately there is not really a huge amount of choice other than C. My best recommendation is to use higher-level libraries like Wiring. That gives you slightly nicer interfaces without killing efficiency; otherwise, there is little point in using a tiny micro if you need high levels of abstraction.
In summary, C does a fairly good job here; I do not think there has been any motivation or effort to make a good replacement. And frankly, I largely feel this way too: C never became a bad language, and it is still well suited to small systems for the same reasons it was before. It provides power, efficiency and predictability.
I hope this helps somewhat; if you have any queries, please comment and I will see what I can do to help.


Why is more than one language used in application server projects? [closed]

After a while spent surfing the source code of big projects, especially application servers like this one, I have come to understand that these projects are not developed in a single language. Many of them use Python as a secondary language.
Now I have four questions:
Why is more than one language used?
Why is Python so often the secondary language?
Why is Python not used to develop all parts of these projects, with C/C++ still used for some?
And which parts of a project should be developed with Python, and which with C/C++?
Hard and soft layers
Programming language designs tend to trade off between "high-level" features, which enhance programmer productivity at the cost of speed, and "low-level" features, which require a great deal of programmer effort but produce very fast code.
Therefore it sometimes makes sense to use two languages on a project:
Write 90% of the code in an expressive, high level language which is easy to write and maintain.
Write the 10% of performance-critical code in a low-level language which is harder to write, but allows for comprehensive optimisation.
c2wiki calls this the HardAndSoftLayers pattern:
By virtue of the first rule of optimization, go ahead and write most of your code in the highest level language you can find.
By virtue of the third rule of optimization, when you must, use a profiler and find the slow parts of your program. Take those parts, and write them in a lower-level language.
For reference, the rules of optimisation are:
First Rule Of Optimization - Don't.
Second Rule Of Optimization - Don't... yet.
Profile Before Optimizing
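As a purely illustrative sketch of the "hard layer" in this pattern, the performance-critical 10% is often a small routine with a C ABI, compiled into a shared library that the high-level "soft layer" loads through its foreign-function interface (ctypes in Python, FFI in Ruby, and so on). The function name, library name and build command below are assumptions, not taken from any particular project.

    #include <cstddef>

    // The tight loop worth writing in a low-level language.
    extern "C" double dot_product(const double* a, const double* b, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    // Build (assumption, Linux/gcc):  g++ -O2 -shared -fPIC -o libhard.so hard.cpp
    // Soft-layer call (illustrative Python): ctypes.CDLL("./libhard.so").dot_product(...)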
The rule is pretty simple: the developers choose the language(s) based more or less on the following criteria:
their familiarity with it
how easily you can do the task using that language
how well suited the language is to the specific task
Today most development done in these multilingual environments is for huge solutions, where different components need to communicate, exchange data, or simply do work that comprises more than one step. It is easier to write the communication/data-interpretation/wrapping parts in a language such as Python and then leave the real-time, speed-hungry work to some lower-level language which compiles directly, without the need for an interpreter.
Let's dig a little bit deeper.
How familiar the developers are with a programming language depends on the background of each developer. If they are given a free choice, obviously they will pick the language they know best, unless there is a lobby from someone else... usually higher in the management chain. Python is not necessarily the language of choice; Python is simply an easy-to-use, easy-to-learn language which is well suited to most tasks. Our project has no bit of Python in it, only tons of Ruby code, because the main developer liked Ruby at the time, so we're stuck with it.
If you know more than one programming language, you know that each of them does the same thing differently. For example, creating a socket, connecting to a server, reading the data and printing it out is just a few lines of Erlang code, but it takes a lot more to do the same in C++ (for example). So again, if you have a task you know how to solve easily in a specific language, you are going to stick to it. People are lazy; they don't necessarily learn new stuff unless needed.
Obviously you are not going to write a device driver in Python, and it is much easier to create a complete web service with Java than with plain C... but you would still need the part of the solution that does the hardware-close work. When you have a task, you carefully measure the requirements and implications and wisely choose the language you will do it in, because you will be stuck with it forever.
Sometimes python is not good enough.
Dealing with computer vision, image or sound processing, or crunching tons of data is not really what Python is good at. Other languages like C or C++ are really good in those fields.
Suppose your primary language is Java, and you want to glue other languages into one project. That is where we need Python. Python is a well-known glue language; you can use ctypes, SWIG, Jython, IronPython or other methods to bind multiple languages together.
I guess I answered this question in point 1.
Need for speed? Go for C or C++. Care more about productivity? Go with Python.
Without referring to the project you linked, I'll give you my 50c on why the company I work for uses Python quite often in our projects.
Primarily, we have no Python code relating to the software solution itself. All Python code either assists with development, machine setup, or deployment of common framework tools for testing, and it is used heavily for code generation.
Why is more than one language used?
No project we work on has only one language, at least when looking at all our enterprise-level solutions and large-scale implementations.
This is mostly due to the fact that our tiers are written in languages that provide the best performance and usability at each level separately.
For instance, C++ for speedy core back-end services, and C#/.NET for a rapidly developed front-end with a good UI.
Why is Python so often the secondary language?
Personally, apart from the reasons I explained above, we don't make use of Python as a "secondary" language that often. We use C++/C# as the most common pair, but depending on the platform it might be other pairs.
Why is Python not used to develop all parts of these projects, with C/C++ still used for some?
Python is great for quick solutions and doing things you wish your shell could do. This largely involves file management, etc.
C++ is perhaps the fastest compiled language, making it the optimal choice for core and heavily used operations.
Based on that, and the fact that the market has more knowledge and experience in C++ (for many reasons), C++ is the more popular choice.
And which parts of a project should be developed with Python, and which with C/C++?
I believe I may have already addressed that above.
I hope I could help, please remember this is only my personal opinion and by no means should this be taken as a fact.

Best C++ programming practices in professional game development environments? [closed]

I'm well educated in the C++ programming language regarding syntax, the STL, etc. I haven't done any true projects with it yet other than some college courses. My goal is to start writing programs while trying to keep to industry best practices. One of my main goals is to work in the game industry. A door is opening, which led me to ask these questions.
Are C++ game developers usually limited on using all the C++ features? Is it true more or less for each platform?
Is STL used these days in the environment, or should I avoid it?
Do they treat C++ as "everything must be an object" or is it a mix of paradigms?
Any features they tend to not use?
Is C still used, and is it good to show a project completed in it?
I did talk to a famous game designer and he said they don't use STL and said to use basic encapsulation with classes, but he said not to use all other advanced C++ features. The idea is to keep it simple. Is this common?
Thanks in advance!
Bear in mind that "games development" covers everything from phones to websites to PSPs and DS to Wii, Xbox, PS3 and PC/Mac, so the answers you get may vary from one product/developer/platform to another.
1) Yes. Games development can mean that you have to use a particular compiler for your target platform, which may not support some of the "newer" features of C++. You may be restricted on which libraries the development studio will allow you to use. In addition, hardware restrictions on some platforms may influence how you code (e.g. on some systems the cost of virtual functions can be excessively high compared to a PC, so your designs can be restricted away from pure OO for performance reasons; some systems have limited resources (memory, disk, etc.) which will influence your design a lot; on the PlayStation 3 you may need to write vector-processing SPU tasks rather than traditional single/multithreaded code; etc.).
2) Yes. (it's commonly used, as there's usually no point wasting time rolling your own, probably less efficient, equivalents)
3) A mix. Generally in games the key goals are (in rough order): Deliver the impossible on time (why are you going home now, it's only 10pm?), deliver high performance, and try not to crash.
4,6) Generally keep the code simple and flexible so you can develop fast and change to suit the producer's latest vision. You usually have tight deadlines so a lot of design and careful coding tends to be difficult to achieve. The industry unfortunately tends to work on the "churn out the code quickly so we can throw it away and start on the next product", although it is slowly coming around to the idea of re-usable libraries (and more methodical professional practices like unit tests, etc).
A typical scenario is to be asked to code up a quick and dirty demo for the boss, and then when you think you'll be able to throw it away and write it properly, the boss says "we don't have time to rewrite something that already works, start on the next level". So if you're asked to be quick'n'dirty, try not to burn too many bridges or you'll regret it.
5) You may use C, but it depends on the platform and the development house. These days most development will be C++, even if parts of the code are effectively "little more than C" in terms of syntax and features used.
These questions are not at all unique to game development. You will find they vary as much by field as they do by company. I can think of two game design firms that use STL off the top of my head, and also of a few that don't.
1) Define "all C++ features". They typically don't use goto, if that's what you mean.
2) Yes, the STL is used. You shouldn't avoid it, or not avoid it - use the tools you need to get the job done every time you have a job that needs to get done. Keeping what you use in line with the way similar problems were tackled - within that project or context - is important.
3) Depends where you are, both in games and out of games. Some are graduated C guys who want things more C like. Others are C#/Java guys who are more OO (Object Obsessed).
4) goto
5) Yes, and yes. Knowing more than one language is always a good thing. Even if it's just C to C++.
6) Again, use the tools available to solve your problems. Keeping it simple is good when you can, sure.
1) No limitation really, except that you should follow programming guides so as to use them properly (i.e. avoid dinosaurs when using goto's).
2) Yes, it's used, and Boost too (which is kind of a preview of the next STL).
3) Mix, always a mix. To say that C++ is only object-oriented is to misunderstand the language. "Everything is an object" is really a Java (or things like it) thing; check out C++'s concept of POD, as an example of how not everything is an object.
4) same as 1)
5) It is used, but as far as I've seen, no new project is being created in C.
6) No... he kinda said the opposite of what I've just said.
Generally, some parts of C++ are limited for platform and performance reasons.
Exceptions and RTTI are normally disabled. Exceptions are expensive when they unwind the stack, and RTTI uses too much memory. Lack of exceptions is particularly a pain because rather than come up with another error handling system, programmers tend to just return NULL. :)
Floats are used almost exclusively over doubles, for performance reasons.
Virtual functions and inheritance can be used, but the virtual table lookup can be costly, so sometimes it's optimised out.
The STL can be used, but you normally have to write your own allocators to avoid fragging memory (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html). It's more common to see custom containers - of varying quality. I've never come across boost being used in a console project.
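To illustrate the custom-allocator point, here is a deliberately simplified sketch of a fixed-buffer ("arena") allocator that standard containers can use so their allocations never touch the general heap. It is an assumption of how such a thing might look, not code from any studio: it ignores alignment subtleties and never frees individual blocks (the whole arena would be reset wholesale, e.g. once per frame).

    #include <cstddef>
    #include <cstdint>
    #include <new>
    #include <vector>

    static std::uint8_t g_arena[64 * 1024];   // hypothetical scratch buffer
    static std::size_t  g_offset = 0;

    template <typename T>
    struct ArenaAllocator {
        using value_type = T;
        ArenaAllocator() = default;
        template <typename U> ArenaAllocator(const ArenaAllocator<U>&) {}

        T* allocate(std::size_t n) {
            std::size_t bytes = n * sizeof(T);
            if (g_offset + bytes > sizeof(g_arena)) throw std::bad_alloc{};
            T* p = reinterpret_cast<T*>(g_arena + g_offset);   // alignment ignored for brevity
            g_offset += bytes;
            return p;
        }
        void deallocate(T*, std::size_t) {}   // individual frees are no-ops; reset the arena instead
    };

    template <typename T, typename U>
    bool operator==(const ArenaAllocator<T>&, const ArenaAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const ArenaAllocator<T>&, const ArenaAllocator<U>&) { return false; }

    int main() {
        std::vector<int, ArenaAllocator<int>> v;   // every allocation now comes from g_arena
        for (int i = 0; i < 100; ++i) v.push_back(i);
    }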
Most code is OO, almost to a fault.
Best C++ programming practices in professional game development environments?
The question is too broad.
Are C++ game developers usually limited on using all the C++ features? Is it true more or less for each platform?
No, or at least extremely unlikely. Depends on platform/compiler.
Is STL used these days in the environment, or should I avoid it?
You shouldn't avoid it, but:
you'll definitely need a profiler if you want stl containers
STL containers are extremely slow in debug builds. This might be a problem, because if you access them frequently there will be a huge performance drop (say, from 300 fps to 50) in the debug build compared to the release build. Debug code is already slower, and STL containers perform a large number of safety checks in debug builds.
Do they treat C++ as "everything must be an object"
Every idea is bad if you take it to extremes. "Everything must be an object" is one of those.
Any features they tend to not use?
Why? Unless there is a compiler bug or a performance loss, there is no point in avoiding a feature. Besides, there are commercial 3D games written in Java, so I see no point in avoiding features of a compiled language.
Is C still used, and is it good to show a project completed in it?
Some open-source projects may use C, but I can't remember a game completely written in it. There is the Q3 engine, but AFAIK it is a mix of C and C++.
I did talk to a famous game designer and he said they don't use STL and said to use basic encapsulation with classes, but he said not to use all other advanced C++ features.
They could have decided to avoid the STL simply because containers are slow in debug builds. It is unclear what exactly you call "advanced C++ features". Also, keep in mind that he is a designer, not a programmer.
The idea is to keep it simple. Is this common?
The idea may or may not be common, but it is a correct idea for small teams. With limited human resources, everything should be made as simple as possible to reduce development costs. If you stick to a concept like "everything should be an object", decide to build a "nice-looking" class hierarchy, etc., AND you have limited resources, then you are doomed: it is very possible that you'll never finish your project and will be rewriting code forever, trying to decide what should be derived from what. Needless flexibility and extensibility may kill a small project. A game engine doesn't have to be complicated.
Large teams can afford complex solutions, but complexity leads to additional bugs. For example, look at The Sims 3: it has a VERY advanced system (object "plugins", game resource management, GPU resource management, content "streaming", etc., and I think they even put game logic into a few .NET modules stored within the game archive...), but the extensibility of the system resulted in a mountain of bugs after certain updates.
Electronic Arts, as one example, uses the STL, but felt compelled to create their own version of it called EASTL.
Are C++ game developers usually limited on using all the C++ features? Is it true more or less for each platform? These days I think it's pretty good on the main console platforms, but if you were developing for mobile devices I doubt it's quite so even.
Is STL used these days in the environment, or should I avoid it? It is used and should be used... but some people don't trust it, because in the past STL support was much less even and sometimes had performance problems.
Do they treat C++ as "everything must be an object" or is it a mix of paradigms? Far too general; it varies from person to person and company to company. On average, game developers have a reputation for being less focused on software engineering and more on hacking code, but that's far too generalised to really be of use. However, spouting for 30 minutes about design patterns and frameworks and abstractions in an interview is probably not a wonderful plan.
Any features they tend to not use? The STL still has a bad rep amongst those who worked on older platforms. Exceptions also, and RTTI, probably since they weren't well supported (and maybe still are not).
Is C still used, and is it good to show a project completed in it? I'm sure it is, and if you have a project then show it, but it's nowhere near as widely used as C++.
I did talk to a famous game designer and he said they don't use STL and said to use basic encapsulation with classes, but he said not to use all other advanced C++ features. The idea is to keep it simple. Is this common?
That view is pretty common, but it doesn't mean it's right. You might want to tread carefully to avoid upsetting a 20-year veteran who has not moved with the times.
I'm going to assume you're referring to people who develop full-price games for consoles and PC.
"Are C++ game developers usually limited on using all the C++ features?"
I think most are avoiding exceptions, RTTI, and many avoid the STL part of the standard library. They're probably using templates, though not much metaprogramming. I don't think this varies much across platform. But it does vary from company to company.
"Is STL used these days in the environment, or should I avoid it?"
It is used by many, but not all. Don't avoid it - it's a useful tool that is very good at the tasks it is designed for. Just make sure you can get by without it.
"Do they treat C++ as "everything must be an object" or is it a mix of paradigms?"
Different places will work in different ways. There's nothing intrinsic to object orientation in C++ that makes it good or bad for game development. Bear in mind that there is often legacy code or 3rd party libraries that are written in C or designed to be operated from C, so it's very rare that you'll have C++ throughout.
"Any features they tend to not use?"
I think we covered this above.
"Is C still used, and is it good to show a project completed in it?"
Yes, and no. Life is too short to spend on mastering every related language and dialect before you get your first job. Showing the ability to use a C-style library (eg. SDL) should suffice. Obviously if you set your goals on joining a company that just uses C, you will have to consider that.
"I did talk to a famous game designer and he said they don't use STL and said to use basic encapsulation with classes, but he said not to use all other advanced C++ features. The idea is to keep it simple. Is this common?"
See above. Incidentally, game "designers" typically aren't programmers, so watch your terminology. If you apply for a game designer role, you won't usually be showing programming skills.

What next generation low level language is the best bet when migrating a code base? [closed]

Let's say you have a company running a lot of C/C++, and you want to start planning migration to new technologies so you don't end up like COBOL companies 15 years ago.
For now, C/C++ runs more than fine and there are plenty of devs on the market for it.
But you want to start thinking about it now, because given the huge running code base and the data sensitivity, you feel it can take 5-10 years to move to the next step without overloading the budget and the dev teams.
You have heard about D, starting to be quite mature, and Go, promising to be quite popular.
What would be your choice and why?
D and Go will probably just become as popular as Python and Ruby are today. They each fill a niche, and even though D was supposed to be a full-fledged replacement of C++, it probably will never acquire enough mass to push C++ away. Not to mention that they both aren't stable/mature enough, and it's unknown whether you'll have support for these languages in 10-20 years for the then-current hardware and operating systems. Considering that C/C++ is pretty much the compiled language and is used in the great majority of operating systems and native-code applications, it's very unlikely that it'll go away in the foreseeable future.
C and C++ are a pretty much unbeatable combo when it comes to native/unmanaged/"lowlevel" languages.
Not because they're the best languages, far from it, but because they're there, they do the job, and they're good enough. There's little doubt that D, for example, is better than C++ in most respects. But it fails in the most important one: Compatibility with all the existing C++ code. Without that requirement, most of that code would be written in a managed language today anyway. The only reason so many codebases use C++ today is because they used it last year, and it'd be too much of a pain to switch to something else. But if and when they switch, they typically don't switch to D. They switch to C# or Java or Python.
The problem for D and other "upcoming" languages competing for the same niches, is that while they're better, they're not groundbreaking enough to motivate people to actually switch to them.
So C and C++ are here to stay. C is unlikely to evolve much further. It is as it is, and one of the niches it has to fill is "simplicity, even for compiler writers". No other language is likely to beat it in that niche, even if they never revise the standard again.
C++ is evolving much more dramatically, with C++0x getting nearer, and they've already got a huge list of features they want to do afterwards. C++ isn't a dead end in any way.
Both languages are here to stay. Perhaps in 50 years other languages will have replaced them, but it won't happen this decade.
I currently use D regularly. I wouldn't recommend it yet for people writing production code because it's too bleeding edge. I get away with it because most of my code is research prototypes in bioinformatics. However, the language is starting to stabilize. Andrei Alexandrescu is releasing a book titled "The D Programming Language" next March, and right now there is a push to stabilize the spec for version 2 of the language in time for the book.
While D is not a formal superset of C, it is what I'd call an idiomatic superset except for the lack of a preprocessor. In other words, any code written in C proper (ignoring the preprocessor), can be trivially translated to D without a redesign, because C concepts like pointers and inline ASM are there and work the same in D as in C. D also supports direct linking to C code and the D standard library includes the entire C standard library.
Also, despite D's lack of libraries because it is still a bleeding edge language, it's a library writer's dream because of its metaprogramming capabilities. If it takes off, it will probably have some pretty impressive libs. For a preview of this, see std.range or std.algorithm in the D2 standard library (Phobos). As another example, I implemented an OpenMP-like parallelism model (parallel foreach, parallel map, parallel reduce, futures) as a pure library in D, without any special compiler support. (See http://cis.jhu.edu/~dsimcha/parallelFuture.html)
Given that you're mostly interested in the long term, I'd say give D 6 months to stabilize (given Andrei's book and the current push to stabilize the language, version 2 should be stable by then) and then take a hard look at it.
Edit: Now that the core language spec is relatively stable and the focus has turned to toolchain and library development, I would recommend D for small production projects unless you are in a very risk-averse environment. Larger projects that absolutely must have good toolchain and library support should still wait, though.
If you believe in the lean manufacturing principles, you should strive to "decide as late as possible". The moment should be the last responsible moment, meaning the moment at which failing to make a decision eliminates an important alternative.
I think this principle can be applied to your situation. Instead of committing now to a language (that you don't even know will be around in 10 years), you should keep your options open. Maybe refactor some of your code so it is a bit more generic or is built on more abstractions, so that when it is indeed required to migrate, the process will be easier.
Stick with C and C++. I don't see it going the way of COBOL, it runs as well as anything, and you'll have no problem finding people to code in C and C++.
C++ -- it is relatively young and kept up to date. It has a big number of compiler vendors and gets improved all the time.
C -- it will live for a long time, filling the gap between assembler and higher-level languages. It is also a very simple and easy-to-implement language, so it will remain the first language available for various "strange" architectures, such as embedded or extremely new ones.
D is promising, but its specifications and libraries are still very new and unstable.
Go was born a few weeks ago... Never use anything at version 0 for big, important projects. Also, it is significantly more limited than C++ or D.
2019 update: C++ will stay around for the next 10 years... (if not, I will correct this answer when it is no longer relevant...)
The reason companies work with COBOL today is because they already have millions of lines of COBOL code written; if they could throw it away, they would do so at once. On the other hand, companies work with C/C++ because it meets their needs, and they start new projects in these languages because they can't or don't want to use Java/C# or any other framework-based language - so COBOL is not the right analogy here.
Like dsimcha said, going the D way is currently risky. Yet the language has huge potential; it is low-level, and I've experienced drastically better productivity with D (compared to C++) - perhaps what people feel with dynamic languages.
Go is so much blog-marketed it seems like a joke to me.
Dispatching an interface method is not trivial, and actually slower than dispatching a regular single-inheritance method.
If you have a huge codebase, the decision is of course more difficult; I would advise switching only for new projects, not for existing ones.
I wouldn't concentrate on a language so much as on the libraries surrounding it. C++ in combination with the Boost libraries is an excellent choice. People who develop in C++ tend to have a better understanding of computing. I myself started off with Java, which made my life easier by hiding a lot of fundamental stuff, which is good; however, I only really started to understand programming once I learned C/C++ (pointers, etc.).
I do recognise that C++ can be hard (e.g. memory management), so I think it's good to have an "add-on" language for places where performance is not essential and readability (== maintainability) scores high: I recommend Python for this.
There are countless machines running C++ software; I don't see them shutting down all at once. If C++ goes the way of COBOL, there will be a huge market for application migration, and specialized tools will be developed to translate C++ applications to the popular language of the time (Z++???).
So I guess the best advice is to cross that bridge when you come to it.
Check out Intel® Cilk++ Software Development Kit if you want to spark your interest in C++/Multi-Core development. I don't see C or C++ going away anytime soon either.
Comparing C* to Cobol is questionable
Comparing C* to Cobol may lead to the wrong conclusion. C was perfect for its day, a huge leap forward on its introduction, and it still gets the job done today.
I would sum up Cobol on my most charitable day with "nice try".
C and C++ will survive for a long time because they fit the bill well as implementation languages. This won't ever really change.
Also, consider that the main negative issue with C/C++ is the lack of memory safety. This tends to become less and less of a problem as codebases mature. This means there will not be a serious reason to replace the old code.
I expect that software systems will grow outwards from C. Look at the hierarchy today:
application written in a framework such as Rails
application back-end written in Ruby, PHP, Python, C#, whatever
Ruby, PHP, Python, or C# run-time implementation (written in C*)
OS kernel (written in C89)
I don't think the old layers will vanish, and I think legacy higher layers written in C and C++ will simply be supported that way for an indefinite period of time, eventually being phased out for their replacements written in Ruby, Python, C#, or a future development.
We have no idea if Go will find acceptance. Just being from Google is probably not going to be enough.
D? Well, some nice things are being said about it but it won't be taking off either. No user base to speak of. D is #20 in popularity on the TIOBE Index, and dropping fast.
You may say that a language's popularity has little to do with how well it's suited for your company's work. But it has a lot to do with how easy it will be to find people qualified to program in it.
Java is on top, and I would be surprised if it went away in the next 20 years. It's not considered a systems programming language, but it performs well enough that there are few tasks you'd do in C++ that you couldn't do in Java. Certainly these days nobody is willing to task human programmers with the job done (flawlessly and often more effectively) by the garbage collector. I for one considered Java a significant step up from C++ in terms of programming effectiveness.
I'm quite impressed by Ruby. It's an elegant, expressive language: you can accomplish a lot with not too much code, yet that code is still mostly legible. One of Ruby's main principles is to be consistent and not hold surprises for the developer. This is an extremely good idea, IMO, and boosts productivity. At the time of the big Rails hype (which may still be ongoing), I gave Ruby a wide berth because its reference implementation is abysmally slow. However, the JRuby folks at Sun have made it blazingly fast on the JVM, so now it's definitely worth some consideration. Ruby provides closures and a good handful of functional programming capabilities (see below for why this is important), though it's not really considered an FP language. TIOBE index: 10 and rising.
Something to consider for the future is the fact that CPU makers have run up against a performance limit imposed by physics. No longer is there a 30% faster CPU available every Christmas, as it was in the past. So now to get more performance you need more cores. Software development will need all the help it can get in supporting multi-core concurrent programming. C++ leaves you mostly alone with this, and Java's solutions are horrible by modern standards.
In view of this, there's a certain trend toward functional programming (which eliminates much of the hassle associated with concurrency) as well as languages with better concurrency support. Erlang was written specifically for this and for the ability to swap code in a running program (Ericsson wanted incredible uptimes). Scala is similar to Java but with much stronger support for functional programming and concurrency. Clojure, ditto, but it's a Lisp and it's not even in the top 50 (yet!!).
Scala was developed by academics, and it shows: it's sophisticated and downright pedantic about data types, and it tries to be the Swiss Army Knife of programming languages. I believe a lot of medium-smart programmers will have trouble getting a grip on Scala. Ruby is less FP and doesn't do as much about concurrency, but it's pragmatic, fun, and easy to get stuff done in. Also, running on the JVM, there is an enormous amount of code readily available in Java libraries, which Ruby can interface with. So:
My bet would be on Ruby, with an outside chance on Scala. But there are plenty of alternatives!
Java. For most low level things Java is fine these days. Why go with a partial solution to C/C++ such as D or Go when you can have something as safe and easy to develop with as Java? If you are looking for a real time solution, D and Go are definitely not it, not to mention they are probably even less supported than Java.
Java is now a systems programming language. I don't see how you can consider anything with unsafe constructs such as pointers "next gen". The only reason those insecure constructs ever existed is that they were the pragmatic approach to building a Turing-complete language. There was no concern for representing memory as discrete objects, because they just wanted to build something that worked. There are already hard and soft real-time applications in Java, a variety of hardware bytecode processors, and over 2 billion mobile devices running Java. At most, all you would have to do is add some constructs for interoperability with devices, which wouldn't be that much code; even in C/C++ you'd still have to add these constructs...
What are you programming? 8-bit microcontrollers with 1KB ram? In that case, it would be pointless to use anything other than the assembler for that platform...

Programming in gamedev (performance related)

I am just wondering how some things work in gamedev:
I know that performance is crucial, so there is still (and I think never will be) no place for managed languages/platforms such as Java/.NET, just because of their performance. But... recently I read somewhere here on SO that even though people creating games use C++ as a primary language, they actually do not use the STL or Boost (or not a lot of them). I think it has something to do with performance, right? If I am wrong, could you please tell me what the reasons are to avoid those libraries (which, I think, make a developer's life much easier)? Is it because of licensing (Boost)? And what about EA's version of the STL? Do other studios make their own versions too?
How "close to metal" game programming really is? Do you go deeper and closer to the machine? Do you sometimes use Assembly for critical inner loops, or C++ is actually the lowest abstraction layer that you use at the moment? I assume that in such products where performance is the most important thing profiling is very, very common task - but are you sometimes forced to use assembly to speed some parts up, or good C++ is "good enough"?
Edit:
Sorry, it may not have been clear, but I am interested in answers from people with game industry experience. I am not interested in assumptions from people who do not have commercial experience in game development. I am also not interested in examples of niche games created in C#/Java or whatever. However, if you know a product that looks better than FarCry2 (just an example; insert your favourite modern great-looking game here), is written entirely in Java/.NET, and has similar performance to FarCry2... do not hesitate to mention it! Thanks.
1.
Contrary to some beliefs, the STL is quite optimized and not at all bad code. The reason why most game studios don't use it is memory: you don't have as much control over memory allocation and deallocation as you would if you wrote your own version of the STL containers. This is also the reason why managed languages are not preferred.
Writing your own containers lets you write cross-platform code and do memory tracking more easily. This is especially true on consoles; the PS3, for instance, requires detailed knowledge of the hardware in order to get the best performance out of it, which in the end usually means that you need full control over memory flow between the PPU, SPUs and RSX.
2.
Assembler is only "required" (in quotes since it's not actually required but helps) for very specialized operations, e.g. math library functions. What's more common is SIMD intrinsics which vectorizes the code. However, most studios have legacy code which is quite optimized and since these optimizations are quite low level it's not code that needs to change greatly between hardware generations. I'd say on consoles it's much more common that you use lower level code.
I know, that the performance is actually crucial so there is still (and I think never will be) no place to use managed languages/platforms as Java/.NET just because of their performance.
No, you don't know this. You think it, you want to believe it because you romanticize game development, and because you think high-level languages can't be fast.
.NET performance is perfectly good enough for 90% of the games out there. And it's only going to get better. There is no inherent reason why managed platforms must be slower. They have the potential to be faster because they're JIT'ed. In practice, their performance tends to be about the same as reasonably good C++ code, much better than typical C++ code, and slightly worse than really good C++ code. And most big games use more than one language anyway. They use scripting languages, like Lua or Python, or some home-brewed stuff, all of which are orders of magnitudes slower than .NET.
Similarly, there is absolutely no reason why most of your game couldn't be written in .NET. And then the three really performance-critical functions can, if necessary, be ported to native C++ later.
But... recently I read somewhere here on SO that even though people creating games use C++ as a primary language, they actually do not use the STL or Boost (or not a lot of them). I think it has something to do with performance, right? If I am wrong, could you please tell me what the reasons are to avoid those libraries
Same as you're guilty of above... Superstition about game development. "Oh no, we can't afford to use other people's code! It's far too inefficient". Game development is stuck in the 80's in terms of programming practices and methodologies. In other words, don't worry too much about what other game developers do. If the STL or Boost make your code easier to write, then use it! And then, if you experience performance problems, profile it, and if necessary, replace that particular library component with your own.
But most of the STL is literally zero overhead. And 95% of the code in any game is not performance critical. Treat game development like you would any other programming. Don't treat it as some magical land where every line of code must be perfectly optimized and where normal rules don't apply.
And what about EA's version of STL? Do other studios make their own versions too?
As far as I know, no. EA made it partly for internal use, but also as input to the C++ community as a whole, as an example of what they (and a lot of game developers) would like to see from future revisions to the standard (it was submitted to the standards committee as well)
I can recommend the book C++ for Game Programmers. It has an in-depth discussion of the performance cost of C++ features such as the STL, exceptions and RTTI. It also touches on having your own memory manager, and various common performance optimizations.
Apparently there is a new edition out, but it has a different author. What's up with that?
About 1:
I haven't tried EA's STL version, but I can confirm from my own game development experience that the STL can sometimes be a bottleneck. So far I was always able to find workarounds though.
Boost can be very helpful, but whether it helps or hurts performance-critical code really depends on the particular part of Boost. For example, Boost::filesystem was very useful for me, whereas boost::signals was barely usable due to very poor performance. So I implemented my own signaling library using FastDelegates instead.
About 2:
Most of the time you will get away with regular C++ code. Once the game is running and you can identify bottlenecks with your profiler, you can start optimizing those pieces of code. And even then, you might not have to write any assembler code if you do it right.
For example, my custom-built 2D game engine runs without hardware acceleration. I developed it when 3D drivers were still quite buggy and most casual gamers had outdated graphics card drivers, so compatibility was more important than pure performance at that time.
Still, in our latest game, Gemsweeper, we are using a lot of alpha blending with 8-bit alpha masks, and the game still has to run on 500 MHz CPUs. So alpha blending turned out to be a performance-critical area.
To optimize this, I've used the VectorC compiler so that I could take advantage of MMX, SSE and the like without having to write assembler code. But the same code can be fast on one CPU and slow on the other (e.g. Intel vs. AMD), so I also compiled the alpha blending code several times with different optimization settings. The game runs a benchmark at runtime to find the fastest blitting module for each blitting method and uses that module from then on.
I've compared the result with some other 2D blending libraries, one of which claimed to be pure hand-optimized assembler, and my engine had about the same performance on average, as measured on different CPUs.
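The runtime-selection idea described above can be as simple as a table of function pointers to the differently built variants of the blit routine, timed once at start-up. The variant names and timing code below are a hypothetical sketch, not the author's actual code.

    #include <chrono>
    #include <cstddef>
    #include <cstdint>

    using BlitFn = void (*)(std::uint32_t* dst, const std::uint32_t* src, std::size_t n);

    // Two stand-in "variants" of the same operation; a real engine would have
    // builds with different compiler settings or instruction sets (MMX, SSE, ...).
    void blit_plain(std::uint32_t* d, const std::uint32_t* s, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) d[i] = s[i];
    }
    void blit_unrolled(std::uint32_t* d, const std::uint32_t* s, std::size_t n) {
        for (std::size_t i = 0; i + 3 < n; i += 4) {
            d[i] = s[i]; d[i + 1] = s[i + 1]; d[i + 2] = s[i + 2]; d[i + 3] = s[i + 3];
        }
    }

    BlitFn pick_fastest(std::uint32_t* dst, const std::uint32_t* src, std::size_t n) {
        BlitFn candidates[] = { blit_plain, blit_unrolled };
        BlitFn best = candidates[0];
        auto best_time = std::chrono::nanoseconds::max();
        for (BlitFn fn : candidates) {
            auto t0 = std::chrono::steady_clock::now();
            fn(dst, src, n);                              // one timing pass per variant
            auto dt = std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::steady_clock::now() - t0);
            if (dt < best_time) { best_time = dt; best = fn; }
        }
        return best;   // use this function pointer for all subsequent blits
    }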
Bottom line:
Do not optimize without measuring first, and think about alternatives before starting to write assembler. This usually yields good enough results and will save you a lot of time.
We use 'STL' (ie. the standard C++ library) and a small amount of Boost. However some of it is avoided or frowned upon (std::map, std::list, boost::shared_ptr) typically for the over-exuberant memory allocation policies or poor cache coherency. These can typically be worked around, eg. with custom allocators, but instead we have other approaches to the same problems with their own benefits.
As for how close to the metal it really is, it depends. In our project C++ is the lowest level we go. In another project in this studio, there is a little assembly, especially on the non-PC platforms. These days in certain projects you're just as likely to be limited by the GPU as by the CPU so the days of low level code optimisation are getting fewer and the days of optimising shaders and art assets are growing.
Be wary of claiming that Java/.NET etc is never used however. Not everybody needs the performance of FarCry2 (which is a pretty excessive spec) which is why you're seeing more and more games written in managed languages with C++ just for optimisation.
It is true that in game development, STL is not used. In spite of what certain people always rush to claim, they also never use Java or C# or other managed languages.
I'm not talking about small X-Box Live Arcade downloadable games or web browser games, or such things. I'm talking about high-end development in AAA games.
They don't use STL. However, they do use their own custom implementations that look a lot like STL. There will be smart-arrays, there will be hash tables, there will be smart pointers, they just won't be STL.
Consoles have some performance characteristics that are very different from PCs. Even game projects that exclusively target PC are usually using codebases that have been used for console projects in the past. A lot of tweaking goes into making the basic template structures work as desired.
Most game studios also want code that they can adapt to other platforms. Locking into an implementation from MS/Sony/Nintendo makes for a lot more pain when it comes time to port the game to a new platform. The provided template libraries (which aren't necessarily STL to start with) are often less than stellar. At least they are that way early in the hardware cycle when a studio is ironing out the engine they plan to keep using for the next five years.
At the studios I've worked at, I've certainly seen a fair degree of "not-built-here" attitude to dismiss third party code. Sometimes it's justified, sometimes it's not. In the case of basic data structure templates, it typically is.
As for your second question, assembler is occasionally used. But only in isolated situations where a large volume of math needs to happen very frequently. An entire engine might contain two or three smallish files of asm blocks.
You can find out for yourself (to a degree) by looking at game SDKs.
Almost all the id Tech 4 games (DooM III, Prey, Quake IV, ET:QW) have SDKs out, complete with physics, script, AI, math, etc. systems included. The only asm used is for specialized math code, everything else is pure C++.
Crytek has a Crysis SDK out (you'll need the game installed to install the SDK though) and Far Cry SDK.
Valve has the Source SDK available to anyone who has purchased a Source game through Steam.
There are a lot more if you look. A lot of the code isn't particularly clean or flexible (sometimes not even fast), but I suppose it's easier to adjust things in code you've written as opposed to monolithic libraries full of hard to understand template-fu.
No, you are largely wrong. Both .NET and Java have been used in commercial games, certainly on Windows (and probably on consoles too).
The STL is also widely used; I know that quite a large proportion of amateur games developers use it.
Probably the main reason for not using STL is inertia, and using third party libraries/engines which do not.
I imagine that historically, on some platforms, good STL implementations were not available, especially on RAM-limited hardware like the PS2.

D Programming Language in the real world? [closed]

Is anyone out there using D for real world applications? If so, what are you using it for? I can't seem to find anything big on the web written in D.
Despite the lack of known big users, D seems like a very promising language to me, and according to TIOBE, it's fairly popular.
I do bioinformatics work in D. For me, the key thing about D is that it takes a very level-headed approach to tradeoffs and recognizes the principle of diminishing returns.
Unlike C++, which adheres rigorously to the zero-overhead principle, D allows features that may have a small performance/space cost if they make the language a lot more usable. These include garbage collection, a monitor object for each class, runtime type info, etc.
Unlike Ruby, Python, PHP, etc, D tries to be almost as fast as C, even if it is less dynamic and slightly more difficult to program in than scripting languages.
The result is a language that is optimal when both development time and execution time matter about equally, which in my field is most of the time.
Similarly, D takes a very level-headed approach to safety vs. flexibility. It assumes that programmers basically know what they're doing, but do make mistakes.
Unlike C and C++, it assumes that you don't want to use pointers, unsafe casts, manual memory management, etc, everywhere in your code, because they're error prone, and assumes that you don't want to sift through multi-page template error messages when you screw up just to use resizable arrays.
Unlike Java and other bondage-and-discipline languages, D assumes that sometimes pointers, unsafe casts, manual memory management, etc. are a necessary evil, and assumes you're smart enough to handle real templates, operator overloading, etc. without writing obfuscated code. It also assumes that you may screw up and access an array out of bounds, but that the programmer knows best what tradeoff should be made between safety and speed in any given situation. Therefore, whether arrays are bounds checked is simply determined by a compiler switch.
I'm using D for my research work in the area of computer graphics. I and others have had papers published in our fields based on work done using D. I think it's definitely ready for use on small to medium sized research projects where performance matters. It's a nice fit for research work because often you're starting from scratch anyway, so you don't have much legacy code to worry about integrating with.
Another popular area for use seems to be web services. Hopefully someone else can comment who's in this space, but there too I think the idea is that performance often really matters so you want a compiled-to-the-metal language. Services are often fairly small, self-contained processes, so interop with large amounts of legacy C++ code is not really necessary or useful. Thus D can get its foot in the door.
I think D will continue to gain grass-roots followers in this way -- on smaller projects that for whatever reason can afford to ditch the C++ legacy in order to gain a programming language that's much more enjoyable to use, and perhaps more productive too.
But until there's a huge number of grass-roots users there won't be much in the way of big corporate users I suspect.
I know of one smallish company that has brought a mail server product to market. They had at least two people working full-time on the project.
Also, a major player in the IT business has several employees using D in larger internal projects.
Further I know of one company seeking venture funding, several (at least 4) employees in smaller companies using D either part or full time, and at least a couple (including me) actively seeking opportunities in the consulting market.
I've probably left out a few that I should have known about, and probably some I haven't heard about but that still exist, as the above is more or less those I know myself via the community.
A small percentage of my current income comes from D.
I use D for web development and it proved quite a lot more productive compared to C/C++.
There are a lot of frameworks based on ruby/php/python, of course.
But when you want to develop something unique that also has to be as fast as C and nearly as easy to program in as many scripting languages, then D is a good choice.
I use D for a hardware-in-the-loop (HIL) test environment. This is for software tests in the automotive area. D can be used here because, as a systems programming language, it can be used in real-time programs (IRQ handlers in the Linux real-time extension RTAI-LXRT).
With the ongoing port of SWT/JFace, I plan to do more work in D that I would previously have done in Java.
Facebook announced that they are using it in production as of today.
I'm using D in research about compile time code translation. The advanced templating combined with tuples and mixins makes code translation much easier and allows for code translation to be done during compile time without requiring a separate tool.
There are some examples of physicists using D to enhance their programs with its metaprogramming.
Video - conference talk; I could not find the source site for the physicists' use.
Our whole (high-traffic) network infrastructure is based only on D1 and tango. We are a young startup company in Berlin: sociomantic.com
My current work task is a system to translate C# to D. This is part of a for-profit project to develop a software system.
Well, I have written a couple of research papers in D as have others.
http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D.announce&artnum=13337
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D.announce&article_id=9466.
It seems that Remedy Games has a large D2 codebase for their games (cf. Using D Alongside a Game Engine by Manu Evans - DConf 2013).
They are a big company, and knowing that a big company is using D is very good.
A lot of the games released by ABA Games are written in D 1.x, though I imagine the console ports had to be rewritten in C++.
I've written quite a few game prototypes in D, but I'm not sure if that qualifies as 'real world' since I wrote them for my own benefit and have never released any of them.
I wrote (and am still maintaining and developing) software for converting test protocols from various hardware testing stations into a standardized output format, for traceability and the like.
Altogether over 5k lines of code, written with D 1.x and the Phobos library.
D is so easy to learn and, disregarding some pitfalls (in the Phobos library), a real joy to program in.
I used D for my research project on developing a global optimization algorithm. I applied it to the problem of training neural networks. It's up to you whether you want to call this "real world".
I wrote a wrapper script that builds GDC on OS X:
http://github.com/davecheney/make-gdc-apple/tree/master
I'd love to hear from other DMD programmers out there
I use D2, the second version of the language. I have written real-time applications with it (a 3D engine, for instance).
The language gets more and more powerful each day. D is very pragmatic, and all the built-in features, especially the metaprogramming paradigm, put it far ahead of C++, in my opinion. The syntax is clearer, you can use the strengths of functional programming through functions such as filter or reduce, and, one of the most important features, you can use all the C libs.
Definitely my favourite language, and I'm pretty sure it will become a widely used one.
I suppose we can read something into the lack of immediate answers to this question, namely that not many (if any) of the active Stack Overflow responders are using D. I was also a little surprised at the level of its ranking in the TIOBE listing that you link to.
Having said that, Walter Bright has been working on the language for quite a number of years now, and I think he has quite a number of "followers" who remember what a good job he did with the Zortech C++ compiler back in the '90s. I also note that the language appears to be leaning in the functional direction now.
D's official website lists the organizations that are currently using D.
http://dlang.org/orgs-using-d.html
The D wiki also provides a list of organizations, but it's outdated.
Just watch the DConf talks carefully.
DConf 2013
DConf 2014
Almost all people there work for some company, and they use D at work.
I am starting a project to rewrite some of our internal tools from Perl to D. I chose D because I'm pretty excited about the design philosophies of the language. I've been programming for a long time and used a lot of languages, from assemblers to high-level (mostly C) to scripting languages (mostly Perl), and D is the first language I've been enthused about learning in many years.
I decided to move to a compiled language for one main reason - security. Functionally, Perl works quite well for the toolset I work on, but it's insecure - anyone that can run a script can also read, copy and create their own modified version of the tool. (And they do.) I know there are circuitous methods to get around this problem (sort of), but to be honest I don't have enough hours in the day to manage all of that and still get my primary job done.
I started a ray-traced renderer in D with ldmd2.
http://palaes.rudanium.org/SubSpace/render.php