Graphics in reliable systems (like airline instruments) - C++

I wonder what technology is used to visualise the flight instruments on those little LCDs in aircraft cockpits.
I am a Windows application C++ software developer, and I'm interested in what libraries are used for highly reliable systems like aircraft onboard systems.
Here is an example of one of these LCDs, probably from a Boeing aircraft:

OpenGL has a safety-critical subset that's worth reading up on: https://www.khronos.org/openglsc/

MFDs (multifunction displays) are completely separate computers themselves. They communicate with other components (to obtain the data to be displayed) conforming to the ARINC 661 standard, which defines a binary communication format for exchanging data between the display and user applications (sensors, etc.). Avionics systems also use an RTOS (Integrity was being used in my project); each component has its own partition and is allocated processing time by the OS. Also, as Andreas stated, OpenGL has a safety-critical subset for this purpose. Avionics code goes through elaborate reviews and certification, and is written over-safe (e.g. we were not allowed to use the "new" keyword in C++; only static memory allocation was allowed).
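To give a feel for that last constraint, here is a minimal sketch of the kind of static-allocation pattern such coding rules push you toward (the pool class, sizes and field names are purely illustrative, not from any real avionics codebase):

    #include <array>
    #include <cstddef>

    // Illustrative only: a fixed-capacity pool that hands out objects from a
    // statically allocated array, so no heap allocation ("new") ever happens.
    template <typename T, std::size_t Capacity>
    class StaticPool {
    public:
        // Returns nullptr when the pool is exhausted; the caller must handle it.
        T* acquire() {
            return (used_ < Capacity) ? &storage_[used_++] : nullptr;
        }
    private:
        std::array<T, Capacity> storage_{};
        std::size_t used_ = 0;
    };

    struct AltitudeSample { float feet = 0.0f; long timestamp_ms = 0; };

    // All memory is reserved up front; nothing is allocated at run time.
    static StaticPool<AltitudeSample, 256> g_sample_pool;

    int main() {
        if (AltitudeSample* s = g_sample_pool.acquire()) {
            s->feet = 35000.0f;
            s->timestamp_ms = 0;
        }
        return 0;
    }

The point is simply that every buffer has a known, bounded size that can be reviewed and certified.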

I'm in the aerospace industry. Glad you asked.
My experience is that the hardware setup is unique for each display unit. Commercial or custom-made GPUs are used, but drivers and libraries are always made by the display unit vendor more or less from scratch, since the combination of CPU, GPU, OS and the connectors in between is often unique and always a company secret of the display unit vendor. The OpenGL Safety Critical profile does appear in some products, but in the end the vendor only develops what the customer really needs and is willing to pay for. Quite often companies buy the basics and then pay for additional functionality such as another blend operation or larger textures. Similar to add-ons for cars.
In general, aerospace is 10-20 years behind in graphics capabilities. For displays like the one in the picture there is no need to get up to date either. More complex capabilities impose a gruesome verification cost that no customer is actually ready to pay for. You can't have the wrong altitude presented to the pilot, so testing and documentation are immense.
Entertainment systems are in general more capable, as the information displayed cannot crash the aircraft. I think they are similar to the systems found in casino slot machines. As long as the hardware does not ignite itself it is safe enough.
Most of what I do is either company or military confidential. I can't say much more than what is publicly available or common industry knowledge. I hope this sheds some light on the environment you were interested in.


How to display computer vision and AR in UML use case diagrams

My system uses both augmented reality and computer vision.
The first feature is: the user actor can scan a specific object, and the computer vision should recognize it.
The second feature is: the user actor can view a specific place using augmented reality.
Each feature is a use case connected to the user, but do I also connect them to some sort of AI actors? And if so, what is the suitable way to do it?
Do I just say "Computer vision system" and "Augmented reality system"?
Feature or use-case?
This is a good start. However, there is a key misconception here:
Features are characteristics or capabilities offered by the software that are valued by users because they help them achieve some purpose. Features are often identified with user stories.
Use-cases represent goals of the actors using the system, which correspond to a set of behaviors and interactions with the user, without reference to the internal structure of the system.
These are two different concepts. There is of course some overlap: some higher-level capabilities can be described in terms of goals. For example, an ERP can be expected to have accounting, warehouse management and sales administration features.
But features are more general: they can also describe technical capabilities that are not directly observable by the user (e.g. backup), capabilities that are not directly related to a specific set of behaviors (e.g. a multilingual user interface), or ones that are much more detailed (e.g. a date-picking feature).
If you're on features, you may consider non-UML techniques, such as a feature tree, or user-story mapping (which is a kind of feature tree constructed with user-stories).
The big picture with use-cases
In your diagram, the bubbles seem to show what the system offers, and not what the user wants to do. If you want to show the big picture with use-cases, you need to relate the bubbles to user goals:
Does the user just want to scan objects? Or is this scanning only one step towards a larger goal, such as making an inventory, recognizing and ordering spare parts, or populating a virtual world?
Does the user just want to view a place in VR? Or are the expectations more ambitious, like purchasing products that would look fine in a given place?
This might look like an unnecessary philosophical debate, but it is not, because the main benefit of use-cases is their goal-oriented approach. Framing the problem or the expectations correctly may allow you to think more creatively about alternatives instead of locking you early into a preconceived solution.
The right boundaries
The actors raise another question: are these actors autonomous and independent systems and do they matter to the user? Or are they just implementation details?
Formally, actors are external to the system, and moreover, the use-case should not depend on the internal structure of the system. So if the computer vision and the virtual reality system are in fact libraries, components, sub-systems of your system, they should not appear in the diagram.
Secondly, use-cases should offer an observable result of value for actors. If the external system is dependent on your system and has no value on its own, then the use-case results cannot be of value to this system. For example, a DBMS is often viewed as a candidate actor, but it does not pass this test: the DBMS without the main system would be useless. If the system is not independent and autonomous, just remove it from the diagram to keep things simple.
Lastly, does the system actor matter to the other actors? If it makes no difference to your human users whether an external system-actor intervenes, keep it simple and do not show the system-actor, although you could. Then again, it's more an implementation choice to rely on an external system than a requirement.
The way you denoted it is common practice. The so-called primary actor (who receives the added value from the system under consideration) is placed to the left, and the so-called secondary actors (which only take part in and/or support the use case) are placed to the right. Depending on who the reader of the UC diagram will be, their appearance will make sense or not. If you present it to a customer, they are likely not interested in IT blurb. But for system designers it would be important information.

How to make a GameBoy / GameBoy Advance Emulator? [duplicate]

How do emulators work? When I see NES/SNES or C64 emulators, it astounds me.
Do you have to emulate the processor of those machines by interpreting its particular assembly instructions? What else goes into it? How are they typically designed?
Can you give any advice for someone interested in writing an emulator (particularly a game system)?
Emulation is a multi-faceted area. Here are the basic ideas and functional components. I'm going to break it into pieces and then fill in the details via edits. Many of the things I'm going to describe will require knowledge of the inner workings of processors -- assembly knowledge is necessary. If I'm a bit too vague on certain things, please ask questions so I can continue to improve this answer.
Basic idea:
Emulation works by handling the behavior of the processor and the individual components. You build each individual piece of the system and then connect the pieces much like wires do in hardware.
Processor emulation:
There are three ways of handling processor emulation:
Interpretation
Dynamic recompilation
Static recompilation
With all of these paths, you have the same overall goal: execute a piece of code to modify processor state and interact with 'hardware'. Processor state is a conglomeration of the processor registers, interrupt handlers, etc for a given processor target. For the 6502, you'd have a number of 8-bit integers representing registers: A, X, Y, P, and S; you'd also have a 16-bit PC register.
With interpretation, you start at the IP (instruction pointer -- also called PC, program counter) and read the instruction from memory. Your code parses this instruction and uses this information to alter processor state as specified by your processor. The core problem with interpretation is that it's very slow; each time you handle a given instruction, you have to decode it and perform the requisite operation.
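For concreteness, here is a minimal sketch of such a fetch-decode-execute loop around the 6502-style register set described above; only a couple of opcodes are handled, flag updates are omitted, and the memory model is simplified:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Simplified 6502-style processor state.
    struct Cpu {
        uint8_t a = 0, x = 0, y = 0, p = 0, s = 0xFF; // registers
        uint16_t pc = 0;                              // program counter
        std::array<uint8_t, 65536> mem{};             // flat 64 KiB memory
    };

    // Interpret instructions one at a time: fetch, decode, execute.
    void run(Cpu& cpu) {
        for (;;) {
            uint8_t opcode = cpu.mem[cpu.pc++];       // fetch
            switch (opcode) {                         // decode + execute
            case 0xA9:                                // LDA #imm: load accumulator
                cpu.a = cpu.mem[cpu.pc++];
                break;
            case 0xAA:                                // TAX: copy A into X
                cpu.x = cpu.a;
                break;
            case 0x00:                                // BRK: stop, for this sketch
                return;
            default:
                std::printf("unhandled opcode %02X\n", opcode);
                return;
            }
        }
    }

    int main() {
        Cpu cpu;
        const uint8_t program[] = {0xA9, 0x42, 0xAA, 0x00}; // LDA #$42; TAX; BRK
        for (std::size_t i = 0; i < sizeof program; ++i) cpu.mem[i] = program[i];
        run(cpu);
        std::printf("A=%02X X=%02X\n", cpu.a, cpu.x);
        return 0;
    }

Every real opcode you add follows the same pattern, which is why interpreters are easy to write but pay the decode cost on every single instruction.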
With dynamic recompilation, you iterate over the code much like interpretation, but instead of just executing opcodes, you build up a list of operations. Once you reach a branch instruction, you compile this list of operations to machine code for your host platform, then you cache this compiled code and execute it. Then when you hit a given instruction group again, you only have to execute the code from the cache. (BTW, most people don't actually make a list of instructions but compile them to machine code on the fly -- this makes it more difficult to optimize, but that's out of the scope of this answer, unless enough people are interested)
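A rough sketch of the caching half of that idea is below; the actual translation to host machine code is stubbed out with a std::function, since code generation is the hard, platform-specific part:

    #include <cstdint>
    #include <functional>
    #include <unordered_map>

    // Minimal guest state for the sketch.
    struct GuestCpu { uint16_t pc = 0; };

    // A "compiled block": in a real dynamic recompiler this would be a pointer
    // to freshly generated host machine code; a std::function stands in here so
    // only the caching/dispatch structure is visible.
    using CompiledBlock = std::function<void(GuestCpu&)>;

    // Stand-in translator: would read guest instructions starting at `pc` until
    // the next branch and emit equivalent host code. Stubbed for illustration.
    CompiledBlock translate_until_branch(uint16_t pc) {
        return [pc](GuestCpu& cpu) { cpu.pc = static_cast<uint16_t>(pc + 4); };
    }

    class Dynarec {
    public:
        void run_block(GuestCpu& cpu) {
            uint16_t pc = cpu.pc;
            auto it = cache_.find(pc);
            if (it == cache_.end()) {
                // First visit: translate the block, then cache it by guest PC.
                it = cache_.emplace(pc, translate_until_branch(pc)).first;
            }
            it->second(cpu);                          // execute the cached block
        }
    private:
        std::unordered_map<uint16_t, CompiledBlock> cache_;
    };

    int main() {
        GuestCpu cpu;
        Dynarec jit;
        jit.run_block(cpu);   // translated and cached
        cpu.pc = 0;
        jit.run_block(cpu);   // served from the cache this time
        return 0;
    }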
With static recompilation, you do the same as in dynamic recompilation, but you follow branches. You end up building a chunk of code that represents all of the code in the program, which can then be executed with no further interference. This would be a great mechanism if it weren't for the following problems:
Code that isn't in the program to begin with (e.g. compressed, encrypted, generated/modified at runtime, etc) won't be recompiled, so it won't run
It's been proven that finding all the code in a given binary is equivalent to the Halting problem
These combine to make static recompilation completely infeasible in 99% of cases. For more information, Michael Steil has done some great research into static recompilation -- the best I've seen.
The other side to processor emulation is the way in which you interact with hardware. This really has two sides:
Processor timing
Interrupt handling
Processor timing:
Certain platforms -- especially older consoles like the NES, SNES, etc -- require your emulator to have strict timing to be completely compatible. With the NES, you have the PPU (pixel processing unit) which requires that the CPU put pixels into its memory at precise moments. If you use interpretation, you can easily count cycles and emulate proper timing; with dynamic/static recompilation, things are a /lot/ more complex.
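A sketch of the cycle-counting approach in an interpreter, where every executed opcode charges its cycle cost and the other components are then "caught up" (the per-opcode costs below are placeholders; the 3-dots-per-CPU-cycle ratio is the usual NTSC NES figure):

    #include <cstdint>

    // Illustrative cycle accounting: each opcode charges a cycle cost, and other
    // components (here a stand-in PPU) are advanced by a matching amount.
    struct Ppu {
        long dots = 0;
        void run_cycles(long n) { dots += n; }   // advance emulated video state
    };

    int cycles_for(uint8_t opcode) {
        switch (opcode) {
        case 0xA9: return 2;      // placeholder cost for an immediate load
        case 0xAA: return 2;      // placeholder cost for a register transfer
        default:   return 2;
        }
    }

    void step(uint8_t opcode, Ppu& ppu, long& total_cpu_cycles) {
        int n = cycles_for(opcode);
        total_cpu_cycles += n;
        ppu.run_cycles(n * 3L);   // NTSC NES: the PPU runs 3 dots per CPU cycle
    }

    int main() {
        Ppu ppu;
        long total = 0;
        step(0xA9, ppu, total);
        step(0xAA, ppu, total);
        return ppu.dots == total * 3 ? 0 : 1;
    }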
Interrupt handling:
Interrupts are the primary mechanism by which the CPU communicates with hardware. Generally, your hardware components will tell the CPU what interrupts it cares about. This is pretty straightforward -- when your code throws a given interrupt, you look at the interrupt handler table and call the proper callback.
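A minimal sketch of that dispatch, with invented names and an arbitrary table size:

    #include <array>
    #include <cstddef>
    #include <functional>

    // Components register a callback per interrupt number; raising an interrupt
    // simply looks up the table and calls the proper handler.
    class InterruptController {
    public:
        using Handler = std::function<void()>;

        void register_handler(std::size_t irq, Handler h) {
            handlers_.at(irq) = std::move(h);
        }

        void raise(std::size_t irq) {
            if (handlers_.at(irq)) handlers_.at(irq)();
        }
    private:
        std::array<Handler, 8> handlers_{};   // one slot per emulated IRQ line
    };

    int main() {
        InterruptController ic;
        ic.register_handler(0, [] { /* e.g. vertical blank: present a frame */ });
        ic.raise(0);
        return 0;
    }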
Hardware emulation:
There are two sides to emulating a given hardware device:
Emulating the functionality of the device
Emulating the actual device interfaces
Take the case of a hard-drive. The functionality is emulated by creating the backing storage, read/write/format routines, etc. This part is generally very straightforward.
The actual interface of the device is a bit more complex. This is generally some combination of memory mapped registers (e.g. parts of memory that the device watches for changes to do signaling) and interrupts. For a hard-drive, you may have a memory mapped area where you place read commands, writes, etc, then read this data back.
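A sketch of that memory-mapped side, where a bus routes an (invented) register window to a fake device instead of RAM:

    #include <array>
    #include <cstdint>

    // A fake block device exposed through memory-mapped registers. The address
    // window and register behaviour are invented for illustration.
    class FakeDisk {
    public:
        void    write_reg(uint16_t reg, uint8_t value) { (void)reg; last_command_ = value; }
        uint8_t read_reg(uint16_t reg) const { (void)reg; return last_command_; }
    private:
        uint8_t last_command_ = 0;
    };

    class Bus {
    public:
        uint8_t read(uint16_t addr) const {
            if (addr >= 0x2000 && addr < 0x2008)       // device register window
                return disk_.read_reg(addr - 0x2000);
            return ram_[addr];
        }
        void write(uint16_t addr, uint8_t value) {
            if (addr >= 0x2000 && addr < 0x2008) {     // writes go to the device
                disk_.write_reg(addr - 0x2000, value);
                return;
            }
            ram_[addr] = value;
        }
    private:
        std::array<uint8_t, 65536> ram_{};
        FakeDisk disk_;
    };

    int main() {
        Bus bus;
        bus.write(0x2000, 0x01);    // "issue a command" to the fake disk
        return bus.read(0x2000) == 0x01 ? 0 : 1;
    }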
I'd go into more detail, but there are a million ways you can go with it. If you have any specific questions here, feel free to ask and I'll add the info.
Resources:
I think I've given a pretty good intro here, but there are a ton of additional areas. I'm more than happy to help with any questions; I've been very vague in most of this simply due to the immense complexity.
Obligatory Wikipedia links:
Emulator
Dynamic recompilation
General emulation resources:
Zophar -- This is where I got my start with emulation, first downloading emulators and eventually plundering their immense archives of documentation. This is the absolute best resource you can possibly have.
NGEmu -- Not many direct resources, but their forums are unbeatable.
RomHacking.net -- The documents section contains resources regarding machine architecture for popular consoles
Emulator projects to reference:
IronBabel -- This is an emulation platform for .NET, written in Nemerle and recompiles code to C# on the fly. Disclaimer: This is my project, so pardon the shameless plug.
BSnes -- An awesome SNES emulator with the goal of cycle-perfect accuracy.
MAME -- The arcade emulator. Great reference.
6502asm.com -- This is a JavaScript 6502 emulator with a cool little forum.
dynarec'd 6502asm -- This is a little hack I did over a day or two. I took the existing emulator from 6502asm.com and changed it to dynamically recompile the code to JavaScript for massive speed increases.
Processor recompilation references:
The research into static recompilation done by Michael Steil (referenced above) culminated in this paper and you can find source and such here.
Addendum:
It's been well over a year since this answer was submitted and with all the attention it's been getting, I figured it's time to update some things.
Perhaps the most exciting thing in emulation right now is libcpu, started by the aforementioned Michael Steil. It's a library intended to support a large number of CPU cores, which use LLVM for recompilation (static and dynamic!). It's got huge potential, and I think it'll do great things for emulation.
emu-docs has also been brought to my attention, which houses a great repository of system documentation, which is very useful for emulation purposes. I haven't spent much time there, but it looks like they have a lot of great resources.
I'm glad this post has been helpful, and I'm hoping I can get off my arse and finish up my book on the subject by the end of the year/early next year.
A guy named Victor Moya del Barrio wrote his thesis on this topic. A lot of good information on 152 pages. You can download the PDF here.
If you don't want to register with scribd, you can google for the PDF title, "Study of the techniques for emulation programming". There are a couple of different sources for the PDF.
Emulation may seem daunting but is actually quite a bit easier than simulating.
Any processor typically has a well-written specification that describes states, interactions, etc.
If you did not care about performance at all, then you could easily emulate most older processors using very elegant object oriented programs. For example, an X86 processor would need something to maintain the state of registers (easy), something to maintain the state of memory (easy), and something that would take each incoming command and apply it to the current state of the machine. If you really wanted accuracy, you would also emulate memory translations, caching, etc., but that is doable.
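A sketch of that object-oriented decomposition; the register names are x86-flavoured, but the instruction set here is invented:

    #include <cstdint>
    #include <memory>
    #include <vector>

    // Registers, memory, and instructions that apply themselves to machine state.
    struct RegisterFile { uint32_t eax = 0, ebx = 0, eip = 0; };
    struct Memory       { std::vector<uint8_t> bytes = std::vector<uint8_t>(1 << 20); };

    struct Machine { RegisterFile regs; Memory mem; };

    class Instruction {
    public:
        virtual ~Instruction() = default;
        virtual void apply(Machine& m) const = 0;   // mutate machine state
    };

    // An invented "load immediate into EAX" instruction.
    class MovEaxImm : public Instruction {
    public:
        explicit MovEaxImm(uint32_t v) : value_(v) {}
        void apply(Machine& m) const override { m.regs.eax = value_; m.regs.eip += 5; }
    private:
        uint32_t value_;
    };

    int main() {
        Machine m;
        std::unique_ptr<Instruction> insn = std::make_unique<MovEaxImm>(42);
        insn->apply(m);
        return m.regs.eax == 42 ? 0 : 1;
    }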
In fact, many microchip and CPU manufacturers test programs against an emulator of the chip and then against the chip itself, which helps them find out whether there are issues in the specifications of the chip or in the actual implementation of the chip in hardware. For example, it is possible to write a chip specification that would result in deadlocks, and when a deadlock occurs in the hardware it's important to see whether it can be reproduced in the specification, since that indicates a greater problem than something in the chip implementation.
Of course, emulators for video games usually care about performance so they don't use naive implementations, and they also include code that interfaces with the host system's OS, for example to use drawing and sound.
Considering the very slow performance of old video games (NES/SNES, etc.), emulation is quite easy on modern systems. In fact, it's even more amazing that you could just download a set of every SNES game ever or any Atari 2600 game ever, considering that when these systems were popular having free access to every cartridge would have been a dream come true.
I know that this question is a bit old, but I would like to add something to the discussion. Most of the answers here center around emulators interpreting the machine instructions of the systems they emulate.
However, there is a very well-known exception to this called "UltraHLE" (Wikipedia article). UltraHLE, one of the most famous emulators ever created, emulated commercial Nintendo 64 games (with decent performance on home computers) at a time when it was widely considered impossible to do so. As a matter of fact, Nintendo was still producing new titles for the Nintendo 64 when UltraHLE was created!
For the first time, I saw articles about emulators in print magazines where before, I had only seen them discussed on the web.
The concept of UltraHLE was to make possible the impossible by emulating C library calls instead of machine level calls.
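In other words, when the emulated program jumps to an entry point the emulator recognizes, a native implementation runs instead of the callee's own instructions. A toy sketch of that dispatch, with an invented guest state and an invented address table:

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    struct GuestCpu { uint32_t pc = 0; uint32_t r0 = 0; };   // toy guest state

    // High-level emulation, roughly: when the guest calls an address we have
    // recognized as a known library routine, run a native implementation and
    // skip emulating the routine's instructions. Addresses below are invented.
    using NativeImpl = void (*)(GuestCpu&);

    void native_memset_stub(GuestCpu& cpu) {
        // A real HLE routine would read the guest's arguments and fill guest
        // memory natively; here we only note that the call was intercepted.
        std::printf("intercepted library call, arg r0=%u\n", cpu.r0);
    }

    int main() {
        std::unordered_map<uint32_t, NativeImpl> hle_table = {
            {0x80001234u, &native_memset_stub},   // invented library entry point
        };

        GuestCpu cpu;
        cpu.pc = 0x80001234u;                     // guest "calls" the routine
        cpu.r0 = 16;

        auto it = hle_table.find(cpu.pc);
        if (it != hle_table.end())
            it->second(cpu);                      // high-level path
        // else: fall back to instruction-by-instruction emulation
        return 0;
    }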
Something worth taking a look at is Imran Nazar's attempt at writing a Gameboy emulator in JavaScript.
Having created my own emulator of the BBC Microcomputer of the 80s (type VBeeb into Google), there are a number of things to know.
You're not emulating the real thing as such; that would be a replica. Instead, you're emulating state. A good example is a calculator: the real thing has buttons, a screen, a case, etc. But to emulate a calculator you only need to emulate whether the buttons are up or down, which segments of the LCD are on, and so on. Basically, a set of numbers representing all the possible combinations of things that can change in a calculator.
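As a sketch, the calculator above reduces to a struct of plain values (field names and sizes invented):

    #include <array>

    // Nothing about plastic or wiring: just the numbers that can change.
    struct CalculatorState {
        std::array<bool, 40> key_down{};        // which buttons are pressed
        std::array<bool, 8 * 7> lcd_segment{};  // which LCD segments are lit
        double accumulator = 0.0;               // the value being worked on
    };

    int main() {
        CalculatorState calc;
        calc.key_down[5] = true;                // emulate pressing one key
        return 0;
    }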
You only need the interface of the emulator to appear and behave like the real thing. The more convincing this is the closer the emulation is. What goes on behind the scenes can be anything you like. But, for ease of writing an emulator, there is a mental mapping that happens between the real system, i.e. chips, displays, keyboards, circuit boards, and the abstract computer code.
To emulate a computer system, it's easiest to break it up into smaller chunks and emulate those chunks individually. Then string the whole lot together for the finished product. Much like a set of black boxes with inputs and outputs, which lends itself beautifully to object oriented programming. You can further subdivide these chunks to make life easier.
Practically speaking, you're generally looking to write for speed and fidelity of emulation. This is because software on the target system will (may) run more slowly than the original hardware on the source system. That may constrain the choice of programming language, compilers, target system etc.
Further to that, you have to circumscribe what you're prepared to emulate. For example, it's not necessary to emulate the voltage state of transistors in a microprocessor, but it's probably necessary to emulate the state of the register set of the microprocessor.
Generally speaking, the finer the level of detail of the emulation, the more fidelity you'll get to the original system.
Finally, information for older systems may be incomplete or non-existent. So getting hold of original equipment is essential, or at least prising apart another good emulator that someone else has written!
Yes, you have to interpret the whole binary machine code mess "by hand". Not only that, most of the time you also have to simulate some exotic hardware that doesn't have an equivalent on the target machine.
The simple approach is to interpret the instructions one-by-one. That works well, but it's slow. A faster approach is recompilation - translating the source machine code to target machine code. This is more complicated, as most instructions will not map one-to-one. Instead you will have to make elaborate work-arounds that involve additional code. But in the end it's much faster. Most modern emulators do this.
When you develop an emulator you are interpreting the processor assembly that the system is working on (Z80, 8080, PS CPU, etc.).
You also need to emulate all peripherals that the system has (video output, controller).
You should start by writing emulators for simple systems like the good old Game Boy (which uses a Z80 processor, if I'm not mistaken) or the C64.
Emulators are very hard to create, since there are many hacks (as in unusual effects), timing issues, etc. that you need to simulate.
For an example of this, see http://queue.acm.org/detail.cfm?id=1755886.
That will also show you why you ‘need’ a multi-GHz CPU for emulating a 1MHz one.
Also check out Darek Mihocka's Emulators.com for great advice on instruction-level optimization for JITs, and many other goodies on building efficient emulators.
I've never done anything as fancy as emulating a game console, but I did take a course once where the assignment was to write an emulator for the machine described in Andrew Tanenbaum's Structured Computer Organization. That was fun and gave me a lot of aha moments. You might want to pick that book up before diving into writing a real emulator.
Advice on emulating a real system or your own thing?
I can say that emulators work by emulating the ENTIRE hardware. Maybe not down to the circuit (as in moving bits around like the HW would do; moving the byte is the end result, so copying the byte is fine). Emulators are very hard to create, since there are many hacks (as in unusual effects), timing issues, etc. that you need to simulate. If one (input) piece is wrong, the entire system can go down or at best have a bug/glitch.
The Shared Source Device Emulator contains buildable source code to a PocketPC/Smartphone emulator (Requires Visual Studio, runs on Windows). I worked on V1 and V2 of the binary release.
It tackles many emulation issues:
- efficient address translation from guest virtual to guest physical to host virtual (a rough sketch of this one follows the list)
- JIT compilation of guest code
- simulation of peripheral devices such as network adapters, touchscreen and audio
- UI integration, for host keyboard and mouse
- save/restore of state, for simulation of resume from low-power mode
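A rough sketch of that first item, the two-step translation from guest-virtual to guest-physical to host-virtual addresses; the page size, table layout and RAM size are simplified and are not taken from the emulator's actual source:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr uint32_t kPageSize = 4096;

    class GuestMmu {
    public:
        GuestMmu() : guest_ram_(16 * 1024 * 1024) {}     // 16 MiB of guest RAM

        // Map a guest-virtual page number to a guest-physical page number.
        void map(uint32_t guest_virt_page, uint32_t guest_phys_page) {
            page_table_[guest_virt_page] = guest_phys_page;
        }

        // Returns a host pointer for a guest virtual address, or nullptr if the
        // page is unmapped (a real emulator would raise a guest page fault).
        uint8_t* translate(uint32_t guest_virt) {
            auto it = page_table_.find(guest_virt / kPageSize);
            if (it == page_table_.end()) return nullptr;
            uint32_t guest_phys = it->second * kPageSize + guest_virt % kPageSize;
            if (guest_phys >= guest_ram_.size()) return nullptr;
            return &guest_ram_[guest_phys];              // host-virtual address
        }
    private:
        std::unordered_map<uint32_t, uint32_t> page_table_;
        std::vector<uint8_t> guest_ram_;                 // backing host memory
    };

    int main() {
        GuestMmu mmu;
        mmu.map(0x10, 0x2);                              // virt page 0x10 -> phys page 0x2
        uint8_t* p = mmu.translate(0x10 * kPageSize + 8);
        if (p) *p = 0xAB;
        return 0;
    }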
To add to the answer provided by @Cody Brocious:
In the context of virtualization, where you are emulating a new system (CPU, I/O, etc.) for a virtual machine, we can see the following categories of emulators.
Interpretation: Bochs is an example of an interpreter. It is an x86 PC emulator that takes each instruction from the guest system and translates it into another set of instructions (of the host ISA) to produce the intended effect. Yes, it is very slow; it doesn't cache anything, so every instruction goes through the same cycle.
Dynamic emulator: QEMU is a dynamic emulator. It does on-the-fly translation of guest instructions and also caches the results. The best part is that it executes as many instructions as possible directly on the host system, so emulation is faster. Also, as mentioned by Cody, it divides the code into blocks (a single flow of execution).
Static emulator: As far as I know, there are no static emulators that are helpful in virtualization.
How I would start emulation:
1. Get books on low-level programming; you'll need it for the "pretend" operating system of the Nintendo... Game Boy...
2. Get books on emulation specifically, and maybe OS development (you won't be making an OS, but it's the closest thing to it).
3. Look at some open-source emulators, especially ones for the system you want to make an emulator for.
4. Copy snippets of the more complex code into your IDE/compiler. This will save you writing out long code. This is what I do for OS development; use a distro of Linux.
I wrote an article about emulating the Chip-8 system in JavaScript.
It's a great place to start as the system isn't very complicated, but you still learn how opcodes, the stack, registers, etc work.
I will be writing a longer guide soon for the NES.
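For a taste of how small the core of a Chip-8 interpreter is, here is a decode sketch for a single opcode (Chip-8 opcodes are 16 bits, and 6XNN loads NN into register VX); everything else is omitted:

    #include <array>
    #include <cstdint>

    struct Chip8 {
        std::array<uint8_t, 16> v{};   // registers V0..VF
        uint16_t pc = 0x200;           // programs conventionally start at 0x200
    };

    void execute(Chip8& c, uint16_t opcode) {
        switch (opcode >> 12) {            // high nibble picks the operation
        case 0x6: {                        // 6XNN: VX = NN
            uint8_t x  = (opcode >> 8) & 0x0F;
            uint8_t nn = opcode & 0xFF;
            c.v[x] = nn;
            c.pc += 2;
            break;
        }
        default:
            c.pc += 2;                     // skip anything we don't handle yet
            break;
        }
    }

    int main() {
        Chip8 c;
        execute(c, 0x6A02);                // set VA to 0x02
        return c.v[0xA] == 0x02 ? 0 : 1;
    }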

Are there specialities within the embedded field?

I've started learning embedded development and its two main languages (C and C++). But I'm starting to realize that despite the simple learning requirements, embedded is a whole world in and of itself. And once you deal with real projects, you start to realize that you need to learn more "stuff" specific to the hardware used in the device you're working on. This is an issue that rarely came up with the software-only projects I currently work on.
Is it possible to split this field into sub-fields? I'm thinking that those with experience in the field may have noticed that some types of projects are different from other types, which has led them to come up with their own categories. For example, when you run into a project, do you sometimes think to yourself that it's "outside your field"? Does that happen to you? And if so, what would you call your sub-field, or what other sub-fields have you encountered?
Here are a few sub-specialities I can think of:
Assembly Language Specialist
Yep. You need to know C and C++. But some people also specialize in assembly. These are the experts that are called up to port a RTOS to a new chip, or to squeeze every drop of performance from a highly constrained embedded system (usually to save $$ per unit).
This person probably isn't needed as much these days... but is still critical from time to time.
Device Driver Specialist
Comfortable living between a real OS or RTOS and a piece of hardware. This person is usually comfortable with lab tools like o-scopes or logic analyzers, thinking in "hex", and understanding the critical nature of timing with HW. This person reads device data sheets for fun at night, and gets excited about creating the perfect driver port for some new device.
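Much of that work boils down to reading and writing memory-mapped hardware registers through volatile pointers. A generic sketch follows; the register layout is invented, and a static struct stands in for the real device so the snippet stays runnable on a PC (on target hardware you would point at the base address given in the data sheet):

    #include <cstdint>

    struct UartRegs {
        volatile uint32_t data;     // write: byte to transmit
        volatile uint32_t status;   // bit 0: transmitter ready (invented layout)
    };

    static UartRegs fake_uart = {0, 1};          // stand-in for the real device
    static UartRegs* const uart = &fake_uart;    // on hardware: cast from the
                                                 // documented base address

    void uart_put_char(char c) {
        while ((uart->status & 0x1u) == 0) {
            // busy-wait until the "transmitter ready" bit is set
        }
        uart->data = static_cast<uint8_t>(c);
    }

    int main() {
        uart_put_char('A');
        return 0;
    }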
DSP Specialist
Digital Signal Processing seems to be its own sub-specialty of embedded, although perhaps a software engineer may not know the exact algorithm details, and may only be implementing what a system or electrical engineer requires. However, understanding sampling rate theory, FFTs, and some foundational elements from "DSP" is handy and maybe required. And you still generally must be very aware of timing and your target hardware's restrictions (sampling rate, noise, bits per sample, etc.).
Control Theory Specialist
Perhaps the same issue as with DSP: a system or electrical engineer may provide the detailed specs. But, then again, familiarity with various motors, sensors, and other controllers handled by a microcontroller would be great. Throw in a Bode plot, a Laplace transform or two and some higher math skills... that couldn't hurt too much!
Networking Specialist
Basically the same as "networking" in the PC world. Many embedded devices are adding networking connectivity features these days. TCP/IP sockets, HTTP, etc. are good to know, along with how to use them in a resource-constrained device. Throw in USB and Bluetooth for good measure.
UI Specialist
More and more embedded devices include 2D graphics, and now more include 3D graphics thanks to the influence of iPhones, etc. Even though these are still "fat" systems by other embedded device standards, they are still limited. Just read a bit of the Android Development Guide and you will realize that you still must consider responsiveness, performance, etc., even on a high-end cell phone.
http://developer.android.com/guide/practices/design/performance.html
And then, of course, every industry is a specialization unto itself. Consumer Electronics, Military, Avionics, Robotics, Industrial Machines, Medical Devices, etc...
Have fun and good luck!
Yes, there certainly are several sub-fields. I don't think I can list them all off the top of my head, but the way I see it, there are at least 3 big sub-divisions, and from there they are further sub-divided. There are micro-controllers, micro-processors and sandboxed/VM environments. For example, using a 16-bit micro-controller in a drive-by-wire system would be an example of the first, a set-top box like TiVo would be an example of the second, and iPhones and Androids are the latter.
Micro-controllers are very limited, and usually can't even be programmed in C++. Most of them either have no OS running or, for the most expensive ones, an RTOS. Set-top boxes and ARM/MIPS/SuperH4/Broadcom chips in general are much more like a PC, in that they have a Linux distribution running on them and you can find most of the same facilities as on a PC, and if you can't find one, cross-compiling for it is usually simple. The sandboxed guys are, well, sandboxed; so it is exactly what it sounds like: usually the SDK isolates you from the hardware and you don't really get the 'full embedded experience'.
Sure, for example, there are many operating systems in use in the embedded world. Working with embedded Linux is very different than working with a bare micro controller.
"Learning embedded" sounds impossible to me. I do some work on headless linux computers controlling large machinery - which can be referred to as embedded (but it's not much different to programming any other computer, bar a few hardware interfaces). That's totally different to a phone, and totally different to an air conditioner or home automation system.
Control systems and mobile devices would be two categories of 'embedded' - but I'm sure there are plenty more.
I work on embedded Linux on mobile devices, and it's a whole lot different from a full-fledged Ubuntu image where I write my code and cross-compile it for the mobile device.
First of all, an embedded system is stripped down to meet the bare requirements of the device, very much unlike a traditional desktop operating system where you can have as many functionalities/libraries as you like.
Memory constraints are also a major part of an embedded system, hence all the programs/applications have to be written to fit into the architecture. This may not be much of a concern on a traditional system.
Basically, my point is to emphasize that working on embedded cannot be summed up in a few lines, as each system has a different purpose.
However, programming with the overall architecture in view may help you gain confidence about whether you can fit into a project or not.
PS: I may not be good at categorizing, which is what the question expects; however, this is my bit on embedded systems.
Lots of good answers already to this question. I think you need to decide what the term embedded software means to you and/or what you want it to mean. Maybe your definition isn't really embedded. My definition means no operating system. And that will probably upset many embedded software engineers, but the experienced ones, like those who have already answered, will certainly understand our variations in definition and why. I think they would call me a microcontroller specialist, and that is certainly true, but I spend most of my time on full-speed processors with gobs of memory and ROM and I/O, networking, etc. I am the guy that brings the hardware up the first time, flushes out board and chip bugs, then hands it off to what most would call the embedded software engineers. I am an electrical engineer by training and a software engineer by trade, so I straddle the line.
It is very possible, and not uncommon, that you could remain in the C/C++ embedded world and never have to read a datasheet or schematic; all you would do is call APIs that someone else has created. There is a large and growing market for that, as what used to be (my definition of) true embedded, or RTOS-based embedded (which is often API calls and not the full experience), has given way to this embedded Linux thing that has exploded. There is nothing wrong with it; it is fairly close to the experience of developing code for a desktop, but you have to try just a little harder to write reliable code, since it may be flash/ROM based and they may not want to push weekly/monthly updates to units in the field. Ideally you never update, but that is also becoming more rare.
RTOS or embedded Linux API-based embedded work is, and can still be, a different experience from what I call application programming. You may still want or need to read a datasheet or schematic, and you may still need to know assembler for the target platform.
I like all of the answers to this question thus far. I guess we are struggling to understand what you are really asking, or what you are really looking for in life; add to that what we each enjoy about our own choices, and you get this mix of answers.
I see a few groups. There is certainly the good old true embedded microcontroller stuff, but even that is turning into libraries and APIs instead of on-the-metal work; look at the Arduino community, Stellaris and a bunch of others. I spend a lot of my time in board bring-up and test; you have to know a fair amount about the whole system: hardware, registers, the schematic, etc. You have to know enough assembler both to boot the thing out of reset and to debug things by staring at disassembly dumps and looking for signs of life on the I/O or memory busses, etc. If you are lucky you will get to work on chip design as well and get to watch your instructions execute in simulation. The next group is bootloader/operating system. The hardware is working well enough at this point: the chip boots, memory appears to work, the ROM is there. This team writes the production boot code and gets the product from power-up into the embedded system: RTOS, Linux, VxWorks, BSD, whatever. This is a talent in and of itself: toolchain, root file system, etc. The next group is the masses, the software engineers who write the applications for that operating system. Some will be reading datasheets, schematics, etc., writing device drivers or APIs for others to use, and at the highest level there may be someone doing purely application-level programming, the API and SDK calls, some of which may be developed in-house and some purchased or otherwise acquired.
Bottom line: absolutely, there are specialties within embedded. Are you going to know everything? No. Maybe 20 years ago, likely 40 years ago, but not today; the field is too big and wide. What are the best things you can do for yourself in this field? Learn assembler for a few different instruction sets: the popular ones, ARM definitely, the Thumb version of ARM, maybe MIPS or PowerPC or others. If you lean toward microcontrollers, learn (ARM, Thumb,) AVR, PIC (blah), MSP430, maybe 8051. Read some data sheets; microcontrollers can teach you this even if that is not the field you want, and there are tons of sub-$50 development/eval boards (sparkfun.com for example) that come with data sheets, simple schematics, assembler, C, etc. If you are a software guy, learn to speak hardware guy; software and hardware folks do not speak the same language, and if you can avoid picking sides, stay neutral and speak both languages, you will help yourself, your career and whomever you work for and with. Despite any personal views you may have about endianness or bit or byte numbering, you are likely to have to deal with some screwy things, and speak to customers/coworkers who can only deal with octal (yeah, really) or only deal with the msbit of anything being zero. I recommend looking into Verilog and maybe VHDL, at least in a readable sense, not necessarily creating it from scratch. If you can already program and know C, it is very readable. Depending on the job and the coworkers, the Verilog and the schematic may be the only documentation you have for writing your software. If you can't do it, they may replace you with someone who can (rather than get the hardware folks to document their stuff).

Is C++ still actively used for general purpose development? [duplicate]

Possible Duplicate:
Which sector of software industry uses C++?
C++ was for many years the holy grail of mission-critical, high-performance development. However, it seems that for the past 10 years much of the development world has moved to Java and C#. My question is this: is C++ effectively relegated to embedded systems, OS, browser and other special-purpose development? Should I let this skill set go the way of VB6 and other skill sets that are no longer showing the same level of demand and value in the market? I love C++ and would love to update my knowledge of it, but I wouldn't even know where to begin to try to apply it to common business problems today.
Regards.
First of all, I doubt anybody can give a definitive answer -- there's just no way to tell exactly how much any particular language is really used. Nearly anything you can measure is a secondary measurement, such as how many people are advertising jobs using that language. The problem is that this tends to show relatively new languages as dominating to a much greater degree than is real.
That said, my belief is as follows. At one time, C++ was the hot new language on the block, and there was a bubble when it dominated the market. That bubble deflated quite a while ago. Since then, use of C++ has been growing on an absolute basis, but the market has been growing (quite a bit) faster, so it's shrinking on a relative basis.
There are a couple of reasons this doesn't show up in most secondary measures such as job advertisements though. A couple of the obvious ones include:
Many teams producing C++ have now had years to "settle in", so the turnover rate is relatively low.
It's now well established where it's used, so positions tend to be filled by internal promotions.
There's another effect I almost hesitate to mention, but it's true no matter how little a lot of people like it: there are both programmers and managers who are more excited about "new" than effective. This leads to a large group of wannabes who are constantly on the move to the latest and greatest "technology" (whether that happens to be a language, framework, platform, or whatever). They get a job, loaf (or worse, actually write some code), then move on to their next victim...er...employer. They cause a lot of "churn", and inflate the number of job advertisements, but produce little or nothing of any real value. That group moved from C++ to Java a long time ago, and have long since moved from Java to C# to Ruby on Rails to Hadoop to whatever the managers are excited about this week.
Lest I sound excessively negative, I should add that along the way, a few of them really find something they're good at, and (mostly) tend to stay with that. Unfortunately, for every one who does, there are at least five more new graduates to join the throng...
"C++ effectively relegated to embedded systems, OS, Browser"
"other special purpose development"
You mean 99% of the code people run on a daily basis?
C++ is still heavily used in many mission critical financial applications. For example, most of Bloomberg's platforms are based on C++ with very little front end in other languages. Many investment banks and hedge funds use algorithmic trading systems written completely in C++ (e.g., Tower Research Capital, Knight Capital, etc.).
If you've been out of C++ for a while, you may need to get used to a whole bunch of now-standard libraries. When I was doing most of my C++, STL was fairly new and you either adopted the Microsoft libs or did not. If I went back to C++ now, I'll have to learn all the new libraries to be effective.
I think most of the movement to other languages is related to web development and web-centric development. The main exception to that would be Google, which still primarily uses C++ and Python.
C++ is still valuable for many high-performance apps. There are other technologies, and depending on the situation, different languages are better suited to your needs. But if you want strong performance, good control of what your code is doing, and a flexible networking and programming stack, C++ is still a good choice.
A better suggestion is: let the problems come to you and find the language that best suits the situation, rather than take a language and go looking for problems.
Still: if you know C++ well, you can learn/program in anything.
To this day, C++ is the only language which is both object-oriented and compiled (or at least, which has a mature ecosystem of optimizing compilers), which leaves it as the sole choice for most large-scale, compute-intensive projects.
To me the prominent example is games and game engines: these are huge projects that squeeze machines for fractions of a millisecond. MS is trying to get some traction for XNA (a managed game-dev framework, basically a DirectX wrapper), but it will most probably never get any for AAA game productions.
If I take a look at the applications I have installed on the laptop I am writing this message on, I see a lot of C/C++ and few (if any) managed apps. Examples? Google Chrome, Firefox, iTunes, uTorrent, Spotify, Picasa, Google Earth, OpenOffice, Notepad++, IrfanView... the list goes on and on. I write desktop applications for a living, which are installed on thousands of PCs worldwide, and C++ is still my language of choice. The lack of dependencies (WTL is your friend) is a massive plus IMHO (and for my customers too, I should add!). YMMV though; as a seasoned developer I think I am productive enough in C++, but I can't speak for everybody.
It hasn't gone away if you need to do something really, really fast. If "fast enough" is OK, then C# and Java are fine, but if you have a calculation that takes hours or days, or you need something to happen on the microsecond timescale (i.e. high frequency trading) C++ is still the language to use.
More often than not, we get lost in the hype cycle. First there was Java, then came PHP, and currently it is Python. But the fact of the matter is that development of general-purpose desktop applications still requires the use of libraries like Carbon/Cocoa for Mac, GTK/Qt for Linux, and MFC for Windows. All of these are C/C++ based, and so are most applications written for these platforms. So calling C++ relegated to embedded is not right, although yes, it is being used extensively there now, unlike earlier when it was just assembly or, at most, C. In my opinion, if you want a high-performance application with a great-looking GUI, it still has to be done in C/C++.
Different languages are prevalent in different domains. It is interesting that you think it might be rendered unimportant by being relegated to embedded systems when in fact that is where most software development occurs; at least in terms of number of projects/products.
There are many ways of measuring, and a number of them are presented here: http://langpop.com/. The evidence suggests that C++ remains important.
I'm not sure whether the gaming industry falls under "general purpose development", but if you want to develop anything that you intend to get working on more than a single console, C++ is what's for lunch. While many gaming and 3D libraries have extensions for other languages, they -all- have extensions for C/C++.
C++ is still used everywhere you want the best performance. Its major advantage is that you can use it for literally everything. In addition to what other people have said, you can also use it to power websites; for instance, OkCupid uses it almost exclusively.
As Facebook's recent HipHop shows, in the end, if you can afford it (i.e. you have a large and competent team), you can always gain something by using it. Then it is also a matter of scale, as well as industry.
C++ is still very popular. For instance, it is often used in combination with Qt.
C++ is usually used for systems work, generally defined as software where the UI is not central, not application work -- where the UI is central. So, for general business use it's probably not very interesting and those problems are better solved with a higher level language. However, there will always be low level systems work to be done, and C or C++ is the practical answer for those problems right now.
As a general development language? Well, it depends on your industry, but I've worked in two different industries and there is always plenty of C++ work:
Telecoms
Embedded devices often use C and C++ for core services
Network equipment, which is often very complex, heavily utilizes C++
Software apps that work with hardware will often be written in C++
Financial Services
Trade Execution systems are often in C++. You cannot have your garbage collection kick in when you're executing an order for a customer.
Algorithmic and high-frequency trading systems are usually in C++
General trading systems that do not have strict speed requirements seem to be in C++ and Java, with C# starting to show up as well.
Administrative applications tend to be written in Java, VB, or C# these days
Recently there is a trend towards functional languages for quantitative analysis, so F# and Haskell are starting to appear, and SAS and Matlab are always common too
I read somewhere that NYSE Euronext uses Java, but that they disable the garbage collector and run on servers with insane amounts of memory.

XA distributed transactions in C++

Is there a good C++ framework to implement XA distributed transactions?
With the term "good" I mean usable, simple (doesn't imply "easy"), well-structured.
For study purposes, at the moment I'm proceeding with a personal implementation, following the X/Open XA specification.
Thank you in advance.
I am not aware of an open-source or free transaction monitor that has any degree of maturity, although this link does have some fan-out. The incumbent commercial ones are BEA's Tuxedo, Tibco's Enterprise Message Service (really a transactional message queue manager like IBM's MQ) and Transarc's Encina (now owned by IBM). These systems are all very expensive.
If you want to make your own (and incidentally make a bit of a name for yourself by filling a void in the open-source software space), get a copy of Gray and Reuter.
This is the definitive work on transaction processing systems architecture, written by two of the foremost experts in the field.
Interestingly, they claim that one can implement a working TP monitor in around 10,000 lines of C. This actually sounds quite reasonable, as what it does is not all that complex. On occasion I have been tempted to try.
Essentially you need to make a distributed transaction coordinator that runs as a daemon process. You will need to get the resource manager protocol working from it, so starting with this as a prototype is probably a good start. If you can get it to independently roll back or commit a transaction you have the basis of the TM-RM interface.
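The core of that TM-RM interaction is a two-phase commit over the enlisted resource managers. Here is a toy sketch using a hypothetical C++ interface rather than the actual xa_*() C entry points defined by the spec, and without the durable logging a real transaction manager needs:

    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical resource-manager interface for illustration only.
    class ResourceManager {
    public:
        virtual ~ResourceManager() = default;
        virtual bool prepare(const std::string& xid) = 0;   // vote: can you commit?
        virtual void commit(const std::string& xid) = 0;
        virtual void rollback(const std::string& xid) = 0;
    };

    bool run_two_phase_commit(const std::string& xid,
                              std::vector<std::unique_ptr<ResourceManager>>& rms) {
        // Phase 1: prepare. A real coordinator writes a durable log record here.
        for (auto& rm : rms) {
            if (!rm->prepare(xid)) {
                for (auto& r : rms) r->rollback(xid);       // any "no" aborts all
                return false;
            }
        }
        // Phase 2: commit everywhere (a real TM retries until each RM succeeds).
        for (auto& rm : rms) rm->commit(xid);
        return true;
    }

    // Trivial stand-in RM so the sketch runs.
    class InMemoryRm : public ResourceManager {
    public:
        bool prepare(const std::string&) override { return true; }
        void commit(const std::string&) override {}
        void rollback(const std::string&) override {}
    };

    int main() {
        std::vector<std::unique_ptr<ResourceManager>> rms;
        rms.push_back(std::make_unique<InMemoryRm>());
        rms.push_back(std::make_unique<InMemoryRm>());
        return run_two_phase_commit("xid-1", rms) ? 0 : 1;
    }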
The XA API as defined in the spec is the API to control the transaction manager. Strictly speaking, you don't need to make a 3-tier architecture to use distributed transactions of this sort, but they are more or less pointless without a TP monitor. How you communicate from the front-end to the middle-tier can be left as an exercise for the reader. You are probably best off using an existing ORB, of which there are several good open-source implementations available.
Depending on whether you want to make the DTC and the app server separate processes (which is possibly desirable for stability but not strictly necessary) you could also use ACE as a basis for the DTC server.
If you want to make a high-performance middle-tier server, check out Douglas Schmidt's ACE framework. This comes with an ORB called TAO, and is flexible enough to allow you to use more or less any threading model that takes your fancy. Using it is a trade-off between learning it and the effort of writing your own and debugging all the synchronisation and concurrency issues.
Maybe quite late for your task, but it can be useful for other users: LIXA is not a "framework", but it provides an implementation for the TX Transaction Demarcation specification and supports most of the XA features.
The TX specification is for C and COBOL languages, but the integration of the C version inside a C++ project should be effortless.
Another option is the open-source Enduro/X distributed transaction processing framework, which allows you to write simple C/C++ services that operate with resource managers (e.g. databases) and gives you the capability to commit or abort work done by several different executables on the same or different physical servers, working with different resources/databases.
Internally, XA two-phase commit (2PC) is used.