I work in a factory where 80% of our equipment uses an MS-DOS interface. None of our engineers have experience in C/C++ programming, and we've been asked to add some features to the machines' interface. Our automation group has abandoned the MS-DOS platform in favor of Allen-Bradley controls. I'm feeling ambitious and decided to take on this project, even though I have next to no experience in C/C++.
On to the question:
All of the programming was written and compiled in Turbo C++. I would prefer to use Dev-C++ for various reasons (ease of use, additional headers, a more developed C++ platform, etc.). The problem is that the existing code relies heavily on non-standard headers from TC++; the source uses 10 or so headers that are unavailable in Dev-C++, and rewriting the code with more modern constructs is not an option, since we would lose what little support we have from our automation group, time, etc.
Is there a way I could add all the headers from TC++ to Dev-C++? For example, could I add graphics.h to Dev-C++ and have it be fully functional? I have tried adding it to the include folder and calling it with #include "graphics.h", and when Dev-C++ manages to recognize it, it throws a ton of compile errors because it doesn't recognize the internals of the graphics.h file.
Unfortunately I cannot include any example code from this project, due to non-disclosure and copyright policies.
My programming experience:
DABBLE in RSLogix 500/5000; Arduino IDE (don't judge); Parker 6K; PanelView; ~40 hrs of self-taught C and C++.
Any help would be much appreciated.
UPDATE
Very helpful information. It seems like this isn't going to be possible given how outdated the hardware is and the restrictions I have on this project, but thank you all for your input.
Most of the headers from old Turbo C are really just the MS-DOS API, in a way. So it doesn't make any sense to use those headers in any other environment, and you can't port them to a Windows compiler. Similarly, graphics.h is for a Borland-specific DOS graphics library called BGI and will not work with any other compiler.
It should be noted that old Turbo C++ (I'm assuming version 3.1?) didn't follow the C or C++ standards much; the C++ dialect it used predates the first C++ standard and is completely antique.
Also note that the Dev-C++ IDE is outdated and no longer updates its bundled GCC compiler. The Code::Blocks IDE is a better alternative.
This is more of a "long comment" than a direct answer to your question, trying to guide you to a better understanding of what MAY be your challenge in your project.
I personally would choose a more "professional" level development tool: Eclipse (its plus is that it's portable and looks/feels the same whether you use Windows or Linux), Xcode (Mac only), or Visual Studio (Windows only). These are full-featured integrated development environments, and they are all very slick. All of them are free or nearly free.
Compiling OLD code that was written for DOS with a modern compiler on a modern OS may be quite a challenge, depending on what the application does and how many assumptions about its environment the code was written with:
does it assume int is 16 bits? (see the sketch after this list)
does it call DOS directly to get file info or to open/read/write/close files?
does it do raw keyboard input?
does it poke characters and/or pixels directly at the screen?
does it use far and near pointers?
are there driver-like components that interface directly to hardware interrupts?
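To make the first item concrete, here is a minimal sketch of that kind of 16-bit assumption (illustrative code, not from the project in question):

#include <cstdio>
#include <climits>

int main() {
    unsigned int mask = 0xFFFF;   // "all bits set" only where int is 16 bits
    std::printf("int is %d bits\n", (int)(sizeof(int) * CHAR_BIT));
    // On a 16-bit DOS compiler this prints 16 and mask == UINT_MAX;
    // on a modern compiler it prints 32 and the assumption silently breaks.
    std::printf("mask == UINT_MAX? %s\n", mask == UINT_MAX ? "yes" : "no");
    return 0;
}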
One thing that stands out in your question is the mention of graphics.h, which I believe is very Borland-specific. That means you'll have to write your own replacement set of graphics.h functions (I expect most of the functionality is available in any modern OS; it's more a case of "what is it called, and what do I need in order to call that function"). This can be quite a task in itself.
The tricky part here is not only to identify what the code does, but to replace it with similar logic, that does the same thing in your new environment.
And of course, it all depends on what you want to do with the code and how well written it is: is it nicely modular, does each function do one thing and one thing only, or are there functions like "this calculates a value, then reads some data from disk, then does some I/O to the screen, then talks to some external hardware, and, because it gets called frequently, also updates the time on the screen if it has changed".
Related
Background: I'm working on porting a large project, developed in C++ exclusively for Unix systems, to be compatible with Windows, to make way for an eventual Windows distribution. I don't have much experience with Windows development, but I'd like to get whatever I can do done right before senior developers move in and take over.
Question: For a while now, I've been looking up Windows equivalents for all the unix/posix calls used in the software, most coming from dirent.h, unistd.h, and some under sys/, like sys/stat.h or sys/types.h. And although it'll take a lot of work to modify the programs to adopt the new Win32 API calling conventions and return types (and sometimes entirely new functions), it'll probably work, eventually.
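For example, the directory-listing code goes from dirent.h to FindFirstFile, roughly like this (an illustrative sketch with made-up names, not our actual code):

#ifdef _WIN32
#include <windows.h>
#include <iostream>
#include <string>

void list_dir(const char* path) {
    std::string pattern = std::string(path) + "\\*";
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA(pattern.c_str(), &fd);  // opendir+readdir rolled into one
    if (h == INVALID_HANDLE_VALUE) return;
    do {
        std::cout << fd.cFileName << '\n';
    } while (FindNextFileA(h, &fd));
    FindClose(h);
}
#else
#include <dirent.h>
#include <iostream>

void list_dir(const char* path) {
    DIR* d = opendir(path);
    if (!d) return;
    while (dirent* e = readdir(d))   // one entry per call, NULL at the end
        std::cout << e->d_name << '\n';
    closedir(d);
}
#endif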
But I've seen it come up frequently that MinGW supposedly includes many native Unix calls and facilities as part of its GCC environment and can translate them into Windows-compatible calls, so that you can compile on Windows and run the resulting program on Windows easily. In fact, one of the similar questions in the sidebar talks about just that. What I have trouble understanding is the exact extent of this built-in translation functionality, and where I can find a list of the system facilities it covers.
Sorry this post is a little unstructured and that I'm so green at this, but I only have 2 weeks to accomplish something before I get swapped out for a senior developer.
No. MinGW does not attempt to implement UNIX functions on Windows. It targets the native Win32 API, so most UNIX system calls cannot be replicated without real porting effort.
However, Cygwin does do that.
I'm not sure the way I'm organising my library is the most elegant way to do it. My main concern is that all the code I write should compile and run on every system I'm targeting (keeping it portable), and that it stays up to date for every system.
For example:
I'm not sure whether __declspec(dllexport/dllimport) is used on Mac or Linux. I assume it's not, but I don't know what the Mac/Linux equivalent is. Another example is calling OS-specific functions, which I try to avoid. However, things such as measuring how long something takes to happen*, in a precise manner, do require me to call OS-specific functions.
*as in getting the user's time precisely (down to micro/milli-seconds).
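On the __declspec point, the pattern I've seen suggested is a small export macro: on GCC/Clang (Linux/Mac) the rough equivalent is the visibility attribute. A hedged sketch, where MYLIB_API and MYLIB_BUILDING are placeholder names:

#if defined(_WIN32)
#  if defined(MYLIB_BUILDING)   // defined by the build only while building the DLL
#    define MYLIB_API __declspec(dllexport)
#  else
#    define MYLIB_API __declspec(dllimport)
#  endif
#else                           // GCC/Clang on Linux and Mac
#  define MYLIB_API __attribute__((visibility("default")))
#endif

MYLIB_API void do_something();  // hypothetical exported function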
The systems I am currently aiming for (perhaps more in the future) are Mac, Windows, and Linux. But testing that the code compiles and runs correctly on every system just seems like a huge time sink, because the way I'm proposing to do it requires a separate project for every system: a Visual Studio project for Windows, an Xcode project for Mac, and the command line or some other IDE for Linux.
Okay, so my main problems with the way I organise things are:
1. Time consumption, as in creating the projects for all systems and keeping each individual project, for each system, up to date.
2. Having to know all the IDEs/compilers I would need to use.
Please note, I've never really used* Linux before, and I'm thinking about switching. My only concern is finding the right tools to use with it, and the right distro for what I need. I would really appreciate it if someone experienced with Linux could guide me, or give me some advice on whether to switch or not.
I really like Visual Studio, and it's been my main IDE for quite a while now. I'm just not sure whether I want to ditch Visual Studio for Linux, as I don't know whether the tools available on Linux can do what Visual Studio can, or be as user-friendly as Visual Studio is. I'm wary of learning Linux's tools, because I'm not sure it's worth doing. Time isn't a major factor for this library; I have plenty of it, I'm fairly young, and I'm determined to make programming my career.
*I have used it before, but I've never replaced Windows with it.
I am currently hosting all the source code for my project on Bitbucket, but at the moment I only have the raw code in my repo; there are no project files or any other means of compiling it, just the code and a readme file. I was thinking of using Makefiles, since they seem popular, but I've never written a Makefile before. Don't get me wrong, I am willing to learn; I'm just not really sure where to start. I've heard that people use CMake to create portable libraries, such as SFML and Ogre3D. I've built a couple of libraries with CMake, but I have no clue how to actually use it for my own library to generate the project/make files. Should I learn and incorporate CMake with my library, or is there a better option?
EDIT:
I'm not aiming to write a library for actual GUI software; I'm mainly aiming to write games.
1 - Boost. Boost will help your portability more than you can imagine. Its only real sticking point is, believe it or not, OS X.
2 - Use CMake. It integrates with Visual Studio project files as the build tool, and you can put most of your different-platform compilation voodoo in there (a minimal sketch follows this list).
3 - If you're seriously writing a portable library, consider writing it in C or giving it a C wrapper, making it header-only, or providing the source code. Making it a shared or statically linkable library does not mean that it will play nice: name mangling leads to inconsistencies that will blow your mind.
4 - Always be explicit about the number of bits in each variable.
5 - Use git. It makes it easy to set up a quick-and-dirty local server for a repository, and it is very fast at transferring the kind of huge changes MSVC annoyingly likes to make.
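On point 2, a minimal CMakeLists.txt sketch of what that might look like (the library name and the src/include layout are placeholder assumptions):

cmake_minimum_required(VERSION 3.10)
project(mylib CXX)

# One static library built from explicitly listed sources.
add_library(mylib STATIC
    src/foo.cpp
    src/bar.cpp)

# Headers in include/ are visible to the library and to anything linking it.
target_include_directories(mylib PUBLIC include)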
There are a lot more best practices that could be discussed about cross-platform development, and not all of this advice is applicable in every case; I have a very code-heavy Linux/Windows library that I code almost exclusively in MSVC2k10 and mostly build/test on Linux, and it is nowhere close to header-only.
EDIT in response to comments:
git was suggested because I find it very easy to use and manage locally. I've used svn before and liked it; I won't really endorse any others, but there are probably plenty of fine ones.
To expound on point 3,
A C wrapper would make it so that anyone anywhere could use your library: FORTRAN developers, Ruby, even Java.
Otherwise you generally have to use similar compiler versions to link properly, and the library will only link against other C++ code; DLLs are the exception, and even there versioning issues remain. It's one of the stupidest problems left over in C++; look up "name mangling" on Wikipedia. There is a reason widely used libraries are written in C or have C wrappers: libz, OpenSSL, etc.
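A minimal sketch of what such a C wrapper looks like (all names are hypothetical): extern "C" turns off C++ name mangling, so the symbols can be linked from any compiler or any language with a C FFI.

// widget.h: the C-facing API
#ifdef __cplusplus
extern "C" {
#endif
typedef struct Widget Widget;   /* opaque handle; callers never see the C++ class */
Widget* widget_create(void);
void    widget_frob(Widget* w, int amount);
void    widget_destroy(Widget* w);
#ifdef __cplusplus
}
#endif

// widget.cpp: the C++ implementation hidden behind the wrapper
struct Widget {                 // the real C++ type behind the opaque handle
    int value = 0;
    void frob(int amount) { value += amount; }
};
extern "C" Widget* widget_create(void) { return new Widget; }
extern "C" void widget_frob(Widget* w, int amount) { w->frob(amount); }
extern "C" void widget_destroy(Widget* w) { delete w; }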
There are other advantages to it. Exception propagation across dynamic libraries is non-existent; with static libraries it can be inconsistent or non-existent.
You'll find that the most widely-used C++ libraries are mostly header-only, like Boost. A header-only library solves many problems by putting all the code directly into a project in a relatively intuitive way, and modern compilers can still optimize away much (but certainly not all) of the extra compile time associated with it.
With all this said, it is certainly possible to get by without a C wrapper or a header-only design; it is just annoying and very troublesome. DLL hell and its Linux equivalents still exist.
You also asked about Boost. That depends. If you're distributing the source code, then you certainly must distribute Boost with your code or have people install it. Having people install libraries in order to compile other libraries or use programs is common practice; think of how specific versions of DirectX come with games, for example.
However, if you are distributing binary versions of your library, statically linking against Boost will eliminate any need to include it, as long as you are careful to keep Boost headers out of the outward-facing parts of your library. This is where you start seeing things like void* pointers inside C++ headers: an unfortunate side effect of some of the shortcomings of C++ compilation and library distribution.
CMake is a good choice. You can learn to use it. Read a tutorial:
http://www.cmake.org/cmake/help/cmake_tutorial.html
But if your targets are Linux and Windows only, it is probably OK in your case (a small/average first multi-platform project) to maintain 2 separate build systems.
On Linux, use Make. It is standard and has a very good reference manual:
http://www.gnu.org/software/make/manual/make.html
On Windows, use your IDE's project file, be it Visual Studio, Dev-C++, or another. That is the simplest way to go.
Most important, make it easy to test your library/software on different platforms. Install a virtual machine on your desktop, or at least compile your library under Cygwin.
Once you are there, come back to Stack Overflow and we will help!
Personally I'd leverage a framework like Qt, because it is quite portable, it does abstract a lot of OS functionality (files, timing, threads, networking), and you get a decent, free IDE (Qt Creator) that is also portable and runs on Windows, OS X and any Unix flavor that runs Qt. It'd give you the lowest barrier to entry. Qt Creator can leverage the Visual Studio compiler and the CDB debugger if they are available.
You do not need to use OpenGL to use Qt, in fact you're not bound to any particular graphics subsystem. Qt only "uses" OpenGL in Qt 5 for the Qt Quick 2 graphics backend. It's not needed for Qt 4, nor for Qt Quick 1 (even in Qt 5!).
You can use any 2D or 3D framework you fancy to push images and other content to the screen. What Qt is good at is creating the kind of 2D imagery that is often needed in games - menus, configuration screens, HUDs, etc. There's a lot of controls and drawing primitives that Qt makes easy to leverage for your purposes.
Qt also gives you reasonably powerful model-view and networking frameworks, so you'd be able to generate server or client lists that update in real time fairly easily.
There'd need to be a small amount of shim code between Qt and DirectX, of course. On the output side, you typically end up with a QImage in Qt and then use DirectX, SDL, OpenGL, etc. to push it to the screen. On the input side, you need to call qApp->processEvents() within your main game loop, and you will need to post user input events from DirectX etc. to Qt's event queue using qApp->postEvent(...). This would only be needed if, say, the DirectX main loop consumes all Windows messages and won't let standard winapi/win32 code (Qt's Windows event dispatcher) see them. I haven't dealt with DirectX much, so others should feel free to chime in with details.
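A hedged sketch of that game-loop shim, assuming the Qt 4/5 API (updateGame() and renderFrame() are hypothetical stand-ins for the game's own code):

#include <QApplication>

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    bool running = true;
    while (running) {
        app.processEvents();   // let Qt drain its event queue once per frame
        // updateGame();       // hypothetical: advance the game state
        // renderFrame();      // hypothetical: present via DirectX/SDL/OpenGL
        running = false;       // stand-in exit condition so the sketch terminates
    }
    return 0;
}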
I am new to Code::Blocks, so I was wondering how I could compile and run my computer graphics animation programs (those that use graphics.h) in Code::Blocks.
Is this possible with the default installation?
This is impossible. You can't use graphics.h without a time machine, so you're better off forgetting about it altogether. This question gets asked about once a week, and the consensus is always the same.
The graphics.h header belongs to a proprietary graphics library (BGI) that shipped with Borland's DOS compilers in the late 1980s. It is not a standard C or C++ header, and it hasn't been modern or relevant for at least 20 years. It should come as no surprise that it does not work with modern compilers, or on modern operating systems. Using it, or learning to use it, is an utter and complete waste of time. If your educational system is insisting that you do so, you should be entitled to a complete refund of your tuition.
It's good that you're moving up to a modern IDE like Code::Blocks. You should also upgrade your library and toolkits while you're at it. If you want to write graphics applications, you're undoubtedly going to be doing so in a graphical environment like Windows or X. You might as well learn the way to do it there, rather than getting lines to appear in an MS-DOS emulator and marveling at the appeal of retro tech.
I am a software engineer, and I work in VC++ and C++ on the Windows OS.
Are there any major differences when it comes to coding C++ in a Linux environment?
Or are there just some adjustments we have to make when we need to write C++ code on Linux?
It depends on the types of projects you've worked on and which native Windows APIs you made use of. For example, if you used the native Windows API for everything, you're going to have a pretty big task ahead of you; it might be worth making your project(s) work nicely with Wine instead.
In the Linux environment you have the man pages: quite detailed documentation of almost everything :). As mentioned above, look at POSIX, and while I recommend Qt, note that it abstracts away a lot of things (e.g. sockets, the filesystem) that you might want to learn to do the Linux way.
Use the POSIX API instead of the Win32 API.
Use gtkmm, Qt, or wxWidgets instead of MFC.
The Linux programming world is very different from what you are familiar with in the Windows world. You have to understand it and get used to it. Once you understand it, you will not want to come back.
You have many small, good tools that work with each other, rather than an all-in-one MSVC solution. For example:
On Linux you have the compiler as a stand-alone tool (the GNU Compiler Collection), build systems as stand-alone tools (autotools, CMake), the GNU Debugger as a stand-alone tool, and very good editors as stand-alone tools (like the hard-core vim/emacs).
There are integrated development environments like Eclipse, NetBeans, KDevelop, and Anjuta,
but you still have to understand how the underlying tools work.
You should understand that each separate tool is very powerful and integrates with others.
The OS-level API is designed for simplicity. You'll rarely find calls like CreateProcess with a bazillion parameters; instead you have a simple fork()+exec(). man is your real friend for everything connected to the system API and the standard C library.
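For anyone coming from CreateProcess, a minimal sketch of that fork()+exec() idiom (spawning ls -l and waiting for it):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    pid_t pid = fork();                      // clone the current process
    if (pid == 0) {                          // child: replace its image with ls
        execl("/bin/ls", "ls", "-l", (char*)0);
        std::perror("execl");                // reached only if exec failed
        return 1;
    }
    int status;
    waitpid(pid, &status, 0);                // parent: wait for the child to finish
    return 0;
}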
GUI - You have two big GUI libraries, Qt and GTK. Qt is a great C++ library that makes GUI development enjoyable (unlike MFC). GTK has both C and C++ APIs (GTK and gtkmm), though I have no experience with them.
i18n/l10n/Unicode - this is where Linux programming makes life easier. Almost everything is UTF-8. No wide-API crap, no issues opening Chinese file names with a simple fopen or ifstream, no 3rd-party library that can't open a file with a Unicode name. Great built-in tools are available, like gettext, and good translation toolkits like KBabel.
Libraries - this is where Linux programming makes you hate Windows. Almost every free library is already installed or available with a simple apt-get or yum install. No debug/release incompatibility crap, no DLL_EXPORT-ing; it's simple and robust, and making shared objects is as easy as working with static libraries (and most people don't use static libraries at all).
My $0.02
(I'm a Linux programmer who has dealt a lot with Windows development.)
It depends on how many windows-specific things you've been using. The standard part of C++ is the same, but using that will not get you much further than command-line applications.
There's also the whole makefile-instead-of-letting-VS-build-for-you thing. Depending on what tool (or IDE) you decide to use in Linux, that could be a big difference.
I have worked quite a bit on both platforms and like them both, but in general I found most developers to like one and hate the other.
I would describe the *nix environment as "geek friendly": many excellent and very flexible tools at your disposal. Some of them have a hard learning curve, and some are simply broken but still popular for some reason (make), but if you are willing to invest the time to learn them properly, the reward is high. In fact, I use many *nix tools even when working on Windows: vim, grep, perl, etc.
On the other hand, the Windows platform offers the Win32 API, which has far more functionality than POSIX, is very well documented, and is supported by very good tools. Debuggers on Windows (especially WinDbg) are generally more powerful than any *nix debugger I have tried, although gdb is good enough for most tasks. Deployment of executables is also easier than in the Linux world; in fact, the only truly reliable way to deploy software on Linux is to ship source code and build it on clients' machines via configure/make.
I would suggest using a build system like SCons, which works very well on both Linux and Win32.
Take a look at the source to some open-source project that runs on both Linux and Windows. Typically, over 80% of the code is identical, and the bigger the project the less the system-specific part tends to be. Unfortunately, there can be hard parts (threading, non-blocking network IO, GUI details) in the system-specific code.
There are some major differences that I can think of:
Tools. There are good and bad points. If you are used to Visual Studio, there is nothing quite like it available. Each Linux IDE has some issues. On the other hand, the debugging tools in particular are very good. But all in all, you are expected to assemble your own working environment from what's available.
APIs. Documentation varies wildly. Some components are well documented, but often you end up reading the source code to figure out how something is supposed to work. On the other hand, you have the source code, so eventually you have every tool you need to figure out why something doesn't work.
The Linux programming community is usually very helpful, as long as you remember to behave and you find the right places. SO isn't half bad for some issues, but sometimes you need to look elsewhere.
Things are not quite as automatic as you might have learned to expect in the Windows world. Yes, some tools let you create projects without Makefile knowledge, but really, you should learn how to use them. In the Windows world it's much more common never to edit the project files (the equivalent of Makefiles) by hand.
If you want to work in kernel space (drivers, etc.), C is a better bet than C++, since the kernel is written in it.
And I agree with people suggesting Qt. Very nice widget set. Beats at least Swing (yes, I know, it's Java) hands down. And Qt Creator isn't half bad.
Don't underestimate the power of shell scripting! Very few Windows programmers have figured this out, but you can do a hell of a lot with shell scripts to help your work.
A typical Windows programmer who is used to Visual C++ might find the following aspects of Linux C++ programming novel or difficult:
Linux programming isn't Linux programming, it's Unix programming. Unix programming's roots go back a lot further than the MS-DOS roots of Windows, and it shows in a lot of places.
Windows programmers tend to think about the IDE tools first (the GUI editor, compiler, debugger). Unix programmers tend to be arranged in various tribes: many core Unix/Linux C++ programmers are perfectly comfortable working from the command line without an IDE, while others use Visual Studio-style IDEs on Linux, of which there are many.
I personally found I had to learn how to read (and sometimes write) a Makefile and build a bunch of standard Linux/Unix applications from source (typing my way through steps like "autoconfiguration" and the various "--command-line-options" one might select there) before I got the feel and the flavour of the environment.
Until you are a seasoned Linux system administrator you might want to stick with the newbie-friendly Linux distributions (like Ubuntu).
I am making a C++ program.
One of my biggest annoyances with C++ is its supposed platform independence.
You all probably know that it is pretty much impossible to compile a Linux C++ program on Windows, or a Windows one on Linux, without a deluge of cryptic errors and platform-specific include files.
Of course you can always switch to some emulation like Cygwin and wine, but I ask you, is there really no other way?
The language itself is cross-platform, but most libraries are not. There are three things you should keep in mind if you want your C++ code to be completely cross-platform.
Firstly, you need to start using some kind of cross-platform build system, like SCons. Secondly, you need to make sure that all of the libraries that you are using are built to be cross-platform.
And a minor third point: I would recommend using a compiler that exists on all of your target platforms; GCC comes to mind here (C++ is a rather complex beast, and all compilers have their own specific quirks).
I have some further suggestions regarding graphical user interfaces. There are several available; the three most notable are:
GTK+
Qt
wxWidgets
GTK+ and Qt are APIs that come with their own widget sets (buttons, lists, etc.), whilst wxWidgets is more of a wrapper API around the running platform's native widget set. This means the former two might look a bit different compared to the rest of the system, whilst the latter will look just like a native program.
And if you're into games programming, there are equally many APIs to choose from, all of them cross-platform as well. The two most fully featured that I know of are:
SDL
SFML
Both contain everything from graphics to input and audio routines, either through plugins or built in.
Also, if you feel that the standard library in C++ is a bit lacking, check out Boost for some general purpose cross-platform sweetness.
Good Luck.
C++ is cross-platform. The problem you seem to have is that you are using platform-dependent libraries.
I assume you are really talking about UI components, in which case I suggest using something like GTK+, Qt, or wxWidgets, each of which provides UI components that can be compiled for different systems.
The only solution is for you to find and use platform independent libraries.
And, on a side note, neither Cygwin nor Wine is emulation; they are 100% native implementations of the same functionality found in their respective systems.
Once you're aware of the gotchas, it's actually not that hard. All of the code I am currently working on compiles on 32 and 64-bit Windows, all flavors of Linux, as well as Unix (Sun, HP and IBM). Obviously, these are not GUI products. Also, we don't use third-party libraries, unless we're compiling them ourselves.
I have one .h file that contains all of the compiler-specific code. For example, Microsoft and GCC disagree on how to specify an 8-bit integer, so in the .h I have:
#if defined(_MSC_VER)
typedef __int8 int8_t;
#elif defined(__unix)
typedef signed char int8_t;   /* plain char may be unsigned; int8_t must be signed */
#endif
There's also quite a bit of code that uniformizes certain lower-level function calls, for example:
#if defined(_MSC_VER)
#define SplitPath(Path__,Drive__,Dir__,Name__,Ext__) _splitpath(Path__,Drive__,Dir__,Name__,Ext__)
#elif defined(__unix)
#define SplitPath(Path__,Drive__,Dir__,Name__,Ext__) UnixSplitPath(Path__,Drive__,Dir__,Name__,Ext__)
#endif
Now in this case I believe I had to write a UnixSplitPath() function; there will be times when you need to. But most of the time, you just have to find the correct replacement function. In my code I call SplitPath(), even though it's not a native function on either platform; the #defines sort it out for me. It takes a while to train yourself.
Believe it or not, my .h file is only 240 lines long. There's really not much to it. And that includes handling endian issues.
Some of the lower-level stuff will need conditional compilation. For example, on Windows I use critical sections, but on Linux I need pthread_mutexes. The CriticalSection was encapsulated in a class, and this class has a good deal of conditional compilation. However, the upper-level program is totally unaware; the class functions exactly the same regardless of the platform.
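A hedged sketch of what such a class can look like (a minimal lock, not the author's actual code):

#if defined(_WIN32)
#include <windows.h>
class Mutex {
    CRITICAL_SECTION cs_;
public:
    Mutex()  { InitializeCriticalSection(&cs_); }
    ~Mutex() { DeleteCriticalSection(&cs_); }
    void lock()   { EnterCriticalSection(&cs_); }
    void unlock() { LeaveCriticalSection(&cs_); }
};
#else
#include <pthread.h>
class Mutex {
    pthread_mutex_t m_;
public:
    Mutex()  { pthread_mutex_init(&m_, 0); }
    ~Mutex() { pthread_mutex_destroy(&m_); }
    void lock()   { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }
};
#endif
// Code above this layer just declares a Mutex and calls lock()/unlock(),
// identically on both platforms.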
The other secret I can give you is: build your project on all platforms often (particularly at the beginning). It is a lot easier when you nip the compiler problems in the bud. Don't wait until you're done development before you attempt to go cross-platform.
Stick to ANSI C++ and libraries that are cross-platform and you should be fine.
Create a low-level layer that contains all the platform-specific code in your project. Implement 2 versions of this layer, one for Windows and one for Linux, with the same interface, and build them into 2 libraries. Access all platform-specific functionality in your project through that interface.
This layer can contain general classes for file access, printing, GUI, etc.
All the (now non-platform-specific) code that uses that layer can now be compiled once on Windows and once on Linux.
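A hedged sketch of such a layer, using precise timing as the example (all names illustrative; the build compiles exactly one of the two .cpp files per platform):

// platform.h: the one interface everything else sees
namespace platform {
    double monotonic_seconds();   // high-resolution timing behind one call
}

// platform_win32.cpp
#include <windows.h>
double platform::monotonic_seconds() {
    LARGE_INTEGER f, c;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&c);
    return double(c.QuadPart) / double(f.QuadPart);
}

// platform_posix.cpp
#include <time.h>
double platform::monotonic_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}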
Compile it on Windows and again on Linux. Unless you used platform-specific libraries, it should work. It's not like Java, where you compile once and it runs everywhere; no one has made a virtual machine for C++, and probably no one ever will. The code you write in C++ will work on any platform, but you have to compile it on every platform first.
Suggestions:
Use typedefs for ints, or #include <stdint.h>. Some machines think int is 8 bytes, some 4. (It used to be 2 and 4. How times have changed.)
Use encapsulation wherever possible. My last Windows compiler thought %lld was %I64d, gave screwy return values for vsnprintf(), had similar issues with close() and sockets, etc.
Watch out for stack size / buffer size limits. I've run into an 8 KB UDP buffer limit under Windows, amongst other problems.
For some reason, my Windows C++ compiler wouldn't accept dynamically-sized allocations on the stack, e.g. void foo(int a) { int b[a]; }. Be aware of that sort of thing, and plan how you will recode (see the sketch after this list).
#ifdef can be your best friend. And your worst enemy! (At the same time!)
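On the stack-allocation point above, a sketch of the usual portable recode (replacing the variable-length array with a heap-backed vector):

#include <vector>

void foo(int a) {
    std::vector<int> b(a);   // sized at runtime, accepted by every compiler
    // use b[i] exactly as the VLA would have been used
}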
It can certainly be done. But compile and test early and often!
Also, Linux and Windows have different data models (LP64 vs. LLP64: long is 64 bits on 64-bit Linux but 32 bits on 64-bit Windows).
See article: The forgotten problems of 64-bit programs development
Standard C++ is code that compiles without errors on any platform.
Try using Bloodshed Dev-C++ on Windows (instead of VC++ / Borland C++).
As Bloodshed Dev-C++ conforms to the C++ standard, programs compiled with it will, in most cases, also compile on Linux without errors.