Easy way to cross-compile a Qt application using Qt Creator? - c++

I'm very new to C++ and Qt Creator, and I'm using Qt Creator on Mac OS X.
Is there an easy way to compile for the Windows and Linux platforms as well?
The way I currently do it is to copy the source files to a Windows machine (with Qt installed) and compile there, which takes a lot of time.
Is there a command or something that can produce all three executables at once?

It's not supported out of the box, and you may end up in a mess juggling the different flavours.
I strongly suggest, for such a goal:
- to use either centralised (svn) or distributed (git, hg) SCM
- to use continuous integration with 3 agents, each one on a different platform (VMs or physical machines). You can use Hudson or CruiseControl
This way:
- you develop locally, on whatever platform you prefer
- you push / commit / submit your changes
- the buildbox compiles on all platforms (while you can still work on the next feature)
- ideally, you run your unit tests as well
- once all builds on all platforms are finished, you get a status and a build for every target
This is particularly useful when you want to deal with more than one version of Qt or compiler: the return is definitely worth the time invested in the setup (and it scales well).

It is far from trivial, and Trolltech didn't like the idea (and made it harder than it should be).
There is a guide, Cross compiling Qt/Win Apps on Linux, on the Internet which will help (only the directories might differ on a Mac; the commands should be the same).
After you have cross-compiled, you'll need to create a shortcut to Qt Creator with a custom PATH that has your cross-compiler directory prepended to the rest of the PATH. That way you can ensure it is used. This is not recommended though.
Why not just use a properly set up VM?

Related

Development in Cygwin C++

For a few years I was writing programs in Visual Studio for Windows and with GCC (Code::Blocks) for Linux. Most of my libs compiled seamlessly and worked on both Windows and Linux. At the moment, however, I am a bit confused, as I have to create an app using Cygwin. I don't really understand whether I am still in a UNIX/Linux environment, just running the app on Windows through some "emulation", or whether I am on Windows and merely have access to some Linux/Unix functionality. From what I understood from the FAQs and documentation, it looks like I should just behave as if I were in a Linux environment.
All the explanations I found on the Internet are usually very general and don't explain the detailed differences from a programmer's viewpoint.
Short question: Can I just write programs like I did for Linux without any major changes when using Cygwin?
Maybe.
A lot of code written for Linux will compile in Cygwin with very few problems, which can mainly be fixed by messing with preprocessor definitions.
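To make that concrete, here is a minimal sketch (not from the original answer) of the kind of preprocessor juggling involved. The __CYGWIN__ macro really is predefined by Cygwin's gcc; the epoll/select fallback is only an invented illustration.

    // Illustrative only: __CYGWIN__ is predefined by Cygwin's gcc;
    // the epoll/select choice below is a made-up example of a typical port fix.
    #include <cstdio>

    #if defined(__CYGWIN__)
    // Cygwin lacks some Linux-only headers (epoll among them), so ported
    // code often falls back to a more portable alternative here.
    #  include <sys/select.h>
    #else
    #  include <sys/epoll.h>
    #endif

    int main() {
    #if defined(__CYGWIN__)
        std::puts("built under Cygwin");
    #else
        std::puts("built under plain Linux");
    #endif
        return 0;
    }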
However, any code written for Linux which:
Uses a Linux driver
Directly accesses the kernel
Relies on any code which does either of these two things (and doesn't have a Windows counterpart)
will quite definitely not work, regardless of how much you modify the code.
Much as it tries to, Cygwin cannot fully emulate (yes, it is an emulator of sorts) everything a POSIX system can normally do. Cygwin is not Linux; it is a compatibility layer that translates POSIX calls onto Windows.
For more information, read the Cygwin wikia.
Can I just write programs like I did for Linux without any major changes when using Cygwin?
The platforms are not identical, so you cannot realistically expect to write the program on Linux and then, poof, have it build and work under Cygwin. But if you avoid things that aren't available under Windows, you won't need major changes. And you can write non-trivial programs which will build and work on both, perhaps needing a few #ifdefs in places.
From your question I take it you want to work on Linux, but write programs for running under Cygwin. In that case you must also build and test it in the Cygwin environment all the time, so:
Use version control, and commit often. I recommend a DVCS like git or mercurial, which have separate commit and push; this allows you to commit more freely.
Whenever you commit/push, check out/pull and build on the Cygwin host. You can do this manually or automatically (with a simple custom script polling version control, or with Jenkins or something similar).
Whenever your code stops building or working under Cygwin, fix it before continuing with new code.
If Cygwin is not an absolute requirement, then I would look into using the Qt SDK. It can be used for non-Qt projects too; the MinGW toolchain on Windows is very similar to gcc on Linux. And if you're willing to use Qt, it has all sorts of platform-independent features for things you might want to do, such as discovering the locations of standard directories for saving files, using threads, printing, building a GUI...
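For a flavour of those platform-independent features, here is a hedged sketch using the Qt 5 QStandardPaths API (older Qt versions expose similar functionality through QDesktopServices); the application name is made up.

    // Sketch only: the same code resolves to the right locations on Windows, Linux and Mac.
    #include <QCoreApplication>
    #include <QStandardPaths>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QCoreApplication::setApplicationName("MyTool");   // hypothetical name, used in some paths

        qDebug() << "Documents:"
                 << QStandardPaths::writableLocation(QStandardPaths::DocumentsLocation);
        qDebug() << "App data: "
                 << QStandardPaths::writableLocation(QStandardPaths::AppDataLocation);
        return 0;
    }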

Build server / continuous integration recommendation for C++ / Qt-based projects

I'm looking to implement a build server for Qt-based C++ projects. The server needs to check out the necessary code / assets from Subversion, build the executable files, assemble the artifacts for installation projects, and build the installation media files. The target platforms and (rough) toolchains are:
Windows (32- and 64-bit): qmake, nmake, msbuild, wix toolchain. The end result is an installer EXE and DVD image.
Mac OS X: qmake, make, custom bash scripts to assemble package. The end result is an application bundle within a disk image and a DVD image.
Ubuntu (32- and 64-bit): qmake, make, debuild-based scripts. The end result is a collection of DEB files and a DVD image.
Fedora (32- and 64-bit): qmake, make, rpmbuild-based scripts. The end result is a collection of RPM files and a DVD image.
So that's at least 4 build agents (maybe more if 32- and 64-bit can't be done on the same box) and 7 configurations. Open-source projects are preferred, but that is not an absolute requirement.
Most of the tools I'm seeing seem to cater to Java (Jenkins, CruiseControl, etc.) or .NET (CruiseControl.NET, etc.). Can those be used with a C++ toolchain, or will I constantly be fighting the system? Is there anything you have used in the past and found works well with Qt / C++?
I use Jenkins for building and packaging many C++ projects, based on qmake, cmake, and makefiles.
There are plugins for cmake, qmake, and msbuild, but any command line scripts can be run as well.
I have done packaging using Jenkins with no problems, as it is just another command line step in a project.
There are good plugins for monitoring the number of warnings/errors produced by the compiler (I normally use GCC).
It also has matrix builds which allow you to build a project several times with different combinations of compiler flags, pre-processor variables, platform, etc. One project I set up is a matrix build with 5 boolean preprocessor flags on two platforms, which then does 2^6=64 builds. These can take a bit of setting up to get correct.
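To make the matrix idea concrete, here is a hedged sketch of what one axis typically toggles; ENABLE_TRACING is an invented flag that a matrix configuration would pass on the compiler command line (e.g. -DENABLE_TRACING=1).

    // Illustrative only: ENABLE_TRACING is a made-up preprocessor flag
    // supplied by the build configuration, not by this source file.
    #include <iostream>

    void processOrder(int id)
    {
    #ifdef ENABLE_TRACING
        std::cerr << "processing order " << id << '\n';  // extra diagnostics in tracing builds
    #endif
        (void)id;  // real work would go here
    }

    int main()
    {
        processOrder(42);
        return 0;
    }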
Here you can read a quick example:
Continuous Integration Server - Hudson
I think that Hudson, Jenkins and buildbot are worth a try. Spending a day or two evaluating and trying them with a quick example will help you choose confidently.
Most of the tools I'm seeing seem to cater to Java (Jenkins, CruiseControl, etc.) or .NET (CruiseControl.NET, etc.). Can those be used with a C++ toolchain, or will I constantly be fighting the system? Is there anything you have used in the past and found works well with Qt / C++?
Any reasonably capable CI system will have a piece that will allow you to execute any program you want for your build command.
Here's what I would consider:
Does the CI system run on your system(s) of choice
Does it allow you an easy way to view your logs
Does it integrate with your test runner
Does it integrate with your code coverage reports (e.g. BullseyeCoverage w/C++ & Qt)
Will it publish your files in a manner sensible for your needs
Will it provide an archive/store of files, if necessary (e.g. pdbs & lib*.so.debug)
If the CI system doesn't support feature X, will you have to write it for each supported OS/system
Is the CI system / UI easy for you to use?
I did the above using CruiseControl and most things were pretty easy. I wrote everything in make or qmake and simply called out to the command that I needed executed. For unit test and code coverage integration I output stuff to XML and transformed it to something supported by CruiseControl.
My recommendation, take a look at the recommended CI systems and examine them based on the criteria above.
I'm using buildbot for this. I've been using it for 4 years, and I feel very happy with it.
It is an application written in python, that runs on a server and can manage multiple clients on various OSes. I'm currently using Windows XP, Windows 7, Debian, Ubuntu and CentOS build slaves. My projects are C++, and one of them (the end user GUI) is made in Python. But we've also integrated with other frameworks, for other features than GUI.
What is really good about buildbot is that it works by running command lines on slaves. With this, you can do whatever you want. Even on Windows systems to compile using Visual Studio! From these command lines, you get all the output centralized on the server, and accessible.
You may also find alternatives on this site, which references many of them.
Disclaimer: I looked at it 3 years ago; I don't know if it is still accurate.
Hudson or Jenkins is pretty good.
Jenkins is indeed pretty popular for developing such a custom service, even after all these years, considering the question is already 7 years old.
Felgo also offers a Continuous Integration and Delivery (CI/CD) service for Qt. It supports desktop platforms as well as iOS, Android and embedded targets. The full feature set is described in the blog post.
Disclaimer: I am a software developer at Felgo

What tool should I use to create my buildmachine?

I am working on my free time on a multiplatform/multi-architecture library written in C++.
Before every release, I have to boot up several computers (One on Windows, one on Linux, another one on Mac OS, ...) just to make sure the code compiles and runs fine on every platform.
So I decided to create my own buildmachine but I really don't know what tools exist to do this. I'd like my buildmachine to run on Linux but any other solution will be accepted.
Ideally, I would just have to click on a "Build all" button, and it would compile my library for the different platforms/architectures, generate archives from the result and/or report potential errors.
My project "constraints" are:
It is written in C++
It compiles on Windows using SConstruct/MinGW and Visual Studio 2010
It compiles on Linux and Mac OS using SConstruct/g++
The sources are stored into Subversion (svn)
Do you know any tool/set of tools that could help me achieving my goal ?
Thank you very much.
I would setup 3 VMs (VirtualBox is free), one for each platform.
Install TeamCity (or Hudson) on Linux and agents on the other VMs and then it's just a matter of configuring the build system.
At the very basic level you should have 2 tasks: one to checkout the sources from Subversion and another to invoke scons.
I'm not too familiar with Hudson, but TeamCity is certainly capable of generating build reports, displaying progress, etc.

Building C++ on both Windows and Linux

I'm involved in a C++ project targeted at Windows and Linux (RHEL) platforms. Until now, development was done purely in Visual Studio 2008. For Linux compilation we used a 3rd-party Visual Studio plugin, which read the VS solution/project files and compiled remotely on a Linux machine.
Recently the decision was made to abandon the 3rd-party plugin.
Now my big concern is the build system. I was looking around for cross-platform build tools, so that I don't need to maintain two sets of build files (e.g. vcproj/solution files for Windows and makefiles for Linux).
I found the following candidates:
a. SCons
b. CMake
What do you think about these tools for cross-platform development?
Yet another point that bothers me is that Visual Studio (+ Visual Assist) will lose a lot of functionality without vcproj files - how do you handle this issue with these tools?
Thanks
Dima
PS 1: Something that I like about SCons is that it
(a) uses Python and hence is flexible, while CMake uses its own proprietary language (I understand that this is not a decisive feature for a build system), and (b) is self-contained (no need to generate makefiles on Linux as with CMake).
So why not SCons? Why did your projects decide to use CMake?
CMake will allow you to keep using Visual Studio solutions and project files. CMake doesn't build the source code itself; rather, it generates build files for you. For Linux this can be Code::Blocks, KDevelop, plain makefiles or other more esoteric choices. For Windows it can be, among others, Visual Studio project files, and there are still other generators for Mac OS.
So Visual Studio solutions and projects are created from your CMakeLists.txt. This works just fine for big projects. E.g. the current Ogre3D uses CMake for all platforms (Windows, Linux, Mac OS and iPhone) and it works really well.
I don't know much about SCons in that regard though; I only used it to build one library, and only on Linux. So I can't compare the two on fair ground. But for our multi-platform projects CMake is strong enough.
I haven't used Scons before, so can't say how that works, but CMake works pretty well.
It works by producing the build files needed for the platform you're targeting.
When used to target VC++, it produces solution and project files so from VS, it appears as if they were native VS projects. The only difference is, of course, that if you edit the project or solution directly through VS, the changes will be erased the next time you run CMake, as it overwrites your project/solution files.
So any changes have to be made to the CMake files instead.
We have a big number of core libraries and applications based on those libraries. We maintain a Makefile based build system on Linux and on Windows using the Visual Studio solution for each project or library.
We find it works well for our needs; each library or app is developed either on Linux or Windows with cross-compilation in mind (e.g. don't use platform-specific APIs). We use Boost for things like file paths, threads and so on. In specific cases we use templates/#defines to select a platform-specific solution (for example events). When it is ready we move to the other system (Linux or Windows), recompile, fix warnings/errors and test.
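As an illustration of that #define-based selection (invented code, not the poster's), the usual pattern hides the platform-specific primitive behind a single alias:

    // Invented sketch: a platform-specific event primitive behind one alias.
    #ifdef _WIN32
      #include <windows.h>

      class Win32Event {
      public:
          Win32Event() : handle_(CreateEvent(nullptr, FALSE, FALSE, nullptr)) {}
          ~Win32Event() { CloseHandle(handle_); }
          void signal() { SetEvent(handle_); }
          void wait()   { WaitForSingleObject(handle_, INFINITE); }
      private:
          HANDLE handle_;
      };
      using Event = Win32Event;
    #else
      #include <condition_variable>
      #include <mutex>

      class PosixEvent {
      public:
          void signal() {
              std::lock_guard<std::mutex> lock(m_);
              signalled_ = true;
              cv_.notify_one();
          }
          void wait() {
              std::unique_lock<std::mutex> lock(m_);
              cv_.wait(lock, [this] { return signalled_; });
              signalled_ = false;
          }
      private:
          std::mutex m_;
          std::condition_variable cv_;
          bool signalled_ = false;
      };
      using Event = PosixEvent;
    #endif

    int main()
    {
        Event done;          // client code only ever sees the Event alias
        done.signal();
        done.wait();         // returns immediately: the event was just signalled
        return 0;
    }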
Instead of spending time figuring out tools that can cross-compile on both platforms, we use the system that is best for each platform and spend the time fixing specific issues and making the software better.
We have GUI apps only on Windows at the moment, so there's no GUI to cross-compile. Most of our development that is shared between Windows and Linux is server-side networking (sockets, TCP/IP, UDP ...), with client-side tools on Linux and GUI apps on Windows.
Using Perforce for source code version management, we find in quite a few cases that the Linux makefile system is much more flexible for what we need than Windows VS. Especially for using multiple workspaces (views of source code versions), where we need to point to common directories and so on. On Linux this can be done automatically by running a script to update environment variables; in Visual Studio referencing environment variables is very inflexible, because it's hard to update them automatically between views/branches.
Re sync question:
I assume you are asking how to make sure that the two build systems stay synchronized between Linux and Windows. We are actually using Hudson on Linux and CruiseControl on Windows (we had Windows first with CruiseControl; when I went to set up the Linux version I figured Hudson was better, so now we have a mixed environment). Our systems are running all the time. When something is updated, it is tested and released (either the Windows or the Linux version), so you would know right away if it does not work. During testing we make sure all the latest features are there and fully functional. I guess that's it, no dark magic involved.
Oh, you mean build scripts ... Each application has its own solution, and in the solution you set up dependencies. On the Linux side I have a makefile for each project and a build script in the project directory that takes care of all dependencies; this mostly means building the core libraries and a couple of specific frameworks required for the given app. As you can see this is different for each platform; it is easy to add a line to the build script that changes to a directory and makes the required project.
It helps to have projects setup in consistent way.
On Windows you open the project and add a dependency project. Again, no magic involved. I see these kinds of tasks as development-related: for example, you added new functionality to a project and have to link in the frameworks and headers. So from my point of view there is no reason to automate these, as they are part of what developers do when they implement features.
Another option is Premake. It's like CMake in that it generates solutions from definition files. It's open source and the latest version is highly customizable using Lua scripting. We were able to add custom platform support without too much trouble. For your situation it supports both Visual Studio and standard GNU makefiles.
See Premake 4.0 Homepage
CruiseControl is a good choice for continuous integration. We have it running on Linux using Mono with success.
Here is an article about the decision made by KDE developers to choose CMake over SCons. However, I have to point out that this article is almost three years old, so SCons may have improved since.
Here is a comparison of SCons with other build tools.
I had to do this a lot in the past. What we did is use GNU make for virtually everything, including Windows at times.
You can use the project files under Windows if you prefer, and use GNU make for Linux.
There isn't really a nice way to write cross-platform makefiles, because the target files will be different, among other things (plus pathname issues, \ vs /, etc.). In general, you'll probably be tweaking the code across the various platforms to take subtle differences into account, so a tweak to a makefile and a check on the other platforms would have to happen anyway.
Many open-source projects, such as zlib, maintain makefiles for different platforms, named like Makefile.win, Makefile.linux, etc. You could follow their lead.

Using Visual Studio to develop C++ for Unix

Does anyone have battle stories to share trying to use Visual Studio to develop applications for Unix? And I'm not talking using .NET with a Mono or Wine virtual platform running beneath.
Our company has about 20 developers all running Windows XP/Vista and developing primarily for Linux & Solaris. Until recently we all logged into a main Linux server and modified/built code the good old fashioned way: Emacs, Vi, dtpad - take your pick. Then someone said, "hey - we're living in the Dark Ages, we should be using an IDE".
So we tried out a few and decided that Visual Studio was the only one that would meet our performance needs (yes, I'm sure that IDE X is a very nice IDE, but we chose VS).
The problem is, how do you set up your environment to have the files available locally to VS, but also available to a build server? We settled on writing a Visual Studio plugin - it writes our files locally and to the build server whenever we hit "Save", and we have a big fat "sync" button that we can push when our files change on the server side (for when we update to the latest files from our source control server).
The plugin also uses Visual Studio's external build system feature that ultimately just ssh's into the build server and calls our local "make" utility (which is Boost Build v2 - has great dependency checking, but is really slow to start as a result i.e. 30-60 seconds to begin). The results are piped back into Visual Studio so the developer can click on the error and be taken to the appropriate line of code (quite slick actually). The build server uses GCC and cross-compiles all of our Solaris builds.
But even after we've done all this, I can't help but sigh whenever I start to write code in Visual Studio. I click a file, start typing, and VS chugs to catch up with me.
Is there anything more annoying than having to stop and wait for your tools? Are the benefits worth the frustration?
Thoughts, stories, help?
VS chugs to catch up with me.
Hmmm ... your machine needs more memory & grunt. I've never had performance problems with mine.
I've about a decade's experience doing exactly what you're proposing, most of it in the finance industry, developing real-time systems for customers in the banking, stock exchanges, stock brokerage niches.
Before proceeding further, I need to confess that all this was done in VS6 + CVS, and of late, SVN.
Source Code Versioning
Developers have separate SourceSafe repositories so that they can store their work and check in packages of work at logical milestones. When they feel they want to do an integration test, we run a script that checks it into SVN.
Once checked into SVN, we've a process that kicks off that will automatically generate relevant makefiles to compile them on the target machines for continuous integration.
We've another set of scripts that syncs new stuff from SVN to the folders that VS looks after. There's a bit of a gap because VS can't automatically pick up new files; we usually handle that manually. This only happens regularly during the first few days of the project.
That's an overview of how we maintain codes. I have to say, I've probably glossed over some details (let me know if you're interested).
Coding
From the coding aspect, we rely heavily on the preprocessor (i.e. #define, etc.) and flags in the makefile to shape the compilation process. For cross-platform portability, we use GCC. A few times we were forced to use aCC on HP-UX and some other compilers, but we did not have much grief. The only thing that is a constant pain is that we had to watch out for thread heap spaces across platforms. The compiler does not spare us from that.
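As a hedged sketch of that idea (the PLATFORM_* names and values are invented, supplied by the makefile with -D rather than detected in the source):

    // Illustrative only: a makefile would pass e.g. -DPLATFORM_HPUX or
    // -DPLATFORM_SOLARIS; the sizes are made-up placeholder values.
    #include <cstdio>

    #if defined(PLATFORM_HPUX)
    #  define THREAD_STACK_SIZE (512 * 1024)
    #elif defined(PLATFORM_SOLARIS)
    #  define THREAD_STACK_SIZE (256 * 1024)
    #else   /* Linux / anything built with plain GCC */
    #  define THREAD_STACK_SIZE (128 * 1024)
    #endif

    int main()
    {
        std::printf("configured thread stack size: %d bytes\n", THREAD_STACK_SIZE);
        return 0;
    }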
Why?
The question is usually, "Why the h*ll would you even what to have such a complicated way of development?". Our answer is usually another question that goes, "Have you any clue how insane it is to debug a multi-threaded application by examining the core dump or using gdb?". Basically, the fact that we can trace/step through each line of code when you're debugging an obscure bug, makes it all worth the effort!
Plus!... VS's IntelliSense feature makes it so easy to find the methods/attributes belonging to classes. I also heard VS2008 has refactoring capabilities. I've since shifted my focus to Java on Eclipse, which has both features. You'd be more productive focusing on coding business logic rather than devoting energy to making your mind do things like remember!
Also! ... We'd end up with a product that can run on both Windows and Linux!
Good luck!
I feel your pain. We have an application which is "cross-platform": a typical client/server application where the client needs to be able to run on Windows and Linux. Since our client base mostly uses Windows, we work using VS2008 (the debugger makes life a lot easier) - however we still need to perform Linux builds.
The major problem with this was that we were checking in code that we didn't know would build under gcc, which would more than likely break the CI stuff we had set up. So we installed MinGW on all our developers' machines, which allows us to test that the working copy will build under gcc before we commit it back to the repository.
We develop for Mac and PC. We just work locally in whatever IDE we prefer, mostly VS but also Xcode. When we feel our changes are stable enough for the build servers, we check them in. The two build servers (Mac and PC) look for source control check-ins, and each does a build. Build errors are emailed back to the team.
Editing files live on the build server sounds precarious to me. What happens if you request a build while another developer has edits that won't build?
I know this doesn't really answer your question, but you might want to consider setting up remote X sessions, and just run something like KDevelop, which, by the way, is a very nice IDE--or even Eclipse, which is more mainstream, and has a broader developer base. You could probably just use something like Xming as the X server on your Windows machines.
Wow, that sounds like a really strange use for Visual Studio. I'm very happy chugging away in vim. However, the one thing I love about Visual Studio is the debugger. It sounds like you are not even using it.
When I opened the question I thought you must be referring to developing portable applications in Visual Studio and then migrating them to Solaris. I have done this and had pleasant experiences with it.
Network shares.
Of course, then you have killer lag on the network, but at least there's only one copy of your files.
You don't want to hear what I did when I was developing on both platforms. But you're going to: drag-n-drop copy a few times a day. Local build and run, and periodically checking it out on Unix to make sure gcc was happy and that the unit tests were happy on that platform too. Not really a rapid turnaround cycle there.
#monjardin
The main reason we use it is because of the re-factoring/search tools provided through Visual Assist X (by Whole Tomato). Although there are a number of other nice to haves like Intelli-sense. We are also investigating integrations with our other tools AccuRev, Bugzilla and Totalview to complete the environment.
#roo
Using multiple compilers sounds like a pain. We have the luxury of just sticking with gcc for all our platforms.
#josh
Yikes! That sounds like a great way to introduce errors! :-)
I've had good experience developing PlayStation 2 code in Visual Studio using gcc in Cygwin. If you've got Cygwin with gcc and glibc, it should be nearly identical to your target environments. The fact that you have to be portable across Solaris and Linux hints that Cygwin should work just fine.
Most of my programming experience is in Windows and I'm a big fan of Visual Studio (especially with ReSharper, if you happen to be doing C# coding). These days I've been writing an application for Linux in C++. After trying all the IDEs (NetBeans, KDevelop, Eclipse CDT, etc.), I found NetBeans to be the least crappy. For me, the absolute minimum requirements are that I be able to single-step through code and that I have intellisense, ideally with some refactoring functions as well. It's amazing to me how today's Linux IDEs are not even close to what Visual Studio 6 was over ten years ago. The biggest pain point right now is how slow and poorly implemented the intellisense in NetBeans is. It takes 2-3 seconds to populate on a fast machine with 8 GB of RAM. Eclipse CDT's intellisense was even laggier. I'm sorry, but a 2-second wait for intellisense doesn't cut it.
So now I'm looking into using VS from Windows, even though my only build target is linux...
Chris, you might want to look at the free build automation server CruiseControl, which integrates with all the main source control systems (svn, tfs, sourcesafe, etc.). Its whole purpose is to react to check-ins in a source control system. In general, you configure it so that any time anyone checks code in, a build is initiated and (ideally) unit tests are run. For some languages there are great plugins that do code analysis, measure unit test code coverage, etc. Notifications are sent back to the team about successful/broken builds.
Here's a post describing how it can be set up for C++: link (thoughtworks.org).
I'm just getting started with converting from a linux-only simple config (Netbeans + SVN, with no build automation) to using Windows VS 2008 with build automation back-end that runs unit tests in addition to doing builds in linux. I shudder at the amount of time it's going to take me to get that all configured, but the sooner the better, I guess.
In my ideal end-state I'll be able to auto-generate the Netbeans project file from the VS project, so that when I need to debug something in linux I can do so from that IDE. VS project files are XML-based, so that shouldn't be too hard.
If anyone has any pointers for any of this, I'd really appreciate it.
Thanks,
Christophe
You could have developers work in private branches (easier if you're using a DVCS). Then, when you want to checkin some code, you check it into your private branch on [windows|unix], update your sandbox on [unix|windows] and build/test before committing back to the main branch.
We are using a similar solution to what you described.
We have our code stored on the Windows side of the world, and UNIX (QNX 4.25 to be exact) has access through an NFS mount (thanks to UNIX Services for Windows). We ssh into UNIX to run make and pipe the output into VS. Accessing the code is fast, builds are a little slower than before, but our longest compile is currently less than two minutes, which is not a big deal.
Using VS for UNIX development has been worth the effort to set it up, because we now have IntelliSense. Less typing = happy developer.
Check out "Final Builder" (http://www.finalbuilder.com/). Select a version control system (e.g. cvs or svn, to be honest, cvs would suit this particular use case better by the sounds of it) and then set up build triggers on FinalBuilder so that checkins cause a compile and send the results back to you.
You can set up rulesets in FinalBuilder that prevent you checking in / merging broken code into the baseline or certain branch folders but allow it to others (we don't allow broken commits to /baseline or /branches/*, but we have a /wip/ branching folder for devs who need to share potentially broken code or just want to be able to commit at the end of the day).
You can distribute FB over multiple "build servers" so that you don't wind up with 20 people trying to build on one box, or waiting for one box to process all the little bitty commits.
Our project has a Linux-based server with Mac and Win clients, all sharing a common codebase. This set up works ridiculously well for us.
I'm doing the exact same thing at work. The setup I use is VS for Windows development, with a Linux VM running under VirtualBox for local build / execute verification. VirtualBox has a facility where you can make a folder on the host OS (Windows 7 in my case) available as a mountable filesystem in the guest (Ubuntu LTS 12.04 in my case). That way, after I start a VS build, and it's saved the files, I can immediately start a make under Linux to verify it builds and runs OK there.
We use SVN for source control; the final target is a Linux machine (it's a game server), so it uses the same makefile as my local Linux build. That way, if I add a file to the project or change a compiler option (usually adding/changing a -D), I make the modifications initially under VS, and then immediately change the Linux makefile to reflect the same changes. Then when I commit, the build system (Bamboo) picks up the change and does its own verification build.
Hard earned experience says this is an order of magnitude easier if you build like this from day one.
The first project I worked on started as Windows only, I was hired to port it to Linux, since they wanted to switch to a homogenous server environment, and everything else was Linux. Retrofitting a Windows project into this sort of a setup was a fairly major expenditure of effort.
Project number two was done as a "two-system build" right from day one. We wanted to maintain the ability to use VS for development/debug since it is a very polished IDE, but we also had the requirement of a final deploy to Linux servers. As I alluded to above, when the project was built with this in mind right from the start, it was quite painless. The worst part was a single file, system_os.cpp, that contained OS-specific routines, things like "get current time since Linux epoch start in milliseconds", etc. etc. etc.
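As a sketch of what such a file might hold (an illustration, not the actual project code), a cross-platform "milliseconds since the Unix epoch" helper looks roughly like this:

    // Sketch of a system_os.cpp-style routine; not the project's real code.
    #include <cstdint>
    #include <cstdio>

    #ifdef _WIN32
    #  include <windows.h>
    std::uint64_t GetTimeMillis()
    {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);                       // 100-ns ticks since 1601-01-01
        std::uint64_t t = (static_cast<std::uint64_t>(ft.dwHighDateTime) << 32)
                          | ft.dwLowDateTime;
        return (t - 116444736000000000ULL) / 10000;         // shift to Unix epoch, convert to ms
    }
    #else
    #  include <sys/time.h>
    std::uint64_t GetTimeMillis()
    {
        timeval tv;
        gettimeofday(&tv, nullptr);
        return static_cast<std::uint64_t>(tv.tv_sec) * 1000 + tv.tv_usec / 1000;
    }
    #endif

    // Tiny demo (would live in a test, not in system_os.cpp itself).
    int main()
    {
        std::printf("ms since epoch: %llu\n",
                    static_cast<unsigned long long>(GetTimeMillis()));
        return 0;
    }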
At the risk of getting a little off topic, we also created unit tests for this, and having the unit tests for the OS specific pieces provided a great deal of confidence that they worked the same on both platforms.