Recently I learned that there's an OpenJDK Shark project, which uses LLVM to make the HotSpot VM more portable. Since I used to work on the JDK and am interested in LLVM right now, this is a match made in heaven. So first things first: how do I build OpenJDK Shark against LLVM 3.0? To be more specific, which repository at http://hg.openjdk.java.net/ should I begin with? jdk7u, jdk7u2, jdk8, or icedtea?
I'd prefer the latest update release of JDK 7 if possible, and I intend to build it on Mac OS X.
I made some progress. Here's what I did:
started with this repository
applied the LLVM 3.0 patch mentioned here
and a bunch of other makefile hacks (a rough sketch of the configure step follows)
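For anyone following the same path, the IcedTea route looks roughly like this. The flag names are from memory, so verify them with ./configure --help; this also assumes llvm-config for LLVM 3.0 is on your PATH and a bootstrap JDK is installed.

```sh
# Hypothetical sketch of a Zero/Shark build via IcedTea.
hg clone http://icedtea.classpath.org/hg/icedtea7
cd icedtea7
./autogen.sh
./configure --enable-zero --enable-shark
make
```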
But there is a roadblock. To support JSR 292, a.k.a. invokedynamic, something called Ricochet frames has been introduced into the HotSpot JVM, and it unfortunately hasn't been implemented in Zero yet. So the build fails not only on Mac OS X but on other platforms as well.
The good news is that Chris Phillips from Red Hat is working on it. I also learned that the most relevant mailing lists are mlvm-dev and zero-dev at openjdk.java.net, in case anyone else is interested in the same topic.
Related
For a long time OpenCV has been (and still is) the main infrastructure for 2D development.
When going 3D, PCL is the natural choice: it has a vast range of algorithms implemented, online API documentation, and the backing of the industry's leading companies.
That said, how can it be that the last prebuilt binary targets an IDE from 5 years ago? The last update was in 2013 (probably due to the death of OpenNI, thank you Apple), the implementation is obsolete, and I am not even talking about C++1x, let alone the futuristic compute capability 5.x.
Is PCL a dead project? Are there any successors?
I too work with PCL and find the outdated libraries frustrating. However, as PhilLab mentioned, the GitHub page is still active.
However: thanks to Tsukasa Sugiura, there exists an excellent pre-built package for Windows + VS2015, in both x86 and x64, which he maintains and updates.
It is also possible to use 1.8.0 RC2 on the NVIDIA Tegra platforms, such as the Jetson TX1; there the CMake system works relatively well.
ROS supports it as well (again defaulting to 1.7.2, but it can run on 1.8.x).
So to conclude: sure, no one is packaging it into tidy releases, but the project is slowly advancing. And it kind of is our only choice...
Also, to the moderators: I would have posted this as a comment on PhilLab's answer, as I feel it doesn't merit a new answer, but in this strange community you can answer before you can comment. Sorry.
I share your frustration with the outdated prebuilts (outdated both in IDE version and PCL version), but the project is still quite active on GitHub: https://github.com/PointCloudLibrary/pcl/commits/master.
The release cycle seems to be quite lengthy, but the commits come in steadily.
Edit: Release 1.8.0 is in preparation, and the lack of Windows builds is due to the lack of a Windows programmer.
Edit (06/2018): The newest versions include Windows prebuilds.
Now the best option for using PCL on Windows is the vcpkg package manager, which handles the dependencies and binaries for you.
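For example (the pcl package name is as published in the vcpkg registry, and x64-windows is one common triplet):

```sh
# Bootstrap vcpkg and install PCL plus its dependencies.
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
bootstrap-vcpkg.bat
vcpkg install pcl:x64-windows
vcpkg integrate install   # expose the libraries to Visual Studio/MSBuild
```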
If I want my software to run on Red Hat Linux 6.0, do I have to build it on 6.0? Or can I build it on 6.3? (Similar question for 5.X.) I'm asking a general question about the runtime implications of shared libraries and similar "automatic" dependencies that get sucked in during the build process. And I'm interested only in the divergence between minor releases; I know that more things change between major releases. I'm interested specifically in RH and RH-derived distributions. My program is written in C and C++, and I think the biggest dependency I need to worry about is the GCC runtime libraries for C and C++. Is there a web page I can use to verify which GCC updates were used in which RH minor releases?
To be clear: I understand the goal of, and commitment to, compatibility between update releases going forward. Upgrading from 6.1 to 6.2 should not break my existing applications. In order to build on a newer update and run on an older update, I would need the reverse kind of compatibility: I need 6.1 to be compatible with things built on 6.2. In general this kind of compatibility is impossible to achieve on a widespread basis, across all config files, libraries, etc. But I only need a narrow slice of reverse compatibility.
I have an app that was designed, written, and built successfully on 6.1. Now I want to build it on 6.2, but I want it to still run correctly on 6.1. Is there a general software release practice on Linux of always building on the oldest update release you want to support? Or do most people use trial and error to determine whether their app runs on older update releases? If you use trial and error, how much "error" shows up in the equation?
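One concrete way to take some of the trial and error out of this is to inspect which versioned glibc/libstdc++ symbols a binary actually references; it will only run on releases whose runtime libraries provide those versions. A minimal sketch, assuming your binary is named myapp:

```sh
# List the GLIBC_*/GLIBCXX_* symbol versions the binary requires.
# If every version listed here is provided by the libraries on 6.1,
# a binary built on 6.2 will generally load and run there.
objdump -T myapp | grep -oE 'GLIBC(XX)?_[0-9.]+' | sort -u
```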
I was recently tasked with performing a feasibility study on switching from DOS to Linux as the OS for running our industrial control software (developed internally). In a nutshell, I have been restricted to using Ubuntu 8.04 (with a vendor-supplied kernel upgrade providing drivers for the hardware on the board). As this is no longer supported, I am unable to update or install software, meaning that I am stuck using GCC version 4.2. I want to be able to use C++ and preferably the Boost libraries, but currently it seems like I will not be able to do so.
Basically I am asking: how do companies/professionals go about using Linux as a development environment? Is what I described above a common occurrence? Do you simply pick a version and a compiler and stick with them throughout the product lifetime to ensure that the development environment doesn't change too much, or can you freely upgrade the kernel, compiler, etc. as you go along? Is it common to be constrained by what a particular vendor can provide? Would anyone be prepared to give their opinion as to whether Ubuntu 8.04 is a suitable choice of OS for development of industrial control software?
I am not a Linux expert at all, but my research and experimentation so far are leading me to the conclusion that I should abandon the Linux approach and use DOS. Our company has no Linux knowledge and is very small, yet for personal career reasons I have no interest in learning a redundant technology like DOS.
I realise this is not exactly a yes/no type question but any responses will be gratefully received.
GCC 4.2 has no C++11 support, but its C++03 support should be good, and you should be able to find a version of Boost that works with it quite easily.
Ultimately, Linux has many upsides you won't find in DOS: no segmented memory model, virtual memory, and other things that make it easier and faster to develop software, not to mention the additional libraries you might need, since absolutely nobody whatsoever will support DOS today.
With Linux-based systems there's not much reason to stick with a fixed OS + toolchain version, because backwards compatibility is taken very seriously in the Unix world. Sometimes it is good to target a certain fixed system, but frankly such cases are rather rare, and even then development can be done on up-to-date systems as long as testing is done on the target machine/platform.
Basically, you could just upgrade to, for example, Ubuntu 12.04 LTS (long-term support) for development and stick with it; it is very unlikely that there would be any sort of incompatibility problem on the target platform/machine.
Libraries and such tend to change between Linux distros, between versions of the same distro, and between other *nix OSes.
I once worked on a C++ application that had to run on both Windows and RHEL. I was the 'Linux guy' on the team, so I got to deal with coaxing all the open-source Linux libraries we were using to build and work on Windows (using Cygwin), and with getting the latest changes made by the devs working on Windows to work on Linux.
Midway through development, we upgraded to a newer version of RHEL. It was not a fun experience. Library versions had changed, some had been removed in favor of other 'equivalent' libs, etc. Shaking out all of the problems caused by changing GCC versions took a little while too (granted, the newer GCC version was a bit less forgiving and exposed some stuff in our code that probably wasn't quite right anyway).
A couple of days before a big demo, management informed us that the app needed to run on Solaris as well. That was not a trivial task -- Solaris is NOT Linux. They hinted about wanting it to run on IRIX at one point. Glad that didn't happen.
I would recommend that you pick a specific version of a Linux distro, GCC, etc., and stick with it throughout development. Upgrading that stuff can happen later, when the software is in maintenance. RHEL offers long-term support, at a cost. You might also consider the newly released Ubuntu 12.04 LTS.
While deciding on a cross-platform toolkit for a desktop application I want to build, I came across wxWidgets for C++. After testing a demo application on Mac OS X 10.6.4, I noticed the application needed Rosetta to run.
My concern is: will I always need Rosetta for a C++ application built with wxWidgets to run on a Mac?
Note: the latest news is that Apple is dropping support for Java in a future OS release (here's hoping Oracle will pick up where Apple left off), and the upcoming Mac App Store will not support apps requiring Rosetta.
You can create Universal Binaries with wxWidgets. My guess is that your demo application was only compiled for PPC. (Which seems weird, actually. Was the app you tried one you built yourself from examples/, or just one you downloaded off the web?)
I've built Universal apps with wxWidgets in Xcode (the easiest way to do it), and I believe it's not that hard with a Makefile on the command line: you build the PPC version, build the Intel version, and use the lipo command-line tool to squash them together.
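Roughly, the command-line route looks like this (file names are illustrative, and wxWidgets itself must also have been built as a universal library):

```sh
# Build each architecture separately with Apple's GCC, then merge.
g++ -arch ppc  main.cpp `wx-config --cxxflags --libs` -o myapp.ppc
g++ -arch i386 main.cpp `wx-config --cxxflags --libs` -o myapp.i386
# Glue the two single-architecture binaries into one Universal binary.
lipo -create myapp.ppc myapp.i386 -output myapp
```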
I would like to port my C/C++ apps to OS X.
I don't have a Mac, but I have Linux and Windows. Is there any tool for this?
For Linux, there is a prebuilt GCC cross-compiler (built from Apple's publicly available modified GCC sources).
https://launchpad.net/~flosoft/+archive/cross-apple
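If the usual Launchpad PPA convention applies to that archive, setup would look roughly like this (the PPA name is inferred from the URL above, and the actual package names are listed on the PPA page):

```sh
# Hypothetical: add the PPA, then look up its cross-toolchain packages.
sudo add-apt-repository ppa:flosoft/cross-apple
sudo apt-get update
apt-cache search apple   # find the actual cross-compiler package names
```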
Update for 2015
After so many years, Visual Studio, the industry-standard IDE, now supports OSX/iOS/Android.
http://channel9.msdn.com/Events/Visual-Studio/Connect-event-2014/311
Embarcadero's RAD Studio also supports building OSX/iOS/Android apps on Windows.
This answer by Thomas also provides a cross-compilation tool.
For all these options you still need a real Mac or iOS device to test the application.
I have created a project called OSXCross, which aims to target OS X (10.4-10.9) from Linux.
It currently supports Clang 3.2 up to 3.8 (trunk) (you can use your distro's clang).
In addition, you can build an up-to-date vanilla GCC as well (4.6+).
LTO works as well, for both Clang and GCC.
Currently using cctools-870 with ld64-242.
https://github.com/tpoechtrager/osxcross
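Per the repository README, setup looks roughly like this (you must package an OS X SDK yourself from an Xcode download; the wrapper names follow OSXCross's o32-/o64- convention):

```sh
# Sketch of an OSXCross build; see the README for the SDK packaging step.
git clone https://github.com/tpoechtrager/osxcross.git
cd osxcross
# Drop a packaged OS X SDK tarball into tarballs/ first, then:
./build.sh
# Cross-compile a C++ program for 64-bit OS X:
target/bin/o64-clang++ hello.cpp -O2 -o hello
```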
There appear to be some scripts that have been written to help you get set up for cross-compiling for the Mac; I can't say how good they are, or how applicable they are to your project. In the documentation, they refer to these instructions for cross-compiling for 10.4, and these for cross-compiling for 10.5; those instructions may be more helpful than the scripts, depending on how well the scripts fit your needs.
If your program is free or open source software, then you may wish instead to create a MacPorts portfile (documentation here), and allow your users to build your program using MacPorts; that is generally the preferred way to install portable free or open source software on Mac OS X. MacPorts has been known to run on Linux in the past, so it may be possible to develop and test your Portfile on Linux (though it will obviously need to be tested on a Mac).
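As a rough sketch of that workflow, assuming a local port tree registered in MacPorts' sources.conf and a hypothetical port named myapp:

```sh
# Validate and test-build a Portfile from a local port tree.
cd ~/ports/devel/myapp
port lint                # sanity-check the Portfile
sudo port install myapp  # build and install it from the local tree
```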
Get "VMware Player"
Get "Mac OS X vm image"
Compile/Debug/Integrate-and-test your code on the new OS to make sure everything works
When you are trying to get something working on multiple platforms, you absolutely must compile/run/integrate/test on the intended platform. You cannot just compile and run on one platform and then say "oh, it should work the same on the other platform".
Even with a really good cross-platform language like Java, you will run into problems where things don't work exactly the same on the other platform.
The only way I have found that respects my time/productivity/ability-to-rapidly iterate on multiple platforms is to use a VM of the other platforms.
There are other solutions, like dual-booting, and others I haven't mentioned, but I find that they don't respect my productivity/time.
Take dual-booting as an example:
I make a change on OS 1
reboot into OS 2
forget something on OS 1
reboot into OS 1
make a change on OS 1
reboot into OS 2 ... AGAIN...
BAM there goes 30 minutes of my time and I haven't done anything productive.
You would need a toolchain to cross-compile for Mach-O, but even if you had that, you wouldn't have the Apple libraries around to develop with. I'm not sure how you would be able to port without them, unfortunately.
Apple development is a strange beast unto itself. OS X uses a port of GCC with some modifications to make it 'Apple-y'. In theory, it's possible to take the sources to Apple's GCC and toolchain, as well as the Apple kernel and library headers, and build a cross-compiler on your Windows machine.
Why you'd want to go down this path is beyond me. You can get a cheap Mac mini from $600. The time you invest in getting a cross-compiler working right (particularly with a Windows host for Unix tools) will probably cost more than the $600 anyway.
If you're really keen to make your app cross-platform, look into Qt, wxWidgets, or FLTK. All provide cross-platform support with minimal changes to the base code. At least that way all you need to do is find a Mac to compile your app on, and that's not too hard if you have some technically minded friends who don't mind giving you SSH access to their Mac.
You will definitely need OS X somehow. Even if you don't own a Mac, you can still try some alternatives.
I found this small documentation on the net:
http://devs.openttd.org/~truebrain/compile-farm/apple-darwin9.txt
This describes exactly what you want. I am interested in it myself and haven't tested it yet (I found it 10 minutes ago), but the documentation seems good.
You can rent a Mac in the cloud from this website. Prices start at $1, which should be enough (unless you need root access, in which case you are looking at $49+).
There are a few cross-compiler setups, but nearly all of them are meant for distcc-style distributed compiling. To my knowledge there is no way to directly target the Mac platform without actually having a Mac. The closest you can get without resorting to Qt or wxWidgets is OpenStep with GNUstep or similar, but that's not a true Cocoa platform, just very close.
I know this question isn't very active, but I'll answer anyway. Why don't you try using TransMac, then download the Xcode image and do it that way? Or you can use a VM, or Sosumi. You'll definitely find a video about Sosumi on YouTube.
The short answer is: kind of. You will need to use a cross-platform library like Qt. There are IDEs, like Qt Creator, that will let you develop on one OS and generate Makefiles for the others. For more information on cross-platform development, check out the cross-platform episodes of this podcast (note that the series isn't over and new episodes appear to come out weekly).
As other answers explain, you can probably compile for a Mac on Windows or Linux, but you won't be able to test your application. So you should probably spend the $600 for a Mac if you're doing professional programming; or, if you're working on open-source software, find a developer with a Mac who will help you.