How do I add a target to GDB, i.e. complete support for a new CPU? I took a look at GDB Internals; it only introduces adding an xxx_tdep.c file, but does not explain in detail what information xxx_tdep.c needs to describe. I also referred to ROCgdb (the AMD port), but I don't understand that either.
I'm not aware of a single document that you can read that contains the complete steps for adding a new target.
The best strategy, I think, is to find a recently added target and look at the commits that added it. I would recommend the RISC-V target as a good example.
I'd start with git log -- gdb/riscv* and then start at the bottom of the list and work upwards.
The first commit you find should be the one that initially added the RISC-V target.
The one thing the above doesn't show you, though, is that before you can add GDB support, you'll need support for your target in the bfd library (the binutils-gdb/bfd directory); that's how GDB is able to open and process files for your ARCH. Additionally, if you want GDB to be able to disassemble the instructions for ARCH, you'll need to add at least disassembler support to the opcodes library (the binutils-gdb/opcodes directory). Once you have bfd support and opcodes support, things like readelf, objdump, and objcopy will either work, or will require minimal extra effort to get working. At that point you're ready to start working on GDB.
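To make the opcodes side concrete, here is a hedged sketch of the disassembler entry point that GDB (and objdump) end up calling; "xxx", the fixed 4-byte instruction width, and the little-endian byte order are placeholder assumptions for illustration, not anything the library prescribes for your port:

#include "dis-asm.h"

/* Disassemble one instruction at MEMADDR; return the number of bytes
   consumed, or -1 on error.  This stub just prints the raw word.  */
int
print_insn_xxx (bfd_vma memaddr, struct disassemble_info *info)
{
  bfd_byte buf[4];
  int status = info->read_memory_func (memaddr, buf, 4, info);

  if (status != 0)
    {
      info->memory_error_func (status, memaddr, info);
      return -1;
    }

  /* A real disassembler decodes BUF here instead of dumping it.  */
  info->fprintf_func (info->stream, ".word\t0x%08lx",
                      (unsigned long) bfd_getl32 (buf));
  return 4;
}

You then hook this function up in opcodes/disassemble.c so that it gets selected for your bfd architecture.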
The ARCH-tdep.[ch] files deal with bare metal target stuff, so every architecture is going to need one of these files. Then if you want to add Linux support (for example), you'll need ARCH-linux-tdep.[ch] files. This is generic Linux support, so used by both local and remote Linux targets. To actually support running GDB natively on Linux for a particular target, you will add ARCH-linux-nat.[ch] files.
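To give a flavor of what goes in ARCH-tdep.c, here is a heavily stripped-down sketch; the architecture name "xxx", bfd_arch_xxx, and the register numbers are invented for illustration, and a real port sets many more gdbarch hooks (frame unwinding, breakpoint kinds, calling convention, and so on):

#include "defs.h"
#include "arch-utils.h"
#include "gdbarch.h"

static struct gdbarch *
xxx_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
{
  /* Reuse an existing architecture object if one already matches.  */
  arches = gdbarch_list_lookup_by_info (arches, &info);
  if (arches != NULL)
    return arches->gdbarch;

  struct gdbarch *gdbarch = gdbarch_alloc (&info, NULL);

  /* Describe the register file (the numbers here are made up).  */
  set_gdbarch_num_regs (gdbarch, 33);
  set_gdbarch_pc_regnum (gdbarch, 32);

  /* Hook up the opcodes disassembler, frame unwinders, return-value
     convention, breakpoint details, etc. here.  */

  return gdbarch;
}

void
_initialize_xxx_tdep (void)
{
  gdbarch_register (bfd_arch_xxx, xxx_gdbarch_init, NULL);
}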
I've inherited a large volume of C++ code for running and monitoring laboratory equipment. Currently the deployment is managed by compiling all of the individual modules (each its own program) using DevC++, manually moving all the .exe files to a Dropbox folder, and then running them on the host machine manually.
I'm trying to automate this process somewhat to make rolling out an implementation on a new machine simpler, and to be able to quickly make sure the most up-to-date binaries are what is running on any given machine. However, I don't know anything about deploying software in a Windows environment (I'm used to working on Linux systems, where a simple makefile would suffice). What tools (preferably command line) are available to compile and organize binaries in a portable way on Windows systems?
Assume that you have a C++ compiler usable on the command line, one translation unit at a time. For example, GCC is such a compiler (and MinGW is, or contains, a variant of GCC). Assume also that it is capable of linking (e.g. by driving the system linker).
Then you need to use some build automation tool to drive such compilation commands, for example GNU make or ninja (but there are many others). AFAIK they exist on Windows (so you could port your Linux Makefile to Windows).
Once you have chosen your build automation tool, studied its documentation, and understood how to use it, you'll write the relevant configuration file for it. For make, you'll write a Makefile (caveat: tab characters are significant). For ninja, you'll write a build.ninja file (but you'll probably generate it, perhaps with meson).
Notice that some build tools (e.g. cmake) are cross-platform.
BTW, DevC++ is an IDE, not a compiler.
I developed a Qt application on a MacBook (El Capitan 10.11.2) and it is now ready to be released.
What I want now is to create a standalone executable file for both Mac and Windows.
But I don't know how!
I found this link, but I am unable to follow its guidance; it looks different from what my system is showing me.
If you have any idea, please help me.
Thank you
Well, to compile an application for Windows, you will need a Windows machine (or at least a virtual machine). You can't easily compile for Windows on a Mac.
Regarding the "standalone": The easy way is to deploy your application together with all the required dlls/frameworks and ship them as one "package". To to this, there are the tools windeployqt and macdeployqt. However, those will not be "single file" applications, but rather a collection of files.
If you want to have one single file, you will have to build Qt statically! You can do this, but you will have to do it on your own. And if you do, please note that the LGPL license (the one for the free version of Qt) then imposes extra obligations: you must at least give users a way to relink your program against a modified Qt (for example by providing your object files), or publish your source code. That's not the case if you just link to the dynamic libraries.
EDIT:
Deployment
Deployment can be really hard, because you have to do it differently for each platform. In most cases you will have 3 steps:
Dependency resolving: In this step, you collect all the executables/libraries/translations/... your application requires and put them somewhere they can find each other. For Windows and Mac, this can be done using the tools I mentioned above.
Installation: Here you will have to create some kind of "installer". The easiest way is to create a zip file that contains everything you need. But if you want a "nice" installation, you will have to create proper installers for each platform. (One of many possibilities is the Qt Installer Framework. Best thing about it: it's cross-platform.)
Distribution: Distribution is how you get your program to the user. On Mac, you have the App Store; for Windows, you don't. The best way is to provide the download on a website created for this (like SourceForge, GitHub, ...).
I can help you with the first step, but for the remaining steps you will have to research the possibilities and decide on a way to do it.
Dependencies
Resolving the dependencies can be done either by building Qt statically (this way you will have only one single file, but you gain additional work because you will have to compile Qt yourself) or by using the dynamic build. For the dynamic build, Qt will help you resolve the dependencies:
macdeployqt is rather easy to use. Compile your app in release mode and call <qt_install_dir>/bin/macdeployqt <path_to_your_bundle>/<bundle>.app. After that's done, all Qt libraries are stored inside the <bundle>.app folder.
windeployqt is basically the same: <qt_install_dir>\bin\windeployqt --release <path_to_your_build>\<application>.exe. All dependencies will be placed inside the build folder. (Hint: copy the <application>.exe into an empty directory and run windeployqt on that path instead; this way you get rid of all the build files.)
Regarding the static build: just google it; you will find hundreds of explanations for every platform. But unless you have no choice but to use one single file (for whatever reason), I would recommend using dynamic builds. And regarding the user experience: on Mac, users won't notice a difference, since in both cases everything is hidden inside the app bundle. On Windows, it's normal to have multiple files, so no one will be bothered. (And if you create an installer for Windows, just make sure to add a desktop shortcut; this way the user still has "a single file" to click.)
I'm trying to use the Quadprog++ library (http://quadprog.sourceforge.net/). I don't understand the instructions though.
To build the library simply go through the ./configure; make; make install cycle.
In order to use it, you will be required to include in your code file the "Array.hh" header, which contains a handy C++ implementation of Vector and Matrices.
There are some "configure", and "MakeFile" files, but they have no extension and I have no idea what to do with them. There are also some ".am", ".in" and ".ac" extensions in the folder.
Does this look familiar to anyone? What do I do with this?
(Edit: On Windows.)
This package is built using the autotools. The files you mention (*.am, *.in, ...) come from the tools automake and autoconf.
Autotools is a de facto standard in the GNU/Linux world. Not everybody uses it, but when a project does, it eases the work of package and distribution managers. Actually, such packages should be portable to any POSIX system.
That said, I'm guessing that you are using a non-Unix machine, such as Windows, so the configure script is not directly runnable on your system. If you insist on keeping Windows, which you probably will, your options are:
Use MinGW and MSYS to get a minimal build environment compatible with autotools.
Use Cygwin and create a POSIX-like environment in your Windows.
Create a VS project, add all the sources of the library there, compile, and fix the errors that may arise, as if the code had been written by you.
Search for someone who already did the work and distributes a binary DLL, or similar.
(My favourite!) Get a Linux machine, install a cross-compiler environment to build Windows binaries, and do ./configure --host=i686-mingw32; make.
These instructions say how a program delivered as a tarball is built on Linux. To understand them, take a look at Why always ./configure; make; make install; as 3 separate steps?.
This can be confusing at first, but here you go. Type these in as shown below:
cd <the_directory_with_the_configure_file>
./configure
At this point, a bunch of stuff will roll past on the screen. This is the configure script (generated by Autoconf) running (for more details, see http://www.edwardrosten.com/code/autoconf/index.html)
When it's done, type:
make
This initiates the build process. (To learn more about GNU make, check out Comprehensive gnu make / gcc tutorial). This will cause several build messages to be printed out.
When this is done, type:
sudo make install
You will be asked for the root password. If this is not your own machine (or you do not have superuser access), then contact the person who administers this computer.
If this is your computer, type in the root password and the library should install in /usr/local/lib/ or something similar (watch the screen closely to see where it puts the .so file).
The rest of it (including the "Array.hh" header in your code) seems self-explanatory.
Hope that helps!
I need to make a portable application that will run on Windows, Linux, and macOS with no install required. It must be one executable file with no other library files (.dll, .so, ...). I will use ANSI C and recompile the project for each platform. I want to use Lua scripts, so I must embed a Lua interpreter in my code. I need networking and some other modules, but I know that Lua already has modules for those purposes, so I will use them instead of writing my own.
How can I link all of that together (the Lua interpreter and Lua modules such as LuaSocket) into one executable file that will load a .lua script? Lua's require system expects to find a .dll, so I am wondering what I should do: is it enough to just call the functions without a require statement?
You most certainly can do that (and it is not wrong!), although it is not trivial. The Lua core is made for embedding, to the point that you can just include the Lua sources into your own project and it "just works" :).
The deal is slightly different with modules: not many of them are suited for direct embedding. For example, this has been tried successfully for LuaSocket before and also asked here. The basic idea is to embed the sources of MODULE into your project and insert the luaopen_MODULE function into package.preload['MODULE'], so that require can pick it up later, as sketched below.
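Here is a minimal sketch in C of that preload technique, assuming you have compiled LuaSocket's C sources into your executable (luaopen_socket_core is the entry point of LuaSocket's C half; "main.lua" is a hypothetical script name):

#include <stdio.h>
#include "lua.h"
#include "lauxlib.h"
#include "lualib.h"

/* Provided by the embedded LuaSocket sources.  */
int luaopen_socket_core (lua_State *L);

/* Register OPENF so that require(NAME) finds it without any .dll.  */
static void preload (lua_State *L, const char *name, lua_CFunction openf)
{
  lua_getglobal (L, "package");
  lua_getfield (L, -1, "preload");
  lua_pushcfunction (L, openf);
  lua_setfield (L, -2, name);   /* package.preload[name] = openf */
  lua_pop (L, 2);
}

int main (void)
{
  lua_State *L = luaL_newstate ();
  luaL_openlibs (L);
  preload (L, "socket.core", luaopen_socket_core);

  /* Scripts can now do: local core = require "socket.core" */
  if (luaL_dofile (L, "main.lua") != 0)
    fprintf (stderr, "%s\n", lua_tostring (L, -1));

  lua_close (L);
  return 0;
}

Keep in mind that LuaSocket also has a pure-Lua half (socket.lua and friends), which you would need to embed in a similar way, e.g. by compiling those chunks in and registering loader functions for them in package.preload too.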
One way to go is to look at sources of projects that already embed Lua and other libraries, like LÖVE, MurgaLua and Scrupp.
If the goal of a single executable with no external libraries turns out not to be achievable, you can loosen up a bit and go for a portable application: an application that carries all its dependencies with it, in a single directory, independent of the system. This is what LuaDist was designed for; you use it similarly to LuaRocks to install Lua packages. The difference is that these packages can be installed/deployed into a separate directory, where all necessary dependencies are installed too. This directory (a "dist") is fully independent, meaning you can move it somewhere else and it will still work.
Also, I dislike the idea of an application that requires installation (because it puts files all around my system) - uninstallation should be just removal of a directory :)
I believe you cannot do that (and I think it is wrong to do that). An executable is operating-system and machine specific (on some systems like Mac OS X, there are fat binary executables, which mix several machine-specific variants for the same operating system).
The only way to have a system- and machine-"independent" program is essentially to target it at some single common "virtual machine" (in the broadest sense). In your case this VM is the Lua VM (it could be the Java VM for others, etc.). But you have to assume that your users have it, or provide one, which is machine and system specific.
And I would personally dislike the idea of an application which is not installable (because it is then not easily uninstallable).
I'm trying to design an SConstruct file for an embedded-system project. The compiler on my machine is at "C:\Program Files\IAR Systems\Embedded Workbench 5.4\arm\bin". I would like the build system to locate the toolchain even if there is another version of Embedded Workbench installed, or if the user has chosen to install it elsewhere.
I'd also be interested in strategies used in makefiles or ant files since they are probably useful here as well.
What are some strategies for doing this? Do I have options other than searching the Windows registry or looking for "C:\Program Files\IAR Systems\Embedded Workbench *\arm\bin"?
The simplest solution is to use an environment variable. You still have to set that up manually for each build host, but the build system need only refer to the environment variable, so can be common for all build hosts.
For example in your case you might have:
EWBARM_V0504="C:\Program Files\IAR Systems\Embedded Workbench 5.4\arm\bin"
And similarly for other versions installed; then in your build system you would use %EWBARM_V0504% in place of the path. The worst that can happen is that the variable does not exist and the build fails, which is preferable to using the wrong compiler, and is easily fixed.
Since different versions of toolchains may have different bugs and/or features, silently falling back onto different sets of tools is probably a bad idea. When I've supported multiple tools versions on a single project, I usually have the version number assigned via a makefile or the environment. Then you can pass -D TOOLS_VERSION=$(TOOLS_VERSION) to your compiler and use that value to key bugfixes and workarounds you need for particular versions of the tools. This system makes it clear which tools you want to support, while still making it easy for other developers to switch tool versions by making a single edit.
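As a sketch of what that enables on the C side (TOOLS_VERSION and its encoding here are invented; use whatever scheme your makefile passes with -D):

/* The makefile passes e.g. -DTOOLS_VERSION=540 for EWARM 5.40.  */
#ifndef TOOLS_VERSION
#error "TOOLS_VERSION not defined - check the build configuration"
#endif

#if TOOLS_VERSION < 540
/* Hypothetical workaround for an optimizer bug fixed in 5.40:
   keep the flag out of a register.  */
#define IO_FLAG_QUALIFIER volatile
#else
#define IO_FLAG_QUALIFIER
#endif

static IO_FLAG_QUALIFIER int io_ready;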
The nice thing about SCons is that you have all of Python at your disposal. So you can use Python's winreg module to look in the registry, or glob around in sets of paths, whatever works for you. And of course you can have a command-line option or an options file to override the autodetection. Then, once you've found your tool of choice, you have basically two ways to make SCons use it: either prepend the tool's dir to env['ENV']['PATH'] (you can use env.PrependENVPath for that), or just use the tool's full path as the value of your $CC (and set $LINK, $SHLINK etc. appropriately too).
I usually make a TOOL_MYCOMPILER function that takes an env and sets it all up for use with the compiler and its toolchain (cpp, linker, whatever). It keeps things cleaner in your SConstruct/SConscript.