Alternatives for accurately representing Visual Basic Decimal variables in C++ - c++

I am currently working on a program that takes Visual Basic data in the form of a text file, and then stores this data in C++. Some of the data from Visual Basic is of the type Decimal. C++ has no built in type equivalent to decimal. I don't want to use double because there is a possible loss of significant figures if the numbers are large enough.
One option is write my own decimal class. I was wondering if there were any other alternatives for solving this problem before I attempted to do that.
Thanks for your help.

There's the decNumber library: a C library designed for working with decimal numbers without losing precision or accuracy.
Given that it's a C library, you should be able to wrap it in a C++ class easily, or just use the C functions directly.
It's an IBM-sponsored library, available under an open-source license (ICU).

Using a Decimal class is the best solution in my opinion. As to writing your own implementation, try a short web search first: it seems that others have had the same problem before. The first Google result reveals a CodeProject solution, and there may be many others...
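If you do end up rolling your own, here is a minimal sketch of the scaled-integer approach (all names invented for illustration; note that a real VB Decimal carries up to 28-29 significant digits and a movable scale, which this 64-bit, fixed-4-digit version does not attempt):

```cpp
#include <cstdint>
#include <cstdlib>
#include <string>

// Minimal fixed-scale decimal sketch: stores the value as an integer count
// of 1/10000 units, so text-file round trips stay exact (unlike double).
class Decimal {
public:
    static const int64_t SCALE = 10000;  // 4 fractional digits

    explicit Decimal(int64_t units = 0) : units_(units) {}

    // Parse "123.45" into 1234500 internal units.
    static Decimal fromString(const std::string& s) {
        std::string::size_type dot = s.find('.');
        int64_t whole = std::atoll(s.substr(0, dot).c_str());
        int64_t frac = 0;
        if (dot != std::string::npos) {
            std::string f = s.substr(dot + 1);
            f.resize(4, '0');                 // pad/truncate to 4 digits
            frac = std::atoll(f.c_str());
        }
        bool neg = !s.empty() && s[0] == '-';
        return Decimal(whole * SCALE + (neg ? -frac : frac));
    }

    Decimal operator+(const Decimal& o) const { return Decimal(units_ + o.units_); }
    Decimal operator-(const Decimal& o) const { return Decimal(units_ - o.units_); }

    int64_t rawUnits() const { return units_; }

private:
    int64_t units_;
};
```

Note that unlike with double, 0.1 + 0.2 is exact here, which is the whole point of a decimal type; multiplication, division, and overflow checks are where a real implementation gets involved.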

Related

Convert very large decimal numbers to binary

For example, to convert an arbitrarily long string of decimal digits to binary. I think it's possible once the length is known, processing the digits from left to right, but I'm not able to find a way to do this. How can it be achieved?
Your best bet would be to use a library which already does what you want: gmp. It does not have Fortran bindings, but it is easy enough to call from Fortran using Fortran's ISO C binding features and possibly a wrapper function.
If you have gcc installed, you already are using gmp; you may just have to install the relevant development files.
Using gmp, you would set your integer from a string value using mpz_set_str and convert it to another base using mpz_get_str.
No need to reinvent that particular wheel.
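That said, if pulling in gmp is not an option, the left-to-right idea from the question can be done by hand: repeatedly divide the digit string by 2 and collect the remainders. A sketch (function name invented, no input validation):

```cpp
#include <algorithm>
#include <string>

// Convert an arbitrarily long decimal digit string to binary by repeated
// long division of the digit string by 2, collecting remainders as bits.
std::string decimalToBinary(std::string dec) {
    if (dec == "0") return "0";
    std::string bits;
    while (!dec.empty()) {
        std::string quotient;
        int rem = 0;
        // one pass of schoolbook long division by 2, left to right
        for (std::string::size_type i = 0; i < dec.size(); ++i) {
            int cur = rem * 10 + (dec[i] - '0');
            quotient += char('0' + cur / 2);
            rem = cur % 2;
        }
        bits += char('0' + rem);              // least significant bit first
        // strip leading zeros from the quotient for the next round
        std::string::size_type nz = quotient.find_first_not_of('0');
        dec = (nz == std::string::npos) ? "" : quotient.substr(nz);
    }
    std::reverse(bits.begin(), bits.end());
    return bits;
}
```

This is O(n^2) in the number of digits, which is exactly why gmp (which uses much better algorithms) is the right answer for seriously large inputs.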

Integers greater than 4294967295 on 32-bit Windows

I'm trying to get to grips with C++ basics by building a simple arithmetic calculator application. Right now I'm trying to figure out how to make it capable of dealing with integers greater than 4294967295 on 32-bit Windows. I know that Windows' integrated Calculator is capable of this. What have I missed?
Note that this application should be compilable with both MSVC compiler and g++ (MinGW/GCC).
Thank you.
If you want to be both gcc and msvc compatible use <stdint.h>. It's source code compatible with both.
You probably want uint64_t for this. It will get you up to 18,446,744,073,709,551,615.
There are also libraries to get you integers as large as you have memory to handle.
Use __int64 to get 64-bit int calculations in Visual C++ - not sure if GCC will like this, though.
You could create a header file that typedefs (say) MyInt64 to the appropriate thing for each compiler. Then you can work internally with MyInt64, and the compiled code will be correct for each target. This is a pretty standard way of supporting different target compilers on one source codebase.
As far as I can tell, long long would work OK for both, but I have not used GCC so YMMV; see here for GCC info and here for Visual C++.
You could also create a "Large Number" class that would basically store the value across multiple variables in one form or another.
There are different solutions. If 2^64 is big enough for you, you can use a 64-bit integer type (these are implementation-dependent, so search for your particular compiler). On the other hand, if you want to be able to handle any number, you will have to use or implement a BigInteger type that encapsulates it. The implementation is an interesting exercise: basically, use a vector of a smaller type, operate on each subelement, and then merge and normalize the result.
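A sketch combining the two suggestions above, using <stdint.h> where available with a typedef fallback for old MSVC (the version check is an assumption about which compilers you target):

```cpp
#include <stdint.h>   // uint64_t: shipped by g++ and by MSVC since VS2010

// A typedef header in the spirit described above: one name, the right
// 64-bit type per compiler (older MSVC lacked <stdint.h>, hence __int64).
#if defined(_MSC_VER) && _MSC_VER < 1600
typedef unsigned __int64 MyUInt64;
#else
typedef uint64_t MyUInt64;
#endif

// Doubling past the 32-bit limit of 4294967295 stays exact in 64 bits.
MyUInt64 doubleIt(MyUInt64 v) { return v * 2; }
```

Internally the rest of the codebase only ever mentions MyUInt64, so swapping the underlying type later costs one line.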

Define C++ function at runtime

I'm trying to adjust some mathematical code I've written to allow for arbitrary functions, but I only seem to be able to do so by pre-defining them at compile time, which seems clunky. I'm currently using function pointers, but as far as I can see the same problem would arise with functors. To provide a simplistic example, for forward-difference differentiation the code used is:
double xsquared(double x) {
    return x*x;
}

double expx(double x) {
    return exp(x);  // needs #include <cmath>
}

double forward(double x, double h, double (*af)(double)) {
    double answer = (af(x+h) - af(x))/h;
    return answer;
}
Where either of the first two functions can be passed as the third argument. What I would like to do, however, is pass user input (in valid C++) rather than having to set up the functions beforehand. Any help would be greatly appreciated!
Historically the kind of functionality you're asking for has not been available in C++. The usual workaround is to embed an interpreter for a language other than C++ (Lua and Python for example are specifically designed for being integrated into C/C++ apps to allow scripting of them), or to create a new language specific to your application with your own parser, compiler, etc. However, that's changing.
Clang is a new open-source compiler, developed primarily by Apple, that leverages LLVM. Clang is designed from the ground up to be usable not only as a compiler but also as a C++ library that you can embed into your applications. I haven't tried it myself, but you should be able to do what you want with Clang: you'd link it in as a library and ask it to compile code your users input into the application.
You might try checking out how the ClamAV team already did this, so that new virus definitions can be written in C.
As for other compilers, I know that GCC recently added support for plugins. It may be possible to leverage that to bridge GCC and your app, but because GCC wasn't designed from the beginning to be used as a library, it might be more difficult. I'm not aware of any other compilers with a similar ability.
As C++ is a fully compiled language, you cannot really transform user input into code unless you write your own compiler or interpreter. But in this example, it would be possible to build a simple interpreter for a domain-specific language of mathematical formulae. It all depends on what you want to do.
You could always take the user's input, run it through your compiler, and then execute the resulting binary. This of course has security risks, as they could execute arbitrary code.
Probably easier is to devise a minimalist language that lets users define simple functions, parsing them in C++ to execute the proper code.
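As a sketch of that minimalist-language route, here is a tiny recursive-descent evaluator for formulas over one variable x (all names invented, no error handling), which can stand in for the hard-coded function pointers in the question:

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Minimal expression language: numbers, the variable 'x', + - * / and
// parentheses, parsed by recursive descent. Illustrative sketch only.
class Expr {
public:
    explicit Expr(const std::string& src) : s_(src), pos_(0), x_(0) {}

    double eval(double x) { pos_ = 0; x_ = x; return sum(); }

private:
    double sum() {                       // sum := product (('+'|'-') product)*
        double v = product();
        while (peek() == '+' || peek() == '-') {
            char op = get();
            double r = product();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
    double product() {                   // product := atom (('*'|'/') atom)*
        double v = atom();
        while (peek() == '*' || peek() == '/') {
            char op = get();
            double r = atom();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    double atom() {                      // atom := '(' sum ')' | 'x' | number
        if (peek() == '(') { get(); double v = sum(); get(); return v; }
        if (peek() == 'x') { get(); return x_; }
        std::size_t used = 0;
        double v = std::stod(s_.substr(pos_), &used);
        pos_ += used;
        return v;
    }
    char peek() { skipWs(); return pos_ < s_.size() ? s_[pos_] : '\0'; }
    char get()  { skipWs(); return s_[pos_++]; }
    void skipWs() { while (pos_ < s_.size() && std::isspace((unsigned char)s_[pos_])) ++pos_; }

    std::string s_;
    std::size_t pos_;
    double x_;
};

// Forward-difference derivative of a user-typed formula, mirroring the
// forward() function from the question.
double forwardDiff(Expr& f, double x, double h) {
    return (f.eval(x + h) - f.eval(x)) / h;
}
```

With this, the user can type "x*x" at runtime instead of you compiling an xsquared() function in advance; muParser and similar libraries are a production-quality version of the same idea.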
The best solution is to use an embedded language like lua or python for this type of task. See e.g. Selecting An Embedded Language for suggestions.
You may use tiny C compiler as library (libtcc).
It allows you to compile arbitrary code at run time and load it, but it only works for C, not C++.
Generally the only way is the following:
1. Pass the code to a compiler and create a shared object or DLL.
2. Load this shared object or DLL.
3. Use a function from this shared object.
C++, unlike some other languages like Perl, isn't capable of doing runtime interpretation of itself.
Your only option here would be to allow the user to compile small shared libraries that could be dynamically-loaded by your application at runtime.
Well, there are two things you can do:
Take full advantage of Boost/C++0x lambdas to define functions at runtime.
If only mathematical formulas are needed, libraries like muParser are designed to turn a string into bytecode, which can be seen as defining a function at runtime.
While it seems like a blow-off, there are a lot of people out there who have written equation parsers and interpreters for C++ and C, many commercial, many flawed, and all as different as faces in a crowd. One place to start is the college projects that write infix-to-postfix translators. Some of these systems use parenthetical grouping followed by putting the items on a stack like you would find in the old HP STL library. I spent 30 seconds and found this one:
http://www.speqmath.com/tutorials/expression_parser_cpp/index.html
Possible search string: "gcc 'equation parser' infix to postfix"

Matlab to C or C++

I am working on an image processing project using Matlab. We should run our program (intended to be an application) on a cell phone. We were then asked to convert our code into C or C++ so we get a feel for how long it would take to execute, and then choose a platform. So far we haven't figured out how to do this conversion. Any ideas on how to convert Matlab to C or C++?
The first thing you need to realise is that porting code from one language to another (especially languages as different as Matlab and C++) is generally non-trivial and time-consuming. You need to know both languages well, and you need to have similar facilities available in both. In the case of Matlab and C++, Matlab gives you a lot of stuff that you just won't have available in C++ without using libraries. So the first thing to do is identify which libraries you're going to need to use in C++. (You can write some of the stuff yourself, but you'll be there a long time if you write all of it yourself.)
If you're doing image processing, I highly recommend looking into something like ITK at http://www.itk.org -- I've written my image processing software twice in C++, once without ITK (coding everything myself) and once with, and the version that used ITK was finished faster, performed better and was ten times more fun to work on. FWIW.
Matlab can generate C code for you.
See:
http://www.mathworks.com/products/featured/embeddedmatlab/
The generated code does, however, depend on Matlab libraries, so you probably can't use it for a cell phone. But it might save you some time anyway.
I also used the MATLAB Coder to convert some functions consisting of a few hundred lines of MATLAB into C. This included using MATLAB's eigenvalue solver and matrix inversion functions.
Although Coder was able to produce C code (which theoretically was identical), it was very convoluted, bloated, impossible to decipher, and appeared to be extremely inefficient. It literally created about 10x as many lines of code as it should have needed. I ended up converting it all by hand so that I would actually be able to comprehend the C code later and make further changes/updates. This task, however, can be very tedious and error-prone, as array indexing in Matlab is 1-based and in C it's 0-based; you're likely to introduce bugs, as I experienced. You'll also have to convert any vector/matrix arithmetic into loops that handle scalars (or use some type of C matrix algebra package).
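As an illustration of the kind of hand translation involved (the MATLAB line and function name here are made up for the example), the elementwise product y = a .* b becomes an explicit 0-based loop in C:

```cpp
#include <stddef.h>

// Hand translation of the MATLAB line  y = a .* b  (elementwise product).
// MATLAB indexes a(1)..a(n); C indexes a[0]..a[n-1], which is exactly the
// off-by-one trap mentioned above when porting by hand.
void elementwiseProduct(const double* a, const double* b, double* y, size_t n) {
    for (size_t i = 0; i < n; ++i) {   // i runs 0..n-1, not 1..n
        y[i] = a[i] * b[i];
    }
}
```

Every vectorized MATLAB statement expands into a loop like this, which is why hand ports of matrix-heavy code get long quickly and why a C matrix library can be worth the dependency.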
The MathWorks provides a product called MATLAB Coder that claims to generate "readable and portable C and C++ code from MATLAB® code". I haven't tried it myself, so I can't comment on how well it accomplishes these goals.
With regard to the Image Processing Toolbox, this list (presumably for R2016b) shows which functions have been enabled for code generation and any limitations they may have.
Matlab has a tool called "Matlab Coder" which can convert your Matlab file to C code or a MEX file. My code is relatively simple, so it works fine; the speed-up is about 10 times. This saved me from coding a few hundred lines by hand. Hope it's helpful for you too:
Quick Start Guide for MATLAB Coder Confirmation
The links describe the process of converting your code in 3 major steps:
First, you need to make a few simplifications in your existing code so that it is simple enough for the Coder to translate.
Second, you use the tool to generate a MEX file and test whether everything actually works.
Finally, you change some settings and generate the C code. In my case, the C code has about 700 lines, including all the original Matlab code (about 150 lines) as comments. I think it's quite readable and could be improved upon. However, I already get a 10-times speed-up from the MEX file anyway, so this is definitely a good thing.
We can't be sure that this will work in every case, but it's definitely worth trying.
I remember there is a tool to export M-files as C(++) files, but I could never get it running. You need to add some obscure Matlab headers to the C/C++ code... and I think it is also not recommended.
If you have running Matlab code, it shouldn't take too much effort to do the conversion "by hand". I have worked on several projects where Matlab was used, and using tools to convert the code to C/C++ was never considered; it was always done "by hand".
I believe I was the only one who ever investigated using a tool.
Well, there is no straight conversion from Matlab to C/C++. You will need to understand both languages and their differences, and then start coding it in C/C++. Code a little, test a little, until it works.

What's your favorite way of dealing with cross-platform development? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I'm currently working on cross-platform applications and was just curious as to how other people tackle problems such as:
Endianess
Floating point support (some systems emulate in software, VERY slow)
I/O systems (i.e. display, sound, file access, networking, etc. )
And of course, the plethora of compiler differences
Obviously this is targeted at languages like c/c++ which don't abstract most of this stuff (unlike java or c#, which aren't supported on a lot of systems).
And if you were curious, the systems I'm developing on are the Nintendo DS, Wii, PS3, XBox360 and PC.
EDIT
There have been a lot of really good answers on here, ranging from how to handle the differences yourself, to library suggestions (even the suggestion of just giving in and using wine). I'm not actually looking for a solution (already have one), but was just curious as to how others tackle this situation as it is always good to see how others think/code so you can continue to evolve and grow.
Here's the way I've tackled the problem (and, if you haven't guessed from this list of systems above, I'm developing console/windows games). Please keep in mind that the systems I work on generally don't have cross-platform libraries already written for them (Sony actually recommends that you write your own rendering engine from scratch and just use their OpenGL implementation, which doesn't quite follow the standards anyway, as a reference).
Endianess
All of our assets can be custom made for each system. All of our raw data (except for textures) is stored in XML, which we convert to a system-specific binary format when the project is built. Seeing as how we are developing for game consoles, we don't need to worry about data being transferred between platforms with different endian formats (only the PC allows the users to do this; thus, it is insulated from the other systems as well).
Floating point support
Most modern systems handle floating point fine; the exception is the Nintendo DS (and GBA, but that's pretty much a dead platform for us these days). We handle this through 2 different classes. The first is a "fixed point" class (templated; you can specify what integer type to use and how many bits for the decimal value) which implements all arithmetic operators (taking care of bit shifts) and automates type conversions. The second is a "floating point" class, which is basically just a wrapper around a float; the only difference is that it also implements the shift operators. By implementing the shift operators, we can use bit shifts for fast multiplications/divisions on the DS and then seamlessly transition to platforms that work better with floats (like the XBox360).
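A minimal sketch of what such a fixed-point type might look like (the name and the 16.16 layout are illustrative; the class described above is templated and implements the full operator set):

```cpp
#include <stdint.h>

// Sketch of a 16.16 fixed-point value stored in a 32-bit integer:
// 16 bits of whole part, 16 bits of fraction, arithmetic via shifts.
class Fixed16_16 {
public:
    static Fixed16_16 fromInt(int32_t v) { return Fixed16_16(v << 16); }

    Fixed16_16 operator+(Fixed16_16 o) const { return Fixed16_16(raw_ + o.raw_); }

    Fixed16_16 operator*(Fixed16_16 o) const {
        // widen to 64 bits so the intermediate product doesn't overflow,
        // then shift back down to the 16.16 format
        int64_t p = (int64_t)raw_ * (int64_t)o.raw_;
        return Fixed16_16((int32_t)(p >> 16));
    }

    int32_t toInt() const { return raw_ >> 16; }
    int32_t raw() const { return raw_; }

private:
    explicit Fixed16_16(int32_t raw) : raw_(raw) {}
    int32_t raw_;
};
```

Because the public interface mirrors a float's operators, code written against a Real typedef (as shown later in this thread) compiles unchanged whether Real is float or a fixed-point class like this.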
I/O Systems
This is probably the trickiest problem for us, because every system has its own method for controller input, graphics (the XBox360 uses a variant of DirectX9, the PS3 has OpenGL or you can write your own from scratch, and the DS and Wii have their own proprietary systems), sound, and networking (really only the DS differs much in protocol, but then they each have their own server system that you have to use).
The way we ended up tackling this was by simply writing fairly high level wrappers for each of the systems (e.g. meshes for graphics, key mapping systems for controllers, etc.) and having all the systems use the same header files for access. It's then just a matter of writing specific cpp files for each platform (thus forming "the engine").
Compiler Differences
This is one thing that can't be tackled too easily. As we run into problems with compilers, we usually log the information on a local wiki (so others can see what to look out for and the workarounds to go with it) and, if possible, write a macro that will handle the situation for us. While it's not the most elegant solution, it works, and seeing how some compilers are simply broken in certain places, the more elegant solutions tend to break the compilers anyway. (I just wish all of the compilers implemented Microsoft's "#pragma once" directive; so much easier than wrapping everything in #ifdefs.)
A great deal of this complexity is generally solved by the third party libraries (boost being the most famous) you are using. One rarely writes everything from scratch...
For endian issues in data loaded from files,
embed a value such as 0x12345678 in the file header.
The object that loads the data looks at this value. If it matches its internal representation of the value, then the file contains native-endian values, and the load is simple from there.
If the value does not match, then the file has foreign endianness, so the loader needs to flip the values before storing them.
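A sketch of that header check (names invented; assumes the file's magic is either native or fully byte-swapped):

```cpp
#include <stdint.h>

const uint32_t kMagic = 0x12345678;  // written into every file header

// Reverse the byte order of a 32-bit value.
uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

// If the magic read back from the file doesn't match the native value,
// the file came from the other endianness and every multi-byte field
// must be passed through swap32() (or a 16-bit sibling) on load.
bool needsSwap(uint32_t magicFromFile) {
    return magicFromFile != kMagic;
}
```

The nice property of 0x12345678 is that its byte-swapped form, 0x78563412, can't be confused with the original, so one comparison settles the question for the whole file.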
I usually encapsulate system-specific calls in a single class. If you decide to port your application to a new platform, you only have to port one file...
I normally use multi-platform libraries like Boost or Qt; they solve about 95% of my problems with platform-specific code (I admit the only platforms I'm dealing with are Windows XP and Linux). For the remaining 5%, I usually encapsulate the platform-specific code in one or more classes, using the factory pattern or generic programming to reduce the #ifdef/#endif sections.
I think the other answers have done a great job of addressing all your concerns except for endianness, so I'll add something about that... it should only be a problem at your interfaces to the outside world. All your internal data processing should be done in the native endianness. When communicating via TCP/IP (or any other socket protocol), there are functions you should always use to convert your values to and from network byte order. The functions are htons() and htonl() (host-to-network short, host-to-network long) and their inverses, ntohs() and ntohl().
The only other place you should be interacting with data that has the wrong byte order is reading files from your local hard drive, so make your file loaders and writers use similar functions (perhaps you can even get away with using the network functions).
By always using these library-provided functions for dealing with endianness (use them even in code you never intend to port, unless you have a compelling reason not to; it'll make life easier later when you decide to port), you can run the code on any platform and it will "just work", regardless of the native endianness.
Usually, this kind of portability problem is left to the build system (autotools or CMake in my case), which detects the specifics of the system. Finally, I get a config.h from this build system, and then I just have to use the constants defined in this header (using #ifdef).
For example here is a config.h :
/* Define to 1 if you have the <math.h> header file. */
#define HAVE_MATH_H
/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H
/* Define to 1 if you have the <errno.h> header file. */
#define HAVE_ERRNO_H
/* Define to 1 if you have the <time.h> header file. */
#define HAVE_TIME_H
Then the code will look like this (for time.h for example) :
#ifdef HAVE_TIME_H
// you can use functions from time.h
#else
// find another solution :)
#endif
For data formats - use plain text for everything. For compiler differences, be aware of the C++ standard and make use of compiler switches such as g++ -pedantic, which will warn you of portability problems.
It depends on the kind of things you are doing. One thing which is almost always the right choice is to port the basic stuff to any target platform, and then deal with it with a common API.
For example, I do a lot of numerical computation coding, and some platforms have a lot of broken/non standard code: the way to solve it is to reimplement those functions, and then use those new functions everywhere in your code (for platforms which work, the new function just calls the old one).
But this only really works for low level stuff. For GUI, high level IO, using an already existing library is definitely a better option almost every time.
For platforms without native floating-point support, we have used our own fixed-point type and some typedefs, like this:
// native floating points
typedef float Real;
or for fixed points something like:
typedef FixedPoint_16_16 Real;
Then math functions may look like this:
Real OurMath::ourSin(const Real& value);
The actual implementation might of course be:
float OurMath::ourSin(const float& value)
{
    return sin(value);
}
// for fixed points, something more or less tricky
For things like endianness, using different functions or classes is a bit more overhead. Try using the preprocessor:
#ifdef INTEL_LINUX
// code here
#endif

#ifdef SOLARIS_POWERPC
// code here
#endif