How do I extend the 32-bit limitation in ColdFusion?

We've been using bit operations to offer users the ability to customize the sections of a report. Each section has a bit value, and the sum of all the sections that the user chooses to include is the user's preference. We can then use bit operations to determine which sections to display on the user's report.
However, ColdFusion bit operations are limited to 32-bit numbers, and the number of sections on our report may soon exceed this limitation. We'd like to develop a method to perform bit operations on 64-bit and larger numbers. Is there any built-in way to accomplish this in CF? If not, how can we go about accomplishing it manually (i.e. breaking the 64-bit number into two separate 32-bit numbers, performing the bit operations, and joining them back together)?

You could perhaps use a BigInteger instead?
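For the manual route the question sketches, here is a minimal C++ illustration of the hi/lo split (all names are illustrative; the same logic can be ported to CFML, or you could drive java.math.BigInteger from CF via createObject):

#include <cstdint>
#include <iostream>

// Split a 64-bit preference mask into two 32-bit halves, test a bit in the
// appropriate half, and reassemble. Hypothetical names, for illustration only.
struct SplitMask {
    uint32_t lo; // bits 0..31
    uint32_t hi; // bits 32..63
};

SplitMask split(uint64_t mask) {
    return { static_cast<uint32_t>(mask & 0xFFFFFFFFu),
             static_cast<uint32_t>(mask >> 32) };
}

bool hasSection(const SplitMask& m, unsigned bit) {
    return bit < 32 ? (m.lo >> bit) & 1u
                    : (m.hi >> (bit - 32)) & 1u;
}

uint64_t join(const SplitMask& m) {
    return (static_cast<uint64_t>(m.hi) << 32) | m.lo;
}

int main() {
    SplitMask prefs = split((1ULL << 40) | (1ULL << 3)); // sections 3 and 40
    std::cout << hasSection(prefs, 40) << ' ' << hasSection(prefs, 4) << '\n'; // prints "1 0"
}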

Related

Recommendable test for software integer multiplication?

I've coded a number of integer multiplication routines for Atmel's AVR architecture. I found it useful, if unconvincing, to follow a simple pattern for the multiplier (and a similar one for the multiplicand): start at zero and step by a value with a one in every byte (in addition to eventual carries).
There seems to be quite a bit written about testing hardware multiplier implementations, but:
What can be recommended for testing software implementations of integer multiplication? Exhaustive testing gets out of hand - if not at, then beyond 16×16 bits.
Most approaches use Generate & Test:
Generate test operands
The safest approach is to use all combinations of operands. But with bignums, or with limited computing power or memory, that is not possible or practical. In that case the usual choice is a set of test cases that will (most likely) stress the tested algorithm (such as propagating carries, or values near the safe limits), and only those are used. Another option is to use pseudo-random operands and hope for a valid test sample, or to combine all of these together in some way.
Compute the multiplication
Just apply the tested algorithm to the generated input data.
Assess the results
You need to compare the algorithm's (multiplication) result to something. Mostly a different algorithm for the same operation is used for comparison, or an inverse function (like c=a*b; if (a!=c/b) ...). The problem with this is that you cannot distinguish an error in the tested algorithm from an error in the reference, unless the reference is known to be 100% correct or you compare against more than one operation. There is also the possibility of a precomputed result table (computed bug-free on a different platform) to compare against.
The AVR tag makes this a tough question, so some hints:
many MCUs have a lot of Flash program memory that can be used to store precomputed values to compare against
sometimes running in an emulator is faster/more comfortable, but at the risk of missing some HW-related issues.
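As a concrete sketch of the generate-and-test scheme above, in C++ rather than AVR assembly: mul16 is a hypothetical shift-and-add routine standing in for the implementation under test, and the compiler's built-in multiply serves as the reference.

#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical routine under test: shift-and-add 16x16 -> 32-bit multiply.
uint32_t mul16(uint16_t a, uint16_t b) {
    uint32_t acc = 0, x = a;
    while (b) {
        if (b & 1) acc += x;
        x <<= 1;
        b >>= 1;
    }
    return acc;
}

int main() {
    // Edge operands that stress carry propagation and the value limits.
    std::vector<uint16_t> edges = {0, 1, 2, 0x00FF, 0x0100, 0x7FFF, 0x8000, 0xFFFF};
    for (uint16_t a : edges)
        for (uint16_t b : edges)
            if (mul16(a, b) != uint32_t(a) * b)
                std::printf("FAIL %u * %u\n", unsigned(a), unsigned(b));

    // Pseudo-random operands as a broader, non-exhaustive sample.
    std::mt19937 rng(42);
    for (int i = 0; i < 1000000; ++i) {
        uint16_t a = uint16_t(rng()), b = uint16_t(rng());
        if (mul16(a, b) != uint32_t(a) * b)
            std::printf("FAIL %u * %u\n", unsigned(a), unsigned(b));
    }
    std::puts("done");
}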

Choosing between 32 and 64 bit intrinsic CRC on Intel CPU

I need to calculate a CRC in order to form a hash function on an Intel machine, and came up with the following two intrinsic functions:
_mm_crc32_u32
_mm_crc32_u64
In my project, I am dealing with 32-bit variables, and my dilemma is between shifting and ORing each pair of variables into a 64-bit variable and then using the 64-bit CRC, or running the 32-bit CRC on each of the two 32-bit variables.
I can't find the cycle counts for these functions anywhere, and from the Intel function specifications it is unclear which one is preferable.
The same dilemma also applies to the 16-bit version of the CRC function:
_mm_crc32_u16
I tried checking it by taking the time before and after the CRC. The results were pretty much the same, so I need a more sophisticated way of measuring it.
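One way to get a more meaningful measurement is to time a long dependent chain of each variant, since hashing serially is bound by the latency of the CRC instruction; a minimal sketch (assumed to be compiled with SSE4.2 enabled, e.g. -msse4.2; loop count and values are arbitrary):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <nmmintrin.h> // SSE4.2 CRC32 intrinsics

int main() {
    constexpr int N = 100000000;
    uint32_t a = 0x12345678, b = 0x9ABCDEF0;

    // Variant 1: one 64-bit CRC step over the combined value.
    auto t0 = std::chrono::steady_clock::now();
    uint64_t crc64 = 0;
    for (int i = 0; i < N; ++i)
        crc64 = _mm_crc32_u64(crc64, (uint64_t(a) << 32) | b); // crc feeds the next step, keeping steps serialized
    auto t1 = std::chrono::steady_clock::now();

    // Variant 2: two 32-bit CRC steps per pair.
    uint32_t crc32 = 0;
    for (int i = 0; i < N; ++i) {
        crc32 = _mm_crc32_u32(crc32, a);
        crc32 = _mm_crc32_u32(crc32, b);
    }
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    // Print the CRCs so the compiler cannot discard the loops.
    std::printf("u64:   %lld ms (crc=%llx)\n", (long long)ms(t1 - t0), (unsigned long long)crc64);
    std::printf("u32x2: %lld ms (crc=%x)\n", (long long)ms(t2 - t1), (unsigned)crc32);
}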
Don't use CRC for hash values. It's not the same kind of thing.
Use MurmurHash for classic computer-science hashing needs (that is, not cryptographic-strength hashes). It also has implementations for different widths.
I don't understand what you mean: you have two 32-bit values and want a hash of that? That might be sensible or might not, depending on why. Can you clarify what you are trying to accomplish?

c++ binary data layout guaranteed by the standard

This is purely a theoretical problem, nothing I have really found myself in, but it has piqued my curiosity and I wanted to see if anyone has a better solution for it:
How do you portably guarantee that a specific file format / network protocol or whatever conforms to a specific bit pattern?
Say we have a file format that uses a 64-bit header struct, immediately followed by a variable-length array of 32-bit structures:
Header: magic : 32 bit
        count : 32 bit
Field:  id    : 16 bit
        data  : 16 bit
My first instinct would be to write something like:
struct Field
{
    uint16_t id;
    uint16_t data;
};
Except that our compiler may decide that padding is advisable, and we end up with a 64-bit structure. So our next bet is:
using Field = uint16_t[2];
and work on that.
That is, unless someone has carefully read the standard and noticed that uint16_t is optional. At this point our next best friend is uint_least16_t, which is guaranteed to be at least 16 bits wide, but for all we know could be 20 bits wide on a processor with 10-bit chars.
At this point, the only real solution I can come up with is some sort of bit stream, capable of reading and writing specific amounts of bits, and adaptable by std::numeric_limits.
So, is there someone out there who has very carefully read the standard and found the point I'm missing? Or is this the only real way of getting a portable guarantee?
Notes:
- I've just realized that endianness would probably add another layer of complexity.
- I'm using the current working draft of the ISO standard (N3797).
How do you portably guarantee that a specific file format / network protocol or whatever conforms to a specific bit pattern?
You can't. Not in C++, which was standardized against an abstract platform where little more than the existence of a "byte" made up of bits can be assumed. We can't even say for certain, looking only at the Standard, how many bits are in a char. You could use bitfields for everything, as bits are indivisible, but then you'll have padding to contend with at the least.
Sometimes it is best to give up on the idea of absolute Standards conformance for the sake of conformance, and look to other means to get the job done efficiently and effectively. In this case, platform specifics in combination with almost absolute Standards conformance (aka, good programming practices) will set you free.
Every platform I work on regularly (Linux & Windows) provides a means to regulate the padding the compiler will actually apply. For network communications, under Linux & Windows I use:
#pragma pack (push, 1)
as a preface to all the data structures I'm going to send over the wire. Endianness is indeed another challenge, but one more or less easily dealt with using other resources provided by every platform: ntohl and the like.
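For instance, a sketch combining both ideas (assuming a POSIX system for arpa/inet.h; on Windows the same functions live in winsock2.h):

#include <cstdint>
#include <arpa/inet.h> // htonl and friends

#pragma pack(push, 1) // ask the compiler not to pad wire structs (compiler-specific)
struct WireHeader {
    uint32_t magic;
    uint32_t count;
};
#pragma pack(pop)

static_assert(sizeof(WireHeader) == 8, "wire header must be exactly 8 bytes");

// Convert to network byte order before the struct hits the wire.
WireHeader make_header(uint32_t magic, uint32_t count) {
    return { htonl(magic), htonl(count) };
}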
Standards conformance is a laudable goal, and indeed in a code review I would reject most code that is non-conformant. The lack of conformance is really just a moniker for the rejection however; not the reason itself. The actual reason for the rejection is in large part difficulty in maintaining and porting non-conformant code when moving to another platform, or indeed even just upgrading the compiler on the same platform. Non-conformant code might compile and even appear to work, but it will very often fail in subtle and miserable ways when you least expect it, even after thorough testing.
The moral of the story is:
You should always write Standards-conformant code, except when you
shouldn't.
This really is just a re-imagining of Einstein's articulation of Occam's Razor:
Make everything as simple as possible, but no simpler.
If you want to ensure portability to everything standard-conforming, including platforms where CHAR_BIT isn't 8, well, you've got your work cut out for you.
If you are comfortable limiting yourself to 98% of the computers you'll ever program, I recommend writing explicit serialization for anything that has to adhere to a particular wire-format. That includes breaking integers into bytes, etc.
Write appropriate abstractions around things and the code won't be too bad. Don't put shifts and masks everywhere. Encapsulate it.
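A sketch of that kind of encapsulation for the question's format (helper names are illustrative): each integer is emitted byte by byte in a fixed little-endian order, so the output never depends on host padding or endianness.

#include <cstdint>
#include <vector>

void put_u16(std::vector<unsigned char>& out, uint16_t v) {
    out.push_back(static_cast<unsigned char>(v & 0xFF));
    out.push_back(static_cast<unsigned char>(v >> 8));
}

void put_u32(std::vector<unsigned char>& out, uint32_t v) {
    put_u16(out, static_cast<uint16_t>(v & 0xFFFF));
    put_u16(out, static_cast<uint16_t>(v >> 16));
}

// Serialize the header and one field record from the question.
void write_header(std::vector<unsigned char>& out, uint32_t magic, uint32_t count) {
    put_u32(out, magic);
    put_u32(out, count);
}

void write_field(std::vector<unsigned char>& out, uint16_t id, uint16_t data) {
    put_u16(out, id);
    put_u16(out, data);
}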
I would use network types and network byte orders. See http://www.beej.us/guide/bgnet/output/html/multipage/htonsman.html. The example uses uint16_t. You can write the values a field at a time to prevent padding.
Or, if you want to read and write the entire structure at once, see this question: C++ struct alignment question.
Make the structure easy for the program to use.
Provide input methods that extract data from the input and write to the data members. This removes the issue of padding, alignment boundaries and endianness. Similarly with output.
For example, if your input data is 16-bits wide, but your platform is 32-bits wide, declare the structure using 32-bit fields. Copy the 16 bits from the input into the 32-bit fields.
Most programs read into a structure fewer times than they access the data members. Your program is not reading the input 100% of the time.
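A sketch of that approach (assuming 16-bit big-endian wire fields; names are illustrative): the host-side structure uses 32-bit members sized for the platform, and the input method does the extraction, so padding, alignment, and endianness are all handled in one place.

#include <cstdint>

struct Record {
    uint32_t id;   // holds a 16-bit wire field
    uint32_t data; // holds a 16-bit wire field
};

// Extract two 16-bit big-endian wire fields into the wider host members.
Record read_record(const unsigned char* wire) {
    Record r;
    r.id   = (uint32_t(wire[0]) << 8) | wire[1];
    r.data = (uint32_t(wire[2]) << 8) | wire[3];
    return r;
}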

Opinions on my data storage problem (database/homebrew solution)

I have very simply structured data which is currently stored in a home-brew file format, but I am wondering whether we should migrate to something more modern. The data is simply a table of doubles, indexed by a double column. The things I need to perform are:
Iterating through the table.
Insertion and deletion of arbitrary records.
Selecting a given number of rows before and after a given key value (where the key might not be in the database).
The requirements are:
The storage must be file-based without a server.
It should not be necessary to read the whole file into memory.
The resulting file should be portable between different architectures (wrt endian-ness...)
Must be a very stable project (the data is highly critical).
Must run on Solaris/SPARC and preferably also on Linux/x64.
Access times should be as fast as possible.
Must be available as a C++ library. Bonus points for Fortran and Python bindings :)
Optional higher precision number representation than double precision would be a bonus.
Relatively compact storage size would also be a bonus.
From my limited experience, SQLite would be an interesting choice, or perhaps MySQL in embedded (non-server) mode if SQLite is not fast enough. But perhaps a full-fledged SQL database is overkill?
What do you suggest?
SQLite meets nearly all of your requirements, and it's not that hard to use. Give it a try!
It's file-based, and the entire database is a single file.
It does not need to read the entire file into memory. Database size might be limited; you should check whether SQLite's documented limits will be a problem in your situation.
The format is cross-platform:
SQLite databases are portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures.
It's been around for a long time and is used in many places, and is generally considered mature and stable.
It's very portable and runs on Solaris/SPARC and Linux/x64.
It's faster than MySQL (grains of salt present behind that link, though) or other such database servers, because only one client needs to be taken into account.
There is a C++ API and a Python binding and a Fortran wrapper.
There is no arbitrary-precision column type, but NUMERIC will be silently converted to text if it cannot be exactly represented:
For conversions between TEXT and REAL storage classes, SQLite considers the conversion to be lossless and reversible if the first 15 significant decimal digits of the number are preserved. If the lossless conversion of TEXT to INTEGER or REAL is not possible then the value is stored using the TEXT storage class.
As for compact storage of the database, I'm not sure. But I've never heard any claims that SQLite is particularly wasteful.
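The before/after-key selection maps naturally onto SQL. A minimal sketch with the sqlite3 C API (table and column names are illustrative; link with -lsqlite3), fetching up to n rows at or below a key:

#include <cstdio>
#include <sqlite3.h>

// Assumes a table created as: CREATE TABLE samples(k REAL PRIMARY KEY, v REAL);
void rows_at_or_before(sqlite3* db, double key, int n) {
    sqlite3_stmt* stmt = nullptr;
    const char* sql = "SELECT k, v FROM samples WHERE k <= ?1 "
                      "ORDER BY k DESC LIMIT ?2;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) return;
    sqlite3_bind_double(stmt, 1, key);
    sqlite3_bind_int(stmt, 2, n);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%f -> %f\n", sqlite3_column_double(stmt, 0),
                    sqlite3_column_double(stmt, 1));
    sqlite3_finalize(stmt);
}

// The rows after the key are symmetric: WHERE k > ?1 ORDER BY k ASC LIMIT ?2.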

Bit manipulations/operations calculator?

I am getting into bit manipulation in C/C++.
Is there a good calculator or program (executable or online) that makes it relatively convenient to study and test bit procedures?
I can do the same work in Visual Studio or Eclipse, but a relatively small program is easier and more convenient.
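Failing a dedicated tool, a few lines of C++ with std::bitset also make a serviceable scratchpad for studying bitwise operations (a minimal sketch; C++14 for the binary literals):

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    uint16_t a = 0b1010'1100, b = 0b0101'0110;
    std::cout << "a      = " << std::bitset<16>(a) << '\n'
              << "b      = " << std::bitset<16>(b) << '\n'
              << "a & b  = " << std::bitset<16>(a & b) << '\n'
              << "a | b  = " << std::bitset<16>(a | b) << '\n'
              << "a ^ b  = " << std::bitset<16>(a ^ b) << '\n'
              << "a << 2 = " << std::bitset<16>(a << 2) << '\n';
}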
I use bincalc by Ding Zhaojie. It conveniently lets you see what's going on in hex and binary (and decimal if you care), and has a nice gadget for inputting binary or flipping bits.
http://sites.google.com/site/bincalc/
Free, but no source.
The Windows 7 calculator (calc) has a 'Programmer' view (Alt + 3). It has most of the operations needed for bit manipulation.
bitwisecmd.com supports conversion between decimal, binary, and hexadecimal numbers, as well as basic bitwise operations such as AND, OR, XOR, NOT, left shift, and right shift. It neatly arranges the bits of binary numbers so you can see how this stuff works.
I use a good online tool that supports all the bitwise operations, so you don't need to install anything: http://www.convertforfree.com/bitwise-calculator/
This tool saved me a lot of time and effort.
Use this tool. It has a free online truth table generator as well as a calculator for bitwise operations.