I'm wondering how to set a 64-bit variable in C++ for my testbench. When I use variable_name.io_int_write(0, value0) and variable_name.io_int_write(1, value1) (for the lower and upper bits), I can see the variables are set, but in the reverse order.
Ex: When I want it to be 000...002, I see 200...000
Is there an alternate command that would help? Thanks
Edit: I have a function void set_function (set_ * dut),
and inside this function I need to set a 64-bit variable dut->variable_name.
Thanks for your answers; how would I go about fixing the endianness in this case?
If you want to set a 64bit integer variable to a constant value, you can simply say something like:
long long var64bit = 1LL;
The LL at the end of 1LL marks it as a 64bit (long long) constant with the value 1; a plain L only makes it a long, which may be just 32 bits.
If you are comfortable with hexadecimal and want a 64bit bit-field constant, then the usual way of setting a 64bit variable would look something like this:
unsigned long long bitfield64bit = 0xDEADBEEFDEADBEEFULL;
Again, note the trailing ULL (unsigned long long).
If you have two 32bit values that you want to pack into a 64bit value, then you can do this using shift and logical-OR operations as:
long long var64bit = (long long)(((unsigned long long)high_bits << 32) | (unsigned long long)low_bits);
where *high_bits* is the 32bit variable holding the high bits and *low_bits* is the 32bit variable containing the low bits. I do the bit-wise operations on unsigned values because of possibly excess paranoia regarding the sign bit and conversion between 32bit and 64bit integers.
The opposite operation, splitting a 64bit variable into high and low bits, can be done as follows:
int low_bits = (int)(var64bit & 0xFFFFFFFFLL);
int high_bits = (int)((var64bit >> 32) & 0xFFFFFFFFLL);
Related
I am trying to extract the lower 25 bits of uint64_t to uint32_t. This solution shows how to extract lower 16 bits from uint32_t, but I am not able to figure out for uint64_t. Any help would be appreciated.
See How do you set, clear, and toggle a single bit? for bit operations.
To answer your question:
uint64_t lower25Bits = inputValue & (uint64_t)0x1FFFFFF;
Just mask with a mask that leaves just the bits you care about.
uint32_t out = input & ((1UL<<25)-1);
The idea here is: 1UL<<25 provides an (unsigned long, which is guaranteed to be at least 32-bit wide) integer with just bit 25 set, i.e.
00000010000000000000000000000000
the -1 makes it become a value with all the bits below it set, i.e.:
00000001111111111111111111111111
the AND "lets through" only the bits that correspond to ones in the mask.
Another way is to throw away those bits with a double shift:
uint32_t out = (((uint32_t)input)<<7)>>7;
The cast to uint32_t makes sure we are dealing with a 32-bit wide unsigned integer; the unsigned part is important to get well-defined results with shifts (and bitwise operations in general), the 32 bit-wide part because we need a type with known size for this trick to work.
Let's say that (uint32_t)input is
11111111111111111111111111111111
we left shift it by 32-25=7; this throws away the top 7 bits
11111111111111111111111110000000
and we right-shift it back in place:
00000001111111111111111111111111
and there we go, we got just the bottom 25 bits.
Notice that the first uint32_t cast wouldn't be strictly necessary because you already have a known-size unsigned value; you could just do (input<<39)>>39, but (1) I prefer to be sure - what if tomorrow input becomes a type with another size/signedness? and (2) in general current CPUs are more efficient working with 32 bit integers than 64 bit integers.
I need to accurately convert a long representing bits to a double, and my solution shall be portable to different architectures (being standard across compilers such as g++ and clang++ would be great too).
I'm writing a fast approximation for computing the exp function, as suggested in this question's answers.
double fast_exp(double val)
{
    double result = 0;
    unsigned long temp = (unsigned long)(1512775 * val + 1072632447);
    /* to convert from long bits to double,
       but must check if they have the same size... */
    temp = temp << 32;
    memcpy(&result, &temp, sizeof(temp));
    return result;
}
and I'm using the suggestion found here to convert the long into a double. The issue I'm facing is that whereas I got the following results for int values in [-5, 5] under OS X with clang++ and libc++:
0.00675211846828461
0.0183005779981613
0.0504353642463684
0.132078289985657
0.37483024597168
0.971007823944092
2.7694206237793
7.30961990356445
20.3215942382812
54.8094177246094
147.902587890625
I always get 0 under Ubuntu with clang++ (3.4, same version) and libstdc++. The compiler there even tells me (through a warning) that the shifting operation can be problematic, since the long has size equal to or less than the shift amount (indicating that long and double probably do not have the same size there).
Am I doing something wrong, and/or is there a better way to solve the problem that is as portable as possible?
First off, using "long" isn't portable. Use the fixed-width integer types found in stdint.h (cstdint in C++). This removes the need to check for the same size, since you'll know exactly what size the integer is.
The reason you are getting a warning is that left shifting a 32-bit integer by 32 bits is undefined behavior. See What's bad about shifting a 32-bit variable 32 bits?
Also see this answer: Is it safe to assume sizeof(double) >= sizeof(void*)? It should be safe to assume that a double is 64 bits, and then you can use a uint64_t to store the raw bits. No need to check for sizes, and everything is portable.
I'm working on a relatively simple problem based around adding all the primes under a certain value together. I've written a program that should accomplish this task. I am using long type variables. As I get up into higher numbers (~200-300k), the variable I am using to track the sum becomes negative, despite the fact that no negative values are being added to it (based on my knowledge and some testing I've done). Is there some issue with the data type, or am I missing something?
My code is below (in C++) [Vector is basically a dynamic array in case people are wondering]:
bool checkPrime(int number, vector<long> & primes, int numberOfPrimes) {
    for (int i = 0; i < numberOfPrimes - 1; i++) {
        if (number % primes[i] == 0) return false;
    }
    return true;
}

long solveProblem10(int maxNumber) {
    long sumOfPrimes = 0;
    vector<long> primes;
    primes.resize(1);
    int numberOfPrimes = 0;
    for (int i = 2; i < maxNumber; i++) {
        if (checkPrime(i, primes, numberOfPrimes)) {
            sumOfPrimes = sumOfPrimes + i;
            primes[numberOfPrimes] = long(i);
            numberOfPrimes++;
            primes.resize(numberOfPrimes + 1);
        }
    }
    return sumOfPrimes;
}
Integers are represented using two's complement, which means that the highest-order bit represents the sign. When the sum gets high enough, that bit gets set (an integer overflow) and the number becomes negative.
You can resolve this by using an unsigned long (often still 32-bit, which may overflow anyway for the values you're summing) or an unsigned long long (which is at least 64 bits).
the variable I am using to track the sum becomes negative despite the fact that no negative values are being added to it (based on my knowledge and some testing I've done)
longs are signed integers. In C++ and other lower-level languages, integer types have a fixed size. When you add past their maximum they overflow and wrap around to negative numbers, due to how two's complement representation works.
For the valid ranges of the integer types, see: Variables. Data Types.
You're using a signed long, which is usually 32 bits, giving a range of roughly −2.1 to +2.1 billion. You can either use an unsigned long, which gives roughly 0 to 4.3 billion, or a 64-bit (un)signed long long.
If you need values bigger than 2^64 − 1 (the unsigned long long maximum), you will need to use bignum math.
long is probably only 32 bits on your system - use uint64_t for the sum - this gives you a guaranteed 64 bit unsigned integer.
#include <cstdint>
uint64_t sumOfPrimes=0;
You can include header <cstdint> and use type std::uintmax_t instead of long.
I'm trying to convert a string to an int with stringstream. The code below works, but if I use a number larger than 1234567890, like 12345678901, it returns 0. I don't know how to fix that; please help me out.
std:: string number= "1234567890";
int Result;//number which will contain the result
std::stringstream convert(number); // stringstream used for the conversion, initialized with the contents of number
if ( !(convert >> Result) )//give the value to Result using the characters in the string
Result = 0;
printf ("%d\n", Result);
The maximum number an int can contain is slightly more than 2 billion (assuming ubiquitous 32 bit ints).
It just doesn't fit in an int!
The largest signed int (on a 32-bit platform) is 2^31 − 1 (2147483647), and your input is larger than that, so the extraction fails. You can detect this through the stream's state, e.g. by checking failbit:
int Result;
std::stringstream convert(number.c_str());
convert >> Result;
if(convert.fail()) {
std::cout << "Bad things happened";
}
If you're on a 32-bit or LP64 64-bit system then int is 32-bit so the largest number you can store is approximately 2 billion. Try using a long or long long instead, and change "%d" to "%ld" or "%lld" appropriately.
The (usual) maximum value for a signed int is 2,147,483,647, as it is (usually) a 32bit integer, so it fails for numbers which are bigger.
If you replace int Result; by long long Result; it will work for much bigger numbers, but there is still a limit. You can extend that limit by a factor of 2 by using unsigned integer types, but only if you don't need negative numbers.
Hm, lots of disinformation in the existing four or five answers.
An int is minimum 16 bits, and with common desktop system compilers it's usually 32 bits (in all Windows versions) or 64 bits. With 32 bits it has at most 2^32 distinct values, which, setting K = 2^10 = 1024, is 4·K^3, i.e. roughly 4 billion. Your nearest calculator or Python prompt can tell you the exact value.
A long is minimum 32 bits, but that doesn’t help you for the current problem, because in all extant Windows variants, including 64-bit Windows, long is 32 bits…
So, for better range than int, use long long. It's minimum 64 bits, and in practice, as of 2012, it's 64 bits with all compilers. Or, just use a double, which, although not an integer type, with the most common implementation (IEEE 754 64-bit) can represent integer values exactly up to 53 bits of precision, i.e. every integer up to 2^53.
Anyway, remember to check the stream for conversion failure, which you can do by s.fail() or simply !s (which is equivalent to fail(), more precisely, the stream’s explicit conversion to bool returns !fail()).
Say I have a binary protocol, where the first 4 bits represent a numeric value which can be less than or equal to 10 (ten in decimal).
In C++, the smallest data type available to me is char, which is 8 bits long. So, within my application, I can hold the value represented by 4 bits in a char variable. My question is: if I have to pack the char's value back into 4 bits for network transmission, how do I do that?
You do bitwise operations on the char;
so
unsigned char packedvalue = 0;
packedvalue |= 0xF0 & (7 << 4);
packedvalue |= 0x0F & 10;
This sets the upper 4 bits to 7 and the lower 4 bits to 10.
Unpacking them again:
int upper, lower;
upper = (packedvalue & 0xF0) >> 4;
lower = packedvalue & 0x0F;
As an extra answer to the question -- you may also want to look at protocol buffers for a way of encoding and decoding data for binary transfers.
Sure, just use one char for your value:
std::ofstream outfile("thefile.bin", std::ios::binary);
unsigned int n = 10; // at most 10!
char c = (char)(n << 4); // the value goes into the upper nibble
outfile.write(&c, 1); // we wrote the value 10
The lower 4 bits will be left at zero. If they're also used for something, you'll have to populate c fully before writing it. To read:
infile.read(&c, 1);
unsigned int n = (unsigned char)c >> 4; // the cast avoids sign extension in the shift
Well, there's the popular but non-portable "Bit Fields". They're standard-compliant, but their packing order is implementation-defined, so it may differ between platforms. So don't use them for wire formats.
Then, there are the highly portable bit shifting and bitwise AND and OR operators, which you should prefer. Essentially, you work on a larger field (usually 32 bits, for TCP/IP protocols) and extract or replace subsequences of bits. See Martin's link and Soren's answer for those.
Are you familiar with C's bitfields? You simply write
struct my_bits {
unsigned v1 : 4;
...
};
Be warned, various operations are slower on bitfields because the compiler must unpack them for things like addition. I'd imagine unpacking a bitfield will still be faster than the addition operation itself, even though it requires multiple instructions, but it's still overhead. Bitwise operations should remain quite fast. Equality too.
You must also take care with endianness and threads (see the wikipedia article I linked for details, but the issues are kinda obvious). You should learn about endianness anyway, since you said "binary protocol" (see this previous question).