How to generate a set of enumerations with an even number of ones [closed] - c++

I have an idea for a relatively easy way to prevent a single bit flip (due to cosmic radiation or a similar externally induced event) from changing an enumeration (enum) from one defined value to another defined value. To put it simply, each value should have an even number of ones, binary speaking. If one bit flips, the value's parity becomes odd and it is guaranteed not to match any other enum value.
I'm not sure how to actually "generate" such a sequence so that it can be used as enum values, since those values must be compile-time constants. A macro function returning the n:th element in the set would do perfectly.
The first few numbers in the sequence would be 0 (000), 3 (011), 5 (101), 6 (110). I think you get the idea by now.
Non-enumeration (non-compile-time) answers are appreciated as well, since they may help me realize how to do it myself.
To make myself clear: I want a macro generating the n:th number with an even number of ones in its bit pattern, similar to the macros that generate the Fibonacci sequence. The lowest bit is essentially a parity bit.
Most of my memory is protected by hardware ECC, the L1 cache being one exception. A single bit error in L1 has been measured to occur once every 10,000 h, which is good enough for my requirements.
VRAM, however, is not protected. There I have mostly RGB(A) rasters, a few general-purpose rasters (like a stencil) and some geometry. RGB raster data is rather insensitive to bit flips, as it is only used for visualization. Erroneous geometry is in general very visible, very rare (a few KB) and is by design resolved by a user-induced reboot.
For a 4096x4096x8-bit stencil (~16 MB), the single bit error rate in my environment is about once every 8 h under average cosmic radiation, more often during solar storms. That is actually not that bad in my opinion, but I'd hate filling in the paperwork proving to my officers why this is perfectly fine in my application and in everyone else's that uses the stencil data, regardless of how the data is used. With a parity bit in the stencil value, however, I'd be able to detect most errors and, if necessary, re-generate the stencil hoping for a better result. The stencil can be generated in less than a second, so the risk of errors occurring twice in a row is considered low.
So basically, by generating a set of enumerations with parity I dodge the bullet of current and future paperwork and research regarding how bit flips may affect my app and others'.

If you simply want to know whether the enum is valid or a bit has flipped, you can use any values plus a parity bit that makes the total number of set bits even (the first sequence below is identical to your example):
0000000 0 = 0
0000001 1 = 3
0000010 1 = 5
0000011 0 = 6
0000100 1 = 9
0000101 0 = 10
0000110 0 = 12
0000111 1 = 15
which can be done by:

int encode_enum(int e) {
    // Append a parity bit so the total number of set bits is even.
    // number_of_bits_set counts the 1-bits in e (a popcount).
    return (e << 1) + (number_of_bits_set(e) % 2);
}
However, if you want to be able to restore the value, then a simple approach is duplication: keep multiple copies of the value that can later be compared to each other. You'd need 3 copies to restore it by majority vote. If your list of values is small, you can encode all copies into one integer.
int encode_enum(int e) {
    // Store three copies of e in bit ranges 0-9, 10-19 and 20-29.
    return (e << 20) | (e << 10) | e;
}
If e is less than 2^10, it is simply copied 3 times into a single 32-bit integer.
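To recover from a possible single bit flip, compare the three copies and take the majority; a minimal sketch along those lines (restore_enum is my name, not from the answer above):

#include <cstdio>

// Majority vote over the three 10-bit copies: any two matching
// copies win, so a single bit flip in one copy is corrected.
int restore_enum(int stored)
{
    int const mask = (1 << 10) - 1;
    int a = stored & mask;          // copy 0
    int b = (stored >> 10) & mask;  // copy 1
    int c = (stored >> 20) & mask;  // copy 2
    if (a == b || a == c) return a;
    if (b == c) return b;
    return a; // all three differ: more than a single bit flipped
}

int main()
{
    int stored = (42 << 20) | (42 << 10) | 42;
    stored ^= 1 << 13; // simulate a bit flip inside copy 1
    std::printf("%d\n", restore_enum(stored)); // prints 42
}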

C++14's constexpr solves this for you:
#include <iostream>

// Count the set bits in val.
constexpr int bit_count(int val)
{
    int count = 0;
    for (int i = 0; i < 31; ++i) {
        if (val & (1 << i))
            ++count;
    }
    return count;
}

// Find the next value after `last` with an even number of set bits.
constexpr int next_bitset(int last)
{
    int candidate = last + 1;
    if (bit_count(candidate) & 1)
        return next_bitset(candidate);
    return candidate;
}
enum values
{
    a,
    b = next_bitset(a),
    c = next_bitset(b),
    d = next_bitset(c)
};
int main()
{
    std::cout << "a = " << a << std::endl;
    std::cout << "b = " << b << std::endl;
    std::cout << "c = " << c << std::endl;
    std::cout << "d = " << d << std::endl;
}
expected output:
a = 0
b = 3
c = 5
d = 6
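If you want the n:th element directly, as the question asks, note that the n:th even-parity number is just n shifted left one bit with a parity bit appended. So a single C++14 constexpr function gives random access; a minimal sketch (nth_even_parity is my name, not part of the answer above):

// n:th even-parity number: append a parity bit to n so that the
// total number of set bits is even (0, 3, 5, 6, 9, 10, ...).
constexpr int nth_even_parity(int n)
{
    int parity = 0;
    for (int v = n; v != 0; v >>= 1)
        parity ^= (v & 1);
    return (n << 1) | parity;
}

static_assert(nth_even_parity(0) == 0, "");
static_assert(nth_even_parity(1) == 3, "");
static_assert(nth_even_parity(2) == 5, "");
static_assert(nth_even_parity(3) == 6, "");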

Related

How secure/effective is this very short random number generator? [closed]

I was experimenting with making my own random number generator and was surprised how easy it was to generate random numbers by doing something like this.
#include <iostream>
using namespace std;

int main(){
    unsigned int number = 1;
    for ( unsigned int i = 0; i < 0xFFFF ; i++ ){
        unsigned int * data[0xFFFF];
        number = number << 1;
        number = number ^ (unsigned int)&data[i];
    }
    cout << number << endl;
    while (1);
}
My question is: how effective is this? I mean, it seems to generate pretty random numbers, but how easy would it be to figure out what the next number is going to be?
The addresses of the data items are monotonically increasing (and in practice they'll be the same in each iteration of the loop). They're used as a one-time entropy source. Since they're monotonically increasing, they're not a very good source of entropy.
In effect, for 32-bit code your code is equivalent to this:
auto main() -> int
{
    unsigned number = 1;
    unsigned const entropy = 123456; // Whatever.
    for ( unsigned i = 0; i < 0xFFFF ; ++i )
    {
        number = number << 1;
        number = number ^ (entropy + 4*i);
    }
}
Regarding "how easy would it be to figure out what the next number is going to be": as I see it, that's not quite the right question for a pseudo-random number generator, but still, it's very easy.
Given two successive pseudo-random numbers A and B, computing (A << 1) ^ B yields X = entropy + 4*i. Now you can compute (B << 1) ^ (X + 4), and that's your next pseudo-random number C.
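To see that concretely, here is a small self-contained model of the generator (my sketch; entropy stands in for the stack address) that verifies the prediction at every step:

#include <cassert>

// Model of the generator: number = (number << 1) ^ (entropy + 4*i).
// Given consecutive outputs A and B, X = (A << 1) ^ B recovers
// entropy + 4*i, and the next output is (B << 1) ^ (X + 4).
int main()
{
    unsigned const entropy = 123456; // stand-in for the stack address
    unsigned number = 1;
    unsigned a = 0, b = 0;           // the two most recent outputs
    for (unsigned i = 0; i < 16; ++i) {
        unsigned next = (number << 1) ^ (entropy + 4 * i);
        if (i >= 2) {
            unsigned x = (a << 1) ^ b;               // = entropy + 4*(i-1)
            unsigned predicted = (b << 1) ^ (x + 4); // forecast of `next`
            assert(predicted == next);
        }
        a = b;
        b = next;
        number = next;
    }
}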
As I recall, pseudo-random number generators are discussed in volume 2 of Donald Knuth's The Art of Computer Programming.
That discussion includes consideration of statistical measures of goodness.
It's not random at all, not even pseudo-random. It has no state, and all the input bits are discarded. Basically you're pulling some junk off the stack and manipulating it a bit; in many contexts it will always give the same result.

Finding the square of a number without multiplication [duplicate]

This question already has answers here: Making a square() function without x*x in C++ (7 answers)
I'm a beginner in programming, trying to learn C++ from the book Programming: Principles and Practice Using C++. In some parts of the book there are little exercises that you can try to do; one of these exercises is about calculating the square of a number. Here is what my book says:
Implement square() without using the multiply operator; that is, do the x * x by repeated addition (start a variable result at 0 and add x to it x times).
I've already found a solution for this program, but my first attempt was something like this:
#include <iostream>

int main()
{
    int a = 0;
    std::cout << "Enter an integer value : ";
    std::cin >> a;
    while (a < a * a)
    {
        a += a;
        std::cout << a << "\n";
    }
}
I know this code is wrong, but I can't understand the output of the program. If I enter 5, the program prints 10, 20, 30, 40, 50 and so on until 8000. Why doesn't the loop stop when a is greater than its square? I'm just curious to understand why.
Using multiplication when trying to avoid multiplication seems broken. What about this:
int square(int a) {
    int r = 0;
    for (int n = 0; n < a; ++n) {
        r += a; // add a to the result a times
    }
    return r;
}
"Why doesn't the loop stop when a is greater than its square?"
Because it never is. If you compare the graph of y=x^2 against the graph of y=x, you will see that the only time y=x is above is when 0 < x < 1. That's never the case for integers.[1] Now, since we're talking about computers with limited storage here, there is a thing called overflow, which will cause a very large number to become a very small number. However, signed integer overflow is undefined behavior in C++. So once your loop gets to the point where overflow would happen, you cannot rely on the results.
[1] Note that your loop is not set to stop just when a is greater than its square, but when it is greater than or equal to its square. So your loop will actually stop if a is 0 or 1.
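To see concretely why a < a * a keeps holding, here is a small trace of the doubling loop (my sketch, using long long so the squares stay representable for a few more steps):

#include <iostream>

int main()
{
    long long a = 5;
    // a doubles each iteration, but a*a quadruples,
    // so a < a*a remains true as the values grow.
    for (int step = 0; step < 6; ++step) {
        std::cout << "a = " << a << ", a*a = " << a * a << '\n';
        a += a;
    }
}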

Manual memory management in C++ [closed]

Summary of the code: it stores over 16 million uint8_t values in a 3D array, as pointers to those uint8_t values.
The code works, but why is it that I only saved 4 KB by using uint8_t as opposed to int? Run with ints, the same code uses 330,488 K; with uint8_t it uses 330,484 K. I know most of that is the pointers, but shouldn't decreasing the size of each of the 16 million values from 2 bytes (assuming each int used the minimum space) to 1 byte have saved more than 4 K??? I'm thinking it should have saved closer to 16 MB, right?
By "Run the same code with ints" I literally do a "find and replace: uint8_t with int" Then recompile.
uint8_t**** num3d;
num3d = new uint8_t***[256];
for (int i = 0; i < 256; i++) {
    num3d[i] = new uint8_t**[256];
    for (int j = 0; j < 256; j++) {
        num3d[i][j] = new uint8_t*[256];
    }
}
// Initialize
uint8_t *B;
for (int lx = 0; lx < 256; lx++) {
    for (int ly = 0; ly < 256; ly++) {
        for (int lz = 0; lz < 256; lz++) {
            if (ly == 0 || lx == 0 || lz == 0 || ly == 255 || lx == 255 || lz == 255) {
                B = new uint8_t(2);
                num3d[lx][ly][lz] = B;
                continue;
            }
            if (ly < 60) {
                B = new uint8_t(1);
                num3d[lx][ly][lz] = B;
                continue;
            }
            B = new uint8_t(0);
            num3d[lx][ly][lz] = B;
        } // inner inner loop
    } // inner loop
} // outer loop
Answer to question 1: this loop goes on forever:

for (uint8_t i = 0; i < 256; i++)

Indeed, the range of numbers representable by a uint8_t is 0...255, so i < 256 is always true. Don't use uint8_t here!
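A minimal illustration of the wraparound and the usual fix (my sketch, not from the original answer):

#include <cstdint>
#include <iostream>

int main()
{
    // uint8_t wraps from 255 back to 0, so `i < 256` is always true:
    //   for (uint8_t i = 0; i < 256; i++)  // never terminates
    // A counter wider than 8 bits fixes it:
    for (int i = 0; i < 256; i++) {
        // ...
    }
    std::cout << "done\n";
}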
It seems to me that since your program keeps allocating in this loop, it will end up eating all memory; therefore question 2 doesn't really make sense.
" My question is what is it about int that allows it to work using full 32 bit ints and how would I replicate what the program already does with ints for use with 8 bit ints. I know they must have included memory management into normal ints that isn't included with uint8_t."
Well, int is at least 16 bits; 32 bits isn't even guaranteed. But ignoring that, the fact is that each integral type has a certain range. std::numeric_limits<int> or std::numeric_limits<uint8_t> will tell you the respective ranges. Obviously you can't use an 8-bit number to count from 0 to 256. You can only count to 255.
Also, there's no memory management at all for int and other simple types like uint8_t. The compiler just says "the integer with name Foo is stored in these bytes" and that's it. No management needed. There are a few minor variations, e.g. an int member of a struct is stored "in these bytes of the struct", etcetera.

Need your input Project Euler Q 8

Is there a better way of doing this?
http://projecteuler.net/problem=8
I added a condition to check if the digits are > 6 (eliminates small products and 0s).
#include <iostream>
#include <math.h>
#include "bada.h"
using namespace std;

int main()
{
    int badanum[] { DATA };
    int pro = 0, highest = 0;
    for (int i = 0; i <= 996; ++i)
    {
        if (badanum[i] > 6 and badanum[i+1] > 6 and badanum[i+2] > 6 and badanum[i+3] > 6 and badanum[i+4] > 6)
        {
            pro = badanum[i] * badanum[i+1] * badanum[i+2] * badanum[i+3] * badanum[i+4];
            if (pro > highest)
            {
                cout << pro << " " << badanum[i] << badanum[i+1] << badanum[i+2] << badanum[i+3] << badanum[i+4] << endl;
                highest = pro;
            }
            pro = 0;
        }
    }
}
bada.h is just a file containing the 1000-digit number:

#define DATA <1000 digit number>
That if actually slows things down: it causes branching, which disrupts the parallel pipelining of CPU execution. Also, as mentioned before, it will invalidate the result. It does not matter that your solution happens to be the same as the correct one here (for other digits it might not be).
On the algorithmic side you can do this: if you have fast enough division, you can lower the number of computations.
char a[]="7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450\0";
int i = 0, s = 0, m = 1, q;
// Prime the window with the first 4 digits (zeros are skipped).
for (i = 0; i < 4; i++)
{
    q = a[i] - '0'; if (q) m *= q;
}
// Slide: multiply in the incoming digit, divide out the outgoing one.
for (i = 0; i < 996; i++)
{
    q = a[i+4] - '0'; if (q) m *= q;
    if (s < m) s = m;
    q = a[i] - '0'; if (q) m /= q;
}
You can also use tables for the mul/div operations for speed (but that is not faster in all cases):
int mul_5digits[9*9*9*9*9+1][10] = { 0*0, 0*1, 0*2, ... , 9*9*9*9*9/9 };
int div_5digits[9*9*9*9*9+1][10] = { 0/0, 0/1, 0/2, ... , 9*9*9*9*9/9 };
// so a = b*c; is rewritten as a = mul_5digits[b][c];
// so a = b/c; is rewritten as a = div_5digits[b][c];
Of course, instead of the value 0*0 you have to store the neutral value 1 !!!
And instead of the values i/0 you have to store the neutral value i !!!
int i = 0, s = 0, t = 1;
for (i = 0; i < 4; i++)
{
    t = mul_5digits[t][a[i] - '0'];
}
for (i = 0; i < 996; i++)
{
    t = mul_5digits[t][a[i+4] - '0'];
    if (s < t) s = t;
    t = div_5digits[t][a[i] - '0'];
}
Run-time measurements on AMD 3.2 GHz, 64-bit Win7, 32-bit app, BDS2006 C++:
0.022 ms classic approach
0.013 ms single mul/div per step (produces false output if no product > 0 is present)
0.054 ms tabled single mul/div per step (slower on my setup)
PS.
All code improvements should be measured so you can see whether you actually sped things up or not, because what is faster on one compiler/platform/computer can be slower on another.
Use at least 0.1 ms resolution.
I prefer the use of RDTSC or PerformanceCounter for that.
Apart from the errors pointed out in the comments, that many multiplications aren't necessary. If you start with the product [0] * [1] * [2] * [3] * [4] for index 0, what would be the product starting at [1]? The old result divided by [0] and multiplied by [5]. One division and one multiplication could be faster than 4 multiplications.
You don't need to store all the digits at once, just the current five of them (use an array with cyclic overwriting), one variable to store the current best result and one to store the latest multiplication result (see below). If the number of digits in the input grows, you won't run into any trouble with memory.
Also, check whether the oldest digit read equals zero. If it does, you really will have to multiply all five current digits; if not, a better way is to divide the previous multiplication result by the oldest digit and multiply it by the latest digit read.
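A sketch of that idea, with a shortened digits string standing in for the full 1000-digit number (it tracks a zero count instead of dividing by zero, so windows containing zeros are skipped safely):

#include <iostream>
#include <string>

int main()
{
    // Stand-in for the full 1000-digit number.
    std::string digits = "7316717653133062491922511967442657474235534919493496983520";

    const std::size_t W = 5;   // window width
    long long best = 0;
    long long product = 1;     // product of the non-zero digits in the window
    int zeros = 0;             // number of zeros currently in the window

    for (std::size_t i = 0; i < digits.size(); ++i) {
        int in = digits[i] - '0';
        if (in == 0) ++zeros; else product *= in;

        if (i >= W) {          // drop the digit leaving the window
            int out = digits[i - W] - '0';
            if (out == 0) --zeros; else product /= out;
        }
        if (i + 1 >= W && zeros == 0 && product > best)
            best = product;
    }
    std::cout << best << '\n';
}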

Conditional large flat array traversal and surprisingly short loop execution time [closed]

I am in need of an explanation for something I have discovered by experience. I have a very large flat array of type char, in total 500x500x500 = 125,000,000 bytes long. In the cells I keep numbers between 0 and 255. Luckily, when traversing the array I am only interested in cells with a non-zero value!
Now here is the question. I found out by experimenting that doing even the smallest operation on the cells takes a huge amount of time when going through the whole array, zero and non-zero cells alike, whereas if I use a condition similar to the one below,
while( index < 125000000 )
{
    if( array[ index ] > 0 )
    {
        // Do some stuff
    }
    index++;
}
the execution time is drastically shorter. In fact I can go through the whole array and perform my operations on the non-zero cells in a few seconds, rather than the half-hour execution time of the approach with no condition.
What I need is an explanation of why this works! I need to explain this phenomenon in my thesis report, and it would be best if I could refer to a scientific paper or something similar.
Thank you in advance!
Best Regards,
Omid Ariyan
It could be that you expect your char to be unsigned, hence capable of holding values in the range [0,255], but it is in fact signed, holding values in the range [-128, 127] (assuming two's complement). So the number of cases where array[ index ] > 0 is much smaller than you expect, because all elements assigned values larger than 127 will have a negative value.
Note that you claim to check for non-zero values, but you are really checking for positive ones.
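If non-zero cells really are what you want, make the comparison explicit and use unsigned char, so the values 128..255 are included as well — a minimal sketch (the function name and signature are mine):

#include <cstddef>

// Visits every non-zero cell; with unsigned char the values
// 128..255 also compare as non-zero, unlike with a signed char
// and the `> 0` test.
void process(unsigned char* array, std::size_t size)
{
    for (std::size_t index = 0; index < size; ++index) {
        if (array[index] != 0) {
            // Do some stuff
        }
    }
}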
You can check the range of char on your platform:
#include <limits>
#include <iostream>

int main()
{
    std::cout << static_cast<int>(std::numeric_limits<char>::min()) << std::endl;
    std::cout << static_cast<int>(std::numeric_limits<char>::max()) << std::endl;
    char c = 234;
    std::cout << static_cast<int>(c) << std::endl; // 234 if unsigned, -22 if signed
}