Manual memory management in C++ [closed]

Summary of the code: it stores over 16 million uint8_t values in a 3D array, as pointers to those uint8_t.
The code works, but why did I only save 4 KB by using uint8_t instead of int? Running the same code with int it uses 330,488 K; with uint8_t it uses 330,484 K. I know most of that is the pointers, but shouldn't shrinking each of the 16 million values from 2 bytes (assuming each int used the minimum space) to 1 byte have saved more than 4 K? I'm thinking it should have saved closer to 16 MB, right?
By "run the same code with ints" I literally mean a find-and-replace of uint8_t with int, then recompile.
uint8_t ****num3d;
num3d = new uint8_t***[256];
for (int i = 0; i < 256; i++) {
    num3d[i] = new uint8_t**[256];
    for (int j = 0; j < 256; j++) {
        num3d[i][j] = new uint8_t*[256];
    }
}

// Initialize
uint8_t *B;
for (int lx = 0; lx < 256; lx++) {
    for (int ly = 0; ly < 256; ly++) {
        for (int lz = 0; lz < 256; lz++) {
            if (ly == 0 || lx == 0 || lz == 0 || ly == 255 || lx == 255 || lz == 255) {
                B = new uint8_t(2);
                num3d[lx][ly][lz] = B;
                continue;
            }
            if (ly < 60) {
                B = new uint8_t(1);
                num3d[lx][ly][lz] = B;
                continue;
            }
            B = new uint8_t(0);
            num3d[lx][ly][lz] = B;
        } // inner inner loop
    } // inner loop
} // outer loop

Answer to question 1)... This loop goes on forever:
for (uint8_t i = 0; i < 256; i++)
Indeed, the range of numbers representable by a uint8_t is 0...255, so i < 256 is always true: i wraps around to 0 instead of ever reaching 256. Don't use uint8_t as the loop counter here!
It seems to me that, since your program keeps allocating inside this loop, it will end up eating all memory, so question 2) doesn't really make sense.
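A minimal sketch of the wrap-around and one possible fix (assuming the find-and-replace also turned the loop counters into uint8_t, as the quoted loop suggests):

#include <cstdint>

int main() {
    // With a uint8_t counter the comparison i < 256 is always true: after 255,
    // i wraps back to 0 instead of reaching 256, so the loop never ends.
    //     for (uint8_t i = 0; i < 256; i++) { ... }
    //
    // Keeping the counter as a plain int (any type wider than 8 bits) terminates normally:
    for (int i = 0; i < 256; i++) {
        // i runs 0..255, then the loop exits
    }
    return 0;
}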

" My question is what is it about int that allows it to work using full 32 bit ints and how would I replicate what the program already does with ints for use with 8 bit ints. I know they must have included memory management into normal ints that isn't included with uint8_t."
Well, int is at least 16 bits; 32 bits isn't even guaranteed. But ignoring that, the fact is that each integral type has a certain range. std::numeric_limits<int> or std::numeric_limits<uint8_t> will tell you the respective ranges. Obviously you can't use an 8-bit number to count from 0 to 256; you can only count to 255.
Also, there's no memory management at all for int and other simple types like uint8_t. The compiler just says "the integer with name Foo is stored in these bytes" and that's it. No management needed. There are a few minor variations, e.g. an int member of a struct is stored "in these bytes of the struct", et cetera.
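A small sketch of how to query those ranges with std::numeric_limits (the int limits printed depend on the platform; the unary + is just to make uint8_t print as a number rather than a character):

#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    std::cout << "int:     " << std::numeric_limits<int>::min() << " .. "
              << std::numeric_limits<int>::max() << '\n';
    std::cout << "uint8_t: " << +std::numeric_limits<uint8_t>::min() << " .. "
              << +std::numeric_limits<uint8_t>::max() << '\n';   // prints 0 .. 255
}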

Related

Allocating memory of a variable with the size of a 2D vector [closed]

Here's my code. I have a 2D vector, and I need to take its row and column sizes into a new variable known as visited, but it's throwing an error:
int main() {
    vector<vector<char>> a;
    int n = a.size();
    int m = a[0].size();
    bool vertices = new bool [n][m];   // this line triggers the error below
}
Getting this error:
Line 5: Char 33: error: array size is not a constant expression
    bool vertices = new bool [n][m];
                              ^
Line 5: Char 33: note: read of non-const variable 'm' is not allowed in a constant expression
Line 4: Char 9: note: declared here
    int m = a[0].size();
        ^
1 error generated.
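The array-new expression needs compile-time-constant bounds, and its result would be a pointer rather than a bool anyway. A common fix, sketched here rather than taken from the post, is a run-time-sized nested std::vector:

#include <vector>
using namespace std;

int main() {
    vector<vector<char>> a;                   // assumed to be filled elsewhere
    int n = a.size();
    int m = a.empty() ? 0 : a[0].size();      // guard: a[0] on an empty vector is undefined
    // n x m grid of flags, all false to begin with
    vector<vector<bool>> visited(n, vector<bool>(m, false));
    (void)visited;
}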
Unable to resolve
The original code tries to determine the number of hits required by summing the widths of the three blocks (w = w1 + w2 + w3;) and repeatedly dividing that by the strength S until the remaining width w becomes zero. If the strength S is 1 (and therefore w1, w2 and w3 are all 1 and their sum is 3), that will loop forever, causing the time limit for the code to be exceeded.
Also, it is not clear how the problem could be solved by division. Rather, the problem as stated involves subtraction, not division.
Since there are only three bricks in the stack (and S is at least the width of the largest brick), there are only three cases to consider:
if (s >= w1 + w2 + w3)
    hits = 1;
else if (s >= w1 + w2 || s >= w3 + w2)
    hits = 2;
else
    hits = 3;
A general solution to handle an arbitrarily sized pile of bricks is out of scope for the problem, so does not need to be considered.
When s == 1, the statement w = w/s; won't modify w. Your program will loop forever because the condition to terminate the loop is w == 0.
This explains the "time limit exceeded" that you reported.
Note that w = w/s; won't give you the right answer anyway. Have you understood the question? Read it again.
A side remark: you should check that the entered values respect the constraints, and reject the input if they don't.
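A compact sketch combining the three-case logic above with the suggested input check; the exact constraints (positive widths, s at least the widest brick) are assumptions, since the original problem statement isn't quoted here:

#include <algorithm>
#include <iostream>

int main() {
    int w1, w2, w3, s;
    if (!(std::cin >> w1 >> w2 >> w3 >> s)) return 1;

    // Assumed constraints: widths positive, s at least the widest brick.
    if (w1 <= 0 || w2 <= 0 || w3 <= 0 || s < std::max({w1, w2, w3})) {
        std::cerr << "invalid input\n";
        return 1;
    }

    int hits;
    if (s >= w1 + w2 + w3)
        hits = 1;
    else if (s >= w1 + w2 || s >= w2 + w3)
        hits = 2;
    else
        hits = 3;

    std::cout << hits << '\n';
}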

How to generate set of enumerations with even number of ones [closed]

I have an idea of how to prevent a single bit flip (due to cosmic radiation or a similar externally induced event) from changing an enumeration (enum) from one defined value to another defined value in a relatively easy way. To put it simply, each value should have an even number of ones, binary speaking. If one bit flips, the value's bit count becomes odd and is guaranteed not to match any other enum value.
I'm not sure how to actually "generate" such a sequence so that it may be used as enum values, as those values must be compile-time constants. A macro function returning the n:th element in the set would do perfectly.
The first few numbers in the sequence would be 0 (000), 3 (011), 5 (101), 6 (110). I think you get the idea by now.
Non-enumeration (non-compile time) answers are appreciated as it may help me realize how to do it myself.
To make myself clear: I want a macro generating the n:th number in an enum with an even number of ones in the bit pattern, similar to macros generating the Fibonacci sequence. The lowest bit is essentially a parity bit.
Most of my memory is protected by hardware ECC, L1 cache being one exception. A single bit error in L1 has been measured to occur once every 10,000 h, which is good enough for my requirements.
VRAM, however, is not protected. There I have mostly RGB(A) rasters, a few general-purpose rasters (like a stencil) and some geometry. An RGB raster is rather insensitive to bit flips as it is only used for visualization. Erroneous geometry is in general very visible, very rare (a few KB) and is by design resolved by a user-induced reboot.
For a 4096x4096x8-bit stencil (~16 MB) the single bit error rate in my environment is about once every 8 h under average cosmic radiation, more often during solar storms. That is actually not that bad in my opinion, but I'd hate filling in the paperwork proving to my officers why this is perfectly fine in my application and in everyone else's that uses the stencil data, regardless of how the data is used. With a parity bit in the stencil value, however, I'd be able to detect most errors and, if necessary, re-generate the stencil hoping for better results. The stencil can be generated in less than a second, so the risk of errors occurring twice in a row is considered low.
So basically, by generating a set of enumerations with parity I dodge the bullet of current and future paperwork and research regarding how it may affect my app and others'.
If you simply want to know whether the enum value is valid or a bit has flipped, you can use any values plus a parity bit that makes the total number of set bits even (this first sequence is identical to your example):
value     parity   encoded
0000000   0        = 0
0000001   1        = 3
0000010   1        = 5
0000011   0        = 6
0000100   1        = 9
0000101   0        = 10
0000110   0        = 12
0000111   1        = 15
which can be done by
int encode_enum(int e) {
    // number_of_bits_set is a popcount, e.g. __builtin_popcount or std::popcount (C++20).
    return (e << 1) + (number_of_bits_set(e) % 2);
}
However, if you want to be able to restore the value, then a simple way is duplication: keep multiple copies of the value that can later be compared to each other. You'd need 3 copies to restore it. If your values are small enough, you can pack all the copies into one integer.
int encode_enum(int e) {
    return (e << 20) | (e << 10) | e;
}
If e is less than 2^10, this simply stores three copies of it in a single 32-bit integer.
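A sketch of the matching decode step, which is not part of the original answer: a bitwise majority vote across the three copies, assuming at most one copy was corrupted:

int decode_enum(int packed) {
    int a = packed & 0x3FF;           // copy 0, bits 0..9
    int b = (packed >> 10) & 0x3FF;   // copy 1, bits 10..19
    int c = (packed >> 20) & 0x3FF;   // copy 2, bits 20..29
    // A bit of the result is 1 only if it is 1 in at least two of the copies,
    // so a single flipped bit in any one copy is voted out.
    return (a & b) | (a & c) | (b & c);
}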
c++14's constexpr solves this for you:
#include <iostream>

constexpr int bit_count(int val)
{
    int count = 0;
    for (int i = 0; i < 31; ++i) {
        if (val & (1 << i))
            ++count;
    }
    return count;
}

constexpr int next_bitset(int last)
{
    int candidate = last + 1;
    if (bit_count(candidate) & 1)
        return next_bitset(candidate);
    return candidate;
}

enum values
{
    a,
    b = next_bitset(a),
    c = next_bitset(b),
    d = next_bitset(c)
};

int main()
{
    std::cout << "a = " << a << std::endl;
    std::cout << "b = " << b << std::endl;
    std::cout << "c = " << c << std::endl;
    std::cout << "d = " << d << std::endl;
}
expected output:
a = 0
b = 3
c = 5
d = 6

How secure/effective is this very short random number generator? [closed]

I was experimenting with making my own random number generator and was surprised how easy it was to generate random numbers by doing something like this.
#include <iostream>
using namespace std;

int main(){
    unsigned int number = 1;
    for ( unsigned int i = 0; i < 0xFFFF ; i++ ){
        unsigned int * data[0xFFFF];
        number = number << 1;
        number = number ^ (unsigned int)&data[i];
    }
    cout << number << endl;
    while (1);
}
My question is, how effective is this, I mean, it seems to generate pretty random numbers, but how easy would it be to figure out what the next number is going to be?
The addresses of the data items are monotonically increasing (and, in practice, the array's base address will be the same in each iteration). They're used as a one-time entropy source. Since they're monotonically increasing, they're not a very good source of entropy.
In effect, for 32-bit code your code is equivalent to this:
auto main() -> int
{
    unsigned number = 1;
    unsigned const entropy = 123456;    // Whatever.
    for ( unsigned i = 0; i < 0xFFFF ; ++i )
    {
        number = number << 1;
        number = number ^ (entropy + 4*i);
    }
}
Regarding
"How easy would it be to figure out what the next number is going to be?"
as I see it that's not quite the right question for a pseudo-random number generator, but still: it's very easy.
Given two successive pseudo-random numbers A and B, computing (A << 1) ^ B yields X = entropy + 4*i. Now you can compute (B << 1) ^ (X + 4), and that's your next pseudo-random number C.
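A small sketch that checks this on the "equivalent" generator above (the entropy constant is arbitrary, and the generator is made to emit a value every iteration so two successive outputs are visible):

#include <iostream>

int main() {
    unsigned entropy = 123456;   // stand-in for the stack address
    unsigned number = 1;
    unsigned out[3];
    for (unsigned i = 0; i < 3; ++i) {
        number = number << 1;
        number = number ^ (entropy + 4 * i);
        out[i] = number;                        // emit each iteration's value
    }

    unsigned A = out[0], B = out[1], C = out[2];
    unsigned X = (A << 1) ^ B;                  // recover entropy + 4*i from two outputs
    unsigned predicted = (B << 1) ^ (X + 4);    // predict the next output
    std::cout << std::boolalpha << (predicted == C) << '\n';   // prints true
}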
As I recall, pseudo-random number generators are discussed in volume 2 (Seminumerical Algorithms) of Donald Knuth's The Art of Computer Programming.
That discussion includes consideration of statistical measures of goodness.
It's not random at all, not even pseudo-random.
It has no state, and all the input bits are discarded.
Basically you're pulling some junk off the stack and manipulating it a bit.
In many contexts it will always give the same result.

My algorithm is correct, but some results are wrong? (C++) [closed]

So I took this practice lesson called 'permMissingElement' on www.codility.com, and here is my code:
#include <iostream>
#include <vector>
using namespace std;

int solution(vector<int> &A)
{
    int N = A.size();
    double Nf = double(N);
    int n;
    double missingf;
    int missing;
    double sumN1, sumN2;
    int sum = 0;
    double sumf;
    double two = 2.0;
    for (n = 0; n < N; n++)
    {
        sum = sum + A[n];
    }
    sumf = double(sum);
    sumN1 = (Nf + double(1.0)) / double(2.0);
    sumN2 = sumN1 * (Nf + double(2.0));
    missingf = sumN2 - sumf;
    missing = int(missingf);
    return missing;
}
Now, here is the problem. My algorithm works, but for some reason I cannot figure out, it gives wrong answers for "large" values of N. (N can be at most 100,000 in the lesson.)
Anyway, I initially wrote the program using all ints, and then realized maybe I was getting an overflow because of that, so I changed them to doubles, but I still get wrong answers on large values of N... why is that?
Thanks
Doubles are floating-point numbers, so they can lose precision for large values. More importantly, your sum is still accumulated in an int (int sum = 0;), which overflows long before it is converted to double. Instead of int, use a wider integral type like long or long long.
The C++ standard doesn't specify an exact size for each type, but an int is at least 16 bits, a long is at least 32 bits and a long long at least 64 bits.
Check this SO question for more pointers: size of int, long, etc
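A sketch of the same summation approach done entirely in 64-bit integers; the accumulator is where the original overflows once the sum passes INT_MAX:

#include <vector>

int solution(std::vector<int> &A) {
    long long N = static_cast<long long>(A.size());
    long long sum = 0;                             // 64-bit accumulator, no overflow for N <= 100,000
    for (int x : A)
        sum += x;
    long long expected = (N + 1) * (N + 2) / 2;    // sum of 1..N+1
    return static_cast<int>(expected - sum);
}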
Interesting approach you have there. I solved the same problem in PHP using the following algorithm: interpret each array value as an index, then negate the value at that index. The position of the first value > 0 gives the missing element, and it still runs in O(n).
That way you avoid the overflow from adding big numbers, as you do not have to add anything.
error_reporting(0);
$s = count($A);
for ($i = 0; $i < $s; $i++)
{
    $n = abs($A[$i]) - 1;
    $A[$n] *= (-1);
}
for ($i = 0; $i < $s; $i++)
{
    if ($A[$i] > 0)
        return $i + 1;
}
return $s + 1;
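For readers of the C++ question, a hypothetical rendering of the same marking idea (not from the original answer); the extra sentinel slot avoids the out-of-bounds write that PHP silently tolerates under error_reporting(0):

#include <cstdlib>
#include <vector>

int solution(std::vector<int> &A) {
    int n = static_cast<int>(A.size());
    A.push_back(1);                          // sentinel so indices 0..n are all valid
    for (int i = 0; i < n; ++i) {
        int idx = std::abs(A[i]) - 1;        // the value v marks slot v-1
        A[idx] = -std::abs(A[idx]);          // mark as seen by making it negative
    }
    for (int i = 0; i <= n; ++i)
        if (A[i] > 0)                        // unmarked slot: the value i+1 is missing
            return i + 1;
    return n + 1;                            // not reached for valid input
}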

Conditional large flat array traversal and surprisingly short loop execution time [closed]

I am in need of an explanation for something I have discovered by experience. I have a very large flat array of type char. The array is in total 500x500x500 = 125E+6 bytes long. Inside the cells I keep a number between 0 and 255. But luckily when traversing the array I am only interested in cells having a non-zero value!
Now here is the question. I found out by experimenting that doing even the smallest operation on the cells takes a huge amount of time when going through the whole array, zero and non-zero cells alike, whereas if I use a condition similar to the one below,
while( index < 125000000 )
{
    if( array[ index ] > 0 )
    {
        // Do some stuff
    }
    index++;
}
The execution time is drastically shorter. In fact I can go through the whole array and perform my operations on the non-zero cells in a few seconds rather than the half an hour execution time of the approach with no conditions.
What I need is an explanation of why this works! I need to explain this phenomenon in my thesis report, and it would be best if I could relate it to a scientific paper or similar.
Thank you in advance!
Best Regards,
Omid Ariyan
It could be that you expect your char to be unsigned, hence capable of holding values in the range [0,255], but it is in fact signed, holding values in the range [-128, 127] (assuming two's complement). So the number of cases where array[ index ] > 0 is much smaller than you expect, because all elements assigned values larger than 127 will have a negative value.
Note that you claim to check for non-zero values, but you are really checking for positive ones.
You can check the range of char on your platform:
#include <limits>
#include <iostream>

int main()
{
    std::cout << static_cast<int>(std::numeric_limits<char>::min()) << std::endl;
    std::cout << static_cast<int>(std::numeric_limits<char>::max()) << std::endl;

    char c = 234;
    std::cout << static_cast<int>(c) << std::endl;   // 234 if unsigned, -22 if signed
}