Extra zeros in OpenCV Mat - c++

I tried building a cv::Mat from a 2D array, but I find that extra zeros are added to the Mat, which I am not able to understand. The code I tried is:
int a2D[7][7];
for (loop condition)
{
    a2D[x][y] = value;
    cout << "Value :" << value << endl;
}
Mat outmat = Mat(7, 7, CV_8UC1, &a2D);
cout << "Mat2D : "<< outmat << endl;
Output is:
Value : 22
Value : 179
Value : 145
Value : 170
Value : 251
Value : 250
Value : 171
Value : 134
Value : 218
Value : 178
Value : 6
... up to 49 values.
Mat2D : [ 22, 0, 0, 0, 179, 0, 0;
0, 145, 0, 0, 0, 170, 0;
0, 0, 251, 0, 0, 0, 250;
0, 0, 0, 171, 0, 0, 0;
134, 0, 0, 0, 218, 0, 0;
0, 178, 0, 0, 0, 6, 0;
0, 0, 72, 0, 0, 0, 25]
As seen in the Mat2D output, three zeros are added after every value. Why, and how?

You are using an int buffer to initialize a cv::Mat with unsigned char elements; that explains why the values show up only at every fourth element (int is apparently 4 times larger than unsigned char on your machine).
Changing the type of a2D to unsigned char should fix the issue.
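A minimal sketch of what that looks like, assuming the loop simply fills the array row by row (the fill values here are placeholders, not the original data):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    unsigned char a2D[7][7]; // element size now matches CV_8UC1
    for (int x = 0; x < 7; ++x)
        for (int y = 0; y < 7; ++y)
            a2D[x][y] = static_cast<unsigned char>(x * 7 + y); // placeholder values

    // Note: Mat does not copy the buffer here, so a2D must outlive outmat.
    cv::Mat outmat(7, 7, CV_8UC1, a2D);
    std::cout << "Mat2D : " << outmat << std::endl;
    return 0;
}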

The assignment a2D[x][y] = value is wrong if a2D is declared as a one-dimensional int[49]; you would be writing outside the array, and the zeros you see are uninitialized garbage.
In that case you have to access a2D with a single index, for example a2D[i] = value.

Related

Writing to File in Python, Reading From File with Arduino

I'm currently using Python 2.7 to pull pixel information from a series of bitmaps and write 32 bits of information per pixel to a file (with an arbitrary extension, ".bfs", to make it easy to find down the pipeline): 8 bits for position x, 8 bits for position y, and 16 bits for color.
from PIL import Image
import struct

filename = raw_input('Please choose destination filename: ')
file_in = [0]*27
im = [0]*27
for i in range(1,27):
    file_in[i] = str(i)+".bmp"
    im[i] = Image.open(file_in[i])
file_out = open(filename+".bfs", 'w')
readable_out = open(filename+".txt", 'w')
for q in range(1,27):
    pix = im[q].load()
    width, height = im[q].size
    for y in range(height):
        for x in range(width):
            rgb = pix[x,y]
            red = rgb[0]
            green = rgb[1]
            blue = rgb[2]
            Uint16_val = (((31*(red+4))/255)<<11) | (((63*(green+2))/255)<<5) | ((31*(blue+4))/255)
            hex_16 = int('%.4x'%Uint16_val, 16)
            print(str(x)+", "+str(y)+", "+str(hex_16)+"\n")
            readable_out.write(str(x)+", "+str(y)+", "+str(hex_16)+"\n")
            file_out.write(struct.pack('<1B', x))
            file_out.write(struct.pack('<1B', y))
            file_out.write(struct.pack('<1H', hex_16))
On the PC side everything comes out cleanly, as I expect (this is copied from a .txt file that I output and formatted to make it easier to read):
0, 0, 40208
1, 0, 33544
2, 0, 33544
3, 0, 39952
4, 0, 39944
5, 0, 33544
6, 0, 39688
7, 0, 39952
8, 0, 39944
9, 0, 33544
10, 0, 33800
11, 0, 39952
12, 0, 39952
13, 0, 33544
14, 0, 33800
15, 0, 48400
From here I'm taking the .bfs file and loading it onto an SD card for an Arduino Uno to read from. The Arduino code is supposed to read from the SD card, and output the x, y, and color values to a TFT LCD. Here is the Arduino Code:
#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <SD.h>

#define TFT_CS  10 // Chip select line for TFT display
#define TFT_RST  9 // Reset line for TFT (or see below...)
#define TFT_DC   8 // Data/command line for TFT
#define SD_CS    4 // Chip select line for SD card

Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);

void setup(void) {
  Serial.begin(9600);
  tft.initR(INITR_144GREENTAB);
  Serial.print("Initializing SD card...");
  if (!SD.begin(SD_CS)) {
    Serial.println("failed!");
    return;
  }
  Serial.println("OK!");
  tft.fillScreen(0x0000);
}

uint32_t pos = 0;
uint8_t x,y;
uint8_t buffpix[3];
uint16_t c;

void loop() {
  bfsDraw("image.bfs");
}

#define BUFFPIXEL 20

void bfsDraw(char *filename) {
  File bfsFile;
  int w, h, row, col;
  uint8_t x,y;
  uint16_t c;
  uint32_t pos = 0, startTime = millis();
  if((0 >= tft.width()) || (0 >= tft.height())) return;
  if ((bfsFile = SD.open(filename)) == NULL) {
    Serial.print("File not found");
    return;
  }
  w = 128;
  h = 128;
  tft.setAddrWindow(0, 0, 0+w-1, 0+h-1);
  for (row=0; row<h; row++) {
    for (col=0; col<w; col++) {
      x = bfsFile.read();
      Serial.print(x);
      Serial.print(", ");
      y = bfsFile.read();
      Serial.print(y);
      Serial.print(", ");
      c = read16(bfsFile);
      Serial.print(c);
      Serial.print(" ");
      Serial.println(" ");
      tft.drawPixel(x,y,c);
    }
  }
}

uint8_t read8(File f) {
  uint16_t result;
  ((uint8_t *)&result)[0] = f.read();
  return result;
}

uint16_t read16(File f) {
  uint16_t result;
  ((uint8_t *)&result)[0] = f.read();
  ((uint8_t *)&result)[1] = f.read();
  return result;
}
I have some print statements around the code that reads from the card before sending data out to the TFT, and instead of matching the file that (I think) I wrote, it outputs this:
0, 0, 40208
1, 0, 33544
2, 0, 33544
3, 0, 39952
4, 0, 39944
5, 0, 33544
6, 0, 39688
7, 0, 39952
8, 0, 39944
9, 0, 33544
13, 10, 2048
132, 11, 4096
156, 12, 4096
As you can see, the reading from the Arduino starts out matching the writing of the Python script, but after 9 the "X" byte has shifted into the middle instead of the leading position. My question is: what is causing this shift after x = 9? Is this a little-endian versus big-endian issue?
Thanks for your help!
You opened your file in text mode, not binary mode. On Windows, that means that every newline character (byte value 10) that you write gets converted into carriage return + linefeed (byte values 13, 10). Use 'wb' for the mode when opening the .bfs file.
Note that writing the coordinates of each pixel into the file is insane - you're doubling the size of the file for absolutely no benefit. You can easily recreate the coordinates as you're reading the file - in fact you're ALREADY DOING SO, in the form of the row and col variables!
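A minimal sketch of that change in the Python script (only the open mode needs to differ; the .txt file can stay in text mode since it is meant to be human-readable):

file_out = open(filename+".bfs", 'wb')   # binary mode: byte value 10 is written as-is, no CR/LF translation
readable_out = open(filename+".txt", 'w')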

c++ casting to byte (uint8_t) during subtraction won't force underflow like I expect; output is int16_t; why?

Note that byte is an 8-bit type (uint8_t) and unsigned int is a 16-bit type (uint16_t).
The following doesn't produce the results that I expect. I expect it to underflow and the result to always be a uint8_t, but it becomes a signed int (int16_t) instead!!! Why?
Focus in on the following line of code in particular: (byte)seconds - tStart
I expect its output to ALWAYS be an unsigned 8-bit value (uint8_t), but it is instead outputting a signed 16-bit value: int16_t.
How do I get the result of the subtraction to always be a uint8_t type?
while (true)
{
    static byte tStart = 0;
    static unsigned int seconds = 0;
    seconds++;
    //Print output from microcontroller
    typeNum((byte)seconds); typeString(", "); typeNum(tStart); typeString(", ");
    typeNum((byte)seconds - tStart); typeString("\n");
    if ((byte)seconds - tStart >= (byte)15)
    {
        typeString("TRUE!\n");
        tStart = seconds; //update
    }
}
Sample output:
Column 1 is (byte)seconds, Column 2 is tStart, Column 3 is Column 1 minus Column 2 ((byte)seconds - tStart)
Notice that Column 3 becomes negative (an int16_t) once Column 1 overflows from 255 to 0. I expect (and want) it to remain a positive (unsigned) 8-bit value by underflowing instead.
196, 195, 1
197, 195, 2
198, 195, 3
199, 195, 4
200, 195, 5
201, 195, 6
202, 195, 7
203, 195, 8
204, 195, 9
205, 195, 10
206, 195, 11
207, 195, 12
208, 195, 13
209, 195, 14
210, 195, 15
TRUE!
211, 210, 1
212, 210, 2
213, 210, 3
214, 210, 4
215, 210, 5
216, 210, 6
217, 210, 7
218, 210, 8
219, 210, 9
220, 210, 10
221, 210, 11
222, 210, 12
223, 210, 13
224, 210, 14
225, 210, 15
TRUE!
226, 225, 1
227, 225, 2
228, 225, 3
229, 225, 4
230, 225, 5
231, 225, 6
232, 225, 7
233, 225, 8
234, 225, 9
235, 225, 10
236, 225, 11
237, 225, 12
238, 225, 13
239, 225, 14
240, 225, 15
TRUE!
241, 240, 1
242, 240, 2
243, 240, 3
244, 240, 4
245, 240, 5
246, 240, 6
247, 240, 7
248, 240, 8
249, 240, 9
250, 240, 10
251, 240, 11
252, 240, 12
253, 240, 13
254, 240, 14
255, 240, 15
TRUE!
0, 255, -255
1, 255, -254
2, 255, -253
3, 255, -252
4, 255, -251
5, 255, -250
6, 255, -249
7, 255, -248
8, 255, -247
9, 255, -246
10, 255, -245
11, 255, -244
12, 255, -243
13, 255, -242
14, 255, -241
15, 255, -240
16, 255, -239
17, 255, -238
18, 255, -237
19, 255, -236
20, 255, -235
21, 255, -234
22, 255, -233
23, 255, -232
24, 255, -231
25, 255, -230
26, 255, -229
27, 255, -228
28, 255, -227
29, 255, -226
30, 255, -225
31, 255, -224
32, 255, -223
33, 255, -222
34, 255, -221
35, 255, -220
Here is the typeNum function from above:
//--------------------------------------------------------------------------------------------
//typeNum (overloaded)
//-see AVRLibC int to string functions: http://www.nongnu.org/avr-libc/user-manual/group__avr__stdlib.html
//--------------------------------------------------------------------------------------------
//UNSIGNED:
void typeNum(uint8_t myNum)
{
    char buffer[4]; //3 for the number (up to 2^8 - 1, or 255 max), plus 1 char for the null terminator
    utoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
void typeNum(uint16_t myNum)
{
    char buffer[6]; //5 for the number (up to 2^16 - 1, or 65535 max), plus 1 char for the null terminator
    utoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
void typeNum(uint32_t myNum)
{
    char buffer[11]; //10 chars for the number (up to 2^32 - 1, or 4294967295 max), plus 1 char for the null terminator
    ultoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
//SIGNED:
void typeNum(int8_t myNum)
{
    char buffer[5]; //4 for the number (down to -128), plus 1 char for the null terminator
    itoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
void typeNum(int16_t myNum)
{
    char buffer[7]; //6 for the number (down to -32768), plus 1 char for the null terminator
    itoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
void typeNum(int32_t myNum)
{
    char buffer[12]; //11 chars for the number (down to -2147483648), plus 1 char for the null terminator
    ltoa(myNum, buffer, 10); //base 10 number system
    typeString(buffer);
}
So I figured it out:
The answer is very simple, but the understanding behind it is not.
Answer:
(How to fix it):
Instead of using (byte)seconds - tStart, use (byte)((byte)seconds - tStart). That's it! Problem solved! All you need to do is cast the output of the mathematical operation (a subtraction in this case) to a byte as well, and it's fixed! Otherwise it returns as a signed int, which produces the errant behavior.
So, why does this happen?
Answer:
In C, C++, and C#, there is no such thing as a mathematical operation directly on a byte! The integer promotion rules mean that all byte operands are first implicitly converted (promoted) to an int before operators like + and - are applied, the mathematical operation is then conducted on ints, and when it is completed, it returns an int too!
So, this code (byte)seconds - tStart is implicitly cast (promoted in this case) by the compiler as follows: (int)(byte)seconds - (int)tStart...and it returns an int too. Confusing, eh? I certainly thought so!
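Applied to the original loop, the fix looks like this (a sketch that keeps the original typeNum/typeString helpers; the elapsed variable is just a name introduced here for clarity):

    // Cast the RESULT of the subtraction back to byte, not just the operands:
    byte elapsed = (byte)((byte)seconds - tStart); // wraps modulo 256, as desired
    typeNum((byte)seconds); typeString(", "); typeNum(tStart); typeString(", ");
    typeNum(elapsed); typeString("\n");
    if (elapsed >= 15)
    {
        typeString("TRUE!\n");
        tStart = seconds; //update
    }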
Here's some more reading on the matter:
(the more asterisks, *, the more useful)
*****byte + byte = int... why? <--ESPECIALLY USEFUL
*****Implicit type conversion rules in C++ operators <--ESPECIALLY USEFUL. This answer here shows when implicit casts take place, and states, "Note. The minimum size of operations is int. So short/char are promoted to int before the operation is done."
*****Google search for "c++ does implicit casting occur with comparisons?"
https://www.tutorialspoint.com/cprogramming/c_type_casting.htm
http://www.improgrammer.net/type-casting-c-language/
Google search for "c++ implicit casting"
http://www.cplusplus.com/doc/tutorial/typecasting/
http://en.cppreference.com/w/cpp/language/implicit_conversion
http://en.cppreference.com/w/cpp/language/operator_comparison
Now let's look at some real C++ examples:
Here is a full C++ program you can compile and run to test expressions, to see what the return type is and whether it has been implicitly cast by the compiler to something you don't intend:
#include <iostream>
#include <cstdint> //fixed-width integer types

using namespace std;

//----------------------------------------------------------------
//printTypeAndVal (overloaded function)
//----------------------------------------------------------------
//UNSIGNED:
void printTypeAndVal(uint8_t myVal)
{
    cout << "uint8_t = " << (int)myVal << endl; //(int) cast is required to prevent myVal from printing as a char
}
void printTypeAndVal(uint16_t myVal)
{
    cout << "uint16_t = " << myVal << endl;
}
void printTypeAndVal(uint32_t myVal)
{
    cout << "uint32_t = " << myVal << endl;
}
void printTypeAndVal(uint64_t myVal)
{
    cout << "uint64_t = " << myVal << endl;
}
//SIGNED:
void printTypeAndVal(int8_t myVal)
{
    cout << "int8_t = " << (int)myVal << endl; //(int) cast is required to prevent myVal from printing as a char
}
void printTypeAndVal(int16_t myVal)
{
    cout << "int16_t = " << myVal << endl;
}
void printTypeAndVal(int32_t myVal)
{
    cout << "int32_t = " << myVal << endl;
}
void printTypeAndVal(int64_t myVal)
{
    cout << "int64_t = " << myVal << endl;
}
//FLOATING TYPES:
void printTypeAndVal(float myVal)
{
    cout << "float = " << myVal << endl;
}
void printTypeAndVal(double myVal)
{
    cout << "double = " << myVal << endl;
}
void printTypeAndVal(long double myVal)
{
    cout << "long double = " << myVal << endl;
}

//----------------------------------------------------------------
//main
//----------------------------------------------------------------
int main()
{
    cout << "Begin\n\n";

    //Test variables
    uint8_t u1 = 0;
    uint8_t u2 = 1;

    //Test cases:
    //for a single byte, explicit cast of the OUTPUT from the mathematical operation is required to get desired *unsigned* output
    cout << "uint8_t - uint8_t:" << endl;
    printTypeAndVal(u1 - u2); //-1 (bad)
    printTypeAndVal((uint8_t)u1 - (uint8_t)u2); //-1 (bad)
    printTypeAndVal((uint8_t)(u1 - u2)); //255 (fixed!)
    printTypeAndVal((uint8_t)((uint8_t)u1 - (uint8_t)u2)); //255 (fixed!)
    cout << endl;

    //for unsigned 2-byte types, explicit casting of the OUTPUT is required too to get desired *unsigned* output
    cout << "uint16_t - uint16_t:" << endl;
    uint16_t u3 = 0;
    uint16_t u4 = 1;
    printTypeAndVal(u3 - u4); //-1 (bad)
    printTypeAndVal((uint16_t)(u3 - u4)); //65535 (fixed!)
    cout << endl;

    //for larger standard unsigned types, explicit casting of the OUTPUT is ***NOT*** required to get desired *unsigned* output! IN THIS CASE, NO IMPLICIT PROMOTION (CAST) TO A LARGER *SIGNED* TYPE OCCURS.
    cout << "unsigned int - unsigned int:" << endl;
    unsigned int u5 = 0;
    unsigned int u6 = 1;
    printTypeAndVal(u5 - u6); //4294967295 (good--no fixing is required)
    printTypeAndVal((unsigned int)(u5 - u6)); //4294967295 (good--no fixing was required)
    cout << endl;

    return 0;
}
You can also run this program online here: http://cpp.sh/6kjgq
Here is the output. Notice that both the single-byte uint8_t - uint8_t case and the two-byte uint16_t - uint16_t case were each implicitly cast (promoted) by the C++ compiler to a 4-byte signed int32_t (int). This is the behavior to notice. As a result, those subtractions come out negative, which is the behavior that originally confused me, since I had anticipated they would underflow to the unsigned type's maximum value instead (since we are doing 0 - 1). To get the desired underflow, I had to explicitly cast the result of the subtraction to the desired unsigned type, not just the inputs. For the unsigned int case, however, this explicit cast of the result was NOT required.
Begin
uint8_t - uint8_t:
int32_t = -1
int32_t = -1
uint8_t = 255
uint8_t = 255
uint16_t - uint16_t:
int32_t = -1
uint16_t = 65535
unsigned int - unsigned int:
uint32_t = 4294967295
uint32_t = 4294967295
Here's another brief program example to show that the single unsigned byte (unsigned char) variables are being promoted to signed ints (int) when operated upon.
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char x = 130;
    unsigned char y = 130;
    unsigned char z = x + y;
    printf("%u\n", x + y); // Prints 260.
    printf("%u\n", z); // Prints 4.
}
Output:
260
4
Test here: http://cpp.sh/84eo

Struct C++ array in function parameters not working at all

Hello, I have to write a program using an array of structures, and I have to initialize it in a function. Below is my attempt, but the prototype keeps producing the error "expected primary-expression". I have followed tutorials but can't figure out what I'm doing wrong. Please help. I can't use pointers or vectors, just basic stuff. Thank you for your time.
struct gameCases{
    bool flag = false;
    int casenum;
    double value;
};

int initialize(gameCases cases); //prototype

--- main()
gameCases cases[26];
initialize(cases); //call

int initialize(gameCases cases) //definition
{
    double values[26] = {.01, 1, 5, 10, 25, 50,
        75, 100, 200, 300, 400, 500, 750, 1000,
        5000, 10000, 25000, 50000, 75000, 100000,
        200000, 300000, 400000, 500000,
        1000000, 2000000};
    for (int i = 0; i < 26; i++)
    {
        array[i].value = values[i];
    }
}
Declare the function like
int initialize( gameCases *array, size_t n );
and call it like
initialize( cases, 26 );
Or you could pass the array by reference. For example
int initialize( gameCases ( &cases )[26] );
Take into account that the function is declared as having return type int but it actually returns nothing.
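For completeness, here is a minimal sketch of how the pointer-and-size version could be defined and called (assuming the same gameCases struct; void is used since nothing is ever returned):

#include <cstddef>

struct gameCases {
    bool flag = false;
    int casenum;
    double value;
};

void initialize(gameCases *array, std::size_t n)
{
    const double values[26] = {.01, 1, 5, 10, 25, 50,
        75, 100, 200, 300, 400, 500, 750, 1000,
        5000, 10000, 25000, 50000, 75000, 100000,
        200000, 300000, 400000, 500000,
        1000000, 2000000};
    for (std::size_t i = 0; i < n && i < 26; i++)
        array[i].value = values[i]; // writes go straight to the caller's array
}

int main()
{
    gameCases cases[26];
    initialize(cases, 26); // the array decays to a gameCases* here
}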
int initialize(gameCases cases[26]); //prototype

int initialize(gameCases cases[26]) //definition
{
    double values[26] = {.01, 1, 5, 10, 25, 50,
        75, 100, 200, 300, 400, 500, 750, 1000,
        5000, 10000, 25000, 50000, 75000, 100000,
        200000, 300000, 400000, 500000,
        1000000, 2000000};
    for (int i = 0; i < 26; i++)
    {
        cases[i].value = values[i];
    }
}
and to call:
initialize(cases);

inverse fft of fft not returning expected data

I'm trying to make sure FFTW does what I think it should do, but am having problems. I'm using OpenCV's cv::Mat. I made a test program that, given a Mat f, computes ifft(fft(f)) and compares the result to f. I would expect the difference between the two to be negligible, but there's a strange pattern in the data.
In this case, f is initialized to be an 8x8 array of floats with positive values less than 1.
Here's my test program code:
Mat f = .. //populate f
if (f.type() != CV_32FC1)
    DLOG << "Bad f type";

const int y = f.rows;
const int x = f.cols;
double* input = fftw_alloc_real(y * 2*(x/2 + 1));

// forward fft
fftw_plan plan = fftw_plan_dft_r2c_2d(x, y, input, (fftw_complex*)input, FFTW_MEASURE);
// inverse fft
fftw_plan iplan = fftw_plan_dft_c2r_2d(x, y, (fftw_complex*)input, input, FFTW_MEASURE);

// populate fftw data from f
for (int yi = 0; yi < y; ++yi)
{
    const float* yptr = f.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        input[yi*x + xi] = (double)yptr[xi];
}

fftw_execute(plan);
fftw_execute(iplan);

// put data into another cv::Mat for comparison
Mat check(y, x, f.type());
for (int yi = 0; yi < y; ++yi)
{
    float* yptr = check.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        yptr[xi] = (float)input[yi*x + xi];
}

DLOG << Util::summary(f, "f");
DLOG << f;
DLOG << Util::summary(check, "check");
DLOG << check;

Mat diff = f*x*y - check;
DLOG << Util::summary(diff, "diff");
DLOG << diff;
Where DLOG is my logger and Util::summary(cv::Mat m) just prints passed string and the dimensions, channels, min, and max of the mat.
Here's what the data looks like (output):
f: rows:8 cols:8 chans:1 min:0.00257996 max:0.4
[0.050668437, 0.04509116, 0.033668514, 0.10986148, 0.12855141, 0.048241843, 0.12613985,.09731093;
0.028602425, 0.0092236707, 0.037089188, 0.118964, 0.075040311, 0.40000001, 0.11959606, 0.071930833;
0.0025799556, 0.051522054, 0.22233701, 0.052993439, 0.032000393, 0.12673819, 0.015244827, 0.044803992;
0.13946071, 0.019708242, 0.0112687, 0.047459811, 0.019342113, 0.030085485, 0.018739942, 0.0098618753;
0.041809395, 0.029681522, 0.026837418, 0.16038358, 0.29034778, 0.17247421, 0.1789207, 0.042179305;
0.025630442, 0.017192598, 0.060540862, 0.1854037, 0.21287154, 0.04813192, 0.042614728, 0.034764063;
0.0030835248, 0.018511582, 0.0071733585, 0.017076733, 0.064545207, 0.0026390438, 0.088922881, 0.045725599;
0.12798512, 0.23215951, 0.027465452, 0.03174505, 0.04352935, 0.025079668, 0.044403922, 0.035459157]
check: rows:8 cols:8 chans:1 min:-3.26489 max:25.6
[3.24278, 2.8858342, 2.1547849, 7.0311346, 8.2272902, 3.0874779, 8.0729504, 6.2278996;
0.30818239, 0, 2.373708, 7.6136961, 4.8025799, 25.6, 7.6541481, 4.6035733;
0.16511716, 3.2974114, -3.2648909, 0, 2.0480251, 8.1112442, 0.97566891, 2.8674555;
8.9254856, 1.2613275, 0.72119683, 3.0374279, -0.32588482, 0, 1.1993563, 0.63116002;
2.6758013, 1.8996174, 1.7175947, 10.264549, 18.582258, 11.038349, 0.042666838, 0;
1.6403483, 1.1003263, 3.8746152, 11.865837, 13.623778, 3.0804429, 2.7273426, 2.2249;
0.44932228, 0, 0.45909494, 1.0929109, 4.1308932, 0.16889881, 5.6910644, 2.9264383;
8.1910477, 14.858209, -0.071794562, 0, 2.7858784, 1.6050987, 2.841851, 2.2693861]
diff: rows:8 cols:8 chans:1 min:-0.251977 max:17.4945
[0, 0, 0, 0, 0, 0, 0, 0;
1.5223728, 0.59031492, 0, 0, 0, 0, 0, 0;
0, 0, 17.494459, 3.3915801, 0, 0, 0, 0;
0, 0, 0, 0, 1.5637801, 1.9254711, 0, 0;
0, 0, 0, 0, 0, 0, 11.408258, 2.6994755;
0, 0, 0, 0, 0, 0, 0, 0;
-0.2519767, 1.1847413, 0, 0, 0, 0, 0, 0;
0, 0, 1.8295834, 2.0316832, 0, 0, 0, 0]
The difficult part for me is the nonzero entries in the diff matrix. I've accounted for the scaling FFTW does on the values and the padding needed to do an in-place fft on real data; what am I missing?
I find it surprising that the data could be off by a value of 17 (which is 66% of the max value), when there are so many zeros. Also, the data irregularities seem to form a diagonal pattern.
As you may have noticed when writing fftw_alloc_real(y * 2*(x/2 + 1)): FFTW needs extra space in the x direction to store the complex data. In your case, as x=8, it needs 2*(x/2+1)=10 reals.
http://www.fftw.org/doc/Real_002ddata-DFT-Array-Format.html#Real_002ddata-DFT-Array-Format
So... you should take care of this as you populate the input array or retrieve values from it.
You may change
input[yi*x + xi] = (double)yptr[xi];
for
int xfft=2*(x/2 + 1);
...
input[yi*xfft + xi] = (double)yptr[xi];
And
yptr[xi] = (float)input[yi*x + xi];
for
yptr[xi] = (float)input[yi*xfft + xi];
It should solve your problem, since the non-null points in your diff correspond to the extra padding.
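Putting both changes in context, here is a sketch of the two corrected copy loops from your program (xfft is the padded row stride, in doubles, of the in-place buffer; everything else is unchanged):

const int xfft = 2*(x/2 + 1);   // padded row length required for the in-place r2c/c2r transform

// populate fftw data from f, using the padded stride
for (int yi = 0; yi < y; ++yi)
{
    const float* yptr = f.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        input[yi*xfft + xi] = (double)yptr[xi];
}

fftw_execute(plan);
fftw_execute(iplan);

// read the results back with the same padded stride
for (int yi = 0; yi < y; ++yi)
{
    float* yptr = check.ptr<float>(yi);
    for (int xi = 0; xi < x; ++xi)
        yptr[xi] = (float)input[yi*xfft + xi];
}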
Bye,

Which is a good C++ BigInteger class for programming contests?

I was just wondering which would be the best BigInteger class in C++ for programming contests that do not allow external libraries?
Mainly I was looking for a class which could be used in my code (I will of course write it on my own, along similar lines).
The primary factors which I think are important are( according to their importance ):
Arbitrary length numbers and their operations should be supported.
Should be as small as possible, code-wise. Usually there's a limit on the size of the source code that can be submitted (~50KB), so the code should be (much) smaller than that.
Should be as fast as possible. I read somewhere that BigInt classes take O(log(n)) time, so this should have similar complexity. What I mean is that it should be as fast as possible.
So far I've only needed unsigned integer big numbers for codechef, but codechef only gives 2KB, so I don't have the full implementation up there anywhere, just the members needed for that problem. My code also assumes that long long has at least twice as many bits as an unsigned int, though that's pretty safe. The only real trick to it is that different biguint objects may have different data lengths. Here are summaries of the more interesting functions.
#define BIG_LEN() (data.size()>rhs.data.size()?data.size():rhs.data.size())
//the length of data or rhs.data, whichever is bigger
#define SML_LEN() (data.size()<rhs.data.size()?data.size():rhs.data.size())
//the length of data or rhs.data, whichever is smaller
const unsigned char baselut[256]={ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0, 0, 0,
0,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,
25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,
41,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,
25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40
};
const unsigned char base64lut[256]={ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,62, 0, 0, 0,63,
52,53,54,55,56,57,58,59,60,61, 0, 0, 0, 0, 0, 0,
0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14,
15,16,17,18,19,20,21,22,23,24,25, 0, 0, 0, 0, 0,
0,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,
41,42,43,44,45,46,47,48,49,50,51, 0, 0, 0, 0, 0
};
//lookup tables for creating from strings
void add(unsigned int amount, unsigned int index)
    adds amount at index with carry, simplifies other members

void sub(unsigned int amount, unsigned int index)
    subtracts amount at index with borrow, simplifies other members

biguint& operator+=(const biguint& rhs)
    resize data to BIG_LEN()
    int carry = 0
    for each element i in data up to SML_LEN()
        data[i] += rhs.data[i] + carry
        carry = ((data[i]<rhs[i]+carry || (carry && rhs[i]+carry==0)) ? 1u : 0u);
    if data.length > rhs.length
        add(carry, SML_LEN())
biguint& operator*=(const biguint& rhs)
    biguint lhs = *this;
    resize data to data.length + rhs.length
    zero out data
    for each element j in lhs
        for each element i in rhs (with k = j+i, while k < data.size)
            long long t = (long long)lhs[j] * rhs[i]
            add(t&UINT_MAX, k);
            if (k+1<data.size())
                add(t>>uint_bits, k+1);
//note this was public, so I could do both at the same time when needed
//operator /= and %= both just call this
//I have never needed to divide by a biguint yet.
biguint& div(unsigned int rhs, unsigned int & mod)
    long long carry = 0
    for each element i from data length to zero
        carry = (carry << uint_bits) | data[i]
        data[i] = carry/rhs;
        carry %= rhs
    mod = carry
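As a concrete illustration of that summary, here is a hedged sketch of what the long-division loop could look like over a vector<unsigned int> limb array (the member names and layout here are assumptions, not the author's actual code):

#include <climits>
#include <cstddef>
#include <vector>

// Hypothetical limb layout: data[0] holds the least significant 32 bits.
struct biguint_sketch {
    std::vector<unsigned int> data;
    static const unsigned int uint_bits = sizeof(unsigned int) * CHAR_BIT;

    // Divide in place by one unsigned limb; the remainder comes back through mod.
    biguint_sketch& div(unsigned int rhs, unsigned int& mod) {
        unsigned long long carry = 0;
        for (std::size_t i = data.size(); i-- > 0; ) {
            carry = (carry << uint_bits) | data[i]; // bring down the next limb
            data[i] = (unsigned int)(carry / rhs);
            carry %= rhs;
        }
        mod = (unsigned int)carry;
        return *this;
    }
};

Repeated calls with rhs = 10 are exactly what the stream-output summary below uses to peel off decimal digits.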
//I have never needed to shift by a biguint yet
biguint& operator<<=(unsigned int rhs)
    resize to have enough room, always at least 1 bigger
    const unsigned int bigshift = rhs/uint_bits;
    const unsigned int lilshift = rhs%uint_bits;
    const unsigned int carry_shift = (uint_bits-lilshift)%32;
    for each element i from the top of data down to bigshift+1
        t = data[i-bigshift] << lilshift;
        t |= data[i-bigshift-1] >> carry_shift;
        data[i] = t;
    if bigshift < data.size
        data[bigshift] = data[0] << lilshift
    zero each element i from 0 to bigshift
std::ofstream& operator<<(std::ofstream& out, biguint num)
    unsigned int mod
    vector reverse
    do
        num.div(10,mod);
        push back mod onto reverse
    while num greater than 0
    print out elements of reverse in reverse

std::ifstream& operator>>(std::ifstream& in, biguint num)
    char next
    do
        in.get(next)
    while next is whitespace
    num = 0
    do
        num = num * 10 + next
    while in.get(next) and next is digit

//these are handy for initializing to known values.
//I also have constructors that simply call these
biguint& assign(const char* rhs, unsigned int base)
    for each char c in rhs
        data *= base
        add(baselut[c], 0)

biguint& assign(const char* rhs, std::integral_constant<unsigned int, 64> base)
    for each char c in rhs
        data *= base
        add(base64lut[c], 0)
//despite being 3 times as much code, the hex version is _way_ faster.
biguint& assign(const char* rhs, std::integral_constant<unsigned int, 16>)
    if first two characters are "0x" skip them
    unsigned int len = strlen(rhs);
    grow(len/4+1);
    zero out data
    const unsigned int hex_per_int = uint_bits/4;
    if (len > hex_per_int*data.size()) { //calculate where first digit goes
        rhs += len-hex_per_int*data.size();
        len = hex_per_int*data.size();
    }
    for(unsigned int i=len; i --> 0; ) { //place each digit directly in its place
        unsigned int t = (unsigned int)(baselut[*(rhs++)]) << (i%hex_per_int)*4u;
        data[i/hex_per_int] |= t;
    }
I also made specializations for multiplication, divide, modulo, shifts and others for std::integral_constant<unsigned int, Value>, which made massive improvements to my serializing and deserializing functions amongst others.
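To make that last remark concrete, here is a minimal, self-contained sketch of the std::integral_constant tag-dispatch idea (the names are illustrative, not the author's actual code):

#include <cstdio>
#include <type_traits>

template <unsigned int Base>
using base_tag = std::integral_constant<unsigned int, Base>;

// Generic path: the base is only known at run time.
void convert(const char* s, unsigned int base) { std::printf("generic base %u: %s\n", base, s); }

// Specialized overloads picked at compile time when the base is fixed,
// so (for example) hex digits can be placed straight into limbs with shifts.
void convert(const char* s, base_tag<16>) { std::printf("fast hex path: %s\n", s); }
void convert(const char* s, base_tag<64>) { std::printf("fast base64 path: %s\n", s); }

int main() {
    convert("1234", 10);                   // falls back to the generic overload
    convert("0xDEADBEEF", base_tag<16>{}); // resolves to the hex overload at compile time
    return 0;
}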