(C++) Weird bitmap issue - Colors in grayscale

I have a weird issue with creating a bitmap in C++. I'm using the BITMAPFILEHEADER and BITMAPINFOHEADER structures to create an 8-bit grayscale image. The bitmap data comes from a camera over DMA as unsigned char and has exactly the expected length. But when I save the image and open it, it contains colors?!
The way it should be: http://www.freeimagehosting.net/qd1ku
The way it is: http://www.freeimagehosting.net/83r1s
Do you have any idea where this is coming from?
The Header of the bitmap is:
42 4D 36 00 04 00 00 00 00 00 36 00 00 00 28 00
00 00 00 02 00 00 00 02 00 00 01 00 08 00 00 00
00 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00
Field breakdown:
42 4D "BM" - it's a bitmap
36 00 04 00 File size = 0x00040036; minus the 0x36-byte headers = 0x40000 = 512x512
00 00 00 00 Reserved
36 00 00 00 Offset to pixel data = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) = 0x36
28 00 00 00 Sizeof(Bitmapinfoheader);
00 02 00 00 =0x200 = 512 px.
00 02 00 00 same
01 00 Planes = 1 - legacy field, must always be 1.
08 00 Color depth = 8 bit.
00 00 00 00 Compression: 0 = none.
00 00 00 00 Image data size, may be zero for uncompressed bitmaps
00 00 00 00 X-Dot-Per-Meter, may be left 0
00 00 00 00 y-Dot-Per-Meter, may be left 0
00 00 00 00 Colors used; if zero, all 256 palette colors are used
00 00 00 00 Important colors; if zero, all colors are important

Under Windows, if you do not supply a palette for your 8-bit image, a system default one is provided for you. I do not recall offhand the Win32 way to add a palette, but in the BMP file it should be as simple as writing a 256-entry color table, where each entry's red, green, and blue components equal its index, right after the headers, then updating the pixel-data offset in the file header, etc.
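A minimal sketch of that grayscale palette (assuming stdio output, with bmpFile as a hypothetical name for the already-open FILE*; BMP palette entries are 4-byte blue/green/red/reserved quads, so the pixel-data offset and file size both grow by 256 * 4 = 1024 bytes):
unsigned char palette[256 * 4];
for (int i = 0; i < 256; ++i) {
    palette[i * 4 + 0] = (unsigned char)i; // blue
    palette[i * 4 + 1] = (unsigned char)i; // green
    palette[i * 4 + 2] = (unsigned char)i; // red
    palette[i * 4 + 3] = 0;                // reserved
}
// Write it between the two headers and the pixel data.
fwrite(palette, 1, sizeof(palette), bmpFile);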

Related

Getting e_lfanew from a dll, yielding E8 and not F8?

I'm reading a DLL file into a buffer (pSrcData); from there I want to print e_lfanew:
#include <fstream>
#include <iostream>
#include <Windows.h>

bool readDll(const char* dllfile)
{
    BYTE* pSrcData;
    std::ifstream File(dllfile, std::ios::binary | std::ios::ate);
    auto FileSize = File.tellg();
    pSrcData = new BYTE[static_cast<UINT_PTR>(FileSize)];
    File.seekg(0, std::ios::beg);
    File.read(reinterpret_cast<char*>(pSrcData), FileSize);
    File.close();
    std::cout << std::hex << reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_lfanew;
    // pOldNtHeader is declared elsewhere (not shown in the question)
    pOldNtHeader = reinterpret_cast<IMAGE_NT_HEADERS*>(pSrcData + reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_lfanew);
    return true;
}
Output: E8
Opening the DLL in HxD, I get this (addresses 00000000 - 00000030):
4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00
B8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 F8 00 00 00
Meaning e_lfanew should be F8. However, I get E8 when running the code above. Can anyone see what I'm doing wrong?
Addition:
Getting e_magic works: std::cout << std::hex << reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_magic yields 5a4d, which as little-endian bytes is 4D 5A.
Sorry, I found that setting the configuration in Visual Studio 2019 to x86 Release sets e_lfanew to F8, while x86 Debug sets it to E8. I was comparing different debug/release builds of the DLL.

Memory Leak with Openssl when allocating memory for X509_STORE

I am using OpenSSL in my project. When I exit my application, I get "Detected memory leaks!" in Visual Studio 2013.
Detected memory leaks!
Dumping objects ->
{70202} normal block at 0x056CB738, 12 bytes long.
Data: <8 j > 38 E8 6A 05 00 00 00 00 04 00 00 00
{70201} normal block at 0x056CB6E8, 16 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70200} normal block at 0x056CB698, 20 bytes long.
Data: < l > 00 00 00 00 E8 B6 6C 05 00 00 00 00 04 00 00 00
{70199} normal block at 0x056AE838, 12 bytes long.
Data: < l > 04 00 00 00 98 B6 6C 05 00 00 00 00
{70198} normal block at 0x056CB618, 64 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70197} normal block at 0x056CB578, 96 bytes long.
Data: < l 3 3 > 18 B6 6C 05 00 FE C0 33 C0 FD C0 33 08 00 00 00
Object dump complete.
When I add
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
_CrtSetBreakAlloc(70202);
to the main function, I always get a breakpoint at the allocation of the X509 store, no matter which of the 6 allocation numbers (70202, ...) I set the breakpoint on.
I initialize and uninitialize the X509 store in a class's constructor and destructor (see below).
Is there anything else I need to look out for when using the X509_STORE?
Foo::CSCACerts::CSCACerts(void)
{
    m_store = X509_STORE_new();
}

Foo::CSCACerts::~CSCACerts(void)
{
    X509_STORE_free(m_store);
}
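One possibility (an assumption, since the rest of the program isn't shown) is that the report covers OpenSSL's global state rather than the store itself: the library allocates internal tables that are released only by its global cleanup routines, not by X509_STORE_free. A minimal sketch using the OpenSSL 1.0.x-era API, which matches the Visual Studio 2013 timeframe (these calls are unnecessary in 1.1.0 and later):
#include <openssl/crypto.h>
#include <openssl/err.h>
#include <openssl/evp.h>

// Call once at program shutdown, after all OpenSSL objects are freed.
void openssl_global_cleanup()
{
    EVP_cleanup();                // frees cipher/digest lookup tables
    CRYPTO_cleanup_all_ex_data(); // frees ex_data bookkeeping
    ERR_free_strings();           // frees the error-string tables
}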

(VS15 C++) Got a Visual Leak Detector report, but what now?

Because of some (strange) problems in my C++ project, I used Visual Leak Detector (for the first time) to check the project for memory leaks.
Among others, I got the following reports:
WARNING: Visual Leak Detector detected memory leaks!
---------- Block 4 at 0x004D07B0: 200 bytes ----------
Leak Hash: 0xD2D1B4A0, Count: 1, Total 200 bytes
Call Stack (TID 8796):
ucrtbase.dll!malloc()
f:\dd\vctools\crt\vcstartup\src\heap\new_scalar.cpp (19): LASS.exe!operator new() + 0x8 bytes
clr.dll!0x72D616E5()
Data:
28 75 14 03 00 00 00 00 01 00 00 00 00 00 00 00 (u...... ........
9A 99 99 99 99 99 B9 3F 50 00 00 00 0A 00 00 00 .......? P.......
00 00 00 00 F4 01 00 00 00 00 00 00 01 00 00 00 ........ ........
7B 14 AE 47 E1 7A 74 3F 14 00 00 00 BA FF FF FF {..G.zt? ........
00 00 00 00 F4 01 00 00 00 00 00 00 01 00 00 00 ........ ........
7B 14 AE 47 E1 7A 84 3F 00 00 00 00 64 00 00 00 {..G.z.? ....d...
00 00 00 00 01 00 00 00 14 00 00 00 46 00 00 00 ........ ....F...
00 00 00 00 64 00 00 00 00 00 00 00 F4 01 00 00 ....d... ........
01 00 00 00 B8 E2 13 03 F0 AD 18 03 00 00 00 00 ........ ........
C8 E2 13 03 C8 AB 18 03 00 00 00 00 78 E3 13 03 ........ ....x...
B8 AC 18 03 00 00 00 00 68 E2 13 03 E8 AC 18 03 ........ h.......
00 00 00 00 14 00 00 00 01 00 00 00 64 00 00 00 ........ ....d...
01 00 00 00 00 00 00 00 ........ ........
---------- Block 20 at 0x004D0880: 200 bytes ----------
Leak Hash: 0xD2D1B4A0, Count: 1, Total 200 bytes
Call Stack (TID 8796):
ucrtbase.dll!malloc()
f:\dd\vctools\crt\vcstartup\src\heap\new_scalar.cpp (19): LASS.exe!operator new() + 0x8 bytes
clr.dll!0x72D616E5()
Data:
78 74 14 03 00 00 00 00 01 00 00 00 00 00 00 00 xt...... ........
9A 99 99 99 99 99 B9 3F 50 00 00 00 0A 00 00 00 .......? P.......
00 00 00 00 F4 01 00 00 00 00 00 00 01 00 00 00 ........ ........
7B 14 AE 47 E1 7A 74 3F 14 00 00 00 BA FF FF FF {..G.zt? ........
00 00 00 00 F4 01 00 00 00 00 00 00 01 00 00 00 ........ ........
7B 14 AE 47 E1 7A 84 3F 00 00 00 00 64 00 00 00 {..G.z.? ....d...
00 00 00 00 01 00 00 00 14 00 00 00 46 00 00 00 ........ ....F...
00 00 00 00 64 00 00 00 00 00 00 00 F4 01 00 00 ....d... ........
01 00 00 00 38 E2 13 03 00 F0 15 03 00 00 00 00 ....8... ........
B8 E1 13 03 88 00 7F 05 00 00 00 00 08 E2 13 03 ........ ........
20 FF 7E 05 00 00 00 00 E8 E1 13 03 80 FF 7E 05 ..~..... ......~.
00 00 00 00 14 00 00 00 01 00 00 00 64 00 00 00 ........ ....d...
01 00 00 00 00 00 00 00 ........ ........
---------- Block 31 at 0x0053E1B8: 72 bytes ----------
Leak Hash: 0x3F88029B, Count: 1, Total 72 bytes
Call Stack (TID 8796):
ucrtbase.dll!malloc()
f:\dd\vctools\crt\vcstartup\src\heap\new_scalar.cpp (19): LASS.exe!operator new() + 0x8 bytes
clr.dll!0x72D616E5()
Data:
60 BC 55 00 40 3E 80 05 A0 3F 80 05 A0 3F 80 05 `.U.@>.. .?...?..
60 BB 55 00 20 34 18 03 00 00 00 00 00 00 00 00 `.U..4.. ........
00 00 00 00 20 00 00 00 2F 00 00 00 80 BC 55 00 ........ /.....U.
00 2E 18 03 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
20 00 00 00 2F 00 00 00 ..../... ........
---------- Block 33 at 0x0055BB60: 8 bytes ----------
Leak Hash: 0xA49C5AA6, Count: 1, Total 8 bytes
Call Stack (TID 8796):
ucrtbase.dll!malloc()
f:\dd\vctools\crt\vcstartup\src\heap\new_scalar.cpp (19): LASS.exe!operator new() + 0x8 bytes
clr.dll!0x72D616E5()
Data:
C8 E1 53 00 00 00 00 00
..S..... ........
//And many more...
Unfortunately, I do not understand what VLD is telling me the problem is.
Double-clicking the "f:\dd..." lines should set my cursor to the offending line, shouldn't it? But it doesn't.
My question is now: how do I get to the area of the problem, or in other words, how do I read these reports?
In addition:
I use Visual Studio 2015
The project is a C++ Windows Forms Project
I added vld.h to the project's additional include directories and the lib directory to its additional library directories
In the file containing main() I use #include <vld.h> and _CrtDumpMemoryLeaks();
EDIT:
My Main (a reduced version, but gives similar reports):
//some class-includes
#include <vld.h>
using namespace System;
using namespace System::Windows::Forms;
using namespace std;
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
[STAThread]
void Main()
{
    Application::EnableVisualStyles();
    Application::SetCompatibleTextRenderingDefault(false);
    Experiment* experiment = new Experiment();
    Experiment_List* running_experiments = new Experiment_List();
    while (!experiment->end) {
        experiment = new Experiment();
        LASS::MainWindow form(experiment, running_experiments);
        form.ShowDialog();
        if (!experiment->end) {
            running_experiments->register_experiment(experiment);
        }
    }
    running_experiments->end_all();
    _CrtDumpMemoryLeaks();
    exit(0);
}
Unfortunately, there are about 40 classes that I do not want to post...
I don't know where exactly the problem is.
For me, it helps to run the program in RELEASE mode instead of DEBUG mode.
I suppose my problem is the handling of managed and unmanaged code together: I have unmanaged code inside managed code.
It seems as if the CLR uses a different new operator in Debug mode, one that is not as conformant to the C++ standard.
According to Using push_back() for STL List in C++ causes Access Violation, Crash:
If you malloc() a C++ class, no constructors will be called for any of
that class's fields
And VS will step into new_scalar.cpp (the scalar operator new).
Folks say that depends on Visual Leak Detector (VLD), which you use in your includes.
In the end, try to separate your managed and unmanaged code with
#pragma managed
and
#pragma unmanaged
and run in RELEASE mode.
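A minimal sketch of that separation (the function names are hypothetical, purely for illustration):
#pragma unmanaged
// Plain native C++: compiled to machine code, using the CRT allocator.
void native_work()
{
    int* data = new int[100];
    // ... work with data ...
    delete[] data;
}

#pragma managed
// C++/CLI code: compiled to MSIL and executed by the CLR.
void managed_entry()
{
    native_work();
}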

Locating a function body chunk in a .o file

I have a simple code file, mangen.c:
///////////// begin of the file
void mangen(int* data)
{
    for (int j = 0; j < 100; j++)
        for (int i = 0; i < 100; i++)
            data[j*100+i] = 111;
}
//////// end of the file
I compile it with MinGW (on Win32):
c:\mingw\bin\gcc -std=c99 -c mangen.c -fno-exceptions -march=core2 -mtune=generic -mfpmath=both -msse2
It yields a mangen.o file which is 400 bytes:
00000000 4C 01 03 00 00 00 00 00-D8 00 00 00 0A 00 00 00 L...............
00000010 00 00 05 01 2E 74 65 78-74 00 00 00 00 00 00 00 .....text.......
00000020 00 00 00 00 4C 00 00 00-8C 00 00 00 00 00 00 00 ....L...........
00000030 00 00 00 00 00 00 00 00-20 00 30 60 2E 64 61 74 ........ .0`.dat
00000040 61 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 a...............
00000050 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
00000060 40 00 30 C0 2E 62 73 73-00 00 00 00 00 00 00 00 @.0..bss........
00000070 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
00000080 00 00 00 00 00 00 00 00-80 00 30 C0 55 89 E5 83 ..........0.U...
00000090 EC 10 C7 45 FC 00 00 00-00 EB 34 C7 45 F8 00 00 ...E......4.E...
000000A0 00 00 EB 21 8B 45 FC 6B-D0 64 8B 45 F8 01 D0 8D ...!.E.k.d.E....
000000B0 14 85 00 00 00 00 8B 45-08 01 D0 C7 00 6F 00 00 .......E.....o..
000000C0 00 83 45 F8 01 83 7D F8-63 7E D9 83 45 FC 01 83 ..E...}.c~..E...
000000D0 7D FC 63 7E C6 C9 C3 90-2E 66 69 6C 65 00 00 00 }.c~.....file...
000000E0 00 00 00 00 FE FF 00 00-67 01 6D 61 6E 67 65 6E ........g.mangen
000000F0 2E 63 00 00 00 00 00 00-00 00 00 00 5F 6D 61 6E .c.........._man
00000100 67 65 6E 00 00 00 00 00-01 00 20 00 02 01 00 00 gen....... .....
00000110 00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00 ................
00000120 2E 74 65 78 74 00 00 00-00 00 00 00 01 00 00 00 .text...........
00000130 03 01 4B 00 00 00 00 00-00 00 00 00 00 00 00 00 ..K.............
00000140 00 00 00 00 2E 64 61 74-61 00 00 00 00 00 00 00 .....data.......
00000150 02 00 00 00 03 01 00 00-00 00 00 00 00 00 00 00 ................
00000160 00 00 00 00 00 00 00 00-2E 62 73 73 00 00 00 00 .........bss....
00000170 00 00 00 00 03 00 00 00-03 01 00 00 00 00 00 00 ................
00000180 00 00 00 00 00 00 00 00-00 00 00 00 04 00 00 00 ................
Now I need to know where the binary chunk containing the above function body is in here.
Could someone provide some simple code that will allow me to retrieve these boundaries?
(Assume that the function body may be shorter or longer, and that other functions or data may be added to the source file, so the chunk will move around; but I suspect the procedure to locate it should not be very complex.)
You can use objdump -Fd mangen.o to find out the file offset and length of a function.
Alternatively, you can use readelf -s mangen.o to find out the size of a function (readelf only understands ELF objects, though; a MinGW win32 .o is COFF, so objdump is the safer choice here).
You may define something like int abc = 0x11223344; at the beginning and end of the function and use the constants to locate the function body.
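A minimal sketch of that marker scan (assuming the constants survive optimization, which may require making them volatile; the marker values are arbitrary, and the end marker 0x55667788 is hypothetical):
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

// Returns the file offset of the first occurrence of a 4-byte
// little-endian marker, or -1 if it is not found.
long find_marker(const std::vector<char>& buf, uint32_t marker)
{
    for (size_t i = 0; i + 4 <= buf.size(); ++i)
        if (std::memcmp(buf.data() + i, &marker, 4) == 0)
            return static_cast<long>(i);
    return -1;
}

int main()
{
    std::ifstream f("mangen.o", std::ios::binary);
    std::vector<char> buf((std::istreambuf_iterator<char>(f)),
                          std::istreambuf_iterator<char>());
    long begin = find_marker(buf, 0x11223344); // start marker
    long end = find_marker(buf, 0x55667788);   // end marker (hypothetical)
    // the function body lies between the two offsets
}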
You can use objdump or nm.
For instance, try:
nm mangen.o
Or
objdump -t mangen.o
If you need to use your own code, have a look here:
http://www.rohitab.com/discuss/topic/38591-c-import-table-parser/
It will give you something to start with. You can find much more information about the format in MSDN.
If you are into Python, there is nice tool/library (including source code) that can be helpful:
https://code.google.com/p/pefile/
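If you want a self-contained starting point in your own code, here is a minimal sketch (assuming a 32-bit COFF object as MinGW emits, and that the whole file fits in memory) that walks the section table and reports where the .text section, which holds the function body, sits in the file:
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

#pragma pack(push, 1)
struct CoffHeader {          // mirrors IMAGE_FILE_HEADER
    uint16_t machine;
    uint16_t numberOfSections;
    uint32_t timeDateStamp;
    uint32_t pointerToSymbolTable;
    uint32_t numberOfSymbols;
    uint16_t sizeOfOptionalHeader;
    uint16_t characteristics;
};
struct CoffSection {         // mirrors IMAGE_SECTION_HEADER
    char     name[8];
    uint32_t virtualSize;
    uint32_t virtualAddress;
    uint32_t sizeOfRawData;
    uint32_t pointerToRawData;
    uint32_t pointerToRelocations;
    uint32_t pointerToLinenumbers;
    uint16_t numberOfRelocations;
    uint16_t numberOfLinenumbers;
    uint32_t characteristics;
};
#pragma pack(pop)

int main()
{
    std::ifstream f("mangen.o", std::ios::binary);
    std::vector<char> buf((std::istreambuf_iterator<char>(f)),
                          std::istreambuf_iterator<char>());
    const auto* hdr = reinterpret_cast<const CoffHeader*>(buf.data());
    const auto* sec = reinterpret_cast<const CoffSection*>(
        buf.data() + sizeof(CoffHeader) + hdr->sizeOfOptionalHeader);
    for (uint16_t i = 0; i < hdr->numberOfSections; ++i) {
        if (std::memcmp(sec[i].name, ".text\0\0\0", 8) == 0)
            std::printf(".text at file offset 0x%X, %u bytes\n",
                        sec[i].pointerToRawData, sec[i].sizeOfRawData);
    }
}
On the dump above this would report file offset 0x8C and size 0x4C, which is exactly where the 55 89 E5 function prologue appears.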

Inflating TMX Data using Base64 & Zlib - C++

I have searched the web for a way to convert the TMX data into some sort of usable data, but I cannot seem to use zlib to inflate the data I get back from a Base64 decode function. I'm not sure if that's how it works, but from what I've read, I'm guessing the data was deflated and I am supposed to inflate it with zlib.
So: TMX Data -> Base64 Decode -> Decoded Data -> Zlib Inflate -> Usable Data?
Here's my source code:
const std::string EncryptedString = "eJxjZGBgYMSCZYCYHYilccgPNnVqOLAQmjp2PFgPiJmh6iSBWApKI7OlkNTQAgMA4AIDoQ==";
FILE *wfile;
// Will contain decoded data
wfile = fopen("testFile", "w");
fprintf(wfile, base64_decode(EncryptedString).c_str());
Then I open the same file with the decoded data, which is:
xœcd```Ä‚e€˜ˆ¥qÈ6uj8°š:v<Xˆ™¡ê$X
J#³¥ÔÐ
And try to inflate it with zlib, using the inflate function from the zlib docs:
FILE *source;
// Contains decoded data.
source = fopen("testFile", "r");
FILE *dest;
// We write decompressed data to this file.
dest = fopen("testOutFile", "w");
zerr(Z_Inflate(source, dest));
Yet Zlib returns an error message of "Invalid or incomplete deflate data"
Here's the code for the zlib function:
#include <cassert>
#include <cstdio>
#include <zlib.h>

#define CHUNK 16384

inline int Z_Inflate(FILE *source, FILE *dest)
{
    int ret;
    unsigned have;
    z_stream strm;
    Bytef in[CHUNK];
    Bytef out[CHUNK];

    /* allocate inflate state */
    strm.zalloc = Z_NULL;
    strm.zfree = Z_NULL;
    strm.opaque = Z_NULL;
    strm.avail_in = 0;
    strm.next_in = Z_NULL;
    ret = inflateInit(&strm);
    if (ret != Z_OK)
        return ret;

    /* decompress until deflate stream ends or end of file */
    do {
        strm.avail_in = fread(in, 1, CHUNK, source);
        if (ferror(source)) {
            (void)inflateEnd(&strm);
            return Z_ERRNO;
        }
        if (strm.avail_in == 0)
            break;
        strm.next_in = in;

        /* run inflate() on input until output buffer not full */
        do {
            strm.avail_out = CHUNK;
            strm.next_out = out;
            ret = inflate(&strm, Z_NO_FLUSH);
            assert(ret != Z_STREAM_ERROR); /* state not clobbered */
            switch (ret) {
            case Z_NEED_DICT:
                ret = Z_DATA_ERROR; /* and fall through */
            case Z_DATA_ERROR:
            case Z_MEM_ERROR:
                (void)inflateEnd(&strm);
                return ret;
            }
            have = CHUNK - strm.avail_out;
            if (fwrite(out, 1, have, dest) != have || ferror(dest)) {
                (void)inflateEnd(&strm);
                return Z_ERRNO;
            }
        } while (strm.avail_out == 0);
        /* done when inflate() says it's done */
    } while (ret != Z_STREAM_END);

    /* clean up and return */
    (void)inflateEnd(&strm);
    return ret == Z_STREAM_END ? Z_OK : Z_DATA_ERROR;
}
/* report a zlib or i/o error */
inline void zerr(int ret)
{
    fputs("zpipe: ", stderr);
    switch (ret) {
    case Z_ERRNO:
        if (ferror(stdin))
            fputs("error reading stdin\n", stderr);
        if (ferror(stdout))
            fputs("error writing stdout\n", stderr);
        break;
    case Z_STREAM_ERROR:
        fputs("invalid compression level\n", stderr);
        break;
    case Z_DATA_ERROR:
        fputs("invalid or incomplete deflate data\n", stderr);
        break;
    case Z_MEM_ERROR:
        fputs("out of memory\n", stderr);
        break;
    case Z_VERSION_ERROR:
        fputs("zlib version mismatch!\n", stderr);
    }
}
Any help would be greatly appreciated, as I would love to use the Tiled editor for my map files. It's turning out to be more of a headache.
Worked for me. After decoding the base64 string, I get in hex:
78 9c 63 64 60 60 60 c4 82 65 80 98 1d 88 a5 71
c8 0f 36 75 6a 38 b0 10 9a 3a 76 3c 58 0f 88 99
a1 ea 24 81 58 0a 4a 23 b3 a5 90 d4 d0 02 03 00
e0 02 03 a1
That is a valid zlib stream that decodes with no errors to this in hex:
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 1c 00 00 00 07 00 00 00
1b 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
1c 00 00 00 07 00 00 00 1b 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 1c 00 00 00 07 00 00 00
1b 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
1c 00 00 00 07 00 00 00 1b 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 1c 00 00 00 07 00 00 00
1b 00 00 00 01 00 00 00 26 00 00 00 26 00 00 00
26 00 00 00 26 00 00 00 26 00 00 00 26 00 00 00
12 00 00 00 07 00 00 00 1b 00 00 00 01 00 00 00
07 00 00 00 07 00 00 00 07 00 00 00 07 00 00 00
07 00 00 00 07 00 00 00 07 00 00 00 2e 00 00 00
03 00 00 00 01 00 00 00 19 00 00 00 1a 00 00 00
19 00 00 00 19 00 00 00 1a 00 00 00 19 00 00 00
1a 00 00 00 03 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00 01 00 00 00 01 00 00 00
What kind of machine are you on? If it's Windows (shudder), you may need to make sure that your stdio functions are not trying to do end-of-line conversions. Use fopen(..., "wb") and fopen(..., "rb") for binary writing and reading.
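For illustration, a minimal sketch of the question's write/read path in binary mode (reusing the question's own base64_decode, Z_Inflate, and zerr; fwrite also avoids handing the decoded bytes to fprintf as a format string, which is unsafe):
const std::string decoded = base64_decode(EncryptedString);

FILE *wfile = fopen("testFile", "wb");            // "wb": no end-of-line translation
fwrite(decoded.data(), 1, decoded.size(), wfile); // raw bytes, not a format string
fclose(wfile);

FILE *source = fopen("testFile", "rb");           // "rb" for binary reading
FILE *dest = fopen("testOutFile", "wb");
zerr(Z_Inflate(source, dest));
fclose(source);
fclose(dest);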