Sending hex with socket->write in C++ Qt

I am writing a TCP console in C++ with Qt, and I want to send the hex bytes 00 00 00 00 00 06 01 02 00 01 00 01 using socket->write(???), where socket = new QTcpSocket(this).
What format should the data be in to send the hex sequence 00 00 00 00 00 06 01 02 00 01 00 01?
Thanks

Related

SMPP client is adding an ¿ to the end of the message

I'm trying to make a Windows desktop smpp client, and it's connecting and sending well, aside from a bug where an extra character (¿) is added to the end of the message content that I'm receiving on my phone.
So I'm sending "test" but my phone receives "test¿". Here's the contents of the pdu object, just before it gets sent:
size :58
sequence :2
cmd id :0x4
cmd status:0x0 : No Error
00000000 00 00 00 3a 00 00 00 04 00 00 00 00 00 00 00 02 |...:............|
00000010 00 05 00 74 65 73 74 66 72 6f 6d 00 01 01 34 34 |...testfrom...44|
00000020 37 37 37 37 37 37 37 37 37 37 00 00 00 00 00 00 |7777777777......|
00000030 00 00 00 00 05 74 65 73 74 00 |.....test.|
0000003a
I'm using this C++ SMPP library as a base:
https://github.com/onlinecity/cpp-smpp
I had to make some slight changes to get it working on windows, but I don't think anything was changed that could have affected this.
Someone else ran a test using a different account on the SMPP server, and their test added a # symbol instead.
Any ideas what could be causing this? Thanks!
Found the problem in the end: it was due to an option in the SMPP library called nullTerminateOctetStrings, which defaults to true.
It was adding the 00 to the end of the message. It sounds like this is required by the SMPP 3.4 standard, but our SMSC didn't like it. Ideally I would fix the SMSC, but that's provided by a third party, so I've just switched off the null termination instead.
Someone with a similar problem and more info here: https://www.mail-archive.com/devel@kannel.3glab.org/msg06765.html

How to get the location and index of an Icon in a ".lnk" shortcut with SHGetFileInfo in C++

I am using C++ (VS 2012) on Win7x64 and am trying to get the location and index of an icon using SHGetFileInfo with SHGFI_ICONLOCATION like this:
SHFILEINFO info;
memset(&info, 0, sizeof(info));
DWORD_PTR result = SHGetFileInfo(_T("C:\\Users\\Admin\\Desktop\\test.lnk"), 0, &info, sizeof(SHFILEINFO), SHGFI_ICONLOCATION);
I get a 1 as result and after inspecting info.szDisplayName I see this:
0x0022CDE0 00 00 3a 00 5c 00 50 00 72 00 6f 00 ..:.\.P.r.o.
0x0022CDEC 67 00 72 00 61 00 6d 00 20 00 46 00 g.r.a.m. .F.
0x0022CDF8 69 00 6c 00 65 00 73 00 20 00 28 00 i.l.e.s. .(.
0x0022CE04 78 00 38 00 36 00 29 00 5c 00 54 00 x.8.6.).\.T.
0x0022CE10 65 00 73 00 74 00 5c 00 54 00 65 00 e.s.t.\.T.e.
0x0022CE1C 73 00 74 00 2e 00 65 00 78 00 65 00 s.t...e.x.e.
0x0022CE28 00 00 00 00 00 00 00 00 00 00 00 00 ............
0x0022CE34 00 00 00 00 00 00 00 00 00 00 00 00 ............
What I find strange is that although info.szDisplayName appears empty because of the 00 00 at the start, SHGetFileInfo seems to have filled in the entire path correctly and then overwritten the drive letter with 00 00, making it an "empty" string.
I have also noticed that when I choose a different icon from a different executable, it appears to work fine. But when I then create a shortcut to that other executable and use the icon from it that worked before, it again returns this "empty" string.
It seems to work sort of criss-cross with the locations of the executable and the icon, but when the icon comes from the same executable it always exhibits this strange behaviour. The only exception to this is the index of the icon:
it does not matter whether an executable has one icon or many, but when I use an icon with an index greater than 0, the location and the index are both filled in correctly, even though the locations of the executable and the icon in the shortcut are the same.
Why is SHGetFileInfo filling in info.szDisplayName as an "empty" string when the icon location in the shortcut is the same as the executable and the index is 0?
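No answer was recorded here. One way to sidestep the quirk, assuming you only need the icon location the .lnk file itself stores, is to load the shortcut through IShellLink and call GetIconLocation directly; an empty result then plausibly means the shortcut stores no explicit icon (so the target's own icon at index 0 is used), and you can fall back to IShellLinkW::GetPath. A Windows-only sketch, not a verified explanation of SHGetFileInfo's behaviour:

```cpp
#include <windows.h>
#include <shlobj.h>
#include <objbase.h>

int main() {
    CoInitialize(nullptr);
    IShellLinkW* link = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_IShellLinkW,
                                  reinterpret_cast<void**>(&link));
    if (SUCCEEDED(hr)) {
        IPersistFile* file = nullptr;
        if (SUCCEEDED(link->QueryInterface(IID_IPersistFile,
                                           reinterpret_cast<void**>(&file)))) {
            if (SUCCEEDED(file->Load(L"C:\\Users\\Admin\\Desktop\\test.lnk",
                                     STGM_READ))) {
                wchar_t path[MAX_PATH] = {};
                int index = 0;
                // Reads the icon location field stored in the .lnk itself.
                // An empty path here means no explicit icon was set; fall
                // back to IShellLinkW::GetPath and icon index 0.
                link->GetIconLocation(path, MAX_PATH, &index);
            }
            file->Release();
        }
        link->Release();
    }
    CoUninitialize();
    return 0;
}
```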

Detecting Memory Leaks in C++ Windows application

I have a C++ Windows application that has some memory leak issues. Is it possible to analyze the leaks from a dump using NTSD? If so, please guide me on how to do that.
I have also heard that this can be done with a user-mode dump. I am not very familiar with finding leaks on Windows; it is very easy on Linux with Valgrind.
Are there any other, better options to check this?
See Visual Leak Detector for details; I have used it on Windows.
All you do in your application is add
#include <vld.h>
and you will see a report about detected leaks in the terminal when debugging your program, something like this:
---------- Block 1199 at 0x04BE1058: 136 bytes ----------
Call Stack:
d:\Foobar\FooLog.cpp (26): FooLog::getInstance
d:\Foobar\FooMain.cpp (75): FooMain::init
f:\dd\vctools\crt_bld\self_x86\crt\src\crtexe.c (578): __tmainCRTStartup
f:\dd\vctools\crt_bld\self_x86\crt\src\crtexe.c (403): WinMainCRTStartup
0x759A3677 (File and line number not available): BaseThreadInitThunk
0x770C9D42 (File and line number not available): RtlInitializeExceptionChain
0x770C9D15 (File and line number not available): RtlInitializeExceptionChain
Data:
9C 33 2D 6B 74 2A 2D 6B C8 11 BE 04 00 00 00 00 .3-kt*-k ........
00 00 00 00 70 14 BB 6C 70 14 BB 6C 00 00 00 00 ....p..l p..l....
00 00 00 00 68 14 BB 6C 68 14 BB 6C 00 00 00 00 ....h..l h..l....
00 00 00 00 6C 14 BB 6C 6C 14 BB 6C 20 12 BE 04 ....l..l l..l....
00 00 00 00 CD 00 CD CD 00 00 00 00 01 CD CD CD ........ ........
68 14 BB 6C 78 33 2D 6B 00 00 00 00 00 00 00 00 h..lx3-k ........
I've had great success tracking down memory and resource leaks with Dr. Memory. It works with both GCC and MSVC, and it's very straightforward to use.
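Besides the tools above, MSVC's debug CRT can report leaks on its own, which is handy when you can't add a dependency. A minimal sketch (Windows/MSVC only; in release builds the _Crt* calls compile away):

```cpp
// Windows/MSVC only: enable the debug heap's automatic leak report.
#define _CRTDBG_MAP_ALLOC   // make malloc-based leaks report file/line
#include <cstdlib>
#include <crtdbg.h>

int main() {
    // Ask the CRT to dump all unfreed blocks when the process exits.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    int* leaked = new int(7);  // deliberately leaked; appears in the
    (void)leaked;              // "Detected memory leaks!" report at exit
    return 0;
}
```

Running a debug build under the VS debugger then prints the leaked blocks to the Output window at exit, much like the VLD report above but with less call-stack detail.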

Translating raw mouse/pointer data to something meaningful?

I am using hexdump -C to show real-time data from a pointing device on a Linux box. The information it returns is 16 bytes of hex per line, like this:
000001b0 a9 1c fd 4e f1 2c 0f 00 01 00 3e 00 01 00 00 00 |...N.,....>.....|
000001c0 a9 1c fd 4e 0e 2d 0f 00 01 00 3e 00 00 00 00 00 |...N.-....>.....|
000001d0 a9 1c fd 4e 16 2d 0f 00 00 00 00 00 00 00 00 00 |...N.-..........|
000001e0 aa 1c fd 4e b1 9a 05 00 01 00 3d 00 01 00 00 00 |...N......=.....|
000001f0 aa 1c fd 4e ce 9a 05 00 01 00 3d 00 00 00 00 00 |...N......=.....|
00000200 aa 1c fd 4e d5 9a 05 00 00 00 00 00 00 00 00 00 |...N............|
My question is, how do I know how to translate this string to the coordinate data from the mouse pointer?
Most USB input devices conform to the USB HID specification. The Xorg evdev(4) driver should be able to handle nearly any pointing device without extra configuration.
If you're writing your own driver, libusb might be a good starting point.
You need to find the periodicity and the size (in bytes) of the x and y coordinates.
You could write a program that calculates how often the coordinates are written (while moving the device). Then you have to calibrate: move the pointer and watch the coordinates change. That is broadly how I would do it.
Trial and error maybe? You know your screen's resolution so that may help.
You could try putting the mouse pointer in the top left corner (0, 0) and recording what data you get. Hopefully it should not change if you try to move further past the screen edge (or the data repeats). Then move it to the lower right corner and record what data you get there. Again, you're hoping that the values don't change if you try to move off the screen. Then you can look at the data, fiddle with endianness until the values look right, and figure out if there's any scaling going on.

Are "#define new DEBUG_NEW" and "#undef THIS_FILE" etc. actually necessary?

When you create a new MFC application, the wizard creates the following block of code in almost every CPP file:
#ifdef _DEBUG
#define new DEBUG_NEW
#endif
and sometimes it also adds this:
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
I would like to remove this code from my CPP files if it is redundant. I am using an MFC app with C++/CLI on VS2008.
I have tried running in Debug after deleting this code from a CPP, and it seems to work fine. "new"ing variables works fine, there are no leaks, and ASSERT dialogs show the correct filename and jump to the offending line.
Can anyone tell me what it does and whether it's safe to delete it?
It is perfectly safe to delete this. It's a debugging aid; leaving it in will generate better details in the warnings in the output window of any memory leaks you have when the program exits. If you delete it, you still get the memory leak report, but just without any details about where in your source code they occurred.
On Microsoft Visual C++ 2010, I can remove the whole block and put just one #define new DEBUG_NEW in a header, and I still get the right memory leak reports, e.g.:
Detected memory leaks!
Dumping objects ->
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {7508} normal block at 0x029B9598, 54 bytes long.
Data: < > E4 B8 C9 00 12 00 00 00 12 00 00 00 01 00 00 00
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {7501} normal block at 0x029B94A8, 28 bytes long.
Data: < > E4 B8 C9 00 05 00 00 00 05 00 00 00 01 00 00 00
f:\source\agent\agent\deviceid.cpp(21) : {7500} normal block at 0x029CDFC0, 8 bytes long.
Data: < > A8 95 9B 02 B8 94 9B 02
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\strcore.cpp(156) : {6786} normal block at 0x029C0D88, 160 bytes long.
Data: < G > E4 B8 C9 00 19 00 00 00 47 00 00 00 01 00 00 00
f:\source\agent\sysinfo\sysinfo.cpp(27) : {6733} normal block at 0x029B84D8, 92 bytes long.
Data: < > 00 00 00 00 00 10 00 00 00 00 01 00 FF FF FE 7F
Object dump complete.