Details: Ubuntu 14.04 (LTS), Python 2.7
I want to write hex data to a text file, so I wrote this code:
import numpy as np
width = 28
height = 28
num = 10
info = np.array([num, width, height]).reshape(1,3)
info = info.astype(np.int32)
newfile = open('test.txt', 'w')
newfile.write(info)
newfile.close()
I expected it to look like this:
00 00 00 0A 00 00 00 1C 00 00 00 1C
But this is my actual result:
0A 00 00 00 1C 00 00 00 1C 00 00 00
Why did this happen and how can I get my expected output?
Your machine is little-endian, so the int32 values were written least significant byte first, which is why the 0A comes out first. If you want big-endian binary data, call astype(">i") and then tostring():
import numpy as np
width = 28
height = 28
num = 10
info = np.array([num, width, height]).reshape(1,3)
info = info.astype(np.int32)
newfile = open('test.txt', 'wb')  # binary mode, not text mode
newfile.write(info.astype(">i").tostring())
newfile.close()
If you want hex text:
" ".join("{:02X}".format(x) for x in info.astype(">i").tostring())
The output:
00 00 00 0A 00 00 00 1C 00 00 00 1C
Related
I wrote a very simple function that saves a file in binary mode using Qt. The file is saved correctly and the data inside is correct; however, if I open the file with a text editor I can read strings that I shouldn't be able to read.
void Game::saveRanking() const
{
    QFile file(".ranking.dat");
    file.open(QFile::WriteOnly);

    QJsonObject recordObject;

    QJsonArray rankingNameArray;
    for (int i = 0; i < 5; i++)
        rankingNameArray.push_back(QJsonValue::fromVariant(QVariant(highscoreName[i])));
    recordObject.insert("Ranking Name", rankingNameArray);

    QJsonArray rankingScoreArray;
    for (int i = 0; i < 5; i++)
        rankingScoreArray.push_back(QJsonValue::fromVariant(QVariant(highscoreValue[i])));
    recordObject.insert("Ranking Value", rankingScoreArray);

    QJsonDocument doc(recordObject);
    file.write(doc.toBinaryData());
}
I've filled the arrays like this, for debugging purposes:
highscoreName[0] = "Pippo"; highscoreValue[0] = 100;
highscoreName[1] = "Franco"; highscoreValue[1] = 300;
highscoreName[2] = "Giovanni"; highscoreValue[2] = 200;
highscoreName[3] = "Andrea"; highscoreValue[3] = 4000;
highscoreName[4] = "AI"; highscoreValue[4] = 132400;
I did a hexdump and the result is the following:
0000-0010: 71 62 6a 73-01 00 00 00-a4 00 00 00-05 00 00 00 qbjs.... ........
0000-0020: 9c 00 00 00-14 04 00 00-0c 00 52 61-6e 6b 69 6e ........ ..Rankin
0000-0030: 67 20 4e 61-6d 65 00 00-48 00 00 00-0a 00 00 00 g.Name.. H.......
0000-0040: 34 00 00 00-05 00 50 69-70 70 6f 00-06 00 46 72 4.....Pi ppo...Fr
0000-0050: 61 6e 63 6f-08 00 47 69-6f 76 61 6e-6e 69 00 00 anco..Gi ovanni..
0000-0060: 06 00 41 6e-64 72 65 61-02 00 41 49-8b 01 00 00 ..Andrea ..AI....
0000-0070: 8b 02 00 00-8b 03 00 00-0b 05 00 00-0b 06 00 00 ........ ........
0000-0080: 94 0f 00 00-0d 00 52 61-6e 6b 69 6e-67 20 56 61 ......Ra nking.Va
0000-0090: 6c 75 65 00-20 00 00 00-0a 00 00 00-0c 00 00 00 lue..... ........
0000-00a0: 8a 0c 00 00-8a 25 00 00-0a 19 00 00-0a f4 01 00 .....%.. ........
0000-00ac: 0a a6 40 00-0c 00 00 00-68 00 00 00 ..#..... h...
toBinaryData() returns the document's internal binary representation, not text. String values are stored verbatim inside that binary format, so it is normal that you can read them in the file.
From the docs:

QByteArray QJsonDocument::toBinaryData() const

Returns a binary representation of the document.

It is also deprecated:

This function is obsolete. It is provided to keep old source code working. We strongly advise against using it in new code.
You should use toJson().
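For example, a minimal sketch of the change in saveRanking(), assuming the rest of the function stays as posted (QJsonDocument::Compact keeps the file small; QJsonDocument::Indented makes it human-readable):

    QJsonDocument doc(recordObject);
    file.write(doc.toJson(QJsonDocument::Compact));

The file can later be read back with QJsonDocument::fromJson(file.readAll()).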
In my Qt code I have a function f1() that emits a signal with a char array of binary data as a parameter.
My problem is that the slot connected to this signal receives the array, but the data is incomplete: it only gets the data up to the first 0x00 byte.
I tried changing the char[] to char*, but it didn't help.
How can I receive the full data, including the 0x00 bytes?
connect(dataStream, &BaseConnection::GotPacket, this, &myClass::HandleNewPacket);
void f1()
{
    qDebug() << "Binary read = " << inStream.readRawData(logBuffer, static_cast<int>(frmIndex->dataSize));
    // logBuffer contains the following hex bytes: "10 10 01 30 00 00 30 00 00 00 01 00 D2 23 57 A5 38 A2 05 00 E8 03 00 00 6C E9 01 00 00 00 00 00 0B 00 00 00 00 00 00 00 A6 AF 01 00 00 00 00 00"
    Q_EMIT GotPacket(logBuffer, frmIndex->dataSize);
}

void myClass::HandleNewPacket(char p[LOG_BUFFER_SIZE], int size)
{
    // p contains the following hex bytes: "10 10 01 30"
}
Thank you.
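A common Qt idiom, sketched here under the assumption that the signal signature can be changed, is to pass the buffer as a QByteArray: it stores its size explicitly, so embedded 0x00 bytes can never be mistaken for a string terminator along the way.

// Hypothetical signal declaration: void GotPacket(const QByteArray &packet);
void f1()
{
    inStream.readRawData(logBuffer, static_cast<int>(frmIndex->dataSize));
    // QByteArray(data, size) copies exactly 'size' bytes, 0x00 included.
    Q_EMIT GotPacket(QByteArray(logBuffer, static_cast<int>(frmIndex->dataSize)));
}

void myClass::HandleNewPacket(const QByteArray &packet)
{
    // packet.size() is the full dataSize; packet.constData() gives the bytes.
}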
I am using OpenSSL in my project. When I exit my application I get "Detected memory leaks!" in Visual Studio 2013.
Detected memory leaks!
Dumping objects ->
{70202} normal block at 0x056CB738, 12 bytes long.
Data: <8 j > 38 E8 6A 05 00 00 00 00 04 00 00 00
{70201} normal block at 0x056CB6E8, 16 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70200} normal block at 0x056CB698, 20 bytes long.
Data: < l > 00 00 00 00 E8 B6 6C 05 00 00 00 00 04 00 00 00
{70199} normal block at 0x056AE838, 12 bytes long.
Data: < l > 04 00 00 00 98 B6 6C 05 00 00 00 00
{70198} normal block at 0x056CB618, 64 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{70197} normal block at 0x056CB578, 96 bytes long.
Data: < l 3 3 > 18 B6 6C 05 00 FE C0 33 C0 FD C0 33 08 00 00 00
Object dump complete.
When I add
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
_CrtSetBreakAlloc(70202);
to the main function, I always get a breakpoint at the allocation of the X509 store, no matter which of the 6 numbers (70202, ...) I set the breakpoint for.
I initialize and free the X509 store in a class's constructor and destructor (see below).
Is there anything else I need to look out for when using the X509_STORE?
Foo::CSCACerts::CSCACerts(void)
{
    m_store = X509_STORE_new();
}

Foo::CSCACerts::~CSCACerts(void)
{
    X509_STORE_free(m_store);
}
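One thing worth checking, independent of the store itself: OpenSSL 1.0.x keeps global tables (error strings, cipher/digest registrations, ex_data) allocated until you explicitly release them, and the CRT debug heap reports anything still allocated at exit as a leak. A sketch of the usual process-exit cleanup, assuming OpenSSL 1.0.x:

#include <openssl/conf.h>
#include <openssl/err.h>
#include <openssl/evp.h>

void shutdownOpenSSL()  // hypothetical helper, call once at program exit
{
    CONF_modules_unload(1);        // configuration module data
    EVP_cleanup();                 // cipher/digest tables
    CRYPTO_cleanup_all_ex_data();  // ex_data structures
    ERR_free_strings();            // error string tables
}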
I'm trying to use the MP3 encoder MFT on Windows 10 Pro Insider Preview, but I fail to set the output media type.
Below is my code:
// Fill in MPEGLAYER3WAVEFORMAT data
MPEGLAYER3WAVEFORMAT mp3wfx;
ZeroMemory(&mp3wfx, sizeof(mp3wfx));
mp3wfx.wID = MPEGLAYER3_ID_MPEG;
mp3wfx.fdwFlags = 2; // no padding
mp3wfx.nBlockSize = int16_t(144 * (128000 / 44100)); // bitrate = 128 kbps (128000 bps)
mp3wfx.nFramesPerBlock = 1;
mp3wfx.nCodecDelay = 0;
mp3wfx.wfx.wFormatTag = WAVE_FORMAT_MPEGLAYER3; // MP3
mp3wfx.wfx.nChannels = 2;
mp3wfx.wfx.nSamplesPerSec = 44100;
mp3wfx.wfx.wBitsPerSample = 16;
mp3wfx.wfx.nBlockAlign = (mp3wfx.wfx.nChannels * mp3wfx.wfx.wBitsPerSample) / 8;
mp3wfx.wfx.nAvgBytesPerSec = mp3wfx.wfx.nSamplesPerSec * mp3wfx.wfx.nBlockAlign;
mp3wfx.wfx.cbSize = sizeof(MPEGLAYER3WAVEFORMAT) - sizeof(WAVEFORMATEX); // 12
LComObject<IMFMediaType> ciOutputType; // Output media type of the encoder
hr = fpMFCreateMediaType((IMFMediaType**)(ciOutputType.GetAssignablePtrRef()));
WAVEFORMATEX* pWave = (WAVEFORMATEX*)&mp3wfx;
MFInitMediaTypeFromWaveFormatEx(ciOutputType.get(), pWave, sizeof(MPEGLAYER3WAVEFORMAT));
hr = ciEncoder->SetOutputType(0, ciOutputType.get(), 0);
Please ignore the wrappers I have around those COM objects/interfaces.
mftrace outputs this:
5552,500 05:05:15.03439 CMFPlatExportDetours::MFTEnumEx # Activate 00 #034CE628, MFT_FRIENDLY_NAME_Attribute=MP3 Encoder ACM Wrapper MFT;MFT_INPUT_TYPES_Attributes=61 75 64 73 00 00 10 00 80 00 00 aa 00 38 9b 71 01 00 00 00 00 00 10 00 80 00 00 aa 00 38 9b 71 ;MFT_TRANSFORM_CLSID_Attribute={11103421-354C-4CCA-A7A3-1AFF9A5B6701};MFT_OUTPUT_TYPES_Attributes=61 75 64 73 00 00 10 00 80 00 00 aa 00 38 9b 71 55 00 00 00 00 00 10 00 80 00 00 aa 00 38 9b 71 ;MF_TRANSFORM_FLAGS_Attribute=1;MF_TRANSFORM_CATEGORY_Attribute=MFT_CATEGORY_AUDIO_ENCODER
5552,500 05:05:15.03616 COle32ExportDetours::CoCreateInstance # New MFT #03510D20, <NULL>
5552,500 05:05:15.03618 COle32ExportDetours::CoCreateInstance # Created {11103421-354C-4CCA-A7A3-1AFF9A5B6701} MP3 ACM Wrapper MFT (C:\Windows\System32\mfcore.dll) #03510D20 - traced interfaces: IMFTransform #03510D20,
5552,500 05:05:15.03681 CMFTransformDetours::SetOutputType #03510D20 Failed MT: MF_MT_AUDIO_AVG_BYTES_PER_SECOND=176400;MF_MT_AUDIO_BLOCK_ALIGNMENT=4;MF_MT_AUDIO_NUM_CHANNELS=2;MF_MT_MAJOR_TYPE=MEDIATYPE_Audio;MF_MT_AUDIO_SAMPLES_PER_SECOND=44100;MF_MT_AUDIO_PREFER_WAVEFORMATEX=1;MF_MT_USER_DATA=01 00 02 00 00 00 20 01 01 00 00 00 ;MF_MT_AUDIO_BITS_PER_SAMPLE=16;MF_MT_SUBTYPE=MFAudioFormat_MP3
Thanks for the help.
So it says:
SetOutputType #03510D20 Failed MT:
MF_MT_MAJOR_TYPE=MEDIATYPE_Audio;
MF_MT_SUBTYPE=MFAudioFormat_MP3;
MF_MT_AUDIO_PREFER_WAVEFORMATEX=1;
MF_MT_AUDIO_SAMPLES_PER_SECOND=44100;
MF_MT_AUDIO_NUM_CHANNELS=2;
MF_MT_AUDIO_BITS_PER_SAMPLE=16;
MF_MT_AUDIO_AVG_BYTES_PER_SECOND=176400;
MF_MT_AUDIO_BLOCK_ALIGNMENT=4;
MF_MT_USER_DATA=01 00 02 00 00 00 20 01 01 00 00 00 ;
There are two important things here:
MF_MT_AUDIO_AVG_BYTES_PER_SECOND needs a correct value, because the encoder only accepts a fixed list of bitrates. Your 176400 is the PCM byte rate (44100 * 4) taken from the input side; the output type needs the compressed byte rate, e.g. 128000 / 8 = 16000 for 128 kbps.
MF_MT_USER_DATA is additional format data for the media type, presumably the MPEGLAYER3WAVEFORMAT values that follow the WAVEFORMATEX fields; the MFT might expect certain values that differ from those you provide.
You took the correct approach of asking the MFT to enumerate output media types. Now pick the closest one and compare it to what you have; you should see the difference that makes the codec reject your format.
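A sketch of that enumeration, assuming ciEncoder is the transform from your snippet (error handling elided):

IMFMediaType* pAvailable = NULL;
for (DWORD i = 0; ciEncoder->GetOutputAvailableType(0, i, &pAvailable) == S_OK; i++)
{
    UINT32 avgBytesPerSec = 0;
    pAvailable->GetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, &avgBytesPerSec);
    // For 128 kbps this reads 16000, not the PCM rate 176400.
    pAvailable->Release();
    pAvailable = NULL;
}
// The loop stops when GetOutputAvailableType returns MF_E_NO_MORE_TYPES.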
I have a weird issue with creating a bitmap in C++. I'm using the BITMAPFILEHEADER and BITMAPINFOHEADER structures to create an 8-bit grayscale image. The bitmap data comes from a camera over DMA as unsigned char and has exactly the expected length. But after saving the image and opening it, it contains colors?!
The way it should be: http://www.freeimagehosting.net/qd1ku
The way it is: http://www.freeimagehosting.net/83r1s
Do you have any idea where this is coming from?
The header of the bitmap is:
42 4D 36 00 04 00 00 00 00 00 36 00 00 00 28 00
00 00 00 02 00 00 00 02 00 00 01 00 08 00 00 00
00 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00
Decoded, file header first:

42 4D        "BM" - it's a bitmap
36 00 04 00  File size = 0x00040036; minus the 0x36 bytes of headers = 0x40000 = 512*512
00 00 00 00  Reserved
36 00 00 00  Offset to pixel data = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER)
28 00 00 00  sizeof(BITMAPINFOHEADER)
00 02 00 00  Width = 0x200 = 512 px
00 02 00 00  Height, same
01 00        Planes = 1 (must be 1)
08 00        Color depth = 8 bit
00 00 00 00  Compression: 0 = none
00 00 00 00  Image size; may be zero for uncompressed images
00 00 00 00  X pixels per meter, may be left 0
00 00 00 00  Y pixels per meter, may be left 0
00 00 00 00  biClrUsed: if zero, all 256 colors are used
00 00 00 00  biClrImportant: if zero, all colors are required
Do you have any idea where this comes from?
Under Windows, if you do not supply a palette for your 8-bit image, a system default one is provided for you. I do not recall offhand the Win32 way to add a palette, but it should be as simple as creating a 256-entry table where the value of each entry is the same as its index, writing it out to your file at the appropriate point, and updating the offset parameter, etc.
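For a .bmp file on disk, the palette is a table of 256 RGBQUAD entries sitting between the headers and the pixel data, and the offset field has to grow by sizeof(palette) = 1024 bytes. A sketch under those assumptions (writeGrayscaleBmp is a hypothetical helper; error handling elided):

#include <windows.h>
#include <cstdio>

void writeGrayscaleBmp(FILE* f, BITMAPFILEHEADER fh, BITMAPINFOHEADER ih,
                       const unsigned char* pixels, size_t pixelCount)
{
    // Gray ramp: entry i has B = G = R = i.
    RGBQUAD palette[256];
    for (int i = 0; i < 256; ++i)
    {
        palette[i].rgbBlue = palette[i].rgbGreen = palette[i].rgbRed = (BYTE)i;
        palette[i].rgbReserved = 0;
    }

    // Pixel data now starts after the palette, so adjust the offsets.
    fh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + sizeof(palette);
    fh.bfSize += sizeof(palette);
    ih.biClrUsed = 256;

    fwrite(&fh, sizeof(fh), 1, f);
    fwrite(&ih, sizeof(ih), 1, f);
    fwrite(palette, sizeof(palette), 1, f);
    fwrite(pixels, 1, pixelCount, f);
}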