I have written the following code to print the data in pin and pout to a file:
void run() {
    while ( in.readable() >= 17*11*12+SIZE_RSPACKET &&
            out.writable() >= 1 ) {
        u8 *pin = in.rd()+17*11*12, *pend = pin+SIZE_RSPACKET;
        u8 *pout = out.wr()->data;
        for ( int delay=17*11; pin<pend;
              ++pin,++pout,delay=(delay-17+17*12)%(17*12) )
            *pout = pin[-delay*12];
        in.read(SIZE_RSPACKET);
        out.written(1);
        /* Printing output after deinterleaving to file: need to turn this into binary */
        FILE *F;                                  // Create file
        F = fopen("Deinterleaving.txt", "wb");    // Open deinterleaving file
        for (int i = 0; i < SIZE_RSPACKET; i++) { // For every byte in the packet (204 bytes)
            // Print data coming in and going out (not binary) to the file
            fprintf(F, "%s %u %s %u \n", " Data coming in: ", pin[i], " Data Going Out: ", pout[i]);
        }
        fflush(F);
        fclose(F);
    }
}
This gives me the output:
Data coming in: 71 Data Going Out: 0
Data coming in: 99 Data Going Out: 0
Data coming in: 46 Data Going Out: 0
Data coming in: 84 Data Going Out: 0
Data coming in: 129 Data Going Out: 0
Data coming in: 134 Data Going Out: 0
Data coming in: 1 Data Going Out: 0
Data coming in: 101 Data Going Out: 0
Data coming in: 15 Data Going Out: 1
How can I convert this to its binary counterpart for every 8 packets so that the output will look like the following?
What the output should look like (i.e. 1 block = 8 packets of 204 bytes each):
71 99 46 84 129 134 1 101 -> 01000111 01100011 00101110 01010100 10000001 10000110 00000001 01100101
Use fwrite for writing the internal representation of data to a file:
fwrite(&pin[0], 1, SIZE_RSPACKET, F); // note: sizeof(pin) would only give the size of the pointer
Also, open the file in binary mode to avoid translations, such as the value 0x0a being replaced by 0x0d, 0x0a.
fprintf is for transforming the internal representation into human-readable form.
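Putting the two together, here is a minimal sketch (dump_packet and its parameters are hypothetical names, not part of the original code); std::bitset<8> is one convenient way to render each byte as the 8 binary digits shown in the expected output:
#include <bitset>
#include <cstdio>
#include <iostream>

// Hypothetical helper: pkt points at one deinterleaved packet of len bytes.
void dump_packet(const unsigned char* pkt, std::size_t len) {
    FILE* f = fopen("Deinterleaving.bin", "wb"); // binary mode, no newline translation
    if (!f) return;
    fwrite(pkt, 1, len, f);                      // raw internal representation
    fclose(f);
    for (std::size_t i = 0; i < len; i++)        // e.g. 71 -> 01000111
        std::cout << std::bitset<8>(pkt[i]) << ' ';
    std::cout << '\n';
}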
I implemented a network video player (like VLC) using ffmpeg, but it cannot decode the AAC audio stream received from an IP camera. It can decode other audio streams such as G711 and G726. I set the codec ID to AV_CODEC_ID_AAC and I set the channels and sample rate of the AVCodecContext, but avcodec_decode_audio4 fails with an error code of INVALID_DATA. I checked previously asked questions and tried to add extradata to the AVCodecContext using the media-format-specific parameter "config=1408", setting the extradata to the two bytes "20" and "8", but that did not work either. I appreciate any help, thanks.
IP CAMERA SDP:
a=rtpmap:96 mpeg4-generic/16000/1
a=fmtp:96 streamtype=5; profile-level-id=5; mode=AAC-hbr; config=1408; SizeLength=13; IndexLength=3; IndexDeltaLength=3
AVCodec* decoder = avcodec_find_decoder((::AVCodecID)id); // set as AV_CODEC_ID_AAC
AVCodecContext* decoderContext = avcodec_alloc_context3(decoder);
char* test = (char*)System::Runtime::InteropServices::Marshal::StringToHGlobalAnsi("1408").ToPointer();
unsigned int length;
uint8_t* extradata = parseGeneralConfigStr(test, length); // it is set as 0x14 and 0x08
decoderContext->channels = number_of_channels; // set as 1
decoderContext->sample_rate = sample_rate; // set as 16000
decoderContext->channel_layout = AV_CH_LAYOUT_MONO;
decoderContext->codec_type = AVMEDIA_TYPE_AUDIO;
decoderContext->extradata = (uint8_t*)av_malloc(AV_INPUT_BUFFER_PADDING_SIZE + length);
memcpy(decoderContext->extradata, extradata, length);
memset(decoderContext->extradata + length, 0, AV_INPUT_BUFFER_PADDING_SIZE);
Did you check the data for INVALID_DATA?
You can check it against the RFC:
RFC3640 (3.2 RTP Payload Structure)
The AAC payload can be separated as below:
AU-Header | Size Info | ADTS | Data
Example payload: 00 10 0c 00 ff f1 60 40 30 01 7c 01 30 35 ac
According to the configs that you shared:
AU-size (SizeLength=13)
AU-Index / AU-Index-delta (IndexLength=3/IndexDeltaLength=3)
The length in bits of the AU-Header is 13 (SizeLength) + 3 (IndexLength/IndexDeltaLength) = 16.
AU-Header: 00 10
You should use the AU-size (SizeLength) value for the Size Info.
AU-size: indicates the size in octets of the associated Access Unit in the Access Unit Data Section in the same RTP packet.
The first 13 (SizeLength) bits, 0000000000010, equal 2, so read 2 octets for the Size Info.
Size Info: 0c 00
ADTS: ff f1 60 40 30 01 7c
ADTS Parser
ID: MPEG-4
MPEG Layer: 0
CRC checksum absent: 1
Profile: Low Complexity profile (AAC LC)
Sampling frequency: 16000
Private bit: 0
Channel configuration: 1
Original/copy: 0
Home: 0
Copyright identification bit: 0
Copyright identification start: 0
AAC frame length: 384
ADTS buffer fullness: 95
No raw data blocks in frame: 0
Data starts with 01 30 35 ac.
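As a rough illustration of the parsing described above (a sketch following this answer's breakdown, not a full RFC3640 depayloader), the following extracts the 13-bit AU-size and 3-bit index from the first two payload bytes:
#include <cstdint>
#include <cstdio>

int main() {
    // Example payload from above
    const uint8_t payload[] = {0x00, 0x10, 0x0c, 0x00, 0xff, 0xf1, 0x60, 0x40,
                               0x30, 0x01, 0x7c, 0x01, 0x30, 0x35, 0xac};
    // 16-bit AU-Header = 13 bits (SizeLength) + 3 bits (IndexLength)
    uint16_t auHeader = (payload[0] << 8) | payload[1];
    uint16_t auSize   = auHeader >> 3;  // top 13 bits -> 2
    uint16_t auIndex  = auHeader & 0x7; // bottom 3 bits -> 0
    printf("AU-size=%u AU-Index=%u\n", (unsigned)auSize, (unsigned)auIndex);
    return 0;
}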
I am trying to receive some data from the network using UDP and parse it.
Here is the code:
char recvline[1024];
int n = recvfrom(sockfd, recvline, 1024, 0, NULL, NULL);
for (int i = 0; i < n; i++)
    cout << hex << static_cast<short int>(recvline[i]) << " ";
It printed the output:
19 ffb0 0 0 ff88 d 38 19 48 38 0 0 2 1 3 1 ff8f ff82 5 40 20 16 6 6 22 36 6 2c 0 0 0 0 0 0 0 0
But I am expecting output like:
19 b0 0 0 88 d 38 19 48 38 0 0 2 1 3 1 8f 82 5 40 20 16 6 6 22 36 6 2c 0 0 0 0 0 0 0 0
The ff shouldn't be there in the printed output.
Actually I have to parse this data based on each character, like:
parseCommand(recvline);
and the parse code looks like:
void parseCommand(char *msg) {
    int commId = *(msg + 1);
    switch (commId) {
        case 0xb0: // do some operation
            break;
        case 0x20: // do another operation
            break;
    }
}
And while debugging I am getting commId = -80 in the watch window.
Note:
On Linux I am getting the expected output with this code; note that I have used unsigned char instead of char for the read buffer.
unsigned char recvline[1024];
int n=recvfrom(sockfd,recvline,1024,0,NULL,NULL);
Whereas on Windows recvfrom() does not accept the second argument as unsigned and gives a build error, so I chose char.
Looks like you might be getting the correct values, but your cast to short int during printing sign-extends your char value, causing ff to be propagated to the top byte if the top bit of your char is 1 (i.e. it is negative). You should first cast it to an unsigned type, then extend to int, so you need 2 casts:
cout << hex << static_cast<short int>(static_cast<uint8_t>(recvline[i]))<<" ";
I have tested this and it behaves as expected.
In response to your added note: the data read is fine; it is a matter of how you interpret it. To parse it correctly you should use:
uint8_t commId = static_cast<uint8_t>(*(msg + 1));
switch (commId) {
    case 0xb0: // do some operation
        break;
    case 0x20: // do another operation
        break;
}
Because you store your data in a signed data type, conversions/promotions to bigger data types will first sign-extend the value (filling the high-order bits with the value of the MSB), even if it then gets converted to an unsigned data type.
One solution is to define recvline as uint8_t[] in the first place and cast it to char* when passing it to the recvfrom function. That way, you only have to cast it once and you are using the same code in your Windows and Linux versions. Also, uint8_t[] is (at least to me) a clear indication that you are using the array as raw memory instead of a string of some kind.
Another possibility is to simply perform a bitwise AND: (recvline[i] & 0xff). Thanks to automatic integral promotion this doesn't even require a cast.
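A minimal sketch combining both suggestions (sockfd is assumed to be an already-set-up UDP socket, as in the question):
uint8_t recvline[1024];
// The cast satisfies the Windows recvfrom signature; the buffer stays unsigned.
int n = recvfrom(sockfd, reinterpret_cast<char*>(recvline),
                 sizeof(recvline), 0, NULL, NULL);
for (int i = 0; i < n; i++)
    cout << hex << static_cast<int>(recvline[i]) << " ";
// With a plain char buffer, the bitwise AND achieves the same:
// cout << hex << (recvline[i] & 0xff) << " ";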
Personal note:
It is really annoying that the C and C++ standards don't provide a separate type for raw memory (yet), but with any luck we'll get a byte type in a future standard revision.
So I have a C++ function that takes in a string and a flag and writes them to a log according to the flag. After appropriately filling a char buffer that will be written to the file, I call fopen. This fopen crashes consistently (for the most part) on certain random input. Here's the code:
int log_command(char* source, int flag)
{
    char *log_file_name = "db.log";
    char *buffer = NULL;
    int rc = 0;
    SYSTEMTIME st;
    FILE *fhandle = NULL;
    switch (flag) {
        case 0:
            buffer = (char*)calloc(1, strlen(source) + 18 /* 18: size for timestamp, quotes and \0 */);
            GetSystemTime(&st);
            sprintf(buffer, "%04d%02d%02d%02d%02d%02d \"%s\"\n", st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond, source);
            break;
        case ROLLFORWARD:
            sprintf(buffer, "RF_START\n");
            break;
        case BACKUP:
            sprintf(buffer, "BACKUP %s", source);
            break;
    }
    printf("fopen attempt\n");
    // Print buffer info for stackoverflow
    printf("%s\n", buffer);
    print_mem(buffer, strlen(buffer));
    if ((fhandle = fopen(log_file_name, "a")) == NULL) { // Randomly crashes
        rc = FILE_OPEN_ERROR;
    }
    else {
        printf("fopen success\n");
        if (info) printf("Logging to %s: \"%s\" \n", log_file_name, buffer);
        fwrite(buffer, strlen(buffer), 1, fhandle);
        fclose(fhandle);
    }
    return rc;
}
When the buffer has the following text in it:
20160513050408 "insert into other values(120)"
and raw byte data like:
32 30 31 36 30 35 31 33 30 35 30 34 30 38 20 22 20160513050408 "
69 6e 73 65 72 74 20 69 6e 74 6f 20 6f 74 68 65 insert into othe
72 20 76 61 6c 75 65 73 28 31 32 30 29 22 0a r values(120)".
It'll crash consistently for a while... and then just work out of nowhere. When *source has, say, 4, 176, or most any other number instead of 120, it works just fine.
You are not allocating enough characters for buffer. As a consequence, you end up writing over memory that you are not supposed to, which leads to undefined behavior.
You are using the following format in the call to sprintf.
"%04d%02d%02d%02d%02d%02d \"%s\"\n"
That format string needs:
4 characters for st.wYear
2 characters for st.wMonth
2 characters for st.wDay
2 characters for st.wHour
2 characters for st.wMinute
2 characters for st.wSecond
1 character for the space character
1 character for "
strlen(source) characters for source
1 character for "
1 character for \n
1 character for the terminating null character.
You need at least strlen(source) + 19 characters.
Change
buffer = (char*)calloc(1, strlen(source)+ 18);
to
buffer = (char*)calloc(strlen(source) + 19, 1); // Make it 19 or higher.
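Alternatively, a sketch that avoids counting by hand: let snprintf measure the formatted length first (this assumes a C99/C++11-conforming snprintf, which returns the number of characters that would have been written):
int needed = snprintf(NULL, 0, "%04d%02d%02d%02d%02d%02d \"%s\"\n",
                      st.wYear, st.wMonth, st.wDay,
                      st.wHour, st.wMinute, st.wSecond, source);
buffer = (char*)calloc(needed + 1, 1); // +1 for the terminating null character
sprintf(buffer, "%04d%02d%02d%02d%02d%02d \"%s\"\n",
        st.wYear, st.wMonth, st.wDay,
        st.wHour, st.wMinute, st.wSecond, source);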
I've written a small Packet class consisting of some quint8s, quint16s, QByteArrays and a quint32, which has a toByteArray() method to return a serialized version of the object, conforming to a protocol specification.
Packet Spec
Protocol identifier [4 bytes] (Version 1 = 0x74697331)
Session ID, an MD5 hash salted with the timestamp + user IP [16 bytes]
Command ID [1 byte]
Argument size [2 bytes]
Args [1-8,192 bytes]
CRC-B (X-25) [2 bytes]
Most of the data serializes fine. The exception is the last quint16 (my crc), which seems to get clobbered.
I don't believe the problem is in the declaration of my class. I've re-created the serialization function in this code sample, which demonstrates the bug I'm seeing without my packet class (but with the same final QByteArray layout).
(More) Minimal reproducible testcase
#include <iostream>
#include <QByteArray>
#include <QDebug>
using namespace std;
int main()
{
QByteArray arr;
quint32 m_protocolId = 0x74697331;
QByteArray m_sessionId;
m_sessionId.resize(16);
m_sessionId.fill(0);
quint8 m_commandId = 0x1;
quint16 m_argsSize = 0x0e;
QByteArray args;
args.append("test").append('\0');
args.append("1234qwer").append('\0');
quint16 m_crc;
m_crc = 0xB5A2;
QDataStream out(&arr, QIODevice::WriteOnly);
out.setByteOrder(QDataStream::LittleEndian);
out << m_protocolId;
out.writeRawData(m_sessionId.data(), 16);
out << m_commandId;
out << m_argsSize;
out.writeRawData(args.data(), args.length());
out << m_crc;
foreach (char c, arr)
{
qDebug() << QString::number((int)c, 16).toAscii().data();
}
return 0;
}
Here's the output I get:
31
73
69
74
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
e
0
74
65
73
74
0
31
32
33
34
71
77
65
72
0
ffffffffffffffa2
ffffffffffffffb5
Those last two should be 0xa2, 0xb5. I guess it's some sort of alignment issue. Is there any way to correct this while still conforming to the packet spec?
I think you just need to tweak your debug output. Try:
..
foreach (unsigned char c, arr)
..
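In context, the tweaked loop would look like this; reading each element as unsigned char keeps QString::number from receiving a sign-extended negative int:
foreach (unsigned char c, arr)
{
    // c promotes to a non-negative int, so 0xa2 prints as "a2"
    qDebug() << QString::number(c, 16).toAscii().data();
}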
Despite my sincerest efforts, I cannot seem to locate the bug here. I am writing a vector to an ofstream. The vector contains binary data. However, for some reason, when a whitespace character (0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x20) is supposed to be written, it is skipped.
I have tried using iterators, and a direct ofstream::write().
Here is the code I'm using. I've commented out some of the other methods I've tried.
void
write_file(const std::string& file,
           std::vector<uint8_t>& v)
{
    std::ofstream out(file, std::ios::binary | std::ios::ate);
    if (!out.is_open())
        throw file_error(file, "unable to open");
    out.unsetf(std::ios::skipws);
    /* ostreambuf_iterator ...
    std::ostreambuf_iterator<char> out_i(out);
    std::copy(v.begin(), v.end(), out_i);
    */
    /* ostream_iterator ...
    std::copy(v.begin(), v.end(), std::ostream_iterator<char>(out, ""));
    */
    out.write((const char*) &v[0], v.size());
}
EDIT: And the code to read it back.
void
read_file(const std::string& file,
          std::vector<uint8_t>& v)
{
    std::ifstream in(file);
    v.clear();
    if (!in.is_open())
        throw file_error(file, "unable to open");
    in.unsetf(std::ios::skipws);
    std::copy(std::istream_iterator<char>(in), std::istream_iterator<char>(),
              std::back_inserter(v));
}
Here is an example input:
30 0 0 0 a 30 0 0 0 7a 70 30 0 0 0 32 73 30 0 0 0 2 71 30 0 0 4 d2
And this is the output I am getting when I read it back:
30 0 0 0 30 0 0 0 7a 70 30 0 0 0 32 73 30 0 0 0 2 71 30 0 0 4 d2
As you can see, 0x0a is being omitted, ostensibly because it's whitespace.
Any suggestions would be greatly appreciated.
You forgot to open the file in binary mode in the read_file function.
Rather than mucking around with writing vector<>s directly, boost::serialization is a more effective approach, using boost::archive::binary_oarchive.
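A minimal sketch of that approach (note that the archive adds its own header and element count, so the file must be read back with binary_iarchive rather than raw reads):
#include <fstream>
#include <string>
#include <vector>
#include <cstdint>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/vector.hpp>

void write_file_boost(const std::string& file, const std::vector<uint8_t>& v)
{
    std::ofstream out(file, std::ios::binary);
    boost::archive::binary_oarchive oa(out);
    oa << v; // archives length + raw bytes, no whitespace interpretation on either end
}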
I think 0x0a is treated as a newline. I still have to think about how to get around this.
The istream_iterator by design skips whitespace. Try replacing your std::copy with this:
std::copy(
std::istreambuf_iterator<char>(in),
std::istreambuf_iterator<char>(),
std::back_inserter(v));
The istreambuf_iterator goes directly to the streambuf object, which will avoid the whitespace processing you're seeing.
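Putting the two fixes together, a sketch of the corrected read_file (binary mode plus istreambuf_iterator; file_error is the exception type from the question):
void
read_file(const std::string& file,
          std::vector<uint8_t>& v)
{
    std::ifstream in(file, std::ios::binary); // binary mode: no newline translation
    v.clear();
    if (!in.is_open())
        throw file_error(file, "unable to open");
    std::copy(std::istreambuf_iterator<char>(in), // reads raw chars, skips nothing
              std::istreambuf_iterator<char>(),
              std::back_inserter(v));
}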