I am attempting to send hexadecimal bytes to a serial COM port. The issue is that the method that sends the command apparently wants a System string instead of an integer (error C2664: "cannot convert parameter 1 from 'int' to 'System::String ^'"). I have looked for a way to send an integer instead but have had no luck. (I have tried sending string representations of the hexadecimal values, but the device did not recognize the commands.)
Main part of the code:
private: System::Void poll_Click(System::Object^ sender, System::EventArgs^ e)
{
    int i, end;
    double a = 1.58730159;
    String^ portscan = "port";
    String^ translate;
    std::string portresponse[65];
    std::fill_n(portresponse, 65, "Z");
    for (i = 1; i < 64; i++)
    {
        if (this->_serialPort->IsOpen)
        {
            // Command 0 generator
            int y = 2;
            y += i;
            int command0[10] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x02, dectohex(i), 0x00, 0x00, dectohex(y)};
            for (end = 0; end < 10; end++)
            {
                this->_serialPort->WriteLine(command0[end]);
            }
            translate = (this->_serialPort->ReadLine());
            MarshalString(translate, portresponse[i]);
            if (portresponse[i] != "Z")
            {
                comboBox7->Items->Add(i);
            }
            this->progressBar1->Value = a;
            a += 1.58730159;
        }
    }
}
Here is the function dectohex:
int dectohex(int i)
{
    int x = 0;
    char hex_array[10];
    sprintf(hex_array, "0x%02X", i);
    string hex_string(hex_array);
    // Note: atoi() stops at the first non-digit, so for a string like
    // "0xBF" it parses only the leading "0" and returns 0.
    x = atoi(hex_string.c_str());
    return x;
}
This is what solved my problem, courtesy of Jochen Kalmbach:
auto data = gcnew array<System::Byte> { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x02, 0xBF, 0x00, 0x00, 0xBD };
_serialPort->Write(data, 0, data->Length);
Replaced this:
this->_serialPort->WriteLine(command0[end]);
You cannot send an integer over a serial line... you can only send BYTES (7-8 bits)!
You need to choose what you want to do:
Send characters: the "number" 12 will be converted into the bytes
_serialPort->Write((12).ToString());
// => 0x31, 0x32 (the ASCII codes for '1' and '2')
Send the integer (4 bytes) as little endian:
auto data = System::BitConverter::GetBytes(12);
_serialPort->Write(data, 0, data->Length);
// => 0x0c, 0x00, 0x00, 0x00
Or write just a single byte:
auto data = gcnew array<System::Byte> { 12 };
_serialPort->Write(data, 0, data->Length);
// => 0x0c
Or write a byte array:
auto data = gcnew array<System::Byte> { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x02, 0xBF, 0x00, 0x00, 0xBD };
_serialPort->Write(data, 0, data->Length);
// => 0xFF 0xFF 0xFF 0xFF 0xFF 0x02 0xBF 0x00 0x00 0xBD
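Applying the last variant to the original polling loop might look like this; a sketch that reuses the loop variable i and the i + 2 trailing byte from the question, and drops the dectohex() conversion, which is unnecessary once raw bytes are written:
auto cmd = gcnew array<System::Byte>(10);
for (int k = 0; k < 5; k++)
    cmd[k] = 0xFF;                      // preamble
cmd[5] = 0x02;                          // command code
cmd[6] = (System::Byte)i;               // device address from the loop
cmd[7] = 0x00;
cmd[8] = 0x00;
cmd[9] = (System::Byte)(i + 2);         // trailing byte: 0x02 + address, as in the question
_serialPort->Write(cmd, 0, cmd->Length);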
Related
I am reading some data packets in Go whose fields are C++ data types. I tried parsing the data, but I am reading garbage values.
Here is a small example. The data spec sheet for a particular datatype is as follows in C++:
struct CarTelemetryData
{
    uint16 m_speed;
    uint8 m_throttle;
    int8 m_steer;
    uint8 m_brake;
    uint8 m_clutch;
    int8 m_gear;
    uint16 m_engineRPM;
    uint8 m_drs;
    uint8 m_revLightsPercent;
    uint16 m_brakesTemperature[4];
    uint16 m_tyresSurfaceTemperature[4];
    uint16 m_tyresInnerTemperature[4];
    uint16 m_engineTemperature;
    float m_tyresPressure[4];
};
And below is what I have defined in Go
type CarTelemetryData struct {
    Speed                   uint16
    Throttle                uint8
    Steer                   int8
    Brake                   uint8
    Clutch                  uint8
    Gear                    int8
    EngineRPM               uint16
    DRS                     uint8
    RevLightsPercent        uint8
    BrakesTemperature       [4]uint16
    TyresSurfaceTemperature [4]uint16
    TyresInnerTemperature   [4]uint16
    EngineTemperature       uint16
    TyresPressure           [4]float32
}
For the actual unmarshalling, I am doing this:
func decodePayload(dataStruct interface{}, payload []byte) {
    dataReader := bytes.NewReader(payload[:])
    binary.Read(dataReader, binary.LittleEndian, dataStruct)
}
payload := make([]byte, 2048)
s.conn.ReadFromUDP(payload[:])
telemetryData := &data.CarTelemetryData{}
s.PacketsRcvd += 1
decodePayload(telemetryData, payload)
I suspect that this is because the datatypes are not equivalent and there is some conversion issue while reading the bytes into Go data types, since they were originally packaged by C++. How can I deal with this?
Note: I don't have any control over the data that is sent; it comes from a third-party service.
The issue you're facing has to do with the alignment of struct members. You can read more about it here, but in short, the C++ compiler will sometimes add padding bytes in order to maintain the natural alignment expected by the architecture. If that alignment is not used, it may cause degraded performance or even an access violation.
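As a tiny illustration (assuming a typical ABI where std::uint16_t has 2-byte alignment):
#include <cstdint>

// One uint8_t followed by a uint16_t forces a padding byte so the
// uint16_t member stays 2-byte aligned.
struct Example {
    std::uint8_t  a;  // offset 0, then 1 padding byte
    std::uint16_t b;  // offset 2
};
static_assert(sizeof(Example) == 4, "1 data byte + 1 pad + 2 data bytes");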
For x86/x64, for example, the alignment of most types will usually (though it is not guaranteed) be the same as their size. We can see that
#include <cstddef>  // std::size_t
#include <cstdint>
#include <type_traits>

std::size_t offsets[] = {
    std::alignment_of_v<std::uint8_t>,
    std::alignment_of_v<std::uint16_t>,
    std::alignment_of_v<std::uint32_t>,
    std::alignment_of_v<std::uint64_t>,
    std::alignment_of_v<__uint128_t>,
    std::alignment_of_v<std::int8_t>,
    std::alignment_of_v<std::int16_t>,
    std::alignment_of_v<std::int32_t>,
    std::alignment_of_v<std::int64_t>,
    std::alignment_of_v<__int128_t>,
    std::alignment_of_v<float>,
    std::alignment_of_v<double>,
    std::alignment_of_v<long double>,
    std::alignment_of_v<void*>,
};
compiles to
offsets:
.quad 1
.quad 2
.quad 4
.quad 8
.quad 16
.quad 1
.quad 2
.quad 4
.quad 8
.quad 16
.quad 4
.quad 8
.quad 16
.quad 8
Due to these (and other) implementation details, it may be advisable not to rely on the internal representation. In some cases, however, other methods may not be fast enough (such as serializing field by field), or you may not be able to change the C++ code, like OP.
binary.Read expects packed data, but C++ will use padding. We need to either use a compiler-dependent directive such as #pragma pack(1) or add padding to the Go struct. The first is not an option for OP, so we'll use the second.
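For reference, the first option would look something like this on the C++ side (a sketch with a two-member example rather than OP's full struct, since it is not applicable here anyway):
#include <cstdint>

#pragma pack(push, 1)  // compiler-specific: suppress padding inside the struct
struct PackedExample {
    std::uint8_t  m_gear;       // offset 0
    std::uint16_t m_engineRPM;  // offset 1: no padding byte inserted
};
#pragma pack(pop)

static_assert(sizeof(PackedExample) == 3, "packed: no alignment padding");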
We can use the offsetof macro to determine the offset of a struct member relative to the start of the struct. We can do something like
#include <array>
#include <cstddef>
#include <cstdint>

using int8 = std::int8_t;
using uint8 = std::uint8_t;
using uint16 = std::uint16_t;

struct CarTelemetryData {
    uint16 m_speed;
    uint8 m_throttle;
    int8 m_steer;
    uint8 m_brake;
    uint8 m_clutch;
    int8 m_gear;
    uint16 m_engineRPM;
    uint8 m_drs;
    uint8 m_revLightsPercent;
    uint16 m_brakesTemperature[4];
    uint16 m_tyresSurfaceTemperature[4];
    uint16 m_tyresInnerTemperature[4];
    uint16 m_engineTemperature;
    float m_tyresPressure[4];
};

// C++ has no reflection (yet) so we need to list every member
constexpr auto offsets = std::array{
    offsetof(CarTelemetryData, m_speed),
    offsetof(CarTelemetryData, m_throttle),
    offsetof(CarTelemetryData, m_steer),
    offsetof(CarTelemetryData, m_brake),
    offsetof(CarTelemetryData, m_clutch),
    offsetof(CarTelemetryData, m_gear),
    offsetof(CarTelemetryData, m_engineRPM),
    offsetof(CarTelemetryData, m_drs),
    offsetof(CarTelemetryData, m_revLightsPercent),
    offsetof(CarTelemetryData, m_brakesTemperature),
    offsetof(CarTelemetryData, m_tyresSurfaceTemperature),
    offsetof(CarTelemetryData, m_tyresInnerTemperature),
    offsetof(CarTelemetryData, m_engineTemperature),
    offsetof(CarTelemetryData, m_tyresPressure),
};

constexpr auto sizes = std::array{
    sizeof(CarTelemetryData::m_speed),
    sizeof(CarTelemetryData::m_throttle),
    sizeof(CarTelemetryData::m_steer),
    sizeof(CarTelemetryData::m_brake),
    sizeof(CarTelemetryData::m_clutch),
    sizeof(CarTelemetryData::m_gear),
    sizeof(CarTelemetryData::m_engineRPM),
    sizeof(CarTelemetryData::m_drs),
    sizeof(CarTelemetryData::m_revLightsPercent),
    sizeof(CarTelemetryData::m_brakesTemperature),
    sizeof(CarTelemetryData::m_tyresSurfaceTemperature),
    sizeof(CarTelemetryData::m_tyresInnerTemperature),
    sizeof(CarTelemetryData::m_engineTemperature),
    sizeof(CarTelemetryData::m_tyresPressure),
};

constexpr auto computePadding() {
    std::array<std::size_t, offsets.size()> result;
    std::size_t expectedOffset = 0;
    for (std::size_t i = 0; i < offsets.size(); i++) {
        result.at(i) = offsets.at(i) - expectedOffset;
        expectedOffset = offsets.at(i) + sizes.at(i);
    }
    return result;
}

auto padding = computePadding();
which compiles to (constexpr FTW)
padding:
.quad 0
.quad 0
.quad 0
.quad 0
.quad 0
.quad 0
.quad 1
.quad 0
.quad 0
.quad 0
.quad 0
.quad 0
.quad 0
.quad 2
So, on x86-64, we need one padding byte before EngineRPM and two bytes before TyresPressure.
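As a quick cross-check of those numbers (same x86-64 ABI, using the CarTelemetryData struct and the offsetof machinery from above):
static_assert(offsetof(CarTelemetryData, m_engineRPM) == 8,
              "7 data bytes + 1 padding byte");
static_assert(offsetof(CarTelemetryData, m_tyresPressure) == 40,
              "38 data bytes + 2 padding bytes");
static_assert(sizeof(CarTelemetryData) == 56, "no tail padding needed");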
So, let's check if that works.
C++:
#include <cstddef>
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <span>

using int8 = std::int8_t;
using uint8 = std::uint8_t;
using uint16 = std::uint16_t;

struct CarTelemetryData {
    uint16 m_speed;
    uint8 m_throttle;
    int8 m_steer;
    uint8 m_brake;
    uint8 m_clutch;
    int8 m_gear;
    uint16 m_engineRPM;
    uint8 m_drs;
    uint8 m_revLightsPercent;
    uint16 m_brakesTemperature[4];
    uint16 m_tyresSurfaceTemperature[4];
    uint16 m_tyresInnerTemperature[4];
    uint16 m_engineTemperature;
    float m_tyresPressure[4];
};

int main() {
    CarTelemetryData data = {
        .m_speed = 1,
        .m_throttle = 2,
        .m_steer = 3,
        .m_brake = 4,
        .m_clutch = 5,
        .m_gear = 6,
        .m_engineRPM = 7,
        .m_drs = 8,
        .m_revLightsPercent = 9,
        .m_brakesTemperature = {10, 11, 12, 13},
        .m_tyresSurfaceTemperature = {14, 15, 16, 17},
        .m_tyresInnerTemperature = {18, 19, 20, 21},
        .m_engineTemperature = 22,
        .m_tyresPressure = {23, 24, 25, 26},
    };
    std::cout << "b := []byte{" << std::hex << std::setfill('0');
    for (auto byte : std::as_bytes(std::span(&data, 1))) {
        std::cout << "0x" << std::setw(2) << static_cast<unsigned>(byte)
                  << ", ";
    }
    std::cout << "}";
}
results in
b := []byte{0x01, 0x00, 0x02, 0x03, 0x04, 0x05, 0x06, 0x00, 0x07, 0x00, 0x08, 0x09, 0x0a, 0x00, 0x0b, 0x00, 0x0c, 0x00, 0x0d, 0x00, 0x0e, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x11, 0x00, 0x12, 0x00, 0x13, 0x00, 0x14, 0x00, 0x15, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0xb8, 0x41, 0x00, 0x00, 0xc0, 0x41, 0x00, 0x00, 0xc8, 0x41, 0x00, 0x00, 0xd0, 0x41, }
Let's use that in Go:
package main
import (
    "bytes"
    "encoding/binary"
    "fmt"
)

type CarTelemetryData struct {
    Speed                   uint16
    Throttle                uint8
    Steer                   int8
    Brake                   uint8
    Clutch                  uint8
    Gear                    int8
    _                       uint8
    EngineRPM               uint16
    DRS                     uint8
    RevLightsPercent        uint8
    BrakesTemperature       [4]uint16
    TyresSurfaceTemperature [4]uint16
    TyresInnerTemperature   [4]uint16
    EngineTemperature       uint16
    _                       uint16
    TyresPressure           [4]float32
}

func main() {
    b := []byte{0x01, 0x00, 0x02, 0x03, 0x04, 0x05, 0x06, 0x00, 0x07, 0x00, 0x08, 0x09, 0x0a, 0x00, 0x0b, 0x00, 0x0c, 0x00, 0x0d, 0x00, 0x0e, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x11, 0x00, 0x12, 0x00, 0x13, 0x00, 0x14, 0x00, 0x15, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0xb8, 0x41, 0x00, 0x00, 0xc0, 0x41, 0x00, 0x00, 0xc8, 0x41, 0x00, 0x00, 0xd0, 0x41}
    var dataStruct CarTelemetryData
    dataReader := bytes.NewReader(b[:])
    binary.Read(dataReader, binary.LittleEndian, &dataStruct)
    fmt.Printf("%+v", dataStruct)
}
which prints
{Speed:1 Throttle:2 Steer:3 Brake:4 Clutch:5 Gear:6 _:0 EngineRPM:7 DRS:8 RevLightsPercent:9 BrakesTemperature:[10 11 12 13] TyresSurfaceTemperature:[14 15 16 17] TyresInnerTemperature:[18 19 20 21] EngineTemperature:22 _:0 TyresPressure:[23 24 25 26]}
Take the padding bytes out and it fails.
I'm trying to cast 115-bit data to a union of bitfields and getting a wrong result.
Setup
I have two types of data:
Configuration: data[62:0]
addr[113:107]
type[114]
RawBits: lowBits[63:0]
highBits[115:64]
So I defined the following bitfields and union:
typedef struct RawBits {
    unsigned long int lowBits : 64;
    unsigned long int highBits : 51;
    unsigned long int reserved : 13;
} __attribute__((packed)) rawBits;

typedef struct Configuration {
    unsigned long int data : 62;
    unsigned long int addr : 7;
    unsigned long int type : 1;
    unsigned long int reserved : 58;
} __attribute__((packed)) Configuration;

typedef union Instruction {
    RawBits bits;
    Configuration configuration;
} __attribute__((packed)) Instruction;
As data I used:
uint8_t configuration_test[] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc,
                                0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xfc, 0x00};
And to convert the buffer to the union type I used simple casting:
Instruction *instruction = (Instruction *)configuration_test;
Expected result
instruction->bits.lowBits = 0xfffffffffffffffc
instruction->bits.highBits = 0x00000000000001fc
instruction->bits.reserved = 0x0000000000000000
instruction->configuration.data = 0x3fffffffffffffff
instruction->configuration.addr = 0x000000000000007f
instruction->configuration.type = 0x0000000000000000
instruction->configuration.reserved = 0x0000000000000000
Real result
instruction->bits.lowBits = 0xfcffffffffffffff
instruction->bits.highBits = 0x0004010000000000
instruction->bits.reserved = 0x000000000000001f
instruction->configuration.data = 0x3cffffffffffffff
instruction->configuration.addr = 0x0000000000000003
instruction->configuration.type = 0x0000000000000000
instruction->configuration.reserved = 0x0003f00400000000
I have recently been setting up various testing environments, and in this case I need to read and decode a gzip response from an HTTP server. I know what I have so far works, as I have tested it with Wireshark and hardcoded data as outlined below. My question is: what is wrong with how I am handling the gzipped data from an HTTP server?
Here is what I'm using:
From this thread http://www.qtcentre.org/threads/30031-qUncompress-data-from-gzip I am using the gzipDecompress function with the data provided, and I can see that it works.
QByteArray gzipDecompress(QByteArray compressData)
{
    // Hardcoded sample data
    const char dat[40] = {
        0x1F, 0x8B, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xAA, 0x2E, 0x2E, 0x49, 0x2C, 0x29,
        0x2D, 0xB6, 0x4A, 0x4B, 0xCC, 0x29, 0x4E, 0xAD, 0x05, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0x03, 0x00,
        0x2A, 0x63, 0x18, 0xC5, 0x0E, 0x00, 0x00, 0x00};
    compressData = QByteArray::fromRawData(dat, 40);

    // Decompress GZIP data:
    // strip header and trailer
    compressData.remove(0, 10);
    compressData.chop(12);

    const int buffersize = 16384;
    quint8 buffer[buffersize];

    z_stream cmpr_stream;
    cmpr_stream.next_in = (unsigned char *)compressData.data();
    cmpr_stream.avail_in = compressData.size();
    cmpr_stream.total_in = 0;
    cmpr_stream.next_out = buffer;
    cmpr_stream.avail_out = buffersize;
    cmpr_stream.total_out = 0;
    cmpr_stream.zalloc = Z_NULL;
    cmpr_stream.zfree = Z_NULL;   // was a duplicated zalloc assignment
    cmpr_stream.opaque = Z_NULL;

    if (inflateInit2(&cmpr_stream, -8) != Z_OK) {
        qDebug() << "cmpr_stream error!";
    }

    QByteArray uncompressed;
    do {
        int status = inflate(&cmpr_stream, Z_SYNC_FLUSH);
        if (status == Z_OK || status == Z_STREAM_END) {
            uncompressed.append(QByteArray::fromRawData((char *)buffer, buffersize - cmpr_stream.avail_out));
            cmpr_stream.next_out = buffer;
            cmpr_stream.avail_out = buffersize;
        } else {
            inflateEnd(&cmpr_stream);
        }
        if (status == Z_STREAM_END) {
            inflateEnd(&cmpr_stream);
            break;
        }
    } while (cmpr_stream.avail_out == 0);

    return uncompressed;
}
When the data is hardcoded as in that example, the string is decompressed. However, when I read the response from an HTTP server and store it in a QByteArray, it cannot be uncompressed. I am reading the response as follows, and I can see it works when comparing the results in Wireshark.
//Read that length of encoded data
char EncodedData[ LengthToRead ];
memset( EncodedData, 0, LengthToRead );
recv( socketDesc, EncodedData, LengthToRead, 0 );
EndOfData = true;
//EncodedDataBytes = QByteArray((char*)EncodedData);
EncodedDataBytes = QByteArray::fromRawData(EncodedData, LengthToRead );
I assume I am missing some header or byte order when reading the response, but at the moment I have no idea what. Any help very welcome!
EDIT: So I have been looking at this a little more over the weekend, and at the moment I'm trying to test the encode and decode of the given hex string, which is "{status:false}" in plain text. I have tried to use online gzip encoders such as http://www.txtwizard.net/compression but it returns some ASCII text that does not match the hex string in the above code. When I use PHP's gzcompress("{status:false}", 1) function, it gives me non-ASCII values that I cannot copy/paste to test. So I am wondering if there is any standard reference for gzip encode/decode? The data is definitely not in some special encoding, since both Firefox and Wireshark can decode the packets, but my software cannot.
So the issue was with my gzip function; the correct function I found at this link: uncompress error when using zlib
As mentioned above by Cornstalks, the inflateInit2 function needs to take MAX_WBITS + 16 as its windowBits argument so that zlib handles the gzip header and trailer itself; I think that was the issue. If anybody knows any libraries or plugins to handle this, please post them here! I am surprised that this had to be coded manually when it is so commonly used by HTTP clients/servers.
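A minimal sketch of the corrected initialization (assuming the same compressData buffer as above, now left intact rather than manually stripped):
z_stream cmpr_stream;
cmpr_stream.zalloc = Z_NULL;
cmpr_stream.zfree = Z_NULL;
cmpr_stream.opaque = Z_NULL;
cmpr_stream.next_in = (unsigned char *)compressData.data();
cmpr_stream.avail_in = compressData.size();

// MAX_WBITS + 16 tells zlib to expect and validate the gzip wrapper itself,
// so the manual remove()/chop() of the header and trailer is no longer needed.
if (inflateInit2(&cmpr_stream, MAX_WBITS + 16) != Z_OK) {
    qDebug() << "inflateInit2 failed";
}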
I am working on porting an application from an Arduino Mega to the LPC824. The following piece of code behaves differently on the two platforms.
/**
 * Calculation of CMAC
 */
void cmac(const uint8_t* data, uint8_t dataLength) {
    uint8_t trailer[1] = {0x80};
    uint8_t bytes[_lenRnd];
    uint8_t temp[_lenRnd];

    memcpy(temp, data, dataLength);
    concatArray(temp, dataLength, trailer, 1);
    dataLength++;
    addPadding(temp, dataLength);
    memcpy(bytes, _sk2, _lenRnd);
    xorBytes(bytes, temp, _lenRnd);

    aes128_ctx_t ctx;
    aes128_init(_sessionkey, &ctx);
    uint8_t* chain = aes128_enc_sendMode(bytes, _lenRnd, &ctx, _ivect);

    Board_UARTPutSTR("chain\n\r");
    printBytes(chain, 16, true);

    memcpy(_ivect, chain, _lenRnd);
    //memcpy(_ivect, aes128_enc_sendMode(bytes, _lenRnd, &ctx, _ivect), _lenRnd);
    memcpy(_cmac, _ivect, _lenRnd);

    Board_UARTPutSTR("Initialization vector\n\r");
    printBytes(_ivect, 16, true);
}
I am expecting a value like {0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2} for the chain variable. But the following function behaves differently. The print inside the function shows the correct value, the one I want ({0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2}).
But when the function returns, chain has a different value from what I am expecting; I get {0x00, 0x20, 0x00, 0x10, 0x03, 0x01, 0x00, 0x00, 0xd5, 0x00, 0x00, 0x00, 0xd7, 0x00, 0x00, 0x00}.
Inside the function, the result is correct, but it returns a wrong value to the caller. Why is this happening?
uint8_t* aes128_enc_sendMode(unsigned char* data, unsigned short len, aes128_ctx_t* key,
                             const unsigned char* iv) {
    unsigned char tmp[16];
    uint8_t chain[16];
    unsigned char c;
    unsigned char i;

    memcpy(chain, iv, 16);
    while (len >= 16) {
        memcpy(tmp, data, 16);
        //xorBytes(tmp, chain, 16);
        for (i = 0; i < 16; i++) {
            tmp[i] = tmp[i] ^ chain[i];
        }
        aes128_enc(tmp, key);
        for (i = 0; i < 16; i++) {
            //c = data[i];
            data[i] = tmp[i];
            chain[i] = tmp[i];
        }
        len -= 16;
        data += 16;
    }

    Board_UARTPutSTR("Chain!!!:");
    printBytes(chain, 16, true);
    return chain;
}
A good start with an issue like this is to delete as much as you can while still reproducing the error; with a minimal code example, the answer is typically clear. I have done that for you here.
uint8_t* aes128_enc_sendMode(void) {
    uint8_t chain[16];
    return chain;
}
The chain variable is local to the function; it ceases to exist once the function exits. Accessing it through the returned pointer causes undefined behaviour: don't do it.
In practice, the returned pointer still points at that block of memory, but the block is no longer reserved and can be overwritten at any time.
I suspect it works on the AVR because it is a simple 8-bit chip, and that piece of memory was sitting unmolested by the time you used it. The ARM will have applied more aggressive optimisations, possibly keeping the whole array in registers, so the data doesn't survive the transition.
tl;dr: You need to malloc() any arrays that you want to outlive the function. Be careful: malloc and embedded systems go together like diesel and styrofoam; it gets messy real quick.
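A minimal sketch of that fix, with a hypothetical make_chain() helper standing in for the full AES routine (the caller must free() the result):
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

uint8_t* make_chain(const uint8_t* iv) {
    uint8_t* chain = (uint8_t*)malloc(16);  // heap memory survives the return
    if (chain == NULL)
        return NULL;                        // allocation can fail
    memcpy(chain, iv, 16);
    return chain;                           // caller owns the buffer and must free() it
}
On heap-averse targets, a caller-owned output buffer (for example, a uint8_t chain_out[16] parameter filled by the function) gives the same lifetime guarantee without dynamic allocation.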
My application requires the use of Tor over a SOCKS4a proxy. Currently my response from Tor is reported as successful, but there is no reported port or IP, which the 4a variant of SOCKS requires according to this Wikipedia article on SOCKS:
field 1: null byte
field 2: status, 1 byte:
0x5a = request granted
0x5b = request rejected or failed
0x5c = request failed because client is not running identd (or not reachable from server)
0x5d = request failed because client's identd could not confirm the user ID in the request
field 3: network byte order port number, 2 bytes
field 4: network byte order IP address, 4 bytes
Tor is not filling fields 3 and 4. Why is it doing this, and how can I fix it?
Results of the SOCKS handshake:
Request: 0x04, 0x01, 0x00, 0x50, 0x00, 0x00, 0x00, 0x08, 0x00, 0x77,
0x77, 0x77, 0x2E, 0x67, 0x6F, 0x6F, 0x67, 0x6C, 0x65, 0x2E, 0x63, 0x6F, 0x6D, 0x00
Response from Tor: 0x00, 0x5A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
Source Code:
retval = connect(in_Socket, in_Socks, socksLen);  // Connect to the SOCKS server
if (retval != 0)
    return retval;                                // Error if != 0

if (szUserId)
    lPacketLen += strlen(szUserId);               // If there is a user ID, add its length to the packet length
lPacketLen += strlen(szHostName);                 // www.google.com
lPacketLen += 1;

char *packet = new char[lPacketLen];              // Allocate a packet
memset(packet, 0x00, lPacketLen);                 // Init to zero

packet[0] = SOCKS_VER4;                           // SOCKS version: 0x4
packet[1] = 0x01;                                 // Connect code
memcpy(packet + 2, (char *)&(((sockaddr_in *)in_szName)->sin_port), 2);  // Copy the port, 80 in this case

// Send a malformed IP (0.0.0.x, x != 0), as SOCKS4a specifies
packet[4] = 0x00;
packet[5] = 0x00;
packet[6] = 0x00;
packet[7] = 0x08;

int IDLen = szUserId ? strlen(szUserId) : 0;      // guard: strlen(NULL) would crash
if (szUserId)                                     // If there was a user ID, copy it now
    memcpy(packet + 8, szUserId, ++IDLen);        // ++ accounts for the null terminator \0
else
    packet[8] = 0;                                // Send a null ID if none provided

// Write the hostname we want Tor to resolve; I used www.google.com
memcpy(packet + 8 + IDLen, szHostName, strlen(szHostName) + 1);

if (m_Interval == 0)
    Sleep(SOCKS_INTERVAL);
else
    Sleep(m_Interval);

printf("\nRequest: ");
PrintArray(packet, lPacketLen);
send(in_Socket, packet, lPacketLen, 0);           // Send the packet
delete[] packet;                                  // Deallocate the packet

char reply[8];                                    // Allocate memory for the reply packet
memset(reply, 0, 8);                              // Init to 0

long bytesRecv = 0;
bytesRecv = recv(in_Socket, reply, 8, 0);         // Get the reply packet
printf("\nResponse from Tor: ");
PrintArray(reply, 8);
// Check reply codes later
It appears that Tor treats these as optional fields: it remembers the connection address, resolves it, and then forwards any data sent after the handshake to the resolved domain.
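So, in practice, only the status byte needs checking. A minimal sketch, reusing reply, bytesRecv, and in_Socket from the code above:
// Field 2 (status) is the only reliable part of Tor's reply here;
// fields 3 and 4 (port, IP) come back zeroed.
if (bytesRecv >= 8 && reply[0] == 0x00 && reply[1] == 0x5A) {
    // Request granted: anything sent on in_Socket from now on is relayed
    // through Tor to the resolved hostname.
} else {
    // 0x5B-0x5D: rejected or failed; close the socket and report the error.
}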