Steam Protocol C++ Unzip Multi message - c++

I'm writing a plugin for the Steam protocol in C++. I'm using https://github.com/seishun/SteamPP which uses protobufs from https://github.com/SteamRE/SteamKit, and generally it works. I can communicate with Steam and can send and receive single messages (including logging in) without problems, but Steam often sends several messages zipped together in one message (EMsg::Multi from protobuf), and here is my problem: I cannot unzip them correctly, and I can't understand what I'm doing wrong.
std::string unzip(std::string &input) {
    auto archive = archive_read_new();
    auto result = archive_read_support_filter_all(archive);
    assert(result == ARCHIVE_OK);
    result = archive_read_support_format_zip(archive);
    assert(result == ARCHIVE_OK);
    result = archive_read_open_memory(archive, &input[0], input.size());
    assert(result == ARCHIVE_OK);
    archive_entry* entry;
    result = archive_read_next_header(archive, &entry);
    if (result != ARCHIVE_OK) {
        return "read next header error " + std::to_string(result);
    }
    assert(result == ARCHIVE_OK);
    std::string output;
    output.resize(archive_entry_size(entry));
    auto length = archive_read_data(archive, &output[0], output.size());
    assert(length == output.size());
    if (length != output.size()) {
        return "hello world" + std::to_string(length);
    }
    assert(archive_read_next_header(archive, &entry) == ARCHIVE_EOF);
    result = archive_read_free(archive);
    assert(result == ARCHIVE_OK);
    return output;
}
In this function (libarchive), archive_read_data returns -25, which is an error code, and the next assert fires. What is wrong? The same thing works well in the C# SteamKit version and also in the node.js version. I have also tried Crypto++ Gunzip, but it throws a CryptoPP::Gunzip::HeaderErr exception.

I think you are missing zlib in your libarchive build. Steam messages are deflate-compressed, and libarchive needs zlib to process them. Without it, libarchive can't handle the entry and returns -25 (ARCHIVE_FAILED) because it cannot process the compression. Try recompiling libarchive with zlib enabled in its CMake configuration.
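Before recompiling, it may also help to ask libarchive for its own error message instead of relying on the bare return code; archive_error_string() returns a human-readable reason, which should reveal whether deflate (zlib) support is missing. A minimal sketch (the helper name is just for illustration):

```cpp
#include <archive.h>
#include <cstdio>

// Dump libarchive's own explanation for the last failure on this archive handle.
static void report_archive_error(struct archive* a, long long rc) {
    std::fprintf(stderr, "libarchive call failed (%lld, errno %d): %s\n",
                 rc, archive_errno(a), archive_error_string(a));
}
```

Calling it right after the failing read, e.g. `if (length < 0) report_archive_error(archive, length);`, prints the underlying reason instead of tripping the assert.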

Related

electron: ui and backend processes accessing the same log file on Windows

Goal
My electron-based app uses a C++ backend, which keeps a log file. I'd love to show the file content on a page of my Electron frontend.
The macOS version works as expected. I simply use the node.js fs and readline libraries to read the file on the fly, and then insert the parsed text into innerHTML.
Problem
However, on Windows the log file seems to be locked by the backend, even though the CRT fopen call uses append mode "a". So node.js keeps getting the exception
EBUSY: resource busy or locked open '/path/to/my.log'
To make it worse, I use a third-party lib for logging and its internals are not that easy to hack.
Code
Here is the Electron-side of code
const fs = require('fs');
const readline = require('readline');
const Path = require('path');

function OnLoad() {
    let logFile = Path.join(__dirname, 'logs', platformDirs[process.platform], 'my.log');
    let logElem = document.querySelector('.log');
    processLineByLine(logFile, logElem);
}

//
// helpers
//
async function processLineByLine(txtFile, outElement) {
    const fileStream = fs.createReadStream(txtFile);
    const rl = readline.createInterface({
        input: fileStream,
        crlfDelay: Infinity
    });
    // Note: we use the crlfDelay option to recognize all instances of CR LF
    // ('\r\n') in input.txt as a single line break.
    for await (const line of rl) {
        // Each line in input.txt will be successively available here as `line`.
        console.log(`Line from file: ${line}`);
        outElement.innerHTML += line + '<br>';
    }
}
Here is the backend side of code
inline bool OpenLogFile(FILE** ppLogFile) {
    TCHAR logPath[MAX_PATH];
    DWORD length = GetModuleFileName(NULL, logPath, MAX_PATH);
    bool isPathValid = false;
#if (NTDDI_VERSION >= NTDDI_WIN8)
    PathCchRemoveFileSpec(logPath, MAX_PATH);
    HRESULT resPath = PathCchCombine(logPath, MAX_PATH, logPath, TEXT("my.log"));
    isPathValid = (resPath == S_OK);
#else
    PathRemoveFileSpec(logPath);
    LPWSTR resPath = PathCombine(logPath, logPath, TEXT("my.log"));
    isPathValid = (resPath != NULL);
#endif
    if (!isPathValid)
        return false;
    errno_t res = _wfopen_s(ppLogFile, logPath, L"a");
    if (res != 0) {
        wprintf(TEXT("Error: Failed to open log file: %s"), GetOSErrStr().c_str());
    }
    return res == 0;
}
Question
Is this an inherent problem with my architecture?
Should I forget about accessing the log file from frontend/backend processes at the same time?
I thought about using a message queue for sharing logs between the frontend and backend processes, but that'd make logging more complex and bug prone.
Is there an easy way to have the same logging experience as with macOS?
Solved it myself.
I have to use the CRT function _wfsopen, which provides file-sharing options.
In my case, the following change is sufficient:
*ppLogFile = _wfsopen(logPath, L"a+", _SH_DENYWR);
This answer helped.
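For completeness, a minimal sketch of what the open call looks like with that change, assuming the path handling from the original OpenLogFile stays the same (the helper name below is hypothetical):

```cpp
#include <share.h>    // _SH_DENYWR
#include <cstdio>

// Open the log for appending while still letting other processes read it:
// _SH_DENYWR denies other writers but allows readers, so the Electron
// frontend can open and tail the file while the backend keeps writing.
inline bool OpenLogFileShared(FILE** ppLogFile, const wchar_t* logPath) {
    *ppLogFile = _wfsopen(logPath, L"a+", _SH_DENYWR);
    return *ppLogFile != nullptr;
}
```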

FFmpeg AVERROR(EAGAIN) error when call avcodec receive for h264

I'm working with ffmpeg 4.1 and I'm showing live streams of multiple cameras, h264 and h265.
My program collects the packets of a frame and then calls the decodeVideo function; in effect, it sends all packets of a frame at once.
The program works well if there are no missing packets. When I drop a packet from a random I-frame, both h264 and h265 streams behave as expected (the stream jumps a few seconds but continues).
When I drop a packet from a random P-frame in an h265 stream, avcodec_send_packet returns AVERROR_INVALIDDATA and the stream continues.
However, when I drop a packet from a random P-frame in an h264 stream, avcodec_send_packet returns 0; then avcodec_receive_frame returns AVERROR(EAGAIN) continuously and the stream freezes.
void decodeVideo(array<uint8_t>^ data, int length, AvFrame^ finishedFrame)
{
    AVPacket* videoPacket = new AVPacket();
    av_init_packet(videoPacket);
    pin_ptr<unsigned char> dataPtr = &data[0];
    videoPacket->data = dataPtr;
    videoPacket->size = length;

    int retVal = avcodec_send_packet((AVCodecContext*)context, videoPacket);
    if (retVal < 0)
    {
        if (retVal == AVERROR_EOF)
            Utility::Log->ErrorFormat("avcodec_send_packet() return value is AVERROR_EOF.");
        else if (retVal == AVERROR_INVALIDDATA)
            Utility::Log->ErrorFormat("avcodec_send_packet() INVALID DATA!");
        else
            Utility::Log->ErrorFormat("avcodec_send_packet() return value is negative:{0}", retVal);
    }
    else
    {
        int receive_frame = avcodec_receive_frame((AVCodecContext*)context, (AVFrame*)finishedFrame);
        if (receive_frame == AVERROR(EAGAIN))
            Utility::Log->ErrorFormat("avcodec_receive_frame() returns AVERROR(EAGAIN)");
        else if (receive_frame == AVERROR_EOF)
            Utility::Log->ErrorFormat("avcodec_receive_frame() returns AVERROR(AVERROR_EOF)");
        else
            Utility::Log->ErrorFormat("avcodec_receive_frame() return value is negative:{0}", receive_frame);
    }

    av_packet_unref(videoPacket);
    delete videoPacket;
}
EDIT
When I add avcodec_flush_buffers as shown below, my problem is temporarily solved. However, it freezes again after a while.
if (receive_frame == AVERROR(EAGAIN))
{
    Utility::Log->ErrorFormat("avcodec_receive_frame() returns AVERROR(EAGAIN)");
    avcodec_flush_buffers((AVCodecContext*)context);
}
Tested with ffmpeg version 4.1.1: same results.
I found that with an older ffmpeg version, around 2.5, the decode function is different and there is no problem when I remove packets. However, I'm working with h265 streams too, so I can't simply go back.
EDIT2
AVCodecID id = AVCodecID::AV_CODEC_ID_H264;
AVCodec* dec = avcodec_find_decoder(id);
AVCodecContext* decContext = avcodec_alloc_context3(dec);
After these lines, my code included the following flags. When I delete them, the problem no longer occurs:
if (dec->capabilities & AV_CODEC_CAP_TRUNCATED)
    decContext->flags |= AV_CODEC_FLAG_TRUNCATED;
decContext->flags2 |= AV_CODEC_FLAG2_CHUNKS;
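For reference, the send/receive API is designed to be drained in a loop: a single avcodec_send_packet call can yield zero or more frames, and avcodec_receive_frame should be called until it returns AVERROR(EAGAIN). A minimal plain-C++ sketch of that pattern, without the TRUNCATED/CHUNKS flags and without the C++/CLI wrappers used above:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Feed one complete packet to the decoder and drain every frame it produces.
// Returns false on a decode error; EAGAIN is the normal "need more input" case.
bool decodePacket(AVCodecContext* ctx, AVPacket* pkt, AVFrame* frame)
{
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return false; // e.g. AVERROR_INVALIDDATA on a corrupted packet

    for (;;) {
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return true;  // decoder needs more packets (or has been flushed)
        if (ret < 0)
            return false; // genuine decode error
        // ... use the decoded frame here ...
    }
}
```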

Libzip - Error: Error while opening the archive : no error

I'm trying to figure out how to solve a problem.
I'm writing my own tool to make saves, using libzip in C++ to compress the files.
It's absolutely not finished, but I wanted to run some tests, and I got a "funny" error in the log.
Here's my function:
void save(std::vector<std::string> filepath, std::string savepath) {
    int err;
    savepath += time(NULL);
    zip* saveArchive = zip_open(savepath.c_str(), ZIP_CREATE, &err);
    if (err != ZIP_ER_OK) throw xif::sys_error("Error while opening the archive", zip_strerror(saveArchive));
    for (int i = 0; i < filepath.size(); i++) {
        if (filepath[i].find("/") == std::string::npos) {}
        if (filepath[i].find(".cfg") == std::string::npos) {
            err = (int) zip_file_add(saveArchive, filepath[i].c_str(), NULL, NULL);
            if (err == -1) throw xif::sys_error("Error while adding the files", zip_strerror(saveArchive));
        }
    }
    if (zip_close(saveArchive) == -1) throw xif::sys_error("Error while closing the archive", zip_strerror(saveArchive));
}
I get: Error: Error while opening the archive : No error
And, of course, no .zip file is written.
If you could help me, thank you!
The documentation for zip_open says that it only sets *errorp if the open fails. Either test for saveArchive == nullptr or initialize err to ZIP_ER_OK.
P.S. The search for '/' does nothing. Did you mean to put a continue in that block?
The other problematic line is:
savepath += time(NULL);
If that is the standard time function, that returns a time in seconds since the epoch. That will probably get truncated to a char, and then that char appended to the file name. That will cause strange characters to appear in the filename! I suggest using std::chrono to convert to text.
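Putting both suggestions together, a rough sketch of the corrected open logic (using std::runtime_error and std::to_string here for brevity instead of xif::sys_error and std::chrono; the helper name is just for illustration):

```cpp
#include <zip.h>
#include <ctime>
#include <stdexcept>
#include <string>

zip* openSaveArchive(std::string savepath) {
    // Append the timestamp as text, not as a time_t truncated to a single char.
    savepath += std::to_string(time(NULL));

    int err = ZIP_ER_OK;  // so a successful open leaves a clean error code
    zip* saveArchive = zip_open(savepath.c_str(), ZIP_CREATE, &err);
    if (saveArchive == nullptr) {
        // *errorp is only meaningful when the open actually failed
        throw std::runtime_error("Error while opening the archive, libzip error code "
                                 + std::to_string(err));
    }
    return saveArchive;
}
```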

Windows C++ Intermittent Socket Disconnect

I've got a server that uses a two thread system to manage between 100 and 200 concurrent connections. It uses TCP sockets, as packet delivery guarantee is important (it's a communication system where missed remote API calls could FUBAR a client).
I've implemented a custom protocol layer to separate incoming bytes into packets and dispatch them properly (the library is included below). I realize the issues of using MSG_PEEK, but to my knowledge, it is the only system that will fulfill the needs of the library implementation. I am open to suggestions, especially if it could be part of the problem.
Basically, the problem is that, at random, the server will drop a client's socket due to a lack of incoming packets for more than 20 seconds, despite the client successfully sending a keepalive packet every 4 seconds. I can verify that the server itself didn't go offline and that the connection of the users (including myself) experiencing the problem is stable.
The library for sending/receiving is here:
short ncsocket::send(wstring command, wstring data) {
    wstringstream ss;
    int datalen = ((int)command.length() * 2) + ((int)data.length() * 2) + 12;
    ss << zero_pad_int(datalen) << L"|" << command << L"|" << data;
    int tosend = datalen;
    short __rc = 0;
    do {
        int res = ::send(this->sock, (const char*)ss.str().c_str(), datalen, NULL);
        if (res != SOCKET_ERROR)
            tosend -= res;
        else
            return FALSE;
        __rc++;
        Sleep(10);
    } while (tosend != 0 && __rc < 10);
    if (tosend == 0)
        return TRUE;
    return FALSE;
}

short ncsocket::recv(netcommand& nc) {
    vector<wchar_t> buffer(BUFFER_SIZE);
    int recvd = ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, MSG_PEEK);
    if (recvd > 0) {
        if (recvd > 8) {
            wchar_t* lenstr = new wchar_t[4];
            memcpy(lenstr, buffer.data(), 8);
            int fulllen = _wtoi(lenstr);
            delete lenstr;
            if (fulllen > 0) {
                if (recvd >= fulllen) {
                    buffer.resize(fulllen / 2);
                    recvd = ::recv(this->sock, (char*)buffer.data(), fulllen, NULL);
                    if (recvd >= fulllen) {
                        buffer.resize(buffer.size() + 2);
                        buffer.push_back((char)L'\0');
                        vector<wstring> data = parsewstring(L"|", buffer.data(), 2);
                        if (data.size() == 3) {
                            nc.command = data[1];
                            nc.payload = data[2];
                            return TRUE;
                        }
                        else
                            return FALSE;
                    }
                    else
                        return FALSE;
                }
                else
                    return FALSE;
            }
            else {
                ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, NULL);
                return FALSE;
            }
        }
        else
            return FALSE;
    }
    else
        return FALSE;
}
This is the code for determining if too much time has passed:
if ((int)difftime(time(0), regusrs[i].last_recvd) > SERVER_TIMEOUT) {
    regusrs[i].sock.end();
    regusrs[i].is_valid = FALSE;
    send_to_all(L"removeuser", regusrs[i].server_user_id);
    wstringstream log_entry;
    log_entry << regusrs[i].firstname << L" " << regusrs[i].lastname << L" (suid:" << regusrs[i].server_user_id << L",p:" << regusrs[i].parent << L",pid:" << regusrs[i].parentid << L") was disconnected due to idle";
    write_to_log_file(server_log, log_entry.str());
}
The "regusrs[i]" is using the currently iterated member of a vector I use to story socket descriptors and user information. The 'is_valid' check is there to tell if the associated user is an actual user - this is done to prevent the system from having to deallocate the member of the vector - it just returns it to the pool of available slots. No thread access/out-of-range issues that way.
Anyway, I started to wonder if it was the server itself was the problem. I'm testing on another server currently, but I wanted to see if another set of eyes could stop something out of place or cue me in on a concept with sockets and extended keepalives that I'm not aware of.
Thanks in advance!
I think I see what you're doing with MSG_PEEK, where you wait until it looks like you have enough data to read a full packet. However, I would be suspicious of this. (It's hard to determine the dynamic behaviour of your system just by looking at this small part of the source and not the whole thing.)
To avoid use of MSG_PEEK, follow these two principles:
When you get a notification that data is ready (I assume you're using select), then read all the waiting data from recv(). You may use more than one recv() call, so you can handle the incoming data in pieces.
If you read only a partial packet (length or payload), then save it somewhere for the next time you get a read notification. Put the packets and payloads back together yourself, don't leave them in the socket buffer.
As an aside, the use of new/memcpy/wtoi/delete is woefully inefficient. You don't need to allocate memory at all, you can use a local variable. And then you don't even need the memcpy at all, just a cast.
I presume you already assume that your packets can be no longer than 999 bytes in length.
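To make the second principle concrete, here is a rough sketch of a per-connection receive buffer; the pending/readAvailable/extractPacket names are hypothetical, and the 4-digit wide-character length prefix is assumed from the question's send() code:

```cpp
#include <cstdlib>
#include <string>
#include <winsock2.h>

// Hypothetical per-connection state: wide characters received but not yet parsed.
std::wstring pending;

// Append whatever is currently available on the socket to our own buffer.
// (Simplification: assumes recv() returns a whole number of wchar_t's.)
bool readAvailable(SOCKET sock) {
    wchar_t buf[2048];
    int n = ::recv(sock, reinterpret_cast<char*>(buf), sizeof(buf), 0);
    if (n <= 0)
        return false;                       // error or orderly shutdown
    pending.append(buf, n / sizeof(wchar_t));
    return true;
}

// Try to pull one complete packet ("LLLL|command|data") out of the buffer.
bool extractPacket(std::wstring& packet) {
    if (pending.size() < 4)
        return false;                       // length prefix not complete yet
    int fulllen = _wtoi(pending.substr(0, 4).c_str());  // total length in bytes
    size_t fullchars = fulllen / sizeof(wchar_t);
    if (fulllen <= 0 || pending.size() < fullchars)
        return false;                       // wait for the rest of the packet
    packet = pending.substr(0, fullchars);
    pending.erase(0, fullchars);            // keep bytes belonging to the next packet
    return true;
}
```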

Receiving only necessary data with C++ Socket

I'm just trying to get the contents of a page along with its headers... but it seems that my buffer of size 1024 is either too large or too small for the last chunk of information coming through. I don't want to get too much or too little, if that makes sense. Here's my code. It prints out the page just fine with all the information, but I want to ensure that it's correct.
// Build HTTP GET Request
std::stringstream ss;
ss << "GET " << url << " HTTP/1.0\r\nHost: " << strHostName << "\r\n\r\n";
std::string req = ss.str();
// Send Request
send(hSocket, req.c_str(), strlen(req.c_str()), 0);
// Read from socket into buffer.
do
{
    nReadAmount = read(hSocket, pBuffer, sizeof pBuffer);
    printf("%s", pBuffer);
}
while (nReadAmount != 0);
nReadAmount = read(hSocket, pBuffer, sizeof pBuffer);
printf("%s", pBuffer);
This is broken. You can only use the %s format specifier for a C-style (zero-terminated) string. How is printf supposed to know how many bytes to print? That information is in nReadAmount, but you don't use it.
Also, you call printf even if read fails.
The simplest fix:
do
{
    nReadAmount = read(hSocket, pBuffer, (sizeof pBuffer) - 1);
    if (nReadAmount <= 0)
        break;
    pBuffer[nReadAmount] = 0;
    printf("%s", pBuffer);
} while (1);
The correct way to read an HTTP reply is to first read until you have received a full LF-delimited line (some servers use a bare LF even though the official spec says to use CRLF); that line contains the response code and version. Then keep reading LF-delimited lines, which are the headers, until you encounter a zero-length line indicating the end of the headers. At that point you have to analyze the headers to figure out how the remaining data is encoded, so you know the proper way to read it and how it is terminated. There are several different possibilities; refer to RFC 2616 Section 4.4 for the actual rules.
In other words, your code needs to use this kind of structure instead (pseudo code):
// Send Request
send(hSocket, req.c_str(), req.length(), 0);

// Read Response
std::string line = ReadALineFromSocket(hSocket);
int rescode = ExtractResponseCode(line);
std::vector<std::string> headers;
do
{
    line = ReadALineFromSocket(hSocket);
    if (line.length() == 0) break;
    headers.push_back(line);
}
while (true);
if (
    ((rescode / 100) != 1) &&
    (rescode != 204) &&
    (rescode != 304) &&
    (request is not "HEAD")
)
{
    if ((headers has "Transfer-Encoding") && (Transfer-Encoding != "identity"))
    {
        // read chunks until a 0-length chunk is encountered.
        // refer to RFC 2616 Section 3.6 for the format of the chunks...
    }
    else if (headers has "Content-Length")
    {
        // read how many bytes the Content-Length header says...
    }
    else if ((headers has "Content-Type") && (Content-Type == "multipart/byteranges"))
    {
        // read until the terminating MIME boundary specified by Content-Type is encountered...
    }
    else
    {
        // read until the socket is disconnected...
    }
}
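The ReadALineFromSocket helper is assumed by the pseudo code above; here is a minimal sketch of one possible implementation, reading one byte at a time (slow but simple) and tolerating both CRLF and bare LF:

```cpp
#include <string>
#include <sys/socket.h>   // on Windows use <winsock2.h>; recv() is used the same way

// Read up to and including the next LF; strip the LF and any preceding CR.
// Returns an empty string for a blank line (end of headers) or on disconnect.
std::string ReadALineFromSocket(int hSocket)
{
    std::string line;
    char c;
    while (recv(hSocket, &c, 1, 0) == 1)
    {
        if (c == '\n')
            break;          // end of line
        line += c;
    }
    if (!line.empty() && line.back() == '\r')
        line.pop_back();    // tolerate CRLF as well as bare LF
    return line;
}
```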