Weird and unpredictable crash when using libx264 cross-compiled with MinGW - c++

I'm working on a C++ project using Visual Studio 2010 on Windows. I'm linking dynamically against x264 which I built myself as a shared library using MinGW following the guide at
http://www.ayobamiadewole.com/Blog/Others/x264compilation.aspx
The strange thing is that my x264 code sometimes works perfectly. Then, when I change some line of code (or even just a comment in the file!) and recompile, everything crashes on the line
encoder_ = x264_encoder_open(&param);
With the message
Access violation reading location 0x00000000
I'm not doing anything funky at all, so it's probably not my code that is wrong; I suspect something is going wrong with the linking, or with how I compiled x264.
The full initialization code:
x264_param_t param = { 0 };
if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0) {
    throw KStreamerException("x264_param_default_preset failed");
}
param.i_threads = 1;
param.i_width   = 640;
param.i_height  = 480;
param.i_fps_num = 10;
param.i_fps_den = 1;

encoder_ = x264_encoder_open(&param); // <-----
if (encoder_ == 0) {
    throw KStreamerException("x264_encoder_open failed");
}
x264_picture_alloc(&pic_, X264_CSP_I420, 640, 480);
Edit: It turns out that it always works in Release mode, and when using superfast instead of ultrafast it also works 100% of the time in Debug mode. Could it be that the ultrafast mode is doing some crazy optimizations that the debugger doesn't like?

I've met this problem too with libx264-120.
libx264-120 was built with MinGW, using the configure options below.
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
platform: X86
system: WINDOWS
cli: no
libx264: internal
shared: yes
static: no
asm: yes
interlaced: yes
avs: yes
lavf: no
ffms: no
gpac: no
gpl: yes
thread: win32
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
visualize: no
bit depth: 8
chroma format: all
$ make -j8
lib /def:libx264-120.def /machine:x86
#include "stdafx.h"
#include <iostream>
#include <cassert>
using namespace std;
#include <stdint.h>
extern "C"{
#include <x264.h>
}
int _tmain(int argc, _TCHAR* argv[])
{
int width(640);
int height(480);
int err(-1);
x264_param_t x264_param = {0};
//x264_param_default(&x264_param);
err =
x264_param_default_preset(&x264_param, "veryfast", "zerolatency");
assert(0==err);
x264_param.i_threads = 8;
x264_param.i_width = width;
x264_param.i_height = height;
x264_param.i_fps_num = 60;//fps;
x264_param.i_fps_den = 1;
// Intra refres:
x264_param.i_keyint_max = 60;//fps;
x264_param.b_intra_refresh = 1;
//Rate control:
x264_param.rc.i_rc_method = X264_RC_CRF;
x264_param.rc.f_rf_constant = 25;
x264_param.rc.f_rf_constant_max = 35;
//For streaming:
x264_param.b_repeat_headers = 1;
x264_param.b_annexb = 1;
err = x264_param_apply_profile(&x264_param, "baseline");
assert(0==err);
x264_t *x264_encoder = x264_encoder_open(&x264_param);
x264_encoder = x264_encoder;
x264_encoder_close( x264_encoder );
getchar();
return 0;
}
This program sometimes succeeds, but it often fails in x264_encoder_open with the access violation.
I could not find any information about this on Google, and how to initialize x264_param_t and how to use x264_encoder_open are unclear.
The behavior seems to depend on the values set in x264_param_t, but I can't tell which ones matter without reading other open-source programs that use libx264.
Also, the access violation does not seem to occur on the first execution, nor when the program is compiled with MinGW's gcc (e.g. gcc -o test test.c -lx264; ./test).
Given this behavior, I suspect that the DLL version of libx264 built with MinGW's gcc is doing something strange with its resources.
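
(For reference, since the poster notes that the intended usage is unclear: below is a minimal sketch of a typical encode loop once x264_encoder_open has succeeded. It is not from either program above; filling pic_in with actual frame data is omitted, and it assumes <stdio.h> is available for fwrite.)

// Hypothetical continuation after a successful x264_encoder_open().
// Each iteration should fill pic_in with one I420 frame (omitted here).
x264_picture_t pic_in, pic_out;
x264_picture_alloc(&pic_in, X264_CSP_I420, 640, 480);

for (int frame = 0; frame < 100; ++frame) {
    pic_in.i_pts = frame;

    x264_nal_t *nals = NULL;
    int num_nals = 0;
    int frame_size = x264_encoder_encode(x264_encoder, &nals, &num_nals,
                                         &pic_in, &pic_out);
    if (frame_size < 0)
        break; // encoder error

    // With b_annexb = 1 the payloads are already Annex-B framed,
    // so they can be written out back to back.
    for (int i = 0; i < num_nals; ++i)
        fwrite(nals[i].p_payload, 1, nals[i].i_payload, stdout);
}

x264_picture_clean(&pic_in);
x264_encoder_close(x264_encoder);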

I had the same problem. The only way I was able to fix it was to build the x264 DLL without the asm option (i.e. specify --disable-asm).
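For example, adapting the configure line quoted above (only --disable-asm is the relevant change; keep whatever other flags you need, and note the .def file name here is just illustrative):

$ ./configure --disable-cli --enable-shared --disable-asm --extra-ldflags=-Wl,--output-def=libx264.def
$ make -j8
lib /def:libx264.def /machine:x86

A plausible explanation, though not verified here: x264's hand-written assembly assumes the 16-byte stack alignment that gcc maintains, while 32-bit MSVC does not guarantee it. That would fit the symptoms above: crashes that appear and disappear with unrelated code changes, and that vary between presets exercising different asm paths.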

Related

fopen_s returns error code 2 with system account and win 32 but works fine on winx64 (c++)

I have a C++ program that uses fopen_s to open and read a file created under the directory C:\Windows\System32\config\systemprofile\AppData\Roaming.
My program needs to be compatible with winx64 and win32.
When I run the Win32-compiled version of this program under the system account (launched via PSTools\PSExec -i -s C:\windows\system32\cmd.exe), fopen_s() on any file inside "C:\Windows\System32\config\systemprofile\AppData\Roaming" returns error code 2, even though the file is present.
However, when I run the x64 compiled version of the same program, it works fine and fopen_s() is able to find and open the same file.
I am sure there are no mistakes as far as passing a valid filename to fopen_s() and I have verified this.
I make sure that the int variable that stores the return value from fopen_s() is set to 0 every time before calling fopen_s(). I am calling fopen_s() in "r" mode.
Also, elsewhere in the same program I am able to create files under the same directory.
I am using VS2019 and C++11 to compile my program.
My system is running windows 10 (64-bit) on an x64 processor (Intel(R) Xeon(R) Gold 6136)
Why would a win32 application fail to read a file created under "C:\Windows\System32\config\systemprofile\AppData\Roaming" with a system account while the x64 version of the same application works fine?
Code snippet:
int FileOpenFunc(FILE** ppFile, std::string sFilename, std::string sOpenMode)
{
    int errOpen = 0;
#ifdef _WIN32
    errOpen = fopen_s(ppFile, sFilename.c_str(), sOpenMode.c_str());
#else
    *ppFile = fopen(sFilename.c_str(), sOpenMode.c_str());
    errOpen = (*ppFile == NULL) ? errno : 0; /* errno is only meaningful if fopen failed */
#endif
    return errOpen;
}

void func()
{
    std::string sFileName = "C:\\Windows\\System32\\config\\systemprofile\\AppData\\Roaming\\Check\\sample.txt";
    int errFopenErrNo = 0;
    FILE* fp = NULL;
    errFopenErrNo = FileOpenFunc(&fp, sFileName, "r");
    if (fp != NULL)
    {
        // do something
    }
    else
    {
        // do something else
    }
}
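
(No answer was captured for this question. One likely cause, offered here as an assumption rather than a verified diagnosis, is WOW64 file-system redirection: in a 32-bit process on 64-bit Windows, accesses under C:\Windows\System32 are transparently redirected to C:\Windows\SysWOW64, so the Win32 build may be looking at a different systemprofile directory than the x64 build. A minimal sketch of ruling that out by disabling redirection around the open:)

#include <windows.h>
#include <cstdio>

// Sketch: open a file with WOW64 redirection temporarily disabled.
// Wow64DisableWow64FsRedirection only has an effect in a 32-bit
// process running on 64-bit Windows; elsewhere it simply fails.
FILE* OpenUnredirected(const char* path)
{
    FILE* fp = nullptr;
    PVOID oldValue = nullptr;
    BOOL disabled = Wow64DisableWow64FsRedirection(&oldValue);
    fopen_s(&fp, path, "r");
    if (disabled)
        Wow64RevertWow64FsRedirection(oldValue); // restore as soon as possible
    return fp;
}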

C++ Tesseract OCR: Getting ObjectCache(): WARNING! LEAK! object still has count 1

I installed libtesseract-dev (v4.1.1) on Ubuntu 20.04, and I am trying out C++ code that OCRs an image to a searchable PDF.
My code is slightly modified from the C++ API example code provided on the official website:
/home/test/Desktop/Example2/testexample2.cpp:
#include <leptonica/allheaders.h>
#include <tesseract/baseapi.h>
#include <tesseract/renderer.h>

int main()
{
    //const char* input_image = "/usr/src/tesseract-oc/testing/phototest.tif";
    //const char* output_base = "my_first_tesseract_pdf";
    //const char* datapath = "/Projects/OCR/tesseract/tessdata";
    const char* input_image = "001.jpg";
    const char* output_base = "001";
    const char* datapath = ".";
    int timeout_ms = 5000;
    const char* retry_config = nullptr;
    bool textonly = false;
    int jpg_quality = 92;

    tesseract::TessBaseAPI *api = new tesseract::TessBaseAPI();
    if (api->Init(datapath, "eng")) {
        fprintf(stderr, "Could not initialize tesseract.\n");
        exit(1);
    }
    /*
    tesseract::TessPDFRenderer *renderer = new tesseract::TessPDFRenderer(
        output_base, api->GetDatapath(), textonly, jpg_quality);
    */
    tesseract::TessPDFRenderer *renderer = new tesseract::TessPDFRenderer(
        output_base, api->GetDatapath(), textonly);

    bool succeed = api->ProcessPages(input_image, retry_config, timeout_ms, renderer);
    if (!succeed) {
        fprintf(stderr, "Error during processing.\n");
        return EXIT_FAILURE;
    }
    api->End();
    return EXIT_SUCCESS;
}
I also followed https://stackoverflow.com/a/59382664 as follows:
cd /home/test/Desktop/Example2
wget https://github.com/tesseract-ocr/tessdata/raw/master/eng.traineddata
wget https://github.com/tesseract-ocr/tesseract/blob/master/tessdata/pdf.ttf
export TESSDATA_PREFIX=$(pwd)
gedit config
(In the config file, I entered the contents:
tessedit_create_pdf 1 Write .pdf output file
tessedit_create_txt 1 Write .txt output file
)
g++ testexample2.cpp -o testexample2 -ltesseract
./testexample2
But on execution, it displays the errors as follows:
Warning: Invalid resolution 0 dpi. Using 70 instead.
Error during processing.
ObjectCache(0x7f1b096669c0)::~ObjectCache(): WARNING! LEAK! object 0x55af5c5241a0 still has count 1 (id /home/test/Desktop/Example2/eng.traineddatapunc-dawg)
ObjectCache(0x7f1b096669c0)::~ObjectCache(): WARNING! LEAK! object 0x55af5c506770 still has count 1 (id /home/test/Desktop/Example2/eng.traineddataword-dawg)
ObjectCache(0x7f1b096669c0)::~ObjectCache(): WARNING! LEAK! object 0x55af5c9a4a70 still has count 1 (id /home/test/Desktop/Example2/eng.traineddatanumber-dawg)
ObjectCache(0x7f1b096669c0)::~ObjectCache(): WARNING! LEAK! object 0x55af5c9a4980 still has count 1 (id /home/test/Desktop/Example2/eng.traineddatabigram-dawg)
ObjectCache(0x7f1b096669c0)::~ObjectCache(): WARNING! LEAK! object 0x55af5d7d5170 still has count 1 (id /home/test/Desktop/Example2/eng.traineddatafreq-dawg)
My directory structure is:
Example2
|------->001.jpg
|------->config
|------->eng.traineddata
|------->pdf.ttf
|------->testexample2
|------->testexample2.cpp
I have searched multiple sources for this, but could not find any fix.
Further, I would like to know whether there is some way to compile this code against libtesseract into a standalone, portable binary, so that running it on other Ubuntu systems would not require installing the tesseract libraries and their dependencies.
The tesseract API examples are a showcase for tesseract's features; they do not cover every specific of the programming language of your choice (C++ in your case).
Just looking at your code, even without trying it: you dynamically allocate memory twice, but you never deallocate it. You need to free the dynamically allocated memory for your api and renderer objects. Use:
... your code ...
if (renderer) delete renderer;
if (api) delete api;
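
(As an alternative sketch, not part of the original answer: allocating both objects on the stack avoids the manual deletes entirely, since neither object needs to outlive main here.)

tesseract::TessBaseAPI api;                 // destroyed automatically at end of scope
if (api.Init(datapath, "eng")) {
    fprintf(stderr, "Could not initialize tesseract.\n");
    return EXIT_FAILURE;
}
tesseract::TessPDFRenderer renderer(output_base, api.GetDatapath(), textonly);
bool succeed = api.ProcessPages(input_image, retry_config, timeout_ms, &renderer);
api.End();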

How to use QueryPerformanceCounter to test performance of existing code?

I have an existing project in Visual Studio, with a main file that calls functions defined in a second source file. I also created a CodeTimer.cpp following the steps from the Microsoft guide, and I placed it, along with the necessary headers, in the same directory as my main and function files.
The issue is that I don't know how to hook them together. The solution builds fine; all three files compile with no errors. But when I run it with Ctrl+F5, I just see the output of my main, for obvious reasons (I never wired the CodeTimer into main).
This is my CodeTimer:
#include "stdafx.h"
#include <tchar.h>
#include <windows.h>
using namespace System;
int _tmain(int argc, _TCHAR* argv[])
{
__int64 ctr1 = 0, ctr2 = 0, freq = 0;
int acc = 0, i = 0;
// Start timing the code.
if (QueryPerformanceCounter((LARGE_INTEGER *)&ctr1) != 0)
{
// Code segment is being timed.
for (i = 0; i<100; i++) acc++;
// Finish timing the code.
QueryPerformanceCounter((LARGE_INTEGER *)&ctr2);
Console::WriteLine("Start Value: {0}", ctr1.ToString());
Console::WriteLine("End Value: {0}", ctr2.ToString());
QueryPerformanceFrequency((LARGE_INTEGER *)&freq);
Console::WriteLine("QueryPerformanceFrequency : {0} per Seconds.", freq.ToString());
Console::WriteLine("QueryPerformanceCounter minimum resolution: 1/{0} Seconds.", freq.ToString());
Console::WriteLine("ctr2 - ctr1: {0} counts.", ((ctr2 - ctr1) * 1.0 / 1.0).ToString());
Console::WriteLine("65536 Increments by 1 computation time: {0} seconds.", ((ctr2 - ctr1) * 1.0 / freq).ToString());
}
else
{
DWORD dwError = GetLastError();
Console::WriteLine("Error value = {0}", dwError.ToString());
}
// Make the console window wait.
Console::WriteLine();
Console::Write("Press ENTER to finish.");
Console::Read();
return 0;
}
NVM, fixed it. I just had to move the body of _tmain() into the main() function beneath my code and get rid of CodeTimer.cpp completely. It was a conflict of mains (multiple entry points in one project; the build used the one with the highest priority in the project).
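
(For comparison, a minimal native Win32 sketch of the same measurement without the C++/CLI Console calls; the counting loop is just a stand-in for the code being timed.)

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);    // counts per second

    QueryPerformanceCounter(&start);
    volatile int acc = 0;                // volatile so the loop isn't optimized away
    for (int i = 0; i < 100; i++) acc++;
    QueryPerformanceCounter(&end);

    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("Elapsed: %lld counts (%.9f seconds)\n",
           (long long)(end.QuadPart - start.QuadPart), seconds);
    return 0;
}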

Octave c++ and VS2010

I'm trying to use Octave with Visual C++.
I downloaded octave-3.6.1-vs2010-setup-1.exe, created a new project, added the Octave include folder to the include path, added octinterp.lib and octave.lib to the library path, and set the Octave bin folder as the working directory.
The program compiles and runs fine, except for the feval call, which causes the exception:
Microsoft C++ exception: octave_execution_exception at memory location 0x0012faef
and on Octave side:
Invalid resizing operation or ambiguous assignment to an out-of-bounds array element.
What am I doing wrong?
Code for a standalone program:
#include <octave/octave.h>
#include <octave/oct.h>
#include <octave/parse.h>

int main(int argc, char **argv)
{
    if (octave_main(argc, argv, true))
    {
        ColumnVector NumRands(2);
        NumRands(0) = 10;
        NumRands(1) = 1;
        octave_value_list f_arg, f_ret;
        f_arg(0) = octave_value(NumRands);
        f_ret = feval("rand", f_arg, 1);
        Matrix unis(f_ret(0).matrix_value());
    }
    else
    {
        error("Octave interpreter initialization failed");
    }
    return 0;
}
Thanks in advance.
I tried it myself, and the problem seems to originate from the feval line.
Now I don't have an explanation as to why, but the problem was solved by simply switching to the "Release" configuration instead of the "Debug" configuration.
I am using the Octave3.6.1_vs2010 build, with VS2010 on WinXP.
Here is the code I tested:
#include <iostream>
#include <octave/oct.h>
#include <octave/octave.h>
#include <octave/parse.h>

int main(int argc, char **argv)
{
    // Init Octave interpreter
    if (!octave_main(argc, argv, true)) {
        error("Octave interpreter initialization failed");
    }

    // x = rand(10,1)
    ColumnVector sz(2);
    sz(0) = 10; sz(1) = 1;
    octave_value_list in = octave_value(sz);
    octave_value_list out = feval("rand", in, 1);

    // print random numbers
    if (!error_state && out.length() > 0) {
        Matrix x(out(0).matrix_value());
        std::cout << "x = \n" << x << std::endl;
    }
    return 0;
}
with an output:
x =
0.165897
0.0239711
0.957456
0.830028
0.859441
0.513797
0.870601
0.0643697
0.0605021
0.153486
I'd guess that it has actually stopped pointing at the next line, and the error really lies on this line:
f_arg(0) = octave_value(NumRands);
You seem to be taking a value (which value?) from a vector and then assigning it to element 0 of something that has not been defined as a vector.
I don't really know, though; I've never tried writing Octave code like that. I'm just trying to work it out by translating the code to standard Matlab/Octave code, and that line seems really odd to me.

Libconfig edit config value

I am using libconfig to read/write config files in my C++ game.
Right now I just have this one config file called video.cfg:
#Video config file
video:
{
    fps:
    {
        limited = true;
        value = 20;
    };
};
This config file handles the video settings of the game.
I am trying to write a very basic console program that modifies these values based on user input. However, I have no idea how to do this. I can't find anything in the libconfig manual, and nothing on Google.
So how do you edit values in Libconfig?
#include <libconfig.h>
#include <stdio.h> /* for printf */

int main() {
    config_t cfg;
    config_setting_t *vid_fps_lim = 0;
    config_setting_t *vid_fps_val = 0;

    config_init(&cfg);
    if (config_read_file(&cfg, "myconfig") == CONFIG_TRUE) {
        /* look up the settings we want */
        vid_fps_lim = config_lookup(&cfg, "video.fps.limited");
        vid_fps_val = config_lookup(&cfg, "video.fps.value");

        /* print the current settings */
        printf("video.fps.limited = %i\n", config_setting_get_bool(vid_fps_lim));
        printf("video.fps.value = %i\n", config_setting_get_int(vid_fps_val));

        /* modify the settings */
        config_setting_set_bool(vid_fps_lim, 1);
        config_setting_set_int(vid_fps_val, 60);

        /* write the modified config back */
        config_write_file(&cfg, "myconfig");
    }
    config_destroy(&cfg);
    return 0;
}
I named the file "lcex.c" and the config file "myconfig". It builds and runs on my Debian Linux machine using the following:
gcc `pkg-config --cflags libconfig` lcex.c -o lcex `pkg-config --libs libconfig`
./lcex
Open your config file after running the app and you should see that the values have been updated.
Disclaimer: error handling is left out to make the example easier to read, and I didn't build with -Wall, etc. As with any API, read the docs and handle potential errors.
I came across this question while searching for a way to have libconfig write its output to a string instead of a file. I see that there's no accepted answer here, so I thought I would provide one for posterity, even though the question is over 3 years old.
#include <string>
#include "libconfig.h++"

int main(void) {
    libconfig::Config config;
    std::string file = "test.conf";

    try {
        config.readFile(file.c_str());
        libconfig::Setting &limited = config.lookup("video.fps.limited");
        libconfig::Setting &value = config.lookup("video.fps.value");
        limited = false;
        value = 60;
        config.writeFile(file.c_str());
    }
    catch (...) {
        // Do something reasonable with exceptions here. Do not catch (...).
    }
    return 0;
}
Hope that helps someone!
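
(Since this answer mentions wanting libconfig's output as a string rather than a file: a sketch using the C API's config_write() together with POSIX open_memstream(), which backs a FILE* with an in-memory buffer. open_memstream is POSIX-only, so this works on Linux/glibc but not with MSVC.)

#include <libconfig.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: render a config_t into a heap-allocated string (caller frees). */
char *config_to_string(const config_t *cfg)
{
    char *buf = NULL;
    size_t size = 0;
    FILE *stream = open_memstream(&buf, &size);
    if (stream == NULL)
        return NULL;
    config_write(cfg, stream);  /* same text config_write_file would emit */
    fclose(stream);             /* finalizes buf and size */
    return buf;
}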