How to tell NetBeans information on pthread_barrier_t - C++

I can't seem to get netbeans to recognize the pthread_barrier_t type. I can type in #include<pthread.h> okay, but no luck on pthread_barrier_t.
The following is the build and the error:
g++ -lpthread -c -g -MMD -MP -MF build/Debug/GNU-MacOSX/main.o.d -o build/Debug/GNU-MacOSX/main.o main.cpp

main.cpp:32: error: 'pthread_barrier_t' does not name a type
I am using NetBeans 7.1 and I am on Mac OS X 10.7.2.
I can create threads without any compile issues.
bool isNotInSteadyState()
{
    int rc = 0;
    threadData threadDataArray[NUM_THREADS];
    int dataArrayCount = 0;
    if (NUM_THREADS < ((PLATE_SIZE - 2) * (PLATE_SIZE - 2)))
    {
        for (int i = 1; i < PLATE_SIZE - 1; i += sqrt(NUM_THREADS))
        {
            for (int j = 1; j < PLATE_SIZE - 1; j += sqrt(NUM_THREADS))
            {
                threadDataArray[dataArrayCount].endY = i + sqrt(NUM_THREADS) - 1;
                threadDataArray[dataArrayCount].x = i;
                threadDataArray[dataArrayCount].endY = j + sqrt(NUM_THREADS) - 1;
                threadDataArray[dataArrayCount].y = j;
                pthread_t thread;
                rc = pthread_create(&thread, NULL, isNotInSteadyStateCheckRunInParallel,
                                    &threadDataArray[dataArrayCount]);
                ++dataArrayCount;
                if (dataArrayCount >= NUM_THREADS)
                {
                    //pthread_barrier_init(pthread_barrier_t * barrier,
                    //const pthread_barrierattr_t *restrict attr, NUM_THREADS);
                }
                if (rc != 0)
                {
                    fprintf(stderr, "Steady State check failed!\n");
                }
            }
        }
    }
}
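For reference, what I am eventually trying to call (once the type is recognized) looks roughly like this minimal, self-contained sketch; this is not my project code, just my understanding of the barrier API:

#include <pthread.h>
#include <cstdio>

#define NUM_THREADS 4

static pthread_barrier_t barrier;

static void *worker(void *)
{
    pthread_barrier_wait(&barrier);       // all NUM_THREADS threads meet here
    std::puts("passed the barrier");
    return NULL;
}

int main()
{
    // attr is NULL, the third argument is the thread count
    pthread_barrier_init(&barrier, NULL, NUM_THREADS);
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; ++i)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; ++i)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}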
Thoughts?
Thanks,

According to the information about pthread barriers on opengroup.org, barriers are defined in an optional part of POSIX 1003.1, 2004 edition; the name of the option is "(ADVANCED REALTIME THREADS)", sometimes referred to more precisely as "BAR, barriers (real-time)".
All POSIX options are listed here
2.1.3 POSIX Conformance
POSIX System Interfaces
The system may support one or more options (see Options) denoted by the following symbolic constants:
_POSIX_BARRIERS
So, only if the _POSIX_BARRIERS macro is defined as a positive number can you use pthread_barrier_t or pthread_barrier_wait.
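You can check this directly on the machine in question; a minimal sketch (the option macros come from <unistd.h>):

#include <unistd.h>
#include <cstdio>

int main()
{
#if defined(_POSIX_BARRIERS) && _POSIX_BARRIERS > 0
    std::printf("POSIX barriers available (_POSIX_BARRIERS = %ld)\n", (long)_POSIX_BARRIERS);
#else
    std::puts("POSIX barriers are not available on this system");
#endif
    return 0;
}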
Mac OS X is POSIX compliant, but I can't find a full list of the options it implements. I know that Solaris has problems with pthread_barrier too. There is a post on an Apple mailing list from 2006 which says that there are no barriers in Mac OS X.
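If you only need barrier-like behavior, one common workaround (a minimal sketch, not a full replacement for the POSIX semantics, and single-use only) is to build it from a mutex and a condition variable, which Mac OS X does provide:

#include <pthread.h>

struct simple_barrier
{
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             remaining;   // threads still expected at the barrier
};

void simple_barrier_init(simple_barrier *b, int count)
{
    pthread_mutex_init(&b->mutex, NULL);
    pthread_cond_init(&b->cond, NULL);
    b->remaining = count;
}

void simple_barrier_wait(simple_barrier *b)
{
    pthread_mutex_lock(&b->mutex);
    if (--b->remaining == 0)
        pthread_cond_broadcast(&b->cond);        // last arrival wakes everyone
    else
        while (b->remaining > 0)
            pthread_cond_wait(&b->cond, &b->mutex);
    pthread_mutex_unlock(&b->mutex);
}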

Related

Enabling C++ exceptions on ARM bare-metal bootloader

For learning purposes, I am trying to get full C++ support on an ARM MCU (STM32F407ZE). I am struggling to get exceptions working, hence this question:
How to get C++ exceptions on a bare-metal ARM bootloader?
To extend the question a bit:
I understand that an exception, like exiting a function, requires unwinding the stack. The fact that exiting a function works out of the box, while exception handling does not, makes me think that the compiler emits the unwinding for normal function exits but cannot do it for exceptions.
So sub-question 1 is: Is this premise correct? Do I really need to implement/integrate an unwinding library to get exception handling?
In my superficial understanding of unwinding, there is a frame on the stack and the unwinding "just" needs to call the destructor of each object in it, and finally jump to the given catch.
Sub-question 2 is: How does the unwinding library perform this task? What is the strategy used? (to the extent appropriate for an SO answer)
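To make that mental model concrete, here is the behavior I am describing (a host-side illustration only, nothing bare-metal specific): when the throw below leaves f(), the destructor of the local Guard object must run before control reaches the catch.

#include <cstdio>

struct Guard
{
    ~Guard() { std::puts("Guard destroyed during unwinding"); }
};

void f()
{
    Guard g;    // local object with a destructor
    throw 42;   // unwinding must destroy g before leaving f()
}

int main()
{
    try { f(); }
    catch (int) { std::puts("reached the catch"); }
    return 0;
}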
In my searches, I found many explanations of WHAT unwinding is, but very few of how to get it working. The closest is:
GCC arm-none-eabi (Codesourcery) and C++ Exceptions
The project
1) The first step, not without some difficulties, was to get the MCU powered and communicating through JTAG.
This is just contextual information, please do not tag the question off-topic just because of this picture. Jump to step 2 instead.
I know there are testing boards available, but this is a learning project to get a better understanding of all the "magic" behind the scenes. So I got a chip socket and a breadboard and set up the minimal power-up circuitry:
Note: JTAG is performed through the GPIO of a raspberry-pi.
Note2: I am using OpenOCD to communicate with the chip.
2) The second step was to make a minimal piece of software to blink the yellow LED.
Using arm-none-eabi-g++ as the compiler and linker, the C++ code was straightforward, but my understanding of the linker script is still somewhat blurry.
3) Enable exceptions handling (not yet working).
For this goal, the following resources were useful:
https://wiki.osdev.org/C++_Exception_Support
https://itanium-cxx-abi.github.io/cxx-abi/exceptions.pdf
https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html
However, this seems like a great deal of complexity for simple exception handling, and before starting to implement/integrate an unwinding library, I would like to be sure I am going in the right direction.
I would like to avoid hearing in two weeks: "Ohh, by the way, you just need to add this "-xx" option to the compiler and it works".
main.cpp
auto reset_handler() noexcept -> void;
auto main() -> int;

int global_variable_test = 50;

extern "C"
{
#include "stm32f4xx.h"
#include "stm32f4xx_rcc.h"
#include "stm32f4xx_gpio.h"

void assert_failed(uint8_t* file, uint32_t line) {}
void hardFaultHandler(unsigned int * hardFaultArgs);

// vector table
#define SRAM_SIZE 128*1024
#define SRAM_END  (SRAM_BASE + SRAM_SIZE)

unsigned long *vector_table[] __attribute__((section(".vector_table"))) =
{
    (unsigned long *)SRAM_END,      // initial stack pointer
    (unsigned long *)reset_handler, // main as Reset_Handler
};
}

auto reset_handler() noexcept -> void
{
    // Setup execution
    // Call the main function
    int ret = main();
    // never finish
    while (true);
}

class A
{
public:
    int b;
    auto cppFunc() -> void
    {
        throw (int)4;
    }
};

auto main() -> int
{
    // Initializing led GPIO
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOG, ENABLE);
    GPIO_InitTypeDef GPIO_InitDef;
    GPIO_InitDef.GPIO_Pin   = GPIO_Pin_13 | GPIO_Pin_14;
    GPIO_InitDef.GPIO_OType = GPIO_OType_PP;
    GPIO_InitDef.GPIO_Mode  = GPIO_Mode_OUT;
    GPIO_InitDef.GPIO_PuPd  = GPIO_PuPd_NOPULL;
    GPIO_InitDef.GPIO_Speed = GPIO_Speed_100MHz;
    GPIO_Init(GPIOG, &GPIO_InitDef);

    // Testing normal blinking
    int loopNum = 500000;
    for (int i = 0; i < 5; ++i)
    {
        loopNum = 100000;
        GPIO_SetBits(GPIOG, GPIO_Pin_13 | GPIO_Pin_14);
        for (int i = 0; i < loopNum; i++) continue; // active waiting!
        loopNum = 800000;
        GPIO_ResetBits(GPIOG, GPIO_Pin_13 | GPIO_Pin_14);
        for (int i = 0; i < loopNum; i++) continue; // active waiting!
    }

    // Try exceptions handling
    try
    {
        A a;
        a.cppFunc();
    }
    catch (...) {}

    return 0;
}
Makefile
CPP_C = arm-none-eabi-g++
C_C = arm-none-eabi-g++
LD = arm-none-eabi-g++
COPY = arm-none-eabi-objcopy
LKR_SCRIPT = -Tstm32_minimal.ld
INCLUDE = -I. -I./stm32f4xx/CMSIS/Device/ST/STM32F4xx/Include -I./stm32f4xx/CMSIS/Include -I./stm32f4xx/STM32F4xx_StdPeriph_Driver/inc -I./stm32f4xx/Utilities/STM32_EVAL/STM3240_41_G_EVAL -I./stm32f4xx/Utilities/STM32_EVAL/Common
C_FLAGS = -c -fexceptions -fno-common -O0 -g -mcpu=cortex-m4 -mthumb -DSTM32F40XX -DUSE_FULL_ASSERT -DUSE_STDPERIPH_DRIVER $(INCLUDE)
CPP_FLAGS = -std=c++11 -c $(C_FLAGS)
LFLAGS = -specs=nosys.specs -nostartfiles -nostdlib $(LKR_SCRIPT)
CPFLAGS = -Obinary
all: main.bin

main.o: main.cpp
	$(CPP_C) $(CPP_FLAGS) -o main.o main.cpp

stm32f4xx_gpio.o: stm32f4xx_gpio.c
	$(C_C) $(C_FLAGS) -o stm32f4xx_gpio.o stm32f4xx_gpio.c

stm32f4xx_rcc.o: stm32f4xx_rcc.c
	$(C_C) $(C_FLAGS) -o stm32f4xx_rcc.o stm32f4xx_rcc.c

main.elf: main.o stm32f4xx_gpio.o stm32f4xx_rcc.o
	$(LD) $(LFLAGS) -o main.elf main.o stm32f4xx_gpio.o stm32f4xx_rcc.o

main.bin: main.elf
	$(COPY) $(CPFLAGS) main.elf main.bin

clean:
	rm -rf *.o *.elf *.bin

write:
	./write_bin.sh main.elf
Linker script: stm32_minimal.ld
/* memory layout for an STM32F407 */
MEMORY
{
    FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 512K
    SRAM (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}

/* output sections */
SECTIONS
{
    /* program code into FLASH */
    .text :
    {
        *(.vector_table)   /* Vector table */
        *(.text)           /* Program code */
        *(.data)
        /* *(.eh_frame) */
    } > FLASH

    .ARM.exidx : /* Required for unwinding the stack? */
    {
        __exidx_start = .;
        *(.ARM.exidx* .gnu.linkonce.armexidx.*)
        __exidx_end = .;
    } > FLASH

    PROVIDE ( end = . );
}

When using dyn.load in R I do not get any error messages, but my .dll file does not load

I am following an example from the 'Writing R Extensions' manual, version 3.3.1. Specifically, I am working with the C++ code shown below, which I have saved in the file out.cpp; it is on page 115 of that manual:
#include <R.h>
#include <Rinternals.h>

SEXP out(SEXP x, SEXP y)
{
    int nx = length(x), ny = length(y);
    SEXP ans = PROTECT(allocMatrix(REALSXP, nx, ny));
    double *rx = REAL(x), *ry = REAL(y), *rans = REAL(ans);
    for (int i = 0; i < nx; i++) {
        double tmp = rx[i];
        for (int j = 0; j < ny; j++)
            rans[i + nx*j] = tmp * ry[j];
    }
    UNPROTECT(1);
    return ans;
}
I have a Windows computer and use the Cygwin command line. There I typed R CMD SHLIB out.cpp and got the following result:
cygwin warning:
MS-DOS style path detected: C:/R-3.2.3/etc/x64/Makeconf
Preferred POSIX equivalent is: /cygdrive/c/R-3.2.3/etc/x64/Makeconf
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
g++ -m64 -I"C:/R-3.2.3/include" -DNDEBUG -I"d:/RCompile/r-compiling/local/local323/include" -O2 -Wall -mtune=core2 -c out.cpp -o out.o
g++ -m64 -shared -s -static-libgcc -o out.dll tmp.def out.o -Ld:/RCompile/r-compiling/local/local323/lib/x64 -Ld:/RCompile/r-compiling/local/local323/lib -LC:/R-3.2.3/bin/x64 -lR
I think there is nothing wrong with this, right? Then when I go into Rstudio I type
dyn.load(paste("C:/Rextensions/out",
.Platform$dynlib.ext, sep = ""),local=FALSE)
which gives me no error message whatsoever. But when I run
is.loaded('out')
I get FALSE, so I am never able to load out.dll. Could you help me with this, please?
P.S.: I know how to run functions like this one using Rcpp, so please don't tell me to do that instead. What I am trying to do is understand the Writing R Extensions manual really well, so I hope to work through all of the examples in that document.
Thank you.
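P.P.S.: One thing I am still checking, so treat it as an assumption on my part: since out.cpp is compiled as C++, the exported symbol name may be mangled unless the function is given C linkage, in which case is.loaded('out') would report FALSE even though dyn.load succeeded. The variant I intend to try simply wraps the function in extern "C":

#include <R.h>
#include <Rinternals.h>

// Same body as above, but with C linkage so the exported symbol is
// literally "out" rather than a C++-mangled name.
extern "C" SEXP out(SEXP x, SEXP y)
{
    int nx = length(x), ny = length(y);
    SEXP ans = PROTECT(allocMatrix(REALSXP, nx, ny));
    double *rx = REAL(x), *ry = REAL(y), *rans = REAL(ans);
    for (int i = 0; i < nx; i++) {
        double tmp = rx[i];
        for (int j = 0; j < ny; j++)
            rans[i + nx*j] = tmp * ry[j];
    }
    UNPROTECT(1);
    return ans;
}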

"Bad" GCC optimization performance

I am trying to understand why using -O2 -march=native with GCC produces slower code than compiling without them.
Note that I am using MinGW (GCC 4.7.1) under Windows 7.
Here is my code :
struct.hpp :
#ifndef STRUCT_HPP
#define STRUCT_HPP
#include <iostream>
class Figure
{
public:
Figure(char *pName);
virtual ~Figure();
char *GetName();
double GetArea_mm2(int factor);
private:
char name[64];
virtual double GetAreaEx_mm2() = 0;
};
class Disk : public Figure
{
public:
Disk(char *pName, double radius_mm);
~Disk();
private:
double radius_mm;
virtual double GetAreaEx_mm2();
};
class Square : public Figure
{
public:
Square(char *pName, double side_mm);
~Square();
private:
double side_mm;
virtual double GetAreaEx_mm2();
};
#endif
struct.cpp :
#include <cstdio>
#include "struct.hpp"
Figure::Figure(char *pName)
{
sprintf(name, pName);
}
Figure::~Figure()
{
}
char *Figure::GetName()
{
return name;
}
double Figure::GetArea_mm2(int factor)
{
return (double)factor*GetAreaEx_mm2();
}
Disk::Disk(char *pName, double radius_mm_) :
Figure(pName), radius_mm(radius_mm_)
{
}
Disk::~Disk()
{
}
double Disk::GetAreaEx_mm2()
{
return 3.1415926*radius_mm*radius_mm;
}
Square::Square(char *pName, double side_mm_) :
Figure(pName), side_mm(side_mm_)
{
}
Square::~Square()
{
}
double Square::GetAreaEx_mm2()
{
return side_mm*side_mm;
}
main.cpp
#include <iostream>
#include <cstdio>
#include "struct.hpp"

double Do(int n)
{
    double sum_mm2 = 0.0;
    const int figuresCount = 10000;
    Figure **pFigures = new Figure*[figuresCount];
    for (int i = 0; i < figuresCount; ++i)
    {
        if (i % 2)
            pFigures[i] = new Disk((char *)"-Disque", i);
        else
            pFigures[i] = new Square((char *)"-Carré", i);
    }
    for (int a = 0; a < n; ++a)
    {
        for (int i = 0; i < figuresCount; ++i)
        {
            sum_mm2 += pFigures[i]->GetArea_mm2(i);
            sum_mm2 += (double)(pFigures[i]->GetName()[0] - '-');
        }
    }
    for (int i = 0; i < figuresCount; ++i)
        delete pFigures[i];
    delete[] pFigures;
    return sum_mm2;
}

int main()
{
    double a = 0;
    StartChrono(); // home made lib, working fine
    a = Do(10000);
    double elapsedTime_ms = StopChrono();
    std::cout << "Elapsed time : " << elapsedTime_ms << " ms" << std::endl;
    return (int)a % 2; // To force the optimizer to keep the Do() call
}
I compile this code twice :
1 : Without optimization
mingw32-g++.exe -Wall -fexceptions -std=c++11 -c main.cpp -o main.o
mingw32-g++.exe -Wall -fexceptions -std=c++11 -c struct.cpp -o struct.o
mingw32-g++.exe -o program.exe main.o struct.o -s
2 : With -O2 optimization
mingw32-g++.exe -Wall -fexceptions -O2 -march=native -std=c++11 -c main.cpp -o main.o
mingw32-g++.exe -Wall -fexceptions -O2 -march=native -std=c++11 -c struct.cpp -o struct.o
mingw32-g++.exe -o program.exe main.o struct.o -s
1 : Execution time :
1196 ms (1269 ms with Visual Studio 2013)
2 : Execution time :
1569 ms (403 ms with Visual Studio 2013) !!!!!!!!!!!!!
Using -O3 instead of -O2 does not improve the results.
I was, and I still am, pretty convinced that GCC and Visual Studio are equivalent, so I don't understand this huge difference.
Plus, I don't understand why the optimized version is slower than the non-optimized version with GCC.
Am I missing something here?
(Note that I had the same problem with a genuine GCC 4.8.2 on Ubuntu.)
Thanks for your help
Since I can't see the assembly code, I'm going to speculate the following:
The allocation loop can be optimized (by the compiler) by removing the if clause, producing something like the following:
for (int i = 0; i < 10000; i += 2)
{
    pFigures[i] = new Square(...);
}
for (int i = 1; i < 10000; i += 2)
{
    pFigures[i] = new Disk(...);
}
Considering that the end condition is a multiple of 4, it can be made even more "efficient":
for (int i = 0; i < 10000; i += 2*4)
{
    pFigures[i]   = ...
    pFigures[i+2] = ...
    pFigures[i+4] = ...
    pFigures[i+6] = ...
}
Memory-wise this will cause Disks to be allocated 4 at a time and Squares 4 at a time.
Now, this means they will be placed next to each other in memory.
Next, you are going to iterate the array 10000 times in normal order (by normal I mean index after index).
Think about where these shapes end up in memory. You will end up with roughly 4 times more cache misses (think about the boundary case where 4 Disks and 4 Squares sit on different pages: you would switch between the pages 8 times, whereas in the normal scenario you would switch between them only once).
This sort of optimization (if done by the compiler, and in your particular code) improves the allocation time, but not the access time (which in your example is the biggest load).
Test this by removing the i % 2 split and seeing what results you get; a sketch of what I mean follows below.
Again, this is pure speculation, and it assumes that the reason for the lower performance was a loop optimization.
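A sketch of that test (my wording; which concrete class you keep does not matter, the point is that every element has the same type, so allocation order cannot differ between builds):

// Hypothetical variant of the allocation loop in Do(): one concrete type only.
for (int i = 0; i < figuresCount; ++i)
    pFigures[i] = new Disk((char *)"-Disque", i);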
I suspect that you've got an issue unique to the combination of mingw/gcc/glibc on Windows because your code performs faster with optimizations on Linux where gcc is altogether more 'at home'.
On a fairly pedestrian Linux VM using gcc 4.8.2:
$ g++ main.cpp struct.cpp
$ time a.out
real 0m2.981s
user 0m2.876s
sys 0m0.079s
$ g++ -O2 main.cpp struct.cpp
$ time a.out
real 0m1.629s
user 0m1.523s
sys 0m0.041s
...and if you really take the blinkers off the optimizer by deleting struct.cpp and moving the implementation all inline:
$ time a.out
real 0m0.550s
user 0m0.543s
sys 0m0.000s
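For reference, by "moving the implementation all inline" I mean defining the member function bodies in struct.hpp itself, so a single translation unit sees them and the optimizer can inline or devirtualize where its analysis allows. A sketch for one class (the same change applies to Figure and Square); I have not re-timed this exact snippet:

// struct.hpp (sketch): Disk with its method bodies visible at the call site.
class Disk : public Figure
{
public:
    Disk(char *pName, double radius_mm_) : Figure(pName), radius_mm(radius_mm_) {}
    ~Disk() {}
private:
    double radius_mm;
    virtual double GetAreaEx_mm2() { return 3.1415926*radius_mm*radius_mm; }
};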

How do you wrap C++ OpenCV code with Boost::Python?

I want to wrap my C++ OpenCV code with boost::python, and to learn how to do it, I tried a toy example, in which
I use the Boost.Numpy project to provide me with boost::numpy::ndarray.
The C++ function to be wrapped, square() takes a boost::numpy::ndarray and modifies it in place by squaring each element in it.
The exported Python module name is called test.
The square() C++ function is exported as the square name in the exported module.
I am not using bjam because IMO it is too complicated and just doesn't work for me no matter what. I'm using good old make.
Now, here's the code:
// test.cpp
#include <boost/python.hpp>
#include <boost/numpy.hpp>
#include <boost/scoped_array.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
namespace py = boost::python;
namespace np = boost::numpy;
void square(np::ndarray& array)
{
    if (array.get_dtype() != np::dtype::get_builtin<int>())
    {
        PyErr_SetString(PyExc_TypeError, "Incorrect array data type.");
        py::throw_error_already_set();
    }
    size_t rows = array.shape(0), cols = array.shape(1);
    size_t stride_row = array.strides(0) / sizeof(int),
           stride_col = array.strides(1) / sizeof(int);
    cv::Mat mat(rows, cols, CV_32S);

    // Square every element into the temporary cv::Mat.
    int *row_iter = reinterpret_cast<int*>(array.get_data());
    for (int i = 0; i < rows; i++, row_iter += stride_row)
    {
        int *col_iter = row_iter;
        int *mat_row = (int*)mat.ptr(i);
        for (int j = 0; j < cols; j++, col_iter += stride_col)
        {
            *(mat_row + j) = (*col_iter) * (*col_iter);
        }
    }

    // Copy the results back into the ndarray; reset row_iter first,
    // since the previous loop left it past the end of the data.
    row_iter = reinterpret_cast<int*>(array.get_data());
    for (int i = 0; i < rows; i++, row_iter += stride_row)
    {
        int *col_iter = row_iter;
        int *mat_row = (int*)mat.ptr(i);
        for (int j = 0; j < cols; j++, col_iter += stride_col)
        {
            *col_iter = *(mat_row + j);
        }
    }
}

BOOST_PYTHON_MODULE(test)
{
    using namespace boost::python;
    def("square", square);
}
And here's the Makefile:
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
BOOST_INC = /usr/local/include
BOOST_LIB = /usr/local/lib
OPENCV_LIB = $$(pkg-config --libs opencv)
OPENCV_INC = $$(pkg-config --cflags opencv)
TARGET = test
$(TARGET).so: $(TARGET).o
	g++ -shared -Wl,--export-dynamic \
	    $(TARGET).o -L$(BOOST_LIB) -lboost_python \
	    $(OPENCV_LIB) \
	    -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) \
	    -o $(TARGET).so

$(TARGET).o: $(TARGET).cpp
	g++ -I$(PYTHON_INCLUDE) $(OPENCV_INC) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
With this scheme, I can type make and test.so gets created. But when I try to import it,
In [1]: import test
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-73ae3ffe1045> in <module>()
----> 1 import test
ImportError: ./test.so: undefined symbol: _ZN5boost6python9converter21object_manager_traitsINS_5numpy7ndarrayEE10get_pytypeEv
In [2]:
This is a linker error which I can't seem to fix. Can anyone please help me figure out what's going on? Do you have (links to) code that already integrates OpenCV, numpy and Boost.Python without things like Py++ or the like?
Okay, I fixed this. It was a simple issue, but a sleepy brain and too many servings of bjam made me miss it. In the Makefile, I had forgotten to add -lboost_numpy, which links the Boost.Numpy library into my module. So, the modified Makefile looks like this:
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
BOOST_INC = /usr/local/include
BOOST_LIB = /usr/local/lib
OPENCV_LIB = $$(pkg-config --libs opencv)
OPENCV_INC = $$(pkg-config --cflags opencv)
TARGET = test
$(TARGET).so: $(TARGET).o
	g++ -shared -Wl,--export-dynamic \
	    $(TARGET).o -L$(BOOST_LIB) -lboost_python -lboost_numpy \
	    $(OPENCV_LIB) \
	    -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) \
	    -o $(TARGET).so

$(TARGET).o: $(TARGET).cpp
	g++ -I$(PYTHON_INCLUDE) $(OPENCV_INC) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
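Unrelated to the link error, but worth flagging (this is an assumption based on the Boost.Numpy examples I have seen, not something verified in this thread): the module initializer normally calls np::initialize() before any ndarray is used, otherwise the numpy C-API is not set up:

BOOST_PYTHON_MODULE(test)
{
    using namespace boost::python;
    np::initialize();   // assumed required: sets up the numpy C-API for boost::numpy
    def("square", square);
}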

Weird and unpredictable crash when using libx264 cross-compiled with MinGW

I'm working on a C++ project using Visual Studio 2010 on Windows. I'm linking dynamically against x264 which I built myself as a shared library using MinGW following the guide at
http://www.ayobamiadewole.com/Blog/Others/x264compilation.aspx
The strange thing is that my x264 code works perfectly sometimes. Then, when I change some line of code (or even a comment in the file!) and recompile, everything crashes on the line
encoder_ = x264_encoder_open(&param);
With the message
Access violation reading location 0x00000000
I'm not doing anything funky at all, so it's probably not my code that is wrong; I guess something is going wrong with the linking, or maybe with how I compiled x264.
The full initialization code:
x264_param_t param = { 0 };
if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0) {
throw KStreamerException("x264_param_default_preset failed");
}
param.i_threads = 1;
param.i_width = 640;
param.i_height = 480;
param.i_fps_num = 10;
param.i_fps_den = 1;
encoder_ = x264_encoder_open(&param); // <-----
if (encoder_ == 0) {
throw KStreamerException("x264_encoder_open failed");
}
x264_picture_alloc(&pic_, X264_CSP_I420, 640, 480);
Edit: It turns out that it always works in Release mode, and that when using superfast instead of ultrafast it also works in Debug mode 100% of the time. Could it be that ultrafast is doing some crazy optimizations that the debugger doesn't like?
I've met this problem too with libx264-120.
libx264-120 was built on MinGW with the configure options below.
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
platform: X86
system: WINDOWS
cli: no
libx264: internal
shared: yes
static: no
asm: yes
interlaced: yes
avs: yes
lavf: no
ffms: no
gpac: no
gpl: yes
thread: win32
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
visualize: no
bit depth: 8
chroma format: all
$ make -j8
lib /def:libx264-120.def /machine:x86
#include "stdafx.h"
#include <iostream>
#include <cassert>
using namespace std;
#include <stdint.h>
extern "C"{
#include <x264.h>
}
int _tmain(int argc, _TCHAR* argv[])
{
int width(640);
int height(480);
int err(-1);
x264_param_t x264_param = {0};
//x264_param_default(&x264_param);
err =
x264_param_default_preset(&x264_param, "veryfast", "zerolatency");
assert(0==err);
x264_param.i_threads = 8;
x264_param.i_width = width;
x264_param.i_height = height;
x264_param.i_fps_num = 60;//fps;
x264_param.i_fps_den = 1;
// Intra refres:
x264_param.i_keyint_max = 60;//fps;
x264_param.b_intra_refresh = 1;
//Rate control:
x264_param.rc.i_rc_method = X264_RC_CRF;
x264_param.rc.f_rf_constant = 25;
x264_param.rc.f_rf_constant_max = 35;
//For streaming:
x264_param.b_repeat_headers = 1;
x264_param.b_annexb = 1;
err = x264_param_apply_profile(&x264_param, "baseline");
assert(0==err);
x264_t *x264_encoder = x264_encoder_open(&x264_param);
x264_encoder = x264_encoder;
x264_encoder_close( x264_encoder );
getchar();
return 0;
}
This program succeeds sometimes, but often fails in x264_encoder_open with the access violation.
There is very little information about this on Google, and how to initialize x264_param_t and how to use x264_encoder_open are unclear.
The behavior seems to be caused by x264's setting values, but I can't tell which ones without reading some open-source programs that use libx264.
Also, this access violation does not seem to occur on the first execution, nor when compiling with MinGW's gcc (e.g. gcc -o test test.c -lx264; ./test).
Given this behavior, I think the DLL version of libx264 built with MinGW's gcc is doing something strange with its resources.
I had the same problem. The only way I was able to fix it was to build the x264 DLL without the asm option (i.e. specify --disable-asm).