ITK: CannyEdgeDetectionFilter issues - c++

I'm trying to use the CannyEdgeDetectionImageFilter, but the GetPixel() method doesn't seem to be properly referencing the filtered image. I've tried a number of approaches to resolve the issue, but the only thing that seems to work is writing the image to disk and reading it back (which is not ideal). My code is below:
typedef itk::Image<unsigned char, 2> ImageType;
typedef itk::Image<float, 2> floatImageType;
...
floatImageType::Pointer to_float(ImageType::Pointer image){
    typedef itk::CastImageFilter<ImageType, floatImageType> CastFilterType;
    CastFilterType::Pointer castToFloat = CastFilterType::New();
    castToFloat->SetInput( image );
    castToFloat->Update();
    return castToFloat->GetOutput();
}
...
ImageType::Pointer check(ImageType::Pointer image){
    typedef itk::ImageDuplicator<ImageType> ImageDuplicatorType;
    typedef itk::RescaleIntensityImageFilter<floatImageType, ImageType> RescaleFilter;
    typedef itk::CannyEdgeDetectionImageFilter<floatImageType, floatImageType> CannyFilter;
    RescaleFilter::Pointer Rescale = RescaleFilter::New();
    CannyFilter::Pointer Canny = CannyFilter::New();
    ImageDuplicatorType::Pointer Duplicator = ImageDuplicatorType::New();
    ImageType::Pointer cannyImage;
    Canny->SetVariance( 20 );
    Canny->SetUpperThreshold( 2 );
    Canny->SetLowerThreshold( 20 );
    Rescale->SetOutputMinimum( 0 );
    Rescale->SetOutputMaximum( 255 );
    Duplicator->SetInputImage(image);
    Duplicator->Update();
    ImageType::Pointer img_copy = Duplicator->GetOutput();
    floatImageType::Pointer floatImage = to_float(img_copy);
    Canny->SetInput(floatImage);
    Rescale->SetInput( Canny->GetOutput() );
    Rescale->Update();
    cannyImage = Rescale->GetOutput();
    /* Insert odd work-around here */
    const ImageType::SizeType img_size = cannyImage->GetLargestPossibleRegion().GetSize();
    itk::Index<2> loc = {{img_size[0]/2, 0}};
    int top_edge = 0;
    bool contin = true;
    for (int i = 0; ((i < img_size[1]) && contin); i++){
        loc[1] = i;
        if (cannyImage->GetPixel(loc) != 0){
            top_edge = i;
            contin = false;
        }
    }
    ...
}
When a pixel value of cannyImage is read later on, the value should be either 0 or (if an edge) 255. However, it produces values that appear to belong to a gray-scale image.
However, if I include the following code in the section above, it works as one would expect:
std::string fname = "/tmp/canny_" + to_string(getpid()) + ".tmp";
writeImage(cannyImage, fname);
cannyImage = readImage(fname);
(Methods writeImage(ImageType::Pointer image, std::string filename) and ImageType::Pointer readImage(std::string filename) were defined earlier in the program.)
Anyone know what's going wrong with my program? Why does pushing it through the disk IO cause it to work?

VP8 C/C++ source, how to encode frames in ARGB format to frame instead of from file

I'm trying to get started with the VP8 library. I'm not building it in the standard way they tell you to; I just loaded all of the main files and the "encoder" folder into a new Visual Studio C++ DLL project, and included the C files in an extern "C" dll-export function, which so far builds fine. I just have no idea where to start with the C++ API to encode, say, 3 frames of ARGB data into a very basic video, just to get started.
The only example I could find is simple_encoder.c in the examples folder, but its premise is loading another file, parsing its frames, and converting them, so it seems a bit complicated. I just want to be able to pass in a byte array of a few ARGB frames and have it output a very simple VP8 video.
I've seen How to encode series of images into VP8 using WebM VP8 Encoder API? (C/C++), but the accepted answer just links to the build instructions and references the general specification of the VP8 format. The closest thing I could find there is the example encoding parameters, but I want to do everything from C++, and I can't seem to find any other examples besides the default simple_encoder.c.
Just to cite some of the relevant parts I think I understand, but still need more help on:
//in int main...
...
vpx_image_t raw;
if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, info.frame_width,
                   info.frame_height, 1)) {
    //"Failed to allocate image." error
}
So that part I think I understand for the most part. VPX_IMG_FMT_I420 is the only part that's not made in this file itself, but it's in vpx_image.h, first as
#define VPX_IMG_FMT_PLANAR
//then after...
typedef enum vpx_img_fmt {
    VPX_IMG_FMT_NONE,
    VPX_IMG_FMT_RGB24, /**< 24 bit per pixel packed RGB */
    ///some other formats....
    VPX_IMG_FMT_ARGB, /**< 32 bit packed ARGB, alpha=255 */
    VPX_IMG_FMT_YV12 = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP | 1, /**< planar YVU */
    VPX_IMG_FMT_I420 = VPX_IMG_FMT_PLANAR | 2,
} vpx_img_fmt_t; /**< alias for enum vpx_img_fmt */
So I guess part of my question is answered already just from writing this: one of the formats is VPX_IMG_FMT_ARGB, although I don't know where it's defined. I'm guessing in the above code I would replace it with
const VpxInterface *encoder = get_vpx_encoder_by_name("vp8");
vpx_image_t raw;
VpxVideoInfo info = { 0, 0, 0, { 0, 0 } };
info.frame_width = 1920;
info.frame_height = 1080;
info.codec_fourcc = encoder->fourcc;
info.time_base.numerator = 1;
info.time_base.denominator = 24;
bool didIt = vpx_img_alloc(&raw, VPX_IMG_FMT_ARGB,
info.frame_width, info.frame_height/*example width and height*/, 1)
//check didIt..
vpx_codec_enc_cfg_t cfg;
vpx_codec_ctx_t codec;
vpx_codec_err_t res;
res = vpx_codec_enc_config_default(encoder->codec_interface(), &cfg, 0);
//check if !res for error
cfg.g_w = info.frame_width;
cfg.g_h = info.frame_height;
cfg.g_timebase.num = info.time_base.numerator;
cfg.g_timebase.den = info.time_base.denominator;
cfg.rc_target_bitrate = 200;
VpxVideoWriter *writer = NULL;
writer = vpx_video_writer_open(outfile_arg, kContainerIVF, &info);
//check if !writer for error
bool startIt = vpx_codec_enc_init(&codec, encoder->codec_interface(), &cfg, 0);
//not even sure where codec was set actually..
//check !startIt for error starting
//now the next part in the original is where it reads from the input file, but instead
//I need to pass in an array of some ARGB byte arrays..
//thing is, in the next step they use a while loop for
//vpx_img_read(&raw, fopen("path/to/YV12formatVideo", "rb"))
//to set the contents of the raw vpx image allocated earlier, then
//they call another program that writes it to the writer object,
//but I don't know how to read the actual ARGB data directly into the raw image
//without using fopen, so that's one question (review at end)
//so I'll just put a placeholder here for the **question**
//assuming I have an array of byte arrays stored individually
//for simplicity sake
int size = 1920 * 1080 * 4;
uint8_t imgOne[size] = {/*some big byte array*/};
uint8_t imgTwo[size] = {/*some big byte array*/};
uint8_t imgThree[size] = {/*some big byte array*/};
uint8_t *images[] = {imgOne, imgTwo, imgThree};
int framesDone = 0;
int maxFrames = 3;
//so now I can replace the while loop with a filler function
//until I find out how to set the raw image with ARGB data
while(framesDone < maxFrames) {
magicalFunctionToSetARGBOfRawImage(&raw, images[framesDone]);
encode_frame(&codec, &raw, framesDone, 0, writer);
framesDone++;
}
//now apparently it needs to be flushed after
while(encode_frame(&codec, 0, -1, 0, writer)){}
vpx_img_free(&raw);
bool isDestroyed = vpx_codec_destroy(&codec);
//check if !isDestroyed for error
//now we gotta define the encode_Frames function, but simpler
//(and make it above other function for reference purposes
//or in header
static int encode_frame(
    vpx_codec_ctx_t *coydek,
    vpx_image_t *pic,
    int currentFrame,
    int flags,
    VpxVideoWriter *koysayv /*writer*/
) {
    //now to substitute their encodeFrame function for
    //the actual raw calls to simplify things
    const vpx_codec_err_t DidIt = vpx_codec_encode(
        coydek,
        pic,
        currentFrame,
        1, //duration I think
        flags, //whatever that is
        VPX_DL_REALTIME //different than simple_encoder
    );
    if (DidIt != VPX_CODEC_OK) return 0; //error here
    vpx_codec_iter_t iter = 0;
    const vpx_codec_cx_pkt_t *pkt = 0;
    int gotThings = 0;
    while ((pkt = vpx_codec_get_cx_data(coydek, &iter)) != 0) {
        gotThings = 1;
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) { //don't exactly understand this part
            const int keyframe = (pkt->data.frame.flags & VPX_FRAME_IS_KEY) != 0;
            //don't exactly understand the & operator here
            //or how it gets the keyframe
            bool wroteFrame = vpx_video_writer_write_frame(
                koysayv,
                pkt->data.frame.buf, //I'm guessing this is the encoded frame data
                pkt->data.frame.sz,
                pkt->data.frame.pts
            );
            if (!wroteFrame) return 0; //error
        }
    }
    return gotThings;
}
Thing is, though, I don't know how to actually read the ARGB data into the raw image buffer itself. As mentioned above, the original example uses vpx_img_read(&raw, fopen("path/to/file", "rb")), but if I'm starting off with the byte arrays themselves, what function do I use instead of the file?
I have a feeling it can be solved by looking at the source code for vpx_img_read, found in tools_common.c:
int vpx_img_read(vpx_image_t *img, FILE *file) {
    int plane;
    for (plane = 0; plane < 3; ++plane) {
        unsigned char *buf = img->planes[plane];
        const int stride = img->stride[plane];
        const int w = vpx_img_plane_width(img, plane) *
                      ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
        const int h = vpx_img_plane_height(img, plane);
        int y;
        for (y = 0; y < h; ++y) {
            if (fread(buf, 1, w, file) != (size_t)w) return 0;
            buf += stride;
        }
    }
    return 1;
}
Although I personally am not experienced enough to know how to get a single frame's ARGB data in, I think the key part is fread(buf, 1, w, file), which reads parts of the file into buf. Since buf points at img->planes[plane], reading into buf writes directly into the plane, but I'm not sure that's the case, and I'm also not sure how to replace the fread so it takes a byte array that is already loaded into memory...
VPX_IMG_FMT_ARGB is not defined because it is not supported by libvpx (as far as I have seen). To compress an image with this library, you must first convert it to one of the supported formats, such as I420 (VPX_IMG_FMT_I420). The code here (not mine), https://gist.github.com/racerxdl/8164330 , does it well for the RGB format. If you don't want to use libswscale for the RGB-to-I420 conversion, you can do something like this (this code converts an RGBA byte array to an I420 vpx_image that can be used by libvpx):
unsigned int tx = <width of your image>
unsigned int ty = <height of your image>
unsigned char *image = <array of bytes : RGBARGBA... of size ty*tx*4>
vpx_image_t *imageVpx = <result that must have been properly initialized by libvpx>
imageVpx->stride[VPX_PLANE_U ] = tx/2;
imageVpx->stride[VPX_PLANE_V ] = tx/2;
imageVpx->stride[VPX_PLANE_Y ] = tx;
imageVpx->stride[VPX_PLANE_ALPHA] = tx;
imageVpx->planes[VPX_PLANE_U ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_V ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_Y ] = new unsigned char[ty*tx ];
imageVpx->planes[VPX_PLANE_ALPHA] = new unsigned char[ty*tx ];
unsigned char *planeY = imageVpx->planes[VPX_PLANE_Y ];
unsigned char *planeU = imageVpx->planes[VPX_PLANE_U ];
unsigned char *planeV = imageVpx->planes[VPX_PLANE_V ];
unsigned char *planeA = imageVpx->planes[VPX_PLANE_ALPHA];
for (unsigned int y=0; y<ty; y++)
{
    if (!(y % 2))
    {
        for (unsigned int x=0; x<tx; x+=2)
        {
            int r = *image++;
            int g = *image++;
            int b = *image++;
            int a = *image++;
            *planeY++ = max(0, min(255, (( 66*r + 129*g +  25*b) >> 8) +  16));
            *planeU++ = max(0, min(255, ((-38*r + -74*g + 112*b) >> 8) + 128));
            *planeV++ = max(0, min(255, ((112*r + -94*g + -18*b) >> 8) + 128));
            *planeA++ = a;
            r = *image++;
            g = *image++;
            b = *image++;
            a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
    else
    {
        for (unsigned int x=0; x<tx; x++)
        {
            int const r = *image++;
            int const g = *image++;
            int const b = *image++;
            int const a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
}

GDALWarpRegionToBuffer & Tiling when Dst Frame not strictly contained in Src Frame

I'm currently working with the GDAL C/C++ API and I'm facing an issue with the warp-region-to-buffer functionality (WarpRegionToBuffer).
When my destination dataset is not strictly contained in the frame of my source dataset, the area where there should be no-data values is filled with random data (see out_code.tif enclosed). However, the gdalwarp command-line tool, which also uses WarpRegionToBuffer, does not seem to have this problem.
Here is the code I use:
#include <iostream>
#include <string>
#include <vector>
#include "gdal.h"
#include "gdalwarper.h"
#include "cpl_conv.h"
int main(void)
{
std::string pathSrc = "in.dt1";
//these datas will be provided by command line
std::string pathDst = "out_code.tif";
double resolutionx = 0.000833333;
double resolutiony = 0.000833333;
//destination corner coordinates: top left (tl) bottom right (br)
float_t xtl = -1;
float_t ytl = 45;
float_t xbr = 2;
float_t ybr = 41;
//tile size defined by user
int tilesizex = 256;
int tilesizey = 256;
float width = std::ceil((xbr - xtl)/resolutionx);
float height = std::ceil((ytl - ybr)/resolutiony);
double adfDstGeoTransform[6] = {xtl, resolutionx, 0, ytl, 0, -resolutiony};
GDALDatasetH hSrcDS, hDstDS;
// Open input file
GDALAllRegister();
hSrcDS = GDALOpen(pathSrc.c_str(), GA_ReadOnly);
GDALDataType eDT = GDALGetRasterDataType(GDALGetRasterBand(hSrcDS,1));
// Create output file, using same spatial reference as input image, but new geotransform
GDALDriverH hDriver = GDALGetDriverByName( "GTiff" );
hDstDS = GDALCreate( hDriver, pathDst.c_str(), width, height, GDALGetRasterCount(hSrcDS), eDT, NULL );
OGRSpatialReference oSRS;
char *pszWKT = NULL;
//force geo projection
oSRS.SetWellKnownGeogCS( "WGS84" );
oSRS.exportToWkt( &pszWKT );
GDALSetProjection( hDstDS, pszWKT );
//Fetches the coefficients for transforming between pixel/line (P,L) raster space,
//and projection coordinates (Xp,Yp) space.
GDALSetGeoTransform( hDstDS, adfDstGeoTransform );
// Setup warp options
GDALWarpOptions *psWarpOptions = GDALCreateWarpOptions();
psWarpOptions->hSrcDS = hSrcDS;
psWarpOptions->hDstDS = hDstDS;
psWarpOptions->nBandCount = 1;
psWarpOptions->panSrcBands = (int *) CPLMalloc(sizeof(int) * psWarpOptions->nBandCount );
psWarpOptions->panSrcBands[0] = 1;
psWarpOptions->panDstBands = (int *) CPLMalloc(sizeof(int) * psWarpOptions->nBandCount );
psWarpOptions->panDstBands[0] = 1;
psWarpOptions->pfnProgress = GDALTermProgress;
//these datas will be calculated in order to warp tile by tile
//current tile size
int cursizex = 0;
int cursizey = 0;
double nbtilex = std::ceil(width/tilesizex);
double nbtiley = std::ceil(height/tilesizey);
int starttilex = 0;
int starttiley = 0;
// Establish reprojection transformer
psWarpOptions->pTransformerArg =
GDALCreateGenImgProjTransformer(hSrcDS,
GDALGetProjectionRef(hSrcDS),
hDstDS,
GDALGetProjectionRef(hDstDS),
FALSE, 0.0, 1);
psWarpOptions->pfnTransformer = GDALGenImgProjTransform;
// Initialize and execute the warp operation on region
GDALWarpOperation oOperation;
oOperation.Initialize(psWarpOptions);
for (int ty = 0; ty < nbtiley; ty++) {
//handle last tile size
//if it last tile change size otherwise keep tilesize
for (int tx = 0; tx < nbtilex; tx++) {
//if it last tile change size otherwise keep tilesize
starttiley = ty * tilesizey;
starttilex = tx * tilesizex;
cursizex = std::min(starttilex + tilesizex, (int)width) - starttilex;
cursizey = std::min(starttiley + tilesizey, (int)height) - starttiley;
float * buffer = new float[cursizex*cursizey];
memset(buffer, 0, cursizex*cursizey);
//warp source
CPLErr ret = oOperation.WarpRegionToBuffer(
starttilex, starttiley, cursizex, cursizey,
buffer,
eDT);
if (ret != 0) {
CEA_SIMONE_ERROR(CPLGetLastErrorMsg());
throw std::runtime_error("warp error");
}
//write the fuzed tile in dest
ret = GDALRasterIO(GDALGetRasterBand(hDstDS,1),
GF_Write,
starttilex, starttiley, cursizex, cursizey,
buffer, cursizex, cursizey,
eDT,
0, 0);
if (ret != 0) {
CEA_SIMONE_ERROR("raster io write error");
throw std::runtime_error("raster io write error");
}
delete[] buffer;
}
}
// Clean memory
GDALDestroyGenImgProjTransformer( psWarpOptions->pTransformerArg );
GDALDestroyWarpOptions( psWarpOptions );
GDALClose( hDstDS );
GDALClose( hSrcDS );
return 0;
}
The result:
output image of previous sample of code (as png, as I can't enclose TIF img)
The GdalWarp command line:
gdalwarp -te -1 41 2 45 -tr 0.000833333 0.000833333 in.dt1 out_cmd_line.tif
The command line result:
output image of previous command line (as png, as I can't enclose TIF img)
Can you please help me find what is wrong with my use of the GDAL C/C++ API, so that I get behaviour similar to the gdalwarp command line? There is probably an algorithm in gdalwarp that computes a mask of useful pixels in the destination frame before calling WarpRegionToBuffer, but I didn't find it.
I would really appreciate help on this problem!
Best regards

ITK Importing Image Data from a Buffer

I have coded a method to create an ITK image from a buffer (in my case a CImg image type). This is the algorithm:
void Cimg_To_ITK (CImg<uchar> img)
{
    const unsigned int Dimension = 2;
    typedef itk::RGBPixel< unsigned char > RGBPixelType;
    typedef itk::Image< RGBPixelType, Dimension > RGBImageType;
    typedef itk::ImportImageFilter< RGBPixelType, Dimension > ImportFilterType;
    ImportFilterType::Pointer importFilter = ImportFilterType::New();
    typedef itk::ImageFileWriter< RGBImageType > WriterType;
    WriterType::Pointer writer = WriterType::New();
    RGBImageType::SizeType imsize;
    imsize[0] = img.width();
    imsize[1] = img.height();
    ImportFilterType::IndexType start;
    start.Fill( 0 );
    ImportFilterType::RegionType region;
    region.SetIndex( start );
    region.SetSize( imsize );
    importFilter->SetRegion( region );
    const itk::SpacePrecisionType origin[ Dimension ] = { 0.0, 0.0 };
    importFilter->SetOrigin( origin );
    const itk::SpacePrecisionType spacing[ Dimension ] = { 1.0, 1.0 };
    importFilter->SetSpacing( spacing );
    const unsigned int numberOfPixels = imsize[0] * imsize[1];
    const bool importImageFilterWillOwnTheBuffer = true;
    RGBPixelType * localBuffer = new RGBPixelType[ numberOfPixels ];
    memcpy(localBuffer->GetDataPointer(), img.data(), numberOfPixels);
    importFilter->SetImportPointer( localBuffer, numberOfPixels, importImageFilterWillOwnTheBuffer );
    writer->SetInput( importFilter->GetOutput() );
    writer->SetFileName( "output.png" );
    writer->Update();
}
I don't get the output I want:
input:
output:
CImg stores the RGB components as separate planes. You must prepare an RGBPixel buffer, iterate over the image, and save into that buffer:
RGBPixelType *buffer = new RGBPixelType[img.width()*img.height()];
cimg_for(img,x,y)
{
    // Now align the three colors
    buffer[x+y*img.width()] = RGBPixelType({img(x,y,0,0), img(x,y,0,1), img(x,y,0,2)});
}
importImageFilterWillOwnTheBuffer = true; // To avoid leaks
It might also be a good idea to check how CImg stores the pixels and whether that differs from RGBPixelType. I suspect the RGBPixelType array has pixels stored as rgbrgbrgb, while the other may have them in some rrrrggggbbbb-style format. Or, as already hinted, if the input is a gray-scale image with a single channel, you have to replicate the value for each RGB component (or, if both are RGB, copy data from all three channels)...
This is probably because you just copy one byte per pixel. Each RGBPixelType is likely several bytes in size:
memcpy(localBuffer->GetDataPointer(), img.data(), numberOfPixels * sizeof(RGBPixelType));

StereoBM OpenCV bad allocation in release without debugging

I have a client/server application; my server uses the OpenCV library to do, for example, disparity mapping. The application works fine with StereoSGBM, but with StereoBM I get random crashes in release when launched without debugging (Ctrl+F5).
The crash is random. With a try/catch I can sometimes get "bad allocation: failed to allocate 1k bytes". The call stack doesn't show anything relevant because the crash is never in the same place: sometimes it's in imread, sometimes in a malloc, a free, a Mat.release. Every time it's different, but it always involves memory in some way.
The code is pretty simple:
void disparity_mapping(std::vector<std::string> & _return, const StereoBmValue& BmValue, const ClientShowSelection& clientShowSelection, const std::string& filenameL, const std::string& filenameR)
{
    int ch;
    alg = BmValue.algorithmSelection;
    if((filenameL == "0" || filenameR == "0"))
        _return.push_back("0");
    if((filenameL != "0" && filenameR != "0"))
    {
        imgL = imread(filenameL, CV_LOAD_IMAGE_GRAYSCALE);
        imgR = imread(filenameR, CV_LOAD_IMAGE_GRAYSCALE);
        _return.push_back("1");
        ch = imgL.channels();
        setAlgValue(BmValue, methodSelection, ch); //setting the values for StereoBM or SGBM
        disp = calculateDisparity(imgL, imgR, alg); //calculating disparity
        normalize(disp, disp8, 0, 255, CV_MINMAX, CV_8U);
        string matAsStringL(imgL.begin<unsigned char>(), imgL.end<unsigned char>());
        _return.push_back(matAsStringL);
        string matAsStringR(imgR.begin<unsigned char>(), imgR.end<unsigned char>());
        _return.push_back(matAsStringR);
        string matAsStringD(disp8.begin<unsigned char>(), disp8.end<unsigned char>());
        _return.push_back(matAsStringD);
    }
}
the two function that are called:
void setAlgValue(const StereoBmValue BmValue, int methodSelection, int ch)
{
if (initDisp)
    initDisparity(methodSelection); //initializing bm.init(...) and finding remap information from stereoRectify, etc.
//Select 0 == StereoSGBM, 1 == StereoBM
int alg = BmValue.algorithmSelection;
//storing alg value.
stereoValue.minDisparity = BmValue.minDisparity;
stereoValue.disp12MaxDiff = BmValue.disp12MaxDiff;
stereoValue.SADWindowSize = BmValue.SADWindowSize;
stereoValue.textureThreshold = BmValue.textureThreshold;
stereoValue.uniquenessRatio = BmValue.uniquenessRatio;
stereoValue.numberOfDisparities = BmValue.numberOfDisparities;
stereoValue.preFilterCap = BmValue.preFilterCap;
stereoValue.speckleWindowSize = BmValue.speckleWindowSize;
stereoValue.speckleRange = BmValue.speckleRange;
stereoValue.preFilterSize = BmValue.preFilterSize;
if (alg == 1) //set of the values in the bm state
{
bm.state->roi1 = roi1;
bm.state->roi2 = roi2;
bm.state->preFilterCap = stereoValue.preFilterCap;
bm.state->SADWindowSize = stereoValue.SADWindowSize;
bm.state->minDisparity = stereoValue.minDisparity;
bm.state->numberOfDisparities = stereoValue.numberOfDisparities;
bm.state->textureThreshold = stereoValue.textureThreshold;
bm.state->uniquenessRatio = stereoValue.uniquenessRatio;
bm.state->speckleWindowSize = stereoValue.speckleWindowSize;
bm.state->speckleRange = stereoValue.speckleRange;
bm.state->disp12MaxDiff = stereoValue.disp12MaxDiff;
bm.state->preFilterSize = stereoValue.preFilterSize;
}
else if(alg == 0) //same for SGBM
{
sgbm.P1 = 8*ch*sgbm.SADWindowSize*sgbm.SADWindowSize;
sgbm.P2 = 32*ch*sgbm.SADWindowSize*sgbm.SADWindowSize;
sgbm.preFilterCap = stereoValue.preFilterCap;
sgbm.SADWindowSize = stereoValue.SADWindowSize;
sgbm.minDisparity = stereoValue.minDisparity;
sgbm.numberOfDisparities = stereoValue.numberOfDisparities;
sgbm.uniquenessRatio = stereoValue.uniquenessRatio;
sgbm.speckleWindowSize = stereoValue.speckleWindowSize;
sgbm.speckleRange = stereoValue.speckleRange;
sgbm.disp12MaxDiff = stereoValue.disp12MaxDiff;
}
}
and the other one:
Mat calculateDisparity(Mat& imgL, Mat& imgR, int alg)
{
Mat disparity;
//remap for rectification
remap(imgL, imgL, map11, map12, INTER_LINEAR,BORDER_CONSTANT, Scalar());
remap(imgR, imgR, map21, map22, INTER_LINEAR,BORDER_CONSTANT, Scalar());
//disp
if (alg == 1)
bm( imgL , imgR , disparity);
else if (alg == 0)
sgbm(imgL, imgR, disparity);
return disparity;
}
So as you can see, the code is really simple, but using bm makes everything crash. I'm using the latest OpenCV build for VS9. It's also linked with Apache Thrift, PCL, Eigen, VTK and Boost. The bm/sgbm values are controlled by the client and are correct; I don't get any errors in debug, or in release with debugging.
What can it be? Why does one work while the other makes the entire application crash? And why only in release without debugging?
I was having this same issue, and just found out that it would crash with high values of bm.state->textureThreshold. Values from ~50 up are crashing for me.

FreeImage: Pixel data accessed by FreeImage_GetBits is not correct (data + size)

I'm using the FreeImage 3.15.4 library to analyze PNG images. I'm basically trying to build a simple data structure consisting of a palette of all colors, plus an array version of the per-pixel image data consisting of indexes into the palette.
The thing is that FreeImage_GetBits seems to be returning a pointer to invalid data, and I'm not sure why. I can read the width and height of the PNG file correctly, but the data pointed to by FreeImage_GetBits is just garbage, and appears to be of an odd size. No matter how many times I run the program, it consistently dies in the same place: when iPix in the code below equals 131740, I get a C0000005 error accessing bits[131740] in the std::find call. The actual and reported PNG image size is 524288.
Furthermore, I've tried this code with smaller images that I built myself, and they work fine. The problematic PNG is provided by a third party and does not appear to be corrupt in any way (Photoshop opens it, and DirectX can process and use it normally).
Any ideas?
Here's the data declarations:
struct Color
{
    char b; // Blue
    char g; // Green
    char r; // Red
    char a; // Alpha value
    bool operator==( const Color& comp )
    {
        if ( a == comp.a &&
             r == comp.r &&
             g == comp.g &&
             b == comp.b )
            return TRUE;
        else
            return FALSE;
    }
};
typedef std::vector<Color> ColorPalette; // Array of colors forming a palette
And here's the code that does the color indexing:
// Read image data with FreeImage
unsigned int imageSize = FreeImage_GetWidth( hImage ) * FreeImage_GetHeight( hImage );
unsigned char* pData = new unsigned char[imageSize];
// Access bits via FreeImage
FREE_IMAGE_FORMAT fif;
FIBITMAP* hImage;
fif = FreeImage_GetFIFFromFilename( fileEntry.name.c_str() );
if( fif == FIF_UNKNOWN )
{
    return false;
}
hImage = FreeImage_Load( fif, filename );
BYTE* pPixelData = NULL;
pPixelData = FreeImage_GetBits( hImage );
if ( pPixelData == NULL )
{
    return false;
}
Color* bits = (Color*)pPixelData;
ColorPalette palette;
for ( unsigned int iPix = 0; iPix < imageSize; ++iPix )
{
    ColorPalette::iterator it;
    if( ( it = std::find( palette.begin(), palette.end(), bits[iPix] ) ) == palette.end() )
    {
        pData[iPix] = palette.size();
        palette.push_back( bits[iPix] );
    }
    else
    {
        unsigned int index = it - palette.begin();
        pData[iPix] = index;
    }
}
The PNG images that were problematic used indexed color modes, and the raw pixel data was indeed being returned as 8bpp. The correct course of action was to treat this data as 8 bits per pixel, with each 8-bit value an index into a palette of colors that can be retrieved with FreeImage_GetPalette. The alternative, which I ultimately chose, was to call FreeImage_ConvertTo32Bits on the indexed-color PNGs and then pass everything through the same 32-bit code path.
Pretty simple conversion, but here it is:
// Convert non-32 bit images
if ( FreeImage_GetBPP( hImage ) != 32 )
{
    FIBITMAP* hOldImage = hImage;
    hImage = FreeImage_ConvertTo32Bits( hOldImage );
    FreeImage_Unload( hOldImage );
}