OpenNI2: Reading in .oni Recording - c++

I was hoping someone could provide some direction on how I could go about reading in a previously recorded .ONI file generated using OpenNI2. The current path that I'm on suggests that I pass my file to the device and that it can handle the file and read from it instead of the camera. Something like:
openni::Device device;
openni::Status rc = device.open("C:/Somefolder/depth.oni");
Currently any variation of this simply fails to load a device. Any suggestions are always much appreciated!

You can do the following, assuming you have file_name, which is a std::string:
// All variables needed
openni::Device device_;
openni::VideoStream ir_;          // depth stream (named ir_ here)
openni::VideoStream color_;
openni::Status rc_;
openni::VideoFrameRef irf_;
openni::VideoFrameRef colorf_;

rc_ = openni::OpenNI::initialize();
// Open the recording the same way you would open a physical device
const char *cstr = file_name.c_str();
rc_ = device_.open(cstr);
// Create and start the depth stream
rc_ = ir_.create(device_, openni::SENSOR_DEPTH);
rc_ = ir_.start();
// If supported, align depth to color
rc_ = device_.setImageRegistrationMode(openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR);
// Create and start the color stream
rc_ = color_.create(device_, openni::SENSOR_COLOR);
rc_ = color_.start();
// Read one frame from each stream
rc_ = ir_.readFrame(&irf_);
rc_ = color_.readFrame(&colorf_);
The last two calls read the depth and RGB frames.
Setting the registration mode, if supported, will align depth and RGB :)
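Once the file device is open, you can also drive playback manually through OpenNI2's PlaybackControl; here is a small sketch (assuming the device_, ir_, and irf_ objects from above):
openni::PlaybackControl* pbc = device_.getPlaybackControl();
pbc->setSpeed(-1.0f);                        // manual mode: each readFrame() returns the next recorded frame
int nFrames = pbc->getNumberOfFrames(ir_);   // total depth frames in the recording
for (int i = 0; i < nFrames; ++i)
{
    ir_.readFrame(&irf_);
    // process irf_.getData() / irf_.getTimestamp() here
}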

Related

Trying to encode a GIF file using giflib

I am given image data and a color table, and I am trying to export it as a single-frame GIF using giflib. I looked into the API, but can't get it to work. The program crashes even at the first function:
GifFileType image_out;
int errorCode = 0;
char* fileName = "SomeName.gif";
image_out = *EGifOpenFileName(fileName,true, &errorCode);
It is my understanding that I first need to open a file by specifying its name and then update it with the file handle. Then fill in the screen description, the extension block, the image data, and add the 0x3B trailer to the file. Then use EGifSpew to export the whole GIF. The problem is that I can't even use EGifOpenFileName(); the program crashes at that line.
Can someone help me with the giflib API? This problem is getting really frustrating.
Thanks.
EDIT:
For the purposes of simple encoding I do not want to specify a color table and I just want to encode a single frame GIF.
The prototype is:
GifFileType *EGifOpenFileName(const char *GifFileName, const bool GifTestExistence, int *Error)
You should write it as:
GifFileType* image_out = EGifOpenFileName(fileName, true, &errorCode);
Note that GifFileType is not a POD type, so you should NOT copy it like that. Also check the return value: EGifOpenFileName returns NULL on failure (for instance, when GifTestExistence is true and the file already exists), and dereferencing a NULL pointer is exactly the kind of crash you are seeing.
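For reference, here is a minimal, hedged sketch of a complete single-frame encode, assuming giflib 5.1+ (where EGifCloseFile takes an error code). saveSingleFrameGif and the two-color palette are made up for illustration; note that a GIF does need some color table, even a tiny one:
#include <gif_lib.h>

// Hypothetical helper: writes 8-bit palette indices as a single-frame GIF
int saveSingleFrameGif(const char *path, GifPixelType *pixels, int w, int h)
{
    int error = 0;
    GifFileType *gif = EGifOpenFileName(path, false, &error);
    if (gif == NULL)                       // never dereference a NULL handle
        return error;

    // A minimal 2-color map (black/white) to satisfy the format
    GifColorType palette[2] = { {0, 0, 0}, {255, 255, 255} };
    ColorMapObject *cmap = GifMakeMapObject(2, palette);

    if (EGifPutScreenDesc(gif, w, h, 1, 0, cmap) == GIF_ERROR ||
        EGifPutImageDesc(gif, 0, 0, w, h, false, NULL) == GIF_ERROR)
    {
        GifFreeMapObject(cmap);
        EGifCloseFile(gif, &error);
        return error;
    }

    for (int y = 0; y < h; ++y)            // one scanline at a time
        EGifPutLine(gif, pixels + y * w, w);

    GifFreeMapObject(cmap);
    EGifCloseFile(gif, &error);            // writes the trailer and closes
    return error;
}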

OpenCV VideoCapture: How to get a specific frame correctly?

I am trying to get at specific frame from a video file using OpenCV 2.4.11.
I have tried to follow the documentation and online tutorials of how to do it correctly and have now tested two approaches:
1) The first method is brute force: reading each frame using video.grab() until I reach the specific frame (timestamp) I want. This method is slow if the specific frame is late in the video sequence!
string videoFile(videoFilename);
VideoCapture video(videoFile);
Mat frame;
double videoTimestamp = video.get(CV_CAP_PROP_POS_MSEC);
int videoFrameNumber = static_cast<int>(video.get(CV_CAP_PROP_POS_FRAMES));
while (videoTimestamp < targetTimestamp)
{
    videoTimestamp = video.get(CV_CAP_PROP_POS_MSEC);
    videoFrameNumber = static_cast<int>(video.get(CV_CAP_PROP_POS_FRAMES));
    // Grab the frame (but don't decode it, as we are only "fast forwarding")
    video.grab();
}
// Retrieve and save the frame
if (video.retrieve(frame))
{
    char txtBuffer[100];
    sprintf(txtBuffer, "Video1Frame_Target_%f_TS_%f_FN_%d.png", targetTimestamp, videoTimestamp, videoFrameNumber);
    string imgName = txtBuffer;
    imwrite(imgName, frame);
}
2) The second method uses video.set(...). This method is faster and doesn't seem to get any slower if the specific frame is late in the video sequence.
string videoFile(videoFilename);
VideoCapture video2(videoFile);
videoTimestamp = video2.get(CV_CAP_PROP_POS_MSEC);
videoFrameNumber = static_cast<int>(video2.get(CV_CAP_PROP_POS_FRAMES));
video2.set(CV_CAP_PROP_POS_MSEC, targetTimestamp);
while (videoTimestamp < targetTimestamp)
{
    videoTimestamp = video2.get(CV_CAP_PROP_POS_MSEC);
    videoFrameNumber = static_cast<int>(video2.get(CV_CAP_PROP_POS_FRAMES));
    // Grab the frame (but don't decode it, as we are only "fast forwarding")
    video2.grab();
}
// Retrieve and save the frame
if (video2.retrieve(frame))
{
    char txtBuffer[100];
    sprintf(txtBuffer, "Video2Frame_Target_%f_TS_%f_FN_%d.png", targetTimestamp, videoTimestamp, videoFrameNumber);
    string imgName = txtBuffer;
    imwrite(imgName, frame);
}
Problem) Now the issue is that even though the two methods end up at the same frame number, the content of the target image frame is not equal?!?
I am tempted to conclude that Method 1 is the correct one and that there is something wrong with the OpenCV video.set(...) method. But if I use the VLC player to find the approximate target frame position, it is actually Method 2 that is closest to a "correct" result?
As some extra info: I have tested the same video sequence in two different video files, encoded with the 'avc1' MPG4 codec and the 'wmv3' WMV codec respectively.
Using the WMV file, the two found frames are way off. Using the MPG4 file, the two found frames are only slightly off.
Is there anybody with experience in this who can explain my findings and tell me the correct way to get a specific frame from a video file?
Obviously there's still a bug in OpenCV/FFmpeg:
FFmpeg doesn't deliver the frames that are wanted and/or OpenCV doesn't handle this. See here and here.
[Edit:
Until that bug is fixed (either in FFmpeg or, as a work-around, in OpenCV), the only way to get an exact frame by number is to "fast forward" as you did.
(Concerning the VLC player: I suspect that it uses that buggy set() interface. For a player it is usually not too important to seek frame-exact, but for an editor it is.)]
I think that OpenCV uses FFmpeg for video decoding.
We once had a similar problem but used FFmpeg directly. By default, random (but exact) frame access isn't guaranteed, and the WMV decoder was particularly fuzzy.
Newer versions of FFmpeg give you access to lower-level routines which can be used to build a retrieval function for frames. That solution was a little involved and nothing I can remember off the top of my head right now; I'll try to find some more details later.
As a quick work-around, I would suggest decoding your videos off-line and then working on sequences of images. Though this increases the amount of storage needed, it guarantees exact random frame access. You can use FFmpeg to convert your video file into a sequence of images like this:
ffmpeg -i "input.mov" -an -f image2 "output_%05d.png"
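Frame-exact access is then just a matter of loading the right file. A small sketch, under the assumption that the file pattern matches the ffmpeg command above (note that ffmpeg's image2 muxer numbers files from 1, so 0-based frame index N becomes file N+1; targetFrameNumber is a hypothetical index):
char name[64];
int targetFrameNumber = 36;                 // hypothetical 0-based frame index
sprintf(name, "output_%05d.png", targetFrameNumber + 1);
Mat exactFrame = imread(name);              // plain file read, no codec seeking involved
if (!exactFrame.empty())
{
    // exactFrame is guaranteed to be the decoded frame you asked for
}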

Compress DICOM file with DCMTK (C++)

Damn, I'm very frustrated...
Following the example on this page http://support.dcmtk.org/docs/mod_dcmjpeg.html, I have written a C++ program to decompress a JPEG-compressed DICOM image file.
Now I want to do the reverse, from uncompressed to compressed, and if I use the other example on the same page, with the same (or another) file, the code compiles and runs but is not able to compress the file...
I saw that after the following code, the original Xfer and the current one are the same, and this is not good because they need to be different:
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
It's like the chooseRepresentation method fails...
Moreover, the line
dataset->canWriteXfer(EXS_JPEGProcess14SV1)
returns false.
Debugging into the dcpixel.cc file, I saw that the code goes into
DcmPixelData::canChooseRepresentation(.........
....
....
// representation not found, check if we have a codec that can create the
// desired representation.
if (original == repListEnd)
{
    result = DcmCodecList::canChangeCoding(EXS_LittleEndianExplicit, toType.getXfer());
}
and result is FALSE...
How can I fix it? Does someone have code that works to compress a DICOM image with DCMTK or another library?
This is the full code:
int main()
{
    //dcxfer.h
    DJDecoderRegistration::registerCodecs(); // register JPEG codecs
    DcmFileFormat fileformat;
    /**** MONO FILE ******/
    if (fileformat.loadFile("Files/cnv3DSlice (1)_cnv.dcm").good())
    {
        DcmDataset *dataset = fileformat.getDataset();
        DcmItem *metaInfo = fileformat.getMetaInfo();
        DJ_RPLossless params; // codec parameters, we use the defaults
        // this causes the lossless JPEG version of the dataset to be created
        dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
        // check if everything went well
        if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
        {
            // force the meta-header UIDs to be re-generated when storing the file
            // since the UIDs in the data set may have changed
            delete metaInfo->remove(DCM_MediaStorageSOPClassUID);
            delete metaInfo->remove(DCM_MediaStorageSOPInstanceUID);
            // store in lossless JPEG format
            fileformat.saveFile("Files/test_jpeg_compresso.dcm", EXS_JPEGProcess14SV1);
        }
    }
    DJDecoderRegistration::cleanup(); // deregister JPEG codecs
    return 0;
}
When trying to compress an image you need to call
DJEncoderRegistration::registerCodecs();
For decompression it is
DJDecoderRegistration::registerCodecs();
Your code registers only the decoder, which is why canChangeCoding finds no suitable codec and canWriteXfer returns false.
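In other words, for the question's program only the registration bracket has to change; here is a minimal sketch (headers as named in DCMTK's dcmjpeg module):
#include "dcmtk/dcmjpeg/djencode.h"  // DJEncoderRegistration
#include "dcmtk/dcmjpeg/djrplol.h"   // DJ_RPLossless

DJEncoderRegistration::registerCodecs();  // register JPEG *encoders* before chooseRepresentation()

// ... loadFile(), chooseRepresentation(EXS_JPEGProcess14SV1, &params),
//     canWriteXfer(), saveFile() exactly as in the question ...

DJEncoderRegistration::cleanup();         // deregister the encoders when done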

initializing an openni::VideoStream object without kinect plugged in

I'm using OpenNI2 in order to capture Kinect depth data.
In order to initialize m_depth, I have to use some methods of the class openni::VideoStream, like this:
openni::VideoStream m_depth;
openni::Device device;
const char* device_uri;
openni::Status ret;
device_uri = openni::ANY_DEVICE;
ret = openni::STATUS_OK;
ret = openni::OpenNI::initialize();
ret = device.open(device_uri);
ret = m_depth.create(device, openni::SENSOR_DEPTH);
The problem is that I want to initialize the object "m_depth" without the Kinect plugged in. Of course I can't, because the methods of this class, like m_depth.create, don't work.
Is there a way to do that?
You can try using an .ONI file (a dummy one could work) to initialize it.
Quoting the OpenNI2 documentation:
Later, this file can be used to initialize a file Device, and used to play back the same data that was recorded.
Opening a file device is done by passing its path as the uri to the Device::open() method.
So, you can change this line
device_uri = openni::ANY_DEVICE;
to the path of the dummy ONI file...
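For example (the path below is just a hypothetical placeholder for your own recording):
device_uri = "C:/Somefolder/dummy.oni";  // any previously recorded .ONI file
ret = device.open(device_uri);           // opens a file device instead of a physical camera
ret = m_depth.create(device, openni::SENSOR_DEPTH);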
I don't think there is another way in OpenNI2 to create a depth stream, and actually it doesn't make sense to create a stream without a camera unless you want to use the coordinate converter class...
In OpenNI 1.x you can try using mockdepth (though I didn't manage to make it work correctly).

Saving output frame as an image file CUDA decoder

I am trying to save the decoded image file back as a BMP image using the code in the CUDA Decoder project.
if (g_bReadback && g_ReadbackSID)
{
    CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field], (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);
    long padded_size = (nWidth * nHeight * 3);
    CString output_file;
    output_file.Format(_T("image/sample_45.BMP"));
    SaveBMP(g_bFrameData[active_field], nWidth, nHeight, padded_size, output_file);
    if (result != CUDA_SUCCESS)
    {
        printf("cuMemAllocHost returned %d\n", (int)result);
    }
}
But the saved image looks like this
Can anybody help me out here? What am I doing wrong? Thank you.
After investigating further, there were several modifications I made to your approach.
pDecodedFrame is actually in some non-RGB format, I think it is NV12 format which I believe is a particular YUV variant.
pDecodedFrame gets converted to an RGB format on the GPU using a particular CUDA kernel
the target buffer for this conversion will either be a surface provided by OpenGL if g_bUseInterop is specified, or else an ordinary region allocated by the driver API version of cudaMalloc if interop is not specified.
The target buffer mentioned above is pInteropFrame (even in the non-interop case). So to make an example for you, for simplicity I chose to only use the non-interop case, because it's much easier to grab the RGB buffer (pInteropFrame) in that case.
The method here copies pInteropFrame back to the host, after it has been populated with the appropriate RGB image by cudaPostProcessFrame. There is also a routine to save the image as a bitmap file. All of my modifications are delineated with comments that include RMC so search for that if you want to find all the changes/additions I made.
To use it, drop this file into the cudaDecodeGL project as a replacement for the videoDecodeGL.cpp source file. Then rebuild the project. Then run the executable normally to display the video. To capture a specific frame, run the executable with the nointerop command-line switch, e.g. cudaDecodeGL nointerop, and the video will not display, but the decode operation and frame capture will take place, and the frame will be saved in a framecap.bmp file. If you want to change the specific frame number that is captured, modify the g_FrameCapSelect = 37; variable to some other number besides 37, and recompile.
Here is the replacement for videoDecodeGL.cpp. I used pastebin because SO has a limit on the number of characters that can be entered in a post body.
Note that my approach is independent of whether readback is specified. I would recommend not using readback for this sequence.
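One more hedged note on the snippet in the question: it passes the host buffer to SaveBMP right after cuMemcpyDtoHAsync, before the asynchronous copy is known to have finished (and the error message mentions cuMemAllocHost although the call is a copy). Whatever buffer you ultimately read back, synchronize the stream first; a sketch using the names from the question:
CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field],
                                    (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);
if (result == CUDA_SUCCESS)
{
    cuStreamSynchronize(g_ReadbackSID);  // wait for the async copy to complete
    SaveBMP(g_bFrameData[active_field], nWidth, nHeight, padded_size, output_file);
}
else
{
    printf("cuMemcpyDtoHAsync returned %d\n", (int)result);
}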