In a research project I am working on, we are getting raw depth data from a sensor, which is not a Kinect or other consumer sensor. The raw depth data is an array of unsigned 16-bit values, where each element holds a distance in millimetres. We receive the data over TCP/IP.
In order to process the data, I am planning to convert it from a .bin format into the .oni format, so that I can use some of the algorithms provided by OpenNI and NiTE.
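For reference, here is a minimal sketch of reading such raw frames out of a .bin file. The frame geometry (640x480) and the assumption that the file's byte order matches the host are placeholders, not details from the original post:

#include <cstdint>
#include <fstream>
#include <vector>

int main() {
    // Hypothetical frame geometry; the real sensor resolution must be known.
    const int width = 640, height = 480;
    const std::size_t pixelsPerFrame = static_cast<std::size_t>(width) * height;

    std::ifstream in("depth_frames.bin", std::ios::binary);
    std::vector<uint16_t> frame(pixelsPerFrame);

    // Each frame is width*height unsigned 16-bit values, distance in mm.
    while (in.read(reinterpret_cast<char*>(frame.data()),
                   frame.size() * sizeof(uint16_t))) {
        // frame[y * width + x] is the depth at pixel (x, y) in millimetres.
        // Hand the buffer to the .oni conversion / OpenNI feeding step here.
    }
    return 0;
}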
Thank you!
I'm trying to do a Fourier transform on an audio file. So far I've managed to read the header of the file with the help of this answer. This is the output.
The audio format is 1, which means PCM, so I should be able to work with the data fairly easily. However, this is the part I can't figure out.
Is the data binary, and should I convert it to float, or is it something else that I'm not understanding?
Yes, it's binary. Specifically, it's signed 16-bit integers.
You may want to convert it to float or double depending on your FFT needs.
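As an illustration, here is a small sketch that reads a block of 16-bit PCM samples and scales them to floats in [-1, 1] for an FFT. The fixed 44-byte offset to the data chunk is an assumption; real files can contain extra chunks before "data":

#include <cstdint>
#include <fstream>
#include <vector>

int main() {
    std::ifstream wav("input.wav", std::ios::binary);

    // Assumption: a canonical 44-byte header followed directly by the
    // data chunk; parse the chunk headers properly in real code.
    wav.seekg(44, std::ios::beg);

    std::vector<int16_t> raw(4096);                 // read one block of samples
    wav.read(reinterpret_cast<char*>(raw.data()),
             raw.size() * sizeof(int16_t));
    const std::size_t got = wav.gcount() / sizeof(int16_t);

    // Scale signed 16-bit samples to floats in [-1, 1] for the FFT.
    std::vector<float> samples(got);
    for (std::size_t i = 0; i < got; ++i)
        samples[i] = raw[i] / 32768.0f;
    return 0;
}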
I suggest you use a mono input audio file; the sample you showed has two channels (stereo), which complicates the data slightly. For a mono PCM file the structure is
two-byte sample A immediately followed by two-byte sample B, etc.
In PCM, each such sample directly corresponds to a point on the analog audio curve as the microphone diaphragm (or your eardrum) wobbles. Pay attention to the endianness of your data. Each sample uses all 16 bits; for standard WAV PCM the 16-bit samples are signed, ranging from -32768 to 32767, whereas an unsigned interpretation would span 0 to 65535. Confirm your samples stay inside the expected range.
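To make the layout concrete, here is a small sketch that assembles little-endian 16-bit samples from raw bytes and de-interleaves a stereo stream into a mono buffer, averaging the two channels (just one possible down-mix):

#include <cstddef>
#include <cstdint>
#include <vector>

// bytes: raw little-endian 16-bit stereo PCM, interleaved as L0 R0 L1 R1 ...
// Returns a mono signal built by averaging the left and right samples.
std::vector<int16_t> stereoToMono(const std::vector<uint8_t>& bytes) {
    std::vector<int16_t> mono;
    // 4 bytes per stereo frame: 2 for the left sample, 2 for the right.
    for (std::size_t i = 0; i + 3 < bytes.size(); i += 4) {
        int16_t left  = static_cast<int16_t>(bytes[i]     | (bytes[i + 1] << 8));
        int16_t right = static_cast<int16_t>(bytes[i + 2] | (bytes[i + 3] << 8));
        mono.push_back(static_cast<int16_t>((left + right) / 2));
    }
    return mono;
}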
I would like to know whether there is any way to record a 16-bit depth image as video in OpenCV or another library. My project needs depth images at 16 bits per pixel, so I need to record a sequence of raw depth image pixel data.
Is there any way, or an alternative approach, to achieve this?
Currently, I'm using OpenCV 2.4.11 in C++.
Thanks
After some research, I decided to record to an ONI file and extract frames from it.
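For anyone taking the same route, here is a rough sketch of recording a depth stream to an .oni file with the OpenNI 2 Recorder API. Treat it as an outline under those assumptions rather than a tested implementation (error handling and frame counts are placeholders):

#include <OpenNI.h>

int main() {
    openni::OpenNI::initialize();

    openni::Device device;
    device.open(openni::ANY_DEVICE);

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);   // 16-bit depth stream
    depth.start();

    // Attach the depth stream to a recorder; frames are written as they arrive.
    openni::Recorder recorder;
    recorder.create("capture.oni");
    recorder.attach(depth);
    recorder.start();

    openni::VideoFrameRef frame;
    for (int i = 0; i < 300; ++i)                 // record roughly 300 frames
        depth.readFrame(&frame);

    recorder.stop();
    recorder.destroy();
    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}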
I want to write a script to extract the PixelData of a DICOM file using C or C++. I don't want to use external libraries like dicomsdl... Can anyone help me write an algorithm to extract and show the image?
Just extracting the image data under PixelData is not enough to interpret a DICOM image properly. You will need other attributes from the DICOM file, such as Rows, Columns, Bits Allocated, Bits Stored, High Bit, Photometric Interpretation, Samples per Pixel, and Number of Frames, just to interpret the raw uncompressed image data. Also, the stored image data can be in little-endian or big-endian byte order. In addition, the image data can be encapsulated or compressed (e.g. with algorithms such as JPEG, JPEG 2000, JPEG-LS, or RLE), and compressed streams are stored differently than uncompressed image data. The PixelData element can even exist in multiple locations in a single DICOM file (e.g. one under the Icon Image Sequence (thumbnail) and one at the top level (the actual image)).
It gets more complicated still when you need to account for Palette Color (segmented vs. un-segmented), the Modality LUT, the VOI LUT, etc. My recommendation is to use an existing DICOM SDK; there are many open-source and commercial SDKs available for different platforms and programming environments.
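To give a sense of why those attributes matter, here is a small sketch of how the size of one uncompressed frame inside PixelData is derived from them. The numeric values are placeholders you would normally read from the data set:

#include <cstddef>
#include <cstdio>

int main() {
    // Placeholder values; in practice these come from the DICOM attributes.
    const std::size_t rows            = 512;   // (0028,0010) Rows
    const std::size_t columns         = 512;   // (0028,0011) Columns
    const std::size_t samplesPerPixel = 1;     // (0028,0002) Samples per Pixel
    const std::size_t bitsAllocated   = 16;    // (0028,0100) Bits Allocated

    // One uncompressed frame occupies this many bytes in PixelData;
    // multiply by Number of Frames (0028,0008) for multi-frame images.
    const std::size_t frameBytes =
        rows * columns * samplesPerPixel * (bitsAllocated / 8);

    std::printf("bytes per frame: %zu\n", frameBytes);
    return 0;
}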
I have raw buffer video data which I need to stream over the net as and when requested by clients. This data is very large, so I will need to compress it and save it at my server location.
One way I understand of doing this is to convert the data into a video file (.avi), save it, and then stream it frame by frame as and when requested.
I have used OpenCV's VideoWriter to convert the buffer to a cv::Mat and then write it to an .avi file with MPEG-4 encoding, as sketched below.
However, I want to know if there is any way I can compress the raw buffer data and save it in a file. Can anyone tell me whether this is possible?
Thank You.
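For reference, here is a minimal sketch of the VideoWriter approach described above for OpenCV 2.4, assuming 8-bit BGR frames of a known size; the frame geometry, frame rate, and buffer contents are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Placeholder frame geometry and frame rate.
    const int width = 640, height = 480;
    const double fps = 30.0;

    // MPEG-4 part 2 ("MP4V") codec inside an .avi container.
    cv::VideoWriter writer("compressed.avi",
                           CV_FOURCC('M', 'P', '4', 'V'),
                           fps, cv::Size(width, height), true);
    if (!writer.isOpened())
        return 1;

    // rawBuffer stands in for one frame of 8-bit BGR data received over TCP.
    std::vector<unsigned char> rawBuffer(width * height * 3, 0);
    cv::Mat frame(height, width, CV_8UC3, rawBuffer.data());

    for (int i = 0; i < 100; ++i)   // write some frames
        writer.write(frame);
    return 0;
}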
I need to convert depth information acquired with a Kinect sensor
to real-world 3D coordinates.
I know that the way to do this is to use a DepthGenerator
and call ConvertProjectiveToRealWorld,
but this requires the sensor to be connected....
Does anyone know a way to do it without the sensor connected?
How is your depth information stored?
The easiest way would probably be initializing OpenNI from a depth recording (an .oni file). You can create .oni files using the NiViewer sample bundled with OpenNI (press '?' to see the list of commands; one of them should let you record).
If your data isn't stored in an .oni file, you should be able to create a dummy file with a single depth frame in it. That should be enough to cause the sensor parameters to be stored in the .oni file as well, and those are the parameters used in the projective-to-real-world conversion.
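Here is a rough sketch of that approach with the OpenNI 1.x C++ wrapper, assuming the recording is called "recording.oni". Treat the exact calls as an outline; some of them were renamed or deprecated across OpenNI 1.x releases, and error checking is omitted:

#include <XnCppWrapper.h>

int main() {
    xn::Context context;
    context.Init();

    // Open the .oni recording instead of a live sensor.
    context.OpenFileRecording("recording.oni");

    // The recording provides a depth node with the stored sensor parameters.
    xn::DepthGenerator depth;
    context.FindExistingNode(XN_NODE_TYPE_DEPTH, depth);

    // Example: convert one projective point (pixel x, y, depth in mm)
    // to real-world millimetre coordinates.
    XnPoint3D projective, realWorld;
    projective.X = 320.0f;
    projective.Y = 240.0f;
    projective.Z = 1500.0f;   // depth value at that pixel, in mm

    depth.ConvertProjectiveToRealWorld(1, &projective, &realWorld);

    context.Release();
    return 0;
}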