Logging to .klg file using Kinect 1 - python-2.7

I am trying to run a SLAM algorithm (ElasticFusion) using my custom .klg file.
I tried the following two approaches:
The first approach was to build the .klg file manually from separate depth and RGB image (.png) files and their timestamp information. I tried the conversion script on the Sequence 'freiburg1_desk' dataset and then ran ElasticFusion, and I got a good result and point cloud. But when I recorded an environment with my own device, following the same steps, I did not get the desired result or point cloud. The result I get with live logging is much better. I guess it is because of the code that I am using for the depth image conversion:
# clamp the raw Kinect disparity to 10 bits
np.clip(depth, 0, 2**10 - 1, depth)
# Magnenat approximation: distance = 0.1236 * tan(raw / 2842.5 + 1.1863) - 0.037 (metres)
depth2 = depth / 2842.5
depth2 += 1.1863
depth = np.tan(depth2)
depth *= 0.1236
depth -= 0.037
# convert metres to millimetres and store as 16-bit depth
depth *= 1000
#depth = 0.1236 * math.tan(depth / 2842.5 + 1.1863);
depth = depth.astype(np.uint16)
return depth
I got the above formula from here:
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data.
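As a quick sanity check of the quoted formula (the raw disparity value below is only an illustrative example):

import math

raw = 500
distance_m = 0.1236 * math.tan(raw / 2842.5 + 1.1863) - 0.037
print(distance_m)  # roughly 0.55 m for this raw value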
The second approach I tried was using this Logger, which is suggested by Thomas Whelan (ElasticFusion). I ran the Logger without any errors:
Number devices connected: 1
1. device on bus 001:14 is a Xbox NUI Camera (2AE) from Microsoft (45E) with serial id 'A00366911101042A'
searching for device with index = 1
Opened 'Xbox NUI Camera' on bus 1:14 with serial number 'A00366911101042A'
But I am getting a black screen for both the depth and RGB images.
I am using Ubuntu 16.04 and a Kinect 1. Any suggestion or help will be appreciated.

Solved
The second approach worked after re-installing OpenNI. Probably in the previous runs the Logger was somehow unable to find OpenNI for streaming the depth and RGB frames.

Related

Unexpected result of pcl::SACSegmentation

Win 11 Pro
PCL 1.12.1
Visual Studio
I want to segment a plane within a point cloud using the SACSegmentation module and get an unexpected result. Although I have since solved the problem, I want to know what caused it. I want to segment the ground (shown as the blue line set), but the code returns the wrong plane, shown with the red lines.
Here is the code and PCD file:
https://github.com/PointCloudLibrary/pcl/files/8101342/compress.zip
After downsampling the point cloud with the UniformSampling module, it generates the right result. Is it possible that the size of the point cloud has some influence?

How to make image comparison in OpenCV more coarse

I am writing code on a Raspberry Pi in Python to compare two images using mean squared error. The project is a personal home security thing.
My main goal is to detect a change between the images that I capture from the Pi camera (if something is added to the image or removed from it), but right now my code is too sensitive. It is affected by changes in background lighting, which I do not want.
I have two options in front of me: either scrap my current logic and start a new one, or improve my current logic to account for this noise (if I can call it that). I am searching for ways to improve my logic, but I wanted some guidance on how to go about it.
My biggest fear is that I am wasting time flogging a dead horse. Should I just look for some other algorithm to detect a change in the image, or should I use edge detection?
import numpy as np
import cv2
import os
from threading import Thread

###### Function Definitions ########################################
def mse(imageA, imageB):
    # the 'Mean Squared Error' between the two images is the
    # sum of the squared differences between the two images;
    # NOTE: the two images must have the same dimensions
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    # return the MSE; the lower the error, the more "similar"
    # the two images are
    return err

def compare_images(imageA, imageB):
    # compute the mean squared error
    m = mse(imageA, imageB)
    print(m)

def capture_image():
    ## shell command to take a photo
    os.system(image_args)

## original image path variable
original_image_path = "/home/pi/Downloads/python-compare-two-images/originalimage.png"
## original_image_args is a shell command to take the reference photo
original_image_args = "raspistill -o " + original_image_path + " -w 320 -h 240 -q 50 -t 500"
os.system(original_image_args)
## read the greyscale of the image into the variable original_image
original_image = cv2.imread(original_image_path, 0)

## test image
image_args = "raspistill -o /home/pi/Downloads/python-compare-two-images/Test_Images/image.png -w 320 -h 240 -q 50 --nopreview -t 10 --exposure sports"
image_path = "/home/pi/Downloads/python-compare-two-images/Test_Images/"
image1_name = "image.png"

# create a new thread to take pictures
My_Thread = Thread(target=capture_image)
# thread started
My_Thread.start()
flag = 0
while True:
    if My_Thread.isAlive():
        flag = 0
    else:
        flag = 1
    if flag == 1:
        flag = 0
        image1 = cv2.imread(image_path + image1_name, 0)
        My_Thread = Thread(target=capture_image)
        My_Thread.start()
        compare_images(original_image, image1)
A first improvement is to adjust a gain to compensate for the global variation of the light, for example by taking the average intensity of the two images and correcting one with the ratio of the intensities.
This can fail in case of a change in the foreground, which will influence the global average. If that change in the foreground does not cover too large an area, you can still get an estimate by robustly fitting a linear model y = a·x.
A worse, but unfortunately common, scenario is when the background illumination changes in a non-uniform way. A partial solution is to try to fit a non-uniform gain model, such as one obtained by bilinear interpolation between gains estimated at the corners, or over a finer subdivision of the image.
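As a rough sketch of the global-gain idea above (OpenCV/NumPy; the file names are placeholders):

import cv2
import numpy as np

reference = cv2.imread("originalimage.png", 0).astype("float")
frame = cv2.imread("image.png", 0).astype("float")

# single global gain from the mean intensities
gain = reference.mean() / max(frame.mean(), 1e-6)
# a more robust alternative when part of the foreground has changed:
# gain = np.median(reference / np.maximum(frame, 1.0))
frame_corrected = np.clip(frame * gain, 0, 255)

# MSE after gain compensation
err = np.mean((reference - frame_corrected) ** 2)
print(err)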
The topic of change detection is a well-studied field. One of the basic options is to model each pixel as a Gaussian distribution: sample a lot of images and calculate the mean and variance of each pixel.
For the pixels that tend to change when the lighting changes, the variance will be bigger than for the pixels that do not change as much.
To detect movement at a certain pixel, you just need to choose what probability you consider an unusual change in the pixel value and use the Gaussian distribution you calculated to find the corresponding threshold value.
To make this solution efficient on your Raspberry Pi, first do an "offline" calculation of the per-pixel threshold values above which a change in the pixel value is considered movement, store them in a file, and then in the "online" stage just compare each pixel to its precomputed value.
For the "offline" stage I recommend using images recorded over the entire day in order to capture all the variation you need per pixel. This stage can of course be done on your computer, and only the output file uploaded to the Raspberry Pi.
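A minimal sketch of that per-pixel Gaussian model (NumPy/OpenCV; the directory of background frames and the file names are assumptions):

import glob
import cv2
import numpy as np

# "offline" stage: estimate per-pixel mean and standard deviation from many frames
frames = [cv2.imread(p, 0).astype("float") for p in sorted(glob.glob("background/*.png"))]
stack = np.stack(frames)
mean = stack.mean(axis=0)
std = stack.std(axis=0) + 1e-6          # avoid division by zero
np.savez("pixel_model.npz", mean=mean, std=std)

# "online" stage: flag pixels that deviate by more than k standard deviations
model = np.load("pixel_model.npz")
k = 3.0                                 # about 99.7% of normal variation lies within 3 sigma
frame = cv2.imread("current.png", 0).astype("float")
changed = np.abs(frame - model["mean"]) > k * model["std"]
print("changed pixels:", int(changed.sum()))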

Image Fusion OpenCV

I am new to OpenCV and I am looking to fuse two images (panchromatic and multispectral) using OpenCV with C++. Note that I have already registered the reference image and now I just need to fuse the reference and the sensed image. I could not find any functions that could help me with this. Did I miss something, or is there no direct way to fuse two images?
Please suggest any simple way to proceed with the fusion process.
Since you are trying to fuse together the panchromatic and multispectral images, you would need to:
Convert the input images into a suitable format (YUV works for me, HSI might too).
Fuse the luminance or intensity values of the two images, leaving the color space untouched.
Combine the fused channel with the color information to produce the final image.
// ref and trans are the registered reference and sensed images (both BGR)
cv::Mat tmp1, tmp2, output;
// greyscale (intensity) versions of both images
cvtColor(ref, tmp1, CV_BGR2GRAY, 0);
cvtColor(trans, tmp2, CV_BGR2GRAY, 0);
// convert the reference image to YUV and split its channels
cv::Mat yuv;
cvtColor(ref, yuv, CV_BGR2YUV, 3);
vector<Mat> channels_ref;
split(yuv, channels_ref);
// blend the two intensity images and write the result into the Y channel
double alpha = 0.3;
double beta = 1 - alpha;
addWeighted(tmp1, alpha, tmp2, beta, 0.0, channels_ref[0]);
// recombine the fused luminance with the original chrominance and convert back to BGR
Mat merge[] = {channels_ref[0], channels_ref[1], channels_ref[2]};
cv::merge(merge, 3, output);
cvtColor(output, output, CV_YUV2BGR);
imshow("Linear Blend", output);
waitKey(0);
I revisited this question after a long time and decided to have a go at it as there was no sample imagery available before. In the meantime, I have generated some - see later.
So, let's say you have a hi-res, panchromatic image with 10m resolution, something like this:
and a lo-res, multi-spectral image with 40m resolution of the same area, something like this:
Then, just using ImageMagick at the command-line for now (since it is installed on most Linux distros and is available for OSX and Windows), do what I was alluding to in the comments under your original question...
convert hi-res-panchromatic.tif \
\( lo-res-multispectral.tif -resize 400% -colorspace Lab -separate -delete 0 \) \
-set colorspace Lab -combine result.tif
So, that says... "Load up the hi-res image. Then, to one side, load the lo-res image and upsize it to 400% to account for the 40m resolution versus 10m resolution and convert it to Lab colorspace and separate the channels. Delete the Lightness (L) channel of the lo-res image. Now, returning to the main processing from the aside processing, we will have the hi-res image that we loaded first acting as the L channel along with the ab channels (i.e. colour information) of the lo-res image. Combine them from Lab back into RGB and save".
I see you haven't logged on in a year, so I will delay any OpenCV code-writing until anyone else expresses an interest in the question - but I hope the technique is understandable.
Note
As I don't happen to have any geo-registered panchromatic and multi-spectral imagery of the same place, I cheated somewhat... I took a single image and synthesised a panchromatic version using ImageMagick:
convert orig.tif -colorspace gray hi-res-panchromatic.tif
and I synthesised the lo-res multi-spectral image using:
convert orig.tif -resize 25% lo-res-multispectral.tif
Also, note that I just used Lab mode here to do the blending, because it is simpler, but in the comments I suggested using Principal Components Analysis. I may re-visit this again and implement that too...

Error using OGR to figure out if a pixel center is inside a polygon

I’m trying to develop (using C++ - MSVS 12.0) a function that discovers which pixels (from a raster image) have their center inside a polygon (previously populated from a shapefile). I’m using GDAL 1.11.0 (just installed, using devinstall), built from source with the option INCLUDE_OGR_FRMTS=YES. I can use GDAL and most OGR functions without problems. However, when I use the following code:
if (polygon->Contains(tmpPoint))
I receive the error message: ERROR 6: GEOS support not enabled
Does anybody know how to solve this issue?
I’m using:
#include "ogrsf_frmts.h"
and my function is declared:
void FindPixels(GDALDataset *image, OGRLayer *poLayer, OGRPolygon *polygon)
and part of my code is:
OGRPoint *tmpPoint = NULL;
OGRSpatialReference *spatialReference = NULL;
spatialReference = polygon->getSpatialReference();
tmpPoint = new OGRPoint();
tmpPoint->assignSpatialReference(spatialReference);
// loop begins:
tmpPoint->setX(imgTLX + (j * imgRes) + imgResHalf);
tmpPoint->setY(imgTLY - (i * imgRes) - imgResHalf);
if (polygon->Contains(tmpPoint))
Thanks in advance!
MB
Use GDALRasterizeLayers to burn the polygon onto a raster. This way you will find which pixels fall inside the polygon and which do not. By default a pixel is burned only if its centre intersects a polygon.
If the source layer has multiple polygons, you may need to distinguish them, either by setting an attribute filter or by burning an attribute ID field (although this won't work if the polygons overlap).
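A minimal sketch of that approach using the GDAL Python bindings (the C++ GDALRasterizeLayers entry point takes analogous arguments; the file names are placeholders):

from osgeo import gdal, ogr

img = gdal.Open("image.tif")                      # raster whose pixels we want to classify
shp = ogr.Open("polygons.shp")
layer = shp.GetLayer()

# in-memory mask on exactly the same grid as the image
mask_ds = gdal.GetDriverByName("MEM").Create("", img.RasterXSize, img.RasterYSize, 1, gdal.GDT_Byte)
mask_ds.SetGeoTransform(img.GetGeoTransform())
mask_ds.SetProjection(img.GetProjection())

# by default a pixel is burned only if its centre falls inside a polygon
gdal.RasterizeLayer(mask_ds, [1], layer, burn_values=[1])

mask = mask_ds.GetRasterBand(1).ReadAsArray()     # 1 = centre inside, 0 = outside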

How to set, for each raster band, the min and max color values (using GDAL)

I successfully created a multiband raster image using GDAL in MSVS (C++), but I do not know how to set, for each band, the min and max of the color scale so that when I open the image in QGIS it loads with the proper color scale. I would also like to set the contrast stretch to extend from min to max.
Anybody have an idea how to code it?
Thanks in advance!
Compute the statistics for each raster band; see CPLErr GDALRasterBand::ComputeStatistics for more info.
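A minimal sketch using the GDAL Python bindings (the C++ GDALRasterBand::ComputeStatistics / SetStatistics calls are analogous; the file name is a placeholder):

from osgeo import gdal

ds = gdal.Open("multiband.tif", gdal.GA_Update)
for i in range(1, ds.RasterCount + 1):
    band = ds.GetRasterBand(i)
    stats = band.ComputeStatistics(False)   # exact scan; returns [min, max, mean, stddev]
    band.SetStatistics(*stats)              # store them so QGIS can pick up the min/max stretch
ds.FlushCache()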