I'm trying to figure out the best way to approach the following:
Say I have a flat representation of the earth. I would like to create a grid that overlays this with each square on the grid corresponding to about 3 square kilometers. Each square would have a unique region id. This grid would just be stored in a database table that would have a region id and then probably the long/lat coordinates of the four corners of the region, right? Any suggestions on how to generate this table easily? I know I would first need to find out the width and height of this "flattened earth" in kms, calculate the number of regions, and then somehow assign the long/lats to each intersection of vertical/horizontal line; however, this sounds like a lot of manual work.
Secondly, once I have that grid table created, I need to design a function that takes a long/lat pair and then determines which logical "region" it is in. I'm not sure how to go about this.
Any help would be appreciated.
Thanks.
Assume the Earth is a sphere with radius R = 6371 km.
Start at (lat, long) = (0, 0) deg. Around the equator, 3km corresponds to a change in longitude of
dlong = 3 / (2 * pi * R) * 360
= 0.0269796482 degrees
If we walk around the equator and put a marker every 3km, there will be about (2 * pi * R) / 3 = 13343.3912 of them. "About" because it's your decision how to handle the extra 0.3912.
From (0, 0), we walk North 3 km to (lat, long) = (0.0269796482, 0). We will walk around the Earth again on a path that is locally parallel to the first path we walked. Because it is a little closer to the N Pole, the radius of this circle is a bit smaller than that of the first circle we walked. Let's use lower case r for this radius:
r = R * cos(lat)
= 6371 * cos(0.0269796482 deg)
= 6370.99929 km
We calculate dlong again using the smaller radius,
dlong = 3 / (2 * pi * r) * 360
= 0.0269796512 deg
We put down the second set of flags. This time there are about (2 * pi * r) / 3 = 13343.3897 of them, a tiny fraction fewer than on the equator. The loss per row is negligible, but it accumulates: about 58 rows (roughly 175 km) further north, each ring already holds about five fewer flags than the equator did, and near the poles the rings shrink towards a point.
How do we draw a ribbon of squares when the top line of a ribbon has fewer corners than its bottom line? In fact, if we kept stacking ribbons northwards using the equatorial spacing, we'd find that we started off with pretty good squares, but that the shape of the regions sheared out into increasingly extreme parallelograms.
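To make the shrinkage concrete, here is a minimal Python sketch of the arithmetic above (the helper name and the row sampling are mine):
import math

R = 6371.0   # Earth radius in km (sphere assumption)
CELL = 3.0   # desired cell edge in km

def markers_around_ring(lat_deg):
    """Number of 3 km markers that fit around the circle of latitude."""
    r = R * math.cos(math.radians(lat_deg))   # radius of this ring
    return 2 * math.pi * r / CELL

dlat = CELL / (2 * math.pi * R) * 360         # latitude step for a 3 km walk north

for row in range(0, 61, 10):                  # every 10th row, up to ~180 km north
    lat = row * dlat
    print(f"row {row:2d}  lat {lat:7.4f} deg  markers {markers_around_ring(lat):12.4f}")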
We need a different strategy that gives us the same number of corners above and below. If the lower boundary (SW-SE) is 3 km long, then the top should be a little shorter, to make a ribbon of trapeziums.
There are many ways to craft a compromise that approximates your ideal square grid. The Wikipedia article on map projections that preserve a metric property links to several dozen such strategies.
The specifics of your app may allow you to simplify things considerably, especially if you don't really need to map the entire globe.
Microsoft has been investing in spatial data types in their SQL Server 2008 offering. It could help you out here, because it has data types to represent your flattened-earth regions, operators to determine when a set of coordinates is inside a geometry, and so on. Even if you choose not to use this, consider checking out the following links. The second one in particular has a lot of good background information on the problem and a discussion of some of the industry-standard data formats for spatial data.
http://www.microsoft.com/sqlserver/2008/en/us/spatial-data.aspx
http://jasonfollas.com/blog/archive/2008/03/14/sql-server-2008-spatial-data-part-1.aspx
First, Paul is right. Unfortunately the earth is round which really complicates the heck out of this stuff.
I created a grid similar to this for a topographical mapping server many years ago. I just recorded the coordinates of the upper-left corner of each region. I also used UTM coordinates instead of lat/long. Since each region covers 3 square kilometers and UTM is based on meters, it is straightforward to do a range query to discover the right region.
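For the second part of the question (mapping a coordinate to a region id), the UTM approach reduces to integer division. A minimal Python sketch, assuming you already have easting/northing in metres (a library such as pyproj can do the lat/long-to-UTM conversion); the names, cell size and row-major id scheme are placeholders of this sketch:
CELL_M = 1732  # ~sqrt(3 km^2) in metres; adjust to taste

def region_id(easting_m, northing_m, min_e, min_n, cols):
    """Map a UTM coordinate to a region id on a simple row-major grid.

    min_e/min_n are the grid origin (south-west corner) and cols is the
    number of columns in the grid -- all assumptions of this sketch.
    """
    col = int((easting_m - min_e) // CELL_M)
    row = int((northing_m - min_n) // CELL_M)
    return row * cols + col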
You do realize that because the earth is a sphere, "3 square km" is going to be a different number of degrees near the poles than near the equator, right? And that at the top and bottom of the map your grid squares will actually represent pie-shaped parts of the world, right?
I've done something similar with my database - I've broken it up into quad cells. So what I did was divide the earth into four quarters (-180,-90)-(0,0), (-180,0)-(0,90) and so on. As I added point entities to my database, if the "cell" got more than X entries, I split the cell into 4. That means that in areas of the world with lots of point entities, I have a lot of quad cells, but in other parts of the world I have very few.
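A minimal in-memory Python sketch of that split-on-overflow idea (the class, field names and threshold are mine, not the database schema shown below):
MAX_POINTS = 100  # split a cell once it holds more than this many points

class QuadCell:
    def __init__(self, min_x, min_y, max_x, max_y):
        self.bounds = (min_x, min_y, max_x, max_y)
        self.points = []      # (x, y) pairs, only held by leaf cells
        self.children = None  # four sub-cells once the cell is split

    def insert(self, x, y):
        if self.children is not None:
            self._child_for(x, y).insert(x, y)
            return
        self.points.append((x, y))
        if len(self.points) > MAX_POINTS:
            self._split()

    def _split(self):
        min_x, min_y, max_x, max_y = self.bounds
        mid_x, mid_y = (min_x + max_x) / 2, (min_y + max_y) / 2
        self.children = [
            QuadCell(min_x, min_y, mid_x, mid_y), QuadCell(mid_x, min_y, max_x, mid_y),
            QuadCell(min_x, mid_y, mid_x, max_y), QuadCell(mid_x, mid_y, max_x, max_y),
        ]
        for px, py in self.points:   # redistribute the points into the children
            self._child_for(px, py).insert(px, py)
        self.points = []

    def _child_for(self, x, y):
        min_x, min_y, max_x, max_y = self.bounds
        mid_x, mid_y = (min_x + max_x) / 2, (min_y + max_y) / 2
        return self.children[(1 if x >= mid_x else 0) + (2 if y >= mid_y else 0)]

world = QuadCell(-180, -90, 180, 90)
world.insert(-122.4, 37.8)  # e.g. a point near San Francisco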
My database for the quad tree looks like:
\d areaids;
Table "public.areaids"
Column | Type | Modifiers
--------------+-----------------------------+-----------
areaid | integer | not null
supercededon | timestamp without time zone |
supercedes | integer |
numpoints | integer | not null
rectangle | geometry |
Indexes:
"areaids_pk" PRIMARY KEY, btree (areaid)
"areaids_rect_idx" gist (rectangle)
Check constraints:
"enforce_dims_rectangle" CHECK (ndims(rectangle) = 2)
"enforce_geotype_rectangle" CHECK (geometrytype(rectangle) = 'POLYGON'::text OR rectangle IS NULL)
"enforce_srid_rectangle" CHECK (srid(rectangle) = 4326)
I'm using PostGIS to help find points in a cell. If I look at a cell, I can tell if it's been split because supercededon is not null. I can find its children by looking for ones whose supercedes equals its id. And I can dig down from top to bottom until I find the ones that cover the area I'm concerned about by looking for ones with supercededon null and whose rectangle overlaps my area of interest (using the PostGIS '&&' operator).
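Putting that lookup into code: because every live (unsplit) cell has supercededon set to null, a single query over the live cells whose rectangle covers the point replaces the explicit top-down walk. A minimal sketch against the table above, assuming psycopg2; the helper name is mine:
import psycopg2

def live_cells_containing(conn, lon, lat):
    """Return (areaid, numpoints) for every live leaf cell covering the point."""
    sql = """
        SELECT areaid, numpoints
          FROM areaids
         WHERE supercededon IS NULL
           AND ST_Intersects(rectangle, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
    """
    with conn.cursor() as cur:
        cur.execute(sql, (lon, lat))
        return cur.fetchall()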
There's no way you'll be able to do this with rectangular cells, but I've just finished an R package dggridR which would make this easy to do using a grid of hexagonal cells. However, the 3km cell requirement might yield so many cells as to overload your machine.
You can use R to generate the grid:
install.packages('devtools')
install.packages('rgdal')
library(devtools)
devtools::install_github('r-barnes/dggridR')
library(dggridR)
library(rgdal)
#Construct a discrete global grid (geodesic) with cells of ~3 km^2
dggs <- dgconstruct(area=3, metric=TRUE, resround='nearest') #area in km^2 when metric=TRUE
#Get a hexagonal grid for the whole earth based on this dggs
grid <- dgearthgrid(dggs,frame=FALSE)
#Save the grid
writeOGR(grid, "grid_3km_cells.kml", "cells", "KML")
The KML file then contains the ids and edge vertex coordinates of every cell.
The grid looks a little like this:
My package is based on Kevin Sahr's DGGRID which can generate this same grid to KML directly, though you'll need to figure out how to compile it yourself.
I have been working on wall detection in a PCD (Point Cloud Data) file using PCL (Point Cloud Library). The PCD file has been generated by a depth camera. I found that in many similar applications, e.g. floor detection, RANSAC has been used. So I thought of applying RANSAC here as well; I have tried my best to understand RANSAC in general, but I still have certain questions pertaining to my application:
In brief, RANSAC tries to remove the outliers in the given data and generalize the inliers through a model, iteratively. So, in the case of floor detection, would it consider the points belonging to other objects as just outliers and the floor points as inliers? Is the same true for the walls?
As per the Plane model segmentation tutorial by PCL, RANSAC gives the coefficients of the model plane, i.e. a, b, c, and d in the equation of the plane a*x + b*y + c*z + d = 0, through the coefficients->values vector. So, I assume that in the case of wall detection, it would try to give the equation of the plane corresponding to the wall. However, what if the depth camera is at the corner of a room and the top view of the walls looks like this:
wall 1
______________
|
|
| wall 2
|
|
So, in this case, what would be the resultant model plane look like? Would it be kind of a hypotenuse (making a triangle)?
wall 1
---------------------
|
| wall 2
|
----------------------
wall 3
Even in this case, what would it look like?
As per the Extracting indices from a PointCloud tutorial by PCL, the ExtractIndices <pcl::ExtractIndices> filter is used to extract a subset of points from a point cloud based on the indices output by a segmentation algorithm. But what exactly is this filter doing? In fact, in the case of floor detection or wall detection (assuming there is only one straight wall), RANSAC is already giving the equation of one plane. So, is there any need to use that filter? If yes, then why and how?
How can I detect multiple walls in the following case? Can the ExtractIndices <pcl::ExtractIndices> filter do this? If yes, then how?
wall 1
---------------------
|
| wall 2
|
----------------------
wall 3
If you think that there are better ways than using RANSAC then also please let me know.
Answers to a few of the questions you asked:
As far as I know, when using the plane model, RANSAC chooses 3 points from the cloud at random and considers the plane through them (this is a provisional statement that will be refined later). All the points whose perpendicular distance to this plane is smaller than the given threshold are considered inliers.
The algorithm gives back the plane which contains the most points (the plane it finds also depends on the iteration count you choose; if it is too low, it may miss the largest plane).
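If it helps to see what that loop does, here is a minimal numpy sketch of plane fitting with RANSAC (my own illustration, not PCL's implementation): pick 3 random points, fit a plane through them, count the points within the distance threshold, and keep the best plane.
import numpy as np

def ransac_plane(points, threshold=0.05, iterations=500):
    """points: (N, 3) array. Returns (plane coefficients a,b,c,d, inlier indices)."""
    best_inliers = np.array([], dtype=int)
    best_plane = None
    rng = np.random.default_rng()
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)          # plane normal from the 3 points
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)                          # a*x + b*y + c*z + d = 0
        dist = np.abs(points @ normal + d)           # perpendicular distances
        inliers = np.flatnonzero(dist < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, np.append(normal, d)
    return best_plane, best_inliers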
In the case of walls the story is the same. You can search for planes, but you should choose the search directions well. Walls are usually perpendicular to the x-y plane, so the parameters should be set with this in mind.
Example:
// The code lives in a class (hence the constructor); includes and the member
// declarations for coefficients and extract are shown here for completeness.
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

class PlaneSegment {
public:
  PlaneSegment(float x, float y, float z, float set_angle, float threshold,
               int max_iteration, float probability)
      : coefficients(new pcl::ModelCoefficients) {
    axis = Eigen::Vector3f(x, y, z);
    angle = set_angle;
    seg.setAxis(axis);
    seg.setEpsAngle(angle * (3.1415 / 180.0f));
    // SET SEGMENTATION
    set_segmentation(threshold, max_iteration, probability);
  }

  void set_segmentation(float threshold, int max_iteration, float probability) {
    seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    // set threshold and other parameters (the cloud is set later, in segment_plane)
    seg.setDistanceThreshold(threshold); // distance needs to be adjusted to the object
    seg.setMaxIterations(max_iteration);
    seg.setProbability(probability);
  }

  pcl::PointIndices::Ptr segment_plane(pcl::PointCloud<pcl::PointXYZI>::Ptr cloud,
                                       pcl::PointIndices::Ptr inliers) {
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);
    if (inliers->indices.size() == 0)
    {
      PCL_ERROR("COULD NOT ESTIMATE PLANAR MODEL.\n");
      exit(-1);
    }
    return inliers;
  }

  pcl::PointCloud<pcl::PointXYZI>::Ptr
  extraction(pcl::PointCloud<pcl::PointXYZI>::Ptr cloud,
             pcl::PointIndices::Ptr inliers) {
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true); // true = remove the inliers, keep the rest of the cloud
    extract.filter(*cloud);
    return cloud;
  }

private:
  pcl::SACSegmentation<pcl::PointXYZI> seg;
  pcl::ExtractIndices<pcl::PointXYZI> extract;
  pcl::ModelCoefficients::Ptr coefficients;
  Eigen::Vector3f axis;
  // HELPER VARIABLES
  float angle = 12.0f;
};
You can set an acceptance angle, in this case 12 degrees, and also the searching directions based on the axis.
For your second point:
In the case of multiple walls, it will give back the one which contains the most points. But you should be able to extract the other planes as well if needed (advice: save all planes that contain more points than a threshold you choose).
I checked into your problem; this question is also a solution for it: pcl::RANSAC segmentation, get all planes in cloud?. The first comment gives a very good answer.
Third point:
Check the example code. Note that this is a class, which is why there is a constructor. The segment_plane function returns the inliers. Based on that you can call the extraction function, which removes the inliers from the cloud. This is a very simple and fast solution, and you can avoid wrestling with the coefficient values. Also, if you don't want to remove the inliers, just colour them by iterating through them and setting their intensity to a chosen value.
The RANSAC algorithm can be robust, but sometimes it just does not work, and it can be slow because of the number of iterations.
There are also other ways to solve this problem.
Just an example: consider a grid below the cloud, made of many equal-sized square cells. In each cell you check the minimum and maximum point heights. Based on this you can identify the ground (in a ground cell the maximum and minimum heights are only slightly different and close to each other) or the walls (the points are spread fairly evenly over the cell's vertical extent, so the difference between the maximum and minimum heights is large).
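A minimal numpy sketch of that grid heuristic (the cell size and the two thresholds are arbitrary placeholders):
import numpy as np

def classify_cells(points, cell=0.25, flat_tol=0.05, wall_tol=1.0):
    """points: (N, 3) array of x, y, z. Returns {(i, j): 'ground'|'wall'|'other'}."""
    ij = np.floor(points[:, :2] / cell).astype(int)      # grid cell index per point
    labels = {}
    for key in set(map(tuple, ij)):
        mask = np.all(ij == key, axis=1)
        z = points[mask, 2]
        height_range = z.max() - z.min()
        if height_range < flat_tol:
            labels[key] = 'ground'    # min and max heights nearly equal
        elif height_range > wall_tol:
            labels[key] = 'wall'      # points spread over a tall vertical span
        else:
            labels[key] = 'other'
    return labels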
Best regards.
I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature matching algorithms or is there a simpler method, such as summing the pixel values and determining whether they are within some threshold, or perhaps converting the pixel values to binary strings where the values are either greater than or less than the center pixel?
I'm going to assume you're talking about Stereo Disparity, in which case you will likely want to use a simple Sum of Absolute Differences (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In python you might use a simple function like this per pixel, where x is a pixel that contains RGB.
def rgb_to_bw(x):
return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer Filter is. The Bayer Filter, which is typically RGGB, also explains the multiplication ratios for the Red, Green, and Blue portions of the pixel.
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly what you want to do. Let's say this window is n x n in size. You would also have some window in your left image WL and some window in your right image WR. The idea is to find the pair that has the smallest SAD.
So, for each left window pixel pl at some location (x,y) in the window, you would take the absolute value of its difference from the right window pixel pr at the same location (x,y). You would also keep a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
from x = 0 to n:
    from y = 0 to n:
        SAD = SAD + absolute_value(pl - pr)
After you calculate the SAD for this pair of windows, WL and WR you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD - which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL you will "slide" WL and repeat.
Disparity is calculated by the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0-255 and output that to another image. I posted 3 images below to show you this.
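Putting the pieces together, here is a minimal numpy sketch of the SAD search described above for a single row of windows (grayscale inputs, left-to-right search only; the window size and search range are placeholders):
import numpy as np

def disparity_for_row(left, right, row, n=5, max_disp=64):
    """left, right: 2D grayscale arrays. Returns disparities along one image row."""
    half = n // 2
    width = left.shape[1]
    disparities = np.zeros(width, dtype=np.int32)
    for x in range(half + max_disp, width - half):
        wl = left[row - half:row + half + 1, x - half:x + half + 1].astype(np.int32)
        best_sad, best_d = None, 0
        for d in range(max_disp):                     # slide WR to the left
            wr = right[row - half:row + half + 1,
                       x - d - half:x - d + half + 1].astype(np.int32)
            sad = np.abs(wl - wr).sum()               # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_d = sad, d
        disparities[x] = best_d                       # distance between matched windows
    return disparities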
Typical Results:
Left Image:
Right Image:
Calculated Disparity (from the left image):
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/
Please, anyone, help me to resolve my issue. I am working on an image-processing-based project and I am stuck at a point. I got this image after some processing, and for further processing I need to crop or detect only the deer and remove the other portions of the image.
This is my Initial image:
And my result should be something like this:
It would be even better if I could get only the single biggest blob in the image and save it as an image.
It looks like the deer in your image is pretty much connected and closed. What we can do is use regionprops to find all of the bounding boxes in your image. Once we do this, we can find the bounding box that gives the largest area, which will presumably be your deer. Once we find this bounding box, we can crop your image and focus on the deer entirely. As such, assuming your image is stored in im, do this:
im = im2bw(im); %// Just in case...
bound = regionprops(im, 'BoundingBox', 'Area');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain the areas within each bounding box
areas = [bound.Area].';
%// Figure out which bounding box has the maximum area
[~,maxInd] = max(areas);
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes(maxInd,:));
%// Crop the image
out = im(finalBB(2):finalBB(2)+finalBB(4), finalBB(1):finalBB(1)+finalBB(3));
%// Show the images
figure;
subplot(1,2,1);
imshow(im);
subplot(1,2,2);
imshow(out);
Let's go through this code slowly. We first convert the image to binary just in case. Your image may be an RGB image with intensities of 0 or 255... I can't say for sure, so let's just do a binary conversion just in case. We then call regionprops with the BoundingBox property to find every bounding box of every unique object in the image. This bounding box is the minimum spanning bounding box to ensure that the object is contained within it. Each bounding box is a 4 element array that is structured like so:
[x y w h]
Each bounding box is delineated by its origin at the top left corner of the box, denoted as x and y, where x is the horizontal co-ordinate while y is the vertical co-ordinate. x increases positively from left to right, while y increases positively from top to bottom. w,h are the width and height of the bounding box. Because these points are in a structure, I extract them and place them into a single 1D vector, then reshape it so that it becomes a M x 4 matrix. Bear in mind that this is the only way that I know of that can extract values in arrays for each structuring element efficiently without any for loops. This will facilitate our searching to be quicker. I have also done the same for the Area property. For each bounding box we have in our image, we also have the attribute of the total area encapsulated within the bounding box.
Thanks to @Shai for the spot: we can't simply use the bounding box co-ordinates to determine whether or not something has the biggest area within it, as we could have a thin diagonal line that drives the bounding box co-ordinates higher. As such, we also need to rely on the total area that the object takes up within the bounding box. Simply put, it's just the sum of all of the pixels that are contained within the object.
Therefore, we search the entire area vector that we have created to see which entry has the maximum area. This corresponds to your deer. Once we find this location, we extract the bounding box coordinates, then use them to crop the image. Bear in mind that the bounding box values may contain floating point numbers. As image co-ordinates are integer based, we need to remove these floating point values before we crop; I decided to use floor. I then write code that displays the original image alongside the cropped result.
Bear in mind that this will only work if there is just one object in the image. If you want to find multiple objects, check bwboundaries in MATLAB. Otherwise, I believe this should get you started.
Just for completeness, we get the following result:
While object detection is a very general CV task, you can start with something simple if the assumptions are strong enough and you can guarantee that the input images will contain a single prominent white blob well described by a bounding box.
One very simple idea is to subdivide the picture into 3x3=9 patches, calculate the statistics of each patch and compute some objective function. In the simplest case you just do a grid search over various partitions and select the one with the highest objective metric. Here's an illustration:
If every line is a parameter (x_1, x_2, y_1 and y_2), then you want to optimize the objective over these four parameters, either by
grid search (try all x_i, y_i in some quantization steps)
genetic-algorithm-like random search
gradient descent (move every parameter in that direction that optimizes the target function)
The target function F can be defined over statistics of the patches, e.g. like this
F(9 patches) {
    brightest_patch = max(patches)
    others = patches \ brightest_patch
    score = brightness(brightest_patch) - 1/8 * brightness(others)
    return score
}
or anything else that incorporates relevant statistics of the patches as well as their size. This also allows you to incorporate "prior knowledge": if you expect the blob to appear in the middle of the image, then you can define a "regularization" term that penalizes F if the parameters x_i and y_i deviate too much from the expected position.
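A minimal numpy sketch of the grid-search variant, using a box-shaped partition and a brightness-contrast objective in the spirit of F above (the step size is a placeholder; this is an illustration, not a tuned detector):
import numpy as np

def find_blob_box(img, step=16):
    """img: 2D grayscale array. Brute-force search for the box whose interior is
    bright and whose exterior is dark; returns (x1, x2, y1, y2)."""
    h, w = img.shape
    total = img.sum()
    best_score, best_box = -np.inf, None
    for y1 in range(0, h - step, step):
        for y2 in range(y1 + step, h, step):
            for x1 in range(0, w - step, step):
                for x2 in range(x1 + step, w, step):
                    inside = img[y1:y2, x1:x2].sum()
                    area_in = (y2 - y1) * (x2 - x1)
                    area_out = h * w - area_in
                    # mean brightness inside minus mean brightness outside
                    score = inside / area_in - (total - inside) / area_out
                    if score > best_score:
                        best_score, best_box = score, (x1, x2, y1, y2)
    return best_box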
Thanks to all who answer and comment on my Question. With your help I got my exact solution. I am posting my final code and result for others.
img = im2bw(imread('deer.png'));
[L, num] = bwlabel(img, 4);
%%// Get biggest blob or object
count_pixels_per_obj = sum(bsxfun(@eq,L(:),1:num));
[~,ind] = max(count_pixels_per_obj);
biggest_blob = (L==ind);
%%// crop only deer
bound = regionprops(biggest_blob, 'BoundingBox');
%// Obtaining Bounding Box co-ordinates
bboxes = reshape([bound.BoundingBox], 4, []).';
%// Obtain this bounding box
%// Ensure all floating point is removed
finalBB = floor(bboxes);
out = biggest_blob(finalBB(2):finalBB(2)+finalBB(4),finalBB(1):finalBB(1)+finalBB(3));
%%// Show images
figure;
imshow(out);
I'm currently using GeoDjango in a "Star searching and sorting" database that provides information about and will simulate information on star and planetary systems.
I'm using GeoDjango 1) because I like it and use it elsewhere, and 2) because I eventually want to use various "geo" searching features like distance/lines/polygon transforms for complex and cool volumetric querying that I can't find elsewhere.
I have the system up and running (https://github.com/jaycrossler/procyon) and it currently uses star Galactic Coordinates (already transformed from Right Ascension/Declination). There are 150k stars currently in the database, and I'm considering increasing to a few million.
After a star is in the database, I build a new table that has a GeoDjango PointField, then populate it with the Galactic X, Y, Z coordinates (which are in parsecs and, as such, are mostly in the range of -500 to 500). Right now, I've set the SRID to 900913 (so that I'll have a good range of coordinates that won't roll over around the world line)... but when I search for nearby stars and order by distance, I'm only getting returns that are in a line, rather than being truly near based on X, Y, Z distance.
location = models.PointField(dim=3, blank=True, null=True, srid=900913)
objects = models.GeoManager()
I think this is because every search I make is ultimately getting wrapped onto the surface of a sphere, and that's both inefficient and screwing with results (though if it makes searching only take one line of code, I'm cool with it).
The current searching I'm using in Django is:
origin = self.location
distance = 1000
close_by_stars = StarModel.objects.filter(location__distance_lte=(origin, D(m=distance))).distance(origin).order_by('distance')
for s in close_by_stars[:200]:
#export results
But the results returned are not what I expect (I would think they'd clump around one star, not be in a line), visualized:
So, the big question is:
1) Should I use a SRID such as 900913 (Spherical Mercator)
or
2) Is there a SRID that isn't mapped to the surface of a planet, so that I can just search on X, Y, Z distances without them rolling over from -180 into +180 (or whatever the equivalent is for the projection system)? I tried using SRID=0, but GeoDjango then pukes and doesn't allow that.
I've got a fix, which I'm sharing here to potentially help others.
I think the issue is that the 'location__distance_lte' lookup gets transformed into the PostGIS geo function ST_DWithin.
From looking at pg_log/latest, I see the SQL command:
SELECT (ST_Distance("modelname"."location",
ST_GeomFromEWKB('\x01010000a031bf0d0021e527d53e2963409f3c2cd49ab2404081b22957787d5540'::bytea))) AS "distance",
"modelname"."info", "modelname"."location"
FROM "modelname" WHERE ST_Distance("modelname"."location",
ST_GeomFromEWKB('\x01010000a031bf0d0021e527d53e2963409f3c2cd49ab2404081b22957787d5540'::bytea))
<= 10.0 ORDER BY "distance" ASC
So, when searching by X, Y, Z and looking for the nearest points, it only searches within 2D space - and looks for the ones within X, Y distance... not the ones within Z.
There is an ST_3DDWithin (http://postgis.net/docs/ST_3DDWithin.html) but unfortunately Django doesn't know about it: https://github.com/django/django/blob/master/django/contrib/gis/db/backends/postgis/operations.py#L154.
Instead of overriding the django source I could use the raw sql method: https://docs.djangoproject.com/en/dev/topics/db/sql/#django.db.models.Manager.raw.
But then the ORM would lose a lot of its benefit.
Instead, I decided to go a bit simpler/more complicated. I kept the same search (which returns basically a "cylinder" of results instead of a sphere). Then, in the function, I loop through the results and filter out the ones that aren't within the sphere:
origin = item.location
origin_array = numpy.array((origin.x, origin.y, origin.z))
close_by_stars = star_model.objects.filter(location__distance_lte=(origin, D(m=distance))).distance(origin).order_by('distance')
star_list = []
for s in close_by_stars:
location_array = numpy.array((s.location.x, s.location.y, s.location.z))
dist = numpy.linalg.norm(origin_array - location_array)
if dist > distance:
continue
star_handle = dict()
star_handle['data'] = s.data ...
I have a set of images of the same scene but shot with different exposures. These images have no EXIF data so there is no way to extract useful info like f-stop, shutter speed etc.
What I'm trying to do is to determine the difference in stops between the images i.e. Image1 is +1.3 stops of Image0.
My current approach is to first calculate luminance from the image's RGB values using the equation
L = 0.2126 * R + 0.7152 * G + 0.0722 * B
I've seen different numbers being used in the equation, but generally they should not affect the end result L too much.
After that I derive the log-average luminance of the image.
exp(avg of log(luminance of image))
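For concreteness, that computation expressed as a quick numpy sketch (illustration only):
import numpy as np

def log_average_luminance(img, eps=1e-6):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    return float(np.exp(np.mean(np.log(lum + eps))))  # geometric mean of luminance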
But somehow the log-average luminance doesn't seem to give much indication of the exposure difference between the images.
Any ideas on how to determine exposure difference?
edit: on c/c++
You have to generally solve two problems:
1. Linearize your image data
(In case it's not obvious what is meant: two times more light collected by your pixel shall result in two times the intensity value in your linearized image.)
Your image input might be (sufficiently) linearized already -> you may skip to part 2. If your content came from a camera and it's a JPEG, then this will most certainly not be the case.
The real 'solution' to this problem is finding the camera response function, which you want to invert and apply to your image data to get linear intensity values. This is by no means a trivial task. The EMoR model is widely used in all sorts of software (Photoshop, PTGui, Photomatix, etc.) to describe camera response functions. Some open source software solving this problem (but using a different model iirc) is PFScalibrate.
Having said that, you may get away with a simple inverse gamma application. A rough guesstimate for the right gamma value might be found by doing this:
capture an evenly lit, static scene with two exposure times e and e/2
apply a couple of inverse gamma transforms (e.g. for 1.8 to 2.4 in 0.1 steps) on both images
multiply all the short exposure images with 2.0 and subtract them from the respective long exposure images
pick the gamma that leads to the smallest overall difference
2. Find the actual difference of irradiation in stops, i.e. log2(scale factor)
Presuming the scene was static (no moving objects or camera), this is relatively easy:
sum1 = sum2 = 0
foreach pixel pair (p1, p2) from the two images:
    if p1 or p2 is close to 0 or 255:
        skip this pair
    sum1 += p1 and sum2 += p2
return log2(sum1 / sum2)
On large images this will certainly work just as well and a lot faster if you sub-sample the images.
If the camera was static but the scene was not (moving objects), this starts to work less well. I produced acceptable results in this case by simply repeating the above procedure several times, using the output of the previous run as an estimate for the correct scale factor, and then discarding pixel pairs whose quotient is too far away from the current estimate. So basically replacing the above if line with the following:
if <see above> or if abs(log2(p1/p2) - estimate) > 0.5:
I'd stop the repetition after a fixed number of iterations or if two consecutive estimates are sufficiently close to each other.
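Putting step 2 together, here is a minimal numpy sketch of the procedure above, including the robust re-estimation (the saturation cutoffs assume 8-bit values and the 0.5-stop limit matches the text; a fixed iteration count stands in for the convergence check):
import numpy as np

def stop_difference(img1, img2, iterations=5, cutoff=0.5):
    """img1, img2: linearized grayscale arrays of the same static scene.
    Returns the estimated exposure difference in stops, log2(scale)."""
    p1 = img1.astype(np.float64).ravel()
    p2 = img2.astype(np.float64).ravel()
    valid = (p1 > 5) & (p1 < 250) & (p2 > 5) & (p2 < 250)   # drop near 0 / 255
    estimate = np.log2(p1[valid].sum() / p2[valid].sum())
    for _ in range(iterations):
        ratio = np.log2((p1 + 1e-9) / (p2 + 1e-9))
        keep = valid & (np.abs(ratio - estimate) < cutoff)  # discard outlier pairs
        estimate = np.log2(p1[keep].sum() / p2[keep].sum())
    return estimate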
EDIT: A note about conversion to luminance
You don't need to do that at all (as Tony D mentioned already) and if you insist, then do it after the linearization step (as Mark Ransom noted). In a perfect setting (static scene, no noise, no de-mosaicing, no quantization) every channel of every pixel would have the same ratio p1/p2 (if neither is saturated). Therefore the relative weighting of the different channels is irrelevant. You may sum over all pixels/channels (weighing R, G and B equally) or maybe only use the green channel.