Find vtkImageData origin and size - C++

I have an app with 3 viewports (3 vtkRenderers). Over each of them I have a vtkImageReslice. These vtkImageReslice outputs can be zoomed, shifted, rotated, etc.
I have been given a task to draw a line (vtkLine or something similar) over every vtkImageReslice ... how can I find the origin and size of the vtkImageReslice output on every vtkRenderer?
Getting the origin and size of a vtkRenderer is pretty simple:
int* pOrigin = m_pRenderer->GetOrigin();
int* pSize = m_pRenderer->GetSize();
TRACE("Origin: %d.%d, Size: %d.%d\n", pOrigin[0], pOrigin[1], pSize[0], pSize[1]);
On m_pRenderer I have m_pReslice (a vtkImageReslice) ... how can I find the origin and size of the m_pReslice output so I can draw a line over it?
I appreciate any hint, advice, anything ...
[Later edit]
I don't think I described the matter well, so I am posting a picture:
The only thing that I need to know is the origin and size of the image data in a specific renderer ... that picture could be shifted, rotated, zoomed, etc.
[Even later edit]
I attach another picture that illustrates what I am trying to do:
When I zoom in on the vtkImageActor, the horizontal green line must become wider:
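One way to get exactly this - the on-screen origin and size of the displayed image - is to project the world bounds of the prop that shows the reslice output into display coordinates with vtkCoordinate. Below is a minimal sketch; it assumes the reslice output is rendered through a vtkImageActor held in a hypothetical m_pImageActor member:
// Project the image actor's world-space bounds into display (pixel) coordinates
double* pBounds = m_pImageActor->GetBounds(); // xmin, xmax, ymin, ymax, zmin, zmax
vtkCoordinate* pCoord = vtkCoordinate::New();
pCoord->SetCoordinateSystemToWorld();
pCoord->SetValue(pBounds[0], pBounds[2], pBounds[4]);
int* pLL = pCoord->GetComputedDisplayValue(m_pRenderer);
int x0 = pLL[0], y0 = pLL[1]; // copy: the returned pointer is reused by the next call
pCoord->SetValue(pBounds[1], pBounds[3], pBounds[4]);
int* pUR = pCoord->GetComputedDisplayValue(m_pRenderer);
TRACE("Image origin: %d,%d, Size: %d,%d\n", x0, y0, pUR[0] - x0, pUR[1] - y0);
pCoord->Delete();
For a rotated view, project all eight corners of the bounds the same way and take the min/max of the display coordinates to get the full screen-space box.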

You may think about widgets. There are a lot of them; see VTK Widgets.
e.g. the ImagePlaneWidget pipeline:
vtkImagePlaneWidget myWidget = vtkImagePlaneWidget.New();
myWidget.SetInput(myDicomImageReader.GetOutput());
myWidget.SetPlaneOrientationToYAxes();
myWidget.SetSliceIndex(mySliceIndexNumber);
myWidget.SetInteractor(myInteractor);
myWidget.GetPlaneProperty().SetColor(1.0,0.0,0.0);
vtkOutlineFilter outlineFilter = vtkOutlineFilter.New();
outlineFilter.SetInputConnection(myDicomImageReader.GetOutputPort());
vtkPolyDataMapper mapper = vtkPolyDataMapper.New();
mapper.SetInputConnection(outlineFilter.GetOutputPort());
vtkActor actor = vtkActor.New();
actor.SetMapper(mapper);
myImageViewer.GetRenderer().AddActor(actor);
RenderWindow.Render();
myWidget.On();

Well, for my degree thesis I built a medical application that slices images and does many other things; among them were slicing and painting a line over a vtkImageActor.
I did it in Python. I will put a piece of code here; maybe it will be helpful for you. This piece of code slices and paints a line over the slice. Of course, there are many class attributes, but I hope this code conveys the essence.
self.renderXZ = vtk.vtkRenderer()
self.renderXZ.SetBackground(0,0,0)
self.interactorStyleXZ = vtk.vtkInteractorStyleImage()
self.xz.GetRenderWindow().AddRenderer(self.renderXZ)
self.xz.SetInteractorStyle(self.interactorStyleXZ)
#creating image actor
self.imageActorXZ = vtk.vtkImageActor()
self.imageActorXZ.SetDisplayExtent(self.imageXZ.GetWholeExtent())
#getting the dimensions
xMin, xMax, yMin, yMax, zMin, zMax = self.image.GetWholeExtent()
#changing slice
self.changeSliceXZ(yMax/2)
slice = self.imageActorXZ.GetSliceNumber()
max = self.imageActorXZ.GetWholeZMax()
self.renderXZ.AddActor(self.imageActorXZ)
self.xz.GetRenderWindow().SetSize(522,243)
XZ_XSize,XZ_YSize = self.renderXZ.GetSize()
#drawing a Line
rectMapper = self.drawLine(0, XZ_YSize/2, XZ_XSize, XZ_YSize/2)
self.HorizontalLineActorXZ = vtk.vtkActor2D()
self.HorizontalLineActorXZ.SetMapper(rectMapper)
self.renderXZ.AddActor2D(self.HorizontalLineActorXZ)
#and the changeSliceXZ function is this
def changeSliceXZ(self,sliceNumber):
    slicer = vtkImageSlicer()
    slicer.SetInput(self.image)
    #putting the direction of the slice
    slicer.SetSliceDirection(1)
    slicer.SetSlice(sliceNumber)
    slicer.Update()
    self.imageXZ = slicer.GetOutput()
    self.imageActorXZ.SetInput(self.imageXZ)
    self.xz.GetRenderWindow().Render()
    self.YSlice = sliceNumber
    self.SliceNumber[1] = sliceNumber

Related

Objects beyond the far clipping plane are rendered in perspective view

I see objects beyond the far clipping plane in perspective projection, and I don't think this is how it's supposed to work. Can someone explain why I see objects beyond the far clipping plane, such as a grid in this example?
The orthographic projection works fine, by the way.
I cleared all shapes from the demo and added two grids by changing the following code in Frank Luna's Shapes demo:
void ShapesApp::BuildRenderItems()
{
auto gridRitem = std::make_unique<RenderItem>();
gridRitem->World = MathHelper::Identity4x4();
gridRitem->ObjCBIndex = 0;
gridRitem->Geo = mGeometries["shapeGeo"].get();
gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
mAllRitems.push_back(std::move(gridRitem));
gridRitem = std::make_unique<RenderItem>();
XMStoreFloat4x4(&gridRitem->World, XMMatrixTranslation(0,-1002,0)* XMMatrixRotationRollPitchYaw(1.5708, 0, 0));
gridRitem->ObjCBIndex = 1;
gridRitem->Geo = mGeometries["shapeGeo"].get();
gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
mAllRitems.push_back(std::move(gridRitem));
gridRitem = std::make_unique<RenderItem>();
XMStoreFloat4x4(&gridRitem->World, XMMatrixTranslation(0, -1002, 0) * XMMatrixRotationRollPitchYaw(1.5708, 1.5708, 0));
gridRitem->ObjCBIndex = 2;
gridRitem->Geo = mGeometries["shapeGeo"].get();
gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
mAllRitems.push_back(std::move(gridRitem));
and changed the grid size:
GeometryGenerator::MeshData grid = geoGen.CreateGrid(200.0f, 200.0f, 60, 40);
and the projection matrix in OnResize:
XMMATRIX P = XMMatrixPerspectiveFovLH(0.25f * MathHelper::Pi, AspectRatio(), .1f, 1000.f);
XMStoreFloat4x4(&mProj, P);
Now I can still see the grid even though it's beyond the far plane, and even with the far plane at 900 it still appears at the edge of the screen as I rotate. So I need to reduce the far plane or move the grid further away. As I keep changing the value of the far plane, I can see a shape that works like a brush: it hides everything beyond it, but as the camera rotates and the grid is no longer behind it, the grid reappears.
And here's what I meant by the brush:
I think you're thinking of the maximum view distance as being consistently 900 units away from the camera/eye position. If that were the case, it wouldn't be a clipping plane at all; it would be a curved surface - a sector of a sphere.
In reality the view frustum is a truncated pyramid made up of 6 planes. When the far plane is set to 900, then the view distance for the pixel in the centre of the view is 900, but the view distance at the corners is much higher (how much higher depends on the FOVs - you could work it out with a bit of trig).
So as you turn your camera left and right, an object approx 900 units away from the camera will come in and out of view as it intersects the far plane.
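To make the trig concrete, here is a small standalone sketch (plain C++; the aspect ratio is a placeholder) that computes the eye-to-corner distance on the far plane for the 0.25*pi vertical FOV used above:
#include <cmath>
#include <cstdio>

int main()
{
    // A far-plane corner sits at (f*tanH, f*tanV, f) in view space, so its
    // distance from the eye is f * sqrt(1 + tanH^2 + tanV^2).
    float fovY   = 0.25f * 3.14159265f; // vertical FOV from the question
    float aspect = 16.0f / 9.0f;        // placeholder aspect ratio
    float farZ   = 900.0f;
    float tanV   = std::tan(fovY * 0.5f);
    float tanH   = tanV * aspect;       // horizontal half-FOV tangent
    float corner = farZ * std::sqrt(1.0f + tanH * tanH + tanV * tanV);
    std::printf("view distance: %.0f at the centre, %.1f at the corners\n",
                farZ, corner);
    return 0;
}
With these numbers the corner distance comes out to roughly 1180 units, so a grid about 1000 units away can easily intersect the frustum near the screen edges while being culled at the centre.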

How do I find an object in an image/video knowing its real physical dimensions?

I have a set of images and would like to detect an object among others in an image/video, knowing in advance the real physical dimensions of that object. I have one of the image samples (it's an airplane door) and would like to find the window in the airplane door knowing its physical dimensions (let's say it has an inner radius of 20 cm and an outer radius of 23 cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15 cm). I also know my camera resolution in advance. Is there any MATLAB or OpenCV C++ code that can do this automatically with image processing?
Here is my image sample.
And a more complex image with round logos.
I ran the code on the second, more complex image and did not get the same results. Here is the image result.
You are looking for a circle in the image, so I suggest you use the Hough circle transform:
Convert the image to grayscale.
Find edges in the image.
Use the Hough circle transform to find circles in the image.
For each candidate circle, sample the pixel values along the circle, and accept the circle if those values match a predefined range.
The code:
clear all
% Parameters
minValueWindow = 90;
maxValueWindow = 110;
% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);
% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);
% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
    [y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
    if ~isempty(y0detect)
        circles = struct;
        circles.X = x0detect;
        circles.Y = y0detect;
        circles.Rad = rad(1,radIndex);
        detectedCircle{detectedCircleIndex} = circles;
        detectedCircleIndex = detectedCircleIndex + 1;
    end
end
% For each detection run a color filter
ang=0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
    rad = detectedCircle{i}.Rad;
    xp = rad*cos(ang);
    yp = rad*sin(ang);
    for detectedPointIndex=1:1:length(detectedCircle{i}.X)
        % Take each detected center and sample the gray image
        samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
        samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
        sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
        sampleValueMean = mean(Igray(sampleValueInd));
        % Check if the circle color is good
        if(sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
            circle = struct();
            circle.X = detectedCircle{i}.X(detectedPointIndex);
            circle.Y = detectedCircle{i}.Y(detectedPointIndex);
            circle.Rad = rad;
            finalCircles{finalCircleIndex} = circle;
            finalCircleIndex = finalCircleIndex + 1;
        end
    end
end
% Find the main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
    circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
    circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
    circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1));length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));
% Plot circle
imshow(Igray);
hold on
ang=0:0.01:2*pi;
xp=radCircle*cos(ang);
yp=radCircle*sin(ang);
plot(xCircle+xp,yCircle+yp,'Color','red', 'LineWidth',5);
The resulted image:
Remarks:
For other images you will still have to fine-tune several parameters, such as the radius range you search, the color window, the Hough circle threshold, and the Canny edge thresholds.
In the function I searched for circles with radii from 40 to 80 pixels. Here you can use your prior information about the real-world radius of the window and the resolution of the camera. If you know approximately the distance from the camera to the airplane, the resolution of the camera, and the window radius in cm, you can compute the expected radius in pixels and use that for the Hough circle transform.
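To make that conversion concrete, here is a small standalone sketch (plain C++; every numeric value except the 23 cm radius from the question is a made-up placeholder) of the pinhole-model arithmetic:
#include <cstdio>

int main()
{
    // Pinhole model: radius_px = f_px * radius_world / distance_world,
    // where f_px = image width in pixels * focal length / sensor width.
    double imageWidthPx = 1920.0; // camera resolution (assumed)
    double focalMm      = 4.0;    // lens focal length (assumed)
    double sensorMm     = 6.2;    // sensor width (assumed)
    double focalPx      = imageWidthPx * focalMm / sensorMm;

    double windowRadiusCm = 23.0;  // outer radius from the question
    double distanceCm     = 300.0; // camera-to-door distance (assumed)
    double radiusPx = focalPx * windowRadiusCm / distanceCm;
    std::printf("expected window radius: about %.0f px\n", radiusPx);
    return 0;
}
The result (about 95 px with these placeholder numbers) gives you a narrow radius range to feed the Hough transform instead of the broad 40-80 pixel scan.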
I wouldn't worry too much about the exact geometry and calibration; I would rather find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on its approximate area and/or circularity.
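A minimal OpenCV C++ sketch of that idea (the area floor and circularity cutoff are illustrative and would need tuning per image):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("image1.jpg", cv::IMREAD_GRAYSCALE);
    // Binarize; Otsu picks the threshold automatically
    cv::Mat bw;
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const std::vector<cv::Point>& c : contours)
    {
        double area  = cv::contourArea(c);
        double perim = cv::arcLength(c, true);
        if (area < 500.0 || perim <= 0.0)
            continue; // skip tiny blobs (tunable)
        // Circularity is 1.0 for a perfect circle, lower for anything else
        double circularity = 4.0 * CV_PI * area / (perim * perim);
        if (circularity > 0.8) // tunable cutoff
            cv::rectangle(gray, cv::boundingRect(c), cv::Scalar(255), 2);
    }
    cv::imwrite("result.png", gray);
    return 0;
}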

How to create polygons to display running number in cocos2dx

I'm trying to create a node that is simply a rectangle with a number in it. This is how I'm doing it now:
int size = 100, fontSize = 64;
auto node = DrawNode::create();
Vec2 vertices[] =
{
Vec2(0,size),
Vec2(size,size),
Vec2(size,0),
Vec2(0,0)
};
node->drawPolygon(vertices, 4, Color4F(1.0f,0.3f,0.3f,1), 0, Color4F(1.0f,1.0f,1.0f,1));
auto texture = new Texture2D();
int numberToDisplay = 2000;
std::string s = std::to_string(numberToDisplay);
texture -> initWithString(s.c_str(), "fonts/Marker Felt.ttf", fontSize, Size(size, size), TextHAlignment::CENTER, TextVAlignment::CENTER);
auto textSprite = Sprite::createWithTexture(texture);
node -> addChild(textSprite);
textSprite -> setPosition(size/2, size/2);
Every time I want to change the number, I have to re-create the texture sprite, remove the current child, and add the new one. Is there a better way to do it?
I wonder whether you want some special features; if not, why not use LayerColor and LabelTTF?
LayerColor* node = LayerColor::create(Color4B(255, 85, 85, 255), 100, 100);
LabelTTF* label = LabelTTF::create(s, "fonts/Marker Felt.ttf", fontSize);
node->addChild(label);
Just change the content of the LabelTTF; there is no need to re-create the sprite.
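For example, assuming you keep the label in a member variable, updating the number is a one-liner (LabelTTF::setString is the standard cocos2d-x call):
int numberToDisplay = 2048; // new value to show
label->setString(std::to_string(numberToDisplay)); // the node and its texture are reused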
You could use two different techniques to achieve this; to me, both of them are good.
1: Use the texture cache to cache textures and change the sprite's texture at run time (good if you know exactly how many textures there are and they all have the same size). In your .h file, declare the textures like:
Texture2D *startTexture, *endTexture, *midTexture;
In your .cpp file, initialize them like:
startTexture = Director::getInstance()->getTextureCache()->addImage(
"start.png");
endTexture = Director::getInstance()->getTextureCache()->addImage(
"end.png");
midTexture = Director::getInstance()->getTextureCache()->addImage(
"middle.png");
After that, when you want to change the texture of any sprite, simply do:
textSprite->setTexture(startTexture);
For this to work, declare "textSprite" in your .h file as well for quick access.
Pitfall: changing the texture doesn't change the sprite's initial bounding box. If the initial sprite texture was 32x32 and the new texture is 50x50, the extra 18 pixels in each dimension will be cropped automatically (the rect keeps starting from the origin point), which might look bad. To overcome this, update the texture rect as well, using:
textSprite->setTextureRect(
Rect(0, 0, startTexture->getContentSize().width,
startTexture->getContentSize().height));
2: Using the sprite frame cache, put all your textures in a sprite sheet and load it into memory like:
SpriteFrameCache *spriteCache = SpriteFrameCache::getInstance();
spriteCache->addSpriteFramesWithFile("test.plist", "test.png");
Now, whenever you want to change your texture, do it like this:
testSprite->setSpriteFrame(
(SpriteFrameCache::getInstance())->getSpriteFrameByName(
"newImage.png"));
This will first check the sprite frame cache for an image named "newImage.png"; if it is found in memory, it returns that sprite frame, otherwise it returns nullptr.

How do I find my mouse point in a scene using SceneKit?

I have set up a scene in SceneKit and have issued a hit-test to select an item. However, I want to be able to move that item along a plane in my scene. I continue to receive mouse drag events, but don't know how to transform those 2D coordinates into a 3D coordinate in the scene.
My case is very simple. The camera is located at 0, 0, 50 and pointed at 0, 0, 0. I just want to drag my object along the z-plane with a z-value of 0.
The hit-test works like a charm, but how do I translate the mouse point from a drag event into a new position in the scene for the 3D object I am dragging?
You don't need to use invisible geometry — Scene Kit can do all the coordinate conversions you need without having to hit test invisible objects. Basically you need to do the same thing you would in a 2D drawing app for moving an object: find the offset between the mouseDown: location and the object position, then for each mouseMoved:, add that offset to the new mouse location to set the object's new position.
Here's an approach you could use...
1. Hit-test the initial click location as you're already doing. This gets you an SCNHitTestResult object identifying the node you want to move, right?
2. Check the worldCoordinates property of that hit test result. If the node you want to move is a child of the scene's rootNode, this is the vector you want for finding the offset. (Otherwise you'll need to convert it to the coordinate system of the parent of the node you want to move — see convertPosition:toNode: or convertPosition:fromNode:.)
3. You're going to need a reference depth for this point so you can compare mouseMoved: locations to it. Use projectPoint: to convert the vector you got in step 2 (a point in the 3D scene) back to screen space — this gets you a 3D vector whose x- and y-coordinates are a screen-space point and whose z-coordinate tells you the depth of that point relative to the clipping planes (0.0 is on the near plane, 1.0 is on the far plane). Hold onto this z-coordinate for use during mouseMoved:.
4. Subtract the position of the node you want to move from the mouse location vector you got in step 2. This gets you the offset of the mouse click from the object's position. Hold onto this vector — you'll need it until dragging ends.
5. On mouseMoved:, construct a new 3D vector from the screen coordinates of the new mouse location and the depth value you got in step 3. Then, convert this vector into scene coordinates using unprojectPoint: — this is the mouse location in your scene's 3D space (equivalent to the one you got from the hit test, but without needing to "hit" scene geometry).
6. Add the offset you got in step 4 to the new location you got in step 5 — this is the new position to move the node to. (Note: for live dragging to look right, you should make sure this position change isn't animated. By default the duration of the current SCNTransaction is zero, so you don't need to worry about this unless you've changed it already.)
(This is sort of off the top of my head, so you should probably double-check the relevant docs and headers. And you might be able to simplify this a bit with some math.)
As an experiment I implemented Mr Bishop's helpful answer. The drag doesn't quite work (the object - a chess piece - jumps off screen) because of differences in the coordinate magnitudes between the mouse click and the 3-D world. I've inserted log outputs here and there among the code.
I asked on the Apple forums if anyone knew the secret sauce to homogenize the coordinates but didn't get a decisive answer. One thing, I had made some experimental changes to Mr Bishop's method and the forum members advised me to return to his technique.
Despite my code's failings, I thought someone might find it a useful starting point. I suspect there are only one or two small problems with the code.
Note that the log of the world transform matrix of the object (chess piece) is not part of the process but one Apple forum member advised me that the matrix often offers a useful 'sanity check' - which indeed it did.
- (NSPoint)
viewPointForEvent: (NSEvent *) event_
{
NSPoint windowPoint = [event_ locationInWindow];
NSPoint viewPoint = [self.view convertPoint: windowPoint
fromView: nil];
return viewPoint;
}
- (SCNHitTestResult *)
hitTestResultForEvent: (NSEvent *) event_
{
NSPoint viewPoint = [self viewPointForEvent: event_];
CGPoint cgPoint = CGPointMake (viewPoint.x, viewPoint.y);
NSArray * points = [(SCNView *) self.view hitTest: cgPoint
options: @{}];
return points.firstObject;
}
- (void)
mouseDown: (NSEvent *) theEvent
{
SCNHitTestResult * result = [self hitTestResultForEvent: theEvent];
SCNVector3 clickWorldCoordinates = result.worldCoordinates;
// log output: clickWorldCoordinates x 208.124578, y -12827.223365, z 3163.659073
SCNVector3 screenCoordinates = [(SCNView *) self.view projectPoint: clickWorldCoordinates];
// log output: screenCoordinates x 245.128906, y 149.335938, z 0.985565
// save the z coordinate for use in mouseDragged
mouseDownClickOnObjectZCoordinate = screenCoordinates.z;
selectedPiece = result.node; // save selected piece for use in mouseDragged
SCNVector3 piecePosition = selectedPiece.position;
// log output: piecePosition x -18.200000, y 6.483060, z 2.350000
offsetOfMouseClickFromPiece.x = clickWorldCoordinates.x - piecePosition.x;
offsetOfMouseClickFromPiece.y = clickWorldCoordinates.y - piecePosition.y;
offsetOfMouseClickFromPiece.z = clickWorldCoordinates.z - piecePosition.z;
// log output: offsetOfMouseClickFromPiece x 226.324578, y -12833.706425, z 3161.309073
}
- (void)
mouseDragged: (NSEvent *) theEvent;
{
NSPoint viewClickPoint = [self viewPointForEvent: theEvent];
SCNVector3 clickCoordinates;
clickCoordinates.x = viewClickPoint.x;
clickCoordinates.y = viewClickPoint.y;
clickCoordinates.z = mouseDownClickOnObjectZCoordinate;
// log output: clickCoordinates x 246.128906, y 0.000000, z 0.985565
// log output: pieceWorldTransform:
// m11 = 242.15889219510001, m12 = -0.000045609300002524833, m13 = -0.00000721691076126, m14 = 0,
// m21 = 0.0000072168760805499971, m22 = -0.000039452697396149999, m23 = 242.15890446329999, m24 = 0,
// m31 = -0.000045609300002524833, m32 = -242.15889219510001, m33 = -0.000039452676995750002, m34 = 0,
// m41 = -4268.2349924762348, m42 = -12724.050221935429, m43 = 4852.6652710104272, m44 = 1)
SCNVector3 newPiecePosition;
newPiecePosition.x = offsetOfMouseClickFromPiece.x + clickCoordinates.x;
newPiecePosition.y = offsetOfMouseClickFromPiece.y + clickCoordinates.y;
newPiecePosition.z = offsetOfMouseClickFromPiece.z + clickCoordinates.z;
// log output: newPiecePosition x 472.453484, y -12833.706425, z 3162.294639
selectedPiece.position = newPiecePosition;
}
I used the code written by Steve, and with a little modification it worked for me.
On mouseDown I save clickWorldCoordinates on a property called startClickWorldCoordinates.
On mouseDragged I calculate the selectedPiece position in this way:
SCNVector3 worldClickCoordinate = [(SCNView *) self.view unprojectPoint: clickCoordinates];
newPiecePosition.x = selectedPiece.position.x + worldClickCoordinate.x - startClickWorldCoordinates.x;
newPiecePosition.y = selectedPiece.position.y + worldClickCoordinate.y - startClickWorldCoordinates.y;
newPiecePosition.z = selectedPiece.position.z + worldClickCoordinate.z - startClickWorldCoordinates.z;
selectedPiece.position = newPiecePosition;
startClickWorldCoordinates = worldClickCoordinate;

Cocos2D+Box2D: Stacked bodies that don't fall?

EDIT 2: Problem solved! I can't promise it will work with different settings, but by setting my blocks' body density to 0, the stack of blocks no longer falls when new blocks are added.
I'm sorry about the poor title of the question; I'll explain my problem more fully here:
So, I've used Box2D and Cocos2D to set up a simple project where two boxes stack on top of each other (I'm planning to expand to 8-10 boxes). Right now, using a friction of 10.0f on each box, the box at the top still moves around a little. If I added more boxes, the "tower" would fall, and I don't want that.
I want the boxes to use gravity to move down, but I never want them to change their starting x-value.
So, how can I prevent my tower of boxes from falling over, or prevent my boxes from moving in the x-direction?
EDIT: Posting some code
This code creates one of the boxes; the other one just has a different sprite file.
CCSprite *block = [CCSprite spriteWithFile:@"red.png"];
block.position = ccp(200,380);
[self addChild:block];
//Body definition
b2BodyDef blockDef;
blockDef.type = b2_dynamicBody;
blockDef.position.Set(200/PTM_RATIO, 200/PTM_RATIO);
blockDef.userData = block;
b2Body *blockBody = _world->CreateBody(&blockDef);
//Create the shape
b2PolygonShape blockShape;
blockShape.SetAsBox(block.contentSize.width/PTM_RATIO/2, block.contentSize.height/PTM_RATIO/2);
//Fixture definition
b2FixtureDef blockFixtureDef;
blockFixtureDef.shape = &blockShape;
blockFixtureDef.restitution = 0.0f;
blockFixtureDef.density = 10.0f;
blockFixtureDef.friction = 10.0f;
_redBlockFixture = blockBody->CreateFixture(&blockFixtureDef);
Nothing fancy.
Regards.
You could set up two (1-pixel-wide) walls in Box2D to the left and right of the block. Here's some sample code for the left wall. To create the right wall, just copy and paste the code and change the variable names and the position of the BodyDef.
// Constant you'll need to define
float wallHeight;
// Create wall body
b2BodyDef wallBodyDef;
wallBodyDef.type = b2_staticBody; // static, so the wall itself doesn't fall or get pushed
wallBodyDef.position.Set(200 - block.contentSize.width/PTM_RATIO/2, 0);
b2Body *wallBody = _world->CreateBody(&wallBodyDef);
// Create wall shape
b2PolygonShape wallShape;
wallShape.SetAsBox(1, wallHeight);
// Create shape definition and add to body
b2FixtureDef wallShapeDef;
wallShapeDef.shape = &wallShape;
wallShapeDef.density = 100.0f;
wallShapeDef.friction = 0.0f;
wallShapeDef.restitution = 0.0f;
b2Fixture *wallFixture = wallBody->CreateFixture(&wallShapeDef);
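Another option, not mentioned above but worth a sketch: lock each block to a vertical axis with a prismatic joint, so Box2D removes the x degree of freedom entirely while gravity still pulls the block down. This assumes Box2D 2.x; groundBody stands for any static body, such as the floor:
// Constrain blockBody to slide only along the vertical axis (also locks rotation)
b2PrismaticJointDef jointDef;
b2Vec2 verticalAxis(0.0f, 1.0f);
jointDef.Initialize(groundBody, blockBody, blockBody->GetWorldCenter(), verticalAxis);
jointDef.enableLimit = false; // free to move up and down under gravity
_world->CreateJoint(&jointDef);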
I solved this problem by adjusting the restitution (bounce) of the static surface upon which the blocks are stacked. For example, if the floor has a restitution of .2, a stack of five blocks will look like they are compressing into each other, and eventually topple:
Set the restitution of the floor to 0, and the blocks stay stacked the way you would expect:
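In code that is one line on the floor's fixture definition (a sketch in the same style as the question's fixtures; floorShape and floorBody are hypothetical names for the static floor):
b2FixtureDef floorFixtureDef;
floorFixtureDef.shape = &floorShape;
floorFixtureDef.friction = 10.0f;   // same friction style as the blocks
floorFixtureDef.restitution = 0.0f; // no bounce: the stack settles instead of toppling
floorBody->CreateFixture(&floorFixtureDef);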