scaling different objects using mouse wheel - opengl

I use glfw and glm.
If I scroll up I want to make the object bigger; when I scroll down I want to make the object smaller.
How to do it?
I use this function to handle mouse scrolling.
static void mousescroll(GLFWwindow* window, double xoffset, double yoffset)
{
    if (yoffset > 0) {
        scaler += yoffset * 0.01; // make it bigger than the current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
    else {
        scaler -= yoffset * 0.01; // make it smaller than the current size
        world = glm::scale(world, glm::vec3(scaler, scaler, scaler));
    }
}
By default scaler is 1.0.
I can describe the problem like this.
There is an object. If I scroll up, the value of scaler becomes 1.01, so the object becomes 1.01 times bigger. When I scroll up again, as far as I can tell, the object becomes 1.02 times bigger than its previous size (which is already 1.01 times the original)! But I want its size to be 1.02 times the original.
How to solve this problem?
Matrix world looks like this
glm::mat4 world = glm::mat4(
glm::vec4(1.0f, 0.0f, 0.0f, 0.0f),
glm::vec4(0.0f, 1.0f, 0.0f, 0.0f),
glm::vec4(0.0f, 0.0f, 1.0f, 0.0f),
glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
And I calculate the positions of vertex in the shader
gl_Position = world * vec4(Position, 1.0);

But I want its size to be 1.02 times the original.

Then reset the transform each time instead of accumulating the scales:
world = glm::scale(glm::mat4(1.0f), glm::vec3(scaler, scaler, scaler));
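As a sanity check, here is a minimal, dependency-free sketch of the same idea: keep the absolute factor in `scaler`, rebuild the matrix from identity on every scroll event, and clamp so the object cannot invert (the plain-array `world` stands in for the `glm::mat4`; names mirror the question but the helper itself is hypothetical):

```cpp
#include <algorithm>

// Absolute scale factor; the matrix is rebuilt from it on every event.
static double scaler = 1.0;
static double world[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};

// Mirrors the GLFW scroll callback: yoffset > 0 means scroll up.
void mousescroll(double /*xoffset*/, double yoffset)
{
    scaler += yoffset * 0.01;        // accumulate the factor, not the matrix
    scaler = std::max(scaler, 0.01); // keep the scale positive

    // Reset to identity, then write the absolute scale onto the diagonal.
    for (int i = 0; i < 16; ++i)
        world[i] = (i % 5 == 0) ? scaler : 0.0;
    world[15] = 1.0;                 // homogeneous w stays 1
}
```

Two scroll-up events now yield a factor of exactly 1.02 relative to the original size, which is what the question asks for.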

Related

From perspective to orthographic projections

I'm trying to change my camera projection from perspective to orthographic.
At the moment my code is working fine with the perspective projection
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to an orthographic projection I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT,5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In perspective projection the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT evaluates to the picture aspect ratio. This number is going to be close to 1. The left and right clip plane distances at the near plane for perspective projection are aspect * near_distance. More interesting though is the left-right expanse at the viewing distance, which in your case is abs(m_position.z) = abs(mesh.radius).
Carrying this over to orthographic projection the left, right, top and bottom clip plane distances should be of the same order of magnitude, so given that aspect is close to 1 the values for left, right, bottom and top should be close to the value of abs(mesh.radius). The resolution of the display in pixels is totally irrelevant except for the aspect ratio.
Furthermore when using a perspective projection the value for near should be chosen as large as possible so that all desired geometry is visible. Doing otherwise will waste precious depth buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ){
case perspective:
    m_projection = glm::perspective(70.0f, aspect, 1.f, 1000.0f);
    break;
case ortho:
    m_projection = glm::ortho(
        -aspect * view_distance,
         aspect * view_distance,
        -view_distance,    // bottom must be negated, or the frustum has zero height
         view_distance,
        -1000.f, 1000.f ); // float literals, so glm::ortho deduces a single type
    break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);
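The extent arithmetic above can be checked in isolation; this sketch (hypothetical helper, `mesh.radius` reduced to a plain parameter) computes the four lateral clip plane distances the `glm::ortho` call would receive:

```cpp
// Ortho extents sized to match what the perspective view covers at the
// viewing distance, as described in the answer.
void orthoExtents(float radius, float aspect,
                  float& left, float& right, float& bottom, float& top)
{
    const float view_distance = radius + 1.0f;
    left   = -aspect * view_distance;
    right  =  aspect * view_distance;
    bottom = -view_distance;
    top    =  view_distance;
}
```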

Projection View matrix calculation for directional light shadow mapping

In order to calculate the projection view matrix for a directional light I take the vertices of the frustum of my active camera, multiply them by the rotation of my directional light and use these rotated vertices to calculate the extents of an orthographic projection matrix for my directional light.
Then I create the view matrix using the center of my light's frustum bounding box as the position of the eye, the light's direction for the forward vector and then the Y axis as the up vector.
I calculate the camera frustum vertices by transforming the 8 corners of a box of size 2 centered at the origin by the inverse of the projection view matrix.
Everything works fine and the directional light projection view matrix is correct, but I've encountered a big issue with this method.
Let's say that my camera is facing forward (0, 0, -1), positioned on the origin and with a zNear value of 1 and zFar of 100. Only objects visible from my camera frustum are rendered into the shadow map, so every object that has a Z position between -1 and -100.
The problem is: if my light has a direction which makes the light come from behind the camera, and there is an object, for example, with a Z position of 10 (so behind the camera but still in front of the light) that is tall enough to cast a shadow on the part of the scene visible from my camera, this object is not rendered into the shadow map because it's not included in my light frustum, so its shadow is missing.
In order to solve this problem I was thinking of using the scene bounding box to calculate the light projection view matrix, but doing this would be useless because the image rendered into the shadow map could be so large that numerous artifacts would be visible (shadow acne, etc.), so I discarded this solution.
How could I overcome this problem?
I've read this post under the section of 'Calculating a tight projection' to create my projection view matrix and, for clarity, this is my code:
Frustum* cameraFrustum = activeCamera->GetFrustum();
Vertex3f direction = GetDirection(); // z axis
Vertex3f perpVec1 = (direction ^ Vertex3f(0.0f, 0.0f, 1.0f)).Normalized(); // y axis
Vertex3f perpVec2 = (direction ^ perpVec1).Normalized(); // x axis
Matrix rotationMatrix;
rotationMatrix.m[0] = perpVec2.x; rotationMatrix.m[1] = perpVec1.x; rotationMatrix.m[2] = direction.x;
rotationMatrix.m[4] = perpVec2.y; rotationMatrix.m[5] = perpVec1.y; rotationMatrix.m[6] = direction.y;
rotationMatrix.m[8] = perpVec2.z; rotationMatrix.m[9] = perpVec1.z; rotationMatrix.m[10] = direction.z;
Vertex3f frustumVertices[8];
cameraFrustum->GetFrustumVertices(frustumVertices);
for (AInt i = 0; i < 8; i++)
    frustumVertices[i] = rotationMatrix * frustumVertices[i];

Vertex3f minV = frustumVertices[0], maxV = frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    minV.x = min(minV.x, frustumVertices[i].x);
    minV.y = min(minV.y, frustumVertices[i].y);
    minV.z = min(minV.z, frustumVertices[i].z);
    maxV.x = max(maxV.x, frustumVertices[i].x);
    maxV.y = max(maxV.y, frustumVertices[i].y);
    maxV.z = max(maxV.z, frustumVertices[i].z);
}

Vertex3f extends = maxV - minV;
extends *= 0.5f;

Matrix viewMatrix = Matrix::MakeLookAt(cameraFrustum->GetBoundingBoxCenter(), direction, perpVec1);
Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x, -extends.y, extends.y, -extends.z, extends.z);
Matrix projectionViewMatrix = projectionMatrix * viewMatrix;
SceneObject::SetMatrix("ViewMatrix", viewMatrix);
SceneObject::SetMatrix("ProjectionMatrix", projectionMatrix);
SceneObject::SetMatrix("ProjectionViewMatrix", projectionViewMatrix);
And this is how I calculate the frustum and its bounding box:
Matrix inverseProjectionViewMatrix = projectionViewMatrix.Inversed();
Vertex3f points[8];
_frustumVertices[0] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, -1.0f); // near top-left
_frustumVertices[1] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, -1.0f); // near top-right
_frustumVertices[2] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, -1.0f); // near bottom-left
_frustumVertices[3] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, -1.0f); // near bottom-right
_frustumVertices[4] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, 1.0f); // far top-left
_frustumVertices[5] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, 1.0f); // far top-right
_frustumVertices[6] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, 1.0f); // far bottom-left
_frustumVertices[7] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, 1.0f); // far bottom-right
_boundingBoxMin = _frustumVertices[0];
_boundingBoxMax = _frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    _boundingBoxMin.x = min(_boundingBoxMin.x, _frustumVertices[i].x);
    _boundingBoxMin.y = min(_boundingBoxMin.y, _frustumVertices[i].y);
    _boundingBoxMin.z = min(_boundingBoxMin.z, _frustumVertices[i].z);
    _boundingBoxMax.x = max(_boundingBoxMax.x, _frustumVertices[i].x);
    _boundingBoxMax.y = max(_boundingBoxMax.y, _frustumVertices[i].y);
    _boundingBoxMax.z = max(_boundingBoxMax.z, _frustumVertices[i].z);
}
_boundingBoxCenter = Vertex3f((_boundingBoxMin.x + _boundingBoxMax.x) / 2.0f, (_boundingBoxMin.y + _boundingBoxMax.y) / 2.0f, (_boundingBoxMin.z + _boundingBoxMax.z) / 2.0f);
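The min/max bookkeeping in the snippets above is easy to get subtly wrong, so here is the same computation as a standalone, dependency-free sketch (plain arrays instead of `Vertex3f`; names hypothetical):

```cpp
#include <algorithm>

// Axis-aligned bounds and center of a point set, as in the frustum code above.
struct Bounds { float min[3], max[3], center[3]; };

Bounds computeBounds(const float pts[][3], int count)
{
    Bounds b;
    for (int a = 0; a < 3; ++a)
        b.min[a] = b.max[a] = pts[0][a];
    for (int i = 1; i < count; ++i)
        for (int a = 0; a < 3; ++a) {
            b.min[a] = std::min(b.min[a], pts[i][a]);
            b.max[a] = std::max(b.max[a], pts[i][a]);
        }
    for (int a = 0; a < 3; ++a)
        b.center[a] = (b.min[a] + b.max[a]) * 0.5f;
    return b;
}
```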

glm lookAt FPS camera flips vertical mouse look when Z < 0

I am trying to implement an FPS camera using C++, OpenGL and GLM.
What I did until now:
I have a cameraPosition vector for the camera position, and also
cameraForward (pointing to where the camera looks at), cameraRight and cameraUp, which are calculated like this:
inline void controlCamera(GLFWwindow* currentWindow, const float& mouseSpeed, const float& deltaTime)
{
    double mousePositionX, mousePositionY;
    glfwGetCursorPos(currentWindow, &mousePositionX, &mousePositionY);

    int windowWidth, windowHeight;
    glfwGetWindowSize(currentWindow, &windowWidth, &windowHeight);

    m_cameraYaw += (windowWidth / 2 - mousePositionX) * mouseSpeed;
    m_cameraPitch += (windowHeight / 2 - mousePositionY) * mouseSpeed;
    lockCamera();

    glfwSetCursorPos(currentWindow, windowWidth / 2, windowHeight / 2);

    // Rotate the forward vector horizontally. (the first argument is the default forward vector)
    m_cameraForward = rotate(vec3(0.0f, 0.0f, -1.0f), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
    // Rotate the forward vector vertically.
    m_cameraForward = rotate(m_cameraForward, -m_cameraPitch, vec3(1.0f, 0.0f, 0.0f));
    // Calculate the right vector. First argument is the default right vector.
    m_cameraRight = rotate(vec3(1.0, 0.0, 0.0), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
    // Calculate the up vector.
    m_cameraUp = cross(m_cameraRight, m_cameraForward);
}
Then I "look at" like this:
lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp)
The problem: I seem to be missing something, because my FPS camera works as it should until I move forward past Z = 0 (z becomes negative)... then my vertical mouse look flips, and when I try to look up the application looks down...
The same question was asked here: glm::lookAt vertical camera flips when z <= 0 , but I didn't understand what the issue is and how to solve it.
EDIT: The problem is definitely in the forward, up and right vectors. When I calculate them like this:
m_cameraForward = vec3(
    cos(m_cameraPitch) * sin(m_cameraYaw),
    sin(m_cameraPitch),
    cos(m_cameraPitch) * cos(m_cameraYaw)
);
m_cameraRight = vec3(
    sin(m_cameraYaw - 3.14f/2.0f),
    0,
    cos(m_cameraYaw - 3.14f/2.0f)
);
m_cameraUp = glm::cross(m_cameraRight, m_cameraForward);
Then the problem goes away, but m_cameraPitch and m_cameraYaw no longer match... I mean, if m_cameraYaw is 250 and I make a 180° turn, m_cameraYaw is 265... So I can't, for example, restrict leaning backwards like that. Any ideas?
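For reference, the spherical-coordinates construction from the EDIT can be exercised on its own; this sketch (hypothetical helper, angles in radians) shows the forward vector those formulas produce:

```cpp
#include <cmath>

// Forward vector from yaw/pitch, matching the EDIT's formulas.
void forwardFromYawPitch(float yaw, float pitch, float out[3])
{
    out[0] = std::cos(pitch) * std::sin(yaw);
    out[1] = std::sin(pitch);
    out[2] = std::cos(pitch) * std::cos(yaw);
}
```

With yaw = pitch = 0 this looks down +Z, and the vector's length is always 1, since cos²(pitch)(sin²(yaw) + cos²(yaw)) + sin²(pitch) = 1.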

Rotate the vertexes of a cube using a rotation matrix

I'm trying to rotate a cube's vertices with a rotation matrix, but whenever I run the program the cube just disappears.
I'm using a rotation matrix that was given to us in a lecture that rotates the cube's x coordinates.
double moveCubeX = 0;
float xRotationMatrix[9] = {1, 0, 0,
                            0, cos(moveCubeX), sin(moveCubeX),
                            0, -sin(moveCubeX), cos(moveCubeX)
};
I'm adding to the moveCubeX variable with the 't' key on my keyboard
case 't':
moveCubeX += 5;
break;
And to do the matrix multiplication I'm using
glMultMatrixf();
However, when I add this into my code, the cube just disappears. This is where I call the glMultMatrixf() function:
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(pan, 0, -g_fViewDistance,
              pan, 0, -1,
              0, 1, 0);
    glRotatef(rotate_x, 1.0f, 0.0f, 0.0f); // Rotate the camera
    glRotatef(rotate_y, 0.0f, 1.0f, 0.0f); // Rotate the camera
    glMultMatrixf(xRotationMatrix);
I'm struggling to see where it is I have gone wrong.
OpenGL uses matrices of size 4x4. Therefore, your rotation matrix needs to be expanded to 4 rows and 4 columns, for a total of 16 elements:
float xRotationMatrix[16] = {1.0f, 0.0f, 0.0f, 0.0f,
                             0.0f, cos(moveCubeX), sin(moveCubeX), 0.0f,
                             0.0f, -sin(moveCubeX), cos(moveCubeX), 0.0f,
                             0.0f, 0.0f, 0.0f, 1.0f};
You will also need to be careful about the units for your angles. Since you add 5 to your angle every time the user presses a key, it looks like you're thinking in degrees. The standard cos() and sin() functions in C/C++ libraries expect the angle to be in radians.
In addition, it looks like your matrix is defined at global scope. If you do this, the elements will only be evaluated once at program startup. You will either have to make the matrix definition local to display(), so that the matrix is re-evaluated each time you draw, or update the matrix every time the angle changes.
For the second option, you can update only the matrix elements that depend on the angle every time the angle changes. In the function that modifies moveCubeX, add:
xRotationMatrix[5] = cos(moveCubeX);
xRotationMatrix[6] = sin(moveCubeX);
xRotationMatrix[9] = -sin(moveCubeX);
xRotationMatrix[10] = cos(moveCubeX);
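Putting the two fixes together (degrees converted to radians, and only the angle-dependent entries refreshed), a hypothetical helper might look like this:

```cpp
#include <cmath>

// Refresh the four angle-dependent entries of the 4x4 x-rotation matrix.
// The caller's counter is in degrees; sin/cos need radians.
void updateXRotation(float matrix[16], double angleDegrees)
{
    const double PI = 3.14159265358979323846;
    const double a = angleDegrees * PI / 180.0;
    matrix[5]  = (float)std::cos(a);
    matrix[6]  = (float)std::sin(a);
    matrix[9]  = (float)-std::sin(a);
    matrix[10] = (float)std::cos(a);
}
```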

Implementing bounding sphere collision in OpenGL

I understand the basic principles of bounding sphere collision however implementing it is confusing me a little.
If I have two cubes defined within arrays cube1[] and cube2[], with each array consisting of GLfloats that make up each triangle, how can I first calculate the center point of each cube? And how would I get the radius of the sphere around it?
What mathematics are needed to calculate this?
EDIT: To give more clarification on my question. Assume I have a cube defined using the following array:
GLfloat cube[] = {
2.0f, 3.0f, -4.0f, // triangle 1, top right
3.0f, 3.0f, -4.0f,
2.0f, 2.0f, -4.0f, // bottom right
3.0f, 3.0f, -4.0f, // triangle 2, back face top left
3.0f, 2.0f, -4.0f, // bottom left
2.0f, 2.0f, -4.0f,
2.0f, 3.0f, -3.0f, // triangle 1, front face top left
2.0f, 2.0f, -3.0f, // bottom left
3.0f, 3.0f, -3.0f, // Bottom right
3.0f, 3.0f, -3.0f, // triangle 2, front face
2.0f, 2.0f, -3.0f,
3.0f, 2.0f, -3.0f, // Bottom right
2.0f, 3.0f, -3.0f, // triangle 1, top face
3.0f, 3.0f, -3.0f,
2.0f, 3.0f, -4.0f,
3.0f, 3.0f, -4.0f, // triangle 2, top face
2.0f, 3.0f, -4.0f,
3.0f, 3.0f, -3.0f,
2.0f, 2.0f, -3.0f, // triangle 1, bottom face
2.0f, 2.0f, -4.0f,
3.0f, 2.0f, -3.0f,
3.0f, 2.0f, -4.0f, // triangle 2, bottom face
3.0f, 2.0f, -3.0f, // Bottom Right.
2.0f, 2.0f, -4.0f,
2.0f, 2.0f, -4.0f, // triangle 1, left face
2.0f, 2.0f, -3.0f,
2.0f, 3.0f, -4.0f,
2.0f, 3.0f, -4.0f, // triangle 2, left face
2.0f, 2.0f, -3.0f,
2.0f, 3.0f, -3.0f,
3.0f, 2.0f, -4.0f, // triangle 1, right face
3.0f, 3.0f, -4.0f,
3.0f, 2.0f, -3.0f,
3.0f, 3.0f, -4.0f, // triangle 2, right face
3.0f, 3.0f, -3.0f,
3.0f, 2.0f, -3.0f,
};
Given this cube, I need to get the center point and keep track of it every time the cube translates. I believe I have done so but assistance on whether this is correct is also appreciated:
// Calculate initial center of the shape
glm::vec3 corner1 = glm::vec3(2.0f, 3.0f, -4.0f);
glm::vec3 corner2 = glm::vec3(2.0f, 2.0f, -4.0f);
glm::vec3 corner3 = glm::vec3(3.0f, 3.0f, -4.0f);
glm::vec3 corner4 = glm::vec3(3.0f, 2.0f, -4.0f);
glm::vec3 corner5 = glm::vec3(2.0f, 3.0f, -3.0f);
glm::vec3 corner6 = glm::vec3(2.0f, 2.0f, -3.0f);
glm::vec3 corner7 = glm::vec3(3.0f, 3.0f, -3.0f);
glm::vec3 corner8 = glm::vec3(3.0f, 2.0f, -3.0f);
GLfloat x = (corner1.x + corner2.x + corner3.x + corner4.x + corner5.x + corner6.x+ corner7.x + corner8.x)/8;
GLfloat y = (corner1.y + corner2.y + corner3.y + corner4.y + corner5.y + corner6.y+ corner7.y + corner8.y)/8;
GLfloat z = (corner1.z + corner2.z + corner3.z + corner4.z + corner5.z + corner6.z+ corner7.z + corner8.z)/8;
center = glm::vec4(x, y, z, 1.0f);
Translation is kept in check with the following function:
void Cube::Translate(double x, double y, double z)
{
    // Translation matrix for cube.
    glm::mat4 cubeTransMatrix = glm::mat4();
    cubeTransMatrix = glm::translate(cubeTransMatrix, glm::vec3(x, y, z));
    //center = cubeTransMatrix * center;

    // Move the cube
    for(int i = 0; i < sizeof(cube) / sizeof(GLfloat); i += 3){
        glm::vec4 vector = glm::vec4(cube[i], cube[i+1], cube[i+2], 1.0f);
        glm::vec4 translate = cubeTransMatrix * vector;
        glm::vec4 translateCenter = cubeTransMatrix * center;
        center.x = translateCenter[0];
        center.y = translateCenter[1];
        center.z = translateCenter[2];
        cube[i]   = translate[0];
        cube[i+1] = translate[1];
        cube[i+2] = translate[2];
    }
}
The center-point of a shape can be calculated in many ways, depending on what you want to consider the "center." However, for a cube, the center calculation is generally considered to be the mean of its points, which is relatively simple: Just get the mean of all the corners' coordinates by adding up all the vectors and dividing by 8. Depending on the exact mesh of your cube, you may have more vertices than that, but for a simple cube, this shouldn't be the case.
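A dependency-free sketch of that mean-of-corners computation (plain arrays instead of `glm::vec3`; names hypothetical):

```cpp
// Mean of a cube's 8 corner positions, as described above.
void centerOfCorners(const float corners[8][3], float out[3])
{
    out[0] = out[1] = out[2] = 0.0f;
    for (int i = 0; i < 8; ++i)
        for (int a = 0; a < 3; ++a)
            out[a] += corners[i][a];
    for (int a = 0; a < 3; ++a)
        out[a] /= 8.0f; // 8 corners
}
```

Fed the eight corners from the question's cube, this yields the center (2.5, 2.5, -3.5).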
If you don't have access to the vertices themselves (you loaded up a mesh, or are using the default cube, built in to GLUT, or something), you will need to keep track of transformations for that cube. I might suggest using a "local" position vector or a local transformation matrix for each cube.
With OpenGL, matrices should be column major, so the top 3 values in the rightmost column should be your location, in world coordinates, after any global transformations have taken place.
Detecting a collision is almost easier (once you get past the whole "predicting when the collision is going to take place" part, which I wouldn't worry about for your first implementation, if I were you). Spheres are simple shapes, and detecting if two spheres intersect is even simpler. All you need to do is find the squared distance between the two sphere centers and compare it to the square of the sum of their radii.
If the square of the summed radii is greater than the squared distance between the two sphere centers, then they intersect. Otherwise, they do not.
Just to illustrate how simple this calculation really is, I'll show you here:
float radiusSum = sphere0.radius + sphere1.radius;
float radiusSumSqr = radiusSum * radiusSum; // square of the *sum* of the radii
float distX = sphere0.position.x - sphere1.position.x;
float distY = sphere0.position.y - sphere1.position.y;
float distZ = sphere0.position.z - sphere1.position.z;
// Since we already need to square these, we won't need to take the absolute value
// to accurately compare them to the radii
distX *= distX;
distY *= distY;
distZ *= distZ;
float sqrDist = (distX + distY + distZ);
if(radiusSumSqr > sqrDist)
{
// They intersect
}
else
{
// They do not intersect
}
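Wrapped up as a standalone function (plain struct, hypothetical names; note the comparison uses the square of the summed radii, (r0 + r1)², which is the correct form):

```cpp
struct Sphere { float x, y, z, radius; };

// True if the spheres overlap: squared center distance versus the
// square of the *sum* of the radii (no sqrt needed).
bool spheresIntersect(const Sphere& a, const Sphere& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const float sqrDist = dx*dx + dy*dy + dz*dz;
    const float rSum = a.radius + b.radius;
    return rSum * rSum > sqrDist;
}
```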
Once you've detected the collision, assuming you want the spheres to be rigidbody colliders, moving them away from one another is incredibly simple. Simply take the intersection distance of the two spheres. For the sake of efficiency, we should modify our previous code a bit, however:
// Since we already need to square these, we won't need to take the absolute value
// to accurately compare them to the radii
float distSqrX = distX * distX;
float distSqrY = distY * distY;
float distSqrZ = distZ * distZ;
float sqrDist = (distSqrX+distSqrY+distSqrZ);
Once we've done that, we can get to calculating the rest of the resolution for this collision. We'll be doing it in a very simple way (assuming neither object has mass, and there is no impact to calculate).
float totalRadius = sphere0.radius + sphere1.radius;// the sum of the two spheres' radii
float dist = sqrt(sqrDist);// the actual distance between the two shapes' centers
float minMovement = (totalRadius - dist);// the difference between the total radius and the actual distance tells us how far they intersect.
minMovement /= dist;// Divide by the distance to "scale" this movement so we can "scale" our distance vector (distX, distY, and distZ)
float mvmtX = distX * minMovement * 0.5f;// The minimum movement on the x-axis to resolve the collision
float mvmtY = distY * minMovement * 0.5f;// The minimum movement on the y-axis to resolve the collision
float mvmtZ = distZ * minMovement * 0.5f;// The minimum movement on the z-axis to resolve the collision
// For the sake of simplicity, we'll just have them "pop" out of each other, and won't
// be doing any interpolation to "smooth" the spheres' interaction.
//
// Note that distX, distY and distZ were computed as sphere0 minus sphere1, so
// mvmtX, mvmtY and mvmtZ already point from sphere1 towards sphere0. Applying
// them with opposite signs pushes the two spheres straight apart; an extra
// per-axis sign test on top of this would flip the direction a second time and
// push the spheres into each other instead.
sphere0.position.x += mvmtX;
sphere0.position.y += mvmtY;
sphere0.position.z += mvmtZ;
sphere1.position.x -= mvmtX;
sphere1.position.y -= mvmtY;
sphere1.position.z -= mvmtZ;
Lastly, calculating the proper radius of the sphere to get the desired effect for collision detection about a cube can be done in one of three ways:
Using a circumscribed sphere (the sphere "touches" the corners of the cube), the formula for the radius is sqrt(3)*edgeLength*0.5. You will get an "overreactive" collision detection system, in that it will detect collisions that are reasonably far outside the volume of the cube, due to the radius being able to reach out to the corners of the box. The largest point of error will be at the center of one of the faces of the cube, where the sphere will overextend the cube by (sqrt(3) - 1)/2 (about 0.37) times the side length of the cube.
The second method would be to use an inscribed sphere, where the sphere is tangent to the faces of the cube (the sphere "touches" the centers of each cube face) and has a radius that is calculated as edgeLength*0.5. Again, there will be error, but this one will actually tend to have a bit MORE error, since it will be "underreactive" at 8 points, rather than being "overreactive" at 6, as the last one was. The distance by which each corner is "underreactive" (how far between the corners of the cube and the closest point on the surface of the sphere) is the same as the overreaction distance of the previous one, roughly (sqrt(3) - 1)/2 times the side length.
The last method, and most accurate, is to calculate the sphere such that it is tangent to the edges of the cube. The formula for this one's radius is edgeLength/sqrt(2). This sphere will "touch" the center of each edge of the cube, and will "overestimate" on every face and "underestimate" on every corner. However, the overestimation/underestimation distances are considerably smaller and generally more tolerable: about (sqrt(2) - 1)/2 ≈ 0.21 times the side length beyond each face, and about sqrt(3)/2 - 1/sqrt(2) ≈ 0.16 times the side length short of each corner, giving roughly half the error of either of the other two options.
The choice is yours as to which best suits your needs. The first one has the best "corner" detection but the worst "face" detection, the second has the best "face" detection and the worst "corner" detection, and the third has more or less the average of the first two, giving it the most reliable accuracy if all cases are a possibility.
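The three radius formulas are a one-liner each; a quick sanity check (edge length as a parameter, names hypothetical):

```cpp
#include <cmath>

// Bounding-sphere radii for a cube of the given edge length, per the
// three options above.
float circumscribedRadius(float edge) { return std::sqrt(3.0f) * edge * 0.5f; } // touches corners
float inscribedRadius(float edge)     { return edge * 0.5f; }                   // touches face centers
float midRadius(float edge)           { return edge / std::sqrt(2.0f); }        // touches edge midpoints
```

For a unit cube these come out to about 0.866, 0.5 and 0.707 respectively, so the inscribed sphere is always the smallest and the circumscribed sphere the largest.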