How to estimate direct horizontal irradiance from GHI - pvlib

The Erbs function in pvlib (see pvlib.irradiance.erbs) can be used to estimate the direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) from global horizontal irradiance (GHI). However, while DHI is for a horizontal surface, DNI is for a surface perpendicular to the sun's rays, not a horizontal one. How can I estimate the direct horizontal irradiance given GHI? For example, this may be the case where one wants to separate the diffuse and direct components of a GHI measurement for the same horizontal surface. Is it reasonable to use the Erbs model, compute DHI, and subtract DHI from GHI? I.e. direct horizontal irradiance = GHI - DHI?

Is it reasonable to use the Erbs model, compute DHI, and subtract DHI from GHI? I.e. Direct horizontal irradiance = GHI - DHI?
Yes.
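A minimal sketch of that computation, assuming pvlib's documented erbs(ghi, zenith, datetime_or_doy) signature and made-up site and measurement data; the variable names are my own:

import numpy as np
import pandas as pd
import pvlib

# Hypothetical site and GHI measurements, purely for illustration.
times = pd.date_range('2021-06-01 06:00', '2021-06-01 18:00', freq='1h', tz='Etc/GMT+5')
latitude, longitude = 40.0, -80.0
ghi = pd.Series(600.0, index=times)  # placeholder GHI in W/m^2

solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)
out = pvlib.irradiance.erbs(ghi, solpos['zenith'], times)

# Direct horizontal irradiance as suggested above: GHI minus DHI.
direct_horizontal = ghi - out['dhi']

# Equivalent check: project DNI onto the horizontal with cos(zenith).
direct_horizontal_alt = out['dni'] * np.cos(np.radians(solpos['zenith']))

Away from very low sun elevations, where the model clamps DNI, the two estimates should agree.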

Related

Translation is inaccurate in orthogonal view

Attached to this posting is an image rendered by OpenGL. The image shows (in orthogonal view) some white cubes made from polygons (each cube is made of two triangles).
The identical cubes are instanced and thus share a single original copy of the mesh; individual positioning (on the x and y axes) is applied to each instance, giving me the cubes as in the image. In addition, the positioning data is of type float and rounded to two decimal places to prevent rounding error during cumulative additions. After manually verifying the x-axis position values against those used by the cubes, they are exactly 0.11 apart, like so: 0.06, 0.17, 0.28, 0.39, 0.50, 0.61, 0.72, 0.83, 0.94, 1.05 (x-axis data used by the top row).
Screen-capturing the image from OpenGL and zooming in to pixel level confirms the noticeable differences in spacing between the columns, which are 1 to 3 pixels. The more the arrangement of white cubes is scaled down in OpenGL, the more pronounced the differences become, which does not look great visually.
Now my question is: why is the spacing between the cubes changing, even though the cubes are exactly 0.11 units apart and use the same instance of the mesh? I need the spacing between each column of cubes to be equal; is there a way to fix this, or an alternative solution?
Note: It appears to me, from studying the image, that the translations for spacing are correct; however, the cubes do not appear to be drawn correctly, otherwise the alignment would not drift across consecutive columns...
This is probably just an aliasing issue. Ultimately, your floating-point object coordinates are projected into integer pixel coordinates. If they fall in between, they are rounded one way or the other, leading to issues like the one you are seeing here.
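A small plain-Python illustration of that effect, assuming a simple linear world-to-pixel mapping (the scale factor below is made up):

positions = [0.06, 0.17, 0.28, 0.39, 0.50, 0.61, 0.72, 0.83, 0.94, 1.05]
pixels_per_unit = 40.0  # hypothetical viewport scale
pixel_columns = [round(x * pixels_per_unit) for x in positions]
gaps = [b - a for a, b in zip(pixel_columns, pixel_columns[1:])]
print(gaps)  # a mix of 4- and 5-pixel gaps, even though the spacing is exactly 0.11 units

As long as geometry is snapped to whole pixels, some columns will end up one pixel wider than others; rendering at a higher resolution or with multisampling only softens the effect.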

Rounding a 3D point relative to a plane

I have a Plane class that holds n for the normal and q for a point on the plane. I also have another point p that lies on that plane. How do I go about rounding p to the nearest unit on that plane? Like snapping a cursor to a 3D grid, except that the grid can lie on a rotated plane.
Image to explain:
Red is the current point. Green is the rounded point that I'm trying to get.
Probably the easiest way to achieve this is by taking the plane to define a rotated and shifted coordinate system. This allows you to construct the matrices for transforming a point in global coordinates into plane coordinates and back. Once you have this, you can simply transform the point into plane coordinates, perform the rounding/projection in a trivial manner, and convert back to world coordinates.
Of course, the problem is underspecified the way you pose the question: the transformation you need has six degrees of freedom, your plane equation only yields three constraints. So you need to add some more information: the location of the origin within the plane, and the rotation of your grid around the plane normal.
Personally, I would start by deriving a plane description in parametric form:
xVec = alpha*direction1 + beta*direction2 + x0
Of course, such a description contains nine variables (three vectors), but you can normalize the two direction vectors and constrain them to be orthogonal, which reduces the number of degrees of freedom back to six.
The two normalized direction vectors, together with the normalized normal, are the base vectors of the rotated coordinate system, so you can simply construct the rotation matrix by putting these three vectors together. To get the inverse rotation, simply transpose the resulting matrix. Add the translation / inverse translation on the appropriate side of the rotation, and you are done.
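A numpy sketch of that recipe; the function name and the arbitrary choice of in-plane directions (the extra freedoms mentioned above) are my own:

import numpy as np

def snap_to_plane_grid(p, n, q, grid=1.0):
    # n: plane normal, q: a point on the plane, p: the point to snap (3-vectors).
    p, n, q = (np.asarray(v, dtype=float) for v in (p, n, q))
    n = n / np.linalg.norm(n)
    # Arbitrary helper to pick the first in-plane direction (this fixes the
    # unspecified rotation of the grid around the normal).
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, n)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    d1 = np.cross(n, helper)
    d1 /= np.linalg.norm(d1)
    d2 = np.cross(n, d1)               # orthogonal to both n and d1, unit length
    R = np.column_stack((d1, d2, n))   # plane -> world rotation; R.T is the inverse

    local = R.T @ (p - q)              # the point in plane coordinates
    local[0] = round(local[0] / grid) * grid   # round the in-plane components
    local[1] = round(local[1] / grid) * grid
    local[2] = 0.0                     # project onto the plane
    return R @ local + q               # back to world coordinates

Here q plays the role of the grid origin within the plane, the other unspecified piece of information.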

How do you find interpolated points on a sphere / hemisphere from data in spherical coordinates

How do you interpolate data on the sphere / hemisphere in C++?
I have a bunch of theta, phi spherical coordinates, each with an associated density value.
[Theta | Phi | Density] for about 100 points.
If I sample a new data point that is not captured in the data but lies on the sphere, how can I find what the interpolated density value should be from the data points?
Splines, RBFs, something else?
You could use something like Voronoi tessellation to create a set of (preferably convex) facets from the existing points, and then interpolate within these facets based on proximity to each vertex.
This answer might offer some useful pointers:
Algorithm to compute a Voronoi diagram on a sphere?
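For completeness, here is a sketch of the RBF route the question mentions (not the Voronoi approach above), assuming scipy >= 1.7 for RBFInterpolator and the convention theta = polar angle, phi = azimuth; the data below are placeholders:

import numpy as np
from scipy.interpolate import RBFInterpolator

def to_unit_vectors(theta, phi):
    # Convert spherical angles to 3D unit vectors so distances respect the sphere.
    return np.column_stack((np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)))

# ~100 sample points with an associated density (made-up values).
rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1, 1, 100))   # uniformly distributed on the sphere
phi = rng.uniform(0, 2 * np.pi, 100)
density = np.sin(theta) * np.cos(phi)        # placeholder density values

interp = RBFInterpolator(to_unit_vectors(theta, phi), density,
                         kernel='thin_plate_spline')

# Query a new point on the sphere.
new_value = interp(to_unit_vectors(np.array([0.7]), np.array([1.2])))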

OpenGL: changing the unit of measure for the coordinate system

I'm learning OpenGL using the newest blue book, and I was hoping to get a point of clarification. The book says you can use any system of measurement for the coordinate system; however, when doing 3d perspective programming, the world coordinates are +1/-1.
I guess where I am a little confused is this: let's say I want to use feet and inches for my world, and I want to build the interior of my house. Would I just use translates (x (feet), y (feet), z (feet)), etc., or is there some function to change the x, y world coordinates (i.e. change the default from -1/+1 to, let's say, -20/+20)? I know OpenGL converts everything to unit coordinates in the end.
So I guess my biggest gap of knowledge is: how do I model real-world objects (lengths) so they make sense in OpenGL?
I think the author is referring to normalized coordinates (device coordinates) that appear after the perspective divide.
Typically, when you actually manipulate objects in OpenGL you use standard world coordinates, which are not limited to -1/+1. These world coordinates can be anything you like and are translated into normalized coordinates by multiplying by the modelview and projection matrices and then dividing by the homogeneous coordinate 'w'.
OpenGL will do all this for you (until you get into shaders) so don't really worry about normalized coordinates.
Well, it's all relative. If you set 1 OpenGL unit to be 1 meter and then build everything perfectly to scale on that basis, then you have the scale you want (i.e. 10 meters would be 10 OpenGL units and 1 millimeter would be 0.001 OpenGL units).
If, however, you were to then introduce a model that was designed such that 1 OpenGL unit = 1 foot then the object is going to end up being ~3 times too large.
Sure, you can change between the various conventions, but by far the best way to do this is to re-scale your model before loading it into your engine.
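A tiny sketch of that re-scaling step, assuming vertices are stored as an N x 3 array, the model was authored with 1 unit = 1 foot, and the world uses 1 unit = 1 meter (all names and numbers are illustrative):

import numpy as np

FEET_TO_METERS = 0.3048
vertices_feet = np.array([[10.0, 0.0, 0.0],
                          [0.0, 8.0, 0.0],
                          [0.0, 0.0, 12.0]])      # placeholder model vertices in feet
vertices_meters = vertices_feet * FEET_TO_METERS  # rescale once, before loading into the engine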
…however when doing 3d perspective programming, the world coordinates are +1/-1.
Says who? Maybe you are referring to clip space; however, that is just one special space in which all coordinates end up after projection.
Your world units may be anything. The relevant part is how they are projected. Say you have a perspective projection with a 90° FOV, the near clip plane at 0.01, the far clip plane at 10.0, and a square aspect ratio (for simplicity). Then your view volume into the world is a frustum whose cross-section at distance 0.01 from the viewer is a square with side length 0.02, stretching to a square with side length 20 at distance 10.0. Reduce the FOV and the lateral lengths shrink accordingly.
Nevertheless, coordinates in roughly the range 0.01 to 10.0 are projected into the +/-1 clip space.
So it's up to you to choose the projection limits to match your scene and units of choice.
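A quick check of those numbers, assuming a symmetric frustum, square aspect, and the FOV given as the full vertical field of view:

import math

def frustum_side_length(fov_degrees, distance):
    # Full side length of the square cross-section at the given distance.
    return 2.0 * distance * math.tan(math.radians(fov_degrees) / 2.0)

print(frustum_side_length(90.0, 0.01))  # 0.02 at the near plane
print(frustum_side_length(90.0, 10.0))  # 20.0 at the far plane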
The normalized device coordinate range is always -1 to +1, but it can be used as you wish.
Suppose we want a range of -100 to +100; then divide the actual coordinate by 100. In this case, use
the functions glVertex2f or glVertex3f only.
e.g. glVertex2f(10.0f/100, 10.0f/100).

scale random 3d model to fit in a viewport

How can I scale a random 3D model to fit in an OpenGL viewport? I am able to center the model in the middle of the viewport. How do I scale it to fit in the viewport? The model could be an airplane, a cone, a 3D object, or any other random model.
Appreciate any help.
You'll need the following information:
r: the radius of the object's bounding sphere
z: the distance from the object to the camera
fovy: the vertical field of view (let's say in degrees) of the camera, as you might have passed it to gluPerspective
Make a little sketch of the situation, find the right triangle in there, and deduce the maximum radius of a sphere that would fit exactly. Given the above parameters, you should find r_max = z * sin(fovy*M_PI/180 / 2).
From that, the scale factor is r_max / r.
All this assumes that the viewport is wider than it is high; if it's not, you should derive fovx first, and use that instead of fovy.
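Putting the whole recipe into a small sketch (the function name and example numbers are my own):

import math

def fit_scale(r, z, fovy_degrees, aspect):
    # r: bounding-sphere radius, z: distance from the object to the camera,
    # fovy_degrees: vertical field of view, aspect: viewport width / height.
    fov = math.radians(fovy_degrees)
    if aspect < 1.0:
        # Viewport is taller than it is wide: derive fovx and use it instead.
        fov = 2.0 * math.atan(aspect * math.tan(fov / 2.0))
    r_max = z * math.sin(fov / 2.0)   # largest bounding-sphere radius that fits
    return r_max / r

# Example: bounding-sphere radius 3.0, camera distance 10.0, 45° fovy, 4:3 viewport.
scale = fit_scale(r=3.0, z=10.0, fovy_degrees=45.0, aspect=4.0 / 3.0)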