Reconstruction of Enright Test using OpenVDB - c++

I want to recreate the Enright test results with OpenVDB, as described in the article by Ken Museth.
After setting up OpenVDB I created the sphere similarly to the way it is done in the OpenVDB test repository.
The results I get are very different from the results shown in the article.
My code is shown below:
openvdb::GridCPtrVec SphereTest(){
    openvdb::GridCPtrVec GridVec;
    float fGridSize = 512;
    int iGridSize = 512;
    double w = 1;
    openvdb::Vec3f centerEnright(0.35*fGridSize, 0.35*fGridSize, 0.35*fGridSize);
    openvdb::FloatGrid::Ptr grid(new openvdb::FloatGrid());
    grid->setGridClass(openvdb::GridClass::GRID_LEVEL_SET);
    auto tree = grid->treePtr();
    auto outside = 5 * w;
    auto inside = -outside;
    for (int i = 0; i < iGridSize; ++i)
    {
        for (int j = 0; j < iGridSize; j++)
        {
            for (int k = 0; k < iGridSize; k++)
            {
                openvdb::Coord coord(i, j, k);
                const openvdb::Vec3f p = grid->transform().indexToWorld(coord);
                const float dist = float((p - centerEnright).length() - (0.15*fGridSize));
                auto aDist = std::abs(dist);
                if (aDist < outside)
                {
                    if (dist > 0)
                        tree->setValue(coord, dist);
                    else
                        tree->setValue(coord, dist);
                }
                else
                {
                    if (dist > outside)
                        tree->setValueOff(coord, outside);
                    else
                        tree->setValueOff(coord, inside);
                }
            }
        }
    }
    std::cout << "Active Voxels MV: " << grid->activeVoxelCount() / 1000000.0 << "\n";
    double mem = MemInfo::virtualMemUsedByMe();
    std::cout << "Memory MB: " << mem / 1000000.0 << "\n";
    openvdb::tools::pruneLevelSet(grid->tree());
    std::cout << "Active Voxels MV: " << grid->activeVoxelCount() / 1000000.0 << "\n";
    double lastmem = mem;
    mem = MemInfo::virtualMemUsedByMe();
    std::cout << "Memory MB: " << (mem - lastmem) / 1000000.0 << "\n";
    GridVec.push_back(grid);
    return GridVec;
}
My results are as follows:
Active Voxels MV: 0.742089
Memory MB: 617.325
after pruning:
Active Voxels MV: 0.742089
Memory MB: 56.234
As one can see, the memory use is roughly ten times larger than the results in the article.
The reference results are in Tables II, III and IV of the article, for the 512^3 grid size with the [6,5,4,3] tree branching. I reach almost the same number of active voxels (Table III), but with significantly more memory consumption (Table IV), while the results of Table II are very confusing to me. Am I missing something, or doing something wrong? Maybe I am not activating some kind of compression or bit quantization, as the article describes.
Also, when looking at the generated grid in the viewer, it shows a perfectly rounded sphere (not voxelized in a boolean manner), which is what I'm going for.
Any thoughts?
Thank you.
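For comparison, the same narrow-band sphere can also be built with OpenVDB's bundled level-set sphere factory, which only allocates voxels inside the narrow band instead of visiting all 512^3 coordinates. A minimal sketch, assuming a standard OpenVDB installation and mirroring the values used above:
#include <openvdb/openvdb.h>
#include <openvdb/tools/LevelSetSphere.h>

openvdb::FloatGrid::Ptr MakeEnrightSphere()
{
    openvdb::initialize();
    const float voxelSize = 1.0f;                 // world units per voxel, as in the code above
    const float radius = 0.15f * 512.0f;          // 0.15 of the 512^3 domain
    const openvdb::Vec3f center(0.35f * 512.0f, 0.35f * 512.0f, 0.35f * 512.0f);
    const float halfWidth = 5.0f;                 // narrow band of +/-5 voxels
    // createLevelSetSphere only allocates voxels inside the narrow band,
    // so the dense background never exists in memory.
    return openvdb::tools::createLevelSetSphere<openvdb::FloatGrid>(
        radius, center, voxelSize, halfWidth);
}
Comparing the active voxel count and memory footprint of this grid with the hand-built one should show whether the extra memory comes from the dense loop (calling setValueOff on every background voxel allocates leaf nodes first, which pruning only collapses afterwards) or from something else.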

Related

Deinterleave audio data in varied bitrates

I'm trying to write one function that can deinterleave 8/16/24/32-bit audio data, given that the audio data naturally arrives in an 8-bit buffer.
I have this working for 8 bit, and it works for 16/24/32, but only for the first channel (channel 0). I have tried so many + and * and other operators that I'm just guessing at this point. I cannot find the magic formula. I am using C++, but would also accept a memcpy into the vector if that's easiest.
Check out the code. If you change the Demux call to another bitrate you will see the problem. There is an easy math solution here, I am sure; I just cannot get it.
#include <vector>
#include <map>
#include <iostream>
#include <iomanip>
#include <string>
#include <string.h>
#include <cstdint>

const int bitrate = 8;
const int channel_count = 5;
const int audio_size = bitrate * channel_count * 4;
uint8_t audio_ptr[audio_size];
const int bytes_per_channel = audio_size / channel_count;

void Demux(int bitrate){
    int byterate = bitrate/8;
    std::map<int, std::vector<uint8_t> > channel_audio;
    for(int i = 0; i < channel_count; i++){
        std::vector<uint8_t> audio;
        audio.reserve(bytes_per_channel);
        for(int x = 0; x < bytes_per_channel; x += byterate){
            for(int z = 0; z < byterate; z++){
                // What is the magic formula!
                audio.push_back(audio_ptr[(x * channel_count) + i + z]);
            }
        }
        channel_audio.insert(std::make_pair(i, audio));
    }

    int remapsize = 0;
    std::cout << "\nRemapped Audio";
    std::map<int, std::vector<uint8_t> >::iterator it;
    for(it = channel_audio.begin(); it != channel_audio.end(); ++it){
        std::cout << "\nChannel" << it->first << " ";
        std::vector<uint8_t> v = it->second;
        remapsize += v.size();
        for(size_t i = 0; i < v.size(); i++){
            std::cout << "0x" << std::hex << std::setfill('0') << std::setw(2) << +v[i] << " ";
            if(i && (i + 1) % 32 == 0){
                std::cout << std::endl;
            }
        }
    }
    std::cout << "Total remapped audio size is " << std::dec << remapsize << std::endl;
}

int main()
{
    // External data
    std::cout << "Raw Audio\n";
    for(int i = 0; i < audio_size; i++){
        audio_ptr[i] = i;
        std::cout << "0x" << std::hex << std::setfill('0') << std::setw(2) << +audio_ptr[i] << " ";
        if(i && (i + 1) % 32 == 0){
            std::cout << std::endl;
        }
    }
    std::cout << "Total raw audio size is " << std::dec << audio_size << std::endl;

    Demux(8);
    //Demux(16);
    //Demux(24);
    //Demux(32);
}
You're actually pretty close. But the code is confusing: specifically the variable names and what actual values they represent. As a result, you appear to be just guessing the math. So let's go back to square one and determine what exactly it is we need to do, and the math will very easily fall out of it.
First, just imagine we have one sample covering each of the five channels. This is called an audio frame for that sample. The frame looks like this:
[channel0][channel1][channel2][channel3][channel4]
The width of a sample in one channel is called byterate in your code, but I don't like that name. I'm going to call it bytes_per_sample instead. You can easily see the width of the entire frame is this:
int bytes_per_frame = bytes_per_sample * channel_count;
It should be equally obvious that to find the starting offset for channel c within a single frame, you multiply as follows:
int sample_offset_in_frame = bytes_per_sample * c;
That's just about all you need! The last bit is your z loop which covers each byte in a single sample for one channel. I don't know what z is supposed to represent, apart from being a random single-letter identifier you chose, but hey let's just keep it.
Putting all this together, you get the absolute offset of sample s in channel c and then you copy individual bytes out of it:
int sample_offset = bytes_per_frame * s + bytes_per_sample * c;
for (int z = 0; z < bytes_per_sample; ++z) {
    audio.push_back(audio_ptr[sample_offset + z]);
}
This does actually assume you're looping over the number of samples, not the number of bytes in your channel. So let's show all the loops, for completeness' sake:
const int bytes_per_sample = bitrate / 8;
const int bytes_per_frame = bytes_per_sample * channel_count;
const int num_samples = audio_size / bytes_per_frame;

for (int c = 0; c < channel_count; ++c)
{
    // "audio" is the per-channel vector declared in your original code.
    std::vector<uint8_t> audio;
    audio.reserve(num_samples * bytes_per_sample);

    int sample_offset = bytes_per_sample * c;
    for (int s = 0; s < num_samples; ++s)
    {
        for (int z = 0; z < bytes_per_sample; ++z)
        {
            audio.push_back(audio_ptr[sample_offset + z]);
        }
        // Skip to next frame
        sample_offset += bytes_per_frame;
    }
    // Store the channel as before, e.g. channel_audio.insert(std::make_pair(c, audio));
}
You'll see here that I split the math up so that it's doing fewer multiplications in the loops. This is mostly for readability, but it might also help the compiler understand what's happening when it tries to optimize. Concerns over optimization are secondary (and in your case, there are much more expensive things going on with those vectors and the map).
The most important thing is that you have readable code with reasonable variable names that makes logical sense.
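For reference, the same offset math can be dropped straight into the question's original loop variables; this is only a restatement of the formula above in terms of x, i and z, and the surrounding loops stay as they are:
// x already advances in byterate-sized steps, so x / byterate is the sample
// index, and (x / byterate) * (byterate * channel_count) == x * channel_count
// is where that frame starts; i * byterate then skips to this channel's sample.
audio.push_back(audio_ptr[(x * channel_count) + (i * byterate) + z]);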

How to speed up the call to a huge vector in a for loop c++ armadillo

I am using the Armadillo library. The part of my program that is too slow and that I need to speed up is the following:
for(int q(0); q < Nk*Nk; q++){
    for(int k(0); k < Nk*Nk; k++){
        int kq = (k+q) % (Nk*Nk);
        cx_mat Gx = ((Eigveck.slice(k)).t())*(Vxk.slice(k)-Vxk.slice(kq))*Eigveck.slice(kq);
        cx_mat Gy = ((Eigveck.slice(k)).t())*(Vyk.slice(k)-Vyk.slice(kq))*Eigveck.slice(kq);
        vec ek = Eigvalk.col(k);
        vec ekq = Eigvalk.col(kq);
        for(int i(0); i < Ltot; i++){
            for(int j(0); j < Ltot; j++){
                chi = chi + (abs(Gx(i,j))*abs(Gx(i,j))+abs(Gy(i,j))*abs(Gy(i,j)))*(1.0/(1.0+exp(ekq(j)/T))-1.0/(1.0+exp(ek(i)/T)))*((ekq(j)-ek(i))/((ekq(j)-ek(i))*(ekq(j)-ek(i))+eta*eta))/(Nk*Nk);
            }
        }
    }
    double qx = (G1(0)*floor(q/Nk)/Nk+G2(0)*(q % Nk)/Nk);
    double qy = (G1(1)*floor(q/Nk)/Nk+G2(1)*(q % Nk)/Nk);
    lindhard << qx << " " << qy << " " << -chi << " " << endl;
}
Before this part, I define a huge matrix, Eigvalk, and huge cubes, Eigveck, Vxk and Vyk.
Now, accessing their values in the for loop is extremely slow; it takes ages. The cubes contain eigenvectors and other quantities of the problem. The thing is that for Nk=10 (a very small Nk, just to test the code), it takes 0.1 seconds to compute the Nk*Nk=100 sets of 47 eigenvectors, but 4.5 seconds to perform the loop shown above that uses them. I have checked that the part that costs time is the call
cx_mat Gx = .....
I have also tried defining vectors or one huge cx_mat (by vectorising the matrices) instead of cx_cube, but nothing changes.
Is there a better way to solve this?
I don't see any major errors. The order of traversal of the matrices is OK.
I think your code may be efficiently calculated in parallel using an OpenMP reduction, like this:
for(int q(0); q < Nk*Nk; q++){
    #pragma omp parallel for default(shared) reduction(+:chi)
    for(int k(0); k < Nk*Nk; k++){
        int kq = (k+q) % (Nk*Nk);
        cx_mat Gx = ((Eigveck.slice(k)).t())*(Vxk.slice(k)-Vxk.slice(kq))*Eigveck.slice(kq);
        cx_mat Gy = ((Eigveck.slice(k)).t())*(Vyk.slice(k)-Vyk.slice(kq))*Eigveck.slice(kq);
        vec ek = Eigvalk.col(k);
        vec ekq = Eigvalk.col(kq);
        for(int i(0); i < Ltot; i++){
            for(int j(0); j < Ltot; j++){
                chi = chi + (abs(Gx(i,j))*abs(Gx(i,j))+abs(Gy(i,j))*abs(Gy(i,j)))*(1.0/(1.0+exp(ekq(j)/T))-1.0/(1.0+exp(ek(i)/T)))*((ekq(j)-ek(i))/((ekq(j)-ek(i))*(ekq(j)-ek(i))+eta*eta))/(Nk*Nk);
            }
        }
    }
    double qx = (G1(0)*floor(q/Nk)/Nk+G2(0)*(q % Nk)/Nk);
    double qy = (G1(1)*floor(q/Nk)/Nk+G2(1)*(q % Nk)/Nk);
    lindhard << qx << " " << qy << " " << -chi << " " << endl;
}
P.S.
Perhaps also define some const local variables inside the inner loop, such as
const auto delta = ekq(j) - ek(i);
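For illustration, here is a sketch of what the innermost loop body might look like with those const locals pulled out (the names gx, gy, fi, fj and delta are just suggestions; the arithmetic is unchanged from the original expression):
for(int i(0); i < Ltot; i++){
    const double fi = 1.0/(1.0+exp(ek(i)/T));       // Fermi factor for ek(i), hoisted out of the j loop
    for(int j(0); j < Ltot; j++){
        const auto gx      = abs(Gx(i,j));
        const auto gy      = abs(Gy(i,j));
        const double fj    = 1.0/(1.0+exp(ekq(j)/T)); // Fermi factor for ekq(j)
        const auto delta   = ekq(j) - ek(i);
        chi = chi + (gx*gx + gy*gy) * (fj - fi) * (delta/(delta*delta + eta*eta)) / (Nk*Nk);
    }
}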
How did you measure your hot spot?
What compiler options do you use? I assume that you have a proper optimization level turned on, right?

RIFT descriptors return NaN in pcl library

I am trying to compute the RIFT descriptors of some point clouds, for the later purpose of matching them against the descriptors of other point clouds to check whether they belong to the same point cloud.
The problem is that the computation returns NaN values, so I cannot match them or do anything with them.
I am using PCL 1.7, and I previously remove NaNs from the clouds with pcl::removeNaNFromPointCloud().
The code for computing the descriptors is:
pcl::PointCloud<RIFT32>::Ptr processRIFT(
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud) {
    // ------------------------------------------------------------------
    // -----Read ply file-----
    // ------------------------------------------------------------------
    //Assign pointer to the keypoints cloud
    /*pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudColor(
            new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::PointCloud<pcl::PointXYZRGB>& point_cloud = *cloudColor;
    if (pcl::io::loadPLYFile(filename, point_cloud) == -1) {
        cerr << "Was not able to open file \"" << filename << "\".\n";
        printUsage("");
    }*/
    // Object for storing the point cloud with intensity value.
    pcl::PointCloud<pcl::PointXYZI>::Ptr cloudIntensityGlobal(
            new pcl::PointCloud<pcl::PointXYZI>);
    // Convert the RGB to intensity.
    pcl::PointCloudXYZRGBtoXYZI(*cloud, *cloudIntensityGlobal);

    pcl::PointCloud<pcl::PointWithScale> sifts = processSift(cloud);

    // We find the corresponding point of the SIFT keypoint in the original
    // cloud and store it with RGB so that it can be converted to intensity.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_Color(
            new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::PointCloud<pcl::PointXYZRGB>& point_cloud_sift = *cloud_Color;
    for (int i = 0; i < sifts.points.size(); ++i) {
        pcl::PointWithScale pointSift = sifts.points[i];
        pcl::PointXYZRGB point;
        for (int j = 0; j < cloud->points.size(); ++j) {
            point = cloud->points[j];
            /*if (pointSift.x == point.x && pointSift.y == point.y
                    && pointSift.z == point.z) {*/
            if (sqrt(
                    pow(pointSift.x - point.x, 2)
                            + pow(pointSift.y - point.y, 2)
                            + pow(pointSift.z - point.z, 2)) < 0.005) {
                point_cloud_sift.push_back(point);
                //std::cout << point.x << " " << point.y << " " << point.z << std::endl;
                break;
            }
        }
    }
    cout << "Keypoint cloud has " << point_cloud_sift.points.size()
            << " points\n";

    // Object for storing the point cloud with intensity value.
    pcl::PointCloud<pcl::PointXYZI>::Ptr cloudIntensityKeypoints(
            new pcl::PointCloud<pcl::PointXYZI>);
    // Object for storing the intensity gradients.
    pcl::PointCloud<pcl::IntensityGradient>::Ptr gradients(
            new pcl::PointCloud<pcl::IntensityGradient>);
    // Object for storing the normals.
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    // Object for storing the RIFT descriptor for each point.
    pcl::PointCloud<RIFT32>::Ptr descriptors(new pcl::PointCloud<RIFT32>());

    // Convert the RGB to intensity.
    pcl::PointCloudXYZRGBtoXYZI(*cloud_Color, *cloudIntensityKeypoints);
    std::cout << "Size: " << cloudIntensityKeypoints->points.size() << "\n";

    // Estimate the normals.
    pcl::NormalEstimation<pcl::PointXYZI, pcl::Normal> normalEstimation;
    normalEstimation.setInputCloud(cloudIntensityGlobal);
    //normalEstimation.setSearchSurface(cloudIntensityGlobal);
    normalEstimation.setRadiusSearch(0.03);
    pcl::search::KdTree<pcl::PointXYZI>::Ptr kdtree(
            new pcl::search::KdTree<pcl::PointXYZI>);
    normalEstimation.setSearchMethod(kdtree);
    normalEstimation.compute(*normals);

    // Compute the intensity gradients.
    pcl::IntensityGradientEstimation<pcl::PointXYZI, pcl::Normal,
            pcl::IntensityGradient,
            pcl::common::IntensityFieldAccessor<pcl::PointXYZI> > ge;
    ge.setInputCloud(cloudIntensityGlobal);
    //ge.setSearchSurface(cloudIntensityGlobal);
    ge.setInputNormals(normals);
    ge.setRadiusSearch(0.03);
    ge.compute(*gradients);

    // RIFT estimation object.
    pcl::RIFTEstimation<pcl::PointXYZI, pcl::IntensityGradient, RIFT32> rift;
    rift.setInputCloud(cloudIntensityKeypoints);
    rift.setSearchSurface(cloudIntensityGlobal);
    rift.setSearchMethod(kdtree);
    // Set the intensity gradients to use.
    rift.setInputGradient(gradients);
    // Radius, to get all neighbors within.
    rift.setRadiusSearch(0.05);
    // Set the number of bins to use in the distance dimension.
    rift.setNrDistanceBins(4);
    // Set the number of bins to use in the gradient orientation dimension.
    rift.setNrGradientBins(8);
    // Note: you must change the output histogram size to reflect the previous values.
    rift.compute(*descriptors);

    cout << "Computed " << descriptors->points.size() << " RIFT descriptors\n";
    //matchRIFTFeatures(*descriptors, *descriptors);
    return descriptors;
}
And the code to match the descriptors is currently the following; KdTreeFLANN does not let me compile an instance of it with the RIFT type, and I don't understand why.
int matchRIFTFeatures(pcl::PointCloud<RIFT32>::Ptr descriptors1,
        pcl::PointCloud<RIFT32>::Ptr descriptors2) {
    int correspondences = 0;
    for (int i = 0; i < descriptors1->points.size(); ++i) {
        RIFT32 des1 = descriptors1->points[i];
        double minDis = 100000000000000000;
        double actDis = 0;
        //Find closest descriptor
        for (int j = 0; j < descriptors2->points.size(); ++j) {
            actDis = distanceRIFT(des1, descriptors2->points[j]);
            //std::cout << "act distance: " << actDis << std::endl;
            if (actDis < minDis) {
                minDis = actDis;
            }
        }
        //std::cout << "min distance: " << minDis << std::endl;
        // If the distance between both descriptors is less than the threshold,
        // we found a correspondence.
        if (minDis < 0.5)
            ++correspondences;
    }
    return correspondences;
}
What am I doing wrong?
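One debugging sketch that might help narrow this down (not part of the original code): assuming RIFT32 is a pcl::Histogram<32> typedef, i.e. it exposes a float histogram[32] member, the computed descriptors can be scanned for non-finite bins before any matching is attempted, so the NaN-producing keypoints can at least be counted and skipped.
#include <cmath>

// Hypothetical helper: count descriptors that contain NaN/Inf bins.
// Assumes RIFT32 has a float histogram[32] member (as pcl::Histogram<32> does).
int countInvalidDescriptors(const pcl::PointCloud<RIFT32>& descriptors) {
    int invalid = 0;
    for (size_t i = 0; i < descriptors.points.size(); ++i) {
        for (int b = 0; b < 32; ++b) {
            if (!std::isfinite(descriptors.points[i].histogram[b])) {
                ++invalid;
                break;          // one bad bin is enough to flag this descriptor
            }
        }
    }
    return invalid;
}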

C++ - Comparing multiple int arrays and returning the array with the smallest span

I am working on a problem where I have multiple arrays that are to be compared against a single array. The array with the shortest span between indexes will be returned.
Here is an example of a set of arrays I would be working with
(if it is of any importance, these values represent RGB values):
int[] workingset = {55, 34,102};
int[] test1 = {54,36,99};
int[] test2 = {21,65,231};
int[] test3 = {76,35,1};
int[] test4 = {133,22,3};
Because test1[] values are closest to workingset[], test1[] would be the array that would be returned.
I apologize for not putting up sample code, but I simply could not think of a way to piece this puzzle together.
You could easily sum up the absolute differences of all components (r, g, b) and check which test array has the smallest total difference:
#include <iostream>
#include <cstdlib>   // abs
#include <climits>   // INT_MAX

int main(int argc, char* args[])
{
    int workingset[] = {55, 34, 102};
    int test1[] = {54, 36, 99};
    int test2[] = {21, 65, 231};
    int test3[] = {76, 35, 1};
    int test4[] = {133, 22, 3};
    int sums[4] = {};

    for(int i = 0; i < 3; ++i){
        sums[0] += abs(test1[i] - workingset[i]);
    }
    std::cout << "diff test1 " << sums[0] << std::endl;

    for(int i = 0; i < 3; ++i){
        sums[1] += abs(test2[i] - workingset[i]);
    }
    std::cout << "diff test2 " << sums[1] << std::endl;

    for(int i = 0; i < 3; ++i){
        sums[2] += abs(test3[i] - workingset[i]);
    }
    std::cout << "diff test3 " << sums[2] << std::endl;

    for(int i = 0; i < 3; ++i){
        sums[3] += abs(test4[i] - workingset[i]);
    }
    std::cout << "diff test4 " << sums[3] << std::endl;

    int smallestIndex = 0;
    int smallestDiff = INT_MAX;
    for(int i = 0; i < 4; i++){
        if(sums[i] < smallestDiff){
            smallestIndex = i;
            smallestDiff = sums[i];
        }
    }
    std::cout << "array with smallest diff: " << smallestIndex << std::endl;
    return 0;
}
I edited out the Microsoft-specific includes and data types.
What's your metric for "shortest span between indexes"? I'm guessing that you're trying to minimize the sum of the absolute values of the differences between the two arrays?
Once you've defined your metric, try something like this pseudocode:
min_metric = MAX_INT
min_array = null
for array in arrays:
    if array.length() == workingset.length():
        metric = calculateMetric(workingset, array)
        if metric < min_metric:
            min_metric = metric
            min_array = array
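A concrete C++ version of that pseudocode might look like this; the helper name calculateMetric and the choice of the sum of absolute differences as the metric are just one possibility, matching the guess above:
#include <array>
#include <cstdlib>
#include <limits>
#include <vector>

// Sum of absolute per-component differences (one possible metric).
int calculateMetric(const std::array<int, 3>& a, const std::array<int, 3>& b) {
    int sum = 0;
    for (int i = 0; i < 3; ++i) sum += std::abs(a[i] - b[i]);
    return sum;
}

// Returns the index of the candidate closest to workingset.
int closestArray(const std::array<int, 3>& workingset,
                 const std::vector<std::array<int, 3>>& candidates) {
    int minMetric = std::numeric_limits<int>::max();
    int minIndex = -1;
    for (size_t i = 0; i < candidates.size(); ++i) {
        const int metric = calculateMetric(workingset, candidates[i]);
        if (metric < minMetric) {
            minMetric = metric;
            minIndex = static_cast<int>(i);
        }
    }
    return minIndex;
}
Calling closestArray({55, 34, 102}, {{54, 36, 99}, {21, 65, 231}, {76, 35, 1}, {133, 22, 3}}) returns 0, i.e. test1.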
Let me guess too. Assuming you are trying to write a color matcher, consider these arrays to be vectors. Then the length of the vector difference between workingset and testX is the metric to use.
Or, in code:
int delta[3] = { 0, 0, 0 };
for (int i = 0; i < 3; ++i) delta[i] = workingset[i] - testX[i];
double metric = 0;
for (int x : delta) metric += x * x;
metric = sqrt(metric); // use this to assess how far the testX color is from the workingset color (the sqrt is optional)

Need floating point precision, GUI uses int

I have a flow layout. Inside it I have about 900 tables, stacked one on top of the other. I have a slider which resizes them and thus causes the flow layout to resize too.
The problem is that the tables should resize linearly. Their base size is 200x200, so when scale = 1.0 the width and height of the tables is 200.
I resize by a fixed step, making them 4% bigger each time, so I would expect them to grow by 8 pixels each time. What happens is that every few resizes the tables grow by 9 pixels instead. I use doubles everywhere. I have tried rounding, floor and ceil, but the problem persists. What can I do so that they always grow by the correct amount?
void LobbyTableManager::changeTableScale( double scale )
{
    setTableScale(scale);
}

void LobbyTableManager::setTableScale( double scale )
{
    scale += 0.3;
    scale *= 2.0;
    std::cout << scale << std::endl;
    agui::Gui* gotGui = getGui();
    float scrollRel = m_vScroll->getRelativeValue();
    setScale(scale);
    rescaleTables();
    resizeFlow();
    ...

double LobbyTableManager::getTableScale() const
{
    return (getInnerWidth() / 700.0) * getScale();
}

void LobbyFilterManager::valueChanged( agui::Slider* source, int val )
{
    if(source == m_magnifySlider)
    {
        DISPATCH_LOBBY_EVENT
        {
            (*it)->changeTableScale((double)val / source->getRange());
        }
    }
}

void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect )
{
    int cx, cy, cw, ch;
    g->getClippingRect(cx,cy,cw,ch);
    g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight());
    float scale = 0.35f;
    int w = m_bgSprite->getWidth() * getTableScale() * scale;
    int h = m_bgSprite->getHeight() * getTableScale() * scale;
    int numX = ceil(absRect.getWidth() / (float)w) + 2;
    int numY = ceil(absRect.getHeight() / (float)h) + 2;
    float offsetX = m_activeTables[0]->getLocation().getX() - w;
    float offsetY = m_activeTables[0]->getLocation().getY() - h;
    int startY = childRect.getY() + 1;
    if(moo)
    {
        std::cout << "TS: " << getTableScale() << " Scr: " << m_vScroll->getValue() << " LOC: " << childRect.getY() << " H: " << h << std::endl;
    }
    if(moo)
    {
        std::cout << "S=" << startY << ",";
    }
    int numAttempts = 0;
    while(startY + h < absRect.getY() && numAttempts < 1000)
    {
        startY += h;
        if(moo)
        {
            std::cout << startY << ",";
        }
        numAttempts++;
    }
    if(moo)
    {
        std::cout << "\n";
        moo = false;
    }
    g->holdDrawing();
    for(int i = 0; i < numX; ++i)
    {
        for(int j = 0; j < numY; ++j)
        {
            g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(),
                absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0);
        }
    }
    g->unholdDrawing();
    g->setClippingRect(cx,cy,cw,ch);
}

void LobbyTable::rescale( double scale )
{
    setScale(scale);
    float os = getObjectScale();
    double x = m_baseHeight * os;
    if((int)(x + 0.5) > (int)x)
    {
        x++;
    }
    int oldH = getHeight();
    setSize(m_baseWidth * os, floor(x));
    ...
I added the related code. The slider sends a value-changed event, which is multiplied to get a 4 percent increase (or 8 percent if the slider moves 2 values, etc.), and then the tables are rescaled with this.
The first 3 resizes increased the table size by 9 pixels; the 4th time it increased by 8 px, even though the scale factor increases by 0.04 each time.
Why is the 4th time inconsistent?
The pattern seems to be 8, 8, 8, 9, 9, 9, 8, 8, 8, 9, 9, 9, ...
It increases by 1 pixel more for a few steps, then by 1 less, then by 1 more again, and so on; that's my issue.
I still don't see the "add 4%" code there (in a form I can understand, anyway), but from your description I think I see the problem: adding 4% twice is not adding 8%. It is adding 8.16% (1.04 * 1.04 == 1.0816). Do that a few more times and you'll start getting 9-pixel jumps. Do it a lot more times and your jumps will get much bigger (they will be 16-pixel jumps when the size gets up to 400x400). Which, IMHO, is how I like my scaling to happen.
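To make that point concrete, here is a tiny standalone sketch (not taken from the original code) that compounds a 4% step multiplicatively on the 200-pixel base size and prints the rounded size: the per-step jump starts at 8 pixels but soon becomes 9, then 10 and more, exactly because 1.04^n grows faster than 1 + 0.04n.
#include <cmath>
#include <cstdio>

int main()
{
    const double base = 200.0;   // base table size from the question
    double scale = 1.0;
    long previous = std::lround(base * scale);
    for (int step = 1; step <= 12; ++step) {
        scale *= 1.04;                                   // compounding 4% per step
        const long size = std::lround(base * scale);
        std::printf("step %2d  scale %.4f  size %3ld  jump %ld\n",
                    step, scale, size, size - previous);
        previous = size;
    }
    return 0;
}
Running this prints jumps of 8, 8, 9, 9, 9, 10, 10, 11, 11, 11, 12, 12 over the first twelve steps.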