C++ OpenGL Wrong Collada Texture Coordinates

I am parsing a Collada file for animations. I have the model drawn and animated fine, but the issue now is how to set up the texture coordinates. I feed them to OpenGL exactly as the Collada .dae file gives them to me, but the mapping is completely wrong. The coordinates range from [0, 1].
Do I have to rearrange them?
If I do, please explain how to go about it. I tried using GL_LINEAR and GL_NEAREST but that doesn't solve the problem. Any ideas why?
The models I am using are the AstroBoy from http://www.wazim.com/Collada_Tutorial_1.htm and the Amnesia Servant Grunt.

Based on how you said it turns out to be mapped completely wrong, I'm guessing you haven't taken into account the texture index values. I had a similar problem as well (although with a different model). Just as you can have an array of index values so that OpenGL knows which order to draw the vertices in, so too does Collada assign UV index values (and normal index values), and, annoyingly, they are never in the same order. Take the following Collada sample, for instance:
<source id="Box001-POSITION">
<float_array id="Box001-POSITION-array" count="1008">
-167.172180 -193.451920 11.675772
167.172180 -193.451920 11.675772 .....
....
....
<source id="Box001-Normal0">
<float_array id="Box001-Normal0-array" count="5976">
-0.000000 -0.025202 -0.999682
-0.000000 -0.025202 -0.999682 .....
....
....
<source id="Box001-UV0">
<float_array id="Box001-UV0-array" count="696">
0.000000 0.000000
1.000000 0.000000
0.000000 1.000000 .....
....
....
<triangles count="664" material="_13 - Default">
<input semantic="VERTEX" offset="0" source="#Box001-POSITION"/>
<input semantic="NORMAL" offset="1" source="#Box001-Normal0"/>
<input semantic="TEXCOORD" offset="2" set="0" source="#Box001-UV0"/>
<p> 169 0 171 170 1 172 171 2 173 171 3
173 168 4 170 169 5 171 173 6 175 174
7 176 175 8 177 175 9 177 172 10 174 173 11 175 108 ....
The first three sections give the values of the vertices/normals/texture-coords, but the final section gives the index of each value. Notice how the first vertex index is 169, but the first normal index is 0. In fact, the normal indices are perfectly regular, progressing as 0..1..2..3, but the indices for the vertices and texture coordinates are all over the place! You have to order your vertex and texture values in the way the Collada file specifies.
The other way is to write a little program that parses the Collada file and rearranges all your vertex, normal and UV values into the right order based on the index values. Then you can just feed your points straight into OpenGL, no questions asked. It's up to you, of course, which way you want to handle it.
(PS: If you can write a good parser for Collada files, then the 'interleaved indexing' is actually quite handy; if not, I find it an over-complication on Collada's part, but you can't really do anything about it.)
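To make the second option concrete, here is a minimal C++ sketch of that rearranging step (the container and struct names are mine, not from any particular loader), assuming three inputs per vertex with offsets 0, 1 and 2 as in the sample above:
// A minimal de-indexing sketch (hypothetical names). Assumes the raw
// <float_array> data is already parsed into `positions`, `normals` and `uvs`,
// and the <p> index stream into `p`, with three <input> elements per vertex.
#include <cstddef>
#include <vector>

struct Vertex {
    float px, py, pz;   // position
    float nx, ny, nz;   // normal
    float u, v;         // texture coordinate
};

std::vector<Vertex> deindex(const std::vector<float>& positions, // x y z per entry
                            const std::vector<float>& normals,   // x y z per entry
                            const std::vector<float>& uvs,       // u v per entry
                            const std::vector<int>& p)           // the <p> index stream
{
    const std::size_t stride = 3;          // number of <input> elements (VERTEX, NORMAL, TEXCOORD)
    std::vector<Vertex> out;
    out.reserve(p.size() / stride);

    for (std::size_t i = 0; i + stride <= p.size(); i += stride) {
        const int vi = p[i + 0];           // offset 0: position index
        const int ni = p[i + 1];           // offset 1: normal index
        const int ti = p[i + 2];           // offset 2: texcoord index
        Vertex vert;
        vert.px = positions[3 * vi + 0];
        vert.py = positions[3 * vi + 1];
        vert.pz = positions[3 * vi + 2];
        vert.nx = normals[3 * ni + 0];
        vert.ny = normals[3 * ni + 1];
        vert.nz = normals[3 * ni + 2];
        vert.u  = uvs[2 * ti + 0];
        vert.v  = uvs[2 * ti + 1];
        out.push_back(vert);
    }
    return out;
}
With the data flattened like this you can upload one interleaved buffer and draw it with glDrawArrays, with no index buffer needed.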

No, I'd advise you to first read up on some Collada basics.
<triangles count="664" material="_13 - Default">
<input semantic="VERTEX" offset="0" source="#Box001-POSITION"/>
<input semantic="NORMAL" offset="1" source="#Box001-Normal0"/>
<input semantic="TEXCOORD" offset="2" set="0" source="#Box001-UV0"/>
<p> 169 0 171 170 1 172 171 2 173 171 3......
Here 169 is the first position index of a triangle, 0 is the first normal index, 171 is the first texcoord index, and so on.

Related

Manual cache blocking and Intel Optimization Flags

I'm trying to test the effectiveness of a manual cache blocking (loop tiling) optimization applied to a Fortran scientific code routine. For tile size selection, I used an algorithm based on classical Distinct Lines Estimation. I am using the Intel Fortran Compiler, ifort 13.0.0 (2012).
To observe any execution-time speed-up, I have to use the -O2 optimization flag (there IS a 10% speed-up between -O2 code WITH manual cache blocking and -O2 code without it). If I set -O3, or -O3 -xHost, then the execution time remains unimproved (more or less equal to the execution time of the base code without manual cache blocking, compiled with -O3 -xHost).
Note that vectorization is present only with -O3 -xHost, but even with plain -O3 I can't observe any speed-up. So the question is:
What optimization(s) are actually interfering with the manual cache blocking that works at -O2?
Here is the Intel HLO (High Level Optimizer) report from an -O3-only compilation of the manually tiled code:
HLO REPORT LOG OPENED ON Mon Mar 5 10:41:19 2018
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;-1:-1;hlo;traadv_fct_mp_tra_adv_fct_;0>
High Level Optimizer Report (traadv_fct_mp_tra_adv_fct_)
Unknown loop at line #346
Perfect Nest of depth 2 at line 226
Perfect Nest of depth 2 at line 232
Perfect Nest of depth 2 at line 251
Perfect Nest of depth 2 at line 251
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 257
Perfect Nest of depth 2 at line 257
Perfect Nest of depth 2 at line 276
Perfect Nest of depth 2 at line 277
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 313
Perfect Nest of depth 2 at line 314
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 3 at line 361
Perfect Nest of depth 2 at line 361
Adjacent Loops: 3 at line 361
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 3 at line 361
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 2 at line 374
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 378
Perfect Nest of depth 2 at line 378
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 400
Perfect Nest of depth 2 at line 400
Perfect Nest of depth 2 at line 401
Perfect Nest of depth 2 at line 401
Perfect Nest of depth 2 at line 402
Perfect Nest of depth 2 at line 402
Perfect Nest of depth 2 at line 406
Perfect Nest of depth 2 at line 407
Perfect Nest of depth 2 at line 408
Perfect Nest of depth 2 at line 412
Perfect Nest of depth 2 at line 412
Perfect Nest of depth 2 at line 416
Perfect Nest of depth 2 at line 416
Perfect Nest of depth 2 at line 417
QLOOPS 246/246/0 ENODE LOOPS 246 unknown 1 multi_exit_do 0 do 245 linear_do 233 lite_throttled 0
LINEAR HLO EXPRESSIONS: 1900 / 5384 + LINEAR(innermost): 1628 / 5384
------------------------------------------------------------------------------
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;200:200;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 200=9
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;216:216;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 216=4
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 216=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;239:239;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 239=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;267:267;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 267=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;281:281;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 281=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;289:289;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 289=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;301:301;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 301=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;318:318;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 318=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;330:330;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 330=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;352:352;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 352=1
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 352=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;361:361;hlo_distribution;in traadv_fct_mp_tra_adv_fct_;0>
LOOP DISTRIBUTION in traadv_fct_mp_tra_adv_fct_ at line 361
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;365:365;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 365=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;389:389;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 389=1
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 389=1
Loop dual-path report:
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;179:179;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 179 -- selected for multiversion- Assume shape array stride tests
Loop at 179 -- selected for multiversion- Assume shape array stride tests
Loop at 179 -- selected for multiversion- Assume shape array stride tests
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;184:184;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 184 -- selected for multiversion- Assume shape array stride tests
Loop at 188 -- selected for multiversion- Assume shape array stride tests
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;190:190;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 190 -- selected for multiversion- Assume shape array stride tests
Based on these results from the opt-report, I tried completely disabling the scalar replacement optimization, and I removed loop fusion from the various loops with a compiler directive. Despite these attempts, I cannot see any difference.
What could be the interfering optimization introduced by -O3?
Some information: for license reasons I cannot post the code. I have thirteen 3D loop nests, and based on the Distinct Lines Estimation analysis, I tiled the middle loop of every loop nest.
EDIT: This is a loop nest example:
DO jk = 2, jpkm1
   DO jltj = 1, jpj, OBS_UPSTRFLX_TILEY
      DO jj = jltj, MIN(jpj, jltj+OBS_UPSTRFLX_TILEY-1)
         DO ji = 1, jpi
            zfp_wk = pwn(ji,jj,jk) + ABS( pwn(ji,jj,jk) )
            zfm_wk = pwn(ji,jj,jk) - ABS( pwn(ji,jj,jk) )
            zwz(ji,jj,jk) = 0.5 * ( zfp_wk * ptb(ji,jj,jk,jn) + zfm_wk * ptb(ji,jj,jk-1,jn) ) * wmask(ji,jj,jk)
         END DO
      END DO
   END DO
END DO
Other loop nests are more or less the same, with tiling performed on the centermost loop.
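For readers more at home in C or C++, here is the same blocking pattern sketched in C++. The array layout, names, extents and the tile size are placeholders of mine, not the poster's actual NEMO code; only the middle (j) loop is blocked, and the innermost (i) loop is left contiguous so it can still vectorize.
// Illustrative C++ analogue of the tiled Fortran nest above (hypothetical names).
#include <algorithm>
#include <cmath>
#include <vector>

void upstream_flux(int ni, int nj, int nk,
                   const std::vector<float>& w,     // ni*nj*nk, vertical velocity
                   const std::vector<float>& t,     // ni*nj*nk, tracer field
                   const std::vector<float>& mask,  // ni*nj*nk
                   std::vector<float>& z)           // ni*nj*nk, output flux
{
    const int TILE_J = 16;                          // analogue of OBS_UPSTRFLX_TILEY
    auto idx = [=](int i, int j, int k) { return (k * nj + j) * ni + i; };

    for (int k = 1; k < nk - 1; ++k)
        for (int jt = 0; jt < nj; jt += TILE_J)                 // tile loop over j
            for (int j = jt; j < std::min(nj, jt + TILE_J); ++j)
                for (int i = 0; i < ni; ++i) {
                    const float fp = w[idx(i,j,k)] + std::fabs(w[idx(i,j,k)]);
                    const float fm = w[idx(i,j,k)] - std::fabs(w[idx(i,j,k)]);
                    z[idx(i,j,k)] = 0.5f * (fp * t[idx(i,j,k)]
                                          + fm * t[idx(i,j,k-1)]) * mask[idx(i,j,k)];
                }
}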

Forvalues dropping leading 0's, how to fix?

I am attempting to create a loop to save me having to type out the code many times. Essentially, I have 60 csv files that I need to alter and save. My code looks as follows:
forvalues i = 0203 0206 : 1112 {
    cd "C:\Users\User\Desktop\Data\"
    import delimited `i'.csv, varnames(1)
    gen time=`i'
    keep rssd9017 rssd9010 bhck4074 bhck4079 bhck4093 bhck2170 time
    save `i'.dta, replace
}
However, I am getting the error "203.csv" does not exist. It seems to be dropping the leading 0; is there any way to fix this?
You are asking for a numlist, but in this context 0203, with nothing else said, just looks to Stata like a quirky but acceptable way to write 203: hence your problem.
But do you really have a numlist that is 0203 0206 : 1112?
Try it:
numlist "0203 0206 : 1112"
ret li
The list starts 203 206 209 212 215 218 221 224 227 230 233 236 ...
My wild guess is that you have files, one for each quarter over a period, labelled 0203 for March 2002 through to 1112 for December 2011. In fact you do say that you have times, even though my guess implies 40 files, not 60. If so, that means you won't have a file that is labelled 0215, so this is the wrong way to think in any case.
Here is a better approach. First take the cd out of the loop: you need only do that once!
cd "C:\Users\User\Desktop\Data"
Now find the files that are ????.csv. You need only install fs once.
ssc inst fs
fs ????.csv
foreach f in `r(files)' {
    import delimited `f', varnames(1)
    local stem = substr("`f'", 1, 4)
    gen time = "`stem'"
    keep rssd9017 rssd9010 bhck4074 bhck4079 bhck4093 bhck2170 time
    save `stem'.dta, replace
}
On my guess, you still need to fix the time to something civilised and you would be better off appending the files, but one problem at a time.
Note that the issue of insisting on leading zeros, which you think is the problem here but is probably a red herring, is written up here.

How to detect an inclination of 90 degrees or 180?

In my project I deal with images that may or may not be inclined.
I work with C++ and OpenCV. I tried the Hough transform to determine the angle of inclination, whether it is 90 or 180 degrees, but it doesn't give a result.
A link to example image (full resolution TIFF) here.
The following illustration is the full-res image scaled down and converted to PNG:
If I wanted to attack your image with the Hough lines method, I would do a Canny edge detection first, then find the Hough lines and then look at the generated lines. It would look like this in ImageMagick; you can translate it to OpenCV:
convert input.jpg \
\( +clone -canny x10+10%+30% \
-background none -fill red \
-stroke red -strokewidth 2 \
-hough-lines 9x9+150 \
-write lines.mvg \
\) \
-composite hough.png
And in the lines.mvg file, I can see the individual detected lines:
# Hough line transform: 9x9+150
viewbox 0 0 349 500
line 0,-3.74454 349,8.44281 # 160
line 0,55.2914 349,67.4788 # 206
line 1,0 1,500 # 193
line 0,71.3012 349,83.4885 # 169
line 0,125.334 349,137.521 # 202
line 0,142.344 349,154.532 # 156
line 0,152.351 349,164.538 # 155
line 0,205.383 349,217.57 # 162
line 0,239.453 349,245.545 # 172
line 0,252.455 349,258.547 # 152
line 0,293.461 349,299.553 # 163
line 0,314.464 349,320.556 # 169
line 0,335.468 349,341.559 # 189
line 0,351.47 349,357.562 # 196
line 0,404.478 349,410.57 # 209
line 349.39,0 340.662,500 # 187
line 0,441.484 349,447.576 # 198
line 0,446.484 349,452.576 # 165
line 0,455.486 349,461.578 # 174
line 0,475.489 349,481.581 # 193
line 0,498.5 349,498.5 # 161
I resized your image to 349 pixels wide (to make it fit on Stack Overflow and process faster), so you can see there are lots of lines that start at 0 on the left side of the image and end at 349 on the right side, which tells you they run across the image, not up and down it. Also, you can see that the right end of the lines is generally about 16 pixels lower than the left, so the image is rotated by arctan(16/349), i.e. roughly 2.6 degrees.
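Since the question is about C++ and OpenCV, a rough, untested translation of the pipeline above might look like the following; the file name, Canny thresholds and Hough parameters are placeholders you would need to tune for your scan:
// Sketch: Canny + probabilistic Hough transform, then average the angle of
// the near-horizontal lines to estimate the skew of the page.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("input.tif", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                 // rough analogue of -canny

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180,    // rho / theta resolution
                    150,                             // accumulator threshold (like +150)
                    gray.cols / 3, 20);              // min length, max gap: tune these

    double sum = 0; int n = 0;
    for (const cv::Vec4i& l : lines) {
        double ang = std::atan2(double(l[3] - l[1]), double(l[2] - l[0])) * 180.0 / CV_PI;
        if (std::abs(ang) < 45) { sum += ang; ++n; } // keep the left-right lines only
    }
    if (n) std::cout << "estimated skew: " << sum / n << " degrees\n";
}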
Here is a fairly simple approach that may help you get started, or give you ideas that you can adapt. I use ImageMagick, but the concepts and techniques should be readily applicable in OpenCV.
First, I note that the image is rotated a few degrees and that gives the black triangle at top right, so the first thing I would consider is cropping the middle out of the image - i.e. removing around 10-15% off each side.
The next thing I note is that the image is poorly scanned, with lots of noisy, muddy grey areas. I would tend to blur these together so that they become a bit more uniform and can be thresholded.
So, if I want to do those two things in ImageMagick, I would do this:
convert input.tif \
-gravity center -crop 75x75%+0+0 \
-blur x10 -threshold 50% \
-negate \
stage1.jpg
Now, I can count the number of horizontal black lines that run the full width of the image (without crossing anything white). I do this by squidging the image till it is just a single pixel wide (but still the full original height) and counting the number of black rows:
convert stage1.jpg -resize 1x! -threshold 1 txt: | grep -c black
1368
And I do the same for vertical black lines that run the full height of the image from top to bottom, uninterrupted by white. I do that by squidging the image till it is a single pixel tall and the full original width:
convert stage1.jpg -resize x1! -threshold 1 txt: | grep -c black
0
Therefore there are 1,368 lines across the image and none up and down it, so I can say the dark lines in the original image tend to run left-right across the image rather than top-bottom up and down the image.
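If you want to reproduce the same squash-and-count trick in OpenCV rather than ImageMagick, a sketch could look like this (again, the file name and the darkness threshold are assumptions):
// Squash the thresholded image to one column (or one row) and count the
// lines that averaged to near-black, i.e. ran the full width (or height).
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat bin = cv::imread("stage1.jpg", cv::IMREAD_GRAYSCALE);
    if (bin.empty()) return 1;

    cv::Mat col, row;
    cv::resize(bin, col, cv::Size(1, bin.rows), 0, 0, cv::INTER_AREA); // 1 pixel wide
    cv::resize(bin, row, cv::Size(bin.cols, 1), 0, 0, cv::INTER_AREA); // 1 pixel tall

    // A near-black pixel in the squashed column means a dark line ran the
    // whole width of the image without being broken by white, and vice versa.
    const int horizontal = cv::countNonZero(col < 10);
    const int vertical   = cv::countNonZero(row < 10);

    std::cout << "full-width dark rows:     " << horizontal << "\n"
              << "full-height dark columns: " << vertical   << "\n";
}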

Send output from iPython console to .csv file. (& viewing data issue)

Using the iPython console, I built a pandas dataframe called df.
for (k1,k2), group in df.groupby(['II','time']):
    print k1,k2
    print group
df['II'] stores integers in the range [-10, 10].
'time' can be either 930 or 1620
My goal is to save the output of this loop to a single .csv file. (Not ideal, but I copied and pasted the output into a csv.) However, in doing so, I noticed that the "II" == -1 groups, at both times 930 and 1620, do not appear in full data view like the others. (They both exist, though.)
For example, the "II" == -1 group at 930 appears in the console as:
-1 930
<class 'pandas.core.frame.DataFrame'>
Int64Index: 268 entries, 2 to 2140
Data columns:
index 268 non-null values
date 268 non-null values
time 268 non-null values
price 268 non-null values
round5 268 non-null values
II 268 non-null values
Pattern 268 non-null values
pl 268 non-null values
dtypes: float64(2), int64(4), object(2)
With the knowledge that it exists, I tried brute force, pulling them manually:
u=df['II']== -1
one=df.groupby('time')[u]
#To check the result:
one.to_csv('file.csv')
I'm grouping by 'time', so all times should appear. Yet the resulting csv only contains the 1620 times; all results at 930 are, unfortunately, missing in action. It's bizarre. Your suggestions are greatly appreciated.

Loading Collada animation joints?

I'm having trouble loading joint data from the 'animation' node of a Collada file.
First, I try to load the joints from 'library_visual_scenes'.
The first 2 joints look like this:
<visual_scene id="" name="">
<node name="joint1" id="joint1" sid="joint1" type="JOINT">
<translate sid="translate">0.000000 -2.000000 0.000000</translate>
<rotate sid="jointOrientZ">0 0 1 90.000000</rotate>
<rotate sid="rotateZ">0 0 1 0.000000</rotate>
<rotate sid="rotateY">0 1 0 0.000000</rotate>
<rotate sid="rotateX">1 0 0 0.000000</rotate>
<scale sid="scale">1.000000 1.000000 1.000000</scale>
<extra>
<node name="joint2" id="joint2" sid="joint2" type="JOINT">
<translate sid="translate">2.000000 0.000000 0.000000</translate>
<rotate sid="rotateZ">0 0 1 0.000000</rotate>
<rotate sid="rotateY">0 1 0 0.000000</rotate>
<rotate sid="rotateX">1 0 0 0.000000</rotate>
<scale sid="scale">1.000000 1.000000 1.000000</scale>
<extra>
which went well!
Maya joints vs. my joints: I would like to post a picture but, as a new member, I'm not allowed. You'll have to trust me on this: in my engine, the joints are in the same place as in Maya.
Then I try to load the joints from the 'animation' node. Here is the problem: I can't find any jointOrient.
<animation id="joint1-anim" name="joint1">
<animation>
<source id="joint1-translate.Y-output">
<float_array id="joint1-translate.Y-output-array" count="2">-2.000000 -2.000000</float_array>
<animation>
<source id="joint1-rotateZ.ANGLE-output">
<float_array id="joint1-rotateZ.ANGLE-output-array" count="2">0.000000 0.000000</float_array>
<animation id="joint2-anim" name="joint2">
<animation>
<source id="joint2-translate.X-output">
<float_array id="joint2-translate.X-output-array" count="2">2.000000 2.000000</float_array>
So after loading the animation data, the joints look like this: (picture omitted)
Could anybody here help?
Thanks.
(Sorry, as I don't have more than 10 reputation, I'm not allowed to post pictures.)
I finally figured out the answer, for those who might be interested.
The visual_scene node of the Collada file gives you the bind pose of your joints.
So I'm going to load the visual_scene joint coordinates into a structure, something like this:
struct Pose
{
    vec3 translation;
    vec3 orientation;
    vec3 rotation;
    vec3 scale;
};
Pose bind_pose;
Then I'm going to create another instance of the "Pose" struct, using a constructor which takes a Pose as parameter:
Pose anim_pose(bind_pose);
So after construction, the bind_pose loaded from visual_scene and anim_pose are identical.
Then I'm going to iterate through all the animation nodes in library_animations, find the channel, and look at two things:
the source data, which tells you where to find the joint animation values ("n" floats for "n" animations), and
the target joint.
<channel source="#joint1-translate.X" target="joint1/translate.X"></channel>
This tells us (and that's where I was a little lost) that we are going to REPLACE the targeted value with the source value.
If the source data found in the channel node is the same as the target data, i.e. if:
bind_pose.translation.x has -3.0 as its value after loading the visual_scene data, and
<source id="joint1-translate.X-output">
<float_array id="joint1-translate.X-output-array" count="1">-3.000000</float_array>
then I do nothing.
If the source data is different from the target data, I simply replace the value in anim_pose with the new one.
And that's pretty much all you have to do to properly load animated joints from Collada.
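To illustrate that replacement step, here is a minimal C++ sketch; the struct, map and function names are mine and not the asker's engine code:
// A minimal sketch of the "replace the targeted value" step (hypothetical
// names). `animPoses` is assumed to be pre-filled with one copy of the bind
// pose per joint, and `target` is assumed to be well formed,
// e.g. "joint1/translate.X".
#include <map>
#include <string>

struct vec3 { float x, y, z; };

struct Pose {
    vec3 translation, orientation, rotation, scale;
};

void applyChannel(std::map<std::string, Pose>& animPoses,
                  const std::string& target, float value)
{
    const auto slash = target.find('/');
    const std::string joint = target.substr(0, slash);   // e.g. "joint1"
    const std::string what  = target.substr(slash + 1);  // e.g. "translate.X"

    Pose& pose = animPoses[joint];
    if      (what == "translate.X") pose.translation.x = value;
    else if (what == "translate.Y") pose.translation.y = value;
    else if (what == "translate.Z") pose.translation.z = value;
    // ...same idea for the rotate and scale channels
}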
If you see anything wrong here, please tell me.
Hope this will help.