Manual cache blocking and Intel Optimization Flags - fortran

I'm trying to test the effectiveness of a manual cache blocking (loop tiling) optimization applied to a Fortran scientific code routine. For tile size selection I used an algorithm based on classical Distinct Lines Estimation. I am using the Intel Fortran Compiler, ifort 13.0.0 (2012).
To observe any execution-time speed-up I have to use the -O2 optimization flag (there IS a 10% speed-up between the -O2 code WITH manual cache blocking and the -O2 code without it). If I compile with -O3 or -O3 -xHost, the execution time remains unimproved (more or less equal to that of the base code without manual cache blocking, compiled with -O3 -xHost).
Notice that vectorization is present only with the -O3 -xHost compiler flags, but even with plain -O3 I can't observe any speed-up. So the question is:
Which optimization(s) are actually interfering with the manual cache blocking that works at -O2?
Here is the Intel HLO (High Level Optimizer) report from an -O3-only compilation of the manually tiled code:
HLO REPORT LOG OPENED ON Mon Mar 5 10:41:19 2018
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;-1:-1;hlo;traadv_fct_mp_tra_adv_fct_;0>
High Level Optimizer Report (traadv_fct_mp_tra_adv_fct_)
Unknown loop at line #346
Perfect Nest of depth 2 at line 226
Perfect Nest of depth 2 at line 232
Perfect Nest of depth 2 at line 251
Perfect Nest of depth 2 at line 251
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 254
Perfect Nest of depth 2 at line 257
Perfect Nest of depth 2 at line 257
Perfect Nest of depth 2 at line 276
Perfect Nest of depth 2 at line 277
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 296
Perfect Nest of depth 2 at line 313
Perfect Nest of depth 2 at line 314
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 325
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 3 at line 361
Perfect Nest of depth 2 at line 361
Adjacent Loops: 3 at line 361
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 3 at line 361
Perfect Nest of depth 2 at line 361
Perfect Nest of depth 2 at line 374
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 377
Perfect Nest of depth 2 at line 378
Perfect Nest of depth 2 at line 378
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 382
Perfect Nest of depth 2 at line 400
Perfect Nest of depth 2 at line 400
Perfect Nest of depth 2 at line 401
Perfect Nest of depth 2 at line 401
Perfect Nest of depth 2 at line 402
Perfect Nest of depth 2 at line 402
Perfect Nest of depth 2 at line 406
Perfect Nest of depth 2 at line 407
Perfect Nest of depth 2 at line 408
Perfect Nest of depth 2 at line 412
Perfect Nest of depth 2 at line 412
Perfect Nest of depth 2 at line 416
Perfect Nest of depth 2 at line 416
Perfect Nest of depth 2 at line 417
QLOOPS 246/246/0 ENODE LOOPS 246 unknown 1 multi_exit_do 0 do 245 linear_do 233 lite_throttled 0
LINEAR HLO EXPRESSIONS: 1900 / 5384 + LINEAR(innermost): 1628 / 5384
------------------------------------------------------------------------------
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;200:200;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 200=9
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;216:216;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 216=4
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 216=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;239:239;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 239=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;267:267;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 267=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;281:281;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 281=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;289:289;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 289=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;301:301;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 301=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;318:318;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 318=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;330:330;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 330=3
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;352:352;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 352=1
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 352=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;361:361;hlo_distribution;in traadv_fct_mp_tra_adv_fct_;0>
LOOP DISTRIBUTION in traadv_fct_mp_tra_adv_fct_ at line 361
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
Estimate of max_trip_count of loop at line 361=12
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;365:365;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 365=1
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;389:389;hlo_scalar_replacement;in traadv_fct_mp_tra_adv_fct_;0>
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 389=1
#of Array Refs Scalar Replaced in traadv_fct_mp_tra_adv_fct_ at line 389=1
Loop dual-path report:
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;179:179;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 179 -- selected for multiversion- Assume shape array stride tests
Loop at 179 -- selected for multiversion- Assume shape array stride tests
Loop at 179 -- selected for multiversion- Assume shape array stride tests
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;184:184;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 184 -- selected for multiversion- Assume shape array stride tests
Loop at 188 -- selected for multiversion- Assume shape array stride tests
</users/home/mc28217/dev_HPC_Gyre_benchmark_test_trunk_2/NEMOGCM/CONFIG/GYRE_BENCHMARK_BLKD/BLD/ppsrc/nemo/traadv_fct.f90;190:190;hlo;traadv_fct_mp_tra_adv_fct_;0>
Loop at 190 -- selected for multiversion- Assume shape array stride tests
Based on these results from the opt-report, I tried completely disabling the scalar replacement optimization, and I removed loop fusion from the various loops with a compiler directive. Despite these attempts, I cannot see any difference.
What could be the interfering optimization introduced by -O3?
Some information: for licensing reasons I cannot post the code. I have thirteen 3D loops and, based on the Distinct Lines Estimation analysis, I tiled the middle loop of every loop nest.
EDIT: This is a loop nest example:
DO jk = 2, jpkm1
   DO jltj = 1, jpj, OBS_UPSTRFLX_TILEY
      DO jj = jltj, MIN(jpj, jltj+OBS_UPSTRFLX_TILEY-1)
         DO ji = 1, jpi
            zfp_wk = pwn(ji,jj,jk) + ABS( pwn(ji,jj,jk) )
            zfm_wk = pwn(ji,jj,jk) - ABS( pwn(ji,jj,jk) )
            zwz(ji,jj,jk) = 0.5 * ( zfp_wk * ptb(ji,jj,jk,jn) + zfm_wk * ptb(ji,jj,jk-1,jn) ) * wmask(ji,jj,jk)
         END DO
      END DO
   END DO
END DO
Other loop nests are more or less the same, with tiling performed on the centermost loop.
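For readers without the Fortran context, the middle-loop tiling pattern above can be sketched in Python (illustrative only: the array names mirror the NEMO code, but the index order, the dropped jn dimension, and the tile size are my own simplifications):

```python
TILE_Y = 4  # stand-in for OBS_UPSTRFLX_TILEY

def upstream_flux_tiled(pwn, ptb, wmask, out):
    nk = len(pwn)        # vertical levels
    nj = len(pwn[0])     # rows (the tiled dimension)
    ni = len(pwn[0][0])  # columns
    for jk in range(1, nk):
        for jlt in range(0, nj, TILE_Y):              # strip-mine the middle loop
            for jj in range(jlt, min(nj, jlt + TILE_Y)):
                for ji in range(ni):
                    zfp = pwn[jk][jj][ji] + abs(pwn[jk][jj][ji])
                    zfm = pwn[jk][jj][ji] - abs(pwn[jk][jj][ji])
                    out[jk][jj][ji] = 0.5 * (zfp * ptb[jk][jj][ji]
                                             + zfm * ptb[jk - 1][jj][ji]) * wmask[jk][jj][ji]
```

The result is bit-identical to the untiled nest; only the traversal order of jj changes, which is what keeps the working set of the inner loops inside the cache.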


How to find which constraint is violated from pyomo's ipopt interface?

I am running an optimization problem using pyomo's ipopt solver. My problem is sort of complicated, and it is declared infeasible by IPOPT. I will not post the entire problem unless needed. But, one thing to note is, I am providing a warm start for the problem, which I thought would help prevent infeasibility from rearing its ugly head.
Here's the output from pyomo and ipopt when I set tee=True inside of the solver:
Ipopt 3.12.4:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 104
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 57
Total number of variables............................: 31
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 29
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 1.00e+01 1.00e+02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
WARNING: Problem in step computation; switching to emergency mode.
1r 0.0000000e+00 1.00e+01 9.99e+02 1.0 0.00e+00 20.0 0.00e+00 0.00e+00R 1
WARNING: Problem in step computation; switching to emergency mode.
Restoration phase is called at point that is almost feasible,
with constraint violation 0.000000e+00. Abort.
Restoration phase in the restoration phase failed.
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 9.9999999999999986e+01 6.0938999999999976e+02
Constraint violation....: 1.0000000000000000e+01 1.0000000000000000e+01
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 9.9999999999999986e+01 6.0938999999999976e+02
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of equality constraint evaluations = 2
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 2
Total CPU secs in IPOPT (w/o function evaluations) = 0.008
Total CPU secs in NLP function evaluations = 0.000
EXIT: Restoration Failed!
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
model, tee=True)
4
/Library/<path to solvers.pyc> in solve(self, *args, **kwds)
616 result,
617 select=self._select_index,
--> 618 default_variable_value=self._default_variable_value)
619 result._smap_id = None
620 result.solution.clear()
/Library/Frameworks<path to>/PyomoModel.pyc in load_from(self, results, allow_consistent_values_for_fixed_vars, comparison_tolerance_for_fixed_vars, ignore_invalid_labels, id, delete_symbol_map, clear, default_variable_value, select, ignore_fixed_vars)
239 else:
240 raise ValueError("Cannot load a SolverResults object "
--> 241 "with bad status: %s" % str(results.solver.status))
242 if clear:
243 #
ValueError: Cannot load a SolverResults object with bad status: error
You can actually see from the log output above that there were only 2 constraint evaluations, from this line:
Number of equality constraint evaluations = 2
So it was declared infeasible quite quickly, and I imagine it won't be difficult to figure out which constraint was violated.
How do I find out which constraint was violated? Or which constraint is making it infeasible?
Here is a different question, but one that still is informative about IPOPT: IPOPT options for reducing constraint violation after fewer iterations
Running Ipopt with option print_level set to 8
gives me output like
DenseVector "modified d_L scaled" with 1 elements:
modified d_L scaled[ 1]= 2.4999999750000001e+01
DenseVector "modified d_U scaled" with 0 elements:
...
DenseVector "curr_c" with 1 elements:
curr_c[ 1]= 7.1997853012817359e-08
DenseVector "curr_d" with 1 elements:
curr_d[ 1]= 2.4999999473733212e+01
DenseVector "curr_d - curr_s" with 1 elements:
curr_d - curr_s[ 1]=-2.8774855209690031e-07
curr_c holds the activities of the equality constraints (seen as c(x)=0 internally by Ipopt); curr_d holds the activities of the inequality constraints (seen as d_L <= d(x) <= d_U internally).
So the absolute values of curr_c are the violations of the equality constraints, and max(d_L-curr_d, curr_d-d_U, 0) are the violations of the inequality constraints.
The last iterate, including constraint activities, is also returned by Ipopt and may be passed back to Pyomo, so you can simply compare these values with the left- and right-hand sides of your constraints.
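The two violation formulas above can be written out as a small sketch for checking Ipopt's curr_c / curr_d vectors by hand (the function names are mine, not Ipopt or Pyomo API):

```python
def eq_violation(curr_c):
    """Equality constraints are c(x) = 0 internally, so |c(x)| is the violation."""
    return abs(curr_c)

def ineq_violation(curr_d, d_L=float("-inf"), d_U=float("inf")):
    """Inequalities are d_L <= d(x) <= d_U: violation is max(d_L-d, d-d_U, 0)."""
    return max(d_L - curr_d, curr_d - d_U, 0.0)
```

Applied to the print_level-8 output above, curr_d[1] = 2.4999999473733212e+01 against the scaled bound d_L = 2.4999999750000001e+01 gives a tiny but nonzero violation of about 2.8e-08, matching the "curr_d - curr_s" line.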

How to detect water level in a transparent container?

I am using the opencv-python library to do liquid level detection. So far I have converted the image to grayscale and, by applying Canny edge detection, identified the container.
import numpy as np
import cv2

img = cv2.imread('botone.jpg')
kernel = np.ones((5,5), np.uint8)
# convert the image to grayscale
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(imgray, 120, 230)
I need to know how to find water level from this stage.
Should I try machine learning, or is there any other option or algorithm available?
I took the approach of finding horizontal lines in the edge-detected image: if a horizontal line crosses a certain threshold, I consider it the level. But the result is not consistent.
Are there any other approaches I can try, or white papers for reference?
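The row-threshold idea described above can be sketched with numpy alone (an illustrative sketch, not the original code: it assumes a binary Canny edge image and a hypothetical acceptance fraction):

```python
import numpy as np

def find_level_row(edges, min_fraction=0.5):
    """Return the row index with the most edge pixels, or None if no row
    spans at least min_fraction of the image width."""
    rows = (edges > 0).sum(axis=1)      # edge-pixel count per row
    best = int(rows.argmax())
    if rows[best] >= min_fraction * edges.shape[1]:
        return best
    return None
```

The inconsistency the question mentions usually comes from the fixed threshold; restricting the search to the container's bounding box and requiring a fraction of the width rather than an absolute count tends to be more robust.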
I don't know how you would do that with numpy and opencv, because I use ImageMagick (which is installed on most Linux distros and is available for OSX and Windows), but the concept should be applicable.
First, I would probably go for a Sobel filter that is rotated to find horizontal edges - i.e. a directional filter.
convert chemistry.jpg -morphology Convolve Sobel:90 sobel.jpg
Then I would probably look at adding in a Hough Transform to find the lines within the horizontal edge-detected image. So, my one-liner looks like this in the Terminal/shell:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 level.jpg
If I add in some debug, you can see the coefficients of the Sobel filter:
convert chemistry.jpg -define showkernel=1 -morphology Convolve Sobel:90 -hough-lines 5x5+30 sobel.jpg
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
If I add in some more debug, you can see the coordinates of the lines detected:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 -write lines.mvg level.jpg
lines.mvg
# Hough line transform: 5x5+30
viewbox 0 0 86 196
line 0,1.52265 86,18.2394 # 30 <-- this is the topmost, somewhat diagonal line
line 0,84.2484 86,82.7472 # 40 <-- this is your actual level
line 0,84.5 86,84.5 # 40 <-- this is also your actual level
line 0,94.5 86,94.5 # 30 <-- this is the line just below the surface
line 0,93.7489 86,95.25 # 30 <-- so is this
line 0,132.379 86,124.854 # 32 <-- this is the red&white valve(?)
line 0,131.021 86,128.018 # 34
line 0,130.255 86,128.754 # 34
line 0,130.5 86,130.5 # 34
line 0,129.754 86,131.256 # 34
line 0,192.265 86,190.764 # 86
line 0,191.5 86,191.5 # 86
line 0,190.764 86,192.265 # 86
line 0,192.5 86,192.5 # 86
As I said in my comments, please also think about maybe lighting your experiment better - either with different coloured lights, more diffuse lights, different direction lights. Also, if your experiment happens over time, you could consider looking at differences between images to see which line is moving...
Here are the lines on top of your original image:

Is there a way I can get a random number from 0 - 1 billion?

I'm not worried about efficiency right now. I'm just looking for a decent way to generate random numbers from 0 through 1 billion. I've tried rand() * rand(), but it gives me mostly numbers greater than about 10 million. I would like the range to be much more evenly spread out. Does anyone have any suggestions?
Sure, just use the modern <random> facilities of C++:
std::random_device rd;
std::mt19937 gen(rd());
std::uniform_int_distribution<> dis(0, 1000000000);
for (int n = 0; n < 10; ++n)
    std::cout << dis(gen) << ' ';
std::cout << '\n';
(from here, slightly modified to do what OP needs) will do what you need.
An analog function for floating point values also exists if needed.
Remark: In the unlikely case that your platform's int cannot hold one billion, or if you need even bigger numbers, you can also use bigger integer types like this:
std::uniform_int_distribution<std::int64_t> dis(0, 1000000000);
Also note that seeding the mt as presented here is not optimal; see my question here for more information.
One billion is just below 2^30. If you can't generate a 30 bit number directly, then generate two 15-bit numbers, shift one left by 15 bits and XOR with the unshifted number to get a 30-bit number.
If the 30-bit result exceeds 1 billion, then throw it away and generate another 30-bit number. 2^30 = 1073741824, so the result will only be too large in about 7% of cases.
RANDOMIZED SERIAL/SEQUENTIAL NUMBERS (UNIQUE & UNPREDICTABLE)
If only the random numbers are allowed to be of unique value.
12345678900 72 12345678901 34. 12345678926 34. 12345678951 24.
12345678976 84. 12345678902 65. 12345678927 63. 12345678952 51.
12345678977 67. 12345678903 09. 12345678928 11. 12345678953 19.
12345678978 53. 12345678904 22. 12345678929 44. 12345678954 78.
12345678979 04. 12345678905 21. 12345678930 85. 12345678955 76.
12345678980 35. 12345678906 37. 12345678931 01. 12345678956 31.
12345678981 73. 12345678907 42. 12345678932 55. 12345678957 12.
12345678982 16. 12345678908 20. 12345678933 95. 12345678958 87.
12345678983 77. 12345678909 71. 12345678934 49. 12345678959 83.
12345678984 13. 12345678910 32. 12345678935 60. 12345678960 50.
12345678985 45. 12345678911 58. 12345678936 86. 12345678961 02.
12345678986 61. 12345678912 66. 12345678937 30. 12345678962 64.
12345678987 23. 12345678913 10. 12345678938 48. 12345678963 94.
12345678988 40. 12345678914 79. 12345678939 89. 12345678964 27.
12345678989 70. 12345678915 93. 12345678940 43. 12345678965 92.
12345678990 08. 12345678916 46. 12345678941 72. 12345678966 03.
12345678991 88. 12345678917 57. 12345678942 14. 12345678967 47.
12345678992 65. 12345678918 52. 12345678943 38 12345678968 62.
12345678993 17. 12345678919 15. 12345678944 75. 12345678969 80.
12345678994 54. 12345678920 41. 12345678945 07. 12345678970 18.
12345678995 28. 12345678921 62. 12345678946 25. 12345678971 58.
12345678996 74. 12345678922 26. 12345678947 69. 12345678972 43.
12345678997 29. 12345678923 91. 12345678948 82. 12345678973 59.
12345678998 33. 12345678924 05. 12345678949 56. 12345678974 81.
12345678999 78. 12345678925 36. 12345678950 68. 12345678975 90.
12345679000 06.
These are 101 unique random numbers.
Each number consists of 13 digits, out of which first 11 digits are sequential numbers and the 12th and 13th digits together form a random number.
These last two digits transform the 11-digit sequential number into a 13-digit random number. When a sequential number is transformed into a random number by appending one or two digits, such randomization does not need a math-based algorithm.
Even if the two digits are created by a math-based algorithm, there are innumerable such algorithms that can create two-digit random numbers.
Hence, my claim is that when 1, 2 or 3 randomly created digits are appended to a sequential number, you award randomness to it, and such randomized sequential numbers are unpredictable.
Thus a SHORTEST POSSIBLE sequence of 11 digits can accommodate one billion unpredictable random numbers, and a sequence of only 14 digits can accommodate one trillion.

How to detect an inclination of 90 degrees or 180?

In my project I deal with images that may or may not be inclined.
I work with C++ and OpenCV. I tried the Hough transform to determine the angle of inclination (whether it is 90 or 180 degrees), but it doesn't give a result.
A link to example image (full resolution TIFF) here.
The following illustration is the full-res image scaled down and converted to PNG:
If I want to attack your image with the Hough lines method, I would do a Canny edge detection first, then find the Hough lines and then look at the generated lines. So it would look like this in ImageMagick - you can transform to OpenCV:
convert input.jpg \
\( +clone -canny x10+10%+30% \
-background none -fill red \
-stroke red -strokewidth 2 \
-hough-lines 9x9+150 \
-write lines.mvg \
\) \
-composite hough.png
And in the lines.mvg file, I can see the individual detected lines:
# Hough line transform: 9x9+150
viewbox 0 0 349 500
line 0,-3.74454 349,8.44281 # 160
line 0,55.2914 349,67.4788 # 206
line 1,0 1,500 # 193
line 0,71.3012 349,83.4885 # 169
line 0,125.334 349,137.521 # 202
line 0,142.344 349,154.532 # 156
line 0,152.351 349,164.538 # 155
line 0,205.383 349,217.57 # 162
line 0,239.453 349,245.545 # 172
line 0,252.455 349,258.547 # 152
line 0,293.461 349,299.553 # 163
line 0,314.464 349,320.556 # 169
line 0,335.468 349,341.559 # 189
line 0,351.47 349,357.562 # 196
line 0,404.478 349,410.57 # 209
line 349.39,0 340.662,500 # 187
line 0,441.484 349,447.576 # 198
line 0,446.484 349,452.576 # 165
line 0,455.486 349,461.578 # 174
line 0,475.489 349,481.581 # 193
line 0,498.5 349,498.5 # 161
I resized your image to 349 pixels wide (to make it fit on Stack Overflow and process faster), so you can see there are lots of lines that start at 0 on the left side of the image and end at 349 on the right side, which tells you they run across the image, not up and down it. Also, you can see that the right end of the lines is generally 16 pixels lower than the left, so the image is rotated by arctan(16/349) degrees.
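The rotation estimate quoted above, computed explicitly from the detected line endpoints (a 16-pixel drop over the 349-pixel width):

```python
import math

# right end is ~16 px lower than the left over a 349 px span
angle_deg = math.degrees(math.atan2(16, 349))
# roughly 2.6 degrees
```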
Here is a fairly simple approach that may help you get started, or give you ideas that you can adapt. I use ImageMagick, but the concepts and techniques should be readily applicable in OpenCV.
First, I note that the image is rotated a few degrees and that gives the black triangle at top right, so the first thing I would consider is cropping the middle out of the image - i.e. removing around 10-15% off each side.
The next thing I note is that the image is poorly scanned, with lots of noisy, muddy grey areas. I would tend to blur these together so that they become a bit more uniform and can be thresholded.
So, if I want to do those two things in ImageMagick, I would do this:
convert input.tif \
-gravity center -crop 75x75%+0+0 \
-blur x10 -threshold 50% \
-negate \
stage1.jpg
Now, I can count the number of horizontal black lines that run the full width of the image (without crossing anything white). I do this by squidging the image till it is just a single pixel wide (but still the full original height) and counting the number of black rows:
convert stage1.jpg -resize 1x! -threshold 1 txt: | grep -c black
1368
And I do the same for vertical black lines that run the full height of the image from top to bottom, uninterrupted by white. I do that by squidging the image till it is a single pixel tall and the full original width:
convert stage1.jpg -resize x1! -threshold 1 txt: | grep -c black
0
Therefore there are 1,368 lines across the image and none up and down it, so I can say the dark lines in the original image tend to run left-right across the image rather than top-bottom up and down the image.
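The row/column counting step above can be sketched with numpy (an illustrative equivalent of the squidge-and-count, assuming a binary image where black is 0: a line "runs the full width" if its row contains no white pixels):

```python
import numpy as np

def full_black_rows(binary):
    """Count rows that are black across the entire width."""
    return int((binary == 0).all(axis=1).sum())

def full_black_cols(binary):
    """Count columns that are black across the entire height."""
    return int((binary == 0).all(axis=0).sum())
```

Comparing the two counts then tells you whether the dominant lines run left-right or top-bottom, which is the 90-vs-180 orientation question.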

C++ OpenGL Wrong Collada Texture Coordinates

I am parsing a Collada file for animations. I have the model drawn and animated fine, but the issue now is how to set up the texture coordinates. I feed OpenGL exactly what the Collada .dae file gives me, but it's mapped completely wrong. The coordinates range from 0 to 1.
Do I have to rearrange them?
If so, please explain how to go about it. I tried using GL_LINEAR and GL_NEAREST, but that doesn't solve the problem. Any ideas why?
The models I am using are the AstroBoy from http://www.wazim.com/Collada_Tutorial_1.htm and the Amnesia Servant Grunt.
Based on how you said it turns out mapped completely wrong, I'm guessing you haven't taken into account the texture index values. I had a similar problem as well (although with a different model). Just as you can have an array of index values so that OpenGL knows in which order to draw the vertices, so too does Collada assign UV index values (and normal index values), and, annoyingly, they are never in the same order. Take the following Collada sample, for instance:
<source id="Box001-POSITION">
<float_array id="Box001-POSITION-array" count="1008">
-167.172180 -193.451920 11.675772
167.172180 -193.451920 11.675772 .....
....
....
<source id="Box001-Normal0">
<float_array id="Box001-Normal0-array" count="5976">
-0.000000 -0.025202 -0.999682
-0.000000 -0.025202 -0.999682 .....
....
....
<source id="Box001-UV0">
<float_array id="Box001-UV0-array" count="696">
0.000000 0.000000
1.000000 0.000000
0.000000 1.000000 .....
....
....
<triangles count="664" material="_13 - Default">
<input semantic="VERTEX" offset="0" source="#Box001-POSITION"/>
<input semantic="NORMAL" offset="1" source="#Box001-Normal0"/>
<input semantic="TEXCOORD" offset="2" set="0" source="#Box001-UV0"/>
<p> 169 0 171 170 1 172 171 2 173 171 3
173 168 4 170 169 5 171 173 6 175 174
7 176 175 8 177 175 9 177 172 10 174 173 11 175 108 ....
The first three sections give the values of the vertices/normals/texture coords, but the final section gives the index of each value. Notice how the first vertex index is 169, but the first normal index is 0. In fact, the normal indices are perfectly sequential, progressing as "0..1..2..3", but the indices for the vertices and textures are all over the place! You have to order your vertex and texture values in the way the Collada file specifies.
The other way is to write a little program that parses the Collada file and rearranges all your vertex, normal and UV values into the right order based on the index values. Then you can feed your points straight into OpenGL, no questions asked. It's up to you, of course, which way you want to handle it.
(PS: If you can make a good parser for Collada files, then the 'interleaved-indexing' is actually quite handy, if not though, I find it an over-complication on Collada's part, but you can't really do anything about it.)
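The rearranging step described above can be sketched as a small de-interleaver: walk the &lt;p&gt; stream in (vertex, normal, uv) index triples, matching the three offsets in the &lt;input&gt; declarations, and emit flat arrays in draw order (names are mine, and real Collada parsing would read the offsets rather than assume 0/1/2):

```python
def deinterleave(p, positions, normals, uvs):
    """Expand interleaved Collada <p> indices into draw-order attribute lists."""
    verts, norms, coords = [], [], []
    for i in range(0, len(p), 3):
        vi, ni, ti = p[i], p[i + 1], p[i + 2]   # VERTEX, NORMAL, TEXCOORD offsets
        verts.append(positions[vi])
        norms.append(normals[ni])
        coords.append(uvs[ti])
    return verts, norms, coords
```

After this expansion every attribute shares one index sequence (0, 1, 2, ...), which is exactly what glDrawArrays, or glDrawElements with a trivial index buffer, expects.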
I advise you to read up on some Collada basics.
<triangles count="664" material="_13 - Default">
<input semantic="VERTEX" offset="0" source="#Box001-POSITION"/>
<input semantic="NORMAL" offset="1" source="#Box001-Normal0"/>
<input semantic="TEXCOORD" offset="2" set="0" source="#Box001-UV0"/>
<p> 169 0 171 170 1 172 171 2 173 171 3......
Here 169 is the first vertex index of the triangles, 0 is the first normal index, 171 is the first texcoord index, and so on.