df.ix gets NaN as subset - python-2.7

I have a dataframe like below [72 rows x 25 columns]:
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
0 Dif0 BP100_Fast 99.9843 0.492 0 0
1 Dif0 BP100_Slow 100.011 0.493 0 0
2 Dif0 100HiBW_Fast 100.006 0.503 0 0
3 Dif0 100HiBW_Slow 100.007 0.504 0 0
4 Dif0 100LoBW_Fast 100.005 0.503 0 0
5 Dif0 100LoBW_Slow 99.9951 0.504 0 0
8 Dif1 BP100_Fast 99.9928 0.492 7 10
9 Dif1 BP100_Slow 99.9962 0.492 11 12
10 Dif1 100HiBW_Fast 100.014 0.502 10 11
11 Dif1 100HiBW_Slow 100.006 0.503 6 13
12 Dif1 100LoBW_Fast 99.9965 0.502 5 10
13 Dif1 100LoBW_Slow 99.9946 0.503 12 14
16 Dif2 BP100_Fast 99.9929 0.493 2 6
17 Dif2 BP100_Slow 99.997 0.493 8 13
18 Dif2 100HiBW_Fast 100.002 0.504 4 9
19 Dif2 100HiBW_Slow 99.9964 0.504 13 17
20 Dif2 100LoBW_Fast 100.021 0.504 8 9
I am only interested in the rows whose CPULabel is BP100_Fast, 100LoBW_Fast or 100HiBW_Fast, so I used the commands below:
excel = pd.read_excel('25C_3.3V.xlsx', skiprows=1)
excel.fillna(value=0, inplace=True)
general = excel[excel['Pin'] != 'Clkin']
general.drop_duplicates(keep=False, inplace=True)
slew = general[(general['CPULabel']=='BP100_Fast') | (general['CPULabel']=='100LoBW_Fast') | (general['CPULabel']=='100HiBW_Fast')]
I am able to get what I want [36 rows x 25 columns]:
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
0 Dif0 BP100_Fast 99.9843 0.492 0 0
2 Dif0 100HiBW_Fast 100.006 0.503 0 0
4 Dif0 100LoBW_Fast 100.005 0.503 0 0
8 Dif1 BP100_Fast 99.9928 0.492 7 10
10 Dif1 100HiBW_Fast 100.014 0.502 10 11
12 Dif1 100LoBW_Fast 99.9965 0.502 5 10
16 Dif2 BP100_Fast 99.9929 0.493 2 6
18 Dif2 100HiBW_Fast 100.002 0.504 4 9
20 Dif2 100LoBW_Fast 100.021 0.504 8 9
However, if I changed the last command to:
slew = general.ix[['BP100_Fast', '100LoBW_Fast', '100HiBW_Fast'], :]
I got NaN as my result [3 rows x 25 columns]:
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
BP100_Fast NaN NaN NaN NaN NaN NaN
100LoBW_Fast NaN NaN NaN NaN NaN NaN
100HiBW_Fast NaN NaN NaN NaN NaN NaN
Is there any way to accomplish this with df.ix? Thank you very much.

Per Docs
The .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers. .ix offers a lot of magic on the inference of what the user wants to do. To wit, .ix can decide to index positionally OR via labels, depending on the data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation is here. (GH14218)
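Side note (not part of the original answer): the list passed to .ix above is treated as row labels, and because general keeps the default integer index, none of those strings exist in it, so .ix falls back to a reindex-like lookup and returns all-NaN rows. A minimal sketch of a genuine label lookup, assuming you are willing to move CPULabel into the index:
wanted = ['BP100_Fast', '100LoBW_Fast', '100HiBW_Fast']
slew = general.set_index('CPULabel').loc[wanted]  # every row whose CPULabel matches
The options below stay on the boolean-mask route instead.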
Option 1
isin
general[general.CPULabel.isin(['BP100_Fast', '100LoBW_Fast', '100HiBW_Fast'])]
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
0 Dif0 BP100_Fast 99.9843 0.492 0 0
2 Dif0 100HiBW_Fast 100.0060 0.503 0 0
4 Dif0 100LoBW_Fast 100.0050 0.503 0 0
8 Dif1 BP100_Fast 99.9928 0.492 7 10
10 Dif1 100HiBW_Fast 100.0140 0.502 10 11
12 Dif1 100LoBW_Fast 99.9965 0.502 5 10
16 Dif2 BP100_Fast 99.9929 0.493 2 6
18 Dif2 100HiBW_Fast 100.0020 0.504 4 9
20 Dif2 100LoBW_Fast 100.0210 0.504 8 9
Option 2
query
general.query('CPULabel in ["BP100_Fast", "100LoBW_Fast", "100HiBW_Fast"]')
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
0 Dif0 BP100_Fast 99.9843 0.492 0 0
2 Dif0 100HiBW_Fast 100.0060 0.503 0 0
4 Dif0 100LoBW_Fast 100.0050 0.503 0 0
8 Dif1 BP100_Fast 99.9928 0.492 7 10
10 Dif1 100HiBW_Fast 100.0140 0.502 10 11
12 Dif1 100LoBW_Fast 99.9965 0.502 5 10
16 Dif2 BP100_Fast 99.9929 0.493 2 6
18 Dif2 100HiBW_Fast 100.0020 0.504 4 9
20 Dif2 100LoBW_Fast 100.0210 0.504 8 9
Option 3
pd.Series.str.endswith
general[general.CPULabel.str.endswith('Fast')]
Pin CPULabel Freq(MHz) DCycle Skew(1-3)min Skew(1-3)mean
0 Dif0 BP100_Fast 99.9843 0.492 0 0
2 Dif0 100HiBW_Fast 100.0060 0.503 0 0
4 Dif0 100LoBW_Fast 100.0050 0.503 0 0
8 Dif1 BP100_Fast 99.9928 0.492 7 10
10 Dif1 100HiBW_Fast 100.0140 0.502 10 11
12 Dif1 100LoBW_Fast 99.9965 0.502 5 10
16 Dif2 BP100_Fast 99.9929 0.493 2 6
18 Dif2 100HiBW_Fast 100.0020 0.504 4 9
20 Dif2 100LoBW_Fast 100.0210 0.504 8 9

Try this approach:
labels = ['BP100_Fast', '100LoBW_Fast', '100HiBW_Fast']
slew = \
    pd.read_excel('25C_3.3V.xlsx', skiprows=1) \
    .fillna(value=0) \
    .query("Pin != 'Clkin' and CPULabel in @labels") \
    .drop_duplicates(keep=False)
(In .query(), @labels refers to the Python variable labels, and 'Clkin' has to be quoted so it is compared as a string literal rather than a column name.)
Alternatively, you can change:
slew = general.ix[['BP100_Fast', '100LoBW_Fast', '100HiBW_Fast'], :]
to:
slew = general.loc[general['CPULabel'].isin(['BP100_Fast','100LoBW_Fast','100HiBW_Fast'])]

Sample for a minimal DXF containing a spline

I need to create DXF files containing curves and splines.
I have long been able to create a simple DXF R12 file like this:
0
SECTION
2
ENTITIES
0
POLYLINE
8
0
66
1
10
0.0
20
0.0
30
0.0
70
0
75
6
62
1
0
VERTEX
8
0
10
1044.52
20
825.596
30
0.0
0
VERTEX
8
0
10
1044.52
20
700.099
30
0.0
0
SEQEND
0
ENDSEC
0
EOF
According to the DXF documentation (https://www.autodesk.com/techpubs/autocad/acad2000/dxf/polyline_dxf_06.htm and also http://paulbourke.net/dataformats/dxf/dxf10.html) even R12 POLYLINEs can be given a "SPLINE" attribute, so I tried this:
0
SECTION
2
ENTITIES
0
POLYLINE
8
0
66
1
10
0.0
20
0.0
30
0.0
70
4
62
1
0
VERTEX
8
0
10
1044.52
20
1825.596
30
0.0
70
8
0
VERTEX
8
0
10
644.52
20
1025.596
30
0.0
70
8
0
VERTEX
8
0
10
544.52
20
325.596
30
0.0
70
8
0
VERTEX
8
0
10
44.52
20
25.596
30
0.0
70
8
0
VERTEX
8
0
10
1044.52
20
700.099
30
0.0
0
SEQEND
0
ENDSEC
0
EOF
But this still shows only straight lines.
Is it possible to create DXF files with SPLINE content in DXF R12?
Can anybody point to a SIMPLE sample of how to create a spline in a DXF file?
I started with the information in
https://forums.autodesk.com/t5/visual-lisp-autolisp-and-general/manually-writing-a-minimal-dxf/td-p/2081140
and was able to make a headerless, tableless file.dxf which successfully draws two lines.
MinTwoLines.dxf
0
SECTION
2
ENTITIES
0
LINE
8
DXF_Lines_LayerName
10
0.0
20
0.0
30
0.0
11
1.0
21
0.0
31
0.0
0
LINE
8
DXF_Lines_LayerName
10
1.0
20
0.0
30
0.0
11
1.0
21
0.0
31
2.0
0
ENDSEC
0
EOF
Plopping a SPLINE entity into the same location as the LINE failed, so I nabbed the splineA.dxf file from
https://github.com/jscad/sample-files/blob/master/dxf/dxf-parser/splines.dxf
and incrementally hacked off bits until no more could be successfully removed.
The files resulting from laminating the files below can be successfully imported into Rhino 7.
The file lamination commands in Windows are:
to make a single file with only the FourPoint spline
copy MinSplineHead.txt+FourPointSplineBody.txt+MinSplineTail.txt MinSpline.dxf
to make a single file with only the TwentyOnePoint spline
copy MinSplineHead.txt+TwentyOnePointSplineBody.txt+MinSplineTail.txt MinSpline.dxf
to make a single file with both splines
copy MinSplineHead.txt+FourPointSplineBody.txt+TwentyOnePointSplineBody.txt+MinSplineTail.txt MinSpline.dxf
MinSplineHead.txt
0
SECTION
2
HEADER
9
$ACADVER
1
AC1021
0
ENDSEC
0
SECTION
2
TABLES
0
TABLE
2
LAYER
5
2
330
0
100
AcDbSymbolTable
70
1
0
LAYER
5
10
330
2
100
AcDbSymbolTableRecord
100
AcDbLayerTableRecord
2
0
70
0
62
7
6
CONTINUOUS
370
0
390
F
0
ENDTAB
0
TABLE
2
BLOCK_RECORD
5
1
330
0
100
AcDbSymbolTable
70
2
0
BLOCK_RECORD
5
1F
330
1
100
AcDbSymbolTableRecord
100
AcDbBlockTableRecord
2
*Model_Space
70
0
280
1
281
0
0
BLOCK_RECORD
5
1E
330
1
100
AcDbSymbolTableRecord
100
AcDbBlockTableRecord
2
*Paper_Space
70
0
280
1
281
0
0
ENDTAB
0
ENDSEC
0
SECTION
2
ENTITIES
FourPointSplineBody.txt
0
SPLINE
8
DXF_Spline_LayerName
100
AcDbSpline
70
8
71
2
72
7
73
4
74
0
42
1e-07
43
1e-07
40
0
40
0
40
0
40
0.5
40
1
40
1
40
1
10
0
20
0
30
0
10
122.4
20
-38.5
30
0
10
77.5
20
149.5
30
0
10
200
20
100
30
0
TwentyOnePointSplineBody.txt
0
SPLINE
8
DXF_Spline_LayerName
100
AcDbSpline
70
0
71
3
72
25
73
21
74
0
42
0.00000
43
0.00000
40
0.0
40
0.0
40
0.0
40
0.0
40
0.02777
40
0.05555
40
0.08333
40
0.11111
40
0.13888
40
0.16666
40
0.19444
40
0.22222
40
0.25
40
0.27777
40
0.30555
40
0.33333
40
0.36111
40
0.38888
40
0.41666
40
0.44444
40
0.47222
40
0.5
40
0.5
40
0.5
40
0.5
10
7.000
20
-4.000
30
-5.500
10
7.288
20
-4.008
30
-5.492
10
7.955
20
-4.029
30
-5.416
10
8.881
20
-4.062
30
-5.168
10
9.750
20
-4.097
30
-4.763
10
10.535
20
-4.136
30
-4.213
10
11.213
20
-4.177
30
-3.535
10
11.763
20
-4.222
30
-2.750
10
12.168
20
-4.270
30
-1.881
10
12.416
20
-4.321
30
-0.955
10
12.500
20
-4.375
30
-0.000
10
12.416
20
-4.432
30
0.955
10
12.168
20
-4.492
30
1.881
10
11.763
20
-4.556
30
2.750
10
11.213
20
-4.622
30
3.535
10
10.535
20
-4.691
30
4.213
10
9.750
20
-4.764
30
4.763
10
8.881
20
-4.840
30
5.168
10
7.955
20
-4.918
30
5.416
10
7.288
20
-4.975
30
5.492
10
7.000
20
-5.000
30
5.500
MinSplineTail.txt
0
ENDSEC
0
EOF
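If hand-assembling the group codes gets tedious, another route (a suggestion, not part of the answer above, and assuming the Python package ezdxf is an option) is to let a library write the SPLINE entity together with the header and tables it needs; the SPLINE entity simply does not exist in DXF R12, so a newer format has to be targeted:
import ezdxf  # assumption: pip install ezdxf

doc = ezdxf.new("R2000")  # SPLINE is not part of R12; target R2000 or later
msp = doc.modelspace()
# the four points from FourPointSplineBody.txt above, used here as fit points
msp.add_spline(fit_points=[(0, 0, 0), (122.4, -38.5, 0), (77.5, 149.5, 0), (200, 100, 0)])
doc.saveas("MinSpline_ezdxf.dxf")
For true R12 output, the usual workaround is to approximate the curve yourself and emit it as a plain POLYLINE made of many short segments.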

Sequential READ or WRITE not allowed after EOF marker

I have this code:
SUBROUTINE FNDKEY
1( FOUND ,IWBEG ,IWEND ,KEYWRD ,INLINE ,
2 NFILE ,NWRD )
IMPLICIT DOUBLE PRECISION (A-H,O-Z)
LOGICAL FOUND
CHARACTER*80 INLINE
CHARACTER*(*) KEYWRD
DIMENSION
1 IWBEG(40), IWEND(40)
C***********************************************************************
C FINDS AND READS A LINE CONTAINING A SPECIFIED KEYWORD FROM A FILE.
C THIS ROUTINE SEARCHES FOR A GIVEN KEYWORD POSITIONED AS THE FIRST
C WORD OF A LINE IN A FILE.
C IF THE GIVEN KEYWORD IS FOUND THEN THE CORRESPONDING LINE IS READ AND
C RETURNED TOGETHER WITH THE NUMBER OF WORDS IN THE LINE AND TWO INTEGER
C ARRAYS CONTAINING THE POSITION OF THE BEGINNING AND END OF EACH WORD.
C***********************************************************************
1000 FORMAT(A80)
C
FOUND=.TRUE.
IEND=0
10 READ(NFILE,1000,END=20)INLINE
NWRD=NWORD(INLINE,IWBEG,IWEND)
IF(NWRD.NE.0)THEN
IF(INLINE(IWBEG(1):IWEND(1)).EQ.KEYWRD)THEN
GOTO 999
ENDIF
ENDIF
GOTO 10
20 IF(IEND.EQ.0)THEN
IEND=1
REWIND NFILE
GOTO 10
ELSE
FOUND=.FALSE.
ENDIF
999 RETURN
END
And the following file named "2.dat" that I am trying to read:
TITLE
Example 7.5.3 - Simply supported uniformly loaded circular plate
ANALYSIS_TYPE 3 (Axisymmetric)
AXIS_OF_SYMMETRY Y
LARGE_STRAIN_FORMULATION OFF
SOLUTION_ALGORITHM 2
ELEMENT_GROUPS 1
1 1 1
ELEMENT_TYPES 1
1 QUAD_8
4 GP
ELEMENTS 10
1 1 1 19 11 20 16 21 13 22
2 1 13 21 16 23 10 24 2 25
3 1 3 26 18 27 17 28 4 29
4 1 18 30 7 31 12 32 17 27
5 1 3 33 5 34 14 35 18 26
6 1 18 35 14 36 6 37 7 30
7 1 5 38 8 39 15 40 14 34
8 1 14 40 15 41 9 42 6 36
9 1 10 23 16 43 17 32 12 44
10 1 16 20 11 45 4 28 17 43
NODE_COORDINATES 45 CARTESIAN
1 0.0000000000e+00 0.0000000000e+00
2 0.0000000000e+00 1.0000000000e+00
3 6.0000000000e+00 0.0000000000e+00
4 4.0000000000e+00 0.0000000000e+00
5 8.0000000000e+00 0.0000000000e+00
6 8.0000000000e+00 1.0000000000e+00
7 6.0000000000e+00 1.0000000000e+00
8 1.0000000000e+01 0.0000000000e+00
9 1.0000000000e+01 1.0000000000e+00
10 2.0000000000e+00 1.0000000000e+00
11 2.0000000000e+00 0.0000000000e+00
12 4.0000000000e+00 1.0000000000e+00
13 0.0000000000e+00 5.0000000000e-01
14 8.0000000000e+00 5.0000000000e-01
15 1.0000000000e+01 5.0000000000e-01
16 2.0000000000e+00 5.0000000000e-01
17 4.0000000000e+00 5.0000000000e-01
18 6.0000000000e+00 5.0000000000e-01
19 1.0000000000e+00 0.0000000000e+00
20 2.0000000000e+00 2.5000000000e-01
21 1.0000000000e+00 5.0000000000e-01
22 0.0000000000e+00 2.5000000000e-01
23 2.0000000000e+00 7.5000000000e-01
24 1.0000000000e+00 1.0000000000e+00
25 0.0000000000e+00 7.5000000000e-01
26 6.0000000000e+00 2.5000000000e-01
27 5.0000000000e+00 5.0000000000e-01
28 4.0000000000e+00 2.5000000000e-01
29 5.0000000000e+00 0.0000000000e+00
30 6.0000000000e+00 7.5000000000e-01
31 5.0000000000e+00 1.0000000000e+00
32 4.0000000000e+00 7.5000000000e-01
33 7.0000000000e+00 0.0000000000e+00
34 8.0000000000e+00 2.5000000000e-01
35 7.0000000000e+00 5.0000000000e-01
36 8.0000000000e+00 7.5000000000e-01
37 7.0000000000e+00 1.0000000000e+00
38 9.0000000000e+00 0.0000000000e+00
39 1.0000000000e+01 2.5000000000e-01
40 9.0000000000e+00 5.0000000000e-01
41 1.0000000000e+01 7.5000000000e-01
42 9.0000000000e+00 1.0000000000e+00
43 3.0000000000e+00 5.0000000000e-01
44 3.0000000000e+00 1.0000000000e+00
45 3.0000000000e+00 0.0000000000e+00
NODES_WITH_PRESCRIBED_DISPLACEMENTS 6
1 10 0.000 0.000 0.000
2 10 0.000 0.000 0.000
8 01 0.000 0.000 0.000
13 10 0.000 0.000 0.000
22 10 0.000 0.000 0.000
25 10 0.000 0.000 0.000
MATERIALS 1
1 VON_MISES
0.0
1.E+07 0.240
2
0.000 16000.0
1.000 16000.0
LOADINGS EDGE
EDGE_LOADS 5
2 3 10 24 2
1.000 1.000 1.000 0.000 0.000 0.000
4 3 7 31 12
1.000 1.000 1.000 0.000 0.000 0.000
6 3 6 37 7
1.000 1.000 1.000 0.000 0.000 0.000
8 3 9 42 6
1.000 1.000 1.000 0.000 0.000 0.000
9 3 10 12 44
1.000 1.000 1.000 0.000 0.000 0.000
*
* Monotonic loading to collapse
*
INCREMENTS 12
100.0 0.10000E-06 11 1 1 0 1 0
100.0 0.10000E-06 11 1 1 0 1 0
20.0 0.10000E-06 11 1 1 0 1 0
10.0 0.10000E-06 11 1 1 0 0 0
10.0 0.10000E-06 11 1 1 0 1 0
10.0 0.10000E-06 11 1 1 0 0 0
5.0 0.10000E-06 11 1 1 1 1 0
2.0 0.10000E-06 11 1 1 0 0 0
2.0 0.10000E-06 11 1 1 0 0 0
0.5 0.10000E-06 11 1 1 1 1 0
0.25 0.10000E-06 11 1 1 0 0 0
0.02 0.10000E-06 11 1 1 0 0 0
And I am getting the following error:
At line 22 of file GENERAL/fndkey.f (unit = 15, file = './2.dat')
Fortran runtime error: Sequential READ or WRITE not allowed after EOF marker, possibly use REWIND or BACKSPACE
The following routine is the one that calls FNDKEY. When it calls FNDKEY, it passes the string "RESTART" as KEYWRD.
SUBROUTINE RSTCHK( RSTINP ,RSTRT )
IMPLICIT DOUBLE PRECISION (A-H,O-Z)
LOGICAL RSTRT
CHARACTER*256 RSTINP
C
LOGICAL AVAIL,FOUND
CHARACTER*80 INLINE
DIMENSION IWBEG(40),IWEND(40)
C***********************************************************************
C CHECKS WETHER MAIN DATA IS TO BE READ FROM INPUT RE-START FILE
C AND SET INPUT RE-START FILE NAME IF REQUIRED
C***********************************************************************
1000 FORMAT(////,
1' Main input data read from re-start file'/
2' ======================================='///
3' Input re-start file name ------> ',A)
C
C Checks whether the input data file contains the keyword RESTART
C
CALL FNDKEY
1( FOUND ,IWBEG ,IWEND ,'RESTART',
2 INLINE ,15 ,NWRD )
IF(FOUND)THEN
C sets re-start flag and name of input re-start file
RSTRT=.TRUE.
RSTINP=INLINE(IWBEG(2):IWEND(2))//'.rst'
WRITE(16,1000)INLINE(IWBEG(2):IWEND(2))//'.rst'
C checks existence of the input re-start file
INQUIRE(FILE=RSTINP,EXIST=AVAIL)
IF(.NOT.AVAIL)CALL ERRPRT('ED0096')
ELSE
RSTRT=.FALSE.
ENDIF
C
RETURN
END
I solved the problem by adding the command BACKSPACE(NFILE) just before the RETURN. When the keyword is not found, the routine reaches the end of file a second time and returns with the unit still positioned after the endfile record; as the runtime error says, no further sequential READ is allowed from there, so the next READ on unit 15 fails. BACKSPACE repositions the unit just before the endfile record, so later reads work again:
SUBROUTINE FNDKEY
1( FOUND ,IWBEG ,IWEND ,KEYWRD ,INLINE ,
2 NFILE ,NWRD )
IMPLICIT DOUBLE PRECISION (A-H,O-Z)
LOGICAL FOUND
CHARACTER*80 INLINE
CHARACTER*(*) KEYWRD
DIMENSION
1 IWBEG(40), IWEND(40)
C***********************************************************************
C FINDS AND READS A LINE CONTAINING A SPECIFIED KEYWORD FROM A FILE.
C THIS ROUTINE SEARCHES FOR A GIVEN KEYWORD POSITIONED AS THE FIRST
C WORD OF A LINE IN A FILE.
C IF THE GIVEN KEYWORD IS FOUND THEN THE CORRESPONDING LINE IS READ AND
C RETURNED TOGETHER WITH THE NUMBER OF WORDS IN THE LINE AND TWO INTEGER
C ARRAYS CONTAINING THE POSITION OF THE BEGINNING AND END OF EACH WORD.
C***********************************************************************
1000 FORMAT(A80)
C
FOUND=.TRUE.
IEND=0
10 READ(NFILE,1000,END=20)INLINE
NWRD=NWORD(INLINE,IWBEG,IWEND)
PRINT *,KEYWRD
IF(NWRD.NE.0)THEN
IF(INLINE(IWBEG(1):IWEND(1)).EQ.KEYWRD)THEN
GOTO 999
ENDIF
ENDIF
GOTO 10
20 IF(IEND.EQ.0)THEN
IEND=1
REWIND NFILE
GOTO 10
ELSE
FOUND=.FALSE.
ENDIF
BACKSPACE(NFILE)
999 RETURN
END

GLSL: pow vs multiplication for integer exponent

Which is faster in GLSL:
pow(x, 3.0f);
or
x*x*x;
?
Does exponentiation performance depend on hardware vendor or exponent value?
I wrote a small benchmark, because I was interested in the results.
In my personal case, I was most interested in exponent = 5.
Benchmark code (running in Rem's Studio / LWJGL):
package me.anno.utils.bench
import me.anno.gpu.GFX
import me.anno.gpu.GFX.flat01
import me.anno.gpu.RenderState
import me.anno.gpu.RenderState.useFrame
import me.anno.gpu.framebuffer.Frame
import me.anno.gpu.framebuffer.Framebuffer
import me.anno.gpu.hidden.HiddenOpenGLContext
import me.anno.gpu.shader.Renderer
import me.anno.gpu.shader.Shader
import me.anno.utils.types.Floats.f2
import org.lwjgl.opengl.GL11.*
import java.nio.ByteBuffer
import kotlin.math.roundToInt
fun main() {
fun createShader(code: String) = Shader(
"", null, "" +
"attribute vec2 attr0;\n" +
"void main(){\n" +
" gl_Position = vec4(attr0*2.0-1.0, 0.0, 1.0);\n" +
" uv = attr0;\n" +
"}", "varying vec2 uv;\n", "" +
"void main(){" +
code +
"}"
)
fun repeat(code: String, times: Int): String {
return Array(times) { code }.joinToString("\n")
}
val size = 512
val warmup = 50
val benchmark = 1000
HiddenOpenGLContext.setSize(size, size)
HiddenOpenGLContext.createOpenGL()
val buffer = Framebuffer("", size, size, 1, 1, true, Framebuffer.DepthBufferType.NONE)
println("Power,Multiplications,GFlops-multiplication,GFlops-floats,GFlops-ints,GFlops-power,Speedup")
useFrame(buffer, Renderer.colorRenderer) {
RenderState.blendMode.use(me.anno.gpu.blending.BlendMode.ADD) {
for (power in 2 until 100) {
// to reduce the overhead of other stuff
val repeats = 100
val init = "float x1 = dot(uv, vec2(1.0)),x2,x4,x8,x16,x32,x64;\n"
val end = "gl_FragColor = vec4(x1,x1,x1,x1);\n"
val manualCode = StringBuilder()
for (bit in 1 until 32) {
val p = 1.shl(bit)
val h = 1.shl(bit - 1)
if (power == p) {
manualCode.append("x1=x$h*x$h;")
break
} else if (power > p) {
manualCode.append("x$p=x$h*x$h;")
} else break
}
if (power.and(power - 1) != 0) {
// not a power of two, so the result isn't finished yet
manualCode.append("x1=")
var first = true
for (bit in 0 until 32) {
val p = 1.shl(bit)
if (power.and(p) != 0) {
if (!first) {
manualCode.append('*')
} else first = false
manualCode.append("x$p")
}
}
manualCode.append(";\n")
}
val multiplications = manualCode.count { it == '*' }
// println("$power: $manualCode")
val shaders = listOf(
// manually optimized
createShader(init + repeat(manualCode.toString(), repeats) + end),
// can be optimized
createShader(init + repeat("x1=pow(x1,$power.0);", repeats) + end),
// can be optimized, int as power
createShader(init + repeat("x1=pow(x1,$power);", repeats) + end),
// slightly different, so it can't be optimized
createShader(init + repeat("x1=pow(x1,${power}.01);", repeats) + end),
)
for (shader in shaders) {
shader.use()
}
val pixels = ByteBuffer.allocateDirect(4)
Frame.bind()
glClearColor(0f, 0f, 0f, 1f)
glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
for (i in 0 until warmup) {
for (shader in shaders) {
shader.use()
flat01.draw(shader)
}
}
val flops = DoubleArray(shaders.size)
val avg = 10 // for more stability between runs
for (j in 0 until avg) {
for (index in shaders.indices) {
val shader = shaders[index]
GFX.check()
val t0 = System.nanoTime()
for (i in 0 until benchmark) {
shader.use()
flat01.draw(shader)
}
// synchronize
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels)
GFX.check()
val t1 = System.nanoTime()
// the first one may be an outlier
if (j > 0) flops[index] += multiplications * repeats.toDouble() * benchmark.toDouble() * size * size / (t1 - t0)
GFX.check()
}
}
for (i in flops.indices) {
flops[i] /= (avg - 1.0)
}
println(
"" +
"$power,$multiplications," +
"${flops[0].roundToInt()}," +
"${flops[1].roundToInt()}," +
"${flops[2].roundToInt()}," +
"${flops[3].roundToInt()}," +
(flops[0] / flops[3]).f2()
)
}
}
}
}
The shader is run over 512² pixels, with 1000 draw calls per timed pass (9 timed passes are averaged), and each invocation evaluates the function 100 times.
I ran this code on my RX 580 8GB from Gigabyte and collected the following results:
Power  #Mult  GFlops*  GFlopsFp  GFlopsInt  GFlopsPow  Speedup
2      1      1246     1429      1447       324        3.84
3      2      2663     2692      2708       651        4.09
4      2      2682     2679      2698       650        4.12
5      3      2766     972       974        973        2.84
6      3      2785     978       974        976        2.85
7      4      2830     1295      1303       1299       2.18
8      3      2783     2792      2809       960        2.90
9      4      2836     1298      1301       1302       2.18
10     4      2833     1291      1302       1298       2.18
11     5      2858     1623      1629       1623       1.76
12     4      2824     1302      1295       1303       2.17
13     5      2866     1628      1624       1626       1.76
14     5      2869     1614      1623       1611       1.78
15     6      2886     1945      1943       1953       1.48
16     4      2821     1305      1300       1305       2.16
17     5      2868     1615      1625       1619       1.77
18     5      2858     1620      1625       1624       1.76
19     6      2890     1949      1946       1949       1.48
20     5      2871     1618      1627       1625       1.77
21     6      2879     1945      1947       1943       1.48
22     6      2886     1944      1949       1952       1.48
23     7      2901     2271      2269       2268       1.28
24     5      2872     1621      1628       1624       1.77
25     6      2886     1942      1943       1942       1.49
26     6      2880     1949      1949       1953       1.47
27     7      2891     2273      2263       2266       1.28
28     6      2883     1949      1946       1953       1.48
29     7      2910     2279      2281       2279       1.28
30     7      2899     2272      2276       2277       1.27
31     8      2906     2598      2595       2596       1.12
32     5      2872     1621      1625       1622       1.77
33     6      2901     1953      1942       1949       1.49
34     6      2895     1948      1939       1944       1.49
35     7      2895     2274      2266       2268       1.28
36     6      2881     1937      1944       1948       1.48
37     7      2894     2277      2270       2280       1.27
38     7      2902     2275      2264       2273       1.28
39     8      2910     2602      2594       2603       1.12
40     6      2877     1945      1947       1945       1.48
41     7      2892     2276      2277       2282       1.27
42     7      2887     2271      2272       2273       1.27
43     8      2912     2599      2606       2599       1.12
44     7      2910     2278      2284       2276       1.28
45     8      2920     2597      2601       2600       1.12
46     8      2920     2600      2601       2590       1.13
47     9      2925     2921      2926       2927       1.00
48     6      2885     1935      1955       1956       1.47
49     7      2901     2271      2279       2288       1.27
50     7      2904     2281      2276       2278       1.27
51     8      2919     2608      2594       2607       1.12
52     7      2902     2282      2270       2273       1.28
53     8      2903     2598      2602       2598       1.12
54     8      2918     2602      2602       2604       1.12
55     9      2932     2927      2924       2936       1.00
56     7      2907     2284      2282       2281       1.27
57     8      2920     2606      2604       2610       1.12
58     8      2913     2593      2597       2587       1.13
59     9      2925     2923      2924       2920       1.00
60     8      2930     2614      2606       2613       1.12
61     9      2932     2946      2946       2947       1.00
62     9      2926     2935      2937       2947       0.99
63     10     2958     3258      3192       3266       0.91
64     6      2902     1957      1956       1959       1.48
65     7      2903     2274      2267       2273       1.28
66     7      2909     2277      2276       2286       1.27
67     8      2908     2602      2606       2599       1.12
68     7      2894     2272      2279       2276       1.27
69     8      2923     2597      2606       2606       1.12
70     8      2910     2596      2599       2600       1.12
71     9      2926     2921      2927       2924       1.00
72     7      2909     2283      2273       2273       1.28
73     8      2909     2602      2602       2599       1.12
74     8      2914     2602      2602       2603       1.12
75     9      2924     2925      2927       2933       1.00
76     8      2904     2608      2602       2601       1.12
77     9      2911     2919      2917       2909       1.00
78     9      2927     2921      2917       2935       1.00
79     10     2929     3241      3246       3246       0.90
80     7      2903     2273      2276       2275       1.28
81     8      2916     2596      2592       2589       1.13
82     8      2913     2600      2597       2598       1.12
83     9      2925     2931      2926       2913       1.00
84     8      2917     2598      2606       2597       1.12
85     9      2920     2916      2918       2927       1.00
86     9      2942     2922      2944       2936       1.00
87     10     2961     3254      3259       3268       0.91
88     8      2934     2607      2608       2612       1.12
89     9      2918     2939      2931       2916       1.00
90     9      2927     2928      2920       2924       1.00
91     10     2940     3253      3252       3246       0.91
92     9      2924     2933      2926       2928       1.00
93     10     2940     3259      3237       3251       0.90
94     10     2928     3247      3247       3264       0.90
95     11     2933     3599      3593       3594       0.82
96     7      2883     2282      2268       2269       1.27
97     8      2911     2602      2595       2600       1.12
98     8      2896     2588      2591       2587       1.12
99     9      2924     2939      2936       2938       1.00
As you can see, a pow() call takes exactly as long as 9 multiplication instructions, so every manual rewrite of a power that needs fewer than 9 multiplications is faster.
Only the cases 2, 3, 4, and 8 are optimized by my driver. The optimization is independent of whether you use the .0 suffix for the exponent.
In the case of exponent = 2, my implementation seems to have lower performance than the driver's. I am not sure why.
The speedup is the manual implementation compared to pow(x,exponent+0.01), which cannot be optimized by the compiler.
Because the multiplications and the speedup align so perfectly, I created a graph to show the relationship. This relationship kind of shows that my benchmark is trustworthy :).
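To make the '#Mult' column concrete, here is a small Python sketch (added for illustration, not part of the original benchmark) of the square-and-multiply decomposition that the Kotlin generator above emits as GLSL:
def mult_chain(power):
    # build x**power out of repeated squarings plus one final product,
    # mirroring the manualCode generator in the Kotlin benchmark
    lines = []
    bit = 1
    while True:
        p = 1 << bit
        h = 1 << (bit - 1)
        if power == p:
            lines.append("x1 = x%d * x%d;" % (h, h))  # power of two: last squaring lands in x1
            break
        elif power > p:
            lines.append("x%d = x%d * x%d;" % (p, h, h))
            bit += 1
        else:
            break
    if power & (power - 1):  # not a power of two: multiply the partial powers of its set bits
        factors = ["x%d" % (1 << b) for b in range(power.bit_length()) if power & (1 << b)]
        lines.append("x1 = " + " * ".join(factors) + ";")
    return lines, sum(line.count("*") for line in lines)

# mult_chain(5) -> (['x2 = x1 * x1;', 'x4 = x2 * x2;', 'x1 = x1 * x4;'], 3)
# three multiplications, matching the '#Mult' row for power 5 in the table above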
Operating System: Windows 10 Personal
GPU: RX 580 8GB from Gigabyte
Processor: Ryzen 5 2600
Memory: 16 GB DDR4 3200
GPU Driver: 21.6.1 from 17th June 2021
LWJGL: Version 3.2.3 build 13
While this can definitely be hardware/vendor/compiler dependent, advanced mathematical functions like pow() tend to be considerably more expensive than basic operations.
The best approach is of course to try both and benchmark. But if there is a simple replacement for an advanced mathematical function, I don't think you can go very wrong by using it.
If you write pow(x, 3.0), the best you can probably hope for is that the compiler will recognize the special case, and expand it. But why take the risk, if the replacement is just as short and easy to read? C/C++ compilers don't always replace pow(x, 2.0) by a simple multiplication, so I wouldn't necessarily count on all GLSL compilers to do that.

How to loop rows and columns in pandas while replacing values with a constant increment

I am trying to replace values in a dataframe with 0: in the first column I need to replace the first 3 values, in the next column the first 6 values, and so on, increasing by 3 every time.
a=np.array([133,124,156,189,132,176,189,192,100,120,130,140,150,50,70,133,124,156,189,132])
b = pd.DataFrame(a.reshape(10,2), columns= ['s','t'])
for columns in b:
    yy = 3
    for i in xrange(yy):
        b[columns][i] = 0
    yy += 3
print b
the outcome is the following
s t
0 0 0
1 0 0
2 0 0
3 189 189
4 132 132
5 176 176
6 189 189
7 192 192
8 100 100
9 120 120
I am clearly missing something really simple to make the loop replace 6 values instead of only 3 in column t. Any ideas?
I would do it this way:
i = 1
for c in b.columns:
    b.ix[0 : 3*i-1, c] = 0
    i += 1
Demo:
In [86]: b = pd.DataFrame(np.random.randint(0, 100, size=(20, 4)), columns=list('abcd'))
In [87]: %paste
i = 1
for c in b.columns:
    b.ix[0 : 3*i-1, c] = 0
    i += 1
## -- End pasted text --
In [88]: b
Out[88]:
a b c d
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 10 0 0 0
4 8 0 0 0
5 49 0 0 0
6 55 48 0 0
7 99 43 0 0
8 63 29 0 0
9 61 65 74 0
10 15 29 41 0
11 79 88 3 0
12 91 74 11 4
13 56 71 6 79
14 15 65 46 81
15 81 42 60 24
16 71 57 95 18
17 53 4 80 15
18 42 55 84 11
19 26 80 67 59
You need to initialize yy = 3 before the loop:
yy = 3
for columns in b:
    for i in xrange(yy):
        b[columns][i] = 0
    yy += 3
print b
Python 3 solution:
yy = 3
for columns in b:
    for i in range(yy):
        b[columns][i] = 0
    yy += 3
print(b)
s t
0 0 0
1 0 0
2 0 0
3 189 0
4 100 0
5 130 0
6 150 50
7 70 133
8 124 156
9 189 132
Another solution:
yy = 3
for i, col in enumerate(b.columns):
    b.ix[:i*yy+yy-1, col] = 0
print(b)
s t
0 0 0
1 0 0
2 0 0
3 189 0
4 100 0
5 130 0
6 150 50
7 70 133
8 124 156
9 189 132
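Since .ix is deprecated in recent pandas, a purely positional variant of the same idea (a sketch, not taken from the answers above, assuming numpy is imported as np as in the question) is:
for i in range(b.shape[1]):
    b.iloc[:3 * (i + 1), i] = 0
or, without a Python-level loop, a boolean mask of the cells to zero out:
mask = np.arange(len(b))[:, None] < 3 * (np.arange(b.shape[1]) + 1)
b = b.mask(mask, 0)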

Histogram image matching using OpenCV

I need to perform a matching between an image and a histogram I receive as text.
I compute the CDF for both of them:
// Calculating cumulative histogram of src
double total = src.rows*src.cols;
double probSrc[256];       // 256 bins (0..255); [255] would overflow in the loop below
int newValuesSrc[256];
double cuml = 0;
for(int j = 0; j < 256; j++)
{
    probSrc[j] = imageHistogram[j]/total;   // Probability of each value in image
    cuml = cuml + probSrc[j];               // Cumulative probability of current and all previous values
    double cdfmax = cuml * 255;             // Cumulative probability * max value
    newValuesSrc[j] = (int) round(cdfmax);
    cout << imageHistogram[j] << " " << probSrc[j] << " " << newValuesSrc[j] << endl;
}
readHistogramFromFile();
// Calculating cumulative histogram from file
double probDst[256];
int newValuesDst[256];
cuml = 0;
for(int j = 0; j < 256; j++)
{
    probDst[j] = receivedHistogram[j]/total;  // note: divides by the image's pixel count, not by the received histogram's own total
    cuml = cuml + probDst[j];                 // Cumulative probability of current and all previous values
    double cdfmax = cuml * 255;               // Cumulative probability * max value
    newValuesDst[j] = (int) round(cdfmax);
    cout << receivedHistogram[j] << " " << probDst[j] << " " << newValuesDst[j] << endl;
}
and I get these values:
For the src image:
207677 0.0901376 23
37615 0.016326 27
19098 0.00828906 29
11955 0.0051888 31
8744 0.00379514 32
7386 0.00320573 32
6546 0.00284115 33
6178 0.00268142 34
5967 0.00258984 34
5437 0.00235981 35
5280 0.00229167 36
5127 0.00222526 36
5002 0.00217101 37
4839 0.00210026 37
4754 0.00206337 38
4676 0.00202951 38
4547 0.00197352 39
4517 0.0019605 39
4484 0.00194618 40
4290 0.00186198 40
4197 0.00182161 41
4188 0.00181771 41
4265 0.00185113 42
4229 0.0018355 42
4233 0.00183724 43
4245 0.00184245 43
4358 0.00189149 44
4330 0.00187934 44
4400 0.00190972 45
4474 0.00194184 45
4519 0.00196137 46
4415 0.00191623 46
4477 0.00194314 47
4468 0.00193924 47
4580 0.00198785 48
4416 0.00191667 48
4558 0.0019783 49
4674 0.00202865 49
4705 0.0020421 50
4998 0.00216927 50
4848 0.00210417 51
4782 0.00207552 51
4883 0.00211936 52
4989 0.00216536 52
4957 0.00215148 53
4987 0.0021645 53
5133 0.00222786 54
4967 0.00215582 54
5217 0.00226432 55
5185 0.00225043 56
5140 0.0022309 56
5236 0.00227257 57
5291 0.00229644 57
5458 0.00236892 58
5473 0.00237543 59
5464 0.00237153 59
5495 0.00238498 60
5439 0.00236068 60
5458 0.00236892 61
5557 0.00241189 62
5881 0.00255252 62
5900 0.00256076 63
5935 0.00257595 64
5902 0.00256163 64
6040 0.00262153 65
6203 0.00269227 66
6146 0.00266753 66
6140 0.00266493 67
6075 0.00263672 68
6054 0.0026276 68
6238 0.00270747 69
6060 0.00263021 70
6153 0.00267057 70
6303 0.00273568 71
6231 0.00270443 72
6278 0.00272483 72
6360 0.00276042 73
6359 0.00275998 74
6368 0.00276389 75
6438 0.00279427 75
6329 0.00274696 76
6408 0.00278125 77
6360 0.00276042 77
6378 0.00276823 78
6329 0.00274696 79
6394 0.00277517 79
6517 0.00282856 80
6521 0.0028303 81
6707 0.00291102 82
6788 0.00294618 82
6761 0.00293446 83
6878 0.00298524 84
7004 0.00303993 85
6963 0.00302214 85
7050 0.0030599 86
6940 0.00301215 87
6875 0.00298394 88
7073 0.00306988 89
7035 0.00305339 89
7146 0.00310156 90
7007 0.00304123 91
7159 0.0031072 92
7089 0.00307682 92
7185 0.00311849 93
7410 0.00321615 94
7237 0.00314106 95
7334 0.00318316 96
7364 0.00319618 97
7452 0.00323437 97
7760 0.00336806 98
7839 0.00340234 99
7882 0.00342101 100
7885 0.00342231 101
8055 0.00349609 102
7923 0.0034388 103
8165 0.00354384 103
8306 0.00360503 104
8271 0.00358984 105
8275 0.00359158 106
8634 0.0037474 107
8684 0.0037691 108
8752 0.00379861 109
9080 0.00394097 110
8958 0.00388802 111
9094 0.00394705 112
9279 0.00402734 113
9234 0.00400781 114
9348 0.00405729 115
9440 0.00409722 116
9431 0.00409332 117
9662 0.00419358 118
9842 0.0042717 119
9816 0.00426042 121
9957 0.00432161 122
10353 0.00449349 123
10626 0.00461198 124
10764 0.00467187 125
10832 0.00470139 126
10767 0.00467318 128
11222 0.00487066 129
11469 0.00497786 130
11661 0.0050612 131
11731 0.00509158 133
12023 0.00521832 134
12086 0.00524566 135
12094 0.00524913 137
12362 0.00536545 138
12364 0.00536632 139
12659 0.00549436 141
12587 0.00546311 142
12776 0.00554514 144
13037 0.00565842 145
13252 0.00575174 147
13425 0.00582682 148
13595 0.00590061 150
13795 0.00598741 151
14308 0.00621007 153
14232 0.00617708 154
14657 0.00636155 156
14966 0.00649566 157
14867 0.00645269 159
15051 0.00653255 161
15510 0.00673177 162
15357 0.00666536 164
15326 0.00665191 166
15308 0.0066441 168
15316 0.00664757 169
15321 0.00664974 171
15298 0.00663976 173
15435 0.00669922 174
15496 0.00672569 176
15307 0.00664366 178
15343 0.00665929 179
15356 0.00666493 181
15315 0.00664714 183
15444 0.00670312 185
15346 0.00666059 186
15583 0.00676345 188
15429 0.00669661 190
15641 0.00678863 191
15661 0.00679731 193
15638 0.00678733 195
15689 0.00680946 197
15866 0.00688628 198
15552 0.00675 200
15150 0.00657552 202
15185 0.00659071 203
14941 0.00648481 205
14989 0.00650564 207
14585 0.0063303 208
14718 0.00638802 210
14553 0.00631641 212
14612 0.00634201 213
14520 0.00630208 215
14358 0.00623177 216
13931 0.00604644 218
13580 0.0058941 220
13370 0.00580295 221
13281 0.00576432 222
13053 0.00566536 224
12711 0.00551693 225
12556 0.00544965 227
12556 0.00544965 228
12125 0.00526259 229
12184 0.00528819 231
11975 0.00519748 232
12198 0.00529427 233
11919 0.00517318 235
11898 0.00516406 236
11589 0.00502995 237
11348 0.00492535 239
11011 0.00477908 240
10523 0.00456727 241
10388 0.00450868 242
9795 0.0042513 243
9251 0.00401519 244
9014 0.00391233 245
8436 0.00366146 246
8266 0.00358767 247
7851 0.00340755 248
7299 0.00316797 249
6996 0.00303646 250
6303 0.00273568 250
5625 0.00244141 251
5375 0.0023329 251
5102 0.00221441 252
4747 0.00206033 253
4313 0.00187196 253
3809 0.00165321 253
3307 0.00143533 254
2756 0.00119618 254
2276 0.000987847 254
1935 0.000839844 255
1617 0.000701823 255
1087 0.000471788 255
547 0.000237413 255
217 9.4184e-05 255
31 1.34549e-05 255
4 1.73611e-06 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
0 0 255
And for the histogram I receive:
10 4.34028e-06 0
11 4.77431e-06 0
12 5.20833e-06 0
13 5.64236e-06 0
14 6.07639e-06 0
15 6.51042e-06 0
16 6.94444e-06 0
17 7.37847e-06 0
18 7.8125e-06 0
19 8.24653e-06 0
20 8.68056e-06 0
22 9.54861e-06 0
24 1.04167e-05 0
26 1.12847e-05 0
28 1.21528e-05 0
30 1.30208e-05 0
34 1.47569e-05 0
38 1.64931e-05 0
42 1.82292e-05 0
50 2.17014e-05 0
60 2.60417e-05 0
70 3.03819e-05 0
80 3.47222e-05 0
90 3.90625e-05 0
100 4.34028e-05 0
120 5.20833e-05 0
140 6.07639e-05 0
160 6.94444e-05 0
160 6.94444e-05 0
150 6.51042e-05 0
140 6.07639e-05 0
130 5.64236e-05 0
120 5.20833e-05 0
110 4.77431e-05 0
100 4.34028e-05 0
90 3.90625e-05 0
80 3.47222e-05 0
70 3.03819e-05 0
60 2.60417e-05 0
50 2.17014e-05 0
40 1.73611e-05 0
30 1.30208e-05 0
20 8.68056e-06 0
10 4.34028e-06 0
10 4.34028e-06 0
10 4.34028e-06 0
10 4.34028e-06 0
11 4.77431e-06 0
12 5.20833e-06 0
13 5.64236e-06 0
14 6.07639e-06 0
15 6.51042e-06 0
16 6.94444e-06 0
17 7.37847e-06 0
18 7.8125e-06 0
19 8.24653e-06 0
20 8.68056e-06 0
22 9.54861e-06 0
24 1.04167e-05 0
26 1.12847e-05 0
28 1.21528e-05 0
30 1.30208e-05 0
34 1.47569e-05 0
38 1.64931e-05 0
42 1.82292e-05 0
50 2.17014e-05 0
60 2.60417e-05 0
70 3.03819e-05 0
80 3.47222e-05 0
90 3.90625e-05 0
100 4.34028e-05 0
120 5.20833e-05 0
140 6.07639e-05 0
160 6.94444e-05 0
160 6.94444e-05 0
150 6.51042e-05 0
140 6.07639e-05 0
130 5.64236e-05 1
120 5.20833e-05 1
110 4.77431e-05 1
100 4.34028e-05 1
90 3.90625e-05 1
80 3.47222e-05 1
70 3.03819e-05 1
60 2.60417e-05 1
50 2.17014e-05 1
40 1.73611e-05 1
30 1.30208e-05 1
20 8.68056e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
11 4.77431e-06 1
12 5.20833e-06 1
13 5.64236e-06 1
14 6.07639e-06 1
15 6.51042e-06 1
16 6.94444e-06 1
17 7.37847e-06 1
18 7.8125e-06 1
19 8.24653e-06 1
20 8.68056e-06 1
22 9.54861e-06 1
24 1.04167e-05 1
26 1.12847e-05 1
28 1.21528e-05 1
30 1.30208e-05 1
34 1.47569e-05 1
38 1.64931e-05 1
42 1.82292e-05 1
50 2.17014e-05 1
60 2.60417e-05 1
70 3.03819e-05 1
80 3.47222e-05 1
90 3.90625e-05 1
100 4.34028e-05 1
120 5.20833e-05 1
140 6.07639e-05 1
160 6.94444e-05 1
160 6.94444e-05 1
150 6.51042e-05 1
140 6.07639e-05 1
130 5.64236e-05 1
120 5.20833e-05 1
110 4.77431e-05 1
100 4.34028e-05 1
90 3.90625e-05 1
80 3.47222e-05 1
70 3.03819e-05 1
60 2.60417e-05 1
50 2.17014e-05 1
40 1.73611e-05 1
30 1.30208e-05 1
20 8.68056e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
11 4.77431e-06 1
12 5.20833e-06 1
13 5.64236e-06 1
14 6.07639e-06 1
15 6.51042e-06 1
16 6.94444e-06 1
17 7.37847e-06 1
18 7.8125e-06 1
19 8.24653e-06 1
20 8.68056e-06 1
22 9.54861e-06 1
24 1.04167e-05 1
26 1.12847e-05 1
28 1.21528e-05 1
30 1.30208e-05 1
34 1.47569e-05 1
38 1.64931e-05 1
42 1.82292e-05 1
50 2.17014e-05 1
60 2.60417e-05 1
70 3.03819e-05 1
80 3.47222e-05 1
90 3.90625e-05 1
100 4.34028e-05 1
120 5.20833e-05 1
140 6.07639e-05 1
160 6.94444e-05 1
160 6.94444e-05 1
150 6.51042e-05 1
140 6.07639e-05 1
130 5.64236e-05 1
120 5.20833e-05 1
110 4.77431e-05 1
100 4.34028e-05 1
90 3.90625e-05 1
80 3.47222e-05 1
70 3.03819e-05 1
60 2.60417e-05 1
50 2.17014e-05 1
40 1.73611e-05 1
30 1.30208e-05 1
20 8.68056e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
20 8.68056e-06 1
30 1.30208e-05 1
40 1.73611e-05 1
50 2.17014e-05 1
60 2.60417e-05 1
70 3.03819e-05 1
80 3.47222e-05 1
90 3.90625e-05 1
100 4.34028e-05 1
120 5.20833e-05 1
140 6.07639e-05 1
160 6.94444e-05 1
160 6.94444e-05 1
150 6.51042e-05 1
140 6.07639e-05 1
130 5.64236e-05 1
120 5.20833e-05 1
110 4.77431e-05 1
100 4.34028e-05 1
90 3.90625e-05 1
80 3.47222e-05 1
70 3.03819e-05 1
60 2.60417e-05 1
50 2.17014e-05 1
40 1.73611e-05 1
30 1.30208e-05 1
20 8.68056e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
10 4.34028e-06 1
20 8.68056e-06 1
30 1.30208e-05 1
40 1.73611e-05 1
40 1.73611e-05 1
50 2.17014e-05 1
55 2.38715e-05 1
60 2.60417e-05 1
65 2.82118e-05 1
70 3.03819e-05 1
75 3.25521e-05 1
80 3.47222e-05 1
85 3.68924e-05 2
90 3.90625e-05 2
95 4.12326e-05 2
90 3.90625e-05 2
80 3.47222e-05 2
70 3.03819e-05 2
60 2.60417e-05 2
50 2.17014e-05 2
40 1.73611e-05 2
30 1.30208e-05 2
20 8.68056e-06 2
10 4.34028e-06 2
10 4.34028e-06 2
10 4.34028e-06 2
20 8.68056e-06 2
30 1.30208e-05 2
40 1.73611e-05 2
40 1.73611e-05 2
50 2.17014e-05 2
55 2.38715e-05 2
60 2.60417e-05 2
65 2.82118e-05 2
70 3.03819e-05 2
75 3.25521e-05 2
80 3.47222e-05 2
85 3.68924e-05 2
90 3.90625e-05 2
95 4.12326e-05 2
100 4.34028e-05 2
105 4.55729e-05 2
110 4.77431e-05 2
115 4.99132e-05 2
120 5.20833e-05 2
As you can see, the values for the src histogram span the whole [0-255] range, while the ones for the histogram I receive only reach [0-2].
My question is, what do I do now? How do I match them?
Why don't you scale the [0-2] histogram to [0-255]? oldValue * 255 / 2.
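For the matching step itself, a common recipe (a sketch, not part of the answer above) is: bring both CDFs onto the same [0-255] scale, then for every source grey level find the first target level whose CDF reaches the same value, and apply that 256-entry lookup table to the image. Note that the received histogram should be normalized by its own total count rather than by the image's pixel count, otherwise its CDF tops out near 2 as in the question. In Python/NumPy terms:
import numpy as np

def match_lut(cdf_src, cdf_dst):
    # cdf_src, cdf_dst: 256-entry cumulative histograms, both scaled to [0, 255]
    lut = np.zeros(256, dtype=np.uint8)
    for g in range(256):
        # smallest target level whose CDF reaches the source CDF at level g
        lut[g] = min(int(np.searchsorted(cdf_dst, cdf_src[g])), 255)
    return lut

# matched pixel = lut[original pixel]; with OpenCV this is a single cv2.LUT(src_gray, lut) call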