I have scanned ColdFusion code using the CFLint jar CFLint-1.3.0-all.jar from the command line as
java -jar <jar path> -folder <mylocalColdfusioncodeFolder>
It produces a cflint-result.html file in the corresponding folder.
In the report, I found none of the cross-site scripting or DOM-related issues that the Fortify Audit Workbench tool reports. CFLint basically reports language-specific issues because it is built on CFParser.
When I ran the command below to see which rules a scan checks against, I found that they are all language-specific rules.
java -jar CFLint-1.3.0-all.jar -rules gives this list of rules:
The Supported rules to check against the cfm code :
-----------------------------------------------------
1 ComplexBooleanExpressionChecker
2 GlobalLiteralChecker
3 CFBuiltInFunctionChecker
4 CreateObjectChecker
5 CFDumpChecker
6 FunctionTypeChecker
7 ArrayNewChecker
8 LocalLiteralChecker
9 SelectStarChecker
10 TooManyFunctionsChecker
11 QueryParamChecker
12 FunctionLengthChecker
13 OutputParmMissing
14 WriteDumpChecker
15 CFExecuteChecker
16 ComponentLengthChecker
17 GlobalVarChecker
18 CFModuleChecker
19 CFIncludeChecker
20 CFDebugAttributeChecker
21 ComponentDisplayNameChecker
22 ArgVarChecker
23 NestedCFOutput
24 VarScoper
25 FunctionHintChecker
26 ArgumentNameChecker
27 TooManyArgumentsChecker
28 SimpleComplexityChecker
29 TypedQueryNew
30 CFInsertChecker
31 StructKeyChecker
32 BooleanExpressionChecker
33 VariableNameChecker
34 MethodNameChecker
35 AbortChecker
36 ComponentNameChecker
37 UnusedArgumentChecker
38 StructNewChecker
39 PackageCaseChecker
40 CFAbortChecker
41 ComponentHintChecker
42 ArgumentTypeChecker
43 CFUpdateChecker
44 IsDebugModeChecker
45 ArgDefChecker
46 UnusedLocalVarChecker
47 CFSwitchDefaultChecker
48 ArgumentHintChecker
49 CFCompareVsAssignChecker
-----------------------------------------------------
And I found that CFLint does not raise errors for cross-site scripting (XSS) attacks. When I ran the same ColdFusion code folder through the Fortify tool (Audit Workbench), I got security issues such as:
Cross-Site Scripting: Reflected
Cross-Site Scripting: DOM
Unreleased Resource
Dynamic Code Evaluation: Code Injection
Hardcoded Password
SQL Injection
Path Manipulation
Log Forging
Privacy Violation
flagged against tags such as cfdocument, cfdirectoryexists, cfcookie, cflog, cffile.....
Can you please clarify whether CFLint scans for these security (XSS) issues, or whether it only checks rules specific to the ColdFusion language?
CFLint is only concerned with ColdFusion code quality. It is not a security scanner, nor an XSS checker (note that "CSS" usually means Cascading Style Sheets; cross-site scripting is abbreviated XSS). You are mixing up your tools and their purposes.
A linter scans for code issues, not security issues.
We have built ejabberd on AWS EC2 instances and enabled clustering across six ejabberd servers in the Tokyo, Frankfurt, and Singapore regions.
The OS, middleware, applications, and settings of each EC2 instance are exactly the same.
Currently, however, the ejabberd CPUs in the Frankfurt and Singapore regions are overloaded, while the CPU of the ejabberd node in the Tokyo region is normal.
Could you please point out where we should look for the cause?
You can take a look at the ejabberd log files on the problematic (and the good) nodes; maybe you will find some clue.
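For example, something like this (the log path is an assumption; it depends on how ejabberd was installed):
grep -iE "error|warn" /var/log/ejabberd/ejabberd.log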
You can also use the undocumented "ejabberdctl etop" shell command on the problematic nodes. It is similar to "top", but runs inside the Erlang virtual machine that runs ejabberd:
ejabberdctl etop
========================================================================================
ejabberd#localhost 16:00:12
Load: cpu 0 Memory: total 44174 binary 1320
procs 277 processes 5667 code 20489
runq 1 atom 984 ets 5467
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<9135.1252.0> caps_requests_cache 2393 1 2816 0 gen_server:loop/7
<9135.932.0> mnesia_recover 480 39 2816 0 gen_server:loop/7
<9135.1118.0> dets:init/2 71 2 5944 0 dets:open_file_loop2
<9135.6.0> prim_file:start/0 63 1 2608 0 prim_file:helper_loo
<9135.1164.0> dets:init/2 56 2 4072 0 dets:open_file_loop2
<9135.818.0> disk_log:init/2 49 2 5984 0 disk_log:loop/1
<9135.1038.0> ejabberd_listener:in 31 2 2840 0 prim_inet:accept0/3
<9135.1213.0> dets:init/2 31 2 5944 0 dets:open_file_loop2
<9135.1255.0> dets:init/2 30 2 5944 0 dets:open_file_loop2
<9135.0.0> init 28 1 3912 0 init:loop/1
========================================================================================
Whenever I run my code, it uploads the result to my S3 bucket; the result files don't follow a specific naming pattern.
Suppose the following files are present in the bucket right now.
2017-12-06 11:40:47 93185 RAW_D_3600_S_1_1294573351559346926-0.metadata
2017-12-06 11:40:47 167170 RAW_D_3600_S_1_1294573351559346926-0.txt
2017-12-06 10:55:54 21 USERENROLL_1_1-0.metadata
2017-12-06 10:55:54 190 USERENROLL_1_1-0.txt
2017-12-05 17:56:36 174 USERENROLL_1_1-duke1.csv
2017-12-06 11:13:45 105 USERENROLL_1_7-0.metadata
2017-12-06 11:13:45 599 USERENROLL_1_7-0.txt
2017-12-06 11:15:51 126 USERENROLL_1_8-0.metadata
2017-12-06 11:15:51 600 USERENROLL_1_8-0.txt
2017-12-06 10:59:26 21 USERENROLL_1_9-0.metadata
2017-12-06 10:59:26 181 USERENROLL_1_9-0.txt
I want to download the file which was created on 2017-12-06 11:40:47.
I fetch the file by storing the current time and then searching the bucket for files generated after the stored time. But this doesn't always give accurate results, as the file might take longer to be generated.
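For illustration, here is a minimal sketch of that approach using boto3 (the bucket name is hypothetical, and the listing is assumed to fit in one page):
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")
bucket = "my-results-bucket"  # hypothetical name

# Record the time just before starting the job that produces the output.
started_at = datetime.now(timezone.utc)

# ... the job runs and eventually writes its result files to the bucket ...

# Afterwards, list the bucket and keep only objects modified after that time.
resp = s3.list_objects_v2(Bucket=bucket)
new_objects = [o for o in resp.get("Contents", [])
               if o["LastModified"] > started_at]
for obj in new_objects:
    s3.download_file(bucket, obj["Key"], obj["Key"])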
Is there a way to tackle this problem?
PS - Can't change the naming convention being used.
I've got gperftools installed and collecting data, which looks reasonable so far. I see one node that gets sampled a lot, but I'm interested in the callers to that node, and I don't see them. I've tried callgrind/kcachegrind as well; I feel like I'm missing something. Here's a snippet of the output when using --text:
Total: 1844 samples
573 31.1% 31.1% 573 31.1% US_strcpy
185 10.0% 41.1% 185 10.0% US_strstr
167 9.1% 50.2% 167 9.1% US_strlen
63 3.4% 53.6% 63 3.4% PS_CompressTable
58 3.1% 56.7% 58 3.1% LX_LexInternal
51 2.8% 59.5% 51 2.8% US_CStrEql
47 2.5% 62.0% 47 2.5% 0x40472984
40 2.2% 64.2% 40 2.2% PS_DoSets
38 2.1% 66.3% 38 2.1% LX_ProcessCatRange
So I'm interested in seeing the callers to US_strcpy, but I don't seem to have any. I do get a nice call graph from kcachegrind for 0x40472984 (I'm still trying to match that address to a symbol).
There are several ways:
a) pprof --web or kcachegrind will show you callers nicely if the data is captured correctly. It is sometimes useful to do pprof --traces (only with the github.com/google/pprof version), which is somewhat like the low-tech method that Mike mentioned above.
b) If the data is really unavailable, you have a problem with stack-trace capturing and/or symbolization. For that, build gperftools with libunwind and build all of your program with debug info.
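For reference, a typical capture-and-view cycle with the gperftools CPU profiler looks like the following (program name and file paths are hypothetical): compile with debug info (-g), link against libprofiler, and point the CPUPROFILE environment variable at an output file:
gcc -g -O2 myprog.c -o myprog -lprofiler
CPUPROFILE=myprog.prof ./myprog
pprof --text ./myprog myprog.prof
pprof --web ./myprog myprog.prof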
I copied a FORTRAN IV program from a thesis, so it presumably worked at the time it was written. I compiled it with gfortran. When running, it stalls in an integration subroutine. I have tried easing off the residuals but to no avail. I am asking for help because (presuming no mistakes in code) gfortran might not like the archaic 66/IV code, and updating it is outside my abilities.
The program gets stuck by line 9, so I wonder if the DO loops are responsible. Note, lines 1 and 6 are unusual to me because ',1' has been added to the ends: e.g. =1,N,1.
I don't think it's necessary to show the FUNC subroutine called on line 5 but am happy to provide it if necessary.
If you need more detailed information I am happy to provide it.
00000001 13 DO 22 TDP=QDP,7,1
00000002 TD=TDP-1
00000003 X=X0+H0
00000004 IF(TD.EQ.QD) GOTO 15
00000005 CALL FUNC(N,DY,X,Y,J)
00000006 15 DO 21 RD=1,N,1
00000007 GOTO (120,121,122,123,124,125,126),TDP
00000008 120 RK(5*N*RD)=Y(RD)
00000009 GOTO 21
00000010 121 RK(RD)=HD*DY(RD)
00000011 H0=0.5*HD
00000012 F0=0.5*RK(RD)
00000013 GOTO 20
00000014 122 RK(N+RD)=HD*DY(RD)
00000015 F0=0.25*(RK(RD)+RK(N+RD))
00000016 GOTO 20
00000017 123 RK(2*N+RD)=HD*DY(RD)
00000018 H0=HD
00000019 F0=-RK(N+RD)+2.*RK(2*N+RD)
00000020 GOTO 20
00000021 124 RK(3*N+RD)=HD*DY(RD)
00000022 H0=0.66666666667*HD
00000023 F0=(7.*RK(RD)+10.*RK(N+RD)+RK(3*N+RD))/27.
00000024 GOTO 20
00000025 125 RK(4*N+RD)=HD*DY(RD)
00000026 H0=0.2*HD
00000027 F0=(28.*RK(RD)-125.*RK(N+RD)+546.*RK(2*N+RD)+54.*RK(3*N+RD)-
00000028 1378.*RK(4*N+RD))/625.
00000029 GOTO 20
00000030 126 RK(6*N+RD)=HD*DY(RD)
00000031 F0=0.1666666667*(RK(RD)+4.*RK(2*N+RD)+RK(3*N+RD))
00000032 X=X0+HD
00000033 ER=(-42.*RK(RD)-224.*RK(2*N+RD)-21.*RK(3*N+RD)+162.*RK(4*N+RD)
00000034 1+125.*RK(6*N+RD))/67.2
00000035 YN=RK(5*N+RD)+F0
00000036 IF(ABS(YN).LT.1E-8) YN=1
00000037 ER=ABS(ER/YN)
00000038 IF(ER.GT.G0) GOTO 115
00000039 IF(ED.GT.ER) GOTO 20
00000040 QD=-1
00000041 20 Y(RD)=RK(5*N+RD)+F0
00000042 21 CONTINUE
00000043 22 CONTINUE
It's difficult to be certain (I'm not entirely sure your snippet exactly matches your source file), but your problem might arise from an old FORTRAN gotcha: a 0 in column 6 is (or rather was) treated as a blank. Any other non-blank character in column 6 is/was treated as a continuation indicator, but not the 0.
Not all f66 compilers adhered to the convention of executing a loop at least once, but it was a common (non-portable) assumption.
Similarly, the assumption that all variables are static was not a portable one, but it can be made explicit by adding a SAVE statement, beginning with f77. The further assumption that SAVE variables are zero-initialized is even more non-portable, but most compilers have an option to implement it.
If an attempt is being made to resurrect old code, it is probably worthwhile to get it working before modernizing it incrementally so as to make it more self-documenting. The computed GOTO looks like a relatively sane one, which could be replaced by SELECT CASE at a possible expense of optimization; here the recent uses of the term "modernization" become contradictory.
The ,1 bits are there to get the compiler to spot errors. It is quite common to do the following
DO 10 I = 1.7
That is perfectly legal, since spaces are allowed in variable names: the compiler reads it not as a DO loop but as an assignment of 1.7 to a variable named DO10I. If you wish to avoid that, put in the extra number. The following will generate errors
DO 10 I = 1.7,1
DO 10 I = 1,7.1
DO 10 I = 1.7.1
Re the program getting stuck, try putting a CONTINUE statement between labels 21 and 22. The IF-GOTO is the same as if-not-then in the later versions of Fortran, and the computed GOTO is the same as a select statement. You don't need to recode it: there is nothing wrong with it other than youngsters getting confused whenever they see a GOTO. All you need to do is indent it and it becomes obvious. So what you will have is
DO 22 TDP = QDP, 7, 1
...
DO 23 RD = 1, N, 1
GOTO (...) TDP
...
GOTO 21
...
GOTO 20
...
GOTO 20
...
20 CONTINUE
Y(RD) = ...
21 CONTINUE
23 CONTINUE
22 CONTINUE
You will probably end up with far more code if you try recoding it, and it will look exactly the same except that the GOTOs have been replaced by other words. It is possible that the compiler is generating wrong code, so just help it by putting in a few dummy (CONTINUE) statements.
So recently I was playing around with Django on the Jython platform and wanted to see its performance in "production". The site I tested with was just a simple return HttpResponse("Time %.2f" % time.time()) view, so no database involved.
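For reference, the whole view under test is the one-liner mentioned above; a minimal sketch, with module and function names being my own invention:
import time
from django.http import HttpResponse

def time_view(request):  # hypothetical name; mapped to the tested URL
    return HttpResponse("Time %.2f" % time.time())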
I tried the following two combinations (measurements done with ab -c15 -n500 -k <url>, everything in Ubuntu Server 10.10 on VirtualBox):
1) J2EE application server (Tomcat/Glassfish), deployed WAR file
I get results like
Requests per second: 143.50 [#/sec] (mean)
[...]
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 16
80% 16
90% 31
95% 31
98% 641
99% 3219
100% 3219 (longest request)
Obviously, the server hangs for a few seconds once in a while, which is not acceptable. I assume it has something to do with reloading Jython because starting the jython shell takes about 3 seconds, too.
2) AJP serving using patched flup package (+ Apache as frontend)
Note: flup is the package used by manage.py runfcgi; I had to patch it because flup's threading/forking support doesn't seem to work on Jython (so AJP was the only method that worked).
Almost the same results here, but sometimes the last 100 requests don't even get answered at all (but server process still alive).
I'm asking this on SO (instead of serverfault) because it's very Django/Jython-specific. Does anyone have experience with deploying Django sites on Jython? Is there maybe another (faster) way to serve the site? Or is it just too early to use Django on the Java platform?
Since nobody replied, I investigated a bit more, and it seems my problem might have to do with VirtualBox. Using different server OSes (Debian Squeeze, Ubuntu Server), I saw similar problems. For example, with simple static file serving, I got this result from the Apache web server (on Debian):
> ab -c50 -n1000 http://ip.of.my.vm/some/static/file.css
Requests per second: 91.95 [#/sec] (mean) <--- quite impossible for static serving
[...]
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 22.1 0 688
Processing: 0 206 991.4 31 9188
Waiting: 0 96 401.2 16 3031
Total: 0 208 991.7 31 9203
Percentage of the requests served within a certain time (ms)
50% 31
66% 47
75% 63
80% 78
90% 156
95% 781
98% 844
99% 9141 <--- !!!!!!
100% 9203 (longest request)
This led me to think that the Java reloading might not be the problem here, but rather the virtualization. I will try it on a real host and leave this question unanswered till then.
FOLLOWUP
Now I successfully tested a bare-bones Django site (really just the welcome page) using Jython + AJP over TCP/mod_proxy_ajp on Apache (again with patched flup package). This time on a real host (i7 920, 6 GB RAM). The result proved that my above assumption was correct and that I really should never benchmark on a virtual host again. Here's the result for the welcome page:
Document Path: /jython-test/
Document Length: 2059 bytes
Concurrency Level: 40
Time taken for tests: 24.688 seconds
Complete requests: 20000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 43640000 bytes
HTML transferred: 41180000 bytes
Requests per second: 810.11 [#/sec] (mean)
Time per request: 49.376 [ms] (mean)
Time per request: 1.234 [ms] (mean, across all concurrent requests)
Transfer rate: 1726.23 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.5 0 20
Processing: 2 49 16.5 44 255
Waiting: 0 48 16.5 44 255
Total: 2 49 16.5 45 256
Percentage of the requests served within a certain time (ms)
50% 45
66% 48
75% 51
80% 53
90% 69
95% 80
98% 90
99% 97
100% 256 (longest request) # <-- no multiple seconds of waiting anymore
Very promising, I would say. The only downside is that the average request time is > 40 ms whereas the development server has a mean of < 3 ms.
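For anyone reproducing the setup above, the Apache side amounts to a mod_proxy_ajp rule along these lines (the port and path are assumptions specific to my setup):
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ProxyPass /jython-test/ ajp://127.0.0.1:8009/jython-test/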