I have been using Sonar for years now without any issues, a great pleasure… until last week, when I ran into a really strange issue, and I have absolutely no idea what is happening.
I am using GitLab pipelines to run separate unit tests and functional tests, then merge the two lcov and two xunit reports for Sonar, which gives me in the end one lcov file listing the code covered by both unit and functional tests, and one xunit file with all executed tests.
Then I pass these two files to our SonarQube 8.6 Developer Edition with sonar-scanner 4.5.0.2216.
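For context, the coverage half of that merge is standard lcov aggregation, roughly like this (a sketch; the per-suite file names are assumed):
lcov --add-tracefile unit-lcov.info --add-tracefile functional-lcov.info --output-file test-results/lcov-merged.info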
Here are the sonar-scanner parameters I am using:
sonar.projectKey=xxx
sonar.projectName=xxx
sonar.projectVersion=$CI_BUILD_ID
sonar.sources=src
sonar.tests=test
sonar.javascript.lcov.reportPaths=test-results/lcov-merged.info
sonar.testExecutionReportPaths=test-results/xunit-merged.xml
Here is the content of the final xunit-merged.xml file after merging both unit and functional reports.
<testExecutions version="1">
<file path="test\unit\index.test.ts">
<testCase name=": Save" duration="4"/>
<testCase name="Get functional logs: with empty query" duration="3"/>
<testCase name="Class: ManagementFormService: Function: get (GET handler)" duration="4"/>
<testCase name="Class: ManagementFormService: Function: update (POST handler)" duration="4"/>
<testCase name="Module: Export management form service, controller and externally injected config service" duration="29"/>
<testCase name="Class: ManagementFormService: Function: createForm" duration="1"/>
<testCase name="Class: ManagementFormService: Function: save (success) and createForm" duration="1"/>
<testCase name="Get functionals logs by origin: From valid queries and request" duration="2"/>
<testCase name="Get functionals logs by origin: From valid queries but no request" duration="4"/>
<testCase name="Get functionals logs by origin: From valid queries and request" duration="1"/>
<testCase name="Get functionals logs by origin: From valid queries but no request" duration="1"/>
<testCase name="Get functionals logs by origin: From valid queries and request" duration="0"/>
<testCase name="Get functionals logs by origin: From valid queries but no request" duration="1"/>
<testCase name="Module version: Cache ON, Repository ON, Amqp ON: From valid UUID and request" duration="1"/>
<testCase name="Module version: Cache ON, Repository ON, Amqp ON: From valid UUID but no request" duration="0"/>
<testCase name="Module version: Cache OFF, Repository ON, Amqp ON: From valid UUID and request" duration="0"/>
<testCase name="Module version: Cache OFF, Repository ON, Amqp ON: From valid UUID but no request" duration="0"/>
<testCase name="Module version: Cache OFF, Repository OFF, Amqp ON: From valid UUID and request" duration="1"/>
<testCase name="Module version: Cache OFF, Repository OFF, Amqp ON: From valid UUID but no request" duration="1"/>
<testCase name="Module version: Cache OFF, Repository OFF, Amqp OFF: From valid UUID and request" duration="0"/>
<testCase name="Module version: Cache OFF, Repository OFF, Amqp OFF: From valid UUID but no request" duration="1"/>
</file>
<file path="test\functional\index.test.ts">
<testCase name="Boot: Boot and log booting status" duration="163"/>
</file>
</testExecutions>
It contains 22 tests: 21 unit tests executed by one test file (test\unit\index.test.ts) and 1 functional test executed by another test file (test\functional\index.test.ts).
The issue here is that Sonar only counts the 21 unit tests and ignores the last one (as you can see in the capture).
The last test, the functional one, is ignored even though it is clearly present in the XML.
In the exact same way, the code coverage is reduced. All the source code covered by the unit tests is marked as covered, but the source code covered by the functional test is marked as uncovered, while lcov-merged.info clearly indicates that it has been tested correctly.
As you can see in this capture, lines 45, 46 and 49 of the file src/technical/technical.controller.rest.ts are marked as uncovered, while the lcov-merged.info report clearly indicates that these lines are covered.
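In the usual LCOV layout, the record for that file would look something like this (an illustrative sketch, not a verbatim excerpt; hit counts are assumed):
SF:src/technical/technical.controller.rest.ts
DA:45,1
DA:46,1
DA:49,1
end_of_record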
If Sonar said there were no tests and no coverage, I would think something was malformed in the report files. But here it's like Sonar is cherry-picking what it wants to display, and I don't know why.
All the source code is in the same src source path, accessible by sonar-scanner, and browsable in Sonar (which indicates it has been analyzed correctly).
I have been searching for a full week now for why part of both reports would be ignored while the rest of the files is taken into account, and I still have absolutely no idea.
Can someone help me?
Finally found out the issue... I am dumb.
I was triggering the pipelines in GitLab for the merge requests, but in Sonar I was looking at the branch, not the merge requests.
After configuring it correctly, everything works fine: the functional tests are listed and the coverage is correct.
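For anyone else hitting this: with Developer Edition, pointing the analysis at the merge request instead of the branch is done with the pull-request parameters; in GitLab CI that looks roughly like this (a sketch using GitLab's predefined variables):
sonar.pullrequest.key=$CI_MERGE_REQUEST_IID
sonar.pullrequest.branch=$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
sonar.pullrequest.base=$CI_MERGE_REQUEST_TARGET_BRANCH_NAME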
I have recently installed Artifactory OSS 6.5.2 on a remote server in our network running Windows Server 2012.
I can enter the UI locally (the machine running the Artifactory instance) through any of the browsers with this address:
"http://{local-ip}:8081/artifactory/webapp/#/"
When I try entering the UI from one of the machines on the network I get a "This site can’t be reached" message after multiple attempts to connect.
The request.log at {ARTIFACTORY_HOME}\logs\request.log shows that the request got through and succeeded:
"REQUEST|{remote-ip}|anonymous|GET|/webapp/|HTTP/1.1|200|0"
The same is shown for requests coming from the server running the Artifactory instance:
"REQUEST|{local-ip}|anonymous|GET|/webapp/|HTTP/1.1|200|0"
However, contrary to the request from the remote machine, the initial local request is followed by more requests:
"REQUEST|{local-ip}|anonymous|GET|/ui/auth/screen/footer|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/treebrowser/repoOrder|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/onboarding/initStatus|HTTP/1.1|200|0
REQUEST|{local-ip}|anonymous|GET|/ui/auth/current|HTTP/1.1|200|0"
I thought maybe there is an automatic redirection that uses 'localhost' instead of the IP or hostname, so I tried changing {ARTIFACTORY_HOME}\tomcat\conf\server.xml:
<Service name="Catalina">
<Connector port="8081" sendReasonPhrase="true" relaxedPathChars='[]' relaxedQueryChars='[]'/>
<!-- Must be at least the value of artifactory.access.client.max.connections -->
<Connector port="8040" sendReasonPhrase="true" maxThreads="50"/>
<!-- This is the optional AJP connector -->
<Connector port="8019" protocol="AJP/1.3" sendReasonPhrase="true"/>
<Engine name="Catalina" defaultHost="localhost">
<Host name="{hostname}" appBase="webapps" startStopThreads="2"/> <!-- changed from name="localhost" -->
</Engine>
</Service>
But then Artifactory failed to initialize:
"[art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:643) -
Using Access Server URL: http://localhost:8040/access (bundled)
source: detected
[art-init] [INFO ] (o.a.s.a.AccessServiceImpl:308) - Waiting for
access server...
[art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) -
Unrecognized ErrorsModel by Access. Original message: Failed on
executing /api/v1/system/ping, with response: Not Found"
I did not set any proxies or reverse proxies as I don't think it's related, but I may be mistaken as I don't have a lot of experience with web services.
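In case it's relevant, I have not yet ruled out the Windows firewall on the server either; the rule I would try adding looks like this (a sketch, assuming the default 8081 port):
netsh advfirewall firewall add rule name="Artifactory 8081" dir=in action=allow protocol=TCP localport=8081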
Any ideas or suggestions?
Thnx,
Tom.
I was deploying Artifactory 6 via Helm, then upgraded to 6.8.2 and ran into this.
I had to run:
cd $ARTIFACTORY_HOME && chown -R artifactory:artifactory .
Artifactory itself, on startup, seemed unable to deploy access.war, and then perhaps was also unable to read the credentials it needed to hit this /access context health-check "ping" API endpoint.
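A quick way to verify whether the access service actually came up is to hit the same ping endpoint from the error message directly:
curl -v http://localhost:8040/access/api/v1/system/ping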
All requests to a specific server are timing out with the error ETIMEDOUT.
[...]$ newman -c Test.json.postman_collection
Iteration 1 of 1
RequestError: [223f1c83-1bb6-b40c-acc7-d90a2dd4e4ce] 'HB Heart Beat' terminated. Complete error:
Error: ETIMEDOUT
at null._onTimeout (~/.nvm/versions/node/v0.12.9/lib/node_modules/newman/node_modules/request/request.js:808:15)
at Timer.listOnTimeout (timers.js:119:15)
The tests work in Postman and Collection Runner. I can hit the target server using curl in bash. I'm not experienced enough to dig into the Newman errors further than this, and any help would be appreciated.
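For reference, the bash call that does work is roughly the following (IP elided, as in the raw request below):
curl -X POST "http://<IP ADDRESS GOES HERE>/" --data '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><HB timestamp="123456789" xmlns="http://google.com"/>'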
The actual request is simple. I've stripped out any environment variables to see if I could get it working:
POST HTTP/1.1
Host: http://<IP ADDRESS GOES HERE>
Content-Length: 161
Cache-Control: no-cache
Postman-Token: 0e650324-356e-0a21-6ee1-2d7731a3f28c

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<HB timestamp="123456789" xmlns="http://google.com"/>
This same behavior happens in the latest version of Newman and in the beta version for Node 4.0+. There is a bug mentioned in the Newman git repo that I think may have something to do with it, but other requests are processing, so I wanted to be sure.
Anything?
This specific behavior was caused by a difference in the Content-Length header name between Postman and Newman. Postman sends the capitalized 'Content-Length' while Newman's node packages use the lowercase 'content-length'. The process I was talking to did not recognize the lowercase version.
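If you control the receiving service, the robust fix is to compare header names case-insensitively, since HTTP/1.1 header field names are case-insensitive by spec; a minimal Node-style sketch (a hypothetical helper, not part of Newman):
// Hypothetical helper: HTTP/1.1 header field names are case-insensitive,
// so normalize before comparing instead of matching 'Content-Length' literally.
function getHeader(headers, name) {
  var target = name.toLowerCase();
  var keys = Object.keys(headers);
  for (var i = 0; i < keys.length; i++) {
    if (keys[i].toLowerCase() === target) return headers[keys[i]];
  }
  return undefined;
}
// Works for 'Content-Length', 'content-length', 'CONTENT-LENGTH', ...
var length = getHeader({ 'content-length': '161' }, 'Content-Length');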
I'm working on camera firmware to support the ONVIF specs (using Java). I'm using Apache CXF to run the services. I've imported remotediscovery.wsdl and devicemgmt.wsdl and created services for them.
I'm sure the services are running and bound to SOAP UDP:
Starting Server
[Fatal Error] addressing:2:2: The markup in the document following the root element must be well-formed.
28.11.2014 16:01:12 org.apache.cxf.service.factory.ReflectionServiceFactoryBean isEmptywsdl
28.11.2014 16:01:12 org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromClass
INFO: Creating Service {http://www.onvif.org/ver10/network/wsdl}DiscoveryService from class org.onvif.ver10.network.wsdl.DiscoveryLookupPort
28.11.2014 16:01:13 org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be http://localhost:8080/onvif/device_service
28.11.2014 16:01:13 org.eclipse.jetty.server.Server doStart
INFO: jetty-8.1.15.v20140411
28.11.2014 16:01:13 org.eclipse.jetty.server.AbstractConnector doStart
INFO: Started SelectChannelConnector@localhost:8080
28.11.2014 16:01:13 org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromWSDL
INFO: Creating Service {http://docs.oasis-open.org/ws-dd/ns/discovery/2009/01}Discovery from WSDL: classpath:/org/apache/cxf/ws/discovery/wsdl/wsdd-discovery-1.1-wsdl-os.wsdl
28.11.2014 16:01:13 org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be soap.udp://239.255.255.250:3702
28.11.2014 16:01:13 org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromClass
INFO: Creating Service {http://docs.oasis-open.org/ws-dd/ns/discovery/2009/01}DiscoveryProxy from class org.apache.cxf.jaxws.support.DummyImpl
Starting Server
28.11.2014 16:01:31 org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromWSDL
INFO: Creating Service {http://www.onvif.org/ver10/device/wsdl}DeviceService from WSDL: /tmp/wsdl/devicemgmt.wsdl
28.11.2014 16:01:51 org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be http://localhost:8080/onvif/DeviceService/device
Starting Server
[Fatal Error] addressing:2:2: The markup in the document following the root element must be well-formed.
28.11.2014 16:01:58 org.apache.cxf.service.factory.ReflectionServiceFactoryBean isEmptywsdl
28.11.2014 16:01:58 org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromClass
INFO: Creating Service {http://www.onvif.org/ver10/network/wsdl}DiscoveryService from class org.onvif.ver10.network.wsdl.RemoteDiscoveryPort
28.11.2014 16:01:58 org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be http://localhost:8080/service/DiscoveryService/discovery
Server ready...
I've tested it using CXF WSDiscoveryClient and they can be found:
16 clients found
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><EndpointReference xmlns="http://www.w3.org/2005/08/addressing"><Address>http://localhost:8080/onvif/device_service</Address><ReferenceParameters/></EndpointReference>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><EndpointReference xmlns="http://www.w3.org/2005/08/addressing"><Address>http://localhost:8080/onvif/DeviceService/device</Address><ReferenceParameters/></EndpointReference>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><EndpointReference xmlns="http://www.w3.org/2005/08/addressing"><Address>http://localhost:8080/service/DiscoveryService/discovery</Address><ReferenceParameters/></EndpointReference>
...
But I can't see my device using SmartICRSS or any ONVIF client.
What's wrong? I expect my device to be discovered by WS-Discovery, and then I expect the Probe method to be invoked by the client. I have a breakpoint in it, and it's not invoked.
PS. I've installed a network traffic interceptor app and found that production IP cameras respond with a ProbeMatches response (in the SOAP body), while CXF's implementation does not reply. How can I make the CXF service reply with ProbeMatches?
WS-Discovery is not ONVIF-specific; it is a plain XML SOAP specification:
http://specs.xmlsoap.org/ws/2005/04/discovery/ws-discovery.pdf
And the version you have to use is the April 2005 one, not the OASIS 2009 version ({http://docs.oasis-open.org/ws-dd/ns/discovery/2009/01}) that appears in your log.
Please refer to
https://code.google.com/p/java-ws-discovery/
We have a CD and a CM server configured. We want the CM server to process ECM emails, but I cannot find any documentation on how to turn this off on the CD server.
Does anyone know how to turn this off?
Remove the processors from the initialize pipeline and the task agent from the scheduling section of Sitecore.EmailCampaign.config. You can use a patch config:
<pipelines>
<initialize>
<processor type="Sitecore.EmailCampaign.Presentation.UI.Pipelines.Loader.InitializePresenterBinder, Sitecore.EmailCampaign.Presentation.UI">
<patch:delete />
</processor>
<processor type="Sitecore.EmailCampaign.Presentation.UI.Pipelines.Loader.ConfigurePresenterBinderContainer, Sitecore.EmailCampaign.Presentation.UI">
<patch:delete />
</processor>
</initialize>
</pipelines>
<scheduling>
<agent hint="ECM">
<patch:delete />
</agent>
</scheduling>
You also need to make sure the module files and a connection string to a web service on the CM are added in your CD environment. See section 3.4.2 of the Email Campaign Manager Developer Guide for more details.
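For reference, the connection string on the CD side looks something like this (treat the name and URL as a sketch; check the guide for the exact values for your ECM version):
<!-- Hypothetical example; the exact name and path depend on the ECM version -->
<add name="EmailCampaignClientService" connectionString="http://{cm-hostname}/sitecore%20modules/web/EmailCampaign/ECMClientService.asmx" />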
I am trying to create a build script using Visual Build Pro v7 for an Analysis Services cube I have created. The cube deploys to my local machine without issue using the build script steps below.
Other than replacing the target server in the .dwproj.user file, and backing up and removing any traces of a possible previous version of the cube, my build script just contains these steps:
Step 1 : "%VS2008IDE%\devenv.exe" "%PROJDIR%\%CUBE_NAME%.sln" /Build
Step 2 : "%SSAS_Deploy_EXE%" "%PROJDIR%\%CUBE_NAME%\bin\%CUBE_NAME%.asdatabase" /s /o:"%PROJDIR%\deployscript.xmla"
Step 3 : "%ASCMD_LOCATION%" -S %CUBE_SQL_INSTANCE% -U DOMAIN\%UID% -P %PWD% -i "%PROJDIR%\deployscript.xmla"
The cube's data source is a MySQL db. The build fails on Step 3 when deploying to a remote server.
I have downloaded and installed MySQL Connector/NET on the server, but when I run the build script I get the following error:
<return xmlns="urn:schemas-microsoft-com:xml-analysis">
<results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
<root xmlns="urn:schemas-microsoft-com:xml-analysis:empty"/>
<root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
<Exception xmlns="urn:schemas-microsoft-com:xml-analysis:exception" />
<Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
<Error ErrorCode="3238002695" Description="Internal error: The operation terminated unsuccessfully." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
<Error ErrorCode="3239116921" Description="Errors in the back-end database access module. The managed provider 'MySql.Data.MySqlClient' could not be instantiated." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
<Error ErrorCode="3239182436" Description="Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Linkdex', Name of 'Linkdex'." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
<Error ErrorCode="3240034316" Description="Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Keyword', Name of 'Keyword' was being processed." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
<Error ErrorCode="3240034317" Description="Errors in the OLAP storage engine: An error occurred while the 'Project Id' attribute of the 'Keyword' dimension from the 'LinkDexCube' database was being processed." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
<Error ErrorCode="3239837698" Description="Server: The operation has been cancelled." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
</Messages>
</root>
<root xmlns="urn:schemas-microsoft-com:xml-analysis:empty"/>
</results>
</return>
When I check the .asdatabase and the .xmla, I can see that the user ID and password details from my ConnectionString have been removed. I'm not sure why this is or where it happens.
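If the credentials are indeed stripped on save, I assume I would need to re-add them to the generated deployscript.xmla before Step 3 runs, along these lines (values anonymized; the surrounding XMLA elements are omitted):
<DataSource>
<ID>Linkdex</ID>
<Name>Linkdex</Name>
<ConnectionString>Server={server};Database={db};Uid={user};Pwd={password};</ConnectionString>
</DataSource>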
Does anyone have any ideas what's happening? Is it likely to be a permissions issue, something to do with the MySQL connector, or some third option?