Unable to use Quality Gates - SonarQube 5.6

I defined a Quality Gate and can already see the project violating the threshold, but it does not show any error/warning message.
Please refer to the attached screenshot, which shows the Quality Gate I set for the project, yet at the top it says "You should define a quality gate on this project."

This appears to be the case when you have defined a quality gate but not yet run any scans using it. Once you have run your first scan, you should see the Quality Gate summary at the top as expected.


How to make the Pepper tutorial work with Set Language and dialogue?

It's my first time working with Pepper, and I am working locally on my computer with no connection to a real Pepper. I am trying to follow the steps from http://doc.aldebaran.com/2-8/getting_started/helloworld_choregraphe_dialog.html#helloworld-choregraphe-dialog .
I am getting the error messages:
[WARN ] behavior.box :getService:16
_Behavior__lastUploadedChoregrapheBehaviorbehavior_193854592:/Set Language_2: ALSpeechRecognition is not available, language setting cannot be applied to recognition.
[ERROR] behavior.box :onInput_onSet:49
_Behavior__lastUploadedChoregrapheBehaviorbehavior_193854592:/Set Language_2: Language English could not be set for one or more services.
I tried following all the steps in the tutorial and removed the German language, which I originally intended to use. I have linked the modules correctly as well, and I would be really happy if you could tell me which mistake I need to fix in order to get it to work.
Thanks in advance and best regards
You most likely forgot to define the project's language.
Click on Properties (on the left side of Choregraphe, next to the blue cube) and then, in the window that opens, select the languages of your choice in the right column. While you are there, don't forget to give the application an ID.
Setting the language has no effect on a virtual robot, because it has no TTS or ASR.
You can safely ignore the error or catch it, for instance by adding an onError output wired to the rest of your behavior.
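If you prefer to catch it explicitly, a box script along these lines illustrates the idea. This is only a rough sketch: it assumes the NAOqi 2.8 box API (GeneratedClass, self.session()) and a box with an onSet input plus user-defined onReady/onError outputs, so rename things to match your own box.

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onInput_onSet(self, language):
        try:
            # ALTextToSpeech / ALSpeechRecognition only exist on a real robot.
            tts = self.session().service("ALTextToSpeech")
            tts.setLanguage(language)
            asr = self.session().service("ALSpeechRecognition")
            asr.setLanguage(language)
        except Exception as err:
            # On a virtual robot there is no TTS/ASR: log it and carry on.
            self.logger.warning("Language not set: %s" % err)
            self.onError(str(err))
            return
        self.onReady()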

Cumulative log in SAS EG

Long story short: I am familiar with Base SAS 9 and now using EG (7.1) due to a new role with another company. The transition is painful, but the one thing that bothers me the most is the log.
As I am sure most know, it is rewritten/refreshed for every piece of code you execute.
Surely there must be an option to maintain a "running log" within the SAS code you are running/building (not necessarily for the whole project, but just for the program node within the project).
Can this be done?
Any assistance is greatly appreciated. I searched for some references, but found none citing this subject specifically.
Yes - from SAS's support pages:
You’ll notice that a separate log node is generated for each code node. By turning on Project Logging, you can easily tell Enterprise Guide that you’d like a single SAS log to be generated for all of the tasks and code nodes in your project. This single Project Log will be created in addition to the individual logs created for each task or code node.
Helpful Hint: If Project Logging is turned on, the log represents a running log of the entire project. To turn on the Project Logging, select Project Log in the Context Menu of the Process Flow, and then select Turn On.

How to access PSSE working case variables from Python code

I am a transmission planning engineer trying to automate the execution of PSSE 100 times or more in one go through a Python script. I can already run PSSE, change loads, rerun it, and write a bus-based summary report to a *.csv file. What I really want to do is select the first active-power load variable of a PSSE case and increase it by 1 MW, then run PSSE and write the results to a CSV file. After that, change the selected load back to its original value and move on to the next active load, repeating this until I have done the same for all load buses.
This will help me calculate transmission loss factors for the entire network in one go.
Thanks
#dsmtlk, if you're experienced in Python, you can readily find the information you need in the PSSE API Manual located in your PSSE program folder (mine is in C:\Program Files (x86)\PTI\PSSE33\DOCS). The API routines for getting bus data are in section 8.6. The routine for changing load data, viz. psspy.load_data_4(), is in section 2.21.
If you're new to Python, here are a couple links I found helpful when I first started:
https://docs.python.org/2/tutorial/
http://www.tutorialspoint.com/python/
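If it helps, here is a rough Python sketch of the loop you describe. The psspy routines used are the ones from the manual sections above, but the argument lists are abbreviated (only the PL entry of REALAR is changed and the wrapper is assumed to pad the rest with defaults), so verify the exact INTGAR/REALAR layouts in section 2.21 before relying on it; the case file and CSV names are placeholders.

import csv
import psspy

psspy.psseinit(10000)                  # bus size limit is arbitrary here
psspy.case(r"mycase.sav")              # placeholder saved case

# Bus number, load ID and current complex power of all in-service loads
# (subsystem data retrieval routines, chapter 8 of the API manual).
ierr, (buses,) = psspy.aloadint(-1, 1, "NUMBER")
ierr, (ids,) = psspy.aloadchar(-1, 1, "ID")
ierr, (sload,) = psspy.aloadcplx(-1, 1, "MVAACT")

with open("loss_factors.csv", "wb") as f:   # PSSE 33 ships with Python 2
    writer = csv.writer(f)
    writer.writerow(["bus", "id", "base_P_MW"])
    for bus, ld_id, s in zip(buses, ids, sload):
        p0 = s.real
        # Bump this load's active power by 1 MW (LOAD_DATA_4, section 2.21).
        psspy.load_data_4(bus, ld_id, [], [p0 + 1.0])
        psspy.fnsl()                        # full Newton-Raphson solution
        if psspy.solved() != 0:
            print("Did not converge for bus %d load %s" % (bus, ld_id))
        # ... extract and write whatever bus-based results you already report ...
        writer.writerow([bus, ld_id, p0])
        # Restore the original load before moving on to the next one.
        psspy.load_data_4(bus, ld_id, [], [p0])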

Load existing Model in Weka Knowledge Flow

I am trying to plot multiple ROC curves in the same diagram in Weka. I have learnt that I can do this in Weka Knowledge Flow using "Model Performance Chart". However, I can't figure out how to do this for existing models.
I have tried using ArffLoader and TestSetMaker to generate the testing data, and connected this to a suitable Classifier icon (e.g. AdaBoostM1 when that is the kind of model I am trying to load). In the configuration of the Classifier icon I choose "load model", and in the status bar it says "Loaded model.". However, when I run this it says "ERROR: no trained/loaded classifier to use for prediction".
Can anyone tell me what I am doing wrong here? Thanks in advance!
There is a post published here that points out some ambiguity in the meaning of the error. It also states that the order of attributes and the number and order of values are rather important.
It also states that 'for performance results to be computed, your Knowledge Flow process will need a "ClassifierPerformanceEvaluator" component after the classifier and before a TextViewer component.'
If you are new to the KnowledgeFlow environment, there is a great tutorial here from Rushdi Shams that details the general process.
Below is a sample workflow that has generated desirable results using AdaBoost (preloaded model):
Hope this helps!

What do you need from a test harness?

I'm one of the people involved in the Test Anything Protocol (TAP) IETF group (if interested, feel free to join the mailing list). Many programming languages are starting to adopt TAP as their primary testing protocol and they want more from it than what we currently offer. As a result, we'd like to get feedback from people who have a background in xUnit, TestNG or any other testing framework/methodology.
Basically, aside from a simple pass/fail, what information do you need from a test harness? Just to give you some examples:
Filename and line number (if applicable)
Start and end time
Diagnostic output such as the difference between what you got and what you expected.
And so on ...
Most definitely all things from your list for each individual item:
Filename
Line number
Namespace/class/function name
Test coverage
Start time and end time
And/or total time (this would be more useful for me than the top two items)
Diagnostic output such as the difference between what you got and what you expected.
Off the top of my head, not much else, but for a group of tests I would like to know:
group name
total execution time
It must be very, very easy to write a test, and equally easy to run them. That, to me, is the single most important feature of a testing harness. If someone has to fire up a GUI or jump through a bunch of hoops to write a test, they won't use it.
An arbitrary set of tags - so I can mark a test as, for example "integration, UI, admin".
(you knew I was going to ask for this didn't you :-)
To what you said I'd add:
Method/function/class name
Coverage counting tool, with exceptions (Do not count these methods)
Result of N last runs available
Mandate that ways to easily parse test results must exist
Any sort of diagnostic output, especially on failure, is critical. If a test fails, you don't want to always have to rerun the test under a debugger to see what happened; there should be some clues in the output.
I also like to see a before and after snapshot of critical system variables like memory or hard disk space available as those can provide great clues as well.
Finally, if you're using random seeds for any of the tests, write the seed out to the logfile so that the test can be reproduced if necessary.
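For instance, a test script can emit the seed as a TAP diagnostic line; here is a trivial Python illustration (the test itself is just a stand-in):

import random
import time

# Pick a seed, make the run reproducible, and record the seed as a TAP
# comment so a failing run can be replayed with exactly the same data.
seed = int(time.time())
random.seed(seed)
print("# random seed: %d" % seed)

data = [random.randint(0, 1000) for _ in range(50)]
print("1..1")
if sorted(data) == sorted(sorted(data)):
    print("ok 1 - sorting is idempotent")
else:
    print("not ok 1 - sorting is idempotent")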
I'd like the ability to concatenate and nest TAP streams.
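For reference, Perl's Test::More subtests already nest a child TAP stream by indenting it and ending with a summary line in the parent stream, roughly like this (the exact layout varies between Test::More versions):

1..2
    1..2
    ok 1 - nested check
    ok 2 - another nested check
ok 1 - nested group
ok 2 - plain top-level check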
A unique id (uuid, md5sum) to be able to identify an individual test -- say, for use when inserting test results in a database, or identifying them in a bug tracker to make it possible for QA to rerun an individual test.
This would also make it possible to trace an individual test's behavior from build-to-build through the entire lifecycle of multiple revisions of a product. This could eventually allow larger-scale correlations between 'historic' events (new hire, product release, hardware upgrades) and the profile(s) of tests that fail as a result of such events.
I'm also thinking that TAP should be emitted through a dedicated side-channel rather than mixed in with stdout. I'm not sure this is under the scope of the protocol definition.
I use TAP as the output protocol for a set of simple C++ test methods, and have seen the following shortcomings:
test steps cannot be put into groups (there is only the grouping into several test scripts; but for running all the tests in our software, I need at least one more level of grouping, so that a single test step would be identified by something like "DB connection" -> "Reconnection Test" -> "test step #3")
seeing differences between expected and actual output is useful; I either print the diff to stderr (as comment) or actually launch a graphical diff tool
the protocol and tools must be really language-independent. For example, so far I only know of the Perl "prove" tool for running tests, which is limited to running Perl scripts
In the end, the test output must be suitable as basis for easily generating an HTML report file which lists succeeded tests very concisely, gives detailed output for failed tests, and makes it possible to quickly jump into the IDE to the failing test line.
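To give an idea of how little machinery that report needs, here is a small sketch in plain Python (no TAP library) that turns a TAP stream on stdin into a minimal HTML summary; it only understands ok/not ok result lines and "#" diagnostics, which is an assumption about how your producer formats its output:

import re
import sys

RESULT = re.compile(r"^(not )?ok\b\s*(\d*)\s*-?\s*(.*)$")

# Collect (passed, number, description, diagnostics) per result line;
# diagnostic "#" lines are attached to the result that precedes them.
rows = []
for line in sys.stdin:
    line = line.rstrip("\n")
    m = RESULT.match(line)
    if m:
        rows.append([m.group(1) is None, m.group(2), m.group(3), []])
    elif line.startswith("#") and rows:
        rows[-1][3].append(line.lstrip("# "))

passed = sum(1 for ok, _, _, _ in rows if ok)
print("<html><body>")
print("<h1>%d/%d tests passed</h1>" % (passed, len(rows)))
print("<table>")
for ok, num, desc, diags in rows:
    colour = "green" if ok else "red"
    detail = "" if ok else "<br>".join(diags)
    print('<tr style="color:%s"><td>%s</td><td>%s</td><td>%s</td></tr>'
          % (colour, num, desc, detail))
print("</table></body></html>")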
optional ASCII coloured output: green for good, yellow for pending, red for errors
the idea of things being pending
a summary at the end of the test report of commands that will run the individual tests where
something went wrong
something in the test was pending
Extension idea for TAP:
1..4
ok 1 - yay
not ok 2 - boo
ok 3 - yay #json:{...}
ok 4 - see my json
The ability to attach a #json comment (a minimal parsing sketch follows this list):
- can be safely ignored by existing code
- well-defined tags can be easily reserved at testanything.org
- easy to produce, parse and read complex types
- YAML is a pain
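Consuming such a tag is then trivial in any language; here is a minimal Python sketch (the #json tag is of course only the hypothetical extension proposed above):

import json
import re

# Pull an optional "#json:{...}" payload off the end of a TAP result line.
JSON_TAG = re.compile(r"#json:(\{.*\})\s*$")

def parse_tap_line(line):
    m = JSON_TAG.search(line)
    payload = json.loads(m.group(1)) if m else None
    ok = not line.startswith("not ok")
    return ok, payload

# Prints the pass/fail flag alongside the parsed payload.
print(parse_tap_line('ok 3 - yay #json:{"elapsed": 0.42}'))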