Enable debug mode at build time using Grunt

I would like to mock some services of a client-side web application during development, and use the real services in production.
It would be great if I could do this at build time by passing grunt a flag or something that I could then use in the code to know what endpoints to use.
My idea is to do something like this:
if (DEBUG) {
    service = MOCK_SERVICE;
} else {
    service = SERVICE;
}
and set this DEBUG flag at build time.
If there is a solution for this, or an even better alternative, I would much appreciate it.
Thanks

Try grunt.option:
var DEBUG = !!grunt.option('dbug');
Then you can enable that option any time with grunt --dbug. The option could be named anything though: http://gruntjs.com/api/grunt.option
Grunt internally uses an option named debug, which you could use as well, but it will make your output more verbose, so just be aware of that.

Related

Cannot use WebStorm console in 'live' mode

How do I view/peruse objects in WebStorm in console live mode?
In Chrome, when I am sitting on a breakpoint, I can type something into the console, like so:
myObject.myVal[0].elem
However, when I am in WebStorm, sitting on a breakpoint in one of my tests, I open the console but cannot type into it, and I cannot see any objects.
I seem to remember it used to be possible to do something like this. Is there some setting somewhere that I need to set?
[Update]
Yes, I was right. It was possible for me to do this in the past:
https://blog.jetbrains.com/phpstorm/2014/07/new-live-console-in-javascript-and-node-js-debugger/
This no longer works for me, or it is broken when running tests.
That makes WebStorm pretty useless for me at the moment.
'Live' mode is not available in the Test Frameworks Console; please vote for WEB-20297 to be notified of any progress with this feature.

How to debug ember-cli tests running in phantomjs

Context: I have an acceptance test for my ember-cli application, and the test passes just fine in Chrome. However, in phantomjs, my test fails -- the UI doesn't get created the same way, and I'm trying to work out why. (I think the test is broken because of https://github.com/ember-cli/ember-cli/issues/1763, but the general question of how to debug remains)
In Chrome, I can use the standard debugging tools on my tests and all is well -- but in phantomjs, I can't get at it with a debugger. I also don't see console.log() messages show up in the output -- all I get is a list of test results in my terminal window.
I can sort-of get diagnostic info by writing things like
equal(true, false, "This is a log message");
and then I get the message as details for the assertion that failed, or I can try and work out what's in the DOM with
equal(true, false, document.getElementsByClassName("my-class")[0].innerHTML);
but both of those (a) stop the test from going any further, and (b) only let me log information from the test itself, not from my application.
Is there a way to run my tests outside of "ember test", or some way to attach to the running test processes? Alternatively, is there a way to get console.log() messages to show up in the output?
You can expose the PhantomJS debug port and open it in a browser; then you can interact with the context at your debugger breakpoints.
Debugging tests on PhantomJS using Testem test runner
In testem.json add "phantomjs_debug_port": 9000.
While you run your tests visit http://localhost:9000 in your browser and click the long link that shows up.
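For reference, a minimal testem.json containing that key might look like the following; the framework and launcher fields are illustrative, only the debug port line is the one described above:

```json
{
  "framework": "qunit",
  "launch_in_ci": ["PhantomJS"],
  "phantomjs_debug_port": 9000
}
```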
Source: cssugared
I had no luck with the other answers, so here's what I found out:
Add a return pauseTest(); at the point in your test where you want to be able to interact with the container in the browser. This is in the docs but I'm not sure it's in the guides.
To answer the part of my original question about "how do I get log messages to show up": if I use the TAP reporter, console.log messages (from my app and from my tests) show up in the output; the xunit reporter doesn't pass console.log through, which was confusing me.
(I've also hit issues where running the tests on teamcity behaves differently than running locally; in that situation, combining the TAP reporter with https://github.com/aghassemi/tap-xunit (or the TAP teamcity plugin) lets me get log messages and also test counts)

How can I debug each scenario in my feature file separately

I have a test project that I wrote to test different services in the same solution. I used specflow and I have many scenarios to test.
In order to debug my tests I have to run my services (about 3 of them).
The problem I have now is If I go to the test explorer window and right click on a single scenario and try to debug, the option is disabled.
If I right click on the features file and select the option debug specflow scenarios it debug all my scenarios but I don't want that.
How can I debug each scenario in my feature file separately while running my services?
Note: I am using msTest and VS2012.
Well, you could switch to NUnit; the NUnitTestAdapter supports running individual tests.
You don't have to do it permanently, just long enough to debug this test.
Or, add a Debugger.Launch() call in the method that is bound to your When step. Let all the other tests finish, and then step through this way. You will of course need to attach to the other services using Debug > Attach to Process... before stepping across the process boundary.

Profiling a Play framework application (2.0.2) through VisualVM

I'm having some serious performance problems with my Play! application. I already tried changing the server and the database, but the slowness persists.
Using Firebug to measure my HTTP requests, I found out that they take around 20 seconds just to start replying.
So my last hope is to use VisualVM to profile my application and find its bottlenecks. But I don't know the proper way of passing arguments like -Dcom.sun.management.jmxremote without messing with the global JAVA_OPTS variable.
Thanks again!
It looks like Metrics handles this automatically.
Add the following to your Build.scala app dependencies:
"com.yammer.metrics" % "metrics-core" % "2.1.2"
And start instrumenting your code. Then start up the application with "play run" -- VisualVM should show your JVM process and you can connect to it directly (assuming you have the VisualVM-MBeans plugin; check that you have at least version 1.3.4). When I start it up, the xsbt.boot.Boot process is the Play application.
More generally, this article really helps when debugging Akka based frameworks like Play.
In case someone needs to profile a Play 2.3.x app:
Put your JAVA_OPTS settings in ~/.activator/activatorconfig.txt (cf. https://typesafe.com/activator/docs):
-Dcom.sun.management.jmxremote.port=1234
-Dcom.sun.management.jmxremote.rmi.port=1234
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
In VisualVM, add a Local JMX connection to localhost:1234

How to trace ColdFusion template execution in detail?

I have some 'spaghetti'-style code that is generously sauced with custom tags and stored procedure calls. Templates include each other, custom tags are nested, and stored procedures call other stored procedures in turn.
The problem is that one template call is hanging somewhere in between. I cannot get any error out and cannot see debug output. What is the best way to debug such a 'hanging' request with as much detail as possible?
Thanks!
If you are using CF 8+, you can use the Step Debugging tools in Eclipse to step through the code: http://www.adobe.com/devnet/coldfusion/articles/debugger.html
If you are using an earlier version, you can use a 3rd party product like Fusion Debug ( http://www.fusion-debug.com/fd/ ) to do the same thing.
If you are using CF8, you can also use the CF Admin Server Monitor to see where a thread is hanging as well: http://www.adobe.com/devnet/coldfusion/articles/monitoring_pt1.html
If the built-in debugger is of no use because the request just hangs, the other quick way is to start with a cfabort at the top of the template and keep moving it down until you hit the file causing the request to hang.
CFTrace is a great tool for this. It is native and reports timing information as well.
Have you looked at the standard coldfusion server log files to see what might be in there?
Have you run the server in a console window so you can see what is appearing in the console as the templates are running (or not as the case might be)?
You could take JVM thread dumps, either from the command line or via server monitoring if you have Enterprise 8+.