Execute Node.js script as a child process (not from another script) - C++

I am trying to find a way of running my Node.js app in the background. I did a lot of research and I am aware of the usual options (node-windows, forever, nssm, ...).
While doing that, it occurred to me that I could create my OWN service wrapper in C++ (on Windows) which executes the script as a child process.
Hence my question: is this possible, and what are the options for communicating with the node.exe that executes my script? On Google I find tons of articles about Node's "child_process" module, but nothing where node.exe itself is the child process.
BTW: in one of the answers here on SO I found a solution using sc.exe, but when I install node.exe with my script that way, the service gets terminated because it does not respond to the SCM commands. Did that ever work?
Thank you a lot in advance.

You can make the process run in the background using pm2:
pm2 start app.js --watch
This will start the process and will also watch for changes in the file. More about the --watch flag.
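As for the original question of treating node.exe itself as the child process: that is certainly possible, and stdio pipes are the simplest channel. Below is a minimal sketch of the idea, written in Python for brevity; a C++ service wrapper would do the equivalent with CreateProcess and redirected pipe handles. The script name app.js and the line-based "ping" exchange are assumptions for illustration, not part of the question.

import subprocess

# Spawn node.exe as the child process with its stdio redirected to pipes.
# Assumes node is on PATH and that app.js (hypothetical) reads commands
# from stdin and answers on stdout, one line per message.
child = subprocess.Popen(
    ["node", "app.js"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

child.stdin.write("ping\n")             # send a command to the Node script
child.stdin.flush()
print(child.stdout.readline().strip())  # read its reply

child.terminate()

Besides stdio pipes, a wrapper process could also talk to the Node script over a local TCP socket or a named pipe; the child-process relationship stays the same.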

Related

Django test api while api is running in dev environment [duplicate]

I have looked at this question, but I am not sure whether I understood it correctly.
I have opened PyCharm and one Python script, and it is running (it does topic modeling).
I also have another Python script, which I opened in another PyCharm instance on the same server, and I am running it as well.
Now these two programs are running on the same server. I should mention that I have not changed any configuration, neither of the server nor of PyCharm.
Do you think it is OK this way? Or will one script technically not run (in terms of making progress, I mean: it just shows as running but practically won't run) until the other script has finished?
Edit Configurations -> Allow parallel run. Done
First, PyCharm creates independent processes on the server, so both scripts will run. You can check this with something like htop: search for the processes and verify that they're running.
Second, you don't have to open a second PyCharm window to run the second script. You can run both of them from a single one. There are at least two ways: with run configurations, or by spawning multiple terminal windows and running the scripts from there.
From the Run/Debug Configurations window you can add a Compound configuration that contains multiple configurations that will run in parallel. The Allow parallel run option on the child configurations makes no difference in this case.
The default behaviour changed starting from version 2018.3. You can allow multiple runs by selecting Allow parallel run within the Edit Configurations menu.
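To make the "independent processes" point concrete, here is a small sketch; the script names are hypothetical. Each Popen call starts its own interpreter process, so neither script waits for the other:

import subprocess
import sys

# Each Popen starts an independent Python process; the two scripts run
# concurrently, just like two Run configurations or two terminal windows.
procs = [
    subprocess.Popen([sys.executable, "topic_modeling.py"]),
    subprocess.Popen([sys.executable, "second_script.py"]),
]

for p in procs:
    p.wait()  # only this launcher waits; the scripts don't block each other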

WebStorm: setting up JavaScript Debug with another task running before it

In WebStorm I can very easily set up JavaScript Debug, and when I run this configuration the IDE opens Chrome and all breakpoints are active. The problem begins when I need to run specific tasks prior to starting debugging, for example an npm build script. When I define it in Before launch (see the picture below), Chrome is not opened when I start this debug configuration, but only after I stop it.
This forces me to run the project manually from the command line and then run the browser debug configuration.
Can I define the additional tasks in a way that Chrome is opened as usual?
Thank you.
A process added to the Before launch section has to return an exit code; the main process waits for it and thus doesn't start until that first process terminates. This is the way Before launch is designed: it's supposed to be used to run some sort of pre-processing before the main process. You can add a build task (a script that builds your app and then exits) to this section; but start:dev likely doesn't exit, it starts the server your application is hosted on, and that server has to keep running for your application to work, doesn't it? Please remove your npm script from Before launch and either start it separately or use a Compound configuration to start both the npm script and the JavaScript Debug run configuration concurrently.
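As a rough analogy (the npm script names come from the question; the Python wrapper is purely illustrative), Before launch behaves like a blocking wait on the pre-task's exit code:

import subprocess

# "Before launch" waits for the pre-task's exit code before starting the main
# process, exactly like run() waits here.
subprocess.run("npm run build", shell=True, check=True)  # a build script exits, so this returns
# subprocess.run("npm run start:dev", shell=True)        # a dev server never exits, so the
#                                                        # main (debug) step would never start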

Selenium PhantomJS: is it the same as running IE?

I need to run my Selenium Python test script against IE.
If I run it using the headless browser PhantomJS, is it going to be different from running it against IE?
I am asking because I am having a problem running my Selenium Python test script from a batch file via Task Scheduler.
I can run the batch file on its own and it runs OK, but when I run it from Task Scheduler the browser does not open, so the test fails.
The dev says Task Scheduler runs in the background with a headless browser.
If I used PhantomJS, it wouldn't be the same as IE, would it?
I need to test with IE, but the batch file which runs my Selenium test won't open the browser from Task Scheduler.
My batch file is as follows:
set TEST_HOME=%~dp0
cd %~dp0
SET PATH=%PATH%;G:\test_runners\selenium_regression_test_5_1_1\IEDriverServer\64bit
cd %~dp0selenium_regression_test_5_1_1
set PYTHONPATH=%~dp0selenium_regression_test_5_1_1
c:\Python27\Scripts\nosetests.exe "%~dp0selenium_regression_test_5_1_1\Regression_TestCase\split_into_parts\RegressionProject_TestCase_Part1.py" --with-html --html-file="%~dp0selenium_regression_test_5_1_1\TestReport\SeleniumTestReport_part1.html"
I'd appreciate some help with this.
Thanks, Riaz
That's the same as asking: is the output the same in IE and in Firefox? Not exactly. Visually it will look the same, but in the source code some elements are adapted to the browser you're using.
PhantomJS is a browser of its own, so some elements can be hidden or not even loaded, but that's rare. A good example of this is Twitter: I noticed during some tests that clicking the tweet box to write some text behaves differently in PhantomJS than in other browsers!
The reason Task Scheduler doesn't let you use IE is that you can't use any graphical environment during the lifetime of the process.
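If you want to observe such differences yourself, a small comparison sketch could look like the following. It assumes the older Selenium 2.x/3.x Python bindings that still ship a PhantomJS driver, IEDriverServer on the PATH (as in the batch file above), and a hypothetical URL:

from selenium import webdriver

URL = "https://example.com/login"  # hypothetical page under test

# Run the same check in PhantomJS and in IE and compare what each reports.
for driver_class in (webdriver.PhantomJS, webdriver.Ie):
    driver = driver_class()
    try:
        driver.get(URL)
        print(driver_class.__name__, driver.title, len(driver.page_source))
    finally:
        driver.quit()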

Django Celery Task does not execute with .delay

I am able to execute my task without a problem using
scrape_adhoc_reporting([store], [types], inventory)
This is a problem, though, because the task can easily take an hour. So I tried to make the task async. I tried both of the following:
scrape_adhoc_reporting.apply_async(args=[[store], [types], inventory])
scrape_adhoc_reporting.delay([store], [types], inventory)
Neither of these methods worked. The view just redirects as it should, but the task never gets executed. There are no errors in the error log. Any insight into what I am doing wrong?
Edit: After looking around a little more, I see people talking about registering a task. Is this something I need to do?
I ran into the same issue and I just solved it. MattH is right: this is due to workers not running.
I'm using Django (1.5), Celery (3.0+) and Django-Celery on Windows. To get Celery Beat working, I followed this tutorial: http://mrtn.me/blog/2012/07/04/django-on-windows-run-celery-as-a-windows-service/ since on Windows, Beat can only be launched as a service.
However, like you, my tasks were launched but not executed. This came from a bug in the packaged version of django-windows-tools (from pip).
I fixed the issue by downloading the latest version of django-windows-tools from GitHub (https://github.com/antoinemartin/django-windows-tools).
If you want the task to run remotely, you need a worker process running with that task loaded, and a routing system configured so that the task request gets sent from the caller to the worker.
Have a look at the Celery documentation on workers and tasks.
The code that you're running at the moment just executes the task locally.
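For completeness, a minimal sketch of what "a worker with that task loaded" looks like; the broker URL, module layout and argument values are illustrative, not taken from the question:

# tasks.py - a minimal, self-contained Celery setup (illustrative names).
from celery import Celery

app = Celery("reports", broker="redis://localhost:6379/0")

@app.task
def scrape_adhoc_reporting(stores, types, inventory):
    ...  # the long-running scrape would go here

if __name__ == "__main__":
    # .delay() only enqueues a message and returns immediately; nothing runs
    # until a worker has been started separately, e.g.:
    #   celery -A tasks worker --loglevel=info
    scrape_adhoc_reporting.delay(["store"], ["types"], {"example": True})

If the worker isn't running, the .delay() call still "succeeds" from the caller's point of view, which matches the behaviour described in the question.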
When using asynchronous Celery tasks on Windows, you normally get an error that can be fixed by setting a parameter.
For example, with Django, in the file celery.py you should have:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')  # add this line for Windows compatibility
This fixes the problem on Windows and does not create incompatibility problems on other systems.
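Put in context, a sketch of a full celery.py entry point for a Django project could look like this (the project name 'main' is just the placeholder from the snippet above):

# celery.py - sketch of a Django project's Celery entry point on Windows.
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')  # Windows compatibility, harmless elsewhere

app = Celery('main')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()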

Fake X server for testing?

At work we fully test the GUI components. The problem arises from the fact that, while the test suite is running, the various components pop up, stealing the focus or making it impossible to continue working. The first thing I thought of was Xnest, but I was wondering whether there's a more elegant solution to this problem.
I think what you need to do here is have your tests run on a different display from the one you're working on.
When we moved our TeamCity agents to EC2, we had to figure out how to run our UI unit tests on a headless Linux server. I found a way to do it in this blog post, which outlines how to use Xvfb.
In my case, all I had to do was:
yum install xorg-x11-server-Xvfb
Run Xvfb :100 -ac to start the server. I added this to the rc.local file on my EC2 agents so it starts at machine startup.
Then I added env.DISPLAY :100 to my TeamCity build configuration.
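The same idea can also be driven from a small harness; a minimal sketch, assuming Xvfb is installed and run_gui_tests.sh is a hypothetical entry point for the test suite:

import os
import subprocess
import time

# Start a throwaway X server on display :100 so test windows never appear
# on the real desktop or steal focus.
xvfb = subprocess.Popen(["Xvfb", ":100", "-ac"])
time.sleep(1)  # give Xvfb a moment to come up
try:
    env = dict(os.environ, DISPLAY=":100")  # point the tests at the fake display
    subprocess.run(["./run_gui_tests.sh"], env=env, check=True)
finally:
    xvfb.terminate()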