I'm not able to find any documentation about profiling a NestJS application. Can anyone suggest a way to do this?
Thanks,
You can profile a nestjs application by running the usual nest start command with the --debug flag.
nest start --debug
This flag is documented in the NestJS CLI docs. The --debug flag runs the Node application in inspect mode (by appending --inspect to the underlying node command). Once the application is running, you can attach any Node debugging/profiling tool to it. See the Node.js debugging guide for more information and a list of popular tools.
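If you would rather capture a CPU profile from inside the application instead of attaching an external tool, Node's built-in inspector module can also do this programmatically. The following is only a rough sketch, not anything NestJS-specific: the output file name and the 30-second window are arbitrary choices, and you would typically wire this into main.ts (or wherever suits your app) around the code path you want to measure.

import { Session } from 'inspector';
import { writeFileSync } from 'fs';

// Open an in-process inspector session and record a CPU profile.
const session = new Session();
session.connect();

session.post('Profiler.enable', () => {
  session.post('Profiler.start', () => {
    // Let the application handle some load for a while (30 s here, arbitrary).
    setTimeout(() => {
      session.post('Profiler.stop', (err, result) => {
        if (!err) {
          // Open this file in Chrome DevTools or any .cpuprofile viewer.
          writeFileSync('./nest-app.cpuprofile', JSON.stringify(result.profile));
        }
        session.disconnect();
      });
    }, 30_000);
  });
});

If you stick with nest start --debug, the simplest way to attach is to open chrome://inspect in Chrome and click "inspect" on the Node target, which gives you the DevTools profiler against the live process.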
I have an existing Windows desktop application written in C++ that needs to add support for SNMP so that a few pieces of status information are available on some SNMP OIDs. I found the net-snmp project and have been trying to understand how this can best fit into the existing program.
Questions:
1. Do I need to run snmpd, or can I just integrate the agent code into my application? I would prefer that starting my application does everything necessary, rather than having to worry about deploying and running multiple processes, but the documentation doesn't say much about doing this. The net-snmp agent daemon tutorial has an option for running the sample code as a full agent rather than a subagent, but I'm not sure about any limitations of doing so.
2. What would the pros/cons be of running a full agent in my application versus using snmpd and putting a subagent in my application? Is there a third option I should also consider?
3. If I can integrate the full agent into the existing program, how do I pass it a configuration file via the API? Can I avoid the config file altogether by passing these parameters in via function calls instead?
I'm currently subscribed to Codacy Pro.
I want to use Codacy in my Jenkins pipeline, and I found codacy-analysis-cli.
I tried a test locally using this command:
codacy-analysis-cli analyze --directory /home/codacy/backend-service --project-token <myprojectoken> --allow-network --verbose --upload
But when I checked app.codacy.com, there were no recent results.
Can you help me, please? What is the best practice for using Codacy in my Jenkins pipeline?
That command looks about right. In cases like this, it's best to reach out to Codacy support and tell them which repository you are trying to send results to, so you can get help specific to your repository.
That said, make sure that:
The project-token is set for the correct repository
The repository settings have the "Run analysis through build server" option enabled
If you still have problems, it's best to reach out to support.
I've not been able to find any way in Concourse to show a 'build summary page' as you get in Jenkins/TFS etc. In those tools you can see build history (OK/failures), build durations, unit test results, code coverage, various graphs etc., but Concourse just has a build history that consists of simple log files.
There doesn't seem to be an extension system or any other way to achieve this.
I'd prefer to use Concourse for the pipelines and build-in-containers approach, but it's a hard sell to developers who see it as a step backwards.
Thanks
Paul
The containers/workers of your plan/build are spun up and spun down. Anything you want to keep, you need to put to a resource, and there are many resources available in the Concourse ecosystem.
The results and output logs of your builds/jobs are available by default in the Concourse UI (ok/not-ok, duration). See for example: https://ci.concourse-ci.org/teams/main/pipelines/main/jobs/build-fly/builds/1273 Within the job itself you can expand each step and see its logs, for example the output of your unit tests.
If you want to keep a Sonar report, you just install a resource like https://github.com/cathive/concourse-sonarqube-resource and run it after your unit tests via a put step (which stores the results on your SonarQube server). So you indeed have no HTML report you can keep on that specific build, but you can push the results wherever you need them by using or creating a resource. Very simple, and arguably nicer, because the whole Sonar overview of your project is where it belongs :)
I'm developing a Node-RED node, and I'd like to know the best or intended way to test such nodes. I've been looking at some other nodes, and they seem to depend on a peer installation of Node-RED itself and use "helper.js" to load the nodes under test. I was expecting a more "unit-level" style of testing, perhaps mocking Node-RED.
If you look at the way that the core nodes are tested, this should give you some ideas.
There is also an effort underway to extract the test harness from core and make it available for everyone. You should also check out the Node-RED wiki, which contains some information on testing.
In general, it looks like you will need to run your node in an embedded instance of Node-RED, started from your chosen test tool.
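For illustration, here is a minimal sketch of that pattern, assuming mocha as the test runner and the helper that was extracted from core (now published as node-red-node-test-helper); ../lower-case and the 'lower-case' type stand in for your own node, and the file and assertion choices are just examples.

// test/lower-case.spec.ts
import * as assert from 'assert';
const helper = require('node-red-node-test-helper');
const lowerNode = require('../lower-case');   // the node under test

// Point the helper at the Node-RED runtime it should embed.
helper.init(require.resolve('node-red'));

describe('lower-case node', function () {
  beforeEach(function (done) {
    helper.startServer(done);   // start the embedded Node-RED runtime
  });

  afterEach(function (done) {
    helper.unload();            // remove the test flow
    helper.stopServer(done);
  });

  it('loads with the configured name', function (done) {
    const flow = [{ id: 'n1', type: 'lower-case', name: 'lower-case' }];
    helper.load(lowerNode, flow, function () {
      const n1 = helper.getNode('n1');
      assert.strictEqual(n1.name, 'lower-case');
      done();
    });
  });
});

The helper spins up a headless Node-RED instance per test, deploys the flow you give it, and hands back the node instances, so you get close to unit-level tests without having to mock the whole runtime yourself.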
What should I do to profile workflows exposed as Windows Workflow services? Which tool did you use?
I have tried dotTrace (JetBrains): I can see data in the profiling snapshot, but it seems I cannot see the methods called by the workflows.
Depending on the information you want to get out of it, you can use AppFabric. Once it is installed, you can go into IIS and set the monitoring level to "Troubleshooting" to get back pretty much everything the workflow has done.