WSO2 does not start; the process is being killed

I installed WSO2 on a Linux server. When I try to run the application with the sh wso2server.sh command in the bin directory, "Killed" appears on the console screen and the application does not start. I also tried with the sh wso2server start command; it still didn't work. There are no errors in the log files. What is the reason for this and how can I solve it?

It seems the operating system is killing your process. One common reason is that the VM does not have enough memory to start the Java process, so first check whether you have allocated enough memory to this VM.
If that's not the case, you should be able to find more information in the kernel logs to determine why the process was killed. They are typically located at /var/log/kern.log or /var/log/dmesg.
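For example, a quick way to check whether the OOM killer was involved (exact log locations and message wording vary by distribution):

free -m                                            # how much memory and swap is actually free
dmesg | grep -i -E 'out of memory|killed process'  # recent kernel messages about OOM kills
sudo grep -i 'killed process' /var/log/kern.log    # same information from the persisted log, if present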
Have a look at this question as well.

Related

Shutdown scripts to run upon AWS termination

I am trying to get some scripts to run upon an AWS termination action. I have created /etc/init.d/Script.sh and symbolically linked it to /etc/rc01.d/K01Script.sh.
However, terminating through the AWS console did not produce the output I was looking for. (The script makes a quick API call to a server over HTTPS and should take only a few seconds.)
Then I tried again but specifically changed a kernel parameter:
'sudo sysctl -w kernel.poweroff_cmd=/etc/rc0.d/K01Script.sh'
and again no output.
I get the message "The system is going down for power off NOW!" when terminating the server, so I'm pretty sure the Ubuntu server is entering runlevel 0. The script is owned by root.
I know I could create a lifecycle hook to do something like this, but my team prefers the quick and dirty way.
Any help is very much appreciated!
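For reference, a minimal sketch of the kind of script being described, assuming a SysV-style setup (the endpoint URL is a placeholder; note that K-prefixed links in the runlevel directories are normally invoked with the "stop" argument, so the script should handle it):

#!/bin/sh
# Hypothetical /etc/init.d/Script.sh; the endpoint URL is a placeholder
# K-prefixed links in /etc/rc0.d are called with "stop" on shutdown
case "$1" in
  stop)
    curl -fsS --max-time 5 https://example.com/api/notify-shutdown || true
    ;;
esac
exit 0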

Invoking the application as a system user (Windows)

We have a native GUI application that runs on a Windows machine, and recently we found that it terminates unexpectedly. After days of investigation I found that this happens because the application is launched by explorer.exe, which itself gets killed unexpectedly and seemingly at random, taking down all of its child processes, including our application.
Is there a way to invoke/call our app as a system process (not as a child of explorer.exe)?
Also assume that the application/user has administrator access.
Thanks in advance.
Killing explorer does not in general kill other processes. This is very easy to verify yourself by killing explorer from the task manager. Notice that other processes stay alive when you kill explorer. Something else is killing your process.
If killing explorer leads to your process dying, then the obvious explanation is that something in your process is leading to its death. In other words the problem is most likely in your code, and you need to work out what that problem is.
Also note that explorer isn't really a special "system process" as such. It's just a normal process that runs under the logged-on user's token.
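If you prefer the command line over Task Manager, one way to run that experiment from a Command Prompt:

rem Kill the Explorer shell; other applications should keep running
taskkill /f /im explorer.exe
rem Restart the shell afterwards
start explorer.exe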
You may need to get some help from an OS service: run the service as admin (starting at system boot), then launch the application from that service. This ensures the app is started as admin and without explorer.exe as its parent.

gdb debug remote core dump

I have a server written in C++ crashing in a production environment to which I have no direct access. The crash generates a huge core dump, ~34 GB, which I cannot copy locally. I need to analyze the core dump but don't know how that can be done without copying it over. I tried running gdbserver on the target, but it doesn't take a core file as a parameter and seems only good for debugging running remote applications from a host machine. Is there a way this can be done?
I need to analyze the core dump but don't know how it can be done without copying it over.
You can't. You'll need to get the core dump to where you can run GDB.
I cannot ssh to the remote machine, but I can ask the sysadmin to run something like gdbserver for me; he cannot analyze and debug the core file himself.
You don't need the sysadmin to analyze anything. You just need to ask him to run a series of GDB commands and send you the output, e.g.
where
thread apply all where
info registers
disas
... will get you a long way towards understanding the problem, and will take your sysadmin less than 5 minutes.
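If it is easier, the same commands can be handed to the sysadmin as a single non-interactive invocation (the binary name and core file name below are placeholders):

# Run GDB in batch mode against the core and capture the output for sending back
gdb --batch \
    -ex 'where' \
    -ex 'thread apply all where' \
    -ex 'info registers' \
    -ex 'disas' \
    ./myserver core.12345 > gdb-output.txt 2>&1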
I will still need to uncompress it to open it in GDB, which I don't want to do on my local machine.
Also, talk to your manager. Your development setup is unreasonable. You have to be able to analyze production crashes locally, and that means you have to have access to a sufficiently beefy machine.

Windows event log service holding executable file handle

I have a service application that logs an event log record on startup and shutdown.
I rebuild the application frequently and then replace the executable on the host machine. And here is the problem: after my service shuts down, the Windows Event Log service (not the event log viewer) is holding an open handle to the executable, so I can't update it.
I have the event log messages embedded in the executable; I could move them out, but then I would just move the update problem to another file.
I've double-checked that I have paired the ::RegisterEventSource/::DeregisterEventSource calls correctly.
Has anyone encountered this problem?
I've also run into this issue, so just adding some of my experiences.
I have a Windows Server 2008 system (I have not seen this on Server 2003), and when I stop my service, an instance of svchost.exe loads the service executable (visible using vmmap.exe or Process Hacker), preventing it from being deleted/overwritten during uninstall/install. That instance of svchost.exe is running the DHCP Client (Dhcp), TCP/IP NetBIOS Helper (lmhosts), and Windows Event Log (EventLog) services.
In our case, we created a registry entry to make our service executable an event source (though I'm unsure exactly why we are doing this, or whether we should be doing it).
Empirically, if I remove that registry entry before stopping the service, the executable is not loaded by svchost.exe and all is fine. If the service has already been stopped and the executable loaded by svchost.exe, restarting the Event Log service (or killing the process) also frees up the executable.
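As an illustration, from an elevated prompt the workaround described above might look like this ("MyService" is a placeholder, and the exact registry key depends on which log the source is registered under):

rem Remove the hypothetical event source registration before stopping the service
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\MyService" /f
rem Or, after the fact, restart the Windows Event Log service so svchost.exe releases the handle
rem (it may ask to stop dependent services first)
net stop eventlog
net start eventlog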
I'm guessing our service is not well-behaved (perhaps a side effect of being a 32-bit process on a 64-bit OS?) or not correctly installed, but I haven't isolated the issue yet.
Update: It appears this issue is only happening on HP systems (and not Dell or IBM) which is curious. There are HP-specific management components installed, so perhaps one of them is altering the behavior somehow?
I've also run into this issue. In my case, the nxlog service was reading the logs. Simply stop the nxlog service before replacing the event source file.
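For instance, from an elevated prompt (assuming the service is registered under the name nxlog):

net stop nxlog
rem ...replace the event source executable...
net start nxlog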
I think it is probably the event log viewer. Close the viewer and you'll be fine.

Increasing Jenkins process priority?

When my C++ program's build script is called from a Jenkins job, the build takes far longer. Instead of being at 100%, CPU usage is only around 16%.
Of course I don't want Jenkins to fully occupy my computer, rendering it unusable during a build, but making the build faster would be very useful.
I have installed Jenkins via Homebrew on macOS.
Does anyone know how to change the priority of the Jenkins process so it's allowed to use more CPU while building?
Following a suggestion in the comments, I decided to increase the heap size of the Java VM in the homebrew.mxcl.jenkins.plist file:
<string>-Xmx2048m</string>
And then call:
brew services stop jenkins
brew services start jenkins
The behaviour was the same, so I decided to restart the machine and try again, and now it is working as expected. I'm not sure if this was a general glitch or if it was related to the Java heap size parameter.
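If it helps, one way to confirm after the restart that the running Jenkins JVM actually picked up the new flag:

ps aux | grep -i '[j]enkins'   # the process command line should now include -Xmx2048m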