My problem is simple: I have one computer connected to many powerful servers. I want to run the app locally but execute the heavy processing on the remote servers.
The app and settings vary a lot, and I want this exact version of the app and its settings folder to be used by the remote instances.
My approach so far:
Launch the app locally
Use PsExec to remotely launch, on the servers, the same executable that is running locally (with a random port number passed as an argument)
Connect to them via sockets
Send commands to execute remotely and get the results
My problem lies in the config files, of which there are many (50+), some over 4 MB. These config files are TXT files in a config folder.
What is the proper way to do this? Is it possible to use PsExec to copy a whole folder to the remote machines? Is there any good trick with the sockets to pass a copy of the local files directly to the remote instances?
I would like the whole process to be semi-transparent, since many people will use it with different versions and settings at the same time, so manually copying the files to 20+ servers is NOT an option.
Thank you!
Put the program/script that you want all machines to execute in one common location on the local network (put your configs there too). On each server, create a batch file, say 'runme.bat', that executes your program directly from that network location.
This way you can use PsExec to run runme.bat, essentially executing your program/script on any server you want.
Since there are often issues using PsExec, you can also invoke your scripts from Task Scheduler etc.
I do this for 500+ servers and it works. If it works for me, it will work for you.
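If you want to drive this from your own app (you are already launching the remote instances with PsExec), the loop over servers could look roughly like the C++ sketch below. The server names, the local path of runme.bat on each server, and the port handling are placeholders, and the remote session may need explicit credentials (psexec -u/-p) to reach the network share.

// Minimal sketch (C++, Windows): run runme.bat on each server via PsExec.
// Server names, the runme.bat path, and the port scheme are placeholders.
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

int main() {
    // Hypothetical list of target servers; runme.bat lives locally on each one
    // and points at the shared network copy of the program and configs.
    std::vector<std::string> servers = { "SERVER01", "SERVER02", "SERVER03" };
    const std::string remoteBatch = "C:\\scripts\\runme.bat";
    int port = 5000; // example starting port, passed through to the app

    for (const auto& server : servers) {
        // psexec only needs to be installed/on PATH on the calling machine.
        std::string cmd = "psexec \\\\" + server + " cmd /c " + remoteBatch +
                          " " + std::to_string(port++);
        std::printf("Running: %s\n", cmd.c_str());
        int rc = std::system(cmd.c_str()); // blocks until psexec returns
        if (rc != 0) {
            std::fprintf(stderr, "psexec returned %d for %s\n", rc, server.c_str());
        }
    }
    return 0;
}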
You might want to look at HTCondor (http://research.cs.wisc.edu/htcondor/) which could perhaps manage all of this for you.
I am attempting to create a small C++ Visual Studio Forms application (via CPPCLR_WinformsProjekt) that is essentially a browser, but it also starts a local Tomcat 8.5 server with a WAR file in its webapps folder and redirects you to the localhost page. I am working on Windows.
My question precisely is - what is the best way to start the Tomcat server through C++ libraries?
Edit: The way I started solving this is by simply having the Tomcat folder with the WAR file zipped within the Visual Studio project. On execution, the file gets unzipped, and I am thinking of using a system(*start tomcat command*) call.
NB: I know I can start Tomcat from the command line, but I need to get it working via C++.
[I am assuming you are on Windows, but similar approaches are available on Unix-like systems.]
In a C++ program you can execute any command that a shell can, so the easiest way to start Tomcat would be to use CreateProcess to execute catalina.bat (or startup.bat). This is also the most easily configurable way: a user can adapt setenv.bat to their needs.
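For the CreateProcess route, a minimal sketch might look like the following; the C:\tomcat path is only an assumption (in your case it would be wherever your project unzips the Tomcat folder), and JAVA_HOME/JRE_HOME must be visible to the scripts as usual.

// Minimal sketch: start Tomcat by running catalina.bat through cmd.exe.
// The C:\tomcat path is an assumption for this example.
#include <windows.h>
#include <iostream>

int main() {
    STARTUPINFOA si{};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi{};

    // CreateProcess may modify the command-line buffer, so it must be writable.
    char cmdLine[] = "cmd.exe /c C:\\tomcat\\bin\\catalina.bat start";

    BOOL ok = CreateProcessA(
        nullptr,            // application name: cmd.exe is resolved from the command line
        cmdLine,            // command line to run
        nullptr, nullptr,   // default process/thread security attributes
        FALSE,              // do not inherit handles
        0,                  // default creation flags
        nullptr,            // inherit our environment (set CATALINA_HOME here if needed)
        "C:\\tomcat\\bin",  // working directory
        &si, &pi);

    if (!ok) {
        std::cerr << "CreateProcess failed: " << GetLastError() << '\n';
        return 1;
    }

    // "catalina.bat start" spawns Tomcat in its own window and returns,
    // so we just close the handles instead of waiting.
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}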
Of course, if you want to omit the *.bat files you can:
either instantiate a JVM using java.exe with the appropriate parameters: you need at least bin/bootstrap.jar in the classpath (and usually bin/tomcat-juli.jar), and call the main method of org.apache.catalina.startup.Bootstrap with the parameter start,
or instantiate a JVM using jvm.dll through the JNI Invocation API, in a similar way to how procrun does it (roughly as sketched below).
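For completeness, here is a rough sketch of the Invocation API route; the Tomcat paths and JNI version are assumptions, you would link against jvm.lib (or load jvm.dll dynamically), and error handling is kept to a minimum.

// Rough sketch: start Tomcat through the JNI Invocation API (jvm.dll).
// Paths below are assumptions; link against jvm.lib or load jvm.dll yourself.
#include <jni.h>
#include <iostream>

int main() {
    JavaVMOption options[3];
    options[0].optionString = const_cast<char*>(
        "-Djava.class.path=C:\\tomcat\\bin\\bootstrap.jar;C:\\tomcat\\bin\\tomcat-juli.jar");
    options[1].optionString = const_cast<char*>("-Dcatalina.home=C:\\tomcat");
    options[2].optionString = const_cast<char*>("-Dcatalina.base=C:\\tomcat");

    JavaVMInitArgs vmArgs{};
    vmArgs.version = JNI_VERSION_1_8;
    vmArgs.nOptions = 3;
    vmArgs.options = options;
    vmArgs.ignoreUnrecognized = JNI_FALSE;

    JavaVM* jvm = nullptr;
    JNIEnv* env = nullptr;
    if (JNI_CreateJavaVM(&jvm, reinterpret_cast<void**>(&env), &vmArgs) != JNI_OK) {
        std::cerr << "Could not create the JVM\n";
        return 1;
    }

    // Call org.apache.catalina.startup.Bootstrap.main(new String[]{"start"}).
    jclass bootstrap = env->FindClass("org/apache/catalina/startup/Bootstrap");
    if (bootstrap == nullptr) {
        std::cerr << "Bootstrap class not found; check the class path\n";
        return 1;
    }
    jmethodID mainMethod = env->GetStaticMethodID(bootstrap, "main", "([Ljava/lang/String;)V");
    jobjectArray args = env->NewObjectArray(1, env->FindClass("java/lang/String"),
                                            env->NewStringUTF("start"));
    env->CallStaticVoidMethod(bootstrap, mainMethod, args); // typically blocks until Tomcat stops

    jvm->DestroyJavaVM();
    return 0;
}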
I don't believe these methods give you any advantage over the *.bat scripts. To stop a modern Tomcat just send the kill signal.
Edit: If you plan to start only one specific web application, a full-fledged Tomcat might be overkill. You might instead:
either use Tomcat Embedded, which boils down to writing one class and calling its main method instead of Bootstrap#start. The advantage is that you just need to distribute a bunch of JAR files and your WAR, and you don't need a traditional Tomcat installation directory structure,
or use Spring Boot.
Normally I've developed locally (on my own machine) and pushed to wherever things needed to go via mapped drives, ftp, github, etc. I have done a bit of work with vagrant/virtualbox (but again, locally) with a shared/mirrored folder.
I am now in a situation where everyone here has access to their own dev box (a VM on the network). I see some people working in Vim directly over SSH, I believe, but I'm not there yet. So I'm left with the question: what's the best way for someone like me (more of a front-end guy) to approach this?
I have heard of doing an SSH mount from my workstation... if that's a viable thing. I'm curious what everyone's take on this kind of environment is and (perhaps) any best practices. Tips, links, and reading are highly welcome and appreciated, too... any pointer in a good direction would be wonderful.
Thank you.
The best answer depends on which resources of the networked VMs you want to capitalize on. If you just want the storage space, then share the VM's drives, mount them locally, treat them as local, end of story. If you want to run all the processing on the remote machine and connect from a thin client, you have a couple of options, but they all take the same form: connect to the machine and edit the files on the remote machine. Depending on your OS, you will have different options available.
If the remote machine doesn't have a graphical environment installed, you are stuck with either mounting the remote share locally (so you can use whatever editor you want) or SSH-ing to the remote machine and using a command-line editor (vim, nano, emacs).
If there is a graphical environment installed, you have more options:
Remote into the server using any graphical viewer (mstsc for Windows; VNC is an option), and then use any remotely installed editor of your choice.
Remote in using ssh -X, and then run the remotely installed editor. Assuming you have an X server locally (if you are running Linux, you already do), the GUI part of the application will be displayed on the client side of the SSH tunnel, while the process runs on the server. This is probably the best option.
So:
Make sure the remote server has desktop/GUI software installed (GTK, KDE, GNOME, almost any Windows OS, etc.)
ssh -X to that server
install a GUI editor of your choice (Sublime Text, Geany, ...) on that server
run subl, geany, or whichever editor you installed to start the application.
SSH mounting would indeed allow you to use all of the files on the VM as if they were stored on your local machine, letting you edit and update files without having to copy them manually every time you make changes. You will run into a speed bump, though, since changed files have to be synchronized/copied to the remote machine every time, and that takes a couple of seconds. Check this post by DigitalOcean; they explain how to get the SSH mount working.
A better option (IMHO) is to use an IDE on your local machine that can push changes to a server on save or on demand. This would allow you to develop faster by using your local resources (local web server), since no files would have to be copied over the network to the remote VM, and it would still allow you to test on that remote VM when needed by uploading the files once you are ready to test in that environment.
PS: Exporting visual apps or environments from the remote machine to your local one can be slow (depending on your network and the load on the VM host running your machine). If you still like that approach, you could also install something to access that VM over something more standard and lightweight, like RDP for GNU/Linux (xrdp).
I have a scheduling program running on Server A under Windows 2008 R2. Server B is my SAS server, also under Windows 2008 R2. How do I kick off a job on the SAS server from my scheduling server? I can use either sas.exe or a batch file to start my job. The owners of the SAS server tell me that I cannot add an application or Windows service to the SAS server. Is this even possible?
Below is a copy of my answer to a slightly different question (source: http://www.runsubmit.com/questions/260/hide-sas-batch-jobs-winxp). I'm copy/pasting it here for posterity and also because it's more likely to help people searching:
You can use PsExec, which is part of the Microsoft/Sysinternals set of utility programs. This file will go on the scheduling server. Grab it from here:
http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
The tool is designed to allow you to execute jobs on remote machines. For example, if you want to launch a SAS program from the command line you could run:
psexec \\machinename sas.exe -sysin remotedrivename:\remotefolder\myprogram.sas
This would launch SAS.EXE on the remote machine and run the supplied program that exists on the remote machine. When it launches SAS, it appears to launch it within a PsServ service. Because it's running within a service, no interface will be displayed. I'm not even sure whether you would see it appear as its own process or application in Windows Task Manager. If you use Sysinternals' other program, Process Explorer, instead of Task Manager, you can see this happening.
Note that the REMOTE MACHINE and the LOCAL machine can be the same machine.
PROS: Many other uses for this technique. It's free. PsExec is only required on the machine that is making the call, not both machines.
CONS: It's a bit of a roundabout way to do things. You need to install a third-party program (although it is now an MS tool). Some antivirus programs/network admins may not allow it.
Note that if your SAS jobs access network resources, then you will probably need to make the network resource available first using the net use command. I suggest running your SAS job in a batch file like so (or use the 'x' command from within your SAS file to call the 'net use' commands):
Command executed from local machine:
psexec \\machinename remotedrivename:\remotefolder\myprogram.BAT
Contents of batch file on remote machine:
net use m: \\fileserver\sharedfolder /USER:mynetworkdomainname\myusername mypassword
sas.exe -sysin remotedrivename:\remotefolder\myprogram.sas
net use m: /delete
I need to allow an external client to change the IP of the Linux machine where the program is running (C++). I already know how to list all the local interfaces and the current IPs assigned to them. I also know how to programmatically change said IPs.
What I need to know is how to make this change permanent so, if the machine reboots, it keeps the same network configuration.
What's the best way to do this? Manually parsing /etc/network/interfaces? Calling some linux command?
Edit: I'm using Debian.
Thanks!
Yes, manipulating /etc/network/interfaces is the way to accomplish that (just store a backup in case things go wrong).
Also, if the interfaces are managed by NetworkManager (which is rarely the case for servers, but happens on desktops), you may be able to manipulate them via D-Bus calls, I think.
You should've mentioned the distribution, by the way, not just the language; if you hadn't mentioned the file, it would have been impossible to guess ;-)
To make the changes permanent, you have to write the network configuration to /etc/network/interfaces and maybe the DNS servers to /etc/resolv.conf.
http://wiki.debian.org/NetworkConfiguration
If you don't want to parse the interfaces file each time, you could save the IP and network settings in your own config file and restore them from there.
Then you only have to rewrite /etc/network/interfaces.
After changing the network interface configuration, you have to restart the networking stack (the exact command is distro-specific).
Restart interfaces configured with auto:
$ /etc/init.d/networking restart
Restart other interfaces:
$ ifup [iface]
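Since the change is driven from a C++ program anyway, a minimal sketch of the rewrite-and-restart step could look like this; the interface name, the addresses, and the use of ifdown/ifup are assumptions for the example, the original file is backed up first, and the program must run as root.

// Minimal sketch: make a static IP permanent on Debian by rewriting
// /etc/network/interfaces and restarting the interface. Run as root.
// Interface name, addresses, and restart command are assumptions.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

bool writeInterfaces(const std::string& iface, const std::string& ip,
                     const std::string& netmask, const std::string& gateway) {
    // Keep a backup in case things go wrong.
    if (std::system("cp /etc/network/interfaces /etc/network/interfaces.bak") != 0)
        std::cerr << "Warning: could not back up the interfaces file\n";

    std::ofstream out("/etc/network/interfaces");
    if (!out) return false;

    out << "# Generated by our configuration tool\n"
        << "auto lo\n"
        << "iface lo inet loopback\n\n"
        << "auto " << iface << "\n"
        << "iface " << iface << " inet static\n"
        << "    address " << ip << "\n"
        << "    netmask " << netmask << "\n"
        << "    gateway " << gateway << "\n";
    out.flush();
    return out.good();
}

int main() {
    if (!writeInterfaces("eth0", "192.168.1.50", "255.255.255.0", "192.168.1.1")) {
        std::cerr << "Could not write /etc/network/interfaces\n";
        return 1;
    }
    // Apply the change now; the classic Debian way for a single interface.
    int rc = std::system("ifdown eth0 && ifup eth0");
    return rc == 0 ? 0 : 1;
}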
You can call the ifconfig and route commands from a script or, better, edit the file you mention, depending on your Linux distro.
What can be done to discover and list all running processes on a remote computer?
One idea is to have a server on the remote machine listening for our requests; the other is to use SSH.
The problem is I don't know whether such a server will be running on the remote machine, and I cannot use SSH because it requires authentication.
Is there any other way?
If you
cannot install a server program on the remote machine
cannot use anything that requires authentication
then you should not be allowed to know the list of all running processes on a machine. That request would be a security nightmare!
You can do something much simpler without (as many) security problems: scan the publicly available ports for programs that are running. Programs like nmap (nmap.org) tell you a fair bit about the publicly reachable services running on a machine.
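If you wanted to roll a very crude version of such a scan yourself rather than use nmap, a minimal C++ sketch with blocking TCP connects might look like this; the target address is a placeholder, and filtered ports will make the blocking connects painfully slow, which is exactly the kind of thing nmap handles much better.

// Very rough port-scan sketch using blocking TCP connects (POSIX sockets).
// nmap does this far better (timeouts, service detection); this only shows the idea.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv) {
    const char* host = (argc > 1) ? argv[1] : "192.0.2.10"; // placeholder IPv4 address

    for (int port = 1; port <= 1024; ++port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) continue;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(static_cast<uint16_t>(port));
        if (inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
            std::fprintf(stderr, "Invalid IPv4 address: %s\n", host);
            close(fd);
            return 1;
        }

        // A successful connect means something is listening on that port.
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
            std::printf("Port %d is open\n", port);
        }
        close(fd);
    }
    return 0;
}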
I have done something similar in the past using SNMP. I don't have the specifics in front of me, but something like "snmpwalk -v 2c -c public hostname prTable" got me the process table. I recall later configuring SNMP to generate errors when the number of processes didn't meet our specified requirement, e.g. httpd must have at least 1 and fewer than 50 processes.
I suggest you look at the code for a remote login tool, rlogin. You could remotely log in to an account that has the privileges you need. Once logged in, you can fetch a list of processes.
This looks like a good application for a script rather than a C or C++ program.