In Task Scheduler action(s) I would like to execute the following DOS commands without using any batch file. Would that be possible?
rem Store the current date and time in a variable
set TIMESTAMP=%DATE:~0,2%-%DATE:~3,2%-%DATE:~6,4%-%TIME:~0,2%.%TIME:~3,2%.%TIME:~6,2%
rem Create a directory named after the timestamp
md "%TIMESTAMP%"
rem Pass the directory created above to the program
HTBase.exe /full /logfile=%LOGDIR%\HTBase.log /r /y %APPDIR% %FULLDIR%/"%TIMESTAMP%" >> %LOGDIR%\HTBaseFullBackup.log
Currently this is done through a batch file, but I would like to execute these commands through Task Scheduler action(s) without using any batch file.
Regards,
Pabitra
The task is achieved by adding an action that runs a PowerShell script.
Here are the details: https://taskscheduler.codeplex.com/discussions/570198
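For illustration, a minimal sketch of what such a PowerShell action might contain (assuming LOGDIR, APPDIR and FULLDIR are environment variables visible to the task, and a dd-MM-yyyy date format as in the %DATE% slicing above); it could be passed inline via powershell.exe -Command or saved as a .ps1 that the action points at:
# Mirrors: set TIMESTAMP=... / md "%TIMESTAMP%" / HTBase.exe ... >> HTBaseFullBackup.log
$ts = Get-Date -Format 'dd-MM-yyyy-HH.mm.ss'
New-Item -ItemType Directory -Path $ts | Out-Null
& HTBase.exe /full /logfile=$env:LOGDIR\HTBase.log /r /y $env:APPDIR "$env:FULLDIR\$ts" >> "$env:LOGDIR\HTBaseFullBackup.log"
The task's "Start in" directory determines where the timestamped folder is created, just as it would for the original md command.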
Related
I tried to run a test.bat file using cfexecute. It shows a timeout error after loading for some time. The output file is blank. But when I double-click the test.bat file it works fine. My code is this:
<cfexecute name="C:\Windows\System32\cmd.exe" arguments="/C C:\ColdFusion2018\cfusion\wwwroot\test.bat" timeout="60" outputfile="C:\ColdFusion2018\cfusion\wwwroot\log_output1.txt"></cfexecute>
We recommend using CFX_EXEC (Windows) instead of the built-in CFExecute. When running BAT files, we've encountered many cases where we needed to run them under a separate Windows account with privileges different from those of the CF service. CFX_EXEC lets us specify the account, whereas CFExecute doesn't have that option at all. We also use CFX_EXEC for performing IP/DNS look-ups, as it's a lot faster than Java, honors TTL and doesn't cache the lookup results "forever".
If you want to run test.bat using cfexecute, test.bat should be the value of the name attribute, not the arguments attribute.
<cfexecute name="C:\ColdFusion2018\cfusion\wwwroot\test.bat"
    timeout="60"
    arguments="whatever applies"
    outputfile="C:\ColdFusion2018\cfusion\wwwroot\log_output1.txt">
</cfexecute>
Thanks for your response,
The batch file executed successfully after suppressing the 'Press any key to continue..' (pause) prompt in the command line. That prompt kept cfexecute waiting until the timeout. That was the issue here.
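For anyone hitting the same symptom, a minimal illustration (the echo line is made up): any interactive prompt such as pause keeps the batch file waiting for input that cfexecute never provides, so remove it.
rem test.bat
echo some output > C:\ColdFusion2018\cfusion\wwwroot\test_result.txt
rem pause  <-- an interactive pause like this keeps cfexecute hanging until its timeout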
I am running into an issue with my EMR jobs where too many input files cause out-of-memory errors. Doing some research, I think changing the HADOOP_HEAPSIZE config parameter is the solution. Old Amazon forum posts from 2010 say it cannot be done.
Can we do that now, in 2018?
I run my jobs using the C# API for EMR, and normally I set configurations using statements like the ones below. Can I set HADOOP_HEAPSIZE using similar commands?
config.Args.Insert(2, "-D");
config.Args.Insert(3, "mapreduce.output.fileoutputformat.compress=true");
config.Args.Insert(4, "-D");
config.Args.Insert(5, "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec");
config.Args.Insert(6, "-D");
config.Args.Insert(7, "mapreduce.map.output.compress=true");
config.Args.Insert(8, "-D");
config.Args.Insert(9, "mapreduce.task.timeout=18000000");
If I need to bootstrap using a file, I can do that too; could someone show me the contents of the file for the config change?
Thanks
I figured it out...
I created a shell script to increase the memory size on the master machine (code at the end)...
I run a bootstrap action like this
ScriptBootstrapActionConfig bootstrapActionScriptForHeapSizeIncrease = new ScriptBootstrapActionConfig
{
    Path = "s3://elasticmapreduce/bootstrap-actions/run-if",
    Args = new List<string> { "instance.isMaster=true", "<s3 path to my shell script>" },
};
The shell script code is this
#!/bin/bash
SIZE=8192
if ! [ -z $1 ] ; then
    SIZE=$1
fi
echo "HADOOP_HEAPSIZE=${SIZE}" >> /home/hadoop/conf/hadoop-user-env.sh
Now I am able to run an EMR job with the master machine type set to r3.xlarge and process 31 million input files.
I'm running the following from a command line to launch a process on a remote computer:
wmic /node:remotemachine /user:localadmin process call create "cmd.exe /c C:\temp\myfolder\test.bat"
Basically it's just:
echo Some Text > output.txt
I tested by double-clicking the batch file and it creates the output.txt file.
The batch file just echoes to a file; I did this to see if it actually runs.
The cmd process starts. I can see it in the process list, but the batch file never creates the text file.
I started off trying to run an EXE from my C# application; it creates the process for the executable, but the actions the executable should take never occur.
So I started testing other ways to do the same thing, and I am encountering the same issue: it creates the process, but doesn't actually run the BAT or EXE.
Any help would be appreciated.
I need to be more specific.
I'm using the following code within my C# application:
// Requires a project reference to System.Management and "using System.Management;"
public static void ConnectToRemoteClient(string client_machine, string target_exe)
{
    var connection = new ConnectionOptions();
    object[] theProcessToRun = { target_exe };   // CommandLine parameter for Win32_Process.Create
    var wmiScope = new ManagementScope($@"\\{client_machine}\root\cimv2", connection);
    wmiScope.Connect();
    using (var managementClass = new ManagementClass(wmiScope, new ManagementPath("Win32_Process"), new ObjectGetOptions()))
    {
        managementClass.InvokeMethod("Create", theProcessToRun);
    }
}
It's called as follows:
string exe = @"cmd.exe /c C:\temp\Myfolder\test.bat";
ConnectToRemoteClient("ClientMachine", exe);
It will launch the process and I see the cmd.exe running, but the test.bat actions never occur.
Telling WMIC to run a single command is pretty straightforward. Trouble shows up once we try to nest one command inside another. :-)
Since this case has an outer command (cmd.exe) and an inner command (C:\temp\Myfolder\test.bat), the trick is separating them in a way that WMIC can use. There are 3 techniques that'll work, but the one which has the fewest issues with special characters is the single-to-double-wrap method. Effectively you use single quotes around the outer command, and double quotes around the inner command. For example:
wmic /node:NameOfRemoteSystem process call create 'cmd.exe /c "whoami /all >c:\temp\z.txt"'
Wrapping in this way will preserve the redirector (>) and it also doesn't require you to double your backslashes on the inner command.
Output From Example:
dir \\NameOfRemoteSystem\c$\temp\z.txt
File Not Found
wmic /node:NameOfRemoteSystem process call create 'cmd.exe /c "whoami /all >c:\temp\z.txt"'
Executing (Win32_Process)->Create()
Method execution successful.
Out Parameters:
instance of __PARAMETERS
{
ProcessId = 20460;
ReturnValue = 0;
};
dir \\NameOfRemoteSystem\c$\temp\z.txt
03/27/2019 04:40 PM 17,977 z.txt
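The same outer/inner separation applies when the command line is built in C# and handed to Win32_Process.Create, as in the code earlier in this thread. A hedged sketch (the redirected log file name is only for illustration):
// Wrap the inner command in its own double quotes ("" inside a verbatim string);
// the redirect makes it easy to confirm on the remote disk that the batch really ran.
string exe = @"cmd.exe /c ""C:\temp\Myfolder\test.bat > C:\temp\Myfolder\wmi_run.log 2>&1""";
ConnectToRemoteClient("ClientMachine", exe);
Keep in mind that a process created this way runs non-interactively in the remote machine's session 0, so you won't see a window; check for the output file instead.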
Please use the below-mentioned PowerShell command:
Invoke-Command -ComputerName <remoteMachine> -Credential $cred -ScriptBlock {<location of batch file>}
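For example, a hedged sketch using the names from this thread ($cred is gathered interactively; PowerShell Remoting/WinRM must be enabled on the target):
$cred = Get-Credential                      # an account that is an administrator on the remote machine
Invoke-Command -ComputerName remotemachine -Credential $cred -ScriptBlock {
    & 'C:\temp\Myfolder\test.bat'           # the call operator runs the batch file remotely
}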
I have a program that needs to run a program we'll call externalProg in parallel on our Linux (CentOS) cluster - or rather, it needs to run many instances of externalProg, each on different cores. Each "thread" creates 3 files based on a few parameters: the inputs to externalProg, a command file to tell externalProg how to execute my file, and a bash script to set up the environment (it calls a setup script provided by the manufacturer) and actually call externalProg with my inputs.
Since this needs to be parallel with an unknown number of concurrent threads and I don't want to risk overwriting another thread's files, I am creating temp files using
mkstemp("PREFIX_XXXXXX")
for these input files. After the external program runs, I extract the relevant data and store it, and close the temp files (therefore deleting them).
We'll call the files created (which actually have names based on the template above):
tmpInputs - Inputs to externalProg
tmpCommand - Input that tells externalProg how to execute tmpInputs
tmpBash - bash script to set up and call externalProg with my inputs
The file tmpBash looks something like
source /path/to/setup/script # Sets up environment variables
externalProg < /path/to/tmpCommand
where tmpCommand is just a simple text file.
The problem I'm having is actually executing the bash script. Within my program, I call
ostringstream launchcmd;
launchcmd << "bash " << path_to_tmpBash;
system(launchcmd.str().c_str());
But nothing happens. No error, no warning, no 'file not found' or 'permission denied' or anything. I have verified that the files are being created and have the correct content. The rest of the code after system() executes successfully (though it fails, since externalProg wasn't run).
Strangely, if I go back to the terminal and type
bash /path/to/tmpBash
then externalProg is executed successfully. I have also cout'd the launchcmd string and copied and pasted that into the terminal, which also works. For some reason, this only fails when called from within my program.
After a bit of experimentation, I've determined that system() calls /bin/sh on our cluster. If I change launchcmd to look like
/path/to/tmpBash
(so that the full command should look like /bin/sh /path/to/tmpBash), I get a permission denied error, which is no surprise. The problem is that I can't chmod +x the tmpBash file while it's still open, and if I close the file, it gets deleted - so I'm not sure how to address that.
Is there something obviously wrong I'm doing, or does system() have some nuance that I'm missing?
edit: I wanted to add that I can successfully call things like
system("echo $PATH")
and get the expected results (in this case, my default $PATH).
Two separate ideas:
Change your SHELL environment variable to be /bin/bash, then call system(),
or:
Use execve directly, e.g. execve("/bin/bash", argv, environ), where argv holds "/bin/bash" and "/path/to/tmpBash".
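A minimal sketch of the execve route (paths as in the question): because bash is invoked explicitly, the temp script never needs the execute bit, which sidesteps the chmod problem entirely.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

// Run /bin/bash with the temp script as its argument and wait for it to finish.
int run_with_bash(const char* script_path)
{
    pid_t pid = fork();
    if (pid == 0) {                                   // child: replace image with bash
        char* const argv[] = {
            const_cast<char*>("/bin/bash"),
            const_cast<char*>(script_path),
            nullptr
        };
        execve("/bin/bash", argv, environ);
        _exit(127);                                   // only reached if execve fails
    }
    int status = 0;
    if (pid > 0)
        waitpid(pid, &status, 0);                     // parent: wait for the script
    return status;
}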
I have an interface that I use to execute MML commands on my Solaris Unix system, like below:
> eaw 0004
<RLTYP;
BSC SYSTEM TYPE DATA
GSYSTYPE
GSM1800
END
<
As soon as I do eaw <name> on the command line, it starts an interface in which I can execute MML commands and see the output of those commands.
My idea here is to parse the command output in C++.
I can work out some logic for the parsing. But to start with, how can I get the command to be executed inside C++? Is there any predefined way to do this?
This should be similar to executing SQL queries inside C++, but we use other libraries to execute SQL queries. I also do not want to run a shell script or create temporary files in between.
What I want is to execute the command inside C++ and get its output, also in C++.
Could anybody point me in the right direction?
You have several options. From easiest and simplest to hardest and most complex to use:
Use the system() call to spawn a shell to run a command
Use the popen() call to spawn a subprocess and either write to its standard input stream or read from its standard output stream (but not both)
Use a combination of pipe(), fork(), dup()/dup2(), and exec*() to spawn a child process and set up pipes for a child process's standard input and output.
The code below uses the sh command. It redirects stdout to a file named "out", which can be read later to process the output. Each command can be written to the process through the pipe.
#include <stdio.h>

int main()
{
    FILE *fp;

    /* Start sh with its stdout redirected to a file named "out";
       the pipe we get back is the shell's stdin. */
    fp = popen("sh > out", "w");
    if (fp) {
        fprintf(fp, "date\n");   /* each line written becomes a command */
        fprintf(fp, "exit\n");
        pclose(fp);              /* use pclose(), not fclose(), for popen() streams */
    }
    return 0;
}
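If the output is wanted directly in the program rather than in an intermediate file, popen() can also be opened in read mode. A minimal sketch (the command string is illustrative):
#include <cstdio>
#include <string>

// Run a command and collect its standard output into a string.
std::string run_and_capture(const char* cmd)
{
    std::string output;
    FILE* fp = popen(cmd, "r");
    if (!fp)
        return output;                 // popen failed; return empty output
    char buf[4096];
    while (fgets(buf, sizeof(buf), fp) != nullptr)
        output += buf;                 // accumulate each chunk of output
    pclose(fp);
    return output;
}
Because popen() is one-directional, a session that has to both send commands and read replies (as the eaw interface appears to) would need the pipe()/fork()/dup2()/exec() combination from the last option above.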