I'm running CentOS 7.5. I've currently got rsyslog set up and working. The only problem is that it's not recognizing the %HOSTNAME% variable in my file path. Instead of writing the logs to a separate file for each host, it's creating a single file literally named '%HOSTNAME%' and writing all the logs there.
This is what I currently have in my /etc/rsyslog.conf file.
if $fromhost-ip startswith "10." then /var/log/Client_Logs/%HOSTNAME%.log
& ~
Everything else is working; the only part that fails is creating a separate file for each device. I'm not sure whether the client devices are failing to provide this information. I've got several client devices so far, and they are all Cisco switches.
I've also tried other variables like '%FROMHOST-IP%', and they don't work either.
Any help is appreciated. Thanks.
For dynamic file names, you must first define the path separately in a template, e.g.:
$template DynFile,"/var/log/Client_Logs/%HOSTNAME%.log"
if $fromhost-ip startswith "10." then ?DynFile
See the rsyslog documentation on actions.
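If you're on rsyslog v7 or newer (CentOS 7 ships v7/v8), the same thing can be expressed in the newer RainerScript syntax. A minimal sketch, reusing the path from the question; stop replaces the legacy & ~ discard:
template(name="DynFile" type="string" string="/var/log/Client_Logs/%HOSTNAME%.log")
if $fromhost-ip startswith "10." then {
    action(type="omfile" dynaFile="DynFile")
    stop
}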
I've been searching and searching, but I did not find anything useful. I would like to implement some automation in Postman.
I don't know if this is even possible, but I would like to force Postman to automatically read JSON files from a directory, i.e. the file system or whatever. Do you get me?
Every time I want to execute anything in Postman, I have to open the collection, select the desired collection, click on Runner, and then choose the environment, select the data file, and finally click Start Run. I don't want to do it manually anymore.
Take a look at these questions:
Is it possible to schedule a task in Postman?
Is it possible to read/reach files from the file system or something like that?
A friend of mine told me that it was possible, but I don't have the details, and I want to do it.
Can you help me? I'm pretty lost.
You can do this using Newman to run the collection. All the usage details and examples can be found here:
https://github.com/postmanlabs/newman
You can use the -d flag to specify a file path to the data file for the collection to use. This is the same as running the collection in the UI; it just brings it out to the command line.
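For example (the collection, environment, and data file names here are placeholders for your own files):
newman run my_collection.json -e my_environment.json -d data.json
And since Newman is just a command-line tool, the scheduling part can be handled by anything that runs commands on a timer, e.g. a hypothetical nightly cron entry:
0 2 * * * newman run /path/to/my_collection.json -d /path/to/data.json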
Before asking this question I searched a lot about logging (the terminal debug log) to a file for a Tizen application. I found some other ways to implement this, but they all take a somewhat complex path. I want something straightforward, simple, and built in for Tizen applications.
So here is what I want:
I will run a Tizen application written in C/C++. It will generate response logs on the terminal based on the various queries I send to the app.
I want to save those logs into a specific file, like file_name.log.
That file_name.log will be saved somewhere on my PC. The developer should be able to change the location.
Is there any command or existing system for Tizen apps?
Thank you in advance.
Read https://developer.tizen.org/development/guides/native-application/error-handling/system-logs about Tizen's built-in logging system.
As stated on the page, the logs can also be retrieved from the command line using sdb shell dlogutil [options...] [TAG], or simply sdb dlog [options...] [TAG]. So if you want to save the output as a file, simply do sdb dlog [-d] MY_APP > file_name.log. If this is not what you are searching for, please be more specific in your question.
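As a minimal sketch (MY_APP is a placeholder for whatever log tag your application actually uses):
sdb dlog -d MY_APP > file_name.log
dumps what is currently in the log buffer and exits, while
sdb dlog MY_APP | tee file_name.log
keeps streaming the log to the terminal and saves a copy at the same time.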
I want to use two files as input to a MapReduce program, but using * doesn't work as a filename pattern.
I would expect that pointing the job at input/ should do the trick. To get started, try running the WordCount example: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
At the end of that tutorial they explain how to run the job (they run it on multiple dictionary files which reside in an input directory).
EDIT: Also check this tutorial for using the distributed file system; you usually need your input files in the DFS.
It works, and it should work on your machine as well. Are you sure about the path you are giving? Is it input/190*.txt or /input/190*.txt? Please mind the "/": paths without a leading / are assumed to be relative to /user/<username>, whereas paths with a leading / are resolved from the root directory.
And it works with mv (or any other HDFS command, for that matter) as well.
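As a quick sanity check (the jar and class names below are placeholders), the same glob should work both for the fs shell and as a job input path; quote it so your local shell doesn't try to expand it first:
hadoop fs -ls 'input/190*.txt'
hadoop jar wordcount.jar WordCount 'input/190*.txt' output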
What I want to achieve
The user would provide a command which would do remote execution. The command (the protocol for remote execution) can be SSH/RSH, etc. So I want it to be part of a configuration file or a template file (assume the parameters are fixed across protocols), like the sample below -
template.cfg file (as configured by user):
ssh $ip $commandList
I would generate a list of values in another data file, which would contain the IP address and the command list, like:
10.182.215.214|echo $UNAME
10.251.142.142|echo $SHELLNAME
I would like to have a script, call it driver.sh, which when executed generates the actual execution script, execute.sh, by filling in the command from the template.
Questions
How can I generate the script based on the template/plugin (which is free to provide the command)?
If the data is generated in an online application (C/C++), is there any better way than the normal file-based operation (read from the cfg file and update execute.sh)?
1.
# Read each "ip|commandList" record from the data file and expand the template for it.
while IFS='|' read -r ip commandList
do
    eval echo "$(<template.cfg)"
done <data >execute.sh
You may want to quote the variable expansions in the data file.
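With the sample data above, execute.sh would come out as:
ssh 10.182.215.214 echo $UNAME
ssh 10.251.142.142 echo $SHELLNAME
($UNAME and $SHELLNAME survive because eval performs only a single expansion pass on the template; quoting them in the data file, as suggested, makes that intent explicit.)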
2.
Since you want the user-provided command to be part of a configuration file, I see no other way than to read from the cfg file; on the other hand, you may well execute the generated commands directly instead of writing them to an execute.sh.
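For example, the same loop can pipe the generated lines straight into a shell rather than into execute.sh:
while IFS='|' read -r ip commandList
do
    eval echo "$(<template.cfg)"
done <data | sh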
This almost looks as if you're trying to re-implement automated configuration tools like Puppet or Chef. Beats the ssh loop.
Puppet comes with a companion tool called Facter, which is used to collect and report all kinds of data about your remote systems.
All of these tools require some setup (public/private keypairs, software installation).
They both have the advantage of built-in logging - good for audits.
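For example, once the agent is installed, individual facts can be queried straight from the command line (exact fact names vary between Facter versions):
facter ipaddress
facter operatingsystem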
I would like to redirect the output of the Fabric command
print local("git add .")
so that I can make sure that everything is working properly. Is there a way to pipe the results of this command to the console? Thanks!
You should already see the output of the command in the console, since that's the default behaviour. If you need to customize it, please have a look at the managing output section of the documentation.
Besides this, there's an ongoing effort to use the logging module (issue #57), which should provide more options like logging to a file, but that hasn't been merged into the trunk branch yet.
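If you also want the output as a value (to log or inspect it yourself), Fabric 1.x's local() accepts a capture flag. A minimal sketch:
from fabric.api import local

def add_all():
    # capture=True returns the command's stdout instead of echoing it live
    result = local("git add .", capture=True)
    print(result)  # the returned string also exposes .return_code and .stderr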