I'm trying to run an app (let's say top) so that it reads its stdin from one file and writes its stdout to another file.
Currently I have
mkfifo stdin.pipe
(tail -f stdin.pipe) | top
which works as expected, as I can then echo something to that file and top will receive it.
But I'm unable to redirect the output of top.
How can I achieve this?
EDIT:
Ok, let's scratch top.
I'm testing with this:
cat test.sh
echo Say something
read something
echo you said $something
Let's forget about top; it appears to be a red herring.
To map stdin or stdout to files, you can use redirection:
some_program < input_file # Redirects stdin
another_program > output_file # Redirects stdout
or even:
yet_another < input_file > output_file
Is there a way I can map stdin and stdout to files and use them to control a CLI app?
It sounds like you are looking for coprocesses, added to Bash in 4.0.
coproc cat                      # Start cat in the background
echo Hello >&"${COPROC[1]}"     # Say "Hello" to cat
read -r LINE <&"${COPROC[0]}"   # Read the response
echo "$LINE"                    # cat replied "Hello"!
Before 4.0 you had to use two named pipes to achieve this.
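A minimal sketch of that pre-4.0 wiring, using two FIFOs and cat as the stand-in program (the pipe names in.pipe/out.pipe are arbitrary):

```shell
#!/bin/bash
mkfifo in.pipe out.pipe

# Run the program in the background with its stdin/stdout on the FIFOs.
cat < in.pipe > out.pipe &

# Open long-lived descriptors so the FIFOs don't see EOF between writes.
exec 3> in.pipe   # fd 3: our writing end (the program's stdin)
exec 4< out.pipe  # fd 4: our reading end (the program's stdout)

echo Hello >&3    # send a line to the program
read -r LINE <&4  # read its reply
echo "$LINE"      # prints: Hello

# Close the descriptors (the background cat then sees EOF and exits).
exec 3>&- 4<&-
rm -f in.pipe out.pipe
```

The order matters: the background job blocks opening in.pipe for reading until our exec opens it for writing, and vice versa for out.pipe, so neither side deadlocks.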
Related
I had a bunch of bash scripts in a directory that I "backed up" by running $ tail -n +1 -- *.sh
The output of that tail is something like:
==> do_stuff.sh <==
#! /bin/bash
cd ~/my_dir
source ~/my_dir/bin/activate
python scripts/do_stuff.py
==> do_more_stuff.sh <==
#! /bin/bash
cd ~/my_dir
python scripts/do_more_stuff.py
These are all fairly simple scripts with 2-10 lines.
Given the output of that tail, I want to recreate all of the above files with the same content.
That is, I'm looking for a command that can ingest the above text and create do_stuff.sh and do_more_stuff.sh with the appropriate content.
This is more of a one-off task, so I don't really need anything robust, and I believe there are no big edge cases given that the files are simple (e.g. none of the files actually contain ==> in them).
I started by trying to come up with a matching regex, which will probably look something like (==>.*\.sh <==)(.*)(==>.*\.sh <==), but I'm stuck on actually capturing the filename and content and writing them out to a file.
Any ideas?
Assuming your backup file is named backup.txt:
perl -ne "if (/==> (\S+) <==/){open OUT,'>',$1;next}print OUT $_" backup.txt
The version above is for Windows (cmd quoting); the fixed version on *nix:
perl -ne 'if (/==> (\S+) <==/){open OUT,">",$1;next}print OUT $_' backup.txt
#!/bin/bash
re='^==> (.*) <==$'
while read -r line; do
    if [[ $line =~ $re ]]; then
        out="${BASH_REMATCH[1]}"
        continue
    fi
    printf '%s\n' "$line" >> "$out"
done < backup.txt
Drawback: extra blank line at the end of every created file except the last one.
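For a one-off like this, awk reads naturally too. A sketch assuming the same backup.txt layout and filenames without spaces (so the name is always field 2 of the header line); it has the same trailing-blank-line drawback:

```shell
# Sample input in the format tail -n +1 produces.
cat > backup.txt <<'EOF'
==> do_stuff.sh <==
#! /bin/bash
echo stuff

==> do_more_stuff.sh <==
#! /bin/bash
echo more
EOF

# A header line sets the current output file; every other line is appended to it.
awk '/^==> .* <==$/ { out = $2; next } { print > out }' backup.txt
```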
I'm trying to make a bash script to rename files which match my regex; if they match, I want to rename them using the regex and overwrite an old existing file.
I want to do this because I have a file on computer 1 and change it on computer 2. Later, when I go back to computer 1, the sync reports a copy conflict and saves both versions.
Example file:
acl_cam.MYI
Example file after conflict:
acl_cam (Example conflit with .... on 2015-08-20).MYI
I tried a lot of things like rename, mv and a couple of other scripts, but it didn't work.
The regex I think I should use:
(.*)/s\(.*\)\.(.*)
Then rename the match to value1.value2, replacing the old file (acl_cam.MYI), and do this for all files/directories below the starting point.
Can you help me with this one?
The issue you have, if I understand your question correctly, is two part. (1) What is the correct regex that will match the error string and produce a filename?; and (2) how to use the returned filename to move/remove the offending file?
If the string at issue is:
acl_cam (Example conflit with .... on 2015-08-20).MYI
and you need to return the MySQL file name, then a regex similar to the following will work:
[ ][(].*[)]
The stream editor sed is about as good as anything else to return the filename from your string. Example:
$ printf "acl_cam (Example conflit with .... on 2015-08-20).MYI\n" | \
sed -e 's/[ ][(].*[)]//'
acl_cam.MYI
(shown with line continuation above)
Then it is up to you how you move or delete the file. The remaining question is where is the information (the error string) currently stored and how do you have access to it? If you have a file full of these errors, then you could do something like the following:
while read -r line; do
victim=$( printf "%s\n" "$line" | sed -e 's/[ ][(].*[)]//' )
## to move the file to /path/to/old
[ -e "$victim" ] && mv "$victim" /path/to/old
done <$myerrorfilename
(you could also feed the string to sed as a here-string, but omitted for simplicity)
You could also just delete the file if that suits your purpose. However, more information is needed to clarify how/where that information is stored and what exactly you want to do with it to provide any more specifics. Let me know if you have further questions.
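If the conflict marker always begins with " (", you can also strip it with plain bash parameter expansion, no sed needed. A sketch (assumes the name contains exactly one "(" and one "."):

```shell
name='acl_cam (Example conflit with .... on 2015-08-20).MYI'

base="${name%% (*}"   # everything before " ("  -> acl_cam
ext="${name##*.}"     # everything after the last "." -> MYI
victim="$base.$ext"

echo "$victim"        # prints: acl_cam.MYI
```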
Final solution for this question for people who are interested:
for i in *; do
    # Wildcard check: does the current file name contain "(Exemplaar"?
    if [[ $i == *"(Exemplaar"* ]]
    then
        # Recover the original name (without the Exemplaar-conflict suffix)
        NewFileName=$(echo "$i" | sed -E -e 's/[ ][(].*[)]//')
        # Remove the original file
        rm "$NewFileName"
        # Copy the conflict file under the original file name
        cp -a "$i" "$NewFileName"
        # Delete the conflict file
        rm "$i"
        echo "Replaced file: $NewFileName with: $i"
    fi
done
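For those interested, the rm/cp/rm sequence can be collapsed into a single mv -f, which overwrites the original in one step. A sketch of the same loop (the sample file names are made up for illustration):

```shell
# Sample files: an original and a Dropbox-style conflict copy.
printf 'old\n' > 'acl_cam.MYI'
printf 'new\n' > 'acl_cam (Exemplaar conflict 2015-08-20).MYI'

for i in *"(Exemplaar"*; do
    # Strip the " (...)" conflict marker to recover the original name.
    NewFileName=$(printf '%s\n' "$i" | sed -E 's/[ ][(].*[)]//')
    # mv -f overwrites the original in one step (replaces rm + cp + rm).
    mv -f "$i" "$NewFileName"
    echo "Replaced file: $NewFileName with: $i"
done
```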
I used this code to replace my database conflict files created by Dropbox syncing between different computers.
I have thousands of text documents and they have varied number of lines of texts. I want to combine all the lines into one single line in each document individually. That is for example:
abcd
efgh
ijkl
should become
abcd efgh ijkl
I tried using sed commands, but it is not quite achieving what I want, as the number of lines in each document varies. Please suggest what I can do. I am working with Python on Ubuntu. One-line commands would be of great help. Thanks in advance!
If you place your script in the same directory as your files, the following code should work.
import os

folder = r'C:\Users\B\Desktop\newdocs'   # directory containing the .txt files
count = 0
for doc in os.listdir(folder):
    if doc.endswith(".txt"):
        with open(os.path.join(folder, doc), 'r') as f:
            # Collapse all whitespace (including newlines) to single spaces
            single_space = ' '.join(f.read().split())
        with open("new_doc{}.txt".format(count), "w") as out:
            out.write(single_space)
        count += 1
#inspectorG4dget's code is more compact than mine -- and thus I think it's better. I tried to make mine as user-friendly as possible. Hope it helps!
Using python wouldn't be necessary. This does the trick:
% echo `cat input.txt` > output.txt
To apply to a bunch of files, you can use a loop. E.g. if you're using bash:
for inputfile in /path/to/directory/with/files/* ; do
    echo `cat "${inputfile}"` > "${inputfile}2"
done
Assuming all your files are in one directory, have a .txt extension, and you have access to a Linux box with bash, you can use tr like this:
for i in *.txt ; do tr '\n' ' ' < "$i" > "$i.one"; done
For every "file.txt", this will produce a "file.txt.one" with all the text on one line.
If you want a solution that operates on the files directly you can use gnu sed (NOTE THIS WILL CLOBBER YOUR STARTING FILES - MAKE A BACKUP OF THE DIRECTORY BEFORE TRYING THIS):
sed -i -n 'H;${x;s|\n| |g;p};' *.txt
If your files aren't in the same directory, you can use find with -exec:
find . -name "*.txt" -exec YOUR_COMMAND \{\} \;
If this doesn't work, maybe a few more details about what you're trying to do would help.
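Another loop-free option, if your system has paste: its -s flag serializes all lines of one file into a single line. A sketch with a throwaway input file (the names input.txt/output.txt are just for illustration):

```shell
# Build a small sample file with one word per line.
printf 'abcd\nefgh\nijkl\n' > input.txt

# -s joins all lines of the file; -d ' ' uses a space as the separator.
paste -s -d ' ' input.txt > output.txt

cat output.txt   # prints: abcd efgh ijkl
```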
I have a list of files in files.txt, a hugely simplified example
$FOO%foo\bar\biz.asmx
%FOO%foo\bar\biz.cs
%FOO%baz\bar\foo\biz.asmx
It is my desire to insert App_Code in the path of .asmx files like:
$FOO%foo\bar\App_code\biz.asmx
%FOO%foo\bar\biz.cs
%FOO%baz\bar\foo\App_Code\biz.asmx
Though I'm on a windows box I have gnuwin32, which gives me sed/awk/grep and other fancy stuff.
I'm not wedded to a particular solution, but am interested in the sed/awk route for my own enlightenment.
I have tried:
sed "s/\\([:alnum:]*)\.asmx/App_Code\/{1}/"
which I had thought would capture any alphanumeric characters (the filename) after a path separator, followed by .asmx, and then replace it with App_Code/{contents of the group}.
Something is off, as it never finds what I want. I'm struggling with the docs and examples; advice and guidance would be appreciated.
Quoting on Windows is a pain so put the following script into a file called appcode.awk:
BEGIN {
    FS = OFS = "\\"
}
$NF ~ /[.]asmx/ {
    $NF = "App_code" OFS $NF
}
{
    print
}
And run like:
$ awk -f appcode.awk file
$FOO%foo\bar\App_code\biz.asmx
%FOO%foo\bar\biz.cs
%FOO%baz\bar\foo\App_code\biz.asmx
Using awk
awk -F\\ '/\.asmx/ {$NF="App_Code\\"$NF}1' OFS=\\ file
$FOO%foo\bar\App_Code\biz.asmx
%FOO%foo\bar\biz.cs
%FOO%baz\bar\foo\App_Code\biz.asmx
Using sed:
sed -r 's/(\\\w+\.asmx)/\\App_Code\1/' files.txt
Output:
$FOO%foo\bar\App_Code\biz.asmx
%FOO%foo\bar\biz.cs
%FOO%baz\bar\foo\App_Code\biz.asmx
EDIT
As suggested by sudo_O, the capture group can be dropped and & can be used instead in the same command.
sed -r 's/\\\w+\.asmx/\\App_Code&/' files.txt
I'm trying to extract the size (in kb) from a file. Trying to do so as follows:
textA=$(du a)
sizeA=$(expr match "$textA" '\(^[^\s]*\)')
textB=$(du b)
sizeB=$(expr match "$textB" '\(^[^\s]*\)')
echo $textA
echo $sizeA
echo $textB
echo $sizeB
[[ $sizeA == $sizeB ]] && echo "eq"
But this just prints textA and textB to the console. Both look like:
30745 a
Can someone please explain why the regex is not matching? I've tried testing the regex against the text on many sites, just to make sure, and it appears to capture the correct text.
I've also tried changing it to:
'^\([^\s]*\)'
But this way it will capture all the text. Any thoughts?
My expr match does not understand \s or other extended regexps. Try '\([0-9]*\)' instead.
But as others mentioned already, using a regexp to get "the first word" is a little overkill. I'd use du a | { read size rest; echo "$size"; }, but you could also use the awk version or solutions using cut.
Not a direct answer, but I would do it like this:
sizeA=$(du a | awk '{print $1}')
size=$(wc -c < file)
If you want to use du, I would use the bash builtin read:
read size filename < <(du file)
Note that you can't say du file | read size filename because in bash, components of a pipeline are executed in subshells, so the variables will disappear when the subshell exits.
Do not parse the output of du; if available, you can e.g. use stat to get the size of a file in bytes:
sizeA=$(stat -c%s "${fileA}")
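Putting that together, the original comparison could be written with stat (GNU coreutils; on BSD/macOS the equivalent flag is stat -f%z). A sketch with two throwaway files, noting that this compares bytes rather than du's kilobyte blocks:

```shell
# Two sample files of equal size (6 bytes each, including the newline).
printf 'hello\n' > a
printf 'world\n' > b

sizeA=$(stat -c%s a)   # size of a in bytes
sizeB=$(stat -c%s b)   # size of b in bytes

[ "$sizeA" -eq "$sizeB" ] && echo "eq"   # prints: eq
```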