I've been having a lot of trouble using pscp and regex on an AIX server.
As you can see, I am trying to use PuTTY's pscp to transfer the file "stdout" to my local folder.
This usually works fine, but my problem is that I won't know the exact folder name, so I need to use a regex.
I've been told that my regex was possibly written for grep and isn't supported by pscp.
What would be the alternative way to write a pattern for pscp?
The error message is: "multiple-level wildcards unsupported"
pscp.exe -P 22 -pw krt_345 testuser@testserver5:"/app/log/s500/20201023/.\*/20201023-02\.2[0-9]\.[0-5][0-9]_s500_testuser.\*"/stdout C:\logs
regex only:
"/app/log/s500/20201023/.\*/20201023-02\.2[0-9]\.[0-5][0-9]_s500_testuser.\*"/stdout
With the SCP protocol, the filemask in the path is resolved by the server. With a typical OpenSSH scp "server", you can use standard Linux glob masks, definitely not regexes. Your mask is simple enough that a plain glob mask 20201023-02.2[0-9].[0-5][0-9]_s500_testuser* would do. But you can use a glob mask for the last path component only, not for the parent directory. That is what the "multiple-level wildcards unsupported" error message is trying to tell you.
So what you are doing is not doable with SCP alone. You would have to obtain the folder name by other means, such as a shell command over SSH.
And I believe you have asked about this already:
Finding folder name on a server using batch file
And based on your comments to that answer, you already know that you need to combine a shell command like find with scp. So I do not understand why you don't ask about that.
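For illustration, a minimal batch sketch of that combination, assuming plink is available and the remote account has a POSIX shell (the host, password, and paths are taken from your example; everything else is hypothetical):

rem Let the remote shell expand the multi-level glob, then copy each exact path.
rem Note: every matched file is named "stdout", so with multiple matches you
rem should give each one a distinct local name instead of a flat C:\logs target.
for /f "delims=" %%P in ('plink -batch -pw krt_345 testuser@testserver5 "ls -d /app/log/s500/20201023/*/20201023-02.2[0-9].[0-5][0-9]_s500_testuser*/stdout"') do (
    pscp -P 22 -pw krt_345 "testuser@testserver5:%%P" C:\logs
)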
I'm trying to find a simple/elegant command-line solution for a process that is often used in scripts. Something like: (Fictional example)
CopyWithReplace <SourceFile> <DestFile> -m <match regular expression> -r <replacement regular expression>
It would copy the text file with the matched text replaced as specified. Ideally, the find/replace would happen in the pipeline, rather than as a secondary step. (Destinations quite often are remote locations, and long distance WAN links are often not as fast and reliable as desired.)
What would be the simplest** way to achieve this scriptable functionality in a Windows environment?
** Simplest = easy-to-write batch code, fewest 3rd-party tools, etc. Bonus points for a reasonably standard regex implementation.
This can be achieved with sed.
The basic usage pattern, for a substitution as you described, is:
sed 's/regexp/replacement/g' inputFileName > outputFileName
sed is a Unix utility, but there are several ways of using it in Windows if you wish. This StackOverflow post lists the various options available.
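Applied to the fictional CopyWithReplace example, the copy and the replacement then collapse into a single pipeline step (a sketch assuming a Windows sed port such as the GnuWin32 or Cygwin one; the file names, patterns, and share below are made up):

sed "s/oldhost/newhost/g" config-template.txt > \\remoteserver\share\config.txt

Because sed streams the file, the rewritten text goes straight to the destination with no intermediate local copy.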
We can search for files in Windows 7 or higher using the following tool:
(I don't have image-uploading privileges. I mean the top-right area in Windows File Explorer.)
When I search for MATLAB files using "*.m", it not only returns *.m files but also *.mp3 and *.mp4 files. Is there any way to show *.m files exclusively?
Thanks!
I assume you used the quotation marks here to show the text you typed, because, ironically, putting the search in quotation marks is exactly how it should work...
so
*.m
finds .mp3 as well as .m but
"*.m"
should only find the .m files. Alternatively you could also write
ext:".m"
which would guarantee that only extensions are searched. (Although I am not sure this is ever necessary here: Windows file names can contain dots, and files can exist without extensions, but I am not sure whether both can happen at the same time.)
Using the following
"*.m"
will solve your problem. You can find more information on the search syntax on MSDN at the following link: Advanced Query Syntax.
Beyond that, you can also take advantage of the wildcard character *.
For example, if you want to search for a file whose name ends with 024 or starts with 024, you can type *024.* or 024*.* respectively into the search box.
Here the * after the . matches files with any extension; if you want a particular extension, spell it out, like *024.png.
Explorer doesn't have a regex search function.
You can use PowerShell instead of Windows Explorer;
for example, where '(?i)Out' is a regex:
Get-ChildItem -Path e:\temp -Recurse -File | Where-Object { $_.Name -match '(?i)Out' }
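Applied to the original question, a match on exactly the .m extension could look like this (a sketch; adjust the path to your own folder):

Get-ChildItem -Path C:\matlab -Recurse -File | Where-Object { $_.Extension -eq '.m' }

Comparing $_.Extension sidesteps the substring problem entirely, since .mp3 and .mp4 are distinct extensions.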
Alternatively, you can simply search for your extension like this:
.extension
e.g.:
Typing .exe will give you all the files with the .exe extension in a folder.
PS: Typing .xml OR .vmcx will give you both types of files. This is useful if you want to collect different kinds of files stored in different folders or locations.
You can get close to proper regex support with the mostly awesome Cygwin, and as a bonus you get nearly every Linux tool running natively on Windows. Windows search, by contrast, still doesn't know that .* means "zero or more of anything", or that ^ means the start of a line (and $ the end), so some things are still weird.
And a startlingly large bunch of weird corner cases that only deranged Perl programmers would notice also fail the test.
It gets plenty of other things wrong too, but Cygwin is more workable than anything built into any Windows OS, plus you get perl, grep, diff, wget, curl, etc. -- the whole GNU toolset for free.
If you want a full-on bash shell with proper respect for regex, install the super neat-o Bash for Windows 10.
Either will do what you want. And they're a billion times faster than that stupid search bar that takes off at 100 mph then crawls to 1 pixel per 10 minutes near the end.
I looked for an AppleScript to extract the DOI from a PDF file, but could not find one. There is enough information available on the actual format of the DOI (i.e. the regular expression), but how could I use this to get the identifier from the PDF file?
(It would be no problem if some external program were used, such as Hazel.)
If you're ok with using an app, I'd recommend Skim. Good AppleScript support. I'd probably structure it like this (especially if the document might be large):
set DOIFound to false
set DOI to ""
tell application "Skim"
	set pp to pages of document 1
	repeat with p in pp
		set t to text of p
		-- look for a DOI and set DOIFound to true; AppleScript has no native
		-- regex, so shell out to grep (the pattern is a simplified assumed DOI form)
		try
			set DOI to do shell script "printf %s " & quoted form of (t as string) & " | grep -oE '10\\.[0-9]{4,9}/[^[:space:]]+' | head -n 1"
		end try
		if DOI is not "" then set DOIFound to true
		if DOIFound then exit repeat -- if it's not found then use url?
	end repeat
end tell
I'm assuming a DOI would always exist on one page (not spread across two). It looks like they are invariably (?) on the first page of an article, which would make this quick, of course, even with a large document.
[edit]
Another way would be to get the Xpdf OSX binaries from http://www.foolabs.com/xpdf/download.html and use pdftotext on the command line (just tested this; it works well), then parse the text using AppleScript. If you want to stay in AppleScript, you can do something like:
do shell script "path/to/pdftotext 'path/to/pdf/file.pdf'"
which outputs a file in the same directory with a .txt extension -- you then parse that for the DOI.
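To skip the intermediate file, pdftotext can also write to standard output (pass - as the output name), so the extraction and the DOI match can be combined in one shell call; the grep pattern here is a simplified assumed DOI form:

do shell script "path/to/pdftotext 'path/to/pdf/file.pdf' - | grep -oE '10\\.[0-9]{4,9}/[^[:space:]]+' | head -n 1"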
Have you tried pdfgrep? It works really well on the command line:
pdfgrep -n --max-count 1 --include "*.pdf" "DOI"
I have no idea how to build an AppleScript for this, though, but I would also be interested in one, so that if I drop a PDF into a folder it automatically extracts the DOI and renames the file with the DOI in the filename.
I was wondering: how does a utility like this one redirect a folder to a drive letter?
P.S. I need this done with C/C++/MFC.
It probably uses DefineDosDevice, as an ordinary subst command does.
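A minimal C sketch of that call (the drive letter X: and the target folder below are made-up examples):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Equivalent of "subst X: C:\some\folder"; both values are examples only. */
    if (!DefineDosDeviceA(0, "X:", "C:\\some\\folder")) {
        printf("DefineDosDevice failed: %lu\n", GetLastError());
        return 1;
    }
    printf("X: now points at C:\\some\\folder\n");

    /* To undo the mapping later:
       DefineDosDeviceA(DDD_REMOVE_DEFINITION, "X:", NULL); */
    return 0;
}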
I think I've written maybe one shell script in my entire life, and I'm not even sure if it's possible to do this, but I'm trying to write a script that will FTP the contents of a directory one file at a time. That is, it'll transfer one file and close the connection, then transfer the second and close that connection, and so on. This is because there may be up to five files in a directory, all of which are a minimum of 2 GB each. FTPing them all at once always results in a reset connection. I thought that if I could match by partial filename, then perhaps that would help, as they are all named the same way.
So, in a directory, it'll have:
SampleFileA_20100322_1.txt
SampleFileA_20100322_2.txt
SampleFileB_20100322_1.txt
SampleFileC_20100322_1.txt
I'd like to ftp SampleFileA_xxxx_1 first, then SampleFileA_xxxx_2, etc. This is the current ftp script, which tries to download everything all at once...
#!/bin/bash
REMOTE='ftp.EXAMPLE.com'
USER='USERNAME'
PASSWORD='PASSWORD'
FTPLOG='/tmp/ftplog'
date >> $FTPLOG
ftp -in $REMOTE <<_FTP >> $FTPLOG
quote USER $USER
quote PASS $PASSWORD
bin
cd download
mget *
quit
_FTP
Based on your question, I think you need something like:
for file in Sample*txt
do
    run_ftp_function "$file"
done
You'll need to set up run_ftp_function to do the transfer (like the one you already have), using $1 as the file to fetch.
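Putting it together, a minimal sketch that assumes the remote file names are known in advance (as in your example) and reuses the credentials and log from your script:

#!/bin/bash
REMOTE='ftp.EXAMPLE.com'
USER='USERNAME'
PASSWORD='PASSWORD'
FTPLOG='/tmp/ftplog'

run_ftp_function () {
    # one connection per file; $1 is the remote file to fetch
    date >> "$FTPLOG"
    ftp -in "$REMOTE" <<_FTP >> "$FTPLOG"
quote USER $USER
quote PASS $PASSWORD
bin
cd download
get $1
quit
_FTP
}

for file in SampleFileA_20100322_1.txt SampleFileA_20100322_2.txt \
            SampleFileB_20100322_1.txt SampleFileC_20100322_1.txt
do
    run_ftp_function "$file"
done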