Some lines in my script:
full_path=$(sed -n "${a}p" tmp/archive_file_header_path)
modified_date=$(GetFileInfo -m "$full_path" | awk '{print $1}')
My script works fine (I think) and I get the output as expected.
But one thing bugs me: if I run my script on a file inside RAID storage, it produces the error GetFileInfo: could not refer to file (-43).
Even though it throws this error, the output shows the correct date.
This error only happens if the file is inside the RAID storage; there is no error if it is on a single external drive.
Also, if I run these lines (with the variables set) outside the script, they do not produce the error, even when the file is inside the RAID storage.
If it matters, I tried both zsh and bash, and both give the same result.
My knowledge is very limited; if anyone can shed some light on this, I would be very grateful.
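Not a fix, but a minimal diagnostic sketch built from the same two lines (the line number a=1 and the error-log file name getfileinfo.err are placeholders): it checks that the path actually resolves before calling GetFileInfo, and keeps stderr separate so the (-43) message cannot get mixed into the captured date.
a=1
full_path=$(sed -n "${a}p" tmp/archive_file_header_path)
# Does the path resolve on the RAID volume at all?
if [ ! -e "$full_path" ]; then
    echo "path does not resolve: '$full_path'" >&2
fi
# Capture the date; send GetFileInfo's errors to a separate file for inspection.
modified_date=$(GetFileInfo -m "$full_path" 2>>getfileinfo.err | awk '{print $1}')
echo "modified_date=$modified_date"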
I am running Jupyter with a Python 2.7 kernel and I was able to get
nbconvert to run both on the command line (ipython nbconvert --to pdf file.ipynb) and through the Jupyter browser interface.
For some reason, however, my file only prints 4 out of the 8 total pages I see in the browser. I can convert an HTML preview file (generated by Jupyter) and then use an online converter at pdfcrowd.com to get the full PDF printed.
However, the online converter is lacking in latex capability.
When I installed nbconvert, I did not manually install MiKTeX LaTeX for Windows, as I already had it successfully installed for RStudio and Sweave.
I didn't want to mess with the installation.
I printed a few longer files (greater than 4 pages) without LaTeX, and a few that had LaTeX, and the first couple of pages printed fine. But the last few pages always seem to get cut off from the PDF output.
Does anyone know if that might be what is limiting the print output, or have any other ideas why?
Also, I notice the command-line console where Jupyter is launched shows a few messages...
1.599 [\text{feature_matrix}
] =
?
! Emergency stop.
<inserted text>
1.599 [\text{feature_matrix}
] =
Output written on notebook.pdf (4 pages)
[NbConvertApp] PDF successfully created
I ran another file and saw that the output showed
! You can't use `macro parameter character #' in restricted horizontal mode.
...ata points }}{\mbox{# totatl data points}}
?
! Emergency stop
It looks like it is somehow choking on the LaTeX character # and
then halting the PDF build. But it at least publishes all the pages prior to the error and emergency stop.
While not a real solution, I manually changed the LaTeX symbol # to the word "number" and
the entire file printed. I wonder if this is maybe a problem with my version of LaTeX?
Halted PDF job:
$$\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}$$
Passed PDF job:
$$\mbox{accuracy} = \frac{\mbox{number correctly classified data points}}{\mbox{number total data points}}$$
Edit: in case it helps anyone, I used the suggestion here to put a backslash escape character before the #, which solved the issue; so it does look like a LaTeX issue.
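For reference, this is what the escaped version of the formula above looks like (each # replaced with \#):
$$\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}$$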
I'm using Fabric 1.10 for my project. For one of the tasks, I need to display a list of files present locally but not yet uploaded to the remote server. For this I use rsync.
rsync -avun <local-directory> <remote-server>
This works fine but it also displays a few summary lines and unwanted output, so I try to grep the results. However, this causes an error.
rsync -avun <local-directory> <remote-server> | egrep "(\.png|\.jpg|\.jpeg|\.ico|\.gif)"
Fatal error: local() encountered an error (return code 1)...
Is it not possible to pipe output in Fabric commands?
My guess is that you've mixed up your quotes, or Fabric is conflicting with them. Try using triple quotes (""") to surround your command, and perhaps single quotes around the egrep pattern.
Ultimately, I want a (free) way to monitor changes in a folder and then output those changes to a text file. This needs to be done programmatically, as I'm using AutoHotkey to start the process, and I'd like it to be unseen by the user. I've tried copying the contents of the folder to a text file before the change and again after the change, then using findstr to compare the two and output only the difference to a text file. But findstr outputs false positives: it reports file names as differences even though I can visually confirm they are the same in both files. I need something similar to findstr.
I need something to work like this:
file1.txt:
file1.mp3
file2.mp3
file3.mp3

file2.txt:
file1.mp3
file2.mp3
file3.mp3
file4.mp3

diff.txt:
file4.mp3
Findstr works exactly how I need it to work except for the errors.
findstr /lv /g:file1.txt file2.txt>diff.txt
The error I speak of: with case sensitivity on, I get 2 files in diff.txt that I know are the same in file1.txt and file2.txt. With case sensitivity off, I get something like 30 or 40 files that are the same in both.
AutoHotkey has a very roundabout way to accomplish this, but I would much prefer a simpler way: either batch, VBS, or anything else that will run natively on Windows 7, or a standalone exe. Something relatively simple and free.
What I am trying to achieve: whenever a file is added to the folder, a text file is created with the same file name minus the extension and added to Dropbox; IFTTT then detects the file and notifies me via SMS that 'filename' has finished downloading.
I have already finished and perfected the method of texting the download link to my PC (via a combination of AHK, batch and IFTTT), which automatically downloads the file. Now I just want a way to be notified when it's done.
I am very new to Linux and apologize if my descriptions are not savvy. I will try to be appropriately detailed.
Currently I am working on a terminal using Fedora, and my goal is to create a smaller data set to run a program. I was given an example, and my mentor said that to run the program all I had to do was type "./filename" into the console.
filename has command line arguments as follows: "./main ./textfile1 ./textfile2" Basically, each argument is separated by a space.
I tried recreating this document with similar format, but I am not sure what to save it as, nor does it work when I try running it the same way as the file with a larger data set.
Also, filename is bold in the terminal, whereas the document I created is not. I'm not sure if this helps at all, but it is a difference I noticed.
Any help would be appreciated.
You need to set the execute bit on your file.
chmod +x filename
Make sure you compile the program first, in case you haven't (I typically use the g++ compiler), and then use ./filename like your instructor said, but do not put "./" in front of the arguments. Just write it as "./filename textfile1.txt textfile2.txt".
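A minimal end-to-end sketch, assuming the source file is main.cpp and the data files are textfile1.txt and textfile2.txt (all of these names are placeholders):
g++ -o main main.cpp                   # compile first (skip if you were handed a ready-made binary)
chmod +x main                          # ensure the execute bit is set (only needed if it got lost, e.g. after a copy)
./main textfile1.txt textfile2.txt     # run it, passing the data files as plain arguments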
There is a directory where a buddy adds new builds of a product.
The listing looks like this
$ ls path-to-dir/
01
02
03
04
$
where the numbers listed are not files but names of directories containing the builds.
I have to manually go and check every time whether there is a new build or not. I am looking for a way to automate this, so that the program can send an email to some people (including me) whenever path-to-dir/ is updated.
Do we have an already existing utility or a Perl library that does this?
inotify.h does something similar, but it is not supported on my kernel (2.6.9).
I think there can be an easy way in Perl.
Do you think this will work?
Keep a loop running in Perl that does an ls path-to-dir/ every 5 minutes, say, and stores the results in an array. If it finds that the new results differ from the old results, it sends out an email using Mail or Email.
If you're going for Perl, I'm sure the excellent File::ChangeNotify module will be extremely helpful to you. It can use inotify, if available, but also all sorts of other file-watching mechanisms provided by different platforms. Also, as a fallback, it has its own watching implementation, which works on every platform but is less efficient than the specialized ones.
Checking for different ls output would send a message even when something is deleted or renamed in the directory. You could instead look for files with an mtime newer than the last message sent.
Here's an example in bash; you can run it every 5 minutes:
now=`date +%Y%m%d%H%M.%S`
if [ ! -f "/path/to/cache/file" ] || [ -n "`find /path/to/build/dir -type f -newer /path/to/cache/file`" ]
then
    touch /path/to/cache/file -t "$now"
    sendmail -t <<< "
To: aaa@bbb.ccc
To: xxx@yyy.zzz
Subject: New files found

Dear friend,

I have found a couple of new files.
"
fi
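If you save that as a script (say /path/to/check-new-builds.sh, a placeholder name) and make it executable, a crontab entry can run it every 5 minutes:
*/5 * * * * /path/to/check-new-builds.sh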
Can't it be a simple shell script?
old_n=0
while :; do
    n=$(ls -al path-to-dir | wc -l)
    if [ "$n" -gt "$old_n" ]; then
        # your Mail code here
        old_n=$n
    fi
    sleep 5   # adjust the interval as needed (e.g. 300 for every 5 minutes)
done
Yes, a loop in Perl as described would do the trick.
You could keep track of when the directory was last modified; if it hasn't changed, there isn't a new build. If it has changed, an old build might have been deleted or a new one added. You probably don't want to send alerts when old builds are removed; it is crucial that the email is sent when new builds are added.
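A small shell sketch of that idea, assuming GNU stat and a placeholder state file:
dir=path-to-dir
state=/tmp/last_build_mtime                   # placeholder for wherever you keep state
current=$(stat -c %Y "$dir")                  # the directory's mtime changes when entries are added or removed
previous=$(cat "$state" 2>/dev/null || echo 0)
if [ "$current" -gt "$previous" ]; then
    echo "$current" > "$state"
    # send the notification email here
fi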
However, I think that msw has the right idea; the build should notify when it has completed the copy out to the new directory. It should be a script that can be changed to notify the correct list of people - rather than a hard-wired list of names in the makefile or whatever other build control system you use.
You could use dnotify; it is the predecessor of inotify and should be available on your kernel. It is still supported by newer kernels.