I want to compress all logs that match a pattern entered by the user. I am using the code below but am getting an error.
import subprocess
input = input("Enter log details to compress")
subprocess.call(['gzip', input], shell=True)
Input given as: purato.log.2017-08*
Error: gzip: compressed data not written to a terminal. Use -f to force compression.
You are giving gzip the literal string purato.log.2017-08* in Python. However, when you execute gzip purato.log.2017-08* in the shell, the wildcard * is expanded by the shell before gzip is invoked, so gzip receives the matching file names. (With shell=True and a list argument, only the first list element is used as the command line, so gzip here runs with no file arguments and tries to compress stdin to your terminal - hence the error.)
To replicate the shell's behavior, pass the whole command line as a single string and enable the shell in the call() function:
subprocess.call('gzip purato.log.2017-08*', shell=True)
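If you would rather not involve the shell at all, here is a minimal sketch that expands the wildcard in Python with the standard glob module (variable names are illustrative):

import glob
import subprocess

pattern = input("Enter log pattern to compress: ")  # use raw_input() on Python 2
# Expand the wildcard in Python instead of relying on the shell.
files = glob.glob(pattern)
if files:
    # Hand the matched file names to gzip directly; no shell involved.
    subprocess.call(['gzip'] + files)

This also avoids handing untrusted user input to a shell.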
I am trying to import a file 'filename.csv.gz' into SAS.
Currently, I am trying to gunzip it with:
FILENAME in PIPE "gunzip -dc filename.csv.gz" LRECL=80 ;
I am getting these errors:
ERROR: Insufficient authorization to access PIPE.
ERROR: Error in the FILENAME statement.
I am trying to figure out a way around the PIPE engine; gunzipping the file manually is not an option.
Sounds like your SAS admin has disabled the ability to run operating system commands from SAS, so the PIPE engine does not work.
But you don't need the PIPE engine to read a gzipped file. Use the ZIP engine with the GZIP option instead:
FILENAME in ZIP "filename.csv.gz" GZIP LRECL=80 ;
You can then read the fileref in with an ordinary INFILE statement in a DATA step.
Sounds like you need the XCMD system option, which can only be set at SAS invocation and may not be permitted. Can you run your program in batch?
https://support.sas.com/kb/15/179.html
I am running Jupyter with a Python 2.7 kernel, and I was able to get nbconvert to run both on the command line (ipython nbconvert --to pdf file.ipynb) and through the Jupyter browser interface.
For some reason, however, my file only prints 4 out of the 8 total pages I see in the browser. I can convert an HTML preview file (generated by Jupyter) and then use an online converter at pdfcrowd.com to get the full PDF printed. However, the online converter lacks LaTeX support.
When I installed nbconvert, I did not manually install MiKTeX (LaTeX for Windows), as I already had it successfully installed for RStudio and Sweave, and I didn't want to mess with the installation.
I printed a few longer files (greater than 4 pages) without LaTeX, and a few that had LaTeX, and the first couple of pages printed fine, but the last few pages always seem to get cut off from the PDF output. Does anyone know what might be causing it to limit the print output, or have any other ideas?
Also, I notice the command-line window where Jupyter was launched shows a few messages:
l.599 [\text{feature_matrix}
] =
?
! Emergency stop.
<inserted text>
l.599 [\text{feature_matrix}
] =
Output written on notebook.pdf (r pages)
[NbConvertApp] PDF successfully created
I ran another file and saw that the output showed:
! You can't use `macro parameter character #' in restricted horizontal mode.
...ata points }}{\mbox{# total data points}}
?
! Emergency stop.
Looks like it is somehow choking on the LaTeX character # and then halting publishing, but it at least publishes all the pages prior to the error and emergency stop.
While not a real solution, I manually changed the LaTeX symbol # to the word "number" and the entire file printed. I wonder if this is a problem with my version of LaTeX?
Halted PDF job:
$$\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}$$
Passed PDF job:
$$\mbox{accuracy} = \frac{\mbox{number correctly classified data points}}{\mbox{number total data points}}$$
Edit: I used the suggestion here to put a backslash escape character before the #, which solved the issue; so it looks like it is a LaTeX issue, if that helps anyone.
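For reference, this is the failing formula with the escapes applied, which then compiles cleanly:

$$\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}$$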
I'm using Fabric 1.10 for my project. For one of the tasks, I need to display a list of files present locally but not yet uploaded to the remote server. For this I use rsync.
rsync -avun <local-directory> <remote-server>
This works fine, but it also displays a few summary lines and unwanted output, so I tried to grep the results. However, this causes an error:
rsync -avun <local-directory> <remote-server> | egrep "(\.png|\.jpg|\.jpeg|\.ico|\.gif)"
Fatal error: local() encountered an error (return code 1)...
Is it not possible to pipe output in Fabric commands?
My guess is that you've mixed up your quotes, or Fabric is conflicting with them. Try using triple quotes (""") to surround your command, and perhaps single quotes around the egrep pattern.
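A minimal sketch of that suggestion, keeping the placeholders from the question (the task name is illustrative):

from fabric.api import local, settings

def list_unsynced_images():
    # Triple-quoted outer string; single quotes around the egrep pattern.
    # Note that egrep also exits with return code 1 when it matches
    # nothing, which local() treats as fatal; warn_only suppresses that.
    with settings(warn_only=True):
        local("""rsync -avun <local-directory> <remote-server> | egrep '(\.png|\.jpg|\.jpeg|\.ico|\.gif)'""")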
I am getting the "Not a gzipped file" exception while retrieving a gzipped sitemap XML (tested on amazon.de).
According to the bug trackers, there used to be a bug regarding "Not a gzipped file".
I am using Python 2.7.3 and Scrapy 0.24.4.
Can anyone confirm this as a bug, or am I overlooking something?
UPDATE
I think this is some valuable information, which I have already posted on GitHub as well.
Possible bug: retrieving a gzipped sitemap XML (tested on amazon.de) fails.
Reproduce with:
modify the gunzip method in /utils/gz.py to write the incoming data to a file
gunzip the file on the command line
the unzipped file contains garbled content
gunzip that file with garbled content a second time and you get the correct content
I suspect that the content coming from the target server is already gzip-compressed, and that Scrapy has a bug that causes its gzip HTTP decompression to not work properly, resulting in a double-compressed file arriving at the gunzip method in /utils/gz.py.
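A minimal sketch to test that hypothesis on the raw bytes (the helper names are my own, not Scrapy API; works on the Python 2.7.3 mentioned above):

import gzip
import io

def gunzip_bytes(data):
    # Strip one layer of gzip from an in-memory byte string.
    return gzip.GzipFile(fileobj=io.BytesIO(data)).read()

def gunzip_maybe_twice(data):
    out = gunzip_bytes(data)
    # \x1f\x8b is the gzip magic number; if the decompressed payload
    # still starts with it, the data was compressed twice.
    if out[:2] == b'\x1f\x8b':
        out = gunzip_bytes(out)
    return out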
Our environment: CentOS 5, which comes with Apache 2.2 and rsyslog 2.0.6.
In order to send the Apache 2.2 error log to syslog, we followed the instructions found here: http://wiki.rsyslog.com/index.php/Working_Apache_and_Rsyslog_configuration
It works, but the included Perl script is very inefficient - it takes a huge share of the system resources, and from looking at the Sys::Syslog::syslog subroutine I can imagine why - it does a lot of parameter parsing and shuffling before it actually sends the message.
Is there some efficient C/C++ program to replace this script? It seems to be a 5-liner, but I'd rather not re-invent the wheel.
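For what it's worth, the whole job seems to boil down to something like this (a minimal Python sketch, purely illustrative; the tag and facility are assumptions to be matched to your rsyslog configuration):

#!/usr/bin/env python
# Read Apache error-log lines on stdin and forward each one to syslog.
import sys
import syslog

syslog.openlog('httpd', 0, syslog.LOG_LOCAL1)
for line in sys.stdin:
    line = line.rstrip('\n')
    if line:
        syslog.syslog(syslog.LOG_ERR, line)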
Other solutions to efficiently send Apache error logs to syslog would also be welcome.
Thanks.
Actually it's pretty redundant - the "logger" command-line utility will read standard input and send each line to syslog if it is not passed a message on the command line. For example, Apache can pipe its error log straight to it with a directive like ErrorLog "|/usr/bin/logger -t httpd -p local1.err" (the tag and facility here are illustrative).
You are welcome anyway... :)
I've written a C program which does the same thing as the Perl script in the link above.
It seems to take far fewer resources.
The program's source code was uploaded to the link in my question.