file1.gif is 637.8 kB. A file2.gif is created but stops at 36 bytes.
(require '[clojure.java.io :as io])
(io/copy "/var/project-dir/uploads/file1.gif"
(io/file "/var/project-dir/uploads/file2.gif"))
Any idea what could be going on?
Your code writes the literal string "/var/project-dir/uploads/file1.gif" into the destination file: io/copy treats a String input as character data to copy, not as a path. You need to provide the actual source of the data:
(io/copy (io/file "/var/project-dir/uploads/file1.gif")
(io/file "/var/project-dir/uploads/file2.gif"))
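For reference, io/copy's input can be an InputStream, Reader, File, byte array, char array, or String, and a String is always treated as data rather than as a filename. A quick illustration of that behaviour:

(io/copy "hello" (io/file "/tmp/hello.txt"))  ; writes the five characters "hello", not the contents of a file named "hello"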
I have a piece of code which looks as follows:
(defn dump [path blob]
(spit path
(with-out-str (pr blob))))
This is dumping out GBs of data. Is there a more efficient way of doing this (without creating the intermediate string that with-out-str builds up)?
The built-in serialization functions use the dynamically bound var *out* to determine where they write:
user> (def data [1 2 3 4 5])
#'user/data
user> (with-open [output (clojure.java.io/writer "/tmp/data.edn")]
(binding [*out* output]
(prn data)))
nil
user> (slurp "/tmp/data.edn")
"[1 2 3 4 5]\n"
So if you bind *out* to a file writer (remember to close it, and beware of lazy evaluation escaping the scope in which the file is open) then all the output will go straight to that file. pr and prn write in a format that is guaranteed to be readable back in; the other print functions write in a way that is easier for humans but not guaranteed to round-trip.
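Applied to the dump function from the question, a minimal sketch (keeping the question's argument names) that streams the printed form straight to the file instead of building the intermediate string:

(require '[clojure.java.io :as io])

(defn dump [path blob]
  ;; prn writes directly to the file writer bound to *out*,
  ;; so no intermediate string is ever built.
  (with-open [output (io/writer path)]
    (binding [*out* output]
      (prn blob))))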
(import '[edu.stanford.nlp.pipeline StanfordCoreNLP])

;; config is this project's settings map, loaded elsewhere
(def ^:private props
  (doto (java.util.Properties.)
    (.put "annotators" "tokenize, ssplit, pos, lemma, parse")
    (.put "parse.maxlen" (str (-> config :nlp :max-sentence-length)))
    (.put "pos.maxlen" (str (-> config :nlp :max-sentence-length)))))

(def ^:private pipeline (StanfordCoreNLP. props))

(defn- annotated-doc [s]
  (.process pipeline s))

(def input-text (slurp "/home/you/some.txt"))

(annotated-doc input-text)
This either produces a properly annotated result, as expected, or it throws this exception:
java.lang.NullPointerException: null
MorphaAnnotator.java:68 edu.stanford.nlp.pipeline.MorphaAnnotator.addLemma
MorphaAnnotator.java:55 edu.stanford.nlp.pipeline.MorphaAnnotator.annotate
AnnotationPipeline.java:67 edu.stanford.nlp.pipeline.AnnotationPipeline.annotate
StanfordCoreNLP.java:881 edu.stanford.nlp.pipeline.StanfordCoreNLP.annotate
StanfordCoreNLP.java:910 edu.stanford.nlp.pipeline.StanfordCoreNLP.process
(Unknown Source) sun.reflect.GeneratedMethodAccessor27.invoke
DelegatingMethodAccessorImpl.java:43 sun.reflect.DelegatingMethodAccessorImpl.invoke
Method.java:606 java.lang.reflect.Method.invoke
Reflector.java:93 clojure.lang.Reflector.invokeMatchingMethod
Reflector.java:28 clojure.lang.Reflector.invokeInstanceMethod
The text file is very vanilla. I have reduced my annotators list down to the minimum that reproduces the issue. I have 6 GB of memory configured for the JVM. The text file is 3886 characters long and is UTF-8 with a BOM. It works just fine with partial text from this file. It even works if I take the whole file, as in (take 3886 input-text). So I'm stumped and not sure what to make of it. Any suggestions?
Here is a link to the text file I was using: http://nectarineimp.com/spooky-action.txt
From my project.clj file:
:dependencies [[org.clojure/clojure "1.6.0"]
[edu.stanford.nlp/stanford-corenlp "3.3.1"]
[edu.stanford.nlp/stanford-corenlp "3.3.1" :classifier "models"]]
I agree that this was a bug in the annotator. I'm not sure what your configuration settings were for:
(def ^:private props
(doto (java.util.Properties.)
(.put "annotators" "tokenize, ssplit, pos, lemma, parse")
(.put "parse.maxlen" (str (-> config :nlp :max-sentence-length)))
(.put "pos.maxlen" (str (-> config :nlp :max-sentence-length)))))
But they turned out to be irrelevant to reproducing the issue. I recently upgraded a project to version 3.5.2, and your code block ran without issues (see the CoreNLP Version History). Running 3.3.1 reproduces your results exactly. Note that as of version 3.5.0 a Java 1.8 JVM is required. This bug appears to be fixed in version 3.4 of CoreNLP as well, which doesn't require a JVM upgrade.
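For completeness, here is roughly what the upgraded dependency vector would look like (a sketch only; substitute 3.4 for 3.5.2 if you need to stay on an older JVM):

;; project.clj, with CoreNLP bumped per the notes above and the Clojure
;; version kept as in the question
:dependencies [[org.clojure/clojure "1.6.0"]
               [edu.stanford.nlp/stanford-corenlp "3.5.2"]
               [edu.stanford.nlp/stanford-corenlp "3.5.2" :classifier "models"]]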
Your question was posted a while ago, so I'm pretty sure you've already figured all this out, but for the sake of Google, I've left these comments.
In Clojure, I need to read in a long file, too long to slurp, and I wish to pass the open file handle into a function that I can call recursively, reading until the file is empty. I have found examples using with-open, but is there a way to open a file and then read from it inside a function? Pointers to examples or docs would be helpful.
Is this along the lines of what you have in mind?
(defn process-file [f reader]
(loop [lines (line-seq reader) acc []]
(if (empty? lines)
acc
(recur (rest lines) (conj acc (f (first lines)))))))
(let [filename "/path/to/input-file"
reader (java.io.BufferedReader. (java.io.FileReader. filename))]
(process-file pr-str reader))
Note that if you (require '[clojure.java.io :as io]) you can use io/reader as a shortcut for invoking BufferedReader and FileReader directly. However, using with-open would still be preferable - it will ensure the file is closed properly, even in the event of an exception - and you can absolutely pass the open reader to other functions from within a with-open block.
Here's how you could make use of with-open in the scenario you use in the answer you've posted, passing the reader and writer objects to a function:
(with-open [rdr (io/reader "/path/to/input-file")]
(with-open [wtr (io/writer "/path/to/output-file")]
(transfer rdr wtr)))
I should also note that in my example scenario it would be preferable to map or reduce over the line-seq but I used loop/recur since you asked about recursion.
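For example, a minimal sketch of that map-based alternative, run inside with-open (the path and the per-line function are placeholders):

(require '[clojure.java.io :as io])

(with-open [rdr (io/reader "/path/to/input-file")]
  ;; doall forces the lazy sequence while the reader is still open
  (doall (map pr-str (line-seq rdr))))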
Here's the ClojureDocs page on the clojure.java.io namespace.
Playing around, I discovered the answer, so for anyone else looking, here is a version of my solution.
(defn transfer
  [inFile outFile]
  (.write outFile (.read inFile))
  ...
  ...

(transfer (clojure.java.io/reader "fileIn.txt")
          (clojure.java.io/writer "out.txt"))
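The elided parts above aren't shown, so purely as an illustration, a complete character-by-character transfer loop might look something like the following (reading until .read returns -1, and wrapping the reader and writer in with-open as suggested in the other answer):

(defn transfer
  [in-file out-file]
  (loop [c (.read in-file)]
    (when-not (neg? c)          ; .read returns -1 at end of stream
      (.write out-file c)
      (recur (.read in-file)))))

(with-open [rdr (clojure.java.io/reader "fileIn.txt")
            wtr (clojure.java.io/writer "out.txt")]
  (transfer rdr wtr))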
I have a project created via Leiningen, with this core.clj:
(ns cotd.core
(:gen-class)
(:use [clojure.repl :only (doc)]))
(defmacro eval-doc
[form]
(let [resulting-symbol (eval form)]
`(doc ~resulting-symbol)))
(defn- random-function-name []
(rand-nth (keys (ns-publics 'clojure.core))))
(defn -main
"Display random doc page"
[& args]
(eval-doc (random-function-name)))
After compiling and running it, it always yields the same result:
$ java -jar cotd.jar
-------------------------
clojure.core/unchecked-negate
([x])
Returns the negation of x, a long.
Note - uses a primitive operator subject to overflow.
$ java -jar cotd.jar
-------------------------
clojure.core/unchecked-negate
([x])
Returns the negation of x, a long.
Note - uses a primitive operator subject to overflow.
But with two consecutive calls:
(do
  (eval-doc (random-function-name))
  (eval-doc (random-function-name)))
It yields two different results in a single "call".
I've tried googling, reading, and so on, but I have no clue what's going on.
How do I invoke this rand-nth dynamically?
The problem wasn't with rand-nth; it's that resulting-symbol in the let is computed at macro-expansion (compile) time, so the random choice is baked in once when the code is compiled. #beyamor provided an answer here: Unable to get random (doc) from a namespace
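A sketch of one possible fix (not necessarily the exact one from the linked answer): make eval-doc an ordinary function that builds the (doc ...) form at runtime and evals it, so the random symbol is chosen on every run instead of once at macro-expansion time. This assumes clojure.repl is loaded, as in the ns form above.

(defn eval-doc [sym]
  ;; Builds (clojure.repl/doc <sym>) at runtime and evaluates it.
  (eval `(clojure.repl/doc ~sym)))

(defn -main
  "Display random doc page"
  [& args]
  (eval-doc (random-function-name)))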
I'm having trouble downloading images using Clojure; there seems to be an issue with the way the following code works:
(defn download-image [url filename]
(->> (slurp url) (spit filename)))
This will 'download' the file to the location I specify, but the file is unreadable by any image application I try to open it with (for example, attempting to open it in a web browser just returns a blank page, and attempting to open it in Preview on OS X says it's a corrupted file).
I'm thinking this might be because slurp should only really be used for text files rather than binary files.
Could anyone point me in the right direction for my code to work properly? Any help would be greatly appreciated!
slurp uses a java.io.Reader underneath, which decodes the bytes into a string, and that is typically not compatible with binary data. Look for examples that use input-stream instead. In some ways this can be better, because you can transfer the image from the input stream to the output stream without having to read the entire thing into memory.
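For instance, a streaming version of the download-image function from the question might look like this (a sketch; io/input-stream will open a URL string and io/output-stream a local file path):

(require '[clojure.java.io :as io])

(defn download-image [url filename]
  ;; Copies raw bytes from the URL's stream to the file; nothing is
  ;; decoded as text, and only a small buffer is held in memory.
  (with-open [in  (io/input-stream url)
              out (io/output-stream filename)]
    (io/copy in out)))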
Edit
Since people seem to find this question once in a while, and I needed to rewrite this code again, I thought I'd add an example. Note that this does not stream the data; it collects it into memory and returns it as an array of bytes.
(require '[clojure.java.io :as io])
(defn blurp [f]
(let [dest (java.io.ByteArrayOutputStream.)]
(with-open [src (io/input-stream f)]
(io/copy src dest))
(.toByteArray dest)))
Test...
(use 'clojure.test)
(deftest blurp-test
(testing "basic operation"
(let [src (java.io.ByteArrayInputStream. (.getBytes "foo" "utf-8"))]
(is (= "foo" (-> (blurp src) (String. "utf-8")))))))
Example...
user=> (blurp "http://www.lisperati.com/lisplogo_256.png")
#<byte[] [B#15671adf>
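io/copy also accepts a byte array as its input, so the result of blurp can be written out to a file, e.g. (the destination path here is just an example):

(io/copy (blurp "http://www.lisperati.com/lisplogo_256.png")
         (io/file "/tmp/lisplogo_256.png"))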