Haystack order_by not sorting properly - Django

I am using Django 1.8, Haystack 2.4, and Solr 4.10. Somehow order_by is not working as expected. Please have a look at the code below:
>>> sqs = SearchQuerySet()
>>> sqs = sqs.using('entry').filter(status=0)
>>> for b in sqs.filter(content="see").order_by('title'): print b.title
501 Must-See Movies
Look See, Look at Me!
Last Chance to See
1,000 Places to See Before You Die
Pretend You Don't See Her
Learning to See Creatively : Design, Color and Composition in Photography
Behavior Solutions for the Inclusive Classroom : See a Behavior? Look It Up!
See No Evil
Last Chance to See
See It and Sink It : Mastering Putting Through Peak Visual Performance
See No Evil : The True Story of a Ground Soldier in the CIA's War on Terrorism
Voice for Now : Changing the Way We See Ourselves As Women
See Jane Win : The Rimm Report on How 1,000 Girls Became Successful Women
Kaplan Medical USMLE Medical Ethics : The 100 Cases You Are Most Likely to See on the Exam
I See You
You'll See It When You Believe It : The Way to Your Personal Transformation
Body Code : Diet and Fitness Programme: Master Your Metabolism and See the Weight Fall Off
Descending order:
>>> sqs = SearchQuerySet()
>>> sqs = sqs.using('entry').filter(status=0)
>>> for b in sqs.filter(content="see").order_by('-title'): print b.title
You'll See It When You Believe It : The Way to Your Personal Transformation
Body Code : Diet and Fitness Programme: Master Your Metabolism and See the Weight Fall Off
Kaplan Medical USMLE Medical Ethics : The 100 Cases You Are Most Likely to See on the Exam
I See You
Voice for Now : Changing the Way We See Ourselves As Women
See Jane Win : The Rimm Report on How 1,000 Girls Became Successful Women
See No Evil : The True Story of a Ground Soldier in the CIA's War on Terrorism
See It and Sink It : Mastering Putting Through Peak Visual Performance
Last Chance to See
See No Evil
Behavior Solutions for the Inclusive Classroom : See a Behavior? Look It Up!
Learning to See Creatively : Design, Color and Composition in Photography
Pretend You Don't See Her
1,000 Places to See Before You Die
Last Chance to See
Look See, Look at Me!
501 Must-See Movies
Why is the ordering not working like A --> Z and Z --> A?

Recently I had the same issue with Haystack's order_by on a title field. I used Python's built-in sorted() with a lambda key to sort the object list instead.
Ascending order by title:
sqs = sqs.using('entry').filter(status=0)
sorted_list = sorted([s.object for s in sqs], key=lambda x: x.title, reverse=False)
Descending order:
sqs = sqs.using('entry').filter(status=0)
rev_sorted_list = sorted([s.object for s in sqs], key=lambda x: x.title, reverse=True)
sqs.order_by works fine with integer fields; it is the title (text) field that comes back mis-sorted.
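The likely root cause: the default Haystack Solr schema maps CharField to an analyzed text type, and Solr sorts analyzed fields by their indexed tokens rather than by the whole title string, which produces the shuffled order above. Below is a minimal sketch of one common fix, with illustrative index/model names (not the asker's actual code): declare the field with faceted=True so the generated schema gains a non-analyzed title_exact copy, then sort on that.
# Hedged sketch: faceted=True makes Haystack emit an extra non-analyzed
# "title_exact" field in the Solr schema, which sorts as a whole string.
from haystack import indexes
from myapp.models import Entry  # hypothetical app and model

class EntryIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    status = indexes.IntegerField(model_attr='status')
    title = indexes.CharField(model_attr='title', faceted=True)

    def get_model(self):
        return Entry
After regenerating the schema (./manage.py build_solr_schema) and reindexing, sqs.filter(content="see").order_by('title_exact') should sort A --> Z as expected.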


Weka J48 output

I am confused about the numbers at the end of the branches of a J48 tree. For example, using the weather.nominal data, the tree looks the same whether the Test options are set to Use training set, Cross-validation, or Percentage split.
This is the output:
J48 pruned tree
------------------
outlook = sunny
| humidity = high: no (3.0)
| humidity = normal: yes (2.0)
outlook = overcast: yes (4.0)
outlook = rainy
| windy = TRUE: no (2.0)
| windy = FALSE: yes (3.0)
According to the textbook by the authors of this software, in an example using this exact data they say, "In the tree structure, a colon introduces the class label that has been assigned to a particular leaf, followed by the number of instances that reach that leaf, expressed as a decimal number because of the way the algorithm uses fractional instances to handle missing values. If there were incorrectly classified instances (there aren’t in this example) their number would appear, too: thus 2.0/1.0 means that two instances reached that leaf, of which one is classified incorrectly"
So this means that no instances were incorrectly classified in the above tree with the weather.nominal dataset.
On the other hand, when the test options are set to either 'Use training set' or 'Percentage split' (with the default random seed), there are incorrectly classified instances. For example, with a 60% percentage split, it shows the following:
=== Evaluation on test split ===
=== Summary ===
Correctly Classified Instances 2 40 %
Incorrectly Classified Instances 3 60 %
There seems to be a contradiction here but I must be missing something. Is the tree shown initially not the tree that is built with the 60 percentage split?
That is not stated anywhere as far as I have seen but I can't think of any other explanation.
Just for completeness, the data is here:
outlook,temperature,humidity,windy,play
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
overcast,cool,normal,TRUE,yes
sunny,mild,high,FALSE,no
sunny,cool,normal,FALSE,yes
rainy,mild,normal,FALSE,yes
sunny,mild,normal,TRUE,yes
overcast,mild,high,TRUE,yes
overcast,hot,normal,FALSE,yes
rainy,mild,high,TRUE,no
If you take a closer look at the output, you will see the following:
=== Classifier model (full training set) ===
The model depicted there is the one trained on the full dataset, not on your split.
The next section has the following heading:
=== Evaluation on test split ===
The statistics that you are referring to come from a model that was trained on the 60% training portion of your split and then evaluated on the remaining 40%; that split model is not the one printed above.
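The same distinction can be reproduced outside Weka. Below is a small scikit-learn analogue (not Weka itself; the dataset and tree depth are arbitrary) that prints a model fitted on all the data while reporting accuracy from a separate model fitted on a 60% split:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# "Classifier model (full training set)": fit and print using ALL the data.
full_model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(full_model))

# "Evaluation on test split": fit on 60% of the data, score on the held-out 40%.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=1)
split_model = DecisionTreeClassifier(max_depth=2).fit(X_tr, y_tr)
print("Accuracy on the 40% test split:", split_model.score(X_te, y_te))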

How to iterate a Python list and compare items in a string or another list

Following my earlier question, I have been trying to write code that returns the sentences of a string that contain a search term from a given list, as follows.
import re
from nltk import tokenize
from nltk.tokenize import sent_tokenize

def foo():
    List1 = ['risk','cancer','ocp','hormone','OCP',]
    txt = "Risk factors for breast cancer have been well characterized. Breast cancer is 100 times more frequent in women than in men.\
    Factors associated with an increased exposure to estrogen have also been elucidated including early menarche, late menopause, later age\
    at first pregnancy, or nulliparity. The use of hormone replacement therapy has been confirmed as a risk factor, although mostly limited to \
    the combined use of estrogen and progesterone, as demonstrated in the WHI (2). Analysis showed that the risk of breast cancer among women using \
    estrogen and progesterone was increased by 24% compared to placebo. A separate arm of the WHI randomized women with a prior hysterectomy to \
    conjugated equine estrogen (CEE) versus placebo, and in that study, the use of CEE was not associated with an increased risk of breast cancer (3).\
    Unlike hormone replacement therapy, there is no evidence that oral contraceptive (OCP) use increases risk. A large population-based case-control study \
    examining the risk of breast cancer among women who previously used or were currently using OCPs included over 9,000 women aged 35 to 64 \
    (half of whom had breast cancer) (4). The reported relative risk was 1.0 (95% CI, 0.8 to 1.3) among women currently using OCPs and 0.9 \
    (95% CI, 0.8 to 1.0) among prior users. In addition, neither race nor family history was associated with a greater risk of breast cancer among OCP users."
    words = txt
    corpus = " ".join(words).lower()
    sentences1 = sent_tokenize(corpus)
    a = [" ".join([sentences1[i-1],j]) for i,j in enumerate(sentences1) if [item in List1] in word_tokenize(j)]
    for i in a:
        print i,'\n','\n'
foo()
The problem is that the Python IDLE does not print anything. What could I have done wrong? All it does is run the code, and I get this:
>
>
Your question isn't very clear to me, so please correct me if I'm getting this wrong. Are you trying to match the list of keywords (in List1) against the text (in txt)? That is:
For each keyword in List1,
match it against every sentence in txt,
and print the sentences that match?
Instead of writing a complicated regular expression to solve your problem, I have broken it down into two parts.
First I break the whole text into a list of sentences. Then I write a simple regular expression to go through every sentence. The trouble with this approach is that it is not very efficient, but hey, it solves your problem.
Hope this small chunk of code can help guide you to the real solution.
import re

def foo():
    List1 = ['risk','cancer','ocp','hormone','OCP',]
    txt = "blah blah blah - truncated"
    matches = []
    # Split the text into sentences on full stops.
    sentences = re.split(r'\.', txt)
    keyword = List1[0]
    # Compile the keyword into a pattern and collect matching sentences.
    pattern = re.compile(keyword)
    for sentence in sentences:
        if pattern.search(sentence):
            matches.append(sentence)
    print("Sentence matching the word (" + keyword + "):")
    for match in matches:
        print(match)
--------- Generate a random number ---------
from random import randint

List1 = ['risk','cancer','ocp','hormone','OCP',]
print(randint(0, len(List1) - 1))  # gives you a random index - use it to access List1
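For completeness, here is a minimal sketch of what I assume the original foo() was aiming for: print every sentence that contains at least one keyword from List1, using NLTK's tokenizers. The function name is mine, and both sides are lowercased so that 'OCP' and 'ocp' match the same token.
from nltk.tokenize import sent_tokenize, word_tokenize

def find_keyword_sentences(txt, keywords):
    # Lowercase the keywords once, then compare against lowercased word tokens.
    keyword_set = {k.lower() for k in keywords}
    hits = []
    for sentence in sent_tokenize(txt):
        tokens = {w.lower() for w in word_tokenize(sentence)}
        if tokens & keyword_set:
            hits.append(sentence)
    return hits

for s in find_keyword_sentences("Risk factors are well known. Nothing else here.", ['risk', 'cancer', 'ocp', 'hormone']):
    print(s)  # prints "Risk factors are well known."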

How would I approach a lot of structured-but-inconsistent data?

I'm attempting to parse EDGAR documents - they're SEC filings. Specifically, I'm attempting to parse both SEC Schedule 13D and Schedule 13G filings.
There appear to be lots of failed attempts at parsing these filings, and I assume that's because doing so is a behemoth task that an entire team would have to tackle.
I was tasked with parsing those filings. We need the information from the data tables found throughout. The problem is that the filings on record make it hard for me to distinguish between data points, table section headers, etc.
So far, I've only been able to scrape information from around 10% of the Schedule 13D files, and even what I've scraped needs considerable cleaning. In a nutshell, I'm matching a regular expression pattern to the text. Each pattern takes one known (English) section header and the one that comes next (I set each pair manually) and extracts what's in between, e.g., CHECK THE APPROPRIATE BOX IF A MEMBER OF A GROUP(.*?)SEC USE ONLY. Clearly, that's not going to get me very far, and it isn't. Using the same logic, here's what I get for the following example string:
Example text:
NAMES OF REPORTING PERSONS I.R.S. IDENTIFICATION NOS. OF ABOVE PERSONS
(ENTITIES ONLY)Robert DePaloCHECK THE APPROPRIATE BOX IF A MEMBER OF A
GROUP(see
instructions)(a)    (b)    SEC
USE ONLYSOURCE OF FUNDS (see instructions)CHECK BOX IF DISCLOSURE OF
LEGAL PROCEEDINGS IS REQUIRED PURSUANT TO ITEMS 2(d) or
2(e)     CITIZENSHIP OR PLACE OF
ORGANIZATIONUnited StatesSOLE VOTING POWER45,119,857 (1)SHARED VOTING
POWER-0-SOLE DISPOSITIVE POWER45,119,857 (1)10.SHARED DISPOSITIVE
POWER-0-11.AGGREGATE AMOUNT BENEFICIALLY OWNED BY EACH REPORTING
PERSON45,119,857 (1)12.CHECK BOX IF THE AGGREGATE AMOUNT IN ROW (11)
EXCLUDES CERTAIN SHARES(see
instructions)    13.PERCENT OF CLASS
REPRESENTED BY AMOUNT IN ROW (11)33.4% (2)14.TYPE OF REPORTING PERSON
(see instructions)(1)  Consists of 44,194,298 shares of
Common Stock held by the Reporting Person and 925,559 shares of Common
Stock held by Arjent Limited UK.  The Reporting Person is
the Chairman of Arjent Limited UK and has voting and investment
authority over shares held by it.  Does not include any
classes of preferred shares that the Reporting Person and an entity
owned by the Reporting Person’s wife are entitled to receive, as
discussed in Item 6 below.(2)  Does not include the voting
interest that the Reporting Person is entitled to receive under the
SPHC Series B Preferred Shares, as discussed in Item 6 of this
Schedule 13D.
Example output:
key: CHECK THE | v: (a)    (b)    
key: CITIZENSHI | v: United States
key: CHECK BOX | v:      
key: SHARED VOT | v: -0-
key: PERCENT OF | v: PERCENT OF CLASS REPRESENTED BY AMOUNT IN ROW \(11\)
key: TYPE OF RE | v: TYPE OF REPORTING PERSON \(see instructions\)
key: CHECK BOX | v:     13.
key: SOLE DISPO | v: 45,119,857
key: SEC USE ON | v: SEC USE ONLY
key: SHARED DIS | v: -0
key: SOLE VOTIN | v: 45,119,857
key: NAMES OF R | v: Robert DePalo
key: AGGREGATE | v: 45,119,857 12.
key: SOURCE OF | v: SOURCE OF FUNDS \(see instructions\)
Are there any other approaches? This doesn't work for most of the 13D filings, and it won't work for 13G at all. I have a feeling I'm a little too naive in my approach and need a more general approach to a problem like this. I'm looking to extract at least 80% of the fields from at least 80% of the filings.
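One incremental improvement over the pairwise-header pattern described above is to split on an alternation of all known headers at once, so that each value is whatever sits between two consecutive header matches. A rough sketch of that idea follows; the HEADERS list is truncated and illustrative, not the full Schedule 13D set, and repeated headers (one cover page per reporting person) would overwrite earlier values in this simple dict:
import re

HEADERS = [
    "NAMES OF REPORTING PERSONS",
    "SEC USE ONLY",
    "SOURCE OF FUNDS",
    "CITIZENSHIP OR PLACE OF ORGANIZATION",
    "SOLE VOTING POWER",
    "SHARED VOTING POWER",
    "SOLE DISPOSITIVE POWER",
    "SHARED DISPOSITIVE POWER",
]

def extract_fields(text):
    # re.split with a capturing group returns
    # [preamble, header, value, header, value, ...]
    alternation = "(" + "|".join(re.escape(h) for h in HEADERS) + ")"
    parts = re.split(alternation, text)
    return {header: value.strip() for header, value in zip(parts[1::2], parts[2::2])}
This still assumes the headers appear verbatim, so reformatted or noisy filings will defeat it; a common next step for higher coverage is to parse the filings' HTML table structure instead of the flattened text.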

Searching words in sentences in R

I'd like to ask you for advice on the following problem. I have a data frame:
reviews <- data.frame(value = c("Product was received in excellent condition. Made with high quality materials. Very Good product",
                                "Inexpensive. An improvement over integrated graphics.",
                                "I love that product so excite. I will order again if I need more .",
                                "Excellent card, great graphics."),
                      user = c(1,2,3,4),
                      Review_Id = c("101968","101968","210546","112546"))
Then I have the topics from each of the sentences mentioned above:
topics <- data.frame(topic = c("product","condition","materials","product","integrated graphics","product","card","graphics"),
                     user = c(1,1,1,1,2,3,4,4),
                     Review_Id = c("101968","101968","101968","101968","101968","210546","112546","112546"))
I need to find the original sentence in which a particular topic appears, given the user and Review_Id for both the sentences and the topics, and then write that sentence into a review column.
The desired output should look like the following:
topic user Review_Id review
product 1 101968 Product was received in excellent condition.
condition 1 101968 Product was received in excellent condition.
materials 1 101968 Made with high quality materials.
product 1 101968 Very Good product
integrated graphics 2 101968 An improvement over integrated graphics.
product 3 210546 I love that product so excite.
card 4 112546 Excellent card, great graphics.
graphics 4 112546 Excellent card, great graphics.
Any advice or approach will be much appreciated. Thanks a lot in advance.
You can try:
merge.data.frame(x = topics, y = reviews, by = c("Review_Id"), all.x = TRUE, all.y = FALSE)
Note that this attaches the whole review text to every topic row; to reproduce the per-sentence review column above, you would still need to split value into sentences and keep, for each topic, the sentence containing it.

R - does failed RegEx pattern matching originate in file conversion or use of tm package?

As a relative novice in R and programming, my first ever question in this forum is about regex pattern matching, specifically line breaks. First, some background. I am trying to use R to perform some preprocessing on a corpus of texts before processing them further on the NLP platform GATE. I convert the original PDF files to text as follows (the text files, unfortunately, go into the same folder as the PDFs):
dest <- "./MyFolderWithPDFfiles"
myfiles <- list.files(path = dest, pattern = "pdf", full.names = TRUE)
lapply(myfiles, function(i) system(paste('"C:/Program Files (x86)/xpdfbin-win-3.04/bin64/pdftotext.exe"', paste0('"', i, '"')), wait = FALSE))
Then, having loaded the tm package and physically(!) moved the text files to another folder, I create a corpus:
TextFiles <- "./MyFolderWithTXTfiles"
EU <- Corpus(DirSource(TextFiles))
I then want to perform a series of custom transformations to clean the texts. I succeeded in replacing a simple string as follows:
ReplaceText <- content_transformer(function(x, from, to) gsub(from, to, x, perl=T))
EU2 <- tm_map(EU, ReplaceText, "Table of contents", "TOC")
However, a pattern that is a 1-3 digit page number followed by two line breaks and a page break is causing me problems. I want to replace it with a blank space:
EU2 <- tm_map(EU, ReplaceText, "[0-9]{1,3}\n\n\f", " ")
The ([0-9]{1,3}) and \f alone match. The line breaks don't. If I copy text from one of the original .txt files into the RegExr online tool and test the expression "[0-9]{1,3}\n\n\f", it matches. So the line breaks do exist in the original .txt file.
But when I view one of the .txt files as read into the EU corpus in R, there appear to be no line breaks even though the lines are obviously breaking before the margin, e.g.
[3] "PROGRESS TOWARDS ACCESSION"
[4] "1"
[5] ""
[6] "\fTable of contents"
Seeing this, I tried other patterns, e.g. to detect one or more blank space ("[0-9]{1,3}\s*\f"), but no patterns worked.
So my questions are:
Am I converting and reading the files into R correctly? If so, what has happened to the line breaks?
If the absence of line breaks is normal, how can I pattern-match the character on line 5? Is that not a blank space?
(A tangential concern:) When converting the PDF files, is there code that will put the text files directly into a new folder?
Apologies for extending this, but how can one print or inspect only a few lines of a text object? The tm commands and head(EU) print the entire object, each document being a very long text.
I know my problem(s) must appear simple and perhaps stupid, but one has to start somewhere, and extensive searching has not revealed a source that explains comprehensively how to use regexes to modify text objects in R. I am frustrated and hope someone here will take pity and help me.
Thanks for any advice you can offer.
Brigitte
p.s. I think it's not possible to upload attachments in this forum, therefore, here is a link to one of the original PDF documents: http://ec.europa.eu/enlargement/archives/pdf/key_documents/1998/czech_en.pdf
Because the document is long, I created a snippet of the first 3 pages of the TXT file, read it into the R corpus ('EU'), and printed it to the console. Here it is:
dput(EU[[2]])
structure(list(content = c("REGULAR REPORT", "FROM THE COMMISSION ON",
"CZECH REPUBLIC'S", "PROGRESS TOWARDS ACCESSION ***********************",
"1", "", "\fTable of contents", "A. Introduction", "a) Preface The Context of the Progress Report",
"b) Relations between the European Union and the Czech Republic The enhanced Pre-Accession Strategy Recent developments in bilateral relations",
"B. Criteria for membership", "1. Political criteria", "1.1. Democracy and the Rule of Law Parliament The Executive The judicial system Anti-Corruption measures",
"1.2. Human Rights and the Protection of Minorities Civil and Political Rights Economic, Social and Cultural Rights Minority Rights and the Protection of Minorities",
"1.3. General evaluation", "2. Economic criteria", "2.1. Introduction 2.2. Economic developments since the Commission published its Opinion",
"Macroeconomic developments Structural reforms 2.3. Assessment in terms of the Copenhagen criteria The existence of a functioning market economy The capacity to cope with competitive pressure and market forces 2.4. General evaluation",
"3. Ability to assume the obligations of Membership", "3.1. Internal Market without frontiers General framework The Four Freedoms Competition",
"3.2. Innovation Information Society Education, Training and Youth Research and Technological Development Telecommunications Audio-visual",
"3.3. Economic and Fiscal Affairs Economic and Monetary Union",
"2", "", "\fTaxation Statistics "), meta = structure(list(author = character(0),
datetimestamp = structure(list(sec = 50.1142621040344, min = 33L,
hour = 15L, mday = 3L, mon = 10L, year = 114L, wday = 1L,
yday = 306L, isdst = 0L), .Names = c("sec", "min", "hour",
"mday", "mon", "year", "wday", "yday", "isdst"), class = c("POSIXlt",
"POSIXt"), tzone = "GMT"), description = character(0), heading = character(0),
id = "CZ1998ProgressSnippet.txt", language = "en", origin = character(0)), .Names = c("author",
"datetimestamp", "description", "heading", "id", "language",
"origin"), class = "TextDocumentMeta")), .Names = c("content",
"meta"), class = c("PlainTextDocument", "TextDocument"))
Yes, working with text in R is not always a smooth experience! But you can get a lot done quickly with some effort (maybe too much effort!)
If you could share one of your PDF files or the output of dput(EU), that might help to identify exactly how to capture your page numbers with regex. That would also add a reproducible example to your question, which is an important thing to have in questions here so that people can test their answers and make sure they work for your specific problem.
No need to put the PDF and text files in separate folders; instead, you can use a pattern like so:
EU <- Corpus(DirSource(pattern = ".txt"))
This will only read the text files and ignore the PDF files.
There is no 'snippet view' method in tm, which is annoying. I often just use names(EU) and EU[[1]] for quick looks.
UPDATE
With the data you've just added, I'd suggest a slightly tangential approach. Do the regex work before passing the data to the tm package formats, like so:
# get the PDF
download.file("http://ec.europa.eu/enlargement/archives/pdf/key_documents/1998/czech_en.pdf", "my_pdf.pdf", method = "wget")
# get the file name of the PDF
myfiles <- list.files(path = getwd(), pattern = "pdf", full.names = TRUE)
# convert to text (note that my pdftotext is in a different location to yours)
lapply(myfiles, function(i) system(paste('"C:/Program Files/xpdf/bin64/pdftotext.exe"', paste0('"', i, '"')), wait = FALSE))
# read plain text into R
x1 <- readLines("my_pdf.txt")
# make into a single string
x2 <- paste(x1, collapse = " ")
# do some regex...
x3 <- gsub("Table of contents", "TOC", x2)
x4 <- gsub("[0-9]{1,3} \f", "", x3)
# convert to corpus for text mining operations
x5 <- Corpus(VectorSource(x4))
With the snippet of data you provided using dput, the output from this method is:
inspect(x5)
<<VCorpus (documents: 1, metadata (corpus/indexed): 0/0)>>
[[1]]
<<PlainTextDocument (metadata: 7)>>
REGULAR REPORT FROM THE COMMISSION ON CZECH REPUBLIC'S PROGRESS TOWARDS ACCESSION *********************** TOC A. Introduction a) Preface The Context of the Progress Report b) Relations between the European Union and the Czech Republic The enhanced Pre-Accession Strategy Recent developments in bilateral relations B. Criteria for membership 1. Political criteria 1.1. Democracy and the Rule of Law Parliament The Executive The judicial system Anti-Corruption measures 1.2. Human Rights and the Protection of Minorities Civil and Political Rights Economic, Social and Cultural Rights Minority Rights and the Protection of Minorities 1.3. General evaluation 2. Economic criteria 2.1. Introduction 2.2. Economic developments since the Commission published its Opinion Macroeconomic developments Structural reforms 2.3. Assessment in terms of the Copenhagen criteria The existence of a functioning market economy The capacity to cope with competitive pressure and market forces 2.4. General evaluation 3. Ability to assume the obligations of Membership 3.1. Internal Market without frontiers General framework The Four Freedoms Competition 3.2. Innovation Information Society Education, Training and Youth Research and Technological Development Telecommunications Audio-visual 3.3. Economic and Fiscal Affairs Economic and Monetary Union Taxation Statistics