ColdFusion collection / search won't populate

I am new to using cfcollection and cfsearch, but I gave it a go and it seemed to work. Then I purged the collection and it stopped working. I then decided to delete the collection and start over. The collection deleted fine, but now the same code won't return any results.
My query returns 5 results that the collection and subsequent search should be picking up, but the returned search is empty, even when I specify a wildcard * in my criteria.
Is there anything wrong with my code below? No errors or anything, just blank results.
public void function ajax() {
    param name="params.keywords" default="SoundCloud";
    onlyProvides("json");

    local.collectionPath = expandPath( "./" ) & "collections/";

    // Delete
    /*
    collection
        action="delete"
        collection="pincollection"
        path="#local.collectionPath#";
    */

    collection
        action="list"
        name="local.collectionList";
    local.collectionList = valueList(collectionList.name);

    if ( ! listFind(local.collectionList, "pincollection") ) {
        collection
            action="create"
            collection="pincollection"
            engine="solr"
            categories="yes"
            path="#local.collectionPath#";
    }

    local.pins = model("pin").findAll(
        include = "user",
        order = "createdat DESC"
    );

    index
        collection="pincollection"
        action="update"
        type="custom"
        title="title"
        body="description"
        custom1="latitude"
        custom2="longitude"
        custom3="typeid"
        custom4="createdAt"
        custom5="updatedAt"
        query="local.pins"
        category="typeid"
        key="id";

    search
        name="local.pinSearch"
        collection="pincollection"
        contextHighlightBegin="<strong>"
        contextHighlightEnd="</strong>"
        category="2,1"
        maxrows="100"
        criteria="*";

    writeDump(var=local.pinSearch); // Empty search query.
    writeDump(var=local.pins, abort=true); // Original query returns 5 results.

    renderWith(data=local.pinSearch, layout=false);
}
I am using Railo.
I can see that in my collections folder, a folder for my collection has been created, but this does not contain any files.
I'm a newbie at using ColdFusion / Railo for search. It seems straightforward, but I'm stumped.
Thanks,
Mikey.
PS - I am using CFWheels, hence some CFWheels-specific functions. These can be ignored.

Before you go too far with troubleshooting this, consider switching to a dedicated Solr server if you can. We quickly hit the limits of the Solr embedded in CF9 years ago and used the CFSolrLib project by Shannon Hicks to connect to a fresh install of Solr on Tomcat. Some of the benefits of this are:
Reducing overhead for your CF server.
Indexing performance boost.
Separates search related troubleshooting from the rest of your application (like in your case).
You gain the freedom of making changes/upgrades to your Solr server independently.
Shannon's project on github: https://github.com/iotashan/cfsolrlib
MC

Related

Google Cloud BlobListOption only iterates one level below currentDir

I was testing some of the Cloud Storage functionality and just noticed that the iterative approach only lists one level below the current directory.
Page<Blob> blobs = STORAGE_INSTANCE.list(bucket, Storage.BlobListOption.currentDirectory(),
        Storage.BlobListOption.prefix(getBucketKey(GS_SCHEMA, prefix).concat(URI_DELIMITER)));
Say .prefix() receives, for example, /dir/, and this prefix contains two nested levels, such as /dir/content/ and /dir/content/mycontent.txt.
If that call is executed with the previously mentioned /dir/, only /dir/content/ is listed, but no further prefixes.
So whenever I want to iterate through all the prefixes below /dir/, I have to repeat the call for /dir/content/ before I can see /dir/content/mycontent.txt listed.
Is there an easy way to fix this or am I not using the API properly?
Remove the Storage.BlobListOption.currentDirectory() parameter from the list() method. The following code snippet managed to display all Blobs containing a specific prefix for me:
Page<Blob> blobs = storage.list(bucketName, BlobListOption.prefix(prefix));
for (Blob blob : blobs.iterateAll()) {
    System.out.println(blob);
}

Searching a Mongo database using PyMongo, while using regex

I currently have a PyMongo collection with around 100,000 documents. I need to perform a regex search on each of these documents, checking each document against around 1,800 values to see if a particular field (which is an array) contains one of the 1,800 strings. After testing a variety of approaches, such as pre-compiling the regular expressions, multiprocessing and multi-threading, the performance is still abysmal: the search takes around 30-45 minutes.
The current regex I'm using to find the value at the end of the string is:
rgx = re.compile(string_To_Be_Compared + '$')
And then this is run using a standard PyMongo find query:
coll.find( { 'field' : rgx } )
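For context, here is a minimal self-contained version of what I'm doing; the database, collection and field names are simplified stand-ins for my real ones:
import re
from pymongo import MongoClient

client = MongoClient()           # assumes a local MongoDB instance
coll = client["mydb"]["mycoll"]  # stand-in database/collection names

values = ["foo.wav", "bar.mp3"]  # stand-in for the ~1,800 strings

# One anchored-regex query per value; re.escape guards against regex
# metacharacters (e.g. '.') inside the values themselves.
for value in values:
    rgx = re.compile(re.escape(value) + "$")
    for doc in coll.find({"field": rgx}):
        print(doc["_id"])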
I was wondering if anyone had any suggestions for querying these values in a more optimal way? Ideally the search returning all the values should take less than 5 minutes. Would the best course of action be to use something like ElasticSearch, or am I missing something basic?
Thanks for your time

Is there an equivalent of utassert.eqtable (available in utPLSQL version 2.x) in utPLSQL version 3.3?

After going through the documentation of utPLSQL 3.0.2, I couldn't find any reference to the assertion API available in the older versions. Please let me know whether there is an equivalent of an assertion like utassert.eqtable in the newer versions.
I have just recently gone through the same pain. Most utPLSQL examples out there are for utPLSQL v2. It appears that the assertions have been deprecated and replaced by "Expects". I found a great blog post by Jacek Gebal that describes this. I've tried to put this and other useful links on a page about how unit testing fits into Redgate's Oracle DevOps pipeline (I work for Redgate and we often get asked how best to implement automated unit testing for Oracle).
I don't think you can compare tables straight away, but you can compare cursors, which is quite flexible, because you can, for instance, set up a cursor with test data based on a dual query and then check that against the actual data in the table, something like this:
procedure TestCursorExample is
    v_Expected sys_refcursor;
    v_Actual   sys_refcursor;
begin
    -- Arrange (Nothing really to arrange, except setting the expectation).
    open v_Expected for
        select 'me@example.com' as Email
        from dual;
    -- Act
    SomeUpsertProc('me', 'me@example.com');
    -- Assert
    open v_Actual for
        select Email
        from Tbl_User
        where UserName = 'me';
    ut.expect(v_Actual).to_equal(v_Expected);
end;
Also, the example above works in Oracle 11, but if you're on 12c, things apparently got even easier, because you can use the table operator with locally defined types.
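I haven't tried the 12c variant myself, but a sketch of it would look something like this (the type, procedure and table names are hypothetical, and it assumes Oracle 12.1 or later):
procedure TestTableOperatorExample is
    -- Locally defined collection type; usable with table() from 12.1 onwards.
    type t_EmailTab is table of varchar2(100);
    v_Emails   t_EmailTab := t_EmailTab('me@example.com');
    v_Expected sys_refcursor;
    v_Actual   sys_refcursor;
begin
    -- Arrange: expected data comes straight from the local collection.
    open v_Expected for
        select column_value as Email from table(v_Emails);
    -- Act
    SomeUpsertProc('me', 'me@example.com');
    -- Assert
    open v_Actual for
        select Email from Tbl_User where UserName = 'me';
    ut.expect(v_Actual).to_equal(v_Expected);
end;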
I've used a similar solution to verify that certain columns of a row were updated while others were not. You can easily open a cursor for the original data, with some columns replaced by the new fixed values. Then do the update. Then open a cursor with the new actual data of all columns. You still have to write the queries, but it's way more compact than querying everything into variables and comparing those individually.
And because you can open the 'expected' cursor before doing the actual 'act' step of the test, you can be sure that the query with 'expected' data is not affected by the test itself, and you can even base that cursor on the data you are going to modify.
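As a sketch of that pattern (the procedure and column names are hypothetical):
procedure TestPartialUpdateExample is
    v_Expected sys_refcursor;
    v_Actual   sys_refcursor;
begin
    -- Arrange: the current row, but with Email replaced by the new fixed
    -- value. Opened before the act step, so the update cannot affect it.
    open v_Expected for
        select UserName, 'new@example.com' as Email, CreatedAt
        from Tbl_User
        where UserName = 'me';
    -- Act
    UpdateEmail('me', 'new@example.com');
    -- Assert: every selected column must match, so an unexpected change
    -- to UserName or CreatedAt fails the test as well.
    open v_Actual for
        select UserName, Email, CreatedAt
        from Tbl_User
        where UserName = 'me';
    ut.expect(v_Actual).to_equal(v_Expected);
end;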
For comparing the data, the cursors are serialized to XML. This may have some side effects. In the test example above, my act step didn't actually do anything, so I got the difference below, showing the row count as well as the missing data.
If your cursors have more columns and multiple differences, it can sometimes take a few seconds to spot the differences between the XML tags. Also, there are currently some edge-case issues with this, I think because of how trimming works in XML.
1) testcursorexample
Actual: refcursor [ count = 0 ] was expected to equal: refcursor [ count = 1 ]
Diff:
Rows: [ 1 differences ]
Row No. 1 - Missing: <EMAIL>me@example.com</EMAIL>
at "MySchema.MyTestPackage", line 410 ut.expect(v_Actual).to_equal(v_Expected);
See also: 'comparing cursors' from utPLSQL 3 concepts

Why would order of search terms affect solr query results?

If I search for Authorname "Title of Work" the records don't come up, but if I search for "Title of Work" Authorname then they do.
Why might this happen?
This is Solr running on ColdFusion. The only change is the order of the terms.
Update
Sample ColdFusion code. Note that in this example the first search gets 2 matches while the second gets 1. So it looks like this changes depending on the actual string searched, but it still means that changing the order of terms changes the number of records returned.
I could understand it changing the order of the records returned, since changing the term order would change the relevance of the results. But all 3 records should show up for either one. I'll see if I can find the Solr logs and post them; that might help.
<cfset term1='"globalization of information"'>
<cfset term2='Reiter'>
<cfsearch name="ExampleOne" criteria='#term1# #term2#' collection="abstracts,fulltexts">
<cfoutput>#ExampleOne.recordcount#</cfoutput>
<cfsearch name="ExampleTwo" criteria='#term2# #term1#' collection="abstracts,fulltexts">
<cfoutput>#ExampleTwo.recordcount#</cfoutput>
<cfabort>
Output:
2 1
Try giving the search term in single quotes; I have tested it on CF 10 and it works fine for me.
So instead of:
<cfset term1='"globalization of information"'>
try this:
<cfset term1="'globalization of information'">

Simple query working for years, then suddenly very slow

I've had a query that has been running fine for about 2 years. The database table has about 50 million rows, and is growing slowly. This last week one of my queries went from returning almost instantly to taking hours to run.
Rank.objects.filter(site=Site.objects.get(profile__client=client, profile__is_active=False)).latest('id')
I have narrowed the slow query down to the Rank model. It seems to have something to do with using the latest() method. If I just ask for a queryset, it returns an empty queryset right away.
# count() returns 0 and is fast
Rank.objects.filter(site=Site.objects.get(profile__client=client, profile__is_active=False)).count() == 0
Rank.objects.filter(site=Site.objects.get(profile__client=client, profile__is_active=False)) == []  # also very fast
Here are the results of running EXPLAIN. http://explain.depesz.com/s/wPh
And EXPLAIN ANALYZE: http://explain.depesz.com/s/ggi
I tried vacuuming the table, no change. There is already an index on the "site" field (ForeignKey).
Strangely, if I run this same query for another client that already has Rank objects associated with her account, the query returns very quickly once again. So it seems this is only a problem when there are no Rank objects for that client.
Any ideas?
Versions:
Postgres 9.1,
Django 1.4 svn trunk rev 17047
Well, you've not shown the actual SQL, so that makes it difficult to be sure. But the explain output suggests the planner thinks the quickest way to find a match is to scan an index on "id" backwards until it finds the client in question.
Since you said it has been fast until recently, this is probably not a silly choice. However, there is always the chance that a particular client's record will be right at the far end of this search.
So - try two things first:
Run an analyze on the table in question, see if that gives the planner enough info.
If not, increase the stats (ALTER TABLE ... SET STATISTICS) on the columns in question and re-analyze; see if that does it. A concrete sketch follows below.
http://www.postgresql.org/docs/9.1/static/planner-stats.html
If that's still not helping, then consider an index on (client,id), and drop the index on id (if not needed elsewhere). That should give you lightning fast answers.
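Concretely, the first two steps look something like this; the table and column names are stand-ins for whatever names Django generated for the Rank model:
-- Refresh the planner statistics for the table.
ANALYZE myapp_rank;

-- If the default statistics target is too coarse, raise it for the
-- filtered column and re-analyze.
ALTER TABLE myapp_rank ALTER COLUMN site_id SET STATISTICS 1000;
ANALYZE myapp_rank;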
latest() is normally used for date comparison; maybe you should try ordering by id descending and then limiting to one.
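For example, a sketch against the queryset from the question (Django 1.4 predates .first(), so slicing is used to express the LIMIT 1):
# Same filter as before, but as an explicit ORDER BY id DESC LIMIT 1.
qs = Rank.objects.filter(
    site=Site.objects.get(profile__client=client, profile__is_active=False)
).order_by('-id')[:1]

# Unlike latest('id'), this does not raise DoesNotExist when empty.
rank = qs[0] if qs else None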