How can I get all PriceRows where at least 2 have the same price in the same baseStore with a flexible query? - flexible-search

I need a flexible query to get all PriceRowModels that share the same price with at least one other row in the same base store. Is that possible, or do I need a Groovy script to select them?
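In case it helps, FlexibleSearch supports GROUP BY/HAVING, so a sketch along these lines should find the duplicated price groups. Note that PriceRow has no direct baseStore attribute; base stores reference catalogs, so this groups by catalog version as a stand-in, and you would adapt it to however your base stores map to catalogs:

SELECT {pr.price}, {pr.catalogVersion}, COUNT({pr.pk})
FROM {PriceRow AS pr}
GROUP BY {pr.price}, {pr.catalogVersion}
HAVING COUNT({pr.pk}) > 1

Each returned group is a price that occurs at least twice; a second query (or a join on these values) then fetches the PriceRow items themselves.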

Related

Lucene Syntax For More Complicated Queries

I am developing a website for my company that allows users to query a database in order to get the information they need.
Currently, the users are accustomed to a particular query format, and I don't want to force them to change it. Therefore, I need to convert their queries to Lucene's query syntax.
There are some cases where I'm not sure of the best way to implement them in Lucene syntax; perhaps you have some better ideas:
"Current Query" : serverRole=~'(ServerOne|ServerTwo|ServerThree)'
"Lucene Suggested": (serverRole:*ServerOne* OR serverRole:*ServerTwo* OR serverRole:*ServerThree*)
Take into account that I'm using regex to convert these queries, so one of the difficulties I'm facing, for example, is how to handle a dynamic number of elements (ServerOne|ServerTwo|ServerThree...):
luceneQuery = currentQuery
.replace(/(==~|=~)('|")([a-zA-Z0-9]+)(\|)([a-zA-Z0-9]+)('|")/g, ':*$3 OR $5*')
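For a dynamic number of elements, a replacer callback could be used instead of a fixed pattern (an untested sketch, assuming single-quoted alphanumeric values):

luceneQuery = currentQuery.replace(
    /(\w+)\s*==?~\s*'\(?([\w|]+)\)?'/g,
    function (match, field, values) {
        // split on '|' and expand each alternative to field:*value*
        return '(' + values.split('|').map(function (v) {
            return field + ':*' + v + '*';
        }).join(' OR ') + ')';
    });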
Another query for example:
"Current Query" : OS=~'SLES1[12]'
"Lucene Suggested": (OS:*SLES11* OR OS:*SLES12*)
I would recommend you check BooleanQuery in Lucene to create more complex queries combining Wildcard, Term, and Fuzzy queries. You can include them all by using the Occur parameter while you build your query. As an example:
Query query1 = new WildcardQuery(new Term("contents", "*ServerOne*"));
Query query2 = new WildcardQuery(new Term("contents", "*ServerTwo*"));
BooleanQuery booleanQuery = new BooleanQuery.Builder()
.add(query1, BooleanClause.Occur.SHOULD)
.add(query2, BooleanClause.Occur.SHOULD)
.build();
There are also regex queries you can run directly, but when your indexed field is complicated, finding a regex match can take time.
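For reference, a regex query for the OS example above might look like this (a sketch; note that Lucene regexes match against whole terms):

Query regexQuery = new RegexpQuery(new Term("OS", "SLES1[12]")); // matches the terms SLES11 or SLES12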

Django filtering with F and Q operations

I have a model class in my Django project with the following fields:
*user_id
*amount
*net_balance
*created_on
I have a list of user_ids (let's say 3). I need to get the last row for each user_id, then do some operation and create a new row for each user id. How do I do this efficiently? I can certainly do 6 transactions (if there are 3 items in the list of user ids).
If you want the most recent entry then
YourModel.objects.filter(user=user_id).latest('created_on')
If I understand your question correctly, then you need to get all the user_ids (presumably you have a separate User model?) and then loop through them: for each user, get the most recent entry and then create the new row.
You need 1 select (at least) for all the records you are interested in, and 1 insert query for each record returned.
The select query can be generated with the ORM's abilities (aggregation), or you can use raw SQL if you feel comfortable with it. If you use PostgreSQL, you can use its distinct ability (which I recommend) as:
Model.objects.order_by('user_id', '-created_on').distinct('user_id')
or you can use aggregation abilities as (note this returns user_id/latest-timestamp pairs rather than full rows):
Model.objects.filter(user_id__in=[1, 2, 3]).values('user_id').annotate(last_row=Max('created_on'))
The correct answer depends on your Django version and database, but there are lots of good features in Django for achieving this kind of thing.
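To tie the pieces together, a sketch of the whole flow (compute_new_amount is a hypothetical stand-in for your operation), using the PostgreSQL distinct() query from above plus bulk_create so the inserts collapse into a single query:

from django.db import transaction

user_ids = [1, 2, 3]
latest = (Model.objects.filter(user_id__in=user_ids)
          .order_by('user_id', '-created_on')
          .distinct('user_id'))  # one row per user: the most recent

with transaction.atomic():
    Model.objects.bulk_create([
        Model(user_id=row.user_id,
              amount=compute_new_amount(row),  # hypothetical operation
              net_balance=row.net_balance + compute_new_amount(row))
        for row in latest
    ])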

Cassandra: How to query the complete data set?

My table has 77k entries (and the number of entries keeps increasing at a high rate). I need to make a select query in CQL 3. When I do select count(*) ... where (some_conditions) allow filtering I get:
count
-------
10000
(1 rows)
Default LIMIT of 10000 was used. Specify your own LIMIT clause to get more results.
Let's say 23k rows satisfy some_condition. The 10000 count above is of the first 10k of these 23k rows, right? But how do I get the actual count?
More importantly, how do I get access to all of these 23k rows, so that my Python API can perform some in-memory operations on the data in some columns of the rows? Is there some sort of pagination principle in Cassandra CQL 3?
I know I can just increase the limit to a very large number but that's not efficient.
Working Hard is right, and LIMIT is probably what you want. But if you want to "page" through your results at a more detailed level, read through this DataStax document titled: Paging through unordered partitioner results.
This will involve using the token function on your partitioning key. If you want more detailed help than that, you'll have to post your schema.
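For example, manual token paging in CQL looks something like this (a sketch; substitute your actual partitioning key and table for id and my_table):

SELECT * FROM my_table WHERE token(id) > token(<last id of the previous page>) LIMIT 10000;

Repeat with the last id returned by each page until a page comes back short.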
While I cannot see your complete table schema, by virtue of the fact that you are using ALLOW FILTERING I can tell that you are doing something wrong. Cassandra was not designed to serve data based on multiple secondary indexes. That approach may work with a RDBMS, but over time that query will get really slow. You should really design a column family (table) to suit each query you intend to use frequently. ALLOW FILTERING is not a long-term solution, and should never be used in a production system.
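To illustrate the table-per-query idea (a sketch with hypothetical names): key the table by exactly what you filter on, and the count no longer needs ALLOW FILTERING:

CREATE TABLE rows_by_condition (
    condition_value text,
    row_id uuid,
    PRIMARY KEY (condition_value, row_id)
);
SELECT COUNT(*) FROM rows_by_condition WHERE condition_value = 'foo';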
You just have to specify a limit with your query.
Let's assume your table contains under 1 lakh (100,000) records; then if you execute the query below, it will give you the actual count of the records in the table:
select count(*) ... where (some_conditions) allow filtering limit 100000;
Another way is to write Python code; cqlsh itself is in fact a Python script.
Use:
statement = " select count(*) from SOME_TABLE"
future = session.execute_async(statement)
rows = future.result()
count = 0
for row in rows:
count = count + 1
the above is using cassandra python driver PAGE QUERY feature.

Cassandra NOT EQUAL Operator

Question to all Cassandra experts out there.
I have a column family with about a million records.
I would like to query these records in such a way that I should be able to perform a Not-Equal-To kind of operation.
I Googled this, and it seems I have to use some sort of MapReduce.
Can somebody tell me what options are available in this regard?
I can suggest a few approaches.
1) If you have a limited number of values that you would like to test for not-equality, consider modeling those as boolean columns (e.g. a column isEqualToUnitedStates with true or false).
2) Otherwise, consider emulating the unsupported query != X by combining results of two separate queries, < X and > X on the client-side.
3) If your schema cannot support either type of query above, you may have to resort to writing custom routines that will do client-side filtering and construct the not-equal set dynamically. This will work if you can first narrow down your search space to manageable proportions, such that it's relatively cheap to run the query without the not-equal.
So let's say you're interested in all purchases of a particular customer of every product type except Widget. An ideal query could look something like SELECT * FROM purchases WHERE customer = 'Bob' AND item != 'Widget'; Now of course, you cannot run this, but in this case you should be able to run SELECT * FROM purchases WHERE customer = 'Bob' without wasting too many resources and filter item != 'Widget' in the client application (see the sketch after this list).
4) Finally, if there is no way to restrict the data in a meaningful way before doing the scan (querying without the equality check would return too many rows to handle comfortably), you may have to resort to MapReduce. This means running a distributed job that would scan all rows in the table across the cluster. Such jobs will obviously run a lot slower than native queries, and are quite complex to set up. If you want to go this way, please look into Cassandra Hadoop integration.
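A sketch of the client-side filtering from option 3, using the Python driver (table and column names are hypothetical):

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('my_keyspace')

# narrow the search space server-side, then apply the not-equal client-side
rows = session.execute("SELECT item, amount FROM purchases WHERE customer = %s", ('Bob',))
not_widgets = [row for row in rows if row.item != 'Widget']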
If you want to use the not-equals operator on a specific partition key and get all other data from the table, then you can use a combination of range queries and the TOKEN function in CQL to achieve this.
For example, if you want to fetch all rows except the ones having partition key 'abc', then you execute the 2 queries below:
select <column1>,<column2> from <keyspace1>.<table1> where TOKEN(<partition_key_column_name>) < TOKEN('abc');
select <column1>,<column2> from <keyspace1>.<table1> where TOKEN(<partition_key_column_name>) > TOKEN('abc');
But beware that the result is going to be huge (depending on the size of the table and the fields you need), so you might want to use this in conjunction with a utility like dsbulk. Also note that there is no guarantee of ordering in the result. This is just a kind of data dump, most probably useful for one-time data-migration-like scenarios.

Efficiently processing all data in a Cassandra Column Family with a MapReduce job

I want to process all of the data in a column family in a MapReduce job. Ordering is not important.
One approach is to iterate over all the row keys of the column family to use as the input. This could potentially be a bottleneck and could be replaced with a parallel method.
I'm open to other suggestions, or for someone to tell me I'm wasting my time with this idea. I'm currently investigating the following:
A potentially more efficient way is to assign ranges to the input instead of iterating over all row keys (before the mapper starts). Since I am using RandomPartitioner, is there a way to specify a range to query based on the MD5?
For example, I want to split the task into 16 jobs. Since the RandomPartitioner is MD5 based (from what I have read), I'd like to query everything starting with a for the first range. In other words, how would I do a get_range on the MD5 that starts at a and ends before b, e.g. a0000000000000000000000000000000 - afffffffffffffffffffffffffffffff?
I'm using the pycassa API (Python) but I'm happy to see Java examples.
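In case it's useful, here is a rough sketch of the token-range idea in Python. RandomPartitioner tokens are integers in 0 .. 2**127 - 1, so it's simpler to split that integer ring into equal slices than to reason about hex prefixes (I'm treating pycassa's token parameters to get_range as an assumption; check your version's signature):

NUM_JOBS = 16
step = (2 ** 127) // NUM_JOBS  # RandomPartitioner's token ring split evenly
token_ranges = [(str(i * step), str((i + 1) * step - 1)) for i in range(NUM_JOBS)]

# each job then scans only its slice, e.g.:
# for key, columns in cf.get_range(start_token=start, finish_token=finish):
#     process(key, columns)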
I'd cheat a little:
Create new rows job_(n) with each column representing each row key in the range you want
Pull all columns from that specific row to indicate which rows you should pull from the CF
I do this with users. Users from a particular country get a column in the country-specific row. Users with a particular age are also added to a specific row.
This allows me to quickly pull the rows I need based on the criteria I want, and is a little more efficient compared to pulling everything.
This is how the Mahout CassandraDataModel example functions:
https://github.com/apache/mahout/blob/trunk/integration/src/main/java/org/apache/mahout/cf/taste/impl/model/cassandra/CassandraDataModel.java
Once you have the data and can pull the rows you are interested in, you can hand it off to your MR job(s).
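A sketch of the index-row trick with pycassa (keyspace and column family names are hypothetical):

from pycassa import ConnectionPool, ColumnFamily

pool = ConnectionPool('my_keyspace')
index_cf = ColumnFamily(pool, 'indexes')
data_cf = ColumnFamily(pool, 'users')

# write: record each interesting row key as a column of the job's index row
index_cf.insert('job_1', {'some_row_key': ''})  # the column name is the pointer; the value is unused

# read: pull the index row, then fetch exactly those rows from the data CF
keys = index_cf.get('job_1').keys()
rows = data_cf.multiget(keys)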
Alternately, if speed isn't an issue, look into using PIG: How to use Cassandra's Map Reduce with or w/o Pig?