Converting a plain SQL query into a Hibernate criteria query - hibernate-criteria

I need to convert this SQL query to a Hibernate criteria query. Please help, guys.
SELECT NAME, COUNT(*) AS app
FROM device
GROUP BY NAME
ORDER BY app DESC
LIMIT 3

Try this code:
select device.name, count(device)
from Device device
group by device.name
order by count(device) desc
This assumes that you have an entity class called Device with a field name along with a getter method getName(). You may have to adjust the query depending on what your actual code is (which you never showed us).
The LIMIT clause you had is not valid in HQL. Instead, call Query.setMaxResults().
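In code that means something like session.createQuery(hql).setMaxResults(3), and on MySQL the dialect translates setMaxResults back into a LIMIT clause. As a quick sanity check of the SQL the query boils down to, here is the equivalent statement run against an in-memory SQLite table (a sketch with made-up rows, not the asker's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE device (name TEXT)")
# Made-up sample data: 5 x 'a', 3 x 'b', 2 x 'c', 1 x 'd'
con.executemany("INSERT INTO device VALUES (?)",
                [("a",)] * 5 + [("b",)] * 3 + [("c",)] * 2 + [("d",)])

# The SQL the HQL + setMaxResults(3) combination effectively produces:
rows = con.execute("""
    SELECT name, COUNT(*) AS app
    FROM device
    GROUP BY name
    ORDER BY app DESC
    LIMIT 3
""").fetchall()
print(rows)  # [('a', 5), ('b', 3), ('c', 2)]
```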

Related

AWS Quicksight : How to use parameter value inside SQL (data set) to render dynamic data on dashboard?

There is a provision to pass values for QuickSight parameters via the URL. But how can I use the value of the parameter inside the SQL (data set) to get dynamic data on the dashboard?
For example:
QUERY as of now:
select * from CITYLIST;
Dashboard:
CITYLIST
city_name | cost_of_living
AAAAAAAAA | 20000
BBBBBBBBB | 25000
CCCCCCCCC | 30000
Parameter Created : cityName
URL Triggered : https://aws-------------------/dashboard/abcd123456xyz#p.cityName=AAAAAAAAA
Somehow I need to use the value passed in the URL inside the SQL, so that I can write a dynamic query like this:
select * from CITYLIST where city_name = SomeHowNeedAccessOfParameterValue;
QuickSight doesn't provide a way to access parameters via SQL directly.
Instead you should create a filter from the parameter to accomplish your use-case.
This is effectively QuickSight's way of creating the WHERE clause for you.
This design decision makes sense to me. Though it takes filtering out of your control in the SQL, it makes your data sets more reusable (what would happen in the SQL if the parameter weren't provided?)
Create a parameter, then a control, and then a filter ("Custom filter" -> "Use parameters").
If you select the Direct query option and a custom SQL query for the data set, then the SQL query will be executed on each visual change/update.
The final query on DB side will look like [custom SQL query] + WHERE clause.
For example:
Visual side:
For the control Control_1, the selected values are "A", "B", "C";
DB side:
[Custom SQL from data set] + 'WHERE column in ("A", "B", "C")'
Quicksight builds a query for you and runs it on DB side.
This reduces the amount of data sent over the network.
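As a rough illustration of that query building (not QuickSight's literal SQL, which isn't documented at this level), you can think of the custom SQL being wrapped as a subquery with the control's filter appended as a WHERE ... IN clause; here simulated with Python's stdlib sqlite3 and the CITYLIST data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CITYLIST (city_name TEXT, cost_of_living INTEGER)")
con.executemany("INSERT INTO CITYLIST VALUES (?, ?)",
                [("AAAAAAAAA", 20000), ("BBBBBBBBB", 25000), ("CCCCCCCCC", 30000)])

custom_sql = "SELECT * FROM CITYLIST"       # the data set's custom SQL
selected = ["AAAAAAAAA", "BBBBBBBBB"]       # values picked in the control

# One plausible shape of the final query: the custom SQL wrapped as a
# subquery, with the control's filter appended on top of it.
placeholders = ",".join("?" for _ in selected)
final = f"SELECT * FROM ({custom_sql}) WHERE city_name IN ({placeholders})"
rows = con.execute(final, selected).fetchall()
print(rows)  # [('AAAAAAAAA', 20000), ('BBBBBBBBB', 25000)]
```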
Yes, QuickSight now provides an SQL editor, and you can use it for the same purpose.
For full details, please see the reference below:
https://docs.aws.amazon.com/quicksight/latest/user/adding-a-SQL-query.html

Let users run arbitrary queries our MySQL database safely

We provide various services to our clients, for example sending emails. Their users are saved in our database (MySQL). We would like to give our clients the ability to run arbitrary searches on our database without compromising it. Let me elaborate:
Below is an existing table definition
* User Table
- email varchar
- category integer
Currently the only way our clients can select a group of users for whom an action is to be taken (say, sending an email) is by sending us a category. They send us a category and we take the action for all the users in that category.
However, this is quite restrictive, and there are times when our clients would want to run custom searches in an unrestricted way. For example, below is our new table:
* User Table
- email varchar
- category integer
- gender integer
- dob integer
- country varchar
and we would like our clients to run arbitrary searches on their users using all the fields mentioned, at least logical AND, OR, % (like), () operations to begin with, for example,
(gender = 1 AND dob < 1999) OR category = 2.
The idea is that they pass us a subquery which we append to the SELECT statement as its WHERE clause. However, this is risky, and we want to make sure we handle it safely, without any malicious attempt being able to exploit this feature and compromise our database. Hence I need your help/inputs.
What would be the safest way to provide this kind of user search ability? We use C++ for our backend. The client supplies the logic via a REST API, which is received by our C++ backend.
The best way to approach this is probably to use a prepared statement. Difficulties arise because, using your second User table as an example, a customer could potentially request a query whose WHERE clause involves any combination of the 5 columns in that table.
One trick which might work is something like this:
SELECT *
FROM Users
WHERE
email = COALESCE(?, email) AND
category = COALESCE(?, category) AND
gender = COALESCE(?, gender) AND
dob = COALESCE(?, dob) AND
country = COALESCE(?, country);
The basic idea is that you have a single prepared statement, to which you always bind parameters for all columns in the Users table. If a customer makes a request which does not involve a certain column, then you bind NULL, and that condition in the WHERE clause effectively gets ignored. (One caveat: COALESCE(NULL, col) = col compares the column to itself, which is false when the column is NULL, so rows with a NULL in that column are dropped even when no filter on it was requested.)
The exact solution you use would depend on the programming language you are using to expose MySQL.
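Here is a minimal, runnable sketch of the COALESCE trick using Python's stdlib sqlite3 (the asker's stack is C++/MySQL, but the pattern is identical; the table contents are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Users (
    email TEXT, category INTEGER, gender INTEGER, dob INTEGER, country TEXT)""")
con.executemany("INSERT INTO Users VALUES (?, ?, ?, ?, ?)", [
    ("a@x.com", 1, 1, 1995, "US"),
    ("b@x.com", 2, 2, 1988, "DE"),
    ("c@x.com", 2, 1, 2001, "US"),
])

# One prepared statement covering every column; binding None (NULL)
# for a column disables that condition.
QUERY = """SELECT email FROM Users WHERE
    email    = COALESCE(?, email)    AND
    category = COALESCE(?, category) AND
    gender   = COALESCE(?, gender)   AND
    dob      = COALESCE(?, dob)      AND
    country  = COALESCE(?, country)"""

# Only filter on category = 2 and country = 'US'.
rows = con.execute(QUERY, (None, 2, None, None, "US")).fetchall()
print(rows)  # [('c@x.com',)]
```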

Unconnected lookup input value not working

In Informatica, I am trying to get the date after a certain number of working days (say 10, 20, 30) based on another condition (say priority 1, 2, 3). I already have a DIM_DATE table where holidays and working days are configured. There is no relation between the priority table and the DIM_DATE table. Here I am using an unconnected lookup with a query override. Below is the query I used:
select day_date as DAY_DATE
       --, rank1
       --, PRIORITY_name
from (
    select day_date as DAY_DATE,
           DENSE_RANK() OVER (ORDER BY day_date) as RANK1,
           PRIORITY_name as PRIORITY_NAME
    from (
        select date_id, day_date
        from dim_date
        where day_date between to_date('10.15.2018', 'MM.DD.YYYY')
                           and to_date('10.15.2018', 'MM.DD.YYYY') + interval '250' DAY(3)
          and working_day = 1
    ), DIM_PRIORITY
    where DIM_PRIORITY.PRIORITY_name = '3'
)
where rank1 = 10
order by RANK1
In this example I have hardcoded the day_date, priority_name and rank1, but I need to pass all of them as inputs coming from the mapping.
The hardcoded version works, but when I take the input as ?created? it does not. Here created is the date which comes from the mapping flow.
Could you please suggest whether what I am trying is feasible?
?created? gives a "missing right parenthesis" error, while the hardcoded query runs fine in SQL.
You match your incoming port against one of the return fields of one of the records in the cache via the lookup condition (not by feeding ports into the override itself).
If that is not possible for some unexplained reason, then you could define 3 mapping variables, set them equal to each of the input ports you care about (using SETVARIABLE) before feeding the record into the lookup, and then use the variables in your lookup override.

PDI - Update field value in Logging tables

I'm trying to create a transformation that can change a field value in the DB (PostgreSQL is what I use).
Case :
In the Postgres DB I have a table called Monitoring, and it has several fields like id, date, starttime, endtime, duration, transformation name, status, desc. All those values come from Transformation Logging.
So, when I run the transformation it inserts into the Monitoring table and sets the status field to Running, and when it is done it updates the status to Finish. What I'm trying to do is define the values in the table fields myself rather than take them from Transformation Logging, so I can customize the values as I want.
The goal is to update the transformation status value from 'running' to 'finish/error/abort etc.' in my DB using Pentaho, and to display that status in a web app.
I have been thinking of using a Modified Java Script step, but is there maybe another, better way? (I just need an opinion on this.)
Apart from my remark, did you try the Value Mapper?
Modified Java Script is not a good idea to use; ideally it shouldn't be used, due to its performance cost. You can use the "Add constants" step or a "User Defined Java Class" step as an alternative.
You cannot change the values of the built-in logging tables, for the simple reason that they are reserved for PDI's own usage. This causes a known issue in the case of a hard error: for example, the status is not set to Finish when the database server crashes, or when an exception is not caught by the PDI code.
You have some workarounds:
The simplest, the one used in the ETL-Pilot, is to test (Status = Finish OR LogDate < 15 minutes ago) in the web app.
You can update the table when the transformation is not running. For example, set up an hourly (or more frequent) crontab that changes the status to Finish for any transformation whose LogDate is older than 15 minutes. This crontab may run a simple SQL statement, or a transformation that also checks the table sizes and/or sends an email in case of a potential error.
You can copy the table (if that is a non-locking operation in your DB system), modify the Status column, and use the copy for your web app.
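The periodic-cleanup workaround boils down to a single UPDATE. Sketched here against an in-memory SQLite stand-in with a hypothetical monitoring table (on PostgreSQL the cutoff would be written now() - interval '15 minutes'):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical copy of the logging table -- the names are illustrative.
con.execute("CREATE TABLE monitoring (transname TEXT, status TEXT, logdate TEXT)")
con.execute("INSERT INTO monitoring VALUES ('trans_a', 'Running', '2018-01-01 00:00:00')")
con.execute("INSERT INTO monitoring VALUES ('trans_b', 'Running', datetime('now'))")

# The cleanup: mark as Finish anything still 'Running' whose last log
# entry is older than 15 minutes (a crashed run never updates it).
con.execute("""
    UPDATE monitoring
    SET status = 'Finish'
    WHERE status = 'Running'
      AND logdate < datetime('now', '-15 minutes')
""")
rows = con.execute(
    "SELECT transname, status FROM monitoring ORDER BY transname").fetchall()
print(rows)  # [('trans_a', 'Finish'), ('trans_b', 'Running')]
```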

sqlite3 c/c++, get the table names involved in an aggregate query

I am using sqlite in a C++ project and I would like to be able to get the table names involved in a query.
Ex:
SELECT * FROM Employee
should return Employee
Now I successfully use sqlite3_column_table_name (doc) for this kind of query, but for aggregate queries the function returns null, as the result does not belong to a table directly.
ex:
SELECT SUM(salary) AS total FROM Employee
Surely, when sqlite compiles the statement, the "Employee" identifier is recognised as a table. Do you know any way to get access to this?
I tried to step through the code of the parser without success...
An authorizer callback (sqlite3_set_authorizer) allows you to detect which tables are actually accessed by a query: during sqlite3_prepare, SQLite invokes the callback with the SQLITE_READ action code for every table/column the statement reads.
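Since this thread is about the C API, sqlite3_set_authorizer is the entry point; Python's stdlib sqlite3 wraps the same mechanism as Connection.set_authorizer, which makes the idea easy to sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (name TEXT, salary REAL)")

tables = set()

def authorizer(action, arg1, arg2, db_name, trigger_or_view):
    # SQLITE_READ fires once per table/column a statement reads;
    # arg1 is the table name, arg2 the column name.
    if action == sqlite3.SQLITE_READ:
        tables.add(arg1)
    return sqlite3.SQLITE_OK  # allow everything; we only observe

con.set_authorizer(authorizer)
con.execute("SELECT SUM(salary) AS total FROM Employee").fetchall()
print(sorted(tables))  # ['Employee']
```

The same pattern in C is to pass a callback to sqlite3_set_authorizer before preparing the statement and collect the table-name argument whenever the action code is SQLITE_READ.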