Capture of sys.dm_pdw_dms_workers - azure-sqldw

I want to collect the data from sys.dm_pdw_dms_workers and retain it for some amount of time.
I am running the following SQL on a near-quiescent DWU6000 system and it has run for over 40 minutes.
What could explain such a long run time? I executed the same process on a little-used DWU100 system and the query ran in seconds.
INSERT INTO
#dm_pdw_dms_workers
(
request_id
,step_index
,dms_step_index
,pdw_node_id
,distribution_id
,type
,status
,bytes_per_sec
,bytes_processed
,rows_processed
,start_time
,end_time
,cpu_time
,query_time
,buffers_available
,sql_spid
,dms_cpid
,error_id
,source_info
,destination_info
)
select
request_id
,step_index
,dms_step_index
,pdw_node_id
,distribution_id
,type
,status
,bytes_per_sec
,bytes_processed
,rows_processed
,start_time
,end_time
,cpu_time
,query_time
,buffers_available
,sql_spid
,dms_cpid
,error_id
,source_info
,destination_info
from
sys.dm_pdw_dms_workers
where
end_time is not null
;
The temp table definition is:
create table
#dm_pdw_dms_workers
(
request_id nvarchar(32) /* PK of sys.dm_pdw_dms_workers. */
,step_index integer /* PK of sys.dm_pdw_dms_workers. */
,dms_step_index integer /* PK of sys.dm_pdw_dms_workers. */
,pdw_node_id integer
,distribution_id integer
,type nvarchar(32)
,status nvarchar(32)
,bytes_per_sec bigint
,bytes_processed bigint
,rows_processed bigint
,start_time datetime
,end_time datetime
,cpu_time bigint
,query_time integer
,buffers_available integer
,sql_spid integer
,dms_cpid integer
,error_id nvarchar(36)
,source_info nvarchar(4000)
,destination_info nvarchar(4000)
)
WITH
(
HEAP
,DISTRIBUTION = hash(request_id)
)
;
The source data is not large:
select
count(step_index)
from
sys.dm_pdw_dms_workers
where
end_time is not null
;
Results in
485,694

Removing the DISTRIBUTION clause from the temp table definition prevents a non-optimal plan from being generated at high DWUs.
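For reference, a minimal sketch of the revised temp table definition; the assumption here is that the default ROUND_ROBIN distribution (used when no DISTRIBUTION clause is given) is acceptable for a short-lived staging table:
create table
#dm_pdw_dms_workers
(
request_id nvarchar(32)
,step_index integer
,dms_step_index integer
,pdw_node_id integer
,distribution_id integer
,type nvarchar(32)
,status nvarchar(32)
,bytes_per_sec bigint
,bytes_processed bigint
,rows_processed bigint
,start_time datetime
,end_time datetime
,cpu_time bigint
,query_time integer
,buffers_available integer
,sql_spid integer
,dms_cpid integer
,error_id nvarchar(36)
,source_info nvarchar(4000)
,destination_info nvarchar(4000)
)
WITH
(
HEAP /* no DISTRIBUTION clause: the table defaults to ROUND_ROBIN */
)
;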


How to use string as column name in Bigquery

There is a scenario where I receive a string in my BigQuery function and need to use it as a column name.
Here is the function:
CREATE OR REPLACE FUNCTION METADATA.GET_VALUE(column STRING, row_number int64) AS (
(SELECT column from WORK.temp WHERE rownumber = row_number)
);
When I call this function as select METADATA.GET_VALUE("TXCAMP10", 149); I get the value TXCAMP10 back, so we can say it is processed as SELECT "TXCAMP10" from WORK.temp WHERE rownumber = 149. But I need it processed as SELECT TXCAMP10 from WORK.temp WHERE rownumber = 149, which will return some value from the temp table; let's suppose that value is A.
So ultimately I need the value A instead of the column name, i.e. TXCAMP10.
I tried using execute immediate, as in execute immediate("SELECT " || column || " from WORK.temp WHERE rownumber = " || row_number), following another Stack Overflow post, but it turns out I can't use it in a function.
How do I achieve the required result?
I don't think you can achieve this result with a UDF in standard SQL in BigQuery.
But it is possible to do this with stored procedures in BigQuery and the EXECUTE IMMEDIATE statement. Consider this code, which simulates the situation you have:
create or replace table d1.temp(
c1 int64,
c2 int64
);
insert into d1.temp values (1, 1), (2, 2);
create or replace procedure d1.GET_VALUE(column STRING, row_number int64, out result int64)
BEGIN
EXECUTE IMMEDIATE 'SELECT ' || column || ' from d1.temp where c2 = ?' into result using row_number;
END;
BEGIN
DECLARE result_c1 INT64;
call d1.GET_VALUE("c1", 1, result_c1);
select result_c1;
END;
After some research and trial and error, I arrived at this workaround. It may not be the best solution when you have many columns, but it surely works.
CREATE OR REPLACE FUNCTION METADATA.GET_VALUE(column_name STRING, row_number int64) AS (
(SELECT case
when column_name = 'a' then a
when column_name = 'b' then b
when column_name = 'c' then c
when column_name = 'd' then d
when column_name = 'e' then e
end from WORK.temp WHERE rownumber = row_number)
);
And this gives the required results.
Point to note: the columns you reference in the CASE statement must all be of the same datatype, or it won't work.
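If the columns have different datatypes, one possible variation (my assumption, not part of the original answer) is to cast every branch to STRING so the CASE expression has a single result type:
CREATE OR REPLACE FUNCTION METADATA.GET_VALUE(column_name STRING, row_number int64) AS (
(SELECT case
when column_name = 'a' then CAST(a AS STRING)
when column_name = 'b' then CAST(b AS STRING)
when column_name = 'c' then CAST(c AS STRING)
end from WORK.temp WHERE rownumber = row_number)
);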

How to remove null rows from MDX query results

How can I remove the null row from my MDX query results?
Here is the query I'm currently working with:
select
non empty
{
[Measures].[Average Trips Per Day]
,[Measures].[Calories Burned]
,[Measures].[Carbon Offset]
,[Measures].[Median Distance]
,[Measures].[Median Duration]
,[Measures].[Rider Trips]
,[Measures].[Rides Per Bike Per Day]
,[Measures].[Total Distance]
,[Measures].[Total Riders]
,[Measures].[Total Trip Duration in Minutes]
,[Measures].[Total Members]
} on columns
,
non empty
{
(
[Promotion].[Promotion Code Name].children
)
} on rows
from [BCycle]
where ([Program].[Program Name].&[Madison B-cycle])
;
(screenshot of the query results omitted; it shows what appears to be a null row among the Promotion Code Name members)
That row is not a null value; it is one of the children of [Promotion].[Promotion Code Name].
You can exclude that particular member from the children using the Except function of MDX.
Example query:
//This query shows the number of orders for all products,
//with the exception of Components, which are not
//sold.
SELECT
[Date].[Month of Year].Children ON COLUMNS,
Except
([Product].[Product Categories].[All].Children ,
{[Product].[Product Categories].[Components]}
) ON ROWS
FROM
[Adventure Works]
WHERE
([Measures].[Order Quantity])
Reference -> https://learn.microsoft.com/en-us/sql/mdx/except-mdx-function?view=sql-server-2017
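Applied to the query in the question, the same pattern might look like the sketch below; the key of the blank member is not shown in the question, so [Promotion].[Promotion Code Name].&[BlankMember] is a hypothetical placeholder to replace with the actual member:
select
non empty
{
[Measures].[Rider Trips]
,[Measures].[Total Riders]
//remaining measures as in the original query
} on columns
,
non empty
Except
(
[Promotion].[Promotion Code Name].children
,{[Promotion].[Promotion Code Name].&[BlankMember]} //hypothetical key for the blank row's member
) on rows
from [BCycle]
where ([Program].[Program Name].&[Madison B-cycle])
;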

WSO2 DAS SiddhiQL: Initialize an event table and update it after

I have some related questions:
First, I want to initialize an event table with default values and 100 rows, as in this picture (screenshot omitted):
Second, once the initialization is done, I would like to update this table. How can I execute, within the same execution plan, a query2 once the query1 execution has finished?
Finally, I have an event with an 'altitude' attribute. In my execution plan, for each event, I want to increment count1 in every row of my event table where the num column is smaller than the altitude. I tried the following, but it doesn't increment the count in all rows.
FROM inputStream JOIN counterTable
SELECT count1+1 as count1, altitude as tempNum
update counterTable on counterTable.count1 < tempNum;
FROM inputStream JOIN counterTable
SELECT counterTable.num as theAltitude, counterTable.count1 as countAltitude
INSERT INTO outputStream;
If you want to initialize each time the execution plan gets deployed, you should use an in-memory event table (as shown below). Otherwise, you can simply use an RDBMS event table, where it's already been initialized.
Queries will run in the order they are defined, but that process occurs per event, not as a batch: if there are two queries which consume from inputStream, when event 1 comes into inputStream it goes to query 1, then to query 2, and only then will event 2 be consumed.
Refer to the snippet below:
/* Define the trigger to be used with initialization */
define trigger triggerStream at 'start';
/* Define streams */
define stream inputStream (altitude int);
define stream outputStream (theAltitude long, countAltitude int);
/* Define table */
define table counterTable (num long, count1 int, count2 int, tempNum int);
/* Iterate and generate 100 events */
from triggerStream[count() < 100]
insert into triggerStream;
/* Using above 100 events, initialize the event table */
from triggerStream
select count() as num, 0 as count1, 0 as count2, 0 as tempNum
insert into counterTable;
/* Perform the update logic here */
from inputStream as i join counterTable as c
on c.count1 < i.altitude
select c.num, (c.count1 + 1) as count1, c.count2, altitude as tempNum
insert into updateStream;
from updateStream
insert overwrite counterTable
on counterTable.num == num;
/* Join the table and get the updated results */
from inputStream join counterTable as c
select c.num as theAltitude, c.count1 as countAltitude
insert into outputStream;
Table values can be initialized as follows.
#info(name='initialize table values')
from inputStream[count()==1]
select 1 as id, 0 as counter1, 0 as counter2, 0 as counter3
insert into counterTable;

QSqlQuery - alter table if column not exists before [duplicate]

We've recently had the need to add columns to a few of our existing SQLite database tables. This can be done with ALTER TABLE ADD COLUMN. Of course, if the table has already been altered, we want to leave it alone. Unfortunately, SQLite doesn't support an IF NOT EXISTS clause on ALTER TABLE.
Our current workaround is to execute the ALTER TABLE statement and ignore any "duplicate column name" errors, just like this Python example (but in C++).
However, our usual approach to setting up database schemas is to have a .sql script containing CREATE TABLE IF NOT EXISTS and CREATE INDEX IF NOT EXISTS statements, which can be executed using sqlite3_exec or the sqlite3 command-line tool. We can't put ALTER TABLE in these script files because if that statement fails, anything after it won't be executed.
I want to have the table definitions in one place and not split between .sql and .cpp files. Is there a way to write a workaround to ALTER TABLE ADD COLUMN IF NOT EXISTS in pure SQLite SQL?
I have a 99% pure SQL method. The idea is to version your schema. You can do this in two ways:
Use the 'user_version' pragma command (PRAGMA user_version) to store an incremental number for your database schema version.
Store your version number in your own defined table.
In this way, when the software is started, it can check the database schema and, if needed, run your ALTER TABLE query, then increment the stored version. This is by far better than attempting various updates "blind", especially if your database grows and changes a few times over the years.
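A minimal sketch of the user_version approach in pure SQLite SQL; MyTable and NewColumn are hypothetical names standing in for your own schema:
-- Read the current schema version; a fresh database reports 0.
PRAGMA user_version;
-- If the host application sees a version below 1, it runs the migration:
ALTER TABLE MyTable ADD COLUMN NewColumn TEXT;
-- ...and then records that the migration has been applied:
PRAGMA user_version = 1;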
One workaround is to just create the columns and catch the exception/error that arises if the column already exists. When adding multiple columns, add them in separate ALTER TABLE statements so that one duplicate does not prevent the others from being created.
With sqlite-net, we did something like this. It's not perfect, since we can't distinguish duplicate-column errors from other SQLite errors.
Dictionary<string, string> columnNameToAddColumnSql = new Dictionary<string, string>
{
{
"Column1",
"ALTER TABLE MyTable ADD COLUMN Column1 INTEGER"
},
{
"Column2",
"ALTER TABLE MyTable ADD COLUMN Column2 TEXT"
}
};
foreach (var pair in columnNameToAddColumnSql)
{
string columnName = pair.Key;
string sql = pair.Value;
try
{
this.DB.ExecuteNonQuery(sql);
}
catch (System.Data.SQLite.SQLiteException e)
{
_log.Warn(e, string.Format("Failed to create column [{0}]. Most likely it already exists, which is fine.", columnName));
}
}
If you are doing this in a DB upgrade statement, perhaps the simplest way is to just catch the exception thrown if you are attempting to add a field that may already exist.
try {
db.execSQL("ALTER TABLE " + TABLE_NAME + " ADD COLUMN foo TEXT default null");
} catch (SQLiteException ex) {
Log.w(TAG, "Altering " + TABLE_NAME + ": " + ex.getMessage());
}
SQLite also supports a pragma statement called "table_info" which returns one row per column in a table with the name of the column (and other information about the column). You could use this in a query to check for the missing column, and if not present alter the table.
PRAGMA table_info(foo_table_name)
Sample output:
cid  name  type     notnull  dflt_value  pk
0    id    integer  0        null        1
1    type  text     0        null        0
2    data  json     0        null        0
http://www.sqlite.org/pragma.html#pragma_table_info
There is a PRAGMA method, table_info(table_name), which returns all the information about a table.
Here is an implementation that uses it to check whether a column exists or not:
public boolean isColumnExists(String table, String column) {
    boolean isExists = false;
    Cursor cursor = null;
    try {
        cursor = db.rawQuery("PRAGMA table_info(" + table + ")", null);
        if (cursor != null) {
            while (cursor.moveToNext()) {
                String name = cursor.getString(cursor.getColumnIndex("name"));
                if (column.equalsIgnoreCase(name)) {
                    isExists = true;
                    break;
                }
            }
        }
    } finally {
        if (cursor != null && !cursor.isClosed())
            cursor.close();
    }
    return isExists;
}
You can also avoid the loop by querying the pragma as a table-valued function (requires SQLite 3.16.0+), with the names quoted properly:
cursor = db.rawQuery("SELECT * FROM pragma_table_info('" + table + "') WHERE name = '" + column + "'", null);
For those who want to use pragma table_info()'s result as part of a larger SQL statement:
select count(*) from
pragma_table_info('<table_name>')
where name='<column_name>';
The key part is to use pragma_table_info('<table_name>') instead of pragma table_info('<table_name>'); this table-valued function form requires SQLite 3.16.0 or later.
This answer is inspired by @Robert Hawkey's reply. The reason I post it as a new answer is that I don't have enough reputation to post it as a comment.
I came up with this query:
SELECT count(*) FROM pragma_table_info('product') c WHERE c.name = 'purchaseCopy';
The query returns 0 or 1 depending on whether the column exists. Since SQLite cannot run DDL from inside a CASE expression, the application checks the result and, when it is 0, executes:
ALTER TABLE product ADD purchaseCopy BLOB;
In case you're having this problem in Flex/Adobe AIR and find yourself here first, I've found a solution and have posted it on a related question: ADD COLUMN to sqlite db IF NOT EXISTS - flex/air sqlite?
My comment here: https://stackoverflow.com/a/24928437/2678219
I took the C#/.NET answer above and rewrote it for Qt/C++. Not much changed, but I wanted to leave it here for anyone in the future looking for a C++-ish answer.
bool MainWindow::isColumnExisting(QString &table, QString &columnName){
    QSqlQuery q;
    // Qt's SQL module reports errors through exec()'s return value rather
    // than exceptions, so a failed query simply falls through to false.
    if (q.exec("PRAGMA table_info(" + table + ")")) {
        while (q.next()) {
            QString name = q.value("name").toString();
            if (columnName.toLower() == name.toLower())
                return true;
        }
    }
    return false;
}
You can alternatively use a CASE WHEN expression in combination with pragma_table_info to find out whether a column exists:
select case(CNT)
WHEN 0 then printf('not found')
WHEN 1 then printf('found')
END
FROM (SELECT COUNT(*) AS CNT FROM pragma_table_info('myTableName') WHERE name='columnToCheck')
Here is my solution, but in Python (I tried and failed to find any post on the topic related to Python):
# Modify table for legacy versions which did not have the leave type and
# leave time columns of the rings3 table.
sql = 'PRAGMA table_info(rings3)'  # get table info; returns an array of columns
result = inquire(sql)  # call homemade function to execute the inquiry
if len(result) <= 6:  # if there are not enough columns, add the leave type and leave time columns
    sql = 'ALTER table rings3 ADD COLUMN leave_type varchar'
    commit(sql)  # call homemade function to execute sql
    sql = 'ALTER table rings3 ADD COLUMN leave_time varchar'
    commit(sql)
I used PRAGMA to get the table information. It returns a multidimensional array full of information about columns - one array per column. I count the number of arrays to get the number of columns. If there are not enough columns, then I add the columns using the ALTER TABLE command.
All these answers are fine if you execute one statement at a time. However, the original question was to input a SQL script that would be executed by a single db execute, and all the solutions (like checking to see if the column is there ahead of time) would require the executing program either to have knowledge of what tables and columns are being altered/added, or to pre-process and parse the input script to determine this information. Typically you are not going to run this in real time, or often, so the idea of catching an exception and then moving on is acceptable. Therein lies the problem: how to move on. Luckily the error message gives us all the information we need to do this.

The idea is to execute the sql; if it throws an exception on an alter table call, we find the alter table line in the sql, keep only the remaining lines, and execute again, until it either succeeds or no more matching alter table lines can be found.

Here's some example code where we have sql scripts in an array. We iterate the array, executing each script. We call the routine twice so that the alter table commands fail the second time, but the program still succeeds because we remove the failing alter table command from the sql and re-execute the updated code.
#!/bin/sh
# the next line restarts using wish \
exec /opt/usr8.6.3/bin/tclsh8.6 "$0" ${1+"$@"}
foreach pkg {sqlite3 } {
if { [ catch {package require {*}$pkg } err ] != 0 } {
puts stderr "Unable to find package $pkg\n$err\n ... adjust your auto_path!";
}
}
array set sqlArray {
1 {
CREATE TABLE IF NOT EXISTS Notes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
note text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
);
CREATE TABLE IF NOT EXISTS Version (
id INTEGER PRIMARY KEY AUTOINCREMENT,
version text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
);
INSERT INTO Version(version) values('1.0');
}
2 {
CREATE TABLE IF NOT EXISTS Tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
tag text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
);
ALTER TABLE Notes ADD COLUMN dump text;
INSERT INTO Version(version) values('2.0');
}
3 {
ALTER TABLE Version ADD COLUMN sql text;
INSERT INTO Version(version) values('3.0');
}
}
# create db command , use in memory database for demonstration purposes
sqlite3 db :memory:
proc createSchema { sqlArray } {
upvar $sqlArray sql
# execute each sql script in order
foreach version [lsort -integer [array names sql ] ] {
set cmd $sql($version)
set ok 0
while { !$ok && [string length $cmd ] } {
try {
db eval $cmd
set ok 1 ; # it succeeded if we get here
} on error { err backtrace } {
if { [regexp {duplicate column name: ([a-zA-Z0-9_]+)} [string trim $err ] match columnname ] } {
puts "Error: $err ... trying again"
set cmd [removeAlterTable $cmd $columnname ]
} else {
throw DBERROR "$err\n$backtrace"
}
}
}
}
}
# return sqltext with alter table command with column name removed
# if no matching alter table line found or result is no lines then
# returns ""
proc removeAlterTable { sqltext columnname } {
set mode skip
set result [list]
foreach line [split $sqltext \n ] {
if { [string first "alter table" [string tolower [string trim $line] ] ] >= 0 } {
if { [string first $columnname $line ] >= 0 } {
set mode add
continue;
}
}
if { $mode eq "add" } {
lappend result $line
}
}
if { $mode eq "skip" } {
puts stderr "Unable to find matching alter table line"
return ""
} elseif { [llength $result ] } {
return [ join $result \n ]
} else {
return ""
}
}
proc printSchema { } {
db eval { select * from sqlite_master } x {
puts "Table: $x(tbl_name)"
puts "$x(sql)"
puts "-------------"
}
}
createSchema sqlArray
printSchema
# run again to see if we get alter table errors
createSchema sqlArray
printSchema
Expected output:
Table: Notes
CREATE TABLE Notes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
note text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
, dump text)
-------------
Table: sqlite_sequence
CREATE TABLE sqlite_sequence(name,seq)
-------------
Table: Version
CREATE TABLE Version (
id INTEGER PRIMARY KEY AUTOINCREMENT,
version text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
, sql text)
-------------
Table: Tags
CREATE TABLE Tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
tag text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
)
-------------
Error: duplicate column name: dump ... trying again
Error: duplicate column name: sql ... trying again
Table: Notes
CREATE TABLE Notes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
note text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
, dump text)
-------------
Table: sqlite_sequence
CREATE TABLE sqlite_sequence(name,seq)
-------------
Table: Version
CREATE TABLE Version (
id INTEGER PRIMARY KEY AUTOINCREMENT,
version text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
, sql text)
-------------
Table: Tags
CREATE TABLE Tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name text,
tag text,
createdDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) ) ,
updatedDate integer(4) DEFAULT ( cast( strftime('%s', 'now') as int ) )
)
-------------
select * from sqlite_master where type = 'table' and tbl_name = 'TableName' and sql like '%ColumnName%'
Logic: the sql column in sqlite_master contains the table definition, so it certainly contains the column name as a substring.
Since you are searching for a substring, this has obvious limitations. So I would suggest using an even more restrictive substring around ColumnName, for example something like this (subject to testing, as the backtick character is not always there):
select * from sqlite_master where type = 'table' and tbl_name = 'MyTable' and sql like '%`MyColumn` TEXT%'
I solved it with two queries. This is my Unity3D script using System.Data.SQLite:
IDbCommand command = dbConnection.CreateCommand();
command.CommandText = @"SELECT count(*) FROM pragma_table_info('Candidat') c WHERE c.name = 'BirthPlace'";
IDataReader reader = command.ExecuteReader();
while (reader.Read())
{
try
{
if (int.TryParse(reader[0].ToString(), out int result))
{
if (result == 0)
{
command = dbConnection.CreateCommand();
command.CommandText = @"ALTER TABLE Candidat ADD COLUMN BirthPlace VARCHAR";
command.ExecuteNonQuery();
command.Dispose();
}
}
}
catch { throw; }
}
Apparently... in SQLite... the "alter table" statement does not generate exceptions if the column already exists.
I found this post in the support forum and tested it.

Why does Relation.size sometimes return a Hash in Rails 4

I can run a query in two different ways to return a Relation.
When I interrogate the size of the Relation, one query gives a Fixnum as expected; the other gives a Hash that maps each value of the Relation's GROUP BY column to its number of occurrences.
In Rails 3 I assume it always returned a Fixnum, as I never had a problem, whereas with Rails 4 it sometimes returns a Hash, and a statement like Rel.size.zero? gives the error:
undefined method `zero?' for {}:Hash
Am I best just using the .blank? method to check for zero records to be sure of avoiding unexpected errors?
Here is a snippet of code with logging statements for the two queries, followed by the resulting log.
CODE:
assessment_responses1=AssessmentResponse.select("process").where("client_id=? and final = ?",self.id,false).group("process")
logger.info("-----------------------------------------------------------")
logger.info("assessment_responses1.class = #{assessment_responses1.class}")
logger.info("assessment_responses1.size.class = #{assessment_responses1.size.class}")
logger.info("assessment_responses1.size value = #{assessment_responses1.size}")
logger.info("............................................................")
assessment_responses2=AssessmentResponse.select("distinct process").where("client_id=? and final = ?",self.id,false)
logger.info("assessment_responses2.class = #{assessment_responses2.class}")
logger.info("assessment_responses2.size.class = #{assessment_responses2.size.class}")
logger.info("assessment_responses2.size values = #{assessment_responses2.size}")
logger.info("-----------------------------------------------------------")
LOG
-----------------------------------------------------------
assessment_responses1.class = ActiveRecord::Relation::ActiveRecord_Relation_AssessmentResponse
(0.5ms) SELECT COUNT(`assessment_responses`.`process`) AS count_process, process AS process FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0) GROUP BY process
assessment_responses1.size.class = Hash
CACHE (0.0ms) SELECT COUNT(`assessment_responses`.`process`) AS count_process, process AS process FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0) GROUP BY process
assessment_responses1.size value = {"6 Month Review(1)"=>3, "Assessment(1)"=>28, "Assessment(2)"=>28}
............................................................
assessment_responses2.class = ActiveRecord::Relation::ActiveRecord_Relation_AssessmentResponse
(0.5ms) SELECT COUNT(distinct process) FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0)
assessment_responses2.size.class = Fixnum
CACHE (0.0ms) SELECT COUNT(distinct process) FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0)
assessment_responses2.size values = 3
-----------------------------------------------------------
size on an ActiveRecord::Relation object translates to count, because the former tries to get the count of the Relation. But when you call count on a grouped Relation object, you receive a hash.
The keys of this hash are the grouped column's values; the values of this hash are the respective counts.
AssessmentResponse.group(:client_id).count # this will return a Hash
AssessmentResponse.group(:client_id).size # this will also return a Hash
This is true for the following methods: count, sum, average, maximum, and minimum.
If you want to check whether any rows are present, simply use exists?, i.e. do the following:
AssessmentResponse.group(:client_id).exists?
Instead of this:
AssessmentResponse.group(:client_id).count.zero?