How to read columns with spaces from ColdFusion queries?

I am reading data from a spreadsheet. One of the column names in the spreadsheet contains spaces.
For example, the column names are [first name, last name, roll].
I get a qryObj after reading the spreadsheet.
Now when I try to read first name from the query:
<cfquery dbtype="query" name="getName">
SELECT [first name]
FROM qryObj
</cfquery>
It throws a database error. I have tried ['first name'] as well, but it still throws an error.
The error is:
Query Of Queries syntax error.
Encountered "[. Incorrect Select List, Incorrect select column

I did crazy stuff like googling to see what people had done in other situations, and tried various SQL approaches to escaping non-standard column names (backticks, square brackets, double quotes, combos thereof), and drew a blank. So I agree with #da_didi that QoQ/IMQ does not cater for this. You should raise a ticket in the Adobe bug tracker.
You could do SELECT *, which removes the need to reference the column name. Or you could serialize the query, use a string replace to rename the column, deserialize it again, then QoQ on the revised name, as sketched below. I'd only do this with a small amount of data, though.
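A minimal sketch of that serialize/rename/deserialize idea, assuming the column is renamed to first_name (note that serializeJSON may upper-case column names, hence the case-insensitive replace):
<cfset json = serializeJSON(qryObj)>
<!--- rename the offending column in the JSON representation --->
<cfset json = replaceNoCase(json, '"first name"', '"first_name"', "all")>
<!--- passing false tells deserializeJSON to rebuild query-shaped JSON as a query object --->
<cfset qryFixed = deserializeJSON(json, false)>
<cfquery dbtype="query" name="getName">
    SELECT first_name FROM qryFixed
</cfquery>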
Or you could push back on the owner of the XLS file and say "no can do unless you revise your column names".
You could also perhaps suppress the column names as they stand in the XLS file using excludeHeaderRow, and then specify your own column names, as below. How did I find out one could do that? By RTFMing the <cfspreadsheet> docs.
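Something along these lines (a sketch; the file path and replacement column names are assumptions):
<cfspreadsheet action="read"
               src="#expandPath('data.xls')#"
               query="qryObj"
               headerrow="1"
               excludeHeaderRow="true"
               columnnames="first_name,last_name,roll">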

That's easy:
Query:
SELECT [FIRST NAME]
In the output loop of the query:
["FIRST NAME"]

Try this: setting a variable works for me.
<cfset first_name = spreadsheetData['first name'][CurrentRow]>

You cannot. Best practice: I always replace all spaces with underscores.

Simple. Just alias the select:
SELECT [FIRST NAME] AS FIRSTNAME FROM qryObj

Related

Converting JSON into Table (PowerQuery)

What would be a correct PowerQuery syntax to extract the information from this Web JSON into a table:
I'm not very familiar with PowerQuery, and this is probably the only time I'll need this, so I'd be grateful if someone would help me out without referring me to documentation. Thanks
[{"time_entry_group": {"minutes": 301,"time_entries_params": {"locked": "0","from": "2021-02-01","to": "2021-02-28","customer_id": "11223344","project_id": "223388","service_id": "435248"},"revenue": 57691.6666666667,"project_id": 223388,"project_name": "Scrb","service_id": 435248,"service_name": "Meetings","month": "202102"}}
, {"time_entry_group": {"minutes": 1175,"time_entries_params": {"locked": "1","from": "2021-01-01","to": "2021-01-31","customer_id": "11223344","project_id": "223388","service_id": "421393"},"revenue": 225208.333333333,"project_id": 223388,"project_name": "Scrb","service_id": 421393,"service_name": "Design","month": "202101"}}
, {"time_entry_group": {"minutes": 24,"time_entries_params": {"locked": "1","from": "2021-01-01","to": "2021-01-31","customer_id": "11223344","project_id": "3168911","service_id": "95033"},"revenue": 4600.0,"project_id": 3168911,"project_name": "youkn Dev","service_id": 95033,"service_name": "Reviews","month": "202101"}}]
For future reference, if you have a column that you need to expand, you can instead click the arrow icon to the right of the column name. Clicking it should display a menu that allows you to specify which nested columns you want to expand or get at. To be clear, it will expand that column for all rows in that table, not just one.
The JSON you've included is basically an array of objects, so maybe use:
Json.Document to parse the JSON, which should give you a list of records
Table.FromRecords to turn the list of records into a table.
Table.ExpandRecordColumn to expand nested record columns.
Example implementation:
let
    json = "[{""time_entry_group"":{""minutes"":301,""time_entries_params"":{""locked"":""0"",""from"":""2021-02-01"",""to"":""2021-02-28"",""customer_id"":""11223344"",""project_id"":""223388"",""service_id"":""435248""},""revenue"":57691.6666666667,""project_id"":223388,""project_name"":""Scrb"",""service_id"":435248,""service_name"":""Meetings"",""month"":""202102""}},{""time_entry_group"":{""minutes"":1175,""time_entries_params"":{""locked"":""1"",""from"":""2021-01-01"",""to"":""2021-01-31"",""customer_id"":""11223344"",""project_id"":""223388"",""service_id"":""421393""},""revenue"":225208.333333333,""project_id"":223388,""project_name"":""Scrb"",""service_id"":421393,""service_name"":""Design"",""month"":""202101""}},{""time_entry_group"":{""minutes"":24,""time_entries_params"":{""locked"":""1"",""from"":""2021-01-01"",""to"":""2021-01-31"",""customer_id"":""11223344"",""project_id"":""3168911"",""service_id"":""95033""},""revenue"":4600,""project_id"":3168911,""project_name"":""youkn Dev"",""service_id"":95033,""service_name"":""Reviews"",""month"":""202101""}}]",
    parsed = Json.Document(json),
    initialTable = Table.FromRecords(List.Transform(parsed, each [time_entry_group])),
    expanded = Table.ExpandRecordColumn(initialTable, "time_entries_params", {"locked", "from", "to", "customer_id"})
in
    expanded
One thing about the code above is that it doesn't expand nested fields project_id and service_id (present within time_entries_params). This is because these columns already exist in the table (and having duplicate column names would cause an error). I've assumed this isn't a problem, as the nested values aren't different.

Split a column of lists into multiple columns in PowerBI

I have imported a JSON file into PowerBI and it contains a column in which the values are of type "List". I am looking to expand that column into multiple columns.
Specifically, the data contains a Sprint Name, the start date and the end date of the sprint, along with some other values associated with each sprint.
Trying to use "Expand to new rows" duplicates each sprint instance, creating a table that looks like this, duplicating each sprint instance multiple times for each associated value:
Sprint Name Value
JAN(S1Dev) 2019-01-01
JAN(S1Dev) 2019-01-13
JAN(S1Dev) {attribute}
JAN(S1Dev) {attribute}
JAN(S2Dev) 2019-01-14
JAN(S2Dev) 2019-01-31
JAN(S2Dev) {attribute}
JAN(S2Dev) {attribute}
FEB(S1Test) 2019-02-01
FEB(S1Test) 2019-02-15
... ...
I would like to do something similar to the "expand" feature, which instead creates a new column with each attribute rather than a new row. This is currently vastly increasing the size of my table for no reason, while also making the data practically un-useable. Any help would be appreciated, cheers!
I have found a very simple solution to this, but since it took me some time to figure out, I will answer my own question instead of deleting it, to help others in the future...
Upon importing the JSON data into PowerBI first select "Convert to Table" to view the data as a table with editable properties.
Next, click the arrows pointing away from each other at the top of the column of Lists, and select "Extract Values".
Select a delimiter to use for concatenating values. I am choosing a comma, since I know the data contained in the lists does not have any commas in it. If your data contains the delimiter you had in mind, choose something else.
It should now display a comma-separated list where it previously displayed "List" in orange text.
Now, right-click on the column and select "Split Column" then choose "By Delimiter"
Select the delimiter that you previously chose, and under "split at" select "Each occurrence of the delimiter" then click OK.
Your column should now be split into multiple columns based on the list!
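For reference, roughly the same steps expressed in M (a sketch; the sample table contents and new column names are made up for illustration):
let
    // sample table standing in for the imported JSON: a list-typed column "Sprint"
    Source = #table({"Sprint Name", "Sprint"},
                    {{"JAN(S1Dev)", {"2019-01-01", "2019-01-13"}}}),
    // "Extract Values": concatenate each list with a comma
    Extracted = Table.TransformColumns(Source,
        {"Sprint", each Text.Combine(List.Transform(_, Text.From), ","), type text}),
    // "Split Column > By Delimiter > Each occurrence of the delimiter"
    Split = Table.SplitColumn(Extracted, "Sprint",
        Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv),
        {"Sprint Start", "Sprint End"})
in
    Split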

Python + MySQL DB dynamic insert query based on number of columns to insert

I'm pretty novice at programming (I recently learned functions), and have found myself re-writing the same "insert into mysql table" function (below) from script to script... mainly just to modify these two sections: (name,insert_ts) and VALUES (%s, %s).
Is there a good way to re-write the below to accept ANY number of values, based on the length of a tuple that contains the values, as well as inserting the column headers based on the 'labels' list? That is, the VALUES (%s, %s) part and the (name,insert_ts) part.
list_of_tuples = []  # list of records to be inserted

# take a list of dictionaries and create a list of tuples in the proper format/order
for dict1 in output:
    one_list = []
    one_list.extend((dict1['name'], dict1['insert_ts']))
    list_of_tuples.append(tuple(one_list))

labels = ['name', 'insert_ts']

# db_write accepts the table name as str, labels as a list of str, and output as a list of tuples
def db_write(table, labels, output):
    local_cursor.executemany(""" INSERT INTO my_table
                                 (name, insert_ts)  # this is pulled from 'labels'
                                 VALUES (%s, %s)    # number of %s comes from len(labels)
                             """, list_of_tuples)
    local_db.commit()
    local_db.close()
    # print 'done posting!'
Or, is there a better way to accomplish what I'm trying to do, using mysqldb?
Thank you all in advance!
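For what it's worth, here is a minimal sketch of the generic helper being asked about, built on MySQLdb-style %s placeholders. The table and column names are interpolated into the SQL string, so they must come from trusted code, never from user input:
def db_write(conn, table, labels, rows):
    # build "col1, col2, ..." and "%s, %s, ..." from the labels list
    cols = ", ".join(labels)
    placeholders = ", ".join(["%s"] * len(labels))
    sql = "INSERT INTO {} ({}) VALUES ({})".format(table, cols, placeholders)
    cursor = conn.cursor()
    cursor.executemany(sql, rows)  # rows is a list of tuples, one per record
    conn.commit()

# usage, matching the example above:
# db_write(local_db, "my_table", ['name', 'insert_ts'], list_of_tuples)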
After a bit of experience (3 months, heh), I wanted to update everyone on the solution that seems to work pretty well!
Instead of using mysqldb, I spent some time learning how to use the SQLAlchemy Python package, and would recommend everyone do the same!
SQLAlchemy allows you to:
1) Define a table within Python code (I used Excel to come up with column names, etc.).
2) Most importantly! You can pass a dictionary to SQLAlchemy, and as long as the dictionary's key names match the table's column names, everything will magically get posted to your SQL table. If you have 60 columns in your SQL table, but your dict has only two keys - BAM, SQLAlchemy will take care of everything, post just the two values, and leave the other values in MySQL as blanks. MAGIC!
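A sketch of that workflow (the connection string, table, and column names are all assumptions):
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, DateTime

engine = create_engine("mysql+mysqldb://user:password@localhost/mydb")  # hypothetical DSN
metadata = MetaData()

# 1) define the table in Python code
my_table = Table("my_table", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("name", String(100)),
                 Column("insert_ts", DateTime))
metadata.create_all(engine)

# 2) insert dicts whose keys match the column names;
#    columns missing from a dict are simply left NULL
with engine.begin() as conn:
    conn.execute(my_table.insert(), [{"name": "foo"}, {"name": "bar"}])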

How to tweak LISTAGG to support more than 4000 character in select query?

Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production.
I have a table in the below format.
Name Department
Johny Dep1
Jacky Dep2
Ramu Dep1
I need an output in the below format.
Dep1 - Johny,Ramu
Dep2 - Jacky
I have tried the 'LISTAGG' function, but there is a hard limit of 4000 characters. Since my db table is huge, this cannot be used in the app. The other option is to use the
SELECT CAST(COLLECT(Name)
But my framework allows me to execute only select queries and no PL/SQL scripts. Hence I don't find any way to create a type using the "CREATE TYPE" command, which is required for the COLLECT command.
Is there any alternate way to achieve the above result using a select query?
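For context, this is the kind of query that hits the limit (a sketch against the emp-style table used in the answers below):
SELECT department,
       LISTAGG(name, ',') WITHIN GROUP (ORDER BY name) AS names
FROM emp
GROUP BY department;
-- fails with ORA-01489: result of string concatenation is too long
-- once the aggregated string exceeds 4000 bytes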
You should add GetClobVal, and you also need to RTRIM the result, as it will have a trailing delimiter:
SELECT RTRIM(XMLAGG(XMLELEMENT(E, colname, ',').EXTRACT('//text()')
             ORDER BY colname).GetClobVal(), ',')
FROM tablename;
If you can't create types (you can't just use SQL*Plus to create one as a one-off?), but you're OK with COLLECT, then use a built-in array. There are several knocking around in the RDBMS. Run this query:
select owner, type_name, coll_type, elem_type_name, upper_bound, length
from all_coll_types
where elem_type_name = 'VARCHAR2';
e.g. on my db, I can use sys.DBMSOUTPUT_LINESARRAY which is a varray of considerable size.
select department,
cast(collect(name) as sys.DBMSOUTPUT_LINESARRAY)
from emp
group by department;
A derivative of #anuu_online's answer, but it handles unescaping the XML in the result:
dbms_xmlgen.convert(xmlagg(xmlelement(E, name||',')).extract('//text()').getclobval(),1)
For IBM DB2, casting the result to a VARCHAR(10000) will give more than 4000:
select column1, listagg(CAST(column2 AS VARCHAR(10000)), x'0A') AS "Concat column"...
I ended up with another approach using the XMLAGG function, which doesn't have the hard limit of 4000:
select department,
XMLAGG(XMLELEMENT(E,name||',')).EXTRACT('//text()')
from emp
group by department;
You can use:
SELECT department
, REGEXP_REPLACE(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',$')
FROM emp
GROUP BY department
It will return a CLOB that has no size limit, and it correctly handles XML entity escapes and separators.
Instead of REGEXP_REPLACE(..., ',$') you can use RTRIM(..., ','), which should be faster, but it will remove all separators from the end of the result (including commas that legitimately appear at the end of the last name, or multiple trailing separators when the last names are empty).
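For example, the RTRIM variant in full (a sketch reusing the emp table and the same XMLCAST-to-CLOB approach as the answer above):
SELECT department,
       RTRIM(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',') AS names
FROM emp
GROUP BY department;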

Is there a way to escape and use ColdFusion query reserved words as column names in a query of query?

I'm working with a query that has a column named "Date."
The original query returns okay from the database. You can output the original query, paginate the original query, get a ValueList of the Date column, etc.
Query of Query
<cfquery name="Query" dbtype="query">
    select
        [Query].[Date]
    from [Query]
</cfquery>
Response from ColdFusion
Query Of Queries syntax error. Encountered "Date. Incorrect Select List,
Typically, I use descriptive names so I haven't run across this issue previously.
In this case, I'm working with a stored procedure that someone else wrote. I ended up modifying the stored procedure to use a more descriptive column name.
I have a service I use for transforming, searching and sorting queries with ColdFusion. I'm curious to know the answer to my original question, so that I can modify my service to either throw a better error or handle reserved words.
Is there a way to escape and use ColdFusion query reserved words as column names in a query of query?
The following code works fine for me:
<cfset query = queryNew("date")>
<cfdump var="#query#">
<cfquery name="Query" dbtype="query">
    select
        [Query].[Date]
    from [Query]
</cfquery>
<cfdump var="#query#">
In standard MySQL you'd "escape" the fields by using the backtick (`) character.
So for example:
select `query`.`date` from `query`
Try that and see if it works?