Expression.Error: The key didn't match any rows in the table - Power BI

I am trying to get today's date and format it as yyMMdd, because my table name changes daily, e.g. MICRINFO210616 today and MICRINFO210617 tomorrow.
When I run the code below I get the following error:
Expression.Error: The key didn't match any rows in the table.
Key=
Schema=dbo
Item=MICRINFO210617
Table=[Table]
Code:
let
    Source = Sql.Database("TEST", "TEST"),
    formattedDate = Date.ToText(DateTime.Date(DateTime.LocalNow()), "yyMMdd"),
    combine = "MICRINFO" & formattedDate,
    dbo_MICRINFO210616 = Source{[Schema="dbo", Item=combine]}[Data]
in
    dbo_MICRINFO210616

The error means the Schema/Item lookup found no table called MICRINFO210617, so first confirm that the new day's table actually exists in the database when the query runs. Also make sure the account you're using has at least read permission on the new table, and check that the structure of both tables is the same (same number of columns, same data types).
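If the failure happens because the day's table has not been created yet, one option is to fall back to the previous day's table with try ... otherwise. A minimal sketch, assuming a fall-back-to-yesterday rule fits your load schedule (the TableNameFor helper is hypothetical):
let
    Source = Sql.Database("TEST", "TEST"),
    Today = DateTime.Date(DateTime.LocalNow()),
    // Hypothetical helper: builds the daily table name for a given date
    TableNameFor = (d as date) => "MICRINFO" & Date.ToText(d, "yyMMdd"),
    // Try today's table first; if it does not exist yet, fall back to yesterday's
    Data = try Source{[Schema="dbo", Item=TableNameFor(Today)]}[Data]
           otherwise Source{[Schema="dbo", Item=TableNameFor(Date.AddDays(Today, -1))]}[Data]
in
    Data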


Column does not exist AWS Timestream Query error

I am trying to apply a WHERE clause on a dimension of my AWS Timestream records. However, I get the error: Column does not exist.
Here is my table schema and its measures (screenshots omitted).
First, here is the sample data I put in the table:
SELECT username, time, manual_usage
FROM "meter-reading"."meter-metrics"
ORDER BY time DESC
LIMIT 4
The result: (screenshot of the four returned rows omitted)
What I want to do is filter the records by a dimension ("username" specifically):
SELECT *
FROM "meter-reading"."meter-metrics"
WHERE username = "OnceADay"
ORDER BY time DESC LIMIT 10
Then I got the error: Column 'OnceADay' does not exist
I searched for quotas on dimension names and checked my schema for errors:
https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.naming
https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.system_identifier
but I didn't find that my dimension name "username" violates any of those rules.
I also checked other example queries in an AWS blog post, where the author uses WHERE clauses on dimensions without issue:
https://aws.amazon.com/blogs/database/effective-queries-for-common-query-patterns-in-amazon-timestream/
I figured it out after trying the sample code. It turns out it was a silly mistake: I was using double quotation marks (") instead of single quotation marks ('). In Timestream's SQL, double quotes delimit identifiers such as table and column names, while single quotes delimit string literals, so "OnceADay" was parsed as a column name. Using single quotes solved my problem:
SELECT *
FROM "meter-reading"."meter-metrics"
WHERE username = 'OnceADay'
ORDER BY time DESC LIMIT 10
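For contrast, the same query with the identifier also quoted explicitly (double quotes on the column name, single quotes on the value; names as in the question):
-- "username" is an identifier (a column); 'OnceADay' is a string literal
SELECT *
FROM "meter-reading"."meter-metrics"
WHERE "username" = 'OnceADay'
ORDER BY time DESC LIMIT 10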

Can I dynamically derive DAX filters from "dissecting" a criteria field in my database?

I have a database table where each record has a criteria attribute. This attribute can hold anywhere between 1 and n criteria that I'd like to apply as filters on a different table.
This looks something like:
message.status:::eq:::submitted;;;message.count:::ge:::5
but could also be only
message.count:::ge:::5
What I'd like to do in DAX is take that string and translate it into dynamic filter expressions. So I need to split the string on ;;;, and then dissect each section into the target (e.g. message[count]), the operator (e.g. ge --> >=) and the value (e.g. 5).
So in the end, the following DAX snippet should be added to my CALCULATE one or more times:
example measure = CALCULATE(
    COUNTROWS('message'),
    FILTER(
        ALL('message'),
        -- line below should be dynamically injected
        message[count] >= 5
    )
)
I'm struggling with how to create a loop (is this even possible in Power BI?), and even for a single string, how to filter based on it.
Thanks
You can try to build a new table for the split message parts:
Table 2 =
var _splitby = ";;;"
var _string = SELECTCOLUMNS(ADDCOLUMNS(VALUES('Table'[message]),"pathx", SUBSTITUTE([message],_splitby,"|")),"pathx",[pathx])
var _generate = GENERATE(_string, GENERATESERIES(1, PATHLENGTH([pathx])))
var _GetVal = SELECTCOLUMNS(_generate, "Msg", PATHITEM([pathx], [Value]))
return
_GetVal
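Each resulting Msg value still holds a target:::operator:::value triple, so the same PATH trick can split it further. A sketch building on the Table 2 above (the column names target/operator/value are my own):
Table 3 =
var _parts = SELECTCOLUMNS('Table 2', "p", SUBSTITUTE([Msg], ":::", "|"))
return
    SELECTCOLUMNS(
        _parts,
        "target", PATHITEM([p], 1),
        "operator", PATHITEM([p], 2),
        "value", PATHITEM([p], 3)
    )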
If your string always ends with a message.count:::ge::: segment, you can follow these steps in Power Query (a sketch of the steps follows the sample output below):
Step 1: Duplicate your message column.
Step 2: Split the duplicated column on the delimiter message.count:::ge:::; this leaves a new column holding the trailing numeric value from your original text.
Step 3: Apply a filter on that new column.
Sample output: (screenshot omitted)
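A rough Power Query version of those steps (a sketch; the source table name, new column names, and the >= 5 threshold are assumptions for illustration):
let
    Source = #"criteria",    // hypothetical source table with a [message] column
    // Step 1: duplicate the message column
    Duplicated = Table.DuplicateColumn(Source, "message", "message - Copy"),
    // Step 2: split on the fixed delimiter; the second part is the numeric value
    Split = Table.SplitColumn(Duplicated, "message - Copy",
        Splitter.SplitTextByDelimiter("message.count:::ge:::"), {"head", "threshold"}),
    Typed = Table.TransformColumnTypes(Split, {{"threshold", Int64.Type}}),
    // Step 3: filter on the new numeric column
    Filtered = Table.SelectRows(Typed, each [threshold] >= 5)
in
    Filtered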

Filter a date by current date in PowerQuery / PowerBI

I'm creating a report in Power BI and need to filter some erroneous records out of my source. It's a payment table, and some records have a future date, e.g. in 2799. I'd like to add a filter that removes records dated after today + 1 year. I already had this filter:
= Table.SelectRows(_cobranca, each [Vencimento] >= DATA_LIMITE)
DATA_LIMITE is a parameter, and the code above already works. I tried to change it to:
= Table.SelectRows(_cobranca, each [Vencimento] >= DATA_LIMITE and [Vencimento] <= DateTime.LocalNow())
But I'm getting this error:
DataFormat.Error: Syntax error in date in query expression '[_].[Vencimento] >= #2020-01-01 00:00:00# and [_].[Vencimento] <= #2020-10-21 10:58:07.4411693'.
It seems that the DateTime.LocalNow function is not returning the date in the correct format.
Replace DateTime.LocalNow() with Date.From(DateTime.LocalNow()), as I assume your [Vencimento] column (due date?) is a date data type, not a datetime, and comparing a date column against a datetime value produces this query error.
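Applied to the filter in the question, that would look like this (a sketch; extending the cutoff to today + 1 year with Date.AddYears is my reading of the stated requirement):
= Table.SelectRows(_cobranca, each [Vencimento] >= DATA_LIMITE
    and [Vencimento] <= Date.AddYears(Date.From(DateTime.LocalNow()), 1))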

How to Compose Query in BigQuery with Destination Table?

I am trying to query a BigQuery table and load the results into a destination table using legacy SQL.
Code:
bigquery_client = bigquery.Client.from_service_account_json(config.ZF_FILE)
job_config = bigquery.QueryJobConfig()
job_config.use_legacy_sql = True
# Allow for query results larger than the maximum response size
job_config.allow_large_results = True
# When large results are allowed, a destination table must be set.
dest_dataset_ref = bigquery_client.dataset('datasetId')
dest_table_ref = dest_dataset_ref.table('datasetId:mydestTable')
job_config.destination = dest_table_ref
query =""" SELECT abc FROM [{0}] LIMIT 10 """.format(mySourcetable_name)
# run the Query here now
query_job = bigquery_client.query(query, job_config=job_config)
Error:
google.api_core.exceptions.BadRequest: 400 POST : Invalid dataset ID "datasetId:mydestTable". Dataset IDs must be alphanumeric (plus underscores, dashes, and colons) and must be at most 1024 characters long.
Printing job_config.destination gives:
print job_config.destination
TableReference(u'projectName', 'projectName:dataset', 'projectName:dataset.mydest_table')
The datasetId is correct on my side, so why the error? How do I get the proper destination table reference?
This may be helpful to someone in the future.
It worked by passing just the bare names instead of the fully qualified IDs of the dataset and table, as below:
dest_dataset_ref = bigquery_client.dataset('dataset_name')
dest_table_ref = dest_dataset_ref.table('mydestTable_name')
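Putting the fix back into the original snippet for completeness (a sketch; config.ZF_FILE, the dataset/table names, and mySourcetable_name all come from the question and are assumed to be defined):
from google.cloud import bigquery

bigquery_client = bigquery.Client.from_service_account_json(config.ZF_FILE)

job_config = bigquery.QueryJobConfig()
job_config.use_legacy_sql = True
job_config.allow_large_results = True

# Pass bare names only; the client qualifies them with the project ID itself
dest_dataset_ref = bigquery_client.dataset('dataset_name')
dest_table_ref = dest_dataset_ref.table('mydestTable_name')
job_config.destination = dest_table_ref

query = """ SELECT abc FROM [{0}] LIMIT 10 """.format(mySourcetable_name)
query_job = bigquery_client.query(query, job_config=job_config)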

Kettle database lookup case insensitive

I have a table "City" with more than 100k records.
The field "name" contains strings like "Roma", "La Valletta".
I receive a file with the city name, all in upper case as in "ROMA".
I need to get the id of the record that contains "Roma" when I search for "ROMA".
In SQL, I must do something like:
select id from city where upper(name) = upper(%name%)
How can I do this in Kettle?
Note: if the city is not found, I use an Insert / Update step to create it, so I must avoid duplicates generated by case-sensitive names.
You can make use of the String Operations step in Pentaho Kettle: set the Lower/Upper option to upper.
Pass the city name from the City table through the String Operations step, which will uppercase the city names in your data stream, then join/lookup against the (already upper-case) received file to get the required id.
More on the String Operations step in the Pentaho wiki.
You can use a 'Database join' step, where you can write the SQL:
select id from city where upper(name) = upper(?)
and specify the city field name from the text file as the parameter. With 'Number of rows to return' and 'Outer join?' you can control the join behaviour.
This solution doesn't scale well to a large number of rows, as it executes one query per row; in those cases Rishu's solution is better.
This is how I did it:
First, a "Modified JavaScript Value" step to create the query:
var queryDest = "select coalesce( (select id as idcity from city where upper(name) = upper('" + replace(mycity, "'", "''") + "') and upper(cap) = upper('" + mycap + "') ), 0) as idcitydest";
Then I use this string as the query in a Dynamic SQL row step.
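For a row with city 'Roma' and a hypothetical CAP of '00100', the JavaScript above generates a query like:
select coalesce(
    (select id as idcity
     from city
     where upper(name) = upper('Roma')
       and upper(cap) = upper('00100')),
    0) as idcitydest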
After that:
if idcitydest == 0 then
    insert the new city;
else
    use the found record.
This approach runs one query per input row, but it uses very little cache memory.