So I'm pulling rows from a database via a query to land that data in a data lake. All of that works great, and the preview comes out wonderfully on the Source tab. On the Mapping tab, however, whenever I hit "Import Data", or even enter the "StartDate" column manually into a blank mapping (it's a datetime in the source DB), the sink type for StartDate changes to int96. Then, once I pull this data into Power BI, I obviously have to do a bunch of weird massaging to get the int96 back to a datetime. It's ridiculous.
Does anybody know why this is happening or what I can do to map the sink column as a datetime? I can't seem to change the type anywhere.
Parquet stores timestamps internally as integers (int96 is the legacy encoding), but clients, including Power BI, should automatically convert them back to datetimes. E.g. this works fine for me, with a Parquet file created as you describe:
let
    // Point at the Parquet file in ADLS Gen2 (URLs redacted)
    Source = AzureStorage.DataLake("https://xxxx.dfs.core.windows.net/datalake/stage/xxx.parquet"),
    // Navigate to the file and take its binary contents
    f = Source{[#"Folder Path" = "https://xxxx.dfs.core.windows.net/datalake/stage/", Name = "xxx.parquet"]}[Content],
    // Parquet.Document decodes the binary into a table; int96 timestamps come back as datetimes
    #"Imported Parquet" = Parquet.Document(f)
in
    #"Imported Parquet"
Related
I have been trying to find an answer to my problem but to no avail, so I'm hoping someone can provide some ideas or advice on whether this is possible and, if so, how to go about it. I've tried various things and none have worked.
We create views in SQL and then connect to them using 'Import' as the Data Connectivity mode when connecting to the SQL Server from within Power BI. Within the views, we use tables that contain a 'Valid From Date' and a 'Valid To Date' for each row of data, so when a change occurs the old row is closed off and a new row is created. This is done so we can limit the rows of data within the table.
When creating a report in Power BI, we need the end user to be able to pick a date from a date drop-down list, and the data throughout the whole report should then show any rows where the selected date falls on or between the Valid From and Valid To dates (the selection rule is sketched below). We use cards, tables, matrices and charts within our reports, so all of them would need to reflect the date selected by the user.
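To make the requirement concrete, here is the row-selection rule expressed in Python/pandas, purely as an illustration (the real implementation would be DAX or M; the column names mirror the view's):

import pandas as pd

def rows_active_on(df: pd.DataFrame, selected_date: pd.Timestamp) -> pd.DataFrame:
    # Keep rows whose validity window contains the selected date;
    # assumes both columns are already typed as datetimes
    return df[(df["Valid From Date"] <= selected_date) & (df["Valid To Date"] >= selected_date)]

# e.g. the user picks 2023-06-15 from the slicer:
# filtered = rows_active_on(df, pd.Timestamp("2023-06-15"))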
I have tried various methods I could think of to get this to work, but each has had limitations: it either doesn't work or only partly works.
Any help / advice on this would be really appreciated.
Many Thanks
Jon
I am rather new to Power BI because I have mostly used other tools, but I need to use it for a customer. I have a problem that I haven't been able to solve.
I have created two CSV files using Python and I save them to a specific location. I update these files every day and have never changed anything about their structure since they were first created. The files are ";"-separated.
Now, I created a dashboard in Power BI by first importing these files and connecting them using a common key. All the visuals work just fine. The next day, I wanted to refresh the data, since the CSV files on my disk had been updated by my Python code.
The problem is that Power BI will not update the tables. Instead I get the error "Det gick inte att hitta kolumnen <namn på kolumnen> i tabellen", which in English is "The column <column name> cannot be found in the table". The column is the first column in the Power BI table (the application ordered the columns alphabetically).
I have tried every possible thing, from clearing the cache to changing the order of the columns. Whatever I try fails.
The exact errors are "Cannot find column BKVehicelID in the table" and, for the second query, "Update was blocked because of errors in other queries". I can, however, update the second column by itself.
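For reference, here is roughly how such files could be written with a stable schema (a simplified pandas sketch; the second column name is a placeholder, and the explicit column list is the part that keeps the header identical on every run, which the Power BI import query depends on):

import pandas as pd

# Placeholder data; the real frame comes from the daily Python job
df = pd.DataFrame({"BKVehicelID": [1, 2], "SomeMeasure": [10.0, 20.0]})

# An explicit, fixed column list keeps the header text and column order
# the same on every run, so the import query's column references stay valid
df.to_csv("table1.csv", sep=";", index=False, columns=["BKVehicelID", "SomeMeasure"])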
I would really appreciate any help on this.
I'm trying to get some records into BigQuery, but I've gotten stuck on a date error. I've tried formatting the date the way BQ wants, but that hasn't seemed to help. Here is a (fake, obviously) record I am trying to work with.
{"ID":"1","lastName":"Doe","firstName":"John","middleName":"C","DOB":"1901-01-01 00:00:00","gender":"Male","MRN":"1","diagnosis":["something"],"phone":"888-555-5555","fax":"888-555-5555","email":"j#doe.org"}
And here is the error I get when I try to upload the file:
Provided Schema does not match Table x:y.z. Field dob has changed type from DATETIME to TIMESTAMP
I'm just not sure what's different about my format that BQ is unhappy about. The date is formatted properly and so is the time (I even tried 00:00:00.0), but I just can't seem to get this data into the table. I also haven't specified any time zone, which makes it even odder that it thinks I'm supplying a TIMESTAMP.
Any help would be appreciated
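In case it helps anyone hitting the same error: one way to stop the loader from inferring TIMESTAMP is to pass an explicit schema with the google-cloud-bigquery Python client instead of relying on schema auto-detection. A minimal sketch, where the file, table and non-DOB field names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    # Explicit schema: DOB stays DATETIME instead of being inferred as TIMESTAMP
    schema=[
        bigquery.SchemaField("ID", "STRING"),
        bigquery.SchemaField("DOB", "DATETIME"),
        # ... remaining fields ...
    ],
)

with open("records.json", "rb") as f:
    load_job = client.load_table_from_file(f, "my-project.my_dataset.patients", job_config=job_config)
load_job.result()  # Block until the load job finishes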
I'm getting this error message:
Datetime column not provided as part table configuration and is required by this type of chart
I'd like to create a chart showing change in total volume over time. I have a date field, created as a column, that I'd like to use as a datetime, but I cannot figure out how.
Data is an uploaded .csv file.
How do you change a column to datetime in table configuration?
You should be fine casting that column to a timestamp instead.
Superset complains about a missing datetime column because it's written in Python, but assuming you're querying a Postgres DB, the equivalent type would be timestamp.
If your data source is based on a query, you should be able to cast the date column to timestamp (use ::timestamp or to_timestamp()) and use that as your "datetime" column; a sketch follows.
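For example, a hypothetical virtual-dataset query (Postgres; table and column names are made up), written here as a Python string constant:

# The ::timestamp cast produces a column Superset can use as its datetime column
VIRTUAL_DATASET_SQL = """
SELECT
    id,
    event_date::timestamp AS event_ts,  -- or: to_timestamp(event_date, 'YYYY/MM/DD')
    total_volume
FROM daily_volumes
"""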
When uploading the CSV file, you should set up the names of the columns that contain dates.
First, check whether you have a date or time series in your data. Once you're sure of that: if the date format is just a year, the DATETIME format string is '%Y'. If the format is yyyy/mm/dd, the DATETIME format string is "%Y/%m/%d"; for the rest of the formats, check the documentation.
All of these formats are edited in the Edit Dataset window in Superset.
Go with the documentation; it helps a lot.
These can also be defined in the 'CSV to Database' configuration before uploading.
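A quick way to sanity-check one of these format strings locally, before uploading: they are standard Python strptime directives, so parsing a sample value with the same string should succeed.

from datetime import datetime

# The same directives Superset expects in its date-format fields
datetime.strptime("2023", "%Y")              # year-only values
datetime.strptime("2023/06/15", "%Y/%m/%d")  # yyyy/mm/dd values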
In Power BI, I've got some query tables generated from imported data. All the data comes in as type 'Any', and I'm trying to automatically detect the type of the data in each column.
Some of the queries generate tables with columns based on the incoming data; I don't know what the columns are going to be until the query runs and sets up the table (the data comes from an Azure blob). As I will have quite a few tables to maintain, whose columns can change (possibly with new columns being added) on any data refresh, it would be unmanageable to go through all of them each time and press 'Detect Data Type' on the columns.
So I'm trying to figure out how to do a 'Detect Data Type' in the query formula language that I can attach to the end of the query that generates the table columns. I've tried grabbing the first entry in a column and doing Value.Type(column{0}), but this comes out as 'Text' for a column that holds integers. Pressing 'Detect Data Type' does, however, correctly identify the type as 'Whole Number'.
Does anyone know how to detect a column's entry types?
P.S. I'm not too worried about a column possibly holding values of different data types
You seem to have multiple issues here, and your solution will be fragile; there's a better way. But let's deal with column type detection first.
Power Query uses the 'any' data type as its go-to data type. You can write a function that samples the rows of a column in a table, does best-match data type detection, and then explicitly sets the data type of the column (the sampling idea is sketched below). This is messy and tricky, since you need to do it once per column. It might be workable for a fixed schema, but with a dynamic schema you'll run into a couple of things very quickly. First, you'll need to write some crazy PQ code to list all the columns and run your function on each. This will work the first time, but it might break on subsequent refreshes, because data model changes are not allowed during refresh. If you're using a tool like Power BI Desktop, you'll be able to fix things up; if you publish your report to the Power BI service, you'll just see refresh errors.
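Power Query specifics aside, the sample-and-detect idea described above looks roughly like this, illustrated in Python/pandas rather than M purely to show the shape of the approach:

import pandas as pd

def detect_column_types(df: pd.DataFrame, sample_size: int = 100) -> dict:
    """Best-match type detection from a sample of each column's rows."""
    detected = {}
    for col in df.columns:
        sample = df[col].dropna().head(sample_size)
        # infer_dtype inspects the values and returns e.g. 'integer' or 'string'
        detected[col] = pd.api.types.infer_dtype(sample)
    return detected

# A column of digit strings is detected as 'string', mirroring the 'Text'
# result Value.Type gives you; real detection would have to try conversions too
frame = pd.DataFrame({"a": ["1", "2"], "b": [1, 2]})
print(detect_column_types(frame))  # {'a': 'string', 'b': 'integer'}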
Dynamic schemas will suffer from the same data-model-change issue mentioned above.
The alternative that avoids these problems is using a DirectQuery data source instead of Power Query. If you load your data into Azure SQL or a Tabular Model, the reporting layer will pick up the updated fields automatically, so you don't have to work around PQ.