I am new to Looker Studio. I want to create a report on a BigQuery table, but that BigQuery table doesn't have any column named FILE. Our Looker report table should still contain a FILE field, with null in it. How do I do that?
Add a calculated field to the data source with an expression that returns NULL.
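If editing the data source SQL is an option, another approach (a minimal sketch, assuming you can point the report at a view instead of the raw table; the view name below is hypothetical) is to add the missing column in BigQuery itself:

CREATE OR REPLACE VIEW `projectName.DatasetName.tableName_with_file` AS
SELECT *, CAST(NULL AS STRING) AS FILE  -- placeholder column the report expects
FROM `projectName.DatasetName.tableName`;

The CAST gives the column an explicit STRING type (a bare NULL would otherwise default to INT64), and the report then sees a FILE field that is always null.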
A CSV file in Google Cloud Storage has a date value in 'dd/mm/yyyy' format, which, when loaded into a BigQuery table, comes through in 'mm/dd/yyyy' format.
To work around this, I created a table with that field defined as 'string', but when I try to load the data from the file it says:
Provided Schema does not match Table <table name>. Field TRADE_DATE has changed type from STRING to DATE
How do I load Date as String from a CSV file into a BigQuery table?
Since you created the table beforehand and the date values are quoted, BigQuery shouldn't behave like this. Is there any chance your load job still has the --autodetect option set?
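For reference, a minimal load with an explicit schema and no auto-detection, written as a BigQuery LOAD DATA statement rather than the console or CLI flow (the bucket path and column names here are only illustrative), would look like this:

LOAD DATA INTO `projectName.DatasetName.tableName` (Tid STRING, TRADE_DATE STRING)
FROM FILES (
  format = 'CSV',
  uris = ['gs://your-bucket/your-file.csv'],
  skip_leading_rows = 1
);

Because the schema is spelled out, TRADE_DATE stays a STRING and no type is inferred from the data.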
Since your table already has a schema with the date column defined as the string data type, when you try to load the data from GCS with auto-detect on, there is a schema mismatch: auto-detect reads the date column as the date data type rather than string. If you uncheck the auto-detect schema option, you need to provide the schema manually while loading the data.
Consider using the following steps:-
Create a table from the CSV file in the GCS bucket and manually provide the schema by keeping the auto-detect schema option unchecked.
Provide the data type as string for the date column. Skip the header row, if any, using the advanced options drop-down.
Parse the date column, say ‘TDate’, to the correct format by running the following query on the table:-
SELECT Tid, parse_date("%d/%m/%Y", TDate) as TDate FROM `projectName.DatasetName.tableName`
I have used ‘/’ as a separator in the format string to match the date format provided by you. You can refer to this document for more supported format elements.
Save the result of the above query in a different table by clicking the ‘Save Results’ button on the console. You can see the datatype of the ‘TDate’ column is date in the new table. You can refer to this document if you need help with saving the query output to a table.
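If you prefer to do that step in SQL instead of the ‘Save Results’ button, a CREATE TABLE AS sketch achieves the same thing (the destination table name here is hypothetical):

CREATE OR REPLACE TABLE `projectName.DatasetName.tableName_parsed` AS
SELECT Tid, PARSE_DATE("%d/%m/%Y", TDate) AS TDate
FROM `projectName.DatasetName.tableName`;

The new table stores TDate with the DATE data type, so no further parsing is needed in downstream queries.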
You can verify if BigQuery is recognising the date in the format you have parsed by running the following query:-
SELECT EXTRACT(Day FROM TDate ) as Day, EXTRACT(MONTH FROM TDate ) as Month FROM `projectName.DatasetName.tableName`
You can refer to this document for more information on date functions in BigQuery.
If changing the CSV file is an option for you then you can refer to this BigQuery documentation. It mentions that while you load data from a CSV file to BigQuery, values in the ‘Date’ column must use the ‘-’ separator and the date must be in the following format: YYYY-MM-DD.
I am in the process of creating a dashboard in power BI with multiple people. Currently I have 4 entities in a Dataflow that move to a dataset which are then visualized in reports. I recently added a column to one of my entities that I would like to show up in a report that is already created. However, despite the column being added to the entity (it shows up when I try to create a new report), it isn't displayed in the older report. How can I get my new column to display in an already created report?
You need to open the old report, go to the Query Editor, and refresh the preview for it to pick up the new column.
You may also have to go through the query's applied steps to make sure the new column is not being removed, for example by a step that reduces the columns down via a selection. When you create a new report, you see the column because the query picks up the dataflow table structure without any history of applied steps. Note this is not just for Dataflows, but for most types of connection where the structure changes, for example CSV, Excel, etc.
Check if the source data set is set to private by the person who published the report. Changing this might grant you access to the source dataset.
I've created a tabular model in Power BI and now I'd like to create that same model in Azure Analysis Services, using Visual Studio 2017 and SSDT. Some of my tables in my Power BI model have a SQL query as the source and not a physical table or view. However, in SSDT, when I attempt to add a new table to my model I'm not given the choice of entering a SQL query. It seems I have to either select a physical table or a view.
In SSDT is it not possible to add a table to my model based on a SQL query?
On the top menu bar, go to Model, then Existing Connections. After this, press Open and select the second radio button, "Write a query that will specify the data to import." If you're accessing an object that's not in the database used as the Initial Catalog in the connection string, then the three-part naming convention (Database.Schema.Table) is necessary.
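For example, a query typed into that dialog, using three-part naming for a table outside the Initial Catalog (the database, schema, table, and column names here are only illustrative), might look like:

SELECT o.OrderID, o.OrderDate, o.Amount
FROM SalesDB.dbo.Orders AS o   -- Database.Schema.Table
WHERE o.OrderDate >= '2017-01-01';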
I am using Power BI Desktop with DirectQuery on a SQL database.
When the data is loaded into Power BI Desktop, I can see that certain fields are missing from the table. When I view it in SQL Server Management Studio, I can see the entire table.
Is there a known reason why all fields in the table would not be returned?
Check in the Query Editor window (hit Edit Queries) - steps can be added to any Query to remove columns, or specify a selected set of columns.
It could also be that the columns were added to the SQL table after the Power BI Query was built. For that scenario you just need to use Refresh Preview in the Query Editor window and they will flow through to the Power BI table.
Is there any functionality to DELETE rows from a table, or even to remove a file from the path list, or do I have to DROP the entire table and re-create it with only the rows I need?
Also, is there a TRUNCATE function?
Vora tables are based on files in HDFS. As of today (Vora 1.2) you can only APPEND to an existing table. There is no DELETE or TRUNCATE functionality. You can, however, DROP the table and re-create it based on different files.
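A rough sketch of that drop-and-recreate pattern (the OPTIONS keys tableName and paths follow the Vora 1.x Spark data source syntax and may differ in other versions; the column list and HDFS path are hypothetical):

DROP TABLE trades;
CREATE TABLE trades (trade_id INT, trade_date STRING, amount DOUBLE)
USING com.sap.spark.vora
OPTIONS (tableName "trades", paths "/user/vora/trades_without_deleted_rows.csv");

In other words, write out a new file (or set of files) in HDFS containing only the rows you want to keep, then re-create the table on top of those files.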