Correct Way to Use Google Reporting API Headers for Visualization Data Table Headers - google-visualization

Technologies: Google Reporting API v4 and Google Visualization API, combining PHP and JavaScript. There are specific reasons we are not able to install and use the Google client library.
The problem: the Visualization API is giving me "Invalid type, INTEGER" when that is exactly the type the Google Reporting API returns. I know that type INTEGER is not supported in the Visualization API:
https://developers.google.com/chart/interactive/docs/reference#DataTable_addColumn
So the question is what is the correct approach to dynamically use the Reporting API headers to construct chart table headers? Do we need to map the data types every time?
In a nutshell: I query for Analytics data and get the following header structure:
[columnHeader] => Array
    (
        [dimensions] => Array
            (
                [0] => ga:date
            )
        [metricHeader] => Array
            (
                [metricHeaderEntries] => Array
                    (
                        [0] => Array
                            (
                                [name] => ga:users
                                [type] => INTEGER
                            )
                        [1] => Array
                            (
                                [name] => ga:sessions
                                [type] => INTEGER
                            )
                    )
            )
    )
When I attempt to create columns,
...
$mtype = $headers['metricHeader']['metricHeaderEntries'][0]['type'];
...
$column_object = "{'type':'$mtype','label':'$mname'}";
// produces {'type':'INTEGER','label':'ga:users'}
...
data.addColumn($column_object);
The (Firefox) console logs: Invalid type, INTEGER, for column "Users".
I can "cheat" here by hard coding 'number' for type:
$column_object = "{'type':'number','label':'$mname'}";
Which works fine, but I shouldn't have to do this (or I am missing something), and it presents some challenges in making metrics and dimensions dynamic. "Users" is indeed a number/integer. I can't help feeling there's something I overlooked that would easily map the columns from the data.

In the absence of responses, and after much research and experimentation, I thought I might post an outline of my solution (and offer it up for criticism/feedback).
Keep in mind this is a chart "selector" across multiple chart types and multiple dimension/metric selections, so a hard-coded one-off won't work. It has to be dynamic and perform all transformations for every chart type.
Analytics returns headers of types STRING, INTEGER, PERCENT, TIME, CURRENCY, and FLOAT, which indeed need to be mapped to the Visualization-compatible types string, number, boolean, date, datetime, and timeofday. The server-side processing returns the raw JSON-encoded Analytics data in case other apps may consume it, and any mods to the data are performed in JavaScript with a leg up from jQuery.
The problem I encountered is that we "need to know" the original data type of a given column for proper formatting on chart output. This is further complicated by the header's dimensions member not having a "type". For example, Analytics returns the "date" dimension as a string ("20190305") that needs to be converted to a proper JavaScript Date object for chart output, but in our case needs to display as the string "03/05/2019".
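The date conversion just described can be sketched in plain JavaScript (the function names here are mine, not from the original code):

```javascript
// Sketch: convert an Analytics ga:date string ("20190305") into a
// JavaScript Date, then format it back for display as "03/05/2019".
function gaDateToDate(s) {
  // ga:date is YYYYMMDD; the month is zero-based in the Date constructor.
  return new Date(
    Number(s.slice(0, 4)),
    Number(s.slice(4, 6)) - 1,
    Number(s.slice(6, 8))
  );
}

function formatDateMMDDYYYY(d) {
  var mm = String(d.getMonth() + 1).padStart(2, '0');
  var dd = String(d.getDate()).padStart(2, '0');
  return mm + '/' + dd + '/' + d.getFullYear();
}
```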
On receipt of Analytics data, I run a function to add the original data types from Analytics to all header columns as custom fields. These are not touched by the charts so it does not affect functionality. I then map the G.A. types to the Visualization types so they properly output the chart, and refer to my "custom types members" which contain the original data type for formatting in the chart.
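A minimal sketch of that step in JavaScript (the map, function, and member names are my assumptions, not the author's actual code): walk the Analytics columnHeader, stash the original type in a custom member, and emit a Visualization-compatible type for addColumn.

```javascript
// Map Google Analytics header types to Google Visualization column types.
var GA_TO_VIZ = {
  STRING: 'string',
  INTEGER: 'number',
  FLOAT: 'number',
  PERCENT: 'number',
  CURRENCY: 'number',
  TIME: 'number'
};

// Build addColumn-ready objects from the Reporting API columnHeader,
// keeping the original type in a custom "gaType" member that the chart
// ignores but our formatting code can read back later.
function buildColumns(columnHeader) {
  var cols = (columnHeader.dimensions || []).map(function (name) {
    // Dimensions carry no type in the header; treat them as strings.
    return { type: 'string', label: name, gaType: 'STRING' };
  });
  columnHeader.metricHeader.metricHeaderEntries.forEach(function (entry) {
    cols.push({
      type: GA_TO_VIZ[entry.type] || 'string', // fall back to string
      label: entry.name,
      gaType: entry.type
    });
  });
  return cols;
}
```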
The original types are referenced to properly format data using the hAxis options for the hAxis display, and Visualization [Date|Number]Formatter (s) for column data (tooltip values, etc.)
Effectively:
percent -> type number, but formatted as percent with %.
float -> type number, formatted as decimal.
currency -> type number, formatted as $currency.
date -> type Date(), formatted as MM/DD/YYYY.
time -> type Date(), formats vary (generally HH:MM:SS for our purposes).
string -> obviously string.
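As one way to wire this up (the function name and the particular option values below are my assumptions), the original Analytics type kept on each column can drive the formatter choice:

```javascript
// Choose Visualization NumberFormat options based on the original
// Analytics type; returns null when no special formatting is needed.
function formatterOptionsFor(gaType) {
  switch (gaType) {
    case 'PERCENT':  return { suffix: '%', fractionDigits: 2 };
    case 'CURRENCY': return { prefix: '$', fractionDigits: 2 };
    case 'FLOAT':    return { fractionDigits: 2 };
    default:         return null;
  }
}

// In the page, with the charts library loaded, each column could then
// be formatted with something like:
//   var opts = formatterOptionsFor(col.gaType);
//   if (opts) new google.visualization.NumberFormat(opts).format(data, i);
```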
Hope this is helpful for anyone diving into G.A./Visualization integration.

Related

Issues with PowerBI connector in PowerApps

Up front: this isn't about PowerBI tiles or bringing visualizations into PowerApps. There is a PowerBI data connector that provides a method called ExecuteDatasetQuery that allows for passing in a DAX query for, ostensibly, returning the data from a published dataset. It takes three parameters: workspaceGuid, datasetGuid, and queryText (with an optional object for serializer settings).
There is no query I can send this thing that doesn't return a giant empty table, and I have no idea what I'm doing wrong. My queries, which work fine in other systems that do the same thing (JavaScript API calls, PowerAutomate, PowerBI Desktop), all produce a table with no columns and no values in those columns, but with a number of rows equal to the rows I'd expect to get back from the query. The result, viewed in PowerApps, looks like this:
And, just for fun, I've converted the return to a JSON string and can confirm that the return is...
just empty. I can find no documentation of merit for the PowerBI connector or this method, so no luck there. Just wondered if anyone's had any experience with this thing and can maybe point me in the right direction. For reference, the query I'm trying to pass in (that works everywhere else) is:
DEFINE
    VAR _reqs = SELECTCOLUMNS(MyTable,
        "ReqNum", [Title],
        "BusinessArea", [BusinessArea],
        "Serial1", [Serial1],
        "Serial2", [Serial2],
        "Department", [Department],
        "OM", [OM],
        "Requestor", [Requestor],
        "StrategicObjective", [ITStrategicObjective],
        "Area", [Area],
        "ProductLine", [ProductLine],
        "ProjectManager", [ProjectManager],
        "BusinessLiaison", [BusinessLiaison],
        "Customer", [Customer],
        "SolutionArchitect", [SolutionArchitect],
        "VicePresident", [VicePresident],
        "Created", DATEVALUE([Created])
    )
EVALUATE
    _reqs
ORDER BY
    [Created] DESC
But the PowerApps method returns the same empty table even with something as simple as EVALUATE(MyTable).

Talend Populating Web Service Parameter that is an Array of Strings

I am using the tESBConsumer component (using Talend Data Integration 6.4.1) to talk to a SOAP Web Service. We need to use a Web Service method that requires one of its parameters to be an array of strings, such that the payload needs to become something like:
"... <simpleParam>Simple Param Value</simpleParam>
<arrayOfStringsParam>
    <string>Array Item 1</string>
    <string>Array Item 2</string>
    <string>Array Item 3</string>
</arrayOfStringsParam> ..."
I cannot determine how to pass the array into the control to get it in the correct format. There is no type for Schema fields of String[].
I tried a type List but it got written out in the payload as something like:
"... <arrayOfStringsParam>[Array Item 1, Array Item 2, Array Item 3]</arrayOfStringsParam> ..."
I tried a type of String and formatted the data in previous steps as
"<string>Array Item 1</string><string>Array Item 2</string><string>Array Item 3</string>"
but in the resulting payload the less-than (<) signs got encoded to "&lt;", so they weren't recognised as proper tags.
Can someone please let me know how to get this data into the appropriate format?
Thanks.
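For what it's worth, the encoding behaviour described above is standard XML serialisation: a String field's value is written as text content, so markup characters inside it are escaped and pre-built tags cannot survive. A tiny illustration of that escaping, in JavaScript rather than Talend:

```javascript
// Minimal illustration: when a string is serialised as XML *text
// content*, markup characters are escaped. This is exactly what
// happened to the pre-formatted "<string>...</string>" payload above.
function escapeXmlText(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}
```

Which suggests the array needs to be modelled as repeating elements in the request structure itself, rather than smuggled through a single text value.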

BIRT - using multiple webservices to get the data

I am trying to generate a report using Eclipse BIRT report designer.
The scenario is this:
There are 2 web service data sources, with 2 datasets for web services 'WS1' and 'WS2' respectively.
The output element 'COUNTRYID' of web service 'WS1' goes as input to web service 'WS2'.
What I did:
Created a parameter COUNTRYID.
Created a dummy Computed Column in the dataset of the web service 'WS1' with the expression:
params["COUNTRYID"].value=row["COUNTRYID"]
Now the input parameter for the 'WS2' dataset is related to the global parameter 'COUNTRYID'.
When I run the report, I see that the global parameter contains the value from the 'WS1' output.
But the report does not display the values from the response of web service 'WS2'.
My questions:
How can I see whether the web service got fired or not?
How can I see whether the web service got fired with the correct values?
WS1 is not fired unless it is explicitly bound to a report element. Typically, to achieve this we apply the following steps:
insert a data element at the beginning of the report body
set the visibility property of this new element to false (or leave it visible during testing)
bind it to the first dataset, WS1
This forces a silent execution of WS1, and therefore populates your parameter COUNTRYID before WS2 runs.
However, this approach would not work if:
the WS2 dataset has to be used to populate the selection items of a report parameter (which does not seem to be the case here)
the COUNTRYID parameter is used at render time. This point is much more annoying if, for example, you need the parameter in chart expressions. If so, I would recommend storing the WS1 value in a report variable instead of (or in addition to) a report parameter. See this topic to see how to create a report variable.
You can initialize it at the same place you did for the report parameter with:
vars["COUNTRYID"]=row["COUNTRYID"];
and use it anywhere with
vars["COUNTRYID"];
Report variables are available from the palette of the expressions editor.

Facebook Ads Insights API reportstats endpoint

I'm using reportstats edge to download some reports in CSV format. (It probably applies to XLS as well)
What I've noticed:
headers have different descriptions than the data_columns parameters - is there a resource describing the mapping? (e.g. adgroup_id -> 'Ad ID', adgroup_name -> 'Ad Name', unique_impressions -> 'Reach', ...)
will the order of csv columns be as defined in data_columns param?
! some columns are not returned in csv format - two I've identified so far are inline_actions and unique_social_clicks - the column is skipped in csv format but available in json - is it a bug or is there a reason for that?
general question - does csv format require pagination, or will I always get all of the data?
value mapping - the constant values in csv/xls format have different labels, e.g. placement (desktop_feed -> 'News Feed on Desktop Computers'). Is there a resource describing all the possible values?
asynchronous report requests - it happens quite often that although I'm checking the report_run_id for async_percent_completion, the data is still not available when it should be. I'm getting a text response "No data available." I need to retry, and then it's usually available. Is this expected?
Thanks!
different names in the API and XLS are intentional; API developers prefer naming consistent with the rest of the Ads API, but people using XLS exports are often not developers and prefer human-friendly naming
you can use export_columns to define the order
inline_actions/unique_social_clicks - not sure; these might be deprecated
it will give you all of the data
I don't think there's a public resource for the mapping between placement values :-(
you need to check report_run_id for the job status (field "async_status"), which should work reliably; once it's "Job Completed" you should be able to get the data
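The last point can be reduced to a small guard: fetch the report only once the job status says it is done, rather than relying on the completion percentage. A sketch (the "async_status" / "Job Completed" / "async_percent_completion" names are taken from the exchange above; the function names and polling shape are my own):

```javascript
// Decide whether an async report run is ready to fetch. Relies on
// "async_status" reaching "Job Completed", not on
// "async_percent_completion" hitting 100 (which can happen earlier).
function isReportReady(run) {
  return run.async_status === 'Job Completed';
}

// Poll a report run until it is ready, with a simple retry budget.
// "fetchRun" is a placeholder for however you GET the report_run_id.
function pollUntilReady(fetchRun, maxTries, done) {
  var tries = 0;
  (function tick() {
    var run = fetchRun();
    if (isReportReady(run)) return done(null, run);
    if (++tries >= maxTries) return done(new Error('report not ready'));
    setTimeout(tick, 1000); // back off before the next status check
  })();
}
```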

Calculating total number of rows for an IR from Page 0

I have a number of different Interactive Reports within my application (using Oracle APEX 4.1), on different pages.
On each of these reports I have pagination, but I have also been asked to display the overall total number of records somewhere on the page for each report.
Is there a means of doing this on Page 0, or do I need to calculate the total records for each report separately?
Update for commenter
As mentioned in my previous comment, I added the new "p_use_filter" boolean parameter and set it to FALSE, and all works fine.
I have another query. On one of the IR reports I am using a bind variable within my WHERE condition, but when I drill down on a report field that passes a STRING to the other report via the bind variable, and then look at the backend SQL statement, I get an INVALID RELATIONAL OPERATOR error because the SQL has a WHERE condition like:
WHERE (CUSTOMER_NAME = ABC AIRLINES LTD )
i.e. it is trying to parse the SQL without single quotes around ABC AIRLINES LTD.
I'd propose retrieving the IR SQL and then executing that SQL wrapped in a SELECT COUNT(*) FROM (...).
With APEX 4.2 you get the apex_ir package, which can retrieve the report SQL, but pre-4.2 you do not have such a package. I have made one for myself, and I've used it for several things (I've blogged a bit about these, for example).
The code needs some cleaning, but it is functional and has plenty of commentary in it. It'll parse all the filters and transform them into SQL again. It does not handle computations or GROUP BYs. It may still have some kinks left in the cable somewhere, but that depends on what you want to do with it; in this case it shouldn't be a problem.
Git link
Basically, install the package in your schema, and then use code like this to run it:
DECLARE
  lNullTable DBMS_SQL.VARCHAR2_TABLE;
  lDebug     VARCHAR2(4000);
  lSql       VARCHAR2(4000);
  lCount     NUMBER;
BEGIN
  lSql := apex_ir_pkg.get_ir_sql
          (
            p_app_id            => :APP_ID,
            p_session_id        => :APP_SESSION,
            p_page_id           => :APP_PAGE_ID,
            p_report_id         => NULL,
            p_app_user          => :APP_USER,
            p_use_session_state => TRUE,
            p_binds             => lNullTable,
            p_binds_val         => lNullTable,
            p_incl_order_by     => FALSE,
            p_debug             => lDebug
          );
  lSql := 'SELECT count(*) FROM ('||lSql||')';
  EXECUTE IMMEDIATE lSql INTO lCount;
END;
You can use #TOTAL_ROWS# in the header or footer of the region, but it will only show the initial value. I use it for classic reports. I don't think it plays well with the IR JavaScript, since that's a partial page refresh.
See: https://forums.oracle.com/forums/thread.jspa?threadID=2239712 and http://docs.oracle.com/cd/E23903_01/doc/doc.41/e21674/ui_region.htm#CHDBCGJH