I am trying to get all the values into Power BI from a REST API, but unfortunately the result is limited to 10 records, and I would like to iterate until the API returns no more data.
The URL looks like "https://web.com/api/v1/devices/?after_device_id=0", which returns the first 10 devices, so the next request would be "https://web.com/api/v1/devices/?after_device_id=10", and so on.
let
    apiUrl = "https://web.com/api/v1/devices/",
    options = [Headers = [#"APIToken" = "XXXX", #"Authorization" = "XXXX", #"Accept" = "application/json"]],
    result = Json.Document(Web.Contents(apiUrl, options)),
    devices = result[devices],
    #"Converted to Table" = Table.FromList(devices, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    #"Converted to Table"
In case anyone wonders, I solved it by creating a function and a query.
Function (fDevices):
(Page as number) =>
let
    // Append the paging parameter to the base URL
    apiUrl = "https://web.com/api/v1/devices/?after_device_id=" & Number.ToText(Page),
    options = [Headers = [#"APIToken" = "XXXX", #"Authorization" = "XXXX", #"Accept" = "application/json"]],
    result = Json.Document(Web.Contents(apiUrl, options)),
    devices = result[devices],
    #"Converted to Table" = Table.FromList(devices, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    #"Converted to Table"
Query:
List.Generate(
    () => [Result = try fDevices(0) otherwise null, Page = 0],
    each [Result] <> null,
    each [Result = try fDevices([Page] + 10) otherwise null, Page = [Page] + 10],
    each [Result]
)
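Since List.Generate returns a list of tables (one per page), it still needs to be collapsed into a single table before loading. A minimal sketch of that last step, assuming every page comes back with the same columns:

let
    Pages = List.Generate(
        () => [Result = try fDevices(0) otherwise null, Page = 0],
        each [Result] <> null,
        each [Result = try fDevices([Page] + 10) otherwise null, Page = [Page] + 10],
        each [Result]
    ),
    // Combine the per-page tables into one table (assumes identical columns on every page)
    Combined = Table.Combine(Pages)
in
    Combined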
I am new to Power Query and I want to get data from a REST API.
The API uses pagination; the first query returns two fields, and the table contains only 100 records:
IssueList = Table
NextBookmark = value
The URL /issues?bookmark=[value] returns the next page of issues.
I am trying to use the https://learn.microsoft.com/en-us/power-query/samples/trippin/1-odata/readme tutorial to create a connector, but I fail to get the values from the next page.
Code below:
Table.GenerateByPage = (getNextPage as function) as table =>
let
listOfPages = List.Generate(
() => getNextPage(null), // get the first page of data
(lastPage) => lastPage <> null, // stop when the function returns null
(lastPage) => getNextPage(lastPage) // pass the previous page to the next function call
),
// concatenate the pages together
tableOfPages = Table.FromList(listOfPages, Splitter.SplitByNothing(), {"Column1"}),
firstRow = tableOfPages{0}?
in
// if we didn't get back any pages of data, return an empty table
// otherwise set the table type based on the columns of the first page
if (firstRow = null) then
Table.FromRows({})
else
Value.ReplaceType(
Table.ExpandTableColumn(tableOfPages, "Column1", Table.ColumnNames(firstRow[Column1])),
Value.Type(firstRow[Column1])
);
GetAllPagesByNextLink = (url as text) as table =>
Table.GenerateByPage((previous) =>
let
// if previous is null, then this is our first page of data
nextLink = if (previous = null) then url else Value.Metadata(previous)[NextLink]?,
// if NextLink was set to null by the previous call, we know we have no more data
page = if (nextLink <> null) then GetPage(nextLink) else null
in
page
);
GetPage = (url as text) as table =>
let
response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
body = Json.Document(response),
nextLink = GetNextLink(body),
data = Table.FromRecords(body[IssueList])
in
data meta [NextLink = nextLink];
GetNextLink = (response) as nullable text => Record.FieldOrDefault(response, "NextBookmark");
PQ.Feed = (url as text) as table => GetAllPagesByNextLink(url);
Can anyone help me out?
Thanks in advance for words of advice :)
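One likely culprit, for what it's worth: NextBookmark is a bare value, not a URL, yet GetAllPagesByNextLink passes it straight to GetPage as if it were one. A hedged sketch of a fix, assuming the endpoint takes a plain ?bookmark= query parameter and that url itself carries no query string (otherwise build the query with Uri.BuildQueryString):

GetAllPagesByNextLink = (url as text) as table =>
    Table.GenerateByPage((previous) =>
        let
            // first call: no previous page; later calls: read the bookmark that
            // GetPage stored in the page metadata
            bookmark = if (previous = null) then null else Value.Metadata(previous)[NextLink]?,
            // rebuild the full URL: the API expects /issues?bookmark=[value]
            nextUrl =
                if (previous = null) then url
                else if (bookmark <> null) then url & "?bookmark=" & bookmark
                else null,
            page = if (nextUrl <> null) then GetPage(nextUrl) else null
        in
            page
    );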
An MSI has a table, 'FeatureComponents', with two columns: 'Feature_' and 'Component_'. What I'm trying to do is change all the values in the 'Feature_' column at once.
IntPtr hDb = IntPtr.Zero;
int res = MsiInvoke.MsiOpenDatabase(PathToMsi, MsiInvoke.MSIDBOPEN_TRANSACT, out hDb);
string FFF = "SELECT `Feature_` FROM `FeatureComponents`"; // the SQL string
IntPtr hView2 = IntPtr.Zero;
res = MsiInvoke.MsiDatabaseOpenView(hDb, FFF, out hView2);
res = MsiInvoke.MsiViewExecute(hView2, IntPtr.Zero);
IntPtr hRec2 = IntPtr.Zero;
res = MsiInvoke.MsiViewFetch(hView2, out hRec2);
res = MsiInvoke.MsiRecordSetString(hRec2, 1, "DUMMY");
res = MsiInvoke.MsiViewModify(hView2, 4, hRec2); // 4 == MSIMODIFY_REPLACE
res = MsiInvoke.MsiViewClose(hView2);
res = MsiInvoke.MsiDatabaseCommit(hDb);
However, this only changes the first entry in the table. So I'm wondering: how do I iterate over all entries in the table and change every value in the column? Right now I can only do this for one entry and have no idea how to apply it to all of them.
Basically you just turn that fetching code into a loop: keep calling MsiViewFetch, each call returning the next record, until you get the ERROR_NO_MORE_ITEMS result. Here's a simple old example in C++, but the principle is the same:
while ((errorI = MsiViewFetch(hViewSELECT, &hRecord)) != ERROR_NO_MORE_ITEMS)
{
    nBuffer = (DWORD)256;
    MsiRecordGetString(hRecord, 1, svPropname, &nBuffer);
    nBuffer = (DWORD)256;
    MsiRecordGetString(hRecord, 2, svPropvalue, &nBuffer);
}
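Applied to the question's scenario, the set/modify calls simply move inside that loop. A sketch only (untested), reusing hView2 and hDb from the question but with the native Win32 signatures rather than the MsiInvoke P/Invoke wrapper:

// Sketch: set Feature_ to "DUMMY" on every row of the open view.
// MSIMODIFY_REPLACE (the 4 used above) is needed because Feature_ is part of
// the table's primary key, and MSIMODIFY_UPDATE touches non-key data only.
MSIHANDLE hRecord = 0;
UINT err;
while ((err = MsiViewFetch(hView2, &hRecord)) == ERROR_SUCCESS)
{
    MsiRecordSetString(hRecord, 1, TEXT("DUMMY"));
    MsiViewModify(hView2, MSIMODIFY_REPLACE, hRecord);
    MsiCloseHandle(hRecord); // release each fetched record
}
// err == ERROR_NO_MORE_ITEMS once the view is exhausted
MsiViewClose(hView2);
MsiDatabaseCommit(hDb);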
Hey guys, has anyone run into this issue where the ADO.NET connector returns "Invalid value for column: Expected INT64" when the column parameters are added in certain orders?
Error details:
"Operation was rejected because the system is not in a state required for the operation's execution.
Status(StatusCode=FailedPrecondition, Detail="Invalid value for column GeographyKey in table Table1: Expected INT64.")"
var qry = "DELETE Table1";
var col = new SpannerParameterCollection();
col.Add("CustomerKey", SpannerDbType.Int64, 0);
col.Add("EmailAddress", SpannerDbType.String, "some#email.com");
col.Add("GeographyKey", SpannerDbType.Int64, 37);
This one throws the error on CustomerKey:
var qry = "DELETE Table1";
var col = new SpannerParameterCollection();
col.Add("EmailAddress", SpannerDbType.String, "some#email.com");
col.Add("GeographyKey", SpannerDbType.Int64, 37);
col.Add("CustomerKey", SpannerDbType.Int64, 0);
But when the columns are arranged this way, with GeographyKey and CustomerKey moved to the front of the list, it doesn't throw an error:
var qry = "DELETE Table1";
var col = new SpannerParameterCollection();
col.Add("GeographyKey", SpannerDbType.Int64, 37);
col.Add("CustomerKey", SpannerDbType.Int64, 0);
col.Add("EmailAddress", SpannerDbType.String, "some#email.com");
A delete mutation takes a KeySet as its only argument. A KeySet does not include the column names, only the values of the key. It is therefore required that you specify the key values in the order of the key columns of the table. Have a look here for a specification of the method that is being called under the hood: https://cloud.google.com/spanner/docs/reference/rest/v1/projects.instances.databases.sessions/commit#Delete
I am not familiar with the ADO.NET connector, but even if you are able to specify key values using a combination of column name and value in the ADO.NET API, in the end only the key values will be sent to Cloud Spanner.
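So, as a hedged sketch, assuming the primary key of Table1 is (GeographyKey, CustomerKey) as the working ordering above suggests, and with connectionString as a placeholder, the delete only needs the key columns, added in key order:

using Google.Cloud.Spanner.Data;

using (var connection = new SpannerConnection(connectionString))
{
    await connection.OpenAsync();
    var keys = new SpannerParameterCollection();
    // Key values must be added in primary-key column order; the parameter
    // names are not part of the mutation sent to Cloud Spanner.
    keys.Add("GeographyKey", SpannerDbType.Int64, 37);
    keys.Add("CustomerKey", SpannerDbType.Int64, 0);
    // EmailAddress is presumably not a key column, so it has no place in the KeySet.
    var cmd = connection.CreateDeleteCommand("Table1", keys);
    await cmd.ExecuteNonQueryAsync();
}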
I am trying to retrieve both the number of records for a SELECT statement and the record set itself at once.
The Recordset object offers the RecordCount property for this purpose.
It works just fine using a static, server-side cursor, but when I view the events in SQL Server Profiler, I realize that it seems to fetch every row of the whole recordset just to count the rows.
On the other hand, I can do a MoveLast on the recordset, and the Bookmark then contains the index of the last row (== RecordCount).
I do not want to use the Bookmark instead of RecordCount, and I wonder if someone could explain this behaviour.
If anyone is interested, I created a small code sample to reproduce it:
::CoInitialize(NULL);
ADODB::_ConnectionPtr pConn;
HRESULT hr;
hr = pConn.CreateInstance(__uuidof(ADODB::Connection));
pConn->CursorLocation = ADODB::adUseServer;
pConn->ConnectionTimeout = 0;
pConn->Provider = "SQLOLEDB";
pConn->Open(bstr_t("Provider=sqloledb;Data Source=s11;Initial Catalog=...;Application Name=DBTEST"), "", "", ADODB::adConnectUnspecified);
// Create Command Object
_variant_t vtRecordsAffected;
ADODB::_CommandPtr cmd;
hr = cmd.CreateInstance(__uuidof(ADODB::Command));
cmd->ActiveConnection = pConn;
cmd->CommandTimeout = 0;
// Create a test table
cmd->CommandText = _bstr_t("create table #mytestingtab (iIdentity INT)");
cmd->Execute(&vtRecordsAffected, NULL, ADODB::adCmdText);
// Populate
cmd->CommandText = _bstr_t(
"DECLARE #iNr INT\r\n"
"SET #iNr = 0\r\n"
"WHILE #iNr < 10000\r\n"
"BEGIN\r\n"
" INSERT INTO #mytestingtab (iIdentity) VALUES (#iNr)\r\n"
" SET #iNr = #iNr + 1\r\n"
"END\r\n"
);
cmd->Execute(&vtRecordsAffected, NULL, ADODB::adCmdText);
// Create a Recordset Object
_variant_t vtEmpty(DISP_E_PARAMNOTFOUND, VT_ERROR);
ADODB::_RecordsetPtr Recordset;
hr = Recordset.CreateInstance(__uuidof(ADODB::Recordset));
Recordset->CursorLocation = ADODB::adUseServer;
cmd->CommandText = _bstr_t(
"SELECT * FROM #mytestingtab"
);
Recordset->PutRefSource(cmd);
Recordset->Open(vtEmpty, vtEmpty, ADODB::adOpenStatic, ADODB::adLockReadOnly, ADODB::adCmdText);
// Move to the Last Row
Recordset->MoveLast();
_variant_t bookmark = Recordset->Bookmark;
// Recordcount
long tmp = Recordset->RecordCount;
Recordset->Close();
pConn->Close();
::CoUninitialize();
Is there a way to use the RecordCount property without transferring all the rows to the client?
I have maybe about 1,000,000 rows to load into C++ objects (through about 10,000 SELECTs). I've profiled the load and note that the sqlite3_step call here is the bottleneck.
sqlite3_stmt *stmt;
std::string symbol = stock->getSymbol();
boost::format sql("SELECT date,open,high,low,close,volume FROM Prices WHERE symbol=\"%s\" ORDER BY date DESC");
sql % symbol;
if (sqlite3_prepare_v2(databaseHandle_, sql.str().c_str(), -1, &stmt, NULL) == SQLITE_OK) {
while (sqlite3_step(stmt) == SQLITE_ROW) {
int date = sqlite3_column_int(stmt, 0);
float open = sqlite3_column_double(stmt, 1);
float high = sqlite3_column_double(stmt, 2);
float low = sqlite3_column_double(stmt, 3);
float close = sqlite3_column_double(stmt, 4);
int volume = sqlite3_column_int(stmt, 5);
Price *price = new Price(new Date(date), open, close, low, high, volume);
stock->add(price);
}
sqlite3_finalize(stmt); // release the prepared statement
} else {
std::cout << "Error loading stock" << std::endl;
}
I am using the amalgamated sqlite3.h/c, version 3.15.0. Any ideas how I can speed up performance?
More info:
CREATE TABLE Prices (symbol VARCHAR(10) NOT NULL, date INT(11) NOT NULL, open DECIMAL(6,2) NOT NULL, high DECIMAL(6,2) NOT NULL,low DECIMAL(6,2) NOT NULL, close DECIMAL(6,2) NOT NULL, volume INT(10) NOT NULL, PRIMARY KEY (symbol, date))
CREATE INDEX `PricesIndex` ON `Prices` (`symbol` ,`date` DESC)
EXPLAIN QUERY PLAN SELECT * FROM Prices WHERE symbol="TSLA" ORDER BY date DESC;
returns
SEARCH TABLE PRICES USING INDEX PricesIndex (symbol=?)
Further note: such SELECTs as shown above take 2 ms in "Execute SQL" in SQLite Browser for Mac.
Your index already speeds up searching for matching rows, and returns them in the correct order so that no separate sorting step is required.
However, the database still has to look up the corresponding table row for each index entry.
You can speed up this particular query by creating a covering index on all the used columns:
CREATE INDEX p ON Prices(symbol, date, open, high, low, close, volume);
But instead of duplicating all data in the index, it would be a better idea to make this table a clustered index:
CREATE TABLE Prices (
symbol VARCHAR(10) NOT NULL,
date INT(11) NOT NULL,
open DECIMAL(6,2) NOT NULL,
high DECIMAL(6,2) NOT NULL,
low DECIMAL(6,2) NOT NULL,
close DECIMAL(6,2) NOT NULL,
volume INT(10) NOT NULL,
PRIMARY KEY (symbol, date)
) WITHOUT ROWID;
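Because Prices already holds data, the table has to be rebuilt to become WITHOUT ROWID. A sketch of the usual migration (back up the database first; table and column names taken from the schema above):

BEGIN;
ALTER TABLE Prices RENAME TO Prices_old;
CREATE TABLE Prices (
    symbol VARCHAR(10) NOT NULL,
    date INT(11) NOT NULL,
    open DECIMAL(6,2) NOT NULL,
    high DECIMAL(6,2) NOT NULL,
    low DECIMAL(6,2) NOT NULL,
    close DECIMAL(6,2) NOT NULL,
    volume INT(10) NOT NULL,
    PRIMARY KEY (symbol, date)
) WITHOUT ROWID;
INSERT INTO Prices SELECT * FROM Prices_old;
-- Dropping the old table also drops PricesIndex; the clustered primary key
-- now serves both the symbol lookup and ORDER BY date DESC (scanned backwards).
DROP TABLE Prices_old;
COMMIT;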