I am wondering if anyone can help me with API pagination. I am trying to get all records from an external API, but it restricts me to a maximum of 10 records per request. There are around 40k records.
The API also does not return the number of pages (response below), hence I can't get my head around a solution.
There is no "skip", "count", or "top" parameter supported either. I am stuck, and I don't know how to create a loop in the M language that runs until all records are fetched. Can someone help me write the code, or show what it could look like?
Below is my code.
let
    Source = Json.Document(
        Web.Contents(
            "https://api.somedummy.com/api/v2/Account",
            [
                RelativePath = "Search",
                Headers =
                [
                    ApiKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx",
                    Authorization = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
                    #"Content-Type" = "application/json"
                ],
                Content =
                    Json.FromValue(
                        [key = "status", operator = "EqualTo", value = "Active", resultType = "Full"]
                    )
            ]
        )
    )
in
    Source
and below is the output:
"data": {
"totalCount": 6705,
"page": 1,
"pageSize": 10,
"list":[
This might help you along your way. While I was looking into something similar for working with Jira, I found some helpful info from two individuals on the Atlassian Community site. Below is what I think might be a relevant snippet from a query I developed with the assistance of their posts. (To be clear, this snippet is their code, which I used in my query.) I'm providing a larger segment of the query (also built from their code) further down, but I think the key part that relates to your particular issue is this:
yourJiraInstance = "https://site.atlassian.net/rest/api/2/search",
Source = Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt="0"]])),
totalIssuesCount = Source[total],
// Now it is time to build a list of startAt values, starting on 0, incrementing 100 per item
startAtList = List.Generate(()=>0, each _ < totalIssuesCount, each _ +100),
urlList = List.Transform(startAtList, each Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt=Text.From(_)]]))),
// ===== Consolidate records into a single list ======
// so we have all the records in data, but it is in a bunch of lists each 100 records
// long. The issues will be more useful to us if they're consolidated into one long list
I'm thinking that maybe you could try substituting pageSize for maxResults and totalCount for totalIssuesCount. I don't know what the equivalent of startAt would be, but there must be something similar available to you; it could actually be startAt, and your response does show a page field. I believe your pageSize would be 10, and you would increment your startAt by 10 instead of 100.
This is from Nick's and Tiago's posts on the Atlassian thread linked in the code comments below. I think the only real difference may be that I buffered a table. (It's been a while, and I did not dig back into their thread to compare it for this answer.)
let
// I must credit the first part of this code -- the part between the ********** lines -- as being from Nick Cerneaz (and Tiago Machado) from their posts on this thread:
// https://community.atlassian.com/t5/Marketplace-Apps-Integrations/All-data-not-displayed-in-Power-BI-from-Jira/qaq-p/723117.
// **********
yourJiraInstance = "https://site.atlassian.net/rest/api/2/search",
Source = Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt="0"]])),
totalIssuesCount = Source[total],
// Now it is time to build a list of startAt values, starting on 0, incrementing 100 per item
startAtList = List.Generate(()=>0, each _ < totalIssuesCount, each _ +100),
urlList = List.Transform(startAtList, each Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt=Text.From(_)]]))),
// ===== Consolidate records into a single list ======
// so we have all the records in data, but it is in a bunch of lists each 100 records
// long. The issues will be more useful to us if they're consolidated into one long list
//
// In essence we need to extract the separate lists of issues in each data{i}[issues] for 0 <= i < #"total"
// and concatenate those into single list of issues .. from which then we can analyse
//
// to figure this out I found this post particularly helpful (thanks Vitaly!):
// https://potyarkin.ml/posts/2017/loops-in-power-query-m-language/
//
// so first create a single list that has as its members each sub-list of the issues,
// 100 in each except for the last one that will have just the residual list.
// So iLL is a List of Lists (of issues):
iLL = List.Generate(
() => [i=-1, iL={} ],
each [i] < List.Count(urlList),
each [
i = [i]+1,
iL = urlList{i}[issues]
],
each [iL]
),
// and finally, collapse that list of lists into just a single list (of issues)
issues = List.Combine(iLL),
// Convert the list of issues records into a table
#"Converted to table" = Table.Buffer(Table.FromList(issues, Splitter.SplitByNothing(), null, null, ExtraValues.Error)),
// **********
I had a recent urge to play with reading and formatting a GEDCOM (genealogy) file, and I'm using Svelte to accomplish this. But I'm stuck on a problem, and I'm not quite sure of the best way to solve it. Extracted from the GEDCOM file is a collection of names sorted by surname. Let's say it's this:
Adams, John
Anderson, Neo
Cash, Johnny
Clapton, Eric
Cross, Christopher
Denver, John
The real list is quite a bit bigger and needs indicators for the letters of the alphabet. Essentially, I want to have the following as output:
Names starting with A:
Adams, John; Anderson, Neo;
Names starting with C:
Cash, Johnny; Clapton, Eric; Cross, Christopher;
Names starting with D:
Denver, John;
I have no problem spewing out the list. Svelte makes that easy enough. What I haven't figured out yet is how to interject the breaks. Each thing I've tried or thought about seems to be flawed. As far as I can tell, there's no good way to simply alter a variable as a loop progresses and there doesn't seem to be a way to get the previous item in the loop. Any suggestions for an approach?
I currently have this, but where to begin injecting the header?
{#each data.sort(compare) as n }
  {#if n.ReverseName }
    {n.ReverseName}
  {/if}
{/each}
First of all, you want to ask yourself, "what is the format of the data output I want to obtain?". Considering your question and your objective to be able to loop through the output, you'd probably want the following structure:
output = [
[array of names starting with A, if any],
[array of names starting with B, if any],
etc.
]
I believe a relatively straightforward way to achieve this is with a reducer:
// start with sorting the input array so names are
// already properly sorted before being input to our reducer
const output = input.sort().reduce((acc, current) => {
  // if the accumulator is an empty array,
  // return the current value in a new sub array
  // (the very first name pushed into the very first sub array)
  if (acc.length === 0) {
    return [ [ current ] ];
  }
  // otherwise, if the initial of the first name in
  // the current (i.e. last created) sub array matches
  // the initial of the current value, add that current
  // value to the current sub array
  const [ currentSub ] = acc.slice(-1);
  if (currentSub[0].charAt(0) === current.charAt(0)) {
    return [ ...acc.slice(0, -1), [ ...currentSub, current ] ];
  }
  // else add a new sub array and initialise it with the current value
  return [ ...acc, [ current ] ];
}, []); // initialise with an empty array
The output array will have the desired shape [ [names starting with A, if any], [names starting with B, if any], ... ] and all that's left is for you to iterate through that output array:
{#each output as letterArray}
  <p>Names starting with <strong>{letterArray[0].charAt(0)}</strong>:</p>
  <p>{letterArray.join('; ')}</p>
{/each}
Or if you want to iterate over individual names, add an inner each block that iterates over letterArray. Many possibilities exist once your data is shaped the way you want it to be.
Demo REPL
Edit:
If input is dynamic (i.e. it is updated through the life of the component, either via a fetch or because it is received as a prop that might get updated), you can keep output automatically updated by turning its declaration into reactive code:
$: output = input.sort().reduce(...same code as above...)
How can I loop through the request data and post each submission as one row in the database? The user can submit multiple descriptions, lengths and so on, and the problem I have is that in the DB it creates a massive number of rows, with only the last one (A1) in the correct format. The user could submit A1,1,1,1,1; A2,2,2,8,100 and so on, as it's a dynamic add form.
descriptions = request.POST.getlist('description')
lengths = request.POST.getlist('lengthx')
widths = request.POST.getlist('widthx')
depths = request.POST.getlist('depthx')
quantitys = request.POST.getlist('qtyx')
for description in descriptions:
    for lengt in lengths:
        for width in widths:
            for depth in depths:
                for quantity in quantitys:
                    newquoteitem = QuoteItem.objects.create(
                        qdescription=description,
                        qlength=lengt,
                        qwidth=width,
                        qdepth=depth,
                        qquantity=quantity,
                        quote_number=quotenumber,
                    )
(Screenshots: the resulting DB rows, where only the bottom entry is correct, and the post form.)
First solution
Use formsets. That is exactly what they are meant to handle.
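A minimal sketch of what that could look like (the form class, field types, and view wiring here are my assumptions, not code from your project):

from django import forms
from django.forms import formset_factory

class QuoteItemForm(forms.Form):
    description = forms.CharField()
    lengthx = forms.DecimalField()
    widthx = forms.DecimalField()
    depthx = forms.DecimalField()
    qtyx = forms.IntegerField()

QuoteItemFormSet = formset_factory(QuoteItemForm, extra=1)

# in the POST branch of the view:
formset = QuoteItemFormSet(request.POST)
if formset.is_valid():
    for form in formset:
        if not form.cleaned_data:
            continue  # skip empty "extra" forms
        QuoteItem.objects.create(
            qdescription=form.cleaned_data['description'],
            qlength=form.cleaned_data['lengthx'],
            qwidth=form.cleaned_data['widthx'],
            qdepth=form.cleaned_data['depthx'],
            qquantity=form.cleaned_data['qtyx'],
            quote_number=quotenumber,
        )

Formsets also give you validation plus the management form that makes the dynamic "add another row" behaviour straightforward on the client side.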
Second solution
descriptions = request.POST.getlist('description') returns a list of all descriptions; let's say there are 5, so that loop iterates 5 times. Now lengths = request.POST.getlist('lengthx') is a list of all lengths, again 5 of them, so it will iterate 5 times, and since it is nested within the descriptions for loop, that's already 25 iterations. With all five loops nested it becomes 5^5 = 3,125 create() calls, which is why you see so many rows!
So, although I still think formsets are the way to go, you can try the following:
descriptions = request.POST.getlist('description')
lengths = request.POST.getlist('lengthx')
widths = request.POST.getlist('widthx')
depths = request.POST.getlist('depthx')
quantitys = request.POST.getlist('qtyx')
for i in range(len(descriptions)):
    newquoteitem = QuoteItem.objects.create(
        qdescription=descriptions[i],
        qlength=lengths[i],
        qwidth=widths[i],
        qdepth=depths[i],
        qquantity=quantitys[i],
        quote_number=quotenumber,
    )
Here, if there are 5 descriptions, then len(descriptions) will be 5, and there is one loop, which will iterate 5 times in total.
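As a follow-up, an equivalent sketch using zip (assuming, as the loop above does, that the five lists always arrive in the same order and with the same length) avoids the index bookkeeping:

rows = zip(descriptions, lengths, widths, depths, quantitys)

for description, length, width, depth, quantity in rows:
    QuoteItem.objects.create(
        qdescription=description,
        qlength=length,
        qwidth=width,
        qdepth=depth,
        qquantity=quantity,
        quote_number=quotenumber,
    )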
I have a list that contains sublists. The sequence of the sublists is fixed, as is the number of elements.
schedule = [['date1', 'action1', beginvalue1, endvalue1],
['date2', 'action2', beginvalue2, endvalue2],
...
]
Say I have a date and I want to find what I have to do on that date, meaning I need to retrieve the contents of the entire sublist, given only the date.
I did the following (which works): I created an intermediate list with all the first values of the sublists. Based on the index of the date in that list, I was able to retrieve the entire sublist, as follows:
dt = 'date150' # To just have a value to make underlying code more clear
ls_intermediate = [item[0] for item in schedule]
index = ls_intermediate.index(dt)
print(schedule[index])
It works, but it just does not seem like the Python way to do this. How can I improve this piece of code?
To be complete: there are no duplicate 'date' entries in the list. Every date is unique and appears only once.
Learning Python, and having quite a journey in front of me...
thank you!
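Not a full answer, but a couple of sketches of more idiomatic lookups, using the schedule and dt variables from your snippet and relying on each date being unique:

# scan once and return the first matching sublist, or None if the date is absent
entry = next((item for item in schedule if item[0] == dt), None)
print(entry)

# if you do many lookups, build a dict keyed by date once, then look up in O(1)
by_date = {item[0]: item for item in schedule}
print(by_date[dt])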
Hello NetLogo community,
I am trying to ask agents named "users" to save a certain value (a string) of a variable for the last two ticks (the last two instances when the "Go" command is executed). But users only have to store these values after the first two ticks. Can anyone suggest a way out? I have tried implementing the following logic, but it does not seem to work.
ask users
[
  set history-length-TM 2
  if ticks > 2
  [
    set TM-history n-values history-length-TM [mode-taken]
    foreach TM-history [x = "car"]
    [
      commands that are to be executed
      .....
      ......
    ]
  ]
]
"history-length-TM" is the extent of ticks for which the values are to be stored. "TM-History" is the list to store the values of variable "mode-taken". Please advise a better method that could help me achieve the intent. Thanks in advance.
I am not sure I completely understand how ticks relates to this question. My suggestion would be something along these lines:
globals [history-length-TM]
users-own [TM-history]

to setup
  set history-length-TM 2
  ; ... also create the users here and initialise each TM-history to the empty list []
  ...
end

to go
  ask users
  [
    ....
    set TM-history fput mode-taken TM-history
    if length TM-history > history-length-TM [set TM-history but-last TM-history]
  ]
end
The idea is that the memory fills up (using fput) by placing the new mode-taken at the front of the list. Once the memory is too long, then the last (which is oldest) is dropped off the list.
I'm using Python 2.7 with the Elasticsearch-DSL package to query my Elastic cluster.
I'm trying to add "from" and "limit" capabilities to the query in order to have pagination in my FE, which presents the documents Elasticsearch returns, but 'from' doesn't work right (i.e. I'm not using it correctly, I suppose).
The relevant code is:
s = Search(using=elastic_conn, index='my_index'). \
    filter("terms", organization_id=org_list)
hits = s[my_from:my_size].execute()  # if from = 10, size = 10 then I get 0 documents, although 100 documents match the filters.
My index contains 100 documents.
Even when my filter matches all results (i.e. nothing is filtered out), if I use my_from = 10 and my_size = 10, for instance, then I get nothing in hits (no matching documents).
Why is that? Am I misusing the from?
Documentation states:
from and size parameters. The from parameter defines the offset from the first result you want to fetch. The size parameter allows you to configure the maximum amount of hits to be returned.
So it seems really straightforward, what am I missing?
The answer to this question can be found in their documentation under the Pagination Section of the Search DSL:
Pagination
To specify the from/size parameters, use the Python slicing API:
s = s[10:20]
# {"from": 10, "size": 10}
The correct usage of these parameters with the Search DSL is just as you would slice a Python list: from the starting index to the end index. The size is implicitly the end index minus the start index, so for your example you want s[10:20] (from 10, size 10), not s[10:10], which asks for zero results.
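For instance, a small sketch of how a front-end page number could be mapped onto that slice (page and page_size are just illustrative names, not part of the DSL):

page = 1                       # zero-based page number requested by the FE
page_size = 10

start = page * page_size       # becomes "from" -> 10
end = start + page_size        # start + "size" -> 20

hits = s[start:end].execute()  # equivalent to {"from": 10, "size": 10}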
Hope this clears things up!
Try to pass from and size params as below:
search = Search(using=elastic_conn, index='my_index'). \
    filter("terms", organization_id=org_list). \
    extra(from_=10, size=20)
result = search.execute()