How can I use Apollo/GraphQL to incrementally/progressively query a datasource?

I have a query like this in my React/Apollo application:
const APPLICATIONS_QUERY = gql`
  {
    applications {
      id
      applicationType {
        name
      }
      customer {
        id
        isActive
        name
        shortName
        displayTimezone
      }
      deployments {
        id
        created
        user {
          id
          username
        }
      }
      baseUrl
      customerIdentifier
      hostInformation
      kibanaUrl
      sentryIssues
      sentryShortName
      serviceClass
      updown
      updownToken
    }
  }
`;
The majority of the fields in the query come from a database, so resolving them is quick. But a couple of the fields, like sentryIssues and updown, rely on external API calls, which makes the overall query very slow.
I'd like to split the query into the database portion and the external API portion so I can show the applications table immediately and display loading spinners for the two columns that hit an external API... But I can't find a good example of incremental/progressive querying, or of merging the results of two queries, with Apollo.

This is a good example of where the @defer directive would be helpful. You can indicate which fields you want to defer for a given query like this:
const APPLICATIONS_QUERY = gql`
  {
    applications {
      id
      applicationType {
        name
      }
      customer @defer {
        id
        isActive
        name
        shortName
        displayTimezone
      }
    }
  }
`
In this case, the client will make one request but receive two responses -- the initial response with all the requested fields except customer, and a second "patch" response with just the customer field, fired once that resolver finishes. The client does the heavy lifting and stitches the two responses together for you -- there's no additional code necessary.
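For illustration only, the two payloads look roughly like this (shapes abbreviated; the exact patch format depends on the preview version you install):
// Initial response: everything except the deferred field, which starts as null
{ "data": { "applications": [{ "id": "1", "customer": null, ... }] } }
// Patch response: just the deferred field, addressed by its path in the result
{ "path": ["applications", 0, "customer"], "data": { "id": "7", "isActive": true, ... } }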
Please be aware that only nullable fields can be deferred, since the initial value sent with the first response will always be null. As a bonus, react-apollo exposes a loadingState property that you can use to check the loading state for your deferred fields:
<Query query={APPLICATIONS_QUERY}>
  {({ loading, error, data, loadingState }) => {
    const customerComponent = loadingState.applications.customer
      ? <CustomerInfo customer={data.applications.customer} />
      : <LoadingIndicator />;
    // ...
  }}
</Query>
The only downside is this is an experimental feature, so at the moment you have to install the alpha preview version of both apollo-server and the client libraries to use it.
See the docs for full details.
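For example, installation looks something like this (the exact dist-tags for the preview may differ; check the docs):
npm install apollo-server@alpha react-apollo@alpha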

Postman parameterized tests with actual values and expected errors for same request

I have a request with a number of test cases: same endpoint, different input values, different expected error messages.
I would like to create a parameterized request that sends a particular value and checks for the particular error message from a list covering all of the cases.
Request body:
{
  "username": "{{username}}",
  "password": "{{password}}",
  ...
}
Response:
{
  "error_message": "{{error_message}}",
  "error_code": "{{error_code}}"
}
The error message changes depending on the case:
Missed username
Missed password
Incorrect password or username
etc.
Now, I have a separate request for each case.
Question:
Is there a way to have one request with a set of different values, checking the particular error messages/codes?
Create a CSV:
username,password,error_message,error_code
username1,password1,errormessage1,errorcode1
username2,password2,errormessage2,errorcode2
Now use this as the data file in the Collection Runner or newman.
The variable name is the same as the column name, and for each iteration the variable takes the corresponding row-column value. E.g. for iteration 1, username will be username1.
As danny mentioned, Postman has really rich documentation that you can make use of:
https://learning.postman.com/docs/running-collections/working-with-data-files/
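In the request's Tests tab you can then assert against the expected values for the current data row. A minimal sketch, using the field names from the response body above:
pm.test("returns the expected error for this data row", function () {
    const body = pm.response.json();
    // error_message and error_code come from the current data-file row
    pm.expect(body.error_message).to.eql(pm.variables.get("error_message"));
    pm.expect(body.error_code).to.eql(pm.variables.get("error_code"));
});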
Adding another answer on how to run data-driven tests from the same request:
Create an environment variable called "csv" and paste the content below as its value:
username,password,error_message,error_code
username1,password1,errormessage1,errorcode1
username2,password2,errormessage2,errorcode2
Now in the Pre-request Script, add:
// Parse the CSV from the environment variable once, on the first iteration
if (!pm.variables.get("index")) {
    const parse = require('csv-parse/lib/sync');
    // Environment variable where we copy-pasted the csv content
    const input = pm.environment.get("csv");
    const records = parse(input, {
        columns: true,
        skip_empty_lines: true
    });
    pm.variables.set("index", 0);
    pm.variables.set("records", records);
}

const records = pm.variables.get("records");
let index = pm.variables.get("index");
if (index !== records.length) {
    // Expose each column of the current row as a variable, e.g. {{username}}
    for (const [column, value] of Object.entries(records[index])) {
        pm.variables.set(column, value);
    }
    pm.variables.set("index", ++index);
    // Keep re-running this same request until every row has been used
    if (index !== records.length) {
        postman.setNextRequest(pm.info.requestName);
    }
}
Now you can run data-driven tests for that one particular request.
Example collection:
https://www.getpostman.com/collections/eb144d613b7becb22482
Use the same data as the environment variable's content, then run the collection using the Collection Runner or newman.
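For example, a newman run for the pre-request-script approach might look like this (file names are placeholders):
newman run my_collection.json -e environment_with_csv.json
For the data-file approach from the first answer, pass the CSV directly instead:
newman run my_collection.json -d test_cases.csv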

What I should use instead of resolvers?

As you already know, local resolvers are deprecated, so we can't rely on them as a forward-looking way of handling a REST cache. What should we use instead of resolvers?
Field policies don't seem good for that at all. Let's imagine... You have two different client queries: getBooks and getBook. Each query gets its data from the REST API. Somehow we need to handle the situation where we have already received the data from getBooks and then run the getBook query. getBook should not make a request, because the data is already cached. We did that in resolvers before they were deprecated: we just checked the cache and returned the data if it already existed there; if not, we made the request. How can we handle this under the current circumstances?
Sorry, but that's a bit different from what I meant. Here is a code example:
export const getBooks = gql`
  query getBooks {
    getBooks
      @rest(
        type: "Book"
        path: "books"
        endpoint: "v1"
      ) {
      id
      title
      author
    }
  }
`
export const getBook = gql`
  query getBook($id: Int!) {
    getBook(id: $id)
      @rest(
        type: "Book"
        path: "book/{args.id}"
        endpoint: "v1"
      ) {
      id
      title
      author
    }
  }
`
So we have two different queries. The goal is that when we run both in turn, getBook should not make a REST request, because we already have the same data in the cache from getBooks. Before resolvers were deprecated we handled this in a resolver: if the ID didn't exist in the cache, make a request; if it did, return the data from the cache. How can we do that now?
As you can see, fetchPolicy is something completely different.
Local-only fields are also not a good fit, because they are about individual fields, not about the whole entity.
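For reference, the cache-redirect pattern that Apollo Client 3 documents as the replacement for this resolver trick is a field policy read function that returns a reference to the already-normalized object. A minimal sketch, assuming Book objects are normalized by id:
import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        getBook: {
          // If a Book with this id is already in the cache (e.g. from getBooks),
          // resolve to that object instead of starting from nothing.
          read(existing, { args, toReference }) {
            return existing ?? toReference({ __typename: 'Book', id: args.id });
          },
        },
      },
    },
  },
});
Whether this actually short-circuits the network still depends on the query's fetchPolicy and on all requested fields being present on the cached object.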

How to get subscription data from client cache?

I'm new to all the hot GraphQL/Apollo stuff.
I have a subscription which gets a search result:
export const SEARCH_RESULTS_SUBSCRIPTION = gql`
  subscription onSearchResultsRetrieved($sid: String!) {
    searchResultsRetrieved(sid: $sid) {
      status
      clusteredOffers {
        id
      }
    }
  }
`;
Is it possible to query the "status" field from the client cache if I need it inside another component? Or do I have to use an additional query?
In the Apollo dev tools I can see that there is a cache entry under "ROOT_SUBSCRIPTION", not "ROOT_QUERY". What does that mean?
Thanks!
I found out that subscribeToMore is my friend here.
First I wrote a normal query for the data I want to subscribe to, so the data is cached; then the cache is kept up to date by the subscription.
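A minimal sketch of that pattern (SEARCH_RESULTS_QUERY and the shape of its result are assumptions, not from the original post):
import { useEffect } from 'react';
import { useQuery } from '@apollo/client';

function SearchResults({ sid }) {
  // The normal query seeds the cache, so other components can read "status" from it
  const { data, subscribeToMore } = useQuery(SEARCH_RESULTS_QUERY, {
    variables: { sid },
  });

  useEffect(() => {
    // Merge every subscription payload into the cached query result,
    // so anything reading this query updates automatically.
    const unsubscribe = subscribeToMore({
      document: SEARCH_RESULTS_SUBSCRIPTION,
      variables: { sid },
      updateQuery: (prev, { subscriptionData }) => {
        if (!subscriptionData.data) return prev;
        return { searchResults: subscriptionData.data.searchResultsRetrieved };
      },
    });
    return unsubscribe;
  }, [sid, subscribeToMore]);

  // ... render from data
}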
<3 apollo

Modify ChangeStream Response in LoopBack 3

First off, if you're not familiar with change streams, please read this.
It seems, when using lb to scaffold applications, that a change stream endpoint is automatically created for models. I have already successfully implemented a change stream where, on submitting a new model instance to my Statement model the changes are sent to all connected clients in real time. This works great.
Except it only sends the modelInstance of the Statement model. I need to know a bit about the user that submitted the statement as well. Since Statement has a hasOne relationship with my user model, I would normally make my query with an include filter. But I'm not making a query here... that's not how change streams work. The node server sends the information to the client without any query for that information being sent first.
My question is, how can I hook the outgoing change stream in the Statement model so that I can pull in the needed data from the user model? Something like:
module.exports = function(Statement) {
  // hookChangeStream is hypothetical -- this is the kind of API I'm looking for
  Statement.hookChangeStream(function(ctx, statementInstance, cb) {
    const myUser = Statement.app.models.myUser;
    myUser.findOne({ where: { id: statementInstance.userId } }, function(err, userInstance) {
      if (err !== undefined && err !== null) return cb(err, null);
      // strip sensitive data from user model
      const cleanUserInstance = someCleanerFunc(userInstance);
      // add cleaned myUser modelInstance to Statement modelInstance
      statementInstance.user = cleanUserInstance;
      cb(null, true);
    });
  });
};
Can this be done? If so, how?

LoopBack: "order" filter not always applied

I have a model that uses the memory connector. On the client side the REST-API request looks like this:
TrackedAircraft.find({ filter:
{ order: 'altitude ASC',
where: { altitude: { neq: null }}
}
}).$promise.then(function (results) {
$scope.aircrafts = results;
});
"altitude" is a numeric value. Most of the time this works as expected, but like 1% of the requests end up with the default order.
You can inspect your DB queries in the console by starting the app as:
DEBUG=loopback:datasource slc run
If the queries turn out to be correct but the response is not, dig deeper into the DB's result handler.