I have a situation where I would like to conditionally exclude a field from a query selection before I hit that query's resolver.
The use case is that my underlying API only exposes certain fields based on the user's locale, and calls to this API throw errors if fields are requested that are not available for that locale.
I have tried an approach with directives:
type Person {
  id: Int!
  name: String!
  medicareNumber: String @locale(locales: ["AU"])
}

type Query {
  person(id: Int!): Person
}
Using SchemaDirectiveVisitor.visitFieldDefinition, I override field.resolve for the medicareNumber field to return null when the user's locale doesn't match any of the locales defined on the directive.
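Roughly, my visitor looks like this (a simplified sketch using graphql-tools v4, assuming the user's locale is available on the context):

const { SchemaDirectiveVisitor } = require('graphql-tools');
const { defaultFieldResolver } = require('graphql');

class LocaleDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition(field) {
    const { locales } = this.args;
    const { resolve = defaultFieldResolver } = field;
    // Resolve to null unless the user's locale is listed on the directive
    field.resolve = (source, args, context, info) =>
      locales.includes(context.locale)
        ? resolve(source, args, context, info)
        : null;
  }
}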
However, when a client with a non-"AU" locale executes the following query:
query {
  person(id: 111) {
    name
    medicareNumber
  }
}
the field resolver for medicareNumber is never called, and the query resolver makes a request to the underlying API, appending the fields in the selection set (including the invalid medicareNumber) as query parameters. The API call returns an error object at this point.
I believe this makes sense: the directive's resolver is attached to the field definition and would only be called once the person resolver returns a valid result.
Is there a way to achieve this sort of functionality, with or without directives?
In general, I would caution against this kind of schema design. As a client, if I include a field in the selection set, I expect to see that field in the response -- removing the field from the selection set server-side goes against the spec and can cause unnecessary confusion (especially on a larger team or with a public API).
If you are examining the requested fields in order to determine the parameters to pass to your API call, then forcing a certain field to resolve to null won't do anything -- that field will still be included in the selection set. In fact, there's really no way to create a schema directive that will impact the selection set of a request.
The best approach here would be to 1) ensure any potentially-null fields are nullable in the schema and 2) explicitly filter the selection set wherever your selection-set-to-parameters logic is.
EDIT:
Schema directives won't show up in the schema object available through the info argument, so they can't be used as flags at runtime. My suggestion would be to maintain a separate in-memory map. For example:
const fieldsByLocale = {
  US: {
    Person: ['name', 'medicareNumber'],
  },
  AU: {
    Person: ['name'],
  },
}
Then you could just access the appropriate list to filter with fieldsByLocale[context.locale][info.returnType]. This filtering logic is specific to your data source (in this case, the external API), so it's a bit cleaner than "polluting" the schema with information that pertains to the storage layer. If the API changes, or you switch to a different source for this information altogether (like a database), you can update the resolvers without touching your type definitions. In fact, this way, the filtering logic can easily live inside a domain/service layer instead of your resolvers.
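For instance, a minimal sketch of that filtering inside the person resolver might look like this (assuming context.locale is populated, ignoring fragments and aliases for brevity; fetchPersonFromApi is a hypothetical API client):

const resolvers = {
  Query: {
    person: async (source, { id }, context, info) => {
      // Field names the client actually requested on Person
      const requested = info.fieldNodes[0].selectionSet.selections
        .filter((sel) => sel.kind === 'Field')
        .map((sel) => sel.name.value);
      // Keep only the fields valid for this locale
      const allowed = fieldsByLocale[context.locale][String(info.returnType)] || [];
      const params = requested.filter((name) => allowed.includes(name));
      return fetchPersonFromApi(id, params);
    },
  },
};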
Several times I have come across the want for a Django model field that comprises multiple database columns, and I am wondering what the most Django-like way to do it would be.
Three use cases come specifically to mind.
I want to provide a field that wraps another field, keeping a record of whether the wrapped field has been set or not. A use case for this particular field would be dynamic configuration: a new configuration value is introduced, and a view marks itself as dependent upon that value, redirecting if it isn't set. Storing whether it has been set allows for easy indefinite caching of that state. It also lets the configuration value itself be non-nullable, and the application can ignore whatever value it holds while unset.
I want to provide a money field that combines a decimal (or integer) value, and a currency.
I want to provide a file field linked to some manner of access rule that determines whether a request for the file should succeed.
For each of the use cases, there exists a workaround, that in each case seems less elegant.
Define the configuration fields as nullable. This is undesirable for a few reasons: it removes the validity of NULL as a value for the configuration itself, so tristates and other valid use cases for NULL have to become a pair of fields, a different data type, or an edge case; null=True on the fields allows them to be set back to None in ModelForms and the admin without writing a custom FormField for them every time; and every nullable column in a database is arguably bad design.
Define the field as a subclass of DecimalField with an argument accepting a string, and use that to contribute another field to the model (this is what django-money does). Again, this is undesirable: fields appear "as if by magic" on the model, and configuring the currency field becomes non-obvious.
Define the combined file+rule field instead as an entire model, and one-to-one to it from the model where you want to have the field. This is a solution to all use cases, but again comes with downsides: there's an extra JOIN required for every instance of the field - one can imagine a User with profile_picture, cv, passport, private_key etc.; there's an implicit requirement to .select_related(*fields) on every query that would ever want to access the fields; and the layout of the related model is going to have cold data interleaved with hot data all over the place given that it's reused everywhere.
In addition to solution 3, there's also the option to define a mixin factory that produces the multiple fields with matching names and whatever desired properties and methods. Again, this isn't perfect: the user ends up with fields defined not only in the model body but also above it, in the inheritance list.
I think the main reason this keeps sending me in circles is because custom Django model fields are always defined in terms of a single base field, because it's done by inheritance.
What is the accepted way to achieve this end?
I'm upgrading my application with Apollo Client from v2 to v3 and I can't find the correct solution to the following problem.
I have a schema with a product and inside this product, a price. This price is not a simple number as it contains the duty free value, the all taxes included value and the VAT.
type Product {
  id: ID
  price: Price
}

type Price {
  dutyFree: Float
  allTaxesIncluded: Float
  VAT: Float
}
In Apollo Client 2, whenever there was no explicit id or _id property, the InMemoryCache created a fallback fake identifier to normalize data, based on the path to the object.
In Apollo Client 3, this fallback fake identifier is no longer generated. Instead, you have two options to handle non-normalized data. The first is to use the new TypePolicy option to indicate explicitly that the data you receive should not be normalized. In that case, the data is embedded within the parent normalized object.
From the docs:
Objects that are not normalized are instead embedded within their parent object in the cache. You can't access these objects directly, but you can access them via their parent.
new InMemoryCache({
  typePolicies: {
    Price: {
      keyFields: false,
    },
  },
})
All happy, I thought my problem was solved. Well, wrong... I can create a product in my app and add a price. But whenever I change an existing price, I get the following warning:
Cache data may be lost when replacing the price field of a Product object.
This is because, when I fetch my Product after an update, the InMemoryCache does not know how to merge the price field, since no id is defined; that, of course, is the whole point of non-normalized data.
I know there is a second option: explicitly define a merge function for my Product.price field. But this example is a simpler version of the reality. I have a large number of fields across multiple objects which are typed Price, and manually defining a merge function for each and every one of them (even by externalizing the common logic in a function) strikes me as inefficient and error-prone.
So my question is: what did I misunderstand about the keyFields: false option, and what can I do to solve this problem without having to define a merge function for 50+ fields in my app?
Thanks for the help :)
I'm not sure you've misunderstood keyFields: false. My understanding is that when the Product is updated in the cache, InMemoryCache must handle any differences in the Price objects embedded in the price field of the old Product and the new Product. If there isn't a TypePolicy to define how that should be done, the cache logs a warning.
Starting in Apollo Client 3.3, merge functions can be defined for types in addition to fields. Here's an example from their docs:
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        // No longer necessary!
        // author: {
        //   merge: true,
        // },
      },
    },
    Author: {
      merge: true,
    },
  },
});
Since you don't want to define a merge function on a field-by-field basis, you might try defining the merge function for the Price type instead.
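Applied to your schema, that might look something like this (a sketch; keyFields: false keeps Price embedded in its parent, while merge: true shallow-merges incoming Price data into the existing data):

const cache = new InMemoryCache({
  typePolicies: {
    Price: {
      keyFields: false, // Price objects stay embedded in their parent
      merge: true, // shallow-merge incoming Price data into the existing data
    },
  },
});

Because the policy is attached to the Price type itself, it covers every field in the schema that returns a Price, with no need for 50+ field-level merge functions.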
Using AWS CloudSearch, I need to query two separate fields for the same value using a structured (compound) query, e.g.
(and (or name:'john smith') (or curr_addr:'123 someplace' other_addr:'123 someplace'))
This query works, but I'm wondering if it's necessary to repeat the value for each field that I want to search against. Is there some way to specify the value only once, e.g. curr_addr+other_addr:'123 someplace'?
That is the correct way to structure your compound query. From the AWS documentation, you'll see that they structure their example query the same way:
(and title:'star' (or actors:'Harrison Ford' actors:'William Shatner')(not actors:'Zachary Quinto'))
From Constructing Compound Queries
You may be able to get around this by listing the more repetitive fields in the query options (q.options), and then specify the field for the rest of the fields. The fields list is sort of a fallback for when you don't specify which field you are searching in a compound query. So if you list the address fields there, and then only specify the name field in your query, you may get close to the behavior you're looking for.
Query options
q.options={"fields":["curr_addr","other_addr"]}
Query
(and (or name:'john smith') (or '123 someplace'))
But this approach would only work for one set of repetitive fields, so it's not a silver bullet by any means.
From Search API Reference (see q.options => fields)
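For what it's worth, a full request combining the two might look like this with the AWS CLI (a sketch; the endpoint URL is a placeholder for your search domain):

aws cloudsearchdomain search \
  --endpoint-url https://search-mydomain.us-east-1.cloudsearch.amazonaws.com \
  --query-parser structured \
  --query-options '{"fields":["curr_addr","other_addr"]}' \
  --search-query "(and (or name:'john smith') (or '123 someplace'))"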
I'm working on a project which includes the following activated modules:
Drupal core 8.2.3
Database Search 8.x-1.0-beta4
Search API 8.x-1.0-beta4
Search API Term Handlers 8.x-1.0-beta4
Views 8.2.3
I have a list of nids which need to be excluded from the search result of the site-wide search. The search uses Search API and has been setup using Views.
The table in the database is: "search_api_db_default_index"
The field I wish to target is: "nid"
I wasn't able to get hook_search_api_query_alter or hook_search_api_results_alter to fire, so I am attempting to manipulate the query through hook_views_query_alter.
I have attempted to use both the "addWhere" and "addCondition" methods with the following syntax:
When using the addCondition method, I attempted
$query->addCondition('search_api_db_default_index.nid', $oneBadNid, '<>');
and
$query->addCondition('search_api_db_default_index.nid', $manyBadNids, 'NOT IN');
and when using the addWhere method, I attempted
$query->addWhere('AND', 'search_api_db_default_index.nid', $oneBadNid, '<>');
and
$query->addWhere('AND', 'search_api_db_default_index.nid', $manyBadNids, 'NOT IN');
Regardless of whether or not I prefix the field with the table name, searching always triggers the following notice:
Unknown field in filter clause: 'search_api_db_default_index.nid' .
It seems that the field name is always wrapped in an HTML-encoded single quotation mark, and this occurs whether I use double or single quotation marks around the supplied table.field parameter.
I am not even sure that this is what is keeping me from altering my query, but it is the only thing close to an error which I have discovered in this process. It's also possible that I'm simply not supposed to be targeting the table in the manner written, but I did not find any documentation directing me to the proper methodology.
I would appreciate any insight into this issue! Thanks!
Generally you can use
$fields = $query->getIndex()->getFields();
on the query to get an array of fields you can use within the search_api query.
Piggy-backing off of Nebel54's comment, and attempting this on my own: you don't need to include the 'table' name when calling addCondition. However, I did need to use hook_search_api_query_alter rather than a views-specific hook.
function mymodule_search_api_query_alter(\Drupal\search_api\Query\QueryInterface &$query) {
  // Ensure field_myfield is being indexed
  $fields = $query->getIndex()->getFields();
  if (isset($fields['field_myfield'])) {
    $query->addCondition('field_myfield', 'myvalue', '<>');
  }
}
I am building a web service API, using JSON as the data language. Designing the structure of the data returned from the service, I am having some trouble deciding how to deal with missing values.
Consider this example: I have a product in my web store for which the price is yet unknown, maybe because the product has not yet been released. Do I include price: null (as shown below) or do I simply omit the price property on this item?
{
  "name": "OSX 10.6.10",
  "brand": "Apple",
  "price": null
}
My main concern is making the API as easy to consume as possible. The explicit null value makes it clear that a price can be expected on a product, but on the other hand it seems like wasted bytes. There could be a whole bunch of properties that are completely irrelevant to this particular product, while relevant for other products – should I show these as explicitly null as well?
{
  "name": "OSX 10.6.10",
  "price": 29.95,
  "color": null,
  "size": null
}
Are there any "best practices" on web service design, favoring explicit or implicit null values? Any de-facto standard? Or does it depend entirely on the use case?
FWIW, my personal opinion:
Do I include price: null (as shown below) or do I simply omit the price property on this item?
I would set the values of "standard" fields to null. Although JSON is often used with JavaScript, where missing properties can be handled much like properties set to null, this need not be the case for other languages (e.g. Java). Having to test first whether a field is present seems inconvenient. Setting the values to null but keeping the fields present is more consistent.
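For example (a JavaScript sketch; showPriceUnavailable is a hypothetical handler, and the same distinction exists in most languages):

// With the field always present, one check suffices:
if (product.price === null) {
  showPriceUnavailable();
}

// With omitted fields, consumers must first test for presence:
if (!('price' in product) || product.price === null) {
  showPriceUnavailable();
}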
There could be a whole bunch of properties that are completely irrelevant to this particular product, while relevant for other products – should I show these as explicitly null as well?
I would only include those fields that are relevant for a product (e.g. not pages for a CD). It's the client's task to deal with these "optional" fields properly. If you have no value for a certain field which is relevant to a product, set it to null too.
As already said, the most important thing is to be consistent and to clearly specify which fields can be expected. You can reduce the data size using gzip compression.
I don't know which is "best practice", but I usually don't send fields that I don't need.
When I read the response, I check whether the value is present:
if (res.size !== undefined) {
  // the response includes a size
}