LoopBack 4: how to use the hasOne get method

I have set up a hasOne relation between two models, product and product-price: the product has one product-price, and the product-price belongs to the product. But I am unable to use the get method that was added to the product repository after implementing the hasOne relation.
Here is the code
@get('/products/{id}/product-price', {
  responses: {
    '200': {
      description: 'Array of productPrice\'s belonging to Product',
      content: {
        'application/json': {
          schema: { type: 'array', items: getModelSchemaRef(ProductPrice) },
        },
      },
    },
  },
})
async find(
  @param.path.number('id') id: number,
  @param.query.object('filter') filter?: Filter<ProductPrice>,
  @param.query.object('where', getWhereSchemaFor(ProductPrice)) where?: Where<ProductPrice>,
): Promise<ProductPrice[]> {
  return this.productRepository.productPrices(id).get(filter) // error
}
Here is the error:
Type 'ProductPrice' is missing the following properties from type 'ProductPrice[]': length, pop, push, concat, and 26 more

I think you should look back at the TypeScript problem. You are querying a hasOne relation, which resolves to a single instance, so your response type should be Promise<ProductPrice>, not Promise<ProductPrice[]>.
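A minimal plain-TypeScript sketch of the mismatch (the ProductPrice shape and helper names below are hypothetical stand-ins, not LoopBack code): a hasOne accessor's get() yields one instance, so the method's declared return type must match.

```typescript
// Hypothetical stand-in for the ProductPrice model.
interface ProductPrice {
  id: number;
  price: number;
}

// Stand-in for this.productRepository.productPrices(id).get(filter):
// a hasOne accessor resolves to a single instance, not an array.
async function hasOneGet(id: number): Promise<ProductPrice> {
  return { id, price: 100 };
}

// Declaring Promise<ProductPrice[]> here is what triggers the compile
// error in the question; the correct type matches the single instance.
async function find(id: number): Promise<ProductPrice> {
  return hasOneGet(id);
}
```

In the controller itself the same change applies: declare `Promise<ProductPrice>`, and the `200` response schema should describe a single object rather than an array.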


Nested resolvers with depth greater than 1

The Problem
Looking at this GraphQL query,
query {
  asset {
    name
    interfaces {
      created
      ip_addresses {
        value
        network {
          name
        }
      }
    }
  }
}
How do I define a resolver for just the network field on ip_addresses?
My First Thought
Reading the docs, they give examples of single-level nested queries, e.g.
const resolverMap = {
  Query: {
    author(obj, args, context, info) {
      return find(authors, { id: args.id });
    },
  },
  Author: {
    posts(author) {
      return filter(posts, { authorId: author.id });
    },
  },
};
So I thought - why not just apply this pattern to nested properties?
const resolverMap = {
  Query: {
    asset,
  },
  Asset: {
    interfaces: {
      ip_addresses: {
        network: () => console.log('network resolver called'),
      },
    },
  },
};
But this does not work: when I run the query, I do not see the console log.
Further Testing
I wanted to make sure that a resolver will always be called if it's at the root level of the query's return type.
My hypothesis:
Asset: {
  properties: () => console.log('properties - will be called'), // This will get called
  interfaces: {
    created: () => console.log('created - wont be called'),
    ip_addresses: {
      network_id: () => console.log('network - wont be called'),
    },
  },
},
And sure enough my console showed
properties - will be called
The confusing part
But somehow Apollo is still using default resolvers for created and ip_addresses, as I can see the returned data in Playground.
Workaround
I can implement "monolith" resolvers as follows:
Asset: {
  interfaces,
},
Where the interfaces resolver does something like this:
export const interfaces = ({ interfaces }) =>
  interfaces.map(interfaceObj => ({
    ...interfaceObj,
    ip_addresses: ip_addresses(interfaceObj),
  }));

export const ip_addresses = ({ ip_addresses }) =>
  ip_addresses.map(ipAddressObj => ({
    ...ipAddressObj,
    network: network(null, { id: ipAddressObj.network_id }),
  }));
But I feel that this should be handled by default resolvers, as these custom resolvers aren't actually doing anything but passing data down to another resolver.
The resolver map passed to the ApolloServer constructor is an object where each property is the name of a type in your schema. The value of this property is another object, wherein each property is a field for that type. Each of those properties then maps to a resolver function for that specified field.
You posted a query without posting your actual schema, so we don't know what any of your types are actually named, but assuming the network field is, for example, Network, your resolver map would need to look something like:
const resolver = {
  // ... other types like Query, IPAddress, etc. as needed
  Network: {
    name: () => 'My network name'
  }
}
You can, of course, introduce a resolver for any field in the schema. If the field returns an object type, you return a JavaScript Object and can let the default resolver logic handle resolving "deeper" fields:
const resolvers = {
  IPAddress: {
    network: () => {
      return {
        name: 'My network name',
      }
    }
  }
}
Or...
const resolvers = {
  Interface: {
    ip_addresses: () => {
      return [
        {
          value: 'Some value',
          network: {
            name: 'My network name',
          },
        },
      ]
    }
  }
}
Where you override the default resolver just depends at what point the data returned from your root-level field no longer matches your schema. For a more detailed explanation of the default resolver behavior, see this answer.
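The default resolver behavior the answer relies on can be sketched as a simple property lookup. This is a deliberate simplification, not graphql-js or Apollo's actual code: if the resolver map has no entry for Type.field, the field is read off the parent object.

```typescript
// Simplified model of field resolution: a custom resolver wins when one
// exists for (typeName, fieldName); otherwise fall back to parent[field].
type Resolver = (parent: Record<string, unknown>) => unknown;
type ResolverMap = Record<string, Record<string, Resolver>>;

function resolveField(
  map: ResolverMap,
  typeName: string,
  fieldName: string,
  parent: Record<string, unknown>,
): unknown {
  const custom = map[typeName]?.[fieldName];
  return custom ? custom(parent) : parent[fieldName]; // default resolver
}

const resolvers: ResolverMap = {
  Network: { name: () => 'My network name' },
};

// The custom resolver is used for Network.name...
const a = resolveField(resolvers, 'Network', 'name', {});
// ...while Interface.created falls through to the parent object, which is
// why the nested resolver map in the question was never consulted: keys
// under Asset must be fields of Asset, not deeper paths.
const b = resolveField(resolvers, 'Interface', 'created', { created: '2019-01-01' });
```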

Reusing data cached from one query in another

Given a simple graphql schema that looks something like:
type Contact {
  id: ID!
  name: String
}

type Query {
  RecentContacts: [Contact]
  Contact(id: ID!): Contact
}
If I query Recent contacts:
const GET_RECENT_CONTACTS = gql`
  query RecentContacts {
    RecentContacts {
      id
      name
    }
  }
`

<Query client={client} query={GET_RECENT_CONTACTS}>
  {({loading, error, data}) => { /* etc... */ }}
</Query>
And receive data for e.g. contacts with ids 1 and 2, which is cached like:
ROOT_QUERY
  RecentContacts: [Contact]
    0: Contact:1
      id: 1
      name: Jack
    1: Contact:2
      id: 2
      name: Jill
Is there a way to let Apollo know that it can use the already-cached entries for the queries Contact(id: 1) and Contact(id: 2) without making another network request just to bring back data that already exists in the cache?
Specifically, I would like for this query to not have to make a network request after RecentContacts has been queried, since the data it needs is already in the cache (albeit returned from a call to a different query):
const GET_CONTACT = gql`
  query Contact($id: ID!) {
    Contact(id: $id) {
      id
      name
    }
  }
`

<Query client={client} query={GET_CONTACT} variables={{id: 1}}>
  {({loading, error, data}) => {/* etc... */}}
</Query>
You can use cache redirects to do just that. Here's the example from the docs modified to work with your schema:
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      Contact: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Contact', id: args.id })
    },
  },
});
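The redirect works because InMemoryCache normalizes entities under keys like Contact:1 by default. A local sketch of the lookup (a simplification for illustration, not Apollo's implementation):

```typescript
// Simplified picture of the normalized cache after RecentContacts ran.
const normalizedCache: Record<string, { id: number; name: string }> = {
  'Contact:1': { id: 1, name: 'Jack' },
  'Contact:2': { id: 2, name: 'Jill' },
};

// Default cache key shape: `${__typename}:${id}`.
const getCacheKey = (obj: { __typename: string; id: number }) =>
  `${obj.__typename}:${obj.id}`;

// The cacheRedirect maps the Contact(id: $id) query arguments straight
// to a normalized cache key.
const contactRedirect = (args: { id: number }) =>
  getCacheKey({ __typename: 'Contact', id: args.id });

// Contact(id: 1) resolves from the entry RecentContacts already cached,
// so no network request is needed.
const hit = normalizedCache[contactRedirect({ id: 1 })];
```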

Is it possible to atomically add several items to DynamoDB array with checking their occurance in it (to avoid duplication)?

I have an email blacklist stored as one item in DynamoDB:
// item example
{
  id: "blackList", // Primary key of the item
  list: [ "email_1@example.com", "email_2@example.com" ]
}
It is possible to add a new email to the list and at the same time check that it's not already present (to avoid duplication) with an atomic update:
const email = "email_new@example.com";
const params = {
  TableName: "myTable",
  Key: {
    id: "blackList"
  },
  AttributeUpdates: {
    list: {
      Action: "ADD",
      Value: [email] // several emails can also be added with incorrect Expected check
    },
  },
  Expected: {
    list: {
      ComparisonOperator: "NOT_CONTAINS",
      Value: email
    },
  }
};
await docClient.update(params).promise();
The question is whether it's possible to perform the same atomic operation for several emails at once?
Use a string set if you want there to be no duplicates. If you want to see if they existed in the set before you added them, return the old item.
const email = "email_new@example.com";
const params = {
  TableName: "myTable",
  Key: {
    id: "blackList"
  },
  UpdateExpression: "ADD #email :email",
  ExpressionAttributeNames: {
    "#email": "email"
  },
  ExpressionAttributeValues: {
    ":email": docClient.createSet([email])
  }
};
await docClient.update(params).promise();
This will add email_new@example.com to the email attribute as a string set. If the email attribute doesn't exist on the item, it will be created.
See https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html#Expressions.UpdateExpressions.ADD for the documentation.
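Because ADD on a string set is a set union, passing several members in one call (e.g. docClient.createSet([email1, email2])) is still a single atomic update and cannot introduce duplicates. A local simulation of those semantics (this is plain TypeScript illustrating the behavior, not the AWS SDK):

```typescript
// Local simulation of DynamoDB's ADD semantics on a string set: the
// update unions the incoming members with the existing set, silently
// skipping members that are already present.
function addToStringSet(existing: Set<string>, incoming: string[]): Set<string> {
  const next = new Set(existing);
  for (const member of incoming) {
    next.add(member); // duplicates are ignored, no error is raised
  }
  return next;
}

const blackList = new Set(["email_1@example.com", "email_2@example.com"]);
const updated = addToStringSet(blackList, [
  "email_2@example.com", // already present: ignored
  "email_new@example.com",
]);
// updated now holds exactly three members
```

If you also need to know which of the added emails were already present, request ReturnValues: "UPDATED_OLD" on the update and diff the returned set against what you sent.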

Modify specific record from an array (method PUT)

I have a model "user", and a payload that includes an array of objects:
{
  users: [
    {
      id: 1,
      name: "user1",
      status: "new"
    },
    {
      id: 2,
      name: "user2",
      status: "new"
    },
    ...
  ]
}
But, I have to change the status of a single user. I try:
actions: {
  checkAccept(id) {
    this.store.findRecord('user', id).then((record) => {
      record.set('status', 'accepted');
      record.save();
    });
  }
}
Removing a record with the same approach works if I use record.destroyRecord(). Why is a GET request sent instead of PUT?
UPDATED
I'm using:
actions: {
  checkAccept(record) {
    this.store.adapterFor('userNetwork').updateRecord(this.store, this.userNetwork.type, record);
  }
}
but get this ERROR:
Uncaught Error: Assertion Failed: The `attr` method is not available on DS.Model, a DS.Snapshot was probably expected. Are you passing a DS.Model instead of a DS.Snapshot to your serializer?
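The assertion points at the adapter contract: updateRecord expects a DS.Snapshot (which exposes attr()), not the DS.Model itself. A plain-TypeScript sketch of that distinction (the class and function names below are stand-ins, not Ember Data's actual classes):

```typescript
// Stand-in for the snapshot interface a serializer consumes.
interface Snapshot {
  attr(name: string): unknown;
}

// Stand-in for a DS.Model: it holds attributes but has no attr() method.
class UserRecord {
  constructor(private attrs: Record<string, unknown>) {}

  // record.save() builds a snapshot like this internally before handing
  // it to the adapter's updateRecord.
  createSnapshot(): Snapshot {
    const attrs = { ...this.attrs };
    return { attr: (name) => attrs[name] };
  }
}

// Stand-in serializer logic: it only works when given a snapshot.
function serializeStatus(snapshot: Snapshot): unknown {
  return snapshot.attr('status');
}

const record = new UserRecord({ status: 'accepted' });
const status = serializeStatus(record.createSnapshot()); // works
// Passing `record` directly would fail: the model has no attr() method,
// which is exactly what the assertion above complains about.
```

In practice the usual fix is to call record.save() and let Ember Data construct the snapshot, rather than invoking the adapter's updateRecord directly.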

What is the format expected by a find(id) request?

My backend replies to find all requests:
User.find();
Like this
{ 'users' : [ user1_obj, user2_obj ] }
Ember-data is happy about it. Now if I do a simple single object find:
User.find('user1');
I have tried configuring the backend to return any of the following:
user1
{ 'user1' : user1_obj }
{ 'user' : { 'user1' : user1_obj } }
{ 'user' : user1_obj }
But none of those are working. What should I return from the backend in reply to find("obj-id") requests? According to the documentation about JSON ROOT, the right format looks like:
{ 'user' : user1_obj }
Ember does not complain about it, but the processed Ember Objects have a very strange structure: _reference.record refers to the top record, and (not shown here) the _data field is empty.
What could be causing that strange nesting?
EDIT
As linked by mavilein in his answer, the JSON API suggests using a different format for singular resources:
{ 'users' : [user1_obj] }
That means, the same format as for plural resources. Not sure if Ember will swallow that, I'll check now.
Following this specification, I would suspect the following:
{
  "users": [{
    "id": "1",
    "name": "John Doe"
  }, {
    "id": "2",
    "name": "Jane Doe"
  }]
}
For singular resources the specification says:

Singular resources are represented as JSON objects. However, they are still wrapped inside an array:
{
  "users": [{
    "id": "1",
    "name": "John Doe"
  }]
}
With User.find(), Ember expects the root key pluralized, with an array of elements as its content; the response format is the following JSON:
{
  users: [
    { id: 1, name: 'Kris' },
    { id: 2, name: 'Luke' },
    { id: 3, name: 'Formerly Alex' }
  ]
}
And with User.find(1), the root key is singular, with just one object:
{
  user: {
    id: 1, name: 'Kris'
  }
}
Here is a demo showing this working.
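The two shapes above can be summarized as a sketch of the extraction step (a simplification for illustration, not Ember Data's serializer code):

```typescript
// findAll reads a pluralized root key holding an array of records;
// find(id) reads the singular root key holding exactly one object.
interface User {
  id: number;
  name: string;
}

function extractFindAll(payload: { users: User[] }): User[] {
  return payload.users;
}

function extractFindRecord(payload: { user: User }): User {
  return payload.user;
}

const all = extractFindAll({
  users: [{ id: 1, name: 'Kris' }, { id: 2, name: 'Luke' }],
});
const one = extractFindRecord({ user: { id: 1, name: 'Kris' } });
```

The key point is that the root key changes number with the request: a payload shaped for one kind of find will not satisfy the other.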