GraphQL: prevent empty array as input [duplicate] - apollo

This question already has an answer here:
GraphQL: Non-nullable array/list
(1 answer)
Closed 2 years ago.
I'm learning GraphQL and I've run into a problem that I'm unable to resolve.
I have defined input types in the schema as:
input School {
  name: String!
  exams: [Subject!]!
}

input Subject {
  name: String!
  mark: Int!
}
But when running the server, I send data such as:
"data": {
  "name": "varun",
  "exams": []
}
The server does not throw any validation error.
Is there any way to get a validation error when exams is passed as an empty array?
I have tried alternative approaches, but nothing works at the schema level; I have had to add an explicit check that the exams field is not an empty array.
Thanks

GraphQL cannot validate the length of a list. You need to include this validation logic as part of your resolver.
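A minimal sketch of what that resolver-level check could look like (the mutation name addSchool and the error message are assumptions for illustration, not from the question):

```javascript
// Hypothetical Apollo resolver map: reject an empty exams array explicitly,
// since the schema's [Subject!]! only forbids null, not an empty list.
const resolvers = {
  Mutation: {
    addSchool: (_, { data }) => {
      if (!Array.isArray(data.exams) || data.exams.length === 0) {
        throw new Error('exams must contain at least one subject');
      }
      // ... persist the school here; returned as-is for the sketch
      return data;
    },
  },
};
```

The same guard could also live in a shared validation helper if several mutations accept lists that must be non-empty.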


regexp_match not working on text string with special characters in google data studio

I've been trying to use REGEXP_MATCH to create a custom field in Google Data Studio, but it's not working as expected.
Example of the data I'm using it on (this is how the data is formatted in the tags_name field):
{construction,po-johnson,po-james}
{construction,po-sandy,po-occonor}
The objective is to check if a certain name exists, then create a new label.
Here's the code I'm trying (tags_name is the field name where the original text string exists):
CASE
WHEN REGEXP_MATCH(tags_name, ".*(johnson?).*") THEN "Marc Johnson"
WHEN REGEXP_MATCH(tags_name, ".*(occonor?).*") THEN "Sam Occonor"
ELSE "undefined"
END
Is this happening due to the presence of the curly brackets/commas/hyphens?
I've tried to reproduce the error in Google Data Studio based on your problem statement. Everything worked exactly as expected though.
I entered your input (and a few other expressions for confirmation) in the tags_name field and placed your REGEXP_MATCH function into another field; each row received the expected label.
Is this the result you expected?
Is there still an issue? If so, you could edit your question and add corresponding screenshots.
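As a quick sanity check outside Data Studio, the same CASE logic can be reproduced with ordinary regular expressions, which confirms the curly brackets, commas, and hyphens in the sample rows do not interfere with the patterns (the label function below is just an illustration, not Data Studio code):

```javascript
// Stand-in for the CASE / REGEXP_MATCH logic from the question.
const label = (s) =>
  /.*(johnson?).*/.test(s) ? 'Marc Johnson'
  : /.*(occonor?).*/.test(s) ? 'Sam Occonor'
  : 'undefined';

const rows = [
  '{construction,po-johnson,po-james}',
  '{construction,po-sandy,po-occonor}',
];
```

Running label over the two sample rows yields "Marc Johnson" and "Sam Occonor", matching the intent of the original CASE expression.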

How to search for "any" with .where() in firestore? [duplicate]

This question already has answers here:
Conditional where clause in firestore queries
(2 answers)
Closed 3 years ago.
Is there a way to have an "empty" where query (one that matches anything)?
Having
db.collection('skies').where('sky','==','blue')
Right now I have to do:
if (searchForAny){
db.collection('skies').otherStuff();
}
else {
db.collection('skies').where('sky','==','blue').otherStuff();
}
Is there a way to have an "empty" where query (that searches for anything)?
Yes, there is. To solve that, simply remove the call to:
.where('sky','==','blue')
When you use the where() function, you tell Firestore to return only those documents where the sky property holds the value blue. If you need all documents, use only a reference to the skies collection, not a query.
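A common way to avoid duplicating the rest of the chain is to build the query conditionally. Below is a sketch of that pattern; a tiny in-memory stand-in for the collection is included so the example runs on its own, but the real Firestore Query API chains the same way:

```javascript
// Minimal stub mimicking the chainable shape of a Firestore collection;
// only the '==' operator is modeled. Not the Firestore SDK itself.
function collectionStub(docs) {
  return {
    where(field, op, value) {
      return collectionStub(docs.filter((d) => d[field] === value));
    },
    get() {
      return docs;
    },
  };
}

// Build the query conditionally: apply where() only when filtering.
function fetchSkies(skiesCollection, searchForAny) {
  let query = skiesCollection;
  if (!searchForAny) {
    query = query.where('sky', '==', 'blue');
  }
  return query.get();
}
```

With the real SDK, skiesCollection would be db.collection('skies') and get() would return a promise of a snapshot, but the branching logic is identical.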

Sort: Get the author's documents before other documents

I have the following question:
Is there any way to use a specific constant value as a tie-breaker when sorting?
Here's my example:
Assume the index structure:
{
title: "Title",
author: "Name of author"
}
Say we have the following search query: http://example.cloudsearch.amazonaws.com/2013-01-01/search?q=test&return=_all_fields%2C_score&sort=_score%20desc
The problem is that if there are 10 documents with the same title "test", they will all have the same score. I want to sort these documents so that the documents created by the current author come out on top. I have tried using an expression, but I can't seem to get it to work. This is what I tried:
http://example.cloudsearch.amazonaws.com/2013-01-01/search?q=test&return=_all_fields%2C_score&sort=_score%20desc,isauthor%20desc&expr.isauthor=(author%3D%3D%id) However, I doubt that CloudSearch will accept that. Is there any way to solve this via the search string, or do I need to index something like a numeric author identifier?
If anyone else is interested, this is what I ended up doing.
Created a new index field, author_identifier, which I used to index a unique integer identifier for each user (which was another challenge, because the user IDs were GUIDs, so I had to map them to numbers).
Then I used a sort expression: http://example.cloudsearch.amazonaws.com/2013-01-01/search?q=test&return=_all_fields%2C_score&sort=_score%20desc,isauthor%20desc&expr.isauthor=(author_identifer%3D%3D%mappedid)
This was not an ideal solution, but it's better than nothing I guess.
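For clarity, the ordering the sort expression encodes can be sketched outside CloudSearch like this (the currentAuthorId value and the hit objects are made-up illustration data, not from the question):

```javascript
// Tie-break sketch: sort by _score descending, and among equal scores put
// the current author's documents first, mirroring
// sort=_score desc,isauthor desc with isauthor=(author_identifier==mappedid).
const currentAuthorId = 42; // assumed numeric id mapped from the user's GUID

function sortHits(hits) {
  return [...hits].sort(
    (a, b) =>
      b._score - a._score ||
      (b.author_identifier === currentAuthorId) -
        (a.author_identifier === currentAuthorId)
  );
}
```

The score still dominates; the author check only decides order within a group of equal scores.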

Creating Index based on another field in logstash

This question was asked 3 months ago. One of the answers helped me, but it doesn't solve every issue.
I am new to ELK and I have an issue building the index based on another field.
Alain Collins' solution (see link) is pretty good: I could format the index as I wanted, but the send_to field appears in the output and cannot be removed. send_to acts as a temporary variable used in the index name. Is there any way to not output the send_to field?
Sure - use a relatively new feature called @metadata.
Put the value in a field like [@metadata][send_to], which you can then refer to in the output stanza. @metadata fields aren't sent to Elasticsearch, so they won't "pollute" your documents.
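A sketch of a pipeline using this, where the mutate filter and the index pattern are assumptions for illustration rather than the asker's actual config:

```
filter {
  mutate {
    # build the target index name in a field that is never indexed
    add_field => { "[@metadata][send_to]" => "app-logs-%{+YYYY.MM.dd}" }
  }
}
output {
  elasticsearch {
    index => "%{[@metadata][send_to]}"
  }
}
```

The documents stored in Elasticsearch contain no send_to field, yet the index name is still driven by it.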

Handling invalid dates in Oracle

I am writing simple SELECT queries which involve parsing a date out of a string.
The dates are typed in manually by users of a web application and are stored as strings in the database.
I use a CASE statement to handle various date formats and apply the correct format specifier in the TO_DATE function accordingly.
However, users sometimes enter something that is not a valid date (e.g. 13-31-2013) by mistake, and then the entire query fails. Is there any way to handle such rogue records and replace them with some default date in the query, so that the entire query does not fail because of a single invalid date record?
I have already tried regular expressions, but AFAIK they are not quite reliable when it comes to handling leap years and 30/31-day months.
I don't have privileges to create stored procedures or anything like that. It's just a plain SELECT query executed from my application.
This is a client-side task.
The DB will give you an error for an invalid date (the DB does not have a "TO_DATE_AND_FIX_IF_NOT_CORRECT" function).
If you get this error, it means you have already tried to cast something to an invalid date.
I recommend doing the conversion to a date on your application server and, in case of an exception in your code, sending a default date to the DB.
That way you also send the DB an object of type DbDate rather than a string, which achieves two goals:
1. The dates will always be what you want them to be (validated on the client).
2. You close the door on SQL injection attacks.
It sounds like in your case you should write the function I mentioned...
It should look something like this:
Create or replace function TO_DATE_SPECIAL(in_date in varchar2) return DATE is
  ret_val date;
begin
  ret_val := to_date(in_date, 'MM-DD-YYYY');
  return ret_val;
exception
  when others then
    return to_date('01-01-2000', 'MM-DD-YYYY');
end;
Within the query, use the new function instead of to_date.
That way, instead of failing, the query will give you back a default date.
There is no IsDate function, so you'll have to create an object for it...
I hope you've got the idea and how to use it; if not, let me know.
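A minimal sketch of calling the function from a query (the table and column names below are hypothetical, for illustration only):

```
-- 'registrations' and 'entered_date' are made-up names; any row whose
-- string fails to parse comes back as the default date 01-01-2000.
SELECT to_date_special(entered_date) AS parsed_date
FROM   registrations;
```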
I ended up using a crazy regex that also checks leap years and 30/31-day months.
Here it is:
((^(0?[13578]|1[02])[\/.-]?(0?[1-9]|[12][0-9]|3[01])[\/.-]?(18|19|20){0,1}[0-9]{2}$)|(^(0?[469]|11)[\/.-]?(0?[1-9]|[12][0-9]|30)[\/.-]?(18|19|20){0,1}[0-9]{2}$)|(^([0]?2)[\/.-]?(0?[1-9]|1[0-9]|2[0-8])[\/.-]?(18|19|20){0,1}[0-9]{2}$)|(^([0]?2)[\/.-]?29[\/.-]?(((18|19|20){0,1}(04|08|[2468][048]|[13579][26]))|2000|00)$))
It is a modified version of the answer by McKay here.
Not the most efficient, but it works. I'll wait to see whether a better alternative comes along.