How to check if a name already exists using express-validator in update calls - express-validator

I have to check whether the name already exists in the database before adding the values.
So I decided to add an express-validator custom validator. This works fine in the create call, but it is not working in the update call. Here is my code:
const { check, body } = require('express-validator/check');
var models = require("../models");

let Validations = [
  check('email').isEmail().withMessage("Invalid Email"),
  check('phone').isLength({ min: 5 }).withMessage("Min length Required"),
  check('name').not().isEmpty().withMessage("Value is Required"),
  body("name").custom(value => {
    return models.fundraisers.findByName(value).then(user => {
      if (user) {
        return Promise.reject('Name already in use');
      }
    })
  })
]
How do I handle this in update calls?
Thanks in advance.

This is my check and it worked well in both the create and update cases:
check('name')
  .not().isEmpty()
  .isString()
  .custom(value => {
    return Group
      .findByName(value)
      .then(groups => {
        if (groups.length > 0) {
          return Promise.reject(value + '\'s already in use');
        }
      })
  })
By the way, I defined only the body check:
const { check, validationResult } = require('express-validator/check');
Hope it helps :)
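One extra note for the update case: a check like the one above will also reject an update where the record simply keeps its own, unchanged name. A possible workaround is to ignore the match when it is the record being updated. This is only a sketch, assuming the route exposes the record id as req.params.id and that findByName resolves to a record with an id field (both are assumptions, not part of the original code):
body("name").custom((value, { req }) => {
  return models.fundraisers.findByName(value).then(existing => {
    // Assumed: req.params.id holds the id of the record being updated.
    if (existing && String(existing.id) !== String(req.params.id)) {
      return Promise.reject('Name already in use');
    }
  })
})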

Related

Unit testing sessionStorage value in emberJS

I'm new to Ember and trying to figure out how to unit test, using Sinon, the sessionStorage value that is set from URL parameters when a page is visited. I've tried a few things but still can't get the desired result: the test passes even if I change the 'sessionValue' without editing the query param.
Thank you in advance.
Ember route
beforeModel(transition) {
  // transition will contain an object with the query parameter, e.g. '?userid=1234', which is set in the sessionStorage.
  if (transition.queryparam.hasOwnProperty('userid')) {
    sessionStorage.setItem('user:id', transition.queryparam)
  }
}
Ember test
test('Session Storage contains query param value', async assert => {
  let sessionKey = "user:id";
  let sessionValue = "1234";
  let store = {};
  const mockLocalStorage = {
    getItem: (key) => {
      return key in store ? store[key] : null;
    },
    setItem: (key, value) => {
      store[key] = `${value}`;
    },
    clear: () => {
      store = {};
    }
  };
  assert.expect(1);
  let spy = sinon.spy(sessionStorage, "setItem");
  spy.calledWith(mockLocalStorage.setItem);
  let stub = sinon.stub(sessionStorage, "getItem");
  stub.calledWith(mockLocalStorage.getItem);
  stub.returns(sessionValue);
  await visit('/page?userid=1234');
  mockLocalStorage.setItem(sessionKey, sessionValue);
  assert.equal(mockLocalStorage.getItem(sessionKey), sessionValue, 'storage contains value');
})
Welcome to Ember!
There are many ways to test, and the below suggestion is one way (how I would approach interacting with the SessionStorage).
Instead of re-creating the SessionStorage API in your test, how do you feel about using a pre-made proxy around the Session Storage? (i.e., "Don't mock what you don't own")
Using: https://github.com/CrowdStrike/ember-browser-services/#sessionstorage
Your app code would look like:
@service('browser/session-storage') sessionStorage;
beforeModel(transition) {
  // ... details omitted ...
  // note the addition of `this` -- the APIs are entirely the same
  // as SessionStorage
  this.sessionStorage.setItem('user:id', ...)
}
then in your test:
module('Scenario Name', function (hooks) {
  setupApplicationTest(hooks);
  setupBrowserFakes(hooks, { sessionStorage: true });

  test('Session Storage contains query param value', async function (assert) {
    let sessionKey = "user:id";
    let sessionValue = "1234";
    let sessionStorage = this.owner.lookup('service:browser/session-storage');

    await visit('/page?userid=1234');

    assert.equal(sessionStorage.getItem(sessionKey), '1234', 'storage contains value');
  });
})
With this approach, sinon isn't even needed :)

Apollo client mutation with writeQuery not triggering UI update

I have a mutation to create a new card object, and I expect it to be added to the user interface after the update. The cache, the Apollo Chrome tool, and console logging all reflect the changes, but the UI does not update without a manual reload.
const [createCard, { loading, error }] = useMutation(CREATE_CARD, {
  update(cache, { data: { createCard } }) {
    let localData = cache.readQuery({
      query: CARDS_QUERY,
      variables: { id: deckId }
    });
    localData.deck.cards = [...localData.deck.cards, createCard];
    client.writeQuery({
      query: CARDS_QUERY,
      variables: { id: parseInt(localData.deck.id, 10) },
      data: { ...localData }
    });
  },
});
I have changed cache.writeQuery to client.writeQuery, but that didn't solve the problem.
For reference, here is the Query I am running...
const CARDS_QUERY = gql`
  query CardsQuery($id: ID!) {
    deck(id: $id) {
      id
      deckName
      user {
        id
      }
      cards {
        id
        front
        back
        pictureName
        pictureUrl
        createdAt
      }
    }
    toggleDeleteSuccess @client
  }
`;
I managed the same result without the cloneDeep method. Just using the spread operator solved my problem.
const update = (cache, { data }) => {
  const queryData = cache.readQuery({ query: USER_QUERY })
  const cartItemId = data.cartItem.id
  queryData.me.cart = queryData.me.cart.filter(v => v.id !== cartItemId)
  cache.writeQuery({ query: USER_QUERY, data: { ...queryData } })
}
Hope this helps someone else.
Ok, I finally ran into a long GitHub thread discussing solutions to this same issue. The solution that ultimately worked for me was deep cloning the data object (I personally used Lodash cloneDeep); after passing the mutated clone to cache.writeQuery, the UI finally updated. It still seems like there ought to be a way to trigger the UI update without cloning, considering the cache reflects the changes.
Here's the after; see my original question for the before...
const [createCard, { loading, error }] = useMutation(CREATE_CARD, {
  update(cache, { data: { createCard } }) {
    const localData = cloneDeep( // Lodash cloneDeep to make a fresh object
      cache.readQuery({
        query: CARDS_QUERY,
        variables: { id: deckId }
      })
    );
    localData.deck.cards = [...localData.deck.cards, createCard]; // Push the mutation result onto the object
    cache.writeQuery({
      query: CARDS_QUERY,
      variables: { id: localData.deck.id },
      data: { ...localData } // Cloning ultimately triggers the UI update since writeQuery now sees a new object.
    });
  },
});

Apollo GraphQL client doesn't return cached nested types in a query

I'm performing a query to get PowerMeter details, which contains another type inside called Project. I write the query this way:
query getPowerMeter($powerMeterId: ID!) {
  powerMeter: powerMeter(powerMeterId: $powerMeterId) {
    id
    name
    registry
    project {
      id
      name
    }
  }
}
When I perform the query for the first time, project is successfully returned. The problem is that when I perform subsequent queries with the same parameters and default fetchPolicy (cache-first), project isn't returned anymore.
How may I solve this problem?
Also, I call readFragment to check how powerMeter is saved in the cache and the response shows that powerMeter has project saved.
const frag = client.readFragment({
  fragment: gql`
    fragment P on PowerMeter {
      id
      name
      registry
      project {
        id
        name
      }
    }
  `,
  id: 'PowerMeter:' + powerMeterId,
});
Power Meter returned first time
{
  "powerMeter": {
    "id": "7168adb4-4198-443e-ab76-db0725be2b18",
    "name": "asd123123",
    "registry": "as23",
    "project": {
      "id": "41d8e71b-d1e9-41af-af96-5b4ae9e492c1",
      "name": "ProjectName",
      "__typename": "Project"
    },
    "__typename": "PowerMeter"
  }
}
Fragment after calling power meter first time
{
  "id": "7168adb4-4198-443e-ab76-db0725be2b18",
  "name": "asd123123",
  "registry": "as23",
  "project": {
    "id": "41d8e71b-d1e9-41af-af96-5b4ae9e492c1",
    "name": "ProjectName",
    "__typename": "Project"
  },
  "__typename": "PowerMeter"
}
Power Meter returned second time
{
  "powerMeter": {
    "id": "7168adb4-4198-443e-ab76-db0725be2b18",
    "name": "asd123123",
    "registry": "as23",
    "__typename": "PowerMeter"
  }
}
Fragment after calling power meter second time
{
  "id": "7168adb4-4198-443e-ab76-db0725be2b18",
  "name": "asd123123",
  "registry": "as23",
  "project": {
    "id": "41d8e71b-d1e9-41af-af96-5b4ae9e492c1",
    "name": "ProjectName",
    "__typename": "Project"
  },
  "__typename": "PowerMeter"
}
Edit 1: Fetching Query
The code below is how I'm fetching the data. I'm using useApolloClient and not a query hook because I'm using AWS AppSync and it doesn't support query hooks yet.
import { useApolloClient } from '@apollo/react-hooks';
import gql from 'graphql-tag';
import { useEffect, useState } from 'react';

export const getPowerMeterQuery = gql`
  query getPowerMeter($powerMeterId: ID!) {
    powerMeter: powerMeter(powerMeterId: $powerMeterId) {
      id
      name
      registry
      project {
        id
        name
      }
    }
  }
`;
export const useGetPowerMeter = (powerMeterId?: string) => {
  const client = useApolloClient();
  const [state, setState] = useState<{
    loading: boolean;
    powerMeter?: PowerMeter;
    error?: string;
  }>({
    loading: true,
  });

  useEffect(() => {
    if (!powerMeterId) {
      return setState({ loading: false });
    }
    client
      .query<GetPowerMeterQueryResponse, GetPowerMeterQueryVariables>({
        query: getPowerMeterQuery,
        variables: {
          powerMeterId,
        },
      })
      .then(({ data, errors }) => {
        if (errors) {
          setState({ loading: false, error: errors[0].message });
        }
        console.log(JSON.stringify(data));
        const frag = client.readFragment({
          fragment: gql`
            fragment P on PowerMeter {
              id
              name
              registry
              project {
                id
                name
              }
            }
          `,
          id: 'PowerMeter:' + powerMeterId,
        });
        console.log(JSON.stringify(frag));
        setState({
          loading: false,
          powerMeter: data.powerMeter,
        });
      })
      .catch(err => setState({ loading: false, error: err.message }));
  }, [powerMeterId]);

  return state;
};
Edit 2: Fetching Policy Details
When I use fetchPolicy equal to cache-first or network-only, the error persists. When I use no-cache, I don't get the error.
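For reference, a minimal sketch of how a fetchPolicy can be passed to client.query (fetchPolicy is a standard option of client.query; the values shown are the ones mentioned above):
client.query({
  query: getPowerMeterQuery,
  variables: { powerMeterId },
  // 'cache-first' (the default) and 'network-only' still showed the missing project field;
  // 'no-cache' avoided it by skipping the cache entirely.
  fetchPolicy: 'no-cache',
})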
I think this might have been the solution:
https://github.com/apollographql/apollo-client/issues/7050
Probably way too late, but it could help people coming to this issue in the future.
When using Apollo Client's InMemoryCache, it seems you need to provide a list of possible types so that fragment matching can be done correctly.
You can do that manually when you have only a few union types and a pretty stable API which doesn't change very often.
Or you can automatically generate these types into a JSON file, which you can use directly in the InMemoryCache's possibleTypes config.
Visit the official docs to find out how to do it.
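For illustration, a minimal manual configuration might look like the sketch below (this assumes Apollo Client 3; the Metering interface and its implementing type names are made up for the example, not taken from the question):
import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  possibleTypes: {
    // interface or union name -> concrete types that implement it (hypothetical names)
    Metering: ['PowerMeter', 'VirtualPowerMeter'],
  },
});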
Cheers.

loopback API where filter with multiple conditions

When using the LoopBack API, is the 'AND' operator redundant in a 'where' filter with multiple conditions?
For example, I tested the following two queries and they return the same result:
<model>.find({ where: { <condition1>, <condition2> } });
<model>.find({ where: { and: [<condition1>, <condtion2>] } });
To be more specific, suppose this is the table content:
name    value
----    -----
a       1
b       2
When I execute 'find()' using two different 'where' filters, I get the first record in both cases:
{ where: { name: 'a', value: 1 } }
{ where: { and: [ { name: 'a'}, { value: 1 } ] } }
I've read through the API documentation but didn't find what logical operator is used when there are multiple conditions.
If 'AND' is redundant, as shown in my test, I prefer not to use it. But I just want to make sure whether this is true in general, or whether it just happens to work with PostgreSQL, which I'm using.
This is a valid query which could only be accomplished with an and statement.
{
  "where": {
    "or": [
      { "and": [{ "classification": "adn" }, { "series": "2" }] },
      { "series": "3" }
    ]
  }
}
EDIT: https://github.com/strongloop/loopback-filters/blob/master/index.js
function matchesFilter(obj, filter) {
  var where = filter.where;
  var pass = true;
  var keys = Object.keys(where);
  keys.forEach(function(key) {
    if (key === 'and' || key === 'or') {
      if (Array.isArray(where[key])) {
        if (key === 'and') {
          pass = where[key].every(function(cond) {
            return applyFilter({where: cond})(obj);
          });
          return pass;
        }
        if (key === 'or') {
          pass = where[key].some(function(cond) {
            return applyFilter({where: cond})(obj);
          });
          return pass;
        }
      }
    }
    if (!test(where[key], getValue(obj, key))) {
      pass = false;
    }
  });
  return pass;
}
It iterates through the keys of the where object looking for a failure, so it acts like an implicit and statement in your case.
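To see the implicit and in action on in-memory data, here is a small sketch using the loopback-filters package with the sample table from the question (the applyFilter(data, filter) usage is taken from that package's README; the behaviour of your actual connector should still be verified separately):
var applyFilter = require('loopback-filters');

var data = [{ name: 'a', value: 1 }, { name: 'b', value: 2 }];

// Implicit and: multiple keys in one where object
console.log(applyFilter(data, { where: { name: 'a', value: 1 } }));

// Explicit and: returns the same single record
console.log(applyFilter(data, { where: { and: [{ name: 'a' }, { value: 1 }] } }));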
EDIT 2: https://github.com/strongloop/loopback-datasource-juggler/blob/cc60ef8202092ae4ed564fc7bd5aac0dd4119e57/test/relations.test.js
The loopback-datasource-juggler repository contains tests which use the implicit and format:
{PictureLink.findOne({where: {pictureId: anotherPicture.id, imageableType: 'Article'}},
{pictureId: anotherPicture.id, imageableId: article.id, imageableType: 'Article',}
But I just want to make sure if this is true in general, or if it just happens to work with postgreSQL which I'm using.
Is it true in general? No.
It appears that this is handled for PostgreSQL and MySQL (and probably other SQL databases) in SQLConnector. So it is possible that connectors not using SQLConnector (e.g. MongoDB) don't support this. However, given the many examples I've seen online, I would say it's safe to assume other connectors have implemented it this way, too.

Advanced update using mongodb [duplicate]

In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:
UPDATE Person SET Name = FirstName + ' ' + LastName
And the MongoDB pseudo-code would be:
db.person.update( {}, { $set : { name : firstName + ' ' + lastName } } );
The best way to do this is with version 4.2+, which allows the use of an aggregation pipeline in the update document with the updateOne, updateMany, or update (deprecated in most, if not all, language drivers) collection methods.
MongoDB 4.2+
Version 4.2 also introduced the $set pipeline stage operator, which is an alias for $addFields. I will use $set here as it maps to what we are trying to achieve.
db.collection.<update method>(
  {},
  [
    { "$set": { "name": { "$concat": ["$firstName", " ", "$lastName"] } } }
  ]
)
Note that square brackets in the second argument to the method specify an aggregation pipeline instead of a plain update document because using a simple document will not work correctly.
MongoDB 3.4+
In 3.4+, you can use $addFields and the $out aggregation pipeline operators.
db.collection.aggregate(
  [
    { "$addFields": {
      "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": <output collection name> }
  ]
)
Note that this does not update your collection but instead replaces the existing collection or creates a new one. Also, for update operations that require "typecasting", you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
MongoDB 3.2 and 3.0
The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string.
You then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.
Aggregation query:
var cursor = db.collection.aggregate([
  { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
  }}
])
MongoDB 3.2 or newer
You need to use the bulkWrite method.
var requests = [];
cursor.forEach(document => {
  requests.push({
    'updateOne': {
      'filter': { '_id': document._id },
      'update': { '$set': { 'name': document.name } }
    }
  });
  if (requests.length === 500) {
    // Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = [];
  }
});

if (requests.length > 0) {
  db.collection.bulkWrite(requests);
}
MongoDB 2.6 and 3.0
For these versions, you need to use the now deprecated Bulk API and its associated methods.
var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;

cursor.snapshot().forEach(function(document) {
  bulk.find({ '_id': document._id }).updateOne({
    '$set': { 'name': document.name }
  });
  count++;
  if (count % 500 === 0) {
    // Execute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
  }
})

// clean up queues
if (count > 0) {
  bulk.execute();
}
MongoDB 2.4
cursor["result"].forEach(function(document) {
  db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
  );
})
You should iterate through. For your specific case:
db.person.find().snapshot().forEach(
  function (elem) {
    db.person.update(
      {
        _id: elem._id
      },
      {
        $set: {
          name: elem.firstname + ' ' + elem.lastname
        }
      }
    );
  }
);
Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.
Obsolete answer below
You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().
For a database with high activity, you may run into issues where your updates affect actively changing records; for this reason I recommend using snapshot():
db.person.find().snapshot().forEach(function (hombre) {
  hombre.name = hombre.firstName + ' ' + hombre.lastName;
  db.person.save(hombre);
});
http://docs.mongodb.org/manual/reference/method/cursor.snapshot/
Starting with Mongo 4.2, db.collection.update() can accept an aggregation pipeline, finally allowing the update/creation of a field based on another field:
// { firstName: "Hello", lastName: "World" }
db.collection.updateMany(
  {},
  [{ $set: { name: { $concat: [ "$firstName", " ", "$lastName" ] } } }]
)
// { "firstName" : "Hello", "lastName" : "World", "name" : "Hello World" }
The first part {} is the match query, filtering which documents to update (in our case all documents).
The second part [{ $set: { name: { ... } } }] is the update aggregation pipeline (note the square brackets signifying the use of an aggregation pipeline). $set is a new aggregation operator and an alias of $addFields.
Regarding this answer, the snapshot function is deprecated in version 3.6, according to this update. So, on version 3.6 and above, it is possible to perform the operation this way:
db.person.find().forEach(
  function (elem) {
    db.person.update(
      {
        _id: elem._id
      },
      {
        $set: {
          name: elem.firstname + ' ' + elem.lastname
        }
      }
    );
  }
);
I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:
MongoClient.connect("...", function(err, db) {
  var c = db.collection('yourCollection');
  var s = c.find({ /* your query */ }).stream();
  s.on('data', function(doc) {
    c.update({ _id: doc._id }, { $set: { name: doc.firstName + ' ' + doc.lastName } }, function(err, result) { /* result == true? */ });
  });
  s.on('end', function() {
    // stream can end before all your updates do if you have a lot
  })
})
The update() method takes an aggregation pipeline as a parameter, like this:
db.collection_name.update(
  {
    // Query
  },
  [
    // Aggregation pipeline
    { "$set": { "id": "$_id" } }
  ],
  {
    // Options
    "multi": true // false when a single doc has to be updated
  }
)
The field can be set or unset with existing values using the aggregation pipeline.
Note: use $ with field name to specify the field which has to be read.
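For the unset side, here is a brief sketch of the same pipeline-style update removing a field in 4.2+ (the copyOfId field name is just an example, not from the original answer):
db.collection_name.update(
  {
    // Query
  },
  [
    // Aggregation pipeline: $unset takes a field name or an array of field names
    { "$unset": ["copyOfId"] }
  ],
  {
    "multi": true
  }
)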
Here's what we came up with for copying one field to another for ~150,000 records. It took about 6 minutes, but it is still significantly less resource-intensive than instantiating and iterating over the same number of Ruby objects.
js_query = %({
  $or : [
    {
      'settings.mobile_notifications' : { $exists : false },
      'settings.mobile_admin_notifications' : { $exists : false }
    }
  ]
})

js_for_each = %(function(user) {
  if (!user.settings.hasOwnProperty('mobile_notifications')) {
    user.settings.mobile_notifications = user.settings.email_notifications;
  }
  if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
    user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
  }
  db.users.save(user);
})

js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
Mongoid::Sessions.default.command('$eval' => js)
With MongoDB version 4.2+, updates are more flexible, as they allow the use of an aggregation pipeline in update, updateOne and updateMany. You can now transform your documents using aggregation operators and then update, without needing to explicitly state a $set command (instead we use $replaceRoot: {newRoot: "$$ROOT"}).
Here we use an aggregation query to extract the timestamp from MongoDB's ObjectID "_id" field and update the documents (I am not an expert in SQL, but I think SQL does not provide an auto-generated ObjectID that carries a timestamp; you would have to create that date yourself).
var collection = "person"
agg_query = [
  {
    "$addFields": {
      "_last_updated": {
        "$toDate": "$_id"
      }
    }
  },
  {
    $replaceRoot: {
      newRoot: "$$ROOT"
    }
  }
]
db.getCollection(collection).updateMany({}, agg_query, { upsert: true })
(I would have posted this as a comment, but couldn't)
For anyone who lands here trying to update one field using another in the document with the C# driver...
I could not figure out how to use any of the UpdateXXX methods and their associated overloads since they take an UpdateDefinition as an argument.
// we want to set Prop1 to Prop2
class Foo { public string Prop1 { get; set; } public string Prop2 { get; set; } }

void Test()
{
    var update = new UpdateDefinitionBuilder<Foo>();
    update.Set(x => x.Prop1, <new value; no way to get a hold of the object that I can find>)
}
As a workaround, I found that you can use the RunCommand method on an IMongoDatabase (https://docs.mongodb.com/manual/reference/command/update/#dbcmd.update).
var command = new BsonDocument
{
    { "update", "CollectionToUpdate" },
    { "updates", new BsonArray
        {
            new BsonDocument
            {
                // Any filter; here the check is if Prop1 does not exist
                { "q", new BsonDocument { ["Prop1"] = new BsonDocument("$exists", false) } },
                // set it to the value of Prop2
                { "u", new BsonArray { new BsonDocument { ["$set"] = new BsonDocument("Prop1", "$Prop2") } } },
                { "multi", true }
            }
        }
    }
};

database.RunCommand<BsonDocument>(command);
MongoDB 4.2+ Golang
result, err := collection.UpdateMany(ctx, bson.M{},
  mongo.Pipeline{
    bson.D{{"$set",
      bson.M{"name": bson.M{"$concat": []string{"$lastName", " ", "$firstName"}}},
    }},
  },
)