I have an AWS Lambda function that lists every patient in a DynamoDB table. I noticed that some items from the table are missing from the list. This is my listing function:
// patientsParams and dynamoDb are defined elsewhere in this module
module.exports.listPatients = async (event) => {
  try {
    const queryString = {
      limit: 5,
      ...event.queryStringParameters,
    };
    const { limit, next, name } = queryString;
    const localParams = {
      ...patientsParams,
      Limit: limit,
      FilterExpression: "contains(full_name, :full_name)",
      ExpressionAttributeValues: { ":full_name": name },
    };
    if (next) {
      localParams.ExclusiveStartKey = {
        id: next,
      };
    }
    const data = await dynamoDb.scan(localParams).promise();
    const nextToken = data.LastEvaluatedKey ? data.LastEvaluatedKey.id : "";
    const result = {
      items: data.Items,
      next_token: nextToken,
    };
    return {
      statusCode: 200,
      body: JSON.stringify(result),
    };
  } catch (error) {
    console.log("Error: ", error);
    return {
      statusCode: error.statusCode ? error.statusCode : 500,
      body: JSON.stringify({
        error: error.name ? error.name : "Exception",
        message: error.message ? error.message : "Unknown error",
      }),
    };
  }
};
Am I missing something?
I tried with and without a limit, and removed the filters, but nothing changed.
I tested one of the missing ids with get() to verify that the server can find those items, and it worked.
I am using Serverless to deploy the code, and when I run it offline, it works.
Stack Overflow recommended this post while I was writing my question, but I am using DynamoDB.DocumentClient without specifying the full attribute type in the filter expression:
How to scan in DynamoDB without primary sort key with Nodejs
It looks like you are paginating using scan(). Using query() with a Global Secondary Index and ScanIndexForward would give you much better performance; scan() doesn't scale well as your data grows.
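As a rough sketch of that approach (the table name, index name, and key schema below are hypothetical; adjust them to your own schema). Note that key conditions support = and begins_with but not contains(), so the name lookup becomes an exact match here:

const AWS = require("aws-sdk");
const dynamoDb = new AWS.DynamoDB.DocumentClient();

// A minimal sketch, not your exact handler: assumes a GSI named
// "full_name-index" whose partition key is full_name.
async function listPatientsByName(name, limit) {
  const params = {
    TableName: "patients",                             // hypothetical table name
    IndexName: "full_name-index",                      // hypothetical GSI name
    KeyConditionExpression: "full_name = :full_name",
    ExpressionAttributeValues: { ":full_name": name },
    Limit: Number(limit),                              // query-string values arrive as strings
    ScanIndexForward: true,                            // read the index in ascending sort-key order
  };
  // query() reads only matching items. scan() evaluates Limit *before*
  // FilterExpression, which is why a filtered scan can return fewer items
  // per page than expected; you must keep following LastEvaluatedKey.
  return dynamoDb.query(params).promise();
}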
I have a method that fetches data from Contentful using GraphQL and returns some of it:
exports.getMetadata = async (graphql, reporter, query) => {
  const result = await graphql(query)
  if (result.errors) {
    reporter.panicOnBuild("Error while running metadata GraphQL query")
  }
  const {
    data: {
      allContentfulPages: {
        edges: {
          0: {
            node: { meta, opengraph },
          },
        },
      },
    },
  } = result
  const metaJson = JSON.parse(meta.internal.content)
  const opengraphJson = JSON.parse(opengraph.internal.content)
  return { metaJson, opengraphJson }
}
This is how the GraphQL query looks:
query {
  # since our Contentful has "locales" enabled, but page slugs don't need them, get only default-language data
  allContentfulPages(filter: { node_locale: { eq: "en-US" }, slug: { eq: "insights" } }) {
    edges {
      node {
        meta {
          internal {
            content
          }
        }
        opengraph {
          internal {
            content
          }
        }
      }
    }
  }
}
When I start the project with npm run develop, everything works fine and there are no errors in the console, but when building with npm run build I get TypeError: Cannot read property 'node' of undefined. I tried adding guards like if (result !== null) ... and if (result....edges[0].node !== null) in many variants, but it didn't work; the application breaks in the same place every time. Please help me figure out what's going on.
Too much unguarded, unconditional destructuring... stop at the last property that must exist:
const { data: { allContentfulPages: { edges } } } = result;
if (edges && edges[0]) {
  return {
    metaJson: JSON.parse(edges[0].node.meta.internal.content),
    opengraphJson: JSON.parse(edges[0].node.opengraph.internal.content)
  };
}
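For context: during npm run build the filtered query can legitimately return an empty edges array (for example, when no page matches the slug/locale filter in the build environment), and destructuring edges[0].node then throws exactly this TypeError. A fuller sketch of the same fix inside getMetadata (names as in the question; the panic message is just an example):

exports.getMetadata = async (graphql, reporter, query) => {
  const result = await graphql(query)
  if (result.errors) {
    reporter.panicOnBuild("Error while running metadata GraphQL query")
  }
  // Stop destructuring at edges, which always exists in a successful result.
  const { data: { allContentfulPages: { edges } } } = result
  if (!edges || !edges[0]) {
    // No page matched the filter (e.g. wrong locale or slug): fail with a
    // clear message instead of a TypeError.
    reporter.panicOnBuild("Metadata query returned no pages")
    return { metaJson: null, opengraphJson: null }
  }
  const { node: { meta, opengraph } } = edges[0]
  return {
    metaJson: JSON.parse(meta.internal.content),
    opengraphJson: JSON.parse(opengraph.internal.content),
  }
}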
If I have a repository with many properties and I want to find something by a non-id property, do I just find all records and filter them with a boolean comparison, or is there a better way to find by a property that's not the ID?
In LoopBack 4, you need to use a repository for this purpose. Do it as below.
For the case where you know there will be just one entry with the value (unique columns):
const user = await this.userRepository.findOne({
  where: {
    username: 'test_admin'
  }
});
For the case where there can be multiple:
const users = await this.userRepository.find({
  where: {
    firstName: 'test admin'
  }
});
For LoopBack 3, here is the documentation for querying data: https://loopback.io/doc/en/lb3/Querying-data.html
Basically, use a query filter like this:
const objects = await app.models.ModelName.find({
  where: {
    propertyName: value
  }
});
Don't forget to define an index for the property you want to query; otherwise, the database engine will perform a full table scan:
"properties": {
"propertyName": {
"type": "string",
"index": {
"unique": true
}
},
...
}
I created this question in case anyone was curious about how to add union / polymorphic types in Apollo. Hopefully this will make it easier for them.
In this example I wanted the response to be either a Worksheet or an ApiError.
// typedefs.js
export default [`
  schema {
    query: Query
  }

  type Query {
    worksheet(id: String!): Worksheet | ApiError
  }

  type Worksheet {
    id: String!
    name: String
  }

  type ApiError {
    code: String!
    message: String!
  }
`];
// resolvers.js
export default {
  Query: {
    worksheet(_, args, { loaders }) {
      return loaders.worksheet.get(args.id).catch(() => {
        // ApiError
        return {
          code: '1',
          message: 'test'
        };
      });
    }
  }
};
// Express Server
import { graphqlExpress } from 'apollo-server-express';
import { makeExecutableSchema } from 'graphql-tools';
import typeDefs from './typedefs';
import resolvers from './resolvers';
...
app.post(
  '/graphql',
  graphqlExpress(req => ({
    schema: makeExecutableSchema({ typeDefs, resolvers }),
    context: mkRequestContext(req.ctx, req.log),
    formatError: formatGraphQLError(req.ctx, req.log)
  }))
);
In GraphQL, to add a union type to the typedefs you have to define the union explicitly,
i.e. union WorksheetOrError = Worksheet | ApiError
// typedefs.js
export default [`
  schema {
    query: Query
  }

  type Query {
    worksheet(id: String!): WorksheetOrError
  }

  union WorksheetOrError = Worksheet | ApiError

  type Worksheet {
    id: String!
    name: String
  }

  type ApiError {
    code: String!
    message: String!
  }
`];
In the resolvers you have to define a resolver for the union type with a __resolveType property. This tells the GraphQL executor which concrete type the result is.
// resolvers.js
export default {
  Query: {
    worksheet() {
      ...
    }
  },
  WorksheetOrError: {
    __resolveType(obj) {
      if (obj.id) {
        return 'Worksheet';
      }
      if (obj.code) {
        return 'ApiError';
      }
      return null;
    }
  },
};
To create a GraphQL query in Apollo Client:
// Your application code.
// This is my Worksheet Query in my React code.
const WorksheetQuery = gql`
  query GetWorksheet($worksheetId: String!) {
    worksheet(id: $worksheetId) {
      ... on Worksheet {
        id
        name
      }
      ... on ApiError {
        code
        message
      }
    }
  }
`;
Now you can check the __typename to see which type is in the response.
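For example, a minimal sketch of that check inside a React component (this assumes a hooks-based Apollo Client setup with @apollo/client; with the older render-prop or HOC APIs the data shape is the same):

import { useQuery } from '@apollo/client';

// Hypothetical component: branch on the __typename of the union result.
function WorksheetView({ worksheetId }) {
  const { data, loading } = useQuery(WorksheetQuery, {
    variables: { worksheetId },
  });
  if (loading || !data) return null;
  if (data.worksheet.__typename === 'Worksheet') {
    // got a real worksheet: data.worksheet.id, data.worksheet.name
  } else if (data.worksheet.__typename === 'ApiError') {
    // got the error type: data.worksheet.code, data.worksheet.message
  }
  return null; // render whichever UI fits each branch above
}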
Note: for those who are wondering why I'm not using GraphQL errors: Apollo doesn't seem to handle errors well when it encounters a GraphQL error, so as a workaround I return a custom ApiError in my response.
There are a few reasons why using a union with an error type is nice.
Currently, if you want a partial response with a GraphQLError: Apollo does not cache errors, so if you want to re-use the cached response later, you won't have the complete response, since the errors are removed. (You then can't display the proper UI with errors.)
Getting a GraphQLError back in Apollo gives you a flat list of errors with the path to where each error occurred in the data, so you would have to work out which part of your schema each error belongs to. If you follow the instructions above, the error is already part of the schema, so you know immediately which part of the schema it happened in.
In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:
UPDATE Person SET Name = FirstName + ' ' + LastName
And the MongoDB pseudo-code would be:
db.person.update( {}, { $set : { name : firstName + ' ' + lastName } } );
The best way to do this is with version 4.2+, which allows the use of an aggregation pipeline in the update document of the updateOne, updateMany, or update (deprecated in most, if not all, language drivers) collection methods.
MongoDB 4.2+
Version 4.2 also introduced the $set pipeline stage operator, which is an alias for $addFields. I will use $set here as it maps to what we are trying to achieve.
db.collection.<update method>(
  {},
  [
    { "$set": { "name": { "$concat": [ "$firstName", " ", "$lastName" ] } } }
  ]
)
Note that the square brackets in the second argument specify an aggregation pipeline instead of a plain update document; using a plain document here will not work correctly.
MongoDB 3.4+
In 3.4+, you can use the $addFields and $out aggregation pipeline stages.
db.collection.aggregate(
  [
    { "$addFields": {
      "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": <output collection name> }
  ]
)
Note that this does not update your collection in place; it replaces the existing collection or creates a new one. Also, for update operations that require "typecasting", you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
MongoDB 3.2 and 3.0
The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string.
You then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.
Aggregation query:
var cursor = db.collection.aggregate([
  { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
  }}
])
MongoDB 3.2 or newer
You need to use the bulkWrite method.
var requests = [];
cursor.forEach(document => {
  requests.push({
    'updateOne': {
      'filter': { '_id': document._id },
      'update': { '$set': { 'name': document.name } }
    }
  });
  if (requests.length === 500) {
    // Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = [];
  }
});
if (requests.length > 0) {
  db.collection.bulkWrite(requests);
}
MongoDB 2.6 and 3.0
Starting with these versions, you need to use the now-deprecated Bulk API and its associated methods.
var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;
cursor.snapshot().forEach(function(document) {
  bulk.find({ '_id': document._id }).updateOne({
    '$set': { 'name': document.name }
  });
  count++;
  if (count % 500 === 0) {
    // Execute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
  }
})
// clean up the queue
if (count > 0) {
  bulk.execute();
}
MongoDB 2.4
cursor["result"].forEach(function(document) {
  db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
  );
})
You should iterate through the documents. For your specific case:
db.person.find().snapshot().forEach(
  function (elem) {
    db.person.update(
      { _id: elem._id },
      {
        $set: {
          name: elem.firstname + ' ' + elem.lastname
        }
      }
    );
  }
);
Apparently there is a way to do this efficiently since MongoDB 3.4; see styvane's answer.
Obsolete answer below
You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().
For a database with high activity, you may run into issues where your updates affect actively changing records; for this reason, I recommend using snapshot():
db.person.find().snapshot().forEach(function (hombre) {
  hombre.name = hombre.firstName + ' ' + hombre.lastName;
  db.person.save(hombre);
});
http://docs.mongodb.org/manual/reference/method/cursor.snapshot/
Starting with Mongo 4.2, db.collection.update() can accept an aggregation pipeline, finally allowing the update/creation of a field based on another field:
// { firstName: "Hello", lastName: "World" }
db.collection.updateMany(
  {},
  [{ $set: { name: { $concat: [ "$firstName", " ", "$lastName" ] } } }]
)
// { "firstName" : "Hello", "lastName" : "World", "name" : "Hello World" }
The first part {} is the match query, filtering which documents to update (in our case all documents).
The second part [{ $set: { name: { ... } } }] is the update aggregation pipeline (note the square brackets signifying the use of an aggregation pipeline). $set is a new aggregation operator and an alias of $addFields.
Regarding this answer, the snapshot function is deprecated in version 3.6, according to this update. So, on version 3.6 and above, it is possible to perform the operation this way:
db.person.find().forEach(
  function (elem) {
    db.person.update(
      { _id: elem._id },
      {
        $set: {
          name: elem.firstname + ' ' + elem.lastname
        }
      }
    );
  }
);
I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:
MongoClient.connect("...", function(err, db) {
  var c = db.collection('yourCollection');
  var s = c.find({ /* your query */ }).stream();
  s.on('data', function(doc) {
    c.update({ _id: doc._id }, { $set: { name: doc.firstName + ' ' + doc.lastName } }, function(err, result) { /* result == true? */ });
  });
  s.on('end', function() {
    // the stream can end before all your updates do if you have a lot of them
  })
})
The update() method takes an aggregation pipeline as a parameter, like this:
db.collection_name.update(
  {
    // Query
  },
  [
    // Aggregation pipeline
    { "$set": { "id": "$_id" } }
  ],
  {
    // Options
    "multi": true // false when a single doc has to be updated
  }
)
A field can be set or unset using existing values with the aggregation pipeline.
Note: use $ with the field name to specify the field that has to be read.
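For example, a minimal sketch of the unset case (the "id" field here is just an illustration):

// Remove a field with the $unset pipeline stage (MongoDB 4.2+);
// "id" is a hypothetical field name.
db.collection_name.update(
  {},
  [
    { "$unset": ["id"] }
  ],
  { "multi": true }
)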
Here's what we came up with for copying one field to another for ~150,000 records. It took about 6 minutes, but it is still significantly less resource-intensive than instantiating and iterating over the same number of Ruby objects.
js_query = %({
  $or : [
    {
      'settings.mobile_notifications' : { $exists : false },
      'settings.mobile_admin_notifications' : { $exists : false }
    }
  ]
})

js_for_each = %(function(user) {
  if (!user.settings.hasOwnProperty('mobile_notifications')) {
    user.settings.mobile_notifications = user.settings.email_notifications;
  }
  if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
    user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
  }
  db.users.save(user);
})

js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
Mongoid::Sessions.default.command('$eval' => js)
With MongoDB version 4.2+, updates are more flexible as they allow the use of an aggregation pipeline in update, updateOne, and updateMany. You can now transform your documents using aggregation operators and then update them, without the need to explicitly state the $set command (instead we use $replaceRoot: { newRoot: "$$ROOT" }).
Here we use the aggregation query to extract the timestamp from MongoDB's ObjectID "_id" field and update the documents. (I am not an expert in SQL, but I think SQL does not provide any auto-generated ObjectID that carries a timestamp; you would have to create that date yourself.)
var collection = "person"
agg_query = [
  {
    "$addFields": {
      "_last_updated": {
        "$toDate": "$_id"
      }
    }
  },
  {
    $replaceRoot: {
      newRoot: "$$ROOT"
    }
  }
]
db.getCollection(collection).updateMany({}, agg_query, { upsert: true })
(I would have posted this as a comment, but couldn't.)
For anyone who lands here trying to update one field using another in the document with the C# driver...
I could not figure out how to use any of the UpdateXXX methods and their associated overloads, since they take an UpdateDefinition as an argument.
// we want to set Prop1 to Prop2
class Foo { public string Prop1 { get; set; } public string Prop2 { get; set; } }

void Test()
{
    var update = new UpdateDefinitionBuilder<Foo>();
    update.Set(x => x.Prop1, <new value; no way to get a hold of the object that I can find>)
}
As a workaround, I found that you can use the RunCommand method on an IMongoDatabase (https://docs.mongodb.com/manual/reference/command/update/#dbcmd.update):
var command = new BsonDocument
{
    { "update", "CollectionToUpdate" },
    { "updates", new BsonArray
        {
            new BsonDocument
            {
                // Any filter; here the check is if Prop1 does not exist
                { "q", new BsonDocument { ["Prop1"] = new BsonDocument("$exists", false) } },
                // set it to the value of Prop2
                { "u", new BsonArray { new BsonDocument { ["$set"] = new BsonDocument("Prop1", "$Prop2") } } },
                { "multi", true }
            }
        }
    }
};
database.RunCommand<BsonDocument>(command);
MongoDB 4.2+ Golang
result, err := collection.UpdateMany(ctx, bson.M{},
    mongo.Pipeline{
        bson.D{{"$set",
            bson.M{"name": bson.M{"$concat": []string{"$lastName", " ", "$firstName"}}},
        }},
    },
)