I'm trying to create a FederatedPrincipal in aws-cdk with multiple Actions.
Currently, I'm doing this (as shown below) in C#:
new FederatedPrincipal("cognito-identity.amazonaws.com", new Dictionary<string, object>
{
    { "ForAnyValue:StringLike", new Dictionary<string, string> { ["cognito-identity.amazonaws.com:amr"] = "authenticated" } },
    { "StringEquals", new Dictionary<string, string> { ["cognito-identity.amazonaws.com:aud"] = cfn_identitypool.Ref } }
}, "sts:AssumeRoleWithWebIdentity");
How do I add the second action, sts:TagSession?
This is currently not possible using high-level constructs. See this still-open issue: https://github.com/aws/aws-cdk/issues/6699
TL;DR
The IPrincipal interface requires assumeRoleAction to be a single string, but what you need here is an array of actions. It looks like the fix has been put on hold because it would mean a backwards-incompatible change that the team does not want to introduce.
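For context, the blocking declaration looks roughly like this (an abridged sketch of the aws-iam interface; the comment is mine):

export interface IPrincipal {
  // A single action string; accepting string | string[] here
  // would be the breaking change mentioned above.
  readonly assumeRoleAction: string;
}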
What I ended up with is using the low-level CfnRole construct. I use TypeScript, but it should be straightforward to port it to C#.
const authenticatedRole = new iam.CfnRole(this, 'AuthenticatedRole', {
  assumeRolePolicyDocument: {
    'Statement': [{
      'Effect': iam.Effect.ALLOW,
      // The L1 construct accepts an array of actions, unlike FederatedPrincipal
      'Action': ['sts:AssumeRoleWithWebIdentity', 'sts:TagSession'],
      'Condition': {
        'StringEquals': {
          'cognito-identity.amazonaws.com:aud': identityPool.ref
        },
        'ForAnyValue:StringLike': {
          'cognito-identity.amazonaws.com:amr': 'authenticated'
        }
      },
      'Principal': {
        'Federated': 'cognito-identity.amazonaws.com'
      }
    }]
  }
});
const roleAttachment = new cognito.CfnIdentityPoolRoleAttachment(this, 'RoleAttachment', {
  identityPoolId: identityPool.ref,
  roles: {
    'authenticated': authenticatedRole.attrArn,
  }
});
You can use the withSessionTags method of the PrincipalBase class to address this issue, as described here and documented here.
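A minimal TypeScript sketch of that approach, assuming a CDK version in which withSessionTags() is available (it wraps the principal so the trust policy includes sts:TagSession alongside the assume-role action):

const principal = new iam.FederatedPrincipal(
  'cognito-identity.amazonaws.com',
  {
    'StringEquals': { 'cognito-identity.amazonaws.com:aud': identityPool.ref },
    'ForAnyValue:StringLike': { 'cognito-identity.amazonaws.com:amr': 'authenticated' },
  },
  'sts:AssumeRoleWithWebIdentity'
).withSessionTags(); // renders both sts:AssumeRoleWithWebIdentity and sts:TagSession

const taggedRole = new iam.Role(this, 'AuthenticatedRoleWithTags', {
  assumedBy: principal,
});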
I have a DynamoDB table that I need to read/write to. I am trying to create a model for reading and writing from DynamoDB with Kotlin, but I keep encountering com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMappingException: MyModelDB[myMap]; could not unconvert attribute when I run dynamoDBMapper.scanPage(...). Sometimes the error names MyListOfMaps instead of myMap, but I guess that comes from iterating the keys of a Map.
My code is below:
@DynamoDBTable(tableName = "") // Non-issue, I am assigning the table name in the DynamoDBMapper
data class MyModelDB(
    @DynamoDBHashKey(attributeName = "id")
    var id: String,

    @DynamoDBAttribute(attributeName = "myMap")
    var myMap: MyMap,

    @DynamoDBAttribute(attributeName = "MyListOfMapItems")
    var myListOfMapItems: List<MyMapItem>,
) {
    constructor() : this(id = "", myMap = MyMap(), myListOfMapItems = mutableListOf())

    @DynamoDBDocument
    class MyMap {
        @get:DynamoDBAttribute(attributeName = "myMapAttr")
        var myMapAttr: MyMapAttr = MyMapAttr()

        @DynamoDBDocument
        class MyMapAttr {
            @get:DynamoDBAttribute(attributeName = "stringValue")
            var stringValue: String = ""
        }
    }

    @DynamoDBDocument
    class MyMapItem {
        @get:DynamoDBAttribute(attributeName = "myMapItemAttr")
        var myMapItemAttr: String = ""
    }
}
I am using the com.amazonaws:aws-java-sdk-dynamodb:1.11.500 package and my dynamoDBMapper is initialised with DynamoDBMapperConfig.Builder().build() (along with some other configurations).
My question is: what am I doing wrong, and why? I have also seen that some Java implementations use DynamoDBTypeConverter. Is it better, and should I be using that instead?
Any examples would be appreciated!
A couple of comments here. First, you are not using the AWS SDK for Kotlin; you are using the AWS SDK for Java and simply writing Kotlin code. With that SDK, you do not get the full benefits of Kotlin, such as coroutine support.
The AWS SDK for Kotlin (which does offer full support of Kotlin features) was just released as DEV Preview this week. See the DEV Guide:
Setting up the AWS SDK for Kotlin
However this SDK does not support this mapping as of now. To place items into an Amazon DynamoDB table using the AWS SDK for Kotlin, you need to use:
mutableMapOf<String, AttributeValue>
Full example here.
To map Java Objects to a DynamoDB table, you should look at using the DynamoDbEnhancedClient that is part of AWS SDK for Java V2. See this topic in the AWS SDK for Java V2 Developer Guide:
Mapping items in DynamoDB tables
You can find other example of using the Enhanced Client in the AWS Github repo.
Ok, I eventually got this working thanks to some help. I edited the question slightly after getting a better understanding. Here is how my data class eventually turned out. For Java users, Kotlin compiles to Java, so if you can figure out how the conversion works, the idea should be the same for your use too.
data class MyModelDB(
    @DynamoDBHashKey(attributeName = "id")
    var id: String = "",

    @DynamoDBAttribute(attributeName = "myMap")
    @DynamoDBTypeConverted(converter = MapConverter::class)
    var myMap: Map<String, AttributeValue> = mutableMapOf(),

    @DynamoDBAttribute(attributeName = "myList")
    @DynamoDBTypeConverted(converter = ListConverter::class)
    var myList: List<AttributeValue> = mutableListOf(),
) {
    constructor() : this(id = "", myMap = mutableMapOf(), myList = mutableListOf())
}
class MapConverter : DynamoDBTypeConverter<AttributeValue, Map<String, AttributeValue>> {
    override fun convert(map: Map<String, AttributeValue>): AttributeValue {
        return AttributeValue().withM(map)
    }

    override fun unconvert(itemMap: AttributeValue?): Map<String, AttributeValue>? {
        return itemMap?.m
    }
}
class ListConverter : DynamoDBTypeConverter<AttributeValue, List<AttributeValue>> {
    override fun convert(list: List<AttributeValue>): AttributeValue {
        return AttributeValue().withL(list)
    }

    override fun unconvert(itemList: AttributeValue?): List<AttributeValue>? {
        return itemList?.l
    }
}
This would at least let me use my custom converters to get my data out of DynamoDB. I went on to define a separate data container class for use within my own application, and created methods to serialize and deserialize between these two data objects. This is more of a preference for how you would like to handle the data, but this is it for me.
// For reading and writing to DynamoDB
class MyModelDB {
    ...
    fun toMyModel(): MyModel {
        ...
    }
}

// For use in my application
class MyModel {
    var id: String = ""
    var myMap: CustomObject = CustomObject()
    var myList: MutableList<CustomObject2> = mutableListOf()

    fun toMyModelDB(): MyModelDB {
        ...
    }
}
Finally, we come to the implementation of the two toMyModel.*() methods. Let's start with the input; this is what my columns looked like:
myMap:
{
    "key1": {
        "M": {
            "subKey1": { "S": "some" },
            "subKey2": { "S": "string" }
        }
    },
    "key2": {
        "M": {
            "subKey1": { "S": "other" },
            "subKey2": { "S": "string" }
        }
    }
}
myList:
[
    {
        "M": {
            "key1": { "S": "some" },
            "key2": { "S": "string" }
        }
    },
    {
        "M": {
            "key1": { "S": "some string" },
            "key3": {
                "M": {
                    "key4": { "S": "some string" }
                }
            }
        }
    }
]
The trick then is to use com.amazonaws.services.dynamodbv2.model.AttributeValue to convert each field in the JSON. So if I wanted to access the value of subKey2 in key1 field of myMap, I would do something like this:
myModelDB.myMap["key1"]
    ?.m // Null check and get the value of key1, a map
    ?.get("subKey2") // Get the AttributeValue associated with the "subKey2" key
    ?.s // Get the value of "subKey2" as a String
The same applies to myList:
myModelDB.myList.forEach {
    it?.m // Null check and get the map at the current index
        ?.get("key1") // Get the AttributeValue associated with "key1"
    ...
}
Edit: I doubt this will be much of an issue, but I also updated my DynamoDB dependency to com.amazonaws:aws-java-sdk-dynamodb:1.12.126.
I am trying to deploy a Lambda using AWS CDK, and it does not seem to be deployed properly.
The "box" in the pipeline is green, so no errors are returned.
Everything appears to be fine, but when I run it manually to test, I receive the following message:
{
"errorType": "LambdaException",
"errorMessage": "Could not find the required 'QuickSight.Lambdas.SpiceRefresh.deps.json'. This file should be present at the root of the deployment package."
}
The issue is that if I download the artefact manually to my machine and upload it with the Function package upload button, it works properly.
I have one stack which contains a CfnParametersCode; this is the stack I use to create the Lambda.
public class LambdaStack : Stack
{
    public CfnParametersCode LambdaCode { get; set; }

    //code

    private Function BuildSpiceRefreshLambda()
    {
        LambdaCode = Code.FromCfnParameters();
        var func = new Function(this, Constants.Lambda.LambdaName, new FunctionProps
        {
            Code = LambdaCode,
            Handler = Constants.Lambda.LambdaHandler,
            FunctionName = Constants.Lambda.LambdaName,
            MemorySize = 1024,
            Tracing = Tracing.ACTIVE,
            Timeout = Duration.Seconds(480),
            Runtime = Runtime.DOTNET_CORE_2_1,
            Environment = new Dictionary<string, string>()
            {
                {"ENVIRONMENT", Fn.Ref(Constants.EnvironmentVariables.Environment)},
                {"APPLICATION_NAME", Constants.Lambda.ApplicationName},
                {"AWS_ACCOUNT_ID", Fn.Ref("AWS::AccountId")},
                {"LOG_GROUP_NAME", Constants.Lambda.LogGroupName}
            },
            ReservedConcurrentExecutions = 1,
            Role = SpiceRefreshLambdaRole,
            Vpc = this.GetProjectVpc(),
            SecurityGroups = new ISecurityGroup[]
            {
                securityGroup
            }
        });
        return func;
    }
}
and then I have the pipeline, in which one of the steps builds the Lambda:
var lambdaBuild = new PipelineProject(this, "appLambda", new PipelineProjectProps
{
    BuildSpec = BuildSpec.FromObject(new Dictionary<string, object>
    {
        ["version"] = "0.2",
        ["phases"] = new Dictionary<string, object>
        {
            ["install"] = new Dictionary<string, object>
            {
                ["commands"] = new string[]
                {
                    "echo \"Installing lambda tools for dotnet\"",
                    "dotnet tool install -g Amazon.Lambda.Tools",
                }
            },
            ["build"] = new Dictionary<string, object>
            {
                ["commands"] = new string[]
                {
                    "echo \"Packaging app lambda\"",
                    "(cd app/src/Lambdas/app.Lambdas.Action; dotnet lambda package)"
                }
            }
        },
        ["artifacts"] = new Dictionary<string, object>
        {
            ["files"] = new[]
            {
                "app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/app.Lambdas.Action.zip",
            }
        }
    }),
    Environment = new BuildEnvironment
    {
        BuildImage = LinuxBuildImage.STANDARD_2_0
    }
});
var lambdaBuildOutput = new Artifact_("LambdaBuildOutput");
new Amazon.CDK.AWS.CodePipeline.Pipeline(this, "appPipeline", new PipelineProps
{
    ArtifactBucket = Bucket.FromBucketAttributes(this, "artifact-bucket", new BucketAttributes
    {
        BucketArn = "bucket",
        EncryptionKey = "key"
    }),
    Role = "role",
    Stages = new[]
    {
        new StageProps
        {
            StageName = "Source",
            Actions = new[]
            {
                new CodeCommitSourceAction(new CodeCommitSourceActionProps
                {
                    ActionName = "Source",
                    Repository = code,
                    Output = sourceOutput,
                })
            }
        },
        new StageProps
        {
            StageName = "Build",
            Actions = new[]
            {
                new CodeBuildAction(new CodeBuildActionProps
                {
                    ActionName = "Lambda_Build",
                    Project = lambdaBuild,
                    Input = sourceOutput,
                    Outputs = new[] {lambdaBuildOutput},
                }),
            }
        },
        new StageProps
        {
            StageName = "Deploy",
            Actions = new[]
            {
                new CloudFormationCreateUpdateStackAction(new CloudFormationCreateUpdateStackActionProps
                {
                    ActionName = "DeployLambdaapp",
                    TemplatePath = props.appLambdaStack.StackTemplate,
                    StackName = "appLambdaDeploymentStack",
                    AdminPermissions = true,
                    ParameterOverrides = props.appLambdaStack.LambdaCode.Assign(lambdaBuildOutput.S3Location),
                    ExtraInputs = new[] {lambdaBuildOutput},
                    Role = "role",
                    DeploymentRole = "deployRole"
                }),
            }
        }
    }
});
There are more steps, but they are not relevant.
So, as you can see, I am applying the ParameterOverrides of props.appLambdaStack.LambdaCode.Assign(lambdaBuildOutput.S3Location), which seems to be fine: when the Lambda gets created, it has the expected package size. But when I execute it, I receive "errorMessage": "Could not find the required 'QuickSight.Lambdas.SpiceRefresh.deps.json'. This file should be present at the root of the deployment package."
The resulting CloudFormation template seems to be fine too:
"appLambdaF0BB8286": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {
"Ref": "appLambdaSourceBucketNameParameter"
},
"S3Key": {
"Ref": "appLambdaSourceObjectKeyParameter"
}
},
"Handler": "Constants.Lambda.LambdaHandler", //same as the constant in c#
//Rest of the properties
}
}
I checked before creating the post, and most people had a problem with the handler. Unfortunately, if I manually download the object referenced by appLambdaSourceBucketNameParameter and appLambdaSourceObjectKeyParameter and upload it to the Lambda, it works perfectly. I think that rules the handler out as my issue.
Any idea what can be wrong?
Found the solution.
The issue is that in the artifact I was returning the Lambda .zip:
["files"] = new[]
{
"app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/app.Lambdas.Action.zip",
}
But what I really need is to return the binaries of the Lambda (the publish folder):
["artifacts"] = new Dictionary<string, object>
{
["base-directory"] = "app/src/Lambdas/app.Lambdas.Action/bin/Release/netcoreapp2.1/publish",
["files"] = new[] { "**.*" }
}
Nothing else changed, and it worked.
CloudFormation translation: before, I was exporting artifact::app.Lambdas.Action.zip, so AWS was trying to find the binaries inside a package that only contained the .zip. Now it is exporting artifact::**.*, i.e. all the files.
I tried to set the params to picture.width(360).height(360):
const infoRequest = new GraphRequest(
  '/me',
  {
    accessToken: tokenData.accessToken,
    parameters: {
      fields: {
        string: 'id,email,name,picture.width(360).height(360)'
      }
    }
  },
  responseInfoCallback
);
but it returns an incorrect dimension of 480x480:
{
  "profile": {
    "picture": {
      "data": {
        "width": 480,
        "height": 480,
        "url": "https://scontent.fmnl3-1.fna.fbcdn.net/v/t1.30497-1/c141.0.480.480a/p480x480/84628273_176159830277856_972693363922829312_n.jpg?_nc_cat=1&_nc_sid=12b3be&_nc_eui2=AeF95aCnm2ggUPNPmTv9zCouik--Qfnh2B6KT75B-eHYHvMDChmr6ZbCJUK-KjNtt6PEAlHLBWx9GsGneBpfz-Jm&_nc_ohc=VQqsNSqP_MgAX_0Hjw6&_nc_ht=scontent.fmnl3-1.fna&oh=f11455cd5e09ac5466f5b7590d489e7e&oe=5EDF5715",
        "is_silhouette": true
      }
    },
    "id": "102794254765525",
    "name": "Elizabeth Aleajdheafejh Fallerwitz",
    "email": "swgqsfetew_1588681446@tfbnw.net"
  }
}
Any help would be much appreciated.
You can use this:
const user = {
  "name": getUser.name,
  "firstName": getUser.first_name,
  "lastName": getUser.last_name,
  // "profileImage": getUser.picture.data.url,
  profileImage: `http://graph.facebook.com/${getUser.id}/picture?type=large&redirect=true&width=500&height=500`,
  "token": data.accessToken,
}
If you run into any problems, feel free to ask.
I have created this stack:
export class InfrastructureStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const bucket = new s3.Bucket(this, "My Hello Website", {
      websiteIndexDocument: 'index.html',
      websiteErrorDocument: 'error.html',
      publicReadAccess: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });

    const api = new apigateway.RestApi(this, "My Endpoint", {
      restApiName: "My rest API name",
      description: "Some cool description"
    });

    const myLambda = new lambda.Function(this, 'My Backend', {
      runtime: lambda.Runtime.NODEJS_8_10,
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, 'code'))
    });

    const apiToLambda = new apigateway.LambdaIntegration(myLambda)

    api.root.addMethod('GET', apiToLambda);

    updateWebsiteUrl.newUrl(api.url);
  }
}
The last line of code is my function to update an asset that will be deployed on S3 as a website with the API URL created during deployment. This is just a plain Node.js script that replaces the file's URL_PLACEHOLDER with api.url.
Of course, during compile time the CDK does not know what the final address of the REST endpoint will be, because this is resolved at deploy time, so it updates my URL with something like:
'https://${Token[TOKEN.26]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.32]}/;'
Is there any way that I can update this after integrating the Lambda with the API endpoint, after deploying those?
I would like to use the @aws-cdk/aws-s3-deployment module (sketched below) to deploy code to the newly created bucket, all in the same stack, so one cdk deploy will update everything I need.
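For reference, a minimal sketch of what I mean, using the module's BucketDeployment construct (the local ./website folder and the s3deploy import alias are just placeholders):

import * as s3deploy from '@aws-cdk/aws-s3-deployment';

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
  sources: [s3deploy.Source.asset('./website')], // hypothetical local build output
  destinationBucket: bucket,
});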
To avoid confusion, my updateWebsiteUrl is:
export function newUrl(newUrl: string): void {
  const scriptPath = path.join(__dirname, '/../../front/');
  const scriptName = 'script.js';
  fs.readFile(scriptPath + scriptName, (err, buf) => {
    let scriptContent: string = buf.toString();
    let newScript = scriptContent.replace('URL_PLACEHOLDER', newUrl);
    fs.writeFile(scriptPath + 'newScript.js', newScript, () => {
      console.log('done writing');
    });
  });
}
And my script is simple:
const url = URL_PLACEHOLDER;

function foo() {
  let req = new XMLHttpRequest();
  req.open('GET', url, false);
  req.send(null);
  if (req.status == 200) {
    replaceContent(req.response);
  }
}

function replaceContent(content) {
  document.getElementById('content').innerHTML = content;
}
I ran into the same issue today and managed to find a solution for it.
The C# code I am using in my CDK program is the following:
// This will at runtime be just a token which refers to the actual JSON in the format {'api':{'baseUrl':'https://your-url'}}
var configJson = stack.ToJsonString(new Dictionary<string, object>
{
    ["api"] = new Dictionary<string, object>
    {
        ["baseUrl"] = api.Url
    }
});

var configFile = new AwsCustomResource(this, "config-file", new AwsCustomResourceProps
{
    OnUpdate = new AwsSdkCall
    {
        Service = "S3",
        Action = "putObject",
        Parameters = new Dictionary<string, string>
        {
            ["Bucket"] = bucket.BucketName,
            ["Key"] = "config.json",
            ["Body"] = configJson,
            ["ContentType"] = "application/json",
            ["CacheControl"] = "max-age=0, no-cache, no-store, must-revalidate"
        },
        PhysicalResourceId = PhysicalResourceId.Of("config"),
    },
    Policy = AwsCustomResourcePolicy.FromStatements(
        new[]
        {
            new PolicyStatement(new PolicyStatementProps
            {
                Actions = new[] { "s3:PutObject" },
                Resources = new[] { bucket.ArnForObjects("config.json") }
            })
        })
});
You will need to install the following package to have the types available: https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html
It is basically a part of the solution you can find as an answer to this question AWS CDK passing API Gateway URL to static site in same Stack, or at this GitHub repository: https://github.com/jogold/cloudstructs/blob/master/src/static-website/index.ts#L134
In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:
UPDATE Person SET Name = FirstName + ' ' + LastName
And the MongoDB pseudo-code would be:
db.person.update( {}, { $set : { name : firstName + ' ' + lastName } );
The best way to do this is with MongoDB 4.2+, which allows the use of an aggregation pipeline in the update document of the updateOne, updateMany, or update (deprecated in most, if not all, language drivers) collection methods.
MongoDB 4.2+
Version 4.2 also introduced the $set pipeline stage operator, which is an alias for $addFields. I will use $set here as it maps to what we are trying to achieve.
db.collection.<update method>(
    {},
    [
        { "$set": { "name": { "$concat": ["$firstName", " ", "$lastName"] } } }
    ]
)
Note that square brackets in the second argument to the method specify an aggregation pipeline instead of a plain update document because using a simple document will not work correctly.
MongoDB 3.4+
In 3.4+, you can use $addFields and the $out aggregation pipeline operators.
db.collection.aggregate(
[
{ "$addFields": {
"name": { "$concat": [ "$firstName", " ", "$lastName" ] }
}},
{ "$out": <output collection name> }
]
)
Note that this does not update your collection but instead replaces the existing collection or creates a new one. Also, for update operations that require "typecasting", you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
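For instance, a minimal sketch of that client-side processing in the shell; the string-typed age field and its conversion to an integer are hypothetical:

// Hypothetical: "age" was stored as a string and must be typecast client-side
db.collection.find({ "age": { "$type": "string" } }).forEach(function (doc) {
    db.collection.update(
        { "_id": doc._id },
        { "$set": { "age": parseInt(doc.age, 10) } } // the conversion happens in the client
    );
});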
MongoDB 3.2 and 3.0
The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string.
You then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.
Aggregation query:
var cursor = db.collection.aggregate([
    { "$project": {
        "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }}
])
MongoDB 3.2 or newer
You need to use the bulkWrite method.
var requests = [];
cursor.forEach(document => {
    requests.push({
        'updateOne': {
            'filter': { '_id': document._id },
            'update': { '$set': { 'name': document.name } }
        }
    });
    if (requests.length === 500) {
        // Execute per 500 operations and re-init
        db.collection.bulkWrite(requests);
        requests = [];
    }
});

if (requests.length > 0) {
    db.collection.bulkWrite(requests);
}
MongoDB 2.6 and 3.0
From this version, you need to use the now deprecated Bulk API and its associated methods.
var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;

cursor.snapshot().forEach(function(document) {
    bulk.find({ '_id': document._id }).updateOne({
        '$set': { 'name': document.name }
    });
    count++;
    if (count % 500 === 0) {
        // Execute per 500 operations and re-init
        bulk.execute();
        bulk = db.collection.initializeUnorderedBulkOp();
    }
})

// clean up queues
if (count > 0) {
    bulk.execute();
}
MongoDB 2.4
cursor["result"].forEach(function(document) {
db.collection.update(
{ "_id": document._id },
{ "$set": { "name": document.name } }
);
})
You should iterate through. For your specific case:
db.person.find().snapshot().forEach(
    function (elem) {
        db.person.update(
            { _id: elem._id },
            {
                $set: {
                    name: elem.firstname + ' ' + elem.lastname
                }
            }
        );
    }
);
Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.
Obsolete answer below
You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().
For a database with high activity, you may run into issues where your updates affect actively changing records, and for this reason I recommend using snapshot():
db.person.find().snapshot().forEach(function (hombre) {
    hombre.name = hombre.firstName + ' ' + hombre.lastName;
    db.person.save(hombre);
});
http://docs.mongodb.org/manual/reference/method/cursor.snapshot/
Starting with Mongo 4.2, db.collection.update() can accept an aggregation pipeline, finally allowing the update/creation of a field based on another field:
// { firstName: "Hello", lastName: "World" }
db.collection.updateMany(
{},
[{ $set: { name: { $concat: [ "$firstName", " ", "$lastName" ] } } }]
)
// { "firstName" : "Hello", "lastName" : "World", "name" : "Hello World" }
The first part {} is the match query, filtering which documents to update (in our case all documents).
The second part [{ $set: { name: { ... } } }] is the update aggregation pipeline (note the square brackets signifying the use of an aggregation pipeline). $set is a new aggregation operator and an alias of $addFields.
Regarding this answer, the snapshot function is deprecated in version 3.6, according to this update. So, on version 3.6 and above, it is possible to perform the operation this way:
db.person.find().forEach(
    function (elem) {
        db.person.update(
            { _id: elem._id },
            {
                $set: {
                    name: elem.firstname + ' ' + elem.lastname
                }
            }
        );
    }
);
I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:
MongoClient.connect("...", function(err, db) {
    var c = db.collection('yourCollection');
    var s = c.find({ /* your query */ }).stream();
    s.on('data', function(doc) {
        c.update(
            { _id: doc._id },
            { $set: { name: doc.firstName + ' ' + doc.lastName } },
            function(err, result) { /* result == true? */ }
        );
    });
    s.on('end', function() {
        // stream can end before all your updates do if you have a lot
    })
})
The update() method takes an aggregation pipeline as a parameter, like:
db.collection_name.update(
    {
        // Query
    },
    [
        // Aggregation pipeline
        { "$set": { "id": "$_id" } }
    ],
    {
        // Options
        "multi": true // false when a single doc has to be updated
    }
)
A field can be set or unset using existing values via the aggregation pipeline; see the sketch below.
Note: use $ with the field name to specify the field which has to be read.
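For instance, a minimal sketch that copies _id into id and drops a hypothetical stale field in the same pipeline update (requires 4.2+):

db.collection_name.update(
    {},
    [
        { "$set": { "id": "$_id" } },  // $_id reads the existing field
        { "$unset": ["legacyField"] }  // "legacyField" is a hypothetical field name
    ],
    { "multi": true }
)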
Here's what we came up with for copying one field to another for ~150_000 records. It took about 6 minutes, but it is still significantly less resource-intensive than it would have been to instantiate and iterate over the same number of Ruby objects.
js_query = %({
    $or: [
        {
            'settings.mobile_notifications': { $exists: false },
            'settings.mobile_admin_notifications': { $exists: false }
        }
    ]
})

js_for_each = %(function(user) {
    if (!user.settings.hasOwnProperty('mobile_notifications')) {
        user.settings.mobile_notifications = user.settings.email_notifications;
    }
    if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
        user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
    }
    db.users.save(user);
})

js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
Mongoid::Sessions.default.command('$eval' => js)
With MongoDB version 4.2+, updates are more flexible, as it allows the use of the aggregation pipeline in its update, updateOne and updateMany methods. You can now transform your documents using aggregation operators and then update them without the need to explicitly state the $set command (instead we use $replaceRoot: {newRoot: "$$ROOT"}).
Here we use the aggregation query to extract the timestamp from MongoDB's ObjectID "_id" field and update the documents. (I am not an expert in SQL, but I think SQL does not provide any auto-generated ObjectID that carries a timestamp; you would have to create that date yourself.)
var collection = "person"

agg_query = [
    {
        "$addFields": {
            "_last_updated": {
                "$toDate": "$_id"
            }
        }
    },
    {
        $replaceRoot: {
            newRoot: "$$ROOT"
        }
    }
]

db.getCollection(collection).updateMany({}, agg_query, { upsert: true })
(I would have posted this as a comment, but couldn't)
For anyone who lands here trying to update one field using another in the document with the C# driver...
I could not figure out how to use any of the UpdateXXX methods and their associated overloads since they take an UpdateDefinition as an argument.
// we want to set Prop1 to Prop2
class Foo { public string Prop1 { get; set; } public string Prop2 { get; set; } }

void Test()
{
    var update = new UpdateDefinitionBuilder<Foo>();
    update.Set(x => x.Prop1, <new value; no way to get a hold of the object that I can find>)
}
As a workaround, I found that you can use the RunCommand method on an IMongoDatabase (https://docs.mongodb.com/manual/reference/command/update/#dbcmd.update).
var command = new BsonDocument
{
    { "update", "CollectionToUpdate" },
    { "updates", new BsonArray
        {
            new BsonDocument
            {
                // Any filter; here the check is if Prop1 does not exist
                { "q", new BsonDocument { ["Prop1"] = new BsonDocument("$exists", false) } },
                // set it to the value of Prop2
                { "u", new BsonArray { new BsonDocument { ["$set"] = new BsonDocument("Prop1", "$Prop2") } } },
                { "multi", true }
            }
        }
    }
};

database.RunCommand<BsonDocument>(command);
MongoDB 4.2+ Golang
result, err := collection.UpdateMany(ctx, bson.M{},
    mongo.Pipeline{
        bson.D{{"$set",
            bson.M{"name": bson.M{"$concat": []string{"$lastName", " ", "$firstName"}}},
        }},
    },
)