AWS Cognito does not use my custom message lambda

I have created an AWS Cognito User Pool through Terraform as follows:
resource "aws_cognito_user_pool" "users-base" {
name = "users"
provider = aws.eu-west-1
auto_verified_attributes = ["email"]
username_attributes = ["email"]
account_recovery_setting {
recovery_mechanism {
name = "verified_email"
priority = 1
}
recovery_mechanism {
name = "verified_phone_number"
priority = 2
}
}
admin_create_user_config {
allow_admin_create_user_only = false
}
email_configuration {
email_sending_account = "DEVELOPER"
from_email_address = "No-reply <no-reply#acme.com>"
source_arn = aws_ses_email_identity.no-reply.arn
}
lambda_config {
custom_message = aws_lambda_function.cognito-users-base.arn
}
}
While I expect my users to receive emails generated by aws_lambda_function.cognito-users-base (which is configured correctly, since it was generating errors previously), they still receive the messages generated according to verification_message_template.email_message_by_link. What am I missing?
Edit: I have also checked that the generated smsMessage is under 140 characters and the emailMessage under 20,000. Moreover, when I add an attribute, the SignUp process blocks, so the lambda is definitely being called one way or another.

It turned out that my lambda was generating a body without the {##Some text##} placeholder in it, which led Cognito to silently discard the generated body and fall back to the default template.
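For reference, a minimal sketch of a custom message handler that keeps the link placeholder in the body (Node.js; the trigger-source check, subject and wording here are just illustrative):

exports.handler = async (event) => {
    if (event.triggerSource === "CustomMessage_SignUp") {
        event.response.emailSubject = "Please verify your email";
        // With link-based verification the body must contain a {##...##}
        // placeholder; Cognito replaces it with the verification link and
        // uses the text between the markers as the link label. Without it,
        // the generated body is discarded.
        event.response.emailMessage =
            "Welcome! {##Click here to verify your email address##}";
    }
    return event;
};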

Related

Terraform MalformedXML: The XML you provided was not well-formed for aws_s3_bucket_lifecycle_configuration

I'm really stuck today on the following error:
MalformedXML: The XML you provided was not well-formed
when applying aws_s3_bucket_lifecycle_configuration via Terraform using hashicorp/aws v4.38.0.
I wanted to set a rule that would expire files after 365 days with a file size greater than 0 bytes for the my_prefix prefix, so the definition of the resource looks like this:
resource "aws_s3_bucket_lifecycle_configuration" "my-bucket-lifecycle-configuration" {
depends_on = [aws_s3_bucket_versioning.my-bucket-versioning]
bucket = aws_s3_bucket.my_bucket.id
rule {
id = "my_prefix_current_version_config"
filter {
and {
prefix = "my_prefix/"
object_size_greater_than = 0
}
}
expiration {
days = 365
}
status = "Enabled"
}
}
Does anyone have an idea what's wrong with the above definition?
Documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration
Remark: the following definition can be applied without problem (no and block):
resource "aws_s3_bucket_lifecycle_configuration" "my-bucket-lifecycle-configuration" {
depends_on = [aws_s3_bucket_versioning.my-bucket-versioning]
bucket = aws_s3_bucket.my_bucket.id
rule {
id = "my_prefix_current_version_config"
filter {
prefix = "my_prefix/"
}
expiration {
days = 365
}
status = "Enabled"
}
}
From the documentation, you have to specify both the object size range (which I take to mean specifying both object_size_greater_than and object_size_less_than) and the prefix, for example:
filter {
  and {
    prefix                   = "my_prefix/"
    object_size_greater_than = 0
    object_size_less_than    = 500
  }
}

How to skip declaring values in root module (for_each loop)

I am trying to build a reusable module that creates multiple S3 buckets. Based on a condition, some buckets may have lifecycle rules, others do not. I am using a for expression in the lifecycle rule resource and have mostly managed to do it, but not 100%.
My var:
variable "bucket_details" {
type = map(object({
bucket_name = string
enable_lifecycle = bool
glacier_ir_days = number
glacier_days = number
}))
}
How I iterate over the map in the lifecycle resource:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = { for bucket, values in var.bucket_details : bucket => values if values.enable_lifecycle }
depends_on = [aws_s3_bucket_versioning.compliant_s3_bucket_versioning]
bucket = aws_s3_bucket.compliant_s3_bucket[each.key].bucket
rule {
id = "basic_config"
status = "Enabled"
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
transition {
days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
transition {
days = each.value["glacier_days"]
storage_class = "GLACIER"
}
expiration {
days = 2555
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_days"]
storage_class = "GLACIER"
}
noncurrent_version_expiration {
noncurrent_days = 2555
}
}
}
How I WOULD love to reference it in the root module:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
enable_lifecycle = true
glacier_ir_days = 555
glacier_days = 888
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
enable_lifecycle = false
}
}
}
So when I reference it like that, it fails validation because I am not setting values for both glacier_ir_days and glacier_days - understandable.
My question is: is there a way, when enable_lifecycle is set to false, to not require values for these?
Currently, as a workaround, I am just setting zeroes for those, and since the resource is not created when enable_lifecycle is false, it does not matter, but I would love it to be cleaner.
Thank you in advance.
The forthcoming Terraform v1.3 release will include a new feature for declaring optional attributes in an object type constraint, with the option of declaring a default value to use when the attribute isn't set.
At the time I'm writing this the v1.3 release is still under development and so not available for general use, but I'm going to answer this with an example that should work with Terraform v1.3 once it's released. If you wish to try it in the meantime you can experiment with the most recent v1.3 alpha release which includes this feature, though of course I would not recommend using it in production until it's in a final release.
It seems that your glacier_ir_days and glacier_days attributes are, from a modeling perspective, attributes that are required when the lifecycle is enabled and not required when the lifecycle is disabled.
I would suggest modelling that by placing these attributes in a nested object called lifecycle and implementing it such that the lifecycle resource is enabled when that attribute is set, and disabled when it is left unset.
The declaration would therefore look like this:
variable "s3_buckets" {
type = map(object({
bucket_name = string
lifecycle = optional(object({
glacier_ir_days = number
glacier_days = number
}))
}))
}
When an attribute is marked as optional(...) like this, Terraform will allow omitting it in the calling module block and then will quietly set the attribute to null when it performs the type conversion to make the given value match the type constraint. This particular declaration doesn't have a default value, but it's also possible to pass a second argument in the optional(...) syntax which Terraform will then use instead of null as the placeholder value when the attribute isn't specified.
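For example, a variant of the declaration above where each day count falls back to a default instead of null when omitted (the specific numbers here are just illustrative):

variable "s3_buckets" {
  type = map(object({
    bucket_name = string
    lifecycle = optional(object({
      glacier_ir_days = optional(number, 90)  # illustrative default
      glacier_days    = optional(number, 365) # illustrative default
    }))
  }))
}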
The calling module block would therefore look like this:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
lifecycle = {
glacier_ir_days = 555
glacier_days = 888
}
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
}
}
}
Your resource block inside the module will remain similar to what you showed, but the if clause of the for expression will test if the lifecycle object is non-null instead:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = {
for bucket, values in var.bucket_details : bucket => values
if values.lifecycle != null
}
# ...
}
Finally, the references to the attributes would be slightly different to traverse through the lifecycle object:
transition {
  days          = each.value.lifecycle.glacier_days
  storage_class = "GLACIER"
}

What is the workaround of using for each and count together in Terraform?

I have two conditions that need to be fulfilled:
Grant users permission to specific project-id based on env. For example: my-project-{env} (env: stg/prd)
I want to loop over the variables, instead of writing down repetitive resource for each user.
Example:
variable "some_ext_users" {
  type = map(any)
  default = {
    user_1 = { email_id = "user_1@gmail.com" }
    user_2 = { email_id = "user_2@gmail.com" }
  }
}
To avoid a repetitive resource block for each user (imagine 100+ users), I decided to list them in a variable as written above.
Then I'd like to assign these users a GCS permission, e.g.:
resource "google_storage_bucket_iam_member" "user_email_access" {
for_each = var.some_ext_users
count = var.env == "stg" ? 1 : 0
provider = google-beta
bucket = "my-bucketttt"
role = "roles/storage.objectViewer"
member = "user:${each.value.email_id}"
}
The error I'm getting is clear:
Error: Invalid combination of "count" and "for_each"

  on ../../../modules/my-tf.tf line 54, in resource "google_storage_bucket_iam_member" "user_email_access":
  54:   for_each = var.some_ext_users

The "count" and "for_each" meta-arguments are mutually-exclusive, only one
should be used to be explicit about the number of resources to be created.
My question is, what is the workaround in order to satisfy the requirements above if count and for_each can't be used together?
You could control the user list according to the environment, rather than trying to control the resource. So, something like this:
resource "google_storage_bucket_iam_member" "user_email_access" {
for_each = var.env == "stg" ? var.some_ext_users : {}
provider = google-beta
bucket = "my-bucketttt"
role = "roles/storage.objectViewer"
member = "user:${each.value.email_id}"
}
The rule for for_each is to assign it a map that has one element per instance you want to declare, so the best way to think about your requirement here is that you need to write an expression that produces a map with zero elements when your condition doesn't hold.
The usual way to project and filter collections in Terraform is for expressions, and indeed we can use a for expression with an if clause to conditionally filter out unwanted elements, which in this particular case will be all of the elements:
resource "google_storage_bucket_iam_member" "user_email_access" {
for_each = {
for name, user in var.some_ext_users : name => user
if var.env == "stg"
}
# ...
}
Another possible way to structure this would be to include the environment keywords as part of the data structure, which would keep all of the information in one spot and potentially allow you to have entries that apply to more than one environment at once:
variable "some_ext_users" {
type = map(object({
email_id = string
environments = set(string)
}))
default = {
user_1 = {
email_id = "user_1#gmail.com"
environments = ["stg"]
}
user_2 = {
email_id = "user_2#gmail.com"
environments = ["stg", "prd"]
}
}
}
resource "google_storage_bucket_iam_member" "user_email_access" {
for_each = {
for name, user in var.some_ext_users : name => user
if contains(user.environments, var.env)
}
# ...
}
This is a variation of the example in the "Filtering Elements" section of the Terraform documentation on for expressions, which uses an is_admin flag in order to declare different resources for admin users vs. non-admin users. In this case, notice that the if clause refers to the symbols declared in the for expression, which means we can now get a different result for each element of the map, whereas the first example either kept all elements or none of them.
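Paraphrasing that documentation pattern as a rough sketch (the variable and local names here are just illustrative), it splits one map into two, each of which can then feed the for_each of a different resource:

variable "users" {
  type = map(object({
    is_admin = bool
  }))
}

locals {
  # Each of these maps can be used as the for_each of a different resource.
  admin_users = {
    for name, user in var.users : name => user
    if user.is_admin
  }
  regular_users = {
    for name, user in var.users : name => user
    if !user.is_admin
  }
}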

Google Classroom API : Unable to receive push notifications for courseworks and student submissions (COURSE_WORK_CHANGES)

We are seriously blocked. We have followed the documentation below (among many others) to set up the pub/sub pipelines, create service accounts, assign permissions and use the right scopes and feed types for registrations.
https://developers.google.com/classroom/guides/push-notifications
So programmatically we are able to do the following in .NET using the API:
We can create courses.
We can create registrations for a given courseId.
We can create/update courseworks for the course for which we have created a registration.
All good so far.
BUT, we don't receive notifications for that created/updated course work.
Some code for clarity:
ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer("sa-something@precise-asset-259113.iam.gserviceaccount.com")
    {
        User = "impersonated user",
        Scopes = new string[] {
            "https://www.googleapis.com/auth/classroom.coursework.students",
            "https://www.googleapis.com/auth/classroom.courses",
            "https://www.googleapis.com/auth/classroom.push-notifications"
        }
    }.FromPrivateKey("My private key"));

// Authorize request
var result = credential.RequestAccessTokenAsync(CancellationToken.None).Result;

var service = new ClassroomService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
});
// We get courses
var courses = service.Courses.List().Execute();

// We get one course for registration
var course = courses.Courses.First();

// We create a registration
var registration = service.Registrations.Create(new Google.Apis.Classroom.v1.Data.Registration()
{
    Feed = new Google.Apis.Classroom.v1.Data.Feed()
    {
        FeedType = "COURSE_WORK_CHANGES",
        CourseWorkChangesInfo = new Google.Apis.Classroom.v1.Data.CourseWorkChangesInfo()
        {
            CourseId = course.Id
        },
    },
    CloudPubsubTopic = new Google.Apis.Classroom.v1.Data.CloudPubsubTopic()
    {
        TopicName = "projects/precise-asset-259113/topics/test"
    },
});

// Successful response - we get a registration ID
var response = registration.Execute();
var courseWork = new CourseWork()
{
    CourseId = course.Id,
    Title = "Ver Test",
    Description = "Calculus",
    WorkType = "ASSIGNMENT",
    MaxPoints = 20.0,
    State = "PUBLISHED",
    AlternateLink = "www.uni.com",
    CreatorUserId = course.OwnerId,
    CreationTime = DateTime.UtcNow,
    DueTime = new TimeOfDay() { Hours = 5, Minutes = 10, Nanos = 10, Seconds = 10 },
    DueDate = new Date() { Day = 3, Month = 12, Year = 2019 },
    Assignment = new Assignment() { StudentWorkFolder = new DriveFolder() { AlternateLink = "Somewhere", Title = "My Calculus" } }
};

// Create course work for the course that we registered
var courseWorkResponse = service.Courses.CourseWork.Create(courseWork, course.Id).Execute();
SubscriberServiceApiClient subscriber = SubscriberServiceApiClient.Create();
SubscriptionName subscriptionName = new SubscriptionName("precise-asset-259113", "test");
PullResponse pullResponse = subscriber.Pull(
    subscriptionName, returnImmediately: true, maxMessages: 20);

// Check for push notifications BUT... NADA!!!
foreach (ReceivedMessage msg in pullResponse.ReceivedMessages)
{
    string text = Encoding.UTF8.GetString(msg.Message.Data.ToArray());
    Console.WriteLine($"Message {msg.Message.MessageId}: {text}");
}
Can you please assist?
Thanks
There are several things you need to change in order to ensure you get the notifications:
You need to create your subscription before you send the request that will generate the Pub/Sub message. Only messages published after the successful creation of a subscription are guaranteed to be received by subscribers for that subscription.
A single pull request with returnImmediately set to true is unlikely to return any messages, even if a message has been published. With this property set, if there are no messages immediately available in memory on the server that is reached, an empty response is returned. You should always set returnImmediately to false. This still won't guarantee that messages are returned in a single request/response, even if there are messages available, but it will make it more likely.
Ideally, you would use the asynchronous client library, which opens a stream to the Cloud Pub/Sub service and receives messages as soon as they are available. If you are going to use the synchronous Pull method directly, then you need to keep many of these requests outstanding simultaneously in order to ensure delivery of messages with minimal latency. As soon as you receive a PullResponse for any of the outstanding requests, you should immediately open up another request to replace it. The goal of the asynchronous client library is to prevent you from having to take all of these steps manually to ensure efficient delivery.
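As a rough sketch of that last point, using the streaming SubscriberClient from the Google.Cloud.PubSub.V1 package (the project and subscription names are reused from the question; assume this runs inside an async method):

// Streaming (asynchronous) client: keeps a connection open and invokes the
// handler as soon as messages arrive, instead of one-shot Pull calls.
SubscriptionName subscriptionName = new SubscriptionName("precise-asset-259113", "test");
SubscriberClient subscriber = await SubscriberClient.CreateAsync(subscriptionName);
Task startTask = subscriber.StartAsync((PubsubMessage msg, CancellationToken ct) =>
{
    string text = Encoding.UTF8.GetString(msg.Data.ToArray());
    Console.WriteLine($"Message {msg.MessageId}: {text}");
    return Task.FromResult(SubscriberClient.Reply.Ack);
});
// ... later, when you want to stop listening:
await subscriber.StopAsync(CancellationToken.None);
await startTask;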

AWS Cognito Node lambda user migration: authenticateUser is not defined

I would like to migrate users from user pool 1 to user pool 2 with the user migration lambda function configured in the AWS Console. To do so, I have used the sample script provided by AWS, but I can't figure out how to use authenticateUser, for instance. It is not defined when executed.
The migration lambda is executed.
authenticateUser is not defined
I have also tried to create a layer, which imported successfully, and set the layer on my lambda function, but I could not make it work either.
exports.handler = (event, context, callback) => {
    var user;
    if (event.triggerSource == "UserMigration_Authentication") {
        // authenticate the user with your existing user directory service
        user = authenticateUser(event.userName, event.request.password);
        if (user) {
            event.response.userAttributes = {
                "email": user.emailAddress,
                "email_verified": "true"
            };
            event.response.finalUserStatus = "CONFIRMED";
            event.response.messageAction = "SUPPRESS";
            context.succeed(event);
        }
        else {
            // Return error to Amazon Cognito
            callback("Bad password");
        }
    }
    else if (event.triggerSource == "UserMigration_ForgotPassword") {
        // Lookup the user in your existing user directory service
        user = lookupUser(event.userName);
        if (user) {
            event.response.userAttributes = {
                "email": user.emailAddress,
                // required to enable password-reset code to be sent to user
                "email_verified": "true"
            };
            event.response.messageAction = "SUPPRESS";
            context.succeed(event);
        }
        else {
            // Return error to Amazon Cognito
            callback("Bad password");
        }
    }
    else {
        // Return error to Amazon Cognito
        callback("Bad triggerSource " + event.triggerSource);
    }
};
authenticateUser is not defined
My question is: how do we import this function?
Thanks a lot.
That sample code is for migrating a user from a legacy database, and the authenticateUser and lookupUser functions are just abstractions for your business logic (which AWS can't write for you). For instance, if you have to migrate from a legacy database (not a user pool), you would look up the user in your table, grab their salt, hash the password passed into the migration trigger using the same logic as your legacy authentication method, compare it against the stored hashed password in your legacy database, and so on. (It gets a little simpler if you were storing passwords in plaintext, but let's not consider that.)
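As a purely illustrative sketch of what such an authenticateUser could look like against a legacy database (the lookupLegacyUser helper, the bcryptjs dependency and the field names are all assumptions about your legacy store, not something AWS provides):

// Purely illustrative: lookupLegacyUser and the bcrypt hashing scheme are
// assumptions about your legacy user store.
const bcrypt = require('bcryptjs');

async function authenticateUser(username, password) {
    // Fetch the legacy record however your old system stores it (SQL, DynamoDB, ...).
    const legacyUser = await lookupLegacyUser(username); // hypothetical helper
    if (!legacyUser) {
        return null;
    }
    // Compare the supplied password against the stored hash, using the same
    // hashing scheme your legacy authentication used.
    const matches = await bcrypt.compare(password, legacyUser.passwordHash);
    return matches ? legacyUser : null;
}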
Here's a snippet that should do most of the migration for you. Someone asked a similar question on Github and referenced this StackOverflow issue.
const AWS = require('aws-sdk');
const cognitoIdentity = new AWS.CognitoIdentityServiceProvider({ region: '<your-region-here>' });
const UserPoolId = process.env.deprecatedUserPoolId;

exports.handler = async (event) => {
    const { userName } = event;
    const getUserParams = {
        Username: userName,
        UserPoolId
    };
    try {
        const user = await cognitoIdentity.adminGetUser(getUserParams).promise();
        // TODO: if you have custom attributes, grab them from the user variable
        // and store them in the response below
        event.response = { finalUserStatus: "CONFIRMED" };
        return event;
    } catch (e) {
        throw e; // no user to migrate, give them an error in the client
    }
};