I'm using jenkins-dsl-jobs to programmatically create views and jobs, but is it possible to create lockable resources as well? I have a constantly growing list of test units, each of which needs its own Jenkins lockable resource, and maintaining the list is quite tedious.
Thanks!
I have a few things I'd like to clarify, specifically regarding the modeling architecture for a serverless application using AWS CDK.
I'm currently working on a serverless application developed using AWS CDK in TypeScript. As a convention, we also follow the rules below.
A stack should only have one table (dynamo)
A stack should only have one REST API (api-gateway)
A stack should not depend on any other stack (no cross-references), unless it's the Event-Stack (a stack dedicated to managing EventBridge operations)
The reason for that is so that each stack can be deployed independently without any interference from other stacks. In a way, our stacks are equivalent to micro-services in a micro-service architecture.
At the moment all the REST APIs are public, and now we have decided to make them private by attaching custom Lambda authorizers to each API Gateway resource. Now, in this custom Lambda authorizer, we have to do certain operations (apart from token validation) in order to allow the user's request to proceed further. Those operations are:
Get the user’s role from DB using the user ID in the token
Get the user’s subscription plan (paid, free, etc.) from DB using the user ID in the token.
Get the user’s current payment status (due, no due, fully paid, etc.) from DB using the user ID in the token.
Get the scopes allowed for this user based on 1, 2, and 3.
Check whether the user can access this scope (the resource the user is currently requesting) based on 4.
This authorizer Lambda function needs to be used by all the other stacks to make their APIs private. But the problem is that roles, scopes, subscriptions, payments, and user data live in different stacks, in their own dedicated DynamoDB tables. Because of the rules I have explained before (especially rule number 3), we cannot depend on the resources defined in other stacks. Hence we are unable to create the authorizer we want.
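To illustrate, the authorizer handler would roughly look like the following sketch (Python, with placeholder helper names; every lookup in it is exactly the cross-stack data access we currently can't do):
# Rough sketch of the authorizer's decision flow (hypothetical helper names).
def handler(event, context):
    user_id = validate_token(event["authorizationToken"])   # token validation

    role = get_role(user_id)                  # 1. role, from the roles table
    plan = get_subscription_plan(user_id)     # 2. subscription plan, from the subscriptions table
    payment = get_payment_status(user_id)     # 3. payment status, from the payments table

    scopes = resolve_scopes(role, plan, payment)             # 4. scopes allowed for this user
    allowed = event["methodArn"] in scopes                   # 5. can the user access this resource?

    # Standard API Gateway Lambda authorizer response shape.
    return {
        "principalId": user_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event["methodArn"],
            }],
        },
    }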
Solutions we could think of and their problems:
Since EventBridge isn't bi-directional, we cannot use it to fetch data from a resource in a different stack.
We can invoke a Lambda in a different stack using its ARN and get the required data from its response, but AWS discourages this as a CDK anti-pattern.
We cannot use a technology like gRPC because it requires a continuously running server, which is outside the scope of a serverless architecture.
There was also a proposal to re-design the CDK layout of our application. The main feature of this layout is moving from no cross-references to a fully cross-referenced pattern. (Inspired by the layered architecture described in this AWS best practice.)
Based on that article, we came up with a layout like this.
Presentation Layer
Stack for deploying the consumer web app
Stack for deploying admin portal web app
Application Layer
Stack for REST API definitions using API Gateway
Stack for Lambda functions running business-specific operations (Ex: CRUDs)
Stack for Lambda functions that run on event triggers
Stack for Authorization (custom Lambda authorizer(s))
Stack for Authentication implementation (Cognito user pool and client)
Stack for Events (EventBuses)
Stack for storage (S3)
Data Layer
Stack containing all the database definitions
There could be another stack for reporting, data engineering, etc.
As you can see, the stacks are now going to have multiple dependencies on other stacks' resources (but no circular dependencies, as shown in the attached image). While this pattern unblocks us and lets us write an effective custom Lambda authorizer, we are not sure whether it will become a problem in the long run as the application's scope increases.
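For reference, the cross-referenced wiring would mean passing constructs between stacks at the app level, roughly like this simplified Python CDK sketch (stack and table names are made up):
from aws_cdk import App, Stack
from aws_cdk import aws_dynamodb as dynamodb
from constructs import Construct

class DataStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Data-layer table, exposed as an attribute for other stacks to consume.
        self.users_table = dynamodb.Table(
            self, "Users",
            partition_key=dynamodb.Attribute(name="pk", type=dynamodb.AttributeType.STRING),
        )

class AuthStack(Stack):
    # Receives the table object directly; CDK synthesizes the cross-stack export/import.
    def __init__(self, scope: Construct, id: str, *, users_table: dynamodb.ITable, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.users_table = users_table  # the authorizer Lambda defined here can now read from it

app = App()
data = DataStack(app, "DataStack")
AuthStack(app, "AuthStack", users_table=data.users_table)
app.synth()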
I would highly appreciate any help you could give us to resolve this problem. Thanks!
Multiple options:
Use Parameter Store rather than CloudFormation exports
Split stacks into a layered architecture like you described in your question and import things between stacks using SSM Parameter Store, like the other answer describes. This is the most obvious choice for breaking inter-stack dependencies. I use it all the time.
Use fixed resource names, easily referenceable and importable
Stack A creates an S3 bucket "myapp-users"; Stack B imports the S3 bucket by its fixed name using Bucket.fromBucketName(this, 'Users', 'myapp-users'). Fixed resource names have their own downsides, so this should be used only for resources that are indeed shared between stacks. They prevent easy replacement of the resource, for example. Also, you need to enforce the correct stack deployment order; CDK will not help you with that anymore, since there are no cross-stack dependencies to enforce it.
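In Python CDK the same idea would look roughly like this (sketch only; the bucket name comes from the example above):
from aws_cdk import aws_s3 as s3

# In Stack A: create the bucket with a fixed, well-known physical name.
users_bucket = s3.Bucket(self, "Users", bucket_name="myapp-users")

# In Stack B: import it by that same fixed name (no cross-stack reference is created).
users_bucket = s3.Bucket.from_bucket_name(self, "Users", "myapp-users")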
Combine the app into a single stack
This sounds extreme and counter-intuitive, but I found that most real-life teams don't actually have a pressing need for multi-stack deployment. If your only concern is separating code-owners of different parts of the application, you can get away with splitting the stack into multiple Constructs, composed into a single stack, where each team takes care of their Construct and its children. Think of it as combining multiple Git repos into a monorepo. A lot of projects are doing that.
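A rough Python CDK sketch of that idea (construct names are made up):
from aws_cdk import Stack
from constructs import Construct

# Each team owns one Construct; the app is still deployed as a single stack.
class UsersService(Construct):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        # users table, Lambdas, API resources, etc. go here

class PaymentsService(Construct):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        # payments table, Lambdas, API resources, etc. go here

class AppStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        UsersService(self, "Users")
        PaymentsService(self, "Payments")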
A strategy I use to avoid hard cross-references involves storing shared resource values in AWS Systems Manager Parameter Store.
In the exporting stack, we can save the name of an S3 Bucket for instance:
# In the exporting stack: publish the bucket name as an SSM parameter.
from aws_cdk import aws_ssm as ssm

ssm.StringParameter(
    scope=self,
    id="/example_stack/example_bucket_name",
    string_value=self.example_bucket.bucket_name,
    parameter_name="/example_stack/example_bucket_name",
)
and then in the importing stack, retrieve the name and create an IBucket by using a .from_ method.
# In the importing stack: read the parameter and build an IBucket from the name.
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_ssm as ssm

example_bucket_name = ssm.StringParameter.value_for_string_parameter(
    scope=self,
    parameter_name="/example_stack/example_bucket_name",
)
example_bucket = s3.Bucket.from_bucket_name(
    scope=self,
    id="example_bucket_from_ssm",
    bucket_name=example_bucket_name,
)
You'll have to figure out the right order to deploy your stacks, but otherwise I've found this to be a good strategy to avoid the issues encountered with stack dependencies.
I'm trying to understand the use case of Nested Stacks vs. Constructs specifically in CDK. The AWS docs say the following:
Stacks are the unit of deployment: everything in a stack is deployed together. So when building your application's higher-level logical units from multiple AWS resources, represent each logical unit as a Construct, not as a Stack. Use stacks only to describe how your constructs should be composed and connected for your various deployment scenarios.
This makes sense when evaluating whether to use a Construct or Stack, but what about Nested Stacks? Both Constructs and Nested Stacks solve the problems of:
reusability of component architectures
controlled information sharing between components / mitigating import and export (deadly embrace) issues
and both Constructs and Nested Stacks are deployed together from the root Stack (from what I understand, NestedStacks are rarely deployed alone and are intended to be deployed as part of a group of NestedStacks under one parent Stack)
So what's the benefit of using Nested Stacks over Constructs besides working around the resource limitations of a single Stack (i.e. when should I use one over the other)?
It's instructive how differently the CloudFormation docs and the CDK docs present Nested Stacks. The CloudFormation docs focus on their role in component reuse. The CDK docs don't mention reuse, instead presenting them as a workaround for the per-stack resource limit. Of course, Nested Stacks do both things.
CDK Constructs are more composable and portable than the Nested Stacks that CDK inherited from CloudFormation. The CDK docs recommend Constructs for composition here and here.
Apart from overcoming the per-stack resource limit, there is a backwards-compatibility rationale for using Nested Stacks when including existing CloudFormation templates in CDK apps.
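For illustration, here is the same reusable unit written both ways in Python CDK (names are made up); the Construct version lands in the parent stack's template, while the NestedStack version gets its own template (and its own 500-resource limit) but still deploys with the parent:
from aws_cdk import NestedStack
from aws_cdk import aws_sqs as sqs
from constructs import Construct

# As a Construct: the resources are synthesized into the parent stack's template.
class QueueWithDlq(Construct):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        dlq = sqs.Queue(self, "Dlq")
        self.queue = sqs.Queue(
            self, "Queue",
            dead_letter_queue=sqs.DeadLetterQueue(max_receive_count=3, queue=dlq),
        )

# As a NestedStack: the same resources, but in a separate template deployed with the parent.
class QueueWithDlqStack(NestedStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        dlq = sqs.Queue(self, "Dlq")
        self.queue = sqs.Queue(
            self, "Queue",
            dead_letter_queue=sqs.DeadLetterQueue(max_receive_count=3, queue=dlq),
        )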
I use CDK to deploy a Lambda function (along with an IAM role and a queue) and monitoring resources for the Lambda, the Lambda log group, and the queue. What I have right now is basically two classes: one class creates all the Lambda-related resources and another creates the monitoring resources, and they are all added into one deployment stack.
Recently I deployed this to a new account and realized my stack failed to create because some of the monitoring resources look for the Lambda log group and can't find it, since it hasn't been created yet.
So which is the better option:
have two deployment groups, one for Lambda-related resources and one for monitoring resources
use dependencies to create some ordering in my stack.
Both seem like possible solutions, but which is the better long-term solution?
Assuming you mean a Stack for your two classes, you are better off making them both cdk.NestedStacks and instantiating them in a single common stack. You can then expose constructs as class attributes in one stack and pass them into the other as parameters. Of course, this only works one way - if you have to go both ways, you need to re-evaluate how your stacks are organized.
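A minimal sketch of that pattern in Python CDK (the names and the Lambda/log-group/metric-filter details are made up for illustration):
from aws_cdk import NestedStack, Stack
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_logs as logs
from constructs import Construct

class LambdaResources(NestedStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        self.function = lambda_.Function(
            self, "Worker",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=lambda_.Code.from_inline("def handler(event, context): pass"),
        )
        # Exposed as an attribute so the monitoring stack can reference it directly.
        self.log_group = logs.LogGroup(self, "WorkerLogs")

class MonitoringResources(NestedStack):
    # Takes the constructs it monitors as parameters instead of looking them up by name.
    def __init__(self, scope: Construct, id: str, *, log_group: logs.ILogGroup):
        super().__init__(scope, id)
        logs.MetricFilter(
            self, "Errors",
            log_group=log_group,
            metric_namespace="MyApp",
            metric_name="Errors",
            filter_pattern=logs.FilterPattern.literal("ERROR"),
        )

class CommonStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        lambdas = LambdaResources(self, "Lambdas")
        MonitoringResources(self, "Monitoring", log_group=lambdas.log_group)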
The advantage of doing this is great: exposing constructs as attributes is the best practice, as it gives you direct access to the construct before it creates the CloudFormation data for it. You have complete access to every part of that construct, from various ARNs (like DynamoDB stream ARNs, which are difficult to import) to automatically knowing the layer versions for Lambda layers - among many other things.
In addition, you never run into a stack dependency problem - if they are different top-level stacks and you share constructs between them, you can very easily run into lock situations where attempting to change something in one stack creates a dependency lock and prevents the stack from deploying.
The downside is that they are all part of the same deployment, so there is a potential for something to be updated when you didn't expect it to - though CDK does use the CloudFormation change set system, so it should not update things that have no changes applied to them (but sometimes changes occur because of the way CDK generates tokens and such that you may not be aware of).
If you do not go this route, you are stuck using the various from* methods on CDK constructs to import the existing resources into your stack. This causes some issues, as they can't import everything about a given construct at synth time (layer versions and DynamoDB stream ARNs are two notable ones I mentioned already). Plus, you need to know the name of the resource - and best practice says you shouldn't deliberately name your constructs, so that you can easily spin up ad hoc versions of your app without naming issues.
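For example, assuming a DynamoDB table imported inside a Stack's __init__ (the table name and stream ARN below are placeholders):
from aws_cdk import aws_dynamodb as dynamodb

# Importing by name gives you an ITable, but attributes like the stream ARN are unknown:
users = dynamodb.Table.from_table_name(self, "Users", "users-table")
# users.table_stream_arn is None here; you would have to pass the stream ARN in explicitly:
users = dynamodb.Table.from_table_attributes(
    self, "UsersWithStream",
    table_name="users-table",
    table_stream_arn="arn:aws:dynamodb:...:table/users-table/stream/...",
)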
I am building a Desktop-on-Demand solution using the AWS WorkSpaces product, and I am trying to understand which AWS service best fits my requirements for managing state data for new users.
In a nutshell, the solution will create a new AWS WorkSpace (virtual desktop instance) for a user when multiple conditions are met and checks are satisfied. These tasks would be handled by multiple Lambda functions.
DynamoDB would be used as a central point for storing configuration details like user data, user group data, and deployed virtual desktop data.
The logic for desktop creation would be implemented using Step Functions, like below:
An event hook comes from the Identity Management system, firing a Lambda function that checks if the user's desktop already exists in the DynamoDB table
If it does not exist, another Lambda creates an AWS AD Connector
Once this is done, another Lambda builds a custom image for the new desktop if needed
Another Lambda pulls the latest data from the Identity Management system and updates the DynamoDB table for users and groups
Other Lambda functions may be fired up as dependencies
To ensure we have a transactional mechanism, we only deploy a new desktop when all conditions are met. I can think of a few ways of implementing this check:
Use a DynamoDB table for keeping state data. When all attributes in the item are in the expected state, the desktop can be created. If any Lambda fails or produces data that does not fit, don't create the desktop.
Just use Step Functions and design its logic flow so that all conditions must be satisfied before the desktop is created.
Someone suggested using an SQS queue, but I don't see how it can be used for my purpose.
What is the best way to keep this data?
Step Functions is the method I would use for this. The DynamoDB solution would also work, but this seems like exactly the sort of thing Step Functions was designed to handle.
I agree that SQS would not be a correct solution.
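For illustration, the flow described in the question maps fairly directly onto a state machine; a minimal Python CDK sketch (the check_fn, create_connector_fn, build_image_fn, and deploy_desktop_fn variables are assumed references to the Lambdas described, and the $.exists output shape is an assumption):
from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks

# Each described step becomes a Lambda task in the state machine.
check = tasks.LambdaInvoke(self, "CheckDesktopExists",
                           lambda_function=check_fn, output_path="$.Payload")
connector = tasks.LambdaInvoke(self, "CreateAdConnector",
                               lambda_function=create_connector_fn, output_path="$.Payload")
image = tasks.LambdaInvoke(self, "BuildCustomImage",
                           lambda_function=build_image_fn, output_path="$.Payload")
deploy = tasks.LambdaInvoke(self, "DeployDesktop",
                            lambda_function=deploy_desktop_fn, output_path="$.Payload")

definition = check.next(
    sfn.Choice(self, "DesktopAlreadyExists?")
    .when(sfn.Condition.boolean_equals("$.exists", True), sfn.Succeed(self, "NothingToDo"))
    .otherwise(connector.next(image).next(deploy))
)

sfn.StateMachine(self, "DesktopProvisioning", definition=definition)
Because a failed task stops the execution, the DeployDesktop step only runs when every prior check succeeds, which gives you the all-or-nothing behaviour you described without keeping intermediate state in DynamoDB.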
I have written a WebJob which does multiple tasks that run on different schedules, like once a day, once every hour, and so on, and I achieved this by using a Timer delegate. Now I am thinking of changing that approach and creating a Scheduler job for each scenario. I was able to find some information regarding schedules from googling, but was never able to join it into a coherent flow.
I learned that we can create a job collection and each collection can have 'n' jobs based on the pricing tier we are using. After creating a job, how can we bind the program logic that the job must run to the corresponding job?
Also, how can I link jobs to a job collection?
Thanks
A typical workflow is that you would write a message to an Azure queue, then you would have an Azure Cloud Service that reads from it and does the processing.
To tie specific jobs to specific program logic, you can either embed type information in the message and have something that generically picks messages up and turns them into specific operations/classes, or you could have behavior-specific queues: each job would write to its own queue, and a different Cloud Service would read from each queue.
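The idea is language-agnostic; here is a rough Python sketch of the first approach using the azure-storage-queue package (handler names, the queue name, and the message shape are made up):
import json
from azure.storage.queue import QueueClient

def run_daily_report(payload):
    print("daily report for", payload)

def run_hourly_sync(payload):
    print("hourly sync with", payload)

queue = QueueClient.from_connection_string(conn_str="<connection string>", queue_name="jobs")

# Producer: embed the job type in the message so a single reader can dispatch it.
queue.send_message(json.dumps({"type": "daily-report", "payload": {"date": "2024-01-01"}}))

# Consumer: pick messages up generically and route them to the right handler.
handlers = {"daily-report": run_daily_report, "hourly-sync": run_hourly_sync}
for msg in queue.receive_messages():
    body = json.loads(msg.content)
    handlers[body["type"]](body["payload"])
    queue.delete_message(msg)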
I think this will solve my problem, either using API calls or queue processing.
Solution
If I understand your question, you have a WebJob that has multiple methods, each of which needs to be called on a different schedule. Instead of going through the hassle of setting up a Scheduler and having yet another resource that you have to manage, mark each method you need called with a TimerTriggerAttribute.