I've got an order microservice written as a Go AWS Lambda function.
The main function, named order-service, is bound to API Gateway. It receives parameters such as user_id (int) and product_ids (array of int), creates an order with its artifacts, and returns the serialized order with the order_id and total price.
This function invokes a function named order-item, which creates an order item, in parallel (one invocation per product). These functions in turn invoke the product and user functions to retrieve information about the user and the products by their ids.
Then the order function invokes another Lambda called fee-function, which takes just the total price and the user id and returns the fee. Of course, it also calls other functions such as the user function, and so on. Basically, this is a simplified example of how the service works in general: any function may call others, such as user-discount, state-taxes, etc.
The questions are:
Is it good that the order function invokes the fee function through AWS, when it could simply import the fee handler package and run it in-process? (However, the fee function may be called from outside, so it must be deployed as a separate function as well.)
Is it good that each function receives just the user id and loads the user by invoking the user function? Would it be better to preload the user and pass it through everywhere? Something else?
Is it good that one function calls other functions, which call others, and so on? Is there a better approach in my situation: SNS, Step Functions, dependency injection / AWS layers?
The main reason I'm asking is that the service has to withstand thousands of requests per minute without costing a lot.
Thanks for helping. I appreciate this.
This is exactly what Step Functions was created for. You can invoke a Step Functions state machine from API Gateway, as you would a Lambda.
With Step Functions you can:
Invoke the state machine with parameters
Orchestrate the order in which Lambda functions are invoked
Use the execution state to store inputs and outputs from each Lambda
Have decision points to take different paths based on the output of a previous function
See the AWS Step Functions Getting Started Guide for a good introduction to the service.
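To make that concrete, here is a minimal sketch of the idea, assuming the AWS SDK for Java v2 (the same calls exist in the Go SDK your service uses); the ARNs, names, and IAM role are hypothetical. The state machine fans out order-item once per product with a Map state and then calls fee-function, replacing the direct Lambda-to-Lambda invocations:

```java
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.CreateStateMachineRequest;
import software.amazon.awssdk.services.sfn.model.StartExecutionRequest;

public class OrderWorkflowSketch {

    // Hypothetical ASL definition: run order-item per product in parallel, then fee-function.
    private static final String DEFINITION = """
        {
          "StartAt": "CreateOrderItems",
          "States": {
            "CreateOrderItems": {
              "Type": "Map",
              "ItemsPath": "$.product_ids",
              "ResultPath": "$.items",
              "Iterator": {
                "StartAt": "OrderItem",
                "States": {
                  "OrderItem": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:order-item",
                    "End": true
                  }
                }
              },
              "Next": "CalculateFee"
            },
            "CalculateFee": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fee-function",
              "End": true
            }
          }
        }
        """;

    public static void main(String[] args) {
        try (SfnClient sfn = SfnClient.create()) {
            // One-time setup: register the state machine (role ARN is made up).
            String machineArn = sfn.createStateMachine(CreateStateMachineRequest.builder()
                            .name("order-workflow")
                            .roleArn("arn:aws:iam::123456789012:role/order-workflow-role")
                            .definition(DEFINITION)
                            .build())
                    .stateMachineArn();

            // Per order (normally triggered by API Gateway): start an execution with the order input.
            sfn.startExecution(StartExecutionRequest.builder()
                    .stateMachineArn(machineArn)
                    .input("{\"user_id\": 42, \"product_ids\": [1, 2, 3]}")
                    .build());
        }
    }
}
```

With this arrangement the order function no longer has to invoke fee-function itself: the state machine passes each step's output to the next, and a Choice state can branch on, for example, the user's discount.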
Hey, I'm using MessageRoutingCallback to route to a function in Spring Cloud Function. It needs a FunctionRoutingResult for routing. I also want to edit the message. The Spring Cloud docs say:
"Additionally, the FunctionRoutingResult provides another constructor allowing you to provide an instance of Message as second argument to be used down stream".
But the problem is that the constructor of FunctionRoutingResult that takes a Message is package-private and cannot be accessed from outside.
Am I doing something wrong here? Any insight would be helpful.
A couple of things.
As the documentation explains, it is meant to assist with routing decisions, for example when the routing decision has to be made based on the payload, which may need to be temporarily converted.
The reality is that it is very bad practice to let the framework make such decisions based on the payload, since the payload is privileged information. It is similar to a letter in an envelope: the mailman does not read the actual letter to make routing decisions; those all come from the envelope itself. So I will actually update the documentation to remove that paragraph.
And it is definitely not there to modify the message; that would be improper use of MessageRoutingCallback. To modify the message you can use function composition. For example, in the MessageRoutingCallback you check some header in the incoming message, determine that the function name should be foo, but then actually return modifier|foo as the function definition, as sketched below.
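Here is a rough sketch of that idea, assuming Spring Cloud Function 3.x (where routingResult(Message) returns a FunctionRoutingResult); the header name and the modifier and foo functions are made up for illustration:

```java
import java.util.function.Function;

import org.springframework.cloud.function.context.MessageRoutingCallback;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class RoutingConfiguration {

    // Route on a header (the "envelope"), not on the payload. If the message also has
    // to be changed, return a composed definition instead of mutating it here.
    @Bean
    public MessageRoutingCallback customRouter() {
        return new MessageRoutingCallback() {
            @Override
            public FunctionRoutingResult routingResult(Message<?> message) {
                String type = (String) message.getHeaders().get("type"); // hypothetical header
                if ("order".equals(type)) {
                    // "modifier" edits the message first, then "foo" handles it.
                    return new FunctionRoutingResult("modifier|foo");
                }
                return new FunctionRoutingResult("foo");
            }
        };
    }

    @Bean
    public Function<String, String> modifier() {
        // Whatever editing you wanted to do in the callback goes here instead.
        return payload -> payload.trim().toUpperCase();
    }

    @Bean
    public Function<String, String> foo() {
        return payload -> "handled: " + payload;
    }
}
```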
In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They can be useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within the account and region, and can be used by any future stack you deploy.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you deploy.
For example, let's say you have two templates called instance.yaml and eip.yaml. The instance.yaml outputs its instance-id (no export), while eip.yaml takes instance id as an input parameter.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (e.g. instance-id); this is usually done programmatically, not manually (see the sketch after this list).
Deploy eip.yaml and pass instance-id as its input parameter.
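For instance, here is a minimal sketch of that chaining step, assuming the AWS SDK for Java v2 and that instance.yaml declares an output named InstanceId while eip.yaml takes a parameter named InstanceId (the output, parameter, and stack names are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
import software.amazon.awssdk.services.cloudformation.model.CreateStackRequest;
import software.amazon.awssdk.services.cloudformation.model.DescribeStacksRequest;
import software.amazon.awssdk.services.cloudformation.model.Output;
import software.amazon.awssdk.services.cloudformation.model.Parameter;

public class ChainStacks {

    public static void main(String[] args) throws Exception {
        try (CloudFormationClient cfn = CloudFormationClient.create()) {
            // Read the InstanceId output of the already-deployed instance stack.
            String instanceId = cfn.describeStacks(DescribeStacksRequest.builder()
                            .stackName("instance").build())
                    .stacks().get(0).outputs().stream()
                    .filter(o -> "InstanceId".equals(o.outputKey()))
                    .map(Output::outputValue)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("InstanceId output not found"));

            // Deploy eip.yaml, passing the instance id as an input parameter.
            cfn.createStack(CreateStackRequest.builder()
                    .stackName("eip")
                    .templateBody(Files.readString(Paths.get("eip.yaml")))
                    .parameters(Parameter.builder()
                            .parameterKey("InstanceId")
                            .parameterValue(instanceId)
                            .build())
                    .build());
        }
    }
}
```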
Nesting
When you nest stacks, you have a parent template and a child template. The child stack is created from inside the parent stack. In this case the child stack produces some outputs (not exports) for the parent stack to use.
For example, let's again use instance.yaml and eip.yaml, but this time eip.yaml will be the parent and instance.yaml the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance-id (not exported).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using an AWS::CloudFormation::Stack resource whose TemplateURL is the S3 URL from step 1.
This way eip.yaml will be able to access the instance-id from the outputs of the nested stack using Fn::GetAtt.
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's again use instance.yaml and eip.yaml. instance.yaml is going to export its output (instance-id). To use the instance-id, eip.yaml will have to use Fn::ImportValue in its template, without the need for any input parameters or nested stacks.
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
Although cross-referencing seems very useful, it has one major issue: it is very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
When to use which?
Use cross-references (exported values) when you have some global resources that are going to be shared among many stacks in a given region and account. Also they should not change often as they are difficult to modify. Common examples are: a global bucket for centralized logging location, a VPC.
Use nested stacks (not exported outputs) when you have common components that you deploy often, but that can be a bit different each time. Examples are: an ALB, a bastion host instance, a VPC interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely-coupled templates, where you can mix and match templates based on new requirements.
Short answer, from here: use exports between stacks, and use outputs with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.
I have a trigger that fires after 3 failed ping checks, with a check interval of 3 minutes.
I need to send a message like:
Host unavailable since [time of the first failed check];
Trigger [time the trigger fired]
Which macros do I need to use?
With the comment about scripts I'll offer the following.
Configuration/Actions allow you to specify the content of a message. That message can be thought of as simply passing parameters to something. The easy default is that it sends email, but the same parameters can be passed to a script.
Inside the Operations section, you specify whom to send to (again, think of this as a parameter) and which media type. The users/groups become parameters as well.
Under Administration, Media types, you can define a media type of "Script". This invokes an external script you write and passes it parameters; by default the first three are the send-to address, the subject, and the message content. In later Zabbix versions you can include other parameters there as well (I do not recall if there is a limit). Before that was possible, I simply passed any data I wanted in a predictable, delimited format in the message body, then parsed it out inside my script.
Inside the script itself, you pick up the strings passed in, and do whatever you want. So if one parameter (subject, explicitly a 4th+ parameter, or buried in a predictable place inside the body of the message) is a time, you can then operate on that time in the language of your choice, replace it, expound upon it, etc. Then when you have what you want, you send the message from within the script as desired.
Different actions can send using different media types, so you could use a script only for certain types of triggers, based on the conditions written in the action (e.g. a specific trigger name). That way you can keep the default behavior for some triggers and custom-write others as desired. The key is to use a predictable format in Configuration/Actions, and to depend on that format in the Administration/Media types parameters and inside the script they call. Don't forget to make the script accessible to the Zabbix service account and to place it in the location specified in the Zabbix config file. I find it useful to stick with an email-like format; then I can test my actions by simply emailing them, take the resulting email, use it to call my scripts outside of Zabbix, and make sure they work. A sketch of such a script follows.
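Alert scripts are usually written in shell or Python; to keep all examples here in one language, here is a rough sketch of the same idea in Java (assume a small wrapper script in the alert-scripts directory launches it). Zabbix passes the send-to value, subject, and message body as the first three arguments; any extra fields, such as an event time taken from a macro like {EVENT.TIME}, sit in the body in whatever delimited format your action defines. The field names and format below are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Invoked by Zabbix as: <script> <send-to> <subject> <message-body>
// where the action's message body was written in a predictable "key=value" per-line format.
public class ZabbixAlertScript {

    public static void main(String[] args) {
        if (args.length < 3) {
            System.err.println("usage: <send-to> <subject> <message-body>");
            return;
        }
        String sendTo = args[0];
        String subject = args[1];
        String body = args[2];

        // Parse the delimited fields the action deliberately put into the body,
        // e.g. a line like "eventtime={EVENT.TIME}" expanded by Zabbix (names are illustrative).
        Map<String, String> fields = new HashMap<>();
        for (String line : body.split("\n")) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                fields.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }

        // Operate on the extracted values (reformat times, look up extra data, etc.),
        // then send the final message however you like (email, chat webhook, ...).
        String eventTime = fields.getOrDefault("eventtime", "unknown");
        System.out.printf("To: %s%nSubject: %s%nTrigger fired at %s%n",
                sendTo, subject, eventTime);
    }
}
```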
The ability to extend the default alerts by using scripts (and in turn a callable interface to zabbix server itself that can pull additional data from zabbix at script execution) makes alerting a bit arcane, but incredibly powerful. In general you can dynamically include almost anything, including graphs, in the alerts by reacting to the script parameters, and pulling together data to email.
I was wondering about the proper way to store state between sequence invocations in WSO2 ESB. In other words, if I have a scheduled task that invokes sequence S, then at the end of iteration 0 I want to store some string variable (let's call it ID), and then read this ID at the start (or in the middle) of iteration 1, and so on.
To be more precise, I want to get a list of new SMS messages from an existing service, Twilio to be exact. However, Twilio only lets me get messages for selected days, i.e. there's no way for me to ask for only new messages (since I last checked / newer than a certain message ID). Therefore, I'd like to create a scheduled task that queries Twilio and passes only the new messages via a REST call to my service. To do this, my sequence needs to query Twilio, go through the returned list of messages, and discard the messages that were already reported in the previous invocation. That means I need to store some state between task/sequence invocations: at the end of the sequence I need to store the ID of the newest message in the current batch. This ID can then be used in the subsequent invocation to determine which messages were already reported.
I could use the DBLookup and DB Report mediators, but that seems like overkill (using a database to store a single string) and not very performance friendly. On the other hand, as far as I can see, Class mediators are instantiated as singletons, so I could create a custom Class mediator that manages this state and filters the list of messages to be sent to my service. I'm fairly sure this would work, but I was wondering whether this is the way to go, or whether there's a more elegant solution I've missed.
We can think of 3 options here.
Using DBLookup/Report as you've suggested
Using the Carbon registry to store the values (this again uses DBs in the back end)
Using a Custom mediator to hold the state and read/write it from/to properties
Out of these three, the third will obviously deliver the best performance, since everything stays in memory. It's also quite simple to implement; some time back I did something similar and wrote a blog post here. A minimal sketch is included below.
On the other hand, the first two options can keep the state even when the server crashes, if that's a concern for your use case.
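As for the third option, here is a minimal sketch of such a Class mediator, assuming the Synapse AbstractMediator API that WSO2 ESB Class mediators extend; the property names are illustrative. Because the ESB keeps a single instance of the mediator, an instance field survives across sequence invocations:

```java
import java.util.concurrent.atomic.AtomicReference;

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class LastMessageIdMediator extends AbstractMediator {

    // The ESB instantiates this mediator once, so the field survives between
    // sequence invocations (but not server restarts).
    private final AtomicReference<String> lastSeenId = new AtomicReference<>("");

    @Override
    public boolean mediate(MessageContext context) {
        // Expose the ID stored during the previous invocation to the sequence...
        context.setProperty("lastSeenMessageId", lastSeenId.get());

        // ...and remember the newest ID computed earlier in this invocation,
        // e.g. set with a Property mediator before this Class mediator runs.
        Object newest = context.getProperty("newestMessageId");
        if (newest != null) {
            lastSeenId.set(newest.toString());
        }
        return true; // continue the sequence
    }
}
```

In the sequence you would then drop every message whose ID is not newer than lastSeenMessageId before making the REST call to your service.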
Since ESB 4.9.0 you can persist and read properties from the registry using the Property mediator.
https://docs.wso2.com/display/ESB490/Property+Mediator
Is there any way to pass parameters or share data with a scheduled task? I understand that you can pass serializable arguments to a Quartz Job, but this seems not to be available in cfschedule. What are the options to achieve this?
The easiest way to do that is just to have a .cfm file that is called by cfschedule and that itself calls the CFC method, passing the desired arguments.
If you want a more flexible solution, I have a Scheduler.cfc that allows you to have a method called at whatever frequency you want, and you can even pass arguments for the method call.
http://www.bryantwebconsulting.com/blog/index.cfm/2009/2/26/Schedulercfc-10
It can be gotten here.
https://github.com/sebtools/com.sebtools/
The important thing with it is that you have to have Scheduler instantiated into the Application scope, along with a .cfm file, called by cfschedule, that tells the scheduler to run its tasks.
If you just have one method with arguments that needs to be called frequently, then Scheduler.cfc is overkill compared to the simple solution, but if this is a general problem that you need to solve more often, it can pay off nicely.
You could pass them on the query string of the URL attribute.
example.com/index.cfm?param1=value1&param2=value2
If your data is complex, you can always serialize it to JSON beforehand and use deserializeJSON in the receiving task.