Zabbix Send Message - action

I have a trigger that fires after 3 failed ping checks, with a check interval of 3 minutes.
I need to send a message like:
Host unavailable since [time of the first failed check];
Trigger fired at [time the trigger fired]
Which macros do I need to use?

Following up on the comment about scripts, I'll offer the following.
Configuration/Actions allows you to specify the content of a message. That message can be thought of as simply a set of parameters being passed to something. The easy default is that it sends email, but the same parameters can be passed to a script.
Inside the Operations section, you specify whom to send to (again, think of this as a parameter) and which media type to use. The user/groups become parameters as well.
Under Administration, Media types, you can define a media type of "Script". This invokes an external script you write and passes it parameters; by default the first three are the send-to address, the subject, and the message content. In later Zabbix versions you can define additional parameters there as well (I do not recall if there is a limit). Before that was possible, I simply passed any data I wanted in a predictable, delimited format in the message body, then parsed it out inside my script.
Inside the script itself, you pick up the strings passed in and do whatever you want. So if one parameter (the subject, an explicit 4th-or-later parameter, or a value buried in a predictable place inside the message body) is a time, you can operate on that time in the language of your choice: replace it, expand on it, etc. Then, once you have what you want, you send the message from within the script as desired.
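As an illustration of the parse-it-from-the-body approach, here is a minimal sketch of such a handler (Java is an arbitrary choice; alert scripts are more often shell or Python, and the FIRSTFAIL= line is a hypothetical convention you would define yourself in the action's message template, not a Zabbix built-in):

```java
// Minimal sketch of a "Script" media type handler. Zabbix invokes the
// script with send-to, subject, and message body as its first three
// arguments; the FIRSTFAIL= line is a hypothetical convention defined
// in the action's message template.
public class AlertHandler {

    public static void main(String[] args) {
        String sendTo  = args[0];
        String subject = args[1];
        String body    = args[2];

        // Dig the delimited value out of the message body.
        String firstFail = null;
        for (String line : body.split("\n")) {
            if (line.startsWith("FIRSTFAIL=")) {
                firstFail = line.substring("FIRSTFAIL=".length());
            }
        }

        // Operate on the extracted time however you like, then send the
        // final message; here we just print it for illustration.
        System.out.printf("to=%s | %s | host down since %s%n",
                sendTo, subject, firstFail);
    }
}
```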
Different actions can send using different media types, so you could use a script only for certain kinds of triggers, based on the conditions written in the action (e.g. a specific trigger name). That way you can keep the default behavior for some triggers and handle others with custom scripts. The key is to use a predictable format in Configuration/Actions, and to depend on that format in the Administration/Media types parameters and inside the script they call. Don't forget to make the script accessible to the Zabbix service account and to place it in the location specified in the Zabbix config file. I find it useful to stick with an email format: then I can "test" my actions by simply emailing them, take the resulting email, and use it to call my scripts outside of Zabbix to ensure they work.
The ability to extend the default alerts with scripts (which can in turn call back into the Zabbix server to pull additional data at execution time) makes alerting a bit arcane, but incredibly powerful. In general you can dynamically include almost anything in the alerts, including graphs, by reacting to the script parameters and pulling together the data to email.

Related

passing custom messages downstream in spring cloud function's MessageRoutingCallback

Hey, I'm using MessageRoutingCallback to route to a function in Spring Cloud Function. It needs a FunctionRoutingResult for routing. I also wanted to edit the message. The Spring Cloud docs say:
"Additionally, the FunctionRoutingResult provides another constructor allowing you to provide an instance of Message as second argument to be used down stream".
But the problem is that the constructor taking a Message in FunctionRoutingResult is internal and cannot be accessed from outside.
Am I doing something wrong here? Any insight would be helpful.
A couple of things.
As the documentation explains, it is meant to assist with routing decisions, for example when the routing decision must be made based on the payload, which may need to be temporarily converted.
The reality is that it is very bad practice to let the framework make such decisions based on the payload, since the payload is privileged information. It is like a letter in an envelope: the mail carrier does not read the letter to make routing decisions; those all come from the envelope itself. So I will actually update the documentation to remove that paragraph.
And it is definitely not there to modify the message; that would be improper use of MessageRoutingCallback. To modify the message you can use function composition. For example, in your MessageRoutingCallback you check some header on the incoming message and determine that the target function should be foo, but then actually return modifier|foo as the function definition, where modifier is a function that transforms the message before foo receives it.
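Here is a minimal sketch of that composition approach, assuming the 3.x-style signature where routingResult returns a FunctionRoutingResult (matching the docs quoted in the question; the exact shape of this API varies between releases). The route-hint header and the modifier/foo functions are hypothetical names for this example:

```java
import java.util.function.Function;

import org.springframework.cloud.function.context.MessageRoutingCallback;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class RoutingConfig {

    // Route on a header (the "envelope"), never on the payload itself.
    @Bean
    public MessageRoutingCallback routingCallback() {
        return new MessageRoutingCallback() {
            @Override
            public FunctionRoutingResult routingResult(Message<?> message) {
                // "route-hint" is a hypothetical header name.
                Object hint = message.getHeaders().get("route-hint");
                if ("enrich".equals(hint)) {
                    // Compose: 'modifier' rewrites the message, then 'foo' consumes it.
                    return new FunctionRoutingResult("modifier|foo");
                }
                return new FunctionRoutingResult("foo");
            }
        };
    }

    // The message-editing step lives in a regular function, not in the callback.
    @Bean
    public Function<Message<String>, Message<String>> modifier() {
        return msg -> MessageBuilder
                .withPayload(msg.getPayload().toUpperCase())
                .copyHeaders(msg.getHeaders())
                .build();
    }

    @Bean
    public Function<String, String> foo() {
        return payload -> "processed: " + payload;
    }
}
```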

WSO2 ESB: Store state between sequence invocations

I was wondering about the proper way to store state between sequence invocations in WSO2 ESB. In other words, if I have a scheduled task that invokes sequence S, at the end of iteration 0 I want to store some String variable (let's call it ID), and then I want to read this ID at the start (or in the middle) of iteration 1, and so on.
To be more precise, I want to get a list of new SMS messages from an existing service, Twilio to be exact. However, Twilio only lets me fetch messages for selected days; there is no way to ask for only new messages (newer than a certain message ID, or since I last checked). Therefore, I'd like to create a scheduled task that queries Twilio and passes only new messages via a REST call to my service. To do this, my sequence needs to query Twilio, go through the returned list, and discard messages that were already reported in the previous invocation. That requires storing some state between task/sequence invocations: at the end of the sequence I need to store the ID of the newest message in the current batch, and that ID can then be used in the subsequent invocation to determine which messages were already reported.
I could use the DBLookup and DB Report mediators, but that seems like overkill (using a database to store a single string) and not very performance-friendly. On the other hand, as far as I can see, Class mediators are instantiated as singletons, so I could create a custom Class mediator that manages this state and filters the list of messages to be sent to my service. I am quite sure this will work, but I was wondering whether it is the way to go, or whether there is a more elegant solution that I missed.
We can think of 3 options here.
Using DBLookup/Report as you've suggested
Using the Carbon registry to store the values (this again uses DBs in the back end)
Using a Custom mediator to hold the state and read/write it from/to properties
Out of these three, the third will obviously deliver the best performance, since everything stays in memory. It's also quite simple to implement; some time back I did something similar and wrote a blog post here.
But on the other hand, the first two options can keep the state even when the server crashes, if it's a concern for your use case.
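As a rough sketch of the third option, a custom Class mediator could hold the ID in memory along these lines (the property names lastMessageId and newestMessageId are hypothetical conventions for this example; extending AbstractMediator is the standard way to write Class mediators, but verify the API against your ESB version):

```java
import java.util.concurrent.atomic.AtomicReference;

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// A Class mediator is instantiated once, so instance state survives
// across sequence invocations for the lifetime of the server (but is
// lost on restart, as noted above).
public class LastMessageIdMediator extends AbstractMediator {

    // Thread-safe holder for the ID of the newest message seen so far.
    private final AtomicReference<String> lastId = new AtomicReference<>("");

    @Override
    public boolean mediate(MessageContext synCtx) {
        // Expose the previously stored ID to the sequence as a property,
        // so later mediators can filter out already-reported messages.
        synCtx.setProperty("lastMessageId", lastId.get());

        // Remember the newest ID, if the sequence has computed one
        // (e.g. set as a property from the Twilio response).
        Object newest = synCtx.getProperty("newestMessageId");
        if (newest != null) {
            lastId.set(newest.toString());
        }
        return true; // continue with the rest of the sequence
    }
}
```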
Since ESB 4.9.0 you can also persist and read properties from the registry using the Property mediator.
https://docs.wso2.com/display/ESB490/Property+Mediator

REST API - Update of single resource changes multiple others

I'm looking for a way to deal with the following problem:
Imagine you modify a resource, and that modification subsequently causes updates to other resources.
E.g. you issue a PUT to, say, /api/orders/1234, which by definition changes the state of all the given user's other Orders. There may be UI clients that display the table of Orders, and they should know that not just a single item in the table was updated, but potentially others as well.
Now, is there any standard way to inform clients about such a situation?
So far I can only think of sending back the 205 Reset Content HTTP status code to inform the client that it should refresh its state, as more than a single thing was changed.
There are multiple solutions.
You can define specific resources as non-cacheable, so the client does not cache them at all (Cache-Control: no-store).
You can try giving a max-age of 0, so the client always has to re-validate those resources. In this case you might have to implement ETags and conditional GETs, but it will be easier on the server than option 1.
Some push method like WebSockets.
If you really want to "notify" potentially multiple clients of a change, then it sounds like you would need option 3.
However, correctly configured caching is normally good enough. For example, you could mark not-yet-executed orders as non-cacheable (max-age=0), but as soon as an order is executed, you might mark it to be cached indefinitely, since it cannot change anymore.
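As a sketch of the second option, assuming a plain servlet (the Order type and loadOrder lookup are hypothetical stubs), ETag-based re-validation might look like this:

```java
import java.io.IOException;

import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class OrderServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        Order order = loadOrder(req.getPathInfo()); // hypothetical lookup
        String etag = "\"" + order.version() + "\"";
        resp.setHeader("ETag", etag);

        if (!order.executed()) {
            // Still mutable: force clients to re-validate on every use.
            resp.setHeader("Cache-Control", "max-age=0, must-revalidate");
        } else {
            // Executed orders never change: cache them for a long time.
            resp.setHeader("Cache-Control", "max-age=31536000, immutable");
        }

        // Conditional GET: an unchanged resource costs only a 304, no body.
        if (etag.equals(req.getHeader("If-None-Match"))) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }
        resp.setContentType("application/json");
        resp.getWriter().write("{\"version\":" + order.version() + "}");
    }

    // Hypothetical domain type and lookup, stubbed for the sketch.
    record Order(long version, boolean executed) {}

    private Order loadOrder(String path) {
        return new Order(1, false);
    }
}
```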

TMG SF_NOTIFY_POLICY_CHECK_COMPLETED Event

According to http://msdn.microsoft.com/en-us/library/ff823993%28v=VS.85%29.aspx, during this event the web filter can request the GUID of the matching rule. I am assuming that is done by calling GetServerVariable with a type of SELECTED_RULE_GUID, since I could find no other readily identifiable means of doing so.
My problem comes from the fact that I want to see whether the rule is allowing or blocking the request. If it's being blocked, my filter doesn't have to take any action, but if it's being allowed, I need to do some work. SF_NOTIFY_POLICY_CHECK_COMPLETED seems to be the best event to watch, since it occurs late enough that authentication and the various ms_auth traffic have been handled, but just before the request either gets routed or fetched from cache.
I had thought that perhaps I needed to use COM and the IFPC interfaces (following along with the example code for registering web filters with TMG) to get details on the rule. However, going down via FPC -> FPCArray -> FPCArrayPolicy -> FPCPolicyRules, the only element-returning function takes either an index or a name, which is problematic given that I only have a GUID.
The FPCPolicyRule object (singular) doesn't seem to have any GUID-related field either, which rules out simply iterating over the collection for it.
So my question boils down to: from the SF_NOTIFY_POLICY_CHECK_COMPLETED event, how would a web filter determine whether the request has been allowed or denied?
After more investigation and testing: the GUID is accessible via the PersistentName of the FPCPolicyRule object. Since the FPCPolicyRules->Item member only works with either a Name or an Index, I had to iterate through its items, comparing each PersistentName against the GUID.
Apologies if this was obvious; it took me a good day to work out :)

Best way to decide on XML or HTML response?

I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept type field in the request?
(2) An additional bit of URL? eg:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML, for that matter). Some machines, like the googlebot, should still receive the HTML response, though.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter, why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
My preference is to make it a first-class part of the URI. This is debatable, since there are -- in a sense -- multiple URIs for the same resource. And is "format" really part of the URI?
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header, as you suggest. This is probably a very reliable way to distinguish between the two scenarios. You can be quite sure that user agents (including search engine spiders) send the Accept header properly.
As for the machine agents that are going to get XML: are they under your control? In that case you can be doubly sure that Accept will work. And even if they do not set the header properly, you can serve XML as the default, since real user agents DO set the header properly.
I would try to use the Accept header for this, because that is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that they represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into another, which needs XML. At that point a smart user could probably change the URL appropriately, but this is just a source of error that you don't need.
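A minimal sketch of such Accept-based negotiation in a plain servlet (application/xml stands in for the domain-specific media type; a production version would parse q-values rather than substring-match):

```java
import java.io.IOException;

import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class FooServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String accept = req.getHeader("Accept");

        // Tell caches that the representation depends on the Accept header.
        resp.setHeader("Vary", "Accept");

        // Browsers and crawlers ask for text/html (often alongside
        // application/xml), so HTML stays the default; only clients that
        // ask for XML without HTML get the XML representation.
        if (accept != null && accept.contains("application/xml")
                && !accept.contains("text/html")) {
            resp.setContentType("application/xml");
            resp.getWriter().write("<foo id=\"foo001\"/>");
        } else {
            resp.setContentType("text/html");
            resp.getWriter().write("<html><body>foo001</body></html>");
        }
    }
}
```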
I would say adding a query-string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request. But since that is easily set by any application to mimic a browser, you're not guaranteed it will work.