How to get the list of arguments to a handler in delayed_job (Rails) - ruby-on-rails-4

I have a list of all the scheduled jobs, which I can get using the command
Delayed::Job.all
Every job has a handler field (a string) which contains '-'-separated arguments. I want to find one of the arguments in this string. One way is obviously to split the string and extract the value, but this method will fail if there is ever any change in the list of arguments passed.
Below given is the handler string of one of my job objects:
"--- !ruby/object:ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper\njob_data:\n job_class: ActionMailer::DeliveryJob\n job_id: 7ce42882-de24-439a-a52a-5681453f4213\n queue_name: mailers\n arguments:\n - EventNotifications\n - reminder_webinar_event_registration\n - deliver_now\n - mail#gmail.com\n - yesha\n - 89\n locale: :en\n"
I want to know if there is any way I can send extra arguments to the job object while saving it, which can be used later instead of searching the handler string.
Or, failing that, can I get a list of the handler's arguments rather than parsing the string myself?
Kindly help!

There is a method payload_object on Delayed::Job instances which returns the deserialized handler:
job = Delayed::Job.first
handler = job.payload_object
You can then use the handler as needed, e.g. handler.method

To access the job data:
data = job.payload_object.job_data
To then return the actual job class that was queued, you deserialize the job data:
obj = ActiveJob::Base.deserialize(data)
If your job is a mailer and you want to access the parameters passed to it, this is where things get a bit hacky, and I'm unsure if there's a better way. The following returns all of the data for the mailer as an array containing the mailer class name, method name, and arguments.
mailer_args = obj.instance_variable_get :@serialized_arguments
Finally, you can deserialize all of the mailer arguments with the following, which will contain the same data as mailer_args, but with any ActiveRecord objects (serialized in the gid://... form) deserialized into the actual instances passed to the mailer.
ActiveJob::Arguments.deserialize(mailer_args)

Related

Status Filter for ListExecutions

Is there a way to provide multiple values to the statusFilter parameter when calling ListExecutions for Step Function executions?
I need to get all the executions that are not RUNNING.
client = boto3.client('stepfunctions')
response = client.list_executions(
    stateMachineArn=STEP_FUNCTION_STATE_MACHINE_ARN,
    # maxResults=3,
    statusFilter="SUCCEEDED|FAILED"
)
When I do it like this, I get an error that only members of the ENUM can be passed.
No there isn't; you can either query once per status or just filter yourself after returning all the data. But keep in mind that Step Functions list calls have a pagination token that you will also need to iterate through.
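As a rough sketch (assuming boto3 and the STEP_FUNCTION_STATE_MACHINE_ARN from the question), you can walk every page with a paginator and filter client-side:

import boto3

client = boto3.client('stepfunctions')
paginator = client.get_paginator('list_executions')

# Collect every execution whose status is not RUNNING; statusFilter
# itself only accepts a single enum value per call, so filter locally.
not_running = []
for page in paginator.paginate(stateMachineArn=STEP_FUNCTION_STATE_MACHINE_ARN):
    not_running.extend(
        e for e in page['executions'] if e['status'] != 'RUNNING'
    )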

Override field in the input before passing to the next state in AWS Step Function

Say I have 3 states, A -> B -> C. Let's assume the input to A includes a field called names, which is of type List, and each element contains two fields, firstName and lastName. State B will process the input to A and return a response called newLastName. If I want to override every element in names such that names[i].lastName = newLastName before passing this input to state C, is there a built-in syntax to achieve that? Thanks.
You control the event passed to the next task in a Step Function with three definition attributes: ResultPath and OutputPath on leaving one task, and InputPath on entering the next one.
You first have to understand how the event for the next task is crafted by the state machine, since each of the 3 above parameters changes it.
You have to at least have ResultPath. This is the key in the event that the output of your lambda will be placed under, so ResultPath="$.my_path" would result in a JSON object that has a top-level key of my_path whose value is whatever is outputted from the lambda.
If this is the only attribute, it is tacked onto whatever the input was. So if your input event was a JSON object with keys original_key_1 and some_other_key, your output with just the above ResultPath would be:
{
    "original_key_1": "some value",
    "some_other_key": "some other value",
    "my_path": <the output of your lambda>
}
Now if you add OutputPath, this cuts off everything OTHER than the path (AFTER adding the ResultPath!) from the next output.
If you added OutputPath="$.my_path" you would end up with a JSON object of:
{ output of your lambda }
(your output had better be a JSON-compatible object, like a Python dict!)
InputPath does the same thing, but for the input. It cuts off everything other than the path described, and that is the only thing sent into the lambda. But it does not stop the input from being appended - so InputPath + ResultPath results in less being sent into the lambda, but everything put back together on the exit.
There isn't really loop logic like the one you describe, however - Task and State Machine definitions are static directions, not dynamic logic.
You can simply handle it inside the lambda; this is kinda the preferred method. HOWEVER, if you do this, then you should use a combination of OutputPath and ResultPath to 'cut off' the input, having replaced the various fields of the incoming event with whatever you want before returning it at the end.
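To make the ResultPath/OutputPath shaping concrete, here is a small hypothetical Python sketch; the keys mirror the example above and are not from any real state machine:

# Input event arriving at the task
event = {"original_key_1": "some value", "some_other_key": "some other value"}

# Whatever your lambda returned
lambda_output = {"newLastName": "Smith"}

# ResultPath="$.my_path": graft the lambda output onto the input event
after_result_path = {**event, "my_path": lambda_output}

# OutputPath="$.my_path": keep only that subtree for the next state
after_output_path = after_result_path["my_path"]

assert after_output_path == {"newLastName": "Smith"}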

How to return Transaction id, time stamp on execution of invoke function in chaincode?

I need guidance on returning the transaction id and timestamp to the client interface after each invoke function call.
I have found that stub.GetTxID() is used for getting the transaction id, but peer.Response only takes one payload, so I am not able to return the TxID on the client interface.
You can create a response object to capture the relevant information, marshal it into JSON and return it, something like this:
type ChaincodeResponse struct {
    // fields must be exported (capitalised) or json.Marshal will omit them
    TxID string               `json:"txID"`
    Time *timestamp.Timestamp `json:"time"`
}
and then
// rest of the invoke code skipped, here is
// the relevant part:
// GetTxTimestamp returns the timestamp and an error
ts, err := stub.GetTxTimestamp()
if err != nil {
    return shim.Error(err.Error())
}
resp, err := json.Marshal(ChaincodeResponse{
    TxID: stub.GetTxID(),
    Time: ts,
})
if err != nil {
    return shim.Error(err.Error())
}
// return json representation of relevant information
// in response
return shim.Success(resp)
I'm working on something at the moment that requires all of our transactions to be timestamped. I tried some things based on your code above but I think the api has moved on considerably since 2017.
Currently, I'm adding a created: stub.GetTxTimestamp() field to all of the things we're putting on the ledger and then reading them later in any queries. Though I'm wondering if the timestamps are already generated and stored, therefore making this unnecessary - do you know if a timestamp is still automatically stored on each item put on the ledger?

SFDC Apex Code: Access class level static variable from "Future" method

I need to do a callout to a webservice from my ApexController class. To do this, I have an async method with the attribute @future(callout=true). The webservice call needs to reference an object that gets populated in the save call from the VF page.
Since static (future) methods do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access that in my static method to do the webservice callout. However, the static Map object is getting re-initialized and is null in the static method.
I would really appreciate it if anyone can give me some pointers on how to address this issue.
Thanks!
Here is the code snipped:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);
    // make async call to Widget webservice
    saveWidgetCallInformation(guid);
}

// async call to widget webservice
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation)leadsMap.get(guid);
    .....
    // call webservice
}
@future is a totally separate execution context. It won't have access to any history of how it was called (meaning all static variables are reset, you start with fresh governor limits etc., like a new action initiated by the user).
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects; you need to pass primitives (Integer, String, DateTime etc.) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around JSON class or these 2 handy posts:
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
Side note - can you rethink your flow? If the starting point is Visualforce you can skip the @future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to hold an open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you roll back your changes correctly when the insert went OK but the callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.

How does Sentry aggregate errors?

I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions).
Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how.
Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like?
[UPDATE 1: event grouping]
This line appears in sentry.models.Group:
class Group(MessageBase):
    """
    Aggregated message which summarizes a set of Events.
    """
    ...
    class Meta:
        unique_together = (('project', 'logger', 'culprit', 'checksum'),)
    ...
Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further; however, 'checksum' suggests binary equivalence, which is never going to work - it must be possible to group instances of the same exception with different attributes?
[UPDATE 2: event checksums]
The event checksum comes from the sentry.manager.get_checksum_from_event method:
def get_checksum_from_event(event):
    for interface in event.interfaces.itervalues():
        result = interface.get_hash()
        if result:
            hash = hashlib.md5()
            for r in result:
                hash.update(to_string(r))
            return hash.hexdigest()
    return hashlib.md5(to_string(event.message)).hexdigest()
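For illustration, a tiny hypothetical run-through of that fallback branch: with no interface hashes, the checksum is just the md5 of the message, so two messages differing only in the user land in different groups.

import hashlib

msg_a = "User 'x' was unable to perform action because 'y'"
msg_b = "User 'z' was unable to perform action because 'y'"

# Same template, different user - and yet two different checksums,
# hence two separate groups in Sentry.
assert hashlib.md5(msg_a.encode()).hexdigest() != hashlib.md5(msg_b.encode()).hexdigest()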
Next stop - where do the event interfaces come from?
[UPDATE 3: event interfaces]
I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces.
Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.)
[UPDATE 4: solution]
Here are the two get_hash functions for the Message and User interfaces respectively:
# sentry.interfaces.Message
def get_hash(self):
    return [self.message]

# sentry.interfaces.User
def get_hash(self):
    return []
Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed, etc.). The net effect of this is that the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique.
I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-))
(Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being used to group exceptions, return an empty list.)
See my final update in the question itself. Events are aggregated on a combination of 'project', 'logger', 'culprit' and 'checksum' properties. The first three of these are relatively easy to control - the fourth, 'checksum' is a function of the type of data sent as part of the event.
Sentry uses the concept of 'interfaces' to control the structure of data passed in, and each interface comes with an implementation of get_hash, which is used to return a hash value for the data passed in. Sentry comes with a number of standard interfaces ('Message', 'User', 'HTTP', 'Stacktrace', 'Query', 'Exception'), and these each have their own implementation of get_hash. The default (inherited from the Interface base class) is an empty list, which would not affect the checksum.
In the absence of any valid interfaces, the event message itself is hashed and returned as the checksum, meaning that the message would need to be unique for the event to be grouped.
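As a hedged sketch of what standardising the message can look like in practice (assuming a Sentry/raven logging handler is attached to the logger; report_failure and its arguments are made up for illustration), keep the unformatted template constant and pass the variable parts as logging arguments:

import logging

logger = logging.getLogger(__name__)

def report_failure(user, reason):
    # The template string stays identical across occurrences, so the
    # Message interface hashes the same value every time; only the
    # logging args (user, reason) vary.
    logger.error("User '%s' was unable to perform action because '%s'",
                 user, reason)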
I've had a similar problem with exceptions. Our system is currently capturing only exceptions, and I was confused about why some of them were merged into a single error while others were not.
With your information above I extracted the get_hash methods and tried to find the differences between my errors. What I found out is that the grouped errors all came from a self-written exception type that has an empty Exception.message value.
get_hash output:
[<class 'StorageException'>, StorageException()]
and the multiple errors came from an exception class that has a filled message value (the Jinja template engine):
[<class 'jinja2.exceptions.UndefinedError'>, UndefinedError('dict object has no attribute LISTza_*XYZ*',)]
Different exception messages trigger different reports; in my case the merge was caused by the lack of an Exception.message value.
Implementation:
class StorageException(Exception):
    def __init__(self, value):
        Exception.__init__(self)
        self.value = value
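A minimal sketch of the corresponding fix, assuming you want each distinct message to produce its own group: forward the value to the base class so Exception.message is populated and get_hash() no longer comes up empty.

class StorageException(Exception):
    def __init__(self, value):
        # Passing value through populates the exception's message/args,
        # so Sentry no longer hashes an empty string for every instance.
        Exception.__init__(self, value)
        self.value = value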