Django FSM - get_available_FIELD_transitions

Using django_fsm, I need to get the available transitions as a list. When using the following code I get a <generator object get_available_FIELD_transitions at 0x10b9ba660>:
obj = MyModel.objects.get(pk=object_id)
transitions = obj.get_available_status_transitions()
print(transitions)
Instead, I would like to get a list of transitions like ['PENDING', 'CLOSED'].

The generator has everything you need; it just needs iterating. To get what you want, you can convert it to a list:
transitions = list(obj.get_available_status_transitions())
You might want to read up on generators in Python; they're very useful.
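Note that each item the generator yields is a django_fsm Transition object rather than a plain string. If what you actually want is the list of target states, a minimal sketch (assuming 'PENDING' and 'CLOSED' are the targets defined on your status field's transitions) would be:
obj = MyModel.objects.get(pk=object_id)
transitions = obj.get_available_status_transitions()
# Each item is a Transition; use .target for the target states or .name for the method names.
target_states = [t.target for t in transitions]
print(target_states)  # e.g. ['PENDING', 'CLOSED']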

Generators are iterable Python objects. See Generators.
This will print each item:
transitions = list(obj.get_available_state_transitions())
print(transitions)
I found this in the test cases of django-fsm:
Django-fsm TestCase

Related

Sentry-elixir cannot encode tuples

I get it that in a pure sense, JSON doesn't account for tuples, but I don't think it's unreasonable to treat tuples as lists in terms of the JSON encoding. Has anyone else faced and resolved this? I'd like to stay out of the business of pre-processing my error data to replace tuples with lists.
Perhaps I need to specify a different serialization approach?
EDIT: here is a practical example with some toy code:
the_data = {:ok, %{...}}
Sentry.capture_message(message, the_data)
All it does is attempt to send a message to Sentry with tuples in the data.
If you're unfamiliar with Sentry, the sentry-elixir library provides two functions (among many others, of course) that are used to explicitly send either exceptions or messages to Sentry. The functions are:
Sentry.capture_exception/2
Sentry.capture_message/2
In addition, errors are sent to Sentry when they bubble up to the "top". These can't be intercepted so I have to specify (and implement) a before_send_event "handler" in the configuration for Sentry.
This is what my configuration looks like for the environment I'm working in:
config :sentry,
  dsn: "https://my_neato_sentry_key@sentry.io/33333343",
  environment_name: :staging,
  enable_source_code_context: true,
  root_source_code_path: File.cwd!(),
  tags: %{
    env: "staging"
  },
  before_send_event: {SomeApplication.Utils.SentryLogger, :before_send},
  included_environments: [:staging],
  filter: SomeApplication.SentryEventFilter
My before_send function basically attempts to sanity-check the data and replace all tuples with lists. I haven't implemented this entirely yet, though; instead of replacing all tuples I am temporarily using Kernel.inspect/2 to convert the data to a string. This isn't ideal, of course, because then I can't manipulate the data in the Sentry views:
def before_send(sentry_event) do
  IO.puts "------- BEFORE SEND TWO ---------------------------"
  sentry_event
  |> inspect(limit: :infinity)
end
This results in the following output:
{:invalid, {:ok, the_data}}
And the capture_message fails.
By default, Sentry uses Jason to encode its JSON payloads and, again by default, Jason doesn't encode tuples. You can change that by implementing Jason.Encoder for Tuple:
defimpl Jason.Encoder, for: Tuple do
  def encode(tuple, opts) do
    Jason.Encode.list(Tuple.to_list(tuple), opts)
  end
end
Be warned - this will have a global effect on how tuples are converted to JSON in your application.
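With the protocol implementation above in place, tuples should encode as JSON arrays. A quick sanity check in IEx (toy data, just for illustration):
iex> Jason.encode!({:ok, %{count: 1}})
"[\"ok\",{\"count\":1}]"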

About autograd in PyTorch: when adding new user-defined layers, how should I make their parameters update?

Hi everyone!
My task is an optical-flow-generation problem. I have two raw images and optical flow data as ground truth; my algorithm generates optical flow from the raw images, and the Euclidean distance between the generated optical flow and the ground truth can be defined as a loss value, so backpropagation can be used to update the parameters.
I treat it as a regression problem, and I have two ideas now:
I can set every parameter with requires_grad=True and compute a loss, then call loss.backward() to acquire the gradients, but I don't know how to add these parameters to an optimizer so they get updated.
I can write my algorithm as a model. If I design a "custom" model, I can initialize several layers such as nn.Conv2d() and nn.Linear() in __init__() and update their parameters with torch.optim.Adam(model.parameters()), but if I define new layers by myself, how should I add this layer's parameters to the collection of parameters being updated?
This problem has confused me for several days. Are there any good methods to update user-defined parameters? I would be very grateful for any advice!
Tensor values have their gradients calculated if they
Have requires_grad == True
Are used to compute some value (usually loss) on which you call .backward().
The gradients will then be accumulated in their .grad attribute. You can manually use them in order to perform arbitrary computation (including optimization). The predefined optimizers accept an iterable of parameters, and model.parameters() does just that - it returns an iterable of parameters. If you have some custom "free-floating" parameters, you can pass them as
my_params = [my_param_1, my_param_2]
optim = torch.optim.Adam(my_params)
and you can also merge them with the other parameter iterables like below:
model_params = list(model.parameters())
my_params = [my_param_1, my_param_2]
optim = torch.optim.Adam(model_params + my_params)
In practice however, you can usually structure your code to avoid that. There's the nn.Parameter class which wraps tensors. All subclasses of nn.Module have their __setattr__ overridden so that whenever you assign an instance of nn.Parameter as its property, it will become a part of Module's .parameters() iterable. In other words
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.my_param_1 = nn.Parameter(torch.tensor(...))
        self.my_param_2 = nn.Parameter(torch.tensor(...))
will allow you to write
module = MyModule()
optim = torch.optim.Adam(module.parameters())
and have the optim update module.my_param_1 and module.my_param_2. This is the preferred way to go, since it helps keep your code more structured:
You won't have to manually include all your parameters when creating the optimizer
You can call module.zero_grad() and zero out the gradient on all its children nn.Parameters.
You can call methods such as module.cuda() or module.double() which, again, work on all children nn.Parameters instead of requiring you to manually iterate through them.
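To connect this back to the question, here is a minimal, hypothetical sketch of a custom layer whose hand-defined parameters are picked up by the optimizer automatically; the shapes and the use of the Euclidean distance as the loss are placeholders standing in for the actual optical-flow algorithm:
import torch
import torch.nn as nn

class MyFlowLayer(nn.Module):
    def __init__(self, num_features):
        super(MyFlowLayer, self).__init__()
        # Wrapped in nn.Parameter, so they appear in model.parameters()
        self.scale = nn.Parameter(torch.ones(num_features))
        self.shift = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        return x * self.scale + self.shift

model = MyFlowLayer(num_features=2)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 2)       # stand-in for features computed from the raw images
target = torch.randn(8, 2)  # stand-in for the ground-truth optical flow

pred = model(x)
loss = torch.dist(pred, target)  # Euclidean distance as the loss
optim.zero_grad()
loss.backward()
optim.step()                     # updates model.scale and model.shift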

Pass Parameter to GCMLE Prediction Graph

For my ML Engine prediction graph, I have a part of the graph which takes a long time to compute and is not always necessary. Is there a way to create a boolean flag that will skip over this section of the graph? I would like to pass this flag when creating a batch predict job or an online prediction. For example, it would be something like:
gcloud ml-engine predict --model $MODEL --version $VERSION --json-instances $JSON_INSTANCES --boolean_flag $BOOLEAN_FLAG
In the example above, I would either pass True/False as the $BOOLEAN_FLAG and then this would determine whether a part of the prediction graph is evaluated. I would imagine that this flag could also be passed in the body of the batch prediction job, just like model/version are. Is this at all possible?
I know that I could add a new input field to the prediction request that is True/False for each element in the batch and just pass that as False when I don't want to obtain the prediction, but I'm curious if there is a way to do this with just a single parameter.
This is not currently possible. We'd like to hear more about your requirements for this feature. Please reach out to us at cloudml-feedback@google.com.
How about adding two different export signatures, each with a different head? Then you can deploy to two different endpoints and choose the URL to call depending on whether you want the full or the partial graph.
Write two serving input functions, one for each case. In the first case, set the flag to zero, and in the second case, set the flag to one. The reason to use ones_like and zeros_like is to ensure that you have a batch of zeros and ones:
def case1_serving_input_fn():
    feature_placeholders = ...
    features = ...
    features['myflag'] = tf.zeros_like(features['other'])
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)

def case2_serving_input_fn():
    feature_placeholders = ...
    features = ...
    features['myflag'] = tf.ones_like(features['other'])
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
In your train_and_evaluate function, have two exporters:
def train_and_evaluate(output_dir, nsteps):
    ...
    exporter1 = tf.estimator.LatestExporter('case1', case1_serving_input_fn)
    exporter2 = tf.estimator.LatestExporter('case2', case2_serving_input_fn)
    eval_spec = tf.estimator.EvalSpec(
        input_fn=make_input_fn(eval_df, 1),
        exporters=[exporter1, exporter2])
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
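After training, each exporter writes its own SavedModel (e.g. under export/case1 and export/case2), and each can be deployed as a separate model version; the client then simply picks the version to call. A hypothetical example with made-up version names:
gcloud ml-engine predict --model $MODEL --version flag_off --json-instances $JSON_INSTANCES
gcloud ml-engine predict --model $MODEL --version flag_on --json-instances $JSON_INSTANCES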

UserWarning in pymc3: What does reparameterize mean?

I built a pymc3 model using the DensityDist distribution. I have four parameters, out of which 3 use Metropolis and one uses NUTS (this is automatically chosen by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains a number of diverging samples after tuning. If increasing target_accept does not help, try to reparameterize.
May I know what reparameterize means here?
2. The acceptance probability in chain 0 does not match the target. It is , but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used 'random_seed', 'discard_tuned_samples', 'step = pm.NUTS(target_accept=0.95)' and so on, and got rid of these user warnings. But I couldn't find details of how these parameter values are decided. I am sure this might have been discussed in various contexts, but I am unable to find solid documentation for it. I was doing trial and error as below:
with patten_study:
    # SEED = 61290425  # 51290425
    step = pm.NUTS(target_accept=0.95)
    trace = sample(step=step)  # previously: sample(4000, tune=10000, step=step, discard_tuned_samples=False, random_seed=SEED)
I need to run this on different datasets, so I am struggling to fix these parameter values for each dataset I am using. Is there any way I can supply these values, check the outcome (whether there are any user warnings, and then try other values), and run this in a loop?
Pardon me if I am asking something stupid!
In this context, re-parametrization basically means finding a different but equivalent model that is easier to compute. There are many things you can do depending on the details of your model:
Instead of using a Uniform distribution, use a Normal distribution with a large variance.
Change from a centered hierarchical model to a non-centered one (see the sketch after this list).
Replace a Gaussian with a Student-T.
Model a discrete variable as a continuous one.
Marginalize variables, like in this example.
Whether these changes make sense or not is something that you should decide, based on your knowledge of the model and the problem.
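To make the non-centered idea concrete, here is a minimal sketch of the classic centered vs. non-centered parametrization of a hierarchical normal model; the variable names, priors, and toy data are made up for illustration and are not taken from your DensityDist model:
import numpy as np
import pymc3 as pm

y = np.random.randn(8)  # toy observations

# Centered: theta ~ Normal(mu, sigma); this form often produces divergences.
with pm.Model() as centered:
    mu = pm.Normal('mu', 0., 5.)
    sigma = pm.HalfNormal('sigma', 5.)
    theta = pm.Normal('theta', mu=mu, sd=sigma, shape=len(y))
    pm.Normal('obs', mu=theta, sd=1., observed=y)

# Non-centered: the same model, but theta is built from a standard normal,
# which is usually much easier for NUTS to sample.
with pm.Model() as non_centered:
    mu = pm.Normal('mu', 0., 5.)
    sigma = pm.HalfNormal('sigma', 5.)
    theta_offset = pm.Normal('theta_offset', 0., 1., shape=len(y))
    theta = pm.Deterministic('theta', mu + sigma * theta_offset)
    pm.Normal('obs', mu=theta, sd=1., observed=y)
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)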

How to get weights format from TensorFlow .pb model?

I want to reorganize the nodes of a TensorFlow .pb model, so I first get the NodeDefs from the GraphDef, and read the attributes using NodeDef.attr() for the "Conv2D" nodes.
I can get parameters such as strides, padding, data_format, and use_cudnn_on_gpu from attr, but I can't get the weights (filter) format parameters.
The language I use is C++.
How can I get them? Thank you!
Conv2D has two inputs: the first one is data and the second one is filter (or weights), so you can simply check the format of the second input of Conv2D. If you are using C++, you can try this:
// Assuming inputs: conv2d_node (a NodeDef) and node_map (node name -> NodeDef*).
const std::string& filter_node_name = conv2d_node.input(1);
const tensorflow::NodeDef* filter_node = node_map.at(filter_node_name);
// You might need to follow an Identity node here to reach the actual filter
// (e.g. a Const or Variable node).
// Then get the shape of the filter from filter_node->attr() (for a Const node,
// the "value" attribute holds a TensorProto whose tensor_shape() is the filter shape).