How to use the module_set_weight function in Drupal 8?

In the module_set_weight($module, $weight) function, what does the weight parameter represent?
Does a higher number mean the module executes later in the order? Can anyone offer an example?

From documentation:
int $weight: An integer representing the weight of the module.
In Drupal, a module's weight determines the order in which the hook implementations it contains are executed. The module with the lower weight is executed first. For modules of equal weight, the alphabetical order of the module names is used.
For example, say we have module_a (weight = 0) and module_b (weight = 1), both of which implement hook_entity_update:
module_a.module
function module_a_entity_update($entity) {
  // some logic
}
module_b.module
function module_b_entity_update($entity) {
  // some logic
}
So when the hook_entity_update hook functions are called, module_a_entity_update() will be executed before module_b_entity_update().
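To answer the "how to use" part directly: module_set_weight() persists the new weight (in Drupal 8 it ends up in the core.extension configuration), and a typical place to call it is an install or update hook. A minimal sketch (the hook_install wrapper is just one possible call site):
module_b.install
/**
 * Implements hook_install().
 */
function module_b_install() {
  // A negative weight makes module_b's hook implementations run before
  // modules left at the default weight of 0, such as module_a.
  module_set_weight('module_b', -1);
}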


Data structure or pattern for efficient stack API, for multiple value types, consumers, and phases

I have a custom graphics application for which I want to update the "attributes" of 0 or more "components" at runtime. The app runs at 60FPS. I want to be able to change these component attributes on a frame-by-frame basis. For every attribute, there is a default/fall-back value that will be used if the component doesn't use a custom value for that attribute.
Every frame, the app builds a list of components. There are 2 phases for building these components:
A definition phase.
A compile phase. This is a runtime version of 'compile'.
After the two build phases, there is a run phase, which does not make use of the attributes.
There may be lots of components built every frame, as in thousands. Different components use different subsets of the possible attributes. So, component A may use attributes (U, V, W), and component B may use attributes (W, X), etc. The set of attributes each component uses is fixed; neither the type nor the number changes at runtime.
Usually, I want to change the attribute values of a whole bunch of components at once, and only rarely change the values of individual components. However, it is possible to change attribute values on a per-component basis.
My hope is to use some kind of stack API for this, as it could be used to modify the values of groups of components, or just individual ones, depending on how attributes are pushed/popped from the attribute stack.
Unfortunately, I'm having no luck. The issue is that the attribute values may be consumed in two possible places: at the "definition phase", at the "compile phase", or both. If I use a stack and push an attribute value at the definition phase, but then pop that value before the compile phase, that attribute is not available at the compile phase.
I'm stuck trying to accomplish the following:
Avoid having each component instance carry fields for every possible attribute value it uses. Most of the time, a component will either use the default attribute values, or the same attribute values will be shared by most components. It's a waste for every component to carry fields that are unused most of the time.
Avoid copying the values of each attribute, for each component, every frame. Again, most components either use the default value, or they use values shared with many other components. Most of the copies would be wasted processing.
I'm looking for some data structure or pattern that would allow me to create this stack-based API while being (relatively) efficient with attribute value copies and memory size.
This is the kind of API I'm trying to have:
struct CompA {
  // Trying to avoid having this type of struct, for each instance.
  struct {
    int age;
    float weight;
    // ... many others
  } attributes;
};

int main() {
  Init();
  while (mainLoopExit == false) {
    DefineComponents(); // May define and consume per-component attributes.
    // Problem: any attributes 'popped' before here are not available to CompileComponents().
    // ... but, need them to be.
    CompileComponents(); // May consume (not define) per-component attributes.
    RunComponents(); // Attributes baked into the components.
  }
  Shutdown();
  return 0;
}

void DefineComponents() {
  auto a = CompBuilder.Add<CompA>();
  auto b = CompBuilder.Add<CompB>();
  auto c = CompBuilder.Add<CompC>();
  // All 'a', 'b', and 'c' get these attribute values.
  AttributesPush(AttrType::Size, 42);
  AttributesPush(AttrType::Weight, 100);
  a.Build();
  // 'b' gets an overridden 'Size' attribute value of 84.
  // It gets the 'Weight' of 100, which is already on the stack.
  AttributesPush(AttrType::Size, 84);
  b.Build();
  AttributesPop(); // Size
  // Only 'c' gets the 'Age' attribute value.
  // It gets the 'Size' of 42.
  // It gets the 'Weight' of 100.
  AttributesPush(AttrType::Age, 300);
  c.Build();
  AttributesPop(); // Age
  AttributesPop(); // Weight
  AttributesPop(); // Size
  PostDefineProcess(); // *Some* attributes are consumed here.
}

void CompileComponents() {
  for (auto& c : GetComponents<CompA>()) {
    // Get either a custom attribute value for 'Size', if the component has one, or the default value.
    auto size = GetAttr(AttrType::Size, c);
    DoSizeStuff(c, size);
  }
  for (auto& c : GetComponents<CompB>()) {
    // ...
  }
}
I would make a std::unordered_map object, where keys would be your component ids, and values would be indices into another simple array/vector of attribute groups. E.g. at index 0 you'd have the default values, at index 1 the first group of specific attributes, at index 2 the second group of specific attributes.
To assign a component to a group, you update the map with the index of that group (e.g. half of the components could have group 1, the other half group 2, and a few would have 0 for the defaults). Thus you can change a whole group of attributes in O(1), and change the group index of a component in O(1).
However, frequent adding and removing of groups would be an issue.
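A minimal sketch of that idea (AttributeGroup, AttributeTable, and the two sample attributes are illustrative names, not from the post):
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative attribute group; the real set of attributes comes from the app.
struct AttributeGroup {
    int   size   = 42;
    float weight = 100.0f;
};

using ComponentId = std::uint64_t;

struct AttributeTable {
    std::vector<AttributeGroup> groups{AttributeGroup{}};  // index 0 = defaults
    std::unordered_map<ComponentId, std::size_t> groupOf;  // component -> group index

    std::size_t AddGroup(const AttributeGroup& g) {
        groups.push_back(g);
        return groups.size() - 1;
    }
    void Assign(ComponentId c, std::size_t group) { groupOf[c] = group; }

    // Components with no entry fall back to the defaults in group 0.
    const AttributeGroup& Get(ComponentId c) const {
        auto it = groupOf.find(c);
        return groups[it == groupOf.end() ? 0 : it->second];
    }
};
Updating every member of a group is then one write to that group's slot in the vector, and moving a component to a different group is one map update, so no per-component attribute copies are needed.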

How to add a branch to an already existing TTree: ROOT

I have an existing TTree after running a simulation. I would like to add a branch to this TTree, and I want to call it Muon.Mass. I would also like to give the Muon.Mass branch a value of 0.1.
How can I write that?
I have seen how to create TTrees from scratch and to have branches of different variables. But I am not sure exactly what to do when I already have a TTree.
You can call the TTree::Branch method on an existing TTree the same way as for a new TTree. Just for filling, you need to ensure you fill only the new branch (this is a strongly cut-down example from https://github.com/pseyfert/tmva-branch-adder):
void AddABranch(TTree* tree) {
  Float_t my_local_variable;
  TBranch* my_new_branch = tree->Branch( ... /* use address of my_local_variable */ );
  for (Long64_t entry = 0; entry < tree->GetEntries(); ++entry) {
    tree->GetEntry(entry);
    /* something to compute my_local_variable */
    my_new_branch->Fill();  // fill only the new branch, not the whole tree
  }
}
As alternative you might want to look at the root tutorials for tree friends.
As a side note, depending what you want to do with the tree / whom you give the tree to, I advise against using . in branch names as they cause headache when running MakeClass (branch names can contain periods, but c++ variables can't, so the automatically generated class members for each branch will undergo character replacement).
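For the concrete case in the question, a minimal sketch could look like the following (using MuonMass rather than Muon.Mass for the reason above, and assuming the tree's file is open for writing):
void AddMuonMass(TTree* tree) {
  Float_t mass = 0.1f;  // the constant value requested for every entry
  TBranch* b = tree->Branch("MuonMass", &mass, "MuonMass/F");
  for (Long64_t entry = 0; entry < tree->GetEntries(); ++entry) {
    b->Fill();  // fill only the new branch; existing branches keep their data
  }
  tree->Write("", TObject::kOverwrite);  // save the updated tree to its file
}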

MATLAB : tracking state change in imufilter object

I am creating a function in MATLAB that I want to export as a C++ library. The function takes in accelerometer and gyroscope data, and calculates orientation via imufilter. Here is how it works:
% when 10 samples come in, call below function
function [orientation] = runtime_get_orientation(accelerometer, gyro)
    FUSE = imufilter('SampleRate', 50, 'AccelerometerNoise', 0.002, ...
        'LinearAccelerationNoise', 0.003, ...
        'GyroscopeNoise', 0.444, 'GyroscopeDriftNoise', 0.445);
    [orientation,~] = FUSE(accelerometer, gyro);
end
Note: I am creating a realtime system which will call this function over time. Ex: 10 samples come in, and then I call this function. 10 more come in, and I call it again.
The problem is that the FUSE object's state is reset every time I call the function; the matrices that retain the error state over time, and adjust to it, are wiped. If I pass the FUSE object into the function, as demonstrated below, the state is kept and I see orientation values that make sense.
% define FUSE object outside of the function
FUSE = imufilter('SampleRate', 50, 'AccelerometerNoise', 0.002, ...
    'LinearAccelerationNoise', 0.003, ...
    'GyroscopeNoise', 0.444, 'GyroscopeDriftNoise', 0.445);
% when 10 samples come in, call below function
function [orientation] = runtime_get_orientation(accelerometer, gyro, FUSE)
    [orientation,~] = FUSE(accelerometer, gyro);
end
I'd like to return the state of the FUSE object back to the calling function, so that I can pass it in again as an argument. I expect that this is some sort of matrix object. I want to do this because I will eventually export the function to C++, and exporting a FUSE object might not be possible from what I can tell.
What can I do to keep the state of the FUSE object, in a way that is codegen / C++ friendly?
One simple solution is to make the filter a static variable in the function. That way, you create the filter only the first time the function is called, and you don't need to know about it outside of the function.
To declare a static variable in MATLAB, use the persistent keyword.
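A minimal sketch of that approach (same filter parameters as in the question; the isempty guard ensures the filter is constructed only on the first call, and persistent variables are supported by MATLAB Coder for C/C++ code generation):
% when 10 samples come in, call below function
function [orientation] = runtime_get_orientation(accelerometer, gyro)
    persistent FUSE;  % survives between calls, like a C static variable
    if isempty(FUSE)
        FUSE = imufilter('SampleRate', 50, 'AccelerometerNoise', 0.002, ...
            'LinearAccelerationNoise', 0.003, ...
            'GyroscopeNoise', 0.444, 'GyroscopeDriftNoise', 0.445);
    end
    [orientation,~] = FUSE(accelerometer, gyro);
end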

GridGain: MapReduce with node-local data processing?

I am trying to perform some numerical computation on a large distributed data set. The algorithms fit the MapReduce model well with the additional property that output from the map step is small in size compared to the input data. Data can be considered read-only and is statically distributed over the nodes (except for re-balancing on fail-over). Note that this is somewhat contrary to the standard word-count examples where the input data is sent to the nodes performing the map step.
This implies that the map step shall be executed in parallel on all nodes, processing each node's local data, while it is acceptable that the output from the map step is sent to one node for the reduce step.
What is the best way to implement this with GridGain?
It seems there was a reduce(..) method on the GridCache/GridCacheProjection interfaces in earlier versions of GridGain, but it is not present any longer. Is there any replacement? I am thinking of a mechanism that takes a map closure and executes it, distributed, on each datum exactly once, while avoiding copying any input data across the network.
The (somewhat manual) approach I have come up with so far is the following:
public class GridBroadcastCountDemo {
    public static void main(String[] args) throws GridException {
        try (Grid grid = GridGain.start(CONFIG_FILE)) {
            GridFuture<Collection<Integer>> future = grid.forRemotes().compute().broadcast(new GridCallable<Integer>() {
                @Override
                public Integer call() throws Exception {
                    GridCache<Integer, float[]> cache = grid.cache(CACHE_NAME);
                    int count = 0;
                    for (float[] array : cache.primaryValues()) {
                        count += array.length;
                    }
                    return count;
                }
            });
            int totalCount = 0;
            for (int count : future.get()) {
                totalCount += count;
            }
            // expect size of input data
            System.out.println(totalCount);
        }
    }
}
There is however no guarantee that each datum is processed exactly once with this approach. E.g. when re-balancing takes place while the GridCallables are executed, part of the data could be processed zero or multiple times.
GridGain Open Source (which is now Apache Ignite) has the ComputeTask API, which has both map() and reduce() methods. If you are looking for a reduce() method, then ComputeTask is definitely the right API for you.
For now your implementation is OK. Apache Ignite is adding a feature where a node will not be considered primary until the migration is fully finished. It should be coming soon.
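For reference, here is a sketch of what the ComputeTask version could look like in Apache Ignite (written as an illustration, not taken from the Ignite docs; the cache name is an assumption). map() creates one job per node, each job counts only its node's primary entries, and reduce() sums the partial counts:
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeTaskAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

public class CountTask extends ComputeTaskAdapter<Void, Integer> {
    private static final String CACHE_NAME = "dataCache"; // assumed cache name

    // Map phase: one job per node; each job scans only that node's primary entries.
    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, Void arg) {
        Map<ComputeJob, ClusterNode> jobs = new HashMap<>();
        for (ClusterNode node : subgrid) {
            jobs.put(new ComputeJobAdapter() {
                @IgniteInstanceResource
                private Ignite ignite; // injected on the node that runs the job

                @Override
                public Object execute() {
                    int count = 0;
                    for (Cache.Entry<Integer, float[]> e :
                            ignite.<Integer, float[]>cache(CACHE_NAME)
                                  .localEntries(CachePeekMode.PRIMARY)) {
                        count += e.getValue().length;
                    }
                    return count;
                }
            }, node);
        }
        return jobs;
    }

    // Reduce phase: sum the per-node partial counts on the caller.
    @Override
    public Integer reduce(List<ComputeJobResult> results) {
        int total = 0;
        for (ComputeJobResult res : results) {
            total += res.<Integer>getData();
        }
        return total;
    }
}
It would be run with ignite.compute().execute(new CountTask(), null). Only the small per-node counts travel over the network, and the exactly-once caveat during rebalancing from the question still applies.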

Should I write a separate method for every possible parameter value

I am very new to the concept of unit testing, and I am stuck writing my first one.
I have a method that normalizes an ID value. It should return the passed value for any positive number (even if it is a string with a number inside) and zero (0) for any other passed value.
function normalizeId($val) {
    // if $val is a good positive number, return $val;
    // else return 0;
}
I want to write a unit test for this function with assertions covering every kind of argument that might be passed. For example:
5, -5, 0, "5", "-5", 3.14, "fff", new StdClass() etc.
Should I write a separate method in my TestCase class for each of these conditions, or have one method with all the conditions on separate lines?
I.e.
public function testNormalizeId() {
    $this->assertEquals(5, MyClass::normalizeId(5));
    $this->assertEquals(0, MyClass::normalizeId(-5));
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
or
public function testNormalizeId_IfPositiveInt_GetPositiveInt() {
    $this->assertEquals(5, MyClass::normalizeId(5));
}
public function testNormalizeId_IfNegativeInt_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId(-5));
}
public function testNormalizeId_IfNotIntAsString_GetZeroInt() {
    $this->assertEquals(0, MyClass::normalizeId("fff"));
}
What about best practices? I hear that the second choice is good, but I'm worried about having very many methods for very many possible parameter values. It can be a positive number, a negative number, zero, a string with a positive number inside, a string with a negative number inside, a string with a float inside, etc.
Edit
Or maybe a third approach, with a data provider?
// Provider renamed so it doesn't start with "test": PHPUnit would
// otherwise pick it up and run it as a test of its own.
public function normalizeIdProvider()
{
    return array(
        array(5, 5),
        array(-5, 0),
        array(0, 0),
        array(3.14, 0),
        array(-3.14, 0),
        array("5", 5),
        array("-5", 0),
        array("0", 0),
        array("3.14", 0),
        array("-3.14", 0),
        array("fff", 0),
        array("-fff", 0),
        array(true, 0),
        array(array(), 0),
        array(new stdClass(), 0),
    );
}
/**
 * @dataProvider normalizeIdProvider
 */
public function testNormalizeId($provided, $expected)
{
    $this->assertEquals($expected, MyClass::normalizeId($provided));
}
I'm not very knowledgeable about PHP nor the unit testing frameworks that you can use therein, but in the general sphere of unit testing I'd recommend the second approach, for these reasons:
Gives a specific test case failure for a particular type of input, rather than having to trawl through the actual assert failure message to figure out which one failed.
Makes it much easier to parametrize these tests if you decide that you need to perform tests on a specific type of conversion with more than one input (e.g. if you decided to have a text file containing 1,000 random strings and wanted to load these up in a test driver and run the test case for converting strings for each entry, by way of functional or acceptance testing later on).
Makes it easier to change out the individual test cases for when you need some special logic to setup
Makes it easier to spot when you've missed a type of conversion because the method names read off easier against a checklist :)
(Dubious) Will maybe make it easier to spot where your "god class" might be in need of internal refactoring to use separate sub-classes to perform specific types of conversions (not saying your approach is wrong but you might find the logic for one type of conversion very nasty; when you review your 20 or 30 individual test cases that could provide the impetus to bite the bullet and develop more specialized converter classes)
Hope that helps.
Use the data provider, as you discovered yourself. There is no benefit in duplicating the exact same test case across multiple methods with only the parameters and expectations changing.
Personally, I really do start with the tests all in one method for such simple cases. I'd start with a simple good case, and then gradually add more cases. I may not feel the need to change this into a data provider instantly, because it won't pay off instantly; but on the other hand, things can change, and this test structure can be a short-term solution that needs refactoring.
So whenever you observe yourself adding more lines of test data to such a multi-case test method, stop, and make it use a data provider instead.
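One refinement worth knowing: PHPUnit lets you give each provider row a string key, and that key is reported as the data set name when a case fails, which also addresses the first answer's point about identifying which input broke. A minimal sketch (labels are illustrative):
public function normalizeIdProvider()
{
    return array(
        'positive int'       => array(5, 5),
        'negative int'       => array(-5, 0),
        'numeric string'     => array("5", 5),
        'non-numeric string' => array("fff", 0),
    );
}

/**
 * @dataProvider normalizeIdProvider
 */
public function testNormalizeId($provided, $expected)
{
    $this->assertEquals($expected, MyClass::normalizeId($provided));
}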