Difference between Svelte global store and MobX store/state

What is the exact difference between Svelte's store and the MobX state management library?

import { writable, derived } from 'svelte/store'

There are many differences between the two libraries, but MobX is definitely a lot more sophisticated (and complicated!). That doesn't mean MobX should be used as a replacement for svelte/store, though, because Svelte's store library is tightly integrated with Svelte itself, for example:
import { someStore } from "$lib/file";
// you can get the value of stores reactively in .svelte files with $
$: number = $someStore * 5
Conceptually, however, there are many similarities (a short side-by-side sketch follows below):
"Computed" values in MobX are similar to Svelte's derived stores
Svelte's writable is comparable to MobX's observable class fields
(As far as I know) there is no equivalent of readable in MobX; the recommended approach is to update observables through external side effects
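For comparison, here is a minimal side-by-side sketch; the Counter class is just an illustration, and the MobX part assumes the MobX 6 makeObservable API:

// Svelte: a writable store plus a derived value
import { writable, derived } from 'svelte/store';

const count = writable(0);
const doubled = derived(count, ($count) => $count * 2);

// MobX: roughly the same shape, with an observable field and a computed getter
import { makeObservable, observable, computed } from 'mobx';

class Counter {
  count = 0;
  constructor() {
    makeObservable(this, { count: observable, doubled: computed });
  }
  get doubled() {
    return this.count * 2;
  }
}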


Can I add a term to the Lagrangian calculation in Pyomo/PySP?

I would like to use Pyomo's PySP framework to do some stochastic optimization. In this model, I have some variables that must be the same across scenarios (i.e., the standard root node variables). As part of the Progressive Hedging approach, PySP creates an augmented Lagrangian, whose multipliers are adjusted iteratively until all these variables are equal across scenarios. All good so far. But I also have some constraints that must be enforced on an expected value basis. In the extensive form, these look like this:
sum(probability[s] * use[s] for s in scenarios) == resource
This complicating constraint could be factored out with a Lagrangian relaxation. This would require adding a term like this to the main objective function (which would then become part of each scenario's objective function):
(
    lambda * (sum(probability[s] * use[s] for s in scenarios) - resource)
    + mu/2 * (sum(probability[s] * use[s] for s in scenarios) - resource)**2
)
This is very similar to the Lagrangian terms for the nonanticipativity constraints that are already in the main objective function. At each iteration, the PySP framework automatically updates the multipliers for the nonanticipativity terms and then propagates their values into the individual scenarios.
So my question is, is there some way to add my terms to the standard Lagrangian managed by PySP, and have it automatically update my multipliers along with its own? I don't mind doing some heavy lifting, but I can't find any detailed documentation on how PySP is implemented, so I'm not sure where to start.
The PH implementation in PySP supports some level of customization through the use of user-defined extensions. These are classes you can implement whose methods are called by PH at different points in the algorithm. You can tell PH to use an extension by setting the command-line option "--user-defined-extension" to a file that contains an implementation. A number of examples can be found here (look for files that contain IPHExtension and copy what they do).
Unfortunately, there is no specific code that will make what you want to do easy. You will have to look at the source code to see how PH updates and manages these objective parameters (see ph.py for this, and to see where the different extension methods are called in the algorithm).
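For orientation, a skeleton extension might look like the sketch below. The hook name follows the IPHExtension examples in the PySP sources, but the plugin import path has moved between Pyomo versions, so treat both as assumptions to verify against your install:

# Skeleton of a PH extension (sketch only; verify imports for your version).
# Older releases expose the plugin base classes as pyomo.util.plugin.
from pyomo.common.plugin import SingletonPlugin, implements
from pyomo.pysp import phextension

class ExpectedValueLagrangian(SingletonPlugin):
    implements(phextension.IPHExtension)

    def post_iteration_k_solves(self, ph):
        # Called after each PH iteration's scenario solves: a natural place to
        # recompute sum(probability[s] * use[s]) - resource, update your own
        # lambda/mu multipliers, and push the new values into each scenario's
        # parameters.
        pass

    # IPHExtension defines further hooks (pre_ph_initialization,
    # post_iteration_0, post_ph_execution, ...); implement the ones you need.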

How to unit test a generator/serialization method?

I want to write unit tests for a serialization method. By serialization method I mean a method that outputs a set of data in a special format.
For example, a method that outputs data in XML format. (I write in C++ but it is the same in every language.)
class Generator
{
public:
    std::string serialize();
};

// unit test (pseudo-code)
Generator gen;
// set some data in gen
std::string actual = gen.serialize();
std::string expected = "<xml>...</xml>";
ASSERT_EQUAL(expected, actual);
The problem with this is that the unit test depends heavily on unimportant details, like the formatting of the XML (line breaks) or the order of XML attributes.
And while this approach can at least work for XML, it will not work for generators that output binary data.
So, what is a robust way to test serialization methods?
The ideas I have are the following, but all have serious drawbacks:
Use external libraries to parse the data (for proprietary formats, none may exist).
Always write serialization/deserialization in pairs and test them in combination (matching bugs in both methods might remain undiscovered).
Store the serialized data in external files and compare against them in the test (the unit test becomes difficult to read and maintain).
As you are asking about unit testing, I assume that you know the intended behaviour of the serializer in all its details. That is, you know where you want line breaks, indentation etc. to be inserted.
The problem now is that in each test case only a subset of these details is relevant. In other words, in some tests you want to check the proper indentation, and in some tests you just want to be sure that a number is inserted in the right way.
In addition to the options you have provided I recommend another approach: Use regular expression matching instead of string comparison. With the help of regular expressions you can reduce the serialized string to the essential parts which are of interest in the respective test. To check, for example, if the result string contains a certain number, say, 42, you could match it against ^[^0-9]*42[^0-9]*$. Then, the enclosing XML would be ignored in this particular test. This test would then be robust against a large number of changes in the serialization.
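For instance, a minimal sketch of such a check in C++ (the assertion macro is whatever your test framework provides):

#include <regex>
#include <string>

// Assert only the essential detail: the output contains the number 42 and no
// other digits; the surrounding XML markup is ignored.
bool contains_exactly_42(const std::string& serialized)
{
    static const std::regex pattern("^[^0-9]*42[^0-9]*$");
    return std::regex_match(serialized, pattern);
}

// In the test: ASSERT_TRUE(contains_exactly_42(gen.serialize()));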
With this approach you avoid the dependency on external parsing libraries (well, you depend on a regular expression library, but in many languages today that is part of the standard library), you can also test aspects which a serialize/deserialize round trip cannot (such as indentation), and your tests run fast and are not OS specific (no dependency on the file system).
This is more like a long comment with my first thoughts on the topic.
I think you have to look at two different scenarios: the relation between your data and its serialized form can be either 1:1 or 1:n.
XML would be a 1:n relation, where your XML code has quite a bit of freedom but would still be deserialized to the same data again. In this case it seems to me that developing and testing serialization/deserialization in combination is the way to go. If external libraries are available as well, use them, of course. If there are none, then as long as serialization followed by deserialization yields the same data, you will probably not have "bugs" but "features"...
Testing the deserialization against stored external data files also makes sense, but this does not apply to the serialization, imho.
Looking at a 1:1 relation, like putting the data into a certain binary format, you should go for the stored data in external files. Again, always use external libraries if they exist, of course.
I would suggest doing all three of those approaches together, where applicable. You should not rely on a single one of them.

How would you idiomatically extend arithmetic functions for other datatypes in Clojure?

So I want to use java.awt.Color for something, and I'd like to be able to write code like this:
(import 'java.awt.Color)
(= Color/BLUE (- Color/WHITE Color/RED Color/GREEN))
Looking at the core implementation of -, it talks specifically about clojure.lang.Numbers, which to me implies that there is nothing I can do to 'hook' into the core implementation and extend it.
Looking around on the Internet, there seem to be two different things people do:
Write their own defn - function, which only knows about the datatype they're interested in. To use it you'd probably end up prefixing a namespace, so something like:
(= Color/BLUE (scdf.color/- Color/WHITE Color/RED Color/GREEN))
Or alternatively using the namespace and calling clojure.core/- when you want number math.
Code a special case into your - implementation that passes through to clojure.core/- when it is passed a Number.
Unfortunately, I don't like either of these. The first is probably the cleaner of the two, as the second presumes that the only things you care about doing maths on are the new datatype and numbers.
I'm new to Clojure, but shouldn't we be able to use protocols or multimethods here, so that when people create/use custom types they can 'extend' these functions to work seamlessly? Is there a reason that +, - etc. don't support this? (Or do they? They don't seem to from my reading of the code, but maybe I'm reading it wrong.)
If I want to write my own extensions to common existing functions such as + for other datatypes, how should I do it so it plays nicely with existing functions and potentially other datatypes?
It wasn't exactly designed for this, but core.matrix might be of interest to you here, for a few reasons:
The source code provides examples of how to use protocols to define operations that work with various different types. For example, (+ [1 2] [3 4]) => [4 6]. It's worth studying how this is done: basically the operators are regular functions that call a protocol, and each data type provides an implementation of the protocol via extend-protocol.
You might be interested in making java.awt.Color work as a core.matrix implementation (i.e. as a 4D RGBA vector). I did something similar with BufferedImage here: https://github.com/clojure-numerics/image-matrix. If you implement the basic core.matrix protocols, you will get the whole core.matrix API working with Color objects, which will save you a lot of work implementing different operations.
The probable reason for not basing the arithmetic operations in core on protocols (and making them work only on numbers) is performance. A protocol implementation requires an additional lookup to choose the correct implementation of the desired function. Although from a design point of view it may feel nice to have protocol-based implementations and extend them whenever required, when you have a tight loop that performs these operations many times (a very common use case for arithmetic), you will start feeling the performance cost of that additional per-operation lookup at runtime.
If you have a separate implementation for your own datatypes (e.g. color/-) in its own namespace, it will be more performant due to the direct function call, and it also makes things more explicit and customizable for specific cases.
Another issue with these functions is their variadic nature (i.e. they can take any number of arguments). This is a serious obstacle for a protocol implementation, as protocol dispatch only works on the type of the first parameter.
You can have a look at algo.generic.arithmetic in algo.generic. It uses multimethods.
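To make the protocol approach concrete, here is a minimal sketch; the Subtractable protocol and generic- function are made-up names for illustration:

(ns example.generic-ops
  (:import [java.awt Color]))

;; Protocols dispatch on the type of the first argument.
(defprotocol Subtractable
  (sub [a b]))

(extend-protocol Subtractable
  Number
  (sub [a b] (clojure.core/- a b))
  Color
  (sub [a b]
    (Color. (max 0 (- (.getRed a) (.getRed b)))
            (max 0 (- (.getGreen a) (.getGreen b)))
            (max 0 (- (.getBlue a) (.getBlue b))))))

;; A variadic minus built on the protocol, mirroring clojure.core/-
(defn generic- [x & more]
  (reduce sub x more))

;; (generic- Color/WHITE Color/RED Color/GREEN) => Color/BLUE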

Named parameter string formatting in C++

I'm wondering if there is a library like Boost Format, but which supports named parameters rather than positional ones. This is a common idiom in e.g. Python, where you have a context to format strings with that may or may not use all available arguments, e.g.
mouse_state = {}
mouse_state['button'] = 0
mouse_state['x'] = 50
mouse_state['y'] = 30
#...
"You clicked %(button)s at %(x)d,%(y)d." % mouse_state
"Targeting %(x)d, %(y)d." % mouse_state
Are there any libraries that offer the functionality of those last two lines? I would expect it to offer an API something like:
PrintFMap(string format, map<string, string> args);
In Googling I have found many libraries offering variations of positional parameters, but none that support named ones. Ideally the library has few dependencies so I can drop it easily into my code. C++ won't be quite as idiomatic for collecting named arguments, but probably someone out there has thought more about it than me.
Performance is important, in particular I'd like to keep memory allocations down (always tricky in C++), since this may be run on devices without virtual memory. But having even a slow one to start from will probably be faster than writing it from scratch myself.
The fmt library supports named arguments:
print("You clicked {button} at {x},{y}.",
arg("button", "b1"), arg("x", 50), arg("y", 30));
And as a syntactic sugar you can even (ab)use user-defined literals to pass arguments:
print("You clicked {button} at {x},{y}.",
"button"_a="b1", "x"_a=50, "y"_a=30);
For brevity the namespace fmt is omitted in the above examples.
Disclaimer: I'm the author of this library.
I've always been critical of C++ I/O (especially formatting) because in my opinion it is a step backward with respect to C. Formats need to be dynamic, and it makes perfect sense, for example, to load them from an external resource such as a file or a parameter.
I've never tried to actually implement an alternative before, however, and your question made me attempt it, investing some weekend hours on this idea.
Sure enough, the problem was more complex than I thought (for example, just the integer formatting routine is 200+ lines), but I think this approach (dynamic format strings) is more usable.
You can download my experiment from this link (it's just a .h file) and a test program from this link (test is probably not the correct term, I used it just to see if I was able to compile).
The following is an example
#include "format.h"
#include <iostream>
using format::FormatString;
using format::FormatDict;
int main()
{
std::cout << FormatString("The answer is %{x}") % FormatDict()("x", 42);
return 0;
}
It differs from the boost.format approach because it uses named parameters and because the format string and the format dictionary are meant to be built separately (and, for example, passed around). Also, I think that formatting options should be part of the string (as with printf) and not in the code.
FormatDict uses a trick for keeping the syntax reasonable:
FormatDict fd;
fd("x", 12)
  ("y", 3.141592654)
  ("z", "A string");
FormatString is instead just parsed from a const std::string&. (I decided to preparse format strings; a slower but probably acceptable approach would be to just pass the string around and reparse it each time.)
The formatting can be extended for user defined types by specializing a conversion function template; for example
struct P2d
{
    int x, y;

    P2d(int x, int y)
        : x(x), y(y)
    {
    }
};

namespace format {

template<>
std::string toString<P2d>(const P2d& p, const std::string& parms)
{
    return FormatString("P2d(%{x}; %{y})") % FormatDict()
        ("x", p.x)
        ("y", p.y);
}

}
after that a P2d instance can be simply placed in a formatting dictionary.
Also it's possible to pass parameters to a formatting function by placing them between % and {.
For now I only implemented an integer formatting specialization that supports
Fixed size with left/right/center alignment
Custom filling char
Generic base (2-36), lower or uppercase
Digit separator (with both custom char and count)
Overflow char
Sign display
I've also added some shortcuts for common cases, for example
"%08x{hexdata}"
is a hex number with 8 digits, padded with '0's.
"%026/2,8:{bindata}"
is a 24-bit binary number (as required by "/2") with digit separator ":" every 8 bits (as required by ",8:").
Note that the code is just an idea; for now I simply prevented copies, when it's probably reasonable to allow storing both format strings and dictionaries (for dictionaries, however, it's important to provide a way to avoid copying an object just because it needs to be added to a FormatDict; while IMO this is possible, it also raises non-trivial lifetime problems).
UPDATE
I've made a few changes to the initial approach:
Format strings can now be copied
Formatting for custom types is done using template classes instead of functions (this allows partial specialization)
I've added a formatter for sequences (two iterators). Syntax is still crude.
I've created a github project for it, with boost licensing.
The answer appears to be, no, there is not a C++ library that does this, and C++ programmers apparently do not even see the need for one, based on the comments I have received. I will have to write my own yet again.
Well, I'll add my own answer as well; not that I know (or have coded) such a library, but to answer the "keep the memory allocations down" bit.
As always I can envision some kind of speed / memory trade-off.
On the one hand, you can parse "Just In Time":
class Formatter:
    def __init__(self, format_string):
        self._string = format_string

    def compute(self, context):
        # Rescan the raw string on every call; no parsed structure is kept.
        # __contains/__extract/__replace are helpers elided in this sketch.
        for key, value in context.items():
            while self.__contains(key):
                left, variable, right = self.__extract(key)
                self._string = left + self.__replace(variable, value) + right
This way you don't keep a "parsed" structure at hand, and hopefully most of the time you'll just insert the new data in place (unlike Python's, C++ strings are mutable).
However it's far from being efficient...
On the other hand, you can build a fully constructed tree representing the parsed format. You will have several classes like Constant, String, Integer, Real, etc., and probably some subclasses/decorators as well for the formatting itself.
I think, however, that the most efficient approach would be some kind of mix of the two:
explode the format string into a list of Constant, Variable
index the variables in another structure (a hash table with open-addressing would do nicely, or something akin to Loki::AssocVector).
There you are: you're done with only 2 dynamically allocated arrays (basically). If you want to allow a same key to be repeated multiple times, simply use a std::vector<size_t> as a value of the index: good implementations should not allocate any memory dynamically for small sized vectors (VC++ 2010 doesn't for less than 16 bytes worth of data).
When evaluating the context itself, look up the instances. You then parse the formatter "just in time", check it against the current type of the value with which to replace it, and process the format.
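To make the mixed scheme concrete, here is a rough C++ sketch of such a layout (all type names are hypothetical):

#include <string>
#include <unordered_map>
#include <vector>

// One piece of an exploded format string: either literal text or a variable
// reference whose format spec is parsed "just in time" during evaluation.
struct Segment {
    std::string text;   // literal text, or the variable's name and format spec
    bool is_variable;
};

struct ParsedFormat {
    std::vector<Segment> segments;  // first dynamically allocated array
    // name -> positions of that variable's segments; using a vector as the
    // mapped value lets the same key appear multiple times.
    std::unordered_map<std::string, std::vector<size_t>> index;
};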
Pros and cons:
- Just In Time: you scan the string again and again
- One Parse: requires a lot of dedicated classes, possibly many allocations, but the format is validated on input. Like Boost it may be reused.
- Mix: more efficient, especially if you don't replace some values (allow some kind of "null" value), but delaying the parsing of the format delays the reporting of errors.
Personally I would go for the One Parse scheme, trying to keep the allocations down using boost::variant and the Strategy pattern as much as I could.
Given that Python itself is written in C and that formatting is such a commonly used feature, you might be able (ignoring copyright issues) to rip the relevant code from the Python interpreter and port it to use STL maps rather than Python's native dicts.
I've written a library for this purpose; check it out on GitHub.
Contributions are welcome.

Drawbacks of using an integer as a bitfield?

I have a bunch of boolean options for things like "acceptable payment types", which can include things like cash, credit card, cheque, paypal, etc. Rather than having half a dozen booleans in my DB, I can use a single integer and assign each payment method a bit, like so:
PAYMENT_METHODS = (
    (1 << 0, 'Cash'),
    (1 << 1, 'Credit Card'),
    (1 << 2, 'Cheque'),
    (1 << 3, 'Other'),
)
and then query the specific bit in python to retrieve the flag. I know this means the database can't index by specific flags, but are there any other drawbacks?
Why I'm doing this: I have about 15 booleans already, split into 3 different logical "sets". That's already a lot of fields, and using 3 many-to-many tables to save a bunch of data that will rarely change seems inefficient. Using integers allows me to add up to 32 flags to each field without having to modify the DB at all.
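For example, querying one bit of the stored integer is a plain bitwise AND (payment_methods here stands for the stored value):

# True if the 'Credit Card' flag (1 << 1) is set in the stored integer
accepts_credit_card = bool(payment_methods & (1 << 1))

# Setting and clearing flags
payment_methods |= 1 << 0      # start accepting cash
payment_methods &= ~(1 << 2)   # stop accepting cheques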
The main drawback that I can think of is maintainability. Someone writing a query against the database has to understand the bit convention rather than being able to go after a more human-readable set of columns. Also, if one of the "accepted payment types" is removed, the data itself has to be migrated rather than just dropping a column in the table.
This isn't the worst, but there might be a better way.
Define a table called PaymentTypes with the columns:
id, paymentId, key (string), value (boolean)
Now you just populate this table with whatever you want. No long column of booleans, and you can dynamically add new types. The drawback is that the default for all booleans is NULL or false.
Not sure what database you're using, but MySQL has a SET type.
If you could limit your use case to one or more sets of values in which only one bit can be true at a time, perhaps you could use enums in your database. You would get the best of both worlds: maintainable, as btreat notes, and still smaller (and simpler) than several booleans.
Since that's not possible, I'd agree with your initial assessment and go with a bitfield. I would use/create a bitfield wrapper, however, so that your code doesn't deal with flipping and shifting bits directly (which becomes difficult to maintain and debug, as btreat says) but instead treats the flags like a list or dictionary and converts to/from a bitfield when needed.
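A minimal sketch of such a wrapper (the BitField name and interface are made up for illustration):

class BitField:
    """Expose an integer bitfield as a set of named flags."""

    def __init__(self, flags, value=0):
        # flags: mapping of name -> bit mask, e.g. {'cash': 1 << 0, ...}
        self._flags = flags
        self.value = value  # the raw integer that gets stored in the DB

    def __contains__(self, name):
        return bool(self.value & self._flags[name])

    def add(self, name):
        self.value |= self._flags[name]

    def discard(self, name):
        self.value &= ~self._flags[name]

# Usage:
# methods = BitField({'cash': 1 << 0, 'credit_card': 1 << 1, 'cheque': 1 << 2})
# methods.add('cash')
# 'cash' in methods   # -> True
# methods.value       # -> integer to persist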
Some commentary on enums/bitfields in Django
I think the previous posters were both correct. The cleanest way to do it in a "relational" database would be to define a new relation table that stores payment types. In practice though, this is usually more hassle than it's worth.
Using enums in your code and something similar in the DB (check constraints in Oracle, AFAIK) should help keep it maintainable and obvious to the poor soul whose job it will be to add a new payment type, many many years after you've left.