Is it possible to build CLSAG ring signatures on secp256k1?

Is it possible to implement CLSAG ring signatures over secp256k1? And would the implementation be very different from this Ristretto255 variant of CLSAG (https://github.com/crate-crypto/CLSAG)?
As I understand it, it shouldn't matter which curve I pick? So in theory I could also use a NIST curve? I would just have to swap out the curve parameters, or the library, right? I want to implement the whole thing in JavaScript using the Noble libraries.
Did I understand this right so far?
PS: I'm not an expert; I've only used crypto so far, but I want to take a deeper look now. Why do I want to use secp256k1? There's no deep reason; it's simply the curve most familiar to me, and so far one of the best tested.

Related

Sampling from a discrete probability distribution in C++

I am new to C++ and extremely surprised by the lack of accessible, common probability manipulation tools (i.e. the lack of things in Boost and the standard library). I've done a lot of scientific programming in other languages, but the standard and/or ubiquitous third-party add-ons always include a full range of probability tools. A friend billed Boost as the equivalent ubiquitous add-on for C++, but as I read the Boost documentation, even it seems to have a dearth of what I would consider extremely elementary built-ins.
I cannot find a built-in that takes some sort of array of discrete probabilities and produces an index chosen according to those probabilities. I can of course write my own function for this, but I just wanted to check whether I am missing a standard way to do this.
Having to write my own functions at such a low level is a bad thing, I feel, but I am writing a new simulation module for a larger project that is all in C++. My usual go-to tactic would be to write it in Python and link the Python to the C++, but because several other people are going to have to maintain this code once I finish it, and none of them know Python, I think it would be more prudent to deliver it to them all in C++.
More generally, what do people do in C++ for things like sampling from standard distributions, in particular something as basic as a multi-variate normal distribution?
Perhaps I'm misunderstanding your intention, but it seems to me what you want is simply std::discrete_distribution.
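For instance, here is a minimal sketch (the weights are arbitrary) of drawing indices according to a set of discrete probabilities with std::discrete_distribution:

```cpp
#include <iostream>
#include <random>

int main() {
    // Weights need not sum to 1; the distribution normalizes them.
    std::discrete_distribution<int> dist({0.1, 0.3, 0.6});
    std::mt19937 gen(std::random_device{}());

    // Each call yields an index (0, 1, or 2 here) drawn according to
    // the weights, so 2 should come up about 60% of the time.
    for (int i = 0; i < 10; ++i)
        std::cout << dist(gen) << ' ';
    std::cout << '\n';
}
```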
Did you look at Boost.Math.StatisticalDistributions? Specifically, its Discrete Probability Distributions?
Boost is not a library, it's a collection of libraries, so it can sometimes be difficult to find exactly what you're looking for – but that doesn't mean it isn't there. ;-]
As mentioned, you'll want to look at boost/math/distributions and friends to meet your needs.
Here's a very good, detailed tutorial on getting these working for you in Boost. You may also want to throw your weight behind Stan, which looks quite promising in this space.
Boost's math libraries are pretty good for working with different distributions, but if you are only interested in sampling (as in the problem you mentioned in your post), then looking at the boost Random libraries might be more germane to your task. This link shows how to simulate rolling a weighted die, for example.
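For example, the weighted-die idea from that link looks roughly like this (a sketch from memory; Boost.Random's discrete_distribution mirrors the standard one, so double-check the exact constructor against your Boost version):

```cpp
#include <boost/random/discrete_distribution.hpp>
#include <boost/random/mersenne_twister.hpp>
#include <iostream>

int main() {
    // A die weighted to land on six half the time.
    double probabilities[] = {0.1, 0.1, 0.1, 0.1, 0.1, 0.5};
    boost::random::mt19937 gen;
    boost::random::discrete_distribution<> die(probabilities);

    // die(gen) returns an index 0..5; +1 maps it to a die face.
    for (int i = 0; i < 10; ++i)
        std::cout << die(gen) + 1 << ' ';
    std::cout << '\n';
}
```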
You should do less C++ bashing, and more question asking - we try to be helpful and respectful on SO. Questions like yours are often tagged as inflammatory.
Boost::math seems to provide exactly what you're looking for: https://www.quantnet.com/cplusplus-statistical-distributions-boost/ - I'm not 100% sure about how well it handles multi-variate distributions though (nor am I an expert on statistics).
Get it here: http://www.boost.org/doc/libs/1_49_0/libs/math/doc/html/index.html

Any way to write a unit test framework in Scheme without either apply or macros?

I am wondering if there is a way to write a test framework (a very small one, just as an interesting example of Scheme code) that uses neither APPLY nor macros. I suppose not, since any test framework would need to at least get a list of arguments and apply procedures to them.
You could do that if you only use thunks for computations that you want to test. But both macros and apply will generally make it more convenient to use and to implement. (You should probably also have a look at the number of lightweight testing frameworks floating around.)
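To make the thunk idea concrete outside of Scheme, here is a rough transposition into C++ (purely illustrative; all names are made up): each test wraps its computation in a zero-argument callable, so the runner only ever invokes thunks and never needs apply.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A test is just a name plus a thunk: a zero-argument callable
// that performs the computation and reports success or failure.
struct Test {
    std::string name;
    std::function<bool()> thunk;
};

int main() {
    std::vector<Test> tests = {
        {"addition", [] { return 1 + 1 == 2; }},
        {"comparison", [] { return 2 < 1; }},  // deliberately failing
    };
    // The runner never applies a procedure to an argument list;
    // it only calls thunks.
    for (const auto& t : tests)
        std::cout << t.name << ": " << (t.thunk() ? "PASS" : "FAIL") << '\n';
}
```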
I actually wrote such a thing: https://github.com/yawaramin/ggspec/tree/8f88d4641ab603b42510b88bdb3ebaed699d4803
Used a lot of thunks everywhere. Not very elegant from the perspective of the API user. But I have since re-implemented it using macros, which made it a lot more convenient to use.

Object Reflection

Does anyone have any references for building a full object/class reflection system in C++?
I've seen some crazy macro/template solutions; however, I've never found a system that solves everything to a level I'm comfortable with.
Thanks!
Using templates and macros to automatically, or semi-automatically, define everything is pretty much the only option in C++. C++ has very weak reflection/introspection abilities. However, if what you want to do is mainly serialization and storage, this has already been implemented in the Boost Serialization libraries. You can do this either by implementing a serializer method on the class, or with an external function if you don't want to modify the class.
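For the serializer-method route, a minimal sketch of Boost.Serialization's intrusive approach (the class, its members, and the file name are illustrative):

```cpp
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/string.hpp>
#include <fstream>
#include <string>

// Intrusive approach: the class grants the library access and
// provides a serialize() method covering its members.
class Person {
    friend class boost::serialization::access;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & name_;
        ar & age_;
    }

    std::string name_;
    int age_ = 0;

public:
    Person() = default;
    Person(std::string name, int age) : name_(std::move(name)), age_(age) {}
};

int main() {
    std::ofstream ofs("person.txt");
    boost::archive::text_oarchive oa(ofs);
    const Person p("Ada", 36);
    oa << p;  // writes the object's state to the archive
}
```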
This doesn't seem to be what you were asking, though. I'm guessing you want something like automatic serialization which requires no extra effort on the part of the class implementer. They have this in Python, Java, and many other languages, but not C++. In order to get what you want, you would need to implement your own object system like, perhaps, the meta-object system that IgKh mentioned in his answer.
If you want to do that, I'd suggest looking at how JavaScript implements objects. JavaScript uses a prototype-based object system, which is reasonably simple, yet fairly powerful. I recommend this because it seems to me like it would be easier to implement if you had to do it yourself. If you are in the mood for reading a VERY long-winded explanation of the benefits and elegance of prototypes, you can find an essay on the subject at Steve Yegge's blog. He is a very experienced programmer, so I give his opinions some credence, but I have never done this myself, so I can only point to what others have said.
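As a rough illustration of the prototype idea (a toy sketch, not a real object system; all names are made up), property lookup simply falls back along a chain of prototype objects:

```cpp
#include <iostream>
#include <map>
#include <string>

// A toy prototype-based object: a property map plus a link to a
// prototype. Lookup falls back to the prototype, as in JavaScript.
struct Object {
    std::map<std::string, std::string> properties;
    const Object* prototype = nullptr;

    const std::string* get(const std::string& key) const {
        auto it = properties.find(key);
        if (it != properties.end()) return &it->second;
        return prototype ? prototype->get(key) : nullptr;
    }
};

int main() {
    Object base;
    base.properties["species"] = "cat";

    Object felix;
    felix.prototype = &base;
    felix.properties["name"] = "Felix";

    // "name" is found on felix itself; "species" via the prototype.
    std::cout << *felix.get("name") << " is a " << *felix.get("species") << '\n';
}
```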
If you wanted to stick with the usual C++ style of classes and instances instead of the less familiar prototypes, look at how Python objects and serialization work. Python also uses a "properties" approach to implement its objects, but there the properties are used to implement classes and inheritance rather than a prototype-based system, so it may feel a little more familiar.
Sorry that I don't have a simpler answer to your question! But hopefully this will help.
I'm not entirely sure that I understood your intention; however, the Qt framework contains a powerful meta-object system that lets you do most operations expected from a reflection system: getting the class name as a string, checking if an object is an instance of a given type, listing and invoking methods, etc.
I've used ROOT's Reflex library with good results. Rather than using crazy macro/template solutions like you described, it processes your C++ header files at build time to create reflection dictionaries, then operates off of those.

Testing approach for algorithms with complex outputs

How do you test the result of a program that is basically a black box? For example, one year ago I had to write a B-tree as homework, and I really struggled with testing its correctness. What strategies do you use in such scenarios? Visualization? Robust input-->result sets of testing data? What do you do when it is hard to get such data because the only way to get it is from your own, properly working program?
EDIT: I think my question was misunderstood. There was no problem with understanding how a B-tree works; that is trivial. But writing robust tests to validate its proper functionality is not so trivial. I think this school problem is similar to many practical, real-world scenarios and test cases. And sometimes understanding the domain is quite different from delivering a working and correct program...
EDIT2: And yes, with a B-tree it is possible to validate proper behavior with pen and paper. But this is really messy and not fun :) And it does not work well with problems that require huge amounts of data for their validation...
I'm not sure these answers really capture the problem at hand. A B-tree's input and output aren't any different from those of any other dictionary; the algorithm just performs better, if it's implemented correctly. It really only has two functions to test (add and find), so theoretically, "black-box" testing of this single component should be fine. Designing for testability isn't the issue, since no matter how you do it, the whole algorithm will be one component.
So the question is: when you have to implement subtle algorithms, the kind with complicated output that you can't always fully work through in your head, how do you test them? I think there are three different strategies you can use:
1. Black-box test basic functionality. For the B-tree case, this means things like cwash suggested, and also things like making sure that when you add an item, you can then find it, etc.
2. Test invariants that your algorithm should maintain (the B-tree should be balanced, values within nodes should be sorted, etc.); see the sketch after this list.
3. A few small "pencil-and-paper" tests may be necessary: work the algorithm out by hand and check that your code matches. But the big-data tests can all be of type 2. Pencil-and-paper tests can also be brittle, so unless you need to be really sure about your algorithm, you may want to avoid them.
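To make strategy 2 concrete, here is a minimal sketch of an invariant checker for a B-tree (the Node layout below is hypothetical; adapt it to your actual structure):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical minimal node layout, for illustration only.
struct Node {
    std::vector<int> keys;
    std::vector<Node*> children;  // empty for a leaf
};

// Checks two invariants and returns the subtree's leaf depth:
// keys within each node are strictly sorted (unique keys assumed),
// and every leaf sits at the same depth, i.e. the tree is balanced.
int checkInvariants(const Node* n) {
    for (std::size_t i = 1; i < n->keys.size(); ++i)
        assert(n->keys[i - 1] < n->keys[i]);
    if (n->children.empty())
        return 0;  // a leaf anchors the depth count
    // An internal node with k keys must have k + 1 children.
    assert(n->children.size() == n->keys.size() + 1);
    int depth = checkInvariants(n->children[0]);
    for (std::size_t i = 1; i < n->children.size(); ++i)
        assert(checkInvariants(n->children[i]) == depth);
    return depth + 1;
}

int main() {
    Node left{{1, 2}, {}}, right{{7, 9}, {}};
    Node root{{5}, {&left, &right}};
    checkInvariants(&root);  // passes: sorted keys, equal leaf depths
}
```

The nice thing about this style is that you can run every large randomized insertion test through the checker without ever computing an expected tree by hand.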
If you do not grasp the problem at hand, how can you develop a solution to it? My suggestion would be to understand the domain enough to be able to work out the problem on paper and ensure that your program matches.
Consult with an expert on the subject.
I know if I have a convoluted procedure I'm trying to fix, I have no idea what the output should be after my changes, so I need to consult a fellow developer with more knowledge of the business need, and they are able to verify what I've done is correct.
I would focus on constructing test cases that exercise the functionality of your B-tree algorithm. I haven't looked at it for years, but I'm fairly sure you'll be able to find a documented sequence of steps to insert a set of values in a specific order, then validate that the leaf nodes are as they should be. If you construct your testing along these lines, you should be able to prove your implementation is correct.
The key is knowing that there is a balance between testing something to death and writing tests that adequately cover what should be covered. Edge cases, e.g. null inputs, or checking that inputs are numeric by feeding in an alphabetic or punctuation character, are likely most of the tests you'd need. To complement this, there may be one or two common cases to show the program can handle a non-edge case as well. Covering all valid input is overkill for most programs and would result in an overwhelmingly large number of tests.
I think the answer to the question you're asking boils down to designing for testability. Often you get a testable design for free when you test-drive the development of the solution. But let's face it, when you're implementing a highly mathematical algorithm, this just doesn't fall out.
To make sure you have a testable design, you need to understand what a seam is. Then you need to know a few rules of thumb, such as avoiding statics, using polymorphism, and properly decomposing problems and separating concerns.
Watch "The Clean Code Talks -- Unit Testing" by Misko Hevery, I think it will help you wrap your head around it.
Try looking at it from a requirements point of view, rather than an implementation point of view. Before you write code, you must understand exactly what you want it to do.
Testing and requirements should be a matching pair. If you're having trouble defining tests, maybe it's because the requirements are not well-defined. That in turn implies that you may have bugs that aren't so much implementation bugs as "lack of clear requirements" bugs. The code writer in that case would be working to a mental list of what he/she thinks the requirements are, but can't be sure, and they're not written down for independent understanding and verification.
I've struggled with software where the requirements weren't clear, because the customer couldn't even tell us what they wanted. But when we delivered to them, they sure could tell us then what they didn't like about it! A big part of software engineering is getting the requirements right before the coding begins. This is true at the high level (the overall product, with requirement input from the customer) and also at the smaller level (modules and individual functions, where requirements are defined internally by the software team or individuals). It is still true to some degree, I think, for iterative development, although the high-level requirements are more fluid.
@Bystrik Jurina,
I often get involved in projects which involve conversions between disparate data formats. Most answers have focused on testing a B-tree or similar algorithm, but it seems that you're looking for a more general answer.
Most of my work is based on the command line. It may sound like a contradiction, but one of the first tools I use is visualization. I'll write some methods to write out my data structures in a format that's easy to consume. This can (and usually does) include something that's visually clear, but often it also means something I could easily parse with a smaller test program, or even import into Excel.
I'll start by focusing on the basic outline and write a program that does the bare minimum of what I need to accomplish. If it's a multi-step process, this might mean implementing one step at a time and validating the results of each step before moving on, or writing something that works only in specific cases and then expanding the set of cases where it's expected to work. At first you can validate that the code works for a limited set of cases, such as known input data. As the project moves forward, you can start logging warnings for cases you might not have tested, or for unexpected types of input data. This has drawbacks, but it is a nice approach when you're dealing with a known set of input data.
Validation techniques can include formal test cases, or informal programs that work to challenge your assumptions. It could mean writing a basic driver program to exercise the "core" routines. A good example would be to add a record to a database, then read it back and compare the original object against the one loaded from the database.
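For instance, a round-trip check of that kind might look like the following sketch (the Record type and the std::map standing in for the database are made up so the example is self-contained; a real test would hit the actual storage layer):

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative record type with value equality for comparison.
struct Record {
    int id = 0;
    std::string payload;
    bool operator==(const Record& other) const {
        return id == other.id && payload == other.payload;
    }
};

std::map<int, Record> db;  // stand-in for the real database

void save(const Record& r) { db[r.id] = r; }
Record load(int id) { return db.at(id); }

int main() {
    // The round-trip check: write a record, read it back, compare.
    Record original{42, "hello"};
    save(original);
    assert(load(42) == original);
}
```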
If you have trouble wrapping your head around the way a program functions, think about what it needs to accomplish. It might be easier to write code that tests how different inputs produce different outputs. Producing visualizations is a good help, because the act of deciding how to display the data can make you think about different conditions and focus in on the most critical parts of your data structures.
Often I've found that building a visualization brings me to admit that the way the data is being stored just isn't very clear. For a B-tree, the representation isn't very flexible. But for other cases, you may be using parallel arrays when a nested tree of objects would be more natural.

How do you glue Lua to C++ code?

Do you use Luabind, toLua++, or some other library (if so, which one) or none at all?
For each approach, what are the pros and cons?
I can't really agree with the 'roll your own' vote: binding basic types and static C functions to Lua is trivial, yes, but the picture changes the moment you start dealing with tables and metatables; things get trickier very quickly.
LuaBind seems to do the job, but I have a philosophical issue with it. To me it seems that if your types are already complicated, the fact that Luabind is heavily templated is not going to make your code any easier to follow; as a friend of mine said, "you'll need Herb Sutter to figure out the compilation messages". Plus it depends on Boost, compilation times take a serious hit, etc.
After trying a few bindings, Tolua++ seems the best. Tolua doesn't seem to be under much development anymore, whereas Tolua++ seems to work fine (plus half the 'Tolua' tutorials out there are, in fact, 'Tolua++' tutorials; trust me on that :). Tolua++ does generate the right stuff, the source can be modified, and it seems to deal with complicated cases (like templates, unions, nameless structs, etc.).
The biggest issues with Tolua++ seem to be the lack of proper tutorials and pre-set Visual Studio projects, and the fact that the command line is a bit tricky to follow (your paths/files can't contain whitespace, in Windows at least, and so on). Still, for me it is the winner.
To answer my own question in part:
Luabind: once you know how to bind methods and classes via its awkward template syntax, it's pretty straightforward and easy to add new bindings. However, Luabind has a significant performance impact and shouldn't be used for realtime applications: roughly 5-20 times more overhead than calling C functions that manipulate the stack directly.
I don't use any library. I used SWIG to expose a C library some time ago, but there was too much overhead, and I stopped using it.
The pros are better performance and more control, but it takes more time to write.
Use the raw Lua API for your bindings, and keep them simple. Take inspiration from the API itself (the auxiliary library) and from libraries by the Lua authors.
With some practice, the raw API is the best option: maximum flexibility and a minimum of unneeded overhead. You get exactly what you want and no more, the way you need it to be.
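For illustration, here is a minimal sketch of a raw-API binding (assuming Lua 5.x; the bound function is made up):

```cpp
#include <iostream>

// Lua's headers are C; wrap them for C++.
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

// A C function exposed to Lua: takes one number, returns its square.
// Every bound function has this signature and works the stack directly.
static int l_square(lua_State* L) {
    double x = luaL_checknumber(L, 1);  // argument 1, with type checking
    lua_pushnumber(L, x * x);           // push the result
    return 1;                           // number of return values
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    lua_register(L, "square", l_square);  // visible in Lua as square()

    if (luaL_dostring(L, "print(square(7))") != 0)
        std::cerr << lua_tostring(L, -1) << '\n';

    lua_close(L);
}
```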
If you must bind large third-party libraries, use automated generators like tolua or tolua++ (or even roll your own for the specific case). That would free you from the manual work.
I would not recommend using Luabind. At the moment its development has stalled (though it is starting to come back to life), and if you hit some corner case, you may be on your own. Also, Luabind heavily uses template metaprogramming. This may (or may not) be acceptable, depending on your point of view.