MATLAB Coder build error when trying to access structures - C++

I have a nested structure with fields labeled L1, L2, etc. I try to access a substructure within it using the following code:
lfield = lfidcalc(le);
substruct = bmstruct.(lfield);
Since le changes its value on every iteration, lfield also changes, taking values from L1 through L9.
However, when I try to build this code to generate a C++ executable, I get the following error.
Non-constant expression or empty matrix. This expression must be
constant because its value determines the size or class of some
expression. In this context, the value of the string must be known.
Can anybody please try to help me sort out this problem?

I'm not sure exactly what's causing the error message you're seeing, but in any case MATLAB Coder does not support accessing the fields of a structure using dynamic field names.
Perhaps that is the direct cause of what you're seeing, perhaps not: but in either case you have a problem.
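One common workaround (a sketch only, not tested against your code) is to replace the dynamic field name with a switch over the known field names, so that every field reference is constant at compile time:

% Sketch of a switch-based workaround; it assumes the L1..L9 substructures
% all have the same fields, so every branch assigns a compatible type.
switch le
    case 1
        substruct = bmstruct.L1;
    case 2
        substruct = bmstruct.L2;
    % ... repeat for L3 through L9
    otherwise
        error('Unexpected value of le');
end

If the L1..L9 fields really do share one layout, another option is to restructure the data as a struct array and index it numerically, e.g. substruct = bmarray(le); (bmarray being a hypothetical replacement for bmstruct).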

Not all correct MATLAB code can be converted to C/C++, especially this kind of (very handy) dynamic code. Have you put '%#eml' on the second line of your function? It indicates that the function is intended for code generation. From the MATLAB website: "We use the %#eml directive to turn on the MATLAB M-Lint code analyzer and check the function code for errors and recommend corrections."
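For reference, the directive goes right after the function signature. A trivial, made-up example (%#codegen is the newer spelling of the same directive):

function y = addOne(x)
%#codegen
% The directive above (or the older %#eml) makes the Code Analyzer flag
% constructs that are not supported for code generation.
y = x + 1;
end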

Related

Piecewise function with variable values at domain breakpoints

In our model we need a set of equations
model.y[k] = model.F(model.x[k]) (where k is in a proper Set)
with a piecewise function F(x) defined on a fixed set of domain points (breakpoints), but with unknown, variable (!) values at those breakpoints.
I implemented the case via SOSConstraint(..., sos=2) and it works (tested with the SCIP solver).
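For context, here is a minimal sketch (not my actual model, all names made up) of the kind of SOS2 encoding I mean, for a single (x, y) pair; because the breakpoint values Fval are themselves variables, the y-link constraint is bilinear, which SCIP handles:

# Minimal sketch: SOS2 piecewise link with *variable* values at the breakpoints.
# The y_link constraint is bilinear (lam * Fval), so it needs a solver like SCIP.
from pyomo.environ import (ConcreteModel, Var, Param, Constraint,
                           SOSConstraint, NonNegativeReals, RangeSet)

m = ConcreteModel()
m.BP = RangeSet(1, 4)                                      # breakpoint indices
m.xbar = Param(m.BP, initialize={1: 0, 2: 1, 3: 2, 4: 3})  # fixed breakpoints
m.Fval = Var(m.BP)                                         # unknown values at the breakpoints
m.lam = Var(m.BP, within=NonNegativeReals)                 # convex-combination weights
m.x = Var()
m.y = Var()

m.convexity = Constraint(expr=sum(m.lam[i] for i in m.BP) == 1)
m.x_link = Constraint(expr=m.x == sum(m.lam[i] * m.xbar[i] for i in m.BP))
m.y_link = Constraint(expr=m.y == sum(m.lam[i] * m.Fval[i] for i in m.BP))
m.sos2 = SOSConstraint(var=m.lam, sos=2)                   # at most two adjacent lam nonzero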
Now we want to try other implementations of piecewise functions (LOG, BIGM_BIN, DCC, DLOG, etc.) mentioned in the Pyomo pyomo.core.base.piecewise.Piecewise code. But it seems that this class requires fixed numerical values for F(x) at all breakpoints. If the function passed via the f_rule argument to the Piecewise constructor returns a Pyomo expression containing a variable, then I get the following error:
ERROR: evaluating object as numeric value: ...
Can somebody give advice, or a reference to an example, of a "variable" piecewise function?
E.g., regarding the "LOG" representation (from Vielma et al.), I see the code of the _LOGPiecewise function in the pyomo/core/base/piecewise.py module, and I do not see any reason to forbid using variables as the "y"-values of the piecewise expression being constructed...
Update
I've managed to modify the pyomo.core.base.piecewise module to use variables as the values of the piecewise function. Frankly speaking, these "modifications" amounted to just commenting out some fragments of piecewise.py...
But, for my problem, I did not get any speedup compared with SOSConstraint (with the SCIP solver!).
Can anybody share their experience: which piecewise models were faster for other problems, SOSConstraint or LOG, BIGM_BIN, DCC, DLOG?

Maxima: creating a function that acts on parts of a string

Context: I'm using Maxima on a platform that also uses KaTeX. For various reasons related to content management, this means that we are regularly using Maxima functions to generate the necessary KaTeX commands.
I'm currently trying to develop a group of functions that will facilitate generating different sets of strings corresponding to KaTeX commands for various symbols related to vectors.
Problem
I have written the following function makeKatexVector(x), which takes a string, list or list-of-lists and returns the same type of object, with each string wrapped in \vec{} (i.e. makeKatexVector(string) returns \vec{string} and makeKatexVector(["a","b"]) returns ["\vec{a}", "\vec{b}"] etc).
/* Flexible Make KaTeX Vector Version of List Items */
makeKatexVector(x) := block([placeHolderList : x],
    if stringp(x)        /* Special Handling if x is Just a String */
        then placeHolderList : concat("\vec{", x, "}")
    else if listp(x[1])  /* check to see if it is a list of lists */
        then for j:1 thru length(x)
            do placeHolderList[j] : makelist(concat("\vec{", k, "}"), k, x[j])
    else if listp(x)     /* check to see if it is just a list */
        then placeHolderList : makelist(concat("\vec{", k, "}"), k, x)
    else placeHolderList : "makeKatexVector error: not a list-of-lists, a list or a string",
    return(placeHolderList));
Although I have my doubts about the efficiency or elegance of the above code, it seems to return the desired expressions; however, I would like to modify this function so that it can distinguish between single- and multi-character strings.
In particular, I'd like multi-character strings like x_1 to be returned as \vec{x}_1 and not \vec{x_1}.
In fact, I'd simply like to modify the above code so that \vec{} is wrapped around the first character of the string, regardless of how many characters there may be.
My Attempt
I was ready to tackle this with brute force (e.g. transcribing each character of a string into a list and then reassembling); however, the real programmer on the project suggested I look into "Regular Expressions". After exploring that endless rabbit hole, I found the command regex_subst; however, I can't find any Maxima documentation for it, and am struggling to reproduce the examples in the related documentation here.
Once I can work out the appropriate regex to use, I intend to implement this in the above code using an if statement, such as:
if slength(x) >1
then {regex command}
else {regular treatment}
If anyone knows of helpful resources on any of these fronts, I'd greatly appreciate any pointers at all.
Looks like you got the regex approach working, that's great. My advice about handling subscripted expressions in TeX, however, is to avoid working with names which contain underscores in Maxima, and instead work with Maxima expressions with indices, e.g. foo[k] instead of foo_k. While writing foo_k is a minor convenience in Maxima, you'll run into problems pretty quickly, and in order to straighten it out you might end up piling one complication on another.
E.g. Maxima doesn't know there's any relation between foo, foo_1, and foo_k -- those have no more in common than foo, abc, and xyz. What if there are 2 indices? foo_j_k will become something like foo_{j_k} by the preceding approach -- what if you want foo_{j, k} instead? (Incidentally the two are foo[j[k]] and foo[j, k] when represented by subscripts.) Another problematic expression is something like foo_bar_baz. Does that mean foo_bar[baz], foo[bar_baz] or foo_bar_baz?
The code for tex(x_y) yielding x_y in TeX is pretty old, so it's unlikely to go away, but over the years I've come to increasingly feel that it should be avoided. However, the last time it came up and I proposed disabling it, there were enough people who supported it that we ended up keeping it.
Something that might be helpful, there is a function texput which allows you to specify how a symbol should appear in TeX output. For example:
(%i1) texput (v, "\\vec{v}");
(%o1) "\vec{v}"
(%i2) tex ([v, v[1], v[k], v[j[k]], v[j, k]]);
$$\left[ \vec{v} , \vec{v}_{1} , \vec{v}_{k} , \vec{v}_{j_{k}} ,
\vec{v}_{j,k} \right] $$
(%o2) false
texput can modify various aspects of TeX output; you can take a look at the documentation (see ? texput).
While I didn't expect to work this out on my own, after several hours I made some progress, so I figured I'd share it here in case anyone else can benefit from the time I put in.
To load the regex package in wxMaxima, at least in the macOS version, simply type load("sregex");. I didn't have this loaded and was trying to work through our custom platform, which cost me several hours.
Take note that many of the arguments in the linked documentation by Dorai Sitaram occur in reverse, or in a different order than in their corresponding Maxima versions.
Not all of the "pregexp" functions exist in Maxima.
In addition to this, escaping special characters varied in important ways between wxMaxima, the inline Maxima compiler (running within the Ace editor) and the actual rendered version on our platform; in particular, the inline compiler often returned false for expressions that compiled properly in wxMaxima and on the platform. Because I didn't have sregex loaded in wxMaxima from the beginning, I lost a lot of time to this.
Finally, the regex expression that achieved the desired substitution, in my case, was:
regex_subst("\vec{\\1}", "([[:alpha:]])", "v_1");
which returns vec{v}_1 in wxMaxima (N.B. none of my attempts to get wxMaxima to return \vec{v}_1 were successful; escaping the backslash just does not seem to work; fortunately, the usual escaped version \\vec{\\1} does return the desired form).
I have yet to adjust the code for the rest of the function, but I doubt that will be of use to anyone else, and wanted to be sure to post an update here, before anyone else took time to assist me.
Always interested in better methods / practices or any other pointers / feedback.
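For anyone picking this up later, here's an untested sketch of how the regex could slot into the single- vs multi-character branch described above (the helper name is made up, and it assumes load("sregex") has already been done):

/* Untested sketch: wrap \vec{} around only the first letter when the string
   has more than one character; reuses the exact substitution reported above,
   so it may need adjusting for strings containing several letters. */
makeKatexVectorString(s) :=
    if slength(s) > 1
        then regex_subst("\\vec{\\1}", "([[:alpha:]])", s)
        else concat("\vec{", s, "}");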

C++ unordered_set .find method won't compile, gives "expression must have class type" error

So, I'm trying to write a function that can determine how many strings in a collection are anagrams of other strings in that collection. To do this quickly, I've elected to sort the strings, and then move them into a "valid" or an "invalid" hash set, depending on whether I've found a duplicate. The trouble is, when I try to use the find method for unordered_sets, I get a compile-time error that tells me "expression must have class type".
I've looked around the website, but I didn't see any posts with that error that I recognized as being the same problem.
I'm working in Visual Studio, in C++, and I should mention that the code is not finished; I haven't written anything after the line that's giving me the error. Also, it is specifically the name of the std::unordered_set "valid" that is underlined in red.
It is also worth noting that this code is a work in progress, so there are a few things written down that I may not actually need; for example, I probably won't end up using those long longs (because I've realized that trying to use a single, enormous character array rather than strings is probably more effort than it's worth).
Here is the method I'm working on:
Edit: I removed some of the irrelevant parts of this method due to sensitivities concerning its origin. I apologize for my lack of foresight.
int Anagram_Locator::filterAnagrams()
{
    ...
    //the valid and invalid hash sets
    std::unordered_set<std::string> valid();
    std::unordered_set<std::string> invalid();

    //pull in the words, and sort them. Then, send them to either the valid or invalid string hash sets
    while (std::cin >> transferString)
    {
        ...
        //is it in the valid list?
        std::unordered_set<std::string>::const_iterator found = valid.find(transferString);
    }
}
The last line in this code snippet is the one that is not compiling. This is particularly frustrating to me, because it is written exactly as it was in this C++ guide:
The C++ reference page I was looking at
I would think that this is all the code I would need, but experience has taught me that programming problems often have causes in parts of the code that I think are irrelevant. As such, I have posted the rest of my code below.
Edit: the rest of the code turned out to be irrelevant, so I removed it for clarity.
This appears incorrect:
std::unordered_set<std::string> valid();
std::unordered_set<std::string> invalid();
You are declaring two functions that return sets, not two sets.
Don't you really want:
std::unordered_set<std::string> valid;
std::unordered_set<std::string> invalid;
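For what it's worth, here is a minimal standalone example (not the original class) showing the corrected declarations compiling, with find working as expected:

#include <iostream>
#include <string>
#include <unordered_set>

int main()
{
    // No parentheses: these are default-constructed sets, not function declarations
    std::unordered_set<std::string> valid;
    std::unordered_set<std::string> invalid;

    valid.insert("act");

    // find() now compiles because valid is an object with class type
    std::unordered_set<std::string>::const_iterator found = valid.find("act");
    std::cout << (found != valid.end() ? "found" : "not found") << '\n';
    return 0;
}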

Operations on lists in Prolog

I'm new to Prolog, and what I want to achieve is to perform an operation like append/3 over two lists that are stored in my program. I don't want to open SWI-Prolog and type append([a,b,c],[h,j,k],X). to obtain X=[a,b,c,h,j,k]; what I need is to retrieve two lists from the program and append them.
I don't even know if what I want to obtain is possible in Prolog.
This is my situation: I have this "esempio.pl" file, where I have the following rule and these two facts:
personal_union(F, C, Xs) :-
    personal_list(F, Fs),
    personal_list(C, Cs),
    append(Fs, Cs, Xs).
personal_list(family,[alessandro,cinzia,fabio]).
personal_list(colors,[blu,giallo,lilla,verde,rosso]).
I'd like to consult "esempio.pl" from SWI-Prolog and ask it:
personal_union(family,colors,X).
And obtain the unified lists:
X=[alessandro,cinzia,fabio,blu,giallo,lilla,verde,rosso]
Is my code a possible solution? I couldn't try it because it keeps giving me the error Syntax error: illegal start of term, pointing at the start of the body of my rule.
Your code is valid; I tested it on Swish and it produces the requested result.
The error you get is usually caused by a syntax mistake elsewhere, such as an extra comma or a bracket issue. Check the rest of your code. Also, you might find http://swish.swi-prolog.org/ convenient for this kind of debugging.
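Once the syntax problem elsewhere in the file is fixed, a session like this should work at the SWI-Prolog top level:

?- consult('esempio.pl').
true.

?- personal_union(family, colors, X).
X = [alessandro, cinzia, fabio, blu, giallo, lilla, verde, rosso].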

How do C/C++ compilers know which line an error is on

There is probably a very obvious answer to this, but I was wondering how the compiler knows which line of code my error is on. In some cases it even knows the column.
The only way I can think of to do this is to tokenize the input string into a 2D array, which would store [lines][tokens].
C/C++ could be tokenized into one long 1D array, which would probably be more efficient. I am wondering what the usual parsing method is that keeps line information.
Actually, most of this is covered in the Dragon Book.
Compilers do lexing and parsing, i.e. they transform the source code into a tree representation.
When doing so, each keyword, variable, etc. is associated with a line and column number.
However, during parsing the exact origin of the failure might get lost, and the reported location might be off.
This is the first step on the long, complicated path towards "Engineering a Compiler" or compiler theory.
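To make the "line and column number" point concrete, here is a small, simplified sketch (not how any real compiler is organized) of a tokenizer that records the position where each token starts:

#include <string>
#include <vector>

// Each token remembers where it started, so later phases can report
// "error at line L, column C" without re-scanning the source.
struct Token {
    std::string text;
    int line;
    int column;
};

std::vector<Token> tokenize(const std::string& source)
{
    std::vector<Token> tokens;
    std::string current;
    int line = 1, column = 1;        // current position in the source
    int tokLine = 1, tokColumn = 1;  // position where the current token began

    auto flush = [&]() {
        if (!current.empty()) {
            tokens.push_back({current, tokLine, tokColumn});
            current.clear();
        }
    };

    for (char c : source) {
        if (c == '\n') {             // newline: next line, column resets
            flush();
            ++line;
            column = 1;
        } else if (c == ' ' || c == '\t') {
            flush();
            ++column;
        } else {
            if (current.empty()) {   // first character of a new token
                tokLine = line;
                tokColumn = column;
            }
            current += c;
            ++column;
        }
    }
    flush();
    return tokens;
}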
The short answer is: there's a module called the "front end" that usually takes care of many phases:
Scanning
Parsing
IR generator
IR optimizer ...
The structure isn't fixed, so each compiler will have its own set of modules, but more or less the steps involved in front-end processing are:
Scanning - maps the character stream into words, or tokens (ignoring whitespace and comments)
Parsing - this is where syntax and (some) semantic analysis take place, and where syntax errors are reported
To sum up: the compiler knows the location of your error because, when something doesn't fit into the structure called an "abstract syntax tree" (i.e. it cannot be constructed) or doesn't follow any of the syntax-directed translation rules, well... there's something wrong, and the compiler reports the location where the match failed. If the grammar error is on a single word/token, then even a precise column can be returned, since nothing matched a terminal symbol: a basic token like the if keyword in C/C++.
If you want to know more about this topic, my suggestion is to start with the classic academic approach of the "Compiler Book" or "Dragon Book" and then, later on, possibly study an open-source front end like Clang.