self-defined type cast to string in VHDL - casting

I created a VHDL package with a new type as you can see below:
package useOfLength is
    type length1 is range -(2**31-1) to 2**31-1
        units
            um;
            mm   = 1000 um;
            cm   = 10 mm;
            m    = 100 cm;
            inch = 25400 um;
            yard = 914400 um;
            foot = 304800 um;
        end units;
end;
In my testbench I am trying to report the value of a signal "output" of type length1:
report "The result is " & to_string(output);
Modelsim gives me the following error message:
** Error: G:/OneDrive - Office/SS2018/DRS/exercise6/1106ex03/ex03_tb.vhd(49): (vcom-1136) Unknown identifier "to_string".
I have tried a lot (casting to integer and then to string) and other approaches, but nothing works. I have also found conversions on the internet, but only for predefined types, not for self-defined ones.
I am also new to VHDL and maybe this is a stupid question...

Okay, I found out that a physical type is handled like an integer, so converting it to a string works with the 'image attribute:
length1'image(output);

How can I initialize an array in C++ using a variable initialized at runtime? [duplicate]

What I would like to do is declare an array of size "dim": int A[dim];.
This works if I declare something like const int dim = 1, but not with const int dim = round(x);, which is what I need to do (where x comes from cin >> x).
Note: by "doesn't work" I mean that Visual Studio Code puts a red wavy line under dim in int A[dim]; and displays the following when I hover over it:
expression must have a constant value C/C++(28)
main.cpp(15, 11): the value of variable "dim" (declared at line 13) cannot be used as a constant
This is the relevant code:
#include <cmath>     // for round
#include <iostream>
using namespace std;

int main() {
    float x;
    cin >> x;
    const int dim = round(x);
    int A[dim];
    int i = 0;
}
Given the context, I believe the error is caused by one of two things:
Some characteristic of round() that prevents const int dim = round(x) from being recognized as a constant when the array is declared later.
The problem is x rather than round(), so cin >> x is the reason.
[Thanks to anyone who can explain what I'm missing or point me to documentation that does. I have done some research but haven't found a solution. This is also my first question on SO, so tell me if I should change or improve anything.]
EDIT: Apparently the problem isn't in round(x) as I previously thought, because simply replacing const int dim = round(x); with const int dim = x; gives the same "error".
So the problem has to do with cin >> x.
EDIT 2 Note: I'm looking for a solution that doesn't use std::vector. We haven't studied it in the course yet, so I believe the algorithm (from which I took the relevant code) shouldn't require it.
Final Edit: I didn't realize that, as @paulmckenzie clarified, using cin makes the size dynamic because the input arrives at runtime. It was a really basic mistake; I'm a beginner, and in my defense we haven't covered dynamically sized arrays yet, which is what threw me off. I knew from the start I was missing something basic. Sorry for the noise; I'll analyze things more thoroughly before posting next time.
The size of an array variable must be a compile-time constant in C++. User input is not a compile-time constant, hence it cannot be used as the size of an array variable.
In order to create an array with runtime size, you must instead create a dynamic array. The simplest way to do that is to use std::vector from the standard library.
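A minimal sketch of that approach, assuming the only goal is an array-like container sized from the input (the variable names follow the question, the rest is illustrative):
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    float x;
    cin >> x;
    const int dim = round(x);
    vector<int> A(dim);       // dim elements, all zero-initialized
    A[0] = 42;                // used just like a plain array
    cout << A.size() << endl;
}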
EDIT 2 Note: I'm looking for a solution that doesn't use std::vector.
It's possible to create a dynamic array without std::vector, but that requires the use and understanding of more advanced concepts. Using new expressions directly is more difficult and error-prone, and is something that isn't (or shouldn't be) done directly in most programs in practice.
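If std::vector really is off limits, a bare new[]/delete[] sketch would look roughly like this (no error handling, purely illustrative):
#include <cmath>
#include <iostream>
using namespace std;

int main() {
    float x;
    cin >> x;
    const int dim = round(x);
    int* A = new int[dim]{};   // heap array sized at runtime, zero-initialized
    A[0] = 42;
    delete[] A;                // unlike std::vector, you must free it yourself
}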
Of course, another solution is to just not use user input but rather an array with constant size.
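A sketch of that constant-size alternative, with an arbitrary upper bound MAX_DIM and only the first dim elements used:
#include <cmath>
#include <iostream>
using namespace std;

int main() {
    const int MAX_DIM = 1000;   // compile-time constant, so the declaration is legal
    int A[MAX_DIM];
    float x;
    cin >> x;
    const int dim = round(x);
    if (dim < 0 || dim > MAX_DIM) return 1;   // reject sizes the buffer cannot hold
    for (int i = 0; i < dim; ++i) A[i] = i;   // only the first dim slots are used
}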

boost::multiprecision

I have just started using boost::multiprecision to speed up some calculations previously done in Matlab, but I ran into an unexpected problem. My calculations involve complex numbers, so I am using the cpp_complex_50 type (e.g. cpp_complex_50 A, B;).
At some point I need to use the boost::math::tools::bracket_and_solve_root() function, which requires that the function it works on returns real values. Here comes my problem: I cannot convert my complex multiprecision value A.real() to any real type, e.g. to the cpp_dec_float_50 type or even double. The task should be straightforward, but I am drowning in error messages from my compiler (MSVC 2015) and cannot solve it. Any hints on how to convert the data are more than welcome.
A somewhat related question is the initialization of cpp_complex_50 variables with real values. At the moment I can only use data of type double at initialization, which means I am already losing some accuracy at the initialization stage, e.g.:
cpp_complex_50 A = 4.0 * boost::math::constants::pi<double>(); // it works
but
cpp_complex_50 A = 4.0 * boost::math::constants::pi<cpp_dec_float_50>(); // It does NOT work
Any hints are welcome. I am stuck on this, despite nice initial results.
Regards
Pawel
cpp_complex uses cpp_bin_float.
Live On Compiler Explorer
#include <boost/math/constants/constants.hpp>   // boost::math::constants::pi
#include <boost/multiprecision/cpp_complex.hpp>
#include <iostream>

namespace bmp = boost::multiprecision;

int main() {
    using Complex = bmp::cpp_complex_100;
    using Real    = Complex::value_type;   // the underlying real (cpp_bin_float) type

    // Compute pi at full precision, then build the complex value from it.
    Real r = 4.0 * boost::math::constants::pi<Real>();
    Complex b(r, {});
    // or
    b = r.convert_to<Complex>();

    std::cout << b.str(100) << std::endl;
}
Prints
12.56637061435917295385057353311801153678867759750042328389977836923126562514483599451213930136846827
Following the valuable comment from sehe, the code
cpp_complex_50 A = 4.0 * boost::math::constants::pi<cpp_bin_float_50>();
cout << A << endl;
works, producing:
12.5663706143591729538505735331180115367886775975
Similarly,
cpp_bin_float_50 B = A.real();
cout << B << endl;
works as well, printing the same.
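For the bracket_and_solve_root() use case, where a built-in real type may be wanted, the same convert_to member should take the real part down to double as well; a small sketch with arbitrary values:
#include <boost/multiprecision/cpp_complex.hpp>
#include <iostream>

int main() {
    using boost::multiprecision::cpp_complex_50;

    cpp_complex_50 A(2.5, 1.0);                    // arbitrary test value
    double d = A.real().convert_to<double>();      // narrow the real part to double
    // static_cast<double>(A.real()) should also work via the explicit conversion operator.
    std::cout << d << std::endl;                   // prints 2.5
}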

Is there a built-in function for the length of boost::numeric::interval

I'm typing the following in my code
boost::numeric::interval<double> foo = ...;
double length = std::abs(foo.upper() - foo.lower());
It surprises me I can't find a simple utility in boost::numeric::interval that calculates the length. I'd expect a length function so that I can do
boost::numeric::interval<double> foo = ...;
double length = length(foo);
Is this either
missing from the library as an oversight
missing from the library for good reason as it doesn't fit the concept
there but under another name which I've failed to see
Obviously I can write a simple inline function to do this, but I'm a bit surprised that I have to, given that I see functions like min and max.
I just found it after posting the question.
The answer is "width":
boost::numeric::interval<double> foo = ...;
double length = width(foo);
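For reference, a minimal compilable sketch (the bounds are arbitrary):
#include <boost/numeric/interval.hpp>
#include <iostream>

int main() {
    boost::numeric::interval<double> foo(1.0, 3.5);
    double length = boost::numeric::width(foo);   // essentially upper() - lower()
    std::cout << length << std::endl;             // 2.5
}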

Typescript type casting not working

I am pretty new to TypeScript. I started with a book called Typescript Revealed (Pub Feb.2013). In Chapter 2 there is a section called "Casts" that has the following example:
var a : int = <int>SomeNumberAsAString;
I tried to apply the example, as follows:
var SomeNumberAsAString = "1000";
var a: int = <int>SomeNumberAsAString;
But compiler gave me an error:
hello.ts(2,8): error TS2304: Cannot find name 'int'.
hello.ts(2,15): error TS2304: Cannot find name 'int'.
I'm wondering how to do this cast, or has the specification of Typescript changed?
(Pub Feb.2013)
That book is old. It's called number now.
var SomeNumberAsAString = "1000";
var a: number = <number><any>SomeNumberAsAString;
Also this assertion is very unsafe and I would not do this in production code. But it gets the point across :)
More
A more up-to-date book chapter on assertions: https://basarat.gitbooks.io/typescript/content/docs/types/type-assertion.html
https://basarat.gitbook.io/typescript/type-system/type-assertion
I've read @basarat's answer and decided to post my own, since I strongly believe that there's still some room for explanation.
Be warned: <number><any> casting won't produce a number. In fact it will allow your code to compile (so you'll pass all static type checks), but it won't be a number in JavaScript. Consider this snippet of code:
let str = "1000";
let a: number = <number><any>str;
let b: number = parseInt(str); //or just let b = parseInt(str)
console.log(typeof a); // string
console.log(typeof b); // number
I can hardly imagine cases where the a variant is preferable to the b variant. I'd go with parseInt, parseFloat, or Number, whatever fits best. <T><any> casting looks smart, but you must be 100% sure what you intend to achieve with that idiom.
And in most cases you probably don't want to achieve that :)
Here is the cleanest way to do it.
const numberString = '1000';
const a: number = numberString as any;
I prefer this variant
let SomeNumberAsAString = '1000';
let a = +SomeNumberAsAString;
console.log(a);
Alternatively, with an explicit type assertion:
const b: number = <number>+SomeNumberAsAString;
+SomeNumberAsAString converts the string value to a number.
<number> in front of +SomeNumberAsAString tells the TypeScript compiler that the value should be treated as type number.

How to build a recursive descent parser

I've been working on a recursive descent parser for a simple calculator. When something is declared, it is declared as either an int or a float. Currently I am saving the names in two different vectors, one for int and one for float. At this point I don't care what the associated values are; I simply care that the name is declared before it is used.
My issue is that I have to output a warning message if an int and a float are used together in an operation such as float + int.
So an expression is term + expression, term - expression, or term. In recursive descent, how could I check whether an int is being used in an operation with a float? Sorry if the explanation is not clear; I'm finding it a bit difficult to explain. I can add more code if necessary, I just didn't want to flood the question with it.
Edit:
There is still a bunch of code missing; I figured I'd include just the important part, but I can upload the entire thing if need be. I see some people didn't understand what the main question was. One of the requirements is: "When integer and float values are mixed in +, -, * and /, the integer is converted to a float. Print a message indicating the line number and that a conversion would be required."
At the moment the program reads from a file. If you write "int x;", the program saves x in the int vector; then when you write something such as x=5;, it acknowledges that x has been declared and the assignment passes. My issue is: if you write int x; float y; int z; x=5; y=7.5; z=x+y;, how can I check for the mixed types, given that my program currently only saves each variable's type and not its value? Essentially I'm wondering whether I could do something like scan the completed parse as if it were a string, or use some other method to find out whether an operation mixes int and float.
The lexical scanner was created with flex.
class Token {
    Tokentype type;
    string value;
    int linenum;
public:
    Token(Tokentype t, string v = "", int line = 0) {
        type = t;
        value = v;
        linenum = line;   // was never initialized before; the scanner should pass the current line
    }
    Tokentype getType() { return type; }
    string getValue() { return value; }
    int getLinenum() { return linenum; }
};

// Names declared so far, kept separately by declared type.
vector<string> int_list;
vector<string> float_list;

class PTree {
    PTreeNodetype type;
    PTree *left;
    PTree *right;
public:
    PTree(PTreeNodetype t, PTree *l = 0, PTree *r = 0) {
        type = t;
        left = l;
        right = r;
    }
    PTreeNodetype getType() { return type; }
};

// expr ::= term PLUS expr | term MINUS expr | term
PTree *
Expr() {
    PTree *term = Term();
    Token *t;
    if (!term)
        return 0;
    t = getToken();
    if (t == NULL)
        return 0;
    if (t->getType() != T_SC) {
        if (t->getType() == T_RPAREN) {
            pushbacktoken(t);
            return new PTree(EXPR, term);
        }
        if (t->getType() != T_PLUS && t->getType() != T_MINUS) {
            cout << t->getLinenum() << ":" << "Error: expected + or -" << endl;
            pushbacktoken(t);   // don't delete a token that has been pushed back
            return 0;
        }
        delete t;
        PTree *expr = Expr();
        if (!expr)
            return 0;
        return new PTree(EXPR, term, expr);
    }
    pushbacktoken(t);   // leave the semicolon for the caller
    return new PTree(EXPR, term);
}
I think you need to explain the structure of your code a little more.
In an interpreter like you are talking about normally there are three things going on:
A lexer/scanner is generating a token stream
A parser is taking the token and building semantic objects
An interpreter is consuming the semantic object tree and executing it
Stage 1 doesn't need to care that you are adding an int and a float. Stage 2 can populate a warning field in your semantic object/struct that the interpreter prints when it sees it populated, or the interpreter can recognize this warning condition itself.
To provide you any more detail or use more specific terminology we would need to see how you are representing operations.
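As a rough illustration of the warning-field idea (the names and the type enum are invented for the sketch, not taken from the question):
#include <string>

enum class ValueType { Int, Float };

// Hypothetical semantic object built by the parser (stage 2).
struct SemNode {
    ValueType type;        // result type of this (sub)expression
    int linenum = 0;       // where to report a problem
    std::string warning;   // non-empty if, e.g., an int operand had to be promoted to float
};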
Two options that I see, depending on what you are doing.
First. Don't worry about it while you are building the parse tree. Later, when you walk the tree, you can easily check this and throw an error.
Second. Use different rules for int and float. So you would have a rule for adding two ints and a rule for adding two floats. This also means you wouldn't have a number rule, which I am guessing you do, that mixes both ints and floats.
I definitely recommend the first way.
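A standalone sketch of the first option; it deliberately uses its own tiny node type instead of the PTree from the question, so the member names are not the question's:
#include <iostream>

enum class ExprType { Int, Float };

struct Node {
    ExprType leafType{};   // meaningful only for leaves (identifiers/constants)
    int line = 0;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Walk the finished tree bottom-up: compute each subexpression's type
// and warn whenever an int meets a float.
ExprType checkTypes(const Node* n) {
    if (!n->left && !n->right)
        return n->leafType;
    if (!n->right)                         // unary / pass-through node
        return checkTypes(n->left);
    ExprType l = checkTypes(n->left);
    ExprType r = checkTypes(n->right);
    if (l != r)
        std::cout << n->line << ": warning: integer operand converted to float\n";
    return (l == ExprType::Float || r == ExprType::Float) ? ExprType::Float
                                                          : ExprType::Int;
}
Calling checkTypes on the root after parsing succeeds would print one warning per mixed operation and return the overall result type.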
Calculators don't traditionally "declare" things, so it's unclear what your calculator knows when it is parsing an expression.
If I assume that you "declare i int, r real" before the expression "i*r" is parsed, you seem to have several questions:
a) how do you know as you parse whether i and r have been declared? The technical answer is that during parsing you don't have to know; you can parse, build up a tree, and do such checking later. On a practical level, people often weave symbol lookups into the parsing process (this gets messier as your language gets bigger, so it isn't recommended for anything beyond calculators [you'll discover that most C compilers do this, adding to their messiness]). The answer is easy: keep a list of defined symbol strings around, and when you encounter an identifier, look to see if it's in the list.
b) how do you know the type of "i" or "r"? Easy. Associate the declared type with the symbol string, e.g., <i, integer>, <r, real>. Such associated sets of declarations are commonly called symbol tables.
c) how do you know if operations are operating on the same ("the right") kind of values? Here you need to associate with every operand its "type". Constants have an obvious type: 1.0 is real, 1 is integer. "i" is integer, and your parser knows it because it looked up the type (above); similarly for "r". Each expression term then has to check its operands for compatibility. What might not be obvious is that each expression has to compute its result type, e.g., 3 * 4.0 is real, not integer. So in parallel with the parsing machinery, you need to propagate a type.
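A minimal symbol-table sketch along the lines of (b) and (c), using std::map; the function names are illustrative:
#include <iostream>
#include <map>
#include <string>

enum class VarType { Int, Float };

// Declarations seen so far: identifier -> declared type.
std::map<std::string, VarType> symbols;

// Call this when "int x;" or "float y;" is parsed.
void declare(const std::string& name, VarType t) { symbols[name] = t; }

// Call this when an identifier appears in an expression.
bool lookup(const std::string& name, VarType& out) {
    auto it = symbols.find(name);
    if (it == symbols.end()) return false;   // used before being declared
    out = it->second;
    return true;
}

// Combine two operand types: mixing promotes the integer side to float.
VarType combine(VarType a, VarType b, int line) {
    if (a != b) {
        std::cout << line << ": warning: integer converted to float\n";
        return VarType::Float;
    }
    return a;
}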
+1 to voidlogic. His answer should give you a basic idea of how to build a recursive descent parser. If you are having trouble with a certain part of yours, it would be nice to get a little more detail about how you are structuring your code.
If you would like to see an example of one, look at this implementation.
Here is a book that may help:
Compilers: Principles, Techniques, and Tools (the "Dragon Book") by A. Aho, M. Lam, R. Sethi, and J. Ullman.
Here is a set of tools that may help you:
GNU flex
GNU bison