I was reading about function overloading and found the following:
The compiler lists all functions with the same name and the same
number of arguments, and then it looks for a matching function by
applying the following rules in a fixed order, stopping when there is a match.
1) A function whose parameter types exactly match the argument types of
the call.
2) bool -> int, char -> int, float -> double, ... (-> means "is converted to")
3) int -> double, double -> int, ... (-> means "is converted to")
For example:
void foo(int x)
{
return;
}
foo(true); // will trigger step 2.
Could someone elaborate on rules 2 and 3, and provide the full list?
1) A function whose parameter types exactly match the argument types of the call
So if I'm trying to call foo(1), it will first look for a function named foo that takes a single argument of type int.
2) bool -> int, char -> int, float -> double, ... (-> means "is converted to")
So if I do foo(true) and step 1 finds nothing, it will try converting true to an int and repeat step 1.
3) int -> double, double -> int, ... (-> means "is converted to")
If step 2 doesn't find any suitable function, it will try converting any int arguments into double, and so on.
What is the order of the function type : int -> int -> int -> int?
I don't really understand what the question is asking or where to begin figuring out the answer. I can't seem to find any helpful pages on SML anywhere.
Firstly, ML is a functional language, so if you're short on SML-specific material, you should still be able to glean useful information from any functional language reference. Reading up on first-order and higher-order functions should help. However, this is not really an ML (or SML) question, but a functional programming question.
With the caveat that my functional knowledge is possibly rusty:
A function is first-order if its arguments and result are all "data", that is, not functions. A function is higher-order if it takes another function as a parameter or returns a function as its result.
The order of a higher-order function is then defined as one more than the largest order among its parameters and return value.
The function type int -> int describes a function that takes an int and outputs an int. This is therefore a first order function, with order 1.
The function type int -> int -> int describes a function that takes an int, and outputs (a function that takes an int and outputs an int), e.g. we can rewrite
int -> int -> int
as
int -> (int -> int)
So the output of the function that takes an int is another (first order) function like I described previously.
Now, we know that the order of the parameter (int) is 0 (it's just data, not a function), and the order of the output (the first-order function int -> int) is 1, so the order of the whole function type int -> int -> int is 1 + 1, i.e. 2.
By a similar extension it follows that the order of the function type
int -> int -> int -> int
is 3.
This function converts a decimal number to binary (I found it on the Internet):
foo(long dec)
{
    long int rem, i = 1, sum = 0;
    do
    {
        rem = dec % 2;
        sum = sum + (i * rem);
        dec = dec / 2;
        i = i * 10;
    } while (dec > 0);
    return sum;
}
As I found out, when I change the type long to int, the function doesn't work for numbers bigger than 1023. The problem is that I have to call this function with an integer argument (it has to be foo(int dec)). I have tried to use static_cast but failed:
error: cannot convert ‘long int’ to ‘std::ios_base&(std::ios_base&)’ in assignment|
Is there any way to bypass that, or maybe change this function somehow?
Thank you for your help.
First of all, this function doesn't exactly "convert decimal to binary". Instead, it will convert a number such that if you print out the result in base 10, then it looks like the binary representation of the number.
Secondly this function definition is invalid because you did not specify a return type.
Of course, if you change long to int in the parameter list and add int as the return type (and your system has smaller ints than longs - some don't), then it will not work for as many numbers.
For example, if your system has 32-bit ints then 1111111111 is the largest output that will work, and that comes from input 1023.
I don't know what you're talking about with the static_cast and ios_base error, you must have done something unrelated to this function.
Make sure you define the return type of the function in the function definition, and make sure foo takes a long int as an argument.
long int foo(long int dec) {
    // ...
}
I'm playing with the difference between - as a unary operator and a binary operator in Caml Light.
let a b =
print_int b;
print_newline();
;;
let c d e =
print_int d;
print_newline();
print_int e;
print_newline();
;;
a (3 - 4 ) ;
c (9 - 4 )
;;
I expect the code to either throw an error (because it gets confused about how many arguments a or c have) or to print:
-1
5
However, it compiles with no problems (compiler version below) and prints
-1
Can anyone tell me what happens with the last call?
Cobrakai$camlc -v
The Caml Light system, version 0.80
(standard library from /usr/local/lib/caml-light)
The Caml Light runtime system, version 0.80
The Caml Light compiler, version 0.80
The Caml Light linker, version 0.80
Cobrakai$
In ML, all functions take exactly one argument. A seemingly multi-parameter function is actually a function that takes one argument, and returns another function which takes the remaining arguments.
So let c d e = ... is actually syntactic sugar for let c = function d -> function e -> ...
And the type of c is int -> int -> unit, and -> is right-associative, so it is int -> (int -> unit). So you can see clearly that it is a function which takes an int and returns a function.
When you apply it to multiple arguments like c 1 2, function application is left-associative so it is actually (c 1) 2, so you can see that c 1 evaluates to a function which then is applied to 2.
So, when you give a function "too few arguments", the result is a function. This is a useful and common technique in ML called "partial application", which allows you a convenient way to "fix" the first few arguments of a function.
I am not sure how the Caml Light interpreter handles it when the expression you type evaluates to a function. But from what you're saying, it seems to not print anything.
I want to ask the user to input a variable, check whether it is a real or an integer, and take two different actions accordingly: say true if it is an integer, else false.
fun realorinteger(n)= if n=int then true else false;
but it definitely does not work. I tried if n in int as well.
Any help?
You cannot do this.
The type system simply doesn't allow a function to take multiple different types and act according to which type it is. Either your function takes an int, or it takes a real. (Or it takes both, but then it can also take strings, lists, etc., i.e. it is polymorphic.)
You could fake it by making a datatype, which encapsulates values that can be either integers or reals, like so:
datatype intorreal = IVal of int | RVal of real
You can then use pattern matching on such a value to extract the desired number:
fun realorinteger (IVal i) = ... (* integer case here *)
| realorinteger (RVal r) = ... (* real case here *)
This function will then have the type intorreal -> x, where x is the type of the right-hand-side expressions. Note that the resulting value must have the same type in both cases.
An example of such a function could be a rounding function:
fun round (IVal i) = i
| round (RVal r) = Real.round r
Which is then called like so:
val roundedInt = round (IVal 6);
val roundedReal = round (RVal 87.2);
I have the following line of code:
double *resultOfMultiplication = new double(*num1 * *num2);
How does the compiler know which * is used for dereferencing and which * is used for multiplication?
Also, and probably a more important question: in this case, is double a primitive (like in Java) or an object? If it's a primitive, how can I create a new one?
The compiler doesn't need to be "smart"; the language's syntax is defined by a tree of grammatical productions that inherently imbue a priority or "precedence" to the application of certain operators over others. This is particularly handy when an expression might otherwise be ambiguous (because, say, two operators are represented by the same lexical token).
But this is just lexing and parsing. Whether any particular operation is actually semantically valid is not decided until later in compilation; in particular, given two pointers x and y, the expression *x *y will fail to compile because you cannot multiply *x by y, not because there was a missing operator in what might otherwise have been a dereference followed by another dereference.
I shan't go into an in-depth proof that operator precedence exists in C++; for that, just take a basic course in syntax structure and you'll grok it soon enough.
Maybe an analogy will help:
Q: How do humans tell the dot above the 'i' apart from the dot at the end of a sentence? Are they really so smart that they don't interpret each and every 'i' as the end of a sentence?
A: Because they are in different locations!
The same goes for the '*' and the compiler: they appear in different positions. The multiplication operator stands between two expressions; the dereferencing operator stands in front of an expression. It may not be obvious to you, but it is obvious to a compiler.
Any decent text on parsing will tell you how compilers are able to do this. The required technology was developed about 40 years ago, and is considered to be among the most basic things in a compiler. A C++ compiler has to have many smart parts, but this is not one of them.
Note to experts: I am aware of factors, lvalues, and so on. But they will only confuse in this case.
It's all about the grammar. There's no postfix *, so * after an identifier has to be treated as infix multiply.
Since the question is about “smartness”, I would like to add one point. My answer will refer to the C language, but I assume the situation is identical in C++.
The compiler does not really need to be smart in this case, simply because the language does not allow multiplication to be performed directly on addresses, so the symbol * preceding a pointer expression can only mean "dereference", while the symbol * between two arithmetic expressions can only mean "multiplication".
I will try to explain this with an example.
Let's create a small program and call it test.c. Within the main() function, let's create two pointers, first and second, and suppose that the value stored in first is 140732806008300 and the value stored in second is 140732806008296.
When we try to sum the two, the compiler lets us use the + operator peacefully, because the cast binds only to first, and in C adding an integer to a pointer is allowed:
#include <stdio.h>
int main () {
int *first, *second, fifteen = 15, twenty = 20;
first = &fifteen; /* Let the address of `first` be 140732806008300 */
second = &twenty; /* Let the address of `second` be 140732806008296 */
printf("first + second is: %llu\n", (long long unsigned int) first + second);
return 0;
}
What we did here was add an integer (the cast value of first) to the pointer second, getting a new pointer as a result, which was then printed as an unsigned integer by printf(). Note that pointer arithmetic scales by sizeof(int), which is why the printed number is not the plain sum of the two addresses. Since this is allowed, we get the string
first + second is: 703664030041496
But if instead we try to multiply the two using the multiplication operator *…
#include <stdio.h>
int main () {
int *first, *second, fifteen = 15, twenty = 20;
first = &fifteen; /* Let the address of `first` be 140732806008300 */
second = &twenty; /* Let the address of `second` be 140732806008296 */
printf("first * second is: %llu\n", (long long unsigned int) first * second);
return 0;
}
…we get the following error:
test.c: In function ‘main’:
test.c:11:69: error: invalid operands to binary * (have ‘long long unsigned int’ and ‘int *’)
printf("first * second is: %llu\n", (long long unsigned int) first * second);
This is because direct multiplication of addresses is not allowed. Therefore we have to cast the pointers to valid integers before we can use the symbol * with the meaning of "multiplication operator":
#include <stdio.h>
int main () {
int *first, *second, fifteen = 15, twenty = 20;
first = &fifteen; /* Let the address of `first` be 140732806008300 */
second = &twenty; /* Let the address of `second` be 140732806008296 */
printf("first * second is: %llu\n", (long long unsigned int) ((long unsigned int) first) * ((long unsigned int) second));
return 0;
}
…Now the symbol * finally stands between integers (and not between pointers), and in such a context it can only mean "multiplication" (and nothing else), so we finally get the correct result of the operation:
first * second is: 4480243502683625952
I tried to think about examples where the meaning of the symbol * cannot be disambiguated (by humans as well), but without success.
This means that the symbol * can unambiguously mean only one thing in a given context – i.e., there are cases where we have to use parentheses to change its meaning, but its meaning is always unambiguous in its context.
Dereference comes first, so *num1 * *num2 is parsed as (*num1) * (*num2), which is unambiguous. *resultOfMultiplication is not parsed as a dereference because it appears in a variable declaration: in that context the compiler expects a data type followed by an identifier, so the asterisk unambiguously makes the declared type a pointer.
Primitive data types are still objects in C++. If you use new on a primitive type, all that happens is that enough memory to hold the object is allocated in the free store and its address is returned to you. This is unlike 'normal' variables (e.g. double t;), which have automatic or static storage duration.
It depends on the context in which it is used; as a simple rule, the compiler looks at what stands to the left and right of the symbol to understand what it is.
Further reading: the Wikipedia page Lexer_hack.