Array#to_s uses inspect on its contents, instead of calling to_s recursively.
This code
class Some
  def to_s; "some" end
end
puts [Some.new].to_s
would produce [#<Some:0x10078ce80>] instead of [some].
I'm wondering whether there could be any bad consequences if I override Array#to_s to use to_s recursively. And why doesn't it work this way by default?
When you stringify a collection, you need every collection item to be self-contained in order to distinguish one item from another. #inspect takes care of that because it's supposed to stringify the value in an unambiguous way.
For example, if Array#inspect used #to_s on the items, the result of ["a", "b,c", "d"].to_s would be "[a,b,c,d]", and you couldn't even tell how many items are in the collection or what types they have.
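As a rough sketch of the override the question proposes (illustrative only; current Rubies keep the inspect-based behaviour), this is the ambiguity in code:

class Array
  def to_s
    "[" + map(&:to_s).join(",") + "]"  # recursive to_s instead of inspect
  end
end

puts ["a", "b,c", "d"].to_s  # => [a,b,c,d] -- three elements now read like four
puts [1, "1"].to_s           # => [1,1]     -- an Integer and a String become indistinguishable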
I'm using a function within a script which produces a number of variables.
The function is used at different places in the script.
It produces variables a, b, c, d, e, etc.:
def function(input):
    a, b, c, d, e = 1, 2, 3, 4, 5
    return a, b, c, d, e
I call the function as follows:
a,b,c,d,e = function(input)
This way all the variables a,b,c,d,e etc... need to be repeated every time I want to use the function. Is there a way to prevent this repetition?
For example I tried to make a list of the variables and then zip them with the output of the function and then connect them but that is not working.
listofvariables = [a, b, c, d, e]
outputfunction = function(input)
gezipt = zip(listofvariables, outputfunction)
for itm in gezipt:
    itm[0] = itm[1]
That way, when I change the function, I would only have to rewrite the list of variables once and not every time the function is used. But that doesn't work.
The answer to your question of whether you can programmatically set variables is mostly no. There probably are ways you could make it work, but they'd be fragile or hard to understand. Generally speaking, treating variable names as part of your program's data is a bad idea.
If you have lots of related data items, you should generally be using a data structure to group them all together, rather than using separate variables for each one. Your example is dealing with a batch of five variables, and at several different points keeps them in a tuple. That tuple is probably what you want, though you could use a list if you need to modify the values, or maybe a dictionary if you wanted to be able to index them by key (e.g. {"a": 1, "b": 2}). For other more complicated situations, you might need to program up your own container for the data, such as a class. You might even use several different containers for different sub-sets of your data, and then keep those separate containers in a further container (resulting in a nested data structure).
Unfortunately, that's about as detailed as we can get without more details about your circumstances. If you need more specific help, you should probably ask another question with more details about your actual application.
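For example, here is a minimal sketch of the grouping advice using a namedtuple (the field names a..e are just the placeholders from the question):

from collections import namedtuple

Result = namedtuple("Result", ["a", "b", "c", "d", "e"])

def function(data):
    # compute the five related values and return them as one object
    return Result(a=1, b=2, c=3, d=4, e=5)

res = function(None)
print(res.a, res.e)   # access by name: 1 5
a, b, c, d, e = res   # still unpackable where that is convenient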
New to the D language here. I'm trying to use higher-order functions (e.g. fold!, reduce!, filter!, map!) to create duplicates of array elements. I'm declaring my generic function as pure and trying to accomplish this task as a one-line function. The closest I've come so far is
auto dupList(T)(T[] list) pure { return map!(a => a.repeat(2)); }
but this gives me the following output
[[1,1],[2,2]]
instead of what I'm actually wanting
[1, 1, 2, 2]
I'm calling the function like so
writeln(dupList(nums));
Instead of using map, I've been trying to use reduce in its place but when I switch out map for reduce I get the following errors:
Error instantiated from here: `staticMap!(ReduceSeedType, __lambda2)` C:\D\dmd2\src\phobos\std\algorithm\iteration.d 3287
Error: template `D_Programs.duplist!int.duplist.__lambda2` cannot deduce function from argument types `!()(int, int)`, candidates are: C:\D\dmd2\src\phobos\std\algorithm\iteration.d 3696
Error: template instance `D_Programs.duplist!int.duplist.F!(__lambda2)` error instantiating C:\D\dmd2\src\phobos\std\meta.d 803
Error instantiated from here: `reduce!(int[])` D_Programs.d (refers to dupList)
Error `D_Programs.duplist!int.duplist.__lambda2` D_Programs.d (refers to dupList)
Error instantiated from here: `duplist!int` D_Programs.d (refers to where I'm calling from)
Any help/advice on understanding at least the top three errors and where I'm going wrong with my function would be appreciated.
map essentially replaces each element with the result of calling the passed function on that element. Since your function returns an array of two ints, the result will be an array of arrays, each element holding two ints.
Armed with this knowledge, we can use std.algorithm.iteration.joiner:
auto dupList(T)(T[] list) pure { return list.map!(a => a.repeat(2)).joiner; }
As you note, it should also be possible to use reduce, but it's a bit more complicated:
auto dupList(T)(T[] list) pure { return reduce!((a,b) => a~b~b)((T[]).init, list); }
The reasons it's more complicated are:
1) reduce's function takes two arguments - the result of reducing thus far, and the next element.
2) reduce assumes the first element of the passed array is the starting point for reduction, unless a seed value is passed. Since the first element is a T, not a T[], we will need to pass a seed value. [] won't do, since it's typed as void[], so we will need to create an empty T[]. This can be done either with new T[0], or as above, (T[]).init.
Hope this helps - if there are any more questions, please ask! :)
I am assuming you meant to call map!(a => a.repeat(2))(list) or list.map!(a=>a.repeat(2)) (both are the same) since if you don't pass the actual list to the function, it isn't ever actually being called!
Anyway, neither map nor reduce will do what you want on their own. Map transforms individual elements, but can neither add nor remove elements. Reduce (and btw fold, they are basically the same) runs through the array and... well, reduces it down to just one element, like a sum function turning the array 1,2,3 into the single element, 6. Since you want to add elements, you are going to need something else outside.
But first, a sidestep: your call to reduce is failing to compile because it is being passed incorrect arguments (or something, tbh the error messages are really bad and hard to read without having the code they directly refer to open too, but it definitely refers to a lambda). Passing it your dupList won't work because dupList takes an array, but reduce works with just two elements at a time, for example, sum(a, b).
Anyway, back to the main point, the closest you can get is perhaps running another function outside map to flatten the resulting array, or in other words, join them together. There's a function for that: http://dpldocs.info/experimental-docs/std.algorithm.iteration.joiner.2.html
Suggesting a possible answer:
return list.map!(a => a.repeat(2)).joiner;
BTW: one line functions are grossly overrated. You are often better off writing it on multiple lines, even if as a single statement, if nothing else but so you can get unique line numbers on the error messages. I would prefer to write this out probably something like this:
return
    list
    .map!(a => a.repeat(2))
    .joiner
;
so each line represents a single step of the process. The exact formatting, of course, is up to you, but I like this more stretched out approach for (slightly) nicer error messages and an easier view when editing to add comments or more stuff before, after, in the middle, whatever.
At SO, I have seen questions that compare Array with Seq, List with Seq and Vector with, well, everything. I do not understand one thing though. When should I actually use a Seq over any of these? I understand when to use a List, when to use an Array and when to use a Vector. But when is it a good idea to use Seq rather than any of the above-listed collections? Why should I use a trait that extends Iterable rather than any of the concrete classes listed above?
You should usually use Seq as the input parameter for a method or class defined for sequences in general (general here meaning any sequence, not necessarily a generic type):
def mySort[T](seq: Seq[T]) = ...
case class Wrapper[T](seq: Seq[T])
implicit class RichSeq[T](seq: Seq[T]) { def mySort = ...}
So now you can pass any sequence (like Vector or List) to mySort.
If you care about algorithmic complexity, you can specialize it to IndexedSeq (quick random element access) or LinearSeq (efficient head/tail access). In any case, you should prefer the most general (top-level) class if you want your function to be more polymorphic in its input parameter, as Seq is a common interface for all sequences. If you need something even more generic, you may use Traversable or Iterable.
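For instance, a small hypothetical sketch of that specialization (the method name middle is made up):

// Requires constant-time indexed access, so it asks for an IndexedSeq.
def middle[T](xs: IndexedSeq[T]): T = xs(xs.length / 2)

middle(Vector(1, 2, 3))  // fine: Vector is an IndexedSeq
// middle(List(1, 2, 3)) // does not compile: List is a LinearSeq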
The principle here is the same as in a number of languages (e.g. in Java you should often use List instead of ArrayList, or Map instead of HashMap). If you can deal with the more abstract concept of a Seq, you should, especially when they are parameters to methods.
Two main reasons come to mind:
1) Reuse of your code. E.g. if you have a method foo(s: Seq), it can be reused for lists and arrays (see the sketch after the notes below).
2) The ability to change your mind easily. E.g. if you decide that List is working well, but then realise you need random access and want to change it to an Array, then if you have been declaring List everywhere, you'll be forced to change it everywhere.
Note #1: there are times where you could say Iterable over Seq, if your method supports it, in which case I'd be inclined to be as abstract as possible.
Note #2: Sometimes I might be inclined not to say Seq (or be totally abstract) in my work libraries, even if I could, e.g. if I were doing something that would be highly non-performant with the wrong collection, such as random access: even if I could write my code to work with a List, it would result in major inefficiency.
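A small, hypothetical sketch of reason 1 (the method name firstOrZero is made up):

def firstOrZero(s: Seq[Int]): Int = s.headOption.getOrElse(0)

firstOrZero(List(1, 2, 3))    // 1
firstOrZero(Vector(4, 5, 6))  // 4
firstOrZero(Array(7, 8, 9))   // 7 -- the Array is implicitly wrapped as a Seq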
Is there any particular reason for the inconsistent return types of the functions in Dart's ListBase class?
Some of the functions do what (as a functional programmer) I would expect, that is: List -> (apply function) -> List. These include: take, skip, reversed.
Others do not: thus l.removeLast() returns just the final element of the list; to get the List without the final element, you have to use a cascade: l..removeLast().
Others return a lazy Iterable, which requires further work to retrieve the list: newl = l.map(f).toList().
Some functions operate more like properties (l.last), as opposed to functions (l.removeLast()).
Is there some subtle reason for these choices?
mbmcavoy is right. Dart is an imperative language and many List members modify the list in-place. The most prominent is the operator []=, but sort, shuffle, add, removeLast, etc. fall into the same category.
In addition to these imperative members, List inherits some functional-style members from Iterable: skip, take, where, map, etc. These are lazy and do not modify the List in place. They are backed by the original list: modifying the backing list will change the result of iterating over the iterable. List furthermore adds a few lazy members of its own, like reversed.
To avoid confusion, lazy members always return an Iterable and not an object implementing the List interface. Some of the iterables guarantee fast length and index-operators (like take, skip and reversed) and could easily implement the List interface. However, this would inevitably lead to bugs, since they are lazy and backed by the original list.
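A tiny sketch of that backing behaviour (illustrative only):

void main() {
  final l = [1, 2, 3];
  final r = l.reversed;  // lazy Iterable, backed by l
  print(r.toList());     // [3, 2, 1]
  l.add(4);              // mutate the backing list
  print(r.toList());     // [4, 3, 2, 1] -- the lazy view reflects the change
}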
(Disclaimer: I have not yet used Dart specifically, but hope to soon.)
Dart is not a functional programming language, which may be the source of your confusion.
Methods, such as .removeLast() are intended to change the state of the object they are called upon. The operation performed by l.removeLast() is to modify l so that it no longer contains the last item. You can access the resulting list by simply using l in your next statement.
(Note they are called "methods" rather than "functions", as they are not truly functions in the mathematical sense.)
The choice to return the removed item rather than the remaining list is a convenience. Most frequently, the program will need to do something with the removed item (like move it to a different list).
For other methods, the returned data will relate to a common usage scenario, but it isn't always necessary to capture it.
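For example (a minimal sketch of the removeLast behaviour):

void main() {
  final l = ['a', 'b', 'c'];
  final removed = l.removeLast();  // returns the removed item...
  print(removed);                  // c
  print(l);                        // [a, b] -- ...and l itself was modified in place
}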
I'm writing a simple test which checks a method that returns some interface beneath Collection. I'm trying to abstract away the internal representation of this collection as much as possible, so that the test will pass in both cases: when the method returns a List and when it returns a Set.
The Set is supposed to be ordered (a LinkedHashSet or a LinkedHashMap-backed Set), so I have to test the order too. So generally I'd like to write a test like this:
assertThat(returnedList, containsOrdered("t1", "t2", "t3"));
which will fail iff both collections aren't "the same" (i.e. the same values in the same ordering).
I've found the Hamcrest library to be useful in this case; however, I'm stuck in its documentation. Any help would be appreciated, though I'll try to avoid writing a CollectionTestUtil or my own Hamcrest Matcher if possible.
You're nearly there.
assertThat(returnedList, contains("t1", "t2", "t3"))
will do it. Compare with containsInAnyOrder.
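For instance, a minimal sketch of a complete test (class and variable names are made up; assumes Hamcrest's Matchers and JUnit 4 are on the classpath):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.containsInAnyOrder;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;

public class OrderedCollectionTest {
    @Test
    public void returnsItemsInInsertionOrder() {
        Collection<String> returned = Arrays.asList("t1", "t2", "t3"); // stand-in for the tested method
        assertThat(returned, contains("t1", "t2", "t3"));              // same items, same order
        assertThat(returned, containsInAnyOrder("t3", "t1", "t2"));    // same items, any order
    }
}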
JUnit has the org.junit.Assert class, which contains multiple assertArrayEquals implementations for different types, so you could do something like:
Collection<String> returnedList = new ArrayList<String>(); //Replace with call to whatever returns the ordered collection
Assert.assertArrayEquals(new Object[]{"t1", "t2", "t3"}, returnedList.toArray());