Haskell: Procedurally generating a level without slowing down

To better learn Haskell, I'm trying to build a variant of the well-known indie game Super Hexagon.
However, I'm having a problem with the level generation:
Right now, level generation is done with a list containing all the different "gauntlets" (wall patterns, implemented as [[Bool]]: a variable number of rows, each holding True or False entries); these are the building blocks of the level. Using the getStdGen random number generator, I can create an infinite list of gauntlets.
However, at any given time we only want to render part of them. To keep track of whether a gauntlet has already been passed, a second component is introduced, containing the total number of rows of all gauntlets before this one (e.g. length (gauntletData currentRandomNumber) + snd (randomGauntletList !! (n-1))), where randomGauntletList has the type [(Gauntlet, Integer)].
The problem lies in how this list is used in the rendering and updating functions: take 30 $ dropWhile (\(_,distance) -> currentDistance > distance) randomGauntletList
Because dropWhile always scans from the head of the list, it takes longer and longer to reach the current 'starting point', so the game slows down after roughly 30 seconds.
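For reference, here is a stripped-down sketch of the setup described above (the types and gauntlet contents are simplified stand-ins, not my actual code):
-- Simplified sketch of the structures described above.
type Gauntlet = [[Bool]]          -- rows of walls

-- Infinite list of gauntlets, each paired with the total number of rows
-- of all gauntlets that come before it.
randomGauntletList :: [(Gauntlet, Integer)]
randomGauntletList = zip gauntlets (scanl (+) 0 (map rowCount gauntlets))
  where
    gauntlets = cycle [gauntletA, gauntletB]   -- stand-in for the random stream
    rowCount  = fromIntegral . length
    gauntletA = [[True, False, True], [False, False, True]]
    gauntletB = [[False, True, False]]

-- The part that slows down: every frame this re-scans the whole prefix.
visible :: Integer -> [(Gauntlet, Integer)]
visible currentDistance =
  take 30 (dropWhile (\(_, distance) -> currentDistance > distance) randomGauntletList)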
I'm out of my depth: Is there a way to solve this problem?

Related

Riddle puzzle in clingo

In the prolog tag someone wanted to solve "the giant cat army riddle" by Dan Finkel (see the video / link for a description of the puzzle).
Since I want to improve at answer set programming, I hereby challenge you to solve the puzzle more efficiently than I did. You will find my solution as an answer. I'll accept the fastest-running answer (unless it uses dirty hacks).
Rules:
Hardcoding the length of the list (or something similar) counts as a dirty hack.
The output has to be in the predicate r/2, where the first argument is the index in the list and the second is the entry at that index.
The time measured is the time to the first valid answer.
num(0..59).
%valid operation pairs
op(N*N,N):- N=2..7.
% no need to add operations that start with 14
op(Ori,New):- num(Ori), New = Ori+7, num(New), Ori!=14.
op(Ori,New):- num(Ori), New = Ori+5, num(New), Ori!=14.
%iteratively create new numbers from old numbers
l(0,0).
{l(T+1,New) : op(Old,New)} = 1 :- l(T,Old), num(T+1), op(Old,_).
%no number twice
:- 2 #sum {1,T : l(T,Value)}, num(Value).
%2 before 10 before 14
%linear encoding
reached(T,10) :- l(T,10).
reached(T+1,10) :- reached(T,10), num(T+1).
:- reached(T,10), l(T,2).
:- l(T,14), l(T+1,_).
%looks nicer, but quadratic
%:- l(T2,2), l(T10,10), T10<T2.
%:- l(T14,14), l(T10,10), T14<T10.
%we must have these three numbers in the list somewhere
:- not l(_,2).
:- not l(_,10).
:- not l(_,14).
#show r(T,V) : l(T,V).
#show.
Having a slightly uglier encoding improves grounding a lot (which was your main problem).
I restricted op/2 so that no operation starts from 14, since 14 should be the last element in the list.
I create the list iteratively; this may not be as nice, but at least for the start of the list it already removes unreachable values during grounding, so you will never see l(1,33) or l(2,45), etc.
Also, list generation stops once the value 14 is reached, as no further operation is possible or needed.
I also added a linearly scaling version of the "before" constraints, although it is not really necessary for such a short list (but it's a cool trick in general if you have long lists!). This is called "chaining".
Also note that your #show statement is non-trivial and does create some constraints/variables.
I hope this helps; otherwise feel free to also ask such questions on our Potassco mailing list. ;)
My first attempt is to generate a permutation of the numbers and force successive elements to be connected by one of the three operations (+5, +7, or sqrt). I predefine the operations to avoid choosing/counting problems. Testing for < 60 is not necessary, since the output of an operation has to be a number between 0 and 59. The generated list l/2 is forwarded to the output r/2 until the number 14 appears. I guess there is plenty of room to outrun my solution.
num(0..59).
%valid operation pairs
op(N*N,N):- N=2..7.
op(Ori,New):- num(Ori), New = Ori+7, num(New).
op(Ori,New):- num(Ori), New = Ori+5, num(New).
%for each position one number
l(0,0).
{l(T,N):num(N)}==1:-num(T).
{l(T,N):num(T)}==1:-num(N).
% following numbers are connected with an operation until 14
:- l(T,Ori), not op(Ori,New), l(T+1,New), l(End,14), T+1<=End.
% 2 before 10 before 14
:- l(T2,2), l(T10,10), T10<T2.
:- l(T14,14), l(T10,10), T14<T10.
% output
r(T,E):- l(T,E), l(End,14), T<=End.
#show r/2.
First answer:
r(0,0) r(1,5) r(2,12) r(3,19) r(4,26) r(5,31) r(6,36) r(7,6)
r(8,11) r(9,16) r(10,4) r(11,2) r(12,9) r(13,3) r(14,10) r(15,15)
r(16,20) r(17,25) r(18,30) r(19,37) r(20,42) r(21,49) r(22,7) r(23,14)
There are multiple possible lists of different lengths.

Skip-gram in word2vec - what is the number of outputs?

Images like the following are often used to describe the word2vec model with skip-gram (the standard CBOW and skip-gram architecture diagrams, in which several context words converge into a SUM to predict one word for CBOW, and one input word fans out to several context words for skip-gram):
However, after reading this discussion on Stack Overflow, it seems that word2vec actually takes 1 word as input and 1 word as output. The output word is randomly sampled from the window. (And this is performed X times to generate X input/output pairs.)
It seems to me, then, that the above image does not describe the network correctly. My question is: is the 1-input/1-output setup standard (the TensorFlow word2vec tutorial takes this approach and calls it skip-gram), or do some networks actually use the structure of the above image?
It's not a great diagram.
In CBOW, those converging arrows are an averaging that happens all-at-once, to create one single 'training example' (desired prediction) that is (average(context1, context2, ..., contextN) -> target-word). (In practice averaging is more common than the 'SUM' shown in the diagram.)
In Skip-Gram, those diverging arrows are multiple training examples (desired predictions) made one-after-the-other.
And in both diagrams, while they look a bit like neural-net node-architectures, the actual hidden-layer and internal-connection weights are just implied inside the middle-column-to-right-column arrows.
Skip-gram is always 1 "input" context word used to predict 1 nearby (within the effective 'window') "output" target word.
Implementations tend to iterate through the whole effective window, so every (context -> target) pair gets used as a training-example. And in practice, it doesn't matter if you consider the central word the target-word and each word around it to be context-words, or the central word the context-word and each word around it to be target-words – both methods result in the exact same set of (word -> word) pairs being trained, just in a slightly different iteration order. (I believe the original Word2Vec paper described it one way, but then Google's released code did it the other way for reasons of slightly-better cache efficiency.)
In fact the effective window, for each central word considered, is chosen to be some random number from 1 to the configured maximum window value. This turns out to be a cheap way of essentially weighting nearer-words more: the immediate neighbors are always part of training-pairs, further words only sometimes. That is, pairs are not randomly sampled from the whole window - it's just a random window size. (There's another down-sampling where the most-frequent words will be randomly dropped so as not to overtrain them at the expense of less-frequent words, but that's a totally separate process not reflected in the above.)
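To make that pair generation concrete, here is a toy sketch (in Haskell, purely illustrative, not taken from any word2vec implementation): for each centre position it samples a random effective window in [1, maxWindow] and emits one (input, output) pair per neighbour inside it.
import System.Random (randomRIO)

-- Toy skip-gram pair generation with a randomly reduced window per centre word.
skipGramPairs :: Int -> [a] -> IO [(a, a)]
skipGramPairs maxWindow tokens = concat <$> mapM pairsAt [0 .. n - 1]
  where
    n = length tokens
    pairsAt i = do
      w <- randomRIO (1, maxWindow)                 -- effective window for this centre
      let neighbours = [j | j <- [i - w .. i + w], j /= i, j >= 0, j < n]
      return [(tokens !! i, tokens !! j) | j <- neighbours]
With maxWindow = 2, immediate neighbours always end up in a pair, while distance-2 neighbours only do so when the sampled window happens to be 2, which is exactly the nearer-words-weighted-more effect described above.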
In CBOW, instead of up-to 2*window input-output pairs of the (context-word -> target-word) form, there's a single input-output pair of (context-words-average -> target-word). (In CBOW, a loop creates the average value for a single N:1 training-example for one central word, and then splits the backpropagated error across all contributing words. In skip-gram, a loop creates multiple alternate 1:1 training-examples for one central word.)

Haskell List Monad State Dependence

I have to write a program in Haskell that will solve a nondeterministic problem.
I think I understand the list monad about 75%, so it is the obvious choice, but...
(My problem is filling an n x m board with ships and water; I am given the sums of the rows and columns, every part of a ship has its value, etc. It's not important right now.)
I want to guard as early as possible to make the algorithm efficient. The problem is that whether a ship can be inserted depends on what I am given and on what I have inserted in previous moves (let's call it the board state), and I have no idea how to pass it, because I can't generate a new state from the board alone.
My algorithm is:
1. Initialize the first board.
2. Generate the first row, trying every possible insertion (I can insert ships vertically, so I need to remember to insert the other parts of the ship in the lower rows).
3. Solve the problem for the smaller board (of course, after generating every 2 rows I check whether everything is OK).
But I have no idea how I can pass the new states, because as far as I have read, the State monad generates a new state from the old state alone, and that is impossible for me; I would want to generate the new state while doing operations on the value.
I am sorry for my hatred towards Haskell, but after a few years of programming in imperative languages, being forced to fight with monads to do things which in other languages I could write almost instantly makes me mad. (Other things in Haskell are fine for me, and some of them are actually quite nice.)
Combine StateT with the list monad to get your desired behavior.
Here's a simple example of using the non-determinism of the list monad while still keeping a history of previous choices made:
import Control.Monad
import Control.Monad.Trans.Class
import Control.Monad.Trans.State
fill :: StateT [Int] [] [Int]
fill = do
    history <- get
    if length history == 3
        then return history
        else do
            choice <- lift [0, 1, 2]
            guard (choice `notElem` history)
            put (choice : history)
            fill
fill maintains a separate history for each path that it tries out. If it fills up the board it returns successfully, but if the current choice overlaps with a previous choice it abandons that solution and tries a different path.
You run it using evalStateT, supplying an initial empty history:
>>> evalStateT fill []
[[2,1,0],[1,2,0],[2,0,1],[0,2,1],[1,0,2],[0,1,2]]
It returns a list of all possible solutions. In this case, that just happens to be the list of all permutations in which we could have filled up the board.
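The same pattern should carry over to your board problem: make the partially filled board the StateT state, use lift to choose the next row nondeterministically, and guard as early as possible. Here is a rough sketch under some heavy simplifications (a board is just a list of Bool rows, ships are single cells, and only the row/column counts are checked; all names are illustrative rather than taken from your code):
import Control.Monad (guard, replicateM)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

type Row   = [Bool]   -- assumption: True = ship cell, False = water
type Board = [Row]

-- Number of ship cells in column c of a (partial) board.
colCount :: Int -> Board -> Int
colCount c = length . filter (!! c)

-- All candidate rows of width w containing exactly s ship cells.
rowsWithSum :: Int -> Int -> [Row]
rowsWithSum w s = filter ((== s) . length . filter id) (replicateM w [False, True])

-- Build the board row by row: the partially filled board is the state,
-- lift provides the nondeterministic choice of the next row, and guard
-- prunes any branch whose column counts already exceed the targets.
fillBoard :: Int -> [Int] -> [Int] -> StateT Board [] Board
fillBoard w colSums [] = do
  board <- get
  guard (and [colCount c board == colSums !! c | c <- [0 .. w - 1]])
  return board
fillBoard w colSums (s:rowSums) = do
  board <- get
  row   <- lift (rowsWithSum w s)
  let board' = board ++ [row]
  guard (and [colCount c board' <= colSums !! c | c <- [0 .. w - 1]])
  put board'
  fillBoard w colSums rowSums
Running evalStateT (fillBoard 3 [1,1,0] [1,1]) [] then lists every 2x3 board whose row sums are 1 and 1 and whose column sums are 1, 1 and 0.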

C++ pathfinding with a-star, optimization

I'm wondering if I can optimize my pathfinding code a bit. Let's look at this map:
+ - wall, . - free, S - start, F - finish
.S.............
...............
..........+++..
..........+F+..
..........+++..
...............
A human will look at it and say it's impossible, because the finish is surrounded... But A* MUST check all fields to ascertain that there is no possible road. Well, it's not a problem with small maps, but when I have a 256x265 map it takes a lot of time to check all points. I think that I can stop searching when there are closed nodes all around the finish, I mean:
+ - wall, . - free, S - start, F - finish, X - closed node
.S.............
.........XXXXX.
.........X+++X.
.........X+F+X.
.........X+++X.
.........XXXXX.
And I want to stop in this situation (there is no entrance to the "room" with the finish). I thought about checking h, and finishing when none of the open nodes is getting closer... But I'm not sure if that's OK; maybe there is a better way?
Thanks for any replies.
First of all, this problem is better solved with breadth-first search, but I will assume you have a good reason to use A* instead. However, I still recommend you first check the connectivity between S and F with some kind of search (breadth-first or depth-first). This will solve the issue.
Assuming the map doesn't change, you can preprocess it by dividing it into connected components. This can be done with a fast disjoint-set data structure. Then, before launching A*, you check in constant time whether the source and destination belong to the same component. If not, no path exists; otherwise you run A* to find the path.
The downside is that you will need an additional n bits per cell, where n = ceil(log2 C) and C is the number of connected components. If you have enough memory and can afford it, then it's OK.
Edit: if you fix n to be small (e.g. one byte) and have more components than that can represent (e.g. more than 256 for 8-bit n), then you can assign the same number to multiple components. To achieve the best results, make sure each component id has nearly the same number of cells assigned to it.
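The idea itself is language-agnostic. As a rough sketch of the shape of the precheck (written in Haskell only because that is the running language of this thread, and using a plain flood fill rather than a disjoint-set structure; on a static grid both yield the same component labels):
import qualified Data.Map.Strict as Map

type Cell = (Int, Int)

-- Label every free cell with a component id via flood fill.
components :: [Cell] -> Map.Map Cell Int
components free = go 0 free Map.empty
  where
    freeSet = Map.fromList [(c, ()) | c <- free]
    go _ []     labels = labels
    go k (c:cs) labels
      | Map.member c labels = go k cs labels
      | otherwise           = go (k + 1) cs (flood k [c] labels)
    flood _ []        labels = labels
    flood k (c:queue) labels
      | Map.member c labels        = flood k queue labels
      | not (Map.member c freeSet) = flood k queue labels
      | otherwise = flood k (neighbours c ++ queue) (Map.insert c k labels)
    neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

-- Constant-time precheck before running A*: start and finish can only be
-- connected if they carry the same component label.
samePocket :: Map.Map Cell Int -> Cell -> Cell -> Bool
samePocket labels a b =
  case (Map.lookup a labels, Map.lookup b labels) of
    (Just la, Just lb) -> la == lb
    _                  -> False
In your example map, S and F would get different labels, so A* would never be launched at all.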

Prolog - Term replacement, Term alteration in workflow graphs

In this link (Meta Interpreter) I believe I have found a nifty way of solving a problem I have to tackle, but since my Prolog is very bad, I'd first like to ask whether what I have in mind is even possible.
I want to transform certain parts of a workflow/graph depending on a set of rules. A graph basically consists of sequences (a->b) and splits/joins, which are either parallel or conditional, i.e. two steps run in parallel in the workflow, or a single branch is picked depending on a condition (the condition itself does not matter on this level), e.g. parallel-split - (a && b) - parallel-join, etc. Now, a graph usually has nodes and edges; by representing it with terms of this form I want to get rid of explicit edges.
Furthermore, each node has a partner attribute, specifying who will execute it.
I'll try to give a simple example what I want to achieve:
A node called A, executed by a partner X, connected to a node called B, executed by a partner Y:
A_X -> B_Y
seq((A,X),(B,Y))
If I detect a pattern like this, i.e. two steps in sequence with different partners, I want this to be replaced with:
A_X -> Send_(X-Y) -> Receive_(Y-X) -> B_Y // a send step from X to Y and a receive step at Y waiting for something from X
seq((A,X), seq(send(X-Y), seq(receive(Y-X), B)))
If anyone could give me some pointers or help to come up with a solution I would be very thankful!
A graph basically consists of sequences (a->b) and split/joins, which are either parallel or conditional, i.e. two steps run in parallel in the workflow or a single branch is picked depending on a condition
This sounds an awful lot like an and/or graph. Prolog algorithms on these graphs are covered by Ivan Bratko in Prolog Programming for Artificial Intelligence, chapter 13. Even if your graphs aren't really and/or graphs, you may be able to adapt some of these algorithms to your task.