If we have 5 cities and 5 ants, do all ants have to start from the same city? What is the difference if they start from different cities?
I am placing the ants at different cities as starting points randomly.
I tried both cases, but my results are the same. I want to know whether this is correct or whether there is a problem with my code.
You can start the ants from different nodes and update the pheromone every time; both setups are valid.
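To illustrate, here is a minimal ACO sketch for a symmetric TSP in which each ant starts from a random city every iteration (all names here are hypothetical, not from the question's code). Starting all ants from one fixed city mainly changes early exploration; the pheromone update works the same either way.

```python
import random

def ant_tour(start, dist, pher, alpha=1.0, beta=2.0):
    """Build one ant's tour from `start` using the standard
    pheromone^alpha * (1/distance)^beta transition rule."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        cand = list(unvisited)
        weights = [pher[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                   for j in cand]
        nxt = random.choices(cand, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def run_aco(dist, n_ants=5, iters=50, rho=0.5, q=1.0):
    """Run ACO; each ant gets a random starting city per iteration."""
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(iters):
        starts = [random.randrange(n) for _ in range(n_ants)]
        tours = [ant_tour(s, dist, pher) for s in starts]
        for i in range(n):                 # evaporation
            for j in range(n):
                pher[i][j] *= (1 - rho)
        for tour in tours:                 # deposit on every ant's tour
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best, best_len = tour, length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                pher[a][b] += q / length
                pher[b][a] += q / length
    return best, best_len
```

Because a TSP tour is a cycle, a tour started at city 3 visits the same edges as the same cycle started at city 0, which is why results from both placements can legitimately look identical.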
I am trying to distribute x jobs among y persons using reinforcement learning (a DQN).
Every person can have a specific amount of tasks and every task can only be done once.
I mask out all the infeasible tasks for each person; for example, if a task has already been chosen, it is simply masked out (so the output size stays the same).
I preprocess my data by combining the features of the person with the features of the task. For example, I subtract the timeslots: if a person has 4 timeslots left and the task needs 2, the resulting feature would be 2. I do this for every person and every task, resulting in one big matrix where #rows = #persons and #columns = #tasks * #features.
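The matrix construction described above could be sketched like this (a minimal version with a single combined feature, the timeslot difference; the function name and dict keys are hypothetical, and real data would append more features per task):

```python
def build_features(persons, tasks):
    """One row per person; per task, the combined person/task features.
    Here only one feature is computed: remaining timeslots minus
    timeslots the task needs."""
    rows = []
    for p in persons:
        row = []
        for t in tasks:
            # e.g. person has 4 timeslots left, task needs 2 -> feature is 2
            row.append(p["timeslots"] - t["timeslots"])
        rows.append(row)
    return rows  # shape: #persons rows, #tasks * #features columns
```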
Now I want to give my network as much information as possible, meaning the whole matrix, but I am unsure how to do it.
One possible idea would be to flatten it into one big array, but the problems are that the number of persons can change, and that I can only choose one task at a time for one person, so I would need to tell the network which person is the active one.
Another approach would be something like "Hey, I have a sequence, let's use an RNN", but I am not sure how to teach the network which person is the current one. I also think this would lead the network to give me the best task over all persons, whereas it should learn something like "if the task is better for another person, don't choose it for the active one".
The output of my network is one value per action (task), of which I choose the maximum.
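The masked maximum over the network's outputs, as described, might look like this (a sketch; `q_values` stands for the network's per-task outputs and `feasible` for the mask, both hypothetical names):

```python
def masked_argmax(q_values, feasible):
    """Pick the index of the highest-valued feasible task. Infeasible
    tasks (already chosen, or not allowed for this person) are set to
    -inf, so the output size stays the same but they can never win."""
    masked = [q if ok else float("-inf")
              for q, ok in zip(q_values, feasible)]
    return max(range(len(masked)), key=masked.__getitem__)
```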
Maybe some smart person has an idea. Thanks for your help.
Participants are paired together to accomplish a task.
I would like to have a list automatically made showing me how many times participants are paired with each other. This way, I get to pair the participants equally.
In the picture attached, you can see how I want the list generated at the far right. I was thinking QUERY would work, but I'm not so familiar with how to use it.
The example below will show you the way:
1. Data are in A:B.
2. Unique pairs are in D:E; formula: =UNIQUE(A3:B)
3. The number of times they were together is in F. Formula for F3 and below:
=COUNTA(QUERY(A2:B6,"select B where A='"&D3&"' and B='"&E3&"'",0))
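A possibly simpler alternative, assuming the same layout (data in A:B, the unique pair in D3:E3), is COUNTIFS, which counts the rows matching both columns directly:

```
=COUNTIFS(A3:A, D3, B3:B, E3)
```

Unlike the QUERY version, this needs no string concatenation, so it also works when participant names contain apostrophes.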
Pictures:
Is that what you were trying to get?
Everything I describe is currently occurring in a hydrologic model I am building.
I have some for loops that control the reading of input data across gridded data sets. The initial inputs can be anywhere from 100x100 to 3000x3000 cells. After reading in these inputs, I perform some initial calculations (5-10) across the grid. (See my question here for questions I have related to reading in the inputs: http://bit.ly/1AkyzWy). After the initial calculations, I enter a mode where I step "into" each cell and run 4-15 processes. Each cell has a different subset of roughly 15 processes - some of these cells are identical with others in terms of the processes that are run, and no cell runs a subset that doesn't exist elsewhere. A time step consists of one complete loop through all of the cells. I run anywhere from 30 to 15,000 time steps.
And now here's the important part, I think: each cell depends on the results of the processes run in the neighboring cells, but not during the same time step. Within a time step, the processes running in a cell reference the results of the processes run in the neighboring cells during the previous time step. Nothing within a cell depends on a process run in a neighboring cell during the same time step.
So I think my program, which can take an hour or so to run 1500 time steps on 1000x10000 cells, is ripe for parallelization. I've done some initial research, but I'm worried about solutions affecting portability and performance on different end-users' machines.
Does an easy-to-implement solution exist that doesn't affect portability and adapts to each user's number of cores?
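The dependency structure described above (cells read only their neighbors' *previous* time step) is exactly the double-buffering pattern, which parallelizes cleanly: each worker writes a disjoint band of the new grid while reading only the old one. A minimal sketch in Python, with a placeholder process standing in for the real per-cell hydrologic processes (all function names are hypothetical):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def step_rows(args):
    """Compute one time step for rows r0..r1-1, reading ONLY the grid
    from the previous step, so neighbor reads can never race."""
    prev, r0, r1 = args
    n, m = len(prev), len(prev[0])
    out = []
    for i in range(r0, r1):
        row = []
        for j in range(m):
            # placeholder process: average of the in-bounds neighbors at t-1
            nb = [prev[i2][j2]
                  for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= i2 < n and 0 <= j2 < m]
            row.append(sum(nb) / len(nb))
        out.append(row)
    return r0, out

def run(prev, steps):
    workers = os.cpu_count() or 1        # adapts to the user's machine
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):
            n = len(prev)
            bands = [(prev, k * n // workers, (k + 1) * n // workers)
                     for k in range(workers)]
            cur = [None] * n
            for r0, rows in pool.map(step_rows, bands):
                cur[r0:r0 + len(rows)] = rows
            prev = cur                   # swap buffers between steps
    return prev
```

`os.cpu_count()` makes the worker count adapt per machine, and `concurrent.futures` ships with the standard library, so portability is not an issue in Python. If the model is in a compiled language, OpenMP offers the same pattern (a `parallel for` over rows per time step) with similarly automatic core scaling; note that shipping the whole grid to each worker, as this sketch does, is the first thing to optimize away with shared memory.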
I'm pretty sure this question has been asked several times, but either I did not find the correct answer or I didn't understand the solution.
To my current problem:
I have a sensor which measures the time a motor is running.
The sensor is reset after reading.
I'm not interested in the time the motor was running during the last five minutes; I'm more interested in how long the motor has been running since the very beginning (or since the last reset).
When storing the values in an RRD, the recorded values depend on the data source type.
When working with GAUGE, the value read is 3000 (tenths of a second) every five minutes.
When working with ABSOLUTE, the value is 10 every five minutes.
But what I would like to get is something like:
3000 after the first 5 minutes
6000 after the next 5 minutes (last value + 3000)
9000 after another 5 minutes (last value + 3000)
The accuracy of the older values (and slopes) is not so important, but the last value should reflect the time in seconds since the beginning as accurate as possible.
Is there a way to accomplish this?
I don't know whether it is useful for your need or not, but maybe the TREND/TRENDNAN CDEF function is what you want; look here:
TREND CDEF function
I have now created a small SQLite database with one table and one column in that table. The table has one row. Every time my cron job runs, I update that row by adding the new reading to the stored value, so the row holds the accumulated value of my sensor. This is then fed into the rrd.
Any other (better) ideas?
The way that I'd tackle this (on Linux) is to write the value to a plain-text file and then use the value from that file for the RRDtool graph. I think that using SQLite (or any other SQL database) would be unnecessarily hard on the system just to keep track of something like this.
I currently have a problem: for my senior year, I need to choose 5 elective courses out of 20+ possible courses. All these courses are distributed across weekdays. I need to develop a robust algorithm to show me all possible combinations without overlapping any of the course hours. I am a little short of time, so I figured I'd ask here, and it may also help other people in the future.
My original idea was to try all combinations of 5 out of the 20+ and remove the schedules that had overlapping courses. The brute-force solution seems easy to implement. Just out of curiosity, is there a more intelligent solution to this problem, e.g. if I had 1000+ courses to choose from?
A slightly faster approach would be to select the first course (out of the 1000) and remove all the courses that overlap with it. Then select the second course out of the remaining courses and again remove the overlapping ones. If you do that 5 times, you will have 5 courses that don't overlap.
The last removal step isn't really necessary: once you have 4 courses, every course that's left doesn't overlap with them, so any remaining course completes a valid schedule.
Backtracking will give you all the possible course combinations. An efficient way of backtracking here could be to use Dancing Links, as proposed by Knuth.
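The backtracking idea can be sketched as follows (a minimal version, assuming each course is modeled as a set of `(day, hour)` meeting slots; two courses overlap when they share a slot). Unlike generating all combinations first, it abandons a partial selection as soon as a conflict appears:

```python
def schedules(courses, k=5):
    """Return all index tuples of k mutually non-overlapping courses,
    found by backtracking: a partial selection is only extended with a
    course that conflicts with nothing already chosen, so dead branches
    are pruned early instead of being enumerated and filtered."""
    result = []

    def extend(start, chosen):
        if len(chosen) == k:
            result.append(tuple(chosen))
            return
        for i in range(start, len(courses)):
            # set intersection = shared meeting slot = overlap
            if all(not (courses[i] & courses[j]) for j in chosen):
                extend(i + 1, chosen + [i])

    extend(0, [])
    return result
```

For the 1000+ course case, the same problem can be phrased as exact cover over time slots, which is where Knuth's Dancing Links (Algorithm X) implementation pays off.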