I started using NetLogo a few weeks ago and I still have a lot to learn.
I'm writing a model and I need to read 3 values from my Excel file to set the initial placement of turtles in the world. The values are an x coordinate, a y coordinate, and a type (1, 2, 3, or 4).
I know that I have to convert the file to a .txt file, but I don't understand how to organize the values in a list and then open the file to read them.
I would like each turtle to access its 'x' and 'y' values, populate the world at those positions, and then have the turtles colored according to their type.
I really appreciate any help and suggestions on this matter.
L.
In NetLogo, I have turtles-own lists, which means I set a turtle's variable to be a list. Each tick, another value is added to the list. After a few thousand ticks these lists get quite long, and the problem arises that I can't open the agent monitor in the GUI any more because it takes too long to load the lists.
Reproducible code:
breed [persons person]
turtles-own [examplelist]

to setup
  clear-all
  reset-ticks
  create-persons 1 [setxy 0 0]
  ask turtles [set examplelist []]
end

to go
  ask turtles [set examplelist lput ticks examplelist]
  tick
end
I need the agent monitor to watch another turtles-own variable; I don't need to watch the lists (they are just used to do a calculation every 8760 ticks).
Is there perhaps a way to, e.g., hide the lists from the agent monitor? Or do I need to handle the lists as global variables instead? That would be quite unhandy, as I would have to create and name a separate list for every turtle...
I can see three options:
1/ If you are creating a modelling framework, I assume that your user cannot actually code in NetLogo. This means that you have to predefine the scenarios for them anyway (for example, they could choose the calculation), so you only need to have the possible calculations stored instead of all the input values to those calculations.
2/ It is not clear from your question why any user would open an inspect window or otherwise access the individual turtle. If the user doesn't need it directly, instead of adding all this information to the turtles, you could export it to a file, adding a line each tick. The user would do the analysis of the simulation in R or Excel or whatever.
3/ You could create a shadow turtle for every turtle. This is not something I would recommend, but the idea is that the shadow turtle has a subset of the variables (not the lists), and the values of the variables it does have are identical to those of the turtle it is shadowing. The limited-variables version of the turtle is the one that would be accessible to monitor.
I am dealing with what is, I guess, a relatively simple problem.
I currently run a simulation in which I track the time, position and orientation of one particle for a given number of simulation steps.
The task is simply to write this data to an .h5 file on the fly.
So far, I have done this using Jupyter. With the h5py package it is very simple to create a data set of predefined shape (rows x columns) via
import h5py
import numpy as np
outfile = h5py.File("outfile.h5", "w")
dset = outfile.create_dataset("dsetname", (number_of_lines, number_of_columns))
and then write the data line by line for each simulation time step to the data set with
dset[time_step] = np.array([t, x, phi])
Now I have moved to C++, implemented the simulation there, and would like to store the data in the same way I did with Python.
However, following basic examples like this, one would have to store the whole set of data in an array for the duration of the simulation run and then write its contents to the .h5 file afterwards.
This is not very elegant. As I did with Python, I would like to just write the data line by line to the HDF5 data set on the fly, and not hold the (sometimes several GB of) data in an array.
Unfortunately, so far I have not found an established way to carry the procedure I used in Python over to C++.
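The closest I can piece together from the plain HDF5 C API is a sketch along the following lines (untested on my side and possibly not idiomatic; the dataset name, dimensions and data are placeholders). It creates the dataset at its full size up front, then selects a one-row hyperslab per time step and writes only that row, analogous to dset[time_step] = np.array([t, x, phi]) in h5py:

#include <hdf5.h>

int main() {
    const hsize_t n_steps = 1000;  // number of simulation steps (placeholder)
    const hsize_t n_cols  = 3;     // t, x, phi

    // Create the file and a fixed-size 2D dataset, like h5py's create_dataset.
    hid_t file = H5Fcreate("outfile.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hsize_t dims[2] = {n_steps, n_cols};
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "dsetname", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Sclose(space);

    // Dataspace describing a single row in memory.
    hsize_t row_dims[2] = {1, n_cols};
    hid_t memspace  = H5Screate_simple(2, row_dims, NULL);
    hid_t filespace = H5Dget_space(dset);

    for (hsize_t step = 0; step < n_steps; ++step) {
        double row[3] = {0.0, 0.0, 0.0};  // t, x, phi from the simulation

        // Select row `step` in the file and write only that row.
        hsize_t start[2] = {step, 0};
        hsize_t count[2] = {1, n_cols};
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, H5P_DEFAULT, row);
    }

    H5Sclose(filespace);
    H5Sclose(memspace);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}

But I am not sure whether this is the intended way to do it, or whether there is a higher-level C++ interface for this pattern.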
Has anybody ever encountered a similar problem and could show me a way to solve it?
Thank you!
Best,
Sven
I have a .CSV file that's storing data from a laser. It records the height of the laser beam every second.
The .CSV file ends up having rows for each measurement that are all in this format:
DR,04,#
where the # is the height reading.
For example, if the beam is at a height of 10, the reading would say:
DR,04,10.
I want my program in C++ to read only the height (third column of the .CSV) from each row and put it into an array. I do not want the first two columns at all. That way I end up with an array with just a bunch of height values from each measurement.
How do I do that?
You can use strtok() to separate out the three columns, and then just take the last value.
You could also just take the string and scan for the first comma, and then scan from there for the second comma. What follows is the value you are after.
You could also use sscanf() to parse out the individual values.
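As a rough illustration of the sscanf() route (the file name here is just a placeholder), the format string below skips the first two comma-separated fields and converts the third one to a double:

#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::ifstream file("laser.csv");   // placeholder file name
    std::vector<double> heights;
    std::string line;

    while (std::getline(file, line)) {
        double height = 0.0;
        // "%*[^,]" consumes a field up to the next comma without storing it,
        // so only the third field (the height) is converted and assigned.
        if (std::sscanf(line.c_str(), "%*[^,],%*[^,],%lf", &height) == 1) {
            heights.push_back(height);
        }
    }
    // heights now holds one reading per row of the .CSV.
    return 0;
}

The strtok() and comma-scanning variants differ only in how the third field is located before converting it to a number.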
This really isn't a difficult problem, and there are many ways to approach it. That is why people are complaining: you probably should have tried something first and then asked a question here when you got stuck on a specific point.
I have about 30 rasters with 4 bands each from which I am trying to create composites, so that I can eventually bring all of the rasters together into 1 large raster. But the first step is to create the composite rasters. I would like to do this all at once, and I found a few examples on various sites on how to do it, including ESRI's. I've pieced them together to create my own code; unfortunately, I keep getting error 000271: Cannot open the input datasets. I know the path is correct because arcpy.ListRasters() returns the files in the folder as a large list, so the problem is definitely with the Composite Bands tool. I've looked up possible solutions to this problem, but I did not understand the solutions or how they worked, so if you do have an answer or suggestion, could you comment on your code (if you write one) or your answer so I know what is going on and why?

About the data: they are all ERDAS Imagine image rasters with 4 image color bands: R, G, B, and whatever N is. All but a few rasters have bands named Layer_1, Layer_2 and so on; the few are called Band_1, Band_2 and so on. Here is my code:
import arcpy

arcpy.env.workspace = r'\\network\folder\subfolder1\subfolder2\All_RGBN'
ws = arcpy.env.workspace
outws = r'\\network\folder\subfolder1\subfolder2\RGBN_Composit'

for ras in arcpy.ListRasters("*.img"):
    name = outws + "\\" + ras
    try:
        arcpy.CompositeBands_management("Layer_1.img;Layer_2.img;Layer_3.img,Layer_4.img", name)
    except:
        arcpy.CompositeBands_management("Band_1.img;Band_2.img;Band_3.img,Band_4.img", name)
Thanks!
If your rasters have multiple bands, they are already composites. Composite Bands should be used when your bands are distinct raster datasets that you want to merge into one raster.
If you want to merge all your rasters (composite or not) into one single dataset, you should create a Mosaic Dataset or a Raster Catalog and load your rasters into it.
And FYI, you get an error message from the Composite Bands tool because your raster bands (the inputs) are not correctly referenced; you should write something like:
ras + "\\Layer_x" instead of "Layer_x.img"
But doing this will output the exact same raster as the original one.
I have two very large lists. They were both originally in Excel. The larger one is a list of about 160,000 emails together with other information, like names and addresses; the smaller one is a list of just 18,000 emails.
My question is: what would be the easiest way to get rid of all of the rows in the first document that contain an email address from the second?
I was thinking regex, or maybe there is another application I can use? I have tried searching online, but it seems like there isn't much specific to this. I also tried Notepad++, but it freezes when I try to compare these large files.
-Thank You in Advance!!
Good question. One way I would tackle this is to make a C++ program [you could extrapolate the idea to the language of your choice; you never mentioned which languages you are proficient in] that reads each item of the smaller file into a vector of strings. First, of course, use Excel to save the files as CSV instead of XLS or XLSX, which will comma-separate the values so you can work with them more easily. For the larger list, "Save As" a copy with just the email addresses, deleting the other columns for now.
Then, you could open the larger list and use a nested loop to check if you should output to an output file. Something like:
// LargeListVector, SmallListVector and OutputVector are std::vector<std::string>
bool foundMatch = false;
for (std::size_t y = 0; y < LargeListVector.size(); y++) {
    for (std::size_t x = 0; x < SmallListVector.size(); x++) {
        if (SmallListVector[x] == LargeListVector[y]) foundMatch = true;
    }
    // Keep the row only if its email was not found in the smaller list.
    if (!foundMatch) OutputVector.push_back(LargeListVector[y]);
    foundMatch = false;
}
That might be partially pseudo-code, but do you get the idea?
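If you want something closer to compilable code, here is a self-contained sketch of the same idea. It assumes both files have already been saved with one email address per line (the file names are placeholders), and it swaps the inner loop for a std::unordered_set lookup so the 160,000 x 18,000 comparisons don't become a bottleneck:

#include <fstream>
#include <string>
#include <unordered_set>

int main() {
    // Load the ~18,000 addresses to remove into a hash set for fast lookup.
    std::unordered_set<std::string> toRemove;
    std::ifstream smallFile("small_list.csv");   // placeholder name
    std::string line;
    while (std::getline(smallFile, line)) {
        toRemove.insert(line);
    }

    // Copy every line of the large list whose address is NOT in the set.
    std::ifstream largeFile("large_list_emails_only.csv");  // placeholder name
    std::ofstream outFile("filtered_list.csv");             // placeholder name
    while (std::getline(largeFile, line)) {
        if (toRemove.find(line) == toRemove.end()) {
            outFile << line << '\n';
        }
    }
    return 0;
}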
So I read a forum post at: Here, which gave this formula:
=MATCH(B1,$A$1:$A$3,0)>0
Column B would be the large list with the 160,000 entries, and column A was my list of the 18,000 entries I needed to delete.
I pasted this formula in a separate column to match everything. It would print out either an error or TRUE; if the data was in both columns, it printed TRUE.
Then, because I am not great with Excel, I threw the text into Notepad++ and searched for all lines that contained TRUE (match case, because in my case some of the data contained the word "true" without caps). I marked those lines, then under Search > Bookmark I removed all lines with bookmarks. I pasted that back into Excel and voilà.
I would like to thank you guys for helping and pointing me in the right direction :)