I ran my PTV Vissim simulation a few times to check that it works, and I noticed that the Data Collection Results tab has accumulated the data from all of these runs. How can I delete the data from previous runs?
Edit:
I think I was able to figure it out: 1) delete/move the files from the "results" folder in the same directory, 2) close the simulation, 3) then open the simulation again...
Yes, one way is to simply delete the results folder and re-open the network.
However, from within Vissim you can go to the menu Evaluation → Result lists → Simulation runs and, in the list that opens, delete the obsolete (or all) simulation runs; this also removes the corresponding evaluation results.
There is also an option under Evaluation → Configuration, tab Result Management, to keep the results of the "current (multi-)run only". Selecting this automatically discards existing results every time you start a new simulation.
I am working on a war game project using legacy C++ code. Attacks appear to behave strangely in "extreme cases" (e.g., a combat in one game "tile" in which 5000 arrows are fired at a group of 3 regiments, any one of which can be the target of each of the ~5000 potential "hits" in the course of one game turn). My task is to collect data on what happens during a game turn.
I need to record how the value of one particular local variable changes during the course of one game turn for any unit that comes under fire (restricted to a 'test scenario' in which only the regiments in one tile are under attack).
My Visual Studio "Watch window" at the beginning of a debug session looks roughly like this:
Name                   Value   Type
targ                   011     short
(gUnit[011]).damage    0       short
(gUnit[047]).damage    0       short
(gUnit[256]).damage    0       short
What this means is: at the first breakpoint, the regiment represented by the array element gUnit[011] has been determined to be the target ("targ") for a "hit", and the "damage" for that regiment prior to this hit is 0.
After one loop that may change to this:
Name                   Value   Type
targ                   256     short
(gUnit[011]).damage    3       short
(gUnit[047]).damage    0       short
(gUnit[256]).damage    0       short
Which means: after one "hit", Regiment 011 has suffered 3 damage, and Regiment 256 has been determined to be the target of the next hit. So on each cycle of the call loop, any of the potential targets can be reassigned as the target, and the value of 'damage' can change. For this example of 5000 arrows fired, there could be up to 5000 hits (though it would be unlikely to exceed ~1300 hits for that many shots fired). For each hit that occurs in the span of one game turn, I need to record:
target and damage
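For context, the data layout implied by the watch window is roughly the following. This is a guess from the names shown, not the project's real declarations:

struct Unit {
    short damage;        // cumulative damage taken this turn
    // ... presumably other fields (experience, fatigue, terrain, etc.)
};

Unit  gUnit[512];        // size is a guess; indices 011, 047, and 256 appear above
short targ;              // index into gUnit of the current hit's target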
I have breakpoints set up, and I can do what I need to do manually, meaning:
Advance to the first call to the function that determines a hit ["MissileHit()"]
Advance again and note that the value for (gUnit[011]).damage has changed from 0 to 3. /* i.e., the Regiment with the ID "011" has taken "3" damage . . . */
Repeat up to 5000 times for experiment ONE (noting that, with each loop, targ can change to any of the three array elements that represent the units under attack in the tile).
Repeat for another 7 experiments (with slightly different conditions) for phase one of testing.
Repeat for perhaps 3 or 4 more phases of testing . . .
In sum, THOUSANDS of rows with at least two columns:
targ.......damage
(As a side note, this CSV will get additional columns plugged into it after each test cycle, so the final spreadsheet will look more like this:
atkrID....Terrain....atkrExpr....atkrFatig....targID....targDamage
The end result of all this data collection being: descriptive statistics can be calculated on the various in-game variables that mediate the outcomes of attacks.)
Obviously, doing this "manually" would be ridiculous.
Is there an easy way to get Visual Studio to generate csv files that will record this data for me?
I can certainly go down the path of writing scripts to run when these breakpoints execute, and fstreaming the values into a file. But I'm still wet behind the ears (with Visual Studio, C++, and development in general!), so I thought it worth asking whether there are built-in features of Visual Studio that would facilitate this.
A VS extension, as Drop's comment suggests, would probably be the better path for this issue, but it would require deep knowledge of the extension API, so I will offer another workaround: add a data breakpoint and output the value to the Output window. You would still need to copy and paste the values into your files, but it is also a convenient way to analyze the values during debugging.
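For completeness, the "fstreaming stuff into a file" route the question mentions is only a few lines of code. A minimal sketch, assuming MissileHit() can be edited directly; gUnit and targ are the names from the question, while the helper function, file name, and call site are illustrative:

#include <fstream>

// Append one CSV row per hit. The file is truncated once per program run
// and gets a header row on the first call.
void LogHit(short targ, short damage)
{
    static std::ofstream log("hits.csv", std::ios::trunc);
    static bool first = true;
    if (first) { log << "targ,damage\n"; first = false; }
    log << targ << ',' << damage << '\n';
}

// Inside MissileHit(), after the damage has been applied:
//     LogHit(targ, gUnit[targ].damage);

Extra columns (attacker ID, terrain, etc.) can be appended to both the header and the data row as the later test phases need them.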
I have a job with more than 50 tables running in WORK. It worked well before, but recently it has been failing with errors like:
ERROR: An I/O error has occurred on file WORK.'SASTMP-000000030'n.UTILITY.
ERROR: File WORK.'SASTMP-000000030'n.UTILITY is damaged. I/O processing did not complete.
NOTE: Error was encountered during utility-file processing. You may be able to execute the SQL statement successfully if you allocate more space to the WORK library.
ERROR: There is not enough WORK disk space to store the results of an internal sorting phase.
ERROR: An error has occurred.
Does anyone know how to solve this error?
Your disk is full. If this is running on a server, ask your system administrator to investigate the problem.
If this is your desktop, find and delete unneeded files to free up space.
Clean out old SAS Work Folders
Often, old SAS Work folders do not get cleared when SAS closes. You can reclaim a lot of disk space by going to the path defined for SAS Work and deleting all the old folders.
In SAS
%put %sysfunc(pathname(work));
will show you where the current WORK library is located. One level up is where all SAS Work folders are created.
On my system, that returns:
C:\Users\dpazzula\AppData\Local\Temp\SAS Temporary Files\_TD9512_GXM2L12-PAZZULA_
That means that I should look in "C:\Users\dpazzula\AppData\Local\Temp\SAS Temporary Files\" to find old folders to delete.
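If this is something you do regularly, the cleanup can be scripted. A throwaway C++17 sketch, purely illustrative: the root path and the one-week cutoff are assumptions, and you should make sure no live SAS session still owns any of these folders:

#include <chrono>
#include <filesystem>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

int main()
{
    // Parent of the path reported by %sysfunc(pathname(work)) -- adjust for your machine.
    const fs::path workRoot = "C:/Users/me/AppData/Local/Temp/SAS Temporary Files";
    // Only touch folders untouched for a week, to avoid live sessions.
    const auto cutoff = fs::file_time_type::clock::now() - std::chrono::hours(24 * 7);

    // Collect stale folders first, then delete, to avoid mutating the
    // directory while iterating over it.
    std::vector<fs::path> stale;
    for (const auto& entry : fs::directory_iterator(workRoot))
        if (entry.is_directory() && entry.last_write_time() < cutoff)
            stale.push_back(entry.path());

    for (const auto& p : stale) {
        std::cout << "removing " << p << '\n';
        fs::remove_all(p);
    }
}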
Your work space is full.
Your SAS server uses a dedicated directory where all SAS sessions store their temporary files: all files in the WORK libraries, as well as the temporary files used while sorting, joining, etc.
Solutions:
Have more space allocated.
Make certain to put only necessary files into WORK, clean up as you go, and close old sessions.
Run fewer processes.
Replace interim datasets with views instead, especially if you're using large source datasets:
data master /view=master ;
set lib.monthlydata20: ; /* all datasets since Jan 2000 */
run ;
proc sql ;
create table want as
select *
from master
where ID in(select ID from lookup) ;
quit ;
Try compressing all datasets using this option:
OPTIONS COMPRESS=YES REUSE=YES;
This should go at the very beginning of your code. It will compress every dataset created after it, often dramatically. It consumes more CPU but decreases size, and the reduced I/O can also make your code run faster.
In some cases, this might not help if the compressed data sets exceed the hard disk space.
Also, change your work directory to the drive with the most free disk space.
Study your code.
Create a data flow diagram to determine WHEN each file is created and where it is used downstream. Find out when a dataset is no longer needed and DELETE it. If you have 50 datasets, chances are that many of them are 'value-added' by a subsequent step and can go away afterwards, freeing up your work space. A cute trick is to REUSE some of the dataset names to keep the number of unneeded datasets in check.
Rule of thumb: leave the environment the way you found it. If there were no files in WORK to start with, manually clean up after yourself. The exception is a Stored Process, which starts a completely new SAS job and cleans up after itself upon completion.
SAS can take a very long time to load an output table after you click on it in the process flow. Is there a way to cancel it? I've wasted hours waiting for tables to load, and I'm hoping there is a way to exit the "Input Data" or "Output Data" tabs and return to the process flow window.
I don't know of a way to directly stop the table from loading; in my brief tests, Ctrl+Break and Esc (the usual possibilities) don't stop it, though both do work in some places in EG. I will say this is a general weakness of EG: it's not perfect at handling things in the background and letting you interrupt them, though it has improved greatly over the years.
What you can do to avoid this becoming a problem, at least possibly, is to go to the Tools -> Options -> Data -> Performance options screen and limit the "Maximum dimensions to display in the data grid" value to something well under the default 1,000,000. Change it to 50,000, or whatever is a reasonable compromise between your needs and how long it takes to open datasets.
Alternately, you can prevent datasets from appearing in the process flow at all by setting Tools -> Options -> Results -> Results General -> "Maximum number of output data sets to add to the project" to zero. That doesn't prevent you from browsing datasets; you would just have to do it through the Servers tab in the lower left (in the default setup).
If I am not wrong, you do NOT want the Input and Output datasets to open when the process is run. In that case, go to Tools -> Options -> Results -> Results General in SAS EG and uncheck the options that automatically open the data.
Suppose I had the following structure for a script called mycode.do in Stata
-some code to modify original data-
save new_data, replace
-some other code to perform calculations on new_data-
Now suppose I press the Break button to stop Stata after it has saved new_data in the script. My understanding is that Stata will undo the changes made to the data if it is interrupted with the Break button before it has finished. Following such an interruption, will Stata erase new_data.dta from disk if it didn't exist initially (or revert it to its original form if it already existed before mycode.do was executed)?
Stata documentation says "After you click on Break, the state of the system is the same as if you had never issued the original command." However, it sounds as if you expect it to treat an entire do-file as a "command". I do not believe this is the case. I believe that once the save has completed, the file new_data has been replaced, and Stata is not able to revert the file to the version before the save.
The Stata User's Guide also says, in the documentation for Stata release 13, [U] 16.1.4 Error handling in do-files: "If you press Break while executing a do-file, Stata responds as though an error has occurred, stopping the do-file." Example 4 discusses this further and seems to support my interpretation.
This seems to me to have interesting implications for Stata "commands" that are implemented as ado files.
How do I run a second batch of a HIT while ensuring I get a new set of workers? I want to make a small change to the HIT and start a new batch, but I don't want any of the workers who participated in the first batch to participate in the second.
Your best bet is to add assignments to your existing HITs (you can do this easily through the Requester UI), which will exclude workers who have already done those HITs.
But before you do that, you'll need to change the HITs, which is more difficult (though relatively easy through the API, using a ChangeHITTypeOfHIT operation for title/description/duration/qualification changes). If you need to change the Question parameter of a HIT (the actual displayed content of the HIT) or the amount it pays (the reward), then you need to create new HITs and send them out as a new batch.
To prevent workers from redoing the HITs, you can either put a qualification on the HIT and assign all of your current workers a score below the required level,
or put a note on the HIT saying that duplicate work will not be paid. If you do the latter, you should include a link on the HIT that takes workers to a list of past worker IDs so that they can check whether they've already done the task.
Update 2018:
You don't need to modify the content of the HIT anymore.
Instead, you assign a qualification to the previous participants via a CSV sheet.
Then, in the new HIT, you set as a requirement that this qualification "has not been granted".
A detailed explanation is at the bottom here, and a step-by-step procedure is described in this PDF.