I have a pandas dataframe looking like this:
Index Stat value1 value2 value3 value4 value5 value6
2016-11-01 00:00:00.000 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.100 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.200 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.300 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.400 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-02 00:00:00.000 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-02 00:00:00.100 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-02 00:00:00.200 Gard 0.15 0.25 0.01 6.08 31.0 0.291719042916
2016-11-02 00:00:00.300 Gard 0.15 0.25 0.01 6.08 31.0 0.291719042916
Of course this is just a snippet, the whole dataframe has about 4.3 million rows.
I would like to extract each line that corresponds to a date: all lines with the timestamp date 2016-11-01 into one file, and those with 2016-11-02 into another. So, two files looking like this:
Index Stat value1 value2 value3 value4 value5 value6
2016-11-01 00:00:00.000 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.100 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.200 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.300 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-01 00:00:00.400 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
And:
Index Stat value1 value2 value3 value4 value5 value6
2016-11-02 00:00:00.000 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-02 00:00:00.100 Gard 0.08 0.24 0.09 6.08 18.4 0.268514431642
2016-11-02 00:00:00.200 Gard 0.15 0.25 0.01 6.08 31.0 0.291719042916
2016-11-02 00:00:00.300 Gard 0.15 0.25 0.01 6.08 31.0 0.291719042916
I tried to use groupby in the following command:
grouped_df = df.groupby(df.index.date)["Stat","value1","value2","value3","value4","value5","value6"]
But I don't get any output or error; it runs and nothing happens. Am I doing something wrong? Is this even the correct function to use? Or is there an easier, better way?
A bare groupby is lazy, so nothing happens until you apply a function to the groups. You need groupby with apply and a custom function calling to_csv:
f = lambda x: x.to_csv(r'd:/folder/{}.csv'.format(x.name))
df.groupby(df.index.date).apply(f)
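A minimal self-contained sketch of the same idea, writing into the current directory instead of the `d:/folder` path above (the tiny frame stands in for the 4.3-million-row one):

```python
import pandas as pd

# Build a small frame with a DatetimeIndex, group by calendar date,
# and let apply write one CSV per group via to_csv.
idx = pd.to_datetime([
    "2016-11-01 00:00:00.000",
    "2016-11-01 00:00:00.100",
    "2016-11-02 00:00:00.000",
])
df = pd.DataFrame({"Stat": ["Gard", "Gard", "Gard"],
                   "value1": [0.08, 0.08, 0.15]}, index=idx)

# x.name is the group key (a datetime.date), so each file is named after it
f = lambda x: x.to_csv("{}.csv".format(x.name))
df.groupby(df.index.date).apply(f)
```

This produces `2016-11-01.csv` and `2016-11-02.csv`, each holding only that date's rows.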
I have an ascii file in the format below:
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/reg/dff_in_gated 17.13 0.0 6.00 11.13 0.00 gc_ht_t_dff_en_in_WIDTH84_0
gc_ab_cd/reg/dff_in_send_gated 0.20 0.0 0.00 0.20 0.00 gc_ht_t_dff_in_WIDTH1_33
gc_ab_cd/reg/rd_rtn 11.42 0.0 4.20 7.22 0.00 gc_ht_t_gfx_2toN_WIDTH32_1
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
gc_ab_cd/tap_ch/throttle 136.63 0.0 77.33 59.30 0.00 gc_ht_t_gc_vm2__throttle
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
gc_ab_cd/vm2_dbg/bg_mux 22.79 0.0 9.90 4.80 0.00 gc_ht_t_gc_dbg_mux_4_1_01
gc_ab_cd/vm2_dbg/bg_mux/clk 0.20 0.0 0.00 0.20 0.00 gc_ht_t_clock
gc_ab_cd/vm2_dbg/bg__mux/flop_mux_flop 5.33 0.0 2.63 2.70 0.00 gc_ht_t_dbg_COUNT4_WIDTH8_0
I need to grep 0 or 1 levels of hierarchy in the first field of the text above, so that the output of the "grep" prints the following on stdout:
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
I used the regexp https://regex101.com/r/D92KSP/1
But it gives only 3 matches below (1 level of hierarchy in the first field), as can be seen in https://regex101.com/r/D92KSP/1
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
Questions:
[1] I'm NOT sure why the line below (0 levels of hierarchy in the first field) is NOT matched by the regexp in https://regex101.com/r/D92KSP/1
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
[2] What should I do to modify the regexp https://regex101.com/r/D92KSP/1 so that it also matches the line below?
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
[3] I used the above regexp with "grep" and in the vim editor on Linux, and it doesn't work there, though it works partially on regex101.com. Why?
regex101 and other such web sites help you create/validate a regexp that works on that web site; don't assume it'll work anywhere else, especially with the mandatory POSIX command-line tools like sed, grep, and awk. Each tool uses specific regexp variants (BRE, ERE, and/or PCRE) with different arguments (e.g. -E to enable EREs in grep and sed, -P to enable PCREs in grep, with some caveats), extensions (e.g. word boundaries, shortcuts, or back-references), and limitations (e.g. delimiter chars). You have to learn which regexp variant, with which extensions and limitations, is supported by the version (e.g. GNU or BSD) of the tool you want to use.
In any case, any time you're talking about fields you should be using awk, not grep (or sed) since awk is the tool that separates input into fields. The following will work using any awk in any shell on every Unix box:
$ awk '$1 ~ "^[^/]*/?[^/]*$"' file
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
or, to search for a specific path depth, just set a numeric variable on the command line:
$ awk -v n=2 '{key=$1} gsub("/","&",key)<n' file
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
$ awk -v n=3 '{key=$1} gsub("/","&",key)<n' file
gc_ab_cd 92641.48 25.2 5.12 9.20 0.00 gc_ht_t_gc_ab_cd
gc_ab_cd/reg 29.24 0.0 0.49 0.01 0.00 gc_ht_t_CHECK1_0
gc_ab_cd/reg/dff_in_gated 17.13 0.0 6.00 11.13 0.00 gc_ht_t_dff_en_in_WIDTH84_0
gc_ab_cd/reg/dff_in_send_gated 0.20 0.0 0.00 0.20 0.00 gc_ht_t_dff_in_WIDTH1_33
gc_ab_cd/reg/rd_rtn 11.42 0.0 4.20 7.22 0.00 gc_ht_t_gfx_2toN_WIDTH32_1
gc_ab_cd/regs 18583.88 5.1 2958.87 25.01 0.00 gc_ht_t_gc_ab_cd_regs
gc_ab_cd/tap_ch 431.51 0.1 144.83 150.05 0.00 gc_ht_t_gc_vm2_qe128
gc_ab_cd/tap_ch/throttle 136.63 0.0 77.33 59.30 0.00 gc_ht_t_gc_vm2__throttle
gc_ab_cd/vm2_dbg 22.79 0.0 0.00 0.00 0.00 gc_ht_t_gfx_dbg_mux_01
gc_ab_cd/vm2_dbg/bg_mux 22.79 0.0 9.90 4.80 0.00 gc_ht_t_gc_dbg_mux_4_1_01
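If grep must be used after all, an ERE along these lines might work as a sketch (the sample file name `hier.txt` is made up here, and POSIX character classes are assumed):

```shell
# Sample lines: depth 0, 1 and 2 in the first field.
printf 'gc_ab_cd 92641.48\ngc_ab_cd/reg 29.24\ngc_ab_cd/reg/rd_rtn 11.42\n' > hier.txt

# Match a first field with at most one "/": one path component,
# optionally followed by exactly one "/component", then whitespace.
grep -E '^[^/[:space:]]+(/[^/[:space:]]+)?[[:space:]]' hier.txt
```

This keeps the depth-0 and depth-1 lines and drops the deeper one, but the awk solution remains more robust since it tests only the first field.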
I have a result from classification_report in sklearn.metrics; when I print the report it looks like this:
precision recall f1-score support
1 1.00 0.84 0.91 43
2 0.12 1.00 0.22 1
avg / total 0.98 0.84 0.90 44
Now, the question is how can I show the result in a Jupyter widget (in the above format) and update its value?
Currently, I am using html widgets to show the result:
#pass test and result vectors
report = classification_report(pred_test , self.y_test_data)
predict_table = widgets.HTML(value = "")
predict_table.value = report
but it looks like the following:
precision recall f1-score support 1 1.00 0.81 0.90 43 2 0.00 0.00 0.00 0 avg / total 1.00 0.81 0.90 43
I found a simple solution using HTML! Since we are using an HTML widget in Jupyter, the problem can be solved with the pre tag, which preserves whitespace and line breaks:
predict_table.value = "<pre>" + report + "</pre>"
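A small standalone illustration of the same trick (the helper name is mine, and html.escape is added so that any < or > inside the report can't be misread as markup):

```python
import html

def to_pre_html(report):
    # Wrap preformatted text in <pre> so the whitespace alignment of
    # classification_report survives HTML rendering; escape markup first.
    return "<pre>" + html.escape(report) + "</pre>"

report = "          precision    recall\n       1       1.00      0.84"
print(to_pre_html(report))
```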
I have many datasets from files to be merged and arranged into one single output file. Here is an example of two datasets to be merged:
Data 1 from File 1:
9.00 2.80 13.08 12.78 0.73
10.00 -3.44 19.30 18.99 0.14
12.00 2.60 20.28 20.12 0.39
Data 2 from File 2:
2.00 -7.73 20.04 18.49 0.62
5.00 -4.82 17.07 16.38 0.59
6.00 -2.69 12.55 12.25 0.50
8.00 -3.85 18.06 17.64 0.94
9.00 -3.59 16.13 15.73 0.64
Expected output in one file:
9.00 2.80 13.08 12.78 0.73
10.00 -3.44 19.30 18.99 0.14
12.00 2.60 20.28 20.12 0.39
2.00 -7.73 20.04 18.49 0.62
5.00 -4.82 17.07 16.38 0.59
6.00 -2.69 12.55 12.25 0.50
8.00 -3.85 18.06 17.64 0.94
9.00 -3.59 16.13 15.73 0.64
For now, the script I use, with a Python for loop, is this:
import numpy as np
import glob
path='./13-stat-plot-extreme-combine/'
files=glob.glob(path+'13-stat*.dat')
for x in range(len(files)):
    file1 = files[x]
    data1 = np.loadtxt(file1)
    np.savetxt("Combine-Stats.dat", data1, fmt='%9.2f')
The problem is that only one dataset ends up in the new file (each savetxt call overwrites it). How can I use concatenate to combine the datasets along the row axis?
Like this:
arrays = [np.loadtxt(name) for name in files]
combined = np.concatenate(arrays)
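Putting it together, a minimal runnable sketch (the tiny arrays stand in for the np.loadtxt results, and the output name follows the question):

```python
import numpy as np

# Two row-blocks standing in for np.loadtxt results from two files.
a = np.array([[9.00, 2.80, 13.08], [10.00, -3.44, 19.30]])
b = np.array([[2.00, -7.73, 20.04], [5.00, -4.82, 17.07]])

# axis=0 (the default) appends rows, which matches the expected output,
# then a single savetxt writes the combined block once.
combined = np.concatenate([a, b])
np.savetxt("Combine-Stats.dat", combined, fmt='%9.2f')
```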
I've been using the gprof profiler in conjunction with g++.
I have a function in my code which encapsulates several sections of behaviour which are related enough to the primary function that it would not make sense to split them off into their own functions.
I'd like to know how much time is spent in each of these areas of code.
So, if you imagine the code looks like
function(){
A
A
A
B
B
B
C
C
C
}
where A, B, and C represent particular sections of code I'm interested in, is there a way to get gprof to tell me how much time is spent working on those particular sections?
I know it's an old question, but I have found an interesting answer.
As Sam says, the -l option only works with the old gcc compiler. But I have found that if you compile and link with -pg -fprofile-arcs -ftest-coverage and then run the program, the output of gprof -l is very interesting:
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
13.86 0.26 0.26 main (ComAnalyste.c:450 # 804b315)
10.87 0.46 0.20 main (ComAnalyste.c:386 # 804b151)
7.07 0.59 0.13 main (ComAnalyste.c:437 # 804b211)
6.25 0.70 0.12 main (ComAnalyste.c:436 # 804b425)
4.89 0.79 0.09 main (ComAnalyste.c:283 # 804a3f4)
4.89 0.88 0.09 main (ComAnalyste.c:436 # 804b1e9)
4.08 0.96 0.08 main (ComAnalyste.c:388 # 804ad95)
3.81 1.03 0.07 main (ComAnalyste.c:293 # 804a510)
3.53 1.09 0.07 main (ComAnalyste.c:401 # 804af04)
3.26 1.15 0.06 main (ComAnalyste.c:293 # 804a4bf)
2.72 1.20 0.05 main (ComAnalyste.c:278 # 804a48d)
2.72 1.25 0.05 main (ComAnalyste.c:389 # 804adae)
2.72 1.30 0.05 main (ComAnalyste.c:406 # 804aecb)
2.45 1.35 0.05 main (ComAnalyste.c:386 # 804ad6d)
2.45 1.39 0.05 main (ComAnalyste.c:443 # 804b248)
2.45 1.44 0.05 main (ComAnalyste.c:446 # 804b2f4)
2.17 1.48 0.04 main (ComAnalyste.c:294 # 804a4e4)
2.17 1.52 0.04 main (ComAnalyste.c:459 # 804b43b)
1.63 1.55 0.03 main (ComAnalyste.c:442 # 804b22d)
1.63 1.58 0.03 main (ComAnalyste.c:304 # 804a56d)
1.09 1.60 0.02 main (ComAnalyste.c:278 # 804a3b3)
1.09 1.62 0.02 main (ComAnalyste.c:285 # 804a450)
1.09 1.64 0.02 main (ComAnalyste.c:286 # 804a470)
1.09 1.66 0.02 main (ComAnalyste.c:302 # 804acdf)
0.82 1.67 0.02 main (ComAnalyste.c:435 # 804b1d2)
0.54 1.68 0.01 main (ComAnalyste.c:282 # 804a3db)
0.54 1.69 0.01 main (ComAnalyste.c:302 # 804a545)
0.54 1.70 0.01 main (ComAnalyste.c:307 # 804a586)
0.54 1.71 0.01 main (ComAnalyste.c:367 # 804ac1a)
0.54 1.72 0.01 main (ComAnalyste.c:395 # 804ade6)
0.54 1.73 0.01 main (ComAnalyste.c:411 # 804aff8)
0.54 1.74 0.01 main (ComAnalyste.c:425 # 804b12a)
0.54 1.75 0.01 main (ComAnalyste.c:429 # 804b19f)
0.54 1.76 0.01 main (ComAnalyste.c:444 # 804b26f)
0.54 1.77 0.01 main (ComAnalyste.c:464 # 804b4a1)
0.54 1.78 0.01 main (ComAnalyste.c:469 # 804b570)
0.54 1.79 0.01 main (ComAnalyste.c:472 # 804b5b9)
0.27 1.80 0.01 main (ComAnalyste.c:308 # 804a5a3)
0.27 1.80 0.01 main (ComAnalyste.c:309 # 804a5a9)
0.27 1.81 0.01 main (ComAnalyste.c:349 # 804a974)
0.27 1.81 0.01 main (ComAnalyste.c:350 # 804a99c)
0.27 1.82 0.01 main (ComAnalyste.c:402 # 804af1d)
0.27 1.82 0.01 main (ComAnalyste.c:416 # 804b073)
0.27 1.83 0.01 main (ComAnalyste.c:417 # 804b0a1)
0.27 1.83 0.01 main (ComAnalyste.c:454 # 804b3ec)
0.27 1.84 0.01 main (ComAnalyste.c:461 # 804b44a)
0.27 1.84 0.01 main (ComAnalyste.c:462 # 804b458)
It shows the time spent per line, which is a very interesting result.
I don't know how accurate or valid it is, but it's quite interesting.
Hope it helps.
Here's a useful resource for you: gprof line by line profiling.
With older versions of the gcc compiler, the gprof -l argument specified line by line profiling.
However, newer versions of gcc use the gcov tool instead of gprof to display line by line profiling information.
If you are using Linux, you can use linux perf instead of gprof, as described here:
http://code.google.com/p/jrfonseca/wiki/Gprof2Dot#linux_perf
Typing perf report and selecting a function will allow you to get line-by-line information about where the CPU time is spent inside the function.
I am new to threading for Windows and would appreciate any and all suggestions. I have created a small program to demonstrate the access violation I am getting.
Here is test.h:
#ifndef TEST_H
#define TEST_H
using namespace std;
#include <windows.h>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
/**************************************************************************************************/
template<typename T>
string toString(const T& x){
    stringstream output;
    output << x;
    return output.str();
}
/**************************************************************************************************/
//custom data structure for threads to use.
// This is passed by void pointer so it can be any data type
// that can be passed using a single void pointer (LPVOID).
struct tempData {
    int threadID;
    vector<string> filenames;

    tempData(){}
    tempData(vector<string> f, int tid) {
        filenames = f;
        threadID = tid;
    }
};
/**************************************************************************************************/
static DWORD WINAPI tempThreadFunction(LPVOID lpParam){
    tempData* pDataArray = (tempData*)lpParam;
    string fileName = pDataArray->filenames[pDataArray->threadID];
    ifstream fileHandle(fileName.c_str());
    string output = toString(pDataArray->threadID);
    ofstream out(output.c_str());
    string name;
    int currentNum, num;
    vector<string> nameVector;
    vector<float> data;
    float currentData;
    int index = 0;
    fileHandle >> num;
    while(!fileHandle.eof()){
        fileHandle >> name >> currentNum;
        nameVector.push_back(name);
        for(int i=0; i<num; i++){
            fileHandle >> currentData;
            data.push_back(currentData);
        }
        //grab extra white space
        char d;
        while(isspace(d=fileHandle.get())) { ; }
        if(!fileHandle.eof()) { fileHandle.putback(d); }
        index++;
        cout << "Thread " << pDataArray->threadID << '\t' << index << endl;
        out << name << '\t' << "Thread " << pDataArray->threadID << '\t' << index << endl;
    }
    fileHandle.close();
    out.close();
    cout << "Thread " << pDataArray->threadID << " read " << nameVector.size() << " lines." << endl;
    return 0; // the DWORD return value was missing; falling off the end is undefined behavior
}
#endif
And here is test.cpp
#include "test.h"
/**************************************************************************************************/
int main(int argc, char *argv[]){
    string fileName1 = argv[1];
    string fileName2 = argv[2];
    vector<string> fileNames; fileNames.push_back(fileName1); fileNames.push_back(fileName2);
    vector<tempData*> pDataArray;
    DWORD dwThreadIdArray[2];
    HANDLE hThreadArray[2];
    //Create processor worker threads.
    for( int i=0; i<2; i++ ){
        // Allocate memory for thread data.
        tempData* tempThread = new tempData(fileNames, i);
        pDataArray.push_back(tempThread);
        hThreadArray[i] = CreateThread(NULL, 0, tempThreadFunction, pDataArray[i], 0, &dwThreadIdArray[i]);
    }
    //Wait until all threads have terminated.
    WaitForMultipleObjects(2, hThreadArray, TRUE, INFINITE);
    //Close all thread handles and free memory allocations.
    for(int i=0; i < pDataArray.size(); i++){
        CloseHandle(hThreadArray[i]);
        delete pDataArray[i];
    }
    return 0;
}
/**************************************************************************************************/
The files the threads are reading look like:
450
F5MMO9001C96XU 450 1.03 0.02 1.00 0.03 0.05 1.02 0.03 1.04 0.05 0.04 2.06 1.05 2.01 0.05 0.98 0.03 0.08 1.05 1.01 0.02 0.05 1.03 0.04 0.04 2.05 1.07 2.04 1.01 0.06 0.05 0.96 2.02 0.06 0.04 0.99 0.06 1.00 0.03 0.06 1.04 0.08 0.01 1.07 0.06 1.02 0.03 0.05 2.00 0.07 0.04 1.00 0.11 0.06 1.01 1.02 1.02 1.03 1.06 0.04 1.04 1.94 1.02 0.06 1.00 0.12 0.06 2.01 1.96 0.94 0.08 0.10 0.96 0.12 0.05 1.01 0.05 2.04 1.11 0.08 0.04 2.00 0.06 1.02 0.04 1.99 0.05 1.03 0.09 0.14 0.98 0.10 1.99 1.02 1.06 2.11 1.00 0.96 0.10 1.00 0.08 0.11 1.08 0.07 0.06 1.03 0.10 0.04 1.01 0.12 1.11 0.09 0.99 0.98 0.12 3.06 0.15 0.12 1.03 0.17 2.00 1.01 0.98 0.06 0.16 2.00 1.00 0.08 1.06 0.19 0.13 2.10 0.13 0.08 1.00 0.19 0.99 0.16 2.00 2.19 0.12 3.96 0.17 0.99 0.05 2.06 0.06 3.03 0.08 1.02 0.06 0.11 1.02 0.17 1.01 1.06 0.15 0.08 3.92 0.14 1.01 0.13 0.12 1.05 2.04 3.04 1.02 0.98 0.08 0.10 2.02 3.19 1.00 0.11 1.98 0.14 1.94 0.14 0.07 2.04 0.08 2.05 0.06 0.98 0.08 1.99 0.04 2.93 1.07 0.11 0.05 1.04 0.17 0.09 0.97 1.05 0.99 0.08 0.11 1.02 1.98 0.07 0.06 1.05 0.06 0.09 1.03 0.17 0.11 1.05 0.14 0.09 2.09 0.19 0.06 1.02 0.13 1.03 0.06 0.15 2.07 0.19 0.98 0.08 0.06 1.06 0.16 1.09 0.14 0.16 1.00 0.17 2.07 0.13 0.13 1.01 0.08 2.04 0.05 0.18 1.03 0.05 0.02 0.99 1.01 0.09 0.07 2.98 0.07 0.13 1.01 0.04 0.10 1.99 0.15 0.15 1.05 1.01 0.01 2.09 0.16 0.13 4.02 0.19 0.06 2.03 0.10 3.97 0.08 0.09 1.01 1.01 0.08 1.03 0.16 0.09 1.03 0.12 0.05 1.02 0.07 1.04 0.04 0.15 1.01 0.13 0.04 1.91 0.10 1.06 0.08 2.99 1.01 1.01 1.00 0.04 1.93 0.13 0.90 0.16 1.01 0.98 0.04 1.14 0.16 1.06 0.05 0.13 3.00 0.12 0.05 2.10 0.99 0.99 0.03 0.09 1.00 1.01 0.04 0.99 0.04 1.02 0.08 1.02 0.14 0.11 0.98 0.20 1.15 1.06 0.06 3.08 0.08 0.09 0.97 0.00 0.97 1.04 0.15 0.12 0.89 0.94 0.05 0.12 2.04 0.14 0.04 1.15 0.11 1.06 0.04 0.08 2.10 1.05 0.03 1.01 0.98 1.04 0.03 2.00 0.03 1.01 0.03 0.91 0.10 1.04 0.08 1.04 0.14 0.03 0.98 0.15 1.13 0.12 0.92 2.14 0.09 0.11 0.96 0.07 1.04 0.13 0.03 1.02 0.05 1.12 1.06 1.00 0.13 0.04 0.88 0.01 1.10 0.14 0.88 0.14 0.10 
1.10 0.00 1.14 1.01 1.02 0.06 0.95 1.86 0.07 0.04 1.01 0.04 1.93 0.04 0.08 2.05 1.10 0.10 0.11 0.91 0.11 1.00 0.08 1.09 0.07 0.10 2.14 0.10 3.19 1.07 2.10 0.11 1.02 0.13 0.93 0.09 0.13 0.90 2.17 0.09 0.19 2.09 1.10 0.09 1.13 0.91 2.03 0.08 1.01 2.09 0.19 0.07 1.03 0.10
F5MMO9001DCOF4 450 0.98 0.02 1.03 0.02 0.04 1.04 0.02 1.02 0.03 0.05 2.15 1.04 2.01 0.00 0.93 0.07 0.06 1.01 0.99 0.03 0.05 1.02 0.05 0.02 2.06 1.10 2.02 0.98 0.09 0.06 1.05 2.03 0.08 0.05 1.01 0.10 1.03 0.03 0.09 1.00 0.07 0.01 1.02 0.07 0.98 0.03 0.05 1.98 0.10 0.01 1.02 0.10 0.05 1.03 1.09 1.02 1.02 1.04 0.06 0.99 1.98 0.98 0.07 1.00 0.12 0.04 2.09 1.03 1.00 0.00 0.17 2.02 0.11 0.03 0.96 0.13 2.02 0.04 2.11 0.05 1.03 0.00 1.11 1.07 2.92 1.02 1.02 0.08 0.93 1.03 2.02 0.99 1.01 0.08 1.05 0.09 0.13 1.00 0.11 0.01 2.00 0.11 0.06 1.03 0.18 0.05 1.04 0.07 0.05 1.99 0.11 0.01 0.99 0.16 0.05 1.04 0.11 0.05 1.04 0.13 0.07 1.02 0.11 0.06 2.17 0.10 0.03 1.04 2.07 0.03 0.99 0.13 0.09 0.99 1.02 0.00 0.04 0.94 1.04 0.01 0.06 1.05 1.01 0.02 1.10 0.11 0.11 1.01 0.12 0.03 1.03 0.11 0.09 1.01 1.03 1.06 2.02 0.09 0.99 1.06 1.03 0.03 1.03 0.12 0.17 0.88 0.16 0.02 1.11 2.86 1.07 0.03 0.15 2.10 1.01 0.02 0.04 0.91 0.15 0.99 0.03 1.01 0.06 1.07 0.09 0.16 1.05 0.13 3.03 1.00 1.07 0.05 0.16 0.99 0.13 0.98 0.08 0.90 2.01 1.05 0.08 2.74 0.20 0.16 1.01 0.20 2.07 0.04 2.05 0.11 1.08 0.03 0.16 1.05 0.10 0.02 0.97 0.08 0.99 0.04 0.19 1.02 1.03 0.03 1.08 0.10 1.04 0.05 0.16 1.06 1.01 0.99 0.06 0.15 1.02 1.92 0.13 0.06 1.02 1.02 2.06 0.04 0.09 1.09 0.15 0.01 0.98 0.08 1.06 0.01 2.06 1.02 1.01 0.04 1.08 0.12 0.09 0.90 0.11 0.99 0.17 1.03 1.14 0.08 2.84 0.04 0.86 0.94 1.37 0.08 2.05 0.19 0.16 0.94 0.35 0.11 2.00 0.20 0.18 0.93 0.41 0.15 0.96 2.03 0.16 1.75 0.19 1.45 0.14 1.27 0.04 0.17 2.11 0.23 3.92 0.13 0.32 1.02 2.03 0.07 1.05 0.27 0.30 1.06 0.29 0.08 0.99 0.24 1.04 0.02 0.31 1.03 0.24 0.05 1.93 0.21 0.98 0.09 3.70 1.02 1.44 1.03 0.84 2.42 0.24 1.23 0.09 1.49 2.89 0.24 0.21 3.26 0.93 0.10 2.19 1.98 1.00 0.03 0.45 1.27 1.30 0.02 0.83 0.26 1.17 0.05 1.19 0.12 0.23 0.85 0.20 1.00 0.98 0.15 2.58 0.21 0.27 1.72 0.90 0.16 0.88 0.38 0.01 1.08 1.20 0.12 0.16 2.01 0.24 0.03 1.88 1.39 1.83 0.06 1.36 0.21 0.39 0.87 0.19 0.12 0.84 0.19 1.69 0.09 1.13 0.09 1.42 0.09 1.24 0.09 1.11 0.09 0.21 0.81 0.20 0.93 
0.16 1.06 1.70 2.08 0.15 0.16 1.42 0.43 1.06 0.86 1.20 0.12 1.22 0.20 0.25 0.98 0.23 0.82 0.19 0.25 1.01 0.18 1.05 0.11 0.26 0.95 0.22 0.11 1.08 0.19 1.05 1.03 0.21 0.08 2.14 0.21 1.84 0.07 0.40 1.79 1.35 0.90 0.17 1.35 1.12 0.15 1.84 1.23 2.19 0.86 1.35 0.26 0.34 1.00
F5MMO9001CUZ4G 450 1.04 0.01 1.02 0.03 0.04 1.00 0.02 1.01 0.04 0.08 2.06 1.02 1.97 0.03 0.99 0.05 0.07 1.07 1.03 0.02 0.06 1.03 0.05 0.02 1.99 1.04 2.06 0.99 0.09 0.05 1.01 1.98 0.08 0.06 1.00 0.06 1.03 0.05 0.05 1.02 0.11 0.04 1.03 0.06 1.04 0.03 0.06 2.04 0.09 0.05 0.98 0.08 0.06 1.03 1.02 1.03 0.98 1.05 0.07 1.01 1.95 1.05 0.05 1.00 0.11 0.05 2.03 1.96 1.02 0.01 0.11 1.03 0.12 0.02 0.98 0.07 1.97 0.03 1.02 0.04 3.03 1.01 3.02 0.05 0.17 1.01 0.19 0.06 2.00 1.05 2.07 1.03 1.01 0.10 1.04 0.09 0.12 1.03 1.04 0.04 1.01 0.12 1.03 0.05 0.09 1.02 1.00 1.01 0.09 0.12 1.06 0.12 2.01 0.01 0.99 1.05 1.03 0.06 1.05 0.10 0.12 1.02 1.03 0.06 0.05 1.00 0.11 2.00 0.07 0.14 0.98 1.05 0.07 3.04 0.13 1.05 0.12 0.07 1.03 2.03 3.07 1.02 0.99 0.16 0.05 1.98 3.08 0.96 0.08 1.97 0.10 1.96 0.08 0.10 1.98 1.03 1.04 0.07 1.03 0.13 0.16 1.03 0.20 0.07 1.01 0.14 3.08 0.97 0.14 0.05 1.09 0.15 0.06 1.02 1.00 1.01 0.06 0.12 1.02 1.99 0.11 0.03 1.01 0.98 2.02 0.02 0.18 1.06 0.14 0.02 1.03 0.15 1.00 0.03 0.15 1.02 0.15 0.04 1.04 0.13 0.09 0.99 0.16 0.06 1.03 0.15 1.05 0.10 0.16 1.01 0.18 1.99 0.14 0.09 1.05 0.09 1.99 0.04 2.05 1.03 0.10 0.05 3.14 0.15 0.14 1.01 0.11 0.07 2.01 0.12 0.09 0.96 1.00 0.03 0.09 1.02 0.19 0.08 1.03 0.15 0.12 2.14 0.18 0.05 1.02 1.06 0.18 0.04 2.00 0.09 4.08 0.05 0.13 0.98 1.08 0.09 1.03 0.14 0.10 1.00 0.12 0.02 1.01 0.09 1.03 0.04 0.15 0.99 0.12 0.03 2.06 0.10 1.09 0.08 3.21 1.03 1.01 0.99 0.09 2.01 0.15 0.93 0.13 1.02 0.95 0.13 1.02 0.17 1.06 0.05 0.16 3.12 0.12 0.08 2.07 1.06 1.08 1.02 0.09 0.07 0.93 0.13 1.01 0.07 0.98 0.07 1.02 0.11 0.12 0.99 0.21 1.09 1.08 0.10 3.03 0.06 0.12 1.99 0.04 0.12 1.00 0.03 0.11 1.05 1.00 0.07 0.16 1.96 0.12 0.04 2.16 1.98 1.04 0.07 0.90 0.04 0.15 1.09 3.08 0.10 1.04 0.15 0.99 0.08 1.05 0.08 1.07 0.17 0.07 1.01 0.18 2.06 0.13 0.13 2.12 1.97 0.14 0.09 0.91 0.10 1.07 1.09 3.06 1.08 0.98 0.17 0.91 0.09 0.08 3.09 0.11 1.08 0.19 0.00 2.04 0.16 2.05 0.17 0.06 2.07 0.96 2.05 0.09 0.98 0.09 0.06 2.37 0.03 0.16 1.11 0.95 0.09 1.13 0.93 4.07 0.08 
0.07 0.95 1.99 0.09 0.12 1.97 1.12 0.11 0.10 2.06 0.18 0.94 0.13 0.09 1.07 0.09 1.03 0.14 0.11 0.98 0.15 1.04 0.15 0.10 1.04 2.06 0.12 1.00 0.07 0.13 2.06 0.94 0.11 0.16 1.03 0.90 0.13 1.03 0.21 1.03 1.09 0.13 2.06 0.06 0.12 1.01 0.10 0.12 1.03 0.06 4.01 0.13 0.06 1.99
...
I don't mind sending the full files if you think it would be helpful. I suspect it's a simple error having to do with an assumption I am making about threads, but I can't seem to spot it. Thanks for taking the time to look at this issue. I really appreciate it!
I tried your program in VC 6 and I got an access violation when creating the ofstream objects. The following link helped me solve that access violation:
http://www.gamedev.net/topic/73037-ofstream-access-violations-when-multi-threading/
Since you haven't provided more information about where exactly you get the violation, I can only offer this as a hint.