The gcov data files (*.gcda) accumulate the counts across multiple tests. That is a wonderful thing. The problem is, I can't figure out how to get the .gcov files to accumulate in the same way the .gcda files do.
I have a large project (53 headers, 54 cpp files), and some headers are used in multiple cpp files. The following example is radically simplified; the brute-force approach would take days of manual, tedious work if it turns out to be required.
Say for example I have xyz.hpp that defines the xyz class. On line 24 it defines the build() method that builds xyz data, and on line 35 it defines the data() method that returns a reference to the data.
Say I run my test suite, then I execute gcov on abc.cpp. The xyz.hpp.gcov report has a count of 5 for line 24 (build) and a count of zero for line 35 (data). Now I run gcov on def.cpp, and the xyz.hpp.gcov report has a count of zero for line 24 and a count of 7 for line 35. So, instead of accumulating the report information and ending up with a count of 5 for line 24 (build) and 7 for line 35 (data), gcov replaces xyz.hpp.gcov each time, so all counts are reset. I understand why that's the default behavior, but I can't seem to override it. If I'm unable to accumulate the .gcov reports programmatically, I'll be forced to manually compare, say, a dozen different xyz.hpp.gcov files in order to assess the coverage.
It looks like LCOV is able to do this accumulation, but it takes weeks to get new software installed in my current work culture.
Thanks in advance for any help.
I have a large number of daily WRF output files, each one consisting of 24 time steps, one for every hour of the day. Now I would like to combine these single output files into one resulting file that covers the entire time period by using cdo mergetime. I have done this before with some other output files in another context and it worked well.
When I apply this command for example:
cdo mergetime wrf_file1.nc wrf_file2.nc output_file.nc
I get the following message many times: Warning (cdfInqContents): Coordinates variable XTIME can't be assigned!
Since it is only a warning and not an error, the process continues. But it takes way too much time and the resulting output file is way too big. For example, when the two input files are about 6 GB, the resulting output file is above 40 GB, which does not make sense at all.
Anybody with an idea how to solve this?
The merged files are probably large because CDO does not, by default, compress the output file. And the WRF files are probably compressed.
You can modify your call to compress the output as follows:
cdo -z zip -mergetime wrf_file1.nc wrf_file2.nc output_file.nc
I am writing a program to evaluate the hourly energy output from PV in various cities in the US. For simplicity, I have them in a dictionary (tmy3_cities) so that I can loop through them. For the code, I followed the TMY to Power Tutorial on GitHub. Rather than showing the whole loop, I have only added the code that reads and shifts the time by 30 min.
Accordingly, the code taken from the tutorial works for all of the TMY3 files except for Houston, Atlanta, and Baltimore (HAB, for simplicity). All of the TMY3 files were downloaded from NREL and renamed for my own use. The error I get in reading these three files is related to the datetime, and it essentially comes down to a "ValueError: invalid literal for int() with base 10: '1'" after some traceback.
Rather than looping into the same problem, I entered each file into the reader individually, and sure enough, only the HAB tmy3 files give errors.
Secondly, I downloaded the files again. This, obviously, had no impact.
In a lazy attempt to bypass the issue, I copied and pasted the date and time columns from working TMY3 files (e.g., Miami) into the non-working ones (i.e., HAB) via Excel.
I am not sure what else to do, as I am still fairly new to Python and coding in general.
import os
import inspect
import pvlib

#The dictionary below is not important to the problem, but is provided only for some clarification.
tmy3_cities = {'miami': 'TMY3Miami.csv',
'houston': 'TMY3Houston.csv',
'phoenix': 'TMY3Phoenix.csv',
'atlanta': 'TMY3Atlanta.csv',
'los_angeles': 'TMY3LosAngeles.csv',
'las_vegas': 'TMY3LasVegas.csv',
'san_francisco': 'TMY3SanFrancisco.csv',
'baltimore': 'TMY3Baltimore.csv',
'albuquerque': 'TMY3Albuquerque.csv',
'seattle': 'TMY3Seattle.csv',
'chicago': 'TMY3Chicago.csv',
'denver': 'TMY3Denver.csv',
'minneapolis': 'TMY3Minneapolis.csv',
'helena': 'TMY3Helena.csv',
'duluth': 'TMY3Duluth.csv',
'fairbanks': 'TMY3Fairbanks.csv'}
#The code below was taken from the tutorial.
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
#This is the only section of the code that was modified.
datapath = os.path.join(pvlib_abspath, 'data', 'TMY3Atlanta.csv')
tmy_data, meta = pvlib.tmy.readtmy3(datapath, coerce_year=2015)
tmy_data.index.name = 'Time'
# TMY data seems to be given as hourly data with time stamp at the end
# Shift the index 30 Minutes back for calculation of sun positions
tmy_data = tmy_data.shift(freq='-30Min')['2015']
print(tmy_data.head())
I would expect each TMY3 file that is read to produce its own tmy_data DataFrame. Please comment if you'd like to see the whole loop.
I am given a config file that looks like this for example:
Start Simulator Configuration File
Version/Phase: 2.0
File Path: Test_2e.mdf
CPU Scheduling Code: SJF
Processor cycle time (msec): 10
Monitor display time (msec): 20
Hard drive cycle time (msec): 15
Printer cycle time (msec): 25
Keyboard cycle time (msec): 50
Mouse cycle time (msec): 10
Speaker cycle time (msec): 15
Log: Log to Both
Log File Path: logfile_1.lgf
End Simulator Configuration File
I am supposed to be able to take this file and output the cycles and cycle times to a log and/or the monitor. I am then supposed to pull data from a metadata file that will tell me how many cycles each of these runs (among other things), and then I'm supposed to calculate and log the total time. For example, 5 hard drive cycles would be 75 msec. The config and metadata files can come in any order.
I am thinking I will put each item in an array and then cycle through, waiting for true when the strings match (this will also help detect file errors). The config file should always be the same size despite a different order. The metadata file can be any size, so I figured I would do a similar thing but in a vector.
Then I will multiply the cycle times from the config file by the number of cycles in the matching metadata file string. I think the best way to read the data from the vector is in a queue.
Does this sound like a good idea?
I understand most of the concepts, but my data structures knowledge is shaky when it comes to actually coding it. For example, when reading from the files, should I read them line by line, or would it be best to separate the ints from the strings to calculate them later? I've never had to do this from a file that can change before.
If I separate them, would I have to use separate arrays/vectors?
I'm using C++, by the way.
Your logic should be:
1. Create two std::map variables: one that maps a string to a string, and another that maps a string to a float.
2. Read each line of the file.
3. If the line contains ':', split the string into two parts:
3a. Part A is the substring from index 0 up to (but not including) the index of the ':'.
3b. Part B is the substring starting at 1 plus the index of the ':'.
4. Use these two parts as the key and value to store in your two std::map objects, choosing the map based on the value type.
Now you have read the file properly. When you read the metadata file, you simply take each key from the metadata file, use it to look up the corresponding key in your configuration file data (to get the value), then do whatever mathematical operation is required.
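For illustration, here is a minimal sketch of that approach. The file name config.conf, the map names, and the use of std::stof to decide which map a value belongs in are assumptions made for this example, not requirements:

#include <fstream>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

int main() {
    std::map<std::string, std::string> stringSettings; // e.g. "CPU Scheduling Code" -> "SJF"
    std::map<std::string, float> numberSettings;       // e.g. "Printer cycle time (msec)" -> 25

    std::ifstream config("config.conf");                // hypothetical config file name
    std::string line;
    while (std::getline(config, line)) {
        std::size_t colon = line.find(':');
        if (colon == std::string::npos)
            continue;                                   // skips the Start/End marker lines
        std::string key   = line.substr(0, colon);      // part A: everything before the ':'
        std::string value = line.substr(colon + 1);     // part B: everything after the ':'
        if (!value.empty() && value.front() == ' ')
            value.erase(0, 1);                          // drop the space that follows the ':'

        try {
            numberSettings[key] = std::stof(value);     // numeric values (cycle times, version)
        } catch (const std::invalid_argument&) {
            stringSettings[key] = value;                // everything else stays a string
        }
    }

    // Later, when the metadata file says e.g. 5 hard drive cycles:
    float total = 5 * numberSettings["Hard drive cycle time (msec)"]; // 5 * 15 = 75 msec
    std::cout << "Hard drive total: " << total << " msec\n";
}

The same pattern works for the metadata file; the only difference is that you would push the parsed entries into a vector (or queue) instead of a fixed set of keys, since its size can vary.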
In one of my Nim projects I'm having performance issues. I'm now trying to use nimprof to see what's going on. I have an import nimprof in my main source file, and I'm compiling with --profiler:on. When I run the program I can see the messages:
writing profile_results.txt...
... done
However, profile_results.txt only contains this:
total executions of each stack trace:
Entry: 1/1 Calls: 2741/2741 = 1.0e+02% [sum: 2741; 2741/2741 = 1.0e+02%]
The run time was about 1 minute -- so I don't think it is just not enough time to sample anything. Is there any way to get something more meaningful out of nimprof?
You need to add the compiler flag --stackTrace:on, or there won't be any function names or line numbers to analyze.
1.0e+02% is just a silly way to say 100%. It says it took a lot of stack samples and they were all the same, which is not surprising.
What you need is to actually see the sample.
It should appear below the line above. It will show you what the problem is.
As an aside, it should show line numbers as well as function names, and it shouldn't just sort the stacks by frequency.
The reason is that a guilty line of code can easily appear on a large fraction of stacks even though the stacks are otherwise different, so if the stacks are merely sorted by frequency, that line will never be aggregated.
I have multiple test cases which actually measure the duration of a calculation using Boost timers.
The test cases are defined using Boost Test.
For running the tests I use CTest since I use CMake.
The tests are all specified with add_test().
Since CTest generates XML files I can display the Unit Test results in a corresponding Jenkins job.
For the performance tests I do not only want to display if the test cases succeeded but also the measured durations.
Is it possible in C++ (with Boost Test/CMake) to somehow mark the measured durations and convert them into a file that contains one line per test case with two columns, like this?
unittest0.dat:
test case | time
bla0 10 s
bla3 40 s
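To make the idea concrete, a minimal sketch of what such a test case could look like, assuming boost::timer::cpu_timer for the measurement and appending a row to unittest0.dat from within the test; the threshold in BOOST_CHECK is only an illustration of the usual pass/fail assertion:

#define BOOST_TEST_MODULE performance_tests
#include <boost/test/included/unit_test.hpp>
#include <boost/timer/timer.hpp>

#include <fstream>
#include <string>

// Append one "test case | time" row to the file that Jenkins will plot.
static void log_duration(const std::string& name, double seconds) {
    std::ofstream out("unittest0.dat", std::ios::app);
    out << name << " " << seconds << " s\n";
}

BOOST_AUTO_TEST_CASE(bla0) {
    boost::timer::cpu_timer timer;
    // ... run the calculation whose duration is being measured ...
    double seconds = timer.elapsed().wall / 1e9;  // wall time is reported in nanoseconds
    log_duration("bla0", seconds);
    BOOST_CHECK(seconds < 60.0);                  // the usual pass/fail criterion
}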
Then I would like to display this file and all similar files from previous builds in Jenkins as a plot.
The user should be able to follow the measured values over multiple jobs from the past to see if the performance has improved.
Therefore Jenkins would have to convert the data into files like:
bla0.dat:
build number | time
0 10 s
1 15 s
2 20 s
Maybe there is a completely different approach I don't know about.