Problems with gcovr and regex expressions

I am trying to merge coverage data for specific files from multiple runs into a single "report". In each project I generate test coverage for different files, each in a separate project folder, and at the end I pick and choose which files to include in my final report. If I use the following command line, selecting each file with --filter [complete path and filename], it works perfectly and I get a combined report with just those files. Since I have the -v option enabled, I can also see the files being picked up in the log shown below.
gcovr -g -k -s --html --html-details -o code_coverage_summary2.html -r . --filter firmware/crc32/crc_unit_tests.X/mcc_generated_files/boot/boot_verify_crc32.c --filter firmware/sha256/sha_unit_tests.X/mcc_generated_files/boot/boot_verify_sha256.c --filter firmware/checksum/checksum_unit_tests.X/mcc_generated_files/boot/boot_verify_checksum16.c --filter firmware/command_processor/bootloader_pic_tb.X/mcc_generated_files/boot/boot_process.c -v > log4.txt
Log file from the above run:
(many many lines before)
currdir C:\Users\murphy\Downloads\archive (5)\archive\src\test
gcov_fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c.gcov
[' -', ' 0', 'Source', 'boot_verify_sha256.c\n']
source_fname None
root C:\Users\murphy\Downloads\archive (5)\archive\src\test
fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c
Parsing coverage data for file C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c
uncovered: {122}
covered: {49: 1, 54: 1, 55: 1, 57: 1, 59: 1, 60: 1, 61: 1, 64: 1, 65: 1, 70: 1, 75: 1, 80: 1, 81: 1, 83: 1, 85: 1, 88: 1, 90: 1, 92: 1, 95: 1, 97: 1, 100: 1, 102: 1, 105: 1, 107: 1, 110: 1, 112: 1, 115: 1, 117: 1, 120: 1, 125: 1, 127: 1, 132: 1}
branches: {}
noncode: {128, 129, 3, 131, 133, 6, 134, 9, 12, 23, 27, 30, 31, 33, 35, 37, 39, 44, 46, 48, 50, 51, 53, 56, 58, 62, 63, 66, 67, 69, 71, 72, 74, 76, 77, 79, 82, 84, 86, 87, 89, 91, 93, 94, 96, 98, 99, 101, 103, 104, 106, 108, 109, 111, 113, 114, 116, 118, 119, 121, 123, 124, 126}
Finding source file corresponding to a gcov data file
currdir C:\Users\murphy\Downloads\archive (5)\archive\src\test
gcov_fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c.gcov
[' -', ' 0', 'Source', 'boot_verify_crc32.c\n']
source_fname None
root C:\Users\murphy\Downloads\archive (5)\archive\src\test
fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c
Parsing coverage data for file C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c
uncovered: set()
covered: {50: 1, 55: 1, 56: 1, 57: 1, 62: 1, 63: 1, 65: 1, 67: 1, 70: 1, 72: 1, 74: 1, 77: 1, 79: 1, 82: 1, 84: 1, 87: 1, 89: 1, 92: 1, 94: 1, 97: 1, 99: 1, 102: 1, 104: 1, 108: 1, 109: 1, 110: 1, 112: 1, 114: 1, 120: 1, 121: 1, 123: 1, 128: 1, 129: 1, 131: 1, 134: 1, 139: 1}
branches: {}
noncode: {3, 132, 133, 6, 135, 136, 9, 138, 12, 140, 141, 23, 28, 31, 33, 35, 38, 40, 45, 47, 49, 51, 52, 54, 58, 59, 61, 64, 66, 68, 69, 71, 73, 75, 76, 78, 80, 81, 83, 85, 86, 88, 90, 91, 93, 95, 96, 98, 100, 101, 103, 105, 106, 111, 113, 115, 122, 124}
Finding source file corresponding to a gcov data file
(many many lines after)
I have many more files to add, and I really do not want a separate filter for each unique path and filename. What I want is a single filter that matches every file fitting the pattern boot_verify*.c. I have tried the different regex expressions shown below, but they all return no files selected.
--filter '(.+/)?boot_verify(.*)\.c$'
--filter '(.+/)?boot_verify_([a-z][A-Z][0-9]+)\.c$'
--filter '(.+/)?boot_verify_([a-z][A-Z][0-9]*)\.c$'
To force the issue, I tried a regex with just a single file name, shown below, and even that did not work.
--filter '(.+/)?boot_verify_crc32\.c$'
I would really like to get both the wildcard pattern and the specific-filename pattern working. What am I doing wrong?
Ken

When using regexes in command line arguments, they must be properly escaped.
When using Bash or another POSIX shell on Linux or another Unix-like operating system, including macOS, surround the regex with single quotes. For example:
gcovr --filter '(.+/)?boot_verify_crc32\.c$'
See the Quoting section in the Bash Reference Manual or in the POSIX spec. In particular, escaping/quoting is necessary if the regex contains any of the following characters, or space characters:
| & ; < > ( ) $ ` \ " ' * ? [ # ~ = %
When using cmd.exe on Windows, surround the regex with double quotes:
gcovr --filter "(.+/)?boot_verify_crc32\.c$"
I cannot find a good reference for this, though Parsing C Command-Line Arguments seems to contain the relevant rules.
When using PowerShell on Windows, surround the regex with single quotes:
gcovr --filter '(.+/)?boot_verify_crc32\.c$'
See the sections about Quoting Rules and about Parsing in the PowerShell documentation. In particular, escaping/quoting is necessary if the regex contains any of the following characters, or space characters:
' " ` , ; ( ) { } | & < > #
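Regardless of the shell, you can sanity-check the pattern itself with Python's re module, completely outside of any quoting rules. This is only a quick diagnostic sketch; the relative paths are copied from your verbose log, and if the pattern matches here, the remaining problem is almost certainly shell quoting (or an older gcovr version):
import re

# The wildcard filter from the question, tested against paths taken from the gcovr log.
pattern = re.compile(r'(.+/)?boot_verify(.*)\.c$')

paths = [
    'firmware/crc32/crc_unit_tests.X/mcc_generated_files/boot/boot_verify_crc32.c',
    'firmware/sha256/sha_unit_tests.X/mcc_generated_files/boot/boot_verify_sha256.c',
    'firmware/checksum/checksum_unit_tests.X/mcc_generated_files/boot/boot_verify_checksum16.c',
]

for p in paths:
    print("%s -> %s" % (p, bool(pattern.search(p))))  # expect True for all three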
Additionally, please note that gcovr 4.1 is from 2018. Since then, some fixes relating to Windows support and filter matching have been implemented. Assuming that you installed gcovr as pip3 install gcovr, you can upgrade to the latest stable release with:
pip3 install -U gcovr

Related

complex_model_l_gpu is supposed to have 8 GPUs but has none

I am submitting a Keras multi_gpu_model job with gpus=8, using the following config.yaml:
trainingInput:
  scaleTier: CUSTOM
  masterType: complex_model_l_gpu
  workerType: standard_gpu
  parameterServerType: standard_gpu
  workerCount: 0
  parameterServerCount: 0
I am getting the following error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 247, in <module>
    main()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 243, in main
    run()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 112, in run
    run_training(args, unique_id)
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 139, in run_training
    unet, template_model = model_lib.train_model(args)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 133, in train_model
    model, template_model = unet_network(args.image_size)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 106, in unet_network
    model = multi_gpu_model(template_model, gpus=8)
  File "/root/.local/lib/python2.7/site-packages/keras/utils/training_utils.py", line 132, in multi_gpu_model
    available_devices))
ValueError: To call multi_gpu_model with gpus=8, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2', '/gpu:3', '/gpu:4', '/gpu:5', '/gpu:6', '/gpu:7']. However this machine only has: ['/cpu:0']. Try reducing gpus.
According to the documentation I should have 8 GPUs available. Has anyone seen this? Do you know how to resolve it?
Per the request in the comments below, I ran a vanilla TensorFlow GPU graph with the following:
import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='a')
with tf.device('/gpu:0'):
    b = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='b')
with tf.device('/gpu:1'):
    c = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='c')
with tf.device('/gpu:2'):
    d = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='d')
with tf.device('/gpu:3'):
    e = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='e')
with tf.device('/gpu:4'):
    f = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='f')
with tf.device('/gpu:5'):
    g = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='g')
with tf.device('/gpu:6'):
    h = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='h')
with tf.device('/gpu:7'):
    i = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='i')

cd = tf.matmul(c, d)
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print sess.run(cd)
and got the following error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 39, in <module>
    main()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 35, in main
    run()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 31, in run
    print sess.run(cd)
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: Cannot assign a device for operation 'i': Operation was explicitly assigned to /device:GPU:7 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
[[Node: i = Constdtype=DT_FLOAT, value=Tensor, _device="/device:GPU:7"]]

Caused by op u'i', defined at:
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 39, in <module>
    main()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 35, in main
    run()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 26, in run
    i = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2], name='i')
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 214, in constant
    name=name).outputs[0]
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/root/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'i': Operation was explicitly assigned to /device:GPU:7 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
[[Node: i = Constdtype=DT_FLOAT, value=Tensor, _device="/device:GPU:7"]]
Could you check your job log to see if you re-installed the CPU-only TensorFlow? You should see something like:
"Downloading tensorflow-1.4.0..."
Please note that the TensorFlow GPU packages are at https://pypi.python.org/pypi/tensorflow-gpu/1.4.0 rather than https://pypi.python.org/pypi/tensorflow/1.4.0. Also, you don't need to re-install TensorFlow at all if you pass in runtime_version as 1.4.
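If you want to confirm from inside the job which devices TensorFlow can actually see, a small diagnostic along these lines can be dropped into your trainer code (the function name log_visible_devices is just illustrative; device_lib.list_local_devices is the standard helper):
from tensorflow.python.client import device_lib

def log_visible_devices():
    # Prints the devices TensorFlow registered; with the CPU-only package this
    # shows only the CPU even on a machine that has 8 GPUs attached.
    for device in device_lib.list_local_devices():
        print("%s (%s)" % (device.name, device.device_type))

log_visible_devices()
If the output lists only the CPU device, the job is running a CPU-only TensorFlow build.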

Using the WSO2 Message Broker server with a PHP client

I am trying to connect a PHP client (using the php-amqplib library) to a server based on WSO2 MB (WSO2 Message Broker version 3.1.0).
I could not get this to work with amqp_consumer.php; it always fails with this error:
INFO {org.wso2.andes.server.protocol.AMQProtocolEngine} - Unable to create SASL Server:AMQPLAIN whilst processing:[ConnectionStartOkBodyImpl: clientProperties={product=[LONG_STRING: AMQPLib], platform=[LONG_STRING: PHP], version=[LONG_STRING: 2.6], information=[LONG_STRING: ], copyright=[LONG_STRING: ], capabilities=[FIELD_TABLE: {authentication_failure_close=[BOOLEAN: true], publisher_confirms=[BOOLEAN: true], consumer_cancel_notify=[BOOLEAN: true], exchange_exchange_bindings=[BOOLEAN: true], basic.nack=[BOOLEAN: true], connection.blocked=[BOOLEAN: true]}]}, mechanism=AMQPLAIN, response=[5, 76, 79, 71, 73, 78, 83, 0, 0, 0, 5, 97, 100, 109, 105, 110, 8, 80, 65, 83, 83, 87, 79, 82, 68, 83, 0, 0, 0, 5, 97, 100, 109, 105, 110], locale=en_US]
[2016-11-04 08:05:26,901] INFO {org.wso2.andes.server.protocol.AMQProtocolEngine} - Closing connection due to: org.wso2.andes.AMQConnectionException: Unable to create SASL Server:AMQPLAIN [error code 506: resource error]
I am using these parameters in config.php as connection parameters:
require_once __DIR__ . '/../vendor/autoload.php';
define('HOST', 'localhost');
define('PORT', 5692);
define('USER', 'admin');
define('PASS', 'admin');
define('VHOST', '/');
My questions:
1. Can you recommend a PHP library or tutorial for establishing communication between PHP code and WSO2 MB?
2. Which connection/authentication mechanisms does WSO2 MB allow (PLAIN, AMQPLAIN, ...)?
3. Any help is appreciated :)

How to convert a list of lists to a list of frozensets in original order in Python

I have a list of lists and I want to convert it to a list of frozensets while keeping the original order. I've tried the code below, but the output isn't in the original order.
data=[[118, 175, 181, 348, 353], [117, 175, 181, 371, 282, 297], [119, 166, 176, 54, 349]]
my code:
>>> transactionList = list()
>>> for rec in data:
...     transaction = frozenset(rec)
...     transactionList.append(transaction)
The output I got is not in the original order:
>>> transactionList
[frozenset([353, 348, 181, 118, 175]), frozenset([297, 175, 371, 181, 282, 117]), frozenset([176, 349, 54, 166, 119])]
My expected output, in the original order:
>>> transactionList
[frozenset([118, 175, 181, 348, 353]), frozenset([117, 175, 181, 371, 282, 297]), frozenset([119, 166, 176, 54, 349])]
Frozensets, like sets, have no defined order. See "Does Python have an ordered set?" for ways to work around this with custom classes.
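If you only need the original element order for display or iteration, and the frozenset only as a hashable value or for fast membership tests, one simple workaround (just a sketch, not the only option) is to keep an ordered view alongside each frozenset:
data = [[118, 175, 181, 348, 353],
        [117, 175, 181, 371, 282, 297],
        [119, 166, 176, 54, 349]]

# Pair an order-preserving tuple with the unordered, hashable frozenset.
transactions = [(tuple(rec), frozenset(rec)) for rec in data]

for ordered, as_set in transactions:
    print(ordered)         # original order, e.g. (118, 175, 181, 348, 353)
    print(175 in as_set)   # fast membership test via the frozenset
The frozenset itself will still display in arbitrary order; only the tuple preserves the order you started with.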

How do I get request and response counts in Scrapyd?

I am trying to get the request and response counts in Scrapyd while running multiple spiders (8 spiders, started dynamically). I am trying to read those counts using Python.
The counts I am after look like this:
{'downloader/request_bytes': 130427,
'downloader/request_count': 273,
'downloader/request_method_count/GET': 273,
'downloader/response_bytes': 2169984,
'downloader/response_count': 273,
'downloader/response_status_count/200': 271,
'downloader/response_status_count/404': 2,
'dupefilter/filtered': 416,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 7, 21, 14, 21, 38, 13000),
'item_scraped_count': 119,
'log_count/DEBUG': 406,
'log_count/INFO': 44,
'offsite/domains': 9,
'offsite/filtered': 5865,
'request_depth_max': 14,
'response_received_count': 273,
'scheduler/dequeued': 273,
'scheduler/dequeued/memory': 273,
'scheduler/enqueued': 273,
'scheduler/enqueued/memory': 273,
'start_time': datetime.datetime(2015, 7, 21, 14, 16, 41, 144000)}
Thanks,
Use the Stats Collection from Scrapy.
With this you can access the statistics that are dumped to the console at the end of each run, and if you write your own extension or middleware you can combine the results of your 8 spiders, as in the example from the documentation.
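A minimal sketch of such an extension, assuming the standard Scrapy stats and signals APIs (the class name SpiderStatsLogger and the log format are illustrative):
import logging

from scrapy import signals

logger = logging.getLogger(__name__)

class SpiderStatsLogger(object):
    """Log per-spider request/response counts when each spider closes."""

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler.stats)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider):
        requests = self.stats.get_value('downloader/request_count', 0)
        responses = self.stats.get_value('downloader/response_count', 0)
        logger.info("%s: requests=%s responses=%s", spider.name, requests, responses)
Enable it through the EXTENSIONS setting in settings.py. Since each Scrapyd job runs as a separate process, combining the counts of all 8 spiders means writing the values somewhere shared (a file, a database, or the job logs) and aggregating them afterwards.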

Scrapy KeyError: 'z' (FreeBSD)

I am trying to install Scrapy 0.24 on a FreeBSD (MariaDB) system, but when I run it I get a "KeyError: 'z'" whose meaning I don't understand... I tried to debug it without success.
File "/usr/local/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy==0.24.4', 'console_scripts', 'scrapy')()
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 143, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
func(*a, **kw)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 60, in run
self.crawler_process.start()
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 92, in start
if self.start_crawling():
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 124, in start_crawling
return self._start_crawler() is not None
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 139, in _start_crawler
crawler.configure()
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 47, in configure
self.engine = ExecutionEngine(self, self._spider_closed)
File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 65, in __init__
self.scraper = Scraper(crawler)
File "/usr/local/lib/python2.7/site-packages/scrapy/core/scraper.py", line 66, in __init__
self.itemproc = itemproc_cls.from_crawler(crawler)
File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 50, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 31, in from_settings
mw = mwcls.from_crawler(crawler)
File "/usr/local/lib/python2.7/site-packages/scrapy/contrib/pipeline/media.py", line 29, in from_crawler
pipe = cls.from_settings(crawler.settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/contrib/pipeline/images.py", line 52, in from_settings
return cls(store_uri)
File "/usr/local/lib/python2.7/site-packages/scrapy/contrib/pipeline/files.py", line 150, in __init__
self.store = self._get_store(store_uri)
File "/usr/local/lib/python2.7/site-packages/scrapy/contrib/pipeline/files.py", line 170, in _get_store
store_cls = self.STORE_SCHEMES[scheme]
KeyError: 'z'
I'll also try to install Scrapy 0.22 on FreeBSD, just in case the version is the problem.
Thanks a lot!!