I have multiple GTests. I would like to run only some of them based on two filters. For example, I want to run all UI properties tests. Something like: --gtest_filter=Properties and --gtest_filter=UI. I would like to use both filters in the same run.
I could not find the syntax for passing 2 filters at the same time.
I read this, and I would like to do something like the 3rd example (*./foo_test --gtest_filter=Null:Constructor* runs any test whose full name contains either "Null" or "Constructor"), but I want to run any test whose full name contains both "Null" and "Constructor".
Any ideas?
This is how you specify multiple filters:
--gtest_filter=*Properties*:*UI*
EDIT:
If you want both words to be contained in the test name, you can use this:
--gtest_filter=*Properties*UI*:*UI*Properties*
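For example, assuming the test binary is called foo_test as in the example you quoted, the full invocation would be:
./foo_test --gtest_filter=*Properties*UI*:*UI*Properties*
This works because the filter language only supports wildcards, ':' as an OR separator, and a '-' prefix for negative patterns; there is no AND operator, so "contains both" has to be spelled out as the two possible orderings.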
Apply Filter to Display data on Grid. The grid offers the following filters and options:
1. Importer: Multiple Options
2. Branch: Multiple Options
3. Search By: Search By-1, Search By-2, ..., Search By-10
4. Dropdown: DropDown-1, DropDown-2, ..., DropDown-5
   4.1. From and To Date
5. Vendor: Vendor-1, Vendor-2, ..., Vendor-N
6. Rater: Rater-1, Rater-2, ..., Rater-N
7. Status: Status-1, Status-2, ..., Status-5
8. Correction: All, Option-1, Option-2
9. Error: All, Yes, No
10. Change Type: Option-1, Option-2, ..., Option-8
11. Code: Code-1, Code-2, ..., Code-N
How do I write test cases for this functionality of applying multiple filters, and how do I test it?
Is it possible to test this manually?
Good morning all,
I'm looking for a way in Google Data Fusion to make the name of a source file stored on GCS dynamic. The files to be processed are named according to their value date, for example: 2020-12-10_data.csv
My need is to set the filename dynamically so that the pipeline uses the correct file every day (something like: ${new Date().getFullYear()...}_data.csv).
I managed to use runtime arguments by specifying the date as a string (2020-12-10), but not with a function.
More generally, is there any documentation on how to enter dynamic parameters with ready-made or custom "functions"? (I couldn't find any.)
Thanks in advance for your help.
There is a ready-made workaround: you can try the "BigQuery Execute" plugin.
Steps:
1. Put the query below in the SQL field:
select cast(current_date as string) || '_data.csv' as filename
-- output: '2020-12-15_data.csv'
2. Set "Row As Arguments" to "true".
3. Use the resulting argument via ${filename} wherever you need it.
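For instance, in the GCS source's path property you could then reference it like this (the bucket name here is just a placeholder):
gs://my-bucket/${filename}
At runtime the macro resolves to something like gs://my-bucket/2020-12-15_data.csv.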
I am testing a product search form. Products may be searched by different parameters (like status, material, weight, etc.). When I want to search by status I do the following:
Scenario Outline: search by status
  When I select "<status>" from "search_form_status"
  And I press "Search"
  And I wait for 3 seconds # implemented
  And I follow random link from test result table # implemented
  Then I should see "<status>" in the "div#status" element

  Examples:
    | status   |
    | enabled  |
    | disabled |
And everything is fine. But if I wanted to test the same search for, say, productMaterial, I'm stuck, as product materials are a subject that can change at any time (we may need new materials, may edit material names, or delete old unused ones). Add to that the fact that materials will differ between the testing environment and the production site.
I know that I can do something like:
Given I select product material
And implement the step with a foreach loop like this:
$matList = $this->getSession()->getPage()->findAll('css', 'select option');
foreach ($matList as $material) {
    // DO SOMETHING
}
But how would I create all the other steps like in the status example?
I imagine that I would want to use a $material variable in the steps that follow that custom step in my search.feature file. But how do I do that?
How would I iterate through all of the options list and do a bunch of steps in each iteration?
You'll need to write the PHP code that runs the individual steps you want inside the method that selects all the options.
For example:
$handler = $this->getSession()->getSelectorsHandler();
// collect every <option> element of the select on the page
$optionElements = $this->getSession()->getPage()->findAll(
    'named',
    array('option', $handler->selectorToXpath('css', 'select'))
);
foreach ($optionElements as $optionElement) {
    // run the same steps for each option value
    $this->getSession()->getPage()->selectFieldOption('portal', $optionElement->getValue());
    $this->pressButton("show");
    $this->assertPageContainsText(" - Report Report");
}
I want to sort my Store models by their opening times. The Store model contains an is_open method which checks the Store's opening time ranges and returns a boolean indicating whether it is open. The problem is that I don't want to sort my queryset manually, because of efficiency problems. I figured that if I wrote a custom annotation function, I could filter the query more efficiently.
So I googled and found that I can extend Django's Aggregate class. From what I understood, I have to use pre-defined SQL functions like MAX, AVG, etc. The thing is, I want to check whether today's date falls within a given list of time intervals. So can anyone tell me which SQL function I should use?
Edit
I'd like to put the code here, but it's really spaghetti: a page-long function that only generates time intervals and checks for the suitable one.
I want to avoid:
alg = lambda s: not (s.is_open() and s.reachable)
sorted(stores, key=alg)
and replace it with:
Store.objects.annotate(is_open=CheckOpen(datetime.today())).order_by('is_open')
But I'm totally lost as to how to write CheckOpen...
Have a look at the docs for QuerySet.extra().
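A minimal sketch of that approach, assuming the opening hours live on the Store table as open_time and close_time TIME columns (hypothetical names; adjust the SQL to your actual schema):

from datetime import datetime

now = datetime.now().time()
# compute is_open in SQL so the database does the sorting
stores = Store.objects.extra(
    select={'is_open': 'open_time <= %s AND close_time >= %s'},
    select_params=[now, now],
    order_by=['-is_open'],
)

Note that extra() needs its own order_by parameter to sort on the computed alias.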
We are planning to build a dynamic data import tool: basically, taking information in on one end in a specified format (Access, Excel, CSV) and uploading it into a web service.
The situation is that we do not know the export field names, so the application will need to be able to read the WSDL definition and map to the valid entries on the other end.
In the import section we can define most of the fields, but usually there are a few that are custom, which I see no problem with.
I just wonder if there is a design pattern that will fit this type of application or help with the development of it.
I am not sure where the complexity is in your application, so I will just give an example of how I have used patterns for importing data of different formats. I created a factory which takes the file format as an argument and returns a parser for that format. Then I use the Builder pattern: the parser is provided with a builder, which the parser calls as it parses the file to construct the desired data objects in the application.
// In this example file format describes a house (complex data object)
AbstractReader reader = factory.createReader("name of file format");
AbstractBuilder builder = new HouseBuilder(list_of_houses);
reader.import(text_stream, builder);
// now the list_of_houses should contain an extra house
// as defined in the text_stream
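A more concrete sketch of the same interaction, in Python with made-up names:

class HouseBuilder:
    def __init__(self, houses):
        self.houses = houses

    def add_house(self, name, rooms):
        # called by the parser for every house it encounters
        self.houses.append({'name': name, 'rooms': rooms})

class CsvHouseReader:
    def read(self, lines, builder):
        # the parser drives the builder as it walks the input
        for line in lines:
            name, rooms = line.strip().split(',')
            builder.add_house(name, int(rooms))

def create_reader(fmt):
    # the factory maps a file format name to a parser
    readers = {'csv': CsvHouseReader}
    return readers[fmt]()

houses = []
create_reader('csv').read(['villa,5', 'cabin,2'], HouseBuilder(houses))
# houses now contains the two parsed house objects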
I would say the Adapter pattern, as you are "adapting" the data from a file to an object, like the SqlDataAdapter does it from a SQL table to a DataTable.
Have a different Adapter for each file type/format, e.g. SqlDataAdapter and MySqlDataAdapter: they handle the same commands but different data sources, to achieve the same output DataTable.
Adapter pattern
HTH
Bones
Probably Bridge could fit, since you have to deal with different file formats.
And Façade to simplify the usage. Handle my reply with care, I'm just learning design patterns :)
You will probably also need Abstract Factory and Command patterns.
If the data doesn't match the input format, you will probably need to transform it somehow.
That's where the Command pattern comes in. Because the formats are dynamic, you will need to base the commands you generate on the input. That's where Abstract Factory is useful.
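A minimal sketch of how those two patterns might combine, with all names hypothetical:

from abc import ABC, abstractmethod

class TransformCommand(ABC):
    @abstractmethod
    def execute(self, record):
        ...

class NormalizeDate(TransformCommand):
    # e.g. a fix-up that only the CSV format needs
    def execute(self, record):
        record['date'] = record['date'].replace('/', '-')
        return record

class CommandFactory(ABC):
    @abstractmethod
    def commands(self):
        ...

class CsvCommandFactory(CommandFactory):
    def commands(self):
        return [NormalizeDate()]

def import_records(records, factory):
    # the factory chosen for the detected input format supplies the commands
    cmds = factory.commands()
    for rec in records:
        for cmd in cmds:
            rec = cmd.execute(rec)
        yield rec

# usage: rows = import_records(parsed_rows, CsvCommandFactory())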
Our situation is that we need to import parametric shapes from competitors' files. The layout of their screens and data fields is similar, but different enough that a conversion process is needed. In addition, we have over half a dozen competitors, and maintenance would be a nightmare if done through code only. Since most of them use tables to store the parameters for their shapes, we wrote a general-purpose collection of objects to convert X into Y.
In my CAD/CAM application the file import is a Command. However, the conversion magic is done by a RuleSet, via the following steps.
1. Import the data into a table. The field names are pulled in as well, depending on the format.
2. Pass the table to a RuleSet (I will explain the structure of the RuleSet in a minute).
3. The RuleSet transforms the data into a new set of objects (or tables), which we retrieve.
4. Pass the result to the rest of the software.
A RuleSet is comprised of a set of Rules. A Rule can contain other Rules. A Rule has a CONDITION that it tests and a MAP TABLE.
The MAP TABLE maps an incoming field to a field (or property) in the result. There can be one mapping or a multitude. A mapping doesn't have to just poke the input value into an output field; there is a syntax for calculations and string concatenation as well.
This syntax is also used in the CONDITION and can incorporate multiple fields, like ([INFIELD1] & "-" & [INFIELD2]) = "A-B" or [DIM1] + [DIM2] > 10. Anything between brackets is substituted with an incoming field.
Rules can contain other Rules. The way this works is that for a sub-Rule's mapping to apply, both its own condition and those of its parent (or parents) have to be true. If a sub-Rule has a mapping that conflicts with a parent's mapping, the sub-Rule's mapping applies.
If two Rules on the same level have conditions that are true and have conflicting mappings, the Rule with the higher index (or lower on the list, if you are looking at a tree view) has its mapping applied.
Nested Rules are equivalent to ANDs, while Rules on the same level are equivalent to ORs.
The result is a mapping table that is applied to the incoming data to transform it to the needed output.
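A minimal sketch of that nesting and precedence semantics (class and function names are made up):

class Rule:
    def __init__(self, condition, mapping, children=None):
        self.condition = condition   # callable: row -> bool
        self.mapping = mapping       # dict: output field -> callable(row)
        self.children = children or []

def collect(rule, row, result):
    # a Rule contributes only if its condition holds; sub-Rules are
    # reached only through a matching parent (nesting acts as AND)
    if not rule.condition(row):
        return
    result.update(rule.mapping)      # a sub-Rule's mapping overrides its parent's
    for child in rule.children:
        collect(child, row, result)

def apply_ruleset(rules, row):
    result = {}
    # same-level Rules act as ORs; on conflict the later (higher-index)
    # Rule wins, because it updates the dict last
    for rule in rules:
        collect(rule, row, result)
    return {field: make(row) for field, make in result.items()}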
It is amenable to being displayed in a UI: namely, a tree view showing the rule hierarchy and a side panel showing the mapping table and conditions of the selected rule. Just as importantly, you can create wizards that automate common rule structures.