Wireshark Dissector VoidString type - C++

I am working on a Wireshark Dissector Generator for a senior project. I have done some reading but had a question about the VoidString parameter of the ProtoField object. The documentation wasn't too clear on this particular value or what it's used for.
Our generator uses C++ so that our client can modify it after the project is complete. I read in another thread here that this parameter could be passed a table of key/value pairs. Are there other structures or kinds of information this parameter is used for? We're trying to design a data structure to contain the parse of a file passed by the user, and we're trying to determine how best to build this object. Would it be better to allow a template object to be passed here instead, or is the table sufficient?

I'm not sure I understand your needs, but according to the Wireshark source code (wslua_proto_fields.c), the definition of the VoidString parameter is:
#define WSLUA_OPTARG_ProtoField_new_VALUESTRING 4 /* A table containing the text that
corresponds to the values, or a table containing unit name for the values if base is
`base.UNIT_STRING`, or one of `frametype.NONE`, `frametype.REQUEST`, `frametype.RESPONSE`,
`frametype.ACK` or `frametype.DUP_ACK` if field type is ftypes.FRAMENUM. */
So the table will be "cast" according to the field type and printed in the base representation.
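If it helps: since your generator is written in C++ and emits Lua dissectors, you could simply serialize the parsed key/value pairs as a Lua table literal and drop it into the generated ProtoField.new call. A minimal sketch, assuming a std::map as input (the function name emitValueString is just illustrative, not part of any API):

#include <cstdint>
#include <map>
#include <sstream>
#include <string>

// Emit a Lua value-string table such as {[0] = "Error", [1] = "OK"},
// usable as the VoidString argument of a generated ProtoField.new call.
std::string emitValueString(const std::map<uint32_t, std::string>& values) {
    std::ostringstream out;
    out << "{";
    bool first = true;
    for (const auto& kv : values) {
        if (!first) out << ", ";
        out << "[" << kv.first << "] = \"" << kv.second << "\"";
        first = false;
    }
    out << "}";
    return out.str();
}

// emitValueString({{0, "Error"}, {1, "OK"}}) yields: {[0] = "Error", [1] = "OK"}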

Related

C++/parquet: How to write "Map" data (nested type)?

Based on the "StreamWriter" example (stream_reader_writer), I'm trying to add another column to the parquet schema, to which I want to write map data (Map<Integer, String>).
According to this source, maps are supported by the C++ implementation. However, it's not clear to me how to:
1. Correctly add/define another column of type "Map" in the parquet schema. According to the previous link, maps are "Logical Types", so my best try so far is
fields.push_back(parquet::schema::PrimitiveNode::Make("Map", parquet::Repetition::REQUIRED, parquet::LogicalType::Map(), parquet::Type::INT64));
This compiles without error, but I'm uncertain about the last parameter "primitive_type" (parquet::Type::INT64).
2. Define and write actual Map values. Again, the second link provides a general description of a parquet Map.
The GitHub repo is missing a C++ example (or test case) on how to use a Map/Logical Types. I'm looking for advice or a C++ example of how to define and use a Map.
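From the LogicalTypes spec, a map appears to be a MAP-annotated group containing a repeated key_value group, rather than a primitive node, so presumably the declaration would look something like the following with the Arrow parquet C++ API (an untested sketch; the function name MakeMapColumn is mine):

#include <parquet/schema.h>
#include <parquet/types.h>

using parquet::LogicalType;
using parquet::Repetition;
using parquet::Type;
using parquet::schema::GroupNode;
using parquet::schema::NodePtr;
using parquet::schema::NodeVector;
using parquet::schema::PrimitiveNode;

// Map<Integer, String> per the Parquet LogicalTypes spec: a group annotated
// MAP, holding a repeated "key_value" group with a required key and a value.
NodePtr MakeMapColumn() {
    NodePtr key = PrimitiveNode::Make(
        "key", Repetition::REQUIRED, LogicalType::Int(64, /*is_signed=*/true),
        Type::INT64);
    NodePtr value = PrimitiveNode::Make(
        "value", Repetition::REQUIRED, LogicalType::String(), Type::BYTE_ARRAY);
    NodePtr key_value =
        GroupNode::Make("key_value", Repetition::REPEATED, NodeVector{key, value});
    return GroupNode::Make("Map", Repetition::REQUIRED, NodeVector{key_value},
                           LogicalType::Map());
}

Note that the StreamWriter example I started from seems to target flat schemas only, so writing the nested values would presumably need the lower-level column writer API.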

EMF error: the attribute "XYZ.Attribute_name" is not transient so it must have a data type that is serializable

I am creating an ECore model. I created an EClass and inside it I want to create a data member that is a list. So I created an EAttribute of type EEList.
However, when I try to create the genmodel file I get an error saying
the attribute "XYZ.Attribute_name" is not transient so it must have a data type that is serializable.
It also gives a warning saying
The generic type associated with the 'EEList' classifier should have 1 type argument(s) to match the number of type parameter(s) of the classifier.
Can anyone tell me what I'm doing wrong? I could not figure out how to set the E in EEList<E>.
First error
The first error will probably disappear after you've fixed the second error. I'll write an explanation here, but you probably don't have to deal with it to solve your problem.
This is because, to be saved to disk, the EDataTypes of attributes must be convertible to a text format.
There are two ways to ensure this:
Implement conversion to and from strings for the used EDataType. Standard EMF EDataTypes already do this, but if you have created your own EDataType you have to do it manually.
Use a Java type for the EDataType that is serializable. It must thus implement the Serializable interface and provide serialization operations. Many standard Java classes, such as String and Integer, already do that.
Another solution is to set the Transient property of the attribute to true. Then the attribute will not be saved, and its EDataType does not need to be serializable.
Second error
The normal way to create a list attribute is to set the Upper Bound property of the attribute to a value different from 1. To create a list attribute which can contain any number of elements, set Upper Bound to -1, which means Unbounded.
The EAttribute Type should be set to the element type, not to a list type.
The generated Java code will contain a property with the type EList<ElementType>.

libpq: get data type

I am working on a C++ project that uses a PostgreSQL database.
I created a table in my database with a column whose type is character varying(40).
Now I need to SELECT these data FROM the table in my C++ project. I know that I should use the libpq library, the PostgreSQL interface for C/C++.
I have succeeded in selecting data from the table. Now I am wondering whether it's possible to get the data type of a column in this table. For example, here I want to get character varying(40).
You need to use PQftype.
As described here: http://www.idiap.ch/~formaz/doc/postgreSQL/libpq-chapter17861.htm
And just take a look here about decoding return values: http://www.postgresql.org/message-id/da7021e0608040738l3b0880a1q5a76b838937f8c78@mail.gmail.com
You may also want PQfmod: for a column declared character varying(40), the type modifier encodes the declared length, whereas PQfsize reports the server's internal size, which is -1 for variable-length types.
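A small sketch of both calls (the connection string, table, and column names are placeholders; OID 1043 is varchar in the standard catalog):

#include <libpq-fe.h>
#include <cstdio>

int main() {
    // Connection string and query are placeholders for your own.
    PGconn* conn = PQconnectdb("dbname=mydb");
    PGresult* res = PQexec(conn, "SELECT my_column FROM my_table");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        Oid type_oid = PQftype(res, 0);  // 1043 = varchar in the standard catalog
        int type_mod = PQfmod(res, 0);   // for varchar(40): 40 + 4 bytes of header = 44
        int size = PQfsize(res, 0);      // -1 for variable-length types
        std::printf("oid=%u  typmod=%d  size=%d\n", type_oid, type_mod, size);
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}

If you want the human-readable form back, you can pass the OID and modifier to the SQL function format_type(oid, typmod), which returns e.g. character varying(40).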

Is there a table which holds all possible states of a given work item type in TFS?

I'm developing a Time Tracking system in TFS so we can control how much time is spent on each task. I'm doing it by checking changes in work item states, and recording the time between states.
I'm using WCF and TFS2010 alert subscription.
Then I noticed the State column in the WorkItem table holds a string, instead of an ID pointing to a State.
With that in mind, I noticed I would have to parse each state and check if it corresponds to some string. And then, some day, someone might want to change the State name. Then we're doomed.
But before I hardcode it (or put it in some random config.xml)... let me ask: is there a table which holds all possible states of a given work item type in TFS?
The states of work item types are stored in the process template files. You can export the work item type to an XML file using witadmin.exe and see the allowed values of the "State" field in there.
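For example (the collection URL, project, and type name are placeholders):

witadmin exportwitd /collection:http://yourserver:8080/tfs/DefaultCollection /p:YourProject /n:Task /f:Task.xml

The STATE elements in the WORKFLOW section of the exported XML list the possible states.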
Programmatically, you can use the Microsoft.TeamFoundation.WorkItemTracking.Client namespace to get the WorkItemType object of your work item type, look for the FieldDefinition object of the "State" field in the FieldDefinitions property, then get the possible states from the AllowedValues property of the FieldDefinition class.

What is a good design pattern to implement a dynamic data importer tool?

We are planning to build a dynamic data import tool, basically taking information in on one end in a specified format (Access, Excel, CSV) and uploading it into a web service.
The situation is that we do not know the export field names, so the application will need to be able to see the wsdl definition and map to the valid entries in the other end.
In the import section we can define most of the fields, but usually there are a few that are custom, which I see no problem with.
I just wonder if there is a design pattern that will fit this type of application or help with the development of it.
I am not sure where the complexity is in your application, so I will just give an example of how I have used patterns for importing data of different formats. I created a factory which takes the file format as an argument and returns a parser for that particular file format. Then I use the builder pattern: the parser is provided with a builder, which the parser calls as it parses the file to construct the desired data objects in the application.
// In this example the file format describes a house (a complex data object)
std::unique_ptr<AbstractReader> reader = factory.createReader("name of file format");
HouseBuilder builder(list_of_houses);
reader->import(text_stream, builder);
// now list_of_houses should contain an extra house
// as defined in text_stream
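For completeness, a minimal self-contained sketch of what those pieces might look like in C++ (all the concrete names here, like CsvReader and addHouse, are illustrative, not from the original post):

#include <istream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct House { std::string name; };  // stand-in for the complex data object

// Builder interface: the parser calls this while reading the file.
class AbstractBuilder {
public:
    virtual ~AbstractBuilder() = default;
    virtual void addHouse(const std::string& name) = 0;
};

class HouseBuilder : public AbstractBuilder {
public:
    explicit HouseBuilder(std::vector<House>& houses) : houses_(houses) {}
    void addHouse(const std::string& name) override { houses_.push_back({name}); }
private:
    std::vector<House>& houses_;
};

// Reader interface: one concrete reader per file format.
class AbstractReader {
public:
    virtual ~AbstractReader() = default;
    virtual void import(std::istream& in, AbstractBuilder& builder) = 0;
};

class CsvReader : public AbstractReader {
public:
    void import(std::istream& in, AbstractBuilder& builder) override {
        std::string line;
        while (std::getline(in, line))
            builder.addHouse(line);  // toy parsing: one house per line
    }
};

// Factory: maps a format name to a concrete reader.
struct ReaderFactory {
    std::unique_ptr<AbstractReader> createReader(const std::string& format) {
        if (format == "csv") return std::make_unique<CsvReader>();
        throw std::invalid_argument("unknown format: " + format);
    }
};

Adding a new file format then means adding one reader class and one line in the factory; the builder side stays untouched.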
I would say the Adapter pattern, as you are "adapting" the data from a file to an object, like the SqlDataAdapter does from a SQL table to a DataTable.
You could have a different adapter for each file type/format; for example, SqlDataAdapter and MySqlDataAdapter handle the same commands for different data sources to achieve the same output, a DataTable.
Adapter pattern
HTH
Bones
Probably Bridge could fit, since you have to deal with different file formats.
And Façade to simplify the usage. Handle my reply with care, I'm just learning design patterns :)
You will probably also need Abstract Factory and Command patterns.
If the data doesn't match the input format you will probably need to transform it somehow.
That's where the Command pattern comes in. Because the formats are dynamic, you will need to base the commands you generate on the input. That's where Abstract Factory is useful.
Our situation is that we need to import parametric shapes from competitors' files. The layout of their screens and data fields is similar, but different enough that there is a conversion process. In addition, we have over a half dozen competitors, and maintenance would be a nightmare if done through code only. Since most of them use tables to store the parameters for their shapes, we wrote a general-purpose collection of objects to convert X into Y.
In my CAD/CAM application the file import is a Command. However, the conversion magic is done by a RuleSet via the following steps.
1. Import the data into a table. The field names are pulled in as well, depending on the format.
2. We pass the table to a RuleSet. I will explain the structure of the RuleSet in a minute.
3. The RuleSet transforms the data into a new set of objects (or tables) which we retrieve.
4. We pass the result to the rest of the software.
A RuleSet is composed of a set of Rules. A Rule can contain another Rule. A Rule has a CONDITION that it tests, and a MAP TABLE.
The MAP TABLE maps the incoming fields to fields (or properties) in the result. There can be one mapping or a multitude. The mapping doesn't have to involve just poking the input value into an output field; we have a syntax for calculation and string concatenation as well.
This syntax is also used in the CONDITION and can incorporate multiple fields, like ([INFIELD1] & "-" & [INFIELD2])="A-B" or [DIM1] + [DIM2] > 10. Anything between the brackets is substituted with an incoming field.
Rules can contain other Rules. The way this works is that in order for a sub-Rule's mapping to apply, both its condition and those of its parent (or parents) have to be true. If a sub-Rule has a mapping that conflicts with a parent's mapping, then the sub-Rule's mapping applies.
If two Rules on the same level have conditions that are true and have conflicting mappings, then the rule with the higher index (or lower on the list, if you are looking at a tree view) has its mapping applied.
Nested Rules are equivalent to ANDs, while Rules on the same level are equivalent to ORs.
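Sketched as types in C++ (hypothetical names; conditions and mapping expressions kept as strings in the syntax described above):

#include <string>
#include <vector>

// One entry of a MAP TABLE: an expression over incoming fields -> output field.
struct Mapping {
    std::string inputExpr;    // e.g. "[INFIELD1] & \"-\" & [INFIELD2]"
    std::string outputField;  // field (or property) in the result
};

struct Rule {
    std::string condition;          // e.g. "[DIM1] + [DIM2] > 10"
    std::vector<Mapping> mapTable;  // applies only when this and parent conditions hold
    std::vector<Rule> subRules;     // nested rules: ANDed with this condition
};

struct RuleSet {
    std::vector<Rule> rules;  // same level = OR; on conflict the higher index wins
};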
The result is a mapping table that is applied to the incoming data to transform it into the needed output.
It is amenable to being displayed in a UI: namely, a tree view showing the rule hierarchy and a side panel showing the mapping table and conditions of the selected rule. Just as importantly, you can create wizards that automate common rule structures.