Searching on a 3-level structure (in both directions) - C++

[Note] I have tried to edit your question. Please accept the edit if it is correct; the original question was very ambiguous.
I have a task to do. The user can input the name of a function and, optionally, the name of the class and the file. I have to perform some checks on this function depending on a check-list. However, the problem is that the checks in the check-list are described per file, not per function, i.e., it describes the checks for all the classes and functions appearing in each file. So, when the user enters the function name, I need to map it to the right file and find the right checks.
Could you please suggest some efficient ways to do this?
EDIT: (as simply as I can; sorry, my English isn't the best ;))
Let's say we have an application (a script?) that we want to profile (yes, we are creating something like a profiler! :D), but we don't want to check everything, just a few functions. The problem is that the user of our profiler wants to give us the list of functions to profile in a slightly strange way.
So, he can give us:
- the name of a function - we need to profile every function (or method) with that name, no matter where it is (it can be in any file or any class, or in something like the standard library, in which case we have no file name);
- the name of a class - we need to profile every function/method in that class, but the class itself can appear in several files (we can have different classes with the same name);
- the name of a file - we need to profile everything in that file, but there can be several files with the same name (so in each of them we need to profile every function/method).
And any mix of the above: if we have a class (let's call it "Bar") and a function ("foo"), we need to profile the function "foo" in the class "Bar", but the class can still be in any file (there can be several "Bar" classes in several files). If we have a file name and a function name, we need to profile every function with that name (no matter whether it is inside or outside a class) in that file (and again there can be several files with the same name).
Several files or several classes aren't really a problem, because I have already replaced the execution function in the profiler (yes, the profiler itself is working). The problem is how to store the names of the functions (and classes and files) so that it is as fast as possible (memory doesn't really matter as long as it's fast) to decide whether a function should be profiled. In short: the execution function asks us whether it should profile the current function, and we have to answer; inside the execution function we have the function name (always), the class name (if the function is a method of a class) and the file name (if the function isn't from the standard library).
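To make these matching rules concrete, here is a minimal C++ sketch of how one such rule could be represented and tested (illustrative only; these type and function names are not part of the actual profiler):
#include <optional>
#include <string>
// One entry from the user's list. An empty (nullopt) field acts as a wildcard.
struct ProfileRule {
    std::optional<std::string> file;   // restrict to functions defined in this file
    std::optional<std::string> cls;    // restrict to methods of this class
    std::optional<std::string> func;   // restrict to functions with this name
};
// Does a rule apply to the function the profiler is currently looking at?
// Missing class/file information (e.g. standard-library functions) is passed as nullopt.
bool matches(const ProfileRule& rule,
             const std::string& funcName,
             const std::optional<std::string>& className,
             const std::optional<std::string>& fileName)
{
    if (rule.func && *rule.func != funcName) return false;
    if (rule.cls && (!className || *rule.cls != *className)) return false;
    if (rule.file && (!fileName || *rule.file != *fileName)) return false;
    return true;  // every field that the rule specifies matched
}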

You can generate an XML file to store your check-list info, for example:
<file name="file1"...>
<class name="class1" ...>
<func name=" func1" ... />
</class>
</file>
When you run your program, read the XML into memory, build an object for each hierarchy level, and have each higher-level object contain its lower-level objects.
Then build three map lists, where the key (the 'string') is the name of the file, class or function, and the value (the 'object*') is a pointer to the object built from the XML.
When you get a query, you can just use map::find to look it up. If you find 'object1', you can apply the checks defined in 'object1' and in the objects it contains.
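A rough C++ sketch of that idea, assuming the check-list XML has already been parsed (the parsing itself is omitted and all type names here are only illustrative):
#include <string>
#include <unordered_map>
#include <vector>
// Hierarchy objects built from the XML check-list.
struct FuncNode  { std::string name; /* checks attached to this function */ };
struct ClassNode { std::string name; std::vector<FuncNode> funcs; };
struct FileNode  { std::string name; std::vector<ClassNode> classes; };
// Three indexes: name -> pointer to the object built from the XML.
// Multimaps, because several files/classes/functions may share the same name.
struct CheckListIndex {
    std::unordered_multimap<std::string, FileNode*>  byFile;
    std::unordered_multimap<std::string, ClassNode*> byClass;
    std::unordered_multimap<std::string, FuncNode*>  byFunc;
};
// Example lookup: every class with the given name, e.g. "Bar".
void applyChecksForClass(const CheckListIndex& idx, const std::string& name) {
    auto range = idx.byClass.equal_range(name);
    for (auto it = range.first; it != range.second; ++it) {
        ClassNode* cls = it->second;
        (void)cls;  // ... apply the checks defined for *cls and for the functions it contains
    }
}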

You can go through all your files -> classes -> functions once and create a map (a hash map) with the function name as the key and the class and file info as the value.
The user enters a function name -> you instantly get the file/class name by searching your map -> you look up your rules list for that file name and apply the rules.
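For example, a minimal sketch of that map and lookup (the names here are only illustrative):
#include <string>
#include <unordered_map>
// Where a function with a given name lives.
struct Location {
    std::string file;
    std::string cls;   // empty for free functions
};
// Function name -> all known locations. A multimap, because the question
// says the same function name can appear in several classes and files.
using FuncIndex = std::unordered_multimap<std::string, Location>;
// Find every file/class that defines a function with this name,
// then look up and apply the rules registered for those files.
void applyRulesFor(const FuncIndex& index, const std::string& funcName) {
    auto range = index.equal_range(funcName);
    for (auto it = range.first; it != range.second; ++it) {
        const Location& loc = it->second;
        (void)loc;  // ... fetch the rule list for loc.file and apply it
    }
}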


How to automate M2Doc generation including the settings?

I've applied the guidance on programmatic usage of M2Doc (also with this help) to successfully generate a document via the API, which was previously prepared by using the M2Doc GUI (configured .docx plus a .genconf file). It seems to also work with a configured .docx, but without a .genconf file.
Now I would like to go a step further and ease the user interface in our application. The user should come with a .docx, include the {m:...} fields there, especially for variable definition, and then in our Eclipse application just assign model elements to the list of variables. Finally press "generate". The rest I would like to handle via the M2Doc API:
Get list of variables from the .docx
Tell M2Doc the variable objects (and their types and other required information, if that is separately necessary)
Provide M2Doc with sufficient information to handle AQL expressions like projectmodel::PJDiagram.allInstances() in the Word fields
I tried to analyse the M2Doc source code for this, but have some questions to achieve the goal:
The parse/generate API does not write any config information into the .docx or .genconf files, right? What would be the API to at least generate the .docx config information?
The source code mentions "if you are using a Generation" - what is meant by that? The use of a .genconf file (which seems to be optional for the generate API)?
Where can I get the list of variables that M2Doc found in a .docx (during parse?), so that I can present it to the user for Object (Model Element) assignment?
Do I have to tell M2Doc the types of the variables, and in which resource file they are located, besides handing over the variable objects? My guess is no, as using a blank .docx file without any M2Doc information stored also worked for the variables themselves (not for any additional AQL expressions using other types, or .oclAsType() type castings).
How can I provide M2Doc with the types information for the AQL expressions mentioned above, which I normally tell it via the nsURI configuration? I handed over the complete resourceSet of my application, but that doesn't seem to be enough.
Any help would be very much appreciated!
To give you an impression of my code so far, see below - note that it's actually Javascript instead of Java, as our application has a built-in JS-Java interface.
//=================== PARSING OF THE DOCUMENT ==============================
var templateURIString = "file:///.../templateReqs.docx";
var templateURI = URI.createURI(templateURIString);
// canNOT be empty, as we get nullpointer exceptions otherwise
var options = {"TemplateURI":templateURIString};
var exceptions = new java.util.ArrayList();
var resourceSetForModels = ...; //here our application's resource set for the whole model is used, instead of M2Doc "createResourceSetForModels" - works for the moment, but not sure if some services linking is not working
var queryEnvironment = m2doc.M2DocUtils.getQueryEnvironment(resourceSetForModels, templateURI, options);
var classProvider = m2doc.M2DocPlugin.getClassProvider();
// empty Monitor for the moment
var monitor = new BasicMonitor();
var template = m2doc.M2DocUtils.parse(resourceSetForModels.getURIConverter(), templateURI, queryEnvironment, classProvider, monitor);
// =================== GENERATION OF THE DOCUMENT ==============================
var outputURIString = "file:///.../templateReqs.autogenerated.docx";
var outputURI = URI.createURI(outputURIString);
variables["myVar1"] = ...; // assigment of objects...
m2doc.M2DocUtils.generate(template, queryEnvironment, variables, resourceSetForModels, outputURI, monitor);
Thanks!
No, the API used to parse and generate doesn't modify the template file or the .genconf file. To modify the configuration of the template you will need to use the TemplateCustomProperties class. That will allow you to register your metamodels and service classes.
This information is then used to configure the IQueryEnvironment, so you might also want to configure the IQueryEnvironment directly in your code.
"The generation" in this context refers to the .genconf file. Note that the .genconf file is also an EMF model, so you can craft one in memory to launch your generation if that's easier for you. But yes, the use of a .genconf file is optional, as in your code example.
To get the list of variables in the template you can use the class TemplateCustomProperties:
TemplateCustomProperties.getVariables() lists the variables that are declared, with their type
TemplateCustomProperties.getMissingVariables() lists the variables that are used in the template but not declared
You can also find the list of used metamodels (EPackage nsURIs) and imported service classes.
The type of a variable is not needed at generation time; it's only needed if you want to validate your template. At generation time you need to pass a map from the variable name to its value, as you did in your example. The value of a variable can be any object from your model (an EObject), a String, an Integer, ... If you want to use something like oclIsKindOf(pkg::MyEClass) you will need to register the nsURI of pkg first; see the next point.
The code you provided should let you use something like projectmodel::PJDiagram.allInstances(). This service needs a ResourceSetRootEObjectProvider() that is initialized in M2DocUtils.getQueryEnvironment(). But you need to declare the nsURI of your metamodel in your template (see TemplateCustomProperties). This will register it in the IQueryEnvironment. You can also register it yourself using IQueryEnvironment.registerEPackage().
This should help you find the missing parts in the configuration of the AQL environment. Your code looks good and should work once you add the configuration part.

How to get the current database name from a C function/extension?

I'm writing a C function/extension. It's a function that will be called by a trigger. In it, when the trigger is fired, I need to determine the name of the current database.
It's a requirement that SPI_prepare()/SPI_exec() aren't allowed in this case, so querying current_database() won't work.
Some other SPI_get* function would be OK, or accessing the current database name via TupleDesc or TriggerData somehow.
How can I do it?
It's not clear to me which of PostgreSQL's server-internal programming interfaces are usable in SPI extensions. However, the implementation of the current_database SQL function does this:
Name db;
db = (Name) palloc(NAMEDATALEN);
namestrcpy(db, get_database_name(MyDatabaseId));
PG_RETURN_NAME(db);
So, I think get_database_name(MyDatabaseId) is the incantation you want. It returns a C string, which your C extension can use directly -- the rest of the above is to box up the string in a Datum object so the query evaluator can work with it.
I figured out that a function called "current_database()" seems useful, which seems similar to "select database()". The latter returns a string which represents the name of the database.
Yes, a parameter that your extension will receive in order to deduce the context.
PL/pgSQL can create functions. These can in turn call C-language extensions via shared libraries. Finally, the database name can be passed from the database down to the extension.

TYPO3: extension template namespace

In the root template page.10 is already taken. If I put page.10 into my extension template, I override it. How can I make sure (just putting a large number is not "making sure") that I don't override anything? The root template is very complicated and includes many other templates, so I cannot really tell which numbers are already taken. I just want to use the extension template to append some content.
The safest solution would be to not add anything automatically. Instead, you could provide your rendering instruction via lib.* or, in case it is a registered plugin, via tt_content.list.*:
lib.yourContent = USER_INT
lib.yourContent { ... }
Then you only need to document how to add something to the page output, e.g.:
page.11 < lib.yourContent
I know it has been a while since you've asked this question.
You're saying "just putting a large number is not making sure" because you have no idea what content is using which number on the page object.
If you want to be 99.9% sure that a number has not been used, why don't you use the current timestamp as the number for this cObject? That's how the TYPO3 error handler refers to pages as well, using the timestamp at which they were written.
The simplest answer to whether you can make sure which page object index is already used for something is: you cannot.
It might be set somewhere deep inside an extension, under any condition whose expression happens to be met.
None of the standard built-in tools in TYPO3 can predict that and check all combinations of conditions to tell you whether such a number is set somewhere in some case.
But.
If you are an admin of that page, then the best approach is just to know your template.
Analyse the TypoScript and know which numbers in the page object are used for something. Make a tidy, consistent template, clean it up, and check it using Template tools -> TypoScript Object Browser. You have to know what's going on on your site; the elements at the indexes of the main page object are the basic things that are shown to the public.
(Or just pick any random big number and search for it in the whole TypoScript using Template tools -> Template Analyzer -> View complete TS listing. Let's be honest, the probability that you hit a big number which is already used is rather low.)

Embedded Crystal variables in templating

I'm new to Crystal (and never really used Ruby) so apologies for the ignorance here! I've looked at the ECR docs but can't seem to find an answer there.
I'm looking at using Embedded Crystal for dynamic templates in Kemal. Can I confirm: can templates only render variables that are available in the scope of the call, or can one make method/function calls from within the template itself? I.e., is there any possibility/risk of being able to execute "malicious" Crystal code from within a template (in this case malicious refers to I/O or file access etc.)?
To take an example from the Kemal docs:
get "/:name" do |env|
name = env.params.url["name"]
render "src/views/hello.ecr"
end
In the view hello.ecr - is name the only item that will be available in the template, or could one call File.delete("./foo") from within the template, for example?
A template is compiled into Crystal code, so you can write any kind of code in there, including File.delete("./foo"), for example by writing <% File.delete("./foo") %> inside your template.
If your worry is that name will contain code and that will somehow get executed, then don't worry, that's not going to happen. Dynamic runtime code execution in Crystal is not possible, so there's no way someone will inject malicious code into your templates.

How to define a primitive device in Proteus?

I'm trying to make my own full adder and some other devices as a sub-circuit in "Proteus" and use them several times in a bigger circuit.
The problem is that when you copy a sub-circuit you need to rename all of its inner parts; otherwise the parts with the same name in the two sub-circuits would be treated as one, and you'd get errors.
So I'd like to know if there is a way to define a full adder of my own and use it like the "74LS183", which is a primitive device; by primitive I mean it has been defined in the library.
From an answer posted here:
It's in the Proteus Help file:
SCHEMATIC CAPTURE HELP (F1) -> MULTI-SHEET DESIGNS -> Hierarchical Designs
See the section "Module Components":
Any ordinary component can be made into a module by setting the Attach Hierarchy Module checkbox on the Edit Component dialogue form.
The component's value is taken to be the name for the associated circuit,
and the component's reference serves as the instance name.
After this you can use your new implementation of that component under your new name. For making it available in the library, see the "External Modules" section of the Proteus help.
The problem with copying is solved in "Proteus 8", not completely but sort of.
I mean you can copy a sub-circuit without needing to change its inner parts, but you must change the sub-circuit name (the bigger name above the sub-circuit, not the little one below it).
So there is no need to define a primitive.