I made some changes to an AST via ASTRewrite and applied them via Change.perform. The C file was updated correctly with the new changes (a new node was inserted), but in debug mode the AST object doesn't reflect those changes:
ast.getRawSignature(); // C file code as text
ASTRewrite rewriter = ASTRewrite.create(ast);
addNewNode(node, ast, rewriter); // insert a new node
Change c = rewriter.rewriteAST();
try {
c.perform(new NullProgressMonitor());
} catch (CoreException e) {
e.printStackTrace();
}
/* HERE I WANT TO FLUSH THE AST SO IT REFLECTS THE CHANGES */
ast.getRawSignature(); // still returns the old C code, even though the file was updated
I need to flush the AST so that those changes are reflected in the AST object itself. How can I achieve this?
After an AST is initially constructed, it is no longer mutable (IASTTranslationUnit.freeze() is called on it, and any further attempts to call setters on nodes in that AST will fail).
This means that Change.perform() cannot perform the changes on the AST, only on the file. To get an AST that reflects the changes, you need to build a new one.
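A minimal sketch of that rebuild, assuming ast is an IASTTranslationUnit that came from a workspace translation unit (so getOriginatingTranslationUnit() returns it, and getAST() reparses the now-updated file):
try {
    ITranslationUnit tu = ast.getOriginatingTranslationUnit();
    IASTTranslationUnit freshAst = tu.getAST(); // reparses the updated file contents
    freshAst.getRawSignature(); // now reflects the inserted node
} catch (CoreException e) {
    e.printStackTrace();
}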
Related
I'm exploring ways to alter the AST of C/C++ code (e.g., rename a node, add a new variable) and apply these changes to the source file.
I did a lot of reading here on SO and in the Eclipse forums. However, I didn't find a minimal working example.
It seems that the correct way to make changes in an AST is by using the ASTRewrite class.
A similar question was asked on SO a few months ago, but it is still awaiting an answer.
Here is where I'm stuck at the moment:
//get the factory
INodeFactory nodeFactory = myAST.getASTNodeFactory();
//create a new function declarator
IASTNode n = nodeFactory.newFunctionDeclarator(nodeFactory.newName("testMe"));
//get the rewriter
ASTRewrite rewriter = ASTRewrite.create(mainAST);
//replace node with n, node is not null
rewriter.replace(node, n, null);
//make the changes
Change c = rewriter.rewriteAST();
c.perform(new NullProgressMonitor());
When I run this code snippet, I get a
java.lang.NoClassDefFoundError: org/eclipse/ltk/core/refactoring/Change
Any hints are appreciated.
Can anyone help me with the FreeMarker template reading process?
I want to know which variables the template needs but are missing from the data model, a Map that I populate from the database.
Configuration cfg = new Configuration(Configuration.VERSION_2_3_24);
cfg.setDirectoryForTemplateLoading(new File(filepath));
cfg.setDefaultEncoding("UTF-8");
cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
Map<String, Object> confMap = new HashMap<>();
confMap.put("user", "Sunil");
Template temp = cfg.getTemplate("template.txt");
OutputStream os = new FileOutputStream(filepath + "\\template.conf");
Writer out = new OutputStreamWriter(os);
temp.process(confMap, out);
template.txt
user=${user}
firstname =${firstname}
lastname =${lastname}
I am using the code above. Before processing the template, I want to compare the data model against the variables the template references.
Which data-model variables the template needs only turns out as the template executes. That's because of #if-s, .vars[dynamicName]-s, etc. Also, when you have a ${x}, it's not always obvious whether x refers to a data-model variable or to an x in another scope (like a global variable) at that point.
You could still make a pretty good guess by walking the tree of the template, though. You can start that with Template.getRootTreeNode(). As you will see, it's a deprecated API, because backward compatibility is not promised, but it's actually unlikely that changes in 2.3.x would break existing code, as long as that code walks the tree with as few assumptions as possible.
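As a rough sketch of such a walk (my own illustration, not an official recipe: it only scans each leaf node's canonical form for ${...} interpolations, so it inherits all the caveats above, and it leans on the unpublished freemarker.core.TemplateElement type):
import freemarker.core.TemplateElement;
import javax.swing.tree.TreeNode;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
static void collectInterpolations(TreeNode node, Set<String> found) {
    if (node.getChildCount() == 0) {
        // Leaf elements carry ${...} interpolations in their canonical form.
        Matcher m = Pattern.compile("\\$\\{([^}]+)\\}")
                .matcher(((TemplateElement) node).getCanonicalForm());
        while (m.find()) {
            found.add(m.group(1).trim());
        }
    }
    for (int i = 0; i < node.getChildCount(); i++) {
        collectInterpolations(node.getChildAt(i), found);
    }
}
// Usage: diff the collected names against the data model's keys.
Set<String> needed = new TreeSet<>();
collectInterpolations(temp.getRootTreeNode(), needed);
needed.removeAll(confMap.keySet());
System.out.println("Missing from data model: " + needed); // [firstname, lastname] for the template above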
What is the correct way to automatically run some setup code (either in R or C++) once per package load? Ideally, said code would execute when the user does library(mypackage). Right now, it's contained in a setup() function that needs to be run once before anything else.
For more context: in my specific case, I'm using an external library that uses glog, and I need to execute google::InitGoogleLogging() once and only once. It's slightly awkward because I have to do this from within a library, even though it's meant to be called from a main().
Just read 'Writing R Extensions' and follow the leads -- it is either .onAttach() or .onLoad(). I have lots of packages that do little things there -- and it doesn't matter that this calls into C++ (via Rcpp or not), as you are simply asking about where to initialise things.
Example: Rblpapi creates a connection and stores it
.pkgenv <- new.env(parent=emptyenv())
.onAttach <- function(libname, pkgname) {
if (getOption("blpAutoConnect", FALSE)) {
con <- blpConnect()
if (getOption("blpVerbose", FALSE)) {
packageStartupMessage(paste0("Created and stored default connection object ",
"for Rblpapi version ",
packageDescription("Rblpapi")$Version, "."))
}
} else {
con <- NULL
}
assign("con", con, envir=.pkgenv)
}
I had some (not public) code that set up a handle (using C++ code) to a proprietary database the same way. The key is that these hooks guarantee you execution on package load / attach which is what you want here.
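For the glog case specifically, a minimal sketch along the same lines, reusing the .pkgenv pattern from above (init_glog() is a hypothetical wrapper around google::InitGoogleLogging() that your package would export from its C++ code, e.g. via Rcpp):
.onLoad <- function(libname, pkgname) {
    ## run the one-time C++ initialisation exactly once per session
    if (!isTRUE(.pkgenv$glog_initialized)) {
        init_glog()   # hypothetical wrapper around google::InitGoogleLogging()
        assign("glog_initialized", TRUE, envir=.pkgenv)
    }
}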
I'm trying to use xerces-c to parse a rather massive XML document generated by StarUML in order to change some things, but I'm running into issues getting the XPath query to work because it keeps crashing.
To simplify things I split out part of the file into a smaller XML file for testing, which looks like this:
<?xml version="1.0" encoding="utf-8"?>
<XPD:UNIT xmlns:XPD="http://www.staruml.com" version="1">
<XPD:HEADER>
<XPD:SUBUNITS>
</XPD:SUBUNITS>
</XPD:HEADER>
<XPD:BODY>
<XPD:OBJ name="Attributes[3]" type="UMLAttribute" guid="onMjrHQ0rUaSkyFAWtLzKwAA">
<XPD:ATTR name="StereotypeName" type="string">ConditionInteraction</XPD:ATTR>
</XPD:OBJ>
</XPD:BODY>
</XPD:UNIT>
All I'm trying to do for this example is find all of the XPD:OBJ elements, of which there is only one. The problem seems to stem from querying with the namespace: when I pass a very simple XPath query of XPD:OBJ, it crashes, but if I pass just OBJ, it doesn't crash and doesn't find the XPD:OBJ element either.
I assume there's some important property or setting that I'm missing during initialization, but I have no idea what it might be. I looked up all of the parser properties having to do with namespaces and enabled the ones I could, but it didn't help, so I'm completely stuck. The initialization code looks something like this, with lots of things removed obviously:
const tXercesXMLCh tXMLManager::kDOMImplementationFeatures[] =
{
static_cast<tXercesXMLCh>('L'),
static_cast<tXercesXMLCh>('S'),
static_cast<tXercesXMLCh>('\0')
};
// Instantiate the DOM parser.
fImplementation = static_cast<tXercesDOMImplementationLS *>(tXercesDOMImplementationRegistry::getDOMImplementation(kDOMImplementationFeatures));
if (fImplementation != nullptr)
{
fParser = fImplementation->createLSParser(tXercesDOMImplementationLS::MODE_SYNCHRONOUS, nullptr);
fConfig = fParser->getDomConfig();
// Let the validation process do its datatype normalization that is defined in the used schema language.
//fConfig->setParameter(tXercesXMLUni::fgDOMDatatypeNormalization, true);
// Ignore comments and whitespace so we don't get extra nodes to process that just waste time.
fConfig->setParameter(tXercesXMLUni::fgDOMComments, false);
fConfig->setParameter(tXercesXMLUni::fgDOMElementContentWhitespace, false);
// Setup some properties that look like they might be required to get namespaces to work but doesn't seem to help at all.
fConfig->setParameter(tXercesXMLUni::fgXercesUseCachedGrammarInParse, true);
fConfig->setParameter(tXercesXMLUni::fgDOMNamespaces, true);
fConfig->setParameter(tXercesXMLUni::fgDOMNamespaceDeclarations, true);
// Install our custom error handler.
fConfig->setParameter(tXercesXMLUni::fgDOMErrorHandler, &fErrorHandler);
}
Then later on I parse the document, find the root node, and then run the xpath query to find the node I want. I'll leave out the bulk of that and just show you where I'm running the xpath query in case there's something obviously wrong there:
tXercesDOMDocument * doc; // Comes from parsing the file.
tXercesDOMNode * contextNode; // This is the root node retrieved from the document.
tXercesDOMXPathResult * xPathResult;
doc->evaluate("XPD:OBJ", contextNode, nullptr, tXercesDOMXPathResult::ORDERED_NODE_SNAPSHOT_TYPE, xPathResult);
The call to evaluate() is where it crashes, somewhere deep inside xerces that I can't see into very clearly, but from what I can see there are a lot of things that look deleted or uninitialized, so I'm not sure what's causing the crash exactly.
So is there anything here that looks obviously wrong or missing that is required to make xerces work with XML namespaces?
The solution was right in front of my face the whole time. The problem was that you need to create and pass a resolver to the evaluate() call, or else it will not be able to resolve any of the namespaces and will throw an exception. The crash seems to be a bug in xerces, since it crashes while trying to throw the exception for the namespace it can't resolve. I had to debug deep into the xerces code to find this, which gave me the solution.
So to fix the problem I changed the call to evaluate() slightly to create a resolver with the root node and now it works perfectly:
tXercesDOMDocument * doc; // Comes from parsing the file.
tXercesDOMNode * contextNode; // This is the root node retrieved from the document.
tXercesDOMXPathResult * xPathResult;
// Create the resolver with the root node, which contains the namespace definition.
tXercesDOMXPathNSResolver * resolver(doc->createNSResolver(contextNode));
doc->evaluate("XPD:OBJ", contextNode, resolver, tXercesDOMXPathResult::ORDERED_NODE_SNAPSHOT_TYPE, xPathResult);
// Make sure to release the resolver since anything created from a `create___()`
// function has to be manually released.
resolver->release();
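For reference, in plain Xerces-C 3.x terms (without the tXerces typedefs used above, and with the XPath expression transcoded to XMLCh as raw Xerces requires), the equivalent looks roughly like this:
#include <xercesc/dom/DOM.hpp>
#include <xercesc/util/XMLString.hpp>
using namespace xercesc;

// doc and contextNode come from parsing, as above.
XMLCh* expr = XMLString::transcode("XPD:OBJ");
DOMXPathNSResolver* resolver = doc->createNSResolver(contextNode);
DOMXPathResult* result = doc->evaluate(expr, contextNode, resolver,
                                       DOMXPathResult::ORDERED_NODE_SNAPSHOT_TYPE,
                                       nullptr);
for (XMLSize_t i = 0; i < result->getSnapshotLength(); ++i) {
    result->snapshotItem(i);
    DOMNode* obj = result->getNodeValue(); // one XPD:OBJ element per iteration
}
result->release();
resolver->release();
XMLString::release(&expr);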
I don't understand what is going on with the attributes' memory in rapidXML.
A function encapsulates the XML parsing and, on success, returns a pointer to the root node. When I call the traverse-DOM-tree routine inside this function, I get the correct data stored in the XML file.
typedef rapidxml::xml_node<>* Node;
...
Node Load()
{
Node pRootNode = NULL;
// read file stream in bytes
...
std::vector<char> xmlCopy(bytes.begin(), bytes.end());
xmlCopy.push_back('\0');
rapidxml::xml_document<> doc;
try
{
doc.parse<rapidxml::parse_declaration_node | rapidxml::parse_no_data_nodes>(&xmlCopy[0]); // parse the zero-terminated copy
pRootNode = doc.first_node();
...
TraverseDOMTree(pRootNode);
}
catch (rapidxml::parse_error &)
{
    // error handling elided
}
return pRootNode;
}
TraverseDOMTree prints all attributes and node names as expected.
Later, obviously outside the scope of Load, pRootNode is used to query values from the DOM tree; this doesn't work.
For testing purposes I call TraverseDOMTree again, which previously worked perfectly, and now it prints garbage attribute values. I can assume the DOM tree is still there, with the same hierarchy of nodes as in the first call, but the attribute values are messed up.
I tried making the rapidxml::xml_document<> doc global and also adding the parse_non_destructive flag; neither makes a difference.
If it matters, the client using the Load method runs in the same thread. What can be wrong?
std::vector<char> xmlCopy(bytes.begin(), bytes.end());
That copy of the serialized representation of your XML document is local to Load. I would bet that rapidXML makes no copy of the attribute values, but rather stores pointers into that character sequence. You could check this by comparing the addresses of the attribute values with the address range of your document copy.
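As a sketch of one way out (the holder name and members are illustrative, not from the question): keep the parsed buffer and the document alive together, and hand out node pointers only for as long as the holder lives.
#include <rapidxml.hpp>
#include <utility>
#include <vector>

// The node tree points into the parsed buffer, so both must outlive
// every node pointer handed out.
struct XmlHolder
{
    std::vector<char> buffer;     // owns the text rapidxml points into
    rapidxml::xml_document<> doc; // owns the node tree

    rapidxml::xml_node<>* Parse(std::vector<char> bytes)
    {
        buffer = std::move(bytes);
        buffer.push_back('\0');   // rapidxml expects zero-terminated input
        doc.parse<rapidxml::parse_declaration_node | rapidxml::parse_no_data_nodes>(&buffer[0]);
        return doc.first_node();  // valid for the lifetime of this holder
    }
};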