How to remove the word `References` in the EndNote section of Cross-Reference - aspose

Using Aspose.Words for Java 16.2.0 with JDK 1.6.
There are two Word documents, A.docx and B.docx; each document contains cross-references of type EndNote set to End of Section.
Aspose is used to merge the two documents (A.docx & B.docx) and to move all cross-references of type EndNote to the end-of-document endnote section, using the code below.
Document destinationAsposeDocument = destinationDocument.getDoc(); // Output.docx
moveEndNoteToEnd(destinationAsposeDocument);
for (Document document : mergingdocument) { // A.docx & B.docx
    Document mergingDocument = document.getDoc();
    moveEndNoteToEnd(mergingDocument);
    destinationAsposeDocument.appendDocument(mergingDocument, ImportFormatMode.KEEP_SOURCE_FORMATTING);
}
AsposeUtil.convertNumPageFieldsToPageRef(destinationAsposeDocument);
destinationAsposeDocument.updatePageLayout();
destinationAsposeDocument.save(outputFileName); // Output.docx
public void moveEndNoteToEnd(Document dstDoc) {
    dstDoc.getFirstSection().getPageSetup().getEndnoteOptions().setLocation(3);
    dstDoc.getFirstSection().getPageSetup().getEndnoteOptions().setNumberStyle(0);
    dstDoc.getFirstSection().getPageSetup().getEndnoteOptions().setStartNumber(1);
    dstDoc.getFirstSection().getPageSetup().getEndnoteOptions().setRestartRule(0);
}
The cross-references in the endnotes of A.docx are moved to the end-of-document endnote section (i.e., after B.docx), and A.docx & B.docx are merged. This is expected and works fine.
How can the heading word "References" that came from A.docx be removed after merging the two documents?
Note:
The references are moved to the end of the document, but the heading word "References" remains as it is; it does not get removed.
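One possible approach (a sketch only, not verified against Aspose.Words 16.2.0) is to post-process the merged output and delete the leftover heading paragraph itself, since moving the endnotes does not remove the heading text that precedes them. The literal heading text "References" and the variable name destinationAsposeDocument come from the code above; everything else is an assumption to illustrate the idea.
// Sketch: after appendDocument and before save, remove any body paragraph whose
// visible text is exactly the heading "References" (adjust the literal to match
// the actual heading text in the source documents).
for (Node node : destinationAsposeDocument.getChildNodes(NodeType.PARAGRAPH, true).toArray()) {
    Paragraph para = (Paragraph) node;
    if ("References".equalsIgnoreCase(para.getText().trim())) {
        para.remove();
    }
}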

Related

Aspose.PDF: how to replace text on a PDF page with all upper case

I am trying to replace text on a specific page with its upper-case equivalent using Aspose.PDF for .NET. If anyone can provide any help, that would be great. Thank you.
My name is Tilal Ahmad and I am a developer evangelist at Aspose.
You may use the documentation link for searching and replacing text on a specific page of the PDF document. You should call the Accept method for the specific page index, as suggested at the bottom of the documentation. Furthermore, to replace text with upper case you can use the ToUpper() method of the String object, as follows.
....
textFragment.Text = textFragment.Text.ToUpper();
....
Edit: sample code to change the text case on a specific PDF page
// open the document
Document pdfDocument = new Document(myDir + "testAspose.pdf");
// create a TextFragmentAbsorber object to find all instances of the input search phrase
TextFragmentAbsorber textFragmentAbsorber = new TextFragmentAbsorber("");
// accept the absorber for a specific page (page numbers are 1-based, so this is the second page)
pdfDocument.Pages[2].Accept(textFragmentAbsorber);
// get the extracted text fragments
TextFragmentCollection textFragmentCollection = textFragmentAbsorber.TextFragments;
// loop through the fragments
foreach (TextFragment textFragment in textFragmentCollection)
{
    // update the text and other properties
    textFragment.Text = textFragment.Text.ToUpper();
}
pdfDocument.Save(myDir + "replacetext_output.pdf");

Annotating a document with JAPE

I have been searching for a solution to this for weeks. I have some documents (about 95) that I am trying to classify using GATE. I have put them in one corpus I called training_corpus. However, after ANNIE has annotated the corpus, I have to go back into each file, select all tokens in the document, and create an annotation called Mention, with a feature type whose value is the class for the document. For example:
type     Start  End    id    Features
Mention  0      70000  2588  {type=neg}
Is there any way to do this automatically with JAPE? Basically, I want to select all tokens and create a new annotation with the feature (type=class). Also, the class is part of the document name. Since there are many documents, can JAPE extract the class from the document name and set it as the value of the Mention feature? For example, for a document named neg_data1.txt the annotation would be Mention.type = neg.
Any help will be greatly appreciated. Thanks
I think you answered your own question. If the class assignment is based on just a token present in the text, why not simply process the text outside of GATE?
For example, create an XML file that wraps the text together with its class, and then use it in the training process.
Alternatively, you can create a simple JAPE rule which will:
a) take the text within the document boundaries (see the gate.Utils.length methods, AFAIR);
b) based on the presence of your token, create a new Annotation instance with the necessary features.
An abstract example:
an abstract example:
Phase: Instance
Input: Token
Options: control = once
Rule:Instance
(
{Token}
):instance
-->
{
AnnotationSet instances = outputAS.get("INSTANCE_ANNOTATION");
FeatureMap featureMap = Factory.newFeatureMap();
if (instances!=null&&!instances.isEmpty()){
featureMap.put("features when annotation presented in doc");
}else{
featureMap.put("features when annotation not in doc");
}
outputAS.add(new Long(0), new Long(documentLength), "Mention", featureMap);
}
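To cover the second half of the question (deriving the class from the document name), the right-hand side of such a rule could also read the document's name. The sketch below is only an illustration: it assumes document names follow the neg_data1.txt pattern from the question, and the "unknown" fallback value is a placeholder.
// Sketch for a JAPE RHS: derive the Mention "type" feature from the document name,
// assuming names look like <class>_<rest>, e.g. neg_data1.txt.
String docName = doc.getName();                       // e.g. "neg_data1.txt"
String docClass = docName.contains("_")
        ? docName.substring(0, docName.indexOf('_'))  // -> "neg"
        : "unknown";                                   // placeholder fallback
FeatureMap fm = Factory.newFeatureMap();
fm.put("type", docClass);
try {
    // one Mention annotation spanning the whole document
    outputAS.add(new Long(0), doc.getContent().size(), "Mention", fm);
} catch (InvalidOffsetException e) {
    // offsets come from the document itself, so this should not happen
}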

How to replace text in a content control after XML binding using docx4j

I am using docx4j 2.8.1 with content controls in my .docx file. I can replace the CustomXML part by injecting my own XML and then calling BindingHandler.applyBindings after supplying the input XML. I can add a token in my XML such as ¶, and I would then like to replace that token in the MainDocumentPart. But with that approach, when I iterate through the content in the MainDocumentPart with this (link) method, none of the text from my XML is even in the collection extracted from the MainDocumentPart. I am thinking that even after binding the XML, it remains separate from the MainDocumentPart (??)
I haven't tried this with anything more than a little test doc yet. My token is the pilcrow: ¶. Since it's a single character, it won't be split across separate runs. My code is:
private void injectXml(WordprocessingMLPackage wordMLPackage) throws JAXBException {
    MainDocumentPart part = wordMLPackage.getMainDocumentPart();
    // marshal the main document part to a string, replace the token, and unmarshal it back
    String xml = XmlUtils.marshaltoString(part.getJaxbElement(), true);
    xml = xml.replaceAll("¶", "</w:t><w:br/><w:t>");
    Object obj = XmlUtils.unmarshalString(xml);
    part.setJaxbElement((Document) obj);
}
The pilcrow character comes from the XML and is injected by applying the XML bindings to the content controls. The problem is that the content from the XML does not seem to be in the MainDocumentPart, so the replace doesn't work.
(Using docx4j 2.8.1)
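For context, the method above would typically run after the bindings have been applied; below is a rough sketch of the flow the question describes (the file names are placeholders, and the applyBindings call is written the way the question refers to it, so check the docx4j 2.8.1 samples for the exact signature).
// load the package, inject the custom XML part, apply the bindings, then post-process the token
WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage.load(new java.io.File("template.docx"));
// ... replace the CustomXML part with your own XML here, as described above ...
BindingHandler.applyBindings(wordMLPackage.getMainDocumentPart()); // bind content controls to the XML
injectXml(wordMLPackage);                                          // replace the pilcrow token afterwards
wordMLPackage.save(new java.io.File("out.docx"));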

actionscript find and convert text to url [duplicate]

Possible Duplicate:
How do I linkify text using ActionScript 3
I have this script that grabs a Twitter feed and displays it in a little widget. What I want to do is look at the text for a URL and convert that URL to a link.
public class Main extends MovieClip
{
private var twitterXML:XML; // This holds the xml data
public function Main()
{
// This is Untold Entertainment's Twitter id. Did you grab yours?
var myTwitterID= "username";
// Fire the loadTwitterXML method, passing it the url to your Twitter info:
loadTwitterXML("http://twitter.com/statuses/user_timeline/" + myTwitterID + ".xml");
}
private function loadTwitterXML(URL:String):void
{
var urlLoader:URLLoader = new URLLoader();
// When all the junk has been pulled in from the url, we'll fire finishedLoadingXML:
urlLoader.addEventListener(Event.COMPLETE, finishLoadingXML);
urlLoader.load(new URLRequest(URL));
}
private function finishLoadingXML(e:Event = null):void
{
// All the junk has been pulled in from the xml! Hooray!
// Remove the eventListener as a bit of housecleaning:
e.target.removeEventListener(Event.COMPLETE, finishLoadingXML);
// Populate the xml object with the xml data:
twitterXML = new XML(e.target.data);
showTwitterStatus();
}
private function addTextToField(text:String, field:TextField):void {
    /* Regular expression for the replacement; g: replace all, i: case-insensitive.
       Finds all strings starting with "http://", "https://", "ftp://" or "file://",
       followed by any number of characters that are neither spaces nor new lines. */
    var reg:RegExp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&#\/%?=~_|!:,.;]*[-A-Z0-9+&#\/%=~_|])/ig;
    // Wrap each match in an anchor tag. Note: "$&" stands for the matched string,
    // and replace() returns a new string, so the result must be assigned back.
    text = text.replace(reg, "<a href='$&'>$&</a>");
    field.htmlText = text;
}
private function showTwitterStatus():void
{
// Uncomment this line if you want to see all the fun stuff Twitter sends you:
//trace(twitterXML);
// Prep the text field to hold our latest Twitter update:
twitter_txt.wordWrap = true;
twitter_txt.autoSize = TextFieldAutoSize.LEFT;
// Populate the text field with the first element in the status.text nodes:
addTextToField(twitterXML.status.text[0], twitter_txt);
}
If this
/(\b(https?|ftp|file):\/\/[-A-Z0-9+&#\/%?=~_|!:,.;]*[-A-Z0-9+&#\/%=~_|])/ig
is your regexp for converting text to URLs, then I have some remarks.
First of all, almost all characters inside character classes are parsed literally. So here
[-A-Z0-9+&#\/%?=~_|!:,.;]
you are saying to match any one of these characters (the \/ is simply an escaped /).
A simpler regexp for URL search would look something like this:
/\s((https?|ftp|file):\/\/)?([-a-z0-9_.:])+(\?[-a-z0-9%_?&.])?(\s+|$)/ig
I'm not sure it will handle URL boundaries correctly, but a \b boundary can fall next to a dot, so I think \s (space or line break) will suit better.
I'm also not sure about the ending (is it allowed in ActionScript to use the end-of-string symbol somewhere other than at the end of a regexp?).
And, of course, you have to tune it to suit your data.

How to use regex in selenium locators

I'm using Selenium RC and I would like, for example, to get all the link elements with an href attribute that matches:
http://[^/]*\d+com
I would like to use:
sel.get_attribute('//a[regx:match(@href, "http://[^/]*\d+.com")]/@name')
which would return a list of the name attributes of all the links that match the regex
(or something like it).
thanks
The answer above is probably the right way to find ALL of the links that match a regex, but I thought it would also be helpful to answer the other part of the question: how to use a regex in XPath locators. You need to use the regex matches() function, like this:
xpath=//div[matches(@id,'che.*boxes')]
(this, of course, would click the div with id='checkboxes', or id='cheANYTHINGHEREboxes')
Be aware, though, that the matches() function is not supported by all native browser implementations of XPath (most conspicuously, using this in FF3 will throw an error: invalid xpath[2]).
If you have trouble with your particular browser (as I did with FF3), try using Selenium's allowNativeXpath("false") to switch over to the JavaScript XPath interpreter. It will be slower, but it does seem to work with more XPath functions, including 'matches' and 'ends-with'. :)
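For example, in the Java client the two calls together look something like this (a sketch; the div id pattern is the hypothetical one from the locator above):
// switch to the JavaScript XPath interpreter, then use the XPath 2.0 matches() function
selenium.allowNativeXpath("false");
selenium.click("xpath=//div[matches(@id, 'che.*boxes')]");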
You can use the Selenium command getAllLinks to get an array of the ids of the links on the page, which you can then loop through, checking each link's href with getAttribute, which takes the locator followed by an @ and the attribute name. For example, in Java this might be:
String[] allLinks = selenium.getAllLinks();
List<String> matchingLinks = new ArrayList<String>();
for (String linkId : allLinks) {
    String linkHref = selenium.getAttribute("id=" + linkId + "@href");
    if (linkHref.matches("http://[^/]*\\d+\\.com")) {
        matchingLinks.add(linkId);
    }
}
A possible solution is to use sel.get_eval() and write a JS script that returns a list of the links, something like the following answer:
selenium: Is it possible to use the regexp in selenium locators
Here are some alternate methods as well for Selenium RC. These aren't pure Selenium solutions; they allow interaction between your programming language's data structures and Selenium.
You can also get the HTML page source, then run regular expressions against the source to return a matching set of links. Use regex grouping to separate out URLs, link text/IDs, etc., and you can then pass them back to Selenium to click on or navigate to.
Another method is to get the HTML page source or the innerHTML (via DOM locators) of a parent/root element, then convert the HTML to XML as a DOM object in your programming language. You can then traverse the DOM with the desired XPath (with regular expressions or not) and obtain a node set of only the links of interest. From there, parse out the link text/ID or URL and pass it back to Selenium to click on or navigate to.
Upon request, I'm providing examples below. They are in mixed languages since the post didn't appear to be language specific anyway; I'm just using what I had available to hack together the examples. They aren't fully tested, but I've worked with bits of the code before in other projects, so these are proof-of-concept examples of how you'd implement the solutions just mentioned.
//Example of element attribute processing by page source and regex (in PHP)
$pgSrc = $sel->getPageSource();
//simple hyperlink extraction via regex below, replace with better regex pattern as desired
preg_match_all("/<a.+href=\"(.+)\"/",$pgSrc,$matches,PREG_PATTERN_ORDER);
//$matches is a 2D array, $matches[0] is array of whole string matched, $matches[1] is array of what's in parenthesis
//you either get an array of all matched link URL values in parenthesis capture group or an empty array
$links = count($matches) >= 2 ? $matches[1] : array();
//now do as you wish, iterating over all link URLs
//NOTE: these are URLs only, not actual hyperlink elements
//Example of XML DOM parsing with Selenium RC (in Java)
String locator = "id=someElement";
String htmlSrcSubset = sel.getEval("this.browserbot.findElement(\""+locator+"\").innerHTML");
//using JSoup XML parser library for Java, see jsoup.org
Document doc = Jsoup.parse(htmlSrcSubset);
/* once you have this document object, can then manipulate & traverse
it as an XML/HTML node tree. I'm not going to go into details on this
as you'd need to know XML DOM traversal and XPath (not just for finding locators).
But this tutorial URL will give you some ideas:
http://jsoup.org/cookbook/extracting-data/dom-navigation
the example there seems to indicate first getting the element/node defined
by content tag within the "document" or source, then from there get all
hyperlink elements/nodes and then traverse that as a list/array, doing
whatever you want with an object oriented approach for each element in
the array. Each element is an XML node with properties. If you study it,
you'd find this approach gives you the power/access that WebDriver/Selenium 2
now gives you with WebElements but the example here is what you can do in
Selenium RC to get similar WebElement kind of capability
*/
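As a concrete sketch of what that comment block describes (the only assumptions are the JSoup select()/attr() calls shown and the href regex borrowed from the question):
// keep only the hyperlinks whose href matches the question's pattern,
// then hand a plain XPath locator back to Selenium RC to click/navigate
Elements links = doc.select("a[href]");
for (Element link : links) {
    String href = link.attr("href");
    if (href.matches("http://[^/]*\\d+\\.com")) {
        sel.click("xpath=//a[@href='" + href + "']");
    }
}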
Selenium's By.Id and By.CssSelector methods do not support Regex, and By.XPath only does so where XPath 2.0 is enabled. If you want to use Regex, you can do something like this:
void MyCallingMethod(IWebDriver driver)
{
//Search by ID:
string attrName = "id";
//Regex = 'a number that is 1-10 digits long'
string attrRegex= "[0-9]{1,10}";
SearchByAttribute(driver, attrName, attrRegex);
}
IEnumerable<IWebElement> SearchByAttribute(IWebDriver driver, string attrName, string attrRegex)
{
List<IWebElement> elements = new List<IWebElement>();
//Allows spaces around equal sign. Ex: id = 55
string searchString = attrName +"\\s*=\\s*\"" + attrRegex +"\"";
//Search page source
MatchCollection matches = Regex.Matches(driver.PageSource, searchString, RegexOptions.IgnoreCase);
//iterate over matches
foreach (Match match in matches)
{
//Get the exact attribute value from this match
Match innerMatch = Regex.Match(match.Value, attrRegex);
string cssSelector = "[" + attrName + "='" + innerMatch.Value + "']";
//Find the element by its exact attribute value
elements.Add(driver.FindElement(By.CssSelector(cssSelector)));
}
return elements;
}
Note: this code is untested. Also, you can optimize this method by figuring out a way to eliminate the second search.