For a public domain package, I need to determine whether given Fortran code is free of fixed-format code, and unfortunately it is not possible to rely on the file extension for this. Is there a reliable way to do this, or example code that does it?
You would have to write your own program/script that checks the form.
Is there any & before column 72 outside of character strings? -> free form
Is there any character other than a digit or a blank in the first five columns outside of comments? -> free form (both checks are sketched in code below)
There are other possibilities, but they would generally require deciding whether the statement is valid, and that is more difficult:
Deciding if the character in column 6 is a fixed-form continuation character or part of a statement in free form.
Deciding if a non-! character in the first column (C, c or *) is a comment in fixed form or a valid statement in free form.
There may be files that conform to both, but that shouldn't be a problem.
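A minimal sketch of those two cheap checks in Python, assuming plain source files; the names are illustrative, and a file that conforms to both forms will come back as fixed:

def looks_like_free_form(path):
    with open(path, errors="replace") as f:
        for line in f:
            line = line.rstrip("\n")

            # Blank out character strings and drop trailing ! comments,
            # keeping column positions intact.
            chars = []
            quote = None
            for ch in line:
                if quote:
                    chars.append(" ")
                    if ch == quote:
                        quote = None
                elif ch in "'\"":
                    quote = ch
                    chars.append(" ")
                elif ch == "!":
                    break  # comment runs to the end of the line
                else:
                    chars.append(ch)
            code = "".join(chars)

            # Check 1: an & outside strings before column 72 is a
            # free-form continuation marker.
            if "&" in code[:72]:
                return True

            # Check 2: fixed form allows only blanks and digits
            # (statement labels) in columns 1-5 of a non-comment line.
            if line[:1] in ("C", "c", "*", "!") or not line.strip():
                continue  # fixed-form comment or blank line
            if any(ch not in " \t0123456789" for ch in code[:5]):
                return True
    return False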
Silverfrost FTN95 uses the file extension:
.for, .f or .fix - fixed format
.f90, .f95 - free format
If the user wants to override the default, they use a command-line option (which they can also set as their default).
You could use these extensions as a first guess and then look at the first few lines to see whether they match that form.
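Building on that, a hypothetical first guess from the extension (using FTN95's conventions from above), falling back to the content check sketched earlier:

import os

FREE_EXTS = {".f90", ".f95"}   # FTN95 treats these as free format
# .for, .f and .fix default to fixed format

def guess_form(path):
    ext = os.path.splitext(path)[1].lower()
    if ext in FREE_EXTS:
        return "free"
    # Fixed-format and unknown extensions: let the content decide,
    # defaulting to fixed when nothing free-form is found.
    return "free" if looks_like_free_form(path) else "fixed"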
To reduce the size of my simulation output files, I want to give variable name exceptions instead of a list of many specific variables to the simulationsOptions/outputFilter (cf. OpenModelica Users Guide / Output) of my model. I found the regexp operator "^" to fulfill my needs, but it didn't work as expected. So I think that something is wrong with the interpretation of connected character strings when negated.
Example:
When I have any derivatives der(...) in my model and use variableFilter=der.*, the output file will contain all the filtered derivatives. Since there are no other variables beginning with the character d, the same happens with variableFilter=d.*. For testing I also tried variableFilter=rde.* to confirm that every variable is filtered.
When I now try to make exceptions with variableFilter=^der.*, =^rde.* or =^d.*, I get exactly the same result as without using ^. So the operator seems to be ignored in this notation.
When I instead use variableFilter=[^der].*, =[^rde].* or even =[^d].*, all of the wanted derivative variables are filtered from the output, but there is no difference between those three expressions. To me it seems that every character is interpreted standalone and not as a connected string.
Did I understand and use the regexp right, or could this be a bug in the code?
Side/follow-up question: Where can I officially report this for software revision?
OpenModelica v.1.19.2 (64-bit)
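For reference, the behaviour described matches how ^ and [^...] work in most regex flavors, rather than an OpenModelica-specific quirk; a short Python illustration with made-up variable names:

import re

names = ["der(x)", "body.v", "damper.d"]

# '^' anchors the match at the start of the string; it does not negate
# the pattern that follows, so '^der.*' still selects the der variables.
print([n for n in names if re.fullmatch(r"^der.*", n)])    # ['der(x)']

# '[^der]' is a character class: one character that is not d, e or r.
# The letters are tested individually, not as the word "der".
print([n for n in names if re.fullmatch(r"[^der].*", n)])  # ['body.v']

# In flavors that support it, a negative lookahead excludes a prefix as
# a whole (whether the OpenModelica filter accepts this is a separate
# question):
print([n for n in names if re.fullmatch(r"(?!der).*", n)]) # ['body.v', 'damper.d']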
I want to search for all lines that match this regex
^([0-9IVX]\.)*.*\R
and report with the page number they are at. The output would be something like:
1. Heading/page number
1.1 Subheading/page number
1.1.1. Subsubheading/page number
Is this possible to do in PDF? I suppose that would require Ghostscript, but searching the How to Use Ghostscript page for regex I find nothing.
I can't think why you would expect Ghostscript to do the searching for you.
I'm not sure if you are hoping to get data tagged as 'heading', 'page number' etc from the PDF file, or if you are going to work that out yourself based on the data you find.
If it's the former then the first problem is that, in general, PDF files don't have the kind of structure information you are looking for. There is nothing in most PDF files which says 'this is a heading', 'this is a page number' etc.
There are such things as 'tagged PDF' which adds non-printing elements to a PDF file which do carry that kind of data around with them. This is an entirely optional feature, the vast majority of PDF files don't contain it, and Ghostscript completely ignores it.
Since most PDF files don't have that information, you can't rely on it, unless you are in the happy position of knowing where your PDF files are being generated and that they contain this kind of information. In which case there are numerous tools around which will extract it for you, or enable you to write code to do so.
The problem with just searching for the text is that firstly the text need not be written as a contiguous stream. So if you are looking for '1.1' that might be written as:
(1.1) Tj
(1) Tj
(.) Tj
(1) Tj
[(1) -0.1 (.) 0.1 (1)] TJ
or any combination of those. The individual character codes need not even appear in order or in the same content stream.
Secondly, the character code in a PDF content stream need not be (and often is not) a Unicode code point, or ASCII, or any other standard coding scheme; it can be totally arbitrary.
Some PDF files carry a ToUnicode CMap around which maps the character codes to Unicode code points, but not all do. Some fonts may use a standard (that's PDF standard) Encoding, in which case it's possible to infer the Unicode code points. Some Encodings may contain glyph names, from which it's again possible to infer Unicode code points.
In the end though, some PDF files are simply impossible to extract text from without using OCR.
Your best bet is probably to write code to extract the text, and Ghostscript will do that. It even goes through the hierarchy of fallbacks listed above to try to find a Unicode code point. If all else fails it just uses the character code and hopes that's good enough.
If you use Ghostscript's txtwrite device it will produce either a faked-up text page (the default), which attempts as far as possible to mimic the text layout in the original PDF file, including merging bits of text that aren't contiguous in the PDF file but are next to each other on the page; or an 'XML-like' output which will tell you which Unicode code points, or character codes, were encountered and what their positions are on the original page. If you don't like txtwrite's attempts to figure out which text goes with what, you can use the latter to write your own.
I suspect the text page is probably good enough for your purposes. You can have the txtwrite device produce one file per page, so you can get the page number from the filename. Then you can write your own regular expression(s) to search the files and find your matches.
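A rough sketch of that pipeline in Python; paths and the output pattern are illustrative, gs is assumed to be on the PATH, and the heading regex is tightened slightly, since the original ^([0-9IVX]\.)*.*\R matches every line (the group may repeat zero times and .* matches anything):

import re
import subprocess
from pathlib import Path

# One text file per page; %03d puts the page number in the filename.
subprocess.run(
    ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=txtwrite",
     "-sOutputFile=page_%03d.txt", "input.pdf"],
    check=True,
)

# Lines that start with a dotted decimal/Roman numeral, then some text.
heading = re.compile(r"^([0-9IVX]+\.)*[0-9IVX]+\.?\s+\S")

for page in sorted(Path(".").glob("page_*.txt")):
    page_no = int(page.stem.split("_")[1])
    for line in page.read_text(errors="replace").splitlines():
        if heading.match(line.strip()):
            print(f"{line.strip()}/{page_no}")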
I'm writing a long HOWTO in reStructuredText format and wondering if there's a way to let the user specify values for a couple of variables (hostname, IP address) at the top, so that the rest of the document would be filled in with those automatically?
Like me, you are probably looking for substitution. In the quoted section below you'll find how to replace text.
Substitution Definitions
Doctree element: substitution_definition.
Substitution definitions are indicated by an explicit markup start
(".. ") followed by a vertical bar, the substitution text, another
vertical bar, whitespace, and the definition block. Substitution text
may not begin or end with whitespace. A substitution definition block
contains an embedded inline-compatible directive (without the leading
".. "), such as "image" or "replace".
Specifically about text replacement:
Replacement text
The substitution mechanism may be used for simple macro substitution. This may be appropriate when the replacement text is
repeated many times throughout one or more documents, especially if it
may need to change later. A short example is unavoidably contrived:
|RST|_ is a little annoying to type over and over, especially
when writing about |RST| itself, and spelling out the
bicapitalized word |RST| every time isn't really necessary for
|RST| source readability.
.. |RST| replace:: reStructuredText
.. _RST: http://docutils.sourceforge.net/rst.html
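Applied to the question, the hostname and IP address could each be defined once at the top and reused throughout; the values here are made up:

.. |hostname| replace:: db01.example.com
.. |ip| replace:: 192.0.2.10

Connect to |hostname| (|ip|) before running the installer.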
reStructuredText is a markup language used to define static content. HTML content (I assume the desired output format is HTML) is typically generated from reStructuredText at build time and then released/shipped to the user.
To allow users to specify variables, you would need a solution on top of reStructuredText, for example:
Ship the content with a JavaScript plugin that dynamically replaces specific strings in the HTML document with user input.
Generate the documentation on-the-fly after the user has specified the variables.
Note that these examples are not necessarily particularly viable solutions.
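As a hypothetical sketch of the second option, a build step could prepend user-supplied substitution definitions to the source and render it with docutils (file names and values are made up):

from docutils.core import publish_string

user_values = {"hostname": "db01.example.com", "ip": "192.0.2.10"}
defs = "\n".join(f".. |{name}| replace:: {value}"
                 for name, value in user_values.items())

with open("howto.rst") as f:
    source = defs + "\n\n" + f.read()

# publish_string returns bytes with the default settings.
html = publish_string(source, writer_name="html")
with open("howto.html", "wb") as f:
    f.write(html)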
I've been trying to prohibit users from entering double-quotes (") into some fields that are used in JSON strings, as they cause unexpected termination of values in the strings. Unfortunately, while the regex isn't hard to write, I can't get it to work within XPages.
I tried using both double-quotes alone and using the escape character. Both ways fail every string, not just ones including double-quotes.
<xp:validateConstraint message="Please do not use double quotes in organization/vendor names">
<xp:this.regex><![CDATA['^[^\"]*$]]></xp:this.regex>
</xp:validateConstraint>
There must be a simple way around this issue.
I think you're running into issues with the regex property of your xp:validateConstraint validator. You seem to be attempting to strip the characters in the xp:this.regex, as opposed to specifying what characters are allowed, which is what I believe the docs describe. I might recommend checking out the xp:customConverter (bias: I'm more familiar with the customConverter), which gives you the ability to alter the getValueAsObject and getValueAsString methods; then you can escape the undesired characters.
Here's what I'm thinking of, to strip them out. If you plug this into an XPage, you'll find that when the value is pulled (e.g. by the partial refresh), it converts the input content accordingly by stripping out quotes (both single and double, in my case).
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
<xp:inputTextarea
id="inputTextarea1"
value="#{viewScope.myStuff}"
disableClientSideValidation="true">
<xp:this.converter>
<xp:customConverter>
<xp:this.getAsString><![CDATA[#{javascript:return value.replace(/["']/g, "");}]]></xp:this.getAsString>
<xp:this.getAsObject><![CDATA[#{javascript:return value.replace(/["']/g, "");}]]></xp:this.getAsObject>
</xp:customConverter>
</xp:this.converter>
</xp:inputTextarea>
<xp:button
value="Do Something"
id="button1">
<xp:eventHandler
event="onclick"
submit="true"
refreshMode="partial"
refreshId="computedField1" />
</xp:button>
<xp:text
escape="true"
id="computedField1"
value="#{viewScope.myStuff}" />
</xp:view>
My interaction with the above code yields the input with both kinds of quotes stripped out.
Notice that for the change to show up in the refresh, I'm modifying both the getAsString and the getAsObject methods, since the refresh updates the viewScope'd object (a fact I had to remind myself of); saving to a text field in XPages will get the value via getAsString (provided your data source knows it's a String-related field, e.g. a NotesXspDocument as document1, with a known Form, where the field is a Text field).
As the above comments alluded to, this filters the input values as opposed to escaping or validating them. You could also change my replace methods to insert an escape character instead, e.g. return value.replace(/"/g, "\\\"").replace(/'/g, "\\'");.
Is the simple answer just to add a JavaScript function call on the submit button to remove the quotes?
A more elegant solution would be to not allow typing of the quote at all, by checking the keydown event and preventing the default action for that character code. The user should not be able to type one thing and then have it changed on them in processing.
@Eric McCormick recommends a customConverter, which in my opinion is a neat solution I would probably go for in many cases. Sometimes, however, we need to teach users to adhere to the rules, so we have to show them where they went wrong. That's when we may need a validator.
Playing around a bit, the simplest solution I came up with is an xp:validateExpression that simply looks for the first occurrence of a double quote within the String entered:
<xp:inputText
id="inputText1"
value="#{viewScope.testvalue}">
<xp:this.validators>
<xp:validateExpression
message="Hey, wait! Didn't I tell you not to use double quotes in here?">
<xp:this.expression><![CDATA[#{javascript:value.indexOf("\"")==-1}]]></xp:this.expression>
</xp:validateExpression>
</xp:this.validators>
</xp:inputText>
If that's a single occurrence in your application, that's it, really. If you need this and similar solutions all over the place, you might want to look into writing a small validator bean (Java), registering it via faces-config.xml, and then using it everywhere in your application, e.g. by using an xp:validator instead.
As suggested by @Tomalik and @sidyll, this is an attempt to solve the wrong problem. While each of the answers supplied does solve the problem of preventing the user from entering undesirable characters, it is better to encode those characters and so preserve the user's input. In this particular case, the intermediate step in providing the data to the user via a JSON string is to pull the value from a view.
So, all I had to do was change the column formula to encode the string using the UTF-8 character set, and it displays the values with the "undesirable characters". The unencoded value is stored on the document so that old Notes access won't create confusion.
#URLEncode ("UTF-8"; vendorName )
In one case, the JSON is computed as part of the form design, but the same solution works.
I have looked at the following question:
How to comment out a block of Python code in Vim
But that does not seem to work for me. How do I comment out code easily without resorting to plugins/scripts?
Use ctrl-V to do a block selection and then hit I followed by //[ESC].
Alternatively, use shift-V to do a line-based select and then type :s:^://[Enter]. The latter part could easily go into a mapping. eg:
:vmap // :s:^://<CR>
Then you just shift-V, select the range, and type // (or whatever you bind it to).
You can add this to your .vimrc file:
map <C-c> :s/^/\/\//<Enter>
Then when you need to comment out a section, just select all its lines (Shift-V plus movement) and then press Ctrl-C.
To un-comment you can define in a similar way
map <C-u> :s/^\/\///<Enter>
which removes a // at the beginning of each line in the selected range when pressing Ctrl-U.
You can use the NERD commenter plugin for vim, which has support for a whole bunch of languages (I'm sure C++ is one of them). With this installed, to comment/uncomment any line, use <Leader>ci. To do the same for a block of text, select text by entering the visual mode and use the same command as above.
There are other features too, such as commenting n lines by supplying a count before the command, yanking before commenting with <Leader>cy, commenting to the end of the line with <Leader>c$, and many others, which you can read about in the link. I've found this plugin to be extremely useful and it is one of my 'must have' plugins.
There's always #ifdef CHECK_THIS_LATER ... #endif, which has the advantage of not causing problems with nested C-style comments (if you use them) and is easy to find and either uncomment or remove completely later.