I have a json.xsl file which needs an Access-Control-Allow-Origin header.
My attempt at getting this working was to add
<http-headers>
<header name="Access-Control-Allow-Origin" value="*" />
</http-headers>
to the top of the json.xsl file, hoping I wouldn't have to set this globally. The file still parses properly, but the header isn't added.
Is this simply not possible, or am I using the wrong tags (or putting the tags in the wrong location)?
Use Icecast 2.4.2; it allows you to configure arbitrary headers, both globally and per "mount". You can define a "mount" for an XSLT file in order to apply e.g. headers or authentication to it.
Also, Icecast >2.4.1 has a working JSON API. (2.4.0 has problems and is NOT recommended!)
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
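A minimal sketch of the relevant icecast.xml fragment, assuming the default config layout (for per-mount headers, the same <http-headers> element goes inside a <mount> block instead):
<icecast>
    <!-- ... rest of the configuration ... -->

    <!-- global headers, sent with every response, including the JSON/XSLT output -->
    <http-headers>
        <header name="Access-Control-Allow-Origin" value="*" />
    </http-headers>
</icecast>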
Related
I'm currently building an XSLT stylesheet used to document other XSLT stylesheets in a series of folders and sub-folders. My code pulls out specific details about variables, functions, etc. and renders them in an output format. The sheets being read are created by a 3rd-party product. Most of them have an XSL extension, but some have proprietary extensions. I have some files with a DTCBS extension, but they are just XSL stylesheets.
I'm currently loading the content of these files into a variable using the XSLT function "collection" as follows:
<xsl:variable name="Collection" select="collection(concat('file:///', encode-for-uri(replace($filePath, '\\', '/')),'?select=*.(xsl|dtcbs|xml);recurse=yes'))" as="node()*"/>
The variable works just fine if I use XSL|XML, but if I include the DTCBS extension, the variable blows up citing "the supplied value is xs:base64Binary".
If I manually put the XML declaration line at the top of my DTCBS file, the variable works fine. Those DTCBS files are auto-generated without the declaration line, so I can't fix that, nor can I manually edit them each time I want to run my documenter code.
From what I can tell, because the extension isn't XSL and the XML declaration line isn't present, the XSLT parser thinks the file is base64 when it isn't.
I'm using Saxon as my XSLT parser, and the Saxon documentation says it uses file extensions and HTTP headers to detect the file type.
Does anyone know if there is a way to force collection() to treat every file as an XSL?
I tried adding the XML declaration line to the DTCBS files. This did correct the issue, but I can't do it in all cases as I am trying to automate the entire thing.
Renaming the DTCBS extension to XSL made the problem go away as well.
As well as Martin's suggestion, you can register content types with the Saxon configuration:
processor.getUnderlyingConfiguration()
.registerFileExtension("dtcbs", "application/xml");
This has been available since Saxon 9.7.
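For instance, a sketch assuming the stylesheet is run through Saxon's s9api Processor (adapt to however you construct your transformer):
import net.sf.saxon.s9api.Processor;

// Tell Saxon to treat *.dtcbs files found by collection() as XML documents
Processor processor = new Processor(false);
processor.getUnderlyingConfiguration()
         .registerFileExtension("dtcbs", "application/xml");
// ...then build the XsltCompiler / XsltTransformer from this processor as usual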
Try adding content-type=application/xml to the collection URI's query string, e.g. '?select=*.(xsl|dtcbs|xml);recurse=yes;content-type=application/xml'.
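Applied to the variable from the question, that would look something like this (a sketch, relying on Saxon honouring the content-type parameter as suggested above):
<xsl:variable name="Collection"
    select="collection(concat('file:///',
        encode-for-uri(replace($filePath, '\\', '/')),
        '?select=*.(xsl|dtcbs|xml);recurse=yes;content-type=application/xml'))"
    as="node()*"/>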
I'm serving unversioned files via Fossil's uv feature. This works fine for files without a file extension and for archives, but I need to serve a .txt file, and the problem is that it gets delivered as an HTML page with the Fossil web layout around it.
Is there a way to tell Fossil not to do that and instead deliver it as a raw .txt file?
You can specify a mimetype parameter on the URL. For example, mimetype=application/octet-stream will cause it to be offered as download.
For example, instead of https://www.fossil-scm.org/index.html/uv/download.html, you’d put https://www.fossil-scm.org/index.html/uv/download.html?mimetype=application/octet-stream.
Fossil reacts to the following mimetypes by wrapping them in its page layout:
text/x-fossil-wiki
text/x-markdown
text/html
text/plain
Unfortunately, all other mimetypes appear to lead to the browser downloading the unversioned file instead of displaying it.
If that's a problem, you could try a mimetype of text with no suffix.
Otherwise, you can post on Fossil's support forum, either as a question or as a feature request. :-)
I need to perform some logic based on custom headers. I'm using Chrome's Postman extension to add the headers.
But it seems like I can only add them when the header name doesn't contain a '_'.
Is there any reason for this?
Ideally I would like to add a header named something like 'MY_HEADER' and access it via request.META['MY_HEADER']; right now I'm adding it as 'MYHEADER' and accessing it via request.META['MYHEADER'].
Thank you vanadium23. It turns out Nginx was making the changes. Full answer here: Why underscores are forbidden in HTTP header names
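For reference, if you control the nginx front end yourself, the directive that changes this behaviour is underscores_in_headers. A minimal sketch (server name and upstream address are hypothetical):
# By default nginx drops request headers whose names contain underscores
# before passing the request to the backend.
server {
    listen 80;
    server_name example.com;

    # let MY_HEADER and similar names through to the application
    underscores_in_headers on;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}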
Which class of Tomcat is responsible for converting a .jsp file to a .class file? I want to see the source code written for the conversion. My aim is to check the logic for how scriptlet comments are eliminated, and based on that I'll write my own code that removes HTML comments as well (I've not decided how I will implement it).
I am sure the source code is available, as Tomcat is open source.
Or is it possible to implement some kind of filter so that each time the server returns a JSP page, the comments are removed? I can replace all HTML comments with scriptlet comments, but I want to ensure that if someone uses HTML comments in the future, they are not displayed. It's basically for security.
[Added]
As per the suggestion given by JB Nizet, we will be modifying the build.xml file to remove comments. I have come up with this to remove HTML comments:
<target name="-trim.html.comments">
    <echo message="Inside trim html comments" />
    <fileset id="html.fileset" dir="${build.dir}" includes="**/*.jsp, **/*.html" />
    <!-- HTML Comments -->
    <replaceregexp replace="" flags="g" match="\&lt;![ \r\n\t]*(--([^\-]|[\r\n]|-[^\-])*--[ \r\n\t]*)\&gt;">
        <fileset refid="html.fileset"/>
    </replaceregexp>
</target>
However, I am not sure how to remove comments that start with // or /* */. Any suggestion on how I can do so? I have searched the internet but didn't get a clue.
We are using an Ant script for the build.
[Added]
To remove single-line comments that start with //, I am using the regex below, but somehow it's not working. Can anyone please tell me what I'm doing wrong? Thanks in advance.
<replaceregexp flags="gs" match="?:^\s*\/\/(?:.*)$" replace="">
Rather than doing it in Tomcat, use Apache directly. It supports modules which do exactly what you need. mod_pagespeed is probably closest to what you want; mod_deflate may be configurable to do the same thing, though it also compresses the data, which might be overkill.
As a nice side-effect, this allows you to leave your handy comments in and they'll be served to your internal users (developers) who use port 8080, while those using port 80 will see only the minified product.
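A sketch of the Apache configuration this implies, assuming mod_pagespeed is installed (directive and filter names taken from the mod_pagespeed docs; verify against your version):
# httpd.conf / pagespeed.conf excerpt
ModPagespeed on
# strip HTML comments from the served pages
ModPagespeedEnableFilters remove_comments
# optionally also collapse whitespace in the output
ModPagespeedEnableFilters collapse_whitespace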
I'm using DDX to add headers, footers, and pagination to PDF documents. If possible I would like the header for the first page of each file to be blank, but then to have headers for the remaining pages.
I've looked through the documentation and can't find a way to do this. It seems like a commonly used feature so I'm guessing there must be some way to implement it.
(From the comments)
You might be able to achieve this effect by using multiple <PDF> tags: one for the first page and another for pages 2-N, with a nested <Header> tag.
i.e.:
<PDF pages="1" src="c:/path/someFile.pdf">
...
</PDF>
<PDF pages="2-last" src="c:/path/someFile.pdf">
<Header...>
</PDF>