I've tried loading a local HTML file using webkit_web_view_load_uri() with a file:// URL. However, the web view would display a blank page. To work around this, I tried webkit_web_view_load_html() instead, and it worked correctly.
Now that I'm trying to load some images in the HTML using the <img> tag, the images aren't loaded (the page displays blank).
I'm puzzled, because I tried a similar approach about two months ago and it worked.
Note: I copied the contents of the generated HTML into a file and loaded it in Firefox, and it worked as it should (the images are visible), but with another WebKitGTK application I had lying around, the images didn't load.
Note: I'm using C++ as the main programming language (I'd prefer solutions that use C++ types, if possible).
Note: I have set webkit_settings_set_allow_file_access_from_file_urls() and webkit_settings_set_allow_universal_access_from_file_urls() to TRUE.
OK, I've managed to solve this. The solution had NOTHING to do with WebKitGTK, which is strange. It seems that the application was trying to download the page instead of loading it, which traces back to a faulty MIME type database.
TL;DR:
Execute this:
rm ~/.local/share/mime/packages/user-extension-html.xml
update-mime-database ~/.local/share/mime
and use webkit_web_view_load_uri() with a file:// URI instead of webkit_web_view_load_html().
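For reference, here is a minimal sketch of the setup that works for me after the MIME database fix (WebKit2GTK with GTK 3; the window boilerplate and the file path are placeholders, not my actual code). It also applies the two settings from my notes above so that file:// pages can reference other local files such as images:

#include <gtk/gtk.h>
#include <webkit2/webkit2.h>
#include <string>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *view = webkit_web_view_new();
    gtk_container_add(GTK_CONTAINER(window), view);

    // Allow file:// pages to load other local resources (e.g. images).
    WebKitSettings *settings = webkit_web_view_get_settings(WEBKIT_WEB_VIEW(view));
    webkit_settings_set_allow_file_access_from_file_urls(settings, TRUE);
    webkit_settings_set_allow_universal_access_from_file_urls(settings, TRUE);

    // Load the page by URI instead of handing WebKit an HTML string.
    const std::string uri = "file:///home/user/page.html";  // placeholder path
    webkit_web_view_load_uri(WEBKIT_WEB_VIEW(view), uri.c_str());

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}

(Built against webkit2gtk-4.0, e.g. via pkg-config.)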
I had the same problem in C. You have to explicitly pass a file:// URI as the base_uri argument when you call webkit_web_view_load_html().
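For example, a minimal sketch (in C++, since the question asks for it; the base URI and the directory are placeholders, assuming the images referenced by the HTML live there):

#include <webkit2/webkit2.h>
#include <string>

// Load an HTML string so that relative <img src="..."> references resolve
// against a local directory.
void load_page(WebKitWebView *view, const std::string &html)
{
    const std::string base_uri = "file:///home/user/report/";  // placeholder directory
    webkit_web_view_load_html(view, html.c_str(), base_uri.c_str());
}

Without a base URI (or with NULL), relative references in the HTML have nothing to resolve against, so the images silently fail to load.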
See also answer here
Using Diagrams.net (draw.io), I would like to link specific elements to web pages. This is easily accomplished by creating a link for the element (say, a rectangle).
However, I would like to navigate directly to a specific id bookmark in the HTML page. I cannot seem to get that to work.
For example, if I try to use this syntax (which works in the browser location bar):
https://en.wikipedia.org/wiki/Canada#Geography
I will be taken to the main page:
https://en.wikipedia.org/wiki/Canada
However, the goal is to go to the "Geography" section of this page.
I have also tried the JSON syntax, without any success:
data:action/json,{"actions":[{"open":"https://en.wikipedia.org/wiki/Canada#Geography"}]}
I have also played with different action syntax such as:
data:action/json,{"actions":[{"open":"https://en.wikipedia.org/wiki/Canada"},{"scroll":{"tags":["Geography"]}}]}
Note: I'm using the diagrams.net desktop version 14.1.8.
Thank you for taking the time to read this question.
Paul
On Windows, this only seems to work if the browser isn't already open. There is not much we can do to fix this, as we're just passing the link to the OS.
Thanks to everyone in advance.
I encountered a problem when using Scrapy on Python 2.7.
The webpage I tried to crawl is a discussion board for the Chinese stock market.
When I try to get the first number, "42177", just under the banner of the page (the number you see on that webpage may differ, because it represents the number of times this article has been read and is updated in real time), I always get empty content. I am aware that this might be a dynamic content issue, but I don't have a clue how to crawl it properly.
The code I used is:
item["read"] = info.xpath("div[#id='zwmbti']/div[#id='zwmbtilr']/span[#class='tc1']/text()").extract()
I think the XPath is set correctly, and I have checked the return value of this response; it indeed told me that there is nothing under that node. Results shown here: 'read': [u'<div id="zwmbtilr"></div>']
If it has something, there should be something between <div id="zwmbtilr"> and </div>.
I'd really appreciate it if you could share any thoughts on this!
I just opened your link in Firefox with NoScript enabled. There is nothing inside the <div id='zwmbtilr'></div>. If I enable JavaScript, I can see the content you want. So, as you already knew, it is a dynamic content issue.
Your first option is to try to identify the request generated by the JavaScript. If you can do that, you can send the same request from Scrapy. If you can't, the next option is usually to use some package with JavaScript/browser emulation or something like that, such as ScrapyJS or Scrapy + Selenium.
As an example, I'm currently uploading items directly to an S3 bucket using a form. While I was testing, I didn't specify any expected filenames or extensions.
I uploaded a .png which produced this direct link:
https://s3-us-west-2.amazonaws.com/easyhighlighting2/2015-07-271438019663927upload94788
When I place this inside an img tag, it displays on a web page properly.
My question is, without an extension, how would my browser know what type of file it's loading? Inside the bucket, the file's metadata isn't even filled out.
Is there any way to get that file extension, programmatically?
I'm ready to try any client-side methods available; my server-side language is ColdFusion, which is somewhat limiting, but I'm open to suggestions for that as well.
Okay, so after some more extensive digging, I found a method of retrieving the file's type that was only added since CF10 was released; that would explain the lack of documentation.
The answer lies in the FileGetMimeType function.
<cfset someVar = "https://s3-us-west-2.amazonaws.com/easyhighlighting2/2015-07-271438019663927upload94788">
<cfset FileType = FileGetMimeType(someVar)>
<cfoutput>#FileType#</cfoutput>
This code would output image/png, which is correct and has worked for every file type I have tested thus far.
I'm surprised this kind of question hasn't popped up before, but this appears to be the best answer, at least for users of CFML.
Edit:
ColdFusion accomplishes this by either reading the contents of the file or by trusting its extension. An optional second argument, strict, controls which: if true, the function reads the file's contents; if false, it uses the provided extension.
True is the default.
Link:
https://wikidocs.adobe.com/wiki/display/coldfusionen/FileGetMimeType
Check the Content-Type HTTP response header returned by Amazon S3.
For example, curl -I https://s3.amazonaws.com/path/to/file fetches only the headers.
I have an application that contains a form which, until recently, I was able to save as a PDF using cfdocument. A few weeks ago we swapped out a server. The old server was running CF 9.0.1. The new server is CF 10. Since then, I've been getting this error when I try to save this particular form as a PDF.
--
An exception occurred when performing document processing. The cause
of this exception was that:
coldfusion.document.spi.DocumentExportException:
java.lang.IllegalStateException: This function should be called while
holding treeLock.
--
I have another page in the application that saves PDFs just fine. It's just this page that's throwing the error. I can't find anything about TreeLock anywhere on the web (at least, nothing that pertains to ColdFusion).
Has anyone else run into this, and if so, how did you fix it? Thanks!
I started getting the error after promoting a new version. I rendered the content as HTML and found I had forgotten to promote an image (I got the dreaded broken-image X). After promoting the image, cfdocument PDF generation worked again. (I'm using localUrl="yes".)
In other words, you can debug CF errors that halt the process, but cfdocument blithely assumes the HTML content you supply is correct and complete.
I had the same problem and, by process of elimination, found that cfdocument doesn't like textarea elements within the form. It is fine with input type=text, but whenever I tried to add textarea elements, it fell over with this error. Hope this helps someone.
My problem seems similar to Not able to visualize a loaded data, but I have no console errors and I have already added the '--allow-file-access-from-files' flag to my Chrome browser. Here's my JavaScript code:
window.onload = function() {
    var r = new X.renderer3D();
    r.init();
    pros = new X.mesh();
    pros.file = 'file:///C:/Users/Nathan/Downloads/JB Farmer STL ACII.stl';
    pros.caption = 'Prosthetic';
    r.add(pros);
    r.render();
};
Should I "play around" with with camera position, I know I have to do that in Three.js.
Maybe the model needs normals? I'm not sure if it does or not. I haven't worked with 3D modeling, besides Three.js.
Update: I'm not sure what is going on here, but I realized that XTK generated two canvases. I looked at the first two lessons and they only have one.
^ I've now eliminated the extra canvas; I must have copied a piece of code that created it.
For the moment, XTK's loader doesn't seem to be designed for local files. I mean: it uses an XMLHttpRequest (XHR) to fetch the file with a GET request. First of all, the request must be sent to something that can handle it (a server, or localhost emulated by WAMP or equivalent). Then imagine a browser, no matter which one, allowed XHR on a client-side file by its URL, and imagine I'm an attacker and you visit my website. I know Windows well; I know that in C:/Windows/System32 there is always a file where I can find your personal data. What do I do? An XHR! You've been hacked. It's just a story, but you see the idea.
That's why the only ways browsers allow access to local files are the HTML5 File API and the HTML5 Drag & Drop API (unfortunately...). Actually, one way to get around that limitation is to have binary code on the client side (Flash, a Java applet). The user is the only one who can ask to open a file or drop a file, so the browser is sure there won't be any security failure caused by it.
So you should test it with something like WAMP and access your file with a URL like "http://localhost/.../myfile.stl" or the relative URL "/.../myfile.stl", or do the following if you really want local files.
A few weeks ago I wrote my own parser for a private format for XTK, reading from a local file, and it worked well: I just used the HTML5 File API to read the file and get a string or binary array from it, and then wrote a parser that transformed it into an X.mesh. So I think the best approach would be to extend X.loader to use the HTML5 File API, or, like me, to load the file manually.
The following jsFiddle from Haehn helps: here!
What happens if you modify the filename so it has no spaces?
JB Farmer_STL_ACII.stl instead of JB Farmer STL ACII.stl