As noted on other cffile upload questions,
GetPageContext().formScope().getUploadResource("myFormField").getName()
is great for getting the filename on the server before actually doing the cffile (this works for Railo and Lucee; there's a different method for ColdFusion). But I noticed an interesting wrinkle: if the browser is IE, this returns the full source path including the filename, whereas Firefox and Chrome return only the filename.
For my application I need the full path, but I haven't been able to find it when the browser is Firefox or Chrome. If anyone has any ideas I would be most grateful!
(Expanded from the comments)
I am not familiar with the getUploadResource() function. However, looking over this related thread, it sounds like it returns file information submitted by the client. While there are recommended guidelines, ultimately the value received on the server is whatever the browser chooses to send. It is not something that can be changed or controlled by server-side code. So if Firefox and Chrome return something different from IE, you are out of luck.
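Since the value is entirely browser-dependent and cannot be controlled server-side, the usual defensive move is to normalize in the other direction: reduce whatever arrives to a bare filename so that IE's full path and Firefox/Chrome's plain name behave identically. A minimal sketch of that normalization, in Python purely for illustration (in CFML, getFileFromPath() does the equivalent):

    import ntpath
    import posixpath

    def client_filename(raw_name):
        """Reduce whatever the browser submitted to a bare file name.

        IE may send 'C:\\Users\\me\\photo.jpg'; Firefox and Chrome send
        just 'photo.jpg'. Stripping both Windows and POSIX directory
        components makes the result browser-independent.
        """
        # ntpath.basename handles backslash separators and
        # posixpath.basename handles forward slashes; chaining the two
        # covers both conventions.
        return posixpath.basename(ntpath.basename(raw_name))

    print(client_filename(r"C:\Users\me\photo.jpg"))  # photo.jpg
    print(client_filename("photo.jpg"))               # photo.jpg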
(As an aside, personally I have always found Internet Explorer to be a bit odd in this area. Traditionally browsers are restricted from certain file access operations for security reasons, unless a signed control is used. So you might expect those restrictions would prohibit a browser from submitting information about the structure of the client file system as well. In fact, most browsers do not submit path information with uploads, only a file name. Obviously, Internet Explorer chose to do things .. differently .. for whatever reason)
For my application I need the full path
Having said all that, why would you need the path from the client machine?
The recent Windows 10 update KB5003637 seems to have broken our use of the WebBrowser control. Our applications use a C++ dialog that hosts a web browser control based on the IWebBrowser2 interface and implemented by the COM class 8856f961-340a-11d0-a96b-00c04fd705a2. The control interacts with a bespoke internal 'web server' hosted on a localhost port. The web browser renders dynamic HTML with a bunch of CSS and JavaScript. It's a legacy app that has been working reliably for many years.
Our users on Windows 10 versions 2004, 20H2, and 21H1 are installing KB5003637, and once they do, the web browser no longer renders the content it did before.
Looking at a trace, I can see that the web browser requests the page's HTML, which is delivered as it should be. Normally the control would then request the CSS and JavaScript files needed to make the page active. Instead, nothing happens.
The KB5003637 update is pretty big, but it does contain fixes for some scripting vulnerabilities described in CVE-2021-31959, which are very much on point. Nothing I've found so far indicates how this was fixed, what effect it has on the WebBrowser control, or what workarounds there might be.
Any help would be appreciated.
It turns out that the Windows update I described did change the behavior of the WebBrowser control. Our bespoke web server was not including Content-Type headers in its responses to the WebBrowser's requests. For the last decade or more, the control either figured out what the content was on its own, or defaulted to the correct content type in the cases that mattered. After the update, the WebBrowser defaults to a content type of 'text' for the initial HTML payload. As a result, it does not try to interpret the payload as HTML and therefore takes no further actions (like requesting the CSS and JS files).
When I changed the code to include a Content-Type header of "text/html" for the initial payload, the application began working. Content-Type headers are now included with all replies.
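For anyone hitting the same wall: the essence of the fix is just an explicit Content-Type on every response. Our server is C++, but the idea is language-neutral; here is a minimal sketch in Python (the port and page content are arbitrary) of a handler that sets the header explicitly:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<!DOCTYPE html><html><body><h1>It renders!</h1></body></html>"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # The crucial line: without an explicit Content-Type, the
            # post-KB5003637 WebBrowser control treats the payload as
            # plain text and never parses it as HTML.
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)

    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()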
If we look at big websites like YouTube, Google Drive, Facebook, or cloud file-download sites, we find that no link to a file, video, or image ever exposes the original file. For example, with videos on YouTube, even if we inspect the element and look at the source on the video player, the real location isn't visible; the link is just written as:
src = "https://www.youtube.com/94118230-9dbf-4207-a098-de7a7ccdf7f6"
without any real address or file extension like .mp4. Can anyone help explain how to engineer this, and whether Django can handle engineering like this?
First point: a URL doesn't have to point to a file. What happens when a URL is requested is entirely up to the HTTP server serving it. Serving files from the server's filesystem is the basic default for most HTTP servers (static sites), but that's just one of the possibilities; the request can just as well be handled by executing a CGI script, delegating to a pool of long-running processes (a typical Python WSGI app), whatever...
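To make that concrete, here is a minimal sketch of a WSGI app (the port and the wording of the response are arbitrary): the requested path is just a string handed to code, and no file by that name needs to exist anywhere:

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # The path is just an input to this function; the server never
        # looks for a matching file on disk.
        path = environ["PATH_INFO"]
        body = ("You asked for %s; no such file exists." % path).encode()
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    make_server("127.0.0.1", 8000, app).serve_forever()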
Second point: file extensions are mostly cosmetic. You can have a file without any extension served directly by Apache, and you can have a URL with a filename extension that is actually served by some script or other program dynamically generating content.
IOW, there's absolutely no relation between what the URL looks like and how the response is built; "original file link" doesn't mean anything, and the YouTube URL you posted IS as much of a "real address" as any other URL.
can anyone help explain how to engineer this
There's no one-size-fits-all answer to this. If all you want is to serve a static file without an extension, just rename the file to drop the extension, possibly tweak your HTTP server config so that it correctly handles the case, and you're done.
whether django can handle engineering like this
Any technology that is not able to "handle engineering like this" is either prehistoric or fundamentally broken.
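Since you asked about Django specifically, here is a rough sketch of a YouTube-style opaque URL: an id is looked up server-side and the bytes are streamed back with an explicit content type. VIDEO_DIR and the URL layout are invented for the example:

    # urls.py: the URL exposes only an opaque UUID, no extension
    from django.urls import path
    from . import views

    urlpatterns = [
        path("<uuid:video_id>", views.serve_video, name="serve-video"),
    ]

    # views.py
    import os
    from django.http import FileResponse, Http404

    VIDEO_DIR = "/srv/videos"  # hypothetical storage location

    def serve_video(request, video_id):
        # The opaque id is mapped to a file server-side; the client never
        # learns the real path, or even that a file is involved at all.
        file_path = os.path.join(VIDEO_DIR, str(video_id))
        if not os.path.exists(file_path):
            raise Http404
        # The content type is set explicitly, since the name carries no
        # extension to guess from.
        return FileResponse(open(file_path, "rb"), content_type="video/mp4")

(In production you would usually delegate the actual byte-pushing to the front server, for example via something like nginx's X-Accel-Redirect, but the principle stays the same.)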
I strongly suggest you (seriously) read about the HTTP protocol and learn to set up and configure a standard HTTP server (Apache comes to mind). Only then will it make sense to worry about specific technologies (Django or otherwise).
I see an error using CFCHART with Lucee. The same code works in ColdFusion, but in Lucee it tries to refer to a file graph.cfm in a folder lucee:
mytestserver.com/lucee/graph.cfm?img=026f01d7b8c85b891a9c35c102623747&type=png
Do I need to create any mapping? Should this mapping be in Lucee admin or in IIS?
The short answer is: No, you don't need to add any additional mapping in IIS, nor in Lucee or Tomcat.
I've seen this question sit here for too long, so I'm placing an answer to shed some light on Lucee's graph.cfm.
Some tags in CFML need to create additional image files and then embed them as inline HTML elements in the rendered output. Examples of such file creation are <cfimage type="captcha" ...> or, as you have already noted in your issue, <cfchart>.
For this functionality, Lucee needs to create these files temporarily somewhere and also make them publicly available. To achieve this for cfimage/cfchart, Lucee creates the files in the web context folder of your webroot (typically located at path-to-your-webroot\WEB-INF\lucee\temp\graph) and embeds them inline with a link to graph.cfm. The template graph.cfm just reads the temporary file from that folder and delivers it in real time to your application.
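Conceptually, the template does nothing more than look up the temporary file by its id and stream it back with the right content type. A rough sketch of that logic, written in Python purely for illustration (the real template is CFML; see the source link below):

    import os

    # Temp file location as described above; placeholder path.
    TEMP_GRAPH_DIR = r"path-to-your-webroot\WEB-INF\lucee\temp\graph"

    def serve_graph(img_id, img_type):
        """Rough equivalent of graph.cfm: fetch a temporary image by its
        id and return it with an appropriate Content-Type header."""
        with open(os.path.join(TEMP_GRAPH_DIR, img_id), "rb") as f:
            data = f.read()
        # e.g. type=png in the query string becomes image/png
        headers = {"Content-Type": "image/%s" % img_type}
        return headers, data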
If you want to take a look at Lucee's original graph.cfm, you can take a peek thanks to open source:
source of Lucee's graph.cfm at GitHub
To make the files and the template graph.cfm temporarily publicly available (both sit behind the WEB-INF folder, which is hidden/blocked by default in Tomcat for security reasons), Lucee MUST have a virtual mapping. But you don't need to set it up yourself, because these mappings are already in place by default, as you can verify in the "Mapping" section of the Lucee Administrator.
Because graph.cfm is a .cfm file, IIS will route the request straight through the installed CFML connector (probably the BonCode connector) via AJP to Tomcat. Thus you don't need to set any mapping in IIS either.
Because you haven't submitted any additional error information, such as HTTP error codes or stack traces, I don't have any clue what the cause of your error might be. It may also be some incompatibility issue, which could be addressed if you submit it to the Lucee core team.
Another possibility is that many installation guides advise you to lock down the /lucee/ path with the IIS URL Rewrite Module, because this is also the path behind which the Lucee Administrator sits. If so, you can change the setting in the IIS rewrite rule and adapt it so that it no longer blocks graph.cfm.
It's also important to note that many of these CFML tags are implemented as Lucee extensions (.lex files). These are not necessarily pre-shipped or pre-installed in Lucee, but you can install them from within the Lucee Administrator, or get them from Lucee's download site and upload them through the "Extension" section of your Lucee Administrator.
I've just encountered this too. The issue is that the default mapping still doesn't resolve graph.cfm, so we've added an IIS virtual mapping instead.
I'm using Liferay 6.1.0 GA1.
My application runs on two Tomcats, with Varnish in front of them. Varnish routes to a particular node based on a cookie set for it.
When I try to upload multiple files in Firefox, the cookie is lost (in Chrome it works just fine).
My idea was to extend the URL: add a parameter that could later be filtered on in Varnish. But I cannot find where I should add this so that Flash will copy it properly.
Any other ideas that will be helpful are welcome as well.
P.S. Sorry for the bad English.
"Loosing a cookie" means that it explicitly is set to another value, or the hostname changes. I suggest you use Firebug or the built-in Developer tools (hit F12) and monitor the requests and responses that go through the line. Pay attention to Set-Cookie directives in the response headers as well as the Host directive in the request headers. This should give some hints where they're going.
It's hard to give more specific advice with the level of detail you provide.
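If you would rather test outside the browser, a few lines of Python with the requests library show the same information. The URL and cookie value here are placeholders for your setup; JSESSIONID is Tomcat's default session cookie name:

    import requests

    URL = "http://example.com/some/upload/page"   # placeholder

    # Replay a request with the sticky cookie and inspect what comes back,
    # including any redirect hops Varnish or Liferay introduce.
    resp = requests.get(URL, cookies={"JSESSIONID": "abc123"},
                        allow_redirects=True)

    for r in resp.history + [resp]:
        print(r.status_code, r.url)
        print("  Host       :", r.request.headers.get("Host"))
        print("  Set-Cookie :", r.headers.get("Set-Cookie"))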
A client I'm working for has mysteriously ended up with some malicious scripting on their site. I'm a little baffled, however, because the site is static, not dynamically generated: no PHP, Rails, etc. At the bottom of the page, though, somebody has injected a new tag and a script. When I opened the file on the web server, stripped the malicious stuff, and re-uploaded it, the script was still there. How is this possible? And more importantly, how can I combat this?
EDIT:
To make it weirder, I just noticed the script only shows up in the source if the page is accessed directly as 'domain.com/index.html' but not as just 'domain.com'.
EDIT2:
At any rate, I found a PHP file (x76x09.php) sitting on the web server that must have been updating the HTML file despite my attempts to strip it of the script. I'm currently in the clear, but I do have some work to do to make sure rogue files don't just appear again and cause problems. If anyone has any suggestions on this, feel free to leave a comment; otherwise, thanks for the help everyone! It was very much appreciated!
No, it's not possible unless someone has access to your files. So in your case, someone has access to your files.
Edit: It's best if you ask on serverfault.com about what to do when a server is compromised, but:
change your shell passwords
have a look at /var/log/messages for login attempts
finger root
have a look at the last modification times of those files
There is also a high probability that the files were altered via HTTP, by exploiting a vulnerability in some software component you use together with the static files.
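Following up on the point about modification times: a small script can surface recently changed files, plus anything that should not be on a static site at all. A sketch assuming a Unix host; WEBROOT and the age threshold are placeholders to adjust:

    import os
    import time

    WEBROOT = "/var/www/html"   # adjust to your document root
    MAX_AGE_DAYS = 7

    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for dirpath, _dirs, files in os.walk(WEBROOT):
        for name in files:
            full = os.path.join(dirpath, name)
            mtime = os.path.getmtime(full)
            # Flag recently changed files, and any .php file at all:
            # a purely static site has no business containing PHP.
            if mtime > cutoff or name.endswith(".php"):
                print(time.ctime(mtime), full)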
To the point about the site not having pages executing on the server: XSS is absolutely still possible via a DOM-based attack. Usually this relates to JavaScript execution outputting content to the page. Just last week, WhiteHat Security identified an XSS vulnerability on a purely "static" page.
It may well be that the attack vector relates to file-level access, but I suggest it's also worthwhile taking a look at what's going on JS-wise.
You should probably talk to your hosting company about this. Also, check that your file permissions aren't more lenient than they should be for your particular environment.
That's happened to me before; it happens if they get your FTP details. So whoever did it obviously got hold of your FTP details somehow.
The best thing to do is change your password and contact your web hosting company to figure out a better solution.
Unfortunately, FTP isn't the most secure...