Is there any "gauge page" in Internet or some general procedure of querying some popular pages like Google's, so that it return constant known output?
I want to write a Unit test, which will succeeded if internet is working and data is transferring correctly.
UPDATE
Specifically, I need HTTP, so that the whole stack gets checked, including my app's part.
I like www.something.com (note the www). It hasn't changed since I found it, and the output is really small.
I don't think it's official or anything though.
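If it helps, here is a minimal sketch of such a check in C++ using libcurl; the gauge URL and its expected body are placeholders for whatever page you settle on, not an official endpoint:

    // Minimal connectivity check using libcurl.
    // The URL and expected body below are placeholders.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    bool internet_ok(const std::string& url, const std::string& expected) {
        CURL* curl = curl_easy_init();
        if (!curl) return false;
        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        // Pass only if the transfer succeeded AND the payload matches exactly.
        return rc == CURLE_OK && body == expected;
    }

    int main() {
        bool ok = internet_ok("http://www.example.com/gauge.txt", "known output\n");
        std::cout << (ok ? "internet OK" : "check failed") << '\n';
        return ok ? 0 : 1;
    }

The exact-match comparison is the point: a captive portal or proxy error page will still complete the transfer, but it won't produce the known output.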
We have a collection of VB.NET / IIS web services on some of our servers, and they have web.config files in the websites' root directories that they're already reading configurations from. There is a new configuration that needed to be added that will immediately be quite a bit longer than the others, and it'll only stand to grow. It's essentially a comma-separated value, and I'm wanting to keep it specifically in a configuration file of some sort.
At first I started doing this with a text file, but there was a problem with that. The text file's contents could change while web service threads and processes are running, so they would need to essentially re-read the file every time they needed to access its values. I thought about using some sort of caching, but unless the web services are completely restarted each time the file is updated, caching would block updates to the file from being used immediately. But reading from a text file each time is slow...
Then came the idea of putting that value in web.config, along with the other configuration the services are already using. When web.config is altered, the changes can be cached in the code and also take effect immediately. However, web.config is, well, web.config; it's not a trivial text file that the code simply reads from. IIS treats web.config in a special manner.
I'm tempted to think any negative consequences of putting a comma-separated value in web.config would be outweighed by the benefits, compared to storing it in a text file (or a database, which probably can't be used for this anyway), but I figured I'd better ask.
What are the implications of storing a possibly lengthy, comma-separated value in web.config instead of in its own little text file? Is either file a particularly good or bad idea? To me it seems like web.config would be easy to work with, without having to re-read the file over and over, but there's certainly more to it than the average user is aware of. Thanks!
I recommend using the Application Cache for this. You can insert the parsed values into the cache with a dependency on the file, so the cached entry is invalidated automatically as soon as the file changes; that gives you immediate updates without re-reading the file on every request. It also avoids a side effect of the web.config route: editing web.config recycles the application domain. See:
http://msdn.microsoft.com/en-us/library/vstudio/6hbbsfk6(v=vs.100).aspx
I am writing a webserver in C++. I am looking at the POST documentation on w3:
http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4
I see that a POST is supposed to support the full multipart scheme: parts and sub-parts (and, presumably, sub-sub-parts...), just like email attachments.
Is there any browser and/or tool that does that on a regular basis? In other words, is it really important for a server to support parts and sub-parts?
The obvious problem with sub-parts is that they could mean two files are uploaded under the same field name. That's quite a problem, if you ask me. Also, from what I can see, PHP doesn't support them at all. Am I correct?
Ah! I guess I should have searched a little more; to tell you the truth, I had not thought of looking at HTML5 for the answer.
The following paragraph actually includes the answer:
http://www.w3.org/html/wg/drafts/html/master/forms.html#multipart-form-data
Note: In particular, this means that multiple files submitted as part of a single <input type=file multiple> element will result in each file having its own field; the "sets of files" feature ("multipart/mixed") of RFC 2388 is not used.
So it is clear that sub-parts (multipart/mixed) are not to be supported.
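For the server side, that means only the top level of a multipart/form-data body ever needs to be split into parts. Here is a rough sketch of that top-level split in C++ (my illustration; it assumes the boundary string has already been extracted from the Content-Type header, and it ignores edge cases):

    // Split a multipart/form-data body into its top-level parts.
    // Per the HTML5 note above, each uploaded file arrives as its own part,
    // so there is no need to recurse into nested multipart/mixed bodies.
    #include <string>
    #include <vector>

    std::vector<std::string> split_parts(const std::string& body,
                                         const std::string& boundary) {
        // Parts are framed by "--" + boundary followed by CRLF;
        // the closing delimiter carries a trailing "--".
        const std::string delim = "--" + boundary;
        std::vector<std::string> parts;
        size_t pos = body.find(delim);
        while (pos != std::string::npos) {
            size_t start = pos + delim.size();
            if (body.compare(start, 2, "--") == 0) break;  // closing delimiter
            start += 2;                                    // skip the CRLF
            size_t next = body.find(delim, start);
            if (next == std::string::npos) break;
            // Each part still contains its own headers, a blank line,
            // and then its content.
            parts.push_back(body.substr(start, next - start - 2));
            pos = next;
        }
        return parts;
    }

A real parser would also have to honor RFC 2046's rule that the boundary only counts at the start of a line, but this shows the shape of the work.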
In our Windows 8 application, we are using the IXMLHTTPRequest2 interface to stream files over HTTP, files whose size can reach gigabytes. This all works perfectly, except for the fact that internally, WinRT has a caching system which stores everything streamed through IXMLHTTPRequest2 in the temporary internet cache. As we stream more and more files, the cache is never emptied and it just keeps taking more and more space on disk, until the disk is full.
Optimally, we would like to disable this caching functionality entirely. Another option we could live with would be that the cached files would be removed after a short while (although we'd like to avoid having to browse the temporary internet cache and removing files manually).
We've tried adding the "Expires: 0" header to the server response, as well as disabling the caching directly inside IE (we thought this might have an influence on the call to IXMLHTTPRequest2), but to no avail.
Does anyone have any thoughts on this?
I realize this question is similar to another one posted here; however, our problem has more to do with the space taken by the cache than with the "freshness" of the files.
EDIT:
We have also found this post on the MSDN forums, where, according to an MSFT moderator, "The system will also periodically cleans up the cache so you will not have to worry about running out of disk space", but that is not the case in our scenario.
According to this post on the MSDN forums, this isn't possible and is a known limitation with WinRT.
Sometimes the only answer is bad news. :-[
As ildjarn noted, this seems to be unavoidable on Windows 8. But it looks like there might be a way to fix this for clients running Windows 8.1.
I haven't tried it myself, but I just noticed that there is now "IXMLHTTPRequest3" which extends "IXMLHTTPRequest2" with some new features:
http://msdn.microsoft.com/en-us/library/windows/desktop/dn376398%28v=vs.85%29.aspx
The relevant feature is:
XHR_PROP_NO_CACHE – Suppresses cache reads and writes for the HTTP request.
That sounds promising.
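I haven't tried this, but based on the documented interface, the call would presumably look something like the following (error handling trimmed; creating the request object via CoCreateInstance and implementing the callback are left out):

    // Sketch: suppress cache reads/writes with IXMLHTTPRequest3 (Windows 8.1+).
    #include <msxml6.h>

    HRESULT OpenWithoutCache(IXMLHTTPRequest3* pXhr,
                             IXMLHTTPRequest2Callback* pCallback,
                             const wchar_t* url) {
        HRESULT hr = pXhr->Open(L"GET", url, pCallback,
                                nullptr, nullptr, nullptr, nullptr);
        if (FAILED(hr)) return hr;
        // XHR_PROP_NO_CACHE: suppress cache reads and writes for this request.
        hr = pXhr->SetProperty(XHR_PROP_NO_CACHE, TRUE);
        if (FAILED(hr)) return hr;
        return pXhr->Send(nullptr, 0);
    }

Whether the property must be set before or after Open I can't say for certain; the IXMLHTTPRequest2 samples set properties between Open and Send, so that ordering is assumed here.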
I have many source/text files, say file.cpp or file.txt. Now I want to view all my code/text in a browser, so that it will be easy for me to navigate the many files.
My main motive for doing all this is that I am learning C++ on my own, so whenever I learn something new, I create some sample code and then compile and run it. Alongside the code there are comments/tips for me to be aware of. Then I create links to each file for easy navigation. Since there are many such files, I thought it would be easier to navigate them if I used this HTML method. I am not sure whether it is a good approach; I would like some feedback.
What I did was save file.cpp/file.txt as file.html and then use the pre and code HTML tags for formatting, plus the other HTML tags necessary for viewing the files.
But when I use it, everything inside < > is lost,
e.g. #include <iostream> is just seen as #include, and <iostream> is lost.
Is there any way to see it? Is there any tag or method that I can use?
I could use the regular HTML escape codes &lt; and &gt; for this, to display < and >, but since I have many include files and changing all of them is a bit time-consuming, I want to know whether there is another way.
So, is there any solution other than s/</&lt;/ and s/>/&gt;/ ?
I would also like to know if there are any other ideas/tips besides converting the cpp files into HTML.
What I want to have in my main page is something like this:
tip1 Do this
tip2 Do that
When I click tip1, it opens tip1.html, which has my code for that tip. There is also a back link in tip1.html which takes me back to the main page on clicking it. Everything is OK except that everything inside < > is lost, not displayed.
Thanks.
You might want to take a look at online tools such as CodeHtmler, which lets you paste code into the browser, select the appropriate language, and have it converted to HTML for you, together with keyword colourisation etc.
Or, do like many other people and put your documentation in Doxygen format (/** */), with code samples in @verbatim/@endverbatim tags. Doxygen is good stuff.
A few ideas:
If you serve the files as mimetype text/plain, the browser should display the text for you.
You could also possibly configure your browser to assume .cpp is text/plain.
Instead of opening the files directly in the browser, you could serve them with a web server that can escape the characters for you.
You could also use SyntaxHighlighter to display the code on the client side using JavaScript.
It is pretty much essential that somewhere along the line you use a program to prevent the characters '<', '>' and '&' from being (mis-)interpreted by your browser (and to expand significant repeated blanks into '&nbsp;'). You have a couple of options for when and how to do that.
You could use static HTML, simply converting each file once before putting it into the web server document hierarchy. This has the least conversion overhead if the files are looked at more often than they are modified.
Alternatively, you can configure your web server to serve the pages via a filter program (CGI, or something more sophisticated) and serve its output in lieu of the file. The advantage is that files are only converted when needed; the disadvantage is that they are converted every time they are needed.
You could get fancy and consider a caching solution: convert the file on first demand, but retain the converted file for future use. The main downside there is that the web server needs to be able to write to wherever the converted files are cached, which is not necessarily a good idea for security reasons. (A minimalist approach to security requires the document hierarchy to be owned by and only writable by one user, say webmaster, while the web server runs as another user, say webserver. Now the web server cannot do any damage, because it cannot write anywhere in the document hierarchy. Simple; effective; restrictive.)
The program can be a simple Perl script or a simple C program (the C source for webcode 1.3 is available here).
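For reference, such a filter is only a few lines in C++ as well (my sketch, not the webcode program mentioned above): it reads a source file on stdin and writes an HTML page with the special characters escaped inside a pre block.

    // code2html: escape '<', '>' and '&' and wrap the result in <pre>.
    #include <iostream>

    int main() {
        std::cout << "<html><body><pre>\n";
        char c;
        while (std::cin.get(c)) {
            switch (c) {
                case '&': std::cout << "&amp;"; break;
                case '<': std::cout << "&lt;";  break;
                case '>': std::cout << "&gt;";  break;
                default:  std::cout << c;
            }
        }
        std::cout << "</pre></body></html>\n";
        return 0;
    }

Run it once per file, e.g. ./code2html < tip1.cpp > tip1.html, and add your navigation links around the generated pages.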
I'm becoming increasingly aware that there must be major differences in the way browsers interpret regular expressions.
As an example, a co-worker had written this regular expression, to validate that a file being uploaded would have a PDF extension:
^(([a-zA-Z]:)|(\\{2}\w+)\$?)(\\(\w[\w].*))(.pdf)$
This works in Internet Explorer, and in Google Chrome, but does NOT work in Firefox. The test always fails, even for an actual PDF. So I decided that the extra stuff was irrelevant and simplified it to:
^.+\.pdf$
and now it works fine in Firefox, as well as continuing to work in IE and Chrome.
Is this a quirk specific to asp:FileUpload and RegularExpressionValidator controls in ASP.NET, or is it simply due to different browsers supporting regex in different ways? Either way, what are some of the latter that you've encountered?
Regarding the actual question: the original regex requires the value to start with a drive letter or UNC device name. It's quite possible that Firefox simply doesn't include that with the filename. Note also that, if you have any intention of being cross-platform, that regex would fail on any non-Windows system, regardless of browser, as those systems don't use drive letters or UNC paths. Your simplified regex ("accept anything, so long as it ends with .pdf") is about as good a filename check as you're going to get.
However, Jonathan's comment to the original question cannot be overemphasized. Never, ever, ever trust the filename as an adequate means of determining its contents. Or the MIME type, for that matter. The client software talking to your web server (which might not even be a browser) can lie to you about anything and you'll never know unless you verify it. In this case, that means feeding the received file into some code that understands the PDF format and having that code tell you whether it's a valid PDF or not. Checking the filename may help to prevent people from trying to submit obviously incorrect files, but it is not a sufficient test of the files that are received.
(I realize that you may know about the need for additional validation, but the next person who has a similar situation and finds your question may not.)
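To make that concrete: even a cheap first-pass check of the file's magic bytes catches obviously mislabeled uploads, though real validation would hand the bytes to an actual PDF parser. A sketch in C++ (my illustration, not tied to any particular framework):

    // First-pass content check: a PDF file begins with the "%PDF-" marker.
    // This only filters out obvious non-PDFs; it does not prove validity.
    #include <fstream>
    #include <string>

    bool looks_like_pdf(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        char header[5] = {};
        in.read(header, 5);
        return in.gcount() == 5 && std::string(header, 5) == "%PDF-";
    }

"%PDF-" as the leading marker is part of the PDF specification, so a file that fails this test cannot be a well-formed PDF, while one that passes still needs deeper checking.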
As far as I know, Firefox doesn't give you the full path of an upload, so the interpretation of the regular expression seems irrelevant in this case. I have yet to see any difference between modern browsers in regular expression execution.
If you're using JavaScript, not enclosing the regex in slashes causes an error in Firefox.
Try doing var regex = /^(([a-zA-Z]:)|(\\{2}\w+)\$?)(\\(\w[\w].*))(.pdf)$/;
As Dave mentioned, Firefox does not give the path, only the file name. Also as he mentioned, the original regex doesn't account for differences between operating systems. I think the best check you could do is whether the file name ends with .pdf. Note that this doesn't ensure it's a valid PDF, just that the name ends with .pdf; depending on your needs, you may want to verify that it's actually a PDF by checking the content.
I have not noticed a difference between browsers with regard to pattern syntax. However, I have noticed differences between C# and JavaScript, as C#'s implementation supports features (such as lookbehind) that JavaScript's does not.
I believe JavaScript REs are defined by the ECMA standard, and I doubt there are many differences between JS interpreters; I haven't found any in my programs, or seen any mentioned in an article.
Your message is actually a bit confusing, since you throw ASP.NET stuff in there. I don't see how you conclude it is the browser's fault when you talk about server-side technology or generated code. Actually, we don't even know whether you are talking about JS in the browser, validation of the upload field (which you can no longer do, at least in a simple way, with FF3), or the server side (neither FF nor Opera nor Safari uploads the full path of the uploaded file; I am surprised to learn that Chrome does, like IE...).