I want to build the following back-end service:
For each call to the service, spawn a web browser that loads a webpage (including Flash) and returns a screenshot of the page to the caller at intervals (e.g., every 3 seconds) until the caller disconnects. This needs to scale to many callers (thousands, perhaps), each of which needs its own browser session.
When I decided I needed to build this program, I was surprised that I had basically no idea how I could do it.
On Stack Overflow, I found the following link, which looks promising: http://www.genuitec.com/about/labs.html
Any other ideas?
You can use XULRunner (the Mozilla engine) on the server side. I doubt, though, that this solution will scale.
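For a sense of the per-caller flow, here is a minimal sketch in Node.js. It assumes a hypothetical "./browser-harness" wrapper (around XULRunner or any other engine) that loads a URL and keeps writing a fresh PNG of the page to the path you give it; the scaling problem is exactly the thousands of such processes:

var http = require("http");
var fs = require("fs");
var spawn = require("child_process").spawn;

http.createServer(function (req, res) {
    // one browser process per session; the harness and its arguments are hypothetical
    var shot = "/tmp/shot-" + process.pid + "-" + Date.now() + ".png";
    var browser = spawn("./browser-harness", [req.url.slice(1), shot]);

    // multipart/x-mixed-replace lets one response carry a new image
    // every few seconds, MJPEG-style
    res.writeHead(200, { "Content-Type": "multipart/x-mixed-replace; boundary=frame" });

    var timer = setInterval(function () {
        fs.readFile(shot, function (err, png) {
            if (err) return; // no frame written yet
            res.write("--frame\r\nContent-Type: image/png\r\n\r\n");
            res.write(png);
            res.write("\r\n");
        });
    }, 3000); // the 3-second interval from the question

    req.on("close", function () { // caller disconnected: tear the session down
        clearInterval(timer);
        browser.kill();
        fs.unlink(shot, function () {});
    });
}).listen(8080);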
I am writing an ASP.NET web page which calls an API to update my client's property website using XML data. The data from the API is real-time, so I would like to run the page every 10 minutes.
Clearly I don't want to load the page manually to keep my client's property website up to date. There is a lot of help on Stack Overflow and elsewhere on this type of question, but I have become a little overwhelmed by the options. I think that one way to go would be:
Windows Task Scheduler to fire every ten minutes (to trigger a VB.Net Service)
VB.Net Service (to run the web page)
My page runs.
That feels like overkill, and I haven't written a Windows service or used the Task Scheduler; it feels like there should be two steps, not three.
Now, if I do use a VB.Net service, it might be better to give more of the work to the service rather than put my script in a web page, but I am used to writing web pages!
I can't help feeling that if I just kept the page open in a browser somewhere, I could easily use JavaScript to reload it every 10 minutes, but that means making sure it stays open in a browser. A bad solution, I think...
What I need is an overview of my options to make an informed decision and if it means learning then fine. Thanks in advance!
You can use JavaScript/jQuery to call a page or web method continuously on a timer:
var x = 10; // your time interval in minutes

setInterval(function () {
    // call your page or web method, e.g. with jQuery:
    $.get("/your-page.aspx"); // placeholder URL
}, 1000 * 60 * x);
In my opinion the best approach would be to create a Windows service and have the service call the web page. A Windows service is more reliable than the Task Scheduler, because scheduled tasks can overlap if the previous run did not finish. A service also gives you more control over error handling and logging.
Get started with this link:
http://code.msdn.microsoft.com/windowsdesktop/CSWindowsService-9f2f568e
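Whichever host you pick, the actual work is small. As an illustration only (not the linked sample), here is the polling loop sketched in Node.js, with a placeholder URL standing in for your page:

var http = require("http");

function callPage() {
    // placeholder URL; substitute the page that pulls the API data
    http.get("http://example.com/update-properties.aspx", function (res) {
        console.log(new Date().toISOString() + " status: " + res.statusCode);
        res.resume(); // drain the body so the socket is released
    }).on("error", function (err) {
        console.error("request failed: " + err.message); // log and keep going
    });
}

callPage();                            // once at startup
setInterval(callPage, 10 * 60 * 1000); // then every 10 minutes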
I am developing a project for the end of my studies. The project basically acts as a server; it is cross-platform and developed in C++.
I was wondering whether it is possible to make a web interface that could act, for instance, like the listener design pattern to log what the program does. This would be cross-platform, which is ideal since the program is supposed to run on a remote server.
My question is: is there any web technology that could let me update my web page live when the program logs something? I know this is something unusual and I'm not an expert in web technologies; that's why I am asking.
Would Erlang do it?
Thanks for your help
EDIT: To give a more concrete example, I would like to be able to follow the execution of my program live and see its logs appear on the page. The idea would be to use a web page the way I would use WPF on Windows or GTK on Linux, for instance. As someone said, it would be some kind of monitor for my application.
It's much easier than you might think. A web server basically receives a request as a path name and returns a page. If you set it up correctly, it will invoke a program to create the content; this is called CGI.
If you can do it without live updating, it's super easy: just refresh the page and your program gets called again.
If you want live updating, you'll need to do a little more. The easiest way is with a little lightweight JavaScript; the magic word here is AJAX. There are a number of tutorials online for both of these; just google.
The main thing is to start with a very, very simple example and add to it. JavaScript in particular is a little peculiar; follow the tutorials, though, and you'll get it.
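For instance, a CGI program can be as small as this sketch (Node.js here, but any language works; the log path is a placeholder). The web server runs it on each request, and whatever it prints goes back to the browser:

#!/usr/bin/env node
var fs = require("fs");

var log;
try {
    log = fs.readFileSync("/var/log/myapp.log", "utf8"); // placeholder path
} catch (e) {
    log = "no log output yet";
}

// CGI output is just headers, a blank line, then the body
process.stdout.write("Content-Type: text/plain\r\n\r\n");
process.stdout.write(log);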
You can embed a web server such as Mongoose (http://code.google.com/p/mongoose) and poll it using XHR, or better, use WebSockets.
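On the page side, the XHR polling loop is short. A sketch, assuming a hypothetical /log?since=N endpoint on the embedded server that returns any new lines as a JSON array:

var lastLine = 0; // how many log lines we have already shown

setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/log?since=" + lastLine);
    xhr.onload = function () {
        JSON.parse(xhr.responseText).forEach(function (line) {
            var p = document.createElement("p");
            p.textContent = line; // append each new log line to the page
            document.body.appendChild(p);
            lastLine++;
        });
    };
    xhr.send();
}, 2000); // poll every 2 seconds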
Or use a monitoring solution such as Nagios (Nagios Core is free).
I'm putting together a website that will track user-defined events with time limits. Every user would be free to create events, and when the time limit expired, the server would need to take some action based on the outcome of the event. The specific component I'm struggling with is the time-keeping: think like eBay's auction clock -- it's set to expire at a certain time, clearly runs server-side, and takes some action when the time runs out. Searches for a "server side timer," unfortunately, just bring back results for a timer that gets the time from the server instead of the client. :(
The most obvious solution is to run a script on the server, some program that would watch all the clocks and take action when any of them expired. Tragically, I'll be using free web hosting, and sincerely doubt that I'll be able to find someone who'll let me run arbitrary stuff on their servers.
The solutions that I've looked into:
Major concept option 1: persuade each user's browser to run the necessary timers (trivial javascript), and when the timers expire, take necessary action. The problem with this approach is obvious: there could be hundreds, if not thousands, of simultaneous expiring timers (they'll tend to expire in clusters), and the worst case is that every possible user could be viewing their timer expire. That's a server overload waiting to happen at the worst possible instant.
Major concept option 2: have one really trusted browser, say, a user logged in to the website as "cron" which could run all of the timers at once. The action would all happen in that browser's javascript, and would work great, as long as that browser never crashed, that machine never failed, and that internet connection never went down.
As you can see, I feel like I'm barking up the wrong forest on this problem. Some other ideas that have presented themselves:
AJAX: I'm not seeing anything here that will do quite what I need. It's all browser-run stuff, nothing like a server-side process that could run independent of the user's browser.
PHP: Runs neatly on the server, but only in response to client requests. I'm not seeing any clean way to make PHP fork off a process and run a timer independent of the user's browser.
JS: same problems as PHP, but easier to read. ;)
Ruby: There may be some multi-threading with Ruby, but it isn't readily apparent to me. Would it be possible to have each user's browser check to see if a timer process was running for their event, and spawn a new server-side ruby process if it wasn't?
I'm wide open for ideas -- I've started playing with concepts in JS and PHP, but I'm not tied to any language, particularly. The only constraint, really, is that I won't own the server that I'm running the site on, so I can't just run a neat little local process that does what I need it to do. :(
Any thoughts? Thanks in advance,
Dan
ASP.NET has multi-threading. You can have a static variable to collect the event data, and use a thread to do whatever is needed when the time comes. Afterwards you can empty the static variable so it's ready for future use.
http://leedale.wordpress.com/2007/07/22/multithreading-with-aspnet-20/
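The linked article covers the ASP.NET specifics; as a language-neutral illustration of the same idea, here it is sketched in JavaScript (Node.js), with purely illustrative names. Note that in-memory timers vanish if the process restarts, so you would persist the events somewhere as well:

var pendingEvents = {}; // eventId -> timer handle

function scheduleEvent(eventId, endsAtMs, onExpire) {
    var delay = Math.max(0, endsAtMs - Date.now());
    pendingEvents[eventId] = setTimeout(function () {
        delete pendingEvents[eventId]; // empty the slot for future use
        onExpire(eventId);             // e.g. settle the auction
    }, delay);
}

scheduleEvent("auction-17", Date.now() + 60 * 1000, function (id) {
    console.log(id + " expired; take your server-side action here");
});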
You might want to take a look at the Quartz scheduler for Java which also has a .NET version. With a friendly open source license (Apache 2.0) this is probably a very good starting point.
If you can control cron jobs, which at least I could on HostPapa's shared hosting, you could run a PHP file every minute (cron's finest granularity) that checks the timers and takes action based on them.
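What that file does is simple: pull the due timers and act on each. A sketch (Node.js for illustration; the PHP version is analogous, and the storage layer here is a stub):

function fetchDueEvents(now) {
    // e.g. SELECT * FROM events WHERE ends_at <= now AND handled = 0
    return []; // stub; query your database here
}

function handleEvent(event) {
    console.log("event " + event.id + " expired; taking action");
    // ... your outcome logic, then mark the event handled ...
}

fetchDueEvents(Date.now()).forEach(handleEvent);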
I would suggest AJAX anyway. On a game server we emulated "server connects to client" via an AJAX request to the server with no timeout (an asynchronous connection). Basically, you create one extra connection for each client that hangs on the server and waits for the server to take a self-invoked action. After the action is done, you immediately start a new hanging connection, so one is hanging at all times (and the server can talk to your client whenever it wants). You can send JavaScript code from the server that decides what happens next. On the server side you can count the clients holding these hanging connections as the valid ones, and of course run your timers on the server.
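A minimal sketch of both halves (a Node.js server and a browser client); the event that triggers the broadcast is stubbed out as a timer:

// server: hold each /wait request open until there is something to say
var http = require("http");
var waiting = []; // responses currently hanging

http.createServer(function (req, res) {
    if (req.url === "/wait") {
        waiting.push(res); // park the connection; answered later
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);

// stand-in for "a timer expired": notify every hanging client
setInterval(function () {
    var toNotify = waiting;
    waiting = [];
    toNotify.forEach(function (res) {
        res.end(JSON.stringify({ message: "an event expired" }));
    });
}, 10000);

// client (browser side): keep exactly one hanging request open at all times
function wait() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/wait");
    xhr.onload = function () {
        console.log("server says:", xhr.responseText);
        wait(); // reconnect immediately so one connection always hangs
    };
    xhr.onerror = function () { setTimeout(wait, 1000); }; // back off on error
    xhr.send();
}
wait();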
While building this web service and the app that calls it, we have noticed that the first call to the web service each day is extremely slow, and on some days it even times out. Every call after that works great, though. Can anybody shed light on why this might be and how we can get rid of this pain?
Thanks in advance!
If it's an ASP.NET web service, it may be the CLR initializing and loading and verifying the assemblies for the first time. You may want to consider pre-compilation.
Agree with the other answers on caching, initialization, etc. As far as a workaround, one possibility may be to set up some sort of daily task (SQL Server job, Windows service, something else?) to simulate a hit to the service each day, so that your users don't experience this first slow request.
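Such a warm-up hit can be a few lines run by any scheduler. A sketch in Node.js with a placeholder URL, timing the call so you can also see how bad the first request really is:

var http = require("http");

var started = Date.now();
http.get("http://example.com/MyService.asmx", function (res) { // placeholder URL
    console.log("warm-up: HTTP " + res.statusCode + " in " +
                (Date.now() - started) + " ms");
    res.resume(); // drain the body
}).on("error", function (err) {
    console.error("warm-up failed: " + err.message);
});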
If it is an ASP.NET web service, then you might want to check the settings of the application pool the web service is running in, especially the idle timeout, which defaults to 20 minutes in IIS 7.
Configuring IIS7 idle-timeout
Even if it is not an ASP.NET web service, other web servers will have equivalent configuration settings you have to tweak to keep your web service alive overnight.
Can you duplicate the same behavior on your database? It could just be the DB needing to optimise the query for the first run (maybe the parameter is today's date?).
Are there a lot of static constructors or set up code in the Global.asax class? Because IIS recycles worker processes periodically, the start up code may be running again.
The rule for optimization is: don't guess. Put in profiling to find out exactly what is slow, and then work to make that faster. Everything already posted provides excellent tips on where to start looking for slowness.
I develop an industrial client/server application (C++) with strong real-time requirements.
I feel it is time to change the look of the client interface, which is developed in MFC, but I am wondering which would be the right choice.
If I go for a web client, is there any way to exchange data between C++ and JavaScript other than AJAX <-> web service <-> COM?
Requirements for the web client are: quick status refreshes, user commands, tables.
My team had to make that same decision a few months ago...
The cool thing about making it a web application would be that it would be very easy to modify later on. Even the user of the interface (with a little know-how) could modify it to suit his/her needs. Custom software becomes just that much easier.
We went with a web interface, and AJAX seems the way to go; it was quite responsive.
On the other hand, depending on how strong your real-time requirements are, it might prove difficult. We had the challenge of plotting real-time data through a browser, and we ended up going with a Firefox plugin to draw the plot. If you're simply trying to display real-time text data, it shouldn't be as big an issue.
Run some tests for your specific application and see what it looks like.
Something else to consider: if you are having a web page be an interface to your server, keep in mind you will need to figure out a way to update one client when another changes the state of the server, if you plan on allowing multiple interfaces to your server.
I usually build my applications in two parts:
Have the real heavy-duty application CLI-only. The protocol used is usually text-based, composed of requests and answers.
Wrap a GUI around it as another process that talks to the CLI back-end.
The web interface is then just another GUI to wrap around it. It is also much easier to wrap a REST/JSON-based API around the CLI interface (just translate the messages automatically).
Debugging is also quite easy, since you can just dump the requests between the two components and reproduce bugs much more easily.
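A sketch of such a translator in JavaScript (Node.js); "./backend" and the one-line request/answer protocol are placeholders for whatever your application actually speaks:

var http = require("http");
var spawn = require("child_process").spawn;

var backend = spawn("./backend"); // the heavy-duty CLI process
var pending = []; // callbacks waiting for an answer line, in order

backend.stdout.on("data", function (chunk) {
    chunk.toString().split("\n").filter(Boolean).forEach(function (line) {
        var cb = pending.shift();
        if (cb) cb(line); // assumes one answer line per request, FIFO
    });
});

http.createServer(function (req, res) {
    // translate the HTTP path into a text request, e.g. GET /status -> "status"
    pending.push(function (answer) {
        res.setHeader("Content-Type", "application/json");
        res.end(JSON.stringify({ answer: answer }));
    });
    backend.stdin.write(req.url.slice(1) + "\n");
}).listen(8080);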
Write an HTTP server into your server to handle the AJAX requests. If you don't want to serve files, create your server on a non-standard port (e.g. 8081) and use a regular web server for the actual web page delivery. Now have your AJAX engine communicate with the server on the Bizarro port instead of port 80.
But it's not that hard to write the file server part, also. If you do that, you also get to generate web pages on-the-fly with your data pre-filled, if you want.
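A sketch of the data-only side (Node.js here for brevity; in a C++ server you would use your own HTTP code or an embedded library). The regular web server keeps serving the pages on port 80, and they poll this one with AJAX. Note the CORS header: a different port counts as a different origin to the browser:

var http = require("http");

http.createServer(function (req, res) {
    res.setHeader("Content-Type", "application/json");
    res.setHeader("Access-Control-Allow-Origin", "*"); // page is served from port 80
    res.end(JSON.stringify({ status: "running", uptime: process.uptime() }));
}).listen(8081); // the Bizarro port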
Google Desktop Search does this now. When I search my desktop for 'foobar', the URL that opens is this:
http://127.0.0.1:4664/search?q=foobar&flags=68&num=10
In this case, 4664 is the Bizarro port. (Google Desktop serves all the data here; it only uses the Bizarro port to avoid conflicts with any web server I might be running.)
You may want to consider where your data lives. If your application feeds a back-end database, you could write a web app leaving your C++ code intact: the web application would be independent, offering pages to web users and talking directly to the database. In this case you have as many options as you have indicated, and more.