Should I use MSMQ or IIS web services?

I have a web site that exposes a web service to all my desktop clients.
Randomly, these clients will invoke the web service, which in turn will add a message containing the JPEG (as a byte array) to MSMQ.
I have a service application that reads from this queue, performs an enhancement on the JPEG, and saves it to the hard drive.
The number of clients uploading at any one time is unpredictable.
I chose this method because I do not want to put any strain on IIS. The enhancement my service application performs is not especially heavy, but it exists nevertheless.
However, after realizing that my service application had stopped for some time and required restarting, I noticed RAM usage leap as it cleared the backlog. I have since corrected this, and the service is now coded to restart automatically on failure, but I surmised that a backlog could also build up at busy times, which would again give higher RAM usage.
Now, should I do the processing entirely within my web service and then save to the hard drive, or am I correct in using MSMQ?
I am using C# and ASP.NET.
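
For reference, the pattern described above can be sketched as below. This is a minimal sketch, assuming a local private queue; the queue path and the enhanceAndSave callback are placeholder names, not anything from the question. The point of interest is the consumer: by receiving one message at a time, a backlog accumulates on disk inside the queue rather than in the service's RAM.

    using System;
    using System.Messaging;

    static class ImageQueue
    {
        // Placeholder queue path; substitute your real private queue name.
        const string QueuePath = @".\private$\imageEnhancement";

        // Called from the web service: enqueue the uploaded JPEG bytes.
        public static void Enqueue(byte[] jpegBytes)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            using (var message = new Message())
            {
                message.BodyStream.Write(jpegBytes, 0, jpegBytes.Length);
                message.Recoverable = true; // persist to disk so a restart loses nothing
                queue.Send(message);
            }
        }

        // Called in a loop by the Windows service: only one message is held
        // in memory at a time, so a backlog grows on disk, not in RAM.
        public static void ProcessNext(Action<byte[]> enhanceAndSave)
        {
            using (var queue = new MessageQueue(QueuePath))
            using (Message message = queue.Receive()) // blocks until one arrives
            {
                var jpeg = new byte[message.BodyStream.Length];
                int offset = 0;
                while (offset < jpeg.Length)
                    offset += message.BodyStream.Read(jpeg, offset, jpeg.Length - offset);
                enhanceAndSave(jpeg);
            }
        }
    }

Processed this way, a busy period only lengthens the queue; the service's memory footprint stays bounded by a single image.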


How to push data to a running Windows Service

My question is: is there a good way to push an integer value to a running Windows service without restarting it, without writing to disk, and without having it poll some database?
Here's my scenario and a few thoughts:
I need to pass data to a Windows service in real time; I do not want there to be a delay. All I need to give it is an integer, and it can do the rest. My predecessor had it set up to poll a database every 10 minutes, but that is no longer an option: I need the response time to be less than a second. I suppose technically I could just reduce the poll interval to 0.5 seconds, but I'm thinking that would be bad for the database server. I know you can pass data to a Windows service when it starts, but restarting this service isn't an option because of what it's doing.
I would love to use a web service and just call a web method to pass in the data, but the tasks require elevated (admin) permissions and almost everything involves file system access, so my understanding is that a web service isn't really the best option either.
I've thought of using a hybrid scenario where I run a web service and a Windows service on the same machine, but then I still have the problem of how to pass the integer from the web service to the Windows service. I could technically use a FileSystemWatcher, but I really don't want to create a file just to pass an integer. I thought maybe the web service could write the value to LocalDB and have the Windows service poll LocalDB every 0.5 seconds, but I'm not sure how much that polling would affect the overall performance of other things. I really want a way to push data to the Windows service rather than having the service poll somewhere else.
The project I work on has a front-end UI that communicates with a Windows Service running on the same system. In the past, I used the Windows Communication Foundation (WCF), but found this to be heavy-weight for what I really needed. I am now using a TCP socket over the localhost address (127.0.0.1) to exchange data between the UI and the service.
Based on your description, the web service approach seems heavy-weight, kinda like the WCF approach we used to use. And, as you've noted, it has permissions issues. A simple application that pushes the integer to your service over a socket would be straightforward in my mind.
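
A minimal sketch of that socket approach, assuming both processes run on the same machine; the port number and method names are illustrative only:

    using System;
    using System.Net;
    using System.Net.Sockets;

    static class IntegerChannel
    {
        const int Port = 9000; // arbitrary loopback port; pick an unused one

        // Runs inside the Windows service, e.g. on a background thread
        // started from OnStart.
        public static void Listen(Action<int> onValue)
        {
            var listener = new TcpListener(IPAddress.Loopback, Port);
            listener.Start();
            while (true)
            {
                using (TcpClient client = listener.AcceptTcpClient())
                using (NetworkStream stream = client.GetStream())
                {
                    var buffer = new byte[4];
                    int read = 0;
                    while (read < 4) // read exactly one 4-byte integer
                        read += stream.Read(buffer, read, 4 - read);
                    onValue(BitConverter.ToInt32(buffer, 0));
                }
            }
        }

        // Called from the web service (or any local process) to push a value.
        public static void Send(int value)
        {
            using (var client = new TcpClient("127.0.0.1", Port))
            using (NetworkStream stream = client.GetStream())
            {
                byte[] payload = BitConverter.GetBytes(value);
                stream.Write(payload, 0, payload.Length);
            }
        }
    }

With this shape, any local process can call Send(42) and the service receives it with sub-second latency, no polling and no files involved.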
If WCF is of interest, here are a couple of links that might help:
Creating a user interface for monitoring and interacting with a running windows service
GUI and windows service communication

Appropriate architecture for event logging in a game

I'm trying to modify a game engine so it records events (like key presses) and stores them in a MySQL database on a remote server. The game engine is written in C++, and I currently have the following straightforward architecture, using mysql++ to directly INSERT records into the appropriate databases:
Unfortunately, there is a very large overhead when connecting to the MySQL server, and the game stops for a significant amount of time. Pushing a batch of X seconds' worth of events to the server causes a significant delay in gameplay (60s worth of events can take 12s to synchronise). There are also apparently security concerns with leaving the MySQL port publicly accessible.
I was considering an alternative option, instead sending commands to the server, which can interact with the database in its own time:
Here the game would only send the necessary information (e.g. the table to update and the data to insert). I'm not sure whether the speed increase would be sufficient, or what system would be appropriate for managing the commands sent from the game.
Someone else suggested Log4j, but obviously I need a C++ solution. Is there an appropriate existing framework for accomplishing what I want?
Most applications gathering user-interface interaction data (in your case keystrokes) put it into a local file of some sort.
Then at an appropriate time (for example at the end of the game, or the beginning of another game), they POST that file, often in compressed form, to a publicly accessible web server. The software on the web server decompresses the data and loads it into the analytics system (the MySQL server in your case) for processing.
So, I suggest the following.
Stop making your MySQL server's port available to people you don't know and trust.
Get your game to gather keystrokes locally somehow.
Get it to upload that data in big batches when your game is not in realtime mode.
Write a web service to receive and interpret these files (a sketch of this follows).
That way you'll build a more secure analytics system and a more responsive game.
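
The receiving side can be very small. Here is a minimal sketch in C# (the game itself is C++, but the upload endpoint can be any server technology); the handler name and storage path are assumptions:

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Web;

    public class EventUploadHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Persist the decompressed batch to a drop folder; a separate job
            // can parse it and INSERT the rows into MySQL, so the game client
            // never talks to the database directly.
            string dropPath = context.Server.MapPath(
                "~/App_Data/" + Guid.NewGuid() + ".log");

            using (var gzip = new GZipStream(context.Request.InputStream,
                                             CompressionMode.Decompress))
            using (var file = File.Create(dropPath))
            {
                gzip.CopyTo(file);
            }
            context.Response.StatusCode = 200;
        }

        public bool IsReusable { get { return true; } }
    }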

2008 R2 Domain EvtSubscribe delays

I use the Windows EvtSubscribe API in my program, which runs as a service, generally on Windows Server 2008 R2 Domain Controllers. It is registered for Kerberos logon events, and its purpose is to provide single sign-on for my application on the network.
I grab the username/IP from the logon event and use them to pre-authenticate an IP address. This has worked well at a large number of sites until it was used recently at an extremely large one (60,000 users logging on and off throughout the day). The Domain Controller isn't under extremely high load as far as I can tell from Process Monitor, but the events are not being passed on to my application right away; they are delayed by anywhere from 20 minutes to an hour.
I use the PUSH method as described in the API documentation. The code is identical to what runs at the other sites.
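
For reference, the managed counterpart of that push subscription looks like the sketch below, using .NET's EventLogWatcher over the same event log infrastructure; the 4768 event ID (Kerberos TGT requested) and the handler body are assumptions about what the program listens for:

    using System;
    using System.Diagnostics.Eventing.Reader;

    class Program
    {
        static void Main()
        {
            // XPath filter: 4768 = "A Kerberos authentication ticket
            // (TGT) was requested" in the Security log.
            var query = new EventLogQuery("Security", PathType.LogName,
                                          "*[System[(EventID=4768)]]");
            using (var watcher = new EventLogWatcher(query))
            {
                watcher.EventRecordWritten += (sender, e) =>
                {
                    if (e.EventRecord != null)
                    {
                        // Pull the account name / client address out of the
                        // record's properties and pre-authenticate the IP here.
                    }
                };
                watcher.Enabled = true; // starts the push subscription
                Console.ReadLine();     // keep the demo process alive
            }
        }
    }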
In Event Viewer, looking at the Security log, the logon events come in immediately when a user logs on to the domain. However, the event is not pushed to my application until much, much later.
I have never seen this occur at any of the other sites my application has been installed on, and I'm wondering if it's a configuration issue on the servers themselves. The site with the delays has 4 clustered domain controllers in total, with my application running and reporting on each. All 4 periodically experience extended delays in receiving the events.
Has anyone else come across something similar or have any ideas what could be at play?
I have tried replicating it using VMs and ADTest to generate load, without much luck.

Tips to reduce message traffic and size in order to lower the download amount

I have a mobile application integrated with a server, where users can see the tasks assigned to them and close a task request after the work is done. In this project timing is very important: at least once a minute the program should check whether a task has been assigned. Moreover, the mobile client should also check the server for changes to tasks it has already downloaded.
Because of this constant polling, the download amount is high. How can we reduce it? Should we use another technology for server communication (we currently use an ASP.NET Web Service Application)?
Thanks in advance.
Use JSON instead of XML on the server.
Use selective sync instead of syncing the complete task list; a full sync becomes slow as the number of tasks grows (see the sketch after this list).
Track task changes locally on the mobile client: mark entities dirty, and then upload only the marked tasks to the cloud/server.
As SLaks suggested, use push instead of pull; it will save the mobile's battery and the user's data plan.
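
A minimal sketch of the selective-sync idea, in C#; the DTO fields and method name are assumptions, not an existing API. The client keeps a last-sync timestamp and asks only for what changed since then:

    using System;
    using System.Collections.Generic;

    public class TaskDto
    {
        public int Id;
        public string Title;
        public bool Closed;
        public DateTime ModifiedUtc;
    }

    public interface ITaskSyncService
    {
        // Server returns only tasks with ModifiedUtc > sinceUtc.
        // An empty list means "nothing changed" -- a tiny response,
        // which is the common case when polling once a minute.
        List<TaskDto> GetTasksChangedSince(DateTime sinceUtc);
    }

    // Client-side loop (runs once a minute):
    //   var changes = service.GetTasksChangedSince(lastSyncUtc);
    //   ApplyLocally(changes);
    //   foreach (var t in changes)
    //       if (t.ModifiedUtc > lastSyncUtc) lastSyncUtc = t.ModifiedUtc;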
Here is what can help you:
Microsoft Sync Framework.
http://msdn.microsoft.com/en-us/sync/bb887608.aspx
http://weblogs.aspnet05.orcsweb.com/sbehera/archive/2009/04/10/sync-framework-for-windows-mobile-devices-amp-some-use-full-links.aspx

Ideal way/architecture to deliver large data over Web Services

We are trying to design 6 web services, which will serve another client component. The client component requires data from the web services we are implementing.
Now, the problem is that we are not implementing a single Web Service: there is one Web Service which the client component hits, and this initiates a series of five more Web Services which gather data from their respective data stores and finally provide it back to the original Web Service, which then delivers the data back to the client component.
So, if the requested data becomes huge, this will be a serious problem for our internal communication channel.
So, what do you suggest? What can be done to avoid overloading the communication channel between the internal Web Services while still delivering the data to the client component?
Update 1
Using 5 Web Services, where each one knows nothing about the others except the next in the chain, is a business requirement. Actually, five companies' "small services" are being integrated.
We use Java and Axis2.
We've had a similar problem. Apart from trying to avoid it (e.g. for internal communication, go directly to the DB instead of through a web service), you can mitigate it by at least not performing the five or so tasks in series. Spawn threads to collect them all in parallel and process them at the end to reduce latency, except where they might contend for the same resource and bottleneck (a sketch follows).
But before I'd do anything, load test it and see if it is even an issue, and get some baseline stats so you can see what improvement each change makes. Also, sometimes you might be better off tweaking network settings or the actual network rather than trying to optimise the code; but again, test and see.
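
A minimal sketch of that fan-out, in C# for brevity (the question's stack is Java/Axis2, where the same shape works with an ExecutorService and Futures); the delegates stand in for the five downstream calls:

    using System;
    using System.Threading.Tasks;

    public class Aggregator
    {
        // Each delegate stands in for one downstream web service call.
        private readonly Func<Task<string>>[] downstreamCalls;

        public Aggregator(params Func<Task<string>>[] downstreamCalls)
        {
            this.downstreamCalls = downstreamCalls;
        }

        public async Task<string[]> GatherAllAsync()
        {
            // Start every call before awaiting any of them, so total latency
            // is roughly the slowest single call rather than the sum of five.
            Task<string>[] inFlight =
                Array.ConvertAll(downstreamCalls, call => call());
            return await Task.WhenAll(inFlight);
        }
    }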
Put all the data in a temporary compressed file and give back the FTP URL of the file.
The client fetches the big data chunk, uncompresses it, and reads it (perhaps with some authentication mechanism for the FTP server).
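
A minimal sketch of that staging step, assuming a directory that the FTP/HTTP server already serves; the paths and URL are placeholders:

    using System;
    using System.IO;
    using System.IO.Compression;

    static class ResultStager
    {
        // Placeholder directory; point it at whatever your FTP server serves.
        const string DownloadDir = @"C:\downloads";

        public static string StageCompressedResult(byte[] payload)
        {
            string fileName = Guid.NewGuid() + ".gz";

            using (var file = File.Create(Path.Combine(DownloadDir, fileName)))
            using (var gzip = new GZipStream(file, CompressionMode.Compress))
            {
                gzip.Write(payload, 0, payload.Length); // compress to disk
            }
            // The client downloads this URL, decompresses, and reads the data.
            return "ftp://example.com/downloads/" + fileName;
        }
    }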