Can someone suggest best practices for handling concurrency in an enterprise-level .NET Core web application? The front end is implemented in jQuery.
I need suggestions on the following points:
- Multiple client requests hitting the database at the same time
- What concurrency-handling mechanisms MS SQL Server provides
Thanks in advance,
Related
Why did we start using JavaScript on the server side? Which is the best option for server-side JavaScript, and why, e.g. Node.js?
By using JavaScript on both client and server, you reduce the number of concepts required for web development, gain the ability to reuse code between client and server, and reduce the need for context switching.
Node.js uses an event-driven architecture, which integrates very well with JavaScript (callbacks!). Using an event loop, Node handles requests asynchronously on a single thread rather than sequentially, avoiding locks. This makes it very fast and well suited to handling a very large number of concurrent requests, which is one of the main reasons so many developers have been excited to explore this technology.
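A minimal sketch of that event-loop model, using only Node's built-in http module (the 100 ms timer stands in for a database call; the port is arbitrary):

```typescript
// One thread, many in-flight requests: while one request "waits" on I/O,
// the event loop keeps accepting and serving other requests.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // setTimeout simulates a non-blocking I/O call (e.g. a DB query).
  setTimeout(() => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`handled ${req.url ?? "/"} without blocking the thread\n`);
  }, 100);
});

server.listen(3000, () => console.log("listening on :3000"));
```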
I have already used Faye with Ruby on Rails; it costs me almost nothing, because I run Faye on a separate server connected to my Rails app.
However, I have faced some problems: when a query takes too long on the Rails server, the Faye connection eventually fails and raises an exception.
Now I'm looking into ActionController::Live. Most implementations use Redis, which would be a bit costly for my startup, and I've realized I can't do publish/subscribe-style messaging with ActionController::Live alone.
My question: should I move over to ActionController::Live or stick with Faye? These are the things I want to accomplish:
- Updates after commenting / feed updates
- A notification system based on pub/sub, similar to Faye
- Exception handling
- Scalability: more users means more connections
I know that Faye uses the Bayeux protocol, whereas ActionController::Live uses SSE over HTTP.
Should I also consider Socket.IO or SockJS?
I have already read through some of the questions about this topic on here, like:
- Replace Faye with rails 4 server side events?
- Faye VS rails 4 streaming?
But I need more info:
Here are some notes on why I would stick with Faye, which might bring you closer to an answer to this question:
Browser compatibility
As you can read in the related Stack Overflow question, Faye has better browser compatibility.
Stability
Rails::Live functionality doesn't seem to be very stable yet, and there is currently active development on Rails SSE. As an example, it's quite likely that you will be affected by this issue.
Threading & blocking vs asynchronous non-blocking
Do you use multi-threading in your application? If you don't, I definitely wouldn't introduce it just for Rails::Live, since it opens you up to non-thread-safe gem issues and limits your choice of servers.
If you do have multi-threading, each client will keep a thread open to your application. If you run out of threads, your application becomes unresponsive/dead. Consider how many threads you need to cover peak times, with users keeping multiple browser tabs open, or even DoS attacks where someone opens a huge number of idle SSE/websocket connections to hit your maximum and take your app down. For example, with a pool of 25 threads, just 25 users each holding one idle SSE connection open are enough to exhaust it. If you set the maximum thread count high to support many idle connections, you open up the possibility of having that many non-idle threads, which brings its own problems. No SSE/websockets and no comet/long polling is much safer for blocking apps.

From what I understand, your setup runs Faye separately. The Faye server runs on Ruby EventMachine or Node.js, both of which are asynchronous and non-blocking and do not use a thread per open connection, so it can handle a huge number of concurrent connections without problems.
My opinion is that a normal blocking Rails web application, with a separate asynchronous non-blocking server for the connections that stay open (to pass messages and make the app live), gives you the best of both worlds. This is what you have with Rails + Faye.
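To illustrate that split, here is a minimal sketch of a standalone non-blocking pub/sub hub of the kind Faye provides, written in TypeScript on plain Node. The endpoint paths and port are made up, and Faye itself speaks Bayeux and does far more; this only shows why open connections are cheap on an event loop:

```typescript
// Browsers subscribe over SSE (GET /subscribe); the blocking Rails app
// publishes with a plain POST /publish. No thread is held per connection:
// each open subscriber is just an entry in an array on the event loop.
import { createServer, ServerResponse } from "node:http";

const subscribers: ServerResponse[] = [];

createServer((req, res) => {
  if (req.method === "GET" && req.url === "/subscribe") {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    });
    subscribers.push(res);
    // Drop the subscriber when the browser disconnects.
    req.on("close", () => {
      const i = subscribers.indexOf(res);
      if (i !== -1) subscribers.splice(i, 1);
    });
  } else if (req.method === "POST" && req.url === "/publish") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Fan the message out to every open SSE connection.
      for (const sub of subscribers) sub.write(`data: ${body}\n\n`);
      res.writeHead(204);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(9292, () => console.log("pub/sub hub on :9292"));
```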
Update: Action Cable was announced at RailsConf 2015. It runs non-blocking as described above, but it's an integrated, official Rails solution. Having a single framework with a massive community, plus an integrated non-blocking connection handler for websockets that you can run and configure separately while everything works out of the box, is a big advantage of Rails.
From the Action Cable README:
Action Cable is powered by a combination of EventMachine and threads. The framework plumbing needed for connection handling is handled in the EventMachine loop, but the actual channel, user-specified, work is handled in a normal Ruby thread. This means you can use all your regular Rails models with no problem, as long as you haven't committed any thread-safety sins.
To learn more you can read up on ActionCable & Underlying architecture.
I'm planning to develop a program for our university research that has to send lots of POST requests to different URLs. It must work as quickly as possible (we need to process about 100kk URLs, i.e. 100 million). What language should I use? (Currently I write C++, Delphi, and a bit of Perl.)
Also, I've heard that it's possible to write a multithreaded app in Perl using prefork that can process about 20-30k requests per minute. Is that true?
// Sorry for my bad English, but this seems to be the only place where I can get the right answer.
Andrew
The 20-30k per minute figure is completely arbitrary. If you run this on an 8-core machine with a beefy network connection, you could probably surpass it.
However, I don't think your choice of programming language or library is going to matter much here. Instead, you're going to run into the limit on concurrent TCP connections allowed by the machine, and the bandwidth of the link itself.
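Whichever language you choose, the client ends up with the same shape: a fixed-size pool of in-flight requests draining a shared queue. A rough sketch in TypeScript on Node 18+ (the URL list, payload, and concurrency cap are all placeholders):

```typescript
// Bounded-concurrency POST client: the cap on in-flight requests matters
// far more than the language. Requires Node 18+ for the global fetch.
const urls: string[] = Array.from(
  { length: 1000 },
  (_, i) => `http://example.com/endpoint/${i}` // placeholder URLs
);
const CONCURRENCY = 200; // tune to what your machine and link can sustain

async function worker(queue: string[]): Promise<void> {
  // Each worker pulls from the shared queue until it is empty.
  for (let url = queue.pop(); url !== undefined; url = queue.pop()) {
    try {
      await fetch(url, { method: "POST", body: "payload" });
    } catch (err) {
      console.error(`failed: ${url}`, err);
    }
  }
}

async function main(): Promise<void> {
  const queue = [...urls];
  // N workers share one queue, so at most N requests are in flight at once.
  await Promise.all(Array.from({ length: CONCURRENCY }, () => worker(queue)));
}

main();
```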
Webserver Stress Tool claims to be capable of simulating the HTTP requests generated by up to 10,000 simultaneous users, and it has an entry on Torry's site, so presumably it's written in Delphi or C++ Builder.
My suggestion:
You can write your own custom stress tool (an HTTP(S) client) in Delphi (it happens to be my favorite language, so I advocate it), using a lightweight HTTP(S) library such as the RTC SDK together with OmniThreadLibrary for multithreading.
See this page for a clue/hint.
Edit:
Excerpt from Demos\Readme_Demos.txt in RealThinClient_SDK331.zip
App Client, Server and ISAPI demos can be used to stress-test RTC components using Remote Functions with strong encryption, by opening hundreds of connections from each client and flooding the Server/ISAPI with requests.

App Client Demo is ideal for stress-testing RTC remote functions using multiple connections in multi-threaded mode, visually showing activity and stage for each connection in a live graph. Client can choose between "Proxy" and standard connection components, to see the difference in bandwidth usage and distribution.
I have heard Erlang is pretty good for such applications, as it can spawn many lightweight processes very quickly and efficiently. But I think Python would be fine too; just use the subprocess (or multiprocessing) module to spawn multiple worker processes.
After all, you are limited in how many can run at the same time by how many processors your machine has. The choice of language may also matter less than what you do with the data downloaded from these URLs, since that processing may be more expensive than the cost of spawning.
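The spawn-and-shard pattern looks much the same in any language. Here it is sketched in TypeScript/Node to match the other examples in this thread; fetch_worker.js and urls.txt are hypothetical (the worker would read the file and POST to every URL in its shard):

```typescript
// Spawn one worker process per CPU core; each worker handles the URLs
// whose line number satisfies (line % cores === shard).
import { spawn } from "node:child_process";
import os from "node:os";

const cores = os.cpus().length;

for (let shard = 0; shard < cores; shard++) {
  // fetch_worker.js (hypothetical) takes: <url file> <shard index> <shard count>
  const child = spawn(
    "node",
    ["fetch_worker.js", "urls.txt", String(shard), String(cores)],
    { stdio: "inherit" }
  );
  child.on("exit", (code) => console.log(`worker ${shard} exited with code ${code}`));
}
```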
I am developing a turn-based multiplayer game with Flex and BlazeDS.
The problem is that I've read BlazeDS can handle only hundreds of concurrent users, though this can be increased by using an NIO server such as Jetty 7 with Servlet 3.0.
Does Tomcat 7 support NIO? And can I increase the concurrent user count to a few thousand by using Tomcat 7 with BlazeDS?
Any clue or help will be appreciated.
Thank you.
Do not worry yet about performance. If your game will be successful you will be able to afford the better technical solution. If not, it will not matter if you can handle 1000 or 1000000 requests.
However, related to your question: you may be able to increase the number of concurrent users through server-related tuning (for example, reducing the per-thread stack size or increasing the thread-pool size).
There are a couple of servers implementing Servlet 3.0 with NIO (and yes, Tomcat 7 does support NIO, via its Http11NioProtocol connector), but you will have to write your own BlazeDS NIO endpoint, so it does not work out of the box.
Edit:
Using the NIO Jetty connector can be a good idea, but the first thing to do is build and test a valid performance scenario. For example, if you plan to support 10,000 connected users and push 1 message per second, write a stress test for exactly that, roughly like the sketch below. After that, you can experiment with various connectors and configurations.
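A rough sketch of such a scenario driver (TypeScript on Node 18+; the endpoint URL and payload are placeholders, and a real BlazeDS test would speak AMF rather than plain HTTP):

```typescript
// Rough stress-scenario driver: N simulated clients, each sending 1 msg/sec.
const CLIENTS = 1000; // scale toward your 10,000 target gradually
const ENDPOINT = "http://localhost:8400/messagebroker/amf"; // placeholder URL

let sent = 0;
let failed = 0;

for (let id = 0; id < CLIENTS; id++) {
  // Each simulated client fires one message per second, forever.
  setInterval(async () => {
    try {
      await fetch(ENDPOINT, { method: "POST", body: `msg from client ${id}` });
      sent++;
    } catch {
      failed++;
    }
  }, 1000);
}

// Report throughput every 5 seconds so you can watch for degradation.
setInterval(() => console.log(`sent=${sent} failed=${failed}`), 5000);
```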
There is a tool created by Adobe which can help you with performance testing; it's located here (take a look at the attachments of Adobe LiveCycle Data Services 3 ES2 Performance Brief.pdf). It contains instructions on how to configure and run the stress tool. If you cannot manage to run it, let me know.
Just to give you an example: on my machine (i7 Q820, 8 GB RAM), I was able to handle 10,000 connected users using the stress tool.
I've started using SOAP UI recently to test web services and it's pretty cool, but it's a huge resource hog.
Is there any way to reduce the amount of resources it uses?
It shouldn't be a resource hog, although I've seen it behave that way before. I leave it running on my PC all week, while a co-worker with a similar machine (dual-core running XP) has to kill it every few hours or it keeps using CPU, so I'd try uninstalling and re-installing. Currently, my instance has been up for 10 days, running a mock service that I've been hitting very hard (I've sent it thousands of requests). Total CPU time over those 10 days is about an hour and a half, but the "right now" number is about 1%.
There are no popular alternatives, aside from writing your own client in the language of your choice.
If you're testing WCF services, you can run wcfTestClient.exe from the Visual Studio command prompt, passing it the endpoint address of a local or remotely hosted service. It's no good for ASMX-style .NET 2.0 SOAP services, though.
If you want to test using only JSON, you could use one of the lightweight REST clients, e.g. the Mozilla REST plugin.
We test our SOAP APIs manually with SOAP UI and otherwise use JMeter for automated SOAP API testing. While having a GUI seems attractive at first, I find both applications quite user-unfriendly and time-consuming to work with.
As already suggested, you could do it in code using Java or maybe use a dynamic language like Ruby:
Testing SOAP Webservices with RSpec
SOAP web Services testing in RUBY
As user mitchnull mentions in his comment:

Disabling the browser component (-Dsoapui.jxbrowser.disable=true) solved the 100% CPU usage issues for me. (When it was enabled, it periodically went to 100% CPU even when not running any tests/requests.)

You would typically add that flag to the JAVA_OPTS line in SoapUI's launcher script (bin/soapui.bat or bin/soapui.sh).