Preventing a WCF client from issuing too many requests - web-services

I am writing an application where the client issues commands to a web service (CQRS).
The client is written in C#.
The client uses a WCF proxy to send the messages.
The client uses the async pattern to call the web service.
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts responding that it is too busy.
Here is an example. I am registering orders, and they can range from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back, then I just await for them all to complete.
I don't really want to fix this server-side because, no matter how much I tune it, the client could still kill it by sending, for example, 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply accepts all the requests but sends a maximum of, say, 20 at a time, automatically dispatching the next one as each completes? Or is the Task I get back tied to the actual HTTP request, and therefore not returned until the request has actually been dispatched?
If that is the case and the WCF client simply cannot do this for me, my idea is to decorate the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that there are no more than, say, 20 requests active at a time. I know this will work, but I would like to ask if anyone knows of a library or a class that already does something like this?
This is kind of like throttling, but not exactly: I don't want to limit how many requests I can send in a given period of time, but rather how many requests can be active at any given time.
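(For reference, the "at most N active requests" idea can also be sketched client-side with a SemaphoreSlim from System.Threading, without touching WCF configuration. This is only an illustration built around the _cmdSvc.ExecuteAsync call shown above, not something WCF provides out of the box.)
var throttle = new SemaphoreSlim(20); // at most 20 requests in flight at any time

var taskList = Orders.Select(async order =>
{
    await throttle.WaitAsync();       // wait for a free slot
    try
    {
        await _cmdSvc.ExecuteAsync(order);
    }
    finally
    {
        throttle.Release();           // free the slot so the next order can be sent
    }
}).ToList();

await Task.WhenAll(taskList);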

Based on @PanagiotisKanavos' suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed in to the constructor as innerSvc. Once someone calls ExecuteAsync, a completion source is created and, along with the command, posted to the ActionBlock; the caller then gets back a Task from the completion source.
The ActionBlock will then call the processing method. This method sends the command to the web service. Depending on what happens, it will use the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
public class RequestLimitCommandService : IAsyncCommandService
{
    // Pairs a queued command with the completion source handed back to the caller.
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private readonly IAsyncCommandService _innerSvc;
    private readonly ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    Task IAsyncCommandService.ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}
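Wiring it up is then just a matter of decorating the real proxy-backed service; a short usage sketch, where WcfCommandService is a hypothetical IAsyncCommandService implementation that wraps the WCF proxy:
// At most 20 commands are dispatched to the web service at any one time.
IAsyncCommandService commandService =
    new RequestLimitCommandService(new WcfCommandService(), maxDegreeOfParallelism: 20);

var taskList = Orders.Select(order => commandService.ExecuteAsync(order)).ToList();
await Task.WhenAll(taskList);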

Related

Setting JMSMessageID on stubbed jms endpoints in camel unit tests

I have a route that I am testing. I use stub://jms:queue:whatever to send/receive messages and extend CamelTestSupport for my test classes. I am having an issue with one of the routes that has a bean using an idempotent repository to store messages by "message id", for which it reads and stores the JMSMessageID property from the exchange.
The problem I run into is that I can't figure out a way to set this property on messages sent to stubbed endpoints. Every time the method that requires this property is called, the id returns null and I have to handle it as a null pointer. I can do this, but the cleanest approach would be to just set the header on the test message. I tried includeSentJMSMessageId=true on the endpoint, and I tried using sendBodyAndHeader on the producer, passing "JMSMessageID", "ID: whatever" in the arguments, but neither appears to work. I read that the driver/connection factory is supposed to set the header, but I'm not too familiar with how/where to do this. And since I am using stubbed endpoints, I'm not creating any brokers/connection factories in my unit tests.
So don't stub out the JMS component; replace it with a processor and then add the preferred JMSMessageID in the processor.
Something like this code:
@Test
void testIdempotency() throws Exception {
    mockOut.expectedMinimumMessageCount(1);

    // specify the route to test
    AdviceWithRouteBuilder.adviceWith(context, "your-route-name", enrichRoute -> {
        // replace the from with an endpoint we can call directly
        enrichRoute.replaceFromWith("direct:start");
        // replace the jms endpoint with a processor so it can act as the JMS endpoint
        enrichRoute.weaveById("jms:queue:whatever").replace().process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                // set that ID to the one I want to test
                exchange.getIn().setHeader("JMSMessageID", "some-value-to-test");
            }
        });
        // add an endpoint at the end to check if we received a message
        enrichRoute.weaveAddLast().to(mockOut);
    });
    context.start();

    // send some message
    Map<String, Object> sampleMsg = getSampleMessageAsHashMap("REQUEST.json");
    // get the response
    Map<String, Object> response = (Map<String, Object>) template.requestBody("direct:start", sampleMsg);

    // you will need to check if the response is what you expected
    // check the headers etc.
    mockOut.assertIsSatisfied();
}
The JMSMessageID can only be set by the provider. It cannot be set by a client despite the fact that javax.jms.Message has setJMSMessageID(). As the JavaDoc states:
This method is for use by JMS providers only to set this field when a message is sent. This message cannot be used by clients to configure the message ID. This method is public to allow a JMS provider to set this field when sending a message whose implementation is not its own.

Play Framework how to purposely delay a response

We have a Play app, currently using version 2.6. We are trying to prevent dictionary attacks against our login by delaying a "failed login" message back to our users when they provide a wrong password. We currently hash and salt and follow all the best practices, but we are not sure if we are delaying correctly. So we have in our Controller:
public Result login() { return ok(loginHtml); }
and we have a:
public Result loginAction()
{
    // Check for user in database
    User user = User.find.query()...
    // Was the user found?
    if (user == null) {
        // Wrong password! Delay and redirect
        Thread.sleep(10000); // <-- how to delay correctly?
        return redirect(routes.Controller.login());
    }
    // User is not null, so all good!
    ...
}
We are not sure if Thread.sleep(10000) is the best way to delay a response, since this might hang other requests that come in or use too many threads from the default pool. We have noticed that under 80+ hits per second the Play Framework does not route our HTTP calls to the routes. That is, if we receive an HTTP POST request, our app will not even send that request to the controller until 20+ seconds later; HOWEVER, in the SAME time period, if we get an HTTP GET request, our app will process that GET instantly!
Currently we have 300 threads as the min/max in our Akka settings for the default fork-join pool. Any insights would be appreciated. We run a t2.xlarge AWS EC2 instance running Ubuntu.
Thank you.
Thread.sleep blocks the current thread; please try to avoid using it in production code as much as possible.
What you need to use is CompletionStage / CompletableFuture, or any abstraction for dealing with async programming, together with an asynchronous action.
Please take a look here for more details about asynchronous actions: https://www.playframework.com/documentation/2.8.x/JavaAsync
In your case the solution would look something like this (excuse me, please, this might have mistakes - I'm primarily a Scala engineer):
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LoginController extends Controller {

    private HttpExecutionContext httpExecutionContext;
    // Create and inject a separate ScheduledExecutorService
    private ScheduledExecutorService executor;

    @Inject
    public LoginController(HttpExecutionContext ec,
                           ScheduledExecutorService executor) {
        this.httpExecutionContext = ec;
        this.executor = executor;
    }

    public CompletionStage<Result> loginAction() {
        User user = User.find.query()...
        if (user == null) {
            // Complete the response only after a 10-second delay, without blocking a request thread
            CompletableFuture<Result> delayed = new CompletableFuture<>();
            executor.schedule(
                () -> { delayed.complete(redirect(routes.Controller.login())); },
                10, TimeUnit.SECONDS);
            return delayed;
        } else {
            // return another response
        }
    }
}
Hope this helps!
I don't like this approach at all. This hogs threads for no reason and can probably cause your entire system to lock up if someone finds out you are doing this and they have malicious ideas. Let me propose a better approach:
In the User table store a nullable LocalDateTime of the last login attempt time.
When you fetch the user from the DB check the last attempt time (compare to LocalDateTime.now()), if 10 secs have passed since last attempt perform the password comparison.
If passwords don't match store the last attempt time as now.
This can also be handled gracefully on the front end if you provide good error responses.
EDIT: If you want to delay login attempts NOT based on the user, you could create an attempt table and store last attempt by IP address.
If you really want to do it your way (which I don't recommend), you need to read up on this first: https://www.playframework.com/documentation/2.8.x/ThreadPools

Is it possible to use AMAZON LEX to build a chatbot which connects with database and Web service stored on client side?

Our organization wants to develop a "LOST & FOUND System Application" using a chatbot integrated into a website.
Whenever the user starts a conversation with the chatbot, the chatbot should ask for the details of the lost item or the item found, and it should store the details in a database.
How can we do it?
And can we use our own web service, because the organization doesn't want to keep the database on Amazon's servers?
As someone who just implemented this very same situation (with a lot of help from @Sid8491), I can give some insight on how I managed it.
Note, I'm using C# because that's what the company I work for uses.
First, the bot requires input from the user to decide which intent is being called. For this, I implemented a PostText call to the Lex API.
PostTextRequest lexTextRequest = new PostTextRequest()
{
    BotName = botName,
    BotAlias = botAlias,
    UserId = sessionId,
    InputText = messageToSend
};

try
{
    lexTextResponse = await awsLexClient.PostTextAsync(lexTextRequest);
}
catch (Exception ex)
{
    throw new BadRequestException(ex);
}
Please note that this requires you to have created a Cognito Object to authenticate your AmazonLexClient (as shown below):
protected void InitLexService()
{
    // Grab region for Lex Bot services
    Amazon.RegionEndpoint svcRegionEndpoint = Amazon.RegionEndpoint.USEast1;

    // Get credentials from Cognito
    awsCredentials = new CognitoAWSCredentials(
        poolId,             // Identity pool ID
        svcRegionEndpoint); // Region

    // Instantiate Lex client with region
    awsLexClient = new AmazonLexClient(awsCredentials, svcRegionEndpoint);
}
After we get the response from the bot, we use a simple switch case to correctly identify the method we need to call for our web application to run. The entire process is handled by our web application, and we use Lex only to identify the user's request and slot values.
// Call Amazon Lex with text, capture response
var lexResponse = await awsLexSvc.SendTextMsgToLex(userMessage, sessionID);

// Extract intent and slot values from the Lex response
string intent = lexResponse.IntentName;
var slots = lexResponse.Slots;

// Use the response's intent to call the appropriate method
switch (intent)
{
    case "YourIntentName":      // your intent name
        /* Call the appropriate method */
        break;
}
After that, it is just a matter of displaying the result to the user. Do let me know if you need more clarification!
UPDATE:
An example implementation of the slots data to write to SQL (again in C#) would look like this:
case "LostItem":
message = "Please fill the following form with the details of the item you lost.";
LostItem();
break;
This would then take you to the LostItem() method which you can use to fill up a form.
public void LostItem()
{
    string itemName = string.Empty;
    // itemName = ... get from user input / form post
    // repeat with whatever else you need for a complete item object

    // Implement a SQL call to a stored procedure that inserts the object into your database.
    // You can do a similar call to the database to retrieve an object as well.
}
That should point you in the right direction hopefully. Google is your best friend if you need help with SQL stored procedures. Hopefully this helped!
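As a rough illustration of that database step, here is a hedged sketch of inserting the collected item via an ADO.NET stored-procedure call; the procedure name, parameter names, and the connectionString variable are assumptions, not part of the Lex API:
using System.Data;
using System.Data.SqlClient;

public void SaveLostItem(string itemName, string description)
{
    // usp_InsertLostItem is a hypothetical stored procedure in your own database.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("usp_InsertLostItem", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@ItemName", itemName);
        command.Parameters.AddWithValue("@Description", description);

        connection.Open();
        command.ExecuteNonQuery();
    }
}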
Yes, it's possible.
You can send the requests to Lex from your website, and it will extract the intents and entities.
Once you get these, you can write backend code in any language of your choice and use any DB you want.
In your use case, you might just want to use Lex. PostText will be the main function you will be calling.
You will need to create an intent in Lex with multiple slots (LosingDate, LosingPlace, or whatever you want); it will then be able to get all this information from the user and pass it to your web application.
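For illustration, reading those slot values out of the PostText response in C# might look like the sketch below; the slot names come from the answer above, and everything else (variable names, what you do with the values) is an assumption:
// PostTextResponse.Slots is a Dictionary<string, string> of the values Lex collected.
var postTextResponse = await awsLexClient.PostTextAsync(lexTextRequest);

postTextResponse.Slots.TryGetValue("LosingDate", out var losingDate);
postTextResponse.Slots.TryGetValue("LosingPlace", out var losingPlace);

// Hand the collected values off to your own web service / database layer here.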

How to lock a long async call in a WebApi action?

I have this scenario where I have a WebApi and an endpoint that, when triggered, does a lot of work (around 2-5 minutes). It is a POST endpoint with side effects, and I would like to limit execution so that if 2 requests are sent to this endpoint (this should not happen, but better safe than sorry), one of them will have to wait, in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1); // allow a single caller at a time

// ...

await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task on to a service might be the optimal way; however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be fired only once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job), and I would like to ensure the API doesn't run into concurrency issues if the developer, for example, accidentally double-sends the request.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no more need to lock or wait while processing the long-running request.
The flow could be:
Client POSTs /ResourcesToProcess and should receive 202 Accepted quickly
HttpController simply queues the task to process (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task to process
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of requests previously made:
If the task is not found: 404 Not Found. Resource not found for id 123
If the task is processing: 200 OK. 123 is processing.
If the task is done: 200 OK. Process response.
Your controller could look like:
public class TaskController : ApiController
{
    // constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        // Queue the work and acknowledge immediately
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted);
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return Content(HttpStatusCode.NotFound, "thing does not exist");
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (!thing.IsDone) // hypothetical flag: queued but not started or finished yet
        {
            return Ok("thing is not processing yet");
        }
        // here we assume the thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design assumes that you do not handle the long-running process inside your WebApi. Indeed, doing so may not be the best design choice. If you still want to, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
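For completeness, the second link describes HostingEnvironment.QueueBackgroundWorkItem (ASP.NET 4.5.2+); a minimal sketch of queueing the long-running call from the controller, reusing the _service and RequestBody placeholders from above, could look like this:
using System.Net;
using System.Web.Hosting;
using System.Web.Http;

public class ProcessController : ApiController
{
    private readonly IYourService _service; // the question's service, injected elsewhere (placeholder name)

    [HttpPost, Route("")]
    public IHttpActionResult QueueLongRunningWork(RequestBody body)
    {
        // ASP.NET tracks the work item and delays shutdown, but this is not durable
        // across app-pool recycles, unlike a real queue plus worker service.
        HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
        {
            await _service.LongRunningWithSideEffects();
        });

        return StatusCode(HttpStatusCode.Accepted);
    }
}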

Cancelling async webservice call in windows phone

I'm developing a Windows Phone app that consumes a .NET web service (also developed by me). When I call a web service method I do it asynchronously and don't block the UI. For example, here's a code sample for asking the server for a list of flight arrivals.
service.MobileWSSoapClient Proxy { get; set; }
Proxy = new service.MobileWSSoapClient();
Proxy.GetArrivalsCompleted += proxy_GetArrivalsCompleted;
Proxy.GetArrivalsAsync(searchFilter);
This way I give the user the freedom to call the same method again, or another one (e.g. refreshing the arrival list or searching for a particular arrival). In case the user generates a new call to the service, the app should "cancel" the first call and only show the result of the last call. I think it is technically impossible to cancel a web service call that has already gone to the server; we should wait for the server response and then ignore it. Knowing that, it would be helpful to somehow mark that call as obsolete. It would be enough to receive an error as the response of that obsolete call. I'll write pseudo code of what I imagine/need.
void proxy_GetArrivalsCompleted(object sender, service.GetArrivalsCompletedEventArgs e)
{
    if (e.Error == null)
    {
        // DO WORK
    }
    else
    {
        if (e.Error == Server Exception || e.Error == Connection Exception)
        {
            MessageBox.Show("error");
        }
        else if (e.Error == obsolete call)
        {
            // DO NOTHING
        }
    }
}
Thanks in advance.
You can use a BackgroundWorker for your scenario. So, when the user calls the web service again, you can cancel your BackgroundWorker process, which will end the service call.
How to use BackgroundWorker here.
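Alternatively, the "mark the call as obsolete" idea from the question can be sketched with the userState overload that generated event-based proxies usually expose: pass a request id with each call and ignore completions that carry a stale id. The field and method names below are assumptions, not part of the generated proxy.
private int _latestRequestId;

private void RequestArrivals(string searchFilter)
{
    // Each new search bumps the id; any earlier in-flight call becomes obsolete.
    int requestId = ++_latestRequestId;
    Proxy.GetArrivalsAsync(searchFilter, requestId); // requestId travels as userState
}

void proxy_GetArrivalsCompleted(object sender, service.GetArrivalsCompletedEventArgs e)
{
    // UserState carries the id we passed in; ignore everything but the newest call.
    if (!Equals(e.UserState, _latestRequestId))
    {
        return; // obsolete call, DO NOTHING
    }

    if (e.Error == null)
    {
        // DO WORK
    }
    else
    {
        MessageBox.Show("error");
    }
}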