We have a Play app, currently on version 2.6. We are trying to prevent dictionary attacks against our login by delaying the "failed login" message back to our users when they provide a wrong password. We already hash and salt and follow all the other best practices, but we are not sure we are delaying correctly. So we have this in our Controller:
public Result login() { return ok(loginHtml); }
and we have a:
public Result loginAction()
{
// Check for user in database
User user = User.find.query()...
// Was the user found?
if (user == null) {
// Wrong password! Delay and redirect
Thread.sleep(10000); // <<-- how do we delay correctly?
return redirect(routes.Controller.login());
}
// User is not null, so all good!
...
}
We are not sure if Thread.sleep(10000) is the best way to delay a response, since it might hang other requests that come in, or use too many threads from the default pool. We have noticed that above 80+ hits per second the Play Framework stops routing our HTTP calls promptly: if we receive an HTTP POST request, our app will not even hand that request to the Controller until 20+ seconds later; HOWEVER, in the SAME time period, if we get an HTTP GET request, our app processes that GET instantly!
Currently we have 300 threads as the min/max in our Akka settings for the default fork-join executor. Any insights would be appreciated. We run a t2.xlarge AWS EC2 instance running Ubuntu.
Thank you.
Thread.sleep blocks the current thread; please try to avoid it in production code as much as possible.
What you need to use is CompletionStage / CompletableFuture (or any other abstraction for dealing with asynchronous programming) together with an asynchronous action.
Please take a look at the documentation on asynchronous actions for more details: https://www.playframework.com/documentation/2.8.x/JavaAsync
In your case the solution would look something like this (excuse me if there are mistakes - I'm primarily a Scala engineer):
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LoginController extends Controller {

    private final HttpExecutionContext httpExecutionContext;
    // Create and inject a separate ScheduledExecutorService (via a custom binding)
    private final ScheduledExecutorService executor;

    @Inject
    public LoginController(HttpExecutionContext ec,
                           ScheduledExecutorService executor) {
        this.httpExecutionContext = ec;
        this.executor = executor;
    }

    public CompletionStage<Result> loginAction() {
        User user = User.find.query()...
        if (user == null) {
            // Complete the future from the scheduler after 10 seconds, then
            // build the redirect on Play's HTTP execution context; no request
            // thread is blocked while we wait.
            CompletableFuture<Void> delay = new CompletableFuture<>();
            executor.schedule(() -> delay.complete(null), 10, TimeUnit.SECONDS);
            return delay.thenApplyAsync(
                    ignored -> redirect(routes.Controller.login()),
                    httpExecutionContext.current());
        } else {
            return CompletableFuture.completedFuture(ok()); // return another (real) response here
        }
    }
}
Hope this helps!
I don't like this approach at all. It hogs threads for no reason, and it can probably cause your entire system to lock up if someone with malicious ideas finds out you are doing this. Let me propose a better approach:
In the User table, store a nullable LocalDateTime with the time of the last login attempt.
When you fetch the user from the DB, check the last attempt time (compare it to LocalDateTime.now()); only if 10 seconds have passed since the last attempt do you perform the password comparison.
If the passwords don't match, store the current time as the last attempt time.
This can also be handled gracefully on the front end if you provide good error responses; a sketch of the check follows below.
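A rough sketch of that check in Java (the lastFailedAttempt field and the passwordMatches helper are illustrative names of mine, not an existing API):

import java.time.Duration;
import java.time.LocalDateTime;

// Sketch only: lastFailedAttempt and passwordMatches(...) are illustrative names.
private Result checkLogin(User user, String submittedPassword) {
    LocalDateTime last = user.getLastFailedAttempt(); // nullable: null means no recorded failure
    boolean windowOpen = last == null
            || Duration.between(last, LocalDateTime.now()).getSeconds() >= 10;

    if (windowOpen && passwordMatches(user, submittedPassword)) {
        return redirect(routes.Controller.home()); // success path; route name illustrative
    }

    // Still inside the 10-second window, or wrong password: record the attempt
    // and send the same generic failure response either way.
    user.setLastFailedAttempt(LocalDateTime.now());
    user.save();
    return redirect(routes.Controller.login());
}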
EDIT: If you want to delay login attempts NOT based on the user, you could create an attempts table and store the last attempt per IP address.
If you really want to do it your way (which I don't recommend), you need to read up on Play's thread pools first: https://www.playframework.com/documentation/2.8.x/ThreadPools
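For completeness, the gist of that page is to give blocking or delayed work its own dispatcher instead of Play's default pool. A minimal Play 2.6 Java sketch (the dispatcher name login.delay-dispatcher and its application.conf entry are assumptions of mine):

import akka.actor.ActorSystem;
import javax.inject.Inject;
import play.libs.concurrent.CustomExecutionContext;

// Sketch: a dedicated execution context so the delayed redirects never occupy
// threads from the default pool. Assumes application.conf defines a dispatcher
// named "login.delay-dispatcher".
public class LoginDelayExecutionContext extends CustomExecutionContext {
    @Inject
    public LoginDelayExecutionContext(ActorSystem actorSystem) {
        super(actorSystem, "login.delay-dispatcher");
    }
}

An instance of this class can then be injected and used as the executor for the delayed work.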
We have a deadlock situation that occurred because of heavy load on a microservice (say A), causing multiple requests from different client services (B and C). These calls from B and C come in for the same clientId (the key), are served by different instances of A, and try to update the same clientId's data in the database at the same time, causing the error below.
CannotAcquireLockException is thrown:
SQL Error: 60, SQLState: 61000
ORA-00060: deadlock detected while waiting for resource
We have decided to implement sharding at the load balancer (haproxy) level, which will ensure that the same instance of A always serves the requests from B and C for a specific key (clientId), so we don't have multiple instances processing requests for the same key.
Now we are in single-JVM mode, as we have made sure that requests from B and C for a specific clientId always come to the same instance of A.
Even with this, it is still possible that requests from B and C arrive for the same clientId nanoseconds apart, and then multiple threads will again try to update the same clientId's data in the database at the same time, causing the same error.
To improve this we are looking for possible solutions, and one of them is ReentrantReadWriteLock, which conceptually should take care of this.
We are using Spring Data JPA, and the save being done looks like:
clientJpaRepository.save(clientObject);
Now, is it possible to use something like the below?
public void save(Client clientObject) {
    String clientId = clientObject.getClientId();
    boolean isLockAcquired = false;
    try {
        isLockAcquired = writeLock.tryLock(100, TimeUnit.MILLISECONDS);
        if (isLockAcquired) {
            clientJpaRepository.save(clientObject);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        log.error("exception occurred trying to acquire lock for clientId={}", clientId);
    } finally {
        if (isLockAcquired) {
            writeLock.unlock(); // only unlock when the lock was actually acquired
        }
    }
}
I am not very sure how this is going to deal with the keys, though: I don't want any threads to block when they want to update a different key (clientId 2) - a per-key variant is sketched below.
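For illustration, a per-key variant of the save above might look like the following sketch (the lock map is my addition; note that locks for stale clientIds are never evicted here):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one lock per clientId, so writers for different keys never block
// each other. Relies on the haproxy sharding guaranteeing a single JVM per key.
private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

public void save(Client clientObject) {
    String clientId = clientObject.getClientId();
    ReentrantLock lock = locks.computeIfAbsent(clientId, id -> new ReentrantLock());
    boolean isLockAcquired = false;
    try {
        isLockAcquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        if (isLockAcquired) {
            clientJpaRepository.save(clientObject);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        log.error("exception occurred trying to acquire lock for clientId={}", clientId);
    } finally {
        if (isLockAcquired) {
            lock.unlock(); // only release a lock we actually hold
        }
    }
}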
Also, another thing to note: there could be reads of this data from the database as part of other API calls. Hopefully they would not be waiting too long, and I hope I don't need to make any changes there for the reads.
Sorry for the long question; I hope to hear from someone soon.
Thanks.
I am using libwidevinecdm.so from Chrome to handle DRM-protected data. I am currently successfully setting the Widevine server certificate I get from the license server. I can also create a session with the PSSH box of the media I'm trying to decode. So far everything is successful (all promises resolve fine).
(session is created like this: _cdm->CreateSessionAndGenerateRequest(promise_id, cdm::SessionType::kTemporary, cdm::InitDataType::kCenc, pssh_box.data(), static_cast<uint32_t>(pssh_box.size()));)
I am then getting a session message of type kLicenseRequest, which I forward to the respective license server. The license server responds with a valid response containing the same amount of data as I can see in the browser when using Chrome. I am then passing this to my session like this:
_cdm->UpdateSession(promise_id, session_id.data(), static_cast<uint32_t>(session_id.size()),
license_response.data(), static_cast<uint32_t>(license_response.size()));
The problem now is that this promise never resolves. It keeps posting the kLicenseRequest message to my session over and over again, without ever returning. Does this mean my response is wrong, or is it something else?
Br
Yanick
The issue is caused by the fact that everything in CreateSessionAndGenerateRequest is done synchronously - that means that by the time CreateSessionAndGenerateRequest returns, your promise will always already be resolved.
The CDM emits kLicenseRequest inside CreateSessionAndGenerateRequest, and it doesn't do so in a "fire & forget" fashion: the function waits there until you have returned from cdm::Host_10::OnSessionMessage. Since my implementation of OnSessionMessage made a synchronous HTTP request to the license server before - also synchronously - calling UpdateSession, the entire chain ended up blocking.
So ultimately I was calling UpdateSession while still being inside CreateSessionAndGenerateRequest, and I assume the CDM cannot handle this and reacts by creating a new session with the given ID and generating a request again, which of course triggered another UpdateSession, and so on.
Ultimately, the simplest way to break the cycle was to make something asynchronous. I decided to launch a separate thread upon receiving kLicenseRequest, wait a few milliseconds to make sure CreateSessionAndGenerateRequest has time to finish (not sure if that is really required), and then issue the request to the license server.
The only change I had to make was adding the surrounding std::thread:
void WidevineSession::forward_license_request(const std::vector<uint8_t> &data) {
    std::thread{
        [=]() {
            // Give CreateSessionAndGenerateRequest time to return before
            // calling back into the CDM (possibly not strictly required).
            std::this_thread::sleep_for(std::chrono::milliseconds{100});

            net::HttpRequest request{"POST", _license_server_url};
            request.add_header("Authorization", fmt::format("Bearer {}", _access_token))
                   .byte_body(data);
            const auto response = _client.execute(request);
            if (response.status_code() != 200) {
                // Throwing from a detached thread would call std::terminate,
                // so log the failure and bail out instead.
                log->error("Widevine license request not accepted by license server: {} {} ({})",
                           response.status_code(), response.status_text(),
                           utils::bytes_to_utf8(response.body()));
                return;
            }
            log->info("Successfully requested widevine license from license server");
            _adapter->update_session(this, _session_id, response.body());
        }
    }.detach();
}
I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example. I am registering orders, and they can range from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back, then I just await the completion of them all.
I don't really want to fix this server-side, because no matter how much I tune it, the client could still kill it by sending, for example, 10,000 requests.
So my question is: can I configure the WCF client in some way so that it simply accepts all the requests but sends at most, say, 20 at a time, automatically dispatching the next one as each completes? Or is the Task I get back tied to the actual HTTP request, and therefore unable to complete until the request has actually been dispatched?
If that is the case and the WCF client simply cannot do this for me, my idea is to decorate the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that no more than, say, 20 requests are active at any time. I know this will work, but I would like to ask if anyone knows of a library or class that already does something like this?
This is kind of like throttling, but I don't want to limit how many requests I can send in a given period of time - rather, how many requests can be active at any given time.
Based on @PanagiotisKanavos' suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed into the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and posted, together with the command, to the ActionBlock; the caller then gets back a Task from the completion source.
The ActionBlock will then call the processing method. This method sends the command to the web service and, depending on what happens, uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public class RequestLimitCommandService : IAsyncCommandService
{
private class ExecutionToken
{
public TaskCompletionSource<bool> Source { get; }
public ICommand Command { get; }
public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
{
Source = source;
Command = command;
}
}
private readonly IAsyncCommandService _innerSvc;
private readonly ActionBlock<ExecutionToken> _block;
public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
{
_innerSvc = innerSvc;
var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
_block = new ActionBlock<ExecutionToken>(Execute, options);
}
Task IAsyncCommandService.ExecuteAsync(ICommand command) // explicit interface implementations take no access modifier
{
var source = new TaskCompletionSource<bool>();
var token = new ExecutionToken(source, command);
_block.Post(token);
return source.Task;
}
private async Task Execute(ExecutionToken token)
{
try
{
await _innerSvc.ExecuteAsync(token.Command);
token.Source.SetResult(true);
}
catch (Exception ex)
{
token.Source.SetException(ex);
}
}
}
I have a scenario with a WebApi and an endpoint that, when triggered, does a lot of work (around 2-5 min). It is a POST endpoint with side effects, and I would like to limit the execution so that if 2 requests are sent to this endpoint (which should not happen, but better safe than sorry), one of them will have to wait, in order to avoid race conditions.
I first tried to use a simple static lock inside the controller, like this:
lock (_lockObj)
{
var results = await _service.LongRunningWithSideEffects();
return Ok(results);
}
This is of course not possible, because you cannot await inside a lock statement.
Another solution I considered was using a SemaphoreSlim, like this:
await semaphore.WaitAsync();
try
{
var results = await _service.LongRunningWithSideEffects();
return Ok(results);
}
finally
{
semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that handing this task off to a service might be the optimal way; however, I do not necessarily want to queue something in the background that keeps running after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be fired only once a day, at a specific time, by a cron job. However, there is also an option for a developer to fire it manually (mostly in case something goes wrong with the job), and I would like to ensure the API doesn't run into concurrency issues if the developer, for example, accidentally double-sends the request.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no more need to lock or wait while processing the long-running request.
The flow could be:
The client POSTs /ResourcesToProcess and should quickly receive 202 Accepted
The HttpController simply queues the task (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of previously made requests:
If the task is not found: 404 Not Found. Resource not found for id 123
If the task is processing: 200 OK. 123 is processing.
If the task is done: 200 OK. Process response.
Your controller could look like:
public class TaskController : ApiController
{
    //constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted); // 202: queued, will be processed later
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTask(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return Content(HttpStatusCode.NotFound, "thing does not exist");
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (!thing.IsDone) // assumes a flag that distinguishes "queued" from "done"
        {
            return Ok("thing is not processing yet");
        }
        // here we assume thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design assumes that you do not handle the long-running process inside your WebApi. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
I am creating a new Ember app. I want to use the newest version of ember-data (ember-data 2.0). I want it to be a mobile web app, so it must handle variable network access and even being offline.
I want it to store all data locally and use that data when it goes offline, so the user gets the same experience regardless of network connectivity.
Is ember-data 2.0 capable of handling the offline case? Do I just make an adapter that detects offline/online and then do....?
Or do I have to make my own in-between layer to hide the offline handling from ember-data?
Are there any libraries out there that have this problem solved? I have found some, but are there any that are up to date with the latest version of ember-data?
If the device goes offline and the user tries to transition to a route whose model is not loaded yet, you will get an error. You need to handle these situations yourself, for example with a nice page showing an error message and a refresh button. To do this, you need:
First, in the application route, create an error action (it will catch errors thrown during the model hook), and when an error occurs, save the transition in memory. Do not try to use local storage for this task: it will save only plain properties, while we need an actual transition object. Use either window.failedTransition, or inject into controllers and routes a simple object that will hold the failed transition.
actions: {
error: function (error, transition) {
transition.abort();
/**
* You need to correct this line, as you don't have memoryStorage
* injected. Use window.failedTransition, or create a simple
* storage. It's up to you.
*/
this.get('memoryStorage').set('failedTransition', transition);
return true; //This line is important, or the whole idea will not work
}
}
Second, create an error controller and template. In the error controller, define a retry action:
actions: {
retry: function () {
/**
* Correct this line too
*/
var transition = this.get('memoryStorage').getAndRemove('failedTransition');
if (transition !== undefined) {
transition.retry();
}
}
}
Finally, in the error template, display the status and error text (if any is available) and a button that triggers the retry action.
This is a simple solution for a simple case (the device going offline for just a few seconds); you may need something way more complex. If you want your application to fully work without network access, then you may want to use local storage (there is an addon: https://github.com/funkensturm/ember-local-storage) for all data and sync it with the server from time to time (e.g. sync data every 10 seconds in the background). Unfortunately I didn't try such things, but I think it is possible.