
Does AspNetDB automatically reset Locked Out users?
I can't find anything saying it does... just looking for confirmation that I haven't missed something.

No, it doesn't automatically unlock users. There is an UnlockUser method you can call, but you have to implement your own logic for when to unlock the account.
public bool UnlockAccount(MembershipUser user)
{
    return user.UnlockUser();
}

Related

Play Framework how to purposely delay a response

We have a Play app, currently on version 2.6. We are trying to prevent dictionary attacks against our login by delaying the "failed login" message we send back to users when they provide a wrong password. We already hash and salt and follow the usual best practices, but we are not sure we are delaying correctly. So we have in our Controller:
public Result login() { return ok(loginHtml); }
and we have a:
public Result loginAction()
{
    // Check for user in database
    User user = User.find.query()...
    // Was the user found?
    if (user == null) {
        // Wrong password! Delay and redirect
        Thread.sleep(10000); // <-- how do we delay correctly?
        return redirect(routes.Controller.login());
    }
    // User is not null, so all good!
    ...
}
We are not sure Thread.sleep(10000) is the best way to delay a response, since it might hang other requests that come in or use too many threads from the default pool. We have noticed that at 80+ hits per second the Play Framework stops routing our HTTP calls: if we receive an HTTP POST request, our app will not even hand it to the Controller until 20+ seconds later; HOWEVER, in the SAME time period, an HTTP GET request is processed instantly!
Currently we have 300 threads as the min/max in our Akka settings for the default fork-join pool. Any insights would be appreciated. We run a t2.xlarge AWS EC2 instance running Ubuntu.
Thank you.
Thread.sleep blocks the current thread; please avoid it in production code as much as possible.
What you need instead is CompletionStage / CompletableFuture (or any other abstraction for asynchronous programming) together with an asynchronous action.
For more details on asynchronous actions, please see: https://www.playframework.com/documentation/2.8.x/JavaAsync
In your case the solution would look something like this (excuse any mistakes, please; I'm primarily a Scala engineer):
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LoginController extends Controller {
    private HttpExecutionContext httpExecutionContext;
    // Create and inject a separate ScheduledExecutorService
    private ScheduledExecutorService executor;

    @Inject
    public LoginController(HttpExecutionContext ec,
                           ScheduledExecutorService executor) {
        this.httpExecutionContext = ec;
        this.executor = executor;
    }

    public CompletionStage<Result> loginAction() {
        User user = User.find.query()...
        if (user == null) {
            // Complete the future from the scheduler after 10 seconds,
            // without blocking a request-handling thread.
            CompletableFuture<Result> delayed = new CompletableFuture<>();
            executor.schedule(
                () -> delayed.complete(redirect(routes.Controller.login())),
                10, TimeUnit.SECONDS);
            return delayed;
        } else {
            // return another response
        }
    }
}
Hope this helps!
I don't like this approach at all. It hogs threads for no reason and could well lock up your entire system if someone with malicious intent finds out you are doing this. Let me propose a better approach:
In the User table store a nullable LocalDateTime of the last login attempt time.
When you fetch the user from the DB check the last attempt time (compare to LocalDateTime.now()), if 10 secs have passed since last attempt perform the password comparison.
If passwords don't match store the last attempt time as now.
This can also be handled gracefully on the front end if you provide good error responses.
EDIT: If you want to delay login attempts NOT based on the user, you could create an attempt table and store last attempt by IP address.
If you really want to do it your way (which I don't recommend), you need to read up on this first: https://www.playframework.com/documentation/2.8.x/ThreadPools
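The last-attempt check described above can be sketched without any framework code (the class and method names here are illustrative, not from the question's app):

```java
import java.time.Duration;
import java.time.Instant;

// Minimal sketch of per-user login throttling: a new attempt is only
// checked against the password if at least 10 seconds have passed since
// the previous failed attempt. Names are illustrative.
public class LoginThrottle {
    private static final Duration DELAY = Duration.ofSeconds(10);

    // In a real app this would be a nullable column on the User row.
    private Instant lastFailedAttempt; // null = no failed attempt recorded

    /** True if a password comparison may be performed now. */
    public boolean attemptAllowed(Instant now) {
        return lastFailedAttempt == null
            || Duration.between(lastFailedAttempt, now).compareTo(DELAY) >= 0;
    }

    /** Call when a password comparison fails. */
    public void recordFailure(Instant now) {
        lastFailedAttempt = now;
    }
}
```

In the controller you would load the timestamp with the user row, call attemptAllowed() before comparing passwords, and persist it again on failure; no request thread ever sleeps.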

Assess synchronization for Asynchronous loader

I'd like some advice on how to solve the following problem.
// Asynchronous loader
class AsyncLoader
{
    void SendRequest()
    {
        // register OnSendRequest as the completion callback
        std::tr1::bind(&AsyncLoader::OnSendRequest, this, std::tr1::placeholders::_1);
        mutex.lock();
        // send some request
    }
    void OnSendRequest()
    {
        mutex.unlock();
    }
    TSomeType GetCachedValue()
    {
        mutex.lock();
        TSomeType ret = cachedValue;
        mutex.unlock();
        return ret;
    }
    TSomeType cachedValue;
    Mutex mutex;
};
I also have a client that sends requests through AsyncLoader to update cachedValue from the backend. Sometimes I don't need to send a request; I can ask AsyncLoader for the cached value by calling GetCachedValue().
Sometimes I have a synchronization problem when one client sends a request to update the data while another client calls GetCachedValue() to read the cached value.
It is clear that I should use a mutex to synchronize access, but I'm bothered by the fact that I lock the mutex in SendRequest() and unlock it in a different function, OnSendRequest(). It seems to me such a solution is a potential deadlock: if something goes wrong during the request, OnSendRequest() will never be called.
Another idea was to keep a second cached value, copied from the previous request, but I'd like to minimize memory usage.
P.S.: Maybe the real problem here is using an object only once it is fully initialized: if the object is not initialized yet, I should wait for it.
Best regards, Roman.
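The "wait until fully initialized" idea from the P.S. is usually expressed with a condition variable rather than a lock held across the whole asynchronous request. A minimal sketch (Java here for brevity; C++ has the same shape with std::condition_variable, and all names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Readers block until the loader publishes the first value; the lock is
// only held while reading or writing the cached value, never across the
// asynchronous request itself.
public class CachedValue<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition initialized = lock.newCondition();
    private T value;
    private boolean hasValue = false;

    // Called by the loader's completion callback (the OnSendRequest role).
    public void publish(T v) {
        lock.lock();
        try {
            value = v;
            hasValue = true;
            initialized.signalAll();
        } finally {
            lock.unlock();
        }
    }

    // Called by clients (the GetCachedValue role): waits for the first value.
    public T get() {
        lock.lock();
        try {
            while (!hasValue) {
                initialized.awaitUninterruptibly();
            }
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

Because the lock is never held while the request is in flight, a failed request can no longer leave it locked forever; readers simply keep waiting until some later request succeeds.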

delete operation not successful in Axapta 2009

I have written a simple one-record delete job in production, as requested by a user, in one AX instance while another instance was stuck and open. However, the record was not deleted.
try
{
    ttsbegin;
    select forupdate tableBuffer where tableBuffer.RecId == 5457735;
    tableBuffer.delete();
    ttscommit;
}
catch (Exception::Error)
{
    info("Delete operation cancelled.");
}
tableBuffer's delete() method was overridden, with code after super() that stores the deleted record in another table.
I have done the same operation successfully before, but never in a scenario like today's (executed in one instance while the other instance was stuck).
Please suggest possible reasons, as I find the record still persists in both SQL Server and AX.
Thank you.
If you're trying to prevent this from happening, you can use pessimistic locking, where you obtain an update lock:
select pessimisticLock custTable
where custTable.AccountNum > '1000'
See these links for more info:
http://dev.goshoom.net/en/2011/10/pessimistic-locking/
https://blogs.msdn.microsoft.com/emeadaxsupport/2009/07/08/about-locking-and-blocking-in-dynamics-ax-and-how-to-prevent-it/
https://msdn.microsoft.com/en-us/library/bb190073.aspx

How to avoid closing the EntityManager when an OptimisticLockException occurs?

My problem: a process tries to change an entity that has already been changed and now has a newer version id. When I call flush(), an OptimisticLockException is raised inside UnitOfWork's commit() and caught there by a catch-all block, and in that catch block Doctrine closes the EntityManager.
If I want to skip this entity and continue with another one from the ArrayCollection, should I simply not use flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And still get
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed EntityManager.
And if I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is unregistered from this entity manager and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush, to avoid the exception. In other words, you should not call flush() if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity or not.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get expected version (easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // assert you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // if $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the docs doesn't work if the database was changed by another server or another PHP worker. The lock function only makes sure the version number wasn't changed by the current PHP script since the entity was loaded into memory; it doesn't read the database to make sure the version number is still the expected one.
And even if it did read the db, there is still the potential for a race condition between the time the lock function checks the current version in the db and the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help unless you can afford to lose performance and actually lock your database for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
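The version-guard update mentioned above ("update ... where version = :expected_version") is simple to state on its own. Here is a minimal in-memory sketch of its semantics (Java, with illustrative names; a real implementation is a single SQL UPDATE whose affected-row count tells you whether you lost the race):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory sketch of the "UPDATE ... WHERE version = :expected_version"
// guard: a write is applied only if the stored version still matches the
// version the caller read earlier. Names are illustrative.
public class VersionedStore {
    private final Map<Long, String> text = new HashMap<>();
    private final Map<Long, Integer> version = new HashMap<>();

    public void insert(long id, String value) {
        text.put(id, value);
        version.put(id, 1);
    }

    public int readVersion(long id) {
        return version.get(id);
    }

    /** Returns true if applied; false means an optimistic lock failure. */
    public boolean update(long id, String newValue, int expectedVersion) {
        if (version.get(id) != expectedVersion) {
            return false; // someone else changed the row in the meantime
        }
        text.put(id, newValue);
        version.put(id, expectedVersion + 1);
        return true;
    }
}
```

On failure the natural recovery is to re-read the row, reapply the change, and try again, which is exactly what is hard to do once Doctrine has closed the EntityManager.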
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If EntityManager closes after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );

    // The ServiceManager should be aware of this change.
    // This is for Zend ServiceManager; adapt this part to your use case.
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);

    // Then you should manually reload every entity you need (or repeat the whole set of actions).
}

Actions memory management: when are they released?

When you add an action to a sprite, since most things in Cocos are autoreleased, is it then released after it completes? Or, because you added it to a node, is it retained by the node?
If the action then ends, either due to completing on its own or because you stop it yourself, is it then released or is it still available to be run later?
I ask because I want to know whether you need to recreate actions in order to reuse them, or whether you can simply reference them by tag and start and stop them at will. Or, if they repeat, whether you can simply fetch them by tag number and run them again; it's not clear what the "correct" way is. Thanks for the help.
My understanding is that when you create and run an action on a sprite, the action is added to CCActionManager, a singleton that manages actions for you. This includes releasing all of them when the CCActionManager itself is released, and also releasing each action when it is done.
This is the relevant code about the latter (from CCActionManager.m):
-(void) update: (ccTime) dt
{
    for (tHashElement *elt = targets; elt != NULL; ) {
        ...
        if (!currentTarget->paused) {
            // The 'actions' ccArray may change while inside this loop.
            for (currentTarget->actionIndex = 0; currentTarget->actionIndex < currentTarget->actions->num; currentTarget->actionIndex++) {
                ...
                if (currentTarget->currentActionSalvaged) {
                    ...
                    [currentTarget->currentAction release];
                } else if ([currentTarget->currentAction isDone]) {
                    ...
                    CCAction *a = currentTarget->currentAction;
                    currentTarget->currentAction = nil;
                    [self removeAction:a];
                }
                ...
            }
        }
        ...
    }
}
After doing some research, it seems that the topic of reusing an action is on shaky ground. Anyway, you can read here what the cocos2d best practices suggest. IMO, I would not try to reuse an action...
Actions are one-shot classes. Once the action is "done" or has been stopped or the node that runs the action is deallocated the action will be (auto-)released.
If you need to re-use actions, there's only a rather scary solution available: you need to send the corresponding init… message to the existing action again. You will also have to manually retain the action.
Actions are very lightweight classes, their runtime performance is comparable to allocating a new NSObject instance. Personally, I think if you're in performance trouble because you're creating and releasing many actions, I would say that you're using actions too much and should look for a better solution.
When you pass the reference to the CCNode with the runAction message, it hands the action to a CCActionManager, which sends it a retain message. Once the action completes, it sends a release message. If you want to keep using an action, you should keep a reference to it and send your own retain and release messages.
The actions are designed to be lightweight "fire and forget" objects. I wouldn't worry about it unless you're noticing performance problems and trace it back to them.