Check for a live Data Source Name before proceeding - ColdFusion

Would it be OK to have a CF app check for a valid database connection before proceeding to process a request?
This is because there may be instances where the database server is down or being upgraded, so an error occurs when a db-dependent request is made.
If there is no connection to the db server, the user could be redirected to a safe page.
Or would cfcatch work?
How can this check be done?
Thank you.

In the onRequestStart method of your Application.cfc (or in an Application.cfm file) you can run a simple query to check that the database is available. Wrap the query in cftry/cfcatch. If the query fails, you can redirect the user inside the cfcatch; if it succeeds, you can be reasonably sure that your database is "alive".
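A minimal sketch of that check, assuming a datasource named "myDSN" and a static /maintenance.cfm page to redirect to (both are placeholders; adjust the ping query for your database, e.g. Oracle needs FROM DUAL):
<!--- in Application.cfc, inside onRequestStart() --->
<cftry>
    <!--- lightweight query just to confirm the datasource responds --->
    <cfquery name="qPing" datasource="myDSN">
        SELECT 1 AS alive
    </cfquery>
    <cfcatch type="database">
        <!--- database unreachable: send the user to a safe page --->
        <cflocation url="/maintenance.cfm" addtoken="false">
    </cfcatch>
</cftry>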

I've used such a check in one project. The code may look as follows (not sure if it will work in ColdFusion versions lower than 8); here it is as a small UDF written in CFScript:
function isDataSourceAlive(dsn) {
    // service factory object instance
    var factory = CreateObject("java", "coldfusion.server.ServiceFactory");
    // the datasource service
    var dsService = factory.DatasourceService;
    // verify the dsn
    return dsService.verifyDataSource(arguments.dsn);
}
Oh, I even found a small note in the code I wrote on my old laptop a couple of years ago:
// [performance note] this server check takes 1-3ms at local PC (Kubuntu 7.10, CF8 + Apache2, Sempron 3500+, 1GB RAM)
While the time looks small, I found that doing this check on each request was not really useful for my application. Anyway, I have a habit of using try/catch extensively for error handling. But if your datasources change frequently, it may make more sense.
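For completeness, a hypothetical call from onRequestStart (isDataSourceAlive, myDSN and /maintenance.cfm are the placeholder names used above):
<cfif NOT isDataSourceAlive("myDSN")>
    <cflocation url="/maintenance.cfm" addtoken="false">
</cfif>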

Adding an extra query to every request to make sure that the database is up is a patently bad idea. A better approach would be to build a "maintenance mode" switch into your application that you would manually enable when you are doing planned maintenance (upgrades, etc).
If you want a "friendly" page displayed when an error (like a database issue) occurs, then use the onError() method in Application.cfc and/or the <cferror .../> tag in Application.cfm as a global error handler.
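A minimal sketch of such a handler in Application.cfc (the /error.cfm template is a placeholder):
<cffunction name="onError" returntype="void">
    <cfargument name="exception" required="true">
    <cfargument name="eventName" type="string" required="true">
    <!--- show a friendly page instead of the raw error --->
    <cfinclude template="/error.cfm">
</cffunction>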

If you are worried the db could vanish, I would implement a "SELECT 1 AS A" query in your onRequestStart handler that runs only every N minutes. This can be accomplished by using the query caching feature. I'd start by performing the query every 30 minutes.
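A sketch of that cached check, assuming a datasource named "myDSN" (cachedwithin keeps the result for 30 minutes, so the database is only actually hit when the cache expires):
<cftry>
    <cfquery name="qHeartbeat" datasource="myDSN"
             cachedwithin="#CreateTimeSpan(0, 0, 30, 0)#">
        SELECT 1 AS A
    </cfquery>
    <cfcatch type="database">
        <cflocation url="/maintenance.cfm" addtoken="false">
    </cfcatch>
</cftry>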


How could I modify django-tracking2 so users can opt out of tracking

I'm making a website right now and need to use django-tracking2 for analytics. Everything works, but I would like to allow users to opt out, and I haven't seen any options for that. I was thinking modifying the middleware portion might work, but honestly I don't know how to go about that yet since I haven't written middleware before.
I tried writing a script to check a cookie called no_track: if it wasn't set, I would set it to False for default tracking, and if they reject, it sets no_track to True. But I had no idea where to implement it (other than the middleware; when I tried that, the server told me to contact the administrator). I was thinking maybe I could use signals to prevent the user being tracked, but that would slow down the page since it would have to prevent a new Visitor instance on each request (it would likely keep making new instances, since each request would look like a new user). Could I subclass the Visitor class and modify __init__ to check for the cookie and either let it save or not?
Thanks for any answers. If I find a solution I'll edit the post, or post and accept an answer, in case someone else needs this.
I made a function in my tools file (which holds all functions used throughout the project to make my life easier) to get and set a session key. Inside the VisitorTrackingMiddleware, in the _should_track() function, I placed a check that looks for the session key (after _should_track() verifies that sessions are installed and before all other checks) using the check_session() function from my tools file. If the key doesn't exist, the function creates it with a default of True (track the user until they accept or reject) and returns an HttpResponse (left over from trying the cookie method).
When I used the cookie method, the Firefox console said the cookie would expire, so I just switched to sessions; another reason is that django-tracking2 runs on them anyway.
It seems to work very well and didn't have a very large impact on load times: every time a request is made, that function runs, my debug output tells me whether it's tracking me or not, and all the buttons work through AJAX. I want to run some tests to see if this does indeed work, and if so, maybe I'll submit a pull request to django-tracking2 in case someone else wants to use it.
A big advantage of this is that you can allow users to change their minds, or re-prompt at sign-up, depending on whether they accepted or not. With the way check_session() is set up, I can use it in template tags and class methods as well.
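A rough sketch of what that helper and the middleware hook might look like (check_session, set_tracking_preference and the no_track session key are my placeholders, not django-tracking2 API; the real _should_track lives in django-tracking2's VisitorTrackingMiddleware):
# tools.py -- hypothetical helpers; all names are placeholders
from django.http import HttpResponse

def check_session(request):
    """Return True if this visitor may be tracked, creating the key if missing."""
    if 'no_track' not in request.session:
        # default: keep tracking until the user explicitly opts out
        request.session['no_track'] = False
    return not request.session['no_track']

def set_tracking_preference(request):
    """Tiny view the accept/reject buttons can call via AJAX (URL wiring omitted)."""
    request.session['no_track'] = request.GET.get('no_track') == 'true'
    return HttpResponse('ok')

# inside a subclassed or patched VisitorTrackingMiddleware._should_track(),
# after the session check and before the other checks:
#     if not check_session(request):
#         return False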

Abort or terminate the Request which takes long time to respond back

I have an application developed using SmartGWT, JAX-RS, EJB and JPA.
I have one scenario (a search screen) where the user wants to extract data by entering first name, last name, middle name, SSN, email, etc.
The database contains a huge number of records (millions), so queries take a lot of time to respond.
For example, the user searches by first name and the request takes a long time to come back; in that case the user wants to cancel/terminate/abort the request.
Is it possible, either in SmartGWT or JAX-RS (the web API), to terminate the request, so that the user can cancel it and move on?
PS: I tried a lot of options, but I didn't find a proper solution.
One solution is to put the business logic in a stateful bean and put the bean in the HTTP session. Now you have access to the currently used persistence context and the open transaction, so you can call Session.cancelQuery(). But this method has a limitation: it works only if the result set has not yet been returned. If this limitation hurts you, check this answer please.
There are other workarounds to synchronize the web client with the business method, but this is the one I like most.
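A rough sketch of that approach, assuming a Hibernate-backed persistence unit (bean, entity and method names are placeholders, and the concurrent-access rules for stateful beans are glossed over; cancelQuery() has no effect once the result set has already been returned):
import java.util.List;
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;

// hypothetical stateful search bean kept in the HTTP session
@Stateful
public class SearchBean {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    public List searchByFirstName(String firstName) {
        return em.createQuery("select p from Person p where p.firstName = :fn")
                 .setParameter("fn", firstName)
                 .getResultList();
    }

    // called from a second request, e.g. a JAX-RS "cancel" endpoint,
    // against the same bean instance stored in the HTTP session
    public void cancel() {
        // unwrap the Hibernate Session behind the persistence context
        org.hibernate.Session session = em.unwrap(org.hibernate.Session.class);
        session.cancelQuery();
    }
}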
One more thing to consider for this use case is introducing a lexical search engine like Solr or Elasticsearch, which can be updated frequently with data from the database. It fits lexical search perfectly, tolerates typos and returns results very quickly.

Profile attribute being magically set in Siebel

We have a very weird issue in our Siebel 7.8 application.
In the Application_Start event we define a bunch of profile attributes, which determine whether the logged-in user will be allowed to perform certain operations or not. The code is something like this:
if (userHasSuperpowers) {
TheApplication().SetProfileAttr("CanFly", "Y");
} else {
// CanFly is not set, and GetProfileAttr("CanFly") returns ''
}
Everything works fine, except for one of these profile attributes. The conditions are not met, so we don't set its value. But when we check it using GetProfileAttr... it returns 'Y' instead of ''.
I've checked the code. A lot. I've put traces everywhere, and I'm 100% sure that when the last line of the Application_Start event executes, the attribute is still empty. However, in the first Applet_Load event after the login (in the HLS Salutation Applet (HLS Home) applet), its value has already changed to 'Y'. Why!!? I've looked everywhere, but I can't find anywhere else where we'd be doing a SetProfileAttr. So far, I've ruled out:
Every browser and server script for all our applets, application, BCs and business services.
All the runtime business services (the ones defined directly in the application instead of the SRF).
The Personalization Profile business component fields.
SmartScripts (not that they would matter in this particular scenario, I just mention them to acknowledge that you can set profile attributes there too).
Workflows: every step invoking the SIS OM PMT Service method Set Profile Attribute.
Siebel magically setting its value. The profile attribute name is custom made, in Spanish, and it contains our project name and a row_id. I really don't think Siebel is using the same name for its own profile attributes :).
But wait, there is more; I left the best part for last: the problem only happens in our development environment!
It's not an SRF issue: if we promote the same SRF to our testing or production environments, it works and returns the expected value.
It's not a data problem: still with the same SRF, I can use my local thick client, connecting to our development database with the same login and password, and it works fine too.
It's not a concurrency problem: we are testing with only one user logged in. And even if we had more, they wouldn't share sessions. And even if they did, the value wouldn't be always 'Y'.
It's not a temporary glitch, or something due to a wrong incremental compilation or a corrupted SRF: we have been experiencing this for at least 6 months (obviously, in that time frame, we've had dozens of different SRF files... all of them having the same problem, but only in development, and only if you use the server and not the dedicated client... seriously...).
Where else could I search for the profile attribute being set? I've read that they can be persisted to the DB, but in order to do so, you have to define them as a field in a BC based on an S_PARTY extension table, right?
Is there any way to trace profile attribute changes somehow? Maybe raising some log level?
How can I find out at least what's being executed after the Application_Start, before loading the first applet?
Any other ideas? I tried checking the SQL spool file too, but didn't find anything suspicious there either (e.g., any of the queries we use to check the conditions being run twice with different parameters).
Update: following Ranjith R's suggestions, I've also checked:
Other vanilla business services which could be also invoked from a workflow to set a profile attr: User Registration > SetProfileAttr, SessionAccessService > SetProfileAttr and ISS Promotion Agreement Manager > SetProfileAttributes.
Runtime events setting profile attributes directly or using a business service (we don't have any runtime events apart from the vanilla ones).
Business services being called from DVMs (we only have vanilla data validation rules, and none of them apply to our buscomps).
Still no luck...
Ok... finally we found what's happening:
We access the URL to our server and get to the login page. This triggers a first Application_Start event, for the SADMIN user.
We set the profile attributes in that session. SADMIN is the Siebel administrator user, so yes, he hasSuperpowers and therefore we do TheApplication().SetProfileAttr("CanFly", "Y");.
The Application_Start event finishes.
We enter our username and password in the login screen to access into Siebel. This triggers a second Application_Start event, this time for our user. This is the one I was monitoring with the trace files.
We set the profile attributes again in the new session. Our user doesn't hasSuperpowers, so we don't set any value for the CanFly attribute.
The Application_Start event finishes, and CanFly is still empty.
Siebel merges both sessions into one before loading the first screen!! Or at least, it transfers over the profile attributes we had set for SADMIN.
I'm sure it happens that way, for two reasons. First, we changed the profile attribute name to include the username too. And second, instead of storing just an "Y", we are storing now the current date:
var time = (new Date()).getTime();
TheApplication().SetProfileAttr("CanFly_" + TheApplication().LoginName(), time);
We end up having CanFly_SADMIN, but no CanFly_USER, and the time value stored is the same we see in the log file for step 2... which is smaller than any of the values for the *_USER attributes.
So that's what's happening. I still don't know why Siebel behaves this way, but that would be a matter for another question. According to the Siebel Bookshelf:
The Start event is called when the client starts and again when the user interface is first displayed.
...but it doesn't say anything about it being called from two different sessions, with different users too, and then merging them together. It must be something misconfigured in our dev environment, considering it doesn't happen in the other ones.
Does Siebel 7.8 have runtime events? I can't recall. Runtime events have an action set for SetEvent, which can set/clear profile attributes.
There are still other vanilla business services which can set profile attributes; try searching in Tools, in the flat view under Business Service Methods, for *rofile*tt*.
The SIS OM service can also be invoked from DVMs or from runtime events directly, so that's also a possibility.
There is no logging system to see the values of profile attributes changing; testing is the only way out.

Sharing data across Sitecore pipelines

I'm trying to perform some actions in the "httpRequestBegin" pipeline only when necessary.
My processor is executed after Sitecore resolves the user (processor type="Sitecore.Pipelines.HttpRequest.UserResolver, Sitecore.Kernel"), as I'm resolving the user too if Sitecore is not able to resolve it first.
Later, I want to add some rendering in the "insertRenderings" pipeline, but only if the actions in the previous pipeline were executed (if I resolved the user, show a message), so I'm trying to save some "flag" in the first step to check in the second.
My question is, where can I store that flag? I'm trying to find some kind of "per request" cache...
So far, I've tried:
The session: wrong, it's too early, the session doesn't exist yet.
Items (HttpContext.Current.Items): it doesn't work either, my item is not there in the second step.
So far I'm using the application cache (HttpContext.Current.Cache) with some unique key, but I don't like this solution.
Does anybody know a better approach to sharing this "flag"?
You could add a flag to the request headers and then check its existence in the later pipelines, e.g.
// in HttpRequest pipeline
HttpContext.Current.Request.Headers.Add("CustomUserResolve", "true");
// in InsertRenderings pipeline
var customUserResolve = HttpContext.Current.Request.Headers["CustomUserResolve"];
if (Sitecore.MainUtil.GetBool(customUserResolve, false))
{
// custom logic goes here
}
This feels a little dirty; I think adding to Request.QueryString or Request.Params would have been nicer, but those are read-only. However, if you only need this for a one-time deal (i.e. only the first time the user is resolved) then it will work, since in the next request the headers are back to default without your custom header added.
HttpContext.Current.Cache or HttpRuntime.Cache could be the fastest solution here, though this approach would not preserve data when the AppPool gets recycled.
If you add only a few keys to the cache and then maintain them, this solution might work for you. If each request puts an entry into the cache, it may eventually overflow the memory used by the worker process in the long run.
As an alternative, you may try the Sitecore.Context.ClientData property. It uses a ClientDataStore that employs a database (look for the clientDataStore section in the web.config file) to store data. These entries can survive an AppPool recycle.
Though if you use it a lot, it may become a bottleneck under load when you need to write to and/or read from the entries.
If you do know that a lot of entries could be created for sharing purposes, I'd create a scheduled task to clean obsolete entries out of the data store.
I know this is a very old question, but I just want to post the solution I worked out.
The below will hold data on a per-HTTP-request basis:
HttpContext.Current.Items["ModuleInfo"] = "Custom Module Info";
We can store data in the HttpContext in one Sitecore pipeline and retrieve it in another...
https://www.codeproject.com/Articles/146455/When-Can-We-Use-HttpContext-Current-Items-to-Store
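A minimal sketch of that idea across the two pipelines from the question (the processor class names are placeholders, and the args type names should be checked against your Sitecore version; HttpContext.Current.Items only lives for the current request):
// httpRequestBegin processor (placeholder class name)
public class CustomUserResolver : Sitecore.Pipelines.HttpRequest.HttpRequestProcessor
{
    public override void Process(Sitecore.Pipelines.HttpRequest.HttpRequestArgs args)
    {
        // ... resolve the user here, then remember that we did ...
        System.Web.HttpContext.Current.Items["CustomUserResolve"] = true;
    }
}

// insertRenderings processor (placeholder class name)
public class ShowResolvedUserMessage
{
    public void Process(Sitecore.Pipelines.InsertRenderings.InsertRenderingsArgs args)
    {
        var flag = System.Web.HttpContext.Current.Items["CustomUserResolve"];
        if (flag is bool && (bool)flag)
        {
            // add the extra rendering / message here
        }
    }
}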

Diagnosing a Sporadic Django/Postgres "DatabaseError: current transaction is aborted"

I have a "DatabaseError: current transaction is aborted" that comes and goes (to be specific, 11 times out of 841) in a Django 1.3 project using Postgres. The project is a quiz site and the error occurs when a user submits the answer form in the view. From the database's perspective, the process involves a number of queries and looks like this:
Gather all of the correct answers for the question (they are multiple choice and may need more than one answer)*
Grab the user's profile
Save this answer
Query for the user's new point total
Save the total to their profile
Check to see if they qualify for a new reward
Award new reward if they do
Somewhere in that tortured process, this error crops up (I'm guessing because one query isn't waiting for the others). Is there a way for me, in production (i.e., DEBUG = False), to log the database errors just in this case? I'm on WebFaction and the Postgres error logs are not available to me. Could I steal something from this middleware example to fire in just this specific case?
Alternatively, is there a better way to find this error or should I be wrapping the individual queries in transactions (unfortunately they aren't all in the same place in the code, not sure if wrapping the view in a transaction decorator would help)?
*Just to confuse matters, the multiple right answers requirement was added in the middle of development and then dropped right before we went live, so I could simplify this process somewhat, basically skipping steps 1 and 4, but I'd like to know a general answer to this sort of mysterious issue.
You haven't said where in your 7 steps you have transactions that begin and end. That would be helpful to know.
One source of "transaction aborted" messages is due to deadlocks. More details would be in the PostgreSQL logs.
But the bottom line is that you will continue to have a painful and time-consuming experience debugging PostgreSQL if you can't get access to your PostgreSQL error messages. Take that up with WebFaction. If they can't help and your time is worth much, your bottom-line costs will be lower if you move to an environment that provides this fundamental feature.
You have to enable autocommit for the database connection. In your DATABASES entry, include:
'OPTIONS': {'autocommit': True,},
By default, Django opens a transaction at the first query. By using this option, you have to manually start a transaction (e.g. using @commit_on_success). Since there is no transaction open anymore, you'll get the actual error that was previously masked by the transaction error.
The autocommit setting will be the new default for Django 1.6, see https://docs.djangoproject.com/en/dev/ref/databases/#postgresql-notes
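A hedged sketch of what that looks like on Django 1.3-1.5 (the database name, credentials and view are placeholders; commit_on_success was superseded by atomic in Django 1.6):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'quizdb',        # placeholder
        'USER': 'quizuser',      # placeholder
        'PASSWORD': 'secret',    # placeholder
        'OPTIONS': {'autocommit': True},
    }
}

# views.py -- wrap the multi-query answer submission in one explicit transaction
from django.db import transaction

@transaction.commit_on_success
def submit_answer(request):
    # steps 1-7 (gather answers, save the answer, update points, award rewards)
    # run inside a single transaction; if any query raises, everything rolls
    # back and the real error surfaces instead of "current transaction is aborted"
    pass  # placeholder for the actual view logic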