I am working with a shared server running dual instances of CF10. My application stores some functions in application variables and it is very important that both instances get refreshed at the same moment when the functions are updated.
The question then is this: Do both instances get updated when the OnApplicationStart() function is run? This would be the only way to ensure proper code synchronization across instances.
I've not been able to find any reference to this and seem to be getting ambiguous results from the server.
Thanks for any shared knowledge.
Each ColdFusion instance can contain multiple applications.
Each application can contain multiple application and session variables.
The same code base can be run on multiple instances, even as multiple individual applications on the same instance.
When you restart Instance1, only the application(s) on that instance will pick up any code changes related to application or session variables. Therefore, you have to restart all instances on the same server to pick up these changes.
If you need a value to be accessible across all instances by multiple applications on the same physical server, then consider creating a variable in the SERVER scope.
If you set a variable like <cfset server.foo = "hello">, then any application in any instance across the same physical (or virtual) server can access that variable. This would avoid having to restart all of the instances: just update the function, then run a one-time script to reset the variable.
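A minimal CFML sketch of that one-time reset pattern, using server.foo from the example above; the template name (resetServerVar.cfm) and the locking are illustrative:

    <!--- resetServerVar.cfm: run once after deploying the updated function --->
    <cflock scope="server" type="exclusive" timeout="10">
        <cfset server.foo = "hello">
    </cflock>

    <!--- any other template can then read the value --->
    <cflock scope="server" type="readonly" timeout="10">
        <cfoutput>#server.foo#</cfoutput>
    </cflock>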
As of ColdFusion 9, you can opt to define this variable inside the onServerStart() method of Server.cfc (which lives in the ColdFusion web root by default, or wherever the Administrator points to). This will make sure it's available whenever the whole server is restarted.
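A sketch of what that could look like, assuming the default Server.cfc location:

    // Server.cfc in the ColdFusion web root
    component {
        function onServerStart() {
            // seed the server-scoped value once, at server startup
            server.foo = "hello";
        }
    }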
I am quite new to using Node.js so please excuse my ignorance :)
Right now I have a web app which uses Node/Angular/Express in order to call C++ functions from a DLL on the server, and return the results to the client. However, I need the app to support multiple users.
The C++ server function I am calling returns a result based on a global object defined in the DLL (which can be modified). The problem is that when there are multiple clients accessing the web app, modifying the global object in one session affects the object being accessed in other sessions.
Example:
x is an object in the C++ DLL on the server
User #1 sets x to 5
Server returns 5 to User #1
User #2 sets x to 8
Server returns 8 to User #2
User #1 asks for x value
Server returns 8 to User#1 (Would like 5 to be returned instead, since that's the latest value of x from User #1's perspective).
My assumption is that I should be using something like Socket.IO (similar to the basic chat tutorial at http://socket.io/get-started/chat/). It can indicate when a user connects to the app, but it's not able to keep the sessions independent from the user's perspective.
Any thoughts on how to go about keeping these sessions independent? Thanks!
This is not a node.js problem. node.js can do sessions just fine (express-session is a very popular way to implement sessions using express and node.js).
This sounds like you have a DLL that supports only a single user at a time which really isn't appropriate for multi-user server usage.
Though there are very ugly ways to work around that (such as launching a cached and aged separate child process for each user, so each child process has its own separately loaded copy of the DLL), what you really need is a DLL that doesn't keep a single user's state in global memory, so it can be used on behalf of multiple users.
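For the session side, a minimal express-session sketch (the route paths and the session key x are illustrative, and this only helps once the per-user state can live outside the DLL, as described above); each client gets its own req.session instead of one shared global:

    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(session({
        secret: 'replace-with-a-real-secret', // signs the session id cookie
        resave: false,
        saveUninitialized: false
    }));

    // Each client gets its own req.session.x instead of one global x in the DLL.
    app.get('/set/:value', (req, res) => {
        req.session.x = Number(req.params.value);
        res.send('x set to ' + req.session.x);
    });

    app.get('/get', (req, res) => {
        res.send('x is ' + req.session.x);
    });

    app.listen(3000);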
My client wants to move to a ColdFusion load-balancing environment for better availability and scalability of the site. I know how to setup clusters and instances in the ColdFusion Admin. We should also use J2EE session mgmt for sticky sessions.
But I am not sure of other code level changes required while moving from a single server to load-balancing environment.
Does anyone have experience with this to share? Or any helpful links?
Skipping the session-scope issues you're bound to enjoy, I'll focus on less common code-level strategies.
You will have 2+ isolated application scopes. This creates synchronization challenges. Examine the app code for writes to the app scope. Should some condition require updating an app-scoped value, that value must be reflected in all sibling application scopes.
Know that each instance will have its own onApplicationStart() & onApplicationEnd() events. Depending on what happens in the code, it could cause mischief.
Be aware of things like FuseBox (framework) when load balancing. FuseBox generates files locally that are not replicated on other server instances.
When logging, emailing errors, etc., use an instance identifier so you'll know which server you're working with.
Should your app need the originating IP address of a request, you may need to enable X-Forwarded-For HTTP headers within your load balancer. Otherwise, you could get the IP of the load balancer on every request.
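Once that header is enabled, a CFML sketch of reading it (the handling here is illustrative; X-Forwarded-For can carry a comma-separated proxy chain, with the original client first):

    <cfset headers = getHttpRequestData().headers>
    <cfif structKeyExists(headers, "X-Forwarded-For")>
        <cfset clientIP = trim(listFirst(headers["X-Forwarded-For"]))>
    <cfelse>
        <!--- no proxy header: fall back to the directly connected address --->
        <cfset clientIP = cgi.remote_addr>
    </cfif>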
Verify Identical on EACH Instance:
Security implementations
ColdFusion & Java versions
Datasources
Mappings
Virtual directories
Shared resource locations, etc.
CF Admin settings: site-wide error handling, etc.
CF account privileges (important!)
Consider using the ColdFusion Server Manager to help keep instances consistent. ;)
SAS has a stored process server that runs stored processes and a workspace server that runs SAS code. But a stored process is nothing but a combination of SAS code statements, so why can't the workspace server run SAS code?
I am trying to understand why SAS developers came up with the concept of a separate server just for stored processes.
A stored process server reuses the SAS process between runs. It is a stateless server meant to run small pre-written programs and return results. The server maintains a pool of processes and allocates requests to that pool. This minimizes the time to run a job, as there is no process startup/shutdown overhead.
A workspace server is a SAS process that is started for 1 user. Every user connection gets a new SAS process on the server. This server is meant to run more interactive processes where a user runs something, looks at output and then runs something else. Code does not have to be prewritten and stored on the server. In that scenario, startup time is not a limiting factor.
Also, a workspace server can provide additional access to the server. A programmer can use this server to access SAS data sets (via ADO in .NET or JDBC in Java) as well as files on the server.
So there are 2 use cases and these servers address them.
From a developers perspective, the two biggest differences are:
Identity. The stored process server runs under the system account (&sysuserid) configured in the SAS General Servers group - sassrv by default. This will affect permissions (e.g. database access) at the OS level. Workspace sessions always run under the credentials of the client account (the logged-in user).
Sessions. The option to retain 'state' by leaving your session alive for a set time period (and accessing the same session again using a session id) is available only for the Stored Process server - however, avoid this pattern at all costs! The reason is that such a session will tie up one of your multibridge ports and play havoc with the load balancing. It's also a poor design choice.
Both stored process and workspace servers can be configured to provide pooled sessions (generic sessions kept alive to be re-used, avoiding startup cost for frequent requests).
To further address your points - a stored process is a metadata object which points to (or can also contain) raw SAS code. A stored process can run on either type of server (stored process or workspace). The choice will depend on your functional needs above, plus performance considerations as per your pooling and load-balancing configuration.
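For reference, a minimal sketch of the raw code a stored process can point to (the PROC PRINT body is illustrative; %STPBEGIN/%STPEND bracket ODS output delivery for streaming results):

    *ProcessBody;
    %stpbegin;
    /* any ordinary SAS statements can go here */
    proc print data=sashelp.class;
    run;
    %stpend;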
I have the following requirements
Multiple JARs. Each running an embedded Jetty.
Run everyone on same domain/port - using reverse proxy (Apache)
A JAR can have multiple instances running on different machines (yet under same host/port).
Complete session separation - absolutely no sharing, even between 2 instances of the same webapp.
Scale this all dynamically.
I do not know if this is relevant, but I know Spring Security is used in some of these web apps.
I got everything up and running by adding reverse proxy rules and restarting Apache.
Here is a simplified description of 2 instances for webapp-1 and 2 instances for webapp-2.
http://mydomain.com/app1 ==> 1.1.1.1:9099
http://mydomain.com/app2 ==> 1.1.1.1:9100
http://mydomain.com/app3 ==> 1.1.1.2:9099
http://mydomain.com/app4 ==> 1.1.1.2:9100
After setting this up successfully (almost), we see problems with JSESSIONID cookie.
Every app overrides the others' cookie - which means we have yet to achieve total session separation as one affects the other.
I read a lot about this issue online, but the solutions never really suffice in my scenario.
The IDEAL solution for me would be to define JETTY to use some kind of UUID for the cookie name. I still cannot figure out why this is not the default.
I would even go for a JavaScript solution. JavaScript has the advantage that it can see the URL after ReverseProxy manipulation. So for http://mydomain.com/XXX I can define cookie name to be XXX_JSESSIONID.
But I cannot find a howto on these.
So how can I resolve this and get a total separation of sessions?
You must spend some time understanding what session manager you are using and what features/benefits it gives you. If you have no db available and no custom session manager, then I am inclined to believe you are using the HashSessionManager that we distribute, which is usable for session management on a single host only; there is no session sharing across JVMs in this case.
If you are running 4 separate jvm processes (and using the HashSessionManager), as the above seems to indicate, then no sessions are being shared across nodes.
Also, you seem to be looking to change the name of the session id cookie for each application. To do that, simply set a different name for each application.
http://www.eclipse.org/jetty/documentation/current/session-management.html
You can set a new org.eclipse.jetty.servlet.SessionCookie name for each webapp context and that should address your immediate issue.
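A sketch of that per-webapp setting, assuming it goes in each webapp's web.xml (the cookie name APP1_JSESSIONID is illustrative; on Servlet 3.0 containers, <session-config><cookie-config><name> achieves the same thing):

    <!-- web.xml of webapp-1: give this app its own session cookie name -->
    <context-param>
        <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
        <param-value>APP1_JSESSIONID</param-value>
    </context-param>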
There are a lot of solutions for restricting an application from running twice: searching by process name, using a named mutex, etc. But all of these methods don't work if I want to restrict my application per shell session.
A user may have more than one login session and shell on Windows (right?). If this is true, I want to be able to run one instance of my application in every shell session, but only one per session.
Is there a way to get a shell identifier which could then be put into the mutex name?
You can create local (session-only) or global (whole-system) mutexes. See http://msdn.microsoft.com/en-us/library/system.threading.mutex.aspx for more info; look for 'global' and 'local'.
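A minimal Win32 sketch of the session-local variant (the mutex name is illustrative; the Local\ prefix scopes the name to the current logon session, while Global\ would make it system-wide):

    #include <windows.h>
    #include <iostream>

    int main() {
        // Named kernel objects under "Local\" live in the per-session namespace.
        HANDLE hMutex = CreateMutexW(nullptr, TRUE, L"Local\\MyAppSingleInstance");
        if (hMutex != nullptr && GetLastError() == ERROR_ALREADY_EXISTS) {
            std::cout << "Another instance is already running in this session.\n";
            CloseHandle(hMutex);
            return 1;
        }
        // ... run the application ...
        if (hMutex != nullptr) CloseHandle(hMutex);
        return 0;
    }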
The shell identifier you want to use is the user name. This is available as Environment::UserName or GetUserName()
You can go from process id of the current process to a WTS session ID, I think that will do what you need. ProcessIdToSessionId
You should be aware that a terminal services session can be disconnected from one desktop and then connected to by another, so the 'shell' can actually change, but the session ID should remain the same.
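A sketch of folding that session ID into the mutex name (the MyApp_ prefix is illustrative):

    #include <windows.h>
    #include <string>

    // Build a per-session mutex name from the current WTS session ID.
    std::wstring SessionMutexName() {
        DWORD sessionId = 0;
        ProcessIdToSessionId(GetCurrentProcessId(), &sessionId);
        return L"MyApp_" + std::to_wstring(sessionId);
    }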
If you want to restrict it to one instance per login session, even if the same user account has multiple login sessions running at the same time (Server/Terminal Server), you could use the handle from GetShellWindow to check if there is an instance already running for this desktop.