If CF server is restarted, are all the existing Session and Client variables lost?
Client variables generally live in a database or the registry, and therefore they do persist after a server restart.
Session variables live in RAM and therefore do not persist through a server restart.
I have a working Django application that runs locally against an sqlite3 database without problems. However, when I change the Django database settings to use my external AWS RDS database, all my pages start taking upwards of 40 seconds to load. I have checked my AWS metrics and my instance is not even close to being fully utilized. When I make a request to a view with no database read/write operations, I get the same problem. My activity monitor shows my local CPU spiking with each request, with a process named 'WindowsServer' using most of the CPU.
I am aware that more latency is expected when using a remote database, but I don't think that should result in 40-second page loads. What other problems could be causing this behaviour?
(Screenshot: AWS database monitoring)
(Screenshot: local machine activity monitor)
Your computer is connecting to a server in Amazon's data center, and that round-trip latency is the problem. Production servers should be located in the same place as their DB servers (or at least have a very good connection, so that latency is as low as possible).
--edit--
So we need more details. What is your ISP? What are your connection properties? Uplink, downlink? What are your ping times to the AWS servers?
I would like to start a web server on-demand as an inetd "tcp/wait" service which shuts itself down after a programmable period of inactivity.
Many web servers already support inetd "tcp/nowait" mode, but this mode has the disadvantage that a new process needs to be forked for every new connection. It is therefore slower and more resource-intensive than running a dedicated server daemon.
A web server supporting inetd's "tcp/wait" would only be launched by inetd for the first request, then serve any number of requests using the same server instance until no requests occurred for some period of idle time, in which case the server instance automatically terminates and lets inetd start it again once the next period of activity starts.
Such a tcp/wait inetd web server should have approximately the same efficiency as a dedicated (i.e. permanently running) web server during times of activity. However, it will automatically shut down in times of inactivity, saving system resources.
These irregular, "anti-demand"-driven shutdowns will also clean up any memory leaks in the web server and in any associated FastCGI services (which would terminate together with the web server).
I know that it is already possible to use systemd's socket activation in combination with lighttpd's -i option to implement what I want.
However, I want a solution that also works without systemd, depending on nothing more than a running Internet superserver, no matter how the latter has been started (inetd/xinetd started by sysvinit, runit, manually, or systemd's socket activation replacing inetd/xinetd).
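For illustration, here is a minimal sketch of the wait-mode pattern in Java: in inetd "tcp/wait" mode the child process inherits the listening socket itself, which Java exposes via System.inheritedChannel(); the process then serves connections until a programmable idle period elapses and exits, letting inetd relaunch it on the next request. The port and timeout below are arbitrary placeholders, and the request handling is omitted.

    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class WaitModeServer {
        public static void main(String[] args) throws Exception {
            // In inetd "wait" mode the child inherits the *listening* socket;
            // Java exposes it as System.inheritedChannel().
            ServerSocketChannel listener =
                    (ServerSocketChannel) System.inheritedChannel();
            if (listener == null) {                      // not started by inetd:
                listener = ServerSocketChannel.open();   // bind ourselves for testing
                listener.bind(new InetSocketAddress(8080));
            }
            listener.configureBlocking(false);
            Selector selector = Selector.open();
            listener.register(selector, SelectionKey.OP_ACCEPT);
            final long idleMillis = 60_000;              // programmable idle period
            while (true) {
                // Block until a connection arrives or the idle period expires.
                if (selector.select(idleMillis) == 0) {
                    System.exit(0);                      // idle: inetd restarts us on the next request
                }
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        try (SocketChannel client = listener.accept()) {
                            // ... parse the HTTP request and write a response here ...
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }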
I am using Jetty 9 (Embedded) as a Web server, but I am not using Jetty sessions nor its session manager.
When I launch my server, I notice that 2 threads are automatically created with the name org.eclipse.jetty.server.session.HashSessionManager.
According to the documentation, this is how Jetty manages sessions: it removes timed-out sessions and can even synchronize with an external DB if session sharing is enabled.
Since I am not using Jetty's session management, is there any way I can disable this HashSessionManager? (I did read the documentation but either it was not documented or I managed to miss the part describing how to turn it off!)
Thanks
Answering my own question in case someone else blindly copy-pastes the documentation example!
In the documentation about embedding Jetty (http://www.eclipse.org/jetty/documentation/current/embedding-jetty.html), they create the ServletContextHandler with the SESSIONS flag:
ServletContextHandler context = new ServletContextHandler(
ServletContextHandler.SESSIONS);
By simply removing ServletContextHandler.SESSIONS, the HashSessionManager threads disappear.
That will teach me to understand code instead of just copy-pasting the examples!
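For reference, a complete minimal embedding without session support might look like this (the port and context path are placeholders; NO_SESSIONS is the explicit form of the flag):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;

    public class NoSessionsServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);  // placeholder port
            // NO_SESSIONS omits the session manager entirely, so the
            // HashSessionManager scavenger threads are never started.
            ServletContextHandler context =
                    new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
            context.setContextPath("/");
            server.setHandler(context);
            server.start();
            server.join();
        }
    }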
I am currently using Jetty 9.1.4 on Windows.
When I deploy the WAR file without the hot deployment config and then restart the Jetty service, all client connections to my Jetty server wait during the 5-10 second startup process, after which clients can view the contents.
With the hot deployment config on, the default Jetty 404 error page is shown during that 5-10 second loading interval.
Is there any way I can make hot deployment behave like the complete restart, so that client connections wait instead of seeing the 404 error page?
Unfortunately this does not seem to be possible currently after talking with the Jetty developers on IRC #jetty.
One solution I will try is two Jetty instances behind a load-balancing reverse proxy (e.g. nginx), taking one instance down at a time for deployment.
Of course this will instantly lead to new requirements (session persistence/sharing) which need to be handled. So in conclusion: much work to do in the Java world for zero downtime on deployments.
Edit: I will try this; it seems like a simple enough solution: http://rafaelsteil.com/zero-downtime-deploy-script-for-jetty/ GitHub: https://github.com/rafaelsteil/jetty-zero-downtime-deploy
SAS has a stored process server that runs stored processes and a workspace server that runs SAS code. But a stored process is nothing but a combination of SAS code statements, so why can't the workspace server run SAS code?
I am trying to understand why SAS developers came up with the concept of a separate server just for stored processes.
A stored process server reuses the SAS process between runs. It is a stateless server meant to run small pre-written programs and return results. The server maintains a pool of processes and allocates requests to that pool. This minimizes the time to run a job, as there is no process startup/shutdown overhead.
A workspace server is a SAS process that is started for 1 user. Every user connection gets a new SAS process on the server. This server is meant to run more interactive processes where a user runs something, looks at output and then runs something else. Code does not have to be prewritten and stored on the server. In that scenario, startup time is not a limiting factor.
Also, a workspace server can provide additional access to the server. A programmer can use this server to access SAS data sets (via ADO in .NET or JDBC in Java) as well as files on the server.
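As an illustration of that JDBC access path, a sketch using the SAS IOM JDBC driver might look like the following. The driver class name, host, port, and credentials here are assumptions; verify them against your SAS Integration Technologies installation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SasWorkspaceQuery {
        public static void main(String[] args) throws Exception {
            // SAS IOM JDBC driver from SAS Integration Technologies
            // (class name may vary by release -- check your install).
            Class.forName("com.sas.rio.MVADriver");
            // Hypothetical workspace server host; 8591 is a common default port.
            String url = "jdbc:sasiom://sas.example.com:8591";
            try (Connection conn = DriverManager.getConnection(url, "sasdemo", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name, age FROM sashelp.class")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + ": " + rs.getInt("age"));
                }
            }
        }
    }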
So there are 2 use cases and these servers address them.
From a developer's perspective, the two biggest differences are:
Identity. The stored process server runs under the system account (&sysuserid) configured in the SAS General Servers group, sassrv by default. This will affect permissions (e.g. database access) at the OS level. Workspace sessions always run under the client account (the logged-in user's) credentials.
Sessions. The option to retain 'state' by leaving your session alive for a set time period (and accessing the same session again using a session id) is available only for the Stored Process server; however, avoid this pattern at all costs! The reason is that such a session will tie up one of your multibridge ports and play havoc with the load balancing. It's also a poor design choice.
Both stored process and workspace servers can be configured to provide pooled sessions (generic sessions kept alive to be re-used, avoiding startup cost for frequent requests).
To further address your points: a Stored Process is a metadata object which points to (or can also contain) raw SAS code. A stored process can run on either type of server (stored process or workspace). The choice will depend on your functional needs above, plus performance considerations as per your pooling and load balancing configuration.