As per documentation:
"timeoutseconds = seconds"
The TIMEOUTSECONDS parameter specifies the
time interval that an OLAP server or a stored process server waits
before it stops a client process and cleans up the server run-time
environment context.
My question is: how do I obtain this value programmatically, e.g. as part of a Stored Process?
STPSRVGETN
Returns the numeric value of a server property
SAS Statements                                          Results
------------------------------------------------------  ---------------------
dsesstimeout=stpsrvgetn('default session timeout');
put dsesstimeout=;                                       dsesstimeout=900

sessmaxtimeout=stpsrvgetn('maximum session timeout');
put sessmaxtimeout=;                                     sessmaxtimeout=3600

session=stpsrvgetn('session timeout');
put session=;                                            session=900

maxconreqs=stpsrvgetn('maximum concurrent requests');
put maxconreqs=;                                         maxconreqs=1

deflrecl=stpsrvgetn('default output lrecl');
put deflrecl=;                                           deflrecl=65535

version=stpsrvgetn('version');
put version=;                                            version=9.4
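As a minimal sketch (assuming the code runs inside a stored process request, where STPSRVGETN is available), you can read a property into a data step variable and act on it:

data _null_;
    /* returns the session timeout, in seconds, for this stored process server */
    timeout = stpsrvgetn('session timeout');
    put 'Session timeout in seconds: ' timeout;
run;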
I have a Django 3.1.3 server that uses Redis for its cache via django-redis 4.12.1. I know that cache locks can generally be set via the following:
with cache.lock('my_cache_lock_key'):
    # Execute some logic here, such as:
    cache.set('some_key', 'Hello world', 3000)
Generally, the cache lock releases when the with block completes execution. However, I have some custom logic in my code that sometimes does not release the cache lock (which is fine for my own reasons).
My question: is there a way to set a timeout value for Django cache locks, much in the same way as you can set timeouts for setting cache values (cache.set('some_key', 'Hello world', 3000))?
I've answered my own question. The following arguments are available for cache.lock():
def lock(
self,
key,
version=None,
timeout=None,
sleep=0.1,
blocking_timeout=None,
client=None,
thread_local=True,
):
Cross-referencing that with the comments from the Python Redis source, which uses the same arguments:
``timeout`` indicates a maximum life for the lock.
By default, it will remain locked until release() is called.
``timeout`` can be specified as a float or integer, both representing
the number of seconds to wait.
``sleep`` indicates the amount of time to sleep per loop iteration
when the lock is in blocking mode and another client is currently
holding the lock.
``blocking`` indicates whether calling ``acquire`` should block until
the lock has been acquired or to fail immediately, causing ``acquire``
to return False and the lock not being acquired. Defaults to True.
Note this value can be overridden by passing a ``blocking``
argument to ``acquire``.
``blocking_timeout`` indicates the maximum amount of time in seconds to
spend trying to acquire the lock. A value of ``None`` indicates
continue trying forever. ``blocking_timeout`` can be specified as a
float or integer, both representing the number of seconds to wait.
Therefore, to give a cache lock a maximum lifetime of 2 seconds, do something like this:
with cache.lock(key='my_cache_lock_key', timeout=2):
    # Execute some logic here, such as:
    cache.set('some_key', 'Hello world', 3000)
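The comments above also describe the blocking behaviour; here is a sketch (untested) combining timeout with blocking_timeout, where redis-py raises LockError if the lock can't be acquired within blocking_timeout:

from django.core.cache import cache
from redis.exceptions import LockError

try:
    # Lock auto-expires after 2 seconds; give up acquiring after 0.5 seconds.
    with cache.lock('my_cache_lock_key', timeout=2, blocking_timeout=0.5):
        cache.set('some_key', 'Hello world', 3000)
except LockError:
    # Another client held the lock for the entire 0.5-second window.
    pass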
I'm sorry for the poor title, but I can't formulate a correct one at the moment.
What is the process:
Get the message id (aka offset) from a sheet.
Get the list of updates from Telegram based on this offset.
Store the last message id, which will be used as the offset in step 1.
Process the updates.
The offset lets you fetch only the updates received after that offset.
E.g. the first time I get 10 messages, and the last message id (the offset) is 100500. Before the second run the bot receives another 10 messages, so there are 20 in total. To avoid loading all 20 messages (10 of which I have already processed), I need to pass the offset from the first run, so the API returns only the last 10 messages.
The PQ query is run in Excel.
let
    // Offset is a number read from a table in Excel. Let's say it's 10.
    Offset = 10,
    // Returns the list of messages as JSON objects.
    Updates = GetUpdates(Offset),
    // This update_id will be used as the offset in the next query run.
    LastMessageId = List.Last(Updates)[update_id],
    // Map the Process function over each update JSON object in the list.
    Map = List.Transform(Updates, each Process(_))
in
    Map
The issue is that I need to store/read this offset number each time the query is executed AND process the updates.
Because of lazy evaluation, with the code above I can either output LastMessageId from the query or output the result of the Map step, but not both.
The question is: how can I do both things, i.e. store/load LastMessageId from Updates and also process those updates?
Thank you.
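One possible workaround (a sketch only; GetUpdates and Process are the helper functions from above, and the record-field names are made up) is to return a record holding both values, so a single evaluation produces the new offset and the processed updates, and other queries can then reference each field:

let
    Offset = 10,
    Updates = GetUpdates(Offset),
    LastMessageId = List.Last(Updates)[update_id],
    Map = List.Transform(Updates, each Process(_)),
    // One record carries both outputs of the same evaluation.
    Result = [LastOffset = LastMessageId, Messages = Map]
in
    Result

Note that queries referencing Result[LastOffset] and Result[Messages] separately may each trigger their own evaluation, so it is safest to land both fields in the workbook from this single query.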
I have a ColdFusion app with a page where I generate and download a file. In some cases that process takes a lot of time. I would like to implement it so that the whole process looks like this:
start the process of generating and downloading a file
if the file is generated within a certain time (e.g. 10 sec.), then provide it to the user immediately
if not, then once the allowed generation time has expired, display a message that the file will be sent to the user's email.
I think CF threads can help me, but I haven't used them before.
thread action="run" name="generateFile" {
var createFileResult = myService.createFile();
}
thread action="join" name="generateFile" timeout="10000"; // 10 sec.
if (generateFile.status != "COMPLETED") {
thread action="terminate" name="generateFile";
var createFileResult = new CreateFileResult(message="We will produce the file and send you a link to download.");
}
Currently, I show a message if the file has not been generated in time, but at the same time I lose the generation work already done on the first request.
I'm interested in the following: how can I, without killing the thread in which the file is generated (if that is possible), let the file generation complete in the background and immediately send the file to the user's email?
I want to implement it this way to use server resources rationally: not to throw away the work already done when the file wasn't ready within the first request, and not to load the server again by regenerating the same file just to send it by email.
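A sketch of one approach (untested; it glosses over the race where the thread finishes just as the join times out, and emailDownloadLink is a hypothetical helper): instead of terminating the thread, set a flag that the thread checks when it finishes, so it delivers the file by email in the background:

request.emailResult = false;

thread action="run" name="generateFile" {
    var createFileResult = myService.createFile();
    // If the request has already answered the user, deliver by email instead.
    if (request.emailResult) {
        myService.emailDownloadLink(createFileResult); // hypothetical helper
    }
}

thread action="join" name="generateFile" timeout="10000"; // 10 sec.

if (generateFile.status != "COMPLETED") {
    // No terminate: the thread keeps running and will email the result.
    request.emailResult = true;
    createFileResult = new CreateFileResult(message="We will produce the file and send you a link to download.");
}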
How can I change the timeout duration for different operations that can fail due to server inaccessibility? (start_session, insert, find, delete, update, ...)
...
auto pool = mongocxx::pool(mongocxx::uri("bad_uri"), pool_options);
auto connection = pool.try_acquire();
auto db = (*(connection.value()))["test_db"];
auto collection = db["test_collection"];
// This does not help
mongocxx::write_concern wc;
wc.timeout(std::chrono::milliseconds(1000));
mongocxx::options::insert insert_options;
insert_options.write_concern(wc);
// takes about 30 seconds to fail
collection.insert_one(from_json(R"({"name": "john doe", "occupation": "_redacted_", "skills" : "a certain set"})"), insert_options);
[Edit]
Here is the exception message:
C++ exception with description "No suitable servers found:
serverSelectionTimeoutMS expired: [connection timeout calling
ismaster on '127.0.0.1:27017']
It would be helpful to see the actual error message from the insert_one() operation, but "takes about 30 seconds to fail" suggests that this may be due to the default server selection timeout. You can configure that via the serverSelectionTimeoutMS connection string option.
If you are connecting to a replica set, I would suggest keeping that timeout a bit above the expected time for a failover to complete. Replica Set Elections states:
The median time before a cluster elects a new primary should not typically exceed 12 seconds
You may find that it is shorter in practice. By keeping the server selection timeout above the expected failover time, you'll allow the driver to insulate your application from an error (at the expense of wait time).
If you are not connecting to a replica set, feel free to lower serverSelectionTimeoutMS to a lower value, albeit still greater than the expected latency to your mongod (standalone) or mongos (sharded cluster) node.
Do note that since server selection occurs within a loop, the connectTimeoutMS connection string option won't affect the delay you're seeing. Lowering the connection timeout will allow the driver to give up sooner on each attempt to connect to an inaccessible server, but server selection will still block for up to serverSelectionTimeoutMS (and likely retry connections to the server during that loop).
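For example, here is a sketch (with a hypothetical 2-second selection budget) that sets both options on the connection string; the insert_one() above should then fail after roughly 2 seconds instead of 30:

#include <mongocxx/instance.hpp>
#include <mongocxx/pool.hpp>
#include <mongocxx/uri.hpp>

mongocxx::instance inst{};
// serverSelectionTimeoutMS caps the selection loop; connectTimeoutMS bounds
// each individual connection attempt made inside that loop.
auto pool = mongocxx::pool(mongocxx::uri(
    "mongodb://127.0.0.1:27017/?serverSelectionTimeoutMS=2000&connectTimeoutMS=1000"));
auto connection = pool.try_acquire();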
I need guidance on implementing a WHILE loop with Kettle/PDI. The scenario is:
(1) I have some data (maybe a thousand rows, maybe many thousands) in a table, to be validated against a remote server.
(2) Read them and look them up on the remote server; I use the Modified Java Script step for this, as the remote server lookup validation is defined in an external Java JAR file (I can use the "Change number of copies to start..." option on the Modified Java Script step and set it to 5 or 10).
(3) Update the result in the database table. There will be 50 to 60% connection failures in each session.
(4) Repeat steps 1 to 3 until everything is updated to success.
(5) Stop looping on the Nth cycle, to avoid very long or infinite looping; N may be 5 or 10.
How do I design such a WHILE loop in Pentaho Kettle?
Have you seen this link? It gives a pretty detailed explanation of how to implement a while loop.
You need a parent job with a sub-transformation that checks the condition and returns a variable to the job indicating whether to abort or continue.