I am running a calculation in a PL/pgSQL function and I want to use the result of that calculation in my C++ code. What's the best way to do that?
I could insert the result into a table and use it from there, but I'm not sure how well that squares with best practices. I can also send a message to stderr with RAISE NOTICE, but I don't know whether I can use that message in my code.
The details here are a bit thin on the ground, so it's hard to say for sure.
Strongly preferable, whenever possible, is to just get the function's return value directly: SELECT my_function(args); if it returns a single result, or SELECT * FROM my_function(args); if it returns a row or set of rows. Then process the result like any other query result. This is part of the basic use of simple SQL and PL/pgSQL functions.
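From C++ this is then just an ordinary query. A minimal sketch using libpqxx (a common PostgreSQL client library; my_function, its argument, and the connection string are placeholders):

#include <iostream>
#include <pqxx/pqxx>

int main() {
    pqxx::connection conn("dbname=mydb");
    pqxx::work txn(conn);
    // Call the function like any other query and read its result set.
    pqxx::result r = txn.exec("SELECT my_function(42)");
    int value = r[0][0].as<int>();
    txn.commit();
    std::cout << "Result: " << value << '\n';
}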
Other options include:
Returning a refcursor. This can be useful in some circumstances where you want to return a dynamic result set or multiple result sets, though it's now mostly superseded by RETURN QUERY and RETURN QUERY EXECUTE. You then FETCH from the refcursor to get the result rows.
LISTENing for an event and having the function NOTIFY when the work is done, possibly with the result as a notify payload (see the sketch after this list). This is useful when the function isn't necessarily called on the same connection as the program that wants to use its results.
Create a temporary table in the function, then SELECT from the table from the session that called the function.
Emitting log messages via RAISE and setting client_min_messages so you receive them, then processing them. This is a very ugly way to do it and should really be avoided at all costs.
INSERTing the results into an existing non-temporary table, then SELECTing them out once the transaction commits and the rows become visible to other transactions.
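Here is a minimal sketch of the LISTEN/NOTIFY option mentioned above (the channel name calc_done and the result variable are made up):

-- Inside the PL/pgSQL function, publish the result as the payload:
PERFORM pg_notify('calc_done', result::text);

-- In the consuming session, subscribe to the channel:
LISTEN calc_done;

The client then receives the notification through its driver (e.g. PQnotifies() in libpq) and reads the payload.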
Which is better? It depends entirely on what you're trying to do. In almost all cases the correct thing to do is just call the function and process the return value, but there are exceptions in special cases.
I had to use transaction.on_commit() for synchronous behaviour in one of the signals of my project. Though it works fine, I couldn't understand how transaction.on_commit() decides which transaction to use. I mean, there can be multiple transactions at the same time. So how does Django know which transaction to use in transaction.on_commit()?
According to the docs:
You can also wrap your function in a lambda:
transaction.on_commit(lambda: some_celery_task.delay('arg1'))
The function you pass in will be called immediately after a hypothetical database write made where on_commit() is called would be successfully committed.
If you call on_commit() while there isn’t an active transaction, the callback will be executed immediately.
If that hypothetical database write is instead rolled back (typically when an unhandled exception is raised in an atomic() block), your function will be discarded and never called.
If you are using it in a post_save handler with sender=SomeModel, then on_commit is probably executed each time a SomeModel object is saved. Without the actual code we can't tell the exact case.
If I understand the question correctly, I think the docs on Savepoints explain this.
Essentially, you can nest any number of transactions, but on_commit() is only called after the topmost one commits. However, an on_commit() that's registered within a savepoint will only be called if that savepoint and all the ones above it are committed. So, it's tied to whichever transaction is currently open at the point it's called.
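Here is a minimal sketch of that behaviour (the print callbacks are just placeholders):

from django.db import transaction

with transaction.atomic():  # outer transaction
    transaction.on_commit(lambda: print("outer committed"))
    with transaction.atomic():  # nested block creates a savepoint
        transaction.on_commit(lambda: print("inner committed"))
    # Both callbacks run only after the outer transaction commits.
    # Had the inner savepoint been rolled back, only the outer
    # callback would have run.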
I have a function which aims to delete a specific row from an SQLite database by a UID identifier.
The sequence is the following:
1. Create select query to check if the row exists
2. Prepare the query
3. Bind the row UID
4. Step
5. Finalize
If the row exists
{
6. Create delete query
7. Prepare it
8. Bind the UID
9. Step
10. Finalize
11. Finalize
}
As you can see, it first checks whether the row exists in order to notify the caller if the given UID is wrong, then it creates a new delete query.
The program works as expected in ~14/15 test cases. In the cases where the program crashes, it crashes on the last finalize invocation (step 11). I've checked all the data and it seems that everything is valid.
The question is: what is the expected behaviour of consecutive invocations of the finalize function? I tried five invocations of finalize one after another, but the behaviour is the same.
Though the documentation doesn't feel the need to state this explicitly, it's fairly obvious that what you're doing is "undefined behaviour" (within the scope of the library).
Much like deleting dynamically allocated memory, you are supposed to finalize once. Not twice, not five times, but once. After you've finalized a prepared statement, it has been "deleted" and no longer exists. Any further operation on that prepared statement constitutes what the documentation calls "a grievous error" (if we presume that a superfluous call to finalize constitutes "use"; and why would we not?).
Fortunately there is no reason ever to want to do this. So, quite simply, don't! If your design is such that you've lost control of your code flow and, at the point of finalize, for some reason have insufficient information about your program's context to know whether the prepared statement has already been finalized, that's fine: much like we do with pointers, you can set it to nullptr so that subsequent calls are no-ops. But if you need to do this, you really should also revisit your design.
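For example, a small helper along these lines (the wrapper name is made up) renders any accidental repeat call harmless, since sqlite3_finalize() on a null pointer is documented to be a no-op:

#include <sqlite3.h>

// Finalize a statement exactly once; nulling the handle turns any
// subsequent call into a harmless no-op.
void safe_finalize(sqlite3_stmt*& stmt) {
    sqlite3_finalize(stmt);
    stmt = nullptr;
}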
Why did it appear to work for you? Pure chance, much like with any other undefined behaviours:
Any use of a prepared statement after it has been finalized can result in undefined and undesirable behavior such as segfaults and heap corruption.
See also: "Why can't I close my car door twice without opening it?" and "Why can't I shave my imaginary beard?"
We can make requests to the server using both Query and Mutation. In these requests we can pass some params, and we will get some results from the server in both cases. The only obligatory difference is that we call the mutation from our props, like "this.props.mutation", but that looks like syntactic sugar, because we can wrap our HOC in "withApollo" and receive a "query" method in props too. So what is the main difference between these two types of requests?
Strictly speaking there is no difference.
... technically any query could be implemented to cause a data write.
However, it's useful to establish a convention that any operations
that cause writes should be sent explicitly via a mutation.
However, the reference implementation does enforce the following.
While query fields are executed in parallel, mutation fields run in
series, one after the other.
This means that if we send two incrementCredits mutations in one
request, the first is guaranteed to finish before the second begins,
ensuring that we don't end up with a race condition with ourselves.
Both quotes can be found at the links below.
http://graphql.org/learn/queries/#mutations
http://graphql.org/learn/queries/#multiple-fields-in-mutations
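For illustration, a request like this (assuming the schema defines an incrementCredits mutation) executes its two fields strictly in order:

mutation {
  first: incrementCredits(amount: 5)
  second: incrementCredits(amount: 10)
}

The same two fields in a query would be allowed to run in parallel.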
I'm looking for guidelines to using a consistent value of the current date and time throughout a transaction.
By transaction I loosely mean an application service method; such methods usually execute a single SQL transaction, at least in my applications.
Ambient Context
One approach described in answers to this question is to put the current date in an ambient context, e.g. DateTimeProvider, and use that instead of DateTime.UtcNow everywhere.
However, the purpose of that approach is only to make the design unit-testable, whereas I also want to prevent errors caused by needlessly querying DateTime.UtcNow multiple times, an example of which is this:
// In an entity constructor:
this.CreatedAt = DateTime.UtcNow;
this.ModifiedAt = DateTime.UtcNow;
This code creates an entity with slightly differing created and modified dates, whereas one expects these properties to be equal right after the entity was created.
Also, an ambient context is difficult to implement correctly in a web application, so I've come up with an alternative approach:
Method Injection + DeterministicTimeProvider
The DeterministicTimeProvider class is registered as an "instance per lifetime scope" AKA "instance per HTTP request in a web app" dependency.
It is constructor-injected to an application service and passed into constructors and methods of entities.
The IDateTimeProvider.UtcNow property is used instead of the usual DateTime.UtcNow / DateTimeOffset.UtcNow everywhere to get the current date and time.
Here is the implementation:
/// <summary>
/// Provides the current date and time.
/// The provided value is fixed when it is requested for the first time.
/// </summary>
public class DeterministicTimeProvider : IDateTimeProvider
{
    private readonly Lazy<DateTimeOffset> _lazyUtcNow =
        new Lazy<DateTimeOffset>(() => DateTimeOffset.UtcNow);

    /// <summary>
    /// Gets the current date and time in the UTC time zone.
    /// </summary>
    public DateTimeOffset UtcNow => _lazyUtcNow.Value;
}
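For completeness, the registration might look like this (Autofac shown, since "instance per lifetime scope" is Autofac terminology; adjust for your container):

var builder = new ContainerBuilder();
builder.RegisterType<DeterministicTimeProvider>()
       .As<IDateTimeProvider>()
       .InstancePerLifetimeScope(); // one instance per HTTP request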
Is this a good approach? What are the disadvantages? Are there better alternatives?
Sorry for the logical fallacy of appeal to authority here, but this is rather interesting:
John Carmack once said:
"There are four principle inputs to a game: keystrokes, mouse moves, network packets, and time. (If you don't consider time an input value, think about it until you do -- it is an important concept)"
Source: John Carmack's .plan posts from 1998 (scribd)
(I have always found this quote highly amusing, because the suggestion that if something does not seem right to you, you should think of it really hard until it seems right, is something that only a major geek would say.)
So, here is an idea: consider time as an input. It is probably not included in the XML that makes up the web service request (you wouldn't want it to be, anyway), but in the handler where you convert the XML to an actual request object, obtain the current time and make it part of your request object.
So, as the request object is being passed around your system during the course of processing the transaction, the time to be considered as "the current time" can always be found within the request. So, it is not "the current time" anymore, it is the request time. (The fact that it will be one and the same, or very close to one and the same, is completely irrelevant.)
This way, testing also becomes even easier: you don't have to mock the time provider interface, the time is always in the input parameters.
Also, this way, other fun things become possible, for example servicing requests to be applied retroactively, at a moment in time which is completely unrelated to the actual current moment in time. Think of the possibilities. (Picture of bob squarepants-with-a-rainbow goes here.)
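A minimal sketch of the idea (all names here are illustrative):

public class ServiceRequest
{
    // Captured once, when the request enters the system; treated as
    // "the current time" for the rest of processing.
    public DateTimeOffset RequestTime { get; }

    public ServiceRequest(DateTimeOffset requestTime)
    {
        RequestTime = requestTime;
    }
}

// In the handler that builds the request object:
var request = new ServiceRequest(DateTimeOffset.UtcNow);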
Hmmm... this feels like a better question for CodeReview.SE than for Stack Overflow, but sure - I'll bite.
Is this a good approach?
If used correctly, in the scenario you described, this approach is reasonable. It achieves the two stated goals:
Making your code more testable. This is a common pattern I call "Mock the Clock", and is found in many well-designed apps.
Locking the time to a single value. This is less common, but your code does achieve that goal.
What are the disadvantages?
Since you are creating another new object for each request, it will create a mild amount of additional memory usage and additional work for the garbage collector. This is somewhat of a moot point since this is usually how it goes for all objects with per-request lifetime, including the controllers.
There is a tiny fraction of time being added before you take the reading from the clock, caused by the additional work being done in loading the object and from doing lazy loading. It's negligible though - probably on the order of a few milliseconds.
Since the value is locked down, there's always the risk that you (or another developer who uses your code) might introduce a subtle bug by forgetting that the value won't change until the next request. You might consider a different naming convention. For example, instead of "now", call it "requestReceivedTime" or something like that.
Similar to the previous item, there's also the risk that your provider might be loaded with the wrong lifecycle. You might use it in a new project and forget to set the instancing, loading it up as a singleton. Then the values are locked down for all requests. There's not much you can do to enforce this, so be sure to comment it well. The <summary> tag is a good place.
You may find you need the current time in a scenario where constructor injection isn't possible - such as a static method. You'll either have to refactor to use instance methods, or will have to pass either the time or the time-provider as a parameter into the static method.
Are there better alternatives?
Yes, see Mike's answer.
You might also consider Noda Time, which has a similar concept built in, via the IClock interface, and the SystemClock and FakeClock implementations. However, both of those implementations are designed to be singletons. They help with testing, but they don't achieve your second goal of locking the time down to a single value per request. You could always write an implementation that does that though.
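For example (a sketch; the exact API depends on the Noda Time version):

using NodaTime;
using NodaTime.Testing;

// Production code: the singleton system clock.
IClock clock = SystemClock.Instance;
Instant now = clock.GetCurrentInstant();

// Tests: a clock pinned to a fixed instant.
var fakeClock = new FakeClock(Instant.FromUtc(2020, 1, 1, 0, 0));
Instant fixedNow = fakeClock.GetCurrentInstant();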
Code looks reasonable.
Drawback: most likely the lifetime of the object will be controlled by the DI container, and hence the user of the provider can't be sure it will always be configured correctly (per-invocation, and not any longer lifetime like app/singleton).
If you have a type representing the "transaction", it may be better to put a "Started" time there instead.
This isn't something that can be answered with a real-time clock and a query, or by testing. The developer may have figured out some obscure way of reaching the underlying library call...
So don't do that. Dependency injection also won't save you here; the issue is that you want a standard pattern for time at the start of the 'session.'
In my view, the fundamental problem is that you are expressing an idea and looking for a mechanism for it. The right mechanism is to name it, say what you mean in the name, and then set it only once. readonly is a good way to handle setting this only once in the constructor, and it lets the compiler and runtime enforce what you mean, which is that it is set only once.
// In an entity constructor:
this.CreatedAt = DateTime.UtcNow;
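A sketch of what that might look like (the entity shape is illustrative):

public class Entity
{
    // One reading of the clock, stored once; readonly lets the
    // compiler enforce the single assignment.
    public readonly DateTime CreatedAt;
    public DateTime ModifiedAt { get; private set; }

    public Entity()
    {
        var now = DateTime.UtcNow; // read the clock exactly once
        CreatedAt = now;
        ModifiedAt = now;          // guaranteed equal at creation
    }
}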
This is a follow-up to Clojure: Compile time insertion of pre/post functions
My goal is to call a debug function instead of throwing an exception. I am looking for the best way to store a list of stack frames, function calls and their arguments, to accomplish this.
I want to have a function (my-uber-debug), so that when I call it (instead of throwing an exception), the following things happen:
a new Java window pops up
there is a record of the current clojure stack frame
for each stack frame, there is a record of the argument passed to the function
This is so that I can move up/down the stack frames, and examine the arguments passed to get to this current point. [If somehow, magically, we can get the variables defined in "let" environments, that'd be awesome too.]
Current Idea
I'm going to have a thread-local variable uber-debug, which has type:
List of StackFrames
where StackFrame = function + arguments
At each function call, it's going to push (cons) the current function + arguments onto uber-debug; then, at the end of the function call, it's going to remove the first element from uber-debug.
Then, when I call (my-uber-debug), it just pops up a new Java window and lets me interact with uber-debug.
Question
The ideas I've had so far are probably not ideal for setting this up. What is the right way to solve this problem?
Edit:
The question is NOT about the Swing/GUI part. It's about how to store the stack frames.
Thanks!
Your answer may depend on a lot of factors, so I am going to answer this by giving you my thoughts.
If you merely want to store function calls and their parameters when an exception occurs, then either write a macro or function as a wrapper to accomplish this. You would then have to pass all functions to be called to this wrapper. The wrapper would perform the try catch operation and whatever else you need.
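A rough sketch of such a wrapper as a plain function (the names are made up; a macro version could preserve the normal call syntax):

(def uber-debug (atom ())) ; list of [f args] frames; per-thread storage
                           ; would need a dynamic var plus binding

(defn call-logged
  "Calls f with args, recording [f args] as a stack frame on
  uber-debug for the duration of the call."
  [f & args]
  (swap! uber-debug conj [f args]) ; push the frame
  (try
    (apply f args)
    (finally
      (swap! uber-debug rest))))   ; pop the frame on the way out

;; Usage: (call-logged + 1 2) => 3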
You might also want to look into Clojure metadata in addition to writing the wrapper, because your running code could look at its metadata and make some decisions based on that as well. I have never used metadata, but the information at the link looks promising.
As a final thought, it might be helpful to further delineate what you want to accomplish by editing your original post and adding the information there.
For example, are these stack traces for a library or a main program?
As to storing all this information, are multiple threads going to need it, or just one?
Can you get by storing the information in a let binding at the highest level of your program, or do you need something like a ref?