I'm using sqlite3_bind_text to bind text parameters to my queries with the SQLITE_STATIC flag, since I know the text pointer remains valid at least until the query is executed.
Recently I've changed things so that the queries are executed in transaction mode (many such queries in a single transaction). Must the text buffers remain valid until the transaction is finished?
I mean, my text buffers are valid for the duration of a single query, but not for the whole transaction. Should I specify the SQLITE_TRANSIENT flag instead?
Yes, if you're using SQLITE_STATIC, you should leave the contents alone until after the transaction is finished. In fact, you should leave the contents alone until you've either rebound the parameter to something else or freed the statement.
SQLITE_TRANSIENT requests that SQLite make an internal copy of the string, which it will manage appropriately. Given your description, this is probably what you should use. Otherwise, you'll have to manage your own copy of each string for each statement.
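For example, here's a minimal sketch (the table t and the helper insert_names are hypothetical) of binding short-lived buffers inside one transaction; because each buffer goes away long before COMMIT, SQLITE_TRANSIENT tells SQLite to take its own copy at bind time:

#include <sqlite3.h>
#include <stdio.h>

/* Hypothetical helper. Each loop iteration reuses buf, so the buffer
   is gone long before COMMIT; SQLITE_TRANSIENT makes SQLite copy the
   text immediately when it is bound. */
static int insert_names(sqlite3 *db, const char **names, int n)
{
    sqlite3_stmt *stmt;
    if (sqlite3_exec(db, "BEGIN", NULL, NULL, NULL) != SQLITE_OK)
        return 1;
    if (sqlite3_prepare_v2(db, "INSERT INTO t(name) VALUES (?)", -1,
                           &stmt, NULL) != SQLITE_OK)
        return 1;
    for (int i = 0; i < n; i++) {
        char buf[64];                              /* dies every iteration */
        snprintf(buf, sizeof buf, "%s", names[i]);
        sqlite3_bind_text(stmt, 1, buf, -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);   /* the statement outlives buf, so the copy matters */
    }
    sqlite3_finalize(stmt);
    return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL) != SQLITE_OK;
}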
In Google Spanner, commit timestamps are generated by the server and based on "TrueTime", as discussed in https://cloud.google.com/spanner/docs/commit-timestamp. This page also states that timestamps are not guaranteed to be unique, so multiple independent writers can generate timestamps that are exactly the same.
The documentation on consistency guarantees states: "In addition, if one transaction completes before another transaction starts to commit, the system guarantees that clients can never see a state that includes the effect of the second transaction but not the first."
What I'm trying to understand is the combination of:
Multiple concurrent transactions committing "at the same time" resulting in the same commit timestamp (where the commit timestamp forms part of a key for the table)
A reader observing new rows being entered into the above table
Under these circumstances, is it possible for a reader to observe some but not all of the rows that will (eventually) be stored with the exact same timestamp? Put differently, if searching for all rows up to a known exact timestamp while rows are still being inserted with that timestamp, is it possible that the query first returns some of the results, but when executed again returns more?
The context of this is an attempt to model a stream of events ordered by time in an append-only manner - I need to keep what is effectively a cursor to a particular point in time (a point in the stream of events), and I need to know whether having observed events at time T means you can never get more events at exactly time T.
Spanner is externally consistent, meaning that any reader will only be able to read the results of completed transactions...
As with all externally consistent databases, it is not possible for a reader outside of a transaction to read the 'pending state' of another transaction. So a reader at time T will only be able to see transactions that were committed before time T.
Multiple simultaneous insert/update transactions committing at time T (which would have to affect different rows, otherwise they could not be simultaneous) would not be seen by a reader at time T, but both would be seen by a reader at T+1.
I ... need to know whether or not having observed events at time T means you can never get more events again at exactly time T.
Yes - ish. Rephrasing slightly as this is nuanced:
Having read events up to and including time T means you will never get any more events occurring with time equal to or before time T
But remember that the commit timestamp column is a simple TIMESTAMP column where any value can be stored -- it is the application that requests that the value stored is the commit timestamp, and there is nothing at the DB level to stop the application storing any value it likes...
As always with Spanner, it is the application which has to enforce/maintain the data integrity.
I've come across this issue with SQLite and C++ and can't find any answer to it.
Everything is working fine in SQLite and C++ - all queries, all outputs, all functions - but I have this question that I can't find any solution to.
I create a database MyTest.db
I create a table test with an id and a name as fields
I insert two rows: id=1 name=Name1 and id=2 name=Name2
I delete the second row
The data inside the table now shows only id=1 with name=Name1
When I open MyTest.db with notepad.exe, the values I deleted, such as id=2 name=Name2, are still inside the database file, even though they no longer show up in the table's query results.
What I'd like to ask anyone who knows about this is:
Is there any way to have the value deleted from the database file as well, or is it a mistake in my use of SQLite's DELETE (which I doubt)?
It's like the database file keeps collecting all the trash inside it without ever removing DELETED values from its tables...
Any help or suggestion is much appreciated
If you use "PRAGMA secure_delete=ON;" then SQLite overwrites deleted content with zeros. See https://www.sqlite.org/pragma.html#pragma_secure_delete
Even with secure_delete=OFF, the deleted space will be reused (and overwritten) to store new content the next time you INSERT. SQLite does not leak disk space, nor is it necessary to VACUUM in order to reclaim space. But normally, deleted content is not overwritten as that uses extra CPU cycles and disk I/O.
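As a rough illustration (error handling omitted; the table test and this helper are taken from or invented for the question), the pragma is just another statement you can run through sqlite3_exec(). secure_delete makes SQLite zero out deleted content, and VACUUM rebuilds the file so free pages are returned to the OS:

#include <sqlite3.h>

/* Hypothetical sketch: zero out deleted content, then shrink the file. */
void scrub_deleted(sqlite3 *db)
{
    sqlite3_exec(db, "PRAGMA secure_delete=ON;", NULL, NULL, NULL);
    sqlite3_exec(db, "DELETE FROM test WHERE id = 2;", NULL, NULL, NULL);
    sqlite3_exec(db, "VACUUM;", NULL, NULL, NULL);  /* reclaim the space */
}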
Basically, all databases only mark rows as active or inactive; they won't immediately delete the actual data from the file. That would be a huge waste of time and resources, since that part of the file can simply be reused.
Since your queries show that the row no longer appears in results, is this really an issue? You can always run a VACUUM on the database if you want to reclaim the space, but I would just let the database engine handle everything by itself. It won't "keep collecting all the trash inside it", don't worry.
If you see that the file size is growing and the space is not reused, then you can issue vacuums from time to time.
You can also test this by just inserting other rows after deleting old ones. The engine should reuse those parts of the file at some point.
To protect against SQL injection, I read in the introduction to ColdFusion that we are to use the cfqueryparam tag.
But when using stored procedures, I am passing my variables to corresponding variable declarations in SQL Server:
DROP PROC Usr.[Save]
GO
CREATE PROC Usr.[Save]
(@UsrID Int
,@UsrName varchar(max)
) AS
UPDATE Usr
SET UsrName = @UsrName
WHERE UsrID = @UsrID
exec Usr.[get] @UsrID
Q: Is there any value in including cfSqlType when I call a stored procedure?
Here's how I'm currently doing it in Lucee:
storedproc procedure='Usr.[Save]' {
procparam value=Val(form.UsrID);
procparam value=form.UsrName;
procresult name='Usr';
}
This question came up indirectly on another thread. That thread was about query parameters, but the same issues apply to procedures. To summarize, yes you should always type query and proc parameters. Paraphrasing the other answer:
Since cfsqltype is optional, its importance is often underestimated:
Validation:
ColdFusion uses the selected cfsqltype (date, number, etc.) to validate the "value". This occurs before any SQL is ever sent to the database. So if the "value" is invalid, like "ABC" for type cf_sql_integer, you do not waste a database call on SQL that was never going to work anyway. When you omit the cfsqltype, everything is submitted as a string and you lose the extra validation.
Accuracy:
Using an incorrect type may cause CF to submit the wrong value to the database. Selecting the proper cfsqltype ensures you are sending the correct value - and - sending it in a non-ambiguous format the database will interpret the way you expect.
Again, technically you can omit the cfsqltype. However, that means CF will send everything to the database as a string. Consequently, the database will perform implicit conversion (usually undesirable). With implicit conversion, the interpretation of the strings is left entirely up to the database - and it might not always come up with the answer you would expect.
Submitting dates as strings, rather than date objects, is a prime example. How will your database interpret a date string like "05/04/2014"? As April 5th or May 4th? Well, it depends. Change the database or the database settings and the result may be completely different.
The only way to ensure consistent results is to specify the appropriate cfsqltype. It should match the data type of the target column/function (or at least an equivalent type).
I have an entire set of data I want to insert into a table. I am trying to have it insert/update everything OR roll back. I was going to do it in a transaction, but I wasn't sure if the sql_exec() command did the same thing.
My goal was to iterate through the list.
Select from the table on each iteration based on the primary key.
If result was found:
append update to string;
else
append insert to string;
Then after iterating through the loop, I would have a giant string and say:
sql_exec(string);
sql_close(db);
Is that how I should do it? I was going to execute on each iteration of the loop, but then I didn't think I'd get a global rollback if there was an error.
No, you should not append everything into a giant string. If you do, you will need to allocate a whole bunch of memory as you go, and it will be harder to produce good error messages for each individual statement, since you will just get a single error for the entire string. Why spend all of that effort constructing one big string when SQLite is just going to have to parse it back down into its individual statements again?
Instead, as @Chad suggests, you should just use sqlite3_exec() on a BEGIN statement, which will begin a transaction. Then sqlite3_exec() each statement in turn, and finally sqlite3_exec() a COMMIT or ROLLBACK depending on how everything goes. The BEGIN statement starts a transaction, and all of the statements executed after it are within that transaction, so they are committed or rolled back together. That's what the "A" in ACID stands for: Atomic, as all of the statements in the transaction are committed or rolled back as if they were a single atomic operation.
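A sketch of that pattern, assuming a hypothetical run_all_statements() standing in for your per-row INSERT/UPDATE loop:

#include <sqlite3.h>

int run_all_statements(sqlite3 *db);    /* hypothetical: your per-row loop */

int run_in_transaction(sqlite3 *db)
{
    if (sqlite3_exec(db, "BEGIN", NULL, NULL, NULL) != SQLITE_OK)
        return SQLITE_ERROR;

    int rc = run_all_statements(db);

    if (rc == SQLITE_OK)                 /* everything worked: make it permanent */
        return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);

    sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);   /* undo the whole batch */
    return rc;
}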
Furthermore, you probably shouldn't use sqlite3_exec() if any of the data varies within each statement, such as data being read from a file. If you do, a mistake could easily leave you with an SQL injection bug. For instance, suppose you construct your query by appending strings and have a string like char *str = "it's a string" to insert. If you don't quote it properly, your statement could come out as INSERT INTO table VALUES ('it's a string');, which is an error. Or if someone malicious could write data into the file, they could cause you to execute any SQL statement they want (imagine if the string were "'); DROP TABLE my_important_table; --"). You may think that no one malicious will ever provide input, but you can still run into accidental problems when someone puts a character that confuses the SQL parser into a string.
Instead, you should use sqlite3_prepare_v2() and sqlite3_bind_...() (where ... is the type, like int or double or text). To do this, you use a statement like char *query = "INSERT INTO table VALUES (?)", where you substitute a ? where you want your parameter to go, prepare it using sqlite3_prepare_v2(db, query, -1, &stmt, NULL), bind the parameter using sqlite3_bind_text(stmt, 1, str, -1, SQLITE_STATIC), then execute the statement with sqlite3_step(stmt). If the statement returns any data, you will get SQLITE_ROW, and can access the data using the various sqlite3_column_...() functions. Be sure to read the documentation carefully; some of the example parameters I gave may need to change depending on how you use this.
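Putting those calls together, a minimal sketch (the table and column names are made up for illustration):

#include <sqlite3.h>

/* Hypothetical helper: prepare, bind, step, finalize.
   Returns an SQLite result code. */
int insert_text(sqlite3 *db, const char *str)
{
    sqlite3_stmt *stmt;
    const char *query = "INSERT INTO mytable(value) VALUES (?)";

    int rc = sqlite3_prepare_v2(db, query, -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    /* Bind str to the first '?'. SQLITE_STATIC is safe here because
       str stays valid until sqlite3_step() runs below. */
    sqlite3_bind_text(stmt, 1, str, -1, SQLITE_STATIC);

    rc = sqlite3_step(stmt);              /* SQLITE_DONE on success */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}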
Yes, this is a bit more of a pain than calling sqlite3_exec(), but if your query has any data loaded from external sources (files, user input), this is the only way to do it correctly. sqlite3_exec() is fine to call if the entire text of the query is contained within your source, such as the BEGIN and COMMIT or ROLLBACK statements, or pre-written queries with no parts coming from outside of your program; you just need prepare/bind if there's any chance that an unexpected string could get in.
Finally, you don't need to query whether something is in the database already and then insert or update it. You can do an INSERT OR REPLACE query, which will either insert a record or replace one with a matching primary key; this is the equivalent of selecting and then doing an INSERT or an UPDATE, but much quicker and simpler. See the INSERT and "on conflict" documentation for more details.
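For instance, a hypothetical upsert helper (mytable and its columns are made up); the single INSERT OR REPLACE statement takes the place of the whole select-then-insert/update branch from your plan:

#include <sqlite3.h>

int upsert_row(sqlite3 *db, int id, const char *value)
{
    sqlite3_stmt *stmt;
    /* If a row with this primary key exists it is replaced, else inserted. */
    int rc = sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO mytable(id, value) VALUES (?, ?)",
        -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    sqlite3_bind_int(stmt, 1, id);
    sqlite3_bind_text(stmt, 2, value, -1, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}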
I'm looking to create a simple web service that, when polled, returns a unique ID. The ID has to be human readable (i.e. not a GUID, probably in the form 000023) and is simply incremented by 1 each time it's called.
Now I need to consider that it may be called by two different applications at the same time and I don't want it to return the same number to each application.
Is there another option than using a database to store the current number?
Surely this has been done before; can anyone point me at some source code if so?
Thanks,
Neil
Use a critical section to control flow through that piece of code one caller at a time. You can do this using the lock statement, or by being slightly more hardcore and using a mutex directly. Doing this will ensure that you return a different number to each caller.
As for storing it, using a database is overkill just to produce an auto-incrementing number, although SQL Server and Oracle (and most likely others, but I can't speak for them) both provide an auto-incrementing key feature. So you could have the web service called, generate a new entry in a database table, and return the key, and the caller can use that number as a key back to that record (if you are saving more data later after the initial call). This way you also let the database worry about generating unique numbers, and you don't have to worry about the details of it - although this is not a good option if you don't already have a database.
The other option is to store it in a local file, although that would be expensive: reading the file, incrementing the number, and writing it back out must all happen within a critical section.
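The lock statement above is C#, but the idea is language-neutral; here is a minimal sketch of the same critical section in C using a pthread mutex (next_id() is a hypothetical handler the service would call for each request):

#include <pthread.h>

static pthread_mutex_t id_lock = PTHREAD_MUTEX_INITIALIZER;
static long current_id = 0;

long next_id(void)
{
    pthread_mutex_lock(&id_lock);    /* one caller at a time */
    long id = ++current_id;
    pthread_mutex_unlock(&id_lock);
    return id;                       /* unique per caller */
}

Note that an in-process mutex only serializes callers within a single process, and the counter here resets on restart; that is why pairing it with persistent storage (a database row or a file, as discussed above) is still needed.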
You can use a file.
Pseudocode:
lock('counter.txt')            // block until the lock is ours; reading before
                               // locking would let two callers see the same value
counter = read('counter.txt')
counter++
write('counter.txt', counter)
unlock('counter.txt')
print counter
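For completeness, a rough C translation of that pseudocode using POSIX flock() (a choice of mine, not part of the answer above); it assumes counter.txt already exists and holds a number:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

long next_counter(void)
{
    int fd = open("counter.txt", O_RDWR);
    if (fd < 0) return -1;

    flock(fd, LOCK_EX);                 /* blocks until the lock is ours */

    char buf[32] = {0};
    long counter = 0;
    if (read(fd, buf, sizeof buf - 1) > 0)
        counter = atol(buf);
    counter++;                          /* increment happens under the lock */

    int len = snprintf(buf, sizeof buf, "%ld", counter);
    lseek(fd, 0, SEEK_SET);             /* rewind and overwrite the old value */
    ftruncate(fd, 0);
    write(fd, buf, (size_t)len);

    flock(fd, LOCK_UN);
    close(fd);
    return counter;
}

Unlike the in-process mutex, an exclusive file lock also serializes callers across separate processes, at the cost of a read and a write per request.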