ODBC error "String data, right truncation State code: 22001" with SQL Server database - c++

I have a test table using a Microsoft SQL Server that is defined like this:
CREATE TABLE [dbo].[Table] (
[FirstName] NVARCHAR (255) NULL,
[LastName] NVARCHAR (255) NULL
);
There's just one row in the table with the values "person" and "man", respectively.
I'm trying to add a function that will update the values of that row, but I keep running into this "[Microsoft][ODBC SQL Server Driver]String data, right truncation State code: 22001" error and I cannot figure out what the problem is. Everything I've found says it's caused by the data being too long to fit in the column, but that can't be it here: the string I'm updating with is only two characters, and as you can see in the table definition there is plenty of room for it.
I'm using a prepared statement for optimization purposes and the code creating it looks something like this:
const tString query("UPDATE \"" + tableName + "\" SET " + setClause + " WHERE " + whereClause + ";");
SQLHSTMT statement;
SQLAllocHandle(SQL_HANDLE_STMT, fSQLConnection, &statement);
SQLPrepareW(statement, (SQLWCHAR *) query.mWideStr(), SQL_NTS);
The query string looks like this:
UPDATE "Table" SET "FirstName" = ?, "LastName" = ? WHERE "FirstName" = ? AND "LastName" = ?;
And then I am binding the parameters like this:
// We have our own string class that we use, which is where the mWideStr() and mGetStrSize()
// come from. mWideStr() returns a pointer to a UCS-2 buffer and mGetStrSize() returns the
// size in bytes.
SQLLEN pcbValue(SQL_NTS);
SQLUSMALLINT paramIndex(1);
// Call this for each parameter in the correct order they need to be bound, incrementing the
// index each time.
SQLBindParameter(statement, paramIndex++, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_VARCHAR, 255, 0, (SQLPOINTER) paramValue.mWideStr(), paramValue.mGetStrSize(), &pcbValue);
The first and second bound parameters are the new values which are both just "55", then third would be "person" and fourth would be "man".
Then to execute the statement it's just a call to SQLExecute():
SQLExecute(statement);
The call to SQLExecute() fails, the error above is generated, and some further code outputs the error message. As far as I can tell this should all work fine. I have another database using Oracle with the exact same setup and code, and it works without any issues; it's just SQL Server that's barfing for some reason. Is there something obviously wrong here that I'm missing? Does SQL Server have some weird rules that I need to account for?

The SQLLEN pcbValue(SQL_NTS); variable being passed to SQLBindParameter() was going out of scope between binding the parameters and executing the statement, which meant the parameter binding was left pointing at garbage data. I also realized that you don't need to supply that last argument at all: you can just pass NULL and the driver treats the value as a null-terminated string.
So the fix was to remove the SQLLEN pcbValue(SQL_NTS); variable and simply pass NULL to SQLBindParameter() for the last argument instead.
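For illustration only, an untested sketch of the corrected binding, reusing the question's own (hypothetical) string wrapper methods mWideStr() and mGetStrSize():
// Passing NULL for the final StrLen_or_IndPtr argument tells the driver the value is a
// null-terminated string, so no SQLLEN variable has to stay alive until SQLExecute().
SQLBindParameter(statement, paramIndex++, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_VARCHAR, 255, 0, (SQLPOINTER) paramValue.mWideStr(), paramValue.mGetStrSize(), NULL);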
Stupid mistake, but worth noting I suppose.

Related

Insert JSON format in Mysql query using C++

I am using JSON to save data in my C++ program, and I want to send it to a MySQL database (the table tab has one column of type TEXT), but the query fails (I also tested VARCHAR and CHAR).
Here is the relevant part of the code, since the rest is not of interest:
string json_example = "{\"array\":[\"item1\",\"item2\"], \"not an array\": \"asdf\"}";
mysql_init(&mysql); //initialize database connection
string player="INSERT INTO tab values (\"";
player+= json_example;
player += "\")";
connection = mysql_real_connect(&mysql,HOST,USER,PASSWD,DB,0,NULL,0);
// save data to database
query_state=mysql_query(connection, player.c_str()); // use player.c_str()
To show the final query that will be used, cout << player gives:
INSERT INTO tab values ("{"array":["item1","item2"], "not an
array": "asdf"}")
Using, for example, string json_example = "some text"; works, but with the JSON string it does not. Maybe the problem comes from the use of curly brackets {} or double quotes "", but I haven't found a way to solve it.
I'm using:
mysql Ver 14.14 Distrib 5.5.44, for debian-linux-gnu (armv7l) under Raspberry Pi 2
Any help will be appreciated, thanks.
Use a prepared statement. See prepared statements documentation in the MySQL reference manual.
Prepared statements are more correct, safer, possibly faster, and keep your code cleaner. You get all those benefits and don't need to escape anything. There is hardly a reason not to use them.
Something like this might work. But take it with a grain of salt, because I have not tested or compiled it. It should just give you the general idea:
MYSQL_STMT* const statement = mysql_stmt_init(&mysql);
std::string const query = "INSERT INTO tab values(?)";
mysql_stmt_prepare(statement, query.c_str(), query.size());
MYSQL_BIND bind[1] = {};
bind[0].buffer_type = MYSQL_TYPE_STRING;
bind[0].buffer = const_cast<char*>(json_example.c_str()); // MYSQL_BIND::buffer is a non-const void*
bind[0].buffer_length = json_example.size();
mysql_stmt_bind_param(statement, bind);
mysql_stmt_execute(statement);
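One thing the sketch above leaves out is error checking; if the insert still fails, wrapping the same calls with their return codes and printing mysql_stmt_error() will show the server's actual complaint. A rough, untested example:
// Each statement call returns non-zero on failure; mysql_stmt_error() gives the reason.
if (mysql_stmt_prepare(statement, query.c_str(), query.size()) != 0 ||
    mysql_stmt_bind_param(statement, bind) != 0 ||
    mysql_stmt_execute(statement) != 0)
{
    fprintf(stderr, "statement failed: %s\n", mysql_stmt_error(statement));
}
mysql_stmt_close(statement); // release the prepared statement handle when done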

row number 0 is out of range 0..-1 LIBPQ

query = "select * results where id = '";
query.append(ID);
query.append("'");
res = PQexec(conn, query.c_str());
After executing this statement, I get the following error.
row number 0 is out of range 0..-1
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct null not valid
However, when I run the same query in postgresql, it does not have any problem.
select * from results where id = 'hello'
The only problem is that if the query parameter passed is not in the database, it throws a runtime error. If I provide a query parameter that exists in the database, it executes normally.
That's two separate errors, not one. This error:
row number 0 is out of range 0..-1
is from libpq, but is reported by code you have not shown here.
The error:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct null not valid
is not from PostgreSQL, it is from your C++ runtime.
It isn't possible for me to tell exactly where it came from. You should really run the program under a debugger to tell that. But if I had to guess, based on the code shown, I'd say that ID is null, so:
query.append(ID);
is therefore aborting the program.
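As an aside, "row number 0 is out of range 0..-1" is the message libpq emits when PQgetvalue() (or a related accessor) is asked for row 0 of a result that holds zero rows, so whichever code reads the result also needs a guard. A rough sketch, assuming the res and conn from the code above:
// Check the result status and row count before reading any field;
// PQgetvalue(res, 0, 0) on an empty result is what triggers the range message.
if (PQresultStatus(res) != PGRES_TUPLES_OK) {
    fprintf(stderr, "query failed: %s\n", PQerrorMessage(conn));
} else if (PQntuples(res) == 0) {
    fprintf(stderr, "no row found for that id\n");
} else {
    std::string value(PQgetvalue(res, 0, 0)); // safe: at least one row exists
    // ... use value ...
}
PQclear(res);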
Separately, your code is showing a very insecure practice where you compose SQL by string concatenation. This makes SQL injection exploits easy.
Imagine what would happen if your "ID" variable was set to ';DROP TABLE results;-- by a malicious user.
Do not insert user-supplied values into SQL by appending strings.
Instead, use bind parameters via PQexecParams. It looks complicated, but most parameters are optional for simple uses. A version for your query, assuming that ID is a non-null std::string, would look like:
PGresult *res;
const char *values[1];
values[0] = ID.c_str();
res = PQexecParams(conn, "SELECT * FROM results WHERE id = $1",
                   1, NULL, values, NULL, NULL, 0);
If you need to handle nulls you need another parameter; see the documentation.
Maybe a bit too late, but I just want to put in my five cents.
I also got this error recently with a very simple stored procedure like this one:
CREATE OR REPLACE FUNCTION selectMsgCounter()
RETURNS text AS
$BODY$
DECLARE
msgCnt text;
BEGIN
msgCnt:= (SELECT max(messageID)::text from messages);
RETURN 'messageCounter: ' || msgCnt;
END
$BODY$
LANGUAGE plpgsql STABLE;
I did some debugging with:
if (PQntuples(res)>=1)
{
char* char_result=(char*) realloc(NULL,(PQgetlength(res, 0,0)+1)*sizeof(char)); // +1 for the terminating NUL that strcpy copies
strcpy( char_result,PQgetvalue(res, 0, 0));
bool ok=true;
messageCounter=QString(char_result).remove("messageCounter: ").toULongLong(&ok);
if (!ok) messageCounter=-1;
qDebug()<<"messageCounter: " << messageCounter;
free(char_result);
PQclear(res);
PQfinish(myConn); // or do call destructor later?
myConn=NULL;
}
else
{
fprintf(stderr, "storedProcGetMsgCounter Connection Error: %s",
PQerrorMessage(myConn));
PQclear(res);
PQfinish(myConn); // or do call destructor later?
myConn=NULL;
}
Turned out that the owner of the stored procedure was not the one whose credentials I used to log in with.
So - at least - in my case this error "row number 0 is out of range 0..-1" was at first sight a false positive.

CTE with HierarchyID suddenly causes parse error

So I have this self-referencing table in my database named Nodes, used for storing the tree structure of an organization:
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](max) NULL,
[ParentId] [int] NULL,
(+ other metadata columns)
And from it I'm using HIERARCHYID to manage queries based on access levels and such. I wrote a table-valued function for this, tvf_OrgNodes, a long time ago; it was tested and working on SQL Server 2008 through 2014, and it has remained unchanged since then because it has been doing great. Now, however, something has changed, because parsing HIERARCHYIDs from path nvarchars ("/2/10/8/") results in the following error, matching only 4 hits (!) on Google:
Msg 6522, Level 16, State 2, Line 26
A .NET Framework error occurred during execution of user-defined routine or aggregate "hierarchyid":
Microsoft.SqlServer.Types.HierarchyIdException: 24000: SqlHierarchyId operation failed because HierarchyId object was constructed from an invalid binary string.
When altering the function to only return NVARCHAR instead of actual HIERARCHYIDs, the paths all look fine, beginning with / for the root, followed by /2/ and so on. Simply selecting HIERARCHYID::Parse('path') also works fine. I actually got the function working by leaving the paths as strings all the way until the INSERT into the function result, parsing the paths there. But alas, I get the same error when I then try to insert the resulting data into a table of the same schema.
So the question is: is this a bug, or does anybody know of any (new?) pitfalls in working with HIERARCHYID<->path strings that could cause this? I don't get where the whole binary string idea comes from.
This is the code of the TVF:
CREATE FUNCTION [dbo].[tvf_OrgNodes] ()
RETURNS @OrgNodes TABLE (
    OrgNode HIERARCHYID,
    NodeId INT,
    OrgLevel INT,
    ParentNodeId INT
) AS
BEGIN
    WITH orgTree(OrgNode, NodeId, OrgLevel, ParentNodeId) AS (
        -- Anchor expression = root node
        SELECT
            CAST(HIERARCHYID::GetRoot() AS varchar(180))
            , n.Id
            , 0
            , NULL
        FROM Nodes n
        WHERE ParentId IS NULL -- Top level
        UNION ALL
        -- Recursive expression = organization tree
        SELECT
            CAST(orgTree.OrgNode + CAST(n.Id AS VARCHAR(180)) + N'/' AS VARCHAR(180))
            , n.Id
            , orgTree.OrgLevel + 1
            , n.ParentId
        FROM Nodes AS n
        JOIN orgTree
            ON n.ParentId = orgTree.NodeId
    )
    INSERT INTO @OrgNodes
    SELECT
        HIERARCHYID::Parse(OrgNode),
        NodeId,
        OrgLevel,
        ParentNodeId
    FROM orgTree;
    RETURN;
END
I might have recently installed .NET 4.53 aka 4.6 for the lolz. Can't find much proof of it anywhere except in the registry, though: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319\SKUs\.NETFramework,Version=v4.5.3

WMI Associators of DiskDrive Where Result Class is MSStorageDriver

Trying to link DiskDrives found in Win32_DiskDrive with the data in MSStorageDriver_ATAPISmartData.
I've tried the following WQL statement, but it returned nothing each time. (I know that there is relevant data in the MSStorageDriver class.)
ASSOCIATORS OF {Win32_DiskDrive.DeviceID=[value]} WHERE RESULTCLASS = MSStorageDriver_ATAPISmartData
Any ideas to match the data up?
The answer was this:
SELECT * FROM MSStorageDriver_ATAPISmartData WHERE InstanceName='[PNPDeviceID]'
Just make sure to double-escape any backslashes. So if the PNPDeviceID as found in Win32_DiskDrive was
IDE\DISKHITACHI_HDT725050VLA360_________________V56OA7EA\5&276E2DE5&0&1.0.0
what you get back when reading the value is
IDE\\DISKHITACHI_HDT725050VLA360_________________V56OA7EA\\5&276E2DE5&0&1.0.0
but what you need to send in the WHERE clause is
IDE\\\\DISKHITACHI_HDT725050VLA360_________________V56OA7EA\\\\5&276E2DE5&0&1.0.0
Silly, isn't it?
Oh, and from what I've gathered, you also need _0 on the end of the device ID, so altogether you would send:
SELECT * FROM MSStorageDriver_ATAPISmartData WHERE InstanceName='IDE\\\\DISKHITACHI_HDT725050VLA360_________________V56OA7EA\\\\5&276E2DE5&0&1.0.0_0'
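For illustration only (this helper is not part of the original answer), here is a sketch of how the query could be built from the PNPDeviceID read out of Win32_DiskDrive, doubling each backslash and appending the _0 suffix:
#include <string>

// Hypothetical helper: WQL string literals need every backslash escaped, so the
// PNPDeviceID value is expanded before being placed in the InstanceName filter.
std::string BuildSmartDataQuery(const std::string& pnpDeviceId)
{
    std::string escaped;
    for (char c : pnpDeviceId) {
        if (c == '\\')
            escaped += "\\\\"; // double each backslash for the WQL literal
        else
            escaped += c;
    }
    return "SELECT * FROM MSStorageDriver_ATAPISmartData WHERE InstanceName='"
           + escaped + "_0'"; // the trailing _0 noted above
}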

Database commit not quick enough for Coldfusion process

I have the following Coldfusion process:
My code makes a database call to the proc CommentInsert (this inserts a comment, and then calls an event insert proc about the comment being added called EventInsert)
I then call Event.GetEventByCommentId(commentId)
The result is no records returned, as the EventInsert hasn't finished adding the event record triggered by CommentInsert in Step 1.
I know this is the case, because if I create a delay between steps 1 and 2, then a recordset IS returned in step 2.
This leads me to believe that the read in step 2 is happening too quickly, before the event insert has committed in step 1.
My question is: how do I tell the ColdFusion process to wait until step 1 has completed before doing the read in step 2?
Step one and step two are two totally separate methods.
Code:
<cfset MessageHandlerManager = AddComment(argumentCollection=arguments) />
<cfset qEvents = application.API.EventManager.GetEventFeed(commentId=MessageHandlerManager.GetReturnItems()) />
Also, just let me add that the commentId being passed is valid. I have checked.
Another way to look at it:
Given this code:
<!--- Calls CommentInsert proc, which inserts a comment AND inserts an
event record by calling EventInsert within the proc --->
<cfset var newCommentId = AddComment(argumentCollection=arguments) />
<cfloop from="1" to="1000000" index="i">
</cfloop>
<!--- Gets the event record inserted in the code above --->
<cfset qEvent =
application.API.EventManager.GetEventFeed(commentId=newCommentId ) />
When I run the above code, qEvent comes back with a valid record.
However, when I comment out the loop, the record comes back empty.
What I think is happening is that the CommentInsert returns the new
comment Id, but when the GetEventFeed function is called, the
EventInsert proc hasn't completed in time and no record is found.
Thus, by adding the loop and delaying a bit, the event insert has time
to finish and then a valid record is returned when GetEventFeed is
called.
So my question is: how do I prevent this without using the loop?
UPDATE:
Here are the two stored procs used:
DELIMITER $$
DROP PROCEDURE IF EXISTS `CommentInsert` $$
CREATE DEFINER=`root`@`%` PROCEDURE `CommentInsert`(
IN _commentParentId bigint,
IN _commentObjectType int,
IN _commentObjectId bigint,
IN _commentText text,
IN _commentAuthorName varchar(100),
IN _commentAuthorEmail varchar(255),
IN _commentAuthorWebsite varchar(512),
IN _commentSubscribe tinyint(1),
IN _commentIsDisabled tinyint(1),
IN _commentIsActive tinyint(1),
IN _commentCSI int,
IN _commentCSD datetime,
IN _commentUSI int,
IN _commentUSD datetime,
OUT _commentIdOut bigint
)
BEGIN
DECLARE _commentId bigint default 0;
INSERT INTO comment
(
commentParentId,
commentObjectType,
commentObjectId,
commentText,
commentAuthorName,
commentAuthorEmail,
commentAuthorWebsite,
commentSubscribe,
commentIsDisabled,
commentIsActive,
commentCSI,
commentCSD,
commentUSI,
commentUSD
)
VALUES
(
_commentParentId,
_commentObjectType,
_commentObjectId,
_commentText,
_commentAuthorName,
_commentAuthorEmail,
_commentAuthorWebsite,
_commentSubscribe,
_commentIsDisabled,
_commentIsActive,
_commentCSI,
_commentCSD,
_commentUSI,
_commentUSD
);
SET _commentId = LAST_INSERT_ID();
CALL EventInsert(6, Now(), _commentId, _commentObjectType, _commentObjectId, null, null, 'Comment Added', 1, _commentCSI, Now(), _commentUSI, Now());
SELECT _commentId INTO _commentIdOut ;
END $$
DELIMITER ;
DELIMITER $$
DROP PROCEDURE IF EXISTS `EventInsert` $$
CREATE DEFINER=`root`@`%` PROCEDURE `EventInsert`(
IN _eventTypeId int,
IN _eventCreateDate datetime,
IN _eventObjectId bigint,
IN _eventAffectedObjectType1 int,
IN _eventAffectedObjectId1 bigint,
IN _eventAffectedObjectType2 int,
IN _eventAffectedObjectId2 bigint,
IN _eventText varchar(1024),
IN _eventIsActive tinyint,
IN _eventCSI int,
IN _eventCSD datetime,
IN _eventUSI int,
IN _eventUSD datetime
)
BEGIN
INSERT INTO event
(
eventTypeId,
eventCreateDate,
eventObjectId,
eventAffectedObjectType1,
eventAffectedObjectId1,
eventAffectedObjectType2,
eventAffectedObjectId2,
eventText,
eventIsActive,
eventCSI,
eventCSD,
eventUSI,
eventUSD
)
VALUES
(
_eventTypeId,
_eventCreateDate,
_eventObjectId,
_eventAffectedObjectType1,
_eventAffectedObjectId1,
_eventAffectedObjectType2,
_eventAffectedObjectId2,
_eventText,
_eventIsActive,
_eventCSI,
_eventCSD,
_eventUSI,
_eventUSD
);
END $$
DELIMITER ;
Found it. Boiled down to this line in the EventManager.GetEventFeed query:
AND eventCreateDate <= <cfqueryparam cfsqltype="cf_sql_timestamp" value="#Now()#" />
What was happening was that the MySQL Now() function called in the EventInsert proc was a fraction later than the ColdFusion #Now()# being used in the query, so that line of code excluded the record. That also explains why it only happened when comments were added quickly.
What a biatch. Thanks for the input everyone.
Let me get this straight:
You call a MySQL SP which does an insert which then calls another SP to do another insert.
There's no return to ColdFusion between those two? Is that right?
If that's the case then the chances are there's a problem with your SP not returning values correctly or you're looking in the wrong place for the result.
I'm more inclined towards there being a problem with the MySQL SPs. They aren't exactly great and don't really give you a great deal of performance benefit. Views are useful, but the SPs are, frankly, a bit rubbish. I suspect that when you call the second SP from within the first SP and it returns a value, it's not being correctly passed back out of the original SP to ColdFusion, hence the lack of result.
To be honest, my suggestion would be to write two ORM functions or simple cfqueries in a suitable DAO or service to record the result of the insert of comment first and return a value. Having returned that value, make the other call to the function to get your event based on the returned comment id. (ColdFusion 8 will give you Generated_Key, ColdFusion 9 is generatedkey, I'm not sure what it'll be in Railo, but it'll be there in the "result" attribute structure).
Thinking about it, I'm not even sure why you're getting the event based on the comment id just entered. You've just added that comment against an event, so you should already have some data on that event, even if it's just the ID, from which you can then get the full event record/object without having to go around the houses via the comment.
So over all I would suggest taking a step back and looking at the data flow you're working with and perhaps refactor it.