LDAP server sizelimit was exceeded (MSAD) with ldaps_search - SAS

Using
call ldaps_search(handle, shandle, filter, attrs, num, rc);
with Microsoft Active Directory I get WARNING: LDAP server sizelimit was exceeded.
Is there a way to page through the results somehow in SAS?
I have tried ldaps_setOptions with sizeLimit=2000, for example, but it still generates the warning; I guess the limit is set on Microsoft's side.
Thanks
Sample:
more = 1;
do while (more eq 1);
    call ldaps_search_page(handle, shandle, filter, attrs, num, rc, more, 1000);
    if rc ne 0 then do;
        more = 0;
        msg = sysmsg();
        put msg;
    end;
    /* free search results page */
    if shandle NE 0 then do;
        call ldaps_free(shandle, rc);
    end;
end;

It's not possible to raise the LDAP server's size limit from the client side (in Active Directory it is governed by the MaxPageSize policy, 1,000 entries by default), but you can still work around it via paging controls.
The idea is to request a paged result set, with a number of entries per page that is less than the server's MaxPageSize limit.
SAS provides the LDAPS_SEARCH_PAGE CALL routine, which returns only a single page for a given search request and requires subsequent calls to retrieve the rest of the results:
CALL LDAPS_SEARCH_PAGE(lHandle, sHandle, filter, attr, num, rc, more <, pageSize>);
pageSize (optional) specifies a positive integer value, which is the number of
results on a page of output. By default, this value is set to 50. If
pageSize is 0, this function acts as if paging is turned off. This
argument is case-insensitive.
For example, if a query matches n results (exceeding the server-side limit) and the page size is set to 50, you need to make up to ceil(n/50) calls; for instance, 2,000 matching entries at 50 per page means 40 calls.
Here is an example taken from the documentation; it uses the more argument in a loop to continue retrieving paged results until there is no more information to retrieve:
more = 1;
do while (more eq 1);
    call ldaps_search_page(handle, shandle, filter, attrs, num, rc, more, 50);
    ...
    /* free search results page */
    if shandle NE 0 then do;
        call ldaps_free(shandle, rc);
    end;
end;
https://documentation.sas.com/api/docsets/itechdsref/9.4/content/itechdsref.pdf
For those having trouble with more getting stuck at 1, which makes the code above loop forever (I don't know why the reference wouldn't get updated, but the OP was in this situation): you don't actually need it. Incrementing a counter of fetched entries until it reaches num should do the trick, as sketched below.
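A rough sketch of that counter-based loop (untested; it assumes num reports the total number of entries matched by the search, and reuses the variable names from the samples above):
fetched = 0;
pageSize = 1000;
done = 0;
do until (done);
    call ldaps_search_page(handle, shandle, filter, attrs, num, rc, more, pageSize);
    if rc ne 0 then do;
        msg = sysmsg();
        put msg;
        done = 1;
    end;
    else do;
        /* ... process the entries of this page here ... */
        fetched = fetched + min(pageSize, num - fetched);
        if fetched >= num then done = 1;
    end;
    /* free the search results page */
    if shandle ne 0 then call ldaps_free(shandle, rc);
end;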

Related

HH:MI:SS string to return number of seconds

I'm trying to create a procedure that takes in a string in 'HH:MI:SS' format and returns the number of seconds.
I want to parse out the number of hours, minutes and seconds from the string and return the number of seconds for the value passed in.
The procedure is created successfully, but when I try calling it I get an error. Can someone please let me know how to fix this problem?
Thanks in advance to all who answer.
CREATE OR REPLACE PROCEDURE CONVERT_TO_SECONDS(
    i_date_string IN VARCHAR2,
    o_seconds OUT PLS_INTEGER
)
AS
    l_hours   NUMBER;
    l_minutes NUMBER;
    l_seconds NUMBER;
BEGIN
    SELECT trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 1)) hours,
           trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 2)) minutes,
           trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 3)) seconds
    INTO   l_hours, l_minutes, l_seconds
    FROM   dual;
    o_seconds := l_hours * 3600 + l_minutes * 60 + l_seconds;
END;

SELECT CONVERT_TO_SECONDS('08:08:08') FROM DUAL;
You can't select a procedure in a SQL statement; you can only select a function. So you must create a function, not a procedure - a function that returns an integer.
Here is what that may look like. It can still be improved, but I assume this is for practice/learning purposes. (For example, it's not clear why you need to do anything with double quotes in your code.) I made just the fewest changes needed to make it work; I'll let you find the differences between this and your attempt.
CREATE OR REPLACE FUNCTION CONVERT_TO_SECONDS(
    i_date_string IN VARCHAR2
)
RETURN INTEGER
AS
    l_hours   NUMBER;
    l_minutes NUMBER;
    l_seconds NUMBER;
BEGIN
    SELECT trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 1)) hours,
           trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 2)) minutes,
           trim('"' FROM regexp_substr(i_date_string, '".*?"|[^:]+', 1, 3)) seconds
    INTO   l_hours, l_minutes, l_seconds
    FROM   dual;
    RETURN l_hours * 3600 + l_minutes * 60 + l_seconds;
END;
/
Function CONVERT_TO_SECONDS compiled
SELECT CONVERT_TO_SECONDS('08:08:08') FROM DUAL;
CONVERT_TO_SECONDS('08:08:08')
------------------------------
29288
Don't do this with regexes. Use Oracle's built-in date conversion functions.
SELECT TO_CHAR( TO_DATE('12:34:56','HH24:MI:SS'), 'sssss') FROM dual;
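TO_CHAR returns a string; if you need the result as a number, one option (just a small variation on the above) is to wrap it in TO_NUMBER:
SELECT TO_NUMBER(TO_CHAR(TO_DATE('12:34:56', 'HH24:MI:SS'), 'SSSSS')) AS seconds FROM dual;
-- returns 45296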
Here are some more links with examples:
https://nimishgarg.blogspot.com/2014/10/oracle-convert-time-hhmiss-to-seconds.html
https://www.complexsql.com/how-to-convert-time-to-seconds-in-oracle-with-examples/
https://www.oracletutorial.com/oracle-date-functions/
Finally, if you tell us "I get an error" then you need to tell us what the error was. Always always always cut & paste the exact error you got. The actual error message is the most important tool for debugging a problem.

thinkscript if statement failure

The thinkscript if statement fails to branch as expected in some cases. The following test case can be used to reproduce this bug/defect.
It is shared via a Grid containing the chart and the script.
To cut a long story short, a possible workaround in some cases is to use the if expression, which is a function; it may be slower, potentially leading to a script execution timeout in scans.
This fairly nasty bug in thinkscript prevents me from writing some scans and studies the way I need to.
Following is some sample code that shows the problem on a chart.
input price = close;
input smoothPeriods = 20;
def output = Average(price, smoothPeriods);
# Get the current offset from the right edge from BarNumber()
# BarNumber(): The current bar number. On a chart, we can see that the number increases
# from left 1 to number of bars e.g. 140 at the right edge.
def barNumber = BarNumber();
def barCount = HighestAll(barNumber);
# rightOffset: 0 at the right edge, i.e. at the rightmost bar,
# increasing from right to left.
def rightOffset = barCount - barNumber;
# Prepare a lookup table:
def lookup;
if (barNumber == 1) {
lookup = -1;
} else {
lookup = 53;
}
# This script gets the minimum value from data in the offset range between startIndex
# and endIndex. It serves as a functional but not direct replacement for the
# GetMinValueOffset function where a dynamic range is required. Expect it to be slow.
script getMinValueBetween {
input data = low;
input startIndex = 0;
input endIndex = 0;
plot minValue = fold index = startIndex to endIndex with minRunning = Double.POSITIVE_INFINITY do Min(GetValue(data, index), minRunning);
}
# Call this only once at the last bar.
script buildValue {
input lookup = close;
input offsetLast = 0;
# Do an indirect lookup
def lookupPosn = 23;
def indirectLookupPosn = GetValue(lookup, lookupPosn);
# lowAtIndirectLookupPosn is assigned incorrectly. The if statement APPEARS to be executed
# as if indirectLookupPosn was 0 but indirectLookupPosn is NOT 0 so the condition
# for the first branch should be met!
def lowAtIndirectLookupPosn;
if (indirectLookupPosn > offsetLast) {
lowAtIndirectLookupPosn = getMinValueBetween(low, offsetLast, indirectLookupPosn);
} else {
lowAtIndirectLookupPosn = close[offsetLast];
}
plot testResult = lowAtIndirectLookupPosn;
}
plot debugLower;
if (rightOffset == 0) {
debugLower = buildValue(lookup);
} else {
debugLower = 0;
}
declare lower;
To prepare the chart for the stock ADT, please set custom time frame:
10/09/18 to 10/09/19, aggregation period 1 day.
The aim of the script is to find the low value of 4.25 on 08/14/2019.
I DO know that there are various methods to do this in thinkscript such as GetMinValueOffset().
Let us please not discuss alternative methods of achieving the objective of finding the low, i.e. alternatives to the attached script, because I am not asking for help achieving the objective. I am reporting a bug, and I want to know what goes wrong and perhaps how to fix it. In other words, finding the low here is just an example to make the script easier to follow. It could be anything else that one wants a script to compute.
Please let me describe the script.
First it does some smoothing with a moving average. The result is:
def output;
Then the script defines the distance from the right edge so we can work with offsets:
def rightOffset;
Then the script builds a lookup table:
def lookup;
script getMinValueBetween {} is a little function that finds the low between two offset positions, in a dynamic way. It is needed because GetMinValueOffset() does not accept dynamic parameters.
Then we have script buildValue {}
This is where the error occurs. This script is executed at the right edge.
buildValue {} does an indirect lookup as follows:
First it goes into lookup where it finds the value 53 at lookupPosn = 23.
With 53, it finds the low between offset 53 and 0, by calling the script function getMinValueBetween().
It stores the value in def lowAtIndirectLookupPosn;
As you can see, this is very simple indeed - only 38 lines of code!
The problem is that lowAtIndirectLookupPosn contains the wrong value, as if the wrong branch of the if statement had been executed.
plot testResult should put out the low 4.25. Instead it puts out close[offsetLast], which is 6.26.
Quite honestly, this is a disaster, because it is impossible to predict whether any given if statement in your program will fail or not.
In a limited number of cases, the if-expression can be used instead of the if statement. However, the if-expression covers only a subset of use cases, and it may execute with lower performance in scans. More importantly, it defeats the purpose of the if statement in an important case: it supports conditional assignment but not conditional execution. In other words, it evaluates both branches before assigning one of the two values.
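For completeness, inside buildValue the if-expression workaround mentioned above would look roughly like this (a sketch, untested; note again that both branches get evaluated):
# sketch: function form of if; both branches are evaluated before assignment
def lowAtIndirectLookupPosn = if(indirectLookupPosn > offsetLast,
    getMinValueBetween(low, offsetLast, indirectLookupPosn),
    close[offsetLast]);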

How to perform a transaction with multiple operations in Cassandra

I want to perform a transaction with multiple write operations (~5 inserts/updates to different tables) in Cassandra, but if any of them fails, the rest should not be written (either roll back each operation or fail the whole transaction).
Please let me know the proper approach to this in Cassandra and how to do it (an example would be welcome).
Yes, you can use the logged batch functionality to accomplish this atomically. Note that you do take a hit on performance. See the BATCH Statements section of the C++ Driver documentation.
Here is an example of how to do this in C++, taken from the documentation linked above. It demonstrates how to batch an INSERT, an UPDATE and a DELETE together:
/* This logged batch will make sure that all the mutations eventually succeed */
CassBatch* batch = cass_batch_new(CASS_BATCH_TYPE_LOGGED);
/* Statements can be immediately freed after being added to the batch */
{
CassStatement* statement
= cass_statement_new(cass_string_init("INSERT INTO example1(key, value) VALUES ('a', '1')"), 0);
cass_batch_add_statement(batch, statement);
cass_statement_free(statement);
}
{
CassStatement* statement
= cass_statement_new(cass_string_init("UPDATE example2 set value = '2' WHERE key = 'b'"), 0);
cass_batch_add_statement(batch, statement);
cass_statement_free(statement);
}
{
CassStatement* statement
= cass_statement_new(cass_string_init("DELETE FROM example3 WHERE key = 'c'"), 0);
cass_batch_add_statement(batch, statement);
cass_statement_free(statement);
}
CassFuture* batch_future = cass_session_execute_batch(session, batch);
/* Batch objects can be freed immediately after being executed */
cass_batch_free(batch);
/* This will block until the query has finished */
CassError rc = cass_future_error_code(batch_future);
printf("Batch result: %s\n", cass_error_desc(rc));
cass_future_free(batch_future);
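For reference, the equivalent batch written directly in CQL (for example from cqlsh) would look something like this; LOGGED is the default batch type:
BEGIN BATCH
    INSERT INTO example1 (key, value) VALUES ('a', '1');
    UPDATE example2 SET value = '2' WHERE key = 'b';
    DELETE FROM example3 WHERE key = 'c';
APPLY BATCH;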

row number 0 is out of range 0..-1 LIBPQ

query = "select * results where id = '";
query.append(ID);
query.append("'");
res = PQexec(conn, query.c_str());
After executing this statement, I get the following error.
row number 0 is out of range 0..-1
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct null not valid
However, when I run the same query directly in PostgreSQL, it does not have any problem:
select * from results where id = 'hello'
The only problem is that if the query parameter passed is not in the database, it throws a runtime error. If I provide a query parameter that exists in the database, it executes normally.
That's two separate errors, not one. This error:
row number 0 is out of range 0..-1
is from libpq, but is reported by code you have not shown here.
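For what it's worth, that libpq message typically appears when PQgetvalue() is asked for row 0 of a result that contains no rows; a guard along these lines (just a sketch) avoids it:
/* sketch: check the status and row count before reading any field */
if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0) {
    std::string id = PQgetvalue(res, 0, 0);  /* safe: row 0 exists */
} else {
    /* no rows returned (or an error): do not call PQgetvalue */
}
PQclear(res);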
The error:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct null not valid
is not from PostgreSQL, it is from your C++ runtime.
It isn't possible for me to tell exactly where it came from. You should really run the program under a debugger to tell that. But if I had to guess, based on the code shown, I'd say that ID is null, so:
query.append(ID);
is therefore aborting the program.
Separately, your code is showing a very insecure practice where you compose SQL by string concatenation. This makes SQL injection exploits easy.
Imagine what would happen if your "ID" variable was set to ';DROP TABLE results;-- by a malicious user.
Do not insert user-supplied values into SQL by appending strings.
Instead, use bind parameters via PQexecParams. It looks complicated, but most parameters are optional for simple uses. A version for your query, assuming that ID is a non-null std::string, would look like:
PGresult *res;
const char *values[1];
values[0] = ID.c_str();
res = PQexecParams(conn,
                   "SELECT * FROM results WHERE id = $1",
                   1,      /* one parameter */
                   NULL,   /* let the server infer the parameter type */
                   values, /* parameter values */
                   NULL,   /* text values, so no lengths needed */
                   NULL,   /* all parameters in text format */
                   0);     /* ask for results in text format */
If you need to handle nulls you need another parameter; see the documentation.
Maybe a bit too late, but I just want to put in my five cents.
I also got this error recently with a very simple stored procedure like this one:
CREATE OR REPLACE FUNCTION selectMsgCounter()
RETURNS text AS
$BODY$
DECLARE
msgCnt text;
BEGIN
msgCnt:= (SELECT max(messageID)::text from messages);
RETURN 'messageCounter: ' || msgCnt;
END
$BODY$
LANGUAGE plpgsql STABLE;
I did some debugging with:
if (PQntuples(res) >= 1)
{
    /* +1 for the terminating null byte that strcpy writes */
    char* char_result = (char*) realloc(NULL, (PQgetlength(res, 0, 0) + 1) * sizeof(char));
    strcpy(char_result, PQgetvalue(res, 0, 0));
    bool ok = true;
    messageCounter = QString(char_result).remove("messageCounter: ").toULongLong(&ok);
    if (!ok) messageCounter = -1;
    qDebug() << "messageCounter: " << messageCounter;
    free(char_result);
    PQclear(res);
    PQfinish(myConn); // or should I call the destructor later?
    myConn = NULL;
}
else
{
    fprintf(stderr, "storedProcGetMsgCounter Connection Error: %s",
            PQerrorMessage(myConn));
    PQclear(res);
    PQfinish(myConn); // or should I call the destructor later?
    myConn = NULL;
}
It turned out that the owner of the stored procedure was not the user whose credentials I logged in with.
So, at least in my case, the error "row number 0 is out of range 0..-1" was at first sight a false positive.

Database commit not quick enough for Coldfusion process

I have the following ColdFusion process:
1. My code makes a database call to the proc CommentInsert (this inserts a comment, and then calls an event insert proc about the comment being added, called EventInsert).
2. I then call Event.GetEventByCommentId(commentId).
The result is no records returned, as EventInsert hasn't finished adding the event record triggered by CommentInsert in Step 1.
I know this is the case because if I create a delay between Steps 1 and 2, a recordset IS returned in Step 2.
This leads me to believe that the read in Step 2 is happening too quickly, before the event insert has committed in Step 1.
My question is: how do I tell the ColdFusion process to wait until Step 1 has completed before doing the read in Step 2?
Step one and step two are two totally separate methods.
Code:
<cfset MessageHandlerManager = AddComment(argumentCollection=arguments) />
<cfset qEvents = application.API.EventManager.GetEventFeed(commentId=MessageHandlerManager.GetReturnItems()) />
Also, just let me add that the commentId being passed is valid. I have checked.
Another way to look at it:
Given this code:
<!--- Calls CommentInsert proc, which inserts a comment AND inserts an
event record by calling EventInsert within the proc --->
<cfset var newCommentId = AddComment(argumentCollection=arguments) />
<cfloop from="1" to="1000000" index="i">
</cfloop>
<!--- Gets the event record inserted in the code above --->
<cfset qEvent =
application.API.EventManager.GetEventFeed(commentId=newCommentId ) />
When I run the above code, qEvent comes back with a valid record. However, when I comment out the loop, the record comes back empty.
What I think is happening is that CommentInsert returns the new comment id, but when the GetEventFeed function is called, the EventInsert proc hasn't completed in time and no record is found. Thus, by adding the loop and delaying a bit, the event insert has time to finish and a valid record is returned when GetEventFeed is called.
So my question is: how do I prevent this without using the loop?
UPDATE:
Here are the two stored procs used:
DELIMITER $$
DROP PROCEDURE IF EXISTS `CommentInsert` $$
CREATE DEFINER=`root`@`%` PROCEDURE `CommentInsert`(
IN _commentParentId bigint,
IN _commentObjectType int,
IN _commentObjectId bigint,
IN _commentText text,
IN _commentAuthorName varchar(100),
IN _commentAuthorEmail varchar(255),
IN _commentAuthorWebsite varchar(512),
IN _commentSubscribe tinyint(1),
IN _commentIsDisabled tinyint(1),
IN _commentIsActive tinyint(1),
IN _commentCSI int,
IN _commentCSD datetime,
IN _commentUSI int,
IN _commentUSD datetime,
OUT _commentIdOut bigint
)
BEGIN
DECLARE _commentId bigint default 0;
INSERT INTO comment
(
commentParentId,
commentObjectType,
commentObjectId,
commentText,
commentAuthorName,
commentAuthorEmail,
commentAuthorWebsite,
commentSubscribe,
commentIsDisabled,
commentIsActive,
commentCSI,
commentCSD,
commentUSI,
commentUSD
)
VALUES
(
_commentParentId,
_commentObjectType,
_commentObjectId,
_commentText,
_commentAuthorName,
_commentAuthorEmail,
_commentAuthorWebsite,
_commentSubscribe,
_commentIsDisabled,
_commentIsActive,
_commentCSI,
_commentCSD,
_commentUSI,
_commentUSD
);
SET _commentId = LAST_INSERT_ID();
CALL EventInsert(6, Now(), _commentId, _commentObjectType, _commentObjectId, null, null, 'Comment Added', 1, _commentCSI, Now(), _commentUSI, Now());
SELECT _commentId INTO _commentIdOut ;
END $$
DELIMITER ;
DELIMITER $$
DROP PROCEDURE IF EXISTS `EventInsert` $$
CREATE DEFINER=`root`@`%` PROCEDURE `EventInsert`(
IN _eventTypeId int,
IN _eventCreateDate datetime,
IN _eventObjectId bigint,
IN _eventAffectedObjectType1 int,
IN _eventAffectedObjectId1 bigint,
IN _eventAffectedObjectType2 int,
IN _eventAffectedObjectId2 bigint,
IN _eventText varchar(1024),
IN _eventIsActive tinyint,
IN _eventCSI int,
IN _eventCSD datetime,
IN _eventUSI int,
IN _eventUSD datetime
)
BEGIN
INSERT INTO event
(
eventTypeId,
eventCreateDate,
eventObjectId,
eventAffectedObjectType1,
eventAffectedObjectId1,
eventAffectedObjectType2,
eventAffectedObjectId2,
eventText,
eventIsActive,
eventCSI,
eventCSD,
eventUSI,
eventUSD
)
VALUES
(
_eventTypeId,
_eventCreateDate,
_eventObjectId,
_eventAffectedObjectType1,
_eventAffectedObjectId1,
_eventAffectedObjectType2,
_eventAffectedObjectId2,
_eventText,
_eventIsActive,
_eventCSI,
_eventCSD,
_eventUSI,
_eventUSD
);
END $$
DELIMITER ;
Found it. It boiled down to this line in the EventManager.GetEventFeed query:
AND eventCreateDate <= <cfqueryparam cfsqltype="cf_sql_timestamp" value="#Now()#" />
What was happening was that the MySQL Now() function called in the EventInsert proc was a fraction later than the ColdFusion #Now()# used in the query, so that line excluded the record. That is also why it only happened when comments were added quickly.
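One way to avoid that clock skew (a sketch, not necessarily how the OP fixed it) is to let MySQL evaluate the cutoff itself instead of binding the ColdFusion server's #Now()#:
<!--- sketch: compare against the database clock rather than the CF server clock --->
AND eventCreateDate <= NOW()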
What a biatch. Thanks for the input everyone.
Let me get this straight:
You call a MySQL SP which does an insert, which then calls another SP to do another insert.
There's no return to ColdFusion between those two? Is that right?
If that's the case, then the chances are there's a problem with your SP not returning values correctly, or you're looking in the wrong place for the result.
I'm more inclined towards there being a problem with the MySQL SPs. They aren't exactly great and don't really give you a great deal of performance benefit. Views are useful, but the SPs are, frankly, a bit rubbish. I suspect that when you call the second SP from within the first SP and it returns a value, it's not being correctly passed back out of the original SP to ColdFusion, hence the lack of a result.
To be honest, my suggestion would be to write two ORM functions or simple cfqueries in a suitable DAO or service: record the comment insert first and return a value, then make the other call to get your event based on the returned comment id. (ColdFusion 8 will give you GENERATED_KEY, ColdFusion 9 gives generatedkey; I'm not sure what it'll be called in Railo, but it'll be there in the "result" attribute structure.)
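A rough sketch of that approach (illustrative datasource, table and column names, not the OP's actual schema):
<!--- Sketch: insert the comment with a plain cfquery and read the new id from the
      result attribute (GENERATED_KEY in CF8, generatedkey in CF9+) --->
<cfquery datasource="myDsn" result="insertResult">
    INSERT INTO comment (commentText, commentCSD)
    VALUES (
        <cfqueryparam cfsqltype="cf_sql_longvarchar" value="#arguments.commentText#" />,
        NOW()
    )
</cfquery>
<cfset newCommentId = insertResult.generatedkey />
<!--- then fetch the related event using the returned comment id --->
<cfquery datasource="myDsn" name="qEvent">
    SELECT *
    FROM event
    WHERE eventObjectId = <cfqueryparam cfsqltype="cf_sql_bigint" value="#newCommentId#" />
</cfquery>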
Thinking about it, I'm not even sure why you're getting the event based on the comment id just entered. You've just added that comment against an event, so you should already have some data on that event, even if it's just the ID, from which you can then get the full event record/object without having to go round the houses via the comment.
So over all I would suggest taking a step back and looking at the data flow you're working with and perhaps refactor it.