I'm facing a little problem: I'm trying to work out whether a collection of objects is empty or not.
Basically I'm making a connection to a DB and running a simple SELECT query: if there are no results, then I want to stop the execution of the test...
This is the relevant part of the code:
If ctrl(0).Value = 0 Then
    reporter.ReportEvent 1, "Process stopped", "The operation has failed"
End If
Obviously, if the query returns no rows then ctrl(0) does not exist, and QTP stops the execution telling me that the record pointer is at either the beginning (BOF) or the end (EOF) of the object...
How can I solve this?!
Edit: if I count the objects in the collection, it returns 6. This is the number of columns that the entity in the DB has. But every column is empty, so the SELECT does not return any rows...
You can check whether the Recordset is at EOF. Please see the code below:
If Not objRecordSet.EOF Then
    strValue = objRecordSet(0)
Else
    ExitTest
End If
I have a MongoDB query:
db.list.find({categories:{$elemMatch:{ "$regex":".*Bar.*", $not:/^Barbeque/}}}).pretty()
where it looks at the elements in the categories array and, I think, gets all documents where some element contains "Bar" but does not start with "Barbeque". How do I make sure that my query is correct?
Let me know if my query is wrong and how I could fix it.
You should anchor the end of the string like this:
$not:/^Barbeque$/
rather than
$not:/^Barbeque/
because /^Barbeque/ also excludes any value that merely starts with "Barbeque" (e.g. "Barbequee"), while /^Barbeque$/ filters out only the exact value "Barbeque", so "Barbequee" would still be returned in your query result.
As for making sure that your query is correct: if the query is right, Mongo will return exactly the documents that match it.
If something goes wrong, it is most likely the logic in the query that causes Mongo to return an unexpected result.
So check your query logic before you run it. :D
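The difference the $ anchor makes can be sketched outside of Mongo with plain regexes. Here is a small illustration using Python's re module (this is not a Mongo query, and the sample values are made up):

```python
import re

values = ["Bars", "Barbeque", "Barbequee"]

# Unanchored exclusion (/^Barbeque/): anything *starting with* "Barbeque"
# is filtered out, so "Barbequee" is lost as well.
loose = [v for v in values
         if re.search(r"Bar", v) and not re.match(r"^Barbeque", v)]

# Fully anchored exclusion (/^Barbeque$/): only the exact value
# "Barbeque" is filtered out, so "Barbequee" survives.
strict = [v for v in values
          if re.search(r"Bar", v) and not re.fullmatch(r"Barbeque", v)]

print(loose)   # ['Bars']
print(strict)  # ['Bars', 'Barbequee']
```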
I have been searching for a while on how to get the generated auto-increment ID from an "INSERT INTO ... (...) VALUES (...)". Even on Stack Overflow, I only find the answer of using a "SELECT LAST_INSERT_ID()" in a subsequent query. I find this solution unsatisfactory for a number of reasons:
1) This will effectively double the queries sent to the database, especially since it is mostly handling inserts.
2) What will happen if more than one thread access the database at the same time? What if more than one application accesses the database at the same time? It seems to me the values are bound to become erroneous.
It's hard for me to believe that the MySQL C++ Connector wouldn't offer a feature that both the Java Connector and the PHP Connector provide.
An example taken from http://forums.mysql.com/read.php?167,294960,295250
sql::Statement* stmt = conn->createStatement();
sql::ResultSet* res = stmt->executeQuery("SELECT @@identity AS id");
res->next();
my_ulong retVal = res->getInt64("id");
In a nutshell, if your ID column is an auto_increment column then you can just as well use
SELECT @@identity AS id
EDIT:
Not sure what you mean by a second query/round trip. At first I thought you wanted a different way to get the ID of the last inserted row, but it looks like you are more interested in knowing whether you can save the round trip.
If that's the case, then I completely agree with @WhozCraig; you can punch in both your queries as a single statement, like insert into tab values ...; select last_insert_id();, which will be a single call
OR
you can have a stored procedure like the one below to do the same and save the round trip
delimiter //
create procedure myproc()
begin
    insert into mytab values (...);
    select last_insert_id();
end //
delimiter ;
Let me know if this is not what you are trying to achieve.
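For what it's worth, the round-trip-free pattern the asker wants does exist in drivers that track the generated key on the connection. Here is a minimal sketch of the idea using Python's stdlib sqlite3 (an analogy only; this is not the MySQL C++ Connector API, and the table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# The generated key is exposed on the cursor right after the INSERT,
# so no second query is needed.
cur = conn.execute("INSERT INTO tab (name) VALUES (?)", ("first",))
print(cur.lastrowid)  # 1
```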
I have an entire set of data I want to insert into a table. I am trying to have it insert/update everything OR roll back. I was going to do it in a transaction, but I wasn't sure if the sqlite3_exec() command did the same thing.
My goal was to iterate through the list.
Select from each iteration based on the Primary Key.
If result was found:
append update to string;
else
append insert to string;
Then after iterating through the loop, I would have a giant string and say:
sqlite3_exec(string);
sqlite3_close(db);
Is that how I should do it? I was going to do it on each iteration of the loop, but I didn't think that would give me a global rollback if there was an error.
No, you should not append everything into a giant string. If you do, you will need to allocate a whole bunch of memory as you are going, and it will be harder to create good error messages for each individual statement, as you will just get a single error for the entire string. Why spend all of that effort, constructing one big string when SQLite is just going to have to parse it back down into its individual statements again?
Instead, as @Chad suggests, you should just use sqlite3_exec() on a BEGIN statement, which will begin a transaction. Then sqlite3_exec() each statement in turn, and finally sqlite3_exec() a COMMIT or ROLLBACK depending on how everything goes. The BEGIN statement will start a transaction, and all of the statements executed after that will be within that transaction, and so committed or rolled back together. That's what the "A" in ACID stands for: Atomic, as all of the statements in the transaction will be committed or rolled back as if they were a single atomic operation.
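That BEGIN / per-statement exec / COMMIT-or-ROLLBACK flow looks roughly like this, sketched with Python's stdlib sqlite3 bindings rather than the C API, purely for illustration (the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("BEGIN")
try:
    conn.execute("INSERT INTO items (name) VALUES ('a')")
    conn.execute("INSERT INTO items (name) VALUES ('b')")
    conn.execute("COMMIT")    # both rows become visible together...
except sqlite3.Error:
    conn.execute("ROLLBACK")  # ...or neither does

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 2
```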
Furthermore, you probably shouldn't use sqlite3_exec() if some of the data varies within each statement, such as being read from a file. If you do, a mistake could easily leave you with an SQL injection bug. For instance, if you construct your query by appending strings, and you have strings like char *str = "it's a string" to insert, if you don't quote it properly, your statement could come out like INSERT INTO table VALUES ('it's a string');, which will be an error. Or if someone malicious could write data into this file, then they could cause you to execute any SQL statement they want (imagine if the string were "'); DROP TABLE my_important_table; --"). You may think that no one malicious is going to provide input, but you can still have accidental problems, if someone puts a character that confuses the SQL parser into a string.
Instead, you should use sqlite3_prepare_v2() and sqlite3_bind_...() (where ... is the type, like int or double or text). In order to do this, you use a statement like char *query = "INSERT INTO table VALUES (?)", where you substitute a ? for where you want your parameter to go, prepare it using sqlite3_prepare_v2(db, query, -1, &stmt, NULL), bind the parameter using sqlite3_bind_text(stmt, 1, str, -1, SQLITE_STATIC), then execute the statement with sqlite3_step(stmt). If the statement returns any data, you will get SQLITE_ROW, and can access the data using the various sqlite3_column_...() functions. Be sure to read the documentation carefully; some of the example parameters I gave may need to change depending on how you use this.
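The same prepare/bind idea, shown with Python's stdlib sqlite3 for brevity (the ? placeholder plays the role of sqlite3_bind_...(); the string with the embedded quote is exactly the kind of input that breaks naive string concatenation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")

tricky = "it's a string"  # would break a hand-concatenated statement
conn.execute("INSERT INTO t VALUES (?)", (tricky,))  # bound, never interpolated

stored = conn.execute("SELECT val FROM t").fetchone()[0]
print(stored)  # it's a string
```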
Yes, this is a bit more of a pain than calling sqlite3_exec(), but if your query has any data loaded from external sources (files, user input), this is the only way to do it correctly. sqlite3_exec() is fine to call if the entire text of the query is contained within your source, such as the BEGIN and COMMIT or ROLLBACK statements, or pre-written queries with no parts coming from outside of your program. You only need prepare/bind if there's any chance that an unexpected string could get in.
Finally, you don't need to query whether something is in the database already and then insert or update it. You can do an INSERT OR REPLACE query, which will either insert a record or replace one with a matching primary key; this is the equivalent of selecting and then doing an INSERT or an UPDATE, but much quicker and simpler. See the INSERT and "on conflict" documentation for more details.
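A quick sketch of INSERT OR REPLACE replacing the select-then-insert-or-update dance (Python's stdlib sqlite3 again, with a made-up key/value table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

# The second statement replaces the first row because the primary key
# matches; no prior SELECT is needed to choose between INSERT and UPDATE.
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("color", "red"))
conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("color", "blue"))

rows = conn.execute("SELECT k, v FROM kv").fetchall()
print(rows)  # [('color', 'blue')]
```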
I have a method that builds and runs a Criteria query. The query does what I want it to, specifically it filters (and sorts) records based on user input.
Also, the query size is restricted to the number of records on the screen. This is important because the data table can be potentially very large.
However, if filters are applied, I want to count the number of records that would be returned if the query was not limited. So this means running two queries: one to fetch the records and then one to count the records that are in the overall set. It looks like this:
public List<Log> runQuery(TableQueryParameters tqp) {
    // get the builder, query, and root
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Log> query = builder.createQuery(Log.class);
    Root<Log> root = query.from(Log.class);
    // build the requested filters
    Predicate filter = null;
    for (TableQueryParameters.FilterTerm ft : tqp.getFilterTerms()) {
        // this section runs through the user input and constructs the
        // predicate
    }
    if (filter != null) query.where(filter);
    // attach the requested ordering
    List<Order> orders = new ArrayList<Order>();
    for (TableQueryParameters.SortTerm st : tqp.getActiveSortTerms()) {
        // this section constructs the Order objects
    }
    if (!orders.isEmpty()) query.orderBy(orders);
    // run the query
    TypedQuery<Log> typedQuery = em.createQuery(query);
    typedQuery.setFirstResult((int) tqp.getStartRecord());
    typedQuery.setMaxResults(tqp.getPageSize());
    List<Log> list = typedQuery.getResultList();
    // if we need the result size, fetch it now
    if (tqp.isNeedResultSize()) {
        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        countQuery.select(builder.count(countQuery.from(Log.class)));
        if (filter != null) countQuery.where(filter);
        tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
    }
    return list;
}
As a result, I call createQuery twice on the same CriteriaBuilder and I share the Predicate object (filter) between both of them. When I run the second query, I sometimes get the following message:
Exception [EclipseLink-6089] (Eclipse Persistence Services -
2.2.0.v20110202-r8913):
org.eclipse.persistence.exceptions.QueryException Exception
Description: The expression has not been initialized correctly. Only
a single ExpressionBuilder should be used for a query. For parallel
expressions, the query class must be provided to the ExpressionBuilder
constructor, and the query's ExpressionBuilder must always be on the
left side of the expression. Expression: [ Base
com.myqwip.database.Log] Query: ReportQuery(referenceClass=Log ) at
org.eclipse.persistence.exceptions.QueryException.noExpressionBuilderFound(QueryException.java:874)
at
org.eclipse.persistence.expressions.ExpressionBuilder.getDescriptor(ExpressionBuilder.java:195)
at
org.eclipse.persistence.internal.expressions.DataExpression.getMapping(DataExpression.java:214)
Can someone tell me why this error shows up intermittently, and what I should do to fix this?
Short answer to the question: yes you can, but only sequentially.
In the method above, you start creating the first query, then start creating the second, then execute the second, then execute the first.
I had the exact same problem. I don't know why it's intermittent, though.
In other words, you start creating your first query, and before having finished it, you start creating and executing another.
Hibernate doesn't complain, but EclipseLink doesn't like it.
If you just start with the count query, execute it, and then create and execute the other query (which is what you've done by splitting it into two methods), EclipseLink won't complain.
see https://issues.jboss.org/browse/SEAMSECURITY-91
It looks like this posting isn't going to draw much more response, so I will answer it with how I resolved it.
Ultimately I ended up breaking my runQuery() method into two methods: runQuery(), which fetches the records, and runQueryCount(), which fetches the count of records without sort parameters. Each method has its own call to em.getCriteriaBuilder(). I have no idea what effect that has on the EntityManager, but the problem has not appeared since.
Also, the DAO object that has these methods used to be @ApplicationScoped. It now has no declared scope, so it is constructed on demand from the various @RequestScoped and @ConversationScoped beans that use it. I don't know if this has any effect on the problem, but since it has not appeared since, I will use this as my code pattern from now on. Suggestions welcome.
I am having the most frustrating problem. Basically, I have a website and a web service running on the same server. Both use ADO.NET to connect to data tables using a couple of custom calls I have created myself. The website has never had a problem connecting to a particular proc to return data; however, the web service, roughly once in every 100 calls to that proc, returns an empty DataSet even though it should come back populated (and does when run as a query in SQL Mgmt Studio). The weird thing is it works most times, but on the odd occasion returns this error:
System.IndexOutOfRangeException: Cannot find table 0. at System.Data.DataTableCollection.get_Item(Int32 index)
Dim SQLCmd As SqlCommand = CreateSPCommand("VerifyCredentialsSP")
SQLCmd.Parameters.AddWithValue("@Password", Credentials.Password)
GetData(SQLCmd)
ds.DataSetName = "Customer"
If ds.Tables(0) IsNot Nothing Then
    ds.Tables(0).TableName = "Customer"
End If
One way to do this is to catch the exception being thrown, but the better method is to check for null (or Nothing, in your case) first.
Do not access the index ds.Tables(0) directly.
First check that your DataSet ds is not Nothing, and that it actually contains a table, before accessing it, like so:
If ds IsNot Nothing AndAlso ds.Tables.Count > 0 Then
    ' only then can you index ds.Tables(0)
End If
In this way you avoid a lookup on the index of your DataSet unless it contains a valid reference. In your method you are accessing Tables(0), which may or may not exist; without a valid check your code could potentially throw an exception, and in this case it has!