I am having the most frustrating problem. Basically, I have a website and a web service running on the same server. Both use ADO.NET to connect to data tables using a couple of custom calls I have created myself. The website has never had a problem connecting to a particular proc to return data; however, the web service, roughly once in every 100 calls to that proc, returns an empty dataset even though it should come back populated, and does when I run the query in SQL Mgmt Studio. The weird thing is it works most of the time, but on the odd occasion it returns this error:
System.IndexOutOfRangeException: Cannot find table 0. at System.Data.DataTableCollection.get_Item(Int32 index)
Dim SQLCmd As SqlCommand = CreateSPCommand("VerifyCredentialsSP")
SQLCmd.Parameters.AddWithValue("@Password", Credentials.Password)
GetData(SQLCmd)
ds.DataSetName = "Customer"
If ds.Tables(0) IsNot Nothing Then
ds.Tables(0).TableName = "Customer"
End If
One way to handle this is to catch the exception being thrown, but the better method is to check for null (Nothing, in your case).
Do not access the index ds.Tables(0) blindly.
Check whether your dataset ds is Nothing, and whether it actually contains a table, before accessing it, like so:
If ds IsNot Nothing AndAlso ds.Tables.Count > 0 Then
    'only then can you safely index ds.Tables(0)
End If
This way you only index into the dataset when it holds a valid reference and at least one table. In your method you are accessing Tables(0), which may or may not exist; without a valid check your code can throw an exception, and in this case it has!
I have 2 external tables in BigQuery, created on top of JSON files on Google Cloud Storage. The first one is a fact table; the second holds error data, and it may or may not be empty.
I can query each table separately just fine, even an empty one - here is an empty table query result example.
I'm also able to left join them if both of them are not empty.
However, if the errors table is empty, my query fails with the following error:
The query specified one or more federated data sources but not all of them were scanned. It usually indicates incorrect uri specification or a 'limit' clause over a union of federated data sources that was satisfied without having to read all sources.
This situation isn't covered anywhere in the docs, and it's not related to this versioning issue - Reading BigQuery federated table as source in Dataflow throws an error
I'd rather avoid converting either of these tables to native tables, since they are used in just one step of the ETL process, and the data is dropped afterwards. One of them being empty doesn't look like an exceptional situation, since a plain select works just fine.
Is some workaround possible?
UPD: raised an issue with Google, waiting for response - https://issuetracker.google.com/issues/145230326
It feels like a bug. One workaround is to use scripting to avoid querying the empty table:
DECLARE is_external_table_empty BOOL DEFAULT
(SELECT 0 = (SELECT COUNT(*) FROM your_external_table));
-- do things differently when is_external_table_empty is true
IF is_external_table_empty THEN
  ...
ELSE
  ...
END IF;
I have been searching for a while for how to get the generated auto-increment ID from an "INSERT INTO ... (...) VALUES (...)". Even on Stack Overflow, I only find the answer of using a "SELECT LAST_INSERT_ID()" in a subsequent query. I find this solution unsatisfactory for a number of reasons:
1) This will effectively double the queries sent to the database, especially since it is mostly handling inserts.
2) What will happen if more than one thread accesses the database at the same time? What if more than one application accesses the database at the same time? It seems to me the values are bound to become erroneous.
It's hard for me to believe that the MySQL C++ Connector wouldn't offer the feature that the Java Connector as well as the PHP Connector offer.
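For reference, this is roughly the feature the Java Connector (JDBC) offers: it can hand back the generated key together with the insert itself, with no follow-up SELECT. A minimal sketch, with the connection details and the table/column names invented purely for illustration:
import java.sql.*;

public class InsertWithGeneratedKey {
    public static void main(String[] args) throws SQLException {
        // connection URL, credentials, table and column are placeholders for this example
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/test", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO mytab (name) VALUES (?)",
                 Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "example");
            ps.executeUpdate();
            // the driver returns the AUTO_INCREMENT value alongside the insert result
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    long id = keys.getLong(1);
                    System.out.println("inserted id = " + id);
                }
            }
        }
    }
}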
An example taken from http://forums.mysql.com/read.php?167,294960,295250
sql::Statement* stmt = conn->createStatement();
sql::ResultSet* res = stmt->executeQuery("SELECT @@identity AS id");
res->next();
my_ulong retVal = res->getInt64("id");
In a nutshell, if your ID column is not an auto_increment column then you can just as well use
SELECT @@identity AS id
EDIT:
Not sure what you mean by second query/round trip. At first I thought you were trying to find a different way to get the ID of the last inserted row, but it looks like you are more interested in knowing whether you can save the round trip.
If that's the case, then I completely agree with @WhozCraig; you can send both your queries in a single statement, like insert into tab values ...; select last_insert_id(), which will be a single call
OR
you can have a stored procedure like the one below to do the same and save the round trip
create procedure myproc()
begin
    insert into mytab values ...;
    select last_insert_id();
end
Let me know if this is not what you are trying to achieve.
Hello there.
I am attempting to execute a many-to-many get-all query. To be clear, I am attempting to get a collection within a collection pulled back. I.e., we will get a result set, but within that result set there will be a collection of all objects linked to it via a foreign key. Now, to do this, I have a collection which I annotate thusly...
@ManyToMany
@JoinTable(name="QUICK_LAUNCH_DISTLIST",
    joinColumns=@JoinColumn(name="QUICK_LAUNCH_ID"),
    inverseJoinColumns=@JoinColumn(name="LIST_ID"))
private Collection<QuickLaunchDistlist> distributionLists;
Which seems to be just about textbook...
I call a named query which looks like this...
@NamedQuery(name="getQuickLaunch", query = "SELECT q FROM QuickLaunch q")
Which is executed like so...
qlList = emf.createNamedQuery("getQuickLaunch").getResultList();
Every time I make this call, I get back the expected data in the first collection, but none of the nested collections seem to populate. To find out why, I looked at the SQL being generated by the call... This is what I find...
I get this exception...
This is a FFDC log generated for the Default Resource Adapter from source:com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery
The exception caught:java.sql.SQLSyntaxErrorException: ORA-00904: "T1"."QL_DISTLIST_ID": invalid identifier
SQL Error Code is 904 SQL State is :42000
Along with this query...
SELECT t1.QL_DISTLIST_ID, t2.LIST_ID, t2.CREATE_DATE, t2.CREATE_USERID, t2.description, t2.flag, t2.MOD_DATE, t2.MOD_USERID, t2.ORGANIZATION_ID, t2.owner, t2.STATUS_ID, t1.MESSAGE_TYPE_ID, t1.MOD_DATE, t1.MOD_USERID, t1.QUICK_LAUNCH_ID FROM EPCD13.QUICK_LAUNCH_DISTLIST t0, EPCD13.QUICK_LAUNCH_DISTLIST t1, EPCD13.DISTRIBUTION_LIST t2 WHERE t0.QUICK_LAUNCH_ID = ? AND t0.LIST_ID = t1.QL_DISTLIST_ID AND t1.LIST_ID = t2.LIST_ID(+)
If you look at the first column it requests to pull back, you will notice that it selects t1.QL_DISTLIST_ID... The problem is, I have no column by that name anywhere in my DB! Why on earth is that column being referenced? How does JPA generate the queries that it calls? If I knew that, I might be a little closer to figuring out what went wrong here or what I did wrong. Any help would be greatly appreciated.
I have a method that builds and runs a Criteria query. The query does what I want it to, specifically it filters (and sorts) records based on user input.
Also, the query size is restricted to the number of records on the screen. This is important because the data table can be potentially very large.
However, if filters are applied, I want to count the number of records that would be returned if the query was not limited. So this means running two queries: one to fetch the records and then one to count the records that are in the overall set. It looks like this:
public List<Log> runQuery(TableQueryParameters tqp) {
// get the builder, query, and root
CriteriaBuilder builder = em.getCriteriaBuilder();
CriteriaQuery<Log> query = builder.createQuery(Log.class);
Root<Log> root = query.from(Log.class);
// build the requested filters
Predicate filter = null;
for (TableQueryParameters.FilterTerm ft : tqp.getFilterTerms()) {
// this section runs through the user input and constructs the
// predicate
}
if (filter != null) query.where(filter);
// attach the requested ordering
List<Order> orders = new ArrayList<Order>();
for (TableQueryParameters.SortTerm st : tqp.getActiveSortTerms()) {
// this section constructs the Order objects
}
if (!orders.isEmpty()) query.orderBy(orders);
// run the query
TypedQuery<Log> typedQuery = em.createQuery(query);
typedQuery.setFirstResult((int) tqp.getStartRecord());
typedQuery.setMaxResults(tqp.getPageSize());
List<Log> list = typedQuery.getResultList();
// if we need the result size, fetch it now
if (tqp.isNeedResultSize()) {
CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
countQuery.select(builder.count(countQuery.from(Log.class)));
if (filter != null) countQuery.where(filter);
tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
}
return list;
}
As a result, I call createQuery twice on the same CriteriaBuilder and I share the Predicate object (filter) between both of them. When I run the second query, I sometimes get the following message:
Exception [EclipseLink-6089] (Eclipse Persistence Services -
2.2.0.v20110202-r8913):
org.eclipse.persistence.exceptions.QueryException Exception
Description: The expression has not been initialized correctly. Only
a single ExpressionBuilder should be used for a query. For parallel
expressions, the query class must be provided to the ExpressionBuilder
constructor, and the query's ExpressionBuilder must always be on the
left side of the expression. Expression: [ Base
com.myqwip.database.Log] Query: ReportQuery(referenceClass=Log ) at
org.eclipse.persistence.exceptions.QueryException.noExpressionBuilderFound(QueryException.java:874)
at
org.eclipse.persistence.expressions.ExpressionBuilder.getDescriptor(ExpressionBuilder.java:195)
at
org.eclipse.persistence.internal.expressions.DataExpression.getMapping(DataExpression.java:214)
Can someone tell me why this error shows up intermittently, and what I should do to fix this?
Short answer to the question: yes you can, but only sequentially.
In the method above, you start creating the first query, then start creating the second, then execute the second, and then execute the first.
I had the exact same problem. I don't know why it's intermittent, though.
In other words, you start creating your first query, and before having finished it, you start creating and executing another.
Hibernate doesn't complain, but EclipseLink doesn't like it.
If you start with the count query, execute it, and only then create and execute the other query (which is what you've done by splitting it into 2 methods), EclipseLink won't complain.
see https://issues.jboss.org/browse/SEAMSECURITY-91
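Roughly what I mean, as a sketch of a single method with the order flipped - buildFilter(...) here is a made-up helper that recreates the predicate against whichever root the query uses, it is not your actual code:
public List<Log> runQuery(TableQueryParameters tqp) {
    CriteriaBuilder builder = em.getCriteriaBuilder();
    // 1) build AND execute the count query first, with its own root and its own predicate
    if (tqp.isNeedResultSize()) {
        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        Root<Log> countRoot = countQuery.from(Log.class);
        countQuery.select(builder.count(countRoot));
        Predicate countFilter = buildFilter(builder, countRoot, tqp); // hypothetical helper
        if (countFilter != null) countQuery.where(countFilter);
        tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
    }
    // 2) only now build and execute the fetch query, again with its own root and predicate
    CriteriaQuery<Log> query = builder.createQuery(Log.class);
    Root<Log> root = query.from(Log.class);
    Predicate filter = buildFilter(builder, root, tqp); // hypothetical helper
    if (filter != null) query.where(filter);
    // ordering omitted for brevity; see the original method above
    TypedQuery<Log> typedQuery = em.createQuery(query);
    typedQuery.setFirstResult((int) tqp.getStartRecord());
    typedQuery.setMaxResults(tqp.getPageSize());
    return typedQuery.getResultList();
}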
It looks like this posting isn't going to draw much more response, so I will answer it with how I resolved it.
Ultimately I ended up breaking my runQuery() method into two methods: runQuery() that fetches the records and runQueryCount() that fetches the count of records without sort parameters. Each method has its own call to em.getCriteriaBuilder(). I have no idea what effect that has on the EntityManager, but the problem has not appeared since.
Also, the DAO object that has these methods used to be @ApplicationScoped. It now has no declared scope, so it is constructed on demand by the various @RequestScoped and @ConversationScoped beans that use it. I don't know whether this has any effect on the problem, but since the problem has not reappeared, I will use this as my code pattern from now on. Suggestions welcome.
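For anyone curious, the count method now has roughly this shape - this is a sketch, not my exact code, and buildFilter(...) stands in for the filter-building loop shown earlier:
public long runQueryCount(TableQueryParameters tqp) {
    // separate builder, query, and root; nothing is shared with runQuery()
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
    Root<Log> root = countQuery.from(Log.class);
    countQuery.select(builder.count(root));
    // rebuild the filter against this query's root instead of reusing the fetch query's Predicate
    Predicate filter = buildFilter(builder, root, tqp); // placeholder for the filter-building section
    if (filter != null) countQuery.where(filter);
    // sort terms are intentionally ignored here; they don't affect the count
    return em.createQuery(countQuery).getSingleResult();
}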
Each time the save() method is called on a Django object, Django executes two queries: one INSERT and one SELECT. In my case this is useful, except in some specific places where each query is expensive. Any ideas on how to state, in those cases, that no object needs to be returned - i.e. no SELECT needed?
Also, I'm connecting through django-mssql; this problem doesn't seem to exist on MySQL.
EDIT : A better explanation
h = Human()
h.name='John Foo'
print h.id # Returns None, No insert has been done therefore no id is available
h.save()
print h.id # Returns the ID, an insert has taken place and also a select statement to return the id
Sometimes I don't need the returned ID, just the insert.
40ins's answer is right, but it might come at a higher cost...
When Django executes a save(), it needs to know whether the object is a new one or an existing one, so it hits the database to check whether the related object exists. If it does, Django executes an UPDATE, otherwise it executes an INSERT.
Check the documentation here...
You can use force_insert or force_update, but that might cause serious data integrity problems, like creating a duplicate entry instead of updating the existing one...
So, if you wish to force the behaviour, you must be sure whether it will be an INSERT or an UPDATE...
Try using the save() method with the force_insert or force_update arguments, e.g. obj.save(force_insert=True). With these arguments Django already knows whether the record exists and doesn't make the additional query.
The additional select is the django-mssql backend getting the identity value from the table to determine the ID that was just inserted. If this select is slow, then something is wrong with your SQL Server setup/configuration, because it is only doing a SELECT CAST(IDENT_CURRENT(table_name) as bigint) call.