Usage of Id='' in EAI Siebel Adapter Query

Requirement: I am supposed to use an existing Integration Object (IO) for my requirement. As this IO contains Integration Components (ICs) that I do not need, I would like to keep them out of my IO query output.
I observe that passing Id = '' returns no results in Siebel 8.0. Can I use this as a feature and pass SearchSpec => [Integration Component.Id]='' to the EAI Siebel Adapter query to suppress the ICs that I don't want in the output?
How good is this Id='' query? Will Siebel ignore it completely, or will it attempt the query and return no output?
As per my understanding, Siebel ignores a query where ROW_ID is passed as '' (this is not true for Siebel 6.0). Please share your opinion.

I am not sure about using Id = ''; when you combine that condition with other conditions, Siebel might still try to find actual matching records. I am also not sure whether a future upgrade will preserve this behaviour.
If yours is the only code using the IO, you could straightaway inactivate the ICs you don't want.
If you are unsure about inactivating ICs, the best way is to use a data map. Set up an EAI data map with source and target IOs of the same name, and in this data map, map only the ICs you need. After querying with the EAI Siebel Adapter, send your output through this data map.
Siebel will keep only the mapped ICs and remove all the rest.
Since this is a non-repository change, you can also modify the data map in the future.
Hope this helps!

Answering this myself, with my own opinion.
As per my understanding, querying with Id='' still queries the database for ROW_ID = ''. Including this in the IO query reduces the query scope to the parent's context.
Though this won't improve performance, the IO query output looks cleaner.
Update: I'm now using the indexed-column-based field Id (ROW_ID) with the search spec "[Id] IS NULL". Having ROW_ID = NULL in the database is next to impossible unless it is intentionally and manually updated, and no one would do that unless they really wanted to mess up the data, because without a ROW_ID the record is effectively invalid.

Adding a null query to the IC will inherently result in an empty property set for the IC in question. But if you don't need the IC, and the ICs in the IO are not hierarchically connected (no hierarchy key, e.g. independent BCs on the same base table in the same BO), you just have to remove the IC mapping in the data map editor and the IC won't show up in the IO property set.


Drupal 8: Altering Search API queries

I'm working on a project which includes the following activated modules:
Drupal core 8.2.3
Database Search 8.x-1.0-beta4
Search API 8.x-1.0-beta4
Search API Term Handlers 8.x-1.0-beta4
Views 8.2.3
I have a list of nids which need to be excluded from the search results of the site-wide search. The search uses Search API and has been set up using Views.
The table in the database is: "search_api_db_default_index"
The field I wish to target is: "nid"
I wasn't able to get hook_search_api_query_alter or hook_search_api_results_alter to fire, so I am attempting to manipulate the query through hook_views_query_alter.
I have attempted to use both the "addWhere" and "addCondition" methods with the following syntax:
When using the addCondition method, I attempted
$query->addCondition('search_api_db_default_index.nid', $oneBadNid, '<>');
and
$query->addCondition('search_api_db_default_index.nid', $manyBadNids, 'NOT IN');
and when using the addWhere method, I attempted
$query->addWhere('AND', 'search_api_index_default_index.nid', $oneBadNid, '<>');
and
$query->addWhere('AND', 'search_api_index_default_index.nid', $manyBadNids, 'NOT IN');
Regardless of whether or not I prefix the field with the table name, searching always triggers the following notice:
Unknown field in filter clause: 'search_api_db_default_index.nid'.
It seems that the field name is always wrapped in an HTML-encoded single quotation mark, and this occurs whether I use double or single quotes around the supplied table.field parameter.
I am not even sure that this is what is keeping me from altering my query, but it is the only thing close to an error which I have discovered in this process. It's also possible that I'm simply not supposed to be targeting the table in the manner written, but I did not find any documentation directing me to the proper methodology.
I would appreciate any insight into this issue! Thanks!
Generally you can use
$fields = $query->getIndex()->getFields();
on the query to get an array of fields you can use within the search_api query.
Piggy-backing off of Nebel54's comment, and attempting this on my own, you don't need to include the 'table' name when setting the addCondition. However, I did need to use hook_search_api_query_alter over a views-specific one.
function mymodule_search_api_query_alter(\Drupal\search_api\Query\QueryInterface &$query) {
  // Ensure field_myfield is being indexed.
  $fields = $query->getIndex()->getFields();
  if (isset($fields['field_myfield'])) {
    $query->addCondition('field_myfield', 'myvalue', '<>');
  }
}

Correct way to query the server for all records, but filter the cached ones in Ember.js

I need to use the filter method in order to prevent unsaved new records (!isNew) from being listed/shown before being saved.
The official Ember.js guide says the filter method signature is filter(type, query, filter), and that query is an optional argument.
The thing is, when I don't specify the query, I don't get any results and nothing is shown.
After further digging, I found out (correct me if I am wrong) that the filter method filters the cached records. So this means I have to query the back-end the first time I visit the route?
My question is: is this the correct way to do it? I feel there is something wrong with just leaving the query argument blank, or having to pass a blank argument in the first place!
Here is my route (which works perfectly and as expected, by the way):
SalesRepsRoute = Ember.Route.extend
  model: ->
    # the query is left blank in order to get all salesReps from the server
    @store.filter('sales-rep', {}, (sr) ->
      !sr.get('isNew')
    )
Thanks in advance, and please let me know if I should post any more code/info.
filter is meant to be used for querying the store, but in the event that you'd like to trigger a call to the server you can specify a hash as the second parameter.
If it feels hacky to you (it shouldn't), you can just call find beforehand and drop the hash from the filter call. find will asynchronously call your backend, and the filter will stay up to date as records are added to the store.
@store.find('sales-rep')
@store.filter('sales-rep', (sr) ->
  !sr.get('isNew')
)

Changing Length of Siebel Column

Suppose we have an existing Siebel column, and this column also has a corresponding mapped EIM column. If I change the length of this Siebel base table's column from VARCHAR(100) to VARCHAR(200) by running an ALTER statement directly against the database, how will it impact the EIM process? Will the import process be successful?
Regards,
Robin
If you are interested in knowing conceptually, here are the implications that I can foresee.
a) A table column added using ALTER TABLE is virtually useless, as the application won't be able to use it: its definition is missing from the Siebel Repository.
b) If you change the length of an existing column, the application will still use the length defined in the Siebel Repository.
c) The EIM process will ignore your new column length, as it loads the data dictionary before running the job.
d) And finally, during code migration you will have to run the ALTER TABLE every time, since the DDL Sync process cannot take care of your scenario.
I would advise you not to alter the length of an existing vanilla table column; instead, extend the database table by adding a new column. As the other poster mentioned, you should do this using Siebel Tools. You will then also need to add a reference to this new column in the EIM components (again using Siebel Tools).
This is a best practice. If your client ever had a Siebel code review done by Oracle, you would be told to do what I described above (not what you were considering doing).
Changing the column length using the ALTER TABLE command will only change it in the database layer, which has no effect from a Siebel standpoint. The EIM tables will still be valid, as they will use the column length defined in the repository and applied via Tools. If you don't change it in Tools and apply the table, I don't think the change will take effect.
I would not recommend that you do this. In this case, probably nothing will go wrong: EIM will load data up to 100 characters long, while from the GUI you could insert up to 200 characters. But something unexpected can still go wrong; we would need to know your application better to answer this definitively.

Borland Builder C++ Oracle question

I have a Borland C++ Builder 6 application calling an Oracle 10g database, operating over a LAN. When the application in question makes a simple DB select, e.g.
select table_name from element_tablenames where element_id = 10023842
the following is recorded as happening in Oracle (from the performance logs)
select table_name
from element_tablenames
where element_id = 10023842
then immediately (and not from C++ source code but perhaps deeper)
select table_name, element_tablenames.ROWID
from element_tablenames
where element_id = 10023842
The SELECT statement is only called once on the TADOQuery object, yet two queries are being performed: one to parse, and the other adding the ROWID for execution.
Over a WAN, with many, many queries, this is obviously a problem for the user.
Does anyone know why this might be happening, and can anyone suggest a solution?
Agree with Robert.
The ROWID uniquely identifies a row in a table so that the returned record can be applied back to the database with any changes (or as a DELETE).
Is there a way to identify a particular column (or set of columns) as a primary key so that it can be used to identify a row without using a ROWID?
I don't know exactly where the ROWID is coming from; it could be either the TADOQuery implementation or the Oracle driver. But I am sure I found the reason.
From the Oracle docs:
If the database table does not contain a primary key, the ROWID must be selected explicitly when populating DataTable.
So I suspect your table does not have a primary key; either add one, or select the ROWID explicitly yourself.
Either way this will solve the duplicate query problem.
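As a rough sketch of the "select the ROWID yourself" option (the component and parameter names are illustrative, and this assumes an existing TADOQuery, as suggested above):

// Rough sketch: select ROWID explicitly so the provider has no reason to issue a
// second statement just to fetch it. ADOQuery1 and the parameter name are illustrative.
ADOQuery1->SQL->Text =
    "select table_name, element_tablenames.ROWID as row_id "
    "from element_tablenames where element_id = :element_id";
ADOQuery1->Parameters->ParamByName("element_id")->Value = 10023842;
ADOQuery1->Open();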
Since you are concerned about performance, in general:
Using TADOQuery you can set the CursorType to optimize different behaviours for performance. This article covers it from a TADOQuery perspective. MSDN also has an article that covers it from a general ADO perspective. Finally, the specifications from the Oracle driver can be useful.
I would recommend setting the cursor to one of the following, as they are the only ones supported by Oracle:
ctStatic - bi-directional query produced.
ctOpenForwardOnly - unidirectional query produced; fastest, but you can't call Prior.
You can also play with CursorLocation to see how it affects your speed.
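For reference, a minimal C++ Builder sketch of setting those properties (the component name is illustrative; CursorType and CursorLocation are standard TADOQuery/TCustomADODataSet properties):

// Minimal sketch (C++ Builder / VCL ADO). ADOQuery1 names an existing TADOQuery.
#include <ADODB.hpp>

void ConfigureForReadSpeed(TADOQuery *ADOQuery1)
{
    ADOQuery1->CursorLocation = clUseClient;  // build the result set client-side
    ADOQuery1->CursorType     = ctStatic;     // bi-directional snapshot
    // or, for the fastest read-only, forward-only pass (no Prior):
    // ADOQuery1->CursorType = ctOpenForwardOnly;
}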

WQL SELECT with optional column

I need to make a query like this:
SELECT PNPDeviceID FROM Win32_NetworkAdapter WHERE AdapterTypeId = 0
Trouble is, the AdapterTypeId column isn't always present. In this case, I just want everything, like so:
SELECT PNPDeviceID FROM Win32_NetworkAdapter
My WQL/SQL knowledge is extremely limited. Can anybody tell me how to do this in a single query?
EDIT:
A bit more background seems to be required: I am querying Windows for device information using WMI, which uses an SQL-like syntax. So, in my example, I am querying for network adapters that have an AdapterTypeId of 0.
That column is not always present, however, meaning that if I enumerate through the returned values, "AdapterTypeId" is not listed.
EDIT 2:
Changed SQL to WQL; apparently this is more correct.
I am assuming you mean the underlying schema is unreliable.
This is a highly unconventional situation. I suggest that you resolve the issue that is causing the column to not always be present, because to have the schema changing dynamically underneath your application is potentially (almost certainly) disastrous.
Update:
OK, so WQL lets you query objects with a SQL-like syntax but, unlike SQL, the schema can change underneath your feet. This is a classic example of a leaky abstraction, and I now hate WQL without ever having used it :).
Since the available properties are in flux, I am guessing that WQL provides a way to enumerate the properties for a given adapter. Do this, and choose which query to run depending upon the results.
After some Googling, there is an example here which shows how to enumerate through the available properties. You can use this to determine whether AdapterTypeId exists or not.
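For what it's worth, here is a minimal C++/WMI sketch of that idea. It assumes an already-connected IWbemServices pointer for root\cimv2 (COM setup omitted), the function name is illustrative, and it treats a NULL AdapterTypeId the same as "property not present":

// Minimal sketch: query broadly, then inspect AdapterTypeId per returned object.
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <cstdio>

void ListAdapters(IWbemServices *services)  // "services" assumed valid and connected
{
    IEnumWbemClassObject *rows = NULL;
    HRESULT hr = services->ExecQuery(
        _bstr_t(L"WQL"),
        _bstr_t(L"SELECT PNPDeviceID, AdapterTypeId FROM Win32_NetworkAdapter"),
        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, NULL, &rows);
    if (FAILED(hr)) return;

    IWbemClassObject *row = NULL;
    ULONG fetched = 0;
    while (rows->Next(WBEM_INFINITE, 1, &row, &fetched) == S_OK && fetched)
    {
        VARIANT typeId, pnpId;
        VariantInit(&typeId);
        VariantInit(&pnpId);

        row->Get(L"AdapterTypeId", 0, &typeId, NULL, NULL);
        // Keep the row if the property is unset (VT_NULL) or equals the wanted value 0.
        if (typeId.vt == VT_NULL || (typeId.vt == VT_I4 && typeId.lVal == 0))
        {
            if (SUCCEEDED(row->Get(L"PNPDeviceID", 0, &pnpId, NULL, NULL))
                && pnpId.vt == VT_BSTR)
                wprintf(L"%s\n", pnpId.bstrVal);
        }
        VariantClear(&typeId);
        VariantClear(&pnpId);
        row->Release();
    }
    rows->Release();
}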
SELECT PNPDeviceID FROM Win32_NetworkAdapter WHERE AdapterTypeId = {yourDesire} OR AdapterTypeId IS NULL
I assume that you mean that this field is missing from the table.
Do you know, before submitting the query, whether this field exists?
If yes, then just build the SQL dynamically; otherwise I think you will get a syntax error when the field is missing.
This is not an SQL question. SQL does not contemplate records with varying schemas in a single table source. Instead (as you mention) this is a different system using an "SQL-like" syntax. You'll have better luck if you recast the question in terms of the actual product that you're trying to query; how that product deals with variable record structures is probably discussed in its documentation.