How do I query for USB devices not listed in USB Root Hub - wmi

This query works for some devices, namely those listed under Win32_USBHub:
SELECT * FROM Win32_USBHub WHERE DeviceID = '{0}'
Here's the code context:
// Check if the USB device is plugged in
string deviceQuery = String.Format("SELECT * FROM Win32_USBHub WHERE DeviceID = '{0}'", deviceID);
using (var searcher = new System.Management.ManagementObjectSearcher(deviceQuery))
{
    if (searcher.Get().Count == 0)
        MessageBox.Show("Device not detected");
}
However, when a device is not listed under 'Universal Serial Bus controllers', querying Win32_USBHub does not return the connected device I'm looking for.
Is there another 'table' to query, outside of Win32_USBHub, which would contain the device I'm looking for ('Cardio Perfect PRO-Link USB')? Or would this be a 'custom table'?

SELECT * FROM Win32_PnPEntity WHERE DeviceID = '{0}'
I guess I didn't search long enough; here's a link which contains a lot of really good examples:
http://msdn.microsoft.com/en-us/library/aa394587%28v=vs.85%29.aspx
The 'table' I was looking for is Win32_PnPEntity; it listed all 155 devices connected to my machine.
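For reference, here is a minimal sketch of the same check done against Win32_PnPEntity (assuming the same deviceID variable and code context as the snippet above):
// Check whether the device shows up anywhere in the PnP device tree
// Note: backslashes inside the DeviceID value may need to be escaped as "\\" in WQL
string pnpQuery = String.Format("SELECT * FROM Win32_PnPEntity WHERE DeviceID = '{0}'", deviceID);
using (var searcher = new System.Management.ManagementObjectSearcher(pnpQuery))
{
    if (searcher.Get().Count == 0)
        MessageBox.Show("Device not detected");
}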

Related

Retrieving last message related to a specific status in Power BI

I have a table called Sessions containing PCs downloading software.
I want to create a new column or a measure that shows which version of the software the PC is downloading or has downloaded most recently.
The software version can be found in the message at the start of the download.
My measure currently looks like this, but in visuals it filters out the rows where the status is not "Start":
Result = CALCULATE(MAX(Sessions[Message]),
ALLEXCEPT(Sessions, Sessions[PC]), Sessions[Status]="Start")
(There is also a DateTime column in Sessions that can be used)
I solved this with a measure.
By using TOPN and filtering by DateTime I could return a single row.
By using MAXX on this row I got the correct SW:
getLatestSW =
VAR SINGLE_ROW = TOPN(1, FILTER(Sessions, Sessions[Status]="Start"),
    Sessions[DateTime], DESC)
RETURN MAXX(SINGLE_ROW, Sessions[Message])
This was also possible with LOOKUPVALUE.
getLatestSWLookUp =
VAR LASTID = MAXX(FILTER(Sessions, Sessions[Status]="Start"),
    Sessions[DateTime])
RETURN LOOKUPVALUE(Sessions[Message], Sessions[DateTime], LASTID)

InfluxDB Grafana templates: Can't select all fields in "Add Query"

I believe I've done everything right when creating my graphite DB. Grafana can see the data but won't let me select all the fields when I try to "Add Query".
Output from my server shows that the DB is working:
show measurements
name: measurements
name
PORT
select * from "PORT"
name: PORT
time CardNo Counter Nodename PortNo value
---- ------ ------- -------- ------ -----
1511214407000000000 18 bcast_inpackets ALPRGAGQPN2 1 500
However, when I try to "Add Query" in Grafana, I can see PORT in "FROM" (which is what I want), but the "WHERE" section, where I try to narrow my selection using CardNo, Counter, etc., appears to behave randomly. If I select CardNo first, it will let me select 18, but then clicking "+" to add another criterion doesn't display the option for, say, "PortNo" (all I get is an empty dialog box). I can enter the field value manually (e.g. PortNo), but other users will be plotting graphs and won't necessarily know the underlying schema. Also, if I select Nodename first, then I can select CardNo (weird). I'd like the end user to be able to specify ALL the fields (in this case CardNo, Counter, Nodename and PortNo).
My graphite template is this:
"[[graphite]]
# Determines whether the graphite endpoint is enabled.
enabled = true
database = "graphite"
# retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
# consistency-level = "one"
templates = [ "ASR.PORT.* .measurement.Nodename.CardNo.PortNo.Counter"
]
and the data I feed to InfluxDB to test my setup is:
echo "ASR.PORT.ALPRGAGQPN2.18.1.bcast_inpackets 500 `date +%s`" | nc localhost 2003
Firstly, the template is better written as:
"ASR.PORT.* .measurement.Nodename.CardNo.PortNo.field"
which makes bcast_inpackets, and any other value after PortNo, a field containing the data. This reduces series cardinality, which improves performance and scalability, by combining all counters into multiple fields on the same series rather than separate series with unique tags, each with its own value field.
Grafana's InfluxDB query builder filters tag values by the tags that are already selected. In other words, if you select PortNo=1 and try to select another tag, only tag keys present where PortNo=1 will be shown.
If you look at the queries Grafana runs in the browser, you will see something like show tag keys from PORT where PortNo='1' if PortNo=1 is already selected, and different queries for other tags.
This is why you may not see other tags, and why the tags you see depend on the tags already selected. This is by design, so if you want different behaviour you will need to adjust the schema, for example by making PortNo and CardNo into fields instead of tags.
You might also be interested in InfluxGraph, which can query InfluxDB via the Graphite API and also supports the same template configuration as InfluxDB.

Siebel NextRecord method is not moving to the next record

I have found a very weird behaviour in our Siebel 7.8 application. This is part of a business service:
var bo:BusObject;
var bc:BusComp;
try {
    bo = TheApplication().GetBusObject("Service Request");
    bc = bo.GetBusComp("Action");
    bc.InvokeMethod("SetAdminMode", "TRUE");
    bc.SetViewMode(AllView);
    bc.ClearToQuery();
    bc.SetSearchSpec("Status", "='Unscheduled' OR ='Scheduled' OR ='02'");
    bc.ExecuteQuery(ForwardOnly);
    var isRecord = bc.FirstRecord();
    while (isRecord) {
        log("Processing activity '" + bc.GetFieldValue("Id") + "'");
        bc.SetFieldValue("Status", "03");
        bc.WriteRecord();
        isRecord = bc.NextRecord();
    }
} catch (e) {
    log("Exception: " + e.message);
} finally {
    bc = null;
    bo = null;
}
In the log file, we get something like this:
Processing activity '1-23456'
Processing activity '1-56789'
Processing activity '1-ABCDE'
Processing activity '1-ABCDE'
Exception: The selected record has been modified by another user since it was retrieved.
Please continue. (SBL-DAT-00523)
So, basically, it processes a few records from the BC and then, apparently at random, it "gets stuck". It's like the NextRecord call isn't executed, and instead it processes the same record again.
If I remove the SetFieldValue and WriteRecord to avoid the SBL-DAT-00523 error, it still shows some activities twice (only twice) in the log file.
What could be causing this behaviour?
It looks like the business component "Action" has join(s) that can return multiple records for one base record, and you are querying the BC in ForwardOnly mode.
Assume, for example, that in table S_EVT_ACT you have one record with a custom column X_PHONE_NUMBER = '12345678', and in table S_CONTACT you have two records with the column MAIN_PH_NUM equal to the same value '12345678'. So when you join these two tables using SQL like this:
SELECT T1.* FROM SIEBEL.S_EVT_ACT T1, SIEBEL.S_CONTACT T2
WHERE T1.X_PHONE_NUMBER = T2.MAIN_PH_NUM
as a result you will get two records, with the same T1.ROW_ID.
Exactly the same situation happens when you use the ForwardOnly cursor mode in eScript: in this case Siebel just fetches everything the database returns. That is why it is a big mistake to iterate over a business component while it is queried in ForwardOnly mode. You should use ForwardBackward mode instead, because in that case Siebel will exclude duplicate records (this is also true for normal UI queries, because they are also executed in ForwardBackward mode).
Actually, this is the most important and least-known difference between the ForwardOnly and ForwardBackward cursor modes.
Try changing the query mode from:
bc.ExecuteQuery(ForwardOnly);
to:
bc.ExecuteQuery(ForwardBackward);

Qt/SQL - Get column type and name from table without record

Using Qt, I have to connect to a database and list column's types and names from a table. I have two constraints:
1. The database type must not be a problem (this has to work on PostgreSQL, SQL Server, MySQL, ...).
2. When I looked on the internet, I found solutions that work, but only if there are one or more records in the table. I have to get the column types and names with or without records in the table.
I searched a lot on the internet but I didn't find any solutions.
I am looking for an answer in Qt/C++, or a query that can do that.
Thanks for your help!
QSqlDriver::record() takes a table name and returns a QSqlRecord, from which you can fetch the fields using QSqlRecord::field().
So, given a QSqlDatabase db,
fetch the driver with db.driver(),
fetch the list of tables with db.tables(),
fetch a QSqlRecord for each table from driver->record(tableName), and
fetch the number of fields with record.count() and the name and type with record.field(x), as in the sketch below.
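A minimal sketch of that approach (assuming an already-open QSqlDatabase named db; printSchema is just an illustrative name):
#include <QSqlDatabase>
#include <QSqlDriver>
#include <QSqlField>
#include <QSqlRecord>
#include <QStringList>
#include <QVariant>
#include <QDebug>

// Print every column's name and type for every table of an open connection.
void printSchema(const QSqlDatabase &db)
{
    QSqlDriver *driver = db.driver();
    foreach (const QString &tableName, db.tables()) {
        QSqlRecord record = driver->record(tableName); // works even if the table is empty
        for (int i = 0; i < record.count(); ++i) {
            QSqlField field = record.field(i);
            qDebug() << tableName << field.name() << QVariant::typeToName(field.type());
        }
    }
}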
Based on the previous answers, I implemented it as below. It works well; I hope it helps you.
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", "demo_conn"); // create a db connection
    QString strDBPath = "db_path";
    db.setDatabaseName(strDBPath); // set the db file
    db.open();                     // the connection must be open before reading the schema
    QSqlRecord record = db.record("table_name"); // get the record of the given table
    int n = record.count();
    for (int i = 0; i < n; i++)
    {
        QString strField = record.fieldName(i);       // column name
        QVariant::Type type = record.field(i).type(); // column type
    }
}
QSqlDatabase::removeDatabase("demo_conn"); // remove the db connection (after db has gone out of scope)
Getting column names and types is a database-specific operation. But you can have a single C++ function that will use the correct sql query according to the QSqlDriver you currently use:
QStringList getColumnNames()
{
    QString sql;
    if (db.driverName().contains("QOCI", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else if (db.driverName().contains("QPSQL", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else
    {
        qCritical() << "unsupported db";
        return QStringList();
    }
    QSqlQuery res = db.exec(sql);
    ...
    // getting names from db-specific sql query results
}
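For example, the QPSQL branch could use an information_schema query along these lines (illustrative only; 'my_table' is a placeholder):
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'my_table';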
I don't know of any existing mechanism in Qt which allows that (though it might exist, maybe by using QSqlTableModel). If no one else knows of such a thing, I would just do the following:
Create data classes to store the information you require, e.g. a class TableInfo which stores a list of ColumnInfo objects which have a name and a type.
Create an interface e.g. ITableInfoReader which has a pure virtual TableInfo* retrieveTableInfo( const QString& tableName ) method.
Create one subclass of ITableInfoReader for every database you want to support. This allows doing queries which are only supported on one or a subset of all databases.
Create a TableInfoReaderFactory class which allows creation of the appropriate ITableInfoReader subclass dependent on the used database
This allows you to have your main code independent from the database, by using only the ITableInfoReader interface.
Example:
Input:
database: The QSqlDatabase which is used for executing queries
tableName: The name of the table to retrieve information about
ITableInfoReader* tableInfoReader =
_tableInfoReaderFactory.createTableReader( database );
QList< ColumnInfo* > columnInfos = tableInfoReader->retrieveTableInfo( tableName );
foreach( ColumnInfo* columnInfo, columnInfos )
{
qDebug() << columnInfo->name() << columnInfo->type();
}
I found the solution. You just have to call the record function from QSqlDatabase. You get an empty record, but you can still read the column types and names.

WSO2 - Table created using Analytic Script Invisible in Gadget Generation Tool

My use case: push data from a stream configured in the ESB to BAM and create a report using the "Gadget Generation Tool".
Publishing the stream from ESB to BAM after adding an agent to the proxy service worked fine.
From the stream I created a table using the Analytics->Add screen and the table seems to persist as I am able to do a select and see results from the same screen.
Now I am trying to generate a dashboard using the Gadget Generation Tool, but the table is not available; the JDBC connection is working fine, yet the table is nowhere to be found:
Script for Analytic Table run from Analytics->Add screen
CREATE EXTERNAL TABLE IF NOT EXISTS CREDITTABLE(creditkey STRING, creditFlag STRING, version STRING)
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
WITH SERDEPROPERTIES ( "cassandra.host" = "127.0.0.1" ,
cassandra.port" = "9163" , "cassandra.ks.name" = "EVENT_KS" ,
"cassandra.ks.username" = "admin" ,
"cassandra.ks.password" = "admin" ,
"cassandra.cf.name" = "firstStream" ,
"cassandra.columns.mapping" = ":key,payload_k1-constant, Version" );
Tried looking for table in following databases:
jdbc:h2:repository/database/WSO2CARBON_DB;AUTO_SERVER=TRUE
jdbc:h2:repository/database/metastore_db;AUTO_SERVER=TRUE
jdbc:h2:repository/database/samples/BAM_STATS_DB;AUTO_SERVER=TRUE
Have not done any custom db configurations.
Did you try jdbc:h2:repository/database/samples/WSO2CARBON_DB;AUTO_SERVER=TRUE? Also, what you have pasted is the Cassandra storage definition, probably used for getting the input, not for persisting the output. If you post the full Hive query, that would help to figure out the problem.
Why did I not see the table in the Gadget Generation tool?
The table I created using the Hive script is a Cassandra distributed database table, whereas the references I gave in the Gadget Generation tool while looking up the table were to the H2 RDBMS database.
Below are the references to the H2 RDBMS database which comes out of the box with WSO2:
jdbc:h2:repository/database/WSO2CARBON_DB;AUTO_SERVER=TRUE
jdbc:h2:repository/database/metastore_db;AUTO_SERVER=TRUE
jdbc:h2:repository/database/samples/BAM_STATS_DB;AUTO_SERVER=TRUE
Resolution: how to get the tables listed in the Gadget Generation tool?
To get the tables listed in the Gadget Generation tool, you have to use the Hive script extensively to complete the following 3 steps:
Create a Hive table reference for the Cassandra data stream to which data is pushed (from the ESB, in my case):
CREATE EXTERNAL TABLE IF NOT EXISTS CREDITTABLE(
payload_creditkey STRING, payload_creditFlag STRING, payload_version STRING) STORED BY
'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' WITH SERDEPROPERTIES ( "cassandra.host" = "127.0.0.1" ,
"cassandra.port" = "9163" , "cassandra.ks.name" = "EVENT_KS" , "cassandra.ks.username" = "admin" , "cassandra.ks.password" = "admin" ,
"cassandra.cf.name" = "firstStream" , "cassandra.columns.mapping" = ":key,payload_k1-constant, Version" );
Using a Hive script, create an H2 RDBMS table reference to which the data from the Cassandra stream will be copied:
CREATE EXTERNAL TABLE IF NOT EXISTS CREDITTABLEh2summary(
creditFlg STRING,
verSion STRING
)
STORED BY
'org.wso2.carbon.hadoop.hive.jdbc.storage.JDBCStorageHandler'
TBLPROPERTIES (
'mapred.jdbc.driver.class' = 'org.h2.Driver' ,
'mapred.jdbc.url' = 'jdbc:h2:C:/wso2bam-2.2.0/repository/samples/database/BAM_STATS_DB' ,
'mapred.jdbc.username' = 'wso2carbon' ,
'mapred.jdbc.password' = 'wso2carbon' ,
'hive.jdbc.update.on.duplicate' = 'true' ,
'hive.jdbc.primary.key.fields' = 'creditFlg' ,
'hive.jdbc.table.create.query' = 'CREATE TABLE CREDITTABLE_newh2(creditFlg VARCHAR(100), version VARCHAR(100))' );
Write a Hive query that copies the data from Cassandra to H2 (RDBMS):
insert overwrite table CREDITTABLEh2summary select a.payload_creditFlag,a.payload_version from CREDITTABLE a;
On doing this I was able to see the table in the Gadget Generation tool; however, I also had to change the reference to the H2 database to an absolute path in the JDBC URL value that I passed.
Observation:
I was wondering whether the Gadget Generation tool can point directly to the Cassandra stream without having to copy the tables to an RDBMS database.