BigQuery insertAll method in Java is not reflecting the changes in the table

I am trying to insert data into a BigQuery table using the insertAll method.
This is what my insert method looks like:
public void insertIntoTable(String datasetName, String tableName) {
    TableId tableId = TableId.of(datasetName, tableName);
    String fieldName = "testField";
    Map<String, Object> rowContent = new HashMap<>();
    rowContent.put(fieldName, "testVal");
    InsertAllResponse response = bigquery.insertAll(
            InsertAllRequest.newBuilder(tableId).addRow("rowId", rowContent).build());
    if (response.hasErrors()) {
        for (Map.Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
            System.out.println(entry.getValue().toString());
        }
    }
}
Although it does not throw any errors, the data is still not getting inserted into my table.
As per their access-control document:
https://cloud.google.com/bigquery/docs/access-control
to use the insertAll method, the user requires the bigquery.tables.updateData permission, and that permission is granted through the bigquery.dataEditor role, which I already have.
There is no issue with permissions, because another of my methods creates a table, and that table is created successfully in BigQuery.
The tableName and datasetName are also correct; I verified them in debug mode.
Another issue I can think of is a type mismatch, but that is not the case either, because I checked and the type is String only (schema screenshot omitted).
Can there be any other issue that I am missing here?
=============================================
Edited Part -
It has been pointed out to me that streamed data does not show up in the BigQuery UI; to view the data, we have to query the table.
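For instance, something like the following verifies the streamed rows from Java (a sketch of my own; the dataset and table names are placeholders, and default credentials are assumed):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class VerifyStreamedRows {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // Streamed rows are visible to queries almost immediately,
        // even while the UI preview still shows the table as empty.
        QueryJobConfiguration query = QueryJobConfiguration.newBuilder(
                "SELECT testField FROM `myDataset.myTable`").build(); // placeholder names
        TableResult result = bigquery.query(query);
        for (FieldValueList row : result.iterateAll()) {
            System.out.println(row.get("testField").getStringValue());
        }
    }
}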
Now I am facing a new issue.
I executed the above function 6 times, each time with a different value. Basically, I was changing only this line:
rowContent.put(fieldName, "testVal");
The first time I executed the method, the inserted value was testVal.
For the other five executions, I modified that line of code as follows:
Execution 1: rowContent.put(fieldName, "testVal1");
Execution 2: rowContent.put(fieldName, "testVal2");
Execution 3: rowContent.put(fieldName, "testVal3");
Execution 4: rowContent.put(fieldName, "testVal4");
Execution 5: rowContent.put(fieldName, "testVal5");
So ideally, my table should contain 6 rows with the values:
testVal
testVal1
testVal2
testVal3
testVal4
testVal5
But when I query my table, I can see only two rows.
Why is it showing only 2 rows instead of 6?
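One possible explanation (a guess based on documented behaviour, not verified against this table): addRow("rowId", rowContent) passes the literal string "rowId" as the insert ID for every execution, and BigQuery uses insert IDs for best-effort de-duplication of streamed rows, so repeated inserts with the same ID can be silently dropped as duplicates. A minimal sketch that generates a fresh insert ID per row:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;

public class UniqueInsertId {
    public static void insertRow(BigQuery bigquery, String datasetName, String tableName) {
        TableId tableId = TableId.of(datasetName, tableName);
        Map<String, Object> rowContent = new HashMap<>();
        rowContent.put("testField", "testVal");
        InsertAllResponse response = bigquery.insertAll(
                InsertAllRequest.newBuilder(tableId)
                        // A fresh UUID per row keeps BigQuery's best-effort
                        // de-duplication from treating repeats as duplicates.
                        .addRow(UUID.randomUUID().toString(), rowContent)
                        .build());
        if (response.hasErrors()) {
            response.getInsertErrors().forEach(
                    (index, errors) -> System.out.println(index + ": " + errors));
        }
    }
}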

Related

Oracle APEX - how to read a cell from interactive grid

The same question once again but with (I hope) better explanation:
I created the most simple case:
An Interactive Grid IG with data source EMP (a table with 14 records containing Ename, Job, HireDate, Salary, etc.)
Text field P7_ENAME
After running, the page shows the grid and the text field (screenshot omitted).
What I would like to do is copy Ename from the selected record of the IG into the P7_ENAME field.
I found several tutorials (text and video) on how to do it. Most of them suggest creating a dynamic action on Selection Change on the IG and, when TRUE, adding JavaScript code something like this:
var model = this.data.model;
var v_ename = model.getValue(this.data.selectedRecords[0], "Ename");
apex.item("P7_ENAME").setValue(v_ename);
The second step is to create another action: Refresh.
So finally I have a dynamic action with two steps: the first one is the JavaScript code and the second is a Refresh on my P7_ENAME field.
Sounds simple, and it is simple to repeat/implement. Someone published a video on YouTube (https://www.youtube.com/watch?v=XuFz885Yndw) which I followed, and in his case it works well. In my case it simply does not work: the P7_ENAME field is always empty and no errors appear. Any idea why? Any hints or suggestions?
thanks for any help
K.
The best way to debug and achieve what you are trying to do is as follows:
Create the dynamic action with the following setup:
- When: Selection Change [Interactive Grid]
- Selection Type: Region; Region: your IG region
- Client-side Condition: JavaScript expression: ```this.data.selectedRecords[0] != undefined```
Then create the first TRUE action of the dynamic action with type Execute JavaScript Code, with Fire on Initialization turned on, and this code: console.log(this.data.selectedRecords);
Run your page and check the browser console. When you select a record from the IG, you should see an array of column values (console screenshot omitted).
Find which index of that array contains the data you want to use for the page item. Let's say I want the 3rd element, which is "2694"; then I should change the dynamic action's Execute JavaScript Code to:
var value = this.data.selectedRecords[0][2]; // index 2 = the 3rd element
apex.item("P7_ENAME").setValue(value);
The last thing to do is add another TRUE action (with the Refresh action at the end) to the same dynamic action, of type Set Value with 'PL/SQL Expression' as the Set Type: put :P7_ENAME in the expression, P7_ENAME in Items to Submit, and Item / P7_ENAME as the Affected Element (screenshot omitted).

How to create a StreamableTable which is also a ScannableTable (Apache Calcite)?

I am looking to implement a org.apache.calcite.schema.Table which can be used as a stream as well as a table.
I was going through the Calcite documentation, and here it mentions an example of an Orders table which is both a stream and a table. It also mentions that both of the following queries are applicable to this Orders table/stream:
SELECT STREAM * FROM Orders;
and
SELECT * FROM Orders;
I am trying to implement a class whose instances are such tables. I implemented both the StreamableTable and ScannableTable interfaces, but I am still not able to get it to work both ways. When I try to execute a non-stream query (like SELECT * FROM TEST_TABLE), I get the following error:
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, column 15 to line 1, column 38: Cannot convert stream 'TEST_TABLE' to relation
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
at org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5043)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateModality(SqlValidatorImpl.java:3739)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateModality(SqlValidatorImpl.java:3664)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1048)
at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:1016)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:724)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:567)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:242)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:208)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:642)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:508)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:478)
at org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:231)
at org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:556)
at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
... 3 more
Queries like SELECT STREAM * FROM TEST_TABLE work as expected.
Can someone help me create such a table?
The modality of SELECT * FROM Orders is RELATION, but the table is an instance of StreamableTable, and that is why the above exception is thrown. I changed RelOptTableImpl#supportsModality a bit, as follows:
@Override public boolean supportsModality(SqlModality modality) {
    switch (modality) {
    case STREAM:
        return table instanceof StreamableTable;
    default:
        // A table that is also scannable supports the RELATION modality,
        // even if it is streamable as well.
        if (table instanceof ScannableTable) {
            return true;
        }
        return !(table instanceof StreamableTable);
    }
}
With this change, the plans generated for the above SQL were as usual:
Logical:
LogicalProject(ROWTIME=[$0], ID=[$1], PRODUCT=[$2], UNITS=[$3])
  LogicalTableScan(table=[[STREAMS, ORDERS]])
Physical:
EnumerableTableScan(table=[[STREAMS, ORDERS]])
The result set starts with:
ROWTIME=2015-02-15 10:15:00; ID=1; PRODUCT=paint; UNITS=10
ROWTIME=2015-02-15 10:24:15; ID=2; PRODUCT=paper; UNITS=5
You can create a test case for this in StreamTest with the above changes.
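For reference, a minimal sketch of such a table (my own illustration, not code from the Calcite sources): a fixed row set stands in for real data, and combined with the supportsModality change above, both SELECT and SELECT STREAM should validate against it.

import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.StreamableTable;
import org.apache.calcite.schema.Table;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;

/** A table that can be queried both as a relation and as a stream. */
public class OrdersTable extends AbstractTable
        implements ScannableTable, StreamableTable {

    // Hard-coded rows standing in for real data; ROWTIME is millis since epoch,
    // which is how Calcite represents TIMESTAMP values internally.
    private final Object[][] rows = {
        {java.sql.Timestamp.valueOf("2015-02-15 10:15:00").getTime(), 1, "paint", 10},
        {java.sql.Timestamp.valueOf("2015-02-15 10:24:15").getTime(), 2, "paper", 5},
    };

    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
        return typeFactory.builder()
                .add("ROWTIME", SqlTypeName.TIMESTAMP)
                .add("ID", SqlTypeName.INTEGER)
                .add("PRODUCT", SqlTypeName.VARCHAR)
                .add("UNITS", SqlTypeName.INTEGER)
                .build();
    }

    // Relational access: SELECT * FROM ORDERS
    @Override public Enumerable<Object[]> scan(DataContext root) {
        return Linq4j.asEnumerable(rows);
    }

    // Streaming access: SELECT STREAM * FROM ORDERS. A real implementation
    // would return a table backed by a live feed rather than `this`.
    @Override public Table stream() {
        return this;
    }
}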

DynamoDB QuerySpec {MaxResultSize + filter expression}

From the DynamoDB documentation:
The Query operation allows you to limit the number of items that it
returns in the result. To do this, set the Limit parameter to the
maximum number of items that you want.
For example, suppose you Query a table, with a Limit value of 6, and
without a filter expression. The Query result will contain the first
six items from the table that match the key condition expression from
the request.
Now suppose you add a filter expression to the Query. In this case,
DynamoDB will apply the filter expression to the six items that were
returned, discarding those that do not match. The final Query result
will contain 6 items or fewer, depending on the number of items that
were filtered.
It looks like the following query should (at least sometimes) return 0 records.
In summary, I have a UserLogins table. A simplified version is:
1. UserId - HashKey
2. DeviceId - RangeKey
3. ActiveLogin - Boolean
4. TimeToLive - ...
Now, let's say UserId = X has 10,000 inactive logins in different DeviceIds and 1 active login.
However, when I run this query against my DynamoDB table:
QuerySpec {
  hashKey: null,
  rangeKeyCondition: null,
  queryFilters: null,
  nameMap: {"#0" -> "UserId"}, {"#1" -> "ActiveLogin"},
  valueMap: {":0" -> "X"}, {":1" -> "true"},
  exclusiveStartKey: null,
  maxPageSize: null,
  maxResultSize: 10,
  req: {TableName: UserLogins, ConsistentRead: true, ReturnConsumedCapacity: TOTAL,
        FilterExpression: #1 = :1, KeyConditionExpression: #0 = :0,
        ExpressionAttributeNames: {#0=UserId, #1=ActiveLogin},
        ExpressionAttributeValues: {:0={S: X,}, :1={BOOL: true}}}
}
I always get 1 row: the one active login for UserId=X. And it is not happening for just one user; it is happening for multiple users in a similar situation.
Are my results contradicting the DynamoDB documentation?
It looks like a contradiction, because maxResultSize=10 means that DynamoDB will only read the first 10 items (out of 10,001) and then apply the filter active=true (which might return 0 results). It seems very unlikely that the record with active=true happened to be among the first 10 records that DynamoDB read.
This is happening for hundreds of customers that are running similar queries. It works, even though according to the documentation it shouldn't.
I can't see any obvious problem with the Query. Are you sure about your premise that users have 10,000 items each?
Your keys are UserId and DeviceId. That seems to mean that if your user logs in with the same device, it would overwrite the existing item. Put another way, I think you are saying your users have 10,000 different devices each (unless the DeviceId rotates in some way).
In your shoes, I would just remove the filter expression and print the results to the log to see what you are getting in your 10 results. Then remove the limit too and see what results you get with that.
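A rough sketch of that debugging step, using the AWS SDK Document API (my own sketch; the client setup is assumed, and the table and attribute names are taken from the question):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;

public class QueryDebug {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        Table table = new DynamoDB(client).getTable("UserLogins");

        // Same key condition and limit as the original query, but no FilterExpression.
        QuerySpec spec = new QuerySpec()
                .withKeyConditionExpression("UserId = :uid")
                .withValueMap(new ValueMap().withString(":uid", "X"))
                .withConsistentRead(true)
                .withMaxResultSize(10);

        // Print what the first 10 unfiltered items actually are.
        for (Item item : table.query(spec)) {
            System.out.println(item.toJSONPretty());
        }
    }
}

Comparing these unfiltered items against the filtered result should show whether the user really has thousands of items under that UserId, or whether device IDs are being reused and overwritten.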

Siebel NextRecord method is not moving to the next record

I have found a very weird behaviour in our Siebel 7.8 application. This is part of a business service:
var bo : BusObject;
var bc : BusComp;
try {
    bo = TheApplication().GetBusObject("Service Request");
    bc = bo.GetBusComp("Action");
    bc.InvokeMethod("SetAdminMode", "TRUE");
    bc.SetViewMode(AllView);
    bc.ClearToQuery();
    bc.SetSearchSpec("Status", "='Unscheduled' OR ='Scheduled' OR ='02'");
    bc.ExecuteQuery(ForwardOnly);
    var isRecord = bc.FirstRecord();
    while (isRecord) {
        log("Processing activity '" + bc.GetFieldValue("Id") + "'");
        bc.SetFieldValue("Status", "03");
        bc.WriteRecord();
        isRecord = bc.NextRecord();
    }
} catch (e) {
    log("Exception: " + e.message);
} finally {
    bc = null;
    bo = null;
}
In the log file, we get something like this:
Processing activity '1-23456'
Processing activity '1-56789'
Processing activity '1-ABCDE'
Processing activity '1-ABCDE'
Exception: The selected record has been modified by another user since it was retrieved.
Please continue. (SBL-DAT-00523)
So, basically, it processes a few records from the BC and then, apparently at random, it "gets stuck". It is as if the NextRecord call is not executed, and instead it processes the same record again.
If I remove the SetFieldValue and WriteRecord calls to avoid the SBL-DAT-00523 error, it still shows some activities twice (only twice) in the log file.
What could be causing this behaviour?
It looks like business component "Action" has a join (or joins) that can return multiple records for one base record, and you are using ForwardOnly mode to query the BC.
Assume, for example, that table S_EVT_ACT has one record with a custom column X_PHONE_NUMBER = '12345678', and table S_CONTACT has two records with the column MAIN_PH_NUM equal to the same value '12345678'. When you join these two tables using SQL like this:
SELECT T1.* FROM SIEBEL.S_EVT_ACT T1, SIEBEL.S_CONTACT T2
WHERE T1.X_PHONE_NUMBER = T2.MAIN_PH_NUM
you will get two records with the same T1.ROW_ID.
Exactly the same thing happens when you use the ForwardOnly cursor mode in eScript: Siebel just fetches everything the database returns. That is why it is a big mistake to iterate over a business component queried in ForwardOnly mode. You should use ForwardBackward mode instead, because in that case Siebel will exclude duplicate records (this also holds for normal UI queries, which are executed in ForwardBackward mode).
This is actually the most important and least-known difference between the ForwardOnly and ForwardBackward cursor modes.
Try changing the query mode
bc.ExecuteQuery(ForwardOnly);
to
bc.ExecuteQuery(ForwardBackward);

Siebel log clarification - queries running twice

I am analysing the Siebel log, and I see that every query runs twice in the log. Could anyone please tell me why this happens?
For example, the query below is one of the many queries that I found executed twice in the log:
SELECT /*+ ALL_ROWS */
T2.CONFLICT_ID,
T2.LAST_UPD,
T2.CREATED,
T2.LAST_UPD_BY,
T2.CREATED_BY,
T2.MODIFICATION_NUM,
T2.ROW_ID,
T1.BU_ID,
T2.MULTI_LINGUAL_FLG,
:1
FROM
SIEBEL.S_LST_OF_VAL_BU T1,
SIEBEL.S_LST_OF_VAL T2
WHERE
T2.ROW_ID = T1.LST_OF_VAL_ID (+) AND
(T2.TYPE = :2 AND T2.NAME = :3)
ORDER BY
T2.TYPE, T2.ORDER_BY, T2.VAL
A query should NOT run twice unless the logged-in user has repeated an operation and the business component is not cached. You will see the SQL for LOV values repeated in the log, but the value of bind variable ":2" will be different each time. You can see these values just under the SQL, e.g.:
Bind variable 2: TIME_ZONE_DST_ORDINAL
Bind variable 2: DAY_NAME
Is there any other SQL that is repeated, besides the ones for the S_LST_OF_VAL tables?