flatbuffers: Using add_myTable(table) to encode data - c++

I am trying to use the following method of building a table, as taken from the flatbuffers tutorial:
MonsterBuilder monster_builder(builder);
monster_builder.add_pos(&pos);
monster_builder.add_hp(hp);
But having done this for one table, I am unsure whether I need to call .Finish() on it before adding it to the table that contains it.
Can anyone provide an example of how the add_member calls are used with nested tables?

You call .Finish() on every table you build with a table builder; it returns the offset of that table, which you then pass to the containing table's add_ method. Table construction cannot be nested, so a child table must be finished before you start the builder for the table that contains it. You call .Finish(root) on your FlatBufferBuilder instance only once, at the very end, to finish construction of the buffer.
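For illustration, here is a minimal sketch. It assumes a schema roughly in the spirit of the tutorial, where Monster has a hypothetical table-typed field weapon:Weapon (the WeaponBuilder, add_weapon and Vec3 names are assumptions for the example, not taken from your code):
flatbuffers::FlatBufferBuilder builder(1024);

// Build the nested (child) table first. Strings and other offsets must be
// created before the table builder that references them is started.
auto weapon_name = builder.CreateString("Sword");
WeaponBuilder weapon_builder(builder);
weapon_builder.add_name(weapon_name);
weapon_builder.add_damage(3);
auto weapon = weapon_builder.Finish();   // offset of the finished child table

// Now build the containing table, passing the child's offset to add_weapon.
Vec3 pos(1.0f, 2.0f, 3.0f);              // structs are created inline...
MonsterBuilder monster_builder(builder);
monster_builder.add_pos(&pos);           // ...and passed by pointer
monster_builder.add_hp(300);
monster_builder.add_weapon(weapon);      // the child table's offset
auto orc = monster_builder.Finish();     // offset of the root table

// Finish the buffer exactly once, with the root table's offset.
builder.Finish(orc);
// builder.GetBufferPointer() and builder.GetSize() now give you the finished buffer.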

Tray.IO - Creating a NEW list

Working with Tray.IO for the first time, I have run into an issue: after using the Object Helper to format the data into the object model I need, I want to add it to a NEW list that contains only the new object's information. As you can see in the screenshot below, I have tried manipulating the object in several ways.
Any help would be very appreciated.
[Screenshot: Tray.IO items]
Based on the information you've provided, below is how I would approach this situation. I've also included a video link if you'd like to see it done step-by-step.
Using the Data Storage connector, you should create a new list using the ‘Append to List’ method, setting the Key to an appropriate name, with the Value coming from object-helpers-2. In addition, you’ll want to enable ‘Create if missing’.
Once the new list is created, you’ll need to retrieve the data using another Data Storage connector with the ‘Get Value’ method, setting the Key to the name used in your prior Data Storage step.
Finally, update the Key of your List Helpers step to the result of the Data Storage connector used to retrieve the data.
Link to Video
Happy Traygramming!
Grant

Select stmt in source qualifier along with procedure call in Informatica

We have a situation where we are dealing with a relational source (Oracle). The system is designed so that we must first execute a package that enables reading data from Oracle; only then can the user get results from a select statement. I am trying to find a way to implement this in an Informatica mapping.
What we tried
1. In Pre-SQL we executed the package, and in the SQL query we wrote the select statement - but the data does not get loaded into the target.
2. In Pre-SQL we wrote a block that executes the package and, immediately afterwards (within the same begin...end block), runs an insert statement on top of the select statement. This does insert the data, but I am not in favor of this solution because both the source and the target are dummies, which will confuse people in the future.
Is there any way to implement this using the first option?
Please help and suggest.
Thanks
The Stored Procedure transformation is there for this purpose; configure it to execute at Source Pre-load.
Pre-SQL and the data read are not part of the same session. From what I understand, both need to happen within one session, because otherwise the read access is granted only for the session that executed the package.
What you can do is create a stored procedure/package that grants the read access and then returns the data. Use it in the SQL override on your Source Qualifier (SQ); this way the SQ will read the data as usual. The concept:
CREATE PROCEDURE ReadMyData (result OUT SYS_REFCURSOR) AS
BEGIN
  -- first enable the read access (placeholder call)
  execute immediate 'BEGIN GiveMeTheReadAccess; END;';
  -- then return the data from the same session
  OPEN result FOR SELECT * FROM MyTable;
END;
Then use ReadMyData in the Source Qualifier's SQL override.

libpq: get data type

I am working on a C++ project that uses a PostgreSQL database.
I created a table in my database; one of its columns has the type character varying(40).
Now I need to SELECT this data FROM the table in my C++ project. I know that I should use the libpq library, which is the C/C++ interface for PostgreSQL.
I have succeeded in selecting data from the table. Now I am wondering whether it is possible to get the data type of a column of this table. For example, here I want to get character varying(40).
You need to use PQftype.
As described here: http://www.idiap.ch/~formaz/doc/postgreSQL/libpq-chapter17861.htm
And take a look here for more about decoding the return values: http://www.postgresql.org/message-id/da7021e0608040738l3b0880a1q5a76b838937f8c78#mail.gmail.com
You must also use PQfsize to get the field size.
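Here is a minimal sketch of how these calls fit together (the connection string, table and column names are placeholders, not from the question). PQftype returns the OID of the column's type, PQfmod returns the type modifier, which for character varying(40) encodes the declared length, and PQfsize returns the server-side storage size, which is -1 for variable-length types:
#include <libpq-fe.h>
#include <cstdio>

int main() {
    // Placeholder connection string; adjust for your database.
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    // "mycolumn" and "mytable" are placeholders.
    PGresult *res = PQexec(conn, "SELECT mycolumn FROM mytable LIMIT 1");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        Oid type_oid = PQftype(res, 0);  // OID of the column's data type
        int fsize    = PQfsize(res, 0);  // server storage size; -1 for variable-length types
        int fmod     = PQfmod(res, 0);   // type modifier, e.g. declared length for varchar
        printf("type oid=%u, size=%d, typmod=%d\n", type_oid, fsize, fmod);
        // To turn the OID and modifier into a name such as "character varying(40)",
        // you can run e.g. SELECT format_type(oid, typmod) on the server.
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}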

How can I get all data exported by communication node for post processing in MA Tool of CI Studio?

I am trying to do post-processing on the data exported by the communication node. One option I have is to export the data as a SAS dataset and import it in the process node. But if I could get it directly from a macro variable like &intable or anything similar, it would be easier for me. I have already tried &intable and &intable1; they only contain subject_id-level data and not all of the data being exported by the communication node. Is this possible? If so, how?
The communication node creates a SAS dataset with the same structure as the export definition. That SAS dataset can be used in the post-processing node.
Try an inner join between your exported dataset and &intable.
Something like:
update exporteddata
from tempdb.exporteddata
inner join &intable i
  on exporteddata.subject_id = i.subject_id
&intable is strictly subject IDs. Other attributes are not editable on that variable. Nor can you directly edit a field in the information map. But process nodes allow you to work with data that SAS has access to.
Worst case, you can try writing the data out to its own dataset and running a pass-through query against the database engine to perform the manipulation.

Write to multiple tables in HBASE

I have a situation where I need to write to two HBase tables, say table1 and table2. Whenever a write happens on table1, I need to do some operation on table2, say increment a counter in table2 (like a trigger). For this purpose I need to access (write to) both tables in the same task of a map-reduce program. I heard that this can be done using MultiTableOutputFormat, but I could not find any good example explaining it in detail. Could someone please answer whether it is possible to do this, and if so, how I can/should do it? Thanks in advance.
Please provide an answer that does not involve coprocessors.
To write to more than one table in a map-reduce job, you have to specify that in the job configuration. You are right that this can be done using MultiTableOutputFormat.
Normally, for a single table, you would use something like:
TableMapReduceUtil.initTableReducerJob("tableName", MyReducer.class, job);
Instead of this, write:
job.setOutputFormatClass(MultiTableOutputFormat.class);
job.setMapperClass(MyMapper.class);
job.setReducerClass(MyReducer.class);
job.setNumReduceTasks(2);
TableMapReduceUtil.addDependencyJars(job);
TableMapReduceUtil.addDependencyJars(job.getConfiguration());
Now, at the time of writing the data, the ImmutableBytesWritable key passed to context.write() selects the destination table for each Put:
context.write(new ImmutableBytesWritable(Bytes.toBytes("tableName1")),put1);
context.write(new ImmutableBytesWritable(Bytes.toBytes("tableName2")),put2);
Alternatively, you can use an HBase Observer: create an observer and deploy it on your server (applicable only for HBase versions > 0.92), and it will automatically trigger the write to the other table.
I think HBase Observers are conceptually similar to aspects (as in aspect-oriented programming).
For more details:
https://blogs.apache.org/hbase/entry/coprocessor_introduction