I'm trying to create a table with two columns like below:
CREATE TABLE test1 (col1 INT, col2 Array<Decimal>) USING column options(BUCKETS '5');
The table is created successfully, but when I try to insert data into it, it does not accept any format of array. I've tried the following queries:
insert into test1 values(1,Array(Decimal("1"), Decimal("2")));
insert into test1 values(1,Array(1,2));
insert into test1 values(1,[1,2,1]);
insert into test1 values(1,"1,2,1");
insert into test1 values(1,<1,2,1>);
etc.
Please help!
There is an open ticket for this: https://jira.snappydata.io/browse/SNAP-1284. It will be addressed in the next release, which will accept VALUES strings (JSON strings and Spark-compatible strings).
Until then, the Spark Catalyst compatible format will work:
insert into test1 select 1, array(1, 2);
When selecting, the data is by default shipped in serialized form and displayed as binary. For now you have to use the "complexTypeAsJson" hint to display it as JSON:
select * from test1 --+complexTypeAsJson(true);
Support for displaying complex types in a simpler string format by default will be added in the next release.
One other thing to note in your example is the prime value for BUCKETS. Previous releases documented a prime as preferred, but as of the 1.0 release the recommendation is a power of two or some other even number (e.g. the total number of cores in your cluster can be a good choice); some examples may still show the older recommendation.
I have C/C++ code with embedded SQL for Oracle through Pro*C. Is there any mechanism to get the difference between array values and DB column values? For example, say I have an array like this:
int nums[] = {10, 20, 35, 45};
vector<int> vnums(nums, nums + sizeof(nums) / sizeof(int));
Now, I have a DB table tbl1 with col1 containing values:
20
40
60
I would like to get the unmatched array values that are not present in tbl1.
So, result should be:
10
35
45
I know one way: I can run the following SQL query:
select col1 from tbl1
and store the results in a vector, say vec2.
Then I compute the difference of the two vectors vnums and vec2.
Can you suggest a better way?
Unfortunately, there isn't a way to do that directly, and your approach is fine.
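For the client-side diff you describe, the C++ standard library already does the heavy lifting. A minimal sketch (assuming the column values have been fetched into a second vector; both ranges must be sorted before std::set_difference):

#include <algorithm>
#include <iterator>
#include <vector>

// Returns the values of vnums that are absent from dbvals (the fetched column values).
std::vector<int> unmatched(std::vector<int> vnums, std::vector<int> dbvals) {
    std::sort(vnums.begin(), vnums.end());
    std::sort(dbvals.begin(), dbvals.end());
    std::vector<int> diff;
    std::set_difference(vnums.begin(), vnums.end(),
                        dbvals.begin(), dbvals.end(),
                        std::back_inserter(diff));
    return diff;  // {10, 35, 45} for the example data
}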
Alternatively, you can push the work into the database: create a collection type and a function that takes your input values, using something like:
CREATE OR REPLACE TYPE numArray_t AS TABLE OF NUMBER;
CREATE OR REPLACE FUNCTION extractDif (p_data IN numArray_t) RETURN numArray_t;
In the function, you would perform a MINUS of the passed numArray_t against the values from your table and return the differences.
I am running a NetLogo model in BehaviorSpace, varying the number of runs each time. I have a turtle breed, pigs, and each pig accumulates a table with patch types as keys and the number of visits to each patch type as values.
At the end I calculate a list of the mean number of visits across all pigs. The list has the same length as long as the original table has the same number of keys (the number of patch types). I would like to export this mean number of visits per patch type with BehaviorSpace.
Perhaps I could write a separate CSV file (I tried; it creates many files, so there is a lot of work later putting them together), but I would rather have everything in the same output file after a run.
I could make a global variable for each patch type, but this seems crude and wrong, especially if I load a different patch configuration.
I tried just exporting the list, but then in Excel I see it with brackets, e.g. [49 0 31.5 76 7 0].
So my question Q1: is there a proper way to export a list of values so that the BehaviorSpace table-output CSV has a column for each value?
Q2: or is there an example of how to get a single CSV from BehaviorSpace that looks exactly the way I want?
PS: in my case the patch types are costs, and I might change those in the future and rerun everything. Ideally I would like the output to be a graph of cost vs. frequency of visits.
Thanks
If the lists are a fixed length that doesn't vary from run to run, you can get the items into separate columns by using one metric for each item. So in your BehaviorSpace experiment definition, instead of putting mylist, put item 0 mylist and item 1 mylist and so on.
If the lists aren't always the same length, you're out of luck. BehaviorSpace isn't flexible that way. You would have to write a separate program (in the programming language of your choice, perhaps NetLogo itself, perhaps an Excel macro, perhaps something else) to postprocess the BehaviorSpace output and make it look how you want.
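For what it's worth, the postprocessing itself is mechanical. Below is a minimal sketch (C++ purely for illustration; any language works), assuming each run's list appears in the table output as a bracketed, possibly quoted field such as "[49 0 31.5 76 7 0]". It reads the CSV on stdin and writes the list items as separate comma-separated columns on stdout:

#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::string::size_type open = line.find('[');
        std::string::size_type close = line.find(']');
        if (open != std::string::npos && close != std::string::npos && open < close) {
            std::string::size_type start = open, end = close + 1;
            // Drop the quotes BehaviorSpace puts around list values, if present.
            if (start > 0 && line[start - 1] == '"' && end < line.size() && line[end] == '"') {
                --start;
                ++end;
            }
            // Split the space-separated items into comma-separated fields.
            std::istringstream items(line.substr(open + 1, close - open - 1));
            std::string item, expanded;
            while (items >> item) {
                if (!expanded.empty())
                    expanded += ',';
                expanded += item;
            }
            line = line.substr(0, start) + expanded + line.substr(end);
        }
        std::cout << line << '\n';
    }
}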
I am trying to create a character sequence like AAA, AAB, AAC, AAD, ..., BAA, BAB, BAC, ... and so on in a flat file using Informatica. I have the formula to create the character sequence.
For this I need sequence numbers generated in Informatica, but I don't have any source file or database table to read from.
Is there any way in Informatica to create a sequence using the Sequence Generator when there are no source records to read?
This is a bit tricky, as Informatica does row-by-row processing and your mapping won't run until you feed it source rows (from a file or DB). So to generate a sequence of n values with Informatica transformations, you need n rows of input.
Another solution is to use a dummy source (i.e., a source with one row), pass the loop parameters from this source, and then use a Java transformation with Java code to generate the sequence; the core of that code is sketched below.
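For reference, the core of such a generator is just a base-26 mapping from a running sequence number to a three-letter code. A sketch of that logic (in C++ purely for illustration; it translates line for line into the Java transformation code):

#include <iostream>
#include <string>

// Maps a 0-based sequence number (0 .. 26*26*26 - 1) to "AAA" .. "ZZZ".
std::string toCode(int n) {
    std::string code(3, 'A');
    for (int i = 2; i >= 0; --i) {
        code[i] = static_cast<char>('A' + n % 26);
        n /= 26;
    }
    return code;
}

int main() {
    for (int n = 0; n < 26 * 26 * 26; ++n)
        std::cout << toCode(n) << '\n';  // AAA, AAB, AAC, ..., ZZZ
}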
There is no way to generate rows without a source in a mapping.
When I need to do that, I use one of these methods:
Generating a file with as many lines as I need, using the seq command under Unix (e.g. seq 1 17576 for 26*26*26 rows); it can also be used as a direct pipeline source without creating the file.
Getting lines from a database
For example, Oracle can generate as many lines as you want with a hierarchical query:
SELECT LEVEL just_a_column
FROM dual
CONNECT BY LEVEL <= 26*26*26
DB2 can do it with a recursive query:
WITH DUMMY(ID) AS (
SELECT 1 FROM SYSIBM.SYSDUMMY1
UNION ALL
SELECT ID + 1 FROM DUMMY WHERE ID < 26*26*26
)
SELECT ID FROM DUMMY
You can generate rows using a Java transformation, but even for that you need a source. I suggest putting the formula in the Java transformation and using a dummy source against a database with a SELECT GETDATE() statement, so that one record is returned to invoke the Java transformation. You can then generate the sequence in the Java transformation itself, or connect a Sequence Generator to the output of the Java transformation to number the rows.
There is an option to create sequence numbers even when they are not available in the source.
Create a Sequence Generator transformation; it gives you NEXTVAL and CURRVAL ports.
In the Properties tab you have the options to configure the sequence:
Start Value - the value from which it should start
Increment By - the increment value
End Value - the value at which it should end
Current Value - the current value of the sequence
Cycle - in case you need the sequence to repeat cyclically
Number of Cached Values
Reset
Tracing Level
Connect the NEXTVAL to your target column.
I am creating a pivot table in an Excel sheet with Aspose.Cells. I want the values to be formatted as Accounting, with a currency symbol, a thousands separator, and 2 decimal places. Is this possible with Aspose.Cells? Please suggest how to do this with Aspose.Cells and C#.
If you need Accounting number formatting for the PivotField, you may try the following built-in numeric format via the PivotField.Number attribute instead:
pivotTable.DataFields[0].Number = 43; // you may also try 44 if it suits your needs
Alternatively, you may try the following formatting string for the PivotField.NumberFormat custom attribute. You can also check in MS Excel to find your desired custom format strings to try with the NumberFormat property:
_($* #,##0.00_);_($* (#,##0.00);_($* "-"??_);_(@_)
If you still face any confusion or issues, please create a sample Excel file in MS Excel with the desired number formatting manually set on the pivot table fields, and share it with us so that we can test the scenario at our end.
Furthermore, please share the code or a sample application along with the template files (input, output, and expected output). The files can also be shared in the Aspose.Cells product support forum.
Please try using the PivotField.NumberFormat property to specify the desired formatting; see the code segment below for reference:
// Specify the number formatting for the first data field added.
pivotTable.DataFields[0].NumberFormat = "$#,##0.00";
Moreover, we recommend that you use our latest version, Aspose.Cells for .NET 7.4.0, which includes further enhancements for pivot tables.
PS: I work as a Support Developer / Technical Evangelist at Aspose.
I am using the libodbc++ ODBC wrapper for C++, which is designed similarly to JDBC. I have a prepared statement "INSERT INTO t1 (col1) VALUES (?)", where t1.col1 is defined as VARCHAR(500).
When I call statement->setString(1, s), the value of s is truncated to 255 characters. I suspect the libodbc++ library, but as I am not very familiar with ODBC I'd like to be sure the wrapper doesn't just expose a restriction of the underlying ODBC API. The ODBC API reference is too complicated to skim quickly, and frankly I really don't want to, so pardon me for asking a basic question.
NOTE: an unprepared, unparameterized insert statement via the same library inserts a long value fine, so it isn't a problem with the MySQL DB.
For long strings, use PreparedStatement::setAsciiStream() instead of PreparedStatement::setString(). For example:
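A minimal sketch of that (assuming libodbc++'s JDBC-style signature taking a std::istream* and a length; check odbc++/preparedstatement.h for the exact types in your version):

#include <sstream>
#include <string>

#include <odbc++/preparedstatement.h>

// stmt is assumed to already hold the prepared "INSERT INTO t1 (col1) VALUES (?)".
void insertLongString(odbc::PreparedStatement* stmt, const std::string& s) {
    std::istringstream in(s);
    // Parameter index, the stream, and the number of bytes to read from it.
    stmt->setAsciiStream(1, &in, static_cast<int>(s.size()));
    stmt->executeUpdate();
}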
But when using streams, I often encounter the error "HY104 Invalid precision value", which is annoying because I have no idea how to tackle it head-on. However, I work around it with the following steps:
1. Order the columns in the SQL statement so that the non-stream columns go first.
2. If that doesn't work, split the statement into multiple ones, updating or querying a single column per statement.
But (again), in order to insert a row first and then update some columns in streaming fashion, one may have to get the last insert ID, which turns out to be another challenge that I have so far failed to tackle head-on...
I don't know libodbc++, but prepared statements used via the ODBC API can store more than 255 characters; I use them with a Delphi/Kylix ODBC wrapper.
Maybe there is some configuration in libodbc++ that sets a value length limit? I have such a setting in my Delphi library. If you use a prepared statement, you can allocate a big chunk of memory, divide it into fields, and tell ODBC where the block for each column starts and how long it is via the SQLBindParameter() function.
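If you want to rule the wrapper out, you can bind the parameter through the raw ODBC API yourself; a minimal sketch (hstmt is assumed to already hold the prepared INSERT):

#include <sql.h>
#include <sqlext.h>

// Binds a long null-terminated string to parameter 1 of a prepared
// "INSERT INTO t1 (col1) VALUES (?)" and executes the statement.
void bindAndInsert(SQLHSTMT hstmt, const char* value) {
    SQLLEN len = SQL_NTS;  // value is null-terminated
    SQLBindParameter(hstmt,
                     1,                  // parameter number
                     SQL_PARAM_INPUT,
                     SQL_C_CHAR,         // C type of the buffer
                     SQL_VARCHAR,        // SQL type of the column
                     500,                // column size, matching VARCHAR(500)
                     0,                  // decimal digits
                     (SQLPOINTER)value,  // the data itself
                     0,                  // buffer length (unused for this input case)
                     &len);
    SQLExecute(hstmt);
}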