Does the database QuestDB support BitMath, i.e., the ability to manipulate individual bits? I tried using field & 1 == true, but I get the following error:
unexpected character.
QuestDB supports bitwise operators for exactly this kind of "bitmath". The "unexpected character" error most likely comes from the ==: SQL uses a single = for comparison, so write field & 1 = 1 instead of field & 1 == true. See the operators section of the QuestDB documentation: QuestDBDocs
I am trying to design the relational table structure and standard SQL queries for Apache IoTDB (a time series database) with Calcite. Now I want to know how I can easily convert Calcite's logical plan into IoTDB's own physical plan.
For example, I execute a simple query:
SELECT deviceid, temperature FROM root_ln WHERE deviceid = 'd1'
After parsing by Calcite, the logical plan, represented by RelNodes, looks like this:
LogicalProject(DEVICEID=[$1], TEMPERATURE=[$2])
  LogicalFilter(condition=[=($1, 'd1')])
    EnumerableTableScan(table=[[IoTDBSchema, ROOT_LN]])
And I want to convert it to IoTDB's own physical plan, for which I need to provide (in this simple example):
The projection's path, like root.ln.d1.temperature; we execute queries by these paths. I have to put the table name (root.ln), the device ID (d1), and the measurement (temperature) together to build the whole path, which requires scanning the whole logical plan.
The projection's data type, like float. I can get it from the path, so that part is simple.
The filter's expression. I have to convert the LogicalFilter's condition into IoTDB's own expression, including resolving what $1 refers to in this example.
I think this involves too much work, and when the query becomes more complex, the conversion becomes more difficult too. So I suspect there is a simpler way to do this.
The standard way of creating a physical plan for a particular data source is to create an adapter for that data source. This amounts to writing rules that convert logical operators into physical operators, which lets each data source specify which logical operators it can implement and how.
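For illustration, here is a minimal sketch of such a rule for the filter in this query, assuming hypothetical IoTDBRel and IoTDBFilter physical classes and an IOTDB calling convention (those names are illustrative, not part of IoTDB or Calcite):

import org.apache.calcite.plan.Convention;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.convert.ConverterRule;
import org.apache.calcite.rel.logical.LogicalFilter;

public class IoTDBFilterRule extends ConverterRule {
  // Hypothetical calling convention marking operators that IoTDB executes itself.
  public static final Convention IOTDB =
      new Convention.Impl("IOTDB", IoTDBRel.class);

  public IoTDBFilterRule() {
    super(LogicalFilter.class, Convention.NONE, IOTDB, "IoTDBFilterRule");
  }

  @Override
  public RelNode convert(RelNode rel) {
    LogicalFilter filter = (LogicalFilter) rel;
    // Build the physical filter, requesting its input in the IoTDB convention
    // so the planner also converts the scan underneath it.
    return new IoTDBFilter(
        filter.getCluster(),
        filter.getTraitSet().replace(IOTDB),
        convert(filter.getInput(), IOTDB),
        filter.getCondition());
  }
}

With this approach the planner fires the rules for you, and the path assembly (root.ln + d1 + temperature) and expression translation live inside the physical operators rather than in a hand-written walk over the finished logical plan. Calcite's built-in Cassandra and MongoDB adapters follow this pattern and are useful templates.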
I would like to use a UUID as a primary key in Cloud Spanner. What is the best way to read and write UUIDs? Is there a UUID type, or client library support?
The simplest solution is just to store it as a STRING in the standard RFC 4122 format. E.g.:
"d1a0ce61-b9dd-4169-96a8-d0d7789b61d9"
This will take 37 bytes to store (36 bytes plus a length byte). If you really want to save every possible byte, you could store your UUID as two INT64s. However, you would need to write your own library for serializing/deserializing the values, and they wouldn't look very pretty in your SQL queries. In most cases, the extra ~21 bytes of savings per row is probably not worth it.
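If you did go the two-INT64 route, the split itself is straightforward in Java, since java.util.UUID already exposes the two halves (a minimal sketch; the class name is illustrative):

import java.util.UUID;

public class UuidCodec {
  // Split a UUID into two signed 64-bit halves for a pair of INT64 columns.
  static long[] toLongs(UUID id) {
    return new long[] {id.getMostSignificantBits(), id.getLeastSignificantBits()};
  }

  // Reassemble the UUID from the two stored halves.
  static UUID fromLongs(long hi, long lo) {
    return new UUID(hi, lo);
  }
}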
Note that some UUID generation algorithms generate the UUID sequentially based on a timestamp. If the UUID values generated by a machine are monotonically increasing, then this can lead to hot-spotting in Cloud Spanner (this is analogous to the anti-pattern of using timestamps as the beginning of a primary key), so it is best to avoid these variants (e.g. UUID version 1 is not recommended).
This Stack Overflow answer provides more details about the various UUID versions. (TL;DR: use version 4 with Cloud Spanner, since a pseudo-random number is used in the generation.)
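Putting it together, a minimal sketch of writing a version 4 UUID as a STRING key with the Java client (the Users table and its columns are illustrative, not part of the Spanner API):

import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.Mutation;
import java.util.Collections;
import java.util.UUID;

public class UuidInsert {
  static void insertUser(DatabaseClient client, String name) {
    // randomUUID() generates a version 4 (pseudo-random) UUID,
    // which avoids the hot-spotting problem described above.
    String id = UUID.randomUUID().toString();
    client.write(Collections.singletonList(
        Mutation.newInsertBuilder("Users")
            .set("UserId").to(id)
            .set("Name").to(name)
            .build()));
  }
}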
As per Cloud Spanner documentation:
There are several ways to store the UUID as the primary key:
In a STRING(36) column.
In a pair of INT64 columns.
In a BYTES(16) column.
I am currently using the BETA 2 version of WSO2 ESB tooling. It is a great improvement over the previous version I used, which was the ALPHA. My question: is there a way to manipulate an attribute that was mapped by the Data Mapper? For example, if I have a response field minute with a data type of integer and a value of 150, and I want to concatenate a string onto that integer, the result should be 150 min.
For this, you can do a type conversion using the 'ToString' operation and then use the 'Concat' operation to concatenate the two strings. You can find more details on the latest improvements of the Data Mapper at: https://nuwanpallewela.wordpress.com/2016/07/16/understanding-wso2-data-mapper-5-0-0/
I am running a MapReduce job on Hadoop on Elastic MapReduce (on AWS), but it is sorting the keys as strings and I want integer sorting. How do I do it? I want to treat the key as an integer and sort numerically on it.
I recommend prepending (padding) the integers with leading zeros so that the lexicographic sorting Hadoop (or EMR) performs puts them in numeric order. With text keys, Hadoop doesn't do integer-based sorting - it is simply lexicographic.
For example, if these are your keys:
1
15
168
1900
You should output them like this in your mapper:
0001
0015
0168
1900
so that Hadoop can correctly sort them.
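If your mapper is written in Java, the padding is a one-liner with String.format (a minimal sketch, assuming one integer key per input line and a known maximum width of 4 digits):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class PaddedKeyMapper extends Mapper<LongWritable, Text, Text, Text> {
  private final Text outKey = new Text();

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    int key = Integer.parseInt(line.toString().trim());
    // Zero-pad so lexicographic order matches numeric order: 1 -> "0001".
    outKey.set(String.format("%04d", key));
    context.write(outKey, line);
  }
}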
The answers to this related question could also be used, without modifying the data: how to sort numerically in hadoop's shuffle/sort phase?
I have recently started looking into the Google Charts API for possible use within the product I'm working on. When constructing the URL for a given chart, the data points can be specified in three different formats: unencoded, simple encoding, and extended encoding (http://code.google.com/apis/chart/formats.html). However, there seems to be no way around the fact that the highest value that can be specified for a data point requires extended encoding and is in that case 4095 (encoded as "..").
Am I missing something here or is this limit for real?
When using the Google Chart API, you will usually need to scale your data yourself so that it fits within the 0-4095 range required by the API.
For example, if you have data values from 0 to 1,000,000, you could divide all your data by 245 so that it fits within the available range (1,000,000 / 245 ≈ 4081).
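A minimal sketch of that scaling step in Java (the 4095 ceiling comes from extended encoding; maxValue is whatever your data's known maximum is):

public class ChartScaling {
  // Map values in [0, maxValue] linearly onto the 0-4095 extended-encoding range.
  static int[] scaleToExtended(double[] values, double maxValue) {
    int[] scaled = new int[values.length];
    for (int i = 0; i < values.length; i++) {
      scaled[i] = (int) Math.round(values[i] / maxValue * 4095);
    }
    return scaled;
  }
}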
Regarding data scaling, this may also help you:
http://code.google.com/apis/chart/formats.html#data_scaling
Note the chds parameter option.
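With chds you pass unscaled values using text encoding and let the API scale them for you, for example (a hypothetical chart URL; the data values are illustrative):

http://chart.apis.google.com/chart?cht=lc&chs=250x100&chd=t:0,500000,1000000&chds=0,1000000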
You may also wish to consider leveraging a wrapper API that abstracts away some of these ugly details. They are listed here:
http://groups.google.com/group/google-chart-api/web/useful-links-to-api-libraries
I wrote charts4j, which has functionality to help you deal with data scaling.