Does QuestDB support foreign keys, primary keys, references? - questdb

I wonder how the database schema design would look, because at first glance I need a bunch of SQL relational features that are not available out of the box in QuestDB.

No, QuestDB does not support primary keys, foreign keys, or any other constraints yet.
The schema can be the same as in other relational databases, except that primary key values have to be generated by the application. QuestDB will not enforce uniqueness of values in a table column; uniqueness has to come from the inserting application.
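A minimal sketch of what that looks like in practice, assuming QuestDB's default PostgreSQL wire endpoint (port 8812, user admin, password quest, database qdb); the users table, its columns, and the UUID-as-key convention are illustrative assumptions:

# The application, not the database, is responsible for unique identifiers.
import uuid
import psycopg2

conn = psycopg2.connect(host="localhost", port=8812,
                        user="admin", password="quest", dbname="qdb")
conn.autocommit = True

with conn.cursor() as cur:
    # Plain columns only -- no PRIMARY KEY, FOREIGN KEY or UNIQUE clauses.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id STRING,            -- uniqueness is the application's job
            name STRING,
            created_at TIMESTAMP
        ) TIMESTAMP(created_at)
    """)
    # The inserting application supplies the "primary key" value itself.
    cur.execute(
        "INSERT INTO users (id, name, created_at) VALUES (%s, %s, systimestamp())",
        (str(uuid.uuid4()), "alice"),
    )
conn.close()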

Related

How many foreign keys can a table have in PostgreSQL?

I am working on a Django project with a database in PostgreSQL. During my schema design, I have noticed that one table references many foreign keys from other tables. Just curious: how many foreign keys can be referenced from/to one table? I also searched and found that SQL Server 2014 can reference up to 253 foreign keys.
I don't think there is a hard limit. I just created a table with 10001 foreign key constraints.
Of course that doesn't mean that it is sane to define that many foreign keys, and performance will suffer considerably.
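To see for yourself, a rough sketch along these lines (the connection string and the constraint count are placeholders) keeps attaching foreign key constraints to a single column until you get bored:

import psycopg2

N_CONSTRAINTS = 1000  # raise this as far as your patience allows

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder connection string
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE TABLE parent (id integer PRIMARY KEY)")
    cur.execute("CREATE TABLE child (parent_id integer)")
    for i in range(N_CONSTRAINTS):
        # Nothing stops several foreign key constraints from targeting the same column.
        cur.execute(
            f"ALTER TABLE child ADD CONSTRAINT fk_{i} "
            "FOREIGN KEY (parent_id) REFERENCES parent (id)"
        )
conn.close()

Every one of those constraints has to be checked on writes to child and on deletes from parent, which is where the performance hit comes from.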

Should DynamoDB use a single-table design instead of multiple tables when the entities are not relational?

Let’s assume there are mainly 3 tables for the current database.
Pkey = partition key
Admin
- id (Pkey), username, email, createdAt, updatedAt
Banner
- id (Pkey), isActive, createdAt, caption
News
- id (Pkey), createdAt, isActive, title, message
None of the above tables has a relation to any other table, and more tables will be required in the future (I think most of them also won't have relations to other tables).
According to the AWS documentation:
You should maintain as few tables as possible in a DynamoDB application.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html
So I was considering the need to combine these 3 tables into a single table.
Should I start to use a single table from now on, or keep using multiple tables for the database?
If using a single table, how should I design the table schema?
DynamoDB is a NoSQL database, hence you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases.
When designing a data model for your DynamoDB Table, you should start from the access patterns of your data that would in turn inform the relation (or lack thereof) among them.
Two interesting resources that would help you get started are From SQL to NoSQL and NoSQL Design for DynamoDB, both part of the AWS Developer Documentation of DynamoDB.
In your specific example, based on the questions you're trying to answer (i.e. use case and access patterns), you could either work with only the partition key or, more likely, benefit from composite sort keys / sort key overloading, as described in Best Practices for Using Sort Keys to Organize Data.
Update: here is a rough example table design to get you started.
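A minimal sketch, assuming the three entities share one table keyed by a generic pk/sk pair; the ENTITY#id prefixes, the table name app_table, and the attribute values are illustrative, not prescriptive:

import boto3

# Each item carries its entity type in the key so different entities can coexist.
admin_item = {
    "pk": "ADMIN#123",   # partition key: entity type + id
    "sk": "PROFILE",     # sort key: item kind within the partition
    "username": "jane",
    "email": "jane@example.com",
    "createdAt": "2023-01-01T00:00:00Z",
    "updatedAt": "2023-01-02T00:00:00Z",
}
banner_item = {
    "pk": "BANNER#456",
    "sk": "BANNER",
    "isActive": True,
    "caption": "Spring sale",
    "createdAt": "2023-03-01T00:00:00Z",
}
news_item = {
    "pk": "NEWS#789",
    "sk": "NEWS",
    "isActive": True,
    "title": "Release notes",
    "message": "Version 2.0 is out",
    "createdAt": "2023-04-01T00:00:00Z",
}

table = boto3.resource("dynamodb").Table("app_table")  # table name is assumed
for item in (admin_item, banner_item, news_item):
    table.put_item(Item=item)

With this layout a single GetItem with the right pk/sk pair retrieves any of the three entity kinds, and listing all items of one kind is usually handled by adding a GSI keyed on the entity type.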

NoSQL encourages designing databases based on access patterns. What to do when the patterns change?

NoSQL encourages designing databases based on access patterns, and it can perform the queries it was designed for very fast. For other queries, the performance is not so good. But for software, change is the norm. So when new requirements come in and we have to add new features, how can NoSQL databases adapt? Or better yet, how can I design NoSQL databases (preferably DynamoDB) so that they allow me to adapt to new feature additions?
The first approach that comes to mind would be to design a new table and migrate all the previous data to the new table. But considering the table has millions of records, that's probably not very cost-effective.
References:
Rick Houlihan talking about designing DynamoDB tables based on access patterns
DynamoDB design best practices from the AWS documentation
DynamoDB is schema-less, so you can add a new attribute at any time without having to do any backfill or migration. Just make sure your application knows what to do if the attribute is not present.
If you need to query that attribute, you can add a new GSI on the attribute. DynamoDB has an initial quota of 20 GSIs per table, but you can request a quota increase if you need more.
If your new use case isn’t satisfied by a GSI, you can create a new table containing your new attribute(s) to use alongside the existing table. If you need a guarantee of consistency between those tables, you can use DynamoDB transactions to keep them in sync.
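For instance, a hedged sketch of adding a GSI on a hypothetical new string attribute called status (the table name and on-demand billing are assumptions; a provisioned-capacity table would also need a ProvisionedThroughput block inside Create):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="app_table",  # assumed table name
    AttributeDefinitions=[
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "status-index",
                "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
)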
One way to minimize full table migrations when adapting to new requirements is to use generic names for indexes. In the case of DynamoDB, we would have pk as the partition key and sk as the sort key, along with all the attributes of the item. The values of pk and sk are actually derived from other attributes. More importantly, we add 5 LSIs during table creation and use them when necessary. For example, to store data about a book, a row in the table would have the following fields:
pk, sk, ISBN, data_type, author, created_at, ...other data, lsi1, lsi2, lsi3, lsi4, lsi5
The values for the fields:
pk -> ISBN, sk -> data_type, ISBN -> ISBN, ..., lsi1 -> data_type#created_at, lsi2-lsi5 -> empty
This way, unless there are drastic changes in the requirements, the structure of our table is unlikely to change. One thing to note here is that unless an added, deleted or updated item contains an attribute that belongs to an index, no additional computational or storage cost is incurred in DynamoDB.
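Putting that together, a sketch of writing one book item with the derived key values (the table name library and the concrete values are made up):

import boto3

table = boto3.resource("dynamodb").Table("library")  # assumed table name

isbn = "978-0-00-000000-0"
data_type = "BOOK"
created_at = "2023-05-01T12:00:00Z"

table.put_item(Item={
    "pk": isbn,                           # derived: pk -> ISBN
    "sk": data_type,                      # derived: sk -> data_type
    "ISBN": isbn,
    "data_type": data_type,
    "author": "A. Author",
    "created_at": created_at,
    "lsi1": f"{data_type}#{created_at}",  # composite value for the first spare LSI
    # lsi2 .. lsi5 are simply left out until a new access pattern needs them
})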

Django OneToOneField on foreign table

I'm trying to set up a new server with foreign tables (using postgres_fdw) that weren't foreign tables previously, and I have some OneToOneFields pointing to these tables. This doesn't work out of the box - OneToOneFields use foreign keys, and postgres_fdw does not support foreign keys for foreign tables.
The foreign tables are in a read-only database on the same server.
Is there an easy way to get this working?
After a little more research, I think the answer is 'don't do that'. Handling foreign keys for foreign tables isn't implemented, and any solution that tried to bridge that gap would have serious performance and/or consistency issues.
pglogical looks like a better solution - instead of pulling the data in through postgres_fdw, replicate the tables into the new database so they're local. There are other projects for replicating just specific tables, but pglogical is included with PostgreSQL 10.
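For context, the kind of model pair the question describes might look like the following sketch (model, table, and field names are assumptions); the OneToOneField is what makes Django emit the REFERENCES clause that postgres_fdw cannot enforce:

from django.db import models

class RemoteUser(models.Model):
    # Backed by a postgres_fdw foreign table, so Django must not manage it.
    class Meta:
        managed = False
        db_table = "remote_user"  # name of the foreign table (assumed)

class Profile(models.Model):
    # A one-to-one relation normally creates a FOREIGN KEY constraint,
    # which a foreign table cannot participate in.
    user = models.OneToOneField(RemoteUser, on_delete=models.CASCADE)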

A simple file for saving a class of vectors or a SQL database

I have a database made of sorted data from user activities. If I wanted to keep a record, for each user, of which records belong to that user (like a class of vectors of numbers per user), what is the best type of database to use here? Speed is important and the database is very large (9 GB, ~700 million records). The number of users is around 2 million, so I don't think that a relational join in SQL would be a good suggestion. (The code is in C++.)
I am going to provide an answer now based on our conversation in the comments as I have too much to write in a comment.
First of all, I would use a full RDBMS for this rather than SQLite. The Lite part of the name should serve as an indicator that it isn't trying to be a full strength database. I am just saying this because if SQLite does not perform well enough on your large database, I don't want you to blame it on RDBMS technology, but on the weak database that you are using. Choose PostgreSQL or MySQL as they have better optimizers (you don't have to code it).
Second, your database should provide the features to join the tables together. It would look something like:
SELECT *
FROM users
JOIN activity ON users.id = activity.user_id
WHERE users.id = ###
That combined with the appropriate indexes should give you what you need.
As far as indexes go, your primary keys should produce the appropriate indexes for this join. You can also create a foreign key definition so that the database knows the relationship between the tables and can enforce it. Some databases do not support foreign key constraints, but that is not critical.
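A sketch of that schema with primary keys, the foreign key definition, and an index on the join column (PostgreSQL and psycopg2 are assumed here; names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=activity_db user=postgres")  # placeholder connection string
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE users (
            id   bigint PRIMARY KEY,   -- primary key gives the index the join needs
            name text NOT NULL
        )
    """)
    cur.execute("""
        CREATE TABLE activity (
            id      bigserial PRIMARY KEY,
            user_id bigint NOT NULL REFERENCES users (id),  -- the foreign key definition
            payload text
        )
    """)
    # PostgreSQL does not index FK columns automatically; this keeps the join fast.
    cur.execute("CREATE INDEX activity_user_id_idx ON activity (user_id)")
conn.close()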
A relational SQL database can handle this just fine.
Use PostgreSQL.
You can use ODBC from C; that way you can change the database should the need arise.
If your data is not really relational, you can also use redis.
http://code.google.com/p/credis/
Since it's a sorted set of data, you can even go for a NoSQL or Bigtable-style database. HBase, Hadoop, etc. are open-source options available to you.