Will Doctrine use indexes that are defined on the MySQL server but were never defined in the code?
If by "use indexes" you mean use them for optimal querying then answer is yes. From database's perspective Doctrine merely prepares query and receives data, it's up to MySQL to decide how the query will be performed. Downside of not having those indexes defined in Doctrine is that when using schema creation or migration tools Doctrine will try to remove them as according to Doctrine's knowledge, they shouldn't exist.
I am trying to use pre-aggregations over Cloud SQL on Google Cloud Platform, but the database is denying access and giving the error "Statement violates GTID consistency".
Any help is appreciated.
Cube.js builds pre-aggregations with CREATE TABLE ... SELECT, but you are running MySQL on top of Google Cloud SQL with --enforce-gtid-consistency, which has limitations.
Since only transactionally safe statements can be logged, CREATE TABLE ... SELECT (and some other SQL) cannot be used, because that statement is actually logged as two separate events.
There are two ways to solve this issue:
1. Use pre-aggregations in an external database (the recommended way):
https://cube.dev/docs/pre-aggregations/#read-only-data-source-pre-aggregations
2. Use the undocumented flag loadPreAggregationWithoutMetaLock.
Attention: this flag is experimental and can be removed or changed in the future.
Take a look at the source code
You can pass it directly in the driver constructor. This will produce two SQL statements to work around the limitation:
CREATE TABLE
INSERT INTO
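As a sketch of what that split looks like (the table and columns below are invented for illustration), a single CREATE TABLE ... SELECT becomes:
-- hypothetical pre-aggregation table; names are illustrative
CREATE TABLE pre_agg_orders (order_date DATE, total DECIMAL(10,2));
INSERT INTO pre_agg_orders (order_date, total)
SELECT order_date, SUM(amount) FROM orders GROUP BY order_date;
Each statement is logged as a single transactionally safe event, so the GTID consistency check no longer rejects it.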
I have an existing database made and currently used by a Drupal project. I need to write an app using Doctrine and this database.
I'd like to use the Doctrine ORM, but I cannot change the database schema, and it is kind of unintuitive (Drupal uses roughly one table per piece of data to store...).
Is there a way to tell Doctrine what SQL to use to store and read every attribute of my entities?
Otherwise I will use Doctrine DBAL, but the simplicity of entities interests me a lot.
I'm putting together a partitioned table in Postgres which will be used by an API written in Django. Postgres has a number of issues with this, most of them having to do with the RETURNING clause returning NULL or creating duplicate records (google "postgres partition returning" if you want to learn more).
I believe the solution is to override the save() method in the ORM to use a stored procedure or custom SQL, but how do I map the incoming arguments to a custom SQL statement?
Ideally it would look like the usual save() override, but instead of calling the super method it would map the args to a custom SQL statement.
The simplest way is to create a trigger BEFORE INSERT/UPDATE on the PostgreSQL side.
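A minimal sketch of that idea, assuming a parent table measurements with a monthly child partition (every name here is invented; a real trigger would branch on a NEW column to pick the right partition):
CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- route the row into a child table instead of the parent
    INSERT INTO measurements_2024_01 VALUES (NEW.*);
    RETURN NULL;  -- suppress the insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_measurements
BEFORE INSERT ON measurements
FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
Note that RETURN NULL is also what makes RETURNING come back empty on the parent, which is the exact symptom described in the question.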
I have a database made of sorted data from user activities. I want to keep a record, for each user, of which records belong to that user (something like a class holding a vector of numbers per user). What is the best database type to use here? Speed is important and the database is very large (9 GB, ~700 million records). The number of users is around 2 million, so I don't think a relational SQL database would be a good fit. (The code is in C++.)
I am going to provide an answer now based on our conversation in the comments as I have too much to write in a comment.
First of all, I would use a full RDBMS for this rather than SQLite. The "Lite" part of the name should serve as an indicator that it isn't trying to be a full-strength database. I am only saying this because if SQLite does not perform well enough on your large database, I don't want you to blame RDBMS technology when the problem is the lightweight database you chose. Choose PostgreSQL or MySQL, as they have better optimizers (you don't have to code the lookups yourself).
Second, your database should provide the features to join the tables together. It would look something like:
Select *
From users
Join activity on users.id = activity.user_id
Where users.id = ###
That combined with the appropriate indexes should give you what you need.
As far as indexes, your primary keys should produce the appropriate indexes for this join. You can also create a foreign key definition so that the database knows the relationship between the tables, and can enforce it. Some databases do not support foreign key constraints, but that is not critical.
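For example, with the users and activity tables from the query above (the constraint name is arbitrary):
ALTER TABLE activity
    ADD CONSTRAINT fk_activity_user
    FOREIGN KEY (user_id) REFERENCES users (id);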
A relational SQL database can handle this just fine.
Use PostgreSQL.
You can use ODBC from C; that way you can change the database should the need arise.
If your data is not really relational, you can also use Redis.
http://code.google.com/p/credis/
Since it's a sorted set of data, you can even go for a NoSQL or Bigtable-style database. HBase, Hadoop, etc. are available as open-source options.
I have C++ code, and from it I need to access the DB and run a query on a table (named NECE_TABLE, which has two columns: IntID and Status).
I need to get the Status column value from the DB table (NECE_TABLE) using the IntID from the C++ code.
Any help will be greatly appreciated. Thanks in advance.
Your question is very vague, but in summary you need to:
Use an appropriate client library supported by your database to connect to it, using user credentials with permission to SELECT from your table
Execute a SQL select to fetch the data you want
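Whichever client library you choose, the statement itself is simple; a parameterized sketch (placeholder syntax varies by library):
SELECT Status FROM NECE_TABLE WHERE IntID = ?;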
There's some confusion as to which database you're using.
If you're using Oracle, you can use the OCCI client library to connect to the database and execute SQL statements. See section 2 of the linked document, where it describes connecting to a database and executing SQL queries.
Take a look at this link - it's a simple tutorial on how to get started with MySQL and C++. You say you are using vanilla SQL in your tags, but the two should be compatible as long as you stick to the more basic queries.