Visual C++: how to find the name of a column in MySQL

I am currently using the following code to fill a combo box with column data from a MySQL database:
private: void Fillcombo1(void) {
    String^ constring = L"datasource=localhost;port=3307;username=root;password=root";
    MySqlConnection^ conDataBase = gcnew MySqlConnection(constring);
    MySqlCommand^ cmdDataBase = gcnew MySqlCommand("select * from database.combinations;", conDataBase);
    MySqlDataReader^ myReader;
    try {
        conDataBase->Open();
        myReader = cmdDataBase->ExecuteReader();
        while (myReader->Read()) {
            String^ vName = myReader->GetString("OD");
            comboBox1->Items->Add(vName);
        }
    }
    catch (Exception^ ex) {
        MessageBox::Show(ex->Message);
    }
}
Is there any simple method for finding the names of the columns and placing them in a combo box?
Also, I am adding small details to my app, such as a news feed that will need updating every so often. Will I have to dedicate a whole new database table to this single news-feed text so that I can update it, or is there a simpler alternative?
Thanks.

An alternative is to use the DESCRIBE statement:
mysql> describe rcp_categories;
+---------------+------------------+------+-----+---------+----------------+
| Field         | Type             | Null | Key | Default | Extra          |
+---------------+------------------+------+-----+---------+----------------+
| ID_Category   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| Category_Text | varchar(32)      | NO   | UNI | NULL    |                |
+---------------+------------------+------+-----+---------+----------------+
2 rows in set (0.20 sec)

There may be a way that avoids issuing another query, but you can also use MySQL's SHOW COLUMNS statement:
SHOW COLUMNS FROM combinations FROM database
or
SHOW COLUMNS FROM database.combinations
Both will work.
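The question is C++/CLI, but both approaches work the same from any connector: either read the column metadata that comes with an ordinary SELECT, or run SHOW COLUMNS and iterate over its rows. As an illustration only, here is a sketch using mysql-connector-python; the library choice is an assumption and the connection details simply mirror the question.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", port=3307, user="root", password="root", database="database"
)
cur = conn.cursor()

# Option 1: every result set carries the column names.
cur.execute("SELECT * FROM combinations LIMIT 1")
cur.fetchall()                                     # consume the rows
column_names = list(cur.column_names)

# Option 2: ask the server explicitly.
cur.execute("SHOW COLUMNS FROM combinations")
column_names = [row[0] for row in cur.fetchall()]  # first field is the column name

conn.close()
Each name in column_names can then be added to the combo box the same way the question adds row values.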

Related

django.db.utils.OperationalError 3780 - Referencing column and referenced column "id" in foreign key constraint are incompatible

After reading django.db.utils.OperationalError: 3780 Referencing column and referenced column are incompatible and SQLSTATE[HY000]: General error: 3780 Referencing column 'user_id' and referenced column 'id' in foreign key are incompatible, I still have this problem.
I have two models in Django 3.2.16 declared in different apps; it's a long-running project which started on Django 2.2 and has been upgraded over time.
Here are the two model classes:
From city_search.models:
class Città(models.Model):
    nome = models.CharField(max_length=50, db_index=True)

    def __str__(self):
        return "{} - {}".format(self.nome, self.regione)

    provincia = models.ForeignKey(Provincia, models.SET_NULL, blank=True, null=True)
    capoluogo = models.BooleanField(default=False)
    regione = models.ForeignKey(Regione, models.SET_NULL, blank=True, null=True, related_name='comuni')
    slug = models.SlugField(null=True)
    latlng = LocationField(null=True, map_attrs={"center": [2.149123103826298, 41.39496092463892], "zoom": 10})
And from eventos.models (the app I'm developing):
from schedule.models import events

class Manifestazione(events.Event):
    ciudad = models.ForeignKey('city_search.Città', on_delete=models.CASCADE, verbose_name='Ciudad', related_name='manifestaciones', blank=False, null=False)
The migration of the latter model fails with the following error:
django.db.utils.OperationalError: (3780, "Referencing column 'ciudad_id' and referenced column 'id' in foreign key constraint 'eventos_manifestazio_ciudad_id_74f49286_fk_city_sear' are incompatible.")
These two model declarations translate to the following MySQL tables (the second is only partially created by the failing migration):
mysql> describe city_search_città;
+--------------+-------------+------+-----+---------+----------------+
| Field        | Type        | Null | Key | Default | Extra          |
+--------------+-------------+------+-----+---------+----------------+
| id           | int         | NO   | PRI | NULL    | auto_increment |
| nome         | varchar(50) | NO   | MUL | NULL    |                |
| provincia_id | int         | YES  | MUL | NULL    |                |
| capoluogo    | tinyint(1)  | NO   |     | NULL    |                |
| regione_id   | int         | YES  | MUL | NULL    |                |
| slug         | varchar(50) | YES  | MUL | NULL    |                |
| latlng       | varchar(63) | YES  | MUL | NULL    |                |
+--------------+-------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)
and
mysql> describe eventos_manifestazione;
+--------------+--------+------+-----+---------+-------+
| Field        | Type   | Null | Key | Default | Extra |
+--------------+--------+------+-----+---------+-------+
| event_ptr_id | int    | NO   | PRI | NULL    |       |
| ciudad_id    | bigint | NO   |     | NULL    |       |
+--------------+--------+------+-----+---------+-------+
2 rows in set (0.00 sec)
Now, I understand perfectly well that int and bigint are very different. However, I already tried setting DEFAULT_AUTO_FIELD = 'django.db.models.AutoField' in settings.py before making the migrations and then migrating, to no avail.
According to the Django docs on automatic primary key fields, I could also try using the AppConfig, so I edited eventos/apps.py like this and migrated again:
class EventosConfig(AppConfig):
    default_auto_field = 'django.db.models.AutoField'
    name = 'eventos'
which didn't work either; I still get the same table schemas as above.
This is the migration that these settings generated:
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('city_search', '__first__'),
        ('schedule', '0014_use_autofields_for_pk'),
    ]

    operations = [
        migrations.CreateModel(
            name='Manifestazione',
            fields=[
                ('event_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='schedule.event')),
                ('ciudad', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='manifestaciones', to='city_search.città', verbose_name='Ciudad')),
            ],
            bases=('schedule.event',),
        ),
    ]
As I also read about a possible collation issue, I compared the column collations of the two tables below:
mysql> select column_name, COLLATION_NAME, CHARACTER_SET_NAME from information_schema.`COLUMNS` where table_name = "city_search_città";
+--------------+-------------------+--------------------+
| COLUMN_NAME  | COLLATION_NAME    | CHARACTER_SET_NAME |
+--------------+-------------------+--------------------+
| id           | NULL              | NULL               |
| nome         | latin1_swedish_ci | latin1             |
| provincia_id | NULL              | NULL               |
| capoluogo    | NULL              | NULL               |
| regione_id   | NULL              | NULL               |
| slug         | latin1_swedish_ci | latin1             |
| latlng       | latin1_swedish_ci | latin1             |
+--------------+-------------------+--------------------+
7 rows in set (0.00 sec)
and
mysql> select column_name, COLLATION_NAME, CHARACTER_SET_NAME from information_schema.`COLUMNS` where table_name = "eventos_manifestazione";
+--------------+----------------+--------------------+
| COLUMN_NAME  | COLLATION_NAME | CHARACTER_SET_NAME |
+--------------+----------------+--------------------+
| event_ptr_id | NULL           | NULL               |
| ciudad_id    | NULL           | NULL               |
+--------------+----------------+--------------------+
2 rows in set (0.01 sec)
My hypothesis: is it more likely that the int/bigint difference is to blame, and that for some bug or odd reason the migrations are ignoring my preference to switch back to int for the automatic primary key fields? Does it have to do with some difference in how these settings apply to foreign keys? Or do you believe it has more to do with the collation difference? Are there differences that I may not be able to see?
Either way, I'm unable to complete a proper migration.
It seems the error text was pointing at the wrong table. Everything worked after adding DEFAULT_AUTO_FIELD = 'django.db.models.AutoField' to settings.py and making new project-wide migrations, which included some tweaks to a third app's model fields where models.AutoField was indeed set on the id fields/columns.
This third app and the one I'm developing are not connected directly through a ForeignKey, so I don't see where the relation is, although the error only turned up after I created the models mentioned in this thread. Honestly, I'm not sure why this worked or what that third app's models have to do with the new app I'm adding to the project. I would really like to know if somebody understands it better.
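For reference, here is a minimal sketch of what the fix boils down to, assuming the root cause is the int (AutoField) vs bigint (BigAutoField) mismatch. The explicit id field on the referenced model is an extra, optional way to pin the primary-key type for illustration; it is not part of the original resolution.
# settings.py -- project-wide default for implicitly created primary keys
DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'

# city_search/models.py -- optionally pin the referenced pk type explicitly
from django.db import models

class Città(models.Model):
    # An AutoField pk maps to MySQL `int`, so the referencing ciudad_id
    # column in eventos_manifestazione is also created as `int`.
    id = models.AutoField(primary_key=True)
    nome = models.CharField(max_length=50, db_index=True)
    # ... remaining fields as in the question ...

# Then regenerate and apply migrations for every app:
#   python manage.py makemigrations
#   python manage.py migrate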

Filtered INDEX on SQL Server table causes errors during INSERT

I have a table in SQL Server 2019 which is defined like this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE [dbo].[productionLog2](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [itemID] [binary](10) NOT NULL,
    [version] [int] NOT NULL,
    CONSTRAINT [PK_productionLog2] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
This table logs produced items and acts as a checkpoint to prevent generating items with a duplicate (itemID, version) pair when the version is greater than 0. In other words, we should have no two rows with the same itemID and version (this rule should only apply to rows with a version greater than 0).
So I've added the constraint below as a filtered INDEX:
SET ANSI_PADDING OFF
GO
CREATE UNIQUE NONCLUSTERED INDEX [UQ_itemID_ver] ON [dbo].[productionLog2]
(
    [itemID] ASC,
    [version] ASC
)
WHERE ([version]>=(0))
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
The problem is that when I execute a transaction containing several commands, such as the one below, using C++ OLE APIs (for VC V7 / Visual Studio 2000), the insertion fails after adding the above index to the table, although the INSERT command itself runs individually inside SQL Server Management Studio with no errors.
The C++ code follows this sequence:
-- begin C++ transaction
-- execute sub-command 1 in C++
SELECT ISNULL(MAX(version),-1)
FROM [dbo].[productionLog2]
WHERE [itemID]=0x01234567890123456789
-- increase version by one inside the C++ code:
-- for example, if the max version is 9,
-- use 10 for the next insertion
-- execute sub-command 2 in C++
INSERT INTO [dbo].[productionLog2]([itemID],[version])
VALUES (0x01234567890123456789,10);
-- end C++ transaction
The above transaction fails when it reaches the INSERT command, but the script below runs without errors the first time (subsequent runs fail because of the unique constraint):
INSERT INTO [dbo].[productionLog2]([itemID],[version])
VALUES (0x01234567890123456789,10);
Can you see what is wrong with the defined constraint? Or what causes it to break the C++ commands while working fine inside SSMS?
P.S. Prior to this I had no requirement for the WHERE ([version]>=(0)) filter, so I was using a plain UNIQUE constraint; since I now want a filtered constraint, I changed it to a filtered unique INDEX. Nothing went wrong during my code's execution before this change.
The required session SET options for filtered indexes are listed in the CREATE INDEX documentation:
+-------------------------+----------------+----------------------+-------------------------------+--------------------------+
| SET options             | Required value | Default server value | Default OLE DB and ODBC value | Default DB-Library value |
+-------------------------+----------------+----------------------+-------------------------------+--------------------------+
| ANSI_NULLS              | ON             | ON                   | ON                            | OFF                      |
| ANSI_PADDING            | ON             | ON                   | ON                            | OFF                      |
| ANSI_WARNINGS*          | ON             | ON                   | ON                            | OFF                      |
| ARITHABORT              | ON             | ON                   | OFF                           | OFF                      |
| CONCAT_NULL_YIELDS_NULL | ON             | ON                   | ON                            | OFF                      |
| NUMERIC_ROUNDABORT      | OFF            | OFF                  | OFF                           | OFF                      |
| QUOTED_IDENTIFIER       | ON             | ON                   | ON                            | OFF                      |
+-------------------------+----------------+----------------------+-------------------------------+--------------------------+
These are set properly by modern SQL Server client APIs, but it seems you have old code and/or an old driver.
Add these SET statements to T-SQL batches that modify tables with filtered indexes:
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
In the case of an outdated driver that doesn't set session options at all, the default database SET options will be used. These are mostly set to OFF for backwards compatibility. The script below will set the database defaults needed for filtered indexes but, again, an explicit setting by the driver or session will override these.
ALTER DATABASE YourDatabase SET ANSI_NULLS ON;
ALTER DATABASE YourDatabase SET ANSI_PADDING ON;
ALTER DATABASE YourDatabase SET ANSI_WARNINGS ON;
ALTER DATABASE YourDatabase SET ARITHABORT ON;
ALTER DATABASE YourDatabase SET CONCAT_NULL_YIELDS_NULL ON;
ALTER DATABASE YourDatabase SET QUOTED_IDENTIFIER ON;
ALTER DATABASE YourDatabase SET NUMERIC_ROUNDABORT OFF;
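The question's client is C++ OLE DB, but the fix is the same from any client that opens its own session: set the required options before the batch that touches the table. Purely as an illustration, here is a sketch of the SELECT-then-INSERT transaction with the session options set first, written in Python with pyodbc; the driver string, database name, and the use of pyodbc are assumptions, not part of the original post.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=YourDatabase;Trusted_Connection=yes;",
    autocommit=False,
)
cur = conn.cursor()

# Session options required when modifying a table with a filtered index.
cur.execute(
    "SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, "
    "CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON; "
    "SET NUMERIC_ROUNDABORT OFF;"
)

item_id = bytes.fromhex("01234567890123456789")  # binary(10) key from the question

# Sub-command 1: read the current max version for this item.
cur.execute(
    "SELECT ISNULL(MAX(version), -1) FROM dbo.productionLog2 WHERE itemID = ?",
    item_id,
)
next_version = cur.fetchone()[0] + 1

# Sub-command 2: insert the next version inside the same transaction.
cur.execute(
    "INSERT INTO dbo.productionLog2 (itemID, version) VALUES (?, ?)",
    item_id, next_version,
)
conn.commit()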

DynamoDB one-to-many relation denormalization or adjacency?

I am designing a table for a data structure that represents a business operation that can be performed either ad hoc or as part of a batch. Operations performed together as a batch must be linked and queryable, and there is metadata on the batch that must be persisted.
The table must support two queries: retrieving the history of ad hoc instances and the history of batch instances.
Amazon suggests two approaches: adjacency lists and denormalization.
I am not sure which approach is best. Speed is the priority, cost secondary.
This will be a multi-tenant database with multiple organizations and a million-plus operations. (The org will be part of the partition key to segregate tenants across nodes.)
Here are the ideas I've come up with:
Denormalized, non-adjacency: a single root wrapper object with one (ad hoc) or more (batch) operation data items.
Denormalized, adjacency: top-level keys consist of operation instances (ad hoc) as well as parent objects containing a collection of operation instances (batch).
Normalized, non-adjacency, duplicated data: top level consists of operation instances, with or without a batch key, and the batch information is duplicated among all members of the batch.
Is there a standard best practice? Any advice on setting up/generating keys?
Honestly, these terms confuse me in NoSQL, and specifically in DynamoDB. For me it is hard to design a DynamoDB table piece by piece rather than from the whole business process. And frankly, I worry more about data size than request speed in DynamoDB, because of the 1 MB limit per request. In other words, when working with DynamoDB I forget everything about relational DB concepts and treat the data as JSON objects.
That said, for a very simple one-to-many design (e.g., a person loves some fruits), my preferred scheme has a String partition key. The table would look like this:
|---------------------|---------------------------------------|
| PartitionKey        | Infos                                 |
|---------------------|---------------------------------------|
| PersonID            | {name:String, age:Number, loveTo:map} |
|---------------------|---------------------------------------|
| FruitID             | {name:String, otherProps}             |
|---------------------|---------------------------------------|
The sample data:
|---------------------|---------------------------------------|
| PartitionKey        | Infos                                 |
|---------------------|---------------------------------------|
| person123           | {                                     |
|                     |   name:"Andy",                        |
|                     |   age:24,                             |
|                     |   loveTo:["fruit123","fruit432"]      |
|                     | }                                     |
|---------------------|---------------------------------------|
| personABC           | {                                     |
|                     |   name:"Liza",                        |
|                     |   age:20,                             |
|                     |   loveTo:["fruit432"]                 |
|                     | }                                     |
|---------------------|---------------------------------------|
| fruit123            | {                                     |
|                     |   name:"Apple",                       |
|                     |   ...                                 |
|                     | }                                     |
|---------------------|---------------------------------------|
| fruit432            | {                                     |
|                     |   name:"Manggo",                      |
|                     |   ...                                 |
|                     | }                                     |
|---------------------|---------------------------------------|
But let's look at a more complex case, say a chat app. Each channel allows many users, and each user can join any channel. Should it be one-to-many or many-to-many, and how do we model the relation? I would say I don't care. If we think about it like a relational DB, what a headache! In this case I would use a composite sort key, and possibly a secondary index, to speed up the specific queries (see the sketch below).
So, telling us the whole business process you are working on is what will help us design the table, not piece by piece.
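To make the composite-key idea concrete, here is a small sketch with boto3. The table name ChatApp, the PK/SK attribute names, and the InvertedIndex GSI are all assumptions for illustration, not an existing schema.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ChatApp")

# One item per channel membership: partition key = channel, sort key = user.
table.put_item(Item={"PK": "CHANNEL#general", "SK": "USER#andy", "joinedAt": "2020-01-01"})
table.put_item(Item={"PK": "CHANNEL#general", "SK": "USER#liza", "joinedAt": "2020-01-02"})

# "Who is in channel general?" -> query the partition key.
members = table.query(KeyConditionExpression=Key("PK").eq("CHANNEL#general"))

# "Which channels has Liza joined?" -> query a GSI that inverts the keys
# (assumed to exist with partition key SK and sort key PK).
channels = table.query(
    IndexName="InvertedIndex",
    KeyConditionExpression=Key("SK").eq("USER#liza"),
)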

Django with Postgres: How To Allow Null as Default in Django Model

I am using Django with Postgres and am trying to load data from a CSV file into a table. However, since one field is a geometry field, I have to leave it blank when I load the table (otherwise \copy from will fail).
Here's my model:
name = models.CharField(max_length=200)
lat = models.DecimalField(max_digits=6, decimal_places=4, default=Decimal('0.0000'))
lon = models.DecimalField(max_digits=7,decimal_places=4, default=Decimal('0.0000'))
geom = models.PointField(srid=4326, blank=True, null=True, default=None)
and after migration, I ran psql like this:
mydb=>\copy target(name,lat,lon) from 'file.csv' DELIMITER ',' CSV HEADER ;
and I get an error like this:
ERROR: null value in column "geom" violates not-null constraint
DETAIL: Failing row contains (name1, 30.4704, -97.8038, null).
CONTEXT: COPY target, line 2: "name1, 30.4704, -97.8038"
and here's the relevant portion of the CSV file:
name,lat,lon
name1,30.4704,-97.8038
name2,30.3883,-97.7386
and here's the output of \d+ target:
mydb=> \d+ target
Table "public.target"
Column | Type | Modifiers | Storage | Stats target | Description
------------+------------------------+-----------+----------+--------------+-------------
name | character varying(200) | not null | extended | |
lat | numeric(6,4) | not null | main | |
lon | numeric(7,4) | not null | main | |
geom | geometry(Point,4326) | | main | |
Indexes:
"target_pkey" PRIMARY KEY, btree (name)
"target_geom_id" gist (geom)
So I guess geom is set to null when loading the CSV into the table? How can I fix this? I want the default of the geom field to be null so that I can update it later with another query.
Thanks very much!!
There are a couple of different options, described here. You can either:
1) Use a placeholder point (0, 0), which you can then overwrite, or
2) Set spatial_index=False.
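For what it's worth, here is a minimal sketch of what the two options might look like on the model, assuming GeoDjango's PointField. The Target class name mirrors the question's table, and the exact defaults are illustrative rather than taken from the linked answer.
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point


def default_point():
    # Option 1: a placeholder geometry that later updates overwrite.
    return Point(0, 0, srid=4326)


class Target(models.Model):
    name = models.CharField(max_length=200, primary_key=True)

    # Option 1: placeholder default instead of NULL.
    geom = models.PointField(srid=4326, default=default_point)

    # Option 2: keep the column nullable and disable the spatial index.
    # geom = models.PointField(srid=4326, blank=True, null=True, spatial_index=False)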

Error: "Index '' does not exist on table" when trying to create entities in Doctrine 2.0 CLI

I have a MySQL database and am trying to get Doctrine 2 to create entities from the MySQL schema. I tried this with our production database and got the following error:
[Doctrine\DBAL\Schema\SchemaException] Index '' does not exist on table user
I then created a simple test database with only one table and four fields: an auto-increment primary key and three varchar fields. When attempting to have Doctrine create entities from this database, I got the same error.
Here is the table that I was trying to create an entity for (it should have been simple):
mysql> desc user;
+-----------+-------------+------+-----+---------+----------------+
| Field     | Type        | Null | Key | Default | Extra          |
+-----------+-------------+------+-----+---------+----------------+
| iduser    | int(11)     | NO   | PRI | NULL    | auto_increment |
| firstname | varchar(45) | YES  |     | NULL    |                |
| lastname  | varchar(45) | YES  |     | NULL    |                |
| username  | varchar(45) | YES  |     | NULL    |                |
+-----------+-------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)
Here is the command that I used in an attempt to get said entities created:
./doctrine orm:convert-mapping --from-database test ../models/test
I am running:
5.1.49-1ubuntu8.1 (Ubuntu)
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (i686) using readline 6.1
Doctrine 2.0.1
I am facing the same problem right now. I have traced it back to the primary key not being identified/set correctly. The default value is boolean(false), which gets cast to the string ''; Doctrine subsequently fails to locate an index with that name. ;-)
Solution: Define a PRIMARY KEY.