Doctrine 2: There is no column with name '$columnName' on table '$table' - doctrine-orm

When I do:
vendor/bin/doctrine-module orm:schema-tool:update
Doctrine 2.4 gives me this error:
[Doctrine\DBAL\Schema\SchemaException]
There is no column with name 'resource_id' on table 'role_resource'.
My actual MySQL database schema has the column and the table, as evident from running this command (no errors thrown):
mysql> select resource_id from role_resource;
Empty set (0.00 sec)
Thus, the error must be somewhere in Doctrine's representation of the schema. I did a var_dump() of the $this object, and here is what I get (partial):
object(Doctrine\DBAL\Schema\Table)#546 (10) {
["_name" :protected] => string(13) "role_resource"
["_columns":protected] => array(0) { }
Note that indeed, the _columns key does not contain any columns, which is how Doctrine checks for column names.
In my case, the partial trace dump is as follows:
SchemaException.php#L85
Table.php#L252
Table.php#L161
Reading other posts with a similar problem seems to suggest that I may have an error in the column case (upper vs. lower). While it is possible I have missed something, looking over my actual schema in the database and the annotations in my code suggests a match (all lowercase). Similarly, Doctrine 2's code does incorporate checks for such casing errors. So I am ruling out the casing possibility.
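For completeness, the exact case MySQL has stored can be double-checked straight from information_schema (this is just a sanity query against my 'loginauth' database, which appears again further down):
SELECT COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'loginauth'
  AND TABLE_NAME = 'role_resource';
In my case this lists role_id and resource_id, all lowercase, matching the annotations.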
Another post I've seen suggests that there may be an error in my annotations, i.e. wrong naming, syntax, or id placement. As far as I can tell, it is fine. Here is what I have:
class Role implements HierarchicalRoleInterface
{
/**
 * @var \Doctrine\Common\Collections\Collection
 * @ORM\ManyToMany(targetEntity="ModuleName\Entity\Resource")
 * @ORM\JoinTable(name="role_resource",
 *     joinColumns={@ORM\JoinColumn(name="role_id", referencedColumnName="id")},
 *     inverseJoinColumns={@ORM\JoinColumn(name="resource_id", referencedColumnName="id")}
 * )
 */
protected $resource;
So at the moment I am stuck and unable to use the ORM's schema-generation tools. This is a persistent error. I have scrapped my database and generated the schema anew using the ORM, but I still get stuck on this error whenever I try to do an update via the ORM, as I describe in this post. Where should I look next?
Update: traced it to this code:
$sql before this line ==
SELECT COLUMN_NAME AS Field,
COLUMN_TYPE AS Type,
IS_NULLABLE AS `Null`,
COLUMN_KEY AS `Key`,
COLUMN_DEFAULT AS `Default`,
EXTRA AS Extra,
COLUMN_COMMENT AS Comment,
CHARACTER_SET_NAME AS CharacterSet,
COLLATION_NAME AS CollactionName
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'loginauth' AND TABLE_NAME = 'role_resource'
which, when I run it from the MySQL prompt, returns (some columns were trimmed):
+-------------+---------+------+-----+--------------+----------------+
| Field       | Type    | Null | Key | CharacterSet | CollactionName |
+-------------+---------+------+-----+--------------+----------------+
| role_id     | int(11) | NO   | PRI | NULL         | NULL           |
| resource_id | int(11) | NO   | PRI | NULL         | NULL           |
+-------------+---------+------+-----+--------------+----------------+
and $this->executeQuery($sql, $params, $types) returns the proper(?) statement, which runs fine at my prompt, but when ->fetchAll() is called, specifically this fetchAll(), it breaks down and returns an empty array. Can someone make sense of this?
MORE:
Essentially, from the links above, $this->executeQuery($sql, $params, $types) returns:
object(Doctrine\DBAL\Driver\PDOStatement)#531 (1) {
["queryString"]=> string(332) "SELECT COLUMN_NAME AS Field, COLUMN_TYPE AS Type, IS_NULLABLE AS `Null`, COLUMN_KEY AS `Key`, COLUMN_DEFAULT AS `Default`, EXTRA AS Extra, COLUMN_COMMENT AS Comment, CHARACTER_SET_NAME AS CharacterSet, COLLATION_NAME AS CollactionName FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'loginauth' AND TABLE_NAME = 'role_resource'"
}
but then $this->executeQuery($sql, $params, $types)->fetchAll() (adding fetchAll()) returns this:
array(0) {
}
And that is so sad my friends :( because I don't know why it returns an empty array, when the statement in queryString above is so clearly valid and fruitful.

Check that the column names used in the 'indexes' and 'uniqueConstraints' schema definitions actually exist.
For example, using annotations:
@ORM\Table(name="user_password_reset_keys", indexes={@ORM\Index(name="key_idx", columns={"key"})})
I had renamed my column from 'key' to 'reset_key', and this column-name mismatch caused the error:
/**
 * @var string
 *
 * @ORM\Column(name="reset_key", type="string", length=255, nullable=false)
 */
private $resetKey;

It turns out that my DB permissions prevented my DB user from reading that particular table; using GRANT SELECT ... fixed the issue. The DBAL team also traced it to a DB-permissions peculiarity that returns NULL in MySQL and SQL Server.
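For illustration only, a grant along these lines usually makes the table visible to the schema tool again (the user and host names are placeholders; loginauth.role_resource is the database and table from the question):
-- MySQL only exposes information_schema.COLUMNS rows for tables the current
-- account holds some privilege on, so without SELECT the table looks
-- "columnless" to Doctrine's introspection query.
GRANT SELECT ON loginauth.role_resource TO 'app_user'@'localhost';
-- Or, for everything the schema tool needs to inspect in that database:
-- GRANT SELECT ON loginauth.* TO 'app_user'@'localhost';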

Late answer, but maybe it helps someone.
I had the same problem. It was because I used Symfony's command line to create my entity and used camelCase for some of its properties. Then, when Doctrine created the table, also via the command line, it changed the camelCase to the underscore ("_") convention.

I was using an ORM designer tool called Skipper that exports my entities for me. My problem, for just one table out of 30, was that I was missing the name attribute in my annotations. Example:
@ORM\Column(name="isActive", ....
I added that "name=" attribute and it worked again!

Related

Doctrine/MySQL mediumtext + unnecessary ALTER TABLEs

Running on mysql 5.7 + Doctrine 2.7/DBAL 2.9, with:
/**
 * @ORM\Entity()
 */
class Entity {
    /**
     * @ORM\Id()
     * @ORM\GeneratedValue()
     * @ORM\Column(type="integer")
     */
    private $id;

    /**
     * @ORM\Column(type="string", length=100000, nullable=true)
     */
    private $bigText;
}
orm:schema-tool:create creates the table Entity:
CREATE TABLE `Entity` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`bigText` mediumtext COLLATE utf8_unicode_ci,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
However, the schema-tool considers the table in need of an update; running orm:schema-tool:update --dump-sql gives:
ALTER TABLE Entity CHANGE bigText bigText MEDIUMTEXT DEFAULT NULL;
Executing the update works fine, but doesn't help: it will still think the table needs updating.
I've tracked the issue to Doctrine\DBAL\Schema\Comparator's diffColumn method where the PlatformOptions of the current and the to-be column definition are compared. The to-be column has no PlatformOptions (which makes sense, as it's not on the platform yet, I suppose), while the current column does have them, namely charset and collation, in my test case here the defaults: ['charset' => 'utf8', 'collation' => 'utf8_unicode_ci'], which are set on the Column object by Doctrine\DBAL\Schema\MySqlSchemaManager->_getPortableTableColumnDefinition().
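To make the asymmetry visible, you can look at what MySQL itself reports for the column, which is essentially the data the schema manager builds the "current" Column from (run against whatever database holds the table):
SELECT COLUMN_TYPE, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'Entity'
  AND COLUMN_NAME = 'bigText';
-- -> mediumtext | utf8 | utf8_unicode_ci
The to-be column built from the annotation carries no charset/collation platform options at all, so the comparator flags a difference on every run.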
Poking around a bit in the source, I've found I can use EventListeners: because Doctrine\DBAL\Schema\AbstractSchemaManager->_getPortableTableColumnList() will emit an onSchemaColumnDefinition event and if I set $event->preventDefault() in there and create my own Column object and use $event->setColumn($column), I can circumvent MySqlSchemaManager->_getPortableTableColumnDefinition(). Or, I can go and fiddle with the other side and change the Schema created from the Classes / annotations and use postGenerateSchemaTable (which is much easier and feels less weird than messing with onSchemaColumnDefinition).
While I've learned a tiny bit more about internal flows, this feels way too complicated to handle mediumtext in MySQL without creating unnecessary ALTER TABLEs all the time.
Of course, I can sidestep the issue entirely by using type="text" instead of type="string" + a fixed length (which I realized while browsing issues here before asking this), but out of curiosity: am I missing something fundamental for dealing with fixed-length fields in MySQL that fall into the mediumtext length range?

After updating table ids via CSV file, when trying to add a new record I get: duplicate key value violates unique constraint

Problem.
After a successful data migration from CSV files to a Django/Postgres application,
when I try to add a new record via the application interface I get: duplicate key value violates unique constraint. (Since I had ids in my CSV files, I used them as keys.)
Basically the app tries to generate ids that have already been migrated.
After each attempt the ID increments by one, so if I have 160 records I have to get this error 160 times, and on attempt 161 the record saves OK.
Any ideas how to solve it?
PostgreSQL doesn't have an actual AUTO_INCREMENT column, at least not in the way that MySQL does. Instead it has the special SERIAL pseudo-type. This creates a four-byte INT column, creates a sequence for it, and sets the column's default to draw from that sequence. Behind the scenes, when PostgreSQL sees that no value was supplied for that ID column, it takes the next value from the sequence that was created along with the column.
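As a rough sketch (table and column names here are placeholders), declaring a column as SERIAL expands to roughly this:
-- What "id SERIAL" does behind the scenes, approximately:
CREATE SEQUENCE my_table_id_seq;
CREATE TABLE my_table (
    id integer NOT NULL DEFAULT nextval('my_table_id_seq')
);
ALTER SEQUENCE my_table_id_seq OWNED BY my_table.id;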
You can see this by:
SELECT
TABLE_NAME, COLUMN_NAME, COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME='<your-table>' AND COLUMN_NAME = '<your-id-column>';
You should see something like:
table_name | column_name | column_default
--------------+---------------------------+-------------------------------------
<your-table> | <your-id-column> | nextval('<table-name>_<your-id-column>_seq'::regclass)
(1 row)
To resolve your particular issue, you're going to need to reset the value of the sequence (named <table-name>_<your-id-column>_seq) to reflect the current index.
ALTER SEQUENCE your_name_your_id_column_seq RESTART WITH 161;
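If you would rather not hardcode the restart value, a variant (assuming the id column is literally named id; adjust the table and sequence names to yours) is to set the sequence from the current maximum:
-- Point the sequence at the highest id the CSV import brought in;
-- the next INSERT will then get MAX(id) + 1.
SELECT setval('your_name_your_id_column_seq', (SELECT MAX(id) FROM your_table));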

Getting table information for Redshift `stl_load_errors` errors

I am using the Redshift COPY command to load data into a Redshift table from S3. When something goes wrong, I typically get an error like ERROR: Load into table 'example' failed. Check 'stl_load_errors' system table for details. I can always look up stl_load_errors manually to get the details; now I am trying to figure out how to do that automatically.
From the documentation, it looks like the following query should give me all the details I need:
SELECT *
FROM stl_load_errors errors
INNER JOIN svv_table_info info
ON errors.tbl = info.table_id
AND info.schema = '<schema-name>'
AND info.table = '<table-name>'
However it always returns nothing. I also tried using stv_tbl_perm instead of svv_table_info, and still nothing.
After some troubleshooting, I see two things I don't understand:
I see multiple different IDs in stv_tbl_perm and svv_table_info for the same exact table. Why is that?
I see the tbl field on stl_load_errors referencing ids that do not exist in stv_tbl_perm or svv_table_info. Again, why?
It feels like I'm not understanding something about the structure of these tables, but what that is completely escapes me.
This is because tbl and table_id have different types. The first one is integer, the second one is oid.
When you cast the oid to integer, the columns have the same values. You can check with this query:
SELECT table_id::integer, table_id
FROM SVV_TABLE_INFO
I get results when I execute
SELECT errors.tbl, info.table_id::integer, info.table_id, *
FROM stl_load_errors errors
INNER JOIN svv_table_info info
ON errors.tbl = info.table_id
Please note that inner join is ON errors.tbl = info.table_id
I finally got to the bottom of it, and it is surprisingly boring and probably not useful to many ...
I had an existing table. My code that was creating the table was wrapped in a transaction, and it was dropping the table inside the transaction. The code that was querying stl_load_errors was outside the transaction. So the table_id outside and inside the transaction were different, as it was effectively a different table.
You could try looking by filename. This doesn't really answer the question about joining the various tables, but I use a query like the following to group files that are part of the same manifest file and let me compare the count against the maxerror setting:
select min(starttime) over (partition by substring(filename, 1, 53)) as starttime,
substring(filename, 1, 53) as filename, btrim(err_reason) as err_reason, count(*)
from stl_load_errors where filename like '%/some_s3_path/%'
group by starttime, filename, err_reason order by starttime desc;
This worked for me without any casting:
schemaz=# select i.database, e.err_code from stl_load_errors e join svv_table_info i on e.tbl=i.table_id limit 5
schemaz-# ;
database | err_code
-----------+----------
schemaz | 1204
schemaz | 1204
schemaz | 1204
schemaz | 1204
schemaz | 1204

Declare a variable in RedShift

SQL Server has the ability to declare a variable, then call that variable in a query like so:
DECLARE @StartDate date;
SET @StartDate = '2015-01-01';
SELECT *
FROM Orders
WHERE OrderDate >= @StartDate;
Does this functionality work in Amazon's RedShift? From the documentation, it looks like DECLARE is used solely for cursors. SET looks to be the command I am looking for, but when I attempt to use it, I get an error:
set session StartDate = '2015-01-01';
[Error Code: 500310, SQL State: 42704] [Amazon](500310) Invalid operation: unrecognized configuration parameter "startdate";
Is it possible to do this in RedShift?
Slavik Meltser's answer is great. As a variation on this theme, you can also use a WITH construct:
WITH tmp_variables AS (
SELECT
'2015-01-01'::DATE AS StartDate,
'some string' AS some_value,
5556::BIGINT AS some_id
)
SELECT *
FROM Orders
WHERE OrderDate >= (SELECT StartDate FROM tmp_variables);
Actually, you can simulate a variable using a temporary table: create one, set the data, and you are good to go.
Something like this:
CREATE TEMP TABLE tmp_variables AS SELECT
'2015-01-01'::DATE AS StartDate,
'some string' AS some_value,
5556::BIGINT AS some_id;
SELECT *
FROM Orders
WHERE OrderDate >= (SELECT StartDate FROM tmp_variables);
The temp table will be deleted automatically at the end of the session.
Temp tables are bound to a session (connection) and therefore cannot be shared across sessions.
No, Amazon Redshift does not have the concept of variables. Redshift presents itself as PostgreSQL, but is highly modified.
There was mention of User Defined Functions at the 2014 AWS re:Invent conference, which might meet some of your needs.
Update in 2016: Scalar User Defined Functions can perform computations but cannot act as stored variables.
Note that if you are using the psql client to query, psql variables can still be used as always with Redshift:
$ psql --host=my_cluster_name.clusterid.us-east-1.redshift.amazonaws.com \
--dbname=your_db --port=5432 --username=your_login -v dt_format=DD-MM-YYYY
# select current_date;
date
------------
2015-06-15
(1 row)
# select to_char(current_date,:'dt_format');
to_char
------------
15-06-2015
(1 row)
# \set
AUTOCOMMIT = 'on'
...
dt_format = 'DD-MM-YYYY'
...
# \set dt_format 'MM/DD/YYYY'
# select to_char(current_date,:'dt_format');
to_char
------------
06/15/2015
(1 row)
You can now use user-defined functions (UDFs) to do what you want:
CREATE FUNCTION my_const()
RETURNS VARCHAR IMMUTABLE AS
$$ return 'my_string_constant' $$ LANGUAGE plpythonu;
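Once the function exists, it can be called inline wherever the constant is needed (Orders and some_column are just illustrative names):
SELECT *
FROM Orders
WHERE some_column = my_const();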
Unfortunately, this does require certain access permissions on your Redshift database.
Not an exact answer, but in DBeaver you can set up variables to use in your local queries in the IDE. Our team has found this helpful for testing before we put code into production.
From this answer: https://stackoverflow.com/a/58308439/220997
You should then be able to do:
@set date = '2019-10-09'
SELECT ${date}::DATE, ${date}::TIMESTAMP WITHOUT TIME ZONE
which produces:
| date       | timestamp           |
|------------|---------------------|
| 2019-10-09 | 2019-10-09 00:00:00 |
Again, note: this only works in the DBeaver IDE. This SQL won't work when integrated into stored procedures or called from other tools.

Doctrine inserting NULL values for undefined fields

I don't know if this is a bug, but I am using Doctrine 2.3.0 and I found the persist/flush behaviour quite strange. I have a basic table:
------------------
| test           |
------------------
| id INT | AI    |
| field1 VARCHAR |
| field2 VARCHAR |
------------------
When I create an entry by setting only field1:
$test = new Entities\test();
$test->setField1('foo');
$em->persist($test);
$em->flush();
Doctrine\DBAL\Logging\EchoSQLLogger tells me (which is confirmed by looking at the DB) that Doctrine performs the following query:
INSERT INTO test (field1, field2) VALUES (?, ?)
array(2) {
[1]=>
string(3) "foo"
[2]=>
NULL
}
As you can see, although I haven't set field2, Doctrine does put it in the insert statement with a NULL value.
I get this behaviour for all my entities, which is problematic because, when doing inserts, my default DB values for fields I don't set are overwritten with NULL.
Is this the expected default behaviour of Doctrine, and is there a way to turn it off (i.e. exclude fields I don't set from the INSERT statements)?
I should probably add that my entities were generated automatically with reverse engineering, and that the declaration for one of the fields looks like this:
/**
 * @var string $field2
 * // removing nullable makes no diff.
 * @ORM\Column(name="field2", type="string", length=45, nullable=true)
 */
private $field2;

/**
 * Set field2
 *
 * @param string $field2
 * @return Test
 */
public function setField2($field2)
{
    $this->field2 = $field2;
    return $this;
}
I suppose this is the default behaviour, as it could be considered logical to create the full record in one INSERT statement. But I could be mistaken. Having something like @ORM\Column(type="boolean", columnDefinition="TINYINT(1) NULL DEFAULT 0") could be treated as a hack, but it is a solution. I would advise you to review your database record insertion logic.