After migrating to Oracle 18c Enterprise Edition, a function-based index fails to create.
Here is my index DDL:
CREATE INDEX my_index ON my_table
(UPPER( REGEXP_REPLACE ("DEPT_NUM",'[^[:alnum:]]',NULL,1,0)))
TABLESPACE my_tbspace
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
I get the following error:
ORA-01743: only pure functions can be indexed
01743. 00000 - "only pure functions can be indexed"
*Cause: The indexed function uses SYSDATE or the user environment.
*Action: PL/SQL functions must be pure (RNDS, RNPS, WNDS, WNPS). SQL
expressions must not use SYSDATE, USER, USERENV(), or anything
else dependent on the session state. NLS-dependent functions
are OK.
Is this a known bug in 18c? If this function-based index is no longer supported, what is another way to write this expression?
The issue is that regexp_replace is not deterministic. The problem shows up when you change NLS settings:
alter session set nls_language = english;
with rws as (
select 'STÜFF' v
from dual
)
select regexp_replace ( v, '[A-Z]+', '#' )
from rws;
REGEXP_REPLACE(V,'[A-Z]+','#')
#Ü#
alter session set nls_language = german;
with rws as (
select 'STÜFF' v
from dual
)
select regexp_replace ( v, '[A-Z]+', '#' )
from rws;
REGEXP_REPLACE(V,'[A-Z]+','#')
#
In English, U-umlaut collates at the end of the alphabet, outside the range A-Z; in German it collates right after U, inside the range. So the first statement doesn't replace it, but the second does.
In Oracle Database 12.1 and earlier, regexp_replace was incorrectly marked as deterministic. 12.2 fixed this by marking it non-deterministic.
Consider carefully whether any workarounds manage diacritics correctly.
MOS note 2592779.1 discusses this further.
Most likely the REGEXP_REPLACE causes the problem; see "Find out if a string contains only ASCII characters". You can bypass the limitation with a user-defined function (thanks to Bob Jarvis):
CREATE OR REPLACE FUNCTION KEEP_ALNUM(strIn IN VARCHAR2)
RETURN VARCHAR2
DETERMINISTIC
AS
BEGIN
RETURN UPPER(REGEXP_REPLACE(strIn, '[^[:alnum:]]', NULL, 1, 0));
END KEEP_ALNUM;
/
CREATE INDEX DEPTS_1 ON DEPTS(KEEP_ALNUM(DEPT_NUM));
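For the index to be used, queries should filter on the wrapper function rather than on the original UPPER/REGEXP_REPLACE expression. A minimal sketch (DEPTS and DEPT_NUM as above; 'A123' is just a placeholder value):
SELECT *
FROM DEPTS
WHERE KEEP_ALNUM(DEPT_NUM) = 'A123';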
Just make sure the function carries the DETERMINISTIC keyword; Oracle does not verify the claim, so you can declare even a useless function like the one below and still create a function-based index on it.
CREATE OR REPLACE FUNCTION SillyValue RETURN VARCHAR2 DETERMINISTIC
AS
BEGIN
RETURN DBMS_RANDOM.STRING('p', 20);
END;
/
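The danger is that rows are indexed with whatever the function returned at insert time, so index access and a full scan can disagree. A rough, hypothetical check for that kind of corruption, using the DEPTS_1 index from above (the hints merely suggest the access path; 'A123' is a placeholder):
SELECT /*+ INDEX(d DEPTS_1) */ COUNT(*) FROM DEPTS d WHERE KEEP_ALNUM(DEPT_NUM) = 'A123';
SELECT /*+ FULL(d) */ COUNT(*) FROM DEPTS d WHERE KEEP_ALNUM(DEPT_NUM) = 'A123';
-- If the two counts differ, the DETERMINISTIC promise has been broken somewhere.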
There are a couple of workarounds.
The first one is a hack.
As you may know, when you create an FBI, Oracle creates a hidden column and builds the index on it.
Moreover, you can even specify the name of that column instead of the FBI expression, and Oracle will use the index.
set lines 70 pages 70
column column_name format a15
column data_type format a15
drop table my_table;
create table my_table(dept_num, dept_descr) as select rownum||'*', 'dummy' from dual connect by level <= 1e6;
create index my_index
on my_table(upper(regexp_replace(dept_num, '[^[:alnum:]]', null, 1, 0)));
select column_name, data_type from user_tab_cols where table_name = 'MY_TABLE';
explain plan for
select * from my_table where upper(regexp_replace(dept_num, '[^[:alnum:]]', null, 1, 0)) = '666';
select * from table(dbms_xplan.display(format => 'BASIC'));
explain plan for
select * from my_table where SYS_NC00003$ = '666';
select * from table(dbms_xplan.display(format => 'BASIC'));
Output
Table dropped.
Table created.
Index created.
COLUMN_NAME DATA_TYPE
--------------- ---------------
DEPT_NUM VARCHAR2
DEPT_DESCR CHAR
SYS_NC00003$ VARCHAR2
3 rows selected.
Explain complete.
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 2234884270
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE |
| 2 | INDEX RANGE SCAN | MY_INDEX |
--------------------------------------------------------
9 rows selected.
Explain complete.
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 2234884270
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE |
| 2 | INDEX RANGE SCAN | MY_INDEX |
--------------------------------------------------------
9 rows selected.
So to mimic an FBI, you can create a hidden column yourself and build an index on top of it.
In Oracle 11g that can be done using dbms_stats.create_extended_stats.
drop index my_index;
begin
for i in (select dbms_stats.create_extended_stats
(user, 'my_table', '(upper(regexp_replace("DEPT_NUM", ''[^[:alnum:]]'', null, 1, 0)))') as col_name
from dual)
loop
execute immediate(utl_lms.format_message('alter table %s rename column "%s" to my_hidden_col','my_table', i.col_name));
end loop;
end;
/
select column_name, data_type from user_tab_cols where table_name = 'MY_TABLE';
create index my_index on my_table(my_hidden_col);
explain plan for
select * from my_table where upper(regexp_replace(dept_num, '[^[:alnum:]]', null, 1, 0)) = '666';
select * from table(dbms_xplan.display(format => 'BASIC'));
explain plan for
select * from my_table where MY_HIDDEN_COL = '666';
select * from table(dbms_xplan.display(format => 'BASIC'));
Output
Index dropped.
PL/SQL procedure successfully completed.
COLUMN_NAME DATA_TYPE
--------------- ---------------
DEPT_NUM VARCHAR2
DEPT_DESCR CHAR
MY_HIDDEN_COL VARCHAR2
3 rows selected.
Index created.
Explain complete.
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 2234884270
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE |
| 2 | INDEX RANGE SCAN | MY_INDEX |
--------------------------------------------------------
9 rows selected.
Explain complete.
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 2234884270
--------------------------------------------------------
| Id | Operation | Name |
--------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE |
| 2 | INDEX RANGE SCAN | MY_INDEX |
--------------------------------------------------------
9 rows selected.
Starting with Oracle 12c, invisible virtual columns are documented, so this becomes even more straightforward.
alter table my_table add (my_hidden_col invisible as
(upper(regexp_replace(dept_num, '[^[:alnum:]]', null, 1, 0))) virtual);
create index my_index on my_table(my_hidden_col);
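As a quick sanity check, the same explain-plan test as before should show the optimizer mapping the original expression onto the invisible virtual column and using the index:
explain plan for
select * from my_table where upper(regexp_replace(dept_num, '[^[:alnum:]]', null, 1, 0)) = '666';
select * from table(dbms_xplan.display(format => 'BASIC'));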
Another approach is to implement the same logic without a regex.
create index my_index on my_table(
translate(upper(dept_num),
'_'||translate(dept_num, '_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', '_'),
'_'));
But in this case you have to make sure that every regex expression used in predicates is replaced with the new translate-based one, as in the sketch below.
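For instance, a lookup that previously used the regex would be rewritten with the same translate() expression (the literal '666' is just the placeholder from the earlier examples); note that the ASCII character list does not treat accented letters the way [:alnum:] does:
select *
from my_table
where translate(upper(dept_num),
'_'||translate(dept_num, '_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', '_'),
'_') = '666';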
The work-around I found easiest was to create the index using NLS_UPPER instead of UPPER:
CREATE INDEX my_index ON my_table
(REGEXP_REPLACE(NLS_UPPER("DEPT_NUM"), '[^[:alnum:]]', NULL, 1, 0))
TABLESPACE my_tbspace
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
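As with any function-based index, queries must use the matching expression for this index to be picked up; a sketch of the corresponding predicate ('666' is just a placeholder value):
SELECT *
FROM my_table
WHERE REGEXP_REPLACE(NLS_UPPER("DEPT_NUM"), '[^[:alnum:]]', NULL, 1, 0) = '666';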
There is no migration script for an Oracle database if you are running 1.6.0 and want to move to 1.7.0. I created a Jira issue for this and put together a solution.
I merged the MySQL migration script (mysql.sql) with the Oracle install script.
The result is my migration script for Oracle; I hope it contains all the steps.
ALTER TABLE IDN_OAUTH_CONSUMER_APPS DROP COLUMN LOGIN_PAGE_URL
/
ALTER TABLE IDN_OAUTH_CONSUMER_APPS DROP COLUMN ERROR_PAGE_URL
/
ALTER TABLE IDN_OAUTH_CONSUMER_APPS DROP COLUMN CONSENT_PAGE_URL
/
/*
DROP INDEX IDX_AT_CK_AU
/
DROP SEQUENCE IDP_SEQUENCE
/
DROP SEQUENCE IDP_ROLE_MAPPINGS_SEQUENCE
/
DROP SEQUENCE IDP_ROLES_SEQUENCE
/
ALTER TABLE UM_TENANT_IDP_ROLE_MAPPINGS DROP PRIMARY KEY CASCADE
/
ALTER TABLE UM_TENANT_IDP_ROLES DROP PRIMARY KEY CASCADE
/
ALTER TABLE IDP_BASE_TABLE DROP PRIMARY KEY CASCADE
/
ALTER TABLE UM_TENANT_IDP DROP PRIMARY KEY CASCADE
/
DROP TABLE UM_TENANT_IDP_ROLE_MAPPINGS CASCADE CONSTRAINTS
/
DROP TABLE UM_TENANT_IDP_ROLES CASCADE CONSTRAINTS
/
DROP TABLE UM_TENANT_IDP CASCADE CONSTRAINTS
/
DROP TABLE IDP_BASE_TABLE CASCADE CONSTRAINTS
/
*/
ALTER TABLE AM_API_URL_MAPPING ADD (MEDIATION_SCRIPT BLOB DEFAULT NULL)
/
CREATE TABLE IDN_OAUTH2_SCOPE (
SCOPE_ID INTEGER,
SCOPE_KEY VARCHAR2 (100) NOT NULL,
NAME VARCHAR2 (255) NULL,
DESCRIPTION VARCHAR2 (512) NULL,
TENANT_ID INTEGER DEFAULT 0 NOT NULL,
ROLES VARCHAR2 (500) NULL,
PRIMARY KEY (SCOPE_ID))
/
CREATE SEQUENCE IDN_OAUTH2_SCOPE_SEQUENCE START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDN_OAUTH2_SCOPE_TRIGGER
BEFORE INSERT
ON IDN_OAUTH2_SCOPE
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDN_OAUTH2_SCOPE_SEQUENCE.nextval INTO :NEW.SCOPE_ID FROM dual;
END;
/
CREATE TABLE IDN_OAUTH2_RESOURCE_SCOPE (
RESOURCE_PATH VARCHAR2 (255) NOT NULL,
SCOPE_ID INTEGER NOT NULL,
PRIMARY KEY (RESOURCE_PATH),
FOREIGN KEY (SCOPE_ID) REFERENCES IDN_OAUTH2_SCOPE (SCOPE_ID) ON DELETE CASCADE
)
/
CREATE TABLE IDN_SCIM_GROUP (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
ROLE_NAME VARCHAR2(255) NOT NULL,
ATTR_NAME VARCHAR2(1024) NOT NULL,
ATTR_VALUE VARCHAR2(1024),
PRIMARY KEY (ID))
/
CREATE SEQUENCE IDN_SCIM_GROUP_SEQUENCE START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDN_SCIM_GROUP_TRIGGER
BEFORE INSERT
ON IDN_SCIM_GROUP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDN_SCIM_GROUP_SEQUENCE.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDN_SCIM_PROVIDER (
CONSUMER_ID VARCHAR(255) NOT NULL,
PROVIDER_ID VARCHAR(255) NOT NULL,
USER_NAME VARCHAR(255) NOT NULL,
USER_PASSWORD VARCHAR(255) NOT NULL,
USER_URL VARCHAR(1024) NOT NULL,
GROUP_URL VARCHAR(1024),
BULK_URL VARCHAR(1024),
PRIMARY KEY (CONSUMER_ID,PROVIDER_ID))
/
CREATE TABLE IDN_OPENID_REMEMBER_ME (
USER_NAME VARCHAR(255) NOT NULL,
TENANT_ID INTEGER DEFAULT 0,
COOKIE_VALUE VARCHAR(1024),
CREATED_TIME TIMESTAMP,
PRIMARY KEY (USER_NAME, TENANT_ID))
/
CREATE TABLE IDN_OPENID_ASSOCIATIONS (
HANDLE VARCHAR(255) NOT NULL,
ASSOC_TYPE VARCHAR(255) NOT NULL,
EXPIRE_IN TIMESTAMP NOT NULL,
MAC_KEY VARCHAR(255) NOT NULL,
ASSOC_STORE VARCHAR(128) DEFAULT 'SHARED',
PRIMARY KEY (HANDLE))
/
CREATE TABLE IDN_STS_STORE (
ID INTEGER,
TOKEN_ID VARCHAR(255) NOT NULL,
TOKEN_CONTENT BLOB NOT NULL,
CREATE_DATE TIMESTAMP NOT NULL,
EXPIRE_DATE TIMESTAMP NOT NULL,
STATE INTEGER DEFAULT 0,
PRIMARY KEY (ID))
/
CREATE SEQUENCE IDN_STS_STORE_SEQUENCE START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDN_STS_STORE_TRIGGER
BEFORE INSERT
ON IDN_STS_STORE
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDN_STS_STORE_SEQUENCE.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDN_IDENTITY_USER_DATA (
TENANT_ID INTEGER DEFAULT -1234,
USER_NAME VARCHAR(255) NOT NULL,
DATA_KEY VARCHAR(255) NOT NULL,
DATA_VALUE VARCHAR(255) NOT NULL,
PRIMARY KEY (TENANT_ID, USER_NAME, DATA_KEY))
/
CREATE TABLE IDN_IDENTITY_META_DATA (
USER_NAME VARCHAR(255) NOT NULL,
TENANT_ID INTEGER DEFAULT -1234,
METADATA_TYPE VARCHAR(255) NOT NULL,
METADATA VARCHAR(255) NOT NULL,
VALID VARCHAR(255) NOT NULL,
PRIMARY KEY (TENANT_ID, USER_NAME, METADATA_TYPE,METADATA))
/
-- End of IDN Tables --
-- Start of IDN-APPLICATION-MGT Tables--
CREATE TABLE SP_APP (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
APP_NAME VARCHAR (255) NOT NULL ,
USER_STORE VARCHAR (255) NOT NULL,
USERNAME VARCHAR (255) NOT NULL ,
DESCRIPTION VARCHAR (1024),
ROLE_CLAIM VARCHAR (512),
AUTH_TYPE VARCHAR (255) NOT NULL,
PROVISIONING_USERSTORE_DOMAIN VARCHAR (512),
IS_LOCAL_CLAIM_DIALECT CHAR(1) DEFAULT '1',
IS_SEND_LOCAL_SUBJECT_ID CHAR(1) DEFAULT '0',
IS_SEND_AUTH_LIST_OF_IDPS CHAR(1) DEFAULT '0',
SUBJECT_CLAIM_URI VARCHAR (512),
IS_SAAS_APP CHAR(1) DEFAULT '0',
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_APP_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_APP_TRIG
BEFORE INSERT
ON SP_APP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_APP_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE UNIQUE INDEX APPLICATION_NAME_CONSTRAINT ON SP_APP(APP_NAME, TENANT_ID)
/
ALTER TABLE SP_APP ADD CONSTRAINT APPLICATION_NAME_CONSTRAINT UNIQUE (APP_NAME, TENANT_ID) USING INDEX APPLICATION_NAME_CONSTRAINT
/
CREATE TABLE SP_INBOUND_AUTH (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
INBOUND_AUTH_KEY VARCHAR (255) NOT NULL,
INBOUND_AUTH_TYPE VARCHAR (255) NOT NULL,
PROP_NAME VARCHAR (255),
PROP_VALUE VARCHAR (1024) ,
APP_ID INTEGER NOT NULL,
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_INBOUND_AUTH_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_INBOUND_AUTH_TRIG
BEFORE INSERT
ON SP_INBOUND_AUTH
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_INBOUND_AUTH_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_INBOUND_AUTH ADD CONSTRAINT APPLICATION_ID_CONSTRAINT FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_AUTH_STEP (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
STEP_ORDER INTEGER DEFAULT 1,
APP_ID INTEGER NOT NULL ,
IS_SUBJECT_STEP CHAR(1) DEFAULT '0',
IS_ATTRIBUTE_STEP CHAR(1) DEFAULT '0',
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_AUTH_STEP_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_AUTH_STEP_TRIG
BEFORE INSERT
ON SP_AUTH_STEP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_AUTH_STEP_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_AUTH_STEP ADD CONSTRAINT APPLICATION_ID_CONSTRAINT_STEP FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_FEDERATED_IDP (
ID INTEGER NOT NULL,
TENANT_ID INTEGER NOT NULL,
AUTHENTICATOR_ID INTEGER NOT NULL,
PRIMARY KEY (ID, AUTHENTICATOR_ID))
/
ALTER TABLE SP_FEDERATED_IDP ADD CONSTRAINT STEP_ID_CONSTRAINT FOREIGN KEY (ID) REFERENCES SP_AUTH_STEP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_CLAIM_MAPPING (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
IDP_CLAIM VARCHAR (512) NOT NULL ,
SP_CLAIM VARCHAR (512) NOT NULL ,
APP_ID INTEGER NOT NULL,
IS_REQUESTED VARCHAR(128) DEFAULT '0',
DEFAULT_VALUE VARCHAR(255),
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_CLAIM_MAPPING_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_CLAIM_MAPPING_TRIG
BEFORE INSERT
ON SP_CLAIM_MAPPING
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_CLAIM_MAPPING_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_CLAIM_MAPPING ADD CONSTRAINT CLAIMID_APPID_CONSTRAINT FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_ROLE_MAPPING (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
IDP_ROLE VARCHAR (255) NOT NULL ,
SP_ROLE VARCHAR (255) NOT NULL ,
APP_ID INTEGER NOT NULL,
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_ROLE_MAPPING_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_ROLE_MAPPING_TRIG
BEFORE INSERT
ON SP_ROLE_MAPPING
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_ROLE_MAPPING_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_ROLE_MAPPING ADD CONSTRAINT ROLEID_APPID_CONSTRAINT FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_REQ_PATH_AUTHENTICATOR (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
AUTHENTICATOR_NAME VARCHAR (255) NOT NULL ,
APP_ID INTEGER NOT NULL,
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_REQ_PATH_AUTH_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_REQ_PATH_AUTH_TRIG
BEFORE INSERT
ON SP_REQ_PATH_AUTHENTICATOR
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_REQ_PATH_AUTH_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_REQ_PATH_AUTHENTICATOR ADD CONSTRAINT REQ_AUTH_APPID_CONSTRAINT FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE SP_PROVISIONING_CONNECTOR (
ID INTEGER,
TENANT_ID INTEGER NOT NULL,
IDP_NAME VARCHAR (255) NOT NULL ,
CONNECTOR_NAME VARCHAR (255) NOT NULL ,
APP_ID INTEGER NOT NULL,
IS_JIT_ENABLED CHAR(1) DEFAULT '0',
BLOCKING CHAR(1) DEFAULT '0',
PRIMARY KEY (ID))
/
CREATE SEQUENCE SP_PROV_CONNECTOR_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER SP_PROV_CONNECTOR_TRIG
BEFORE INSERT
ON SP_PROVISIONING_CONNECTOR
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT SP_PROV_CONNECTOR_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
ALTER TABLE SP_PROVISIONING_CONNECTOR ADD CONSTRAINT PRO_CONNECTOR_APPID_CONSTRAINT FOREIGN KEY (APP_ID) REFERENCES SP_APP (ID) ON DELETE CASCADE
/
CREATE TABLE IDP (
ID INTEGER,
TENANT_ID INTEGER,
NAME VARCHAR(254) NOT NULL,
IS_ENABLED CHAR(1) DEFAULT '1',
IS_PRIMARY CHAR(1) DEFAULT '0',
HOME_REALM_ID VARCHAR(254),
IMAGE BLOB,
CERTIFICATE BLOB,
ALIAS VARCHAR(254),
INBOUND_PROV_ENABLED CHAR (1) DEFAULT '0',
INBOUND_PROV_USER_STORE_ID VARCHAR(254),
USER_CLAIM_URI VARCHAR(254),
ROLE_CLAIM_URI VARCHAR(254),
DESCRIPTION VARCHAR (1024),
DEFAULT_AUTHENTICATOR_NAME VARCHAR(254),
DEFAULT_PRO_CONNECTOR_NAME VARCHAR(254),
PROVISIONING_ROLE VARCHAR(128),
IS_FEDERATION_HUB CHAR(1) DEFAULT '0',
IS_LOCAL_CLAIM_DIALECT CHAR(1) DEFAULT '0',
PRIMARY KEY (ID),
DISPLAY_NAME VARCHAR(254),
UNIQUE (TENANT_ID, NAME))
/
CREATE SEQUENCE IDP_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_TRIG
BEFORE INSERT
ON IDP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
INSERT INTO IDP (TENANT_ID, NAME, HOME_REALM_ID) VALUES (-1234, 'LOCAL', 'localhost')
/
CREATE TABLE IDP_ROLE (
ID INTEGER,
IDP_ID INTEGER,
TENANT_ID INTEGER,
ROLE VARCHAR(254),
PRIMARY KEY (ID),
UNIQUE (IDP_ID, ROLE),
FOREIGN KEY (IDP_ID) REFERENCES IDP(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_ROLE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_ROLE_TRIG
BEFORE INSERT
ON IDP_ROLE
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_ROLE_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_ROLE_MAPPING (
ID INTEGER,
IDP_ROLE_ID INTEGER,
TENANT_ID INTEGER,
USER_STORE_ID VARCHAR (253),
LOCAL_ROLE VARCHAR(253),
PRIMARY KEY (ID),
UNIQUE (IDP_ROLE_ID, TENANT_ID, USER_STORE_ID, LOCAL_ROLE),
FOREIGN KEY (IDP_ROLE_ID) REFERENCES IDP_ROLE(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_ROLE_MAPPING_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_ROLE_MAPPING_TRIG
BEFORE INSERT
ON IDP_ROLE_MAPPING
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_ROLE_MAPPING_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_CLAIM (
ID INTEGER,
IDP_ID INTEGER,
TENANT_ID INTEGER,
CLAIM VARCHAR(254),
PRIMARY KEY (ID),
UNIQUE (IDP_ID, CLAIM),
FOREIGN KEY (IDP_ID) REFERENCES IDP(ID) ON DELETE CASCADE)
/
CREATE TABLE IDP_CLAIM_MAPPING (
ID INTEGER,
IDP_CLAIM_ID INTEGER,
TENANT_ID INTEGER,
LOCAL_CLAIM VARCHAR(253),
DEFAULT_VALUE VARCHAR(255),
IS_REQUESTED VARCHAR(128) DEFAULT '0',
PRIMARY KEY (ID),
UNIQUE (IDP_CLAIM_ID, TENANT_ID, LOCAL_CLAIM),
FOREIGN KEY (IDP_CLAIM_ID) REFERENCES IDP_CLAIM(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_CLAIM_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_CLAIM_TRIG
BEFORE INSERT
ON IDP_CLAIM
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_CLAIM_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE SEQUENCE IDP_CLAIM_MAPPING_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_CLAIM_MAPPING_TRIG
BEFORE INSERT
ON IDP_CLAIM_MAPPING
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_CLAIM_MAPPING_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_AUTHENTICATOR (
ID INTEGER,
TENANT_ID INTEGER,
IDP_ID INTEGER,
NAME VARCHAR(255) NOT NULL,
IS_ENABLED CHAR (1) DEFAULT '1',
DISPLAY_NAME VARCHAR(255),
PRIMARY KEY (ID),
UNIQUE (TENANT_ID, IDP_ID, NAME),
FOREIGN KEY (IDP_ID) REFERENCES IDP(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_AUTHENTICATOR_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_AUTHENTICATOR_TRIG
BEFORE INSERT
ON IDP_AUTHENTICATOR
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_AUTHENTICATOR_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
INSERT INTO IDP_AUTHENTICATOR (TENANT_ID, IDP_ID, NAME) VALUES (-1234, 1, 'saml2sso')
/
CREATE TABLE IDP_AUTHENTICATOR_PROPERTY (
ID INTEGER,
TENANT_ID INTEGER,
AUTHENTICATOR_ID INTEGER,
PROPERTY_KEY VARCHAR(255) NOT NULL,
PROPERTY_VALUE VARCHAR(2047),
IS_SECRET CHAR (1) DEFAULT '0',
PRIMARY KEY (ID),
UNIQUE (TENANT_ID, AUTHENTICATOR_ID, PROPERTY_KEY),
FOREIGN KEY (AUTHENTICATOR_ID) REFERENCES IDP_AUTHENTICATOR(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_AUTHENTICATOR_PROP_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_AUTHENTICATOR_PROP_TRIG
BEFORE INSERT
ON IDP_AUTHENTICATOR_PROPERTY
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_AUTHENTICATOR_PROP_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_PROVISIONING_CONFIG (
ID INTEGER,
TENANT_ID INTEGER,
IDP_ID INTEGER,
PROVISIONING_CONNECTOR_TYPE VARCHAR(255) NOT NULL,
IS_ENABLED CHAR (1) DEFAULT '0',
IS_BLOCKING CHAR (1) DEFAULT '0',
PRIMARY KEY (ID),
UNIQUE (TENANT_ID, IDP_ID, PROVISIONING_CONNECTOR_TYPE),
FOREIGN KEY (IDP_ID) REFERENCES IDP(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_PROVISIONING_CONFIG_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_PROVISIONING_CONFIG_TRIG
BEFORE INSERT
ON IDP_PROVISIONING_CONFIG
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_PROVISIONING_CONFIG_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_PROV_CONFIG_PROPERTY (
ID INTEGER,
TENANT_ID INTEGER,
PROVISIONING_CONFIG_ID INTEGER,
PROPERTY_KEY VARCHAR(255) NOT NULL,
PROPERTY_VALUE VARCHAR(2048),
PROPERTY_BLOB_VALUE BLOB,
PROPERTY_TYPE CHAR(32) NOT NULL,
IS_SECRET CHAR (1) DEFAULT '0',
PRIMARY KEY (ID),
UNIQUE (TENANT_ID, PROVISIONING_CONFIG_ID, PROPERTY_KEY),
FOREIGN KEY (PROVISIONING_CONFIG_ID) REFERENCES IDP_PROVISIONING_CONFIG(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_PROV_CONFIG_PROP_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_PROV_CONFIG_PROP_TRIG
BEFORE INSERT
ON IDP_PROV_CONFIG_PROPERTY
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_PROV_CONFIG_PROP_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_PROVISIONING_ENTITY (
ID INTEGER,
PROVISIONING_CONFIG_ID INTEGER,
ENTITY_TYPE VARCHAR(255) NOT NULL,
ENTITY_LOCAL_USERSTORE VARCHAR(255) NOT NULL,
ENTITY_NAME VARCHAR(255) NOT NULL,
ENTITY_VALUE VARCHAR(255),
TENANT_ID INTEGER,
PRIMARY KEY (ID),
UNIQUE (ENTITY_TYPE, TENANT_ID, ENTITY_LOCAL_USERSTORE, ENTITY_NAME),
UNIQUE (PROVISIONING_CONFIG_ID, ENTITY_TYPE, ENTITY_VALUE),
FOREIGN KEY (PROVISIONING_CONFIG_ID) REFERENCES IDP_PROVISIONING_CONFIG(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_PROV_ENTITY_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_PROV_ENTITY_TRIG
BEFORE INSERT
ON IDP_PROVISIONING_ENTITY
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_PROV_ENTITY_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
CREATE TABLE IDP_LOCAL_CLAIM (
ID INTEGER,
TENANT_ID INTEGER,
IDP_ID INTEGER,
CLAIM_URI VARCHAR(255) NOT NULL,
DEFAULT_VALUE VARCHAR(255),
IS_REQUESTED VARCHAR(128) DEFAULT '0',
PRIMARY KEY (ID),
UNIQUE (TENANT_ID, IDP_ID, CLAIM_URI),
FOREIGN KEY (IDP_ID) REFERENCES IDP(ID) ON DELETE CASCADE)
/
CREATE SEQUENCE IDP_LOCAL_CLAIM_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER IDP_LOCAL_CLAIM_TRIG
BEFORE INSERT
ON IDP_LOCAL_CLAIM
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT IDP_LOCAL_CLAIM_SEQ.nextval INTO :NEW.ID FROM dual;
END;
/
-- End of IDN-APPLICATION-MGT Tables--
ALTER TABLE AM_APPLICATION_KEY_MAPPING DROP PRIMARY KEY CASCADE
/
ALTER TABLE AM_APPLICATION_KEY_MAPPING ADD (STATE VARCHAR2(30 BYTE) DEFAULT 'COMPLETED' NOT NULL)
/
ALTER TABLE AM_APPLICATION_KEY_MAPPING ADD PRIMARY KEY (APPLICATION_ID, KEY_TYPE)
/
CREATE TABLE AM_APPLICATION_REGISTRATION (
REG_ID INTEGER ,
SUBSCRIBER_ID INTEGER,
WF_REF VARCHAR2(255) NOT NULL,
APP_ID INTEGER,
TOKEN_TYPE VARCHAR2(30),
ALLOWED_DOMAINS VARCHAR2(256),
VALIDITY_PERIOD NUMBER(19),
UNIQUE (SUBSCRIBER_ID,APP_ID,TOKEN_TYPE),
FOREIGN KEY(SUBSCRIBER_ID) REFERENCES AM_SUBSCRIBER(SUBSCRIBER_ID),
FOREIGN KEY(APP_ID) REFERENCES AM_APPLICATION(APPLICATION_ID),
PRIMARY KEY (REG_ID)
)
/
CREATE SEQUENCE AM_APP_REGISTRATION_SEQUENCE START WITH 1 INCREMENT BY 1
/
CREATE OR REPLACE TRIGGER AM_APP_REGISTRATION_TRIGGER
BEFORE INSERT
ON AM_APPLICATION_REGISTRATION
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT AM_APP_REGISTRATION_SEQUENCE.nextval INTO :NEW.REG_ID FROM dual;
END;
/
CREATE TABLE AM_API_SCOPES (
API_ID INTEGER NOT NULL,
SCOPE_ID INTEGER NOT NULL,
FOREIGN KEY (API_ID) REFERENCES AM_API (API_ID) ON DELETE CASCADE,
FOREIGN KEY (SCOPE_ID) REFERENCES IDN_OAUTH2_SCOPE (SCOPE_ID) ON DELETE CASCADE
)
/
CREATE TABLE AM_API_DEFAULT_VERSION (
DEFAULT_VERSION_ID NUMBER,
API_NAME VARCHAR(256) NOT NULL ,
API_PROVIDER VARCHAR(256) NOT NULL ,
DEFAULT_API_VERSION VARCHAR(30) ,
PUBLISHED_DEFAULT_API_VERSION VARCHAR(30) ,
PRIMARY KEY (DEFAULT_VERSION_ID)
)
/
CREATE SEQUENCE AM_API_DEFAULT_VERSION_SEQ START WITH 1 INCREMENT BY 1 NOCACHE
/
CREATE OR REPLACE TRIGGER AM_API_DEFAULT_VERSION_TRG
BEFORE INSERT
ON AM_API_DEFAULT_VERSION
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT AM_API_DEFAULT_VERSION_SEQ.nextval INTO :NEW.DEFAULT_VERSION_ID FROM dual;
END;
/
CREATE OR REPLACE FUNCTION DROP_ALL_SCHEMA_OBJECTS RETURN NUMBER AS
PRAGMA AUTONOMOUS_TRANSACTION;
cursor c_get_objects is
select object_type,'"'||object_name||'"'||decode(object_type,'TABLE' ,' cascade constraints',null) obj_name
from user_objects
where object_type in ('TABLE','VIEW','PACKAGE','SEQUENCE','SYNONYM', 'MATERIALIZED VIEW')
order by object_type;
cursor c_get_objects_type is
select object_type, '"'||object_name||'"' obj_name
from user_objects
where object_type in ('TYPE');
BEGIN
begin
for object_rec in c_get_objects loop
execute immediate ('drop '||object_rec.object_type||' ' ||object_rec.obj_name);
end loop;
for object_rec in c_get_objects_type loop
begin
execute immediate ('drop '||object_rec.object_type||' ' ||object_rec.obj_name);
end;
end loop;
end;
RETURN 0;
END DROP_ALL_SCHEMA_OBJECTS;
/
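-- Note: the DROP_ALL_SCHEMA_OBJECTS helper above is only defined by this script, never invoked.
-- If you need to run it manually, an anonymous block such as the following will do (kept
-- commented out here so the migration itself does not drop anything); it returns 0 on completion:
-- DECLARE
--   result NUMBER;
-- BEGIN
--   result := DROP_ALL_SCHEMA_OBJECTS;
-- END;
-- /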
ALTER TABLE IDN_OAUTH2_ACCESS_TOKEN MODIFY(TOKEN_SCOPE VARCHAR2(2048 BYTE))
/
DECLARE
statement VARCHAR2(2000);
constr_name VARCHAR2(30);
BEGIN
SELECT CONSTRAINT_NAME INTO constr_name FROM USER_CONS_COLUMNS WHERE table_name = 'IDN_OAUTH1A_ACCESS_TOKEN' AND column_name = 'CONSUMER_KEY';
statement := 'ALTER TABLE IDN_OAUTH1A_ACCESS_TOKEN DROP CONSTRAINT '|| constr_name;
EXECUTE IMMEDIATE(statement);
END;
/
ALTER TABLE IDN_OAUTH1A_ACCESS_TOKEN ADD FOREIGN KEY (CONSUMER_KEY) REFERENCES IDN_OAUTH_CONSUMER_APPS (CONSUMER_KEY) ON DELETE CASCADE
/
DECLARE
statement VARCHAR2(2000);
constr_name VARCHAR2(30);
BEGIN
SELECT CONSTRAINT_NAME INTO constr_name FROM USER_CONS_COLUMNS WHERE table_name = 'IDN_OAUTH1A_REQUEST_TOKEN' AND column_name = 'CONSUMER_KEY';
statement := 'ALTER TABLE IDN_OAUTH1A_REQUEST_TOKEN DROP CONSTRAINT '|| constr_name;
EXECUTE IMMEDIATE(statement);
END;
/
ALTER TABLE IDN_OAUTH1A_REQUEST_TOKEN ADD FOREIGN KEY (CONSUMER_KEY) REFERENCES IDN_OAUTH_CONSUMER_APPS (CONSUMER_KEY) ON DELETE CASCADE
/
DECLARE
statement VARCHAR2(2000);
constr_name VARCHAR2(30);
BEGIN
SELECT CONSTRAINT_NAME INTO constr_name FROM USER_CONS_COLUMNS WHERE table_name = 'IDN_OAUTH2_AUTHORIZATION_CODE' AND column_name = 'CONSUMER_KEY';
statement := 'ALTER TABLE IDN_OAUTH2_AUTHORIZATION_CODE DROP CONSTRAINT '|| constr_name;
EXECUTE IMMEDIATE(statement);
END;
/
ALTER TABLE IDN_OAUTH2_AUTHORIZATION_CODE ADD FOREIGN KEY (CONSUMER_KEY) REFERENCES IDN_OAUTH_CONSUMER_APPS (CONSUMER_KEY) ON DELETE CASCADE
/
As mentioned in the question, the answer has been addressed to WSO2 in a Jira issue: https://wso2.org/jira/browse/APIMANAGER-2524
I am copying data from Amazon S3 to Redshift. During this process, I need to avoid the same files being loaded again. I don't have any unique constraints on my Redshift table. Is there a way to implement this using the copy command?
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html
I tried adding a unique constraint and setting a column as the primary key, with no luck. Redshift does not seem to support unique/primary key constraints.
As user1045047 mentioned, Amazon Redshift doesn't support unique constraints, so I had been looking for a way to delete duplicate records from a table with a delete statement.
Finally, I found a reasonable approach.
Amazon Redshift supports creating an IDENTITY column, which stores an auto-generated unique number.
http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html
The following SQL is the PostgreSQL pattern for deleting duplicated records, using OID as the unique column; you can use the same SQL on Redshift by replacing OID with the identity column.
DELETE FROM duplicated_table WHERE OID > (
SELECT MIN(OID) FROM duplicated_table d2
WHERE duplicated_table.column1 = d2.column1
AND duplicated_table.column2 = d2.column2
);
Here is an example that I tested on my Amazon Redshift cluster.
create table auto_id_table (auto_id int IDENTITY, name varchar, age int);
insert into auto_id_table (name, age) values('John', 18);
insert into auto_id_table (name, age) values('John', 18);
insert into auto_id_table (name, age) values('John', 18);
insert into auto_id_table (name, age) values('John', 18);
insert into auto_id_table (name, age) values('John', 18);
insert into auto_id_table (name, age) values('Bob', 20);
insert into auto_id_table (name, age) values('Bob', 20);
insert into auto_id_table (name, age) values('Matt', 24);
select * from auto_id_table order by auto_id;
auto_id | name | age
---------+------+-----
1 | John | 18
2 | John | 18
3 | John | 18
4 | John | 18
5 | John | 18
6 | Bob | 20
7 | Bob | 20
8 | Matt | 24
(8 rows)
delete from auto_id_table where auto_id > (
select min(auto_id) from auto_id_table d
where auto_id_table.name = d.name
and auto_id_table.age = d.age
);
select * from auto_id_table order by auto_id;
auto_id | name | age
---------+------+-----
1 | John | 18
6 | Bob | 20
8 | Matt | 24
(3 rows)
It also works with the COPY command, like this.
auto_id_table.csv
John,18
Bob,20
Matt,24
copy sql
copy auto_id_table (name, age) from '[s3-path]/auto_id_table.csv' CREDENTIALS 'aws_access_key_id=[your-aws-key-id];aws_secret_access_key=[your-aws-secret-key]' delimiter ',';
The advantage of this approach is that you don't need to run extra DDL statements. However, it doesn't work with existing tables that do not have an identity column, because an identity column cannot be added to an existing table. In that case the only way to delete duplicated records is to migrate all records, like this (same as user1045047's answer):
insert into temp_table (select distinct * from original_table);
drop table original_table;
alter table temp_table rename to original_table;
Mmm...
What about just never loading data into your master table directly?
Steps to avoid duplication:
begin transaction
bulk load into a temp staging table
delete from master table where rows = staging table rows
insert into master table from staging table (merge)
drop staging table
end transaction.
This is also quite fast, and it's the approach recommended by the Redshift docs; a sketch follows.
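A minimal sketch of that transaction, assuming a hypothetical master table with a business key column id (the S3 path and IAM role are placeholders):
begin transaction;
create temp table staging (like master);
copy staging from 's3://mybucket/batch.csv'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter ',';
delete from master
using staging
where master.id = staging.id;
insert into master
select * from staging;
drop table staging;
end transaction;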
My solution is to run a 'delete' command on the table before running 'copy'. In my use case, each time I need to copy the records of a daily snapshot to the Redshift table, so I can use the following 'delete' command to ensure duplicated records are removed, and then run the 'copy' command.
DELETE from t_data where snapshot_day = 'xxxx-xx-xx';
Currently there is no built-in way to remove duplicates in Redshift. Redshift doesn't support primary key/unique key constraints, and removing duplicates by row number (deleting rows with a row number greater than 1) is not an option either, because the delete operation on Redshift doesn't allow such complex statements (the concept of a row number is also not present in Redshift).
The best way to remove duplicates is to write a cron/Quartz job that selects all the distinct rows, puts them into a separate table, and then renames that table to your original table.
Insert into temp_originalTable (Select Distinct * from originalTable)
Drop table originalTable
Alter table temp_originalTable rename to originalTable
There's another solution to really avoid data duplication, although it's not as straightforward as removing duplicated data once it has been inserted.
The copy command has a manifest option to specify which files you want to copy:
copy customer
from 's3://mybucket/cust.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
You can build a Lambda that generates a new manifest file each time, before you run the copy command. That Lambda compares the files already copied with the new files that have arrived and creates a manifest containing only the new files, so you will never ingest the same file twice.
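For reference, a manifest is just a small JSON file listing the objects to load; a minimal example with placeholder bucket and file names:
{
  "entries": [
    {"url": "s3://mybucket/2020-01-01/part-0000.csv", "mandatory": true},
    {"url": "s3://mybucket/2020-01-01/part-0001.csv", "mandatory": true}
  ]
}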
We remove duplicates weekly, but you could also do this during the load transaction, as mentioned by #Kyle. This does require the existence of an auto-generated ID column as the eventual target of the delete:
DELETE FROM <your table> WHERE ID NOT IN (
SELECT ID FROM (
SELECT *, ROW_NUMBER() OVER
( PARTITION BY <your constraint columns> ORDER BY ID ASC ) DUPLICATES
FROM <your table>
) WHERE DUPLICATES=1
); COMMIT;
for example:
CREATE TABLE IF NOT EXISTS public.requests
(
id BIGINT NOT NULL DEFAULT "identity"(1, 0, '1,1'::text) ENCODE delta
,kaid VARCHAR(50) NOT NULL
,eid VARCHAR(50) NOT NULL ENCODE text32k
,aid VARCHAR(100) NOT NULL ENCODE text32k
,sid VARCHAR(100) NOT NULL ENCODE zstd
,rid VARCHAR(100) NOT NULL ENCODE zstd
,"ts" TIMESTAMP WITHOUT TIME ZONE NOT NULL ENCODE delta32k
,rtype VARCHAR(50) NOT NULL ENCODE bytedict
,stype VARCHAR(25) ENCODE bytedict
,sver VARCHAR(50) NOT NULL ENCODE text255
,dmacd INTEGER ENCODE delta32k
,reqnum INTEGER NOT NULL ENCODE delta32k
,did VARCHAR(255) ENCODE zstd
,"region" VARCHAR(10) ENCODE lzo
)
DISTSTYLE EVEN
SORTKEY (kaid, eid, aid, "ts")
;
. . .
DELETE FROM REQUESTS WHERE ID NOT IN (
SELECT ID FROM (
SELECT *, ROW_NUMBER() OVER
( PARTITION BY DID,RID,RTYPE,TS ORDER BY ID ASC ) DUPLICATES
FROM REQUESTS
) WHERE DUPLICATES=1
); COMMIT;