Error with REPLACE query using MySQL PDO

I am trying to execute the following MySQL query with PDO:
REPLACE INTO session SET id = :id, user_id = :user_id, data = :data, timestamp = :timestamp
and I get the following error:
[18-Sep-2014 11:48:10] Exception Message: Unhandled Exception.
SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?, data = ?, timestamp = ?' at line 1.
You can find the error back in the log.:
Query:REPLACE INTO session SET id = :id, user_id = :user_id, data = :data, timestamp = :timestamp
Params:
Array
(
[id] => sv9o264ciicsfd8porp1v0gl46
[user_id] => 0
[data] => version|s:8:"computer";linkedin|a:1:{s:5:"state";s:7:"Q7HXzKo";}github|a:1:{s:5:"state";s:7:"Q7HXzKo";}
[timestamp] => 1411030090
)
My session table structure is:
CREATE TABLE IF NOT EXISTS `session` (
`id` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
`user_id` mediumint(10) NOT NULL,
`data` text COLLATE utf8_unicode_ci NOT NULL,
`timestamp` int(40) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `session` (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
When I execute the query in phpMyAdmin, it works fine.
Can you help me understand what the problem is?

Well, I found it. A very silly mistake: I had to separate the comma from the parameter. I changed this:
:user_id,
to:
:user_id ,

You have forgotten an equal sign.
Change this part of your query:
user_id :user_id
to:
user_id = :user_id
I also think there may be a problem with using a keyword like timestamp for one of your columns; try changing it to something like tstamp. You could also quote the names in the query with backticks:
REPLACE INTO session SET `id` = :id, `user_id` = :user_id, `data` = :data, `timestamp` = :timestamp
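As a side note on the posted table definition: `id` is indexed three times (the PRIMARY KEY plus two identical UNIQUE KEYs). REPLACE matches rows on any unique index, and here all three cover the same column, so the extra two only add write overhead. A sketch for dropping them, assuming nothing else references those index names:

-- The PRIMARY KEY on `id` already guarantees uniqueness and is what
-- REPLACE will match against; the other two indexes are redundant.
ALTER TABLE `session`
  DROP INDEX `session`,
  DROP INDEX `id`;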

Related

WSO2 APIM Analytics SQL Server Installation

I am trying to run APIM Analytics but want to move away from the default H2 databases and use SQL Server instead.
Here are the mappings of the databases to SQL Server:
${sys:carbon.home}/wso2/dashboard/database/metrics ---> WSO2_APIM_ANALYTICS_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/WSO2_CARBON_DB --> WSO2_APIM_ANALYTICS_CARBON
${sys:carbon.home}/wso2/dashboard/database/MESSAGE_TRACING_DB --> WSO2_APIM_ANALYTICS_MESSAGE_TRACING
${sys:carbon.home}/wso2/worker/database/GEO_LOCATION_DATA --> WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
${sys:carbon.home}/wso2/worker/database/WSO2AM_MGW_ANALYTICS_DB --> WSO2_APIM_ANALYTICS_MICROGATEWAY_ANALYTICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SP_MGT_DB --> WSO2_APIM_ANALYTICS_SP_MGT_DB
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/DASHBOARD_DB --> WSO2_APIM_ANALYTICS_DASHBOARD
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SAMPLE_DB --> WSO2_APIM_ANALYTICS_SAMPLE
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/wso2_status_dashboard --> WSO2_APIM_ANALYTICS_STATUS_DASHBOARD
${sys:carbon.home}/wso2/worker/database/WSO2AM_STATS_DB --> WSO2_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/BUSINESS_RULES_DB --> WSO2_APIM_ANALYTICS_BUSINESS_RULES
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/PERMISSION_DB --> WSO2_APIM_ANALYTICS_PERMISSIONS
I updated deployment.yaml for all three runtimes (worker, manager, and dashboard) to point to the new data sources.
When I try to run worker.bat, I get the following error messages from Siddhi. It looks like the schema and data for the other databases are not populated the way they are for H2.
How can I get the schema for all the databases that H2 uses and populate them in SQL Server?
I also opened the H2 database but don't see anything in its public schema. Am I missing something?
Here are the errors I see when I start the worker node:
{org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9444
[2019-04-09 14:22:59,446] ERROR {org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer} - org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms org.wso2.carbon.stream.processor.core.internal.exception.SiddhiAppDeploymentException: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:105)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploy(StreamProcessorDeployer.java:306)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.lambda$deployArtifacts$0(DeploymentEngine.java:291)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.deployArtifacts(DeploymentEngine.java:282)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.sweep(RepositoryScanner.java:112)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.scan(RepositoryScanner.java:68)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.start(DeploymentEngine.java:121)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngineListenerComponent.onAllRequiredCapabilitiesAvailable(DeploymentEngineListenerComponent.java:216)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.lambda$notifySatisfiableComponents$7(StartupComponentManager.java:266)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.notifySatisfiableComponents(StartupComponentManager.java:252)
at org.wso2.carbon.kernel.internal.startupresolver.StartupOrderResolver$1.run(StartupOrderResolver.java:204)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.siddhi.core.util.SiddhiClassLoader.loadExtensionImplementation(SiddhiClassLoader.java:45)
at org.wso2.siddhi.core.util.parser.helper.DefinitionParserHelper.addTable(DefinitionParserHelper.java:203)
at org.wso2.siddhi.core.util.SiddhiAppRuntimeBuilder.defineTable(SiddhiAppRuntimeBuilder.java:125)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.defineTableDefinitions(SiddhiAppParser.java:320)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.parse(SiddhiAppParser.java:224)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:65)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:74)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorService.deploySiddhiApp(StreamProcessorService.java:100)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:93)
... 14 more
And many more like this, one for each alert type.
Any help with this is appreciated.
Thanks
In most cases the needed tables will be created automatically. The exceptions are the following data sources, which need to be created only if you are using the corresponding functionality:
1. Metrics DB
2. Microgateway Analytics DB
However, it seems the issue you are facing is that the server is not recognising the siddhi-store-rdbms.jar packed in the /lib folder. Please check whether it is present; it is packed by default.
Niveathika,
We are currently not using the microgateway functionality, so I don't know whether that database needs its schema populated. What I found is that I had to populate two schemas: WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA and WSO2_APIM_ANALYTICS_DASHBOARD. I found the schema for WSO2_APIM_ANALYTICS_DASHBOARD in the Stream Processor server.
Here are those two schemas for anyone like me struggling to migrate over to MSSQL.
WSO2_APIM_ANALYTICS_DASHBOARD
IF OBJECT_ID('[dbo].[DASHBOARD_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[DASHBOARD_RESOURCE]
GO
CREATE TABLE [dbo].[DASHBOARD_RESOURCE](
[ID] [int] IDENTITY(1,1) NOT NULL,
[URL] [varchar](100) NOT NULL,
[OWNER] [varchar](100) NOT NULL,
[NAME] [varchar](256) NOT NULL,
[DESCRIPTION] [varchar](1000) NULL,
[PARENT_ID] [int] NOT NULL,
[LANDING_PAGE] [varchar](100) NOT NULL,
[CONTENT] [varbinary](max) NULL,
CONSTRAINT [PK_DASHBOARD_RESOURCE] PRIMARY KEY CLUSTERED
(
[URL] ASC,
[OWNER] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
IF OBJECT_ID('[dbo].[WIDGET_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[WIDGET_RESOURCE]
GO
CREATE TABLE [dbo].[WIDGET_RESOURCE](
[WIDGET_ID] [varchar](255) NOT NULL,
[WIDGET_NAME] [varchar](255) NOT NULL,
[WIDGET_CONFIGS] [varbinary](8000) NULL,
CONSTRAINT [PK_WIDGET_RESOURCE] PRIMARY KEY CLUSTERED
(
[WIDGET_ID] ASC,
[WIDGET_NAME] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
CREATE TABLE BLOCKS (
network_cidr varchar(45) DEFAULT NULL,
network BIGINT DEFAULT NULL,
broadcast BIGINT DEFAULT NULL,
geoname_id BIGINT DEFAULT NULL,
registered_country_geoname_id BIGINT DEFAULT NULL,
represented_country_geoname_id BIGINT DEFAULT NULL,
is_anonymous_proxy SMALLINT DEFAULT '0',
is_satellite_provider SMALLINT DEFAULT '0',
postal_code VARCHAR(45) DEFAULT NULL,
latitude DECIMAL(10,4) DEFAULT NULL,
longitude DECIMAL(10,4) DEFAULT NULL,
network_blocks varchar(45) DEFAULT NULL);
CREATE INDEX idx_blocks_network ON BLOCKS (network);
CREATE INDEX idx_blocks_broadcast ON BLOCKS (broadcast);
CREATE INDEX idx_blocks_network_blocks ON BLOCKS (network_blocks);
CREATE TABLE LOCATION (
geoname_id BIGINT NOT NULL,
locale_code VARCHAR(10) DEFAULT NULL,
continent_code VARCHAR(10) DEFAULT NULL,
continent_name VARCHAR(20) DEFAULT NULL,
country_iso_code VARCHAR(10) DEFAULT NULL,
country_name VARCHAR(45) DEFAULT NULL,
subdivision_1_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_1_name VARCHAR(1000) DEFAULT NULL,
subdivision_2_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_2_name VARCHAR(1000) DEFAULT NULL,
city_name VARCHAR(1000) DEFAULT NULL,
metro_code BIGINT DEFAULT NULL,
time_zone VARCHAR(30) DEFAULT NULL,
PRIMARY KEY (geoname_id));
CREATE TABLE IP_LOCATION (
ip VARCHAR(100) NOT NULL,
country_name VARCHAR(200) DEFAULT NULL,
city_name VARCHAR(200) DEFAULT NULL,
PRIMARY KEY (ip)
);
Thanks

Java PreparedStatement cannot insert data into MySQL child table with foreign key

I am stuck with my parent-child MySQL tables. I'm using PreparedStatements in Java, and I can insert data into my parent table, but I cannot insert data into my child table with a second insert statement.
Here is a glimpse of my tables:
tbl_patient:
ID (primary key)
patientName (primary key)
address
contact
tbl_admission:
ID (primary key)
admitDate
patientName (foreign key referenced from tbl_patient)
Inserting a new patient worked, and the result was stored in my tbl_patient table. My problem started when I executed my PreparedStatement for tbl_admission: it cannot add data to tbl_admission, and instead I get the following error:
Cannot add or update a child row: a foreign key constraint fails (hms_mdh/admission, CONSTRAINT FK_admission_1 FOREIGN KEY (ID, patientName) REFERENCES tbl_patient (ID, patientName))
I really don't know what's happening here. Can I get any help? Thanks.
This is the SQL for my child table, the admission table:
CREATE TABLE admission (
ID int(11) NOT NULL auto_increment,
admitID varchar(45) NOT NULL default '',
admitDate varchar(45) NOT NULL default '',
patientID varchar(45) NOT NULL default '',
entrance varchar(45) NOT NULL default '',
doctor varchar(45) NOT NULL default '',
initialDiagnosis varchar(45) NOT NULL default '',
recommend varchar(100) NOT NULL default '',
patientName varchar(45) NOT NULL default '',
PRIMARY KEY (ID),
KEY FK_admission_1 (ID,patientName),
CONSTRAINT FK_admission_1 FOREIGN KEY (ID, patientName) REFERENCES tbl_patient (ID, patientName)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 11264 kB; (ID patientName) REFER `hms_mdh/t'
And this is the parent table, the patients:
CREATE TABLE tbl_patient (
ID int(11) NOT NULL auto_increment,
patientID varchar(45) NOT NULL default '',
gender varchar(45) NOT NULL default '',
birthday varchar(45) NOT NULL default '',
chiefcomplain varchar(100) NOT NULL default '',
entrance varchar(45) NOT NULL default '',
patientName varchar(45) NOT NULL default '',
PRIMARY KEY (ID,patientName)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
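For reference, the constraint in the error requires the pair (ID, patientName) to already exist in tbl_patient before the admission row is inserted. A minimal sketch of an insert order that satisfies it, with all values hypothetical:

-- Insert the parent first; tbl_patient.ID is auto-generated.
INSERT INTO tbl_patient (patientID, gender, birthday, chiefcomplain, entrance, patientName)
VALUES ('P-001', 'F', '1990-01-01', 'headache', 'ER', 'Jane Doe');

-- The child row must reuse the generated parent ID and the same
-- patientName, or the composite FK (ID, patientName) fails.
INSERT INTO admission (ID, admitID, admitDate, patientID, entrance, doctor,
                       initialDiagnosis, recommend, patientName)
VALUES (LAST_INSERT_ID(), 'A-001', '2014-09-18', 'P-001', 'ER', 'Dr. Smith',
        'migraine', 'rest', 'Jane Doe');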

Restricting a database in PostgreSQL

I am using Django 1.9 with PostgreSQL. This is the data model for my model "Feature":
class Feature(models.Model):
    image_component = models.ForeignKey('Image_Component', on_delete=models.CASCADE,)
    feature_value = HStoreField()

    def save(self, *args, **kwargs):
        if Feature.objects.filter(feature_value__has_keys=['size', 'quality', 'format']):
            super(Feature, self).save(*args, **kwargs)
        else:
            print("Incorrect key entered")
I am imposing a restriction on feature_value such that the only keys allowed in the HStore are size, format, and quality. I can enforce this when updating the database through Django-Admin, but not when updating the database directly with pgAdmin3; i.e., I want to impose the same restriction at the database level. How can I do that? Any suggestions?
You need to ALTER your Feature table and add a constraint on the feature_value column with a query like this:
ALTER TABLE your_feature_table
ADD CONSTRAINT restricted_keys
CHECK (
-- Check that 'feature_value' contains all specified keys
feature_value::hstore ?& ARRAY['size', 'format', 'quality']
AND
-- and that number of keys is three
array_length(akeys(feature_value), 1) = 3
);
This ensures that feature_value in every row contains exactly three keys (size, format, and quality) and disallows empty data.
Note that, before applying this query, you need to remove all invalid data from the table, or you will receive an error:
ERROR: check constraint "restricted_keys" is violated by some row
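A minimal sketch for clearing the offending rows first, using the same placeholder table name:

-- Delete rows whose keys are not exactly {size, format, quality};
-- IS DISTINCT FROM also catches the NULL array_length of an empty value.
DELETE FROM your_feature_table
WHERE NOT (feature_value::hstore ?& ARRAY['size', 'format', 'quality'])
   OR array_length(akeys(feature_value), 1) IS DISTINCT FROM 3;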
You could execute this query in a DB console, but since you're using Django it would be more appropriate to create a migration and apply the query using RunSQL: create an empty migration, pass the query above to migrations.RunSQL, and pass the following query to the reverse_sql param so the constraint is removed when the migration is unapplied:
ALTER TABLE your_feature_table
DROP CONSTRAINT restricted_keys;
After applying:
sql> INSERT INTO your_feature_table (feature_value) VALUES ('size => 124, quality => great, format => A4')
1 row affected in 18ms
sql> INSERT INTO your_feature_table (feature_value) VALUES ('format => A4')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("format"=>"A4").
sql> INSERT INTO your_feature_table (feature_value) VALUES ('')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ().
sql> INSERT INTO your_feature_table (feature_value) VALUES ('a => 124, b => great, c => A4')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("a"=>"124", "b"=>"great", "c"=>"A4").
sql> INSERT INTO your_feature_table (feature_value) VALUES ('size => 124, quality => great, format => A4, incorrect_key => error')
ERROR: new row for relation "your_feature_table" violates check constraint "restricted_keys"
Detail: Failing row contains ("size"=>"124", "format"=>"A4", "quality"=>"great", "incorrect_ke...).

Delete a row created by Django using phpPgAdmin?

Using Django, I added a new entry to my table. Now I want to delete it using phpPgAdmin (PostgreSQL), but I get a "No unique identifier for this row" error. What is the problem?
Django automatically adds an auto-incrementing primary key, so I cannot figure out what the issue is.
I read this post, but it did not help. If you look at the image carefully, you will see that the primary key column is labelled id, not pk as it is referred to in Django.
EDIT: No primary key is shown on the table, but this is what Django executes:
python manage.py sql auth
CREATE TABLE "auth_user" (
"id" serial NOT NULL PRIMARY KEY,
"password" varchar(128) NOT NULL,
"last_login" timestamp with time zone NOT NULL,
"is_superuser" boolean NOT NULL,
"username" varchar(30) NOT NULL UNIQUE,
"first_name" varchar(30) NOT NULL,
"last_name" varchar(30) NOT NULL,
"email" varchar(75) NOT NULL,
"is_staff" boolean NOT NULL,
"is_active" boolean NOT NULL,
"date_joined" timestamp with time zone NOT NULL
)
;
EDIT: A screenshot from phpPgAdmin, showing id as the primary key
I think this is a bug with phpPgAdmin.
I experienced a similar problem and went directly into psql (using the command ./manage.py dbshell).
I tried deleting the row in question, and received a more helpful error message than the one from phpPgAdmin. (In my case, that the row was being referenced by another table.)
I deleted the referencing row in the other table, and was then able to delete the row in question.
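For example, once psql's error names the referencing table, the cleanup is two deletes. A hypothetical sketch, assuming the blocking reference is an admin log entry and the row's id is 42:

-- Remove the referencing row first, then the row itself.
DELETE FROM django_admin_log WHERE user_id = 42;
DELETE FROM auth_user WHERE id = 42;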

One-to-many mapping in Zend Framework 2 with Doctrine

I am trying to make a page where I handle my invoices. I have the invoice data in one table and the invoice rows in another. The tables look as follows:
CREATE TABLE IF NOT EXISTS `Invoices` (
`I_Id` int(10) NOT NULL AUTO_INCREMENT,
`I_Number` int(4) NOT NULL,
`I_ClientId` int(10) NOT NULL,
`I_ExtraText` text NOT NULL,
PRIMARY KEY (`I_Id`)
) ENGINE=InnoDB
CREATE TABLE IF NOT EXISTS `InvoiceRows` (
`IR_Id` int(10) NOT NULL AUTO_INCREMENT,
`IR_InvoiceId` int(10) NOT NULL,
`IR_Price` int(10) NOT NULL,
`IR_Vat` smallint(2) unsigned NOT NULL,
`IR_Quantity` int(10) NOT NULL,
`IR_Text` varchar(255) NOT NULL,
PRIMARY KEY (`IR_Id`),
KEY `IR_InvoiceId` (`IR_InvoiceId`)
) ENGINE=InnoDB
Here is my mapping:
class Invoice {
    /**
     * @ORM\OneToMany(targetEntity="Row", mappedBy="invoice", cascade={"persist"})
     */
    protected $rows;
}

class Row {
    /**
     * @ORM\ManyToOne(targetEntity="Invoice", inversedBy="rows", cascade={"persist"})
     * @ORM\JoinColumn(name="IR_InvoiceId", referencedColumnName="I_Id")
     */
    private $invoice;
}
I have been trying to follow the example in the Doctrine docs on how to set up a one-to-many, bidirectional mapping. This is then connected to Zend Framework 2 form collections. Pulling data works very well; I get all the rows of each invoice.
My problem is when I want to write back to the database and save my changes. When I try to save, I get the following error:
An exception occurred while executing 'INSERT INTO
MVIT_ADM__InvoiceRows (IR_InvoiceId, IR_Price, IR_Vat, IR_Quantity,
IR_Text) VALUES (?, ?, ?, ?, ?)' with params
{"1":null,"2":320,"3":0,"4":1,"5":"Learning your dog to sit"}:
SQLSTATE[23000]: Integrity constraint violation: 1048 Column
'IR_InvoiceId' cannot be null
What have I done wrong? When I check the data from the POST, the value is not empty.
Edit: The full source can be found on GitHub.
It seems IR_InvoiceId is null, but it expects the Invoices ID (I_Id). Make sure that when you insert data into the InvoiceRows table, the I_Id value of the Invoices row is passed as IR_InvoiceId, per the table relation you mention. In a bidirectional Doctrine mapping this means setting the owning side, i.e. each Row's $invoice property, before flushing, as sketched below.
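A sketch of the statement a correct save would then produce, assuming the parent invoice received I_Id = 1 and using the table name from the posted DDL:

-- IR_InvoiceId is filled from the owning side of the association.
INSERT INTO InvoiceRows (IR_InvoiceId, IR_Price, IR_Vat, IR_Quantity, IR_Text)
VALUES (1, 320, 0, 1, 'Learning your dog to sit');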
Best Of Luck!
Saran