WSO2 APIM Analytics SQL Server Installation - wso2

I am trying to run APIM Analytics but want to move away from the default H2 databases and use SQL Server instead.
Here are the mappings of the H2 databases to databases in SQL Server (a minimal CREATE DATABASE sketch follows the list):
${sys:carbon.home}/wso2/dashboard/database/metrics --> WSO2_APIM_ANALYTICS_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/WSO2_CARBON_DB --> WSO2_APIM_ANALYTICS_CARBON
${sys:carbon.home}/wso2/dashboard/database/MESSAGE_TRACING_DB --> WSO2_APIM_ANALYTICS_MESSAGE_TRACING
${sys:carbon.home}/wso2/worker/database/GEO_LOCATION_DATA --> WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
${sys:carbon.home}/wso2/worker/database/WSO2AM_MGW_ANALYTICS_DB --> WSO2_APIM_ANALYTICS_MICROGATEWAY_ANALYTICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SP_MGT_DB --> WSO2_APIM_ANALYTICS_SP_MGT_DB
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/DASHBOARD_DB --> WSO2_APIM_ANALYTICS_DASHBOARD
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SAMPLE_DB --> WSO2_APIM_ANALYTICS_SAMPLE
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/wso2_status_dashboard --> WSO2_APIM_ANALYTICS_STATUS_DASHBOARD
${sys:carbon.home}/wso2/worker/database/WSO2AM_STATS_DB --> WSO2_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/BUSINESS_RULES_DB --> WSO2_APIM_ANALYTICS_BUSINESS_RULES
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/PERMISSION_DB --> WSO2_APIM_ANALYTICS_PERMISSIONS
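(For reference, the target databases can be created up front with plain T-SQL. The sketch below only uses the database names from the mapping above; collation, file, and other options are left at server defaults and are assumptions rather than requirements.)
-- Minimal sketch: create the SQL Server databases named in the mapping above.
-- All options fall back to server defaults; adjust collation/files as needed.
CREATE DATABASE WSO2_APIM_ANALYTICS_METRICS;
CREATE DATABASE WSO2_APIM_ANALYTICS_CARBON;
CREATE DATABASE WSO2_APIM_ANALYTICS_MESSAGE_TRACING;
CREATE DATABASE WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA;
CREATE DATABASE WSO2_APIM_ANALYTICS_MICROGATEWAY_ANALYTICS;
CREATE DATABASE WSO2_APIM_ANALYTICS_SP_MGT_DB;
CREATE DATABASE WSO2_APIM_ANALYTICS_DASHBOARD;
CREATE DATABASE WSO2_APIM_ANALYTICS_SAMPLE;
CREATE DATABASE WSO2_APIM_ANALYTICS_STATUS_DASHBOARD;
CREATE DATABASE WSO2_METRICS;
CREATE DATABASE WSO2_APIM_ANALYTICS_BUSINESS_RULES;
CREATE DATABASE WSO2_APIM_ANALYTICS_PERMISSIONS;
GO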
I updated deployment.yaml for all three runtimes (worker, manager, and dashboard) to point to the new data sources.
When I try to run worker.bat, I get the following Siddhi error messages. It looks like the schema and data for the other databases are not populated as they are for H2.
How can I get the schema for all the databases that H2 uses and populate them in SQL Server?
I also opened the H2 database but don't see anything in its PUBLIC schema. Am I missing something?
Here are the errors I see when I start the worker node:
{org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9444
[2019-04-09 14:22:59,446] ERROR {org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer} - org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms org.wso2.carbon.stream.processor.core.internal.exception.SiddhiAppDeploymentException: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:105)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploy(StreamProcessorDeployer.java:306)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.lambda$deployArtifacts$0(DeploymentEngine.java:291)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.deployArtifacts(DeploymentEngine.java:282)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.sweep(RepositoryScanner.java:112)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.scan(RepositoryScanner.java:68)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.start(DeploymentEngine.java:121)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngineListenerComponent.onAllRequiredCapabilitiesAvailable(DeploymentEngineListenerComponent.java:216)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.lambda$notifySatisfiableComponents$7(StartupComponentManager.java:266)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.notifySatisfiableComponents(StartupComponentManager.java:252)
at org.wso2.carbon.kernel.internal.startupresolver.StartupOrderResolver$1.run(StartupOrderResolver.java:204)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.siddhi.core.util.SiddhiClassLoader.loadExtensionImplementation(SiddhiClassLoader.java:45)
at org.wso2.siddhi.core.util.parser.helper.DefinitionParserHelper.addTable(DefinitionParserHelper.java:203)
at org.wso2.siddhi.core.util.SiddhiAppRuntimeBuilder.defineTable(SiddhiAppRuntimeBuilder.java:125)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.defineTableDefinitions(SiddhiAppParser.java:320)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.parse(SiddhiAppParser.java:224)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:65)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:74)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorService.deploySiddhiApp(StreamProcessorService.java:100)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:93)
... 14 more
There are many more errors like this, one for each alert type.
Any help in this regard is appreciated.
Thanks

The tables needed will be created automatically in most cases. The exceptions are the following data sources, which need to be created manually, and only if you are using the specific functionality:
1. Metrics DB
2. Microgateway analytics DB
However, the issue you are facing seems to be that the server is not recognising the siddhi-store-rdbms jar packed in the /lib folder. Please check whether it is available; it is packed by default.
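If the jar is in place and the server starts cleanly, the Siddhi-managed store tables should be created automatically in whatever database the APIM_ANALYTICS_DB datasource points to. A quick way to confirm from SQL Server (ApimAllAlert is taken from the Siddhi app in the error above; the LIKE pattern is only a guess at the naming and may match other tables):
-- Run against the database behind the APIM_ANALYTICS_DB datasource.
-- ApimAllAlert comes from the failing Siddhi app; the LIKE filter is just a convenience.
SELECT name
FROM sys.tables
WHERE name = 'ApimAllAlert' OR name LIKE 'Apim%';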

Niveathika,
We are currently not using the microgateway functionality, so I don't know whether I need to populate that database with its schema. What I did find is that I had to populate the schemas of two databases: WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA and WSO2_APIM_ANALYTICS_DASHBOARD. I found the schema for WSO2_APIM_ANALYTICS_DASHBOARD in the Stream Processor server.
Here are those two schemas for anyone else struggling to migrate over to MSSQL.
WSO2_APIM_ANALYTICS_DASHBOARD
IF OBJECT_ID('[dbo].[DASHBOARD_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[DASHBOARD_RESOURCE]
GO
CREATE TABLE [dbo].[DASHBOARD_RESOURCE](
[ID] [int] IDENTITY(1,1) NOT NULL,
[URL] [varchar](100) NOT NULL,
[OWNER] [varchar](100) NOT NULL,
[NAME] [varchar](256) NOT NULL,
[DESCRIPTION] [varchar](1000) NULL,
[PARENT_ID] [int] NOT NULL,
[LANDING_PAGE] [varchar](100) NOT NULL,
[CONTENT] [varbinary](max) NULL,
CONSTRAINT [PK_DASHBOARD_RESOURCE] PRIMARY KEY CLUSTERED
(
[URL] ASC,
[OWNER] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
IF OBJECT_ID('[dbo].[WIDGET_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[WIDGET_RESOURCE]
GO
CREATE TABLE [dbo].[WIDGET_RESOURCE](
[WIDGET_ID] [varchar](255) NOT NULL,
[WIDGET_NAME] [varchar](255) NOT NULL,
[WIDGET_CONFIGS] [varbinary](8000) NULL,
CONSTRAINT [PK_WIDGET_RESOURCE] PRIMARY KEY CLUSTERED
(
[WIDGET_ID] ASC,
[WIDGET_NAME] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
CREATE TABLE BLOCKS (
network_cidr varchar(45) DEFAULT NULL,
network BIGINT DEFAULT NULL,
broadcast BIGINT DEFAULT NULL,
geoname_id BIGINT DEFAULT NULL,
registered_country_geoname_id BIGINT DEFAULT NULL,
represented_country_geoname_id BIGINT DEFAULT NULL,
is_anonymous_proxy SMALLINT DEFAULT '0',
is_satellite_provider SMALLINT DEFAULT '0',
postal_code VARCHAR(45) DEFAULT NULL,
latitude DECIMAL(10,4) DEFAULT NULL,
longitude DECIMAL(10,4) DEFAULT NULL,
network_blocks varchar(45) DEFAULT NULL);
CREATE INDEX idx_blocks_network ON BLOCKS (network);
CREATE INDEX idx_blocks_broadcast ON BLOCKS (broadcast);
CREATE INDEX idx_blocks_network_blocks ON BLOCKS (network_blocks);
CREATE TABLE LOCATION (
geoname_id BIGINT NOT NULL,
locale_code VARCHAR(10) DEFAULT NULL,
continent_code VARCHAR(10) DEFAULT NULL,
continent_name VARCHAR(20) DEFAULT NULL,
country_iso_code VARCHAR(10) DEFAULT NULL,
country_name VARCHAR(45) DEFAULT NULL,
subdivision_1_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_1_name VARCHAR(1000) DEFAULT NULL,
subdivision_2_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_2_name VARCHAR(1000) DEFAULT NULL,
city_name VARCHAR(1000) DEFAULT NULL,
metro_code BIGINT DEFAULT NULL,
time_zone VARCHAR(30) DEFAULT NULL,
PRIMARY KEY (geoname_id));
CREATE TABLE IP_LOCATION (
ip VARCHAR(100) NOT NULL,
country_name VARCHAR(200) DEFAULT NULL,
city_name VARCHAR(200) DEFAULT NULL,
PRIMARY KEY (ip)
);
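(The schema above can be smoke-tested manually once the geo-location data has been loaded. The query below is not necessarily what the WSO2 geo-location extension runs internally; it is just a hedged sanity check that BLOCKS and LOCATION join up as expected, with a placeholder integer-encoded IP.)
-- Sanity check only: resolve an integer-encoded IPv4 address against the loaded data.
-- The join on geoname_id mirrors the schema above; @ip is a placeholder value.
DECLARE @ip BIGINT = 16777217;  -- integer form of 1.0.0.1, for illustration only
SELECT TOP 1 l.country_name, l.city_name
FROM BLOCKS b
JOIN LOCATION l ON l.geoname_id = b.geoname_id
WHERE @ip BETWEEN b.network AND b.broadcast
ORDER BY b.network DESC;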
Thanks

Related

How do I create a composite key using LoopBack 4?

CREATE TABLE dbo.Users (
id int NOT NULL IDENTITY(1,1),
uid int NOT NULL,
username nvarchar(65) NULL,
password varchar(100) NULL,
firstname nvarchar(50) NULL,
lastname nvarchar(50) NULL
);
ALTER TABLE dbo.Users ADD CONSTRAINT PK_Users PRIMARY KEY (id, uid);
This is my SQL schema, and I want to create a data model for it using LoopBack 4.
Unfortunately, it doesn't seem that this feature is currently supported. An open issue tracking this feature can be found on their GitHub page here.

Regular expression to extract certain columns from SQL "CREATE TABLE" statements

Let's say I have a MySQL dump which creates a lot of tables.
Example:
CREATE TABLE `my_table` (
`id` bigint(20) NOT NULL,
`REVTYPE` tinyint(4) DEFAULT NULL,
`some_other_column` varchar(255)
);
What would be a valid regular expression to find the following:
All lines which start with "CREATE TABLE" and which contain "my_" in the table name
Then extract the line containing "tinyint"
So the result would look like:
CREATE TABLE `my_table` (
`REVTYPE` tinyint(4) DEFAULT NULL
This regex seems to work:
^((CREATE.*my_.*\n)|(\s+.*tinyint.*\n)|(\s+.*(?!tinyint)\n))
CREATE TABLE `my_table` (
`id` bigint(20) NOT NULL,
`id` bigint(22) NOT NULL,
`REVTYPE` tinyint(4) DEFAULT NULL,
`id` bigint(20) NOT NULL,
`REVTYPE` tinyint(5) DEFAULT NULL,
`some_other_column` varchar(255)
);
becomes (replace with $2$3):
CREATE TABLE `my_table` (
`REVTYPE` tinyint(4) DEFAULT NULL,
`REVTYPE` tinyint(5) DEFAULT NULL,
);
[I assume the OP wants the ); at the end; advise if not true.]
See the regex101 link:

Draw.IO: Foreign Key on SQL Plugin

Does Draw.IO support foreign key relationships?
I tested it with a lot of different SQL samples (the w3schools SQL FOREIGN KEY examples), but none worked.
Due to budget issues, I was only able to implement it to work with MySQL, SQL Server, and SQL Server generated scripts.
The pull request is pending at https://github.com/jgraph/drawio/pull/233.
MySQL Example:
CREATE TABLE Persons
(
PersonID int NOT NULL,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255),
PRIMARY KEY (PersonID)
);
CREATE TABLE Orders (
OrderID int NOT NULL,
OrderNumber int NOT NULL,
PersonID int,
PRIMARY KEY (OrderID),
FOREIGN KEY (PersonID) REFERENCES Persons(PersonID)
);
SQL Server Example:
CREATE TABLE Persons
(
PersonID int NOT NULL,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255),
PRIMARY KEY (PersonID)
);
CREATE TABLE Orders (
OrderID int NOT NULL,
PersonID int,
PRIMARY KEY (OrderID),
CONSTRAINT FK_PersonOrder FOREIGN KEY (PersonID)
REFERENCES Persons(PersonID)
);
SQL Server Generated Script Example:
CREATE TABLE [dbo].[aspnet_Applications](
[ApplicationName] [nvarchar](256) NOT NULL,
[LoweredApplicationName] [nvarchar](256) NOT NULL,
[ApplicationId] [uniqueidentifier] NOT NULL,
[Description] [nvarchar](256) NULL,
PRIMARY KEY NONCLUSTERED
(
[ApplicationId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
UNIQUE NONCLUSTERED
(
[LoweredApplicationName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
UNIQUE NONCLUSTERED
(
[ApplicationName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[aspnet_Users](
[ApplicationId] [uniqueidentifier] NOT NULL,
[UserId] [uniqueidentifier] NOT NULL,
[UserName] [nvarchar](256) NOT NULL,
[LoweredUserName] [nvarchar](256) NOT NULL,
[MobileAlias] [nvarchar](16) NULL,
[IsAnonymous] [bit] NOT NULL,
[LastActivityDate] [datetime] NOT NULL,
PRIMARY KEY NONCLUSTERED
(
[UserId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[aspnet_Users] WITH CHECK ADD FOREIGN KEY([ApplicationId])
REFERENCES [dbo].[aspnet_Applications] ([ApplicationId])
GO
The PR was merged; you can access this plugin via Arrange > Insert > From SQL at https://www.draw.io/?splash=0&p=sql
I rewrote the SQL parser by @brunomartinspro to be more forgiving about spaces and case sensitivity, and to support more SQL database types. Foreign key relationships also now work and are drawn properly; just waiting for the PR to be merged:
https://github.com/jgraph/drawio/pull/3091

Delete a row that was created by Django using phpPgAdmin?

Using Django, I added a new entry to my table. Now I want to delete it using phpPgAdmin (PostgreSQL), but I get a "No unique identifier for this row" error. What is the problem?
Django automatically adds an auto-incrementing primary key, so I cannot figure out what the issue is.
I read this post, but it did not help. If you look at the screenshot carefully, you will see that the primary key column is labelled id, not pk as it should be in Django.
EDIT: No primary key is shown on the table, but this is what Django executes:
python manage.py sql auth
CREATE TABLE "auth_user" (
"id" serial NOT NULL PRIMARY KEY,
"password" varchar(128) NOT NULL,
"last_login" timestamp with time zone NOT NULL,
"is_superuser" boolean NOT NULL,
"username" varchar(30) NOT NULL UNIQUE,
"first_name" varchar(30) NOT NULL,
"last_name" varchar(30) NOT NULL,
"email" varchar(75) NOT NULL,
"is_staff" boolean NOT NULL,
"is_active" boolean NOT NULL,
"date_joined" timestamp with time zone NOT NULL
)
;
EDIT: A screenshot from phpPgAdmin, showing id as the primary key.
I think this is a bug with phpPgAdmin.
I experienced a similar problem and went directly into psql (using the command ./manage.py dbshell).
I tried deleting the row in question, and received a more helpful error message than the one from phpPgAdmin. (In my case, that the row was being referenced by another table.)
I deleted the row referenced by the other table, and was then able to delete the row in question.
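For anyone taking the same route, the delete itself is plain SQL once you are inside psql. A minimal sketch, assuming the row lives in the auth_user table shown in the question and that you know its id (42 is a placeholder, not a real value):
-- Run inside psql (reachable via ./manage.py dbshell); 42 is a placeholder id.
DELETE FROM auth_user WHERE id = 42;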

One-to-many mapping in Zend Framework 2 with Doctrine

I am trying to make a page where I handle my invoices. I have the invoice data in one table and the invoice rows in another table. The tables look as follows:
CREATE TABLE IF NOT EXISTS `Invoices` (
`I_Id` int(10) NOT NULL AUTO_INCREMENT,
`I_Number` int(4) NOT NULL,
`I_ClientId` int(10) NOT NULL,
`I_ExtraText` text NOT NULL,
PRIMARY KEY (`I_Id`)
) ENGINE=InnoDB
CREATE TABLE IF NOT EXISTS `InvoiceRows` (
`IR_Id` int(10) NOT NULL AUTO_INCREMENT,
`IR_InvoiceId` int(10) NOT NULL,
`IR_Price` int(10) NOT NULL,
`IR_Vat` smallint(2) unsigned NOT NULL,
`IR_Quantity` int(10) NOT NULL,
`IR_Text` varchar(255) NOT NULL,
PRIMARY KEY (`IR_Id`),
KEY `IR_InvoiceId` (`IR_InvoiceId`)
) ENGINE=InnoDB
Here is my mapping:
class Invoice {
    /**
     * @ORM\OneToMany(targetEntity="Row", mappedBy="invoice", cascade={"persist"})
     */
    protected $rows;
}
class Row {
    /**
     * @ORM\ManyToOne(targetEntity="Invoice", inversedBy="rows", cascade={"persist"})
     * @ORM\JoinColumn(name="IR_InvoiceId", referencedColumnName="I_Id")
     */
    private $invoice;
}
I have been trying to follow the example in the Doctrine docs on how to set up a one-to-many, bidirectional mapping. This is then connected to Zend Framework 2 form collections. Pulling data works very well: I get all the rows of each invoice.
My problem is when I want to write back to the database and save my changes. When I try to save, I get the following error:
An exception occurred while executing 'INSERT INTO
MVIT_ADM__InvoiceRows (IR_InvoiceId, IR_Price, IR_Vat, IR_Quantity,
IR_Text) VALUES (?, ?, ?, ?, ?)' with params
{"1":null,"2":320,"3":0,"4":1,"5":"Learning your dog to sit"}:
SQLSTATE[23000]: Integrity constraint violation: 1048 Column
'IR_InvoiceId' cannot be null
What have I done wrong? When I check the POSTed data, the value is not empty.
Edit: The full source can be found on GitHub.
It seems IR_InvoiceId is null; it expects the Invoices id (I_Id) value. Make sure that when you insert data into the InvoiceRows table you pass the Invoices I_Id value as IR_InvoiceId, as per the table relation you mentioned.
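In other words, once the parent invoice is set on the owning side, the generated statement should carry the real I_Id instead of null. A rough sketch of what the failing insert from the error should look like when it succeeds (7 stands in for the parent invoice's I_Id and is only a placeholder):
-- Same statement as in the error above, but with IR_InvoiceId populated.
-- 7 is a placeholder for the actual Invoices.I_Id value.
INSERT INTO MVIT_ADM__InvoiceRows (IR_InvoiceId, IR_Price, IR_Vat, IR_Quantity, IR_Text)
VALUES (7, 320, 0, 1, 'Learning your dog to sit');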
Best Of Luck!
Saran