Let's say I have a MySQL dump which creates a lot of tables.
Example:
CREATE TABLE `my_table` (
`id` bigint(20) NOT NULL,
`REVTYPE` tinyint(4) DEFAULT NULL,
`some_other_column` varchar(255)
);
What would be a valid regular expression to find the following:
All lines that start with "CREATE TABLE" and contain "my_" in the table name
Then extract the line containing "tinyint"
So the result would look like:
CREATE TABLE `my_table` (
`REVTYPE` tinyint(4) DEFAULT NULL,
This regex seems to work (group 2 captures a CREATE line, group 3 a tinyint line, and the final alternative consumes every other indented line so the replacement can drop them):
^((CREATE.*my_.*\n)|(\s+.*tinyint.*\n)|(\s+.*(?!tinyint)\n))
CREATE TABLE `my_table` (
`id` bigint(20) NOT NULL,
`id` bigint(22) NOT NULL,
`REVTYPE` tinyint(4) DEFAULT NULL,
`id` bigint(20) NOT NULL,
`REVTYPE` tinyint(5) DEFAULT NULL,
`some_other_column` varchar(255)
);
becomes (replace with $2$3):
CREATE TABLE `my_table` (
`REVTYPE` tinyint(4) DEFAULT NULL,
`REVTYPE` tinyint(5) DEFAULT NULL,
);
[I assume the OP wants the ); at the end; advise if not true.]
See regex101 link:
I think I'm using the right syntax for MariaDB, but my foreign key constraint is not being created.
Here's the create table DDL:
CREATE TABLE items (
id INT auto_increment primary key,
description TEXT NOT NULL
);
CREATE TABLE item_events (
id INT NOT NULL,
calendar_event_guid TEXT(255) NOT NULL,
foreign key item_events_id_fk (id) REFERENCES items (id)
);
Then, when I ask MariaDB to show me what I created, I get this:
+-------------+-----------------
| Table       | Create Table
+-------------+-----------------
| item_events | CREATE TABLE `item_events` (
`id` int(11) NOT NULL,
`calendar_event_guid` text COLLATE utf8_unicode_ci NOT NULL,
KEY `item_events_id_fk` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
+-------------+-----------------
or, just showing the DDL:
CREATE TABLE `item_events` (
`id` int(11) NOT NULL,
`calendar_event_guid` text COLLATE utf8_unicode_ci NOT NULL,
KEY `item_events_id_fk` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Notice that it only created a "KEY", not a foreign key. The items table is correctly created.
Surely, this is really simple :)
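A likely explanation, judging from the ENGINE=MyISAM in the output above: MyISAM parses FOREIGN KEY clauses but silently ignores them, keeping only the plain KEY. A minimal sketch of the fix, assuming InnoDB is available on the server:
-- Both tables must be InnoDB for the constraint to be created and enforced;
-- MyISAM silently drops FOREIGN KEY definitions, leaving only a KEY.
CREATE TABLE items (
id INT AUTO_INCREMENT PRIMARY KEY,
description TEXT NOT NULL
) ENGINE=InnoDB;
CREATE TABLE item_events (
id INT NOT NULL,
calendar_event_guid TEXT(255) NOT NULL,
FOREIGN KEY item_events_id_fk (id) REFERENCES items (id)
) ENGINE=InnoDB;
Re-running SHOW CREATE TABLE item_events should then list a CONSTRAINT ... FOREIGN KEY line instead of only a KEY.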
I am trying to run the APIM Analytics but want to move away from the default H2 databases and use SQL Server instead.
Here are the mappings of the database in SQL Server:
${sys:carbon.home}/wso2/dashboard/database/metrics ---> WSO2_APIM_ANALYTICS_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/WSO2_CARBON_DB --> WSO2_APIM_ANALYTICS_CARBON
${sys:carbon.home}/wso2/dashboard/database/MESSAGE_TRACING_DB --> WSO2_APIM_ANALYTICS_MESSAGE_TRACING
${sys:carbon.home}/wso2/worker/database/GEO_LOCATION_DATA --> WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
${sys:carbon.home}/wso2/worker/database/WSO2AM_MGW_ANALYTICS_DB --> WSO2_APIM_ANALYTICS_MICROGATEWAY_ANALYTICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SP_MGT_DB --> WSO2_APIM_ANALYTICS_SP_MGT_DB
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/DASHBOARD_DB --> WSO2_APIM_ANALYTICS_DASHBOARD
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/SAMPLE_DB --> WSO2_APIM_ANALYTICS_SAMPLE
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/wso2_status_dashboard --> WSO2_APIM_ANALYTICS_STATUS_DASHBOARD
${sys:carbon.home}/wso2/worker/database/WSO2AM_STATS_DB --> WSO2_METRICS
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/BUSINESS_RULES_DB --> WSO2_APIM_ANALYTICS_BUSINESS_RULES
${sys:carbon.home}/wso2/${sys:wso2.runtime}/database/PERMISSION_DB --> WSO2_APIM_ANALYTICS_PERMISSIONS
I updated deployment.yaml for all three (worker, manager, and dashboard) to point to the new data sources.
When I try to run worker.bat, I get the following error messages from Siddhi. It looks like the schema and data for the other databases are not populated as they are for H2.
How can I get the schema for all the databases that H2 uses and populate them in SQL Server?
I also opened the H2 database but don't see anything in its PUBLIC schema. Am I missing something?
Here are the errors I see when I start the worker node:
{org.wso2.transport.http.netty.listener.ServerConnectorBootstrap$HTTPServerConnector} - HTTP(S) Interface starting on host 0.0.0.0 and port 9444
[2019-04-09 14:22:59,446] ERROR {org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer} - org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms org.wso2.carbon.stream.processor.core.internal.exception.SiddhiAppDeploymentException: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:105)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploy(StreamProcessorDeployer.java:306)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.lambda$deployArtifacts$0(DeploymentEngine.java:291)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.deployArtifacts(DeploymentEngine.java:282)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.sweep(RepositoryScanner.java:112)
at org.wso2.carbon.deployment.engine.internal.RepositoryScanner.scan(RepositoryScanner.java:68)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngine.start(DeploymentEngine.java:121)
at org.wso2.carbon.deployment.engine.internal.DeploymentEngineListenerComponent.onAllRequiredCapabilitiesAvailable(DeploymentEngineListenerComponent.java:216)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.lambda$notifySatisfiableComponents$7(StartupComponentManager.java:266)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at org.wso2.carbon.kernel.internal.startupresolver.StartupComponentManager.notifySatisfiableComponents(StartupComponentManager.java:252)
at org.wso2.carbon.kernel.internal.startupresolver.StartupOrderResolver$1.run(StartupOrderResolver.java:204)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: org.wso2.siddhi.core.exception.SiddhiAppCreationException: Error on 'apim_abnormal_backend_time_alert_0' # Line: 34. Position: 111, near '#store(type = 'rdbms', datasource = 'APIM_ANALYTICS_DB')
define table ApimAllAlert (type string, tenantDomain string, message string, severity int, alertTimestamp long)'. No extension exist for store:rdbms
at org.wso2.siddhi.core.util.SiddhiClassLoader.loadExtensionImplementation(SiddhiClassLoader.java:45)
at org.wso2.siddhi.core.util.parser.helper.DefinitionParserHelper.addTable(DefinitionParserHelper.java:203)
at org.wso2.siddhi.core.util.SiddhiAppRuntimeBuilder.defineTable(SiddhiAppRuntimeBuilder.java:125)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.defineTableDefinitions(SiddhiAppParser.java:320)
at org.wso2.siddhi.core.util.parser.SiddhiAppParser.parse(SiddhiAppParser.java:224)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:65)
at org.wso2.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:74)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorService.deploySiddhiApp(StreamProcessorService.java:100)
at org.wso2.carbon.stream.processor.core.internal.StreamProcessorDeployer.deploySiddhiQLFile(StreamProcessorDeployer.java:93)
... 14 more
And many more like this for each alert type.
Any help regarding this is appreciated.
Thanks
The tables needed will be created automatically in most cases. The exceptions are the following data sources, which need to be created only if you are using the specific functionality:
1. Metrics DB
2. Microgateway analytics DB
However, it seems the issue you are facing is that the server is not recognising siddhi-store-rdbms.jar packed in the /lib folder. Please check whether it is available; it is packed by default.
Niveathika,
We are currently not using the microgateway functionality, so I don't know whether I need that database populated with a schema. What I found is that I had to populate two database schemas: WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA and WSO2_APIM_ANALYTICS_DASHBOARD. I found the schema for WSO2_APIM_ANALYTICS_DASHBOARD in the Stream Processor server.
Here are those two schemas for anyone else struggling to migrate over to MSSQL:
WSO2_APIM_ANALYTICS_DASHBOARD
IF OBJECT_ID('[dbo].[DASHBOARD_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[DASHBOARD_RESOURCE]
GO
CREATE TABLE [dbo].[DASHBOARD_RESOURCE](
[ID] [int] IDENTITY(1,1) NOT NULL,
[URL] [varchar](100) NOT NULL,
[OWNER] [varchar](100) NOT NULL,
[NAME] [varchar](256) NOT NULL,
[DESCRIPTION] [varchar](1000) NULL,
[PARENT_ID] [int] NOT NULL,
[LANDING_PAGE] [varchar](100) NOT NULL,
[CONTENT] [varbinary](max) NULL,
CONSTRAINT [PK_DASHBOARD_RESOURCE] PRIMARY KEY CLUSTERED
(
[URL] ASC,
[OWNER] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
IF OBJECT_ID('[dbo].[WIDGET_RESOURCE]', 'U') IS NOT NULL
DROP TABLE [dbo].[WIDGET_RESOURCE]
GO
CREATE TABLE [dbo].[WIDGET_RESOURCE](
[WIDGET_ID] [varchar](255) NOT NULL,
[WIDGET_NAME] [varchar](255) NOT NULL,
[WIDGET_CONFIGS] [varbinary](8000) NULL,
CONSTRAINT [PK_WIDGET_RESOURCE] PRIMARY KEY CLUSTERED
(
[WIDGET_ID] ASC,
[WIDGET_NAME] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
WSO2_APIM_ANALYTICS_GEO_LOCATION_DATA
CREATE TABLE BLOCKS (
network_cidr varchar(45) DEFAULT NULL,
network BIGINT DEFAULT NULL,
broadcast BIGINT DEFAULT NULL,
geoname_id BIGINT DEFAULT NULL,
registered_country_geoname_id BIGINT DEFAULT NULL,
represented_country_geoname_id BIGINT DEFAULT NULL,
is_anonymous_proxy SMALLINT DEFAULT '0',
is_satellite_provider SMALLINT DEFAULT '0',
postal_code VARCHAR(45) DEFAULT NULL,
latitude DECIMAL(10,4) DEFAULT NULL,
longitude DECIMAL(10,4) DEFAULT NULL,
network_blocks varchar(45) DEFAULT NULL);
CREATE INDEX idx_blocks_network ON BLOCKS (network);
CREATE INDEX idx_blocks_broadcast ON BLOCKS (broadcast);
CREATE INDEX idx_blocks_network_blocks ON BLOCKS (network_blocks);
CREATE TABLE LOCATION (
geoname_id BIGINT NOT NULL,
locale_code VARCHAR(10) DEFAULT NULL,
continent_code VARCHAR(10) DEFAULT NULL,
continent_name VARCHAR(20) DEFAULT NULL,
country_iso_code VARCHAR(10) DEFAULT NULL,
country_name VARCHAR(45) DEFAULT NULL,
subdivision_1_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_1_name VARCHAR(1000) DEFAULT NULL,
subdivision_2_iso_code VARCHAR(10) DEFAULT NULL,
subdivision_2_name VARCHAR(1000) DEFAULT NULL,
city_name VARCHAR(1000) DEFAULT NULL,
metro_code BIGINT DEFAULT NULL,
time_zone VARCHAR(30) DEFAULT NULL,
PRIMARY KEY (geoname_id));
CREATE TABLE IP_LOCATION (
ip VARCHAR(100) NOT NULL,
country_name VARCHAR(200) DEFAULT NULL,
city_name VARCHAR(200) DEFAULT NULL,
PRIMARY KEY (ip)
);
Thanks
The following query should return only the cities starting with "Ö" (German umlaut).
letter = 'Ö'
City.objects.filter(name__istartswith=letter)
But it returns cities starting with O and Ö.
I use Django 1.11 and MariaDB.
I already set COLLATE on that table to utf8_bin, but this hasn't changed the behavior within Django.
This is the simplified SQL query:
SELECT `cities_city`.`name` FROM `cities_city` WHERE `cities_city`.`name` LIKE "Ö%";
and here is the SHOW CREATE TABLE output:
SHOW CREATE TABLE `cities_city`;
+-------------+------------------------------+
| Table       | Create Table                 |
+-------------+------------------------------+
| cities_city | CREATE TABLE `cities_city` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(200) CHARACTER SET utf8 NOT NULL,
`slug` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
`name_std` varchar(200) CHARACTER SET utf8 NOT NULL,
`location` point NOT NULL,
`population` int(11) NOT NULL,
`elevation` int(11) DEFAULT NULL,
`kind` varchar(10) CHARACTER SET utf8 NOT NULL,
`timezone` varchar(40) CHARACTER SET utf8 NOT NULL,
`country_id` int(11) NOT NULL,
`region_id` int(11) DEFAULT NULL,
`subregion_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `cities_city_country_id_2f07e352_uniq` (`country_id`,`region_id`,`subregion_id`,`id`,`name`),
KEY `cities_city_b068931c` (`name`),
KEY `cities_city_16c3f481` (`name_std`),
KEY `cities_city_region_id_0227cdac_fk_cities_region_id` (`region_id`),
KEY `cities_city_subregion_id_9fbab97d_fk_cities_subregion_id` (`subregion_id`),
CONSTRAINT `cities_city_country_id_779ae117_fk_cities_country_id` FOREIGN KEY (`country_id`) REFERENCES `cities_country` (`id`),
CONSTRAINT `cities_city_region_id_0227cdac_fk_cities_region_id` FOREIGN KEY (`region_id`) REFERENCES `cities_region` (`id`),
CONSTRAINT `cities_city_subregion_id_9fbab97d_fk_cities_subregion_id` FOREIGN KEY (`subregion_id`) REFERENCES `cities_subregion` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11468436 DEFAULT CHARSET=utf8 COLLATE=utf8_bin |
+-------------+------------------------------+
1 row in set (0.00 sec)
The problem is a subtle one.
The clue is here:
SELECT * FROM information_schema.`COLUMNS` WHERE table_name = 'cities_city';
The explanation...
`name` varchar(200) CHARACTER SET utf8 NOT NULL,
is COLLATE utf8_general_ci because that is the default collation for utf8.
This table default:
) ENGINE=InnoDB AUTO_INCREMENT=11468436 DEFAULT CHARSET=utf8 COLLATE=utf8_bin
gives utf8_bin only to columns added later; it does not change existing columns.
Perhaps you did the obvious ALTER TABLE ... COLLATE utf8_bin, which changes only that table default? Instead:
ALTER TABLE cities_city
CONVERT TO CHARACTER SET utf8
COLLATE utf8_bin;
This will go into each string column and make the change. Note that indexes (etc.) must be rebuilt when the collation changes.
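To confirm the conversion column by column, a quick check against information_schema (these are standard MySQL/MariaDB columns) should now report utf8_bin for every string column:
-- Each string column should now report utf8_bin
SELECT COLUMN_NAME, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'cities_city'
AND COLLATION_NAME IS NOT NULL;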
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
How can I remove FULLTEXT line from MySQL dump above and comma on the line before so it looks something like this:
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The removal should be easy using sed but I'm not sure how to remove this comma on the line above so the dump is successfully imported:
sed -i '/FULLTEXT KEY.*/d' dump.sql
Sometimes there are also multiple columns with a FULLTEXT index:
CREATE TABLE `entity` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`company_name` varchar(255) NOT NULL DEFAULT '',
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `company_name` (`company_name`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Sed script:
#!/bin/bash
# Delete every FULLTEXT line, then strip the comma that the deletion
# leaves on the preceding column line, buffering lines in the hold
# space until the ") ENGINE=InnoDB" terminator is seen.
sed -e '/FULLTEXT/d' |
sed -ne '
# buffer ordinary lines (h on the first line avoids a leading blank)
/ENGINE=InnoDB/!{1h;1!H;}
# on the terminator: swap in the buffer, trim the trailing comma, print
/ENGINE=InnoDB/{x; s/,[ \t]*$//; p; }
# at end of input, flush the last buffered line
${g;p;}
'
Input:
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `entity` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`company_name` varchar(255) NOT NULL DEFAULT '',
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `company_name` (`company_name`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Sample Run:
/home/user> ./1.sed < input
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `entity` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`company_name` varchar(255) NOT NULL DEFAULT '',
`description` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
sed is for simple substitutions on individual lines, that is all. For anything else you should use awk for clarity, brevity, portability, efficiency, robustness and most other desirable qualities of software. All of the sed constructs to do anything other than s, g, and p (with -n) became obsolete in the mid-1970s when awk was invented and exist today just for mental exercise.
Given this input:
$ cat file
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `entity` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`company_name` varchar(255) NOT NULL DEFAULT '',
`description` text NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `company_name` (`company_name`),
FULLTEXT KEY `description` (`description`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
With GNU awk for multi-char RS:
$ awk -v RS=',\\s*FULLTEXT[^\n]*)' -v ORS= '1' file
CREATE TABLE `transaction` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`description` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `entity` (
`id` mediumint(9) NOT NULL AUTO_INCREMENT,
`company_name` varchar(255) NOT NULL DEFAULT '',
`description` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Since you are using GNU sed (for -i), I assume you will have no problem using GNU awk; if you need in-place editing, add -i inplace at the start.
Another GNU awk approach: treat each ;-terminated statement as one record and delete the FULLTEXT clause together with the comma before it:
awk -vRS=";" 'NF{gsub(/),\n +FULLTEXT.*)/,")\n)",$0);$0=$0";"}1' file
I have a problem with the MySQL C++ Connector: when I insert a string with a prepared statement, it truncates my string in the database (saved in a longtext column). I am losing an enormous amount of data.
Here is my code:
void RequetteBDD::add(Files::Fichier file)
{
string query = "INSERT INTO files(titre,url,type,txt,lastcrawl) VALUES (?,?,?,?,?)";
sql::PreparedStatement *prep_stmt;
prep_stmt = con->prepareStatement(query);
prep_stmt->setString(1,file.getNom()); //title
prep_stmt->setString(2,file.getURL().getUri()); //url
prep_stmt->setInt(3,file.getTypeInt()); //type
//I also tried the following, but the saved length was exactly the same:
//istringstream stream(file.getTextFull());
//prep_stmt->setBlob(4,&stream);
prep_stmt->setString(4,file.getTextFull()); //here is the problem
prep_stmt->setInt(5,time(NULL)); //timestamp
prep_stmt->execute();
delete prep_stmt;
}
MySQL DDL:
CREATE TABLE IF NOT EXISTS `files` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`titre` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`url` varchar(512) COLLATE utf8_unicode_ci NOT NULL,
`type` int(1) NOT NULL DEFAULT '0',
`txt` longtext COLLATE utf8_unicode_ci NOT NULL,
`lastcrawl` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=7 ;
Thanks for your help.
It was an encoding problem; there are two solutions:
Change the encoding of the database to ASCII, or
change the encoding of the string, which can easily be done with boost::locale::conv.
I hope it will help other people.
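For anyone verifying a similar truncation, comparing byte length to character length of the stored column can confirm an encoding mismatch; a small diagnostic sketch against the files table from the DDL above:
-- LENGTH() counts bytes, CHAR_LENGTH() counts characters; a gap between
-- the expected and stored sizes after an insert points at a charset
-- mismatch between the connection and the column.
SELECT id, LENGTH(txt) AS bytes, CHAR_LENGTH(txt) AS chars
FROM files
ORDER BY id DESC
LIMIT 5;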