Insert contents of a text file into an Oracle CLOB - C++

I'm trying to insert the whole text content of file.txt into a CLOB column.
Connection^ DB = gcnew Connection();
OracleConnection^ Ocnn = DB->getOracleConnectionObject();
int number = 0;
try {
    // here >>
    OracleCommand^ c = gcnew OracleCommand("INSERT INTO PANDA.PAGE(SITE_ID, URL, SOURCE) VALUES('40', 'www.site.com', Read_Whole_File('C://Users/farmehr/Desktop/', 'file.txt'))", Ocnn);
    number = c->ExecuteNonQuery();
}
catch (Exception^ eOra) {
    Console::WriteLine(eOra->Message + "Exception Caught");
    throw eOra;
}
I want to know: is there any way to insert the file directly into the database? (A function like Read_Whole_File() in the code above.)

In order to insert a file into a CLOB, I first had to create a procedure in SQL*Plus. SOURCE is my CLOB column and TEMP_CLOB is a predefined directory object.
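A minimal sketch of what such a procedure can look like, assuming the file is loaded through a BFILE in the TEMP_CLOB directory via DBMS_LOB.LOADCLOBFROMFILE (the procedure name and parameter list here are illustrative, not the original code):

-- Illustrative sketch, not the original procedure: load the file's contents
-- from the TEMP_CLOB directory object into the SOURCE CLOB of a new row.
CREATE OR REPLACE PROCEDURE insert_page_from_file(p_site_id IN VARCHAR2,
                                                  p_url     IN VARCHAR2,
                                                  p_file    IN VARCHAR2) AS
  v_bfile BFILE := BFILENAME('TEMP_CLOB', p_file);
  v_clob  CLOB;
  v_dest  INTEGER := 1;
  v_src   INTEGER := 1;
  v_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  v_warn  INTEGER;
BEGIN
  -- Insert the row with an empty CLOB and get a handle to it.
  INSERT INTO PANDA.PAGE(SITE_ID, URL, SOURCE)
    VALUES (p_site_id, p_url, EMPTY_CLOB())
    RETURNING SOURCE INTO v_clob;
  -- Copy the whole file into the CLOB.
  DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(v_clob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile),
                            v_dest, v_src, DBMS_LOB.DEFAULT_CSID, v_lang, v_warn);
  DBMS_LOB.CLOSE(v_bfile);
END;
/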
Next, in my code, I had to run this procedure.
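A rough C++/CLI sketch of that call, matching the illustrative procedure name above rather than the original code:

// Hypothetical call matching the sketch above; adjust the procedure
// name and arguments to your own procedure.
OracleCommand^ c = gcnew OracleCommand(
    "BEGIN insert_page_from_file('40', 'www.site.com', 'file.txt'); END;", Ocnn);
c->ExecuteNonQuery();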
Keep in mind that to create and run procedures you have to log in AS SYSDBA. (Change OracleClient.dll to Oracle.ManagedDataAccess.dll if you're using C#/.NET.)

Related

Univocity - Issue in file header validation during multiple reads

I'm using Univocity-Parsers' bean iterator to read each line of a file and get the bean. I have observed weird behavior in the library when I attempt to read the same file multiple times.
Code when passing the File object to the CsvParser instance:
private static void testBeanIterator() throws Exception {
    try {
        File sampleFile = generateFile(0);
        /*
        System.out.println("Sample file content = " + FileUtils.readFileToString(sampleFile,
            Charset.defaultCharset()));
        */
        for (int i = 0; i < 1000; i++) {
            BufferedReader reader =
                new BufferedReader(new InputStreamReader(new FileInputStream(sampleFile),
                    StandardCharsets.UTF_8));
            AtomicInteger atomicInteger = new AtomicInteger();
            final BeanProcessor<CustomerSegmentMapping> rowProcessor =
                new BeanProcessor<CustomerSegmentMapping>(CustomerSegmentMapping.class) {
                    @Override
                    public void beanProcessed(@Nonnull final CustomerSegmentMapping customerSegmentMapping,
                                              @Nonnull final ParsingContext context) {
                        try {
                            System.out.println(OBJECT_MAPPER.writeValueAsString(customerSegmentMapping));
                            atomicInteger.getAndAdd(1);
                        } catch (Exception ex) {
                            throw new RuntimeException("error");
                        }
                    }
                };
            rowProcessor.setStrictHeaderValidationEnabled(true);
            final CsvParserSettings parserSettings = new CsvParserSettings();
            parserSettings.setRowProcessor(rowProcessor);
            parserSettings.setHeaderExtractionEnabled(true);
            final CsvParser parser = new CsvParser(parserSettings);
            //parser.parse(reader);
            parser.parse(sampleFile);
            System.out.println("Finished parser");
            if (atomicInteger.get() != 10) {
                throw new Exception("mismatch");
            }
            reader.close();
        }
    } catch (Exception ex) {
        throw new RuntimeException("exception = " + ex.getMessage(), ex);
    } finally {
    }
}
On executing the code, the following is the console output:
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
{"customerId":"6bc12a7a-2c28-4aea-a7be-6be45e16ffb2","segmentId":"S1"}
{"customerId":"da736310-e508-47ff-92b8-59d490e37a72","segmentId":"S1"}
{"customerId":"9a5d4454-e6d4-49a5-bb04-8354154d0493","segmentId":"S1"}
{"customerId":"ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5","segmentId":"S1"}
{"customerId":"94ea24b0-0c83-4039-a391-1d2439c88be8","segmentId":"S1"}
{"customerId":"2baef5f9-d8cd-451d-b579-a626cb58b284","segmentId":"S1"}
{"customerId":"022a184b-1b06-49aa-b1c4-b94a6f343b04","segmentId":"S1"}
{"customerId":"bcb3984c-0495-4da8-b146-9af3983cc158","segmentId":"S1"}
{"customerId":"feef62de-1aaf-43d4-a83b-afe053db97cf","segmentId":"S1"}
{"customerId":"5825c924-55d5-4fd6-8468-ca36d47a7cae","segmentId":"S1"}
Finished parser
Exception in thread "main" java.lang.RuntimeException: exception = Could not find fields [CustomerId]' in input. Names found: [ustomerId, SegmentId]
Internal state when error was thrown: line=2, column=0, record=1, charIndex=60, headers=[ustomerId, SegmentId]
at com.poppins.cube.common.UnivocityNahiHatanaHai.testBeanIterator(UnivocityNahiHatanaHai.java:95)
at com.poppins.cube.common.UnivocityNahiHatanaHai.main(UnivocityNahiHatanaHai.java:37)
Caused by: com.univocity.parsers.common.DataProcessingException: Could not find fields [CustomerId]' in input. Names found: [ustomerId, SegmentId]
Internal state when error was thrown: line=2, column=0, record=1, charIndex=60, headers=[ustomerId, SegmentId]
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.mapFieldIndexes(BeanConversionProcessor.java:414)
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.mapValuesToFields(BeanConversionProcessor.java:340)
at com.univocity.parsers.common.processor.core.BeanConversionProcessor.createBean(BeanConversionProcessor.java:508)
at com.univocity.parsers.common.processor.core.AbstractBeanProcessor.rowProcessed(AbstractBeanProcessor.java:54)
at com.univocity.parsers.common.Internal.process(Internal.java:21)
at com.univocity.parsers.common.AbstractParser.rowProcessed(AbstractParser.java:596)
at com.univocity.parsers.common.AbstractParser.parse(AbstractParser.java:133)
at com.univocity.parsers.common.AbstractParser.parse(AbstractParser.java:605)
at com.poppins.cube.common.UnivocityNahiHatanaHai.testBeanIterator(UnivocityNahiHatanaHai.java:83)
... 1 more
Process finished with exit code 1
Following is the content of the file:
CustomerId,SegmentId
6bc12a7a-2c28-4aea-a7be-6be45e16ffb2,S1
da736310-e508-47ff-92b8-59d490e37a72,S1
9a5d4454-e6d4-49a5-bb04-8354154d0493,S1
ec2ed5cc-cd18-443b-bd69-e56fc09ba0f5,S1
94ea24b0-0c83-4039-a391-1d2439c88be8,S1
2baef5f9-d8cd-451d-b579-a626cb58b284,S1
022a184b-1b06-49aa-b1c4-b94a6f343b04,S1
bcb3984c-0495-4da8-b146-9af3983cc158,S1
feef62de-1aaf-43d4-a83b-afe053db97cf,S1
5825c924-55d5-4fd6-8468-ca36d47a7cae,S1
From what I can tell, the issue arises because I'm passing a File object to CsvParser; CsvParser internally creates an InputStream object which is not closed.
If I pass a BufferedReader object instead of a File object, the issue does not arise.
I'm not able to tell whether this is a known issue with Univocity-Parsers or whether there is something I'm missing.
Author of the library here. I can see your exception showing it got header ustomerId instead of CustomerId.
This looks like a bug introduced in version 2.5.0 that was fixed in version 2.5.6 if I'm not mistaken. This plagued me for a while as it was an internal concurrency issue that was hard to track down. Basically when you pass a File without an explicit encoding it will try to find a UTF BOM marker in the input (effectively consuming the first character) to determine the encoding automatically. This happened only for InputStreams and Files.
Anyway, this has been fixed, so simply updating to the latest version should get rid of the problem for you (please let me know if you are not using version 2.5.something).
If you want to remain with the current version you have there, the error will be gone if you call
parser.parse(sampleFile, Charset.defaultCharset());
This will prevent the parser from trying to discover whether there's a BOM marker in your file, therefore avoiding that pesky bug.
Hope this helps
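For reference, a minimal sketch of the two safe invocations in the question's loop (assuming the file is UTF-8, as the question's reader already does):

// Either give the parser an explicit encoding, which skips BOM auto-detection...
parser.parse(sampleFile, StandardCharsets.UTF_8);
// ...or hand it a Reader, which never hits the detection code path.
parser.parse(new BufferedReader(new InputStreamReader(
        new FileInputStream(sampleFile), StandardCharsets.UTF_8)));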

SqlDataAdapter not loading datatable - C++

I have been trying to load an SQL database into a DataTable in C++; however, it doesn't seem to work. The connection is fine, though, as a DataReader works. Here is my code:
void importDatabase() {
    SqlConnection con;
    SqlDataAdapter^ da;
    SqlCommand cmd;
    DataTable^ dt;
    int count = 1;
    try {
        con.ConnectionString = "Data Source=MYNAME\\SQLEXPRESS;Initial Catalog=VinylRecords;Integrated Security=True";
        cmd.CommandText = "SELECT * FROM Records";
        cmd.Connection = %con;
        con.Open();
        da = gcnew SqlDataAdapter(%cmd);
        dt = gcnew DataTable("Records");
        Console::Write(da->ToString());
        da->Fill(dt);
        for (int i = 0; i < dt->Rows->Count - 1; i++) {
            String^ value_string;
            value_string = dt->Rows[i]->ToString();
            Console::WriteLine(dt->Rows[i]->ToString());
            count++;
        }
        cout << "There are " << count << " many records";
    }
    catch (Exception^ ex) {
        Console::WriteLine(ex->ToString());
    }
}
Please note that I slightly altered the source name to post here, but only the first part.
What is wrong with my code?
So, the problem is here:
dt->Rows[i]->ToString()
Rows[i] is a DataRow object, and the DataRow class's ToString() method always prints the fully qualified type name, which is what you are seeing. So this is technically working just fine. What you need to do to get something useful is access a specific column in that row, get its value, and output that.
Something along the lines of:
foreach (DataRow dr in dt.Rows)
{
    Console.Write(dr.Field<int>("ColumnOne"));
    Console.Write(" | ");
    Console.WriteLine(dr.Field<string>("ColumnTwo"));
}
I am not entirely sure of the syntax for accessing a specific cell inside a DataTable when using C++/CLI, so I have provided the C# equivalent to explain why you were getting managed type names (e.g. "System.Data.DataRow") as output instead of the data inside the row's columns.
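For what it's worth, a rough C++/CLI equivalent of that loop using the DataRow default indexer ("ColumnOne"/"ColumnTwo" are placeholder names, as in the C# sketch):

// Rough C++/CLI sketch; replace the placeholder column names with your own.
for each (DataRow^ dr in dt->Rows)
{
    Console::Write(dr->default["ColumnOne"]);
    Console::Write(" | ");
    Console::WriteLine(dr->default["ColumnTwo"]);
}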
Also, I noticed you tagged this question with "mysql", but you are using the ADO.NET System.Data.SqlClient namespace. The SqlDataReader and SqlDataAdapter classes only work with T-SQL (Microsoft SQL Server databases). If you are actually connecting to a MySQL database, you will want to use the System.Data.Odbc.OdbcDataAdapter class. You can read a little more here: https://msdn.microsoft.com/en-us/library/ms254931.aspx

Saving text in SQLite3, C++

Here is my second post in the community; excuse me if I forget to add something, just let me know.
I am trying to write a program in C++ that can save text (I want to save code) in a database using sqlite3. Currently I've made a wxWidgets program that calls some functions from a DLL, and these functions interact with the database.
The database that I want to make is really simple: it has one table with 3 columns (id, name, ref). My problem comes when I want to save a large amount of text that also contains symbols that can conflict with the SQL queries (I would like to save files inside the database, for example in the "ref" column).
I'm mostly using the sqlite3_exec function, because the functions sqlite3_prepare_v2, sqlite3_bind and sqlite3_step crash the DLL I'm working in.
My question: can I directly save any text, as big as I want, without worrying about whether it has special symbols or not? And how can I do it?
More info: I am working in C++ with Code::Blocks (13.12), making a DLL of sqlite3 functions, using the MinGW toolchain (Windows 7).
This is an example of an insert function that I'm using:
int DLL_EXPORT add_item(sqlite3* db, string tbname, string col, string item)
{
    char* db_err = 0;
    if (tbname == std::string() || col == std::string() || item == std::string())
        throw std::invalid_argument("stoi: invalid argument table name");
    char buf[200];
    sprintf(buf, "insert into %s (%s) values ('%s');", tbname.c_str(), col.c_str(), item.c_str());
    int n = sqlite3_exec(db, buf, NULL, 0, &db_err);
    dsperr(&db_err);
    if (n != SQLITE_OK)
    {
        //throw something
    }
    return 0;
}
Thank you in advance.
Thanks to CL. for the comment above.
// Add one text to a table
// The column must be specified
//
int DLL_EXPORT add_text(sqlite3* db, string tbname, string col, string id, string item)
{
    char* db_err = 0;
    if (tbname == std::string() || col == std::string() || item == std::string())
        throw std::invalid_argument("stoi: invalid argument table name");
    char* zSQL = sqlite3_mprintf("UPDATE %q SET %q=(%Q) WHERE id=%q", tbname.c_str(), col.c_str(), item.c_str(), id.c_str());
    int n = sqlite3_exec(db, zSQL, NULL, 0, &db_err);
    dsperr(&db_err);
    sqlite3_free(zSQL);
    if (n != SQLITE_OK)
    {
        // throw something
    }
    return 0;
}
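For completeness: the usual way to store arbitrary text without any quoting concerns is a prepared statement with bound parameters (the prepare/bind crashes mentioned in the question are typically an environment or linking issue rather than the API itself). A minimal sketch; "mytable" is a placeholder, and the table/column names are fixed here because identifiers cannot be bound:

#include <sqlite3.h>
#include <string>

// Sketch: bind the text as a parameter so its content never touches the SQL
// string, no matter what symbols it contains.
int add_text_bound(sqlite3* db, const std::string& id, const std::string& item)
{
    sqlite3_stmt* stmt = NULL;
    int rc = sqlite3_prepare_v2(db, "UPDATE mytable SET ref = ?1 WHERE id = ?2;",
                                -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, item.c_str(), -1, SQLITE_TRANSIENT); // copies the text
    sqlite3_bind_text(stmt, 2, id.c_str(), -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}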

Insert a row in MySQL using C++ for MFC Dialog Base App

I have 2 variables, and I want to insert their values into a MySQL database, but I don't know how to do this.
Here is all of my code so far; please correct/advise:
void RegistrationForm::Register()
{
    istifadeciAdi.GetWindowText(i_ad);
    par.GetWindowText(i_par);
    parTekrar.GetWindowText(i_par_tekrar);
    if (istifadeciAdi.GetWindowTextLength() != 0) // if you can, please write this line better.
    {
        if (i_par == i_par_tekrar)
        {
            MySQL_Driver *driver;
            Connection *dbConn;
            Statement *statement;
            //ResultSet *result; // I don't need this line any more
            //PreparedStatement *ps;
            driver = get_mysql_driver_instance();
            dbConn = driver->connect("host", "u", "c");
            dbConn->setSchema("mfc_app_database");
            statement = dbConn->createStatement();
            statement->executeQuery("INSERT INTO users(`username`, `password`) VALUES (/* how to use i_ad and i_par as variables to set the column values? */)"); // executes the user "input"
            /*ps = dbConn->prepareStatement("INSERT INTO users(`username`, `password`, `name`) VALUES (?)");
            ps->setString(1, "cccc");
            ps->setString(2, "ffff");*/
            //delete result;
            //delete[] result;
            /*delete ps;
            delete[] ps;*/
            delete statement;
            delete[] statement; // don't use this line in your program like I did
            delete dbConn;
            delete[] dbConn; // don't use this line in your program like I did
        }
        else
            MessageBox(L"Şifrə dəqiq təkrar olunmalıdır.", L"Xəbərdarlıq", MB_ICONWARNING); // "The password must be repeated exactly." / "Warning"
    }
    else
        AfxMessageBox(L"Boş qoymaq olmaz."); // "It must not be left empty."
}
Edit
There isn't any compile error. But when I click the (Register) button, it says:
Program stopped working
and after clicking the Debug button it takes me to the line with the INSERT query.
P.S. Sorry for my poor English.
Use CString to build the query.
For example:
CString strQuery;
strQuery.Format(_T("INSERT INTO users(`username`, `password`) VALUES ('%s', '%s')"), i_ad, i_par);
Before using this query string in executeQuery (or in other query commands), you must convert it to std::string, because execute, executeQuery and executeUpdate only accept std::string. So add these lines:
CT2CA tempString(strQuery);
std::string query(tempString);
And use this string in your execute command:
statement->executeQuery(query);
The docs for that MySQL connector say to use statement::execute() for queries that don't return a result set, and statement::executeQuery() for queries that do.
So for a SQL INSERT INTO, maybe your problem is that you should be using execute().
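For reference, a minimal sketch of the prepared-statement route from the question's commented-out code, with the placeholder count fixed to match the two columns; username and password here stand for std::string values converted from the CString variables as shown above:

// Sketch: two placeholders for the two columns; execute() is used since an
// INSERT returns no result set. username/password are std::string values
// converted from i_ad / i_par beforehand.
PreparedStatement *ps = dbConn->prepareStatement(
    "INSERT INTO users(`username`, `password`) VALUES (?, ?)");
ps->setString(1, username);
ps->setString(2, password);
ps->execute();
delete ps;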

Same Instances header (ARFF) for all my database queries

I am using InstanceQuery (SQL queries) to construct my Instances, but my query results do not always come back in the same order, which is normal in SQL.
Because of this, Instances constructed from different SQL queries have different headers. A simple example can be seen below. I suspect my results change because of this behavior.
Header 1
@attribute duration numeric
@attribute protocol_type {tcp,udp}
@attribute service {http,domain_u}
@attribute flag {SF}
Header 2
@attribute duration numeric
@attribute protocol_type {tcp}
@attribute service {pm_dump,pop_2,pop_3}
@attribute flag {SF,S0,SH}
My question is: how can I give correct header information to the Instances construction?
Is something like the workflow below possible?
get pre-prepared header information from an ARFF file or another place
give this header information to the Instances construction
call the SQL function and get Instances (header + data)
I am using the following function to get Instances from the database.
public static Instances getInstanceDataFromDatabase(String pSql,
        String pInstanceRelationName) {
    try {
        DatabaseUtils utils = new DatabaseUtils();
        InstanceQuery query = new InstanceQuery();
        query.setUsername(username);
        query.setPassword(password);
        query.setQuery(pSql);
        Instances data = query.retrieveInstances();
        data.setRelationName(pInstanceRelationName);
        if (data.classIndex() == -1) {
            data.setClassIndex(data.numAttributes() - 1);
        }
        return data;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
I tried various approaches to my problem, but it seems that the Weka internal API does not allow a solution to this problem right now. I modified the weka.core.Instances append command-line code for my purposes. This code is also given in this answer.
Accordingly, here is my solution. I created a SampleWithKnownHeader.arff file, which contains the correct header values. I read this file with the following code.
public static Instances getSampleInstances() {
    Instances data = null;
    try {
        BufferedReader reader = new BufferedReader(new FileReader(
                "datas\\SampleWithKnownHeader.arff"));
        data = new Instances(reader);
        reader.close();
        // setting class attribute
        data.setClassIndex(data.numAttributes() - 1);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    return data;
}
After that, I use the following code to create the instances. I had to use a StringBuilder and the string values of each instance, then save the resulting string to a file.
public static void main(String[] args) {
    Instances SampleInstance = MyUtilsForWeka.getSampleInstances();
    DataSource source1 = new DataSource(SampleInstance);
    Instances data2 = InstancesFromDatabase
            .getInstanceDataFromDatabase(DatabaseQueries.WEKALIST_QUESTION1);
    MyUtilsForWeka.saveInstancesToFile(data2, "fromDatabase.arff");
    DataSource source2 = new DataSource(data2);
    Instances structure1;
    Instances structure2;
    StringBuilder sb = new StringBuilder();
    try {
        structure1 = source1.getStructure();
        sb.append(structure1);
        structure2 = source2.getStructure();
        while (source2.hasMoreElements(structure2)) {
            String elementAsString = source2.nextElement(structure2)
                    .toString();
            sb.append(elementAsString);
            sb.append("\n");
        }
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
    MyUtilsForWeka.saveInstancesToFile(sb.toString(), "combined.arff");
}
My code to save instances to a file is below.
public static void saveInstancesToFile(String contents, String filename) {
    FileWriter fstream;
    try {
        fstream = new FileWriter(filename);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(contents);
        out.close();
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }
}
This solves my problem, but I wonder if a more elegant solution exists.
I solved a similar problem with the Add filter, which allows adding attributes to Instances. You need to add a correct Attribute with the proper list of values to both datasets (in my case, to the test dataset only):
Load train and test data:
/* "train" contains labels and data */
/* "test" contains data only */
CSVLoader csvLoader = new CSVLoader();
csvLoader.setFile(new File(trainFile));
Instances training = csvLoader.getDataSet();
csvLoader.reset();
csvLoader.setFile(new File(predictFile));
Instances test = csvLoader.getDataSet();
Set a new attribute with Add filter:
Add add = new Add();
/* the name of the attribute must be the same as in "train"*/
add.setAttributeName(training.attribute(0).name());
/* getValues returns a String with comma-separated values of the attribute */
add.setNominalLabels(getValues(training.attribute(0)));
/* put the new attribute to the 1st position, the same as in "train"*/
add.setAttributeIndex("1");
add.setInputFormat(test);
/* result - a compatible with "train" dataset */
test = Filter.useFilter(test, add);
As a result, the headers of both "train" and "test" are the same (compatible for Weka machine learning).
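The getValues helper is not shown in the answer; per the comment above, it returns the attribute's values as a comma-separated String. A minimal sketch of what it could look like (an assumption, not the original code):

// Hypothetical helper matching the comment above: joins an attribute's
// nominal values into a comma-separated String for Add.setNominalLabels().
private static String getValues(weka.core.Attribute attr) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < attr.numValues(); i++) {
        if (i > 0) {
            sb.append(',');
        }
        sb.append(attr.value(i));
    }
    return sb.toString();
}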