Why do I see a read-write error message when creating a table or inserting rows? - questdb

I keep getting a "could not open read-write" message when creating a table or inserting rows.
2022-10-21T17:27:11.336011Z I i.q.c.l.t.LineTcpMeasurementScheduler could not create table [tableName=cpu, ex=could not open read-write
io.questdb.cairo.CairoException: [22] could not open read-only [file=/root/.questdb/db/cpu/service.k]
I have tried the troubleshooting solution given in the QuestDB forums, but it does not work.
If you could explain why it does not work, along with a solution, I would appreciate it.

One possibility is an insufficient limit on the maximum number of open files. Try raising that limit for your machine. You can check more about that here

I think the syntax you used was incorrect. Have you checked whether the table was created properly or not?
If not, create the table:
CREATE TABLE my_table(symb SYMBOL, price DOUBLE, ts TIMESTAMP, s STRING) timestamp(ts);
After creating the table properly, try to insert the rows, for example as sketched below.
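A minimal insert matching that schema might look like this (the values are placeholders; QuestDB casts the timestamp string to TIMESTAMP):
INSERT INTO my_table VALUES ('AAPL', 170.5, '2022-10-21T17:27:11.336011Z', 'example row');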

You are likely hitting a bug which was fixed recently:
https://github.com/questdb/questdb/pull/2627
The fix will be released in the upcoming 6.5.4 release, which unfortunately is not out yet.
Please try 6.5.4 when it is released; it should be out in the next few days.

Related

Error while saving transformation in pentaho spoon

I am getting the error below when I save a transformation in Pentaho Spoon:
Error saving transformation to repository!
Error updating batch
Cannot insert duplicate key row in object 'dbo.R_STEP_ATTRIBUTE' with unique index 'IDX_RSAT'. The duplicate key value is (2314, PARTITIONING_SCHEMA, 0).
Everything was working fine until I ran a job that creates multiple Excel files. While this job was running, a memory issue suddenly occurred and the job was aborted. After that I tried to save my file, but it was deleted for saving and never saved back, so I lost the job I had created.
Please help me understand the reason.
The last save to the repository did not end gracefully.
There is a small chance that you can repair it by erasing the db.cache file in the .kettle directory.
If that does not work, create a new repository and copy the current one into it using the global repository export/import. Then erase the old repository and do the same from the freshly rebuilt one.
The intermediary repository may be file-based rather than on a database.
If it is the first time you do this, plan for one to two hours.
There is an easy way to recover from this.
As AlainD says, the problem occurs when you save or delete a transformation and suddenly lose the connection or hit a problem with Kettle.
When that happens, you will find a lot of leftover step records in the table R_STEP_ATTRIBUTE. In the error shown, [ID_TRANSFORMATION] = 2314.
So if you check the table R_TRANSFORMATION for [ID_TRANSFORMATION] = 2314, as in the query below, you probably won't find any transformation with that id.
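A quick way to confirm this, assuming the standard Kettle repository schema (the id 2314 comes from the error message above):
SELECT * FROM R_TRANSFORMATION WHERE ID_TRANSFORMATION = 2314;
-- no rows returned means the transformation record is gone and only the leftover step attributes remain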
After checking that, you can delete all the records related to that [ID_TRANSFORMATION], for example:
DELETE FROM R_STEP_ATTRIBUTE WHERE ID_TRANSFORMATION = 2314;
We just solved this issue by executing the following SQL statement:
DELETE
FROM R_STEP_ATTRIBUTE
WHERE ID_STEP NOT IN (SELECT ID_STEP FROM R_STEP)
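Before running that delete, the same predicate can be used as a SELECT to preview the orphaned rows (a sketch against the same repository tables):
SELECT *
FROM R_STEP_ATTRIBUTE
WHERE ID_STEP NOT IN (SELECT ID_STEP FROM R_STEP);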

Informatica: taking very long time when doing insert

I have one mapping which includes just one source table and one target table. The source table has 100 columns and around 33xxxx records. I need to use this tool to insert into the target table, and the logic is insert only. The Informatica version is 9.6.1 and the database is SQL Server 2012.
After I run the workflow, it inserts at about 5x/s, which is far too slow. I think it may be related to the number of columns.
Can anyone help me increase the speed?
Thanks a lot
I think I know why this happened: there are two ntext fields in this table, and that is why it takes so long.
You can try the options below.
1) Use the bulk option for the 'Target load type' attribute in the session if the target table does not have any indexes or keys on it.
2) If there is any SQL override in the Source Qualifier, try to tune the query.
3) Search for 'BUSY' in the session log and note down the busy percentage of each thread. Based on those percentages you will be able to identify exactly which thread (reader, transformation, writer) is taking the most time.
4) Try to use Informatica partitions, through which you can achieve parallel processing.
Thanks and Regards,
Raj
Consider the following points to increase performance:
Increase the "commit interval" size in the session-level properties.
Use "bulk load" in the session-level properties.
You can also use "partitioning" at the session level; to do this you need a partitioning license.
If your source is a database and you are doing a SQL override in the Source Qualifier transformation, you can also use hints to increase performance, as in the sketch below.
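Purely as an illustration (the table and column names are placeholders, not from the question), a Source Qualifier override on SQL Server might add a table hint like this:
SELECT col1, col2, col3
FROM src_table WITH (NOLOCK); -- reads without shared locks; dirty reads possible, so only use if that is acceptable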

Data truncated error via ODBC

I'm trying to query a table from C++ via ODBC using the 'SQL Driver'.
When I try to open the table using my query, I get a 'Data truncated.' error. I checked it out: the data I'm passing for that query is no longer than 255 characters, so I think it is a bug.
Has anyone solved that issue? Any suggestions?
Windows 7, SQL Server 2008, VS 2010.
Thanks in advance.
The fix:
Check not only the parameter you pass during Open but also the fields that are transferred in the DoFieldExchange() function; in my case it was another record field coming back from SQL that messed things up.
Refer to the link for the fix.

Recordset Update errors when updating sql_variant field

I'm using C++ and ADO to add data to a SQL Server 2005 database. When calling the Recordset Update method for a sql_variant column, I get the error DB_E_ERRORSOCCURRED with the message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done." If the value I'm adding is NULL, all works fine, and all values going to the fields that are not sql_variant types work.
Does anyone know what I might be doing wrong?
Thanks
[Edit] I have some more information. The value we are storing is the empty string. ADO appears to want to store this in the sql_variant as an nchar(0), which of course is not a valid SQL data type. Is there a way to get an empty string into a sql_variant using the ADO batch commands?
You are only being shown the outermost error there; as the error suggests, you need to check the inner errors to find out the problem.
Apologies, I'm a VB developer, but if you loop through the Errors collection on your Connection object you should be able to pinpoint the actual error.
From my classic ADO days, multiple-step errors usually pointed at trying to stuff something too big into your column, e.g. a string that's too long or a number with too high a precision.
Hope this helps.
Ed

Drop constraint only if it exists in MySQL Server 5.0

I want to know how to drop a constraint only if it exists. Is there any single-line statement in MySQL Server that will allow me to do this?
I have tried the following command but am unable to get the desired output:
alter table airlines
drop foreign key if exists FK_airlines;
Any help with this would really help me move forward in MySQL.
I do not believe this is possible in a single line, unless you are willing to detect the error and move on (not a bad thing).
The INFORMATION_SCHEMA database contains the info you need to tell whether the foreign key exists, so you could implement it as a two-step process, as sketched after the link.
http://dev.mysql.com/doc/refman/5.1/en/table-constraints-table.html
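A minimal sketch of the two-step approach, assuming the table and constraint names from the question:
SELECT CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'airlines'
  AND CONSTRAINT_NAME = 'FK_airlines'
  AND CONSTRAINT_TYPE = 'FOREIGN KEY';
-- if the query returns a row, the key exists and it is safe to run:
ALTER TABLE airlines DROP FOREIGN KEY FK_airlines;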
Yep, not possible; the IF EXISTS clause is available only for database, table, and view drops:
http://dev.mysql.com/doc/refman/5.0/en/replication-features-drop-if-exists.html
Yep, the two-step process is a good way to go, like gahooa said.