QuestDB invalid metadata version - questdb

When I run SELECT queries against a few tables in QuestDB, I started seeing the error
Invalid metadata version at fd=34. Metadata version does not match runtime version
I am running the QuestDB Docker image questdb/questdb:6.0.4, and I believe I created the table while using questdb/questdb:6.0.5. Is it possible to downgrade tables in QuestDB, or is there another way to fix the error?

It is possible to downgrade from some versions to others, but not always.
In particular, 6.0.5 can be downgraded to 6.0.4. The upgrade process leaves a file called _meta.v419 in every table directory. To downgrade, stop QuestDB, delete _meta and rename _meta.v419 to _meta in each table directory, then delete dbroot/upgrade.d and start QuestDB again.
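In shell terms, the procedure looks roughly like the sketch below; the container name and the dbroot path are assumptions here, so adjust both to your setup and take a backup of dbroot first:
docker stop questdb                     # stop QuestDB before touching files
DBROOT=/path/to/dbroot                  # placeholder: your mounted QuestDB root
for t in "$DBROOT"/*/; do               # iterate over the table directories
  if [ -f "${t}_meta.v419" ]; then      # only tables the upgrade touched
    rm "${t}_meta"                      # drop the 6.0.5 metadata
    mv "${t}_meta.v419" "${t}_meta"     # restore the 6.0.4 metadata
  fi
done
rm -rf "$DBROOT"/upgrade.d              # remove the recorded upgrade
docker start questdb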

Related

Redshift UNLOAD command with extension parameter throws syntax error

I am attempting to unload data from Redshift using the extension parameter to specify a CSV file extension. The CSV extension is useful because it allows the data files to be opened directly in spreadsheet software, for example.
The command I run is:
unload ('select * from public.mytable')
to 's3://mydomain/fZyd6EYPK5c/data_'
iam_role 'arn:aws:iam::xxxxxxx:role/my-role'
parallel off
format csv
extension '.csv.gz'
gzip
allowoverwrite;
This command throws an error message:
SQL Error [42601]: ERROR: syntax error at or near "extension"
It appears that the extension option is not recognized. I believe I have followed the official documentation and examples:
https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD_command_examples.html
select version();
PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.44903
I am testing the query from a Java application and from DBeaver.
Do I have a syntax error in my query, or could this be a Redshift bug? I have also asked on the AWS forum; replies appreciated.
It turns out the extension feature had not been rolled out to my Redshift cluster yet. I asked on the AWS forum, and the parameter is available from version 1.0.45698.
The extension parameter is a recently released feature, available from version 1.0.45698. You are seeing this error because your cluster version, 1.0.44903, is lower than that. Please wait until the next maintenance window, or try creating a new cluster/workgroup to get the updated version. Cluster versions and the features released in each are documented in the AWS documentation.
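To tell whether your cluster has picked up the feature yet, you can re-run the same check shown in the question:
select version();
-- the trailing "Redshift 1.0.NNNNN" token is the cluster version; the
-- extension parameter only works once it reports 1.0.45698 or later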

Internal Error in Redmine Initialization phase

I'm trying to set up Redmine with the following products:
redmine-4.0.7
Rails 5.2.4.2
Phusion Passenger 6.0.7
Apache/2.4.6
mysql Ver 14.14
I expected to see the initialization page; however, I got an 'Internal Error' page from http://mydomain/redmine/
I can see the following messages in log/production.log:
Completed 500 Internal Server Error in 21ms (ActiveRecord: 1.5ms)
ActiveRecord::StatementInvalid (Mysql2::Error: Can't find file: './redmine/settings.frm' (errno: 13 - Permission denied): SHOW FULL FIELDS FROM `settings`):
It seems I need ./redmine/settings.frm, but it isn't there.
Does anyone know how to create ./redmine/settings.frm and what its content should be?
The error is thrown by your database server (i.e. MySQL). It seems that MySQL does not have the required permissions to access the files where it stores the table data.
Usually, those files are handled (created, updated, and eventually deleted) entirely by MySQL, which requires specific access patterns to ensure consistent data. Because of that, you should strongly avoid manually changing any files under MySQL's control. Instead, only use SQL commands to update table structures and table data.
To fix this issue now, you need to fix the permissions on your MySQL data files so that MySQL can access them properly. Exactly what is required is unfortunately hard to say, since there can be various causes. If you have just set up your MySQL server, it might be best to start entirely fresh.
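On a typical Linux install, the fix can look like the sketch below; the data directory path, the mysql user, and the service name are assumptions, so verify all three against your own configuration first:
# placeholder paths: datadir is often /var/lib/mysql, the service
# name may be mysqld or mysql depending on the distribution
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl restart mysqld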

How to fix `user must specify LSN` when using AWS DMS for Postgres RDS

I'm trying to migrate and synchronize a PostgreSQL database using AWS DMS and I'm getting the following error.
Last Error Task error notification received from subtask 0, thread 0
[reptask/replicationtask.c:2673] [1020101] When working with Configured Slotname, user must
specify LSN; Error executing source loop; Stream component failed at subtask 0, component
st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ ; Stream component 'st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ'
terminated [reptask/replicationtask.c:2680] [1020101] Stop Reason FATAL_ERROR Error Level FATAL
I already created a replication slot and configured its name in the source endpoint.
DMS Engine version: 3.1.4
Does anyone know anything that could help me?
Luan -
I experienced the same issue - I was trying to replicate data from Postgres to an S3 bucket. I would check two things: your version of Postgres and the DMS version being used.
I downgraded my RDS postgres version to 9.6 and my DMS version to 2.4.5 to get replication working.
You can find more details here -
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
I wanted to try the newer versions of DMS (3.1.4 and 3.3.0 [beta]) as they have Parquet support, but I have gotten the same errors you mentioned above.
Hope this helps.
It appears AWS expects you to use the pglogical extension rather than test_decoding. You have to take these steps (a sanity check is sketched after the list):
add pglogical to shared_preload_libraries in the parameter group options
reboot
CREATE EXTENSION pglogical;
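After the reboot, you can sanity-check the setup from psql; these catalog queries are standard Postgres, not DMS-specific:
-- confirm the extension is visible after the reboot
SELECT name, installed_version FROM pg_available_extensions WHERE name = 'pglogical';
-- list replication slots and the plugin each one uses
SELECT slot_name, plugin, active FROM pg_replication_slots;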
On DMS 3.4.2 and Postgres 12.3, without the slotName setting, DMS created the slot by itself. Also make sure you exclude the pglogical schema from the migration task, as it contains unsupported data types.
P.S. When DMS hits resource limits, it fails silently. After resolving the LSN errors, I continued to get failures of the type Last Error Task 'psql2es' was suspended due to 6 successive unexpected failures Stop Reason FATAL_ERROR Error Level FATAL, without any errors in the logs. I resolved this using Advanced task settings > Full load tuning settings and tuning the parameters downward.
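For reference, those knobs live in the task-settings JSON under FullLoadSettings; the values below are purely illustrative, not recommendations (MaxFullLoadSubTasks is the main one to tune downward):
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 2,
    "CommitRate": 10000
  }
}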

EMR DynamoDB export failed because table capacity is set to on-demand

After we changed the DynamoDB table capacity to on-demand, the Data Pipeline job that exports the table failed with this error:
Exception in thread "main" java.lang.RuntimeException: Read throughput should not be less than 1. Read throughput percent: 0.0
at org.apache.hadoop.dynamodb.read.AbstractDynamoDBInputFormat.getSplits(AbstractDynamoDBInputFormat.java:51)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
Any workaround to this issue?
Thanks
--gsu
I'd contact AWS support to confirm, but I was told the EMR DynamoDB connector does not formally support tables using on-demand provisioning yet. So, more than likely, you need to switch the table back to provisioned capacity as a workaround.
Edit: As of 23 January 2019, the EMR connector for DynamoDB supports tables set to on-demand billing.
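If you are stuck on an older connector, switching back to provisioned capacity can be done with the AWS CLI; the table name and throughput values below are only placeholders:
aws dynamodb update-table \
  --table-name MyExportTable \
  --billing-mode PROVISIONED \
  --provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=100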
If the issue is still not resolved, you might need changes in three places:
Use an EMR release between emr-5.26.0 and emr-5.30.0.
Replace org.apache.hadoop.dynamodb.tools.DynamoDbExport with org.apache.hadoop.dynamodb.tools.DynamoDBExport. Notice the change in casing. The same applies to DynamoDBImport.
If you are using emr-dynamodb-connector, clone its latest version, build the emr-ddb-tools jar with mvn clean install, and use the generated emr-dynamodb-tools jar (currently version 4 or higher) in your arguments, as sketched below. Together, these should resolve the issue.
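A sketch of that build, assuming the public awslabs repository (the clone URL is an assumption on my part):
git clone https://github.com/awslabs/emr-dynamodb-connector.git
cd emr-dynamodb-connector
mvn clean install -DskipTests    # builds all modules, including the tools jar
ls emr-dynamodb-tools/target/    # pick up the generated jar for your arguments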
Also, there is currently an issue with EMR releases 5.31 or higher when using emr-dynamodb-tools, which produces an error related to the joda-time framework. I would stick to releases between emr-5.26.0 and emr-5.30.0.

Table "API_REQUEST_SUMMARY" not found;

I am trying to collect statistics in WSO2 AM using WSO2 BAM. When an API request completes, no errors are reported and the pass-through works.
However, when I try to access the statistics, I get the error Table "API_REQUEST_SUMMARY" not found; SQL statement:
SELECT time, year,month,day FROM API_REQUEST_SUMMARY order by time ASC limit 1 [42102-140]
There are no errors in the BAM logs, and the Hadoop job runs every few seconds without errors.
Suggestions on what to look at?
OS is Linux
AM is 1.4.0
BAM is 2.3.0
I followed the instructions here for combining the two:
http://docs.wso2.org/wiki/display/AM140/Monitoring+Using+WSO2+BAM
Thanks,
Chris
The exact same exception can happen if you forgot to copy the file /statistics/API_Manager_Analytics.tbox to the directory /repository/deployment/server/bam-toolbox.
In that case, only APIMGTSTATS_DB.db will be created, without any tables.
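The copy itself is a one-liner; <AM_HOME> and <BAM_HOME> below are placeholders for your actual install paths:
# placeholder paths; substitute your real AM and BAM home directories
cp <AM_HOME>/statistics/API_Manager_Analytics.tbox \
   <BAM_HOME>/repository/deployment/server/bam-toolbox/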
Thanks
Ajith.