We have PostgreSQL instances (1 master + 1 read replica) on Google SQL. Our Django (1.11.12) application uses these databases via the PostGIS engine. When we use the database, we see this error message:
django.db.utils.OperationalError: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
When I searched for a solution, the general advice was to change the hot_standby_feedback flag. But as you know, the Google SQL service restricts which settings can be changed, so I can't set that flag.
How can I fix this?
If “Google SQL” allows that, you can set max_standby_streaming_delay to -1 so that replication is delayed if a conflict is detected.
Then the query will not be canceled, but replication may lag if applying changes would cause a conflict.
Consider getting an “unfettered” PostgreSQL.
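For reference, on a self-managed standby the two alternatives look roughly like this in postgresql.conf (a sketch only; these flags were not settable on Cloud SQL at the time of writing):

```
# postgresql.conf on the standby (self-managed PostgreSQL)
max_standby_streaming_delay = -1   # never cancel standby queries; pause WAL replay instead
# or, alternatively:
hot_standby_feedback = on          # primary retains row versions the standby still needs (can cause bloat)
```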
If you would like to set hot_standby_feedback = on, I suggest you indicate your interest in the open feature request on Google Cloud Platform's Public Issue Tracker. That way someone can look into handling the query-conflict issues your Cloud SQL PostgreSQL instance has encountered.
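If that flag does become available, it would presumably be set like any other Cloud SQL database flag; an illustrative (and, until then, non-working) sketch with the gcloud CLI:

```
# Illustrative only: this fails as long as the flag remains unsupported on Cloud SQL
gcloud sql instances patch my-replica-instance \
    --database-flags=hot_standby_feedback=on
```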
I've also been monitoring an open thread in the Issue Tracker about making the max_standby_archive_delay and max_standby_streaming_delay flags available for users to set. You can track it there as well. Hope this helps!
I am working on a Power BI project.
I started with a MySQL database that had a small amount of data, and I managed to create the schema and a very basic dashboard.
After this, I tried to change the data source to a new MySQL database with a much larger amount of data in order to see how it performs. The tables are the same; the only things that change are the name of the database and the name of the schema.
The thing is that whenever I try to do this, an error pops up:
Fatal error encountered during data read.
Microsoft.Mashup.Evaluator.Interface.ErrorException
True
I don't know why this happens. I tried to follow some suggestions I saw in the official forums, but they didn't work for me.
I also cleared the cache (File -> Options and Settings -> Options -> Data Load -> Clear Cache), but nothing changed.
Any suggestions would be appreciated, because I am new to Power BI and, to be honest, I am quite lost with this error.
Are you using MySQL in a hosted environment, like an AWS RDS database?
I previously had a similar issue, getting a fatal error when importing data via a MySQL view.
The problem was that the processor used by the MySQL database was not powerful enough and was running at 100% CPU usage.
So I had to upgrade to a more powerful and efficient processor, and I also made some changes to the query to make it more efficient.
In your case, try adding indexes to the tables, and if you are using a hosted MySQL connection, try upgrading to a processor that can handle the load.
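For the indexing suggestion, a minimal sketch (the table and column names are placeholders; index the columns your Power BI queries filter or join on):

```sql
-- Hypothetical example: index a column that the report queries filter or join on
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Verify the index is actually used before reloading the data in Power BI
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```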
I'm trying to perform inserts on Amazon's Managed Cassandra service from IntelliJ's DataGrip IDE; however, I receive the following error:
Consistency level LOCAL_ONE is not supported for this operation. Supported consistency levels are: LOCAL_QUORUM
This is due to Amazon using the LOCAL_QUORUM consistency level for writes.
I tried to set the consistency level with CONSISTENCY LOCAL_QUORUM; before running other queries but it returned the following error:
line 1:0 no viable alternative at input 'CONSISTENCY' ([CONSISTENCY])
From my understanding, this is because CONSISTENCY is a cqlsh command and not a CQL command.
I cannot find any way to set the consistency level from within DataGrip so that I can run scripts and populate my tables.
Ultimately, I will use plain cqlsh if I cannot find a solution but I was hoping to use DataGrip as I find it useful and have many databases already configured. I hope someone can shed some light on the issue, this seems like it should be a basic feature.
I am Max from the DataGrip team, and the correct answer is:
It could be a JDBC driver issue where the desired method hasn't been implemented yet, since you're trying to run a pure cqlsh command as SQL. Follow the issue DBE-10638.
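If a programmatic workaround outside DataGrip is acceptable in the meantime, the consistency level can be set per session with a native driver. A minimal sketch with the DataStax Python driver (the keyspace/table names are placeholders, and the SSL context and SigV4 auth provider that Amazon Keyspaces requires are omitted):

```python
# Sketch only: make LOCAL_QUORUM the default consistency level for a session.
# The endpoint/port are the usual Amazon Keyspaces values; the required
# SSL context and SigV4 (or service-user) auth_provider are omitted here.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT

profile = ExecutionProfile(consistency_level=ConsistencyLevel.LOCAL_QUORUM)
cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
    # ssl_context=..., auth_provider=...  (required by Keyspaces, omitted)
)
session = cluster.connect()
session.execute(
    "INSERT INTO my_keyspace.my_table (id, val) VALUES (%s, %s)", (1, "x")
)
```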
It's a DataGrip bug, see https://youtrack.jetbrains.com/issue/DBE-10182:
Cassandra 'CONSISTENCY' command is not supported
So upvote that bug, and maybe add a comment that it makes DataGrip unusable for writing to Amazon Managed Cassandra, now called Amazon Keyspaces (for Apache Cassandra).
I am now using DataGrip version 2020.1.3 (paid license) and have encountered the problem as well: I cannot change the consistency level from ONE to LOCAL_QUORUM.
I have already opened an issue and am waiting for the investigation.
So I tried many tools and found that DBeaver works; the CONSISTENCY level can be selected in the connection configuration GUI.
https://dbeaver.com/download
I have a database on a Google Cloud SQL instance. I want to connect the database to pgBadger, which is used to analyse queries. I have tried various methods, but they all ask for the log file location.
I believe there are two major limitations preventing an easy setup that would allow you to use pgBadger with logs generated by a Cloud SQL instance.
The first is the fact that Cloud SQL logs are processed by Stackdriver and can only be accessed through it. It is actually possible to export logs from Stackdriver; however, the resulting format and destination will still not meet the requirements for using pgBadger, which leads to the second major limitation.
Cloud SQL does not allow changes to all of the required configuration directives. The major one is log_line_prefix, which currently does not follow the format pgBadger requires, and it is not possible to change it. You can see which flags are supported in Cloud SQL in the Supported flags documentation.
In order to use pgBadger you would need to reformat the log entries, while exporting them to a location where pgBadger could do its job. Stackdriver can stream the logs through Pub/Sub, so you could develop an app to process and store them in the format you need.
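As a rough illustration of that approach, here is a minimal sketch; the LogEntry field names and the prefix format below are assumptions you would need to verify against your actual export, and pgBadger would then be pointed at the rewritten log with its --prefix option:

```python
# Hypothetical sketch: pull Cloud SQL PostgreSQL log entries that Stackdriver
# exports to a Pub/Sub topic and rewrite them into lines pgBadger can parse.
# The field names (timestamp, textPayload) are assumptions to verify.
import json

from google.cloud import pubsub_v1

def handle_message(message):
    entry = json.loads(message.data.decode("utf-8"))
    timestamp = entry.get("timestamp", "")
    payload = entry.get("textPayload", "")          # assumed field name
    line = "{} [0]: {}".format(timestamp, payload)  # "[0]" = placeholder pid
    with open("postgres.log", "a") as logfile:
        logfile.write(line + "\n")
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "cloudsql-pg-logs")
future = subscriber.subscribe(subscription, callback=handle_message)
future.result()  # block so the subscriber keeps receiving messages
```

You would then run something like pgbadger --prefix '%t [%p]: ' postgres.log, although in practice the timestamp format and prefix will need tuning.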
I hope this helps.
I have an instance running a database on the Google Cloud Platform (MySQL Second Generation master). It is currently taking a backup of the database, and has been doing so for more than 13 hours!
When I try to log into my database through my terminal, I get the following error message:
ERROR: (gcloud.sql.connect) HTTPError 409: Operation failed because another operation was already in progress.
Any idea why it has taken so many hours to create a backup? Anything I can do to be able to view my database in the meantime?
All help is welcome - thank you!
Any idea why it has taken so many hours to create a backup?
This question can't be answered without inspecting your project and metrics. I suggest you either open a technical support case, if you have technical support, or raise your issue here with your project number and Cloud SQL instance name.
Anything I can do to be able to view my database in the meantime?
While a backup is in progress, you cannot log in to the instance. The best way to access the data (read-only) is to set up a read replica, which you will be able to access even while a backup of the master instance is in progress.
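Creating a read replica is a one-off operation; a sketch with the gcloud CLI (the instance names are placeholders):

```
# Create a read replica of the existing primary instance
gcloud sql instances create my-read-replica \
    --master-instance-name=my-master-instance
```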
I have recently been working with xDB in Sitecore 8 and am now looking for a way to clean out the current stats from xDB without re-installing Sitecore. I deleted the Mongo data files (as was suggested) but still see figures in Analytics in Sitecore; an iisreset did not help either. What am I doing wrong? (I am new to Sitecore, so I might be missing something.)
Have you tried to clean up only the MongoDB files, without the Reporting database?
If so, I think that is the source of your confusion. The way xDB works is that all tracking analytics data is written into Mongo and then, at session end, processed and saved into the Reporting database, which is a SQL database, just as it was previously in DMS. So you need to clean that database as well.
If you have access to SQL, you may use the __DeleteAllReportingData stored procedure as the quickest option.
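A minimal sketch of calling it, assuming you run it against the Reporting (Analytics) database (take a backup first, as it clears all aggregated reporting data):

```sql
-- Run against the Sitecore Reporting (Analytics) database, not core/master/web.
-- This clears all aggregated analytics data; take a backup first.
EXEC [dbo].[__DeleteAllReportingData];
```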
A more correct approach, which works well for instances where there is no direct access to the database, is to use the admin tool located at /sitecore/admin/RebuildReportingDB.aspx. There was also previously a module called Analytics Database Manager, but I do not know its current state.
Reference: Walkthrough: Rebuilding the reporting database (from official documentation)