I cannot upgrade a First Generation Google Cloud SQL instance to a Second Generation instance using the MySQL Second Generation upgrade wizard in the console.
On the check configuration screen, I get a "Tables that use the MEMORY storage engine found" error, which blocks me from proceeding, as shown in the screenshot.
Per the documentation at Upgrading a First Generation instance to Second Generation, I ran the query it recommends:
SELECT table_schema, table_name, table_type
FROM information_schema.tables
WHERE engine = 'MEMORY' AND
table_schema NOT IN
('mysql','information_schema','performance_schema');
but it found no tables using the MEMORY storage engine, as shown below.
I managed to resolve the error and proceed with the upgrade. Before starting the upgrade process, I had to remove a table from the performance_schema database that did not use the PERFORMANCE_SCHEMA storage engine. It seems the Google Cloud console threw a misleading error!
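For anyone checking the same thing, a query along these lines (a sketch; it just points the documented filter at performance_schema instead of excluding it) would surface the offending table:
-- List tables inside performance_schema whose engine is not PERFORMANCE_SCHEMA;
-- such tables can trip the upgrade wizard's configuration check.
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE table_schema = 'performance_schema'
  AND engine <> 'PERFORMANCE_SCHEMA';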
I am attempting to clone a database. I was previously able to clone it in the console, but now I want to create a small script to automate this, and it fails with the following error message:
(gcloud.sql.instances.clone) [ERROR_RDBMS] unable to update the following flags: cloudsql.enable_password_validation
If I attempt to clone it in the console, I get the same error shown above.
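For reference, the failing step boils down to a plain clone call, something like this (instance names are placeholders):
# Hypothetical clone invocation; my-source-instance and my-clone are placeholders.
gcloud sql instances clone my-source-instance my-clone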
I looked up the documentation and enable_password_validation does not seem to be in the list of supported flags, which would explain why it can't update it.
If I run gcloud sql instances describe my-instance, I don't see the flag in question.
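For example, a describe call filtered to the configured database flags would be a quick way to check (the settings.databaseFlags field path is my assumption about where such flags would show up):
# List any database flags configured on the instance (field path is an assumption).
gcloud sql instances describe my-instance --format="value(settings.databaseFlags)"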
But running on the source instance:
SELECT * FROM pg_settings
yields this row in particular:
name              | cloudsql.enable_password_validation
setting           | off
unit              | NULL
category          | Customized Options
short_desc        | Sets whether to enable Cloud SQL password validation.
extra_desc        | NULL
context           | superuser
vartype           | bool
source            | configuration file
min_val           | NULL
max_val           | NULL
enumvals          | NULL
boot_val          | on
reset_val         | off
sourcefile        | /pgsql/data/postgresql.auto.conf
sourceline        | 3
pending_restart   | False
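A narrower query over the same catalog view returns just that row (standard pg_settings columns):
-- Show only the flag in question instead of every setting.
SELECT name, setting, boot_val, reset_val, pending_restart
FROM pg_settings
WHERE name = 'cloudsql.enable_password_validation';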
Any advice on how to solve this?
There is an ongoing issue with password validation in Cloud SQL Postgres instances. The issue involves the exact flag that is giving you problems, cloudsql.enable_password_validation:
Diagnosis: Affected postgres instances from a recent release have the following flag set and are unable to remove or disable this flag: cloudsql.enable_password_validation=on. This flag does not appear in Cloud Console, and attempting to disable flag via gcloud returns error where the flag is not recognized or supported. Password validation occurs on every new client connection but is limited to 50 QPS, and thus higher rates will return errors.
When did this issue start occurring, and have you attempted the clone again since then? I ask because the incident has received several updates. If you continue experiencing issues, you could open a support case with GCP, as the status page recommends.
EDIT (2/24/2022)
I wanted to update this answer. The issue seems to be resolved as shown in the status page of Google Cloud:
The issue with Cloud SQL has been resolved for all affected instances as of Tuesday, 2022-02-22 14:30 US/Pacific. We thank you for your patience while we worked on resolving the issue.
If you still see this error, you can update the question to confirm that it was not resolved as part of the outage resolution.
I'm trying to migrate and synchronize a PostgreSQL database using AWS DMS and I'm getting the following error.
Last Error Task error notification received from subtask 0, thread 0
[reptask/replicationtask.c:2673] [1020101] When working with Configured Slotname, user must
specify LSN; Error executing source loop; Stream component failed at subtask 0, component
st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ ; Stream component 'st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ'
terminated [reptask/replicationtask.c:2680] [1020101] Stop Reason FATAL_ERROR Error Level FATAL
I already created a replication slot and configured its name in the source endpoint.
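For what it's worth, the slot was created roughly like this (the slot name is a placeholder, and test_decoding is an assumption about the plugin in use):
-- Create a logical replication slot using the test_decoding plugin
-- and inspect the LSNs it is currently holding.
SELECT * FROM pg_create_logical_replication_slot('dms_slot', 'test_decoding');
SELECT slot_name, plugin, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;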
DMS Engine version: 3.1.4
Does anyone know anything that could help me?
Luan -
I experienced the same issue - I was trying to replicate data from Postgres to an S3 bucket. I would check two things: your version of Postgres and the DMS version being used.
I downgraded my RDS postgres version to 9.6 and my DMS version to 2.4.5 to get replication working.
You can find more details here -
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
I wanted to try the newer versions of DMS (3.1.4 and 3.3.0 [beta]) since they have Parquet support, but I have gotten the same errors you mentioned above.
Hope this helps.
It appears AWS expects you to use the pglogical extension rather than test_decoding. You have to (see the verification sketch after these steps):
add pglogical to shared_preload_libraries in parameter options
reboot
CREATE EXTENSION pglogical;
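Once the instance has restarted, you can verify the extension and see which plugin each replication slot uses with something like this (nothing DMS-specific here):
-- Confirm pglogical is installed and check the plugin behind each replication slot.
SELECT extname, extversion FROM pg_extension WHERE extname = 'pglogical';
SELECT slot_name, plugin, active FROM pg_replication_slots;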
On DMS 3.4.2 and Postgres 12.3, without the slotName= setting, DMS created the slot by itself. Also make sure you exclude the pglogical schema from the migration task, as it contains unsupported data types.
P.S. When DMS hits resource limits, it fails silently. After resolving the LSN errors, I continued to get failures of the form Last Error Task 'psql2es' was suspended due to 6 successive unexpected failures Stop Reason FATAL_ERROR Error Level FATAL, without any errors in the logs. I resolved this via Advanced task settings > Full load tuning settings, tuning the parameters downward.
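In task-settings JSON terms, that means lowering values under FullLoadSettings; a sketch, where the numbers are purely illustrative and not a recommendation:
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 4,
    "CommitRate": 5000
  }
}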
I am trying to create a cloudsql instance with the following command:
gcloud beta sql instances create sql-instance-1 --tier=db-f1-micro --region=asia-south1 --network=default --storage-type=HDD --storage-size=10GB --authorized-networks=XX.XXX.XX.XX/XX
I don't need the instance sql-instance-1 running all the time, so I keep an SQL dump file and re-create the instance whenever I need the database. When I run this command it fails with the following error:
ERROR: (gcloud.beta.sql.instances.create) Resource in project [my-project-id] is the subject of a conflict: The instance or operation is not in an appropriate state to handle the request.
From what I understand, gcloud is complaining that the instance name was used before, even though that instance has already been deleted. When I change the name to a new, unused name, the command works fine. The problem is that I then need to give a new name every time I re-create the instance from the dump.
My questions are:
Is this expected behavior, i.e. must the name of a Cloud SQL instance be unique within a project and never have been used before?
I also found that the --network option is not recognized by gcloud; it seems to work only with gcloud beta, as explained here. When is this expected to become GA?
This is indeed expected behaviour. From the documentation:
You cannot reuse an instance name for up to a week after you have
deleted an instance.
Regarding the --network flag and its schedule for GA, there is no ETA for its release outside of beta. However, its release will be listed in the Google Cloud SDK Release Notes, which you can follow by subscribing to the google-cloud-sdk-announce group.
Solved
Found the problem: I had a schema defined in my Java class.
I have a cloud foundry app which uses a mysql data service.
It works great but I want to add another database table.
When I re-deploy to Cloud Foundry with the new entity class, it does not create the table, and the log shows the following error:
2012-08-12 20:42:23,699 [main] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - CREATE command denied to user 'ulPKtgaPXgdtl'#'172.30.49.146' for table 'acl_class'
The schema is created dynamically via the service. All you need to do is bind the application to the service and use the cloud namespace. As you mentioned above, removing the schema name from your configuration will resolve the issue.
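In JPA/Hibernate terms, the usual culprit is a hard-coded schema on the entity mapping; a hypothetical before/after, where the entity, table, and schema names are placeholders:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Before: a hard-coded schema keeps Hibernate from creating the table in the
// database provisioned by the bound Cloud Foundry MySQL service.
// @Table(name = "acl_class", schema = "my_local_db")
@Entity
@Table(name = "acl_class")  // After: no schema attribute; the service's schema is used.
public class AclClass {
    @Id
    private Long id;
}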
I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQLCE to the clients. Clients will only sync with the server; no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
The schema created on SQLCE through client provisioning does not set up default values for uniqueidentifier columns.
FK constraints are not created on the client.
Here is the code being used to create the client schema (pulled from an example I found somewhere online):
// Requires: using System.Data.SqlClient; using System.Data.SqlServerCe;
// using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.SqlServer;
// using Microsoft.Synchronization.Data.SqlServerCe;
static void Provision()
{
    // connection to the SyncDB server database (SQL Express) that holds the scope
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create the CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // start the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider that has a CreatingSchema event. This code example actually shows setting the RowGuid property on a column which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining either of these.
I'm further confused about whether I should make schema changes at provisioning time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only. It gets fired the first time a sync is initiated and the framework detects that the client database has not been provisioned (it then creates the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
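A post-provisioning step can be as simple as running a few ALTER TABLE statements against the .sdf after Apply() completes; a sketch, with hypothetical table, column, and constraint names (check the exact syntax against what SQL CE accepts):
// Assumes clientConn (the SqlCeConnection from Provision()) is open.
using (SqlCeCommand cmd = clientConn.CreateCommand())
{
    // Add a default for a uniqueidentifier column.
    cmd.CommandText = "ALTER TABLE Orders ALTER COLUMN OrderId SET DEFAULT NEWID()";
    cmd.ExecuteNonQuery();

    // Re-create a foreign key that scope provisioning did not copy over.
    cmd.CommandText = "ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Customers " +
                      "FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId)";
    cmd.ExecuteNonQuery();
}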
While researching solutions that don't use SyncAgent, I found that the following also works (in addition to my commented solution above); a rough sketch follows the list:
Provision the client and let the framework create the client [user] schema. Now you have your tables.
Deprovision - this removes the restrictions on editing the tables/columns
Make your changes (in my case setting up Is RowGuid on PK columns and adding FK constraints) - this actually required me to drop and re-add a column, since you can't change the "Is RowGuid" property on an existing column
Provision again using DbSyncCreationOption.CreateOrUseExisting
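A rough sketch of that sequence, reusing clientConn and scopeDesc from the Provision() method above (SqlCeSyncScopeDeprovisioning is my assumption for the deprovisioning API; verify against your Sync Framework version):
// 1. Provision once so the framework creates the user tables (as in Provision() above).

// 2. Deprovision the scope so the tables/columns can be altered freely.
SqlCeSyncScopeDeprovisioning deprovision = new SqlCeSyncScopeDeprovisioning(clientConn);
deprovision.DeprovisionScope(ScopeNames.Main);

// 3. Make the schema changes (drop/re-add the PK column as ROWGUIDCOL, add FK
//    constraints) with SqlCeCommand, along the lines sketched in the earlier answer.

// 4. Provision again, reusing the tables that already exist.
SqlCeSyncScopeProvisioning reprovision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
reprovision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);
reprovision.Apply();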