I have an instance running a database on the Google Cloud Platform (MySQL Second Generation master). It is currently taking a backup of the database, and has been doing so for more than 13 hours!
When I try to log into my database through my terminal, I get the following error message:
ERROR: (gcloud.sql.connect) HTTPError 409: Operation failed because another operation was already in progress.
Any idea why it has taken so many hours to create a backup? Anything I can do to be able to view my database in the meantime?
All help is welcome - thank you!
Any idea why it has taken so many hours to create a backup?
This question can't be answered without inspecting your project and its metrics. I suggest you either open a technical support case (if you have a support package) or raise your issue here with your project number and Cloud SQL instance name.
Anything I can do to be able to view my database in the meantime?
While a backup is in progress you cannot log in to the instance. The best way to access the data (read-only) is to set up a read replica, which you will be able to access even while a backup of the master instance is running.
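As a rough sketch (the instance names and region below are placeholders, not taken from your project), creating and connecting to a read replica with the gcloud CLI looks something like this:
# Create a read replica of the busy master instance
gcloud sql instances create my-replica \
    --master-instance-name=my-master-instance \
    --region=europe-west1
# Once the replica is ready, connect to it instead of the master
gcloud sql connect my-replica --user=root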
We have restored a GCP project that had several resources associated with it, among them a Firestore database. After restoring the project and verifying everything, I noticed that the database was not shown in the console; it asks me to create a new one, which is strange since in theory any resource should be kept if the project is restored within 30 days.
I tried to recreate the database and received an error saying that a database already exists for this project.
It seems like a big GCP issue. Has anyone experienced this before?
I am going through the AWS DeepRacer: Driven by Reinforcement Learning training located here:
https://www.aws.training/learningobject/wbc?id=32143 and am unable to successfully create a model.
When attempting to create a model in the DeepRacer console using the default configuration, I have not been able to do so successfully.
I continually get an error saying "Failed to create model. Unable to create your model." with no other information, and there are no relevant logs generated in CloudWatch.
I have tried resetting my Account Resources and recreating them, but that has not changed the result.
Any insight into what's going on?
My model failed last night for the same reason: it trained for just over an hour and then crashed.
I cloned the model and re-trained it, and the 3-hour run worked for me.
I recommend you check your quota for ML.C4.2XL instances. Some accounts have it set to zero, and I don't think the UI handles that too well. If that is the case, support is quite helpful about increasing it.
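In case it helps, here is a hedged sketch of checking that limit from the AWS CLI; DeepRacer training runs on SageMaker instances, and the service code and quota-name filter below are assumptions you should verify against your own account:
# List SageMaker quotas and filter for the ml.c4.2xlarge training limit
aws service-quotas list-service-quotas --service-code sagemaker \
    --query "Quotas[?contains(QuotaName, 'ml.c4.2xlarge')].[QuotaName,Value]" \
    --output table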
I guess if you are running the DeepRacer instance as the root user it will throw such an error. Create an IAM user and grant it admin rights. Hope it works.
We have PostgreSQL instances (1 master + 1 read replica) on Google SQL. Our Django (1.11.12) application uses these databases via the PostGIS engine. When we try to use the database, we see this error message:
django.db.utils.OperationalError: canceling statement due to conflict with recovery
DETAIL: User query might have needed to see row versions that must be removed.
When I search for a solution, the answers generally say that I need to change the hot_standby_feedback flag. But as you know, the Google SQL service has some restrictions on settings, and I can't set this flag.
How can I fix this?
If “Google SQL” allows that, you can set max_standby_streaming_delay to -1 so that replication is delayed if a conflict is detected.
Then the query will not be canceled, but replication may lag if applying changes would cause a conflict.
Consider getting an “unfettered” PostgreSQL.
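For reference, on a self-managed standby (not Cloud SQL) the change suggested above is just a configuration edit; the data directory path here is a placeholder:
# In the standby's postgresql.conf: never cancel queries due to replication conflicts
max_standby_streaming_delay = -1
# Then reload the standby's configuration
pg_ctl reload -D /path/to/standby/data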
If you would like to set hot_standby_feedback = on, I suggest that you indicate your interest in the open feature request on Google Cloud Platform's Public Issue Tracker. That way someone can look into handling the query-conflict issues your Cloud SQL PostgreSQL instance encountered.
I've also been monitoring an open thread in the Issue Tracker about making max_standby_archive_delay and max_standby_streaming_delay flags available to users to set. You can track it there as well. Hope this helps!
When it comes to databases, we want to leave managing them to the pros, which is why we went for a managed solution in the form of a Cloud SQL 2nd gen db instance. Today the instance stopped responding. I clicked restart; it has been restarting for hours and is not responding. I have also tried cloning the instance, which is not responding either.
I don't know what else to do, our db is crippled and the service that uses it is down. These things happen, fine.
The thing that shocked me is that I am unable to contact anybody to resolve this problem. I understand that I can pay for a support subscription, at $150 per month and up. This confuses me, though: the GCloud console UI is not responding. Am I incorrect in assuming I should not have to pay for support just for the core product to work?
This leads me to my main question, if I want to continue using Google Cloud products in production, do I NEED a support subscription?
The same happened to us yesterday. The Cloud SQL instance did not respond for an hour and a half (from 18:00 to 19:30 GMT+1).
We couldn't do anything at all; we tried to back up the instance to a bucket, but the command kept returning an error saying that another operation was in progress.
We are a small startup and can't pay for a support plan, but when we signed up for the Cloud SQL service we thought this kind of situation wouldn't happen.
Honestly, after this I believe that Cloud SQL is not a good option unless you also contract a gold or platinum support plan. It is frustrating when something fails and you cannot do anything about it, or even report the error.
Try the gcloud command-line tool in your shell instead of the console UI. Start by exporting the data from your SQL instance to a Google Cloud Storage bucket using this command:
gcloud sql instances export <sql-instance-name> gs://<bucket-name>/backup.sql
The SQL instance's service account has read and write access to the Google Cloud Storage bucket by default.
Create a new SQL instance using this command:
gcloud sql instances create <new-sql-instance-name>
Now, add the data to the new SQL instance using this command:
gcloud sql instances import <new-sql-instance-name> gs://<bucket-name>/backup.sql
You can get free or premium support here. You do not need a subscription to get help; it all depends on your needs and the level of urgency you anticipate for possible future problems.
If you have a recent backup of your database, you may consider re-creating it in another instance from that backup.
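As a hedged sketch with placeholder names, listing the backups of the broken instance and restoring one into another instance can be done from the gcloud CLI roughly like this:
# List the backups available for the unresponsive instance
gcloud sql backups list --instance=<broken-instance-name>
# Restore a chosen backup into another, already created instance
gcloud sql backups restore <backup-id> \
    --restore-instance=<new-instance-name> \
    --backup-instance=<broken-instance-name>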
You may consider posting your issue in the Google Cloud SQL Product Issue Tracker. This way, it will enjoy much better visibility from developers and Google support, without attracting any extra costs.
I'm very new to Amazon Web Services, especially their RDS system. I have set up an Oracle database (11.2) and I now want to import a dump we made locally from our server using expdp. Apparently, the ability to use expdp/impdp on AWS is quite new. From what I understand, when creating an Oracle database on RDS, a DATA_PUMP_DIR is automatically created. What is less obvious is how to access this directory and make our local dump available to RDS. I've tried to read the following information on their website: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html. But there are a lot of things I don't understand:
Why do I have to set up an EC2 instance when the dump file is actually on my local computer (and I can remotely access the RDS database using sqlplus or SQL Developer)?
They often use the 'sys' or 'system' user in their examples but, when reading the security settings for Oracle, it says that these users are made unavailable on RDS, i.e. you cannot connect to a database as SYSDBA.
Could someone please point me to a simple and clear tutorial on how to use impdp on AWS?
Thanks
It is possible to use Data Pump on RDS now.
duduklein's answer was correct when he wrote it, but the RDS docs now have details about using Oracle Data Pump. The doc page URL is unchanged from the link originally posted in the question (nice job, Amazon!), but it now has new content on using Data Pump.
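Once a dump file has been transferred into the instance's DATA_PUMP_DIR (the linked doc page describes the transfer options), the import itself can be run from any client machine against the RDS endpoint. Everything below (credentials, endpoint, file and schema names) is a made-up example, not taken from the docs:
# impdp connects over SQL*Net; DIRECTORY refers to a directory object
# on the RDS server, not to a path on your local machine
impdp admin/secret@my-rds-endpoint.rds.amazonaws.com:1521/ORCL \
      DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=backup.dmp \
      SCHEMAS=myschema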
It's not possible for now. I have just contacted Amazon (through premium support) about the same issue, and they told me that this is a feature request that has already been passed to the RDS team, but there is no estimate of when it will be available.
The only way you can import file dumps is by using the "exp" utility instead of "expdp". In that case, you can use the "imp" utility to import the data into RDS.
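A minimal sketch of that older route, with placeholder connection strings and file names: exp and imp read and write the dump file on the client machine, which is why no access to a directory on the RDS host is needed.
# Export from the source database into a file on the client machine
exp myuser/secret@source-db FILE=mydata.dmp OWNER=myuser
# Import that same client-side file into the RDS database
imp admin/secret@my-rds-endpoint.rds.amazonaws.com:1521/ORCL \
    FILE=mydata.dmp FROMUSER=myuser TOUSER=myuser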