I have a MariaDB 10.1.34 instance on AWS RDS. Among other things, the instance has a root user with the appropriate permissions.
I upgraded the instance to MariaDB 10.2.21 and wanted to run mysql_upgrade as suggested in the official upgrade document.
However I always get this error:
Got error: 1045: Access denied for user 'root'@'<instanceip>' (using password: YES) when trying to connect
Using the exact same credentials I can connect to the database with the mysql CLI, but not with mysql_upgrade or mysqlcheck.
So this works:
mysql --user=root --password=<pwd> --host=<hosturl> --port=3306 --protocol=tcp
And this does not:
mysql_upgrade --user=root --password=<pwd> --host=<hosturl> --port=3306 --protocol=tcp
SHOW GRANTS FOR root; gives this output:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX,
ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE,
REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER,
EVENT, TRIGGER ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '<sometext>' WITH GRANT OPTION
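One diagnostic worth running (a sketch, not a fix): the error names 'root'@'<instanceip>' while the grant is for 'root'@'%', so it can help to compare the account you asked for with the account the server actually matched:

```sql
-- Run from the mysql session that connects successfully.
-- USER() is the account you requested; CURRENT_USER() is the
-- account the server matched against its grant tables.
SELECT USER(), CURRENT_USER();
```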
Any ideas?
To save time for anybody trying to do the same thing - don't.
The behavior I encountered is by AWS design. mysql_upgrade is already executed internally during the instance upgrade process.
(Source for this statement)
We upgraded the Debian distribution on a Google Cloud instance, and it seems GCloud can no longer manage the users and their SSH keys on the instance.
I have the following tools installed:
google-cloud-packages-archive-keyring/now 1.2-499050965 all
google-cloud-sdk/cloud-sdk-bullseye,now 412.0.0-0 all
google-compute-engine-oslogin/google-compute-engine-bullseye-stable,now 1:20220714.00-g1+deb11 amd64
google-compute-engine/google-compute-engine-bullseye-stable,now 1:20220211.00-g1 all
google-guest-agent/google-compute-engine-bullseye-stable,now 1:20221109.00-g1 amd64
I cannot connect through the UI. It gets stuck on "Transferring SSH keys to the instance". The "troubleshooting" says that everything is fine.
When trying to connect via gcloud compute ssh it dies with
permission denied (publickey)
I still have access to the instance with some other user, but no new users are created and no SSH keys are transferred.
What else am I missing?
EDIT:
Have you added the SSH key to Project metadata or Instance metadata? If it's instance metadata, are project-level SSH keys blocked?
I haven't added any metadata.
Does your user account have the necessary permissions in the project to SSH to the instance (e.g. the Owner, Editor or Compute Instance Admin IAM role)?
Yes, this worked correctly until the Debian upgrade to bookworm. I could see that all the google-cloud related packages were removed, and I had to reinstall them.
Are you able to SSH to the instance using an SSH client, e.g. PuTTY? If yes, you need to make sure the Google account manager daemon is running on the instance.
I can nicely SSH with accounts which were active on the machine BEFORE the Debian upgrade. These accounts already have a .ssh directory correctly set up and working. New Google users cannot log in.
Try gcloud beta compute ssh --zone ZONE INSTANCE_NAME --project PROJECT
This works only for users active before the Debian upgrade.
If yes, you need to make sure the Google account manager daemon is running on the instance.
I installed the google-compute-engine-oslogin package which was missing, but it seems to have no effect and new users still cannot log in.
EDIT2:
When connecting to the serial console, it gets stuck on: csearch-dev google_guest_agent[2839775]: ERROR non_windows_accounts.go:158 Error updating SSH keys for gke-495d6b605cf336a7b160: mkdir /home/gke-495d6b605cf336a7b160/.ssh: no such file or directory. - the same issue: SSH keys are never transferred to the instance.
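Reading that log line, the agent appears to fail simply because the new user's home directory does not exist, so it cannot create .ssh inside it. A minimal sketch of the layout the agent expects (BASE defaults to a scratch directory here so the commands can be dry-run anywhere; on the instance it would be /home and the commands need root):

```shell
BASE="${BASE:-$(mktemp -d)}"          # on the real instance: BASE=/home, run as root
U=gke-495d6b605cf336a7b160            # the account named in the log
mkdir -p "$BASE/$U/.ssh"              # the directory the agent failed to create
chmod 700 "$BASE/$U" "$BASE/$U/.ssh"
touch "$BASE/$U/.ssh/authorized_keys" # where the agent writes transferred keys
chmod 600 "$BASE/$U/.ssh/authorized_keys"
# on the real instance, also: chown -R "$U:$U" "/home/$U"
```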
There are a few things you can do to troubleshoot the Permission denied (publickey) error message:
To start, ensure that you have properly authenticated with gcloud using an IAM user with the Compute Instance Admin role. You can do that by running gcloud auth login [USER] and then trying gcloud compute ssh again.
You can also verify that the Linux Guest Environment scripts are properly installed and running. Please refer to this page for information about validating, updating, or manually installing the guest environment.
Another possibility is that the private key was lost or that there is a mismatched keypair. To force gcloud to generate a new SSH keypair, first move ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub aside if present, for example:
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
Once that is done, try gcloud compute ssh [INSTANCE-NAME] again; a new keypair will be created and the public key added to the SSH keys metadata.
Refer to Sunny-j's answer on reviewing the serial-port logs of the affected instance for possible clues on the issue. Also refer to Resolving getting locked out of a Compute Engine for more information.
Edit1:
Refer to this similar SO question and to Troubleshooting using the serial console, which may help resolve your error.
EDIT2:
You may have git-all installed. It pulls in the older SysV init system, which disrupts cloud-init and virtually every step of the boot process. As a result, you are unable to SSH into your instance.
Check out these potential solutions to the above problem:
1. Try using git instead of git-all.
2. If git-all is necessary, use apt install --no-install-recommends -y git-all to prevent the installation of recommended packages.
Finally: if new users were previously able to SSH into the instance with a particular SSH key and now cannot, either the SSH daemon is not running or is otherwise broken, or you somehow removed that SSH key. It would appear the machine was damaged during the upgrade.
Why is this particular VM instance required? Does it contain significant data? If so, you can shut it down, mount its disk on a new VM instance, and copy the data off. (I'd recommend building another machine running these services, from the latest snapshot or from scratch, and using that instead.)
You should probably move to a new machine if it runs a service: even if you regain access to the instance, there is no way to tell what still works and what doesn't.
I am trying to back up an RDS database to S3 using native backup:
exec msdb.dbo.rds_backup_database
@source_db_name='db_name',
@s3_arn_to_backup_to='my_s3_arn/db_name.bak',
@overwrite_s3_backup_file=1;
After executing, I get this error from task status info:
Task execution has started. Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
Task has been aborted The provided token has expired.
The error says that your token has expired. This is the token of the role used to create native backups (the SQLSERVER_BACKUP_RESTORE option). This shouldn't normally happen.
Try these options, and after each one check whether you can create the native backup:
Reboot your database instance
Stop/Start your database instance
Recreate the option for the native backups
Recreate the entire option group
The last option is the one that fixed the problem when I tested this by revoking the role's sessions.
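After each attempt, the task state can be polled with the RDS-provided status procedure (the same source as the error text above); 'db_name' is a placeholder:

```sql
-- Poll the native backup/restore task status on RDS SQL Server.
exec msdb.dbo.rds_task_status @db_name='db_name';
```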
I'm trying to connect to AWS Aurora Serverless using MySQL Workbench via a Cloud9 IDE instance, but I get the following error when I test the connection in MySQL Workbench:
Authentication error. Please check that your username and password are correct and try again.
Details (Original exception message):
Bad authentication type, the server is not accepting this type of authentication.
Allowed ones are:
[u'publickey']
I'm able to connect to the Cloud9 instance via SSH using iTerm on my Mac. I did this by creating an OpenSSH-format public/private key pair with the command below and copying id_rsa.pub into the authorized_keys file on the Cloud9 instance:
ssh-keygen -o -b 4096
Once SSH'ed into the Cloud9 instance I was able to connect to Aurora completely fine, using:
mysql --user=... --password -h <aurora host>
But doing the same in MySQL Workbench returns the error mentioned above. I'm completely stumped as to why MySQL Workbench fails where iTerm doesn't. Any ideas, please?
Double- and triple-checked usernames. For SSH'ing I am using ec2-user@
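One possible culprit (an assumption worth testing, since the key was generated with ssh-keygen -o): MySQL Workbench's bundled SSH library may not read the newer OpenSSH private-key format, while command-line ssh does. The sketch below shows how to recognise the format and rewrite a key as classic PEM; it uses a throwaway key in a scratch directory, but on a real setup you would point -f at a copy of ~/.ssh/id_rsa:

```shell
KD=$(mktemp -d)
ssh-keygen -o -t rsa -b 2048 -N "" -f "$KD/id_rsa" -q  # new-format key, as in the question
head -n1 "$KD/id_rsa"   # "-----BEGIN OPENSSH PRIVATE KEY-----" means new format
ssh-keygen -p -m PEM -N "" -P "" -f "$KD/id_rsa"       # rewrite in place as classic PEM
head -n1 "$KD/id_rsa"   # "-----BEGIN RSA PRIVATE KEY-----" is readable by older tools
```

If the converted key lets Workbench connect, the tunnel settings were fine all along and only the key encoding was the issue.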
My needs are:
Have a MySQL database on AWS.
Minimise costs when resources are not being used
Minimise management effort to turn off resources when they are not being used
Be able to use user-friendly tools like MySQLWorkbench.
I am using Aurora Serverless for reasons 1, 2 and 3. However, Aurora Serverless can only be accessed from within a VPC, hence I will need something like a jumpbox/bastion host.
I am using Cloud9 because it can be configured to turn off its EC2 instance after 30 minutes of inactivity. This protects me from accidentally forgetting to turn off the jumpbox/bastion and incurring costs.
I could use EC2 with an auto-scaling group with a minimum of 0, but I have not explored that yet; I wanted to use Cloud9 as both an IDE and a jumpbox (a common use case for me is developing Lambda code while administering the database with MySQL Workbench).
Thanks
I'd like to use Ola Hallengren's MaintenanceSolution.sql script to create index and stats maintenance jobs on an Amazon RDS SQL Server 2008 R2 DB.
Documentation: http://ola.hallengren.com
SQL Script: http://ola.hallengren.com/scripts/MaintenanceSolution.sql
The problem is that currently, the sysadmin role is not available on RDS instances, and parts of this script require that role to execute.
I was able to get around most of this by executing the script on a custom DB rather than master, and by suppressing the IS_SRVROLEMEMBER('sysadmin') check on line 41.
However, I am running into issues later on with the SELECT * FROM msdb.dbo.sysjobs calls at the bottom of the script, and I cannot find a way to grant read permissions on msdb.dbo.sysjobs via an RDS SQL Server DB.
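For the msdb job-table permissions specifically, here is a sketch of the kind of grant I would expect to need (assumptions: the RDS master user can add members to the msdb agent roles, and 'my_login' is a placeholder; on 2008 R2 the sp_addrolemember form is required rather than ALTER ROLE ... ADD MEMBER):

```sql
USE msdb;
-- The SQL Agent fixed database roles in msdb (SQLAgentUserRole,
-- SQLAgentReaderRole, SQLAgentOperatorRole) gate access to dbo.sysjobs.
EXEC sp_addrolemember 'SQLAgentOperatorRole', 'my_login';
```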
Has anyone successfully run this script, including the jobs creation portion on an Amazon RDS SQL Server 2008 R2 DB?
I created an AWS RDS MSSQL instance using the Management Console, but I cannot create a new database. Creating a table works fine, though.
Did I miss anything in the configuration? Do I need to execute a special schema?
According to the documentation, you can create up to 30 databases per RDS instance.
http://aws.amazon.com/rds/faqs/#2
We would need more details to debug your particular issue (the parameters used to create the RDS instance, the exact error message, etc.).