Bad authentication - MySQL Workbench to serverless Aurora via Cloud9

I'm trying to connect to AWS Aurora Serverless using MySQL Workbench, via a Cloud9 IDE instance, but I get the following error when I test the connection:
Authentication error. Please check that your username and password are correct and try again.
Details (Original exception message):
Bad authentication type, the server is not accepting this type of authentication.
Allowed ones are:
[u'publickey']
I'm able to connect to the Cloud9 instance via SSH using iTerm on my Mac. I did this by creating an OpenSSH-format public/private key pair with the command below and copying id_rsa.pub into the authorized_keys file on the Cloud9 instance:
ssh-keygen -o -b 4096
Once SSH'ed into the Cloud9 instance I was able to connect to Aurora completely fine, using:
mysql --user=... --password -h <aurora host>
But doing the same in MySQL Workbench returns the error mentioned above. I'm completely stumped as to why MySQL Workbench fails where iTerm doesn't. Any ideas please?
I have double- and triple-checked the usernames. For SSH I am using ec2-user@.
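For reference, the tunnel Workbench tries to build can be reproduced by hand with a plain port-forward; this is only a sketch with placeholder host names and key path, so substitute your own:
# forward a local port through the Cloud9 instance to Aurora
ssh -i ~/.ssh/id_rsa -N -L 3307:your-aurora-cluster.cluster-abc.us-east-1.rds.amazonaws.com:3306 ec2-user@your-cloud9-public-ip
# in a second terminal, connect through the tunnel
mysql --user=... --password -h 127.0.0.1 --port=3307
If this manual tunnel works, the key and the Aurora credentials are fine, and the failure is limited to how the SSH key is supplied in the Workbench connection settings.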
My needs are:
1. Have a MySQL database on AWS.
2. Minimise costs when resources are not being used.
3. Minimise management effort to turn off resources when they are not being used.
4. Be able to use user-friendly tools like MySQL Workbench.
I am using Aurora Serverless for reasons 1, 2 and 3. However, Aurora Serverless can only be accessed from within a VPC, hence I will need something like a jumpbox/bastion host.
I am using Cloud9 because it can be configured to turn off its EC2 instance after 30 minutes of inactivity. This protects me from accidentally forgetting to turn off the jumpbox/bastion and incurring costs.
I could use EC2 with an Auto Scaling group with a minimum of 0, but I haven't explored that yet; I wanted to use Cloud9 as both an IDE and a jumpbox (a common use case for me is developing Lambda code while administering the database in MySQL Workbench).
Thanks

Related

Google Cloud not managing users/SSH in VMs

We have upgraded the Debian distribution in a Google Cloud instance, and it seems GCloud can no longer manage the users and their SSH keys on the instance.
I have the following packages installed:
google-cloud-packages-archive-keyring/now 1.2-499050965 all
google-cloud-sdk/cloud-sdk-bullseye,now 412.0.0-0 all
google-compute-engine-oslogin/google-compute-engine-bullseye-stable,now 1:20220714.00-g1+deb11 amd64
google-compute-engine/google-compute-engine-bullseye-stable,now 1:20220211.00-g1 all
google-guest-agent/google-compute-engine-bullseye-stable,now 1:20221109.00-g1 amd64
I cannot connect through the UI; it gets stuck on "Transferring SSH keys to the instance", while the troubleshooting panel says everything is fine.
When I try to connect via gcloud compute ssh it dies with:
Permission denied (publickey)
I still have access to the instance with some other user, but no new users are created and no SSH keys transferred.
What else am I missing?
EDIT:
Have you added the SSH key to the project metadata or the instance metadata? If it's instance metadata, are project-level SSH keys blocked?
I haven't added any metadata.
Does your user account have the necessary permissions in the project to SSH to the instance (e.g. the Owner, Editor or Compute Instance Admin IAM role)?
Yes, this worked correctly until the Debian upgrade to bookworm. I could see that all the google-cloud related packages had been removed, and I had to reinstall them.
Are you able to SSH to the instance using an SSH client, e.g. PuTTY? If yes, you need to make sure the Google account manager daemon is running on the instance.
I can SSH just fine with accounts that were active on the machine BEFORE the Debian upgrade. Those accounts already have a .ssh directory correctly set up and working. New Google users cannot log in.
Try gcloud beta compute ssh --zone ZONE INSTANCE_NAME --project PROJECT
This works only for users active before the Debian upgrade.
"If yes, you need to make sure the Google account manager daemon is running on the instance."
I installed the google-compute-engine-oslogin package, which was missing, but it seems to have no effect and new users still cannot log in.
EDIT2:
When connecting to the serial console, it gets stuck on:
csearch-dev google_guest_agent[2839775]: ERROR non_windows_accounts.go:158 Error updating SSH keys for gke-495d6b605cf336a7b160: mkdir /home/gke-495d6b605cf336a7b160/.ssh: no such file or directory.
The same issue: SSH keys are never transferred to the instance.
There are a few things you can do to troubleshoot the Permission denied (publickey) error message:
To start, ensure that you have properly authenticated yourself with gcloud using an IAM user with the Compute Instance Admin role. You can do that by running gcloud auth login [USER] and then trying gcloud compute ssh again.
You can also verify that the Linux Guest Environment scripts are properly installed and running. Please refer to this page for information about validating, updating, or manually installing the guest environment.
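As a quick check (assuming a systemd-based image such as the Debian ones here, and the package names shown above), you can confirm the guest agent is installed and running and watch its log while retrying the connection:
# confirm the guest agent service is present and active
sudo systemctl status google-guest-agent
# follow its log while re-attempting gcloud compute ssh from another machine
sudo journalctl -u google-guest-agent -f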
Another possibility is that the private key was lost or that we have a mismatched keypair. To force gcloud to generate a new SSH keypair, you must first move ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub if present, for example:
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
Once that is done, try gcloud compute ssh [INSTANCE-NAME] again; a new keypair should be created and the public key added to the SSH keys metadata.
Refer to Sunny-j's answer on reviewing the serial-port logs of the affected instance for possible clues, and to Resolving getting locked out of a Compute Engine for more information.
Edit1:
Refer to this similar SO question and to Troubleshooting using the serial console, which may help resolve your error.
EDIT2:
Maybe you have git-all installed. Installing it pulls in the older SysV init system in place of the default init, which disrupts cloud-init and virtually every step of the boot process; as a result you are unable to SSH into your instance.
Check out these potential solutions to that problem:
1. Try using git instead of git-all.
2. If git-all is necessary, install it with apt install --no-install-recommends -y git-all so that the recommended packages are not pulled in.
Finally: if you were previously able to SSH into the instance with a particular SSH key and new users now cannot, either the SSH daemon is not running or is otherwise broken, or that SSH key was somehow removed. It would appear the machine was damaged during the upgrade.
Why is this particular VM instance required? Does it contain significant data? If so, you can turn it off, mount its disk on a new VM instance, and copy the data off. (I'd recommend building another machine running these services from the latest snapshot or from scratch and using that instead.)
If the instance runs a service, you should probably move to a new machine: there is no way to tell what still works and what doesn't, even if you are able to access the instance.

How can I use a PEM file in AWS again?

Currently I am facing an issue with AWS. A project is already deployed on an AWS server, and I always connect to it using that project's PEM key. Since last week I am no longer able to connect to the server with the PEM key. One suggested solution is to create a new instance, which would restore my access but would make me lose all my data and the database. Is this caused by a virus or something else? I'm badly stuck; any help would be appreciated.
There may be a stale or incorrect key entry involved. Remove the old host key from your local known_hosts file and connect again.
Remove the stale host key:
ssh-keygen -R [hostname]
Then SSH again, with verbose output so you can see which key is being offered:
ssh -Tv ec2-user@example.com -i ~/mykey.pem
In order to use the existing EBS volume (with your data) on a new EC2 instance with a new SSH key (see the CLI sketch after this list):
Create a snapshot of the current instance's EBS volume and create a new volume from it.
Create a new instance with a new SSH key.
Stop the new instance, detach its root volume, and attach the previously created volume as the boot volume.
Start the new instance and you should be able to log in with the new SSH key.
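A rough AWS CLI sketch of those steps; the IDs are placeholders and the root device name depends on your AMI (often /dev/xvda or /dev/sda1):
# 1. snapshot the old instance's root volume and create a new volume from the snapshot
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "rescue copy"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# 2. stop the new instance (launched with the new key pair), detach its root volume,
#    and attach the restored volume in its place
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb --instance-id i-0123456789abcdef0 --device /dev/xvda
# 3. start it again and log in with the new key
aws ec2 start-instances --instance-ids i-0123456789abcdef0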
The log indicates that you do not have any networking issues. It is the ssh server on the instance that is rejecting your connection.
The first thing to confirm is that you are connecting to the correct Amazon EC2 instance. If an EC2 instance is stopped and started again, it might change its public IP address (depending on how it is configured). Therefore, make sure that you are connecting to the right instance.
Next, confirm that you are using the correct username. You are using ubuntu@, which is correct if the instance is using an Ubuntu AMI. However, it is possible to create additional users on a Linux computer, and PEM files are associated with specific users. Therefore, confirm that this is the correct username for use with that PEM file.
Next, confirm that you are using the correct PEM file. The PEM file should contain the private half of a keypair whose public half is stored on the instance in the user's ~/.ssh/authorized_keys file. The log indicates that the instance is rejecting the provided keypair, so you might be using the wrong one.
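One quick way to check this is to derive the public half from the PEM file and compare it with what is stored on the instance (file names here are placeholders):
# print the public key that corresponds to the private key in the PEM file
ssh-keygen -y -f ~/mykey.pem
# compare the output with the matching line in ~/.ssh/authorized_keys on the instance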
Failing all this, there are some recommended steps available on: Walkthrough: Reset Passwords and SSH Keys on Amazon EC2 Instances - AWS Systems Manager
However, they might not work since you are using an Ubuntu instance, which might not have standard AWS software installed.
Let us know how you go!

Self describe regions with ECS/EC2 Instance

I want to securely fetch configuration files from S3, using a secure VPC, from my Docker container. But I want to determine inside the application which configuration file to fetch and use, based on the region the container is running in. Is there a good/best practice for determining the current container's region?
I understand that you can use the AWS SDK/CLI to describe the ECS instances, but that doesn't tell me which one the container is specifically deployed on.
Use the metadata server to query the availability-zone from which you can get the region.
$ curl 169.254.169.254/latest/meta-data/placement/availability-zone/
us-east-1a
For example, in Python:
import os
# Query the instance metadata for the availability zone, e.g. "us-east-1a"
az = os.popen("curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone/").read().strip()
# Drop the trailing zone letter to get the region, e.g. "us-east-1"
print(az[:-1])
Within EC2, you can retrieve the instance metadata using a simple curl command to a local (internal) web API. Region and AZ are some of the data points you can get:
http://169.254.169.254/latest/meta-data/services/domain
http://169.254.169.254/latest/meta-data/placement/availability-zone
See this page for full details about instance metadata.
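If you would rather not trim the zone letter off the availability zone, the instance identity document exposes the region directly; a minimal check, assuming curl is available:
curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
# the JSON response includes a "region" field, e.g. "region" : "us-east-1"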
Within ECS, I'd be interested to see if these might still work -- my hunch is they would, as the container should query the host machine's API for the answer, and the ECS host is most certainly an EC2 instance.
Let us know if that works?

SSH from AWS BeanStalk

I have an application that runs on AWS Elastic Beanstalk, and one requirement is to connect to another server using SSH. I could log in to the server as root and generate a key pair to use, but this would not scale (we have auto-scaling enabled).
Is there a way to generate and replicate a key pair across the instances that are running?
Edit - I feel the need to provide a better description of my problem.
When I launched the Beanstalk instance I selected the previously created keypair, but the EC2 documentation here states the following:
Amazon EC2 stores the public key only, and you store the private key.
This seems to work OK, as I am able to SSH into the EC2 instance. We have another service running on a DigitalOcean-hosted machine, and we need to SSH from the EC2 instance to the DigitalOcean instance.
Important: the DigitalOcean instance only allows key-based authentication (user/password authentication is not allowed).
When I log into the EC2 machine I can see that the .ssh folder contains only the authorized_keys file, which makes sense given the documentation paragraph above.
Is there a way to get a key that I could use to log in to the DigitalOcean instance from the EC2 instance?
If I understand you correctly, you need the Beanstalk application to SSH in to another server?
Every EC2 instance gets launched with a designated keypair. You have the option of either creating a new keypair or using a keypair already set (i.e. the keypair created by the Beanstalk application for the first instance).
If you keep the private key on the Beanstalk instance and launch the other instance(s) using that same keypair, the application can use the private key to SSH in, and you can scale the instances without having to go into each one and create new keypairs.
That said, I believe the documentation suggests against keeping the private key on the instance, so perhaps consider launching the non-Beanstalk instances with a configuration script that creates a customized user, perhaps using a key and password, and pre-configuring the application with that information (a minimal sketch follows the quoted passage below). You can keep that information as environment variables within Beanstalk itself, similar to how you would keep RDS credentials.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of
passing user data to the instance that can be used to perform common
automated configuration tasks and even run scripts after the instance
starts. You can pass two types of user data to Amazon EC2: shell
scripts and cloud-init directives. You can also pass this data into
the launch wizard as plain text, as a file (this is useful for
launching instances via the command line tools), or as base64-encoded
text (for API calls).
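For illustration, a minimal sketch of such a configuration script, whether passed as EC2 user data or run on the other host; the user name, key material, and paths are placeholders, not anything your stack defines:
#!/bin/bash
# create a dedicated login user for the application (hypothetical name)
useradd -m appssh
mkdir -p /home/appssh/.ssh
# install the public half of a key pair you generated yourself
echo "ssh-ed25519 AAAA...yourkey... app@example" >> /home/appssh/.ssh/authorized_keys
chmod 700 /home/appssh/.ssh
chmod 600 /home/appssh/.ssh/authorized_keys
chown -R appssh:appssh /home/appssh/.ssh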
EDIT 1
In order to SSH from computer A to computer B, computer A needs to have the private key in the .ssh directory and computer B needs to have the public key appended to the authorized_keys file in the .ssh directory, so that's perhaps why you don't see either key in the Elastic Beanstalk EC2 instance.
Since you have the public key in the authorized_keys file, you can simply replicate it to the DigitalOcean instance (once it's on that server, do cat public_key >> authorized_keys), and since you're able to SSH into the Elastic Beanstalk instance, you can take the private key from your computer and put it in the .ssh directory of the Beanstalk instance. That way, the DigitalOcean instance has the public key appended to its authorized_keys file and the Beanstalk instance holds the matching private key, so it is authorized to log in.
That said, this is probably the most insecure way of doing this...I would prefer you generate a new key and use that to be able to SSH from the Beanstalk instance to the DigitalOcean instance.
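A hedged sketch of that approach, run on the EC2/Beanstalk instance; host and user names are placeholders:
# generate a dedicated key just for this hop (no passphrase so the app can use it unattended)
ssh-keygen -t ed25519 -f ~/.ssh/do_deploy -N ""
cat ~/.ssh/do_deploy.pub
# paste that single line into ~/.ssh/authorized_keys on the DigitalOcean host
# (via the DO console or an existing key), then test:
ssh -i ~/.ssh/do_deploy user@do-host.example.com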
Note, this is not the same as creating a new IAM user, though you can use IAM to simply create new key pairs.
EDIT 2
I guess it will be difficult for an EC2 instance to automatically obtain the private key when it is automatically launched, so the way I see it, you have three options:
1) EC2 instances can be (auto)launched with a custom "user data" script, which I referenced above. In that script, you can include the actual private key data (pretty bad idea IMO) OR have it obtain the private key from somewhere (e.g. SCP with username/password into some machine and download it). Again, all pretty bad ideas.
2) Embed the private key within your Beanstalk application. Not knowing what language your application is written in, it's difficult to determine how bad / good of an idea this is. If it's in Java, private / sensitive keys get embedded all the time, so I don't see why this would be any different. This seems to me to be a fine idea, iff this is an application developed specifically for this use case and will never be used anywhere else. I'd hate to see a developer accidentally deploy this app somewhere else and now that private key is potentially compromised.
3) You could create an AMI of the EC2 instance with the key embedded in it. Then simply instruct the autoscaling to launch a new instance of that AMI and voila, you will have the key in the .ssh directory. I tend to like this idea the best as it uses AWS resources for what they're intended, and I would think makes the key a bit more 'secure' (outside of compromising the EC2 instance itself, it will be much more difficult for anyone to access the key). This wouldn't add any additional scalability over option #2 as you can scale / deploy a Beanstalk application just as much as you can an AMI image. That preference is up to you.
NOTE, this of course says nothing about scaling the DO machine, assuming that's even a requirement.

Access one environment from another in Engine Yard

We have a couple of environments in Engine Yard. Each of them runs the same application, but at a different stage: production, staging, etc. In total, about 10 environments. Now, we want to dump the production database every night and restore it on the rest of the environments so they have the latest data.
The problem is, an instance from one environment can't access instances in other environments. There are two ways to connect that are suitable for us:
SSH.
Specify the RDS host as the --host parameter to mysqldump. The RDS host is of the form environment.random_string.region.rds.amazonaws.com as opposed to a regular EC2 host name.
Neither of them works out of the box. The straightforward solution would be to generate RSA keys on all the servers that need access and add them to authorized_keys on all the servers that should allow access. However, this solution isn't scalable: every time we add or recreate an environment we'd need to repeat the process.
Is there any better solution?
There is a way to set up a special backup configuration file on your other instances that would allow you to directly access the production S3 bucket from another environment within the same account. There is some risk involved, since it would also technically allow your non-production environments to edit the contents of the production bucket.
There may be some other options depending on the specifics of your configuration. Your best option would be to open a ticket with the Engine Yard Support team so we can discuss your needs further.
Is it possible to set up a separate hub server with an FTP or SFTP service only? A sketch of the hub's cron job follows this list.
Open inbound port 21/22 from all environments to that hub server, so all clients can download the database dump.
Open inbound port 3306 (or another database port) from the hub server to RDS/the database.
Run a cron job on the hub server to take the DB dump, push the dump to the other environments, and so on.
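A rough sketch of the hub's nightly job under that design; hosts, users, credentials, and paths are all placeholders:
#!/bin/bash
# /usr/local/bin/db-sync.sh, run from cron (e.g. 0 2 * * *); DB_PASSWORD sourced from a protected file
set -euo pipefail
# dump the production database from RDS over the hub's open database port
mysqldump --single-transaction -h production.abc123.us-east-1.rds.amazonaws.com \
  -u backup_user -p"$DB_PASSWORD" app_production | gzip > /srv/dumps/app.sql.gz
# push the dump to each non-production environment
for host in staging.example.com qa.example.com; do
  scp /srv/dumps/app.sql.gz deploy@"$host":/tmp/app.sql.gz
done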
Back up your production database to an S3 bucket created for this purpose.
Use IAM roles to control how your other environments can connect to the same bucket.
Since the production environment's database server is known, you can use a script to mysqldump that one server to the shared S3 bucket (a sketch follows below).
Once complete, your other servers can collect the data from that S3 bucket using a properly authorized IAM role.
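A hedged sketch of that flow, assuming the AWS CLI is installed and the instance's IAM role allows the relevant S3 actions on a placeholder bucket:
# on the production server: stream the dump straight into the shared bucket
mysqldump --single-transaction -h production.abc123.us-east-1.rds.amazonaws.com \
  -u backup_user -p"$DB_PASSWORD" app_production | gzip | \
  aws s3 cp - s3://example-prod-db-dumps/latest/app.sql.gz
# on each other environment: pull the dump and restore it into the local database
aws s3 cp s3://example-prod-db-dumps/latest/app.sql.gz - | gunzip | mysql -u app_user -p app_staging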