I have two databases, one on my local machine and one on my Amazon EC2 instance. I run a Python program on my local machine which makes changes to the local database, and I want these changes to be reflected in the database on the EC2 instance, periodically. I want to do this in Python: a script that logs onto the Amazon server, establishes a connection with the database there, and makes the changes.
I came across some modules like pexpect, fabric and paramiko, but I am struggling with the key authentication.
The way I ssh from my terminal is ssh -i my_rsa_file.pem username@ip_address. There is no password. How do I go about this?
Also, would simply using Popen in subprocess to execute the login command work?
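Here is roughly what I am attempting with paramiko, if it helps (the host, user, and key path are placeholders from my setup):

import paramiko

# Placeholders: substitute the instance address, login user, and .pem path.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ip_address", username="username",
               key_filename="my_rsa_file.pem")  # same key as `ssh -i`

# Run a command on the remote host, e.g. against the database there.
_, stdout, stderr = client.exec_command("mysql -e 'SHOW DATABASES;'")
print(stdout.read().decode(), stderr.read().decode())
client.close()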
The Boto EC2 documentation here describes the EC2 instance object, of which "key_name" is an attribute. Look about 3/4 of the way down, under "boto.ec2.instance".
http://boto.readthedocs.org/en/latest/ref/ec2.html
So, e.g., you could run some instances as follows, and then store the first instance as "inst":
reservation = conn.run_instances(...)
inst = reservation.instances[0]
To retrieve your key-pair name as a unicode string, just use:
kp_name = inst.key_name
You can then retrieve the corresponding Boto object using get_key_pair:
kp_obj = conn.get_key_pair(kp_name)
Of course, this is a silly example, since I would have needed my key pair name to run_instances in the first place. May you find a more fruitful application!
The AWS page https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html shows how to modify the instance user data of an existing EC2 instance.
However, at the end (step 7) it says the modified user data will not be executed. What's the point of modifying the user data if it is not executed? Is it possible to execute the modified user data once?
To modify instance user data
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the instance and choose Instance state, Stop instance. If this option is disabled, either the instance is already stopped or its root device is an instance store volume.
4. When prompted for confirmation, choose Stop. It can take a few minutes for the instance to stop.
5. With the instance still selected, choose Actions, Instance settings, Edit user data.
6. Modify the user data as needed, and then choose Save.
7. Restart the instance. The new user data is visible on your instance after you restart it; however, user data scripts are not executed.
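For reference, the same stop / edit / start flow can be scripted from Python; a rough boto3 sketch, where the instance ID and the replacement script are placeholders:

import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Steps 3-4: stop the instance and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Steps 5-6: replace the user data (botocore base64-encodes this blob field).
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    UserData={"Value": b"#!/bin/bash\necho hello > /tmp/hello.txt\n"})

# Step 7: start the instance again.
ec2.start_instances(InstanceIds=[INSTANCE_ID])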
The User Data field was created as a way to pass information to an instance. For example, providing a password to a database, or configuration information.
Then, the Ubuntu community came up with the clever idea of passing a script via the User Data field, and having some code on the instance execute the script when the system boots. This has enabled "self-configuring" systems, and is called cloud-init.
By default, the User Data script only executes once per instance, with the intention of installing software.
From Boot Stages — cloud-init documentation:
cloud-init ships a command for manually cleaning the cache: cloud-init clean
Running this command will 'forget' the previous runs, and will execute the User Data script on the next boot.
It is also possible to run a script on every boot by placing the script in /var/lib/cloud/scripts/per-boot/.
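A per-boot script can be any executable file with a shebang; for example, a minimal Python sketch (the filename and log path are made up):

#!/usr/bin/env python3
# Save as /var/lib/cloud/scripts/per-boot/log-boot.py and chmod +x it;
# cloud-init runs every executable in that directory on each boot.
import datetime

with open("/var/log/boot-marker.log", "a") as log:
    log.write("booted at %s\n" % datetime.datetime.utcnow().isoformat())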
Simple step-by-step working solution:
Stop the instance
Go to Actions > Instance settings > Edit user data
Make sure to choose As Text on the Edit User Data screen
Add your commands in the text area
Now start the instance and check that it is in the Running state
Check your public IP (it may have changed after the restart)
Now finally connect to the instance using ssh:
ssh -i <KEY_FILE> <USERNAME>@<IP_ADDRESS>
I have a single MySQL RDS instance and an AMI containing a Grails application. I would like to use the User Data function to populate the Grails application.yml file with the RDS endpoint. How do I retrieve the RDS endpoint from within User Data?
There are two ways to use User Data:
Just as data: The contents of User Data are accessible via http://169.254.169.254/latest/user-data/, so your application could just parse the contents and do what you wish with them.
As an executable script: On Linux, starting User Data with #! will cause it to be executed, so you could write a script to update the application.yml file.
An alternate concept would be to store the RDS Endpoint in the AWS Systems Manager Parameter Store. Then, use a User Data script to extract it from there and store it in application.yml. This way, the endpoint can be easily updated in Parameter Store without modifying any scripts.
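For example, a boto3 sketch of the Parameter Store approach, run from a User Data script; the parameter name, file path, and placeholder token are all assumptions:

import boto3

# Assumed names: adjust to wherever you stored the endpoint and your app lives.
PARAM_NAME = "/myapp/rds-endpoint"
CONFIG_PATH = "/opt/myapp/application.yml"

ssm = boto3.client("ssm")
endpoint = ssm.get_parameter(Name=PARAM_NAME)["Parameter"]["Value"]

# Replace a placeholder token in application.yml with the real endpoint.
with open(CONFIG_PATH) as f:
    text = f.read()
with open(CONFIG_PATH, "w") as f:
    f.write(text.replace("RDS_ENDPOINT_PLACEHOLDER", endpoint))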
On a Linux AMI, User Data is just a shell script that runs on the instance.
You can edit the application.yml file from that script and add your parameters.
I have two EC2 instances and I am trying to sync a directory between the two of them.
I have set up the lsyncd service on one of the instances and was able to sync a directory to a different directory on the same instance.
Now I am trying to sync the same directory with the second instance and it is not working.
The reason it is not working is that I am not able to put the key that was generated on the first instance using ssh-keygen -t rsa onto the second instance in order to allow them to access each other.
I have tried sudo ssh-copy-id -i /path/to/key ec2-user@ip-of-second-instance but it did not work.
I have also tried to manually copy the public part from the key.pub file of the first instance into the ~/.ssh/authorized_keys of the second instance, but that did not work either.
This is my lsyncd configuration:
settings = {
    insist = true,
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}
sync {
    default.rsyncssh,
    source = "/home/ec2-user/IntSrv/Sync",
    host = "second-instance-ip",
    targetdir = "/home/ec2-user/GenSrv/Sync", -- default.rsyncssh expects targetdir, not target
}
What am I doing wrong? How can I fix this issue?
Any help would be appreciated. Thank you.
You might want to start again with the keys.
You should really be generating your own key for each user. Then, for each user you want to grant access to the instance, add their public key to the .ssh/authorized_keys file, either for the ec2-user or, preferably, create a user account for them first and add it to that account's authorized_keys file.
The keys generated by Amazon EC2 should be used to gain initial access to your instances. Then, proper security practice is to remove that key and add your own keys. This way, you have each person accessing via their own keypair, which can be removed if you wish to rescind access.
While I'm not familiar with lsyncd, I suspect that if you get ssh working, then lsyncd will probably work fine, too.
So, quick summary:
Generate a key for YOU using ssh-keygen
Connect to the desired instances, and add your public key to authorized_keys within the desired user's home directory
Use those keys instead of the ones generated by Amazon EC2
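Before pointing lsyncd at the second instance, it may help to confirm the key itself works; a quick paramiko check run from the first instance (host, user, and key path are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("second-instance-ip", username="ec2-user",
               key_filename="/home/ec2-user/.ssh/id_rsa")
_, stdout, _ = client.exec_command("echo key auth works")
print(stdout.read().decode())  # prints "key auth works" if the key is accepted
client.close()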
I'm trying to install a simple app in Amazon AWS. Since I'm really new to servers, I used Elastic Beanstalk.
Everything was OK, but when I run my app I get an error: PDO error: could not find driver.
I tried mysqli_ping on the connection and got boolean true, so that part is OK.
I checked for help, but all I found is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.rds.html
2. If you plan to use PDO, install the PDO drivers. For more information, go to http://www.php.net/manual/pdo.installation.php.
But I really don't know what to do with this information. Any help?
So, it's quite a procedure. First you have to get ssh access to your instance:
1. Generate a key pair for your instance and download it in pem format.
Go to: https://console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Instances (change for your region)
Click Key Pairs, then Create Key Pair, create your new key pair and download it to your computer.
Associate your instance with the key: go to Elastic Beanstalk, select your application, select Configuration, Instances, and select your new key from the EC2 key pair drop-down.
2. Download PuTTY for Windows (installer) and install it: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
3. Transform the key to .ppk format using PuTTYgen:
http://www.techrepublic.com/blog/the-enterprise-cloud/connect-to-amazon-ec2-with-a-private-key-using-putty-and-pageant/?tag=nl.e011#.
4. Set up PuTTY to use the key: http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-deploy-app-connect.html
5. Run PuTTY, find your instance's public DNS and add ec2-user@ in front of it, so it looks like this: ec2-user@ec2-54-76-47-0.eu-west-1.compute.amazonaws.com
Then it is as simple as: yum install php-pdo
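Alternatively, if you'd rather script that last step than type it into PuTTY, a paramiko sketch like this should work from any machine with the .pem key (the DNS name and key path are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ec2-54-76-47-0.eu-west-1.compute.amazonaws.com",
               username="ec2-user", key_filename="my-key.pem")
# get_pty=True because sudo may refuse to run without a terminal.
_, out, err = client.exec_command("sudo yum -y install php-pdo", get_pty=True)
print(out.read().decode(), err.read().decode())
client.close()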
I've created an RDS instance called realcardiodb (the engine is MySQL),
and I've exported my database from my localhost. The file is saved locally as localhostrealcardio.sql.
Most research says to use mysqldump to import data from a local system to a web server, but my system doesn't even recognize mysqldump.
C:\xampp\mysql>mysqldump
'mysqldump' is not recognized as an internal or external command, operable program or batch file.
How do I resolve this error? Should I use mysqldump? (I definitely have MySQL installed on my system.)
Is there a better utility I should use?
Any help is appreciated, especially if you have experience importing MySQL to AWS RDS.
Thanks!
DK
Update 7/31/2012
So I got the error resolved: mysqldump is in the bin directory, C:\xampp\mysql\bin.
AWS provides the following instructions for uploading a local database to RDS:
mysqldump acme | mysql --host=hostname --user=username --password acme
Can someone break this down for me?
1) Is the first 'acme' (after the mysqldump command) the name of my local database or the exported sql file I saved locally?
2) Is the hostname the IP address, the Public DNS, the RDS endpoint, or none of these?
3) The username and password, I assume, are the RDS credentials, and the second acme is the name of the database I created in RDS.
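In the meantime, here is that same pipe as I understand it, scripted from Python with subprocess (every name and credential below is a placeholder):

import subprocess

# Dump the local database...
dump = subprocess.Popen(
    ["mysqldump", "-u", "localuser", "-plocalpassword", "acme"],
    stdout=subprocess.PIPE)
# ...and stream it straight into the RDS database.
load = subprocess.Popen(
    ["mysql", "--host=myinstance.us-east-1.rds.amazonaws.com",
     "--user=rdsuser", "--password=rdspassword", "acme"],
    stdin=dump.stdout)
dump.stdout.close()  # so mysqldump sees a broken pipe if mysql exits early
load.communicate()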
Thanks!
This is how I did it for a couple of instances that had data in the MySQL tables.
The steps to creating an RDS database instance:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html
Note: Make sure the RDS instance has a security group configured that relates to the EC2 security group.
http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Workin...
Before we go forward, let me provide a list of what some of the following placeholders are:
host.address.for.rds.server = this will be what is referred to as the "end point" in your RDS description/settings page.
rdsusername = the master user account which you created during RDS setup.
rdsdatabase = a blank database which you created inside the server on your RDS instance.
backupfile.sql = the sql dump file you made of your pre-existing installation's database.
Once you've created a fresh RDS database instance, and have configured its security settings, log into this server (from within an ssh session to your EC2 server) and then create an empty database inside the instance using basic SQL commands.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
create database rdsdatabase;
Then quit out of the MySQL environment inside your RDS server.
\q
This tutorial assumes you already have a backup from your old database. If you don't, go create one now. After that, you’re ready to import that sql dump file into the empty database waiting on your RDS server.
mysql -h host.address.for.rds.server -u rdsusername -p rdsdatabase < backupfile.sql
It might take a few seconds to complete, depending on the size of the sql dump file. Your indication that it is finished is that the bash command prompt reappears.
Note: the command "mysqlimport" is used when importing data directly into an existing table inside a database. It might seem like we're "importing" data, but this is not what we're actually doing in this situation. The database we are migrating to has no tables yet, and the sql dump file we're using contains the sql commands to generate the tables it needs.
Confirm the Transfer
Now, if you didn't get any error messages, then your sql transfer probably worked. If you want, you can double check to see if it did by connecting to your RDS database server, looking up the database you created, and check to see if the tables are now present.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
use rdsdatabase;
show tables;
I prefer using MySQL Workbench. It's much easier and more user-friendly than the command-line way.
It provides a simple GUI.
You can use MySQL Workbench or SQLyog.
These are the steps that I did.
1) Install MySQL Workbench.
2) In the AWS console, there must be a security group for your RDS instance.
Add an inbound rule to that group allowing connections from your machine.
It's simple: add your IP address.
3) Open MySQL Workbench and add a new connection.
4) Give the connection a name you prefer.
5) Choose connection method: Standard TCP/IP.
6) Enter your RDS endpoint in the Hostname field.
7) Port: 3306
8) Username: master username (the one you created during the creation of your RDS instance)
9) Password: master password
10) Click Test Connection to check your connection.
11) If the connection is successful, click OK.
12) Open the connection.
13) You will see your database 'realcardiodb' there.
14) Now you can import your mysqldump file into this database. Go to Server, click Data Import.
15) You can check whether the data has been migrated by simply opening a blank SQL file and typing basic SQL commands like use database; select * from table;
That's it. Voila!
If you have a backup.sql on your PC, there's no need to transfer it to EC2. Just run the line below in a terminal on your PC:
$ mysql -h rdsinstance-hostaddress-ending.rds.amazonaws.com -u rds_username -p rds_database < /path/to/your/backup.sql
Enter password: (type the password for rds_username)
That's all.
Import a backup directly from an existing remote server:
SSH into your remote server
Get the remote server's mysql backup (backup/path/backupfile.sql)
Import the backup file into RDS mysql while you are in the remote server's shell:
mysql -h your-mysql-instance.region.rds.amazonaws.com -u db_username -p db_name < backup/path/backupfile.sql
Note:
I have tried all the above approaches to import my existing backup into a new RDS database, including going through EC2 as in the AWS documentation. It was a 10GB backup, so I also tried it table by table. The process showed as completed, but some data was missing for large tables. So I had to write a DB-to-DB data migration script.
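For what it's worth, that script boiled down to something like this pymysql sketch (hosts, credentials, database and table names are all placeholders, and the target schema must already exist):

import pymysql

src = pymysql.connect(host="old-server", user="src_user",
                      password="src_pass", database="mydb")
dst = pymysql.connect(host="your-instance.region.rds.amazonaws.com",
                      user="rds_user", password="rds_pass", database="mydb")

table = "big_table"  # repeat for each table that came through incomplete
with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT * FROM " + table)
    marks = ", ".join(["%s"] * len(read_cur.description))
    while True:
        rows = read_cur.fetchmany(1000)  # copy in batches to bound memory use
        if not rows:
            break
        write_cur.executemany(
            "INSERT INTO " + table + " VALUES (" + marks + ")", rows)
        dst.commit()
src.close()
dst.close()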
Using Workbench:
Set up the connection
Go to the management tab and click Data Import/Restore
Click Import from Self-Contained File
Choose your mysqlbackup.sql file
Select the default database
Click the Start Import button
Using the command line (on Windows), pipe a dump of the local database directly into RDS. This is all one command, wrapped here for readability:
mysqldump -u <localuser> --databases world --single-transaction --compress --order-by-primary -p<localpassword> | mysql -u <rds-user-name> --port=3306 --host=<endpoint> -p<rds-password>
For more detail, please refer to:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.SmallExisting.html
or
https://docs.bitnami.com/aws/how-to/migrate-database-rds/#using-phpmyadmin-110
Hope it helps.
Step-by-step instructions on how to migrate an existing MySQL/MariaDB database to an already-running RDS instance.
Here is the AWS RDS MySQL document for importing customer data into RDS:
http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport
Enable automated backups again
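A rough boto3 sketch of the snapshot and backup-toggling steps from that list (the instance identifier, snapshot name, and retention period are assumptions):

import boto3

rds = boto3.client("rds")
DB_ID = "mydbinstance"  # placeholder RDS instance identifier

# Create a DB snapshot before loading anything.
rds.create_db_snapshot(DBSnapshotIdentifier="pre-import-snapshot",
                       DBInstanceIdentifier=DB_ID)

# Disable automated backups (retention 0) to speed up the bulk load.
rds.modify_db_instance(DBInstanceIdentifier=DB_ID,
                       BackupRetentionPeriod=0, ApplyImmediately=True)

# ... load the data with mysqlimport ...

# Re-enable automated backups afterwards.
rds.modify_db_instance(DBInstanceIdentifier=DB_ID,
                       BackupRetentionPeriod=7, ApplyImmediately=True)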