Unable to do a Restore from S3 with RDS SQL Server

I am attempting to do a restore from S3 in AWS RDS (SQL Server). On the page where I select the engine, I choose SQL Server, but the options to select the Edition are all grayed out and I cannot select one and move on. Note that this does not happen if I simply create a SQL Server instance in RDS; I can then select an Edition.

It looks like you can't do it straight from the console; the database has to exist already.
Restoring a Database
To restore your database, you call the rds_restore_database stored procedure.
The following parameters are required:
@restore_db_name – The name of the database to restore.
@s3_arn_to_restore_from – The Amazon S3 bucket that contains the backup file, and the name of the file.
The following parameters are optional:
@kms_master_key_arn – If you encrypted the backup file, the key to use to decrypt the file.
Example Without Encryption
exec msdb.dbo.rds_restore_database
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';
Example With Encryption
exec msdb.dbo.rds_restore_database
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension',
@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id';
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
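The restore runs asynchronously, so a successful call only queues the task. To check its progress you can poll the rds_task_status stored procedure, which is documented on the same page:
exec msdb.dbo.rds_task_status @db_name='database_name';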

Related

PBI services Odbc.DataSource dynamic dsn name

In the PBI Desktop file there are no errors; the error appears only in the PBI Service on refreshing:
ERROR: Query contains unsupported function. Function name: Odbc.DataSource
Parameter1 is a text parameter holding the value Mydsn (static, not dynamic).
DSN as a literal string - no error:
= Odbc.DataSource("dsn=Mydsn", [HierarchicalNavigation=true])
DSN from the text parameter (still static) - no error:
= Odbc.DataSource("dsn=" & Parameter1, [HierarchicalNavigation=true])
Odbc_dsn is a query that reads the DSN name from a CSV:
= settings[Column2]{0}
DSN from that CSV query - error "Query contains unsupported function. Function name: Odbc.DataSource":
= Odbc.DataSource("dsn=" & Odbc_dsn, [HierarchicalNavigation=true])
DSN read directly from the CSV table - same error:
= Odbc.DataSource("dsn=" & settings[Column2]{0}, [HierarchicalNavigation=true])
Changing the privacy settings does not help; I have tried all the available options (None, Private, Organizational, Public, disabling privacy settings, etc.).
How can I use an ODBC DSN name read from a CSV file?
(Answer to be expanded with additional info provided - see comments on original question)
While I have never imported a DSN name through a CSV, the fact that it works on your local machine makes me accept that this is at least possible, so we'll instead focus on issues with the gateway.
My first impression here as to why this might not be working is simply permissions and visibility.
Having worked with a number of Power BI Service setups, an unrecognized ODBC DSN usually comes down to one of the following issues:
Is the DSN set up as a system DSN?
Is the gateway set up under a LocalService account vs. a PowerBI Gateway Host account?
Does the account the gateway runs under actually have permissions to the directory containing the data source (or custom connector) that the connection depends on?
So:
Fairly straightforward: all gateway-accessible ODBC sources need to be set up on the gateway host as system DSNs, not user DSNs. You can confirm this in the ODBC Data Source Administrator on the gateway host.
Confirm the On-Premises Gateway "Logon" user on the gateway's host machine. Generally I recommend going to Windows Services and making sure it uses the "Local System account" (to inherit permissions), but just keep this in mind during the next step of checking local permissions.
This applies to anything "self-hosted" on the machine that is the gateway host: whichever account is hosting the Power BI gateway service must also be given explicit permissions to the local resources it needs. For example, if you add a custom connector to the Documents directory on the gateway host under your user account, make sure the Power BI default user has access to that directory and file (i.e. file Properties -> Security -> user permissions, etc.).
In my experience, 9 times out of 10 one of these things isn't set up right.
Additional note: every time you upgrade or re-install a Power BI gateway host, you will have to re-set the service login account and double-check all permissions. I don't know why, but the upgrade overwrites that setting by default, disabling all refreshes until it is restored.
Edit:
After further thinking, I believe you will eventually run into a roadblock regardless: the Power BI Service's gateway data source mappings are one-to-one. After upload, the dataset settings screen requires you to map the dataset to a data source that has already been defined in the Power BI Service's settings.
I don't believe it is currently possible to make that definition a variably composed string per the user's request.
The DSN name can only be static, and only a string.

How to export SQL Output directly to CSV on Amazon RDS

Amazon doesn't give access to the RDS server directly (they expose it only through the RDS service), hence SELECT ... INTO OUTFILE doesn't work. Even the master user does not have the FILE privilege.
I created a ticket with Amazon and talked at length with them. They suggested a few workarounds, like using Data Pipeline, but all are too complicated.
Surely one way could be to use a tool like MySQL Workbench: execute the query, then export to CSV. The only problem with this approach is that you need to execute the same query twice on the server, which is problematic if your output has thousands of rows.
Just write the query in a file, a.sql. The SQL should be in this format:
select concat('"', Product_id, '","', Subcategory, '","', ifnull(Product_type,''), '","', ifnull(End_Date,''), '"') as data from tablename
Then run it against the RDS host and redirect the output:
mysql -h xyz.abc7zdltfa3r.ap-southeast-1.rds.amazonaws.com -u query -pxyz < a.sql > deepak.csv
The output will be in the file deepak.csv.
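One caveat with the concat approach: if a column value itself contains a double quote, the generated CSV will break. A minimal sketch of the standard fix (doubling embedded quotes), reusing the same columns from the example above:
select concat('"', replace(Product_id, '"', '""'), '","', replace(ifnull(Product_type,''), '"', '""'), '"') as data from tablename;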

SQL Access denied error for a folder with 'Share with Nobody' property

I need to create a backup of my database and then restore it on my PC. I am copying my backup file to another folder (e.g. 'Example') and setting the 'Share with Nobody' property on the folder (right-click the 'Example' folder -> Share with -> Nobody). If I try to perform a restore operation after this, I get the Access denied error below.
SQL Error: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot open backup device 'C:\Temp\Example\Example.bak'. Operating system error 5 (Access is denied.).
The scenario can be reproduced by running the query:
RESTORE VERIFYONLY FROM DISK = 'C:\Temp\Example\Example.bak'
Your SQL Server instance needs permission to read the backup file. Make sure the service account running SQL Server has rights to access the file share.
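If you're not sure which account the SQL Server service runs under, one way to check is from SQL Server itself; sys.dm_server_services is available in SQL Server 2008 R2 SP1 and later:
SELECT servicename, service_account FROM sys.dm_server_services;
Then grant that account read access to the backup folder in its security settings.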

Connect IntelliJ to Amazon Redshift

I'm using the latest version of IntelliJ and I've just created a cluster in Amazon Redshift. How do I connect IntelliJ to Redshift so that I can query it from my favorite IDE?
1. Download a JDBC driver:
http://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#download-jdbc-driver
2. In IntelliJ: View | Tool Windows | Database
3. Click "Data Source Properties"
4. Click Add (+) and select "Database Driver"
5. Uncheck "JDBC drivers", add the JDBC driver, select a class from the dropdown, and select a PostgreSQL dialect
6. Add a new connection, and use this data source for your connection (+ | Data Source | RedShift)
7. Set URL templates:
jdbc:redshift://[{host::localhost}[:{port::5439}]][/{database::postgres}?][\?<&,user={user:param},password={password:param},{:identifier}={:param}>]
jdbc:redshift://\[{host:ipv6:\:\:1}\][:{port::5439}][/{database::postgres}?][\?<&,user={user:param},password={password:param},{:identifier}={:param}>]
jdbc:redshift:{database::postgres}[\?<&,user={user:param},password={password:param},{:identifier}={:param}>]
You can connect IntelliJ to Redshift by using the JDBC driver supplied by Amazon. In the Redshift console, go to "Connect Client" to get the driver.
Then, in the IntelliJ Data Source window, add the JAR as a Driver file, and use the following settings:
Class: com.amazon.redshift.jdbc41.Driver
URL template: jdbc:redshift://{host}:{port}/{database}
Common Pitfalls:
If the driver file is not readable, or is quarantined by OS X, you won't be able to select the driver class.
For a more detailed guide, see this blog post: Connecting IntelliJ to Redshift
Note: There is no native Redshift support in IntelliJ yet. IntelliJ Issue DBE-1459
Update for 2019: I've just created a PostgreSQL connection and then filled in the usual Redshift settings (don't forget the port: 5439); there's no need to download Amazon's JDBC driver.
The only small issue is that the syntax check doesn't know Redshift specifics such as AS and some functions, but queries execute correctly.
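Once connected, a quick sanity check that you're really talking to Redshift is to run a test query; Redshift's version() string reports its PostgreSQL 8.0.2 base plus the Redshift build number:
SELECT current_database(), current_user, version();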
Update for 2020: PyCharm (and possibly all other JetBrains IDEs) now supports connecting to Redshift through IAM AWS credentials without manual driver installation.
Here are the detailed setup instructions:
Grant a redshift:GetClusterCredentials permission to your AWS user. Either create and attach a new policy (docs) or use an existing one such as AmazonRedshiftFullAccess (not recommended: too permissive).
Create an AWS access key (access key id + secret access key pair) for your user (docs).
Create a text configuration file ~/.aws/credentials (no extension) with the following content (docs):
[default] # arbitrary profile name, will be used later
region = <your region>
aws_access_key_id = <your access key id> # created on the previous step
aws_secret_access_key = <your secret access key>
Create a new PyCharm database connection of type Amazon Redshift and set it up (docs):
Choose connection type = IAM cluster/region (right under the «General» tab of the connection settings window).
Authentication = AWS Profile
User = {your AWS login}
Profile = default, or the one you used in the credentials file.
The credentials can supposedly be provided through the AccessKeyID/SecretAccessKey connection settings on the «Advanced» tab, but that did not work for me (a NullPointerException if the Profile field is empty).
Database = {your database}; choose an existing one to avoid nondescript errors from the driver.
Region = {your region}
Cluster = {cluster name}, get it from Redshift AWS console.
Set up the connection:
Check necessary databases in the «Schemas» tab.
«Advanced» tab: AutoCreate = true (literal lowercase true as the setting value). This will automatically create a new database user with your AWS login.
Test connection.
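To verify that AutoCreate actually created a database user for your AWS login, you can list the database users after connecting; pg_user exists on Redshift just as on PostgreSQL:
SELECT usename FROM pg_user ORDER BY usename;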

How do I import a local MySQL db to RDS db instance?

I've created an RDS instance called realcardiodb (the engine is MySQL), and I've exported my database from my localhost. The file is saved locally as localhostrealcardio.sql.
Most research says to use mysqldump to import data from a local system to a web server, but my system doesn't even recognize mysqldump:
C:\xampp\mysql>mysqldump
'mysqldump' is not recognized as an internal or external command, operable program or batch file.
How do I resolve this error, assuming I should use mysqldump? (I definitely have MySQL installed on my system.)
Is there a better utility I should use?
Any help is appreciated, especially if you have experience importing MySQL to AWS RDS.
Thanks!
DK
Update 7/31/2012
So I got the error resolved: mysqldump is in the bin directory, C:\xampp\mysql\bin>mysqldump
AWS provides the following instructions for uploading a local database to RDS:
mysqldump acme | mysql --host=hostname --user=username --password acme
Can someone break this down for me?
1) Is the first 'acme' (after the mysqldump command) the name of my local database or the exported SQL file I saved locally?
2) Is the hostname the IP address, the public DNS, the RDS endpoint, or none of these?
3) I assume the username and password are the RDS credentials, and the second 'acme' is the name of the database I created in RDS.
Thanks!
This is how I did it for a couple of instances that had data in the MySQL tables.
The steps to creating an RDS database instance:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html
Note: Make sure the RDS instance has a security group configured that relates to the EC2 security group.
http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Workin...
Before we go forward, let me explain some of the placeholders used below:
host.address.for.rds.server = what is referred to as the "endpoint" on your RDS description/settings page.
rdsusername = the master user account you created during RDS setup.
rdsdatabase = a blank database you created inside the server on your RDS instance.
backupfile.sql = the SQL dump file you made of your pre-existing installation's database.
Once you've created a fresh RDS database instance and configured its security settings, log into the server (from within an SSH session to your EC2 server) and create an empty database inside the instance using basic SQL commands:
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
create database rdsdatabase;
Then quit out of the MySQL environment inside your RDS server.
\q
This tutorial assumes you already have a backup of your old database. If you don't, go create one now. After that, you're ready to import the SQL dump file into the empty database waiting on your RDS server:
mysql -h host.address.for.rds.server -u rdsusername -p rdsdatabase < backupfile.sql
It might take a few seconds to complete, depending on the size of the SQL dump file; your indication that it has finished is that the bash command prompt reappears.
Note: the command "mysqlimport" is for importing data directly into an existing table inside a database. It might seem like we're "importing" data here, but that is not what we're actually doing: the database we are migrating to has no tables yet, and the SQL dump file we're using contains the SQL commands to generate the tables it needs.
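For illustration, a mysqldump file is just a script of SQL statements, roughly of this shape (the table and column names here are hypothetical):
DROP TABLE IF EXISTS `patients`;
CREATE TABLE `patients` (
  `id` int NOT NULL AUTO_INCREMENT,
  `name` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
);
INSERT INTO `patients` VALUES (1,'Alice'),(2,'Bob');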
Confirm the Transfer
Now, if you didn't get any error messages, your SQL transfer probably worked. If you want, you can double-check by connecting to your RDS database server, looking up the database you created, and seeing whether the tables are now present:
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
use rdsdatabase;
show tables;
I prefer using MySQL Workbench. It's much easier and more user-friendly than the command-line way, and it provides a simple GUI. MySQL Workbench or SQLyog will both work.
These are the steps that I did.
1) Install MySQL Workbench.
2) In the AWS console, there must be a security group for your RDS instance. Add an inbound rule to that group allowing connections from your machine. It's simple: add your IP address.
3) Open MySQL workbench, Add a new connection.
4) Give the connection a name you prefer.
5) Choose the connection method: Standard TCP/IP.
6) Enter your RDS endpoint in the Hostname field.
7) Port:3306
8) Username: the master username (the one you created during the creation of your RDS instance)
9) Password: the master password
10) Click Test Connection to check your connection.
11) If connection is successful, click OK.
12) Open the connection.
13) You will see your database 'realcardiodb' there.
14) Now you can import your mysqldump file into this database: go to Server and click Data Import.
15) You can check whether the data has been migrated by opening a blank SQL file and typing basic SQL commands, like use database; select * from table; (see the sketch below).
That's it. Voilà.
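A minimal sketch of that check (the table name is hypothetical):
use realcardiodb;
show tables;
select * from some_table limit 10;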
If you have a backup.sql file on your PC, there's no need to transfer it to EC2. Just run the line below in a terminal on your PC:
$ mysql -h rdsinstance-hostaddress-ending.rds.amazonaws.com -u rds_username -p rds_database < /path/to/your/backup.sql
Enter password: paswd_mysql_user
That's all.
Import a backup directly from an existing remote server:
SSH into your remote server.
Take the remote server's MySQL backup (backup/path/backupfile.sql).
Import the backup file into RDS MySQL while you are in the remote server's shell:
mysql -h your-mysql-instance.region.rds.amazonaws.com -u db_username -p db_name < backup/path/backupfile.sql
Note: I have tried all the above approaches to import my existing backup into a new RDS database, including going through EC2 as in the AWS documentation. It was a 10 GB backup, so I also tried it table by table. The process showed as completed, but some data was missing for large tables, so I had to write a DB-to-DB data migration script.
Using Workbench:
Set up the connection.
Go to the Management tab and click Data Import/Restore.
Click "Import from Self-Contained File".
Choose your mysqlbackup.sql file.
Select the default database.
Click the Start Import button.
Using the command line (on Windows):
mysqldump -u <localuser>
--databases world
--single-transaction
--compress
--order-by-primary
-p<localpassword> | mysql -u <rds-user-name>
--port=3306
--host=<endpoint>
-p<rds-password>
For more detail, please refer to:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.SmallExisting.html
or
https://docs.bitnami.com/aws/how-to/migrate-database-rds/#using-phpmyadmin-110
Hope it helps.
Step-by-step instructions on how to migrate an existing MySQL/MariaDB database to an already-running RDS instance can be found in the AWS RDS MySQL document on importing customer data into RDS:
http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded.
Stop any applications accessing the target DB instance.
Create a DB snapshot.
Disable Amazon RDS automated backups.
Load the data using mysqlimport (a sketch of the equivalent SQL follows this list).
Re-enable automated backups.
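mysqlimport is essentially a command-line wrapper around SQL's LOAD DATA INFILE statement. Below is a minimal sketch of the equivalent statement; the file path and table name are hypothetical, the target table must already exist, and the LOCAL keyword matters on RDS because the file lives on your client machine, not on the RDS host:
LOAD DATA LOCAL INFILE '/path/to/mytable.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';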