How do I create a new database in QuestDB?

I would like to be able to separate multiple types of data in QuestDB, but it looks like I can't run something like
CREATE DATABASE new_db;
Instead I get a table expected error. Is it possible, outside of SQL, to add a new database?

As of writing, only one database (qdb) exists within QuestDB. Separating out types of data should instead be done using different tables.
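A minimal sketch of that in Python, using QuestDB's HTTP /exec endpoint; the port, table names, and columns here are illustrative assumptions, not part of the original question:

# Minimal sketch: separate data types into tables instead of databases.
# Assumes QuestDB's HTTP API on localhost:9000; table names/columns are examples.
import requests

for table in ("sensor_readings", "trade_events"):
    query = f"CREATE TABLE IF NOT EXISTS {table} (ts TIMESTAMP, value DOUBLE) timestamp(ts)"
    resp = requests.get("http://localhost:9000/exec", params={"query": query})
    resp.raise_for_status()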

What you could do is run multiple instances of QuestDB, each containing a different database. You just have to run those instances on different ports. If you are using Docker, for example, you could map the QuestDB port 9000 of the container for the first DB to another host port, for example 9001.
docker run -p 9001:9000 --name=First_DB_Name questdb/questdb
If you now want to create a new DB, you can run another container on another host port, for example:
docker run -p 9002:9000 --name=new_db questdb/questdb
You can then reach the different databases via these host ports. If you are working on the host itself, the first DB is reachable via http://localhost:9001 and the new DB via http://localhost:9002, and so on.
It's not a beautiful solution, but it works.

Related

Retrieving an RDS endpoint from within USER DATA

I have a single MySQL RDS instance and an AMI containing a Grails application. I would like to use the User Data function to populate the Grails application.yml file with the RDS endpoint. How do I retrieve the RDS endpoint from within User Data?
There are two ways to use User Data:
Just as data: The contents of User Data are accessible via http://169.254.169.254/latest/user-data/, so your application could just parse the contents and do what you wish with them (see the sketch after this list).
As an executable script: On Linux, starting User Data with #! will cause it to be executed, so you could write a script to update the application.yml file.
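For the first option, a minimal sketch of reading User Data from inside the instance (IMDSv1 shown for brevity; IMDSv2 would require fetching a session token first):

import urllib.request

# Read this instance's User Data from the metadata service
# (the address is only reachable from inside the instance).
with urllib.request.urlopen("http://169.254.169.254/latest/user-data/") as resp:
    user_data = resp.read().decode()
print(user_data)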
An alternate concept would be to store the RDS Endpoint in the AWS Systems Manager Parameter Store. Then, use a User Data script to extract it from there and store it in application.yml. This way, the endpoint can be easily updated in Parameter Store without modifying any scripts.
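A minimal sketch of that approach with boto3; the parameter name /myapp/rds-endpoint, the file path, and the YAML layout are assumptions for illustration, and the instance would need an IAM role allowing ssm:GetParameter:

import boto3

# Fetch the RDS endpoint from Parameter Store; the parameter name is an
# assumption -- use whatever name you stored it under.
ssm = boto3.client("ssm")
endpoint = ssm.get_parameter(Name="/myapp/rds-endpoint")["Parameter"]["Value"]

# Append a dataSource entry to application.yml (path and layout are illustrative).
with open("/opt/app/application.yml", "a") as f:
    f.write(f"\ndataSource:\n    url: jdbc:mysql://{endpoint}/mydb\n")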
User Data is nothing but a shell script when running on a Linux AMI.
You can edit the application.yml file using that shell script and add your parameters.

How can I use RStudio data in Shiny Server?

On my local machine, RStudio + Shiny work properly.
Now I have Shiny Server installed on Linux, but I do not know how to get the data generated by RStudio over to it.
How can I get Shiny Server to read it?
I do not even know what keywords to search for.
Thanks
Importing data in the server
As I see it, there are two ways to supply data in this situation.
The first one is to upload the data to the server where your shiny apps are hosted. This can be done via ssh (scp), wget, or something like FileZilla. You can put your data in the same folder as the app and then access it with relative paths. For example, if you have
- app-folder
  - app.R
  - data.rds
  - more_data.csv
You can use readRDS("data.rds") or readr::read_csv2("more_data.csv") in app.R to use the data in the app.
The second option is to use fileInput inside your app. This will give you the option to upload data from your local machine in the GUI. This data will then be put onto the server temporarily. See ?shiny::fileInput.
Exporting data from RStudio
There are numerous ways to do this. You can use save to write your whole workspace to disk. If you just want to save single objects, saveRDS is quite handy. If you want to save datasets (for example data.frames) you can also use readr::write_csv or similar functions.
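A minimal sketch of the export side in R, matching the file names used above (the object names my_model and my_df are placeholders):

# Save a single R object for later use with readRDS() on the server.
saveRDS(my_model, "data.rds")
# Save a data.frame as a semicolon-delimited csv, readable with readr::read_csv2().
readr::write_csv2(my_df, "more_data.csv")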

Clarification on Sitecore link database synchronization in multi-server environments

The Sitecore Guide states this:
To ensure that Sitecore automatically updates the link database in the CD environment:
* The CD and CM instances must use the same name to refer to the publishing target database across the environments (typically Web).
* One of the following conditions should be met:
  ** The Core database should be shared or replicated between the CM and CD instances.
  ** The Link database data should be configured to be stored in a database which is shared between CM and CD publishing target database (typically Web).
Two things aren't clear to me:
The line with the first *: I assume this means that if I have two web DBs, one being "web" and the other being "web2", then the CM needs to use those names, and CD1 needs to use "web" and CD2 needs to use "web2", yes?
The last line with **: by "shared", does this mean that CD1 and CD2 would need to use the same web database, or does it just mean that as long as CM, CD1, and CD2 are set to use their respective web DBs to store the Link DB, the Link DB will be updated on publish? Which database should the CM be configured to use to store its Link DB? It has two webs (web1, web2).
Here are details of our environment for context:
Our CM environment is 1 web server and 1 DB server. Our CD environment is two load balanced web servers, each with their own DB. So, two publishing targets for the CM to point to.
This is a good question. Typically you may have multiple web DBs for things such as pre-production preview, e.g. "webpreview" as opposed to a public "web" DB. If you have two separate web DBs, "web1" and "web2", and two separate CDs use them respectively, then it seems you must have two separate publishing targets, web1 and web2. In the typical case (where "typical" may just mean simple), there is a single web DB shared by 1-n CDs. So in your case CD1 and CD2 would both read from the same single web DB. Based on this context:
1. It means that whatever connection string 'name' token you use on the CM for the "web" DB, you need to use the same token on CD1 and CD2. So it could be "web" or "webpublic" or similar, but it must be consistent across all 3 instances (CM, CD1, CD2).
2. Yes, CD1 and CD2 would share the same exact web DB, as indicated above. And thus you would set the link database to use that shared "web" (or "webpublic"...) DB.
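For illustration only, a hypothetical ConnectionStrings.config fragment (Sitecore reads its database connections from this file); the only point is that the name token, "web" here, is identical on CM, CD1, and CD2, while the server and catalog values are placeholders:

<!-- Same entry name on CM, CD1 and CD2; only the connection details may differ. -->
<add name="web" connectionString="Data Source=<sql-server>;Initial Catalog=Sitecore_Web;User ID=<user>;Password=<password>" />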

SSH through Python onto Amazon EC2

I have two databases, one on my local machine and one on my Amazon EC2 instance. I run a Python program on my local machine which makes changes to the database there, and I want these changes to be reflected in the database on the EC2 instance, periodically. I want to do this in Python: a script that logs onto the Amazon server, establishes a connection with the database there, and makes the changes.
I came across some modules like pexpect, fabric, and paramiko, but I am struggling with the key authentication.
The way I ssh from my terminal is ssh -i my_rsa_file.pem username@ip_address. There is no password. How do I go about this?
Also, I want to know whether simply using Popen in subprocess to execute the login command would work.
The Boto EC2 documentation here describes the EC2 instance object, of which "key_name" is an attribute. Look about 3/4 of the way down, under "boto.ec2.instance".
http://boto.readthedocs.org/en/latest/ref/ec2.html
So, e.g., you could run some instances as follows, and then store the first instance as "inst":
reservation = conn.run_instances(...)
inst = reservation.instances[0]
To retrieve your key-pair name as a unicode string, just use:
kp_name = inst.key_name
You can then retrieve the corresponding Boto object using get_key_pair:
kp_obj = conn.get_key_pair(kp_name)
Of course, this is a silly example, since I would have needed my key pair name to run_instances in the first place. May you find a more fruitful application!
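As for the key authentication itself: paramiko can use the .pem file directly, with no password involved, which avoids shelling out via Popen (Popen running the same ssh -i command would also work, but is more fragile). A minimal sketch, where the hostname, username, key path, and remote command are placeholders for your own values:

import paramiko

# Connect with the same key you would pass to `ssh -i`; no password needed.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    hostname="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
    username="ubuntu",
    key_filename="my_rsa_file.pem",
)
# Run whatever sync command you need on the remote database host.
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()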

How do I import a local MySQL db to an RDS db instance?

I've created an RDS instance called realcardiodb (the engine is mysql),
and I've exported my database from my localhost. The file is saved locally, called localhostrealcardio.sql.
Most research says to use mysqldump to import data from a local system to a web server, but my system doesn't even recognize mysqldump.
C:\xampp\mysql>mysqldump
'mysqldump' is not recognized as an internal or external command, operable program or batch file.
How do I resolve this error, if mysqldump is indeed what I should use? (I definitely have MySQL installed on my system.)
Is there a better utility I should use?
Any help is appreciated, especially if you have experience importing mysql to aws rds.
Thanks!
DK
Update 7/31/2012
So I got the error resolved: mysqldump is in the bin directory, C:\xampp\mysql\bin>mysqldump
AWS provides the following instructions for uploading a local database to RDS:
mysqldump acme | mysql --host=hostname --user=username --password acme
Can someone break this down for me?
1) Is the first 'acme' (after the mysqldump command) the name of my local database or the exported sql file I saved locally?
2) Is the hostname the IP address, Public DNS, RDS endpoint, or neither?
3) The username and password, I assume, are the RDS credentials, and the second acme is the name of the database I created in RDS.
Thanks!
This is how I did it for a couple of instances that had data in the MySQL tables.
The steps to creating an RDS database instance:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html
Note: Make sure the RDS instance has a security group configured that relates to the EC2 security group.
http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Workin...
Before we go forward, let me provide a list of what some of the following placeholders are:
host.address.for.rds.server = this will be what is referred to as the "end point" in your RDS description/settings page.
rdsusername = the master user account which you created during RDS setup.
rdsdatabase = a blank database which you created inside the server on your RDS instance.
backupfile.sql = the sql dump file you made of your pre-existing installation's database.
Once you've created a fresh RDS database instance, and have configured its security settings, log into this server (from within an ssh session to your EC2 server) and then create an empty database inside the instance using basic SQL commands.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
create database rdsdatabase;
Then quit out of the MySQL environment inside your RDS server.
\q
This tutorial assumes you already have a backup from your old database. If you don't, go create one now. After that, you’re ready to import that sql dump file into the empty database waiting on your RDS server.
mysql -h host.address.for.rds.server -u rdsusername -p rdsdatabase < backupfile.sql
It might take a few seconds to complete, depending on the size of the sql dump file. Your indication that it is finished is that the bash command prompt reappears.
Note: the command "mysqlimport" is used to import data directly into an existing table inside a database. It might seem like we're "importing" data, but this is not what we're actually doing in this situation. The database we are migrating to has no tables yet, and the sql dump file we're using contains the sql commands to generate the tables it needs.
Confirm the Transfer
Now, if you didn't get any error messages, then your sql transfer probably worked. If you want, you can double check to see if it did by connecting to your RDS database server, looking up the database you created, and check to see if the tables are now present.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
use rdsdatabase;
show tables;
I prefer using MySQL Workbench. It's much easier and more user-friendly than the command line.
It provides a simple GUI.
You can use MySQL Workbench or SQLyog.
These are the steps that I did.
1) Install MySQL Workbench.
2) In AWS console, there must be a security group for your RDS instance.
Add an inbound rule to that group for allowing connections from your machine.
It's simple. Add your IP-address.
3) Open MySQL Workbench and add a new connection.
4) Give the connection a name you prefer.
5) Choose the connection method: Standard TCP/IP.
6) Enter your RDS endpoint in the Hostname field.
7) Port: 3306
8) Username: master username (the one you created during the creation of your RDS instance)
9) Password: master password
10) Click Test Connection to check your connection.
11) If connection is successful, click OK.
12) Open the connection.
13) You will see your database 'realcardiodb' there.
14) Now you can import your mysqldump file into this database. Go to Server -> Data Import.
15) You can check whether the data has been migrated by simply opening a blank SQL file and typing in basic SQL commands like use database; select * from table;
That's it. Voilà.
If you have a backup.sql on your PC, there is no need to transfer it to EC2. Just run the line below in a terminal on your PC.
$ mysql -h rdsinstance-hostaddress-ending.rds.amazonaws.com -u rds_username -p rds_database < /path/to/your/backup.sql
Enter password: (the password for rds_username)
That's all.
Import backup directly from existing remote server
SSH into your remote server.
Get the remote server's mysql backup (backup/path/backupfile.sql).
Import the backup file into RDS mysql while you are in the remote server's shell:
mysql -h your-mysql-instance.region.rds.amazonaws.com -u db_username -p db_name < backup/path/backupfile.sql
Note:
I have tried all of the above approaches to import my existing backup into a new RDS database, including going through EC2 as in the AWS documentation. It was a 10GB backup, so I also tried it table by table. The process showed as completed, but some data was missing for large tables, so I had to write a DB-to-DB data migration script.
Using Workbench:
Set up the connection.
Go to the management tab and click on Data Import/Restore.
Click on "Import from Self-Contained File".
Choose your mysqlbackup.sql file.
Select the default database.
Click the Start Import button.
Using the command line (on Windows):
mysqldump -u <localuser> --databases world --single-transaction --compress --order-by-primary -p<localpassword> | mysql -u <rds-user-name> --port=3306 --host=<rds-endpoint> -p<rds-password>
For more detail, please refer to:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.SmallExisting.html
or
https://docs.bitnami.com/aws/how-to/migrate-database-rds/#using-phpmyadmin-110
Hope it helps.
Step-by-step instructions on how to migrate an existing MySQL/MariaDB database to an already running RDS instance:
Here is the AWS RDS MySQL document for importing customer data into RDS:
http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport (see the sketch after this list)
Enable automated backups again
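For the load step, a minimal sketch of the mysqlimport call, reusing the placeholder names from earlier (host.address.for.rds.server, rdsusername, rdsdatabase); customers.txt is a hypothetical flat file:

mysqlimport --local --host=host.address.for.rds.server --user=rdsusername -p rdsdatabase customers.txt

Note that mysqlimport strips the file extension and loads the rows into a table named customers, which must already exist in rdsdatabase; --local makes it read the file from the client machine rather than the server.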