I have a problem getting my local database into Heroku. I'm currently building a Django web application and trying to learn how to get a local database onto Heroku. When I tried to push my local PostgreSQL database with the pg:push command, this error happened:
pg_restore: error: unrecognized data block type (0) while searching archive
! pg_restore errored with 1
Any idea why this happened?
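For reference, this is the general form of the command I ran (the local database name and app name below are just placeholders):
heroku pg:push my_local_db DATABASE_URL --app my-heroku-app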
I'm a lazy (read: efficient) programmer, so this is much easier than paying for AWS backup storage; I keep the data in Excel/CSV sheets instead.
This saves cost and avoids pg:push/pg:pull, which was not efficient for me.
Use CMD as admin to insert the Excel (CSV) data into the Heroku Postgres database.
Follow these instructions:
1. Open CMD as admin.
2. heroku pg:psql postgresql-rugged-08088 --app sample
3. CREATE TABLE SERIAL_T ( id SERIAL, SERIAL VARCHAR(50), USE INT, DEVICES TEXT[], PRINTED BOOLEAN, PRIMARY KEY (id));
4. \COPY SERIAL_T (SERIAL, USE, DEVICES, PRINTED) FROM 'C:\Users\PATH\EXCEL-03-27-2021.csv' DELIMITER ',' CSV HEADER;
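Optionally, to confirm the rows landed, run a quick count in the same psql session:
SELECT COUNT(*) FROM SERIAL_T;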
I am in the process of migrating a database from an external server to Cloud SQL (2nd gen). I have been following the recommended steps; the 2TB mysqldump process completed and replication started. However, I got an error:
'Error ''Access denied for user ''skip-grants user''@''skip-grants host'' (using password: NO)'' on query. Default database: ''mondovo_db''. Query: ''LOAD DATA INFILE ''/mysql/tmp/SQL_LOAD-0a868f6d-8681-11e9-b5d3-42010a8000a8-6498057-322806.data'' IGNORE INTO TABLE seoi_volume_update_tracker FIELDS TERMINATED BY ''^#^'' ENCLOSED BY '''' ESCAPED BY ''\'' LINES TERMINATED BY ''^|^'' (keyword_search_volume_id)'''
Two questions:
1) I'm guessing the error has come about because Cloud SQL requires LOAD DATA LOCAL INFILE instead of LOAD DATA INFILE? However, I'm quite sure we only run LOAD DATA LOCAL INFILE on the master, so I'm not sure how LOCAL gets dropped during replication. Is that possible?
2) I can't stop the slave to skip the error and restart, since SUPER privileges aren't available, so I'm not sure how to skip this error and avoid it in the future while the final sync happens. Suggestions?
There was no way to work around the slave replication error in Google Cloud SQL, so I had to come up with another approach.
Since replication wasn't going to work, I had to do a copy of all the databases. However, because the aggregate size of all my DBs was 2TB, it was going to take a long time.
The final strategy that took the least amount of time:
1) Prerequisite: you need at least 1.5x the current database size in free disk space on your SQL drive. My 2TB DB was on a 2.7TB SSD, so I had to temporarily move everything to a 6TB SSD before I could proceed with the steps below. DO NOT proceed without sufficient disk space; you'll waste a lot of your time, as I did.
2) Install cloudsql-import on your server. Without this you can't proceed, and it took me a while to discover it. It facilitates the quick transfer of your SQL dumps to Google.
3) I had multiple databases to migrate. If you're in a similar situation, pick one at a time, and prevent any further inserts/updates for the sites that access that DB. I put a "Website under Maintenance" page on each site while I executed the operations outlined below.
4) Run the commands in the steps below in a separate screen. I launched a few processes in parallel on different screens.
screen -S DB_NAME_import_process
5) Run mysqldump using the following command. Note that the output is an SQL file, not a compressed file:
mysqldump {DB_NAME} --hex-blob --default-character-set=utf8mb4 --skip-set-charset --skip-triggers --no-autocommit --single-transaction --set-gtid-purged=off > {DB_NAME}.sql
6) (Optional) For my largest DB of around 1.2TB, I also split the DB backup into individual table SQL files using the script mentioned here: https://stackoverflow.com/a/9949414/1396252
7) For each of the files dumped, I converted the INSERT commands into INSERT IGNORE because I didn't want any further duplicate errors during the import process.
cat {DB_OR_TABLE_NAME}.sql | sed s/"^INSERT"/"INSERT IGNORE"/g > new_{DB_OR_TABLE_NAME}_ignore.sql
8) Create a database with the same name on Google Cloud SQL as the one you want to import. Also create a global user that has permission to access all the databases (one way to do both is sketched below).
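For reference, one way to create both from a mysql client connection (the names here are the same placeholders used in the next step):
CREATE DATABASE {DB_NAME_CREATED_ON_GOOGLE};
CREATE USER '{DB_USER_ON_GCLOUD}'@'%' IDENTIFIED BY '{DB_PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO '{DB_USER_ON_GCLOUD}'@'%';
FLUSH PRIVILEGES;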
9) Now, we import the SQL files using the cloudsql-import plugin. If you split the larger DB into individual table files in Step 6, use the cat command to combine a batch of them into a single file and make as many batch files as you see appropriate.
Run the following command:
cloudsql-import --dump={DB_OR_TABLE_NAME}.sql --dsn='{DB_USER_ON_GCLOUD}:{DB_PASSWORD}@tcp({GCLOUD_SQL_PUBLIC_IP}:3306)/{DB_NAME_CREATED_ON_GOOGLE}'
10) While the process is running, you can step out of the screen session using Ctrl+a followed by Ctrl+d (or refer here) and then reconnect to the screen later to check on progress. You can create another screen session and repeat the same steps for each of the DBs/batches of tables that you need to import.
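Once an import finishes, a quick sanity check you can run (same placeholders as above; {TABLE_NAME} is whichever table you want to spot-check) is to compare row counts between the source and Cloud SQL:
mysql -h {GCLOUD_SQL_PUBLIC_IP} -u {DB_USER_ON_GCLOUD} -p -e "SELECT COUNT(*) FROM {DB_NAME_CREATED_ON_GOOGLE}.{TABLE_NAME};"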
Because of the large sizes I had to import, I believe it took me a day or two (I don't remember exactly, since it's been a few months), but I know it was much faster than any other way. I had tried using Google's copy utility to move the SQL files to Cloud Storage and then using Cloud SQL's built-in visual import tool, but that was slow and not as fast as cloudsql-import. I would recommend this method until Google fixes the ability to skip slave errors.
I finally managed to install Oracle APEX 5.1.2, but I have a problem with creating a workspace. Whenever I try to do so, at the end I get an error:
I tried to create this workspace with the following values:
The strange thing is that when I select Yes for the Reuse Existing Schema option, no schemas are listed. Is it possible that APEX somehow doesn't have access to manage schemas?
I am using APEX with ORDS. On the home page I get info that I have 1 workspace and 1 schema.
I've tried:
Using strong passwords as mentioned here
Changing the provisioning type to Request: the effect is the same. If a user requests a workspace and I accept it, I get the exact same error.
Enabling OMF with the parameter DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' -> no *.dbf files are created in that directory either before or after the change.
The root cause of this problem was installing APEX both on CDB$ROOT and, as a result, on PDB1. I uninstalled APEX from the root, recompiled with the @utlrp.sql script as in this tutorial, and installed APEX again, but only on PDB1. The workspace was then created successfully.
I had the same problem (APEX 18.1/ORDS) in a database without a CDB configured. The solution in my case was to run the @apex_rest_config.sql script.
After that, the workspace is created without any problem.
If you don't want to reinstall APEX to move it from the CDB to the PDB, I suggest you try setting the PDB mapping in your ORDS config file.
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/20.2/aelig/configuring-REST-data-services.html#GUID-694B2F89-CE4F-4AB0-88E2-EB35D03DEC3C
I did it by adding
<entry key="db.serviceNameSuffix"></entry>
to the end of my defaults.xml (you can find its location by running
$ java -jar ords.war configdir).
Then access apex with /yourpdb in the path: e.g.
http://server:port/ords/pdb1
This will run APEX from that PDB instead of from the CDB and will create the workspace in there; that should work OK. It did for me.
I had the same problem on Oracle 12c; following this link solved it. The problem is that users can't create a workspace in the CDB, so you must switch the session container to your PDB with the following steps:
$root> cd ~/TEMP/apex
$root> sqlplus
Enter user-name: sys as sysdba
Enter password:
SQL> exec dbms_xdb.sethttpport(0); /* disable the XDB HTTP port in the root container */
SQL> alter session set container=YOURAPPEXPDB;
SQL> exec dbms_xdb.sethttpport(8181);
SQL> alter system register;
//install oracle apex again
To remove Oracle APEX I used this link; it worked perfectly for me.
I'm using WampServer, and when I run this query:
insert into users (pname,puname,email,psw,Addr,Gps,Mob,Ass,About,Pic,Prev,Want,Pt,Rate,Dat)
values ("aaa","admin","a#a.a","a","a","33.3216452,44.35840350000001","11111111111",
"","","img/icon.png",0,0,0,0,"2014-01-15 08:04:32");
From phpMyAdmin it works fine, but when I run it from the browser (PHP code) I get this error:
You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near '"aaa","admin","a#a.a","a","a",' at line 1
And when I run it from EMS manager it says "Query OK, 1 rows affected", but no row is added.
The problem:
My C++ application connects to a MySQL server, reads the first/header line of each db export .txt file, builds a CREATE TABLE statement to prepare for the import, and executes it against the database (no problem with that; the table appears just as intended). But when I try to execute the LOAD DATA LOCAL INFILE to import the data into the newly created table, I get the error "The used command is not allowed with this MySQL version". Yet this works from the CLI! When I execute the command using mysql -u <user> -p<password> -e "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY '|' LINES TERMINATED BY '\r\n';" it works flawlessly.
The Situation:
My company gets a large quantity of database exports (160 files / 10GB of .txt files that are '|' delimited) from our vendors on a monthly basis, which have to replace the old vendor lists. I am working on a smallish C++ app to deal with it on my work desktop. The application is meant to set up the required tables, import the data, then execute a series of intermediate queries against multiple tables to assemble information in a series of final tables, which is then itself exported and uploaded to the production environment for use in the company's e-commerce website.
My Setup:
Ubuntu 12.04
MySQL Server v. 5.5.29 + MySQL Command Line client
Linux GNU C++ Compiler
libmysqlcppconn is installed and I have the required mysqlconn library linked in.
I have already overcome/tried the following issues/combinations:
1.) I have already discovered (the hard way) that LOAD DATA [LOCAL] INFILE statements must be enabled in the config -- I have the "local-infile" option set in the configuration files for both client and server (fixed by updating /etc/mysql/my.cnf with "local-infile" statements for the client and server; a snippet of what this looks like is shown after this list). NOTE: I could have restarted mysql-server with --local-infile=1, but this is my local dev environment, so I just wanted it turned on permanently.
2.) LOAD DATA LOCAL INFILE seems to fail to perform the import (from the CLI) if the target import file does not have execute permissions enabled (fixed with chmod +x target_file.txt)
3.) I am using the mysql root account in my application code (because it's my localhost, not production, and this particular program will never run on a production server).
4.) I have tried executing my compiled binary program using the sudo command (no change, same error "The used command is not allowed with this MySQL version")
5.) I have tried changing the ownership of the binary file from my normal login to root (no change, same error "The used command is not allowed with this MySQL version")
6.) I know the libcppmysqlconn is working because I am able to connect and perform the CREATE TABLE call without a problem, and I can do other queries and execute statements
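For clarity, the local-infile settings from item 1 look roughly like this in my /etc/mysql/my.cnf (all other settings omitted):
[mysqld]
local-infile=1

[mysql]
local-infile=1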
What am I missing? Any suggestions? Thanks in advance :)
After much diligent trial and error with the /etc/mysql/my.cnf file (I know this is a permissions issue because it works on the command line but not from the connector), and after much googling and finding some back-alley tech support posts, I've concluded that the MySQL C++ connector (for whatever reason) does not expose a way for developers to enable the local-infile=1 option.
Apparently some people have been able to hack/fork the MySQL C++ connector to expose the functionality, but no one posted their source code -- they only said it worked. There is, however, a workaround in the MySQL C API: after you initialize the connection, you would use this:
mysql_options( &mysql, MYSQL_OPT_LOCAL_INFILE, 1 );
which apparently allows the LOAD DATA LOCAL INFILE statements to work with the MySQL C API.
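For anyone who wants to go the C API route, here is a minimal, untested sketch of how that workaround might look (the database name and credentials are placeholders; the file and table names are the ones from the question above; note that the MySQL docs describe the argument to MYSQL_OPT_LOCAL_INFILE as a pointer to an unsigned int, so the sketch passes one rather than a literal 1):

/* Minimal sketch: enable LOCAL INFILE via the MySQL C API, then run the load. */
#include <mysql/mysql.h>
#include <stdio.h>

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    unsigned int enable_local = 1;   /* non-zero enables the LOCAL capability */
    mysql_options(conn, MYSQL_OPT_LOCAL_INFILE, &enable_local);

    if (!mysql_real_connect(conn, "localhost", "root", "password",
                            "mydb", 3306, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    if (mysql_query(conn,
            "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable "
            "FIELDS TERMINATED BY '|' LINES TERMINATED BY '\\r\\n'")) {
        fprintf(stderr, "load failed: %s\n", mysql_error(conn));
    }

    mysql_close(conn);
    return 0;
}

Compile with something like gcc load_infile.c $(mysql_config --cflags --libs).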
Here are some reference articles that lead me to this conclusion:
1.) How can I get the native C API connection structure from MySQL Connector/C++?
2.) Mysql 5.5 LOAD DATA INFILE Permissions
3.) http://osdir.com/ml/db.mysql.c++/2004-04/msg00097.html
Essentially, if you want the ability to use LOAD DATA LOCAL INFILE from a programmatic connector API, you have to use the MySQL C API or hack/fork the existing MySQL C++ API to expose the connection structure. Or just stick to executing LOAD DATA LOCAL INFILE from the command line :(
I've created an RDS instance called realcardiodb (the engine is MySQL),
and I've exported my database from my localhost. The file is saved locally as localhostrealcardio.sql.
Most research says to use mysqldump to import data from a local system to a web server, but my system doesn't even recognize mysqldump.
C:\xampp\mysql>mysqldump
'mysqldump' is not recognized as an internal or external command, operable program or batch file.
How do I resolve this error if I should use mysqldump? (I definitely have MySQL installed on my system.)
Is there a better utility I should use?
Any help is appreciated, especially if you have experience importing mysql to aws rds.
Thanks!
DK
Update 7/31/2012
So I got the error resolved. mysqldump is in the bin directory C:\xampp\mysql\bin>mysqldump
AWS provides the following instructions for uploading a local database to RDS:
mysqldump acme | mysql --host=hostname --user=username --password acme
Can someone break this down for me?
1) Is the first 'acme' (after mysqldump command) the name of my local database or the exported sql file I saved locally?
2) Is the hostname the IP address, the public DNS, the RDS endpoint, or none of these?
3) I assume the username and password are the RDS credentials, and the second acme is the name of the database I created in RDS?
Thanks!
This is how I did it for a couple of instances that had data in the MySQL tables.
The steps to creating an RDS database instance:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html
Note: Make sure the RDS instance has a security group configured that relates to the EC2 security group.
http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Workin...
Before we go forward, let me provide a list of what some of the following placeholders are:
host.address.for.rds.server = this will be what is referred to as the "end point" in your RDS description/settings page.
rdsusername = the master user account which you created during RDS setup.
rdsdatabase = a blank database which you created inside the server on your RDS instance.
backupfile.sql = the sql dump file you made of your pre-existing installation's database.
Once you've created a fresh RDS database instance, and have configured its security settings, log into this server (from within an ssh session to your EC2 server) and then create an empty database inside the instance using basic SQL commands.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
create database rdsdatabase;
Then quit out of the MySQL environment inside your RDS server.
\q
This tutorial assumes you already have a backup from your old database. If you don't, go create one now. After that, you’re ready to import that sql dump file into the empty database waiting on your RDS server.
mysql -h host.address.for.rds.server -u rdsusername -p rdsdatabase < backupfile.sql
It might take a few seconds to complete, depending on the size of the sql dump file. Your indication that it is finished is that the bash command prompt reappears.
Note: the command “mysqlimport” is used to import data directly into an existing table inside a database. It might seem like we’re “importing” data, but this is not what we’re actually doing in this situation. The database we are migrating to has no tables yet, and the sql dump file we’re using contains the sql commands to generate the tables it needs.
Confirm the Transfer
Now, if you didn't get any error messages, then your sql transfer probably worked. If you want, you can double check to see if it did by connecting to your RDS database server, looking up the database you created, and check to see if the tables are now present.
mysql -h host.address.for.rds.server -P 3306 -u rdsusername -p
(enter your password)
use rdsdatabase;
show tables;
I prefer using MySQL Workbench. It's much easier and more user-friendly than the command-line way.
It provides a simple GUI.
Use MySQL Workbench or SQLyog.
These are the steps that I did.
1) Install MySQL Workbench.
2) In AWS console, there must be a security group for your RDS instance.
Add an inbound rule to that group for allowing connections from your machine.
It's simple. Add your IP-address.
3) Open MySQL workbench, Add a new connection.
4) Give the connection a name you prefer.
5) Choose connection method- Standard TCP/IP
6) Enter your RDS endpoint in the field of Hostname.
7) Port: 3306
8) Username: master username (the one which you created during the creation of your RDS instance)
9) Password: master password
10) Click Test Connection to check your connection.
11) If connection is successful, click OK.
12) Open the connection.
13) You will see your database 'realcardiodb' there.
14) Now you can import your mysqldump file into this database. Go to Server -> Data Import.
15) You can check whether the data has been migrated by simply opening a blank SQL tab and typing in basic SQL commands like USE database; SELECT * FROM table;
That's it. Voila.
If you have a backup.sql on your PC, there is no need to transfer it to EC2. Just run the line below in a terminal on your PC:
$ mysql -h rdsinstance-hostaddress-ending.rds.amazonaws.com -u rds_username -p rds_database < /path/to/your/backup.sql
Enter password: paswd_mysql_user
That's all.
Import backup directly from existing remote server
SSH connect to your remote server
Get the remote server mysql backup (backup/path/backupfile.sql)
Import the backup file into RDS MySQL while you are in the remote server shell:
mysql -h your-mysql-instance.region.rds.amazonaws.com -u db_username -p db_name < backup/path/backupfile.sql
Note:
I have tried all of the above approaches to import my existing backup into a new RDS database, including going through EC2 as in the AWS documentation. It was a 10GB backup, so I also tried it table by table. It showed the process as completed, but some data was missing for the large tables, so I had to write a DB-to-DB data migration script.
Using Workbench:
Set up the connection.
Go to the Management tab and click on Data Import/Restore.
Click on Import from Self-Contained File.
Choose your mysqlbackup.sql file.
Select the default database.
Click on the Start Import button.
Using the command line (on Windows):
mysqldump -u <localuser> --databases world --single-transaction --compress --order-by-primary -p<localpassword> | mysql -u <rds-user-name> --port=3306 --host=<rds-endpoint> -p<rds-password>
For more detail, please refer to:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.SmallExisting.html
or
https://docs.bitnami.com/aws/how-to/migrate-database-rds/#using-phpmyadmin-110
Hope it helps.
Step-by-step instructions on how to migrate an already existing MySQL/MariaDB database to an already running RDS instance:
Here is the AWS RDS MySQL document on importing customer data into RDS:
http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport (see the example after this list)
Enable automated backups again
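For example, the mysqlimport step might look something like this (the endpoint, user, database, and file path are placeholders; mysqlimport derives the target table name from the file name):
mysqlimport --local --compress --host=myinstance.xxxxxxxx.us-east-1.rds.amazonaws.com --user=rdsusername -p --fields-terminated-by=',' --lines-terminated-by='\n' rdsdatabase /path/to/tablename.csv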