I'm importing data from Cloud Storage into GCP Cloud SQL (MySQL). When I select the .sql file and the database, I get this error message: "Sorry, there's a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later."
I tried using only one catalog table and I get the same error. Could you help me, please?
Have you created the database beforehand?
It seems the problem might be with the SQL file. Is that file a SQL dump?
The normal process for migrating to Cloud SQL would be:
Create the instance; if your source is MySQL, you need to select MySQL as the engine.
Create the database.
Select the database and click IMPORT.
Choose import from a SQL file, and select the file.
You also need to have permissions to create the instance and the database.
Are you entering the correct parameters (a correct GCS path, a valid file, and the correct database and table names)? Please note that for a compressed (.gz) file or a SQL dump file, the SQL format should be selected instead of CSV.
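If the console keeps showing that generic error, it can also help to run the same import from the gcloud CLI, which usually reports a more specific failure. A minimal sketch; the instance, bucket, and database names below are placeholders for your own:

# import a SQL dump from Cloud Storage into a Cloud SQL instance
gcloud sql import sql my-instance gs://my-bucket/dump.sql --database=my_database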
I'm installing WSO2IS 5.10.0 and I am creating five PostgreSQL databases per the column titled Recommended Database Structure in this document:
https://is.docs.wso2.com/en/next/setup/setting-up-separate-databases-for-clustering/
Actually it's six databases if you count the CARBON_DB. The five PostgreSQL databases are named as follows: SHARED_DB, USERSTORE_DB, IDENTITY_DB, CONSENT_MGT_DB and BPS_DB. I already have them configured in the deployment.toml file. I've created the databases in PostgreSQL and I have to manually execute the SQL files against each database in order to create the schema for each database. Based on the document in the link, I have figured out which SQL files to execute for four of the databases. However, I have no idea what SQL files I need to execute to create the USERSTORE_DB schema. It's got to be one of the files under the dbscripts directory but I just don't know which one(s). Can anybody help me on this one?
The CARBON_DB contains product-specific data. By default it is stored in the embedded H2 database, and there is no requirement to point it to PostgreSQL. Hence you only need to worry about these databases: SHARED_DB, USERSTORE_DB, IDENTITY_DB, CONSENT_MGT_DB, and BPS_DB.
As for your question, you can find the DB scripts related to USER_DB (USERSTORE_DB) in the /dbscripts/postgresql.sql file. This file contains the tables whose names start with UM_; these are the user management tables. You can use those table scripts to create the tables in USERSTORE_DB.
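For example, once you have copied the UM_ statements out of /dbscripts/postgresql.sql into their own file, you could run them against the user store database with psql. A minimal sketch; the user name and the file name um-tables.sql are placeholders:

# run the extracted UM_ table scripts against USERSTORE_DB
psql -U wso2admin -d USERSTORE_DB -f um-tables.sql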
Refer to the following doc for more information:
[1] https://is.docs.wso2.com/en/5.10.0/administer/user-management-related-tables/
I have never had to deal with databases before, so sorry in advance!
I was asked to create a database for a project and store the data output from a C++ program in the database. I read up on databases on Google and came across MySQL, and in particular database connections. As far as I understood, first a database has to be created (for example with MySQL), and once data are inserted, it's possible to access them. However, it's not totally clear to me what can be achieved with such a connection, or how to save data from a C++ program into the database directly.
Based on what I read on the net, these two things should be related; is that right? I would really appreciate some help, an example, or a clarification about these two questions. Thanks in advance for your time!
First you should create the DB and the tables.
You can do it with each DB IDE's wizards, or you can write it as a script.
So here is a script for MySQL:
CREATE DATABASE test_db; -- this creates a DB called test_db
I guess you should store a message and a timestamp, so a possible table (in MySQL) would be:
USE test_db; -- from now on the script uses test_db unless another DB is specified explicitly
-- create a table with an id, a message, and a timestamp
CREATE TABLE output_table (
msg_ID INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
msg TEXT, -- MySQL has no VARCHAR(max); use TEXT or VARCHAR with an explicit length
msg_TS TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
In the above table you only need to supply the message, since all the rest is filled in automatically. So an INSERT command may look like this:
INSERT INTO output_table (msg) VALUES ('this is a message');
When you want to check the whole table, you run the following:
SELECT * FROM output_table;
Now you need to connect this to your C++ code.
Generally, you'll need to know the DB name, the user name, and the password in order to connect to the DB.
You can use ODBC plus the MySQL ODBC Connector. This is preferable because your C++ code is then not limited to one specific DB. If you are sure you will use only MySQL, you can also use MySQL Connector/C++ directly. Either way, both give you the option to run SQL commands on your DB.
HERE you can find a MySQL Connector/C++ sample.
HERE you can find an ODBC sample.
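To make the Connector/C++ option concrete, here is a minimal sketch using the JDBC-style Connector/C++ 1.1 API to insert a row into the table above. Treat it as a sketch: the host, user, and password are placeholders for your own settings.

#include <iostream>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/exception.h>
#include <cppconn/prepared_statement.h>
#include <mysql_driver.h>

int main() {
    try {
        // get the driver and open a connection (host, user, and password are placeholders)
        sql::mysql::MySQL_Driver* driver = sql::mysql::get_mysql_driver_instance();
        std::unique_ptr<sql::Connection> con(
            driver->connect("tcp://127.0.0.1:3306", "user", "password"));
        con->setSchema("test_db");  // the database created above

        // insert a message; msg_ID and msg_TS are filled in automatically
        std::unique_ptr<sql::PreparedStatement> stmt(
            con->prepareStatement("INSERT INTO output_table (msg) VALUES (?)"));
        stmt->setString(1, "this is a message");
        stmt->execute();
    } catch (const sql::SQLException& e) {
        std::cerr << "SQL error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}

Compile it against the connector library, typically with something like g++ insert_msg.cpp -lmysqlcppconn.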
I am recreating a web app in Django that was running on a server that was terminated; fortunately, I made a backup of all the code. My problem is with the database, because I do not know how to transfer all the data from the old db.sqlite3 Django database into the new one.
I found a question similar to mine, Django: transfer data from one database to another, but that user wanted to transfer data from specific columns because the models.py files of the old and new databases were slightly different. In my case, the models.py files of the old and new databases are the same.
Alternatives
I am using DB Browser for SQLite to explore the contents of the old database, and I could add each row manually through the Django administration, but this would take too much time.
I could copy the old db.sqlite3 and drop it into the new web app, since the models.py file remains the same, but this solution does not seem appropriate to me; it is crude and I think it goes against good software practice.
How should I proceed to transfer the data from the old database to the new one?
This seems like a one-time copy of one DB to another. I don't see how this goes against good software practice unless you have to copy this DB frequently. I've done it before when migrating servers, and it doesn't cause any issues, assuming the two instances of the application are the same build.
I was able to solve my problem with some minor tricks, because Django has no straightforward functionality for transferring all your data between two SQLite databases. Here is the trick I used:
1. Download DB Browser for SQLite to explore and export the contents of your old database to a .csv file. Open your database with DB Browser for SQLite, go to the tables view to see all your tables, then right-click any of them and choose the export-as-CSV option to generate the CSV file (name_of_your_csv_file.csv). The alternative is to open your database with sqlite3.exe in cmd or PowerShell and do the export with the following (.tables lists your available tables, and the final SELECT is what actually writes the file):
.tables
.headers on
.mode csv
.output name_of_your_csv_file.csv
SELECT * FROM table_name_goes_here;
2. There are two choices at this point: you can either insert all the records at once into your new database, or you can drop the existing tables from the new database, then recreate them and import the .csv file. I went for the drop option because there were more than 100 records to migrate.
# sqlite3
# check the structure of your table so you can re-create it
.schema <table_name>
# the result might be something like:
# CREATE TABLE IF NOT EXISTS "web_app_navigator_table" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "ticket" varchar(120) NOT NULL);
# drop the table
DROP TABLE web_app_navigator_table;
# re-create the table
CREATE TABLE web_app_navigator_table (id integer NOT NULL PRIMARY KEY AUTOINCREMENT, ticket varchar(120) NOT NULL);
# do the import of the csv file
.import C:/Users/a/PycharmProjects/apps/navigator/name_of_your_csv_file.csv table_name_goes_here
You might see an error such as csv:1: INSERT failed: datatype mismatch, but this only means that the first row of your CSV file was not inserted, because it contains the headers of the data exported from your old database.
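After the import, you can sanity-check that all the rows arrived, for example with a simple count (same placeholder table name as above):

SELECT COUNT(*) FROM table_name_goes_here;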
Hi, I have a scenario: I have a database dump that I want to import into my new Rails web application's database. I have used an ActiveRecord ETL gem, but now the requirement is to use Kettle ETL for importing the data. I have no idea about Kettle; can someone help me, or link me to tutorials that I can follow to get this done?
Thanks in advance :)
Actually it's very easy. Just use a Table Input step to read from the current MySQL table, do some transformation, and pipe it into a Table Output step. In the Table Output step, point it to the target MySQL connection, fill in the table name, and click the "Generate SQL" button to execute the SQL it generates.
For example, you can see my article for a sample Excel-to-MySQL migration; the operation can easily be adapted for a MySQL-to-MySQL migration. (The site is in Indonesian; you can translate it using Google Translate.)
http://pentaho.phi-integration.com/kettle/export-excel-ke-mysql
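As a minimal illustration, the query you put into the Table Input step can be as simple as the following (the table name is a placeholder for your source table):

-- read every row from the source table
SELECT * FROM source_table;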
I want to know how to rename an ODBC data source in Informatica.
I can think of two approaches. One, create a new ODBC connection on your PC and use that connection to re-import the source. Two, edit the source and click on the Rename button; the second field is the database name, and editing that field will make your source appear to have been imported using an ODBC connection with whatever name you put in that field.