MySQL Connector CSV import - C++

I am trying to import a CSV file into a MySQL server. Everything works fine with the MySQL connector: creating a database, showing tables, and so on. But my program crashes when I execute a statement like:
stmt->execute("LOAD DATA INFILE 'c:\Programming\SQL.csv' REPLACE INTO TABLE sqltable FIELDS TERMINATED BY '\;' ENCLOSED BY '\"' LINES TERMINATED BY '\n' IGNORE 1 ROWS (Company, Address, City)")
Can anyone help me out?
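No stack trace is shown, so the following is only a guess, but two things stand out. In a C++ string literal every backslash must itself be escaped, so "c:\Programming\SQL.csv" does not contain the characters it appears to; MySQL additionally treats the backslash as an escape inside SQL strings, so the safest path form on Windows is forward slashes, which MySQL accepts. The backslash in '\;' is also unnecessary. Finally, Connector/C++ reports errors by throwing sql::SQLException, and an uncaught exception terminates the program, which looks like a crash: wrap the execute() call in a try/catch to see the real error. A minimal sketch of the corrected statement, shown via Python/MySQLdb for brevity since the SQL is the same from any client (connection details are placeholders):

import MySQLdb

# Connection details are placeholders.
conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="password", db="mydb")
cur = conn.cursor()
cur.execute(
    "LOAD DATA INFILE 'c:/Programming/SQL.csv' "   # forward slashes sidestep escaping
    "REPLACE INTO TABLE sqltable "
    "FIELDS TERMINATED BY ';' ENCLOSED BY '\"' "   # plain ';', no backslash needed
    "LINES TERMINATED BY '\\n' "
    "IGNORE 1 ROWS (Company, Address, City)")
conn.commit()
conn.close()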

Related

SAS EG Import Data Error - Delimiter and column not recognized

I am very new to SAS and would like to import a CSV file into SAS. I have tried using PROC IMPORT and also the Import Data tool, but I fail to import the data.
I get the following errors:
ERROR: The name Page Name (prop31) is not a valid SAS name
When importing the data, it doesn't accept the delimiter and imports everything as one column.
Any suggestions?
Thank you!

Trying to remove database name from sql file

I'm trying to import a Postgres dump into a sqlite3 database.
pg_dump adds the database name to the expressions, and this is no good for sqlite3:
CREATE TABLE dbname.table
Is it possible to tell sqlite3 to ignore the database name?
The fallback is to write a regexp that modifies the SQL file, but I'm no regexp magician. I've come up with something along the lines of:
printf "\nCREATE TABLE dbname.data;\nINSERT INTO dbname.data VALUES (\"the data in dbname.\")\n" | sed -e '/^CREATE TABLE/s/dbname.//g' -e '/^INSERT INTO/s/dbname.//g'
But this is incorrect, because I want to substitute only the first occurrence...
Can you give me some suggestions?
You actually don't have to change your file of SQL statements:
$ sqlite3
sqlite> ATTACH 'your_database.db' AS dbname;
sqlite> .read dump_file.sql
ATTACH will open a database using the schema name dbname so that dbname.tablename will refer to it.
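If you do prefer rewriting the dump instead, two details matter: the dot in dbname. should be escaped (an unescaped . matches any character), and sed substitutes only the first match on a line when the g flag is dropped. Here is a rough Python equivalent of that first-occurrence-per-line behavior, a sketch only (file names are placeholders):

# Strip the "dbname." prefix from CREATE TABLE / INSERT INTO lines,
# replacing only the first occurrence per line so that string data
# containing "dbname." is left alone. File names are placeholders.
import re

with open("dump_file.sql") as src, open("dump_for_sqlite.sql", "w") as dst:
    for line in src:
        if line.startswith(("CREATE TABLE", "INSERT INTO")):
            line = re.sub(r"dbname\.", "", line, count=1)  # first match only
        dst.write(line)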

AWS Glue is not able to read Chinese characters in a MySQL database

I have data stored in an AWS RDS table, source. This table contains some Chinese characters.
How can I configure AWS Glue so that it correctly copies the data in this table to another table, dest?
I've told JDBC to use UTF-8 encoding.
If I print the content of the table in my Python script, I get the following text:
\u4e8c\u578b\u7cd6\u5c3f\u75c5
The original data behind the text above is six Chinese characters.
But the text doesn't seem to be a UTF-8-encoded string, because I can't decode it correctly with an online tool.
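One hedged observation about that output: \u4e8c\u578b\u7cd6\u5c3f\u75c5 looks like the repr of a Python unicode object (what you see when printing a whole row tuple, for instance), not a UTF-8 byte string, which would explain why a UTF-8 decoder makes nothing of it. In Python 2:

# The \uXXXX sequences are Python unicode escapes, not UTF-8 bytes;
# printing the unicode object directly renders the original characters.
text = u'\u4e8c\u578b\u7cd6\u5c3f\u75c5'
print repr(text)   # u'\u4e8c\u578b\u7cd6\u5c3f\u75c5'
print text         # the original Chinese characters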
After the AWS Glue job completes, the data in the destination table, dest, becomes ?????, which is incorrect.
The following Python 2 script, by contrast, prints the Chinese characters in the table correctly:
#!/usr/bin/python
import MySQLdb

db = MySQLdb.connect(host="remoteHost",   # your host, usually localhost
                     user="myName",       # your username
                     passwd="myPassword", # your password
                     db="jonhydb")        # name of the database

# You must create a Cursor object. It will let
# you execute all the queries you need.
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT * FROM source")

# Print the first cell of every row
for row in cur.fetchall():
    print row[0]

db.close()
Do AWS Glue and the above code use different mechanisms to communicate with the MySQL database?
Does anyone know how to handle character-encoding issues when using AWS Glue?
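There is no definitive answer without the Glue job details, but one hedged note: MySQL silently converts characters that a column's (or the connection's) character set cannot represent into ?, so ????? usually means something in the chain, often the destination table's character set or the connection URL, is not actually UTF-8. On the JDBC side the MySQL driver takes useUnicode=true&characterEncoding=UTF-8 in the connection URL; on the Python side, here is a sketch with the character set made explicit (host and credentials are placeholders, as in the question):

import MySQLdb

# Same query as the working script, but with the connection character
# set made explicit; charset/use_unicode make rows come back as
# unicode objects rather than raw bytes. Credentials are placeholders.
db = MySQLdb.connect(host="remoteHost", user="myName",
                     passwd="myPassword", db="jonhydb",
                     charset="utf8", use_unicode=True)
cur = db.cursor()
cur.execute("SELECT * FROM source")
for row in cur.fetchall():
    print row[0]
db.close()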

Does AWS Athena support Sequence Files?

Has anyone tried creating an AWS Athena table on top of Sequence Files? According to the documentation it should be possible, and I was able to execute the CREATE TABLE statement below.
create external table if not exists sample_sequence (
  account_id string,
  receiver_id string,
  session_index smallint,
  start_epoch bigint)
STORED AS sequencefile
location 's3://bucket/sequencefile/';
The statement executed successfully, but when I try to read data from the table it throws the error below:
Your query has the following error(s):
HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://viewershipforneo4j/2017-09-26/000030_0 (offset=372128055, length=62021342) using org.apache.hadoop.mapred.SequenceFileInputFormat: s3://viewershipforneo4j/2017-09-26/000030_0 not a SequenceFile
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 9f0983b0-33da-4686-84a3-91b14a39cd09.
The sequence files themselves are valid. The issue is that no delimiter is defined, i.e. ROW FORMAT DELIMITED FIELDS TERMINATED BY is missing. If, for example, tab is the column delimiter and each record is on its own line, the statement would be:
create external table if not exists sample_sequence (
  account_id string,
  receiver_id string,
  session_index smallint,
  start_epoch bigint)
row format delimited fields terminated by '\t'
STORED AS sequencefile
location 's3://bucket/sequencefile/';

Python MySQLdb not inserting data

Ubuntu version: 12.10
MySQL server version: 5.5.29-0
Python version: 2.7
I am trying to use MySQLdb to insert data into my localhost MySQL server. I don't get any errors when I run the script, but the data isn't entered into my table. I view the tables with phpMyAdmin.
I tried going back to basics and following a tutorial, but got the same result. The weird thing is that I can create and delete tables, but not enter data.
The code from the tutorial even reports that 4 rows were inserted. What is preventing the data from being entered into the table when the script reports everything is fine?
import MySQLdb

# The original snippet starts from an existing connection;
# host and credentials here are placeholders.
conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="password", db="mydb")

cursor = conn.cursor()
cursor.execute("DROP TABLE IF EXISTS animal")
cursor.execute("""
    CREATE TABLE animal
    (
        name CHAR(40),
        category CHAR(40)
    )
""")
cursor.execute("""
    INSERT INTO animal (name, category)
    VALUES
        ('snake', 'reptile'),
        ('frog', 'amphibian'),
        ('tuna', 'fish'),
        ('racoon', 'mammal')
""")
print "%d rows were inserted" % cursor.rowcount
Add:
conn.commit()
at the bottom of your script.
On a side note, have a look at the following: http://mysql-python.sourceforge.net/MySQLdb.html
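Worth a sentence on why: MySQLdb follows the Python DB-API and opens an implicit transaction, so inserted rows are rolled back when the script exits unless they are committed. CREATE TABLE and DROP TABLE still show up because DDL statements in MySQL commit implicitly, which explains why tables could be created and deleted while the data never appeared. A minimal sketch of the two options, assuming the conn object from the question:

# Option 1: commit explicitly once the inserts are done.
conn.commit()

# Option 2: enable autocommit for this connection so every
# statement takes effect immediately.
conn.autocommit(True)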