I am using
Python 2.7
cx_Oracle 6.0.2
I am doing something like this in my code
import cx_Oracle
connection_string = "%s:%s/%s" % ("192.168.8.168", "1521", "xe")
connection = cx_Oracle.connect("system", "oracle", connection_string)
cur = connection.cursor()
print "Connection Version: {}".format(connection.version)
query = "select *from product_information"
cur.execute(query)
result = cur.fetchone()
print result
I got the following output:
Connection Version: 11.2.0.2.0
(1, u'????????????', 'test')
I used the following query to create the table in the Oracle database:
CREATE TABLE product_information
( product_id NUMBER(6)
, product_name NVARCHAR2(100)
, product_description VARCHAR2(1000));
I used the following query to insert the data:
insert into product_information values(2, 'दुःख', 'teting');
Edit 1
Query: SELECT * from NLS_DATABASE_PARAMETERS WHERE parameter IN ( 'NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET');
Result
NLS_LANGUAGE: AMERICAN, NLS_TERRITORY: AMERICA, NLS_CHARACTERSET: AL32UTF8
I solved the problem.
First, I set NLS_LANG=.AL32UTF8 as an environment variable on the system where Oracle is installed.
Second, I passed the encoding and nencoding parameters to cx_Oracle's connect function, as below.
cx_Oracle.connect(username, password, connection_string,
                  encoding="UTF-8", nencoding="UTF-8")
This issue is also discussed here at https://github.com/oracle/python-cx_Oracle/issues/157
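Putting both pieces together, a minimal working sketch using the values from the question (setting NLS_LANG from inside the script, rather than system-wide, is my assumption, not something from the original post):

import os
import cx_Oracle

# Assumption: setting NLS_LANG before the first connect, as an alternative to
# a system-wide environment variable
os.environ.setdefault('NLS_LANG', '.AL32UTF8')

connection = cx_Oracle.connect("system", "oracle", "192.168.8.168:1521/xe",
                               encoding="UTF-8", nencoding="UTF-8")
cur = connection.cursor()
cur.execute("select * from product_information")
print cur.fetchone()  # the NVARCHAR2 value now comes back as a proper unicode string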
Related
I would like to read rows one by one in Python from an Oracle SELECT query. I have to build logic based on the data I am getting from Oracle for specific columns.
I am using cx_Oracle.connect
import pprint
import cx_Oracle

dsnStr = cx_Oracle.makedsn("xxxx.net", "6000", "XVTRR")  # Dev environment
con = cx_Oracle.connect(user="SCOTT", password="TIGER", dsn=dsnStr)
print con.version
cursor = con.cursor()
cursor.execute("select * from user_tables where rownum<=1 order by TABLE_NAME")
rows = cursor.fetchall()
col_names = []
for i in range(0, len(cursor.description)):
    col_names.append(cursor.description[i][0])
pp = pprint.PrettyPrinter(width=1024)
pp.pprint(col_names)
pp.pprint(rows)
cursor.close()
con.close()
Please help me.
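A cx_Oracle cursor is itself iterable, so the rows can be consumed one at a time instead of all at once with fetchall(). A minimal sketch, reusing the con connection from the snippet above:

cursor = con.cursor()
cursor.execute("select TABLE_NAME from user_tables order by TABLE_NAME")
for row in cursor:
    # row is a tuple ordered like cursor.description;
    # put the column-specific logic here
    print row[0]
cursor.close()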
I am very new to SQL and intermediate at Python. Using sqlite3, how can I get a print() list of primary and foreign keys (per table) in my database?
Using Python 2.7, SQLite3, PyCharm.
sqlite3.version = 2.6.0
sqlite3.sqlite_version = 3.8.11
Also note: when I set up the database, I enabled FKs as such:
conn = sqlite3.connect(db_file)
conn.execute('pragma foreign_keys=ON')
I tried the following:
conn=sqlite3.connect(db_path)
print(conn.execute("PRAGMA table_info"))
print(conn.execute("PRAGMA foreign_key_list"))
Which returned:
<sqlite3.Cursor object at 0x0000000002FCBDC0>
<sqlite3.Cursor object at 0x0000000002FCBDC0>
I also tried the following, which prints nothing (but I think this may be because it's a dummy database with tables and fields but no records):
conn = sqlite3.connect(db_path)
rows = conn.execute('PRAGMA table_info')
for r in rows:
    print r
rows2 = conn.execute('PRAGMA foreign_key_list')
for r2 in rows2:
    print r2
Unknown or malformed PRAGMA statements are ignored.
The problem with your PRAGMAs is that the table name is missing. You have to get a list of all tables, and then execute those PRAGMAs for each one:
rows = db.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
tables = [row[0] for row in rows]
def sql_identifier(s):
return '"' + s.replace('"', '""') + '"'
for table in tables:
print("table: " + table)
rows = db.execute("PRAGMA table_info({})".format(sql_identifier(table)))
print(rows.fetchall())
rows = db.execute("PRAGMA foreign_key_list({})".format(sql_identifier(table)))
print(rows.fetchall())
SELECT
    name
FROM
    sqlite_master
WHERE
    type = 'table' AND
    name NOT LIKE 'sqlite_%';
This SQL lists all tables in the database. For each table, run PRAGMA table_info(your_table_name); to find the primary key of the table.
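Each row returned by PRAGMA table_info has the shape (cid, name, type, notnull, dflt_value, pk), and pk > 0 marks a primary-key column. A minimal sketch that prints the primary-key columns per table, assuming db_path points at the database from the question:

import sqlite3

conn = sqlite3.connect(db_path)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
for table in tables:
    # keep only the columns whose pk field is non-zero
    pk_cols = [row[1] for row in conn.execute('PRAGMA table_info("%s")' % table)
               if row[5] > 0]
    print("{}: {}".format(table, pk_cols))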
I am interfacing Elasticsearch with Spark using the Elasticsearch-Hadoop plugin, and I am having difficulty writing a dataframe with a timestamp-type column to Elasticsearch.
The problem arises when I try to write using dynamic/multi resource formatting to create a daily index.
From the relevant documentation I get the impression that this is possible; however, the Python example below fails to run unless I change my dataframe's column type to date.
import pyspark
from pyspark.sql import SparkSession

conf = pyspark.SparkConf()
conf.set('spark.jars', 'elasticsearch-spark-20_2.11-6.1.2.jar')
conf.set('es.nodes', '127.0.0.1:9200')
conf.set('es.read.metadata', 'true')
conf.set('es.nodes.wan.only', 'true')
# build the session from the config so that spark below is defined
spark = SparkSession.builder.config(conf=conf).getOrCreate()

from datetime import datetime, timedelta
now = datetime.now()
before = now - timedelta(days=1)
after = now + timedelta(days=1)

cols = ['idz', 'name', 'time']
vals = [(0, 'maria', before), (1, 'lolis', after)]
time_df = spark.createDataFrame(vals, cols)
When I try to write, I use the following:
time_df.write.mode('append').format(
    'org.elasticsearch.spark.sql'
).options(
    **{'es.write.operation': 'index'}
).save('xxx-{time|yyyy.MM.dd}/1')
Unfortunately this raises an error:
.... Caused by: java.lang.IllegalArgumentException: Invalid format:
"2018-03-04 12:36:12.949897" is malformed at " 12:36:12.949897" at
org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:945)
On the other hand this works perfectly fine if I use dates when I create my dataframe:
cols = ['idz', 'name', 'time']
vals = [(0,'maria', before.date()), (1, 'lolis', after.date())]
time_df = spark.createDataFrame(vals, cols)
Is it possible to format a dataframe timestamp to be written to daily indexes with this method, without also keeping a date column around? How about monthly indexes?
Pyspark version:
spark version 2.2.1
Using Scala version 2.11.8, OpenJDK 64-Bit Server VM, 1.8.0_151
Elasticsearch version:
number "6.2.2", build_hash "10b1edd",
build_date "2018-02-16T19:01:30.685723Z", build_snapshot false,
lucene_version "7.2.1", minimum_wire_compatibility_version "5.6.0",
minimum_index_compatibility_version "5.0.0"
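One possible workaround, and an assumption on my part rather than anything confirmed in the question: pre-format the timestamp into a string column with Spark's date_format and route on that column, so the connector never has to parse the timestamp itself (note this does store the extra helper field in the documents):

from pyspark.sql import functions as F

# 'time_str' is a hypothetical helper column holding the pre-formatted day
time_df2 = time_df.withColumn('time_str', F.date_format('time', 'yyyy.MM.dd'))
time_df2.write.mode('append').format(
    'org.elasticsearch.spark.sql'
).options(
    **{'es.write.operation': 'index'}
).save('xxx-{time_str}/1')

For monthly indexes the same idea would apply with a 'yyyy.MM' pattern.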
I'm a newbie to Python and I am trying to insert some data into a MySQL table. The query seems to execute without any issues; however, I don't see any record added to the table.
Any help would be greatly appreciated.
Cheers,
Aditya
import time
import mysql.connector

connection = mysql.connector.connect(user='sandboxbeta2503', password='XXX',
                                     host='myreleasebox.com',
                                     database='iaas')
print ("Updating the history in bulk_notification_history")
cursor = connection.cursor()
timestamp = time.strftime("%Y-%m-%d %X")
notification_type = "Notify Inactive users"
usercount = 45
query = ("INSERT INTO iaas.bulk_notification_history"
         "(nty_date,notification_type,user_count)"
         " VALUES (%s,%s,%s)")
data = (time.strftime('%Y-%m-%d %H:%M:%S'), notification_type, usercount)
linker1 = cursor.execute(query, data)
print (linker1)
cursor.close()
connection.close()
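For what it's worth, mysql.connector leaves autocommit off by default, so the INSERT above is discarded when the connection closes without a commit. A sketch of the ending with the commit added:

linker1 = cursor.execute(query, data)
connection.commit()  # persist the INSERT; autocommit is off by default
cursor.close()
connection.close()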
ubuntu version: 12.10
mysql server version: 5.5.29-0
python version: 2.7
I am trying to use MySQLdb to insert data into my localhost MySQL server. I don't get any errors when I run the script, but the data isn't inserted into my table. I view tables with phpMyAdmin.
I tried going back to basics and following a tutorial, but got the same result. The weird thing is that I can create and delete tables but not insert data.
The code is from the tutorial and even reports that 4 rows were inserted. What is preventing data from being entered into the table when the script reports everything is fine?
# conn is an open MySQLdb connection created earlier (connection details are
# omitted in the question)
cursor = conn.cursor ()
cursor.execute ("DROP TABLE IF EXISTS animal")
cursor.execute ("""
    CREATE TABLE animal
    (
        name CHAR(40),
        category CHAR(40)
    )
    """)
cursor.execute ("""
    INSERT INTO animal (name, category)
    VALUES
        ('snake', 'reptile'),
        ('frog', 'amphibian'),
        ('tuna', 'fish'),
        ('racoon', 'mammal')
    """)
print "%d rows were inserted" % cursor.rowcount
Add:
conn.commit()
at the bottom of your script.
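Applied to the script above, the ending would look like this (closing the cursor and connection afterwards is just good practice):

print "%d rows were inserted" % cursor.rowcount
conn.commit()  # make the inserted rows permanent; MySQLdb starts with autocommit off
cursor.close()
conn.close()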
On a side note, have a look at the following: http://mysql-python.sourceforge.net/MySQLdb.html