How to fix 'ORA-19011' error in Python cx_Oracle - python-2.7

Hi, I want to query XML data from an Oracle DB with cx_Oracle, but it fails with an ORA-19011 error. I think the queried data is larger than the string buffer, but I don't know how to solve this problem.
The Oracle DB is an external DB, not my own, so I can't access it directly. Therefore, I want to fix the problem in my code and print the queried data in the Python terminal.
(My software versions)
windows 10 64bit
python 2.7 64bit
oracle-instant client 19.3 64bit
cx_oracle 7.2.2
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cx_Oracle
import sys
import csv
import codecs
printHeader = True
conn = cx_Oracle.connect('id/passwd@ip:port/orcl')
print(conn.version)
curs = conn.cursor()
curs.execute('SELECT * FROM tablename')
for record in curs:
    print(record)
The error occurs at line 18 (for record in curs), and here are the error messages.
11.2.0.4.0
We've got an error while stopping in unhandled exception: <class 'cx_Oracle.DatabaseError'>.
Traceback (most recent call last):
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 1740, in do_stop_on_unhandled_exception
self.do_wait_suspend(thread, frame, 'exception', arg, is_unhandled_exception=True)
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 1615, in do_wait_suspend
with self._threads_suspended_single_notification.notify_thread_suspended(thread_id, stop_reason):
File "C:\Python27\lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 360, in notify_thread_suspended
with AbstractSingleNotificationBehavior.notify_thread_suspended(self, thread_id, stop_reason):
File "C:\Python27\lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 308, in notify_thread_suspended
self.send_suspend_notification(thread_id, stop_reason)
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\pydevd.py", line 354, in send_suspend_notification
py_db.writer.add_command(py_db.cmd_factory.make_thread_suspend_single_notification(py_db, thread_id, stop_reason))
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\_pydevd_bundle\pydevd_net_command_factory_json.py", line 309, in make_thread_suspend_single_notification
return NetCommand(CMD_THREAD_SUSPEND_SINGLE_NOTIFICATION, 0, event, is_json=True)
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\_vendored\pydevd\_pydevd_bundle\pydevd_net_command.py", line 57, in __init__
text = json.dumps(as_dict)
File "C:\Python27\lib\json\__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "C:\Python27\lib\json\encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Python27\lib\json\encoder.py", line 270, in iterencode
return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xb9 in position 11: invalid start byte
Traceback (most recent call last):
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "c:\Users\goo41\.vscode\extensions\ms-python.python-2019.8.30787\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Python27\lib\runpy.py", line 252, in run_path
return _run_module_code(code, init_globals, run_name, path_name)
File "C:\Python27\lib\runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "C:\Python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "c:\PythonWorkspace\oraclePrc\test1.py", line 18, in <module>
for record in curs:
cx_Oracle.DatabaseError: ORA-19011: Character string buffer too small

When you connect to the database, try using this code instead:
conn = cx_Oracle.connect('id/passwd@ip:port/orcl', encoding="UTF-8", nencoding="UTF-8")
This ensures that you are using a universal encoding, which may eliminate the first error and possibly the second as well. If it doesn't, update the question with the adjusted code sample and the resulting error messages.
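For completeness, here is a minimal sketch of the adjusted script, assuming the same credentials and table as in the question. The column name xml_col and the getClobVal() step are assumptions, since the actual schema isn't shown; returning an XMLTYPE value as a CLOB is a common way to avoid ORA-19011 when the XML exceeds the 4000-character string limit:
import cx_Oracle

# Connect with an explicit UTF-8 encoding for both CHAR and NCHAR data.
conn = cx_Oracle.connect('id/passwd@ip:port/orcl', encoding="UTF-8", nencoding="UTF-8")
print(conn.version)

curs = conn.cursor()
# Hypothetical column name: if the XML lives in xml_col, selecting it via
# getClobVal() yields a CLOB, which cx_Oracle exposes as a LOB object.
curs.execute('SELECT t.xml_col.getClobVal() FROM tablename t')
for record in curs:
    lob = record[0]
    print(lob.read() if lob is not None else None)

curs.close()
conn.close()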

Related

Compatibility with GCP aiplatform, bigquery and cloud-storage on hyperparameter tuning docker image

I am doing hyperparameter tuning on GCP using this scikit docker image. When I add the aiplatform package as a dependency, things break. The error comes from the bigquery import.
from google.cloud import bigquery
The error message is below.
The replica workerpool0-0 exited with a non-zero status of 1.
Traceback (most recent call last):
[...]
File "/root/.local/lib/python3.7/site-packages/trainer/task.py", line 7, in
from google.cloud import storage, bigquery
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/__init__.py", line 35, in
from google.cloud.bigquery.client import Client
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/client.py", line 60, in
from google.cloud.bigquery import _pandas_helpers
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/_pandas_helpers.py", line 40, in
from google.cloud.bigquery import schema
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/schema.py", line 19, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/__init__.py", line 23, in
from google.cloud.bigquery_v2 import types
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/types.py", line 23, in
from google.cloud.bigquery_v2.proto import encryption_config_pb2
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery_v2/proto/encryption_config_pb2.py", line 64, in
file=DESCRIPTOR,
File "/root/.local/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
From the logs, I can see the system is downloading google-cloud-aiplatform v1.17.0. According to the scikit docker image, google-cloud-storage v1.35.0 is installed, but google-cloud-aiplatform pulls in google-cloud-storage v2.5.0.
I am thinking I need to downgrade google-cloud-aiplatform to a specific version. Does anyone know which version, or how to resolve this problem?
UPDATE: FWIW, if I downgrade to google-cloud-aiplatform==1.15.1, the problem above goes away. However, the problem below then shows up.
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.local/lib/python3.7/site-packages/trainer/hpt.py", line 170, in
staging_bucket=f'{args.bucket_uri}'
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/initializer.py", line 138, in init
backing_tensorboard=experiment_tensorboard,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata.py", line 235, in set_experiment
experiment_name=experiment, description=description
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/experiment_resources.py", line 247, in get_or_create
project=project, location=location, credentials=credentials
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 283, in ensure_default_metadata_store_exists
encryption_spec_key_name=encryption_key_spec_name,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 123, in get_or_create
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 241, in _get
credentials=credentials,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/metadata/metadata_store.py", line 73, in __init__
self._gca_resource = self._get_gca_resource(resource_name=metadata_store_name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/base.py", line 617, in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 425, in __getattr__
return getattr(self._clients[self._default_version], name)
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 359, in __getattr__
client_info=self._client_info,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/client.py", line 547, in __init__
api_audience=client_options.api_audience,
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 190, in __init__
("grpc.max_receive_message_length", -1),
File "/root/.local/lib/python3.7/site-packages/google/cloud/aiplatform_v1/services/metadata_service/transports/grpc.py", line 241, in create_channel
**kwargs,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 318, in create_channel
default_host=default_host,
File "/root/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 239, in _create_composite_credentials
credentials, scopes=scopes, default_scopes=default_scopes
TypeError: with_scopes_if_required() got an unexpected keyword argument 'default_scopes'
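No answer is recorded for this question here, but one way to act on workaround 1 from the TypeError message above, combined with the aiplatform version the update reports as working, is to pin dependencies in the trainer package's setup.py. This is only a sketch under those assumptions; the exact pins are not tested values:
# Hypothetical setup.py for the trainer package; the pins below are assumptions
# based on the error message and the update in the question.
from setuptools import find_packages, setup

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),
    install_requires=[
        'google-cloud-aiplatform==1.15.1',  # version reported as avoiding the protobuf error
        'google-cloud-bigquery',
        'google-cloud-storage',
        'protobuf<=3.20.1',     # workaround 1 from the TypeError message
        'google-auth>=1.25.0',  # assumption: with_scopes_if_required() gained default_scopes around this release
    ],
)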

Error migrating database from 2.7.2 to 2.9

During migration of a CKAN instance from version 2.7.2 to 2.9 I'm facing the following error:
2022-03-11 14:11:28,312 INFO [ckan.cli] Using configuration file /etc/ckan/____/production.ini
2022-03-11 14:11:28,312 INFO [ckan.config.environment] Loading static files from public
2022-03-11 14:11:28,364 INFO [ckan.config.environment] Loading templates from /usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/templates
2022-03-11 14:11:28,581 INFO [ckan.config.environment] Loading templates from /usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/templates
Traceback (most recent call last):
File "/usr/lib/ckan/____/bin/ckan", line 11, in <module>
load_entry_point('ckan==2.9.5', 'console_scripts', 'ckan')()
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/cli/db.py", line 64, in upgrade
_run_migrations(plugin, version)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/cli/db.py", line 122, in _run_migrations
repo.upgrade_db(version)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/model/__init__.py", line 326, in upgrade_db
alembic_upgrade(self.alembic_config, version)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/command.py", line 254, in upgrade
script.run_env()
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/script/base.py", line 427, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
module = load_module_py(module_id, path)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/util/compat.py", line 135, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/migration/env.py", line 80, in <module>
run_migrations_online()
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/migration/env.py", line 74, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/runtime/environment.py", line 836, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/alembic/runtime/migration.py", line 330, in run_migrations
step.migration_fn(**kw)
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/migration/versions/001_103676e0a497_create_existing_tables.py", line 20, in upgrade
if skip_based_on_legacy_engine_version(op, __name__):
File "/usr/lib/ckan/____/local/lib/python2.7/site-packages/ckan-2.9.5-py2.7.egg/ckan/migration/__init__.py", line 22, in skip_based_on_legacy_engine_version
return int(version) >= int(filename.split(u'_', 1)[0])
ValueError: invalid literal for int() with base 10: 'None'
The skip_based_on_legacy_engine_version function is in this file from the codebase. In the comparison int(version) >= int(filename.split(u'_', 1)[0]), the second operand evaluates to 001, and the filename is the first one in the versions folder.
I couldn't figure out whether the SQLAlchemy migrate_version table is supposed to keep the data stored in the CKAN database.
Should the version be set in the alembic.ini file?
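To make the failure concrete, here is a minimal reproduction of the comparison that raises, assuming (as the traceback suggests) that the legacy lookup returns the string 'None' rather than a number:
# Reproduces the ValueError from ckan/migration/__init__.py; the value 'None'
# is an assumption about what the legacy migrate_version lookup returns.
filename = u'001_103676e0a497_create_existing_tables.py'
version = 'None'

print(int(filename.split(u'_', 1)[0]))  # -> 1
print(int(version))                     # -> ValueError: invalid literal for int() with base 10: 'None'
If that is what is happening, the number the function expects appears to come from the legacy migrate_version table in the database rather than from alembic.ini.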

How to resolve an error message when attempting to start Jupyter-Notebook?

I have Jupyter-Notebook on an AWS EC2 instance. It was working fine until I made a few changes/updates and now I'm getting the following error message when I attempt to use Jupyter-Notebook:
ubuntu@ip-10-0-0-5:~$ jupyter-notebook
[E 06:41:09.375 NotebookApp] Exception while loading config file /home/ubuntu/.jupyter/jupyter_notebook_config.json
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python2.7/site-packages/traitlets/config/application.py", line 562, in _load_config_files
config = loader.load_config()
File "/home/ubuntu/.local/lib/python2.7/site-packages/traitlets/config/loader.py", line 406, in load_config
dct = self._read_file_as_dict()
File "/home/ubuntu/.local/lib/python2.7/site-packages/traitlets/config/loader.py", line 412, in _read_file_as_dict
return json.load(f)
File "/usr/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 17 column 1 - line 18 column 1 (char 255 - 257)
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/jupyter-notebook", line 10, in <module>
sys.exit(main())
File "/home/ubuntu/.local/lib/python2.7/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/home/ubuntu/.local/lib/python2.7/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/notebook/notebookapp.py", line 1633, in initialize
self.init_server_extensions()
File "/home/ubuntu/.local/lib/python2.7/site-packages/notebook/notebookapp.py", line 1559, in init_server_extensions
section = manager.get(self.config_file_name)
File "/home/ubuntu/.local/lib/python2.7/site-packages/notebook/services/config/manager.py", line 25, in get
recursive_update(config, cm.get(section_name))
File "/home/ubuntu/.local/lib/python2.7/site-packages/notebook/config_manager.py", line 103, in get
recursive_update(data, json.load(f))
File "/usr/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 367, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 17 column 1 - line 18 column 1 (char 255 - 257)
Can anyone help me troubleshoot this problem?
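Since the 'Extra data' ValueError comes from json.load on a Jupyter config file, a quick check is to validate each JSON config file and see which one is malformed. A minimal sketch, assuming the default config locations shown in the traceback (~/.jupyter and its nbconfig subfolder):
# Validate Jupyter's JSON config files to find the malformed one; the glob
# patterns are assumptions based on the paths in the traceback.
import glob
import json
import os

candidates = (glob.glob(os.path.expanduser('~/.jupyter/*.json')) +
              glob.glob(os.path.expanduser('~/.jupyter/nbconfig/*.json')))

for path in candidates:
    try:
        with open(path) as f:
            json.load(f)
        print('OK: %s' % path)
    except ValueError as exc:  # "Extra data" usually means duplicated or trailing JSON
        print('BAD: %s -> %s' % (path, exc))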

happybase hbase table.put command error?

I am trying to connect to HBase 1.2.6 from Python code, as follows:
import happybase
connection = happybase.Connection(host='localhost',port=16010)
table = connection.table('blogpost')
table.put('1', {'post:title': 'hello world1'})
I manually created the table "blogpost" in HBase. I am using Python 2.7 and happybase 1.1.0.
The error log is as follows:
/usr/bin/python2.7 /home/spark/PycharmProjects/PySpark/hbase.py
Traceback (most recent call last):
File "/home/spark/PycharmProjects/PySpark/hbase.py", line 5, in <module>
table.put('1', {'post:title': 'hello world1'})
File "/usr/local/lib/python2.7/dist-packages/happybase/table.py", line 464, in put
batch.put(row, data)
File "/usr/local/lib/python2.7/dist-packages/happybase/batch.py", line 137, in __exit__
self.send()
File "/usr/local/lib/python2.7/dist-packages/happybase/batch.py", line 60, in send
self._table.connection.client.mutateRows(self._table.name, bms, {})
File "/usr/local/lib/python2.7/dist-packages/thriftpy/thrift.py", line 198, in _req
return self._recv(_api)
File "/usr/local/lib/python2.7/dist-packages/thriftpy/thrift.py", line 210, in _recv
fname, mtype, rseqid = self._iprot.read_message_begin()
File "thriftpy/protocol/cybin/cybin.pyx", line 439, in cybin.TCyBinaryProtocol.read_message_begin (thriftpy/protocol/cybin/cybin.c:6470)
cybin.ProtocolError: No protocol version header
Process finished with exit code 1
Thanks.
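One detail worth checking, offered as an assumption rather than a confirmed fix: happybase talks to the HBase Thrift server, which listens on port 9090 by default (started with hbase thrift start), while 16010 is the master web UI port, and pointing a Thrift client at a non-Thrift port typically produces a "No protocol version header" error. A minimal sketch under that assumption, reusing the table and data from the question:
import happybase

# Connect to the HBase Thrift service (default port 9090), not the web UI port.
connection = happybase.Connection(host='localhost', port=9090)
table = connection.table('blogpost')
table.put(b'1', {b'post:title': b'hello world1'})
connection.close()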

Command 'makemessages' error

I'm a newbie in Django and Python. My project was created with PyTools for Visual Studio 2013.
For localization I created a 'locale' folder at the manage.py level, and I try to run the following command:
.\ClarisPyEnv\Scripts\python.exe manage.py makemessages -l he
And I get the following error:
Exception in thread Thread-2377:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 921, in _bootstrap_inner
self.run()
File "C:\Python34\lib\threading.py", line 869, in run
self._target(*self._args, **self._kwargs)
File "C:\Python34\lib\subprocess.py", line 1170, in _readerthread
buffer.append(fh.read())
File "C:\Python34\lib\encodings\cp1255.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 766: character maps to <undefined>
Traceback (most recent call last):
File "manage.py", line 17, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\__init__.py", line 385, in execute_from_co
mmand_line
utility.execute()
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 338, in execute
output = self.handle(*args, **options)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\base.py", line 533, in handle
return self.handle_noargs(**options)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\commands\makemessages.py", line 290, in ha
ndle_noargs
self.write_po_file(potfile, locale)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\commands\makemessages.py", line 402, in wr
ite_po_file
msgs, errors, status = popen_wrapper(args)
File "C:\Users\Alex\Documents\PythonProjects\ClarisPy\ClarisPy\ClarisPyEnv\lib
\site-packages\django\core\management\utils.py", line 25, in popen_wrapper
output, errors = p.communicate()
File "C:\Python34\lib\subprocess.py", line 959, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "C:\Python34\lib\subprocess.py", line 1234, in _communicate
stdout = stdout[0]
IndexError: list index out of range
What does this mean? Where is the problem? Please help me!
Thanks,
Alex
You don't need to specify unicode in Python 3; Unicode is the default.
from django.utils.translation import ugettext as _
_('חתול')
is enough for the translation and encoding to work.
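As a small usage sketch in the spirit of this answer (the view and module are hypothetical): with Python 3 the literal is already Unicode, so makemessages only needs the plain string passed through ugettext.
# Hypothetical view module; once the command succeeds, makemessages extracts
# the _('...') literal into locale/he/LC_MESSAGES/django.po.
from django.http import HttpResponse
from django.utils.translation import ugettext as _


def greeting(request):
    return HttpResponse(_('חתול'))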