I am getting the following error after upgrading Review Board:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe7 in position 0: invalid continuation byte
1- Restore the DB to the new host
2- Convert all tables from MyISAM to InnoDB
I got the answer.
We need to copy the SECRET_KEY from the settings.py file on the old host into the one on the new server.
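For illustration, the key is just a Python setting whose exact value must match across hosts. A minimal sketch, assuming the default layout (depending on the Review Board version the file may be named settings_local.py; the value below is a placeholder, not a real key):

# settings.py (or settings_local.py) on the new server:
# paste the exact SECRET_KEY value taken from the old host.
SECRET_KEY = '<value-copied-from-old-host>'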
I tried to restore a PostgreSQL database from a dump file in AWS, but an error occurred. It said:
pg_restore: [archiver] unsupported version (1.14) in file header
What should I do?
The dump was written by a newer pg_dump than your pg_restore understands (archive format 1.14 is produced by PostgreSQL 12 or newer). Upgrade the PostgreSQL client tools and then try again.
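A quick way to confirm which pg_restore you are running (a sketch; assumes pg_restore is on PATH and Python 3.7+ for capture_output):

import subprocess

# Print the version of the pg_restore binary on PATH; it must come from
# a release at least as new as the pg_dump that wrote the archive.
result = subprocess.run(['pg_restore', '--version'],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())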
I am trying to upload a CSV file as an Azure DevOps work item attachment. While uploading the attachment by calling the REST API below, I am facing an issue:
POST https://dev.azure.com/{organization}/{project}/_apis/wit/attachments?api-version=5.1
My code is as below:
import requests

# ADO_AUTH_PAT (a personal access token) is defined elsewhere in the runbook.
with open('details.csv', 'r') as f:
    details = f.read()
print(details)  # Printing the CSV file as expected

ado_req_headers_ATT = {'Content-Type': 'application/octet-stream'}
ADO_SEC_ATTA_URL = 'https://dev.azure.com/orgname/projectname/_apis/wit/attachments?fileName=details.csv&api-version=5.1-preview.3'
ado_req_attach_File = requests.post(url=ADO_SEC_ATTA_URL, headers=ado_req_headers_ATT, data=details, auth=('', ADO_AUTH_PAT))
print(ado_req_attach_File.text)
The same code works when I use Python 3.8 in my local Visual Studio Code, but not when I use an Azure Automation Runbook (Python 2.7).
When I try to print the text of the response body, I get the error below:
print(ado_req_attach_File.text)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u221e' in position 6303: ordinal not in range(128)
Expected Output:
{"id":"facedff6-48c6-5479-894b-f7807f29b96e","url":"https://dev.azure.com/orgname/d93740f8-fe37-5433-bc8e-79c0a320d81b/_apis/wit/attachments/facedff6-48c6-5479-894b-f7807f29b96e?fileName=details.csv"}
This doesn't seem to be related to the Azure DevOps side, since your API call works properly and you got the JSON response successfully.
In Python 2.7, the source encoding is not set by default.
Please use the code below as the first line of your program and check if it works:
# -*- coding: utf-8 -*-
# Your code goes below this line
For Python 3.x there is a default source encoding (UTF-8), hence there is no such encoding issue there.
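If the coding declaration alone doesn't help (it only controls how the source file itself is decoded, not how print writes to stdout), a common Python 2.7 workaround is to encode the response text explicitly before printing. A sketch using the variable names from the question:

# On Python 2.7, sys.stdout often defaults to ASCII, so encode the
# unicode response text to UTF-8 bytes before printing it.
body = ado_req_attach_File.text   # unicode on Python 2
print(body.encode('utf-8'))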
Also take a look at a similar issue here for more info: https://stackoverflow.com/a/39293287/5391065
I am setting up a development environment with the following:
CentOS 7
Python 2.7
IBM-DB2 EE Server v11.1.4.4
ibm-db package
My earlier installation and setup went smoothly, with no real issues with ODBC connectivity to the local DB2 trial database. With my new install, I keep getting the following message:
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I did try updating the Python version to 3.7, but the result is the same. I should reiterate that my earlier install with the same configuration went through without any issues, and I never updated either the db2cli.ini file or the db2dsdriver.cfg file. I tried the same steps here and it fails. As far as I could gather, I saw a message that read something like "ibm-db does not sit well with all python versions properly".
>>> import difflib
>>> import subprocess
>>> import os
>>> import ibm_db
>>> from shutil import copyfile
>>> conn = ibm_db.connect("DATABASE","USERID","PASSWORD")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception:
[IBM][CLI Driver] SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: "DATABASE". SQLCODE=-1531
I expect the connection to go through fine without any issues.
The short answer is that the easiest way is to use a complete DSN string to establish the connection (including the hostname, port, etc.), e.g.:
In [1]: import ibm_db
In [2]: conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=localhost;PORT=60111;UID=db2v111;PWD=passw0rd;","","")
The long answer is that we should be able to use the alias from the catalog, as explained in the ibm_db.connect API documentation:
IBM_DBConnection ibm_db.connect(string database, string user, string password [, dict options [, constant replace_quoted_literal]])
database - For a cataloged connection to a database, this parameter represents the database alias in the DB2 client catalog. For an uncataloged connection to a database, database represents a complete connection string in the following format: DRIVER={IBM DB2 ODBC DRIVER};DATABASE=database;HOSTNAME=hostname;PORT=port;PROTOCOL=TCPIP;UID=username;PWD=password; where the parameters represent the following values:
hostname - The hostname or IP address of the database server.
port - The TCP/IP port on which the database is listening for requests.
username - The username with which you are connecting to the database.
password - The password with which you are connecting to the database.
user - The username with which you are connecting to the database. For uncataloged connections, you must pass an empty string.
password - The password with which you are connecting to the database. For uncataloged connections, you must pass an empty string.
The question, though, is which client catalog will be checked...
It all depends on whether IBM_DB_HOME was set when the package was installed, as explained in the README. If it was set, the Python driver uses the existing client instance and its database catalog (as well as its db2cli.ini and db2dsdriver.cfg). If not, a separate client is fetched during installation and deployed in Python's site-packages.
To check which is the case, you can run ldd against your ibm_db.so, e.g.:
ldd /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/ibm_db.so | grep libdb2
libdb2.so.1 => /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/lib/libdb2.so.1 (0x00007fb6e137e000)
Based on the output, I can say that in my environment the driver was linked against the copy in Python's site-packages, so it will use db2cli.ini from /usr/lib/python2.7/site-packages/ibm_db-2.0.7-py2.7-linux-x86_64.egg/clidriver/cfg.
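If you are not sure where your ibm_db.so lives, the compiled extension reports its own path, which you can then feed to ldd (a quick sketch):

import ibm_db

# __file__ on a C extension gives the loaded .so path; run ldd against
# it to see which libdb2.so.1 the driver is linked with.
print(ibm_db.__file__)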
If I populate that db2cli.ini with a section:
[sample]
port=60111
hostname=localhost
database=sample
I will then be able to connect with just the DSN alias:
In [4]: conn = ibm_db.connect("SAMPLE","db2v111","passw0rd")
If you want the driver to use an existing client instance instead, set IBM_DB_HOME during installation.
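A sketch of how that reinstall might look (the DB2 path is an assumption; point IBM_DB_HOME at your actual instance or client install directory):

import os
import subprocess
import sys

# Setting IBM_DB_HOME before (re)installing makes the driver link against
# the existing client instance instead of downloading its own clidriver.
os.environ['IBM_DB_HOME'] = '/opt/ibm/db2/V11.1'  # assumption: adjust to your install
subprocess.check_call([sys.executable, '-m', 'pip', 'install',
                       '--force-reinstall', '--no-cache-dir', 'ibm_db'])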
I tried to install the awsebcli package in my virtualenv, first running the command below:
pip install awsebcli
then the next command, which initializes eb:
eb init
Then, when I choose a region number from the given list, I get the error below.
UnicodeDecodeError: ascii codec can't decode byte 0xe2 in position 20:
ordinal not in range(128)
This error shows up when I am selecting the region using its number.
I am using Django with Heroku PostgreSQL. I use both English and Turkish with the default DB settings. My DB works with no issues both locally and on Heroku. However, I get an error when I try to dump the local DB and restore it to production.
psql -U user -d db_name -f "b003.dump"
psql:b003.dump:270: ERROR: invalid byte sequence for encoding "UTF8": 0xa3
First guess: Encoding mismatch
Your file b003.dump is not in the UTF-8 encoding.
You will need to override the encoding settings, specifying the correct text encoding for the file.
It's probably iso-8859-9 if it has Turkish text, though 0xa3 is the pound symbol (£) in many of the ISO-8859 encodings.
Try:
PGCLIENTENCODING="iso-8859-9" psql ....
I also suggest checking the output of the locale command to see what your system default encoding is, and of file -bi b003.dump to try to guess the encoding.
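As a quick check of this first guess, you can probe the raw bytes yourself (a sketch; b003.dump is the file from the question):

# Try decoding the dump as UTF-8 and as ISO-8859-9: UTF-8 failing while
# ISO-8859-9 succeeds would support the encoding-mismatch theory.
with open('b003.dump', 'rb') as f:
    data = f.read()

for enc in ('utf-8', 'iso-8859-9'):
    try:
        data.decode(enc)
        print('decodes cleanly as', enc)
    except UnicodeDecodeError as exc:
        print('fails as', enc, '->', exc)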
Second guess, after comments revealed file output
Your file isn't an SQL-script-style dump. It's a PostgreSQL custom-format database dump. That's why file says:
b003.dump: PostgreSQL custom database dump - v1.12-0
Restore it with the pg_restore command (the dump file is passed as a positional argument; pg_restore's -f option names an output file, not the input):
pg_restore -d db_name -U user b003.dump
At some point I want to enhance psql so it tells you this, rather than failing with an error.
The weird thing here is that you must've had a lot of previous errors, before the one you showed here, if it's a custom dump.
As pointed out in the above answer, the file is a Postgres custom-format dump and can be restored with the command:
pg_restore dump_file -d db_name -U user
http://www.postgresql.org/docs/9.2/static/app-pgrestore.html