I want to know the version of the default PostgreSQL database. How can I find it using Django?
Get a database connection:
from django.db import connection
And access the inner psycopg2 connection object:
print(connection.cursor().connection.server_version)
One-liner:
$ python3 manage.py shell -c "from django.db import connection; print(connection.cursor().connection.server_version)"
90504
From PostgreSQL 10 on, the number is formed by multiplying the server's major version number by 10000 and adding the minor version number. For example, version 10.1 is returned as 100001, and version 11.0 as 110000. Zero is returned if the connection is bad.
Prior to PostgreSQL 10, the number was formed by converting the major, minor, and revision numbers into two-decimal-digit numbers and appending them together. For example, version 8.1.5 is returned as 80105.
Therefore, for purposes of determining feature compatibility, applications should divide the result of connection.server_version by 100, not 10000, to determine a logical major version number. In all release series, only the last two digits differ between minor (bug-fix) releases.
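To turn that integer back into something readable, here is a small sketch based on the rules above (the helper name is made up):

def pg_version_tuple(server_version):
    # 90504 -> (9, 5, 4); 100001 -> (10, 1); 110000 -> (11, 0)
    if server_version >= 100000:            # PostgreSQL 10 and later
        return divmod(server_version, 10000)
    major, rest = divmod(server_version, 10000)
    minor, patch = divmod(rest, 100)
    return (major, minor, patch)

print(pg_version_tuple(90504))   # (9, 5, 4)
print(pg_version_tuple(110000))  # (11, 0)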
Docs:
https://www.psycopg.org/docs/connection.html#connection.server_version
https://www.postgresql.org/docs/current/libpq-status.html#LIBPQ-PQSERVERVERSION
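As a side note, Django's PostgreSQL backend caches the same number itself; if I remember correctly it is exposed as connection.pg_version, so this should be an equivalent check:

from django.db import connection
print(connection.pg_version)  # e.g. 90504, same encoding as server_version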
I am trying to bind a SQL_TYPE_TIMESTAMP value using ODBC to DATETIME and/or TIMESTAMP columns in a MS SQL Server database and it's failing with the following error:
[HY104] (native 0): [Microsoft][ODBC Driver 11 for SQL Server]Invalid precision value
Does anyone know what the problem is? I am binding the value like this:
TIMESTAMP_STRUCT* tval = getTimestampFromSomewhere();
SQLRETURN ret = SQLBindParameter(stmt, column, SQL_PARAM_INPUT, SQL_C_TYPE_TIMESTAMP, SQL_TYPE_TIMESTAMP, 29, 9, tval, sizeof(TIMESTAMP_STRUCT), 0);
ColSize is set to 29 because, according to the Column Size docs, for the TIMESTAMP type it should be:
20+s (the number of characters in the yyyy-mm-dd hh:mm:ss[.fff...] format, where s is the seconds precision)
and seconds precision (the precision of the fraction field) is 9 because:
The value of the fraction field is the number of billionths of a second and ranges from 0 through 999,999,999. (source)
DecimalDigits is set to 9 because for all datetime types except SQL_TYPE_DATE it's
The number of digits to the right of the decimal point in the seconds part of the value (fractional seconds). (source)
According to the answer to this question, it shouldn't be possible to insert a value into a TIMESTAMP column, but it does not work with a DATETIME column for me either. And IMO the problem with TIMESTAMP should happen when actually executing the command, not when just binding the values. Therefore I think this is something else.
The same code is working for TIMESTAMP column + SQL_TYPE_TIMESTAMP with PostgreSQL 9.2.15 (driver "PostgreSQL Unicode" 9.3.300),
and for DATETIME and TIMESTAMP columns + SQL_TYPE_TIMESTAMP with MySQL 5.5.50 (driver "MySQL ODBC 5.3 Unicode Driver" 5.3.6).
By the way, I am running Xubuntu 16.04 64bit, the SQL Server's version is 12.0.2569 (running on Windows 10) and I am using "ODBC Driver 11 for SQL Server" version 11.0.2270.
Try setting the precision to 3 for SQL Server and its datetime columns; they only have a precision of 3. You can easily check this with SQL Server Management Studio: try to set a value with a precision higher than 3 - you will get no error, but only 3 digits of the fraction will be stored.
Note that there is SQLDescribeParam function, which will return information about the decimal digits and precision values for a parameter in a query: https://msdn.microsoft.com/en-us/library/ms710188(v=vs.85).aspx
Update, about the question from the comment: you have to specify the fractional part in the full range from 0 to 999'999'999. But, if I remember this right, SQL Server for example with a precision of 3 will only accept values where the last 6 digits of the fractional part are 0s.
Sample: It will work fine for a fractional part of 111'000'000 (which would correspond to 111'000'000 / 1'000'000'000 seconds), but setting a fractional part of 111'100'000 will fail with an error about invalid fraction or similar.
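To make the arithmetic concrete, here is a rough Python illustration (not SQL Server-specific code; the 3-digit threshold is the datetime precision mentioned above, and the helper name is made up):

def fits_precision(fraction, decimal_digits):
    # fraction is the ODBC fraction field, in billionths of a second;
    # it fits if every digit beyond `decimal_digits` is zero.
    return fraction % 10 ** (9 - decimal_digits) == 0

print(fits_precision(111000000, 3))  # True:  0.111 s, accepted
print(fits_precision(111100000, 3))  # False: 0.1111 s, rejected as described above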
The Python range() built-in function is limited to integers.
I need a more generic function of similar signature range(start, stop, step), where start, stop and step may be of any types given that:
the result of (possibly repeated) addition of step to start is defined,
both start and the result mentioned above can be compared with stop.
The function may be used (for example) to obtain a sequence of days of the year:
range(datetime.datetime(2015, 1, 1),
datetime.datetime(2016, 1, 1),
datetime.timedelta(1))
I know I can easily write such function by myself. However, I am looking for efficient existing solutions in some popular package (like numpy or scipy), working with both Python 2 and Python 3.
I have already tried to combine itertools.takewhile with itertools.count; however, the latter seems to be limited to numbers.
You can do this with a simple generator. The following example works with floats, integers and datetime.
With datetime you just need to spell out the exact dates (e.g. 1 Jan 2016, not just 2016) and the step; timedelta(days=1) reads more clearly than timedelta(1), although the two are equivalent since days is the first positional argument.
import datetime

def range2(start, stop, step):
    # Mirror the built-in range(): yield start, start+step, ... and exclude stop.
    value = start
    while value < stop:
        yield value
        value += step

for date in range2(datetime.datetime(2015, 1, 1),
                   datetime.datetime(2016, 1, 1),
                   datetime.timedelta(days=1)):
    print(date)
As for off-the-shelf solutions, there is arange in numpy, but I haven't tested it with datetime.
>>> from numpy import arange
>>> arange(0.5, 5, 1.5)
array([0.5, 2. , 3.5])
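That said, arange does accept numpy's own datetime64/timedelta64 types (plain datetime.datetime objects are another matter), so a sketch along these lines should cover the days-of-the-year case:

>>> import numpy as np
>>> days = np.arange(np.datetime64('2015-01-01'),
...                  np.datetime64('2016-01-01'),
...                  np.timedelta64(1, 'D'))
>>> len(days)
365

There is also numeric_range in the third-party more-itertools package which, if I remember correctly, works with any types supporting addition and comparison (so datetime plus timedelta should be fine), but I haven't checked its Python 2 support.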
I have some code from PySpark 1.5 that I unfortunately have to port backwards to Spark 1.3. I have a column with elements that are alpha-numeric but I only want the digits.
An example of the elements in 'old_col' of 'df' is:
'125 Bytes'
In Spark 1.5 I was able to use
df.withColumn('new_col',F.regexp_replace('old_col','(\D+)','').cast("long"))
However, I cannot seem to come up with a solution using old 1.3 methods like SUBSTR or RLIKE. The reason is that the number of digits in front of "Bytes" varies in length, so what I really need is the 'replace' or 'strip' functionality that I can't find in Spark 1.3.
Any suggestions?
As long as you use HiveContext, you can execute the corresponding Hive UDF either with selectExpr:
df.selectExpr("regexp_extract(old_col,'([0-9]+)', 1)")
or with plain SQL:
df.registerTempTable("df")
sqlContext.sql("SELECT regexp_extract(old_col,'([0-9]+)', 1) FROM df")
I am trying to use version 0.15.2 of scikit-learn. In this version, the documentation shows that there are separate fit(X) and predict(X) functions, as well as the combined fit_predict(X) function that was available in the prior version.
I thought that this would allow me to fit on one array and then predict on a new array (so fit(X) and predict(Y)). However, when I try this, I get a ValueError since the dimensions don't match.
Here is my code:
from sklearn.cluster import MeanShift, estimate_bandwidth
bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=5000) # n_samples 100000 memory bound
ms = MeanShift(bandwidth=bandwidth)
ms.fit(X)
results = ms.predict(Y)
Is there a way to use the model to predict the clusters for a new (different) array than the array on which it was originally fitted?
I found this post (MeanShift `fit` vs `fit_predict` scikitlearn), but I think it was asking a different question than what I am trying to ask.
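For what it's worth, predicting on a second array does work as long as it has the same number of feature columns as the training data; here is a minimal sketch with made-up data standing in for X and Y:

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.RandomState(0)
X = rng.randn(500, 2)   # training data
Y = rng.randn(50, 2)    # new data with the same number of columns as X

bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500)
ms = MeanShift(bandwidth=bandwidth)
ms.fit(X)
labels = ms.predict(Y)  # raises ValueError only if Y.shape[1] != X.shape[1]
print(labels[:10])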
I am using PostgreSQL on Heroku for my Django app. When I try making a comment on my posts I sometimes get this error (again, sometimes, not all the time).
Despite the error, the comment is still saved, but none of the code following the save() executes.
This problem only occurs on PostgreSQL though. On my localhost, where I am using SQLite, everything works just fine.
I am not sure what the reason for this is.
This is what my model looks like:
class Comment(models.Model):
    post = models.ForeignKey(post)
    date = models.DateTimeField(auto_now_add=True)
    comment = models.TextField()
    comment_user = models.ForeignKey(User)
That is what my Comment model looks like.
So is it because I did not add max_length for comment?
Here is the Traceback
DatabaseError at /post/114/
value too long for type character varying(10)
Request Method: POST
Request URL: http://www.mysite.com/post/114/
Django Version: 1.4.1
Exception Type: DatabaseError
Exception Value:
value too long for type character varying(10)
Exception Location: /app/.heroku/venv/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 52
Python Executable: /app/.heroku/venv/bin/python2.7
Python Version: 2.7.2
Python Path:
['/app',
'/app/.heroku/venv/bin',
'/app/.heroku/venv/lib/python2.7/site-packages/pip-1.1-py2.7.egg',
'/app/.heroku/venv/lib/python2.7/site-packages/distribute-0.6.31-py2.7.egg',
'/app',
'/app/.heroku/venv/lib/python27.zip',
'/app/.heroku/venv/lib/python2.7',
'/app/.heroku/venv/lib/python2.7/plat-linux2',
'/app/.heroku/venv/lib/python2.7/lib-tk',
'/app/.heroku/venv/lib/python2.7/lib-old',
'/app/.heroku/venv/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/app/.heroku/venv/lib/python2.7/site-packages',
'/app/.heroku/venv/lib/python2.7/site-packages/PIL']
Server time: Wed, 5 Dec 2012 20:41:39 -0600
I can't help you with the Django parts (sorry), so I'll just speak PostgreSQL.
Somewhere in your application you have a varchar(10) column and you're trying to put something longer than 10 characters into it; you probably have a missing validation somewhere. SQLite ignores the size in a varchar(n) column and treats it as text, which has no size limit, so you can do things like this with SQLite:
sqlite> create table t (s varchar(5));
sqlite> insert into t (s) values ('Where is pancakes house?');
sqlite> select * from t;
s
Where is pancakes house?
with nary a complaint. Similarly, SQLite lets you do ridiculous things like putting a string into a numeric column.
This is what you need to do:
Stop developing on SQLite when you're deploying on top of PostgreSQL. Install PostgreSQL locally and develop on top of that; you should even go so far as to install the same version of PostgreSQL that you'll be using at Heroku. There are all sorts of differences between databases that will cause you headaches; this little field size problem is just your gentle introduction to cross-database issues.
Stop using varchar(n) with PostgreSQL; just use text. There's no point to using size-limited string columns in PostgreSQL unless you have a hard requirement that the size must be limited. From the fine manual:
The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. [...] If you desire to store long strings with no specific upper limit, use text...
Tip: There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
So don't bother with traditional char and varchar in PostgreSQL unless you have to, just use text.
Start validating your incoming data to ensure that it doesn't violate any size or format constraints you have.
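On the Django side such a check might look roughly like the sketch below (the form and the 2000-character limit are hypothetical); the point is that the length is validated before the INSERT ever runs:

from django import forms

class CommentForm(forms.Form):
    # max_length is enforced during form validation, so an over-long value
    # becomes a form error instead of a DatabaseError at save() time.
    comment = forms.CharField(max_length=2000, widget=forms.Textarea)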
You can switch your columns from varchar(n) to text immediately and keep working with SQLite while you get PostgreSQL up and running; both databases will be happy with text for strings of unlimited length and this simple fix will get you past your immediate problem. Then, as soon as you can, switch your development environment to PostgreSQL so that you can catch problems like this before your code hits production.
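In model terms that switch is a one-line change; a hypothetical example (the field name is made up, since the offending varchar(10) column is somewhere outside the Comment model shown above):

from django.db import models

class Example(models.Model):
    # Before: CharField(max_length=10) becomes varchar(10) in PostgreSQL
    # and raises "value too long" for longer input.
    # status = models.CharField(max_length=10)

    # After: TextField maps to PostgreSQL's text type, with no length limit.
    status = models.TextField()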
The reason is that PostgreSQL actually checks the length of the data against the size of the field and errors out if it is too long, whereas SQLite completely ignores the specified field size, and MySQL silently truncates the data, destroying it irretrievably. Make the field larger.