I have the following scenario:
I have defined the right view in the database, taking care that the view is named according to the django conventions
I have made sure that my model is not managed by django. The migration created is accordingly defined with managed=False
The DB view is working fine by itself.
When triggering the API endpoint, two strange things happen:
the request to the database fails with:
ERROR: relation "consumption_recentconsumption" does not exist at character 673
(I have logging enabled at the postgres level, and copy-pasting the exact same request into a db console client works, without modifications whatsoever)
the request to the DB gets retried lots of times (more than 30?). Why is this happening? Is there a django setting to control this? (I am sending the request to the API just once, manually with curl)
EDIT
This is my model:
from django.db import models

class RecentConsumption(models.Model):
    name = models.CharField(max_length=100)
    ...

    class Meta:
        managed = False
This is the SQL statement, as generated by django and sent to the db:
SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", ... FROM "consumption_recentconsumption" LIMIT 21;
As I mentioned, this fails through django, but works fine when run directly against the db.
EDIT2
Logs from postgres, when running the sql directly:
2018-12-13 11:12:02.954 UTC [66] LOG: execute <unnamed>: SAVEPOINT JDBC_SAVEPOINT_4
2018-12-13 11:12:02.955 UTC [66] LOG: execute <unnamed>: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:12:10.038 UTC [66] LOG: execute <unnamed>: RELEASE SAVEPOINT JDBC_SAVEPOINT_4
Logs from postgres when running through django (repeated more than 30 times):
2018-12-13 11:13:50.782 UTC [75] LOG: statement: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:13:50.783 UTC [75] ERROR: relation "consumption_recentconsumption" does not exist at character 673
2018-12-13 11:13:50.783 UTC [75] STATEMENT: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
Answering myself, in case this helps somebody in the future.
I am running the CREATE VIEW command in PyCharm, which seems to wrap all operations in a transaction. That means the view is visible within PyCharm's own DB session (everything runs inside that transaction), but not from outside it. The Django app, running from the console, does not see the view.
The immediate fix is to commit the transaction in PyCharm, which makes the view visible to other sessions.
The final solution is to create the view via a Django migration.
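For reference, a minimal sketch of such a migration using migrations.RunSQL; the dependency name and the elided SELECT body are placeholders, not my actual code:

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('consumption', '0001_initial'),  # placeholder: the app's previous migration
    ]

    operations = [
        migrations.RunSQL(
            sql="""
                CREATE VIEW consumption_recentconsumption AS
                SELECT ...  -- the actual SELECT behind the view goes here
            """,
            reverse_sql="DROP VIEW IF EXISTS consumption_recentconsumption;",
        ),
    ]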
Issue description
I am trying to insert DML statistics from BigQuery system tables into BQ native tables for monitoring purposes, using an Airflow task.
For this, I am using the query below:
INSERT INTO
`my-project-id.my_dataset.my_table_metrics`
(table_name,
row_count,
inserted_row_count,
updated_row_count,
creation_time)
SELECT
b.table_name,
a.row_count AS row_count,
b.inserted_row_count,
b.updated_row_count,
b.creation_time
FROM
`my-project-id.my_dataset`.__TABLES__ a
JOIN (
SELECT
tables.table_id AS table_name,
dml_statistics.inserted_row_count AS inserted_row_count,
dml_statistics.updated_row_count AS updated_row_count,
creation_time AS creation_time
FROM
`my-project-id`.`region-europe-west3`.INFORMATION_SCHEMA.JOBS,
UNNEST(referenced_tables) AS tables
WHERE
DATE(creation_time) = current_date ) b
ON
a.table_id = b.table_name
WHERE
a.table_id = 'my_bq_table'
The query works in the BigQuery console but not via Airflow.
Error as reported by Airflow:
python_http_client.exceptions.UnauthorizedError: HTTP Error 401:
Unauthorized
[2022-10-21, 12:18:30 UTC] {standard_task_runner.py:93}
ERROR - Failed to execute job 313922 for task load_metrics (403 Access
Denied: Table
my-project-id:region-europe-west3.INFORMATION_SCHEMA.JOBS: User does
not have permission to query table
my-project-id:region-europe-west3.INFORMATION_SCHEMA.JOBS, or perhaps
it does not exist in location europe-west3.
How can I fix this access issue?
I appreciate your help.
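Two things worth checking are whether the service account behind the Airflow GCP connection can read INFORMATION_SCHEMA.JOBS (for example via the BigQuery Resource Viewer role) and whether the query job itself is submitted in europe-west3. A rough sketch of pinning the job location with BigQueryInsertJobOperator follows; the operator choice, DAG id, and connection id are assumptions about the setup, not taken from the question.

import pendulum
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

METRICS_SQL = """ ... """  # the INSERT ... SELECT statement shown above

with DAG(
    dag_id="bq_dml_metrics",                 # hypothetical DAG id
    start_date=pendulum.datetime(2022, 10, 1, tz="UTC"),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_metrics = BigQueryInsertJobOperator(
        task_id="load_metrics",
        configuration={"query": {"query": METRICS_SQL, "useLegacySql": False}},
        location="europe-west3",             # must match the region-europe-west3 INFORMATION_SCHEMA
        gcp_conn_id="google_cloud_default",  # this connection's service account needs
                                             # permission to list jobs project-wide
    )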
I uninstalled a few modules.
~/apps/drupal/htdocs/modules$ drush pm-uninstall relaxed
The following extensions will be uninstalled: relaxed
Do you really want to continue? (y/n): y
relaxed was successfully uninstalled. [ok]
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall replication
The following extensions will be uninstalled: replication
Do you really want to continue? (y/n): y
replication was successfully uninstalled. [ok]
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall multiversion
The following extensions will be uninstalled: multiversion
Do you really want to continue? (y/n): y
multiversion was successfully uninstalled.
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall views_rest_feed
The following extensions will be uninstalled: views_rest_feed
Do you really want to continue? (y/n): y
views_rest_feed was successfully uninstalled.
Now I see the following errors. How should I proceed with fixing them?
Entity/field definitions Mismatched entity and/or field definitions
The following changes were detected in the entity type and field definitions.
Comment
The Comment entity type needs to be updated.
File
The File entity type needs to be updated.
Content
The Revision ID field needs to be installed.
The UUID field needs to be updated.
Shortcut link
The UUID field needs to be updated.
Taxonomy term
The Taxonomy term entity type needs to be updated.
User
The UUID field needs to be updated.
Custom menu link
The Custom menu link entity type needs to be updated.
Running drush entity-updates gives
The following updates are pending:
comment entity type :
The Comment entity type needs to be updated.
file entity type :
The File entity type needs to be updated.
node entity type :
The Revision ID field needs to be installed.
The UUID field needs to be updated.
shortcut entity type :
The UUID field needs to be updated.
taxonomy_term entity type :
The Taxonomy term entity type needs to be updated.
user entity type :
The UUID field needs to be updated.
menu_link_content entity type :
The Custom menu link entity type needs to be updated.
Do you wish to run all pending updates? (y/n): y
Drupal\Core\Entity\EntityStorageException: The SQL storage cannot change the schema for an existing entity type (comment) with data. in [error]
Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema->onEntityTypeUpdate() (line 303 of
/opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php).
Failed: Drupal\Core\Entity\EntityStorageException: !message in [error]
Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema->onEntityTypeUpdate() (line 303 of
/opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php).
Cache rebuild complete. [ok]
Finished performing updates.
drush watchdog-show
ID Date Type Severity Message
147 22/Dec 15:57 php error Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column
'base_table.vid' in 'field list': SELECT base_table.vid AS vid, base_table.nid AS nid
146 22/Dec 15:57 php error Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column
'base_table.vid' in 'field list': SELECT base_table.vid AS vid, base_table.nid AS nid
145 22/Dec 15:57 cron notice Cron run completed.
144 22/Dec 15:57 cron notice Execution of update_cron() took 9.97ms.
143 22/Dec 15:57 cron notice Starting execution of update_cron(), execution of system_cron() took 15.01ms.
142 22/Dec 15:57 cron notice Starting execution of system_cron(), execution of search_cron() took 4.15ms.
141 22/Dec 15:57 cron notice Starting execution of search_cron(), execution of node_cron() took 30.37ms.
140 22/Dec 15:57 cron notice Starting execution of node_cron(), execution of history_cron() took 1.82ms.
139 22/Dec 15:57 cron notice Starting execution of history_cron(), execution of file_cron() took 11.11ms.
138 22/Dec 15:57 cron notice Starting execution of file_cron(), execution of field_cron() took 3.38ms.
Also accessing http://ipaddress/admin/modules/uninstall gives
The website encountered an unexpected error. Please try again later.
Apache error log shows
[Thu Dec 22 15:06:52.683333 2016] [core:notice] [pid 13291:tid 139826185398080] AH00094: Command line: '/opt/bitnami/apache2/bin/httpd.bin -f /opt/bitnami/apache2/conf/httpd.conf'
[Thu Dec 22 15:06:58.953491 2016] [proxy_fcgi:error] [pid 13300:tid 139825856366336] [client 123.201.127.176:49572] AH01071: Got error 'PHP message: PHP Fatal error: Call to a member function id() on null in /opt/bitnami/apps/drupal/htdocs/modules/multiversion/src/WorkspaceCacheContext.php on line 44\n'
[Thu Dec 22 15:12:39.540744 2016] [proxy_fcgi:error] [pid 13299:tid 139825386374912] [client 123.201.127.176:49808] AH01071: Got error 'PHP message: Uncaught PHP Exception Drupal\\Core\\Database\\DatabaseExceptionWrapper: "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'revision.revision_id' in 'field list': SELECT revision.revision_id AS revision_id, revision.langcode AS langcode, revision.revision_log AS revision_log, base.id AS id, base.type AS type, base.uuid AS uuid, CASE base.revision_id WHEN revision.revision_id THEN 1 ELSE 0 END AS isDefaultRevision\nFROM \n{block_content} base\nINNER JOIN {block_content_revision} revision ON revision.revision_id = base.revision_id; Array\n(\n)\n" at /opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Database/Connection.php line 671\n'
[Thu Dec 22 15:14:41.556451 2016] [proxy_fcgi:error] [pid 13299:tid 139825428338432] [client 123.201.127.176:49828] AH01071: Got error 'PHP message: Uncaught PHP Exception Drupal\\Core\\Database\\DatabaseExceptionWrapper: "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'revision.revision_id' in 'field list': SELECT revision.revision_id AS revision_id, revision.langcode AS langcode, revision.revision_log AS revision_log, base.id AS id, base.type AS type, base.uuid AS uuid, CASE base.revision_id WHEN revision.revision_id THEN 1 ELSE 0 END AS isDefaultRevision\nFROM \n{block_content} base\nINNER JOIN {block_content_revision} revision ON revision.revision_id = base.revision_id; Array\n(\n)\n" at /opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Database/Connection.php line 671\n'
I am using an AWS Lightsail Bitnami Drupal instance.
Try going to the database and truncating all tables that start with cache_. That is what I would do after facing such an issue. It's possible that the uninstall process crashed before the cache was cleared, so the cache still holds references to fields that were already deleted during the module uninstall.
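Purely as an illustration of that step (any MySQL client or drush sql-query works just as well), a small script along these lines could clear the cache tables; the connection credentials here are placeholders for whatever is in settings.php:

import MySQLdb  # assumes the MySQLdb/mysqlclient package is available

# Placeholder credentials; use the values from Drupal's settings.php.
conn = MySQLdb.connect(host="localhost", user="drupal_user",
                       passwd="drupal_pass", db="bitnami_drupal")
cur = conn.cursor()
cur.execute(r"SHOW TABLES LIKE 'cache\_%'")   # tables starting with cache_
for (table,) in cur.fetchall():
    cur.execute("TRUNCATE TABLE `%s`" % table)
conn.close()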
I'm having a tough time deploying my Django app (v1.9) to Heroku (Postgres 9.5, cedar-14 stack).
Here's how I arrived here: I had tremendous migration issues that resulted in "django.db.utils.ProgrammingError: relation already exists" and "column "name" of relation "django_content_type" does not exist" errors. Figuring that there were old, mishandled migrations imported into the django_migrations table, I decided to push a fresh local DB up to an empty Heroku database with:
PGUSER=dbnameHERE PGPASSWORD=dbpassHERE heroku pg:push localDBnameHERE DATABASE --app appnameHERE
This worked flawlessly. After that, here’s what happens when I run these commands:
When I run heroku local, my full app shows locally on 0.0.0.0:5000. (/admin works, but with CSS issues, presumably because the whitenoise module I imported does not deal well under the production .env settings.)
When I run heroku local -e .env.DEV (development .env settings) on 0.0.0.0:5000, everything, including /admin, works wonderfully.
The issue begins when gunicorn comes into the picture. When I run gunicorn config.wsgi:application, it runs, but I get a blank page saying "This site can't be reached, localhost took too long to respond".
Here is the request header from the blank webpage:
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Server: gunicorn/19.4.5
Date: Tue, 07 Jun 2016 22:39:00 GMT
Transfer-Encoding: chunked
Location: https://sitename.herokuapp.com/
Content-Type: text/html; charset=utf-8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Via: 1.1 vegur
When I run heroku run python manage.py check --deploy, I get this:
WARNINGS:
?: (security.W001) You do not have 'django.middleware.security.SecurityMiddleware' in your MIDDLEWARE_CLASSES so the SECURE_HSTS_SECONDS, SECURE_CONTENT_TYPE_NOSNIFF, SECURE_BROWSER_XSS_FILTER, and SECURE_SSL_REDIRECT settings will have no effect.
?: (security.W009) Your SECRET_KEY has less than 50 characters or less than 5 unique characters. Please generate a long and random SECRET_KEY, otherwise many of Django's security-critical features will be vulnerable to attack.
?: (security.W012) SESSION_COOKIE_SECURE is not set to True. Using a secure-only session cookie makes it more difficult for network traffic sniffers to hijack user sessions.
?: (security.W016) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE_CLASSES, but you have not set CSRF_COOKIE_SECURE to True. Using a secure-only CSRF cookie makes it more difficult for network traffic sniffers to steal the CSRF token.
?: (security.W017) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE_CLASSES, but you have not set CSRF_COOKIE_HTTPONLY to True. Using an HttpOnly CSRF cookie makes it more difficult for cross-site scripting attacks to steal the CSRF token.
?: (security.W018) You should not have DEBUG set to True in deployment.
?: (security.W019) You have 'django.middleware.clickjacking.XFrameOptionsMiddleware' in your MIDDLEWARE_CLASSES, but X_FRAME_OPTIONS is not set to 'DENY'. The default is 'SAMEORIGIN', but unless there is a good reason for your site to serve other parts of itself in a frame, you should change it to 'DENY'.
?: (security.W020) ALLOWED_HOSTS must not be empty in deployment.
System check identified 8 issues (0 silenced).
I get the same blank page and 301 redirect (with no error code) when I navigate to sitename.herokuapp.com as when running gunicorn locally. Any guess as to why my app throws redirects when gunicorn gets involved?
I eventually solved this issue by re-reading the Heroku docs, which state that production environments must have their config vars loaded separately, either through the dashboard interface or with the heroku config:set command from the Heroku CLI.
I had placed my config vars in $VIRTUAL_ENV/bin/postactivate, so that the different virtualenv configs were loaded on activation of the virtualenv; however, that file was not being used in production.
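The production block below keys off an ENVIRONMENT value; as a sketch, that flag can be read from a Heroku config var in settings.py (the DJANGO_ENV name is my own placeholder, set with heroku config:set DJANGO_ENV=production):

import os

# Placeholder variable name; set on Heroku with: heroku config:set DJANGO_ENV=production
ENVIRONMENT = os.environ.get('DJANGO_ENV', 'development')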
if ENVIRONMENT == 'production':
    SECURE_BROWSER_XSS_FILTER = True
    X_FRAME_OPTIONS = 'DENY'
    SECURE_SSL_REDIRECT = True
    SECURE_HSTS_SECONDS = 3600
    SECURE_HSTS_INCLUDE_SUBDOMAINS = True
    SECURE_HSTS_PRELOAD = True
    SECURE_CONTENT_TYPE_NOSNIFF = True
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
Otherwise, some security issues can arise.
I am trying to make a query with aggregate in Django; it works fine on my local computer but not on the server.
On both machines, pymongo is installed in a Python virtual environment:
pip freeze | grep mongo
pymongo==2.5.2
I can also get the inserted data on both machines with the find() method:
conn.firmalar.searchlogger.find()
But the aggregate method works locally and fails on the server, even though everything installed is the same. I get this error when I attempt to run it on the server:
import pymongo
conn = pymongo.Connection()
search = conn.firmalar.searchlogger.aggregate([{"$group": {"_id": "$what", "count": {"$sum": 1}}}])
OperationFailure at /admin/weblog/
command SON([('aggregate', u'searchlogger'), ('pipeline', [{'$group': {'count': {'$sum': 1}, '_id': '$what'}}])]) failed: no such cmd: aggregate
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/collection.pyc in aggregate(self, pipeline)
1059 self.secondary_acceptable_latency_ms),
1060 slave_okay=self.slave_okay,
-> 1061 _use_master=use_master)
1062
1063 # TODO key and condition ought to be optional, but deprecation
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/helpers.pyc in _check_command_response(response, reset, msg, allowable_errors)
145 if code in (11000, 11001, 12582):
146 raise DuplicateKeyError(errmsg, code)
--> 147 raise OperationFailure(msg % errmsg, code)
148
149
It's not about the driver - it's about MongoDB itself. aggregate() was introduced in MongoDB 2.2: docs.
Most likely, you are using an older version of MongoDB. Check your MongoDB version and upgrade if needed. Also check that your Python code is connecting to a MongoDB server of version >= 2.2.
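For example, the server version can be checked from the same connection object used in the question (this is the pymongo 2.x API, matching the code above):

import pymongo

conn = pymongo.Connection()             # 2.x-style connection, as in the question
print(conn.server_info()['version'])    # aggregate() requires MongoDB >= 2.2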
Also see:
MongoDB Java driver : no such cmd: aggregate
Aggregate failed: no such cmd: aggregate
I am using the Python Toolkit for Rally REST API to update defects on our Rally server. I have confirmed that I am able to make contact with the server and authenticate fine by getting a list of current defects. I am running into issues with updating them. I am using Python 2.7.3 with pyral 0.9.1 and requests 0.13.3.
Also, I am passing 'verify=False' to the Rally() call and have made appropriate changes to the restapi module to compensate for this.
Here is my test code:
import sys
from pyral import Rally, rallySettings
server = "rallydev.server1.com"
user = "user#mycompany.com"
password = "trial"
workspace = "trialWorkspace"
project = "Testing Project"
defectID = "DE192"
rally = Rally(server, user, password, workspace=workspace,
              project=project, verify=False)
defect_data = { "FormattedID" : defectID,
                "State" : "Closed"
              }
try:
    defect = rally.update('Defect', defect_data)
except Exception, details:
    sys.stderr.write('ERROR: %s \n' % details)
    sys.exit(1)
print "Defect %s updated" % defect.FormattedID
When I run the script:
[temp]$ ./updefect.py
ERROR: Unable to update the Defect
If I change the code in the RallyRESTResponse function to print out the value of self.errors when found (line 164 of rallyresp.py), I get this output:
[temp]$ ./updefect.py
[u"Cannot parse input stream due to I/O error as JSON document: Parse error: expected '{' but saw '\uffff' [ chars read = >>>\uffff<<< ]"]
ERROR: Unable to update the Defect
I did find another question that sounds like it might be related to mine:
App SDK: Erorr parsing input stream when running query
Can you provide any assistance?
Pairing Michael's observation regarding the GZIP encoding with that of another astute Rally customer working a Support case on the issue - it appears that some versions of the requests module will default to GZIP compression if the content-type is not specifically defined.
The fix is to set content-type to application/json in the REST Headers section of pyral's config.py:
RALLY_REST_HEADERS = \
    {
        'X-RallyIntegrationName'     : 'Python toolkit for Rally REST API',
        'X-RallyIntegrationVendor'   : 'Rally Software Development',
        'X-RallyIntegrationVersion'  : '%s.%s.%s' % __version__,
        'X-RallyIntegrationLibrary'  : 'pyral-%s.%s.%s' % __version__,
        'X-RallyIntegrationPlatform' : 'Python %s' % platform.python_version(),
        'X-RallyIntegrationOS'       : platform.platform(),
        'User-Agent'                 : 'Pyral Rally WebServices Agent',
        'Content-Type'               : 'application/json',
    }
What you are seeing is probably not related to the Python 2.7.3 / requests 0.13.3 versions being used. The error message you saw has also been reported using the Javascript based App SDK and .NET Toolkit for Rally (2 separate reports here on SO) and at least one other person using Python 2.6.6 and requests 0.9.2. It appears that the error verbiage is being generated on the Rally WSAPI back-end. Current assessment by fellow Rally'ers is that it is an encoding related issue. The question is where the encoding issue originates.
I have yet to be able to repro this issue, having tried with several versions of Python (2.6.x and 2.7.x), several versions of requests and on Linux, MacOS and Win7.
As you seem to be pretty comfortable with diving into the code and running in debug mode, one avenue to try is to capture the defective POST URL and POST data and attempt the update via a browser-based REST client like 'Simple REST Client' or Poster, observing whether you get the same error message in the WSAPI response.
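Along the same lines, the captured POST can also be replayed from a short script with an explicit Content-Type header, which is essentially what the config.py change above enforces; the URL, payload, and credentials below are placeholders for whatever the debug output shows:

import json
import requests

# Placeholders: substitute the exact URL and JSON body captured from pyral's debug output.
url = "https://rally1.rallydev.com/slm/webservice/1.30/defect/123456789.js"
payload = {"Defect": {"State": "Closed"}}

response = requests.post(url,
                         data=json.dumps(payload),
                         headers={"Content-Type": "application/json"},
                         auth=("user@mycompany.com", "password"),
                         verify=False)
print(response.status_code)
print(response.content)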
I'm seeing similar behavior with pyral while trying to add an attachment to a defect.
With debugging and logging on I see this request on stdout:
2012-07-20T15:11:24.855212 PUT https://rally1.rallydev.com/slm/webservice/1.30/attachmentcontent/create.js?workspace=workspace/123456789
Then the json in the logfile:
2012-07-20 15:11:24.854 PUT attachmentcontent/create.js?workspace=workspace/123456789
{"AttachmentContent": {"Content": "iVBORw0KGgoAAAANSUhEUgAABBQAAAJrCAIAAADf2VflAAAXOWlDQ...
Then this in the logfile (after a bit of fighting with restapi.py to get around the unicode error):
2012-07-20 15:11:25.260 404 Cannot parse input stream due to I/O error as JSON document: Parse error: expected '{' but saw '?' [ chars read = >>>?<<< ]
The notable thing there is the 404 error code. Also, the "Cannot parse input stream..." error message is not coming from pyral, it's coming from Rally's server. So pyral is sending Rally something Rally can't understand.
I also logged the response headers, which may be a clue:
{'rallyrequestid': 'qs-app-03ml3akfhdpjk7c430otjv50ak.qs-app-0387404259', 'content-encoding': 'gzip', 'transfer-encoding': 'chunked', 'expires': 'Fri, 20 Jul 2012 19:18:35 GMT', 'vary': 'Accept-Encoding', 'cache-control': 'no-cache,no-store,max-age=0,must-revalidate', 'date': 'Fri, 20 Jul 2012 19:18:36 GMT', 'p3p': 'CP="NON DSP COR CURa PSAa PSDa OUR NOR BUS PUR COM NAV STA"', 'content-type': 'text/javascript; charset=utf-8'}
Note the 'content-encoding': 'gzip'. I suspect the requests module (I'm using 0.13.3 on Mac OS with Python 2.6) is gzip-encoding its PUT request but the Rally API server is not properly decoding that.