I am trying to make a query with aggregate in Django. It works fine on my local computer but not on the server.
On both machines, pymongo is installed in a Python virtual environment:
pip freeze | grep mongo
pymongo==2.5.2
I can also get the inserted data on both machines with the find() method:
conn.firmalar.searchlogger.find()
But the aggregate method works on my local machine and not on the server, even though everything installed is the same. I get this error when I attempt to run it on the server:
import pymongo
conn = pymongo.Connection()
search = conn.firmalar.searchlogger.aggregate([{"$group": {"_id": "$what", "count": {"$sum": 1}}}])
OperationFailure at /admin/weblog/
command SON([('aggregate', u'searchlogger'), ('pipeline', [{'$group': {'count': {'$sum': 1}, '_id': '$what'}}])]) failed: no such cmd: aggregate
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/collection.pyc in aggregate(self, pipeline)
1059 self.secondary_acceptable_latency_ms),
1060 slave_okay=self.slave_okay,
-> 1061 _use_master=use_master)
1062
1063 # TODO key and condition ought to be optional, but deprecation
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/helpers.pyc in _check_command_response(response, reset, msg, allowable_errors)
145 if code in (11000, 11001, 12582):
146 raise DuplicateKeyError(errmsg, code)
--> 147 raise OperationFailure(msg % errmsg, code)
148
149
It's not about the driver; it's about MongoDB itself. aggregate() was introduced in MongoDB 2.2: docs.
Most likely, you are using an older version of MongoDB on the server. Check your MongoDB version and upgrade if needed. Also check that your Python code is connecting to a MongoDB instance of version >= 2.2.
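A quick way to confirm which server version the driver is actually talking to is to ask it from the same virtual environment; a minimal sketch using pymongo's server_info():
import pymongo

conn = pymongo.Connection()               # MongoClient() on newer pymongo versions
print conn.server_info()['version']       # aggregate() requires MongoDB >= 2.2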
Also see:
MongoDB Java driver : no such cmd: aggregate
Aggregate failed: no such cmd: aggregate
I have the following scenario:
- I have defined the right view in the database, taking care that the view is named according to the django conventions
- I have made sure that my model is not managed by django. The migration created is accordingly defined with managed=False
- The DB view is working fine by itself.
When triggering the API endpoint, two strange things happen:
- the request to the database fails with:
  ERROR: relation "consumption_recentconsumption" does not exist at character 673
  (I have logging enabled at the postgres level, and copy-pasting the exact same request into a db console client works, without modifications whatsoever)
- the request to the DB gets retried lots of times (more than 30?). Why is this happening? Is there a django setting to control this? (I am sending the request to the API just once, manually with curl)
EDIT
This is my model:
class RecentConsumption(models.Model):
    name = models.CharField(max_length=100)
    ...

    class Meta:
        managed = False
This is the SQL statement, as generated by django and sent to the db:
SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", ... FROM "consumption_recentconsumption" LIMIT 21;
As I mentioned, this fails through django, but works fine when run directly against the db.
EDIT2
Logs from postgres, when running the sql directly:
2018-12-13 11:12:02.954 UTC [66] LOG: execute <unnamed>: SAVEPOINT JDBC_SAVEPOINT_4
2018-12-13 11:12:02.955 UTC [66] LOG: execute <unnamed>: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:12:10.038 UTC [66] LOG: execute <unnamed>: RELEASE SAVEPOINT JDBC_SAVEPOINT_4
Logs from postgres when running through django (repeated more than 30 times):
2018-12-13 11:13:50.782 UTC [75] LOG: statement: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:13:50.783 UTC [75] ERROR: relation "consumption_recentconsumption" does not exist at character 673
2018-12-13 11:13:50.783 UTC [75] STATEMENT: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
Answering myself, in case this helps somebody in the future.
I was running the CREATE VIEW command in PyCharm, which seems to use a single transaction for all operations. That means the view is available within the DB session in PyCharm (since it uses that transaction for all requests), but not from outside. The Django app, running in the console, does not see the view.
The immediate fix is to commit the transaction in PyCharm, which makes the view visible to everyone.
The final solution is to create the view via a Django migration, as sketched below.
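For reference, a rough sketch of what such a migration can look like, using RunSQL; the view body and the dependency name below are placeholders, not the actual definitions:
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('consumption', '0001_initial'),  # hypothetical previous migration
    ]

    operations = [
        migrations.RunSQL(
            # placeholder: put the real view definition here
            sql='CREATE VIEW consumption_recentconsumption AS SELECT ...;',
            reverse_sql='DROP VIEW IF EXISTS consumption_recentconsumption;',
        ),
    ]
Since migrations are applied and committed by Django itself, the view then becomes visible to every session, not just the client that created it.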
I am trying to run mongodump on a DigitalOcean Ubuntu server. I am getting the following error:
root@Project:~# sudo mongodump --db <database> --out /root/backup/Localdb
2018-11-07T06:28:09.177+0000 Failed: error getting collections for database `<database>`: error running `listCollections`. Database: `<database>` Err: not authorized on `<database>` to execute command { listCollections: 1, cursor: {}, $readPreference: { mode: "secondaryPreferred" }, $db: `<database>` }
How can I fix this issue?
Thanks.
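Since the error is an authorization failure on listCollections, the server most likely has authentication enabled and mongodump needs credentials; a hedged example, where the user, password and authentication database are placeholders for whatever admin user exists on that deployment:
sudo mongodump --db <database> --out /root/backup/Localdb --username <adminUser> --password <password> --authenticationDatabase admin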
Hi, I am a newbie to TensorFlow. My aim is to convert a .pb file to .tflite from a pretrained model, for my own understanding. I have downloaded the mobilenet_v1_1.0_224 model. Below is the structure of the model:
mobilenet_v1_1.0_224.ckpt.data-00000-of-00001 - 66312kb
mobilenet_v1_1.0_224.ckpt.index - 20kb
mobilenet_v1_1.0_224.ckpt.meta - 3308kb
mobilenet_v1_1.0_224.tflite - 16505kb
mobilenet_v1_1.0_224_eval.pbtxt - 520kb
mobilenet_v1_1.0_224_frozen.pb - 16685kb
I know the model already includes a .tflite file, but for my understanding I am trying to convert it myself.
My first step: creating the frozen graph file
import tensorflow as tf
from tensorflow.python.framework import graph_util  # needed for convert_variables_to_constants

imported_meta = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
graph_ = tf.get_default_graph()

with tf.Session() as sess:
    #saver = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
    imported_meta.restore(sess, base_dir + model_folder_name + checkpoint)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['MobilenetV1/Predictions/Reshape_1'])
    with tf.gfile.GFile(base_dir + model_folder_name + './my_frozen.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
I have successfully created my_frozen.pb - 16590 kb. But the original file is 16,685 kb, as is clearly visible in the folder structure above. So this is my first question: why is the file size different? Am I following a wrong path?
My second step: creating the tflite file using a bazel command
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/path_to_folder/my_frozen.pb --output_file=/path_to_folder/model.tflite --inference_type=FLOAT --input_shape=1,224,224,3 --input_array=input --output_array=MobilenetV1/Predictions/Reshape_1
This command gives me model.tflite - 0 kb.
Traceback for the bazel command:
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.369s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/frozengraph.pb' '--output_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/float_model.tflite' '--inference_type=FLOAT' '--input_shape=1,224,224,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2018-04-12 16:36:16.190375: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: FIFOQueueV2
2018-04-12 16:36:16.190707: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: QueueDequeueManyV2
2018-04-12 16:36:16.202293: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211322: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211756: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Python Version - 2.7.6
Tensorflow Version - 1.5.0
Thanks In advance :)
The error Check failed: mean_shape.dims() == multiplier_shape.dims()
was an issue with resolution of batch norm and has been resolved in:
https://github.com/tensorflow/tensorflow/commit/460a8b6a5df176412c0d261d91eccdc32e9d39f1#diff-49ed2a40acc30ff6d11b7b326fbe56bc
In my case the error occurred using tensorflow v1.7.
The solution was to use tensorflow v1.15 (nightly):
toco --graph_def_file=/path_to_folder/my_frozen.pb \
--input_format=TENSORFLOW_GRAPHDEF \
--output_file=/path_to_folder/my_output_model.tflite \
--input_shape=1,224,224,3 \
--input_arrays=input \
--output_format=TFLITE \
--output_arrays=MobilenetV1/Predictions/Reshape_1 \
--inference_type=FLOAT
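If the bazel-built toco keeps producing an empty file, another route worth trying is the Python converter API that newer TF 1.x releases ship; a hedged sketch assuming tf.lite.TFLiteConverter is available (TF 1.13+), reusing the same paths and array names as above:
import tensorflow as tf

# Build a converter directly from the frozen graph created in the first step.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='/path_to_folder/my_frozen.pb',
    input_arrays=['input'],
    output_arrays=['MobilenetV1/Predictions/Reshape_1'],
    input_shapes={'input': [1, 224, 224, 3]},
)
tflite_model = converter.convert()

with open('/path_to_folder/my_output_model.tflite', 'wb') as f:
    f.write(tflite_model)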
I am trying to use Desired Capabilities in Selenium Python for IE on our 64-bit Windows 2008 machine, as IEDriverServer.exe keeps crashing halfway through the test when I use:
cls.driver = webdriver.Ie(Globals.IEdriver_path)
I want to try Desired Capabilities to see if it works OK this way.
I have the following in my setup:
class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        desired_caps = {}
        desired_caps['platform'] = 'WINDOWS'
        desired_caps['browserName'] = 'INTERNETEXPLORER'
        #cls.driver = webdriver.Remote('http://192.168.1.103:4444/wd/hub', desired_caps)
        cls.driver = webdriver.Remote('http://127.0.0.1:4444/wd/hub', desired_caps)
        cls.driver = webdriver.Ie(Globals.IEdriver_path)
        cls.driver.get(Globals.URL)
        cls.login_page = login.LoginPage(cls.driver)
I run the Selenium Server jar file as follows:
java -Dwebdriver.ie.driver="C:\\IEDriverServer.exe" -jar
selenium-server-standalone-2.53.0.jar
When I run my Selenium Python test I get the following error:
WebDriverException: Message: The best matching driver provider org.openqa.selenium.ie.InternetExplorerDriver can't create a new driver instance for Capabilities [{browserName=INTERNETEXPLORER, platform=WINDOWS}]
Build info: version: '2.53.0', revision: '35ae25b', time: '2016-03-15 17:00:58'
System info: host: 'JUSTIN-PC', ip: '192.168.1.164', os.name: 'Windows 7', os.arch: 'x86', os.version: '6.1', java.version: '1.8.0_45'
Driver info: driver.version: unknown
Stacktrace:
at org.openqa.selenium.remote.server.DefaultDriverFactory.newInstance (DefaultDriverFactory.java:62)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:222)
at org.openqa.selenium.remote.server.DefaultSession$BrowserCreator.call (DefaultSession.java:1)
at java.util.concurrent.FutureTask.run (None:-1)
at org.openqa.selenium.remote.server.DefaultSession$1.run (DefaultSession.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker (None:-1)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (None:-1)
at java.lang.Thread.run (None:-1)
If I use:
cls.driver = webdriver.Remote('http://192.168.1.103:4444/wd/hub', desired_caps)
Then I will get the following error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
How should I set Desired Capabilities in Selenium Python?
Thanks, Riaz
Here is an example to start a remote session with Internet Explorer:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
capabilities = DesiredCapabilities.INTERNETEXPLORER
capabilities.update({'logLevel' : 'ERROR'})
remote_server = "http://127.0.0.1:4444/wd/hub"
driver = webdriver.Remote(remote_server, capabilities)
driver.get('http://stackoverflow.com/')
I am using the Python Toolkit for Rally REST API to update defects on our Rally server. I have confirmed that I am able to make contact with the server and authenticate fine by getting a list of current defects. I am running into issues with updating them. I am using Python 2.7.3 with pyral 0.9.1 and requests 0.13.3.
Also, I am passing 'verify=False' to the Rally() call and have made the appropriate changes to the restapi module to compensate for this.
Here is my test code:
import sys
from pyral import Rally, rallySettings

server = "rallydev.server1.com"
user = "user@mycompany.com"
password = "trial"
workspace = "trialWorkspace"
project = "Testing Project"
defectID = "DE192"

rally = Rally(server, user, password, workspace=workspace,
              project=project, verify=False)

defect_data = { "FormattedID" : defectID,
                "State" : "Closed"
              }

try:
    defect = rally.update('Defect', defect_data)
except Exception, details:
    sys.stderr.write('ERROR: %s \n' % details)
    sys.exit(1)

print "Defect %s updated" % defect.FormattedID
When I run the script:
[temp]$ ./updefect.py
ERROR: Unable to update the Defect
If I change the code in the RallyRESTResponse function to print out the value of self.errors when found (line 164 of rallyresp.py), I get this output:
[temp]$ ./updefect.py
[u"Cannot parse input stream due to I/O error as JSON document: Parse error: expected '{' but saw '\uffff' [ chars read = >>>\uffff<<< ]"]
ERROR: Unable to update the Defect
I did find another question that sounds like it might possibly be related to mine here:
App SDK: Erorr parsing input stream when running query
Can you provide any assistance?
Pairing Michael's observation regarding the GZIP encoding with that of another astute Rally customer working a Support case on the issue, it appears that some versions of the requests module will default to GZIP compression if the content type is not specifically defined.
The fix is to set content-type to application/json in the REST Headers section of pyral's config.py:
RALLY_REST_HEADERS = \
    {
        'X-RallyIntegrationName'     : 'Python toolkit for Rally REST API',
        'X-RallyIntegrationVendor'   : 'Rally Software Development',
        'X-RallyIntegrationVersion'  : '%s.%s.%s' % __version__,
        'X-RallyIntegrationLibrary'  : 'pyral-%s.%s.%s' % __version__,
        'X-RallyIntegrationPlatform' : 'Python %s' % platform.python_version(),
        'X-RallyIntegrationOS'       : platform.platform(),
        'User-Agent'                 : 'Pyral Rally WebServices Agent',
        'Content-Type'               : 'application/json',
    }
What you are seeing is probably not related to the Python 2.7.3 / requests 0.13.3 versions being used. The error message you saw has also been reported using the Javascript based App SDK and .NET Toolkit for Rally (2 separate reports here on SO) and at least one other person using Python 2.6.6 and requests 0.9.2. It appears that the error verbiage is being generated on the Rally WSAPI back-end. Current assessment by fellow Rally'ers is that it is an encoding related issue. The question is where the encoding issue originates.
I have yet to be able to repro this issue, having tried with several versions of Python (2.6.x and 2.7.x), several versions of requests and on Linux, MacOS and Win7.
As you seem to be pretty comfortable with diving in to the code and running in debug mode, one avenue to try is to capture the defective POST URL and POST data and attempting the update via a browser based REST client like 'Simple REST Client' or Poster and observing if you get the same error message in the WSAPI response.
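As a concrete way to do that from Python rather than a browser extension, here is a rough sketch of replaying a captured update with the requests library while setting the Content-Type explicitly; the URL and payload are hypothetical stand-ins for whatever your debug log shows:
import json
import requests

# Hypothetical values - substitute the URL and body captured from pyral's debug output.
url = 'https://rally1.rallydev.com/slm/webservice/1.30/defect/12345.js'
payload = {'Defect': {'State': 'Closed'}}

response = requests.post(
    url,
    data=json.dumps(payload),
    headers={'Content-Type': 'application/json'},  # set explicitly, per the fix above
    auth=('user@mycompany.com', 'trial'),
    verify=False,
)
print response.status_code, response.text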
I'm seeing similar behavior with pyral while trying to add an attachment to a defect.
With debugging and logging on I see this request on stdout:
2012-07-20T15:11:24.855212 PUT https://rally1.rallydev.com/slm/webservice/1.30/attachmentcontent/create.js?workspace=workspace/123456789
Then the json in the logfile:
2012-07-20 15:11:24.854 PUT attachmentcontent/create.js?workspace=workspace/123456789
{"AttachmentContent": {"Content": "iVBORw0KGgoAAAANSUhEUgAABBQAAAJrCAIAAADf2VflAAAXOWlDQ...
Then this in the logfile (after a bit of fighting with restapi.py to get around the unicode error):
2012-07-20 15:11:25.260 404 Cannot parse input stream due to I/O error as JSON document: Parse error: expected '{' but saw '?' [ chars read = >>>?<<< ]
The notable thing there is the 404 error code. Also, the "Cannot parse input stream..." error message is not coming from pyral, it's coming from Rally's server. So pyral is sending Rally something Rally can't understand.
I also logged the response headers, which may be a clue:
{'rallyrequestid': 'qs-app-03ml3akfhdpjk7c430otjv50ak.qs-app-0387404259', 'content-encoding': 'gzip', 'transfer-encoding': 'chunked', 'expires': 'Fri, 20 Jul 2012 19:18:35 GMT', 'vary': 'Accept-Encoding', 'cache-control': 'no-cache,no-store,max-age=0,must-revalidate', 'date': 'Fri, 20 Jul 2012 19:18:36 GMT', 'p3p': 'CP="NON DSP COR CURa PSAa PSDa OUR NOR BUS PUR COM NAV STA"', 'content-type': 'text/javascript; charset=utf-8'}
Note the 'content-encoding': 'gzip' there. I suspect the requests module (I'm using 0.13.3 on Mac OS with Python 2.6) is gzip-encoding its PUT request, but the Rally API server is not properly decoding it.