Hi, I am a newbie to TensorFlow. My aim is to convert a .pb file to .tflite from a pretrained model, for my own understanding. I have downloaded the mobilenet_v1_1.0_224 model. Below is the file structure of the model:
mobilenet_v1_1.0_224.ckpt.data-00000-of-00001 - 66312kb
mobilenet_v1_1.0_224.ckpt.index - 20kb
mobilenet_v1_1.0_224.ckpt.meta - 3308kb
mobilenet_v1_1.0_224.tflite - 16505kb
mobilenet_v1_1.0_224_eval.pbtxt - 520kb
mobilenet_v1_1.0_224_frozen.pb - 16685kb
I know the model already ships with a .tflite file, but for my understanding I am trying to convert it myself.
My First Step: Creating the frozen graph file
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Rebuild the graph from the .meta file and restore the checkpoint weights.
imported_meta = tf.train.import_meta_graph(base_dir + model_folder_name + meta_file, clear_devices=True)
graph_ = tf.get_default_graph()

with tf.Session() as sess:
    imported_meta.restore(sess, base_dir + model_folder_name + checkpoint)
    graph_def = sess.graph.as_graph_def()
    # Fold the variables into constants so the graph can be serialized standalone.
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, graph_def, ['MobilenetV1/Predictions/Reshape_1'])
    with tf.gfile.GFile(base_dir + model_folder_name + 'my_frozen.pb', "wb") as f:
        f.write(output_graph_def.SerializeToString())
I have successfully created my_frozen.pb (16,590 KB). But the original frozen file is 16,685 KB, as is clearly visible in the folder structure above. So this is my first question: why is the file size different? Am I following a wrong path?
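For what it's worth, one way to see where the size difference comes from is to load both frozen graphs and compare their node counts and op types; any extra training/queue ops that survive the freeze would show up here. A minimal sketch (TF 1.x as in the question, file names as in the listing above):

import collections
import tensorflow as tf

def summarize(pb_path):
    # Parse the frozen GraphDef and print how many nodes of each op type it contains.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    ops = collections.Counter(node.op for node in graph_def.node)
    print(pb_path, "->", len(graph_def.node), "nodes")
    print(ops.most_common(10))

summarize("mobilenet_v1_1.0_224_frozen.pb")  # the shipped frozen graph
summarize("my_frozen.pb")                    # the graph frozen above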
My Second Step: Creating the .tflite file using a bazel command
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/path_to_folder/my_frozen.pb --output_file=/path_to_folder/model.tflite --inference_type=FLOAT --input_shape=1,224,224,3 --input_array=input --output_array=MobilenetV1/Predictions/Reshape_1
This command gives me model.tflite - 0 KB.
Traceback for the bazel command
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.369s, Critical Path: 0.01s
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/frozengraph.pb' '--output_file=/home/ubuntu/DEEP_LEARNING/Prashant/TensorflowBasic/mobilenet_v1_1.0_224/float_model.tflite' '--inference_type=FLOAT' '--input_shape=1,224,224,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2018-04-12 16:36:16.190375: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: FIFOQueueV2
2018-04-12 16:36:16.190707: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1265] Converting unsupported operation: QueueDequeueManyV2
2018-04-12 16:36:16.202293: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211322: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 290 operators, 462 arrays (0 quantized)
2018-04-12 16:36:16.211756: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Python Version - 2.7.6
TensorFlow Version - 1.5.0
Thanks in advance :)
The error
Check failed: mean_shape.dims() == multiplier_shape.dims()
was an issue with the resolution of batch norm and has been resolved in:
https://github.com/tensorflow/tensorflow/commit/460a8b6a5df176412c0d261d91eccdc32e9d39f1#diff-49ed2a40acc30ff6d11b7b326fbe56bc
In my case the error occurred using TensorFlow v1.7.
The solution was to use TensorFlow v1.15 (nightly) and convert with the toco command-line tool:
toco --graph_def_file=/path_to_folder/my_frozen.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_file=/path_to_folder/my_output_model.tflite \
  --input_shapes=1,224,224,3 \
  --input_arrays=input \
  --output_format=TFLITE \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --inference_type=FLOAT
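If you end up on a recent 1.x build anyway, the same conversion can also be done from Python with the TFLiteConverter API instead of the toco CLI. A minimal sketch, assuming TensorFlow 1.12 or later where tf.lite.TFLiteConverter.from_frozen_graph is available; the paths and array names are the ones used above:

import tensorflow as tf

# Build a converter directly from the frozen GraphDef produced in step one.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="/path_to_folder/my_frozen.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Predictions/Reshape_1"],
    input_shapes={"input": [1, 224, 224, 3]})

tflite_model = converter.convert()
with open("/path_to_folder/my_output_model.tflite", "wb") as f:
    f.write(tflite_model)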
Related
I am trying to deploy my first function to Google Cloud using the terminal from VS Code.
I followed the tutorial and downloaded the git repo.
I try to deploy it like this:
gcloud functions deploy nodejs-http-function \
--gen2 \
--runtime=nodejs16 \
--region=europe-west1 \
--source=projects \
--entry-point=helloGET \
--trigger-http \
--allow-unauthenticated
But it fails saying there was an unexpected error.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed with status: FAILURE and message: An unexpected error occurred. Refer to build logs:
2022-08-30T10:37:20.251175889Z Copying gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918...
2022-08-30T10:37:20.315626228Z / [0 files][ 0.0 B/ 22.0 B]   / [1 files][ 22.0 B/ 22.0 B]
2022-08-30T10:37:20.315708737Z Operation completed over 1 objects/22.0 B.
2022-08-30T10:37:21.477875307Z Archive: /tmp/source-archive.zip
2022-08-30T10:37:21.477882569Z warning [/tmp/source-archive.zip]: zipfile is empty
2022-08-30T10:37:21.988690356Z Fetching storage object: gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918
2022-08-30T10:37:23.934777658Z Copying gs://gcf-v2-sources-988480765475-europe-west1/nodejs-http-function/function-source.zip#1661855830947918...
2022-08-30T10:37:23.997108157Z / [0 files][ 0.0 B/ 22.0 B]   / [1 files][ 22.0 B/ 22.0 B]
2022-08-30T10:37:23.997183532Z Operation completed over 1 objects/22.0 B.
2022-08-30T10:37:25.147810126Z warning [/tmp/source-archive.zip]: zipfile is empty
2022-08-30T10:37:25.147817595Z Archive: /tmp/source-archive.zip
2022-08-30T10:37:25.157272064Z ERROR
2022-08-30T10:37:25.157303927Z ERROR: error fetching storage source: generic::unknown: retry budget exhausted (3 attempts): fetching gcs source: unpacking source from gcs: source fetch container exited with non-zero status: 1
I don't really know what to make of this.
EDIT:
My path to the index.js file is here:
/Users/juliustolksdorf/Projects/Interior_Circle/webhook/nodejs-docs-samples-main/functions/helloworld
and I am running the command from the VS Code terminal.
Of course, I first ran cd /Users/juliustolksdorf/Projects/Interior_Circle/webhook/nodejs-docs-samples-main/functions/helloworld to get to the directory where my index.js file lies, and then ran the deploy command, but it still failed.
I have to run a Spark job (I am new to Spark) and I am getting the following error:
[2022-02-16 14:47:45,415] {{bash.py:135}} INFO - Tmp dir root location: /tmp
[2022-02-16 14:47:45,416] {{bash.py:158}} INFO - Running command: spark-submit --class org.xyz.practice.driver.PractitionerDriver s3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar
[2022-02-16 14:47:45,422] {{bash.py:169}} INFO - Output:
[2022-02-16 14:47:45,423] {{bash.py:173}} INFO - bash: spark-submit: command not found
[2022-02-16 14:47:45,423] {{bash.py:177}} INFO - Command exited with return code 127
[2022-02-16 14:47:45,437] {{taskinstance.py:1482}} ERROR - Task failed with exception
What has to be done? Here is the relevant DAG code:
import logging

from airflow.operators.bash import BashOperator


def run_spark(**kwargs):
    import pyspark
    sc = pyspark.SparkContext()
    df = sc.textFile('s3://demoairflowpawan/people.txt')
    logging.info('Number of lines in people.txt = {0}'.format(df.count()))
    sc.stop()


spark_task = BashOperator(
    task_id='spark_java',
    bash_command='spark-submit --class {{ params.class }} {{ params.jar }}',
    params={'class': 'org.xyz.practice.driver.PractitionerDriver',
            'jar': 's3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar'},
    dag=dag
)
The question is: why do you expect spark-submit to be there?
If you created the default Airflow pods, then they come with the Airflow code only.
You can check an example for Spark and Airflow here - https://medium.com/codex/executing-spark-jobs-with-apache-airflow-3596717bbbe3 - and they state specifically that "Spark binaries must be added and mapped".
So you need to figure out how to get the Spark binaries onto the existing Airflow pod.
Alternatively, you can create another Kubernetes job which does the spark-submit, and have your DAG trigger that job (see the sketch below).
Sorry for the high-level answer...
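For the "separate Kubernetes job" route, one option is to launch a pod that ships spark-submit and let the DAG drive it. A rough sketch with the KubernetesPodOperator (this assumes the cncf.kubernetes provider is installed; the image, namespace, and exact import path are illustrative and depend on your setup and provider version):

from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

submit_spark_job = KubernetesPodOperator(
    task_id='spark_java',
    name='spark-submit-practitioner',
    namespace='default',              # adjust to your cluster
    image='bitnami/spark:3.1.2',      # any image that contains spark-submit
    cmds=['spark-submit'],
    arguments=[
        '--class', 'org.xyz.practice.driver.PractitionerDriver',
        's3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar',
    ],
    dag=dag,                          # the same DAG object as in the question
)

Note that reading the jar from s3:// still requires the matching Hadoop/AWS packages inside the image.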
This question already has answers here:
Using different delimiters in sed commands and range addresses
(3 answers)
Closed 1 year ago.
When following their documentation and running ./build_packages --board=lakitu, I get the following error.
I am using Ubuntu 16.04. It looks like a sed syntax error? Am I missing a variable? Does sed work differently on different operating systems, or is something wrong with their documentation/scripts? I followed their documentation to a T and didn't add or configure anything; I am waiting for a successful run first.
Looking at similar questions, they all appear to be syntax errors...
* Package: sys-boot/shim-14.0.20180308-r4
* Repository: lakitu
* USE: abi_x86_64 amd64 elibc_glibc kernel_linux userland_GNU
* FEATURES: network-sandbox sandbox splitdebug userpriv usersandbox
* Running stacked hooks for pre_pkg_setup
* sysroot_build_bin_dir ... [ ok ]
* Running stacked hooks for post_pkg_setup
* python_eclass_hack ... [ ok ]
* Running stacked hooks for pre_src_unpack
* python_multilib_setup ... [ ok ]
>>> Unpacking source...
>>> Unpacking shim-14.0.20180308.tar.gz to /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work
>>> Source unpacked in /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work
* Running stacked hooks for post_src_unpack
* asan_init ... [ ok ]
>>> Preparing source in /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a ...
>>> Source prepared.
>>> Configuring source in /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a ...
>>> Source configured.
>>> Compiling source in /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a ...
make -j8 ARCH=x86_64 CROSS_COMPILE=x86_64-cros-linux-gnu- EFI_INCLUDE=/build/lakitu//usr/include/efi EFI_PATH=/build/lakitu//usr/lib64 ARCH_LDFLAGS=--no-experimental-use-relr COMMITID=79cdb2a215de2ace7d1bf0a294165a04b726c70a DEFAULT_LOADER=\\\\grub-lakitu.efi shimx64.efi
sed -e "s,##VERSION##,14," \
-e "s,##UNAME##,Linux x86_64 Intel Xeon E312xx (Sandy Bridge, IBRS update) GenuineIntel GNU/Linux," \
-e "s,##COMMIT##,79cdb2a215de2ace7d1bf0a294165a04b726c70a," \
< /build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a/version.c.in > version.c
sed: -e expression #2, char 60: unknown option to `s'
make: *** [Makefile:183: version.c] Error 1
* ERROR: sys-boot/shim-14.0.20180308-r4::lakitu failed (compile phase):
* emake failed
*
* If you need support, post the output of `emerge --info '=sys-boot/shim-14.0.20180308-r4::lakitu'`,
* the complete build log and the output of `emerge -pqv '=sys-boot/shim-14.0.20180308-r4::lakitu'`.
* The complete build log is located at '/build/lakitu/tmp/portage/logs/sys-boot:shim-14.0.20180308-r4:20190531-002217.log'.
* For convenience, a symlink to the build log is located at '/build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/temp/build.log'.
* The ebuild environment file is located at '/build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/temp/environment'.
* Working directory: '/build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a'
* S: '/build/lakitu/tmp/portage/sys-boot/shim-14.0.20180308-r4/work/shim-79cdb2a215de2ace7d1bf0a294165a04b726c70a'
There's a , after Bridge in the replacement text. Since , is also the delimiter of the s command, sed treats everything after "(Sandy Bridge" as flags, which is why it reports "unknown option to `s'":
-e "s,##UNAME##,Linux x86_64 Intel Xeon E312xx (Sandy Bridge, IBRS update) GenuineIntel GNU/Linux," \
Change the delimiter to a character that occurs in neither the pattern nor the replacement (note that # would clash with ##UNAME##), for example |:
-e "s|##UNAME##|Linux x86_64 Intel Xeon E312xx (Sandy Bridge, IBRS update) GenuineIntel GNU/Linux|" \
We are using Google Cloud Bigtable for our big data.
When I run a MapReduce job I assemble a jar and run it, and now I'm getting this error:
Application application_1451577928704_0050 failed 2 times due to AM Container for appattempt_1451577928704_0050_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://censored:8088/cluster/app/application_1451577928704_0050 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e02_1451577928704_0050_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
When I logged in to see the logs of the worker node, I saw this error:
2016-02-15 02:59:54,106 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1451577928704_0050_000001
2016-02-15 02:59:54,294 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-02-15 02:59:54,319 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-02-15 02:59:54,319 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 50 cluster_timestamp: 1451577928704 } attemptId: 1 } keyId: -******)
2016-02-15 02:59:54,424 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-02-15 02:59:54,755 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2016-02-15 02:59:54,855 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2016-02-15 02:59:54,911 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.ClassCastException: org.apache.xerces.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.ClassCastException: org.apache.xerces.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:478)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:458)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1560)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:458)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:377)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1518)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
Caused by: java.lang.ClassCastException: org.apache.xerces.dom.DeferredElementNSImpl cannot be cast to org.w3c.dom.Text
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2603)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2502)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:981)
    at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1031)
    at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1432)
    at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
    at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
    at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
    at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:105)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:184)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:474)
    ... 11 more
I tried an older jar and it runs perfectly fine, and I'm not sure why the new jar won't work - I didn't change anything.
Please advise?
Thanks!
Update 1: Here are some more details:
I set up the cluster with Dataproc.
We are using the newest versions; here are the library dependencies:
val BigtableHbase = "com.google.cloud.bigtable" % "bigtable-hbase-1.1" % "0.2.2"
val BigtableHbaseMapreduce = "com.google.cloud.bigtable" % "bigtable-hbase-mapreduce" % "0.2.2"
val CommonsCli = "commons-cli" % "commons-cli" % "1.2"
val HadoopCommon = "org.apache.hadoop" % "hadoop-common" % "2.7.1"
val HadoopMapreduceClientApp = "org.apache.hadoop" % "hadoop-mapreduce-client-app" % "2.7.1"
val HbaseCommon = "org.apache.hbase" % "hbase-common" % "1.1.2"
val HbaseProtocol = "org.apache.hbase" % "hbase-protocol" % "1.1.2"
val HbaseClient = "org.apache.hbase" % "hbase-client" % "1.1.2"
val HbaseServer = "org.apache.hbase" % "hbase-server" % "1.1.2"
val HbaseAnnotations = "org.apache.hbase" % "hbase-annotations" % "1.1.2"

libraryDependencies += BigtableHbase
libraryDependencies += BigtableHbaseMapreduce
libraryDependencies += CommonsCli
libraryDependencies += HadoopCommon
libraryDependencies += HadoopMapreduceClientApp
libraryDependencies += HbaseCommon
libraryDependencies += HbaseProtocol
libraryDependencies += HbaseClient
libraryDependencies += HbaseServer
libraryDependencies += HbaseAnnotations
Java version:
openjdk version "1.8.0_66-internal"
OpenJDK Runtime Environment (build 1.8.0_66-internal-b17)
OpenJDK 64-Bit Server VM (build 25.66-b17, mixed mode)
ALPN version: alpn-boot-8.1.3.v20150130
hbase version:
2016-02-15 20:45:42,050 INFO [main] util.VersionInfo: HBase 1.1.2
2016-02-15 20:45:42,051 INFO [main] util.VersionInfo: Source code repository file:///mnt/ram/bigtop/bigtop/output/hbase/hbase-1.1.2 revision=Unknown
2016-02-15 20:45:42,051 INFO [main] util.VersionInfo: Compiled by bigtop on Tue Nov 10 19:09:17 UTC 2015
2016-02-15 20:45:42,051 INFO [main] util.VersionInfo: From source with checksum 42e8a1890c700d37485c69a44a3
hadoop version:
Hadoop 2.7.1
Subversion https://bigdataoss-internal.googlesource.com/third_party/apache/bigtop -r 2a194d4d838b79460c3ceb892f3c9444218ba970
Compiled by bigtop on 2015-11-10T18:38Z
Compiled with protoc 2.5.0
From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
This command was run using /usr/lib/hadoop/hadoop-common-2.7.1.jar
I found the problem in my case!
The hbase-site.xml was slightly different in the hbase.client.connection.impl property.
<property>
  <name>hbase.client.connection.impl</name>
  <value>com.google.cloud.bigtable.hbase1_1.BigtableConnection</value>
</property>
I got to this after extracting and comparing the two jars.
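If it helps anyone, a quick way to do that comparison is to pull hbase-site.xml out of each assembly jar and diff the contents. A small sketch (the jar file names are placeholders, and it assumes hbase-site.xml sits at the root of each jar):

import difflib
import zipfile

def read_hbase_site(jar_path):
    # Assembly jars are plain zip files, so the bundled resources can be read directly.
    with zipfile.ZipFile(jar_path) as jar:
        return jar.read("hbase-site.xml").decode("utf-8").splitlines()

old = read_hbase_site("old-assembly.jar")  # placeholder names
new = read_hbase_site("new-assembly.jar")
print("\n".join(difflib.unified_diff(old, new, fromfile="old", tofile="new", lineterm="")))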
The newer versions of the bigtable client jar include newer versions of the gRPC jar. Newer versions of the gRPC jar depend on newer versions of alpn-boot or OpenSSL. In addition to a new version of the bigtable jar, you may need a new version of the alpn-boot jar. Unfortunately, the Jetty team isn't making new alpn-boot jars for Java7, which bdutil depends on.
We are actively working on moving away from bdutil to dataproc, which is the newer version of Google Cloud Hadoop management. Dataproc uses Java 8, and doesn't have the same problems as bdutil. There are still kinks we need to work out.
More information can be found at:
https://cloud.google.com/dataproc/examples/cloud-bigtable-example
and
https://github.com/grpc/grpc-java/blob/master/SECURITY.md
I am trying to make a query with aggregate in Django; it works fine on my local computer but not on the server.
On both machines, pymongo is installed in a Python virtual environment:
pip freeze | grep mongo
pymongo==2.5.2
I can also retrieve the inserted data on both machines with the find() method:
conn.firmalar.searchlogger.find()
But the aggregate method works on my local machine and not on the server, even though everything installed is the same. I get this error when I attempt to run it on the server:
import pymongo
conn = pymongo.Connection()
search = conn.firmalar.searchlogger.aggregate([{"$group": {"_id": "$what", "count": {"$sum": 1}}}])
OperationFailure at /admin/weblog/
command SON([('aggregate', u'searchlogger'), ('pipeline', [{'$group': {'count': {'$sum': 1}, '_id': '$what'}}])]) failed: no such cmd: aggregate
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/collection.pyc in aggregate(self, pipeline)
1059 self.secondary_acceptable_latency_ms),
1060 slave_okay=self.slave_okay,
-> 1061 _use_master=use_master)
1062
1063 # TODO key and condition ought to be optional, but deprecation
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/helpers.pyc in _check_command_response(response, reset, msg, allowable_errors)
145 if code in (11000, 11001, 12582):
146 raise DuplicateKeyError(errmsg, code)
--> 147 raise OperationFailure(msg % errmsg, code)
148
149
It's not about the driver - it's about MongoDB itself. aggregate() was introduced in MongoDB 2.2: docs.
Most likely, the server is running an older version of MongoDB. Check your MongoDB version and upgrade if needed. Also check that your code is connecting to a MongoDB instance of version >= 2.2.
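A quick way to confirm which server version the driver is actually talking to is to ask the server itself (a small sketch reusing the Connection object from the question; pymongo 2.x API):

import pymongo

conn = pymongo.Connection()
# buildInfo includes the server version string, e.g. "2.0.6" vs. "2.2.x"
print(conn.server_info()["version"])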
Also see:
MongoDB Java driver : no such cmd: aggregate
Aggregate failed: no such cmd: aggregate