Unhandled rejection MongoError: CMD_NOT_ALLOWED: mapreduce - mapreduce

I'm trying to use the mapReduce method on a free MongoDB Atlas cloud plan, but I get this error:
at emitOne (events.js:96:13)
at TLSSocket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at TLSSocket.Readable.push (_stream_readable.js:134:10)
at TLSWrap.onread (net.js:547:20)
Unhandled rejection MongoError: CMD_NOT_ALLOWED: mapreduce
at Function.MongoError.create ...
Note: I can read and write without any problem.

From MongoDB Cloud Services Support:
MapReduce is not allowed in the free tier at this moment.
https://docs.atlas.mongodb.com/unsupported-commands/
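Since mapReduce is blocked on the free tier, a common workaround is to express the job as an aggregation pipeline instead, which the free tier does allow. Below is a minimal sketch of a "sum the values per key" job, shown with the MongoDB Java driver for illustration only; the connection string, database, collection, and field names are invented placeholders, and the equivalent aggregate() call is available in the Node.js driver the question appears to use.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import java.util.Arrays;
import org.bson.Document;

public class AggregateInsteadOfMapReduce {
  public static void main(String[] args) {
    // Placeholder connection string; replace with your Atlas cluster URI.
    try (MongoClient client =
        MongoClients.create("mongodb+srv://user:pass@cluster0.example.mongodb.net")) {
      MongoCollection<Document> orders =
          client.getDatabase("shop").getCollection("orders");

      // Equivalent of a map/reduce that emits (customerId, amount) and sums
      // the values per key: group by customerId and sum the amount field.
      orders.aggregate(Arrays.asList(
              Aggregates.group("$customerId", Accumulators.sum("total", "$amount"))))
          .forEach(doc -> System.out.println(doc.toJson()));
    }
  }
}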

Related

PERMISSION_DENIED for BigQuery Storage API on Apache Beam 2.39.0 and DataFlow runner

I have the following error for one of my Dataflow jobs:
2022-06-15T16:12:27.365182607Z Error message from worker: java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: BigQuery Storage API has not been used in project 770406736630 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/bigquerystorage.googleapis.com/overview?project=770406736630 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The same code works fine with Apache Beam 2.38.0. I tested multiple times, and this is not a temporary issue. The project number mentioned in the error (770406736630) is not mine.
Any idea why I get this error?
I had the same issue. I'm using Spring Cloud GCP and hadn't set the spring.cloud.gcp.project-id property, which I'm guessing makes the SDK or API use some default value.
I don't know how you've set up your environment, because you haven't specified, but look into how you can explicitly set the project ID. You can get it from the project selection dialog in the GCP Console.
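If you are constructing the Beam pipeline yourself rather than going through Spring, one way to set the project explicitly is on the pipeline's GCP options. A minimal sketch, assuming the placeholder project ID my-project-id (the class name is invented for illustration):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.gcp.options.GcpOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ExplicitProjectExample {
  public static void main(String[] args) {
    // Parse any command-line flags, then view the options as GCP options.
    GcpOptions options = PipelineOptionsFactory.fromArgs(args).as(GcpOptions.class);

    // Pin the project explicitly instead of relying on whatever default the
    // environment resolves, which is the symptom described above.
    options.setProject("my-project-id");

    Pipeline pipeline = Pipeline.create(options);
    // ... build the rest of the pipeline here ...
    pipeline.run().waitUntilFinish();
  }
}

Passing --project=my-project-id on the command line achieves the same thing, since the parsed arguments populate the same option.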
I just ran into this, and simply needed to re-authenticate with the gcloud CLI by running gcloud auth application-default login.
The error happens with the latest Apache Beam SDK (2.41.0) when BigQueryIO.Write.Method.STORAGE_WRITE_API is used and the destination does not specify the project name, for example dataset.table instead of project-id:dataset.table.
This is the solution that worked for me:
BigQueryIO.writeTableRows()
.to("project-id:dataset.table")
.withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
For some reason the Apache Beam implementation of the BigQuery Storage Write API does not handle this situation, even though it works fine for the FILE_LOADS method.
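To make the fix above concrete, here is a minimal, self-contained batch sketch that writes through the Storage Write API with a fully qualified destination. The project, dataset, table, and column names are placeholders, not taken from the question:

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Collections;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class StorageWriteApiExample {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    pipeline
        .apply("CreateRows",
            Create.of(new TableRow().set("name", "example"))
                .withCoder(TableRowJsonCoder.of()))
        .apply("WriteToBigQuery",
            BigQueryIO.writeTableRows()
                // Fully qualify the destination with the project id so the
                // Storage Write API does not resolve it against a default
                // (and possibly wrong) project.
                .to("my-project:my_dataset.my_table")
                .withMethod(BigQueryIO.Write.Method.STORAGE_WRITE_API)
                .withSchema(new TableSchema().setFields(Collections.singletonList(
                    new TableFieldSchema().setName("name").setType("STRING"))))
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    pipeline.run().waitUntilFinish();
  }
}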
You may also receive a slightly different error with the latest Beam SDK:
Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.RuntimeException:
java.lang.RuntimeException:
java.lang.RuntimeException: com.google.api.gax.rpc.PermissionDeniedException:
io.grpc.StatusRuntimeException:
PERMISSION_DENIED: Permission denied: Consumer 'project:null' has been suspended.

Apache Beam StatusRuntimeException on Dataflow pipeline

I am working on a Dataflow pipeline written in Python 2.7 using apache_beam==2.24.0. The pipeline consumes Pub/Sub messages from a subscription in batches using Beam's ReadFromPubSub, does some processing on the messages, and then persists the resulting data to two different BigQuery tables. There is a lot of data being consumed. The google-cloud-pubsub version is 1.7.0. After running the pipeline everything works fine, but after a few hours I start getting the exception:
org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: CANCELLED: call already cancelled
In the GCP Dataflow console the logs show this error, but the job itself seems to work fine: it consumes data from the subscription and writes it to BigQuery. What cancelled call is being referred to here, why am I getting this error, and how can I resolve it?
Full stacktrace:
Caused by: org.apache.beam.vendor.grpc.v1p26p0.io.grpc.StatusRuntimeException: CANCELLED: call already cancelled
org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Status.asRuntimeException(Status.java:524)
org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:341)
org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:98)
org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.flush(BeamFnDataSizeBasedBufferingOutboundObserver.java:100)
org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.shouldWait(RemoteGrpcPortWriteOperation.java:124)
org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.maybeWait(RemoteGrpcPortWriteOperation.java:167)
org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.process(RemoteGrpcPortWriteOperation.java:196)
org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:182)
org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:108)
org.apache.beam.runners.dataflow.worker.StreamingGroupAlsoByWindowReshuffleFn.processElement(StreamingGroupAlsoByWindowReshuffleFn.java:57)
org.apache.beam.runners.dataflow.worker.StreamingGroupAlsoByWindowReshuffleFn.processElement(StreamingGroupAlsoByWindowReshuffleFn.java:39)
org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:121)
org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
org.apache.beam.runners.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:134)
org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:123)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1365)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1100(StreamingDataflowWorker.java:154)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$7.run(StreamingDataflowWorker.java:1085)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
The client I am working for has the option of raising a support ticket with Google Cloud. The exact reply from Google Cloud Support:
The error you are seeing is rather harmless. Dataflow is a massively parallel data processing platform, and autoscaling events can move worker VMs around. When a VM is shut down, the gRPC channel is closed before the runner process, and the work item being processed is retried on another newly launched runner. These errors can be ignored.

Google Cloud Django App Deployment - Permission Issues

I'm following this tutorial, but I get stuck at the very end when I try to deploy the app on App Engine.
I get the following error message:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/responder-289707/regions/europe-west6/operations/a0e5f3f4-29a7-49d8-98b5-4a52b7bf04ca error [INTERNAL]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>2020-09-21T20:32:48.366Z12808.hy.0: Deployment Manager operation responder-289707/operation-1600720369987-5afd8c109adf5-6a4ad9a9-e71b9336 errors: [code: "RESOURCE_ERROR"
location: "/deployments/aef-default-20200921t223056/resources/aef-default-20200921t223056"
message: "{\"ResourceType\":\"compute.beta.regionAutoscaler\",\"ResourceErrorCode\":\"403\",\"ResourceErrorMessage\":{\"code\":403,\"message\":\"The caller does not have permission\",\"status\":\"PERMISSION_DENIED\",\"statusMessage\":\"Forbidden\",\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/responder-289707/regions/europe-west6/autoscalers\",\"httpMethod\":\"POST\"}}"
I don't really understand why, though. I have authenticated my gcloud and made sure my account has App Engine Admin/Deployment rights. Everything is in place.
Any hints would be much appreciated.
You apparently do not have the rights for autoscaling resources. This could be due to a free account, or because you need different rights (beyond App Engine Admin/Deployment) to deploy an autoscaling service.
Seeing as you're doing the tutorial, you could define a static resource amount; this is safer for your wallet as well.
app.yaml
# add this
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2

Is there a way to tell why a Cloud DataStore transaction failed?

I'm using the Ruby client, and only see
Google::Cloud::Datastore::TransactionError: Transaction failed to commit.
from /myapp/bundle/ruby/2.4.0/gems/google-cloud-datastore-1.4.4/lib/google/cloud/datastore/dataset.rb:555:in `rescue in transaction'
Is there a verbosity setting or something similar so I can see a more detailed trace?
There isn't a way to differentiate between that and some transient issue.
Yes, it's a flaw in the SDK. It eats your original exception. Read about it in my ticket thread:
https://github.com/googleapis/google-cloud-ruby/pull/2033

Ethereum Mist client crashing on startup

I have been using the Mist client as an Ethereum wallet recently.
It took a long time to download all the blocks and show my current balance, but eventually it fully synced with the blockchain.
Now when I try to open the client, it crashes on "Ethereum Node Starting Up".
Is there a way I can view error logs in the console?
EDIT: This is the error I get; reinstalling did not solve the problem:
TypeError: Cannot read property 'message' of null
at Socket.proc.stdout.on.data (/opt/Mist/resources/app.asar/modules/ethereumNode.js:428:46)
at emitOne (events.js:96:13)
at Socket.emit (events.js:191:7)
at readableAddChunk (_stream_readable.js:178:18)
at Socket.Readable.push (_stream_readable.js:136:10)
at Pipe.onread (net.js:560:20)