I have set up the unit testing framework for MarkLogic using the URL below:
https://github.com/marklogic-community/marklogic-unit-test
I am able to deploy my project using mlDeploy and load my test modules using mlUnitTestLoadModules, but when running the test cases via gradle mlUnitTest I get the error below:
Task ':mlUnitTest' is not up-to-date because:
Task has not declared any outputs.
Releasing connection
:mlUnitTest (Thread[Daemon worker Thread 16,5,main]) completed. Took 0.064 secs
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':mlUnitTest'.
> java.lang.NullPointerException (no error message)
* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to
get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Any suggestions?
We had this issue come up recently when migrating to marklogic-unit-test-1.0.0. There is a workaround, thanks to one of the MarkLogic gurus. If this is still an issue, let us know.
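The workaround itself isn't spelled out above, so here is one thing worth checking (an assumption on our part, not necessarily the fix referred to): the mlUnitTest task has to reach the app server that hosts the marklogic-unit-test endpoint, which ml-gradle configures through test-server properties. A minimal gradle.properties sketch, with illustrative port values:

```properties
# gradle.properties (sketch; hosts and ports are illustrative)
mlHost=localhost
mlRestPort=8040
# App server that exposes the marklogic-unit-test REST endpoint:
mlTestRestPort=8041
```

If mlUnitTest has no test port to connect to, a NullPointerException with no message is one plausible symptom.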
I'm trying to automate provisioning of a streaming job using Cloud Build. For the POC I tried https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/dataflow/flex-templates/streaming_beam
It worked as expected when I manually ran the commands.
When I add the commands to the cloudbuild.yaml file, the build is created successfully, but the Dataflow job fails each time with the error below:
Error occurred in the launcher container: Template launch failed. See console logs
This is the only error log I get. I tried adding extra permissions to the Cloud Build service account, but that didn't help either. Since there's no other info in the log, I find it hard to debug.
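For comparison, a minimal cloudbuild.yaml step that launches a flex template might look like the sketch below. The bucket path, job name, region, and parameters are placeholders, not values from the original setup; the point is that Cloud Build runs the same gcloud command as its own service account:

```yaml
steps:
  # Launch the flex template using the Cloud SDK builder image.
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - dataflow
      - flex-template
      - run
      - streaming-beam-$BUILD_ID   # placeholder job name
      - --template-file-gcs-location=gs://my-bucket/templates/streaming_beam.json
      - --region=us-central1
      - --parameters=input_topic=projects/my-project/topics/my-topic  # placeholder
```

Since "Template launch failed" surfaces from the launcher container, the container's own logs (under the Dataflow job's worker logs) usually carry the underlying stack trace.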
In AWS Device Farm, I created a new run. I chose native application. I uploaded my APK. I chose Calabash as the test type. I got this error message:
Tests skipped due to test package parsing error. Please check Parsing result for more details.
I downloaded the Parsing result. Here's what it said:
Failed to run cucumber dry-run command. See the information below for more details.
Here are the contents for the Parsing result:
'cucumber --dry-run --format json --out /tmp/scratchvxnAeX.scratch/tmpF6f5Xx' failed.
Could not find proper version of cucumber (2.99.0) in any of the sources
Run `bundle install` to install missing gems.
Solution
I have some .rb page objects. I added require 'calabash-android' to the top of those files. Then I made a new run and ran it. It worked.
How I got to the solution
Through Google I came across CALABASH_TEST_PACKAGE_DRY_RUN_FAILED here.
Running this command failed: cucumber-ios --dry-run --format json features
I figured that was for ios. So I tried this: bundle exec calabash-android run .\app-releaseStaging.apk --dry-run. I got this error:
uninitialized constant Calabash::ABase (NameError)
I wasn't getting that error when running locally.
According to this:
-d, --dry-run Invokes formatters without executing the steps. This also omits the loading of your support/env.rb file if it exists.
I had require 'calabash-android' inside env.rb. So I moved it to my page objects. Then it worked.
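In other words, after the move each page object file loads the gem itself, so the constant resolves even when env.rb is skipped. A sketch (requires the calabash-android gem; the class name and query are illustrative, and Calabash::ABase is the base class calabash-android provides for page objects):

```ruby
# pages/login_page.rb (sketch)
# `cucumber --dry-run` skips support/env.rb, so the require must live here,
# or Calabash::ABase is undefined during the dry run.
require 'calabash-android'

class LoginPage < Calabash::ABase
  # trait identifies the page; the query string below is illustrative
  def trait
    "* id:'login_button'"
  end
end
```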
You may find the aws-device-farm-calabash-tests-for-sample-app useful.
I am running a batch job on dataflow, querying from BigQuery. When I use the DirectRunner, everything works, and the results are written to a new BigQuery table. Things seem to break when I change to DataflowRunner.
The logs show that 30 worker instances spin up successfully. The graph diagram in the web UI shows the job has started. The first 3 steps show "Running"; the rest show "not started". None of the steps show any records transformed (i.e., the output collections all show '-'). The logs show many messages like this, which may be the issue:
skipping: failed to "StartContainer" for "python" with CrashLoopBackOff: "Back-off 10s restarting failed container=python pod=......
I took a step back and just ran the minimal wordcount example, and that completed successfully. So all the necessary APIs seem to be enabled for Dataflow runner. I'm just trying to get a sense of what is causing my Dataflow job to hang.
I am executing the job like this:
python2.7 script.py --runner DataflowRunner --project projectname --requirements_file requirements.txt --staging_location gs://my-store/staging --temp_location gs://my-store/temp
I'm not sure if my solution was the cause of the error pasted above, but fixing dependency problems (which were not showing up as errors in the log at all!) did solve the hanging Dataflow processes.
So if you have a hanging process, make sure your workers have all their necessary dependencies. You can provide them through the --requirements_file argument, or through a custom setup.py script.
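A minimal setup.py sketch for the second option; the package name and dependency list are placeholders for whatever your pipeline actually imports:

```
# setup.py (sketch): ships the listed dependencies to Dataflow workers
# when the pipeline is launched with the --setup_file option.
import setuptools

setuptools.setup(
    name="my-dataflow-pipeline",  # placeholder name
    version="0.0.1",
    install_requires=[
        # list every package your DoFns import, e.g.:
        "requests>=2.20",
    ],
    packages=setuptools.find_packages(),
)
```

The CrashLoopBackOff message above is consistent with this: the Python worker container restarts repeatedly when an import fails at startup, without surfacing the ImportError in the job log.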
Thanks to the help I received in this post, the pipeline appears to be operating, albeit VERY SLOWLY.
Deploying a Cloud Function with gcloud failed with the message below:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=13,
message=Failure in the execution environment
Couldn't find much information about the error in the Cloud Function logs.
Running the deploy with --verbosity debug traces the functions called in the Cloud SDK directory and ends by displaying the error below:
FunctionsError: OperationError: code=13, message=Failure in the
execution environment ERROR: (gcloud.beta.functions.deploy)
OperationError: code=13, message=Failure in the execution environment
Per this Google Public Issue Tracker, the error is due to a very large package.json file hitting an internal restriction. Possible workarounds:
1. Install your dependencies locally (via npm install) and deploy with the --include-ignored-files flag.
2. Reduce your package.json to fewer than 4000 characters.
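Since the restriction is on package.json size, a quick way to see whether you are near it (the 4000-character figure comes from the tracker discussion above; the file is assumed to be in the current directory):

```shell
# Print the character count of package.json; 0 if the file is absent.
size=$(cat package.json 2>/dev/null | wc -c)
echo "package.json is ${size} characters"
```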
This is an ongoing issue and you can follow the discussion on this thread for related updates.
The status of Firebase can be found at:
https://status.firebase.google.com/
Just sharing our experience here, with the hope it helps someone in the future.
In our case we got a similar error:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=13, message=Error setting up the execution environment for your function. Please try deploying again after a few minutes.
This was caused by importing package.json in the code to read out the version, i.e.:
import { version } from '../package.json';
Transpilation and local invocation of the generated JS code worked as expected with the above line in our code base. After we removed the import, we were able to deploy the function again.
Some of the GCP errors are broad.
The solution for me: my go.mod file had go 1.14, while GCP only supports go 1.11 or go 1.13.
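For reference, the corresponding go.mod fix is a one-line change of the go directive; the module path below is a placeholder:

```
module example.com/my-function // placeholder module path

go 1.13 // a version the Cloud Functions Go runtime supports
```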
In my case, it was a Python environment, and the culprit was the dependency yarl==1.5.1.
As there are no logs, I couldn't tell exactly why yarl was causing the breakage, but downgrading to yarl==1.3.0 fixed the issue for me.
Error: FAILURE: Build failed with an exception.
What went wrong:
Task 'stacktrace' not found in root project 'GatePass14mar17'.
Try:
Run gradle tasks to get a list of available tasks. Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
I am getting this error while running the application.
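The message means Gradle parsed stacktrace as a task name, which happens when the leading dashes are dropped. --stacktrace is an option and belongs after a real task; a sketch of the corrected invocation (the task name is illustrative, use whichever task you were running):

```
gradle assembleDebug --stacktrace
```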