Why doesn't pm4py.view generate any image? - data-mining

For a business process discovery task, I am trying to generate a process model using the pm4py Python library. Here's a code sample:
!pip install pm4py
import pm4py
log = pm4py.read_xes('/content/running-example.xes')
process_model, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(process_model, initial_marking, final_marking, format="svg")
However, the only output I get is:
parsing log, completed traces :: 100%
6/6 [00:00<00:00, 121.77it/s]
But no image appears, unlike what is shown on the website: https://pm4py.fit.fraunhofer.de/getting-started-page#discovery
Being relatively new to the world of Python, what I have learnt from other coders' suggestions here on SO is to always read the source code in depth in the case of open-source libraries.
Here are the pm4py visualization links:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/visualization/process_tree/visualizer.py#L69
and specifically for my example:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/vis.py#L17
But I am not able to figure out how to work with it.
Can someone please point out the problem to me and help me generate the views? Also, if anyone has done business process discovery before, could you suggest libraries or techniques for analysing event-log data? That would be really helpful.

To visualize the process models mined in PM4Py, make sure that you have graphviz installed on your computer.
See https://pm4py.fit.fraunhofer.de/install for more information on this.
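If graphviz is installed but no image shows up (which can happen in headless environments such as Colab), one workaround is to render to a file instead of an interactive viewer. A minimal sketch using pm4py's save function (the output path is arbitrary):
import pm4py
log = pm4py.read_xes('/content/running-example.xes')
net, im, fm = pm4py.discover_petri_net_inductive(log)
# Write the rendering to disk instead of opening a viewer window;
# the PNG can then be displayed or downloaded from the notebook.
pm4py.save_vis_petri_net(net, im, fm, 'petri_net.png')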

Related

Python Sub-Process Coverage

Situation:
I'm attempting to get coverage reports for a project that uses both C++ and Python. I'm using LCOV/GCOV for the C++, and attempting to use Coverage.py for the Python. The only issue is that most of the Python code being used is simply utility functions, called one function at a time: no initialization, no real life-cycle or exit. So there is no real way to use the API to start/stop/save, or to use the coverage command line to measure.
With this, I thought the easiest way to accomplish it would be the sitecustomize.py method as outlined here. I have gotten that to work, and it measures all configured Python code as expected. Now I'm looking at how to accomplish this with compiled Python code (.pyc).
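For reference, my sitecustomize.py boils down to the documented Coverage.py startup pattern, with the COVERAGE_PROCESS_START environment variable pointing at my coverage config file:
# sitecustomize.py -- on the target interpreter's sys.path, so it runs at
# the start of every Python process and begins measurement automatically.
import coverage
coverage.process_startup()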
I can get it to work if I keep the source (.py) and compiled (.pyc) files in the same directory when running and then reporting. However, I'm looking for a way to RUN the files and generate the measurement data, then at a later time point to the actual source files and run the actual reports. Ideally I wouldn't need the source (.py) files at all, but I haven't found a way to accomplish this.
Objective:
In the end I want to be able to compile the Python files (.pyc), install them on the target, and run coverage as stated above. That will generate coverage data files; I then pull those files to my host machine, which houses the source (.py), and do the actual coverage reporting there.
Is this possible currently?
[Edit] Thanks to Ned's advice, I looked into the [paths] usage, and it worked exactly how I needed it to.
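For anyone who finds this later: the relevant piece is a [paths] section in the coverage config file, which maps the install location recorded on the target back to the source tree on the host when the data files are combined. A sketch with hypothetical directories, the first entry being the canonical (host) location:
[paths]
source =
    /home/me/project/src
    /opt/target/install/lib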

Weka - Measuring testing time

I'm using Weka 3.6.8 to carry out some machine learning and I want to find the 'time taken to test model on training/testing data'. When I test a predictive model on evaluation data, this parameter seems to be missing. Has this feature been removed from Weka, or is it just a setting I'm missing? All I seem to be able to find is the time taken to build the actual predictive model. (I've also checked the Weka Manual but can't find anything.)
Thanks in advance
That feature was added in 3.7.7; you need to upgrade. You should be able to get this data by running the test on the command line with the -T parameter.
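For example (the classifier and file names here are just placeholders; any Weka classifier takes the same flags):
java weka.classifiers.trees.J48 -t training.arff -T testing.arff
Here -t names the training file and -T the separate test file; the evaluation printed for the test run should then include the timing figure.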

Building a Native Client app from nothing

What does it take to build a Native Client app from scratch? I have looked into the documentation and fiddled with several apps; however, I am now moving on to making my own app, and I don't see anything related to creating the foundation of a Native Client app.
Depending on the version of the SDK you want to use, you have a couple of options.
Pepper 16 and 17: use init_project.py or use an example as a starting point
If you are using pepper_16 or pepper_17, you will find a Python script, init_project.py, in the project_templates directory of the SDK. It will set up a complete set of files (.cc, .html, .nmf) with comments indicating where you need to add code. Run python init_project.py -h to see what options it accepts. Additional documentation can be found at https://developers.google.com/native-client/pepper17/devguide/tutorial.
Pepper 18 and newer: use an example as the starting point
If you are using pepper_18 or newer, init_project.py is no longer included. Instead you can copy a very small example from the examples directory (e.g., hello_world_glibc or hello_world_newlib for C or hello_world_interactive for C++) and use that as a starting point.
Writing completely from scratch
If you want to write your app completely from scratch, first ensure that the SDK is working by compiling and running a few of the examples. Then a good next step is to look at the classes pp::Module and pp::Instance, which your app will need to implement.
On the HTML side, write a simple page with the EMBED element for the Native Client module. Then add the JavaScript event handlers for loadstart, progress, error, abort, load, loadend, and message and have the handlers write the event data to, e.g., the JavaScript console, so that it's possible to tell what went wrong if the Native Client module didn't load. The load_progress example shows how to do this.
Next, create the manifest file (.nmf). From pepper_18 onwards you can use the generate_nmf.py script found in the tools/ directory for this. If you want to write it from scratch, the SDK examples cover both newlib and glibc (the two Standard C libraries currently supported); see hello_world_newlib/ and hello_world_glibc/, respectively.
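For orientation, a minimal newlib-style manifest might look like the sketch below (the .nexe file names are hypothetical; glibc manifests need additional entries for shared libraries, so check the SDK examples for the authoritative format):
{
  "program": {
    "x86-32": {"url": "hello_world_x86_32.nexe"},
    "x86-64": {"url": "hello_world_x86_64.nexe"}
  }
}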
If you haven't used a gcc-family compiler before, it is also a good idea to look at the Makefile for some of the examples to see what compiler and linker flags to use. Compiling both for 32-bit and 64-bit right from the beginning is recommended.
The easiest way is to follow the quick-start doc at https://developers.google.com/native-client/pepper18/quick-start, in particular steps 5-7 of the tutorial (https://developers.google.com/native-client/pepper18/devguide/tutorial), which seem to be what you are asking about.

Doing sophisticated image analysis with python / django

I am working on a django project that analyzes images that contain text and (1) infers if the image needs to be rotated and (2) where text areas are.
I am currently using PIL to do some simpler processing of these images, but I am not quite sure how I can use PIL or other libraries to perform both tasks. I was wondering if anyone has done this before and if there are libraries/APIs available to help with the development.
OpenCV is probably the most popular open source image processing library. It's C/C++, but there are Python bindings:
http://opencv.willowgarage.com/wiki/
and the Python docs
http://opencv.willowgarage.com/documentation/python/index.html
I've never done OCR with it, but I'm sure it's capable.
I agree with @pastylegs that OpenCV is your best initial bet. If you need stuff specific to OCR, you could also look at ocropus.
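To make task (1) concrete, here is a minimal sketch of skew estimation with OpenCV's Python bindings (the file name is hypothetical, and the angle convention of minAreaRect differs between OpenCV versions, so treat the sign handling as approximate):
import cv2
import numpy as np
# Load the scan as grayscale and threshold so text pixels become foreground.
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Fit the tightest rotated rectangle around all foreground pixels;
# its angle approximates the skew of the text.
coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)
angle = cv2.minAreaRect(coords)[-1]
if angle < -45:  # older OpenCV versions report angles in [-90, 0)
    angle = -(90 + angle)
else:
    angle = -angle
# Rotate by the estimated angle to deskew the image.
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)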

Are there any Autopano Server alternative or automated panorama stitching/video panorama open source?

I haven't found any server-side tool for making panoramas by stitching images or a video, and I would like an open source alternative but haven't found any. I just don't want to go through the hassle of developing all this on my own, and paid software is usually closed source and not very flexible.
I've seen some nifty panorama-from-video software on the iPhone and thought it would be easy to find on *nix systems, but no luck.
Any help will be appreciated. Thanks in advance.
The only option I know of is to use panostart (which is part of hugin) from whatever server language you are using.
See here for more information and other command line tools that do parts of the process more specifically.
panostart only works with images, so if you want it to work with videos you would first have to process the video with something like ffmpeg -i z.mov -f image2 export2\%d.png to generate images to pass to panostart.
The other alternative which requires more effort is to write something that uses libpano13 and libffmpeg directly.
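As a rough sketch of that pipeline driven from Python (file names are hypothetical; check panostart --help for the exact options of your hugin version):
import glob
import os
import subprocess
# Split the video into numbered frames with ffmpeg; zero-padded names
# so that lexical order matches frame order.
os.makedirs("frames", exist_ok=True)
subprocess.check_call(["ffmpeg", "-i", "input.mov", "-f", "image2", "frames/%04d.png"])
# Hand the extracted frames to hugin's panostart to build a stitching project.
subprocess.check_call(["panostart"] + sorted(glob.glob("frames/*.png")))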
You can have a look at VideoStitch; a command-line tool is provided to automate the stitching.