In the Neptune notebook, I add the vertex as follows:
%%gremlin
g.addV('labelC').property(T.id, '153')
Then I run the query with the -p v,oute,inv visualization hint to see a visual graph representation, as shown below. Reference: https://docs.aws.amazon.com/neptune/latest/userguide/notebooks-visualization.html
%%gremlin -p v,oute,inv
g.V().hasLabel('labelC')
However, I don't see the graph tab in the output.
The rendering hints are designed to tell the renderer how to draw the graph when a path step is present. In general, to get a nice rendering it is best to use a path step. In your example the result will just be a set of vertices, so the hint is actually stopping any rendering. In many cases the hints are no longer needed. There are a lot of examples in this notebook. An example of a simple rendering might be:
%%gremlin
g.V('44').outE().inV().path().by(elementMap())
The hints are not needed here, as the elementMap step gives the renderer all it needs to create a visual. Please check out the notebook linked above and let me know if you have any additional questions. That notebook explains all of the possible hints and ways to adjust the visuals in detail.
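For comparison, here is a sketch of a query where the hints do apply, since a path step is present (this assumes your vertex '153' has outgoing edges to traverse):
%%gremlin -p v,oute,inv
g.V('153').outE().inV().path()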
For a business process discovery task, I am trying to generate a process model using the pm4py Python library. Here's my sample code:
!pip install pm4py
import pm4py
log = pm4py.read_xes('/content/running-example.xes')
process_model, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(process_model, initial_marking, final_marking, format="svg")
However, I get output as:
parsing log, completed traces :: 100%
6/6 [00:00<00:00, 121.77it/s]
But no image as is expected from the website: https://pm4py.fit.fraunhofer.de/getting-started-page#discovery
Being relatively new to the world of Python, what I've learnt from other coders' suggestions here on SO is to always read the source code in depth in the case of open-source libraries.
Here are the pm4py visualization source links:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/visualization/process_tree/visualizer.py#L69
and specifically for my example:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/vis.py#L17
But I am not able to figure out how to manipulate it.
Can someone please point out the problem to me and help me generate the views? Also, if anyone has done business process discovery before, it would be really helpful if you could suggest any libraries or techniques for analysing event-log data.
To visualize the process models mined in PM4Py, make sure that you have Graphviz installed on your computer.
See https://pm4py.fit.fraunhofer.de/install for more information on this.
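As a sketch, once the Graphviz binaries are on the PATH the original snippet should render; if the notebook still swallows the view, pm4py can also write the visual to a file via save_vis_petri_net (the file names here are just the example's):

import pm4py

# read the event log and discover a Petri net with the inductive miner
log = pm4py.read_xes('/content/running-example.xes')
net, im, fm = pm4py.discover_petri_net_inductive(log)

# view_petri_net needs the Graphviz 'dot' binary installed system-wide,
# not just the pip package; save_vis_petri_net writes the render to disk instead
pm4py.view_petri_net(net, im, fm, format="svg")
pm4py.save_vis_petri_net(net, im, fm, 'running-example.png')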
Can I configure TensorBoard to start with no smoothing? The smoothing can hide overfitting. If there were a command line option, or if TensorBoard looked in a location like ~/.tensorboard/config.ini, that would be great.
As an example: [screenshot: loss curves with smoothing set to 0, where the overfitting is obvious].
But with the default smoothing it looks like: [screenshot: the same curves with smoothing applied].
You've got to look at the second one closely to see your overfitting.
Or, what would be even better, is to configure this per plot, maybe even when creating the SummaryWriter in the code.
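One possible workaround sketch, not an official option: recent TensorBoard versions persist the scalar dashboard's UI state in the URL fragment, so opening it with a preset _smoothingWeight should start the plots with smoothing off (treat the fragment name and behaviour as an assumption to verify against your version):

import webbrowser

# TensorBoard keeps the scalar dashboard's UI state in the URL fragment;
# presetting _smoothingWeight=0 should open the plots with smoothing disabled
webbrowser.open('http://localhost:6006/#scalars&_smoothingWeight=0')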
I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project I've been working on. Right now I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to have food detection capabilities. Unfortunately, the dataset comes with only the raw image files organized in subfolders, as well as a .h5 file (I'm not sure if I can just feed this file type directly into SageMaker?). From what I've gathered, neither of these is a suitable way to feed this dataset into SageMaker, and I was wondering if anyone could point me in the right direction on how to prepare the dataset properly, i.e. convert it to a .rec file or something else. Apologies if the scope of this question is very broad; I am still a beginner to all of this, I'm simply stuck and do not know how to proceed, so any help you might be able to provide would be fantastic. Thanks!
If you want to use the built-in algorithm for image classification, you can use either Image format or RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have images organized in folders.
RecordIO requires that you build files with the 'im2rec' tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
Once your data set is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
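For the RecordIO route, a sketch of the usual two-step im2rec flow, assuming im2rec.py from the MXNet tools directory is at hand (the paths are placeholders; the class subfolder names become the labels):

# step 1: build a .lst index from the class subfolders (labels come from folder names)
!python im2rec.py --list --recursive food101 /path/to/food-101/images
# step 2: pack the images into .rec/.idx files, resizing the short edge to 256 px
!python im2rec.py --resize 256 --quality 90 food101 /path/to/food-101/images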
I'm helping a professor working on a satellite image analysis project. We need 800 images stitched together to cover a square area, at 8000x8000 resolution each, from Google Maps. It is possible to download them one by one, but I believe there must be a way to write a script for batch processing.
Here I would like to ask how I can implement this with a shell or Python script, and how I could download images from a Google Maps URL.
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to work out a direct image download link from this.
Update:
Actually, I solved this problem; however, since it goes against Google's intentions, I will not post my way of doing it.
Have you tried the Google static maps API?
You get 25 000 free requests, but you're limited to 640x640, so you'll need to do ~160 requests at a higher zoom level.
I suggest downloading the images as shown here: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
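A minimal download sketch with urllib, using the URL above (the output filename is arbitrary; current Static Maps usage also expects an API key parameter, which is left out here):

import urllib.request

# fetch one 640x640 satellite tile from the Static Maps API
url = ('http://maps.googleapis.com/maps/api/staticmap'
       '?center=-33.071009,149.554911&zoom=15'
       '&size=640x640&sensor=false&maptype=satellite')
urllib.request.urlretrieve(url, 'tile_0_0.png')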
It's been a long time since I solved the problem; sorry for the delay.
I posted my code to GitHub here; please star or fork as you like :)
The idea is to load the Google Maps page in a virtual web browser at a very high resolution, then capture the page. The defect is that there will be Google watermarks scattered across each image; the solution is to oversample the resolution of each image, then use a stitching technique to stick them all together.
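For illustration, a minimal sketch of that approach with Selenium (Chrome plus a matching chromedriver, the window size, and the URL are all assumptions from the example above; the oversampling and stitching steps are left out):

from selenium import webdriver

# load the map page in a headless browser at high resolution, then capture it
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--window-size=4000,4000')

driver = webdriver.Chrome(options=options)
driver.get('https://maps.google.com.au/maps/myplaces'
           '?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15')
driver.save_screenshot('capture.png')
driver.quit()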
I am new to graph theory and graph concepts.
I am working on something that requires me to create a mesh (undirected graph) with n nodes.
Once the structure is created, I would be running various algorithms on it to find the shortest path from one node to another.
Now, for this I have decided to use the Boost Graph Library.
I read through the online documentation. The online documentation is good, but at the same time not sufficient.
I went through various examples online, and everywhere they import the graph from Graphviz.
If I am not wrong, we have to manually draw or write a dot program to get a graph in Graphviz and import it in .dot format (please correct me if I am wrong).
But is there a way in Boost where I could create a graph, instead of importing it from GraphViz?
And I would let the user decide the number of vertices in it, instead of pre-defining it.
Any help would be very much appreciated.
Thanks a ton in advance.
It's maybe not exactly what you need, but here is a response I gave before:
https://stackoverflow.com/a/3100220/202083
There you can see how to programmatically add nodes and edges.
I hope this is enough for you to get started.
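For illustration, a minimal sketch of building an undirected adjacency_list graph entirely in code, with no Graphviz import (the vertex count n and the edges here are arbitrary placeholders):

#include <iostream>
#include <boost/graph/adjacency_list.hpp>

int main() {
    // an undirected graph; with vecS storage, vertices are indexed 0..n-1
    typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                  boost::undirectedS> Graph;

    std::size_t n = 5;         // e.g. read from the user at run time
    Graph g(n);                // create n vertices up front

    boost::add_edge(0, 1, g);  // add edges programmatically, one by one
    boost::add_edge(1, 2, g);
    boost::add_vertex(g);      // more vertices can also be added later

    std::cout << boost::num_vertices(g) << " vertices, "
              << boost::num_edges(g) << " edges\n";
}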