Configure TensorBoard to start with no smoothing -- for certain plots

Can I configure TensorBoard to start with no smoothing? The smoothing can hide overfitting. A command line option, or a config file that TensorBoard reads from a location like ~/.tensorboard/config.ini, would be great.
As an example, here is a plot with no smoothing, where the overfitting is easy to see. But with the default smoothing it looks like the second plot, and you've got to look at that one closely to see the overfitting.
What would be even better is to configure this per plot, maybe even when creating the SummaryWriter in the code.
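For context, this is the kind of logging set-up I mean -- a minimal sketch assuming PyTorch's SummaryWriter (the run directory, tags, and synthetic loss values are just placeholders). The smoothing itself is applied in the TensorBoard UI, not here, which is why there is currently nothing to configure per plot:

import math
from torch.utils.tensorboard import SummaryWriter

# Log train and validation loss; the gap between them is exactly what
# heavy smoothing can hide.
writer = SummaryWriter(log_dir="runs/example")  # placeholder run directory
for step in range(1000):
    train_loss = math.exp(-step / 200)             # keeps improving
    val_loss = math.exp(-step / 200) + step / 5000 # slowly overfits
    writer.add_scalar("loss/train", train_loss, step)
    writer.add_scalar("loss/val", val_loss, step)
writer.close()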


Which watchface uses CLKComplicationTemplateGraphicExtraLargeCircularView?

Is there documentation or a web page showing which Apple Watch faces use which complication types?
I am trying to find out which watch face uses CLKComplicationTemplateGraphicExtraLargeCircularView. This is a rare SwiftUI view, and I have been trying to find a watch face on which to test it.
According to the documentation (https://developer.apple.com/documentation/clockkit/clkcomplicationtemplategraphicextralargecircularview), the watch face looks like this (but which face is this?):
I found the answer: it is the X-Large watch face.
The main reason I couldn't find it initially is that without a complication this watch face does not look like this at all (it is just two rows of big text). Only once a complication is added does it look like the image above.

Neptune Jupyter notebook is not showing graph tab

In the Neptune notebook, I add the vertex as follows:
%%gremlin
g.addV('labelC').property(T.id, '153')
Then I use the gremlin -p v,oute,inv option to get a visual graph representation, as below. Reference: https://docs.aws.amazon.com/neptune/latest/userguide/notebooks-visualization.html
%%gremlin -p v,oute,inv
g.V().hasLabel('labelC')
However, I don't see the graph tab in the output.
The rendering hints are designed to help the renderer know how to draw the graph when a path step is present. In general, to get a nice rendering it is best to use a path step. In your example the result is just a set of vertices, so the hint is actually preventing any rendering. In many cases the hints are no longer needed. There are a lot of examples in this notebook. An example of a simple rendering might be:
%%gremlin
g.V('44').outE().inV().path().by(elementMap())
The hints are not needed here, as the elementMap step gives the renderer all it needs to create a visual. Please check out the notebook linked above and let me know if you have any additional questions. That notebook explains all of the possible hints and ways to adjust the visuals in detail.

osgexport for Blender: Imported meshes impact scene (possibly the lighting?)

Blender Version 2.79
OSG Version: 3.4.0-9
Operating System: Fedora
I have been using Blender's export utility to export .obj files and then osgconv to convert them to .osg files. The files are then imported and rendered into a scene that looks like this:
[Image of the working scene before using the export tool]
Today I installed osgexport by Cedric Pinson (GitHub page: https://github.com/cedricpinson/osgexport) to directly export from Blender to .osgt files. I get the following result when I import those files and render them:
[Image of the scene where everything goes dark and the lighting is weird]
Additional details: the code is set to follow the human character; the rest of the scene is static. When I use the old human model I get the working result, but the whole purpose of using the converter is to be able to export its animations.
Any ideas? I see this effect and I don't really know where to start. The only difference is the output file from the converter; everything else is the same. Also, if there is a newer/better way to export Blender files into files that OSG can read, I'm open to any and all suggestions.
Thank you in advance,
For what it's worth, the OSG forum/mailing list is usually pretty good about answering questions, and I believe I've seen Cedric's name there regularly, along with all the main contributors: http://forum.openscenegraph.org/
New users usually have to have their questions go through moderation, to cut down on spam, so just have some patience.
On this specific question, I have three suggestions:
1. Convert both files to osgt and run a diff. Ignore any numerical differences; look for things like node arrangement, material types, etc.
2. Try opening your exported model in osgviewer, just to see how the default settings display it. You can also easily play with things like backfaces, lighting settings, and clipping planes before dropping it into your application; press h to see all the run-time options.
3. Instantiate your light with all three types of lighting enabled. In particular, I have found that some models depend heavily on specular lighting, but some of the examples don't turn it on. Of course, if you use the full-white values shown here you may oversaturate, but this is just an example:
#include <osg/Light>

osg::Light *light = new osg::Light;
light->setAmbient(osg::Vec4(1.0, 1.0, 1.0, 1.0));
light->setDiffuse(osg::Vec4(1.0, 1.0, 1.0, 1.0));
light->setSpecular(osg::Vec4(1.0, 1.0, 1.0, 1.0)); // some examples don't enable this one

Where can I find a ready-made image pack for training an OpenCV face recognition system?

So...
Where can I find a ready-made image pack for training an OpenCV face recognition system?
Can anybody help?
Have a look here.
The AT&T faces database was probably used a lot (if you look at the docs).
Once you have downloaded a set of images, you'll want to run the little Python script to generate the CSV file needed for training (a rough sketch of such a script is at the end of this answer).
If you opt for the Yale database, you'll have to convert the images to png or pgm first (OpenCV can't handle gifs).
But honestly, in the end you want to use a database that consists entirely of the faces you want to recognize [that is, your own database].
Unlike most ML algorithms, it does not need explicit 'negative' images [people other than those you want to recognize]; those only add noise and degrade the actual recognition.
The only situation where you would want negatives is when there is only one person to recognize; you'd need some other faces there to increase 'contrast'.
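As a rough sketch (not the exact script shipped with OpenCV), the CSV generator could look something like this, assuming one subdirectory per person and the usual path;label line format that the OpenCV face recognition examples expect:

import os
import sys

# Assumed layout: base_dir/person_a/img1.pgm, base_dir/person_b/img1.pgm, ...
base_dir = sys.argv[1] if len(sys.argv) > 1 else "."
separator = ";"  # format used by most OpenCV face recognition examples

label = 0
for person in sorted(os.listdir(base_dir)):
    person_dir = os.path.join(base_dir, person)
    if not os.path.isdir(person_dir):
        continue
    for filename in sorted(os.listdir(person_dir)):
        path = os.path.abspath(os.path.join(person_dir, filename))
        print(path + separator + str(label))
    label += 1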

How to batch-download a large number of high-resolution satellite images from Google Maps directly?

I'm helping a professor on a satellite image analysis project. We need 800 images from Google Maps, each at 8000x8000 resolution, stitched together to cover a square area. It is possible to download them one by one, but I believe there must be a way to write a script for batch processing.
How can I implement this with a shell or Python script, and how can I download an image from a Google Maps URL?
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to work out a direct image download link from this.
Update:
I actually solved this problem, but since this is not something Google intends to allow, I will not post the method here.
Have you tried the Google Static Maps API?
You get 25,000 free requests, but you're limited to 640x640, so you'll need roughly (8000/640)^2, i.e. about 160 requests at a higher zoom level, to cover each 8000x8000 image.
I suggest downloading the images as described here: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
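As a rough, untested sketch of that approach (the tile grid size and the latitude/longitude step are placeholders you would have to compute from the zoom level and latitude, and current versions of the Static Maps API also require an API key to be appended to the URL):

import urllib.request

# Placeholder 3x3 grid around the example coordinates from the question.
base = "http://maps.googleapis.com/maps/api/staticmap"
center_lat, center_lng = -33.071009, 149.554911
zoom, size = 15, "640x640"
step = 0.01  # placeholder spacing in degrees between tile centres

for i in range(-1, 2):
    for j in range(-1, 2):
        lat = center_lat + i * step
        lng = center_lng + j * step
        url = (f"{base}?center={lat},{lng}&zoom={zoom}&size={size}"
               f"&maptype=satellite&sensor=false")
        urllib.request.urlretrieve(url, f"tile_{i + 1}_{j + 1}.png")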
It's been a long time since I solved the problem, sorry for the delay.
I posted my code to GitHub here; please star or fork it as you like :)
The idea is to use a virtual web browser at a very high resolution to load the Google Maps page and then capture the page. The drawback is that there will be Google watermarks scattered across each image; the workaround is to oversample, i.e. capture each tile at a higher resolution than needed, and then use a stitching technique to join them all together.
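For illustration only, here is a minimal sketch of the virtual-browser idea using Selenium with headless Chrome (this is not the code from the repository above; the window size, wait time, and URL are placeholders taken from the question):

import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Assumes chromedriver is installed and on PATH.
options = Options()
options.add_argument("--headless")
options.add_argument("--window-size=4000,4000")  # oversample, crop/stitch later

driver = webdriver.Chrome(options=options)
driver.get("https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911"
           "&spn=0.027691,0.066047&ctz=-660&t=k&z=15")
time.sleep(10)  # crude wait for the map tiles to finish rendering
driver.save_screenshot("capture.png")
driver.quit()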