Demo Code for Detectron2 Not Detecting Object Instances

I am trying to get the demo code for Detectron2 working locally on my laptop. Everything appears to run correctly, but no object instances are detected, even when I use the image from the Colab demo.
I am running on a non-GPU Mac. I followed the installation instructions to install Detectron2. I have the following module versions on my machine:
detectron2 @ git+https://github.com/facebookresearch/detectron2.git@ea3b3f22bf1de58008599794f149149ff65d3780
opencv-python==4.5.3.56
torch==1.9.0
torchvision==0.10.0
I copied demo.py, predictor.py, mask_rcnn_R_101_FPN_3x.yaml, and Base-RCNN-FPN.yaml from the Detectron2 GitHub repository. I then ran the "inference demo with pretrained model" command. The specific command was:
python demo.py --input 000000439715.jpeg --output output --config-file mask_rcnn_R_101_FPN_3x.yaml --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu
000000439715.jpeg is the sample image of the man on horseback from the Colab notebook demo. The last line of the output is
000000439715.jpeg: detected 0 instances in 6.77s
The image in the output directory has no annotation on it.
The logging output looks okay to me. The only thing that might indicate a problem is this warning:
[08/28 12:35:18 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='mask_rcnn_R_101_FPN_3x.yaml', input=['000000439715.jpeg'], opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cpu'], output='output', video_input=None, webcam=False)
[08/28 12:35:18 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ...
[08/28 12:35:18 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
WARNING [08/28 12:35:19 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
I'm not sure what to do about it though.
I tried not specifying the model weights. I also tried setting the confidence threshold to zero. I got the same results.
Am I doing something wrong? What are the next debugging steps?

I ran into the same problem, with the same warning:
WARNING [xxxxxxxxx fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
and that warning made my results very bad. In the end I found that I was using the wrong weight file.
Hope this helps.
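For what it's worth, the command in the question pairs the mask_rcnn_R_101_FPN_3x.yaml config with the R_50 checkpoint (model_final_f10217.pkl), so a config/weights mismatch of exactly this kind is worth ruling out first. Below is a minimal sketch, assuming detectron2 is importable and the sample image is in the working directory, that loads a config together with its matching weights straight from the model zoo so the two cannot get out of sync:

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Load the R_101 config and the checkpoint that was trained with it.
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"                      # no GPU on this laptop
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # same threshold as the demo

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("000000439715.jpeg"))
print(len(outputs["instances"]), "instances detected")

If this detects instances while the demo.py invocation does not, the locally copied YAML files or the weight/config pairing are the likely culprit.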

Related

Problem using magenta to generate song: SyntaxError: (unicode error) 'unicodeescape'

I want to generate music with Magenta and a neural network model for a project.
I found this simple example and wanted to try it first to understand how it works: https://www.twilio.com/blog/training-a-neural-network-on-midi-music-data-with-magenta-and-python
Apparently I have to convert my initial data (MIDI files) into NoteSequences.
Here is what I have:
convert_dir_to_note_sequences \
--input_dir == 'C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA' \
--output_file = tmp/notesequences.tfrecord \
--recursive
and here is the error I get:
File "C:\Users\mista\AppData\Local\Temp/ipykernel_28328/3757950315.py", line 3
--output_file = tmp/notesequences.tfrecord
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
I saw some people saying that you could put an 'r' before the path string to solve this, but I've tried many ways and I'm still stuck.
I am following the same tutorial and, after a couple of hours of bashing my head against the keyboard, I managed to run the neural net properly. It seems like the problem is a lot of conflicting dependencies and deprecations, so you may have to play around with the version of Python you're using once you set everything else up properly (I used Python 3.7 and did pip install magenta).
I used PowerShell (right click on the Windows home button --> run PowerShell as admin). You'll want to set up a virtual environment in your current working directory, as the tutorial advises. And now we get to your problem in particular: make sure you're running the command in the shell, all on one line, like this:
convert_dir_to_note_sequences --input_dir='C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA' --output_file=tmp/notesequences.tfrecord --recursive
That should start converting all the midi files to NoteSequences objects. If you have any more trouble, please follow up and I'll see what I can do to help.
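If you would rather stay inside a Python session instead of a shell, a minimal sketch (using the paths from the question; adjust them to your machine) is to invoke the tool through subprocess and keep the Windows path in a raw string, so the backslashes are not treated as escape sequences:

import subprocess

# Raw string avoids the "unicodeescape" SyntaxError that a bare "C:\Users\..." literal triggers.
input_dir = r"C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA"
output_file = "tmp/notesequences.tfrecord"

# Call the Magenta CLI tool with one flag per argument, all on one logical command line.
subprocess.run(
    [
        "convert_dir_to_note_sequences",
        f"--input_dir={input_dir}",
        f"--output_file={output_file}",
        "--recursive",
    ],
    check=True,
)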

Chronic ripgrep / vim Plugin Error on Load: "-complete Used Without -nargs"?

I recently added ripgrep to my list of vim plugins and, immediately after installation, I began receiving this error message whenever I loaded up vim:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
line 149: E1208: -complete used without -nargs
Press ENTER or type command to continue
Opening the offending file and reviewing lines 148-149 reveals:
148 command! -nargs=* -complete=file Rg :call s:Rg(<q-args>)
149 command! -complete=file RgRoot :call s:RgShowRoot()
I am well & truly out of my depth here, especially considering that this error was generated by simply installing the plugin; I've made 0 changes to the underlying file (vim-ripgrep.vim).
Has anyone encountered a similarly chronic error after installing ripgrep and, if so, how did you resolve it?
Congratulations, you have found a bug in a FOSS program. The next step is to either notify the maintainer via their issue tracker or, if you know how to fix it, submit a patch.
Case in point: the author assigns a completion method, -complete=file, but custom commands like :RgRoot don't accept arguments by default, so the command makes no sense as-is: you can't complete arguments if you can't pass arguments.
It only needs a -nargs=*, like its upstairs neighbour :Rg, to work properly, and the error message is pretty clear about it:
line 149: E1208: -complete used without -nargs
See :help -complete, :help -nargs, and more generally, :help user-commands.
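Concretely, if you are willing to patch the plugin file locally until a proper fix lands, line 149 of vim-ripgrep.vim could read either of the following (the first mirrors :Rg; the second simply drops the completion method, since :RgRoot takes no arguments anyway):

command! -nargs=* -complete=file RgRoot :call s:RgShowRoot()
command! RgRoot :call s:RgShowRoot()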
As the other answer stated, it is a bug in this plugin. There is a currently open pull request to fix it: https://github.com/jremmen/vim-ripgrep/pull/58. Unfortunately, the repository is unmaintained, so the fix is unlikely to be merged any time soon. This active forks page may help you identify a new maintainer.
Until there is a new maintainer for vim-ripgrep, I suggest checking out that branch in your ~/.vim/plugged/vim-ripgrep directory and reopening vim.
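Assuming the plugin was cloned from GitHub by your plugin manager (so origin points at jremmen/vim-ripgrep), one way to try the fix from that pull request locally is:

cd ~/.vim/plugged/vim-ripgrep
# Fetch pull request #58 (linked above) into an arbitrarily named local branch, then switch to it.
git fetch origin pull/58/head:rgroot-fix
git checkout rgroot-fix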
I ran into functionally the same error with my Vim plugins when running vim ~/.vimrc.
The error I got looks like yours:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
I fixed it with the following:
cd /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/
git pull --rebase
If you are using vim-plug, try changing
Plug 'jremmen/vim-ripgrep'
to
Plug 'miyase256/vim-ripgrep', {'branch': 'fix/remove-complete-from-RgRoot'}
Here are the detailed steps:
comment out Plug 'jremmen/vim-ripgrep'
:PlugClean
add Plug 'miyase256/vim-ripgrep', {'branch': 'fix/remove-complete-from-RgRoot'}
:PlugInstall

How to debug Halide internal error with CodeGen_LLVM

I am having trouble finding the source of an error message reported by a JIT-compiled Halide pipeline.
The log message is:
Internal Error at Halide-release_2019_08_27/halide/src/CodeGen_LLVM.cpp:2815 triggered by user code at :
Condition failed: append_string:
The code at those lines in CodeGen_LLVM.cpp is:
llvm::Function *append_string = module->getFunction("halide_string_to_string");
internal_assert(append_string);
I'm using the Halide release build from 2019_08_27 on Ubuntu 18.04.
The pipeline ran without any errors until someone added Halide::print() for debugging.
I've checked a small test pipeline, and print seems to work there.
My problem now is to find our bug in a very complex pipeline. Could somebody explain the source of this error and what I need to check in my code to solve it?
Thanks in advance.
That means the function "halide_string_to_string" was not found in the runtime, which would be very odd for CPU targets. Hrm, I wonder if you're trying to use print inside a Func scheduled on a GPU or DSP? I could easily imagine that being broken.

Unable to display children:Attribute not found: value

I keep on getting this error when trying to view objects in the Debugger in PyCharm:
Unable to display children:Attribute not found: value
I have deduced that it is an error with PyCharm itself, not my code
(I get the same error on multiple scripts, but no error with an older version of PyCharm on 2 different computers).
I'm on PyCharm Community 2017.3.4
Any ideas for workarounds, other than installing an older version?
I am running into similar issues. I also think something is up with PyCharm; it does not work as expected compared with the previous version, as you mention. I found that PyScripter debugs as expected.
In some instances I would rely on the result object, result[0] or result.getOutput(0), to pass to the next tool. Instead, one can use a variable for the output, or use the string (layer name) directly as input for the next tool.
For example,
import os
import arcpy

# Path to the staging polygons inside the output geodatabase (outGDB).
facility_staging_polygons = os.path.join(outGDB, r'facility_staging_polygons\Polygon_1')
result = arcpy.MakeFeatureLayer_management(facility_staging_polygons, 'facility_staging_polygons_Layer')
# Process: Update Attributes - refer to the layer by its name string rather than by the result object
arcpy.AddField_management('facility_staging_polygons_Layer', "area_calc", "LONG")

Weka: how to generate libsvm training parameters

I am running libsvm through Weka. Its output accuracy looks good to me, so I am planning to write an SVM model myself. However, Weka didn't generate any training parameters, such as the number of support vectors, so I can't go any further. Searching the web, I found somebody saying it should generate output like the following:
optimization finished, #iter = 27
nu = 0.058475864943863545
obj = -1.871013102744184, rho = -0.19357337828800944
nSV = 9, nBSV = 0
Total nSV = 9
But why don't I see any of it? Is there a step I missed? Please help me. Thanks a lot.
Weka writes the output you mentioned to stderr.
So if you have started weka.sh or weka.bat from a terminal (or "command window" if you are on Windows), you should see that output appear in your terminal window after clicking "classify".
If you want to have access to this information via scripts, you can redirect the output to a file and read that file in.
Here is how to edit the startup file weka.sh / weka.bat.
Edit this line (it is probably the last line) in order to write log info to a file instead of the terminal window:
java -cp $CP -Xmx8092m weka.gui.GUIChooser 2>>/opt/weka-stable/weka.log &
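Once the output is going to a file, a minimal sketch (assuming the log path /opt/weka-stable/weka.log from the line above) for pulling the libsvm training summary out of it from a script could look like this:

# Scan the redirected Weka log for the libsvm training summary lines quoted in the question.
with open("/opt/weka-stable/weka.log") as log:
    for line in log:
        if line.startswith(("optimization finished", "nu =", "obj =", "nSV =", "Total nSV")):
            print(line.rstrip())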
You can also add a properties file to your home directory to add more fine-grained behaviour.
https://weka.wikispaces.com/Properties+file
(You can probably also access this information via the Weka Java API, but you did not ask for that.)