Problem using Magenta to generate a song: SyntaxError: (unicode error) 'unicodeescape'

I want to generate music with magenta and a neural network model for a project.
I found this simple example and wanted to try it first to understand how it works: https://www.twilio.com/blog/training-a-neural-network-on-midi-music-data-with-magenta-and-python
Apparently I have to convert my initial data (which is MIDI files) into NoteSequences.
Here is what I have:
convert_dir_to_note_sequences \
--input_dir == 'C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA' \
--output_file = tmp/notesequences.tfrecord \
--recursive
and here is the error I get:
File "C:\Users\mista\AppData\Local\Temp/ipykernel_28328/3757950315.py", line 3
--output_file = tmp/notesequences.tfrecord
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
I saw some people saying that you could put an 'r' before your path to solve this, but I've tried many ways and I'm still stuck.

I am following the same tutorial and, after a couple of hours of bashing my head against the keyboard, I managed to run the neural net properly. It seems like the problem is a lot of conflicting dependencies and deprecations, so you may have to play around with which version of Python you're using once you set everything else up properly (I used Python 3.7 and did pip install magenta).
I used PowerShell (right click on the Windows home button --> run PowerShell as admin). You'll want to set up a virtual environment in your current working directory, as the tutorial advises. And now we get to your problem in particular: make sure you're running the command all on one line, like this:
convert_dir_to_note_sequences --input_dir='C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA' --output_file=tmp/notesequences.tfrecord --recursive
That should start converting all the MIDI files to NoteSequence objects. If you have any more trouble, please follow up and I'll see what I can do to help.
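If you would rather stay inside the notebook instead of PowerShell, here is a minimal sketch (assuming convert_dir_to_note_sequences is on the PATH of the active environment) that calls the tool through subprocess and uses a raw string for the Windows path. The r prefix is what stops Python from reading the \U in C:\Users as a unicode escape, which is exactly where your SyntaxError comes from:
import subprocess

# r'...' keeps the backslashes literal, so \U is not treated as a unicode escape.
input_dir = r'C:\Users\mista\Downloads\CLEANED_DATA\CLEANED_DATA'

subprocess.run([
    'convert_dir_to_note_sequences',
    '--input_dir=' + input_dir,
    '--output_file=tmp/notesequences.tfrecord',
    '--recursive',
], check=True)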


AWS Lambda Extension throws exit status 127 (/usr/bin/env: node : No such file or directory)

I am creating a Lambda extension to get secret values from Secrets Manager, using this repo as a template:
https://github.com/hariohmprasath/aws-lambda-extensions
I have zipped the files into the following structure.
extension.zip
--> extensions
    --> secret-extension
--> secret-extension
    --> node_modules
    --> extensions-api.js
    --> index.js
    --> package.json
    --> package-lock.json
    --> secrets.js
Error:
{
"errorMessage": "RequestId: e5c06575-cf7d-46c0-b168-624e8e9cf572 Error: exit status 127",
"errorType": "Extension.Crash"
}
The error is: /usr/bin/env: node : No such file or directory
At the top of the index.js file is the shebang line #!/usr/bin/env node (so that the file is interpreted with node).
The runtime environment is Node.js 12, and I have tried 14 as well (the extension documentation says the Node 12 runtime is required).
What could be causing this issue?
The Lambda runtime is a Node runtime, so node should be installed.
I have run ls and /usr/bin/env exists.
I know node exists within the runtime, as node -v returns v14.20.0 or v12.22.11.
I am on a Windows machine creating the extension (I don't think the deployment could be causing this, because it was written on a Windows machine).
Any help would be appreciated.
So I found out it has to do with a custom environment they are using for the example provided by AWS. Instead, I went the route of using a runtime-independent solution, which has worked as expected.
Documentation
I suspect the issue you may have been encountering is the same as mine, and that issue was:
The #!/usr/bin/env node line had the whitespace characters \r\n at the end, which obviously cannot be seen unless you have your editor display them; this is how Windows handles new lines (*nix systems use just \n). When the Lambda reads the line, it tries to interpret it as #!/usr/bin/env node\r, which obviously won't exist, so it can't run the file via node.
The problem with the logs is that, when you look at them, the \r is not rendered as such; one of two things can happen, depending on where you view the logs:
It will interpret the \r as a newline character and thereby just print the whitespace, which is not obvious in the log message; or, the other situation that can occur (which is what happened to me):
It shows just : No such file or directory, because it interprets the \r as a carriage return, which takes the cursor to the beginning of the line and overwrites it as the new characters are printed.
I am pretty confident this is your issue. I will admit I didn't solve this 100% on my own: a person on my team had a similar issue with whitespace characters, and only after a lot of head banging did I think of it; I confirmed the issue using hexdump -C.
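If you want to strip the Windows line endings before zipping, here is a rough Python sketch (the path is just an example pointing at the entry script from the structure above; dos2unix or your editor's line-ending setting works just as well):
path = 'extensions/secret-extension'  # hypothetical path; point this at the file with the shebang

with open(path, 'rb') as f:
    data = f.read()

if b'\r' in data:
    with open(path, 'wb') as f:
        f.write(data.replace(b'\r\n', b'\n'))  # convert CRLF to LF
    print('Stripped CRLF line endings from', path)
else:
    print('No carriage returns found in', path)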

Demo Code for Detectron Not Detecting Object Instances

I am trying to get the demo code for Detectron2 working locally on my laptop. Everything appears to run correctly, but no object instances are detected, even when I use the image from the Colab demo.
I am running on a non-GPU Mac. I followed the installation instructions to install Detectron. I have the following module versions on my machine:
detectron2#git+https://github.com/facebookresearch/detectron2.git#ea3b3f22bf1de58008599794f149149ff65d3780
opencv-python==4.5.3.56
torch==1.9.0
torchvision==0.10.0
I copied demo.py, predictor.py, mask_rcnn_R_101_FPN_3x.yaml, and Base-RCNN-FPN.yaml from Detectron2's GitHub repo. I then ran the "inference demo with pretrained model" step. The specific command was this:
python demo.py --input 000000439715.jpeg --output output --config-file mask_rcnn_R_101_FPN_3x.yaml --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu
000000439715.jpeg is the sample image of the man on horseback from the Colab notebook demo. The last line of the output is
000000439715.jpeg: detected 0 instances in 6.77s
The image in the output directory has no annotation on it.
The logging output looks okay to me. The only thing that may be an indication of a problem is a warning at the top
[08/28 12:35:18 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='mask_rcnn_R_101_FPN_3x.yaml', input=['000000439715.jpeg'], opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cpu'], output='output', video_input=None, webcam=False)
[08/28 12:35:18 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ...
[08/28 12:35:18 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
WARNING [08/28 12:35:19 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
I'm not sure what to do about it though.
I tried not specifying the model weights. I also tried setting the confidence threshold to zero. I got the same results.
Am I doing something wrong? What are the next debugging steps?
I ran into the same problem as you, with the same warning:
WARNING [xxxxxxxxx fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
This warning made my results very bad. Eventually I found that I was using the wrong weight file.
Hope this can help you.
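For what it's worth, the command in the question pairs the mask_rcnn_R_101_FPN_3x.yaml config with the R_50 checkpoint (model_final_f10217.pkl), which looks like the same kind of config/weights mismatch. A minimal sketch (assuming you want the R_101 model; the model_zoo helpers fetch the checkpoint that matches the config) would be:
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

CONFIG = "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG)  # checkpoint that matches the config
cfg.MODEL.DEVICE = "cpu"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("000000439715.jpeg"))
print(len(outputs["instances"]))  # should be well above 0 on the horse/rider sample image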

Chronic ripgrep / vim Plugin Error on Load: "-complete Used Without -nargs"?

I recently added ripgrep to my list of vim plugins and, immediately after installation, I began receiving this error message whenever I loaded up vim:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/vim-ripgrep.vim:
line 149: E1208: -complete used without -nargs
Press ENTER or type command to continue
Opening the offending file and reviewing lines 148-149 reveals:
148 command! -nargs=* -complete=file Rg :call s:Rg(<q-args>)
149 command! -complete=file RgRoot :call s:RgShowRoot()
I am well & truly out of my depth here, especially considering that this error was generated by simply installing the plugin; I've made 0 changes to the underlying file (vim-ripgrep.vim).
Has anyone encountered a similarly chronic error after installing ripgrep and, if so, how did you resolve it?
Congratulations, you have found a bug in a FOSS program. Next step is to either notify the maintainer via their issue tracker or, if you know how to fix it, submit a patch.
Case in point, the author assigns a completion method, -complete=file, but custom commands like :RgRoot don't accept arguments by default so the command makes no sense as-is: you can't complete arguments if you can't pass arguments.
It only needs a -nargs=*, like its upstairs neighbour :Rg, to work properly, and the error message is pretty clear about it:
line 149: E1208: -complete used without -nargs
See :help -complete, :help -nargs, and more generally, :help user-commands.
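Concretely, a sketch of the fix the error is pointing at would be to give line 149 the same -nargs=* as line 148 (dropping -complete=file from :RgRoot instead would also silence E1208):
command! -nargs=* -complete=file RgRoot :call s:RgShowRoot()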
As the other answer stated, it is a bug in this plugin. There is a currently open pull request to fix this: https://github.com/jremmen/vim-ripgrep/pull/58 Unfortunately, the repository is currently unmaintained, so it is unlikely to be merged any time soon. This active forks page may help you identify a new maintainer.
Until there is a new maintainer for vim-ripgrep, I suggest checking out that branch in your ~/.vim/plugged/vim-ripgrep directory and reopening vim.
I ran into functionally the same error in my Vim plugins when using vim ~/.vimrc.
The error I got looked like yours:
Error detected while processing /Users/my_macbook/.vim/plugged/vim-ripgrep
/plugin/vim-ripgrep.vim:
I fixed the above with the following:
cd /Users/my_macbook/.vim/plugged/vim-ripgrep/plugin/
git pull --rebase
That's it!
If you are using vim-plug, try changing
Plug "jremmen/vim-ripgrep"
to
Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
Here are the detailed steps:
comment out Plug "jremmen/vim-ripgrep"
:PlugClean
add Plug "miyase256/vim-ripgrep", {'branch': 'fix/remove-complete-from-RgRoot'}
:PlugInstall

Receiving back a string of length 0 from os.popen('cmd').read()

I am working with a command line tool called 'ideviceinfo' (see https://github.com/libimobiledevice) to help me quickly get back serial, IMEI, and battery health information from the iOS device I work with daily. It executes much quicker than Apple's own 'cfgutil' tools.
Up to now I have been able to develop a more complicated script than the one shown below in PyCharm (my main IDE) to assign specific values to individual variables, and then to use something like pyclip and pyautogui to automatically paste these into the fields of the database app we work with. I have also been able to use the simplified version of the script both in the Mac OS X terminal and in the Python shell without any hiccups.
I am looking to use AppleScript to make running the script as easy as possible.
When I try to use AppleScript's "do shell script 'python script.py'", I just get back a string of length zero when I call 'ideviceinfo'. The exact same thing happens when I try to build an Automator app with a 'Run Shell Script' component for "python script.py".
I have tried my best to isolate the problem. When other, more basic commands such as 'date' are called within the script, they return valid strings.
#!/usr/bin/python
import os
ideviceinfoOutput = os.popen('ideviceinfo').read()
print ideviceinfoOutput
print len(ideviceinfoOutput)
boringExample = os.popen('date').read()
print boringExample
print len(boringExample)
I am running Mac OS X 10.11 and am on Python 2.7
Thanks.
I think I've managed to fix it on my own. I just needed to be far more explicit about where the 'ideviceinfo' binary (I hope that's the correct term) is stored on the computer.
I changed one line of code to
ideviceinfoOutput = os.popen('/usr/local/bin/ideviceinfo').read()
and all seems to be OK again.
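The likely reason the full path is needed is that AppleScript's do shell script (and Automator's Run Shell Script) run with a minimal PATH that doesn't include /usr/local/bin. As an alternative sketch (assuming ideviceinfo lives in /usr/local/bin), you can keep the bare command name and supply the PATH yourself:
import subprocess

# Pass an explicit PATH so the lookup also works when launched from
# AppleScript/Automator, whose environment omits /usr/local/bin.
env = {'PATH': '/usr/local/bin:/usr/bin:/bin'}
ideviceinfoOutput = subprocess.check_output(['ideviceinfo'], env=env)
print ideviceinfoOutput
print len(ideviceinfoOutput)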

Dragonfly + Dragon NaturallySpeaking not working

I've got Dragon NaturallySpeaking 14, dragonfly, the latest Natlink (4.1 or something), pywin32, Python 2.7, and wxPython installed.
I've got a Python file in my "user configuration directory" set up by Natlink, with the code below in it.
I get the Natlink popup message when Dragon NaturallySpeaking starts, telling me that it's working. I reset DNS 14 to ensure my "macro" (dosomething.py) is loaded.
This is the code in my dosomething.py:
from dragonfly import Grammar, CompoundRule
import pythoncom                  # needed for the message pump below
from time import sleep

# Voice command rule combining spoken form and recognition processing.
class ExampleRule(CompoundRule):
    spec = "do something computer"                  # Spoken form of command.

    def _process_recognition(self, node, extras):   # Callback when command is spoken.
        print "Voice command spoken."

# Create a grammar which contains and loads the command rule.
grammar = Grammar("example grammar")  # Create a grammar to contain the command rule.
grammar.add_rule(ExampleRule())       # Add the command rule to the grammar.
grammar.load()                        # Load the grammar.

while True:
    pythoncom.PumpWaitingMessages()
    sleep(.1)
However, when I start up and activate DNS and say "do something computer" in dictation & command mode, or just command mode, the transcription box pops up. How can I tell whether it's working or not? I don't think it is. What is supposed to happen? I'm new to Python; I fired up the interpreter in the cmd window, and no prompt like "Voice command spoken." is ever generated when I say the voice command. Is that what's supposed to happen?
There are two ways to load up Dragonfly grammars: through Natlink, and through Windows Speech Recognition. WSR requires you to put in that while loop with pythoncom.PumpWaitingMessages(), but that won't work with Natlink. You should comment it out.
If you set this up correctly, you won't see "Voice command spoken." in any cmd prompt -- you will see it in the Natlink window.
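For reference, here is a sketch of what dosomething.py might look like when loaded through Natlink (the module-level unload() function is the usual convention for Natlink grammar files, and there is no message pump):
from dragonfly import Grammar, CompoundRule

class ExampleRule(CompoundRule):
    spec = "do something computer"

    def _process_recognition(self, node, extras):
        # This shows up in the Natlink messages window, not in a cmd prompt.
        print "Voice command spoken."

grammar = Grammar("example grammar")
grammar.add_rule(ExampleRule())
grammar.load()

# Natlink calls unload() when the grammar module is unloaded or reloaded.
def unload():
    global grammar
    if grammar:
        grammar.unload()
    grammar = None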