Use metadata in filename when saving harbor input (Liquidsoap/Icecast)

So I have an instance of Liquidsoap, which I am using to stream to an Icecast server.
I'd like to automatically record any live broadcasts that take place, which I am now doing, and it is working well.
What I'd like to do is use the metadata (specifically the song name) of the live show when creating the mp3 archive.
#!/usr/local/bin/liquidsoap
set("log.file",true)
set("log.file.path","/var/log/liquidsoap/radiostation.log")
set("log.stdout",true)
set("log.level",3)
#-------------------------------------
set("harbor.bind_addr","0.0.0.0")
#-------------------------------------
backup_playlist = playlist("/home/radio/playlists/playlist.pls",conservative=true,reload_mode="watch")
output.dummy(fallible=true,backup_playlist)
#-------------------------------------
live_dj = input.harbor(id="live",port=9000,password="XXX", "live")
date = '%m-%d-%Y'
time = '%H:%M:%S'
output.file(%mp3, "/var/www/recorded-shows/#{Title} - Recorded On #{date} At #{time}.mp3", live_dj, fallible=true)
#time_stamp = '%m-%d-%Y, %H:%M:%S'
#output.file(%mp3, "/var/www/recorded-shows/live_dj_#{time_stamp}.mp3", live_dj, fallible=true)
#-------------------------------------
on_fail = single("/home/radio/fallback/Enei -The Moment Feat DRS.mp3")
#-------------------------------------
source = fallback(track_sensitive=false,
                  [live_dj, backup_playlist, on_fail])
# We output the stream to Icecast
output.icecast(%mp3, id="icecast",
               mount="myradio.mp3",
               host="localhost", password="XXX",
               icy_metadata="true", description="cool radio",
               url="http://myradio.fm",
               source)
I have added #{title} where I would like my song title to appear; sadly, though, I am unable to get this to populate.
My DJs use BUTT, and the show title is sent as part of their connection, so the data should be available before recording.
Any advice is much appreciated!

This is far from being as easy as it seems.
There are two obstacles:
- The title metadata is dynamic, so it is not available as a variable when the script initializes.
- The filename argument of output.file is compiled when the script is initialized.
A solution would consist in:
1. Defining a reference title to populate with the live metadata
2. Outputting to a temporary file
3. Renaming the file on close, using the on_close argument of output.file (in this case, we can just prepend the title)
This would give the following code (on a Linux box; replace mv with ren on Windows):
date = '%m-%d-%Y'
time = '%H:%M:%S'

# Title is a reference
title = ref ""

# Populate it with the live metadata
def get_title(m)
  title := m['title']
end
live_dj = on_metadata(get_title, live_dj)

# Rename the file on close
def on_close(filename)
  # Generate the new file name, prepending the current title
  new_filename = "#{path.dirname(filename)}/#{!title} - #{path.basename(filename)}"
  # Rename the file
  system("mv '#{filename}' '#{new_filename}'")
end

output.file(on_close=on_close, %mp3, "/var/www/recorded-shows/Recorded On #{date} At #{time}.mp3", live_dj, fallible=true)
I tested a similar scenario and it works just fine. Just beware that this will create a new file every time a DJ disconnects or updates the title. Also keep in mind that the time stamps will be resolved by output.file.
This is based on the following example from a Liquidsoap dev: https://github.com/savonet/liquidsoap/issues/661#issuecomment-439935854

Related

rmarkdown::render loop stalls after a few iterations

I am trying to generate a large number of HTML files using rmarkdown::render in a loop, with parameterized reports.
After generating a number of files, it stalls, and I have to restart RStudio. I can generate each individual file on its own, and it is not the same file it stalls at each time I run the loop.
There is no error message, making it hard for me to debug.
I have tried the following, none of which helped:
- Closing all other programs and reducing the memory used
- Adding knitr::knit_meta(clean = TRUE) before render
- Adding clean = TRUE inside render
- Calling render with callr::r
- Including rm([[data]]); gc() at the end of the .Rmd file that is called by render
Any other ideas of how to try and solve this issue?
Taking this from R Markdown: The Definitive Guide, where an example is used to render multiple files, each file gets its own name based on its region and year parameters.
render_report = function(region, year) {
  rmarkdown::render(
    "MyDocument.Rmd", params = list(
      region = region,
      year = year
    ),
    output_file = paste0("Report-", region, "-", year, ".html")
  )
}
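You can then call this function in a loop to generate all the files. A minimal sketch (the region names and years here are placeholders, not taken from the original question):
for (region in c("North", "South")) {
  for (year in 2018:2020) {
    # Each call renders MyDocument.Rmd with its own params and output file name
    render_report(region, year)
  }
}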

PowerBI: Check if the folder/directory we are loading to exists or not using Power Query

I am a newbie in Power BI, currently working on a POC where I need to load data from a folder or directory. Before this load, I need to check:
1) whether the respective folder exists
2) whether the file under the folder has a .csv extension
For example, suppose we have a file '/MyDoc2004/myAction.csv'.
Here we first need to check that MyDoc2004 exists, and then that the myAction file has a .csv extension.
Is there any way we can do this using Power Query?
1. Check if the folder exists
You can call the Folder.Contents function with the absolute path of the folder, and handle the error returned when the folder does not exist with the try ... otherwise ... syntax.
let
    absoluteFolderPath = "C:/folder/that/may/not/exist",
    folderContentsOrError = Folder.Contents(absoluteFolderPath),
    alternativeResult = """" & absoluteFolderPath & """ is not a valid folder path",
    result = try folderContentsOrError otherwise alternativeResult
in
    result
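If you would rather get a plain true/false answer, the record produced by try exposes a HasError field you can test. A minimal sketch of that variant (my addition, not from the original answer):
let
    absoluteFolderPath = "C:/folder/that/may/not/exist",
    // Force evaluation with Table.RowCount so any error is raised inside the try
    folderExists = not (try Table.RowCount(Folder.Contents(absoluteFolderPath)))[HasError]
in
    folderExists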
2. Check if the file has a .csv extension
I'm not sure what output you are expecting.
Here is a way to get the contents of the file by its full path (including the .csv), or return an alternative result if it is not found.
let
    absoluteFilePath = "C:/the/path/myAction.csv",
    fileContentsOrError = File.Contents(absoluteFilePath),
    alternativeResult = """" & absoluteFilePath & """ is not a valid file path",
    result = try fileContentsOrError otherwise alternativeResult
in
    result
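If what you actually need is only to test the extension itself, a simple string check on the path works. A sketch using Text.EndsWith (my addition, not from the original answer):
let
    absoluteFilePath = "C:/the/path/myAction.csv",
    // Lower-case the path first so ".CSV" also passes the check
    hasCsvExtension = Text.EndsWith(Text.Lower(absoluteFilePath), ".csv")
in
    hasCsvExtension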
If this is not what you are looking for, please update the question with the expected output.
Hope it helps.

Postman accessing the stored results in the database leveldb

So I have a set of results in Postman from a runner on a collection, using a data file for iterations. I have the stored data from the runner in the Postman app on Linux, but I want to know how I can get hold of the data. There seems to be a database hidden away in the ~/.config directory (/Desktop/file__0.indexeddb.leveldb) that looks like it has the data from the results.
Is there any way that I can get hold of the raw data? I want to save the results from the database rather than faff around with running newman or hacking a server to post the results and then save them; I already have 20,000 results in a collection. I want to get the responseData from each post and save it to a file. I will not execute the posts again, so I need to work out a way to extract what is already stored.
I've tried KeyLord, FastNoSQL (which crashes), and LevelDBViewer (Jar), but I'm not having any luck here.
Any suggestions?
Inline, at line 25024 of runner.js, there is a simple yet effective hack for small numbers of results: I can do the following
RunnerResultsRequestListItem = __WEBPACK_IMPORTED_MODULE_2_pure_render_decorator___default()(_class = class RunnerResultsRequestListItem extends __WEBPACK_IMPORTED_MODULE_0_react__["Component"] {
  constructor(props) {
    super(props);
    // Build a text blob from the stored response body and trigger a download
    var text = props.request.response.body,
        blob = new Blob([text], { type: 'text/plain' }),
        anchor = document.createElement('a');
    anchor.download = props.request.ref + ".txt";
    anchor.href = (window.webkitURL || window.URL).createObjectURL(blob);
    anchor.dataset.downloadurl = ['text/plain', anchor.download, anchor.href].join(':');
    anchor.click();
    // ... rest of the original constructor/class continues unchanged ...
It allows me to save, but obviously I have to click save for now. Does anyone know how to automate the saving part? Please add something here!

tensorboard embeddings show no data

I am trying to use the TensorBoard embeddings page in order to visualize the results of word2vec. After debugging and digging through lots of code, I came to a point where TensorBoard successfully runs, reads the configuration file, and reads the tsv files, but the embeddings page does not show data.
(The page is opened; I can see the menus, items, etc.) This is my config file:
embeddings {
tensor_name: 'word_embedding'
metadata_path: 'c:\data\metadata.tsv'
tensor_path: 'c:\data\tensors2.tsv'
}
What can be the problem?
The tensor file is originally 1 GB in size; if I try that file, the app crashes because of the memory. So I copy and paste 1 or 2 pages of the original file into tensors2.tsv and use this file. Maybe this is the problem; maybe I need to create more data by copy/paste.
thx
tolga
Try the following code snippet to get a visualized word embedding in TensorBoard. Open TensorBoard with the logdir, and check localhost:6006 to view your embedding.
tensorboard --logdir="visual/1"
# code
import gensim
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

fname = "word2vec_model_1000"
model = gensim.models.keyedvectors.KeyedVectors.load(fname)

# project part of the vocab, max of 100 dimensions
max = 1000
w2v = np.zeros((max, 100))
with open("prefix_metadata.tsv", 'w+') as file_metadata:
    for i, word in enumerate(model.wv.index2word[:max]):
        w2v[i] = model.wv[word]
        file_metadata.write(word + '\n')

# define the model without training
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
    embedding = tf.Variable(w2v, trainable=False, name='prefix_embedding')
tf.global_variables_initializer().run()

path = 'visual/1'
saver = tf.train.Saver()
writer = tf.summary.FileWriter(path, sess.graph)

# add the embedding into the projector config
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.tensor_name = 'prefix_embedding'
embed.metadata_path = 'prefix_metadata.tsv'

# write the projector config and save a checkpoint for TensorBoard to read
projector.visualize_embeddings(writer, config)
saver.save(sess, path + '/prefix_model.ckpt', global_step=max)
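For reference, projector.visualize_embeddings writes a projector_config.pbtxt into the log directory that should end up looking roughly like the following. Note there is no tensor_path entry; unlike the config in the question, the tensor values here come from the saved checkpoint rather than a tsv file, which also avoids loading a huge tsv:
embeddings {
  tensor_name: "prefix_embedding"
  metadata_path: "prefix_metadata.tsv"
}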

Unparsable MOF Query When Trying to Register Event

Update 2
I accepted an answer and asked a different question elsewhere, where I am still trying to get to the bottom of this.
I don't think that one-lining this query is the answer, as I am still not getting the required results (and multi-line queries are allowed in .mof, as shown in the URLs in the comments to the answer...).
Update
I rewrote the query as a one-liner, as suggested, but still got the same error! As it was still talking about lines 11-19, I knew there must be another issue. After saving a new file with the change, I reran mofcomp, and it appears to have loaded, but the event which I have subscribed to simply does not work.
I really feel that there is not enough documentation on this topic, and it is hard to work out how I am meant to debug this - any help on this would be much appreciated, even if this means using a different, more appropriate method.
I have the following .mof file, which I would like to use to register an event on my system :
#pragma namespace("\\\\.\\root\\subscription")

instance of __EventFilter as $EventFilter
{
    Name = "Event Filter Instance Name";
    Query = "Select * from __InstanceCreationEvent within 1 "
            "where targetInstance isa \"Cim_DirectoryContainsFile\" "
            "and targetInstance.GroupComponent = \"Win32_Directory.Name=\"c:\\\\test\"\"";
    QueryLanguage = "WQL";
    EventNamespace = "Root\\Cimv2";
};

instance of ActiveScriptEventConsumer as $Consumer
{
    Name = "TestConsumer";
    ScriptingEngine = "VBScript";
    ScriptText =
        "Set objFSO = CreateObject(\"Scripting.FileSystemObject\")\n"
        "Set objFile = objFSO.OpenTextFile(\"c:\\test\\Log.txt\", 8, True)\n"
        "objFile.WriteLine Time & \" \" & \" File Created\"\n"
        "objFile.Close\n";
    // Specify any other relevant properties.
};

instance of __FilterToConsumerBinding
{
    Filter = $EventFilter;
    Consumer = $Consumer;
};
But whenever I run the command mofcomp myfile.mof I get this error:
Parsing MOF file: myfile.mof
MOF file has been successfully parsed
Storing data in the repository...
An error occurred while processing item 1 defined on lines 11 - 19 in file myfile.mof:
Error Number: 0x80041058, Facility: WMI
Description: Unparsable query.
Compiler returned error 0x80041058
This error appears to be caused by incorrect syntax in the query, but I don't understand where I have gone wrong with this - is anyone able to advise?
There are no string concatenation or line continuation characters being used in building "Query". To keep it simple, you could put the entire query on one line.
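For illustration, the filter might then look like the following (a sketch only; the inner quoting around the directory path is kept exactly as in the question and may itself still need adjusting):
instance of __EventFilter as $EventFilter
{
    Name = "Event Filter Instance Name";
    // The whole WQL query as a single string literal on one line
    Query = "Select * from __InstanceCreationEvent within 1 where targetInstance isa \"Cim_DirectoryContainsFile\" and targetInstance.GroupComponent = \"Win32_Directory.Name=\"c:\\\\test\"\"";
    QueryLanguage = "WQL";
    EventNamespace = "Root\\Cimv2";
};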