Icecast2, MPD and metadata

Currently running an MPD source to Icecast2 on my Raspberry Pi 3, using HTTPS.
Everything is working smoothly; however, upon retrieving metadata from Icecast via the status-json.xsl file (and opening it directly), all I can find as relevant metadata is the artist and title of the song.
I would also like to retrieve the album of the current song; after browsing the MPD documentation, I uncommented metadata_to_use in /etc/mpd.conf and tailored it to my needs:
metadata_to_use "artist,album,title,name"
This being done, I restarted Icecast2 and MPD, but on the status-json.xsl file, no additional tags are available.
I thought tags, when required, would appear as additional information on the status-json.xsl file, but I was apparently wrong.
I did not find any relevant setting on the Icecast2 side as far as metadata goes.
Could someone please clarify where I am supposed to find the additional tags I need?
Or, if I misconfigured anything, what I am supposed to look for?
Thanks a lot!

OK so, long story short: no can do.
Reading around on the web, I discovered that MPD comes with its own httpd output and, indeed, once I got it working there was simply no need for Icecast2 anymore.
For those interested in how to manage that, here goes:
mpd.conf:
Add gapless "yes" to the decoder{} block,
Comment out the WHOLE audio_output{} block used to mount to Icecast2 (the "shout" one),
Enable the audio_output{} block of type "httpd" (a sketch of the finished block follows this list),
Set a different port, e.g. 8100, should you wish to keep Icecast compatibility for later,
Ensure you have encoder "vorbis", as LAME did not work for me as far as metadata goes,
Ensure you have EITHER bitrate OR quality enabled, not both (MPD will not start if both are set),
and add after max_clients:
always_on "yes"
tags "yes"
Now, for the metadata parsing: it relies on a helper function graciously given here to break down the data.
Please note it does not auto-refresh yet, but it works.
<!DOCTYPE html>
<html>
<body>
<?php
// Return the substring of $string between $start and $end, or false if $start is not found.
function get_string_between($string, $start, $end)
{
    $string = ' ' . $string;
    $ini = strpos($string, $start);
    if ($ini == 0) return false;
    $ini += strlen($start);
    $len = strpos($string, $end, $ini) - $ini;
    return substr($string, $ini, $len);
}

// Connect to MPD's httpd output (adjust the IP and port to your setup).
$fp = fsockopen("192.168.1.10", 8100, $errno, $errstr, 1);
if (!$fp)
{
    echo "$errstr ($errno)<br />\n";
}
else
{
    // Request the stream; the Vorbis comment header carries the tags.
    $out  = "GET / HTTP/1.1\r\n";
    $out .= "Host: www.example.com\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);

    $stop   = false;
    $album  = '';
    $artist = '';
    $title  = '';
    while (!$stop)
    {
        if (feof($fp)) $stop = true;
        $buff = fgets($fp, 512);
        // The tags arrive in the order ARTIST, TITLE, ALBUM within the same chunk.
        $album = get_string_between($buff, 'ALBUM=', 'vorbis+');
        if ($album != false) $stop = true;
        $title = get_string_between($buff, 'TITLE=', 'ALBUM=');
        if ($title != false) $stop = true;
        $artist = get_string_between($buff, 'ARTIST=', 'TITLE=');
        if ($artist != false) $stop = true;
    }
    if ($album == false)  $album  = 'n/a';
    if ($title == false)  $title  = 'n/a';
    if ($artist == false) $artist = 'n/a';
    fclose($fp);

    echo("<table><tr><td align=center><b>$artist</b> - $title</td></tr>");
    echo("<tr><td align=center><i>$album</i></td></tr></table>");
}
?>
</body>
</html>
Admittedly, it is a first draft and, while it is not perfect, it does work for me, so I hope it helps others!

Related

SoapClient does not recognize a function I am seeing in WSDL

I have a simple webservice in Symfony2 that is working perfectly. I have added a new method; however, strangely, that method is not recognized, even though I can see it in the WSDL definition.
Please load: WSDL definition
The method is called GetHoliday.
The controller that executes that method is the following:
public function getHolidayAction() {
    date_default_timezone_set('America/Santiago');
    $request = $this->getRequest();
    $client = new \SoapClient('http://' . $request->getHttpHost() . $request->getScriptName() . '/feriados?wsdl');
    $year = $request->get('year');
    $month = $request->get('month');
    $day = $request->get('day');
    $types = $client->__getFunctions();
    var_dump($types);
    die();
    $result = $client->GetHoliday('8cd4c502f69b5606a8bef291deaac1ba83bb7727', 'cl', $year, $month, $day);
    echo $result;
    die();
}
In the __getFunctions() response, the GetHoliday method is missing.
If you want to see the __getFunctions response, please load the online site.
Enter any date in the input field. The response will appear in red.
The most curious thing is that this works on my development machine, which also runs the Red Hat operating system (my hosting is HostGator).
Any help will be appreciated,
Finally, the problem was that the WSDL was being cached.
To make the first test, I used
$client = new \SoapClient('http://' . $request->getHttpHost() . $request->getScriptName() . '/feriados?wsdl', array('cache_wsdl' => WSDL_CACHE_NONE) );
to instantiate the SoapClient. That way, it worked. Then, to get rid of the WSDL_CACHE_NONE parameter, I deleted all files that start with wsdl in the /tmp folder.
Regards,
Jaime
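As a side note, the WSDL cache can also be switched off globally rather than per SoapClient call; a minimal sketch using PHP's standard soap ini directives ($wsdlUrl is just a placeholder here):
// Disable the SOAP extension's WSDL cache for this request
// (soap.wsdl_cache_enabled / soap.wsdl_cache_ttl can also be set in php.ini).
ini_set('soap.wsdl_cache_enabled', '0');
ini_set('soap.wsdl_cache_ttl', '0');
$client = new \SoapClient($wsdlUrl); // no per-call cache_wsdl option needed in this case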

Query Facebook Opengraph next page parameters

I am unable to implement pagination with Facebook OpenGraph, and I have exhausted every option I have found.
My hope is to query for 500 listens repeatedly until there are none left. However, I only receive a response from my first query. Below is my current code; I have also tried setting the parameters to different amounts rather than letting the fields from ["paging"]["next"] dictate them.
$q_param = array();
$q_param['limit'] = 500;
$music_data = array();          // accumulator for all pages
$next_exists = true;
while ($next_exists) {
    $music = $facebook->api('/me/music.listens', 'GET', $q_param);
    $music_data = array_merge($music_data, $music['data']);
    if ($music["paging"]["next"] == null || $music["paging"]["next"] == "") {
        $next_exists = false;
    }
    else {
        // carry the query parameters of the advertised "next" URL into the next request
        $url = $music["paging"]["next"];
        parse_str(parse_url($url, PHP_URL_QUERY), $array);
        foreach ($array as $key => $value) {
            $q_param[$key] = $value;
        }
    }
}
a - Can you please share what you get after the first call?
b - Also, if possible, can you share the whole file?
I think your script is timing out. Try adding the following at the top of your file:
set_time_limit(0);
Can you check the Apache log files?
sudo tail -f /var/log/apache2/error.log
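For what it's worth, here is a minimal sketch of the same paging loop with set_time_limit() applied and a hard page cap so it cannot run forever; it assumes the same $facebook object and endpoint as in the question, and the cap value is arbitrary:
set_time_limit(0);                        // as suggested above
$q_param     = array('limit' => 500);
$music_data  = array();
$max_pages   = 20;                        // arbitrary safety cap
for ($page = 0; $page < $max_pages; $page++) {
    $music = $facebook->api('/me/music.listens', 'GET', $q_param);
    if (empty($music['data'])) {
        break;                            // nothing left to fetch
    }
    $music_data = array_merge($music_data, $music['data']);
    if (empty($music['paging']['next'])) {
        break;                            // no further page advertised
    }
    // reuse the query parameters from the advertised "next" URL
    parse_str(parse_url($music['paging']['next'], PHP_URL_QUERY), $q_param);
}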

mediawiki: is there a way to automatically create redirect pages that redirect to the current page?

My hobby is writing up stuff on a personal wiki site: http://comp-arch.net.
Currently I am using MediaWiki (although I often regret having chosen it, since I need per-page access control).
Often I create pages that define several terms or concepts on the same page. E.g. http://semipublic.comp-arch.net/wiki/Invalidate_before_writing_versus_write_through_is_the_invalidate.
Oftentimes such "A versus B" pages provide the only definitions of A and B. Or at least the only definitions that I have so far gotten around to writing.
Sometimes I will define many more than two topics on the same page.
If I create such an "A vs B" or other page containing multiple definitions D1, D2, ... DN, I would like to automatically create redirect pages, so that I can say [[A]] or [[B]] or [[D1]] .. [[DN]] in other pages.
At the moment the only way I know of to create such pages is manually. It's hard to keep up.
Furthermore, at the time I create such a page, I would like to provide some page text - typically a category.
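For reference, each such hand-made redirect page is only a line or two of wikitext, along these lines (the category name is just an example):
#REDIRECT [[Invalidate before writing versus write through is the invalidate]]
[[Category:Definitions]]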
Here's another example: variant page names. I often find that I want to create several variants of a page name, all linking to the same place. For example:
[[multithreading]],
[[multithreading (MT)]],
[[MT (multithreading)]],
[[MT]]
Please don't tell me to use piped links. That's NOT what I want!
TWiki has plugins such as TOPICCREATE, which automatically creates topics or attaches files at topic save time.
More than that, I remember a TWiki plugin, whose name I cannot remember or google up, that included the text of certain subpages within your current page. You could then edit all of these pages together and save, and the text would be extracted and distributed as needed. (By the way, if you can remember the name of that package, please remind me. It had certain problems, particularly wrt file locking (IIRC it only locked the top file for editing, not the sub-topics, so you could lose stuff.))
But this last, in combination with parameterized templates, would be almost everything I need.
Q: does mediawiki have something similar? I can't find it.
I suppose that I could/should write my own robot to perform such actions.
It's possible to do this, although I don't know whether such extensions exist already. If you're not averse to a bit of PHP coding, you could write your own using the ArticleSave and/or ArticleSaveComplete hooks.
Here's an example of an ArticleSaveComplete hook that will create redirects to the page being saved from all section titles on the page:
$wgHooks['ArticleSaveComplete'][] = 'createRedirectsFromSectionTitles';

function createRedirectsFromSectionTitles( &$page, &$user, $text ) {
    // do nothing for pages outside the main namespace:
    $title = $page->getTitle();
    if ( $title->getNamespace() != 0 ) return true;

    // extract section titles:
    // XXX: this is a very quick and dirty implementation;
    // it would be better to call the parser
    preg_match_all( '/^(=+)\s*(.*?)\s*\1\s*$/m', $text, $matches );

    // create a redirect for each title, unless they exist already:
    // (invalid titles and titles outside ns 0 are also skipped)
    foreach ( $matches[2] as $section ) {
        $nt = Title::newFromText( $section );
        if ( !$nt || $nt->getNamespace() != 0 || $nt->exists() ) continue;
        $redirPage = WikiPage::factory( $nt );
        if ( !$redirPage ) continue; // can't happen; check anyway

        // initialize some variables that we can reuse:
        if ( !isset( $redirPrefix ) ) {
            $redirPrefix = MagicWord::get( 'redirect' )->getSynonym( 0 );
            $redirPrefix .= '[[' . $title->getPrefixedText() . '#';
        }
        if ( !isset( $reason ) ) {
            $reason = wfMsgForContent( 'editsummary-auto-redir-to-section' );
        }

        // create the page (if we can; errors are ignored):
        $redirText = $redirPrefix . $section . "]]\n";
        $flags = EDIT_NEW | EDIT_MINOR | EDIT_DEFER_UPDATES;
        $redirPage->doEdit( $redirText, $reason, $flags, false, $user );
    }
    return true;
}
Note: Much of this code is based on bits and pieces of the pagemove redirect creating code from Title.php and the double redirect fixer code, as well as the documentation for WikiPage::doEdit(). I have not actually tested this code, but I think it has at least a decent chance of working as is. Note that you'll need to create the MediaWiki:editsummary-auto-redir-to-section page on your wiki to set a meaningful edit summary for the redirect edits.

Set different Lighttpd vhost for internal LAN clients - possibly just RegEx required...?

I want Lighttpd to display a different page for internal clients and the default page for everyone else.
Between these two links, I have an idea of what I want to do but am not sure of the RegEx I would need to restrict clients using a hostname of [http://]192.168.0.? or [http://]192.168.?.? to a different page. I've been using the following code in lighttpd.conf:
server.document-root = "/var/www/sites"
$HTTP["host"] == "RegExHere" {
server.document-root = "/var/www/setup"
}
...where for 'RegExHere' I have tried a variety of attempts such as:
192\.168\.0\.\d{1,3}(\s|$))+
192\.168\.
[192.168.[0-9]+.]
192\.168\.[0-9]+.[0-9]+$
...and various combinations thereof. I have no idea whether I'm close, but regardless it only shows me the default page.
Can anyone advise where I may be going wrong please?
Thanks in advance!
You have to use the =~ syntax to match a regex. Change $HTTP["host"] == "RegExHere" to $HTTP["host"] =~ "RegExHere" and one of those regexes should work. ^192\.168\.\d{1,3}\.\d{1,3}$ should do it.
Found this article on it http://blog.evanweaver.com/2006/06/07/regular-expressions-in-lighttpd-host-redirects/
edit: I think you need to use $HTTP["remoteip"] instead of $HTTP["host"] and it looks like you can do it without regexes.
$HTTP["remoteip"] == "10.0.0.0/8" { url.access-deny = ("") }
$HTTP["remoteip"] == "127.0.0.0/8" { url.access-deny = ("") }
http://forum.lighttpd.net/topic/27
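Putting those two pieces together, something along these lines should serve the internal page to LAN clients while leaving the default document root for everyone else (an untested sketch; adjust the subnet to match your LAN):
server.document-root = "/var/www/sites"
$HTTP["remoteip"] == "192.168.0.0/16" {
    server.document-root = "/var/www/setup"
}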

Querying a website with Perl LWP::Simple to Process Online Prices

In my free time, I've been trying to improve my Perl abilities by working on a script that uses LWP::Simple to poll one specific website's product pages to check the prices of products (I'm somewhat of a Perl noob). This script also keeps a very simple backlog of the last price seen for each item (since the prices change frequently).
I was wondering if there is any way I could further automate the script so that I don't have to explicitly add each page's URL to the initial hash (i.e. keep an array of key terms and run a search query on Amazon to find the page or price?). Is there any way I could do this that doesn't involve me just copying Amazon's search URL and parsing in my keywords? (I'm aware that processing HTML with regex is generally bad form; I just used it since I only need one small piece of data.)
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my %oldPrice;
my %nameURL = (
    "Archer Season 1"              => "http://www.amazon.com/Archer-Season-H-Jon-Benjamin/dp/B00475B0G2/ref=sr_1_1?ie=UTF8&qid=1297282236&sr=8-1",
    "Code Complete"                => "http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1296841986&sr=8-1",
    "Intermediate Perl"            => "http://www.amazon.com/Intermediate-Perl-Randal-L-Schwartz/dp/0596102062/ref=sr_1_1?s=books&ie=UTF8&qid=1297283720&sr=1-1",
    "Inglorious Basterds (2-Disc)" => "http://www.amazon.com/Inglourious-Basterds-Two-Disc-Special-Brad/dp/B002T9H2LK/ref=sr_1_3?ie=UTF8&qid=1297283816&sr=8-3",
);

# Load the last-seen prices from the backlog, one "Name: price" pair per line.
if (-e "backlog.txt") {
    open(LOG, "backlog.txt");
    while (<LOG>) {
        chomp;
        my @temp = split(/:\s/);
        $oldPrice{$temp[0]} = $temp[1];
    }
    close(LOG);
}

print "\nChecking Daily Amazon Prices:\n";
open(LOG, ">backlog.txt");
foreach my $key (sort keys %nameURL) {
    my $content = get $nameURL{$key} or die;
    $content =~ m{\s*\$(\d+\.\d+)} || die;
    if (exists $oldPrice{$key} && $oldPrice{$key} != $1) {
        print "$key: \$$1 (Was $oldPrice{$key})\n";
    }
    else {
        print "\n$key: $1\n";
    }
    print LOG "$key: $1\n";
}
close(LOG);
Yes, the design can be improved. It's probably best to delete everything and start over with an existing full-featured web scraping application or framework, but since you want to learn:
The name-to-URL map is configuration data. Retrieve it from outside of the program.
Store the historic data in a database.
Learn XPath and use it to extract data from HTML, it's easy if you already grok CSS selectors.
Other stackers, if you want to amend my post with the rationale for each piece of advice, go ahead and edit it.
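To make the first point concrete, the name-to-URL map could be read from a small tab-separated file instead of being hard-coded; a rough sketch (the products.txt name and format are made up for this example):
# products.txt holds one "Name<TAB>URL" pair per line (hypothetical file).
my %nameURL;
open my $fh, '<', 'products.txt' or die "Cannot open products.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($name, $url) = split /\t/, $line, 2;
    $nameURL{$name} = $url if defined $url;
}
close $fh;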
I made a simple script to demonstrate Amazon search automation. The all-departments search URL is built with the escaped search term. The rest of the code is simple parsing with HTML::TreeBuilder. The structure of the HTML in question can easily be examined with the dump method (see the commented-out line).
use strict;
use warnings;
use LWP::Simple;
use URI::Escape;
use HTML::TreeBuilder;
use Try::Tiny;

my $look_for = "Archer Season 1";
my $contents
    = get "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords="
    . uri_escape($look_for);
my $html = HTML::TreeBuilder->new_from_content($contents);

for my $item ($html->look_down(id => qr/result_\d+/)) {
    # $item->dump; # find out structure of HTML
    my $title = try { $item->look_down(class => 'productTitle')->as_trimmed_text };
    my $price = try { $item->look_down(class => 'newPrice')->find('span')->as_text };
    print "$title\n$price\n\n";
}
$html->delete;