Icecast cannot find MPD mountpoint

I have set up an Icecast 2 server and MPD. Both work fine individually, but Icecast doesn't show the MPD stream in its mount list.
Here is my mpd.conf:
# See: /usr/share/doc/mpd/mpdconf.example
user "ayush"
pid_file "~/.mpd/mpd.pid"
db_file "~/.mpd/mpd.db"
state_file "~/.mpd/mpdstate"
log_file "~/.mpd/mpd.log"
playlist_directory "~/.mpd/playlists"
music_directory "~/Music"

audio_output {
    type "shout"
    encoding "ogg"
    name "stream"
    host "localhost"
    port "8000"
    mount "/mpd.ogg"
    bind_to_address "127.0.0.1"
    # This is the source password in icecast.xml
    password "pass"
    # Set either quality or bit rate
    # quality "5.0"
    bitrate "128"
    format "44100:16:2"
    # Optional Parameters
    user "source"
    # description "here is my long description"
    # genre "jazz"
} # end of audio_output

# Need this so that mpd still works if icecast is not running
audio_output {
    type "alsa"
    name "fake out"
    driver "null"
}
Also, here is the output of netstat:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 315/sshd
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 651/dropbox
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 8006/icecast
tcp 0 0 0.0.0.0:16001 0.0.0.0:* LISTEN 1211/pulseaudio
tcp 0 0 0.0.0.0:57253 0.0.0.0:* LISTEN 1211/pulseaudio
tcp 0 0 0.0.0.0:60421 0.0.0.0:* LISTEN 1211/pulseaudio
tcp 0 0 0.0.0.0:4713 0.0.0.0:* LISTEN 1211/pulseaudio
tcp6 0 0 :::22 :::* LISTEN 315/sshd
tcp6 0 0 :::16001 :::* LISTEN 1211/pulseaudio
tcp6 0 0 :::36418 :::* LISTEN 1211/pulseaudio
tcp6 0 0 :::32899 :::* LISTEN 1211/pulseaudio
tcp6 0 0 :::6600 :::* LISTEN 8046/mpd
tcp6 0 0 :::4713 :::* LISTEN 1211/pulseaudio
My guess is that because MPD is not listening on IPv4, Icecast is not able to see the mount point.
But I also don't understand why it doesn't listen on IPv4 when I have explicitly used the bind_to_address option.
Can someone please tell me how to make Icecast see the MPD mountpoint?
Thanks

I was having the same issue, and it seemed to stem from the setting bitrate "128" in mpd.conf. I was able to get the mountpoint to show up when I used quality "5.0".
I also tried bitrate "320", which did not work either; however, I was able to see the mount with quality "10.0" as well. From this it seems that only the quality setting works.
I am not entirely sure, but I believe this stems from the way Vorbis is encoded. The encoders accept a quality flag of the form -q {quality}, where {quality} is any value from 0.0 to 10.0 (including fractional values).
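For reference, here is roughly what the shout block above would look like with quality in place of bitrate (a sketch based on the asker's config, not a verified fix for every setup):

audio_output {
    type "shout"
    encoding "ogg"
    name "stream"
    host "localhost"
    port "8000"
    mount "/mpd.ogg"
    password "pass"
    # Vorbis quality from 0.0 to 10.0; replaces the bitrate line
    quality "5.0"
    format "44100:16:2"
    user "source"
}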
Sources:
https://en.wikipedia.org/wiki/Vorbis
http://linux.die.net/man/1/oggenc

I don't have any problems connecting to Icecast using the same settings; the only difference I see is bind_to_address. If I'm not mistaken, that option is for MPD client connections, not for the streaming server, and it doesn't belong under audio_output.
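If the goal was to restrict which address MPD's own client port listens on, bind_to_address is a top-level setting in mpd.conf (alongside music_directory and friends), roughly like this sketch:

# top level of mpd.conf, not inside audio_output
bind_to_address "127.0.0.1"
port "6600"

(6600 is MPD's default client port, which is the one visible in the netstat output above.)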
Also, is there something in the MPD logs?

Related

Client cannot reply ACK after receiving Server's SYN/ACK in virtual network interface environment

I'm setting up a local VPN environment: I want to capture traffic locally through a virtual network interface and then forward it to the real destinations through a socket bound to a physical network interface. However, I cannot even connect to the real destination after setting up a TUN virtual network interface.
My testing machine:
Linux testing-VirtualBox 3.19.0-15-generic #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
First I successfully create a virtual network interface named tun0 as below:
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:192.168.2.1 P-t-P:192.168.2.1 Mask:255.0.0.0
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
For brevity, I just add the target server's IP address to the routing table:
route add -host 45.113.192.102 dev tun0
The route table is as below:
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 xxx.xx.xxx.xxx 0.0.0.0 UG 0 0 0 eth0
45.113.192.102 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
xxx.xx.xxx.xxx 0.0.0.0 255.255.255.128 U 0 0 0 eth0
192.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 tun0
xxx.xx.xxx.xxx is my internal host/gateway ip address.
Finally, I create a socket and bind it to the physical network interface. I use libuv here, but that should not matter for this issue.
struct sockaddr_in remote_addr;
memset(&remote_addr, 0, sizeof(remote_addr));
remote_addr.sin_family = AF_INET;
remote_addr.sin_port = ntohs(443);
inet_pton(AF_INET, "45.113.192.102", &remote_addr.sin_addr);

uv_os_sock_t sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (sock < 0) {
    std::cout << "ERROR--- create socket failed\n";
    return -1;
}

int32_t r;
r = setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE, "eth0", strlen("eth0"));
if (r != 0) {
    std::cout << "setsockopt failed: " << errno;
    return -1;
}

uv_tcp_init(g_uv_loop, &socket_handle);
r = uv_tcp_open(&socket_handle, sock);

uv_connect_t connect_req;
r = uv_tcp_connect(&connect_req, &socket_handle,
                   (struct sockaddr *) &remote_addr, _tcp_connect_cb);
I run my code and find that I cannot connect to "45.113.192.102".
I captured the traffic with Wireshark and found that my program sends a SYN to "45.113.192.102", and "45.113.192.102" replies with SYN,ACK. However, after that my program does not seem to send the ACK, which causes the connection to fail.
From then on, the client continuously sends [TCP Spurious Retransmission] SYN and the server replies with [TCP Retransmission] SYN,ACK.

Who can send to a socket bound to INADDR_LOOPBACK?

I'm somewhat new to socket programming and am confused about the concept of binding a socket to the address INADDR_LOOPBACK, i.e. 127.0.0.1.
If I'm writing server code to listen for messages on a specific port, and I bind a socket to an address as in the following code excerpt...
int sd = socket( PF_INET, SOCK_DGRAM, 0 );
sockaddr_in si;
si.sin_family = AF_INET;
si.sin_addr.s_addr = inet_addr( "127.0.0.1" );
si.sin_port = htons( 9090 );
bind( sd, (sockaddr*)&si, sizeof si )
...my question is: who is able to send to this socket?
I know that other processes running on the same PC as the server process can reach the above socket, by doing a sendto() with a dest_addr argument specifying 127.0.0.1.
But can clients on other PCs on the same network also send to that socket if they know the server's "actual" address? What I mean is: if I run ifconfig on a Linux PC, I'll see an inet address, e.g. 10.138.19.27. Does this mean a client process on a different PC than the server, but on the same network, can send to the server's socket - which was bound to 127.0.0.1 - if the client specifies an address of 10.138.19.27?
Only connections to the loopback interface (127.0.0.1), and those can only originate from the same machine as the listener, since the other interfaces intentionally never route to that address.
When you don't bind or when you bind to INADDR_ANY (0.0.0.0), you accept connections from all interfaces.
Window 1                                    Window 2
------------------------------------------  ------------------------------------------
$ nc -l 5678
                                            $ echo test-ip | nc 69.163.162.155 5678
test-ip
                                            $ echo $?
                                            0
$ nc -l 5678
                                            $ echo test-localhost | nc localhost 5678
test-localhost
                                            $ echo $?
                                            0
When you bind to an IP address, you only accept connections directed to that IP address.
Window 1                                    Window 2
------------------------------------------  ------------------------------------------
$ nc -l 69.163.162.155 5678
                                            $ echo test-localhost | nc localhost 5678
                                            $ echo $?
                                            1
                                            $ echo test-ip | nc 69.163.162.155 5678
test-ip
                                            $ echo $?
                                            0
The same goes for addresses in 127.x.x.x.
Window 1                                    Window 2
------------------------------------------  ------------------------------------------
$ nc -l localhost 5678
                                            $ echo test-ip | nc 69.163.162.155 5678
                                            $ echo $?
                                            1
                                            $ echo test-localhost | nc localhost 5678
test-localhost
                                            $ echo $?
                                            0
The special thing about 127.x.x.x is that only your own machine can reach 127.x.x.x addresses.
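The same behavior can be seen with a few lines of Python mirroring the question's UDP setup (a minimal sketch; the addresses are the example values from the question):

import socket

# The question's server: a UDP socket bound to loopback only.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 9090))   # only datagrams addressed to 127.0.0.1 arrive here

# A sender on the same machine reaches it via 127.0.0.1:
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'hello', ('127.0.0.1', 9090))
print(server.recvfrom(1024))       # -> (b'hello', ('127.0.0.1', <sender port>))

# A datagram sent to the host's LAN address (e.g. 10.138.19.27) is not
# delivered to this socket; binding to '' / 0.0.0.0 would accept both.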

Pysnmp openServerMode with IPv6

I am trying to start an SNMP agent with pysnmp.
Binding to IPv4 and IPv6 localhost ('127.0.0.1' and '::1') works fine, but when I try to use another IPv6 address, fetched from an interface, it fails with:
[vagrant@test SOURCES]$ sudo python snmp_agent.py enp0s8
Traceback (most recent call last):
  File "snmp_agent.py", line 172, in <module>
    master_agent_startup(ifname=sys.argv[1])
  File "snmp_agent.py", line 101, in master_agent_startup
    (get_ipv6_address(interface_name), SNMP_AGENT_PORT))
  File "/usr/lib/python2.7/site-packages/pysnmp/carrier/asyncore/dgram/base.py", line 50, in openServerMode
    raise error.CarrierError('bind() for %s failed: %s' % (iface, sys.exc_info()[1],))
pysnmp.carrier.error.CarrierError: bind() for ('fe80::a00:27ff:fe9e:9c16', 8001) failed: [Errno 22] Invalid argument
This is the output from the interface 'enp0s8':
[vagrant@test SOURCES]$ ifconfig enp0s8
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.20.20.26 netmask 255.255.255.0 broadcast 172.20.20.255
        inet6 fe80::a00:27ff:fe9e:9c16 prefixlen 64 scopeid 0x20<link>
        ether 08:00:27:9e:9c:16 txqueuelen 1000 (Ethernet)
        RX packets 874053 bytes 115842841 (110.4 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 862314 bytes 114652475 (109.3 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This is the code piece I used for IPv6 bind:
def get_ipv6_address(ifname):
    return netifaces.ifaddresses(ifname)[netifaces.AF_INET6][0]['addr'].split('%')[0]

config.addSocketTransport(snmpEngine, udp.domainName,
    udp.UdpTransport().openServerMode(
        (get_ipv4_address(interface_name), SNMP_AGENT_PORT))
)
config.addSocketTransport(snmpEngine, udp6.domainName,
    udp6.Udp6SocketTransport().openServerMode(
        (get_ipv6_address(interface_name), SNMP_AGENT_PORT))
)
From the pysnmp samples, it seems the parameter to openServerMode() is just a tuple of IP and port.
And from the error output, I suppose there is nothing wrong with the given IP and port.
So why does it fail with Invalid argument?
Could you, @Ilya Etingof, or some other pysnmp expert, help me with it?
Thanks.
UPDATE: I tried the bind with the given suggestion, but it still doesn't work.
The bind below was run from a freshly installed CentOS, and it still failed:
[root@test ~]# ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.10.20.4 netmask 255.255.255.0 broadcast 10.10.20.255
        inet6 fe80::f816:3eff:fee1:5475 prefixlen 64 scopeid 0x20<link>
        ether fa:16:3e:e1:54:75 txqueuelen 1000 (Ethernet)
        RX packets 12242 bytes 962552 (939.9 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 12196 bytes 957826 (935.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@test ~]# python
Python 2.7.5 (default, Oct 11 2015, 17:47:16)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, 0)
>>> addr_and_port = ('fe80::f816:3eff:fedb:ba4f', 8001)
>>> s.bind(addr_and_port)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 22] Invalid argument
>>>
[1]+ Stopped python
[root@test ~]# netstat -anp | grep 8001
[root@test ~]#
One more update:
I suppose the bind fails because my environment has some issue with its IPv6 configuration, as I am only able to get an IPv4 address using the socket.getaddrinfo() method.
Br,
-Dapeng Jiao
You could get this error if you attempted to bind the same socket more than once, but I can't see that being the case in your code.
The .openServerMode() method does no magic -- it just calls .bind() on the socket object. For inspiration, does this work at your Python prompt?
from pysnmp.carrier.asyncore.dgram import udp6
addr_and_port = ('fe80::a00:27ff:fe9e:9c16', 8001)
udp6.Udp6SocketTransport().openServerMode(addr_and_port)
or even:
import socket
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, 0)
addr_and_port = ('fe80::a00:27ff:fe9e:9c16', 8001)
s.bind(addr_and_port)
My hope is that tests like these may help you figuring out the problem...
When you use an IPv6 link-local address, you must always use the scope along with it. The link-local address is not valid without its scope, thus you will receive an Invalid argument error.
For example, instead of using fe80::a00:27ff:fe9e:9c16 you must use fe80::a00:27ff:fe9e:9c16%enp0s8.
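As a minimal sketch of the idea at the Python prompt (interface name enp0s8 taken from the question; adjust for your host), getaddrinfo turns the scoped address into the 4-tuple, including scope_id, that bind() expects:

import socket

scoped = 'fe80::a00:27ff:fe9e:9c16%enp0s8'   # link-local address plus its scope
# Resolve the scope into the sockaddr's scope_id field
info = socket.getaddrinfo(scoped, 8001, socket.AF_INET6, socket.SOCK_DGRAM)
sockaddr = info[0][4]                        # (host, port, flowinfo, scope_id)
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.bind(sockaddr)                             # no Errno 22 once the scope is present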

Erlang - Ejabberd join_cluster Error: {no_ping ...}

Hi, I have been trying to set up an ejabberd cluster.
However, on trying to join node1 from node2, I get an error saying:
On node 2:
# ejabberdctl join_cluster ejabberd@<internal ip of node1>
Error: {no_ping,'ejabberd@<internal ip of node1>'}
I can clearly ping node1 from node2.
Both nodes are hosted in the same region on AWS.
I have tried allowing all traffic on node 1.
Both have the same .erlang.cookie.
Not sure why I continue to get that error.
# ejabberdctl status
The node 'ejabberd@<internal ip of node1>' is started with status: started
ejabberd 16.03.107 is running in that node
# netstat -lnptu
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 2190/epmd
tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN 2233/beam.smp
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 975/sshd
tcp 0 0 0.0.0.0:52189 0.0.0.0:* LISTEN 2233/beam.smp
tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN 2233/beam.smp
tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN 2233/beam.smp
tcp6 0 0 :::4369 :::* LISTEN 2190/epmd
tcp6 0 0 :::22 :::* LISTEN 975/sshd
ejabberdctl.cfg on node1:
ERLANG_NODE=ejabberd@<internal IP of node1>
ejabberd.yml on node1:
loglevel: 4
log_rotate_size: 10485760
log_rotate_date: ""
log_rotate_count: 1
log_rate_limit: 100
hosts:
  - "<external ip of node1>"
listen:
  -
    port: 5222
    module: ejabberd_c2s
    max_stanza_size: 65536
    shaper: c2s_shaper
    access: c2s
  -
    port: 5269
    module: ejabberd_s2s_in
  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/websocket": ejabberd_http_ws
    web_admin: true
    http_bind: true
    captcha: true
auth_method: internal
shaper:
  normal: 1000
  fast: 50000
max_fsm_queue: 1000
acl:
  local:
    user_regexp: ""
  loopback:
    ip:
      - "127.0.0.0/8"
access:
  max_user_sessions:
    all: 10
  max_user_offline_messages:
    admin: 5000
    all: 100
  local:
    local: allow
  c2s:
    blocked: deny
    all: allow
  c2s_shaper:
    admin: none
    all: normal
  s2s_shaper:
    all: fast
  announce:
    admin: allow
  configure:
    admin: allow
  muc_admin:
    admin: allow
  muc_create:
    local: allow
  muc:
    all: allow
  pubsub_createnode:
    local: allow
  register:
    all: allow
  trusted_network:
    loopback: allow
language: "en"
modules:
  mod_adhoc: {}
  mod_announce: # recommends mod_adhoc
    access: announce
  mod_blocking: {} # requires mod_privacy
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {} # requires mod_adhoc
  mod_disco: {}
  mod_irc: {}
  mod_http_bind: {}
  mod_last: {}
  mod_muc:
    host: "conference.@HOST@"
    access: muc
    access_create: muc_create
    access_persistent: muc_create
    access_admin: muc_admin
  mod_muc_admin: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  mod_privacy: {}
  mod_private: {}
  mod_pubsub:
    access_createnode: pubsub_createnode
    ignore_pep_from_offline: true
    last_item_cache: false
    plugins:
      - "flat"
      - "hometree"
      - "pep" # pep requires mod_caps
  mod_roster: {}
  mod_shared_roster: {}
  mod_stats: {}
  mod_time: {}
  mod_vcard:
    search: false
  mod_version: {}
allow_contrib_modules: true
I faced the same issue while setting up an ejabberd cluster on EC2. Here is the solution for reference.
Make sure the following ports are open on the internal/private network:
5222 - xmpp client connection
5280 - web portal
4369 - EPMD
5269 - S2S
4200 - 4210 node communication
Also allow internal ping (icmp packets) just in case.
Next, set the FIREWALL_WINDOW option in the ejabberdctl.cfg file as follows. This makes Erlang use a defined range of ports instead of dynamic ports for node communication (refer to ejabberdctl.cfg).
FIREWALL_WINDOW=4200-4210
And use full node names for your ejabberd nodes, e.g. ejabberd@srv1.example.com.
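For example (the hostnames here are placeholders, not from the original answer), node1's ejabberdctl.cfg might then contain:

ERLANG_NODE=ejabberd@srv1.example.com
FIREWALL_WINDOW=4200-4210

with node2 using its own fully qualified name for ERLANG_NODE.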
It seems you are missing configuration in ejabberdctl.cfg. Change the following line in your ejabberdctl.cfg file from
#INET_DIST_INTERFACE=127.0.0.1
to
INET_DIST_INTERFACE=104.10.120.122 (whatever your host's public IP is)
Then open an Erlang console and run the following command:
net_adm:ping('ejabberd@ejabberd1'). % your node
If it returns pong, you can now cluster the ejabberd nodes. Run the following command to form the cluster:
ejabberdctl join_cluster 'ejabberd@ejabberd1'
First, @Uday Sawant's method is mandatory.
You should also add each node's information to /etc/hosts on both systems.
For example, if your nodes are
ejabberd@node1
ejabberd@node2
set the hostnames as follows.
For the OS, add your ejabberd node hostnames:
vi /etc/hosts
...
10.0.100.1 node1
10.0.100.2 node2
For Erlang:
vi $HOME/.hosts.erlang
'node1'.
'node2'.

Different processes showed as same PID within netstat

I spawn a few processes using the Python multiprocessing module.
However, when I call netstat -nptl, each ip:port listener is listed under the same PID.
I'm using Python 2.7 on Ubuntu 14.04.
netstat -V
>> net-tools 1.60
>> netstat 1.42 (2001-04-15)
Relevant code:
import unittest
import multiprocessing
import socket
import os
import time

import ex1


class Listener(multiprocessing.Process):
    def __init__(self, _ttl):
        super(Listener, self).__init__()
        self.ttl = _ttl
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind(('localhost', 0))

    def get_pid(self):
        return self.pid

    def get_name(self):
        return self.socket.getsockname()

    def run(self):
        self.socket.listen(1)
        time.sleep(self.ttl)

    def listen(self):
        self.start()


class TestEx1(unittest.TestCase):
    def test_is_legal_ip(self):
        # Legal IP
        assert(ex1.is_legal_ip("1.1.1.1:55555"))
        assert(ex1.is_legal_ip("0.1.1.255:55555"))
        assert(ex1.is_legal_ip("0.0.0.0:55555"))
        assert(ex1.is_legal_ip("255.255.255.255:55555"))
        assert(ex1.is_legal_ip("0.1.2.3:55555"))
        # Illegal IP
        assert(not ex1.is_legal_ip("256.1.1.1:55555"))
        assert(not ex1.is_legal_ip("1.256.1:55555"))
        assert(not ex1.is_legal_ip("1.1.1.1.1:55555"))
        assert(not ex1.is_legal_ip("1.a.1.1:55555"))
        assert(not ex1.is_legal_ip("1.1001.1.1:55555"))

    def test_address_to_pid(self):
        # Create 3 listener processes
        listener1 = Listener(22)
        listener2 = Listener(22)
        listener3 = Listener(22)
        # Start listening
        listener1.listen()
        listener2.listen()
        listener3.listen()
        print listener1.get_pid()
        print listener2.get_pid()
        print listener3.get_pid()
        # For each listener, get the appropriate ip:port
        address1 = str(str(listener1.get_name()[0])) + \
            ":" + str(listener1.get_name()[1])
        address2 = str(str(listener2.get_name()[0])) + \
            ":" + str(listener2.get_name()[1])
        address3 = str(str(listener3.get_name()[0])) + \
            ":" + str(listener3.get_name()[1])
        # Check if address_to_pid() works as expected.
        #assert(str(ex1.address_to_pid(address1)) == str(listener1.get_pid()))
        #assert(str(ex1.address_to_pid(address2)) == str(listener2.get_pid()))
        #assert(str(ex1.address_to_pid(address3)) == str(listener3.get_pid()))
        # Wait for the listener processes to finish
        listener1.join()
        listener2.join()
        listener3.join()


if __name__ == "__main__":
    unittest.main()
Output:
4193
4194
4195
..
----------------------------------------------------------------------
Ran 2 tests in 22.019s
OK
Netstat -nptl output:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:37529 0.0.0.0:* LISTEN 4192/python
tcp 0 0 127.0.0.1:53402 0.0.0.0:* LISTEN 4192/python
tcp 0 0 127.0.0.1:49214 0.0.0.0:* LISTEN 4192/python
tcp 1 0 192.168.46.136:49475 209.20.75.76:80 CLOSE_WAIT 2968/plugin_host
tcp 70 0 192.168.46.136:60432 91.189.92.7:443 CLOSE_WAIT 3553/unity-scope-ho
tcp6 0 0 ::1:631 :::* LISTEN -
Using my Mac OS 10.9.5 (Python 2.7.3), I could reproduce the same behavior. After some trial and error, it turned out to happen because the socket objects are shared among the processes. (lsof -p <pid> helps to identify listening sockets.)
When I made the following change to the Listener class, each process started listening on its own port number under its own PID.
    def get_name(self):
        # this method cannot refer to the socket object any more;
        # self.sockname should be initialized as "not-listening" at __init__
        return self.sockname

    def run(self):
        # Instantiate the socket at run
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind(('localhost', 0))
        self.sockname = self.socket.getsockname()
        # Listener knows its sockname
        print self.sockname
        self.socket.listen(1)
        time.sleep(self.ttl)
This behavior should be the same with Ubuntu's Python and netstat.
self.sockname remains "not-listening" in the original process.
To listen on a port as an independent process, the socket needs to be created in the run() method of the Listener object (the new process calls this method after creating a copy of the object). However, variables updated in that copied object in the new process are not reflected back to the original object in the original process.
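As a small illustration of that last point (a sketch, not part of the original answer): if the parent still needs to know the child's bound address, the child can report it explicitly, for example through a multiprocessing.Queue, since plain attribute updates in the child are not visible to the parent:

import multiprocessing
import socket
import time


class Listener(multiprocessing.Process):
    def __init__(self, _ttl):
        super(Listener, self).__init__()
        self.ttl = _ttl
        self.sockname = "not-listening"
        # The child puts its bound (host, port) here for the parent to read
        self._queue = multiprocessing.Queue()

    def run(self):
        # Created in the child, so netstat attributes the socket to the child PID
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(('localhost', 0))
        self._queue.put(sock.getsockname())
        sock.listen(1)
        time.sleep(self.ttl)

    def get_name(self):
        # Blocks until the child has reported where it is listening
        if self.sockname == "not-listening":
            self.sockname = self._queue.get()
        return self.sockname

Usage would mirror the test above: create a Listener, call start(), and then get_name() returns the address the child actually bound to.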