Icecast: streaming two different playlists

I already modified my icecast.xml
<listen-socket>
<port>8000</port>
</listen-socket>
<listen-socket>
<port>8001</port>
</listen-socket>
<fileserve>2</fileserve>
I want to send one playlist through port 8000 (I'm already doing so), but I need to send a different playlist through port 8001.
The problem is that on port 8001 I hear the stream from port 8000, not my second playlist.

I recommend having a look at this nice explanation.
Basically what this means is that you don't need to mess with port numbers, as Icecast maps streams to virtual paths on one web server.
As an example you could then have:
http://stream.example.org:8000/stream1.ogg
http://stream.example.org:8000/stream2.ogg
Or if those streams are unrelated you could also access the very same streams as:
http://radio1.example.org:8000/stream1.ogg
http://radio2.example.org:8000/stream2.ogg
As long as both hostnames resolve to the IP address of your Icecast server.
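For illustration only, here is a minimal sketch of what the relevant part of icecast.xml could look like (the mount names are made up, and the <mount> blocks are optional, since mounts also come into existence when a source connects to them):

<listen-socket>
    <port>8000</port>
</listen-socket>

<mount>
    <mount-name>/stream1.ogg</mount-name>
</mount>
<mount>
    <mount-name>/stream2.ogg</mount-name>
</mount>

Each source client (one per playlist) then connects to the same port 8000 but with its own mount name, and listeners pick the stream by URL path as shown above, so the second port 8001 is not needed at all.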

Related

Is it possible to maintain three TCP/IP client connections on the same machine, all connected to one server?

I am working on a requirement where I need to maintain three TCP/IP clients, all connected to the same server.
Is it possible to run those three clients on the same machine?
If that is not possible and I need to run the clients on three remote machines connected to the same server, how could I synchronize those clients?
I would be glad for any suggestions. Thanks in advance.
Sure, why wouldn't this be possible? Those three clients can even connect to the same remote port.
Yes, surely that should not be a problem. You can even do that within the same thread.
Example here: you can create more objects of the class tcp_client_c...
Yes, it is very much possible. TCP connections are identified by 4-tuples:
(src IP, src port, dest IP, dest port)
If you run your clients on the same machine, they all have the same source IP, so their source ports will have to differ. All three clients can then connect to the same server listening on a single port.
These three connections can be distinguished because one item of the 4-tuple (the source port) is different for each.
As pointed out by Remy, the OS by default assigns unique source port numbers to applications, but if you specifically bind to a certain port, the onus is on you to bind to a unique port number.
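As a quick illustration, here is a minimal sketch (POSIX sockets; the server address and port are made up, and a Winsock version would additionally need WSAStartup/WSACleanup) of three TCP clients in one process connecting to the same server and port; the OS picks a distinct ephemeral source port for each, so the three 4-tuples differ:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    const char* SERVER_IP = "192.0.2.10";  // assumption: your server's address
    const uint16_t SERVER_PORT = 5000;     // assumption: your server's listening port

    int clients[3];
    for (int i = 0; i < 3; ++i) {
        clients[i] = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(SERVER_PORT);
        inet_pton(AF_INET, SERVER_IP, &addr.sin_addr);

        // Each connect() gets its own ephemeral source port from the OS.
        if (connect(clients[i], (sockaddr*)&addr, sizeof(addr)) == 0)
            printf("client %d connected\n", i);
        else
            perror("connect");
    }

    // ... exchange data on each socket independently, then clean up ...
    for (int i = 0; i < 3; ++i)
        close(clients[i]);
    return 0;
}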

C++ sockets: communication between PCs over internet

I'm writing a program on Windows using Winsock that can send messages to another computer. The client connects to the server on the other computer and they begin exchanging data.
It works fine on my local network using local addresses (192.168.1.*), but I can't communicate with public addresses (216.185.45.129), not even my own. I can successfully connect to a website on port 80, but not to my laptop at home using its public IP address, regardless of what (unreserved) ports I use.
So I did research online and the only solution that seems to work is port forwarding.
-But is there absolutely no other way to achieve this?
-How do other programs like Teamviewer connect to other computers on the network then?
-Is there an already open but typically unused port that I can use?
-At the very least, can I forward the ports on my router but not have the client do anything? Or maybe have my program forward the ports automatically.
The main problem is that every router uses NAT to distinguish the different computers on your local network from the WAN. It has to do this because you have only one IP address on the internet but several devices at home. To achieve this, it remaps ports. That means if two devices both send from, say, port 2048 to a web server on the internet, the router gives one device another port (like 2049). The response carries the port of the requester, so the router can map it back. Unfortunately, most routers always remap ports, so you never know which port you have from the internet side.
There are two common ways to work around this and achieve your goal.
Port Forwarding
You can force most routers not to remap specific ports but to bind them to unique MAC addresses. You can use UPnP to configure most routers to do this, but I do not recommend it for security reasons, and it also does not work in many environments where routers do not allow UPnP manipulation.
Most routers have port-forwarding abilities for gaming reasons (mostly used in P2P networks).
It works with TCP and UDP.
NAT Traversal
The common way is NAT traversal, also known as NAT hole punching. I will describe it briefly for UDP. You can find a wiki explanation for TCP here and for UDP here. Unfortunately you need a server on the internet that both clients can reach. Here are the steps:
Both clients contact the server. The server now knows the IP and port of both clients.
The server sends this information back to the clients.
Both (!) clients now send packets to each other at the known addresses.
It is necessary that both clients send a UDP packet and accept that the first packet may get lost. The reason is the router: most routers only accept packets from a source on a mapped port if a client has sent a packet to that source before.
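To make the steps concrete, here is a hedged sketch for the UDP case (POSIX sockets; the rendezvous server address and its "reply with the peer as plain ip:port text" protocol are inventions for illustration, and error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

int main() {
    const char* RENDEZVOUS_IP = "203.0.113.10";  // assumption: your server on the internet
    const uint16_t RENDEZVOUS_PORT = 9000;       // assumption

    // One socket for everything, so the NAT mapping created in step 1
    // is the same one the peer's packets will arrive on later.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Step 1: contact the server; it records our public IP and port.
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(RENDEZVOUS_PORT);
    inet_pton(AF_INET, RENDEZVOUS_IP, &srv.sin_addr);
    const char* hello = "register";
    sendto(sock, hello, strlen(hello), 0, (sockaddr*)&srv, sizeof(srv));

    // Step 2: the server replies with the other client's public address,
    // here assumed to be plain text like "198.51.100.7:40123".
    char buf[64] = {0};
    recvfrom(sock, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
    std::string reply(buf);
    std::string peer_ip = reply.substr(0, reply.find(':'));
    uint16_t peer_port = static_cast<uint16_t>(std::stoi(reply.substr(reply.find(':') + 1)));

    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(peer_port);
    inet_pton(AF_INET, peer_ip.c_str(), &peer.sin_addr);

    // Step 3: both clients send to each other; the first packets may be lost
    // until each NAT has seen outbound traffic towards the peer's address.
    // A real client would send and receive concurrently, with timeouts.
    for (int i = 0; i < 5; ++i) {
        const char* ping = "ping";
        sendto(sock, ping, strlen(ping), 0, (sockaddr*)&peer, sizeof(peer));
        sleep(1);
    }
    char data[64] = {0};
    recvfrom(sock, data, sizeof(data) - 1, 0, nullptr, nullptr);
    printf("got from peer: %s\n", data);

    close(sock);
    return 0;
}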
UPDATE
Regarding a comment by Remy Lebau, I changed the firewall-piercing part to NAT Traversal, as it was partly wrong.

What is the need for localhost in socket programming as well as in general applications?

Localhost follows the loopback mechanism.
Why do we have to loop packets back to our own computer? What is the need for that (in the general case, and especially in socket programming)?
Also, kindly mention some practical applications of localhost.
Another clarification I need:
localhost resolves to 127.0.0.1 (most of the time)
my host name, say "vinoth-computer", resolves to 192.168.111.12
Are 127.0.0.1 and 192.168.111.12 one and the same?
Think about the following situation: you have a client and a server application running on separate stations in production, but in QA or for unit testing you want to run the client and server instances on the same station. In the client's definitions or parameters you can then set the server address to 'localhost' or '127.0.0.1'.
Also, sometimes you want to run two separate processes on the same station because by design they should run on the same station. You can set up communication between them through sockets and use localhost on the client side.
The local loopback can be used for applications to communicate with each other. There are many ways to do this, but this is one of the simplest.
As a concrete application, a great example is the Apache server, which by default listens on localhost as well. So when you are developing a web application, you can simply use localhost or 127.0.0.1 as the address in your favorite browser.
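As a small, hedged sketch of the QA/unit-test case (POSIX sockets; the port number is made up): a client that takes the server address as a parameter can point at 127.0.0.1 when the server runs on the same station, and at the real host in production, with no other code change:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv) {
    // Pass "127.0.0.1" (or nothing) to talk to a server on this same machine,
    // or the production host's address otherwise. Port 7000 is just an example.
    const char* server_ip = (argc > 1) ? argv[1] : "127.0.0.1";

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7000);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) == 0)
        printf("connected to %s\n", server_ip);
    else
        perror("connect");

    close(sock);
    return 0;
}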
192.168.111.12 is not the same as 127.0.0.1.
In your case it is the IP that refers to your computer on your local network (behind some router). Other computers in your network can address your computer using this address.
If you want to know more, or want something explained in more detail, feel free to ask.

Hosting multiple websites on a single server

I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an haproxy config file:
frontend main
bind *:80
use_backend app1svr if { hdr(host) -i app1.example.com }
use_backend app2svr if { hdr(host) -i app2.example.com }
backend app1svr
server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1
backend app2svr
server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says listen on port 80 of all local IP addresses; if the "Host" header contains "app1.example.com" (-i means case-insensitive) then use the "app1" backend configuration and send the request to that server; do something similar for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, if no match, it will return "503 Service Unavailable," which is what it will also return if the requested back-end isn't currently running.
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
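As a side note (not part of the original configuration above, but worth knowing): haproxy only inserts that header if you enable it, typically with a line like the following in the defaults, frontend, or backend section:

option forwardfor

With that in place, every request relayed to a backend carries an X-Forwarded-For header containing the original client IP, which your application can log or inspect instead of the socket's peer address.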
Note these examples do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but the application seems otherwise identical.
Rerouting the way you show depends on the OS your server is hosted on. For Linux you have to use iptables; for Windows you could use Windows Firewall. You should set all incoming connections on port 80 to be redirected to the desired port, e.g. 3000.
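On Linux, a hedged example of such a redirect (assuming the web server listens on port 3000 on the same machine; adapt to your distribution and firewall tooling) could be:

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000

This rewrites the destination port of incoming TCP connections aimed at port 80 so they land on port 3000 instead.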
But, instead of port, you could use a different host name for each service, like
app1.apps.com
app2.apps.com
and so on. You can configure this with redirects at your DNS hosting for apps.com. IMHO this is the best solution, if I understood you correctly.
Also, you can configure a single host to reroute to all other sites, like
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that in this case all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *>
ServerName app1.example.com
DocumentRoot /var/www/html/app1/
DirectoryIndex index.html
</VirtualHost>

Accessing a webservice on my localhost

I have a running web service published at "http://localhost:8080/FreeMeteoWS/FreeMeteoWS?WSDL". I want to access this web service from a device on the same network. What address should I use in order to retrieve the WSDL?
If that port (8080) is open for incoming connections on your computer, you only need to find your local IP address; this is done in different ways on different operating systems. Once you have obtained the local IP address, swap localhost out in favor of that address, e.g. "http://192.168.1.23:8080/FreeMeteoWS/FreeMeteoWS?WSDL" (using a made-up LAN address).