Mixing two Icecast streams with Liquidsoap and streaming the result to an Icecast server - icecast

I am trying to mix two streams with Liquidsoap, one panned to the left and the other to the right. How do I mix them and stream the result to an Icecast server?
I'm already streaming those two streams with darkice.
Here is my pseudo-code
stream1 = 'localhost/stream1'  # streamed with darkice on my local machine
stream2 = 'localhost/stream2'  # streamed with darkice on my local machine
stream3 = mix(stream1[on the left], stream2[on the right])
output.icecast(stream3)
Does anyone have any idea? I'm new to this kind of problem.

You could use input.harbor to get the streams into liquidsoap, then mix them together.
source_1 = input.harbor("source1", port=9000)
source_2 = input.harbor("source2", port=9001)
mixed = add([source_1, source_2])
output.icecast(%vorbis, id="icecast",
               mount="mystream.ogg",
               host="localhost", password="hackme",
               icy_metadata="true", description="",
               url="",
               mixed)
If the streams are already left/right panned, this should work. Otherwise liquidsoap does have a stereo.pan function.
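For the left/right placement asked about in the question, one possible sketch is to pan each harbor input before summing them. This assumes stereo.pan takes a pan value between -1.0 (hard left) and 1.0 (hard right); check "liquidsoap -h stereo.pan" for the exact signature in your version:
# Pan one source hard left and the other hard right, then sum them.
left_side  = stereo.pan(pan=-1.0, source_1)
right_side = stereo.pan(pan=1.0, source_2)
mixed = add([left_side, right_side])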

Liquidsoap has a built-in crossfade function that does what you want. For more advanced fading there is the smart crossfade function.

Related

Live555MediaServer restarts the stream at every new connection. Why is setting "reuseSource" to true not working as expected?

Live555MediaServer can be used to stream video files as rtsp streams. I have 2 clients (vlc) that connect to the server, A and B. I want to see the exact video stream in both the clients. Here is the problem: I connect A and after 10 seconds I connect B. When B is connected the video that I see starts over from the beginning, while A keeps streaming as it was.
I would like the 2 concurrent streams to be synchronized.
The live555 doc says that setting reuseFirstSource to True should work, so I tried to set reuseSource to true at DynamicRTSPServer:121, but it didn't work. When I connect to the server using client B, the video restarts from the beginning.
Boolean const reuseSource = True;
I expect to see the 2 concurrent streams synchronized even if one starts with a delay with respect to the other one.
I finally found a workaround and the reason for this 'bug'.
Quick answer: set the if condition at line 67 to false, i.e.
if (smsExists && isFirstLookupInSession) {
becomes
if (false) {
Explanation: every time a new session is starting, the isFirstLookupInSession variable is set to true and the session is removed and recreated.
I wrote to the support of live555, and Finlayson told me, and I quote:
“LIVE555 Media Server” code was always intended to work this way, and was intended to be a ‘stand-alone appliance’ that does not have its code modified (e.g., by changing the value of “reuseFirstSource”).
Thus the only solution for creating an RTSP server with live555 is to create your own server starting from the testProgs examples.
The workaround proposed here could generate unwanted behavior, but for a simple RTSP server with multiple streams it's fine.
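For reference, a minimal sketch of that approach, modelled on the testOnDemandRTSPServer example, with reuseFirstSource enabled so that every client attaches to the same source. The port, file name, and subsession class below are placeholders; pick the subsession type matching your media:
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  Boolean reuseFirstSource = True; // all clients share one input source

  ServerMediaSession* sms = ServerMediaSession::createNew(*env, "stream", "stream", "shared session");
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264", reuseFirstSource));
  rtspServer->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}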

Get the binary data transferred from a gRPC client

I am new to the gRPC framework, and I have created a sample client-server on my PC (referring to this).
In my client-server application I have implemented a simple RPC:
service NameStudent {
  rpc GetRoll(RollNo) returns (Details) {}
}
The client sends a RollNo and receives his/her details which are name, age, gender, parent name, and roll no.
message RollNo {
  int32 roll = 1;
}
message Details {
  string name = 1;
  string gender = 2;
  int32 age = 3;
  string parent = 4;
  RollNo rollid = 5;
}
The actual server and client code is an adaptation of the sample code explained here.
Now my server is able to listen on "0.0.0.0:50051" (address:port) and the client is able to send the roll no to "localhost:50051" and receive the details.
I want to see the actual binary data that is transferred between client and server. I have tried using Wireshark, but I don't understand what I am seeing.
Here is the screenshot of the Wireshark capture.
And here are the details of the highlighted entry from the above screenshot.
I need help understanding Wireshark here, or any other way that can be used to see the binary data.
Wireshark uses the port to determine how to decode the communication, and it doesn't know any protocol associated with 50051. So you need to configure it to treat this as HTTP.
Right click on a row and select "Decode As..." in the context menu.
Then set "Current" to "HTTP" or "HTTP2" (HTTP will generally auto-detect HTTP2) and hit "OK".
Then the HTTP/2 frames should be decoded. And if using a recent version of Wireshark, you may also see the gRPC frames decoded.
The whole idea of gRPC is to HIDE that. Let's say we ignore that and you know what you're doing.
Look at https://en.wikipedia.org/wiki/Protocol_Buffers. gRPC uses Protocol Buffers for its data representation. You might get a hint at the data you're seeing.
Two good starting points for a reverse-engineering exercise are:
Start simple: compile a program that sends an integer. Understand it. Sniff it. Then compile a program that sends a string. Try several values. Once you understand it, move on to tackling the problem of understanding how Google sends your structure.
Use known data and make small variations: knowing what 505249... means is easier if you start from knowing the data you're sending (as an example, send the string "Hello world"; then change it to "Hella world"; see what changes in the coded sniff; also check that sending the same data several times produces the same sniffed output). Apply the prior point: start simple, first an empty string, then " ", then "a", then "b", etc., and then move on to complex and larger strings. Don't be afraid to start simple.
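As a concrete illustration of the "start simple" idea, the sketch below serializes the RollNo message from this question and hex-dumps the wire bytes, so the Wireshark payload can be compared against known input. The header name student.pb.h is a placeholder for whatever protoc generated from your .proto:
#include <cstdio>
#include <string>
#include "student.pb.h" // placeholder: the header protoc generated from your .proto

int main() {
  RollNo request;
  request.set_roll(5);

  std::string wire;
  request.SerializeToString(&wire);

  // For roll = 5 this prints "08 05": tag byte 0x08 (field number 1, varint) then the value.
  for (unsigned char byte : wire) {
    std::printf("%02x ", byte);
  }
  std::printf("\n");
  return 0;
}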

Canon SDK (EDSDK) capture region of specified size for video stream

I am very new to the EDSDK, so sorry if the question is odd in places.
Is it possible to access a video stream and perform some operations on it using the SDK? I need this to capture a very thin region (ROI) of a specified size (for example 3840x10 px) for each frame in the stream. Don't read this as compressing the frame; the aspect ratio does not need to be preserved. In theory these changes should increase the fps, because the region will be very thin (should they?).
I found the code snippet below in the official documentation, although it seems this only sends a signal to start and stop movie recording, without accessing the stream.
EdsUInt32 record_start = 4; // Begin movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_start), &record_start);
EdsUInt32 record_stop = 0; // End movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_stop), &record_stop);
I would be very thankful for any suggestions and help. Please feel free to ask for any additional information!
This SDK doesn't allow you to directly access high-resolution streams the way industrial cameras would. You can access ~960x640 live-view images over USB as sequential JPEGs. Movie recording can only be done to the internal card, and the result transferred after stopping. Outside of this SDK, an external HDMI recorder gives access to a near-realtime feed at up to Full HD 1080p, depending on the model, and the feed is not always "clean".
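A rough sketch of that live-view path is below, assuming live view has already been enabled by setting kEdsPropID_Evf_OutputDevice to kEdsEvfOutputDevice_PC and that camera is an opened EdsCameraRef (function names as in the EDSDK headers; verify against your SDK version). Cropping the thin 3840x10 ROI would have to happen on the PC side after decoding the JPEG:
EdsStreamRef stream = NULL;
EdsEvfImageRef evfImage = NULL;
EdsError err;

err = EdsCreateMemoryStream(0, &stream);          // 0 = let the buffer grow as needed
if (err == EDS_ERR_OK) err = EdsCreateEvfImageRef(stream, &evfImage);
if (err == EDS_ERR_OK) err = EdsDownloadEvfImage(camera, evfImage);

// On success the memory stream holds a JPEG of the current live-view frame
// (roughly 960x640 over USB); decode it and crop the ROI yourself.

if (evfImage) EdsRelease(evfImage);
if (stream) EdsRelease(stream);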

Create an iostream using boost asio specifying ip and port

I have a problem concerning the Boost.Asio libraries. I successfully created a socket between a client and a server; this involves creating resolvers in order to specify the ip and port of the server (the server only requires the port) and other objects, but, most importantly, it is necessary to use write and read_some as the functions for writing to and reading from the socket.
I would really prefer to use a stream, and this is possible in Boost.Asio, but it's strange...
In almost all examples using streams, to create a server it is necessary to provide the port; ok, now let's talk about the client... On the client side, it is necessary to use the iostream constructor to specify the coordinates for connecting the stream; here's the code:
tcp::iostream s(argv[1], "daytime");
Well, I don't really understand what is passed in the first parameter, and I don't know what "daytime" might represent...
Basically, here I'm saying: "Hey stream, you must connect to this server..." but how can I specify the ip and port of that server?
Note that, by contrast, everything is almost clear on the server side:
boost::asio::io_service io_s;
tcp::acceptor acc(io_s, tcp::endpoint(tcp::v4(), 1950));
for (;;) {
    tcp::iostream stream;
    acc.accept(*stream.rdbuf());
    stream << "Message" << std::endl;
}
Using this model, I would like to use
stream << mymessage_to_send << std::endl;
stream >> a_string_containing_my_message;
in order to send and receive.
How can I do this?
Thank you very much.
The boost asio sample code you quoted:
tcp::iostream s(argv[1], "daytime");
uses "daytime" as a lookup into the services table (usually in /etc/services on a linux system), which would identify that the port for the daytime service is 13.
If you want to connect to a port that is not one of the well known services, you can do so with something like:
tcp::iostream s("localhost", "57002");
Note that the port number is supplied as a string, not as an unsigned short integer as one might be tempted to try.
Of course, "localhost" can be replaced with an IP address "127.0.0.1"
Let's solve all 3 issues here:
Creating the iostream around the socket client side.
This is really simple:
boost::asio::ip::tcp::iostream socketStream;
socketStream.connect( hostname, std::to_string( port ) );
You have to check the state of the stream to see if it connected successfully.
Creating the iostream around the socket server side.
Assuming you have your acceptor object and it is bound and listening..
boost::asio::ip::tcp::iostream connectionSocketStream; // from the connection object
acceptor.accept( *connectionSocketStream.rdbuf() );
// or
acceptor.async_accept( *connectionSocketStream.rdbuf(), callback );
where callback is a function that takes an error code.
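As a sketch, that callback could simply be a lambda that checks the error code (note the stream object must stay alive until the handler runs):
acceptor.async_accept( *connectionSocketStream.rdbuf(),
    []( const boost::system::error_code& error )
    {
        if ( !error )
        {
            // connectionSocketStream is now connected and can be streamed to/from
        }
    } );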
Streaming the objects
Now for the streaming itself. Your issue here is that when you stream out the string "Message", the client side needs to know where the message begins and ends, and a regular iostream won't write anything to indicate this. That is really a limitation of iostream itself.
The answer therefore is to use a boost archive; you can use a text or a binary archive as long as you use the same kind on both ends. It doesn't even matter if one side uses 64-bit big-endian and the other 32-bit little-endian, or any other mix.
Using a binary archive you would send a message this way:
boost::archive::binary_oarchive oarch( socketStream, boost::archive::no_header );
const std::string message = "Message";
oarch << message;
Remember to flush the stream (socketStream, not oarch) when you have completed sending all you wish to send at this point.
and receive a message
boost::archive::binary_iarchive iarch( socketStream, boost::archive::no_header );
iarch >> message;
You would potentially create one archive and use it throughout, especially for outbound. For inbound you may have issues if you get a streaming error as it will break your archive.
You can use a text archive instead of a binary one.
The boost archive will automatically put in header information so it knows when an object is complete and will only return to you once it has a complete object or something has broken.
Note: primitive types, e.g. std::string and even vector< int > etc., are automatically handled by an archive. Your own classes will need special overloads describing how to stream them. You should read the boost::archive documentation.
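As an illustration of what such an overload usually looks like, here is a sketch with a hypothetical Student type using the intrusive member-function form (see the Boost.Serialization documentation for the non-intrusive variant):
#include <boost/serialization/access.hpp>
#include <boost/serialization/string.hpp>
#include <string>

class Student
{
public:
    std::string name;
    int         roll = 0;

private:
    friend class boost::serialization::access;

    // Called for both saving and loading; the archive type decides the direction.
    template< class Archive >
    void serialize( Archive& ar, const unsigned int /*version*/ )
    {
        ar & name;
        ar & roll;
    }
};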
Note: you can connect the archive object to the stream before the stream has been opened. The archive works on the streambuf object, which does not change depending on whether the stream opened successfully.
Creating the archive without no_header would be an issue, though, as archives immediately try to use the stream on construction (to read or write their header).
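Putting the client-side pieces together, a minimal sketch, assuming a server listening on localhost:1950 as in the acceptor example above:
#include <boost/asio.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <iostream>
#include <string>

int main()
{
    // Connect the iostream; note the port is passed as a string.
    boost::asio::ip::tcp::iostream socketStream;
    socketStream.connect( "localhost", "1950" );
    if ( !socketStream )
    {
        std::cerr << "connect failed: " << socketStream.error().message() << std::endl;
        return 1;
    }

    // no_header on both ends, as discussed above.
    boost::archive::binary_oarchive oarch( socketStream, boost::archive::no_header );
    const std::string message = "Message";
    oarch << message;

    // Flush the stream (not the archive) so the data is actually sent.
    socketStream.flush();
    return 0;
}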
I've written a client/server system using Boost.Asio. The source is available on GitHub: Client.cpp and Server.cpp. Using Boost.Serialization together with Boost.Asio allows me to send arbitrary datastructures over the wire. I must say it is quite impressive!

How to Join N live MP3 streams into one using FFMPEG?

How do I join N live MP3 streams (radio streams such as this live KCDX mp3 stream: http://mp3.kcdx.com:8000/stream) into one using FFMPEG? (I have N incoming live mp3 streams and I want to join them and stream out 1 live mp3 stream.) I mean I want to mix the sounds as if N speakers were speaking at the same time (by the way, N stereo into 1 mono). Please help.
BTW: my problem is mainly how to make FFMPEG read from a stream rather than from a file...
Would you mind giving some code examples, please?
It looks like url_fopen(), defined in avio.h, is the function you are looking for.
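Note that url_fopen() comes from the older libavformat I/O API; in more recent FFmpeg releases the public way to read a network stream is avformat_open_input(), which accepts a URL directly. A minimal sketch (error handling trimmed) that opens the live stream exactly like a local file:
extern "C" {
#include <libavformat/avformat.h>
}

int main()
{
    avformat_network_init(); // needed once before using network protocols

    AVFormatContext* ctx = nullptr;

    // avformat_open_input takes a URL, so a live HTTP stream is opened like a file.
    if (avformat_open_input(&ctx, "http://mp3.kcdx.com:8000/stream", nullptr, nullptr) < 0)
        return 1;
    if (avformat_find_stream_info(ctx, nullptr) < 0)
        return 1;

    av_dump_format(ctx, 0, "kcdx", 0); // print the detected MP3 audio stream

    // From here, av_read_frame() delivers packets that can be decoded and mixed.
    avformat_close_input(&ctx);
    return 0;
}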