How to password protect a stream on Wowza?

After setting up a stream, is it possible to password-protect it, so that whenever someone accesses the stream (for example, by entering the IP address in VLC) the player requires a password?

Please take a look at Wowza SecureToken. It allows you to generate a unique token for each video stream, derived by hashing something like "StreamName|Password". So even if somebody obtains the token for one video stream, they can't use it to access other streams unless they know the hashing algorithm.
Good luck.

Related

How to read Timing Reference Signal/Ancillary data from a video stream?

I'm looking for a solution that makes it possible to read the Timing Reference Signal (TRS) and Ancillary data (HANC and VANC) of a serial digital video signal. The TRS gives information about the start and end of active video (SAV/EAV); the Ancillary data carries, for example, embedded audio. With this I want to write an application that analyzes the data transported in the non-picture area of serial video.
I have read a lot about GStreamer and found GstVideo's Ancillary API, a collection that makes it possible to handle the Ancillary data of a video.
Unfortunately, it's not clear to me how this collection works. It looks to me as though it can only construct Ancillary data for a video, and that it is not possible to read ancillary data from a detected video stream.
Another idea is to read the whole video stream and, as a first step, display the data words of the stream. TRS and ANC packets have to start with a special sequence of identifiers that makes it possible to localize them. Is GStreamer the right choice for this? Are there better libraries to recommend for this task?

Use TIdCmdTCPServer for binary data?

I have a TCP server in my Windows application based on TIdCmdTCPServer and it has been performing great. Now I have a device that does not send text strings but binary data. I really don't want to rewrite my working code to use TIdTCPServer.
Is there any way I can use my current code and grab the discarded data somewhere, i.e. can I have access to the raw data received without interfering with other connections that use text?
I tried the OnBeforeCommandHandler event, but it already seems to assume the data is a string (i.e. cuts off at the first zero byte).
TIdCmdTCPServer does not stop reading on nul bytes, as you claim. By default, it reads each command by calling TIdIOHandler.ReadLn(), which reads until a (CR+)LF is received. However, the TIdIOHandler.DefStringEncoding is set to US-ASCII by default, so that could be causing data loss when reading binary data as a string.
That being said, TIdCmdTCPServer is designed primarily for textual commands. By default, it cannot receive binary commands or binary parameters. However, if the binary data follows a textual command, your TIdCommandHandler.OnCommand event handlers can read the binary data after the command has been received, by simply reading the binary using the ASender.Context.Connection.IOHandler as needed.
Otherwise, if that does not suit your needs (because the binary commands are not in a format that would normally trigger an OnCommand event), you will have to derive a new class from TIdCmdTCPServer and have it either:
override the virtual ReadCommandLine() method to read a binary command and its parameters from the Connection and return that data in a string format of your choosing (you can use Indy's IndyTextEncoding_8bit encoding, or BytesToStringRaw() function, to help you with that, or use whatever string format you want). And then define a CommandHandler to match that stringified command.
override the virtual DoExecute() method, then you have full control over the reading of the Connection and can handle commands however you want. To trigger OnCommand events, call the server's CommandHandlers.HandleCommand() method passing it string values of your choosing.
Personally, I would not recommend mixing textual and non-textual clients on the same server. They are clearly using different protocols, so you should be using different servers on different ports to handle them separately.

QTextEdit for input and output

I am considering using QTextEdit as a console-like I/O element (for serial data).
The problem with this approach is that (user) input and (communication) output are mixed and they might not be synchronous.
To detect new user input, it might be possible to store and compare plainText on certain input events, e.g. when Enter/Return is pressed.
Another approach might be to use the QTextEdit as view only for separately managed input and output buffers. This could also simplify the problem of potentially asynchronous data (device sends characters while user is typing, very unlikely in my case).
However, even merging the two "streams" by single-character timestamp holds potential for conflict.
Is there a (simple) solution or should I simply use separate and completely independent input/output areas?
Separate I/O areas are the simplest way to proceed if your UI is command-driven and the input is line-oriented.
Alternatively, the remote device can provide the echo, with no local echo. The remote device then echoes the characters back when it makes sense, to maintain a coherent display.
You can also display a local line editing buffer to provide user feedback in case the remote echo was delayed or unavailable. That buffer would be only for feedback and have no impact on other behavior of the terminal; all keystrokes would be immediately sent to the remote device.

How do I clear user input (cin) that occurred while the process was blocked?

I have a C++ program that takes input from the user on std::cin. At some points it needs to call a function that opens a GUI window with which the user can interact. While this window is open, my application is blocked. I noticed that if the user types anything into my application's window while the other window is open, then nothing happens immediately, but when control returns to my application those keystrokes are all acted upon at once. This is not desirable. I would like for all keystrokes entered while the application is blocked to be ignored; alternatively, a way to discard them all upon the application regaining control, but retaining the capability to react to keystrokes that occur after that.
There are various questions on Stack Overflow that explain how to clear a line of input, but as far as I can tell they tend to assume things like "the unwanted input only lasts until the next newline character". In this case this might not be so, because the user could press enter several times while the application is blocked. I have tried a variety of methods (getline(), get(), readsome(), ...) but they generally seem not to detect when cin is temporarily exhausted. Rather, they wait for the user to continue supplying content for cin. For example, if I use cin.ignore(n), then not only is everything typed while the GUI window was open ignored, but the program keeps waiting afterwards while the user types content until a total of n characters have been typed. That's not what I want - I want to ignore characters based on where in time they occurred, not where in the input stream they occur.
What is the idiom for "exhaust everything that's in cin right now, but then stop looking for more stuff"? I don't know what to search for to solve this.
I saw this question, which might be similar and has an answer, but the answer asks for the use of <termios.h>, which isn't available on Windows.
There is no portable way to achieve what you are trying to do. You basically need to set the input stream to non-blocking state and keep reading as long as there are any characters.
get() and getline() will just block until there is enough input to satisfy the request. readsome() only deals with the stream's internal buffer: it non-blockingly extracts characters that have already been read from the underlying source, nothing more.
On POSIX systems you'd just set the O_NONBLOCK flag with fcntl() and keep read()ing from file descriptor 0 until the read returns a value <= 0 (if it is less than 0 there was an error; otherwise there is no input). Since the OS normally buffers input on a console, you'd also need to set the stream to non-canonical mode (using tcsetattr()). Once you are done you'd probably restore the original settings.
I don't know how to do something similar on non-POSIX systems.

WinApi get number of available bytes from a USB port?

Is there a way to check the number of bytes available from a USB device (printer in our case)?
We're using CreateFile, ReadFile, and WriteFile for I/O with our USB device, which works. But we can't figure out how much data is available without actually doing a read. We can't use GetFileSize, as the documentation itself says it can't be used for a "nonseeking device such as a pipe or a communications device".
So that doesn't work. Any suggestions? Are we doing our USB I/O incorrectly? Is there a better way to Read/Write to USB?
You first need to open the port in asynchronous mode. To do that, pass the flag FILE_FLAG_OVERLAPPED to CreateFile. Then, when you call ReadFile, pass in a pointer to an OVERLAPPED structure. This starts an asynchronous read and returns immediately without blocking, with GetLastError() reporting ERROR_IO_PENDING (or, if the OS already has the data buffered, you might get lucky and the read completes successfully right away -- be prepared to handle that case).
Once the asynchronous I/O has started, you can then periodically check if it has completed with GetOverlappedResult.
This allows you to answer the question "are X bytes of data available?" for a particular value of X (the one passed to ReadFile). 95% of the time, that's good enough, since you're looking for data in a particular format. The other 5% of the time, you'll need to add another layer of abstraction on top, where you keep doing asynchronous reads and store the data in a buffer.
Note that asynchronous I/O is very tricky to get right, and there's a lot of edge cases to consider. Carefully read all of the documentation for these functions to make sure your code is correct.
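A minimal sketch of that pattern, to show the shape of the calls (Windows-only and untested here; the device path "\\\\.\\USBPRN001" is a made-up placeholder -- use whatever path your CreateFile call already opens):

```cpp
#include <windows.h>
#include <stdio.h>

int main() {
    // Placeholder device path; FILE_FLAG_OVERLAPPED enables async I/O.
    HANDLE h = CreateFileA("\\\\.\\USBPRN001", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL); // manual-reset event

    char buf[64];
    DWORD got = 0;
    // Start the read; FALSE + ERROR_IO_PENDING means "in progress".
    if (!ReadFile(h, buf, sizeof(buf), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) return 1;   // real error

    // Do other work, then poll (last arg FALSE = don't block):
    while (!GetOverlappedResult(h, &ov, &got, FALSE)) {
        if (GetLastError() != ERROR_IO_INCOMPLETE) return 1;
        Sleep(10); // still pending; check again later
    }
    printf("read %lu bytes\n", (unsigned long)got);

    CloseHandle(ov.hEvent);
    CloseHandle(h);
    return 0;
}
```

GetOverlappedResult is used for the byte count in both the pending and the completed-immediately cases, which keeps the two paths uniform.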
Can you use C#? If so, you can access the port using the System.IO.SerialPort class, and then set up a DataReceived event handler for incoming data. There is a BytesToRead property that tells you how much data is waiting to be read.
All of this must be available in native code, if I can find it I'll edit this.
EDIT: the best I can find for native code is ReadPrinter. I don't see how to check whether data is there, and this call will block if it's not.