According to RFC 4585, the AVPF profile allows a device to send feedback earlier than the next regularly scheduled RTCP packet. However, depending on the bandwidth, the number of participants, and the periodicity of regular RTCP packets, it is possible that the participants of a session will not be able to use early feedback.
How is this threshold calculated? The RFC does not provide it (it would be nice to have, at least for the point-to-point case).
From the RFC:
Note that the algorithms do not depend on all senders and receivers agreeing on the same value for this threshold. It is merely intended to provide conceptual guidance to application designers and is not used in any calculations.
So, if this threshold is implemented at all (it sounds like it isn't strictly necessary), it will be implementation specific.
There are open source implementations of AVPF; Doubango is one example, and it seems Jingle also implements this. It may be worth looking through their codebases for some help. I wasn't able to find anything more.
Good luck!
A remote system sends a message via middleware (MQ) to my application.
In the middleware, a transformation (using XSLT) is applied to this message. The message is just reformatted; there is no enrichment or validation. My system is the only consumer of the transformed message, and the XSLT is maintained by my team.
The original author of all of this has long gone, and I am wondering why he thought it was a good idea to do the transformation in the middleware rather than in my app. I can't see the value in doing it there; it makes the logic less visible and harder to maintain.
Also, I would have thought that the XSLT would be maintained by the message producer, not the consumer.
Are there any guidelines for this sort of architecture? Has he done the right thing here?
It is a bad idea to modify a message body in the middleware. It hurts both maintainability and performance.
The only legitimate reason for doing it is to connect two incompatible endpoints without modifying them; that requires transforming the source content into something the destination endpoint understands.
The motivation for delegating the transformation to the middleware could also be political (the endpoints are maintained by different teams, management is reluctant to touch the endpoint code, etc.).
If you are trying to create an application architecture where there is a need to serve data to different users in different formats, and perhaps receive data in different formats (think weather reports, or sports news), then creating a hub capable of doing the transformations between many different formats makes excellent sense. (Whether you call that "middleware" is up to you.) Perhaps your predecessor had this kind of architecture in mind, but it never grew big or complex enough to justify the design.
From an architectural point of view, it's a good idea to give consumers messages or content in a human-readable format, e.g. XML, unless there is a significant performance gain in using a binary format.
With a human-readable format, one simply has to look at the message to verify that it is correct. With a binary format, one would have to develop a utility to transform the binary message into a human-readable form. Different implementers of such a utility may not always interpret the binary form as intended, and it may turn into a finger-pointing exercise as to who or what is correct.
Also, if one is looking at what's in the queue, it is easier to make sense of it if the messages are in a human-readable format.
It doesn't hurt to start with a human-readable format and get the app working first. Then profile the app and see whether, in the big picture, the transformation routines are a significant source of delay. If so, move to a binary format.
It would have been preferable to have the original message producer emit messages directly in the format your system consumes, but they must have had good reasons for doing what they did when they did it, e.g. potentially other consumers, XSLT wasn't available then, resource constraints, etc.
Read about the adaptor design pattern and you will understand the intent of the current system architecture.
Before saying anything I have to say that, although I'm an experienced Java programmer, I'm rather new to C/C++ programming.
I have to save a binary file in a format that makes it accessible from different operating systems & platforms. It should be very efficient because I have to deal with a lot of data. What approaches should I investigate for that? What are the main advantages and disadvantages?
Currently I'm thinking about using network byte order (something like htonl, which is available under both Unix and Windows). Is there a better way?
Network order (big-endian) is something of a de facto standard. However, if your program will be used mostly on x86 (which is little-endian), you may want to stick with little-endian for performance reasons (the protocol will still be usable on big-endian machines; they will just bear the byte-swapping cost instead).
Besides htonl (which converts 32-bit values), there's also htons (16-bit) and bswap_64 (non-standard, for 64-bit values).
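As a minimal sketch of the host-to-network conversion, here is how a couple of fields might be written out in big-endian form (the record layout and field names are made up for illustration; only htons/htonl themselves are standard):

```c
/* Minimal sketch: writing a hypothetical record in network byte order. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htons, htonl (use <winsock2.h> on Windows) */

int main(void)
{
    uint16_t version = 1;
    uint32_t payload_len = 4096;

    unsigned char buf[6];
    uint16_t v_be = htons(version);       /* host -> network, 16-bit */
    uint32_t l_be = htonl(payload_len);   /* host -> network, 32-bit */
    memcpy(buf,     &v_be, sizeof v_be);
    memcpy(buf + 2, &l_be, sizeof l_be);

    FILE *f = fopen("record.bin", "wb");
    if (!f) return 1;
    fwrite(buf, sizeof buf, 1, f);
    fclose(f);
    return 0;
}
```

For 64-bit fields you would either use a non-standard helper such as bswap_64 or assemble the eight bytes by hand, since htonll is not universally available.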
If you want a binary format, but you'd like to abstract away some of the details to ease serialization and deserialization, consider Protocol Buffers or Thrift. Protocol Buffers are updatable (you can add optional or repeated (0 or more) fields to the schema without breaking existing code); not sure about Thrift.
However, beware of premature optimization: consider whether parsing is really the bottleneck. If processing every line of the file requires a database query or a calculation anyway, you may be able to use a more readable format without any noticeable performance impact.
I think there are a couple of decent choices for this kind of task.
In most cases, my first choice would probably be Sun's (now Oracle's) XDR. It's used in Sun's implementation of RPC, so it's been pretty heavily tested for quite a while. It's defined in RFC 1832, so documentation is widely available. There are also libraries (portable and otherwise) that know how to convert to/from this format. The on-wire representation is reasonably compact and conversion is fairly efficient.
The big potential problem with XDR is that you do need to know what the data represents to decode it -- i.e., you have to (by some outside means) ensure that the sender and receiver agree on (for example) the definition of the structs they'll send over the wire, before the receiver can (easily) understand what's being sent.
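To give a feel for it, here is a minimal sketch using the classic SunRPC xdr(3) routines (on modern Linux these may live in libtirpc rather than libc; the two fields are purely illustrative):

```c
/* Minimal XDR sketch: encode two values into a memory buffer and decode
 * them back.  Sender and receiver must agree on the field order out of band. */
#include <stdio.h>
#include <rpc/xdr.h>

int main(void)
{
    char buf[64];
    XDR enc, dec;
    int id = 42;
    double value = 3.14;

    /* Encode into XDR form (big-endian, 4-byte aligned). */
    xdrmem_create(&enc, buf, sizeof buf, XDR_ENCODE);
    if (!xdr_int(&enc, &id) || !xdr_double(&enc, &value))
        return 1;

    /* Decode it back in the same order. */
    int id2;
    double value2;
    xdrmem_create(&dec, buf, sizeof buf, XDR_DECODE);
    if (!xdr_int(&dec, &id2) || !xdr_double(&dec, &value2))
        return 1;

    printf("id=%d value=%f\n", id2, value2);
    return 0;
}
```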
If you need to create a stream that's entirely self-describing, so somebody can figure out what it contains based only on the content of the stream itself, then you might consider ASN.1. It's crufty and nasty in some ways, but it does produce self-describing streams, is publicly documented, and it's used pretty widely (though mostly in rather limited domains). There are a fair number of libraries that implement encoding and decoding. I doubt anybody really likes it much, but if you need what it does, it's probably the first choice, if only because it's already known and somewhat accepted.
My first choice for a situation like this would be ASN.1, since it gives you the flexibility of using whatever programming language you desire on either end, as well as being platform independent. It hides the endianness issues from you so you don't have to worry about them. One end can use Java while the other end uses C, C++, or C#. It also supports multiple encoding rules you can choose from depending on your needs: PER (Packed Encoding Rules) if the goal is making the encoding as small as possible, E-XER (Extended XML Encoding Rules) if you prefer to exchange information using XML, or DER (Distinguished Encoding Rules) if your application involves digital signatures or certificates. ASN.1 is widely used in telephony, but also in banking, automobiles, aviation, medical devices, and several other areas. It is mature, proven technology that has stood the test of time and continues to be adopted in new areas where communication between disparate machines and programming languages is needed.
An excellent resource where you can try ASN.1 for free is http://asn1-playground.oss.com, where you can play with some existing ASN.1 specifications, or try creating your own, and see what the various encoding rules produce.
There are some excellent books available as a free download from http://www.oss.com/asn1/resources/books-whitepapers-pubs/asn1-books.html where the first one is titled "ASN.1 — Communication Between Heterogeneous Systems".
Abis is the interface over which signalling passes between the BTS and the BSC in mobile networks. The task is to collect the messages from the BTS and analyse them to find some specific errors, etc. To do this, I have to learn how to build a protocol analyser. The language I have been told to use is C or C++.
There are three main stages in analysing data for any protocol:
Capturing or generating the network traffic: For mobile networks, that generally involves very expensive receiver hardware - hardware that usually comes with its own analyser software that will be far better than anything you might code yourself. Base stations may allow for a way to monitor their operation and capture data. It is also theoretically possible to repurpose other hardware (e.g. a cell phone or a lab instrument), or to generate the data using a simulator.
Extracting the data of interest: You need to extract and isolate the data for the protocol that interests you. Depending on the encapsulation and encryption properties of the network, that might be impossible for data captured in the wild - in that case you'd need something that would act as a node in the network and provide access to its inner workings.
Analysing the protocol of interest: You need a piece of software that will not only implement the protocol, but that will provide far more extensive logging and error-recovery capabilities than any production implementations. That way it will be able to point out and handle misbehaving nodes.
If you intend to write a protocol analyser of your own, you need to acquire the protocol specification and code such an implementation. Be warned that even the simplest protocols are in fact quite difficult to implement correctly.
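To give an idea of the shape of such a dissector, here is a minimal sketch in C. The type-length-value framing is purely hypothetical (real Abis messages ride on LAPD and are considerably more involved); the point is simply: decode, log everything, and recover from bad input rather than aborting.

```c
/* Sketch of a dissector loop for a hypothetical type-length-value framing. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

static void dissect(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    while (off + 2 <= len) {
        uint8_t type = buf[off];
        uint8_t tlen = buf[off + 1];

        if (off + 2 + tlen > len) {
            fprintf(stderr, "offset %zu: element type 0x%02x claims %u bytes "
                            "but only %zu remain - truncated frame?\n",
                    off, (unsigned)type, (unsigned)tlen, len - off - 2);
            return;                 /* log the error and stop parsing this frame */
        }

        printf("offset %zu: type 0x%02x, %u byte(s) of value\n",
               off, (unsigned)type, (unsigned)tlen);
        off += 2 + (size_t)tlen;    /* skip to the next element */
    }
    if (off != len)
        fprintf(stderr, "trailing %zu byte(s) could not be parsed\n", len - off);
}

int main(void)
{
    /* Fabricated test frame: two well-formed elements plus one truncated one. */
    const uint8_t frame[] = { 0x01, 0x02, 0xAA, 0xBB,
                              0x10, 0x01, 0xCC,
                              0x20, 0x05, 0x00 };
    dissect(frame, sizeof frame);
    return 0;
}
```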
Without more information on your development and target platforms, the source and format of the data and the resources that you have available, there is no way for us to provide more information.
PS: It would also help if your question contained an actual question that we could answer.
I have a custom build of a Unix OS.
My task: Adding an IPSec to the OS.
I am working on Phase 1 and have finished sending the first two packets.
What I am trying to do now is build the Identification Payload.
I've been reading RFC 2409 (Appendix B), which discusses the keying material (SKEYID, SKEYID_d, SKEYID_a, SKEYID_e, and the IV generation).
I use SHA-1 for authentication, and thus HMAC-SHA1, and my encryption algorithm is AES-256.
The real problem is that the RFC is not clear enough about what I should do regarding the PRF.
It says:
"Use of negotiated PRFs may require the
PRF output to be expanded due to
the PRF feedback mechanism employed by
this document."
I use SHA-1; does that mean I do not negotiate a PRF?
In my opinion, AES is the only algorithm that needs expansion (it requires a fixed 256-bit key), so do I need to expand only SKEYID_e?
If you happen to know a clearer, yet reliable, source than the RFC, please post a link.
You cannot negotiate a PRF based solely on RFC2409, so don't worry about that. 3 key Triple-DES, AES-192, and AES-256 all require the key expansion algorithm in Appendix B. Many implementations have these, so testing interoperability should not be that hard.
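Appendix B defines the expansion as Ka = K1 | K2 | ..., where K1 = prf(SKEYID_e, 0) (a single zero octet) and Ki = prf(SKEYID_e, K(i-1)), with the result truncated to the cipher's key length. Here is a minimal sketch of that loop, assuming HMAC-SHA1 as the PRF and using OpenSSL for the HMAC primitive (the function name and interface are my own, not from the RFC):

```c
/* RFC 2409 Appendix B style key expansion with HMAC-SHA1 as the PRF. */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Expand SKEYID_e into key_len bytes of keying material (32 for AES-256). */
int expand_skeyid_e(const unsigned char *skeyid_e, size_t skeyid_e_len,
                    unsigned char *out, size_t key_len)
{
    unsigned char k[EVP_MAX_MD_SIZE];   /* current block Ki */
    unsigned int  k_len = 0;
    const unsigned char zero = 0;
    size_t produced = 0;

    /* K1 = prf(SKEYID_e, 0) -- a single zero octet, per Appendix B. */
    if (!HMAC(EVP_sha1(), skeyid_e, (int)skeyid_e_len, &zero, 1, k, &k_len))
        return -1;

    while (produced < key_len) {
        size_t take = key_len - produced;
        if (take > k_len)
            take = k_len;
        memcpy(out + produced, k, take);    /* Ka = K1 | K2 | ..., truncated */
        produced += take;

        if (produced < key_len) {
            /* K(i+1) = prf(SKEYID_e, Ki) */
            unsigned char next[EVP_MAX_MD_SIZE];
            unsigned int  next_len = 0;
            if (!HMAC(EVP_sha1(), skeyid_e, (int)skeyid_e_len,
                      k, k_len, next, &next_len))
                return -1;
            memcpy(k, next, next_len);
            k_len = next_len;
        }
    }
    return 0;
}

int main(void)
{
    /* Dummy SKEYID_e just to exercise the function; a real one comes from
     * the Phase 1 exchange. */
    unsigned char skeyid_e[20] = {0};
    unsigned char key[32];              /* AES-256 needs 32 bytes */
    if (expand_skeyid_e(skeyid_e, sizeof skeyid_e, key, sizeof key) != 0)
        return 1;
    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```

Since HMAC-SHA1 produces 20 bytes per iteration, AES-256 needs two iterations (40 bytes) truncated to 32; HMAC-SHA1's own key (SKEYID_a) needs no expansion.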
The IETF RFCs are often not clear enough. However, they are written for the sole purpose of describing interoperability, so finding a reference implementation to explore or test against is almost essential. Indeed, RFC 2409 specifically notes:
The authors encourage independent implementation, and interoperability testing, of this hybrid protocol.
Finding another implementation is what you really need; finding someone else's source is better still. Failing that, read the bibliography. It has been said that some RFCs written by some firms intentionally obfuscate or simply omit the information needed to produce a conformant implementation in order to build 'market advantage'. There is no royal road to understanding RFC 2409.
I need to calculate a machine id for computers running macOS, but I don't know where to retrieve the information, stuff like HDD serial numbers, etc. The main requirement for my particular application is that the user mustn't be able to spoof it. Before you start laughing, I know that's far-fetched, but at the very least the spoofing method should require a reboot.
The best solution would be one in C/C++, but I'll take Objective-C if there's no other way. The über-best solution would not need root privileges.
Any ideas? Thanks.
Erik's suggestion of system_profiler (and its underlying, but undocumented SystemProfiler.framework) is your best hope. Your underlying requirement is not possible, and any solution without hardware support will be pretty quickly hackable. But you can build a reasonable level of obfuscation using system_profiler and/or SystemProfiler.framework.
I'm not sure what your actual requirements are here, but these posts may be useful:
Store an encryption key in Keychain while application installation process (this was related to network authentication, which sounds like your issue)
Obfuscating Cocoa (this was more around copy-protection, which may not be your issue)
I'll repeat here what I said in the first posting: It is not possible, period, not possible, to securely ensure that only your client can talk to your server. If that is your underlying requirement, it is not a solvable problem. I will expand that by saying it's not possible to construct your program such that people can't take out any check you put in, so if the goal is licensing, that also is not a completely solvable problem. The second post above discusses how to think about that problem, though, from a business rather than engineering point of view.
EDIT: Regarding your request to require a reboot, remember that Mac OS X has kernel extensions. By loading a kernel extension, it is always possible to modify how the system sees itself at runtime without a reboot. In principle, this would be a Mac rootkit, which is not fundamentally any more complex than a Linux rootkit. You need to carefully consider who your attacker is, but if your attackers include Mac kernel hackers (which is not an insignificant group), then even a reboot requirement is not plausible. This isn't to say that you can't make spoofing annoying for the majority of users. It's just always possible by a reasonably competent attacker. This is true on all modern OSes; there's nothing special here about Mac.
The tool /usr/sbin/system_profiler can provide you with a list of serial numbers for various hardware components. You might consider using those values as text to generate an MD5 hash or something similar.
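A minimal sketch of that idea, hashing the output of one data type with CommonCrypto (SHA-256 rather than MD5, but the principle is the same; SPHardwareDataType is just one plausible section to feed in, and anything done in user space like this can ultimately be spoofed):

```c
/* Sketch: derive a machine fingerprint by hashing system_profiler output. */
#include <stdio.h>
#include <CommonCrypto/CommonDigest.h>

int main(void)
{
    FILE *p = popen("/usr/sbin/system_profiler SPHardwareDataType", "r");
    if (!p) return 1;

    CC_SHA256_CTX ctx;
    CC_SHA256_Init(&ctx);

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, p)) > 0)
        CC_SHA256_Update(&ctx, buf, (CC_LONG)n);
    pclose(p);

    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256_Final(digest, &ctx);

    for (size_t i = 0; i < sizeof digest; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}
```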
How about getting the MAC address of a network card attached to the computer, e.g. via ifconfig?
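If you'd rather not parse ifconfig output, here is a sketch of reading the hardware addresses programmatically on macOS/BSD via getifaddrs(). Keep in mind that a MAC address can be changed from user space without a reboot, so it is a weak identifier on its own.

```c
/* Sketch: list interface MAC addresses on macOS/BSD using getifaddrs(). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <net/if_dl.h>

int main(void)
{
    struct ifaddrs *ifap, *ifa;
    if (getifaddrs(&ifap) != 0) return 1;

    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_LINK)
            continue;
        const struct sockaddr_dl *sdl =
            (const struct sockaddr_dl *)ifa->ifa_addr;
        if (sdl->sdl_alen != 6)      /* only 48-bit hardware addresses */
            continue;
        const unsigned char *mac = (const unsigned char *)LLADDR(sdl);
        printf("%-8s %02x:%02x:%02x:%02x:%02x:%02x\n", ifa->ifa_name,
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    }
    freeifaddrs(ifap);
    return 0;
}
```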