Enable XML declaration for SAML2 requests with Sustainsys.Saml2

I'd like to enable XML declarations for requests generated with Sustainsys.Saml2.Mvc, but so far I haven't found a way to configure this.
I know declarations are optional in XML 1.0, but the IdP (which I have no control over) requires a declaration, and sadly fixing/changing that is not an option.
Is there a way to enable XML declarations for Sustainsys.Saml2 besides forking and changing the source code?

This question got me to look up what the SAML spec says about declarations: nothing. But in most examples I've seen of SAML2 messages, there is no declaration. So I'd say it should be a compatibility setting that can be enabled in special cases.
There is no support for it right now, but it can be added (a well-written PR will be accepted).
Update: there is now support for notifications, which can be used to add XML declarations.

Related

WS-Security actions explanation specifically SAML related

The WSHandlerConstants class defines action constants like SAML_TOKEN_SIGNED and SAML_TOKEN_UNSIGNED
I am struggling to find any documentation about these action constants. After looking around a lot, I am still unable to find an explanation for the following:
The mapping of action constants to the expected behaviour they are supposed to trigger
Which constants should be defined on the outgoing end (client) versus the incoming end (server), and if a constant can be used at both ends, how its behaviour changes
What effect each constant has on the SAML token being produced
I am investigating actions related to SAML authentication and generation.
After digging through the source I have found that there is a default action mapping in WSSConfig; however, the action classes only get invoked via the WSS4JOutInterceptor.
The WSS4JInInterceptor uses the actions configured on the server side to work out whether the tokens are valid; I could not, however, work out exactly how.
I suspect there should be some easy way to work out these different combinations. In the end I hope to have some clarity of the form:
If a SAML token is generated with these (X, Y, Z) characteristics, then it can be validated successfully when CXF is configured with these (A, B, C) actions, along with a brief explanation of each. Some guidance on best practices and the most commonly used combinations wouldn't hurt.
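For reference, this is the kind of interceptor configuration I'm talking about; it is only a rough sketch (the callback classes, key alias and property files are placeholders, and the WSHandlerConstants package differs between WSS4J versions):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor;
import org.apache.cxf.ws.security.wss4j.WSS4JOutInterceptor;
import org.apache.wss4j.dom.handler.WSHandlerConstants; // older WSS4J: org.apache.ws.security.handler

public class SamlActionConfig {

    // Client (outgoing) side: the action tells WSS4JOutInterceptor to build
    // and sign a SAML token; the callback supplies the assertion contents.
    static void configureClient(Object proxy) {
        Map<String, Object> out = new HashMap<>();
        out.put(WSHandlerConstants.ACTION, WSHandlerConstants.SAML_TOKEN_SIGNED);
        out.put(WSHandlerConstants.SAML_CALLBACK_CLASS, "com.example.MySamlCallbackHandler"); // placeholder
        out.put(WSHandlerConstants.SIG_PROP_FILE, "client-crypto.properties");                // placeholder
        out.put(WSHandlerConstants.USER, "clientkey");                                        // placeholder key alias
        out.put(WSHandlerConstants.PW_CALLBACK_CLASS, "com.example.KeystorePasswordCallback"); // placeholder

        Client client = ClientProxy.getClient(proxy);
        client.getOutInterceptors().add(new WSS4JOutInterceptor(out));
    }

    // Server (incoming) side: the same action is what WSS4JInInterceptor
    // expects to find (and verify) in the incoming security header.
    static WSS4JInInterceptor serverInterceptor() {
        Map<String, Object> in = new HashMap<>();
        in.put(WSHandlerConstants.ACTION, WSHandlerConstants.SAML_TOKEN_SIGNED);
        in.put(WSHandlerConstants.SIG_PROP_FILE, "server-crypto.properties");                 // placeholder
        return new WSS4JInInterceptor(in);
    }
}
```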

How to rename Liferay's default cookies?

I have a JSP project which uses the Liferay framework. There are default Liferay cookies named COOKIE_SUPPORT and GUEST_LANGUAGE_ID. I don't want hackers to be able to glean any information about my technology stack. How can I rename these cookies?
If you want to hide the framework you're using, the cookie names are not what you need to worry about. Worry about server identification, elements of the DOM, the structure and mechanics of URLs, a secure and hardened setup of your server, common translations, default content, standard error messages, etc.
In other words: if you don't want to give away which standard framework you're using (and this is not limited to Liferay), you'll have to roll your own. Good luck getting that as powerful and as well tested as any standard framework.
Rather, worry about keeping your systems updated at all times and protecting them from well-known vulnerabilities in older versions. For hardening Liferay specifically, you might want to start with my blog series on securing Liferay (I'm linking chapter 1, which refers to the other chapters).
Promoting a comment into this answer: One way to find out how to change them is to search for their names in the source code and identify the kind of plugin you need to provide different values - most likely this will be an ext-plugin. After all, Liferay's source is available. I don't see anything short of this.

WSDL-like code-generation for EbXML CPA+Schemas (not EbXML message itself)

Background:
A certain government-backed wholesaler of broadband services in Australia took feedback from discussion groups about how best to deliver B2B services to retail ISPs. They settled on EbXML.
Problem:
We're a very small shop (comparatively) that doesn't want to spend a lot of time going forward on integration. We're already familiar with integration of paired (inbound and outbound) SOAP services. In the past we've made use of WSDL-based code generation tooling (mostly with RPC/Literal services) where the WSDL has been descriptive and simple enough for the code generation tools to digest.
If at all possible we'd like to avoid having to hand-integrate the services with our business 'stack'. We know that the 'Interface Schemas' have been updated several times; we'd like to (as much as possible) do code and schema generation such that we can model our relationship with the supplier and the outbound/inbound messages as simple "queues" (tables) in an SQL database -- this will be our point of integration.
Starting with the outbound ("sender") SOAP web-service... it publishes a Document/Literal WSDL description of the service that seems to work correctly with various tools (e.g: wsdl2java, SoapUI) to generate the EBXML 'wrapper' messages. This says nothing about the 'payload' messages themselves which (at least for the MSH we've looked at) need to be multipart/related attachments with type of text/xml.
The 'payload' messages are defined in the provided CPA (something like bindings) and Schema (standard-looking XSD) files. The MSH itself doesn't seem to provide any external validation for the payload messages.
Question:
Is the same kind of code generation (as seen with WSDL-described SOAP web services) tooling available for EbXML CPAs/Schemas? (i.e: tools that can consume the CPA and 'payload' interface schemas and spit out java/c++/whatever, and/or something WSDL-like specific to the 'payload' interface messages and/or example messages).
If so, where do I look?
If not, are there any EbXML-specific problems that would prevent it? (I'd rather not get several weeks into a project to develop tools that are impossible to implement 'correctly' given the information at hand).
The MSH is payload agnostic. The payloads are not defined in the CPA, only the service and action names that are used to send the ebXML payloads are. The service and action are transmitted in the ebXML header, which is the first part of the multipart message. The payloads themselves can be xml, binary or a combination. Each payload is another part.
An MSH is responsible for tasks like:
sending (usually asynchronous) acknowledgements for received messages
resending messages if an acknowledgement has not been received within a certain amount of time
ignoring duplicate messages
assuring the order in which messages are delivered is correct
the actual behaviour is all configurable using the CPA, but a compliant MSH would support all this.
This implies that an MSH has to keep an administration of the messages it has sent and received, which is usually done in a database.
I would be surprised if you could find tooling to generate an MSH from a specific CPA. What you can find is software/components that implement a generic MSH and that can be configured with CPAs.
Assuming you don't want to build your own, look for an existing ebMS adapter. Configure it with your CPA(s). Then generate the payloads however you like and pass them to the ebMS adapter.
Google for "ebMS adapter" or "ebMS support".
Alas, it seems there's no specific tooling around the 'payload' messages for EbXML, specifically because EbXML doesn't regulate those messages.
However, the CPA (through its canSend and canRecv elements) acts somewhat like a SOAP WSDL, and the XSDs serve the same purpose as with SOAP, so it's not too far off.
There does exist software for turning types defined in XSDs into messages (merging in user-supplied data) at runtime, but per my question there's no obvious tooling for code generation around CPAs and related XSDs.
Furthermore, actually writing software to do this yourself is made more problematic by the difficulty of searching for the meta-grammar of XML Schema (i.e: that grammar which remains of XML Schema once XML tokenization is factored out). Basically, this was difficult because in the XML world the word "grammar" has a different meaning, which pollutes search results.
I was able to write a parser for the XML syntax snippets present at the top of each of the MSDN articles on XML Schema (elements listed down the left), which in turn allowed me to generate an LL(1) grammar for XML Schema which works on the pre-parsed AST of a given XSD.
From there I built a top-down parser from this meta-grammar which:
Follows <xsd:import>s and <xsd:include>s to resolve namespaces into further XSDs.
Recursively resolves message types in order to produce a 'flattened' type for each CPA message.
Generates packer/unpacker data structures for the message types, which allow generation of code in various languages, as well as serialisation to and parsing from validated 'payload' XML.
There are still various XML Schema restrictions, keys, and other constraints that my code generators don't know about, but support for these can be added in time.
I'll update this answer with links to grammars (and possibly code -- depends on legals) as time permits. I'll leave the question as non-accepted for a while so that if someone miraculously finds a tool which makes much less work of the code generation, I'll accept an answer based on that.
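For anyone attempting something similar, the import/include-following step needs nothing beyond the standard DOM API. A minimal sketch (no recursion, de-duplication or error handling, and the relative-path resolution is naive):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XsdDependencyWalker {

    private static final String XSD_NS = "http://www.w3.org/2001/XMLSchema";

    // Collects the schemaLocation of every xsd:import and xsd:include in one XSD.
    static List<String> referencedSchemas(File xsdFile) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(xsdFile);

        List<String> locations = new ArrayList<>();
        for (String tag : new String[] { "import", "include" }) {
            NodeList nodes = doc.getElementsByTagNameNS(XSD_NS, tag);
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);
                String location = e.getAttribute("schemaLocation");
                if (!location.isEmpty()) {
                    // Resolved relative to the containing XSD; a real tool would
                    // also de-duplicate and recurse into each referenced schema.
                    locations.add(new File(xsdFile.getParentFile(), location).getPath());
                }
            }
        }
        return locations;
    }
}
```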

How to monitor communication in a SOA environment with an intermediary?

I'm looking for a way to monitor all messages in a SOA environment through an intermediary, which will be designed to enforce different rule-sets over the messages' structure and sequencing (e.g., let's say it'll check and ensure that Service A has to be consumed before Service B).
Obviously the first idea that came to mind is how WS-Addressing might help here, but I'm not sure it does, as I don't really see any mechanism there to ensure that a message will get delivered via a given intermediary (as there is in WS-Routing, which is an outdated proprietary protocol by Microsoft).
Or maybe there's a different approach in which the monitor wouldn't be part of the route but would instead be notified of requests/responses, which in turn might make it harder to actively enforce rules.
I'm looking forward to any suggestions.
You can implement a "service firewall" by intercepting all the calls in each service as part of your basic service host. Alternatively, you can use third-party solutions and route all your service calls through them (they will do the intercepting and then forward the calls to your services).
You can use ESBs to do the routing (and intercepting), or you can use dedicated solutions like IBM's DataPower, the XML firewall from Layer 7, etc.
For all my (technical) services I use messaging and the command processor pattern, which I describe here (without actually naming the pattern, though). I send a message, and the framework finds the corresponding class that implements the interface matching my message. I can create multiple classes that handle my message, or a single class that handles a multitude of messages. In the article these are classes implementing the IHandleMessages interface.
Either way, as long as I can create multiple classes implementing this interface, and they are all called, I can easily add auditing without mixing it into my business logic. Just add an additional implementation for every single message, or enhance the framework so it also accepts IHandleMessages implementations. That class can then audit every single message and store all of them centrally.
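My own stack is .NET, but the pattern itself is language-agnostic. A rough sketch of the idea in Java, with every name invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the command processor pattern described above; all names are made up.
interface MessageHandler<T> {
    void handle(T message);
}

// An example command/message.
class PlaceOrder {
    final String orderId;
    PlaceOrder(String orderId) { this.orderId = orderId; }
}

// Business handler: knows nothing about auditing.
class PlaceOrderHandler implements MessageHandler<PlaceOrder> {
    @Override public void handle(PlaceOrder msg) {
        System.out.println("placing order " + msg.orderId);
    }
}

// Cross-cutting handler: sees every message and stores it centrally.
class AuditHandler implements MessageHandler<Object> {
    @Override public void handle(Object msg) {
        System.out.println("AUDIT " + msg.getClass().getSimpleName());
    }
}

class MessageBus {
    private final Map<Class<?>, List<MessageHandler<Object>>> handlers = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> void subscribe(Class<T> type, MessageHandler<? super T> handler) {
        handlers.computeIfAbsent(type, k -> new ArrayList<>())
                .add((MessageHandler<Object>) handler);
    }

    void send(Object message) {
        // Every handler registered for the message type (or for Object) is called,
        // so auditing piggybacks on the same dispatch as the business logic.
        for (Map.Entry<Class<?>, List<MessageHandler<Object>>> e : handlers.entrySet()) {
            if (e.getKey().isInstance(message)) {
                e.getValue().forEach(h -> h.handle(message));
            }
        }
    }

    public static void main(String[] args) {
        MessageBus bus = new MessageBus();
        bus.subscribe(PlaceOrder.class, new PlaceOrderHandler());
        bus.subscribe(Object.class, new AuditHandler()); // auditing added without touching business code
        bus.send(new PlaceOrder("42"));
    }
}
```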
After doing that, you can find out more information about the messages and the flow. For example, if you put information into the header of your WCF/MSMQ message about where it came from, and perhaps a unique identifier for that single message, you can track the flow across various components.
NServiceBus also has this functionality for auditing and the team is working on additional tooling for this, called ServiceInsight.
Hope this helps.

How to implement backend of api with multiple versions

I'm using Django to implement a private REST-like API and I'm unsure how to handle different versions of the API on the backend.
Meaning, if I have two versions of the API, what does my code look like? Should I have different apps that handle different versions? Should different functions handle different versions? Or should I just use if statements where one version differs from another?
I plan on stating the version in the header.
Thanks
You do not need to version REST APIs. With REST, versioning happens at runtime either through what one might call 'must-ignore payload extension rules' or through content negotiation.
'must-ignore payload extension rules' refer to an aspect you build into the design of your messages. 'Must-ignore' means that a piece of software that processes a message of the given format must ignore any unknown syntactical constructs. This is what we all know from HTML and what makes it possible to insert all sorts of fancy tags into an HTML page without the parser choking.
'Must-ignore' allows you to evolve the capabilities of your service by adding stuff to what you send already without considering clients that only understand the older versions.
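To make 'must-ignore' concrete (this is not Django-specific; the class, field and library choice are just for illustration): a client built against the old representation simply skips fields it doesn't know, for example by telling its JSON binder to ignore unknown properties:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// A "version 1" client-side model: unknown fields in newer payloads are ignored.
@JsonIgnoreProperties(ignoreUnknown = true)
class CustomerV1 {
    public String name;
}

public class MustIgnoreDemo {
    public static void main(String[] args) throws Exception {
        // The server has evolved and now also sends "loyaltyTier".
        String newerPayload = "{\"name\":\"Ada\",\"loyaltyTier\":\"gold\"}";
        CustomerV1 c = new ObjectMapper().readValue(newerPayload, CustomerV1.class);
        System.out.println(c.name); // old client keeps working: prints "Ada"
    }
}
```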
Content negotiation refers to HTTP's built-in mechanism of negotiating the actual representation the server sends to a given client at runtime. The typical scenario is this: clients send the Accept header in the request to advertise what they are capable of, and servers pick the representation to send back based on these capabilities. But there are also variations of this theme (see here for details: http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html ).
Content negotiation allows for incompatible changes, meaning that I can evolve my service to be able to send both the incompatible old and new versions, and based on the Accept header my service will send the appropriate one.
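As a language-agnostic illustration of that dispatch (the vendor media types are invented): the server looks at Accept and picks the representation:

```java
// Illustration only: pick a representation based on the Accept header.
public class RepresentationPicker {

    static String pickContentType(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("application/vnd.example.customer.v2+json")) {
            return "application/vnd.example.customer.v2+json"; // new, incompatible representation
        }
        return "application/vnd.example.customer.v1+json";     // default for older clients
    }

    public static void main(String[] args) {
        System.out.println(pickContentType("application/vnd.example.customer.v2+json")); // v2
        System.out.println(pickContentType("application/json"));                         // falls back to v1
    }
}
```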
Bottom line: with both approaches, your API remains as it is. No need to do any versioning at the API level - especially not the often suggested (but totally wrong) inclusion of version identifiers in the URIs (remember, you are doing REST here, not SOAP!)