Elliotte Rusty Harold blogged about the WS-Addressing specifications reaching Candidate Recommendation status, and did a bit of editorializing along the way:
> These specs are seeing some serious pushback within the W3C. The problem is that there already is an addressing system for the Web. It's called the URI, and it's not at all clear that web services addressing does anything beyond what URIs do, except add complexity. In fact, it's pretty clear that it doesn't do anything except add complexity.
>
> Here's the problem. Web Services Addressing "defines two constructs, message addressing properties and endpoint references, that normalize the information typically provided by transport protocols and messaging systems in a way that is independent of any particular transport or messaging system." In other words, this is another example of the excessive genericity problem, just like DOM, and remember how well that worked. One of the big fundamental problems with DOM was that it tried to develop an architecture that could work for all conceivable programming languages; but developers don't want and don't need an API for all programming languages. They want an API that's tailored to their own programming language. This is why language-specific libraries like XOM and Amara are so much easier to use and more productive than DOM.
>
> Web Services Addressing is trying to define an addressing scheme that can work over HTTP, SMTP, FTP, and any other protocol you can imagine. However, each of these protocols already has its own addressing system. Developers working with these protocols don't want and don't need a different addressing system that's marginally more compatible with some protocol they're not using in exchange for substantially less compatibility with the protocol they are using. Besides, nobody's actually doing web services over anything except HTTP anyway. Doesn't it just make more sense to use the well understood, already implemented and debugged HTTP architecture for this instead of inventing something new?

Frankly, no. Not everything carries well over HTTP, and I'm surprised that Elliotte doesn't see that. HTTP works well as a point-to-point, client-initiated request/response protocol, but there are a lot of situations where a point-to-point, client-initiated request/response protocol simply doesn't cut it for large-scale integration work.
For example, consider your canonical "push" model; as it stands right now, there is no way to do the virtual equivalent of a classroom environment: multiple clients (students) all passively receiving content distributed in packets from the server (instructor). What's more, that distribution model is essentially a broadcast scenario with zero guarantees of delivery--if a student nods off in class, does the instructor care? Not normally, no, and certainly not at the expense of the other students in class. Broadcast scenarios are a powerful argument against HTTP, since to replicate this, either
- the clients have to poll continuously for updates, or
- the server has to become the client, and constantly "push" the various elements out to the clients... er, servers... whatever, taking into account whatever firewall/security issues might sit between them (which is why the client-polling scenario is usually the way people go)
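To make the first trade-off concrete, here's a minimal sketch of a client-polling loop. Everything in it is my own illustration, not any particular product's API: `poll_for_updates` stands in for a client, and `fetch` stands in for an HTTP GET against some hypothetical updates resource.

```python
import time

def poll_for_updates(fetch, interval_s=1.0, max_polls=5):
    """Naive client-side polling loop: repeatedly ask the server whether
    anything new has arrived. 'fetch' stands in for an HTTP GET against
    a hypothetical /updates resource; it returns a (possibly empty) list."""
    received = []
    for _ in range(max_polls):
        updates = fetch()            # one full round-trip per poll, data or not
        received.extend(updates)
        if not updates and interval_s:
            time.sleep(interval_s)   # back off when nothing arrived
    return received

# Simulated "server" queue: only the third poll has anything to deliver,
# yet the client pays for all five round-trips.
pending = [[], [], ["lecture-slide-42"], [], []]
fetch = lambda: pending.pop(0) if pending else []
print(poll_for_updates(fetch, interval_s=0))  # ['lecture-slide-42']
```

Note that the cost is entirely on the client side: the number of round-trips is driven by the polling interval, not by how much data actually exists to deliver.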
Look, at the end of the day, the WS-Addressing spec only requires the Action header, which serves the exact same role as the HTTP verb does: it simply provides a way to describe what the intended action for the message should be. Everything else in the specification is optional, including the To, From, ReplyTo and FaultTo headers (which use the EndpointReferences--the "virtual addresses" ERH is railing about) and the MessageID/RelatesTo headers. The former enable flexibility in message responses that HTTP simply cannot provide, highly useful in a workflow situation; the latter provide the ability to "thread" messages together, something HTTP could only do with the introduction of cookies--which Fielding argues against in a major way, as they destroy the basic principles of a REST-based system.
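As an illustration of how little is actually mandatory, a SOAP message carrying WS-Addressing headers might look something like this. The endpoint URIs, operation name, and message ID below are invented for the example, and the `wsa` namespace URI is the one from the 2005 Candidate Recommendation drafts, so it may differ in other versions of the spec:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <soap:Header>
    <!-- the only required header: what should happen with this message -->
    <wsa:Action>http://example.org/orders/Submit</wsa:Action>
    <!-- everything below is optional -->
    <wsa:To>http://example.org/orders</wsa:To>
    <wsa:ReplyTo>
      <wsa:Address>http://client.example.org/callbacks</wsa:Address>
    </wsa:ReplyTo>
    <wsa:MessageID>urn:example:msg-0001</wsa:MessageID>
  </soap:Header>
  <soap:Body>
    <SubmitOrder xmlns="http://example.org/orders"/>
  </soap:Body>
</soap:Envelope>
```

The point to notice is that ReplyTo names an arbitrary endpoint, not "whoever opened this TCP connection"--which is exactly the flexibility a workflow scenario needs and HTTP's request/response pairing can't express.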
In particular, I take issue with the idea that "nobody's actually doing web services over anything except HTTP anyway". If that's the case, somebody had better tell the messaging vendors, like IBM or Sonic or TIBCO or Fiorano, because they seem to be investing a lot into the XML services standards in an effort to help build exactly the integration infrastructure that CxOs have been pursuing with all the zeal of the Holy Grail for about forty years now. I can think of four consulting clients off the top of my head that are using XML services over something other than HTTP, and will probably continue to do so for a long time to come. Guess they must just be trying to overcomplicate their lives, even though SOAP-over-TCP and SOAP-over-FTP actually cut the lines of code they had to maintain, because they no longer had to build a complicated polling process. And as for "Doesn't it just make more sense to use the well understood, already implemented and debugged HTTP architecture for this instead of inventing something new?"--in short, no, it doesn't. There's no sense in taking bits that were designed for a distributed hypermedia system (Fielding's words, not mine) and trying to bend them to fit a problem space that isn't distributed hypermedia. Can we learn from the REST architectural style? Absolutely--and the new WS-* specs, including SOAP, do exactly that, favoring self-descriptive messages over RPC calls as the principal abstraction to deal with. But does that mean we tie ourselves solely to HTTP? Oh, hell no.
Barely hidden between the lines in Elliotte's post is a general accusation that the WS-* guys are deliberately overcomplicating things. Frankly, I think that's an unfair characterization of what's been going on. SOAP 1.1 was complex, oh, Lord yes. SOAP 1.2 is actually fairly simple, all things considered, despite the perception you might get when you look at the spec and see three parts and a hundred or so pages instead of a single 30-page document (as SOAP 1.1 was). SOAP 1.2 standardizes a basic message format, with room for extensions (where the rest of the WS-* stack goes, for the most part), and provides a standard fault structure so that not everybody needs to define their own custom fault formats. (This is important if the fault is at the infrastructural level, not at the application level.) Everything else layers directly on top of SOAP, and frankly, if you don't need it, don't use it--the WS-* specs try very hard to be composable, meaning if you don't need a particular element, you don't use it and you don't pay for it. In fact, some of these specs (WS-Eventing, for example) are simple enough that you could implement them by hand, without any toolkit (Axis, BEA, etc.) whatsoever.
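For reference, that standard fault structure is a small, fixed shape in the SOAP 1.2 Body: a machine-readable Code and a human-readable Reason. The reason text below is invented for illustration:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <soap:Fault>
      <soap:Code>
        <soap:Value>soap:Sender</soap:Value>
      </soap:Code>
      <soap:Reason>
        <soap:Text xml:lang="en">Message lacked a required header</soap:Text>
      </soap:Reason>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
```

Because every SOAP 1.2 endpoint agrees on this shape, infrastructure-level errors can be generated and understood by intermediaries that know nothing about your application's payloads.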
Don't let a tempting political narrative (the basic desire to mistrust the big vendors and think they're just out to screw the little guy) cloud your vision of what's going on in the industry. The big vendors may very well be out to screw the little guy, but that doesn't mean that everything they do isn't useful. If you build a REST-like XML service, you're actually falling right in line with the "new" XML service model, and if you slap a SOAP:Body around your message and a SOAP:Envelope around that, you'll be wire-compatible with other SOAP 1.2 endpoints. Even better, if and when you need reliability, workflow capabilities, or integration with somebody else's WS-* stack, you're already primed to go. And wasn't that the point of all this stuff in the first place?
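That wrapping step really is as mechanical as it sounds. Here's a sketch using Python's standard-library ElementTree; the `order` payload and its namespace are invented for the example, and a real service would of course also set a Content-Type of application/soap+xml on the wire:

```python
import xml.etree.ElementTree as ET

# SOAP 1.2 envelope namespace
SOAP12_NS = "http://www.w3.org/2003/05/soap-envelope"

def wrap_in_soap12(payload: ET.Element) -> ET.Element:
    """Wrap an existing XML payload in a SOAP 1.2 Envelope/Body,
    leaving the payload element itself untouched."""
    envelope = ET.Element(f"{{{SOAP12_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP12_NS}}}Body")
    body.append(payload)
    return envelope

# The same XML document you'd have POSTed in a REST-like service...
order = ET.fromstring(
    '<order xmlns="http://example.org/orders"><sku>42</sku></order>'
)
# ...now wire-compatible with other SOAP 1.2 endpoints.
print(ET.tostring(wrap_in_soap12(order), encoding="unicode"))
```

Nothing about the payload changes; the envelope is pure framing, which is why starting REST-like and adding SOAP later costs so little.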