O'Reilly FYI

News from within O'Reilly

Juval Lowy explains Service-Orientation

 
By Kathryn Barrett
November 18, 2008 | Comments: 2

Juval Lowy's Programming WCF Services is considered the definitive treatment of Microsoft's WCF (Windows Communication Foundation). In it, Juval provides both the guidance and the insight needed to master the skills for building maintainable, extensible, and reusable WCF-based applications. Juval's talent as a teacher—his knack for tackling vast subjects and making them easy to learn—comes through especially well in this appendix from his book, "Introduction to Service-Orientation." These days, there's no avoiding the phrase "service-oriented," but few people can explain what it means or why it's so important. Juval gets to the heart of the matter in this excerpt.

Appendix A. Introduction to Service-Orientation


Table of Contents

A Brief History of Software Engineering
Object-Orientation
Component-Orientation
Service-Orientation
Benefits of Service-Orientation
Service-Oriented Applications
Tenets and Principles
Practical Principles
Optional Principles
What's Next?
A Service-Oriented Platform

This book is all about designing and developing service-oriented applications using WCF—yet there is considerable confusion and hype concerning what service-orientation is and what it means. To make matters worse, most of the vendors in this space equate their definition of service-orientation with their products and services. The vendors (Microsoft included) add to the confusion by equating service-orientation with high-end Enterprise applications, where handling high scalability and throughput is a must (mostly because they all contend for that market, where the business margins are made).

This appendix presents my understanding of what service-orientation is all about and attempts to put it in a concrete context. My take is different from that of the large vendors, but I believe it is more down-to-earth, rooted as it is in trends and the natural evolution of our industry. As you will see, I believe that service-orientation is not a breakthrough or a quantum leap of thought, but rather the next gradual step (and probably not the last step) in a long journey that spans decades.

To understand where the software industry is heading with service-orientation, you should first appreciate where it came from. After a brief discussion of the history of software engineering and its overarching trend, this appendix defines service-oriented applications (as opposed to mere architecture), explains what services themselves are, and examines the benefits of the methodology. It then presents the main principles of service-orientation and augments the abstract tenets with a few more practical and concrete points to which most applications should adhere. Finally, the appendix concludes with a look to the future.

A Brief History of Software Engineering

The first modern computer was an electromechanical, typewriter-sized device developed in Poland in the late 1920s for enciphering messages. The device was later sold to the German Commerce Ministry, and in the 1930s the German military adopted it for enciphering all wireless communication. Today we know it as the Enigma.

Enigma used mechanical rotors to change the route of electrical current flow to a light board in response to a letter key being pressed, resulting in a different letter being output (the ciphered letter). Enigma was not a general-purpose computer: it could only do enciphering and deciphering (which today we call encryption and decryption). If the operator wanted to change the encryption algorithm, he had to physically alter the mechanical structure of the machine by changing the rotors, their order, their initial positions, and the wired plugs that connected the keyboard to the light board. The "program" was therefore coupled in the extreme to the problem it was designed to solve (encryption), and to the mechanical design of the computer.

The late 1940s and the 1950s saw the introduction of the first general-purpose electronic computers for defense purposes. These machines could run code that addressed any problem, not just a single predetermined task. The downside was that the code executed on these computers was in a machine-specific "language" with the program coupled to the hardware itself. Code developed for one machine could not run on another. In fact, at the time there was no distinction between the software and the hardware (indeed, the word "software" was coined only in 1958). Initially this was not a cause for concern, since there were only a handful of computers in the world anyway. As machines became more prolific, this did turn into a problem. In the early 1960s the emergence of assembly language decoupled the code from specific machines, enabling it to run on multiple computers. That code, however, was now coupled to the machine architecture: code written for an 8-bit machine could not run on a 16-bit machine, let alone withstand differences in the registers or available memory and memory layout. As a result, the cost of owning and maintaining a program began to escalate. This coincided more or less with the widespread adoption of computers in the civilian and government sectors, where the more limited resources and budgets necessitated a better solution.

In the 1960s, higher-level languages such as COBOL and FORTRAN introduced the notion of a compiler: the developer would write in an abstraction of machine programming (the language), and the compiler would translate that into actual assembly code. Compilers for the first time decoupled the code from the hardware and its architecture. The problem with those first-generation languages was that the code resulted in nonstructured programming, where the code was internally coupled to its own structure via the use of jump or go-to statements. Minute changes to the code structure often had devastating effects in multiple places in the program.

The 1970s saw the emergence of structured programming via languages such as C and Pascal, which decoupled the code from its internal layout and structure using functions and structures. The 1970s was also the first time developers and researchers started to examine software as an engineered entity. To drive down the cost of ownership, companies had to start thinking about reuse—that is, what would make a piece of code able to be reused in other contexts. With languages like C, the basic unit of reuse is the function. But the problem with function-based reuse is that the function is coupled to the data it manipulates, and if the data is global, a change to benefit one function in one reuse context is likely to damage another function used somewhere else.

Object-Orientation

The solution to these problems emerged in the 1980s, in the form of object-orientation, with languages such as Smalltalk and later C++. With object-orientation, the functions and the data they manipulated were packaged together in an object. The functions (now called methods) encapsulated the logic, and the object encapsulated the data. Object-orientation enabled domain modeling in the form of a class hierarchy. The mechanism of reuse was class-based, enabling both direct reuse and specialization via inheritance.

But object-orientation was not without its own acute problems. First, the generated application (or code artifact) was a single, monolithic application. Languages like C++ have nothing to say about the binary representation of the generated code. Developers had to deploy huge code bases every time they needed to make a change, however minute, and this had a detrimental effect on the development process and on application quality, time to market, and cost. While the basic unit of reuse was a class, it was a class in source format. Consequently, the application was coupled to the language used—you could not have a Smalltalk client consuming a C++ class or deriving from it. Language-based reuse implied uniformity of skill (all developers in the organization had to be skilled enough to use C++), which led to staffing problems. Language-based reuse also inhibited economy of scale, because if the organization was using multiple languages it necessitated duplication of investments in frameworks and common utilities. Finally, having to access the source files in order to reuse an object coupled developers to each other, complicated source control, and coupled teams together, since it made independent builds difficult. Moreover, inheritance turned out to be a poor mechanism for reuse, often causing more harm than good, because the developer of the derived class needed to be intimately aware of the implementation of the base class (which introduced vertical coupling across the class hierarchy).

Object-orientation was oblivious to real-life challenges, such as deployment and versioning issues. Serialization and persistence posed yet another set of problems. Most applications did not start by plucking objects out of thin air; they had some persistent state that needed to be hydrated into objects. However, there was no way of enforcing compatibility between the persisted state and the potentially new object code. Object-orientation assumed the entire application was always in one big process. This prevented fault isolation between the client and the object, and if the object blew up, it took the client (and all other objects in the process) with it. Having a single process implies a single uniform identity for the clients and the objects, without any security isolation. This makes it impossible to authenticate and authorize clients, since they have the same identity as the object. A single process also impedes scalability, availability, responsiveness, throughput, and robustness. Developers could manually place objects in separate processes, yet if the objects were distributed across multiple processes or machines there was no way of using raw C++ for the invocations, since C++ required direct memory references and did not support distribution. Developers had to write host processes and use some remote call technology (such as TCP sockets) to remote the calls, but such invocations looked nothing like native C++ calls and did not benefit from object-orientation.

Component-Orientation

The solution for the problems of object-orientation evolved over time, involving technologies such as the static library (.lib) and the dynamic library (.dll), culminating in 1994 with the first component-oriented technology, called COM (Component Object Model). Component-orientation provided interchangeable, interoperable binary components. With this approach, instead of sharing source files, the client and the server agree on a binary type system (such as IDL) and a way of representing the metadata inside the opaque binary components. The components are discovered and loaded at runtime, enabling scenarios such as dropping a control on a form and having that control be automatically loaded at runtime on the client's machine. The client only programs against an abstraction of the service: a contract called the interface. As long as the interface is immutable, the service is free to evolve at will. A proxy can implement the same interface and thus enable seamless remote calls by encapsulating the low-level mechanics of the remote call. The availability of a common binary type system enables cross-language interoperability, so a Visual Basic client can consume a C++ COM component. The basic unit of reuse is the interface, not the component, and polymorphic implementations are interchangeable. Versioning is controlled by assigning a unique identifier for every interface, COM object, and type library.

While COM was a fundamental breakthrough in modern software engineering, most developers found it unpalatable. COM was unnecessarily ugly because it was bolted on top of an operating system that was unaware of it, and the languages used for writing COM components (such as C++ and Visual Basic) were at best object-oriented but not component-oriented. This greatly complicated the programming model, requiring frameworks such as ATL to partially bridge the two worlds. Recognizing these issues, Microsoft released .NET 1.0 in 2002. .NET is (in the abstract) nothing more than cleaned-up COM, MFC, C++, and Windows, all working seamlessly together under a single new component-oriented runtime. .NET supports all the advantages of COM and mandates and standardizes many of its ingredients, such as type metadata sharing, dynamic component loading, serialization, and versioning.

While .NET is at least an order of magnitude easier to work with than COM, both COM and .NET suffer from a similar set of problems:

Technology and platform

The application and the code are coupled to the technology and the platform. Both COM and .NET are available only on Windows. Both also expect the client and the service to be either COM or .NET and cannot interoperate natively with other technologies, be they Windows or not. While bridging technologies such as web services make interoperability possible, they force the developers to let go of almost all of the benefits of working with the native framework, and they introduce their own complexities and coupling with regard to the nature of the interoperability mechanism. This, in turn, breaks economy of scale.

Concurrency management

When a vendor ships a component, it cannot assume that its clients will not access it with multiple threads concurrently. In fact, the only safe assumption the vendor can make is that the component will be accessed by multiple threads. As a result, the components must be thread-safe and must be equipped with synchronization locks. However, if an application developer is building an application by aggregating multiple components from multiple vendors, the introduction of multiple locks renders the application deadlock-prone. Avoiding the deadlocks couples the application and the components.

Transactions

If multiple components are to participate in a single transaction, the application that hosts them must coordinate the transaction and flow the transaction from one component to the next, which is a serious programming endeavor. This also introduces coupling between the application and the components regarding the nature of the transaction coordination.

Communication protocols

If components are deployed across process or machine boundaries, they are coupled to the details of the remote calls, the transport protocol used, and its implications for the programming model (e.g., in terms of reliability and security).

Communication patterns

The components may be invoked synchronously or asynchronously, and they may be connected or disconnected. A component may or may not be able to be invoked in either one of these modes, and the application must be aware of its exact preference. With COM and .NET, developing asynchronous or even queued solutions was still the responsibility of the developer, and any such custom solutions were not only difficult to implement but also introduced coupling between the solution and the components.

Versioning

Applications may be written against one version of a component and yet encounter another in production. Both COM and .NET bear the scars of DLL Hell (which occurs when the client at runtime is trying to use a different, incompatible version of the component than the one against which it was compiled), so both provide a guarantee to the client: that the client would get at runtime exactly the same component versions it was compiled against. This conservative approach stifled innovation and the introduction of new components. Both COM and .NET provided for custom version-resolution policies, but doing so risked DLL Hell-like symptoms. There was no built-in versioning tolerance, and dealing robustly with versioning issues coupled the application to the components it used.

Security

Components may need to authenticate and authorize their callers, but how does a component know which security authority it should use, or which user is a member of which role? Not only that, but a component may want to ensure that the communication from its clients is secure. That, of course, imposes certain restrictions on the clients and in turn couples them to the security needs of the component.

Off-the-shelf plumbing

In the abstract, interoperability, concurrency, transactions, protocols, versioning, and security are the glue—the plumbing—that holds any application together.

In a decent-sized application, the bulk of the development effort and debugging time is spent on addressing such plumbing issues, as opposed to focusing on business logic and features. To make things even worse, since the end customer (or the development manager) rarely cares about plumbing (as opposed to features), the developers typically are not given adequate time to develop robust plumbing. Instead, most handcrafted plumbing solutions are proprietary (which hinders reuse, migration, and hiring) and are of low quality, because most developers are not security or synchronization experts and because they were not given the time and resources to develop the plumbing properly.

The solution was to use ready-made plumbing that offered such services to components. The first attempt at providing decent off-the-shelf plumbing was MTS (Microsoft Transaction Server), released in 1996. MTS offered support for much more than transactions, including security, hosting, activation, instance management, and synchronization. MTS was followed by J2EE (1998), COM+ (2000), and .NET Enterprise Services (2002). All of these application platforms provided adequate, decent plumbing (albeit with varying degrees of ease of use), and applications that used them had a far better ratio of business logic to plumbing.

However, by and large these technologies were not adopted on a large scale, due to what I term the boundary problem. Few systems are an island; most have to interact and interoperate with other systems. If the other system doesn't use the same plumbing, you cannot interoperate smoothly. For example, there is no way of propagating a COM+ transaction to a J2EE component. As a result, when crossing the system boundary, a component (say, component A) had to dumb down its interaction to the (not so large) common denominator between the two platforms. But what about component B, next to component A? As far as B was concerned, the component it interacted with (A) did not speak its flavor of the plumbing, so B also had to be dumbed down. As a result, system boundaries tended to creep from the outside inward, preventing the ubiquitous use of off-the-shelf plumbing. Technologies like Enterprise Services and J2EE were useful, but they were useful only in isolation.

Service-Orientation

If you examine the brief history of software engineering just outlined, you'll notice a pattern: every new methodology and technology incorporates the benefits of its preceding technology and improves on the deficiencies of the preceding technology. However, every new generation also introduces new challenges. Therefore, I say that modern software engineering is the ongoing refinement of the ever-increasing degrees of decoupling.

Put differently, coupling is bad, but coupling is unavoidable. An absolutely decoupled application would be useless, because it would add no value. Developers can only add value by coupling things together. Indeed, the very act of writing code is coupling one thing to another. The real question is how to wisely choose what to be coupled to. I believe there are two types of coupling, good and bad. Good coupling is business-level coupling. Developers add value by implementing a system use case or a feature, by coupling software functionality together. Bad coupling is anything to do with writing plumbing. What was wrong with .NET and COM was not the concept; it was the fact that developers could not rely on off-the-shelf plumbing and still had to write so much of it themselves. The real solution is not just off-the-shelf plumbing, but rather standard off-the-shelf plumbing. If the plumbing is standard, the boundary problem goes away, and applications can utilize ready-made plumbing. However, all technologies (.NET, Java, etc.) use the client thread to jump into the object. How can you possibly take a .NET thread and give it to a Java object? The solution is to avoid call-stack invocation and instead to use message exchange. The technology vendors can standardize the format of the message and agree on ways to represent transactions, security credentials, and so on. When the message is received by the other side, the implementation of the plumbing there will convert the message to a native call (on a .NET or a Java thread) and proceed to call the object. Consequently, any attempt to standardize the plumbing has to be message-based.

And so, recognizing the problems of the past, in the late 2000s the service-oriented methodology emerged as the answer to the shortcomings of component-orientation. In a service-oriented application, developers focus on writing business logic and expose that logic via interchangeable, interoperable service endpoints. Clients consume those endpoints (not the service code, or its packaging). The interaction between the clients and the service endpoint is based on a standard message exchange, and the service publishes some standard metadata describing what exactly it can do and how clients should invoke operations on it. The metadata is the service equivalent of the C++ header file, the COM type library, or the .NET assembly metadata, yet it contains not just operation metadata (such as methods and parameters) but also plumbing metadata. Incompatible clients—that is, clients that are incompatible with the plumbing expectations of the service—cannot call it, since the call will be denied by the platform. This is an extension of the object- and component-oriented compile-time notion that a client that is incompatible with an object's metadata cannot call it. Demanding compatibility with the plumbing (on top of the operations) is paramount. Otherwise, the service must always check on every call that the client meets its expectations in terms of security, transactions, reliability, and so on, and thus the service invariably ends up infused with plumbing. Not only that, but the service's endpoint is reusable by any client compatible with its interaction constraints (such as synchronous, transacted, and secure communication), regardless of the client's implementation technology.

In many respects, a service is the natural evolution of the component, just as the component was the natural evolution of the object, which was the natural evolution of the function. Service-orientation is, to the best of our knowledge as an industry, the correct way to build maintainable, robust, and secure applications.

The result of improving on the deficiencies of component-orientation (i.e., classic .NET) is that when developing a service-oriented application, you decouple the service code from the technology and platform used by the client, from many of the concurrency management issues, from transaction propagation and management, and from communication reliability, protocols, and patterns. By and large, securing the transfer of the message itself from the client to the service is also outside the scope of the service, and so is authenticating the caller. The service may still do its own local authorization, however, if the requirements so dictate. Similarly, as long as the endpoint supports the contract the client expects, the client does not care about the version of the service. Versioning tolerance for the data passed between the client and the service is also built into the standards.

Benefits of Service-Orientation

Service-orientation yields maintainable applications because the applications are decoupled on the correct aspects. As the plumbing evolves, the application remains unaffected. A service-oriented application is robust because the developers can use available, proven, and tested plumbing, and the developers are more productive because they get to spend more of the cycle time on the features rather than the plumbing. This is the true value proposition of service-orientation: enabling developers to extract the plumbing from their code and invest more in the business logic and the required features.

The many other hailed benefits, such as cross-technology interoperability, are merely a manifestation of the core benefit. You can certainly interoperate without resorting to services, as was the practice until service-orientation. The difference is that with ready-made plumbing you rely on the plumbing to provide the interoperability for you.

When you write a service, you usually do not care which platform the client executes on—that is immaterial, which is the whole point of seamless interoperability. However, a service-oriented application caters to much more than interoperability. It enables developers to cross boundaries. One type of boundary is the technology and platform, and crossing that boundary is what interoperability is all about. But other boundaries may exist between the client and the service, such as security and trust boundaries, geographical boundaries, organizational boundaries, timeline boundaries, transaction boundaries, and even business model boundaries. Seamlessly crossing each of these boundaries is possible because of the standard message-based interaction. For example, there are standards for how to secure messages and establish a secure interaction between the client and the service, even though both may reside in domains (or sites) that have no direct trust relationship. There is also a standard that enables the transaction manager on the client side to flow the transaction to the transaction manager on the service side, and have the service participate in that transaction, even though the two transaction managers never enlist in each other's transactions directly.

I believe that every application should be service-oriented, not just Enterprise applications that require interoperability and scalability. Writing plumbing in any type of application is wrong: it wastes your time, effort, and budget, and it degrades quality. Just as with .NET every application was component-oriented (which was not so easy to do with COM alone) and with C++ every application was object-oriented (which was not so easy to do with C alone), when using WCF, every application should be service-oriented.

Service-Oriented Applications

A service is a unit of functionality exposed to the world over standard plumbing. A service-oriented application is simply the aggregation of services into a single logical, cohesive application, much as an object-oriented application is the aggregation of objects.

The application itself may expose the aggregate as a new service, just as an object can be composed of smaller objects.

Inside services, developers still use concepts such as specific programming languages, versions, technologies and frameworks, operating systems, APIs, and so on. However, between services you have the standard messages and protocols, contracts, and metadata exchange.

The various services in an application can be all in the same location or be distributed across an intranet or the Internet, and they may come from multiple vendors and be developed across a range of platforms and technologies, versioned independently, and even execute on different timelines. All of those plumbing aspects are hidden from the clients in the application interacting with the services. The clients send the standard messages to the services, and the plumbing at both ends marshals away the differences between the clients and the services by converting the messages to and from the neutral wire representation.

Tenets and Principles

The service-oriented methodology governs what happens in the space between services. There is a small set of principles and best practices for building service-oriented applications, referred to as the tenets of service-oriented architecture:

Service boundaries are explicit

Any service is always confined behind boundaries, such as technology and location. The service should not make the nature of these boundaries known to its clients by exposing contracts and data types that betray such details. Adhering to this tenet will make aspects such as location and technology irrelevant. A different way of thinking about this tenet is that the more the client knows about the implementation of the service, the more the client is coupled to the service. To minimize the potential for coupling, the service has to explicitly expose functionality, and only operations (or data contracts) that are explicitly exposed will be shared with the client. Everything else is encapsulated. Service-oriented technologies should adopt an "opt-out by default" programming model, and expose only those things explicitly opted-in. This tenet is the modern incarnation of the old object-oriented adage that the application should maximize encapsulation and information hiding.
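
To make the opt-in model concrete, here is a minimal sketch of a WCF service contract (the interface name and operations are hypothetical): only the members explicitly marked with the contract attributes cross the service boundary; everything else stays encapsulated.

using System.ServiceModel;

[ServiceContract]
interface IOrderService
{
   [OperationContract]
   void SubmitOrder(int orderId);     //Explicitly opted in: part of the contract

   void AuditInternally(int orderId); //Not opted in: invisible to clients
}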

Services are autonomous

A service should need nothing from its clients or other services. The service should be operated and versioned independently from the clients, enabling it to evolve separately from them. The service should also be secured independently, so it can protect itself and the messages sent to it regardless of the degree to which the client uses security. Doing this (besides being common sense) further decouples the client and the service.

Services share operational contracts and data schema, not type-specific metadata

What the service decides to expose across its boundary should be type-neutral. The service must be able to convert its native data types to and from some neutral representation and should not share indigenous, technology-specific things such as its assembly version number or its type. In addition, the service should not let its client know about local implementation details such as its instance management mode or its concurrency management mode. The service should only expose logical operations. How the service goes about implementing those operations and how it behaves should not be disclosed to the client.
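
WCF expresses this tenet through data contracts. As a hedged sketch (the Order type below is hypothetical), only the members explicitly marked with [DataMember] become part of the neutral, published schema; CLR-specific details such as the assembly version or the unmarked members never cross the wire.

using System.Runtime.Serialization;

[DataContract]
class Order
{
   [DataMember]
   public int OrderId;          //Part of the published, type-neutral schema

   [DataMember]
   public decimal Total;        //Also published

   public string InternalNotes; //Not a data member: remains an implementation detail
}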

Services are compatible based on policy

The service should publish a policy indicating what it can do and how clients can interact with it. Any access constraints expressed in the policy (such as the need for reliable communication) should be separate from the service implementation details. Put differently, the service must be able to express, in a standard representation of policy, what it does and how clients should communicate with it. Being unable to express such a policy indicates poor service design. Note that a non-public service may not actually publish any such policy. This tenet simply implies that the service should be able to publish a policy if necessary.

Practical Principles

Well-designed applications should try to maximize adherence to the tenets just listed. However, those tenets are very abstract, and how they are supported is largely a product of the technology used to develop and consume the services, and of the design of the services. Consequently, just as not all code written in C++ is fully object-oriented, not all WCF applications may fully comply with the basic tenets just described. I therefore supplement those tenets with a set of more down-to-earth practical principles:

Services are secure

A service and its clients must use secure communication. At the very least, the transfer of messages from the clients to the service must be secured, and the clients must have a way of authenticating the service. The clients may also provide their credentials in the message so that the service can authenticate and authorize them.
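
As one possible way of satisfying this principle (a sketch only, not the definitive configuration), WCF lets you turn on message security and require client credentials at the binding level, so that transfer security and authentication are handled by the plumbing rather than by the service code:

using System.ServiceModel;

//Message-level transfer security with username client credentials
WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;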

Services leave the system in a consistent state

Conditions such as partially succeeding in executing the client's request are forbidden. All resources the service accesses must be in a consistent state after the client's call. If an error occurs, the system state should not be left only partially affected, and the service should not require the help of its clients to recover the system back to a consistent state after an error.
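
In WCF terms, one common way to uphold this principle (shown here as a sketch with a hypothetical contract) is to require a transaction around every operation, so that the work either commits as a whole or rolls back as a whole:

using System.ServiceModel;

[ServiceContract]
interface IOrderProcessing
{
   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   void ProcessOrder(int orderId);
}

class OrderProcessing : IOrderProcessing
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public void ProcessOrder(int orderId)
   {
      //All resource access here is transactional: it either commits
      //as a whole or rolls back as a whole if an exception is thrown
   }
}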

Services are thread-safe

The service must be designed so that it can sustain concurrent access from multiple clients. The service should also be able to handle causality and logical thread reentrancy.
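
WCF can provide that synchronization for you. As a sketch (reusing the IMyContract and MyService names from the listing later in this appendix), the service below relies on the plumbing to serialize calls into the instance instead of hand-crafting locks; Single is in fact the WCF default concurrency mode:

using System.ServiceModel;

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class MyService : IMyContract
{
   public void MyMethod()
   {
      //WCF allows only one thread at a time into this instance,
      //so the body needs no explicit locks
   }
}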

Services are reliable

If the client calls a service, the client will always know in a deterministic manner whether the service received the message. In-order processing of messages is optional.
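
As a sketch of how WCF surfaces this principle, reliable messaging can be switched on at the binding level; the plumbing then acknowledges and retries delivery, while in-order processing remains a separate, optional setting:

using System.ServiceModel;

WSHttpBinding binding = new WSHttpBinding();
binding.ReliableSession.Enabled = true;  //Delivery is acknowledged and retried by the plumbing
binding.ReliableSession.Ordered = false; //In-order processing of messages is optional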

Services are robust

The service should isolate its faults, preventing them from taking it down (or taking down any other services). The service should not require clients to alter their behavior according to the type of error the service has encountered. This helps to decouple the clients from the service on the error-handling dimension.

Optional Principles

While I view the practical principles as mandatory, there is also a set of optional principles that may not be required by all applications (although adhering to them as well is usually a good idea):

Services are interoperable

The service should be designed so that any client, regardless of its technology, can call it.

Services are scale-invariant

It should be possible to use the same service code regardless of the number of clients and the load on the service. This will greatly reduce the cost of ownership of the service as the system grows and allow for different deployment scenarios.

Services are available

The service should always be able to accept clients' requests and should have no downtime. Otherwise, if the service has periods of unavailability the client needs to accommodate them, which in turn introduces coupling.

Services are responsive

The client should not have to wait long for the service to start processing its request. If the service is unresponsive the client needs to plan for that, which in turn introduces coupling.

Services are disciplined

The service should not block the client for long. The service may perform lengthy processing, but only as long as it does not block the client. Otherwise, the client will need to accommodate that, which in turn introduces coupling.

What's Next?

Since service-oriented frameworks provide off-the-shelf plumbing for connecting services together, the more granular those services are, the more use the application can make of this infrastructure, and the less plumbing the developers have to write. Taken to the ultimate conclusion, every class and primitive should be a service, to maximize the use of the ready-made plumbing and to avoid handcrafting plumbing. This, in theory, will enable effortlessly transactional integers, secure strings, and reliable classes. But in practice, is that viable? Can .NET support it? Will future platforms offer this option?

I believe that as time goes by and service-oriented technologies evolve, the industry will see the service boundary pushed further and further inward, making services more and more granular, until the most primitive building blocks will be services. This would be in line with the historical trend of trading performance for productivity via methodology and abstraction. As an industry, we have always traded performance for productivity. .NET, where every class is treated as a binary component, is slower than COM, but the productivity benefit justifies this. COM itself is orders of magnitude slower than C++, yet developers opted for COM to address the problems of object-orientation. C++ is likewise slower than C, but it did offer the crucial abstractions of objects over functions. C in turn is a lot slower than raw assembly language, but the productivity gains it offered more than made up for that.

My benchmarks show that WCF can easily sustain hundreds of calls per second per class, making it adequate for the vast majority of business applications. While of course there is a performance hit for doing so, the productivity gains more than compensate, and historically, it is evident that this is a trade-off you should make. WCF does have detrimental overhead, but it has to do with ownership, not performance (which is adequate). Imagine a decent-sized application with a few hundred classes, each of which you wish to treat as a service. What would the Main( ) method of such an application look like, with hundreds of service host instances to be instantiated, opened, and closed? Such a Main( ) method would be unmaintainable. Similarly, would a config file with many hundreds of service and client endpoint declarations be workable?
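
To see where that ownership overhead comes from, here is the standard hosting pattern for a single WCF service (a sketch; the MyService and IMyContract names and the address are hypothetical). Multiply this code, plus the matching config entries, by a few hundred classes and the problem is obvious:

using System;
using System.ServiceModel;

//Classic per-service hosting: construct the host, add an endpoint, open, and later close
ServiceHost host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8000/"));
host.AddServiceEndpoint(typeof(IMyContract), new WSHttpBinding(), "MyService");
host.Open();

//...the application runs...

host.Close();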

The truth is that in practical terms, WCF cannot support (out of the box) such large-scale granular use. It is designed to be used between applications and across layers in the same application, not in every class. Just as COM had to use C++ and Windows, WCF is bolted on top of .NET. The language used (C# or Visual Basic) is merely component-oriented, not service-oriented, and the platform (.NET) is component-oriented, not service-oriented. What is required is a service-oriented platform, where the basic constructs are not classes but services. The syntax may still define a class, but it will be a service, just as every class in .NET is a binary component, very different from a C++ class. The service-oriented platform will support a config-less metadata repository, much like .NET generalized the type library and IDL concepts of COM. In this regard, WCF is merely a stopgap, a bridging technology between the world of components and the world of services (much like ATL once bridged the world of objects and C++ with the world of components, until .NET stepped in to provide native support for components at the class and primitive level).

A Service-Oriented Platform

If you take a wider view, every new idea in software engineering is implemented in three waves: first there is the methodology, then the technology, then the platform.

For example, object-orientation as a methodology originated in the late '70s. The top C developers at the time did develop object-oriented applications, but this required manually passing state handles between functions and managing tables of function pointers for inheritance. Clearly, such practices required a level of conviction and skills that only very few had. With the advent of C++ in the early '80s came the technology, allowing every developer to write object-oriented applications. But C++ on its own was sterile, and required class libraries. Many developers wrote their own, which of course was not productive or scalable. The development of frameworks such as MFC as an object-oriented platform, with types ranging from strings to windows, is what liberated C++ and enabled it to take off.

Similarly, take component-orientation: in the first half of the '90s, developers who wanted to use COM had to write class factories and implement IUnknown, and concoct registry scripts and DLL entries. As a methodology, COM was just inaccessible. Then ATL came along, and this technology enabled developers to expose mere C++ classes as binary components. But the programming model was still too complex, since Windows knew nothing about COM, and the language was still object-oriented, lacking support for basic constructs such as interfaces. .NET as a component-oriented runtime provided the missing platform support for components at the class, primitive, language, and class library level.

Service-orientation emerged as a methodology in the early 2000s, but at the time it was practically impossible to execute. With WCF, developers can expose mere classes as services, but the ownership overhead prevents widespread and granular use. I do not have a crystal ball, but I see no reason why the waves of methodology/technology/platform should stop now. Extrapolating from the last 30–40 years of software engineering, we are clearly missing a service-oriented platform. I believe the next generation of technologies from Microsoft will provide just that.

Every class as a service

Until we have a service-oriented platform, must we suffer the consequences of either unacceptable ownership overhead (granular use of WCF) or productivity and quality penalties (handcrafted custom plumbing)?

Chapter 1, WCF Essentials, introduces my InProcFactory class, which lets you instantiate a service class over WCF:

public static class InProcFactory
{
   public static I CreateInstance<S,I>(  ) where I : class
                                           where S : I;
   public static void CloseProxy<I>(I instance) where I : class;
   //More members
}

When using InProcFactory, you utilize WCF at the class level without ever resorting to explicitly managing the host or having client or service config files:

[ServiceContract]
interface IMyContract
{...}

class MyService : IMyContract
{...}

IMyContract proxy = InProcFactory.CreateInstance<MyService,IMyContract>(  );
proxy.MyMethod(  );
InProcFactory.CloseProxy(proxy);

This line:

IMyContract proxy = InProcFactory.CreateInstance<MyService,IMyContract>(  );

is syntactically equivalent to the C# way of instantiating a class type:

IMyContract proxy = new MyService(  );

The difference syntax-wise is that with C#, there is no need to specify the queried interface, since the compiler will examine the class, see if it supports the interface, and implicitly cast the class to the assigned interface variable. Since there is no compiler support for services, InProcFactory requires you to specify the required contract.

However, the big difference between instantiating the class over WCF rather than C# is that when you do this all the powerful WCF features described in the rest of this book kick in: call timeout, encrypted calls, authentication, identity propagation, transaction propagation, transaction voting, instance management, error masking, channel faulting, fault isolation, buffering and throttling, data versioning tolerance, synchronization, synchronization context affinity, and more. With very little effort, you can also add tracing and logging, authorization, security audits, profiling and instrumentation, and durability, or intercept the calls and add many degrees of extensibility and customization.

InProcFactory lets you enjoy the benefits of WCF without suffering the ownership overhead. To me, InProcFactory is more than a useful utility—it is a glimpse of the future.



2 Comments

I'm really against this "make each class a separate service" concept.

It reminds me of the microkernel approach, and especially Workplace OS, but worse.

The idea of microkernels was that each service (like memory) was an easily managed entity with carefully defined inputs and outputs which could be consumed.

Workplace OS took this to the limit, and performance was so poor that no amount of optimization would help. So they had no choice but to scrap the project (we're talking $2B, 10 years ago).

The bad thing is that once such a design is made and implemented, just like Workplace OS, it is very difficult and expensive to fix.

Classes by nature frequently communicate with other classes; serializing everything makes this far too slow. Think about big orders with 10,000 line items: each line item could be a separate service call to add up some total, and what do you gain?

I consider service-to-service communication and some sort of nearness analysis a vital part of SOA, and grouping related classes into services will help significantly by:

- Increasing performance
- Allowing for better maintenance and less change, as related changes will deploy together.
- Allowing for better OO/maintenance and Agile techniques within each service (though less so outside).
- Allowing better integration and use of different services, such as Java, CRM/ERP services, etc.
- Encouraging chunkier cross-service calls and better attention paid to the service interface (for easier use, compatibility/maintenance, and performance).
- Discouraging the service as a DB stored-proc wrapper.
- Avoiding having to put performance hacks in the wrong place to try to avoid cross-class calls.

This does not mean services should be 20 classes; 3 is a good baseline, but 1 to 9 is OK in a number of cases, depending on class relationships. Basically, if 2 classes communicate all the time, they should be in the same service.

I can see, in 5-15 years, hosted environments/frameworks hosting classes which can be moved to other machines or stay in-proc, though it's far too early for this, and you NEED the ability for classes to communicate via normal in-proc calls (without serialization).

I note you mention this for the future, but telling people to use each class as a service now, without some more comments on the hazards (especially when they don't have a lot of experience in SOA), will lead to a lot of frustration for some.

OK, this is sort of unrelated, but it's part of this article and part of Appendix A of the "Programming WCF Services" book.

Juval Löwy writes:"The first modern computer was an electromechanical, typewriter-sized device developed in Poland in the late 1920s for enciphering messages. The device was later sold to the German Commerce Ministry, and in the 1930s the German military adopted it for enciphering all wireless communication. Today we know it as the Enigma."

As far as I know, the Enigma was never developed or invented in Poland; it was invented first by a German, Arthur Scherbius. Other people also invented similar ideas in parallel: Hugo Alexander Koch (Dutch), Edward Hugh Hebern (USA), Arvid Gerhard Damm (Sweden).

Juval Löwy's confusion might be that the Polish Cipher Bureau (Biuro Szyfrów) broke the Enigma code.

Anyway, I guess somebody should double-check my info, as it's mostly from Wikipedia and might well be wrong. But I think it's not.

So I hope it will be corrected in the 3rd Edition of his wonderful book.

 
