Sunday, December 28, 2008
The first is the power that organizations have over individuals. Random/arbitrary increases of interest rates on charge cards with little recourse. Banks making errors, taking a long time to pay what's owed and then, when they pay it twice by mistake, demanding that the error be corrected "immediately or else...". The petty bureaucracies of homeowners' associations - you can't park a Ford F150 in your driveway, but a Cadillac Escalade is OK (Thanks Frisco, Texas).
The second is in the everyday communication between parties. There is the kind of power asymmetry described above, but then there is what I call "interest asymmetry" - where one party in a conversation says something which is of no interest to the other party. We have all been involved in conversations with spouses, other family members, children, where something that is riveting to them is really dull to us. In the interests of harmony, I will not cite specific examples here....
At a wider level, we see this interest asymmetry showing up when we use social networking sites. We have the opportunity to converse with many people using these tools, but these conversations have inherent asymmetry too. What we choose to say is, at least, interesting to us. What we choose to "listen to" has variable degrees of utility. I am interested in family postings about the kids, but not terribly interested in the ins and outs of Commercial Property Law in England (something my brother in law is an expert in). For non-family/non-friends I am typically interested in work related stuff, or special interests (food, sailing...). So when I see the jumbled stream of messages, I put filters on, e.g. "Oh this is Paul talking about LIBOR again, I think I will ignore it." or "This is Robin talking about the kids, Christmas trees, presents, etc. I will ignore that." The latter case because I follow Robin's insightful postings on technology, but not on his children.
People who are followed by a large crowd (because of celebrity, interests, self-promotion) have an even more asymmetric communication approach using the media of social networking because they have so much to say, and limited opportunity to listen if all their followers were to respond. While they will often have set themselves up with expectations of symmetric communication, the style quickly becomes asymmetric.
Tuesday, December 23, 2008
The key is to think of this kind of architecture as subscription driven or subscription dominated. This has been brought home, big time, in the social networking frameworks (like Twitter). People who post on Twitter essentially say whatever comes into their heads. We follow individuals or groups because, on balance, we get more out of following them than not. However, we will need to filter. For example, there are Twitterers who post about the industries that I am interested in, the beer they like to drink, their cooking interests, their children, their other hobbies,.....
I typically don't want to see all of that, but the poster shouldn't be deterred and stop. The poster's responsibility is to post. The listener's responsibility is to filter the dull stuff - or the stuff that is dull to the specific listener. That is often hard because the signal to noise ratio for any specific listener will be different than the signal to noise ratio for any other.
The same is true in any kind of event dominated system - human or otherwise. The listener is in a position to make decisions about what it is interested in, what it may respond to. The "teller" must continue to deliver the narrative.
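The division of responsibility described above - post everything, filter on the listener's side - can be sketched in a few lines of Python. All the names, topics and messages here are invented for illustration:

```python
# A minimal sketch of listener-side filtering: the poster posts everything,
# and each listener applies its own filter. Topics and authors are hypothetical.

def make_filter(interesting_topics):
    """Return a predicate that keeps only messages this listener cares about."""
    def keep(message):
        return message["topic"] in interesting_topics
    return keep

stream = [
    {"author": "Paul",  "topic": "finance",    "text": "LIBOR moved again"},
    {"author": "Robin", "topic": "family",     "text": "Kids and Christmas trees"},
    {"author": "Robin", "topic": "technology", "text": "A post on SOA"},
]

# This listener follows technology but filters out finance and family chatter;
# another listener would build a different filter over the very same stream.
my_filter = make_filter({"technology"})
visible = [m for m in stream if my_filter(m)]
```

The point of the sketch is that the stream itself is unchanged: each listener's signal to noise ratio is a property of their filter, not of the posters.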
Legion were the companies that I worked at where filing clerks had messed up - the most extreme being the travel industry person who didn't know what to do with the "audit coupons" on a paper airline ticket. She filed them in a shoe box under her desk...
Fast forward to the database world and guess what we have. A super fast filing system. So feeding the database is like feeding the filing cabinets. Stuff is put away so you can find it again, but it isn't the operational life blood of the company. The operational life blood is the interactions between humans, the interactions between systems - in reality the events that cause value to be created for the organization. Our systems are event driven and data-filed - not database driven, at least not if they are to be truly valuable and truly model the way that value is created in the information systems.
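The "event driven and data-filed" distinction can be made concrete with a small sketch: the business event drives the reaction, and the database write is just the filing step. Everything here (the event shape, the lists standing in for systems) is hypothetical:

```python
# Sketch: the event drives the business logic; filing the record in the
# "database" is a side effect, not the driver. All names are illustrative.

filing_cabinet = []      # stands in for the database: a super fast filing system
revenue_recognized = []  # stands in for the operational reaction

def handle_shipment_event(event):
    """React to the business event first, then file it away for later lookup."""
    # Operational life blood: the event causes value to be created.
    revenue_recognized.append(event["order_id"])
    # Filing: put the record away so it can be found again.
    filing_cabinet.append(event)

handle_shipment_event({"order_id": "A-100", "item": "software", "qty": 1})
```

Inverting this - polling the filing cabinet to discover what happened - is the "database driven" style the paragraph above argues against.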
Monday, September 1, 2008
I wrote a thought piece about auto insurance and a direct link between insurance and the vehicle. I was reminded of that piece by a TV commercial that I saw today. In the commercial, the car knew which insurance company was covering it, and if it didn't like the company, then it would find reasons not to go. In one case it ejected the keys from the ignition, in another it refused to unlock its doors and in the third case, the tires deflated.
The piece I wrote years ago also had knowledge between the car and the insurer, but had some more active features. For example, if we could envisage a car with multiple settings (the boring, around town go to work setting or the "boy racer" weekend setting, or others), then we could imagine having variable rate insurance - essentially the insurance premium is selected based on the car mode that is selected. If you never engage the "boy racer" mode, you never get charged that premium. So essentially we are looking at dynamic premium pricing based on a number of conditions all known about from the edge.
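A toy sketch of that dynamic premium idea: the premium rate is selected from whichever mode the car reports for each day. The mode names and rates are invented for illustration, not actual insurance figures:

```python
# Hypothetical mode-based dynamic premium pricing: each day is charged at
# the rate of the driving mode actually engaged that day.

DAILY_RATES = {
    "around_town": 2.00,   # the boring, go-to-work setting
    "boy_racer":   7.50,   # the weekend setting carries a higher premium
}

def weekly_premium(mode_log):
    """Sum the per-day rates for the modes the car reports having used."""
    return sum(DAILY_RATES[mode] for mode in mode_log)

# Six days of commuting plus one weekend blast: only that day costs extra.
week = ["around_town"] * 6 + ["boy_racer"]
premium = weekly_premium(week)  # 6 * 2.00 + 7.50 = 19.50
```

If the "boy racer" mode is never engaged, its rate is simply never charged - which is exactly the "pay for the risk you actually take" model described above.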
Lots of implications here. We could have a different key for each family member - or a key/thumbprint combo. If the thumbprint isn't a licensed (i.e. known to the insurance company) person, then the car doesn't start. So you could lock out unauthorized (friends of your children) drivers. What about speeding? The car knows that you have been speeding :-(. A car rental company (Acme) in Connecticut attempted to fine a renter for excessive speed. This was struck down in February 2002 by the Department of Consumer Protection. However, it may well be possible for insurance companies to use this kind of monitoring to assess premiums and to assist with accident "fault finding."
Lots of issues of course, but the key is to be thinking in these kinds of terms - where real time (or nearly real time) information can be used to guide decision making - especially pricing.
Tuesday, July 15, 2008
At one client the following question (or a variant) comes up quite frequently: "We want to service enable our C++ back end code, but the services framework is all Java; how can we get them to co-exist?"
So we have an implementation, we can't use it directly but we do want to leverage it. Clearly the implementation itself isn't the service. If it were there would be no discussion. Also the interfaces it already provides may not be the same as the operations we would want to offer.
So we take the time-honored approach and wrap the code. The wrapper then exposes the service's operations and deals with the complexity of mapping those operations to whatever the original code supported. Now what is the Service in this scenario? Perhaps it is the wrapper code - after all it is the wrapper that has the signature, the wrapper that is directly invoked, the wrapper that will have the nice QoS measures, the wrapper I will look up in my registry.... However the wrapper doesn't "do" anything. And if the wrapper is really generic and abstract, it doesn't help to describe the service as being the wrapper. No one going to the registry (white pages) will have a clue what the service does. A description like "Wraps existing C++ code so it can be made available in the services framework" is hardly a confidence booster.
So what to do? My general favorite is to use a special purpose framework (auto generated if possible) to wrap the C++ (or other legacy) code. Make sure that the service is the wrapper and not the C++ implementation and manage that in the registry.
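The shape of that wrapper is the classic adapter pattern. Here is a minimal Python sketch (Python standing in for the service layer; `legacy_calculate` is a made-up stand-in for the C++ back end):

```python
# Sketch of the wrapping approach: a service-level adapter exposes the
# operations we want to offer and maps them onto the legacy interface.
# `legacy_calculate` is hypothetical, standing in for the C++ code.

def legacy_calculate(op_code, a, b):
    """Pretend legacy implementation with an opaque, code-driven interface."""
    if op_code == 1:
        return a + b
    if op_code == 2:
        return a * b
    raise ValueError("unknown op_code")

class OrderPricingService:
    """The wrapper IS the service: it owns the signature the registry describes."""

    def price_with_tax(self, net, tax_rate):
        # Map the business-level operation onto the legacy call convention.
        tax = legacy_calculate(2, net, tax_rate)   # multiply
        return legacy_calculate(1, net, tax)       # add

service = OrderPricingService()
total = service.price_with_tax(100.0, 0.08)  # 100 + 100 * 0.08 = 108.0
```

Note that the registry would describe `OrderPricingService` in business terms; the op-code convention of the legacy code never leaks out of the wrapper.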
After all, one of the SOA principles is autonomy. The actual implementation doesn't have to be constrained as long as the interface supports the appropriate operations.
Why is this view of the business important?
It is holistic: actions that occur in one of these flows have an impact on the others. Shipping the software to a customer (Physical Goods) can tell the money flow that revenue recognition can occur and can deliver information that the shipment has taken place.
Looking at the interactions of these flows gives us clues into:
- Adherence to accounting practice (when do we recognize revenue, for example?)
- Inefficiencies in process (oh, we recalculate the revenue in many different places, even though nothing has changed)
- Opportunities to improve quality of service (let's move the credit check - an information flow request) to earlier in the cycle so we don't incur costs before we know if the customer can/will pay)
- Business modeling - "what if" (What happens if we attempt to move the credit check earlier?)
- Real time information delivery from both Goods and Money (How many units did we ship today?)
- Realization that the information needed to control a Goods flow isn't available, which identifies a process breakdown
Ultimately, of course, what we deal with in information systems are informational abstractions of the Goods and Money. For us in IT it is all information. However to the business it isn't so, although we use the lens of the information system to help us look at the underlying realities. Our information system lenses are so distorted, however, that we often don't or can't know what is properly in or out of focus.
There are some real nuances to worry about here too. For example, in a content delivery system (e.g. a newspaper), the information (content) is treated as the Goods. Likewise in banking much of the money that flows through the bank is actually treated as Goods - but with a significant impact on Money and Information as well.
It is not trivial to create a GIM model, but the effect is enormous:
- Silo thinking is reduced - all streams can see the effects on each other of actions taken. Moving Credit Check later in the Information Stream affects the Goods stream because it will curtail process actions.
- Sub-optimal decision making is highlighted - "If I take this action, what breaks?"
- It provides a common "grammar" for talking about the business between the business and IT - a goal that has not been reached in many years of trying.
- An end to end trace of a business process - with all the stakeholders shown - can be built and optimized.
Why doesn't a process model work?
This is a form of process model - one where we specifically illustrate what happens to all of the major "data" sets (Goods, Information, Money). So instead of being a step by step way through the process, it is a way to handle the information exchanges more effectively. So it is an enriched process model.
What other models help?
We can leverage many of the standard models (e.g. in UML), but nothing gives us a complete enough view.
How do we do it?
It revolves around a fundamental understanding of business modeling - starting with the operational flow of goods. It is after all the operational flow of goods from start to finish that defines what the business delivers. So drive from flow of goods - starting anywhere. That depends on the scope and depends on the business. The key is that it is an operational goods flow that matters. That operational goods flow means starting with operational process.
In the operational goods flow, swim lanes and swimming pools are appropriate, but often it is a good idea to use business iconography and not boxes/lines to show things. That way it illustrates that we are firmly in the business domain, not IT.
For each step in the handling of the flow ask:
- Has this step had any effect on the Money side - e.g. recognized revenue, incurred measurable cost, disposed of an asset...
- What information might be available as a result of this action being taken? Is the information interesting in real time, is it needed for subsequent analysis, is it in any sense privileged?
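Asking those two questions at each step amounts to annotating the Goods flow with its Money and Information effects. A minimal sketch, with entirely invented steps and effects:

```python
# Sketch of a Goods flow annotated with the answers to the two questions:
# what Money effect does each step have, and what Information does it emit?
# Steps and effects are hypothetical.

goods_flow = [
    {"step": "pick", "money_effect": None,
     "info": ["inventory decremented"]},
    {"step": "ship", "money_effect": "recognize revenue",
     "info": ["shipment notice", "tracking id"]},
    {"step": "bill", "money_effect": "raise invoice",
     "info": ["invoice issued"]},
]

# Which steps touch the Money flow at all? These are the points where the
# three flows must be reconciled.
money_steps = [s["step"] for s in goods_flow if s["money_effect"]]
```

Even a table this small starts to surface the interactions: "pick" emits information but no money effect, while "ship" is where Goods, Money and Information all intersect.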
Yes, the questions really are that simple. The answers aren't and the implications aren't, but the questions are. What you do with the wealth of information is another matter.
We will notice over time that the timing needs of the Money flow and the Information flow don't exactly match the Goods flow. This is to be expected. We will also notice that the goods flow will continue regardless of whether the Money and Information flows are keeping up. In many cases we will see that the other flows play "catchup" and deal with the implications of Goods flow later. This is a primary cause for inconsistencies between different parts of the business and difficulties in rationalization.
It is of course impossible to fully serialize the business so that the flow of Goods is interrupted until all the Money and Information flows have properly completed. That's one reason why we look at the physical Goods flow - that simply doesn't wait.
In subsequent articles, I will talk about some of the interesting patterns and interactions that we see when doing this kind of modeling.
Friday, June 13, 2008
In the late 1980s Sybase introduced the notion of triggers into their flagship RDBMS product. A trigger was a piece of code that was executed when "something interesting" happened to the data under management. Triggers provided a very flexible way of reacting to changes like, for example, deleting a row from the database. That deletion could then cause a series of other actions (usually embedded in stored procedures), for example cascading the delete of one row to others - and thus enforcing referential integrity. Quite an elegant solution at the database level.
Of course, being about the only capability available for guaranteeing that the events would be detected, triggers started to get abused: whole rafts of business logic were embedded in stored procedures, and the database became both the logic engine and the storage engine. No real separation of concerns there.
Now let's fast forward to 2008 and look for parallels and opportunities. For transaction processing, we are beginning to see much lighter weight data management engines (look at Google's Bigtable implementation or Amazon's SimpleDB). Business logic is being pushed into services - probably where it should be. That leaves us with a bit of a hole. The value that triggers provided is still needed, but now it should be at the same level of data abstraction as the data managed by the services. That level is typically at the business object level.
So taking the simple concept of a trigger, we can ask ourselves if there is value to the enterprise in knowing when "interesting" things happen to business objects. (Of course this is still too broad, but bear with me here). There are some business events that are quite interesting to know about. For example a big customer win (resulting in the creation of a new customer in business terms) is likely to be very interesting to the organization as a whole. It can be announced as a morale booster/internal PR exercise, it can add some heft to other sales activities,... The list is endless. It isn't sensible to notify this at the database level - that is too low level, too proprietary. It makes a whole lot more sense to publish the "event" on the corporate internal nervous system.
It is impossible for the customer management service to know who or what might be interested in the information. Just as in the database world, the table with the "add trigger" doesn't actually know which other tables might be affected. All the customer management service knows is "that something just happened and there may be others interested". It then becomes the job of the corporate nervous system to let this event be "broadcast". Anything interested can then pick it up and make its own decisions.
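That "publish and let interested parties react" mechanic is just publish/subscribe. A minimal in-process sketch, with hypothetical topic names and handlers standing in for the ESB:

```python
# Sketch of the business-object-level "trigger": the customer management
# service publishes that something happened; it neither knows nor cares
# who listens. The registry and handlers are illustrative stand-ins for
# a corporate nervous system / ESB.

subscribers = {}   # topic -> list of handler callables

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, event):
    # Broadcast to whoever registered interest; the publisher has no idea
    # who they are, just as a table with a trigger doesn't know its dependents.
    for handler in subscribers.get(topic, []):
        handler(event)

notified = []
subscribe("customer.created", lambda e: notified.append(("internal_pr", e["name"])))
subscribe("customer.created", lambda e: notified.append(("sales_team", e["name"])))

# The service just announces the event; both listeners react independently.
publish("customer.created", {"name": "BigCo"})
```

A real ESB adds durability, security and distribution on top of this, but the contract is the same: the publisher announces, the subscribers decide.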
It is therefore the responsibility of the ESB to provide this kind of capability in the containers that surround and manage the business objects, just as it was the DBMS's responsibility a generation earlier. It is the enterprise business architecture's job to determine which of these events should be triggered, and how the interested parties might react.
No, there isn't much new here - we have had pub/sub architectures before, usually to aid with application integration. What is new here is that we can use the standards based capabilities that our SOA frameworks give us, together with a firm understanding of the need to drive change, to guide us to delivering a more flexible, sustainable architecture.
Wednesday, May 7, 2008
- We can think of it like an extension of the earliest programming models we ever dealt with - subroutines and functions
- We don't have to get all snarled up in worrying about parallel processing/multi-tasking and other hard to debug kinds of models
- We can break up problems into conveniently sized chunks by some kind of hierarchic decomposition
- All languages support the model intrinsically.
Does this mean that it is the best way? Perhaps for some problems, but surely not for all. I see a subroutine or function as being a rather surly, uncommunicative programming equivalent of a slob sitting in a chair, swilling beer and only responding when he feels like doing so. Very good at doing one thing at a time, easy to discover (after all who can miss the chair, beer can and TV remote).
This model is so dominant that we have even simulated it on top of asynchronous protocols. So even in a system that doesn't support request/response directly (e.g. an asynchronous, message based system) we still allow for the same semantics, but this time fudging them to use pairs of asynchronous messages. Changing the transport doesn't really change the programming model. We still think in RPC terms.
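Simulating request/response over an asynchronous transport usually means tying a pair of one-way messages together with a correlation id. A minimal sketch, with in-memory queues standing in for the messaging system:

```python
# Sketch of request/response fudged over pairs of asynchronous messages.
# The deques stand in for a message transport; shapes are hypothetical.

import itertools
from collections import deque

_ids = itertools.count(1)
outbound, inbound = deque(), deque()

def send_request(payload):
    """Client side: send a one-way message tagged with a correlation id."""
    corr_id = next(_ids)
    outbound.append({"corr_id": corr_id, "payload": payload})
    return corr_id

def serve_one():
    """Server side: consume one request and emit a reply as a second one-way message."""
    msg = outbound.popleft()
    inbound.append({"corr_id": msg["corr_id"], "payload": msg["payload"].upper()})

def await_reply(corr_id):
    """Client side: match a reply to the original request by correlation id."""
    for msg in list(inbound):
        if msg["corr_id"] == corr_id:
            inbound.remove(msg)
            return msg["payload"]
    return None

cid = send_request("hello")
serve_one()
reply = await_reply(cid)  # "HELLO"
```

Note how much machinery the correlation id adds just to recreate a subroutine call - which is exactly the "changing the transport doesn't change the programming model" point.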
Enter SOA and we often have the same view. This time instead of an RPC, we sometimes refer to a "Service Call". Immediately we should be on the defensive. Call implies request/response. Should that be what our services do? Well, of course sometimes they should.
Now let's look at things from a different perspective. In an Event Driven Architecture, when something happens, the component that realizes something has happened feels the need to communicate this goodness to anyone who will listen. Some listeners will take action as a result. It is almost a gossiping model, a model where as soon as information is available, it is able to be shared. Not only does sharing take place, but it is absolutely impossible for the originator or broadcaster of the information to control what happens to it. It is out in the aether with all sorts of actions taking place as a result. Some of the actions might be beneficial, some not.
So here, we have a much lighter weight, less surly architecture - an approach where the actions are distinguished from the notifications, where the actors are actively listening and making decisions for themselves. An architecture that exhibits autonomy and a loosely coupled style. Of course the trouble with a gossiping style is that a large amount of information is made available, only some of which will be acted on. So it is very "chatty". Chattiness in IT systems correlates well with increased resource consumption. In constrained systems (especially those that are network or disk constrained) increasing resource consumption can have a large, and negative, impact on the cost of doing business.
In the EDA model, information "publishers" and "consumers" are constantly doing other things. They are true multi-taskers, ever prioritizing their activities, deciding whether to wait for more information or whether to act immediately. Capable of acting and absorbing information at the same time, making complex decisions largely autonomously. In other words, there is less friction in this style of interaction.
Information, of course, doesn't need to be acted on immediately. A useful tidbit might be recognized a long time after some other tidbit, making a situation clear. The arrival of the final tidbit can trigger a complex set of actions, relying on information disassociated from the original event, but somehow dependent on it.
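That "final tidbit triggers the action" pattern can be sketched as simple event correlation: facts accumulate per key, and the action fires only once the set is complete. The event names and keys here are invented:

```python
# Sketch of deferred correlation: tidbits accumulate per order, and the
# arrival of the final piece triggers the complex action. Names are
# hypothetical.

pending = {}     # order_id -> set of facts seen so far
completed = []   # orders for which the triggered action has fired

REQUIRED = {"payment_cleared", "goods_shipped"}

def on_tidbit(order_id, fact):
    """Absorb a piece of information; act only once the required set is complete."""
    facts = pending.setdefault(order_id, set())
    facts.add(fact)
    if facts >= REQUIRED:          # superset test: all required facts seen
        completed.append(order_id)

on_tidbit("A-7", "goods_shipped")     # not enough on its own
on_tidbit("A-7", "payment_cleared")   # the final tidbit triggers the action
```

Real complex event processing engines add time windows and richer matching rules, but the core idea is the same: the trigger is disassociated in time from the events it depends on.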
Bottom line, we need to understand when it is appropriate to use one style or another. Sometimes it is good for a bunch of slobs to hang out, grunt and drink beer. At other times a more nimble, more overlapping, less controlling approach can deliver systems that function more smoothly.