Thursday, December 15, 2011

Clouds and scalability

This post comes from an online exchange with Roger Sessions (@rsessions on twitter), Leo de Sousa (@leodesousa) and Chris Potts (@chrisdpotts).

Roger makes the point that the various cloud vendors make their case on "scalability" without defining the term sufficiently. As marketing (almost) always does. So he has a point. The question for me then is, "What scales?". It is my firm conviction that when using terms you intend to quantify, you had better get the dimensions correct. Is scalability a benefit? Of course that depends on what it means. It feels good, hits us in the unthinking (or, as Daniel Kahneman calls it, "System 1") area. It's only when we look more deeply that we realize we have no idea what it means. Yes, I'll have 7kg of scalability, please.

It all gets to the economics of what you think you want to do. Here are some examples:
  • I want to be able to increase the workload that my system is capable of without having to buy, provision, manage a bunch of servers - Scaling for workload
  • I want to be able to add lots of new users without having to.....- Scaling for users
  • I want my system to be available and priced according to the actual usage. Kinda like electricity. So when all my users are signing in, I want to allocate lots of capacity because that's intensive. But when it is running along smoothly I need less. Scaling for peak demand
  • I want to empower a demonstration team so they can bring up new instances of a standardized template and demonstrate something to a customer/prospect and then tear it down while incurring as little cost as possible. - Scaling for efficiency of people
  • I want to be able to add new functionality with less effort/cost than previously. Scaling for functionality
  • I want to reduce the burden on in house departments (finance, legal, HR or other "overhead" departments) in the deployment of equipment. - Scaling for organizational effectiveness
While I am about it, I wonder what the effective scaling order looks like. For example, maybe I want to scale linearly for workload. In other words as demand increases, supply increases at the same rate. No effective reduction in cost/transaction.

Or I might be prepared to accept slightly more - the ratio being that for each increase in demand, I get a 1.1 increase in cost of supply.

Or I might want to see a reduction - for each increase in demand, my cost goes down.
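To make the idea of a "scaling order" a bit more concrete, here is a purely illustrative Python sketch - the exponents and unit cost are made-up numbers, not vendor figures - showing how cost per transaction moves under linear, slightly worse than linear, and slightly better than linear supply cost:

    # Hypothetical illustration only: the exponents and unit cost are assumptions.
    # The point is that "scalability" only means something once you say how
    # supply cost moves with demand.

    def supply_cost(demand, exponent, unit_cost=1.0):
        """Cost of supplying `demand` units when cost grows as demand ** exponent."""
        return unit_cost * demand ** exponent

    for label, exponent in [("linear", 1.0), ("worse than linear", 1.1), ("better than linear", 0.9)]:
        for demand in (100, 1000, 10000):
            cost = supply_cost(demand, exponent)
            print("%18s: demand=%6d  cost=%10.0f  cost/txn=%.3f" % (label, demand, cost, cost / demand))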

So, as Roger observed, make the vendors of cloud services be specific about what they are selling when they sell scalability.

Friday, September 9, 2011

We call that government

I was reading this post about QANTAS having to stop on the way from Dallas to Brisbane to refuel several times since starting the "nonstop" service. The service is "direct" from DFW to Sydney - which in the strained parlance of the travel industry means it isn't direct, it just means that you have the same flight number all the way, regardless of the number of stops. But I digress...

The posting got me to thinking about the diminishing returns when you add overhead. To fly further, you have to add more fuel. But adding more fuel doesn't give you a linear increase. You have to add more fuel to compensate for the fuel you had added to make you fly further... At the limit, all you do is fly a plane with just fuel and the necessary flight crew. Nothing useful comes of it - except the corner cases where you are testing limits on purpose. The focus is wrong. It isn't about getting the plane there (again some corner cases like getting the plane to a warzone), it's about getting the passengers and freight there.

The mission gets mangled if the focus is on the plane, not the passengers or freight.
Similarly, in many corporations, if we consider the "running of the company" to be equivalent to the fuel, then as we add more "running the company" resources, we get diminishing returns.

Eventually we have a company that is dedicated to just running itself, does nothing useful, but everyone is busy.

We call that government

Wednesday, August 17, 2011

Social Media and CRM

I am pretty late to the blogosphere about the differences between social media and CRM. But in customer meetings I see this kind of confusion all the time. So here goes. Oh, and since I mainly work in the travel industry, my examples come from there.

So first, what are the goals of CRM? Knowing the customer. Having enough insight into the customer to persuade them to buy more, to become more intimate - generally to increase their direct value. Maybe to right wrongs - by providing compensation if bad things happen. So CRM tends to foster a set of 1:1 exchanges. Necessary but not sufficient.

In contrast, social media is about somehow leveraging a network of relationships. It's one stage on from CRM (or maybe many stages beyond CRM!). So if I have a bad experience on an airline, I need the airline to do what it needs to do for compensation. But the social aspect doesn't stop there. I am heavily armed with well connected devices, a network of acquaintances and friends, and time - especially on the flight. So in some ways the social network space is a competition between the airline and the customer, each trying to get "the story" out to their social network orbits. The magic happens when the stories coincide - when the CRM aspect of looking after me is told both by the airline and by me. The double dip of great publicity.

But when things go badly for a customer (e.g. my luggage misconnected), the bad story needs to be acknowledged through the social channels - posting on the passenger's FB wall, for example - with some compensation offered at the same time. Otherwise the annoyed passenger with lots of time will send out a stream of invective to any and all who may listen.

So when thinking about Social Media realize:
  • It's a conversation
  • It's about the whole orbit
  • Don't forget to do CRM blocking and tackling

Microphone ready events

I was working with some customers this week, and the topic of "how do you generate events?" came up. Not how as in the dirty mechanics, but how, when something is done manually today, you get it systematized. So, please bear with me on the following example.

Most airlines board their passengers in some kind of groups. So, announcements are made to get the people in the correct groups to board. A common way to do this is for the gate agent to announce that "Group n" is to board now. Perhaps it would be cool to page all the people in the appropriate group and have them rock up to board. Sounds like fun (and remember folks, this is just an illustration). So if you were delivering messages to the smartphone or wherever, you would simply send the message. Easy in an event based system?

Not so fast. Current systems probably don't recognize a legitimate event for this. It is procedure and policy (+ experience, queue monitoring, temperature on the jet bridge, screaming children,....) that actually triggers the announcement - what I call the "microphone event". So how would we systematize that?

  • Change the gate agent system to do this - and think about the complexity involved, and the myriad of other delivered messages?
  • Automate the system to do something really clever, like start the boarding process clock at t-30 minutes, monitor the number of boarding passes lifted/scanned (as a proxy for queue depth), and when that number approximates the number of people in the group, release the next group....
  • Other thoughts?

This brings up the point that where we have humans sensing and responding to situations, they will raise events and deliver them in a variety of ways. Sometimes shouting through the mic is best of all. But if it does become desirable to do these kinds of things automatically, then use what the people do as cues for the kinds of events that the system needs to generate, sense and respond to.
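As a purely illustrative sketch - none of this reflects any real gate system - here is roughly what the boarding-pass-scan proxy described in the second option above might look like if it were systematized. The function names, group sizes and the 80% threshold are all assumptions:

    # A toy sketch, not a real gate system: release the next boarding group when
    # the count of scanned boarding passes approaches the size of the group that
    # is currently boarding. Names, sizes and the 0.8 threshold are assumptions.

    def should_release_next_group(scanned_in_current_group, current_group_size, threshold=0.8):
        """Scanned passes act as a proxy for queue depth at the gate."""
        return scanned_in_current_group >= threshold * current_group_size

    def boarding_controller(group_sizes, scan_feed):
        """scan_feed yields the cumulative scan count for the group now boarding
        (the counter is assumed to reset each time a new group is released)."""
        group = 0
        yield "Release group %d" % group          # boarding clock started, first group called
        for scanned in scan_feed:
            if group + 1 < len(group_sizes) and should_release_next_group(scanned, group_sizes[group]):
                group += 1
                yield "Release group %d" % group  # the systematized "microphone event"

    # Three groups of 40 passengers; scan counts arriving over time
    print(list(boarding_controller([40, 40, 40], iter([10, 25, 33, 12, 36]))))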

Thursday, July 28, 2011

Enterprise Architects - With apologies to Buffy Sainte-Marie

In her great song Universal Soldier (sung here by Donovan), Buffy Sainte-Marie describes an impossible ideal for a soldier. Sometimes I think we enterprise architects have to have a similar set of impossible and conflicting attributes. We have to be (as discussed in other posts) archivists and activists, but then we also have to be good listeners (not a common trait among activists), understand the business in totality and yet have a reasonable (whatever that is) understanding of the technology that powers it. We have to be agents of change and stewards of stability. We have to understand the patterns of interactions and their implications. We have to understand systems concepts (systems of record/systems of reference) and we have to understand the financials. We have to understand some of the legal implications of our decisions and we have to understand risk. We have to understand how to negotiate and when to strongarm.

All in all, quite a difficult and conflicting set of characteristics.

Friday, July 22, 2011

Irregular is the new regular

That's a bit of a mouthful - but it is the synthesis of a lot of material from the SITA technology conference held near Brussels in June this year.
So, I'll put it in context and then see where it leads us. The context is that in the airline industry, cost cutting has become so critical (tiny profit margins flying planes) and the global situation has become so complex that we see more and more disruption. Then the clod-footed governments step in and put excessive fines onto the airlines for delays, and the whole schedule becomes a crap shoot. Airlines have rarely actually operated the flight schedule they publish - something always seems to go wrong - but with the ever decreasing tolerances, the ripple effects of tiny perturbations can be huge. Layer in larger events (weather, volcanoes, strikes, terrorism..) and we have an unholy mess.

Now in many systems development approaches and methods, we are encouraged to do the "happy path" stuff first. Work on the "normal" cases, deliver those and then do the abnormal things on subsequent iterations. That's great if the normal cases truly are normal. And by that I mean deliver most of the value, and absorb the least cost. But what if there is no "normal"? Then how do we prioritize?

Clearly the "customer" - the stakeholder paying for the solution or on whose behalf the system is being developed will have input to that. But therein lies the rub. There's all sorts of stuff that can go wrong that will potentially affect the architecture and design. And no this isn't enterprise architecture - this is much smaller than that. This is solution architecture - and a classic source of tension in development.

The solution architect has the responsibility for ensuring that the solution is fit for purpose - and a large part of that purpose is understanding the degree to which change will stress the system. If this weren't necessary then every solution would be a simple client/server solution (yeah, tongue firmly in cheek here).

So understanding what's normal and what isn't, and deciding which change stressors will be taken into consideration, without getting all crystal-bally, is what makes solution architecture hard. It's also what makes it fun!

Don't be seduced by quick wins on the happy path when the need for the system to absorb change stress is paramount. Don't get all wrapped up in clever system integrity and cunning architecture when you are simply managing transactions - e.g. a recipe file!

Monday, March 21, 2011

Intellectual Honesty, Ivory Towers, pragmatism and other emotive phrases

This isn't just enterprise architecture. It's about architecture at lots of levels.

Like many posts, this all started with an innocent (hah, likely!) tweet and then became richer as people joined in. Here's the original: RT @SLSingh Bad Science by @BenGoldacre asks "Why don't journalists link to primary sources?" originally posted by @frodedanielsen (someone I have never met). At which point the irrepressible Aleks Buterman and Doug Newdick chimed in.

But then Aleks posed the hard question... @seabird20 @dougnewdick I'm a big fan of intellectual honesty (#popper) but isn't that easily confused for "academic" #entarch?

That's what got me to this point - and it made me reflect on framing: using language that manages to demean an "opponent" while at the same time boosting your own value. "Academic" is just one such word. It is often used by teams that are under great pressure to meet a deadline and who see that deep analysis could extend the time to delivery, demolish some closely held beliefs, or make them look stupid because they forgot something. Used in context: "oh, you architects are so academic, that isn't a concern". While others will be thinking, "yes, but it will become one...". Of course the opposite word, used by those not wishing to do deep analysis, is "pragmatic". We have a pragmatic solution.... Thus cutting off possible debate/discussion.
Architects can (and should) find themselves in a difficult position here. They (we) must be seen to be taking the side of pragmatism, and yet ensure that the teams are not taking on excessive technical debt. We need to be asking the important questions around operability of the system, its performance and availability envelope, and its need for flex when it is used in unanticipated ways - remembering, of course, that solutions get used in unexpected ways and in unanticipated quantities.
Make sure we use data and data values to describe the system envelope. How many users? How many transactions? How many transactions per second? What is the reliability/bandwidth of the communication path? What does it cost per transaction to use an external service? What if that service is not available? Can the system be unavailable for 1 minute, 5 minutes, 1 hour...? What does it cost to use a "pet" technology? And so on. Otherwise known as the -ilities. Sometimes called non-functional requirements, but I would argue that they are equivalent to functional requirements. If these are not met you don't have a functional system.
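One way to keep ourselves honest here is to capture that envelope as actual data rather than adjectives. A minimal sketch, with entirely made-up field names and figures (not a standard of any kind), might look like this:

    # A minimal sketch of capturing the "system envelope" as data rather than
    # adjectives. Field names and figures are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class SystemEnvelope:
        concurrent_users: int
        transactions_per_second: float
        max_outage_minutes: int          # how long can we afford to be unavailable?
        external_call_cost_usd: float    # what does each call to the external service cost?
        external_service_fallback: str   # what do we do when that service is down?

    envelope = SystemEnvelope(
        concurrent_users=5000,
        transactions_per_second=120.0,
        max_outage_minutes=5,
        external_call_cost_usd=0.002,
        external_service_fallback="queue and retry",
    )
    print(envelope)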

In VPEC-T terms this is a classic conflict of values. The EAs are charged with the rather nebulous protection of the future. The project teams are charged with getting value out there fast. Those value systems are fundamentally at odds. So as architects we have the responsibility to be scrupulous in our questioning, demand the data that supports the project's assertions of cost/risk/time to market. Finally those that write the checks have to decide.
What we architects must not do is put up woolly, ivory-towered thinking to prove how clever we are, how stupid everyone else is, and how the world will come to an end if the solutions are not developed the way we dictate. We must not be sowers of FUD, but analysts of relevant data.

Thursday, March 3, 2011

Decisions under uncertainty

What data do we use to make decisions? Do we rely on the most up to date information possible? Do we use rough approximations? How do we reason about our decision thinking? Taking a concrete example first, I will then look for further insights and, of course, solicit comment.
My favorite example of decision under uncertainty is when I am making a purchasing decision and paying for the purchase by check (cheque!). I want to have some degree of certainty that the check will clear, so I have several strategies at my disposal.
  1. Call the bank, tell the bank that I have a particular purchase to make, give the intended cheque number and amount, and get some kind of a guarantee that the check will clear (even if my expected paycheck deposit bounces).
  2. Call the bank/go to an ATM or whatever, get the current (available) balance, find out which payments/deposits have already cleared and satisfy myself that there is enough for the check I am about to write
  3. Call the bank/go to the ATM or whatever and get the current balance, ensure it is enough to cover the check I am about to write
  4. Look in the check ledger (or home accounting system) and satisfy myself that the balance is sufficient.
  5. Assume that because I have checks left, I must have money.
Well, I was just kidding about #5. These choices all have some degree of risk (from very little risk in #1 to significant risk in #5). But which one do we choose, and why? I suspect it has a great deal to do with how we perceive the risks. So, for example, if I were working for a large company with a great history of making payroll, the likelihood of a paycheck bouncing is very low, so I will act confidently and perhaps take any of the choices 2, 3, 4. However, working for an underfunded startup, I may well choose 1. There are lots of other variables in here too. How much does the bank charge for bouncing a check - vs the value of the check? How much reputation (aka credit report) damage might I suffer? Etc. So I make a decision. But except in case 1, the decision is somewhat uncertain.

In our daily lives, we constantly act on imperfect data, but we use our experience to guide us. I am pretty sure that my socks will be in the sock drawer, so I will go there first. But if they are in the laundry I might have to take a corrective action; the risk is low, the actual impact is low, so I will go to the drawer first. Oh, and no, I don't consciously do the risk analysis of every small thing - that would be very counterproductive. That's why we have experience! We can act without consciously managing trivial risk.

Now moving into information systems, things look a little different - but should they? We attempt to limit the risk of doing the wrong things (including being gamed by clever risk exploitation) by using tight centralized control (aka databases of record, single points of truth, etc.). However, these kinds of systems are inherently fragile. They are also more unreliable than we think. Any piece of data we retrieve is only valid at the time we request it, unless we wrap the request with explicit transaction boundaries. But if we had to serialize access to data every time we wanted to use any of it, everything would gum up and come to a grinding halt. So we always use approximations.

The "trick" is to realize that you are always retrieving an approximation, realize that much decision making can be relegated to a system of reference, but that when making changes you must affect the system of record. And when affecting the system of record we do need transactions! Transactions give us the "truth" at that moment. The decision to change the "truth" is however very often made on approximations. This isn't bad - it's reality. Of course we can contrive situations where we have to have "truth" just as in my example 1 above, but those cases are more rare than we imagine.

It then becomes an architectural and design principle - what source do we use in order to make decisions? How does the system of record react when attempts are made to update it based on out of date information? That all plays into the Values and Trust axes relevant to that decision making.

Monday, February 28, 2011

Requirements "gathering"

This makes it sound as if requirements are like mushrooms in the forest - you take a small basket, go to some secret place and pick all the things that you hope are edible, bring them home, cook them, eat them and, you hope, not die of food poisoning.

But there is a lot that has to go into the creation of a "proper" set of requirements. This post was inspired by some observations from Nigel Green (@taotwit), Alec Sharp (@alecsharp) with Color commentary from Anne Manes (@atmanes), Kristof Dierckxsens (@kdierc) and Doug Newdick (@dougnewdick). All those @ signs give the twitter ids of the people involved. Mine is @seabird20.

The riff all started with this innocuous little tweet from Alec...

alecsharp @seabird20 My point - I see far too many cases where BAs gather literally 1000s of requirements but never synthesize them into a solution.

This of course led me to thinking - we have people who have "analyst" in their title, but who almost always need to do some level of synthesis. Analysis and synthesis are often presented as opposites, so what gives? Here, the expertchoice team point out that you can't have a solution from analysis alone! Well that's kinda obvious, but still well put.

We typically pair analysis and design (with design taking the synthesis role in this counterpoint, but it really isn't the same thing).

Now back to requirements, why do we have thousands of detailed requirements listed on powerpoints, in word docs, in Excel spreadsheets or wherever? Does it really help us to create so much "electronic concrete?"

The Agile movement has been very helpful in helping us do a much lighter weight job of handling the requirements, but I think that many agile practices also miss the boat.
So what matters?

First off, if I have my architect hat on (not necessarily my EA hat), I am interested in those requirements that will make a difference to the structure of the functioning system. House analogy time: what it's going to be made of and how it will be put together, not the color of the curtains. And that plays right back into the excellent book, "How Buildings Learn" by Stewart Brand: being concerned with the shearing forces, the forces of change.

(Aside): One golden rule - if stuff is going to change a lot, don't spend inordinate time documenting it. A mocked up screen is a much quicker/cleaner artifact than a page of documentation. But that also misses the big point.

Counting things might help you to estimate them... but counting things doesn't tell you some of the essential things that will determine the structure. So let's think about some examples:
  • You are getting together the "requirements" for an accounting system. Do the following make a difference to the structure (and therefore have to be right early)?
    • It has to be multi-currency
    • We have to support monetary amounts in the 10s of billions and 3 decimal places
    • Connectivity to remote users is intermittent
    • The users don't all speak the same language
    • We have thousands of accounts to manage
    • We want to consolidate data entry onto a single screen for account master data
    • We need to be able to drill down from summary data into the details
    • The system has to be available for transaction posting from 0600 until midnight
    • ...
Of course these aren't real requirements. Some of them make a big difference to our lives. Multi-currency??? Well, if you have ever had to convert a single currency system to multi-currency you understand those implications. So yeah, pay attention. Size of numeric fields? Not so much. It's pretty much a nit. It is a pain to change in the formal documentation if you have carefully mocked up a screen layout showing (8,2) and you now need (11,3) - that's a lot of editing if you have specified every screenshot. So don't!

Drill down? Well, that can be huge because it exposes a level of functionality different from just posting transactions.

Do all of the "requirements" appear at the same time? - Definitely not. Looking at a simple mock up showing postings doesn't show that I have multi-currency. We have to do analysis to discover the individual postings need - and then synthesis to put it back together fully in context.

Story telling is vital here - our analysts have to tell stories, they have to understand the business future state that is being proposed, they have to understand the constraints, and then put all the ingredients together at the right time. Analysts have to switch to being synthesists - and maybe have to be both all the time. However, as Alec said in a tweet:

"alecsharp@taotwit @seabird20 I often say a better job title would be "Business Synthesist" but a) hard to say after pints and b) terrible acronym "

and another

"alecsharp @seabird20 @taotwit Without synthesis, you end up with reams and reams of disparate "context-free" requirements."

We are often led to incredibly detailed specifications by our large consulting brethren. They do this as a means of managing their profitability, not the customer's value. "Oh, the risk is too great unless we gather these requirements in excruciating detail, so pay us to do that." And then, "Well YOU didn't tell us, so now let's have a change order, so pay for that too."

Requirements might be gathered as discrete little pieces of functional capability, but we need the cookery of synthesis to turn them into a non-toxic meal.

Tuesday, January 18, 2011

More on what event models can learn from database models

One of my earliest posts was entitled "What can SOA learn from RDBMS?" In that posting I was arguing for "event triggers" that fire when "interesting" business objects change. Of course, "interesting" needs further definition, and so for that matter does "change". However, that isn't where I am going in this post.

Historically in the database world, most systems implemented "pessimistic" locking mechanisms. In a pessimistic approach, access (with possible intent to update) to a data resource could not be obtained unless the requestor were able to get exclusive control of the resource. More recent systems employ an "optimistic" approach whereby the data resource is only locked while it is actually being updated. The semantics are quite different, but the intent is the same - make sure that we don't have conflicting updates.

In general, pessimistic locking delivers absolute guarantees, but may cause throughput reductions or excessive resource consumption. Optimistic locking assumes that there will be little contention and therefore doesn't consume resources or hold locks until it has to. In a system with optimistic locks the update requestor may have to deal with the contention behavior (retrying operations or whatever). In the pessimistic scenario, the requesting application can't proceed unless it is safe to do so.

In eventing models, we have similar trade-offs to make. No, they aren't lock based, but they do have implications for throughput. These are all focused around guaranteed sequence and guaranteed delivery, both of which impose throughput limitations on an event based system. Guaranteed sequence isn't really any kind of guarantee, however. It is a guarantee that over a certain time frame, messages will arrive in sequence. In some cases this unpacks into not applying events in consuming systems until the guarantee period has expired.

Why, you might ask? Imagine that you have timestamped events coming in; you may not know whether an event that arrives has an earlier predecessor that has somehow been held up. So you might choose to wait for the appropriate time interval "just to make sure nothing is coming in later" and then apply all events in proper sequence. Set the event sequence window to 5 minutes and in the worst case you are processing 5 minute batches. That doesn't sound very friendly.

So maybe you think about applying sequence numbers to the events. That's fine (well, kind of) unless the sequence number generator is a central shared resource, in which case there may be contention for it. At least, however, you can detect that events are out of sequence: if a receiving system perceives a gap in sequence numbers, it can wait for the "time out period" or the appearance of the missing sequence number(s), whichever happens first, and then apply the updates. This is a bit intrusive to the event creators - managing sequence numbers was probably not in their original plan.
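A rough sketch of what that sequence-number-plus-timeout behavior might look like on the consuming side. The class shape, the 300-second default and the "skip the gap on timeout" policy are all assumptions for illustration (and the timeout is only re-checked when another event arrives):

    # Apply events in order; when a gap appears, wait for either the missing
    # sequence number or a timeout, whichever comes first.

    import time

    class SequencedConsumer:
        def __init__(self, gap_timeout_seconds=300.0):
            self.expected = 1
            self.pending = {}            # out-of-order events keyed by sequence number
            self.gap_first_seen = None
            self.gap_timeout = gap_timeout_seconds

        def receive(self, seq, event):
            """Return the events that are now safe to apply, in sequence order."""
            self.pending[seq] = event
            applied = []
            while True:
                if self.expected in self.pending:
                    applied.append(self.pending.pop(self.expected))
                    self.expected += 1
                    self.gap_first_seen = None
                elif self.pending:
                    now = time.monotonic()
                    if self.gap_first_seen is None:
                        self.gap_first_seen = now          # a gap has just opened
                        break
                    if now - self.gap_first_seen >= self.gap_timeout:
                        self.expected = min(self.pending)  # give up on the missing number
                        self.gap_first_seen = None
                    else:
                        break
                else:
                    break
            return applied

    consumer = SequencedConsumer()
    print(consumer.receive(1, "e1"))   # ['e1']
    print(consumer.receive(3, "e3"))   # []  (waiting for 2)
    print(consumer.receive(2, "e2"))   # ['e2', 'e3']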

In neither of the above scenarios do we get maximal throughput. They are both intrusive and will deliver quite inconsistent throughput. They are rather akin to the pessimistic locking approach taken in database systems.

Looking at it a different way, we can perhaps detect (externally to the applications) that things have happened out of sequence, maybe using an out of band control infrastructure. Thus if a sender identifies when something was sent (using a messaging infrastructure), the receiver's act of receiving will identify when it was received. If the receipts are out of sequence, the infrastructure can alert and trigger actions. The assumptions are that:
  • Sequence errors are few and far between
  • It is OK to recognize that the sequence has been violated
  • The infrastructure is capable of re-delivering the events in an appropriate sequence
  • There have been no irrevocable side effects as a result of the out of sequence receipt.
Attempting to detect post-hoc is likely (and I have not done the work to prove this yet) to deliver more consistent (and higher) throughput than attempting to guarantee, with a slight increase in risk.
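A small sketch of the post-hoc idea, assuming the infrastructure logs send and receive times out of band (the function and field names are illustrative, not from any particular product):

    # Let events flow at full speed, log send/receive times out of band, and let
    # the infrastructure flag anything received out of order.

    def find_sequence_violations(receipts):
        """receipts: (event_id, sent_at, received_at) tuples in the order received.
        Returns the ids of events that arrived after something sent later than them."""
        violations = []
        latest_sent_seen = None
        for event_id, sent_at, received_at in receipts:
            if latest_sent_seen is not None and sent_at < latest_sent_seen:
                violations.append(event_id)
            else:
                latest_sent_seen = sent_at
        return violations

    # Event "b" was sent before "c" but arrived after it
    log = [("a", 1, 10), ("c", 3, 11), ("b", 2, 12)]
    print(find_sequence_violations(log))   # ['b']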

This looks to be a fruitful area of research (or if the research has already been done, a fruitful area for some best practice patterns) as we treat 2011 as "The Year of the Event" as Todd Biske proposes.

Thursday, January 13, 2011

ROI Based Business Cases and Long Range Value

In a series of interesting twitter posts @taotwit (Nigel Green), @erikproper, @oscarberg, @jdevoo have been discussing modelling Value - other than monetary value. In VPEC-T (modeling based on Values, Policies, Events, Content and Trust) the abstract concept of Value recognizes things other than monetary value.

In other posts, I have seen reference (and I am sorry that I cannot cite the references) to  a primary role of Enterprise Architecture as the team/approach for encouraging project funding across siloes. It was expressed more elegantly in the other posts, I admit!

These various posts and thoughts about Value led me to thinking about entrepreneurial organizations and risk averse organizations. Some of these arguments are kind of old hat, but in companies where innovation is a core driver (e.g. Apple) the introduction of products isn't based (entirely) on ROI. Compare and contrast with very "mature" organizations where business as usual prevails (in the sense that the company delivers the same products, with the same methods, with the same focus on return) because of shareholder or other leadership demands.

Architects are also sometimes thought to be ivory tower/"build it and they will come" kinds of people. This is an accusation that does have merit - we have been guilty of such sins - but it is also used as a weapon for political/organizational purposes. The Trust relationships are key here: because architects are likely to be destabilizers (of the status quo and current power bases), there will be a failure to Trust the EAs by those who will be affected/disempowered by innovation.

We see this happening frequently in enterprises where there is a process or organization for innovation. "Innovation will be handled only in the xxx organization". Anyone else innovating outside that organization will be ignored. That's an intersection along the Value/Trust axes.

Wednesday, January 12, 2011

2011 The year of the event

Todd Biske, on his excellent blog, has offered the imperative, "Let's make 2011 the year of the event". That is a truly wonderful cause. I am shamelessly taking the idea and adding some of my own thinking here. I urge you to read Todd's original post, though.

I think we have had misgivings about eventing models since the early EAI days. With the advent of EAI we worked with events as integration mechanisms, without attempting to do anything about the underlying systems, and without a sensible taxonomy.


Now perhaps as we think seriously about events and event models there are 2 big ideas that we need to consider.

The first is that when some business object changes state, it should let "the world" know. Think Business Object trigger - here's a link back to 2008 on that.

The second is situational awareness. Here we really need a proper mechanism for understanding what the effects of an event across a range of "observers" might be. It becomes a question of responsibility. When multiple "observers" see the same event they may all choose to handle it. However, they are likely to be inconsistent (not necessarily any harm in that), some might claim ignorance ("you never told me that"), and some might have taken different action had they known how another observer behaved...

Using human communications and human triggered interactions as a model, we need to look at the "situational awareness" aspect and decide what to do about conflicts in understanding and conflicts in behavior.
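To make the first of those ideas a little more concrete, here is a minimal sketch of a business object letting "the world" know when its state changes. The in-process bus and the Booking example are stand-ins of my own for whatever messaging infrastructure and domain objects you really have:

    # A minimal sketch of a business object trigger: every state change publishes
    # an event to whoever is listening.

    class EventBus:
        def __init__(self):
            self.subscribers = []

        def subscribe(self, handler):
            self.subscribers.append(handler)

        def publish(self, event):
            for handler in self.subscribers:
                handler(event)

    class Booking:
        def __init__(self, bus, record_locator):
            self.bus = bus
            self.record_locator = record_locator
            self.status = "HELD"

        def set_status(self, new_status):
            old, self.status = self.status, new_status
            # The trigger: the object lets "the world" know it changed state.
            self.bus.publish({"object": "Booking", "id": self.record_locator,
                              "from": old, "to": new_status})

    bus = EventBus()
    bus.subscribe(print)                      # one "observer"; there could be many
    Booking(bus, "ABC123").set_status("TICKETED")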

Saturday, January 1, 2011

Time Part 2 - January 1, 2011

This post follows on from the introduction to thinking about time posted here.

In the previous post, I introduced a few of the nuances of time. It isn't about the obvious things like format, timezone, precision, calendar, etc. Those are important and can be the source of errors in coding, but they aren't the worst issues by any means. Of more importance is the meaning of time in a specific context. So, to start with, let's look at the kinds of time we might be concerned with. I like to think of time as either a "point in time" or a duration. Then associated with times there are various "operations" to be applied - dealing with relative time.

So thinking about the common "kinds" of time we see, we might classify this way:
  • Points in time
    • Real time - coincident (to a fine tolerance) with something else
    • Near real time - within the same "transaction" scope as something else
    • Actual time - When something happened
    • Notification time - When an event was notified. This may or may not be the time the event happened
    • Observation time - when an observer receives notification that something happened
    • Effective time - a point in time when something becomes effective
    • Expiry time - a point in time when something is no longer valid
    • Cut off time - a point in time when changes are no longer accepted (may behave like an expiry time)
    • ?
  • Durations
    • Validity period - a time duration where something is valid
    • Length of time - a time duration where something is happening
    • Relative duration - a duration that is defined without there being an actual start time. For example, a half in soccer. A type level concept and not an instance level concept.
Clearly in this initial writing there are some quite nebulous words like "something" or "transaction" or "event". Perhaps "happening" would be a more useful word.

Anyhow, much of the difficulty in systems comes about because of ambiguity related to time. Something may "happen", causing the place at which it happens to consider that the happening has occurred, but it has yet to be observed by other entities. Until it is, the other entities cannot know that the state has changed.
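A small sketch of carrying several of these kinds of time on an event, so a consumer can reason about the gap between when something happened, when it was notified, and when this observer saw it (the field names are my own, not a standard):

    # Illustrative only: an event record that distinguishes actual, notification
    # and observation time.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ObservedEvent:
        happening: str
        actual_time: datetime                        # when it actually happened
        notification_time: datetime                  # when it was notified (may differ)
        observation_time: Optional[datetime] = None  # when this observer received it

        def observe(self):
            self.observation_time = datetime.now(timezone.utc)
            return self

    evt = ObservedEvent(
        happening="flight departed",
        actual_time=datetime(2011, 1, 1, 9, 58, tzinfo=timezone.utc),
        notification_time=datetime(2011, 1, 1, 10, 5, tzinfo=timezone.utc),
    ).observe()
    print(evt.observation_time - evt.actual_time)    # how stale is our view?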

This is not just a systems phenomenon, but a physics phenomenon too. Light moves at a finite speed, so when we see an object (e.g. a star) that has emitted light, we are seeing the object as it was at the time the light was emitted, and not as it is "now" in our frame of reference as observers. We don't (and cannot) know whether the object even still exists. We make assumptions, of course, but in reality we don't know. In other words there is considerable ambiguity.

The big question then is how to handle such ambiguities/inconsistencies of existence. We could take the "GOD" view: there is only one view of the truth, all questions must be asked of that view, and there shall be no other views. That's fine, except it takes finite time to ask questions, so even if there is a "GOD" view there is, and always will be, inconsistency.

We can take a view that allows each observer to have its own "correct" view. But that doesn't feel any better. We can never be sure, when asking that observer, whether the same set of data has been included as in another observer's set. So for example, when I am presenting some conclusions, it might be quite reasonable for a questioner to ask if I had included the research that was published in December 2010. The point is that unless we are pretty explicit about what is in and what isn't, it is hard to determine what something actually means.

Ultimately we cop out and attempt to have a GOD view of our own data and then allow for other pockets of data which we "update" by taking extracts from the "GOD" view. That has sort of worked - we batch things together and know that the information was correct at midnight yesterday or some other known time.

All that changes in systems that work on individual events, where data are not batched, and where a system is as up to the moment as possible, because we don't know, and in today's systems cannot know, what has actually been included and what hasn't.

The taxonomy of time that I presented earlier is really only useful as an analysis set. It helps us as analysts answer questions about the data as the data are purposed and repurposed in the (extended) enterprise. It gives a vocabulary to use when asking questions around a particular happening or duration.