Lean IT Value Streams: An Entity Lifecycle Approach

The concept of IT value is hotly debated. Positions range from dismissive (e.g., Nicholas Carr’s “IT Doesn’t Matter”) to enthusiastic. Many IT frameworks and authors address the issue of value in some form. ITIL version 3 suggests that the core value proposition of IT reduces to utility and warranty. ISACA, the sponsor of COBIT, considered the question significant enough to create a new parallel framework, Val IT (i.e., the VALue of IT).

Following its dramatic success in manufacturing, the Lean approach to value is moving into the world of services, including IT service management. However, in order to apply Lean thinking to IT management, the definition of value must be clarified, and activities that directly add value must be mapped out and distinguished from those that do not.

Even in manufacturing, this may not be straightforward; and in the case of intangible services, it is even more difficult. The thought experiment often used is to ask the question, “If I were the customer, would I pay for this?”

But this can obscure the question, “Who is the customer?” Is the customer the consumer or the shareholder? Their value experiences are distinct:

  • The customer experiences value through the individual acquired product (good or service)
  • The shareholder experiences value through the product offering’s market success in the aggregate, relative to competing offerings – or competing use of capital.

In other words, product lifecycle management – the shareholder’s concern – is distinct from the “quote to cash” sales cycle – the customer’s concern. And, analogously, the IT service lifecycle is distinct from the specific transactions flowing through a given IT service. Again there are two distinct value experiences:

  • The end user experiences value through the specific interaction with the system (e.g., creating a new customer)
  • The service sponsor experiences value through identifying an opportunity and finding a partner to build it effectively and efficiently, and operate it so that it is useful and reliable across its user or customer base.

It is the second version of value that figures into the various representations of an IT value chain, including ITIL v3’s concept of the IT Service Lifecycle. This lifecycle is in theory controlled through the function of Service Portfolio Management, as an evolution of the original service catalog concept and co-optation of the concept of Application Portfolio Management.

What constitutes a “portfolio”? All members of a portfolio share comparable data and go through a common lifecycle. Both of these criteria indicate that a portfolio’s subject may be an entity in the conceptual modeling sense. What else is required for an entity to merit the “portfolio” designation? The term “portfolio” implies things of consequence managed over a long time horizon.

How do we determine “what is of consequence”?

Entity lifecycle analysis is a well-known technique for systems analysis and design [Robinson, 1979; Rosenquist, 1982]. The major “entities” (nouns) of interest are compiled (e.g., in a conceptual or logical data model) and their state changes analyzed, from birth to death.

For example, a Customer record might be created at the first sale to an identified individual, and then archived or deleted after five years of inactivity by that individual. Or an Asset record might be created when the purchase order is first issued, and then deleted from the active system when the asset has been retired from service. (These are, of course, simplifications.)
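
A minimal sketch of such an entity lifecycle as an explicit state model may make this concrete; the entity, states, and transitions below are hypothetical simplifications for a Customer record, not drawn from any particular method or tool:

    from enum import Enum, auto

    class CustomerState(Enum):
        """Hypothetical lifecycle states for a Customer record."""
        CREATED = auto()    # first sale to an identified individual
        ACTIVE = auto()     # ongoing purchases or interactions
        INACTIVE = auto()   # no activity; retention clock running
        ARCHIVED = auto()   # e.g., after five years of inactivity

    # Allowed state changes, from birth to death, for the entity.
    TRANSITIONS = {
        CustomerState.CREATED:  {CustomerState.ACTIVE},
        CustomerState.ACTIVE:   {CustomerState.INACTIVE},
        CustomerState.INACTIVE: {CustomerState.ACTIVE, CustomerState.ARCHIVED},
        CustomerState.ARCHIVED: set(),  # terminal state
    }

    def transition(current: CustomerState, target: CustomerState) -> CustomerState:
        """Apply a state change, rejecting transitions the lifecycle does not allow."""
        if target not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition: {current.name} -> {target.name}")
        return target

Analyzing each major entity in this way – enumerating its states and legal transitions – is the essence of the technique.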

In performing high level architecture, one can gain insight by identifying the longest lived entities in the conceptual model. For example, in a simple manufacture/sales model, the concept of Customer might live on average 10 years, while the concept of Service Call might live on average a week. (Long term records of the Service Call don’t count; what we are interested in is the duration between record creation and its quiescence.)

This approach can be applied to the universe of discourse characterizing IT Service Management. (Closely related/synonymous domains include IT Governance, Enterprise Architecture, and Systems Engineering.) This domain will also be called “the business of IT” throughout this paper.

The purpose of applying this technique to the business of IT is to better understand the major activities of the large IT organization, so that the value of IT can be better managed and Lean techniques in particular can be enabled.

(Since we are applying this technique to a domain that in part contains it, the exercise is recursive and related to metamodeling; caveat lector.)

The following durations for the major ITSM data entities seem reasonable. Note that these durations are scoped by the organization – we are not talking about the lifecycle of an information concept or technology product except insofar as it has entered into the organization’s operations.

 Entity                Typical duration
 Information Concept   Highly variable – up to life of organization
 Service               Years
 Asset                 Years
 Technology Product    Years
 Person                Years
 Facility              Years
 Contract              Years
 Vendor                Years
 Project               Months
 Release               Weeks
 Change                Weeks
 Service Request       Days
 Incident              Hours
 Problem               Days/Weeks (if longer, feeds back to Project and Release)
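
To make the ranking exercise concrete, here is a minimal sketch of how the table above might be captured as data and sorted to surface the longest-lived (portfolio-worthy) entities; the numeric values are rough order-of-magnitude placeholders, not measurements:

    # Rough order-of-magnitude durations, in days; illustrative placeholders only.
    TYPICAL_DURATION_DAYS = {
        "Information Concept": 10_000,  # up to life of organization
        "Service": 3_650,
        "Asset": 1_825,
        "Technology Product": 1_825,
        "Person": 1_825,
        "Facility": 3_650,
        "Contract": 1_825,
        "Vendor": 1_825,
        "Project": 180,
        "Release": 30,
        "Change": 14,
        "Service Request": 3,
        "Incident": 0.25,
        "Problem": 21,
    }

    # The longest-lived entities are the natural candidates for portfolio management.
    for entity, days in sorted(TYPICAL_DURATION_DAYS.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{entity}: roughly {days} days")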

I propose that the following three lifecycles are the core of enterprise IT:

  • Service lifecycle management (from demand through retirement)
    • Application Service Lifecycle
    • Infrastructure Service Lifecycle
  • Technology product lifecycle management
  • Asset lifecycle management

These lifecycles are distinct yet interrelated; each has its own logic and none can be subsumed under another. In terms of the IT value chain, the essentials of the service lifecycle are primary, and the rest are supporting. But the supporting lifecycles are independent of the primary value chain in terms of timing and, often, initiating events.

Definitions

Service lifecycle management is first among equals; without the need for services, none of the rest have any meaning. The service lifecycle is a value chain, the core of “ERP for IT,” and essentially identical to the ITIL v3 lifecycle. The other lifecycles support it – but cannot be fully folded into it.

It is possible to decompose service lifecycle management into application and infrastructure layers. Many large IT organizations structure themselves exactly along these lines, with engineering and operations distinct from application development and management. But both are at base services, providing benefits that must be produced and consumed simultaneously.

Application service management depends on its infrastructure support. Infrastructure, in turn, depends on the acquisition of computing resources (capacity, based on the acquisition of computing assets through supply chain) and the establishment of engineering standards (technology product lifecycle management). These engineering standards underpin (in a mature IT shop) the “service offerings” presented by infrastructure organizations.

Technology product lifecycle management. Enterprise architecture is often called on to assist in the evaluation of new and novel products purporting to solve many problems. New products require functional and non-functional evaluation, definition of acceptable configurations, and ongoing control against policy and baseline configuration. Minimizing redundancy (e.g., reducing the number of database management systems, or programming languages, or operating systems) is a key concern, often at odds with other drivers.

Sets or “patterns” of product combinations may be defined and translated into service offerings, in the interest of greater consistency and provisioning speed. All products require upgrades and patches, are challenged by new competitors and disruptive forces, eventually lose vendor support, and ultimately must be removed (intentionally and with confidence) from the environment. This entire lifecycle is too often managed as a set of functional silos, even though it crosses multiple functions such as vendor, configuration, and security management.

Note that this lifecycle is often decoupled from the service lifecycle. Technology products are often introduced with the assumption that they will support multiple services, and of course any one service may depend on multiple technology products. In the case of such fundamental computing platforms as OS/390, CICS, MQ, and TCP/IP, the technology product may well outlive multiple services platformed on it, themselves long-lived. This overlapping interaction between the service and the technology product demonstrates that the technology product lifecycle cannot be subsumed into the service lifecycle, at least in organizations that have multiple services sharing common infrastructure.

The service lifecycle often does drive the technology product lifecycle. Typical anti-pattern: the project team tasked with implementing the service determines that a new technology product (perhaps specifically optimized for the service) is required, and pushes to have this new product’s approval accelerated, perhaps encountering reluctance from those responsible for the technology product portfolio (e.g., enterprise architects), who may point to redundancy or non-functional issues (security, scalability, performance, etc.).

Technology products generally subtype into hardware and software (the boundaries are not crisp and some products include both).

Asset lifecycle management. Assets are not the same as technology products, which are types, not instances. “Compaq DL360” is a type, while a specific device with a distinct serial number is an asset. “Oracle 11” is a type, while a particular license is an asset.
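
A minimal sketch of this type/instance distinction as a simple data model may help; the class names, attributes, and status values are assumptions for illustration:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TechnologyProduct:
        """A type in the technology product portfolio, e.g. a server model or a DBMS version."""
        name: str                             # e.g. "Compaq DL360" or "Oracle 11"
        category: str                         # "hardware" or "software"; the boundary is not crisp
        lifecycle_status: str = "evaluation"  # e.g. evaluation -> approved -> retired

    @dataclass
    class Asset:
        """An instance of a technology product: a specific device or a particular license."""
        identifier: str                       # serial number or license key
        product: TechnologyProduct            # an asset presupposes a suitable product
        lifecycle_status: str = "ordered"     # e.g. ordered -> in service -> disposed

    # Many assets per approved product: repeated acquisition/disposition cycles against one type.
    db_product = TechnologyProduct("Oracle 11", "software", "approved")
    licenses: List[Asset] = [Asset(f"LIC-{n:04d}", db_product) for n in range(3)]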

The management and especially the timely provisioning of assets for the large data center operation is a particularly challenging topic, requiring careful timing and coordinated forecasting. While vendor solutions for application-centric demand management are mature, solutions for infrastructure demand management are not. The discipline of capacity planning is an essential input, but the lifecycle also requires attention to vendor management, purchasing, logistics, asset control, power and cooling, physical security, and other disciplines.

As assets are instances of technology products, the asset lifecycle is dependent on the technology product lifecycle – one does (or should) not generally acquire an asset without considering whether the product is suitable. However, once the product is deemed suitable, many repeated cycles of acquiring particular instances and managing them through disposition may ensue.

As with technology products, the asset lifecycle is strongly influenced by the service lifecycle. This may seem obvious – why would we buy the assets if we didn’t have a service to platform upon them? However, just as project teams may run into challenges when they seek technology products that are not well aligned with the current portfolio, so too their needs for immediate provisioning of computing capacity may challenge the overall asset portfolio management strategy. This is one of the business drivers for virtualization and cloud computing, and (as above) demonstrates that the asset management lifecycle cannot be subsumed into the service lifecycle.

Clarifying the boundary between the application and infrastructure services may help; if the asset base is supporting an on-demand infrastructure service (e.g., a shared hosting environment), then the lifecycle of the associated assets becomes more dependent on the lifecycle of the service. But vendor management will remain independent (if the vendor also supplies other infrastructure). Refresh and growth of that infrastructure service will require constant attention to the asset base, and the associated supply chain and bill of materials problems remain nontrivial and distinct from the challenges of engineering and operating the service.

Toward IT Value Stream Analysis

Much more could be said about these major lifecycles and their complex interrelationships. A comprehensive process architecture could be derived by analyzing the often-chaotic interplay between the major state changes of the associated lifecycles. Synchronization points, dependencies, and critical paths would all be of interest, and would constitute the foundation of IT value stream analysis.

Value streams can be understood in a broader architectural context. Leaving the particular domain of IT management aside, any modern (IT-based) business can be understood as:

  • Value streams
  • Business functions
  • IT services
Value streams are the end-to-end pipelines, crossing functions, that deliver some value for some stakeholders. The classic example is the order to cash lifecycle in manufacturing. Value streams are activities, heavy on verbs, countable and measurable. They should be few in number, and are supported by functions.

Functions (a.k.a. capabilities, a.k.a. business services) are ongoing organizational components. While a functional decomposition for a given organization might not be the same as its organizational structure, a good functional decomposition could usually serve as a baseline org structure. Functions, in turn, are supported by IT services.

IT Services, seemingly obvious, still elude industry consensus. How to distinguish them from business services? Since business services wrap IT services in support of the value streams, this framework errs on the side of keeping them technical. If they are simply subsumed under business services, economies of scale through enterprise sharing can be lost.

Cross-functional coordination is required between the value stream and functional layers, and IT service portfolio management and architecture mediate between functions and IT services. Formalizing these relationships is essential to an operating model.
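
A minimal sketch of these three layers and their support relationships follows; the classes and the example instances are hypothetical:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ITService:
        """A technical service, kept distinct from the business services that wrap it."""
        name: str

    @dataclass
    class BusinessFunction:
        """An ongoing organizational capability, supported by IT services."""
        name: str
        supported_by: List[ITService] = field(default_factory=list)

    @dataclass
    class ValueStream:
        """An end-to-end, cross-functional pipeline delivering value, supported by functions."""
        name: str
        supported_by: List[BusinessFunction] = field(default_factory=list)

    # Hypothetical example: the classic order to cash value stream.
    order_mgmt = ITService("Order management system")
    fulfillment = BusinessFunction("Order fulfillment", supported_by=[order_mgmt])
    order_to_cash = ValueStream("Order to cash", supported_by=[fulfillment])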

The question is, how are IT services themselves created? The core insight of this framework is that they are produced by IT value streams that follow the same model. The model is recursive and fractal.

  • Business units rely on IT services, which support their organizational functions in turn supporting value streams. This applies to both revenue generating and support (“back office”) units.
  • IT is a back office unit, along with HR, Legal, Finance, Properties, and so forth. (And even in a completely decentralized business, one can see it as a logical function.)
  • IT services themselves are produced by a value stream – the IT value stream, owned by the IT function. 
  • IT needs its own IT – represented by systems such as demand & portfolio management, IT service management, CMDB, monitoring, capacity, and many others.
  • The IT operating model as a whole consists of the managed IT value streams mapped to supporting functions, in turn supported by a portfolio of “IT for IT” services.
  • Finally, a logical data model is required to support the analysis, in particular for the purpose of metrics development – which, in turn, depends on clean and well structured transactional data about the business of IT. The data model in conjunction with the lifecycles may also be seen as an event model.
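
As one possible starting point for that data and event model, here is a minimal sketch of a lifecycle event record from which cycle-time metrics could be derived; the field names and milestone values are assumptions, not a proposed standard:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class LifecycleEvent:
        """One state change of one managed entity in the business of IT."""
        entity_type: str      # e.g. "Service", "Release", "Change"
        entity_id: str
        milestone: str        # e.g. "approved", "deployed", "retired"
        occurred_at: datetime

    def cycle_time_days(events: List[LifecycleEvent], start: str, end: str) -> float:
        """Elapsed days between two milestones in a single entity's event history."""
        by_milestone = {e.milestone: e.occurred_at for e in events}
        return (by_milestone[end] - by_milestone[start]).total_seconds() / 86400

    # Illustrative data: days from "approved" to "deployed" for one release.
    history = [
        LifecycleEvent("Release", "R-42", "approved", datetime(2014, 3, 1)),
        LifecycleEvent("Release", "R-42", "deployed", datetime(2014, 3, 15)),
    ]
    print(cycle_time_days(history, "approved", "deployed"))  # 14.0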

This is only a summary framework. To apply it for the purposes of management and research, further development is needed:

  • Each of the major lifecycles needs to be decomposed into a set of statuses (milestones). A rigorous approach for doing so needs to be established. Processes are fundamentally continuous; how are we to define state changes?
  • A metrics framework reflecting these lifecycles needs to be developed. Usually, reporting is available by function. The metrics framework therefore should focus on highlighting constraints across the value streams. This, in turn, drives data architecture. 
  • The milestones need to be matrixed to themselves so that process interdependencies can be understood (see the sketch following this list). An analytic basis for doing this probably needs to be developed.
  • The IT functions are reasonably well understood. COBIT can probably serve as a first approximation. Several examples of functional analysis (à la IDEF0) exist, showing inputs, outputs, guides, and enablers. These interaction analyses can also be constrained to particular processes to give a rich view of end-to-end lifecycles.
  • The value stream milestones can then be matrixed to the IT functions so handoffs and constraints can be understood in terms of accountable capabilities.
  • The IT services for IT are more challenging. They should be orthogonal to the functions (do not simply define one identically named IT service per function). However, they also need to reflect the current reality of available solutions. Demand/portfolio and ITSM seem to be logical large-grained loci of consolidation. Decomposing these systems into services, and analyzing the interactions of those services, is consistent with modern practices of service-oriented architecture.
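
As a sketch of the matrixing steps above – milestones against milestones, and milestones against accountable functions – using invented milestones and functions purely for illustration:

    # Hypothetical value-stream milestones and IT functions, for illustration only.
    # Milestone-to-milestone dependencies (a simple dependency-matrix style mapping):
    # each milestone lists the milestones that must precede it.
    DEPENDS_ON = {
        "Solution designed": ["Demand approved"],
        "Release deployed": ["Solution designed"],
        "Service retired": ["Release deployed"],
    }

    # Milestone-to-function accountability: which capability owns each milestone.
    ACCOUNTABLE = {
        "Demand approved": "Portfolio Management",
        "Solution designed": "Architecture",
        "Release deployed": "Release Management",
        "Service retired": "Operations",
    }

    # A simple handoff view: predecessor (function) -> successor (function).
    for milestone, predecessors in DEPENDS_ON.items():
        for prior in predecessors:
            print(f"{prior} ({ACCOUNTABLE[prior]}) -> {milestone} ({ACCOUNTABLE[milestone]})")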

Conclusion

In this paper I have described a framework for both managing and understanding IT value and its underpinnings, incorporating principles from Lean, the major IT frameworks, enterprise architecture, and other sources from the IT management and software engineering literatures. The framework as a whole must be empirically based and flexible. This paper proposes an axiomatic basis, but any such effort is a castle in the air until supported by evidence; furthermore, it is valuable only insofar as it stimulates insight and, in particular, generates non-obvious questions and answers.

References:

Robinson, K.A. An entity/event data modeling method. The Computer Journal 1979; 22(3): 270–281.

Rosenquist, C.J. Entity Life Cycle Models and their Applicability to Information Systems Development Life Cycles: A Framework for Information Systems Design and Implementation. The Computer Journal 1982; 25(3): 307–315.


Charles Betz

Charlie Betz is the founder of Digital Management Academy LLC, a training, advisory, and consulting firm focused on new approaches to managing the “business of IT.” He has previously held positions as enterprise architect, research analyst, developer and product owner, technical account manager, network manager, and consultant. From 2005-2011 he was the VP and chief architect for the "business of IT" for Wells Fargo, responsible for portfolio management, IT service management, and IT governance enablement. He has also worked for AT&T, Target, Best Buy, Accenture, and the University of Minnesota. As an independent researcher and author, he is the author of the forthcoming Agile IT Management: From Startup to Enterprise and the 2011 Architecture and Patterns for IT Management, and has served as an ITIL reviewer and COBIT author. Currently, he is the AT&T representative to the IT4IT Forum, a new IT management standard forming under The Open Group. He is a member of the ACM, IEEE, Association of Enterprise Architects, ISACA, and DAMA. He serves on the board of the Minnesota Association of Enterprise Architects chapter and is the organizer of the Agile Study Group, a working group of local practitioners and faculty examining Agile methods from the perspective of theory and pedagogy. Charlie is an instructor at the University of St. Thomas, and lives in Minneapolis, Minnesota with wife Sue and son Keane.
