Monday, May 30, 2005

Life Principles

Last week, I had the pleasure of presenting a talk entitled Bridging the Gap Between Technology and Business at the DAMA International Symposium and Wilshire Meta-Data Conference. During my stay in Orlando, I had the privilege of attending a keynote lecture presented by Bruce Weinstein, Ph.D., also known as The Ethics Guy. Bruce has just written a new book entitled "Life Principles."
The core operating principles of my company, Flashmap Systems, pretty closely align, at least at the top level, with the Life Principles presented by Bruce Weinstein:

  • Do No Harm
  • Make Things Better
  • Respect Others
  • Be Fair
  • Be Nice
The Ethics Guy's presentation, of course, delved deeper into ethical questions, such as how, or why, to live by Life Principles. For instance, Respect Others involves confidentiality, telling the truth, and keeping promises. As any spouse caught cheating can attest, once trust is betrayed, it's awfully hard to ever get back.

Weinstein explored many questions of fairness, such as distributive justice, which asks: How do you fairly divide portions among people?
  • equal shares to everyone
  • based on need
  • according to the amount of effort expended
  • in accordance with the size of one's contribution
  • based on merit
  • determined by free-market exchange
A different facet of fairness involves what's known as retributive justice. In other words, How do you punish fairly so that the punishment fits the crime? Another aspect of fairness, called rectificatory justice, pertains to rectifying a situation. That is, How can you make wrong things right again?

Finally, the last of Weinstein's Life Principles, which is actually called Be Loving, not Be Nice, is virtually identical to the advice doled out by one of my personal heroes, Dale Carnegie -- author of the first blockbuster self-help book, How To Win Friends and Influence People. Dale Carnegie had an understanding of human nature that will never be outdated. His book, originally published during the Great Depression back in 1936, explains the underlying principles of dealing with people so that they feel important and appreciated. Carnegie explains how you can make someone want to do what you want by seeing the situation from the other person's point of view and "arousing in the other person an eager want." His advice is simple:
  • Smile
  • Display genuine interest (i.e., Listen deeply)
  • Show appreciation to others
Bruce Weinstein has done a great job of identifying fundamental ethical principles that have stood the test of time. The sources of these Life Principles represent the intersecting overlap that is common to all religions. Weinstein highly recommended a book by Jeffrey Moses entitled Oneness: Great Principles Shared by All Religions.

Sunday, May 29, 2005

Software -- Engineering or Craft?

When software developers thoroughly understand the processes they're modeling, software development lends itself to well-engineered processes. On the other hand, where software requirements are hard to define, specifying a formal, universal process for consistently creating high-quality software becomes exceedingly difficult.

Software engineering arose out of a traditional approach for dealing with clearly defined problems and needs. Issues of management, cost, and robustness are paramount -- not how to define problems. Software engineering methods, such as the Capability Maturity Model from Carnegie Mellon's Software Engineering Institute (SEI CMM), were patterned primarily after domains described by physical laws and defined parameters -- not dynamic, evolving systems.

Agile software development methods emerged from within programmer communities where solutions to problems tend to unfold over time. Craft-based approaches highlight the centrality of individual producers. A transparent open-ended process is best suited to situations where software requirements are not well understood, or needs are poorly articulated.

Generally, software developers will only agree on pragmatic methods that work. These are usually just simple rules of thumb, such as:
  • test early and test often
  • perform daily builds (a minimal nightly-build sketch follows this list)
  • focus lots of resources on getting requirements correctly understood up front
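To make the daily-builds rule of thumb concrete, here's a minimal sketch of a nightly build driver in Python. It's purely illustrative: the repository location, build command, and test command are hypothetical placeholders, not anything prescribed above.

    import subprocess
    import sys
    from datetime import datetime

    # Hypothetical settings -- substitute your own project's values.
    REPO_DIR = "/build/myproject"   # assumed location of an up-to-date working copy
    BUILD_CMD = ["make", "all"]     # assumed build command
    TEST_CMD = ["make", "test"]     # assumed test command

    def run(cmd, cwd):
        """Run a command, echo its output, and report success or failure."""
        result = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0

    def nightly_build():
        print("Nightly build started", datetime.now().strftime("%Y-%m-%d %H:%M"))
        # "Test early and test often": the build only counts as green if the tests pass too.
        ok = run(BUILD_CMD, REPO_DIR) and run(TEST_CMD, REPO_DIR)
        print("BUILD SUCCEEDED" if ok else "BUILD FAILED")
        return ok

    if __name__ == "__main__":
        sys.exit(0 if nightly_build() else 1)

Scheduled by cron or a build server, a script like this turns the rule of thumb into a visible daily habit.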
Most attempts by software developers to draw upon and apply quantitative methods from other disciplines -- like manufacturing's Total Quality Management (TQM) or engineering's Statistical Quality Control (SQC) -- have resulted in less than stellar success. For example, while TQM is an important tool for manufacturing, it is far less effective when applied to design -- and software development is a pure design activity. Moreover, questions of design, especially domain knowledge specification, don't generally lend themselves easily to pure statistical analysis.

Software development will never be manufacturing, except to the extent that manufacturing is a design activity. Design is an inherently buggy process. The nature of design means that mistakes and unforeseen/unintended consequences will occur.

The key to software development is translating knowledge into execution. Formalizing this process has thus far proven to be all but impossible. Whereas most people can almost instantly describe a car's overall design, asking someone to describe a factory process for making cars is more difficult. Asking someone to define the process of designing a factory for making cars poses an even greater challenge.


Software as Construction

Construction can serve as a helpful model to highlight key questions and assumptions about software methodologies.

Building takes a variety of final forms -- from individual housing to large-scale urban commercial projects. Do-it-yourself amateurs, independent local contractors, large construction firms, and global engineering companies do the actual building.

A division of labor exists between architects, developers, and builders. The line between these activities is often unclear, as the knowledge of actual construction involves a set of skills covering each aspect. However, individual licensing practices exist for most of these levels, often controlled by government or practitioners themselves. Each project reflects local laws, customs, and resources, merging them with the final consumer's demands and limits.

Overall, construction presents a complex organizational problem that draws on experience rather than predefined rules or methods. Translating consumer demands into workable end projects that account for local limits and resources is an intensely intellectual activity. A building's architecture must reflect often uncodified functional and form demands. Designers must also consider how the people who will use the building live, work, and play, even if they have no direct knowledge of the architecture.

Quantifying a building's success is difficult, subjective, and most likely individualized. Some projects are mass-produced, focusing on meeting average needs and demands, and thus requiring only average building skills and materials. Other projects are one-off developments that require innovative designs and materials and highly skilled labor. The more unusual a project, the more likely it will encounter cost and time overruns, although all construction projects generally face this risk.



Software as Hollywood

On an industry level, Hollywood's film studios closely resemble software.

Film production requires highly skilled teams. These teams normally form for individual projects, with members individually judged by overall reputation and experience within the industry. People receive training both formally, through film schools, and informally, through apprenticeships or self-learning.

Product quality is hard to evaluate, let alone quantify, prior to market release. The industry defines success critically by peer review and publicly through box office sales, with no guarantee that these evaluations will match. Critical success does not guarantee longevity.

The nonlinear and multistaged production process often leaves the ultimate product undetermined until the final stages. Production is highly susceptible to cost overruns and missed deadlines. Large budgets and film crews provide no guarantee of timely production or the final product's actual success.

The industry allows for a range of large-scale studio production and small independent projects, producing a variety of products from one-off masterpieces to long-running television shows.

Hollywood's limitation as an analogy with software lies in the noninteractivity of the final product, which is designed to entertain passively. However, the software gaming industry, with revenues already equaling if not exceeding Hollywood's, presents an interesting example of interactive -- yet very Hollywood-like -- entertainment.



Software as University

A university combines highly independent and creative professionals in an overall structure that can be quite bureaucratic. Many IT firms already formally follow a campus model that places a high value on innovation, communication, and creativity. Research is structured around peer review, with dominant ideas evolving and emerging over time. Teaching and research resist rationalization, remaining highly subjective and individually centered.

Issues of productivity and quality always contain a mix of qualitative and quantitative factors. Candidates gain admission to the profession only through a long apprenticeship, guided by senior faculty. The training process responds slowly to demand, creating a cyclical pattern of under- and oversupplying academics. Guaranteeing that candidates will exhibit a standard quality when they exit the training process remains difficult.



Please note: The above edited excerpts, plus many of the ideas expressed in this posting, were taken from the article Software Development: An Outsider's View, Kyle Eischen, pp. 36-44, Computer 35(5), May 2002, IEEE. (see also Rationalizing Software)

Saturday, May 28, 2005

Rationalizing Software

In 1776, Adam Smith wrote The Wealth of Nations. His treatise changed our understanding of the economic world just as Newton's Principia changed our understanding of the physical world and Darwin's On the Origin of Species, with its theory of evolution, revolutionized our thinking about all living organisms.

Adam Smith's remarkable book was published at a time when governments granted monopolies and gave subsidies to protect their own merchants, farmers and manufacturers against 'unfair' competition. The guilds operated stern local cartels: artisans of one town were prevented from traveling to another to find work. Local and national laws forbade the use of new, labor-saving machinery. Not surprisingly, poverty was accepted as the common, natural, and inevitable lot of most people. Adam Smith railed against this restrictive, regulated, 'mercantilist' system, and showed convincingly how the principles of free trade, competition, and choice would spur economic development, reduce poverty, and precipitate the social and moral improvement of humankind.

According to Adam Smith, rationalizing production increases quality, lowers cost, and improves efficiency. The expansion of industrial capitalism, the rise of bureaucracy, and the emergence of scientific management have all accelerated the momentum for creating defined, quantifiable, repeatable production and organizational processes. Today, it's a generally accepted principle that open market competition has proven the power of rational systems.

The logic of rational organization and production exists in almost every facet of modern industry. Yet, the creation of software has somehow managed to elude an engineered approach to its production. Software developers cannot agree on the details of a defined, quantifiable, repeatable process. Instead, highly skilled professionals still craft most of the world's software. The notion of a software factory remains as nebulous and unattainable as ever.

The following edited excerpts were taken from Software Development: An Outsider's View, Kyle Eischen, pp. 36-44, Computer 35(5), May 2002, IEEE:

Can software be rationalized?

Rationalization assumes a quantifiable process, maximized for efficiency by a distinct division of labor, with defined inputs and outputs, managed by an effective rule-bound bureaucratic structure -- all of which results in a process capable of being engineered.

Given the rationalization of multiple craft industries over the past two centuries -- all of which were once considered impossible to manufacture or mass-produce -- and the resulting benefits in efficiency and quality, we can reasonably expect that software can and should become engineered as well.

Rationalization leads to rule-bound, fixed parameters and hierarchies that place control, knowledge, and power in institutions rather than individuals. Bureaucracy is essentially an institutionalized algorithm that takes general inputs and produces fixed, anticipated outcomes.

Thus, rationalization and bureaucracy -- the underpinnings of modern manufacturing -- have consistently emphasized management objectives and goals as the drivers of industrial development. The more rationalized and explicit a process, the more it can be managed, moving control of the process from producers to managers.

Why hasn't software development been rationalized?

Why, even with a tremendous effort to engineer the process from within the profession and from the industry overall, does software development continue to confront the same issues it did 30 years ago? Why haven't basic industrial patterns -- software industrialization, software manufacturing, software engineering, and software assembly lines -- become dominant within the industry?

By the time the automotive industry had reached its 30th anniversary, around 1930, the fundamental issues of production, as exemplified by Ford's assembly line; product, as exemplified by standardized design; and industry, as exemplified by the few dominant national players like Ford and GM, had all been clearly established.

Patent or copyright?

Is software development an invention derived through a scientific method, or is it an act of speech and creativity?

Is software an act of engineering or communication? If software is a rational endeavor, improving quality involves better and more resources: better management, better tools, more disciplined production, and more programmers. If software is a craft, improving quality involves the exact opposite: focusing on less hierarchy, better knowledge, more-skilled programmers, and greater development flexibility.

As early as 1974, Fred Brooks (The Mythical Man-Month: Essays on Software Engineering) stated that merely adding people to a software project didn't increase its productivity -- a classic diseconomies-of-scale phenomenon. The essential limit of software development is communication.

Communication is inherently difficult, mediated by always-contextual codes, norms, culture, and perceptions. Translating knowledge from one context to another, like translating any language, involves not just basic rules of grammar and syntax, but also issues of meaning and intent that are contextual and subjective.

Software is the Execution of Knowledge

Much of knowledge is tacit, undefined, uncodified, and developed over time, often without being explicit even to the individuals participating in the process. More importantly, such knowledge and practices are dynamic, constantly evolving, and transforming. It should not surprise us that modeling these processes proves exceedingly difficult and that such efforts are often incomplete, impractical, or unsatisfactory.

The further away from broadly accepted and understood domain knowledge we move, the more difficult translating that knowledge becomes. The issues are inevitably about context and communication.

Context and communication are the most salient features provided by ITatlas, ITguide, and ITscout -- Flashmap Systems' products. I can't overemphasize the importance of getting an IT organization to improve its communication with its user constituency. Invariably, the key to effective communication depends on context management. This matters so much because you can never ignore the cultural reality that no one likes to admit what they don't understand. Context is what's missing from an information dissemination strategy based solely on search engines. Search is a terrific starting point, but once information is found, what people additionally need is an understanding of its context.

For additional discussions about rationalizing software, see Software -- Engineering or Craft?.

Friday, May 27, 2005

How Much of EA is BS?

How finely calibrated is your BS detector?  Does the following set it off?
We appreciate your patience. 
Please do not hang up. 

Your phone call is important to us.

Each of us, every day, is being fed a nonstop diet rich in BS -- a diet that's only getting richer all the time. BS comes at us as hyped-up, insincere, pretentious talk that's perpetually prevalent in every facet of our lives.

BS is spread by pervasive and interminable commercial advertising which shows up everywhere -- in ads on TV and in other media -- many making preposterous claims. BS rules in meaningless political rhetoric, PR spin, and dumbed-down news reports that focus on the trivialities of our celebrity-obsessed culture.

There's a best-selling book on the subject of BS written by Harry Frankfurt, a leading philosophy professor at Princeton University.

Click image to watch video interview
with Harry G. Frankfurt
For propriety's sake, we'll refer to his book as "On BS."
It's a serious work by a serious man about a subject that seems to inundate us at every turn.

According to Professor Frankfurt, "One of the most salient features of our culture is that there's so much BS. Everyone knows this and each of us contributes his share." He continues, "There's a yearning for living in an environment in which you can really believe what people tell you, especially people who hold responsible positions or aspire to holding responsible positions." Sadly, what we often find instead are people who can't be trusted to tell the truth -- people who prefer to try and fool you, pull the wool over your eyes, and manipulate your beliefs.

People are starved for the truth. Nevertheless, BS rules. It seems that anytime you put people into a position where they feel obliged to talk about things that they don't really know very much about, you're apt to get a lot of BS.

There's a clear distinction between lying and BS. BS'ing is a more insidious threat to the truth than lying. The liar believes that he knows the truth and decides to substitute for the truth something that he knows to be false. The BS'er doesn't really care if something is true or false. All that matters is closing the deal.

BS runs on confidence -- think of the con men in the movie The Sting. It's because people don't understand what a BS'er is talking about that they're willing to be dazzled and awed by impenetrable, impressive-sounding jargon.

It's precisely because so few people even begin to understand what Enterprise Architecture is -- both inside and outside of IT -- that EA is prone to attracting BS artists. Grasping architecture is itself like trying to tackle a greased pig (see What is IT Architecture?). Every time you think you've got it, it squirms away.

Keep your BS detector turned on at all times. Beware of magic wands and hand waving. Be careful whenever you hear some fast-talking consultants and/or software salespeople talking about aligning IT strategy with business strategy. That goal has been the elusive holy grail of the computer industry forever.

EA comprises four subsets:
  1. Business Architecture
  2. Data Architecture
  3. Application Architecture
  4. Technology Architecture
That's no BS.

IT has repeatedly been given the message to think and act like businesspeople. They're doing the best they can. The problem is the businesspeople. Whatever models they're using for thinking about, talking about, and managing technology -- most just plain don't work. Businesspeople need enterprise architecture in order to improve how they think about and talk about IT, and how they manage their technology portfolios.

Thursday, May 26, 2005

Consolidate. Standardize. Prune.

How is it possible that someone with whom I almost totally disagree has, nonetheless, arrived at the same exact conclusion as me?

The person I'm referring to is Nicholas G. Carr, author of 'Does IT Matter?' Carr is best known for his article 'IT Doesn't Matter', which appeared in the Harvard Business Review two years ago. His most recent article, 'The Next IT Revolution', published in the May 23, 2005 edition of CIO Insight, cites:
  • a Hewlett-Packard study that found most corporate servers use just 10 to 35 percent of their processing power
  • an IBM estimate that average desktop-capacity-utilization rates are just 5 percent
  • a Gartner estimate that 50 to 60 percent of a typical company's network storage capacity is wasted
Carr compares utility computing with electrical power generation. He states:
Between 1905 and 1930, when it became possible to centralize power generation in big utilities and distribute current over a network, a revolution took place. Many manufacturers resisted, but lower costs, freed-up capital, enhanced flexibility and fewer headaches made even the largest companies switch. Similarly, advances in networking, hardware virtualization, grid computing and Web services are making IT utilities possible, in which IT will go from being a complex corporate asset to being a simple variable expense. Utility computing today merely hints at coming changes.
Carr's analysis reminds me of the following totally erroneous quotes from the past:
  • "I think there is a world market for maybe five computers."
                   -- Thomas Watson, chairman of IBM, 1943
  • "There is no reason anyone would want a computer in their home."
                   -- Ken Olsen, chairman of DEC, 1977
  • "640K ought to be enough memory for anybody."
                   -- Bill Gates, chairman of Microsoft, 1981
In my opinion, Carr just doesn't get it. He can't quite grasp the true power of the exponentially exploding drivers that fuel IT's growth:
  1. transistors
  2. storewidth
  3. bandwidth
I suggest computing more closely resembles telephones and televisions than electrical power generation. How many phones are in your house? How many TVs? My guess is there's unbelievable under-utilization, but who cares? Locality is what's important.

With so much cheap storage, virtually free bandwidth, and limitless processing power, IT innovation will stem from moving the mountain to Mohammed rather than moving Mohammed to the mountain. Imagine the asynchronous, behind-the-scenes movement of data where copies of the most frequently accessed databases literally reside on local computers. The analogy is virtual memory, where the most commonly referenced pages reside in physical memory for rapid access. Think how Lotus Notes revolutionized email by synchronizing desktops and servers.
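As a rough sketch of that keep-the-hot-data-local idea, here is a tiny least-recently-used cache in Python. It only illustrates the locality principle; the capacity and the fetch_from_remote callback are hypothetical stand-ins, not a description of any particular replication product.

    from collections import OrderedDict

    class LocalCache:
        """Keep the most recently used records on the local machine,
        much as virtual memory keeps hot pages in physical memory."""

        def __init__(self, fetch_from_remote, capacity=1000):
            self.fetch_from_remote = fetch_from_remote  # callback to the central copy
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            if key in self.entries:
                self.entries.move_to_end(key)      # hit: mark as recently used
                return self.entries[key]
            value = self.fetch_from_remote(key)    # miss: go over the network
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict the least recently used record
            return value

Frequently accessed records end up resident locally, and only the misses travel over the network; the mountain comes to Mohammed.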

I wholeheartedly agree with Carr that significant savings can be achieved by
  • consolidating data centers
  • standardizing platforms, products, and applications
  • pruning redundancies, reducing inefficiencies, and eliminating unnecessary replication
On the other hand, I categorically disagree that IT doesn't matter. Unlike electricity, which enables the operation of equipment according to immutable physical laws, IT powers business through a highly complex and ever-changing dynamic involving technologies, processes, and people. Often, business problems get defined in terms of available solutions. Moreover, the technological solution to a business problem frequently changes the problem. There is a complex and intricate relationship that connects emerging technologies to paradigm shifts. With technologies continuing to advance at an exponential pace, rest assured that IT DOES MATTER!

Friday, May 20, 2005

Scaling Standards

What should you do about IT standards if you're a Fortune 500 company?
  • How many distinct IT products does your enterprise own?
  • How many different product categories do all those distinct products fit into?
One of the main reasons for setting up IT standards is to direct people so they will choose the right products within each of the various product categories.

Establishing IT standards would be almost trivial if there were only a handful of product categories to standardize. Granted, decisions would still need to be made, tradeoffs evaluated, political considerations assessed, etc. But as the number of categories grows, scaling becomes a major issue.

Sticking one's head in the sand like an ostrich is not a viable scalability solution although it's a strategy often adopted by default. Companies complain they haven't the resources to conduct technology audits to inventory what they own, let alone to methodically select, communicate, and enforce standards.

Typically, an overworked IT team tries to turn the whole standards problem into a technological issue. What's needed, they insist, is a crawler that can automatically perform an exhaustive inventory by scouring every hardware platform, software library, and executable image. The sheer enormity of this task generally gets IT off the hook with the excuse that the job is too big and there aren't enough resources, either in terms of money to buy the right tools or people to perform the necessary work.

Of course, ignoring the problem does not make the pain go away. It merely sweeps the issue under the rug. The reason nobody is screaming loudly is that the vast majority of business managers are totally clueless when it comes to knowing even how to think about IT. I swear heads would roll if businesspeople ever grasped the true magnitude of waste directly attributable to underutilization and redundancy of IT products. Maybe that's why IT doesn't want to address this.

In a recent posting to the XML Developers List, Roger Costello of MITRE mused about scaling. Below is an edited excerpt from his thread entitled "Understanding Scaling -- Critical for a Net-Centric Vision".
A SCALABLE MOVIE REVIEW TECHNOLOGY
Roger Ebert is a professional movie critic. He reviews movies. But, over the years, the number of movies has multiplied (independent movies, blockbuster movies, etc). It is no longer possible for Ebert to review them all.

Having a single professional movie critic review movies doesn't scale. Even a group of professional movie critics would be soon overloaded.

As well as not scaling, there is a problem with "truth". A movie review represents someone's interpretation, not facts. Everyone has their own opinions about a movie. Ebert may love the movie, but the rest of the world may hate it.

An alternative to using a professional movie critic to review movies is to have each movie producer generate a movie review. This approach scales well -- there is one movie reviewer for each movie. Of course, the problem with this approach is that the movie review will be heavily biased.

A third alternative is to record the comments of the movie-goers. This approach is highly scalable -- there are many reviewers for each movie. Further, since there are many reviewers, there is a diversity of opinions, which gives a variety of perspectives to someone who is trying to decide whether to view a movie.

Thus, a scalable movie review "technology" is one that can grow as the number of movies to be reviewed increases.

A SCALABLE BOOK REVIEW TECHNOLOGY
The New York Times has a small group of book reviewers. This group reviews books and then publishes their reviews.

Just as with the problem of reviewing movies, using professional reviewers does not scale. The New York Times group is unable to review the exponentially growing number of new and old books.

An alternative approach is for each book author to elicit a review from a fellow author. This scales well, but suffers from bias.

A third alternative is to let the users (the readers) provide a review of the books. This scales well and provides for a diversity of opinions.

A scalable book review "technology" is one that can grow as the number of books to be reviewed increases.
There are three approaches to solving the scalability problem:
  1. The high priests approach: Solve the problem using a small, centralized set of skilled professionals (the professionals may be geographically separate but functionally they are centralized)

  2. The authors approach: Solve a problem using the authors.

  3. The users approach: Solve the problem using the large body of distributed users.
For IT standards, the high priests approach is the only way to go. That's not to say there isn't tremendous benefit in allowing the user community to share feedback with both the high priests (i.e., the folks responsible for selecting standards) as well as their fellow users. Bottom line, there's always value in being able to communicate pertinent and timely information about IT products and IT standards. The real issue here is that IT products, especially those selected to be standards, are not like movies or books. Usually people only see a movie or read a book once. Moreover, on those occasions where somebody views a movie multiple times or re-reads a book over and over again, nothing really changes outside of losing the drama by knowing the ending of the story ahead of time. Such is not the case with software.

A novice's experience when first using a software product bears little resemblance to what transpires during a working session in the hands of an experienced user. It's not the software tool that changes -- it's the user who changes. Software usage, unlike movie-going or book-reading, involves a (sometimes steep) learning curve. Additionally, much of that learning curve is highly product-specific. Another key consideration involves support. There are high investment costs associated with selecting a product as an internal standard. Enterprises are required to commit time and money to develop a corps of product experts to support other users.

There is an intimate relationship between scaling and standards. The larger the number of users for a product chosen as a standard, the greater the opportunity for cost amortization. Leverage also improves one's ability to negotiate better terms and conditions with software vendors.

In my opinion, CIOs should have a fiduciary responsibility to implement an effective IT standards program. The decision of what tools to standardize should never be left up to which vendor has the best salesperson. That's just like reviews written by authors. Similarly, while it's essential to carefully listen to the diverse opinions of the user community, there are way too many other factors beyond simple acquisition costs that need to be weighed before choosing a standard. That's why high priests are needed.

Thursday, May 19, 2005

It's COTS to be Good

Commercial-Off-The-Shelf software plays a critical role in every enterprise -- and for good reason. There's little justification for someone to go off and build a custom solution where a proven application package already exists that can satisfy an organization's needs.

COTS applications architecturally resemble a value chain with back-office on the left and front-office on the right, as illustrated in the graphic below. Click on the image to view a more legible Flash version that can be zoomed by clicking on the right-hand button of your mouse.

In the above graphic, goods and services flow from left-to-right while money moves from right-to-left.

Enterprises that purchase lots of COTS products from many different vendors can often find themselves drowning in a thick broth of "architecture soup". On the other hand, be very wary of business managers who sincerely believe that all their IT problems will simply disappear by merely adopting the architecture of their favorite COTS vendor.

COTS software is very different from Infrastructure software. It's targeted directly at the business customer rather than at IT. As such, it can really help if a CIO fully grasps Deal Architecture. For help and advice on this topic, you might want to check out Vinnie Mirchandani's blog.

Wednesday, May 18, 2005

Categorization versus Search


               

Back in 1994, a couple of guys at Stanford, David Filo and Jerry Yang, both Ph.D. candidates in Electrical Engineering, launched a service called Yahoo!. Originally begun as a hobby to keep track of their personal interests on the Internet, eventually their home-brewed lists of favorite links became too long and unwieldy, and they broke them out into categories. When the categories became too full, they developed subcategories ... and the core concept behind Yahoo! was born. The name Yahoo! is an acronym for "Yet Another Hierarchical Officious Oracle."

Yahoo! became the first really significant attempt to bring order to the Web. As the Web expanded, Yahoo! hired experts to systematize the directory. But what a daunting task given that the Web is so huge and unstructured. How would you like to have the responsibility of organizing the world in advance, trying to fit everything into a classification hierarchy?

So along came Google, an alternative to the Yahoo! directory. One reason why Google was adopted so quickly was that it didn't try to predict in advance how information is structured.

Google's founders, Larry Page and Sergey Brin, were both grad students at Stanford University (just like Yahoo!'s founders Filo and Yang). In 1996, Page and Brin collaborated on a search engine called BackRub that analyzed the "back links" pointing to a given website. Their unique approach to link analysis eventually led them, in 1998, to put their Ph.D. plans on hold and start up Google. Google is a play on the word googol, which refers to the number represented by the numeral 1 followed by 100 zeros.

With the Google search paradigm, no one decides in advance how to categorize. In fact, with Google there is significant value in not categorizing.

But, there are times when categorizing in advance really is important. That's when the cataloger needs to provide context. Such is the case with establishing and communicating IT standards. The whole point of standardizing is to make sure everyone is working on the same page. It's incredibly wasteful when multiple project teams each go off and purchase their own relational DBMS. Besides redundant acquisition costs, each separate product carries its own ongoing annual maintenance fee. Worst of all, skill sets associated with each tool are not readily transferable. In other words, don't expect a DBA trained in Oracle to be able to shift over and work as a DBA for DB2, or SQL Server, or MySQL, or Sybase, or Ingres, or PostgreSQL, or Firebird ...

Someone inside IT with authority has to provide context. That means setting up the categories. It also means identifying which products to use as well as which ones not to.

The biggest challenge in creating the environment for communicating IT standards is providing proper context. The problem is that IT itself is enormously complex. There are numerous product categories.

Tuesday, May 17, 2005

Ontological Classification

I contend that IT standards need to be organized around a taxonomy where taxonomy is defined as "classification into ordered groups or categories." The reason why taxonomy is necessary is to ensure that similar products which provide overlapping functionality cluster close together. The primary rationale for establishing IT standards is to reduce complexity and increase efficiencies through consolidation and improved collaboration.

While you'd think it impossible to imagine an uglier word than taxonomy, I believe the IT industry may well have found one with the recent surge in popularity of the term ontology. Borrowed from philosophy, where it names a branch of metaphysics, ontology refers to the science of describing the kinds of entities in the world and how they are related.

The newfound appeal of the word ontology is probably the direct result of its usage within the next-generation W3C initiative called the Semantic Web.

In Scientific American, May 2001, Tim Berners-Lee, original creator of the World Wide Web and inventor of HTML and HTTP, wrote:


The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.

The Semantic Web's Ontology Language is intended to be used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans. This ontology language formally describes the meaning of terminology used in Web documents.

In an essay entitled "Ontology is Overrated: Categories, Links, and Tags", Clay Shirky describes ontology as follows:

It is a rich irony that the word "ontology", which has to do with making clear and explicit statements about entities in a particular domain, has so many conflicting definitions. I'll offer two general ones.

The main thread of ontology in the philosophical sense is the study of entities and their relations. The question ontology asks is: What kinds of things exist in the world, and what manner of relations can those things have to each other? Ontology is less concerned with what is than with what is possible.

The knowledge management and AI communities have a related definition -- they've taken the word "ontology" and applied it more directly to their problem. The sense of ontology here is something like "an explicit specification of a conceptualization."


Shirky goes on to define categorization and classification as the act of organizing a collection of entities into related groups and ontological classification or categorization as organizing a set of entities into groups based on their essences and possible relations.

Chemistry's Periodic Table of Elements, which organizes the elements by the number of protons in their nuclei, is probably the best classification ever created.



Another famous example of categorization and classification is the Dewey Decimal Classification system used in American libraries' card catalogs.

Ontological classification works best when you need a domain-specific hierarchy with formal categories. To help make ontology a workable classification strategy, you need expert catalogers and coordinated expert users. On the other hand, if you've got a large, ill-defined corpus, naive users, catalogers who aren't experts, and no one to say authoritatively what's going on, then ontology is going to be a bad strategy. Based on this, it's not necessarily clear how good a fit the Semantic Web's Ontology Language will be. However, a Technology Architecture for specifying IT standards is an ideal candidate for ontological classification.

Look for example at IT Infrastructure. It consists of Clientware, Middleware, Serverware, Manageware, and Platforms, as illustrated below:


Click on the above image to view a more legible Flash version.
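To make the idea of a formal category tree a bit more concrete, here is a minimal sketch in Python. The five top-level categories come from the model above; the subcategories and product names are hypothetical examples, not an actual standards catalog.

    # Each node maps a category to subcategories (a dict) or to a list of products.
    infrastructure_taxonomy = {
        "Clientware": {"Browsers": ["Example Browser A", "Example Browser B"]},
        "Middleware": {"Message Queuing": ["Example MQ"]},
        "Serverware": {"Relational DBMS": ["Example DB 1", "Example DB 2"]},
        "Manageware": {"Monitoring": ["Example Monitor"]},
        "Platforms":  {"Server Operating Systems": ["Example OS"]},
    }

    def walk(tree, depth=0):
        """Print the taxonomy as an indented outline."""
        for name, children in tree.items():
            print("  " * depth + name)
            if isinstance(children, dict):
                walk(children, depth + 1)
            else:
                for product in children:
                    print("  " * (depth + 1) + "- " + product)

    walk(infrastructure_taxonomy)

The payoff of a structure like this is that products with overlapping functionality land under the same node, which is exactly where standards decisions need to be made.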

The $64,000 question is: how much descriptive and predictive value does the above model of infrastructure offer?

Monday, May 16, 2005

Why Hasn't SOA Lived Up to the Hype?

The two most popular SOA initiatives -- Microsoft .NET and Java Web Services -- both remind me of the old story about the software salesman whose marriage never quite got consummated. Instead, each night the salesman would sit at the edge of the bed and tell his wife how great it was going to be.

Platform-neutral interoperability designed for easy application-to-application communication over the Internet (and intranets) has been the dream scenario promised by distributed computing vendors and pundits for many years. Imagine, if you will, a world defined by a standards-based service-oriented architecture supported by every major software company (i.e., both Microsoft Corp. and EAM, Inc. -- "Everyone Against Microsoft"). Yet, after five years, we're still waiting for this ubiquitous computing fabric to unfold. By now the Internet was supposed to have been transformed into a computing platform rather than a medium in which users primarily just view and download content. What's holding everything up?

First, recognize that technological change almost always comes much more slowly than originally predicted. It takes a while for new technologies to develop and mature, and for the people who use them to shift paradigms. The good news is that once a technology finally succeeds, it eventually far exceeds the prognosticators' initial forecasts.

Consider some past examples.

Once upon a time system software was written only in assembly language. Compiled languages were thought to be too slow and inefficient. But then along came C. Finally, there was a high-level language that could produce fast code. Nowadays, virtually all system software (e.g., operating systems, compilers, database managers, etc.) is written using either the C language or one of its more popular object-oriented successors, C++, Java, and C#.

That transition -- from procedural languages, such as COBOL, C, FORTRAN, PASCAL, and PL/I, to object-oriented programming languages, like C++, Java, and C# -- itself took a lot longer than originally predicted. The problem was the difficulty programmers had shifting paradigms. It was like moving from the pre- to the post-Copernican world (where people had to change from believing that the sun revolved around the earth to accepting that, in fact, the earth revolves around the sun).

Much of the driving force that propelled object-oriented development was the shift from character-based terminal user interfaces to PC-based graphical user interfaces. The event-driven coding required to support GUI applications was radically more complicated than the procedural coding that powered systems designed for 3270 or VT100 terminals. Eventually, though, WIMP interfaces (i.e., Windows, Icons, Menus, and Pointing devices) captured the market.

Next, look at relational database managers. It took many years before RDBMS products replaced navigational DBMS tools like IMS, IDMS, Adabas, or TOTAL. But, today, virtually all enterprise data resides in one of the big three SQL-based RDBMS products: Oracle, SQL Server, and DB2.

So, what's the technological bottleneck thwarting SOA's ambitions? Mainly, we're waiting for software practitioners to discover how to de-couple processing. This paradigm shift, slowly inching forward, requires a transformation away from traditional request-reply synchronous processing (i.e., client-server). In its place, software developers need to learn how to create systems based on asynchronous parallel processing. Interestingly, mastery of de-coupled software will also help unleash the full potential of the next generation of dual-core processor hardware.
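Here is a minimal sketch, in Python, of the difference between tightly coupled request-reply and a de-coupled, queue-based exchange. The pricing service and order names are hypothetical; the only point is that the asynchronous caller no longer blocks waiting for the callee.

    import queue
    import threading
    import time

    def price_service(order):
        time.sleep(1)                       # stand-in for a slow remote call
        return {"order": order, "price": 42}

    # Synchronous, tightly coupled: the caller blocks until the reply arrives.
    def place_order_synchronously(order):
        reply = price_service(order)        # nothing else happens during the wait
        print("sync reply:", reply)

    # Asynchronous, de-coupled: requests go onto a queue; a worker replies when it can.
    requests = queue.Queue()

    def pricing_worker():
        while True:
            order, on_done = requests.get()
            on_done(price_service(order))   # deliver the result via a callback
            requests.task_done()

    threading.Thread(target=pricing_worker, daemon=True).start()

    def place_order_asynchronously(order):
        requests.put((order, lambda reply: print("async reply:", reply)))
        print("order submitted; the caller is free to do other work")

    place_order_synchronously("A-100")
    place_order_asynchronously("A-101")
    requests.join()                         # wait here only so the demo can print the reply

The same decoupling that frees the caller also lets multiple workers process requests in parallel, which is where that next generation of hardware pays off.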

Be patient. SOA is coming. Its long-term impact will be even greater than promised. It just will take more time to mature than originally predicted. That's pretty much the way these things always work out.

Friday, May 13, 2005

Architecture, Portfolio, and Standards

When it comes to technology, count me among the 'true believers' of Architecture, Portfolio, and Standards.

Mind you, there's no consensus regarding an exact definition of the term architecture. Rather, you'll find there are nearly as many kinds of IT architecture as there are flavors of Baskin-Robbins ice cream. For example:  computer architecture, hardware architecture, software architecture, system architecture, network architecture, enterprise architecture, business architecture, data architecture, application architecture, technology architecture, information architecture, security architecture, plus who knows how many other variants. Yet, among most highly respected IT professionals, you'll discover universal, often zealous, support for architectural principles and concepts.

If you think of architecture as somehow suggesting the notion of blueprints, then consider portfolio the IT equivalent of walls, floors, ceilings, roofs, foundations, etc.  A technology portfolio consists of all the IT products, systems, and services an enterprise purchases.

Frankly, I'm dismayed by how many IT organizations, from the CIO down, haven't a clue what they actually own. This often leads to costly, chronic problems, including the purchase of shelfware -- purchased but unused products -- as well as its converse -- cases where someone goes off and buys a product the enterprise already owns.

Probably the worst waste of IT resources stems from the unbridled acquisition by multiple, different project teams of multiple, different, competing tools that provide essentially identical, redundant functionality. Obviously, such decision-making fails to take into account how the total cost of a product's ownership (its TCO) extends far beyond its initial purchase price.

Finally, standards represent the very essence of goodness (see IT Standards Manifesto and Getting Started with Standards). There's perhaps nothing an IT organization can do to achieve a faster ROI (return on investment) than establishing, communicating, and enforcing standards.

In order for people to be able to readily find them, standards require a robust, easy-to-understand classification hierarchy (i.e., category tree, aka taxonomy). Ideally, standards should be visually organized so that anyone, at a glance, can quickly see which products to use and which to avoid when performing a particular type of task.

Thursday, May 12, 2005

Prune IT Systems, Not Budgets

Michael Schrage, codirector of the MIT Media Lab's eMarkets Initiative, authored an intriguing column in the Jan. 15, 2005 issue of CIO Magazine. In it he wrote, "If you want to cut costs but avoid painful consequences to the business, don't just slash IT spending; prune redundant systems, and make sure your CFO and CEO know why." That sure sounds like sound advice to me. One of my articles, "Architect Your IT Portfolio," delivers pretty much the same message.

As Schrage points out, "cutting IT costs is one of the most expensive things an organization can do." He suggests, instead, following the advice of GM's CTO, Tony Scott, who says you save more money through "systems reduction" rather than "cost reduction." In other words, by refocusing the business on cutting the number of IT systems instead of the volume of IT expenditures, GM was able to achieve durable, sustainable savings. For example, according to Scott:
... discovered that they have six or seven -- maybe even 10 or 12 -- overlapping databases in three or four rival departments that could (and should) be consolidated into no more than two or three large shared databases. This doesn't merely reduce hard dollar IT costs; it provides a chance for operational and organizational efficiencies.
Granted, it's tough looking to GM for excellence considering they have $19.8 billion in cash liquidity yet a market valuation of only $15 billion. (Is GM really worth more dead than alive?) Nevertheless, they are correct in "treating systems and apps -- rather than budgets -- as the medium to be managed -- putting the managerial focus where it truly belongs: on the business value of IT rather than its accounting cost."

Wednesday, May 11, 2005

The New Enterprise Architecture

The March 1, 2005 edition of CIO Magazine has as its cover story "The New Enterprise Architecture". Its lead article, written by Christopher Koch, entitled "A New Blueprint for the Enterprise", explains Enterprise Architecture (EA) as:
  • city planning for IT
  • an overarching plan of the total data, business processes and IT assets inside a company
  • how they're used
  • how they should be built and shared
Below is an excellent excerpt describing "How EA Came in from the Cold":

[EA] has always been a big, difficult and expensive undertaking, and its ROI has often been opaque to the business. Standardizing, mapping and controlling IT assets does not make the business obviously more flexible, capable or profitable. As a result, IT architecture efforts often fail or become completely IT-centric.

But now that's changing.

Advances in integration technology -- primarily intelligent and flexible middleware and Web services -- are providing new ways for designing more agile, more responsive enterprise architectures that provide the kind of value the business has been seeking. With these new architectures, IT can build new business capabilities faster, cheaper and in a vocabulary the business can understand.

These advances are giving new life to a couple of old concepts that could inspire new EA efforts and revive failing ones.

The first concept is services, better known today as service-oriented architecture. SOA provides the value to the business that in the old enterprise architecture was rarely more than a vague promise.

The second concept currently driving enterprise architecture is events.
I wholeheartedly agree on the importance of a services-and-events approach. I've written a pair of articles that directly address both of these topics. I also concur completely with Koch's comment:
The grail of EA is to create a map of IT assets and business processes and a set of governance principles that drive a constant discussion about business strategy and how it can be expressed through IT.
Flashmap Systems' technology portfolio products, FlashAtlas (soon to be renamed ITatlas) and the soon-to-be-announced ITguide, are both designed to help organizations create and communicate IT roadmaps.

Finally, although my nomenclature is slightly different (see my previous post, What is IT Architecture?), I am in full agreement regarding Koch's framework for describing an enterprise effort:
  • Information architecture: identifies where important chunks of information, like a customer record, are kept and how people typically access them.
  • Infrastructure architecture: the jumble of hardware and networks.
  • Application architecture: maps the relationships of software applications to one another.
  • Business architecture: outlines the company's most important business processes.


Tuesday, May 10, 2005

Getting Started with Standards

It’s a tough challenge communicating standards because the audience you’re trying to reach consists of a very diverse constituency of people who use, develop, operate, manage, evaluate, purchase, or own IT systems and services.

A standardization initiative begins with a technology audit. The goal is to identify previously purchased products. Unfortunately, you'll likely discover that many prior IT purchases are, indeed, shelfware. These are product investments that are largely invisible and silent. Think of them as the proverbial elephant standing in the middle of the room: no one wants to admit it's there.

The primary reason for so much shelfware is the abundance of information silos within most enterprises. Inadequate communication results in poor buy/build/drop decisions. Worse still, the lack of tools to support IT standards causes the cycle to repeat over and over again.

Organizations can achieve huge savings by bridging the communication gap across different silos: sharing best practices, improving knowledge sharing, and increasing purchasing power when negotiating with vendors. Standards really do enable an IT organization to do more with less. Standards adherence helps drive growth by enabling the creation of simplified operations, improved utilization, and optimized cost-effective scaling. Standardization even improves security by helping to speed identification and plugging of security holes.

How do you begin? Focus on products and category trees. List and classify past product investments. Identify different tools with similar functionality. By clustering like products together, you're on the road to standardization and consolidation, which will inevitably lead to measurable hard-dollar savings.
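As a minimal sketch of that first clustering step, assuming the audit has already yielded a simple inventory list, the snippet below groups products by category; the vendor and category names are invented for illustration.

    from collections import defaultdict

    # Hypothetical output of a technology audit: (product, category) pairs.
    inventory = [
        ("Vendor A Database", "Relational DBMS"),
        ("Vendor B Database", "Relational DBMS"),
        ("Vendor C Database", "Relational DBMS"),
        ("Vendor A AppServer", "Application Server"),
        ("Vendor D Reporting", "Business Intelligence"),
    ]

    # Cluster like products together by category.
    clusters = defaultdict(list)
    for product, category in inventory:
        clusters[category].append(product)

    # Any category holding more than one product is a candidate for consolidation.
    for category, products in sorted(clusters.items()):
        if len(products) > 1:
            print(f"{category}: {len(products)} overlapping products -> {products}")

Even a crude report like this makes the redundancy visible, which is usually all it takes to start the standardization conversation.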

Monday, May 09, 2005

IT Transformation

An article that appeared in the February 2005 edition of The McKinsey Quarterly, entitled "Managing next-generation IT infrastructure", explains how IT can replace a traditional build-to-order application silo mind-set with an off-the-shelf product standards-oriented approach:

'Instead of specifying the hardware and the configuration needed for a business application ("I need this particular maker, model, and configuration for my network-attached storage box . . ."), developers specify a service requirement ("I need storage with high-speed scalability . . ."); rather than building systems to order, infrastructure groups create portfolios of "productized," reusable services. Streamlined, automated processes and technologies create a "factory" that delivers these products in optimal fashion. As product orders roll in, a factory manager monitors the infrastructure for capacity-planning and sourcing purposes.'



According to this article, enterprises that adopt the 3-layer model presented above will achieve significant savings by:
  • reducing complexity
  • eliminating redundant activity
  • boosting the utilization of assets
IT transformation begins by defining Demand Patterns. Applications require servers and storage. Sites need network connectivity. Users want access to desktops, laptops, PDAs, and so forth. The IT organization's challenge is to categorize demand into segments (such as uptime, throughput, and scalability) that are meaningful to business users. When grouped in this way, most applications, locations, and user groups fall into a relatively small number of clusters.

The key issues when segmenting demand include the time and frequency of need, the number of users, the amount of downtime that is acceptable, and the importance of speed, scalability, and mobility.
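As a hedged sketch of what demand segmentation might look like in practice, the snippet below buckets a few hypothetical applications into service tiers based on uptime and scalability needs; the tier names and thresholds are invented for illustration, not taken from the McKinsey article.

    # Hypothetical demand attributes gathered for each application.
    applications = [
        {"name": "order-entry",   "uptime_pct": 99.9, "needs_scaling": True},
        {"name": "hr-reporting",  "uptime_pct": 99.0, "needs_scaling": False},
        {"name": "intranet-wiki", "uptime_pct": 98.0, "needs_scaling": False},
    ]

    def segment(app):
        """Map an application's demand profile onto a small set of service tiers."""
        if app["uptime_pct"] >= 99.9 or app["needs_scaling"]:
            return "premium"    # clustered, closely monitored, scalable infrastructure
        if app["uptime_pct"] >= 99.0:
            return "standard"
        return "basic"

    for app in applications:
        print(app["name"], "->", segment(app), "tier")

The point is the small number of buckets: once demand collapses into a handful of tiers, the portfolio layer can offer a correspondingly small menu of productized services.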

The middle layer in the above illustration, the one labeled Product Portfolios, deals chiefly with specifying and communicating internal corporate IT standards. At the portfolio level, IT has to make decisions about the scope, depth, and breadth of product offerings, with an eye toward optimizing resources and minimizing costs. Acceptable exceptions to standards must be clearly detailed and explained. At the product level, choices will define the features, service levels, and prices of each product.

When it comes to defining standards, according to the McKinsey article:
'companies may find that well-defined products and product portfolios are the single most important determinant of the infrastructure function's success. Developers and users may rebel if a portfolio offers too few choices, for instance, but a portfolio with too many won't reap the benefits of scale and reuse.'

Finally, the bottom row in the above model, the one labeled Factory, requires the creation of automated, repeatable processes that can be applied across systems. Under this new transformational approach, application developers specify a service need but have no input into the underlying technologies or processes chosen to meet it, and IT has the freedom to choose which software technologies, hardware, and processes to use. The net benefit is that IT is free to rethink and redesign its delivery model for optimal efficiency. Using standardized and documented processes, it can start developing an integrated set of software tools to automate its operations. It can even leverage its processes and automation tools to develop an integrated location strategy that could minimize the need for data centers, so that more functions can operate remotely in low-cost locations.

Friday, May 06, 2005

IT Standards Manifesto

It’s hard to imagine how establishing and communicating corporate IT standards within an enterprise won’t, at the very least, result in significant, measurable hard dollar savings.

When I use the term IT standards, I'm not referring to de facto or de jure industry IT standards published by standards organizations. Industry standards such as 802, X12, HTTP, SQL, DNS, and scores more just like these are literally indispensable to the computing industry. They represent the very essence of goodness. But those aren't the standards I'm talking about. Rather, my comments pertain to products selected as internal corporate standards, like a standard relational DBMS, or a standard application server, or a standard desktop operating system.

No matter what performance metric you choose to measure, you'll find that standardizing will improve your enterprise's efficiency by lowering costs, shortening cycle times, and reducing staffing. Simultaneously, standardization, coupled with consolidation, will increase IT effectiveness.

Standards really do enable an IT organization to do more with less. If you want a responsive, agile IT organization, then adopt a simplified, streamlined, less complex, standardized computing environment.



Thursday, May 05, 2005

What is IT Architecture?

If you enter 'IT Architecture' into Google, don't bother pressing the I'm Feeling Lucky button, because you'll land on a web site that has absolutely nothing to do with IT. What you'll find, instead, may be of interest if you're designing buildings, but it won't help you one iota if what you're trying to do is design IT systems.



Unfortunately, clicking the Google Search button won't offer you much more assistance, not unless what you're looking for are "Sponsored Links", the majority of which will direct you to pages pertaining to 'Enterprise Architecture'.

Assuming, like me, you prefer your Google results unsponsored, let's try searching on 'Enterprise Architecture' directly. At least that search will return results relevant to IT. The first item retrieved directs you to the following web site:




There, you'll find the Institute For Enterprise Architecture Developments' version 1.4 of its Extended Enterprise Architecture Framework. For a detailed view of IFEAD's model, click on the following image:



While quite similar to the Zachman Framework, I believe the above graphic represents a significant improvement because it contains fewer rows. In articles I've written for FTP's Enterprise Architect magazine, I've complained that the biggest problem with the Zachman approach is deciding in which row something belongs. As you move down from the top to the bottom of the Zachman Framework, the boundaries between the rows appear to become ever more arbitrary. I've attributed this problem primarily to John Zachman's personal lack of experience in actually developing software systems. Please don't infer from this criticism that I reject everything Zachman advocates. Quite the contrary. I've always applauded John's idea that the fundamental purpose of Enterprise Architecture is twofold:
  1. Document everything about IT
  2. Model everything about IT
It's just that on these two points, I prefer to reverse the ordering, placing higher emphasis on modeling. That's why I was so disappointed to discover that what I consider to be the best Enterprise Architecture model, TOGAF (The Open Group Architecture Framework), doesn't even appear until you get to the bottom of page 7 of Google's search results.

TOGAF defines four types of architecture that are subsets of an overall Enterprise Architecture.
  1. Business Architecture
  2. Data Architecture
  3. Application Architecture
  4. Technology Architecture
The first three subsets are pretty universally understood. Business Architecture describes business processes. Data Architecture defines entities (aka objects). Application Architecture specifies solutions, which combine a little bit of process with a little bit of data. But what about Technology Architecture? What exactly is it? My experience has been that most IT professionals have a really tough time defining Technology Architecture. I've spent considerable energy trying to precisely define this term. My analysis has led me to conclude that Technology Architecture corresponds to an organization's technology portfolio.

A technology portfolio consists of all the IT products purchased by an enterprise. The goal is to figure out how best to organize all this information. In my opinion, the best way to do this is by using a three-layer, four-model representation as illustrated below:




The bottom layer corresponds to IT Infrastructure. While extremely expensive and complex, IT Infrastructure, by itself, really doesn't do much of anything. It just sits there. Value is derived only after Applications get layered on top. Depicted as the middle layer in the graphic above, applications can either be developed or purchased. Of course, regardless of whether they're built or bought, applications generate data -- data that yearns to be analyzed and mined for its Business Intelligence. That's the top layer of Technology Architecture.

The ITscout web site provides an interactive environment where you can explore these four models:
  1. IT Infrastructure Roadmap (bottom row)
  2. Application Development Roadmap (middle row)
  3. COTS Application Roadmap (middle row)
  4. Business Intelligence Roadmap (top row)
In future posts I'll delve into more details about Technology Architecture as well as the four ITscout roadmaps.


Wednesday, May 04, 2005

Dipping my Toe into the Blog pond



Conventional wisdom says a new technology is obsolete by the time it appears on the cover of BusinessWeek ("Blogs will change your business", May 2, 2005). At least 'Blogging' isn't offered yet as a course at the Harvard Business School. That's the ultimate technological kiss of death. HBS is still too busy telling corporate leaders why IT doesn't matter anymore.


Today, I'm taking the plunge, dipping my toe into the Blog pond. My goal is to share commentary that will complement the content presented on the ITscout web site.

Although I've been writing published articles for quite awhile now, mostly pertaining to IT architecture, I want to blog so I can more frequently express my thoughts and ideas.