Wednesday, June 29, 2005

RSS & Microsoft

Friday, June 24, 2005 3:18 PM/EST
More Takes on Microsoft's RSS Plans
If you're still grappling with the significance of Microsoft's latest RSS strategy announcements, here are some links that might be useful. The Channel 9 team has posted a video with demos of how the RSS integration into Longhorn and IE will work in different real-world scenarios. Charles Fitzgerald, manager of Microsoft's platform strategies unit, blogs about the kinds of lists Windows users might find interesting to consume via RSS. And Stanford Law Professor Lawrence Lessig discusses the significance of Microsoft's decision to license the pending Microsoft list extension to RSS under a Creative Commons license.
Read More...


Friday, June 24, 2005 3:07 PM/EST
Microsoft to Make RSS Part of Longhorn, IE 7.0 Fabric
The cat's officially out of the bag. Microsoft is building RSS support into Longhorn. Specifically, it is adding a common feed list, a common data store, and a synchronization engine to the next version of Windows, allowing Longhorn applications to consume RSS feeds. Microsoft also is building RSS support into Internet Explorer 7.0 (both the version integrated in Longhorn and the one that will run on XP). And the company is extending the RSS spec with a "list" extension, enabling users to syndicate new kinds of content (music play lists, photos, Amazon.com wish lists, calendar info, etc.). Longhorn testers won't get the RSS-ready Longhorn build until the Professional Developers Conference in September. IE 7.0 Beta 1 will support RSS feeds, however, according to Microsoft.
Read More...


Tuesday, June 28, 2005

Usability...Usability... Usability...

Remember Johnny Carson as "Carnac the Magnificent," the psychic who gave the punchline to a joke before revealing the corny setup? He'd hold up an envelope, then, using his mystical powers, provide the answer. Finally, he'd open the letter and read the question.

Our answer is:
"Usability... Usability... Usability..."

The question is:
"Name the three most important factors when considering software."

Okay, c'mon, it's just a stupid take-off on the old joke in which the answer is:
"Location... Location... Location..."
and the question is:
"Name the three most important factors when considering real estate."

Jakob Nielsen and his pal Don Norman make for an interesting pair of cyber dudes. Their firm is creatively called the Nielsen Norman Group.

Nielsen, who speaks perfect English, though with a slight hint of a Scandinavian accent, is an unabashed self-promoter of all things usability-related. He personally likes to be referred to as "the guru of Web page usability" or the Web's "usability czar". He publishes a website called useit.com and a usability newsletter called Jakob Nielsen's Alertbox.

Don Norman is your consummate academic practitioner thinker. My memories of Don go back to my days as an aspiring young cognitive psychologist specializing in speech perception and minoring in artificial intelligence. Those were wonderfully heady days, but as a struggling, just-married graduate student, money was always in short supply. In my first year as a grad student at Indiana University, my Ph.D. advisor, David Pisoni, was himself just a first-year rookie professor, the ink still fresh on his own doctoral diploma from the University of Michigan up in Ann Arbor. Long story short, Pisoni got me a paying gig writing the test questions for Don Norman's book -- the first textbook ever written on the topic of "Cognitive Psychology".

Many years later, I finally got the chance to meet Dr. Norman in person, when we were both attending a conference in Camden, Maine called "PopTech!". Disappointingly, I learned all these years later that while I may have gotten paid for writing the textbook's questions, the author himself hated my work. Good thing I high-tailed it out of cognitive psychology and into computer science.


PopTech!


PopTech! was started by Bob Metcalfe (inventor of Ethernet), John Sculley (former head of Pepsi and Apple -- best known as the person who fired Steve Jobs, the man who had hired Sculley in the first place), and Tom DeMarco (highly respected author, especially well known for his books on software project management and information system design), among others. It offers a great, eclectic, annual technology program -- held in the Camden Opera House every October -- a delightful time of year to visit a delightful coastal gem of a town down east in Maine, just south of Acadia National Park.


Usability: Empiricism or Ideology?

The June 27, 2005 edition of Jakob Nielsen's Alertbox provides some wonderful insights about the work of usability. Below are some [heavily] edited excerpts:
The economist Arnold Kling summarizes long-term economic growth using a somewhat peculiar metric: flour bags.
When you measure how many bags of flour you can buy for a day's wages, the average worker today generates 430 times the value of a worker in the year 1500.

Kling uses bags of flour to compare productivity because it's one of the few things that has been produced continuously and provides the same benefit today as it did in past centuries.
Ultimately, Kling's point is that the economy is a learning mechanism.
This huge increase in wealth resulted from a gradual accumulation of learning.

We figured out how to do things faster, cheaper, and more efficiently one improvement at a time.

Knowledge doesn't ebb and flow. It stacks on top of itself.

Improvements build on each other to create ever-increasing productivity gains.
Learning advances are generated in two ways:
  • In the scientific method, you think up hypotheses and perform experiments to falsify them. If enough studies fail to falsify a hypothesis, we start to believe it.
  • In Darwinism, multiple entrepreneurs bet in parallel on their ability to satisfy a customer's need. Most companies go out of business because the economy's "invisible hand" rejects their proposals.
Both of the above approaches are essentially the same:
  • you propose a solution
  • then you see if it works

Usability Derives Value from Reality

Usability methods such as field studies and competitive analyses help in the discovery of principles that can guide the building of effective user interfaces that real people can comprehend. Unfortunately, using usability to explain human behavior in complex systems under strongly context-dependent circumstances is a less than exact science. It's not like, say, making predictions using physics. On the other hand, if some aspect of a user interface causes problems for many users in many different environments, it's probably a safe bet to avoid it.

Usability as Ideology

  • right of empowerment -- users are in control of outcomes
  • right of simplicity
  • respect for people's time

Click here to read Nielsen's June 27, 2005, Alertbox.




In my company, Flashmap Systems, we are true believers in Nielsen's usability ideology.

We are extremely proud of the user interfaces our products provide. Everyone should understand that:
FOR USERs, the INTERFACE IS THE SYSTEM
In Flashmap's case, our products' usability target is the occasional, sporadic, intermittent user. We strive for a user interface that is smart and fun (sort of like my blog is supposed to be smart and funny).

Monday, June 27, 2005

Local WiFi turns into federal case

Local WiFi plan turning into a federal case
By Hiawatha Bray   |  June 27, 2005

A number of US cities are committed to the idea of treating Internet access as a public utility, especially since they won't have to string miles of wire. Wireless changes all the economics.

But first, cities may have to clear it with Pete Sessions, a Texas Republican in the US House. This month, Sessions offered a thoroughly odd bill that would ban cities from running communications networks that compete against private-sector telecom companies. And you thought Republicans believed in states' rights?

If a city decides to dabble in WiFi, why should a Texas congressman make a federal case of it? Maybe it has to do with his previous career as an executive of SBC Corp., a large phone company that isn't too eager to face fresh competition.

Do we really want cities going into the telecom business?

We do have a very large digital divide, with estimates that no more than 25 percent of citizens in poor neighborhoods have Internet access. In my opinion, government ought to help bring high-speed access to those neighborhoods. The prospect of cheap municipal WiFi makes that possible.

Click here for Hiawatha Bray's story...

Saturday, June 25, 2005

The Spirit of New England

I'm a New Englander -- born and bred. Everyone hates our weather. Then again, some say we're the happiest people in the world because no matter what the season, we can't wait for the next one.

New England's highway infrastructure, like Europe's, preceded the automobile, so our roads follow the topography of the land rather than a grid. Perhaps you're familiar with that old New England expression, often given in response to a request for directions, usually spoken with a thick downeasterner accent: "You cahn't get theah from heah." New England is made up mostly of small towns whose roads run spirally out from a village center. Indeed, we get so excited when two roads cross at right angles that we usually call those intersections "Four Corners" (they're generally few and far between).

New England is a bunch of hills, all roughly the same height. Just about the only places around here that are flat we refer to as ponds or lakes, of which we have many. We do have some hills higher than others, like the White Mountains of New Hampshire, the Green Mountains of Vermont, the Berkshires in western Massachusetts, and perhaps the prettiest peak of them all, Mt. Katahdin up in Maine. Our beaches are second to none -- from the rocky coast of Maine to the sandy Cape Cod dunes in my home town of Wellfleet that still look pretty much as they did when Thoreau traipsed over them more than a hundred years ago.

New England's got our beloved Red Sox, our adored Patriots, and the pride of our Celtics (unfortunately no ursine Bruins have been spotted around these parts for awhile now). Some say that no place possesses a higher density of student populations, what with Harvard, MIT, Boston University, Boston College, Northeastern University, plus scores more smaller bastions of educational excellence. Perhaps it's these institutions that provide the seed that ignites the spark of entrepreneurial spirit that characterizes this region.

Politically, Massachusetts, my home state, has also always been a feisty, rebellious, little revolutionary place. Back in 1775, it was the Massachusetts Minuteman militia in Lexington and Concord who fired the "shot heard round the world." Throughout the 19th century, it was Massachusetts which led the movement to abolish slavery in America. It was Massachusetts that became the first and only state to ever send a non-WASP favorite son to the White House -- JFK. During Nixon's Watergate scandal, the citizens of the Commonwealth proudly displayed bumper stickers that read, "Don't blame me -- I'm from Massachusetts," referring to its status as the only state that cast its electoral votes for George McGovern. Even last week, Howard Dean, ex-governor of Vermont and presently Democratic National Chairman, in a speech in Boston provided a quote that pretty well says it all:
"It's a delight to be in a state with two Democratic senators, ten Democratic representatives, and a governor who's never here."
[Note: Massachusetts' Republican Governor Mitt Romney is testing the presidential waters for a possible national run in 2008]
Politically, let's just leave it that, like most New Englanders (especially those of us from Massachusetts), I think America needs a better slogan than "Our torture is not as bad as their torture."

Perhaps it's because I live here, but regardless, I tend to believe New England has a higher than normal zeal for entrepreneurship. I know that I personally have twice jumped in to create new businesses. My current venture, Flashmap Systems, was founded back in October 2000. I guess I couldn't have picked a worse time to start up a new IT-related firm. Our first couple of years in business coincided with the worst recession in the 50+ year history of the computer industry. We had the misfortune, timing-wise, of delivering our flagship product, FlashAtlas, on September 21, 2001. Nevertheless, we've persisted and survived. I'm happy to report that today we have lots of satisfied customers that, if I were to name them, would read like a Who's Who of prestigious, well-known, highly respected brand-name firms.

Our forte is that niche within the field of enterprise architecture that specializes in technology architecture, portfolios and standards. I don't believe anyone does a better job of providing tools that help IT organizations communicate to the diverse constituency of people who use, develop, purchase, or manage information systems and applications.

Right now is an especially fascinating time for us. We're just about to launch two brand new products.

The first, called ITguide, is virtually ready now, having just completed six months of successful beta testing. ITguide is revolutionary in terms of its pricing. No enterprise will ever again be able to say it can't afford to establish, communicate, and enforce IT standards. The benefits couldn't be simpler -- save money through consolidation and standardization.

Our second new product, ITatlas, coming very soon -- it's just about ready to begin its own beta testing -- represents a completely new implementation of our current FlashAtlas offering. Based on the lessons we've learned during almost four years of supporting FlashAtlas customers, ITatlas will include some phenomenal new features and functionality that we're pretty certain IT architects are going to love.

So now, or at least very soon, we're about to find out where we stand in terms of what Seth Godin refers to as "The four curves of want and get." Godin is a bestselling author, entrepreneur, and agent of change. Below are four graphs that he says describe a product launch.


[Figure: Godin's four product launch curves, A through D]

Ideally, when you launch a product, you want graph A -- a steep launch curve followed, after the dotted line, by a steep acceptance curve. Graph A is the curve of the unserved market by the perfect product.

Often, according to Seth Godin, you get graph B, which looks the same as graph A for a while, but then it stops. Graph B stops when a product isn't for everybody. In other words, when curve B stops, it's probably because a product has a small but eager market of early adopters, just itching to try something new. But then the market quickly saturates and the product stops spreading.

Graph C is the most likely curve of success, not A. Curve C is the remarkable product that takes a while to find its footing. Then, the idea starts moving through communities and slowly builds, until, yes, this product is remarkable and you've got a hit.

Alas, soon after launch, there's no way to tell graph C from graph D, is there? Graph D is the curve of the dud. Most launches are duds and, as Seth Godin points out, there's "not a lot you can do about it."

So, with two new products about to hit the market, we here at Flashmap Systems will anxiously await the evolution of our products' launch curves. One obvious challenge will be deciding where to put the dotted line. We'd hate to kill off one of our new products too early, or celebrate too early. Then again, if we wind up with two D curves, I guess I can always try to get a position at some local New England college. I remember an old saying I often heard back when I was a doctoral candidate that went: Those that can, do. Those that can't, teach. And those that can't teach, teach others to teach.

Friday, June 24, 2005

Stop "Fake" Analysis

I love "fake" news -- the kind presented on the Daily Show with Jon Stewart (see Fake News Follies). I totally concur with Bill Moyers who, appearing recently as a guest on the show, said: "You [Jon Stewart] are one of the few people who can get the truth insinuated. If Mark Twain were back today he'd be on Comedy Central."

What I very much dislike is "fake" analysis, especially when it comes from a reputable research firm like AMR.

The headline of a recent AMR Research Alert Highlight grabbed my attention.
 Alert Highlight
Enterprise Architect: A Strategic Role for the Demand-Driven Enterprise
Wednesday, June 22, 2005
Eric Austvold
Great title, right? The only problem is vacuous content. This is a classic example of what I wrote about in an earlier posting -- "How Much of EA is BS?"
"It seems that anytime you put people into a position where they feel obliged to talk about things that they don't really know very much about, you're apt to get a lot of BS."
Mr. Austvold, in this article, has taken the liberty of shuffling his terminology back and forth among "enterprise architect," "IT architect," and "enterprise IT architect." I'm sorry to complain, but it strikes me that this lack of descriptive specificity reflects someone who doesn't quite grasp the actual role played by enterprise architects.

While it might seem perfectly reasonable and logical to metaphorically compare an architect used for remodeling a house with an enterprise architect used for transforming a company's supply chain, as Mr. Austvold does in this article, I find that the more times I read through this AMR Alert Highlight, the more convinced I am that this author does not understand how enterprise architecture is actually practiced in the real world.

There's a problem here. For some reason, people in IT just love the notion of architecture. At the same time, though, no one seems to know how to define what architecture actually means as it applies to IT. This impedance mismatch between what one person thinks they're saying and what another person thinks they're hearing leads to some awful misunderstandings.

One organization is trying to tackle this issue head-on -- IASA, the International Association of Software Architects.


IASA is a non-profit organization dedicated to the advancement and sharing of issues related to software architecture. Their mission is to serve as a driving force for research and standards which advance the understanding of software architecture.

The IASA web site includes some excellent Resource Links which are listed below:

World Wide Institute for Software Architecture -- www.wwisa.org
SEI Architecture -- http://www.sei.cmu.edu/ata/ata_init.html
Bredemeyer Consulting -- http://www.bredemeyer.com/
EA Community -- http://www.eacommunity.com/
Global Enterprise Architecture Organization -- http://www.geao.org/
Institute for Enterprise Architecture Developments -- http://www.enterprise-architecture.info/
Enterprise Architecture Interest Group -- http://www.eaig.org/
Software Architectures -- http://www.softwarearchitectures.com
The Open Group architecture (TOGAF) -- http://www.opengroup.org/architecture/togaf/
Software Fortresses -- http://www.objectwatch.com/
Enterprise Architect Magazine -- http://www.fawcette.com/ea
Convergent Architecture -- http://www.convergentarchitecture.com/
Software Fortress Model -- http://www.objectwatch.com/
Documenting Architecture Views -- http://www.sei.cmu.edu
Architect Bootcamp -- Amazon Link
Metropolis Architecture Style -- http://www.pathelland.com/


Thursday, June 23, 2005

Business Process Breakthrough Needed

This blog entry is an unusually long posting, as I try first to explain, and then place into perspective, the entire fifty-year history of the computer industry.

My goal is to describe what I see as the major breakthrough that will propel software development into the next decade of the 21st century.



Every now and again, the software industry has experienced some truly remarkable epiphanies. For example:
  • The invention of compiled programming languages that translate source code written in a high-level language into object code or machine language that can be executed directly by a computer's CPU.

    • FORTRAN, developed back in 1957 by a team at IBM led by John Backus, is generally credited as being the first successful compiler.

    • COBOL, standardized in 1960 by the CODASYL committee and heavily influenced by Grace Hopper's earlier FLOW-MATIC language, was an especially important early high-level language. It introduced the notion of separating DATA and PROCEDURE into separate divisions of code.

    • ALGOL, developed in the 1960s, pioneered most of what are considered modern programming language concepts, such as block structures, functions, and subroutines. Today's crop of popular languages, C and its descendants (C++, Java, and C#), still bear a remarkably striking resemblance to ALGOL.

  • In the 1970s, tremendous advances were made in the field of database management. None was more significant than the introduction of the ANSI/SPARC three-schema, two-transformation Data Architecture.

    • external schema -- supports multiple different views, which control how data is accessed by individual programs or users

    • conceptual schema -- defines a single, unified model that logically represents real-world entities and relationships

    • internal schema -- specifies how data is physically stored on disk, including access methods and related data structures such as pointers, indices, and hash tables
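The three-schema, two-transformation idea can be sketched in a few lines of Python. Everything here (the customer record, the billing view) is invented purely for illustration; only the external/conceptual/internal layering comes from the architecture described above:

```python
# Internal schema: how the data is physically stored (here, a dict keyed by id).
_storage = {
    1: ("Alice", "alice@example.com", 1200),
    2: ("Bob", "bob@example.com", 450),
}

# Conceptual schema: a single, unified, logical model of the real-world entity.
class Customer:
    def __init__(self, cust_id, name, email, balance_cents):
        self.cust_id = cust_id
        self.name = name
        self.email = email
        self.balance_cents = balance_cents

def load_customer(cust_id):
    """First transformation: internal -> conceptual."""
    name, email, balance = _storage[cust_id]
    return Customer(cust_id, name, email, balance)

# External schema: each program sees only the view it needs.
def billing_view(cust_id):
    """Second transformation: conceptual -> external, for a billing program."""
    c = load_customer(cust_id)
    return {"name": c.name, "balance": "$%.2f" % (c.balance_cents / 100)}

print(billing_view(1))  # {'name': 'Alice', 'balance': '$12.00'}
```

The payoff of the layering is that the storage format can change (say, from tuples to a real database) without touching any program written against the external views.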

  • The 1980s are perhaps best remembered from a software perspective for the emergence of multi-layered Network Architecture stacks with protocols and interfaces. A layer in one stack communicates with a peer layer in another stack through protocols. Within each stack, interfaces specify how one layer interacts with the layers immediately above or below. Network Architecture is what made the Internet possible. The most important implementations included:

    • ISO's OSI (Open System Interconnect)

    • IBM's SNA (System Network Architecture)

    • Digital Equipment Corp.'s DECnet (DEC's Network Architecture)

    • TCP/IP (Transmission Control Protocol/Internet Protocol)

  • The really big software breakthrough that occurred during the 1990s was the widespread adoption of object-oriented programming (OOP). Three core concepts lie at the heart of the OO paradigm:

    • encapsulation
      An encapsulated object acts as a "black box" for the other parts of a program that interact with it. An encapsulated object provides a service, but the calling objects do not need to know the details of how that service is accomplished. Objects present an interface, which defines the set of services the object provides, and interact by sending messages back and forth to each other. Each message is defined by a signature, which specifies the number of parameters comprising the message, plus a name and data type for each individual parameter.

    • polymorphism
      Objects can respond differently to the same message. Polymorphism usually refers to overloading. This is the idea of allowing the same code to be used when incoming messages may be made up of different data types. Overloading allows multiple functions, based on different data types, to be defined using the same name. For example, consider the difference in functionality provided by the "+" (plus) operator depending on whether the operands (i.e., the numbers being added) are integers, floating-points, or strings (the latter is often used to specify concatenation).

    • inheritance
      Inheritance is a relatively simple concept which allows one class to extend another, or to inherit its characteristics. When a class is extended to create a sub-class, all of the properties (variables and methods) of the original class still exist within the new class, along with any new properties that have been added. The sub-class can override existing properties by reusing the same name as an inherited variable and/or method. In other words, new properties can be added AND existing inherited properties can be modified.
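All three concepts fit in one minimal Python sketch. The Account and SavingsAccount classes are hypothetical, invented for this example:

```python
class Account:
    """Encapsulation: callers use the deposit/balance interface,
    never the internal _balance field directly."""
    def __init__(self, balance=0):
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):
    """Inheritance: extends Account and overrides deposit."""
    def __init__(self, balance=0, bonus=0.01):
        super().__init__(balance)
        self.bonus = bonus

    def deposit(self, amount):
        # Override: same message name as the inherited method, new behavior.
        super().deposit(amount + amount * self.bonus)

# Polymorphism: the same "deposit" message gets different responses
# depending on the receiving object's class.
accounts = [Account(), SavingsAccount()]
for a in accounts:
    a.deposit(100)
print([a.balance() for a in accounts])  # [100, 101.0]

# Operator overloading, the classic example of polymorphism:
print(1 + 2, 1.5 + 2.5, "con" + "cat")  # 3 4.0 concat
```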

    Possibly even more valuable than the three concepts presented above -- encapsulation, polymorphism, and inheritance -- is object technology's support for a property called closure.

    I'm quite sure you're already familiar with this notion of closure. For example, consider how integer algebra supports the closure property. If you add two integers together, the resulting sum is always an integer. If you multiply two integers together, the resulting product is always an integer.

    Similarly, relational database technology supports the closure property. If you take two relational tables and join them together using relational algebra, the final result is always another relational table.
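Relational closure can be demonstrated directly with Python's built-in sqlite3 module; the table names and rows below are made up for the demonstration:

```python
import sqlite3

# Closure in relational algebra: joining two tables always yields another table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("CREATE TABLE orders (cust_id INTEGER, item TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "flour"), (1, "yeast")])

# The join result is itself a table of rows, so it could in turn be joined,
# filtered, or projected again -- that is the closure property.
rows = db.execute("""
    SELECT name, item
    FROM customers JOIN orders ON customers.id = orders.cust_id
    ORDER BY item
""").fetchall()
print(rows)  # [('Alice', 'flour'), ('Alice', 'yeast')]
```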

    With objects, if you take one object and embed it within another object, the result is always another object. This is enormously powerful in that it enables support for a feature called object composition. The beauty of this capability is how it allows programmers to build software the same way people usually describe their requirements -- by exception. That is, I want my program to behave just as it already does, only differently.

    When a software developer takes one object and wraps it around another object, the programming code in the outer wrapper object has to be able to handle all incoming messages. Now, imagine an incoming message arrives that matches a message supported by the inner wrapped object. The outer wrapper object has a choice. One possibility is for the outer wrapper object to simply pass the incoming message along to the embedded inner wrapped object for processing. However, if new functionality is desired, the outer object wrapper can intercept the incoming message and pass it instead to some new code in the outer wrapper object, thereby bypassing the embedded inner wrapped object. Note that it's also entirely possible, and often quite practical, for the outer wrapper object to add code to support new messages, and thereby extend the existing object's original functionality.
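The wrapper choices just described -- pass a message through, intercept it, or add a brand-new one -- can be sketched in Python. The Logger and TimestampLogger names are invented for illustration:

```python
class Logger:
    """An existing object with its original behavior."""
    def log(self, msg):
        return "LOG: " + msg

    def close(self):
        return "closed"

class TimestampLogger:
    """The outer wrapper object: composition, extension by exception."""
    def __init__(self, inner):
        self._inner = inner  # the embedded, wrapped object

    def log(self, msg):
        # Intercept this message: run new code instead of the inner object's.
        return "LOG [2005-06-23]: " + msg

    def rotate(self):
        # A brand-new message the inner object never supported.
        return "rotated"

    def __getattr__(self, name):
        # Every other message is passed straight through to the wrapped object.
        return getattr(self._inner, name)

wrapped = TimestampLogger(Logger())
print(wrapped.log("hello"))  # intercepted: LOG [2005-06-23]: hello
print(wrapped.close())       # passed through: closed
print(wrapped.rotate())      # new functionality: rotated
```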

    In terms of full disclosure, let me point out that development by exception can also be implemented using inheritance, and sometimes inheritance is a better approach for handling exceptions than composition. Explaining the differences between composition and inheritance is beyond the scope of this discussion. If you want to explore this topic further, let me suggest you begin by reading Object Composition vs. Inheritance.

    There's one last point I'd like to discuss regarding objects before we switch our attention to business processes. Along with the emergence of object technology came a three-tier approach to distributed processing. Just as data in the 1970s evolved into three layers (i.e., external, conceptual, internal), objects in the 1990s were broken down into three categories:

    • Visual Objects -- refer to the external, visual appearance as seen by a user, particularly a user interacting through a graphical user interface. Today, most GUI environments are implemented using a design pattern known as MVC -- Model, View, Controller.

    • Business Objects -- represent the equivalent of a database conceptual schema. Each business object is generally described in terms of its properties (i.e., data fields), the messages it accepts (i.e., its behaviors), and the messages it sends (i.e., the other objects with which it collaborates). Another useful way of describing business objects is a technique called CRC -- Classes, Responsibilities, Collaborators. The most difficult problem in learning object orientation is getting a programmer to give up the global knowledge of control that is possible with procedural programming languages, and to rely instead on the local knowledge of objects to accomplish tasks. Procedural designs can be characterized at an abstract level as having processes, data flows, and data stores; CRC represents a similar set of fundamental principles for object designs. Classes create a vocabulary for discussing a design: designing requires finding just the right set of words to describe objects, a set that is internally consistent and evocative in the context of the larger design environment. Responsibilities identify problems to be solved; the responsibilities of an object are expressed by a handful of short phrases, each containing an active verb, and the more that can be expressed by these phrases, the more powerful and concise the design. Finally, one of the distinguishing features of object design is that no object is an island. All objects stand in relationship to others, on whom they rely for services and control. Collaborators are the objects which will send or be sent messages in the course of satisfying responsibilities.

    • Persistent Objects -- refer to how objects are stored on disk, often inside relational databases. Because of fundamental modeling differences between relational and object technologies, mapping software is often needed to translate between the different representational models.
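The mapping software that last bullet mentions can be reduced to a toy sketch: code that translates between an object (conceptual model) and a flat relational row. The Product class and its row layout are invented; real object-relational mappers are far more elaborate:

```python
class Product:
    """A business object to be persisted."""
    def __init__(self, sku, price_cents):
        self.sku = sku
        self.price_cents = price_cents

def to_row(product):
    """Object -> relational representation (a flat tuple)."""
    return (product.sku, product.price_cents)

def from_row(row):
    """Relational representation -> object."""
    return Product(row[0], row[1])

# A round trip through the mapping preserves the object's state.
p = from_row(to_row(Product("SKU-42", 999)))
print(p.sku, p.price_cents)  # SKU-42 999
```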
While the hopes for object technology to deliver on the promise of reusability were originally huge, the actual results proved significantly less spectacular. Two problems in particular arose. The first was business object bloat: it turns out that, over time, business objects have a tendency to keep growing bigger and bigger as more and more functionality gets added to the mix. The second issue was even more vexing: developers found it extremely difficult to find the objects they were supposed to reuse. Basically, the tools provided to software developers for browsing class hierarchies turned out to be the weakest aspect of integrated development environments (IDEs). If existing objects can't easily be found, developers just go ahead and reinvent the wheel by creating entirely new object classes.

That pretty much brings us up to the first decade of the 21st century, sometimes referred to as the aughts.

The 21st century breakthrough that has moved computing beyond the realm of object technology was the realization that the most reusable facet of enterprise computing is not business objects, but rather, business events.

The most effective method for leveraging business events is modeling business processes using state-transition diagrams. A stable state exists until such time that a business event occurs. The actual business event gets signaled by a message. In response to that message, business rules get fired off that determine whether or not a state-transition should take place.

While business events represent the most reusable facet of computing, business rules reflect the most volatile aspect of computing -- business rules are continuously changing.

The best way to automate business events and business rules is with message triplets that specify:
  1. objects
  2. actions
  3. conditions
When a business event occurs, it triggers the sending of a message to a business object. Included with the message is an action along with its associated parameters. Conditions can then be tested based on business rules. If the test returns TRUE, the corresponding action is performed. If the test returns FALSE, the message is discarded and ignored.
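The triplet can be sketched in a few lines of Python, with invented names throughout: an Order object, a "ship" action, and an is-paid check standing in for a real business rule:

```python
class Order:
    """A business object with a simple state."""
    def __init__(self, paid):
        self.state = "OPEN"
        self.paid = paid

    def ship(self):
        self.state = "SHIPPED"  # the state transition

def dispatch(obj, action, condition):
    """Deliver an (object, action, condition) triplet:
    perform the action only if the business rule returns True."""
    if condition(obj):
        getattr(obj, action)()  # test passed: perform the action
        return "performed"
    return "discarded"          # test failed: message ignored

rule_is_paid = lambda order: order.paid  # a declarative business rule

a, b = Order(paid=True), Order(paid=False)
print(dispatch(a, "ship", rule_is_paid), a.state)  # performed SHIPPED
print(dispatch(b, "ship", rule_is_paid), b.state)  # discarded OPEN
```

Because the rule is passed in as data rather than hard-coded into the object, it can change as often as the business does without touching the object itself.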

The elegance of this approach comes from the fact that at every step in a process, what's being manipulated are business objects. In other words, the process is providing a locality of reference so that objects can be found and hence reused. Should an existing object not provide exactly the functionality that's required, the software developer can use either composition or inheritance to extend the object's functionality by exception. If a business object already provides an 80% solution, then the developer only needs to concentrate on satisfying the remaining 20%.

Obviously, there's a tremendous synergy that exists between business objects and business events. Add in the declarative nature of business rules and the resulting development environment represents a major advancement in programmer productivity. Collectively, the promise of focusing on business processes is their ability to enable enterprises to achieve enormous and sustained operating performance improvements.

People who are committed to business processes, and who genuinely comprehend their nature, must ensure that others within the enterprise also share a correct and deep, as opposed to flawed or superficial, understanding of business process concepts.

Dr. Michael Hammer, a long-time industry pundit, points out that business processes ought to be specified end-to-end -- not as a single procedure, not as just any sequence of activities, and not merely as routinization or automation. Indeed, Hammer views the end-to-end nature of business processes as the most distinguishing feature that lies at the heart of their power. He believes that central to harnessing business processes is reorienting people's mindsets, focusing everyone's attention on customers, outcomes, and teams. In other words, business processes provide the big picture that enables people to see themselves and their work in context.

The analysis of business processes should not be limited to transactional back-room operations. It also applies to creative activities (like product development), support or enabling processes (e.g., HR), infrastructure processes (such as plant maintenance), and even managerial work (such as strategic planning). All of these benefit from the process disciplines of design, measurement, ownership, and ongoing management. According to Dr. Hammer, business process management entails precise design, honest diagnosis, disciplined remediation, and eternal vigilance.

Perhaps the most important point here is that executive management, the people Dr. Hammer is talking to, is hearing the same message IT people need to hear about the importance of business processes. Superior business processes are the result of creative design, precise implementation, and careful management of change. To succeed, an organization must have expertise in the full range of issues associated with process design and reengineering, from the conceptual (identifying and envisioning processes) through the technical (structuring and carrying out an implementation plan) to the organizational (anticipating and overcoming resistance to change).

BPEL and BPM

As Paul Harmon of BPTrends points out, there's a giant gap between the promise of business process management, as described above by Michael Hammer, and the reality of technology to deliver on this promise.

If you talk to business process modeling practitioners, you will discover that most assume BPEL (Business Process Execution Language) is, or soon will be, the language of choice for modeling and managing business processes.

The first, and thus far most successful, version of BPEL, called BPEL4WS (BPEL for Web Services), was originally developed jointly by BEA, IBM, and Microsoft. Unfortunately, BPEL suffers from a huge deficiency.

While many vendors claim to fully support BPEL, the sad truth is that each individual vendor has pretty much been forced to go off and build its own proprietary implementation, adding the numerous features needed to fill the gaping holes in the BPEL specification.

The major underlying problem is that OASIS, the standards group responsible for the BPEL specification, has decided to leave "holes" in the standard which individual vendors will have to fill. This guarantees that even complete official BPEL implementations will require proprietary extensions in order to work, and that it will be impossible to interchange one vendor's version of BPEL with another's.

Most people naturally assume that BPEL should be able to manage all business processes -- both automated and non-automated. But the current version of BPEL can only manage automated activities. This undermines the whole possibility of using BPEL as a way of passing business process descriptions from one tool to another, or passing a company's business process descriptions to its partners.

Conclusion

The road to business process management is going to be difficult. As Paul Harmon has pointed out, BPEL is still a very immature standard and desperately in need of dramatic improvements. As Michael Hammer indicated, the paradigm shift associated with business process modeling and management extends far beyond the boundaries of an enterprise's IT organization.

The ultimate goal is to bring object orientation's development-by-exception philosophy into the realm of business process models, such that an existing best-practice workflow can be used as a starting point, and then easily extended or modified to accommodate local requirements. One can almost envision a world where a business-oriented developer working collaboratively with an object-oriented software developer could rapidly and almost effortlessly take existing business processes and business objects and make simple changes:
  • adding new capabilities
  • modifying existing capabilities
Add to this mix a declaratively-specified business rules engine that controls state-transitions, and the net result represents the ultimate software development platform in terms of productivity, speed of change, and responsiveness to business needs.

Wednesday, June 22, 2005

Federal Enterprise Architecture



Federal Enterprise Architecture


Over at Business Process Trends, Paul Harmon, Executive Editor/Analyst, and Celia Wolf, CEO/Publisher, do a sensational job of providing news and information related to all aspects of business process change, especially trends, directions and best practices. Their excellent, and free, BPTrends Monthly Newsletter recently discussed some of the innovative work being done by US government agencies as they seek to comply with the Clinger-Cohen act.


The Clinger-Cohen Act of 1996, also known as the Information Technology Management Reform Act, intended among its many purposes to "reform acquisition laws and information technology management of the Federal Government." The Clinger-Cohen Act was implemented to ensure that agencies improve the initial capital planning process for large acquisitions to develop realistic cost, schedule, and performance goals that are tied directly to agency strategic mission goals within available budget resources.

The Clinger-Cohen Act mandates that US government agencies create an Enterprise Architecture "as a means of integrating business processes and agency goals with IT." A number of government agencies are involved in driving the EA initiative, but the leaders are the Government Accountability Office (GAO), the Office of Management and Budget (OMB), the Federal Enterprise Architecture Program Management Office (FEA-PMO) -- which was established by the OMB in February of 2002 -- and the CIO Council, which is made up of CIOs for US government departments.

Now, don't get me wrong. I NEVER LOOK TO THE U.S. FEDERAL GOVERNMENT (or any government agency, for that matter) as a beacon for innovation or brilliant thinking (especially this current Administration and Congress). Indeed, the very fact that leadership for these EA initiatives is emanating from government makes me highly skeptical that anything of value will ever materialize from these efforts. Nevertheless, I'll try my hardest to give the federal government the benefit of the doubt and overlook my deep-seated prejudices.

To promote the Federal Enterprise Architecture, the GAO has defined five stages of maturity, as follows:
  • Stage 1. Create EA Awareness
  • Stage 2. Build an EA Management Foundation
  • Stage 3. Develop Architecture Products
  • Stage 4. Complete EA Products
  • Stage 5. Leverage EA to Manage Change
Looking at the Federal Enterprise Architecture diagram presented above, you'll see the first component to be developed is the Performance Reference Model. During fiscal 2005 budget reviews the OMB used the Performance Reference Model (PRM) to identify programs that were delivering value and to spotlight others that could not "demonstrate their value consistent with the principles of performance management and measurement."

The FEA Performance Reference Model, pictured below, combines elements of Balanced Scorecards, Baldrige Quality Criteria, Value Chain Theory, plus a number of other US government measurement tools designed to provide an overview of the value creation process in the Federal government.


Performance Reference Model

As aptly explained by Celia Wolf in the June 2005 edition of the Business Process Trends Advisor newsletter, the PRM consists of six measurement areas -- each divided into secondary measurements. These include:
Mission and Business Results which measure the specific outcomes an agency seeks

Customer Results which measure how well an agency or specific process within an agency is serving its customers (i.e., US citizens)

Processes and Activities which seek to capture the outputs that are the direct result of the process that an IT initiative supports

Technology which measures the performance of IT in support of specific processes

Two other "placeholders" are also included, Human Capital and Other Fixed Assets, to represent a pair of measurement areas not currently defined in the version 1.0 specification of the Performance Reference Model.
Isn't it ironic that the BPTrends opinion -- which states that "the entire US government effort puts too much emphasis on IT and IT components and not enough on processes and human elements" -- is almost exactly the opposite of my initial impression of FEA?

The longer I look at what the government has created with the Federal Enterprise Architecture, the more concerned I get that this whole thing may have been mislabeled. Perhaps a more befitting name might have been The Omnibus Beltway Bandits Act of 2005. For those of you unfamiliar with Washington DC, let me explain that the term Beltway Bandits is a euphemism that refers to the corps of outside consulting firms who prey on government agencies to the tune of billions of dollars per year.

It worries me how the goals of FEA seem so elusive and ephemeral, especially given the current state-of-the-art of architectural practices (or, more precisely, the lack thereof). It concerns me that so few private sector enterprises have attempted anything even close to the scope or scale being pursued by the federal government at this time. Quite frankly, major, significant, successful EA projects in the commercial business world are few and far between. Government is trying to leapfrog over industry -- staking out a leadership role in the field of enterprise architecture.

I'm pleased OMB has decided, at least for now, to move forward using a staged implementation approach. I just hope they studied the lessons taught by Sigmund Freud who explained how stages can't be skipped.

I can only imagine how much pressure there must be on the FEA management team to deliver measurable results that demonstrate business value quickly. I urge everyone not to move too quickly. Like the final frontier of space exploration aboard the USS Enterprise, FEA is trying to go where no IT organization has gone before. I just pray they proceed at a measured pace.

One Final Concern

I'm not quite sure what to make of the fact that the Technology Reference Model sits at the bottom of the FEA stack with the arrows in the diagram suggesting that it's the last component to be modeled and populated.

It seems to me the federal government already possesses a massive technology portfolio. Granted, I seriously doubt the business leaders who are driving the FEA initiative have an inkling as to what the government actually owns or uses. Furthermore, I imagine these folks haven't a clue how to model or even think about this technology portfolio iceberg that's lying in wait, lurking ahead in the path of their rapidly moving Enterprise Architecture Titanic.

Am I correct in assuming that the FEA proposes waiting until the Performance Reference Model, Business Reference Model, Service Component Reference Model, and Data Reference Model are all developed and populated BEFORE anyone figures out what we can discard from our existing technology portfolio? Don't these managers want to start cutting costs and saving money now? Or, does the federal government prefer to pay consultants exorbitant fees to rearrange deck chairs?

In the computer industry there's a classic joke that goes:
Old Hardware Goes into Museums

Old Software Goes into Production Every Night
The federal government does not have the luxury of beginning with a clean slate. How about rearranging priorities? May I recommend starting with a three-step plan: Consolidate. Standardize. Prune. Then go back and iterate on all five layers of the Federal Enterprise Architecture.

Tuesday, June 21, 2005

Defining Taxonomy & Classification

A while back, in June 2004, the Delphi Group published a whitepaper entitled "Information Intelligence: Intelligent Classification and the Enterprise Taxonomy Practice". The document includes some excellent definitions which I've excerpted (with edits) below:
Taxonomy -- is a hierarchical (or polyhierarchical) listing of topics or subject categories. It need not include a definition of topics, but only the hierarchical relationship of topics to one another. There are no standard approaches to taxonomy construction. A taxonomy is often used to provide a structured navigational path through a content collection.

Thesaurus -- is a network of words and word meanings and relationships used to put conceptual definitions into context. It defines a lexicon that describes relationships between words. Thesaurus construction is defined by ANSI standard Z39.19. A thesaurus is often used to enhance the intelligence of a taxonomy by providing insight into word meanings and relationships.

Ontology -- is a network of relationships that are self-describing and used to track how items or words relate to one another. For example, a "lives at" link or "works for" link in an ontology would be used to track these types of relationships and their corresponding values for listed individuals.

Classification -- is the process of analyzing content and determining where in a taxonomy it belongs. For example, the assigning of a Dewey decimal code to a book, based on the mapping of its content into the Dewey decimal system, is a form of classification.
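To make the last definition concrete, here is a minimal, hypothetical sketch: a taxonomy represented as a hierarchy of topics, and classification as the process of deciding where in that hierarchy a piece of content belongs. The topics and keywords are invented purely for illustration.

```python
# A tiny taxonomy: inner dicts are sub-topics, leaf sets are keywords.
TAXONOMY = {
    "Technology": {
        "Software": {"XML", "Java"},
        "Hardware": {"RFID", "router"},
    },
}

def classify(text, taxonomy=TAXONOMY, path=()):
    """Return the taxonomy paths whose keywords appear in the text."""
    hits = []
    for topic, children in taxonomy.items():
        if isinstance(children, dict):
            # Descend into sub-topics, extending the hierarchical path.
            hits += classify(text, children, path + (topic,))
        elif any(word in text for word in children):
            # Leaf reached and a keyword matched: record the full path.
            hits.append(path + (topic,))
    return hits

print(classify("an XML message"))  # -> [('Technology', 'Software')]
```

A real classifier would of course use far richer analysis than substring matching, but the shape of the problem -- content in, taxonomy path out -- is the same.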


Fake News Follies

I apologize that lately the ITscout Blog has been filled with much blather about such non-IT topics as BS artists (see "How Much of EA is BS?") and celebrity worship (see "Celebrity Power"). In my defense, I had gallbladder surgery this past Friday and have been heavily medicated on Percocet ever since. While Percocet has been a terrific painkiller, one of its more unpleasant side effects is that it makes me a bit meaner and nastier than usual. So, please try to bear with me as I recover from my operation.

In the course of my complaining, some of my more venomous remarks were directed at cable news shows, especially those broadcast by CNN, MSNBC, and Fox News. By contrast, there is one news program, albeit one that specializes in fake news, that deserves special recognition for its superior broadcast journalism. I'm talking, of course, about Comedy Central's "Daily Show with Jon Stewart."
The Daily Show's cutting-edge humor, coupled with its insightful reporting style, provides a perspective on what's happening in today's world that's unmatched by any other news source. For instance, take a look at the conversation below between Daily Show anchor Jon Stewart and correspondent Stephen Colbert discussing the recent Michael Jackson verdict:
Jon Stewart

The trial is over. Michael Jackson has been acquitted. What's the overall lesson we should learn from this case?

Stephen Colbert

If you're planning to commit a crime, California is the place to do it. The DA's are functionally retarded and the juries won't convict anyone.

O.J. Simpson had the means, motive, and her blood was on his glove. He got off.

Robert Blake. 'I went back into the restaurant to get my gun' was his alibi. He got off.

They had videotapes of cops beating up Rodney King. The cops walked so the people rioted.

The rioters beat up another guy. They were videotaped. They walked.

I'll tell you, Jon, the only way to get convicted of a crime in Southern California is if you commit the crime in front of the jury.

Jon Stewart

Why is it so difficult for district attorneys to get convictions out in Los Angeles?

Stephen Colbert

It's simple. Brain drain, Jon. It's LA. All the really good prosecutors and detectives are working as consultants on the three 'CSIs' and four 'Law and Orders.'

For another example of the Daily Show's brilliance, read the following dialog between Jon Stewart and his guest Kenneth Timmerman, author of "Countdown to a Crisis."
Kenneth Timmerman

Iran is a confirmed terrorist state. They've been murdering Americans for twenty years. They've been murdering them in Lebanon, in Saudi Arabia, and today they're murdering Americans through the insurgency in Iraq. And now they're on the verge of developing a nuclear arms capability. This is the cocktail from hell.

Jon Stewart

This is the argument made about Iraq. So, in essence, they [the Bush administration] were only one letter off.

Kenneth Timmerman

Iran has much closer ties to Al-Qaeda. I have credible evidence of eyewitness accounts that meetings took place before 9/11 inside Iran between top Iranian regime officials and bin Laden's son, and bin Laden's #2 in command. They were planning 9/11 together in Iran.


Monday, June 20, 2005

Executive Cockpits

Oh, how most businesspeople just hate IT people... and vice versa. Have you ever observed how these two groups interact -- say, for example, watching them wave at each other across a parking lot? If only they'd use all the fingers on their hand instead of just flipping each other half a peace sign.

Surely, not all CEOs dislike their IT subordinates. I'm quite certain the executives at firms like Wal-Mart, FedEx, or American Airlines, view their IT organizations as critically important strategic partners. Yet, at the same time, I'd wager those firms are far more the exception rather than the rule.

At most businesses, IT originally emerged as an outgrowth of the accounting function. And, as everyone knows, accounting isn't exactly strategic. Trying to direct an enterprise based on its accounting is like trying to steer a boat by watching its wake.
Looking at where an enterprise's CIO reports is a telltale indicator of how strategic IT is to that business. For example, if the chief information officer (CIO) reports to a firm's chief financial officer (CFO), then you can pretty much assume that IT is viewed by top management as merely a cost center. Where IT is strategic, CIOs almost always report to either the chief executive officer (CEO) or the chief operating officer (COO).

As I posted in an earlier blog, "Rationalizing Software," management never wants control and power placed into the hands of individuals. Rather, the whole purpose of rationalization and bureaucracy -- the underpinnings of modern management -- is always to transfer control from producers to managers.

Unfortunately, the creation of software has managed to elude an engineered approach to its production. Software developers simply cannot agree on the details of a defined, quantifiable, repeatable process. Instead, highly skilled software professionals are, in essence, craftsmen.

As I explained in "Software -- Engineering or Craft?," the key to software development is translating knowledge into execution. Formalizing that process has thus far proven to be all but impossible. Like I said in that earlier blog, while most people can almost instantly describe a car's overall design, asking someone to describe a factory process for making cars is much more difficult. Far harder still is asking someone to define a process for designing a factory for making cars.

The net net of all this is that many businesspeople go out of their way trying to avoid IT as much as they possibly can. One way to do so is by outsourcing. Another is by developing an almost total dependence on outside software suppliers such as SAP or Oracle/Peoplesoft.

Personally, I'm dismayed by businesspeople who shun technology. After all, a manager's job is to cope with change, and as the saying goes, the only thing constant is change. Basically, in responding to change, there are only four variables a manager can manipulate:
  1. Money
  2. People
  3. Property
  4. Technology
One can almost envision an executive cockpit containing four levers -- one for each of the four factors listed above.
Executives possess countless tools to help control and monitor financial portfolios. Have you ever known a business to not know how much money it has? Similarly, every business maintains a tight rein over its human resources as well as its plant and equipment.

But, what about TECHNOLOGY?

Most businesspeople haven't a clue how to even think about, let alone define, their organization's technology. Most of the time when it comes to managing technology, businesspeople act like a pilot who has feathered engine #4. They try to rely, instead, on managing technology solely in terms of its related money, people, and assets. Frankly, this approach isn't very effective.

Businesspeople often look upon IT people with great suspicion. For instance, they see in their technology staff individuals who appear to be more loyal to their profession than they are to their employer. IT people -- with their endless jargon and acronyms -- are renowned for making businesspeople feel stupid and inadequate.

On the other side of the ledger, IT people often feel isolated and under-appreciated by their business counterparts. They don't believe anyone else in the working world has to cope with anything even remotely comparable to the rate and ferocity of technological change that IT professionals must continuously conquer. For instance, accountants are still using the same basic double-entry techniques developed by the Italians during the Renaissance hundreds of years ago. Meanwhile, the underlying forces driving current computing projects, namely Java and the World Wide Web, are just ten years old, with newer technologies like wireless and SOA but half that age. And even newer technologies, like RFID and the semantic web, are quickly coming down the pike. Being an IT professional means committing oneself to a lifelong pursuit of learning.

IT people are also greatly frustrated by the businesspeople who control the purse strings -- folks who seem to never know what they want, can't express their requirements, and constantly change their mind as soon as any IT solution gets delivered.

Businesspeople need to learn how to think about technology. That's the reason for investing in enterprise architecture.

Executives who insist on continuing to fly their enterprise fortresses on just three engines deserve to be shot out of the air. Just remember that the reason why GM is being forced to offer prospective customers outlandish rebates or employee discounts isn't because of pension problems or the cost of worker healthcare. It's because they're not building cars with the technology and features provided by competitors like Toyota or Honda.


Saturday, June 18, 2005

Celebrity Power

As much as I despise BS artists (see my earlier posting entitled "How Much of EA is BS?"), I abhor the public's insatiable appetite for celebrity worship. What is it about human nature that causes people to obsess over the likes of Paris Hilton, Michael Jackson, Tom Cruise, O.J., Oprah, or the thousands of other famous personalities from the worlds of entertainment, sports, politics, and the jet set who grace the covers of People Magazine or appear endlessly on TV shows like Entertainment Tonight?

Why is it that with the advent of 24-hour news channels like CNN, MSNBC, or Fox News, the quality of reporting has so plummeted? How I long for the days when we had trustworthy journalists like Walter Cronkite, Chet Huntley, or David Brinkley. Of course, I guess, they were just celebrities too!

I suspect there must be some type of strong relationship that links celebrity and brand. Both feed off of people's primal need to compare -- either people against people or product against product. It's as if the natural response when confronted with limitless choices is to establish bounds to restrict our options.

In the 80s and 90s, everyone hoped that the emergence of cable and satellite TV channels would greatly improve the quality of programming available to viewers. It hasn't. Instead, we're mostly fed endless streams of BS-laden infomercials together with BS-filled talking heads interspersed with never-ending reruns of old programs from yesteryear.

More recently, everyone wanted to believe that the Web might become the anti-TV -- built from the bottom up with power in the hands of the individual rather than being controlled by soul-less corporate broadcasters. Instead, once again, we find the power of fame limiting our choices.

According to Clay Shirky, writing in a Wired Magazine article that appeared back in August, 2004 ("Why Oprah Will Never Talk to You. Ever."), "fame is an extreme imbalance between inbound and outbound attention." Famous people receive lots of attention, but are unable to acknowledge all the attention they get.

Below are some edited excerpts from that article where Clay Shirky describes what he calls the Blogging Power Curve:
The outbound-only nature of TV makes reciprocal attention impossible. The Web makes it possible for anyone to communicate with anyone else, but it doesn't make it practical. Everyone has a limited amount of time. You can read only so many books or blogs, trade email with only so many people. At some point even a two-way medium like the Web reverts to the broadcast paradigm.

Egalitarianism works only in small systems. A world of a million bloggers is different from a world of a thousand bloggers, in much the same way that cities are not just large villages. Once things went urban -- with millions of bloggers and readers -- a small set of bloggers was tipped into the one-way topology of fame.
According to Clay Shirky, there's a Weblog Pecking Order:
  • Celebrity bloggers publish and rarely participate in the discourse that follows
  • Typical bloggers engage in open dialog in which nearly anyone can participate
  • Obscure bloggers only reach a small, closed community of readers
In Power Laws, Weblogs, and Inequality, published February 8, 2003, Clay Shirky explains further:
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
Power law distributions were originally observed by the linguist George Zipf, who discovered that word frequency falls in a power law pattern, with a small number of high-frequency words (I, of, the), a moderate number of common words (book, cat, cup), and a huge number of low-frequency words (peripatetic, hypognathous).
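A quick way to see what Zipf's observation implies is to model an ideal Zipf distribution, where an element's frequency is proportional to 1/rank. In this illustrative sketch (the numbers are arbitrary, not taken from any real blog data), the top 1% of 1,000 elements captures roughly 39% of the total attention:

```python
# Ideal Zipf distribution: frequency(rank) = C / rank.
# The constant C cancels out when computing shares, so we drop it.
N = 1000
freq = [1.0 / rank for rank in range(1, N + 1)]

# Share of total "attention" captured by the top 10 elements (top 1%).
top_share = sum(freq[:10]) / sum(freq)
print(round(top_share, 2))  # -> 0.39
```

This is exactly the "disproportionate amount of traffic" Shirky describes: nobody engineered the imbalance; it falls out of the distribution itself.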

Jakob Nielsen, an Internet usability guru, writing about power law distributions in website page views (see Zipf Curves and Website Popularity), explained how Zipf curves follow a straight line when plotted on a double-logarithmic diagram.


In the diagrams above, one plots linear scales on both axes while the other shows the same data plotted using logarithmic scales on both axes. As you can see, power law distributions, like the above Zipf curve, tend to hug the axes of the diagram when plotted on linear scales. This describes:
  • a few elements that score very high (the left tail in the diagram)
  • a medium number of elements with middle-of-the-road scores (the middle part of the diagram)
  • a huge number of elements that score very low (the right tail in the diagram)
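The straight-line behavior on a double-logarithmic plot follows directly from the math: if f = C * r^(-a), then log f = log C - a * log r, which is a line with slope -a. A small sketch (the constants are arbitrary, chosen only for illustration) confirms the slope is constant between any pair of points:

```python
import math

# Power law f = C * r**(-a); on log-log axes this is a straight line.
a, C = 1.0, 100.0
points = [(math.log(r), math.log(C * r ** -a)) for r in (1, 10, 100)]

# Slope between successive log-log points: should always equal -a.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(points, points[1:])]
print([round(s, 6) for s in slopes])  # -> [-1.0, -1.0]
```

Data that only roughly follows a power law would show slopes that wobble around -a instead of matching it exactly.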
It's no accident that when we encounter famous people, they come off as cliquish or shallow. Becoming famous means being able to spend less time with everyone. You must limit the number and depth of interactions.

As a nouveau blogger, I sometimes dream of attracting huge throngs of readers. Yet, as the evidence suggests, one should be very careful what one wishes for. The technology that enables blogging does not overcome the human limits on attention that are the root cause of celebrity-dom. The accumulated weight of attention recreates the imbalances associated with traditional media. Sadly, fame is an inevitable byproduct of large social systems, and as those systems get larger, the imbalance of fame becomes more pronounced, not less. Personally, I'll be delighted if the ITscout Blog achieves middle-of-the-road success. I must admit, though, I do hope I'm not the only one reading what I write.


Thursday, June 16, 2005

7 Essential Elements of EA




David C. Baker and Michael Janiszewski of DiamondCluster International published an interesting article in FTP's Enterprise Architecture entitled "7 Essential Elements of EA."

The seven elements are:
  • Set the stage with guiding principles.
  • Manage complexity with blueprints.
  • Organize for architecture success.
  • Integrate through architecture processes.
  • Keep on track with architecture governance.
  • Use tools to model, analyze, and communicate your architecture.
  • Measure architecture success with metrics.
The article's title, along with the magic number 7, is obviously a play on Stephen Covey's famous book, "Seven Habits of Highly Effective People" (imitation is the sincerest form of flattery!)

While I'm not quite sure there are exactly seven essential elements, I do believe these authors clearly demonstrate significant experience and proficiency in developing business and technical blueprints that govern strategic IT initiatives. I have no disagreement with their list. Personally, however, I tend to believe there are two especially essential elements of EA that sit above everything else:
  1. Modeling
  2. Documenting


Browser Wars Redux

Let the browser wars begin -- anew!
Firefox has just been selected Product of the Year by PC World. According to their review, "This open-source program [Firefox] is streamlined, customizable, and just plain better. No wonder it has attracted millions of users in just a few months."

Is it merely a coincidence that Microsoft finally announced plans to give the aging Internet Explorer the major overhaul it has sorely needed for years? (see Microsoft PressPass for related story)

For the first time since Microsoft launched its flagship browser in 1995, Internet Explorer is actually losing market share. Between February 2004 and the end of April 2005, Firefox went from virtually zero downloads to 50 million. A lot of the credit must go to the folks at Mozilla. Established in 1998 to breathe new life into the fast-failing Netscape browser platform, Mozilla took a really long time -- plus the utter collapse of Netscape the company -- but it has finally delivered on its promise.

There are a lot of reasons why users are fleeing Microsoft's Internet Explorer, but a lot of it boils down to security. IE is notorious for its vulnerability to spyware, adware, viruses, and pop-ups.

Perhaps more frustrating than the security holes is the fact that Microsoft quit adding new features to its browser. The last major feature refresh for Internet Explorer dates back to August 2001 -- and it shows. Just take a look at the ITscout Blog in both Firefox and IE. It's no contest. Firefox wins hands down in terms of simple aesthetics.

Besides the more appealing rendering of HTML, Firefox offers significant feature improvements over IE, including tabbed browsing for juggling multiple Web pages, and built-in pop-up blocking to prevent ads from opening new browser windows. Other refinements include helpful managers for file downloads, integrated search bars, and more accessible controls for managing histories, cookie files, and the browser cache.

The Firefox browser does an excellent job of faithfully displaying Web pages, offers a superior user interface, and suffers fewer crashes than IE. It's also highly customizable through something called Firefox Extensions. If you haven't done so already, I strongly recommend you consider downloading and installing Firefox, if for no other reason than because the ITscout Blog looks so much prettier in Firefox than it does in IE.


XML Uber Alles

XML, the eXtensible Markup Language:
  1. separates content from presentation
  2. moves structured data across the Web
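Point 1 is easy to demonstrate in a few lines. Here's a minimal sketch, using Python's standard library and a made-up book record, of how the same XML content can feed any number of presentations:

```python
import xml.etree.ElementTree as ET

# Pure content: no fonts, colors, or layout -- just structured data.
# The book data here is entirely hypothetical.
xml_content = "<book><title>ITscout Guide</title><price>29.95</price></book>"
book = ET.fromstring(xml_content)

# Presentation is applied separately; the same content can be rendered
# as HTML for a browser, plain text for a terminal, or anything else.
as_html = f"<h1>{book.findtext('title')}</h1><p>${book.findtext('price')}</p>"
as_text = f"{book.findtext('title')} ... ${book.findtext('price')}"
print(as_html)
print(as_text)
```

Swap in a different rendering function and the content never changes -- that separation is the whole point.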

XML is:

  • platform independent
  • totally textual
  • structured
  • totally extensible
  • an open standard
  • language independent
  • web enabled
Given the feature list above, is it any wonder why XML has achieved such rapid, universal support from virtually every corner of the IT industry?

XML is the lifeblood of web services. It's literally the technology that has enabled SOA to emerge at this time as a viable way of doing distributed computing. Because it's totally textual, XML messages can pass freely through the same magic Port 80 that lets HTML traffic move back and forth between web browsers and servers.

Want proof XML is now uber alles?

First, just to make sure everyone understands what I'm saying: uber alles is German. According to Merriam-Webster, it means above everything else.

Mary Jo Foley of Microsoft Watch recently reported in her June 1, 2005 column, "Office 12 to Get New File Formats," that Microsoft is making XML-based file formats the default in its next-generation Office suite due to ship in the latter half of 2006.

The new Word, Excel and PowerPoint formats will be designated as .docx, .xlsx, and .pptx, respectively. Microsoft is referring to the family of new formats as "Microsoft Office Open XML Formats."

Microsoft is committing to publish the forthcoming XML formats and make them available under the same royalty-free license that exists for the Office 2003 Reference Schemas -- openly offered and available for broad industry use. Licensees will be able to integrate these formats into their servers, applications and business processes "without financial consideration to Microsoft."

For users of older versions of Office -- specifically Office 2000, Office 2003 and Office XP -- Microsoft will make available software downloads that will allow them to read, edit and save using the new file formats.

Unlike the binary file formats that have been a mainstay of past Office versions, the new Office XML Formats are compact, robust file formats that enable better data integration between documents and back-end systems. Microsoft is promising that the new formats, which will make use of industry-standard ZIP compression technology, will produce much smaller files that require less storage space.
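The packaging idea is simple: a document becomes a ZIP archive whose entries are XML parts. Here's a toy sketch in Python -- the part names mimic the real .docx layout, but this builds an illustrative container, not a valid Office document:

```python
import io
import zipfile

# The Office Open XML idea in miniature: a document is a ZIP archive
# whose entries are XML "parts". NOTE: this is a toy container that
# only mimics the .docx layout; it is not a valid Office document.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("[Content_Types].xml", "<Types/>")
    zf.writestr("word/document.xml",
                "<w:document><w:body><w:p>Hello</w:p></w:body></w:document>")

# Reading it back requires nothing more exotic than a ZIP library --
# no Microsoft code anywhere in sight.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    for name in zf.namelist():
        print(name)
    print(zf.read("word/document.xml").decode())
```

Textual XML inside standard ZIP compression is how the new formats manage to be both open and smaller than the old binary files.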

For additional information, see Microsoft's preview announcement: "Microsoft Office Open XML Formats Overview".


Wednesday, June 15, 2005

.NET vs. Java

Like a classic Greek drama, an endless battle swirls around .NET versus Java (particularly J2EE -- Java version 2, Enterprise Edition). Both players tout their own superiority across countless dimensions, including:
  • scalability
  • security
  • performance
  • interoperability
  • ease of development
  • support for 3rd-party products
Feature for feature -- both platforms are pretty much neck and neck -- at least in terms of what they allow developers to do. When one camp innovates, the other one almost immediately follows by delivering virtually equivalent functionality.

Philosophically, however, both worlds remain poles apart.

Most IT organizations would prefer not to have to support both platforms. Truthfully, though, they have little choice since they already own both. Microsoft's tight grip on the desktop combined with Java's strong hold on the server have made heterogeneous computing the reality almost everywhere.

.NET rules desktops

Java dominates servers

Because Java sits above the operating system, its rich suite of APIs offers a complex set of capabilities that enable it to run on Windows, Linux, Solaris, Mac OS, UNIX, etc.

.NET, on the other hand, includes deep integration with its underlying operating system. That means an application developed in .NET will only run on Windows. Of course, that Windows-only drawback is also .NET's greatest strength. On the desktop, Microsoft holds a huge edge over Java's AWT- and Swing-based graphical user interface, both in terms of visual development as well as deployed applications.

Functionally, Java and .NET are nearly identical. Yet their primary marketing messages are exact opposites:
Java says:
RUN ON ANY PLATFORM BUT PROGRAM IN ONLY ONE LANGUAGE

.NET says:
PROGRAM IN ANY LANGUAGE, BUT RUN ON ONLY ONE PLATFORM

Who's Using What?

What sorts of companies are using which platform? The longstanding stereotype has been that Java is for computer scientists while Microsoft caters to programmers who learned to code in Visual Basic. In truth, building serious enterprise-class applications in either environment requires a high degree of skill and knowledge.

IT organizations focused on robustness and security tend to choose Java. Those that need a solution faster, with less business complexity, often choose .NET.

Organizations that spend lots of money on IT are more likely to do the majority of development in Java. Conversely, small departmental IT groups tend to select .NET as their platform of choice. Of course, as already stated, large organizations mostly are using both platforms.

A host of cultural factors holds sway over the two environments. .NET is often seen as a benevolent dictator. Because it only supports an all-Microsoft environment, .NET is sometimes perceived as an all-or-nothing strategy. Java, on the other hand, has developed the reputation of being a big, dynamic, democratic community that thrives on open source software. Of course, the democracy exhibited by the Java Community Process (JCP) is often slow and chaotic. Many describe it as herding cats.

.NET's Cathedral vs. Java's Bazaar

Conventional wisdom suggests that innovation is more likely to originate in the Java community. For starters, beyond its creator Sun, Java enjoys the strong backing of many powerful software vendors, including IBM, Oracle, and BEA. Also, .NET has been hurt by long product delays, especially Longhorn -- the next-generation Windows operating system.

A few years back, Eric Raymond authored a seminal paper he called "The Cathedral and the Bazaar." His visionary look at the open-source community is being played out today in the battle between .NET and Java.

Innovations emerge slowly from the Microsoft Cathedral. Meanwhile, developers using community-advanced Java benefit from not being locked into an entire environment.

Because Microsoft ties improvements to these grand release cycles, a lot of them aren't getting into the hands of .NET developers. As a result, .NET's most developer-critical component, Visual Studio, now faces increasing competition from the openness of Eclipse.
Will the delays plaguing .NET allow nimble Java to advance ahead?

Will .NET successfully extend Microsoft's monopolistic control over desktops and office out to servers?

Will the heterogeneous computing landscape continue to persist?
Only time will tell the answers to these questions. Meanwhile, Java works well in some areas, and .NET works well in others.

In summary, let me end with a quote by Grady Booch who pointed out long ago, in an article about "Developing the Future" that appeared in the Communications of the ACM in March 2001, "there are three harsh realities: developing complex software of quality is wickedly hard, it's not getting any easier, and there is a very real shortage of skilled men and women to do the work."

Tuesday, June 14, 2005

Enterprise IT Consolidation

One key reason for simplifying complex IT environments is to improve an organization's ability to react more quickly to dynamically changing market conditions, customer requirements, and internal business needs. Most consolidation initiatives are undertaken with the hope of being able to take advantage of the flexibility benefits inherent in open platform environments.

Consolidating results in Simplicity

Simplifying an IT environment by STANDARDIZING
  • reduces total cost of ownership (TCO)
  • improves return on investment (ROI)
Achieving these goals means doing more with less. This generally involves consolidating different types of servers, storage, networks, applications, databases, and other components.

Consolidation requires a complete understanding of both short- and long-term ramifications. The ultimate test is enacting changes that actually foster new levels of productivity.

Most successful IT consolidation journeys consist of multiple steps, including:
  • consolidating data centers and networks into fewer, centralized locations

  • consolidating storage, database servers, file servers, etc.

  • deploying enterprise-class Manageware solutions

  • enhancing business-continuity capabilities

  • integrating enterprise applications to improve service levels, reduce redundancies, and streamline resources

  • introducing utility pricing and managed services that help improve responsiveness, reduce costs, mitigate risk, and provide greater flexibility

Bottom line:

The key drivers of Enterprise IT Consolidation are:
  • improved system availability
  • enhanced system management
  • improved disaster recovery
  • optimized system performance
  • improved security
  • reduced spending


What's in Your Portfolio?

One of the best parts of blogging is having the liberty to decide whose writings I want to showcase on my blog. By including others' work, I'm exposing my readers to articles they may not have already seen. That benefits the author of the original article. It also benefits me, the blogger, because I'm able to excerpt the best portions while occasionally making minor changes in phrasing or presentation style. If a reader's interest is piqued by my edited version, they can always click on the link to access the original article.

Given the above preface, I obviously now want to introduce you to another person's writings. The article I'm referencing originally appeared in the February 5, 2005 edition of CIO Insight. It was a column written by John Parkinson entitled 'Leveraging Your Project Portfolio for Better Returns.'
Edited excerpts appear below:
Modern market theory states that markets are so efficient that no one can perform much better than the market average for very long. Transparency -- the notion that everyone can see what the winners are doing -- will rapidly drive copycat behavior, forcing a rapid regression to the market's mean performance level.

It's also conventional wisdom that, because there are really many "markets" for different classes of investments, you should spread your capital across the different areas where markets are made. This lets you balance risk (that you might lose everything if a single market crashes) with reward (that you might get rich if you somehow bet exactly right).

This is the basis of asset-allocation strategies.

Judging by these theories, rather than try to pick winners, your portfolio should be as much like the market (and everyone else's) as possible. In the financial world this often means following an index (or a collection of indices) that mirror the structure of the market.

If you want to beat the market theorists, you only have three tools:
  1. concentration
  2. turnover
  3. leverage
In investment terms this means:
  1. fewer but larger bets
  2. shorter time horizons for the bets to pay off
  3. using borrowed capital to make the (bigger) bets
The basis of your bets is that you can find enough opportunities that the overall market has missed (and hence undervalues), cash in quickly before the market catches on (as theory says it must, eventually), and then move on to the next opportunity.

Notice that this is a very different approach from asset-allocation models that urge diversification across several uncorrelated asset classes, low turnover (to reduce trading fees), and periodic rebalancing of the portfolio as different asset classes go through cycles of performance.

What's all this got to do with IT -- and how does it affect the CIO?

Think of the typical IT organization as a kind of specialized investor employing an asset-allocation strategy. Using other people's money (business unit capital), the CIO makes bets that investments in technology will generate returns for the business via automation-powered leverage of various kinds -- mostly to do with productivity, time to market, low cost of compliance, or cost-effective scalability.

The CIO puts together a "portfolio" of systems that are similar to various classes of assets -- core business automation, specialized capability, personal productivity and so on. Each of these asset classes has a required investment cycle (build or buy, deploy, support and enhance, retire) and a "yield profile" that should provide the basis for a return on the investment.

The challenge for the CIO -- just as it is for the manager of financial assets -- is in maintaining a balance among the asset types, most of which have very different yield profiles. Spend too much on core automation efforts and you won't be competitive in emerging areas. Ignore "infrastructure" and your operational costs will increase as changes in technology drive new cost-effectiveness opportunities that you can't take advantage of. Ignore the basics in order to do only innovation-driven projects, and core capabilities suffer. Getting the balance right isn't easy or obvious, but it is critically important.

Most CIOs opt for an IT version of "efficient-market theory," in part because they have limited discretionary investment capital these days. As a result, most IT portfolios look pretty much the same, making truly differentiated technology capabilities quite rare.

Add in the fact that most portfolios also have low "turnover" -- new systems tend to be added to older systems rather than replacing them -- and corporate IT takes on a "mature" (and complex) look. In financial investment terms the portfolio looks like a risk-averse, infrequently rebalanced 401K.

Let's look at how the three "beat the market" tools -- concentration, turnover and leverage -- might apply to managing an IT portfolio.

Concentration

In most big IT shops, there is still more diversity of technology and platform components than there needs to be. Simplification, standardization, and consolidation have had a big impact, but a lot more could still be done.

CIOs don't like to be too dependent on a single supplier (they fear "lock-in" and a lack of pricing leverage), while bemoaning the cost of integration and support across multiple products and platforms. A more concentrated strategy does have risks (supplier viability among them), but a simpler, more standardized approach also lets you use our second tool -- turnover -- more effectively.

Turnover

Turnover affects the IT organization in two ways:
  • Shorter life-cycle stages get you to the available benefits of new technology sooner, and allow you to change course (if and when you need to) more frequently -- frequent, small course adjustments are much better from an investment-optimization point of view than large, infrequent course changes.

  • Shorter life cycles require a shift from an ROI model based on costs to one based on returns. In essence, you can choose between increasing ROI by decreasing the cost for a given level of return, or by increasing the return for a given level of cost. This might seem a minor distinction, but in practice very different investment strategies (and mind-sets) are required to make each approach work. IP telephony is a good example. Once you have gained the cost savings from switching, your attention should focus on all the other (information-powered) applications you can base on the new IP telephony devices and capabilities.

Leverage

Leverage means using "borrowed" capital to make investments so bets can be bigger. We've already noted that CIOs use business units' capital for investments in technology. Yet, capital from one set of users can create capabilities that benefit others, such as common infrastructure investments.

It's tough to know whether to spend $20 million on one large project, $2 million on ten small projects, or something in between. This is another aspect of balancing the portfolio: You want a mix of:
  • moderate risk but certain yields
  • high risk but less-certain yields
Not all the projects bearing higher risks will come out right, but the mix, if managed correctly, will give you the aggregate yield you need. And you don't want anything that has a low and certain yield -- unless you have no choice. Most of all, you don't want guaranteed losers -- and it's amazing how many portfolios actually contain a few.
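That balancing act can be sketched as a toy expected-yield calculation. Every figure below is hypothetical, purely to show how a mix of moderate-risk and high-risk projects produces an aggregate yield:

```python
# Hypothetical project portfolio: each entry is (probability of
# success, yield if it succeeds, capital invested). All numbers
# are made up for illustration.
projects = [
    (0.90, 0.12, 5_000_000),  # moderate risk, fairly certain yield
    (0.90, 0.10, 5_000_000),
    (0.40, 0.60, 2_000_000),  # high risk, less-certain but big yield
    (0.40, 0.55, 2_000_000),
]

total_capital = sum(capital for _, _, capital in projects)
expected_return = sum(p * y * capital for p, y, capital in projects)

# Aggregate expected yield across the whole mix:
print(f"{expected_return / total_capital:.1%}")
```

With these made-up numbers the aggregate works out to about 13.6 percent, even though the high-risk bets individually fail more often than they succeed -- which is exactly the point of managing the mix rather than any single project.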

If you accept the mixed-portfolio notion, you'll have to periodically dump investments that don't meet your yield criteria and replace them with ones that do. That means killing projects or platforms that don't work out, and that's always hard to do in IT.