The Linux community has recently posted a number of leaked memos that Microsoft has acknowledged to be genuine internal documents. These papers are Microsoft analyses designed to summarize the Linux phenomenon and to help Microsoft focus on how to attack this alternative platform and prevent the public from enjoying it. Prominent Linux figures have also commented on these memos and on what they reveal about Microsoft's internal culture and world-view.
But interest in understanding the Microsoft ethos is not limited to Linux supporters. OS/2 users and advocates can learn a great deal about Microsoft's inner workings and how to take advantage of their smug neglect of the OS/2 community. For example, the articles show a pervasive amnesia about OS/2 as an available option for computer users. These memos also confirm that such tactics as FUD, the threat of lawsuits, and the corruption of open standards are all normal, everyday tactics in the Microsoft repertoire. Let us analyze the memos and see what we can learn about "The Microsoft Way" and how to overcome its intentional obstacles to progress.
Open Source Software (OSS) is a development process which promotes rapid
creation and deployment of incremental features and bug fixes in an existing code /
knowledge base. In recent years, corresponding to the growth of Internet, OSS
projects have acquired the depth & complexity traditionally associated with
commercial projects such as Operating Systems and mission critical servers.
Consequently, OSS poses a direct, short-term revenue and platform threat to Microsoft
-- particularly in server space. Additionally, the intrinsic parallelism and free idea
exchange in OSS has benefits that are not replicable with our current licensing model
and therefore present a long term developer mindshare threat.
Okay, what does this basically mean? Microsoft perceives a product to be a "threat" if it presents itself as any of these:
1. a revenue alternative -- somebody might spend money on a non-MS product
2. a platform alternative -- MS might lose its monopoly position
3. a developer alternative -- people might actually write software for a non-MS product.
Therefore, MS believes it can be harmed somehow if people can spend their money on alternatives, if people can buy a different O.S., or if software makers write code for a different O.S. Interestingly, Linux (in particular) and OSS software (in general) fit all three MS definitions of "threat": a free, non-MS O.S. that coders like because they can get access to the source code. Java has some of these qualities, despite not being open-source (yet?). MS equates "alternative" with "threat". Therefore, freedom of choice is a source of fear and loathing to MS. The idea that there may be zero (or negative!) costs associated with leaving MS and migrating to another platform scares the daylights out of MS. If we want to see OS/2 succeed, we must make and market OS/2 into a full-fledged, zero-relative-cost alternative to MS platforms.
However, other OSS process weaknesses provide an avenue for Microsoft to garner
advantage in key feature areas such as architectural improvements (e.g. storage+),
integration (e.g. schemas), ease-of-use, and organizational support.
Therefore, improving OS/2's storage, integration, ease-of-use, and technical support are vital. Lest we overlook IBM's recent contributions, note that Aurora has a 2 TB (terabyte!) storage boundary, LVM (Logical Volume Manager) to eliminate drive-letter problems during storage upgrades, and superb network management and connectivity. What we as OS/2 advocates need to focus on are the other two issues: installation as an ease-of-use issue, and a credible technical support framework.
Open Source Software
What is it?
Open Source Software (OSS) is software in which both source and binaries are
distributed or accessible for a given product, usually for free. OSS is often mistaken for
"shareware" or "freeware" but there are significant differences between these
licensing models and the process around each product.
What follows is a long explanation of various categories of software, which we shall bypass at this point.....
Later, the document resumes:
OSS is a concern to Microsoft for several reasons:
1. OSS projects have achieved "commercial quality"
A key barrier to entry for OSS in many customer environments has been its perceived lack
of quality. OSS advocates contend that the greater code inspection & debugging in OSS
software results in higher quality code than commercial software.
Recent case studies (the Internet) provide very dramatic evidence in customer's eyes that
commercial quality can be achieved / exceeded by OSS projects. At this time, however,
there is no strong evidence of OSS code quality aside from anecdotal.
Note the clever distinction here (which Linux advocate Eric Raymond missed in his analysis). "Commercial quality code" apparently involves "customer's eyes" (in Microsoft's own words) rather than any REAL code quality. In other words, to Microsoft and the software market in general, a software product has "commercial quality" if it has the "look and feel" of commercial software products. A product has commercial quality code if and only if there is a PUBLIC PERCEPTION that it is made with commercial quality code. This means that MS will take seriously any product that has an appealing, commercial-looking appearance because MS assumes -- rightly so -- that this is what the typical, uninformed consumer uses as the judgment benchmark for what is "good code."
Looking at it from a historical vantage point, MS broke off from IBM at exactly the point where IBM decided to go with WorkPlace Shell instead of the Windows 3.X GUI. (Bill Schindler wrote a superb chronological summary of these facts about two years ago in *Extended Attributes*.) OS/2 success depends greatly on getting the consumer to see and experience the WPS, which MS knows is clearly superior to any MS GUI (and, we believe, to any other GUI out there). As long as IBM wrote a "boring" OS/2 product with a "dull" command-line interface, MS was content to play along with them. As soon as the WPS became a reality, MS declared war on IBM and on OS/2.
What can we learn from this? That getting the WPS "look" in the hands of the consumer would have a significant positive impact on OS/2 sales and success. My own experience shows that consumers LOVE the WPS -- once they get a chance to try it.
2. OSS projects have become large-scale & complex
Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are
undertaking projects whose size & complexity had heretofore been the exclusive domain of
commercial, economically-organized/motivated development teams. Examples include the
Linux Operating System and Xfree86 GUI.
OSS process vitality is directly tied to the Internet to provide distributed development
resources on a mammoth scale. Some examples of OSS project size...
Size matters! If a "small" product or project succeeds, nobody cares outside the group of implementers. However, a "big" project takes on a sense of worth solely by virtue of its size. In other words, a 40-million-lines-of-code product is accompanied by a hokey sense of technological wonder, something akin to Carl Sagan's "billions and billions." To sell the public on a small, tightly-integrated technological wonder is extremely difficult. What we consider "bloat" is spun by the Microsoft minions to imply power, capability, and a sense of achievement or accomplishment. This is related to the cultural decadence of the first point; people who consider code to be "commercial quality" solely because of its attractive user interface will also be improperly impressed by the sheer size and girth of a product's code base. It will be harder to sell a "small" OS/2 than a "big" OS/2 -- after all, "thin clients" have been soundly rejected by most PC buyers, who believe "big is beautiful" and that needing more processing power and more storage makes a program *better* instead of *worse*. Shameful, isn't it?
We need OS/2 to achieve a "large" accomplishment of some sort. The fact that OS/2 did the Olympics well has not been celebrated enough. Perhaps Sydney 2000 will be our best chance (and maybe our last Olympic event, if the current contract is not unexpectedly renewed). Otherwise, we will have to find some other Everest-sized task for OS/2 to prove its mettle publicly.
3. OSS has a unique development process with unique strengths/weaknesses
The OSS process is unique in its participants' motivations and the
resources that can be brought to bear on problems. OSS,
therefore, has some interesting, non-replicable assets which should be...
An interesting piece of terminology -- "non-replicable assets" -- implies that Microsoft's modus operandi typically involves copying anything that others do. This is one aspect of the "virus model" that Microsoft uses; namely, to act like some kind of rapidly-mutating DNA that gobbles everything up, installs itself everywhere, and infects anyone or anything that it can convert into a "host". Microsoft is a copycat, not an innovator.
What can we learn from this? That any innovation we produce that has any significance to Microsoft will likely be rudely abducted, carelessly re-engineered, and ruthlessly commoditized by MS. What the Linux community does so well in response is to rev too fast for Microsoft to keep up! This is the most likely successful path, and we should emulate it. This is one of the key benefits of OSS that Microsoft fears; we should make it our weapon as well.
The document then explains the genesis of the OSS movement and some of its features, but we shall move ahead to the discussion of the process itself....
Open Source Process
Commercial software development processes are hallmarked by organization around
economic goals. However, since money is often not the (primary) motivation behind
Open Source Software, understanding the nature of the threat posed requires a deep
understanding of the process and motivation of Open Source development teams.
In other words, to understand how to compete against OSS, we must target a process
rather than a company.
As Eric Raymond notes here, MS "gets it" at least on this point: OSS is not "owned" by anyone. It is a culture, a process, and a "life form" that Microsoft's command-and-control copy-and-leverage model must compete with.
NOTE: since MS FUD can now be expected to attack the OSS process and community, we in the OS/2 development, advocacy, and user community may be able to operate under the "cover" of OSS. They will be taking a lot of flak, and it may be possible to position OS/2 as the "best mix" of OSS-style innovation and rapid improvements, combined with the ultra-careful methodology of IBM as the custodian of the base OS. "The thrill of OSS with the safety of IBM." This is not to FUD anyone, but merely to clarify the similarities and the differences between the OS/2 community and its OSS contemporaries.
Open Source Development Teams
Some of the key attributes of Internet-driven OSS teams:
Geographically far-flung. Some of the key developers of Linux, for example, are uniformly
distributed across Europe, the US, and Asia.
Note that this is also a key element of Java as well as (to a lesser extent) a province of OS/2's native-application community. Europe has long been a hotbed of OS/2 innovation and this will continue. Stirring up interest in other geographical areas could prove quite beneficial as well.
Large set of contributors with a smaller set of core individuals. Linux, once again, has had
over 1000 people submit patches, bug fixes, etc. and has had over 200 individuals directly
contribute code to the kernel.
Well, here the course certainly diverges. OS/2's kernel is under the watchful oversight of IBM, which contributes to the stability of the OS/2 development process. That is to say, maintaining compatibility with OS/2 is much easier than with Windows (because MS has so many conflicting versions of its stuff out there), and it may be easier than with Linux as well. I'd be interested to find out what kind of compatibility issues currently exist among the various Linux app development teams, and how these will be resolved. It may be that divergent kernels make localized compatibility solutions possible, but prevent a broader reach into the consumer market. On the other hand, that hasn't stopped MS. The OS/2 model of a steady kernel-API foundation combined with a dynamic development community seems to me to be the safe middle ground.
Not monetarily motivated (in the short run). These individuals are more like hobbyists
spending their free time / energy on OSS project development while maintaining other full
time jobs. This has begun to change somewhat as commercial versions of the Linux OS...
Here we have one of the key elements of Linux's success, whether or not MS or anyone else has noticed: you can't write great code if you don't have revenue, and you sure can't write great code for free unless somebody else is footing the bill. Every Linux coder, like every OS/2 coder, needs a job. In the case of OS/2, that job has generally been trying to sell what you code. The Linux community, however, is built on free code, which means that the MS tactic of shutting off software revenues to prevent an application from becoming self-supporting doesn't work. Essentially, the Linux community has developed sufficient non-Linux revenue to become self-supporting without needing to sell its Linux products. In order for MS to apply its usual revenue-squelching tactics, it would have to pressure the bosses who employ Linux people and try to get them all fired from their regular jobs, so they wouldn't have the necessary free time to code their apps. Don't think they won't try it on at least a small scale, particularly with NT's encroachment into the Unix space.
NOTE: OS/2 development can learn from this. To battle MS for market share and customer base requires a revenue stream more or less independent of the products you are pushing. To put it another way, non-OS/2 revenue must subsidize OS/2 development until a critical mass can be achieved. This critical mass is what Linux is building, and we need to have the same aim as well.
OSS Development Coordination
Communication -- Internet Scale
Coordination of an OSS team is extremely dependent on Internet-native forms of
collaboration. Typical methods employed run the full gamut of the Internet's...
24x7 monitoring by international subscribers
Hmmm, sounds like the OS/2 community, too, doesn't it? Never underestimate the Net's ability to collect "leftover" OS/2 users from around the world and assemble a "virtual community" that overcomes the geographical limitations of a low-OS/2-user-density in most geopolitical communities.
OSS projects the size of Linux and Apache are only viable if a large enough
community of highly skilled developers can be amassed to attack a problem.
Consequently, there is a direct correlation between the size of the project that OSS can
tackle and the growth of the Internet.
Application of this principle to OS/2 means we must get AS MANY OS/2 SUPPORTERS ON THE WEB as possible, and quickly. Building and improving this "virtual community" has been a key element in the survival and continued growth of OS/2 and OS/2 development. Keep up the good work, people.
In addition to the communications medium, another set of factors implicitly coordinate
the direction of the team.
Common goals are the equivalent of vision statements which permeate the distributed
decision making for the entire development team. A single, clear directive (e.g.
"recreate UNIX") is far more efficiently communicated and acted upon by a group
than multiple, intangible ones (e.g. "make a good operating system").
Okay, then we should set some concrete goals for OS/2, shouldn't we?
Precedence is potentially the most important factor in explaining the rapid and
cohesive growth of massive OSS projects such as the Linux Operating System.
Because the entire Linux community has years of shared experience dealing with
many other forms of UNIX, they are easily able to discern -- in a non-confrontational
manner -- what worked and what didn't.
In other words, Linux users are here categorized as typically experienced coders. OS/2's user base is more varied, including a lot of SOHO and small businesspeople and regular end-users. This means that improving OS/2 tends to focus more on usability issues and applications, and less on raw technical details. OTOH, we become more dependent on application vendors than the Linux community is, at a time when most appmakers have failed to capitalize on the OS/2 opportunity and stubbornly refuse to recognize the established OS/2 user base. (Being too cowardly to add OS/2 development to their stable, these commercial vendors wish it would just "go away" so their foolish obstinacy would somehow be proved "right.")
Well, we must make up for our generally less code-oriented user base by building better ties with the more open-minded app developers. VOICE has been particularly good at connecting users with developers from among the OS/2 community. Perhaps non-OS/2 developers could also be invited so they could get some positive feedback from people willing to pay for OS/2 versions of their products.
There weren't arguments about the command syntax to use in the text editor --
everyone already used "vi" and the developers simply parcelled out chunks of the
command namespace to develop.
Having historical, 20:20 hindsight provides a strong, implicit structure. In more
forward-looking organizations, this structure is provided by strong, visionary...
This line of reasoning seems to imply that the OSS community is made up of nothing more than a bunch of fix-it men who happen to have worked on the same line of appliances before. The Linux developers are thus painted with the broad brush of being mere "followers" and "technicians" lacking in vision and leadership. This kind of stereotyping is dangerous. MS has become so cocky that it feels it can cubbyhole any perceived enemies neatly into little boxes for precise counterattacks. This is also why MS wrongly believes that there is no longer a viable OS/2 community, because they foolishly believe that only IBMers ever liked it in the first place.
NatBro points out the need for a commonly accepted skillset as a pre-requisite for
OSS development. This point is closely related to the common-precedents
phenomenon. From his email:
A key attribute ... is the common UNIX/gnu/make skillset that OSS taps into and
reinforces. I think the whole process wouldn't work if the barrier to entry were
much higher than it is ... a modestly skilled UNIX programmer can grow into
doing great things with Linux and many OSS products. Put another way -- it's not
too hard for a developer in the OSS space to scratch their itch, because things
build very similarly to one another, debug similarly, etc.
Whereas precedents identify the end goal, the common skillsets attribute describes the
number of people who are versed in the process necessary to reach that end.
This characterization of OSS as a strictly UNIX-oriented phenomenon is interesting because, while the Unix/Linux origins of OSS are well-known, the Netscape and Java communities are also moving toward an open-source model. In addition, we might consider HTML the prime example of open source code, since anyone with a Web development tool that mirrors sites can open and read a website's HTML code and duplicate it -- without even the GNU license restrictions being involved. Focusing solely on attacking Linux, or even the broader Unix community, fails to take these other sprouting OSS variants into account. If OS/2 begins to combine some open-source features with its carefully-managed base kernel and APIs, the mix may exceed Microsoft's ability to model and attack.
The Cathedral and the Bazaar
A very influential paper by an open source software advocate -- Eric Raymond -- was
first published in May 1997 (http://www.redhat.com/redhat/cathedral-bazaar/).
Raymond's paper was expressly cited by (then) Netscape CTO Eric Hahn as a
motivation for their decision to release browser source code.
Raymond dissected his OSS project in order to derive rules-of-thumb which could be
exploited by other OSS projects in the future. Some of Raymond's rules include:
Every good work of software starts by scratching a developer's personal itch
This summarizes one of the core motivations of developers in the OSS process --
solving an immediate problem at hand faced by an individual developer -- this
has allowed OSS to evolve complex projects without constant feedback from a
marketing / support organization.
In other words, OSS is driven by making great products, whereas Microsoft is driven by focus groups, psychological studies, and marketing. As if we didn't know that already....
Good programmers know what to write. Great ones know what to rewrite (and reuse).
Raymond posits that developers are more likely to reuse code in a rigorous open
source process than in a more traditional development environment because they
are always guaranteed access to the entire source all the time.
So Microsoft does not let its developers access the source code as needed? Tsk, tsk. You can't trust anybody these days, can you???
Widely available open source reduces search costs for finding a particular code component.
``Plan to throw one away; you will, anyhow.''
Quoting Fred Brooks, ``The Mythical Man-Month'', Chapter 11. Because
development teams in OSS are often extremely far flung, many major
subcomponents in Linux had several initial prototypes followed by the selection
and refinement of a single design by Linus.
Treating your users as co-developers is your least-hassle route to rapid code
improvement and effective debugging.
Raymond advocates strong documentation and significant developer support for
OSS projects in order to maximize their benefits.
Code documentation is cited as an area which commercial developers typically
neglect which would be a fatal mistake in OSS.
Well, that's an intriguing idea -- the assumption that in a large organization, development teams will be around long enough that key individuals will be able to read the code sans documentation while DOJ investigators will not be able to do so. ;)
Rapid development has always implied a tradeoff between lines of operational code and lines of documentation. "Self-documenting code" has always been a sort of "holy grail" for development teams. Personally, I think that Java does pretty darn well in that respect. This could prove useful when competing with C++ for development-cycle reduction and out-innovating the Microsofties.
Release early. Release often. And listen to your customers.
This is a classic play out of the Microsoft handbook. OSS advocates will note,
however, that their release-feedback cycle is potentially an order of magnitude
faster than commercial software's.
The difference here is, in every release cycle Microsoft always listens to its MOST IGNORANT CUSTOMERS. This is the key to dumbing down each release cycle of software in its continuing push into the non-PC-using population. Linux and OS/2 developers, OTOH, tend to listen to their SMARTEST customers. This necessarily limits the initial appeal of the operating system, while enhancing its long-term benefits. Perhaps only a monopolist like Microsoft could get away with selling worse products each generation -- products focused so narrowly on the least-technical member of the consumer base that they necessarily sacrifice technical excellence. Linux and OS/2 tend to appeal to the customer who knows greatness when he or she sees it. The good that Microsoft does in bringing computers to the non-users is outdone by the curse it brings upon the experienced users, because its monopoly position tends to force everyone toward the lowest common denominator, not just the new users.
NOTE: This means that Microsoft does the "heavy lifting" of expanding the overall PC marketplace. The great fear at Microsoft is that somebody will come behind them and make products that not only are more reliable, faster, and more secure, but are also easy to use, fun, and make people more productive. That would mean that Microsoft had merely served as a "pioneer" and taken all the arrows in the back, while we who have better products become a "second wave" to "homestead" on Microsoft's "tamed territory." Well, sounds like a good idea to me.
Given a large enough beta-tester and co-developer base, almost every problem will
be characterized quickly and the fix obvious to someone.
This is probably the heart of Raymond's insight into the OSS process. He
paraphrased this rule as "debugging is parallelizable". More in-depth analysis...
Well, the win32os2 project might benefit a great deal from this. Sooner or later they will make a working converter, but OSS might be a way to get it sooner. Say, could that be the reason they joined up with the WINE folks???
Once a component framework has been established (e.g. key API's & structures
defined), OSS projects such as Linux utilize multiple small teams of individuals
independently solving particular problems.
Because the developers are typically hobbyists, the ability to `fund' multiple,
competing efforts is not an issue and the OSS process benefits from the ability to pick
the best potential implementation out of the many produced.
Note that this is very dependent on:
A large group of individuals willing to submit code
Alas, we need a larger group of OS/2 coders!!! The sheer size of the Unix base has allowed OSS to prosper there first. Perhaps the growth of Java may allow an OSS-style development community to benefit OS/2. Microsoft fought tooth and nail to keep OS/2 from reaching "critical mass" status. It currently has "survival mass" because of the Web, but it needs to grow further to reach "critical mass," a level of internal self-sufficiency.
A strong, implicit componentization framework (which, in the case of Linux was inherited
from UNIX architecture).
Well, we have a strong set of base APIs in OS/2, but I don't know if the level of componentization is there. Anybody out there know about this??
The core argument advanced by Eric Raymond is that unlike other aspects of software
development, code debugging is an activity whose efficiency improves nearly
linearly with the number of individuals tasked with the project. There are little/no
management or coordination costs associated with debugging a piece of open source
code -- this is the key `break' in Brooks' laws for OSS.
Interesting that "other aspects" besides debugging somehow don't have improved efficiency from parallel coding, according to this analysis. What "other aspects?" Apparently, this includes product specification, design, initial coding, and marketing. The costs here are supposedly due to "management and coordination costs." Yet the very Internet structure that makes OSS parallel debugging cost-effective can also apply to collaboration on each of these other steps!!! Microsoft apparently has missed that point -- or else they hope desperately that everyone else has missed it.
Raymond includes Linus Torvalds' description of the Linux debugging process:
My original formulation was that every problem ``will be transparent to
somebody''. Linus demurred that the person who understands and fixes the
problem is not necessarily or even usually the person who first characterizes it.
``Somebody finds the problem,'' he says, ``and somebody else understands it.
And I'll go on record as saying that finding it is the bigger challenge.'' But the
point is that both things tend to happen quickly.
``Debugging is parallelizable''. Jeff [Dutky <email@example.com>] observes
that although debugging requires debuggers to communicate with some
coordinating developer, it doesn't require significant coordination between
debuggers. Thus it doesn't fall prey to the same quadratic complexity and
management costs that make adding developers problematic.
So what this analysis seems to be saying is, OSS is only a threat during the debugging part of the development cycle. This implies that as long as no Linus-kin exist out there -- or if they can be kept out of the development stream -- MS can prevent OSS from succeeding by beating the system on the other parts of the development cycle. Personally, I think this is rubbish. The same communication path to a coordinating developer could also be derived for communication to a coordinating designer, a coordinating marketer, or any other sort of information-based activity. The key ingredient here is the collaborative effect of the Net itself, not the debugging process. Any edge gained in the debugging process via Internet collaboration will not be orders of magnitude greater than a similar edge in other parts of the development/marketing cycle.
What is therefore needed is a central coordinating body for these other stages of OS/2 product parallelization. Not a rah-rah TeamOS/2 type of coordination, but rather a serious, businesslike development community based on the Web and with specific goals and schedules.
One advantage of parallel debugging is that bugs and their fixes are found /
propagated much faster than in traditional processes. For example, when the
TearDrop IP attack was first posted to the web, less than 24 hours passed before the
Linux community had a working fix available for download.
A worthy goal -- 24-hour response time.
An extension to parallel debugging that I'll add to Raymond's hypothesis is "impulsive
debugging". In the case of the Linux OS, implicit to the act of installing the OS is the
act of installing the debugging/development environment. Consequently, it's highly
likely that if a particular user/developer comes across a bug in another individual's
component -- and especially if that bug is "shallow" -- that user can very quickly patch
the code and, via internet collaboration technologies, propagate that patch very
quickly back to the code maintainer.
Put another way, OSS processes have a very low entry barrier to the debugging
process due to the common development/debugging methodology derived from the...
Once again we see the implicit belief that only Unix-style programming can benefit from the OSS model. Perhaps a fatal oversight for Microsoft to make.
Any large scale development process will encounter conflicts which must be resolved.
Often resolution is an arbitrary decision in order to further progress the project. In
commercial teams, the corporate hierarchy + performance review structure solves this
problem -- How do OSS teams resolve them?
In the case of Linux, Linus Torvalds is the undisputed `leader' of the project. He's
delegated large components (e.g. networking, device drivers, etc.) to several of his
trusted "lieutenants" who further delegate, de facto, to a handful of "area" owners (e.g. ...)
OS/2 has, IMO, no such "undisputed leader," and IBM certainly does not qualify. Large organizations tend to blot out the individual creativity and vision needed to fill such a role. Anybody have a nomination for "codemeister" of the OS/2 community???
Other organizations are described by Eric Raymond:
Some very large projects discard the `benevolent dictator' model entirely. One
way to do this is turn the co-developers into a voting committee (as with
Apache). Another is rotating dictatorship, in which control is occasionally passed
from one member to another within a circle of senior co-developers (the Perl
developers organize themselves this way).
Okay, cool, let's consider these as viable options, too. Perhaps we should schedule a VOICE (http://www.os2voice.org/) chat on this topic?
This section provides an overview of some of the key reasons OSS developers seek to
contribute to OSS projects.
Solving the Problem at Hand
This is basically a rephrasing of Raymond's first rule of thumb -- "Every good work of
software starts by scratching a developer's personal itch".
Many OSS projects -- such as Apache -- started as a small team of developers setting
out to solve an immediate problem at hand. Subsequent improvements of the code
often stem from individuals applying the code to their own scenarios (e.g. discovering
that there is no device driver for a particular NIC, etc.)
Well, there's already quite a bit of that going on -- the digital camera project, for example -- and this sort of effort needs to be encouraged and rewarded.
The Linux kernel grew out of an educational project at the University of Helsinki.
Similarly, many of the components of Linux / GNU system (X windows GUI, shell
utilities, clustering, networking, etc.) were extended by individuals at educational institutions.
In the Far East, for example, Linux is reportedly growing faster than internet connectivity --
due primarily to educational adoption.
Universities are some of the original proponents of OSS as a teaching tool.
Research/teaching projects on top of Linux are easily `disseminated' due to the wide
availability of Linux source. In particular, this often means that new research ideas are first
implemented and available on Linux before they are available / incorporated into other platforms.
Prepare for a Microsoft assault on the university system, a la its recent attempted takeover of the state of Indiana.
The most ethereal, and perhaps most profound motivation presented by the OSS
development community is pure ego gratification.
In "The Cathedral and the Bazaar", Eric S. Raymond cites:
The ``utility function'' Linux hackers are maximizing is not classically economic,
but is the intangible of their own ego satisfaction and reputation among other hackers.
The classic "psychic income" of the economist.
And, of course, "you aren't a hacker until someone else calls you hacker"
Homesteading on the Noosphere
A second paper published by Raymond -- "Homesteading on the Noosphere"
(http://sagan.earthspace.net/~esr/writings/homesteading/), discusses the difference
between economically motivated exchange (e.g. commercial software development
for money) and "gift exchange" (e.g. OSS for glory).
"Homesteading" is acquiring property by being the first to 'discover' it or by being
the most recent to make a significant contribution to it. The "Noosphere" is loosely
defined as the "space of all work". Therefore, Raymond posits, the OSS hacker
motivation is to lay a claim to the largest area in the body of work. In other words, take
credit for the biggest piece of the prize.
Eric Raymond here notes that there is no implication of "biggest prize" in his writing, but that this is a Microsoft insertion that reflects their obsession with seeing winners and losers in all transactions -- even free ones!! The mutual respect among hackers in the Linux community is also to be found among members of the OS/2 community, which is a tremendous advantage over the cutthroat world of Windows. The public should be educated about this....
From "Homesteading on the Noosphere":
Abundance makes command relationships difficult to sustain and exchange
relationships an almost pointless game. In gift cultures, social status is determined
not by what you control but by what you give away.
In other words, OSS and the very nature of zero-cost information duplication mean that software competition is worse than zero-sum; it is totally counterproductive and fruitless. The fact that someone else has figured this out chaps Microsoft's hide to no end. That's because Microsoft uses this very argument to attempt to justify its brutally-obtained and illegally-preserved monopolies as "the natural order of things." There is a difference between a monopoly grown by code excellence in an open Petri dish (OSS), versus a monopoly grown by bullying, intimidation, and extortion (Windows). MS hopes nobody figures that part out. They hate the idea that somebody else might also "win" (prosper, succeed, become prominent) in addition to Microsoft. The fear is that unless everyone else loses, Microsoft will not win. Considering the shoddiness of their products and the carelessness of their debugging, that might just be a well-founded fear, however irrational it appears upon first glance.
Examined in this way, it is quite clear that the society of open-source hackers
is in fact a gift culture. Within it, there is no serious shortage of the 'survival
necessities' -- disk space, network bandwidth, computing power. Software is
freely shared. This abundance creates a situation in which the only available
measure of competitive success is reputation among one's peers.
Ergo, Microsoft will likely attempt to substitute something more substantive in this space (read: money), or else they may choose allies from within the OSS community in hopes of appealing to their assumed desire for prominence by promising them better tools, marketing help, etc. etc.
The article then quotes from an interview with Eric Raymond, which points to the "scarcity" of public honor and celebrity that occurs in most development environments, and that peer respect may be the substitute goal for which OSS participants often reach. Certainly a class of craftsmanlike workers wants to be respected for the quality of their work; this kind of respect is faked by Microsoft when it falsely claims to produce "great" products. In other words, among the "non-replicable assets" of the OSS community is the fact that if the emperor is running naked, anybody can look in the source code and find out the truth. Bad code cannot be totally hidden by clever brainwashing (a.k.a. "marketing").
This is a controversial motivation, and I'm inclined to believe that at some level,
altruism `degenerates' into a form of the Ego Gratification argument advanced by Raymond.
One smaller motivation which, in part, stems from altruism is Microsoft-bashing.
Interesting that a Microsoftie would recognize that "bashing" (exposing) Microsoft is actually a public good, a benefit to the entire community of man. There may be hope for a few of these folks after all. :)
A key threat in any large development team -- and one that is particularly exacerbated
by the process chaos of an internet-scale development team -- is the risk of code forking.
Code forking occurs when, over the normal push-and-pull of a development project,
multiple, inconsistent versions of the project's code base evolve.
In the commercial world, for example, the strong, singular management of the
Windows NT codebase is considered to be one of its greatest advantages over the
`forked' codebase found in commercial UNIX implementations (SCO, Solaris, IRIX, etc.).
Oops, they forget: a single codebase means that the same error can infect millions of victims simultaneously, while a more diverse codebase roughly corresponds to genetic diversity in the organic world. However, precise enumeration of the relative merits and tradeoffs involved has never been made. An interesting subject for a dissertation, perhaps.
In any case, OS/2 has a strong, singular codebase management within IBM -- and even more so, with the new Warp Server and Warp Client built on an identical SMP-based foundation. This merged path, combined with IBM's well-known aversion to risk, gives OS/2 an advantage over both the NT segment and the Unix segment. That is to say, world-class OS/2 code obviates the need for a diverse codebase, and also makes NT's unified codebase a non-issue.
Forking in OSS -- BSD Unix
Within OSS space, BSD Unix is the best example of forked code. The original BSD
UNIX was an attempt by U-Cal Berkeley to create a royalty-free version of the UNIX
operating system for teaching purposes. However, Berkeley put severe restrictions on
non-academic uses of the codebase.
In order to create a fully free version of BSD UNIX, an ad hoc (but closed) team of
developers created FreeBSD. Other developers at odds with the FreeBSD team for
one reason or another splintered the OS to create other variations (OpenBSD, NetBSD, etc.).
There are two dominant factors which led to the forking of the BSD tree:
Not everyone can contribute to the BSD codebase. This limits the size of the
effective "Noosphere" and creates the potential for someone else to credibly
claim that their forked code will become more dominant than the core BSD code.
Unlike GPL, BSD's license places no restrictions on derivative code. Therefore,
if you think your modifications are cool enough, you are free to fork the code,
charge money for it, change its name, etc.
Both of these motivations create a situation where developers may try to force a fork
in the code and collect royalties (monetary, or ego) at the expense of the collective effort.
Aha! As I suspected, MS is looking for a place to inject its monopoly-driven cash surplus to fork OSS code and "threaten" the credibility of the OSS process!!! A leopard never changes its spots....
(Lack of) Forking in Linux
In contrast to the BSD example, the Linux kernel code base hasn't forked. Some of the
reasons why the integrity of the Linux codebase has been maintained include:
Universally accepted leadership
Linus Torvalds is a celebrity in the Linux world and his decisions are considered
final. By contrast, a similar celebrity leader did NOT exist for the BSD-derived systems.
Perhaps Microsoft will buy BSD, then, and select a Benedict Arnold as a figurehead, puppet leader of a "new OSS movement??" Is there any precedent for this? Why of course -- they tried to "fork" Java!!
Linus is considered by the development team to be a fair, well-reasoned code
manager and his reputation within the Linux community is quite strong.
However, Linus doesn't get involved in every decision. Often, sub groups
resolve their -- often large -- differences amongst themselves and prevent code forking.
Therefore, thou shalt watch thy back, Mr. Torvalds. And while you're at it, choose a successor carefully. We don't know yet whether MS stoops to "active measures" or not.
Once again, the issue of a central coordinating leadership in the OS/2 community arises. There is no one person or group that everyone trusts and respects to manage the development process for OS/2 products -- not IBM, not Stardock, not Sundial -- despite the fact that each of these parties has clearly recognizable strengths and accomplishments. This lack of focus has been Team OS/2's undoing and we must address this ASAP. Nominations for benevolent dictator, anyone?
Open membership & long term contribution potential.
In contrast to BSD's closed membership, anyone can contribute to Linux and
your "status" -- and therefore ability to `homestead' a bigger piece of Linux -- is
based on the size of your previous contributions.
Indirectly this presents a further disincentive to code forking. There is almost no
credible mechanism by which the forked, minority code base will be able to
maintain the rate of innovation of the primary Linux codebase.
Well, the question for OS/2 is whether the "OS" part needs to be innovated quickly, or if just the apps, the drivers, the utilities, and other features need such a quick turnaround. I believe that, with OS/2's clear technical superiority over MS products (particularly NT), innovating the main codebase is not the issue. It's all the other things that have been the issue, particularly key apps.
GPL licensing eliminates economic motivations for code forking
Because derivatives of Linux MUST be available through some free avenue, it
lowers the long term economic gain for a minority party with a forked Linux tree.
In other words, only someone with Microsoft's cash supply can make a deliberate code fork work, even to the point of damaging the original tree.
The ideal situation, then, is to have a non-forkable code base in the OS (like OS/2 does, since it remains in IBM's hands), while having an OSS-style development community for the add-ons, the applications, and other trappings. The problem here is that there is not the outside-OS/2 revenue stream available as there is among the Linux coders; most OS/2 developers must "eat their seed corn" to survive. Anybody know how we can develop a financial support system a la the Linux community? Perhaps by having more members focused on the server side, where the real money is....
Look for MS to try something like that, once the price of Windows2000 is high enough that they no longer need Office as a cash cow.... Or maybe they will make Office the base upon which to develop OSS products for Windows. The purpose would be, of course, to eliminate the cash flow to all Windows developers outside of MS. This has been their plan all along; look how they give away an e-mail package and a PIM free in every box of Office97.
Forking the codebase also forks the "Noosphere"
Ego motivations push OSS developers to plant the biggest stake in the biggest
Noosphere. Forking the code base inevitably shrinks the space of
accomplishment for any subsequent developers to the new code tree.
This statement focuses too much on egotism. OS/2 development is far more focused on making something that works, just as OS/2 users choose OS/2 because it really works, not because of some ego trip.
Open Source Strengths
What are the core strengths of OSS products that Microsoft needs to be concerned about?
OSS Exponential Attributes
Like our Operating System business, OSS ecosystems have several exponential attributes:
OSS processes are growing with the Internet
The single biggest constraint faced by any OSS project is finding enough
developers interested in contributing their time towards the project. As an
enabler, the Internet was absolutely necessary to bring together enough people
for an Operating System scale project. More importantly, the growth engine for
these projects is the growth in the Internet's reach. Improvements in
collaboration technologies directly lubricate the OSS engine.
Put another way, the growth of the Internet will make existing OSS projects
bigger and will make OSS projects in "smaller" software categories become viable.
Interestingly, this is the same argument I am making about the OS/2 community -- we have the Web available to accumulate developer and user resources, essentially bypassing the MS-jammed retail distribution channel. However, the OS/2 user base is not growing at Internet-exponential rates, due to the fact that an OS-specific product requires that somebody change everybody's boot sector out there. The Web, like Java, runs on top of the OS and thus does not require dramatic rebuilding of people's hard drives. However, Linux is in the same position: Linux can only grow as the number of boot sectors containing Linux also grows. I believe there is much more at work here than just "Linux is free" as to why Linux grows with the Internet while OS/2 currently does not.
OSS processes are "winner-take-all"
Like commercial software, the most viable single OSS project in many
categories will, in the long run, kill competitive OSS projects and `acquire' their
IQ assets. For example, Linux is killing BSD Unix and has absorbed most of its
core ideas (as well as ideas in the commercial UNIXes). This feature confers
huge first mover advantages to a particular project.
Another departure point: Microsoft seems to think that forcing other cars off the racetrack and cannibalizing their pit crews is a natural consequence of the so-called "free market." Therefore, they believe that Linux will kill all the other Unixes before it threatens the rest of the software world. Notice I said threaten: never forget that Microsoft considers Linux a threat, not an alternative. This paragraph does more to expose their mindset than almost any other; it shows the MS mentality to be nothing more than a "law of the jungle" mentality on steroids -- or, as I prefer to call it, HYPERDARWINISM. Under this paradigm, the long-term outcome of all business activity is necessarily a single, global monopoly. (Also closely associated with this brain-dead paradigm is the notion that every new product is automatically better than any old one.) Microsoft believes that their monopolistic activities are nothing more than self-defense, because they believe that freedom of choice is literally impossible without strict government regulation to mandate alternative products.
Developers seek to contribute to the largest OSS platform
The larger the OSS project, the greater the prestige associated with contributing
a large, high quality component to its Noosphere. This phenomenon contributes
back to the "winner-take-all" nature of the OSS process in a given segment.
Once again, assuming that OSS is just another form of dog-eat-dog take-over-the-world-or-die-trying hyperdarwinism.
Larger OSS projects solve more "problems at hand"
The larger the project, the more development/test/debugging the code receives.
The more debugging, the more people who deploy it.
A positive feedback cycle. Every user becomes a potential enhancement agent and evangelist. This only works during the techno-geek phase of growth, however. We have seen with OS/2 how difficult it is to extend this growth pattern into the base of everyday PC users.
Binaries may die but source code lives forever
One of the most interesting implications of viable OSS ecosystems is long-term credibility.
Long-Term Credibility Defined
Long term credibility exists if there is no way you can be driven out of business in the
near term. This forces change in how competitors deal with you.
Note the terminology used here "driven out of business." MS believes that putting other companies out of business is not merely "collateral damage" -- a byproduct of selling better stuff -- but rather, a direct business goal. To put this in perspective, economic theory and the typical honest, customer-oriented businessperson will think of business as a STOCK CAR RACE -- the fastest car with the most skillful driver wins. Microsoft views business as a DEMOLITION DERBY -- you knock out as many competitors as possible, and try to maneuver things so that your competitors wipe each other out and thereby eliminate themselves. In a stock car race there are many finishers and thus many drivers get a paycheck. In a demolition derby there is just one SURVIVOR. Can you see why "Microsoft" and "freedom of choice" are absolutely in two different universes?
For example, Airbus Industries garnered initial long term credibility from explicit
government support. Consequently, when bidding for an airline contract, Boeing
would be more likely to accept short-term, non-economic returns when bidding
against Lockheed than when bidding against Airbus.
Loosely applied to the vernacular of the software industry, a product/process is
long-term credible if FUD tactics can not be used to combat it.
Microsoft's first line of defense. Why? Because software does not have any intrinsic value; it is merely magnetized spots on a disk. It has near-zero per-unit cost since it is infinitely replicable. Therefore, the value of software lies almost solely in the OPINIONS that people have about it, and in the culture and the business structure that comes to rely upon it. Therefore, FUD (Fear, Uncertainty, Deceit) is often the most effective way to damage a software product. It also happens to be the most cost-effective: talk is cheap.
To put it another way, look at the stock market. A stock has no intrinsic value (except in the case where dividends are paid). Therefore, most stock investment returns profit only if a majority of people bid up the price based on their OPINIONS of what the future value of the stock will be. FUD applied to the stock market can devalue a stock overnight. On the other hand, it's hard to FUD about food -- it has intrinsic value, as do most commodities and physical products.
To gain long-term credibility, OS/2 must gain an unassailable long-term commitment by a permanent segment of the economy (big banks, apparently). If enough big banks permanently standardize on OS/2 such that it becomes a permanent fixture of the banking community, it may indeed gain long-term credibility. The other way to do it is to simply continue growing the user base and acquire long-term commitments along the way.
OSS is Long-Term Credible
OSS systems are considered credible because the source code is available from
potentially millions of places and individuals.
This is the key element of the MS FUD attack against OS/2. Since only IBM has the source code, only IBM can make it prosper; therefore, the LTC (long-term credibility) of OS/2 is, according to MS, solely and directly dependent on the volume of pro-OS/2 talk from IBM mouths. Whether or not this is just a pile of horse puckey remains to be seen.
The likelihood that Apache will cease to exist is orders of magnitudes lower than the
likelihood that WordPerfect, for example, will disappear. The disappearance of
Apache is not tied to the disappearance of binaries (which are affected by purchasing
shifts, etc.) but rather to the disappearance of source code and the knowledge base.
Inversely stated, customers know that Apache will be around 5 years from now --
provided there exists some minimal sustained interest from its user/development community.
One Apache customer, in discussing his rationale for running his e-commerce site on
OSS stated, "because it's open source, I can assign one or two developers to it and
maintain it myself indefinitely."
To take advantage of this rationale, is IBM perhaps looking at Java as the replacement OS/2 API, and thus moving a key element of OS/2 into the domain of the OSS movement? Given IBM's boisterous support of Java, I believe this is true. To put it another way, IBM believes significant LTC may rub off from Java onto OS/2, if OS/2 is recognized as a premier Java platform.
Lack of Code-Forking Compounds Long-Term Credibility
The GPL and its aversion to code forking reassures customers that they aren't riding
an evolutionary `dead-end' by subscribing to a particular commercial version of Linux.
The "evolutionary dead-end" is the core of the software FUD argument.
Which is why it hardly makes sense to learn Windows NT right now, since it is an evolutionary dead-end, right Bill? Actually, there's quite a bit of truth to that statement. However, the more important non-forking issues involve the API. A non-forked Win32 is vital for MS to escape this fate themselves. The fact that there have been no significant attempts by competitors to contaminate the Win32 API shows that Microsoft's claims of a "conspiracy" against them are totally bogus. It is Microsoft who has worked to fork the Java API and (to a lesser extent) to create subtle forks in various Windows APIs that other companies successfully license and/or emulate. Remember Win32s and its many varieties?
Linux and other OSS advocates are making a progressively more credible argument
that OSS software is at least as robust -- if not more so -- than commercial alternatives.
The Internet provides an ideal, high-visibility showcase for the OSS world.
In particular, larger, more savvy, organizations who rely on OSS for business
operations (e.g. ISPs) are comforted by the fact that they can potentially fix a
work-stopping bug independent of a commercial provider's schedule (for example,
UUNET was able to obtain, compile, and apply the teardrop attack patch to their
deployed Linux boxes within 24 hours of the first public attack).
A more mature, prosperous OS/2 development community would provide that sort of advantage also, but not to the extent that OSS provides. A clear point for Linux.
Alternatively stated, "developer resources are essentially free in OSS". Because the
pool of potential developers is massive, it is economically viable to simultaneously
investigate multiple solutions / versions to a problem and choose the best solution in the end.
For example, the Linux TCP/IP stack was probably rewritten 3 times. Assembly code
components in particular have been continuously hand tuned and refined.
IBM has similarly poured massive resources into OS/2 and improved the code significantly over several development generations. The key difference is, we on the "outside" don't get a view of that incremental improvement process and the key decision points. Thus, FUD is more effective against OS/2 because the development of better and better OS/2 code takes place in "secret." The wiseguys in the PC media who laugh at OS/2 thus are showing the depth of their ignorance. OS/2 has come a long way since these pundits stopped paying attention to the leading edge and settled for harping on the boring mediocrity of the monopoly marketplace.
OSS = `perfect' API evangelization / documentation
OSS's API evangelization / developer education is basically providing the developer
with the underlying code. Whereas evangelization of API's in a closed source model
basically defaults to trust, OSS API evangelization lets the developer make up his own mind.
NatBro and Ckindel point out a split in developer capabilities here. Whereas the
"enthusiast developer" is comforted by OSS evangelization, novice/intermediate
developers --the bulk of the development community -- prefer the trust model +
organizational credibility (e.g. "Microsoft says API X looks this way")
Better pour another bucket of coins into the meter, Bill. Your credibility about where your code is going has just about expired. Microsoft has proven itself brilliant at having no partners -- just the ability to transform co-conspirators into victims.
Once again, exposing more and more of the Java API will make OS/2 more and more credible in the sense of developer evangelization versus trust. No programmer wants to have to trust MS, IBM, or Sun any more than absolutely necessary.
Strongly componentized OSS projects are able to release subcomponents as soon as
the developer has finished his code. Consequently, OSS projects rev quickly & often.
Note that this ONLY works if the debugging process is of the same order of magnitude in terms of speed and parallelization. Releasing faster and faster will only drive people away if the releases come out faster than they can be fixed!!!
Open Source Weaknesses
The weaknesses in OSS projects fall into 3 primary buckets:
The biggest roadblock for OSS projects is dealing with exponential growth of
management costs as a project is scaled up in terms of rate of innovation and size. This
implies a limit to the rate at which an OSS project can innovate.
Starting an OSS project is difficult
From Eric Raymond:
It's fairly clear that one cannot code from the ground up in bazaar style. One can
test, debug and improve in bazaar style, but it would be very hard to originate a
project in bazaar mode. Linus didn't try it. I didn't either. Your nascent developer
community needs to have something runnable and testable to play with.
Raymond's argument can be extended to the difficulty in starting/sustaining a project
if there are no clear precedent / goal (or too many goals) for the project.
Obviously, there are far more fragments of source code on the Internet than there are
OSS communities. What separates "dead source code" from a thriving bazaar?
One article (http://www.mibsoftware.com/bazdev/0003.htm) provides the following
"....thinking in terms of a hard minimum number of participants is misleading.
Fetchmail and Linux have huge numbers of beta testers *now*, but they
obviously both had very few at the beginning.
What both projects did have was a handful of enthusiasts and a plausible
promise. The promise was partly technical (this code will be wonderful with a
little effort) and sociological (if you join our gang, you'll have as much fun as
we're having). So what's necessary for a bazaar to develop is that it be credible
that the full-blown bazaar will exist!"
This roughly approximates the reason that for-profit coders embark on a particular course: there must be a credible argument that a customer base (meaning: people running an OS that can run your product) will exist in the future, or at least far enough into the future to make a healthy return on investment. The difference between a profiteer and an enthusiast often involves whether they care about the really long-term health of what they produce.
I'll posit that some of the key criteria that must exist for a bazaar to be credible include:
Large Future Noosphere -- The project must be cool enough that the intellectual reward
adequately compensates for the time invested by developers. The Linux OS excels in this regard.
OS/2 has a friendly community of mutual respect -- but the size of said community needs to be increased. This is the "critical mass" or "firestorm" concept: a growing flame generates its own fuel flow by pulling in resources from its surroundings.
Scratch a big itch -- The project must be important / deployable by a large audience of
developers. The Apache web server provides an excellent example here.
In other words, a "raison d'etre" or a reason WHY to have OS/2. Being "not Microsoft" is one factor, but this is far from being enough to build a platform. Windows' "why factor" often involves a perceived need to run MS-Office, based on its large installed base. What sort of large-scale "solution" can OS/2 provide better than other platforms? This too may require finding a "killer app."
Solve the right amount of the problem first -- Solving too much of the problem
relegates the OSS development community to the role of testers. Solving too little before
going OSS reduces "plausible promise" and doesn't provide a strong enough component
framework to efficiently coordinate work.
Yes, which is why MS has not built MS-Office into the OS and charged $200 for an OEM copy of the resultant combo. By giving developers the impression that they can achieve long-term success by coding for Windows, MS keeps stringing them along. Putting too little into the OS is why people usually don't accept DOS as a "modern" product.
On the OS/2 side of things, there are plenty of opportunities in the areas of scanner support, digital camera support, consumer devices, etc. IBM has determined to stay out of these areas because they are not aimed at its core customer base. This leaves a wide-open field for OS/2 developers to plow. Rather than wait timidly for IBM to add such features, and then complain about IBM not doing anything, OS/2 developers who take the bull by the horns and build these solutions themselves gain the "first mover" advantage.
When describing this problem to JimAll, he provided the perfect analogy of "chasing
tail lights". The easiest way to get coordinated behavior from a large, semi-organized
mob is to point them at a known target. Having the taillights provides concreteness to
a fuzzy vision. In such situations, having a taillight to follow is a proxy for having
strong central leadership.
Of course, once this implicit organizing principle is no longer available (once a
project has achieved "parity" with the state-of-the-art), the level of management
necessary to push towards new frontiers becomes massive.
As Eric Raymond points out, this is balderdash. MS builds an entry barrier ON PURPOSE once they have achieved critical mass in a particular market segment, so that nobody can play the "fast follower" game on them the way MS itself has done to others.
Furthermore, he points out that MS cannot grasp on a gut emotional level the concept of developer-driven code evolution -- not with the MS command-and-control management style that was exposed in the book Showstopper!. This was a fine exposé of early Windows NT management under the crude dictatorship of David Cutler.
This is possibly the single most interesting hurdle to face the Linux community now
that they've achieved parity with the state of the art in UNIX in many respects.
Still more bogus logic. The web is about decentralized origination and distribution of information; OSS is about decentralized origination and distribution of information-based solutions. Microsoft, being a proponent of tight central control of information and manic possessiveness of the distribution channel to prevent the growth of superior alternatives, cannot accept the idea that COMPUTER PEOPLE CAN THINK FOR THEMSELVES. The idea that programmers actually have a mind of their own is anathema to the Microsoft management.
IBM seems to find such freethink merely inconvenient. As for Microsoft, they recognize it is a clear threat to their hegemony.
Another interesting thing to observe in the near future of OSS is how well the team is
able to tackle the "unsexy" work necessary to bring a commercial grade product to market.
In the operating systems space, this includes small, essential functions such as power
management, suspend/resume, management infrastructure, UI niceties, deep Unicode support, etc.
For Apache, this may mean novice-administrator functionality such as wizards.
I would be surprised if the win32-os2 project was not some of the "least-sexy" work on the planet right now. But it is being done, and done without any centralized controlling monolith like MS dictating to the developers to keep stroking.
Integrative work across modules is the biggest cost encountered by OSS teams. An
email memo from Nathan Myhrvold in 5/98 points out that of all the aspects of
software development, integration work is most subject to Brooks' laws.
Up till now, Linux has greatly benefited from the integration / componentization
model pushed by previous UNIX's. Additionally, the organization of Apache was
simplified by the relatively simple, fault tolerant specifications of the HTTP protocol
and UNIX server application design.
Future innovations which require changes to the core architecture / integration model
are going to be incredibly hard for the OSS team to absorb because it simultaneously
devalues their precedents and skillsets.
In other words, MS here falls once again into the "Linux people are just code debuggers" mentality. New products and features arise at a dramatic rate in the Linux world, and will continue to do so -- even at the core OS level. This is not like the MS model in which they could not throw away their sloppy DOS foundation but instead were forced to build a 32-bit platform on a 16-bit base (with the obvious poor results).
These are weaknesses intrinsic to OSS's design/feedback methodology.
One of the keys to the OSS process is having many more iterations than commercial
software (Linux was known to rev its kernel more than once a day!). However,
commercial customers tell us they want fewer revs, not more.
Then why don't you LISTEN, Bill? Oh, I forgot, you must keep the cash cows milked.
The Linux OS is not developed for end users but rather for other hackers. Similarly,
the Apache web server is implicitly targeted at the largest, most savvy site operators,
not the departmental intranet server.
The key thread here is that because OSS doesn't have an explicit marketing / customer
feedback component, wishlists -- and consequently feature development -- are
dominated by the most technically savvy users.
One thing that development groups at MSFT have learned time and time again is that
ease of use, UI intuitiveness, etc. must be built from the ground up into a product and
can not be pasted on at a later time.
Balderdash! Windows is a GUI "pasted on" to a 16-bit legacy DOS core. The same GUI is "pasted on" a hijacked legacy OpenVMS core and called NT. Long file names in Windows are merely 8.3-character extensions "pasted on" over a mutant DOS FAT system. Time and again, MS will "paste on" tons of goofy distractions on top of their legacy products and falsely call this "innovation." This paragraph can only be taken as an admission that MS realizes their products are severely lacking in ease-of-use, user-friendliness, etc. and that they are constantly patching over these weaknesses with ever-more-infantile nonsense like the cartoon "helpers" and other childish trinkets.
On the other hand, one of the strengths of OS/2's WorkPlace Shell is that it is truly object-oriented to the core and not merely a pretty face. Yet the OS/2 design is modular enough to allow third-party WPS enhancements (such as Object Desktop and Candy Barz) or even WPS replacements. IBM "got it right" with WPS, and MS knows it. This is why MS could not allow the WPS to gain significant market share, for then the magnitude of OS/2's superiority over MS products would have been obvious to all.
The writer in this case is correct about one thing -- Linux (like OS/2) has its code evolution driven by the technically savvy, while MS finds it driven by the least technical members of its audience. The result for MS is not a product that is "user-friendly," but rather one that is merely "visually appealing." That is to say, Windows is designed to maximize the APPEARANCE of being user-friendly, not necessarily the REALITY of it. In other words, Windows is designed mostly just to market itself.
Eric Raymond's powerful, succinct summary of this situation follows:
Programs built this way look user-friendly at first sight, but turn out to be huge time
and energy sinks in the longer term. They can only be sustained by carpet-bomb
marketing, the main purpose of which is to delude users into believing that (a) bugs
are features, or that (b) all bugs are really the stupid user's fault, or that (c) all bugs will
be abolished if the user bends over for the next upgrade. This approach is
The other way is the Unix/Internet/Web way, which is to separate the engine (which
does the work) from the UI (which does the viewing and control). This approach
requires that the engine and UI communicate using a well-defined protocol. It's
exemplified by browser/server pairs -- the engine specializes in being an engine, and
the UI specializes in being a UI.
With this second approach, overall complexity goes down and reliability goes up.
Further, the interface is easier to evolve/improve/customize, precisely because it's not
tightly coupled to the engine. It's even possible to have multiple interfaces tuned to the needs of different users.
Finally, this architecture leads naturally to applications that are enterprise-ready --
that can be used or administered remotely from the server. This approach works -- and
it's the open-source community's natural way to counter Microsoft.
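The engine/UI split Raymond describes is easy to see in miniature. The toy sketch below is mine, not from the memo or from any real product; all names are illustrative. The "engine" speaks a trivial line-oriented protocol, and two completely different UIs sit on top of it without touching engine code -- which is exactly why multiple interfaces become cheap under this architecture:

```python
def engine(request: str) -> str:
    """The engine: knows how to do the work, nothing about presentation.
    Protocol: 'COUNT <text>' -> 'OK <n>' (word count), anything else -> 'ERR unknown'."""
    parts = request.split(" ", 1)
    if parts[0] == "COUNT" and len(parts) == 2:
        return f"OK {len(parts[1].split())}"
    return "ERR unknown"

def terse_ui(text: str) -> str:
    """One UI: minimal output, suitable for scripts. Talks only the protocol."""
    status, _, value = engine(f"COUNT {text}").partition(" ")
    return value if status == "OK" else "?"

def friendly_ui(text: str) -> str:
    """Another UI: verbose output for humans. Same engine, same protocol."""
    status, _, value = engine(f"COUNT {text}").partition(" ")
    return f"Your text has {value} words." if status == "OK" else "Sorry, I didn't understand."

print(terse_ui("hello brave new world"))     # -> 4
print(friendly_ui("hello brave new world"))  # -> Your text has 4 words.
```

Note that replacing or adding a UI requires no change to the engine, and the engine can be tested, administered, or run remotely with no UI at all -- the enterprise-readiness point Raymond makes next.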
Brilliant. Yes, one main reason for the massive delays in the next NT (Windows 2000, or whatever they eventually call it) is its monolithic structure. Nick Petreley's "The Last Ten Minutes" article brutally dashes the hopes of any MS proponents who believe that this architecture can result in an enterprise-class product.
With OS/2, we have a product that does not communicate between UI and core using an "open protocol," but rather through a well-defined set of APIs. This is different from the Linux structure in that it makes design of multiple UIs for the same base OS relatively easy on the desktop side (which IBM cashes in on quite regularly with their enterprise customers), but does not appear to be as compartmentalized as the Unix family. In other words, writing alternative UIs for OS/2 is an art, but Linux can be expected to sprout dozens of them due to the open nature of the communication protocol and the code itself.
Finally, there is a significant drawback to this multi-UI approach, which MS realizes and thus has outlawed for its OEM preload vassal companies: it is hard to market one product to a large audience if that one product has many faces. Consumers buy based first on appearance. Ideally, then, there should be a set of common UI components specified by the OSS community if they want Linux to spread into the "mainstream." There needs to be enough similarity among the various Linux UIs so that they are easily recognized as being of the Penguin family. Say, perhaps a Penguin logo in the top left corner, where Apple puts its fruit moniker??
The interesting trend to observe here will be the effect that commercial OSS providers
(such as RedHat in Linux space, C2Net in Apache space) will have on the feedback cycle.
How can OSS provide the service that consumers expect from software providers?
Product support is typically the first issue prospective consumers of OSS packages
worry about and is the primary feature that commercial redistributors tout.
However, the vast majority of OSS projects are supported by the developers of the
respective components. Scaling this support infrastructure to the level expected in
commercial products will be a significant challenge. There are many orders of
magnitude difference between users and developers in IIS vs. Apache.
Hence, the continued growth of stable, large-scale support structures like Indelible Blue's recent fee-based arrangement is vital for OS/2. Having online support via newsgroups and discussion lists is fine, but it is not the enterprise-class, publicly-visible support system that is necessary to satisfy the emotional concerns of most decisionmakers. To put it another way, "IBM is too hard, discussion lists are too soft." The small business customer needs something "in between," and the consumer needs something more accessible.
For the short-medium run, this factor alone will relegate OSS products to the top tiers
of the user community.
As has happened to some extent with OS/2. It is still considered a "power user" tool.
A very sublime problem which will affect full scale consumer adoption of OSS
projects is the lack of strategic direction in the OSS development cycle. While
incremental improvement of the current bag of features in an OSS product is very
credible, future features have no organizational commitment to guarantee their development.
Same bogus argument as before, people. OSS is inherently a "grass-roots" design process. Instead of "listening to our customers" (a nice MS slogan, if it were true), OSS is directly designed by them -- orders of magnitude more effective feedback.
This is where the non-OSS nature of OS/2 hurts us. We can design lots of nice features that rely on the OS, but the innards remain closed to most coders. We want new features and fixes at the OS level, but we must petition IBM, similar to the way Windows users must petition MS for improvements. We either need OS/2 to become open-source (not likely), or else a healthy communication channel between OS/2 users/developers and IBM management must be built. That only will become "likely" if our end of the channel becomes the one thing IBM listens to: enterprise-cash rich. In other words, we may need to form alliances with large IBM customers to use them as feedback channels into IBM's leadership. Or else we could just find some venture capital and make enterprise inroads ourselves (not likely!).
This lack of a good feedback channel to IBM is where OS/2, like Windows, is at a disadvantage outside of its core user base.
What does it mean for the Linux community to "sign up" to help build the Corporate
Digital Nervous System? How can Linux guarantee backward compatibility with
apps written to previous API's? Who do you sue if the next version of Linux breaks
some commitment? How does Linux make a strategic alliance with some other entity?
"Digital Nervous System." Bah! You know very well this is an attempt to hijack the letters "D.N.S." and redefine them, Bill. Cut the malarkey. And didn't my FUD-o-meter just go off-scale with that last set of questions?
Open Source Business Models
In the last 2 years, OSS has taken another twist with the emergence of companies that
sell OSS software and, more importantly, hire full-time developers to improve the
code base. What's the business model that justifies these salaries?
In many cases, the answers to these questions are similar to "why should I submit my
protocol/app/API to a standards body?"
Why? Because you believe your innovation is too important to keep a secret. Because you want to enhance the degree of trust that others have in the long-term viability of your product. Because you are not ashamed of your design. There are three reasons why MS shuns standards bodies -- because it thrives on secrecy to hide its clever manipulation of APIs; because it does not want to rely on outsiders for the long-term viability of its products (MS prefers to let monopoly market share do that); and because its code design is a laughingstock.
The vendor of OSS-ware provides sales, support, and integration to the customer.
Effectively, this transforms the OSS-ware vendor from a package goods manufacturer
into a services provider.
You just answered your own question, buddy. Services are always a high-margin, low-infrastructure business. Giving away the OS makes perfect sense when the returns are so lucrative. IBM should have done this, too.
Loss Leader -- Market Entry
The Loss Leader OSS business model can be used for two purposes:
Jumpstarting an infant market
Breaking into an existing market with entrenched, closed-source players
Many OSS startups -- particularly those in Operating Systems space -- view funding
the development of OSS products as a strategic loss leader against Microsoft.
Linux distributors, such as RedHat, Caldera, and others, are expressly willing to fund
full time developers who release all their work to the OSS community. By
simultaneously funding these efforts, Red Hat and Caldera are implicitly colluding
and believe they'll make more short term revenue by growing the Linux market rather
than directly competing with each other.
IBM has decided to do this with Java, not with native OS/2 development.
An indirect example is O'Reilly & Associates' employment of Larry Wall -- "leader"
and full time developer of PERL. The #1 publisher of PERL reference books, of
course, is O'Reilly & Associates.
For the short run, especially as the OSS project is at the steepest part of its growth
curve, such investments generate positive ROI. Longer term, ROI motivations may
steer these developers towards making proprietary extensions rather than releasing
them back to the community.
FUD. This line of thinking shows that MS hopes that its OSS competition only uses the OSS model for market-entry and market-development phases of growth, not as a long-term strategic model. In other words, "OSS is no different from the way we do business, it's just a different form of lowered entry cost." The writer seems to imply that the risk of lost credibility will not impact an OSS company since lost credibility has not harmed MS due to its monopoly position. This is a subtle apples-to-oranges comparison and is therefore bogus.
Commoditizing Downstream Suppliers
This is very closely related to the loss leader business model. However, instead of
trying to get marginal service returns by massively growing the market, these
businesses increase returns in their part of the value chain by commoditizing their downstream suppliers.
The best examples of this currently are the thin server vendors such as Whistle
Communications, and Cobalt Micro who are actively funding developers in SAMBA
and Linux respectively.
No, Bill. The best example of commoditizing downstream suppliers is -- YOU!!! MS uses restrictive preload agreements to prevent PC differentiation at the software level by its "downstream supplier channel" -- the preload OEMs. MS enjoys the dramatically increased leverage it has over these OEMs as hardware prices fall, making the profit margin smaller than the OEM cost of the operating system itself!
Both Whistle and Cobalt generate their revenue on hardware volume. Consequently,
funding OSS enables them to avoid today's PC market where a "tax" must be paid to
the OS vendor (NT Server retail price is $800 whereas Cobalt's target MSRP is
Yes, you admit that your OS price is a monopoly-based "tax" on the price of PCs. Mark this paragraph in red and file it under "People's Exhibit #1" at the DOJ trial.
The earliest Apache developers were employed by cash-strapped ISPs and ICPs.
Another, more recent example is IBM's deal with Apache. By declaring the HTTP
server a commodity, IBM hopes to concentrate returns in the more technically arcane
application services it bundles with its Apache distribution (as well as hope to reach
Apache's tremendous market share).
First Mover -- Build Now, $$ Later
One of the exponential qualities of OSS -- successful OSS projects swallow less
successful ones in their space -- implies a pre-emption business model whereby
investing directly in OSS today, they can pre-empt / eliminate competitive projects
later -- especially if the project requires API evangelization. This is tantamount to
seizing a first mover advantage in OSS.
There's that same bogus assumption -- that personal preference and freedom of choice are non-operational in software markets. This is a deep pile of horse puckey that MS has waded into. Assuming that file interoperability is not an issue, some people will always prefer WordPerfect over Word, and some people will always prefer 1-2-3 over Excel. Whether they can access the source code is immaterial. MS might just as well try to argue that because Toyota's Camry is the best-selling car in the U.S., the Honda Accord is doomed. This would not be true no matter how "open" the design process became for cars.
In addition, the developer scale, iteration rate, and reliability advantages of the OSS
process are a blessing to small startups who typically can't afford a large in-house development staff.
Examples of startups in this space include SendMail.com (making a commercially
supported version of the sendmail mail transfer agent) and C2Net (makes commercial
and encrypted Apache).
Notice that no case of a successful startup originating an OSS project has been
observed. In both of these cases, the OSS project existed before the startup was
founded.
Below your radar, Bill. "Successful" probably requires a million dollars annual gross in order to be a valid blip on the MS radar.
Sun Microsystems has recently announced that its "JINI" project will be provided via
a form of OSS and may represent an application of the pre-emption doctrine.
The next several sections analyze the most prominent OSS projects including Linux,
Apache, and now, Netscape's OSS browser.
A second memo titled "Linux OS Competitive Analysis" provides an in-depth review
of the Linux OS. Here, I provide a top-level summary of my findings in Linux.
What follows is a summary of the Linux OS and its attributes, which we shall bypass for the moment....
The memorandum resumes:
Linux is a short/medium-term threat in servers
The primary threat Microsoft faces from Linux is against NT Server.
Linux's future strength against NT server (and other UNIXes) is fed by several key
Linux uses commodity PC hardware and, due to OS modularity, can be run on smaller
systems than NT. Linux is frequently used for services such as DNS running on old 486's in a closet.
Due to its UNIX heritage, Linux represents a lower switching cost for some organizations
UNIX's perceived Scalability, Interoperability, Availability, and Manageability (SIAM)
advantages over NT.
Merely "perceived?" No, they are REAL advantages over NT. The writer cannot safely admit that. He has a family to feed, no doubt.
Linux can win as long as services / protocols are commodities
Linux is unlikely to be a threat on the desktop
Linux is unlikely to be a threat in the medium-long term on the desktop for several reasons:
Poor end-user apps & focus. OSS development process are far better at solving
individual component issues than they are at solving integrative scenarios such as
end-to-end ease of use.
"Poor" is relative. If the writer means "no MS Office or Frontpage," then of course that is true. If the writer means "no apps," then this is just FUD. That's the lie they used against OS/2, and MS is apparently going back to the same well.
As for ease-of-use, the WPS is far superior to anything else. Having used Mac, Windows, and DesqView, I believe I can honestly state that. Gnome and KDE in the Linux world are probably somewhere between DesqView and Windows at this stage. They are sure to improve. Then there's always the possibility of WPS for Java or even WPS for Linux.
Switching costs for desktop installed base. Switching desktops is hard and a
challenger must be able to prove a significant marginal advantage. Linux's process is more
focused on second-mover advantages (e.g. copying what's been proven to work) and is
therefore unlikely to provide the first-mover advantage necessary to provide switching incentives.
The difficulty for OS/2 to make inroads into the Windows 3.1 installed base is proof of the first-mover advantage for Windows. However, there is also a first-mover DISadvantage -- you can become a whipping boy if your products stink. Which is probably a major reason why MS is killing the NT line and trying to rename it Windows 2xxx.
UNIX heritage will slow encroachment. Ease of use must be engineered from the
ground up. Linux's hacker orientation will never provide the ease-of-use requirements of
the average desktop user.
Bogus. Windows' heritage is DOS. OS/2 was originally a command-line-only system. The writer is merely parroting the MS party line, exposing the deep denial and self-delusion present at MS. They want to forget their past by profusely denying it. Linux will get easier. Windows cannot, because its shoddy foundation makes it inherently unpredictable. IMO, reliable, repeatable responses are the foundation of an easy-to-use system. This is one reason why ATMs are considered easy to use, despite the absence of goofy cartoons.
In addition to attacking the general weaknesses of OSS projects (e.g. Integrative /
Architectural costs), some specific attacks on Linux are:
All the standard product issues for NT vs. Sun apply to Linux.
Fold extended functionality into commodity protocols / services and create new protocols
That only helps MS if these new protocols are Windows-only protocols (if not at first, then later). Bill, that's illegal. It's called "monopoly maintenance." Go directly to jail. Do not pass Go. Do not collect two billion dollars.
Linux's homebase is currently commodity network and server infrastructure. By
folding extended functionality (e.g. Storage+ in file systems, DAV/POD for
networking) into today's commodity services, we raise the bar & change the
rules of the game.
In other words, if you can't buy the other guy's property, move the boundary and then rewrite history. <YAWN> This is so predictable, it's pathetic. This is why the OS monopoly must be broken, because it is like playing every game on the other team's home field. They can shade the boundaries, trim the grass, water the sod, and pull out the Zamboni every time it snows. That is not a "free market," it is a rigged market. Every new conquest becomes a new "home field advantage" for the next round of leverage. If MS can hijack the open standards of the Internet and replace them with proprietary MS-only protocols, they will then use that position as leverage to take over something else.
The point must be hammered home again and again: stick with non-MS, open standards, and then NOBODY can pull the rug out from beneath you. A more-open Java, open Netscape, and open Web standards are vital. It may be necessary to promote Linux simply because it means that you never have to worry about proprietary leverage by anybody, no matter how successful they are. Wouldn't an open-source version of OS/2 be nice right about now? If it ever does happen, it probably won't be for another ten years, though. And MS has some code in there that they can use as legal leverage to prevent it.
As long as Linux keeps MS from taking over the Web and other areas of the market, a non-OSS OS/2 can still be a viable, successful platform.
In an attempt to renew its credibility in the browser space, Netscape has recently
released and is attempting to create an OSS community around its Mozilla source code.
Organization & Licensing
Netscape's organization and licensing model is loosely based on the Linux
community & GPL with a few differences. First, Mozilla and Netscape Communicator
are 2 codebases with Netscape's engineers providing synchronization.
Mozilla = the OSS, freely distributable browser
Netscape Communicator = Branded, slightly modified (e.g. homepage default is
set to home.netscape.com) version of Mozilla.
Unlike the full GPL, Netscape reserves the final right to reject / force modifications
into the Mozilla codebase and Netscape's engineers are the appointed "Area
Directors" of large components (for now).
Capitalize on Anti-MSFT Sentiment in the OSS Community
Relative to other OSS projects, Mozilla is considered to be one of the most direct,
near-term attacks on the Microsoft establishment. This factor alone is probably a key
galvanizing factor in motivating developers towards the Mozilla codebase.
The writer doesn't bother speculating why anyone would be "anti-MS" and where that sentiment would come from. It's merely the same sentiment that you have toward the schoolyard bully after a few months of his petty terrorism.
The availability of Mozilla source code has renewed Netscape's credibility in the
browser space to a small degree. As BharatS points out in
"They have guaranteed by releasing their code that they will never disappear
from the horizon entirely in the manner that Wordstar has disappeared. Mozilla
browsers will survive well into the next 10 years even if the user base does shrink."
Suppose IBM released the source code just for the WorkPlace Shell. That would be a nice little boost of LTC.
Scratch a big itch
The browser is widely used / disseminated. Consequently, the pool of people who
may be willing to solve "an immediate problem at hand" and/or fix a bug may be quite large.
And of course the WPS could then become widely used and disseminated, promulgating throughout a much wider "noosphere." This would have some degree of legitimizing effect on OS/2 itself.
Post parity development
Mozilla is already close to parity with IE4/5. Consequently, there is no strong
example to chase to help implicitly coordinate the development team.
Netscape has assigned some of their top developers towards the full time task of
managing the Mozilla codebase and it will be interesting to see how this helps (if at
all) the ability of Mozilla to push on new ground.
An interesting weakness is the size of the remaining "Noosphere" for the OSS browser.
1. The stand-alone browser is basically finished.
There are no longer any large, high-profile segments of the stand-alone browser which must
be developed. In other words, Netscape has already solved the interesting 80% of the
problem. There is little / no ego gratification in debugging / fixing the remaining 20%.
This sequence implies that nobody in the entire OSS domain will come up with a new "killer feature" for the OSS browser. Is that just wishful thinking, or a lack of respect by MS?
2. Netscape's commercial interests shrink the effect of Noosphere contributions.
Linus Torvalds' management of the Linux codebase is arguably directed
towards the goal of creating the best Linux. Netscape, by contrast, expressly
reserves the right to make code management decisions on the basis of Netscape's
commercial / business interests. Instead of creating an important product, the
developers' code is being subjugated to Netscape's stock price.
Whether or not this was true is now a moot point; Navigator's features are hardly of much impact on new owner AOL's stock price.
Potentially the single biggest detriment to the Mozilla effort is the level of integration
that customers expect from features in a browser. As stated earlier, integration
development / testing is NOT a parallelizable activity and therefore is hurt by the OSS process.
In particular, much of the new work for IE5+ is not just integrating components within
the browser but continuing integration within the OS. This will be exceptionally
painful to compete against.
Not as painful as it is to use, Mr. Bill.
The contention therefore, is that unlike the Apache and Linux projects which, for now,
are quite successful, Netscape's Mozilla effort will:
Produce the dominant browser on Linux and some UNIX's
Continue to slip behind IE in the long run
Horsefeathers. This speculation implies that Mozilla cannot improve faster than IE -- or, that OSS products cannot rev faster than MS products. Yet the article already admitted that the opposite was true; OSS revs orders of magnitude faster. Thus, the writer believes that there will not be any incentive to improve Mozilla. As I said, horsefeathers.
The memo then attempts to use some Internet traffic statistics in a wimpy effort to bolster this absurd claim. He fails to see that when a product goes OSS, the traffic spreads out across a much wider spectrum of interested parties. It becomes harder to track. It disappears below MS's radar, but begins growing roots.
The article then summarizes the Apache web server movement, which we shall bypass to get to the meatier stuff: the recommended punishment that MS wishes to inflict on the OSS innovators.
In general, a lot more thought/discussion needs to be put into Microsoft's response to the
OSS phenomenon. The goal of this document is education and analysis of the OSS
process; consequently, in this section I present only a very superficial list of options.
Where is Microsoft most likely to feel the "pinch" of OSS projects in the near future?
Server vs. Client
The server is more vulnerable to OSS products than the client. Reasons for this include:
Clients "task switch" more often -- the average client desktop is used for a wider
variety of apps than the server. Consequently, integration, ease-of-use, fit & finish, etc. are more important on the client.
Servers are more task specific -- OSS products work best if goals/precedents are
clearly defined -- e.g. serving up commodity protocols
Probably a significant reason for IBM's emphasis of OS/2 on the server side right now.
Commodity servers are a lower "commitment" than clients -- Replacing commodity
servers such as file, print, mail-relay, etc. with open source alternatives doesn't interfere
with the end-user's experience. Also, in these commodity services, a "throw-away"
"experimental" solution will often be entertained by an organization.
Servers are professionally managed -- This plays into OSS's strengths in customization
and mitigates weaknesses in lack of end-user ease of use focus.
Capturing OSS benefits -- Developer Mindshare
The ability of the OSS process to collect and harness the collective IQ of thousands of
individuals across the Internet is simply amazing. More importantly, OSS
evangelization scales with the size of the Internet much faster than our own
evangelization efforts appear to scale.
Bill, could it be that people actually PREFER openness over deceit?
How can Microsoft capture some of the rabid developer mindshare being focused on OSS?
Some initial ideas include:
Capture parallel debugging benefits via broader code licensing -- Be more liberal
in handing out source code licenses to NT to organizations such as universities and certain partners.
If you don't mind being a laughingstock, Bill, go ahead and show us that shameful bag of spaghetti code.
Provide entry level tools for low cost / free -- The second order effect of tools is to
generate a common skillset / vocabulary tacitly leveraged by developers. As NatBro
points out, the wide availability of a consistent developer toolset in Linux/UNIX is a
critical means of implicitly coordinating the system.
They tried to flood the market with J++ and we didn't buy it. We know what MS "open" tools are really all about.
Put out parts of the source code -- try to generate hacker interest in adding value to
MS-sponsored code bases. Parts of the TCP/IP stack could be a first candidate. OshM
points out, however that the challenge is to find some part of MS's codebase with a big
enough Noosphere to generate interest.
Also, to find a piece of the code that hasn't been rigged to sabotage somebody else's product.
Provide more extensibility -- The Linux "enthusiast developer" loves writing to /
understanding undocumented API's and internals. Documenting / publishing some internal
API's as "unsupported" may be a means of generating external innovations that leverage
our systems investments. In particular, ensuring that more components from more teams are
scriptable / automatable will help ensure that power users can play with our components.
In other words, to engage in historical revisionism. Find closed, proprietary pieces of NT, and attempt to gain brownie points by showing them off. Good grief, the bottom of the barrel is starting to get some deep grooves from all the scraping going on.
Creating Community/Noosphere. MSDN reaches an extremely large population. How
can we create social structures that provide network benefits leveraging this huge
developer base? For example, what if we had a central VB showcase on Microsoft.com
which allowed VB developers to post & publish full source of their VB projects to share
with other VB developers? I'll contend that many VB developers would get extreme ego
gratification out of having their name / code downloadable from Microsoft.com.
<YAWN> But that is merely preaching to the choir. People who are not using MS products don't want to ruin their clean reputation.
Monitor OSS news groups. Learn new ideas and hire the best/brightest individuals.
In other words, raid the kitchen to steal the cooks. Just like with Borland, eh?
Capturing OSS benefits -- Microsoft Internal Processes
What can Microsoft learn from the OSS example? More specifically: How can we
recreate the OSS development environment internally? Different reviewers of this
paper have consistently pointed out that internally, we should view Microsoft as an
idealized OSS community but, for various reasons, do not:
Another "we can do everything better than anyone else already does it." But that doesn't apply to honesty, integrity, reliability, and other key ingredients of a good company; it sure won't apply to an OSS-cloning effort.
Different development "modes". Setting up an NT build/development environment is
extremely complex & wildly different from the environment used by the Office team.
Different tools / source code managers. Some teams use SLM, others use VSS. Different
bug databases. Different build processes.
No central repository/code access. There is no central set of servers to find, install,
review the code from projects outside your immediate scope. Even simply providing a
central repository for debug symbols would be a huge improvement. NatBro:
"a developer at Microsoft working on the OS can't scratch an itch they've got
with Excel, neither can the Excel developer scratch their itch with the OS -- it
would take them months to figure out how to build & debug & install, and they
probably couldn't get proper source access anyway"
Wide developer communication. Mailing lists dealing with particular components &
bug reports are usually closed to everyone outside the team.
After a few more paragraphs like that, I'm surprised ANYONE would want to work there.
More component robustness. Linux and other OSS projects make it easy for developers
to experiment with small components in the system without introducing regressions in other components.
"People have to work on their parts independent of the rest so internal
abstractions between components are well documented and well
exposed/exported as well as being more robust because they have no idea how
they are going to be called. The linux development system has evolved into
allowing more devs to party on it without causing huge numbers of integration
issues because robustness is present at every level. This is great, long term, for
overall stability and it shows."
What the quotation says is, "Their stuff works, so it's easier to build more stuff from it." Same is true for OS/2, in general. A stable base OS means that you can build a whale of a nice app on top of it. As a point of reference, look how well Object Desktop for OS/2 has succeeded -- version 2.0 sold out in just a few days -- whereas the Windows version is stuck in development and testing. This is proof positive that developers would have flocked to OS/2 and dumped Windows if the marketplace had not been rigged.
The trick of course, is to capture these benefits without incurring the costs of the OSS
process. These costs are typically the reasons such barriers were erected in the first place:
Integration. A full-time developer on a component has a lot of work to do already before
trying to analyze & integrate fixes from other developers within the company.
Iterative costs & dependencies. The potential for mini-code forks between "scratched"
versions of the OS being used by one Excel developer and the "core" OS used by a different developer.
Extending OSS benefits -- Service Infrastructure
Supporting a platform & development community requires a lot of service
infrastructure which OSS can't provide. This includes PDC's, MSDN, ADCU, ISVs, etc.
The OSS community's "MSDN" equivalent, of course, is a loose confederation of web
sites with API docs of varying quality. MS has an opportunity to really exploit the web
for developer evangelization.
Blunting OSS attacks
Generally, Microsoft wins by attacking the core weaknesses of OSS projects.
De-commoditize protocols & applications
OSS projects have been able to gain a foothold in many server applications because
of the wide utility of highly commoditized, simple protocols. By extending these
protocols and developing new protocols, we can deny OSS projects entry into the market.
David Stutz makes a very good point: in competing with Microsoft's level of desktop
integration, "commodity protocols actually become the means of integration" for OSS
projects. There is a large amount of IQ being expended in various IETF working
groups which are quickly creating the architectural model for integration for these
No, Bill, that's still illegal, an act of monopoly maintenance. You can't tie protocols to your monopoly OS or your monopoly Office suite for the purpose of preventing the growth of a competing product (Linux). Go directly to jail. Do not pass Go. Do not collect two billion dollars.
Some examples of Microsoft initiatives which are extending commodity protocols include:
DNS integration with Directory. Leveraging the Directory Service to add value to
DNS via dynamic updates, security, authentication
Remember what I said earlier about trying to take over the DNS???
HTTP-DAV. DAV is complex and the protocol spec provides an infinite level of
implementation complexity for various applications (e.g. the design for Exchange over
DAV is good but certainly not the single obvious design). Apache will be hard pressed to
pick and choose the correct first areas of DAV to implement.
Structured storage. Changes the rules of the game in the file serving space (a key
Linux/Apache application). Creates a compelling client-side advantage which can be
extended to the server as well.
MSMQ for Distributed Applications. MSMQ is a great example of a distributed
technology where most of the value is in the services and implementation and NOT in the
wire protocol. The same is true for MTS, DTC, and COM+.
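The memo's bet on DAV's complexity is easy to see at the wire level. As a hedged illustration (this is not Microsoft's or Apache's code, just a sketch using Python's standard XML library), the snippet below builds the body of a minimal WebDAV PROPFIND request. Even this simplest DAV operation is already a namespaced XML document, and the full spec layers locking, collections, and custom live properties on top of it -- exactly the kind of surface where implementations can diverge:

```python
# Sketch: build the XML body of a minimal WebDAV PROPFIND request.
# PROPFIND asks a DAV server for specific properties of a resource.
import xml.etree.ElementTree as ET

DAV_NS = "DAV:"  # the WebDAV namespace URI defined by the DAV spec


def build_propfind(props):
    """Return a PROPFIND request body asking for the named DAV properties."""
    ET.register_namespace("D", DAV_NS)
    root = ET.Element("{DAV:}propfind")
    prop = ET.SubElement(root, "{DAV:}prop")
    for name in props:
        ET.SubElement(prop, "{DAV:}" + name)
    return ET.tostring(root, encoding="unicode")


body = build_propfind(["getlastmodified", "getcontentlength"])
print(body)
```

A client would send this body using the nonstandard PROPFIND HTTP method along with a `Depth` header; the method, the header, and the choice of which properties to support are each extension points a server may implement differently, which is the memo's point about Apache being "hard pressed to pick and choose."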
Make Integration Compelling -- Especially on the server
The rise of specialty servers is a particularly potent and dire long term threat that
directly affects our revenue streams. One of the keys to combating this threat is to
create integrative scenarios that are valuable on the server platform. David Stutz writes:
The bottom line here is whoever has the best network-oriented integration
technologies and processes will win the commodity server business. There is a
convergence of embedded systems, mobile connectivity, and pervasive
networking protocols that will make the number of servers (especially "specialist
servers"??) explode. The general-purpose commodity client is a good business
to be in - will it be dwarfed by the special-purpose commodity server business?
Remember the MS modus operandi: find a need and prevent others from filling it. This is the opposite of capitalism.
System Management. Systems management functionality potentially touches all
aspects of a product / platform. Consequently, it is not something which is easily
grafted onto an existing codebase in a componentized manner. It must be
designed from the start or be the result of a conscious re-evaluation of all
components in a given project.
Ease of Use. Like management, this often must be designed from the ground up
and consequently incurs large development management cost. OSS projects will
consistently have problems matching this feature area.
Solve Scenarios. ZAW, dial up networking, wizards, etc.
Client Integration. How can we leverage the client base to provide similar
integration requirements on our servers? For example, MSMQ, as a piece of
middleware, requires closely synchronized client and server codebases.
Middleware control is critical. Obviously, as servers and their protocols risk
commoditization, higher-order functionality is necessary to preserve margins in
the server OS business.
"Preserving margins" is just a codeword for maintaining higher prices via monopoly maintenance. Once again, this is a statement of the MS goal of preventing competition by taking the competition's product and giving it away as part of the monopoly MS product to prevent long-term credibility of competition and maintain MS monopoly profits. Time to throw away the key to that jail door.
Release / Service pack process. By consolidating and managing the arduous task
of keeping up with the latest fixes, Microsoft provides a key customer advantage
over basic OSS processes.
Long-Term Commitments. Via tools such as enterprise agreements, long term
research, executive keynotes, etc., Microsoft is able to commit to a long term
vision and create a greater sense of long term order than an OSS process.
Notice MS products present only a "sense" of order. In reality, the products become more disorderly and chaotic over time. OSS, OS/2, and other alternatives don't fall apart, because they are inherently more orderly by superior design. It makes no sense to buy disorderly products from a company that merely projects order.
Continue to Part 2: Halloween II