On 10 May 98 at 14:29, Herbert Graf wrote:
> The 386SX chip was not crippled, it was the same situation as
> with the 8088. Intel developed the 8086, a 16 bit internal and
> external processor, but then they realized that there were very few
> 16bit support components, but tons of 8 bit, so to make it cheaper
> to purchase a system, they developed the 8088 (and later the 80188,
> although that never caught attention), a 16 bit internal 8 bit
> external interface CPU. With the 386 it was the same story, Intel
> developed a great powerful and fast 32 bit processor with very
> little 32 bit component support out there, so they developed the
> SX, an internal 32 bit external 16 bit CPU. It made economic sense,
> and was not an attempt to oust the competition, there practically
> was none.
The 8088 was an 8086 internally, but with its 8-bit data bus, it
could (potentially, at least) be used in inexpensive designs with
existing 8080/8085/Z80 support chips and peripherals.
The 386SX was a 386 internally, but with its 16-bit data bus (and
24-bit address bus), it could be used in inexpensive designs with
existing 80286 support chips and peripherals.[*]
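The tradeoff in both cases can be sketched with a toy model (the cycle
counts below are just the bus-width arithmetic, not measured timings):
a narrower external data bus needs proportionally more bus cycles to
move each internal word.

```python
# Illustrative sketch: a CPU with a wide internal word size but a
# narrow external data bus needs multiple bus cycles per word,
# trading transfer speed for cheaper, older support chips.

def bus_cycles(word_bits: int, bus_bits: int) -> int:
    """Bus cycles needed to transfer one internal word."""
    return -(-word_bits // bus_bits)  # ceiling division

# 8086:  16-bit word over a 16-bit bus -> 1 cycle
# 8088:  16-bit word over an 8-bit bus -> 2 cycles
# 386DX: 32-bit word over a 32-bit bus -> 1 cycle
# 386SX: 32-bit word over a 16-bit bus -> 2 cycles
for name, word, bus in [("8086", 16, 16), ("8088", 16, 8),
                        ("386DX", 32, 32), ("386SX", 32, 16)]:
    print(f"{name}: {bus_cycles(word, bus)} bus cycle(s) per word")
```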
The 486SLC and 486DLC were 486s internally, but could be used in
inexpensive designs with existing 386 support chips and
peripherals.[*]
[*] Intel's intention was that makers of 80286 motherboards could
quickly modify their designs to 386SX, and of 386 boards to
486SLC/DLC. CPU upgrade packaging of these chips to plug into
existing boards was strictly third-party until the introduction of
the 486 "OverDrive" chips.
The 486SX eliminated the FPU to provide a low-cost entry-level
version. Paradoxically, the FPU is rarely used by most *server*
applications, but may be critical to some desktop/home applications.
The Celeron eliminates the L2 cache to provide a low-cost
entry-level version. I do not see any large aftermarket of PC
motherboards that will take a Celeron but will not as easily take a
"real" PII. Nor do I see any particular category of applications
that derives no benefit from L2 cache.
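The L2 point can be illustrated with a toy average-memory-access-time
(AMAT) model; the latencies and miss rates below are hypothetical
round numbers, not measured Celeron/PII figures. Any workload with a
nonzero L1 miss rate gets some benefit from an L2.

```python
# Illustrative AMAT sketch; all cycle counts and miss rates here are
# assumed for the example, not taken from any datasheet.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: hit cost plus expected miss penalty."""
    return hit_time + miss_rate * miss_penalty

MEM_LATENCY = 50.0   # assumed main-memory latency, in CPU cycles
L2_LATENCY = 6.0     # assumed L2 hit latency, in CPU cycles
L2_MISS_RATE = 0.2   # assumed fraction of L1 misses that also miss L2
L1_MISS_RATE = 0.05  # assumed L1 miss rate

# Celeron-style (no L2): every L1 miss goes straight to memory.
without_l2 = amat(1.0, L1_MISS_RATE, MEM_LATENCY)
# PII-style (with L2): most L1 misses stop at the faster L2.
with_l2 = amat(1.0, L1_MISS_RATE,
               amat(L2_LATENCY, L2_MISS_RATE, MEM_LATENCY))

print(f"no L2:   {without_l2:.2f} cycles/access")  # 3.50
print(f"with L2: {with_l2:.2f} cycles/access")     # 1.80
```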
We have five cases here (over ten years) where Intel has released a
reduced-capability version of a successful chip. The first three
offered system builders some cost advantages BEYOND just reducing the
price of the CPU. I don't see any such advantage to Celeron; in my
mind, that makes this a different kind of situation.
[In previous cases, Intel has pretty much managed to limit the
mass-market perception of the reduced-functionality chip as "down
market" or "substandard". They may not have been so lucky this
time.]
David G