The overwhelming majority of this information was derived from Micro Channel
Architecture: Revolution in Personal Computing, by Pat A. Bowlds, pages
135-137, ISBN 0-442-00433-8.
The Beginning - The IBM 1401
All I/O operations were controlled directly by the CPU.
The CPU was also responsible for information transfers to and from memory
or I/O devices. Many processing cycles were consumed with I/O management.
Throughput was often limited by the speed of the I/O devices, because the CPU
was forced to wait until an I/O operation was complete before other activity
could begin.
I/O Processor Emerges - IBM System/360
In 1964, the I/O Processor appeared (as did I). This device boosted
performance by allowing the CPU to delegate I/O management to the I/O Processor
using high-level commands. The I/O Processor then managed all the data
transfers between I/O and memory.
The I/O Processor was the ancestor of the channel architecture used in IBM
mainframes. A channel subsystem consists of an I/O Processor, multiple I/O
channels, and their controllers. Channels are well suited to mainframe
computing with centralized storage and processing; in addition, they can be
cabled across the floor to meet a variety of peripheral requirements.
I/O Devices Use DMA - IBM System/7
In the early 1970s, the System/7 permitted I/O devices to perform DMA. These
first uses of DMA by devices other than the CPU were the foundation of
today's busmasters.
I/O devices were able to execute low-level commands from the CPU. The
Channel Controller had a function similar to that of the I/O Processor.
Data transfers from one I/O device to another required many steps and
the participation of several parties; even so, at the time this arrangement
produced significant performance gains by offloading more work from the CPU
onto the I/O devices.
Smart I/O - IBM Series/1
The Series/1 introduced the concept of intelligent adapters (Smart I/O).
These smart I/O devices interpreted and executed their own commands, which gave
them increased independence from the CPU. New software protocols called Control
Blocks permitted Series/1 adapters to access memory directly (called
first-party DMA operation) and to communicate with the channel controller.
Fall From Grace - IBM AT
This hell-spawn bus continues to bedevil us with all sorts of bad mojo. The
channel controller was replaced by a DMA controller. The DMA controller could
take control of the I/O bus and act as a third party in data transfers between
devices and memory over multiple DMA Channels.
The AT's MASTER signal introduced the capability of bus ownership by a Bus
Master. This device was given direct memory access using a single dedicated DMA
channel. Problems with the AT implementation of DMA include no provision for
peer-to-peer data transfers, no arbitration among multiple devices, no
preemption, and no defined method for equitable bus ownership. It would have
been possible for a bus master to gain control of the bus and keep other
devices from using it. If a bus master did hog the bus, the result would be
lost data from missed memory refresh cycles.
It is possible to design a device and device driver to prevent these
problems, but designers rarely used the AT's MASTER signal.
Supreme Being - IBM PS/2 Micro Channel
In 1987, IBM blessed the huddled masses yearning for true busmaster
capabilities with Micro Channel. Unfortunately, IBM
seems to have wanted to rest after that.
True bus master capabilities were finally achieved with a hardware-mediated
arbitration process, a method of preemption, and a fairness algorithm for
equitable bus sharing.
A new protocol was defined: Subsystem Control Block (SCB) Architecture, which
provides the procedures for peer-to-peer communication and data exchange
between masters. SCBs provide a framework for the high-level command
capabilities associated with the bus master function.
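
To make the control-block idea concrete, here is a minimal sketch of what such
a descriptor could look like in C. This is purely illustrative and is NOT the
actual IBM SCB layout; the field names and sizes are assumptions.

/*
 * Illustrative only -- not the real IBM Subsystem Control Block layout.
 * The idea: the host (or a peer bus master) builds a small descriptor in
 * shared memory, and the adapter fetches and executes it on its own.
 */
#include <stdint.h>

struct control_block {
    uint16_t command;       /* high-level command code, e.g. "read block"        */
    uint16_t status;        /* completion/error status filled in by the adapter  */
    uint32_t buffer_addr;   /* physical address of the data buffer               */
    uint32_t byte_count;    /* length of the transfer in bytes                   */
    uint32_t next_block;    /* physical address of next block; 0 = end of chain  */
};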
Busmaster Benefits
Busmasters offload functions from the system master, minimize interrupts,
provide their own processing power, and eliminate third-party DMA transfers. A
busmaster goes directly to the memory or I/O slave (or, of course, to another
busmaster) and gets the data it's looking for.
Bus masters do not necessarily increase system performance. If the system
microprocessor is not busy with other tasks and is very powerful, an I/O slave
implementation may be faster than a bus master implementation with a less
powerful processor. To summarize: bus master performance benefits are observed
in systems in which the system microprocessor (or I/O bus) is busy and near
saturation.
Multiple CPU Subsystems
This is interesting for those questing for the "Superserver". Busmasters can
upgrade the processing capability of systems by adding a new CPU subsystem that
serves as a replacement for the system master. The new CPU subsystem can be
given control of the system resources after the default master has initialized
the system. The default master can be made quiescent, relegated to supporting
I/O functions, or allowed to operate concurrently with the new CPU subsystem.
Because multiple bus masters are supported by Micro Channel, multiple CPU
subsystems can (with the appropriate operating system and software support)
operate concurrently. This concurrent process can provide significant system
processing capabilities without wasting any existing system logic.
Processor Independence
The added CPU(s) can have a completely different software architecture
from that of their host system. Examples include:

- the adapter for the IBM 6152 Academic System, based on the ROMP RISC CPU
  with 8MB on-board;
- the Prometa BusMaster WS/88K, based on the RISC 88000 and running UNIX
  System V;
- the YARC Systems Micro 785+, based on the MC68020 at 40 MHz and running
  FORTRAN, C, and Pascal;
- the Xtend Renaissance CPU board;
- the IBM PS/2 Wizard adapter, based on the RISC i860 at 33 MHz and running
  numeric-intensive calculations (like a big math-co!);
- the rare and (still) costly P/390, running OS/390;
- the (even rarer) P/370, running OS/370 (or whatever, MUSIC/SP I think);
- and, ending up with the darling of the bunch, the AOX/Kingston MCMaster,
  based on a 386 or 486 from 25 to 33 MHz and running whatever OS you want.

Because the Micro Channel architecture is SEPARATE
from the CPU architecture of the system, maximum design flexibility is
achieved.
From SCO:
Micro Channel Architecture came out at approximately the same time as IBM
moved to 32-bit processors. However, the change to MCA was more than an upgrade
in the processor; it was a redesign of the entire bus architecture.
One of the most significant changes was the introduction of smaller
expansion slots. Originally, AT cards were 4.75 x 13.5 inches, whereas the new
MCA cards are 3.5 x 11.5 inches. This allowed the same number of expansion
cards to be installed in a smaller area. However, it also meant that existing
cards were not compatible with the new MCA machines.
A key issue in the miniaturization of the expansion cards is the concept of
surface mount components, or surface mount technology (SMT). Most of the
components look 'flattened' in comparison to their non-MCA counterparts.
Earlier architectures used "through-hole" mounting, in which holes were drilled
through the system board (hence the name) and the chips were mounted in
holders that were soldered into these holes. Not only does SMT save space, it
also saves time and money, since it is easier to produce boards this way than
with through-hole mounting.
Another key enhancement made possible by SMT was the spacing of the connectors.
The 0.050-inch spacing of the pins corresponds to the spacing on the expansion
cards, making design much easier.
A radical rearrangement of the signals on the Micro Channel bus puts a
ground on every fourth pin. This reduces interference so much that MCA machines
can operate at 80 MHz and still comply with FCC regulations on the amount of
interference generated. The speed of MCA machines is also increased because
this improved arrangement of signals allows higher bandwidth on the bus.
Therefore, the Micro Channel is not limited to the 8 MHz bus speed of the AT
architecture. An additional advantage is obtained because the ground pins are
no more than 1/10 of an inch away from any signal line, which substantially
reduces noise emissions.
Although this was subsequently implemented on the AT bus, MCA was the first to
double the width of the data bus to 32 bits, which allowed anything attached to
it to be accessed twice as fast. MCA was also the first to expand the address
bus from 24 bits to 32. This increased accessible memory from 16 megabytes to 4
gigabytes.
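
The memory figures follow directly from the number of address lines; a quick
check in C (trivial, nothing MCA-specific about it):

#include <stdio.h>

int main(void)
{
    /* A 24-bit address bus can select 2^24 bytes; a 32-bit bus, 2^32 bytes. */
    unsigned long long addr24 = 1ULL << 24;   /* 16,777,216 bytes    = 16 MB */
    unsigned long long addr32 = 1ULL << 32;   /* 4,294,967,296 bytes = 4 GB  */

    printf("24-bit address bus: %llu bytes (%llu MB)\n", addr24, addr24 >> 20);
    printf("32-bit address bus: %llu bytes (%llu GB)\n", addr32, addr32 >> 30);
    return 0;
}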
Perhaps the most significant change from the traditional PC design was its
hardware-mediated bus arbitration. With the old AT architecture, devices could
share the bus, but it required special software. The hardware-mediated bus
arbitration that MCA provides is borrowed from mainframes and allows the system
to use multiple processors, thus allowing multitasking and parallel processing.
The current MCA implementation of hardware-mediated arbitration allows up to
eight microprocessors and eight devices (such as DMA controllers) that all
share the single data bus. An added advantage of this hardware arbitration is
that the CPU is no longer busy arbitrating bus requests and can devote itself
to other tasks.
To implement this arbitration strategy, MCA adds new lines to the bus. One
important set is the four lines determining the Arbitration Bus Priority Level.
This represents 16 different priority levels that could be assigned to a device
that wants to take control of the bus. MCA also added three lines that do the
actual work. The Preempt signal tells all the other devices on the bus that the
bus is being requested. The Arbitrate/Grant signal is sent by the Central
Arbitration Point and begins the actual arbitration. The Burst signal is sent
by a device as a "Do Not Disturb" sign to tell the other devices that they
ought not even ask for the bus until this transfer is completed. Additionally,
each device checks the Arbitration Bus Priority Level lines. If a higher
priority level has already been asserted, the device stops its attempt to gain
control of the bus.
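
The priority lines make the arbitration self-selecting: every competitor drives
its level onto the shared lines and backs off as soon as it sees a higher
priority already there. Here is a minimal C simulation of that resolution. It
only illustrates the idea; in the real hardware the comparison happens on
open-collector ARB lines, and I am assuming the MCA convention that a lower
arbitration level means a higher priority.

#include <stdio.h>

#define NUM_DEVICES 4

/* Arbitration levels of the devices currently requesting the bus
 * (example values). Lower value = higher priority. */
static const int requests[NUM_DEVICES] = { 9, 3, 12, 7 };

int main(void)
{
    /* Each competitor drives its level onto the shared ARB lines. With
     * open-collector drivers the lines resolve toward the lowest value,
     * so a running minimum models the resolved level. */
    int arb_lines = 0x0F;               /* idle state: all lines released */
    for (int i = 0; i < NUM_DEVICES; i++)
        if (requests[i] < arb_lines)
            arb_lines = requests[i];

    /* Every device compares the resolved level with its own; any device
     * that sees a higher-priority (lower) value gives up this round. */
    for (int i = 0; i < NUM_DEVICES; i++) {
        if (requests[i] == arb_lines)
            printf("device %d (level %d) wins the bus\n", i, requests[i]);
        else
            printf("device %d (level %d) backs off\n", i, requests[i]);
    }
    return 0;
}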
For the end user the most notable change was the introduction of the Micro
Channel's Programmable Option Select. With this, the entire hardware
configuration is stored in the CMOS and it almost eliminates the conflicts
(such as base address and interrupts) that were common on AT installations. In
order to add a new card, the machine's reference disk needs to be booted and
the configuration file needs to be loaded from the options diskette that comes
with the card. This makes installation of new cards substantially easier for
the novice user because this installation procedure is the same for each card,
no matter the manufacturer.
An added advance in terms of expansion cards is the concept of interrupt
sharing. Many expansion board manufacturers allow only a limited range of
interrupts, which might prevent the use of certain cards because of conflicts.
Interrupt sharing is possible because the Micro Channel allows level-sensitive
interrupts.
With edge-sensitive interrupts, as on the standard AT-bus, an interrupt is
generated and then drops. The PIC determines which device the interrupt came
from and services it. If interrupts were shared in this scenario, any interrupt
coming between the time the first one is generated and serviced would be lost.
The PIC would have no means of knowing that a second one occurred.
With level-sensitive interrupts, when an interrupt is generated it is held
high until forced low by the PIC when it is serviced. If another device were on
the same interrupt, pulling down the interrupt line of the first device would
still leave
the line for the second device high. The PIC would then be able to see it and
service the device.
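
In driver terms, sharing a level-sensitive interrupt usually looks something
like this sketch. The device structure and function names are assumptions made
up for illustration, not taken from any particular MCA driver; the point is
that each handler on the line checks whether its own device is really asserting
the interrupt before servicing it.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-device interface -- names are assumptions. */
struct device {
    bool (*is_asserting)(struct device *);  /* reads the device's status register  */
    void (*service)(struct device *);       /* handles and clears the device's IRQ */
};

#define NUM_SHARED 3
static struct device *shared_irq_devices[NUM_SHARED];  /* filled in at driver init */

/*
 * Called when the shared, level-sensitive interrupt line is active.
 * Because the line stays asserted until every requesting device has been
 * cleared, polling each device in turn loses nothing, even if two devices
 * raised the interrupt at almost the same moment.
 */
void shared_irq_handler(void)
{
    for (int i = 0; i < NUM_SHARED; i++) {
        struct device *dev = shared_irq_devices[i];
        if (dev != NULL && dev->is_asserting(dev))
            dev->service(dev);              /* device releases its pull on the line */
    }
    /* If a device asserted again in the meantime, the line is still (or
     * again) active and the interrupt will simply be delivered once more. */
}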