The Modular One
This was a 16-bit computer built with Emitter Coupled Logic (ECL) and was competitive with other first-generation minicomputers. Its most distinctive hardware features were memory-mapped I/O and an early version of segmented memory (similar to the later Intel 8086, but with both base and limit). The latter, together with two execution states (Normal State and Special State), made possible the implementation of a self-protecting operating system kernel (known as the Executive, or Exec). Such ideas were popular in British computer academia at the time and were later adopted by some US designs, such as the aforementioned Intel 8086. Furthermore, the power system was set up as a peripheral with interrupt capabilities, giving the machine the ability to power down gracefully in an emergency.
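The base-and-limit protection described above can be sketched in a few lines. This is a hypothetical illustration, not the Modular One's actual register layout: the `SegmentFault` exception and the `translate` helper are invented names standing in for the hardware trap.

```python
class SegmentFault(Exception):
    """Stands in for the hardware trap taken on an out-of-segment access."""
    pass

def translate(offset, base, limit):
    """Map a segment-relative offset to a physical address.

    Unlike the 8086's base-only segmentation, a base-and-limit scheme
    rejects any offset at or beyond the segment's length, so a process
    cannot reach outside its own segment.
    """
    if offset >= limit:
        raise SegmentFault(f"offset {offset} exceeds segment limit {limit}")
    return base + offset
```

In a scheme like this, the kernel (running in the privileged state) sets base and limit; user-state code can only ever generate addresses inside its segments.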
A distinctive idea in the Modular One was that main memory was treated much like any other peripheral (a printer, for instance), except that it handled both input and output. When an instruction or datum was retrieved from memory, the request went out over a cable one or two metres long to a separate memory box, also about one metre cubed. Because a voltage edge was thought to be faster than a pulse, a request was represented by a single voltage transition. The word being read would travel the one or two metres back, and then, because reading magnetic-core memory destroys its contents, it would be sent to be re-written to where it had been.
The Modular One was comparatively expensive. It was somewhat exotic in that its modular design resulted in almost every system delivered being somewhat different, which created a high maintenance burden. It never sold widely outside the UK, and even in the UK it was surpassed in sales by DEC and Data General before the end of the 1970s. The systems were cost-reduced with new technology from the mid-1970s to the mid-1980s but never gained a significant market share.
Many universities were equipped with Modular One systems, in part because the government of the time had a 'buy British' policy.[1]
Operating system
The Exec was known as E4 (E1, E2 and E3 were much simpler execs used only in the company's first few years). E4 was based on an early version of object-oriented principles, though it lacked most of what are now considered essential features of the paradigm. Objects included Activities (now more commonly known as tasks or processes), Segments (of memory), Files, Semaphores and Clocks. Another object type, the Sphere, was a run-time protection domain within which all other object types (including other Spheres) existed. There was some similarity to Unix in the use of serial, byte-oriented streams in the file system and interprocess communication, in contrast to the record-oriented file systems then dominant in commercial data processing. E4 also supported real-time priorities and virtual memory at the Segment level. It was a relatively elegant OS for its time but was never ported to other hardware, having been written entirely in assembler. (Suitable and effective high-level language implementations were not readily available at the time.)
Later history
During the mid-1980s, the company realised that the future lay in open systems and attempted to make the transition to Unix with re-badged systems from Motorola and Sequoia. However, sales of the proprietary systems fell off before the new systems could be ramped up to replace them, and the company was taken over by ACT in late 1989.[2]
Magnetic-core memory
From Wikipedia, the free encyclopedia
Magnetic-core memory was the predominant form of random-access computer memory for 20 years (circa 1955-75). It uses tiny magnetic toroids (rings), the cores, through which wires are threaded to write and read information. Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise) and the bit stored in a core is zero or one depending on that core's magnetization direction. The wires are arranged to allow an individual core to be set to either a "one" or a "zero", and for its magnetization to be changed, by sending appropriate current pulses through selected wires. The process of reading the core causes the core to be reset to a "zero", thus erasing it. This is called destructive readout.
Such memory is often just called core memory, or, informally, core. Although core memory had been superseded by semiconductor memory by the end of the 1970s, memory is still occasionally called "core"; in particular, a file recording the contents of memory after a system error is usually called a core dump.
History
Frederick Viehe, working in his home laboratory, applied for various patents on the use of transformers for building digital logic circuits in place of relay logic beginning in 1947; they were issued through 1960 and assigned to IBM. The first application was the Electronic Relay Circuit, later revised and called the Memory Transformer. He was issued a patent for core memory manufacture in 1966.[1] Independently, substantial work in the field was carried out by the Shanghai-born American physicists An Wang and Way-Dong Woo, who created the pulse transfer controlling device in 1949.[2] The name referred to the way that the magnetic field of the cores could be used to control the switching of current in electro-mechanical systems. Wang and Woo were working at Harvard University's Computation Laboratory at the time, but, unlike MIT, Harvard was not interested in promoting inventions created in its labs. Instead, Wang was able to patent the system on his own.
The MIT Whirlwind computer required a fast memory system for real-time aircraft tracking. At first, Williams tubes (a storage system based on cathode ray tubes) were used, but these devices were always temperamental and unreliable. Several researchers in the late 1940s, including Jay Forrester, conceived the idea of using magnetic cores for computer memory, but Forrester received the principal patent for his invention of the coincident-current core memory that enabled the 3D storage of information.[3] William Papian of Project Whirlwind cited one of these efforts, Harvard's "Static Magnetic Delay Line", in an internal memo. The first core memory of 32 × 32 × 16 bits was installed on Whirlwind in the summer of 1953. Papian reported: "Magnetic-Core Storage has two big advantages: (1) greater reliability with a consequent reduction in maintenance time devoted to storage; (2) shorter access time (core access time is 9 microseconds; tube access time is approximately 25 microseconds), thus increasing the speed of computer operation."[4] In April 2011, Forrester recalled that "the Wang use of cores did not have any influence on my development of random-access memory. The Wang memory was expensive and complicated. As I recall, which may not be entirely correct, it used two cores per binary bit and was essentially a delay line that moved a bit forward. To the extent that I may have focused on it, the approach was not suitable for our purposes." He described the invention and associated events in 1975.[5] Forrester later observed: "It took us about seven years to convince the industry that random-access magnetic-core memory was the solution to a missing link in computer technology. Then we spent the following seven years in the patent courts convincing them that they had not all thought of it first."[6]
Two key inventions led to the development of magnetic core memory in 1951. The first, An Wang's, was the write-after-read cycle, which solved the problem of how to use a storage medium in which the act of reading erased the data read, enabling the construction of a serial, one-dimensional shift register of about 50 bits, using two cores to store each bit. A Wang core shift register is in the Revolution exhibit at the Computer History Museum. The second, Jay Forrester's, was the coincident-current system, which enabled a small number of wires to control a large number of cores, enabling 3D memory arrays of several million bits, e.g. 8K x 8K x 64 bits.
Forrester's coincident-current system required one of the wires to be run at 45 degrees to the cores, which proved impossible to wire by machine, so that core arrays had to be assembled under microscopes by workers with fine motor control. Initially, garment workers were used. Successful automated assembly was only achieved in the 1970s around the time core memory became obsolete; because of this, automated assembly never entered the actual industrial production of core memory.
It was during the early 1950s that Seeburg developed the use of this coincident current ferrite core memory storage in the "Tormat" memory of its new range of jukeboxes, starting with the V200 released in 1955. Development work was completed in 1953.
Some manufacturers employed Scandinavian seamstresses who had been laid off due to the mechanization of the textile industry, for example to replace the memory of the Swedish computer BESK with core memory in 1956. By the late 1950s, industrial plants had been set up in the Far East to build core; inside, hundreds of workers strung cores for low pay. This lowered the cost of core to the point where it became largely universal as main memory by the early 1960s, replacing both inexpensive low-performance drum memory and costly high-performance systems using vacuum tubes, and later transistors, as memory. The cost of core memory declined sharply over the lifetime of the technology: costs began at roughly US$1.00 per bit and dropped to roughly US$0.01 per bit. Core was replaced by integrated semiconductor RAM chips in the 1970s.
Wang's patent was not granted until 1955, and by that time core was already in use. This started a long series of lawsuits, which eventually ended when IBM bought the patent outright from Wang for US$500,000.[7] Wang used the funds to greatly expand Wang Laboratories, which he had co-founded with Dr. Ge-Yao Chu, a schoolmate from China. In 1964, after years of legal wrangling, IBM paid MIT $13 million for rights to Forrester's patent, the largest patent settlement to that date.[8]
Core memory was part of a family of related technologies, now largely forgotten, which exploited the magnetic properties of materials to perform switching and amplification. By the 1950s vacuum-tube electronics was well-developed and very sophisticated, but tubes had a limited lifetime, used much more power and were much larger than semiconductor or magnetic technology, and their operating characteristics changed over their life. Magnetic devices had many of the virtues of the discrete and integrated solid-state devices that would replace them, and were extensively used in military applications. A notable example was the portable (truck-based) MOBIDIC computer developed by Sylvania for the United States Army Signal Corps in the late 1950s. The contents of electronic memory were lost when power was disconnected, but core memory was non-volatile and kept its contents.
Description
The term "core" comes from conventional transformers, whose windings surround a magnetic core. In core memory the wires pass only once through any given core—they are single-turn devices. The magnetic material for a core memory requires a high degree of magnetic remanence, the ability to stay highly magnetized, and a low coercivity, so that less energy is required to change the magnetization direction. The core can take two states, encoding one bit, which can be read when "selected" by a "sense wire". The core memory contents are retained even when the memory system is powered down (non-volatile memory). However, when the core is read, it is reset to a "zero", which is known as destructive readout. Circuits in the computer memory system then restore the information in an immediate re-write cycle. The properties of materials used for memory cores are dramatically different from those used in power transformers.
How core memory works
The most common form of core memory, X/Y line coincident-current, used for the main memory of a computer, consists of a large number of small ferrite (ferromagnetic ceramic) toroids—cores—held together in a grid structure (each grid called a plane), with wires woven through the holes in the cores' middles. In early systems there were four wires, X, Y, Sense and Inhibit, but later cores combined the latter two wires into one Sense/Inhibit line. Each toroid stores one bit (a 0 or 1). One bit in each plane could be accessed in one cycle, so each machine word in an array of words was spread over a stack of planes. Each plane would manipulate one bit of a word in parallel, allowing the full word to be read or written in one cycle.
Core relies on the "square loop" properties of the ferrite material used to make the toroids. Wires that pass through the cores create magnetic fields. Only a magnetic field greater than a certain intensity ("select") can cause the core to change its magnetic polarity. To select a memory location, one of the X and one of the Y lines are driven with half the current ("half-select") required to cause this change. Only the combined magnetic field generated where the X and Y lines cross (a logical AND function) is sufficient to change the state; other cores will see only half the needed field ("half-selected"), or none at all. By driving the current through the wires in a particular direction, the resulting induced field forces the selected core's magnetic flux to circulate in one direction or the other (clockwise or counterclockwise). One direction is a stored 1, while the other is a stored 0.
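The half-select arithmetic above can be sketched as a toy model. The units here are normalized (`SELECT = 1.0` rather than a real drive current), and the grid is an invented abstraction of one plane:

```python
SELECT = 1.0        # normalized current needed to flip a core
HALF = SELECT / 2   # half-select current driven on each X and Y line

def drive_seen(core_x, core_y, driven_x, driven_y):
    """Net drive experienced by the core at (core_x, core_y)
    when X line driven_x and Y line driven_y are pulsed."""
    current = 0.0
    if core_x == driven_x:
        current += HALF   # half-selected by the X line
    if core_y == driven_y:
        current += HALF   # half-selected by the Y line
    return current

def select_core(plane, x, y, polarity):
    """Only the core at the X/Y intersection sees the full select
    current and changes state; half-selected cores are unaffected."""
    for cx in range(len(plane)):
        for cy in range(len(plane[0])):
            if drive_seen(cx, cy, x, y) >= SELECT:
                plane[cx][cy] = polarity  # 1 = one flux direction, 0 = the other
```

The logical-AND character of the scheme is what lets 2N drive lines address N² cores.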
The toroidal shape of a core is preferred since the magnetic path is closed, there are no magnetic poles and thus very little external flux. This allows the cores to be packed closely together and to not have their magnetic fields interact. The alternating 45 degree positioning in a core array helps to reduce any stray coupling.
Reading and writing
To read a bit of core memory, the circuitry tries to flip the bit to whatever polarity the machine regards as the 0 state, by driving the selected X and Y lines that intersect at that core.
- If the bit was already 0, the physical state of the core is unaffected.
- If the bit was previously 1, then the core changes magnetic polarity. This change, after a delay, induces a voltage pulse into the Sense line.
Detecting such a pulse means that the bit contained 1. Absence of the pulse means that the bit contained 0. The delay in sensing the voltage pulse is called the access time of the core memory.
Following any such read, the bit contains 0. This illustrates why core memory features destructive reads: any operation that reads the contents of a core erases those contents.
To write a bit of core memory, the circuitry assumes there has been a read operation and the bit is in the 0 state.
- To write a 1 bit, the selected X and Y lines are driven, with current in the opposite direction as for the read operation. As with the read, the core at the intersection of the X and Y lines changes magnetic polarity.
- To write a 0 bit (in other words, to inhibit the writing of a 1 bit), the same amount of current is also sent through the Inhibit line. This reduces the net current flowing through the respective core to half the select current, inhibiting change of polarity.
The access time plus the time to rewrite is the memory cycle time.
The Sense wire is used only during the read, and the Inhibit wire is used only during the write. For this reason, later core systems combined the two into a single wire, and used circuitry in the memory controller to switch the function of the wire.
Core memory controllers were designed so that every read was followed immediately by a write (because the read forced all bits to 0, and because the write assumed this had happened). Computers began to take advantage of this fact. For example, a value in memory could be read and incremented almost as quickly as it could be read alone; the hardware simply incremented the value between the read phase and the write phase of a single memory cycle (perhaps signalling the memory controller to pause briefly in the middle of the cycle). This might be twice as fast as obtaining the value with a read-write cycle, incrementing the value in some processor register, and writing the new value with another read-write cycle.
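The read/re-write discipline above can be sketched as a toy model. The class and method names are invented for illustration; no real controller worked in software, of course:

```python
class CoreMemory:
    """Toy model of a coincident-current core store: every read is
    destructive and is followed by an automatic re-write, as the
    controllers described above enforced."""

    def __init__(self, bits=16):
        self.mask = (1 << bits) - 1
        self.words = {}

    def _read_phase(self, addr):
        # Driving the selected lines toward 0 senses which bits flip
        # (those that held 1) and leaves the word cleared:
        # destructive readout.
        value = self.words.get(addr, 0)
        self.words[addr] = 0
        return value

    def read(self, addr):
        value = self._read_phase(addr)
        self.words[addr] = value        # re-write phase restores the word
        return value

    def write(self, addr, value):
        self._read_phase(addr)          # the write assumes a prior clear
        self.words[addr] = value & self.mask

    def read_increment(self, addr):
        # Post-increment within one memory cycle: the value is bumped
        # between the read phase and the re-write phase.
        value = self._read_phase(addr)
        self.words[addr] = (value + 1) & self.mask
        return value
```

`read_increment` shows why the trick was cheap: the re-write had to happen anyway, so writing back `value + 1` instead of `value` costs nothing extra.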
Other forms of core memory
Word line core memory was often used to provide register memory. Other names for this type are linear select and 2-D. This form of core memory typically wove three wires through each core on the plane: word read, word write, and bit sense/write. To read or clear words, the full current is applied to one or more word read lines; this clears the selected cores, and any that flip induce voltage pulses in their bit sense/write lines. For read, normally only one word read line would be selected; for clear, multiple word read lines could be selected while the bit sense/write lines were ignored. To write words, half current is applied to one or more word write lines, and half current is applied to each bit sense/write line for a bit to be set. In some designs, the word read and word write lines were combined into a single wire, resulting in a memory array with just two wires per bit. For write, multiple word write lines could be selected. This offered a performance advantage over X/Y line coincident-current in that multiple words could be cleared, or written with the same value, in a single cycle. A typical machine's register set usually used only one small plane of this form of core memory. Some very large memories were built with this technology, for example the Extended Core Storage (ECS) auxiliary memory of the CDC 6600, which was up to 2 million 60-bit words.
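The word-line scheme above can be sketched as a toy model (invented class and method names; one plane only, bit lines modelled as a list):

```python
class WordLineMemory:
    """Toy model of linear-select (2-D) core: one word per word line,
    full current on a word read line clears that word and senses
    which cores flipped."""

    def __init__(self, words, width):
        self.plane = [[0] * width for _ in range(words)]

    def read_clear(self, addr):
        """Full current on the selected word read line: cores holding 1
        flip and pulse their bit sense/write lines; the word is left
        cleared (destructive, as with X/Y core)."""
        word = self.plane[addr][:]
        self.plane[addr] = [0] * len(word)
        return word

    def write(self, addr, bits):
        """Half current on the word write line coincides with half
        current on each selected bit sense/write line, so only those
        cores see full select current and are set."""
        self.plane[addr] = [1 if b else 0 for b in bits]
```

Because the word line carries the selection by itself, several word lines could be driven at once to clear, or write the same pattern into, multiple words in one cycle.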
Another form of core memory called core rope memory provided read-only storage. In this case, the cores, which had more linear magnetic materials, were simply used as transformers; no information was actually stored magnetically within the individual cores. Each bit of the word had one core. Reading the contents of a given memory address generated a pulse of current in a wire corresponding to that address. Each address wire was threaded either through a core, to signify a binary 1, or around the outside of that core, to signify a binary 0. As expected, the cores were much larger physically than those of read-write core memory. This type of memory was exceptionally reliable. An example was the Apollo Guidance Computer used for the moon landings.
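The thread-or-bypass encoding above can be sketched as follows. This is a hedged abstraction: each core is represented simply by the set of address wires threaded through it, and the function names are invented:

```python
def build_rope(words, width=8):
    """Represent a core rope as, for each bit position (core), the set
    of address wires threaded through it (i.e. the addresses whose
    word has a 1 in that position)."""
    rope = [set() for _ in range(width)]
    for addr, word in enumerate(words):
        for bit in range(width):
            if (word >> bit) & 1:
                rope[bit].add(addr)   # wire threaded through this core
    return rope

def read_rope(rope, addr):
    """Pulse one address wire: every core it threads couples a pulse
    onto its sense winding (bit = 1); bypassed cores stay quiet."""
    word = 0
    for bit, threaded_addrs in enumerate(rope):
        if addr in threaded_addrs:
            word |= 1 << bit
    return word
```

Since the data is fixed by how the wires are physically woven, the "memory" cannot be altered electrically, which is exactly why it served so well as program storage in the Apollo Guidance Computer.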
Physical characteristics
The performance of early core memories can be characterized in today's terms as being very roughly comparable to a clock rate of 1 MHz (equivalent to early 1980s home computers, like the Apple II and Commodore 64). Early core memory systems had cycle times of about 6 µs, which had fallen to 1.2 µs by the early 1970s, and by the mid-1970s it was down to 600 ns (0.6 µs). Some designs had substantially higher performance: the CDC 6600 had a memory cycle time of 1.0 µs in 1964, using cores that required a half-select current of 200 mA.[9] Everything possible was done in order to decrease access times and increase data rates (bandwidth), including the simultaneous use of multiple grids of core, each storing one bit of a data word. For instance, a machine might use 32 grids of core with a single bit of the 32-bit word in each one, and the controller could access the entire 32-bit word in a single read/write cycle.
Core memory is non-volatile storage: it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. These were important advantages for some applications, like first-generation industrial programmable controllers, military installations and vehicles like fighter aircraft, as well as spacecraft, and led to core being used for a number of years after the availability of semiconductor MOS memory (see also MOSFET). For example, the Space Shuttle flight computers initially used core memory, which preserved the contents of memory even through the Challenger's disintegration and subsequent plunge into the sea in 1986.[citation needed]
Another characteristic of early core was that the coercive force was very temperature sensitive: the proper half-select current at one temperature is not the proper half-select current at another temperature. So the memory controllers would include temperature sensors (typically a thermistor) to adjust the current levels correctly for temperature changes. An example of this is the core memory used by Digital Equipment Corporation for their PDP-1 computer; this strategy continued through all of the follow-on core memory systems built by DEC for their PDP line of air-cooled computers. Another method of handling the temperature sensitivity was to enclose the magnetic core "stack" in a temperature-controlled oven. Examples of this are the heated-air core memory of the IBM 1620 (which could take up to 30 minutes to reach operating temperature, about 106 °F, 41 °C) and the heated oil-bath core memory of the IBM 7090, early IBM 7094s, and IBM 7030.
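The thermistor-based compensation above amounts to derating the drive current as the stack warms, since the coercive force falls with temperature. The sketch below is purely illustrative: the linear model, the 0.6 mA/°C coefficient and the 25 °C reference point are invented, though the 200 mA reference current echoes the CDC 6600 figure cited earlier:

```python
def half_select_current(temp_c, i_ref_ma=200.0, t_ref_c=25.0,
                        coeff_ma_per_c=0.6):
    """Half-select drive current adjusted linearly around a calibrated
    reference point: warmer cores need less current to flip."""
    return i_ref_ma - coeff_ma_per_c * (temp_c - t_ref_c)
```

A real controller would derive this correction from the thermistor's resistance in the analogue drive circuitry rather than computing it, and each stack would be calibrated individually.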
Core was heated instead of cooled because the primary requirement was a consistent temperature, and it was easier (and cheaper) to maintain a constant temperature well above room temperature than one at or below it.
In 1980, the price of a 16 kW (kiloword, equivalent to 32 kB) core memory board that fitted into a DEC Q-bus computer was around US$3,000. At that time, the core array and supporting electronics fitted on a single printed circuit board about 25 x 20 cm in size; the core array was mounted a few millimetres above the PCB and was protected with a metal or plastic plate.
Diagnosing hardware problems in core memory required time-consuming diagnostic programs to be run. While a quick test checked whether every bit could contain a one and a zero, these diagnostics tested the core memory with worst-case patterns and had to run for several hours. As most computers had just a single core memory board, these diagnostics also moved themselves around in memory, making it possible to test every bit. An advanced test was called a "Schmoo test", in which the half-select currents were modified along with the time at which the sense line was tested ("strobed"). The data plot resembled a cartoon character called "Schmoo", and the name stuck. On many occasions, errors could be resolved by gently tapping the printed circuit board with the core array on a table. This slightly changed the positions of the cores relative to the wires running through them and could fix the problem. The procedure was seldom needed, as core memory proved to be very reliable compared to other computer components of the day.
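A Schmoo test is essentially a two-dimensional sweep over operating margins. The sketch below is a hedged abstraction: `memory_passes` stands in for the actual hardware test, and the parameter ranges in the usage are invented:

```python
def schmoo(currents_ma, strobe_times_ns, memory_passes):
    """Sweep half-select current against sense-strobe time and record
    pass/fail at each point. Plotting the passing region gives the
    blob whose outline earned the test its name."""
    plot = {}
    for current in currents_ma:
        for strobe in strobe_times_ns:
            plot[(current, strobe)] = memory_passes(current, strobe)
    return plot
```

A healthy memory shows a generous passing region around the nominal operating point; a marginal core or drive circuit shrinks or distorts the blob long before the memory fails outright at nominal settings.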
Emitter-coupled logic
In electronics, emitter-coupled logic (ECL) is a logic family that achieves high speed by using an overdriven BJT differential amplifier with single-ended input whose emitter current is limited to avoid the slow saturation region of transistor operation.[2] As the current is steered between two legs of an emitter-coupled pair, ECL is sometimes called current-steering logic (CSL),[3] current-mode logic(CML)[4] or current-switch emitter-follower (CSEF) logic.[5]
In ECL, the transistors are never in saturation, the input/output voltages have a small swing (0.8 V), the input impedance is high and the output resistance is low; as a result, the transistors change states quickly, gate delays are low, and the fanout capability is high.[6] In addition, the essentially-constant current draw of the differential amplifiers minimises delays and glitches due to supply-line inductance and capacitance, and the complementary outputs decrease the propagation time of the whole circuit by saving additional inverters.
ECL's major disadvantage is that each gate continuously draws current, which means it requires (and dissipates) significantly more power than other logic families, especially when quiescent.
The equivalent of emitter-coupled logic made out of FETs is called source-coupled FET logic (SCFL).[7]
A variation of ECL in which all signal paths and gate inputs are differential is known as differential current switch (DCS) logic.[8]
History
ECL was invented in August 1956 at IBM by Hannon S. Yourke.[10][11] Originally called current-steering logic, it was used in the Stretch, IBM 7090, and IBM 7094 computers.[9]
While ECL circuits in the mid-1960s through the 1990s consisted of a differential amplifier input stage to perform logic, followed by an emitter follower to drive outputs and shift the output voltages so they will be compatible with the inputs, Yourke's current switch, also known as ECL, consisted only of differential amplifiers. To provide compatible input and output levels, two complementary versions were used, an NPN version and a PNP version. The NPN output could drive PNP inputs, and vice-versa. "The disadvantages are that more different power supply voltages are needed, and both pnp and npn transistors are required."[9]
Motorola introduced their first digital monolithic integrated circuit line, MECL I, in 1962.[12] Motorola developed several improved series, with MECL II in 1966, MECL III in 1968 with 1 nanosecond gate propagation time and 300 MHz flip-flop toggle rates, and the 10,000 series (with lower power consumption and controlled edge speeds) in 1971.[13]
The drawbacks associated with ECL have meant that it has been used mainly when high performance is a vital requirement. Older high-end mainframe computers, such as the Enterprise System/9000 members of IBM's ESA/390 computer family, used ECL,[14] as did the Cray-1;[15] current IBM mainframes use CMOS.
Implementation
ECL is based on an emitter-coupled (long-tailed) pair, shaded red in the figure on the right. The left half of the pair (shaded yellow) consists of two parallel-connected input transistors T1 and T2 (an exemplary two-input gate is considered) implementing NOR logic. The base voltage of the right transistor T3 is held fixed by a reference voltage source, shaded light green: the voltage divider with a diode thermal compensation (R1, R2, D1 and D2) and sometimes a buffering emitter follower (not shown on the picture); thus the emitter voltages are kept relatively steady. As a result, the common emitter resistor RE acts nearly as a current source. The output voltages at the collector load resistors RC1 and RC3 are shifted and buffered to the inverting and non-inverting outputs by the emitter followers T4 and T5 (shaded blue). The output emitter resistors RE4 and RE5 do not exist in all versions of ECL. In some cases 50 Ω line termination resistors connected between the bases of the input transistors and −2 V act as emitter resistors.[16]
Operation
The ECL circuit operation is considered below with the assumption that the input voltage is applied to the T1 base, while the T2 input is unused or a logical "0" is applied.
During the transition, the core of the circuit, the emitter-coupled pair (T1 and T3), acts as a differential amplifier with single-ended input. The "long-tail" current source (RE) sets the total current flowing through the two legs of the pair. The input voltage controls how this current is shared between the two legs, steering it entirely to one side once the input is away from the switching point. Near the switching point the gain is higher than at the end states (see below), so the circuit switches quickly.
At low input voltage (logical "0") or at high input voltage (logical "1"), the differential amplifier is overdriven. One transistor (T1 or T3) is cut off, and the other (T3 or T1) is in the active linear region, acting as a common-emitter stage with emitter degeneration that takes all the current, starving the cut-off transistor.
The active transistor is loaded with the relatively high emitter resistance RE that introduces a significant negative feedback (emitter degeneration). To prevent saturation of the active transistor so that the diffusion time that slows the recovery from saturation will not be involved in the logic delay,[2] the emitter and collector resistances are chosen such that at maximum input voltage some voltage is left across the transistor. The residual gain is low (K = RC/RE < 1). The circuit is insensitive to the input voltage variations and the transistor stays firmly in active linear region. The input resistance is high because of the series negative feedback.
The cut-off transistor breaks the connection between its input and output. As a result, its input voltage does not affect the output voltage. The input resistance is high again since the base-emitter junction is cut-off.
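The current-steering behaviour described above can be captured in a simple behavioural model. This is a hedged sketch, not a circuit simulation: it uses nominal 10K-series levels (V_OH ≈ -0.9 V, V_OL ≈ -1.75 V, switching reference V_BB ≈ -1.29 V) and hides all the analogue detail behind a threshold comparison:

```python
V_OH = -0.9    # nominal output high after the emitter followers (volts)
V_OL = -1.75   # nominal output low (volts)
V_BB = -1.29   # reference voltage on T3's base (volts)

def ecl_nor(v_in1, v_in2):
    """Behavioural model of the two-input ECL gate: if either input
    sits above V_BB, the tail current is steered through the input
    side (T1/T2), pulling the inverting (NOR) collector low; otherwise
    it flows through T3. Both complementary outputs are returned."""
    input_side_on = v_in1 > V_BB or v_in2 > V_BB
    nor_out = V_OL if input_side_on else V_OH
    or_out = V_OH if input_side_on else V_OL
    return nor_out, or_out
```

The pair of complementary outputs falls straight out of the topology: the steered current must flow through exactly one of the two collector resistors, so OR and NOR come for free.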
Characteristics
Other noteworthy characteristics of the ECL family include the fact that the large current requirement is approximately constant and does not depend significantly on the state of the circuit. This means that ECL circuits generate relatively little power noise, unlike many other logic types, which typically draw far more current when switching than when quiescent and for which power noise can become problematic. In cryptographic applications, ECL circuits are also less susceptible to side-channel attacks such as differential power analysis.
The propagation time for this arrangement can be less than a nanosecond, making it for many years the fastest logic family.
Power supplies and logic levels
ECL circuits usually operate with negative power supplies (the positive end of the supply connected to ground), in contrast to other logic families, in which the negative end of the supply is grounded. This is done mainly to minimize the influence of power supply variations on the logic levels, as ECL is more sensitive to noise on VCC and relatively immune to noise on VEE.[17] Because ground should be the most stable voltage in a system, ECL is specified with a positive ground. In this connection, when the supply voltage varies, the voltage drops across the collector resistors change only slightly (in the case of an emitter constant-current source, not at all). As the collector resistors are firmly "tied" to ground, the output voltages move only slightly, or not at all. If the negative end of the power supply were grounded instead, the collector resistors would be attached to the positive rail. As the nearly constant voltage drops across the collector resistors would still change only slightly (or not at all), the output voltages would follow the supply voltage variations, with the two circuit parts acting as constant-current level shifters; in that case, the voltage divider R1-R2 would compensate for the variations only to some extent. A positive power supply has another disadvantage: the output voltages would vary slightly (±0.4 V) against the background of a high constant voltage (+3.9 V). Another reason for using a negative power supply is protection of the output transistors from an accidental short circuit developing between output and ground[18] (the outputs are not protected from a short circuit with the negative rail).
The value of the supply voltage is chosen so that sufficient current flows through the compensating diodes D1 and D2 and the voltage drop across the common emitter resistor RE is adequate.
ECL circuits available on the open market usually operated with logic levels incompatible with other families. This meant that interoperation between ECL and other logic families, such as the popular TTL family, required additional interface circuits. The fact that the high and low logic levels are relatively close together means that ECL suffers from small noise margins, which can be troublesome.
At least one manufacturer, IBM, made ECL circuits for use in the manufacturer's own products. The power supplies were substantially different from those used in the open market.[14]
Positive emitter-coupled logic (PECL) is a further development of ECL using a positive 5 V supply instead of a negative 5 V supply. Low-voltage positive emitter-coupled logic (LVPECL) is a power-optimized version of PECL, using a positive 3.3 V supply instead of 5 V. PECL and LVPECL are differential signaling systems and are mainly used in high-speed and clock-distribution circuits.
Logic levels:[19]
| Type | Vee | Vlow | Vhigh | Vcc |
|---|---|---|---|---|
| PECL | GND | 3.4 V | 4.2 V | 5.0 V |
| LVPECL | GND | 1.6 V | 2.4 V | 3.3 V |