Monday, 28 November 2011

IET History of Technology Swansea 2006

Below is a copy of a paper on the early hands-on introduction of digital techniques to the steel industry at Port Talbot, Wales. It was given to an IET (Institution of Engineering and Technology, UK) History of Technology conference in Swansea in July 2006. It was recently copied onto one of my blogs as a way of preserving the text and providing easy access. The link name indicates the most significant in-house development of all: a SCADA (Supervisory Control and Data Acquisition) system that was large, fast and reliable at a time when most commercially available systems were none of these.

Because the computer system was based on multiple radial serial links, it was unique, rugged and far easier to maintain on-line than one based on Ethernet.

POSTSCRIPT ON ISSUE OF THIS BLOG VERSION  on 28 November 2011
'The First Thirty Years of Digital Control at Port Talbot Steelworks'

The text is virtually unchanged from the original issue of this paper. However, instead of just six photographs and a few diagrams grouped at the end of the text, this Google Blogspot format has allowed far more photographs to be embedded at appropriate points in the text, together with more informative explanations of their content.

The more time passes, and the more apparent it becomes that other small dedicated groups have made, and are still making, major advances in computer applications, the more I believe we made huge strides in the development of a unique SCADA system: in the number of signals handled, in speed of response, and in system architecture, including live dual computer systems merged with telemetry via independent radial links to byte-handling microprocessors, plus, above all, highly efficient software designed specifically for our SCADA purposes, spanning our own organiser, drivers, data handling, functional design for microprocessors, and highly efficient, fast, informative character graphics (as opposed to pixels) on the first 16-colour VDUs.

For years I came away from trade exhibitions mystified to find the 'big boys' (e.g. Siemens and Asea/ABB) demonstrating similar systems with unbelievably poor VDU responses, far worse than we were getting from our large on-line SCADA systems. ICI's in-house development team ran into similar problems towards the end when using DEC's VMS operating system. As with today's laptop PCs, a thousand times faster and far bigger in memory capacity than the computers in this paper, I fear the answer lies in the use of complex general-purpose software, over-reliant on inevitably slow disc transfers, for operating systems, drivers and applications.

We were able to use radial links thanks to the spare telephone links radiating to every part of a huge steelworks site from its Energy Control Centre. These systems were rugged, had data link monitoring built in, had excellent degradation characteristics, and allowed easy changeover of just 16 serial links to instrumentation controls, PLCs and VDUs, because both computers were always collecting the same live data. In sum, that made for easy fault diagnosis. All these characteristics were vital, since the team's task was not just to design and build the software/hardware systems but to maintain them 7x24 as well, a task made feasible by the quality of the design and implementation work and by the depth of understanding within the team which came from the use of 'naked' computers containing nothing but home-produced software.


Radial, as opposed to network, design of this type has so much to offer. Major development of data transmission by wireless opens up the potential of point to point radio, an obvious modern alternative to cable.

Brian Corbett          29 November 2011

Today's iPad is demonstrating the enormous improvements in computer system performance possible from simplicity and focussed design.

Brian Corbett           17 July 2012
 


THE FIRST THIRTY YEARS OF DIGITAL CONTROL AT PORT TALBOT STEELWORKS
by Brian Corbett   B Sc (Eng), ACGI, FIET


1)  INTRODUCTION

In April 1966 the Steel Company of Wales, on the advice of management consultants, decided to set up an Automation Development Department. I was one of three recruits to join the two engineers already supervising automation projects on the Hot and Cold Mills. Our task was to spread computer automation techniques to the rest of the plant. The TRW computer employed on the Cold Mill seemed particularly archaic, since its only memory was on a rotating magnetic drum, so efficient software implied laying out the program and data in such a way that the next piece required was about to come under the reading head. By contrast the new Hot Mill scheme being engineered by American GE seemed modern, with fast random access memory in the form of ultra reliable core store plus a magnetic drum for bulk data.
 

The early history of computing had been essentially a battle for reliability of the Central Processing Unit (CPU) and the realisation of fast reliable memory, both for bulk storage and for read/write random access. To emphasise the point, note that Manchester claims to have run the world's first stored program on 21 June 1948, on a 32 bit computer of pentode vacuum tubes (valves) with a memory of just 128 bytes, stored as charges on a cathode ray tube and visible as spots of light. That was less than sixty years ago; at the time it was a major achievement to build a machine reliable enough to complete a single execution of a program. In deference to its Manchester past, the CPU was then known as the 'mill', a fine analogy in which the binary program is likened to the lifting of individual warp threads, and the cycle to the passing of the shuttle, so that, one weft thread after another, the intended pattern is created in the textile.


Unprecedented history was made in computers in less than sixty years, of which I spent thirty ‘hands on’ in industrial process automation system development. This paper is based on research of my memory and records with particular emphasis on the developments which led to large scale SCADA being developed at Port Talbot.


By 1966 the hardware was poised for take off industrially as the basis of reliable, fast process control. Ferranti, for instance, on whose hardware most of this history rests, had just introduced the Argus 500 computer (Fig 1) with small scale integration (a whole transistorised OR gate in just one can!); the RAM of the day was implemented as magnetic core (one toroid per bit) and the disc was sealed, with a fixed head per track. Program was loaded via paper tape readers and dumped by paper tape punches. The choice for printers was between the rugged teleprinter with upper case letters only, a legacy of the restrictions of the 6 bit code which preceded ASCII, and the IBM electric typewriter which had taken the touch out of typing in the office. One other vital feature of the day was the issuing by Ferranti of the detailed drawings of their I/O and much software source code - a far cry from the policy of withholding information from the public domain, as adopted successively by IBM, DEC and Microsoft.

FERRANTI ARGUS 500, a 1 MHz processor, shown with just one (max 4) 12kB core memory module 


This is essentially a case history of 30 years from that point in time, excluding developments in rolling mill control: a story of hands-on development of industrial control systems by a nine-man team of engineers and programmers working under the direction of the author at Port Talbot works. It was an era which provided an exceptionally wide brief, from understanding the processes well enough to design the control system, through selection and purchase of the hardware, designing the system architecture and communication via data links, and producing the complete mini computer software including simple efficient operating systems, to mathematical modelling and the on-line control itself. The team were usually responsible for the installation, system testing and commissioning, and for the production, maintenance and updating of software for the lifetime of the system.


We were not alone in having such a brief, which was fairly common at the time amongst major process industries, notably in the involvement of ICI. At the time only prosperous basic process industries offered sufficient payback potential to justify the expense of a computer system. Although in 1966 a computer was a thousand times less powerful than today's PC, a system including a moderate amount of input output and termination racking would cost £100,000. The total cost when programmed, installed, cabled and commissioned was several times greater. Cost and time overruns were usual and particularly severe in the software area. As will be seen, some of the early hoped-for savings were illusory, but by 1980 all had changed and it was difficult to satisfy widespread demand - there was no other show in town.


2) STATE OF PROCESS COMPUTING IN 1966

Like most recruits of the era, my introduction was via manufacturers' programming courses - IBM, GEC and Ferranti in my case. The syllabuses of such courses were very similar. The first thing was to learn binary arithmetic and Boolean algebra, for neither had featured in the university education of electronic engineers in the early fifties.


The world of process computing then was still inherently about binary numbers; software was written in assembler languages which had little more than a one-to-one relationship with machine code, just a few mnemonics, names and comments to explain the intent of each instruction. Talk about diving in at the deep end! No easing in by way of word processing, spreadsheets or Internet browsing, for they didn't exist.


Then, as now, the computer's memory was full of nothing but ones and noughts organised into words, 24 bits in the case of the Ferranti Argus, and there was no way of avoiding binary. The fundamental conceptual problem we all faced in this binary maze was to appreciate clearly the difference between two types of binary number, a memory address (its location) and the contents of that memory address; and to appreciate that whilst one program instruction referred to a memory address in binary, another referred directly to a quantity in binary. Besides program codes and binary integers, the contents could be four alphabetic letters in 6 bit binary codes, or six 4 bit binary coded decimal symbols of 0-9 (which needed to be converted to true binary integers before arithmetic use), or 24 bits of totally independent on-off states, or a number expressed as an exponential (a fraction in binary multiplied by a binary power).
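As a present-day illustration of those distinctions (not period code - the Argus was programmed in assembler), the C sketch below models a 24 bit word in a modern integer and shows the same bit pattern read as an address, an integer, packed 6 bit character codes, BCD digits and independent on-off states.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a 24-bit Argus word modelled in the low 24 bits of a
   32-bit integer.  The same bit pattern means different things depending
   on how the program chooses to interpret it. */
typedef uint32_t word24;

int main(void)
{
    word24 store[8] = {0};          /* a tiny "core store"              */
    word24 value    = 0x00ABCDEF;   /* contents: just 24 bits           */
    unsigned addr   = 3;            /* an address: also just a number   */

    store[addr] = value;            /* the address selects, the contents fill */

    /* Interpretation 1: a binary integer */
    printf("as integer   : %u\n", store[addr]);

    /* Interpretation 2: four 6-bit character codes packed in one word */
    for (int i = 3; i >= 0; i--)
        printf("6-bit code %d : %u\n", i, (store[addr] >> (6 * i)) & 077);

    /* Interpretation 3: six 4-bit BCD digits (converted before arithmetic) */
    for (int i = 5; i >= 0; i--)
        printf("BCD digit %d  : %u\n", i, (store[addr] >> (4 * i)) & 0xF);

    /* Interpretation 4: 24 independent on/off plant states */
    printf("switch 7 is  : %s\n", (store[addr] >> 7) & 1 ? "ON" : "OFF");
    return 0;
}
```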


Its binary nature was further evident from the method of storing program on 8 hole paper tape: vital paper that flew through the reader at lightning speed (300 eight-bit characters per second) and occasionally jammed and tore. Additional sections had to be hand punched and carefully inserted as a splice in order to restore the tape to use; the same procedure was used to make minor modifications to a program. Most arithmetic in process control was by binary integers, so even though mathematical models were written and tested in Fortran on an IBM 360 mainframe, they had to be compiled to run in integer arithmetic.

Nor was there a great deal of written help. For instance the first task when faced with an empty computer was to load a paper tape containing the bootstrap program. After moving a few places the tape would invariably stop, leaving you scratching your head. The secret was to try to anticipate what the computer was waiting for - for instance the binary code of the start address needing to be set up on the 24 hand-keys.



3) AUTOMATION DEVELOPMENT DEPARTMENT

When Derrick Harvey set up the Automation Development Department in 1966 the author was one of three graduates brought in to work on computer systems.


David Hutchinson’s first project was a very ambitious large scale (£1,000,000) system on the new Basic Oxygen Steel-making (BOS) Plant. It was essentially contracted as a turn-key project to English Electric, using a Marconi Myriad computer, though the laboratory was handled differently 5. Like the Hot Mill this early project had considerable BSC involvement in the software development, but under the control of the supplier. The idea was to ensure that after commissioning Port Talbot would be capable of on-site software maintenance. He was later in charge of the entire Project Engineering Department at Port Talbot.


Tony Beswick chose a radically different approach for the first system on a blast furnace. He decided to do all the system design and commissioning in house. Free to select the computer solely on its own merits, he chose Ferranti’s brand new Argus 500 hardware coupled with their new disc operating system (Director), to which was added purpose designed application software, operator interface panels and remote input output scanners purchased from Solatron. The essential task of the computer was to collect real time process information so as to run a dynamic mathematical model of the process being developed at BISRA (British Iron and Steel Research Association) and later transferred to Port Talbot. That project was to consume 30 man years of BSC software and application effort over a 10 year period, but it provided the basis for confident in-house system and software production. Tony later played a major role in the development of Ferranti’s own software.



Computer developer's view of Tony Beswick's Ferranti Argus 500 driven Blast Furnace system; note the paper tape reader, punch and file, and the 24 bit key input-output monitor panel


The author spent his first year on a promising but ultimately futile attempt to automate the first ore grab unloader, part of the new tidal harbour for 100,000 ton ships, but doomed to defeat by the hostile outdoor environment in which the grab swing had to be measured and transmitted. A further six months followed for a cost benefit study on integrating the works gas and electrical power systems, which seeded the desire to establish an energy control computer system. 


Lucky to be the last to start a computer project, I was able to learn from the experiences of the other two. Both on-going projects were making much slower progress than expected, and in both cases the major problems appeared to rest with the amount of effort required for software development. All the talk was of software crashes, of unreliable input/output, and of problems understanding exactly what characters were passing along the serial data links.

The complexity of both the Blast Furnace and Basic Oxygen Steel-making (BOS) projects was far outstripping our ability to implement. Simplicity was essential, especially in such a complex technology. I drew conclusions which would serve as essential guides for the rest of my career.


Writing new bespoke software was fraught and time consuming. It was better to base systems on software which had the designed-in flexibility to be used on a variety of processes and, just as important, to be extended by evolution not revolution.


The quality of basic control of process inputs was poor because most of them still depended on manual control, weighing of raw materials for example. Basic automation was the key; it was necessary to learn to walk first.


Disc operating systems were slow, complex, and the major cause of unreliability. An appropriate real time control system had to be crash free and operate 168 hours a week, or 24 by 7 as they now say.



4) BATCH ANNEALING (June 68 - 1969)

Annealing is a major process in steel strip production; it is used to stress relieve strip after the work hardening of cold rolling. It was then done on almost 100 bases, each holding 16 coils in four stacks. Annealing does not start at temperatures below 650 °C and the rate increases rapidly with temperature, but above 735 °C the strip surface is degraded by the production of coarse carbides or, even more seriously, by the coil laps welding together.


The major problem was to ensure all four stacks received the same specified annealing. At the time all balancing of the individual stacks was exercised by hand control of around 20 burners per furnace by operators moving round a huge processing bay. In spite of their attention invariably one pedestal ‘lagged’ (failed to reach the soak temperature promptly) and delayed the whole batch for many hours, sometimes tens of hours. In addition there were frequent examples where just turning off the gas (also a manual operation) at the completion of the cycle was delayed wastefully for several hours.


In June 1968 a manual control exercise quantified the potential for automatic 4-zone control as a time saving (plant capacity) of 25% and an associated gas consumption reduction of 10%. One furnace was entirely rebuilt to make it suitable for zone control by computer. On request, R&D undertook an exercise to identify an appropriate formula defining the instantaneous annealing rate as a function of temperature. The computer, by real time integration, accumulated the amount of annealing, a technique which soon became the universally accepted method across the industry.
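The R&D formula itself is not reproduced in the paper, so the C sketch below uses a purely hypothetical rate_of() curve; it simply illustrates the technique described - accumulating the instantaneous annealing rate by real-time integration at each scan until a target soak value is reached. The scan interval, target and temperature behaviour are all assumptions for the example.

```c
#include <stdio.h>

/* Hypothetical annealing-rate curve: zero below 650 C, rising steeply with
   temperature.  The real formula was supplied by R&D and is not given here. */
static double rate_of(double temp_c)
{
    if (temp_c < 650.0) return 0.0;
    return (temp_c - 650.0) * (temp_c - 650.0) / 1000.0;   /* arbitrary shape */
}

int main(void)
{
    const double scan_s = 10.0;    /* scan interval in seconds (assumed)     */
    const double target = 500.0;   /* required accumulated annealing units   */
    double accumulated  = 0.0;
    double temp_c       = 640.0;
    double elapsed_s    = 0.0;

    /* Each scan: read the zone temperature, integrate the instantaneous
       annealing rate, and stop (gas off) once the target has been reached. */
    while (accumulated < target) {
        if (temp_c < 730.0) temp_c += 0.5;          /* stand-in for the plant  */
        accumulated += rate_of(temp_c) * scan_s;    /* rectangle-rule integral */
        elapsed_s   += scan_s;
    }
    printf("soak complete after %.0f s, accumulated annealing = %.1f\n",
           elapsed_s, accumulated);
    return 0;
}
```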


Ferranti, in conjunction with ICI, had commissioned the world’s first multi-loop Direct Digital Control (DDC) in 1962. Later they consolidated their experience, in conjunction with Esso Fawley, developing a control software suite called Consul (Control Subroutine Language) applicable to any process industry. By 1968 they were selling a slightly cut down system, Consul B, programmed by Ian Kirk and John Thompson. It ran in core store without the need for a disc operating system, but it still provided sophisticated multi-loop 3-term control, alarm and basic process record logging, and allowed the operator to interact with any one selected loop using a standard Consul control panel. If bespoke features were required they had to be written to integrate with this framework.


Consul B software provided 16 subroutines which could be interconnected to form a control loop. For instance, one subroutine calculated the error and another provided the PID algorithm. For this application special software was incorporated by BSC to provide co-ordinated stack control, using a combination of nine temperatures and three set points, and also to carry out the integration.
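The following C sketch is not Consul B code, merely an illustration of the idea: small standard subroutines (here an error calculation and a 3-term PID) chained together per loop by configuration rather than by bespoke programming. All names, gains and the crude process stand-in are invented for the example.

```c
#include <stdio.h>

/* A sketch of the Consul idea: standard subroutines chained per loop by
   configuration data, not by bespoke programming. */
typedef struct {
    double sp, mv;                 /* set point, measured value          */
    double kp, ki, kd;             /* 3-term gains                       */
    double err, prev_err, integ;   /* working storage per loop           */
    double out;                    /* controller output                  */
} loop_t;

static void calc_error(loop_t *l) { l->err = l->sp - l->mv; }

static void pid_3term(loop_t *l, double dt)
{
    l->integ += l->err * dt;
    double deriv = (l->err - l->prev_err) / dt;
    l->prev_err  = l->err;
    l->out = l->kp * l->err + l->ki * l->integ + l->kd * deriv;
}

int main(void)
{
    loop_t zone = { .sp = 700.0, .mv = 655.0, .kp = 0.8, .ki = 0.05, .kd = 0.0 };
    const double dt = 1.0;                 /* control interval, seconds   */

    for (int scan = 0; scan < 5; scan++) { /* the "interconnection"       */
        calc_error(&zone);                 /* subroutine 1: error         */
        pid_3term(&zone, dt);              /* subroutine 2: 3-term PID    */
        printf("scan %d: error %.1f  output %.2f\n", scan, zone.err, zone.out);
        zone.mv += 0.02 * zone.out;        /* crude stand-in for the process */
    }
    return 0;
}
```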


A two-week period working night shift in Manchester, with our control scheme installed on the target computer just prior to delivery, was sufficient to demonstrate the viability of the scheme. The scheme was tested back to back with a simulation of the process, also written in Consul.


Within weeks of delivery of the computer to Port Talbot, cabling was completed to two bases and the newly designed furnace. So, within seven months of the start of the detailed investigation into the application of Consul B to the process, several annealing charges had been controlled by DDC under shift supervision by development engineers. They went without a hitch and led directly into a major process trial on over 5,000 tons of steel on two bases over a period of several months. This time the operatives were in charge of shift operations, and the results exceeded expectations.


Given the prevalence of manual control across the works, it showed that simple but effective automatic control could result in huge savings, improved product quality, and increased capacity. More than anything else it was a case of simply doing the right thing all the time: computers were good at that, but men got bored.


Unfortunately, in spite of also demonstrating the impeccable reliability of core RAM based process computer systems in 1969, it didn't lead to the hoped-for computerisation of annealing, just a half-hearted implementation a decade later. 1969 was too early for over-manned Port Talbot to risk trusting control of a major in-line process to a computer system, an attitude which prevailed for a decade. Data loggers would be sanctioned and mathematical modelling was a status symbol, but forget on-line control.


Moreover there seemed little point in aiming for sophisticated mathematical modelling until the process inputs were at least properly measured and regulated. But it was hard to convince people of the benefits given the high capital cost of computer systems of the era. Far too many systems, then as now, were set off to chase over-ambitious targets, justified by huge potential savings which were to prove illusory.


As for the computers themselves, it was evident that they were really about software. Yet the talk of the day, then as now, was all of hardware (speed of processor and input-output, and memory capacity). Consul B handed control of configuration to the engineer who understood the process. This approach also reduced cost by eliminating much bespoke software, reduced time scales from years to months, and vitally provided crash-free reliability.

FERRANTI CONSUL B monitoring and input panel as used on Annealing project


DDC was the direct forerunner of the now widespread application of SCADA (supervisory control and data acquisition), which morphed into total digital control with the advent of independent digital 3-term controllers and PLCs. Ferranti demonstrated with Consul B that much could be done with core-store-only systems and a small efficient operating system. Taking into account the far greater speed of response and far greater reliability, it actually delivered more than complex systems. It offered a framework for the development of real time control systems via incremental, evolutionary development of software. The path of this evolution is the real theme of this paper.


5) IRONWORKS AT LLANWERN (project start 1971, blast furnace blow in Feb 76) 1, 2

Llanwern ‘Scheme C’ was carried out largely to increase the production capacity of the ‘heavy end’ of the works, iron and steel making. The development of the ironworks included building the biggest blast furnace to date in the UK, plus new raw material Bedding, Sinter and Coke Oven plants. IBM, Honeywell, GEC and Ferranti prepared bids, most with software similar to Consul. Ironically GEC had already shelved Conrad, one of the earliest implementations of its type, though it was later restored to become the basis of their cold mill control schemes.


Ferranti Argus computers were chosen as the cheapest and most appropriate basis. Consul was used for the new blast furnace and the new sinter plant, but these and the laboratory were to be data linked to a higher level Ironmaking computer which incorporated a disc operating system.


The crux of the upgrade to Consul was to substitute monochrome Visual Display Units (VDUs), displaying 32 rows each of 64 characters, for its original single loop display panel. In a direct comparison with its predecessor, 32 loops could now be displayed simultaneously. But the VDU was capable of displaying any layout of variables, each layout being referred to as a page. There were around ten VDUs, each with its own special Consul keyboard, and each could be displaying a different page. Most of the numbers on display would be plant variables, temperature for example, but some of them, say set points or switches, could be varied by manual entry, using the cursor to select the position and merely overwriting the value before entering it.
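The sketch below is not the Consul H implementation, just a C illustration of how such a character page might be represented: a 32 x 64 grid of fixed text plus a table of variable fields (row, column, width, mnemonic tag) which a refresh task overwrites with the latest scan values. The tags, layout text and values are all invented for the example.

```c
#include <stdio.h>
#include <string.h>

/* A sketch of a character-graphics "page": a 32 x 64 cell screen holding
   fixed text, plus a table of variable fields that the refresh task
   overwrites with current scan values. */
#define ROWS 32
#define COLS 64

typedef struct {
    int  row, col, width;     /* where the value goes on the page        */
    char tag[9];              /* mnemonic identity of the plant variable */
} field_t;

static char page[ROWS][COLS + 1];

static void put_text(int row, int col, const char *s)
{
    memcpy(&page[row][col], s, strlen(s));
}

int main(void)
{
    /* Build the static layout once. */
    for (int r = 0; r < ROWS; r++) { memset(page[r], ' ', COLS); page[r][COLS] = '\0'; }
    put_text(0, 0, "BLAST FURNACE STOVES");
    put_text(2, 0, "STOVE 1 TEMP");
    put_text(3, 0, "STOVE 1 SET PT");

    field_t fields[] = { {2, 20, 6, "STV1T"}, {3, 20, 6, "STV1SP"} };
    double  values[] = { 1052.4, 1100.0 };    /* would come from the scan  */

    /* Refresh: overwrite each variable field with its latest value.       */
    for (unsigned i = 0; i < 2; i++) {
        char buf[16];
        snprintf(buf, sizeof buf, "%*.1f", fields[i].width, values[i]);
        put_text(fields[i].row, fields[i].col, buf);
    }
    for (int r = 0; r < 4; r++) puts(page[r]);   /* stand-in for the VDU    */
    return 0;
}
```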


This first evolutionary upgrade thus allowed Consul to grow massively into the VDU era and it was renamed Consul H. It is arguable that this was the first implementation of SCADA, though because of the limitation of monochrome displays it would scarcely be recognised as such today. 

FERRANTI CONSUL H  Full (Programmers) Keyboard as used at Llanwern Scheme C


The full Consul keyboard had grown to match the vast increase in project complexity. Each variable was now identified by what was essentially a five hexadecimal digit identity, where the first three were mnemonics, say HOP (hopper) for the plant area, ORE for material and the suffix WT for weight. Although a full keyboard was needed to design the system, a far simpler subset keyboard allowed the plant operator access via selection of a predefined VDU page.

FERRANTI CONSUL H Keyboard as used in Control Room
 

Each Consul system ran in core store but the systems were equipped with a disc purely to provide memory for the off-line production of a new ‘program’. Program is written in inverted commas because in truth all it defined were the linkages between pre-written subroutines. The disc was used to keep two versions of program, the first an image of what was running in the core and the second the latest configuration being developed by the engineers - thus providing a fast reload to upgrade and an equally fast one to return to a proven program.


Simultaneously there had been an important step forward technically at the higher Ironworks level, the adoption of the high level language Coral 66 which had recently been developed by the Royal Radar Establishment (RRE) at Malvern and was already in use at Port Talbot blast furnaces. Coral was later accepted as the standard language for process control across BSC. Years later the programmers switched with surprising ease to the ‘C’ language, used to write UNIX and PC code, because of the similarity in range of facilities.


Process control software needs flexibility and guaranteed fast responses to real events. Many separate programs have to be obeyed in ever changing priority order, to simulate parallel processing. Nor was it viable to allocate one whole 24 bit memory word to the changing status of just one switch, for automation involves a huge number of such single bit variables. Bit manipulation (shift instructions) and masking operations to capture the particular single bit of interest are everywhere. Input scanning operations take place hundreds of times a second; control action needs to be calculated and outputs delivered. Coral provided the means to program those features.
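The C sketch below (Coral and assembler were the languages actually used) illustrates the bit-packing point: 24 switch states held in one word, with shift and mask operations to test or change a single bit. The channel numbers are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Packing plant switch states into one word rather than wasting a whole
   24-bit word per switch, with shift/mask operations on single bits. */
typedef uint32_t word24;               /* low 24 bits used               */

static int    get_bit(word24 w, int n) { return (w >> n) & 1; }
static word24 set_bit(word24 w, int n) { return w |  ((word24)1 << n); }
static word24 clr_bit(word24 w, int n) { return w & ~((word24)1 << n); }

int main(void)
{
    word24 digitals = 0;                     /* 24 conveyor/valve states  */

    digitals = set_bit(digitals, 5);         /* conveyor 5 running        */
    digitals = set_bit(digitals, 17);        /* gate 17 open              */
    digitals = clr_bit(digitals, 5);         /* conveyor 5 stopped        */

    printf("gate 17 is %s\n", get_bit(digitals, 17) ? "OPEN" : "SHUT");
    printf("word image: %06X\n", digitals & 0xFFFFFF);
    return 0;
}
```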


Fortran and Cobol reigned in the very different Data Processing environment. A scheduled succession of long programs, for say payroll calculations, had just to read a pack of cards at the outset and print a report at the end of each run. 


This Ironmaking computer ran under Ferranti Director. One of its main functions was development of Coral programs, such as the latest version of the real time dynamic mathematical model of a blast furnace developed at Port Talbot. It also ran interactive calculations enabling the management to establish the best blend of materials to feed into the blast furnace, and produced well laid out shift, daily and weekly production reports unrestricted by the use of the cryptic mnemonic identities used in the plant machines. In addition it was the hub of data links collecting data from the two Consul H machines and the analyses of iron and raw material from the laboratory. It was disappointing that the dynamic model of the process, on which so much talented effort had been expended, failed in its long term advisory (predictive) role, even though this time it was being fed accurate input data.


Llanwern had again demonstrated a lack of management trust in computer control. All vital functions were duplicated, so that the Consul H system designed for DDC was in fact not used in this mode, and a full sequence definition matrix panel existed side by side with the computer's ability to do the same function. Computer functions were going to have to become essential to the operation of the plant before there would be a real breakthrough.


6) DIRECT CONTROL of BLAST FURNACE 4 CHARGING (blast furnace out 1975, blow in Sept 1978) 2, 3

In parallel with the Llanwern project, which was commissioned in 1974, great progress had been made on Port Talbot blast furnaces by Tony Beswick's team. Crucially they had been sited in the Blast Furnace Administration office block. By being in daily contact with the managers and maintenance engineers they had developed software whilst interacting with their 'customers'. In so doing they had demonstrated their competence at developing software fit for the purpose of helping to run the blast furnace process efficiently. In particular this had been the production of a program to select the appropriate recipe and define the sequence for the charging of raw materials, and to produce production and technical reports. At the same time Barry Wood had been developing his dynamic model of the process, and although it never achieved the long term accuracy needed for open loop process control, it gave him a deep understanding of the blast furnace process, which was demonstrated daily in production's morning meeting. The whole team had earned the trust of production management.


Llanwern had shown the way ahead for blast furnace charging, in which the whole weighing/charging process was under automatic, if not computer, control. At Port Talbot these functions were still done manually by the driver of a Scale Car, a weigh-scale in the form of a locomotive. The car could be positioned successively under some of around 20 bunkers to assemble a batch made up of several raw materials. Selection of materials and their weight was entirely dependent on the operator following his instructions precisely and manually holding open a bunker gate just long enough to let through the correct amount of material. Then he would dump the batch into a skip which would be hoisted to the top of the furnace, where a uni-selector would step through relay sequence logic to charge the batch through the double gas seal at the top of the furnace.

Scale Car for weighing and batching coke ores and fluxes into Port Talbot Blast Furnaces pre 1975


In 1975 Blast Furnace 4 was taken out of service for a reline and major rebuild which included completely new raw material handling. The Scale Car was replaced by a unitised feeder, screen and weigh hopper, under each of 16 material storage bunkers. Batches were released from the weigh hoppers onto gathering conveyors and assembled in two batch holding hoppers above the skip pits. The whole charging system was in two identical halves to help ensure continuity through plant breakdown, even if at a reduced rate.


The scene was thus poised to make a major breakthrough: to computerise blast furnace charging from burden definition to direct control of the plant. We had a plant manager, Owen Davies, a maintenance electrical engineer, Mike Williams, and a weighing engineer, Cliff Croft, all committed to computer control. They had already tried a solid state NAND/NOR logic solution for charging one of the smaller furnaces and found the system impossible to maintain because of the inability to monitor the sequence, to even know where it was, let alone why charging had stopped. But instead of drawing back to relay logic they wanted to help design the computer system. For reasons of flexibility in the handling of errors, as well as cost saving, the computer was given direct control of the weighing by direct input from the load cells. The computer had control of each electrical starter - screen, feed vibrator, conveyor, hopper discharge gate - and initiated the skip hoist and the top sequence.


Terrified at the thought of a full function conventional scheme being requested in parallel with the computer, as at Llanwern, I well remember saying, at the high level management meeting which finally adopted the project, 'I am probably the only person in the room who believes the correct solution is to provide standby in the form of a second computer'. That's exactly how we proceeded on subsequent upgrades of the two large furnaces, but for this first one a rudimentary relay logic back up sufficed, which in the event was never needed, so reliable was the single computer scheme.


Unfortunately it was the end of the road for the Ferranti Argus 500, for Ferranti had downsized to match the 16 bit word length of their main hardware competitor, Digital Equipment Corporation (DEC), who in turn had just moved upmarket from their 12 bit 'pile it high and sell it cheap' machine to their world leading mini computer, the PDP11.


In downsizing, Ferranti wrote off most of their existing software base, including the Consul H of Llanwern, and it was years before they recovered the ground lost. No doubt I am not alone in wondering what would have happened had they instead moved upmarket to 32 bits and so been able to build on their existing software base, in a similar way to how Microsoft upgrades today.


This 700E was Ferranti’s first TTL 16 bit machine and developed 0.3 Million Instructions per second (MIPS). It had already been commissioned on three ironworks ‘loggers’, including one which replaced Ferranti’s failed attempt to build their Process Management System (SCADA) in time for the new Sinter Plant.

FERRANTI ARGUS 700E processor controls
The in-house team designed and produced the whole core image for the blast furnace charging computer using its Coral 66 compiler. The software included their own real time multi-tasking operating system, which occupied just 4.5 kB of core store and was at the heart of all subsequent Port Talbot systems. Like Ferranti's, this organiser employed just a single interrupt every 10 ms, on receipt of which it would evaluate priorities and switch program if a higher priority one was waiting. Efficient program switching was a hardware feature of Ferranti computers.
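The original organiser source is not reproduced here; the C sketch below merely illustrates the principle described - a single periodic tick, a fixed table of tasks each with a priority and a repeat interval, and the highest-priority task that has fallen due being run first. The task names, periods and one-task-per-tick simplification are all assumptions for the example.

```c
#include <stdio.h>

/* Sketch of the organiser principle: one 10 ms tick, a table of tasks each
   with a priority and repeat interval, and on every tick the highest-priority
   task that has fallen due is run. */
#define NTASKS 3

typedef struct {
    const char *name;
    int  period_ticks;    /* how often the task falls due               */
    int  ticks_left;      /* countdown to next activation               */
    int  due;             /* 1 when waiting to run                      */
} task_t;

/* Table is ordered by priority: index 0 is the highest. */
static task_t tasks[NTASKS] = {
    { "input scan",   10,  10, 0 },   /* every 100 ms                   */
    { "loop control", 100, 100, 0 },  /* every second                   */
    { "vdu refresh",  100, 100, 0 },
};

static void tick(void)                /* called on the 10 ms interrupt   */
{
    for (int i = 0; i < NTASKS; i++)
        if (--tasks[i].ticks_left == 0) {
            tasks[i].ticks_left = tasks[i].period_ticks;
            tasks[i].due = 1;
        }
}

int main(void)
{
    for (int t = 0; t < 200; t++) {   /* simulate 2 seconds of ticks     */
        tick();
        for (int p = 0; p < NTASKS; p++)        /* highest priority first */
            if (tasks[p].due) {
                tasks[p].due = 0;
                printf("t=%4d ms  run %s\n", t * 10, tasks[p].name);
                break;                /* one task per tick, as a sketch   */
            }
    }
    return 0;
}
```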


The software was tested and demonstrated to the users, running back to back with a simple software simulation of the process. Finally the completed 96kB software image was loaded onto the target Argus 700E, a full year before the furnace was blown in. Ferranti’s analog and digital I/O interfaces were used. Their MP49 serial link cards drove operator monochrome VDUs with standard QWERTY keyboards, report and alarm printers, and data links to the development and management process computer in the offices. 


The system was an instant success: the furnace was filled with material continuously, in hours, easily beating the previous record. A well designed computer system ran beautifully with a well designed new plant. Whenever a sequence step exceeded its expected time an alarm appeared, telling the operator and maintenance man exactly what was causing the hold up - almost always a plant malfunction, maybe a conveyor trip. For the first time Port Talbot was charging well screened raw materials under tight automatic control, and it showed up as a big reduction in coke rate, the primary smelting fuel. Blast furnace stoppages due to charging faults were a thing of the past.

LIVE MIMIC of CHARGING by Skip Hoist from PIT to Gas sealing BELLS at Furnace Top, tracking by batch number the weight, source bunker and bell operation of the materials (coke, sinter and miscellaneous materials including pellets & ores). The burden composition was derived from chemical calculations on a separate Ironworks computer.


In retrospect it was clear how efficient this way of working had been. Records show that only one man year of software engineering had been needed for the initial implementation, because no time was wasted in meetings with a myriad of suppliers. Neither were there special panels to design and get manufactured, just standard VDUs and keyboards. But software development restarted with commissioning. The system sounds simple enough, but there was plenty of scope to add operator VDU functions which allowed operators to deal with the inevitable plant problems, such as screen or conveyor blockages or skip hoist failures, without losing track of materials or having deviations from the intended burden chemistry. Such finesse extracted full benefit from the modular design of the plant. Two years later the records showed just 2 man years of effort in application engineering and slightly less in software.


Later the highly instrumented, computerised, centralised blast furnace control room was totally destroyed by fire following a breakout of hot metal in the adjacent cast house. The remote stockhouse computer survived. Recovering its centralised monitoring was simply a matter of providing a standard VDU and cabling a single 4-wire link; not so for the rest of the individually wired facilities. By then the Reheat project (see later) had been commissioned and had opened the path to distributing the instrumentation and computers in several safer environments, with little but SCADA needed for the central control room.



7) COAL  HANDLING (started Oct 1977, commissioned May 1979)

Coal was in future to come from cheap imports through the deep water harbour. A vital feature of the computer system was the tracking of different coals as they were added to, blended on and reclaimed from the coal beds. The software was written as a bespoke solution to the needs of the project. From the outset this system was used as a test bed to try out new ideas for use on the new Morfa Coke Ovens Battery, and so the computer system incorporated three advances which were to set the shape of systems to follow.



Firstly, it used colour VDUs as the operator interface. The controllers were ordered at the prototype stage from Serck Controls, who produced the UK’s first 32 colour character graphics controller (16 foreground and another 16 background). Contemporary versions used equal weighting of Red/Green/Blue (RGB) on a TV display to produce 8 foreground only colours including black and white. Each Serck controller occupied one half height 19” rack and cost around £4000. Digivision supplied professional standard monitors and Alphanumeric, a start up company, supplied our special VDU keyboards. Now you know how the name of that system supplier originated!


They were driven from the computer by codes sent along a serial link. All our subsequent VDUs followed the efficient full 8 bit protocol designed, along with the hardware, by Martin Baker of Serck, including a speed-up feature which would be called 'lossless data compression' in today's jargon. Asynchronous serial links were expected to carry data following the ASCII character code of 7 bits plus parity, but he realised that the link hardware could equally well ignore ASCII and carry its own 8 bit binary codes, at a stroke doubling the data capacity of the line - a lesson we carried through into our own data link protocol designs.
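As a simple illustration of why full 8 bit working roughly doubles line capacity (the Serck protocol details are not reproduced here), the C sketch below compares sending a 16 bit plant value as two raw bytes with sending it as ASCII hexadecimal text.

```c
#include <stdint.h>
#include <stdio.h>

/* Why full 8-bit binary roughly doubles the capacity of the same line:
   a 16-bit plant value needs two raw bytes, but four characters if it is
   sent as ASCII hexadecimal text.  Framing details are omitted. */
int main(void)
{
    uint16_t value = 0xABCD;

    /* Raw binary: two bytes on the line. */
    uint8_t raw[2] = { (uint8_t)(value >> 8), (uint8_t)(value & 0xFF) };

    /* ASCII hex: four bytes on the line. */
    char text[5];
    snprintf(text, sizeof text, "%04X", (unsigned)value);

    printf("binary : %02X %02X             (2 bytes)\n", raw[0], raw[1]);
    printf("ASCII  : \"%s\" = %02X %02X %02X %02X (4 bytes)\n",
           text, (unsigned)text[0], (unsigned)text[1],
           (unsigned)text[2], (unsigned)text[3]);
    return 0;
}
```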


Secondly, it used a microprocessor as the interface between the computer and a remote cubicle of I/O. Media I/O, manufactured by GEC Lewisham, was chosen for quality. For this project it was headed by an 8 bit serial Intel 8008 microprocessor (soon upgraded to the newly available parallel 8080) to form Micro Media. It used a newly available Universal Asynchronous Receiver Transmitter (UART) chip to communicate with the Ferranti computer by asynchronous data link. The micro's task was to scan the Media inputs, keep an up to date map of values, and service input scan and individual output requests from the mini computer.
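The Micro Media firmware itself is not shown in the paper; the C sketch below is only an outline of the division of labour described - the micro keeps an always-fresh map of its local inputs and answers two kinds of request from the mini computer, a full scan and a single output operation. Message codes, channel counts and values are assumptions for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Outline of the outstation idea: an always-fresh image of local inputs,
   plus two request types from the mini - "send the scan" and "operate one
   output". */
#define NANALOG  256
#define NDIGITAL 256

static uint16_t analog_map[NANALOG];        /* freshest input values      */
static uint8_t  digital_map[NDIGITAL / 8];  /* packed digital states      */

static void scan_local_io(void)             /* runs continuously          */
{
    /* would read the Media input cards here; stand-in values below */
    analog_map[0] = 1234;
    digital_map[0] |= 0x01;
}

/* Request from the mini: 'S' = full scan, 'O' = operate a single output. */
static void handle_request(char type, int channel, int state)
{
    if (type == 'S') {
        printf("reply: %d analog + %d digital values\n", NANALOG, NDIGITAL);
        /* would stream analog_map and digital_map up the link here */
    } else if (type == 'O') {
        printf("drive output channel %d to %d\n", channel, state);
        /* would pulse the matching Media output card here */
    }
}

int main(void)
{
    scan_local_io();            /* in reality: forever, between requests  */
    handle_request('S', 0, 0);  /* mini asks for the current picture      */
    handle_request('O', 42, 1); /* mini commands one output               */
    return 0;
}
```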


Thirdly, this system was commissioned on one of the very first 700G processors which Ferranti manufactured. The 700G was built from four 4 bit microprocessor chips and developed a speed of 0.6MIPS, twice the speed of its forerunner.


It is virtually impossible now to imagine the huge impression made by the use of colour and high quality monitors, which was in its way equivalent to the introduction of colour onto TV. Together with the next event it gave a huge boost to the acceptance of process computers at Port Talbot.





8) EFFECTS of DE-MANNING of the STEEL INDUSTRY

The manning at Port Talbot was to fall from around 17,000 in 1966 to little over 4,500 by 1996 - an incredible reduction, since production volume increased and product quality improved massively over the same period. By the late 70s employees were being encouraged to leave; in fact those over fifty were given positive encouragement with redundancy packages, and felt obliged to leave and save jobs for younger men. One by one the young graduates in the blast furnace team decided to leave the industry.



For those left the massive de-manning that followed the Steel Strike of early 1980 was a double edged sword. Not only were we losing talented, seemingly irreplaceable, software personnel, but the production managers, who had been so wary of using the technology in vital functions, overnight moved to complete acceptance, realising there was no other solution to the wholesale reduction in process staff needed for survival.


The priority was no longer to convince managers to adopt the technology, but to get them to accept that we had to concentrate on one project at a time. Well aware of the project implications we continually reassessed our own development priorities, in much the same way as the computer operating systems themselves. Though not to quite the same 10ms timescale!


In many ways we became victims of our own success, but we didn’t sink, we stayed afloat. That is a tribute to the efficiency of in-house team working, with no skilled manpower wasted on production of commercially watertight written enquiry documents, negotiating contracts or communicating with third parties over a design detail. Instead we retained close informal contact with the end-users, the production staff and operators.


It was decided to focus limited resources on the production of systems which could be replicated with minimum effort and be ultra reliable 2. In software terms this led to a core of reusable general purpose SCADA (Supervisory Control and Data Acquisition) software. In spite of this, over half the available software effort was unavoidably directed to the production of unique software, tailored to fit individual process needs.


Proof of the strategy lay in the production of effective, reliable, crash-free, dual hot computer systems, which minimised the effort needed for post commissioning support. In the 1980s around five new, or significantly upgraded, systems were being commissioned a year. Effort per project fell from the 30 man years of software and engineering effort consumed by the Port Talbot blast furnace (to 1970), via 20 man years on the Llanwern blast furnace (1976), to 5 man years on SCADA projects by 1981, with further reductions to perhaps half that level.


Our responsibility as a team was not only hardware procurement and design, development and commissioning, but also lifetime software maintenance and system upgrading. The capital cost of a commissioned system was around half that of an equivalent one purchased externally, a saving which grew enormously when considering the total absence of the cost of liaising with a contractor during the supply period, and even more when including post commissioning development and evaluating the lifetime cost. Systems were built as far as possible out of standard modules, each bought cheaply from the appropriate supplier, including those specially designed as building blocks for the radial serial links and automatic changeover.



9) PORT TALBOT'S OWN SCADA for COKE OVENS/POWER PLANT (Coke July 1979, on-line May 1981) 2

We regrouped around the old annealing team with the addition of two software recruits from the Data Processing Department and one from R&D. The new team comprised just five software engineers and a set of system engineers, varying to suit the project, typically numbering four including the author. Continuing the approach of Blast Furnace Charging, all (100%) of the software in the computers, including the small multi-tasking real time operating system, was produced at Port Talbot.


It was decided to close the existing coke oven battery because of its age and in order to eliminate the pollution of a nearby residential area of the town. The new Morfa battery was sited at the east end of the works. Coke oven gas was cleaned in a multi-process by-products plant.


This time no fewer than eight Micro Media systems distributed the I/O, covering the oven battery itself and each of the by-products plants. Two tiny ones were mounted on the pusher cars, which identified the oven being pushed by reading the transponder badge fitted to each oven door.


All interfaces to the computer were made by serial link. Ferranti had begun to include small 64 byte send and receive memories (silos) in their serial link cards, which allowed line loading with meaningful data to get very close to 100%, given matching software. In order to drive over long lines, many of several kilometres to the opposite end of the works, 0/20mA was adopted for binary signalling. Whilst the earliest projects started off at 1200 bps, eventually 19200 bps became the universal objective in order to achieve the throughputs then needed. Binary either works well or it doesn't, making it easy to identify the highest attainable line speed: if the rate was one step too high total rubbish would be delivered (hieroglyphics would appear on a VDU line), but halve the communication rate and it would be perfect.


Unfortunately 20mA drivers were not adequately specified, so the maximum speed attainable depended on the circuit design, the problem being that the line, which charged during the 20mA period to signal 0, had to fall rapidly enough to signal 1, with the thresholds chosen appropriately. The local firm Digitrol provided excellent optically isolated line drivers for the computer end. These modules, originally designed for us by Lions Communications, incorporated RS232 on a front socket to allow 'listen-in' monitoring of each individual line without affecting on-line traffic. In all cases the 20mA line was powered from the sending end, a tremendous help in commissioning: open the four in-line terminal block isolators, look for 12V dc on two of the four wires on one side and 12V dc on the other, and any cross wiring was immediately obvious.


The system was eventually brought to its final shape when the Coal Handling computer was moved from its filthy location and amalgamated with the much larger Coke Oven system to form a dual hot computer system. In order to reduce the risk of bumps on transfer the Media digital outputs, for instance for start/stop of conveyors, were converted to monostable, so becoming analogous to ‘push to change’.


The second SCADA, for Power Plant C, which was developed in tandem, took the principles further. Firstly, solely for reasons of cost reduction, the I/O was bought from a telemetry supplier, Transmitton; more importantly, it was required to be headed by a micro with five serial computer ports rather than just one as on Micro Media. Software allowed each port to support communication with an independent SCADA computer.



Diagram of Telemetry Outstations where Analog and Digital signals were interfaced to the SCADA computer by Microprocessor (both GEC Micro-Media and Transmitton Telemetry Outstations)

Two of these serial ports allowed the I/O to be connected to both the on-line and off-line computers. Each computer would be continually scanning the I/O and each would drive its own VDUs, the sole difference being that only the on-line machine's VDUs could be used for output commands, be they single bit start/stop or changing the value of a set point or gain. The other three ports were future proofing, allowing connections to another three computers. Within a few years most of the computers on the plant were interconnected in this way to an Energy Control SCADA, then still several years off.
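The C sketch below, with port numbering and message handling invented for illustration, shows the essential rule just described: all connected computers may poll freely, but an output command is honoured only if it arrives on the port currently flagged as on-line.

```c
#include <stdio.h>

/* Outline of the on-line/off-line rule: both computers poll freely, but
   output commands are honoured only from the port that currently holds
   on-line status.  Port numbers and command format are assumptions. */
#define NPORTS 5

static int online_port = 0;          /* which connected computer is master */

static void poll_request(int port)
{
    printf("port %d: scan data served\n", port);   /* reads allowed to all */
}

static int output_request(int port, int channel, int value)
{
    if (port != online_port) {
        printf("port %d: output to channel %d REFUSED (off-line)\n", port, channel);
        return 0;
    }
    printf("port %d: channel %d driven to %d\n", port, channel, value);
    return 1;
}

int main(void)
{
    poll_request(0);                 /* on-line computer scanning          */
    poll_request(1);                 /* off-line computer scanning too     */
    output_request(1, 7, 1);         /* refused                            */
    output_request(0, 7, 1);         /* accepted                           */
    online_port = 1;                 /* changeover: roles swap             */
    output_request(1, 7, 0);         /* now accepted                       */
    return 0;
}
```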


Developing shape of 'Heavy End' System (eventually both Blast Furnaces had dual SCADA computers)

The Power Plant I/O was divided into several microprocessor outstations, three in the new plant plus others in the Margam A and B power plants, in order to provide comprehensive cover.


It was a huge step forward for telemetry. At that time outstations were tiny, designed for monitoring a reservoir or a substation. Every manufacturer designed unique message structures to suit their own market, reply message lengths of 20 to 30 bits were typical, and most systems were designed for use on dial up lines. Our outstations were huge in comparison, typically 256 analog and 256 digital signals; the lines, like today's broadband, were always on, and the messages were of variable length but composed of bytes, driven by UART chips and interpreted by software.


For Transmitton, the design of the requisite software to run in EPROM on an Intel 8085 (8-bit parallel) was the major challenge of the contract. That they achieved it was largely due to the efforts of Denis Williams, who later went freelance as a designer of electronic modules - but not before he had collaborated with us on the next vital microprocessor project.


Serck soon dropped VDU manufacture, but a new start-up company, Microvitec, provided a version of their M2000 stand alone colour VDU which incorporated the Serck protocol, though their sales literature later referred to SERC! The Microvitec M2000 was used for the first time as extras on the Power Plant, halving the cost per VDU. From then on a single VDU could be used either in SCADA (Serck) mode or as a general purpose terminal working to the conventional ASCII standard, because the basic emulation in the terminal was DEC's VT100. Commissioning problems were encountered, but most were simply the result of a poor 20mA line driver circuit within the device. When Microvitec went out of business a small one-man software contractor ported 'Serck' to what was effectively disc-less PC hardware, so looking to the casual viewer like a stand alone PC monitor.

Control desk of new Power Generation Plant with three Serck VDUs


Our SCADA, an acronym which wasn't current at the time, initially owed a great deal to what had been observed of similar proprietary systems: Ferranti's now defunct Consul H as used on the Llanwern blast furnace and Honeywell's mould-breaking TDC 2000, both aimed at direct computer control, and Taylor's Shared Display and the Kent K70 for supervisory control. Visually the early screens show a resemblance to the Mimic, Bar and Alarm features of the K70/90, but we were soon to find our own voice. Below is the mimic produced for Turbine 6; all the variables are simply current scan values, typically updated every second.


Mimic Diagram of Steam Turbine in first use of Serck VDU 16 colour controllers



10) REHEAT FURNACES (old on-line Feb 1982, new A May 1985 + new B Aug 1987)

This project marked the next big step forward in our use of computer technology.


What we feel must have been the first 'VDU only' control room in major UK industry went into service in February 1982 with the first existing furnace to be instrumented digitally. By November 1982, at the end of the first phase of plant commissioning, all five existing furnaces were controlled from the central control room, which still comprised only two colour VDUs, two printers, an emergency stop button, CCTV, an intercom and a telephone. There were no controller fascias, no meters, on-off switches or alarm panels, and no chart recorders. The computer was the only link between the operator and the automatic controls. It alone was responsible for displaying the state of the process and routing all control actions to the plant (including manual control of loops). In addition it carried out the entire alarm function by checking and recording alarms, plus alerting the operator to current critical alarms in rotation by display in dedicated windows on each screen. It also provided historical graphs and logs.

 VDU only Control Desk for new walking beam Reheat Furnaces + CCTV monitors

Phase 1 of the reheat project was to upgrade the five existing furnaces, which previously were locally controlled by five crews, by replacing their Askania hydraulic control of gas flow and furnace pressure with 3-term digital zone temperature controllers and overseeing all the furnaces from a single new central control room by SCADA.

It was the first part of the major redevelopment of the Hot Strip Mill to double coil weight, so improving hot coil quality and increasing throughput. It was the first mill in the UK to use a hot coil box to decouple control of the roughing and finishing mills, which, together with the use of a reversing roughing mill, made space for an additional finishing stand. GEC won a huge contract to modernise the powerful servo drives of the hot mill and to replace the mill computers, but BSC retained responsibility for in-house provision of the computer control for the furnaces. Considering these furnaces were in-line with the mill, this was the first significant show of trust since blast furnace charging.


The project went through two more phases: first production switched to a brand new walking beam furnace complete with PLC controlled slab handling, and finally the remaining original furnaces were replaced with a second identical walking beam furnace, normally working in parallel, though both furnaces had to be capable of working alone. By this time the only addition to the control room was a battery of CCTVs, to provide visual safety checks on operatives and slab/coil handling.

 Closer view of one side of VDU only Control Desk


Although the foundation had been laid, as previously described, for large scale SCADA on the Coke and Power projects, these two earlier systems had provided extra facilities in conventional control rooms; neither computer had been the first line means of controlling vital process plant. The reheat project started as faith in computers was growing, but there was no efficient digital link with single loop 3-term controllers on the open market. Multi-loop controllers were still rejected as multi-loop failure was considered too risky (as per annealing), so this ruled out DDC. As a major process supplier to British Steel was fond of saying, the appropriate control system design philosophy for steel was 'belt, braces and a piece of string'.


One day the Hot Strip Mill telephoned to say a salesman was demonstrating a digital 3-term controller. It was a Eureka Moment for me, just what was needed, for not only was it a good-looking professional controller, which would satisfy the conservative requirement for single loop integrity, but it was conceived as a multi-dropped building block within a fully digital system communicating via an RS 422 asynchronous link.


Crucially Eurotherm, following the commercially successful open systems approach of SUN, were committed to publishing their communication protocol. It was the prototype 6350 Turnbull Control Systems (TCS) controller. At that stage the protocol was a straightforward use of ASCII defined text and message control characters, and error checking was done simply by repeating the character stream. Nevertheless it allowed the computer to read the controller's vital information such as MV, SP, Ratio, O/P and Gains; and to write (output) operator controls to issue SP and Ratio, to switch the controller mode between Automatic, Ratio and Manual, and when in Manual to set the position of the control valve.

Rack of TCS digital control devices linked by RS422 serial lines via Concentrator to SCADA Computer


By the time of the first walking beam furnace TCS had substituted a binary protocol. TCS also developed a large range of other modules of identical physical appearance, particularly an 8 loop controller, a 32 input signal processor (analogs or digitals), and a totaliser device to integrate instantaneous flow measurements into mass transfer.


Unlike Serck, unfortunately, TCS never did make the transition to full 8 bit binary characters, preferring to keep easy recognition of ASCII control characters, but at the expense of using three bytes instead of two for each 16 bit analog variable.


Close up of the most used compact Mimic, showing the updating state of a single 10 zone Reheating furnace

Building on the Microprocessor Outstation SCADA link concept, a contract was placed with Transmitton for Denis Williams to develop the software concept further. The result was the Concentrator, interfacing between all the TCS devices and the computer, with the ability on the plant side to drive five serial lines, each populated with multi-dropped TCS modules, and on the other side to communicate independently with five mini computers. It was not thought of in those terms at the time, but it was essentially a client-server structure, whereby the concentrator (server) continually scanned the TCS devices for variable data and so held an up to date map of all the devices, a subset of which it served on demand to any of the five computers.

Diagram of Micro processor CONCENTRATOR to interface new TCS Digital Instrument/Controllers to up to five SCADA computers


The computer link protocol, as always, was defined at Port Talbot. It employed full 8 bit serial data by the simple expedient of adding the message length to the header and a cyclic redundancy check (16 bit CRC) to the tail of each message. Though error checking was essential to guarantee message validity, in practice data errors were almost non-existent, even when 2,000 bytes were being delivered every second.
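The exact Port Talbot frame layout and CRC polynomial are not given in the paper; the C sketch below simply demonstrates the technique - length in the header, CRC-16 on the tail, receiver recomputes and compares - using the common 0xA001 (reflected) polynomial purely as an example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the framing technique: message length in the header and a
   16-bit CRC on the tail.  The polynomial below (0xA001, bit-reflected)
   is used only as an example. */
static uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

static size_t build_frame(uint8_t *frame, const uint8_t *payload, uint8_t len)
{
    frame[0] = len;                          /* header: message length      */
    memcpy(&frame[1], payload, len);         /* full 8-bit binary payload   */
    uint16_t crc = crc16(frame, (size_t)len + 1);
    frame[len + 1] = (uint8_t)(crc & 0xFF);  /* tail: CRC-16                */
    frame[len + 2] = (uint8_t)(crc >> 8);
    return (size_t)len + 3;
}

int main(void)
{
    uint8_t payload[] = { 0x12, 0xAB, 0xFF, 0x00 };   /* any byte values    */
    uint8_t frame[260];
    size_t  n = build_frame(frame, payload, sizeof payload);

    /* Receiver: recompute the CRC over header + payload and compare.       */
    uint16_t rx = (uint16_t)frame[n - 2] | ((uint16_t)frame[n - 1] << 8);
    printf("frame of %zu bytes, CRC %s\n", n,
           rx == crc16(frame, n - 2) ? "good" : "bad");
    return 0;
}
```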


Transactions were also designed to route output commands straight through the concentrator to the TCS device. Another instructed the concentrator to collect and relay the complete set-up data from a specified TCS device (typically around 50 items). Thus the SCADA was able to display and modify the set-up of the selected TCS device, a big improvement on doing it one item at a time with a hand-held terminal.


Along with the automatic slab handling came four Programmable Logic Controllers (PLCs). This was another application of the microprocessor, concentrating on the digital logic needed to replace relay and uniselector sequencing systems. It provided the scanning of digital I/O, just as TCS had taken over the analog input functions. PLC communication was by a single serial link feeding both of the dual computers, for the digital information gathered was unlikely to be of universal interest. Like TCS devices, many PLCs could be multi-dropped on a single line. The PLC line protocol followed the GEC Extra Simple Protocol (ESP).


Together TCS and PLC had at a stroke eliminated the need for mini computer I/O or outstations. Each had integrated scanning and automatic control, and could be sited to minimise cabling. Total digital control and SCADA had come of age.


Input/output cubicles had formed a vital part of early process computers. Now all that remained at the computer end were asynchronous 4-wire serial links to drive VDUs, printers, digital instrumentation, PLCs, and inter-computer data links. At the end of a further decade or so of development, 32 such links per computer had become our standard configuration, with some users having to accept VDUs driven from the off-line computer (essentially indistinguishable to the observer).

Automatic changeover of the dual computers was held off by a watchdog circuit, a technique whereby a monostable circuit has to be kept in its unstable position by priming it every second from software, thus indicating a correctly functioning system. Since computer I/O now consisted solely of asynchronous serial links, changeover, principally of VDUs, was done by banks of four-pole changeover relays. No change was needed for the plant I/O interfaces, which streamed data to both computers. The changeover module design was done by Digitrol, whose watchdog module also included a digital clock to provide an accurate time reference regardless of shutdown periods.
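
In software terms the arrangement amounted to one rule: prime the watchdog once a second only while the system is demonstrably healthy, so that any stall lets the monostable time out and the relays switch the VDUs to the partner machine. A sketch of that pattern follows (illustrative C; the I/O call and health checks are invented stand-ins, not the Argus code).

    /* Sketch of watchdog priming as described above.  The I/O call, health
       checks and the one-second tick are stand-ins; the point is the pattern:
       simply failing to prime the monostable is what triggers changeover. */
    #include <stdbool.h>
    #include <stdio.h>

    static void prime_watchdog(void)       { puts("watchdog primed"); } /* stand-in I/O   */
    static bool scan_tasks_alive(void)     { return true; }             /* stand-in check */
    static bool serial_drivers_alive(void) { return true; }             /* stand-in check */

    static void watchdog_second_tick(void)
    {
        if (scan_tasks_alive() && serial_drivers_alive())
            prime_watchdog();   /* kick the monostable back into its unstable state */
        /* otherwise no kick: the monostable relaxes and changeover follows */
    }

    int main(void)
    {
        for (int second = 0; second < 3; second++)   /* three ticks of the demo */
            watchdog_second_tick();
        return 0;
    }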


Two pairs of single cubicle mini computers were used. The first pair controlled the reheating furnaces via SCADA. Note the four Serck VDU controllers and the changeover chassis in the middle cubicle, and the two 10MB cartridge reload discs. The other pair were the Optimal Computers (nicknamed Optional by the operators in their early days!). They took slab data from an IBM production control machine, linked it to a physical slab and tracked that slab as it was charged to either furnace, modelling and integrating the heat content as the slab passed through the furnace, and on discharge passed the enhanced slab record to the GEC mill computer by serial link. From a hardware point of view these computer systems were virtually identical, but the software of the Optimal was entirely a one-off to suit the requirement. In December 1982 a single cubicle Argus 700G of similar layout, but with a fixed head Winchester disc (rather than the floppy and cartridge discs) and 16 serial link drives, cost £26,500 before discount.


Pair of identical Port Talbot SCADA computers, each actively monitoring the process. The centre cubicle has four Serck VDU controllers at the top and, at the bottom, a changeover rack which determines which of the SCADA pair is the single on-line machine and changes over the VDUs accordingly. The outriders are 10MB cartridge discs.


On the other hand the SCADA system, with its far higher serial link traffic, ran on software which was a direct evolution of earlier such systems. As always it incorporated a great deal of special-to-project software for such functions as report logging, adjustment of furnace control on events such as the opening of furnace charge or discharge doors, or interfacing with gas analysers.


By the time of commissioning of Walking Beam Furnace B the signal handling capacity of the computer had been increased to 2048 analogs and 3072 digital bits, and the VDU page display capacity to 256 analogs and 256 digitals. The rate of data handling was exploding, measured no longer in hundreds of variables per second but in thousands. In order to maintain response rates the scanned data protocols were refined so that data was reported to the computer only on significant change, a complex task with the imperative of designing rugged logic so as to avoid frozen data in the SCADA system.
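
The report-on-significant-change idea, together with the safeguard against frozen data, can be sketched as below (illustrative C; the deadband, refresh period and names are assumptions): a point is sent only when it moves by more than its deadband, but every point is also re-sent at a slow fixed interval so a missed report cannot leave a stale value on the screen.

    /* Sketch of report-by-exception with a periodic integrity refresh, the
       guard against 'frozen data' mentioned above.  Names and numbers are
       illustrative assumptions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REFRESH_PERIOD_S 60   /* every point re-sent at least this often */

    struct point {
        uint16_t last_sent;       /* value last reported to the computer */
        uint16_t deadband;        /* 'significant change' threshold      */
        uint32_t last_sent_at;    /* time of last report, seconds        */
    };

    /* Decide whether this scan's value for one point should be reported. */
    static int should_report(struct point *p, uint16_t value, uint32_t now_s)
    {
        int changed = abs((int)value - (int)p->last_sent) > (int)p->deadband;
        int stale   = (now_s - p->last_sent_at) >= REFRESH_PERIOD_S;
        if (changed || stale) {
            p->last_sent    = value;
            p->last_sent_at = now_s;
            return 1;             /* include this point in the next message */
        }
        return 0;                 /* suppress: nothing significant to say   */
    }

    int main(void)
    {
        struct point zone_temp = { 1000, 5, 0 };
        printf("%d %d %d\n",
               should_report(&zone_temp, 1002, 10),   /* 0: within deadband    */
               should_report(&zone_temp, 1010, 20),   /* 1: significant change */
               should_report(&zone_temp, 1010, 90));  /* 1: periodic refresh   */
        return 0;
    }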


By the end of the project no fewer than nine SCADA systems were operating on processes all over Port Talbot works, most based on this same dual live processor design. One of these was in the Energy Control Centre, where the system had virtually free access to relevant current process data via the outstations and concentrators on the power plants, the reheat furnaces, the two large blast furnaces, the sinter plant and the coke ovens.


11) BLAST FURNACE 4 REBUILD 1992

Throughout the 30 years the Ironworks used nothing but in-house computer systems. The two large blast furnaces and the sinter plant, as well as the coke ovens, were amongst the earliest processes to adopt SCADA and were upgraded every few years to take advantage of ever increasing functionality.


Blast Furnace 4 (with all its associated hydraulic, air, electric and control services) was demolished and replaced in a 149 day shutdown by a bigger furnace. The new furnace was constructed on the ground in seven horizontal sections, each being equipped and wired with hydraulics, electrics and instrumentation. During the stop period these modules were stacked one by one by the world’s tallest crane, the last being the furnace top.


This was the final Port Talbot SCADA to be produced, for Ferranti had finally folded, marking the end of the road for our SCADA software. The system needed one final doubling in variable data capacity and finesse, which showed itself particularly in an even more refined tool for on-line configuration of mimic diagrams. The hardware architecture had been set a decade earlier on the Reheat Furnace project. Direct control of charging was this time done by PLC, but the SCADA computer’s role was enhanced to cover the charging operator interface. This time the full complement of 32 serial links was in use.


 SCADA Mimic of Blast Furnace

It was the most densely instrumented blast furnace yet. The instrumentation and associated controls of the furnace, stoves, high pressure top and gas plant were again covered by TCS. Temperature measurements in the hearth and stack alone were counted in hundreds. Everything else was controlled by GEM 80 PLCs: the hot blast stove changeover, the powerful dust collection plant which sucked air from the covered cast house, the hydraulic drills and guns used to tap and plug the furnace, the motor control centres and variable speed drives, the electrical substations, the distributed gas alarms, and all services, whether hydraulic, compressed air, oil or steam.


 Remote Monitoring and Control of Charging from Control Room away from the hazards of the Furnace

Though 16-bit words in many PLCs were prepared specially for communication with the SCADA, many other words had to be mapped into the system even though only a few bits were of real interest. The effect of such an increase in plant scope, the widespread PLC automation and the relative inefficiency of their data preparation for communication purposes was to enforce a further doubling of SCADA scanning, taking the digital inputs to 10,000 and the analog data base to over 4000. Luckily the Argus 700 processor was by 1992 developing 1 MIPS (obeying one million instructions per second), had floating point arithmetic and was equipped with 256KB of RAM.
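
Where only a few bits of a mapped 16-bit PLC word were of interest, the database had to carry the information needed to pick them out; a small sketch of that unpacking follows (illustrative C with invented tag names and masks, not the actual database layout).

    /* Unpacking the few bits of interest from a mapped 16-bit PLC word, as
       discussed above.  Tags, masks and the example word are assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    struct digital_point {
        const char *tag;     /* SCADA point name           */
        uint16_t    mask;    /* which bit(s) of the word   */
        int         shift;   /* position of the lowest bit */
    };

    static const struct digital_point stove_word_map[] = {
        { "STOVE1_ON_BLAST",  0x0001, 0 },
        { "STOVE1_ON_GAS",    0x0002, 1 },
        { "STOVE1_VALVE_FLT", 0x0100, 8 },
    };

    int main(void)
    {
        uint16_t plc_word = 0x0101;   /* example raw word as read from the PLC */
        for (size_t i = 0; i < sizeof stove_word_map / sizeof stove_word_map[0]; i++) {
            const struct digital_point *p = &stove_word_map[i];
            printf("%-16s = %u\n", p->tag,
                   (unsigned)((plc_word & p->mask) >> p->shift));
        }
        return 0;
    }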


12) SCADA DEVELOPMENT

The SCADA software loaded to on-line systems remained resident in RAM during operation. Each system was configured as a binary image on a development computer with a disc operating system running a Coral compiler. These development computers provided a full range of facilities to write, compile and assemble run time software images efficiently. The added complexity of a disc operating system on-line was deemed unsuitable for the target SCADA machines, where the essentials were speed of response, high serial link data handling capacity and above all reliability. Nevertheless these systems were growing rapidly more complex, so there was all the more need for the byword of the day, KISS, ‘keep it simple stupid’ – well, as simple as possible. The discs in the on-line systems were only essential when updating those systems, providing for fast loading of updates, and the dual system structure made it easy to upgrade the off-line system before trying it out tentatively by forcing a changeover.


The credit for the detailed design which created the performance, facilities, and underlying reliability of the systems goes without question to the software team. I often observed that ‘it amazes me what can be done given good software development, but without it you are powerless’.


John Howley worked throughout from annealing to the final blast furnace system. He specialised mainly on the plant side of the SCADA interface, designing the serial link protocols and the data base, and producing the drivers to continually stream ever more values while updating individual points just as quickly as before.


Gary Howell made the VDU system a joy to use. VDU screen design was based on the ‘what you see is what you get’ approach, adopted since the days before SCADA. Thus structural representations, say the pipework, and the representations of decimal or binary variables, appeared cell by cell as they were configured.


Together they provided a fast responding large scale SCADA system for process operators, maintenance engineers, and managers in their offices.


A feature of the integrated design was that each variable on the screen, be it say a temperature, the result of a calculation, or the state of a contact breaker, could instantaneously be traced back, by 'selecting and right clicking' in today's PC jargon, to its precise origin, its location in the PLC or TCS device, or its calculation. Most layouts were produced during commissioning, refined on-line in the early stages of production use, and extended as the plant facilities grew and more signals became available.
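
The trace-back facility implies that every dynamic screen cell carried a reference back to its database point and hence to its origin in a TCS device, a PLC word or a calculation. A sketch of such a linkage is given below (illustrative C; the structures and names are assumptions, not the Argus data layout).

    /* Sketch of the cell-to-origin trace-back described above: invented
       structures, not the actual database layout. */
    #include <stdio.h>

    enum source_kind { SRC_TCS, SRC_PLC, SRC_CALC };

    struct origin {
        enum source_kind kind;
        int  line, address, item;   /* serial line / device address / item no. */
        const char *expression;     /* for derived (calculated) values         */
    };

    struct screen_cell {
        int row, col;               /* position in the 32 x 64 mimic grid */
        int point_index;            /* -1 for static structure characters */
    };

    static const struct origin database_origin[] = {
        { SRC_TCS,  2, 11, 0, NULL },                  /* e.g. a zone temperature */
        { SRC_CALC, 0,  0, 0, "zone1.MV - zone2.MV" }, /* e.g. a differential     */
    };

    static void trace_back(const struct screen_cell *c)
    {
        if (c->point_index < 0) {
            printf("cell %d,%d: static mimic character\n", c->row, c->col);
            return;
        }
        const struct origin *o = &database_origin[c->point_index];
        switch (o->kind) {
        case SRC_TCS:  printf("TCS line %d, device %d, item %d\n",
                              o->line, o->address, o->item); break;
        case SRC_PLC:  printf("PLC line %d, word %d, bit %d\n",
                              o->line, o->address, o->item); break;
        case SRC_CALC: printf("calculated: %s\n", o->expression); break;
        }
    }

    int main(void)
    {
        struct screen_cell selected = { 5, 12, 1 };   /* the 'right-clicked' cell */
        trace_back(&selected);
        return 0;
    }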


These invaluable features have echoes in today’s PC spreadsheets: user-friendly control of layout and the ability to see the data sources and derivation of each cell. Without this feature it is virtually impossible to verify the accuracy of the real time data being displayed on a SCADA screen, or even to identify the simple case of ‘crossed wires’. These were features we were to learn, to our cost, were not available on the proprietary SCADA packages of 1992 that we were to use on the Steel Plant, where a glamorous pixel graphics package had merely been bolted on for marketing reasons.


Whilst many were seduced by pixel graphics, by the glamour of multitudinous colours and fine resolution, Port Talbot systems retained character graphics with their huge advantage in compression of communication data. It took a long time for us to accept that process operators did not want pictorial representation; their ideal was wall-to-wall numeric data, the more the better, with just enough structure to the mimics for orientation. In these circumstances it is hard to see where access at the pixel level pays off, except in the production of graphs. Use of 32 purpose-selected colours soon showed that very few foreground/background colour combinations are usable. As today’s audio CDs regularly demonstrate with covers that look pretty – pity the lyrics can’t be read!


These character graphics screens, like spreadsheets, are divided into 32x64 cells, as implicit in the Serck protocol. There are typically 100 mimics purpose-designed for each system, and each screen is laid out in 32 rows. But although every row has 64 character cells there are no imposed columns, for a binary state representation requires only a single cell, whereas a measurement typically requires 2 – 5 cells for a numeric or text variable, and to allow free page design there can be no restrictions on the mix in any row. Any similarity with spreadsheets ends there, because most variables presented on the screen result from automatic update (every 1-16 seconds) and not from manual entry. The variables and the diagram can be constructed from a set of 256 characters designed for ‘Lego’ mimic construction, of which only 36 are needed for the alphabet and numerals, and each character can gain extra meaning from the allocation of appropriate foreground and background colours.
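
The compression advantage over pixel graphics falls out of this cell model almost for free, as the sketch below suggests (illustrative C; the cell encoding and byte counts are assumptions, not the Serck format): one cell is fully described by a character code and two colour indices, so refreshing a five-cell numeric field costs a handful of bytes on the serial link.

    /* Illustration of the 32 x 64 character-cell model discussed above.  The
       cell encoding and update cost are assumptions for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    #define ROWS 32
    #define COLS 64

    struct cell {
        uint8_t ch;   /* one of the 256 'Lego' mimic characters */
        uint8_t fg;   /* foreground colour index                */
        uint8_t bg;   /* background colour index                */
    };

    static struct cell screen[ROWS][COLS];

    /* Write a numeric field of up to 5 cells and report the update cost. */
    static int show_value(int row, int col, int value, uint8_t fg, uint8_t bg)
    {
        char text[6];
        int n = snprintf(text, sizeof text, "%5d", value);
        for (int i = 0; i < n; i++) {
            screen[row][col + i].ch = (uint8_t)text[i];
            screen[row][col + i].fg = fg;
            screen[row][col + i].bg = bg;
        }
        /* each changed cell needs only its position plus the cell contents */
        return n * (int)(2 + sizeof(struct cell));
    }

    int main(void)
    {
        int bytes = show_value(10, 20, 1247, 3, 0);  /* e.g. a zone temperature */
        printf("5-cell update sent in roughly %d bytes\n", bytes);
        return 0;
    }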


Though the paper emphasises the SCADA aspect of process control, this same five-man team were responsible for the project-specific bespoke software incorporated in all these systems. They also provided the software for the disc operating systems which ran on a management level process computer in the Ironworks, for the tracking and modelling on the Reheat Furnace Optimal computer, and for the many DEC Alpha computers which were about to be used on the Steel Plant. More than half the software effort available over the period needed to be allocated to such tasks.


Stability was a vital ingredient, as the team stayed together for almost 20 years. Part was due to the high unemployment in South Wales during much of the period, but above all it shows the attraction to individuals of the freedom to express their creativity. And that applied in spite of an over-heavy workload.


Not only did we own all software source code, but in general the original writer was still around for the next stage of evolution. Owning software meant being responsible for updating systems whenever it suited, usually when resources could be spared. Several SCADA software versions were in use on plant at any one time, but they were upgraded whenever possible, both in the interests of releasing more comprehensive facilities and to retire older versions to ease the support process. This is in bleak contrast to those using bought-in software, where systems have to be upgraded on a timescale designed to ease the supplier’s support problems, or be left in a time warp. Purchase of software designed for the system always included ownership of the source code, so as to remain in complete control of our destiny, leaving open the options to update it ourselves or to hand the job to a contractor with the appropriate software skills.


Once it became obvious that Ferranti were failing we considered porting the software onto other hardware, perhaps using a small proprietary real time operating system such as PSOS, PDOS or Wind River. Only DEC had a compiler for the UK language Coral, making conversion to ‘C’ a logical alternative. But reality had to be faced, and that prohibited diverting scarce software resources from the more urgent task of producing applications for a steel plant struggling for survival.


By my retirement in 1996 nine SCADA systems were still running on Argus 700 hardware and our software, covering coal/coke, sinter, blast furnace 4, blast furnace 5, power generation, energy control, reheat furnaces, and the galvanising and annealing plants. GEC Projects had a monopoly on the supply of computer systems for the rolling mills.




13) STEELMAKING (enquiry Dec 1992, requirements Aug 93, design May 94, on-line with Lab May 94 and BOS from Dec 94)

The task was to replace on-line a troublesome, poorly understood system on the Basic Oxygen Steel plant (BOS), and to do it without disturbing plant operation. It consisted of Ferranti Argus computers and an Ethernet full of DEC computers, which had grown for 20 years with software written by a succession of external contractors.


Steel-making is a unique process whose batch nature cannot be ignored, as for the first time in the steel process material is being made to meet specific orders. To make it more difficult, many of the vital material transfers and weighings are carried out by mobile plant, be it the delivery of ‘hot metal’ (blast furnace iron) by rail in torpedoes and then its transfer to ladles which are carried to the BOS vessel by crane, or the 30% scrap, also weighed, mixed and delivered by crane. Raw material additives, anything but free flowing, are weighed out as accurately as possible and added to the vessel to provide detailed control of the steel composition. Predictive static modelling defines the weight and composition needed for each constituent and is used to prepare batches several heats in advance, so any change to the planned loading of batches creates a material/heat model tracking problem which takes several ‘heats’ to eradicate.
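
As a purely illustrative toy of the kind of static calculation involved (not the Tees-side model; the formula, names and numbers below are assumptions), the weight of one additive needed to move a heat towards its target composition might be estimated like this:

    /* Toy of the kind of static charge calculation discussed above.  The
       formula and all numbers are illustrative assumptions only. */
    #include <stdio.h>

    /* additive mass (kg) = heat mass (kg) x (target% - predicted%) / 100
                            / (element fraction in additive x recovery yield) */
    static double additive_kg(double heat_kg, double target_pct, double predicted_pct,
                              double element_fraction, double recovery)
    {
        return heat_kg * (target_pct - predicted_pct) / 100.0
               / (element_fraction * recovery);
    }

    int main(void)
    {
        /* e.g. raising manganese from a predicted 0.15% to a 0.80% aim in a
           300-tonne heat using a 75% Mn alloy at 90% recovery */
        double kg = additive_kg(300000.0, 0.80, 0.15, 0.75, 0.90);
        printf("charge roughly %.0f kg of the alloy\n", kg);
        return 0;
    }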


Nevertheless there is much in a Basic Oxygen Plant, be it weighing, instrumentation or digital logic, to justify the SCADA approach chosen when the enquiries were issued. But at best SCADA could only cover a small but vital fraction of the requirements of this system. Unfortunately Ferranti Computers PLC had gone under, and with them the development life of the Port Talbot SCADA software, so an external supplier was needed. With hindsight it might have been quicker, better and far cheaper to 'port' our software using a small proprietary real time control operating system, as long contemplated but dismissed for lack of software manpower.


Most of the process industry had for years used DEC PDP11 computers with their associated RSX11 operating system and were happy with the choice. But by the time of this project DEC had moved to VAX/VMS, which led to many complaints in the process industry about its complexity and poor response times. It was hoped that the hardware power of the new 64-bit Alpha range would provide a better match for VMS, which was considered to have the functionality of a data processing mainframe operating system.


The choice of DEC computers also meant moving from the easy fault isolation of simple radial serial links to Ethernet, which had already led to a few catastrophic failures on site. Ethernet provided a thousand-fold increase in speed of data transfer, which dropped to perhaps fifty-fold when compared with 32 parallel radial lines, many working at near saturation throughput. Network hubs and bridges were installed to segregate the network, to improve visibility and to ease maintenance. The beauty of a radial configuration is that almost any failure affects only a single line and that maintenance is straightforward and rapid, for there is little chance of a common mode occurrence causing total network failure. When an Ethernet network fails in a real time control context, in a world where operators are blind without functioning computers, such failures are devastating. Restarting the network and synchronising the computer databases is a complex and time consuming problem.


Replacement of the laboratory computers was handled entirely from Port Talbot with a single software engineer programming the DEC computers, working with a one-man software contractor expert in designing PC software interfaces to the analysers. That project went particularly smoothly, testament again to the value of avoiding irrelevant people interfaces.


The much bigger steel-making project was organised by forming a joint team with R&D (Tees-side Labs), and the computer supplier. Tees-side designed the models and the relational data base. They joined Port Talbot and the computer system supplier from the outset to detail the Functional Requirements of the project.


It was assumed that the supplier would be able to provide a sound basis of SCADA for the universal tasks, since his was a well advertised product, but we were dismayed to run into intractable problems in this area. These were associated with the bolted-on graphics and the lack of a user oriented SCADA set-up system, but above all with catastrophic data transcription faults. Data transcription, also used by IBM in process control, was a technique whereby scanned data was copied from the front-end scanning server to continually update the data bases in the other computer network nodes, including the work-stations used for every VDU. Whenever transcription went wrong there were wholesale discrepancies between the data bases in the various machines, including the work-station mimics. Such errors would commence as a result of a new measurement being added to the server (front-end), an essential part of both commissioning and on-line development. And to think that by then client-server, not transcription, was all the rage!
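
The contrast between the two models can be sketched abstractly (illustrative C; nothing here is the supplier's actual mechanism): under transcription every node keeps its own copy and depends on every change being pushed to it, so a point added at the server but unknown to one copy silently diverges, whereas a client-server display simply asks the single authoritative server for whatever it is currently showing.

    /* Abstract sketch of the two update models discussed above; illustrative only. */
    #include <stdio.h>
    #include <string.h>

    #define SERVER_POINTS 4          /* the front-end now scans 4 points...          */
    #define NODE_POINTS   3          /* ...but one workstation copy still knows of 3 */

    static double server_db[SERVER_POINTS] = { 1.0, 2.0, 3.0, 4.0 };
    static double node_copy[NODE_POINTS];   /* transcription target with a stale layout */

    /* Transcription: push the server's values into the node's own copy.  If the
       copies disagree about which points exist, the node silently shows wrong
       or missing data - the failure mode met on site. */
    static void transcribe(void)
    {
        memcpy(node_copy, server_db, sizeof node_copy);   /* point 4 never arrives */
    }

    /* Client-server: the display asks the single authoritative server each time. */
    static double client_read(int point)
    {
        return (point < SERVER_POINTS) ? server_db[point] : -1.0;
    }

    int main(void)
    {
        transcribe();
        printf("transcribed copy knows %d of %d points\n", NODE_POINTS, SERVER_POINTS);
        printf("client-server read of point 3: %.1f\n", client_read(3));
        return 0;
    }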


Compared to this proprietary system the Port Talbot SCADA was a world beater; perhaps it was, for many knowledgeable outsiders were impressed. After all this was no amateur effort, for Port Talbot was the first to commission a VDU-only control room via SCADA on a large industrial process plant, and possibly the first UK industrial user of true multi-colour character VDUs. Port Talbot was first to commission the 16-bit Ferranti 700E processor in direct control (blast furnace charging), and first to commission TCS devices in an on-line computer system, in both cases in advance of the equipment manufacturers themselves. We were also quick to apply Intel’s first 8-bit microprocessors and the UART chip to serial communications. These were leads we feel confident were maintained in SCADA until reaching the end of the line with the demise of Ferranti Computers plc.


Digital Equipment’s PDP11 had been the software team’s preferred 16-bit CPU, whereas at the time I was still influenced by Ferranti’s experience in industrial control software, especially Consul H. The irony is that had the PDP11 been chosen as the basis of our SCADA we might well have succeeded in licensing the software to TCS, who for marketing reasons needed a system running on a computer with a world-wide pedigree. Though Digital were themselves soon to disappear without trace, just like Ferranti Computers.





ACKNOWLEDGEMENTS

I acknowledge the fine opportunity offered by the steel industry. I bear a particular debt to Derrick Harvey, who formed the Automation Development Department in 1966 and gave unwavering support. I also fully acknowledge that this was a team effort and have tried to credit individual contributions where appropriate in the text. There are many who have been missed, and, at the risk of offending others, I would like to pay credit to Jeff Groom, who led the Annealing and Coal/Coke projects, and to Clint Budd, Jerry Larsen and Bob Davies, who were long term members of the software team. It should also be made clear that the contracts for instrumentation, electrics and PLCs, and their associated automatic control systems, were normally handled by other engineers in the Project Department, usually on a turn-key basis.






