The Whole is something more than, and different from, the sum of its parts
In the 1920s, some fifty years before Philip Anderson's article "More is Different" (which can be regarded as one of the founding texts of complex systems theory), a group of psychologists, Kurt Koffka, Wolfgang Köhler and Max Wertheimer, trained in the so-called Berlin School and later regarded as the founders of Gestalt psychology, took up the study of an extremely complex process: perception, and in particular human visual perception. They formulated the laws of the formation of phenomenal units, also known as laws of segmentation, factors of unification of the visual field, or Gestalt principles, which govern the factors favoring the grouping or unification of perceptual elements into a unitary whole that determines their meaning. The central theme is the perception and meaningful organization of the figure/ground complex:
Although these six laws, together with a seventh expressed in three senses, were formulated and applied strictly to the study of visual perception, some of them individually, and all of them taken together, have a more general validity, which can be summed up as "The Whole is something more than, and different from, the sum of its parts": the thesis proposed by Anderson as a basis for the study of complexity.
1. law of proximity: within a visual scene, the elements closest to one another are perceived as a whole. In other words, other conditions being equal, the variable that makes a unitary figure emerge is the relative distance of the elements composing it: the region bounded by the margins closest to one another takes on the role of figure, and the elements of the perceptual field are united into forms with greater cohesion the smaller the distance between them.
In the figure the lines are not perceived singly but as a series of pairs.
One therefore sees four narrow columns, not three wide ones.
In systems and complexity theory, too, elements and processes are considered to interact more strongly the closer they are, both physically and conceptually as a model.
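The proximity law lends itself to a small computational sketch (mine, not the Gestaltists'; the positions, threshold and function name are invented for illustration): elements are grouped into one perceptual unit whenever the gap to their neighbour is small.

```python
def group_by_proximity(positions, max_gap):
    """Group sorted 1-D positions into clusters separated by gaps > max_gap."""
    positions = sorted(positions)
    clusters = [[positions[0]]]
    for p in positions[1:]:
        if p - clusters[-1][-1] <= max_gap:
            clusters[-1].append(p)   # close enough: same perceptual unit
        else:
            clusters.append([p])     # large gap: a new unit begins
    return clusters

# Eight lines spaced as pairs: gap 1 within a pair, gap 4 between pairs.
lines = [0, 1, 5, 6, 10, 11, 15, 16]
print(group_by_proximity(lines, max_gap=2))
# four clusters of two lines each: the "four narrow columns"
```

The same grouping rule, applied to concepts instead of line positions, is what the text later calls proximity in a semantic network.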
2. law of similarity: states that elements which appear identical, or at least similar, are perceived together. Elements that share some kind of similarity, in color, shape, movement, or position (orientation), tend to be unified with one another: the elements are united into forms with greater cohesion the greater their similarity.
In the figure we see two horizontal rows, each made of two lines of dark dots, separated by a row of two lines of light dots.
3. law of closure: states that we are predisposed to supply missing information in order to close a figure and distinguish it from its ground. Closed margins, or margins that tend to join, therefore impose themselves as units of form over open ones.
According to this principle, the mind and the human eye tend to see as closed figures that in reality are not. If two semicircles are drawn side by side, with their curved sides facing outward, the eye tends to close the figure, so that the two units are perceived as a single circle.
Observing the figure, one perceives a circumference by completing the missing parts.
Many other forms, following the same principle, tend to be closed and completed in similar ways, partly on the basis of past experience; for example, the WWF logo:
In systems theory, too, the closure of a process, the system loop used to implement or model a feedback that produces stability or runaway, has historically been a fundamental concept, later applied more generally to autopoietic living systems. In models such as SOP, the concept of system closure is fundamental for the meta-description of complexity at logical level 2.
It should be noted that these laws describe the interpretations commonly made in visual perception, almost certainly of genetic origin and developed during evolution; they do not describe "reality" but only a "construction" of it, one that tends to simplify recognition and to group the complexity of the real figure into a simple, comprehensible and meaningful form. In the first case, for example, one perceives four narrow columns, but what actually "exists" is eight vertical lines with different spacings; in the second case, 42 dots of two different colors; in the third, six opposed square brackets, or two semicircles. The three laws of proximity, similarity and closure also apply to the network of concepts and ideas that provides meaning; for example, Douglas Hofstadter in "Gödel, Escher, Bach" presents an example of the author's semantic network:
All the people at this party
They've got a lot of style
They've got stamps of many countries
They've got passport smiles
Some are friendly
Some are cutting
Some are watching it from the wings
Some are standing in the centre
Giving to get something
Photo Beauty gets attention
Then her eye paint's running down
She's got a rose in her teeth
And a lampshade crown
One minute she's so happy
Then she's crying on someone's knee
Saying laughing and crying
You know it's the same release
I told you when I met you
I was crazy
Cry for us all Beauty
Cry for Eddie in the corner
Thinking he's nobody
And Jack behind his joker
And stone-cold Grace behind her fan
And me in my frightened silence
Thinking I don't understand
I feel like I'm sleeping
Can you wake me
You seem to have a broader sensibility
I'm just living on nerves and feelings
With a weak and a lazy mind
And coming to people's parties
Fumbling deaf dumb and blind
I wish I had more sense of humor
Keeping the sadness at bay
Throwing the lightness on these things
Laughing it all away
Laughing it all away
Laughing it all away
In governing men and in serving Heaven
nothing is better than parsimony,
for only parsimony puts obtaining first.
To put obtaining first means to accumulate virtue.
He who accumulates virtue subdues all things;
when he subdues all things
no one knows his limit;
when no one knows his limit
he can possess the kingdom.
He who possesses the mother of the kingdom
can long endure.
This is called
sinking the roots deep and making firm the trunk,
the way of long life and eternal youth.
The experience of resting in the heart during meditation is not something that can be grasped or forced. It happens naturally, as we grow more and more attuned to the rhythms of our inner silences. The figure on this card reflects the sweetness and delicacy of this experience. The dolphins that emerge from the heart and arc toward the third eye reflect the playfulness and intelligence that arise when we are able to connect with the heart and act in the world from there. In this moment, allow yourself to be more gentle and more receptive, for an inexpressible joy awaits you just around the corner. No one else can point it out to you, and when you find it, you will find no words to express it to others. Yet it is there, in the depths of your heart, ripe and ready to be discovered.
Listen to the heart, move in tune with the heart, whatever the cost: "A condition of complete simplicity, costing not less than everything..." To be simple is arduous, because its price is everything you have. You must lose everything in order to be simple. That is why people have chosen to be complex and have forgotten how to be simple. Yet only a simple heart throbs with divine longing and walks hand in hand with God. Only a simple heart sings with him in deep harmony. To reach this point, you will have to discover your heart, your longing, your true heartbeat.
T+62 seconds: Smith, intercom: "Thirty-five thousand, going through one point five."
T+68 seconds: CAPCOM: "Challenger, go at throttle up". Scobee: "Roger, go at throttle up".
T+73 seconds: Smith, intercom: "Uh oh..."
T+89 seconds: Flight: "FIDO, trajectories" FIDO: "Go ahead." Flight: "Trajectory, FIDO" FIDO: "Flight, FIDO, filters (radar) got discreting sources. We're go." FIDO: "Flight, FIDO, till we get stuff back he's on his cue card for abort modes" Flight: "Procedures, any help?" Unknown: "Negative, flight, no data." GC: "Flight, GC, we've had negative contact, loss of downlink (of radio voice or data from Challenger)." Flight: "OK, all operators, watch your data carefully."
T+1 min. 56 seconds: PAO: "Flight controllers here are looking very carefully at the situation. Obviously a major malfunction."
T+2 min. 1 second: GC: "Flight, GC, negative downlink." Flight: "Copy."
T+2 min. 8 seconds: PAO: "We have no downlink."
T+2 min. 25 seconds: FIDO: "Flight, FIDO." Flight: "Go ahead." FIDO: "RSO reports vehicle exploded." Flight (after a long pause): "Copy. FIDO, can we get any reports from recovery forces?" FIDO: "Stand by."
T+2 min. 45 seconds: Flight: "GC, all operators, contingency procedures in effect."
Following the disaster of January 28, 1986, in which Space Shuttle Challenger was lost on mission STS-51-L after 73 seconds of flight, the presidential commission of inquiry issued a final report containing observations and analysis on the reliability of the Shuttle by one of the most important physicists in history, Richard P. Feynman. His account has become a classic on the relationship between engineering and the management of highly complex projects.
Appendix F - Personal observations on the reliability of the Shuttle
It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask "What is the cause of management's fantastic faith in the machinery?"
We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.
There are several sources of information. There are published criteria for certification, including a history of modifications in the form of waivers and deviations. In addition, the records of the Flight Readiness Reviews for each flight document the arguments used to accept the risks of the flight. Information was obtained from the direct testimony and the reports of the range safety officer, Louis J. Ullian, with respect to the history of success of solid fuel rockets. There was a further study by him (as chairman of the launch abort safety panel (LASP)) in an attempt to determine the risks involved in possible accidents leading to radioactive contamination from attempting to fly a plutonium power supply (RTG) for future planetary missions. The NASA study of the same question is also available. For the History of the Space Shuttle Main Engines, interviews with management and engineers at Marshall, and informal interviews with engineers at Rocketdyne, were made. An independent (Cal Tech) mechanical engineer who consulted for NASA about engines was also interviewed informally. A visit to Johnson was made to gather information on the reliability of the avionics (computers, sensors, and effectors). Finally there is a report "A Review of Certification Practices, Potentially Applicable to Man-rated Reusable Rocket Engines," prepared at the Jet Propulsion Laboratory by N. Moore, et al., in February, 1986, for NASA Headquarters, Office of Space Flight. It deals with the methods used by the FAA and the military to certify their gas turbine and rocket engines. These authors were also interviewed informally.
An estimate of the reliability of solid rockets was made by the range safety officer, by studying the experience of all previous rocket flights. Out of a total of nearly 2,900 flights, 121 failed (1 in 25). This includes, however, what may be called, early errors, rockets flown for the first few times in which design errors are discovered and fixed. A more reasonable figure for the mature rockets might be 1 in 50. With special care in the selection of parts and in inspection, a figure of below 1 in 100 might be achieved but 1 in 1,000 is probably not attainable with today's technology. (Since there are two rockets on the Shuttle, these rocket failure rates must be doubled to get Shuttle failure rates from Solid Rocket Booster failure.)
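The arithmetic in this paragraph can be made explicit (a sketch of my own, using only the figures quoted above):

```python
# Range safety officer's data: 121 failures in nearly 2,900 solid rocket
# flights, and a "mature rocket" rate of 1 in 50. With two independent
# boosters per Shuttle, the Shuttle-level failure rate from the boosters
# alone is roughly double the per-booster rate (for small rates).
failures, flights = 121, 2900
p_observed = failures / flights          # the text rounds this to "1 in 25"
p_mature = 1 / 50                        # text's figure for mature rockets
p_shuttle = 1 - (1 - p_mature) ** 2      # either of the two boosters may fail
print(f"observed per-flight failure rate: {p_observed:.4f}")
print(f"Shuttle failure rate from SRBs:  {p_shuttle:.4f}  (~ 2 x {p_mature})")
```

For small probabilities, 1 - (1-p)^2 is approximately 2p, which is why the text simply says the rates "must be doubled."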
NASA officials argue that the figure is much lower. They point out that these figures are for unmanned rockets but since the Shuttle is a manned vehicle "the probability of mission success is necessarily very close to 1.0." It is not very clear what this phrase means. Does it mean it is close to 1 or that it ought to be close to 1? They go on to explain "Historically this extremely high degree of mission success has given rise to a difference in philosophy between manned space flight programs and unmanned programs; i.e., numerical probability usage versus engineering judgment." (These quotations are from "Space Shuttle Data for Planetary Mission RTG Safety Analysis," Pages 3-1, 3-1, February 15, 1985, NASA, JSC.) It is true that if the probability of failure was as low as 1 in 100,000 it would take an inordinate number of tests to determine it (you would get nothing but a string of perfect flights from which no precise figure, other than that the probability is likely less than the number of such flights in the string so far). But, if the real probability is not so small, flights would show troubles, near failures, and possible actual failures with a reasonable number of trials, and standard statistical methods could give a reasonable estimate. In fact, previous NASA experience had shown, on occasion, just such difficulties, near accidents, and accidents, all giving warning that the probability of flight failure was not so very small. The inconsistency of the argument not to determine reliability through historical experience, as the range safety officer did, is that NASA also appeals to history, beginning "Historically this high degree of mission success..."
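Feynman's statistical point, that a string of perfect flights yields only an upper bound on the failure probability, can be sketched as follows (the 95% confidence level, and the resulting "rule of three" bound, are my choice of illustration, not in the text):

```python
import math

def upper_bound_95(n_successes):
    """Largest failure probability p consistent with n flawless flights
    at 95% confidence: the largest p with (1 - p)**n >= 0.05."""
    return 1 - 0.05 ** (1 / n_successes)

# The bound shrinks only like ~3/n, so confirming a rate of 1 in 100,000
# would require on the order of 100,000 consecutive flawless flights.
for n in (24, 100, 1000):
    print(n, round(upper_bound_95(n), 5))
```

This is why a modest number of flights, even all successful, can never by itself justify management's 1-in-100,000 figure.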
Finally, if we are to replace standard numerical probability usage with engineering judgment, why do we find such an enormous disparity between the management estimate and the judgment of the engineers? It would appear that, for whatever purpose, be it for internal or external consumption, the management of NASA exaggerates the reliability of its product, to the point of fantasy.
The history of the certification and Flight Readiness Reviews will not be repeated here. (See other part of Commission reports.) The phenomenon of accepting for flight, seals that had shown erosion and blow-by in previous flights, is very clear. The Challenger flight is an excellent example. There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more leading to catastrophe?
In spite of these variations from case to case, officials behaved as if they understood it, giving apparently logical arguments to each other often depending on the "success" of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted, there was "a safety factor of three." This is a strange use of the engineer's term, "safety factor." If a bridge is built to withstand a certain load without the beams permanently deforming, cracking, or breaking, it may be designed for the materials used to actually stand up under three times the load. This "safety factor" is to allow for uncertain excesses of load, or unknown extra loads, or weaknesses in the material that might have unexpected flaws, etc. If now the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all; even though the bridge did not actually collapse because the crack went only one-third of the way through the beam. The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.
There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting. To be more detailed, it was supposed a stream of hot gas impinged on the O-ring material, and the heat was determined at the point of stagnation (so far, with reasonable physical, thermodynamic laws). But to determine how much rubber eroded it was assumed this depended only on this heat by a formula suggested by data on a similar material. A logarithmic plot suggested a straight line, so it was supposed that the erosion varied as the .58 power of the heat, the .58 being determined by a nearest fit. At any rate, adjusting some other numbers, it was determined that the model agreed with the erosion (to depth of one-third the radius of the ring). There is nothing much so wrong with this as believing the answer! Uncertainties appear everywhere. How strong the gas stream might be was unpredictable, it depended on holes formed in the putty. Blow-by showed that the ring might fail even though not, or only partially eroded through. The empirical formula was known to be uncertain, for it did not go directly through the very data points by which it was determined. There were a cloud of points some twice above, and some twice below the fitted curve, so erosions twice predicted were reasonable from that cause alone. Similar uncertainties surrounded the other constants in the formula, etc., etc. When using a mathematical model careful attention must be given to uncertainties in the model.
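The curve-fitting procedure described, a straight line on a log-log plot giving a power law in the heat, can be sketched with invented data (the measurements below are hypothetical; only the exponent near .58 echoes the text):

```python
import math

# Hypothetical (heat, erosion) measurements, scattered around a power law.
heat    = [1.0, 2.0, 4.0, 8.0, 16.0]
erosion = [0.10, 0.15, 0.22, 0.33, 0.50]

# Least-squares straight line through (log heat, log erosion):
# log(erosion) = log(a) + b * log(heat), i.e. erosion = a * heat**b.
x = [math.log(h) for h in heat]
y = [math.log(e) for e in erosion]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)      # slope = exponent of the power law
a = math.exp(my - b * mx)
print(f"erosion ~ {a:.3f} * heat^{b:.2f}")
```

Feynman's warning applies exactly here: if the data points scatter a factor of two above and below the fitted curve, then erosions twice the prediction are plausible from that cause alone, whatever the fitted exponent says.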
During the flight of 51-L the three Space Shuttle Main Engines all worked perfectly, even, at the last moment, beginning to shut down the engines as the fuel supply began to fail. The question arises, however, as to whether, had it failed, and we were to investigate it in as much detail as we did the Solid Rocket Booster, we would find a similar lack of attention to faults and a deteriorating reliability. In other words, were the organization weaknesses that contributed to the accident confined to the Solid Rocket Booster sector or were they a more general characteristic of NASA? To that end the Space Shuttle Main Engines and the avionics were both investigated. No similar study of the Orbiter, or the External Tank were made.
The engine is a much more complicated structure than the Solid Rocket Booster, and a great deal more detailed engineering goes into it. Generally, the engineering seems to be of high quality and apparently considerable attention is paid to deficiencies and faults found in operation.
The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually. As deficiencies and design errors are noted they are corrected and verified with further testing. Since one tests only parts at a time these tests and modifications are not overly expensive. Finally one works up to the final design of the entire engine, to the necessary specifications. There is a good chance, by this time that the engine will generally succeed, or that any failures are easily isolated and analyzed because the failure modes, limitations of materials, etc., are so well understood. There is a very good chance that the modifications to the engine to get around the final difficulties are not very hard to make, for most of the serious problems have already been discovered and dealt with in the earlier, less expensive, stages of the process.
The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes. For example, cracks have been found in the turbine blades of the high pressure oxygen turbopump. Are they caused by flaws in the material, the effect of the oxygen atmosphere on the properties of the material, the thermal stresses of startup or shutdown, the vibration and stresses of steady running, or mainly at some resonance at certain speeds, etc.? How long can we run from crack initiation to crack failure, and how does this depend on power level? Using the completed engine as a test bed to resolve such questions is extremely expensive. One does not wish to lose an entire engine in order to find out where and how failure occurs. Yet, an accurate knowledge of this information is essential to acquire a confidence in the engine reliability in use. Without detailed understanding, confidence can not be attained.
A further disadvantage of the top-down method is that, if an understanding of a fault is obtained, a simple fix, such as a new shape for the turbine housing, may be impossible to implement without a redesign of the entire engine.
The Space Shuttle Main Engine is a very remarkable machine. It has a greater ratio of thrust to weight than any previous engine. It is built at the edge of, or outside of, previous engineering experience. Therefore, as expected, many different kinds of flaws and difficulties have turned up. Because, unfortunately, it was built in the top-down manner, they are difficult to find and fix. The design aim of a lifetime of 55 missions equivalent firings (27,000 seconds of operation, either in a mission of 500 seconds, or on a test stand) has not been obtained. The engine now requires very frequent maintenance and replacement of important parts, such as turbopumps, bearings, sheet metal housings, etc. The high-pressure fuel turbopump had to be replaced every three or four mission equivalents (although that may have been fixed, now) and the high pressure oxygen turbopump every five or six. This is at most ten percent of the original specification. But our main concern here is the determination of reliability.
In a total of about 250,000 seconds of operation, the engines have failed seriously perhaps 16 times. Engineering pays close attention to these failings and tries to remedy them as quickly as possible. This it does by test studies on special rigs experimentally designed for the flaws in question, by careful inspection of the engine for suggestive clues (like cracks), and by considerable study and analysis. In this way, in spite of the difficulties of top-down design, through hard work, many of the problems have apparently been solved.
A list of some of the problems follows. Those followed by an asterisk (*) are probably solved:
1. Turbine blade cracks in high pressure fuel turbopumps (HPFTP). (May have been solved.)
2. Turbine blade cracks in high pressure oxygen turbopumps (HPOTP).
11. Flight acceleration safety cutoff system (partial failure in a redundant system).*
12. Bearing spalling (partially solved).
13. A vibration at 4,000 Hertz making some engines inoperable, etc.
Many of these solved problems are the early difficulties of a new design, for 13 of them occurred in the first 125,000 seconds and only three in the second 125,000 seconds. Naturally, one can never be sure that all the bugs are out, and, for some, the fix may not have addressed the true cause. Thus, it is not unreasonable to guess there may be at least one surprise in the next 250,000 seconds, a probability of 1/500 per engine per mission. On a mission there are three engines, but some accidents would possibly be contained, and only affect one engine. The system can abort with only two engines. Therefore let us say that the unknown surprises do not, even of themselves, permit us to guess that the probability of mission failure due to the Space Shuttle Main Engine is less than 1/500. To this we must add the chance of failure from known, but as yet unsolved, problems (those without the asterisk in the list above). These we discuss below. (Engineers at Rocketdyne, the manufacturer, estimate the total probability as 1/10,000. Engineers at Marshall estimate it as 1/300, while NASA management, to whom these engineers report, claims it is 1/100,000. An independent engineer consulting for NASA thought 1 or 2 per 100 a reasonable estimate.)
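The 1/500-per-engine estimate follows from simple arithmetic on the figures above (a sketch; the "at least one surprise per 250,000 seconds" is Feynman's guess, restated in code):

```python
# One surprise expected in the next 250,000 seconds of engine operation,
# with a nominal mission burn of about 500 seconds per engine.
seconds_per_mission = 500
surprise_rate = 1 / 250_000              # surprises per second of operation
p_per_engine = surprise_rate * seconds_per_mission
print(p_per_engine)                      # 1/500 per engine per mission
# Three engines fly per mission, but some failures would be contained to
# one engine and a two-engine abort is possible, so Feynman keeps 1/500
# as a floor rather than tripling it.
```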
The history of the certification principles for these engines is confusing and difficult to explain. Initially the rule seems to have been that two sample engines must each have had twice the time operating without failure as the operating time of the engine to be certified (rule of 2x). At least that is the FAA practice, and NASA seems to have adopted it, originally expecting the certified time to be 10 missions (hence 20 missions for each sample). Obviously the best engines to use for comparison would be those of greatest total (flight plus test) operating time -- the so-called "fleet leaders." But what if a third sample and several others fail in a short time? Surely we will not be safe because two were unusual in lasting longer. The short time might be more representative of the real possibilities, and in the spirit of the safety factor of 2, we should only operate at half the time of the short-lived samples.
The slow shift toward decreasing safety factor can be seen in many examples. We take that of the HPFTP turbine blades. First of all the idea of testing an entire engine was abandoned. Each engine number has had many important parts (like the turbopumps themselves) replaced at frequent intervals, so that the rule must be shifted from engines to components. We accept an HPFTP for a certification time if two samples have each run successfully for twice that time (and of course, as a practical matter, no longer insisting that this time be as large as 10 missions). But what is "successfully?" The FAA calls a turbine blade crack a failure, in order, in practice, to really provide a safety factor greater than 2. There is some time that an engine can run between the time a crack originally starts until the time it has grown large enough to fracture. (The FAA is contemplating new rules that take this extra safety time into account, but only if it is very carefully analyzed through known models within a known range of experience and with materials thoroughly tested. None of these conditions apply to the Space Shuttle Main Engine.)
Cracks were found in many second stage HPFTP turbine blades. In one case three were found after 1,900 seconds, while in another they were not found after 4,200 seconds, although usually these longer runs showed cracks. To follow this story further we shall have to realize that the stress depends a great deal on the power level. The Challenger flight was to be at, and previous flights had been at, a power level called 104% of rated power level during most of the time the engines were operating. Judging from some material data it is supposed that at the level 104% of rated power level, the time to crack is about twice that at 109% or full power level (FPL). Future flights were to be at this level because of heavier payloads, and many tests were made at this level. Therefore dividing time at 104% by 2, we obtain units called equivalent full power level (EFPL). (Obviously, some uncertainty is introduced by that, but it has not been studied.) The earliest cracks mentioned above occurred at 1,375 EFPL.
Now the certification rule becomes "limit all second stage blades to a maximum of 1,375 seconds EFPL." If one objects that the safety factor of 2 is lost it is pointed out that the one turbine ran for 3,800 seconds EFPL without cracks, and half of this is 1,900 so we are being more conservative. We have fooled ourselves in three ways. First we have only one sample, and it is not the fleet leader, for the other two samples of 3,800 or more seconds had 17 cracked blades between them. (There are 59 blades in the engine.) Next we have abandoned the 2x rule and substituted equal time. And finally, 1,375 is where we did see a crack. We can say that no crack had been found below 1,375, but the last time we looked and saw no cracks was 1,100 seconds EFPL. We do not know when the crack formed between these times, for example cracks may have formed at 1,150 seconds EFPL. (Approximately 2/3 of the blade sets tested in excess of 1,375 seconds EFPL had cracks. Some recent experiments have, indeed, shown cracks as early as 1,150 seconds.) It was important to keep the number high, for the Challenger was to fly an engine very close to the limit by the time the flight was over.
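The EFPL bookkeeping can be restated as a one-line conversion (a sketch; the function name and sample figures are mine):

```python
def efpl(seconds_at_104, seconds_at_109=0):
    """Equivalent-full-power-level seconds: time at 104% of rated power is
    assumed half as damaging as time at 109% (full power), so it counts
    at half weight."""
    return seconds_at_104 / 2 + seconds_at_109

print(efpl(2000))          # 1000.0 EFPL from 2000 s at 104% power
print(efpl(2000, 375))     # 1375.0 EFPL, mixing 104% and full-power time
```

The conversion itself rests on the "time to crack is about twice as long at 104%" judgment from material data, whose uncertainty, as the text notes, was never studied.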
Finally it is claimed that the criteria are not abandoned, and the system is safe, by giving up the FAA convention that there should be no cracks, and considering only a completely fractured blade a failure. With this definition no engine has yet failed. The idea is that since there is sufficient time for a crack to grow to a fracture we can insure that all is safe by inspecting all blades for cracks. If they are found, replace them, and if none are found we have enough time for a safe mission. This makes the crack problem not a flight safety problem, but merely a maintenance problem.
This may in fact be true. But how well do we know that cracks always grow slowly enough that no fracture can occur in a mission? Three engines have run for long times with a few cracked blades (about 3,000 seconds EFPL) with no blades broken off.
But a fix for this cracking may have been found. By changing the blade shape, shot-peening the surface, and covering with insulation to exclude thermal shock, the blades have not cracked so far.
A very similar story appears in the history of certification of the HPOTP, but we shall not give the details here.
It is evident, in summary, that the Flight Readiness Reviews and certification rules show a deterioration for some of the problems of the Space Shuttle Main Engine that is closely analogous to the deterioration seen in the rules for the Solid Rocket Booster.
Avionics
By "avionics" is meant the computer system on the Orbiter as well as its input sensors and output actuators. At first we will restrict ourselves to the computers proper and not be concerned with the reliability of the input information from the sensors of temperature, pressure, etc., nor with whether the computer output is faithfully followed by the actuators of rocket firings, mechanical controls, displays to astronauts, etc.
The computer system is very elaborate, having over 250,000 lines of code. It is responsible, among many other things, for the automatic control of the entire ascent to orbit, and for the descent until well into the atmosphere (below Mach 1) once one button is pushed deciding the landing site desired. It would be possible to make the entire landing automatically (except that the landing gear lowering signal is expressly left out of computer control, and must be provided by the pilot, ostensibly for safety reasons) but such an entirely automatic landing is probably not as safe as a pilot controlled landing. During orbital flight it is used in the control of payloads, in displaying information to the astronauts, and the exchange of information to the ground. It is evident that the safety of flight requires guaranteed accuracy of this elaborate system of computer hardware and software.
In brief, the hardware reliability is ensured by having four essentially independent identical computer systems. Where possible each sensor also has multiple copies, usually four, and each copy feeds all four of the computer lines. If the inputs from the sensors disagree, depending on circumstances, certain averages, or a majority selection is used as the effective input. The algorithm used by each of the four computers is exactly the same, so their inputs (since each sees all copies of the sensors) are the same. Therefore at each step the results in each computer should be identical. From time to time they are compared, but because they might operate at slightly different speeds a system of stopping and waiting at specific times is instituted before each comparison is made. If one of the computers disagrees, or is too late in having its answer ready, the three which do agree are assumed to be correct and the errant computer is taken completely out of the system. If, now, another computer fails, as judged by the agreement of the other two, it is taken out of the system, and the rest of the flight canceled, and descent to the landing site is instituted, controlled by the two remaining computers. It is seen that this is a redundant system since the failure of only one computer does not affect the mission. Finally, as an extra feature of safety, there is a fifth independent computer, whose memory is loaded with only the programs of ascent and descent, and which is capable of controlling the descent if there is a failure of more than two of the computers of the main line four.
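The redundancy scheme just described can be sketched in miniature. This is a hypothetical illustration of the two voting ideas, median selection over sensor copies and majority voting over computers, not the actual flight software:

```python
# Hypothetical sketch of the avionics redundancy described above: redundant
# sensor copies are reduced to one effective input, and a computer whose
# result disagrees with the majority is voted out of the system.

from statistics import median

def effective_input(sensor_copies):
    """Combine redundant sensor readings; the median tolerates one outlier."""
    return median(sensor_copies)

def vote(results, active):
    """Return the majority result and the set of computers still trusted."""
    majority = max({results[i] for i in active},
                   key=lambda r: sum(1 for i in active if results[i] == r))
    return majority, {i for i in active if results[i] == majority}

# Four computers run the same algorithm on the same effective input.
x = effective_input([99.8, 100.1, 100.0, 250.0])  # one sensor copy failed high
results = [x * 2, x * 2, x * 2, 0.0]              # computer 3 gives a wrong answer
out, active = vote(results, {0, 1, 2, 3})
print(active)  # {0, 1, 2} -- the errant computer is removed from the system
```

As in the text, losing one computer leaves a working majority of three, and a second loss still leaves two agreeing machines to fly the descent.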
There is not enough room in the memory of the main line computers for all the programs of ascent, descent, and payload programs in flight, so the memory is loaded about four times from tapes by the astronauts.
Because of the enormous effort required to replace the software for such an elaborate system, and for checking a new system out, no change has been made to the hardware since the system began about fifteen years ago. The actual hardware is obsolete; for example, the memories are of the old ferrite core type. It is becoming more difficult to find manufacturers to supply such old-fashioned computers reliably and of high quality. Modern computers are very much more reliable, can run much faster, simplifying circuits, and allowing more to be done, and would not require so much loading of memory, for the memories are much larger.
The software is checked very carefully in a bottom-up fashion. First, each new line of code is checked, then sections of code or modules with special functions are verified. The scope is increased step by step until the new changes are incorporated into a complete system and checked. This complete output is considered the final product, newly released. But completely independently there is an independent verification group, that takes an adversary attitude to the software development group, and tests and verifies the software as if it were a customer of the delivered product. There is additional verification in using the new programs in simulators, etc. A discovery of an error during verification testing is considered very serious, and its origin studied very carefully to avoid such mistakes in the future. Such unexpected errors have been found only about six times in all the programming and program changing (for new or altered payloads) that has been done. The principle that is followed is that all the verification is not an aspect of program safety; it is merely a test of that safety, in a non-catastrophic verification. Flight safety is to be judged solely on how well the programs do in the verification tests. A failure here generates considerable concern.
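The bottom-up progression can be shown with a toy example. This is not Shuttle code, and the throttle limits are assumed for illustration; the point is only the order of verification, with each unit trusted before the module built from it is tested:

```python
# Toy illustration of bottom-up verification: check the smallest unit first,
# then the module composed from it.  Not Shuttle code; limits are assumed.

def clamp(x, lo, hi):                  # smallest unit
    return max(lo, min(hi, x))

def throttle_command(demand):          # module composed from the unit
    return clamp(demand, 0.65, 1.09)   # assumed throttle bounds (65%..109%)

# Unit-level checks come first...
assert clamp(5, 0, 1) == 1
assert clamp(-5, 0, 1) == 0
# ...then module-level checks, with the unit already verified.
assert throttle_command(2.0) == 1.09
assert throttle_command(0.0) == 0.65
print("all checks passed")
```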
To summarize then, the computer software checking system and attitude is of the highest quality. There appears to be no process of gradually fooling oneself while degrading standards so characteristic of the Solid Rocket Booster or Space Shuttle Main Engine safety systems. To be sure, there have been recent suggestions by management to curtail such elaborate and expensive tests as being unnecessary at this late date in Shuttle history. This must be resisted for it does not appreciate the mutual subtle influences, and sources of error generated by even small changes of one part of a program on another. There are perpetual requests for changes as new payloads and new demands and modifications are suggested by the users. Changes are expensive because they require extensive testing. The proper way to save money is to curtail the number of requested changes, not the quality of testing for each.
One might add that the elaborate system could be very much improved by more modern hardware and programming techniques. Any outside competition would have all the advantages of starting over, and whether that is a good idea for NASA now should be carefully considered.
Finally, returning to the sensors and actuators of the avionics system, we find that the attitude to system failure and reliability is not nearly as good as for the computer system. For example, a difficulty was found with certain temperature sensors sometimes failing. Yet 18 months later the same sensors were still being used, still sometimes failing, until a launch had to be scrubbed because two of them failed at the same time. Even on a succeeding flight this unreliable sensor was used again. Again, the reaction control systems, the rocket jets used for reorienting and control in flight, are still somewhat unreliable. There is considerable redundancy, but a long history of failures, none of which has yet been extensive enough to seriously affect flight. The action of the jets is checked by sensors: if a jet fails to fire, the computers choose another jet to fire. But the jets are not designed to fail, and the problem should be solved.
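The jet failover just described amounts to a simple selection loop. A hypothetical sketch (the names and structure are invented for illustration):

```python
# Hypothetical sketch of the failover described above: each firing is
# confirmed by a sensor, and on failure the computer selects another jet.

def fire_with_failover(jets, did_fire):
    """Try redundant jets in priority order; return the one confirmed firing.

    jets:     candidate jet IDs for one maneuver, in priority order
    did_fire: callable standing in for the confirmation sensor
    """
    for jet in jets:
        if did_fire(jet):
            return jet
    return None  # every redundant jet failed: a genuine flight-safety event

failed = {"F1"}  # suppose jet F1 is the chronically unreliable one
used = fire_with_failover(["F1", "F2", "F3"], lambda j: j not in failed)
print(used)  # F2 -- the failure is masked, but the broken jet is still broken
```

Masking a failure with redundancy keeps the mission alive, but, as the text argues, it is no substitute for fixing hardware that is known to fail.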
Conclusions
If a reasonable launch schedule is to be maintained, engineering often cannot be done fast enough to keep up with the expectations of originally conservative certification criteria designed to guarantee a very safe vehicle. In these situations, subtly, and often with apparently logical arguments, the criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the order of a percent (it is difficult to be more accurate).
Official management, on the other hand, claims to believe the probability of failure is a thousand times less. One reason for this may be an attempt to assure the government of NASA perfection and success in order to ensure the supply of funds. The other may be that they sincerely believed it to be true, demonstrating an almost incredible lack of communication between themselves and their working engineers.
In any event this has had very unfortunate consequences, the most serious of which is to encourage ordinary citizens to fly in such a dangerous machine, as if it had attained the safety of an ordinary airliner. The astronauts, like test pilots, should know their risks, and we honor them for their courage. Who can doubt that McAuliffe was equally a person of great courage, who was closer to an awareness of the true risk than NASA management would have us believe?
Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources. For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.
Space Shuttle Challenger Memorial, Arlington National Cemetery
The next metastructure discussed by Tyler Volk and Jeff Bloom is time, and by extension calendars: along with space, the primary physical and psychological structure, one that marks movement, memory, counting, progression, sequence, and the dynamics of events and processes:
Background
Time can be considered a binary of movement and memory and can be observed by connecting several spaces. Time can be seen as an arrow or cycle. Time also is evident as counting, progression, and sequences.
Examples
In science: biological clocks, animal behavior, velocity, acceleration, time-space phenomena, etc.
In architecture and design: how time is defined and related to in particular contexts; at Arcosanti (an environmentally situated desert city in Arizona) all buildings are multiuse in order to minimize building use down-time; etc.
In art: in drama, music, dance, and other performance arts time is the fundamental organizing pattern, as well as fundamental to the perceptual experience; etc.
In social sciences: calendars, clocks, history, sequences and stages in development, etc.
In other senses: time to kill; wasting time; time management; timeliness; etc.
"The Ultimate Ground of Being" is Paul Tillich's decontaminated term for "God" and would also do for "the Self of the world" as I put it in my story for children. But the secret which my story slips over to the child is that the Ultimate Ground of Being is you. Not, of course, the everyday you which the Ground is assuming, or "pretending" to be, but that inmost Self which escapes inspection because it's always the inspector. This, then, is the taboo of taboos: you're IT!
Yet in our culture this is the touchstone of insanity, the blackest of blasphemies, and the wildest of delusions. This, we believe, is the ultimate in megalomania—an inflation of the ego to complete absurdity. For though we cultivate the ego with one hand, we knock it down with the other. From generation to generation we kick the stuffing out of our children to teach them to "know their place" and to behave, think, and feel with proper modesty as befits one little ego among many. As my mother used to say, "You're not the only pebble on the beach!"
Anyone in his right mind who believes that he is God should be crucified or burned at the stake, though now we take the more charitable view that no one in his right mind could believe such nonsense. Only a poor idiot could conceive himself as the omnipotent ruler of the world, and expect everyone else to fall down and worship.
But this is because we think of God as the King of the Universe, the Absolute Technocrat who personally and consciously controls every detail of his cosmos—and that is not the kind of God in my story. In fact, it isn't my story at all, for any student of the history of religions will know that it comes from ancient India, and is the mythical way of explaining the Vedanta philosophy. Vedanta is the teaching of the Upanishads, a collection of dialogues, stories, and poems, some of which go back to at least 800 B.C. Sophisticated Hindus do not think of God as a special and separate superperson who rules the world from above, like a monarch. Their God is "underneath" rather than "above" everything, and he (or it) plays the world from inside. One might say that if religion is the opium of the people, the Hindus have the inside dope. What is more, no Hindu can realize that he is God in disguise without seeing at the same time that this is true of everyone and everything else. In the Vedanta philosophy, nothing exists except God. There seem to be other things than God, but only because he is dreaming them up and making them his disguises to play hide-and-seek with himself. The universe of seemingly separate things is therefore real only for a while, not eternally real, for it comes and goes as the Self hides and seeks itself.
But Vedanta is much more than the idea or the belief that this is so. It is centrally and above all the experience, the immediate knowledge of its being so, and for this reason such a complete subversion of our ordinary way of seeing things. It turns the world inside out and outside in. Likewise, a saying attributed to Jesus runs:
When you make the two one, and when you make the inner as the outer and the outer as the inner and the above as the below ... then shall you enter [the Kingdom].... I am the Light that is above them all, I am the All, the All came forth from Me and the All attained to Me. Cleave a [piece of] wood, I am there; lift up the stone and you will find Me there.
Today the Vedanta discipline comes down to us after centuries of involvement with all the forms, attitudes, and symbols of Hindu culture in its flowering and slow demise over nearly 2,800 years, sorely wounded by Islamic fanaticism and corrupted by British puritanism. As often set forth, Vedanta rings no bell in the West, and attracts mostly the fastidiously spiritual and diaphanous kind of people for whom incarnation in a physical body is just too disgusting to be borne. But it is possible to state its essentials in a present-day idiom, and when this is done without exotic trappings, Sanskrit terminology, and excessive postures of spirituality, the message is not only clear to people with no special interest in "Oriental religions"; it is also the very jolt that we need to kick ourselves out of our isolated sensation of self.
But this must not be confused with our usual ideas of the practice of "unselfishness," which is the effort to identify with others and their needs while still under the strong illusion of being no more than a skin-contained ego. Such "unselfishness" is apt to be a highly refined egotism, comparable to the in-group which plays the game of "we're more-tolerant-than-you." The Vedanta was not originally moralistic; it did not urge people to ape the saints without sharing their real motivations, or to ape motivations without sharing the knowledge which sparks them.
For this reason The Book I would pass to my children would contain no sermons, no shoulds and oughts. Genuine love comes from knowledge, not from a sense of duty or guilt. How would you like to be an invalid mother with a daughter who can't marry because she feels she ought to look after you, and therefore hates you? My wish would be to tell, not how things ought to be, but how they are, and how and why we ignore them as they are. You cannot teach an ego to be anything but egotistic, even though egos have the subtlest ways of pretending to be reformed. The basic thing is therefore to dispel, by experiment and experience, the illusion of oneself as a separate ego. The consequences may not be behavior along the lines of conventional morality. It may well be as the squares said of Jesus, "Look at him! A glutton and a drinker, a friend of tax-gatherers and sinners!"
Furthermore, on seeing through the illusion of the ego, it is impossible to think of oneself as better than, or superior to, others for having done so. In every direction there is just the one Self playing its myriad games of hide-and-seek. Birds are not better than the eggs from which they have broken. Indeed, it could be said that a bird is one egg's way of becoming other eggs. Egg is ego, and bird is the liberated Self. There is a Hindu myth of the Self as a divine swan which laid the egg from which the world was hatched. Thus I am not even saying that you ought to break out of your shell. Sometime, somehow, you (the real you, the Self) will do it anyhow, but it is not impossible that the play of the Self will be to remain unawakened in most of its human disguises, and so bring the drama of life on earth to its close in a vast explosion.
Another Hindu myth says that as time goes on, life in the world gets worse and worse, until at last the destructive aspect of the Self, the god Shiva, dances a terrible dance which consumes everything in fire. There follow, says the myth, 4,320,000 years of total peace during which the Self is just itself and does not play hide. And then the game begins again, starting off as a universe of perfect splendor which begins to deteriorate only after 1,728,000 years, and every round of the game is so designed that the forces of darkness present themselves for only one third of the time, enjoying at the end a brief but quite illusory triumph.
Today we calculate the life of this planet alone in much vaster periods, but of all ancient civilizations the Hindus had the most imaginative vision of cosmic time. Yet remember, this story of the cycles of the world's appearance and disappearance is myth, not science, parable rather than prophecy. It is a way of illustrating the idea that the universe is like the game of hide-and-seek.
If, then, I am not saying that you ought to awaken from the ego-illusion and help save the world from disaster, why The Book? Why not sit back and let things take their course? Simply that it is part of "things taking their course" that I write. As a human being it is just my nature to enjoy and share philosophy. I do this in the same way that some birds are eagles and some doves, some flowers lilies and some roses. I realize, too, that the less I preach, the more likely I am to be heard.
Thus spoke the master programmer:
"Without the wind, the grass does not move. Without software, hardware is useless."
8.1
A novice asked the master: "I see that one computer company is much larger than all the others. It towers above its competitors like a giant among dwarfs. Any one of its divisions could comprise an entire business. Why is this so?"
The master replied: "Why do you ask such foolish questions? That company is large because it is large. If it made only hardware, nobody would buy it. If it made only software, nobody would use it. If it only maintained systems, people would treat it like a servant. But because it combines all of these things, people regard it as a god! By not seeking to strive, it conquers without effort."
8.2
One day a master programmer passed by a novice. The master noticed the novice's preoccupation with a hand-held video game. "Excuse me," he said, "may I examine it?"
The novice sprang to attention and handed the device to the master. "I see that the device claims to have three levels of play: Easy, Medium, and Hard," said the master. "Yet every device of this kind has another level of play, where the device seeks neither to conquer the human, nor to be conquered by the human."
"Pray, great master," implored the novice, "how does one find this mysterious setting?"
The master dropped the device to the ground and crushed it underfoot. And suddenly the novice was enlightened.
8.3
There was once a programmer who worked on microprocessors. "Look how well off I am here," he said to a mainframe programmer who came to visit. "I have my own operating system and my own mass storage. I do not have to share my resources with anyone. The software is self-consistent and easy to use. Why do you not quit your job and join me here?"
The mainframe programmer then began to describe his system to his friend, saying: "The mainframe sits like an ancient sage meditating in the midst of the data center. Its disks lie side by side like a great ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each one unique, move through the system like a swift-flowing river. That is why I am happy where I am."
The microprocessor programmer, upon hearing this, fell silent. But the two programmers remained friends for the rest of their lives.
8.4
Hardware met Software on the road to Changtse. Software said: "You are Yin and I am Yang. If we travel together, we will become famous and earn vast sums of money." And so they set forth together, thinking to conquer the world.
Presently they met Firmware, who was dressed in rags and hobbled along leaning on a thorny stick. Firmware said to them: "The Tao lies beyond Yin and Yang. It is silent and still like a pool of water. It does not seek fame; therefore nobody knows its presence. It does not seek fortune, for it is complete within itself. It exists beyond space and time."
Software and Hardware, ashamed, returned to their homes.