Ljerka Beus-Dukic welcomed everyone to this mainly-for-students event; it was good to see several non-student visitors too. The event was in two equal parts. First the four speakers each gave a short talk on how they used requirements in industry or commerce. Then the speakers formed a panel and answered questions from students about practice “in the real world”.
Phil Cantor (Smartstream Technologies) began by commenting that he had no idea how to get on to the Evening Standard’s Rich List by working in software. He found people from Amazon, Google, Facebook and Microsoft on the list – but even these were not exactly software engineers.
He was not sure about the importance of book-learning in RE either; commercial practice was probably mostly “winging it” (it was evident that this expression was new to the students; it seems to be Victorian actors’ slang for going on stage unprepared, relying on the prompter hidden in the wing of the stage for the lines) rather than following any academically-correct procedure. But he was quite sure that choosing the best technical solution was wrong: in commercial “real life”, time to market was key, since the first software product to market could take 80% or more of the business. Compared to this enormous competitive advantage, almost all technical requirements fade into insignificance.
Keith Derham (Barclays Capital) studied RE at Westminster after 6 years of life “in the real world”, so he came to academic study of requirements with genuine curiosity about what one should do to solve the many problems of requirements work. The students looked at him with genuine curiosity, too.
It was not just a matter of building software “the right way”. You could build a house “the right way” with walls that stood up and pipes that did not leak and a pretty white picket fence out the front, but if it did not deliver on its purpose, it was useless. A perfect 2-bed terrace house for a family of 10 would not be fit for purpose, however well-constructed.
Students would not find jobs advertised for “Requirement Engineers”. In fact job titles meant next to nothing anyway. Derham was a “Systems Integration Analyst” – he got system A to talk to system B; but the work could change from day to day.
In RE, “you have to be a pacifist”. You must not lean towards any solution; you have to break up a lot of fights. Everybody would like a big requirements document containing all the requirements, including all of their own: but that’s no good. Instead, you have to “keep all communications channels open”. That might mean taking minutes or arranging meetings. It was a team effort, working with people divided by a big financial organization into many small roles – requirements, development, rollout, service to clients. Everything is audited, traceable to the original decision-maker, controlled. It was very different from working in a small free-and-easy firm, with frequent requirements changes and short prototyping-style life cycles, whether or not those involve actual “Agile” practices.
RE, being unknown in Derham’s corner of the world, is “a very helpful secret weapon”! Students should make the most of their time learning it.
Ireri Ibarra (RPS Group) worked in a very different area – system safety. Here, risk implied a very large requirement not to cause death, injury or loss of property.
Requirements had to be provable, traceable and feasible. You had to show through very careful work that risk was mitigated. Sometimes, as in the aviation industry, there was heavy regulation to enforce a legal obligation to demonstrate safety. For instance, you might have to show – she apologized to the audience while displaying a grim clause from a safety standard – that the risk of death was no more than 1 in a million.
In other cases the regulatory hand was lighter, but it was still vital to ensure safety. Safety had to be built in to systems from the start: it could not be “bolted on” afterwards. Half the requirements came out of safety analysis.
Vesna Music (Delphi Diesel Systems) – her name is pronounced “moo-shich” – was also a systems engineer, working in the automotive industry.
Her introduction to the importance of requirements came through a scarring experience on a job with a Far Eastern client. The task was to create a custom Electronic Control Unit (ECU), one of about 30 small computers scattered about a modern car.
The project had a very tight timescale and an absurdly small budget. Her company assumed the low figures implied that the job would be 90% off-the-shelf, with just a little customisation here and there.
It turned out to be 90% custom.
The client did not share Western assumptions about give-and-take in requirements trade-offs – we’ll give you these features if you’ll drop those requirements. The client wanted everything.
Eventually the job was finished; sign-off (and hence payment) was achieved only by going painfully through all the emails to prove to the client that he had in fact agreed to each requirement and schedule change. Ultimately it all came down to very careful traceability – there was nothing academic about it.
She used Simulink to show that the system produced the specified output under the specified conditions. In other words, it was an executable specification. That enabled her to go to the client and ask the classic prototyping question “Is that what you want?” – followed immediately [of course] by a torrent of missed requirements.
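The Simulink models themselves were not shown, but the underlying idea can be sketched in a few lines: an executable specification is a table of specified conditions and required outputs that the model is run against directly. Everything below – the control law, the names, the numbers – is invented for illustration, not taken from the talk.

```python
# A minimal sketch of the "executable specification" idea. The spec is a
# table of (specified condition -> specified output); the model is executed
# against it directly. All names and numbers here are hypothetical.

def ecu_model(rpm: int, throttle: float) -> float:
    """Toy stand-in for the controller model (e.g. a Simulink block)."""
    if rpm < 800:                      # below idle: fixed idle fuelling
        return 1.0
    return round(1.0 + 4.0 * throttle, 2)

# Specified condition -> specified output (the "requirements table").
spec = [
    ((600, 0.0), 1.0),    # idle
    ((2000, 0.5), 3.0),   # part load
    ((2000, 1.0), 5.0),   # full load
]

def check_spec(model, spec):
    """Run the model over every specified condition; report any mismatch."""
    return [(args, expect, model(*args))
            for args, expect in spec
            if model(*args) != expect]

print(check_spec(ecu_model, spec))    # an empty list means the spec is met
```

Showing such a run to the client is exactly the prototyping move described above: “Is that what you want?” – and each missed requirement becomes a new row in the table.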
It was especially important not to miss requirements in a safety-critical system.
But not all requirements can be found by simulation. In one “horror story”, she put a version of the software on the ECU processor “target” (i.e. a different processor from the simulator’s), only to find that the shutdown checks when the driver turned off the ignition took longer than expected, about 150 milliseconds. You might think this not very serious, but the test drivers got up to quite a lot of tricks. They discovered that if they restarted within 60 to 90 milliseconds (!) the ECU got completely stuck – it had to be “flushed”, which would mean a breakdown and either a workshop visit or a roadside fix with special equipment. The problem should really have been detected earlier, but it was not covered by the specifications either.
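The race can be reconstructed as a tiny state machine. The timings (roughly 150 ms of shutdown checks; stuck if restarted 60–90 ms in) come from the talk; the state names and the rest of the logic are hypothetical, a sketch of the failure mode rather than the real firmware.

```python
# Hypothetical reconstruction of the shutdown/restart race as a state
# machine. Only the timings are from the talk; the rest is invented.

class ECU:
    SHUTDOWN_MS = 150                  # shutdown checks take ~150 ms

    def __init__(self):
        self.state = "RUNNING"
        self.off_at_ms = None

    def ignition_off(self, t_ms):
        self.state = "SHUTTING_DOWN"
        self.off_at_ms = t_ms

    def ignition_on(self, t_ms):
        if self.state == "SHUTTING_DOWN":
            elapsed = t_ms - self.off_at_ms
            if 60 <= elapsed <= 90:    # restart mid-shutdown: ECU wedges
                self.state = "STUCK"   # needs "flushing" with special kit
                return
        self.state = "RUNNING"

ecu = ECU()
ecu.ignition_off(0)
ecu.ignition_on(75)                    # the test driver's quick key-flick
print(ecu.state)                       # → STUCK
```

The point of the sketch is the requirements lesson: nothing in the written specification constrained behaviour during the 150 ms shutdown window, so no simulation run would have exercised it – only a test driver flicking the key did.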