Robots! They’re taking our jobs!
Well, maybe, if you’re in the manufacturing business… but how about requirements engineering?
Just for fun, we unleashed a home-cooked neural network on a trove of 68 requirements documents to see if it could whip us up a set of 45 functional requirements that passed the QVscribe Quality Analysis test.
The results? Surprisingly not bad!
But before we get to the lessons learned from our favourite robot, let’s give you a little background info on the project.
Download the Printable Article
Introducing the neural network: the next Philip K. Dick?
Can a computer write a novel? Well, that’s what this neural network was designed to do.
But hey, we thought, let’s take this neural network for a spin and see what it can do with requirements documents, too.
A pet project by one of our team, it uses a character-level language model that determines the next character in sequence (plus some word-based sequence determination) based on probabilities derived from the training data.
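The project's actual model is home-grown and not published, but the core idea of a character-level language model can be sketched in a few lines. The code below is a minimal, hypothetical illustration (a simple frequency-based Markov chain, not the team's implementation): it counts which character tends to follow each short context in the training text, then samples new text one character at a time from those probabilities.

```python
import random
from collections import defaultdict, Counter

def train_char_model(text, order=4):
    """Map each context of `order` characters to a count of
    the characters observed to follow it in the training text."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=80):
    """Sample one character at a time, weighted by observed frequency."""
    order = len(seed)
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:
            break  # unseen context: stop generating
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Toy corpus standing in for the 68 pre-processed requirements docs
corpus = ("The system shall log all errors. "
          "The system shall notify the operator of all errors.")
model = train_char_model(corpus, order=4)
print(generate(model, seed="The "))
```

With a corpus this tiny the output mostly parrots the input; with 68 documents' worth of requirements, the same mechanism starts producing novel (if not always sensible) requirement-shaped sentences.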
Introducing the training data: a smorgasbord of requirements docs
As you probably know, real and freely available requirements documents are a little harder to source than fictional works.
In the end, we dug up 68 good-looking docs, a mishmash from a variety of industries: software, medical devices, aerospace, and shipping – including docs from NASA. Each document was pre-processed to extract the requirements themselves, which were then fed to the robot as training data.
1. Keep Requirements Concise
Keep It Simple, Stupid! After analysing the results, we discovered that the old KISS engineering principle is indeed solid.
Of the 13 robot-written requirements that scored 5/5 for quality, 12 were just one sentence long.
[Image: a perfect-scoring requirement. If you were up to date with your system acronyms, it'd make sense.]
Lesson: The shorter the requirement, the less room for error. Hone those requirements to be more concise.
2. Ensure Best Practice Compliance
The world’s brightest minds and most respected engineers don’t write standards documents just for fun (okay, maybe they do). They also write them to advance the field and keep everyone on the same page – and, in the case of RE, to help everyone write clearer, more accurate requirements.
The most common errors dragging down our neural network’s requirement scores were failures to comply with industry-standard best practices such as the INCOSE (International Council on Systems Engineering) Guide for Writing Requirements, which has no fewer than 41 rules in its latest edition. Check out the Guide on Automating the INCOSE Guide for Writing Requirements.
[Fix: Bound limits in measurable terms]
[Fix: Use one imperative per requirement]
3. Check for conflicts and redundancies
After checking that each of your requirements is clear and concise on its own, it’s important to make sure they all work together. We used the QVscribe Similarity Analysis to quickly check that the robot wasn’t just spewing nonsense.
We were pleasantly surprised! One thing the robot did really well was writing requirements that were unique and did not contradict each other.
Lesson: Check your document thoroughly – ensuring requirements are unique significantly reduces review time. Or get a robot to do it and automate the process.
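QVscribe’s Similarity Analysis is proprietary, but the basic idea of flagging near-duplicate requirements can be sketched with something as crude as token overlap. The snippet below is a simplified, hypothetical stand-in (Jaccard similarity on lowercased words – real tools are far more sophisticated):

```python
def jaccard(a, b):
    """Rough token-overlap similarity between two requirements (0..1)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def flag_near_duplicates(requirements, threshold=0.7):
    """Return index pairs of requirements whose similarity >= threshold."""
    pairs = []
    for i in range(len(requirements)):
        for j in range(i + 1, len(requirements)):
            if jaccard(requirements[i], requirements[j]) >= threshold:
                pairs.append((i, j))
    return pairs

reqs = [
    "The system shall log all errors.",
    "The system shall log all faults.",   # near-duplicate of the first
    "The operator can reset the device.",
]
print(flag_near_duplicates(reqs))
```

Even this toy check catches requirements that differ by a single word – a cheap first pass before a human (or a better tool) looks for genuine conflicts in meaning.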
4. Use an Authoring Tool
How do you make writing requirements more concise, compliant, and repeatable? With a syntax guide like EARS, the Easy Approach to Requirements Syntax.
Doesn’t applying EARS syntax rules like If/Then and While sound awfully similar to writing system code itself? And if it reads like code, shouldn’t there be an authoring tool, much like VS Code, that can help with conforming to the syntax?
We don’t know for sure whether any of our input docs used an EARS authoring tool, but we can tell you what our robot’s results were.
In the test, 15 of the 45 requirements conformed to EARS, and 12 of those 15 scored at least 3/5 for quality. Only one EARS-conforming requirement scored as low as 1/5.
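Checking whether a requirement follows one of the EARS templates is itself automatable. Here’s a minimal sketch using simplified, hypothetical regex patterns for a few of the templates (the official EARS guidance defines these far more carefully):

```python
import re

# Simplified stand-ins for a few EARS templates; order matters,
# since the ubiquitous pattern is the most general.
EARS_PATTERNS = {
    "event-driven":       re.compile(r"^When .+, the .+ shall ", re.IGNORECASE),
    "unwanted-behaviour": re.compile(r"^If .+, then the .+ shall ", re.IGNORECASE),
    "state-driven":       re.compile(r"^While .+, the .+ shall ", re.IGNORECASE),
    "ubiquitous":         re.compile(r"^The .+ shall ", re.IGNORECASE),
}

def ears_pattern(requirement):
    """Return the name of the first matching EARS template, or None."""
    for name, pattern in EARS_PATTERNS.items():
        if pattern.match(requirement.strip()):
            return name
    return None

print(ears_pattern("When the door opens, the system shall sound an alarm."))
print(ears_pattern("Sound an alarm quickly."))
```

A check like this is roughly how we tallied which of the robot’s 45 requirements were EARS-conforming – pattern matching, not understanding.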
5. Be human
So, could the generated output be used as a requirements document, or could individual requirements be plucked out as specific system requirements? That would be a solid no. Maybe you could flash the auto-generated requirements quickly in a non-engineer’s face as ‘proof’ you’ve been working on something.
But as for usable output, we aren’t there yet.
Even though some of the requirements were relatively well written (29% scored 5/5 for quality), the neural net simply doesn’t have the real-world experience of writing requirements or working on engineering projects that it would need to score higher.
Yes, working collaboratively with your team – with all their diverse experiences – to write well-rounded, comprehensive requirements is still essential. You can’t hack and automate your way out of your job anytime soon; it’s collaboration and implicit (rather than explicit) understanding that gets the work done.