Software tends to be designed in a way that is very specific and specialized to a particular context. Often, this makes software too rigid to be flexible and to evolve gracefully with changing requirements. The book delves into various techniques for making software more flexible, laying them out in broad strokes in the first chapter and often drawing inspiration from nature, the design of the human body, and human design in other domains.
Additive programming
Additive programming (like additive manufacturing) refers to the idea that new features should not require significant structural modification of existing code; adding a feature should only add new code or lightly adjust existing functions for the new requirements. Doing so requires that we design components as flexibly as possible, so they do not need to be restructured later.
A generally useful design philosophy for this is to minimize assumptions about how a program works and how it will be used, because such assumptions limit later extensibility. Premature specialization often necessitates disproportionately heavy refactors later, triggered by only slight perturbations in requirements.
This does not mean programs should be so disjoint that sub-components never collaborate. The combined system of sub-components must cooperate to provide functionality that no one part can provide on its own. Components should do limited things very well (for re-use), and pieces should combine and interact through well-defined contracts (for modularity).
As another generally useful design philosophy, the range of outputs a part produces should be much narrower (more well-defined) than the range of inputs it accepts. This permits a large amount of user flexibility while helping produce precise and predictable results.
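As a minimal sketch of this philosophy (the function and accepted spellings here are illustrative, not from the book): a parser that accepts many loose input forms but emits only exactly `True` or `False`.

```python
def parse_flag(value):
    """Accept bools, ints, and common string spellings (liberal inputs);
    return only True or False (a narrow, well-defined output)."""
    if isinstance(value, bool):
        return value
    if isinstance(value, int):
        return value != 0
    if isinstance(value, str):
        normalized = value.strip().lower()
        if normalized in ("true", "yes", "on", "1"):
            return True
        if normalized in ("false", "no", "off", "0"):
            return False
    raise ValueError(f"cannot interpret {value!r} as a flag")
```

Callers gain flexibility in what they pass, while downstream code can rely on receiving exactly one of two values.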
A design technique for making feature addition easier is to separate data and procedures into logical layers. For example, numerical data and units corresponding to the data can be separated - units functionality can be added as a separate layer without modifying original numerical data manipulation.
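The layering idea might look like the following sketch, where the original numeric code is left untouched and a thin units layer is added on top (all names here are illustrative assumptions):

```python
def add(a, b):
    """Original numeric layer: plain arithmetic, unaware of units."""
    return a + b

class WithUnit:
    """Units layer added on top without modifying the numeric layer."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit
    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
        # Delegate the actual arithmetic to the unchanged numeric layer.
        return WithUnit(add(self.value, other.value), self.unit)

total = WithUnit(3, "m") + WithUnit(4, "m")
```

The units check is an additive feature: introducing it required no edits to `add`.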
Combining partial information: we may have to combine data that are not all available at once and that arrive at different times or from different locations (e.g. type inference), producing a best-effort result even when the data are incomplete. Allowing this flexibility can improve software robustness.
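A small sketch of this, under the assumption that partial records about one entity arrive as dictionaries at different times (the field names are invented for illustration):

```python
def merge(known, update):
    """Combine two partial records: later non-None fields fill gaps,
    so the merged view is always a best-effort picture."""
    merged = dict(known)
    for key, value in update.items():
        if value is not None:
            merged[key] = value
    return merged

record = {}
for partial in ({"name": "sensor-1", "temp": None},
                {"temp": 21.5},
                {"location": "lab"}):
    record = merge(record, partial)
# record now combines all three messages, none of which was complete alone
```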
Degeneracy and Redundancy
Degeneracy: having multiple ways to perform an operation can be useful for error detection, performance management, intrusion detection, and more. Degeneracy is additive in that we can introduce it without modifying existing procedures. Degeneracy allows capabilities like:
Dynamically deciding the right combination of implementations depending on context.
Cross-checking results for correctness in safety-critical applications where formal proofs are not possible.
Improved security, since it is not enough to attack just one component.
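The cross-checking capability can be sketched as follows, with two independently written implementations of the same operation (the functions here are illustrative stand-ins):

```python
from functools import reduce

def sum_iterative(xs):
    """First implementation: an explicit loop."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_functional(xs):
    """Second, independently written implementation."""
    return reduce(lambda a, b: a + b, xs, 0)

def checked_sum(xs):
    """Run both implementations and flag disagreement instead of
    silently returning a possibly-wrong answer."""
    a, b = sum_iterative(xs), sum_functional(xs)
    if a != b:
        raise RuntimeError(f"implementations disagree: {a} != {b}")
    return a
```

Note that adding `checked_sum` modified neither implementation, illustrating why degeneracy is additive.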
Software redundancy of this kind trades short-term effort in implementing extra computation for lower long-term maintenance cost as requirements change.
Some methods of computing a result have strengths and weaknesses that other methods don't, giving flexibility in choosing a strategy of attack (e.g. planning a database query). Kalman filters (used for state estimation) are another example: they combine partial information from different mechanisms into joint information that is better than any individual part.
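The Kalman-style combination can be sketched in its simplest scalar form: weight each independent estimate by its inverse variance. This is the standard inverse-variance fusion formula, greatly simplified from a full filter.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two independent noisy
    estimates; the fused variance is lower than either input's."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var
```

For example, two equally uncertain estimates of 10 and 12 (variance 4 each) fuse to 11 with variance 2: the combination is better than either part alone.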
Software requirements can be complex, and defining them strictly enough to determine interface contracts can be intractable: how might one define what it means to be a good chess bot?
Borrowing from biology, we might want software parts that adapt based on the neighboring parts they are connected to (like the specialization of stem cells), or that compensate for parts that aren't functioning properly.
Components can be customized with modular sub-components that conform to the interface of the larger component.
Exploratory behavior
Another idea for improving flexibility is to allow a general program to generate solutions for specialized use-cases through exploratory techniques. This seems reminiscent of machine learning methods, but the generate-test system proposed is much less sophisticated. The system uses a generator mechanism that produces uniformly distributed candidate solutions to a general problem. Then, an independently constructed tester checks generated solutions against changing requirements and accepts those that fit; adaptation is quick because testers can be swapped out. Of course, there is some waste from unused solutions, but such a technique can be very useful where flexibility is valued more than raw efficiency.
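A minimal generate-and-test sketch, assuming a toy problem of finding integer pairs in a small grid (the generator, testers, and problem are all illustrative):

```python
import itertools

def generate_candidates():
    """Generic generator: all (x, y) integer pairs in a 10x10 grid."""
    return itertools.product(range(10), range(10))

def solve(tester):
    """Return the first candidate the tester accepts, or None."""
    for candidate in generate_candidates():
        if tester(candidate):
            return candidate
    return None

# Today's requirement:
sum_is_seven = lambda p: p[0] + p[1] == 7
# Tomorrow's requirement changes only the tester, not the generator:
product_is_twelve = lambda p: p[0] * p[1] == 12
```

The generator never changes; requirements live entirely in the swappable tester, which is where the flexibility comes from.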
One could also explore procedures by running different (redundant) methods in parallel and choosing the one that converges first. Since these procedures are truly disjoint, they can be executed in parallel at little cost on modern multi-core architectures.
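This race between redundant methods might be sketched with the standard library's executors; the two "methods" below are illustrative stand-ins that compute the same result at different speeds:

```python
import concurrent.futures
import time

def method_slow(n):
    time.sleep(0.2)   # simulate an expensive strategy
    return n * n

def method_fast(n):
    return n * n      # a cheaper strategy for this input

def race(n):
    """Run both methods concurrently and return whichever answer
    arrives first."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(method_slow, n), pool.submit(method_fast, n)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()
```

A production version would also cancel the losing futures; this sketch lets the executor's context manager wait for them instead, to stay short.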
Cost of flexibility
As mentioned earlier, adding flexibility can mean adding redundant code or accounting for cases that aren't immediately required. However, this tends to pay off in the long run. Additionally, very specialized and efficient code can co-exist with other components that are designed for flexibility. Often, flexibility and ease of use are much more important to users than pure immediate performance.
One pertinent example is that of higher-level languages bridging the hardware-software barrier. Programming languages incur overhead (during interpretation or at compile time) but are much easier to maintain and extend than assembly written directly against the hardware's fixed instruction-set interface.
Problems with correctness
Formal proofs of correctness may not always be compatible with flexibility because they usually require fixed, known inputs and interfaces, which limits flexibility significantly and reduces the partial information a procedure can accept.
Furthermore, even in safety-critical applications like flight controllers, we might want systems to work reasonably in dire conditions outside what they were originally specified to handle.
As such, we should not require proofs for all procedures, even though they might have their place in certain parts of the software.
As a separate point, generalization can sometimes make proofs easier, because the abstracted version of the problem may map more readily to known results. Hence, generalization and adding flexibility can still be a useful exercise even for correctness.