A brief history of modularity
Josh Emerson
21 November 2016
Just over a week ago, we were sponsors at the Brighton conference, ffconf. It was a day full of brilliant talks, both thought-provoking and useful.

Ashley Williams of npm gave a talk titled “A brief history of modularity”, which we felt was particularly relevant to Snyk, so we thought we’d share a summary of it here.
As well as reading our recap, you can download the slides via npm by running npm install a-brief-history (we love that Ashley’s talk is itself an npm module!). The video will be available shortly, and we will link to it once it is.
Why modularize?
npm is the largest module repository in the world. It now has more modules than Bower (JS), NuGet (.NET), Packagist (PHP), PyPI (Python) and RubyGems.org (Ruby) combined. So why does npm have so many modules?
People responded to Ashley’s tweet with a whole range of answers, but most gave similar reasons for modularizing code:
It allows for reuse of common code
It enforces separation of concerns
It makes documenting and testing code easier
Why not modularize?
There is a real cost in complexity when it comes to modularization. As Guy Podjarny writes in The 5 dimensions of an npm dependency, you may rely on many more dependencies than you first realize. For each of your dependencies, you need to ask a lot of questions.
Eventually, you find a package that seems to suit your needs, more or less. But your problems have only just begun. It’s up to you to evaluate the library: does it have tests? Can you understand the source code? Is it actively maintained? Is the documentation easy to find and consult?
Rich Harris – Small modules: it’s not quite that simple
In addition to that list of questions, you should also be asking about security. Do the package maintainers address vulnerabilities in their code? Are there any known vulnerabilities in the dependency or its dependencies?
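Answering that last question doesn’t have to be manual. As a minimal sketch (our own addition here, not part of Ashley’s talk), you can scan a project’s dependency tree for known vulnerabilities with the Snyk CLI, assuming you have npm installed and a Snyk account:

    npm install -g snyk   # install the Snyk CLI
    snyk auth             # authenticate against your Snyk account
    snyk test             # check the current project's dependencies for known vulnerabilities

Running snyk test from the project root reports known vulnerabilities in both your direct and transitive dependencies.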
Software development is change management
After looking at the benefits and costs of modularization, Ashley focuses on the disparity between the perception and the reality of modularization.
When we think about software development, we probably think about solving problems and creating systems that perform tasks. What we don’t often think about is that software evolves continuously, and not always in the same direction.
David Parnas wrote a paper in 1972 titled “On the Criteria To Be Used in Decomposing Systems into Modules,” in which he approaches modularization from a change management perspective.
[Start] with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.
When you think about modularization from this perspective, you try to hide the complexity inside modules, so that the rest of the system does not need to be concerned with the implementation.
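As a small sketch of what that can look like in JavaScript (our own illustration, not code from the talk — the module name and the in-memory storage choice are hypothetical), a module can hide a design decision that is likely to change, such as how sessions are persisted, behind a small interface:

    // session-store.js — hides the decision of how sessions are persisted.
    // Today it is an in-memory Map; swapping in Redis or a database later
    // only changes this file, not its callers.
    const sessions = new Map();

    module.exports = {
      save(id, data) {
        sessions.set(id, data);
      },
      load(id) {
        return sessions.get(id);
      },
    };

    // app.js — callers depend only on the interface, not the implementation.
    const store = require('./session-store');
    store.save('abc123', { user: 'ashley' });
    console.log(store.load('abc123'));

Because the storage decision is hidden inside the module, changing it later doesn’t ripple through the rest of the system.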
On his blog, programming is terrible, Tef gives us an approach for doing just this.
Write code that is easy to delete, not easy to extend. Instead of building re-usable software, we should try to build disposable software. I don’t need to tell you that deleting code is more fun than writing it.
In conclusion
Ashley ends her talk by summarizing that modularizing code is tough, because there’s a lot more we need to understand before we can work out how and when to split code. One of the main challenges in deciding how and when to modularize is that there’s a time-based element to it: it’s only over time that we can see whether a strategy is working for us or against us.