Data science needs to fail more, faster.
Darwin never actually said the following quote, but it’s truthy, so I’ll use it:
It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change. —Darwin-ish
I was watching a talk by Josh Wills the other day in which he applied Lean engineering concepts to data science. To illustrate how important rapid learning is, he told the story of the team that built the Gossamer Condor and won the Kremer Prize for human-powered flight. They won because they failed faster and more often than their competition. Their competitors were surely brilliant, and they had a head start of several years, but a single iteration from design, to build, to test flight, to crash, and back to design took them perhaps a year. The Gossamer Condor team’s breakthrough insight was this: if the Condor could be repaired and improved in days, they could test 100 designs in the time their competition tested one.
Decades later, data science is following the same anti-pattern as the teams that didn’t win the Kremer Prize: come up with brilliant ideas, painstakingly move them out to the real world, watch them fail, and then slowly start the brittle process anew. This would be a problem in any profession: the faster you can iterate, the faster you can learn, and the more problems get solved. But it is an especially pressing problem in data science, where a huge shortage of data scientists means that inefficiency leaves many critical problems unsolved.
What can we do about it? Luckily, software engineering, data science’s sister discipline, has been working through these problems for the last two decades and has some pretty good patterns to build from. Devops is the area of software engineering concerned with moving software from development to real-world use quickly and safely. Devops lets software engineers try more things, and therefore learn faster. Here are the pieces I believe are necessary for data science:
Automated tests: These don’t have to be exhaustive, but there should be an automatic way to know that changes don’t horribly break your user-facing system (a minimal test sketch follows this list).
1-Button Deploy: If releasing changes takes more than one step, releases will break more frequently and, more importantly in the context of this article, happen less frequently.
1-Button Rollback: The counterpart to 1-Button Deploy: if an error is discovered in a user-facing system, reverting to a pre-error state must be swift and reliable (a combined deploy/rollback sketch also appears after this list).
Instrumented Infrastructure: Data science problems often involve distributed architectures, non-obvious dependencies, and complex feedback loops. To successfully try many things quickly, you need to spend as little time as possible understanding, tuning, and debugging the infrastructure (a small instrumentation sketch closes out the examples below).
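
To make the automated-tests idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `my_pipeline`, `load_fixture_data`, `train_model`, the fixture path, and the 0.7 threshold are stand-ins for whatever your own project uses. The point is only that a cheap smoke test, run on every change, can tell you a change didn’t horribly break things.

```python
# Hypothetical smoke tests for a model pipeline, written for pytest.
# None of these names are real APIs; swap in your own project's functions.
import pytest

from my_pipeline import load_fixture_data, train_model  # hypothetical module


def test_model_trains_and_scores_sanely():
    # A small, checked-in fixture keeps this fast enough to run on every change.
    X, y = load_fixture_data("tests/fixtures/sample.csv")
    model = train_model(X, y)

    # Not exhaustive: we only assert the system isn't horribly broken.
    accuracy = model.score(X, y)
    assert accuracy > 0.7, f"accuracy collapsed to {accuracy:.2f}"


def test_model_fails_loudly_on_empty_input():
    X, y = load_fixture_data("tests/fixtures/sample.csv")
    model = train_model(X, y)
    with pytest.raises(ValueError):
        model.predict([])  # degenerate input should error, not silently succeed
```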
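And here is a minimal sketch of 1-button deploy and rollback, assuming models are published as immutable, numbered release directories and the serving layer always reads whatever a `current` symlink points at. The paths and layout are assumptions for illustration, not any particular product’s API; the design choice worth stealing is that both deploy and rollback reduce to a single atomic pointer swap.

```python
# Sketch: one-step deploy and rollback via an atomic symlink swap.
# Paths and the release layout are assumed for illustration.
import shutil
from pathlib import Path

RELEASES = Path("/srv/models/releases")  # immutable, numbered releases
CURRENT = Path("/srv/models/current")    # symlink the serving layer reads


def deploy(new_model_dir: str) -> None:
    """One button: publish a new release, then atomically repoint 'current'."""
    RELEASES.mkdir(parents=True, exist_ok=True)
    version = max((int(p.name) for p in RELEASES.iterdir()), default=0) + 1
    release = RELEASES / str(version)
    shutil.copytree(new_model_dir, release)
    _point_current_at(release)


def rollback() -> None:
    """One button back: repoint 'current' at the release before the live one."""
    live = int(CURRENT.resolve().name)
    earlier = sorted(int(p.name) for p in RELEASES.iterdir() if int(p.name) < live)
    if not earlier:
        raise RuntimeError("no earlier release to roll back to")
    _point_current_at(RELEASES / str(earlier[-1]))


def _point_current_at(release: Path) -> None:
    # Build the new link beside the old one, then rename over it; rename is
    # atomic on POSIX filesystems, so serving never sees a half-finished switch.
    tmp = CURRENT.with_name("current.tmp")
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    tmp.symlink_to(release)
    tmp.rename(CURRENT)
```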
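Finally, instrumentation doesn’t have to mean a full monitoring stack before it starts paying off. A sketch, assuming nothing beyond the Python standard library: wrap each pipeline stage so its duration and outcome are always recorded as structured log lines you can aggregate later. The stage name and `extract_features` function are hypothetical.

```python
# Sketch: record duration and outcome for every pipeline stage as
# structured (JSON) log lines. Stage names here are hypothetical.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline.metrics")


def instrumented(stage: str):
    """Wrap a pipeline stage so its timing and status are always logged."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                # Emitted even when the stage fails, so gaps are visible.
                log.info(json.dumps({
                    "stage": stage,
                    "status": status,
                    "seconds": round(time.monotonic() - start, 3),
                }))
        return wrapper
    return decorator


@instrumented("feature_extraction")
def extract_features(raw_rows):
    # Stand-in for real work.
    return [len(r) for r in raw_rows]


if __name__ == "__main__":
    extract_features(["a", "bb", "ccc"])
```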
It’ll take some work, but I believe Devops is the next crucial frontier for data science: a massively underrated piece of this rapidly changing discipline.
(Yes, I cofounded a company, Mortar, that amongst other things addresses these problems… because I think they are so important to solve :)