Biology is staggeringly complex. To understand this complexity, we are generating data from biological experiments at unprecedented scales. At the current rate of data generation, the demand for optical, magnetic and silicon-based storage will far exceed supply by the year 2040. This magnitude of data generation is made possible by massively parallelized sample-manipulation methods for reading genetic code (next-generation sequencing), writing synthetic genes (next-generation DNA synthesis) and manipulating cells at high throughput (e.g. 10x Genomics).
These capabilities have massively decreased the cost per experiment, and with it the cost per data point. The dropping costs have driven exponential adoption of these technologies and made previously impossible biotechnologies feasible (e.g. gene therapy, single-cell atlases, DNA data storage). Surprisingly, these massively parallel approaches are inherently error-prone, but we have learned to correct for errors, much as nature corrects for its own. To push the frontier of human health and our understanding of evolution, we have to reduce these errors by several more orders of magnitude. While massively parallelized sequencing reduced the cost of actually reading genetic material, the steps that come before sequencing, such as sample preparation, enzyme manufacturing and DNA synthesis, have lagged behind. To keep up with Moore's-law-like scaling, we need to lower the cost of experimentation even further and enable scaling not only in genomics but across several fields of systems biology (proteomics, metabolomics).
In the early days of modern computing (the 1950s), electronic circuits were assembled by hand. The assembly process back then was strikingly similar to how state-of-the-art automation for biology is built now. And circuit design was done with pen and paper, just as biological workflows are created, managed and tracked in labs today. Current methods for developing new biological processes and subsequently scaling them are impeded by the inertia of existing technologies -- on the one hand, the physical hardware is error-prone, expensive and lacks scalable manufacturing; on the other hand, there is no unified language for creating, managing and sharing biological workflows. This limits our ability to manipulate biological samples beyond a certain scale -- biology now faces its own "tyranny of numbers" problem.
Most of modern society, by contrast, is built with digital technologies and software tools. You can spin up a complex piece of software and launch it into production in no time. The two key attributes of the software industry, its agility and its scalability, are currently lacking in how we manipulate biology and biological data. In software, these two attributes were made possible by the foundation laid by digital electronics. In the past two decades we have leveraged the same digital technology for reading and writing genetic code and for manipulating biological samples. And we are beginning to build the infrastructure to transfer, store, process and derive meaning from massive amounts of biological data. Volta is leveraging the maturity of the electronics industry and the advancements in software to build the next generation of biological automation. Our vision is biological automation that is as agile, reliable and scalable as digital electronics.