When code needs to be optimized – for instance, to run machine learning predictions faster or more efficiently in the cloud – companies sometimes turn to highly paid, uniquely skilled performance engineers.
While this approach can be effective, it also has drawbacks:
- Such engineers are expensive and hard to come by.
- Like all humans, they tire. It is often possible to reach 80% of a performance goal in 20% of the time, but squeezing out the remaining 20% can be exceedingly tedious and time-consuming.
- Optimizing code is processor-dependent because different processors have different instruction sets, cache sizes, architectures, and more. A human needs intimate knowledge of the target processor to achieve high levels of optimization, and it is challenging for one person to be an expert on multiple, disparate processing architectures. Performance engineers therefore typically specialize in a certain type of architecture (e.g., CPU or GPU), so different engineers are required depending on the target hardware.
- Human optimizers still use middleware code libraries, which introduce their own inefficiencies.
- Manual optimization takes time, and this time might not always be available for the project.
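To make the architecture-dependence point above concrete, here is a minimal sketch of one classic hand-optimization: loop tiling (blocking) a matrix multiply so each tile's working set fits in cache. The matrix size `N` and tile size `BLOCK` are illustrative values, not from the original text; in practice `BLOCK` would be tuned to the specific processor's cache hierarchy, which is exactly why the work is hardware-specific.

```c
#include <string.h>

#define N 64
#define BLOCK 16  /* hypothetical tile size; tuned to the target's cache in practice */

/* Naive multiply: simple, but strides through B column-wise,
   which becomes cache-unfriendly as N grows. */
static void matmul_naive(const double A[N][N], const double B[N][N],
                         double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Blocked (tiled) multiply: computes the identical result, but operates
   on BLOCK x BLOCK tiles so that each tile's data stays resident in
   cache, reducing memory traffic on a well-chosen BLOCK. */
static void matmul_blocked(const double A[N][N], const double B[N][N],
                           double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int kk = 0; kk < N; kk += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK; i++)
                    for (int k = kk; k < kk + BLOCK; k++)
                        for (int j = jj; j < jj + BLOCK; j++)
                            C[i][j] += A[i][k] * B[k][j];
}
```

Both functions compute the same product; only the traversal order differs. Picking the traversal order and tile size that win on a given chip is the kind of painstaking, processor-specific work described above.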
Very few people hand-translate C code into assembly language nowadays; almost everyone relies on a compiler to perform this task. Manual performance optimization is akin to having engineers turn high-level code into assembly by hand. For most people it isn't fun to do, and for most companies, this activity is orthogonal to adding long-term value to their software products.
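As a small illustration of the translation compilers now handle automatically, consider a routine like the one below (the function and file name are illustrative, not from the original text). Running the compiler with a flag such as `cc -S -O2 saxpy.c` emits the generated assembly, where a modern compiler will often have vectorized the loop for the target instruction set – work almost nobody does by hand anymore.

```c
/* saxpy.c -- y = a*x + y, a loop compilers commonly auto-vectorize.
   Compiling with `cc -S -O2 saxpy.c` writes the generated assembly
   to saxpy.s instead of producing an object file. */
void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```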
Recent advancements in programming languages and compilers have allowed researchers to begin looking for ways to take this even further – to provide automatic techniques for code optimization.
This exciting new field of research and practice is called machine programming, and at Intel, we believe this is the future of software development.