Composable Models

Feb 5, 2023

In the last ML cycle, one strategy often beat out all others (at least in competitions): ensemble models. This approach combines several weaker, simpler models to create a stronger, more robust model. As a result, nearly every Kaggle competition was won by an ensemble, often composed of tens of underlying models. However, these ensembles were never feasible to deploy in production, since they multiplied the number of models that needed to be developed, deployed, and maintained.
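As a rough illustration of the idea (not any particular winning solution), here is a minimal ensembling sketch using scikit-learn's VotingClassifier on a synthetic dataset; the dataset and base models are arbitrary choices for the example:

```python
# Minimal ensemble sketch: combine three simple classifiers by soft voting.
# Dataset and base-model choices are illustrative, not from any competition.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each base model is individually fairly weak; the ensemble averages their
# predicted probabilities, which tends to be more robust than any single member.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```

Swap in tens of base models and the accuracy may creep up, but so does everything you have to train, serve, and monitor, which is exactly the production problem.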

Composability is back. In open source, there's composability in diffusion models: blending fine-tuned Stable Diffusion models, typically by merging their weights, to produce composite models that combine multiple styles. In large language models, there's composability in chaining models together: taking the patterns and workflows that have grown up around LLMs and turning them into building blocks.
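A minimal sketch of what weight-space blending can look like, assuming two fine-tuned checkpoints that share an architecture and are stored as plain state dicts of float tensors (the file names here are made up; real Stable Diffusion checkpoints often nest the state dict and need a bit more unpacking):

```python
import torch


def merge_checkpoints(path_a: str, path_b: str, alpha: float = 0.5) -> dict:
    """Linearly interpolate two checkpoints that share the same architecture.

    Assumes both files are plain state dicts with matching keys and float
    tensors; alpha controls how much of checkpoint A ends up in the blend.
    """
    state_a = torch.load(path_a, map_location="cpu")
    state_b = torch.load(path_b, map_location="cpu")
    merged = {}
    for key, tensor_a in state_a.items():
        # Weighted average of each parameter tensor.
        merged[key] = alpha * tensor_a + (1 - alpha) * state_b[key]
    return merged


# Hypothetical usage with made-up file names:
# merged = merge_checkpoints("style_a.ckpt", "style_b.ckpt", alpha=0.3)
# torch.save(merged, "merged.ckpt")
```

Chaining LLMs is even simpler in shape: each step is a function from text to text (or to some other modality), and a chain is just function composition, as in the sketch after the next paragraph.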

Composability often isn't easy to productionize; it's difficult to deploy and test. On the other hand, the interfaces are more flexible now: they are natural language, and there are many mappings available to developers (image-to-text, text-to-image, speech-to-text, prompt-to-prompt, text-to-embedding). So maybe we'll see real composability this time.
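To make the "mappings as building blocks" point concrete, here is a sketch of a pipeline composed from such mappings. The concrete steps are placeholder implementations, not real model or API calls; in practice each one would wrap a speech recognizer, an LLM, or an image generator:

```python
from typing import Callable


def compose(*steps: Callable) -> Callable:
    """Chain mappings left to right: the output of one step feeds the next."""
    def pipeline(x):
        for step in steps:
            x = step(x)
        return x
    return pipeline


# Placeholder mappings standing in for real models (assumed, not actual APIs).
def speech_to_text(audio: bytes) -> str:
    return "a watercolor painting of a lighthouse at dusk"  # fake transcription


def refine_prompt(text: str) -> str:
    return text + ", highly detailed"  # fake prompt-to-prompt step


def text_to_image(prompt: str) -> str:
    return f"<image generated from: {prompt}>"  # fake generation step


generate_from_speech = compose(speech_to_text, refine_prompt, text_to_image)
print(generate_from_speech(b"\x00\x01"))
```

Because the interface between steps is natural language (or another common representation), any step can be swapped out without retraining the rest, which is what makes this kind of composability feel more plausible than ensembling ever was in production.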