Databricks’ Acquisition of MosaicML Tells Us a Lot about the Direction of the Generative AI Market

By Reece Hayden | 3Q 2023 | IN-7012

With huge growth expected in the enterprise Business-to-Business (B2B) generative Artificial Intelligence (AI) market, vendors across, and even beyond, the supply chain are looking to position themselves to benefit from this so-called “iPhone moment.” Databricks and Snowflake are following a shrewd market entry strategy, but plenty of novel B2B monetization strategies remain, especially for Machine Learning Operations (MLOps)-competent stakeholders.



Databricks and Snowflake Follow a Shrewd Generative AI Market Entry Strategy


The early generative Artificial Intelligence (AI) market has rested on the consumer segment. Over the next few years, we expect a huge acceleration in the enterprise market that will create more than US$50 billion in revenue opportunity for the supply chain by 2030. Even vendors outside the immediate supply chain are trying to access this burgeoning opportunity, with cloud data specialists Databricks and Snowflake looking to build out their generative AI value propositions.

Databricks recently acquired MosaicML in a deal reportedly worth US$1.3 billion. Databricks’ platform helps customers remove internal data silos and sort incoming data from various clouds, giving them complete visibility and the ability to manipulate data across their entire data footprint. By acquiring MosaicML (a platform that offers low/no-code tools, infrastructure, and models to support enterprise generative AI deployment), Databricks gains a combined proposition that will enable enterprises to cost-efficiently and securely build customized Large Language Models (LLMs) and fine-tune them on well-curated internal datasets.

In addition, Databricks’ primary competitor, Snowflake, has followed a comparable strategy to build out its enterprise generative AI value proposition. It has partnered with NVIDIA to integrate the NeMo framework into its enterprise data platform. By integrating NVIDIA’s low/no-code tools and models into its data platform, Snowflake will achieve a similar effect, enabling enterprises to build fine-tuned Machine Learning (ML) models that reflect specific business processes and unique operations. On top of the platform synergies, NVIDIA’s market-leading ML hardware will help drive performance improvements for training and inferencing workloads.

Combining Enterprise Data and Generative AI Can Unlock the B2B Commercial Opportunity


These strategies are indicative of the direction of the generative AI market. Accessing the enterprise segment means moving away from giant, closed models (such as GPT-3) that offer a broad range of capabilities and a large knowledge base, and toward smaller, more customizable models fine-tuned on enterprise datasets. These have significant advantages in the Business-to-Business (B2B) domain, and ABI Research expects them to dominate as the market enters a growth phase:

  • Contextualization: Models must be built to serve specific use cases. This requires models to be contextualized with datasets related to specific functions. For example, a Human Resources (HR) chatbot should be trained on internal policies so that it can provide contextual, rather than generalized, feedback.
  • Explainability: Using internal datasets and applying “retrieval-based methods” to extract information can support transparency by providing sources/citations for information. This allows end users to test and check output to ensure accuracy.
  • Lower Risk of Hallucination: Direct access to datasets for specific purposes will improve accuracy and reduce the likelihood that models approximate or fabricate answers.
  • Faster Fine-Tuning: Customized smaller models can be easily fine-tuned with more datasets or new functionalities, as fewer parameters require additional training. This lowers training overheads and can reduce time-to-value.
  • Lower Risk: Legal challenges against “giant, general” models like ChatGPT are mounting daily, based on copyright infringement or other data infractions. Leveraging internal datasets to build custom models helps enterprises mitigate this substantial risk.
  • Reduced Cost: Smaller, more contextualized models are cheaper to train and run inference on.
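The “Explainability” point above can be made concrete with a toy sketch: a retrieval step that returns a source citation alongside each answer, so end users can check the output against the underlying document. The corpus, overlap-based scoring, and citation format below are illustrative assumptions, not any vendor’s actual implementation.

```python
# Toy retrieval-with-citations sketch (illustrative assumptions only):
# score each internal document by word overlap with the query, then
# return the best match together with its source for verification.

def tokenize(text: str) -> set[str]:
    """Lowercase, split on whitespace, strip trailing punctuation."""
    return {t.strip(".,?!").lower() for t in text.split()}

def retrieve(query: str, corpus: list[dict]) -> dict:
    """Return the corpus entry whose text best overlaps the query."""
    q = tokenize(query)
    return max(corpus, key=lambda doc: len(q & tokenize(doc["text"])))

# Hypothetical internal HR corpus with per-document sources.
corpus = [
    {"source": "hr-policy-2023.pdf",
     "text": "Employees accrue 20 days of paid leave per year"},
    {"source": "expense-guide.pdf",
     "text": "Travel expenses must be filed within 30 days"},
]

hit = retrieve("How many days of paid leave do employees get?", corpus)
print(f"{hit['text']} [source: {hit['source']}]")
```

A production system would replace the word-overlap scoring with embedding-based similarity search, but the principle is the same: every answer carries a citation that the end user can audit.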

Plenty of Vendors Can Access the Growing Enterprise Generative AI Opportunity


Databricks and Snowflake are looking to jump on the generative AI enterprise train by integrating models and tools into their data platforms. This strategy clearly has a strong enterprise value proposition, as it lowers the barriers to enterprise deployment of custom generative AI models. But this is just one monetization strategy. Other suppliers across the generative AI value chain have plenty of opportunities to build novel B2B monetization strategies. ABI Research outlines some of the most interesting strategies below:

  • Operational Change Management and Consultancy Services: Maximizing the commercial potential of generative AI deployment in the enterprise requires deep integration across business processes, restructuring, and upskilling across business groups. Enterprises are unlikely to be able to make such widespread changes without external support. Business consultants and system integrators are the obvious leaders in the market, but hyperscalers and Independent Software Vendors (ISVs) could leverage their ML expertise to support this process.
  • Cloud Optimization and Support: Although some generative AI deployment will occur on-premises, most of the training will likely remain in the cloud. But this can be costly, so enterprises will want to optimize cloud costs moving forward. Hyperscalers, integrators, and compute infrastructure providers should develop professional services that can optimize enterprise cloud training and inferencing. Given the potential snowballing costs of getting cloud deployment wrong, this could be a lucrative opportunity for “cloud experts.”
  • Customized, Ready-to-Deploy Models: With ownership and control top of mind, off-the-shelf closed-source models will not be sufficient for most enterprises. Instead, “nimble, fine-tuned” models will be more fit for purpose in the B2B domain. AI experts could use open-source “giant” models to build customized, enterprise-grade “nimble” derivatives. These smaller models have significant advantages, but building them requires in-house ML expertise.
  • Application Development and Fine-Tuning Services: Enterprise deployment of generative AI requires high-performance, contextualized models, but most enterprises lack the required skill set. The clearest opportunity is for ISVs or ML service providers with strong MLOps competencies, but hyperscalers and others could also seize it. The key is assessing enterprise pain points and building highly contextualized models/applications, not just plug-ins or applications built on top of generalized LLMs.
  • Open-Source Productization: The increasingly competitive performance of open-source models gives ML suppliers an opportunity to build models quickly and easily, as well as platforms, tools, services, and applications, without paying for Application Programming Interface (API) access. This is an emerging field, but it could easily complement other strategies for ML tools vendors, data services providers, ISVs, hyperscalers, and enterprise service providers. Amazon Bedrock is one example: it has built a generative AI framework around Amazon’s own proprietary closed-source model, Titan, while also providing access to open-source alternatives.

Building highly differentiated and varied enterprise monetization strategies will be critical for suppliers looking to jump on the generative AI bandwagon. ABI Research’s recently published report, Generative AI: Identifying Technology Supply Side Revenue Opportunities (AN-5832), identifies and evaluates further monetization strategies that stakeholders can use in the B2B market.



