
World of DaaS Roundtable Recap: Building AI-Ready Data Products

July 24, 2025

At our latest World of DaaS roundtable, leaders from across the data and AI space gathered to discuss what it takes to build data products for AI-native environments. The conversation covered a wide range of topics, from RAG readiness to agent-driven workloads to the shifting expectations of enterprise buyers.

1. AI Workflows Are Quietly Reshaping the Stack

Many companies are discovering that their data infrastructure is already being used to support AI use cases, even if that was never the intention.

One participant realized that about 90 percent of their customers were using their entity resolution engine to support downstream AI. “If you don’t know who is who in your data, your downstream AI doesn’t work,” they said. The takeaway was that data prep is no longer just about hygiene; it’s foundational to AI performance.
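To make that point concrete, here is a minimal, illustrative sketch of entity resolution (not any participant’s actual engine): normalize messy company names and cluster records that likely refer to the same real-world entity before they feed a downstream model.

```python
# Minimal entity-resolution sketch (illustrative only): normalize company names,
# then group records that likely refer to the same entity so downstream AI
# sees one consolidated record per real-world company.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lowercase and strip common suffixes so "Acme Inc." and "ACME" compare cleanly.
    name = name.lower().strip()
    for suffix in (" incorporated", " inc.", " inc", " llc", " ltd", " corp."):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip()

def same_entity(a: str, b: str, threshold: float = 0.9) -> bool:
    # Fuzzy string similarity stands in for a real matching model.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

records = ["Acme Inc.", "ACME", "Acme Incorporated", "Globex LLC"]
clusters: list[list[str]] = []
for record in records:
    for cluster in clusters:
        # Compare against the cluster's first member as its representative.
        if same_entity(record, cluster[0]):
            cluster.append(record)
            break
    else:
        clusters.append([record])

print(clusters)  # [['Acme Inc.', 'ACME', 'Acme Incorporated'], ['Globex LLC']]
```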

Structured data providers echoed this shift. Buyers are becoming more sophisticated. Instead of polished dashboards, they are asking for raw and structured data they can feed directly into AI workflows. “Buyers are smarter now. They used to test data on scale. Now they test on quality.”

2. Agents Are Driving Massive Query Volume

Several participants discussed how LLMs and agents are inflating query demand across the board.

“AI is really good, but still dumb enough that it needs to do 100 times the work to get a good answer,” one person said. For companies supporting agent-driven workflows, this creates a serious pricing tension. Traditional query-based billing models don’t fit when agents are generating thousands of semi-overlapping requests.

There is a strong push for pricing based on distinct items returned, not the number of queries made. One idea: “Let me pay for the actual data points I use, not every time I ask for them.”
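As a rough illustration of that pricing idea, the sketch below meters distinct record IDs per customer rather than counting queries, so overlapping agent requests only pay once for each data point. The UsageMeter class and the IDs are hypothetical; a real system would also key usage by billing period and persist it.

```python
# Minimal metering sketch for "pay for distinct data points, not queries."
from collections import defaultdict

class UsageMeter:
    def __init__(self) -> None:
        # customer_id -> set of record IDs already billed this period
        self._seen: dict[str, set[str]] = defaultdict(set)

    def record_query(self, customer_id: str, returned_ids: list[str]) -> int:
        """Return the number of newly billable records in this response."""
        new_ids = set(returned_ids) - self._seen[customer_id]
        self._seen[customer_id] |= new_ids
        return len(new_ids)

meter = UsageMeter()
# An agent issues many overlapping queries; only first-seen records are billed.
print(meter.record_query("cust-1", ["rec-1", "rec-2", "rec-3"]))  # 3
print(meter.record_query("cust-1", ["rec-2", "rec-3", "rec-4"]))  # 1
```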

3. RAG Is Coming, But No One Has Scaled It Yet

Retrieval-Augmented Generation (RAG) is top of mind, but still early in its commercial rollout.

Today, most buyers are requesting full datasets to plug into their own RAG pipelines. “Right now they’re just saying, ‘give us your S3 bucket’ and that’s not sustainable.” Providers want to move toward a more scalable model, where natural language interfaces or vector endpoints allow dynamic querying.

One participant suggested that providers should think of RAG like ElasticSearch. “It’s just a smarter way to search your data. If you wouldn’t have built a query endpoint 10 years ago, maybe don’t rush into RAG now.”
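For a sense of what a “vector endpoint” could look like in practice, here is a toy sketch that exposes top-k similarity search over a provider’s records instead of handing over the whole bucket. The hash-based embed() is a stand-in for a real embedding model, and the documents are invented for illustration.

```python
# Toy sketch of a "vector endpoint": rather than shipping the full S3 bucket,
# the provider answers top-k similarity queries over its records.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic pseudo-embedding; a production endpoint would call a real model.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

corpus = {
    "doc-1": "quarterly revenue forecast for retail chains",
    "doc-2": "shipping container traffic at west coast ports",
    "doc-3": "weather outlook for the midwest growing season",
}
matrix = np.stack([embed(text) for text in corpus.values()])
doc_ids = list(corpus.keys())

def query(text: str, k: int = 2) -> list[str]:
    # Cosine similarity reduces to a dot product because vectors are normalized.
    scores = matrix @ embed(text)
    return [doc_ids[i] for i in np.argsort(scores)[::-1][:k]]

print(query("retail sales forecast"))  # likely ranks 'doc-1' first
```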

There’s also a growing awareness that feeding the wrong or misunderstood data into RAG systems creates real-world risk. As one participant put it, “If your AI says it’s sunny when it’s actually raining, that damages your brand across the board. Quality and accuracy matter more than ever.” When LLMs generate output based on flawed inputs, the cost isn’t just technical; it’s reputational.

4. AI-Native Buyers Are Different

A new type of customer is emerging: AI-native companies that are just now realizing they can buy structured data instead of scraping it. One participant, who runs a large B2B intent data company, shared, “We’re now seeing those customers say, ‘Great, we want your data, but also other data.’ The appetite is growing.”

These buyers are often led by engineers or researchers, not procurement teams. Some admitted they didn’t even know that data-as-a-service existed.

Discoverability is becoming a major challenge. “It’s like moving from an offline business to an online one. You need to be searchable and accessible by agents, not just people.”
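One concrete pattern for machine discoverability (our illustration, not something a participant prescribed) is publishing schema.org Dataset metadata as JSON-LD so crawlers and agents can find and interpret the product. All names and URLs below are placeholders.

```python
# Sketch: emit schema.org "Dataset" metadata as JSON-LD for a product page.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example B2B Firmographics Feed",        # hypothetical product name
    "description": "Entity-resolved company records refreshed daily.",
    "license": "https://example.com/data-license",   # placeholder URL
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.com/feeds/firmographics",  # placeholder
    },
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(dataset_metadata, indent=2))
```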

5. The Moat Is Moving

Everyone agreed that the value of general-purpose data is eroding fast. What matters now is uniqueness, defensibility, and speed.

“The data that’s out there already belongs to the models,” said one participant. “You need something they can’t just scrape or synthesize.”

Some companies are doubling down on specialized time series data or high-accuracy forecasting. “We’ve gone 59 out of 63 years by constantly reinventing ourselves and having unique data, more accurate forecasts that have value.” Others are building flexible licensing models to support both consumer-scale and enterprise integrations. 

One executive summed it up well: “You win by having the highest quality, most accurate data. Everything else is noise.”

Key Takeaways

  • AI-native demand is growing fast, but buyers are often inexperienced and hard to reach.

  • Query volume is exploding. Pricing models need to evolve for agentic workflows.

  • RAG is promising but still difficult to productize and scale.

  • Data quality and uniqueness are the strongest long-term moats.

  • Distribution matters more than ever. You need to be found by humans and machines alike.

If you are a DaaS executive interested in participating in future roundtables, apply to join our World of DaaS community.
