Why the Old Data Playbook Doesn't Work Anymore
Ten years ago, data teams were celebrated for their clean warehouses, streamlined ETL pipelines, and weekly dashboards. That model worked when business intelligence was the main goal, but with the rise of AI, these outputs are no longer enough.
A 2024 McKinsey study showed that 65% of organizations are already using generative AI in one or more business activities. Yet many of these same organizations admit they are struggling to scale AI beyond experimental pilots. Why? Because the legacy DNA of a data team, focused on reporting, pipelines, and narrow specialization, was never designed for this new reality.
Today’s Data + AI team must design systems that can learn, reason, and integrate intelligence directly into products and decisions. This transformation is fundamental and represents what we call the new DNA.
The New DNA of the Data + AI Team
1. From Structured to Unstructured Data
Structured data like SQL tables, CRM fields, and transaction logs used to be the lifeblood of analytics. But in 2025, over 80% of enterprise data is unstructured (IDC). This includes a wide range of information:
- Customer support tickets
- Slack messages and chat logs
- PDFs and scanned contracts
- Audio and video transcripts
- IoT sensor data
While unstructured data is more difficult to process, it is also typically more valuable. It contains the sentiment, intent, and real-world context that structured fields often lack.
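To make the idea concrete, here is a minimal sketch of turning a free-text support ticket into structured signals. The keyword lists and field names are illustrative placeholders; a production system would use an NLP model rather than substring matching.

```python
def extract_signals(ticket_text: str) -> dict:
    """Pull rough sentiment and intent signals out of a free-text support ticket.

    Keyword lists are toy examples for illustration only; real pipelines
    would use trained classifiers or an LLM for this step.
    """
    negative_words = ["frustrated", "broken", "refund", "cancel", "angry"]
    intent_keywords = {
        "billing": ["invoice", "charge", "refund", "payment"],
        "bug_report": ["error", "broken", "crash", "bug"],
    }
    text = ticket_text.lower()
    intents = [
        name for name, words in intent_keywords.items()
        if any(word in text for word in words)
    ]
    sentiment = "negative" if any(w in text for w in negative_words) else "neutral"
    return {"sentiment": sentiment, "intents": intents}
```

Even this toy version shows the shape of the work: unstructured input goes in, queryable fields come out, and the extraction logic is where the real engineering effort lives.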
Real-World Example: Netflix
Netflix makes sense of unstructured behavioral data, from watch habits to metadata tags and thumbnail clicks, to provide highly personalized recommendations. Without mastering unstructured data, this level of personalization would not be possible.
Takeaway: Teams that can consume, clean, and interpret unstructured data will gain a significant competitive advantage over those with conventional BI configurations.
2. Agent Architectures Are Replacing Single Models
In the early days of AI, companies would simply put a single ML model into production. Today, that approach is outdated. Modern teams are shifting toward agent-based architectures where multiple models collaborate.
- One model pulls relevant information.
- Another assesses compliance, bias, or tone.
- A third produces the final output.
This type of orchestration, often called “agent systems,” is quickly becoming the new standard.
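The three-role pipeline above can be sketched as plain function composition. The function bodies here are stubs standing in for real model calls, and all names are our own invention, not any particular framework's API:

```python
def retrieve(query: str) -> list[str]:
    # Stub retriever: a real agent would query a search index or vector store.
    knowledge = {"refund": ["Refunds are issued within 5 business days."]}
    return [doc for key, docs in knowledge.items()
            if key in query.lower() for doc in docs]

def generate(query: str, context: list[str]) -> str:
    # Stub generator: a real agent would call an LLM with the retrieved context.
    return f"Answer to '{query}': " + " ".join(context)

def review(draft: str) -> str:
    # Stub reviewer: a real agent would check compliance, bias, or tone
    # with a second model. Here we just strip a banned word.
    return draft.replace("guarantee", "[removed]")

def agent_pipeline(query: str) -> str:
    # Orchestration: retrieve, then generate, then review.
    context = retrieve(query)
    draft = generate(query, context)
    return review(draft)
```

The design point is the separation of concerns: each stage can be swapped, scaled, or evaluated independently, which is exactly what makes agent systems easier to productionize than one monolithic model.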
Industry Data
Gartner projects that by 2026, 30% of businesses will rely on AI agents to perform structured business activities, a significant increase from less than 2% in 2023.
Example: GitHub Copilot
Copilot uses several models and ranking layers to generate usable code suggestions, not just plain text. This “agentic” approach is why over half of GitHub’s code is now influenced by AI.
Takeaway: Businesses that implement agent architectures will see more stable and production-quality AI, while those using single, one-off models will struggle with scale and reliability.
3. Retrieval Engineering and RAG Pipelines
Prompting a large language model (LLM) is just the first step. To move from flashy demos to reliable enterprise applications, teams are now focused on retrieval-augmented generation (RAG).
Why RAG Matters
- Decreases hallucinations by grounding responses in specific data.
- Bases outputs on provable information, increasing reliability.
- Updates results as the knowledge base changes.
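The core RAG loop is simple to sketch: rank documents against the query, then inject the best matches into the prompt so the model is grounded in them. This toy version ranks by word overlap; a production pipeline would use embeddings and a vector database.

```python
def top_k_overlap(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    # Real systems use embedding similarity, not exact word matching.
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    # Ground the model by restricting it to retrieved context.
    context = top_k_overlap(query, docs)
    return (
        "Answer using ONLY the context below.\n"
        "Context:\n"
        + "\n".join(f"- {d}" for d in context)
        + f"\nQuestion: {query}"
    )
```

Because the prompt is rebuilt from the knowledge base on every call, outputs update automatically as documents change, which is the "updates results" property listed above.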
Example: ChatGPT Enterprise
OpenAI’s enterprise product uses RAG pipelines to anchor responses in a company’s private datasets. This is why firms like PwC and Canva have adopted it across their organizations.
Takeaway: RAG pipelines are the backbone of dependable enterprise AI. Without them, outputs will remain unpredictable and untrustworthy.
4. Evaluation Is No Longer Binary
In traditional software, tests either pass or fail. AI is different. Evaluating AI systems requires a balance of objective metrics and subjective judgment.
- Precision and recall measure accuracy.
- Tone and context measure usability.
- Explainability builds stakeholder trust.
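A hybrid evaluation can be as simple as blending an automated score with a human rubric score. The weighting below is a hypothetical choice for illustration, not a recommended standard:

```python
def precision_recall(predicted: set, relevant: set) -> tuple[float, float]:
    # Standard definitions: precision = hits / |predicted|, recall = hits / |relevant|.
    hits = len(predicted & relevant)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def blended_score(precision: float, recall: float, human_rating: float,
                  metric_weight: float = 0.6) -> float:
    # Hypothetical blend: weight automated F1 against a 0-1 human rubric score
    # covering tone, context, and explainability.
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return metric_weight * f1 + (1 - metric_weight) * human_rating
```

The exact formula matters less than the principle: neither number alone defines "good enough," so teams track both and agree on the trade-off explicitly.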
Example: Meta’s LLaMA Framework
Meta built a hybrid evaluation system for LLaMA that combines automated metrics with human ratings on reasoning and bias. It’s a clear reminder that “accuracy” is only half the story.
Takeaway: The ability to define and measure “good enough” for AI outputs both technically and from a human perspective is a critical skill for today’s teams.
5. AI Observability as the Trust Layer
Classic observability was about catching broken pipelines or lagging dashboards. AI observability goes further: it's about catching drift, hallucinations, and degrading prompt performance before users are impacted.
The most important observability metrics include:
- Hallucination rates
- Cost and latency
- Prompt failure trends
- Bias detection
- Model drift
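A minimal version of this monitoring is a rolling window over recent requests with an alert threshold. The window size and threshold below are illustrative, not recommended values:

```python
from collections import deque
from statistics import mean

class AIMonitor:
    """Rolling-window monitor for a subset of the metrics listed above.

    A sketch only: real observability stacks also track cost, bias,
    and drift against a reference distribution.
    """

    def __init__(self, window: int = 100, hallucination_alert: float = 0.05):
        self.latencies = deque(maxlen=window)
        self.hallucinations = deque(maxlen=window)
        self.hallucination_alert = hallucination_alert

    def record(self, latency_ms: float, hallucinated: bool) -> None:
        # Each production request reports its latency and a pass/fail
        # groundedness check from an automated evaluator.
        self.latencies.append(latency_ms)
        self.hallucinations.append(1 if hallucinated else 0)

    def hallucination_rate(self) -> float:
        return mean(self.hallucinations) if self.hallucinations else 0.0

    def needs_alert(self) -> bool:
        # Fire before failures snowball across thousands of users.
        return self.hallucination_rate() > self.hallucination_alert
```

The point of the rolling window is that the alert reflects current behavior, so a regression introduced by a model or prompt change surfaces within the last N requests rather than being averaged away over history.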
Industry Proof
Monte Carlo, a leader in data observability, reports that companies with robust observability programs cut error detection time by 30%. In AI, this means catching failures before they snowball across thousands of users.
Example: LinkedIn
LinkedIn continuously monitors its recommendation algorithms for fairness and drift to ensure that small issues don’t become major problems for its vast user base.
Takeaway: Without observability, companies risk deploying AI that erodes trust more quickly than it adds value.
6. Governance and Ethics Are Integrated, Not Bolted On
When AI is embedded in core business processes, governance and compliance become non-negotiable.
- Regulations: The EU AI Act of 2025 places strict demands on transparency and risk.
- Business Impact: Poorly governed AI can expose companies to significant fines and reputational damage.
Example: IBM Watson
Watson's initial healthcare initiatives struggled partly because governance frameworks were not in place to ensure reliability and explainability. Those early failures reshaped how businesses approach ethical AI today.
Takeaway: Governance isn’t a barrier to innovation; it’s what makes AI adoption sustainable and responsible.
The Skills That Define the New DNA
The new Data + AI team brings together disciplines that once operated in silos:
- Data Engineering: To manage structured and unstructured pipelines.
- ML Engineering: For model training, fine-tuning, and deployment.
- Retrieval Engineering: To build and optimize RAG pipelines.
- Product Thinking: To integrate AI outputs into actionable workflows.
- Observability & Governance: To track trust, compliance, and fairness.
It’s less about having “rockstar” individuals and more about building hybrid teams that can solve complex, system-level problems.
Where Pedals Up Fits Into This Picture
At Pedals Up, we've seen how businesses in SaaS, fintech, e-commerce, and Web3 are grappling with these shifts. Our role is to help teams build and operationalize AI capabilities, from unstructured data pipelines to observability dashboards, so they can focus on business outcomes rather than infrastructure.
We don’t believe in selling “AI in a box.” We partner with organizations to build customized systems that fit their specific needs, whether that means rolling out a RAG pipeline, combining agent architectures, or creating comprehensive governance frameworks.
Creating the Future-Ready Data + AI Team

The future DNA of the Data + AI team is not about dashboards or one-off models. It’s about a holistic approach that includes:
- Unstructured data fluency
- Agent-based systems
- Retrieval and RAG engineering
- Hybrid evaluation metrics
- Observability and governance as core principles
The firms that adopt this new DNA will embed intelligence into the heart of their products and decisions. Those that don’t will remain stuck in perpetual pilots that never achieve their full potential.
At Pedals Up, our view is simple: the companies that succeed with AI are the ones that treat it as a team sport, not a side project.