
Reconsidering the value in AI

This is still playing out in the industry, but I think I got it wrong in two areas recently.

  1. I undervalued vertical market AI applications built as wrappers around general-purpose LLMs from providers like OpenAI and Anthropic.
  2. I also didn't see the value or complexity in "prompt engineering."

Application Data

One of the biggest lessons I’ve learned is that application data is a valuable asset that significantly enhances AI’s utility. While LLMs themselves are general-purpose, the data they interact with defines their value in specific use cases. Companies that build applications leveraging domain-specific data have a competitive moat: their data provides the context, personalization, and accuracy that a generic model alone cannot. This is why AI-driven vertical market applications are starting to proliferate. They streamline AI’s use in practical, industry-specific workflows while benefiting from ongoing improvements in foundation models.
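As a rough sketch of what that moat looks like in practice, consider a vertical application that pulls its own domain records into the prompt as context. The find_customer_history lookup, the record fields, and the product names below are hypothetical stand-ins for whatever data a real application holds:

    # Sketch: proprietary application data as context for a general-purpose LLM.
    # `find_customer_history` and its fields are hypothetical; the differentiation
    # lives in the data and workflow, not in the underlying model.
    from dataclasses import dataclass

    @dataclass
    class AccountRecord:
        customer: str
        product: str
        history: list[str]

    def find_customer_history(account_id: str) -> AccountRecord:
        # A real application would query its own database or CRM here.
        return AccountRecord(
            customer="Acme Corp",
            product="Widget Pro",
            history=[
                "2024-11-02: reported sync failure after firmware update",
                "2024-12-10: renewed annual contract",
            ],
        )

    def build_prompt(account_id: str, question: str) -> str:
        record = find_customer_history(account_id)
        history = "\n".join(record.history)
        # Domain-specific context is what turns a generic model into a useful tool.
        return (
            f"You are a support assistant for {record.product}.\n"
            f"Customer: {record.customer}\n"
            f"Account history:\n{history}\n\n"
            f"Question: {question}"
        )

    print(build_prompt("acme-123", "Why did the last sync fail?"))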

Commoditized APIs

The sudden emergence of DeepSeek, which mimicked OpenAI’s API, highlights another reality: LLM access itself is becoming a commodity. If an API-compatible model can slot into existing applications overnight, then differentiation won’t come from who provides the model. It will come from how applications use it. This reinforces the importance of application-layer innovation, where proprietary data, workflow integrations, and user experience will define winners.
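As a concrete illustration, assume an application already written against the OpenAI Python SDK. Because compatible providers expose the same interface, switching can be little more than a configuration change; the DeepSeek base URL and model name below reflect its public documentation but should be treated as assumptions to verify:

    # Sketch: swapping an OpenAI-compatible provider behind an existing application.
    # Uses the official `openai` Python SDK; the endpoint and model name below are
    # illustrative and should be checked against the provider's documentation.
    import os
    from openai import OpenAI

    # Original configuration, pointing at OpenAI's hosted models:
    # client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # model = "gpt-4o"

    # Drop-in replacement: same SDK, same calling code, different endpoint.
    client = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],
        base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    )
    model = "deepseek-chat"

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize this support ticket."}],
    )
    print(response.choices[0].message.content)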

The Real Challenge of Prompt Engineering

Another area where my initial assumptions were off was in underestimating the importance of sophisticated prompt engineering. Early on, the dominant view was that prompts were simple, and that improvements in models would render prompt complexity unnecessary. That hasn’t been the case.

Fine-tuning prompts, structuring them effectively, and validating their performance through rigorous eval processes have proven to be a significant amount of work in my current project. Subtle differences in wording can lead to drastically different outcomes, and the best-performing applications are those that systematically test and optimize prompts to maximize reliability and accuracy. This goes beyond simple trial and error; it requires a structured approach to benchmarking outputs and refining context.
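A minimal sketch of what that structured approach can look like is below. The prompt variants, the test cases, and the call_model stub are all hypothetical; a real harness would call the actual provider and score against a much larger labeled set:

    # Sketch: a tiny eval harness for comparing prompt variants.
    # `call_model` is a stand-in for a real API call so the harness runs end to end;
    # the cases and keyword scoring are deliberately simple placeholders.
    PROMPT_VARIANTS = {
        "v1_terse": "Classify the sentiment of this review as positive or negative: {text}",
        "v2_structured": (
            "You are a strict classifier. Reply with exactly one word, "
            "'positive' or 'negative', for the following review:\n{text}"
        ),
    }

    EVAL_CASES = [
        {"text": "The setup took five minutes and it just worked.", "expected": "positive"},
        {"text": "Support never answered and the unit died in a week.", "expected": "negative"},
    ]

    def call_model(prompt: str) -> str:
        # Replace with the real provider client; this fake keeps the example runnable.
        return "positive" if "worked" in prompt else "negative"

    def score_variant(template: str) -> float:
        # Fraction of cases where the expected label appears in the model's reply.
        hits = sum(
            case["expected"] in call_model(template.format(text=case["text"])).lower()
            for case in EVAL_CASES
        )
        return hits / len(EVAL_CASES)

    for name, template in PROMPT_VARIANTS.items():
        print(f"{name}: {score_variant(template):.0%} of cases passed")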

The Shift in AI Value

The AI industry is evolving in a direction where the real competitive advantage lies not in access to the models themselves but in:

  • Proprietary application data that enhances AI performance.
  • Workflow-specific integrations that drive real utility.
  • Deep expertise in optimizing AI interactions through prompt engineering and evals.

I initially underestimated how quickly these shifts would occur, but it’s now clear that the next wave of AI differentiation will come from how businesses wield these tools, not just from the tools themselves.
