Latest Posts
Recent explorations in technology and human systems
The AI Correction Is Coming. And I Feel Fine
"The AI Correction Is Coming. And I Feel Fine" offers tech leaders a strategic guide through AI's predictable boom-to-deployment cycle, informed by Perez and Christensen's frameworks. It analyzes current market dynamics and signals of an inevitable correction, offering actionable strategies. The article advises prioritizing durable leverage in infrastructure, data, and orchestration to prepare for the productive deployment phase, emphasizing operational excellence.
Contract-First Prompting
"Contract-First Prompts: Engineering Predictable AI Interactions," introduces a methodology to enhance the reliability and efficiency of interactions with Large Language Models (LLMs). It addresses the common problem of LLMs producing imprecise outputs due to underspecified instructions, likening it to making an API call without a clear contract.
The Future of Soundness
"The Future of Soundness: Effect in AI and Beyond (Part 4)," concludes the series by arguing that Systemically Sound Programming with Effect-TS is essential for developing the next generation of applications, particularly in the unpredictable realm of Artificial Intelligence.
Architecting Soundness
"Architecting Soundness: Concurrency, Error Management, and Dependencies (Part 3)," elaborates on the practical application of Effect-TS to address systemic fragility in software.
The Effect Blueprint
"The Effect Blueprint: Engineering Predictable Behavior (Part 2)," introduces Effect-TS as the solution to the "TypeScript Gap" discussed in Part 1—the inability of TypeScript alone to guarantee robust application behaviors beyond type safety. The core concept is the Effect datatype, which is presented as a pure, immutable description of a computation rather than an immediate execution.
The TypeScript Gap
"The TypeScript Gap: Why Our Systems Are Still Fragile (Part 1)," argues that while TypeScript provides excellent type safety and ensures data consistency, it leaves a significant gap in addressing the systemic fragility of complex applications
Orchestrating Work
"Architecting Agentic AI Workflows (Part 5)" builds on the earlier discussions of the AI computer and the LLM as an operating system, shifting focus from single LLM interactions to composing complex, multi-step solutions with AI agents.
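A hypothetical sketch of the loop shape such agentic workflows tend to take (every name here is invented for illustration):

```typescript
// The model plans, tools act, and observations feed back into the next turn.
type Step =
  | { type: "tool"; name: string; args: unknown }
  | { type: "done"; answer: string };

async function runAgent(
  task: string,
  plan: (history: string[]) => Promise<Step>, // the LLM call, abstracted away
  tools: Record<string, (args: unknown) => Promise<string>>,
  maxSteps = 5
): Promise<string> {
  const history = [task];
  for (let i = 0; i < maxSteps; i++) {
    const step = await plan(history);
    if (step.type === "done") return step.answer;
    // Run the requested tool and append the observation for the next planning turn.
    const observation = await tools[step.name](step.args);
    history.push(`${step.name} -> ${observation}`);
  }
  return "Step budget exhausted";
}
```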
Building AI Apps
Developers interact with the LLM OS through APIs, relying on prompt engineering and tool calling to harness its capabilities. The Vercel AI SDK simplifies integration by providing core primitives for managing LLM interactions, including conversation history and the execution of external functions. Mastering these techniques enables intelligent applications that combine the AI's pattern matching with traditional, deterministic code.
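A minimal tool-calling sketch with the Vercel AI SDK (exact option names vary across SDK versions; the model choice and tool are illustrative):

```typescript
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// The model decides when to call the tool; our code supplies the deterministic logic.
const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 21 }), // stubbed lookup
    }),
  },
  maxSteps: 2, // allow one tool round-trip before the final answer
  prompt: "What's the weather in Lisbon right now?",
});

console.log(result.text);
```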
The LLM Operating System
The LLM Operating System redefines the role of large language models as central orchestrators in computing, akin to traditional operating systems. It emphasizes natural language as a programming interface, allowing developers to interact with AI capabilities intuitively. Key features include the LLM acting as the core computational unit, managing context like RAM, and utilizing external tools as peripherals. This paradigm shift facilitates a new approach to application development, moving from explicit coding to intelligent orchestration of tasks and resources.
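One way to picture the context-as-RAM idea, as an illustrative sketch (the token budget and the four-characters-per-token heuristic are invented for the example):

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

const CONTEXT_BUDGET = 4000; // tokens; illustrative figure

// Crude token estimate: roughly four characters per token.
const estimateTokens = (m: Message) => Math.ceil(m.content.length / 4);

// Keep the system prompt resident; evict the oldest turns when over budget.
// Assumes the history starts with a system message.
function fitContext([system, ...turns]: Message[]): Message[] {
  const kept: Message[] = [];
  let used = estimateTokens(system);
  for (const m of [...turns].reverse()) {
    used += estimateTokens(m);
    if (used > CONTEXT_BUDGET) break;
    kept.unshift(m);
  }
  return [system, ...kept];
}
```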
The Pattern Matching Computer
Pattern matching is the core operating principle of LLMs, distinguishing them from traditional computers. It involves implicit, statistical processes to generate content based on learned patterns rather than explicit, deterministic logic. LLMs predict sequences by recognizing statistical regularities in training data, leading to coherent outputs but also potential errors like hallucinations. Understanding this principle is crucial for effective development and application of LLMs, paving the way for strategic programming and interaction with AI systems.
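A toy sketch of that statistical core (a bigram model; real LLMs scale the same idea up enormously with neural networks):

```typescript
const corpus = "the cat sat on the mat the cat ran".split(" ");

// Count how often each token follows each other token.
const counts = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const row = counts.get(corpus[i]) ?? new Map<string, number>();
  row.set(corpus[i + 1], (row.get(corpus[i + 1]) ?? 0) + 1);
  counts.set(corpus[i], row);
}

// Predict the most likely continuation: pattern recall, not deterministic logic.
function predict(token: string): string | undefined {
  const row = counts.get(token);
  if (!row) return undefined;
  return [...row.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predict("the")); // "cat" — the most frequently seen pattern
```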