PLAY PODCASTS
LangChain: LLM Integration for Elixir Apps with Mark Ericksen

Elixir Wizards

June 12, 2025 · 38m 18s

Audio is streamed directly from the publisher (aphid.fireside.fm) as published in their RSS feed. Play Podcasts does not host this file. Rights-holders can request removal through the copyright & takedown page.

Show Notes

Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic’s Claude, Google’s Gemini) so you can work with any LLM through a single, consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies.
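To make the unified-interface idea concrete, here is a minimal sketch based on the library’s documented `LLMChain` flow. The model name and prompt are placeholders, and the assumption is that swapping providers only means constructing a different chat-model struct (e.g. `ChatAnthropic` instead of `ChatOpenAI`):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around one provider; the rest of the pipeline is provider-agnostic.
{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Summarize BEAM concurrency in one sentence."))
  |> LLMChain.run()

# The assistant's reply lands as the chain's last message.
IO.inspect(updated_chain.last_message)
```

Because the chain only talks to the struct in `llm`, the same pipeline can run against Claude or Gemini without touching the message-handling code.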

Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models.

Key topics discussed in this episode:

• Abstracting LLM APIs behind a unified Elixir interface
• Building and managing conversation chains across multiple models
• Exposing application functionality to LLMs through tool integrations
• Automatic retries and fallback chains for production resilience
• Supporting a variety of LLM providers
• Tracking and optimizing token usage for cost control
• Configuring API keys, authentication, and provider-specific settings
• Handling rate limits and service outages with graceful degradation
• Processing multimodal inputs (text, images) in LangChain workflows
• Extracting structured data from unstructured LLM responses
• Leveraging “content parts” in v0.4 for advanced thinking-model support
• Debugging LLM interactions using verbose logging and telemetry
• Kickstarting experiments in Livebook notebooks and demos
• Comparing Elixir LangChain to the original Python implementation
• Crafting human-in-the-loop workflows for interactive AI features
• Integrating LangChain with the Ash framework for chat-driven interfaces
• Contributing to open-source LLM adapters and staying ahead of API changes
• Building fallback chains (e.g., OpenAI → Azure) for seamless continuity
• Embedding business logic decisions directly into AI-powered tools
• Summarization techniques for token efficiency in ongoing conversations
• Batch processing tactics to leverage lower-cost API rate tiers
• Real-world lessons on maintaining uptime amid LLM service disruptions
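The fallback-chain idea from the list above (e.g. OpenAI → Azure) can be sketched as follows. This assumes the `with_fallbacks:` option on `LLMChain.run/2`; the Azure endpoint URL and environment-variable name are hypothetical placeholders, since Azure OpenAI uses per-deployment URLs and keys:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Primary: OpenAI's hosted API.
primary = ChatOpenAI.new!(%{model: "gpt-4o"})

# Fallback: the same model family served from an Azure OpenAI deployment
# (hypothetical endpoint and key for illustration only).
fallback =
  ChatOpenAI.new!(%{
    endpoint: "https://example-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions",
    api_key: System.fetch_env!("AZURE_OPENAI_KEY"),
    model: "gpt-4o"
  })

{:ok, chain} =
  %{llm: primary}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Draft a status update."))
  |> LLMChain.run(with_fallbacks: [fallback])
```

If the primary call fails (outage, rate limit), the run is retried against the next model in the list, so callers see one successful result rather than a provider error.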

Links mentioned:
https://rubyonrails.org/
https://fly.io/
https://zionnationalpark.com/
https://podcast.thinkingelixir.com/
https://github.com/brainlid/langchain
https://openai.com/
https://claude.ai/
https://gemini.google.com/
https://www.anthropic.com/
Vertex AI Studio https://cloud.google.com/generative-ai-studio
https://www.perplexity.ai/
https://azure.microsoft.com/
https://hexdocs.pm/ecto/Ecto.html
https://oban.pro/
Chris McCord’s ElixirConf EU 2025 Talk https://www.youtube.com/watch?v=ojL_VHc4gLk

Getting started:
https://hexdocs.pm/langchain/getting_started.html
https://ash-hq.org/
https://hex.pm/packages/langchain
https://hexdocs.pm/igniter/readme.html
https://www.youtube.com/watch?v=WM9iQlQSF_g
@brainlid on Twitter and Bluesky

Special Guest: Mark Ericksen.

Topics

Large language models, AI integration, machine learning, deep learning, natural language processing, NLP, AI in software development, AI assistants, chatbot development, prompt engineering, API orchestration, conversational AI, AI pipelines, software engineering, developer tools, functional programming, Elixir programming, BEAM VM, Erlang VM, Phoenix framework, LiveView, LangChain, MLOps, DevOps, CI/CD, cloud AI services, serverless computing, microservices, real-time streaming, telemetry, observability, performance optimization, scalability, fault tolerance, software architecture, API design, developer experience, open source libraries, community-driven development, software tooling, code generation, test automation, integration patterns, data processing, JSON, asynchronous programming, concurrency, background processing, task queues, data science, edge computing, cloud computing, AI ethics, data privacy, data security, IoT, ML pipelines, monitoring, logging, AI education, tech podcast, developer podcast, software development podcast, Elixir tutorials, prompt workflows, model-agnostic AI, multi-model support, cost optimization, rate limiting, retry strategies, fallback chains, token management, batching strategies, multimodal processing, structured data extraction, human-in-the-loop, OpenAI integration, Anthropic Claude, Google Gemini, Azure AI, AWS AI, API key management, authentication strategies, outage handling, content orchestration, live coding demos, Livebook notebooks, tutorial series, polyglot interoperability, Elixir tooling, LangChain vs Python, AI framework comparison, software innovation, cutting-edge tech, enterprise AI, startup AI, AI roadmap, future of AI, emerging technologies, OpenAI, ChatGPT, developer workflows, AI scalability, business intelligence, production reliability, tech insights, podcast episode, Season 14, Elixirverse, Elixir Wizards