
The New Stack Podcast
320 episodes — Page 3 of 7
Ep 1514: AI Agents are Dumb Robots, Calling LLMs
AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights.
Learn more from The New Stack about emerging trends in AI agents:
Lessons From Kubernetes and the Cloud Should Steer the AI Revolution
AI Agents: Why Workflows Are the LLM Use Case to Watch
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
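To make Hinkle’s “dumb robot” framing concrete, here is a minimal sketch (not from the episode) of an agent as a thin Python function: it forwards a task to an LLM and mechanically runs whatever tool call comes back. The LLM call, tool names, and response format are illustrative assumptions.

```python
# Hypothetical tools the agent can run on the LLM's behalf.
TOOLS = {
    "query_api": lambda url: f"GET {url} -> 200 OK",
    "exchange_data": lambda payload: f"forwarded {len(payload)} bytes",
}

def call_llm(task: str) -> dict:
    """Placeholder for a real LLM request; in practice this would be an
    HTTP call to a model provider that returns a tool invocation."""
    return {"tool": "query_api", "args": {"url": "https://example.com/orders"}}

def agent(task: str) -> str:
    """The 'dumb robot': no reasoning of its own, it simply relays the task
    to the LLM and executes the tool the model selects."""
    decision = call_llm(task)
    return TOOLS[decision["tool"]](**decision["args"])

if __name__ == "__main__":
    print(agent("Check the status of recent orders"))
```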
Ep 1513: Goodbye SaaS, Hello AI Agents
The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI. Computing Evolution – Starting from mainframes, the journey has moved through desktops, client servers, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure.
Learn more from The New Stack about the evolution to AI agents:
How AI Agents Are Starting To Automate the Enterprise
Can You Trust AI To Be Your Data Analyst?
Agentic AI is the New Web App, and Your AI Strategy Must Evolve
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1512: How Generative AI Is Reshaping the SDLC
Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code. Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion.
Learn more from The New Stack about Amazon Q Developer:
Amazon Q Developer Now Handles Your Entire Code Pipeline
Amazon Q Apps: AI-Powered Development for All
Amazon Revamps Developer AI With Code Conversion, Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1511: OAuth Works for AI Agents but Scaling is Another Question
Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authentication, already serves the purpose of granting access without exposing passwords. Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. For the full discussion, check out The New Stack Makers interview with Kaczorowski.
Learn more from The New Stack about OAuth requirements for AI Agents:
OAuth 2.0: A Standard in Name Only?
AI Agents Are Redefining the Future of Identity and Access Management
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
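To illustrate the pattern Kaczorowski describes, here is a minimal sketch of the OAuth 2.0 client credentials flow in Python: an agent trades its own credentials for a short-lived, narrowly scoped access token instead of ever handling a user’s password. The token endpoint, client ID, and scope name are hypothetical.

```python
import requests

# Hypothetical authorization server; real values depend on the identity provider.
TOKEN_URL = "https://auth.example.com/oauth/token"

def get_agent_token(client_id: str, client_secret: str) -> str:
    """OAuth 2.0 client credentials grant: exchange the agent's credentials
    for an access token limited to the 'calendar:read' scope."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "calendar:read",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The agent then authenticates each request with a Bearer header:
# headers = {"Authorization": f"Bearer {get_agent_token('agent-123', 'secret')}"}
```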
Ep 1510: LLMs and AI Agents Evolving Like Programming Languages
The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co. Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions. Parlant uses "attentive reasoning queries (ARQs)" to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues that subjectivity in human interpretation extends to LLMs, making perfect objectivity unrealistic.
Learn more from The New Stack about the evolution of LLMs:
AI Alignment in Practice: What It Means and How to Get It
Agentic AI: The Next Frontier of AI Power
Make the Most of AI Agents: Tips and Tricks for Developers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1509: Writing Code About Your Infrastructure? That's a Losing Race
Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track. System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient. By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improves DevOps collaboration. To explore how this ties into the concept of the digital twin, listen to the full New Stack Makers episode.
Learn more from The New Stack about System Initiative:
Beyond Infrastructure as Code: System Initiative Goes Live
How System Initiative Treats AWS Components as Digital Twins
System Initiative Code Now Open Source
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1508: OpenTelemetry: What’s New with the 2nd Biggest CNCF Project?
Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools. OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it’s the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry’s role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction. Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.
Learn more from The New Stack about the latest trends in OpenTelemetry:
What Is OpenTelemetry? The Ultimate Guide
Observability in 2025: OpenTelemetry and AI to Fill In Gaps
Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
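For readers new to the project, here is a minimal sketch of manual instrumentation with the OpenTelemetry Python SDK: it creates one span, the basic unit of a distributed trace, and prints it to the console. The service and attribute names are made up, and a real deployment would export to a collector rather than stdout.

```python
# pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

# One unit of work becomes one span; spans stitched across services form a trace.
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "A-1001")
    span.set_attribute("payment.amount", 42.50)
```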
Ep 1507: What’s Driving the Rising Cost of Observability?
Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks. Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observability tools often hinder this, forcing teams to cut back on crucial data. Yen highlights the potential of AI and innovations like OpenTelemetry to address these challenges.
Learn more from The New Stack about the latest trends in observability:
Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth
Observability in 2025: OpenTelemetry and AI to Fill In Gaps
Observability and AI: New Connections at KubeCon
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
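As a quick illustration of the SLO practice Yen mentions, the arithmetic behind an error budget is simple: it is the portion of a time window a service is allowed to fall short of its objective. The numbers below are illustrative.

```python
# How much "bad" time a 99.9% availability SLO allows over a 30-day window.
slo_target = 0.999
window_minutes = 30 * 24 * 60              # 43,200 minutes in 30 days

error_budget_minutes = (1 - slo_target) * window_minutes
print(f"Error budget: {error_budget_minutes:.1f} minutes per 30 days")  # ~43.2
```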
Ep 1506: How Oracle Is Meeting the Infrastructure Needs of AI
Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs. The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption. Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle’s Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes’ user-friendly experience. Raghavan emphasized the importance of stateful job management and infrastructure innovations to meet the demands of modern AI workloads.
Learn more from The New Stack about how Oracle is addressing the GPU demand for AI workloads with its GPU superclusters and enhanced Kubernetes functionality:
Oracle Code Assist, Java-Optimized, Now in Beta
Oracle’s Code Assist: Fashionably Late to the GenAI Party
Oracle Unveils Java 23: Simplicity Meets Enterprise Power
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1505: Arm: See a Demo About Migrating an x86-Based App to ARM64
The hardware industry is surging, driven by AI's demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology. Bakre highlighted Arm’s partnership with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm’s power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries. Attendees at Arm’s booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized environments, emphasizing Arm’s readiness for the AI era.
Learn more from The New Stack about Arm:
Arm Eyes AI with Its Latest Neoverse Cores and Subsystem
Big Three in Cloud Prompts ARM to Rethink Software
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1504: Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?
Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry. The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless, and distributed systems. Heroku views this open-source effort as an opportunity to redefine best practices for the next era of cloud development.
Learn more from The New Stack about Heroku:
How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
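As a reminder of what the methodology prescribes in practice, here is a tiny sketch of its config factor (store configuration in the environment) in Python; the variable names and defaults are examples, not part of Heroku’s announcement.

```python
import os

# Twelve-Factor config: credentials and backing-service locations come from the
# environment, so the same build runs unchanged in dev, staging, and production.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost:5432/app")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "info")

if __name__ == "__main__":
    print(f"connecting to {DATABASE_URL} (log level: {LOG_LEVEL})")
```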
Ep 1503: How Falco Brought Real-Time Observability to Infrastructure
Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real-time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig, Leonardo Grasso, Open Source Tech Lead Manager at Sysdig, and Luca Guerra, Sr. Open Source Engineer at Sysdig to get the latest update on Falco. Graduating from the Cloud Native Computing Foundation (CNCF) in February 2024 after entering its sandbox six years prior, Falco’s maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundation. Looking ahead, the team is enhancing core functionalities, including more customizable rules and alert formats. A key innovation is Falco Talon, introduced in September 2023, which provides a no-code response engine to link alerts with real-time remediation actions. Talon addresses a longstanding gap in automating responses within the Falco ecosystem, advancing its capabilities for runtime security.
Learn more from The New Stack about Falco:
Falco Is a CNCF Graduate. Now What?
Falco Plugins Bring New Data Sources to Real-Time Security
eBPF Tools: An Overview of Falco, Inspektor Gadget, Hubble and Cilium
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1502: How cert-manager Got to 500 Million Downloads a Month
Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly. Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base. With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager’s impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.
Learn more from The New Stack about cert-manager:
Jetstack’s cert-manager Joins the CNCF Sandbox of Cloud Native Technologies
Jetstack Secure Promises to Ease Kubernetes TLS Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1501: Why Are So Many Developers Out of Work in 2024?
The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. oss O'neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous. Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce. Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.
Learn more from The New Stack about developer talent, skills and needs:
Top Developer Skills for AI and Cloud Jobs
5 Software Development Skills AI Will Render Obsolete
Cloud Native Skill Gaps are Killing Your Gains
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1500: MapLibre: How a Fork Became a Thriving Open Source Project
When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products. In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors. Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.
Learn more from The New Stack about forking open source projects:
Why Do Open Source Projects Fork?
OpenSearch: How the Project Went From Fork to Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1499: OpenSearch: How the Project Went from Fork to Foundation
At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch's journey and the advantages of the Linux Foundation's stewardship. OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications. Despite criticism over slower indexing speeds compared to Elasticsearch, significant performance improvements have been made. The latest release, OpenSearch 2.17, delivers 6.5x faster query performance and a 25% indexing improvement due to segment replication. Future efforts aim to enhance indexing, search, storage, and vector capabilities while optimizing costs and efficiency. Contributions are welcomed via opensearch.org.
Learn more from The New Stack about deploying applications on OpenSearch:
AWS Transfers OpenSearch to the Linux Foundation
From Flashpoint to Foundation: OpenSearch’s Path Clears
Semantic Search with Amazon OpenSearch Serverless and Titan
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1498: Is Apache Spark Too Costly? An Amazon Engineer Tells His Story
Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with The New Stack Makers, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a "go-to" engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions. Initially, Spark was chosen at Amazon for its simplicity and open-source flexibility, allowing efficient merging of data with minimal SQL code. The team leveraged Spark in a decoupled architecture over S3 storage, scaling it to handle thousands of jobs daily. However, as data volumes grew to hundreds of terabytes and beyond, Spark’s limitations became apparent. Long processing times and high costs prompted a search for alternatives. Enter Ray—a unified framework designed for scaling AI and Python applications. After experimentation, Ames and his team noted significant efficiency improvements, driving the shift from Spark to Ray to meet scalability and cost-efficiency needs.
Learn more from The New Stack about Apache Spark and Ray:
Amazon to Save Millions Moving From Apache Spark to Ray
How Ray, a Distributed AI Framework, Helps Power ChatGPT
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
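For readers unfamiliar with Ray, the sketch below shows its basic task model in Python: decorate a function, fan it out, and gather the results. It is only an illustration of the programming model, not Amazon’s pipeline; the partition data and function are invented.

```python
# pip install ray
import ray

ray.init()  # starts a local cluster; on real infrastructure this attaches to one

@ray.remote
def compact(partition):
    """Stand-in for per-partition work, such as merging or compacting a table slice."""
    return sum(partition)

# Fan out one task per partition, then gather the results.
partitions = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
futures = [compact.remote(p) for p in partitions]
print(sum(ray.get(futures)))
```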
Ep 1497: Codiac: Kubernetes Doesn't Need To Be That Complex
In this New Stack Makers episode, Codiac aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible. Codiac’s interface streamlines continuous integration and deployment (CI/CD), reducing deployment steps to a single line of code within CI/CD pipelines. Developers can easily deploy, manage containers, and configure applications without mastering Kubernetes' esoteric syntax. Codiac also offers features like "cabinets" to organize assets across multi-cloud environments and enables repeatable processes through snapshots, making cluster management smoother. For experienced engineers, Codiac alleviates the burden of manually managing YAML files and configuring multiple services. With ephemeral clusters and repeatable snapshots, Codiac supports scalable, reproducible development workflows, giving engineers a practical way to manage applications and infrastructure seamlessly across complex Kubernetes environments.
Learn more from The New Stack about deploying applications on Kubernetes:
Kubernetes Needs to Take a Lesson from Portainer on Ease-of-Use
Three Common Kubernetes Challenges and How to Solve Them
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1496: Valkey: What’s New and What’s Next?
Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases. Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.
Learn more from The New Stack about Valkey:
Valkey Is a Different Kind of Fork
AWS Adds Support, Drops Prices, for Redis-Forked Valkey
Valkey: A Redis Fork With a Future
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
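Because Valkey keeps the Redis wire protocol, existing Redis client libraries work against it unchanged; the short sketch below uses the Python redis client against a local Valkey instance. The host, port, and keys are assumptions for illustration.

```python
# pip install redis -- Valkey speaks the Redis protocol, so the stock client works.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Basic key/value and hash operations behave exactly as they would on Redis.
r.set("greeting", "hello from valkey")
r.hset("user:1", mapping={"name": "Ada", "plan": "free"})

print(r.get("greeting"))
print(r.hgetall("user:1"))
```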
Ep 1495: Why Beginning Developers Love Python
Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures. Nicholson envisions Python's future advancements, particularly in improving multi-threading and expanding usage in mobile development. She acknowledges Python’s critical role in AI and data science but remains cautious about AI’s pervasive application, likening it to a temporary trend. On open source in the enterprise, Nicholson critiques companies profiting from open-source tools while adopting restrictive licenses. Instead, she admires models like Red Hat’s, which leverage open source sustainably without compromising accessibility or innovation.
Learn more from The New Stack about Python:
Python 3.13: Blazing New Trails in Performance and Scale
The Top 5 Python Packages and What They Do
Python Mulls a Change in Version Numbering
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1494: Platform Engineering Rules, now with AI
Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of Capital One, showcasing how AI workloads will transform platform operations. Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions.
Learn more from The New Stack about Kubernetes:
Cloud Native Networking as Kubernetes Starts Its Second Decade
Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care
How Cloud Foundry Has Evolved With Kubernetes
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1493: Data Observability: MultiCloud, GenAI Make Challenges Harder
Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems. As data becomes more essential across industries, challenges in data observability have intensified. Choudhary highlights the complexity of tracking where data is produced, used, and its compliance requirements, especially with the surge in unstructured data. He emphasized that data's operational role in business decisions, marketing, and operations heightens the need for better traceability. Moving forward, traceability and the ability to manage the growing volume of alerts will become areas of hyper-focus for enterprises.
Learn more from The New Stack about data observability:
What Is Data Observability and Why Does It Matter?
The Looming Crisis in the Observability Market
The Growth of Observability Data Is Out of Control!
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1492: Rust’s Expanding Horizons: Memory Safe and Lightning Fast
Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language's growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel. Marcey highlighted Rust’s strengths as a safe and fast systems language, noting its use on the web through WebAssembly (Wasm), though adoption there is still early. He also addressed Rust vs. Go, explaining that Rust excels in performance-critical applications. Marcey discussed recent updates, such as Rust 1.81, and project goals for 2024, which include a new edition and async improvements. He also touched on government interest in Rust, including DARPA’s initiative to convert C code to Rust, and the Rust Security Initiative, aimed at maintaining the language’s strong security reputation.
Learn more from The New Stack about Rust:
Could Rust be the Future of JavaScript Infrastructure?
Rust Growing Fastest, But JavaScript Reigns Supreme
Rust vs. Zig in Reality: A (Somewhat) Friendly Debate
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1491: Are We Thinking About Supply Chain Security All Wrong?
In a New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as "suppliers" of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide. Williams critiques the industry's response to this, noting that instead of involving maintainers in software supply chain security, companies have relied on third-party vendors. However, these vendors have no relationship with the maintainers, leading to increased vulnerabilities. Williams advocates for better engagement with maintainers, especially at build time, to improve security. She also reflects on the growing pressures on maintainers and the underappreciation of release teams.
Learn more from The New Stack about the open source software supply chain:
2023: The Year Open Source Security Supply Chain Grew Up
Fortifying the Software Supply Chain
The Challenges of Securing the Open Source Supply Chain
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1490: What a CTO Learned at Nvidia About Managing Engineers
In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. He credits Huang for teaching him the importance of focusing on fundamental architecture rather than relying on iterative testing alone. Wang highlights the impact of generative AI (GenAI) on Bloomreach, explaining how AI-driven search is essential to understanding human language and user intent. As GenAI reshapes application development, Wang stresses the need for engineers to adopt new skills in AI manipulation, while still maintaining traditional coding expertise. He advocates for continuous learning, acknowledging the challenge of staying updated in a rapidly evolving field. Wang, himself, reads extensively to keep pace with innovations, underscoring the importance of staying curious and adaptable in today’s tech landscape.
Learn more from The New Stack about Entrepreneurship for Engineers:
How to Grow into Leadership
Engineering Leaders: Switch to Wartime Management Now
How Teleport’s Leader Transitioned from Engineer to CEO
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1489: How to Find Success with Code Reviews
Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of Looks Good to Me: Actionable Advice for Constructive Code Review. In a recent conversation with The New Stack, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego. Ambiguity arises when the goals or expectations for the code are unclear, leading to miscommunication and rework. Tacke emphasizes the need for clarity and explicit communication throughout the review process. Subjectivity, the second challenge, can derail reviews when personal preferences overshadow objective evaluation. Reviewers should justify their suggestions based on technical merit rather than opinion. Finally, ego can get in the way, with developers feeling attached to their code. Both reviewers and submitters must check their egos to foster a constructive dialogue. Tacke encourages programmers to first review their own work, as self-checks can enhance the quality of the code before it reaches the reviewer. Ultimately, code reviews can improve code quality, mentor developers, and strengthen team knowledge.
Learn more from The New Stack about code reviews:
The Anatomy of Slow Code Reviews
One Company Rethinks Diff to Cut Code Review Times
How Good Is Your Code Review Process?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1488: How Apache Iceberg and Flink Can Ease Developer Pain
In this New Stack Makers episode, Adi Polak, Director, Advocacy and Developer Experience Engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise due to the "schema evolution" from upstream operational changes impacting downstream analytics, creating complexity for developers. Apache Iceberg and Flink help mitigate these issues. Iceberg, a table format developed by Netflix, optimizes querying by managing file relationships within a data lake, reducing processing time and errors. It has been widely adopted by major companies like Airbnb and LinkedIn. Apache Flink, a versatile data processing framework, is driving two key trends: shifting some batch processing tasks into stream processing and transitioning microservices into Flink streaming applications. This approach enhances system reliability, lowers latency, and meets customer demands for real-time data, like instant flight status updates. Together, Iceberg and Flink streamline data infrastructure, addressing developer pain points and improving efficiency.
Learn more from The New Stack about Apache Iceberg and Flink:
Unfreeze Apache Iceberg to Thaw Your Data Lakehouse
Apache Flink: 2023 Retrospective and Glimpse into the Future
4 Reasons Why Developers Should Use Apache Flink
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1487: How Heroku Is Positioned to Help Ops Engineers in the GenAI Era
Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an "infinite number of interns" to development teams, noting that while they accelerate code writing, they don't yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes' success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve. Heroku, acquired by Salesforce in 2010, is positioned to benefit from these changes by helping teams transition to more automated systems. Wise highlighted Heroku’s strategic bet on Postgres, a database technology that's gaining traction, especially for GenAI workloads. He also discussed Heroku's ongoing migration to Kubernetes, aligning with industry standards to enhance its platform.
Learn more from The New Stack about Heroku:
The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost
Kubernetes and the Next Generation of PaaS
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1486: OpenJS Foundation’s Leader Details the Threats to Open Source
After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, "Jia Tan," planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories. The XZ vulnerability highlighted broader concerns about the security of open source software, particularly in projects with only a single maintainer. Despite receiving a significant grant from Germany's Sovereign Tech Fund, the foundation remains under-resourced, with just two full-time staffers supporting 35 projects. Ginn urged companies that rely on open source software to invest in it by hiring maintainers, ensuring these critical projects are properly supported.
Learn more from The New Stack about open source vulnerability:
Linux xz Backdoor Damage Could Be Greater Than Feared
Unzipping the XZ Backdoor and Its Lessons for Open Source
Linux xz and the Great Flaws in Open Source
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1485: What’s the Future for Software Developers?
Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers. She emphasized that generative AI is democratizing software development, making it more accessible and enabling developers to focus on creative tasks rather than the minutiae of coding. Bailey's experience at Google highlights this shift, as she now acts more as a reviewer and overseer of AI-generated code. She sees GenAI not as a replacement for developers but as a tool to accelerate their creativity and tackle longstanding backlogs. Bailey believes the key is ensuring everyone understands how to effectively apply generative AI to their work.
Learn more from The New Stack about the future of development:
7 Ways to Future Proof Your Developer Job in the Age of AI
The Future of Developer Careers
4 Forecasts for the Future of Developer Relations
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1484: Want to Create Software Sustainably? Anne Currie’s Got Ideas
Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored Building Green Software, emphasizing the tech industry's role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer's energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security. Currie also emphasizes architecting systems to operate on renewable power and draws parallels between managing variable grid power and internet bandwidth. Using examples like video conferencing, she illustrates how software can adapt to fluctuating resources. The episode also touches on potential pitfalls like greenwashing and the challenges in accurately naming concepts like energy proportionality.
Learn more from The New Stack about sustainability:
Sustainability: How Did Amazon, Azure, Google Perform in 2023?
Sustainability Focus: Cloud Efficiency, Not Carbon Emissions
Developers Should Press Cloud Providers on Sustainability
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1483: VMware’s Golden Path
In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today's economic climate. Watters highlights this in the Broadcom report State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security. Watters, a pioneer in platform as a service since 2009, helped establish Cloud Foundry and now drives VMware Tanzu. Tanzu's golden operations offer standardized, consistent processes across platforms, crucial for efficiency and security. Watters advocates for minimal DIY in favor of operational consistency, providing commands for building, deploying, and scaling applications. Tanzu’s focus is on integrating AI to enhance user interfaces and data access, impacting platform engineering significantly in the coming years. This integration aims to offer a better developer experience while maintaining security and efficiency.
Learn more from The New Stack about golden paths:
Golden Paths Start with a Shift Left
Platform Engineering Not Working Out? You’re Doing It Wrong.
How to Pave Golden Paths That Actually Go Somewhere
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1482: Setting Microservices Up for Success: Real-World Advice
Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book "Enabling Microservices Success" and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000. This transformation required merging IT organizations, investing in automation, and fostering team autonomy. Wells emphasizes that successful microservices adoption depends not only on developer expertise but also on organizational structures. She highlights the importance of continuous delivery and proactive communication, especially during critical periods like major news events. Additionally, she discusses the evolving roles of senior engineers and the need for flexibility in defining architectural responsibilities. Wells advocates for "engineering enablement" over "platform teams" to better support effective service management and evolution.
Learn more from The New Stack about enabling successful outcomes of microservices:
What Is Microservices Architecture?
4 Strategies for Migrating Monolithic Apps to Microservices
Continuous Improvement Metrics for Scaling Engineering Teams
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1473: How OpenTofu Happened — and What’s Next?
In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp's license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn't be altered. Maislish highlighted the importance of distinguishing between vendor-backed and foundation-backed open source projects to avoid sudden licensing changes. Before coding, the community created a manifesto, gathering significant support and pledges, but received no response from HashiCorp. Consequently, they proceeded with the fork and development of OpenTofu. Despite accusations of intellectual property theft from HashiCorp, OpenTofu gained traction and was adopted by organizations like Oracle. The community continues to prioritize user feedback through GitHub.
Learn more from The New Stack about OpenTofu:
OpenTofu vs. HashiCorp Takes Center Stage at Open Source Summit
OpenTofu Amiable to a Terraform Reconciliation
OpenTofu 1.6 General Availability: Open Source Infrastructure as Code
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1481: The Fediverse: What It Is, Why It’s Promising, What’s Next
In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative. Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect. Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms. The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. This decentralization fosters community management and better control over social interactions. Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks.
Learn more from The New Stack about the fediverse:
FediForum Showcases New Fediverse Apps and Developer Network
One Login: Towards a Single Fediverse Identity on ActivityPub
Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
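To ground what federation looks like at the protocol level, here is a minimal sketch of the kind of JSON object ActivityPub servers exchange, built as a Python dict; the actor URLs are invented for illustration.

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a "Note" -- the shape of
# object that federated servers such as Mastodon pass between instances.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/evan",          # hypothetical actor
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "attributedTo": "https://social.example/users/evan",
    },
}

# Delivery is an HTTP POST of this JSON to each follower's inbox endpoint.
print(json.dumps(activity, indent=2))
```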
Ep 1480: Why Framework’s ‘Right to Repair’ Ethos Is Gaining Fans
In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the "right to repair" movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily. Hartley, interviewed by Chris Pirillo, highlights how Framework’s approach helps reduce electronic waste, likening obsolete electronics to a form of "technical debt." He shares his personal struggle with old devices, like an ASUS Eee, illustrating the need for repairable technology. Hartley also describes his role in fostering a DIY community, collaborating closely with Fedora Linux maintainers and creating user-friendly support scripts. Framework’s community is actively contributing to the platform, developing new features and hardware integrations. The episode underscores the growing momentum of the right to repair movement, advocating for consumer empowerment and environmental sustainability.
Learn more from The New Stack about repairing and upgrading devices:
New Linux Laptops Come with Right-to-Repair and More
Troubling Tech Trends: The Dark Side of CES 2024
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1479: What’s the Future of Distributed Ledgers?
Blockchain technology continues to drive innovation despite declining hype, with Distributed Ledgers (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams. Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insights into DLT’s 2.0 era, listen to the full episode.
Learn more from The New Stack about Distributed Ledgers (DLTs):
IOTA Distributed Ledger: Beyond Blockchain for Supply Chains
Why I Changed My Mind About Blockchain
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1478Linux xz and the Great Flaws in Open Source
The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections. The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in its rapid response to such threats. Learn more from The New Stack about Linux xz utils: "Linux xz Backdoor Damage Could Be Greater Than Feared," "Unzipping the XZ Backdoor and Its Lessons for Open Source," and "The Linux xz Backdoor Episode: An Open Source Mystery." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1477How Amazon Bedrock Helps Build GenAI Apps in Python
Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python's ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn't necessarily require deep data science expertise or Python. Amazon Bedrock, AWS's generative AI framework introduced in September, exemplifies this flexibility by allowing developers to use any programming language via an API-based service. Bedrock supports various languages like Python, C, C++, and Java, enabling developers to leverage large language models without intricate knowledge of machine learning. It also integrates well with open-source libraries such as LangChain and LlamaIndex. Debnath recommends visiting the AWS community platform and GitHub for resources on getting started with Bedrock. The episode includes a demonstration of Bedrock's capabilities and its benefits for Python users. Learn more from The New Stack about Amazon Bedrock: "Amazon Bedrock Expands Palette of Large Language Models," "Build a Q&A Application with Amazon Bedrock and Amazon Titan," and "10 Key Products for Building LLM-Based Apps on AWS." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
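Because Bedrock is exposed as an API-based service, calling a model from Python takes only a few lines of boto3. The sketch below is illustrative rather than definitive: the region, model ID, and request schema are assumptions (each model family on Bedrock defines its own body format), so adjust them for the model you actually enable.

```python
# Minimal sketch: invoking a text model through Amazon Bedrock with boto3.
# Assumptions: boto3 is installed, AWS credentials are configured, Bedrock is
# enabled in us-east-1, and the Titan Text model ID below is available to you.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "inputText": "Summarize what a time series database is in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 128, "temperature": 0.2},
}

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",   # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```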
Ep 1476How to Start Building in Python with Amazon Q Developer
Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few. Peck introduces Amazon Q, a generative AI coding assistant launched by AWS in November, and demonstrates its capabilities. The assistant plugs into an integrated development environment (IDE) such as VS Code and assists in explaining, refactoring, fixing, and even developing new features for Python codebases. Peck emphasizes Amazon Q's ability to surface best practices from extensive AWS documentation, making it easier for developers to navigate and apply them. Amazon Q Developer is available for free to users with an AWS Builder ID, without requiring an AWS cloud account. Peck's demo showcases how this tool can simplify and enhance the coding experience, especially for those handling complex or unfamiliar codebases. Learn more from The New Stack about Amazon Q and Amazon's generative AI strategy: "Amazon Q, a GenAI to Understand AWS (and Your Business Docs)," "Decoding Amazon's Generative AI Strategy," and "Responsible AI at Amazon Web Services: Q&A with Diya Wynn." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1475Who’s Keeping the Python Ecosystem Safe?
Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title "code gardener," reflecting his role in maintaining and securing open source projects. Recorded at PyCon US, Fiedler explains his task of "pulling the weeds" in code, handling the unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment. One of Fiedler's significant initiatives was enforcing mandatory two-factor authentication (2FA) for all PyPI user accounts by January 1, following a community awareness campaign. This transition was smooth, thanks to proactive outreach. Additionally, the foundation collaborates with security researchers and the public to report and address malicious packages. In late 2023, a security audit by Trail of Bits, funded by the Open Technology Fund, identified medium-severity vulnerabilities that were quickly resolved, improving PyPI's overall security. More details on Fiedler's work are available in the full interview video. Learn more from The New Stack about PyPI: "PyPI Strives to Pull Itself Out of Trouble," "How Python Is Evolving," and "Poisoned Lolip0p PyPI Packages." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1474How Training Data Differentiates Falcon, the LLM from the UAE
The name "Falcon" for the UAE's large language model (LLM) symbolizes the national bird's qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI's rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this episode of The New Stack Makers, Dr. Hakim Hacid, executive director and acting chief researcher at the Technology Innovation Institute, emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry. Last June, Falcon introduced a 40-billion-parameter model that outperformed LLaMA-65B, with smaller models enabling local inference without the cloud. The latest 180-billion-parameter model, trained on 3.5 trillion tokens, illustrates Falcon's commitment to quality and efficiency over sheer size. Falcon's distinctiveness lies in its data quality, utilizing over 80% RefinedWeb data, based on CommonCrawl, which ensures cleaner and deduplicated data, resulting in high-quality outcomes. This data-centric approach, combined with powerful computational resources, sets Falcon apart in the AI landscape. Learn more from The New Stack about open source AI: "Open Source Initiative Hits the Road to Define Open Source AI," "Linus Torvalds on Security, AI, Open Source and Trust," and "Transparency and Community: An Open Source Vision for AI." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
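Since the Falcon weights are openly released, the smaller checkpoints can be run locally with standard tooling. The following is a minimal sketch using Hugging Face Transformers; the model ID, dtype, and generation settings are assumptions for illustration, and the larger checkpoints need substantially more GPU memory than a workstation typically offers.

```python
# Minimal sketch: local text generation with an openly released Falcon checkpoint.
# Assumptions: transformers and torch are installed, and the machine has enough
# memory for the chosen model (the 7B variant is the practical starting point).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b",          # illustrative model ID
    torch_dtype=torch.bfloat16,        # halves memory use vs. float32
    device_map="auto",                 # spread across available GPUs/CPU
)

output = generator(
    "Open source language models matter because",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```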
Ep 1471Out with C and C++, In with Memory Safety
Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as "the Joker to the Batman" by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers. The White House has emphasized memory safety, advocating for the adoption of memory-safe programming languages and better software measurability. The Office of the National Cyber Director (ONCD) noted that languages like C and C++ lack memory safety traits and are prevalent in critical systems. It recommends using memory-safe languages, such as Java, C#, and Rust, to develop secure software. Memory safety is particularly crucial for the US government due to the high stakes, especially in space exploration, where reliability standards are exceptionally stringent. Dash underscores the importance of resilience and predictability in missions that may outlast their creators, necessitating rigorous memory safety practices. Learn more from The New Stack about memory safety: "White House Warns Against Using Memory-Unsafe Languages," "Can C++ Be Saved? Bjarne Stroustrup on Ensuring Memory Safety," and "Bjarne Stroustrup's Plan for Bringing Safety to C++." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1472How Open Source and Time Series Data Fit Together
In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxDB, a leading open-source time series database technology by InfluxData, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases. Brad Bebee, general manager of Amazon Neptune and Amazon Timestream, highlighted the challenges faced by customers managing open-source InfluxDB instances, despite appreciating its API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. Paul Dix, co-founder and CTO of InfluxData, joined Bebee and highlighted InfluxDB's prized utility in tracking measurements, metrics, and sensor data in real time. AWS's Timestream complements this by providing managed time series database services, including Timestream for LiveAnalytics and Timestream for InfluxDB. Bebee emphasized the growing relevance of time series data and customers' preference for managed open-source databases, aligning with AWS's strategy of offering such services. This partnership aims to simplify database management and enhance performance for customers utilizing time series databases. Learn more from The New Stack about time series databases: "What Are Time Series Databases, and Why Do You Need Them?," "Amazon Timestream: Managed InfluxDB for Time Series Data," and "Install the InfluxDB Time-Series Database on Ubuntu Server 22.04." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
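As a rough illustration of what "time-stamped data from servers and sensors" looks like in practice, the sketch below writes and queries a single metric with the InfluxDB 2.x Python client. The endpoint, token, org, and bucket are placeholder assumptions; a managed service would supply its own connection details.

```python
# Minimal sketch: writing and querying time-stamped points with the InfluxDB 2.x
# Python client. The URL, token, org, and bucket below are placeholder
# assumptions; a managed instance would provide its own endpoint and credentials.
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write one measurement: CPU usage for a host, stamped with the current time.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = (
    Point("cpu")
    .tag("host", "server01")
    .field("usage_percent", 42.5)
    .time(datetime.now(timezone.utc))
)
write_api.write(bucket="metrics", record=point)

# Query the last hour of CPU usage back with Flux.
query = '''
from(bucket: "metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
'''
for table in client.query_api().query(query):
    for record in table.records:
        print(record.get_time(), record.get_value())
```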
Ep 1470Postgres is Now a Vector Database, Too
Amazon Web Services (AWS) has added support for pgvector, an open-source extension that brings vector capabilities to PostgreSQL databases for generative AI workloads. Sirish Chandrasekaran, general manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector allows users to store vector types in Postgres and perform similarity searches, a key feature for generative AI applications. The extension, developed by Andrew Kane and offered by AWS in services like Aurora and RDS, originally used an indexing scheme called IVFFlat but has since adopted Hierarchical Navigable Small World (HNSW) indexing for improved query performance. HNSW takes a graph-based approach that makes finding nearest neighbors efficient, which is crucial for generative AI tasks. AWS emphasizes customer feedback and continuous innovation in the rapidly evolving field of generative AI, aiming to stay responsive and adaptive to customer needs. Learn more from The New Stack about vector databases: "Top 5 Vector Database Solutions for Your AI Project," "Vector Databases Are Having a Moment - A Chat with Pinecone," and "Why Vector Size Matters." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
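To make the similarity-search idea concrete, here is a minimal sketch of pgvector from Python: a vector column, an HNSW index, and a nearest-neighbor query using the extension's distance operator. The connection string is a placeholder assumption, and the three-dimensional vectors stand in for real embeddings.

```python
# Minimal sketch of pgvector usage: store embeddings in Postgres, build an HNSW
# index, and run a nearest-neighbor search. The connection string is a
# placeholder assumption; the server must have the pgvector extension available.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/demo")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(3)   -- real apps use 768+ dimensions from an embedding model
    );
""")
# HNSW index (pgvector >= 0.5.0) speeds up approximate nearest-neighbor queries.
cur.execute("""
    CREATE INDEX IF NOT EXISTS documents_embedding_idx
    ON documents USING hnsw (embedding vector_l2_ops);
""")

cur.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s), (%s, %s);",
    ("first doc", "[0.1, 0.2, 0.3]", "second doc", "[0.9, 0.8, 0.7]"),
)

# '<->' is pgvector's L2 distance operator; the closest rows come back first.
cur.execute(
    "SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 1;",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchone()[0])

conn.commit()
cur.close()
conn.close()
```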
Ep 1469Valkey: A Redis Fork with a Future
Valkey, a Redis fork supported by the Linux Foundation, challenges Redis' new license. In this episode, recorded at Open Source Summit 2024 in Seattle, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, staff software engineer at Google, and Dmitry Polyakovsky, consulting member of technical staff at Oracle, highlight concerns about the shift to a more restrictive license. Despite Redis' free license for end users, many contributors may not support it. Valkey, with significant industry backing, prioritizes continuity and a smooth transition for Redis users. AWS, along with Google and Oracle maintainers, emphasizes the importance of open, permissive licenses for large tech companies. Valkey plans incremental updates and module development in Rust to enhance functionality and attract more engineers. The focus remains on compatibility, continuity, and consolidating client behaviors for a robust ecosystem. Learn more from The New Stack about the Valkey project and changes to open source licensing: "Linux Foundation Backs 'Valkey' Open Source Fork of Redis," "Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services," and "HashiCorp's Licensing Change Is Only the Latest Challenge to Open Source." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
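Compatibility and continuity, in practice, mean that existing Redis client code should run unmodified against a Valkey server. Below is a minimal sketch assuming a Valkey instance on the default port and the redis-py client; nothing Valkey-specific is required.

```python
# Minimal sketch: an unmodified redis-py client talking to a Valkey server.
# Assumes a Valkey instance is running locally on the default port 6379
# (for example, started via a Valkey container image).
import redis

client = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Standard Redis commands work unchanged because Valkey speaks the same protocol.
client.set("greeting", "hello from valkey")
print(client.get("greeting"))          # -> "hello from valkey"

client.lpush("recent_visitors", "alice", "bob")
print(client.lrange("recent_visitors", 0, -1))
```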
Ep 1468Kubernetes Gets Back to Scaling with Virtual Clusters
A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, replacing them with lightweight, quickly deployable containers. Loft Labs' open-sourced vCluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters, which can take over 30 minutes to start in services like Amazon EKS or Google GKE. The integration of vCluster into Rancher, announced at KubeCon Paris, enables users to manage virtual clusters alongside real clusters seamlessly. This innovation addresses challenges faced by companies managing multiple applications and clusters, advocating for a multi-tenant cluster approach for improved sharing and security, contrary to the trend of isolated single-tenant clusters that emerged due to the complexities of cluster sharing within Kubernetes. Learn more from The New Stack about virtual clusters: "Vcluster to the Rescue," "Navigating the Trade-Offs of Scaling Kubernetes Dev Environments," and "Managing Kubernetes Clusters for Platform Engineers." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1466How Giant Swarm Is Helping to Support the Future of Flux
When Weaveworks, the company that pioneered "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, Puja Abbassi, Giant Swarm's VP of product, reassured Alex Williams, founder and publisher of The New Stack, at Open Source Summit in Paris that Flux's maintenance is secure. Major companies, including Microsoft Azure and GitLab, have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects such as infrastructure-as-code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption. Learn more from The New Stack about Flux: "End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence" and "Why Flux Isn't Dying After Weaveworks." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Ep 1464AI, LLMs and Security: How to Deal with the New Threats
The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who find loopholes or vulnerabilities. On this episode of The New Stack Makers, Chris Pirillo, tech evangelist, and Lance Seidman, backend engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems. One example highlighted was malicious AI models on Hugging Face that exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners that check every file for threats. However, human vigilance remains crucial in identifying and addressing potential exploits. Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches. Learn more from The New Stack about AI and security: "Artificial Intelligence: Stopping the Big Unknown in Application, Data Security," "Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023," and "Will Generative AI Kill DevSecOps?" Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
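The pickle risk described above comes from the fact that unpickling can execute arbitrary code: an object's __reduce__ method tells pickle what to call at load time. The harmless sketch below (it only echoes a string) shows the mechanism; the practical takeaways are to never unpickle untrusted files and to prefer safer weight formats where possible.

```python
# Minimal, harmless demonstration of why unpickling untrusted files is dangerous:
# pickle calls whatever __reduce__ tells it to call when the data is loaded.
import os
import pickle

class NotAModel:
    def __reduce__(self):
        # On unpickling, pickle executes os.system("echo ...").
        # A malicious "model file" could run any command here instead.
        return (os.system, ('echo "arbitrary code ran during pickle.load()"',))

payload = pickle.dumps(NotAModel())

# The victim thinks they are just loading model weights...
pickle.loads(payload)   # ...but the embedded command executes immediately.
```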
Ep 1465How Kubernetes Faces a New Reality with the AI Engineer
The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly. Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, that highlighted the challenges data-related tasks face on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led Kubernetes and the Cloud Native Computing Foundation to initially overlook data science. Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies. Learn more from The New Stack about data development and DevOps: "AI Will Drive Streaming Data Use - But Not Yet, Report Says," "The Paradigm Shift from Model-Centric to Data-Centric AI," and "AI Development Needs to Focus More on Data, Less on Models." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.