
Test & Code
237 episodes — Page 3 of 5
Ep 137: Become an Author - Matt Harrison interviews Brian Okken
Matt Harrison, author of many Python books, is putting together a course, Effective Book Authoring, to help other people write and publish books. As part of this course, he's including interviews with people who have already written books, including me. This is that interview.
We discuss:
- Why I wrote "Python Testing with pytest"
- Self publishing vs working with a publisher
- The writing, editing, and publishing process
- Writing format
- Book promotion
- Advice to other writers
Special Guest: Matt Harrison.
Links:
- Effective Book Authoring — Matt's course
- Python Testing with pytest - Simple, Rapid, Effective, and Scalable
Ep 136: Wearable Technology - Sophy Wong
Wearable technology is not just smart consumer devices like watches and activity trackers. Wearable tech also includes one-off projects by designers, makers, and hackers, and there are more and more people producing tutorials on how to get started. Wearable tech is also a great way to get both kids and adults excited about coding, electronics, and engineering skills in general.
Sophy Wong is a designer who makes really cool stuff using code, technology, costuming, soldering, and even jewelry techniques to get tech onto the human body. Sophy joins the show to answer my many questions about getting started safely with wearable tech.
Some of the questions and topics:
- Can I wash my clothing if I've added tech to it?
- Is there any danger in wearing technology or building wearable tech?
- Are there actual wires, cables, or conductive thread in the fabric and textiles of some wearable tech projects?
- What's a good starter project? Especially if I want to do a wearable tech project with my kids?
- Dealing with stretch in clothing and non-bendy electronics.
- Some questions around the Sophy Wong and HackSpace "Wearable Tech Projects" book.
- How did you get into wearable tech?
- Do you have a favorite project?
- Can I get into wearable tech if I don't know how to code or solder?
- Are these projects accessible to people with limited budgets?
- Making projects so you can reuse the expensive bits on multiple projects.
Special Guest: Sophy Wong.
Links:
- sophywong.com
- Wearable Tech Projects book — The wearable technology book
- costumes — The dress is on this page, as well as the Ghostbuster pack and costume.
- spacesuit
- Music video with Sophy's space suit
- Kobakant tutorials
Ep 135: Speeding up Django Test Suites - Adam Johnson
All test suites start fast. But as you grow your set of tests, each test adds a little bit of time to the suite. What can you do to keep test suites fast? Some things, like parallelization, are applicable to many domains. What about, for instance, Django applications? Well, Adam Johnson has thought about it a lot, and is here to tell us how we can speed up our Django test suites.
Topics include:
- parallelizing tests
- moving from disk to memory
- using fake data and factory functions
- targeted mocking
Special Guest: Adam Johnson.
Links:
- Speed Up Your Django Tests — the book by Adam Johnson
- Kukicha — "or twig tea, ..., is a Japanese blend made of stems, stalks, and twigs."
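One of the speed-up techniques mentioned above, factory functions, keeps tests fast and readable by centralizing object creation with cheap, valid defaults, so each test spells out only the fields it cares about. A framework-free sketch — the User class and its fields are invented for illustration; in real Django projects the factory_boy library is a popular way to do this:

```python
import itertools

# Hypothetical model stand-in, invented for this sketch.
class User:
    def __init__(self, username, email, is_active):
        self.username = username
        self.email = email
        self.is_active = is_active

_seq = itertools.count(1)

def make_user(**overrides):
    """Factory function: fill in sensible defaults, let tests override."""
    n = next(_seq)
    defaults = {
        "username": "user%d" % n,
        "email": "user%d@example.com" % n,
        "is_active": True,
    }
    defaults.update(overrides)
    return User(**defaults)

def test_inactive_user_flag():
    # Only the field relevant to this test is spelled out.
    user = make_user(is_active=False)
    assert user.is_active is False
```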
Ep 134: Business Outcomes and Software Development - Benjamin Harding
Within software projects, there are lots of metrics we could measure. But which ones really matter? Instead of a list, Benjamin Harding shares with us a way of thinking about business outcomes that can help us with everyday decision making.
We talk about:
- Business outcomes vs vanity metrics
- As a developer, how do you keep business outcomes in mind
- Thinking about customer value all the time
- Communicating decisions and options in terms of costs and impact on business outcomes
- Company culture and its role in reinforcing a business outcome mindset
- And even the role of team lead as impact multiplier
I really enjoyed this conversation. But I admit that at first, I didn't realize how important this is to all software development. Metrics are front and center in a web app. But what about a service, or an embedded system with no telemetry? It still matters, maybe even more so. Developers face little and big decisions every day that have an impact on costs and benefits with respect to customer value and business outcomes, even if they're difficult to measure.
Special Guest: Benjamin Harding.
Ep 133: Major League Hacking - Jon Gottfried
Hackathons have been spreading around the world, many at university campuses. Major League Hacking, MLH, has been encouraging and helping hackathons.
Hacking can be thought of as tinkering. Taking things apart and putting them back together is an interesting experience. There's always been some of this as part of software culture.
The people at Major League Hacking have taken this to a whole new level, bringing together tech creators who enjoy playing around with and creating new technology, on campuses, and now in virtual spaces, all over the world.
Jon Gottfried, one of the cofounders of Major League Hacking, joins the show to talk about:
- hacker meetups and events
- hackathons
- what it's like to go to a hackathon
- how to help out with hackathons as an experienced engineer, even virtually as a mentor
- hackathons continuing virtually during the pandemic
- internships and fellowships on open source projects to help students gain experience, even during the pandemic
- the MLH approach to internships, giving interns a support group, including peers, mentors, and project maintainers
- and MLH itself
Special Guest: Jon Gottfried.
Links:
- Major League Hacking
Ep 132: mocking in Python - Anna-Lena Popkes
Using mock objects during testing in Python.
Anna-Lena joins the podcast to teach us about mocks and using unittest.mock objects during testing.
We discuss:
- the different styles of using mocks
- pros and cons of mocks
- dependency injection
- adapter pattern
- mock hell
- magical universe
- and much more
Special Guest: Anna-Lena Popkes.
Links:
- Personal webpage of Anna-Lena Popkes
- Magical Universe — Awesome Python features explained using the world of magic
- Test & Code 102: Cosmic Python, TDD, testing and external dependencies — The episode where Harry Percival discusses mocking.
- Talk: Harry Percival - Stop Using Mocks (for a while) - YouTube
- unittest.mock Autospeccing
- Mock Hell Talk (45 min version) - Edwin Jung - PyCon 2019
- Mock Hell Talk (30 min version) - Edwin Jung - PyConDE PyCon Estonia
- KI macht Schule!
- Talk Python #186: 100 Days of Python in a Magical Universe
Ep 131: Test Smarter, Not Harder
Some people avoid writing tests. Some drudge through it painfully. There is a better way. In this episode, I'm going to share some advice from Luke Plant on how to "Test Smarter, Not Harder".
Links:
- Test smarter, not harder - lukeplant.me.uk — The original article by Luke
Ep 130: virtualenv activation prompt consistency across shells - an open source dev and test adventure - Brian Skinn
virtualenv supports six shells: bash, csh, fish, xonsh, cmd, and posh. Each handles prompts slightly differently. Although the virtualenv custom prompt behavior should be the same across shells, Brian Skinn noticed inconsistencies. He set out to fix those inconsistencies. That was the start of an adventure in open source collaboration, shell prompt internals, difficult test problems, and continuous integration quirks.
Brian initially noticed that on Windows cmd, a space was added between a prefix defined by --prompt and the rest of the prompt, whereas on bash no space was added.
For reference, there were/are three nominal virtualenv prompt modification behaviors, all of which apply to the prompt changes made at the time of virtualenv activation:
- If the environment variable VIRTUAL_ENV_DISABLE_PROMPT is defined and non-empty at activation time, do not modify the prompt at all. Otherwise:
- If the --prompt argument was supplied at creation time, use that argument as the prefix to apply to the prompt; or,
- If the --prompt argument was not supplied at creation time, use the default prefix: the environment folder name surrounded by parentheses, with a trailing space after the closing paren.
Special Guest: Brian Skinn.
Links:
- virtualenv
- Initial issue that started the adventure
- final PR
- pent: pent Extracts Numerical Text — Mini-language driven parser for structured numerical data
- Lightning talk on pent
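The three activation-time rules above can be sketched as a small function. This is an illustrative restatement of the behavior described in the episode, not virtualenv's actual implementation (which lives in per-shell activation scripts); the function name and parameters are invented:

```python
import os

def prompt_prefix(env_folder_name, custom_prompt=None, environ=os.environ):
    """Sketch of the virtualenv prompt-modification rules described above.

    Returns the string to prepend to the shell prompt at activation,
    or None when the prompt should be left untouched.
    """
    # Rule 1: a defined, non-empty VIRTUAL_ENV_DISABLE_PROMPT
    # suppresses any prompt change.
    if environ.get("VIRTUAL_ENV_DISABLE_PROMPT"):
        return None
    # Rule 2: an explicit --prompt value is used as the prefix.
    if custom_prompt is not None:
        return custom_prompt
    # Rule 3: default is the env folder name in parentheses,
    # with a trailing space after the closing paren.
    return "({}) ".format(env_folder_name)
```

The inconsistencies Brian found were exactly in how the per-shell scripts glued this prefix onto the existing prompt.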
Ep 129: How to Test Anything - David Lord
I asked people on Twitter to fill in "How do I test _____?" to find out what people want to know how to test. Lots of responses. David Lord agreed to answer them with me. In the process, we come up with lots of great general advice on how to test just about anything.
Specific questions people asked:
- What makes a good test?
- How do you test web app performance?
- How do you test cookiecutter templates?
- How do I test my test framework?
- How do I test permission management?
- How do I test SQLAlchemy models and pydantic schemas in a FastAPI app?
- How do I test warehouse ETL code?
- How do I test and mock GPIO pins on hardware for code running MicroPython on a device?
- How do I test PyQt apps?
- How do I test web scrapers?
- Is it best practice to put static HTML in your test directory, or just snippets stored in string variables?
- What's the best way to test server/client API contracts?
- How do I test a monitoring tool?
We also talk about:
- What is the Flask testing philosophy?
- What do Flask tests look like?
- Flask and Pallets using pytest
- Code coverage
Some of the resulting testing strategies:
- Set up some preconditions. Run the function. Get the result.
- Don't test external services.
- Do test external service failures.
- Don't test the frameworks you are using.
- Do test your use of a framework.
- Use open source projects to learn how something similar to your project tests things.
- Focus on your code.
- Focus on testing your new code.
- Try to architect your application such that actual GUI testing is minimal.
- Split up a large problem into smaller parts that are easier to test.
- Nail down as many parts as you can.
Special Guest: David Lord.
Ep 128: pytest-randomly - Adam Johnson
Software tests should be order independent. That means you should be able to run them in any order, or run them in isolation, and get the same result.
However, system state often gets in the way, and order dependence can creep into a test suite. One way to fight against order dependence is to randomize test order, and with pytest, we recommend the plugin pytest-randomly to do that for you.
The developer who started pytest-randomly and continues to support it is Adam Johnson, who joins us today to discuss pytest-randomly and another plugin he also wrote, called pytest-reverse.
Special Guest: Adam Johnson.
Links:
- pytest-randomly: pytest plugin to randomly order tests and control random.seed
- pytest-reverse: pytest plugin to reverse test order
- Empirically revisiting the test independence assumption
- pytest-xdist
- factory_boy
- Faker
- NumPy
- Hyrum's Law
Ep 127: WFH, WTF? - Tips and Tricks for Working From Home - Reuven Lerner & Julian Sequeira
Many people who are not used to working from home are now doing so, or at least are working from home more than they ever did before. That's definitely true for me. Even though I've been working from home since March, I wanted some tips from people who have been doing it longer.
Julian Sequeira, of PyBites fame, has been working from home for about a year. Reuven Lerner, an amazing Python trainer, has been working from home for much longer.
We originally had a big list of WFH topics. But we had so much fun with the tips and tricks part that that's pretty much the whole episode.
But there are lots of great tips and tricks, so I'm glad we focused on that.
Special Guests: Julian Sequeira and Reuven Lerner.
Links:
- PyBites — Julian's site for teaching Python
- Teaching Python and data science around the world — Reuven Lerner
- Bonbon - Wikipedia
- Test & Code Mailing List — Join for your chance to win a free course from Talk Python Training. One course given away every week for 6 weeks.
Ep 126: Data Science and Software Engineering Practices (and Fizz Buzz) - Joel Grus
Researchers and others using data science and software need to follow solid software engineering practices. This is a message that Joel Grus has been promoting for some time.
Joel joins the show this week to talk about data science, software engineering, and even Fizz Buzz.
Topics include:
- Software engineering practices and data science
- Difficulties with Jupyter notebooks
- Code reviews on experiment code
- Unit tests on experiment code
- Finding bugs before doing experiments
- Tests for data pipelines
- Tests for deep learning models
- Showing researchers the value of tests by showing the bugs found that wouldn't have been found without them
- "Data Science from Scratch" book
- Showing testing while teaching data science
- "Ten Essays on Fizz Buzz" book: meditations on Python, mathematics, science, engineering, and design
- Testing Fizz Buzz
- Different algorithms and solutions to an age-old interview question
- If not Fizz Buzz, what makes a decent coding interview question
- pytest
- hypothesis
- Math requirements for data science
Special Guest: Joel Grus.
Links:
- Ten Essays on Fizz Buzz (with discount) by Joel Grus
- I don't like notebooks. (presentation)
Ep 125: pytest 6 - Anthony Sottile
pytest 6 is out. Specifically, 6.0.1, as of July 31. And there's lots to be excited about. Anthony Sottile joins the show to discuss features, improvements, documentation updates, and more.
Full release notes / changelog
Some of what we talk about:
How to update (at least, how I do it):
- Run your test suites with 5.4.3, or whatever the last version you were using.
- Update to 6.
- Run again. Same output? Probably good.
- If there are any warnings, maybe fix those.
- You can also run with pytest -W error to turn warnings into errors.
- Then find out all the cool stuff you can do now.
New Features:
- pytest now supports pyproject.toml files for configuration. But remember, TOML syntax is different from ini files; mostly, quotes are needed.
- pytest now includes inline type annotations and exposes them to user programs. Most of the user-facing API is covered, as well as internal code.
- New command-line flags --no-header and --no-summary.
- A warning is now shown when an unknown key is read from a config INI file. The --strict-config flag has been added to treat these warnings as errors.
- New required_plugins configuration option allows the user to specify a list of plugins, including version information, that are required for pytest to run. An error is raised if any required plugins are not found when running pytest.
Improvements:
- You can now pipe output to things like less and head that close the pipe passed to them. Thank you!!!
- Improved precision of test duration measurement. Use --durations=10 -vv to capture and show durations.
- Rich comparison for dataclasses and attrs-classes is now recursive.
- pytest --version now displays just the pytest version, while pytest --version --version displays more verbose information including plugins.
- --junitxml now includes the exception cause in the message XML attribute for failures during setup and teardown.
Improved Documentation:
- Added a note about --strict and --strict-markers and the preference for the latter.
- Explained indirect parametrization and markers for fixtures.
Bug Fixes, Deprecations, Trivial/Internal Changes
Breaking Changes you might need to care about before upgrading:
- PytestDeprecationWarning are now errors by default. Check the deprecations and removals page if you are curious.
- -k and -m internals were rewritten to stop using eval(). This results in a few slight changes but overall makes them much more consistent.
- testdir.run().parseoutcomes() now always returns the parsed nouns in plural form. I'd say that's an improvement.
Special Guest: Anthony Sottile.
Links:
- pytest Changelog / Release Notes
- Deprecations and Removals — pytest documentation
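As a small illustration of the pyproject.toml support mentioned above: pytest reads its settings from a [tool.pytest.ini_options] table, and unlike ini files, values need real TOML syntax (quoted strings, bracketed lists). A minimal sketch — the specific options and the plugin named in required_plugins are just examples, not recommendations from the episode:

```toml
# pyproject.toml — pytest 6+ reads the [tool.pytest.ini_options] table.
[tool.pytest.ini_options]
addopts = "-ra --strict-markers --strict-config"
testpaths = ["tests"]
# Example of the new required_plugins option; the plugin listed
# here is illustrative only.
required_plugins = ["pytest-randomly"]
```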
Ep 124: pip dependency resolver changes
pip is the package installer for Python. Often, when you run pip, especially the first time in a new virtual environment, you will see something like:
WARNING: You are using pip version 20.1.1; however, version 20.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.
And you should. Because 20.2 has a new dependency resolver.
Get in the habit, until October, of replacing pip install with pip install --use-feature=2020-resolver. This flag is new in the 20.2 release.
This new pip dependency resolver is the result of a lot of work. Five of the people involved with this work are joining the show today: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung.
We talk about:
- pip dependency resolver changes
- user experience research and testing
- crafting good error messages
- efforts to improve the test suite
- testing pip with pytest
- some of the difficulties with testing pip
- working with a team on a large project
- working with a large code base
- bringing new developers into a large project
Special Guests: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung.
Links:
- Changelog — pip 20.2 documentation — Including --use-feature=2020-resolver
- pypa/pip: The Python package installer — GitHub repo
- testing pip - documentation
- pip - The Python Package Installer — pip 20.2 documentation
- Changes to the pip dependency resolver in 20.2
Ep 123: GitHub Actions - Tania Allard
Lots of Python projects are starting to use GitHub Actions for Continuous Integration & Deployment (CI/CD), as well as other workflows.
Tania Allard, a Senior Cloud Developer Advocate at Microsoft, joins the show to answer some of my questions regarding setting up a Python project to use Actions.
Some of the topics covered:
- How to get started with GitHub Actions for a Python project?
- What are workflow files?
- Does it matter what the file name is called?
- Can I have / should I have more than one workflow?
Special Guest: Tania Allard.
Links:
- Using Python with GitHub Actions - GitHub Docs
- awesome-actions — A curated list of awesome actions to use on GitHub
Ep 122: Better Resumes for Software Engineers - Randall Kanna
A great resume is key to landing a great software job. There's no surprise there. But so many people make mistakes on their resume that can very easily be fixed.
Randall Kanna is on the show today to help us understand how to improve our resumes, and in turn, help us have better careers.
Special Guest: Randall Kanna.
Links:
- The Standout Developer — link includes discount
Ep 121: Industrial 3D Printing & Python, Finite State Machines, and Simulating Hardware - Len Wanger
Len Wanger works on industrial 3D printers. And I was pleased to find out that there's a bunch of Python in those printers as well.
In this episode we talk about:
- 3D printers
- What are the different types of 3D printers?
- Where are 3D printed industrial parts being used?
- Why use one type of additive manufacturing over another?
- Python in 3D printing hardware.
- What are finite state machines, FSMs?
- Benefits of FSMs for testing, logging, and breaking a complex behavior into small testable parts.
- Benefits of simulation in writing and testing software to control hardware.
Special Guest: Len Wanger.
Links:
- pystate — Python package for coroutine-based state machines
- Impossible Objects — Composite 3D Printing
- Finite-state machine, FSM
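The FSM benefits listed above — small testable parts, easy logging — come from making every state transition explicit. A minimal hand-rolled sketch of the idea, with a made-up printer job; this is not taken from pystate or any real printer code:

```python
# A tiny finite state machine written as a transition table, so each
# transition is individually testable and automatically logged.
# States and events are invented for illustration.
class JobStateMachine:
    TRANSITIONS = {
        ("idle", "start"): "printing",
        ("printing", "pause"): "paused",
        ("paused", "resume"): "printing",
        ("printing", "finish"): "idle",
    }

    def __init__(self, state="idle"):
        self.state = state
        self.history = []  # transition log, handy for debugging and tests

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError("invalid event %r in state %r" % (event, self.state))
        self.history.append(key)
        self.state = self.TRANSITIONS[key]
        return self.state
```

Because the behavior is a data table plus one dispatch method, tests can enumerate every legal transition and assert that illegal ones raise, instead of reverse-engineering a tangle of flags.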
Ep 120: FastAPI & Typer - Sebastián Ramírez
FastAPI is a modern, fast (high-performance) web framework for building APIs with Python, based on standard Python type hints. Typer is a library for building CLI applications, also based on Python type hints. Type hints and many other details are intended to make it easier to develop, test, and debug applications using FastAPI and Typer.
The person behind FastAPI and Typer is Sebastián Ramírez.
Sebastián is on the show today, and we discuss:
- FastAPI
- REST APIs
- Swagger UI
- Future features of FastAPI
- Starlette
- Typer
- Click
- Testing with Typer and Click
- Typer autocompletion
- Typer CLI
Special Guest: Sebastián Ramírez.
Links:
- Explosion
- FastAPI
- Typer
- OpenAPI Specification
- JSON Schema
- OAuth 2.0
- Starlette
- pydantic
- Swagger UI — REST API Documentation Tool
- Testing - Typer
- Click
- Testing Click Applications
- CLI Option autocompletion - Typer
- Typer CLI - completion for small scripts
Ep 119: Editable Python Installs, Packaging Standardization, and pyproject.toml - Brett Cannon
There's stuff going on in Python packaging and pyproject.toml.
Brett and I talk about some upcoming work on Python packaging, such as:
- editable installs
- the need for standardization
- configuration of other tools in pyproject.toml
And then get off on tangents and talk about:
- why it's good to have packages like pip, toml, setuptools, wheel, etc. not be part of the standard library
- should we remove some stuff from the standard library
- the standard library using unittest for testing the standard library; why not hypothesis
- I didn't bring up "why not pytest?" but you know I was thinking it.
- why CPython and not C++Python
- and more
Special Guest: Brett Cannon.
Links:
- episode 52: pyproject.toml : the future of Python packaging - Brett Cannon
- Python Packaging Authority
- PEP 517 -- A build-system independent format for source trees
- PEP 518 -- Specifying Minimum Build System Requirements for Python Projects
- What the heck is pyproject.toml?
- Flit
- Poetry
- enscons
- toml
- setuptools
- distutils
- pip
- HTTPX
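For context on the pyproject.toml discussion above: PEP 518 defines a [build-system] table that tells installers which build backend to use, and the same file can hold other tools' configuration under [tool.*] tables. A minimal sketch, assuming a setuptools build (the tool settings shown are illustrative examples, not recommendations from the episode):

```toml
# pyproject.toml — PEP 518 build requirements plus PEP 517 backend.
[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"

# Other tools can keep their configuration here too, under [tool.*].
# Example only:
[tool.black]
line-length = 88
```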
Ep 118: Code Coverage and 100% Coverage
Code coverage, or test coverage, is a way to measure which lines and branches of your code are exercised during testing. Coverage tools are an important part of software engineering. But there are also lots of different opinions about using them. Should you try for 100% coverage? What code can and should you exclude? What about targets?
I've been asked many times what I think about code coverage or test coverage. This episode is a train-of-thought brain dump on what I think about code coverage.
We'll talk about:
- how I use code coverage to help me write source code
- line coverage and branch coverage
- behavior coverage
- using tests to ask and answer questions about the system under test
- how to target coverage just to the code you care about
- excluding code
- good reasons and bad reasons to exclude code
And also the Pareto Principle, or 80/20 rule, and the law of diminishing returns, and how they apply (or don't) to test coverage.
Links:
- Coverage.py
- pytest-cov
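One concrete mechanism behind the "excluding code" point above: Coverage.py skips lines marked with its default "# pragma: no cover" comment (and you can add your own patterns via exclude_lines in a .coveragerc or setup.cfg). The functions below are invented for illustration; only the pragma comment is real Coverage.py behavior:

```python
# Invented example of deliberate coverage exclusion.
def mean(values):
    if not values:  # this branch IS measured, and worth testing
        raise ValueError("mean of empty sequence")
    return sum(values) / len(values)

def debug_dump(values):  # pragma: no cover
    # A diagnostic helper excluded from measurement on purpose,
    # rather than writing a low-value test just to hit it.
    print("values:", values)
```

Whether a given exclusion is a good reason or a bad reason is exactly the judgment call the episode digs into.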
Ep 117: Python extension for VS Code - Brett Cannon
The Python extension for VS Code is the most downloaded extension for VS Code. Brett Cannon is the manager for the distributed development team of the Python extension for VS Code.
In this episode, Brett and I discuss the Python extension and VS Code, including:
- pytest support
- virtual environment support
- how settings work, including user and workspace settings
- multi-root projects
- testing Python in VS Code
- debugging and pydevd
- jump to cursor feature
- upcoming features
Special Guest: Brett Cannon.
Links:
- Brett Cannon on Changelog
- John Wick
- Ballerina
- Frank Willison Award
- Python extension for VS Code
- episode 106: How IDEs can make software testing easier - Paul Everitt
- User and Workspace Settings
- virtual environments
- Testing Python in VS Code
- pydevd
- Jump to Cursor in Feb VS Code Python blog
Ep 116: 15 amazing pytest plugins - Michael Kennedy
pytest plugins are an amazing way to supercharge your test suites, leveraging great solutions from people solving test problems all over the world. In this episode, Michael and I discuss 15 favorite plugins that you should know about.
We also discuss fixtures and plugins and other testing tools that work great with pytest:
- tox
- GitHub Actions
- Coverage.py
- Selenium + splinter with pytest-splinter
- Hypothesis
And then our list of pytest plugins:
- pytest-sugar
- pytest-cov
- pytest-stress
- pytest-repeat
- pytest-instafail
- pytest-metadata
- pytest-randomly
- pytest-xdist
- pytest-flake8
- pytest-timeout
- pytest-spec
- pytest-picked
- pytest-freezegun
- pytest-check
- fluentcheck
That last one isn't a plugin, but we also talked about pytest-splinter at the beginning. So I think it still counts as 15.
Special Guest: Michael Kennedy.
Links:
- pytest-sugar — changes the default look and feel of pytest (e.g. progress bar, show tests that fail instantly)
- pytest-cov — run coverage.py from pytest
- pytest-stress — allows you to loop tests for a user-defined amount of time
- pytest-repeat — for repeating test execution
- pytest-instafail — shows failures and errors instantly instead of waiting until the end of the test session
- pytest-metadata — for accessing test session metadata
- pytest-randomly — randomly order tests and control random.seed
- pytest-xdist — distributed testing
- pytest-flake8 — pytest plugin to run flake8
- pytest-timeout — terminate tests after a certain timeout
- pytest-spec — display test execution output like a specification
- pytest-picked — run the tests related to the changed files (according to Git)
- pytest-freezegun — easily freeze time
- pytest-check — allows multiple failures per test
- fluentcheck — fluent assertions
- episode 104 — Top 28 pytest plugins with Anthony Sottile
- Python Testing with pytest — The easiest way to get up to speed with pytest fast. There's also a chapter dedicated to plugins that teaches you how to write and test your own plugins.
- tox
- GitHub Actions
- Coverage.py
- pytest-splinter — provides a set of fixtures to use splinter for browser testing
- splinter — makes it easy to write automated tests of web applications
- hypothesis — property-based testing
- Talk Python Episode #267 — This episode is a cross post with Talk Python
Ep 115: Catching up with Nina Zakharenko
One of the great things about attending in-person coding conferences, such as PyCon, is the hallway track, where you can catch up with people you haven't seen for possibly a year, or maybe meet in person for the first time.
Nina is starting something like the hallway track, online, on Twitch, and it's already going, so check out the first episode of Python Tea.
An interesting coincidence is that this episode is kind of like a hallway track discussion between Nina and Brian.
We've had Nina on the show a couple of times before, but it's been a while. In 2018, we talked about mentoring in episode 44. In 2019, we talked about giving memorable tech talks in episode 71.
In this episode, we catch up with Nina, find out what she's doing, and talk about a bunch of stuff, including:
- Live coding
- Online conferences
- Microsoft Python team
- Python Tea, an online hallway track
- Q&A with the Python for VS Code team
- Python on hardware
- Adafruit
- Device Simulator Express
- CircuitPython
- Tricking out your command prompt
- Zsh and Oh My Zsh
- Emacs vs vi key bindings for shells
- Working from home
Special Guest: Nina Zakharenko.
Links:
- nnjaio - Twitch
- Nina Zakharenko 💜🐍 (@nnja) / Twitter
- Device Simulator Express - Visual Studio Marketplace
- Initial code for Microsoft's PyBadge at PyCon 2020
- Goodbye Print, Hello Debugger! - Nina Zakharenko - PyCon 2020 Talk
- Nina Zakharenko 💜🐍 - DEV.to
- Python Tea Announcement
- The Live Coders Conference
Ep 114: The Python Software Foundation (PSF) Board Elections - Ewa Jodlowska / Christopher Neugebauer
"The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers."
That's a lot of responsibility, and to that end, the PSF Board Directors help out quite a bit. If you want to be a part of the board, you can. There's an election coming up right around the corner, and you gotta get your nomination in by May 31. You can also join the PSF if you want to vote for who gets to be part of the board.
But what does it really mean to be on the Board, and what are some of the things the PSF does? To help answer those questions, I've got Ewa Jodlowska, the PSF Executive Director, and Christopher Neugebauer, a current board member, on the show today.
I've also got some great links in the show notes if we don't answer your questions and you want to find out more.
Special Guests: Christopher Neugebauer and Ewa Jodlowska.
Links:
- Latest PSF Board Elections Discussion
- Python Software Foundation
- Overview of Elections
- Duties and Responsibilities of Directors
- Life as a Python Software Foundation Director - YouTube
Ep 113: Technical Debt - James Smith
Technical debt has to be dealt with on a regular basis to have a healthy product and development team.
The impacts of technical debt include emotional drain on engineers and slower development, and it can adversely affect your hiring ability and retention.
But really, what is technical debt? Can we measure it? How do we reduce it, and when?
James Smith, the CEO of Bugsnag, joins the show to talk about technical debt and all of these questions.
Special Guest: James Smith.
Ep 112: Six Principles of Readable Tests - David Seddon
"Code is read much more often than it is written." - Guido van Rossum
This is true for both production code and test code. When you are trying to understand why a test is failing, you'll be very grateful to the test author if they've taken the care to make it readable.
David Seddon came up with six principles to help us write more readable tests. We discuss these, as well as more benefits of readable tests.
David's six principles of readable tests:
- Profit from the work of others
- Put naming to work
- Show only what matters
- Don't repeat yourself
- Arrange, act, assert
- Aim high
Special Guest: David Seddon.
Links:
- How to write readable tests (presentation) · David Seddon
- How to write readable tests (slides)
- pytest
- WebTest
- factory_boy
- django-webtest
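One of the principles above, "Arrange, act, assert," gives a test a visible three-phase shape. A minimal pytest-style sketch — the Cart class is invented purely to illustrate the layout, and is not from David's talk:

```python
# Invented code under test, just enough to shape a readable test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price, quantity=1):
        self.items.append((name, price, quantity))

    def total(self):
        return sum(price * quantity for _, price, quantity in self.items)

def test_total_sums_price_times_quantity():
    # Arrange: build just enough state to matter, nothing more.
    cart = Cart()
    cart.add("apple", 3, quantity=2)
    cart.add("pear", 5)

    # Act: one call, the behavior under test.
    total = cart.total()

    # Assert: one clear expectation.
    assert total == 11
```

The blank lines between phases do a lot of the readability work: a future reader can see at a glance what is setup, what is the action, and what is being checked.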
Ep 111: Subtests in Python with unittest and pytest - Paul Ganssle
In both unittest and pytest, when a test function hits a failing assert, the test stops and is marked as a failed test. What if you want to keep going and check more things? There are a few ways. One of them is subtests.
Python's unittest introduced subtests in Python 3.4. pytest introduced support for subtests with changes in pytest 4.4 and a plugin called pytest-subtests. Subtests are still not really used that much.
But really, what are they? When could you use them? And more importantly, what should you watch out for if you decide to use them? That's what Paul Ganssle and I will be talking about today.
Special Guest: Paul Ganssle.
Links:
- Subtests in Python — Paul's article on subtests
- subtests in unittest - Python docs
- pytest-subtests: plugin to support subtests in pytest
- pytest-check: A pytest plugin that allows multiple failures per test.
Ep 110: Testing Django - from unittest to pytest - Adam Parkin
Django supports testing out of the box with some cool extensions to unittest. However, many people are using pytest for their Django testing, mostly using the pytest-django plugin.
Adam Parkin, who is known online as CodependentCodr, joins us to talk about migrating an existing Django project from unittest to pytest. Adam tells us just how easy this is.
Special Guest: Adam Parkin.
Links:
- Django Tutorial, on testing
- The Django docs on testing & the test database
- The Django docs on the TestCase class
- pytest-django docs
Ep 109: Testing in Financial Services - Eric Bergemann
Financial services have their own unique testing and development challenges. But they also share many of the same challenges as other software projects.
Eric Bergemann joins Brian Okken to discuss:
- Specific testing challenges in the financial services domain
- CI/CD: Continuous Integration, Continuous Deployment
- TDD: Test Driven Development
- Confidence from testable applications
- Testing strategies to add coverage to legacy systems
- Testing the data and test cases themselves
- DevOps
- Continuous testing
- Manual testing procedures
- BDD & Gherkin
- Hiring in vs training industry knowledge
Special Guest: Eric Bergemann.
Links:
- Paragon
- The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win
Ep 108: PySpark - Jonathan Rioux
Apache Spark is a unified analytics engine for large-scale data processing. PySpark blends the powerful Spark big data processing engine with the Python programming language to provide a data analysis platform that can scale up for nearly any task.
Jonathan Rioux, author of "PySpark in Action", joins the show and gives us a great introduction to Spark and PySpark, to help us figure out how to get started and decide whether Spark and PySpark are right for you.
Special Guest: Jonathan Rioux.
Links:
- PySpark in Action
- Spark
- PySpark documentation
- Joel Grus, livecoding
Ep 107: Property Based Testing in Python with Hypothesis - Alexander Hultnér
Hypothesis is the Python tool used for property-based testing. Hypothesis claims to combine "human understanding of your problem domain with machine intelligence to improve the quality of your testing process while spending less time writing tests."

In this episode, Alexander Hultnér introduces us to property-based testing in Python with Hypothesis.

Some topics covered:
- What is property-based testing
- Thinking differently for property-based testing
- Using Hypothesis / property-based testing in conjunction with normal testing
- Failures saved and re-run
- What parts of development/testing are best suited for Hypothesis / property-based testing
- Comparing function implementations
- Testing against REST APIs that use Open API / Swagger with schemathesis
- Changing the number of tests in different test environments
- System, integration, end-to-end, and unit tests

Special Guest: Alexander Hultnér.

Links:
- Hypothesis home
- Hypothesis docs
- Test Fast, Fix More - Property based testing with Hypothesis by Alexander Hultnér - YouTube
- QuickCheck, grandfather of property-based testing
- Beyond Unit Tests, Hillel Wayne, PyCon 2018
- Better Testing With Less Code, Matt Bachmann, PyCon 2016
- Choosing properties for property-based testing (F#)
- schemathesis: Hypothesis + Open API / Swagger for testing web applications
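The core idea can be sketched without the library: instead of hand-picking example inputs, you assert a property that should hold for any input. Hypothesis would generate, shrink, and remember failing inputs for you; the stdlib `random` loop below (with an invented run-length-encoding example, a classic in property-based testing) only gestures at that:

```python
import random

def encode(s):
    """Run-length encode a string into (char, count) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def test_decode_inverts_encode():
    # Property: decoding an encoding returns the original string,
    # for ANY string -- not just a few hand-chosen examples.
    for _ in range(200):
        s = "".join(random.choice("abc") for _ in range(random.randrange(20)))
        assert decode(encode(s)) == s
```

With Hypothesis the loop disappears: you decorate the test with `@given(st.text())` and the library supplies the inputs.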
Ep 106: Visual Testing: How IDEs can make software testing easier - Paul Everitt
IDEs can help people with automated testing.

In this episode, Paul Everitt and Brian discuss ways IDEs can encourage testing and make it easier for everyone, including beginners. We discuss features that exist and are great, as well as what is missing. The conversation also includes topics around being welcoming to new contributors, for both open source and professional projects. We talk about a lot of topics, and it's a lot of fun. But it's also important, because IDEs can make testing easier.

Some topics discussed:
- Making testing more accessible
- Test first vs teaching testing last
- TDD workflow
- Autorun
- Rerunning last failures
- Different ways to run different levels of tests
- Command line flags and how to access them in IDEs
- pytest.ini
- Zooming in and out of test levels
- Running parametrizations
- Running tests with coverage and profiling
- parametrize vs parameterize
- Parametrization identifiers
- pytest fixture support
- Global configurations / configuration templates
- Coverage and testing and being inviting to new contributors
- Confidence in changes and confidence in contributions
- Navigating code, tests, fixtures
- Grouping tests in modules, classes, directories
- BDD, behavior driven development, Cucumber, pytest-bdd
- Web development testing
- Parallel testing with xdist and IDE support
- Refactor rename

Special Guest: Paul Everitt.

Links:
- Python Testing with pytest — The pytest book
- PyCharm
- PyCharm pro
- episode 54: Python 1994 - Paul Everitt
- pytest-xdist
Ep 105: TAP: Test Anything Protocol - Matt Layman
The Test Anything Protocol, or TAP, is a way to record test results in a language-agnostic way. It predates XML by about 10 years and is still alive and kicking.

Matt Layman has contributed to Python in many ways, including his educational newsletter and his Django podcast, Django Riffs. Matt is also the maintainer of tap.py and pytest-tap, two tools that bring the Test Anything Protocol to Python.

In this episode, Matt and I discuss TAP, its history, his involvement, and some cool use cases for it.

Special Guest: Matt Layman.

Links:
- mattlayman.com
- Django Riffs, a podcast for learning Django · Matt Layman
- Test Anything Protocol
- pytest-tap: Test Anything Protocol (TAP) reporting plugin for pytest
- tappy - TAP tools for Python
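Part of TAP's appeal is how simple the format is. A minimal sketch (the function name is invented, and real TAP also allows diagnostics and directives that this omits): a plan line `1..N`, then one `ok` / `not ok` line per test:

```python
def tap_report(results):
    """Render (test_name, passed) pairs as minimal TAP output."""
    lines = [f"1..{len(results)}"]                  # the plan line
    for num, (name, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {num} - {name}")
    return "\n".join(lines)

report = tap_report([("addition works", True), ("division works", False)])
```

Because the format is plain text and language agnostic, harnesses written in any language can consume it.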
Ep 104: Top 28 pytest plugins - Anthony Sottile
pytest is awesome by itself. pytest + plugins is even better. In this episode, Anthony Sottile and Brian Okken discuss the top 28 pytest plugins.

Some of the plugins discussed (we also mention a few plugins related to some on this list):
- pytest-cov
- pytest-timeout
- pytest-xdist
- pytest-mock
- pytest-runner
- pytest-instafail
- pytest-django
- pytest-html
- pytest-metadata
- pytest-asyncio
- pytest-split-tests
- pytest-sugar
- pytest-rerunfailures
- pytest-env
- pytest-cache
- pytest-flask
- pytest-benchmark
- pytest-ordering
- pytest-watch
- pytest-pythonpath
- pytest-flake8
- pytest-pep8
- pytest-repeat
- pytest-pylint
- pytest-randomly
- pytest-selenium
- pytest-mypy
- pytest-freezegun

Honorable mentions:
- pytest-black
- pytest-emoji
- pytest-poo

Special Guest: Anthony Sottile.

Links:
- PyPI Download Stats
- Top PyPI Packages: A monthly dump of the 4,000 most-downloaded packages from PyPI
- Test & Code 25: Selenium, pytest, Mozilla – Dave Hunt
- pre-commit
Ep 103: Django - Lacey Williams Henschel
Django is without a doubt one of the most used web frameworks for Python. Lacey Williams Henschel is a Django consultant and has joined me to talk about Django, the Django community, and so much more.

Topics:
- Django
- The Django community
- Django Girls
- Django Girls Tutorial
- DjangoCon
- Software testing
- Using tests during learning
- pytest-django
- Testing Django
- Wagtail

Special Guest: Lacey Williams Henschel.

Links:
- Django
- Django Girls
- Django Girls Tutorial
- DjangoCon US 2020
- Django: Under the Hood
- PyData
- PyCascades
- Django REST framework
- pytest-django
- Wagtail CMS - Django Content Management System
Ep 102: Cosmic Python, TDD, testing and external dependencies - Harry Percival
Harry Percival has completed his second book, "Architecture Patterns with Python". So of course we talk about the book, also known as "Cosmic Python". We also discuss lots of testing topics, especially related to larger systems and systems involving third party interfaces and APIs.

Topics:
- Harry's new book, "Architecture Patterns with Python", a.k.a. Cosmic Python
- TDD: Test Driven Development
- Test Pyramid
- Tradeoffs of different architectural choices
- Mocks and their pitfalls
- Avoiding mocks
- Separating conceptual business logic
- Dependency injection
- Dependency inversion
- Identifying external dependencies
- Interface adapters to minimize the exposed surface area of external dependencies
- London School vs Classic/Detroit School of TDD
- Testing strategies for testing external REST APIs

Links:
- Cosmic Python - Simple Patterns for Building Complex Applications
- Architecture Patterns with Python - on Amazon
- Harry Percival (@hjwp) / Twitter
- Bob Gregory (@bob_the_mighty) / Twitter
- vcrpy · PyPI
- Writing tests for external API calls
- Stop Using Mocks (for a while) - Harry's PyCon talk
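One recurring theme, dependency injection with hand-rolled fakes instead of mocks, can be sketched in a few lines (the names here are invented for illustration, not taken from the book):

```python
class FakeNotifier:
    """Hand-rolled fake: records calls instead of sending real email."""
    def __init__(self):
        self.sent = []

    def send(self, to, message):
        self.sent.append((to, message))

def allocate_order(order_id, notifier):
    # The business logic depends on the injected notifier's interface,
    # not on a concrete email client, so tests need no mocking library.
    notifier.send("stock@example.com", f"allocated {order_id}")
```

In a test you inject `FakeNotifier()` and assert on `notifier.sent`; in production you inject a real adapter with the same `send` interface.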
Ep 101: Application Security - Anthony Shaw
Application security is best designed into a system from the start. Anthony Shaw is doing something about it by creating an editor plugin that actually helps you write more secure application code while you are coding.

On today's Test & Code, Anthony and I discuss his security plugin, but also application security in general, as well as other security components you need to consider. Security is something every team needs to think about, whether you are a single person team, a small startup, or a large corporation. Anthony and I also discuss where to start if it's just a few of you, or even just one of you.

Topics include:
- Finding security risks while writing code
- What the risks are for your applications
- Thinking about attack surfaces
- Static and dynamic code analysis
- Securing the environment an app is running in
- Tools for scanning live sites for vulnerabilities
- Secret management
- Hashing algorithms
- Authentication systems
- Anthony's upcoming CPython Internals book

Special Guest: Anthony Shaw.

Links:
- Python Security - plugin for PyCharm
- Bandit
- Hack The Box
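On the hashing-algorithms topic, for instance, here is a stdlib-only sketch of salted password hashing with PBKDF2 (the function names and iteration count are illustrative; dedicated libraries such as argon2-cffi or bcrypt are the stronger production choice):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)  # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(digest, expected_digest)
```

The salt defeats precomputed rainbow tables, and the high iteration count slows down brute-force attempts.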
Ep 100: A/B Testing - Leemay Nassery
Let's say you have a web application and you want to make some changes to improve it. You may want to A/B test it first to make sure you are really improving things. But really, what is A/B testing? That's what we'll find out on this episode with Leemay Nassery.

Special Guest: Leemay Nassery.
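The arithmetic at the heart of an A/B test can be sketched with the stdlib: compare conversion rates between the two variants and ask whether the difference is plausibly just noise. This is a hand-rolled two-proportion z-test (real experiments involve much more, like sample-size planning and guardrail metrics):

```python
from math import erf, sqrt

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variant B's conversion rate different?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

A small p-value suggests the difference between A and B is unlikely to be chance; a large one means you can't tell the variants apart yet.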
Ep 99: Software Maintenance and Chess
I play a form of group chess that has some interesting analogies to software development and maintenance of existing systems. This episode explains group chess and explores a few of those analogies.
Ep 98: pytest-testmon - selects tests affected by changed files and methods - Tibor Arpas
pytest-testmon is a pytest plugin which selects and executes only the tests you need to run. It does this by collecting dependencies between tests and all executed code (internally using Coverage.py) and comparing those dependencies against changes. testmon updates its database on each test execution, so it works independently of version control.

In this episode, I talk with testmon creator Tibor Arpas about testmon, its use, and how it works.

Special Guest: Tibor Arpas.

Links:
- testmon.org
- Determining affected tests
- Tibor's post on using pytest in PyCharm
- runtime-info plugin for PyCharm
Ep 97: 2019 Retrospective, 2020 Plans, and an amazing decade
This episode is not just a look back on 2019 and a look forward to 2020. 2019 is also the end of an amazingly transformative decade for me, so I'm going to discuss that as well.

Top 10 episodes of 2019:
10. episode 46, Testing Hard To Test Applications - Anthony Shaw
9. episode 64, Practicing Programming to increase your value
8. episode 70, Learning Software without a CS degree - Dane Hillard
7. episode 75, Modern Testing Principles - Alan Page
6. episode 72, Technical Interview Fixes - April Wensel
5. episode 69, Andy Hunt - The Pragmatic Programmer
4. episode 73, PyCon 2019 Live Recording
3. episode 71, Memorable Tech Talks, The Ultimate Guide - Nina Zakharenko
2. episode 76, TDD: Don't be afraid of Test-Driven Development - Chris May
1. episode 89, Improving Programming Education - Nicholas Tollervey

Looking back on the last decade: some amazing events, like 2 podcasts, a book, a blog, speaking events, and teaching, have led me to where we're at now.

Looking forward to 2020 and beyond: I discuss what's in store for the next year and moving forward.

A closing quote: Software is a blast. At least, it should be. I want everyone to have fun writing software. Leaning on automated tests is the best way I know to give me the confidence and freedom to:
- rewrite big chunks of code
- play with the code
- try new things
- have fun without fear
- go home feeling good about what I did
- be proud of my code

I want everyone to have that. That's why I promote and teach automated testing.

I hope you had an amazing decade. And I wish you a productive and fun 2020 and upcoming decade. If we work together and help each other reach new heights, we can achieve some pretty amazing things.

Links:
- Thanks, 201X! — Mahmoud Hashemi's blog
Ep 96: Azure Pipelines - Thomas Eckert
Pipelines are used a lot in software projects to automate much of the work around build, test, deployment, and more. Thomas Eckert talks with me about pipelines, specifically Azure Pipelines: some of the history, and how we can use pipelines for modern Python projects.

Special Guest: Thomas Eckert.

Links:
- click repo
- Azure Pipelines Action · Actions · GitHub Marketplace
Ep 95: Data Science Pipeline Testing with Great Expectations - Abe Gong
Data science and machine learning are affecting more of our lives every day. Decisions based on data science and machine learning are heavily dependent on the quality of the data, and the quality of the data pipeline. Some of the software in the pipeline can be tested to some extent with traditional testing tools, like pytest. But what about the data?

The data entering the pipeline, and at various stages along the pipeline, should be validated. That's where pipeline tests come in. Pipeline tests are applied to data. Pipeline tests help you guard against upstream data changes and monitor data quality.

Abe Gong and Superconductive are building an open source project called Great Expectations. It's a tool to help you build pipeline tests. This is quite an interesting idea, and I hope it gains traction and takes off.

Special Guest: Abe Gong.

Links:
- Great Expectations
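The shape of a pipeline test, in miniature: assert on the data, not the code. Great Expectations provides a rich, declarative vocabulary of such "expectations"; this stdlib sketch, with a function written for this example, just shows the idea:

```python
def expect_column_values_to_not_be_null(rows, column):
    """Validate incoming data: every row must have a value in `column`."""
    failures = [row for row in rows if row.get(column) is None]
    return {"success": not failures, "unexpected_count": len(failures)}

incoming = [
    {"user_id": 1, "signup_date": "2020-01-05"},
    {"user_id": 2, "signup_date": None},  # an upstream change broke this field
]
report = expect_column_values_to_not_be_null(incoming, "signup_date")
```

Run at the pipeline boundary, a failing report like this catches an upstream schema or quality regression before it propagates downstream.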
Ep 94: The real 11 reasons I don't hire you - Charity Majors
You've applied for a job, maybe lots of jobs. Depending on the company, you've got to get through:
- a resume review
- a coding challenge
- a phone screen
- maybe another code example
- an in-person interview

If you get the job, and you enjoy the work: awesome, congratulations. If you don't get the job, it'd be really great to know why. Sometimes it isn't because you aren't a skilled engineer. What other reasons are there? Well, that's what we're talking about today.

Charity Majors is the cofounder and CTO of Honeycomb.io, and we're going to talk about reasons for not hiring someone. This is a very informative episode, both for people who will job hunt in the future and for hiring managers and people on interview teams.

Special Guest: Charity Majors.

Links:
- The (Real) 11 Reasons I Don't Hire You — The article
Ep 93: Software Testing, Book Writing, Teaching, Public Speaking, and PyCarolinas - Andy Knight
Andy Knight is the Automation Panda. Andy is passionate about software testing, and shares his passion through public speaking, writing on automationpanda.com, teaching as an adjunct professor, and now also through writing a book and organizing a new regional Python conference.

Topics of this episode include:
- Andy's book on software testing
- Being an adjunct professor
- Public speaking and preparing talk proposals, including tips from Andy about proposals and preparing for talks
- PyCarolinas

Special Guest: Andy Knight.

Links:
- Automation Panda
- Andy's speaking events
- PyCarolinas 2020
Ep 92: 9 Steps to Crater Quality & Destroy Customer Satisfaction - Cristian Medina
Cristian Medina recently wrote an article called "Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices". Of course, it's sarcastic, and aims to highlight many problems with organizational practices that reduce software quality.

The article doesn't go out of character, and only promotes the anti-patterns. However, in this interview, we discuss each point, and the corollary of what you really should do. At least, our perspectives.

Here's the list of the points discussed in the article and in this episode:
- Make the test teams solely responsible for quality
- Require all tests to be automated before releasing
- Require 100% code coverage
- Isolate the test organization from development
- Measure the success of the process, not the product. Metrics, if rewarded, will always be gamed.
- Require granular projections from engineers
- Reward quick patching instead of solving
- Plan for today instead of tomorrow

Special Guest: Cristian Medina.

Links:
- Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices — The article we discuss in the show.
- tryexceptpass — Cris's blog
Ep 91: Python 3.8 - there's a lot more new than most people are talking about
Python 3.8.0 final is live and ready to download. On today's episode, we run through what's new, picking out the bits that I think are the most interesting and affect the most people, including:
- new language features
- standard library changes
- optimizations in 3.8

Not just the big stuff everyone's already talking about, but also some little things that will make programming Python even more fun and easy. I'm excited about Python 3.8. And really, this episode is my way of trying to get you excited about it too.

Links:
- What's New In Python 3.8 - at docs.python.org
- Download Python 3.8 at Python.org
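A taste of a few of the headline language features (runnable on 3.8+; the variable names are just for illustration):

```python
# Assignment expressions (PEP 572, the "walrus operator"):
# bind a value and test it in one expression.
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    message = f"list is long ({n} elements)"

# f-string "=" specifier, handy for quick debugging output.
x = 42
debug = f"{x=}"

# Positional-only parameters (PEP 570): "/" marks everything
# before it as positional-only.
def power(base, exp, /):
    return base ** exp
```

`power(base=2, exp=3)` now raises a TypeError, which lets library authors rename parameters without breaking callers.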
Ep 90: Dynamic Scope Fixtures in pytest 5.2 - Anthony Sottile
pytest 5.2 was just released, and with it a cool fun feature called dynamic scope fixtures. Anthony Sottile is one of the pytest core developers, so I thought it'd be fun to have Anthony describe this new feature for us. We also talk about parametrized testing, and really, what is fixture scope, and then, what is dynamic scope?

Special Guest: Anthony Sottile.

Links:
- pytest changelog
- pytest fixtures
- dynamic scope fixtures
- episode 82: pytest - favorite features since 3.0
- the pytest book — Python Testing with pytest
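The feature lets a fixture's scope be decided at run time: instead of a string, you pass a callable as `scope`, and pytest calls it with `(fixture_name, config)`. A sketch (the `--fresh-db` option name and the `db` fixture are invented for illustration):

```python
def db_scope(fixture_name, config):
    """Return the scope for the `db` fixture based on a command-line flag."""
    if config.getoption("--fresh-db", default=False):
        return "function"   # rebuild per test when isolation matters
    return "session"        # otherwise share one instance for speed

# In conftest.py (pytest >= 5.2) this would be wired up as:
#
#   import pytest
#
#   @pytest.fixture(scope=db_scope)
#   def db():
#       yield connect_to_test_database()
```

The same fixture can then be cheap in everyday runs and fully isolated in CI, without duplicating the fixture body.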
Ep 89: Improving Programming Education - Nicholas Tollervey
Nicholas Tollervey is working toward better ways of teaching programming. His projects include the Mu Editor, PyperCard, and CodeGrades. Many of us talk about problems with software education. Nicholas is doing something about it.

Special Guest: Nicholas Tollervey.

Links:
- Code With Mu — a simple Python editor for beginner programmers
- Made With Mu — A blog to celebrate projects that use the Mu Python code editor to create cool stuff.
- PyperCard — Easy GUIs for All
- CodeGrades
Ep 88: Error Monitoring, Crash Reporting, Performance Monitoring - JD Trask
Error monitoring, crash reporting, and performance monitoring are tools that help you create a better user experience, and they are fast becoming crucial for web development and site reliability. But what are they, really? And when do you need them?

You've built a cool web app or service, and you want to make sure your customers have a great experience. You know I advocate utilizing automated tests so you find bugs before your customers do. However, fast development lifecycles and quickly reacting to customer needs are good things, and we all know that complete testing is not possible. That's why I firmly believe that site monitoring tools like logging, crash reporting, and performance monitoring are awesome for maintaining and improving user experience.

John-Daniel Trask (JD), the CEO of Raygun, agreed to come on the show and let me ask all my questions about this whole field.

Special Guest: John-Daniel Trask.