
Weekly Dev Tips
75 episodes — Page 2 of 2

Ep 25: What Good is a Repository
What good is a repository? This week I'm following up on last week's tip about the Repository pattern. A listener pointed out to me that I never directly answered the question posed in the last episode: "Do I need a repository?" I'll be sure to do so here.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Last episode I addressed a fairly common online argument against the use of repositories. I suggest you listen to that episode and then jump back into this one, but hey, you do what you want. I addressed the usual arguments against using a repository by using a particular article as an example, but I want to be clear that that article was by no means the direct cause of my response. In fact, I hadn't even seen that particular article until I sat down to record the episode; it happened to be pretty high up in my search results and made a good example of the kinds of arguments folks like to throw out when arguing against the pattern.

So, now that we've heard and perhaps refuted some of the arguments against using the repository pattern, let's talk about why you might use it. What good is a Repository in your application? The Repository pattern is simply an abstraction of your persistence strategy for your application. In fact, the repository pattern is most frequently used along with the strategy design pattern as a way to pull low-level knowledge of persistence details out of your application's other classes. I've heard the repository pattern also described as an example of the facade design pattern, since it hides away much of the detail of this or that persistence technology and exposes a much simpler interface for getting and storing entities. I can get behind that definition, too. You can think of the repository pattern as essentially a particular use case of the facade pattern in which the complex underlying implementation is related to persistence.

There's one more pattern we can consider in relation to the repository, though, and that's the adapter. The main difference between a facade and an adapter is in the intent. A facade's intent is to simplify, while an adapter's intent is to allow multiple implementations to be accessed through a common interface. A repository typically does both of these things, providing a simple interface that hides unneeded complexity and allows multiple implementations, like relational, non-relational, in-memory, or even file- or API-based approaches. So, the repository pattern is all about providing an intention-revealing name to a facade or adapter that can be used as a strategy to reduce coupling in your application.

Let's drill into these other patterns a little more. The strategy pattern lets you change how a class does something without having to change the class itself. If you're familiar with dependency injection, you already know this pattern. It works by passing in as a parameter a particular implementation to be used, allowing this implementation to vary without the class that uses it having to change. It's one of the most powerful design patterns for writing loosely-coupled code that follows the SOLID principles. It's very challenging to write unit-testable code in strongly typed languages without using this pattern.

The facade pattern is helpful when you want to make a complex API easier to use.
This might be because the complexity is unnecessary, or because there are certain "right ways" to do things in your particular application and you want to make it easier for your team to follow the right path and avoid the wrong ones. Creating a facade to expose simple persistence operations like creating, updating, and deleting records, as well as some mechanism for fetching and querying, is a pretty common technique and can allow teams to focus on business logic more than data access logic in many cases. That said, keep in mind that the facade can hide useful features and sometimes necessary complexity that client software should otherwise be able to access. Care must be taken in how the facade is designed to ensure it doesn't cripple the use of the libraries it wraps.

The adapter pattern is helpful for testing, since it allows tests to easily substitute in implementations that behave however the particular test requires. Using adapters can also allow an application to work more flexibly with different actual providers, plugging in the appropriate one as necessary. You can see that the Repository pattern is really a particular implementation of one or more other, more generic patterns. Next week I'll talk a little bit more about the pattern, and how it can be further extended by layering additional patterns on top of it.

Let's wrap up this episode by answering an important question. Do I need a repository? No, you don't need a repository. However, if any of the benefits I've described in the last two episodes sounds like something you might want, the pattern is worth applying.
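To make the repository-as-strategy idea concrete, here's a minimal sketch in C#; the Order entity, IOrderRepository interface, and class names are hypothetical, invented for illustration rather than taken from the episode:

```csharp
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The repository abstraction: an intention-revealing facade over persistence.
public interface IOrderRepository
{
    Order? GetById(int id);
    void Add(Order order);
}

// One "strategy": a simple in-memory adapter, handy for tests.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new();

    public Order? GetById(int id) =>
        _orders.TryGetValue(id, out var order) ? order : null;

    public void Add(Order order) => _orders[order.Id] = order;
}

// Client code depends only on the interface; which implementation is used
// (EF Core, Dapper, in-memory, an API-based adapter) is injected from outside.
public class CheckoutService
{
    private readonly IOrderRepository _orders;

    public CheckoutService(IOrderRepository orders) => _orders = orders;

    public void Complete(Order order) => _orders.Add(order);
}
```

CheckoutService never changes when the persistence technology does; you just swap which IOrderRepository implementation gets injected.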

Ep 24: Do I Need a Repository?
Do I Need a Repository? This week we'll answer this extremely common question about the Repository pattern, and when you should think about using it.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
This week we're going to return to the Repository design pattern to answer a very common question: when should you use it? This question appears very frequently in discussions about Entity Framework or EF Core, usually with someone saying "Since EF already acts like a repository, why would you create your own repository pattern on top of it?" Before we get into the answer to this question, though, let me point out that if you're interested in the repository pattern in general I have a link to a very useful EF Core implementation in the show notes for this episode that should help get you started or perhaps give you some ideas you can use with your existing implementation. Also, just a reminder that we talked about the pattern in episode 18 on query logic encapsulation, but otherwise I haven't spent a lot of time on repository tips here, yet.

Ok, so on to this week's topic. Should you bother using the repository pattern when you're working with EF or EF Core, since these already act like a repository? If you Google for this, you're likely to discover an article discussing this topic that suggests repository isn't useful. In setting the scene, the author discusses an app he inherited that had performance issues caused by lazy loading, which he says "was needed because the application used the repository/unit of work pattern." Before going further, let's point out two things. One, lazy loading in web applications is evil. Just don't do it except maybe for internal apps that have very few users and very small data sets. Read my article on why, linked from the show notes. Second, no, you don't need lazy loading if you're using repository. You just need to know how to pass query and loading information into the repository.

The author later goes on to say "one of the ideas behind repository is that you might replace EF Core with another database access library but my view it's a misconception because a) it's very hard to replace a database access library, and b) are you really going to?" I agree that it's very hard to replace your data access library, unless you put it behind a good abstraction. As to whether you're going to, that's a tougher one to answer. I've personally seen organizations change data access between raw ADO.NET, Enterprise Application Block, Typed Datasets, LINQ-to-SQL, LLBLGen, NHibernate, EF, and EF Core. I've probably forgotten a couple. Oh yeah, and Dapper and other "micro-ORMs", too. If you're using an abstraction layer, you can swap out these implementation details quickly and easily. You just write a new class that is essentially an adapter of your repository to that particular tool. If you're hardcoded to any one of them, it's going to be a much bigger job (and so, yeah, you're less likely to do it because of the pain involved).

Next, the author lists some of the bad parts of using repository. First, sorting and filtering, because a particular implementation he found from 2013 only returned an IEnumerable and didn't provide a way to allow filtering and sorting to be done in the database. Yes, poor implementations of a pattern can result in poor performance. Don't do that if performance is important. Next, he hits on lazy loading again.
Ironically, at the time this article was published, EF Core didn't even support lazy loading, so this couldn't be a problem with it. Unfortunately, now it does, but as I mentioned, you shouldn't use it in web apps anyway. It has nothing to do with repository, despite the author thinking they're linked somehow. His third perf-related issue is with updates, claiming that a repository around EF Core would require saving every property, not just those that have changed. This is also untrue. You can use EF Core's change tracking capability with and through a repository just fine.

His fourth and final "bad part" of repositories when used with EF Core is that they're too generic. You can write one generic repository and then use that or subtype from it. He notes that it should minimize the code you need to write, but in his experience as things grow in complexity you end up writing more and more code in the individual repositories. Having less code to write and maintain really is a good thing. The issue with complexity resulting in more and more code in repositories is a symptom of not using another pattern, the specification. In fact, the specification pattern addresses pretty much all of the issues described in his post that I haven't already debunked. The author knows about this pattern, which he describes as 'query objects', but doesn't see how they can be used together with repositories just as effectively as he uses them on their own.
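As one illustration of "passing loading information into the repository," here's a minimal sketch assuming EF Core; the entity and member names are assumptions, not taken from the article being discussed:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public List<OrderItem> Items { get; set; } = new();
}

public class OrderItem
{
    public int Id { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public class OrderRepository
{
    private readonly AppDbContext _db;

    public OrderRepository(AppDbContext db) => _db = db;

    // The caller asks for an order with its children; the repository
    // eagerly loads them in a single query. No lazy loading required.
    public Order? GetByIdWithItems(int id) =>
        _db.Orders.Include(o => o.Items).SingleOrDefault(o => o.Id == id);
}
```

EF Core's change tracking still works on the entities this returns, which is why updates through a repository don't have to save every property.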

Ep 23: Domain Events - After Persistence
Domain Events - After Persistence The previous tip talked about domain events that fire before persistence. This week we'll look at another kind of domain event that should typically only fire after persistence.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
If you're new to the domain events pattern, I recommend you listen to episode 22 before this one. In general, I recommend listening to this podcast in order, but I can't force that on you... When you have a scenario in your application where a requirement is phrased "when X happens, then Y should happen," that's often an indication that using a domain event might be appropriate. If the follow-on behavior has side effects that extend beyond your application, that's often an indication that the event shouldn't occur unless persistence is successful.

Let's consider a contrived real-world example. Imagine you have a simple ecommerce application. People can browse products, add them to a virtual cart or basket, and check out by providing payment and shipping details. Everything is working fine when you get a new requirement: when the customer checks out, they should get an email confirming their purchase. Sounds like a good candidate for a domain event, right? Ok, so your first pass at this is to simply go into the Checkout method and raise a CartCheckedOut event, which you then handle with a NotifyCustomerOnCheckoutHandler class. You're using a simple in-proc approach to domain events, so when you raise an event, all handlers fire immediately before execution resumes. You roll out the change with the next deployment.

Unfortunately, another change to the codebase resulted in an undetected error related to saving new orders. Meaning, they're not being saved in the database. Now the result is that customers are checking out, being redirected to a friendly error screen, but also getting an email confirming their order was placed. They're mostly assuming everything is fine on account of the pleasant email confirmation, but in fact your system has no record of the order they just placed because it didn't save. In this kind of situation, you'd really rather not send that confirmation email until you've successfully saved the new order.

While in-proc domain events are often implemented using simple static method calls to raise or register for events, post-persistence events need to be stored somewhere and only dispatched once persistence has been successful. One approach you can use for this in .NET applications is to store the events in a collection on the entity or aggregate root, and then override the behavior of the Entity Framework DbContext so that it dispatches these events once it has successfully saved the entity or aggregate. My CleanArchitecture sample on GitHub demonstrates how to put this approach into action using a technique Jimmy Bogard wrote about a few years ago. It involves overriding the SaveChanges method on the DbContext, finding all tracked entities with events in their collection, and then dispatching these events. His original approach fires the events before actually saving the entity, but I much prefer persisting first and using a different kind of domain event for immediate, no-side-effect events. In the Clean Architecture sample, I have a simple ToDo entity that raises an event when it is marked complete. This event is only fired once the entity's state is saved.
At that point, a handler tasked with notifying anybody subscribed to that entity's status could safely send out notifications. The pattern is very effective as a lightweight way to decouple follow-on behavior from the actions that trigger it within the domain model, and it doesn't require adding additional architecture in the form of message queues or buses.

Would your team or application benefit from an application assessment, highlighting potential problem areas and identifying a path toward better maintainability? Contact me at ardalis.com and let's see how I can help.

Show Resources and Links
Clean Architecture Sample (GitHub)
Domain-Driven Design Fundamentals - includes Domain Events
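Here's a minimal sketch of the persist-first dispatch described above, with assumed names (BaseEntity, IDomainEventDispatcher); it's a sketch of the technique, not the exact CleanArchitecture sample code:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public abstract class BaseEntity
{
    // Events raised by the entity but not yet dispatched.
    public List<object> Events { get; } = new();
}

public interface IDomainEventDispatcher
{
    void Dispatch(object domainEvent);
}

public class AppDbContext : DbContext
{
    private readonly IDomainEventDispatcher _dispatcher;

    public AppDbContext(DbContextOptions<AppDbContext> options,
        IDomainEventDispatcher dispatcher) : base(options) =>
        _dispatcher = dispatcher;

    public override int SaveChanges()
    {
        // Persist first; if this throws, no events are dispatched.
        int result = base.SaveChanges();

        // Then dispatch the events collected on tracked entities.
        foreach (var entry in ChangeTracker.Entries<BaseEntity>())
        {
            var events = entry.Entity.Events.ToArray();
            entry.Entity.Events.Clear();
            foreach (var domainEvent in events)
                _dispatcher.Dispatch(domainEvent);
        }

        return result;
    }
}
```

Because base.SaveChanges() runs before any dispatching, a failed save means no confirmation email ever goes out.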

Ep 22: Domain Events - Before Persistence
Domain Events - Before Persistence Domain Events are a DDD design pattern that in my experience can really improve the design of complex applications. In this episode I describe specifically how you would benefit from raising and handling these events prior to persisting the state of your entities.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
So before we get started, let's describe what a domain event is. A domain event is something that happens in your system that a domain expert cares about. Domain events are part of your domain model. They belong in the Core of your Clean Architecture. They should be designed at the abstraction level of your domain model, and shouldn't reference UI or infrastructure concerns. Domain events are a pattern, and one with several different implementation approaches. I generally segment these approaches into two camps: before persistence and after persistence. For this tip, we're going to focus on events that occur and are handled prior to persistence. In future tips, I'll talk about domain events that should only be dispatched once persistence has been successful.

So, as a pattern, what problem are domain events designed to solve? Just as with other event-driven programming models, such as user interface events, domain events provide a way to decouple things that occur in your system from the things such occurrences trigger. A common example I use is checking out a shopping cart from an ecommerce site. When the user checks out, a variety of other things generally should take place. The order should be saved. Payment should be processed. Inventory should be checked. Notifications should be sent. Now, you could put all of this logic into a Checkout method, but then that method is going to be pretty large and all-knowing. It's probably going to violate the Single Responsibility and Open/Closed Principles. Another approach would be to raise an event when the user checks out, and then to have separate handlers responsible for payment processing, inventory monitoring, notifications, etc.

Looking specifically at events that make sense to handle before persistence, the primary rule is that such events shouldn't have any side effects external to the application. A common scenario is to perform some kind of validation. Imagine you have a Purchase Order domain object, which includes a collection of Line Item objects. The Purchase Order has a maximum amount associated with it, and the total of all the Line Item object amounts must not exceed this amount. For the sake of simplicity let's say the Purchase Order object includes the logic to check whether its child Line Item objects exceed its maximum. When a Line Item object is updated, how can we run this code?

One option would be to provide a reference from each Line Item to its parent Purchase Order. This is fairly common but results in circular dependencies, since Purchase Order also has a reference to a collection of its Line Item objects. These objects together can be considered an Aggregate, and ideally dependencies should flow from the Aggregate Root (in this case Purchase Order) to its children, and not the other way around. So, let's assume we follow this practice, which means we can't simply call a method on Purchase Order from Line Item directly. Another common approach I see developers use instead of domain events is to pull all of the logic up from child objects into the aggregate root object.
So instead of having a property setter or property on Line Item to update its amount, there might be a method on Purchase Order called UpdateLineItemAmount that would do the work. This breaks encapsulation and will cause the root object to become bloated while the child objects become mere DTOs. It can work, but it's not very good from an object-oriented design standpoint.

So how would domain events apply here? First, you'd put the logic to modify Line Item on the Line Item class where it belongs. Then, in the UpdateAmount method, you would raise an appropriate domain event, like LineItemAmountUpdated. The aggregate root would subscribe to this event and would handle the event in process (not asynchronously or in another thread). If necessary, the handler could raise an exception. In any case, it could update properties of the root object to indicate whether it was currently in a valid state, which could easily be reflected in the UI. This is one particular use case for domain events that I've found very helpful, and which I typically refer to as aggregate events since there isn't a separate event handler type in this case. I have a small sample showing them in action on GitHub you can check out in the show notes.

With aggregate events in place, you can check all the boxes for your object design. Your aggregate's dependencies flow from root to children. Your aggregate's child objects are responsible for their own behavior. Changes to child objects are communicated to the root through events.
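Here's a minimal, hypothetical sketch of such an aggregate event using plain .NET events; the names (LineItem, PurchaseOrder, LineItemAmountUpdated) follow the example above, but the details are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class LineItemAmountUpdated : EventArgs
{
    public LineItem Item { get; }
    public LineItemAmountUpdated(LineItem item) => Item = item;
}

public class LineItem
{
    public event EventHandler<LineItemAmountUpdated>? AmountUpdated;
    public decimal Amount { get; private set; }

    // The child owns its own behavior and raises an event; it holds no
    // reference to its parent, so dependencies still flow root-to-child.
    public void UpdateAmount(decimal newAmount)
    {
        Amount = newAmount;
        AmountUpdated?.Invoke(this, new LineItemAmountUpdated(this));
    }
}

public class PurchaseOrder
{
    private readonly List<LineItem> _items = new();
    public decimal SpendLimit { get; set; }

    public void AddItem(LineItem item)
    {
        // The root subscribes; the handler runs synchronously, in process.
        item.AmountUpdated += OnLineItemAmountUpdated;
        _items.Add(item);
    }

    private void OnLineItemAmountUpdated(object? sender, LineItemAmountUpdated e)
    {
        // Could instead just set an IsValid flag for the UI to reflect.
        if (_items.Sum(i => i.Amount) > SpendLimit)
            throw new InvalidOperationException(
                "Line item total exceeds the purchase order's maximum.");
    }
}
```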

Ep 21: Breadcrumbs and Troubleshooting
Breadcrumbs and Troubleshooting This week I'm taking a break from design patterns to talk about a useful skill to prevent you and your team having to reinvent the wheel when it comes to troubleshooting problems or working through new tools or frameworks.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Have you ever spent a few hours working through getting a new tool, library, package, or framework working? Along the way, did you run into things that didn't quite go as easily as the documentation, video, or presentation on the subject made it out to be? Did you end up spending time on Google, StackOverflow, etc. trying to figure out how to really get things to work in your real world environment? If you answered yes to these questions, you're in good company. I've certainly been there countless times.

Now, follow-up question. Have you ever done all of the above, but with a sense of deja vu, because you'd had to do the exact same thing some time previously? And when you found the blog post or sample that reminded you of the issue, you were like "Oh yeah, I had to do this last time, too!" I find these to be some of the most frustrating hours of my work. I want to be building things, making progress, seeing things grow in functionality, not banging my head against the same walls I've left dents and bloodstains on in the past. I also want to make sure my coworkers, my teammates, and my clients benefit from the cuts and bruises I acquire as I blaze a trail through unknown territories.

So, what can you do to limit the amount of retreading through the same painful terrain you (and often your team) have to do? Obviously the first thing you could do is take notes. This is natural for some developers, but others find it distracting. When they're in the zone, figuring things out and getting things done, they don't want to stop to document things along the way. Breaking their flow might mean the difference between getting things working and giving up and walking away. If you can take notes, do so. I suggest keeping track of the URLs you found useful, along with screenshots of things like property settings or other configurations that you needed to modify to get things working. Sometimes, you can document things after you've gotten things working. This is often true for fairly simple problems. However, for something that's taken hours rather than minutes, it's likely that by the time you're done, you've forgotten a few steps along the way.

Here are a few things you can do to leave yourself a trail of breadcrumbs as you work. And of course, by breadcrumbs, I actually mean something better than breadcrumbs that you'll actually find later, since the whole origin of breadcrumbs is from a story in which breadcrumbs are a decidedly poor choice to leave behind you in order to find your way. One approach I use is to use a particular browser instance while I'm working on a specific problem, and to open all links related to the problem at hand in their own tabs. If they're useless, I close them, but if they're at all helpful, I leave them open. Once I've figured out whatever it is I've been working on, I can look at my tab history and add the links as references wherever is appropriate. Sometimes that's a link in a source code file. Sometimes it's in a README file. Sometimes it's in a blog post (or a Trello card for a blog post I want to write).
In any case, I associate the links to the resources that helped me along the way with the problem I just solved while everything is still fresh in my mind and the links are literally still open in my browser.

Another tool you can use is screen recording. If you don't like actually writing/typing notes, you can record conference calls with clients using tools like Zoom or GoToMeeting. You can also record your own screen using tools like Camtasia, which I highly recommend. Then you can quickly jump around in the video to see yourself tackling problems, and retroactively make notes or write up a postmortem or checklist. Occasionally the video itself might even be worth editing into something, perhaps for internal consumption by your team.

Yet another tool I've used in the past is TimeSnapper, which would take and store screenshots every so many seconds on your machine. Then it would let you play them back later to see what you'd been spending your time on. I haven't used it in a while but it appears to still be active. You could do something similar by just taking a screenshot periodically as you progress through a problem, but you're much more likely to forget without a tool like this.

The most important thing to take away from this episode is that you don't want to fight through the same problem more than once. Ideally you want to prevent your team from having to fight through problems you've already solved. The key is to share the necessary information in a way that doesn't slow you down.

Ep 20: Abstraction Levels and Authorization
Abstraction Levels and Authorization Working at too low of an abstraction level is a common source of duplication and technical debt. A very common culprit in this area is authorization.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Let's take a quick break from the more commonplace design patterns and talk a little bit about abstraction levels and how they impact duplication and technical debt in our software designs. You can think of high levels of abstraction as being at the level of the real world concept your software is modeling. For an ecommerce application, it might be buying something, or adding an item to a cart. The whole notion of a cart or basket is a metaphor explicitly pulled into ecommerce applications from the real world. There's certainly no literal cart or basket involved in most online shopping experiences. Low levels of abstraction refer to implementation details used by the actual software (and sometimes hardware) used by the system. When developing software, it's a good design decision to encapsulate low levels of abstraction separately from higher levels, and thus to avoid mixing abstraction layers more than necessary within any given module. The more you mix abstraction levels, the more you add tight coupling to your design, making it harder to change in response to future requirements.

A common requirement in many applications is authorization. Authorization is often conflated with the other auth word, authentication. Authentication is the process of determining who the user is. Authorization is the process of determining whether a particular user should be allowed to perform a certain operation. It can include default rules for anonymous users, but aside from that, authorization only makes sense once authentication has taken place and you know who the user is. Authorization rules can take many forms, and can be as granular as specifying that a specific user has access to a specific resource. However, most applications that need authorization will leverage roles or claims to specify how groups of users should or should not have access to certain sets of resources. This makes it much easier to manage collections of users and collections of resources, since otherwise a huge number of specific user-to-resource rights would need to be maintained.

However, even this is often prone to duplication that results from too low of an abstraction level. It's common in platforms like .NET to use roles as at least one part of determining authorization, and to use conditional logic like if (user.IsInRole("Admins")) any time authorization logic needs to be performed. In any non-trivial system that uses this pattern, you'll probably find quite a few lines of code that match this expression, meaning there is a great deal of duplication. Duplication isn't always bad, but in this case the implementation detail of performing a role check as one part of checking whether a user is authorized to access a particular resource is adding to the system's technical debt. Frequently, authorization rules will change over time. What happens when a new role or set of claims is created that should have access to some resources? Every one of the if statements related to access to that resource will need to be modified. What happens if you switch from using roles to claims? Every if statement will need to be modified.
Of course, when these modifications take place, there's also the chance that bugs will be introduced, and these will manifest in many cases as security breaches. There are many patterns you can use to improve this design. You can use a more declarative approach, such that adding certain attributes will protect certain endpoints in your application. This can remove conditional logic and can eliminate some duplication, since these attributes can be applied at class or even base class levels. However, if your authorization logic is more complex than simple role membership, it may not be sufficient, or at the least you'll need to write your own attributes or filters.

Another approach I've found useful is to create a first-class abstraction that describes whether a given user should have access to a given resource. I typically call such types privileges, but you can refer to them as AuthorizationPolicies or whatever makes sense to you and your team if you prefer. A privilege takes in who the user is and what resource they're attempting to work with, and specifies what operations that user can perform on that resource. Since it's a design pattern, not a specific solution, how you implement the details is up to you. A common approach is to implement methods for things like CanRead, CanCreate, and CanModify. You can also further modify it to work for collections or types of resources, so that, for example, if the user should be able to manage product definitions, you could check whether the user has rights to the Product type.
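Here's a minimal sketch of a privilege type; the resource and user classes and the rules inside them are invented for illustration:

```csharp
public class AppUser
{
    public string Id { get; set; } = "";
    public bool IsAdmin { get; set; }
}

public class Document
{
    public string OwnerId { get; set; } = "";
}

// One place that knows the authorization rules for Documents, instead of
// if (user.IsInRole(...)) checks scattered across the codebase.
public class DocumentPrivilege
{
    public bool CanRead(AppUser user, Document doc) =>
        user.IsAdmin || doc.OwnerId == user.Id;

    public bool CanModify(AppUser user, Document doc) =>
        user.IsAdmin || doc.OwnerId == user.Id;

    public bool CanCreate(AppUser user) => true; // any authenticated user
}
```

When the rules change - say, a new role should gain access, or roles give way to claims - only the privilege type has to change, not every call site.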

Ep 19: Learn the Strategy Pattern
Learn the Strategy Pattern The Strategy design pattern is one of the most fundamental and commonly-used patterns in modern object-oriented design. Take some time to make sure you're proficient with it.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
I'm continuing a small series of tips on design patterns that I started with episode 17. I encourage you to listen to these tips in order, since in many cases they'll build on one another. This week I want to briefly describe the strategy pattern, and more importantly, why I think it's a pattern every object-oriented software developer should know. The strategy design pattern is often applied as a refactoring technique to improve the design of some existing code. The original code likely has some tightly-coupled and/or complex logic in it that would be better off separated from the current implementation. I think one reason why I'm so fond of the strategy pattern is that it literally helps with every one of the SOLID principles of object-oriented design. In my SOLID course on Pluralsight, I also discuss the Don't Repeat Yourself, or DRY, principle, which strategy can help with as well. Let's look at how.

First, if you have a class that's doing too much (therefore breaking SRP - the Single Responsibility Principle), common refactorings like extract method and move method can be used to pull logic out of one big method. However, if you then call this method from the big method, either statically or by directly instantiating a class to which you've moved the logic, you're not helping the coupling aspect of the problem. We'll get to that when we get to the 'D' in SOLID. Applying the strategy design pattern in this case is really just a slight twist on the usual extract and move method refactorings. You're still doing that, but you also typically create a new interface and pass in the interface to the original code. After moving the original implementation code to a new class that implements the new interface, you should have a new class that follows SRP, and your original class should at least have fewer responsibilities.

Considering the refactoring I just described, it's easy to see how it can help with the Open/Closed Principle, or OCP, too. Whereas the original code's complex logic would have needed to be modified and recompiled any time a change was requested, the new design can accommodate changes in the implementation of the extracted method by writing new code that implements the same interface. Then, an instance of the new class that has this new implementation can be passed into the existing code without touching the existing code. I talked about how important this is with legacy code in episode 15.

Of course, if you do have multiple implementations of your abstract types, it's important that they all behave as advertised, otherwise you may encounter runtime exceptions. Ensuring that any implementation you write that inherits from another type, whether an interface or a class, behaves as advertised means following the Liskov Substitution Principle, or LSP. Following LSP is much easier when the base type's behavior is fairly small. Large interfaces require much more effort to fully implement than smaller ones. The Interface Segregation Principle, or ISP, suggests keeping interfaces small and cohesive, so that client code doesn't need to depend on behavior it doesn't use.
Done properly, the interfaces you create while implementing the strategy design pattern should be tightly focused. That brings us to the Dependency Inversion Principle, or DIP. This is really what the strategy pattern is all about. Whereas the initial code was tightly coupled to a specific implementation, the refactored version of the original method now depends on an abstraction. Instead of the original method deciding how to do the work, the code that calls the method makes that decision by deciding which implementation of the interface to provide. If you're familiar with dependency injection, then the strategy pattern should already be familiar to you. Make sure you're comfortable with pulling out dependencies when you discover them, though. The extract-method and interface-creation aspects of the strategy pattern aren't always emphasized when dependency injection is discussed.

We're out of time for this week, but I'll mention that the strategy pattern also helps with the DRY principle by creating a single place for a particular implementation to live, as well as the Explicit Dependencies Principle, by ensuring classes request their dependencies rather than hiding them in their methods. You can learn more about these principles from the show notes at weeklydevtips.com/019.

Would your team or application benefit from an application assessment, highlighting potential problem areas and identifying a path toward better maintainability? Contact me at ardalis.com and let's see how I can help.

Show Resources and Links
Strategy Pattern
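To make the refactoring concrete, here's a minimal sketch with invented names; the interface is the extracted abstraction, and the caller supplies the implementation:

```csharp
// The extracted abstraction: small and focused, per ISP.
public interface IShippingCalculator
{
    decimal Calculate(decimal weightKg);
}

// The original logic, moved to its own class (SRP).
public class StandardShippingCalculator : IShippingCalculator
{
    public decimal Calculate(decimal weightKg) => 5.00m + 0.50m * weightKg;
}

public class OrderProcessor
{
    private readonly IShippingCalculator _shipping;

    // The caller decides which strategy to use (DIP); new behavior means a
    // new implementation, not a change to this class (OCP).
    public OrderProcessor(IShippingCalculator shipping) => _shipping = shipping;

    public decimal TotalWithShipping(decimal subtotal, decimal weightKg) =>
        subtotal + _shipping.Calculate(weightKg);
}
```

A unit test can now pass in a fake IShippingCalculator, which is exactly why this pattern underpins unit-testable code in strongly typed languages.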

Ep 18: Repository Tip - Encapsulate Query Logic
Repository Tip - Encapsulate Query Logic The Repository design pattern is one of the most popular patterns in .NET development today. However, depending on its specific implementation, its benefits to the system's design may vary. One thing to watch out for is query logic leaking out of the repository implementation.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Last week I talked about Design Patterns in general, and how in most cases it makes sense to have basic familiarity with a breadth of patterns, but to go deep on the ones that are most valuable in your day-to-day development. Repository is one of a handful of patterns I've found to be useful in virtually every ASP.NET app I've been involved with over the last ten years or so. Before I knew about this pattern, I'd already learned that separation of concerns was a good idea, and that having a separate layer or set of types for data access was beneficial. The biggest benefit you get by using a Repository instead of a Data Access Layer or static DB helper class is reduced coupling, because you can follow the Dependency Inversion Principle. By the way, if you're not familiar with any of these terms or principles, there are links on the show notes page at weeklydevtips.com/018, where you'll also find a link to my recommended generic repository implementation. This week's tip assumes you're already at least basically familiar with the repository pattern.

Recently, I've been spending most of my time helping a variety of teams to write better software, and a pretty common issue I find for those apps using the repository is that query logic can leak out. This can result in code and concept duplication, which violates the Don't Repeat Yourself, or DRY, principle. It can also result in runtime errors if query expressions are used that LINQ-to-Entities cannot translate into SQL. The most common reason for this issue is repository List implementations that return IQueryable results. An IQueryable result is an expression, not a true collection type. It can be enumerated, but until it is, the actual translation from the expression into a SQL query isn't performed. This is referred to as deferred execution, and it does have some advantages. For instance, if you have a repository method that returns a list of customers, and you only want those whose last name is 'Smith', it can dramatically reduce how much data you need to pull back from the database if you can apply the LastName == Smith filter before the database query is made.

But where are you going to add the query logic that says you only want customers named 'Smith'? That sort of thing is often done in the UI layer, perhaps in an MVC Controller action method. For something very simple, it's hard to see the harm in this. But imagine that instead of filtering for customers named 'Smith', you were instead writing a filter that would list the optimal customers to target for your next marketing campaign, using a variety of customer characteristics and perhaps some machine learning algorithms. Once you start putting your query logic in the UI, it's going to start to multiply, and you're going to have important business logic where it doesn't belong. This makes your business logic harder to isolate and test, and makes your UI layer bloated and harder to work with. The problem with the IQueryable return type from repositories is that it invites this kind of thing.
Developers find it easy to build complex filters using LINQ and lambda expressions, but rarely take the time to see whether they're reinventing the wheel with a particular query. The fact that this approach can easily be justified because of the benefits of deferred execution, and perhaps the notion that the underlying repository List method is benefiting greatly from code reuse, only exacerbates the problem. The underlying problem with returning IQueryable is that it breaks encapsulation and leaks data access responsibilities out of the repository abstraction where they belong.

Rather than returning IQueryable, repositories should return IEnumerable or even just List types. Doing so consistently will ensure there is no confusion among developers as to whether the result of a repository is an in-memory result or an expression that can still be modified before a query is made. But then how do you allow for different kinds of queries, without performing them all in memory? There are a few different approaches that can work, and I'll cover them in future tips, but the simplest one is to add additional methods to the Repository as needed. This is often a good place to start, as it is simple and discoverable. In the example I'm using here, the CustomerRepository class could have a new method called ListByLastName added to it, which accepts a lastName parameter and returns all customers with that last name. Likewise, a collection of customers fitting certain characteristics could be exposed through its own intention-revealing method.
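Here's a minimal sketch of that approach, assuming EF Core; the Customer entity and context names are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string LastName { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();
}

public class CustomerRepository
{
    private readonly AppDbContext _db;

    public CustomerRepository(AppDbContext db) => _db = db;

    // Returns a materialized List, not IQueryable: the filter still runs
    // in the database, but the query logic stays encapsulated here.
    public List<Customer> ListByLastName(string lastName) =>
        _db.Customers.Where(c => c.LastName == lastName).ToList();
}
```

Callers get an in-memory result with no ambiguity about whether the query has already run, and the 'Smith' filter lives in exactly one place.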

Ep 17: On Design Patterns
On Design Patterns Design Patterns offer well-known, proven approaches to common problems or situations in software application development. Having a broad knowledge of the existence of patterns, and at least a few you're proficient in, can dramatically improve your productivity.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
I'll admit I've been a fan of design patterns for a long time. The idea of design patterns transcends software development, and in fact the so-called Gang of Four book, Design Patterns, takes its organization and inspiration from the 1977 book, A Pattern Language. That book, by Christopher Alexander, describes common patterns in towns, buildings, and construction methods, but the idea that there are common patterns to solving similar problems applies equally to software as well as traditional building construction and architecture.

One thing that really appeals to me about design patterns is their ability to reduce waste. As software developers, we tend to want to increase efficiency and productivity, and one of the most frustrating parts of writing software (for me, at least) is when I'm stuck on a problem. This frustration is even greater when it's a problem I feel like I should know the answer to, or that I know is relatively common, so someone has solved it before. Design patterns are a great way to help you avoid reinventing the wheel (or, in many cases, a giant square that you're hoping will work as a wheel). Unfortunately, you can't always use your usual search engine skills to come up with a design pattern. You usually have to at least be aware that it exists so that you can start to recognize scenarios where it might apply. Once you know that a pattern exists, and have at least a vague sense of when it's used, then you can easily search for more information on how to apply the pattern when you think you might have a situation that warrants it.

Thus, the first step in your path to pattern mastery is exposure. You need to spend at least a little bit of time learning the names of the patterns that exist, and where they're used. If you haven't already, you'll probably find a few design patterns that you use all the time. You can go deep in your knowledge of how and when to use these patterns. Last week, I talked about becoming a T-shaped developer as a means of differentiating yourself among competitors. Your knowledge of design patterns should have a similar T-shape, but for different reasons. You want the wide breadth of knowledge so you can speak intelligently about patterns and know what terms to search for when you want to go deep. But many patterns have fairly specific uses, so there's no need for you to try and become an expert in all of them if you're not solving the kinds of problems for which some patterns are designed.

I'm sure I'll have more tips about design patterns in upcoming shows, but one last reason why you owe it to yourself to gain at least a cursory knowledge of them is their value as a higher-level language tool. When you know a design pattern by name, and how and why one would use it, you can discuss possible solutions with your team in a far more efficient and clear manner. The actual implementation of many patterns can involve several different types organized in a particular fashion, usually with specific inheritance or composition relationships. How these types are used by your system is another aspect of the pattern's implementation.
Without knowing the pattern and its name, communicating a proposed solution to another developer would require describing at least most of this detail. However, if both developers are familiar with the pattern in question, one can simply say to the other, "have you thought about applying the XYZ pattern here?" and convey the same intent with less chance for confusion and with fewer words. If you want to learn more about design patterns, I recommend the Design Pattern Library on Pluralsight as a good place to start. You can also reach out to me if you think your team would benefit from a private workshop on design patterns.

Show Resources and Links
Design Patterns by Gamma, Helm, Johnson, Vlissides (Gang of Four)
A Pattern Language by Christopher Alexander
Design Patterns Library (Pluralsight)

Ep 16: Becoming a T-Shaped Developer
Becoming a T-Shaped Developer It's difficult to differentiate yourself if you don't have a single area of expertise. Either you'll have difficulty landing work or you'll be forced to compete with a host of other non-specialists on rate. By becoming a T-shaped developer, you can market yourself as an expert in a particular area and stand out from the crowd!

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
In this episode, I'm going to talk about what it means to be a "T-Shaped" developer. But before we get into that, let's talk a little bit about how software developers typically market themselves, and how companies post job openings, using some real data and numbers. Let's consider a pretty common, but vague, job description: "web developer". Let's search for jobs using this term on a few different sites. We'll leave location out of the search - remote work is becoming increasingly acceptable and it's just easier to compare numbers if we don't restrict by location. Looking at Indeed.com, there are 40,000 jobs matching this search string. LinkedIn's Job Search has over 14,000. GlassDoor finds 110,000. Monster.com will only tell us there were over 1000 results found, but it's a good guess it was a lot more than 1000. The point is, there are a huge number of positions out there that match the search term (or exact job title) of 'web developer'. If you identify primarily as simply a 'web developer', you're in a crowd of hundreds of thousands. The good news is, there's definitely demand for people to fill that kind of role. The bad news is, how do you convince a particular client that you're the best candidate for their 'web developer' vacancy, if that's as far as you've gone in differentiating yourself?

When you're marketing a product, one that isn't creating a brand new market segment, it can be useful to identify how big the market for that kind of product is. Say you're looking to enter the footwear business. It's good to know that there are billions of dollars spent by millions of customers on footwear every year. However, when you go to actually sell your footwear, you're probably not going to try to market it to "people who buy shoes" - you're going to niche down to a particular segment. Maybe basketball-playing teens who aspire to be NBA players. Maybe outdoor fanatics who want the best hiking shoes. Maybe fashion-conscious women who will pay a premium for comfort. You'll sell more shoes by appealing to specific demographics of buyers than by trying to appeal broadly to any shoe-buyer.

People can only remember one or two leaders in a given market niche, and as a marketer you want your product to occupy one of those positions. If you can't be the #1 or #2 for the market, you need to pick a smaller, more focused market in which you can occupy that position. Think about it for automobile companies. What's the most successful automobile company? In my opinion there's not even a clear winner here. What if we narrow it down to trucks? Many of you would probably say Ford or Chevrolet. How about electric cars? I would argue Tesla has done an excellent job of being first in mind as the electric car manufacturer, even though last year Nissan sold more electric vehicles globally than did Tesla. As a developer, you are the product you're trying to sell. You have a set of skills and experience that you can bring to bear when presented with a problem.
There is a wide range of skills that most developers need to know, but don't need to be expert in. Visualize a horizontal line representing the breadth of skills you have. Now make the line thicker at the bottom by a few units to represent the relatively shallow depth of knowledge you have for most skills. You work with source control, but you're not known throughout the industry for your source control skills. You can apply CSS to HTML, but you're not writing books about how to apply CSS to HTML. You're competent with C#, or JavaScript, or PHP, but again you're not a well-known expert in them.

Now think about a particular skill or passion you have that goes beyond mere competence. Maybe you could have a podcast all about your git knowledge and the dark arts of mastering its intricacies. Maybe you could write a book about the most powerful ways to use CSS selectors to achieve amazing results. Whole programming languages might be tough to become well-known for (think Jon Skeet for C#, for example), but you could position yourself as the go-to expert in lambda expressions or arrow functions or a particular design pattern. Whatever skill you already have, or could have, that's where you're going to go deep with your knowledge. Visualize that thick horizontal line representing your shallow knowledge of a wide variety of topics, and now draw a much deeper vertical line dropping down from its center, forming a 'T' shape. This T-shape represents your skills as a T-shaped developer.

Ep 15: Maintain Legacy Code with New Code
Maintain Legacy Code with New Code Many developers work in legacy codebases, which are notoriously difficult to test and maintain in many cases. One way you can address these issues is by trying to maximize the use of new, better designed constructs in the code you add to the system.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Legacy code can be difficult to work with. Michael Feathers defines legacy code in his book, Working Effectively with Legacy Code, as "code without tests", and frequently it's true that legacy codebases are difficult to test. They're often tightly coupled, overly complex, and weren't written with modern understanding of good design principles in mind. Whether you're working with a legacy codebase you've inherited, or one you wrote yourself over some period of time, you probably have experienced the pain that can be involved with trying to change a large, complex system that suffers from a fair bit of technical debt and lacks the safety net of tests.

There are several common approaches to working with such codebases. One simple approach, that can be appropriate in many scenarios, is to do as little as possible to the code. The business is running on it, none of the original authors are still with the company, nobody understands it, so just keep your distance and hope it doesn't break on your watch. Maybe in the meantime someone is working on a replacement, but you have no idea if or when that might ever ship, and anyway you have other things you need to work on that are less likely to keep you at work late or bring you in on the weekends. I don't have any solid numbers on how much software falls into this category, but I suspect it's a lot.

The second approach is also common, and usually takes place when the first one isn't an option because business requirements won't wait for a rewrite of the current system. In this case, developers must spend time working with the legacy system in order to add or change functionality. Because it's big, complex, and probably untestable, changes and deployments are stressful and error-prone, and a lot of manual testing effort is required. Regression bugs are common, as tight coupling within the system means changes in one area affect other areas in often inexplicable and unpredictable ways. This is where I think the largest amount of maintenance software development takes place, since, let's face it, most software running today was written without tests but still needs to be updated to meet changing business needs.

A third approach some forward-thinking companies take, understanding the risks and costs involved in full application rewrites, is to invest in refactoring the legacy system to improve its quality. This can take the form of dedicated effort focused on refactoring, as opposed to adding features or fixing bugs. Or it can be a commitment to follow the Boy Scout Rule such that every new change to the system also improves the system's quality by improving its design (and, ideally, adding tests). Some initial steps teams often take when adopting this approach are to ensure source control is being used effectively and to set up a continuous integration server if none is in place. An initial assessment using static analysis tools can establish the baseline quality metrics for the application, and the build server can track these metrics to help the team measure progress over time.
This approach works well for systems that are mission-critical and aren't yet so far gone into technical debt that it's better to just declare "technical bankruptcy" and rewrite them. I've had success working with several companies using this approach - let me know if you have questions about how to do it with your application.

Now let's stop for a moment and think about why working with legacy code is so expensive and stressful. Yes, there's the lack of tests which limits our confidence that changes to the code don't break things unintentionally, but that's based on a root assumption. The assumption is that we're changing existing code and therefore, other code that depends on it might break unexpectedly. What if we break down that assumption, and instead minimize the amount of existing code we touch in favor of writing new code? Yes, there's still some risk that our changes to allow incorporating our new code might cause problems, but outside of that, we're able to operate in the liberating zone of green field development, at least on a small scale.

When I say write new code, I don't mean go into a method, add a new if statement or else clause, and start writing new statements in that method. That's the traditional approach that tends to increase complexity and technical debt. What I'm proposing instead is that you write new classes. You put new functionality into types and methods that didn't exist before. Since you're writing brand new classes, you know that no other code depends on them yet.
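As a sketch of what this can look like (all names invented for illustration): the new behavior lives in a brand-new, fully testable class, and the legacy method gains only a single delegating line:

```csharp
// New abstraction and implementation: green-field code, easy to unit test.
public interface IDiscountPolicy
{
    decimal Apply(decimal subtotal);
}

public class LoyaltyDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal subtotal) => subtotal * 0.95m; // 5% off
}

public class LegacyOrderCalculator
{
    // ...years of tangled, untested logic elsewhere in this class...

    public decimal CalculateTotal(decimal subtotal)
    {
        // The only touch to legacy code: delegate to the new class.
        return new LoyaltyDiscountPolicy().Apply(subtotal);
    }
}
```

The new class has no dependency on the legacy mess, so it can be written test-first even when the surrounding codebase can't be.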

Ep 14: Smarter Enumerations
Smarter Enumerations Enumerations are a very primitive type that are frequently overused. In many scenarios, actual objects are a better choice.

Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript
Enums are an extremely common construct in applications. They provide a simple way to give labels to numeric values. They're especially useful for efficiently capturing a set of flag values by using binary AND and OR operations on values set to powers of 2. However, as primitive value types, they don't have the capability to add behavior to the values they represent, and this often results in a particular flavor of the primitive obsession code smell that I discussed in episode 12.

One of the first signs that you're stretching the limits of an enum in C# is if you find that you want to display the names associated with the values to the user, and some of the names should have spaces in them when displayed. For example, you might have a Roles enum that includes a SalesRepresentative name. If you display that in a dropdown list in the UI, you'll want to have a space between Sales and Representative. There are a few hacky ways to achieve this. The first would be to parse the name of the enum and insert spaces anywhere you find capital letters in the middle of the string. Another common one is to add an attribute that contains the user-friendly version of the enum's name, and if this attribute is present, use it when displaying the enum's name. Both of these can work, but they're not ideal. They both require more code outside of the enum, making it harder to work with, and scattering logic related to the enum into other types.

While we're on the topic of displaying enum values to end users, another fairly common requirement in this area is to control which enum options are displayed to the user. Once again, you can use attributes to control this behavior, or maybe even some kind of naming convention for the enum labels (maybe add a Visible or Hidden suffix and then strip off the suffix when displaying the name). As you can guess, both of these approaches just lead you further down the path of cluttering up your non-enum code to accommodate the lack of behavior within the enums themselves. What you really need is a better abstraction.

Enumeration Classes
The pattern I favor is the SmartEnum class, also known as the Strongly Typed Enum Class. With this pattern, you start with a class definition that includes the basic capabilities of an enum type, such as having a simple name and value. Then, you define the set of available options as static properties on the class. For example, if you were creating a Roles enumeration class, you would add static properties on the Roles class for things like Administrator or SalesRepresentative. These static properties would be of type Roles (or Role, as you prefer). Working with these static instances mirrors working with enums. You can simply type Roles (dot) and your IDE will show you the set of static properties that represent the possible options, just the same as an enum.

Since you're representing your options as a class, you now have the ability to add any behavior you require. If you need to display the value in a certain way, you can add a property or method to do so. If you need to add metadata that will determine when or whether a particular option is visible or available to a given user, you can add this as well.
When you do, the business logic you're adding is encapsulated within the enumeration class, rather than spread throughout your user interface code. If you're looking to get started with this approach, I've created a GitHub repo and Nuget package at Ardalis.SmartEnum. I've also written several articles over the years on this topic that I'll add to the show notes for this episode, which you'll find at weeklydevtips.com/014. Show Resources and Links SmartEnum (GitHub) SmartEnum (Nuget) Listing Strongly Typed Enum Options in C# Enum Alternatives in C#

Ep 13 Be Thankful and Show Gratitude
Be Thankful and Show Gratitude It's highly unlikely that you're a software developer who works in a vacuum. Here are a few tips for showing your gratitude to the people, companies, products, and tools that help you to be successful. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos. Show Notes / Transcript Last year around Thanksgiving I published an article about showing gratitude as a software developer. I'll link to it in the show notes and I encourage you to read it if you find this topic interesting. The topic of showing gratitude and being thankful, specifically as software developers, remains relevant today, so I thought it worth revisiting. Since you're listening to this podcast, I'm going to go out on a limb and assume you're a software developer. A programmer. A coder. Maybe that's not your title, and maybe it's not even your main responsibility, but you've built software. In building that software, regardless of your platform or language of choice, you've almost certainly leveraged a wide variety of resources that helped you along the way. You may not even realize, or perhaps you've taken for granted, some of the things that helped you. As the saying goes, sometimes you don't know how much you miss something until it's gone. Many of the most valuable resources we have available to us are provided freely by others. If those others feel unappreciated, they may take their passion and energy elsewhere, so don't assume that just because someone isn't charging you money for their efforts, they don't value the non-monetary things you might do in return. Let's consider a few simple examples to highlight this point. One is StackOverflow. You've probably used it, since it's the de facto standard question and answer site for software development. When you find that answer you were looking for, try to give it an upvote. And while you're at it, vote the question up, too, since someone had to ask it in order for you to get the answer you needed. Some publications, like Medium, provide a way for you to show appreciation by liking or clapping for an article. Be sure to show your support for content you find valuable by taking advantage of these features. In addition, you can share content you find interesting on social media with a quick tweet or post on Facebook or your own blog (thus producing some additional content of your own). Of course, for a podcast like this one, leaving a review in iTunes or Stitcher is highly appreciated (assuming it's a good review). Reviews help your favorite podcasts get discovered by more people, and also encourage publishers to keep producing content. It can be difficult sometimes to record content in a vacuum and send it out to the Internet, not knowing who is actually listening to it, or how they're feeling about it. It's very different from public speaking because of this lack of feedback. Reviews, as well as comments on individual show pages, are one way you can let publishers know they're being heard and appreciated. You're probably using some open source tools as part of your development. Most open source projects I work with today are hosted on GitHub. If you find a particular project helpful or interesting, see if you can help support it. In GitHub, starred repositories are easier for you to find later. In addition, from their docs, "Starring a repository also shows appreciation to the repository maintainer for their work.
Many of GitHub's repository rankings depend on the number of stars a repository has. For example, repositories can be sorted and searched based on their star count." Of course, you can also take to social media or any of the other things I mentioned to show support, as well as offering to help by adding issues, fixing issues via pull requests, or helping to document the project. Often end users can provide extremely valuable documentation since the maintainer of the project may not realize the ways in which many developers use their library or tools. By showing appreciation for the tools and resources you use to be successful, you're doing a few things. You're helping to ensure these (generally free) resources continue to exist. This is obviously good for you. You're also setting an example for others, who may do the same, which magnifies your own contributions to further help support these resources. Again, good for you. You're also potentially developing positive relationships within the developer community. Who knows which tweet, comment, or pull request of yours expressing gratitude will lead to a connection that culminates in a new contract or job opportunity. People get invited to help support projects they support. People want to work with supportive, helpful people. Aside from "being nice" or being "the right thing to do", actively showing gratitude within your professional community costs you nearly nothing but can yield tangible benefits in

Ep 12 Primitive Obsession
Primitive Obsession Primitive Obsession describes code in which the design relies too heavily on primitive types, rather than solution-specific abstractions. It often results in more verbose code with more duplication of logic, since logic cannot be embedded with the primitive types used. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Show Notes / Transcript Primitives refer to built-in types, like bool, int, string, etc. The primitive obsession code smell refers to overuse of primitive types to represent concepts that aren't a perfect fit, because the primitive supports values that don't make sense for the element they're representing. For example, it's not unusual to use a string to represent a ZIP Code value or a Social Security Number. Many systems will use an int to represent a value that cannot be negative, such as the number of items in a shopping basket. In such a case, if the system even bothers to enforce the invariant stating that shopping basket quantity must be positive, it must do so somewhere other than in the type representing the quantity. Ideally, the shopping basket or basket item type would enforce this, but again in many designs the shopping basket item quantity is simply a property that can be set to anything. In that case, any service, UI call, etc. that manipulates a basket item would first need to ensure the quantity was being set properly. This can result in a great deal of duplicate code, with the usual technical debt that arises when you violate the Don't Repeat Yourself principle. In some places, someone will forget to perform the checks, or they'll perform them differently, and bugs will creep in. Or the rules will be updated, but not everywhere, which results in the same inconsistent behavior. When you work with too primitive an abstraction, you end up having to code around this deficiency every time you work with the type. Encapsulation I've talked about encapsulation before - it's obviously an important concept in software design. By choosing to represent a concept with a primitive, you give up the ability to leverage encapsulation when working with this concept in your solution. The biggest problem with primitive obsession is that it results in a lot of behavior being added around the types in question, rather than encapsulated within them. Instead of having to check, probably in many places, that Quantity is positive or that a string represents a valid ZIP code, it's far better to create a type to represent the concept in question, along with its rules. Such types should typically be immutable value objects that cannot be created in an invalid state (and thus need not be validated where they are passed in as parameters). It's useful to have easy ways to cast primitives to and from these value objects, but this should be done only at the edges of the application (user input/output, persistence). Try to use the value object as much as possible within your actual business logic or domain model, rather than a primitive representation of the type. You can make working with your new type about as easy as working with the primitive it's replacing by making sure you override its ToString method. You can also handle comparisons and equality, and configure implicit and explicit casting operators. Jimmy Bogard wrote an article about 10 years ago that describes how to do exactly this for a simple ZIP Code type in C# - there's a link in the show notes.
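A minimal value object along those lines might look like the following sketch (the five-digit rule is an assumption for illustration; Bogard's linked article covers a fuller treatment):

using System;
using System.Text.RegularExpressions;

public sealed class ZipCode : IEquatable<ZipCode>
{
    private readonly string _value;

    public ZipCode(string value)
    {
        if (value == null) throw new ArgumentNullException(nameof(value));
        if (!Regex.IsMatch(value, @"^\d{5}$"))
            throw new ArgumentException("ZIP Code must be exactly five digits.", nameof(value));
        _value = value;
    }

    // Conversions belong at the edges; inside the domain model, pass ZipCode around.
    public static explicit operator ZipCode(string value) => new ZipCode(value);
    public static implicit operator string(ZipCode zipCode) => zipCode._value;

    public override string ToString() => _value;
    public bool Equals(ZipCode other) => other != null && _value == other._value;
    public override bool Equals(object obj) => Equals(obj as ZipCode);
    public override int GetHashCode() => _value.GetHashCode();
}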
Yes, you'll end up with a dozen or so lines of code in your ZIP Code class instead of just using a string, but any logic that relates to ZIP Codes will also live in this class, rather than being scattered throughout your application. When you represent a concept in your system with a primitive type, you're asserting that the concept can be represented by any value that type can hold. If you expose method signatures that accept primitive values, the only clue you might offer to clients of that method could be the names of the parameters. Invalid values might not immediately be discovered, or if they are, the related errors might be buried within the behavior of the method, rather than immediately apparent. If instead you use a separate value object to represent a concept, a method that accepts parameters using this type will be much easier for clients to work with. If there are exceptions related to type conversion, they will be discovered immediately when the client attempts to create an instance of the value object, and this behavior will be consistent everywhere, unlike different methods that may or may not perform validity checks on their inputs. You can learn more about the primitive obsession code smell and literally dozens of others, along with how to refactor them, in my Pluralsight course, Refactoring Fundamentals. Show Resources and Links Encapsulation Don't Repeat Yourself Refactoring for C# Developers Refactoring Fundamentals Dealing with Primitive Obsession - Jimmy Bogard Design Smell: Primitive Obsession - Mark Seemann

Ep 11 Encapsulating Collection Properties
Encapsulating Collection Properties Encapsulation is a key aspect of object-oriented programming and software engineering. Unfortunately, many systems fail to properly encapsulate collection properties, resulting in reduced quality. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos. Show Notes / Transcript Encapsulation essentially means hiding the inner workings of something and exposing a limited public interface. It helps promote more modular code that is more reliable, since verifying the public interface's behavior provides a high degree of confidence that the object will interact properly with collaborators in a system. One area in which encapsulation often isn't properly followed is with collection properties. Collection Properties Any time you have an object that has a collection of related or child objects, you may find this represented as a collection property. If you're using .NET and Entity Framework, this property is often referred to as a navigation property. Some client code can fetch the parent object from persistence, specify to EF that it should load the related entities, and then navigate from the parent object to its related objects by iterating over an exposed collection property. For example, a Customer object might have a set of Orders they've placed previously. This could be represented most simply by having a public List property on the Customer class. This property must expose a getter, and in many cases system designs will have it expose a public setter as well. In that case, any code in the system would be able to set a Customer's order collection to any list of Orders, or to null. This could obviously result in undesired behavior. Some developers might offer token resistance to this total lack of encapsulation by removing the setter (or making it private), but the damage is done as long as the property exposes a List data type, with all of its mutable functionality. This kind of design exposes too much functionality from the Customer, since it inherently allows any client code that works with a Customer to directly add or remove an order to/from the Customer, or to clear all orders from the Customer. In these cases, the Customer object in question has no way of controlling, preventing, or even detecting these changes to its Orders collection. Why is this important? Well, there is probably a decent amount of workflow involved in placing a new order for a customer. It's probably not sufficient to simply add a new order without any additional work. Now, you can argue that somewhere there's a service that does all the required work, but how does the object model enforce the use of said service? If any client code can instantiate an order and add it to a customer, how is the design of the system leading developers toward doing the right thing (using a service, in this case)? On the other hand, if there is no way to directly add an order to a customer, developers will probably quickly discover that there is a service for this purpose, and it's more likely that this service will provide the only way of adding new orders to customers. In most cases, there are only certain operations on related collections that an object should expose, and these it probably wants to have direct control over. If Customer collaborators shouldn't be able to directly delete all of a customer's orders, don't expose the collection as a List. Instead, expose a ReadOnlyCollection, or an IEnumerable.
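Here's a minimal sketch of that encapsulation (the Customer and Order names follow the example above; the workflow details are placeholders):

using System;
using System.Collections.Generic;

public class Order { }

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    // Collaborators can read and iterate the orders, but cannot Add, Remove, or Clear.
    public IReadOnlyCollection<Order> Orders => _orders.AsReadOnly();

    public void AddOrder(Order order)
    {
        if (order == null) throw new ArgumentNullException(nameof(order));
        // Any workflow rules for placing an order are enforced here, in one place,
        // under the Customer's control.
        _orders.Add(order);
    }
}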
Both EF 6 and EF Core support properly encapsulating collection navigation properties, so don't feel like you have to expose List types in order to keep EF happy. Check out the links in the show notes at WeeklyDevTips.com/011 to see how to configure EF to support proper collection encapsulation. Show Resources and Links Encapsulated Collections in EF Core Exposing Private Collection Properties to Entity Framework Encapsulation Exposing Collection Properties

Ep 10 Pain Driven Development
Pain Driven Development Pain Driven Development, or PDD, is the practice of writing software in such a way that you only "fix" problems when they are causing pain, rather than trying to preempt every possible issue. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Show Notes / Transcript Many of you have probably heard of various "DD" approaches to writing software. There's TDD, or Test Driven Development. There's BDD, for Behavior Driven Development. In this tip, I want to introduce you to another one, PDD: Pain Driven Development. Pain Driven Development Software development is full of principles, patterns, and best practices. It can be tempting, especially when you've recently learned about a new way of doing things, to want to apply it widely to maximize its benefits. Some time ago, when XML was a new thing, for instance, Microsoft went all-in with it. They decided to "XML ALL THE THINGS" and in some places, this was great. And in many cases, not so much. In my own experience, I find this is often the case when I'm learning a new design pattern or trying to fully understand a particular principle. It can be easy, when you're constantly on the lookout for applications of recent knowledge, to find excuses to apply these techniques. One particular set of principles that many object-oriented programmers know are the SOLID principles. I have a course on SOLID on Pluralsight that I encourage you to check out, which covers these principles in depth. One thing that is worth remembering, though, is that you shouldn't, and honestly can't, apply all of the principles to every aspect of your software. You need to pick your battles. You need to actually ship working software. You don't know when you begin a project where extension is going to be necessary, so you can't anticipate every way in which you might support the Open-Closed Principle for every class or method in your program. Build and ship working software, and let feedback and new requirements guide you when it comes to applying iterative design improvements to your code. When you're back in the same method for the Nth time in the last month because yet another requirement has changed how it's supposed to work, that's when you should recognize the pain your current design is causing you. That's where Pain Driven Development comes into play. Refactor your code so that the pain you're experiencing as a result of its current design is abated. Extreme Programming introduced the concept of YAGNI, or You Ain't Gonna Need It. PDD is closely aligned with this concept. YAGNI cautions against building things you might need in the application, and instead favors building only what's required today (but in a responsible manner, so you can revise the design in the future). PDD offers similar guidance, but from a different perspective. The message with PDD is, follow YAGNI and build only what is required today, but recognize when you'll "need it" by the pain the current design causes you as you try to work around/with it. Well-designed code is enjoyable to work with. If you frequently find yourself frustrated with the code you're working with, see if you can identify the source(s) of the pain, and apply refactoring techniques to alleviate the problem. Show Resources and Links Pain Driven Development (PDD) SOLID Principles of OOD Refactoring Fundamentals

Ep 9 Data Transfer Objects (part 2)
Data Transfer Object Tips (Part 2) One classification of objects in many applications is the Data Transfer Object, or DTO. Here are some more tips that may help you avoid problems when using these objects. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos. Show Notes / Transcript Last week we talked about the definition of a DTO and how they're typically used. This week we'll cover a few more common problems with them and offer some Dos and Don'ts. Mapping and Factories It's fairly common to need to map between a DTO and another type, such as an entity. If you're doing this in several places, it's a good idea to consolidate the mapping code in one place. A static factory method on the DTO is a common approach to this. Note that this isn't adding behavior to the DTO, but rather is just a static helper method that we're putting on the DTO type for organizational purposes. I usually name such methods with a From prefix, such as FromCustomer(Customer customer) for a CustomerDTO type. There's a simple example in the show notes for episode 8.

public class CustomerDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public static CustomerDTO FromCustomer(Customer customer)
    {
        return new CustomerDTO()
        {
            FirstName = customer.FirstName,
            LastName = customer.LastName
        };
    }
}

You can also use a tool like AutoMapper, which will eliminate the need to use such static factory methods. I usually quickly end up moving to AutoMapper if I have more than a couple of these methods to write myself. What about attributes? It's common in ASP.NET MVC apps to use attributes from the System.ComponentModel.DataAnnotations namespace to decorate model types for validation purposes. For example, you can add a Required attribute to a property, and during model binding, if that property isn't supplied, an error will be added to a collection of validation errors. Since these attributes don't impact your ability to work with the class as a DTO, and since typically the DTO is tailor-made for the purpose of doing this binding, I think it's perfectly reasonable to use these attributes for this purpose. You can rethink this decision if at some point the attributes start to cause you pain. Follow Pain Driven Development (PDD): if something hurts, take a moment to analyze and correct the problem. Otherwise, keep on delivering value to your customers. If you're not a fan of attribute-based validation, you can use Fluent Validation and define your validation logic using a fluent interface. You'll find a link in the show notes. Keeping DTOs Pure Avoid referencing anything other than primitive types and other DTOs from your DTOs. Other references can pull in dependencies that can make it difficult to secure your DTO. In some cases, they can introduce security vulnerabilities, such as if you have methods accepting input as DTOs, and these DTOs reference entities that your app is directly updating in the database. An attacker could guess at the structure of the entity and perhaps its navigation properties and could add or update data outside of the bounds of what you thought you were accepting. Take care in your update operations to only update specific fields, rather than model binding an entity object from external input and then saving it.
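As a sketch of that last point (the repository interface and service class are invented for illustration), copy only the intended fields from the DTO onto the entity:

public interface ICustomerRepository
{
    Customer GetById(int id);
    void Update(Customer customer);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    // Only FirstName and LastName can ever be changed through this path,
    // no matter what extra data a caller posts.
    public void UpdateCustomerName(int customerId, CustomerDTO input)
    {
        var customer = _repository.GetById(customerId);
        customer.FirstName = input.FirstName;
        customer.LastName = input.LastName;
        _repository.Update(customer);
    }
}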
DTO Dos and Don'ts Let's wrap up with some quick dos and don'ts for Data Transfer Objects:
Don't hide the default constructor.
Do make properties available via public get and set methods.
Don't validate inputs to a DTO.
Don't add instance methods to your DTO.
Do consolidate mapping logic into static factories.
Do consider moving to AutoMapper if you have more than a few such factory methods.
Do feel free to use attributes to help with model validation.
Don't reference non-DTO types, such as entities, from DTOs.
Show Resources and Links AutoMapper Pain Driven Development (PDD) Fluent Validation

Ep 8 Data Transfer Objects (part 1)
Data Transfer Object Tips (Part 1) One classification of objects in many applications is the Data Transfer Object, or DTO. Here are some tips that may help you avoid problems when using these objects. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos. Show Notes / Transcript Data Transfer Objects, or DTOs, are, as the name suggests, objects whose main purpose is to transfer data. In Object Oriented Programming, we typically think about objects as encapsulating together state, or data, and behavior. DTOs have the distinction of being all about the data side of things, without the behavior. Why do we need DTOs? DTOs are used as messages. They transfer information from one part of an application to another. Depending on where and how they transfer information, they might have different names. Often, they're simply referred to as DTOs. In some cases, you may see them characterized as View Models, API Models, or Binding Models. Not all view models in MVC apps are DTOs, but many can and probably should be. For instance, in an ASP.NET MVC application, you typically don't want to have any behavior in the ViewModel type that you pass from a controller action to a view. It's just data that you want to pass to the view in a strongly typed fashion. If you're following the MVVM pattern to build apps using WPF or something similar, then your ViewModel in that scenario is supposed to have behavior, not be a DTO. Ideally we'll come up with a better name for ViewModels in MVC apps, but obvious choices like ViewData are already overloaded. Why shouldn't DTOs have behavior? DTOs don't have behavior because if they did, they wouldn't be DTOs. Their entire purpose is to transfer data, not to have behavior. And because they are purely data objects, they can easily be serialized and deserialized into JSON, XML, etc. Your DTO's data schema can be published and external systems can send data to your system in a wire format that your system can translate into an instance of your DTO. If your DTO has behavior on it, for instance to ensure its properties are only set to valid values, this behavior won't exist in the string representation of the object. Furthermore, depending on how you coded it, you might not be able to deserialize objects coming from external sources. They might not follow constraints you've set, or you might not have provided a default public constructor, for instance. The goal of DTOs is simply to hold some state, so you can set it in one place and access it in another. To that end, the properties on a DTO should all have public get and set methods. There's no need to try to implement encapsulation or data hiding in a DTO. That's it for this week. Next week I'll talk some more about DTOs and provide a list of Dos and Don'ts.
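Because a DTO is pure state, round-tripping it through a serializer is trivial. Here's a quick sketch using System.Text.Json (any serializer works; CustomerDTO is the type from the part 2 notes above):

using System.Text.Json;

var dto = new CustomerDTO { FirstName = "Ada", LastName = "Lovelace" };

string json = JsonSerializer.Serialize(dto);
// {"FirstName":"Ada","LastName":"Lovelace"}

CustomerDTO roundTripped = JsonSerializer.Deserialize<CustomerDTO>(json);

Because the type is just public getters and setters with a default constructor, nothing is lost in either direction.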

Ep 7 Prefer Custom Exceptions
Prefer Custom Exceptions Low level built-in exception types offer little context and are much harder to diagnose than custom exceptions that can use the language of the model or application. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Show Notes / Transcript Given the choice, avoid throwing basic exception types like Exception, ApplicationException, and SystemException from your application code. Instead, create your own exception types that inherit from System.Exception. You can also catch common but difficult-to-diagnose exceptions like NullReferenceException and wrap them in your own application-specific exceptions. You should think about your application exceptions as being part of your domain model. They represent known bad states that your system can find itself in or have to deal with. You should be able to use your ubiquitous language to discuss these exceptions and their sources within the system with your non-technical domain experts and stakeholders. Let's talk about a few different examples. Throwing Low-Level Exceptions Consider some code that does the following:

public decimal CalculateShipping(string zipCode)
{
    var area = GetAreaFromZipcode(zipCode);
    if (area == null)
    {
        throw new Exception("Unknown ZIP Code");
    }
    // perform shipping calculation
}

The problem with this kind of code is that client code attempting to catch exceptions resulting from the shipping calculation is forced to catch generic Exception instances, instead of a more specific exception type. It takes very little code to create a custom exception type for application-specific exceptions like this one:

public class UnknownZipCodeException : Exception
{
    public string ZipCode { get; private set; }

    public UnknownZipCodeException(string message, string zipCode) : base(message)
    {
        ZipCode = zipCode;
    }
}

In fact, in many cases you can create an overload that sets a standard default exception message, so you're consistent and your code is more expressive with fewer magic strings. Add this overload to the above exception, for instance:

public UnknownZipCodeException(string zipCode) : this("Unknown ZIP Code", zipCode)
{
}

And now the original code can change to:

public decimal CalculateShipping(string zipCode)
{
    var area = GetAreaFromZipcode(zipCode);
    if (area == null)
    {
        throw new UnknownZipCodeException(zipCode);
    }
    // perform shipping calculation
}

Now client code can easily catch and handle the UnknownZipCodeException type, resulting in a more robust and intuitive design. Replace Framework Exceptions with Custom Exceptions An easy way to make your software easier to work with, both for your users and for developers, is to use higher level custom exceptions instead of low level exceptions. Low level exceptions like NullReferenceException should rarely be returned from business-level classes, where most of your custom logic should reside. By using custom exceptions, you make it much more clear to everybody involved what the actual problem is. You're working at a higher abstraction level, using the language of the business domain. For example, let’s say you’re writing an application that works with a database. Perhaps it’s an ASP.NET Core application in the medical or insurance industry, and it references individual customers as Subjects. Within some business logic dedicated to creating an invoice, recording a prescription, or filing a claim, there’s a reference to the Subject Id that is invalid.
When your data layer makes the request and returns from the database, the result is empty.

var subject = GetSubject(subjectId);
subject.DoSomething();

Obviously in this code, if subject is null, the last line is going to throw an exception (you can avoid this by using the Null Object Pattern). Let’s further assume that we can’t handle this exception here – if the subject id is incorrect, there’s nothing else for this method to do but throw an exception, since it was going to return the subject otherwise. The current behavior for a user, tester, or developer is this:

Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.

One of the most annoying things about the NullReferenceException is that it is so vague. It never actually specifies which reference, exactly, was not set to an instance of an object. This can make debugging, or reporting problems, much more difficult. In the above example, we’re not specifically throwing any exception, but we are allowing a NullReferenceException to be thrown in the event that we’re unsuccessful in looking up a Subject for a given ID. It’s still a part of our design to rely on NullReferenceException, though in this case it’s implicit. What if instead of returning null from GetSubject we threw a SubjectNotFoundException? Or if we weren’t sure that an exception made sense in every scenario, what if we checked for null and then threw a better exception before moving on to work with the returned subject, like in this example:

var subject = GetSubject(subjectId);
if (subject == null) t
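A sketch of where that's heading, assuming an integer Subject Id (the SubjectNotFoundException name comes from the episode; the details are illustrative):

using System;

public class SubjectNotFoundException : Exception
{
    public int SubjectId { get; }

    public SubjectNotFoundException(int subjectId)
        : base($"No Subject found with Id {subjectId}.")
    {
        SubjectId = subjectId;
    }
}

// At the call site, the failure now names the actual problem:
//   var subject = GetSubject(subjectId);
//   if (subject == null) throw new SubjectNotFoundException(subjectId);
//   subject.DoSomething();

Now a log entry or bug report says exactly which concept failed and with what Id, instead of a vague NullReferenceException.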

Ep 6 Make It Work. Make It Right. Make It Fast.
Make It (Work|Right|Fast) Don't fall into the premature optimization trap. Follow this sequence when developing new features. Sponsor - DevIQ Thanks to DevIQ for sponsoring this episode! Show Notes / Transcript There's a three-step process that I first heard of from Kent Beck. Following these steps when implementing a new feature can help you remain focused on getting the work done, and can avoid falling into the trap of premature optimization. The First Step: Make it work The first step is to make it work. Since we're talking about software, there is no cost of materials. You can make the code do what it's supposed to do in whatever ugly, messy manner you want, so long as it works. Don't waste time worrying about whether your approach is ideal, your code elegant, or your design patterns perfect. If you can see multiple ways to do something, and you're not sure which is best, pick one and go with it. You can leave a TODO comment or make a note in your notebook that you keep with you as you code if you think it's important enough to revisit. Otherwise, when you're done, be sure it works, and works repeatably. You should have some kind of automated tests that demonstrate that it works. I should probably also note that this process works best with small units of work. In fact, kanban demonstrates that your overall process will be improved if you work on the smallest scoped work items you can. You should be able to follow all three of these steps multiple times per day. If you're spending days or longer just trying to make it work, you need to come up with a smaller "it" and get the reduced scope item to work, first. Then move on to steps two and three before continuing on with the larger scoped work. The Second Step: Make it right Once you have a working solution, and an inexpensive way to ensure it remains working while you modify it, follow the refactoring fundamentals to improve your code's design. Look for code smells. Follow software principles. Make sure it's good enough that when you return to it, you'll be able to understand and change it without too much effort (or someone else will be able to do so). Tests serve as a great form of documentation, especially if you name them well. If you think you need more tests, or you need to better organize your tests, this is the time to do so. But stop when you have enough tests that when they're green, you're confident your code does what it should. Don't chase some arbitrary metric beyond this point, when you could be delivering more value in the form of more features or bug fixes. The Third Step: Make it fast If it's not already fast enough (in terms of performance), now is the time to measure and tune the application's performance. Performance characteristics of the system should be described just like other system requirements, and effort should be made on improving performance only until these measurable requirements are met (otherwise, how will you know when you're done?). For some applications, there is great ROI for every small bit of performance improvement. This is true of large, public ecommerce sites like Amazon.com, where they've measured customer cart abandonment levels increasing based on milliseconds of additional latency. However, most applications have less stringent requirements, and in many cases users who have no choice but to use the system for their job. In such cases, you want to provide the user with good enough performance, but remember that beyond good enough is waste.
If users don't really notice the difference between 1 second page load times and 800ms page load times, you probably don't need to spend several hours trying to trim 200ms when that time could have been spent fixing a bug that's been plaguing users for weeks. Summary Your key takeaways from this episode should be: Work on small pieces of work. For each piece: make it work, make it right, make it fast. Stop working on the code as soon as it works. Stop cleaning it up and adding tests as soon as you're confident it works and is clean enough to maintain next time someone needs to touch it. Stop tuning its performance as soon as it's good enough. If you follow these steps, you'll stay as productive as possible, you'll ship quality software, and you won't get mired in analysis paralysis or gold plating your code. Check the show notes at weeklydevtips.com/006 for a bunch of links to more information on many of the things I mentioned in this episode. Show Resources and Links Kanban: Getting Started Refactoring Fundamentals List of Code Smells List of Software Principles Unit Test Naming Convention Measuring and Tuning Web Performance Beyond Good Enough is Waste

Ep 5 New is Glue
New is Glue Be wary of the new keyword in your code, and recognize the decision you're making by using it. Show Notes / Transcript This week we're going to talk about the phrase, "New is Glue". I came up with this phrase a few years ago and wrote a blog post about it, and have incorporated it into a number of my conference presentations and training courses on Pluralsight and DevIQ. The goal of the phrase is to stick in developers' heads so that when they are writing or reviewing code that directly instantiates an object, they understand that this is tightly coupling their code to that particular object implementation. Let's step back for a moment and talk about coupling, and then return to why tight coupling and the new keyword can be problematic. Coupling Your software is going to be coupled to a variety of things in order to function. Software that has no coupling is too abstract to actually do anything. Some things, like your choice of framework or language, are likely tight coupling decisions you're happy with because you've considered the alternatives and are satisfied that you can achieve your application's goals in the language and framework of choice. Other things, like the infrastructure your application depends on, might be tightly or loosely coupled depending on how you write your code. Code that is tightly coupled to a particular file system or a particular vendor's database tends to be more difficult to test, change, and maintain than software that is loosely coupled to its infrastructure. Although coupling is unavoidable, you want to make a conscious decision about whether and where you want to tightly couple your code to specific implementations, rather than having accidental coupling spread throughout your codebase. Couplers There are typically three ways in which code references specific implementations in a tightly coupled manner. Each of these violates the Open-Closed Principle since functions that use these techniques cannot change their implementation logic without the code itself being changed. The first two are making calls to static methods and the closely related technique of referencing a Singleton pattern instance, particularly when accessed using a static Instance method (making this just a variation of the static method call). The third is the use of the new keyword to directly instantiate a particular instance type. Your application needs to work with certain implementation types in order to be useful, but the decision of which implementation types to use should be made in ideally one place, not scattered throughout your system. When is new a problem? Using the new keyword can be a problem when the type being created is not just a POCO, or Plain Old CLR Object, and especially when the type has side effects that make the calling logic difficult to test in isolation. If you find yourself instantiating a string or a DateTime in a function, or a custom DTO, that's probably not going to make that function any more difficult to test. The fact that you're gluing your code to that particular implementation is not a large concern, so you should recognize it but move on because it isn't a major risk to the software's maintainability. However, if you discover a function that is directly instantiating a SqlConnection that, during its construction, immediately tries to connect to a database server and throws an exception if it can't find one, that presents a much bigger risk.
Without special tools, there's no simple way to test the calling code in a way that doesn't involve setting up a real database for it to try to connect to. Further, the lack of abstraction around the connection may make it more difficult to implement connection pooling, caching, or a different kind of data store. If you find code like this in your data access layer, in a class that implements an interface, then it's probably fine and in the right location. If you find it in your business logic or UI layer, you should think long and hard before you decide it's a good risk to tightly couple to a particular data technology from this location in your code. Is new bad? The point of "New is Glue" is not to say that "new is bad", but rather to raise awareness. You should think about the fact that you're tightly coupling the current function or class to a particular implementation when you use the new keyword. In many cases, after briefly considering this, you'll conclude that you're fine with the coupling and continue on. But in some cases you may realize that you're about to add tight coupling to an implementation, and the current code isn't where you want to make that decision. In that case, you'll want to refactor your code so that it works with an abstraction and some other code is responsible for providing the actual implementation the function or class will use. What about in tests? Often, our unit tests become very brittle and slow us down when there is a lot of repetition in them. Repeatedly instantiating types to be tested within each test is a
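To illustrate the refactoring the episode points toward - depending on an abstraction while other code chooses the implementation - here's a minimal sketch (the names are invented for illustration):

// Business logic depends on an abstraction rather than new-ing up a SqlConnection.
public interface ICustomerData
{
    string GetCustomerName(int customerId);
}

public class GreetingService
{
    private readonly ICustomerData _customerData;

    // The implementation choice (SQL, in-memory fake, etc.) is made by whoever
    // constructs this class - ideally in one place, such as a DI container setup.
    public GreetingService(ICustomerData customerData)
    {
        _customerData = customerData;
    }

    public string Greet(int customerId) => $"Hello, {_customerData.GetCustomerName(customerId)}!";
}

A unit test can now hand GreetingService a fake ICustomerData without any database, and swapping data stores means changing one composition point rather than every call site.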

Ep 4 Guard Clauses
Your methods should fail fast, if doing so can short-circuit their execution. Guard clauses are a programming technique that enables this behavior, resulting in smaller, simpler functions. Show Notes / Transcript Complexity in our code makes it harder for us and others to understand what the code is doing. The smallest unit of our code tends to be the function or method. You should be able to look at a given function and quickly determine what it's doing. This tends to be much easier if the function is small, well-named, and focused. One factor that's constantly working against you as you try to keep your functions manageable is conditional complexity (a code smell) - if and switch statements. When not properly managed, these two constructs can quickly cause functions to shift from simple and easily understood to long, obtuse, and scary. One way you can cut down on some of the complexity is through the use of guard clauses. A guard clause is simply a check that immediately exits the function, either with a return statement or an exception. If you're used to writing code such that you check to ensure that everything is valid for the function to run, then you write the main function code, and then you write else statements to deal with error cases, this involves inverting your current workflow. The benefit is that your code will tend to be shorter and simpler, and less deeply indented. Imagine you have a method that performs a subscribe operation, and takes in three objects: a user, a subscription, and a term. Naturally, you want to ensure that these objects are not null before you start working with them. One way to structure the method would be to first check if the user is not null. Then, inside this if statement, check if the subscription is not null. And in this statement, check if the term is not null. Now in this statement, we are in what is sometimes referred to as the "happy path" for the function. Assuming the values of these objects are otherwise valid, the expected work of the function can take place in this block. Each of the different cases of invalid inputs should result in an appropriate exception being thrown, though, and so these require else blocks. There will be an else block for the term != null check, another for the subscription != null check, and finally an else block for the user != null check. This results in a fair amount of plumbing code and complexity that adds to the noise of the function and obscures its true purpose - to subscribe a user to a subscription. Refactor to reduce nesting and else statements The first way to address this is to eliminate the need for the else clauses. You can do this by inverting the if checks and putting the exception throwing statements at the start of the function instead of at the end. The first thing you check is if the user is null. If it is, throw an exception. No need for an else clause and no need for a nested if statement. Move on to the next argument. Check it, throw if it's null. Do the same for the third parameter. When you're done, you have 3 simple if statements, none of which are nested, and no else clauses. The function now fails fast, and after the input checks, the happy path for the function is whatever remains. Refactor to use Guard helper methods Checking for null arguments is a common enough task in strongly typed programs that you can probably encapsulate it in its own helper function. I use a common static class I call Guard which provides static helper methods for common scenarios.
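A minimal sketch of such a helper, limited to the null check case (the AgainstNull method is walked through next; the GuardClauses package in the show notes is more full-featured):

using System;

public static class Guard
{
    // Throws ArgumentNullException naming the offending argument; otherwise does nothing.
    public static void AgainstNull(object input, string parameterName)
    {
        if (input == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}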
For instance, in the example I just described, I'm going to want to throw an ArgumentNullException with the name of whichever argument was null in each of the three possible cases. Thus, it's very easy to write a method I call AgainstNull that simply takes in the argument and the argument's name, and throws an ArgumentNullException if the argument is null. This exception will bubble up and out of the calling method. When you implement this in the scenario I described, you end up with code that reads like this:

Guard.AgainstNull(user, nameof(user));
Guard.AgainstNull(subscription, nameof(subscription));
Guard.AgainstNull(term, nameof(term));

You can add additional Guard helper methods for any other common cases you need to check for, such as empty strings, negative numbers, invalid enum values, etc. If statements can take over your functions if you're not careful, making them much harder to understand and thus much harder to maintain. Cyclomatic complexity refers to the total number of paths through a given function, and should be kept in the low single digits whenever possible. Using guard clauses is one simple way to tame complexity in your functions and keep them smaller, simpler, and easier to maintain. Show Resources and Links Guard Clauses GuardClauses Nuget Package Weekly Dev Tips Email Code Smells and Refactoring Course Refactoring for C# Developers

Ep 3 Listen Faster
Listen (and learn) Faster If you can do it without getting left behind, listen or watch educational content at a higher speed. Show Notes / Transcript I've always been interested in speed reading. As a child, it seemed like a super-power, since it would dramatically increase how quickly I could consume information, giving me more time to do other things. On a related note, I often have wished for a nice substitute for sleep that didn't have nasty side effects. But I digress... Listening faster If you're listening to this episode on a phone or mobile device, the app you're using most likely has an option to change the speed. I try to record these at a fairly measured pace, even if I'm otherwise animated or excited by the topic, because I want to make sure they're understandable even to those of you for whom English is not your first language. However, for those of you who can manage it, I encourage you to listen faster by adjusting the play speed to 1.25 or 1.5x, or even faster if you can manage it. If you're not sure how to configure a particular player, I cover a few options in an article on my blog about listening faster. Look for it in the show notes. If you're in a web browser on the show's site, there should be a little 1x icon on the right side of the player. Clicking it will cycle you through different speeds. Give it a shot and pick one that's comfortable for you. Watching faster Of course, there's also a lot of great content online. Whether it's YouTube, DevIQ, or Pluralsight, you can learn a lot about programming and your career in software development from video content. Here, too, you can usually adjust the playback speed. By adjusting the speed from 1x to 1.5x, you can consume a 30 minute presentation in just 20 minutes. Over time, these gains really add up and can make the difference between you falling behind and you passing by others as you compete to be the best you can be. Counterpoint There are those who disagree, and feel that listening to content at 1.5x (or whatever speed you prefer) messes up the artistic intent of the author. A fairly recent article on The Verge is titled simply Stop listening to podcasts at 1.5x speed. I mention this mainly to disagree with it and to give you my permission, as the author and "artist" involved in this podcast, to listen faster. There may be instances where some subtlety is lost, especially when you're talking about a heavily produced and edited show with multiple speakers involved. I'm going to strive not to be that subtle. My goal for these shows is that they provide you with small, useful, concrete nuggets that you can immediately apply to your work. If you can't consume this information at a faster speed because I'm using too much nuance and subtlety (and not because perhaps English isn't your first language), then I'm failing. Once I've produced one of these podcasts, I'm done with it. My only goal is that a significant number of developers find it, listen to it, and find it useful. The more content you're able to consume, hopefully the more value you're able to get from it. To that end, I encourage you to listen at a speed that works well for you. Show Resources and Links Life Hack: Listen Faster Stop Listening to Podcasts at 1.5x

Ep 2 Check In Often
Check In Often As a developer, you should be using source control. You should probably be using distributed source control. And you should check in, probably more often than you think. Show Notes No matter what specific tool you use for source control, you should be checking in your code to a source control repository. Right now, do you have code that you're working on (or were recently working on) that isn't checked in? How long ago did you check it out? If it's been more than a few hours, that's probably too long to go without checking in your work. Why should you commit often? The more often you commit your code to a central repository, the sooner you'll discover integration issues. This works best if you're using continuous integration, which we'll talk about later, but even without it, someone else on your team may discover an issue with your code sooner if it's checked in somewhere they can get to it sooner. Committing is like ctrl-S. If you grew up using word processors like Word, you've probably developed the muscle memory of hitting ctrl-S to save frequently. All it takes is for your machine or application to crash one time while you have hours of unsaved work for you to realize that you really should be saving all the time. The more often you commit your code (and ideally, push changes off your local machine), the better off you are if the unthinkable happens to your machine. Recovering from dead ends. Of course, an even more common use for frequent commits is that they let you jump back in time when you find that you've gone down a bad path. This, in turn, can give you more confidence to try different approaches to problems or more ambitious refactorings, because you know you can easily jump back to a known good state at any point. One of my favorite sayings is a Turkish proverb, "No matter how far down the wrong path you've gone, turn back now." When you have an easy restore point to jump to, this is easy. When you don't, it can be tempting to keep fighting with an approach you know in your heart isn't great, simply because going backward is going to be at least as painful as struggling forward. Don't use folders for version control. Avoid the urge to use copy folder versioning. You know what I'm talking about. That copy of the code folder you made that appends today's date to it, that's sitting next to a dozen similarly named folders. That's not source control, it's undisciplined and sloppy. That's not to say I'm not guilty of doing it myself (this approach still gives me confidence when updating production code on a live server, for instance). But if you're regularly using source control properly, you won't find the need to do this in your local dev environment. When should you commit? You should be committing your code whenever you've made some tangible progress. Fixing a bug, adding a test or two, implementing a feature, completing a refactoring, cleaning up a bunch of formatting issues. Each of these could easily be represented as a single commit. Now, whether or not all of these end up in your commit history after you merge your changes into the master branch is another question. But while you're actively working locally, it can certainly be useful to have this granular of a commit history available to you. Commits are like game history If you’ve ever played a video game that allowed saving checkpoints, you understand the value of frequently saving your progress. 
If you don’t, and something comes along and kills you, you’ll have to repeat a lot of effort to get back to where you were. Frequent commits in your code will save you from the unexpected just like frequent saves in your games do. Show Resources and Links The Copy Folder Versioning Anti-Pattern Blog Post: Check In Often

Ep 1 Overview of Weekly Dev Tips
Episode 1: Overview of Weekly Dev Tips What is this podcast about, who is it for, and how can you participate in it? If you have questions or comments, join my mailing list and reply to the email you receive.