Archive for April, 2013

A little while ago I mentioned that in his book Microsoft .NET: Architecting Applications for the Enterprise, Dino Esposito says that a working untestable software product is no different from a working testable one, except that the second will have a better design; I am paraphrasing. I also said that I did not agree with the first part of that statement and promised I would lay out my arguments, so here we are.

First of all I want to make perfectly clear that I have the highest respect for Mr. Esposito. I believe he is brilliant and every developer who uses Microsoft’s technologies should read as much of his work as possible. So with that out of the way, let’s begin.

I believe that a software product is a living organism: after it is born it grows, changes, reacts to external stimuli, and can even reproduce in the form of other software that uses some of its components. Of course it cannot do any of this by itself; it is us, the developers, who make it happen, and we do not do it because we have nothing better to do, we do it because someone asked for it. The important thing to keep in mind is that whoever requested the work on that software needs it for one of two reasons: to make money or to save money.

Our customers embark on the arduous, painful, and stressful activity of sponsoring a software product because they have a specific business need, and they hope that their new or enhanced software will eventually produce an increase in sales or profit, an increase in productivity, an improvement of their processes, or something of the sort, which in the end boils down to making or saving money.

To illustrate this, let’s imagine two competing companies, each with a software product that provides all kinds of academic support to schools and universities. A new trend among universities presents an amazing business opportunity to both companies, but they need to enhance their products and add new features in order to support it. Both products were developed in-house, and the original teams still work for their companies, so all the developers know their respective code inside out. Both products work well and their code bases are reasonably clean.

After just a few months the first company has implemented just enough of the new features to provide the new services. They were able to move quickly because every time a modification was ready, their automated tests told them right away which existing features would break; the issues were dealt with immediately, and the QA team did minimal regression testing so they could focus on testing the new requirements in depth.

The second company, on the other hand, is still struggling with its release. Every time a new feature is checked in, the QA team needs to spend a few days on a full regression pass, and when they find either a broken existing feature or a defect in the new ones, the code goes back to the dev team, which spends a few more days figuring out how the new feature broke the old one and how to make the two coexist.

Eventually the second company releases its product. Unfortunately, their competitor’s has been on the market for some time now; the first company has been able to keep its existing customers, win new ones, and even take a few of its competitor’s.

As you can see, in this fairy tale, which can very well happen in the real world, both companies had a working software product. I never said that either of them had a mess in their code; quite the opposite. The only difference was that the first one was designed for testability and had a good suite of tests. Their product helped them increase their sales, so its main objective was fulfilled. The second company, unable to get its product to market quickly enough, lost market share and sales, and the big investment in their software will be recovered later, if at all.

And there you have it: even though the second company had a working software product, in the end it did not work for their benefit. So do yourselves and your employers or customers a huge business favor: insist on designing the product with testability in mind and on having a great suite of tests to protect it.


Just this week I blogged about how to assert the message of an expected exception, and one of the things I mentioned is that quite a few people have proposed different implementations.

Today I found a post from a fellow blogger in which he talks about the MSTest Extensions library. Basically it explains how to install the NuGet package and shows a small example of how to use ExceptionAssert to assert the thrown exception’s message.

Unfortunately this implementation still suffers from the limitations we discussed last time: it can only deal with Actions, so if you want to test a method with a return value, or a constructor, this option will not provide a solution.

Not everything about this option is bad, though; the use of generics is a nice touch, so I thought I could include a couple of improvements in the solution I proposed:

    public static class AssertException
    {
        public static void Is<T>(string message, Delegate action, params object[] parameters) where T : Exception
        {
            try
            {
                action.DynamicInvoke(parameters);
                Assert.Fail(string.Format("Expected exception of type <{0}> with message <{1}> but none was thrown.", typeof(T).Name, message));
            }
            catch (Exception ex)
            {
                if (ex is AssertFailedException || ex is AssertInconclusiveException) throw;

                if (ex.InnerException == null)
                {
                    Assert.IsTrue(ex is T, string.Format("Expected exception type: <{0}> Actual: {1}", typeof(T).Name, ex.GetType().Name));
                    Assert.AreEqual(message, ex.Message, true, CultureInfo.InvariantCulture);
                }
                else
                {
                    Assert.IsTrue(ex.InnerException is T, string.Format("Expected exception type: <{0}> Actual: {1}", typeof(T).Name, ex.InnerException.GetType().Name));
                    Assert.AreEqual(message, ex.InnerException.Message, true, CultureInfo.InvariantCulture);
                }
            }
        }

        public static void Is(Exception expected, Delegate action, params object[] parameters)
        {
            try
            {
                action.DynamicInvoke(parameters);
                Assert.Fail(string.Format("Expected exception of type <{0}> with message <{1}> but none was thrown.", expected.GetType().FullName, expected.Message));
            }
            catch (Exception ex)
            {
                if (ex is AssertFailedException || ex is AssertInconclusiveException) throw;

                if (ex.InnerException == null)
                {
                    Assert.IsTrue(expected.GetType().IsInstanceOfType(ex), string.Format("Expected exception type: <{0}> Actual: {1}", expected.GetType().Name, ex.GetType().Name));
                    Assert.AreEqual(expected.Message, ex.Message, true, CultureInfo.InvariantCulture);
                }
                else
                {
                    Assert.IsTrue(expected.GetType().IsInstanceOfType(ex.InnerException), string.Format("Expected exception type: <{0}> Actual: {1}", expected.GetType().Name, ex.InnerException.GetType().Name));
                    Assert.AreEqual(expected.Message, ex.InnerException.Message, true, CultureInfo.InvariantCulture);
                }
            }
        }
    }

As you can see, the class now has two overloaded methods, a generic and a non-generic version. The names of both the class and the method changed so the calls read a little better, and both overloads now support inherited exceptions.
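For instance, because ArgumentNullException derives from ArgumentException, the improved helper now lets a test assert on the base type. A quick sketch against the post’s own Foo class:

```csharp
// ArgumentNullException derives from ArgumentException, so asserting
// on the base type now passes as well:
AssertException.Is<ArgumentException>("Value cannot be null.\r\nParameter name: name",
    new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
    1, string.Empty);
```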

This is what the use of this class will now look like:

        [TestMethod]
        public void ConstructorThrowsExceptionWhenArgumentsAreInvalid()
        {
           AssertException.Is<ArgumentNullException>("Value cannot be null.\r\nParameter name: number",
              new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
              null, "any");

           AssertException.Is<ArgumentNullException>("Value cannot be null.\r\nParameter name: name",
              new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
              1, string.Empty);

           AssertException.Is(new ArgumentNullException("name"),
              new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
              1, string.Empty);
        }

        [TestMethod]
        public void MethodsThrowExceptions()
        {
            var firstFoo = new Foo(-5, "something");
            AssertException.Is<ApplicationException>("Number should not be negative",
               new Action(firstFoo.DoSomething));

            var secondFoo = new Foo(5, "something");
            AssertException.Is<DivideByZeroException>("Number cannot be divided by zero",
               new Action<int>(secondFoo.DoSomethingElse),
               0);
        }

        [TestMethod]
        public void FunctionsThrowExceptions()
        {
            var foo = new Foo(1, "invalid");

            AssertException.Is<InvalidCredentialException>("Authorization denied",
               new Func<bool>(foo.Validate));

            AssertException.Is<NullReferenceException>("Seriously Foo cannot be null",
               new Func<string, int?, Foo, Foo>(foo.GetNewFoo),
               "any", 3, null);
        }

As you can see, the code now almost reads like regular English. It is still more verbose than the attribute option, but we discussed the advantages and disadvantages of both approaches in the previous post.

A couple of conclusions I can draw: even though we may have a good solution for a particular problem, we can always find inspiration and ideas to improve it, and we should always be humble enough to realize that our solutions can be improved, no matter how much we might like them.

As last time, here you have several options to assert the message of exceptions in your unit tests. Use the one you consider most appropriate. Happy coding.

If you do a quick Google search you will realize that a lot of people out there need to unit test that a piece of code throws an exception with a specific message. Unfortunately the ExpectedExceptionAttribute does not provide this feature. It does have a “message” parameter, but its purpose is to define the message the developer will see when the test fails.
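To make the limitation concrete, here is a quick sketch using the built-in attribute: the string argument is never compared against the thrown exception’s message; it is only displayed in the failure report, for example when no exception is thrown at all.

```csharp
[TestMethod]
// The second argument is NOT matched against the exception's message;
// it is simply extra text shown if the test fails.
[ExpectedException(typeof(ArgumentNullException), "Foo's constructor should have thrown.")]
public void ConstructorThrowsWhenNumberIsNull()
{
    // Passes as long as an ArgumentNullException escapes, no matter
    // what its Message property says.
    new Foo(null, "any");
}
```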

All the people with this need, myself included, have searched long and hard for a way to implement an attribute with this functionality or, at the very least, a way to execute this kind of test reliably and in a way that can be reused consistently for each possible case.

Most implementations I have found are different flavors of catching the exception and comparing its type and message to expected ones, but they are limited in the number or type of parameters, or in the return types they support. I did once find the implementation of an attribute, but I was a little disappointed since I thought it was too much code for something as simple as the objective at hand. Let me be very clear: I am NOT saying this is a bad option, nor am I discouraging its use in any way, shape, or form. I am only saying that, in my very personal opinion, it is too much code for something so simple, and it is code whose workings you should understand since you might have to maintain it.

So I came up with another implementation using delegates, which is smaller than the attribute one and supports both actions and functions with as many parameters as needed. It does have the disadvantage that its use might be a little cryptic; I always strive for code readability, so decorating the test method with an attribute may read a little better. Here is the code, some examples of its use and, for completeness, a Foo class to test it.

This is the code that we are interested in:


public static class CustomAssert
{
   public static void IfExceptionIs(Exception expected, Delegate action, params object[] parameters)
   {
      try
      {
         action.DynamicInvoke(parameters);
         Assert.Fail(string.Format("Expected exception of type <{0}> with message <{1}> but none was thrown.", expected.GetType().FullName, expected.Message));
      }
      catch (Exception ex)
      {
         if (ex is AssertFailedException || ex is AssertInconclusiveException) throw;

         if (ex.InnerException == null)
         {
             Assert.AreEqual(expected.Message, ex.Message, true, CultureInfo.InvariantCulture);
             Assert.AreEqual(expected.GetType(), ex.GetType());
         }
         else
         {
             Assert.AreEqual(expected.Message, ex.InnerException.Message, true,  CultureInfo.InvariantCulture);
             Assert.AreEqual(expected.GetType(), ex.InnerException.GetType());
         }
      }
   }
}

This would be the testing Foo class:

public class Foo
{
   private readonly int? _number;
   private readonly string _name;

   public Foo(int? number, string name)
   {
      if(!number.HasValue) throw new ArgumentNullException("number");
      if(string.IsNullOrEmpty(name)) throw new ArgumentNullException("name");

      _number = number;
      _name = name;
   }

   public void DoSomething()
   {
      if(_number < 0) throw new ApplicationException("Number should not be negative");
   }

   public void DoSomethingElse(int dividend)
   {
      if(dividend == 0) throw new DivideByZeroException("Number cannot be divided by zero");
   }

   public bool Validate()
   {
      if(_name == "invalid") throw new InvalidCredentialException("Authorization denied");
      return true;
   }

   public Foo GetNewFoo(string newName, int? newNumber, Foo oldFoo)
   {
      if(oldFoo == null) throw new NullReferenceException("Seriously Foo cannot be null");
      return new Foo(newNumber, newName);
   }
}

And this is how it would be used:

[TestMethod]
public void ConstructorThrowsExceptionWhenArgumentsAreInvalid()
{
   CustomAssert.IfExceptionIs(new ArgumentNullException("number"),
      new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
      null, "any");

   CustomAssert.IfExceptionIs(new ArgumentNullException("name"),
      new Func<int?, string, Foo>((num, name) => new Foo(num, name)),
      1, string.Empty);
}

[TestMethod]
public void MethodsThrowExceptions()
{
   var firstFoo = new Foo(-5, "something");
   CustomAssert.IfExceptionIs(new ApplicationException("Number should not be negative"),
      new Action(firstFoo.DoSomething));

   var secondFoo = new Foo(5, "something");
   CustomAssert.IfExceptionIs(new DivideByZeroException("Number cannot be divided by zero"),
      new Action<int>(secondFoo.DoSomethingElse),
      0);
}

[TestMethod]
public void FunctionsThrowExceptions()
{
   var foo = new Foo(1, "invalid");

   CustomAssert.IfExceptionIs(new InvalidCredentialException("Authorization denied"),
      new Func<bool>(foo.Validate));

   CustomAssert.IfExceptionIs(new NullReferenceException("Seriously Foo cannot be null"),
      new Func<string, int?, Foo, Foo>(foo.GetNewFoo),
      "any", 3, null);
}

To complete the picture, here are some snippets showing how the tests would look if we were to use the custom attribute:


[TestMethod]
[ExpectedExceptionWithMessage(typeof(ArgumentNullException), "name")]
public void ConstructorThrowsExceptionWhenNameIsEmpty()
{
   new Foo(1, string.Empty);
}

[TestMethod]
[ExpectedExceptionWithMessage(typeof(ApplicationException), "Number should not be negative")]
public void MethodThrowsExceptionWhenNumberIsNegative()
{
   var foo = new Foo(-5, "something");
   foo.DoSomething();
}

Another thing to note: the implementation I am proposing looks for an inner exception, since sometimes the real problem might be hiding in there, while the attribute one supports inherited exceptions. Either feature could easily be added to the other option.
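The inner-exception check, by the way, is not optional here: Delegate.DynamicInvoke wraps whatever the target throws in a TargetInvocationException, so the real exception arrives as InnerException. A minimal, standalone sketch of that behavior:

```csharp
using System;
using System.Reflection;

class DynamicInvokeDemo
{
    static void Main()
    {
        // A delegate whose body always throws.
        Delegate thrower = new Action(() => { throw new InvalidOperationException("boom"); });

        try
        {
            thrower.DynamicInvoke();
        }
        catch (TargetInvocationException ex)
        {
            // The original exception survives as InnerException.
            Console.WriteLine(ex.InnerException.GetType().Name); // prints "InvalidOperationException"
        }
    }
}
```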

So there you have it, two options to assert the message of an expected exception. Use the one you consider best for you, your team and your project and may your unit tests save you from grief and maintenance nightmares.

I dare say that anyone who has worked with software developers has come to the realization that we are very proud people. We take pride in solving challenging problems, in the smart way we implement solutions, sometimes even in the sheer beauty of the code we have written. And we should: software development is not a simple trade. We must develop a strong capacity to abstract the world around us and model it into a working solution using some arcane language that not everybody can understand. In a nutshell, we must be good at describing and communicating our world to machines.

The only problem with that is that some developers take so much pride in these things that they forget why they are doing what they are doing.

Let’s think about it for a moment. There are only two possible reasons someone would be willing to pay huge amounts of money for a piece of software: to make money or to save money. That’s it! That’s all there is to it. Those are the most important goals our incredibly smart solutions should meet. Those are the motivations that drove our customers’ stakeholders to sponsor the project we are currently involved in.

The implications of this are really important. I cannot tell you how many times I have been involved in projects in which the customer wants to enhance their software, add features, adapt it to changes in their industry or markets, or fix defects, only to realize that all of this is going to take a very large amount of resources, mainly time and money, because of the poor quality of the code in their solution.

I usually hear the same reasons as to why this happened. The code might have been produced by an amazingly brilliant developer who put it together in a few hours or days, but it is so complex that he is the only one who knows how to modify it (patch it, and keep patching it). Maybe it was the result of a very tight deadline and the team just did not have enough time to code it properly. Or perhaps the managers were always pushing the developers to finish quickly.

Regardless of the reason, the code is there and now its owner has to invest large amounts of time and money in even the simplest modifications. Guess what? The main goals can no longer be met: the company is spending too much money on its software’s maintenance, so the software is not helping them save money, and it is not helping them make the money it was supposed to.

We, as developers, need to start thinking about Total Cost of Ownership: how much it will cost to own this piece of software over time. Our main objective should be to keep that number as low as humanly possible.

So be proud of the code you write, but focus that pride on the ease with which other developers can add features, fix defects, or modify that code in general without breaking it all over the place. Take pride in the small investment your customer needs to make in its maintenance. Take pride in the huge business the code you wrote is generating for your customer.

Go young grasshopper, code responsibly and make yourself proud.

Nowadays it is really difficult to read anything about software development without stumbling upon a web page, blog, book, or article that discusses software development and design principles. SOLID, KISS, and YAGNI are acronyms that come up in lots of discussions among developers, which is good: it means most people are aware of the importance of good practices for building robust applications that do not end up as a maintenance nightmare. The only problem is that I have seen people use them in such a way that they end up being the justification for poor, tightly coupled, difficult-to-maintain software.

But, I hear you say, how can principles that are supposed to enforce good software be used to produce the opposite? I believe it is part of human nature; history has too many examples of things created for good that ended up being used for evil. In the case of these design principles, I believe a lot of developers really do not want to change the way they have always developed software because, let’s face it, once you are comfortable doing something it is really painful to do the exact same thing in a completely different way. It seems much easier to try to convince others, and ourselves, that our way of doing things is the way that complies with said principles.

Unfortunately, most of the time this is not the way things should be done; it is simply the way things have always been done, the way that has ended in tightly coupled, spaghetti code. In particular, I have been involved in discussions in which people argued, under the YAGNI flag, that good decoupling techniques like IoC or dependency injection should not be used, at least not at the beginning. The “You Ain’t Gonna Need It” idea does hold a lot of promise for not adding unnecessary complexity to a piece of software when no current requirement justifies it. It is helpful when you hear someone say something like “but what if they need to add more types of foo?”, a valid question to which the answer can always be “what if that requirement is never defined?” The problem comes when you hear people say that decoupling infrastructure should only be implemented as the need arises, because it is not needed from the beginning and will only add complexity to the code.

Following this last recommendation, the question that comes to mind is: when will this decoupling be needed? When the code is so tangled up that it would take weeks to refactor it and break all the dependencies? When every attempt to add a feature or fix a defect breaks the application in lots of different places? Maybe when you start hearing developers say they do not want to touch a particular section of the code? If none of these scenarios is something we would like to have, then how can employing proven techniques like IoC or dependency injection, which really do not add that much complexity to the code, be harmful?
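For some perspective on the complexity actually being debated, here is a minimal sketch (the ReportService and IClock names are hypothetical, not taken from any of the discussions mentioned) of the same class with and without constructor injection:

```csharp
using System;

// Tightly coupled version: the dependency on the system clock is hard coded,
// so the output can never be controlled from a test.
public class TightReportService
{
    public string Stamp()
    {
        return DateTime.Now.ToString("s");
    }
}

// Loosely coupled version: the "extra complexity" amounts to one small
// interface and one constructor parameter, and the class becomes testable.
public interface IClock
{
    DateTime Now { get; }
}

public class ReportService
{
    private readonly IClock _clock;

    public ReportService(IClock clock)
    {
        if (clock == null) throw new ArgumentNullException("clock");
        _clock = clock;
    }

    public string Stamp()
    {
        return _clock.Now.ToString("s");
    }
}
```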

Nowadays we can even unit test all kinds of tightly coupled messes thanks to tools like TypeMock and MS Fakes. The problem is that these tools are making some people believe that IoC and dependency injection are no longer needed, because those techniques’ only purpose was to achieve testability and now we can mock hard-coded dependencies. But even with this kind of code fully covered with tests, refactoring, fixing defects, and adding features can be a titanic endeavor, just because we did not take the time to properly design the application from the beginning, in the name of YAGNI.

As professionals, software developers, coders, software engineers, or whatever you want to call us, we need to understand that if someone needs more than a couple of hours just to figure out how the code we wrote works, production code and tests included, then it is not quality code. Having loosely coupled, isolated code aids this understanding, and we need to defend the practices and ideas that have improved the way we develop software, not use them as an excuse to go back to the ones we have already left behind.

In the last few years a lot has been written about the benefits of automated testing, unit testing, TDD, and mocking frameworks. So much has been written, and even demonstrated in the field, that I find it quite surprising, and frightening, that some bloggers claim the effort required to write the tests and let them drive the design is a huge overhead, unnecessary, an increase in the complexity of both the production code and the tests themselves, a bad approach, among other things.

Unfortunately I was unable to find the exact quote, but I believe it was in his book, Microsoft .NET: Architecting Applications for the Enterprise, that Dino Esposito says, and I am paraphrasing, that there is no difference between a working piece of software that is testable and another that is not, but that the one designed with testability in mind will have a much better design. Even though I do not agree with the first part of this claim (the reasons and arguments are the subject of another post), I absolutely agree with the second: when we design our software with testability in mind, the end result is a much more maintainable and loosely coupled design.

With these ideas in mind, the next series of posts will address some of what I consider misconceptions about designing for testability, loose coupling, and the YAGNI and KISS principles. I hope that by the end of them I will have made a small contribution toward making software development a more robust discipline.

Designing for testability does pose a number of challenges that have been addressed in a number of ways. One of them is the Inversion of Control (IoC) principle, more specifically through dependency injection. Another has been the creation of tools like TypeMock and Microsoft Fakes; in a nutshell, these tools can reach into otherwise unreachable code and replace static methods and tightly coupled classes with stubs so that untestable code can be unit tested. We will discuss this type of tool another time; what I am more interested in right now is an online argument, in the comments section of a blog post, between two people about the use of these tools.

The person against the use of these tools was concerned about the unmaintainable, tightly coupled software that could result once the limitations on replacing hard-coded dependencies were gone. The other person, who not only favored these tools but avidly encouraged their use, claimed those concerns stemmed from “the ability for code to be unit testable in classical terms” and that the use of an abstract factory would reduce the coupling even further. This last comment was where I drew the line. Don’t get me wrong: I love the abstract factory pattern and I think it is an invaluable tool to abstract the creation of objects away from the components that will use them. But an abstract factory does not, by itself, loosely couple the code that must decide which concrete factory to instantiate and have it create the required object.

To demonstrate these ideas, let’s think about a smartphone manufacturer that has different product lines, each subdivided into sub-products to satisfy the increasing demands of a hungry market. When the manufactured products reach a certain point in the assembly line, a number of, what do you know, tests have to be performed to make sure each product complies with the company’s quality policies. To do that, all products need a common interface to be handed to the testing process, which is not interested in the exact model of the product.

    public interface IPhone
    {
        void DoSomething();
    }

    public class HighEnd16Gig : IPhone
    {
        public void DoSomething(){}
    }

    public class HighEnd32Gig : IPhone
    {
        public void DoSomething(){}
    }

    public class LowEnd16Gig : IPhone
    {
        public void DoSomething(){}
    }

    public class LowEnd32Gig : IPhone
    {
        public void DoSomething(){}
    }

We can see that we have a high-end and a low-end phone, and that each comes in a 16-gig and a 32-gig storage model. Since the testing process should not care what kind of phone it receives at any given time, we define an abstract factory to create the concrete product before starting the tests.
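Incidentally, the factories below reference StorageSize and ProductType enums that the post never shows; a minimal definition consistent with how they are used would be:

```csharp
// Assumed definitions, inferred from how the factories and
// TestingCell below use these types:
public enum StorageSize
{
    Gig16,
    Gig32
}

public enum ProductType
{
    HighEnd,
    LowEnd
}
```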

    public interface IPhoneFactory
    {
        IPhone Create();
    }

    public class HighEndFactory : IPhoneFactory
    {
        private readonly StorageSize _storageSize;

        public HighEndFactory(StorageSize storageSize)
        {
            _storageSize = storageSize;
        }

        public IPhone Create()
        {
            switch (_storageSize)
            {
                case StorageSize.Gig16:
                    return new HighEnd16Gig();
                case StorageSize.Gig32:
                    return new HighEnd32Gig();
                default:
                    throw new ArgumentException("Unknown high end model.");
            }
        }
    }

    public class LowEndFactory : IPhoneFactory
    {
        private readonly StorageSize _storageSize;

        public LowEndFactory(StorageSize storageSize)
        {
            _storageSize = storageSize;
        }

        public IPhone Create()
        {
            switch (_storageSize)
            {
                case StorageSize.Gig16:
                    return new LowEnd16Gig();
                case StorageSize.Gig32:
                    return new LowEnd32Gig();
                default:
                    throw new ArgumentException("Unknown low end model.");
            }
        }
    }

    public interface IManufacturingProcess
    {
        void ProcessProduct(ProductType productType, StorageSize storageSize);
    }

    public class TestingCell : IManufacturingProcess
    {
        public void ProcessProduct(ProductType productType, StorageSize storageSize)
        {
            //Do some stuff.

            IPhoneFactory factory;
            switch (productType)
            {
                case ProductType.HighEnd:
                    factory = new HighEndFactory(storageSize);
                    break;
                case ProductType.LowEnd:
                    factory = new LowEndFactory(storageSize);
                    break;
                default:
                    throw new ArgumentException("Unknown Product Type");
            }

            var product = factory.Create();
            ExecuteTests(product);
        }

        private void ExecuteTests(IPhone product)
        {
            product.DoSomething();
        }
    }

This is a very good design: the component that starts the testing process is completely abstracted from the creation mechanism of the different types of products. But there is still one problem: this component is tightly coupled not only to the abstraction but to the two concrete implementations as well. It decides which kind of factory to instantiate to obtain the product it will send to the testing process, so if new product lines are added it will have to be modified to include the new concrete factories, which is a big maintenance issue. Even worse, this is just one step of the whole process; other processes will have to implement the same logic to determine which factory to use.

A much better approach would be to inject the factory into this component so it is completely agnostic of the kind of factory it will be calling.

    public interface IManufacturingProcess
    {
        void ProcessProduct();
    }

    public class TestingCell : IManufacturingProcess
    {
        private readonly IPhoneFactory _factory;

        public TestingCell(IPhoneFactory factory)
        {
            if(factory == null) throw new ArgumentNullException("factory");
            _factory = factory;
        }

        public void ProcessProduct()
        {
            //Do some stuff.

            var product = _factory.Create();
            ExecuteTests(product);
        }

        private void ExecuteTests(IPhone product)
        {
            product.DoSomething();
        }
    }

The benefits of this approach are so many that they just cannot be overstated:

  • These factories are used by a lot of components throughout the manufacturing process, so the code to decide which factory to instantiate does not have to be duplicated in each one.
  • If the factories, the products, and their interfaces are each defined in their own assembly, the components that use them only need a reference to the assembly with the interfaces; even if new factories are constantly added, the assemblies that use them will not have to be recompiled, making all of your assemblies truly decoupled.
  • The instantiation of the factories can be isolated in a single place of the application, preferably in the Composition Root.
  • The components that require the factories adhere more closely to the Single Responsibility Principle.
  • The component that will perform the tests can be unit tested in complete isolation without any concern of how the factories create the products.
  • Once you have made your peace with the concepts of abstract factory, dependency injection, and composition root, a nice implementation of all of these concepts working together can be used.
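As a rough sketch of that last point (assuming a simple Main method acts as the composition root rather than a full DI container), the choice of concrete factory is made exactly once, at the application’s entry point:

```csharp
public static class Program
{
    public static void Main(string[] args)
    {
        // Composition root: the only place in the application that knows
        // about the concrete factories. The choice is driven by input or
        // configuration, never by TestingCell itself.
        IPhoneFactory factory = (args.Length > 0 && args[0] == "highend")
            ? (IPhoneFactory)new HighEndFactory(StorageSize.Gig32)
            : new LowEndFactory(StorageSize.Gig16);

        IManufacturingProcess testingCell = new TestingCell(factory);
        testingCell.ProcessProduct();
    }
}
```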

Even though abstract factories are a wonderful tool to isolate and abstract the creation of components from their callers, they are not, by themselves, a way to achieve a loosely coupled design. To this day, the best design technique I have seen to achieve truly loose coupling, not only between individual classes but between full assemblies, is dependency injection. I will not presume to say it is the golden hammer of software engineering, but so far it has proven to be the best we’ve got.