ARCast.TV - Presenter First Pattern (Part 3)

The Discussion

  • joshnuss
    Thanks for the excellent series.

    I was wondering what you guys think about TDD from a perspective of the current tools?

    IMHO it would be nice if there was a Unit Testing/Mocking/C# hybrid language where expectations and assertions were first-class concepts.

    -Josh
  • rojacobs
    ARCast.net - Presenter First Pattern (Part 3)

    Announcer:
    It's Tuesday, June 19, 2007, and you're watching ARCast TV.
    Ron Jacobs: Hey, welcome back to ARCast TV. This is your host, Ron Jacobs, and today we're going back for part three of our talk with the Atomic Object guys, Dave Crosby and Scott Miller, about the Presenter First Pattern.

    I have to tell you, there are some people out there who are true believers in these agile concepts like test-driven development. But since I gave my talk at Tech-Ed, I have run into a lot of people who are serious doubters about this stuff, and they say, "Oh, come on. Test-driven development doesn't really work, and all these patterns are way more code than you need."

    So, it's a big debate. You don't have to agree with me on anything. It's OK. But it's a good idea to listen to both sides and understand what people are saying. I think there are good ideas and arguments on both sides. How's that for being noncommittal? [laughs] All right, but let's get back to it. We're talking about mock objects.

    OK, so we talked about interfaces first. So, once I've defined a View interface, the very cool thing that I did when I spent some time writing code based on your ideas was to create a complete working Model View Presenter triad with no GUI whatsoever, which was kind of interesting. Right, I'm like, "I have a View, but it's not really a View. It's not really a GUI. It's not showing anything." So, this is what you guys call, and other people have called, a mock object. So tell me, what is a mock?
    Scott Miller: A mock, I would say, is an object that stands in for the real object.
    Ron: OK.
    Scott: It's almost always much simpler, and for the purposes that we use a mock, it's essentially a tape machine. It records expectations that you set on it and plays them back at the proper time. Some people use mocks more as stand-in objects to simulate calculations or other behavior, and I think of those more as stubs, that they have some simplified behavior in them.
    Ron: Yeah.
    Scott: To me a mock doesn't try to simulate that behavior but, in fact, like I said, just acts as a recording device.
    Ron: Oh, OK.
    Scott: So that we can set up our expectations and when we run through the code under test, those expectations are played back. If they don't match up with what we expected, then we have a test failure.
    Ron: So, this is exactly the philosophy that NMock and other mock frameworks typically use. You just say, "Here's an interface. I'd like you to mock this." And it actually doesn't give you any ability to inject any behavior into that. All you can do is set your expectations. So by that you mean: this property's going to be called once with this value, this event should fire, whatever. These things should all happen, and sometimes you can even say "in this particular order."
    David Crosby: Which is unusual.
    Ron: Yeah. Then you just, at the end of your test, say, "Play back the tape and tell me that everything happened just like I expected it would."
    Scott: That's right. And we call this interaction-based testing, as opposed to state-based testing, where you're actually checking that the calculations on some class occurred correctly.
    David: Yeah.
    Scott: With state-based testing, you're testing that the exercise of the object under test resulted in the state that you expected. With interaction-based testing, where mocks become really crucial, you're instead testing that the object's behavior was what you expected it to be.

    So, in a typical interaction-based test, if you've got an object that is injected with, say, two interfaces that it uses to accomplish some task, you would set up the test by generating these mocks, usually using an automated generator, injecting them into your object under test, and then setting up your expectations for the single method that you are going to test. So I expect object A's method to be called with this particular value and to return this other specific value. Then I expect object B's method to be called with this specific value.

    Once you set up those expectations, then you go ahead and exercise the method on the target object, the object that you are testing. You call the method under test. It actually goes into the object that you are testing and starts going through the code. Since you have injected it with these mocks, the calls that it makes through these interfaces are going to go to the mocks. And the mocks have already been set up with their expectations. So if you make a call that was not expected, or at the end if there were calls that you did expect but that never happened, then the mock library typically has ways of telling you that this happened and causing the test to fail.
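
    (To make this concrete, here is a minimal C# sketch of the kind of interaction-based test Scott describes, using hand-rolled mocks rather than any particular mocking library. All of the names here -- IDataSource, IPrinter, ReportPresenter -- are invented for illustration and are not code from the episode.)

```csharp
using NUnit.Framework;

public interface IDataSource { int GetReading(string name); }
public interface IPrinter    { void Print(string text); }

// The object under test only coordinates its two injected collaborators.
public class ReportPresenter
{
    private readonly IDataSource source;
    private readonly IPrinter printer;

    public ReportPresenter(IDataSource source, IPrinter printer)
    {
        this.source = source;
        this.printer = printer;
    }

    public void Report(string name)
    {
        int value = source.GetReading(name);  // call through interface A
        printer.Print(name + " = " + value);  // pass the result to interface B
    }
}

// Hand-rolled mocks that simply record how they were used.
class MockDataSource : IDataSource
{
    public string NameAskedFor;
    public int ValueToReturn;
    public int GetReading(string name) { NameAskedFor = name; return ValueToReturn; }
}

class MockPrinter : IPrinter
{
    public string Printed;
    public void Print(string text) { Printed = text; }
}

[TestFixture]
public class ReportPresenterTest
{
    [Test]
    public void GetsReadingAndSendsItToThePrinter()
    {
        MockDataSource source = new MockDataSource();
        source.ValueToReturn = 7;
        MockPrinter printer = new MockPrinter();
        ReportPresenter presenter = new ReportPresenter(source, printer);

        presenter.Report("temperature");

        // Verify the interactions, not any internal state.
        Assert.AreEqual("temperature", source.NameAskedFor);
        Assert.AreEqual("temperature = 7", printer.Printed);
    }
}
```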
    Ron: Nice, nice. So you mentioned stubs there, which is also important from a philosophical standpoint in TDD. And I had a discussion on one of our TDD mailing lists with some folks about this. So in our example with the readings, you are going to need some data. You are going to need some bogus readings to test with, to kind of get everything working. I was asking people, should I create an XML file, just a little sample XML file, that we go and actually load up and then push that around? Or would you just start with a stub that creates an XML document in memory and hands that off? What's the philosophy there?
    Scott: Well, I can tell you how I would do it. I wouldn't even be concerned with XML. If one of my tasks was to read an XML file and translate that into data, to me that's a separate story. The story you are asking me about is: given this file name, I want to read it and generate the data that comes from it.

    So, in this particular case, the model will be injected with an XML reader interface. For a test we would set up expectations on that XML reader interface: the XML reader gets set with a file name, the XML reader's GetNode method is called, the XML reader's GetElement method is called, and as the result of those calls the XML reader will return me the value 7.

    And then when I exercise the model, it will go through the XML interface, calling those methods as I had expected, assuming I have everything coded right. And then since I set up the expectation, the result is 7, it will in fact return 7. And then the model will have that data, and then whatever tests you need to do after that, once it has gotten the data, you go on from there.

    So you completely abstract away the XML-ness of the data for testing of the model. Because the model does not particularly care what format the data is in, or how it is going to extract the data from this data source. That's somebody else's job.

    The XML reader would have its own set of tests and at that level you would probably get into either having this XML hard coded in memory, or maybe even having an XML file stored on disk somewhere in a known state. So that would be the XML reader's job and the XML reader's test that would use that sort of data, but not this model.
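
    (A rough sketch of the test Scott describes might look like the following: the model only ever sees an IXmlReader interface, and the test hands it a fake reader wired to return 7. The interface and member names are approximations of the spoken description, not code from the project.)

```csharp
using NUnit.Framework;

public interface IXmlReader
{
    void SetFileName(string fileName);
    string GetNode(string name);
    int GetElement(string name);
}

public class ReadingsModel
{
    private readonly IXmlReader reader;
    public ReadingsModel(IXmlReader reader) { this.reader = reader; }

    public int LoadReading(string fileName)
    {
        reader.SetFileName(fileName);       // the model never touches XML itself
        reader.GetNode("readings");
        return reader.GetElement("value");  // whatever the reader says, the model believes
    }
}

class FakeXmlReader : IXmlReader
{
    public string FileNameGiven;
    public void SetFileName(string fileName) { FileNameGiven = fileName; }
    public string GetNode(string name) { return name; }
    public int GetElement(string name) { return 7; }   // the canned expectation
}

[TestFixture]
public class ReadingsModelTest
{
    [Test]
    public void ReadsValueThroughTheXmlReaderInterface()
    {
        FakeXmlReader reader = new FakeXmlReader();
        ReadingsModel model = new ReadingsModel(reader);

        int value = model.LoadReading("readings.xml");

        Assert.AreEqual("readings.xml", reader.FileNameGiven);
        Assert.AreEqual(7, value);   // the XML-ness is completely abstracted away
    }
}
```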
    David: It's not really a question of which one you use.
    Scott: Yes.
    David: Do you use state-based testing or interaction-based testing? Do you use stubs or mocks? It's about when it is appropriate, based on your design principles, to use which. Because when you get out toward the end, this object here takes a file and gives me back an XML DOM structure.

    I wouldn't bother mocking out the file system or mocking out the DOM. I would be interested in knowing that my utility class is capable of using the .NET XML parsing libraries, and so I would set that up as more of a state-based test. Maybe you stub the file system layer out. Sometimes that is helpful if you're running in a test environment that doesn't have access to an open file system.
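
    (A state-based test in the spirit David describes might look like this: exercise the thin wrapper against the real .NET XML parser with hard-coded XML and assert on the result. XmlReadingParser is a hypothetical utility class, not code from the episode.)

```csharp
using NUnit.Framework;
using System.Xml;

public class XmlReadingParser
{
    public int ReadValue(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);                                     // real .NET parsing, no mocks
        XmlNode node = doc.SelectSingleNode("/readings/value");
        return int.Parse(node.InnerText);
    }
}

[TestFixture]
public class XmlReadingParserTest
{
    [Test]
    public void ParsesValueFromInMemoryXml()
    {
        XmlReadingParser parser = new XmlReadingParser();
        int value = parser.ReadValue("<readings><value>7</value></readings>");
        Assert.AreEqual(7, value);   // checking resulting state, not interactions
    }
}
```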

    Sometimes that makes things a little trickier. We've had heated debates about state-based versus interaction-based testing, and Fowler's big piece on it leaves him on the fence. He says, "I'm going to stay on the fence."

    Well, if he were to do a product with us, he would see that when you're writing your Presenters, and you're writing Presenters first, you have to do interaction-based testing. You can't build a test for a Presenter in terms of a fully functional model, because it hasn't even been built yet, let alone the dozens of objects that it may require.

    Then when you get out into objects that thinly wrap third-party APIs or .NET core library objects, you have no business mocking those at all. There's no business mocking out the socket connection you're sending SQL statements over.
    Scott: You mock and you don't stub.
    David: Well...right.
    Ron: [laughs] OK, so the principle is you don't need to test the .NET base class libraries. They do what they do. They might have bugs, but you don't need to test them. So at some layer I go, "Well, here's my IReadings model interface." If I mock that, meaning I can't supply any implementation, I've got to get some data to actually test it with from somewhere.
    Scott: You make it up.
    Ron: Where do I make it up? If I create a stub, a stub allows me to say: when you call this method, I'm going to return foo as the value, right?
    Scott: It's not that different with a mock. When you set up an expectation on a mock, you tell it what parameters this method will be called with and what it's supposed to return. It checks to make sure that those are the parameters it was called with, and since you've told it what data to return, it does in fact return that.

    You're basically assuming that all your interfaces work perfectly, because we're not testing those. We're only testing this one object that's the object under test.
    Ron: OK so this is the beautiful thing, right? Because what you're talking about is isolating the bounds of the thing we need to test. So we're going to assume the perfect model, the perfect View, we're testing the Presenter and that's all we're testing right now.
    Scott: That's right.
    Ron: Ah, so that totally makes a lot of sense because now it's very clear what we're doing.
    Scott: And where it gets a little bit tricky is, say, in your model: say it does different things depending on what value comes back from your XML reader. Now you have to set up a number of tests that test all the conditions.

    In that regard, it can be similar to state-based testing. You have to figure out how changes in what I expect to get back from this interface are going to affect my model, and to what extent I need to write those tests.

    The easiest rule is that any branch in control flow in the model will typically result in two tests. If you've got "fetch this piece of data" followed by an if statement, and then you do one thing if it's in one range and another thing in another range, that would be two tests.

    One for if it came in this range and the other if it came in that range. It can get a little trickier if you're doing some heavy math calculations where you get some data fluctuations, and you have to figure out exactly what ranges you care about.

    But if it gets too tricky, then you just abstract that down one more level into a helper class, so that this level is only concerned with behavior, and not the specific calculations.

    The lowest level is where you'll typically see most of your state-based testing. At the lowest level the model is just data. There you are in fact doing state-based testing.
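
    (A minimal sketch of the "one branch, two tests" rule, at the lowest, just-data level where state-based testing applies. The class and threshold are invented for illustration.)

```csharp
using NUnit.Framework;

public class ThresholdModel
{
    public string Classify(int reading)
    {
        // One if statement in the model implies one test per side of the branch.
        if (reading > 100)
            return "high";
        return "normal";
    }
}

[TestFixture]
public class ThresholdModelTest
{
    [Test]
    public void ReadingAboveThresholdIsHigh()
    {
        Assert.AreEqual("high", new ThresholdModel().Classify(150));
    }

    [Test]
    public void ReadingAtOrBelowThresholdIsNormal()
    {
        Assert.AreEqual("normal", new ThresholdModel().Classify(100));
    }
}
```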
    Ron: So, the question then is what about exceptions? How far do you go down this road--like you write a test for every possible exception you think you might get?
    Scott: Most of the modern mock libraries allow you to set expectations, that when I call this method an exception is thrown.
    Ron: Yeah.
    Scott: In Java it's real easy, because you have to specify explicitly which exceptions can be thrown.
    Ron: Yeah.
    Scott: In ".net" for instance, it's not as explicit.
    Ron: Yeah.
    Scott: So, you essentially set up expectations for any exception that you conceivably expect could happen.
    Ron: Right.
    Scott: So, when you set up the expectation you say, "I expect this method to be called with these parameters, and expect it to throw this exception."
    Ron: OK.
    Scott: Then you just go on from there. "Well, what else do you expect?" "I expect this to happen in response, and that to happen in response," and then you run through the tests. When it gets to that expectation of throwing--it throws an exception, and jumps into your exception handling code.
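    (Here is a hand-rolled sketch of that kind of test: the fake collaborator is rigged to throw, and the test verifies what the object under test does in response. A mocking library would let you set this up as an expectation instead; all names here are hypothetical.)

```csharp
using System;
using NUnit.Framework;

public interface IReadingsSource { int GetReading(); }
public interface IErrorDisplay   { void ShowError(string message); }

// The object under test reacts to an exception from its collaborator.
public class ReadingsWatcher
{
    private readonly IReadingsSource source;
    private readonly IErrorDisplay errors;

    public ReadingsWatcher(IReadingsSource source, IErrorDisplay errors)
    {
        this.source = source;
        this.errors = errors;
    }

    public void Refresh()
    {
        try
        {
            source.GetReading();
        }
        catch (InvalidOperationException e)
        {
            errors.ShowError(e.Message);   // the behavior we expect in response
        }
    }
}

// A fake collaborator rigged to "explode" with an exception.
class ThrowingSource : IReadingsSource
{
    public int GetReading() { throw new InvalidOperationException("source unavailable"); }
}

class RecordingErrorDisplay : IErrorDisplay
{
    public string LastMessage;
    public void ShowError(string message) { LastMessage = message; }
}

[TestFixture]
public class ReadingsWatcherTest
{
    [Test]
    public void ShowsAnErrorWhenTheSourceThrows()
    {
        RecordingErrorDisplay errors = new RecordingErrorDisplay();
        ReadingsWatcher watcher = new ReadingsWatcher(new ThrowingSource(), errors);

        watcher.Refresh();

        Assert.AreEqual("source unavailable", errors.LastMessage);
    }
}
```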
    Ron: So I can just hear some people saying, "It feels like maybe developers might go overboard writing... OK, it's one simple interface with two methods, and you've got 500 tests!"

    [laughter]
    Ron: Do you see that happen sometimes?
    David: When you've delegated most of the complex work out to other objects, the number of test cases for the object that is doing the delegating is greatly reduced, because it just doesn't branch that often.

    I make this call, I take the value and I plug it over here. Presenters don't often branch in their code. They get the data from the model, they send it to the View--they don't look at it, they don't think about it. Therefore, the test case for that scenario is really--there is one. You might prove, by rigging one of your mock objects to explode with an exception, that if that happens, then the exception escapes, because we don't often do error handling code in our Presenters.

    They don't have any themselves. The model would contain errors, or if the model thought it was appropriate to throw an error, it would make it back out to the View--and your View layer exception handlers might kick in.
    Scott: The key is what behavior you expect from the class under test. If you expect the class that you're testing to do something in response to an exception, then that should be tested.

    That's easy to setup with some of these mocking libraries. If you don't expect your object to do anything in response to an exception, then there's nothing to test.

    If an object that you're using throws an exception, and the only thing that happens in your code is that exception gets thrown out because you're not handling it, then there's nothing to test there--because there is no behavior.
    David: At most there's one test you could write that expresses what happens, if you're concerned about it. On occasion we've removed exception handling code from one layer in favor of letting it move up to a layer that's more generally capable of handling it. In that case you might add a test case, before you change any code, that proves that internal exceptions will escape upward. You'll see it fail, because in reality you are still handling it; then you go to the code, remove the error handling code, and now your test passes.
    Ron: Oh, I see. Yeah.
    David: But it's a matter of time sequencing. It was doing something before, and now I want to prove that it didn't happen. You don't generally walk around trying to prove negatives.
    Ron: Let's deal with the case of -- let's say in the user's story we say -- if the file that the user selected turns out to be invalid, it's not an XML file, or it doesn't exist, or something. We're going to show an error message, maybe throw up a dialogue or something like that.

    So in that case, we would say the file name gets passed through, the Presenter pushes it into the model, the model throws an exception, and you're saying the Presenter's not going to handle that? Or how will that work?
    Scott: Well, in the case you've described, I probably wouldn't have the model throw an exception...
    Ron: Oh, really?
    Scott: ...because there's nobody there to catch it. What I would probably do is, through dependency injection, have the model built with a file verifier object. It would have through its interface a couple of methods to determine whether this is a legitimate file or not. And the model would use that verifier object and its methods to determine whether it was valid or not and if it were not valid, it would use some other object that it's been injected with to display a dialogue.

    Like we talked about earlier with that file dialogue shower, we also commonly use a message box dialogue shower in a very similar fashion.
    Ron: OK, so if the model needs to communicate something to the user, like "You picked a bad file," you use the message box shower?
    Scott: Right. And so from the model's perspective, it's just an interface and it's calling a method. What happens inside that method, the model doesn't care one whit.
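
    (A minimal sketch of the approach Scott outlines: instead of throwing, the model is injected with a file verifier and a message-box "shower" and decides between them. The interface names approximate the conversation; the details are invented.)

```csharp
public interface IFileVerifier     { bool IsValidReadingsFile(string fileName); }
public interface IMessageBoxShower { void Show(string message); }
public interface IReadingsLoader   { void Load(string fileName); }

public class ReadingsFileModel
{
    private readonly IFileVerifier verifier;
    private readonly IMessageBoxShower messageBox;
    private readonly IReadingsLoader loader;

    public ReadingsFileModel(IFileVerifier verifier,
                             IMessageBoxShower messageBox,
                             IReadingsLoader loader)
    {
        this.verifier = verifier;
        this.messageBox = messageBox;
        this.loader = loader;
    }

    public void Open(string fileName)
    {
        if (verifier.IsValidReadingsFile(fileName))
            loader.Load(fileName);   // happy path
        else
            messageBox.Show("'" + fileName + "' is not a valid readings file.");
    }
}
```

    Because every collaborator arrives through an interface, the model can be tested purely by mocking the verifier and the shower: one test where the verifier says "valid" and the loader gets called, one where it says "invalid" and the message box shower gets called.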
    Ron: But let me think about this though, 'cause let's say I picked "file open" on the menu; that's something that happens in the View. The View handles this event and then fires an event to the Presenter, says, "OK, we got a file name." The Presenter calls a method in the model--here's your file name. So if the model threw an exception at that point, the call stack goes all the way back to the View. So the View could catch that exception and then just show you a message box, couldn't it?
    Scott: It could do that, but now you're encoding behavior into the View, which we really don't want to do.
    David: Yes, the way we handle this is that when we know the potential danger, the model has been equipped to take specific action in that case (like the scenario that Scott outlined): deliberately determine if the file is valid according to what we know, and take action one way if it's good, another way if it's bad.

    There's always the possibility of getting some kind of exception you weren't designing for. And the way we handled that in .NET was to actually set up a global exception handler for all things on the event queue.

    So you're right, it would make it back up to the View. But if we didn't write an error handler around the source of that event in the first place (which we typically won't do), it makes it back up to .NET's event dispatching thread, which is far beyond our control. For that we've installed a delegate that uses that same error shower dialogue or message shower tool to say, generally speaking, something went wrong, to stop the application from dying.

    It says this exception occurred and it's still formatted decently. We don't like to handle all our exceptions that way, because you lose control over taking further specific actions. This is just basically showing a log entry to the user-- "Sorry, something went wrong."
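
    (A minimal sketch of that kind of global handler, using the standard Windows Forms hook for exceptions that escape to the UI message loop. The MessageBox call here is only a stand-in for the error/message shower tool David mentions.)

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Route any exception that escapes to the UI message loop to one handler.
        Application.ThreadException +=
            new ThreadExceptionEventHandler(OnUnhandledUiException);

        Application.Run(new Form());   // stand-in for the application's main View
    }

    static void OnUnhandledUiException(object sender, ThreadExceptionEventArgs e)
    {
        // Keep the application alive and show a generic, decently formatted message.
        MessageBox.Show("Sorry, something went wrong:\n\n" + e.Exception.Message);
    }
}
```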
    Ron: OK.
    David: In that case, from one perspective, the View is handling errors, but we haven't written code to do it. We've written code toward the beginning of the application that installs these event handlers--
    Ron: Right--
    David: We actually have tests that demonstrate that--
    Scott: --if an exception gets thrown, that these dialogues have popped up.
    David: --some of that is actually possible.
    Ron: OK, OK, I'm with you. All right. So, wow, this is fascinating. There are so many questions in my head about it. But I think we're running out of time, so let me hit one last issue.

    So, when I sat down and created my little test, I created my IReadings View interface, my IReadings model, and I have my Presenter object. I constructed it, and in the constructor I pass in, "Here's your View, here's your model," and it starts wiring up events and all of that.

    And then I'm thinking, "OK, I'm writing a test. The user, in the normal app, would do an action that stimulates something to happen in the Presenter, in the model, like the user clicked File-Open on the menu. How do I make that happen in my test?" So, if I have a mock View, do I use the mock somehow to force that event?
    Scott: You are relying on your mocking ability at that point. In .NET we typically use NUnit -- NMock, sorry. And NMock has a facility for saying this object has thrown an event. Well, indirectly -- we made a nicer, neater wrapper for it. The built-in event emitting stuff in NMock isn't quite as nice as we would like it to be for our testing purposes.

    So we wrote a wrapper around it that sort of encapsulates this event; holds on to the delegate that causes the event to fire and allows you to cause it to fire at a later time.
    David: In order for the View to fire an event that the Presenter would hear, the Presenter has to call a method on the View to subscribe, or to add itself as a delegate. There's the openDelegate() method. We typically have wrapped that usage of delegates in regular old methods, called something like "subscribe for such-and-such an event," and hand the delegate over.

    So every one of our tests has a mock expectation in the setup of the test suite. Whenever you build a Presenter, you are expecting that it's going to do the subscriptions, and it hands delegates out. And our mock objects will let us, as a tester, reach in and kind of hold on to those delegates, and invoke them to simulate the firing of these events.
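
    (A hand-rolled sketch of the technique David describes: the mock View holds on to the delegate the Presenter hands it during subscription, so the test can fire the "event" later. The delegate, interface, and class names are illustrative, not the team's actual code.)

```csharp
using NUnit.Framework;

public delegate void FileChosenHandler(string fileName);

public interface IReadingsView  { void SubscribeToFileChosen(FileChosenHandler handler); }
public interface IReadingsModel { void OpenFile(string fileName); }

public class ReadingsPresenter
{
    private readonly IReadingsModel model;

    public ReadingsPresenter(IReadingsView view, IReadingsModel model)
    {
        this.model = model;
        view.SubscribeToFileChosen(new FileChosenHandler(OnFileChosen));  // hand the delegate over
    }

    private void OnFileChosen(string fileName)   // no public methods on the Presenter
    {
        model.OpenFile(fileName);
    }
}

// The mock View keeps the delegate so the test can simulate the event firing.
class MockReadingsView : IReadingsView
{
    public FileChosenHandler CapturedHandler;
    public void SubscribeToFileChosen(FileChosenHandler handler) { CapturedHandler = handler; }
    public void FireFileChosen(string fileName) { CapturedHandler(fileName); }
}

class MockReadingsModel : IReadingsModel
{
    public string OpenedFile;
    public void OpenFile(string fileName) { OpenedFile = fileName; }
}

[TestFixture]
public class ReadingsPresenterTest
{
    [Test]
    public void FileChosenInTheViewIsPushedIntoTheModel()
    {
        MockReadingsView view = new MockReadingsView();
        MockReadingsModel model = new MockReadingsModel();
        new ReadingsPresenter(view, model);      // subscription happens in the constructor

        view.FireFileChosen("readings.xml");     // simulate the user choosing a file

        Assert.AreEqual("readings.xml", model.OpenedFile);
    }
}
```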
    Ron: Oh, OK.
    David: It's technically a little bit tricky to work through the first time but it's always the same. Then you wrap it up, put it away, and just treat it as - fire this event, fire this event. Conceptually, it's simple. Technically, it's hairy sometimes.
    Scott: The short answer is, that you cause your mocks to generate an event. In some languages that's easy, and in other languages, it is more complex.
    Ron: OK. Because I was thinking maybe I would just call the method on the Presenter that handles that event, kind of directly from the test code, but then in your paper you were talking about how the Presenter usually does not have any public methods.
    Scott: Right.
    David: Right. In fact, we used to create internal delegates that pointed at these private methods in .NET 1. In .NET 2 there are anonymous delegates, which do not even need to correspond to methods necessarily. You can write code right there, in the same way that you write closure code in Ruby or anonymous inner classes in Java.

    So our Presenters in Java have no methods whatsoever. They define event handler code by passing these delegate objects in. So the code is floating in space. It all exists, but you can't invoke it directly.
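
    (The C#/.NET 2.0 flavor of the same idea might look like this sketch, where the handler exists only as an anonymous delegate, so there is nothing public -- or even private -- to call directly. Names are again illustrative.)

```csharp
public delegate void FileChosenHandler(string fileName);

public interface IView  { void SubscribeToFileChosen(FileChosenHandler handler); }
public interface IModel { void OpenFile(string fileName); }

public class Presenter
{
    public Presenter(IView view, IModel model)
    {
        // The handler code "floats in space": it is reachable only by firing the event.
        view.SubscribeToFileChosen(delegate(string fileName)
        {
            model.OpenFile(fileName);
        });
    }
}
```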

    Furthermore, we at first did test that way. We would just trust that the delegates had been wired up, skip that part, and invoke the functionality somehow by changing the access control rules for those methods at test time, or just leaving them public. We found that it was a little false, in that we would miss cases and sometimes forget to wire up the delegates.
    Ron: Oh, yeah, yeah.
    David: So we figured that the tests could easily catch that, so they should. And we worked toward getting a little bit more outside and firing the events that come in.
    Ron: OK. So I said that was going to be the last thing, but there's one more thing. [laughter]

    To a lot of people, you know, they say, "Wow. It seems like if you've written your code this way, you've got all this test coverage and unit tests. It's just got to result in much, much higher quality code." I'm a big believer in this. I think it does.
    Scott: We have evidentiary proof of it.
    Ron: Yeah, absolutely. And you know when people go, "Oh, that seems like it's going to take a long time," and I tell them, "You know what takes a long time? [laughs] It's when you spend three or four days tracking down that mysterious bug, you know? That's what takes a long time." [laughs]
    Scott: And really, once you get comfortable with the methodology, it's really not that time intensive. It sounds like it is, 'cause there's a lot more classes involved and in some cases there might actually be more code involved. But the code itself ends up being pretty simple in 90 percent of the cases, so it goes fast.
    Ron: Yeah.
    Scott: In .NET we use a tool called ReSharper. I'm sure you've heard of it. It automates a lot of our typing. So if we want to generate a test, that's just three keystrokes and bang, you've got a test template. If you want to create a Presenter workflow, you type a few characters and you've got your template, which is just waiting to be filled out.

    So we found that while there is definitely a learning curve, once you get through that you can really start to move along.
    Ron: So let me ask you though, even though you got this wonderful high quality code, do you still get bugs? I mean, they still show up some days?
    David: Yup.
    Ron: Yup? OK, so when you've written your tests, you've run them, they're passing, and then along come, from some other source, testers, users--whatever--and they go, "Hey, here's a bug!" What's the process you go through? Aren't you thinking like, "How did that get past us?" right?

    [laughs]
    David: The first step is denial.

    [Ron laughs]
    Ron: And then grief--

    [laughter]
    David: Throw up a smokescreen.
    Ron: And then bargaining--

    [laughter]
    David: We immediately try to mentally zero in on the area of the code that might be the problem. And then we would go and--once we think that we've got it--test the hypothesis by attempting to add a unit test some place in the system that would...
    Ron: Expose the area.
    David: ...expose that area, cause that area to happen.
    Ron: OK.
    David: In other words we treat most bugs as a missing test case.
    Ron: So your first reaction is: write a test that will fail because of this bug.
    Scott: Because of this bug.
    Ron: OK.
    David: Sometimes it takes a little bit of experimentation to get to that point. Once you've found the code, you've found that "I believe this class is screwing up." Or you're forced into using a debugger and you find out where the problem is (like if you just couldn't mentally figure it out, or it was really, really tricky, or events were coming back up and down through the system). You ran the debugger with a breakpoint on; you found out there's a null where there shouldn't be ("we never designed for that")... OK, now this test over here has to try it out with a null and see what happens.
    Ron: Huh-- OK.
    David: So even when we zero right in and use a debugger to find the line of code that's broken, the worst thing we'd probably have to do is stop the program and change the code as a spike solution.
    Ron: Oh, yeah.
    David: "I think that if I change this, I'll be OK, but I'm not really sure." We don't like to spend time writing tests that prove that something changed that doesn't really fix the problem.
    Ron: Oh, yeah.
    David: So once you're sure--often it's that there was no design for a null (a null got through where we didn't expect it in the first place), and if we didn't expect it, we had no associated assumptions or tests that would go along with it.

    Back to the test, add the null case-- hopefully you'd get the same exception and failure from that test that the users are experiencing. You usually get a pretty good feeling about... You see an exception come up and that's what, you know-- that's what the customer said she saw pop-up on the screen; it's the same message. We're in the same place. As soon as that message is gone, it can be released.
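
    (A sketch of that "bug equals missing test case" step: add the null case that was never designed for, watch the test fail the same way the user saw, then make it pass. The class and its behavior are invented for illustration.)

```csharp
using NUnit.Framework;

public class ReadingFormatter
{
    public string Format(string label, int value)
    {
        // The fix for the hypothetical reported bug: a null label used to blow up downstream.
        if (label == null)
            label = "(unnamed reading)";
        return label + ": " + value;
    }
}

[TestFixture]
public class ReadingFormatterRegressionTest
{
    [Test]
    public void NullLabelNoLongerBreaksFormatting()
    {
        // Written to fail first, reproducing the failure the user reported.
        Assert.AreEqual("(unnamed reading): 7", new ReadingFormatter().Format(null, 7));
    }
}
```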
    Ron: So I can imagine that this might involve kind of iterations, right? So like you're way down in the depths of the code, you go, "I got a null here-- that should never happen." So you write your test that makes that case fail, which is kind of like reproducing the test situation, right?

    So you make that fail, you fix that. But then you find when you exercise from the end user's perspective, something else happens now, right? And then you go up a layer and you're like, "Oh, where did that null come from?" and you're trying to trace it down.
    Scott: That can happen, but I found that it happens way less often with this style of development than with like a typical waterfall development methodology, because you don't have this cascading series of interdependencies that can cause that kind of behavior. You work really hard during development to make sure that everything is as independent as physically possible. And because of that, when you find a bug, usually it's isolated.

    It's possible that there was some workflow that you didn't expect that causes some data to go out that causes adjoining objects to receive data that they didn't expect, but usually it stops right there. This object caused the objects that it's talking to to fail. So if I fix this object, most of the time that will cause the rest of the objects to be fixed also. But if they don't, then you go to that next level, and 99 times out of 100 that's where it will end.
    David: Additionally, being a test-driven developer, you take a certain amount of elevated pride in the code you put out, and a bug is an embarrassment. Once you think you've solved that bug, the product won't be re-released until you're good and sure yourself, before you even hand it over to a tester, if one's available. And secondly, no amount of unit testing makes system-level tests unnecessary, automated or otherwise.

    There were some schools of thought in the early XP days that test-driven development at the unit test level would supplant much of the need for QA, because now there's nothing broken. Tests are only as good as your assumptions and your knowledge. So a lot of things that are bugs are really just "I didn't understand that this app was supposed to behave in a certain way," or "I did not expect this sort of awful input would come."

    I was told that only the CEO would be using it, and I've got kitchen staff using it. So this is a misunderstanding, and they did things a different way. We didn't make the app robust enough to handle random users, because we don't build 100-percent bulletproof code; we build code to do exactly what it is asked for and assume that it is going to have to change at any time.
    Ron: One thing that you mentioned when you were talking about fixing a bug is that you might have to resort to using a debugger. I kind of chuckle inside, because that was so much the style, and still is for a lot of people: "Well, I'll write a little code, run the debugger, step, step, step," and they spend days and hours doing this process over and over again, running through the debugger. And I'm reading the writings of people who are into test-driven development, and they talk about not using a debugger, not using it for days. Most people can't imagine that.
    Scott: If you've got code that you can't easily follow, what choice do you have but to walk through it and see what the result of the input is? If you follow the TDD methodologies, you usually end up with very short methods with very few conditional statements, so there's only one or two or some small number of paths that the data can take into your object and out of it, and it's pretty easy to spot where things can go wrong.
    Ron: Wow.

    I can go on for days like this. Anyways this has been great. Thanks so much for joining me today on ARCast.
    Scott: Thanks for listening.

    [music]
    Ron: Wow, what a terrific time that was with Scott and David there at Atomic Object in beautiful Grand Rapids, Michigan. We went on and on for so long that we actually ran out of video tape, so we didn't hear that the camera had shut off, and we just kept talking.

    So I hope you understand that's why it ran out there in the middle. But it was great, so much fun. I love learning about new things. Really, I'm pretty persuaded about this test-driven development thing. I know there's a lot of doubters there, a lot of doubters. It's OK. You don't have to agree. I think it's just a good idea to try these things out and see how it works for you.

    Hey, we'll see you next time on ARCast TV.
