Writing Unit Tests for Existing Code? 86

out-of-order asks: "I recently became a member of a large software organization which has placed me in the role of preparing the Unit Test effort for a component of software. Problem is that everything that I've read about Unit Testing pertains to 'test-driven' design, writing test cases first, etc. What if the opposite situation is true? This organization was writing code before I walked in the door and now I need to test it. What methodology is there for writing test cases for code that already exists?"
  • Before you actually fix it.
    • Based on experience with existing projects, this is the way to go -- write unit tests for bug fixes and new features. It's an overwhelming, time-consuming job to write unit tests for a big mass of existing code. I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too. End-to-end system tests, even superficial ones, have more value. I will say that sometimes writing tests can help you understand
      • I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too.

        If you are heavily refactoring, it's probably worth putting in the effort to write the tests beforehand. Otherwise how can you be confident that your refactoring hasn't broken anything?

          • Good point. But there's not much value in writing the unit tests until you are ready to start refactoring that bit of code. It's hard to predict ahead of time what you will work on.

          And sometimes the dependencies and interfaces are so bad, you're better off building system/integration test coverage to support your refactorings.
      • I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too.

        If your unit tests are testing observable behaviour as they should be, and your refactoring doesn't change observable behaviour at all as it shouldn't, then you don't need to throw away the tests. Similarly, if you're throwing away old code then presumably that's because you have new code that produces the same required behaviour, and th

        • Well, yes, poorly-written or not useful. I did a bad job of explaining myself. It's true that the best approach is to write tests around the legacy code.

          However, my practical experience is that it's time-consuming to build unit tests for bad legacy code. So I tend to invest my time in broader-scope system/integration/functional tests for existing code.

          I've also found that bad code tends to have many redundancies, unnecessary interfaces, "we'll need it someday" features, and "wouldn't it be cool ..." featu
  • After you have a testing infrastructure written and have a couple of tests, go learn about GCC code coverage profiling (assuming your language is supported) and have your tools generate coverage information. Then start writing tests to fill the holes in your coverage. It will take forever.

    Also require all new code to have matching tests, and set up automatic tests to slap developers who add code that doesn't get tested.

    Good luck.

  • by okock ( 701281 ) on Friday May 06, 2005 @11:40AM (#12451476) Homepage

    I don't see TestFirst as a Test Strategy, but as a design technique. Writing Tests first forces you to think differently about what you want to write.
    This forces you to write testable code - writing tests afterwards does not force you to do that.

    Of course, having the tests available later proves valuable for testing your application, but the tests' main purpose is to lead you to a testable design.

    You'll most likely experience severe difficulties in adding Unit Tests to previously untested code. It might be easier to add acceptance tests (e.g. high-level scripts that utilize the application), especially if you want to cover more than small partitions of the application quickly.

    • I'd move it back even a little further, and call TDD a specification strategy.

      A specification is just a statement of a testable way to tell if a program is behaving correctly or not; you can think of it as specifying a characteristic function for the set of input and output pairs of the program.

      In set theory (and similar things) there's a notion of two distinct kinds of definitions for a set: the intensional and the extensional.

      An intensional definition for a set specifies the characteristic function, like
  • Fake Ignorance (Score:4, Interesting)

    by samael ( 12612 ) <Andrew@Ducker.org.uk> on Friday May 06, 2005 @11:54AM (#12451682) Homepage
    Write the tests as if the code _hadn't_ been written. Get the requirements and then write the tests from them.

    Then if they fail the tests you'll have to discover if the requirements are wrong or if it's the code that's at fault. But at least you'll have something to start from - and you'll probably find some bugs they missed.
    • That's nice but... (Score:3, Interesting)

      by TheLink ( 130905 )
      Often there aren't detailed enough requirements.

      With requirements at a typical business level, you could have X totally different systems that meet them (and most of them better, given hindsight). And often that level is as much as you're going to get when the original team has left.

      Anyway recreating requirements at a detailed technical level could be a waste of time - because some module could be required to do something stupid by another module. Once you fix things all round, this requirement will be thrown out.
      • Not only are there usually no written specs, but the code will also have undocumented, obscure fixes for particular problems. If you don't know what the problem is, you can't test for it. It may be that under special circumstances certain things need to be done for a particular customer, so ripping out that code to conform to some half-baked spec you just thought of will work for most customers but break that one. This is so common in legacy code it's almost a law. Joel had a bit to say on this in his essay on why you shouldn'

  • by djfuzz ( 260522 ) on Friday May 06, 2005 @11:55AM (#12451701) Homepage
    You have to write tests as you change features. Let's say you have a simple change, tweaking a short method to do something different. First you write a test for its existing functionality and make sure it passes. Then add a test for the new functionality, run it, and watch it fail. Make your change and make the test pass (a rough sketch of this is at the end of this comment). This would also be the point where you can do some refactoring or cleanup, or extend the test to catch boundary conditions.

    With legacy code, you just have to start writing tests with the code as you go, writing tests for functionality that you need to understand or review. If you try to take x number of weeks to write test cases, you're doomed to fall behind and have obsolete tests by the time you are done.

    Also, see Working Effectively With Legacy Code by Michael Feathers --> http://www.amazon.com/exec/obidos/tg/detail/-/0131177052/002-8698615-6720004?v=glance [amazon.com]
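
    Something like this, very roughly (JUnit 3 style; PriceCalculator and its methods are made-up stand-ins for whatever legacy class you're actually changing):

        import junit.framework.TestCase;

        public class PriceCalculatorTest extends TestCase {

            // Step 1: pin down the existing behaviour and make sure this passes
            // before you touch anything.
            public void testExistingTotal() {
                PriceCalculator calc = new PriceCalculator();
                assertEquals(30.0, calc.total(new double[] {10.0, 20.0}), 0.001);
            }

            // Step 2: describe the new behaviour (here, a hypothetical bulk
            // discount), watch it fail, then change the code until it passes.
            public void testTotalWithBulkDiscount() {
                PriceCalculator calc = new PriceCalculator();
                calc.setBulkDiscount(0.10);
                assertEquals(27.0, calc.total(new double[] {10.0, 20.0}), 0.001);
            }
        }

    Once the new test goes green, both tests stay in the suite as a safety net for the next change.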
  • We use McCabe ( http://www.mccabe.com/ [mccabe.com]) to point us to problem code. Running their tool gives you a good idea of where the complexity of an application lies and where you should focus your testing.

    It works kinda like this: First the tool parses everything and generates a ton of metrics. This will point you to the complex modules of the application. Then it breaks down each function/method in the module into its possible execution paths and turns this into a graph. By looking at the graph, you can see wh
  • by north.coaster ( 136450 ) on Friday May 06, 2005 @11:56AM (#12451720) Homepage

    I spent several years managing a test team in a Fortune 100 company, and I have seen this situation many times (it's probably the norm, rather than the exception, in industry today).

    Let the documented requirements for the code (or product) be your guide. Use those requirements to develop test cases, then design one or more tests that hit all of the test cases.

    If there are no documented requirements, then you should ask yourself why you are working there. This situation usually leads to many arguments about what the code/product is really supposed to do, and you'll just become frustrated while you waste lots of time. It's not worth it.

    • As above, write your tests to the specs.

      Run the tests and document the results.

      Let someone else mod the specs ;-) if necessary.

      If n.c's third paragraph applies, you have to find a managerial ally who will support you as you re-work the design process and local culture to be a bit more rigorous and disciplined. It will be tough, but it can also be rewarding.
    • For the most part, the parent has the right of it. Specifications should drive what is tested. If the documents don't say the software should do "foo" then don't bother testing whether or not it does "foo", even if everyone is saying that it can, that it does and that it will do "foo". But, the parent poster is a bit pie-in-the-sky for his last paragraph:

      If there are no documented requirements, then you should ask yourself why you are working there. This situation usually leads to many arguments about

    • by dwpenney ( 608404 ) on Friday May 06, 2005 @12:34PM (#12452501) Journal
      OK, I am confused by this. There is a distinct difference between System Test cases and Unit Test cases. If you are working from a design document detailing the requirements from a working Business or Systems Requirements document, and testing the items to make sure that the requirements are met, you are performing a System Test - a test at a much higher level than Unit Testing. At the Unit Test level you are checking the boundaries in the code itself to make sure that loops are exited correctly and logic is performed correctly. In essence, a Unit Test is at a component level, intended to look inside the component and make sure that it operates correctly based on its very limited sets of inputs and outputs. At a higher level, System Test cases look at how the component interacts with other components and whether this interaction meets the requirements.

      Stupid example using web code: putting ^5$*1@ in a text search input box is a Unit Test case making sure that the entry parameters of this code will not barf on the input (see the sketch just below). System Test cases should not need to test this (with a proper procedure for unit testing) but should instead focus on whether the search results are in line with what was requested. These necessary definitions limit and focus the level of your testing and the understanding of the question that is being asked.
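
      To make that concrete, the unit-level check might look like this (a rough JUnit sketch; SearchQueryParser and its parse method are invented for illustration and assumed to return a parsed-query object):

          import junit.framework.TestCase;

          public class SearchQueryParserTest extends TestCase {

              // Unit-level check: the parser itself must not barf on junk input.
              // If parse() throws, the test errors out -- exactly the "barf" we
              // want to catch here rather than in a System Test.
              public void testGarbageInputDoesNotBarf() {
                  SearchQueryParser parser = new SearchQueryParser();
                  assertNotNull(parser.parse("^5$*1@"));
              }
          }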
    • Exactly, you can't test "it" if you don't know wtf "it" is. Sadly it does seem to be the norm. It is often an expectation placed on test teams that they "know" what "it" is and when "it" is delivered "it" will be fully tested. This normally leads to a situation where the test team try to define the requirements after "it" is built, the whole thing gets tied up in an argument, then released with nothing properly tested or defined. This gets even worse when "it" is sold to a powerful customer. Since nobody pr
  • When I worked at Cisco on a project that was written in Java, we used an automated unit testing tool that would test each method and report on what would break -- for instance, if you passed a particular value to a method and it failed, maybe you should fix the code to deal with that possibility. It was either Junit or Jtest (one of them costs $3000 a seat; we used that one). It was a good thing since QA categorically refused to "test" the software by trying to break it; they would only test to see if it co
    • Re:Look at Junit (Score:3, Interesting)

      It was a good thing since QA categorically refused to "test" the software by trying to break it; they would only test to see if it could work if the customer did everything right.

      What kind of testing is that? You have to assume that the customer won't do everything right if you're going to find bugs. Just because you're using automated code testing, it doesn't mean that the unit tests themselves have been written correctly or all the code works perfectly together. A good QA team needs to have attitude th
      • Quote "We are QA, not testing! We do QA, we don't test"

        My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.

        • Re:Look at Junit (Score:4, Interesting)

          by __aaclcg7560 ( 824291 ) on Friday May 06, 2005 @03:43PM (#12455679)
          My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.

          In the video game industry, that's called button smashing. Programmers hated it because it meant that their input code didn't consider multiple buttons being pressed at the same time, and, worse, it was usually time dependent. Nintendo is very good at finding button smashing bugs.
  • Such tools can make after-the-fact testing quite a bit easier.

    We used automated regression testing scripts in the mainframe environment I worked in 12 years ago, and that made some aspects of unit testing relatively easy.

    Unisys had a tool (TTS1100) which allowed us to record each online transaction entry and computer response and then play it back later, and that made it possible to perform the exact same tests dozens or hundreds of times if needed. We used to run them after each set of changes was applied to make sure nothing broke. :-)

    One could also record a single occurrence of a lengthy interactive sequence and then add things like variables and looping structures into the recorded script to automate the handling of various test cases using different values (the same trick in code form is sketched at the end of this comment).

    Such a tool makes after-the-fact test design a little bit easier because you can sit down and methodically address each and every variation of each and every input field on a given screen.

    Of course, the nature of the software you're using might make that sort of thing more difficult, or perhaps even easier.

    I've never been able to do up-front unit test design -- specifications can change rather quickly when doing in-house software development, and the overall environment is a lot more dynamic than a typical "software house" environment would be where one always has formal detailed product specs to code to. We're often writing code based on an e-mail or on a couple of phone conversations.
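
    The same variables-and-loops trick carries over to code-level tests: a little table of inputs and expected outputs driven through one loop (a rough JUnit sketch; the validateZip routine is invented for illustration):

        import junit.framework.TestCase;

        public class ZipCodeTableTest extends TestCase {

            // Invented stand-in for the routine behind one input field.
            static boolean validateZip(String zip) {
                return zip != null && zip.matches("\\d{5}");
            }

            // One row per variation of the field, much like a parameterized
            // playback script.
            public void testZipFieldVariations() {
                Object[][] table = {
                    {"12345", Boolean.TRUE},
                    {"1234",  Boolean.FALSE},
                    {"abcde", Boolean.FALSE},
                    {"",      Boolean.FALSE},
                };
                for (int i = 0; i < table.length; i++) {
                    String input = (String) table[i][0];
                    boolean expected = ((Boolean) table[i][1]).booleanValue();
                    assertEquals("input: " + input, expected, validateZip(input));
                }
            }
        }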
  • Unit testing is a method you use to achieve something. Is the current component very buggy and you need to rewrite it, or do you need to extend production-quality software without breaking existing functionality?

    If you are testing a component, try to figure out if it is possible to test it in some schematic way. If you can figure out a way for the "business" people to write the tests for you, that will take a lot of the knowledge burden off your shoulders.

    If it is an existing component maybe you could explore if it is possible
    • > Think about what your goals are. Then find the best tool to get there.

      I couldn't say it any better. If you want to make sure that functionality stays intact while you change things, write system tests. If you want to use code in a different context, document it and write unit tests first.

      Whatever you do, you need some "absolute reference" to find out what is right and what is a bug. Tests are no good if they just preserve old bugs for eternity.
  • Welcome, to the world of the Real.

    In most IT shops, I'm sorry to say, test cases are a low priority and almost always come after the code is complete or nearly complete.

    If you're looking for a "methodology" for creating unit test cases, I think you're overthinking the problem. You need to create a set of test cases that assure you that the unit is working in and of itself. There are a number of things you can do to accomplish that:

    1. Look at the design document.
    See what the design document says the uni
    • You can't actually write complete tests until after the code is complete, if you have no functional or design documentation. You can come up with use cases, but those aren't necessarily tests until the product/component is in final form. Then you determine how to turn use cases into test procedures.

      This goes for unit testing as much as it does for integration testing. If the design hasn't been (entirely) prescribed, then it's going to be pretty much (at least partially) invented at coding time -- meaning int
  • Whhhaaaa! (Score:4, Insightful)

    by LouCifer ( 771618 ) on Friday May 06, 2005 @12:01PM (#12451852)
    Gimme a fucking break.

    Every testing job I've ever had we've had ZERO documentation. NADA. ZIP.

    How do we survive? WE TEST. We put down the book (like we had one to begin with) and we test. Surely you have a server somewhere running dev-level code (at least) and you start poking around. Sure, it's less than ideal, but you deal with it. And you bitch about how crappy it is and how it goes against all the principles of so-called 'real world' methodologies.

    The thing is, this is how the real world does it.

    Sure, in a perfect world, everyone has their shit in order. But in a perfect world we're not all competing against code monkeys working for 1/10th of what we make and that live in a 3rd world country.

    • I've been looking at code from coders that lived in a 1st world country and earned 1st world pay. And it really isn't that good... (BTW you're close enough about the zero documentation bit...)

      It seems there are very few good programmers. If you're going to get crap anyway, better to pay 1/10th for it :).

      Not saying that I'm a good programmer. Far from it. But heh, even I can do better than the crap I saw.

      With just the fixes/rearchitecting of some recent code, I might have justified most (if not all) of my
    • As someone who has been on a dedicated testing team I can say just poking around is a good idea. Very often the test scripts will work but doing things not on the test script will produce an error.

      I had one developer who got very pissed at me because I did things outside of the test script and caused some errors. He was like "That's not how you're supposed to do it" and then showed me that it worked if you did it like the test script. I was like "Ummm I don't think you can count on end users to do it exa
      • Leave it up to a developer to tell a tester how to do our jobs.

        Turn the tables and watch them blow their lids.

        Of course, they can't think outside the box (doesn't apply to all developers, but a lot I deal with).

        Hopefully your manager realized the developer was/is an idiot and told HIS manager as much.

        Fortunately, I've got a great manager (former developer) who knows what's what when it comes to testing.

      • Well, developers don't know how to test, or else we wouldn't fucking have TESTERS.

        ("We don't tell you guys how to code, do we?" -- well, actually, I make coding/fix suggestions once in a while, but I have some RW coding experience.)

        Sorry to hear you had to face the developer-tester impasse in front of your boss before you'd had it explained to h{im|er} beforehand.

        Developers' job is to understand how to make the product, testers' job is to understand how it will be used.
    • Re:Whhhaaaa! (Score:4, Insightful)

      by RomulusNR ( 29439 ) on Saturday May 07, 2005 @03:48PM (#12463539) Homepage
      Except when you're expected to have a test plan. You can't come up with a test plan without a functional spec. A design doc helps even more.

      You can't possibly ensure that the application does what it's supposed to if no one can communicate to you what that entails. Imagine testing a house by spraying it with water, banging on the windows, and tromping on the lawn. Those all sound like good things, until the future owner tries to open the front door, and can't.
  • by Chris_Jefferson ( 581445 ) on Friday May 06, 2005 @12:08PM (#12451993) Homepage
    Trying to write a test case for all the code you have will be very difficult, will take a very long time, and, to be honest, won't buy you a lot.

    A few open source projects have found themselves in the same situation as you, and they seem to work by 3 rules:

    1) If you change any code at all which doesn't have a test, add a test

    2) If you find a bug, make sure you add a test that failed before the fix and passes now (see the sketch after this list)

    3) If you are ever wandering around trying to understand some code, then feel free to write some tests :)

    One thing I will say is to try very hard to keep your tests organised. Keeping them in a very similar directory structure to the actual code is helpful. Without this it's very hard to tell what has and hasn't got a test.
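
    For rule 2, the test usually ends up looking something like this (a JUnit sketch; the bug number and the DateParser class are invented for illustration):

        import junit.framework.TestCase;

        public class DateParserBugTest extends TestCase {

            // Regression test for (hypothetical) bug #1234: parsing "29/02/2005"
            // used to return March 1st instead of rejecting the bogus leap day.
            // This test failed before the fix and must keep passing afterwards.
            public void testRejectsInvalidLeapDay() {
                DateParser parser = new DateParser();
                assertNull(parser.parse("29/02/2005"));
            }
        }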
  • My Experience... (Score:5, Informative)

    by Dr. Bent ( 533421 ) <ben&int,com> on Friday May 06, 2005 @12:33PM (#12452480) Homepage
    I inherited a 1000-class Java-based toolkit from my predecessor, which had exactly zero unit tests. Over the last two years, we've made a sustained effort to employ Test-Driven Development and add more tests to ensure that everything works as advertised. As of today the toolkit has over 830 tests, with line coverage of 61% and class coverage of 96%. We've still got a long way to go, but we're much better off than we were. Here's how we got there...

    1) A lot of people are going to tell you that you need to write your tests from scratch. That you should assume that your code is broken and work out the expected results by hand and create the test assertions accordingly. I disagree [benrady.com]. If you're testing old code, it's much more useful to use the test to ensure that it does whatever it did before, instead of ensuring that it's "correct". I prefer to treat the code as though it is correct, and build the tests around it (see the sketch at the end of this comment). Even if the assumption is occasionally wrong, you can write the tests much more quickly this way. That allows you to refactor and extend your system with confidence, knowing that you haven't broken anything. Remember, TDD isn't really about quality assurance; it's about design and evolving design through refactoring. More tests == more refactoring == better system.

    2) You're probably not going to get a lot of extra time to sit around and write tests. You need to capitalize on the time that you have and turn problems into opportunities to add tests. Whenever you find a bug, make a test that reproduces it. If you need to add supporting stub or mock objects, consider making them reusable so that future tests will be easier to write.

    3) If you need to add new functionality to the system, just follow the standard TDD steps of Test->Code->Refactor, and make sure that you add tests for anything that might be affected by the change.

    4) I'm assuming that you already have a continuous integration build that runs the tests, but if you don't, make one. Now. Also consider adding other metrics to the build like code coverage (we use Emma), findbugs, and jdepend. These will help you track your progress and can be very useful if you have to defend your methodology to people who view TDD as a waste of time (the Code Coverage to Open Bugs ratio gets them every time).

    5) In general, you need to look for opportunities to write tests. Don't understand how a module works? Write a test for it. Found a JDK bug? Reproduce it with a test. Performance too slow? Use timestamps to ensure that the performance of an algorithm is in a reasonable range.

    You've probably got a long road ahead, but it's worth the work. Keep at it, and good luck.
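
    On point 1, a characterization test in that spirit might look like the following (a rough JUnit sketch; ReportFormatter is a made-up example, and the expected string is simply whatever the existing code produced the first time the test was run):

        import junit.framework.TestCase;

        public class ReportFormatterCharacterizationTest extends TestCase {

            // No spec was consulted here: the expected value was captured by
            // running the existing code once and pasting its output into the
            // assertion. The test's job is to scream if that output ever changes.
            public void testSummaryLineStaysTheSame() {
                ReportFormatter formatter = new ReportFormatter();
                String actual = formatter.summaryLine("WIDGET", 3, 9.99);
                assertEquals("WIDGET x3 @ 9.99 = 29.97", actual);
            }
        }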
  • The purpose of unit testing is to make sure that the unit works (and to characterize unit failures). It's the sanity check before you throw it 'over the wall' to the test and quality organization.

    You want to do path coverage, statement coverage, bounds checking on the inputs, error conditions, that kind of stuff.

    Pick up a book by Boris Beizer, read his stuff and ignore everyone else. I've been in QA and Test for almost twenty years now, Beizer is -the man- to read about testing. If you're really desperate
  • by Free_Trial_Thinking ( 818686 ) on Friday May 06, 2005 @01:02PM (#12453036)
    Guys, my legacy code doesn't have functions, just VB subroutines that modify global variables. Any idea how to make unit tests for this? And by the way, the subroutines aren't cohesive; each one is hundreds of lines long and does different, sometimes unrelated, things.
    • just VB subroutines that modify global variables [...] And by the way, the functions aren't cohesive, each one is 100's of lines

      Searching the web on 'Junit VB' you should be able to find test harness freeware.

      If there's the possibility of modifying the code organization that would help out a lot, obviously, because then you could make smaller functions doing more sharply defined things and therefore presumably easier to test.

      If not perhaps assume you can't test more of the existing code except at

    • See if you can write some tests to ensure basic functionality. Then tear it apart and start refactoring. Writing properly testable code is no simple task.
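
      The same idea in xUnit terms, sketched here in Java/JUnit since that's what most of this thread uses (OrderGlobals is an invented stand-in for your globals and the subroutine that mutates them): reset the globals in setUp, call the routine, then assert on the globals afterwards.

          import junit.framework.TestCase;

          public class GlobalStateRoutineTest extends TestCase {

              // Invented stand-in for a pile of global variables plus a routine
              // that mutates them, like the VB code described above.
              static class OrderGlobals {
                  static double orderTotal;
                  static int itemCount;

                  static void addItem(double price) {
                      itemCount++;
                      orderTotal += price;
                  }
              }

              // Reset the globals before every test so tests don't depend on each
              // other's leftovers -- the main headache with this style of code.
              protected void setUp() {
                  OrderGlobals.orderTotal = 0;
                  OrderGlobals.itemCount = 0;
              }

              public void testAddItemUpdatesGlobals() {
                  OrderGlobals.addItem(9.99);
                  assertEquals(1, OrderGlobals.itemCount);
                  assertEquals(9.99, OrderGlobals.orderTotal, 0.001);
              }
          }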
  • Something about a million monkeys sitting at a million keyboards comes to mind :)
  • If you're suggesting that your job is to be the unit test guy, then I would just start writing tests as you think of them. Critical mass is important - if you only have a few unit tests no one will care. Write some obvious ones for everything. And then go back and dig in where you think unit tests make more sense. In the web app world in particular it is often very hard to write real unit tests without getting into a whole variety of special rigging. So focus on the logic components that tend to be more
    • In the web app world in particular it is often very hard to write real unit tests without getting into a whole variety of special rigging.

      True enough, but if you've written the code in good abstracted OO with the "special rigging" (ie, mock objects) in mind, you'll be much better off. I'd say there's very little complex code that doesn't require mock objects or the equivalent to test, so it's something well worth learning. As you say, once you've got all the groundwork set up, it's much easier to extend.
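
      A bare-bones, hand-rolled version of that "special rigging" (all the names here are invented for illustration): hide the slow or external dependency behind an interface, then hand the logic under test a canned fake.

          import junit.framework.TestCase;

          public class GreetingPageTest extends TestCase {

              // The dependency you don't want in a unit test (say, a database lookup).
              interface UserDirectory {
                  String lookupName(int userId);
              }

              // The logic component under test only ever sees the interface.
              static class GreetingPage {
                  private final UserDirectory directory;
                  GreetingPage(UserDirectory directory) { this.directory = directory; }
                  String render(int userId) {
                      return "Hello, " + directory.lookupName(userId) + "!";
                  }
              }

              public void testRenderUsesDirectory() {
                  // Hand-rolled mock: canned answer, no database, no web server.
                  UserDirectory fake = new UserDirectory() {
                      public String lookupName(int userId) { return "Alice"; }
                  };
                  assertEquals("Hello, Alice!", new GreetingPage(fake).render(42));
              }
          }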

  • 1. Choose a testing framework. It depends on language. For C/C++ - cppunit, for Java - junit.
    2. Start writing unit tests from the lowest-level functions: those that use barebones system libraries, and move upwards. Do not go too far - it's unit (!) testing, not general or regression testing.
    3. Ideally you should test all possible paths in functions. Obviously that is not feasible for large functions (that is one of the reasons functions should be small). You should try to create a test to hit every con
  • Some people are recommending that you treat the modules as "black boxes" and write the tests according to their specs. Problem is (as others have pointed out) the specs are not detailed enough. So you will inevitably wind up looking at the code and then writing tests that prove the code does what it says it does.

    And this is actually OK, because it simply means that the "Unit test creation" process is actually a detailed code review process. Expect to find far more bugs from looking at the code than from
    • "So you will inevitably wind up looking at the code and then writing tests that prove the code does what it says it does."

      Code always does what it says it does. The problem is determining what the code should be doing and testing for that.

      • "Code always does what it says it does."

        Yes. Absolutely. 100%.

        Until someone changes it.

        When someone changes the code, it can be very enlightening to see the cascade of side-effects (especially if you thought there weren't going to be any!) Having unit tests that merely document the current behavior of code in a format that a machine can rapidly reproduce from the source code (compile unit test, run unit test, compare output to old output) can help you identify... Regressions! Very handy.

        The main wa
        • "Until someone changes it."

          No. It still does exactly what it "says" it does, it just "says" something different after the code has been modified.

          Regression testing can be very handy as long as the tests reflect what the code is supposed to do. Otherwise passing the regression simply means that your code is consistently failing to achieve its requirements.

          • Regression testing can be very handy as long as the tests reflect what the code is supposed to do.
            Looking at the code to see what it is supposed to do is silly. Just because code does something doesn't mean it is supposed to. What if it segfaults for some input? Was it supposed to? I've seen code that was supposed to seg fault!
            • "Looking at the code to see what it is supposed to do is silly."

              Well, there are times when looking at the code is useful but I never claimed you needed to do that to test it. All I'm saying is that a test should be devised to determine if its requirements are met.

              "What if it segfaults for some input? Was it supposed too? I've seen code that was supposed to seg fault!"

              As a former Atari 2600 programmer, I can appreciate the idea that correct behavior may be quite unconventional (such as performing an index
          • If you want to be purposefully obtuse, I can't stop you.

            My only point, and I feel I was abundantly clear on this, is that regression tests provide a record of what source code used to do.

            If you like what it used to do, then if the output is different, you can bet you did something wrong.

            If you didn't like what it used to do, you can use the output to try to figure out if you changed the parts you wanted to. ("Passing" a regression test is a bad thing, if your test captured the fact that you used to have
            • If the old behavior was what you "wanted" but different behavior than what your requirements state, you have problems that testing will not solve.
              • But what we're talking about is legacy code, where you probably don't even have requirements, or the requirements are so out of date that they're useless.

                Now what?

                Regression tests can help pinpoint the results of code changes - for good or bad. If the old behavior was what you wanted, then you undo your changes.

                Again - are you being purposefully obtuse?
                • "But what we're talking about is legacy code, where you probably don't even have requirements, or the requirements are so out of date that they're useless."

                  I don't see any reason why legacy code is less likely to have requirements than new code. In any case, if the requirements are not updated with the code, then you have a flaw in your development process.

                  You can certainly perform regression tests even if your process is flawed in other ways, I'm not disputing that.

                  "Again - are you being purposefully obt
                  • If you don't have regression tests, then you have a flaw in your development process.

                    That's what we're all talking about here.

                    You've ignored that from the beginning, and that's what I keep pointing out again and again.
                    • I've never objected to regression tests in this thread, apparently you haven't been paying attention.
                    • "Regression testing can be very handy as long as the tests reflect what the code is supposed to do." (Emphasis added.)

                      This statement is the crux of our disagreement. I assert that having regression testing is handy, regardless of whether the code is doing what it is supposed to be doing, whether the tests reflect what the code is supposed to do, or basically any other factor.

                      There are a few factors which make them valuable to me, even in the worst of conditions:

                      1) It provides kind of a minimal environm
                    • Then I suggest you write a single regression test for all of your projects since any arbitrary regression test can meet your minimum criteria of "regardless of ... whether the tests reflect what the code is supposed to do".
                    • Yes, yes - clearly by taking your flippant responses seriously I deserve to have you deride my post without any attempt to understand my core sentiment:

                      Even a bad regression test is better than no regression test.
  • Try to get your hands on Working effectively with legacy code [amazon.com] by Michael Feathers.
  • First of all you need to decide what you want to achieve from your testing. If you're following a methodology such as the 'V'-cycle, then the point of unit testing is to verify that the code correctly implements the design (system testing is where you check that the system implements the requirements).

    Many replies here are along the lines of "we don't have any documentation" - well in that case, you can't do truly meaningful testing. You can test what you think the code should do, but that'll always be
    • Writing tests for undocumented code serves a very important purpose: the tests will act as documentation for what behaviour you expect of the code. Those tests can then serve as a guide to writing documentation, in the knowledge that they document the actual behaviour of the code (since otherwise the tests would fail).

      Documenting legacy code that doesn't have test cases without at the same time building a test suite is a nightmare.

      • I think here that you've hit upon a fundamental issue - 'expected behaviour' as opposed to 'intended behaviour'.

        With no documentation of the existing code, you can write as many tests as you like, but you still can't prove that the code performs as it was originally intended to - you can only prove that the code does what the code does.

        Granted that this is of some use for the purpose of creating a regression test suite, but it doesn't alter the fact that your tests can't, by definition, find bugs in the e
  • by gstover ( 454162 ) on Friday May 06, 2005 @03:12PM (#12455156) Homepage

    A recent article in print about automated unit tests for legacy code was

    "Managing That Millstone"
    By Michael Feathers
    Software Development
    January 2005
    http://www.sdmagazine.com/documents/s=9472/sdm0501c/sdm0501c.html [sdmagazine.com]

    It included suggestions for how to inject unit tests into code which isn't loosely coupled, some tips on how to refactor to get loosely coupled interfaces, & what you can do when neither of those approaches will work. It was a valuable & enjoyable read for me, at least.

    gene

  • Others have touched on this, but you shouldn't be looking to write unit tests just to say, "Hey, I've tested some of this code." Wait till you have to change something or add a new feature, then focus your energy on writing tests in those areas you need to protect. Then make your changes.
  • ...you are wasting your time.

    Unit testing is for finding bugs early on (preferably design errors, but also coding errors).

    If the code is already written and works, then it's not likely to be worth the effort to add random unit tests all over the place. What you need then is either (a) stress testing, to discover hidden bugs, or (b) regression tests, to make sure the software keeps working, even after programmers have "improved" upon it.

    • And guess what, unit tests are very much a part of a good regression test suite... So writing unit tests is most certainly not a waste of time even if the software is "working". Besides, I've lost count of the number of times I've found serious errors in "working" code when adding unit tests.
  • ... there was an ADL [opengroup.org]-based toolset called, if memory serves, JavaSpec, which made API testing hard but doable. As opposed to "let's not but say we did". I admit I used a hacked-up subset, but for large-scale problems being able to generate tests and test data sets via a tool was A Good Thing.

    Even worth learning ADL (:-))

    --dave

  • If the product already exists, then you know what it is supposed to do. All you have to do is come up with scenarios to test what it does. You should already have a users guide, so you basically go to your users guide and look to see what it says it's supposed to do. Then start testing against the guide.
  • Hah!

    You are so screwed. Writing tests for untested code is a thankless job. You are going to find so many bugs, and everyone is going to get really pissed off about that new hire that is rocking the boat complaining about "quality problems".

    You are in a no win situation. They will tell you your tests are too picky, that no one will use it like that. Unit testing is thankless, you can't argue. Given that there was no test plan, I bet there isn't even a spec! Where there is smoke there is fire.

    I'd sta

    • Dude, you just pretty much defined all QA everywhere. The only thing you missed was the part about being crammed in at the end of the release schedule after development was days late on their end and being expected not to let the date slip.
  • ...By Michael Feathers. The scenario that you may find is "we can't refactor until we have unit tests and we can't have unit tests until we refactor". The book has some strategies for getting around that paradox. You may find, however, that some code is essentially un-testable as written.
  • Presumably since these components are all built, they have some idea of what they are supposed to do, and some sense of parameters.

    Make stubs or some other kind of testing tools, hammer data into them, examine the data coming out.

    I guess I don't quite follow the question. You're actually in a better position with already-developed code, for the simple reason that if these things are already developed, they already have a defined purpose (whether that was defined before or after the fact). Your only proble
  • First of all, don't panic, you're not alone! I have done this a few times already. Here's my approach:
    1. Identify "enduring business themes" in the code. This means basically a group of code that can be predictably tested by feeding certain input and expecting certain output. For example, you know that if you order two pens, a purchase order for two pens will come out the other end.
    2. Once you establish a few of these scenarios you can write a few high-level unit tests. These will help you ascertain whether
