1. Generate all of the database test data automatically with a program (via ODBC). A user only needs to change the ConnectionString and run the program to create the database, the tables, and the stored procedures, and to insert the data automatically.
To keep multiple testers from interfering with one another, each person should use his or her own database; otherwise one person's mistake can affect everyone else's work. Put the settings that differ per person or per environment (server addresses, connection information, and so on) into a single class as shared static constants, instead of scattering them throughout the code; otherwise switching to a different test server means changing every program, whereas shared constants make it easy to switch or port the test environment. When inserting data, keep the numbering of the test case, the test program, and the data used by that program consistent, so that when a problem appears the data can quickly be ruled out as the cause.
2. Don't rely on or assume the order in which tests run, because JUnit stores its test methods in a Vector, and different platforms may take the methods out of the Vector in a different order.
3. Avoid side effects. Sometimes data must be committed, so side effects are unavoidable; in that case each test must clean up its own side effects afterwards, so that later tests do not see extra data (a simple rollback is usually enough). Keep the test data on everyone's database servers identical; otherwise the tests cannot simply be handed to FJ and run there as well. Prepare a shared "test data set" that keeps both the structure of the database objects (schema, tables, stored procedures, and so on) and the data content consistent.
4. When extending a test class, remember to call the parent class's setUp() and tearDown() methods.
5. Keep the test code together with the working code, and compile and update them in sync (Ant ships with a task that supports JUnit).
6. Use a consistent naming convention for test classes and test methods, for example prefixing the working class name with "Test" to form the test class name.
7. Make sure tests are independent of time; don't rely on data that will expire, which makes the tests hard to reproduce during later maintenance.
8. If the software is aimed at an international market, take internationalization into account when writing the tests; don't test only with your native-language Locale.
9. Use the assert/fail methods and the exception-handling support provided by JUnit as much as possible; they make the test code more concise.
10. Keep each test as small as possible so that it executes quickly.
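As a minimal sketch of several of these points (calling the parent's setUp()/tearDown(), the Test-plus-class-name convention, and keeping environment settings in shared constants); all class names and the connection URL here are hypothetical.

import junit.framework.TestCase;

// Hypothetical base class that keeps environment-specific values in one place
// as shared static constants, so the test environment is easy to switch.
abstract class DatabaseTestCase extends TestCase {
    protected static final String TEST_DB_URL = "jdbc:mysql://test-server/testdb"; // assumed URL

    protected void setUp() throws Exception {
        super.setUp();
        // open the connection and load the shared "test data set" here
    }

    protected void tearDown() throws Exception {
        // clean up any side effects the test left behind
        super.tearDown();
    }
}

// Test class named after the working class it tests: OrderService -> TestOrderService.
public class TestOrderService extends DatabaseTestCase {
    protected void setUp() throws Exception {
        super.setUp();     // point 4: always call the parent's setUp()
    }

    protected void tearDown() throws Exception {
        super.tearDown();  // point 4: always call the parent's tearDown()
    }

    public void testPlaceOrder() {
        // keep each test small, fast, and independent of execution order
    }
}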
Before understanding the difference between mocks and stubs, note one thing: they both exist for the same goal, which is to stand in for the parts a unit depends on, so that what would otherwise be an "integration test" is simplified into a "unit test".

Mock: using a package such as EasyMock, the "depended-on part" is injected into the code under test, and the results returned by its function calls are simulated programmatically from the test code.

Stub: you write your own code to replace the "depended-on part"; the stub is itself a simplified implementation of that dependency.

In practice, when a mock can be used, a stub should not be the first choice. But sometimes a stub is necessary, for example when testing legacy code that does not support "injection"; then the only option is to replace the dependency externally and let a stub do the job.
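As a minimal sketch of the difference, assume a hypothetical PaymentGateway dependency; the mock half uses the MockControl API that the Mocquer article below demonstrates.

import org.jingle.mocquer.MockControl;

// Hypothetical dependency that the code under test calls.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

// Stub: a hand-written, simplified implementation of the dependency.
class AlwaysApprovesGatewayStub implements PaymentGateway {
    public boolean charge(String account, long cents) {
        return true; // canned answer; no programmable behavior, no verification
    }
}

class MockVersusStubSketch {
    void sketch() {
        // Mock: behavior is programmed from the test and verified afterwards.
        MockControl control = MockControl.createControl(PaymentGateway.class);
        PaymentGateway gateway = (PaymentGateway) control.getMock();
        gateway.charge("acct-1", 100L);   // record the expected call...
        control.setReturnValue(true);     // ...and the value it should return
        control.replay();                 // switch the mock to its working state
        // ... exercise the code under test, injecting 'gateway' ...
        control.verify();                 // check that the expected call happened
    }
}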
Now that we've seen JUnit in action, let's step back a little and look at some good practices for writing tests. Although we'll discuss implementing them with JUnit, these practices are applicable to whatever test tool we may choose to use.
Write Tests to Interfaces
Wherever possible, write tests to interfaces, rather than classes. It's good OO design practice to program to interfaces, rather than classes, and testing should reflect this. Different test suites can easily be created to run the same tests against implementations of an interface (see Inheritance and Testing later).
When testing a class, test only the interface it exposes publicly, plus, where appropriate, some of its internal interfaces. When a class inherits methods from another class, those methods should not be tested by the subclass's tests but by the parent class's tests.
Don't Bother Testing JavaBean Properties
It's usually unnecessary to test property getters and setters. It's usually a waste of time to develop such tests. Also, bloating test cases with code that isn't really useful makes them harder to read and maintain.
Maximizing Test Coverage
Test-first development is the best strategy for ensuring that we maximize test coverage. However, sometimes tools can help to verify that we have met our goals for test coverage. For example, a profiling tool such as Sitraka's JProbe Profiler (discussed in Chapter 15) can be used to examine the execution path through an application under test and establish what code was (and wasn't) executed. Specialized tools such as JProbe Coverage (also part of the JProbe Suite) make this much easier. JProbe Coverage can analyze one or more test runs along with the application codebase, to produce a list of methods, and even lines of source code, that weren't executed. The modest investment in such a tool is likely to be worthwhile when it's necessary to implement a test suite for code that doesn't already have one.
Don't Rely on the Ordering of Test Cases
When using reflection to identify test methods to execute, JUnit does not guarantee the order in which it runs tests. Thus tests shouldn't rely on other tests having been executed previously. If ordering is vital, it's possible to add tests to a TestSuite object programmatically. They will be executed in the order in which they were added. However, it's best to avoid ordering issues by using the setUp() method appropriately.
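For instance, a suite that fixes the order might look like the following sketch; AccountTests is a hypothetical TestCase subclass with the usual String-name constructor.

import junit.framework.Test;
import junit.framework.TestSuite;

public class OrderedAccountSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Account tests in a fixed order");
        // Tests run in the order they are added -- though setUp() is usually
        // the better way to establish any state a test needs.
        suite.addTest(new AccountTests("testCreateAccount"));
        suite.addTest(new AccountTests("testDepositIntoAccount"));
        return suite;
    }
}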
Avoid Side Effects
For the same reasons, it's important to avoid side effects when testing. A side effect occurs when one test changes the state of the system being tested in a way that may affect subsequent tests. Changes to persistent data in a database are also potential side effects.
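One common way to keep database side effects from leaking between tests is to wrap each test in a transaction that is rolled back in tearDown(); the sketch below assumes plain JDBC and a hypothetical in-memory database URL.

import java.sql.Connection;
import java.sql.DriverManager;
import junit.framework.TestCase;

public abstract class TransactionalTestCase extends TestCase {
    protected Connection connection;

    protected void setUp() throws Exception {
        super.setUp();
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb"); // assumed URL
        connection.setAutoCommit(false);   // every test runs inside a transaction
    }

    protected void tearDown() throws Exception {
        connection.rollback();             // undo whatever the test wrote
        connection.close();
        super.tearDown();
    }
}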
Read Test Data from the Classpath, Not the File System
It's essential that tests are easy to run. A minimum of configuration should be required. A common cause of problems when running a test suite is for tests to read their configuration from the file system. Using absolute file paths will cause problems when code is checked out to a different location; different file location and path conventions (such as /home/rodj/tests/foo.dat or C:\Documents and Settings\rodj\foo.dat) can tie tests to a particular operating system. These problems can be avoided by loading test data from the classpath, with the Class.getResource() or Class.getResourceAsStream() methods. The necessary resources are usually best placed in the same directory as the test classes that use them.
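A short sketch of loading a fixture from the classpath rather than the file system; the resource name is an assumption, and the file is expected to sit next to the test class on the classpath.

import java.io.InputStream;
import java.util.Properties;
import junit.framework.TestCase;

public class ClasspathFixtureTest extends TestCase {
    private Properties fixture;

    protected void setUp() throws Exception {
        super.setUp();
        // Resolved relative to this class's package on the classpath.
        InputStream in = getClass().getResourceAsStream("test-data.properties");
        assertNotNull("test-data.properties not found on classpath", in);
        fixture = new Properties();
        fixture.load(in);
        in.close();
    }

    public void testFixtureLoaded() {
        assertFalse(fixture.isEmpty());
    }
}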
Avoid Code Duplication in Test Cases
Test cases are an important part of the application. As with application code, the more code duplication they contain, the more likely they are to contain errors. The more code test cases contain, the more of a chore they are to write, and the less likely it is that they will be written. Avoid this problem with a small investment in test infrastructure. We've already seen the use of a private method by several test cases, which greatly simplifies the test methods using it.
Inheritance and Testing
We need to consider the implications of the inheritance hierarchy of classes we test. A class should pass all tests associated with its superclasses and the interfaces it implements. This is a corollary of the "Liskov Substitution Principle", which we'll meet in Chapter 4.
When using JUnit, we can use inheritance to our advantage. When one JUnit test case extends another (rather than extending junit.framework.TestCase directly), all the tests in the superclass are executed, as well as tests added in the subclass. This means that JUnit test cases can use an inheritance hierarchy paralleling the concrete inheritance hierarchy of the classes being tested.
In another use of inheritance among test cases, when a test case is written against an interface, we can make the test case abstract, and test individual implementations in concrete subclasses. The abstract superclass can declare a protected abstract method returning the actual object to be tested, forcing subclasses to implement it.
It's good practice to subclass a more general JUnit test case to add new tests for a subclass of an object or a particular implementation of an interface.
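A sketch of the abstract-test-case pattern just described; the SimpleStack interface and its implementation here are invented purely for illustration.

import junit.framework.TestCase;

// Hypothetical interface under test.
interface SimpleStack {
    void push(Object item);
    Object pop();
    boolean isEmpty();
}

// A trivial implementation, so the sketch is self-contained.
class LinkedSimpleStack implements SimpleStack {
    private final java.util.LinkedList items = new java.util.LinkedList();
    public void push(Object item) { items.addFirst(item); }
    public Object pop() { return items.removeFirst(); }
    public boolean isEmpty() { return items.isEmpty(); }
}

// The abstract test case is written against the interface only.
abstract class AbstractSimpleStackTest extends TestCase {
    // Each concrete subclass supplies the implementation to be tested.
    protected abstract SimpleStack createStack();

    public void testPushThenPop() {
        SimpleStack stack = createStack();
        stack.push("x");
        assertEquals("x", stack.pop());
        assertTrue(stack.isEmpty());
    }
}

// Concrete test class; it inherits and runs all tests from the superclass.
public class LinkedSimpleStackTest extends AbstractSimpleStackTest {
    protected SimpleStack createStack() {
        return new LinkedSimpleStack();
    }
}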
The use of mock objects is a widely employed unit-testing strategy. It shields external and unnecessary factors from testing and helps developers focus on a specific function to be tested. EasyMock is a popular mock tool based on java.lang.reflect.Proxy, which can create dynamic proxy classes/objects according to given interfaces. But it has an inherent limitation from its use of Proxy: it can create mock objects only for interfaces.
Mocquer is a similar mock tool, but one that extends the functionality of EasyMock to support mock object creation for classes as well as interfaces.
Mocquer is based on the Dunamis project, which is used to generate dynamic delegation classes/objects for specific interfaces/classes. For convenience, it follows the class and method naming conventions of EasyMock, but uses a different approach internally.
MockControl is the main class in the Mocquer project. It is used to control the mock object's life cycle and behavior definition. There are four kinds of methods in this class.
public void replay();
public void verify();
public void reset();
The mock object has three states in its life cycle: preparing, working, and checking. Figure 1 shows the mock object life cycle.
Figure 1. Mock object life cycle
replay() changes the mock object's state to the working state. All method invocations on the mock object in this state will follow the behavior defined in the preparing state. After verify() is called, the mock object is in the checking state. MockControl will compare the mock object's predefined behavior and actual behavior to see whether they match. The match rule depends on which kind of MockControl is used; this will be explained in a moment. The developer can use replay() to reuse the predefined behavior if needed. Call reset(), in any state, to clear the history state and change back to the initial preparing state.
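Put together, the life cycle typically looks like the following sketch (Foo is the sample class defined later in this article):

// Preparing state: record the expected calls and define their behavior.
MockControl control = MockControl.createControl(Foo.class);
Foo foo = (Foo) control.getMock();
foo.bar(10);
control.setReturnValue("ok");

control.replay();   // working state: the mock now plays back the defined behavior
// ... exercise the code under test, which is expected to call foo.bar(10) ...
control.verify();   // checking state: fails if the expected calls did not happen

control.reset();    // back to the preparing state; behavior can be defined again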
public static MockControl createNiceControl(...);
public static MockControl createControl(...);
public static MockControl createStrictControl(...);
Mocquer provides three kinds of MockControls: Nice, Normal, and Strict. The developer can choose an appropriate MockControl in his or her test case, according to what is to be tested (the test point) and how the test will be carried out (the test strategy). The Nice MockControl is the loosest. It does not care about the order of method invocations on the mock object, or about unexpected method invocations, which just return a default value (that depends on the method's return type). The Normal MockControl is stricter than the Nice MockControl, as an unexpected method invocation on the mock object will lead to an AssertionFailedError. The Strict MockControl is, naturally, the strictest. If the order of method invocation on the mock object in the working state is different than that in the preparing state, an AssertionFailedError will be thrown. The table below shows the differences between these three kinds of MockControl.

|                   | Nice          | Normal               | Strict               |
|-------------------|---------------|----------------------|----------------------|
| Unexpected Order  | Doesn't care  | Doesn't care         | AssertionFailedError |
| Unexpected Method | Default value | AssertionFailedError | AssertionFailedError |
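For example, the same unexpected call behaves differently under a nice and a strict control. This is a sketch using the Foo sample class defined later in the article; the default return value for an object-returning method is presumably null.

// Nice control: an unexpected invocation just returns a default value.
MockControl niceControl = MockControl.createNiceControl(Foo.class);
Foo niceFoo = (Foo) niceControl.getMock();
niceControl.replay();
String ignored = niceFoo.bar(99);   // not recorded; returns the default value

// Strict (or normal) control: the same unexpected invocation fails the test.
MockControl strictControl = MockControl.createStrictControl(Foo.class);
Foo strictFoo = (Foo) strictControl.getMock();
strictControl.replay();
strictFoo.bar(99);                  // throws AssertionFailedError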
There are two versions for each factory method.
public static MockControl createXXXControl(Class clazz);
public static MockControl createXXXControl(Class clazz,
Class[] argTypes, Object[] args);
If the class to be mocked is an interface or it has a public/protected default constructor, the first version is enough. Otherwise, the second version of the factory method is used to specify the signature and provide arguments to the desired constructor. For example, assuming ClassWithNoDefaultConstructor is a class without a default constructor:
public class ClassWithNoDefaultConstructor {
public ClassWithNoDefaultConstructor(int i) {
...
}
...
}
The MockControl can be obtained through:
MockControl control = MockControl.createControl(
ClassWithNoDefaultConstructor.class,
new Class[]{Integer.TYPE},
new Object[]{new Integer(0)});
public Object getMock();
Each MockControl contains a reference to the generated mock object. The developer can use this method to get the mock object and cast it to the real type.
//get mock control
MockControl control = MockControl.createControl(Foo.class);
//Get the mock object from mock control
Foo foo = (Foo) control.getMock();
public void setReturnValue(... value);
public void setThrowable(Throwable throwable);
public void setVoidCallable();
public void setDefaultReturnValue(... value);
public void setDefaultThrowable(Throwable throwable);
public void setDefaultVoidCallable();
public void setMatcher(ArgumentsMatcher matcher);
public void setDefaultMatcher(ArgumentsMatcher matcher);
MockControl allows the developer to define the mock object's behavior for each method invocation on it. When in the preparing state, the developer can first call one of the mock object's methods to specify which method invocation's behavior is to be defined. Then, the developer can use one of the behavior definition methods to specify the behavior. For example, take the following Foo class:
//Foo.java
public class Foo {
public void dummy() throws ParseException {
...
}
public String bar(int i) {
...
}
public boolean isSame(String[] strs) {
...
}
public void add(StringBuffer sb, String s) {
...
}
}
The behavior of the mock object can be defined as in the following:
//get mock control
MockControl control = MockControl.createControl(Foo.class);
//get mock object
Foo foo = (Foo)control.getMock();
//begin behavior definition
//specify which method invocation's behavior
//to be defined.
foo.bar(10);
//define the behavior -- return "ok" when the
//argument is 10
control.setReturnValue("ok");
...
//end behavior definition
control.replay();
...
Most of the more than 50 methods in MockControl are behavior definition methods. They can be grouped into the following categories.
setReturnValue()
These methods are used to specify that the last method invocation should return the given value. There are seven versions of setReturnValue(), each of which takes a primitive type as its parameter, such as setReturnValue(int i) or setReturnValue(float f). setReturnValue(Object obj) is used for a method that returns an object rather than a primitive. If the given value does not match the method's return type, an AssertionFailedError will be thrown.
It is also possible to add the number of expected invocations into the behavior definition. This is called the invocation times limitation.
MockControl control = ...
Foo foo = (Foo)control.getMock();
...
foo.bar(10);
//define the behavior -- return "ok" when the
//argument is 10. And this method is expected
//to be called just once.
control.setReturnValue("ok", 1);
...
The code segment above specifies that the method invocation, bar(10), can only occur once. How about providing a range?
...
foo.bar(10);
//define the behavior -- return "ok" when the
//argument is 10. And this method is expected
//to be called at least once and at most 3
//times.
control.setReturnValue("ok", 1, 3);
...
Now bar(10) is limited to be called at least once and at most three times. More appealingly, a Range can be given to specify the limitation.
...
foo.bar(10);
//define the behavior -- return "ok" when the
//argument is 10. And this method is expected
//to be called at least once.
control.setReturnValue("ok", Range.ONE_OR_MORE);
...
Range.ONE_OR_MORE is a predefined Range instance, which means the method should be called at least once. If no invocation-count limitation is specified in setReturnValue(), such as setReturnValue("Hello"), Range.ONE_OR_MORE is used as the default invocation-count limitation. There are two other predefined Range instances: Range.ONE (exactly once) and Range.ZERO_OR_MORE (there is no limit on how many times the method can be called).

There is also a special set-return-value method: setDefaultReturnValue(). It defines the return value of the method invocation regardless of the method's parameter values. The invocation times limitation is Range.ONE_OR_MORE. This is known as the method-parameter-values-insensitive feature.
...
foo.bar(10);
//define the behavior -- return "ok" when calling
//bar(int), regardless of the argument value.
control.setDefaultReturnValue("ok");
...
setThrowable
setThrowable(Throwable throwable) is used to define the method invocation's exception-throwing behavior. If the given throwable does not match the exception declaration of the method, an AssertionFailedError will be thrown. The invocation times limitation and method-parameter-values-insensitive features can also be applied.
...
try {
foo.dummy();
} catch (Exception e) {
//skip
}
//define the behavior -- throw ParseException
//when call dummy(). And this method is expected
//to be called exactly once.
control.setThrowable(new ParseException("", 0), 1);
...
setVoidCallable()
setVoidCallable() is used for a method that has a void return type. The invocation times limitation and method-parameter-values-insensitive features can also be applied.
...
try {
foo.dummy();
} catch (Exception e) {
//skip
}
//define the behavior -- no return value
//when calling dummy(). And this method is expected
//to be called at least once.
control.setVoidCallable();
...
ArgumentsMatcher
In the working state, the MockControl will search the predefined behavior when any method invocation happens on the mock object. There are three factors in the search criteria: method signature, parameter values, and invocation times limitation. The first and third factors are fixed. The second factor can be skipped by the parameter-values-insensitive feature described above. More flexibly, it is also possible to customize the parameter value match rule: setMatcher() can be used in the preparing state with a customized ArgumentsMatcher.
public interface ArgumentsMatcher {
public boolean matches(Object[] expected,
Object[] actual);
}
The only method in ArgumentsMatcher, matches(), takes two arguments. One is the expected parameter values array (null, if the parameter-values-insensitive feature is applied). The other is the actual parameter values array. A true return value means that the parameter values match.
...
foo.isSame(null);
//set the argument match rule -- always match
//no matter what parameter is given
control.setMatcher(MockControl.ALWAYS_MATCHER);
//define the behavior -- return true when calling
//isSame(). This method is expected
//to be called exactly once.
control.setReturnValue(true, 1);
...
There are three predefined ArgumentsMatcher instances in MockControl. MockControl.ALWAYS_MATCHER always returns true when matching, no matter what parameter values are given. MockControl.EQUALS_MATCHER calls equals() on each element in the parameter value array. MockControl.ARRAY_MATCHER is almost the same as MockControl.EQUALS_MATCHER, except that it calls Arrays.equals() instead of equals() when the element in the parameter value array is an array type. Of course, the developer can implement his or her own ArgumentsMatcher.

A side effect of a customized ArgumentsMatcher is that it defines the method invocation's out parameter value.
...
//just to demonstrate the function
//of out parameter value definition
foo.add(new StringBuffer(), null);
//set the argument match rule -- always
//match no matter what parameter given.
//Also defined the value of out param.
control.setMatcher(new ArgumentsMatcher() {
public boolean matches(Object[] expected,
Object[] actual) {
((StringBuffer)actual[0])
.append(actual[1]);
return true;
}
});
//define the behavior of add().
//This method is expected to be called
//exactly once.
control.setVoidCallable(1);
...
setDefaultMatcher() sets the MockControl's default ArgumentsMatcher instance. If no specific ArgumentsMatcher is given, the default ArgumentsMatcher will be used. This method should be called before any method-invocation behavior definition. Otherwise, an AssertionFailedError will be thrown.
//get mock control
MockControl control = ...;
//get mock object
Foo foo = (Foo)control.getMock();
//set default ArgumentsMatcher
control.setDefaultMatcher(
MockControl.ALWAYS_MATCHER);
//begin behavior definition
foo.bar(10);
control.setReturnValue("ok");
...
If setDefaultMatcher() is not used, MockControl.ARRAY_MATCHER is the system default ArgumentsMatcher. Below is an example that demonstrates Mocquer's usage in unit testing.
Suppose there is a class named FTPConnector.
package org.jingle.mocquer.sample;
import java.io.IOException;
import java.net.SocketException;
import org.apache.commons.net.ftp.FTPClient;
public class FTPConnector {
//ftp server host name
String hostName;
//ftp server port number
int port;
//user name
String user;
//password
String pass;
public FTPConnector(String hostName,
int port,
String user,
String pass) {
this.hostName = hostName;
this.port = port;
this.user = user;
this.pass = pass;
}
/**
* Connect to the ftp server.
* The max retry times is 3.
* @return true if succeed
*/
public boolean connect() {
boolean ret = false;
FTPClient ftp = getFTPClient();
int times = 1;
while ((times <= 3) && !ret) {
try {
ftp.connect(hostName, port);
ret = ftp.login(user, pass);
} catch (SocketException e) {
} catch (IOException e) {
} finally {
times++;
}
}
return ret;
}
/**
* get the FTPClient instance
* It seems that this method is a nonsense
* at first glance. Actually, this method
* is very important for unit test using
* mock technology.
* @return FTPClient instance
*/
protected FTPClient getFTPClient() {
return new FTPClient();
}
}
The connect() method tries to connect to an FTP server and log in. If it fails, it retries up to three times. If the operation succeeds, it returns true; otherwise, it returns false. The class uses org.apache.commons.net.ftp.FTPClient to make a real connection. There is a protected method, getFTPClient(), in this class that looks like nonsense at first glance. Actually, this method is very important for unit testing using mock technology; I will explain that later.
A JUnit test case, FTPConnectorTest, is provided to test the connect() method logic. Because we want to isolate the unit test environment from any other factors, such as an external FTP server, we use Mocquer to mock the FTPClient.
package org.jingle.mocquer.sample;
import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;
import org.jingle.mocquer.MockControl;
import junit.framework.TestCase;
public class FTPConnectorTest extends TestCase {
/*
* @see TestCase#setUp()
*/
protected void setUp() throws Exception {
super.setUp();
}
/*
* @see TestCase#tearDown()
*/
protected void tearDown() throws Exception {
super.tearDown();
}
/**
* test FTPConnector.connect()
*/
public final void testConnect() {
//get strict mock control
MockControl control =
MockControl.createStrictControl(
FTPClient.class);
//get mock object
//why final? try to remove it
final FTPClient ftp =
(FTPClient)control.getMock();
//Test point 1
//begin behavior definition
try {
//specify the method invocation
ftp.connect("202.96.69.8", 7010);
//specify the behavior
//throw IOException when call
//connect() with parameters
//"202.96.69.8" and 7010. This method
//should be called exactly three times
control.setThrowable(
new IOException(), 3);
//change to working state
control.replay();
} catch (Exception e) {
fail("Unexpected exception: " + e);
}
//prepare the instance
//the overridden method is the bridge to
//introduce the mock object.
FTPConnector inst = new FTPConnector(
"202.96.69.8",
7010,
"user",
"pass") {
protected FTPClient getFTPClient() {
//do you understand why declare
//the ftp variable as final now?
return ftp;
}
};
//in this case, the connect() should
//return false
assertFalse(inst.connect());
//change to checking state
control.verify();
//Test point 2
try {
//return to preparing state first
control.reset();
//behavior definition
ftp.connect("202.96.69.8", 7010);
control.setThrowable(
new IOException(), 2);
ftp.connect("202.96.69.8", 7010);
control.setVoidCallable(1);
ftp.login("user", "pass");
control.setReturnValue(true, 1);
control.replay();
} catch (Exception e) {
fail("Unexpected exception: " + e);
}
//in this case, the connect() should
//return true
assertTrue(inst.connect());
//verify again
control.verify();
}
}
A strict MockControl is created. The mock object variable declaration has a final modifier because the variable will be used in the inner anonymous class; otherwise, a compilation error will be reported.

There are two test points in the test method. The first test point is when FTPClient.connect() always throws an exception, meaning FTPConnector.connect() will return false as a result.
try {
ftp.connect("202.96.69.8", 7010);
control.setThrowable(new IOException(), 3);
control.replay();
} catch (Exception e) {
fail("Unexpected exception: " + e);
}
The MockControl specifies that, when connect() is called on the mock object with the parameters 202.96.69.8 as the host IP and 7010 as the port number, an IOException will be thrown. This method invocation is expected to be called exactly three times. After the behavior definition, replay() changes the mock object to the working state. The try/catch block here is needed to satisfy the declaration of FTPClient.connect(), which has an IOException defined in its throws clause.
FTPConnector inst = new FTPConnector("202.96.69.8",
7010,
"user",
"pass") {
protected FTPClient getFTPClient() {
return ftp;
}
};
The code above creates an FTPConnector instance with its getFTPClient() method overridden. It is the bridge that introduces the created mock object into the target to be tested.
assertFalse(inst.connect());
The expected result of connect() should be false at this test point.
control.verify();
Finally, change the mock object to the checking state.
The second test point is when FTPClient.connect() throws exceptions two times and succeeds on the third time, and FTPClient.login() also succeeds, meaning FTPConnector.connect() will return true as a result.

This test point follows the procedure of the previous test point, except that the mock object should first be changed back to the preparing state using reset().
Mock technology isolates the target to be tested from other external factors. Integrating mock technology into the JUnit framework makes the unit test much simpler and neater. EasyMock is a good mock tool that can create a mock object for a specified interface. With the help of Dunamis, Mocquer extends the function of EasyMock. It can create mock objects not only for interfaces, but also classes. This article gave a brief introduction to Mocquer's usage in unit testing. For more detailed information, please refer to the references below.
Lu Jian is a senior Java architect/developer with four years of Java development experience.
This article is about whether we, meaning professional software developers, should be doing more automated testing. This article is targeted to those who find themselves repeating manual tests over and over again, be it developers, testers, or anyone else.
In this article I ask:
Note that this article doesn't discuss whether we should be testing (be it automated or manual). Nor is it about any particular type of testing, be it unit testing, system testing, or user-acceptance testing.
Instead, this article is intended to act as a prompt for discussion and contains opinions based upon my own experience.
A real-world example, Part 1
Let's start with the job that I recently worked on. It involved small changes to a moderately complex online Website—nothing special, just some new dynamically-generated text and images.
Because the system had no unit tests and was consequently not designed in a way that facilitated unit testing, isolating and unit-testing my code changes proved to be difficult. Consequently, my unit tests were more like miniature system tests in that they indirectly tested my changes by exercising the new functionality via the Web interface.
I figured automating the tests would prove pointless, as I guessed I would be running them only a couple of times. So I wrote some plain English test plans that described the manual steps for executing the tests.
Coding and testing the first change was easy. Coding and testing the second change was easy too, but then I also had to re-execute the tests for the first change to make sure I hadn't broken anything. Coding and testing the third change was easy, but then I had to re-execute the tests for the first and second changes to make sure I hadn't broken them. Coding and testing the fourth change was easy, but...well, you get the picture.
What a drag
Whenever I had to rerun the tests, I thought: "Gee running these tests is a drag."
I would then run the tests anyway and, on a couple of occasions, found that I had introduced a defect. On such occasions, I thought: "Gee I'm glad I ran those tests."
Since these two thoughts seemed to contradict each other, I started measuring how long it was actually taking me to run the tests.
Once I had obtained a stable development build, I deployed my changes into a system-testing environment where somebody else would test them. However, because the environment differed, I figured I should re-execute the tests just to make sure they worked there.
Somebody then system-tested my changes and found a defect (something that wasn't covered by my tests). So I had to fix the defect in my development environment, rerun the tests to make sure I hadn't introduced a side effect, and then redeploy.
The end result
By the time I'd finished everything, I had executed the full test suite about eight times. My time measurements suggested that each test cycle took about 10 minutes to execute. So that meant I had spent roughly 80 minutes on manual testing. And I was thinking to myself: "Would it have been easier if I'd just automated those tests early on?"
Do you ever test anything just a couple of times?
I believe the mistake I made in underestimating the effort required to test my work is a mistake also made by other developers. Developers are renowned for underestimating effort, and I don't think that test-effort estimation is any different. In fact, given the disregard many developers have for testing, I think they would be more likely to underestimate the effort required to test their code than they would be to underestimate anything else.
The main cause of this test-effort blow-out is not that executing the test cycle in itself takes longer than expected, but that the number of test cycles that need to be executed over the life of the software is greater than expected. In my experience, it seems that most developers think they'll only test their code a couple of times at most. To such a developer I ask this question: "Have you ever had to test anything just a couple of times?" I certainly haven't.
But what about my JUnit tests?
Sure, you might write lots of low-level JUnit tests, but I'm talking about the higher-level tests that test your system's end-to-end functionality. Many developers consider writing such tests, but put the task off because it seems like a lot of effort given the number of times they believe they will execute the tests. They then proceed to manually execute the tests and often reach a point where the task becomes a drag—which is usually just after the point when they thought they wouldn't be executing the tests any more.
Alternately, the developer working on a small piece of work on an existing product (as I was doing) can also fall into this trap. Because it's such a small piece of work, there's no point in writing an automated test. You're only going to execute it a couple of times—right? Not necessarily, as I learned in my own real-world example.
Somebody will want to change your software
While developers typically underestimate the number of test cycles, I think they're even less likely to consider the effort required for testing the software later. Having finished a piece of software and probably manually testing it more times than they ever wanted to, most developers are sick of the software and don't want to think about it any more. In doing so, they are ignoring the likelihood that at some time, somebody will have to test the code again.
Many developers believe that once they write a piece of software, it will require little change in the future and thus require no further testing. Yet in my experience, almost no code that I write (especially if it's written for somebody else) goes through the code-test-deploy lifecycle once and never touched again. In fact, even if the person that I'm writing the code for tells me that it's going to be thrown away, it almost never is (I've worked on a number of "throw-away" prototypes that were subsequently taken into production and have stayed there ever since).
Even if the software doesn't change, the environment will
Even if nobody changes your software, the environment that it lives within can still change. Most software doesn't live in isolation; thus, it cannot dictate the pace of change.
Virtual machines are upgraded. Database drivers are upgraded. Databases are upgraded. Application servers are upgraded. Operating systems are upgraded. These changes are inevitable—in fact, some argue that, as a best practice, administrators should proactively ensure that their databases, operating systems, and application servers are up-to-date, especially with the latest patches and fixes.
Then there are the changes within your organization's proprietary software. For example, an enterprise datasource developed by another division in your organization is upgraded—and you are entirely dependent upon it. Alternately, suppose your software is deployed to an application server that is also hosting some other in-house application. Suddenly, for the other application to work, it becomes critical that the application server is upgraded to the latest version. Your application is going along for the ride whether it wants to or not.
Change is constant, inevitable, and entails risk. To mitigate the risk, you test—but as we've seen, manual testing quickly becomes impractical. I believe that more automated testing is the way around this problem.
But what about change management?
Some argue that management should be responsible for coordinating changes; they should track dependencies and ensure that if one of your dependencies changes, you will retest. Cross-system changes will be synchronized with releases. However, in my experience, these dependencies are complex and rarely tracked successfully. I propose an alternate approach—that software systems are better able to both test themselves and cope with inevitable change.
About the author
Ben Teese is a software engineer at Shine Technologies.