Creating a Functional Test for the Product Catalog
Before any performance tuning takes place, you need a reliable functional test that guarantees the code you write does exactly what the business requires. In a typical enterprise scenario you’ll be exposing a stateless session bean that implements a catalog facade. Rather than jump straight into the bean implementation, write a JUnit test that describes the expected behavior. This test becomes a contract you can rely on during every subsequent change.
Take a product catalog that groups items by category. The service interface contains a method getProductsByCategory(String category). A good starting point for the test is a scenario that asks for all products in the “Snowboard” category and verifies two things: the number of returned items and that each item reports the correct category.
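To make the contract concrete, here is a minimal sketch of what that remote interface might look like in an EJB 2.x-style application. The Catalog and ProductDetails names come from the text; the exact shape of the interface is an assumption for illustration.

import java.rmi.RemoteException;
import java.util.Collection;
import javax.ejb.EJBObject;

// Hypothetical sketch of the Catalog remote interface the test relies on.
// In EJB 2.x, remote business methods must declare RemoteException.
public interface Catalog extends EJBObject {
    // Returns a Collection of ProductDetails value objects.
    Collection getProductsByCategory(String category) throws RemoteException;
}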
Here’s the full test class. Notice how the test is intentionally independent of any implementation details. The Catalog interface is used solely as a contract; the test cares only about the returned data. If the test passes, you have confidence that the EJB behaves correctly, regardless of the internals of the business logic or the persistence layer.
import java.util.Collection;
import java.util.Iterator;

import junit.framework.TestCase;

public class CatalogTest extends TestCase {

    public CatalogTest(String name) {
        super(name);
    }

    public void testGetProducts() throws Exception {
        String snowboardCategory = "Snowboard";

        // getCatalogHome() is assumed to be a helper on this class that
        // performs the JNDI lookup for the bean's home interface.
        Catalog catalog = (Catalog) getCatalogHome().create();

        Collection products = catalog.getProductsByCategory(snowboardCategory);
        assertEquals(25, products.size());

        Iterator productIter = products.iterator();
        while (productIter.hasNext()) {
            ProductDetails product = (ProductDetails) productIter.next();
            assertEquals(snowboardCategory, product.getCategory());
        }

        catalog.remove();
    }
}
Run this test locally. If it fails, you know that the business logic or the EJB container is misbehaving. Once the test succeeds, you can move on to measuring how long the call takes. Because the test uses only the public interface, any changes to the implementation that preserve the contract will not break the test. This isolation is vital for confidence during performance tuning.
While this test verifies functional correctness, it also sets the stage for performance measurement. The same test will later be wrapped by JUnitPerf to enforce a response‑time ceiling. That’s a key advantage: you write one test, and you can reuse it for multiple purposes - functional regression, baseline performance, and scalability checks. The test becomes the single source of truth, so every stakeholder sees the same data and the same metrics.
Remember that the test’s success depends on a pre‑loaded data set: 25 products in the “Snowboard” category. In practice, you’ll populate a test database with a fixture or use a mocking framework that simulates the data store. Keep the fixture small enough that the test remains fast, yet large enough to emulate realistic use. With a stable baseline test, you can now turn to profiling and measuring the time it takes to execute the method under test.
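A minimal way to guarantee that baseline is to load the fixture in setUp() and clear it in tearDown(). The sketch below assumes a hypothetical FixtureLoader helper; in a real project this might be a SQL script or a tool such as DbUnit.

import junit.framework.TestCase;

// Hypothetical fixture handling; FixtureLoader is an assumed helper,
// not part of JUnit or the catalog code shown above.
public class CatalogFixtureTest extends TestCase {

    protected void setUp() throws Exception {
        // Insert exactly 25 snowboard products before each test run.
        FixtureLoader.loadProducts("Snowboard", 25);
    }

    protected void tearDown() throws Exception {
        // Remove the fixture so tests stay independent of each other.
        FixtureLoader.clearProducts("Snowboard");
    }
}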
Pinpointing Performance Bottlenecks with a Code Profiler
Having a functional test is only the first step. To improve response time, you need to know where the time is spent. A code profiler is your best friend for that task. By instrumenting the running application, the profiler records which methods consume the most CPU time and how often they are called. The output gives you a clear map of the hot spots you should focus on.
In the catalog scenario, a typical profiler run reveals that CatalogEJB.getProductsByCategory() dominates the call stack. That means the time you spend creating ProductDetails objects from raw database rows is the real bottleneck, not the database query itself or the HTTP servlet rendering the page. The profiler also shows that obtaining a database connection is negligible - just a few milliseconds - so a single connection is not the source of delay.
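To picture what the profiler is flagging, the hot path typically looks something like the loop below. This is an illustrative reconstruction, not the actual CatalogEJB source; the ProductDetails constructor and column names are assumptions.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;

// Illustrative sketch of the row-to-object transformation the profiler
// identifies as the hot spot: one ProductDetails object per database row.
public class ProductRowMapper {
    public Collection toProductDetails(ResultSet resultSet) throws SQLException {
        Collection products = new ArrayList();
        while (resultSet.next()) {
            // Column names are assumptions for illustration.
            products.add(new ProductDetails(
                resultSet.getString("name"),
                resultSet.getString("category"),
                resultSet.getDouble("price")));
        }
        return products;
    }
}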
When you run the profiler on the full request cycle - including the remote EJB call, the business logic, and the servlet output - you’ll see a breakdown like this: 30 % for data transformation, 20 % for EJB container overhead, 15 % for HTTP processing, and the remaining 35 % for other layers. That breakdown immediately tells you where the most effective improvements can be made. In this case, you’ll concentrate on the transformation logic.
One might be tempted to look at the database schema or query plan next. However, the profiler’s data shows that the query runs quickly. You’ve already validated that with a separate SQL profiler or by measuring the time spent inside the executeQuery() call. Since the query itself is not a problem, you can ignore it for now and focus on the Java code that turns rows into objects.
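A crude but effective way to confirm that is to bracket the JDBC call with a timer, as in this sketch:

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: measure only the query execution, excluding the
// time spent transforming rows into objects afterwards.
public class QueryTimer {
    public ResultSet timeQuery(PreparedStatement statement) throws SQLException {
        long start = System.currentTimeMillis();
        ResultSet results = statement.executeQuery();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("executeQuery() took " + elapsed + " ms");
        return results;
    }
}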
Another advantage of profiling is that it eliminates guesswork. Instead of arbitrarily refactoring the presentation layer or adding caching, you can target the exact method that consumes the majority of resources. Even if you later decide to introduce caching or change the query, the profiler will show the new hot spots, keeping your focus on the right areas. This iterative loop - profile, refactor, re‑profile - ensures that every change moves you closer to the desired performance goal.
Once the profiler has identified the bottleneck, you can start writing automated performance tests to measure progress. By wrapping the functional test in JUnitPerf, you get a test that will fail if the method takes longer than a specified threshold. The next section explains how to set that up.
Measuring Single‑User Response Time with JUnitPerf TimedTest
With the bottleneck identified, you can now quantify the performance requirement: “the page that lists up to 25 products must load in under one second for a single user.” JUnitPerf provides a lightweight wrapper that turns any JUnit test into a performance test. The TimedTest class measures the elapsed time of the wrapped test and fails if the time exceeds a configured limit.
Here’s the complete wrapper test. It creates a new instance of the functional test and passes it to a TimedTest along with a maximum time of 1,000 milliseconds. The suite() method returns the timed test, and the main() method lets you run it directly from the command line.
import junit.framework.Test;
import com.clarkware.junitperf.TimedTest;

public class CatalogResponseTimeTest {

    public static Test suite() {
        long maxTimeInMillis = 1000;
        Test test = new CatalogTest("testGetProducts");
        return new TimedTest(test, maxTimeInMillis);
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}
When you run this test, the output shows whether the response time meets the one‑second requirement. If the test fails, the message will include the actual elapsed time, helping you determine whether you need to optimize further or if the requirement itself is too aggressive.
It’s important to remember that TimedTest measures the entire test method, including any setUp() and tearDown() logic. If your fixture creates a database connection or performs cleanup, that time will count toward the measured value. Adjust the maximum time accordingly, or move expensive setup steps outside the timed block. Keeping the test focused on the core logic ensures that the performance measurement reflects what matters to the end user.
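By default, TimedTest waits for the wrapped test to finish before checking the elapsed time. If you want the failure reported as soon as the limit is exceeded, JUnitPerf’s TimedTest also accepts a waitForCompletion flag; passing false makes it fail immediately once the clock runs out. A sketch:

import junit.framework.Test;
import com.clarkware.junitperf.TimedTest;

public class CatalogFailFastTest {
    public static Test suite() {
        Test test = new CatalogTest("testGetProducts");
        // false = do not wait for completion; fail as soon as 1,000 ms pass.
        return new TimedTest(test, 1000, false);
    }
}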
Once you have a passing test, you can commit it to your build pipeline. Every future change that introduces a regression will cause the test to fail, giving you instant feedback. The test becomes a safety net that guarantees the catalog’s single‑user response time stays below the threshold, even as new features or bug fixes are added.
In the next phase, you’ll extend this approach to cover concurrent users. By adding a load test wrapper, you can validate that the application scales as expected. The following section shows how to do that.
Validating Scalability Through Concurrent Load Tests
Performance is not only about a single user; it’s also about how the system behaves under load. Suppose the business now requires the catalog page to stay under one second when five users hit the service simultaneously. JUnitPerf’s LoadTest class lets you create multiple threads, each executing the same wrapped test. The test fails if any thread exceeds the time limit.
The load test wrapper is very similar to the timed test. It creates a TimedTest for the functional test, then passes that into a LoadTest along with the desired number of concurrent users. Below is the complete code snippet.
import junit.framework.Test;
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;

public class CatalogLoadTest {

    public static Test suite() {
        int concurrentUsers = 5;
        long maxTimeInMillis = 1000;

        Test test = new CatalogTest("testGetProducts");
        Test timedTest = new TimedTest(test, maxTimeInMillis);
        return new LoadTest(timedTest, concurrentUsers);
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}
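By default, LoadTest starts all threads at once. JUnitPerf also provides a Timer abstraction (ConstantTimer, RandomTimer) for staggering thread starts, which models users arriving over time rather than simultaneously. A sketch of that variant:

import junit.framework.Test;
import com.clarkware.junitperf.ConstantTimer;
import com.clarkware.junitperf.LoadTest;
import com.clarkware.junitperf.TimedTest;
import com.clarkware.junitperf.Timer;

public class CatalogRampedLoadTest {
    public static Test suite() {
        int concurrentUsers = 5;
        // Start one new user every 500 ms instead of all five at once.
        Timer rampUp = new ConstantTimer(500);
        Test timedTest = new TimedTest(new CatalogTest("testGetProducts"), 1000);
        return new LoadTest(timedTest, concurrentUsers, rampUp);
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}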
Run the load test and observe the output. Initially, you’ll see the first user’s response time within the limit, but subsequent users will suffer as the system saturates a shared resource - most often the database connection pool. The profiler will confirm this: the method that obtains a connection will now dominate the timeline, showing up as a contention point.
Once the bottleneck is identified, you can refactor the data access layer to use a connection pool. A pool allows multiple threads to borrow connections concurrently, eliminating the queue that was causing delays. Adjust the pool size to match the expected concurrency - five connections for five users is a good starting point. After making the change, rerun the load test to confirm that all users stay within the one‑second limit.
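In a J2EE application, the usual way to do this is to look up a container‑managed DataSource through JNDI rather than opening connections with DriverManager. A minimal sketch, assuming the pool is bound at java:comp/env/jdbc/CatalogDS (the JNDI name is an assumption; use whatever your server configuration defines):

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CatalogConnectionFactory {

    // Hypothetical JNDI name for the pooled DataSource.
    private static final String DATA_SOURCE_NAME = "java:comp/env/jdbc/CatalogDS";

    public Connection getConnection() throws NamingException, SQLException {
        InitialContext context = new InitialContext();
        DataSource dataSource = (DataSource) context.lookup(DATA_SOURCE_NAME);
        // With a pooled DataSource, close() returns the connection to the
        // pool instead of tearing it down, so borrowing is cheap.
        return dataSource.getConnection();
    }
}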
It’s worth noting that pooling is a well‑known pattern, but its effectiveness depends on correct configuration and usage. Misconfigured pools can lead to leaks or excessive overhead. Use a proven library or application server feature, and always verify with a profiler that the pool is indeed being utilized.
With the load test passing, you have a measurable assurance that the application scales. You can now add the test to your continuous integration process, so any future change that breaks scalability will surface immediately. The combination of a functional test, a timed test, and a load test gives you a complete performance safety net.
Integrating Performance Tests into Your Continuous Build Pipeline
Having the tests in place is only half the battle. To get real value, you need them to run automatically as part of your build process. Most continuous integration servers - Jenkins, TeamCity, Bamboo - support running arbitrary Java tests. Add the JUnitPerf tests to your test suite, and configure the CI job to fail when the tests fail.
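Because JUnitPerf wrappers are ordinary JUnit Tests, one convenient pattern is to aggregate them into a single suite that the CI job invokes as one entry point. A sketch:

import junit.framework.Test;
import junit.framework.TestSuite;

// Aggregate suite the CI server can run as a single entry point.
public class PerformanceTestSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Catalog performance tests");
        suite.addTest(CatalogResponseTimeTest.suite());
        suite.addTest(CatalogLoadTest.suite());
        return suite;
    }
}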
When you commit a change, the CI server will compile the code, run the entire suite, and report the results. If the new code introduces a regression that makes the response time exceed one second or the load test fail, the build will fail immediately. This early feedback loop prevents performance regressions from creeping into production.
Beyond CI, you can also track performance trends visually. JUnitPerf tests are ordinary JUnit tests, so the XML or HTML reports your build tool already generates for JUnit runs (for example, via Ant’s junitreport task) cover them as well, and that output can be fed into a dashboard. Over time, you’ll see a trend line of response times for the single‑user and load tests. If the line drifts upward, you know something in the code path has changed.
In a larger environment, you may also want to schedule periodic load tests against a staging environment that mirrors production. These “nightly” runs can surface scaling issues that only appear under sustained load, which a one‑off CI run may miss. Combine the nightly results with the CI feedback to maintain a rigorous performance discipline.
Finally, remember that performance testing is not a one‑time activity. As you add new features - say, a recommendation engine that joins several tables - you’ll need to add new functional and performance tests to cover the new code paths. Treat each new feature as a mini‑project: write a functional test, profile, wrap it with JUnitPerf, and add it to CI. Over time, your repository will contain a rich set of tests that guarantee both correctness and performance.
With these steps, you’ve transformed performance tuning from a guesswork exercise into a disciplined, automated practice. The same tests you used to verify business logic now double as safety nets that protect users from slow or unresponsive services. Continuous performance testing becomes a natural part of your development workflow, not a separate, after‑thought activity.