Canoo Webtest – Interpreting the test results!

Canoo Webtest (http://webtest.canoo.com) is one of the current leading open source testing options for web applications. I have used both Selenium and Webtest on JEE projects in the recent past, and wanted to share some of the startup ah-hahs that I found in learning Webtest. There is a fair amount of documentation for Webtest available, especially for a younger open source project. However, since we used the Grails plug-in / Groovy scripting option instead of the more widely documented XML approach, it was a bit more difficult to piece together working solutions.

For this first post, I will focus on how to interpret what you get back for testing results when you run your tests.  There are screenshots on the Webtest site which show you a sample of what you should see when a test runs successfully (on the Homepage for the Manual, down at the bottom of the page). This is nice, but it does not really tell you how to actually read and navigate through the reports, particularly when tests don’t pass or you don’t even get as far as seeing this lovely report screen! These tips can really help when it comes to debugging test failures.

When I ran our Webtests, one of three things happened:

  1. The tests all ran, with some passing and some failing. (This is when you get the pretty dashboard report page. Yay!)
  2. The tests seemed to run partway: I saw the little popup saying it was preparing the webtest report, but then I just got the WebTest Temporary Report page, with the line “Running Tests:”.
  3. The tests didn’t run at all, due to a compilation error.

For scenarios 2 and 3, I needed to do some detective work to figure out what change to the test would fix the failure. For failing tests in scenario 1, I found that the reports contained details, not readily apparent, that were extremely helpful in tracking down the root cause of the issue.

Debugging when the tests run (Scenario 1):

When all of the tests execute properly, you will get a WebTest Test Report page that pops up in your default browser. *

The Report page has two sections: a Result Summary and a Test Scenario Overview. When I am trying to debug a single test, just seeing which step number it failed on (under the # of Steps in the Overview section) could usually tell me if I had solved the most recent problem. Most often, however, I use the links to the test steps (aka, the Test Scenario Name) to get to the specific test results for failing tests.

Once you click on a Test Scenario Name, you will get a page that lists all of the steps that were executed for that test scenario, their status, the parameter values used or stored, and the time taken. A really nice feature is that when a step results in a new or refreshed page, there will be a link to a file that has a version of the page. This helps when you want to see whether the field or control that fails in a later step was actually there, prior to the failure.**

If there is a failing step, you can click the Error link or scroll to the bottom of the page to see more about the error. Depending on the type of error, this may include a stacktrace or, if your step uses an xpath locator, for example, show you the value it found compared to the value it expected. My favorite thing about this feature is that it may show what appears to be exactly the same result for the Expected and Found values. That usually meant there was space padding, or a special character (e.g., \n), in the Found value that was invisible on the display. I tracked down that kind of problem by highlighting the values on the page to reveal the extra spaces, or by looking at the page source (e.g., with Firebug’s Inspect Element feature).
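If whitespace padding does turn out to be the culprit, one workaround is to make the comparison itself whitespace-tolerant. Webtest’s text-verification steps accept a regex flag; here is a hedged sketch (the page text and values are hypothetical, and you should confirm the regex attribute against the step reference for your Webtest version):

```groovy
// literal comparison: fails if the page renders "Total: 42.00\n "
verifyText(text: 'Total: 42.00')

// regex comparison that tolerates surrounding whitespace
verifyText(text: /\s*Total:\s*42\.00\s*/, regex: true)
```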

At times, the failure may make no sense based on what you see in the results for other reasons. At that point, I check the console log to see if there was a compilation error behind the scenes that is the real culprit (see the “Scenarios 2 & 3” section of this post for debugging that type of problem).

One last part of this page that was occasionally useful is the header information at the top. It confirms the base URL used, in case you test against different deployments, and the Simulated Browser. You can specify which browser you want Webtest to simulate in the Webtest configuration file (look at …\test\webtest\conf\webtest.properties and add, for example, “wt.config.browser = Firefox3” to simulate Firefox 3). Since my primary browser for testing this project needed to be IE6, but no one on the team had an IE6 installation, this let me run the webtests simulating IE6. There were a few times when this picked up errors specific to IE6 that would otherwise have been missed, while relieving me (as the tester) from having to use IE6 on a separate machine for my manual testing.
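For reference, the relevant lines of webtest.properties might look like this (Firefox3 is the value from the text above; the IE6 name is an assumption, so check it against the Webtest configuration reference):

```properties
# simulate Firefox 3 (value taken from the text above)
wt.config.browser = Firefox3

# or simulate IE6 (name assumed; verify against the Webtest property docs)
# wt.config.browser = IE6
```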

Finding useful information in a stacktrace

If the test runs, but you get some type of failure that includes a stacktrace, you can scroll down through the list until you see something that looks like this:

Caused by: : Problem: failed to create task or type getLoggedIn
Cause: The name is undefined.
Action: Check the spelling.
Action: Check that any custom tasks/types have been declared.
Action: Check that any <presetdef>/<macrodef> declarations have taken place.

I forced this error by commenting out a function named “getLoggedIn” that I call in my test.
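To make that error concrete, here is a hedged sketch of the shape of such a test. The class name, helper, and values are hypothetical; invoke, setInputField, clickButton, and verifyTitle are standard Webtest steps. When getLoggedIn is commented out, the call in the test body falls through to Ant’s task lookup, producing the “failed to create task or type” error above:

```groovy
class UserPagesSmokeTests extends grails.util.WebTest {

    // helper that logs in; commenting this out triggers the Ant error above
    def getLoggedIn() {
        invoke(url: 'login')
        setInputField(name: 'username', value: 'tester')
        setInputField(name: 'password', value: 'secret')
        clickButton(label: 'Log in')
    }

    void testLoggedInPages() {
        webtest('pages behind the login') {
            getLoggedIn()           // resolved as an Ant task/type if undefined
            verifyTitle(text: 'Home')
        }
    }
}
```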

Debugging when the tests don’t even run, or the step failure makes no sense (Scenarios 1, 2 or 3)

This usually meant that I had a syntax error someplace in the test. Since I was writing the tests in Groovy, tracking down exactly how to fix these kinds of errors tended to be either very easy (swap that right parenthesis for a right curly bracket!) or very annoying to resolve. I’ll tackle some of the solutions we came up with for the “very annoying” problems in later posts; for now, this is how I generally figured out what was going on.

When I didn’t get a Webtest report at all, the first stop was the console log. (I ran my tests from the command prompt on a Windows box, using a local server. If you use a central server, you will need to find the logs on that machine, for example through Hudson or by using a remote login to access the server directly.)

Scrolling back in the console window, I would get to a point that said something like this:

Server running. Browse to http://localhost:8080/app-grails
Running tests of type ‘webtest’
[groovyc] Compiling 2 source files to C:\Documents and Settings\Test\.grails\1.1.1\projects\app-grails\test-classes\webtest
[groovyc] org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, C:\app-grails\test\webtest\UserPagesSmokeTests.groovy: 3: unexpected token: ( @ line 3, column 28.
[groovyc]     void testUserPagesCheck() (
[groovyc]                               ^
[groovyc]
[groovyc] 1 error
Compilation Error: Compilation Failed

This tells me that my application server started, but when Webtest tried to compile my test, there was a syntax error (the unexpected token) on line 3 of my .groovy file.  It also shows me visually where it failed and that it is a compilation error. In this case, the visual indicator  is close to the problem, but not quite there, while the column number is actually correct.  I had changed the opening curly brace to a left parenthesis to force this failure. If I open the file in jEdit and turn on the line numbering, I can easily find the line and then use the indicator in the bottom bar to find the column.
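For comparison, this is roughly what line 3 looked like before and after the fix (the method name comes from the compiler output above; the body is hypothetical):

```groovy
// broken: a left parenthesis where the method body's opening brace belongs
// void testUserPagesCheck() (

// fixed:
void testUserPagesCheck() {
    // test steps go here
}
```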

If the tests aren’t running at all, the console log might show something like this:

Server running. Browse to http://localhost:8080/app-grails
Running tests of type ‘webtest’
[groovyc] Compiling 9 source files to C:\Documents and Settings\Test\.grails\1.1.1\projects\app-grails\test-classes\webtest
No tests found in test/webtest to execute …
Server stopped

I forced this by commenting out the line “class UserPagesSmokeTests extends grails.util.WebTest” at the top of my test, which is required to make it a valid webtest.
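A minimal valid webtest skeleton, then, needs at least that class declaration plus one test method. In this sketch, everything besides the extends clause and the step names is hypothetical:

```groovy
// without "extends grails.util.WebTest", the runner reports
// "No tests found in test/webtest to execute"
class UserPagesSmokeTests extends grails.util.WebTest {
    void testUserPagesCheck() {
        webtest('smoke-check the user pages') {
            invoke(url: '')               // base URL comes from webtest.properties
            verifyTitle(text: 'Home')     // assumed page title
        }
    }
}
```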

If you get the Temporary Report page instead of the full Report, it is back to the console log again. In this case, you may see something like this:


Running 1 webtest test…
Running test CodeSnippetTests…ERROR CodeSnippetTests Unable to invoke test method test000propertyShifting
groovy.lang.MissingPropertyException: No such property: sql for class: CodeSnippetTests
at CodeSnippetTests.test000propertyShifting(CodeSnippetTests.groovy:25)

In this case, I am trying to use a variable “sql” in line 25 of my test, but left out the line that defined that variable as my database connection.
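The missing line was a property definition along these lines. The connection details here are made up; groovy.sql.Sql is the standard Groovy JDBC helper, and Sql.newInstance takes the JDBC URL, user, password, and driver class:

```groovy
import groovy.sql.Sql

class CodeSnippetTests extends grails.util.WebTest {

    // without this property, line 25 throws
    // groovy.lang.MissingPropertyException: No such property: sql
    def sql = Sql.newInstance('jdbc:hsqldb:mem:testDb',   // hypothetical URL
                              'sa', '', 'org.hsqldb.jdbcDriver')

    void test000propertyShifting() {
        def count = sql.firstRow('select count(*) c from snippet').c
        // ... assertions against the UI would follow
    }
}
```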

Notes:

* If you do not have a browser running and the page never comes up even though the console log says the tests ran correctly, make sure that you don’t have new plug-in updates for your browser that might prevent it from launching all the way. Once you launch your browser manually and take care of the update popup, you should still be able to access the report page from your Webtest directory at (your directory path)/test/reports/webtest/index.html.
** This page is not a “screen shot” per se; it is stripped of graphics and layout niceties, and the links are non-functional. You can right-click on the page and select the option to make the links live or to load the CSS / images from the original site, if you need these. Making the links live, though, likely won’t help if your Webtest run shuts the server down at the end of the tests!