I originally wrote this piece as an article submission for IEEE on "The History of Automated Testing". During my research, I was surprised to learn that nowhere on the net could I find a good description of how automated testing came to be, or anything about the evolution of automated testing technology. So I wrote what I knew of it. Due to a swift rejection from IEEE (see reasons below - not because they did not like the content), I am going to share the article here instead.
Life, the Universe, and Testing
One of the toughest things about starting a company is learning to deal with rejection. When Ed and I started talking to VCs about Newmerix, we had to come to terms with this very quickly. To give you an example of just how bad it was, Ed and I talked to 72 VC firms (this is not a typo) to try to get our Series A funding. Keep in mind I was coming from a huge win as the CTO and co-founder of Service Metrics, Ed had raised over $150MM in his startup lifetime, and we had both made a number of VC firms tens of millions of dollars in the process. But times were tough for raising startup capital, and we had no idea how incredibly bad the market for Series A deals was at that time. Pretty quickly Ed and I shifted our strategy from getting a quick “Yes” to getting a quick “No” so we could focus on the firms that were really interested.
Well, I have to thank Mobius and IDG for taking a gamble on Ed and me again and putting up the first round (they were the two Series A players in Service Metrics, and Mobius had backed Ed before at Asia Online). I also have to give them credit for trusting our vision. We had no code, no team, and a market focus on packaged applications that was about as far from the hot new thing as you could get in 2002. We didn’t even operate out of someone’s spare garage to at least give us some cool points in the deal. They took a flier that we saw the market coming and stuck with us. And after three years, I think they are pretty happy they took the bet.
This blarticle is a rework of another painful rejection that I have gotten during my time at Newmerix. About 2 months ago we had an opportunity to submit an article to IEEE, the esteemed technology and computing journal, on the subject of software testing. Who better to write this article than Newmerix, and what better publication to bring us credibility than IEEE? I thought long and hard about it and decided that it would be interesting to write a technical history of how automated testing tools have evolved. And that is what I sat down and wrote about. All 5500 words of it. Being careful not to plaster Newmerix marketing all over it, we scrambled to finish the article and submit it before the deadline. With bated breath and some sore fingers we waited for a response. And we got one: rejection!
Apparently, to be accepted by IEEE you need to talk about new inventions, not just technical retrospectives. The problem was, we had. Everything about Newmerix’s approach to testing is a new invention. But we had packaged it in such a marketing-neutral way that it ended up looking like a survey of emerging testing methodologies. Silly me, I should have known this.
When life hands you rejections, you make lemonade (or something like that). So I am reprinting the article, so to speak, for Parallax readers. You’ll note that the format of this article is much different from my usual Parallax entries (as it was written for a journal), but I encourage you to read it. One of the things I like so much about this article is that I could not find a single source on the web that actually documented the history of how automated testing tools worked. So I offer it to you, in unedited form, for your educational pleasure.
One of the defining factors of today’s information-based economy is that every corporation depends heavily on software. Software runs our factories, it schedules our shipping fleets, it manages our inventory, and it fulfills most financial transactions. It is almost impossible today to find a business or business process that does not, to some extent, depend on one or more software systems. As with any other requirement of doing business, with increased reliance comes increased risk.
Commensurately, over the last 10 years the phrase “software quality” has emerged as a fundamental component of the daily dialog in most IT departments. Broad publication and awareness of software quality failures (and the costs associated with them) have propelled what was once an afterthought of most IT projects to the top of the list of concerns for many corporate IT initiatives today.
There are a number of factors shining a renewed light on software quality. One of the most important is the renewed scrutiny of IT costs. As the stock market shrank at the beginning of this decade (and corporate IT budgets with it), pressure to do more with less became the mantra of IT. Areas of the IT department with uncontrolled costs became high-risk areas. Fueling this fire was a critical study published by NIST (the U.S. Department of Commerce's National Institute of Standards and Technology) in June 2002. This study found that software defects (poor quality) cost the U.S. economy upwards of a shocking $60 billion annually. Furthermore, the study pointed out that about 80 percent of the cost of software development was consumed by developers identifying and correcting defects. Similar studies have validated the dramatic numbers claimed in the NIST report. Clearly, software quality was entering the high-risk/high-cost category in IT.
A second factor in the increased focus on software quality has been the shift away from pure custom software development to a mixture of custom development and pre-built - or packaged - applications (such as those from PeopleSoft, Oracle, SAP, Siebel, et al.). With increased dependence on third-party software systems to support mission-critical business processes, there has been intense scrutiny of the quality of these packaged applications. To some extent, the packaged application industry was not ready for this scrutiny. A flurry of stories came out in the 2000-2003 timeframe correlating botched implementations of packaged applications with poor or hard-to-manage quality. A highly publicized example was the failed implementation of Siebel at AT&T Wireless. During the implementation, poor software quality prevented the timely rollout of new features required to support government-mandated number portability standards. Unable to get its Siebel system working well, AT&T ended up providing less than adequate support for number portability and eventually lost a large number of customers. An article published in CIO magazine even predicted the disaster was of such proportions that it left AT&T Wireless in a vulnerable position. It was shortly thereafter that Cingular Wireless merged with AT&T Wireless. Packaged application vendors were quick to respond to their Achilles’ heel, and many implemented significant quality initiatives. PeopleSoft launched its Total Ownership Experience (TOE), which included over 600 employees focused on software quality, installation, and maintenance cost issues. Oracle similarly turned inward and launched a massive initiative to automate software quality across all product lines.
The last significant factor contributing to the need for increased software quality has been the advent of the Sarbanes-Oxley Act (SOX). Since its introduction in 2002, the SOX mandate has loomed heavily over most public companies. As disclosures of material SOX weaknesses start to be published through the PCAOB (Public Company Accounting Oversight Board), the body set up to enforce SOX compliance, it is becoming clear that companies need to continually test and retest all critical internal controls related to financial business processes after any change to their software systems. This is really the only way to ensure they have stayed within the SOX compliance guidelines.
With so many factors driving the need for increased software quality, it is likely your IT department is being confronted with these same quality initiatives, all in a cost-, resource-, and time-conscious environment. Fortunately, solutions for automating the software testing process have evolved along with the demands of IT. These solutions attempt to reduce the human resources needed in the testing process, increase testing coverage as much as possible, and increase the frequency at which software systems can be retested.
This article outlines the history of automated software testing solutions. It addresses the problems that have been conquered and the new challenges posed by the continual evolution of software systems and the IT landscape.
What is Automated Testing?
At its core, automated testing attempts to mimic a series of manual interactions between a human and an application. If the actions are carefully selected, key areas of functionality in the application can be effectively exercised. The most common use of automated testing is to perform regression tests. In a regression test, input data is entered, specific application features are executed, and outputs are validated against those expected. If the application outputs the expected results, the regression test passes. Automated testing attempts to expand the frequency and coverage of regression testing by removing the human from this process.
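The enter-input/execute/validate cycle of a regression test can be sketched in a few lines. The snippet below is an illustration only; `calculate_discount` is a hypothetical application feature standing in for whatever the test exercises:

```python
# Minimal sketch of a regression test: feed known inputs to a feature,
# then validate the actual output against the expected output.

def calculate_discount(price, percent):
    """Hypothetical application feature under test."""
    return round(price * (1 - percent / 100.0), 2)

def run_regression_test(feature, test_input, expected):
    """Enter data, execute the feature, validate the result."""
    actual = feature(*test_input)
    return actual == expected

# An expectation captured from a known-good release:
passed = run_regression_test(calculate_discount, (100.0, 15), 85.0)
print("PASS" if passed else "FAIL")
```

If the feature's behavior drifts from the recorded expectation, the comparison fails and the regression is flagged; automation simply runs this loop without a human at the keyboard.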
Over time, many different types of automation have emerged to tackle the changing architectures of software applications. In the early days of MS-DOS and CP/M based applications, testing was performed via calls to the command line. As distributed applications emerged (CORBA, DCOM, etc.), testing expanded to include APIs called across a network. But the most common form of automated testing occurs by testing an application’s graphical user interface (GUI). The popularity of GUI testing is primarily related to two things: the ability to test an application in the same way a user would use it, and the ability to create automated tests without needing the actual application code. The rest of this article will focus primarily on GUI testing as the basis for a discussion on the history and future of automated testing.
History of GUI Based Testing
To fully understand the evolution of GUI-based automated testing, one must consider the parallel evolution of application software interfaces. The first formal automated testing solutions emerged with the advent of graphical user interfaces (GUIs) in the mid-80s. GUIs provided the first consistent way for an automated testing tool to exercise an application in the same manner an end user would, without needing to know much about the application itself. If one distills human usage of a GUI into its simplest actions there are really only four things that occur. And these are the basic actions that an automated testing tool had to work with:
• The user moves the mouse (or pointing device of choice)
• The user clicks the mouse
• The user enters data into the application
• The user validates data presented by the application
As a basis for describing the evolution of automated testing, let’s create a simple software test that a user might want to perform against their application: testing the printing functionality. If the user is asked what steps are required to print, they might describe the following:
1. Open the print dialog
2. Select the printer
3. Enter the number of copies desired (1)
4. Print the selected document
While this is a useful description for a human, this description can’t be well understood by an automated testing tool. Remembering that an automated testing tool knows about only four types of actions, the same set of steps would need to be translated as follows:
1. Click on the “File” menu
2. Click on the “Print...” menu item
3. Click three times on the “scroll down” button to find the correct printer
4. Click on the selection box item named “Default printer”
5. Click on the “Copies” textbox
6. Type 1 on the keyboard
7. Click the “Print” button
While this description is still too general for an automated testing tool to use easily, clearly translating between the human steps and some type of automated testing tool steps would be cumbersome to perform by hand. What was needed was a tool to generate these automated testing steps (we will refer to them as test scripts from here on out) as easily as possible.
Record and Replay
The first solution to this problem was record/replay. With record/replay functionality, an automated testing tool would watch a user interact with an application GUI. By monitoring the two primary input mechanisms available to the user, keyboard and mouse events, the exact set of actions a user performed could be recorded into a test script. The test script was designed in such a way that it could be replayed on command to recreate the exact actions the user had performed. During replay, the automated testing tool simply took over control of the input stream, sending synthetic events through the graphical user interface manager (the part of the operating system that takes input from devices like mice, keyboards, and other pointing devices and routes it to applications). This proved a practical solution for generating a test script that an automated testing tool could use. But, not surprisingly, there were challenges with this approach.
All basic GUI actions depend fundamentally on where the user clicks in the application. Selecting the right menu item is about clicking in the right place. Entering text into a textbox is about selecting the right textbox with the mouse and then typing into it. The first automated testing solutions solved this problem by recording the (x,y) coordinate location of each mouse click. When the test script was replayed, a corresponding click event would be sent to that (x,y) location on the screen. Using our example above, an automated testing solution might record a test script as follows:
1. Click (10, 10) (Click on “File” menu)
2. Click (10, 80) (Click on “Print...” menu item)
3. Click (237, 323) (First click on scrollbar down button)
4. Click (237, 323) (Second click on scrollbar down button)
5. Click (237, 323) (Third click on scrollbar down button)
6. Click (237, 356) (Click on “Default Printer” item in selection box)
7. Click (220, 420) (Click on “Copies” textbox)
8. Type “1” (Enter “1” into “Copies” textbox)
9. Click (410, 453) (Click on “Print” button)
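The coordinate-based script above can be modeled directly. The sketch below is a simplified illustration (not any vendor's actual format): the recording is just a flat list of raw input events, and replay pushes them back through a hypothetical OS input layer with no knowledge of the application's widgets:

```python
# Simplified sketch of coordinate-based record/replay. The "script" is a
# flat list of raw input events with no semantic knowledge of the UI.

recorded_script = [
    ("click", (10, 10)),    # "File" menu
    ("click", (10, 80)),    # "Print..." menu item
    ("click", (237, 323)),  # scrollbar down button (clicked 3 times)
    ("click", (237, 323)),
    ("click", (237, 323)),
    ("click", (237, 356)),  # "Default Printer" item in selection box
    ("click", (220, 420)),  # "Copies" textbox
    ("type", "1"),          # enter the copy count
    ("click", (410, 453)),  # "Print" button
]

def replay(script, send_click, send_keys):
    """Replay raw events; the tool cannot tell which widget it is hitting."""
    for action, payload in script:
        if action == "click":
            send_click(*payload)   # payload is an (x, y) screen coordinate
        else:
            send_keys(payload)     # payload is a string of keystrokes

# Stub input layer that just logs events, for demonstration:
log = []
replay(recorded_script,
       send_click=lambda x, y: log.append(f"click@{x},{y}"),
       send_keys=lambda s: log.append(f"keys:{s}"))
print(len(log))  # prints 9
```

Because the events carry nothing but coordinates, the fragility described next falls out immediately: move a control and the replayed click lands on the wrong widget.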
There are a number of problems with this approach. First and most obviously, these scripts are almost impossible to read and understand. It is unclear what the difference between Click (10, 10) and Click (220, 420) means in terms of the application. This characteristic made maintaining and editing automated test scripts very difficult. The second problem is more subtle, but more important. If automated test replay uses (x,y) coordinate locations to send mouse click events to the application, what happens if the UI controls (buttons, textboxes, checkboxes) are not in the same location? For example, if the original user selected a textbox into which to enter data, the recorded (x,y) coordinate position was critical to selecting the same textbox during replay. If the actual (x,y) coordinates of the intended textbox had changed even slightly, the intended textbox might not be selected (or, worse, a different one would be) and any subsequent keystrokes would be ignored or entered into the wrong location. Figure 1 shows an example of this problem.
There were many reasons why the coordinates of GUI controls might change. The most common was a developer redesigning a dialog for usability reasons. A less obvious but frequently encountered issue was the portability of these automated tests between different computers. If a user wanted to run an automated test on multiple computers, each with a monitor of a different resolution (keep in mind the late-80s timeframe), the vastly varying screen resolutions would severely confuse the (x,y) coordinate system (see Figure 2). Clearly this solution was very brittle. Coordinate locations are hardly used today, except in very specific circumstances when all other methods of GUI object identification have proved fruitless.
The Move towards Objects
Towards the end of the 80s, a number of widely used GUI frameworks emerged to replace bare-bones interface development. These frameworks provided two key advances for UI developers: a common set of UI objects with which to build user interfaces and an object-oriented model to go along with them. The most notable frameworks were X Window/Motif (for Unix systems) and MFC (for Windows-based applications). Both frameworks are still in existence and widely used today (although there are many alternatives in use, such as .NET WinForms). These user interface frameworks treat every GUI control on the screen as a distinct object, organizing those objects into hierarchical trees similar to how a user might perceive the application itself. For example, a dialog appears in the UI object tree as a child of the parent application window object. All controls in the dialog (textboxes, menus, buttons, etc.) appear as child objects of the dialog itself. Additionally, each GUI object has a set of methods that pertain specifically to the type of UI object it is. For example, a button has a Click() method. A textbox has a Focus() method for focusing the text entry pointer into it and a SetText() method for entering text into it. Conversely, a button does not have a SetText() method, because data entry is not allowed in a button object.
Automated testing solutions were quick to take advantage of this new model. One of the benefits of this model was that an external application (such as an automated testing tool) could hook into the GUI object tree and listen to, and send events to, specific objects. When an automated testing solution saw that the user clicked on the application, the actual UI object could be found instead of just the (x,y) coordinates. Generally speaking, each UI object had a unique handle (identifier). This handle could be recorded into the automated test script as a way to identify the right UI object during replay. When the test script was replayed and a handle encountered, the UI object tree could be searched to find the matching object. The handle allowed the automated testing tool to send events directly to the object - a click event, a drag-and-drop event, a text entry event, or a data inspection - without having to do it generically through the OS. No longer were automated tests constrained to just mouse clicks and keystrokes; real semantic actions such as "click on a textbox" could be captured and performed. With this new approach in hand, our example test script might look something like the following:
5. Application.Dialog(“4232”).Textbox(“3737”).Text = “1”
This new method brought a dramatic increase in resiliency to application UI changes. Screens could be rearranged, different resolutions could be used, and the starting context (location) of the application did not matter at all. In addition, no longer were test scripts a linear series of rudimentary keyboard and mouse events. Recordings had become a collection of object-oriented actions. This provided a dramatic increase in the maintainability and reuse of automated test scripts. If the developer changed the handle of an underlying object, only the handle references needed to be changed; the rest of the script steps could stay the same.
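How replay might resolve a recorded handle against the GUI object tree can be sketched as follows. The class names, handles, and tree shape are illustrative only, not any real framework's API:

```python
# Illustrative sketch: a GUI object tree searched by handle during replay.

class UIObject:
    def __init__(self, kind, handle, children=None):
        self.kind = kind          # "Dialog", "Textbox", "Button", ...
        self.handle = handle      # unique identifier assigned by the framework
        self.children = children or []
        self.text = ""

    def find_by_handle(self, handle):
        """Depth-first search of the object tree for a recorded handle."""
        if self.handle == handle:
            return self
        for child in self.children:
            found = child.find_by_handle(handle)
            if found:
                return found
        return None

# Mirrors the recorded step Application.Dialog("4232").Textbox("3737").Text = "1":
app = UIObject("Application", "1", [
    UIObject("Dialog", "4232", [
        UIObject("Textbox", "3737"),
        UIObject("Button", "9120"),
    ]),
])

textbox = app.find_by_handle("3737")
textbox.text = "1"  # a semantic action sent to the object, not a raw (x, y) click
```

The click location no longer matters: as long as the handle resolves, the event reaches the right object wherever it is drawn.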
While use of the GUI object frameworks removed the brittleness of (x,y) coordinates and solved a number of test script maintenance problems, automated testers still encountered another roadblock. The problem stemmed from the inability of most GUI object frameworks to guarantee that the same object handles would be given to GUI controls from one run of an application to another. Primarily this was because the GUI framework manager created handles dynamically. If, for example, the user opened a window in the application and a dialog within that window, a dynamic handle would be assigned to that dialog. If the user restarted the application and opened a different window and a dialog within it, the handle that had identified the first dialog might now be assigned to the second. This stumbling block led to the invention of object mapping. Object mapping really brings two things to the table. First, it allows a GUI object to be identified by its properties, not only by its unique handle. For example, rather than finding a textbox with handle “123456”, object mapping would allow the user to look for a textbox with the window name “Copies”, a length of 10, and a height of 23. Even if the GUI object’s underlying handle is different during each replay session, the right GUI object can be found by matching all possible objects against the required properties. During a recording session, each object interacted with would be given a generic name (e.g. “Menu - File”) and all properties for the object recorded. The second important innovation of object mapping was to externalize the definitions of the GUI objects from the test script. This “Object Map” allowed application changes to be managed through an external interface rather than through tedious updates of a test script.
If an automated test interacts with the same GUI object over and over again, all references in the test script to that object could be updated at one time through the Object Map. To use our printing example, an Object Map-based test script might look like the following:
3. Application.Dialog(“Printer Selection”).Focus()
4. Application.Dialog(“Printer Selection”).SelectionBox(“Printers”).Select(“Default”)
5. Application.Dialog(“Printer Selection”).Textbox(“Copies”).Text = “1”
6. Application.Dialog(“Printer Selection”).Button(“Print”).Click()
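Property-based matching against an externalized Object Map can be sketched like this. The map entries, property names, and handles below are made up for illustration:

```python
# Sketch of an externalized Object Map: each friendly name maps to the
# recorded properties used to re-identify the control at replay time.

object_map = {
    "Dialog - Printer Selection": {"kind": "Dialog", "title": "Printer Selection"},
    "Textbox - Copies": {"kind": "Textbox", "name": "Copies", "width": 10, "height": 23},
    "Button - Print": {"kind": "Button", "label": "Print"},
}

def find_by_properties(objects, required):
    """Return the first live UI object whose properties match the map entry."""
    for obj in objects:
        if all(obj.get(key) == value for key, value in required.items()):
            return obj
    return None

# Live objects enumerated from the running application; note the handles
# differ from run to run, but the recorded properties do not:
live_objects = [
    {"handle": "88412", "kind": "Button", "label": "Cancel"},
    {"handle": "17750", "kind": "Textbox", "name": "Copies", "width": 10, "height": 23},
]

target = find_by_properties(live_objects, object_map["Textbox - Copies"])
print(target["handle"])  # resolves to whatever handle this run assigned
```

If the developer renames a control, only the one Object Map entry changes; every test script step that references "Textbox - Copies" keeps working untouched.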
Object Mapping proved a timely invention as web-based applications became the de rigueur user interface of the mid-90s. One of the problems with testing web-based applications is that all HTML form controls (textboxes, links, buttons, dropdowns, etc.) are dynamically created. By definition, a browser reads an HTML file and creates a graphical interface representing it on the fly. In early versions of the browser, form controls were not even created through the GUI framework, but mocked up internally to the browser. Over time, browsers exposed their own object frameworks through their Document Object Models (DOMs). A DOM essentially provides a hierarchical description of the UI objects parsed out of an HTML page, including HTML form controls like textboxes and buttons. Most browsers provide APIs, similar to those of GUI frameworks, to manipulate those objects - setting the text in a textbox, for example. The introduction of Object Mapping was perfectly aligned with the new problems that web browsers brought to testing. Object Mapping allowed the automated testing solution to record browser form controls using their HTML element properties (e.g. name, value, id) and not their position or location inside the browser. If our printer dialog were represented as a web page, our simple test script would translate into the browser context as follows:
3. Browser.Form(“PrinterForm”).Textbox(“Copies”).Text = “1”
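Finding a form control by its HTML attributes rather than its screen position can be illustrated with nothing but a standard-library HTML parser. The page markup below is a hypothetical rendering of our printer dialog:

```python
# Sketch: identify form controls by HTML element properties (name, tag),
# the way DOM-based Object Mapping does, independent of layout.
from html.parser import HTMLParser

PAGE = """
<form name="PrinterForm">
  <select name="Printers"><option>Default</option></select>
  <input type="text" name="Copies" value="">
  <input type="submit" value="Print">
</form>
"""

class FormControlFinder(HTMLParser):
    """Collects form controls keyed by their 'name' attribute."""
    def __init__(self):
        super().__init__()
        self.controls = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            attrs = dict(attrs)
            if "name" in attrs:
                self.controls[attrs["name"]] = {"tag": tag, **attrs}

finder = FormControlFinder()
finder.feed(PAGE)

# The "Copies" textbox is found by name, no matter where the page lays it out:
copies = finder.controls["Copies"]
print(copies["tag"])  # prints: input
```

A real browser exposes the same information through its DOM APIs; the point is only that the element's declared properties, not its rendered coordinates, identify it.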
Testing Technology Takes a Turn
For many years, the majority of advances in the state of the automated testing art were focused on record/replay and GUI object recognition. While record/replay provides a simple mechanism to create and replay test scripts, there are some general challenges with using it as the sole mechanism in a complete automated testing solution. To understand the problems with record/replay, it is useful to sketch out a set of test cases that would completely test a set of functionality. Using our printer example, let’s hypothesize some other tests needed to fully test printing:
• Test different printers (that may be connected with different print drivers)
• Test printing multiple copies
• Test printing subsets of pages
• Test printing different types of data in the application
If you look carefully you will see two themes emerge: repetition and reuse. In a number of the examples above, the specific steps to perform the test are actually quite similar. For example, printing multiple copies as well as printing subsets of pages still requires the user to open the print dialog and select a printer. In the record/replay paradigm, the user would record a separate test script for each of these tests, and much of each recording would be the same. The second aspect to notice is that some tests require the same steps but with different data. Testing different printers is the exact same test with the exception of picking a different printer from the printer list. Testing printing subsets of pages is the same test with different starting and ending page numbers. Once again, the basic record/replay paradigm requires the user to record the same steps over and over but with different data. When put in the context of an application with a large number of features, the sheer size of an automated test suite grows exponentially. A change in direction was clearly needed.
Data-driven testing was the first major advance in automated testing to address the two fundamental problems of repetition and reuse. Data-driven testing allowed the user to substitute hard-coded data in a test script (e.g. the printer name) with a variable. This variable could then be connected to an external data source from which variable values were obtained. Thus, the same test script could be run with a different row of data each time. Connect these data variables to a spreadsheet or database query and a powerful data-driven approach to testing emerges. In our printing example, a data-driven test script might look like the following:
7. Browser.Form(“PrinterForm”).SelectionBox(“Printers”).Select(“%PRINTER_NAME%”)
In our example, %PRINTER_NAME% is a data variable whose value is derived from an external data source (such as an Excel spreadsheet or the results of a database query). To test printing on all the different printers available, the spreadsheet simply needs to be populated with all printer names, and the automated testing tool will run the same test script over and over, once for each name. For data-intensive tests, the data-driven approach dramatically reduced the amount of time it took to build a comprehensive test suite. Consider the example in Figure 3, the PeopleSoft new hire business process. Over 700 data elements can be used during the hiring process. Clearly, recording a test script for each permutation of this data is untenable. With data-driven testing, one test script can be recorded and each data element can be substituted from a spreadsheet or database query.
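The substitution mechanism itself is simple to sketch. Here the "external data source" is an inline CSV standing in for a spreadsheet or query result; the script step and printer names are illustrative:

```python
# Sketch of data-driven testing: one recorded script step, many data rows.
import csv
import io

# Script step with a data variable instead of a hard-coded printer name:
SCRIPT = 'SelectionBox("Printers").Select("%PRINTER_NAME%")'

# Stand-in for an external data source (a spreadsheet or query in practice):
DATA = "PRINTER_NAME\nDefault\nAccounting-LaserJet\nFloor2-Color\n"

def run_data_driven(script, rows):
    """Substitute each row's values into the script, yielding concrete steps."""
    for row in rows:
        step = script
        for column, value in row.items():
            step = step.replace(f"%{column}%", value)
        yield step

for concrete_step in run_data_driven(SCRIPT, csv.DictReader(io.StringIO(DATA))):
    print(concrete_step)
```

Each row produces a fully concrete script step, so adding a new printer to test means adding a row of data, not recording a new script.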
At the same time that data-driven testing was solving the data problem, automated testing vendors introduced the concept of modularization into their testing architectures. Keyword-driven testing is the simple process of breaking test scripts into small reusable pieces (keywords) and then stringing those pieces back together to make complete test scripts. If many test scripts have similar steps, each step need only be recorded once into a keyword, and various arrangements of the keywords can comprise many different test scripts. Using keyword-driven testing, a tester might describe different test scripts as in Figure 4. Notice that there are a total of 4x4 test script steps listed, but only nine distinct keywords are arranged to create these 16 steps. Without much leading, one can also see that the data-driven approach and the keyword-driven approach could be mixed together to provide an impressive level of modularization and reuse.
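The keyword arrangement idea can be sketched directly: implement each keyword once, then compose different test scripts as lists of keyword names. The keyword names and steps below are hypothetical:

```python
# Sketch of keyword-driven testing: each keyword is implemented once;
# test scripts are just arrangements of keyword names.

def open_print_dialog(ctx): ctx.append("print dialog opened")
def select_printer(ctx):    ctx.append("printer selected")
def set_copies(ctx):        ctx.append("copies set")
def click_print(ctx):       ctx.append("printed")

KEYWORDS = {
    "OpenPrintDialog": open_print_dialog,
    "SelectPrinter": select_printer,
    "SetCopies": set_copies,
    "ClickPrint": click_print,
}

# Two different test scripts reusing the same keyword implementations:
TEST_SCRIPTS = {
    "print one copy": ["OpenPrintDialog", "SelectPrinter", "SetCopies", "ClickPrint"],
    "print with defaults": ["OpenPrintDialog", "SelectPrinter", "ClickPrint"],
}

def run(script_name):
    ctx = []  # shared context the keyword implementations act on
    for keyword in TEST_SCRIPTS[script_name]:
        KEYWORDS[keyword](ctx)
    return ctx

print(run("print one copy"))
```

If the print dialog changes, only `open_print_dialog` is updated; every script that uses the keyword picks up the fix for free.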
The Shift to Business Process Testing
By the end of the 90s, automated testers had their hands full with different GUI object recognition approaches, data-driven testing, and modularization through keyword-driven approaches. Then along came the Y2K crisis. Y2K provided a confluence of events that would push the automated testing market in a completely new direction. First and foremost, the need to test Y2K compliance in as many applications as possible (and the seemingly unlimited budgets to ensure a flawless transition to the new millennium) drove massive adoption of automated testing solutions. At the same time, many companies decided to throw out their legacy system code in favor of replacing it with Y2K-compliant packaged applications. Thus, the Y2K phenomenon catalyzed huge adoption of packaged applications as core software systems. A side effect of installing packaged application software is that it puts intense focus on businesses to define their business processes clearly. Packaged applications are designed to facilitate business processes and thus only work well when a business process can be discretely defined.
This focus on business process caused a major change in how IT departments approached managing software systems. No longer were applications managed as loosely confederated groups of hardware and software. IT departments were now responsible for technologically supporting business processes. With this shift in IT direction, automated testing shifted direction as well.
The most significant change was the clear inclusion of business users in the testing process. Software testers know technology; business users know the business process. However, business users were not thrilled to have regular testing activities added to their daily tasks, and most were uninterested in adopting an automated testing product and technically limited in their ability to do so. A new divide was created that automated testing solutions needed to bridge.
Keyword-driven testing was the first admirable attempt at seriously getting the business user involved in the test process. By allowing the business user to simply arrange the right business process steps using appropriate keywords, the technical user could be left to worry about the specific implementation of how the testing tool would execute each keyword against the application. Additionally, if only one part of the application changed during a release cycle, the automated tester need only update the technical implementation of the keywords affected and not bother the business user at all. If a business process changed, the business user simply rearranged keywords into the right order.
While keyword-driven testing sounds like a panacea, there are some practical problems with its implementation. First, keyword testing still depends on an underlying, code-based test script to execute each keyword. If the business application changes (a developer makes some modifications to the new hire process, for example), the underlying test script might not work with the new changes, and the test script becomes obsolete. Arguably, only a single keyword in the test case might need to be updated, but in reality application changes are usually more complex (e.g. introducing a new required piece of data used throughout a business process). A second issue is the granularity at which keywords are built. If a keyword encompasses a large section of a business process (e.g. “hire an employee”, “sign an employee up for benefits”), then a large amount of business process knowledge is still required of the technical user to understand what really happens under the covers of that keyword. As mentioned before, the PeopleSoft new hire process has about 12 different screens and over 700 possible pieces of data that can be entered during it. If a much more granular approach is taken to keyword development, the business user is forced to think more like an automater, describing very discrete steps (log in, click a button, enter text). It is very hard to get a business user to participate at this level of specification.
A number of approaches have been conceived to bridge this gap. Scriptless testing is an attempt to remove the technical test script from the testing process completely. The theory goes: if there is no test script, then there is no test script maintenance; if there is no maintenance, then there is no need for a technical automater, and everything can be done by a business user. Scriptless testing takes a similar approach to keyword-driven testing, describing test scripts (business processes) in small bites. When the test script is run, the automation software itself knows how to turn a keyword into an interaction with the underlying application. An example scriptless testing step might be:
Input %PRINTER_NAME% into Printer SelectBox
The scriptless testing software can readily understand that a SelectBox with the name “Printer” needs to be found and a selection matching the %PRINTER_NAME% variable needs to be made. No test script is required. A more complete scriptless test case might look like Figure 5. Scriptless testing can definitely reduce the amount of technical resources required in the testing process. However, it forces a business user to think through a business process at an incredibly granular level. For a complex business process (such as the hiring process in PeopleSoft), the enumeration of every test step might result in a test script that is 5,000 steps long. It is hard to believe that business users will adopt such a cumbersome format even if it is presented in a familiar Excel interface.
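To make the idea concrete, here is a minimal sketch of how a tool might interpret a step like the one above. The grammar, the variable table, and the tuple it produces are assumptions of mine for illustration; no real product is being described.

```python
import re

# Hypothetical interpreter for a scriptless step of the form
# "Input %VARIABLE% into <ControlName> <ControlType>".
# The variable table would normally come from the tool's data store.
VARIABLES = {"PRINTER_NAME": "LaserJet 4"}

STEP_PATTERN = re.compile(r"Input %(\w+)% into (\w+) (\w+)")

def interpret_step(step):
    """Turn a scriptless step into a (control_type, control_name, value) action.

    No user-written test script is involved: the tool itself knows how to
    locate the control and perform the input.
    """
    m = STEP_PATTERN.match(step)
    if not m:
        raise ValueError(f"unrecognized step: {step}")
    var, control_name, control_type = m.groups()
    return (control_type, control_name, VARIABLES[var])

action = interpret_step("Input %PRINTER_NAME% into Printer SelectBox")
# action is ("SelectBox", "Printer", "LaserJet 4")
```

The catch the text describes is also visible here: every discrete input in a 12-screen, 700-field process would need its own such step, which is exactly the level of granularity business users resist.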
Business Process Testing
The latest entrant into the automated testing lexicon is Business Process Testing. Business Process Testing attempts to take the middle road, admitting that a simplistic scriptless interface is needed for the business user and a flexible code-based test script is needed for their technical counterpart. The primary advance with this approach is a much cleaner integration between the two interfaces. Business users can see the available keywords and request new keywords from the technical user when the one they need is not available. Technical users can work on keyword implementations independently while business users use those keywords to define business processes.
In addition, Business Process Testing attempts to handle the movement of data between keywords in a much more organized way. One of the main problems with keyword-driven testing is that many keywords depend on correct application state and data to run correctly. For example, a keyword that looks up an employee compensation code might require another keyword before it that extracts the appropriate employee ID used in the lookup. Business process testing attempts to help manage both the data and state interdependencies of these keywords.
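One way to picture the data and state handling described above is a shared context that each keyword reads from and writes to, so a compensation lookup can consume the employee ID produced by an earlier hire step. This is a sketch under my own assumptions (the context dictionary, the function names, the fabricated ID), not how any particular product implements it.

```python
# Hypothetical sketch of threading data between keywords in Business
# Process Testing: keywords communicate through a shared context so that
# later keywords can depend on state produced by earlier ones.

def hire_employee(ctx):
    # A real keyword would drive the application and capture the new
    # employee's ID; here we fabricate one for illustration.
    ctx["employee_id"] = "E1001"

def lookup_compensation(ctx):
    # This keyword depends on state produced by a prior keyword, and can
    # fail loudly (rather than mysteriously) when that dependency is unmet.
    if "employee_id" not in ctx:
        raise RuntimeError("lookup_compensation requires employee_id "
                           "from a prior keyword")
    ctx["comp_code"] = "GRADE7-" + ctx["employee_id"]

def run_process(keywords):
    """Run keywords in order, passing state through a shared context."""
    ctx = {}
    for kw in keywords:
        kw(ctx)
    return ctx

result = run_process([hire_employee, lookup_compensation])
```

Reordering the list so the lookup runs first raises the dependency error immediately, which is the kind of interdependency management the approach is trying to organize.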
One of the primary drawbacks of Business Process Testing (and keyword-driven testing) is that it removes the business user from the interface they are most familiar with – the actual application itself. For example, most business users would find it hard to describe the complete PeopleSoft new hire process step by step without being able to look at PeopleSoft itself as a reference.
Another new entrant into the testing space is metadata-based testing. Metadata-based testing introduces the concept of using application metadata to make test scripts more aware of the application (and business processes) they are testing. This method has only been enabled by the rapid adoption of packaged applications in the last five years. Packaged applications are consistently built on a metadata architecture. This architecture lets the developer design the application using standard controls, pages, database models, etc., and the actual packaged application (e.g. PeopleSoft) renders the final presentation layer. Embedded in the presentation layer are tags that relate HTML form controls back to the original metadata objects that created them. This metadata approach captures not only the GUI properties of objects but their metadata properties as well. The primary advantage of this methodology is that all application changes in a packaged application occur at the metadata level. If changes in the metadata can be recognized, and both the metadata and GUI properties have been captured during a test recording, an automation tool can point directly to the parts of the test script that are affected by the metadata/application changes. This can significantly reduce the amount of time involved in test script maintenance, one of the most costly aspects of owning test automation.
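The change-detection idea at the heart of this can be sketched simply: each recorded step stores the metadata of the object it touched, and diffing that against the application's current metadata flags exactly the steps that need maintenance. The field names and record structures below are illustrative assumptions, not Newmerix's (or anyone's) actual schema.

```python
# Hypothetical sketch of metadata-based change detection. Each recorded
# test step carries the metadata of the object it interacted with, captured
# at recording time.
recorded_steps = [
    {"action": "input", "object": "NAME",
     "metadata": {"type": "EditBox", "required": False}},
    {"action": "input", "object": "DEPT",
     "metadata": {"type": "SelectBox", "required": True}},
]

# The application's current metadata, e.g. after a patch: the NAME field
# has become required, DEPT is unchanged.
current_metadata = {
    "NAME": {"type": "EditBox", "required": True},
    "DEPT": {"type": "SelectBox", "required": True},
}

def affected_steps(steps, app_metadata):
    """Return only the steps whose recorded metadata no longer matches
    the application, so maintenance effort is focused on them."""
    return [s for s in steps if app_metadata.get(s["object"]) != s["metadata"]]

flagged = affected_steps(recorded_steps, current_metadata)
```

Instead of re-running a broken script and hunting for the failure, the tool can report up front that only the NAME step is affected by the application change.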
What to Make of All of This?
Software testing can be a costly exercise. The simple promise of automation is to reduce the cost of attaining higher software quality while decreasing the risk of defects in production through expanded test coverage. Automated testing is the quintessential example of doing more with less. But the history of automated testing is really the history of software application development. As application interfaces, architectures, and teams have changed, so too have automated testing solutions changed along with them. It is critical to understand these parallel tracks when determining how to approach software quality in your own organization, now and into the future. Consider carefully which methodologies will be most useful for your current software infrastructure and what demands future applications will put on your testing team and process.
Can your applications be exercised through a GUI? Do you have highly technical resources that you can devote to automated testing? Are your applications feature-light and data-intensive? Do you need to test that an application won’t crash, or do you need to focus more on validating that a business process works? How much commitment will you get from the business users in the testing process? How many of your business processes share common steps? How frequently will you be making changes to your applications and how maintainable are your test scripts?
Answering these questions should lead you to favor one or more of the testing methodologies described above. Careful consideration of these types of questions mixed with a realistic expectation of the effort involved in any type of automation will dramatically reduce the overall cost of bringing a new level of software quality to your organization.
What I really should have said in this article is that Newmerix invented the metadata approach to automated testing tools. It’s a totally new concept. Well, sort of. I would be a bit remiss if I did not at least tip my hat to some of the work produced under John Montgomery at PeopleSoft. John runs the automated testing tools team for PeopleSoft and JD Edwards. If you want to test something, you get your tools from John. Over time, his team has built a set of testing tools used internally at PeopleSoft, some of which are based on similar concepts. While we ended up with very different final solutions, it was those early discussions with John that gave me comfort that the approach Newmerix was taking was not going to end up in one big, extremely expensive cul-de-sac. So thanks John. And thanks again to Mobius, IDG, and most recently Siemens Venture Capital for getting us to where we are now.