SOLUTION TEST METHODOLOGIES

There are many names for a solution test: User Acceptance Test (UAT), Solution Acceptance Test (SAT), Pilot, End-to-End testing, and so on. But the concept is always the same. During an implementation, upgrade, or major re-design:

  • Periodically test the full IFS solution in all of the key process areas

  • Use realistic data to represent the real actions in a go-live world

  • Use the results to determine where the areas of focus need to be going forward

The goal is to validate that the business can perform with the configuration and information that is in IFS, as documented by the project team.

There are many factors to decide on when running a Solution Test. Here we'll discuss the pros and cons of some of the key ones. There is no perfect way to structure these Tests; that is driven by the business's standards and the way the team works. But defining the right structure can go a long way toward setting up a path to succeed. I generally prefer that large tests be performed by teams that come in and out of a 'war room', be run based on achieving scenario goals, and be executed in roughly one week (with additional time to prep and de-brief). And don't bury yourself in documentation that is often never referred to again.

Scripted versus Scenarios

Most Solution Tests have a high-level path that they follow. They can be highly scripted, with nearly every step in the process listed out on a spreadsheet: every User knows exactly what step comes next, because it's in black and white. They can also be more 'scenario' based, where a rougher set of activities is identified. These are more goal-based, with the team following a less defined path.

The Scripted approach has some pros:

  • They can run more quickly, as each step is known

  • They lend themselves to documenting the results as the steps are clear

  • They allow pre- and post-Test validation as they are easy to review

But there are also cons:

  • Execution can feel 'rote', with the team not having to think about what/why they are doing things

  • They can take a great deal of time to build out, as all processes have to be well documented

Scenario-based Tests also have positives:

  • With a Scenario being primarily a 'goal', the team is essentially forced to decide what the steps are as they do them. So it's more real world

  • They can then also vary in how they are done to achieve the same goals…often leading to new ways of doing things

  • They can also be defined more quickly, with less preparation and pre-built documentation required

On the downside, these Tests can also 'wander' a bit:

  • Less structure can lead teams to deviate from the goals

  • They are also harder to replicate (if that is a goal), as they can't just be handed off and done the same way again

As noted, there is no perfect way. In my experience both can work. But the Scenario-based approach forces the team to make more decisions, to think more about what they are doing (and why) instead of just tracking through a worksheet, and it is simply closer to what will actually occur after go-live.

In Person or Remote

I have seen three ways to run a solution test when it comes to the question of 'where'. Everyone can be in one big room for the duration, the team can run their operations where they are (remote or at their desk), or the team can come and go as they are needed.

The most common way, the project team performing the tests in a big conference or training room, has the obvious benefit of time. Everyone is in the room consistently, so moving from one activity to another can be seamless, as can quickly resolving questions and issues. But there are potentially major issues with this option as well. On a team of a dozen, ten or more Users can have very little to do…so they don't pay attention, or worse, they chime in on items they know little about. It also takes them away from other tasks. But the biggest drawback might offset the main positive: if the team can see what's next, they often don't do their processes as they normally would. You can sometimes even see Users already starting on the next step…because they know what Order to process (for example).

Having the team scattered (at their desks or in groups in conference rooms) lends itself to a more real-world view of transactions. Ping the warehouse User and tell them it's time to receive: they then have to show how to find what they might expect to receive, instead of already knowing what PO the Buyer just Released, to give just one example. But it can be frankly brutal to get through a process when you have to chase down the team one-by-one. "Sorry, the warehouse is all at lunch."

A surprisingly good way to run these solution tests is to blend the approaches. The team is scheduled (roughly) into time blocks. The entire test set is shared in an online meeting so everyone can watch remotely, but that tends to be pretty passive. As a process area comes up, that team joins project management (and sometimes a few key cross-functional leads) in the big room to do their part. This method combines the benefits of everyone participating without the 'noise' of a whole team being in the same place. It can provide that real-world timing where the Users don't already know the answer, without having to chase people down.

The Length of Time

I have seen solution tests that allotted a week each for prep and review, and two or three weeks of actual hands-on work. I've seen (in small-ish companies) the entire thing completed in a short week. I can't give a silver-bullet answer. For the second test cycle and beyond, the best guide is 'how did it go the first time'. A basic guideline might be this:

  • Assume a short week for preparation: staging 'in process' data and documenting the workflows that will be tested, for example

  • At the end, assume at least a day or two to really review the results and put in place steps to address issues

  • The execution itself depends a lot on the scope of the solution, the number of resources involved, etc. But a normal week should suffice for most companies

    • Structuring the testing to allow more 'one off' processes to be performed at the end (such as Year End processing in the GL or running some of the Analysis reports) gives you the flexibility for these items to slide into another week if needed

One big suggestion here is to try not to make it TOO long. Especially in early passes, these tests can be tedious for the team and grinding on PMs. Running out of time and not hitting every scenario is acceptable, as a well-structured plan focuses on the most important flows first. And if things run slowly…it's probably because issues were uncovered. Generally, a really good one-week test is the way to go!

The Documentation Dilemma

Worksheets, worksheets, worksheets.

Just how many different documents, and how deep and detailed, are suggested for a Solution Test? Do you use the same one(s) for future iterations?

This one is pretty easy in my opinion. Don't start new Issues Logs just for the Solution Test; a new CRIM list isn't required, nor are new RACI or team lists or the like. Focus on simple but complete.

A good Solution Test management tool just needs to be self-contained, with everything in one place. Make separate worksheets, and they do NOT need to be overly complex. They will die on the vine in a short time, so make them usable for post-testing analysis and nothing else. A page each should suffice for the following (a bare-bones sketch follows the list):

  • The schedule

    • Who does what, and when, in the test flow. This is NOT a full-blown project plan in nearly every case

    • It WILL go out the window by day 3, it just will

  • The process/scenarios

    • What scenario steps are being performed in the script/scenario (NOT what keystrokes are being performed)

    • Who does them

    • Any specific items to pay attention to

    • Anything unique about this pass vs. another (put a Shop Order on Park, over-receipt, quality issues, etc.)

  • A control list of data to use

    • For example, a partially received PO needed for a scenario should be noted ahead of time

    • Parts that have known issues if that's something to use

    • The reports/analysis we want to run to validate they make sense

    • That sort of thing: just the key items that matter to the scenarios, and one place should cover it
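
To make that concrete, here is a bare-bones sketch of what those three one-page worksheets might hold. It is written as simple Python data only to keep the example compact; the scenario names, document numbers, and parts are invented for illustration, and nothing here is an IFS-prescribed format.

    # Bare-bones sketch of the three one-page worksheets, kept as simple rows.
    # All names, document numbers, and parts are invented for illustration.

    schedule = [
        # (day/block, process area, who)
        ("Mon AM", "Purchasing and Receiving", "Buyer + Warehouse"),
        ("Mon PM", "Shop Orders", "Production Planner"),
        ("Tue AM", "Shipping and Invoicing", "Customer Service + Finance"),
    ]

    scenarios = [
        # (scenario goal, who, what to watch, what's unique this pass)
        ("Receive a partially delivered PO and close it short",
         "Warehouse", "Receipt transactions look right", "Over-receipt on one line"),
        ("Run a Shop Order through to inventory",
         "Production", "Reported quantities", "Put the Shop Order on Park mid-way"),
    ]

    control_data = [
        # (data to stage ahead of time, why it matters)
        ("PO 100123, partially received", "Feeds the short-close scenario"),
        ("Part ABC-123, known quality issue", "Exercises the quality-issue pass"),
    ]

Three small tabs in one workbook keeps everything in one place, which is the whole point.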

That literally could be IT. Do you need a macro to track that the UAT is 82% complete with 91% pass/fail? I don't usually, but some might. If we note issues in the Issues Log and highlight the items that need further workshopping…we've met our goal. EVERYONE in the room knows if it was a good test or not, and they ALL know if the second pass is better than the first. Over-complicating the level of documentation just makes everyone's job harder…when they should be focused on testing and figuring out what's really working (or not).
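
If someone does want those percentages, it doesn't take a macro. A few lines over the scenario list are enough; the scenarios and statuses below are, again, just invented examples.

    # A minimal completion / pass-rate tally over the scenario list.
    # Scenario names and statuses are invented for illustration.
    results = {
        "Receive a partially delivered PO": "pass",
        "Run a Shop Order through to inventory": "fail",
        "Ship and invoice a customer order": "pass",
        "Year End processing in the GL": "not run",
    }

    executed = [status for status in results.values() if status in ("pass", "fail")]
    complete_pct = 100 * len(executed) / len(results)
    pass_pct = 100 * executed.count("pass") / len(executed) if executed else 0

    print(f"{complete_pct:.0f}% complete, {pass_pct:.0f}% pass rate")
    # Prints: 75% complete, 67% pass rate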

Hopefully this gives you some small insights or ideas on how to run your solution test. I like a good one-week, scenario-based, simply documented test with Users being involved as needed.