Blog | Nov 12, 2014

Never Enough Testing

Any significant project like an Oracle 11i to R12.x upgrade starts as a daunting task for the project team, and particularly for the business users assigned to it. These team members often pull double duty, maintaining their existing full-time responsibilities while also committing 25-50% of their time to the upgrade. Understanding their assignments and balancing their tasks can feel like an uphill marathon, and it generally leads to the common question: “How much testing do we need to do on this project?”

My first thought in formulating a response comes from my dentist’s office: “Only floss the teeth you want to keep.” The critical success driver for an Oracle upgrade project stems directly from the breadth and quality of the testing put into the initiative. So my simple answer is “Only test the functions you want to work correctly in Production,” which is basically everything.

The good news is that Oracle did a tremendous job with their 11i to R12.x upgrade scripts. TriCore’s experience has been very positive: the data moves and the new functionality lands with relative ease. Watching clients immediately navigate through a majority of their daily activities after the first upgrade cycle has become common, and it remains a fantastic sight.

Unfortunately, that success can instill a false sense of security and the belief that if function “A” works, everything should work. “Should” is the key word in this assumption. These projects are large and can easily take 6-9 months or longer (a marathon). Watch out for the business user who signs off on the functionality too soon. A sense of early success, combined with a production date in the distant future, can erode the user’s commitment and focus on the project, and issues typically go undiscovered as a result.

Most project team members understand that failing to find an issue during the testing process tops the list of causes of production issues. Second on that list, however, is the timing of issue identification. Issues found later in the project are often the ones that find their way into the production environment.

Most upgrade projects include three or more testing cycles (CRP I, CRP II, and UAT). Each is critical to the overall process and holds its own value for the project. Normally, CRP I focuses on standard Oracle functionality, CRP II incorporates all customizations and full integration testing, and UAT provides final acceptance by the users. While finding issues in the UAT cycle is better than finding them in Production, ideally you would uncover them earlier in the process.

Issue identification only starts the production prevention process. Once an issue is identified and resolved in a test cycle, the implementation team still needs to document the steps taken for the corrective action. This may include applying a patch or changing configuration settings. These changes get captured in a conversion document and then included in the transition process. Each upgrade iteration gives the implementation team an opportunity to validate the corrective action steps and the related process documentation. An issue found in CRP I can therefore be documented and confirmed in two additional cycles before the production implementation.
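To make the timing point concrete, here is a minimal sketch (in Python, using made-up issue names and a simplified three-cycle model; this is purely illustrative, not an actual TriCore tool) of an issue log that shows how many validation cycles remain for a fix depending on where the issue was found: a fix from CRP I can be re-confirmed in CRP II and UAT, while a fix from UAT reaches Production without a re-test.

```python
from dataclasses import dataclass, field

# Test cycles in the order they occur on a typical upgrade project.
CYCLES = ["CRP I", "CRP II", "UAT"]

@dataclass
class Issue:
    """An issue, the cycle it was found in, and the cycles that re-validated its fix."""
    description: str
    found_in: str           # cycle in which the issue surfaced
    corrective_action: str  # e.g. a patch or configuration change (hypothetical)
    validated_in: list = field(default_factory=list)

    def remaining_cycles(self):
        """Cycles still available to re-test this fix before go-live."""
        later = CYCLES[CYCLES.index(self.found_in) + 1:]
        return [c for c in later if c not in self.validated_in]

# Hypothetical issue log for illustration only.
log = [
    Issue("AP invoice import fails", "CRP I", "apply one-off patch"),
    Issue("Custom report layout broken", "UAT", "update report template"),
]

for issue in log:
    remaining = issue.remaining_cycles()
    if remaining:
        print(f"{issue.description!r}: fix can be re-validated in {', '.join(remaining)}")
    else:
        print(f"WARNING {issue.description!r}: fix reaches Production untested beyond {issue.found_in}")
```

Run as-is, the log flags the UAT fix as the one whose conversion steps will get their first true test at go-live, which is exactly the risk described next.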

The risk of mistakes in the cutover process increases the later in the project an issue is identified. This is not an excuse for sloppy work, but a simple reflection that for issues found late in the game (UAT), the conversion documentation gets updated but never fully tested before the production cutover. Your go-live should not be the test run for your conversion documents, yet that is the reality for issues found during UAT.

To best answer the question “How much testing do we need to do on this project?”, remember that both the completeness and the timing of testing are critical to a project’s success. Like a marathon, getting ahead and staying ahead is much easier than trying to make up time at the end. So make sure the project team has every opportunity to execute a seamless transition to production by testing “everything” early and often.