Even more complex and difficult to predict will be the "third wave" of ensuing losses, said co-presenter Thomas A. Longstaff, director, technical staff research and development, for the CERT Coordination Center, located at Carnegie Mellon University's Software Engineering Institute in Pittsburgh.
In the case of "non-functional," or passive, systems that may not have to function until long after Jan. 1, a Y2K problem may still be lurking, undetected, expensive to find and very difficult to distinguish from a non-Y2K coincidental loss."With functional systems, you get immediate results on a certain trigger date.It's basically everything a computer does," Longstaff
said."But a non-functional component, like a security system, basically does nothing - it just sits and watches.Its failure is not predictable just by looking at the code itself.You may not know about a fault in the system until years down the road."
To help sort out the dizzying number of possible Y2K exposures, Pasciullo and Longstaff
recommended that adjusters and insurers develop a Y2K "claim triage" system, which matches different sets of claims with different types of coverage analysis.
Dependency analysis - This method is a bit more invasive than the open-source method, Longstaff said, in that it looks at the connectivity map and the overall system architecture. By looking at a flow diagram of the computer's data, a computer technician can trace back to the points where connections break, logically deducing which areas of the system are affected by Y2K.
The problem with this method, however, is that the documentation for the programs rarely matches the actual functions of the computer, which has often been upgraded many times during its life. "In all the analysis I've done, not a single time has the documented architecture matched the actual architecture," Longstaff said.
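In code terms, the dependency analysis amounts to walking the system's data-flow graph outward from a suspect component. The sketch below is only illustrative - the component names and the connectivity map are invented, and in practice the map would have to be recovered from the system as built, not from its documentation:

```python
from collections import deque

# Hypothetical connectivity map: each component lists the components it feeds.
feeds = {
    "date_service":  ["billing", "scheduler"],
    "billing":       ["reporting"],
    "scheduler":     ["alarm_monitor"],
    "alarm_monitor": [],
    "reporting":     [],
}

def downstream_of(broken: str) -> set:
    """Collect every component reachable from a break at `broken`."""
    affected, queue = set(), deque([broken])
    while queue:
        for dependent in feeds.get(queue.popleft(), []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# If the two-digit date handler is the suspect source, everything it feeds is a
# candidate for Y2K-related loss; components outside this set can be triaged out.
print(downstream_of("date_service"))
# {'billing', 'scheduler', 'reporting', 'alarm_monitor'} (set order may vary)
```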
Black box testing - After the dependency analysis, the cost of the methods starts to rise dramatically, beginning with black box testing, Longstaff said. "This method looks at objects purely for what they do, not for why they failed," he said.
First, the technician looks at results, manipulates the environment of individual components, describes the form and content of all input and output, and then finds the error. "You need to have an expert who understands how a component was hooked up to others," he said.
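A black-box probe of a date-handling component might look like the sketch below. The component itself is a stand-in written here only so the example runs; in a real engagement the technician would feed inputs to the actual system and observe its outputs, never reading its source:

```python
def legacy_days_between(start_mmddyy: str, end_mmddyy: str) -> int:
    # Stand-in for the opaque component under test (two-digit years, MMDDYY).
    def ordinal(s: str) -> int:
        mm, dd, yy = int(s[:2]), int(s[2:4]), int(s[4:])
        return yy * 372 + mm * 31 + dd   # crude internal date arithmetic
    return ordinal(end_mmddyy) - ordinal(start_mmddyy)

# Black-box test: vary only the inputs and observe the outputs, paying special
# attention to dates that straddle the century boundary.
cases = [
    ("063099", "070199"),   # an ordinary 1999 boundary, for contrast
    ("123199", "010100"),   # Dec. 31, 1999 to Jan. 1, 2000
]
for start, end in cases:
    elapsed = legacy_days_between(start, end)
    verdict = "OK" if elapsed > 0 else "FAIL: end date appears earlier than start"
    print(f"{start} -> {end}: {elapsed} day(s) [{verdict}]")
```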
"Don't let the technical expert make the decision on which method to use, because he
will always make this one," Longstaff
warned. This type of reverse engineering is "more expensive, it's more complete and it's usually unnecessary," he said. "Limit your report to focus on the areas you need."
White box testing - This is the most expensive and exhaustive analysis, Longstaff
said."The first and last question is in this method is, "What is the source of the problem?"To find this, the technician exposes every nook and cranny of a computer system, examining every possible variable that can occur, including analysis of any Y2K mitigation strategies or modifications that have been attempted."Use this source extremely wisely and sparingly.This is the traditional method of expert analysis," he
"Understanding [the four methods] helps you make the best decisions," Longstaff
said."That's one of the reasons you want to bring in someone who can traverse all the levels of investigation."
Though much of the minutiae of these analytical methods is well beyond the depth of the average adjuster, that does not mean adjusters should step aside during the testing. In fact, Longstaff suggested that adjusters rein in these hired experts and remind them of what insurers are looking for - the origin and cause of loss.