Lost and Found... the newsletter of Cibola Search and Rescue
Volume 10, Issue 12
8 December 2005
Editors: Tom Rinck, Mike Dugger, and Tom Russo

Cibola Search and Rescue
"That Others May Live..."
Top of the Hill by Tony Gaier, President

Thanks to Chris Murray and Mike Dugger for conducting the land navigation evaluation in November; everyone passed.

The last evaluation of the year is scheduled for December 17th at 9:00 AM. It will be a litter handling evaluation at the Three Gun Spring Trailhead. If you plan to attend this evaluation, please leave a voice message on the hotline by Friday, December 16th. In order to conduct this evaluation properly, I would like to have six or more individuals. Currently there are 11 people on the team who need this evaluation.

Thanks to everyone who helped out this year with trainings, evaluations, and other special events. I would like to challenge everyone to get more involved with the team next year. Think about writing an article for the newsletter or conducting a training for the team. Think of ways to recruit new members. Ask a friend who may be interested in search and rescue to come to a meeting or to contact the membership officer.

I hope everyone has a great Christmas and a Happy New Year. Please drive and play carefully this holiday season, and I hope to see you at the first mission of the New Year!

Boots and Blisters by Mike Dugger, Training Officer
In November we had a training on SAR Fundamentals at the north end of the Piedra Lisa Trail in Placitas. Seven people attended. We did a litter haul up the trail and in the surrounding foothills for about a mile. Tom Russo then led some instruction on map reading, terrain identification, and resection to locate our position on the map. Finally, the team did an area search looking for realistic clues, and found 73% of them. A few of the clues were very small - for example, a small compass, a mirror, and a comb. For our regular search techniques evaluation, the clues will be larger than the ones missed during this training exercise.

The team took more than the allotted time for the given area, based on an average speed of 1 mile/hour and searcher spacing of 50 feet. This illustrates the importance of maintaining the rate of motion of the line, as well as searcher spacing, in order to cover the required area in the allotted time. If the team had been stopped at the end of the allotted time, they would have missed a few more clues and failed to reach a 65% probability of detection. An area search requires some time to form the line, mark boundaries with trail tape, and wait for searchers to check out obstructions and possible clues, as well as "purposeful wandering" from a straight line during a pass. Therefore, a walking speed of about 2 miles/hour is required to average 1 mile/hour for the entire assignment.
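
As an aside, the allotted time for a segment under these assumptions comes from simple arithmetic, sketched below. The 1 mile/hour speed and 50-foot spacing are the figures above; the segment size and team size are invented for the example, and the sketch ignores the retraced edges of the taped boundaries:

  # Rough sketch: estimated time to grid-search a segment.
  # A line of searchers spaced "spacing_ft" apart moving at "speed_mph"
  # covers roughly n_searchers * spacing * speed of area per hour
  # (ignoring the end searchers' retraced, taped edges).
  FEET_PER_MILE = 5280.0

  def allotted_hours(area_sq_mi, n_searchers, spacing_ft, speed_mph):
      swath_mi = n_searchers * spacing_ft / FEET_PER_MILE
      return area_sq_mi / (swath_mi * speed_mph)

  # Hypothetical 0.1 square mile segment, 7 searchers, 50 ft, 1 mph:
  print(round(allotted_hours(0.1, 7, 50.0, 1.0), 2))  # about 1.5 hours

The point of the 2 miles/hour figure is that if roughly half the clock time goes to forming the line, taping boundaries, and checking possible clues, the line must walk at twice the target average speed while actually moving in order to stay on schedule.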

December will present two training opportunities. Tony Gaier will lead a night land navigation training on the evening of December 3 at Balsam Glade. The following weekend, on December 11, there will be a litter and low angle technical training. Since we will not have a classroom session in which to practice knots and subject tie-in, we will go over these in detail at the training, including working with a new strapping system that should make tie-ins go much faster. I will also have all of Cibola's technical gear at the training, and we will go over some basic uses of this equipment, such as lowering with a brake tube and a low angle litter raising system. The training will begin at 0900; please bring a rock helmet or bike helmet if you have one. Check the team voicemail hotline for the location of this training.

Gearing Up by Mark Espelien, Equipment Committee chair
A Review of NMESC Map Service

NMESC has recently started providing map printing services to all NM teams and individual members with an active membership. For further details, go to www.nmerc.org and click on the "Map Information" link. I recently bought copies (paper and waterproof) of the four quads we normally use, and did a little "testing".

On initial inspection, the big advantage of these maps is the UTM grid overlay. One of our interpolators matched up very well on the grid, so the scale looks correct. There are some "jaggies" from the digital printing process when compared to the USGS maps, but legibility is still very good. Color is accurate, with the exception that the blue designating water courses and springs seemed fainter than on the USGS maps.

Now for the "testing", primarily on the waterproof maps. I folded the maps accordion-style with as sharp a crease as I could apply. The waterproof material does not hold a crease as well as paper, but the color at the crease held up better than on the paper. The waterproof maps are bulkier than the paper version. They don't tear, but with a lot of mechanical force they will stretch and distort.

A splash of water wiped off with no bleeding, as expected. A relatively fierce application of a blue scrub sponge, used to try to take off the SKU label, resulted in almost undetectable smearing. Then I soaked the map in a bucket of water for 24 hours. There was a definite green tinge to the water, and the green on the map was slightly faded. Brown contour lines, red boundaries, and black text bled through to the back of the map, and were harder to read on the front. While this isn't a very realistic test, the lesson would be to dry off your maps before storing them.

The only other test was of writing instruments. Sharpies work best, with ballpoint pens just behind, although ballpoint pens will distort the plastic. A #2 lead pencil did not work well at all, and was barely visible. A softer lead pencil may work better, but I did not try this. I also did not try grease pens or whiteboard markers.

One test result of interest to all the ICs/FCs out there - coffee stains the waterproof material badly and permanently!

In summary, these maps are good quality, with the biggest pros being the UTM grid and the price. The only practical con was the pencil test, which means we would still need paper maps for our navigation resection exercises.
"New" perspectives on ground SAR planning by Tom Russo
I am titling this article '"New" perspectives on ground SAR planning' with the quotation marks in place because the ideas here are in fact not new at all; they have simply been neglected, underemphasized, or completely misrepresented in traditional land SAR texts over the last thirty years or so. They have, however, been well established and proven in maritime SAR since World War II. This article presents nothing new itself; my intention is only to call attention to a few articles published in the last five years that have attempted to bring the science of "search theory" to inland SAR in a way that previous attempts failed to do. It is my hope that at least a few Cibola members might be intrigued enough to read some of this literature, too.

The inspiration for writing this article came from reading the final report on a series of "sweep width estimation" experiments that included a data-taking opportunity at ESCAPE 2004 near Ruidoso. That paper --- all 14MB and 245 pages of it --- can be found in PDF form at http://www.uscg.mil/hq/g-o/g-opr/nsarc/DetExpReport_2004_final_s.pdf. From there, I found myself reading an earlier paper that developed the experimental procedure used there (http://www.uscg.mil/hq/g-o/g-opr/nsarc/LandSweepWidthDemoReportFinal.pdf), a NASAR publication by Jack Frost entitled "Principles of Search Theory" (available from the NASAR bookstore, or in PDF form from http://www.sarinfo.bc.ca/Library/Planning/PrincSrchThry_S.pdf), a paper entitled "Controversial Topics in Inland SAR Planning" (available in two forms at http://www.newsar.org/White%20Paper.htm --- I highly recommend reading the version with Jack Frost's comments inline, followed by the "civil critique" by Charles Twardy at http://sarbayes.org/newsar.shtml), and finally "Compatibility of Land SAR Procedures with Search Theory" (http://www.uscg.mil/hq/g-o/g-opr/nsarc/LandSearchMethodsReview.pdf, with its critique at http://sarbayes.org/cooper.shtml). I have also joined a mailing list, "SAR-L", in which many of the people who wrote these papers participate (a quick Google search should help you find this list should you be interested); there is lively discussion there about how these ideas might begin to be used in land SAR.

The result was a bit of an eye-opener. In combination, these papers challenge much of the "lore" we have been taught for years --- when compared to the rigor of the scientific method applied to maritime SAR, the inland SAR "theory" we've been taught is not very convincing.

Background

The scientific foundation of search planning lies in the applied mathematics field of "operations research." The field of search theory was pioneered in the 1940s by B. O. Koopman in a classified document entitled "Searching and Screening", one motivation for which was to improve techniques for locating enemy submarines. After the war this document was declassified, and it was later expanded into a book of the same name (currently available from the Military Operations Research Society at http://www.mors.org/monographs.htm). Cursory references to this material have been scattered about the various land SAR management texts over the years, but no coherent application of its principles has been presented --- in some books the equation "POS=POA*POD" is stated early on and then ignored for the rest of the book; in others it is simply presented as a fact with no meaningful impact on search planning.

Effective Sweep Width

Central to the mathematical theory of search is the concept of effective sweep width. This is a measure of the "detectability" of a type of object by a specific type of sensor (or searcher) in a particular set of conditions. Consider a sensor moving through a uniform, randomly distributed swarm of identical objects: the effective sweep width is defined as the rate at which the sensor detects search objects (objects per unit time) divided by the product of the object density (objects per unit area) and the speed of the sensor:

  Effective Sweep Width = (objects detected / time) / ((objects / area) * speed)

Frost illustrates this concept with an extended analogy using brooms as sensors and sand as the "objects" to be detected --- his series of articles is very readable and I highly recommend starting with it. The point, though, is that this effective sweep width is something that can be directly measured for a combination of sensor, search object type, and search condition (speed of sensor motion, terrain, vegetation, etc.). Two sensors with equal effective sweep width would not necessarily detect exactly the same objects in the swarm, but would find these objects at the same rate as they moved at the same speed through the search area.
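
As a minimal numerical sketch of this definition (every number below is invented for illustration, not a measured value):

  # Effective sweep width from its defining ratio.
  detections_per_hour = 12.0    # rate at which this sensor finds objects
  objects_per_sq_mile = 100.0   # density of the hypothetical swarm
  speed_mph = 1.0               # sensor speed

  # W = detection rate / (density * speed); the units reduce to distance.
  W_miles = detections_per_hour / (objects_per_sq_mile * speed_mph)
  print(W_miles)                # 0.12 miles, roughly 630 feet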

A sensor that can detect every single search object (say, a mannequin in blue coveralls) in a 60 meter swath centered on the path of motion, but no search objects outside of the swath, would be said to have an effective sweep width of 60 meters. This "detection profile" is referred to as the "definite detection profile" and, while unrealistic, provides one bounding special case for later consideration.

A second sensor type that could detect half of the objects in a 120 meter swath and no objects outside that swath would also have an effective sweep width of 60 meters --- these two sensors detect different sets of objects from the swarm of identical objects, but they detect the same number of them per unit time and therefore have the same effective sweep width.

Finally, it is easy to imagine a sensor (say, yourself) that can detect objects of the specified type much better when the lateral range (perpendicular distance to the path travelled) is small, and progressively worse as lateral range increases. In this case, one can still compute an effective sweep width. The exact method used can vary, but an important quality of the result is that such a sensor will detect as many objects at lateral ranges outside half its effective sweep width as it misses at lateral ranges inside half the width --- alternatively if one imagines the searcher cutting a symmetric swath of width equal to the effective sweep width through a search area centered on the path walked, the number of objects found outside the swath will equal the number missed inside. In fact, this is found (in http://www.uscg.mil/hq/g-o/g-opr/nsarc/DetExpReport_2004_final_s.pdf) to be the simplest way to compute effective sweep width from field experiments. A mathematically equivalent approach is to construct a "detection profile" --- a graph of the probability of finding a search object of a specific type as a function of lateral range --- and find the area under the curve.
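
Here is a quick numerical sketch of that last computation, integrating a made-up detection profile to get the effective sweep width (the exponential fall-off and its 100 meter scale are arbitrary choices for the example):

  # Effective sweep width as the area under a detection profile.
  import math

  def p(x_m):
      # Probability of detecting an object at lateral range x (meters);
      # this particular profile is invented for the sketch.
      return math.exp(-abs(x_m) / 100.0)

  # Midpoint-rule integration over lateral range, -500 m to +500 m:
  step = 1.0
  W = sum(p(x + step / 2) * step for x in range(-500, 500))
  print(round(W))  # ~199 m; the exact integral is 200*(1 - exp(-5)) ~ 198.7 m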

It's important to keep in mind that the concept of effective sweep width includes in it an assumption about the sensor's speed --- doubling the sensor's speed might very well require an adjustment to its effective sweep width. Imagine doubling your speed through a search segment: you will almost certainly miss more search objects than at the original speed, and the objects found per unit time might drop enough so that the ratio that defines effective sweep width might not remain the same.

Effort

Another important concept in search theory is that of effort. The definition of effort used in the theory is simply the distance travelled by the sensor along its track, or equivalently the speed of the sensor multiplied by the time it spends searching. Effort has units of distance, just as effective sweep width does. We'll use the variable z to denote effort.

Area Effectively Swept

The product of effective sweep width and effort is known as Area Effectively Swept --- obviously, as the product of two distances, it has units of area. A sensor with a definite detection profile of width W applying an effort z will sweep up all the objects in an area W times z. Similarly, any other sensor with effective sweep width W applying an effort of z will find as many objects as that perfect one, but will not leave a path of width W devoid of objects behind it. It is because of this equivalence of the number of objects detected between these two cases that the product of effective sweep width and effort is known as the area effectively swept --- given a particular density of objects to be found, both sensors produce the effect of finding a specific number of objects equal to the density of objects times the area effectively swept.

Coverage

Knowing how effectively a sensor locates search objects as it moves, we come to another important quantity: the coverage. Coverage is defined as the ratio of the area effectively swept to the search area. Knowing the effective sweep width of a sensor type in a given set of conditions, one can compute the coverage as:
 C = W*z/A
 
where W is the effective sweep width, z is the effort, and A is the actual area to be searched. How's that work?

Imagine that we have measured that a searcher on foot, moving at 1 mph through terrain of the sort found in the Sandia Foothills on a clear, sunny day, has an effective sweep width of 196 feet when looking for unresponsive adult subjects wearing blue coveralls. We have five such searchers who are willing to search for 2 hours. Each searcher therefore brings 1 mph * 2 hours = 2 miles of effort to the endeavor, and so we have 10 miles of effort to be assigned. Converting 196 feet to miles gives an effective sweep width for each searcher of .037 mile. Thus, the area effectively swept by this team will simply be 10 miles * .037 mile = .37 square miles. By the way, these numbers are not picked at random --- the sweep width experiments done in 2004 found that in the area around Bonita Park, searchers had a sweep width of about 60 meters when looking for mannequins in blue coveralls. I'm simply making the assumption that the terrain in the foothills is similar, an assumption that is almost certainly unfounded due to vegetation differences --- sweep widths for unresponsive subjects in the foothills might even be much wider.

This team is asked to search an area of .125 square miles using all of their two hours of availability. What is their coverage? Simple: .37 square miles / .125 square miles = 2.96.
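
In code, the same arithmetic looks like this (all figures from the example above):

  # Coverage for the hypothetical foothills team: C = W*z/A.
  FEET_PER_MILE = 5280.0

  W = 196.0 / FEET_PER_MILE   # effective sweep width in miles (~0.037)
  z = 5 * 1.0 * 2.0           # effort: 5 searchers * 1 mph * 2 hours
  A = 0.125                   # search area in square miles

  print(round(W * z / A, 2))  # 2.97 (the text rounds W first and gets 2.96)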

Note that in order to sweep this area in exactly 2 hours with 5 searchers --- assuming parallel track searching, 1 mph average speed, Cibola's standard method of grid searching (where the searchers laying out trail tape retrace their steps on consecutive passes), a search area 0.5 mile wide and 0.25 mile deep, and a search pattern parallel to the short side --- one can compute how far apart these searchers will have to be to do the job: each pass in the 0.25 mile dimension will take 15 minutes, so there will need to be 8 passes to do the job in 2 hours. Dividing the .5 mile width into 8 equal passes, and dividing each pass by four (one less than the number of searchers), one gets an inter-searcher spacing of only 82.5 feet. Furthermore, note that the effort of the searchers at the ends of the line is applied twice over each of their passes --- one should probably account for this; while these searchers are indeed searching their swath twice, their effective sweep width is probably lower than the other searchers' due to their need to concentrate on the navigation and marking of the search area. Accounting for that would probably decrease the area effectively swept, and therefore the coverage. Note also that the assumption of 1 mph average speed is probably optimistic, as evidenced by recent revelations regarding Cibola's search techniques evaluations.
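
The spacing arithmetic from that paragraph, spelled out:

  # Inter-searcher spacing for the 2-hour grid search example.
  FEET_PER_MILE = 5280.0
  width_mi, depth_mi = 0.5, 0.25   # segment dimensions
  speed_mph, hours, n = 1.0, 2.0, 5

  pass_h = depth_mi / speed_mph                    # 15 minutes per pass
  n_passes = int(hours / pass_h)                   # 8 passes in 2 hours
  pass_ft = width_mi * FEET_PER_MILE / n_passes    # 330 ft per pass
  print(pass_ft / (n - 1))   # 82.5 ft spacing (taped edges are retraced)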

So what? Who cares? What's in it for SAR?

Probability of Detection

We in land SAR are used to thinking of probability of detection the way it is taught in too many texts on land SAR:

If there were 100 clues in the area you searched, how many do you think you'd have found?

But this is not what search theory uses for Probability of Detection. The meaningful definition is:

What is the probability that the specified sensor, moving in the way it did, would find the specified object given that the object is in the area?

The "standard" land SAR meaning is subjective and essentially meaningless from a mathematical standpoint, and useless in planning. Experiments (notably the one describing the sweep width experiments held in 2004, referenced above) show that these subjective "POD" estimates bear little relation to the actual probability of finding the objects. In fact, there is a slight negative correlation in that the lower the probability estimate the more likely it is that it is an underestimate, and the higher the probability estimate the more likely it is to be an overestimate --- and in no case was it observed that experience, training, age, or other factors markedly improved a searcher's ability to estimate the answer to the first question any better than pulling random numbers out of a hat.

The second question, however, can be answered by search theory using the concepts described earlier and an assumption about a "detection function."

For the perfect sensor that finds all the objects in its swath of width W and nothing outside, the probability of detection is simply proportional to the coverage, reaching 100% when the coverage reaches 1.0 and remaining 100% for all higher coverages. This "definite detection function" is the best possible relationship between coverage and POD. It is also nothing more than an upper bound, since such detectors do not exist.

Koopman computed two other detection functions. The "inverse cube detection function" is based on the geometry of aircraft searching for ships' wakes using specific techniques, and on the angles the wakes subtend when glimpsed at various ranges --- I won't mention it further. The "exponential detection function," however, is meant to be used whenever random influences are present --- influences such as variations in track spacing, terrain, visibility, etc. The exponential detection function is the one most appropriate to conditions in land searches, and it leads to the following, very simple relationship between POD and coverage:

   POD = 1 - exp(-C)
 
where "exp(-C)" is the function which raises the quantity e to the negative C power.

Using this relation and a pocket calculator, one can see that our hypothetical team of 5 searchers with an effective sweep width of 196 feet, spending two hours doing an evenly spaced grid search for an unresponsive subject in blue coveralls in the Sandia Foothills, and achieving a coverage of 2.96, reaches a POD of 1-exp(-2.96) = .948, or 94.8%. Let's call it 95% for use later.
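
Continuing the running example in code:

  # POD from coverage using the exponential detection function.
  import math

  def pod(coverage):
      return 1.0 - math.exp(-coverage)

  print(round(pod(2.96), 3))   # 0.948 --- the 95% figure used below

  # For comparison, the definite-detection upper bound discussed earlier
  # would be min(coverage, 1.0), i.e. 100% at this coverage.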

Note that this specific example says nothing of the team's probability of finding bits of trash shed by the subject, footprints, broken twigs trod upon by the subject, or anything else. It is simply the probability that the team would find the subject, dressed in the assumed clothing the subject was wearing, in conditions prevailing in the area under search if the subject were in fact in that area while they were searching it.

Probability of Area

Also central to the theory of search is the notion of "probability of area" (POA), the probability that the subject is in a given area.

POA is also known in the search theory literature as "Probability of Containment" (POC). The term "POC" was unfortunately also used for something completely unrelated ("probability of coverage") in a 1996 article in the NASAR journal RESPONSE, and as a result that usage of the acronym appears in a few of Cibola's older search techniques handouts and in past articles in this newsletter. That usage is peculiar to that one article and those derived from it, and is essentially an attempt to develop a "fudge factor" to apply to a team's estimated POD, based on the ratio of the time they would be expected to search (given a few assumptions) to the time they actually did search (i.e. if they spent too little time in the area, one could downgrade their estimate by this factor). To avoid further confusion, I'll avoid using the term POC in either sense, but it should be noted that in formal search theory POC is a synonym for POA, and the other usage is not in any way standard.

The probability of area for a specific section of a search area is estimated in a variety of ways that aren't all that important for a ground searcher to be concerned with --- the point is that at some stage in the planning of a large area search, the relative likelihood of the subject being in various parts of the search area must be estimated. Many inland SAR books state that one must first divide the search area into searchable segments of manageable size, and then assign probabilities to each of those segments. In fact, no such requirement exists --- one can use all available information to assign probabilities to any set of areas (even if there is only enough information to assign probability to general regions), and then create search segments that divide these areas up in an operationally convenient way, "peanut-buttering" probability across the original areas and dividing it among the search segments. There might be some relationship between the suitability of an area as a search segment and the likelihood of its containing a search subject, but one cannot be assumed --- while the very things that make a segment easy for a searcher to traverse might make it a likely place for a subject to traverse, it is also possible that the ease with which an area can be searched reflects a low probability that the subject would get stuck there.

Naturally, when new information comes in, the probability estimates need to be revised. Such new information could include finding a clue somewhere, some new revelation about the subject's habits and plans, or a report such as "team 5 completed their assignment and didn't find the subject in their segment."

Probability of Success

Given an estimate of the relative likelihood that various areas contain the subject, one can assess the probability of success of a search of one of those areas:

 POS = POA*POD
 

That is, the probability of finding a subject in the area is the probability of the subject actually being there multiplied by the probability of an assigned resource finding the subject assuming it's in that area.

While this equation is present in most land SAR texts, its importance is neglected, and sometimes it is explicitly stated to be meaningless. This is not the case. In fact, given a calculated probability of success, we can compute the probability that an unsuccessful search of an area was due to having missed the subject --- i.e. the probability that the subject is in the area but we failed to make a find due to limited POD:

 POA_new = POA_old*(1-POD)
 or
 POA_new = POA_old - POS
 

That is, the new probability that the subject is in the area is our original estimate multiplied by the probability that we would have missed detection assuming the subject's presence. It could be the case that after searching an area to a certain computed (not estimated!) POD, the area remains the most likely place --- such an area could warrant additional searching. Or it could be that after the search, the residual POA is low enough to justify committing resources to a different area, in an attempt to increase overall POS faster with the limited resources available.
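
A small sketch of this bookkeeping across a few hypothetical segments (the segment names and all POA/POD values below are invented):

  # Updating segment POAs after unsuccessful searches.
  segments = {"A": 0.30, "B": 0.20, "C": 0.10}   # initial POA estimates
  searched = {"A": 0.50, "B": 0.80}              # computed PODs of searches

  total_pos = 0.0
  for name, pod in searched.items():
      pos = segments[name] * pod      # POS = POA * POD
      total_pos += pos
      segments[name] -= pos           # POA_new = POA_old - POS

  print({k: round(v, 3) for k, v in segments.items()})
  # {'A': 0.15, 'B': 0.04, 'C': 0.1} --- unsearched C is untouched
  print(round(total_pos, 3))          # 0.31 cumulative POS so far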

Those familiar with probability might at this point think that the entire probability map needs to be "renormalized" so that the probability of all areas of the map add up to 100% again. This is not true, despite being stated with absolute authority by most land SAR texts. Those concerned with this apparent clash with intuition are referred to the papers and texts noted in the second paragraph.

Overall POS, the sum of all the individual POS values from each area's search effort, is the quantity that search management is trying to maximize. The goal of search planning is to allocate resources so that the POS increases at the maximum rate possible given the available effort. The science of search theory provides quantitative algorithms that can map effort (given measured effective sweep widths) to areas (given estimated probabilities of area) in a manner that maximizes the rate of POS increase. An allocation of resources that assures optimum growth of POS with time as limited effort is expended has the best chance of finding the subject (success) sooner than a less optimal plan.
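To give a flavor of what such an algorithm might look like, here is a toy sketch: effort is doled out in small slices, each slice going to the segment with the highest marginal POS return under the exponential detection function. This is only my own illustration of the idea, with invented segment numbers --- not any of the published algorithms:

  # Toy greedy allocation of search effort.
  # Under the exponential detection function, the marginal POS gained per
  # unit of effort in a segment is POA * (W/A) * exp(-C), which shrinks
  # as coverage accumulates, so effort naturally spreads across segments.
  import math

  # (POA, sweep width W in miles, area A in sq miles) --- invented values
  segs = [(0.30, 0.037, 0.25), (0.20, 0.037, 0.125), (0.10, 0.037, 0.125)]
  coverage = [0.0] * len(segs)

  total_effort, step = 10.0, 0.1   # 10 miles of effort, 0.1 mile slices
  for _ in range(int(total_effort / step)):
      rates = [poa * (w / a) * math.exp(-c)
               for (poa, w, a), c in zip(segs, coverage)]
      i = rates.index(max(rates))
      coverage[i] += segs[i][1] * step / segs[i][2]

  pos = sum(poa * (1 - math.exp(-c))
            for (poa, w, a), c in zip(segs, coverage))
  print([round(c, 2) for c in coverage], round(pos, 3))
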

So what's it mean to us?

The papers I cited at the beginning of this article are beginning to effect change in the way that searches will be planned in land SAR --- or at least they stand the chance of doing so if we start listening.

Search theory's rigorous mathematical foundation (to which I've not done any justice at all) has proven to be of great value in the planning of maritime searches, and is all that is taught to search planners in that field. Search theory is used to optimize the allocation of effort to search areas; in SAR the aim is to reduce the time needed to find the subject and save a life, while in other kinds of searches it can be to reduce the cost of an operation or minimize a danger to national security --- some of these applications are referenced in the work of Cooper, et al.

Reading the articles mentioned at the beginning reveals that the techniques taught in classic land SAR theory have in many cases been developed without connection to the underlying science of search theory, or with only a nodding acknowledgement that there is a science somewhere related to the subject. Seeking to do away with much of the real math, these texts take shortcuts: "POD targets" (e.g. "Objective: search the following areas to a cumulative POD of 70% by 06:00 tomorrow morning"), sub-optimal plans specifying that areas of high POA be searched before areas of low POA (irrespective of the effort required to obtain high POS, or of the rate at which cumulative POS increases for that effort), and a completely fallacious lore regarding the effectiveness of multiple searches through an area compared to single searches. Objective measures of POD (through effective sweep width experiments coupled with calculation of coverage and the use of the exponential detection function) are neglected in favor of a meaningless question asked of search teams: "If there were 100 clues (of unspecified size, shape, color or detectability) in the area you searched, how many would you have found?"

If these theoretical underpinnings of search planning do indeed get adopted by land SAR planners as they should, one might think that the average grunt need know nothing of it --- searchers are not usually involved in the planning of a search operation at this level even now. But clearly, some things will change, and the average Cibola member should begin thinking about how these changes in planning focus might change the nature of assignments and the ways we should train for them.

Clearly, one would not be asked to "search this area to 70% POD" anymore. Instead, armed with good experimentally determined effective sweep widths, a search planner would send a team into an area to achieve a certain POS --- this would be done primarily by telling the team to search the area uniformly at a certain speed over a certain time.

Let's do an example: when our hypothetical 5-person team was assigned above to spend two hours searching a .125 square mile rectangle in the Sandia Foothills, the search planners might instead have determined that the area had a 3% chance of containing the subject, and needed that reduced to a 1.5% chance to be consistent with other operational requirements and their optimal resource allocation. To accomplish this requires that the team achieve a 50% POD. Using the exponential detection function, achieving a POD of 50% requires a coverage of .69. The area is .125 sq. mile, and each searcher's effective sweep width is .037 mile --- using the formula:

 C = W*z/A
 
with C=.69 and W=.037 mi, we see that we require an effort of:
  z = C*A/W
 

or 2.3 miles of effort. If each of our 5 searchers can travel at 1 mile per hour, we can clearly get 5 miles of total effort out of the team in one hour (one mile from each of them), or 2.3 miles of effort in about 27 minutes --- for the sake of example, let's round to 30 minutes. So the assignment would be to spend half an hour searching the entire area, distributing their effort as uniformly as possible at an assumed 1 mph rate of travel. In order to sweep this area to (approximately) 50% POD that fast at the assigned 1 mph speed, our searchers would have to make two very wide passes through the .25 x .5 mile area, spaced at about 330 feet! (Before balking at the huge separation, remember that the 50% POD expressed here is the probability of finding an unresponsive human subject in a certain color of clothing in the specific terrain under consideration, and NOT the "POD" we're used to thinking of --- the probability of finding half of the "stuff," from footprints to cigarette butts to discarded clothing, that might be out there if there were anything out there to find.)
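
Checking those numbers (all from the example above):

  # Effort and time for the 50%-POD assignment.
  import math

  A, W = 0.125, 0.037            # area (sq mi), sweep width (mi)
  C = -math.log(1 - 0.50)        # coverage for 50% POD: ~0.69
  z = C * A / W                  # required effort in miles
  minutes = z / (5 * 1.0) * 60   # 5 searchers at 1 mph
  print(round(C, 2), round(z, 2), round(minutes))
  # 0.69 2.34 28 --- the text's 2.3 miles and ~27 minutes, modulo rounding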

Clearly, if this team of five hot-stuff searchers decides for themselves that such a spacing is ridiculous, and wants to achieve a higher POD just because they can, they have squandered a limited resource (searcher effort) for negligible gain --- the probability of finding the subject in that area (the probability of success) is POA*POD, or in this case 1.5% if the team does as they are asked vs. 2.85% if they quadruple the time they spend in the area contrary to their assignment (thereby achieving much higher coverage and 95% POD). Assuming that the team was asked to do the search in this way as the result of a computed optimal resource allocation, they aren't doing anyone any favors. The additional 1.5 hours they spent beating this segment to death might easily have provided effort to increase the overall POS to a level that would lead to a find, suspension of the search, or a reassessment of lost-person scenarios and a reallocation of precious resources. Note that if there are four similarly-sized regions of 3% POA that all require the same amount of effort for a given coverage, and all allow the same effective sweep width from the chosen resource, applying half an hour of the five-person team's time (2.5 miles of total searcher effort) to each one (thereby achieving the same assumed 50% POD in each area) leads to an increase in cumulative POS of 1.5% per segment, or a total of 6% between the four of them (neglecting for now the important issue of movement between search segments) --- but searching our one area of 3% POA with four times the effort only increased the overall POS by 2.85%. This is clearly a sub-optimal way of spreading effort around.
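
The same comparison in code (invented numbers as before; the spread-out option comes in slightly above the quoted 6% because a full half hour per segment gives a bit more than 50% POD):

  # Spreading effort across four 3%-POA segments vs. piling it into one.
  import math

  W, A, poa = 0.037, 0.125, 0.03
  effort = 10.0                   # 5 searchers * 1 mph * 2 hours

  c_each = W * (effort / 4) / A   # ~2.5 miles of effort per segment
  pos_spread = 4 * poa * (1 - math.exp(-c_each))
  c_one = W * effort / A          # all 10 miles in one segment
  pos_pile = poa * (1 - math.exp(-c_one))
  print(round(pos_spread, 3), round(pos_pile, 3))   # 0.063 vs 0.028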

What next?

None of this makes any sense without more objective measures of effective sweep width. For years we have been taught about "critical separation" --- that spacing searchers at twice the "average maximum detection range" (AMDR, the average distance beyond which a searcher cannot see a typical search object set out for the purpose of measurement) for a specified clue type gives a "theoretical" POD of 50%. The work establishing this lore was not based on search theory, and has no experimentally derived validity --- in fact, using the results of the experiments at ESCAPE, the conclusion that a "critical separation" search for the search objects in that study would yield a 50% POD is demonstrably false. The concept of sweep width, however, has a rigorous mathematical basis and has been the subject of experiments in the maritime search community for decades --- the National SAR Manual used by the Coast Guard contains tables of sweep width for various types of search objects and sensors (planes, helicopters, etc.) under a variety of conditions, all derived from decades of carefully designed experiments. Experiments along the same lines for ground SAR have only just begun. The pilot study in 2002 in Virginia, and the follow-up study that included the effort at ESCAPE in 2004, provide some of the first glimpses of how sweep widths depend on object type, searcher profile, terrain, vegetation, weather, and other factors.

One important thing to note from these very preliminary studies is that there is no simple, direct relationship between average maximum detection range and effective sweep width. In some environments the AMDR was near half the sweep width (making spacing at the sweep width, for a coverage of 1.0, equivalent to what is done when using critical separation), but in others the sweep width was almost equal to the AMDR (in which case obtaining a coverage of 1.0 would imply half the spacing and twice the effort of critical separation), and there were variations between the two extremes.

There is a hope that a relationship between AMDR (easily measured) and effective sweep width (time consuming and difficult to measure, requiring carefully designed experiments) can be found, but there is no a priori reason for there to be one --- AMDR is a rough measure of how well a searcher's eyes can detect an object knowing roughly where it is, whereas effective sweep width is a measure of detectability of the object while moving, not knowing where it is or even if there is anything there at all. Clearly, we can expect that there will be more such experiments as time progresses, and it's possible that one result of these experiments might be to come up with empirically derived rules-of-thumb to estimate effective sweep width faster than by running a huge data-gathering session for each possible variation of clue, terrain, weather and so on.

The discussion above emphasizes "grid" searching somewhat more than is appropriate, because it is the easiest to describe. One of the chief objections to adopting this type of theory is that it appears to imply an inappropriate reliance on grid searching, where this is not necessarily a good use of searchers on the ground. The math is simply easier to explain in terms of grid searching. The proponents of applying search theory to ground SAR planning are not advocating the adoption of maritime grid-searching techniques where they are not appropriate, only the application of mathematical rigor to the problem of optimally allocating search effort --- for which objective, scientific measurement of effective sweep widths and an understanding of detection functions and probability distributions are critical, and for which subjective, ad hoc procedures are detrimental.

Knowledge that these changes are coming should provide reason for increased attention to these concepts in Cibola's training, and should spark some interest in performing careful experiments of our own (based on the methodology set forth in the papers referenced above). For example, it would be useful to know sweep widths for the various types of terrain we deploy in, and the various types of search objects we expect to find in addition to an unresponsive subject. Perhaps we should be de-emphasizing the subjective estimation of "POD" that answers the "if there were 100 clues out there..." question. The possibilities for incorporating more advanced thinking into our training and evaluation program are exciting, and I hope that Cibola members will explore the literature with a critical eye to how we can improve ourselves as these ideas get incorporated into the larger body of land SAR planning.

Disclaimer and Copyright Notice by the Editors
The contents of this newsletter are copyright © 2005 by their respective authors or by Cibola Search and Rescue, Inc., and individual articles represent the opinions of the author. Cibola SAR makes no representation, express or implied, with regard to the accuracy of the information contained in these articles, and cannot accept any legal responsibility or liability for any errors or omissions that may be made. Articles contained in this newsletter may be reproduced, with attribution given to Cibola SAR and the author, by any member of the Search and Rescue community for use in other teams' publications.