Chapter 1 frames the discussion by exploring how individuals can be alternately defined as competent and incompetent. Similarly, through accounts of early and recent studies of facilitated communication, we begin to uncover the contextual conditions that interact with people's performance in authorship tests. It seems that recent experiences with facilitated communication parallel certain historical events and the social understanding of intelligence testing. (from the Introduction)
THE METHOD WAS BECOMING WHAT THE TESTS SAID IT WAS
The effect of the research debate over the method was to frame all discussions of facilitated communication in terms of "validity" or "invalidity," thus tying its meaning to the tests. The tests ceased to be representations of the method; rather, the test situation (e.g., identifying pictures, passing messages) was presumed to be synonymous with facilitated communication.
A similarity between facilitated communication testing and intelligence testing is unmistakable. Intelligence testing reified the idea that people possess amounts of intelligence, that these amounts can be measured, and that these amounts can be articulated as scores (intelligence quotients). The fact that the very idea of intelligence, let alone particular notions about what it is or how it might be measured, is a social construction is soon forgotten. In the wake of extensive testing, IQ has become an almost tangible thing. Hanson (1993) has described the process by which this can occur, where the idea of intelligence as a single thing reflects the fact that intelligence test results are nearly always reported in terms of the intelligence quotient, or IQ, even though the test is actually composed of different parts measuring different skills. The idea that intelligence can be quantified and that some people have higher amounts than others also seems to emanate from the intelligence quotient or quantification of test results. Thus, the notion that individual intelligence is something that people "have" or "do not have," and that it is basically "fixed for life," in other words a personal attribute, "stems from the belief that intelligence tests measure not what one already knows but one's ability to learn" (p. 255, emphasis in original). Thus, while factors such as opportunity and even desire to learn may vary throughout one's life, depending upon circumstance, ability to learn (intelligence) is presumed "hard wired in the person. Hence each individual's intelligence is considered to be fixed by heredity" (p. 256). Such ideas are not naturally self-evident, though many people believe they are, for such notions about intelligence have "achieved the status of bedrock assumption" (p. 256), treated as "a simple fact of nature" (p. 256).
Facilitation is, of course, newer and not yet as monolithically defined. Yet ideas about testing the method have led to assumptions that people are either "communicating" or "not communicating" and "influenced" or "not influenced," implying that influence in communication is abnormal, bad, a type of contamination. Failure to pass a test is taken as evidence that the method does not work and that the people using it are truly retarded (Klewe, 1993; Smith et al., 1994), unfortunate pawns of a fraud (Green & Shane, 1994), prisoners of their facilitators (Palfreman, 1994), or vehicles for other people's words (Smith et al., 1994).
Alternatively, however, we might ask what factors in the experience of people with communication impairments the researchers have considered. Most obvious among these, have they examined the individuals' lack of experience with test-taking and considered how to overcome this? Have they considered problems of failed confidence and ways to boost confidence? Have they considered the role of practice with test-taking? Have they considered multiple strategies by which individuals might confirm authorship? Have they investigated those instances in which people have demonstrated success with confirming authorship to discover the conditions that might have aided their success? [CHAPTER 1, PP. 27-8]
In Chapter 2, Cardinal, Hanson, and Wakeham provide the largest-scale study (to date) of facilitation, involving more trials than all the trials in all other studies, published prior to mid-1995, combined. In this study, "Who's Doing the Typing? An Experimental Study," the authors examine the ups and downs of individual performances. The study involves 43 individuals, ranging in age from 11 to 22 years, attending public schools. This study asks the most basic question regarding the validation of facilitated communication: Given "ideal" conditions, can a person using facilitated communication who has previously demonstrated that she cannot pass a message to a blind facilitator without facilitation do so under certain other conditions? This study supports the notion that a facilitator-user pair can generate output that is completely originated by the facilitated communication user. (from the introduction)
A possible reason why the results of this study vary significantly from many of the previous authorship studies is that the previous studies may have overcontrolled the "test" condition when it was not scientifically obligatory to do so. It is hypothesized by these researchers that controlling for normally occurring environmental variables, when there is no reasonable rationale to do so (e.g., partitions between the FC user and the facilitator, asking participants to "perform" in unknown settings, FC users wearing earphones, etc.), may contaminate the FC authorship experiment and actually breach the facilitator-user support mechanism, thereby hindering the general ability of the FC user to communicate.
This study's intent was to develop a protocol that controlled for variables that could threaten the study's validity (e.g., the facilitator must be blind to the message) and allow the FC user to focus on communicating her thoughts, but not to overcontrol so as to jeopardize the user-facilitator relationship, thus producing a naturally controlled environment. The fact that many students did appear to author their own facilitation in this study provides beginning evidence that some past studies may have overcontrolled their testing procedures, thus hindering the FC users' performance on those tests.
Another interesting finding of this study was that if one looked only at how FC users did on their first day of the facilitated condition, as compared to their baseline-1 scores, one would have to conclude that not 1 of the 43 participants in this study could pass the test, a result very similar to those of past studies in which practice of the protocol was not provided (Bligh & Kupperman, 1993; Hudson, Melita, & Arnold, 1993; Klewe, 1993; Moore, Donovan, & Hudson, 1993; Moore, Donovan, Hudson, Dykstra, & Lawrence, 1993; Shane, 1993; Wheeler et al., 1993). There was no significant difference between the baseline-1 highest scores and the highest scores on the first day of "testing." Viewing this result next to the fact that there was a significant difference between highest baseline-1 scores and highest overall facilitated scores after practice had occurred, one can easily see why the one-place-in-time tests reported in earlier FC authorship research would show little or no successful performance on validation tests.
The one-place-in-time protocol condition is found in nearly all of the past quantitative FC validation experiments (e.g., Hudson et al., 1993; Simon et al., 1994; Wheeler et al., 1993). Our review of the studies' protocols indicates that FC users tend to be unable to pass information to "blind" facilitators when they are requested to do so without adequate practice of the testing conditions. Since this current study also shows that without practice FC users were unable to pass tests, but with practice many could do so to a significant level, then it appears logical to conclude that these past experiments were subject to hindrances to the measurement of original FC production. Practice of the testing procedure appears to be an important component when testing for FC authorship. [CHAPTER 2, PP. 49-50]
In Chapter 3, Biklen, Saha, and Kliewer report on a study entitled "How Teachers Confirm Authorship of Facilitated Communication: A Portfolio Approach." Despite the controversy over facilitation, thousands of teachers, parents, and researchers continue to use the method nationally and internationally. We might ask why. What do practitioners point to as evidence that convinces them that the words typed are those of the people with disabilities, not of the facilitators? This chapter looks at that question through the experiences of 7 facilitators and 17 students, examining in detail the kinds of evidence they amass, often informally through daily use. The chapter, set in the tradition of qualitative, ethnographic research, suggests a portfolio analysis approach to confirming authorship. This chapter, as do all of the chapters, examines the ways of understanding that underlie the method of inquiry used in the research and the kinds of understandings that can be derived from it. Related to this, the authors examine the dilemmas encountered in doing the research, for example, problems of separating the perspectives of those observed from the researchers' own perspectives and debates about how to present the data and how much data to include. In this account the authors also describe motor disorders that may influence individuals' communication difficulties. (from the introduction)
HOW TEACHERS DECIDE THE TYPING IS THEIR STUDENTS' OWN
As might be expected, teachers do not all use the same terms to describe similar phenomena or identify the same factors to establish students' authorship of typed work. A speech teacher said that the clearest, most persistent indicator that one student was doing the typing was in how her eyes went immediately to a target letter, even though it then often took her several seconds to begin moving her index finger toward the selection. Another teacher pointed to a particular student's selection of big words that other students with whom she facilitated did not use; still another was convinced when a student typed swear words at her. Our approach was not to immediately adopt any of these as "our" categories. Rather, we listened to the teachers, asked them to clarify comments, issues, or events, and tried to observe in action the phenomena to which they referred. Ultimately, we began to collapse their points into larger categories.
The central theme running through the teachers' accounts of the typing was that the students differed from one another. The next four sections identify, explain, and explore these differences within the categories of (1) how students attend to typing, (2) the relationship of students' speaking to their typing, (3) communication form, content, and style, and (4) conveying accurate information not known to the facilitators. [CHAPTER 3, P. 59]
...In addition to the above-mentioned different ways in which teachers described each student's physical style of attending to the typing, one quality emerged repeatedly as especially important. This was independent typing. Several students were observed typing their names and the date independently but receiving forearm support during conversational typing. Evan typed one phrase (MORE TIME WITH MONICA) independently when his teacher let go of his arm during a facilitated conversation. A preschooler, Terry, and two elementary age students, Jacque and B. J., have typed individual words independently, for example, favorite cartoon figures, school activities, and spelling words. Patrick, a middle school student, was able to type a few words with just a hand on the shoulder; but when he began to stumble, his teacher moved the location of support to his elbow. The kindergarten student, Jacque, could point independently and reliably to correct multiple-choice answers in school work. Sixth-grader B. J. typed the weekly spelling words independently. All of the students demonstrated independent ability to go to a desk and to get out a communication device (e.g., typewriter, dedicated communicator, letterboard, portable computer), and the researchers observed numerous instances in which individuals (e.g., Evan, Stephen, Doug) hit the return, space bar/button, or communication device power supply independently at the appropriate time; such independent acts were often pointed out to the researchers by the teachers. The speech teacher at the middle school seemed to summarize the importance that all of the teachers attributed to independence when she remarked, "Independence would confirm it for anyone--I would think." Similarly, the sixth-grade teacher felt that B. J.'s independent typing of structured work (e.g., math problems, spelling words, multiple-choice tests) should settle the matter of authorship: "He's independent, so that's like--how much more obvious could that be?" [CHAPTER 3, PP. 61-2]
In Chapter 4, "Factors Affecting Performance in Facilitated Communication," Baldac and Parsons report on a six-person experiment involving a variety of message-passing tasks. Parsons is a leading Australian scholar in the field of communication sciences and has observed the emergence of facilitated communication in that country over the past two decades. Crossley was one of the facilitators in the study and comments from her perspective as a participant as well as analyst of facilitated communication research in the Postscript. Crossley rediscovered the use of facilitated communication training in Australia in 1977. As noted in Chapter 1, she has written about this period in the book Annie's Coming Out (Crossley & McDonald, 1984), filmed as A Test of Love (Brealey, 1984); she is also the author of the standard account of the method, Facilitated Communication Training (Crossley, 1994). In her postscript to this book, Crossley discusses the communication theory underlying message transmission through facilitated communication. She illustrates her remarks with examples from Australian validation tests conducted between 1979 and 1994, ending with the current study by Baldac and Parsons, which includes people who have Down syndrome or diagnoses of autism or intellectual impairment. (from the introduction)
If a participant does not satisfy the validation criteria set, what conclusion will be drawn? For example, previous researchers have used the results of their validation studies to conclude that their participants were unable to communicate via facilitation (Wheeler, Jacobson, Paglieri, & Schwartz, 1993). The results of the present study, however, have highlighted that researchers should make conclusive statements only regarding the participants' abilities to perform the tasks set. For example, P3 in this study scored 63% for the matching task and 21% for the labeling task. P3 scored 0 out of 3 for the message-passing task. Therefore P3's responses in the message-passing task via facilitation did not satisfy the validation criteria set by this study. However, P3 has been reported to type independently (without a facilitator) and recently has been suspended from school for independently typing obscenities on his communication device. Prior to being introduced to facilitated communication training, he was not allowed to attend school because he was regarded as intellectually disabled and unable to participate in academic activities.
Time to Respond
No set time frame for the completion of a response or a task was enforced in this study. However, it became apparent that the time taken to produce a response was affected by the tasks. That is, as the demands of the tasks increased, the participants required more time to respond.
Initially participants were expected to complete a total of 85 responses, a total chosen to provide ample opportunities to establish the validity of the participants' communications via facilitation. During the data collection phase of the study it became evident that the set target of 85 responses was an unrealistic and possibly unfair goal, given the time constraints placed on the study. However, the message-passing task would seem to solve this problem: because the facilitator was completely unaware of the intended response, one correct response would demonstrate the participant's ability to communicate via facilitation. Further validation tasks would therefore be unnecessary if one is only trying to determine whether facilitated communication works for a specific individual.
The time provided for participants to respond in previous validation research has not been clearly stated; however, from this study it was clear that the allowed response time may affect a participant's performance scores. For example, P1 on one occasion required 30 minutes to successfully complete one message-passing task. If this time frame had not been permitted, P1's ability to communicate via facilitation may have been questioned. It was also evident from the study that individuals performed at different speeds and needed to be judged on an individual basis. [CHAPTER 4, PP. 94-5]
In Chapter 5, "A Controlled Study of Facilitated Communication Using Computer Games," Olney reports the results of a study in which 9 experienced facilitated communication users between the ages of 16 and 42 and their regular facilitators engaged in computer game play over a series of 7 to 10 sessions. A "closed condition" (i.e., blind), in which the facilitated speakers, but not the facilitators, could see the computer screen, was introduced after the facilitators and facilitated communication speakers learned the requirements of the games. Although the introduction of the closed condition was initially problematic for all participants, a number of them demonstrated authorship by providing accurate responses to game items in the absence of facilitator knowledge. The study provides insights into the complexities of testing facilitated communication. One of the most interesting aspects of this study is the author's collection of typed commentary, by the people with disabilities, concerning how they experienced the closed condition. In her own reflections on the study, Olney identifies implications of this study for future investigations. (from the introduction)
Impact of Scaffolding Interventions on Outcomes
Performance on computer games by experienced facilitated communication users varied dramatically among participants, from session to session, and in open and blind conditions. Simple response formats (choosing A, B, C, or D, or adding a missing letter) and opportunities for multiple trials seemed to increase the likelihood of validation for participants. Scaffolding interventions were used to assess and/or ameliorate five specific issues: (1) test anxiety, (2) fatigue and other stressors, (3) interest in and knowledge of game content, (4) physical support needs, and (5) perceived relationships among the researcher, facilitator, and participant. [CHAPTER 5, P. 111]
Universal Problems with Movement
A major criticism of facilitated communication is that its users do not look at the target, making it appear that facilitators are controlling the typed output. Although participants in this study were partially selected because of their excellent facilitated communication skills, looking at the right target at the right time was problematic for each of them.
Analysis of videotapes revealed that all participants had difficulty moving their eyes and heads from the computer screen to the keyboard and back. This may have hampered participants' abilities to read items, think about them, and then respond accurately. It was hard to ascertain in what instances participants hit a response before reading, as opposed to hitting the wrong key because they did not know the answers.
Movement problems were most pronounced in the facilitator-blind condition and when fatigue or stress were manifest, but they were apparent to some degree throughout all sessions and in both open and blind conditions. Accommodations for participants included frequent verbal prompts to look at the monitor and keyboard, gestural prompts to the monitor, and physical prompts such as gently turning the head toward the visual target. [CHAPTER 5, P. 113]
Chapter 6, "Sorting It Out Under Fire: Our Journey," is a test designed by the person being tested. Unlike any of the other studies, it has been carried out by a person who uses facilitated communication. Marcus, a young man with autism, heard about the Wheeler, Jacobson, Paglieri, and Schwartz (1993) study that had been carried out at a large state institution in upstate New York. The test had been described on the Public Broadcasting System's Frontline exposé of facilitated communication (Palfreman, 1993). Hearing of the controversy, he asked a friend, Shevin, who is a linguist and a consultant to the Facilitated Communication Institute, to assist him with trying to pass the test himself. His goal was to prove he could pass it and then to help other people with developmental disabilities pass the same test. In fact, he wanted to design a study for other people who use facilitated communication. Together Marcus and Shevin set about replicating the well-known O. D. Heck study (Wheeler, Jacobson, Paglieri, & Schwartz, 1993). This chapter is their narrative of the experience. (from the introduction)
Under the third condition, both Eugene and I saw pictures, of which three were the same and three were different; during this condition, I provided physical facilitation. Again, except for minor misspellings, all six were answered correctly.
Bob and I were euphoric--Eugene had accomplished the task he had set out to master more than a year ago. However, when I asked Eugene what he wanted to do to celebrate, he typed MAYER LETS WRITE and wrote YES when I asked him if he wanted to go upstairs to the computer. Once at the computer, this is what he wrote:
Today I retook the test, and I passed it, Mayer says brilliantly. But I feel sad. Sad for people who can't do it and are silenced. Sad for those who will run from the depressing truth that I was right and they were wrong. Sad that I will be fighting this fight for years to come. And sad that this was even necessary. Friends will celebrate, but then the work must continue. [CHAPTER 6, P. 132]
[Conclusion by Eugene Marcus] Research is really useless as its own reward. The only good purpose for research is liberation from our limitations. Research designed to make those limitations more real and more legitimate must be stopped.
Great discoveries may be found in what others have overlooked. They will sometimes not recognize what they have been looking at all along. Real science takes time and experience and the ability to look critically at your own actions. That kind of science I am good at. The kind I will never be good at is the kind where one person studies another like a kind of grape or fruitfly or shell. We need allies, not people to sacrifice us on the altars of their careers. We are not something to be squeezed or swatted or listened to and dropped back on the beach. We are to be rejoiced with. We are like rare red forest demons, and so dance with us. Really dance with us, or rest assured we will dance without you. [CHAPTER 6, P. 133]
Chapter 7 takes an altogether different approach. It is a report of two case studies, the first of which concerns a boy who was thought to be severely retarded prior to being introduced to facilitated communication. The authors, Weiss and Wagner, recount their efforts to have the student listen to stories and then to report on them to his facilitator, who had been kept blind to the stories' content. In a second case, the researchers describe a student's progress from being unable to communicate through typing or writing to typing independently. When the authors first heard about and observed facilitated communication, they thought it made no sense and were convinced it was a hoax. Then over a period of weeks, after observing individuals using it, they began to be challenged by their own research. In this chapter, they describe the process they went through and the questions they now believe must be asked by disability researchers. (from the introduction)
Facilitated Communication Is Evanescent and Fragile. Since our first clear evidence of valid facilitated communication was revealed, we have initiated a small number of additional case studies that are currently progressing. Although these studies are not yet complete enough to report, they have contributed to our subjective impressions that this phenomenon is fragile vis-à-vis its reliability; some days we get it (i.e., valid communication) and some days we do not. For example, we are currently investigating facilitated communication with a young man who, with his partner-facilitator, has succeeded at passing information accurately about 10 to 15% of the time. Similarly, Kenny, described above in our case study (Weiss et al., 1996), did not show valid communication in the second of our three trials. Therefore, had Trial 2 been the only trial administered, we would have concluded that facilitation was not a valid form of communication. There are at least two hypotheses regarding the reliability of facilitated communication that we must evaluate further. It may be that facilitated communication can exist (i.e., it is valid) but is not always operative (i.e., lower reliability). Alternatively, the actual validity and reliability of facilitated communication may be quite high, at least for some, but many of the experimental designs employed thus far have been unreliable in capturing the phenomenon.
It is clearly premature to speculate on the reliability of this phenomenon. However, recognizing the morass of complications associated with the fragility of facilitated communication may help us to interpret some findings and navigate our future research efforts. First, generalizing conclusions from a single evaluation is, at best, misleading. It seems essential to evaluate the phenomenon repeatedly and often, from a variety of experimental and experiential perspectives. [CHAPTER 7, P. 155]
Chapter 8 examines the perspectives of facilitated communication users toward independent typing. Located in the tradition of qualitative research, this study examines the experiences of eight school-age students who have learned to communicate with facilitation. Three of the students have achieved the ability to type some sentence-level communication without physical support; the other five are at varying levels of support. They talk about the various meanings independence has for them. These accounts are then analyzed in the context of classroom observations in which the researcher looks at how, when, and what they type. (from the introduction)
In his autobiographical book, I Don't Want to Be Inside Me Anymore, Birger Sellin (1995) asks:
which would you rather
for me not to live(,) without help and stay handicapped
or for me to become independent
if so you must just demand more from me. (p. 184)
This was a prevailing theme among the eight students in this study. Being pushed to do more typing with less physical support seemed to be a sine qua non for doing it, yet going without physical support was impossible unless the student was willing and wanted it. And then, the student still required prodding and thoughtful interaction, as described with Joseph and Cathy.
To hear a person's ideas on this or any topic required an expectant listener. In the course of the study, I observed that the students were forthcoming, even expansive, when engaged in conversation and parsimonious with their words when asked to give routine responses, to engage in conversations absent of ideas (action for action's sake), or to express themselves in ways that ignored the conditions imposed by their mode of communication. They demanded what students everywhere have always demanded--to be taken seriously. [CHAPTER 8, P. 170]
Chapter 9, "Suggested Procedures for Confirming Authorship," examines the factors in study procedures that appear to influence the likelihood of facilitated communication users' success or failure on authorship tests. The authors present a rating system for evaluating research studies and then demonstrate the rating system by applying it to six of the recent, major studies of authorship. The chapter concludes with advice to researchers who may be planning to evaluate facilitated communication. (from the introduction)
In this chapter, we have presented evidence that controlled studies, which are designed to measure authentic communication produced by facilitated communication speakers, may be highly sensitive to their protocol conditions. Stated in a slightly different way, people who use facilitated communication may be more sensitive to test conditions than are individuals using other methods of communication. The question then becomes: What are the procedural conditions that are optimal for the measurement of authentic interaction between the facilitator and the facilitated communication user? For now, the 14 conditions outlined in this chapter, when present in a study, appear to represent the best practices for protocol development.
Further, certain conditions may be far more important than others, even to the point of determining the presence or absence of authentic communication. It appears, for example, that extensive experience with the method, practice of the "test" conditions, and conducting the study in the participant's natural environment are especially crucial factors. At the same time, we are mindful that any single person's experience in such testing must be treated as its own case study. Particular people may be more sensitive to some conditions than to others: One person may have extreme anxiety with any testing and may benefit especially from practice; another person might be stymied by word-retrieval tasks. For this reason, any future research must always be based on the assumption that the research design, including the procedural conditions, may be as crucial to the results as are the particular skills of the individual participant.
This chapter focused directly on quantitative procedures for confirming authorship in research studies, which may not be suited for personal confirmations of authorship. For individual confirmation of communication, we recommend a portfolio approach. [CHAPTER 9, P. 186]
In Chapter 10, "Reframing the Issue: Presuming Competence," we examine the politics of disability research and the meaning of mental retardation. The controversy over facilitation has been fueled by other questions concerning the nature and meaning of disability, prevailing assumptions about competence and incompetence and their measurement, conflicting notions about science and research, and the context of popular culture. We address these issues through an examination of one of the most controversial aspects of facilitated communication-allegations of abuse made via facilitation. At the same time, this chapter includes an analysis of contextual issues related to the discourse about facilitated communication, drawing parallels between this discourse and other historical and contemporary debates in the fields of education and social science. (from the introduction)
Often, prevailing thoughts and interpretations are treated as truths handed down by impartial oracles. The most common assumption, or "truth," in disability research has to do with the idea of competence and incompetence. The prevailing cultural and professional theory about people with developmental disabilities is that they have a deficit and that the role of science is to measure and understand the deficit, and even to certify who is and is not competent, who is and is not mentally retarded. Presumptions of incompetence in people labeled developmentally disabled, autistic, mentally retarded, and so on are so often repeated by researchers, diagnosticians, and practitioners in tests and classification manuals that their mere restatement becomes a kind of evidence of their truth. Yet we must question these as we would any claims of truth, preferring instead a condition of uncertainty, fueled by competing discourses, competing truths. In Giroux's (1992) terms, we align ourselves with "refusals of all 'natural laws' and transcendental claims that by definition attempt to 'escape' from any type of historical and normative grounding" (p. 44); Fine (1992) refers to this as "denaturalizing" the "natural." And so, too, did the courts in the cases of Luz P., Kochmeister, and JK, for they were forced by circumstance to consider whether particular people, irrespective of label, were communicating their ideas in particular situations with a particular method. [CHAPTER 10, PP. 196-7]
In Chapter 11, we conclude with an accounting, based on our reading of the research, of what can and what should not be said about facilitated communication. (from the introduction)
It is becoming clear that the inability of some experiments to measure authentic facilitated communication has been at least as much a result of the testing/experimental procedures used as it has been the failure of the method or the person(s) using the method. As discussed in Chapters 4 and 9, certain procedures in a controlled experiment seem to improve the likelihood of facilitated communication users producing authentic communication.
Experiments are frequently viewed as valid only as they are able to control for threats to internal and external validity. In the physical and biological sciences, the pure experimental research method is clearly believed to be the best approach for determining the causal effect of an isolated, single variable on its dependent variable, because of the potential for a high degree of control of extraneous conditions (Babbie, 1995). Even in the social sciences the traditional experiment (also known as the pure or true experiment), and to a lesser degree its sibling designs of preexperiments and quasi-experiments, can be superior methods to test hypotheses (Campbell & Stanley, 1963; Cook & Campbell, 1979). Unfortunately, for studying human behavior in educational settings (as in facilitated communication authorship studies), the pure experimental method can, at times, disrupt the intended target phenomenon, since control is most easily achieved with research on humans in restrictive and artificial settings (Babbie, 1995; Cook & Campbell, 1979). The problem is that humans react to these controlled conditions differently from the way they react to naturally occurring conditions; thus the degree to which results can be generalized to the greater world is severely limited (Cook & Campbell, 1979).
An "experiment" is much better designed to be explanatory rather than descriptive (Babbie, 1995). This may be why early experiments in facilitated communication attempted to "explain" whether facilitated communication worked or not, not to empirically "describe" why people were unable to validate their communication.
The systematic observation of people using facilitated communication in their natural environment, as well as discussions with users of the facilitated communication method and their facilitators, can assist in the scientific endeavor to measure the authenticity of facilitated communication by systematically "describing" the phenomenon. Using natural, but empirical, observations to develop experiments, as witnessed by several of the studies presented in this book and elsewhere, has provided research designs that can boast of scientific validity as well as generalizability. Dialogue between these two research methods has yielded experimental designs that have, at least in part, begun to overcome the experimental pitfall of artificial conditions and therefore a lack of generalizability, while maintaining the optimal experimental controls necessary for valid findings (i.e., they meet prevailing standards of validity). [CHAPTER 11, PP. 202-3]
Facilitated Communication Digest, V. 5 No. 3.