RECENT GRADUATES FROM THE INSTITUTE

Recent Graduates' Theses Abstracts


Tara Abraham, "'Microscopic Cybernetics:' Mathematical Logic, Automata Theory, and the Formalisation of Biological Phenomena, 1936-1970," 2000 (Max Planck Institute, Berlin)

ABSTRACT

This thesis highlights the intellectual origins of theoretical studies of biological phenomena based on automata theory and mathematical logic. The study spans the period from the groundbreaking work of mathematician Alan Turing (1912-1954), who in 1936 developed a "logical machine" for the process of mathematical computation, through to the work of Stuart Kauffman (b. 1939), who, circa 1970, developed a model of genetic regulation in cells using mathematical logic. It is argued that automata theory and methods of logic offered a way of simplifying complex biological phenomena into logical, discrete terms, while at the same time characterizing these processes as complex, dynamic, and interactive.
Automata theory arose out of the cybernetics movement, which involved making analogies between organisms and machines. "Automata" in this sense were abstract, logical systems, or "theoretical machines", with a finite size, a finite number of internal states, and certain specified inputs and outputs. The functioning of an automaton was governed by the rules of mathematical logic, embodied in an "algorithm" or program. The algorithm was rigorous, exhaustive, and unambiguous, and a central point in automata theory was that simple rules could lead to complex behaviour. This thesis demonstrates that the "algorithm" concept proved to be a powerful conceptual tool for many scientists who developed models of complex biological phenomena. It will be shown that these applications of automata theory to biological phenomena were consistent with the philosophical assumptions of many theoretical biologists of the period.
Chapter One examines Alan Turing's conception of the Turing machine, highlighting the role of mathematical logic in its functioning. Turing presented "computation" as a process that could be carried out in a finite number of logically defined, discrete steps. Chapter Two illustrates that a community of mathematical biologists in Chicago, who formalized biological phenomena, provided an important intellectual space for Warren McCulloch and Walter Pitts, who in 1943 developed a model of neural activity based on the principles of Boolean logic. Chapter Three examines the work of John von Neumann. Influenced by Turing's work and the McCulloch-Pitts model, von Neumann developed a general theory of automata that addressed, in logical terms, the complexity of biological self-reproduction. Chapter Four focuses on the work of Michael Apter, who made strong arguments for the value of automata theory in modeling the process of biological development. Chapter Five further explores the connections between automata theory and theoretical studies of biological phenomena, highlighting the work of Stuart Kauffman, who, in collaboration with Warren McCulloch, developed a logical model of genetic networks. (Advisor: Mary P. Winsor)


Stephen Bocking, "Environmental Concerns and Ecological Research in Great Britain and the United States," 1992 (Trent U, ON)

ABSTRACT

The last 40 years have witnessed both increasing popular concern for the environment and an expansion of the discipline of ecology. The interaction of these phenomena has helped shape the development of ecological research. In the 1940s British ecologists, led by Arthur Tansley, argued that an institution for research and management of nature reserves was necessary to protect natural flora and fauna. This, and ecologists' demands for scientific autonomy, led in 1949 to the Nature Conservancy, which dominated British ecology for the next two decades. In the 1950s J. D. Ovington at the Merlewood station developed insights into ecosystem productivity and the conservation of forested reserves and the English Highlands. Beginning in 1954, ecologists at the Oak Ridge National Laboratory, led by Stanley Auerbach, used radioactive tracers and computer models to study the movement of materials within ecosystems. While relevant to radioactive contamination, this work was also of theoretical interest to ecologists. Although these ecologists enjoyed considerable autonomy, institutional constraints obliged them to develop quantitative research based on physicochemical phenomena. After 1970 many Oak Ridge ecologists focused on responding to demands for environmental research, while the International Biological Program made possible more ecosystem research. Contrasting demands of ecosystem theory and environmental management led to a divergence of research devoted to these two objectives. In 1963 F. H. Bormann and Gene Likens began the Hubbard Brook Ecosystem Study. By monitoring the chemistry of precipitation entering and streamwater leaving undisturbed and clearcut watersheds, they determined the import and export of nutrients by forest ecosystems. Such studies contributed to an understanding of the functioning of intact forests, and their recovery from cutting and other stresses. Bormann and Likens drew methods and corroborating data from forestry research, and argued that their conclusions were relevant to forestry management. These episodes reflected interest in quantitative, distinctive experimental techniques for ecosystem study. While environmental concerns contributed opportunities and techniques for research, these concerns were often subordinate to development of a theoretical understanding of ecosystems. From the late 1960s demands for specific predictions of human impacts on the environment led to greater fragmentation of the discipline of ecology. (Advisor: Mary P. Winsor)


Andrew Ede, "Colloid Chemistry in North America 1900-1935: the neglected dimension," 1993 (Johns Hopkins / Calgary)

ABSTRACT

This thesis investigates the rise and decline of research interest in colloid chemistry in North America. After a period of high status, colloid research became marginalized despite the efforts of some of North America's most important scientists to promote the study of colloids and organize a national research centre.
In 1861 Thomas Graham classified two types of material by their degree of diffusion through a parchment paper membrane. Material that passed through easily he called 'crystalloids,' and that which passed through slowly or not at all, he named 'colloids.' Graham's work went unexplored until 1900, when a debate developed about the existence of atoms and molecules. Colloids were used to show the kinetic nature of atoms and molecules and demonstrate their existence. In the process, interest was revived in studying colloids.
By 1925 colloids had become the focus of intense research. They offered North American scientists the opportunity, despite poor funding and lack of equipment, to contribute to scientific research on a par with European scientists. Besides fundamental research such as that into the existence of atoms, colloids were increasingly important for industry. During World War I, scientists and politicians in North America became concerned about improving science as part of national defense. Colloid chemistry had been important to the war effort, and many chemists had seen the benefits of large-scale research and national organization. Following the war, colloid chemists began to organize and planned to establish a national research laboratory. They wanted to place colloid chemistry on an equal footing with physical and organic chemistry.
Despite scientific successes and the utility of colloids, by 1930 the status of colloids was being questioned. Colloid research became associated with outmoded methodology and seemed irrelevant to fundamental scientific questions. Colloid chemistry was also tainted with scandal after the best known American colloid chemist, Wilder Bancroft, presented the American Academy of Sciences with a theory of insanity based on the degree of coagulation of brain cells. His theory was publicly denounced by the AMA as quackery. Colloids eventually became a minor field in physical chemistry, leaving unfulfilled the great expectations of the early period. (Advisor: Trevor Levere)


Richard England, "Aubrey Moore and the Anglo-Catholic Assimilation of Science at Oxford," 1997 (St. Michael's College, VT; Franklin and Marshall College, PA)

ABSTRACT

Aubrey Moore (1848-1890) "thoroughly accepted" Darwinism, said his Marxist friend, the biologist E. Ray Lankester. Moore was a liberal Anglo-Catholic priest, an Oxford don, and a contributor to Lux Mundi (1889). His assimilation of science into orthodox apologetics is studied here in its Oxford context. Moore's epistemology can be seen to owe something to the more cautious attitude to science expressed by his Oxford Movement antecedents. Moore found support for the doctrine of divine immanence both in the Church fathers and in the views of the British Idealist T. H. Green. He gained insights into evolution from his neo-Darwinist friend Edward Poulton. Through the 1880s Moore supported the side of science in debates over Genesis and geology, vivisection, and biblical criticism. His acceptance of Darwinism was part of his larger recasting of the history of ideas. I discuss Moore's theory of the evolution of morality, his theodicy, and his unique appreciation of the "wider teleology" of Darwinian evolution. His views were well received, especially by Darwin's disciple, George Romanes. I conclude with reflections on the significance of this contextualized history of Aubrey Moore for the historiography of Victorian science and religion. (Advisor: Mary P. Winsor)


Marianne Fedunkiw, "Dollars and Change: The Effect of Rockefeller Foundation Funding on Canadian Medical Education at the University of Toronto, McGill University, and Dalhousie University," 2000 (Oxford U)

ABSTRACT

The Rockefeller Foundation gift of five million dollars in 1920 had a lasting effect on the scientization of medical education in Canada. By examining three medical schools - the University of Toronto, McGill University, and Dalhousie University - this work will show the differences and similarities in the way in which the individual grants were received, used to change curriculum, and used to bring in other government and private funding. Central to this is the adoption of the full-time system of clinical teaching, and this dissertation will set the efforts to put the full-time system into place in Canada within the context of full-time clinical teaching in North America, as funded by the Rockefeller Foundation and the General Education Board. Furthermore, this dissertation examines the resistance, particularly in Toronto, to the full-time system and the criticisms of private donors, including the Rockefellers and the Eatons, who were seen to be dictating curriculum and educational policy. In addition to the role the funding played in introducing full-time teaching, the Rockefeller money also led to increased public and private support for medical education, helped to define the medical profession, and contributed to making the emerging medical research ideal a reality. (Advisor: Pauline Mazumdar)


Fr. Martin Hilbert, "Pierre Duhem and Neo-Thomist Interpretations of Physical Science," 2000

ABSTRACT

In 1879, Pope Leo XIII demanded that Catholic philosophers and theologians adopt scholastic philosophy and especially Thomism in their studies and teaching. Although not primarily about science, the encyclical Aeterni Patris expressed the hope that scholastic philosophy would be a means to understand and even to further science. The thesis examines how neo-Thomists in France and Belgium tried to understand contemporary physical science from the time of the papal mandate to the outbreak of the First World War. These geographical and temporal limits coincide with the immediate sphere of influence of Pierre Duhem (1861-1916), the well-known Catholic physicist, philosopher of science, and historian of science. After putting Aeterni Patris into historical context and focusing both on its own agenda with regard to the philosophy of science and on the challenges that it faced in a scientistic climate, the thesis identifies the major centres of neo-Thomism in the two countries and shows that Duhem was historically connected to all of them. Neo-Thomists were especially determined to re-establish hylomorphism by arguing that mechanical theories of the universe were deficient. Duhem too critiqued mechanism, but his criticism and agenda differed from those of the self-proclaimed neo-Thomists: he argued that physical theory is not a metaphysical explanation. The thesis first examines the relation between physics and metaphysics through case studies of contemporary debates into which Duhem also entered: human freedom, creation in time, and the proof of the existence of God as Prime Mover. A more theoretical look at the relation shows both that Duhem developed some of his ideas in the philosophy of science in response to neo-Thomist criticism and that his thought in turn influenced some leading figures in the movement. It is argued that Jacques Maritain's Distinguer pour unir depends heavily albeit unconsciously on Duhem's work. This shows that Duhem's thought is compatible with one influential school of neo-Thomism and even contributed to its development. The thesis concludes by making the necessary distinctions to counter arguments that Duhem was hostile to the neo-Thomist enterprise on account of his Pascalian inspiration, his friendship with Maurice Blondel, and his panning of Thomas in the Système du monde. (Advisor: Trevor Levere)


Katherine Hill, "The Evolution of Concepts of the Continuum in Early Modern British Mathematics," 1996 (Sydney, U of Sydney)

ABSTRACT

The traditional Greek distinction between number as discrete collections of units and continuous magnitude was challenged in the early modern period in several ways. Some mathematicians dismissed the classical conception, declaring numbers to be continuous, for both practical and theoretical reasons. Practical activities made increasing demands on mathematics, and encouraged the use of broader number concepts. Theoretical considerations, such as a new form of symbolism that could denote both unknown magnitudes and numbers, indicated that numbers and magnitudes were, in a sense, interchangeable. This association of numbers with magnitudes contributed to the notion that numbers could also be treated as though they were continuous.
This broadening of acceptable numerical results and the association of numbers with magnitudes had a significant impact on early modern mathematical development. It allowed numerical solutions to be offered in areas such as navigation, surveying, and commerce. The expanding power of practical mathematics helped to develop public interest in mathematics; more students sought instruction in mathematics and more patrons were willing to support mathematicians. The broadened concept of number also benefitted theoretical mathematics in the theory of equations, developments in the binomial theorem, and work with sequences and series. Moreover, this broadening of number concepts also led to attempts to supply new foundations for mathematical practice.
This thesis explores the blurring of the distinction between number and magnitude and the development of new number concepts in early modern England and Scotland, for a specific period, from the introduction of algebraic techniques to the first attempts to formulate justifications for these developments. Examining three different groups of mathematicians allows us to explore a wide range of mathematical practice. First, we will investigate Robert Recorde (1510-1558), William Oughtred (1575-1660), and Thomas Harriot (1560-1621), whose work centred on the new algebraic methods and the theory of equations. Second, we will examine the work of John Napier (1550-1617) and Henry Briggs (1561-1630), who were influenced by an interest in practical computation techniques. Finally, we will explore the work of Isaac Barrow (1630-1677) and John Wallis (1616-1703), the scholars who first attempted to supply a foundation for the new numbers. (Advisor: Craig Fraser)


Jennifer Hubbard, "An Independent Progress: the development of marine biology on the Atlantic coast of Canada," 1993 (Ryerson U. / U of Toronto)

ABSTRACT

Between 1898 and 1939, the Biological Board of Canada became an important exponent of fisheries biology and marine ecology. As the organization grew to become the Fisheries Research Board, it put Canada in the forefront of international fisheries research. How and why this was so is not made clear in the existing literature. This development also had important social ramifications. First, fisheries biology contributed to Canada's emergence as a science-based industrial nation, allowing Canada to develop independently its fisheries policies. Second, marine biological stations in Canada and elsewhere played an important role in professionalizing biology, providing centres of research and employment for biologists outside of the universities. The present study, focusing on the Canadian Atlantic marine biological stations, examines the ties between Canadian academic and industry-related marine research between the two World Wars. When E. E. Prince founded the first Canadian marine biological station in 1898, biology was still in the process of being defined and consolidated, and marine biological stations were critical to its emergence as a mature science. Since the German research ideal arrived late at Canadian universities, most Canadian university biologists gained their first opportunity for ongoing experimental research in Canada's marine stations. The marine stations also became sites for advanced-level training, designed to turn out professionals. This study examines the Biological Board's association with Canadian universities through shared students, expertise, and teaching and research facilities, thus providing a case study in the professionalization of biology; it also illustrates how professional science came to terms with government financing and control in Canada. The research interests of academic biologists did not always dovetail with the desires of the Dominion government, which funded the marine biological stations. By the 1920s, the Biological Board had specialized in fisheries biology, with its practical implications for fish management; its scientists also became involved in showing fishermen how to improve their techniques. This study examines the extent to which this new specialization arose out of the biologists' own research interests, and assesses the importance of external pressures coming from government. (Advisor: Trevor Levere)


Jane Jenkins, "Matter and Vacuum in Robert Boyle's Natural Philosophy," 1996 (U. of Calgary)

ABSTRACT

This dissertation examines the concept of vacuum in seventeenth-century natural philosophy and in particular, looks at how Robert Boyle dealt with the issues surrounding the concept of vacuum in his theory of matter. Traditional studies of seventeenth-century natural philosophy agreed that a series of experiments with the mercury barometer and later with Boyle's air-pump unequivocally established the existence of the void, thus refuting Aristotle's contention that nature abhors a vacuum. Recent scholarship has shown that the issues were far more complex than this positivist account reveals.
I examine a dispute between Boyle and Henry More, a prominent Cambridge Platonist, over the proper interpretation of Boyle's early pneumatical experiments, and demonstrate, first, that arguments about the existence of the void were not settled on experimental and empirical grounds alone, but involved more general theological and philosophical considerations. Second, I show that Boyle was able to get around traditional objections to the void (the absurdity of claiming the existence of nothing) by reconceptualizing the problem, thereby allowing natural philosophers to incorporate a notion of void into scientific reasoning while keeping it theologically benign.
Boyle used a concept of void heuristically in his scientific reasoning, while at the same time not according it any ontological status. He accomplished this shift by suggesting that vacuum could be conceptualized in concrete terms as the privation of a characteristic in a subject with a natural disposition for that characteristic. In this way vacuities in the world were analogous to blindness in a person. Boyle avoided the persistent philosophical problems surrounding the concept of void by referring to it as the absence of matter rather than the presence of a new entity.
Rather than considering this reconceptualization as Boyle's original innovation, marking a dynamic break with traditional thinking, I found evidence that the Renaissance author, Johann Alsted, had also presented the concept of void-as-privation. This lends weight to my claim that innovative concepts in seventeenth-century natural philosophy, developed as part of the successful challenge to traditional Aristotelianism, reflected ideas already formulated by Renaissance thinkers. (Advisor: Jed Buchwald)


Edward P. Jurkowitz, "Interpreting Superconductivity: The Application of Quantum Ideas in the Construction of Theories of Conductivity, 1930-1962," 1996

ABSTRACT

Focusing on the theoretical conceptualization of the phenomena of superconductivity and superfluidity, this thesis provides a description of how physicists' work in these fields was influenced by their different training in, and understandings of, quantum physics. Also described are the roles which classical conceptions, from hydrodynamics and electrodynamics, and analogies to quantum systems played in physicists' thinking about the quantum phenomena of superconductivity and superfluidity. A description of the characteristic features of the different approaches to developing and understanding quantum mechanics acts as a framework. The works of two of the most prominent theorists in the field, Lev Landau and Fritz London, are examined in detail in order to exhibit how subtle differences in physicists' understanding of quantum mechanics could influence their work on superconductivity and superfluidity. The structure of the most widely accepted interpretation of quantum mechanics (the 'Copenhagen interpretation') played a key role. Physicists' different resolutions of wave-particle duality, and their thinking about the phase of the wavefunction and 'gauge invariance', were important for their work. The development of a microscopic theory, the 'Bardeen-Cooper-Schrieffer' theory, is discussed and related to the issues developed earlier in the thesis. The close-knit character of the community of theoretical physicists in the 1930s and 1940s provided for the use of analogies from quantum field theory in developing microscopic theories of superconductivity in the 1950s. The differences among physicists' notions of 'coherence' are examined. Different views on the essence of quantum mechanics and on what constituted a valid argument and an acceptable use of classical pictures divided physicists of this period, and were reflected in discussions of superconductivity. The lines of division and the import of differences in outlook for work on superconductivity and superfluidity are characterized. (Advisor: Jed Buchwald)


Barbara Keyser, "Victorian Chromatics," 1992

ABSTRACT

Debates about the nature of colour transcended the boundaries between the sciences and arts of Victorian England. While 'Grammars' of ornament and colour were a prominent feature of the Victorian design reform movement, their seemingly pedantic rules for colouring have repelled most art historians. The precepts are meaningless until they are placed in the context of contemporary Victorian sciences: not only optics, but also chemistry, physiology, and comparative anatomy. The design reformers invoked arguments from sciences tinged with Romanticism and applied them to arts tinged with Utilitarianism. Foreign to current aesthetics, this combination of rationales is the key to understanding Victorian 'laws' of colouring.
Furthermore, the epistemological role of colour was a major source of tension between German idealism, for which colour signified the interaction of the mind with the world, and indigenous British pragmatism. These two philosophical frameworks collided and then blended in British art theory between 1830 and 1850. The nature of colour was debated vehemently because entire ideologies were built around it. Victorian Chromatics traces their development from the physicist-philosopher J. H. Lambert in the late eighteenth century through Goethe, Sir David Brewster, and the dye chemist Michel Chevreul to the historian and philosopher of science William Whewell and the painter and art scholar Sir Charles Eastlake, who translated Goethe's Farbenlehre into English. It concludes with the botanist and designer Christopher Dresser, who created strikingly 'modern' abstract designs in the 1870s.
The history of colour in the nineteenth century not only reveals the complex interaction of sciences and sensibilities in the period, but also answers a specific question in the history of nineteenth-century art. While art theorists proposed abstract colour harmonics in the eighteenth century, representational colour persisted in fine art for a century. However, in the 1850s arbitrary colour was rationalized and practiced by Owen Jones and other design reformers at the London Schools of Design and used by decorative painters and illustrators who were also easel painters. Thus the arguments from science to support theories of ornament played an important role in the transition from academicism to abstraction in nineteenth-century art. (Advisor: Trevor Levere)


Kenton Kroker, "From Reflex to Rhythm: Sleep, Dreaming, and the Discovery of Rapid Eye Movement, 1870-1960," 2000

ABSTRACT

Rapid eye movement (REM) is a phenomenon of sleep easily visible to the naked eye of a careful observer. Yet it was not discovered until 1953. Why did it take so long for this phenomenon to come under scientific scrutiny? From 1870 to 1960, disciplinary, institutional and instrumental factors transformed the cognitive basis of sleep research. Once a passive and reflexive response to fatigue, sleep emerged as a self-regulating biological rhythm. As an objective sign of dreaming, REM encapsulated this concept of "activated" sleep. Modern sleep research emerged from a number of nineteenth-century clinical and physiological problems, including insomnia, hypnotism and fatigue. Around 1900, Sigmund Freud, Henri Bergson, and Edouard Claparède introduced a biological perspective, describing sleep and dreaming as functions, not effects. Henri Piéron used the method of "enforced wakefulness" to develop a concept of sleep as both fatigue and rhythm. During the 1920s, Nathaniel Kleitman adopted a similar method in his research at the University of Chicago, where REM was later discovered. Medical and technological developments during the 1920s and 1930s had a great impact on the study of sleep. Epidemics of encephalitis lethargica, or "sleeping sickness," turned sleep into a highly visible object of neurophysiological research. Constantin von Economo linked its symptoms to a damaged "sleep centre" in the brain, reinforcing the idea that sleep was an active function. Vacuum-tube amplification created the electroencephalogram, which inscribed sleep as brain activity. This period also witnessed the creation of departments of "neuropsychiatry" across the U.S. Such interdisciplinary and holistic approaches to biomedical research bound together psychological and physiological concepts with clinical practice and instrumental performances. The discovery of "sleep stages" at the Tuxedo Park laboratories of Alfred Lee Loomis in 1937 - a crucial event in the story of REM - was made possible only through such interdisciplinary, technologically-driven efforts. Cognitive goals of American neurophysiologists and neuropsychologists were realigned from the 1930s into the immediate postwar period. Freudian psychoanalysis turned dreaming into a central problem for sleep researchers, who hoped to unify body and mind through an analysis of the rhythms of REM. (Advisor: Pauline Mazumdar)


Andris Krumins, "Symmetry, Conservation Laws and Theoretical Particle Physics (1918-1979)," 1999

ABSTRACT

In this work, we trace the role of symmetry throughout the history of theoretical particle physics, paying particular attention to the role of group theory, the formal mathematics of symmetry. After an analysis of the role of conservation laws and invariance in the theory of general relativity, we move on to Weyl's gauge theory of 1918, which was developed within the context of general relativity as an attempt to unify gravitation and electromagnetism. Weyl was trying to exploit an invariance of scale, and although his theory was experimentally refuted, it provided a formulation of the conservation of charge. After the advent of quantum mechanics, gauge theory was reinterpreted by London as an invariance of the wave-function. Weyl and Wigner studied group theory in the context of quantum mechanics, but the broadness of its application had yet to be appreciated. Symmetry was soon exploited in the nuclear interactions, however, and we examine the events leading to the discovery of SU(2) of isotopic spin. We analyze how the discovery of strangeness was linked to the generalization of SU(2) to SU(3), and also how it led to a differentiation between the strong interactions, which conserve isotopic spin and strangeness, and the weak interactions, which violate these conservation laws, along with the conservation of parity. Yang and Mills were impressed with gauge invariance, and in 1954, they took the bold step of imposing it upon the Lagrangian of the strong interactions, forcing the introduction of three new gauge fields. There was a problem, however, because although the short range of the strong interactions implied that these gauge bosons should be massive, they needed to be massless in order to preserve gauge invariance. In addition, efforts were made to extend Yang-Mills theory to the weak interactions, but they also faced the same zero-mass problem. This problem was finally solved in 1967, when Weinberg and Salam showed how gauge boson masses could be generated using spontaneous symmetry breaking. They based a unification of the electromagnetic and weak interactions upon local gauge invariance, and this principle was soon applied to the strong interactions as well. (Advisor: Brian Baigrie)


Andre LeBlanc, "On Hypnosis, Simulation, and Faith: Post-Hypnotic Suggestion in France, 1884-1896," 2000

ABSTRACT

The first half of this dissertation demonstrates how the concept of dissociation originated as a solution to the problem of post-hypnotic suggestion. The second half continues with investigations into hypnosis and simulation and concludes with an analogy between hypnosis and religion. In 1884, the philosopher Paul Janet introduced the problem of post-hypnotic suggestion. Give a hypnotic subject the post-hypnotic command to return in 13 days. Awake, the subject remembers nothing, yet he nonetheless fulfills the command to return. The problem then is this: how does the subject count 13 days without knowing it? The philosopher and psychologist Pierre Janet (Paul's nephew) proposed the concept of dissociation as a solution in 1886; this solution is discussed in the second chapter. Pierre Janet argued that a second consciousness kept track of time outside the awareness of the subject's main consciousness. Chapter 3 presents an alternative solution to the problem: the physician Hippolyte Bernheim and the philosopher Joseph Delboeuf argued in 1886 that subjects occasionally drifted into a hypnotic state in which they were reminded of the suggestion. Chapter 4 describes Janet's attempts to argue against this explanation. The fifth chapter demonstrates a logical flaw in the concept of dissociation and introduces the idea that hypnosis may well be a form of pretending. The theme of pretending is carried on in chapters 6 and 7 in relation to the impossibility of empirically confirming or refuting simulation in hypnosis. The final two chapters build on Delboeuf's work using an analogy between hypnosis and religion. Drawing upon Pascal, it is argued that, like hypnosis, religious belief may well contain an element of pretending in the way one's faith is produced and maintained. Chapter 8 relates hypnosis to what Pascal labeled "discourse concerning the machine" (Infini-rien): the notion that custom and habit, by a machine-like process, shape human thought and belief. Chapter 9 discusses Pascal's analysis of the differences between superstition and religion and applies it to our understanding of hypnosis. (Advisor: Ian Hacking)


Daryn LeHoux, "Parapegmata, or, Astrology, Weather, and Calendars in the Ancient World," 2000

ABSTRACT

I examine a set of texts and instruments, called parapegmata, which were used in the classical world for tracking cyclical phenomena such as stellar phases, weather, hebdomadal cycles, lunar cycles, and more. I argue that these texts are primarily astrological rather than astronomical or calendrical. I trace the possible connection between parapegmata and calendrical cycles in Greece, Rome, and Babylon, but I maintain a sharp distinction between calendars and parapegmata: the parapegmata were not used for chronological purposes, but rather for the regulation of various activities, most prominently agriculture. Different types of parapegmata were used by the Greeks and Romans for tracking stellar and lunar phenomena, and these distinct phenomena were used by them as signs for the timing of various activities, partly in an attempt to align their actions with sympathetic forces in the Cosmos. In order to understand how the parapegmata were used, I devote a chapter to unraveling the modes of predictive signification in the parapegmata, showing how these texts and instruments eliminated the need for astronomical observation. I show that some similar astronomical phenomena were tracked by the Babylonians and Egyptians for similar purposes, although the parallels we find in these cultures show a much closer connection to other, more diverse types of omina than the classical texts do. The work includes a descriptive catalogue of all the parapegmata known to me. (Advisor: Alex Jones)


Alison Li, "J.B. Collip and the Making of Medical Research in Canada," 1993 (York U.)

ABSTRACT

Medical research became an institutionalized activity in Canada during the first half of the twentieth century. In the early decades, it was the pursuit of an exceptional few; by mid-century, it emerged as a systematic, large-scale enterprise employing teams of professional scientists. This thesis is an examination of the scientific career of J. B. Collip, one of the pre-eminent medical researchers of the era. Collip, a biochemist, made important contributions to the preparation of active extracts of insulin, the parathyroid hormone, and adrenocorticotropic hormone. His scientific career not only reflected, but influenced, many of the transformations that occurred in the medical research community as a whole.
Collip was a leading figure in Canadian medical research during the period in which original investigation became accepted as an integral part of academic life. Key structures such as systematic funding, graduate science education, scientific societies, and research journals were established. This thesis traces Collip's life through his undergraduate and graduate education in biochemistry at the University of Toronto (1908-16), his early work as a lecturer at the University of Alberta, and his contribution to the discovery of insulin (1921-22). Its primary focus is Collip's successful career in endocrinology after the insulin discovery, first at the University of Alberta (1915-1927) and then at McGill University (1927-1947). Collip faced four principal challenges: (1) to balance laboratory and clinical interests in determining his research programme, (2) to raise funds for his work in an era before large-scale government support of medical research, (3) to respond to the opportunities and demands posed by the commercial development of medical products, and (4) to adapt the laboratory group to the changing scale and character of experimental work.
This thesis demonstrates how Collip's career was shaped through meeting these challenges. It concludes with an examination of Collip's work as head of the National Research Council's Associate Committee of Medical Research and later the Division of Medical Research (1938-1957). His own experiences are reflected in the policies and practices which served to establish systematic governmental funding for medical research. (Advisor: Michael Bliss)


Wilfrid G. Lockett, "Jacob Leupold and the Theatrum Machinarum," 1994

ABSTRACT

The present study is a review of the Theatrum Machinarum of Jacob Leupold (1674-1727), with emphasis on the hydraulic engineering aspects of the work. The Introduction includes a short account of Leupold's career as an instrument maker and engineer, set against the background of early eighteenth-century Saxony, and follows with a summarized overview of the Theatrum as a whole. The second chapter examines Leupold's application of classical statics to what he calls the 'internal forces in mechanics'--that is, the transmission of force by the levers, pulleys, gearing, etc. which may be incorporated in a machine. The chapter closes with a review of Leupold's understanding of friction and structural analysis. Leupold considers the use of available sources of energy (animate power, wind, fire and water) to do useful work as 'the external forces in mechanics'. The first three of these sources are examined in Chapter 3. The prevailing understanding of hydraulics (hydrostatic pressure and the measurement of water flow) is reviewed in Chapter 4, as an introduction to the account in Chapter 5 of Leupold's comprehensive presentation of vertical and horizontal water-wheels. This ends with a review of Leupold's 'case study' of an actual mine-pumping installation. Chapter 6 examines the various aspects of the planning of hydraulic developments (project layout, hydrology, site exploration) as revealed in Volume II and partly in Volume IX of the Theatrum. The civil engineering of hydraulic works, including pipes and channels, river training, impounding structures, and navigation canals, is discussed in Chapter 7. Volumes III and IV of the Theatrum are devoted to the raising of water by a wide variety of methods ranging from the classical noria and Archimedean screw to conventional reciprocating pumps, rotary pumps and a primitive centrifugal pump. These are reviewed in Chapter 8. Chapter 9 comments on the study as a whole, criticising the Theatrum and particularly its illustrations. Also discussed are: the evolution of the term Engineer; the use of simple arithmetic versus algebra in the mathematical analysis of machines; the concept of 'efficiency'; and empiricism and technological experimentation. The chapter closes with some speculative comments on Leupold's possible influence. (Advisor: Bert Hall)


Dave McGee, "Floating Bodies, Naval Science: science, design, and the Captain controversy, 1860-1870," 1994 (Max Planck Inst, Berlin)

ABSTRACT

Shortly after midnight on September 7, 1870, H.M.S. Captain capsized and sank in a storm in the Bay of Biscay, bringing an end to a decade of controversy in the British naval community over the design of ironclad warships. On one side of the conflict was the naval officer and inventor of the turret, Cowper Coles, who had designed the Captain as a fully-masted sailing warship, having a main deck only six feet out of the water. Leading the opposition was Edward James Reed, a proponent of a scientific approach to naval architecture, who opposed Coles' ideas as the dangerous plans of an untrained amateur. Reed would appear to have been correct, and the sinking of the Captain has usually been interpreted as a victory of the "professionals" over the "amateurs", but this study argues that the Captain disaster is best understood as a collapse of the social process of design.
The elements of a social and historical theory of design are developed in Chapter One, where the strengths and weaknesses of ship design techniques, as well as the social organization of British warship design, are identified. A major problem was that the Admiralty's design staff was too small to be able to cope with the technical demands placed upon it. This meant seeking designs from outsiders, but competitive proposals quickly led to violent controversies, which disrupted the design process and led to the collapse of the Admiralty's design organization by 1848.
During the 1850s Surveyor Sir Baldwin Walker sought to end the chaos by again closing the design path to all outsiders, but this only restarted the cycle of conflict. Walker's staff was too small to be able to cope with the introduction of ironclads after 1858. By 1861, the Admiralty was forced to seek outside designs from both Captain Coles and Edward James Reed. These two men soon came into conflict over the merits of their competing proposals, especially after Reed became Chief Constructor of the Navy in 1863. The controversy then spread to the rest of the design community as Coles publicly campaigned to force the Admiralty to build low-freeboard turret ships, succeeding only at the cost of fracturing relations in the design community. These fractures ensured that almost every step in the building of the Captain violated established design procedures. As a result the ship was completed two feet lower in the water than expected, her stability seriously compromised. Steps could still have been taken to prevent disaster, but continued controversy so polarized relations inside the Admiralty that nothing was done. Instead, the ship was sent to sea and 473 lives were lost. (Advisor: Janis Langins)


Gordon McOuat, "Species, Names, and Things: from Darwin to the experimentalists," 1993 (King's College, Halifax)

ABSTRACT

Beginning with the achievement of the "Biological Species Concept" (B.S.C.) following the 1930s, we have come to believe that the history of species is one of the development of true definitions and the growth of the "populational" view against the malevolent influence of "nominalism," "essentialism" and "typology." In our histories, the post-Darwinian period has been effectively sealed off as, at best, a confused transitional period where no clear definition of species emerged.
If we open up the post-Darwinian period, we find it was indeed marked by a vigorous debate over the meaning of species. Yet, it is not what we expected. The debate about species was not so much founded upon strict "definitions" or "concepts", "essentialism" or "nominalism". Rather, nineteenth century debates over species were mainly absorbed in questions of authority, of who could properly designate something as a species. Species, as the "ground for all further biological inductions", gave the possessors of species great power.
Species debates, then, were about where to ground authority for species. Professional naturalists struggled against the amateur. Metropolitan museum naturalists competed against the periphery. Field naturalists resisted the metropolis. All claimed rights over species. "Definitions" were only one rather unimportant weapon in the battle over species. More importantly, the fight came down to words, the right to name species. In understanding the nature of species debates in the nineteenth century, we must understand the nature of these claims over naming.
I locate three major sites for species debates: "nomenclature rules", "museum catalogues", and "field experience". Each was a central point in the struggle over the right to name species. Each represented a different social group with different practices, different institutions, and different attitudes about the study of nature. Each group would proudly call itself "naturalist."
Finally, I look at the shattering of the naturalist hegemony with the rise of experimental biology and the specialisation of biological science in the early twentieth century. Each subdivision within experimental biology looked for its own grounding of the "fundamental units" of biology. Species could no longer be said to be "what competent naturalists say they are." Into that world the supporters of the B.S.C. presented their new definition. (Advisor: Mary P. Winsor)


Ben Olshin, "A Sea Discovered: pre-Columbian conceptions and depictions of the Atlantic Ocean," 1994

ABSTRACT

By the time Christopher Columbus set out on his voyages into the Atlantic Ocean, these seas had been well explored--at least on paper. Rather than simply accepting this body of water as an unknown, cartographers and geographical writers of the period presented a variety of depictions and speculations as to what lay "out there": numerous islands, coastlines, and even other, antipodal continents. In addition, there were accounts of various voyages from the centuries prior to the Columbian ventures, with suggestions that Chinese, Arab, African, Greek, and Roman navigators had explored the open seas beyond the known world, the oikoumene of Europe, Africa, and Asia.
Modern historians have often taken the available primary sources, i.e., maps and texts, and applied rather dubious interpretative techniques to them. Often the maps are viewed with contemporary cartographic criteria in mind, and judgements are passed on their accuracy in representing islands, coastal outlines, and so on. Sometimes, too, exaggerated claims are made for a particular early map, with statements that it demonstrates a pre-Columbian landing in the New World, for example. The textual accounts of early voyages into the Atlantic are frequently treated the same way, taken out of context, and without reference to the numerous myths and traditions concerning the seas prevalent in the medieval period.
The maps and texts, however, provide us with abundant historical insight when examined simply in their own right. The cartographical works reveal the numerous conjectures put forward by mapmakers in their representations of the Atlantic. The textual accounts of voyages exhibit the widespread interest in these same seas by navigators from many different times and many different cultures. Most important, the contents of both the maps and the texts demonstrate the important roles which myth, tradition, religion, and the Classics played in the construction of geographical representations and depictions. (Advisor: Bert Hall)


David Pantalony, "Rudolph Koenig (1832-1901), Hermann von Helmholtz (1821-1894) and the Birth of Modern Acoustics," 2002

ABSTRACT

Abstract will be added shortly.


Steve Walton, "The Art of Gunnery in Renaissance England," 1999 (Michigan Technological University, Houghton, MI)

ABSTRACT

Previous histories of artillery have concentrated on the guns themselves and their use in military actions, whereas this dissertation attempts to understand those guns as the core of a technological system in late Tudor England and the meaning of that system to its contemporaries. Theoreticians, authors, and gunners all looked to "gunnery" as a field of inquiry, and this thesis proceeds from theoretical gunnery, through its practical operation, to its bureaucratic and intellectual organization in Renaissance England.
First I investigate the ballistic work of Thomas Harriot (c1560-1621), to provide insight into the theoretical analysis of gunnery in the 1590s. Re-dating Harriot's work to c1598-1600 and surveying Harriot's career, personal influences, and scientific sources suggests that Harriot's interest in gunnery was not generated by his first patron, the professional soldier Sir Walter Raleigh (as is usually assumed), but rather by his second patron, the military dilettante Henry Percy, ninth Earl of Northumberland. Next, printed English works on gunnery up to 1600 fall into two species: practical manuals by Peter Whitehorne, William Bourne and Cyprian Lucar; and arithmetical/analytical works by Leonard and Thomas Digges and Thomas Smith. Then an analysis of two manuscript gunners' manuals, written by practicing gunners, shows what the users themselves recorded.
Next, a survey of artillery use by the Tudor monarchs establishes the extent and role of cannon in sixteenth-century England, noting that Tudor warfare predisposed them not to develop their artillery skills. Analysis of two Ordnance Office surveys of 1580 and 1592 shows what they did develop, and records of ancillary gunnery equipment and gunner employment more fully represent the practice of gunnery. And, as both confirmation and augmentation of this picture, the field notebook of a practicing gunner in the Irish wars rounds out the picture of gunnery as a personal occupation.
Finally, the bureaucratic and intellectual position of gunnery is told in the story of the Artillery Garden outside Bishopsgate and of William Thomas's petitions to the Council for a formally chartered corporation for the licensing of gunners. A discussion of gunnery as a "mathematical" art and of gunners as "mathematical practitioners" concludes the thesis and indicates where gunnery "fit" into the late Elizabethan epistemology of practices. (Advisor: Bert Hall)


Joanne Woiak, "Drunkenness, Degeneration, and Eugenics in Britain, 1900-1914," 1998

ABSTRACT

This dissertation presents a reinterpretation of the early British eugenics movement. It focuses on a previously unrecognized 'Lamarckian' style of eugenics that resembled the dominant approaches to improving racial health in countries such as France and Brazil, where eugenics was closely allied to the public health and social hygiene movements. This study further attempts to situate certain Edwardian eugenic discourses within the context of debate over the relative value of laboratory versus statistical approaches to medical-scientific research. The main narrative addresses the history of scientific notions about how parental alcohol consumption could cause physical and mental degeneracy in offspring. These ideas found support from laboratory, clinical, and social studies carried out mainly by supporters of the temperance movement, and they were often conceived as an alternative soft hereditarian or non-hereditarian version of eugenic thought. The leading proponent of this "preventive" style of eugenics was the medical writer and temperance advocate Caleb Saleeby.
Lamarckian eugenics and its scientific underpinnings were first seriously challenged in 1910 by the eugenist Karl Pearson's controversial statistical study purporting to show that parental alcoholism did not have deleterious effects on offspring. Pearson's alcoholism study represented one of many efforts made by his biometrical school to establish its new techniques of mathematical statistics as a research methodology and a scientific basis for social policy making. The 1910 alcoholism memoir led to a bitter dispute with medical professionals and social scientists representing Lamarckian eugenics, who championed their own solutions to the crisis of national degeneration and their own methods of scientific research. Yet hard hereditarian eugenics, Lamarckian eugenics, and public health reforms also exhibited significant ideological affinities, namely in their shared imperialist rhetoric, class and gender biases, and emphasis on reforming individual behaviour rather than environmental and economic circumstances. In accordance with recent historiography of eugenics in other national contexts, the current study suggests that in its less rigidly hereditarian incarnations this biologically based reform programme was more closely related to preventive medicine than to human genetics. (Advisor: Pauline Mazumdar)

