Tuesday, June 2, 2009

The Imagery Debate, Pt. 2 - Kosslyn v. Pylyshyn

Following Pylyshyn's paper, many psychologists came to the defense of mental imagery, most notably Stephen M. Kosslyn. Over the following three decades, Kosslyn and Pylyshyn (as well as allies on both sides) would conduct a debate over the relationship between propositional and imagistic content in mental processes. Though both sides presently declare themselves victors (of course), the reality of the matter is somewhat hazier, but also more interesting and more fruitful, than a single set of laurels could indicate.

The debate was conducted on many fronts simultaneously and involved many innovative experiments. To cite just one example, Kosslyn and his collaborator Steven Schwartz responded to Pylyshyn's fear that a homuncular sensorium implied a logical paradox by creating a computer simulation that incorporated internal representations of imagery from which propositional content was, in turn, derived. Doing so eliminated Pylyshyn's in-principle objection to the infinitely regressive nature of homunculi. Those interested in more detail and other experiments are encouraged to consult either Michael Tye's The Imagery Debate or Howard Gardner's The Mind's New Science. Instead, I want to approach the debate with some imagery of my own.

In order to do so, I consulted the bibliography of a major, late-career book on the debate by both Pylyshyn (Seeing and Visualizing: It's Not What You Think, 2003) and Kosslyn (Image and Brain: The Resolution of the Imagery Debate, 1994). I then noted every example of either author citing a paper of which he was himself the lead author. Because Pylyshyn is somewhat less prolific than Kosslyn (a function of the latter's running a lab at Harvard), and also because Kosslyn, by mid-career, was almost always lead author, whereas Pylyshyn often granted that honor to his frequent co-author and active imagery-debate participant Jerry Fodor, I include Fodor-Pylyshyn collaborations as well. I then dumped each set of papers into a single .pdf, which I in turn fed into IBM's free text-visualization software Many Eyes. I have included links to the bibliographies I compiled for each author in the comments feed.

I first created a wordle, which creates a visual representation of the most common words in a body of texts. Doing so produced the following image for Pylyshyn's work:

Compare that with the wordle of Kosslyn's work:
The first thing one notices is that Kosslyn uses "image," "images," and "imagery" considerably more often than Pylyshyn, but this is perhaps to be expected. Pylyshyn wants to prove that mental imagery doesn't exist, so one can hardly expect him to use the term as much as Kosslyn, who is defending its existence. The more interesting aberration is the prominent position that "hemisphere" has in Kosslyn's wordle, whereas it does not appear in Pylyshyn's at all. Closer inspection (using Many Eyes' Word Tree generator) revealed that Kosslyn used the term 521 times, or once every 107 words (an absurdly high rate). By contrast, Pylyshyn uses the word exactly once in the 146,810 words of his articles. "Come, Watson!" I thought. "The game is afoot."
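For those curious how such a rate is computed, here is a minimal Python sketch of the kind of frequency check a tool like Many Eyes performs under the hood. The sample text is a hypothetical stand-in for the actual article corpus, not a quotation from Kosslyn:

```python
import re

def term_rate(text: str, term: str) -> tuple[int, float]:
    """Return how many times a term occurs and its rate per word."""
    words = re.findall(r"[a-z']+", text.lower())
    count = sum(1 for w in words if w == term)
    rate = count / len(words) if words else 0.0
    return count, rate

# hypothetical stand-in for the Kosslyn bibliography corpus
kosslyn_text = "the left hemisphere and the right hemisphere differ"
count, rate = term_rate(kosslyn_text, "hemisphere")
print(count, "occurrences, once every", round(1 / rate), "words")
```

Run over the full corpus, the same arithmetic yields the once-every-107-words figure.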

So what's so important about hemispheres for Kosslyn, and why is Pylyshyn ignoring them? It turns out that much of the detail of Kosslyn's work that focuses on how the brain structures the relationship between imagery and propositional content involves identifying processing that occurs primarily in the left hemisphere (where more propositional processing happens) or in the right (where more imagery processing occurs). For Pylyshyn, the distinction is moot because images are ultimately reducible to computation anyway. For him, Kosslyn's distinction merely shows where the epiphenomenon is generated. This distinction arose, I think, because Pylyshyn's belief (following Chomsky and Fodor) that the mind must necessarily be computational led him to be less interested in empirical evidence about the brain itself.

Interestingly, Kosslyn credits Pylyshyn's critiques with urging him and his collaborators to refine their understanding of how the relationship between these subsystems was structured. Ventriloquizing Pylyshyn, Kosslyn pondered "Does the claim that image representations are depictive imply various paradoxes and incoherencies?" He responds that
Addressing this set of issues was no mean task. My strategy was to take advantage of the abundant evidence that imagery piggybacked upon a theory of high-level visual perception. This approach allowed my colleagues and I [sic] to develop theories of specific processing subsystems, and helped us to hypothesize where these subsystems are implemented in the brain. (Image and Brain 406)
Kosslyn, it seems, attributes much of his own success to having been prodded by Pylyshyn, the gadfly philosopher (Kosslyn, btw, is currently a dean at Harvard, where he also runs an eponymous research lab and holds a chair dedicated to William James; the prodding seems to have worked quite well). More importantly, though, his research platform has proved more influential than Pylyshyn's, though the latter author remains highly respected as well, having recently won France's prestigious Jean Nicod Prize.

Evidence of this may perhaps be seen in each author's continuing relationship with the imagery debate itself. Kosslyn has, since the early 90s, stopped defending his approach and concentrated on elaborating its details, whereas Pylyshyn continued to make the case well into the 00s. Perhaps the most telling sign of how the debate was conducted comes again from Many Eyes. Returning to the Word Trees, I checked to see in what contexts each author mentioned the other. Here are Kosslyn's references to Pylyshyn:


You'll notice that of Kosslyn's 60 uses of Pylyshyn's name, he most commonly references his antagonist in a bibliographic setting, as indicated by how often the name is followed by a comma and his initials, but that the lower part of the tree is full of words indicating that Kosslyn is rehearsing one of Pylyshyn's arguments (e.g. "argues that," "wants," "claims that," "suggests"). Now consider Pylyshyn's references to Kosslyn:


Pylyshyn mentions Kosslyn 138 times, almost twice as often as the other way around, but almost exclusively in a bibliographic context. One possible reason for this is that Kosslyn is, indeed, a more prolific author (again, a function of his running a research lab). This does not, however, explain the absence of contextual references. My suggestion is that this absence may be indicative of Pylyshyn's unwillingness to adapt his model to the criticism of his interlocutors. By excluding them from the explicit discussion, he does not need to frame the debate as a debate, but merely as revealed knowledge. He can occupy a set conceptual ground without appearing entrenched.
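The word-tree comparison reduces to a simple operation: for every mention of a name, tally the words that immediately follow it. Here is a rough Python sketch of that operation; the sample sentences are invented stand-ins, not quotations from either author:

```python
import re
from collections import Counter

def following_words(text: str, name: str, span: int = 2) -> Counter:
    """Tally the words immediately following each mention of a name:
    the raw material of a word tree's right-hand branches."""
    tokens = re.findall(r"\w+", text)
    tails = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == name.lower():
            tails[" ".join(tokens[i + 1 : i + 1 + span])] += 1
    return tails

# invented sample text standing in for the article corpus
text = ("Kosslyn 1994 argues that images are depictive. "
        "Kosslyn and Schwartz built a simulation. Kosslyn 1994 replied.")
print(following_words(text, "Kosslyn").most_common(2))
```

Tails dominated by years and initials indicate bibliographic citations, while tails like "argues that" or "claims that" indicate substantive engagement, which is exactly the asymmetry visible in the two trees.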

Intriguingly, Kosslyn, in recent years, has focused on digital visual communication. Though he has not turned to text visualization software like Many Eyes, he has analyzed strategies for presenting information in different graphical formats, most notably PowerPoint. The move toward engineering and toward applied uses for theory developed during the imagery debate also indicates that the underlying field has stabilized sufficiently to serve as a foundation for new structures.

At present, the debate has moved away from treatments of pure imagery and toward inquiries into more holistic models of perception. This is partly a function of improved research methods, such as head-mounted eye-tracking equipment and improved fMRI technology, and partly a function of theoretical refinements, such as attempts by connectionist theorists to understand how the brain's massively parallel micro-structure influences its operation, or by dynamicists, such as Tim van Gelder, to understand how the fact that mental computations occur in real time must constrain theoretical models. At the same time, Artificial Life roboticists such as Rodney Brooks have constructed robots that perform perceptual tasks with insect-level competency and no internal representations or central processing whatsoever, offering a critique both of Kosslyn's interest in mental imagery and of Pylyshyn's insistence on a computational model of mind. Meanwhile, an increasing willingness on the part of psychologists to pay attention to first-hand accounts of phenomenological experience has led to groundbreaking research on subjects ranging from synesthesia (as in the work of Richard Cytowic) to accounts of agency (as in the work of Evan Thompson). This drive toward more holistic descriptions of perception is seen also in the work of Alva Noë, who argues that perception, and indeed cognition more generally, must be understood as a fully embodied process, an argument extended to include extra-bodily technologies by philosophers like Andy Clark.

Our understanding of how vision functions is by no means finalized, and by reflecting back on the imagery debate, and the terms by which it was conducted, we may be able to make more fruitful the debates that currently occupy researchers. The rivalry between Pylyshyn and Kosslyn was one of the most productive in the history of cognitive science, leading to voluminous new research and considerably refined theoretical understanding. Part of the reason for this was the very fact of its being framed as a debate, and part was the different disciplinary approaches that each party brought to the table. Though Kosslyn's position may eventually have proved more influential, it likely could not have done so without Pylyshyn's ongoing critique. In the end, the exchange demonstrates that a rivalry can, when conducted well, prove as successful as a collaboration.

Monday, June 1, 2009

The Imagery Debate, Pt. 1 - Background

The history of the theorization of mind can be described as a long list of analogies to different technologies. "Aristotle had a wax tablet theory of memory, . . . Leibniz saw the universe as clockworks, [and] Freud used a hydraulic model of libido flowing through the system [coupled with a] telephone-switchboard model of intelligence" (Rumelhart 205). By the 1950s and 60s, early efforts in the Artificial Intelligence community had borne enough fruit that researchers increasingly began to favor analogies to digital computers for the bases of their models. Doing so produced great strides in the psychological community and jumpstarted what has become known as the "Cognitive Revolution".

One of the most significant developments that early cognitive scientists argued for was the rejection of the conceit, held by most working within behavioral psychology (then the most prominent research paradigm operating in the United States), that the mind is a blank slate. In particular, Noam Chomsky's veritable trouncing of B. F. Skinner's Verbal Behavior set the tone for much of the research that was to follow. The existence of his proposed universal "generative grammar," Chomsky argued, required as a corollary a module in the brain that could produce such universal structures. Later researchers -- particularly the philosopher Jerry Fodor -- extended this assertion of modularity to cover the brain as a whole, strengthening the analogy to the different hardware structures of a computer. In order for their information-processing metaphor to work, researchers claimed, the brain had to run on an underlying mathematical description, such as the binary code that forms the basis of computer software. Fodor prominently argued for this position in his influential 1975 monograph, The Language of Thought.

Initially, this research paradigm proved extremely successful, but at the same time, it quickly became clear that certain mental phenomena were difficult to describe in its terms. In particular, the computer researcher and philosopher Zenon W. Pylyshyn argued that recent refinements of cognitive theory required psychologists to dramatically rethink their understanding of mental imagery. In his debate-provoking 1973 article "What the Mind's Eye Tells the Mind's Brain," Pylyshyn claimed that the new research paradigm revealed imagery to be merely epiphenomenal. That is to say, mental imagery has a phenomenological or experienced reality, but it is inessential, a mere side effect of "invisible" subconscious processing to which the mind cannot be made privy. He argued that mental imagery bore roughly the same relationship to the "real" mental process as the images on a computer screen bear to the computations that occur within a computer processor.

He presented two primary arguments for this position. In the first case, he argued, those who believed in the "reality" of mental imagery implicitly reinstated the Cartesian dualism between mind and body. This position was a reaction to then-recent ground-breaking vision research, most notably the incredibly influential 1959 paper (which inspired the title of Pylyshyn's), "What the Frog's Eye Tells the Frog's Brain," in which a team of international researchers led by the MIT psychologist Jerome Lettvin showed that much important vision processing in a frog actually takes place in the frog's eye. Cells on the retina that responded to small, fast-moving objects (which Lettvin et al. deemed "bug detectors") triggered the response of attacking the object with the frog's tongue. The fact that the frog's eye and not the frog's brain seemed to do the work of identifying its perceptual object showed that researchers could advance mechanistic descriptions of cognitive systems that could also account for where meaning comes from, without needing also to posit some sort of non-mechanical "mind" which attributed meaning and made judgments. This research was then replicated (in somewhat less bug-centered terms) in many other animals, including humans.

Pylyshyn claimed that those who believed in the reality of mental imagery gave up the ground that Lettvin and his colleagues had gained. He saw mental imagery as a return to dualism, arguing that the mind showed no sign of containing an additional "eye" with which to view the represented image and also reiterating the philosopher's classical critique of dualism: that it implied an infinite regression of nested homunculi. After all, if the brain required an interior structure to view the represented image, then that internal structure presumably required an observing internal structure as well, which in turn required another, and so on ad infinitum.

Pylyshyn's other major claim was that the dichotomy then posited by many vision researchers, that mental content was either propositional or perceptual, was too rigid. Indeed, he claimed, since perceptual content was ultimately reducible to propositional content, the distinction was meaningless. The renowned British opera director Jonathan Miller, who was originally trained as a neurologist, made a similar (albeit softened) claim some years later:
I recently went to Cambridge to deliver a lecture, and in my apprehension I had rehearsed my arrival in a series of increasingly alarming dreams. About four nights before, I dreamed that I was parking my car in what I knew to be Trinity Lane, although it was from its visible appearance also a narrow side street behind the Santo in Padua. At the end of the lane, I could see the master of Trinity waving hospitably at me, and he was the actor Michael Horden. Now there was no sense in my dream that Horden, the actor, was playing the part, or that he had popped in between the election or anyone else's hopes. As far as I was concerned in the dream, Horden and the master were one and the same, in spite of the fact that I also knew, simultaneously, that the master and Sir Andrew Huxley were identical. The rest of the dream was so humiliating, for which I blame neither Sir Michael nor Sir Andrew, that I shall draw a veil over it. (198)
Miller's point is that his mental image of Michael Horden also contained the propositional content "master," thus blending the two categories. This, it must be stressed, is a softer claim than Pylyshyn's, Miller not being interested in proving the epiphenomenality of images, but it operates along the same vectors.

Pylyshyn's was the first salvo in what would become a prolonged debate over the nature of mental imagery.

[Sources appear in the Comments feed]

Monday, April 27, 2009

The Historiography of Information

Tracking the historiography of information is complicated by a number of factors. First, the term lacks a single, accepted meaning. Several definitions, some quite technical, have been advanced since the word’s derivation from the verb to inform (originally to enform) during the 14th century, but these meanings have not always proved wholly commensurable (OED). Second, according to many of these definitions, information has existed in one form or another far back in human history, typically well prior to the coinage of the term. In some cases, particularly as it is used in the sciences, the term is essentially a-historical, even taking on metaphysical meaning in the writing of some physicists and computer scientists. A historiography of information should, therefore, include investigations both into information itself and into changing theories of information, the distinction between which is sometimes hazy. Third, the term has currency in many spheres and disciplines, applying potentially to philosophy, pure mathematics, sciences ranging from physics to anthropology, media and communications studies, the tech sector, and, of course, common parlance. Its widespread application is both what makes the concept so powerful (used as a common referent, it can provide a conceptual, mathematical, and terminological common ground for disciplines that might otherwise be unable to communicate) and what makes the concept so frustrating and potentially dangerous (its application across disciplines can serve to erase important differences in how the term functions in various language games, causing stymieing confusion or ill-founded actions).

Prior to the 20th c., the act of informing had always required a receptive audience. Information enformed the form of the receiver’s mind much as an envelope enveloped a missive. But at some point in the mid-20th c., a new sense developed, one that re-defined the term abstractly, as that which is conveyed by a medium, making no reference to a receiving mind. Information was instead a formal or organizational scheme that was instantiated in a particular physical medium. Several historians have pointed to two documents in particular, Norbert Wiener’s Cybernetics (1948 / 1961) and Claude Shannon and Warren Weaver’s The Mathematical Theory of Communication (1949), as originating this new understanding of information. The number and range of histories that find their beginning here – N. Katherine Hayles’s How We Became Posthuman (1999), an investigation of technological critiques of liberal humanism in contemporary literature; Hans Christian von Baeyer’s Information (2003), a history of the concept of information in the 20th century; and Fred Turner’s From Counterculture to Cyberculture (2006), a tracking of the influence of 60s countercultural figures and thought in the contemporary tech sector, all of which begin with one or both of the above texts – indicate the influence that these two texts had on how people conceived of information. This is, of course, too great a simplification. The OED points out precursors to this abstracted notion of information as far back as 1927, but it seems safe to say that the work of Wiener and of Shannon and Weaver codified and popularized the new notion.

It is important to realize as well that this new notion of information was not simply a process of abstraction (though it was that), but that it also contained a more-or-less explicit historiographic element. In particular, Cybernetics argues for information as a new heuristic for understanding history. Wiener calls his thesis “neither unfamiliar nor new,” arguing only that his work is the first to codify such assumptions about information mathematically, and even acknowledging that such rigorous codification has its limitations and that there remains “much which we must leave, whether we like it or not, to the un-‘scientific,’ narrative method of the professional historian” (155, 164). Shannon, and particularly Weaver, make it clear that they believe that their abstracted notion of information, one that they explicitly divorce from meaning, could be applicable beyond the realm of telephone network organization for which it was written (99, 114-17).

Indeed, several histories written well before this period seem to foreshadow the informational heuristic. In particular, William Godwin, both in his Enquiry Concerning Political Justice (particularly the more radical 1793 edition) and in his historical biography, Life of Geoffrey Chaucer (1804), shows an interest in tracking communications networks and how their structuration led to the propagation of certain ideas. Godwin’s line of thinking was likely influenced both by his connection with the community of Rational Dissenters, a Christian sect that argued that the Kingdom of Heaven would be realized when, by conversing with fellow believers, humanity gradually improved its understanding of God’s will, and by his familiarity with major players in the French Revolution (Philp passim). Such an approach to history also finds roots in 19th-century histories of the railroads, such as Michael Angelo Garvey’s The Silent Revolution (1852). Garvey’s book is notable, too, because (like Godwin’s Political Justice) such networks become the means of achieving a utopian future, a theme that will recur later in the work of information-inspired futurists.

Information theory’s early technical articulation, as well as its antecedents in railroad history, has perhaps led to the typically materialist bent of most histories of information. Of particular note is the work of one of the most noted 20th-century railroad historians, Harold A. Innis, who turned, by the end of his career, away from the study of transportation systems and to that of communications systems. Composed shortly before Wiener’s and Shannon and Weaver’s texts, Innis’s The Bias of Communication (1951) tracked the way communication networks inform broad historical trends from ancient Egypt up to the present day. His lectures cum essays argued that technologies of information storage, such as writing or the printing press, have led to dangerous biases in our understanding of the world. In particular, he suggested that the rise of the techno-science which such networks enabled led to a dehumanization which might best be remedied by a renewed emphasis on oral communication (a belief that bears a curious resemblance to that of Godwin and the Rational Dissenters). Wiener’s The Human Use of Human Beings, published the same year, dealt with similar themes but treated them explicitly in terms of information theory as he had elaborated it. Equally materialist, Wiener explicitly analogized communications networks in society to those in industrial machinery, much as Innis implicitly analogized communications networks to railroads. Ten years later, Innis’s acolyte Marshall McLuhan would further popularize materialist heuristics for understanding social phenomena, most notably in his Understanding Media (1964), in which he argued (in a move that mimicked Shannon and Weaver’s divorce of information from content) that it is not the content of a message that matters but the medium that carries it, a notion he had the savvy to phrase in the advertisement-like slogan, “The Medium is the Message.”

Such interest in communication networks gained increasing specificity and detail in subsequent decades, focusing on specific aspects of different networks. These works deviated from the sometimes radical mediality of McLuhan’s work, but nevertheless maintained a focus on means of communication, even if they occasionally also admitted that the content of a message could itself matter. Two examples of this are Thomas Streeter’s Selling the Air (1996), which details how the rise of “liberal corporatism” led to a broadcast model of radio rather than the peer-to-peer approach typified by ham radio, and Lawrence Lessig’s The Future of Ideas (2001), which charts how the interaction between governmental policy and technological approaches to information sharing affects communications networks. Similarly, information theory serves as a heuristic for evolutionary anthropologist Kim Sterelny’s The Third Chimpanzee (forthcoming), which takes advantage of the broad applicability of information theory to create a common discourse among evolutionary biology, archaeology, and anthropology, in which he argues that the pivotal innovation that led humans to break off from other primates was our ability to use cultural forces to structure learning environments in which high-bandwidth information about tool use could be passed down through generations.

Such focus on communication media was only the most obvious use of information by historians. As different disciplines began to take advantage of the concept’s robust applicability, and as engineers began using information theory to construct increasingly powerful computers, electronics, and telecommunications devices, the heuristic spread throughout the discourse of the sciences. The most notable example of this is probably James D. Watson and Francis Crick, who, by describing the structure of DNA, showed that the human reproductive system could be described informationally. Similar innovations in physics (most notably the rise of computational physics, which argues that the entire physical universe can be described as a massive computation) and the increasing use of computer modeling in the sciences have led some physicists and historians of science (notably John Archibald Wheeler, Stephen Wolfram, and Hans Christian von Baeyer) to argue that information comprises a sort of universal language in which the entire universe is describable.

Much of the debate surrounding information at present revolves around its use as a predictive heuristic. Since Edward Lorenz’s pioneering use of mathematical models to predict weather in the 60s, the sciences have increasingly come to use computer models to describe nonlinear systems. Such models can be thought of as informational histories of the future, and their demonstrable predictive power is evidence of their productivity. Such innovations have led some information scientists to declare that we are, to cite the title of Stephen Wolfram’s recent book, engaged in A New Kind of Science (2002). The presumed universality of such predictive techniques has proved particularly alluring to futurists, such as Ray Kurzweil, whose The Singularity Is Near (2005) argues that information technology will soon become hyper-intelligent and propel humanity into a paradisiacal world of infinite informational (and sensorial) satisfaction. At the same time, many information theorists argue that such future histories are deeply flawed (the technical term is cockamamie) and that informational heuristics entail non-trivial limitations (see, e.g., William Wimsatt’s Re-Engineering Philosophy for Limited Beings (2007)).
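Those limitations are easy to demonstrate. The logistic map below is a textbook stand-in for the nonlinear systems Lorenz studied (it is not his actual weather model): two trajectories that begin one billionth apart diverge completely within a few dozen steps, which is why informational "histories of the future" lose predictive power so quickly.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1 - x)

# two trajectories starting one billionth apart
a, b = 0.4, 0.4 + 1e-9
gaps = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

# the gap roughly doubles each step until it saturates at order 1
print(f"first-step gap: {gaps[0]:.1e}, largest gap: {max(gaps):.3f}")
```

No amount of added precision escapes this: each extra digit of accuracy in the starting measurement buys only a few more steps of reliable prediction.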

[Works Cited in comments]

Monday, April 13, 2009

Ginger-Grapefruit Vodka Martini

2 oz Smirnoff vodka
1 oz fresh grapefruit juice
.75 oz ginger suspension
.25 oz honey

Combine grapefruit juice, vodka, and ginger suspension. Add honey and mix until totally dissolved. Stir over ice. Serve up with cherry garnish.

Ginger suspension: Bring 1 c. finely chopped ginger to boil in ample water. Simmer for > 90 min. Strain and reduce to about 4 oz.

[Tried it with gin, btw, and was sadly, sorely disappointed. The juniper ran wild over the other flavors. Particularly tragic because Gin & Ginger is so darned snappy as a drink name.]

Sunday, April 12, 2009

A Lyric Form of Wary Positivism

The Bias of Communication, by Harold A. Innis, the great Canadian economic historian, tracks material constraints on communication over the course of roughly 7,000 years of Western history in just under 200 pages. The terrific temporal coverage of the volume required that Innis move very rapidly through his material, leading to a peculiar prose style:
The failure of the Counter-Reformation reflected the influence of force. The growth of industrialism, the interest in science and mathematics, and the rise of cities had their effects in the use of gunpowder and artillery. The application of artillery in destroying the defences of Constantinople in 1453 was spectacular evidence of the decline of cavalry and systems of defence which had characterized feudalism. The instruments of attack became more powerful than those of defence and decentralization began to give way to centralization. The limitations had been evident in the mountainous region of Switzerland and the low country of the Netherlands and in the success of movements toward independence in those regions. The military genius of Cromwell and of Gustavus Adolphus in using new instruments of warfare guaranteed the position of Protestantism in England and Germany. (Innis, The Bias of Communication, Toronto: U of Toronto P, 1951 / reprinted 2006, p. 25.)
Structurally and rhetorically, this is a strange paragraph. It has a topic sentence, and its sentences generally support that topic, but it is basically without transitions. Each sentence just accrues upon the last. Note also that there are very few agents -- living breathing people -- acting within the sentences. Instead, the sentences' subjects are typically abstract qualities or inanimate objects ("the failure of the Counter-Reformation," "the application of artillery," "the instruments of attack"), and predicates are typically governed by state-of-being verbs. When humans make an appearance (Cromwell and Adolphus) it is merely to take advantage of the inanimate "new instruments of warfare," which provide the humans' agency. The paragraph ends without a conclusion, without a sense that the evidence presented led to any specific thing.

I want to liken this to a certain approach to landscape poetry exemplified by John Ashbery's poem "Into the Dusk-Charged Air." A typical passage:
The Rhône slogs along through whitish banks
And the Rio Grande spins tales of the past.
The Loir bursts its frozen shackles
But the Moldau's wet mud ensnares it.
The East catches the light.
Near the Escaut the noise of factories echoes
And the sinuous Humboldt gurgles wildly.
The Po too flows, and the many-colored
Thames. Into the Atlantic Ocean
Pours the Garonne. Few ships navigate
On the Housatonic, but quite a few can be seen
On the Elbe. For centuries
The Afton has flowed.
(Ashbery, Rivers and Mountains, New York: Ecco, 1962, p. 18)
Ashbery's poem moves the way Innis's paragraph moves. In both cases, the passages are structured like a landscape in that their internal components do not have a causal relationship with one another but are related primarily through spatial arrangement. Importantly, though, Ashbery's poem is not landscape but riverscape. Its spatiality is combined with a brutal temporal element normally absent from the peaceful genre of landscape (tho not necessarily from the Hudson River school's landscapes, a fact of which Ashbery, also an art critic and resident of upstate NY, would have been keenly aware). The poem shares a forward momentum with Innis's prose, which itself hurtles through history with the unremitting undertow of the St. Lawrence surging toward Niagara.

The reader experiences this as a sort of caught-up-in-ness. We ride the crest of Innis's prose and Ashbery's poetry not quite comprehending but unable to turn back upstream or swim to shore. This is a historical anxiety as well, one particularly exemplified by the lack of historical agency typically presumed to accompany the ever-increasing current of techno-scientific innovation. Which brings me to my last exemplum: Harper's Magazine's "Findings" column, which briefly recapitulates recent scientific studies in a rhetorical fashion that will by now appear quite familiar:
During the past summer, the number of zombie computers worldwide tripled, and the number of attacks by stupid grizzly bears in Anchorage, Alaska, increased sharply. Polar-bear cannibalism continued to rise. The earth's magnetic field may weaken sufficiently by 3500 A.D. [sic] to allow the poles to reverse. Half of all mammals were in decline, and global warming was chasing many plant and animal species uphill. A quarter of elephants in south Indian temples were found to suffer from tuberculosis, a moose in Wyoming was found to have contracted chronic wasting disease, yellow stunt rice disease struck the Mekong Delta, and scientists determined that HIV first infected humans as early as 1884. ("Findings," Harper's Magazine 317, no. 1903 (December 2008): 104.)
The column stresses moments of non-agency (zombiedom, massive environmental change, disease outbreaks), but it does so in the context of scientific discovery. It is the constant realization that we humans are hampered in novel, nasty ways. A weird, wary positivism inheres in this format, just as the sublime terror that suffuses the images of the Hudson River Valley was coupled with an implicit politics of exploration and expansionism. Innis, it should be added, viewed his book as a polemic for education reform: another wary positivism.

Friday, April 10, 2009

From Cyberculture to Counterculture

I'm reading Fred Turner's engaging From Counterculture to Cyberculture, which tracks the influence of a certain breed of libertarian-leaning, cybernetics-influenced, Whole Earth Catalog-publishing hippie (personified by Whole Earth and Wired impresario Stewart Brand) on the computer industry. While I'm perfectly happy to acknowledge that the counterculture had a pronounced influence on the tech industry, I wonder what sort of audience would be interested in a book about the influence of starched and stodgy IBM engineers with 2.5 children, a slide rule, and a golden retriever. Probably no one, and because such a history is utterly uncool, even though it may well have been the "dominant" influence, it remains oddly untold.

Contemporary tech people want the cultural cachet derived from an association with the counterculture, especially as they themselves increasingly corporatized post-bubble-burst (the book was published in 2006, btw). As much as communalist-based organization techniques may have influenced some computer companies, just as many drew their structure from conventional business models and succeeded in spite of or even because of those decidedly un-hippie hierarchies. Turner's book -- and even more pointedly, Turner's book's success -- may say more about the desire to read the counterculture back into the history of cyberculture than about the counterculture's actual influence.

Sunday, January 18, 2009

Following Flaubert

Flaubert studies draftsmanship. He sits, alone in his room, undertaking a deliberate rewiring of the convoluted, pre-Haussmann street map of neurons which configure his brain. In so doing, he expands alleys into avenues. Line, previously a cramped dark path, becomes something his eyes flow over like so many phaetons bustling along the Champs-Élysées. At the same time, he habituates his brain to this behavior. Each line he draws on the clean white parchment scratches a groove like a wheel rut into the neural pathways of his brain.

Even as line is reified, the laundry-white uniformity of a blank field begins to wriggle in his mind’s eye like laundry in a warm breeze. As he lays down shading, becoming aware of depth and shadow, a uniform field becomes rich with detail, infinitely discriminable. An undifferentiated mass of neurons slowly learns to make distinctions, each neuron weighted differently, each vibrating differently, each firing according to its newly distinguishable weight. Even if, like the line of laundry, the array of neurons all sit along the same channel, the same sloping clothesline, the same mental pathway, each is newly differentiated as a result of Flaubert, day after day, trying to sketch this bowl of fruit or that accommodating prostitute. Slowly, Flaubert learns to see not a wall, but a thousand and one bricks, not a man in blue, but cerulean silk folded and rilled by the rumpled concavity of the slouching man’s chest.

Balzac did much the same, learning to notice, to see the detail, to account for each buckle on the soldier’s shoe, every pine speckling the distant hillside. Unlike Flaubert, though, Balzac does not seem to be aware that his senses are more acute. He sees the change as existing out there, merely sees an abundance of new stuff, not, as Flaubert does, the same old world registered through a new kind of mind.

At the same time, throughout France, a generation of young bourgeois undertook similar studies. They collectively rewired their brains to register the formal and the phenomenological, to be less aware of objects as things, to register them as sights, but at the same time to reify those sights into a new class of things. It is precisely this population, the bohemian student body of the Rive Gauche, that is absent as Frederic plays the flâneur, but it is precisely these same young men, now progressed, like Flaubert himself, to middle age, to whom the book is addressed. The image in Frederic’s eyes was an image in theirs, and it is being summoned again by Flaubert to live before them once more. The leap across the decades is no less abrupt and no less invisible than the saccadic leap. Flaubert manages to recreate the real by striding over [thirty years] in the leap of an eye, as unconscious of the movement as a flâneur who, his eyes on the shop windows and fruit carts about him, steps over a puddle in the street, never noticing the water, but adjusting his gait nonetheless. Flaubert manages to recall to life for the now older bourgeoisie their long-ago ramblings down the streets and pathways of their own neuronal circuitry, their navigation of the folded-over forms of Paris, through the medieval alleyways of the history of their own minds.