Movements at The Moving Planet Blog

Science & Technology
Science, Mind, and Limits of Understanding by Noam Chomsky
February 15, 2015

One of the most profound insights into language and mind, I think, was Descartes’s recognition of what we may call “the creative aspect of language use”: the ordinary use of language is typically innovative without bounds, appropriate to circumstances but not caused by them – a crucial distinction – and can engender thoughts in others that they recognize they could have expressed themselves. Given the intimate relation of language and thought, these are properties of human thought as well. This insight is the primary basis for Descartes’s scientific theory of mind and body. There is no sound reason to question its validity, as far as I am aware. Its implications, if valid, are far-reaching, among them what it suggests about the limits of human understanding, as becomes more clear when we consider the place of these reflections in the development of modern science from the earliest days.

It is important to bear in mind that insofar as it was grounded in these terms, Cartesian dualism was a respectable scientific theory, proven wrong (in ways that are often misunderstood), but that is the common fate of respectable theories.

The background is the so-called “mechanical philosophy” – mechanical science in modern terminology. This doctrine, originating with Galileo and his contemporaries, held that the world is a machine, operating by mechanical principles, much like the remarkable devices that were being constructed by skilled artisans of the day and that stimulated the scientific imagination much as computers do today; devices with gears, levers, and other mechanical components, interacting through direct contact with no mysterious forces relating them. The doctrine held that the entire world is similar: it could in principle be constructed by a skilled artisan, and was in fact created by a super-skilled artisan. The doctrine was intended to replace the resort to “occult properties” on the part of the neoscholastics: their appeal to mysterious sympathies and antipathies, to forms flitting through the air as the means of perception, the idea that rocks fall and steam rises because they are moving to their natural place, and similar notions that were mocked by the new science.

The mechanical philosophy provided the very criterion for intelligibility in the sciences. Galileo insisted that theories are intelligible, in his words, only if we can “duplicate [their posits] by means of appropriate artificial devices.” The same conception, which became the reigning orthodoxy, was maintained and developed by the other leading figures of the scientific revolution: Descartes, Leibniz, Huygens, Newton, and others.

Today Descartes is remembered mainly for his philosophical reflections, but he was primarily a working scientist and presumably thought of himself that way, as his contemporaries did. His great achievement, he believed, was to have firmly established the mechanical philosophy, to have shown that the world is indeed a machine, that the phenomena of nature could be accounted for in mechanical terms in the sense of the science of the day. But he discovered phenomena that appeared to escape the reach of mechanical science. Primary among them, for Descartes, was the creative aspect of language use, a capacity unique to humans that cannot be duplicated by machines and does not exist among animals, which in fact were a variety of machines, in his conception.

As a serious and honest scientist, Descartes therefore invoked a new principle to accommodate these non-mechanical phenomena, a kind of creative principle. In the substance philosophy of the day, this was a new substance, res cogitans, which stood alongside of res extensa. This dichotomy constitutes the mind-body theory in its scientific version. Then followed further tasks: to explain how the two substances interact and to devise experimental tests to determine whether some other creature has a mind like ours. These tasks were undertaken by Descartes and his followers, notably Géraud de Cordemoy; and in the domain of language, by the logician-grammarians of Port Royal and the tradition of rational and philosophical grammar that succeeded them, not strictly Cartesian but influenced by Cartesian ideas.

All of this is normal science, and like much normal science, it was soon shown to be incorrect. Newton demonstrated that one of the two substances does not exist: res extensa. The properties of matter, Newton showed, escape the bounds of the mechanical philosophy. To account for them it is necessary to resort to interaction without contact. Not surprisingly, Newton was condemned by the great physicists of the day for invoking the despised occult properties of the neo-scholastics. Newton largely agreed. He regarded action at a distance, in his words, as “so great an Absurdity, that I believe no Man who has in philosophical matters a competent Faculty of thinking, can ever fall into it.” Newton however argued that these ideas, though absurd, were not “occult” in the traditional despised sense. Nevertheless, by invoking this absurdity, we concede that we do not understand the phenomena of the material world. To quote one standard scholarly source, “By ‘understand’ Newton still meant what his critics meant: ‘understand in mechanical terms of contact action’.”

It is commonly believed that Newton showed that the world is a machine, following mechanical principles, and that we can therefore dismiss “the ghost in the machine,” the mind, with appropriate ridicule. The facts are the opposite: Newton exorcised the machine, leaving the ghost intact. The mind-body problem in its scientific form did indeed vanish as unformulable, because one of its terms, body, does not exist in any intelligible form. Newton knew this very well, and so did his great contemporaries.

John Locke wrote that we remain in “incurable ignorance of what we desire to know” about matter and its effects, and no “science of bodies [that provides true explanations is] within our reach.” Nevertheless, he continued, he was “convinced by the judicious Mr. Newton’s incomparable book, that it is too bold a presumption to limit God’s power, in this point, by my narrow conceptions.” Though gravitation of matter to matter is “inconceivable to me,” nevertheless, as Newton demonstrated, we must recognize that it is within God’s power “to put into bodies, powers and ways of operations, above what can be derived from our idea of body, or can be explained by what we know of matter.” And thanks to Newton’s work, we know that God “has done so.” The properties of the material world are “inconceivable to us,” but real nevertheless. Newton understood the quandary. For the rest of his life, he sought some way to overcome the absurdity, suggesting various possibilities, but not committing himself to any of them because he could not show how they might work and, as he always insisted, he would not “feign hypotheses” beyond what can be experimentally established.

Replacing the theological with a cognitive framework, David Hume agreed with these conclusions. In his history of England, Hume describes Newton as “the greatest and rarest genius that ever arose for the ornament and instruction of the species.” His most spectacular achievement was that while he “seemed to draw the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored [Nature’s] ultimate secrets to that obscurity, in which they ever did and ever will remain.”

As the import of Newton’s discoveries was gradually assimilated in the sciences, the “absurdity” recognized by Newton and his great contemporaries became scientific common sense. The properties of the natural world are inconceivable to us, but that does not matter. The goals of scientific inquiry were implicitly restricted: from the kind of conceivability that was a criterion for true understanding in early modern science from Galileo through Newton and beyond, to something much more limited: intelligibility of theories about the world. This seems to me a step of considerable significance in the history of human thought and inquiry, more so than is generally recognized, though it has been understood by historians of science.

Friedrich Lange, in his classic 19th century history of materialism, observed that we have “so accustomed ourselves to the abstract notion of forces, or rather to a notion hovering in a mystic obscurity between abstraction and concrete comprehension, that we no longer find any difficulty in making one particle of matter act upon another without immediate contact,…through void space without any material link. From such ideas the great mathematicians and physicists of the seventeenth century were far removed. They were all in so far genuine Materialists in the sense of ancient Materialism that they made immediate contact a condition of influence.” This transition over time is “one of the most important turning-points in the whole history of Materialism,” he continued, depriving the doctrine of much significance, if any at all. “What Newton held to be so great an absurdity that no philosophic thinker could light upon it, is prized by posterity as Newton’s great discovery of the harmony of the universe!”

Similar conclusions are commonplace in the history of science. In the mid-twentieth century, Alexander Koyré observed that Newton demonstrated that “a purely materialistic pattern of nature is utterly impossible (and a purely materialistic or mechanistic physics, such as that of Lucretius or of Descartes, is utterly impossible, too)”; his mathematical physics required the “admission into the body of science of incomprehensible and inexplicable ‘facts’ imposed upon us by empiricism,” by what is observed and our conclusions from these observations.

With the disappearance of the scientific concept of body (material, physical, etc.), what happens to the “second substance,” res cogitans/mind, which was left untouched by Newton’s startling discoveries? A plausible answer was suggested by John Locke, also within the reigning theological framework. He wrote that just as God added to matter such inconceivable properties as gravitational attraction, he might also have “superadded” to matter the capacity of thought. In the years that followed, Locke’s “God” was reinterpreted as “nature,” a move that opened the topic to inquiry. That path was pursued extensively in the years that followed, leading to the conclusion that mental processes are properties of certain kinds of organized matter. Restating the fairly common understanding of the time, Charles Darwin, in his early notebooks, wrote that there is no need to regard thought, “a secretion of the brain,” as “more wonderful than gravity, a property of matter” – all inconceivable to us, but that is not a fact about the external world; rather, about our cognitive limitations.

It is of some interest that all of this has been forgotten, and is now being rediscovered. Nobel laureate Francis Crick, famous for the co-discovery of the structure of DNA, formulated what he called the “astonishing hypothesis” that our mental and emotional states are “in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.” In the philosophical literature, this rediscovery has sometimes been regarded as a radical new idea in the study of mind. To cite one prominent source, the radical new idea is “the bold assertion that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain.” In fact, the many proposals of this sort reiterate, in virtually the same words, formulations of centuries ago, after the traditional mind-body problem became unformulable with Newton’s demolition of the only coherent notion of body (or physical, material, etc.). For example, the 18th century chemist/philosopher Joseph Priestley concluded that properties “termed mental” reduce to “the organical structure of the brain,” a conclusion stated in different words by Locke, Hume, Darwin, and many others, and one almost inescapable, it would seem, after the collapse of the mechanical philosophy that provided the foundations for early modern science, and its criteria of intelligibility.

The last decade of the twentieth century was designated “the Decade of the Brain.” In introducing a collection of essays reviewing its results, neuroscientist Vernon Mountcastle formulated the guiding theme of the volume as the thesis of the new biology that “Things mental, indeed minds, are emergent properties of brains, [though] these emergences are…produced by principles that… we do not yet understand” – again reiterating eighteenth century insights in virtually the same words.

The phrase “we do not yet understand,” however, should strike a note of caution. We might recall Bertrand Russell’s observation in 1927 that chemical laws “cannot at present be reduced to physical laws.” That was true, leading eminent scientists, including Nobel laureates, to regard chemistry as no more than a mode of computation that could predict experimental results, but not real science. Soon after Russell wrote, it was discovered that his observation, though correct, was understated. Chemical laws never would be reducible to physical laws, as physics was then understood. After physics underwent radical changes, with the quantum-theoretic revolution, the new physics was unified with a virtually unchanged chemistry, but there was never reduction in the anticipated sense.

There may be some lessons here for neuroscience and philosophy of mind. Contemporary neuroscience is hardly as well-established as physics was a century ago. There are what seem to me to be cogent critiques of its foundational assumptions, notably recent work by cognitive neuroscientists C.R. Gallistel and Adam Philip King. The common slogan that study of mind is neuroscience at an abstract level might turn out to be just as misleading as comparable statements about chemistry and physics ninety years ago. Unification may take place, but that might require radical rethinking of the neurosciences, perhaps guided by computational theories of cognitive processes, as Gallistel and King suggest.

The development of chemistry after Newton also has lessons for neuroscience and cognitive science. The 18th century chemist Joseph Black recommended that “chemical affinity be received as a first principle, which we cannot explain any more than Newton could explain gravitation, and let us defer accounting for the laws of affinity, till we have established such a body of doctrine as he has established concerning the laws of gravitation.” The course Black outlined is the one that was actually followed as chemistry proceeded to establish a rich body of doctrine. Historian of chemistry Arnold Thackray observes that the “triumphs” of chemistry were “built on no reductionist foundation but rather achieved in isolation from the newly emerging science of physics.” Interestingly, Thackray continues, Newton and his followers did attempt to “pursue the thoroughly Newtonian and reductionist task of uncovering the general mathematical laws which govern all chemical behavior” and to develop a principled science of chemical mechanisms based on physics and its concepts of interactions among “the ultimate permanent particles of matter.” But the Newtonian program was undercut by Dalton’s “astonishingly successful weight-quantification of chemical units,” Thackray continues, shifting “the whole area of philosophical debate among chemists from that of chemical mechanisms (the why? of reaction) to that of chemical units (the what? and how much?),” a theory that “was profoundly antiphysicalist and anti-Newtonian in its rejection of the unity of matter, and its dismissal of short-range forces.” Continuing, Thackray writes that “Dalton’s ideas were chemically successful. Hence they have enjoyed the homage of history, unlike the philosophically more coherent, if less successful, reductionist schemes of the Newtonians.”

Adopting contemporary terminology, we might say that Dalton disregarded the “explanatory gap” between chemistry and physics by ignoring the underlying physics, much as post-Newtonian physicists disregarded the explanatory gap between Newtonian dynamics and the mechanical philosophy by rejecting the latter, and thereby tacitly lowering the goals of science in a highly significant way, as I mentioned.

Contemporary studies of mind are deeply troubled by the “explanatory gap” between the science of mind and neuroscience – in particular, between computational theories of cognition, including language, and neuroscience. I think they would be well-advised to take seriously the history of chemistry. Today’s task is to develop a “body of doctrine” to explain what appear to be the critically significant phenomena of language and mind, much as chemists did. It is of course wise to keep the explanatory gap in mind, to seek ultimate unification, and to pursue what seem to be promising steps towards unification, while nevertheless recognizing that as often in the past, unification may not be reduction, but rather revision of what is regarded as the “fundamental discipline,” the reduction basis, the brain sciences in this case.

Locke and Hume, and many less-remembered figures of the day, understood that much of the nature of the world is “inconceivable” to us. There were actually two different kinds of reasons for this. For Locke and Hume, the reasons were primarily epistemological. Hume in particular developed the idea that we can only be confident of immediate impressions, of “appearances.” Everything else is a mental construction. In particular, and of crucial significance, that is true of identity through time, problems that trace back to the pre-Socratics: the identity of a river or a tree or most importantly a person as they change through time. These are mental constructions; we cannot know whether they are properties of the world, a metaphysical reality. As Hume put the matter, we must maintain “a modest skepticism to a certain degree, and a fair confession of ignorance in subjects, that exceed all human capacity” – which for Hume includes virtually everything beyond appearances. We must “refrain from disquisitions concerning their real nature and operations.” It is the imagination that leads us to believe that we experience external continuing objects, including a mind or self. The imagination, furthermore, is “a kind of magical faculty in the soul, which…is inexplicable by the utmost efforts of human understanding,” so Hume argued.

A different kind of reason why the nature of the world is inconceivable to us was provided by “the judicious Mr. Newton,” who apparently was not interested in the epistemological problems that vexed Locke and Hume. Newton scholar Andrew Janiak concludes that Newton regarded such global skepticism as “irrelevant – he takes the possibility of our knowledge of nature for granted.” For Newton, “the primary epistemic questions confronting us are raised by physical theory itself.” Locke and Hume, as I mentioned, took quite seriously the new science-based skepticism that resulted from Newton’s demolition of the mechanical philosophy, which had provided the very criterion of intelligibility for the scientific revolution. That is why Hume lauded Newton for having “restored [Nature’s] ultimate secrets to that obscurity, in which they ever did and ever will remain.”

For these quite different kinds of reasons, the great figures of the scientific revolution and the Enlightenment believed that there are phenomena that fall beyond human understanding. Their reasoning seems to me substantial, and not easily dismissed. But contemporary doctrine is quite different. The conclusions are regarded as a dangerous heresy. They are derided as “the new mysterianism,” a term coined by philosopher Owen Flanagan, who defined it as “a postmodern position designed to drive a railroad spike through the heart of scientism.” Flanagan is referring specifically to explanation of consciousness, but the same concerns hold of mental processes in general.

The “new mysterianism” is compared today with the “old mysterianism,” Cartesian dualism, its fate typically misunderstood. To repeat, Cartesian dualism was a perfectly respectable scientific doctrine, disproven by Newton, who exorcised the machine, leaving the ghost intact, contrary to what is commonly believed.

The “new mysterianism,” I believe, is misnamed. It should be called “truism,” at least for anyone who accepts the major findings of modern biology, which regards humans as part of the organic world. If so, then they will be like all other organisms in having a genetic endowment that enables them to grow and develop to their mature form. By simple logic, the endowment that makes this possible also excludes other paths of development. The endowment that yields scope also establishes limits. What enables us to grow legs and arms, and a mammalian visual system, prevents us from growing wings and having an insect visual system.

All of this is indeed truism, and for non-mystics, the same should be expected to hold for cognitive capacities. We understand this well for other organisms. Thus we are not surprised to discover that rats are unable to run prime number mazes no matter how much training they receive; they simply lack the relevant concept in their cognitive repertoire. By the same token, we are not surprised that humans are incapable of the remarkable navigational feats of ants and bees; we simply lack the cognitive capacities, though we can sometimes duplicate their feats with sophisticated instruments. The truisms extend to higher mental faculties. For such reasons, we should, I think, be prepared to join the distinguished company of Newton, Locke, Hume and other dedicated mysterians.

For accuracy, we should qualify the concept of “mysteries” by relativizing it to organisms. Thus what is a mystery for rats might not be a mystery for humans, and what is a mystery for humans is instinctive for ants and bees.

Dismissal of mysterianism seems to me one illustration of a widespread form of dualism, a kind of epistemological and methodological dualism, which tacitly adopts the principle that study of mental aspects of the world should proceed in some fundamentally different way from study of what are considered physical aspects of the world, rejecting what are regarded as truisms outside the domain of mental processes. This new dualism seems to me truly pernicious, unlike Cartesian dualism, which was respectable science. The new methodological dualism, in contrast, seems to me to have nothing to recommend it.

Far from bewailing the existence of mysteries-for-humans, we should be extremely grateful for it. With no limits to growth and development, our cognitive capacities would also have no scope. Similarly, if the genetic endowment imposed no constraints on the growth and development of an organism, it could become only a shapeless amoeboid creature, reflecting accidents of an unanalyzed environment, each quite unlike the next. Classical aesthetic theory recognized the same relation between scope and limits. Without rules, there can be no genuinely creative activity, even when creative work challenges and revises prevailing rules.

Contemporary rejection of mysterianism – that is, truism – is quite widespread. One recent example that has received considerable attention is an interesting and informative book by physicist David Deutsch. He writes that potential progress is “unbounded” as a result of the achievements of the Enlightenment and early modern science, which directed science to the search for best explanations. As philosopher/physicist David Albert expounds his thesis, “with the introduction of that particular habit of concocting and evaluating new hypotheses, there was a sense in which we could do anything. The capacities of a community that has mastered that method to survive, and to learn, and to remake the world according to its inclinations, are (in the long run) literally, mathematically, infinite.”

The quest for better explanations may well be infinite, but infinite is of course not the same as limitless. English is infinite, but doesn’t include Greek. The integers are an infinite set, but do not include the reals. I cannot discern any argument here that addresses the concerns and conclusions of the great mysterians of the scientific revolution and the Enlightenment.
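
To put that distinction in standard set-theoretic terms, as a gloss on the argument rather than anything in the essay itself: Cantor’s theorem shows that a set can be infinite and yet strictly smaller than another, so unboundedness says nothing about reach.

```latex
% A standard illustration (a gloss, not from the essay): the integers
% are infinite, yet of strictly smaller cardinality than the reals
% (Cantor's theorem), so an infinite repertoire can still exclude
% almost everything.
\[
  |\mathbb{Z}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|
\]
```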

We are left with a serious and challenging scientific inquiry: to determine the innate components of our cognitive nature in language, perception, concept formation, reflection, inference, theory construction, artistic creation, and all other domains of life, including the most ordinary ones. By pursuing this task we may hope to determine the scope and limits of human understanding, while recognizing that some differently structured intelligence might regard human mysteries as simple problems and wonder that we cannot find the answers, much as we can observe the inability of rats to run prime number mazes because of the very design of their cognitive nature.

There is no contradiction in supposing that we might be able to probe the limits of human understanding and try to sharpen the boundary between problems that fall within our cognitive range and mysteries that do not. There are possible experimental inquiries. Another approach would be to take seriously the concerns of the great figures of the early scientific revolution and the Enlightenment: to pay attention to what they found “inconceivable,” and particularly their reasons. The “mechanical philosophy” itself has a claim to be an approximation to common sense understanding of the world, a suggestion that might be clarified by experimental inquiry. Despite much sophisticated commentary, it is also hard to escape the force of Descartes’s conviction that free will is “the noblest thing” we have, that “there is nothing we comprehend more evidently and more perfectly” and that “it would be absurd” to doubt something that “we comprehend intimately, and experience within ourselves” merely because it is “by its nature incomprehensible to us,” if indeed we do not “have intelligence enough” to understand the workings of mind, as he speculated. Concepts of determinacy and randomness fall within our intellectual grasp. But it might turn out that “free actions of men” cannot be accommodated in these terms, including the creative aspect of language and thought. If so, that might be a matter of cognitive limitations – which would not preclude an intelligible theory of such actions, far as this is from today’s scientific understanding.

Honesty should lead us to concede, I think, that we understand little more today about these matters than the Spanish physician-philosopher Juan Huarte did 500 years ago when he distinguished the kind of intelligence humans shared with animals from the higher grade that humans alone possess, illustrated in the creative use of language, and, beyond that, from the still higher grade illustrated in true artistic and scientific creativity. Nor do we even know whether these are questions that lie within the scope of human understanding, or whether they fall among what Hume took to be Nature’s ultimate secrets, consigned to “that obscurity in which they ever did and ever will remain.”
 
Magic mushrooms, international law and the failed 'war on drugs' by Amanda Feilding
February 6th, 2012

It's been a busy fortnight. First the publication of two major peer-reviewed research papers about magic mushrooms that attracted worldwide publicity. Then off to Prague for an international drugs policy symposium. And just last week, news of a large grant for our next collaborative study with Imperial College. But I'm getting ahead of myself.

I established the Beckley Foundation some 14 years ago as a think tank on drugs policy. It was apparent even then that the "war on drugs" had failed. A 1997 report by the United Nations Drugs Control Programme put the value of the global trade in illicit drugs at around $400bn. Recent UN figures show that global production of opium (used mostly to make heroin) rose by almost 80% between 1998 and 2009. The market in illicit drugs is the third largest market in the world, after food and oil.

The health statistics are equally grim. In some countries – including some within the EU – more than three-quarters of intravenous drug users are infected with hepatitis C. Worldwide, there are several million non-fatal drug overdoses each year. Drug wars themselves also claim a dreadful toll: more than 47,000 deaths in the past five years for Mexico alone, according to the latest estimates.

However, while it is clear that existing policies are crying out for reform, what is less clear is how to foster the required political will.

The Beckley Foundation is the only organisation to combine rigorous scientific research with detailed policy analysis in an attempt to address that question. Our premise is simple: drugs policies should focus on health, harm reduction and cost-effectiveness, and should be based on the best available scientific evidence. That means trying out and evaluating a variety of policy ideas, as well as researching the physical effects of drugs.

Drugs policies around the world are based on three UN conventions, dating from 1961, 1971 and 1988. The conventions allow limited production and possession of drugs, but only for scientific and therapeutic use. In particular, parties to the 1988 Convention (which include the vast majority of UN member states) are obliged to criminalise the production, distribution, sale, purchase and possession of listed drugs other than for approved scientific and medical purposes. The result is the criminalisation of millions of people guilty of nothing other than personal drug use.

It is important to realise that an illegal market is a completely unregulated market. The evidence indicates that decriminalising personal possession and use saves valuable police time and criminal justice resources, and does not increase the prevalence of drug use. Moreover, because users are no longer regarded as criminals, their access to education and treatment is improved and the harm caused by problem drug use is reduced. That is why, together with the All-Party Parliamentary Group for Drug Policy Reform, we organised a meeting of government leaders, policy makers and experts at the House of Lords in November at which we launched a Global Initiative for Drug Policy Reform.

At that meeting, we presented a report commissioned by the Beckley Foundation into how the UN conventions could be amended to allow countries more freedom to create national policies based on their individual needs. We heard fascinating evidence from the Czech Republic, Portugal and elsewhere about their experiences of moving – within the "wiggle room" permitted by the UN conventions – towards policies based on public health, education and harm reduction rather than criminal enforcement.

At the symposium in Prague last week, a group of international experts again discussed possible reform mechanisms: partial decriminalisation under the existing conventions, and explicit decriminalisation or strict government regulation under amended conventions. We also considered problems caused by the current legal regime, such as the difficulty Bolivia faces in trying to get an exemption to permit the millennia-old indigenous tradition of chewing coca leaves.

The Beckley Foundation's focus on health-oriented policies demands a research programme to gather relevant evidence. That evidence also affords profound insights into how the brain works and potential therapeutic uses of psychoactive drugs.

Which brings me back to those recent scientific papers, products of a collaboration between the Beckley Foundation and Professor David Nutt's department at Imperial College London. Using the latest functional magnetic resonance imaging (fMRI) techniques, the team looked at the brains of subjects as they received an intravenous dose of psilocybin, a psychedelic drug found in magic mushrooms. The papers were published in the Proceedings of the National Academy of Sciences and the British Journal of Psychiatry.

Many users of psychedelics report the experience as a consciousness-expanding one, and conventional wisdom suggests that such drugs should increase brain activity and blood flow to the brain.

Instead, the research in PNAS showed that psilocybin decreased blood flow to specific regions of the brain that act as "connector hubs", where information converges and from where it is disseminated. In the paper, we suggest that these hubs normally facilitate efficient communication between brain regions by filtering out the majority of input in order to avoid over-stimulation and confusion. But the hubs also constrain brain activity by forcing traffic to use a limited number of well-worn routes. Psilocybin appears to lift some of these constraints, allowing a freer and more fluid state of consciousness.
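
The hub idea can be pictured with a toy network. What follows is a minimal sketch under assumptions of my own (a star graph standing in for a connector hub, and a path count standing in for "routes"), not a model from the paper: forcing all traffic through a hub keeps communication on a handful of well-worn routes, while direct peripheral links that bypass the hub multiply them.

```python
# Toy illustration only (not from the study): hubs constrain routing.
import itertools

import networkx as nx

hub = nx.star_graph(6)  # node 0 is the "connector hub"; nodes 1-6 are peripheral


def route_count(g, cutoff=3):
    """Count distinct simple paths (length <= cutoff) between peripheral pairs."""
    periphery = [n for n in g.nodes if n != 0]
    return sum(
        len(list(nx.all_simple_paths(g, a, b, cutoff=cutoff)))
        for a, b in itertools.combinations(periphery, 2)
    )


print("routes via the hub only:", route_count(hub))  # exactly one route per pair

# "Lift the constraint": add direct peripheral links that bypass the hub.
relaxed = hub.copy()
relaxed.add_edges_from([(1, 2), (2, 3), (3, 4), (5, 6)])
print("routes with the hub bypassed:", route_count(relaxed))
```

On this cartoon, deactivating a hub corresponds to traffic finding many more routes between regions; whether that fairly captures the fMRI result is an empirical question, but it illustrates the "limited number of well-worn routes" intuition.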

In the second study, subjects were given cues to recall positive events in their lives. With psilocybin, their memories were extremely vivid, almost as if they were reliving the events rather than just imagining them.

The findings suggest potential uses for psilocybin in the treatment of depression, a condition characterised by rigidly pessimistic thinking patterns. These fixated patterns are associated with overactivity in the medial prefrontal cortex – one of the same connector hubs deactivated by psilocybin. Psilocybin may also be a useful adjunct to psychotherapy, helping patients who are stuck in negative thought patterns to access distant memories and work through them.

The newly published results are exciting enough to have generated funding for a major study into psilocybin and depression, which will begin shortly. Watch this space.

Amanda Feilding is director of the Beckley Foundation, a think tank working for health-oriented drug policies based on scientific research.
 
Death Chip Drones kill innocent Pakistanis by Tom Burghardt
What Pentagon theorists describe as a "Revolution in Military Affairs" (RMA) leverages information technology to facilitate (so they allege) command decision-making processes and mission effectiveness, i.e. the waging of aggressive wars of conquest. It is assumed that U.S. technological preeminence, referred to euphemistically by Air Force Magazine as "compressing the kill chain," will assure American military hegemony well into the 21st century.
 
Auctioning the D-Block
FCC says Congress didn't provide funds for public safety airwaves.

All five members of the Federal Communications Commission (FCC) took some heat Tuesday over the lackluster response to the public safety D-block spectrum auction, but the commissioners placed some of the blame on Congress and its failure to allocate adequate funding for the project.

"Congress has not yet passed any law that would require funding to go to the public safety networks," FCC Chairman Kevin Martin said during a House Energy and Commerce Internet and Telecom subcommittee hearing on the 700-MHz auction.

The preferable option would be to fund and build a public safety network with federal dollars, but without that option, a public-private partnership was the only alternative, Martin said.

"Congress has some responsibility too," said Democratic Commissioner Jonathan Adelstein.