How Today’s Computers Weaken Our Brain
By Tim Wu
September 9, 2013
At 10 P.M. on September 22, 1912, Franz Kafka, then a twenty-nine-year-old lawyer, sat down at his typewriter in Prague and began to write. He wrote and wrote, and eight hours later he had finished “Das Urteil” (“The Judgment”).
Kafka wrote in his diary, “I was hardly able to pull my legs out from under the desk, they had got so stiff from sitting. The fearful strain and joy, how the story developed before me, as if I were advancing over water.” He later described the one-sitting method as his preferred means of writing. “Only in this way can writing be done, only with such coherence, with such a complete opening out of the body and soul.”
In April, 1951, on the sixth floor of a brownstone in New York’s Chelsea neighborhood, Jack Kerouac began taping together pieces of tracing paper to create a hundred-and-twenty-foot-long roll of paper, which he called “the scroll.” Three weeks later, typing without needing to pause and change sheets, he’d filled his scroll with the first draft of “On the Road,” without paragraph breaks or margins.
In 1975, Steve Jobs, working the night shift at Atari, was asked if he could design a prototype of a new video game, Breakout, in four days. He took the assignment and contacted his friend Steve Wozniak for help. Wozniak described the feat this way: “Four days? I didn’t think I could do it. I went four days with no sleep. Steve and I both got mononucleosis, the sleeping sickness, and we delivered a working Breakout game.”
The accomplishments of Kafka, Kerouac, and Wozniak are impressive, but not completely atypical of what can be achieved by talented people in states of supreme concentration. The more interesting question is this: Would their feats be harder today, or easier?
On the one hand, today’s computers feature programming and writing tools more powerful than anything available in the twentieth century. But, in a different way, each of these tasks would be much harder: on a modern machine, each man would face a more challenging battle with distraction. Kafka might start writing his book and then, like most lawyers, realize he’d better check e-mail; so much for “Das Urteil.” Kerouac might get caught in his Twitter feed, or start blogging about his road trip. Wozniak might have corrected an erroneous Wikipedia entry in the midst of working on Breakout, and wrecked the collaboration that later became Apple.
Kafka, Kerouac, and Wozniak had one advantage over us: they worked on machines that did not readily do more than one thing at a time, and so did not easily yield to our conflicting desires. And, while distraction was surely available—say, by reading the newspaper, or chatting with friends—there was a crucial difference. Today’s machines don’t just allow distraction; they promote it. The Web calls us constantly, like a carnival barker, and the machines, instead of keeping us on task, make it easy to get drawn in—and even add their own distractions to the mix. In short: we have built a generation of “distraction machines” that make great feats of concentrated effort harder instead of easier.
It’s time to create more tools that help us with what our brains are bad at, such as staying on task. They should help us achieve states of extreme concentration and focus, not aid in distraction. We need a new generation of technologies that function more like Kerouac’s scroll or Kafka’s typewriter.
To understand what has happened, we need to return to the nineteen-sixties, when computers were giant, slow machines that served dozens and sometimes hundreds of people at once. Such computers needed a way to deal with competing requests for processing resources. Engineers devised various techniques for handling this problem—known first as time-sharing and later as multitasking operating systems. In essence, multitasking algorithms used clever techniques to share the available computing power among multiple users as fairly and smoothly as possible. With multitasking, many people sharing a single computer could each have the illusion of having a machine of their own.
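To make the mechanism concrete, here is a minimal sketch, in Python, of round-robin time-slicing, one simple scheduling policy of the kind such systems used; the function and job names are illustrative assumptions, not code from any historical system.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Share one processor among several jobs by running each for at
    most one fixed time slice (the quantum) per turn."""
    queue = deque(jobs)   # each job is a (name, work_remaining) pair
    timeline = []         # the order in which slices actually run
    while queue:
        name, remaining = queue.popleft()
        slice_run = min(quantum, remaining)
        timeline.append((name, slice_run))
        if remaining > slice_run:      # unfinished jobs rejoin the line
            queue.append((name, remaining - slice_run))
    return timeline

# Three users sharing one machine: the interleaving is fast and fair
# enough that each can feel the computer is theirs alone.
print(round_robin([("kafka", 5), ("kerouac", 3), ("wozniak", 4)], quantum=2))
```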
The engineers who designed time-sharing and multitasking probably never imagined that their ideas would be used for personal computers—if each user already had a computer, why would he or she need multitasking? And when the first mass-market personal computers, like the Apple II, arrived in the late seventies, their highly limited processing power was used to perform a single task at a time. It was programming or word processing, but not both at once.
The rise of multitasking capabilities in personal computers cannot be separated from other developments, beginning with the familiar desktop/window interface, which originated in the sixties and reached the public in the eighties via the original Apple Macintosh. The very idea of a “desktop” with different “windows” implies a user who can switch between tasks. As Alan Kay, one of the inventors of the first functioning window-style system, at Xerox in the seventies, explained in an interview, “We generally want to view and edit more than one kind of scene at the same time—this could be as simple as combining pictures and text in the same glimpse, or deal with more than one kind of task, or compare different perspectives of the same model.”
The purpose of multitasking had gone from supporting multiple users on one computer to supporting multiple desires within one person at the same time. The former usage resolves conflicts among the many, while the latter can introduce internal conflict; when you think about it, trying to fulfill multiple desires at once is the opposite of concentration.
A second crucial advance was the huge increase in the speed of computer processors over the past three decades. Only with this kind of power could personal computers multitask in an acceptable way. It was immediately assumed that, once achieved, multitasking represented an important technical advance over “single-tasking” machines. For example, an old guide to Apple operating systems declared, “Way back when Macs were new, operating systems were meant to be operated by one user working with one program. Obviously, this is no longer the case. Today, we want our computers to do more, faster, with less work on our part.”
Of course, in a technical sense a multitasking machine is more advanced. But we can already see where things might be going astray. We don’t really want our computers to accomplish more—it’s us, the humans, who need to get things done. This subtle point is all-important, and shows a need to return to the basics of what computers are for.
When, in the sixties, J. C. R. Licklider and Douglas Engelbart proposed that computers should ultimately serve as a tool of human augmentation, they changed what computers would come to be. The computer, they argued, shouldn’t try to be independently intelligent, like R2-D2. Rather, it should be a tool that works with the human brain to make it more powerful, a concept that Licklider called “man-computer symbiosis.”
From this perspective, the multitasking capabilities of today’s computers are sometimes a form of augmentation—but only sometimes. It can be helpful to toggle between browser pages and a to-do list, or to talk on Skype while looking at a document. But other times we need to use computers for tasks that require sustained concentration, and it is here that machines sometimes degrade human potential.
While the brain is good at many things, it is rather bad at others. It’s not very good at achieving extreme states of concentration through sustained attention. It takes great training and effort to maintain attention on one object—in what Buddhists call concentration meditation—because the brain is highly susceptible to both voluntary and involuntary demands on its attention. Second, the brain is not good at conscious multitasking, or trying to pay active attention to more than one thing at once. Perhaps computer designers once hoped that our machines could train the brain to multitask more effectively, but recent research suggests that this effort has failed.
In short, we are easy to distract, and very bad at doing two or more things at the same time. Yet our computers, supposedly our servants, constantly distract us and ask us to process multiple streams of information at the same time. It can make you wonder, Just who is in charge here?
To be sure, efforts are being made to deal with the problems I’ve described. The designers of the Freedom program give users a way to boost productivity by switching off the Internet, the chief source of distraction in our times. Some people turn to caffeine or Adderall as an aid to concentration, or achieve similar effects through the use of emotions like the fear created by deadlines or the possibility of being fired.
But we should be searching for solutions that don’t rely on drugs or imminent job loss. What we need are machines that are built from the ground up purposely to minimize distraction and help us sustain attention for hard tasks. We need computers and devices that return to the project of human augmentation by taking the brain’s limits seriously, and helping us overcome them.
What this looks like, I’m not exactly sure, although I am sure we should be trying to find out. Perhaps all we need are computers that lock into different modes: chore mode, communication mode, and concentrated work mode. In work mode, the machine would do what it could to keep you on track, in ways both subtle and less so. We also need designers cognizant of the brain’s weaknesses, who strive to eliminate or minimize unnecessary distractions, such as beeps for e-mails, bouncing icons, and unnecessary pop-up windows.
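As one sketch of what such mode-locking might look like, the hypothetical snippet below models work mode as a session that silences every registered interruption until it ends; the names and the notification registry are invented for illustration, not a description of any real system.

```python
import time
from contextlib import contextmanager

# Hypothetical registry of interruptions; a real machine would hook into
# the mail client, the chat apps, and the window manager instead.
active_distractions = ["e-mail beeps", "bouncing icons", "pop-up windows"]

@contextmanager
def work_mode():
    """Lock the machine into concentrated work: silence every registered
    distraction, and restore them only when the session ends."""
    silenced = list(active_distractions)
    active_distractions.clear()
    started = time.time()
    try:
        yield
    finally:
        active_distractions.extend(silenced)  # distractions return afterward
        print(f"Stayed on task for {time.time() - started:.0f} seconds.")

with work_mode():
    assert not active_distractions  # nothing may interrupt in here
    # ... write, in one sitting, without a single beep ...
```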
There will always be some who say that all anyone needs to deal with these problems is better discipline or will power—that Kafka, being Kafka, would stay on task in 2013 just as well as in 1912. I’m not so sure. Discipline is useful, but so are an environment and tools that actually help, rather than hinder. The strange part is that we now have technological powers to shape our environment that were unimaginable to earlier generations, yet we don’t use them with a realistic view of the brain’s weaknesses.
Perhaps a single rule is enough: our computers should never make us stupider.