
Open Your Mind to Real Computer Science

I believe, and I am not entirely alone in saying this, that today's computer science paradigms and philosophies are, in many ways, completely deranged.

For starters, computer architectures have become a bloated, complex, and unintelligible bundle of hacks designed to perpetuate the same wrong idea about computer programming and design. As John Backus said in his 1977 Turing Award lecture (35 years ago!):

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.

In the very same paper, entitled 'Can Programming Be Liberated from the von Neumann Style?', he outlines possible alternatives to the von Neumann paradigm, together with their advantages and drawbacks, such as the functional and applicative approaches. Ironically enough, one year after that lecture Intel introduced the 8086, the architecture that remains dominant to this day, mainly because of the need for backward compatibility with legacy software. After a while, the new and exciting ideas proposed by Backus lost focus, and all the effort went into getting the most out of x86 and the von Neumann architecture. With it came the dominance of the C programming language, designed to work closely with the way the machine behaves, nuts and bolts included. The exciting research about thinking differently simply vanished from the world.
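To make Backus's contrast a little more concrete, here is a small sketch of the same inner product written first word-at-a-time and then in an applicative, whole-collection style. It is written in Smalltalk (the language discussed in the second post below) rather than in Backus's FP notation, and the data and variable names are my own illustration; you could evaluate it in a Squeak or Pharo workspace.

    "Word-at-a-time style: push individual values through an index and an accumulator."
    | xs ys sum |
    xs := #(1 2 3 4).
    ys := #(10 20 30 40).
    sum := 0.
    1 to: xs size do: [:i | sum := sum + ((xs at: i) * (ys at: i))].
    Transcript show: sum printString; cr.    "300"

    "Applicative style: treat the collections as whole conceptual units."
    Transcript show: ((xs with: ys collect: [:a :b | a * b])
        inject: 0 into: [:acc :each | acc + each]) printString; cr.    "300"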

The reasons why the von Neumann architecture, and its most prominent example, the x86, became the dominant way of thinking about computing are technical from some points of view and inertial from others, but I believe the most important reason is neatly demonstrated in a famous essay by Richard Gabriel called 'The Rise of Worse is Better'. The essay should be read in its entirety, but I bring to light an excerpt that exemplifies the difference between what he calls the 'worse is better' approach and 'the right thing':

Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called “PC loser-ing” because the PC is being coerced into “loser mode,” where “loser” is the affectionate name for “user” at MIT.

The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.

The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix-namely, implementation simplicity was more important than interface simplicity.

The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn’t understand (I’m not sure I do either).

To put it shortly, the worse-is-better approach is simple to design and implement, and therefore potentially advances and spreads at a quick pace. However, it places on the programmer most of the burdens that should have been dealt with elsewhere. It gets worse when people start to think that worse-is-better is the only possible thing, and then decide to improve on top of it, resulting in complex, bloated, and insane (the opposite of sane) designs that were not the right thing to begin with and never can be.
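As a rough sketch of that extra burden, here is what the test-and-loop from Gabriel's story looks like from the caller's side, written as Smalltalk-flavored pseudocode; callReliably, interruptibleCall, and the #interrupted error code are hypothetical names of my own, not any real API. The point is only that, under the "always return, possibly with an error code" convention, every correct caller has to carry the retry logic itself.

    callReliably
        "Keep retrying until the underlying routine actually completes;
         the caller, not the system routine, carries this logic."
        | result |
        [result := self interruptibleCall.    "hypothetical routine that may return an error code instead of finishing"
         result = #interrupted] whileTrue.
        ^ result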

Worse is better is all around computer science today. It goes as deep as the computer architecture itself (the von Neumann design) and contaminates everything up the chain: the operating systems (Unix-like and C-oriented), the programming languages, the software. The ultimate result is unreliable, unstable, unnecessarily complex computer systems.

As an example of what a sane computing environment should look like, consider reading the article 'The Seven Laws of Sane Personal Computing'. Also highly recommended is a famous Alan Kay presentation entitled 'The Computer Revolution Hasn't Happened Yet'.

To close it up, I think something really important needs to be said: computer science is not about what is there. It is not about the tools we use; it is not about the computers themselves. Computer science is about what could be. It is an abstract science strongly rooted in logic and mathematics, and the only limit is our imagination. The fact that we use computers to realize these visions doesn't mean we are attached to them, and it doesn't mean there is only one way to do things. We should be doing real computer science; we should be cutting the strings loose and starting to think about how everything could be entirely different. We need to start the computer revolution, because alas, it hasn't happened yet.


My Experience with Smalltalk

The other day I finally decided to try Smalltalk, convinced by accounts that it is a truly unique language, one of those that open your mind to how programming should really be. Also because of this guy, Alan Kay, none other than the man who coined the term object orientation and created the Smalltalk language itself. In this talk (http://video.google.com/videoplay?docid=-2950949730059754521), entitled "The Computer Revolution Hasn't Happened Yet", he discusses, among other things, the wrong paths taken by programming languages and by computing in general.

Truth be told, I had already acquired a certain distaste for programming languages like Java and C++, and after trying functional languages like Haskell, I got annoyed every fateful time I had to write a for loop. It was time for a little Smalltalk.

Well, to start with, the most notable peculiarity of Smalltalk is the environment. Development happens inside an image executed by a Smalltalk virtual machine, and everything, absolutely everything inside that environment is an object the programmer can manipulate: the windows, the buttons, the menus, and so on. The ease with which you create, manipulate, and edit objects in real time is incomparable to any other existing programming environment.
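To give the flavor: in a Squeak or Pharo workspace you can evaluate lines like the ones below against the live image and poke at anything, classes and tools included. These throwaway expressions are my own, just to illustrate the point.

    3 inspect.                    "opens an inspector on the integer 3, a perfectly ordinary object"
    Object allSubclasses size.    "print it: how many classes currently live in the image"
    Transcript class inspect.     "even the class of the Transcript you log to is an inspectable object"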

The second notable characteristic is the syntax. Or rather, the lack of it: Smalltalk has only six reserved words: true, false, nil, self, super, and thisContext. Everything else boils down to sending messages to objects, because everything in the Smalltalk world is an object. How do you create a new class? Send a message to the class Object (a class is an object too), like this: "Object subclass: #NovoObjeto". How do you write an if or an else? Send a message to a Boolean object, like this: (x < 3) ifTrue: [doThis] ifFalse: [doThat]. A loop? A message to a collection of objects... and so on. It is not very different from what newer 'modern' languages such as Python and Ruby have been doing, though obviously they do it in a much dirtier and more inelegant way. Ruby, by the way, is strongly influenced by Smalltalk; some even say Ruby and Smalltalk are essentially the same thing. In truth, Ruby is several steps behind, both in elegance and simplicity of syntax and in implementation. And for the record, Python is garbage, not to say a joke, in terms of language design.
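A few concrete lines of that kind, which you could evaluate in a Squeak or Pharo workspace; the printed strings are my own placeholders, and the class definition uses Pharo-style keyword arguments (Squeak's version differs slightly).

    | x |
    x := 2.
    (x < 3)
        ifTrue: [Transcript show: 'doThis'; cr]
        ifFalse: [Transcript show: 'doThat'; cr].    "a conditional: messages sent to a Boolean"

    1 to: 3 do: [:i | Transcript show: i printString; cr].    "a loop: a message sent to a number"
    #(1 2 3 4) collect: [:each | each * each].                "print it: #(1 4 9 16), a message sent to a collection"

    "Creating a class: just another message, sent to the class Object."
    Object subclass: #NovoObjeto
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'Experiments'.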

In one day, I learned everything about how to program in Smalltalk. And the merit isn't mine; there really isn't much to learn in terms of syntax. The rest is basically getting familiar with the libraries and with the environment for building your own objects. That, of course, takes a lot more time and familiarity, but at least the syntax won't get in your way, and everything will feel more like a walk in the park.

My conclusion: what on earth are people doing with OO? Java, C++, Python, even Ruby... Why? A possible explanation may be found in this very interesting text: http://www.jwz.org/doc/worse-is-better.html. Learn Smalltalk, even if you never end up using it in practice, just for the reality check. There is no way you will regret it. Y U NO SMALLTALK?