“Vannevar Bush, the engineer who designed the world’s most powerful analog computer, envisioned the development of a new kind of computing machine he called Memex. For many computer and information scientists, Bush’s Memex has been the prototype for a machine to help people think. This volume, which the editors have divided into sections on the creation, extension, and legacy of the Memex, combines seven essays by Bush with eleven essays by other authors that set his ideas within a variety of contexts. The essays by Bush range chronologically from the early “The Inscrutable Thirties” (1933), “Memorandum Regarding Memex” (1941), and “As We May Think” (1945), to “Memex II” (1959), “Science Pauses” (1967), “Memex Revisited” (1967), and a passage from “Of Inventions and Inventors” (1970). Bush’s essays are surrounded by four chapters that place his changing plans for the Memex within his career and within information technology before digital computing. The contributors include Larry Owens, Colin Burke, Douglas C. Engelbart, Theodor H. Nelson, Linda C. Smith, Norman Meyrowitz, Tim Oren, Gregory Crane, and Randall H. Trigg.”
“The book examines Kant’s influence on five strands of nineteenth-century scientific thought: Naturphilosophie and the effect of German Romanticism (especially Goethe) on biology; Fries’s philosophy of science; Helmholtz’s rejection of Naturphilosophie and Romanticism; neo-Kantianism and its return to “methodological” concerns in natural science and academic philosophy; and Poincaré and his reflections on scientific epistemology. The essays give a nuanced picture of Kant’s legacy to nineteenth-century thinkers and of the rich interaction between philosophical ideas and discoveries in the natural and mathematical sciences during this period. They point to the ways that the scientific developments of the nineteenth century link Kant’s thought to the science of the twentieth century.”
“Will we understand how such intelligent networks work? Perhaps the networks will be opaque to us, with weights and biases we don’t understand, because they’ve been learned automatically. In the early days of AI research people hoped that the effort to build an AI would also help us understand the principles behind intelligence and, maybe, the functioning of the human brain. But perhaps the outcome will be that we end up understanding neither the brain nor how artificial intelligence works!”
The fact that Babbage’s Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage’s machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function.
“This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network.”
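The core idea in the abstract — a fixed, untrained nonlinear medium that expands inputs into a richer spatial pattern, from which a simple linear readout can solve XOR — can be illustrated with a minimal sketch. This is a toy reservoir-computing analogue, not the paper's actual water-tank setup: the random `tanh` projection here stands in for the wave interference pattern, and a least-squares linear readout stands in for the perceptron. All names and parameters below are illustrative assumptions.

```python
import numpy as np

# Toy analogue of a Liquid State Machine (assumed setup, not the paper's):
# a fixed random nonlinear "reservoir" plays the role of the water surface,
# expanding each 2-bit input into a higher-dimensional pattern. A single
# linear readout then separates XOR, which is impossible on the raw inputs.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 1, 1, 0], dtype=float)                      # XOR targets

# The "liquid": a fixed random projection plus tanh nonlinearity.
# Like the water, it is never trained -- its dynamics come "for free".
W_in = rng.normal(size=(2, 20))
b = rng.normal(size=20)
states = np.tanh(X @ W_in + b)  # spatial pattern evoked by each input

# Linear readout: least-squares fit on the reservoir states (with bias).
S = np.hstack([states, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(S, y, rcond=None)
pred = (S @ w > 0.5).astype(int)
print(pred.tolist())  # [0, 1, 1, 0] -- XOR solved by a linear readout
```

The design point mirrors the abstract: all the nonlinear work happens in the fixed medium, so the trainable part can stay as simple as a perceptron.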