• X105.82
  • For help or in case of trouble please call network control center at Bolt Beranek and Newman: 617-661-0100
  • Top of IMP in manila envelope labeled "Emergency Start-Up Procedure for IMP"
  • Developed for ARPA by BBN
  • U.S. Government Property
  • 16 bit memory word

The First Internet Router


For the ARPANET in 1969, the BBN Interface Message Processor (IMP) did the packet routing.

(It's another cool artifact of history in the back room, awaiting installation in the new Computer History Museum exhibition hall.)

The Honeywell 516 minicomputer inside had only 6,000 words of software (in 12K of core memory) and monitored network status and statistics. Cost: $82,200

The first transmission between UCLA and SRI took place on October 29, 1969.
Larry Roberts, architect of the Internet, diagrammed some of that rich history at our CEO Summit.


  1. pegleg000 97 months ago | reply

    dave halliday If Newton had the internet, would he have been sitting under the tree, waiting for the apple to fall? :-o Or would he have been on-line playing the early predecessor of Doom? :-)

  2. Astrocatou 97 months ago | reply

    pegleg000
    Oh ye of little faith.....he would have been chatting up all those Continental geniuses...instead of hanging out at the London Academy of Science...or wherever he hung out.

    solerena
    Yes it's happening....finally.

    But no one is answering my query...
    Why is all this connectivity not being better utilized for science etc...?
    We have so much data coming in from some space probes for example that it needs years to analyze it all...
    But maybe we are bogging down on minutiae..

  3. obskura 97 months ago | reply

    > Why is all this connectivity not being better utilized for science etc...?
    To some extent it is, but of course everything could always be better. The near-100% availability of almost everything has transformed science, I think: every research paper on the planet, and increasingly also e-versions of textbooks, data sets (the human genome, weather data, space data, even the latest DNA sequences of animals you never heard of), petabytes of particle collision sensor data, etc. Libraries are becoming obsolete. Even scientific journals are becoming less important - who still reads the monthly paper copy, instead of just googling for interesting papers? In technical fields like computer science, conference papers are becoming more important now, as they are peer reviewed as well, and usually more up-to-date. And due to copyright issues, conference papers are usually on the author's website ready for download, while journals usually don't allow that.
    Maybe I'm easily impressed - but when Douglas Adams wrote, about 20 years ago, that there is the "hitchhiker's guide" which contains everything interesting about the whole universe and updates itself through the sub-etha, it seemed plausible but pretty futuristic. Now I have the entire text of Wikipedia on a fingernail-sized SD card in my tablet, and via 3G I can access almost anything known to mankind within seconds. Pretty cool... I think Newton would have been blown away.

    > But maybe we are bogging down on minutiae..

    There is some truth to that. A lot of computing tasks are fairly simple really, and some may remember that we used to do the same things 10-15 years ago on computers with 100 MHz and 8 MB of RAM.
    Now, if we just remembered how software was written in those days, we would get an instant boost in computing power by about a factor of 1000... I think the Apollo mission would have been glad to have the computing power of a current laptop for just that one second it takes to open a menu...

  4. Astrocatou 97 months ago | reply

    obskura

    "Now, if we just remembered how software was written in those days, we would get an instant boost in computing power by about a factor of 1000...."

    I am a Luddite...can you expand on this comment..?

    I was mulling over my rather cynical thoughts on the way to work...
    Maybe looking back 400 years science seemed "faster or better" in some way because optically it looks like a large step was made by one man...ie in a short space of time.
    But I suppose science is advancing faster now...
    Maybe like a child growing...you don't notice it if it is right in front of you..

  5. jurvetson 97 months ago | reply

    I agree completely. Our perception of time is clocked by the pace of salient events.

    Science has come a long way in 400 years! Ref. Kevin Kelly's new book.

    Software has improved in a number of areas. Geordie Rose of D-Wave argues that advances in algorithms outperform advances in hardware for hard problems. For example, a 1977-era computer running the best algorithm from 2007 for factoring integers would dramatically outperform a 2007 computer running code from 1977. (Specifically, the Apple ][ running a quadratic sieve would outperform the IBM BlueGene/L supercomputer at LLNL running Pollard's Rho algorithm.) A small sketch of Pollard's Rho follows this comment.

    When we had sparse resources, parsimonious code was an art. The entire Apollo spacecraft had less computing power than a Furby. They turned to various analog and mechanical feedback loops to embed "computing" in the system. In the early days of Apple ][ programming, game programmers found non-obvious ways to "magically" squeeze graphics performance and real-time gameplay from a fairly limited set of resources. When the iron curtain lifted, the programmers there were used to making more out of sparse resources (compare Skype from Estonia to the MSFT bloatware of the time). With abundant memory, we are a bit more sloppy. Memory leaks galore. Heck, Microsoft Office is now larger than the human genome.

    The same concern, or opportunity depending on your perspective, can be seen in the evolution of science in the age of big data. "Unlimited power limits intellectual parsimony," as I heard at an SFI event @ Google recently. “With machine learning, we are creating electronic savants. They are happy in a high-dimensional space. They have no desire to reduce. What we want is electronic Keplers that can recognize the ellipse, not savants that can force-fit a heliocentric model.”

    There was a nod to Norvig's paper on The Unreasonable Effectiveness of Data:
    “Perhaps we’re doomed to complex theories that will never have the elegance of physics equations?” (like F=ma)

    Some excerpts from the Google perspective:
    “simple models and a lot of data trump more elaborate models based on less data.”

    “So, follow the data. Choose a representation that can use unsupervised learning on unlabeled data, which is so much more plentiful than labeled data. Represent all the data with a nonparametric model rather than trying to summarize it with a parametric model, because with very large data sources, the data holds a lot of detail. For natural language applications, trust that human language has already evolved words for the important concepts. See how far you can go by tying together the words that are already there, rather than by inventing new concepts with clusters of words. Now go out and gather some data, and see what it can do.”
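
A minimal Python sketch of Pollard's rho, the "1977-era" factoring algorithm contrasted with the quadratic sieve in the comment above. This is illustrative only and not from the thread; the number being factored and the retry logic are arbitrary choices.

```python
# Pollard's rho: finds a non-trivial factor of a composite number by walking
# a pseudo-random sequence x -> x^2 + c (mod n) and watching for a cycle.
from math import gcd
import random

def pollard_rho(n):
    """Return a non-trivial factor of a composite n (loops forever on primes)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # "tortoise" takes one step
            y = (y * y + c) % n          # "hare" takes two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:                        # found a real factor, not all of n
            return d

if __name__ == "__main__":
    n = 8051                              # = 83 * 97
    f = pollard_rho(n)
    print(f"{n} = {f} * {n // f}")
```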

  6. Astrocatou 97 months ago | reply

    read in "excerpts"...
    "we’ve still got a long way to go before the large-scale scientific data locked up in journal articles is freely available for unrestricted mining."

    And the Norvig article is $19 to read...
    So I guess that is a case in point !!

    Any good reference to a (free) discussion somewhere about the limitations of the internet as a tool for accelerated learning ?

    !!

  7. rocketmavericks 97 months ago | reply

    Getting to the discussion late here. A couple comments on previous threads....

    Large finger switches (and LEDs) - these, I think, are the address latches, as I recall. I worked with some of them on the old HP 1000s and DEC PDP machines. To boot the machine, you had to input the jump location address of the program after you loaded it. You would program the loader code into memory first, use a paper tape or large floppy (8") to load the program into memory, then use these switches to set the processor jump address to start the program. Basically, you were the operating system in these older machines. Literally. (A toy sketch of this boot procedure follows this comment.)

    Algorithm vs Efficiency Discussion - Interestingly, I think Taylorism comes into play in all this. In the early days, most of the code written was for dedicated application functions. This allowed a single individual or small group of individuals to have complete knowledge of the machine and the software. One of the keys to efficient algorithms is software engineers having good knowledge of the underlying hardware, how it works, and most importantly how the software they write impacts the hardware.

    As the data space increases, the desire to apply abstraction becomes more important. Meanwhile, as machine complexity increases, specialization becomes more and more important. When I was running tech start-ups in the Valley and looking to hire software engineers, I tended to look for engineers who not only knew how to write efficient algorithms, but also understood how to take advantage of the hardware they ran upon. The problem was that Taylorism was driving down the availability of engineers with this skill set, out of a desire to generalize work and therefore reduce costs. Most of the engineers began to specialize either at the machine level (device drivers), at the level of abstraction applied to the operating system, or at the data level through object-oriented methodologies, with the object libraries being specific to operating-system platforms and abstracted away from the hardware and device-driver components. Newer algorithms were thus written not for performance, but for cost leverage by a team, and the hardware performance was completely lost. Thus the genome vs Outlook scenario.

    I think to some extent we have traveled down the wrong tree branches, which has led us to this fork: software has become too dependent on server platforms because of the economic advantages of centralized administration and reduced operating costs. The alternative would be to drive specialization by distributing simplified tasks to embedded processors, where efficiency in machine usage can once again be attained. The problem is that the overhead of inter-object communication becomes too burdensome, as demonstrated by CORBA and other attempts to overlay an object-oriented architecture on a distributed computational architecture.

    All of this stuff gives me a headache now. I long for the days of simplicity in computational systems, but I guess we have gotten to Von Neumann's ultimate joke: a fixed instruction set to solve arbitrary problems. I suspect that there is an equally interesting discussion to be had about the second law of thermodynamics with regard to something I call computational entropy. I suspect that what we have tripped over in the discussion with Steve is a classic case of computational entropy, or as I more simplistically put it, "If you want to use Microsoft Windows to do something, anything, you have a fixed amount of energy that must be wasted before you can accomplish any useful work." I call this Atchison's computational law of entropy, as applied to Microsoft products, but I suspect it applies in a number of computational domains in general. It would be an interesting concept to explore as a quantitative measure of software efficiency. I guess I would have to think about Atchison's first law of computational energy conservation. Maybe for a fixed watt, how many machine cycles can be accomplished. Hmmm.... maybe I am on to something here?

    Would be interesting to measure the computational entropy of the genome. Maybe a discussion I should have with Craig Venter or George Church, should I run into them at Steve's office one day......
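
The boot procedure described in comment 7 (toggle a loader into core, set the jump address, press start) can be sketched in a few lines of Python. This is a toy simulation, not the Honeywell 516 or HP 1000 front panel; the word width, addresses, and "bootstrap" contents are invented for illustration.

```python
# Toy front-panel simulation: deposit a loader word by word via the switches,
# set the processor jump address, then start. The operator is the OS.

class FrontPanel:
    def __init__(self, memsize=4096):
        self.core = [0] * memsize      # words of "core" memory
        self.pc = 0                    # program counter / jump address

    def deposit(self, address, word):
        """Set the switches to `word` and press DEPOSIT at `address`."""
        self.core[address] = word & 0xFFFF   # keep it to a 16-bit word

    def set_jump_address(self, address):
        """Dial in the program's entry point on the address switches."""
        self.pc = address

    def run(self):
        print(f"START pressed: executing from address {self.pc:06o} (octal)")
        # a real machine would now fetch, decode and execute from core[pc]

panel = FrontPanel()
bootstrap = [0o010100, 0o020101, 0o030102]    # made-up 3-word paper-tape loader
for offset, word in enumerate(bootstrap):
    panel.deposit(0o100 + offset, word)       # toggle each word in by hand
panel.set_jump_address(0o100)                 # point the machine at the loader
panel.run()
```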

  8. Astrocatou 97 months ago | reply

    Mavericks Civilian Space Foundation
    Wow... a bit deep for a Luddite...
    I better talk to my nephew more often...but fascinating to read about.

    You guys sound a bit like WWII veterans back in the 60's....(no offense !)
    Sort of a "hands on oral history" of something I never paid any attention to...
    I just bitched that it took 15 minutes for "AOL" to deliver a satellite weather image on my dial-up modem...(!!)
    My father, in the ? early seventies (I THINK), used to write his own software for a geological (gravity survey) mapping program...(? COBOL)...I think back then things were more user-specific...
    Texas Instruments, for some reason, supplied a lot of their gear...(? 1968)
    I think the software was just driving a glorified printer....bored me to death..
    NOW it all interests me...but more as history, which generally interests me.
    So thanks...and hoping to hear more.

  9. obskura 97 months ago | reply

    When I studied computer science, my software engineering professor told us to forget about code optimisation - as by the time the project is finished, computers will have doubled their speed and memory.
    I think the problem is that the "older" generation of programmers and developers have such a deeply ingrained understanding of hardware, software and careful resource management that they sometimes have to force themselves to forget about it, as the hardware is so much more powerful now. Optimisation can make code unmaintainable. In my own case, as I started on a 7 MHz computer (I'm not that old, that was in the 1990's...), even if I write clean, abstract, readable code I still have some brain cells switched on worrying about efficiency and resource requirements. I think this works rather well.
    However, we now have a new generation of IT students and professionals who never had the experience of sparse resources, and even worse, they have also forgotten that computers used to be responsive. Of course loading a program from floppy disks took a while, but once it was loaded the user interface would always respond virtually instantly. Now it can easily take a second to open the "start" menu or a Finder / Explorer window, or a few seconds to open a simple text editor. During that second the hard drive can move 100 MB of data, RAM can transfer a GB, and the CPU(s) can execute 100 billion instructions. To open a list of text items... this doesn't make much sense. But as everyone got used to it, there is no perceived need to fix it.
    Ironically, mobile devices like the iPhone are a lot more responsive than their laptop counterparts, despite their much slower, smaller hardware. Also, Apple has spent a lot of effort making their new version of Mac OS X faster and smaller - probably the first time that a software upgrade doesn't have many new features, but a lot less code... So maybe things will change.

    I agree that there has been huge progress in algorithms for specific problems. Getting the complexity down from O(n^3) even to just O(n^2 log n) will make a huge difference for large problem sizes. What I was trying to say, though, is that most of the time we waste most of our computing power with incredibly inefficient software that doesn't even achieve much (like opening a menu...). To a large extent this is due to attempts to make software quicker and cheaper to write, more flexible, more adjustable. For example, if you want to add a button to your program, you just call a graphical API function that creates a button. However, without you knowing, this function will not just create a button. First it might create a button factory object, which might then create a bunch of other "factory" objects for language, colour scheme, look-and-feel, fonts, icons, etc. The information about the look and style of the button is not saved in a binary format, but in human-readable, plain-text XML files which need to be parsed (= another few hundred to a thousand objects created). These "factories" then create objects for the components of the button, which are finally put together and returned to the caller. So creating a simple button might have triggered a few hundred function calls all throughout the system, created thousands of objects, parsed a dozen XML files, etc., and then all these objects are de-allocated again as they're not needed any more. De-allocation is also not done explicitly; instead a "garbage collector" cleans up afterwards. The levels of abstraction are very flexible and powerful, but without an eye for efficiency, an incredible waste of resources. Even if one wanted to keep this approach for flexibility, just simply "pre-compiling" everything once (all the XML files, language settings, etc.) and storing the binary image of the result until some options changed would already make a huge difference.
    The "other problem" in my opinion is that most languages don't offer very good support for concurrency (creating new processes/tasks, protecting data, synchronisation, etc.), and as a result, most code is single-threaded. If everything that takes longer than 1/10 of a second were spawned off as a separate thread that reports back when it's done, then the core components of the user interface could be 100% responsive all the time, while some more time-consuming things (e.g. scanning a networked drive) would update a second or so later. (A small sketch of this pattern follows this comment.)
    Related to that is realtime scheduling for desktop OSes. There is really no technical reason why a computer can't respond within a millisecond to any event (or at least indicate that it has started processing the request if it takes longer). Yet it is a struggle to achieve millisecond timing for professional audio applications.
    Ironically, the tools and languages to address these issues have existed for decades, and are still in use in niches. The community at large ignores them though, and complains that they don't know what to do with multi-core processors, that concurrent programming is "too hard", etc. Unfortunately it seems the wheel is re-invented over and over again in some regards... :)

    Despite all that, the pace of progress on all fronts is staggering. I think currently the software side is lagging behind, trying to catch up with the new hardware. I expect way more "cool stuff" coming our way related to multi-touch displays, high-performance computing for everyone, interactive computing, home automation, 3D printing, and generally, system integration. On a side note, I was a big fan of Tony Stark's workshop in the first Iron Man. I actually think quite a lot of that technology already exists one way or the other; what is needed is clever software and thorough system integration to put it all together...
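
A minimal Python sketch of the "spawn it off and report back" pattern from comment 9: slow work runs on a worker thread, and the (pretend) UI loop polls a queue so it never blocks. The task names and timings here are invented for illustration.

```python
# Keep the "UI" responsive: anything slower than ~0.1 s goes to a worker
# thread, which reports back through a queue the UI polls each frame.
import threading
import queue
import time

results = queue.Queue()

def slow_task(name, seconds):
    """Stand-in for scanning a networked drive, parsing a big file, etc."""
    time.sleep(seconds)
    results.put(f"{name} finished after {seconds} s")

# "Event handler": hand the slow work to a background thread and return at once
threading.Thread(target=slow_task, args=("network scan", 2), daemon=True).start()

# "UI loop": redraw every 100 ms and pick up any completed work
for frame in range(25):
    try:
        print(results.get_nowait())     # a worker reported back
    except queue.Empty:
        pass                            # nothing yet; stay snappy
    time.sleep(0.1)                     # one UI frame
```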

  10. Astrocatou 97 months ago | reply

    obskura
    wow...that was interesting.
    thanks.

  11. jurvetson 97 months ago | reply

    I second that. Very interesting reads. I wonder if we have reached the limits of human design, and the accumulation of artifacts to try to push the frontier has built a brittle edifice that fails for even the simplest tasks.

    And perhaps there is a loose analogy to the abundance of transistors on the average complex IC. Intel and others have long ago given up on scaling the logic in accordance with Moore's Law. They just lather on more and more cache memory to improve performance between generations. 96.5% of the transistors on the Montecito processor are used for memory.

    We are entering an era for complex chips where well over 90% of all transistors manufactured are for memory, not logic. (much like the brain)

  12. rocketmavericks 97 months ago | reply

    Ahhhhh..... but just like the early period of classical physics, we become so absorbed with the dogma of the world we lived in that we lose sight of the new world around us, and of the assumptions we made based upon the world as we understood it then, not as we understand it now.

    Bohr caught a hint of this with his early experiments, which suggested to him and Paul Drude that perhaps there was something lurking in the quantization of energy and momentum, even in the analog realm of classical optics and physics, before quantum mechanics was proposed. Einstein similarly saw flaws in classical physics as a description of the universe and had to go back to the basic assumptions of thermodynamics to see the flaws in our understanding of the world that led to general relativity.

    I did this several years ago and found a flaw in Von Neumann's assumptions about his machine. It has to do with the gate densities of a processor and the economics of what it would take to create a machine to solve arbitrary problems. All he had were vacuum tubes, so to solve useful problems he had to come up with an architecture that could run on the hardware he had. Vacuum tubes do not scale very well. The problem is that modern computational hardware architectures depend upon continuing Von Neumann's assumption, to support the investment in that code base and capability: a limited instruction set with hardware that can solve an arbitrary set of problems. This has worked well, with low gate densities first in vacuum tubes and then in silicon, thanks to the invention of transistors.

    But the problem now is that the incremental cost of pushing these complexities forward requires market applications that exceed our ability to justify the investment return. Billions of dollars are required for the next-generation semiconductor fabs that keep Moore's Law humming, the physics is challenged again and again to make the hardware that keeps the game going, and we need huge product volumes to make the numbers work. The focus then becomes driving down costs.

    But what if you changed the initial assumptions? What if you broke from Von Neumann's architecture, based upon a vacuum tube machine, and proposed to make dedicated circuits to solve a computational problem on the fly? Think of a CPU with a liquid sea of gates, like an FPGA, that gets instructions to assemble a custom circuit on the fly to solve the problem, rather than executing machine instructions. What if you could come up with a machine architecture that could switch together these circuits every clock cycle? No more instruction sets. Just circuits generated on the fly by the compiler to solve an individual problem in one machine cycle with a dedicated liquid circuit.

    You would need much higher gate densities to do this, which explains why Von Neumann came up with his solution. He had to work with vacuum tube computers, and you could not produce a circuit that could solve a problem of significance without huge costs. You can now.....

    I set out this architecture with some of the best compiler designers and machine architects back in 2001 to do just this. We still needed a couple of generations of silicon gate densities to make it work, but it suggests that a non-Von Neumann architecture is superior once gate densities reach a critical threshold. But nobody questions this assumption now. Compilers would no longer break language logic into machine instructions; they can be made to assemble a dedicated circuit on the fly to solve a problem in a single machine cycle. The explosive growth in computational processing you get is no longer gated by the current limits of processing technology. Imagine doing a Fourier transform in a single clock cycle on today's silicon-based machines. How about a circuit to optimize protein-folding combinatorics? The power is immense, but the instruction set has to go bye-bye.

    So maybe it is time to give up on Von Neumann architectures, just like we gave up on propeller aircraft in exchange for the ducted jet engines of what we now call jet aircraft. I think that the possibilities are there, but we have to go back to questioning basic assumptions, like Von Neumann did when all he had to work with were vacuum tubes, which clearly had fixed densities. When all you have are nails, the tools always look like a hammer!

    We no longer build vacuum-tube-based computational machines. It's time to question the basic assumptions that got us started. All of a sudden, the world looks like a very different place.

  13. thepretenda 97 months ago | reply

    Just incredible

  14. mikekingphoto 97 months ago | reply

    I'm not tech or a scientist. BUT do you clever guys and girls think that we will be capable of space travel to save mankind when the inevitable happens nuclear/asteroid/Sun? Not looking for a thesis, just a few words. If not, are there political reasons etc or merely physical ones? I don't tend to move in these circles! Unlike the Planets. Best, Mike.

  15. -fCh- 97 months ago | reply

    No! The reason is that we need a quantum leap in thinking--that may come after we digest/internalize the current stage, made possible by the scientific and technological leaps of the past ~150 years. For the time being, we are stuck with a paradigm whose logical conclusion, even in linear terms, is some asteroid-like blowup from within.

    However, it's necessary to keep the story alive. Put money in fundamental research instead of echoing the Byzantines...

  16. mikekingphoto 97 months ago | reply

    Does that also mean that Man's inner selfishness renders him incapable of saving himself from himself? Sadly, I suspect so.

  17. -fCh- 97 months ago | reply

    Selfishness comes and goes. It's high time it went. Enlightenment itself moves in mysterious ways.

  18. mikekingphoto 97 months ago | reply

    Let's hope so. Clearly you and I, plus all who view our thread, must buy in. Well, at least I hope so. In my short time on Earth, I've seen that selfishness comes with inherited wealth and takes nouveau riche generations to unlearn.

  19. Jim Rees 97 months ago | reply

    We had a C/30 IMP when I helped cut over University of Washington from NCP to TCP/IP in 1982. It was quite an advance for us, since NCP didn't do internetworking, and we could only afford to put a single machine, a PDP-10, on the net. With the TCP cutover we put an adaptor in a VAX 750 and networked the whole department, which included a 780, another 750, and a bunch of 730s, on 3Mb ethernet.

    I can't remember what the processor was in the C/30 but I don't think it was Honeywell. Maybe some proprietary BBN thing.

  20. Pete Tillman 69 months ago | reply

    Thanks for the cool photo & neat discussion.

    There's now a copy at Wikipedia:
    commons.wikimedia.org/wiki/File:ARPANET_first_router.jpg

    As always, thanks for posting as Creative Commons!
