kps a day ago

~~wavy lines~~ I've never used this in anger, but I owned a second-hand Xerox Daybreak for a while to play around with. Later, there was some freely available project (I've now forgotten which) that used Interlisp running on an emulator running on a DEC Alpha, and so I added some minor bits to NetBSD's Ultrix compatibility.

jshaqaw 2 days ago

Retro lisp machines are cool. Kudos to the team. Love it.

That said… we need the “lisp machine” of the future more than we need a recreation.

  • lproven 3 hours ago

    > we need the “lisp machine” of the future

    Totally agree.

    Here's my idea: stick a bunch of NVRAM DIMMs into a big server box, along with some ordinary SDRAM. So, say, you get a machine where the first 16GB of the memory map is ordinary RAM, and the 512GB or 1TB above that is persistent RAM. It keeps its contents when the machine is shut off.

    That is it. No drives at all. No SSD. All its storage is directly in the CPU memory map.

    Modify Interim or Mezzano to boot off a USB key into RAM and store a resume image in the PMEM part of the memory map, so you can suspend, turn off the power, and resume where you were when the power comes back.

    https://github.com/froggey/Mezzano

    https://github.com/mntmn/interim

    Now try to crowbar SBCL into this, and as many libraries and frameworks as can be sucked in. All of Medley/Interlisp, and some kind of converter so SBCL can run Interlisp.

    You now have an x86-64 LispM, with a whole new architectural model: no files, no disks, no filesystem. It's all just RAM. Workspace at the bottom, disposable. OS and apps higher up where it's nonvolatile.
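    The suspend/resume-into-persistent-memory model can be sketched in a few lines. A toy illustration in Python (hedged: an ordinary file stands in for the NVRAM region, and the file name and image contents are made up; on real hardware the persistent range would simply sit in the physical memory map):

```python
# Toy sketch of the "no filesystem, storage is just memory" model:
# a memory-mapped region that survives "power off" (process exit).
# `pmem.img` is a made-up stand-in for the NVRAM part of the memory map.
import mmap
import os

PMEM_FILE = "pmem.img"      # hypothetical stand-in for the PMEM region
PMEM_SIZE = 4096

def power_on():
    """Map the 'persistent RAM' and resume if an image is present."""
    fresh = not os.path.exists(PMEM_FILE)
    fd = os.open(PMEM_FILE, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, PMEM_SIZE)
    pmem = mmap.mmap(fd, PMEM_SIZE)
    if fresh or pmem[0] == 0:
        # Cold boot: nothing resumable yet, write a workspace image.
        pmem[0:18] = b"workspace-image-v1"
        state = "cold start"
    else:
        # Power came back: the image is still sitting in "memory".
        state = "resumed: " + pmem[0:18].decode()
    pmem.flush()            # analogous to flushing CPU caches to NVRAM
    pmem.close()
    os.close(fd)
    return state

print(power_on())   # cold start on the very first run
print(power_on())   # every run after that resumes the stored image
```

    The interesting design point is that "save" and "load" disappear as operations: the workspace image is never serialized to a filesystem, it simply persists at its address.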

    I fleshed this out a bit here:

    https://archive.fosdem.org/2021/schedule/event/new_type_of_c...

    And here...

    https://www.theregister.com/2024/02/26/starting_over_rebooti...

  • rjsw 2 days ago

    What does a Lisp Machine of the future look like?

    There is Mezzano [1] as well as the Interlisp project described in the linked paper and another project resurrecting the LMI software.

    [1] https://github.com/froggey/Mezzano

    • eadmund a day ago

      > What does a Lisp Machine of the future look like?

      Depends on what one means by that.

      Dedicated hardware? I doubt that we’ll ever see that again, although of course I could be wrong.

      A full OS? That’s more likely, but only just. If it had some way to run Windows, macOS or Linux programs (maybe just emulation?) then it might have a chance.

      As a program? Arguably Emacs is a Lisp Machine for 2025.

      Provocative question: would a modern Lisp Machine necessarily use Lisp? I think that it probably has to be a language like Lisp, Smalltalk, Forth or Tcl. It’s hard to put into words what these very different languages share that languages such as C, Java and Python lack, but I think that maybe it reduces down to elegant dynamism?
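      A tiny sketch of what that "elegant dynamism" looks like in practice (Python here, which borrows some of the same late binding): a call site resolves its callee at call time, so a live system can be patched without a restart, the way one redefines a function in a running Lisp or Smalltalk image.

```python
# Late binding: `run` looks up `greet` when called, not when defined,
# so redefining `greet` "patches the live image" with no restart.
def greet():
    return "hello from v1"

def run():
    # Resolved at call time, not compile time.
    return greet()

print(run())        # hello from v1

def greet():        # rebind the same name while the system is "running"
    return "hello from v2"

print(run())        # hello from v2
```

      C, Java and Python differ in degree here, of course; the Lisp/Smalltalk/Forth/Tcl family makes this kind of in-flight redefinition the normal way of working rather than a trick.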

      • amszmidt a day ago

        > Provocative question: would a modern Lisp Machine necessarily use Lisp?

        Seeing that not even the "Original Gangster" Lisp Machine used Lisp ...

        Both the Lambda and CADR are RISCy machines with very little specific to Lisp (the CADR was designed specifically to just run generic VM instructions; one cool hack on the CADR was to run PDP-10 instructions).

        By Emacs you definitely mean GNU Emacs -- there are other implementations of Emacs. To most people, what the Lisp Machine was (is?) was a full operating system with editor, compiler, debugger, and very easy access to all levels of the system. Lisp .. wasn't really the interesting thing; Smalltalk, Oberon .. share the same idea.

      • 0xpgm a day ago

        > Dedicated hardware? I doubt that we’ll ever see that again, although of course I could be wrong.

        Since we're now building specialized hardware for AI, with the emergence of languages like Mojo that take advantage of the hardware architecture, and what I interpret as a renewed interest in FPGAs, perhaps specialized hardware is making a comeback.

        If I understand computing history correctly, chip manufacturers like Intel optimized their chips for C compilers to take advantage of the economies of scale created by C/Unix popularity. This came at the cost of killing off the Lisp/Smalltalk specialized hardware that gave these high-level languages decent performance.

        Alan Kay famously said that people who are serious about their software should make their own hardware.

    • imglorp 2 days ago

      • amszmidt a day ago

        Mostly dead. Current Lisp Machine shenanigans related to MIT/LMI are at https://tumbleweed.nu/lm-3 ...

        Currently working on an accurate model of the MIT CADR in VHDL, and merging the various System source trees into one that should work for both the Lambda and the CADR.

        • diggan a day ago

          > Currently working on an accurate model of the MIT CADR in VHDL

          Sounds extremely interesting, any links/feeds one could follow the progress at?

          The dream of running lisp on hardware made for lisp lives on, against all odds :)

          • amszmidt a day ago

            Current work is at http://github.com/ams/cadr4

            And of course .. https://tumbleweed.nu/lm-3 .

            • rjsw a day ago

              Maybe try replacing the ALU with one written directly in Verilog, I suspect this will run a lot faster than building it up from 74181+74182 components.

              • amszmidt a day ago

                From what I see -- that is not the case.

                The current state is _very_ fast in simulation, to the point where it is uninteresting (there are other things to figure out) to write something like a behavioral model of the '181/'182.

                ~100 microcode instructions take about 0.1 seconds to run.

                • rjsw 17 hours ago

                  I was thinking more of a behavioral model of the whole ALU, just so that the FPGA tools can map it onto a collection of the smaller ALUs built into each slice.

                  What clock speed does your latest design synthesize at?

                  • therealcamino 14 hours ago

                    At the top of the readme it says "There will be no attempt at making this synthesizable (at this time)!".

  • chillpenguin a day ago

    Smalltalk was the lisp machine of the future. Of course, now even Smalltalk is a thing of the past.

pfdietz 2 days ago

I love this quote (from Interlisp-D: Overview and Status):

"Interlisp is a very large software system and large software systems are not easy to construct. Interlisp-D has on the order of 17,000 lines of Lisp code, 6,000 lines of Bcpl, and 4,000 lines of microcode."

So large. :)

http://www.softwarepreservation.net/projects/LISP/interlisp-...

  • topspin a day ago

    > So large. :)

    Consider this: at any moment in history, the prevailing state of the art in computing and information systems is characterized as huge, very large, massive, etc. Some years later, it's a portable device with a battery, and we forgive and snicker at those naïve souls who had no idea at the time.

    It's still true today. Whatever thing you have in mind: vast software systems running clouds, megawatt-powered NVidia GPU clusters, mighty LLMs... given 10-20 years, the equivalent will be an ad-subsidized toy you'll impulse-purchase on Black Friday.

    Your instinct will be to reject this as absurd. Keep in mind, that is the same impulse experienced by those who came before us.

  • zelphirkalt 2 days ago

    Same thing could probably be some 200k or more LoC in enterprise Java.

    • pjmlp a day ago

      Or Objective-C,

      https://github.com/Quotation/LongestCocoa

      Or Smalltalk or C++,

      https://en.wikipedia.org/wiki/Design_Patterns

      Or even C,

      https://www.amazon.com/Modern-Structured-Analysis-Edward-You...

      Point being, people like to blame Java, while forgetting the history of enterprise architecture.

      • zelphirkalt a day ago

        Smalltalk though is famous for having blocks (closures!), a very minimal syntax, and late binding. It has had those for decades before Java got something like it.

        But I think it is also at least partially a culture thing. Wanting to make every noun into a class, then going for 3 levels of design patterns, when actually a single function with appropriate types for its input arguments and output would be sufficient, if it can be decided on at runtime. Then we run into issues with Java the language, which doesn't let you use the most elementary building block, a function. No, it forces you to put things into classes, and that makes people think they now want to make objects, instead of merely having a static method. Then they complicate the design more and more, building on that. A culture of not being familiar with other languages, which emphasize more basic building blocks than classes. A culture of "I can do everything with Java, why learn something else?". A similar culture exists in the C++ world and among C aficionados.

        • pjmlp 9 hours ago

          Smalltalk is a pure OOP language, even the blocks you praise are objects.

          Java isn't the only language missing closures, plenty of them took their nice time getting them into the language.

          Everything else about Java in your comment applies equally well to Smalltalk; that is why famous books like Design Patterns exist. It was written about 5 years before the idea of a programming language like Java, and mostly uses Smalltalk examples, with some C++ as well.

          In the Smalltalk image world, everything is Smalltalk: the IDE, the platform, the OS. There isn't anything else.

          Many Java frameworks like JUnit, or industry trends like XP and Agile, have their roots in Smalltalk consulting projects, using IDEs like VisualAge for Smalltalk.

          J2EE started its life as an Objective-C framework at Sun, during their collaboration with NeXT, called Distributed Objects Everywhere.

          In a similar vein, NeXT ported their WebObjects framework from Objective-C to Java, even before Apple's acquisition of NeXT, with pretty much the same kind of abstraction ideas.