[FoRK] l.py --- musing about lisp implementations

Jeff Bone <jbone at place.org> on Sun Feb 24 16:55:32 PST 2008

On Feb 24, 2008, at 7:07 PM, rst at ai.mit.edu wrote:

> Jeff Bone writes:
>> It seems very clear to me that we're just polishing turds in pursuing
>> this course of language evolution.  If you want fundamental
>> improvements in programming language expressivity and hence  
>> programmer
>> productivity, you have to approach things from a fundamentally
>> different model of computation.  Erlang (join calculus), Haskell and
>> other typed functional languages (equational reduction), Prolog  
>> (logic
>> and Horn clauses), APL (linear algebra), and so on do this.  Yet
>> Another Lisp does not.
>
> You could also attack problems that weren't important in the past,
> and for which good solutions aren't really available yet.  Graham's
> advertising this as "the language for the next hundred years."  Well,
> when I think about the challenges of not the next hundred years, but
> the next twenty, I think about:
>
>  *) Exploiting hardware with hundreds or thousands of cores,
>     non-"von-Neumann" computational pipelines (as in current
>     GPUs), and exotic nonuniform communications interconnects.

+1, spot-on --- I think this is perhaps the most significant
short-term problem facing language evolution.

It's interesting that Erlang --- which definitely smacks of the late
'80s to me in many ways --- is nonetheless probably the lead contender
in this area at present.  I'm not sure it represents *the* model ---
particularly for problems where SIMD is more appropriate (StarLisp,
anyone?) or where semi-batch processing models a la MapReduce fit
better --- but for better or worse it represents the present state of
the art.

I note that some Scheme implementations (PLT's next version, which at
this point can hardly be called just a Scheme; also Termite, built on
Gambit) are moving towards shared-nothing concurrency.  That's going
to be an important difference from previous approaches.
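To make the shared-nothing idea concrete, here's a minimal sketch in
Haskell rather than Scheme (the Msg protocol and all the names are
mine, purely illustrative).  Each "process" owns its state privately
and touches the rest of the world only by exchanging immutable
messages over channels; there's no shared mutable heap to race on.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- Messages a counter process understands: mutate, or ask for the
    -- current value on a private reply channel.
    data Msg = Add Int | Get (Chan Int)

    -- The process loop: state is threaded through the recursion,
    -- never shared with any other thread.
    counter :: Chan Msg -> Int -> IO ()
    counter inbox n = do
      msg <- readChan inbox
      case msg of
        Add k       -> counter inbox (n + k)
        Get replyTo -> writeChan replyTo n >> counter inbox n

    main :: IO ()
    main = do
      inbox <- newChan
      _ <- forkIO (counter inbox 0)        -- spawn the process
      mapM_ (writeChan inbox . Add) [1, 2, 3]
      reply <- newChan
      writeChan inbox (Get reply)          -- request/reply, Erlang-style
      total <- readChan reply
      print total                          -- prints 6

This is the shape Erlang gives you natively and Termite grafts onto
Scheme; the point is the discipline, not the particular language.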


>  *) Growing in a different direction, exploiting more loosely
>     coupled parallel arrays in bulk operations over massive
>     amounts of stored data (a la Google's MapReduce, BigTable,
>     and --- no doubt --- the more interesting stuff they still
>     have under wraps).

+1, though I see this and the above as, in a sense, special cases of
the larger problem of massively scalable concurrent computation.  I
hope and expect that the same basic model (shared-nothing,
call-by-value, process / join calculus based) may well work for both.
I suspect that we'll see "application systems" [1] evolve beyond that
to provide services for automatically distributed applications
architected as process ensembles over available cores / hosts / etc.,
optimally distributing things according to needed resources
(processing, storage, specialized hardware a la GPUs, even power /
cooling, etc.).  NB, the latter's actually critically important for
some complex computations: in a sense, what my company does these
days is turn electricity into money, and one of our major constraints
is how tightly we can pack the racks due to cooling considerations...
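For flavor, here's a toy, single-node, MapReduce-shaped word count,
again in Haskell (the function names are mine, not anyone's API).
Both phases are pure functions over immutable values (exactly the
shared-nothing / call-by-value discipline above), which is what would
let an "application system" farm the map calls out over cores or
hosts and merge partial reduce results wherever convenient.

    import qualified Data.Map.Strict as M

    -- Map phase: each document independently emits (word, 1) pairs.
    mapPhase :: String -> [(String, Int)]
    mapPhase doc = [(w, 1) | w <- words doc]

    -- Reduce phase: merge all pairs, summing counts per key.
    reducePhase :: [(String, Int)] -> M.Map String Int
    reducePhase = M.fromListWith (+)

    wordCount :: [String] -> M.Map String Int
    wordCount = reducePhase . concatMap mapPhase

    main :: IO ()
    main = print (wordCount ["to be or not to be", "be here now"])

In a real deployment the mapPhase calls could run anywhere, since
they share nothing; reducePhase just merges whatever comes back.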

>  *) Providing stronger guarantees about security-related properties
>     (e.g., the various blog pages that are increasingly easy to find
>     about various ways to use the Haskell type system to flag  
> potential
>     SQL injection attacks).

Yeah, this is important.  Unfortunately I'm not sure I'm very
optimistic about any of the research tracks I've seen in this area
(Microsoft's Singularity comes to mind); like most of the other
things I've seen here, it seems to rest on shaky mathematical
underpinnings and is a bit too abstruse for my tastes.  This from a
guy who grudgingly recognizes the power of a well-founded abstraction
in programming languages, e.g. the join calculus in Erlang, category
theory in Haskell, etc.  "Contracts" seem very ad hoc; proof-carrying
code and the like seem a bit too dependent on implementation
particulars --- particularly type systems --- to be very general; and
"ambient calculus" --- well, what the hell is that?  YMMV... ;-)
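That said, the basic trick rst alludes to is easy to sketch: give
tainted input and sanitized SQL distinct types and make the sanitizer
the only bridge between them, so that forgetting to escape becomes a
compile-time type error.  A minimal Haskell version follows (names
invented for illustration; a real library would hide the SqlSafe
constructor behind a module boundary and do considerably more):

    -- Tainted: came straight off the wire.
    newtype UserInput = UserInput String

    -- Sanitized: by convention only `escape` may build one.
    newtype SqlSafe = SqlSafe String

    -- The sole bridge from tainted to safe: double up single quotes.
    escape :: UserInput -> SqlSafe
    escape (UserInput s) = SqlSafe (concatMap esc s)
      where esc '\'' = "''"
            esc c    = [c]

    -- The query runner's type refuses raw input outright.
    runQuery :: SqlSafe -> IO ()
    runQuery (SqlSafe frag) =
      putStrLn ("SELECT * FROM users WHERE name = '" ++ frag ++ "'")

    main :: IO ()
    main = do
      let evil = UserInput "x'; DROP TABLE users; --"
      -- runQuery evil       -- rejected at compile time: wrong type
      runQuery (escape evil) -- the only path to execution

Uncomment the rejected line and it's the compiler, not a code review,
that catches the injection path; that's the whole appeal.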

>  *) Computational agents distributed on a planetary scale, in a manner
>     robust against network partitions and the like, in part at remote
>     sensor nodes where there are strong constraints on power  
> consumption
>     and network connectivity.
>
> (Note that this might well include playing with new computational
> models as well, as noted particularly wrt the security issue; it's
> just a question of different emphasis...)
>
> Arc, by contrast, seems so far to be addressing the problems of the
> last century, and not in a way profoundly different from what's gone
> before;
> the differences with other Lisp dialects are so far mostly syntactic,
> and, say, Dylan went a whole lot farther in that direction, without
> tossing macros out the window...
>
> rst

+1, absolutely...

jb

