[FoRK] parallelism Re: Welcome to the era of radical innovation

Gregory Alan Bolcer greg at bolcer.org
Thu Jan 9 10:52:35 PST 2014


Isn't that just quantum computing? You calculate all things for all
variables, across all values, all the time, for all correlations,
regardless of whether anyone wants it, thus providing all the answers.

What was the question again?

Greg


On Thu, Jan 9, 2014 at 10:12 AM, J. Andrew Rogers <andrew at jarbox.org> wrote:

>
> On Jan 9, 2014, at 7:18 AM, Dr. Ernie Prabhakar <
> drernie at radicalcentrism.org> wrote:
> > On Jan 9, 2014, at 1:12 AM, J. Andrew Rogers <andrew at jarbox.org> wrote:
> >
> >> It may be a fundamental human problem. The HPC world has been in the
> same boat for years, and it is nominally their specialty. Mostly, HPC just
> papered over bad parallel software design with really expensive hardware.
> Chasing fancy new architectures is the proverbial man looking under the
> streetlight for his lost keys.
> >
> > I’ve been wondering about this. I’m starting to think the root of the
> problem may be the original von Neumann architecture, which was designed to
> impose serial programming on inherently parallel electronic circuits.
> >
> > I’ve been fantasizing about what it would be like to program a “Golden
> Girls*” architecture…
>
>
> No need; current architectures already expose the complex execution
> parallelism, but you can choose to ignore it.
>
> Non-trivial parallel algorithms are either latency-aware or
> schedule-aware. In the former, every detail of the execution environment is
> implicitly encoded in the algorithm, which is knowledge you almost never
> have in real software. In the latter, the algorithm dynamically "bends" to
> fit the apparent execution environment like water filling a complex
> container, which leads to considerably more complex algorithm
> implementations than most software engineers are used to reasoning about.
>
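
For concreteness, a minimal Go sketch of what that "bending" can look like
in the schedule-aware case: rather than statically splitting the input
(which bakes assumptions about the machine into the algorithm), workers
pull small chunks from a shared queue, so the same code adapts to however
many cores it finds and however uneven the per-chunk cost turns out to be.
The chunk size, the queue, and the names are illustrative assumptions, not
anything prescribed in the post.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "sync/atomic"
    )

    func main() {
        // Synthetic input; a latency-aware version would hard-code how
        // this work is split for one particular machine. This one does not.
        data := make([]int64, 1000000)
        for i := range data {
            data[i] = int64(i)
        }

        // Small chunks in a shared queue: the work "bends" to however
        // many workers there are and however uneven chunk costs become.
        const chunk = 4096
        chunks := make(chan int, len(data)/chunk+1)
        for start := 0; start < len(data); start += chunk {
            chunks <- start
        }
        close(chunks)

        var total int64
        var wg sync.WaitGroup
        for w := 0; w < runtime.NumCPU(); w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for start := range chunks {
                    end := start + chunk
                    if end > len(data) {
                        end = len(data)
                    }
                    var local int64
                    for _, v := range data[start:end] {
                        local += v
                    }
                    atomic.AddInt64(&total, local) // one merge per chunk
                }
            }()
        }
        wg.Wait()
        fmt.Println("sum =", total)
    }

A latency-aware variant of the same loop would instead encode the split
and chunk sizes for a known machine up front, which is exactly the
knowledge most real software does not have.
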
> Traditional algorithms and data structures literature is devoid of either
> concept because of a serial execution assumption. The “parallel algorithms”
> literature usually limits itself to those things that can be expressed
> without those concepts, using crude constructs (e.g. MapReduce) that
> coincidentally approximate latency- or schedule-awareness for narrow cases.
> There is almost no knowledge of how to explicitly construct arbitrary
> parallel algorithms outside of those cases where you can be relatively
> ignorant and still produce something more or less like the correct result.
> It is why the software world is littered with completely broken “parallel”
> graph search algorithms even though efficiently parallel algorithms,
> requiring explicit reasoning, are known to exist.
>
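
As a concrete illustration of the graph-search point, here is a minimal
level-synchronous parallel BFS sketch in Go. The "explicit reasoning"
shows up in two places a serial BFS never needs: each vertex is claimed
with an atomic compare-and-swap so two workers cannot both visit it, and
each frontier level is finished behind a barrier before the next one
begins. The frontier partitioning and all names here are illustrative
assumptions, not code from the post.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // parallelBFS returns the BFS depth of every vertex reachable from
    // src (and -1 for unreachable ones), computed level by level.
    func parallelBFS(adj [][]int, src, workers int) []int32 {
        depth := make([]int32, len(adj))
        for i := range depth {
            depth[i] = -1 // -1 means "not yet visited"
        }
        depth[src] = 0

        frontier := []int{src}
        for level := int32(1); len(frontier) > 0; level++ {
            next := make([][]int, workers) // per-worker output buffers
            var wg sync.WaitGroup
            for w := 0; w < workers; w++ {
                wg.Add(1)
                go func(w int) {
                    defer wg.Done()
                    for i := w; i < len(frontier); i += workers {
                        for _, v := range adj[frontier[i]] {
                            // Claim v exactly once, even under contention;
                            // this is the step naive "parallel" BFS races on.
                            if atomic.CompareAndSwapInt32(&depth[v], -1, level) {
                                next[w] = append(next[w], v)
                            }
                        }
                    }
                }(w)
            }
            wg.Wait() // barrier: finish this level before the next starts

            frontier = frontier[:0]
            for _, part := range next {
                frontier = append(frontier, part...)
            }
        }
        return depth
    }

    func main() {
        // A small ring of 8 vertices, just to exercise the code.
        adj := make([][]int, 8)
        for v := range adj {
            adj[v] = []int{(v + 1) % 8, (v + 7) % 8}
        }
        fmt.Println(parallelBFS(adj, 0, 4)) // [0 1 2 3 4 3 2 1]
    }
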
>
> In principle, you could build a runtime environment where the generalized
> parallelism mechanics are largely hidden from the programmer.
> Unfortunately, it is a chicken-and-egg problem. To build such an
> environment, you would still need a critical mass of software engineers who
> actually understand data-structure and algorithm parallelism and are willing
> to build it. This is the missing piece of innovation, and I am not sanguine
> that we’ll get it anytime soon.
>
> Andrew
> _______________________________________________
> FoRK mailing list
> http://xent.com/mailman/listinfo/fork
>



-- 
greg at bolcer.org, http://bolcer.org, c: +1.714.928.5476

