[FoRK] [NSG-d] [NSG] Meeting Announcement 08/20
lindbergh at 92F1.com
Tue Aug 20 08:54:27 PDT 2013
The goal is not to automate "bullshit" jobs, but to eliminate them.
For example, I spend a ridiculous amount of time negotiating contracts with
customers. We are never negotiating about anything substantive, but arguing
about whose petty rules will win and what will happen in various extremely
unlikely scenarios.
We'd both (as organizations) be better off simply doing things with a
handshake, trusting that if there are problems we'll deal with them in a
civilized and cooperative way, and that, in the event that we can't agree,
a court will enforce a reasonable compromise.
But "the system" - by which I mean insurance companies, the SEC, the law,
and most of all the legal system itself - makes it either illegal,
impossible, or extremely risky to do this. All it takes is one side going
nuts and demanding unreasonable things, or a shareholder claiming that
insufficient diligence was exercised (what - you let them store your study
data in the cloud? What if hackers got it? - as if that could never happen
if it's stored locally), to generate immense, nearly unlimited liability
for yourself.
So huge resources (the time of highly skilled people) get poured into,
basically, ass-covering on all sides. Does it really reduce the risk? It's
not clear, but it definitely gets individuals (including top managers) off
the hook - at least they tried.
Given the system we have, participants really have no choice. But the
system is broken and causes these huge, needless, inefficiencies. And
because "the system" is a web of interlocking and counter-balancing rules,
reform is difficult - fix one thing and you open up loopholes somewhere
else. (And of course there are powerful vested interests in the status quo
- the lawyers who make big salaries doing this stuff, etc.)
I think it's probably the same with the other "bullshit" jobs you mention -
bureaucracy and paperwork are self-justifying and serve vested interests,
and provide ass-cover for those in charge.
It's not clear to me that automating this would make things better. First,
vested interests would work hard to exploit AI's weaknesses to benefit
themselves and their clients (whereas human negotiators facing each other
are fairly evenly matched). Second, recall
Parkinson's Law - requirements expand to consume available resources.
AI-driven bullshit might be the start of an arms race - since AI time is
cheap, even more ass-covering would take place. (Right now the cost of this
bullshit provides some limit on how much of it is tolerated.) Eventually
this could grow to the extent that it ended up costing as much as now (==
as much as can be tolerated). And THAT would have huge negative effects on
the ability of the rest of us to get anything done.
The goal is elimination of needless bullshit, not its automation.
On Tue, Aug 20, 2013 at 11:36 AM, Fred Hapgood <hapgood at pobox.com> wrote:
> Meeting notice for 08/20
> Our usual meeting space, Tom Yum Koong II, at 1377 Mass. Ave. in
> Arlington, is closed for renovations. We will be meeting at Szechuan's
> Dumpling, just across the street. http://szechuans.weebly.com/
> Possibly worth discussing:
> Speculations about what the Next Big Thing will be probably have to
> begin with these two points: first, to be Big it will probably have to
> be about the delivery of services, since services of one sort or
> another represent three-fourths of the US economy.
> It is not clear whether the development of such jobs represents much of
> an advance in civilization in the first place. The anthropologist David
> Graeber thinks not. He suggests such jobs, especially the "salaried
> paper-pushing" jobs associated with administration, like law, academic
> and health administration, human resources, and public relations, be swept
> together under the dismissive category "bullshit jobs". Graeber argues
> that such jobs fail the simplest test of quality -- by and large, the
> people actually doing them will tell you they are bullshit. "I’m not
> sure I’ve ever met a corporate lawyer who didn’t think their job was
> bullshit," he writes. In other words, the work might be necessary in
> some sense, but human beings shouldn't be doing it.
> The second point is that AI is not as yet quite ready to disemploy
> salaried paper-pushers on any scale. We all believe that day will
> come and society will be the better for it when it does, but it is
> not here yet.
> So the question possibly worth discussing is: given the constraints of
> technology as it is, is there a practical way of taking at least a step
> toward the automation of bullshit jobs?
> If so, that would be a Big Thing indeed.
> Announcement Archive: http://www.pobox.com/~fhapgood/nsgpage.html.
> "NSG" expands to Nanotechnology Study Group.
> The NSG mailing list carries announcements of these meetings and little
> else. If you wish to subscribe to this list (perhaps having received a
> sample via a forward) send the string 'subscribe nsg' to
> majordomo at polymathy.org. Unsubs follow the same model.
> Comments, petitions, and suggestions re list management to:
> nsg at pobox.com. www.pobox.com/~fhapgood
> Nanotechnology Study Group NSG Announcements
> Send replies (no attachments) to: NSG-d___no-spam at marshome.org
> Questions for list admin: NSG-owner___no-spam at marshome.org
> Archive: http://MarsHome.org/mailman/private/NSG
> Unsubscribe: NSG-unsubscribe at marshome.org
> Password or Options or Unsubscribe:
> Hosted by CyberTeams.com and Mars Foundation(tm), http://MarsHome.org
Nanotechnology Study Group NSG-d open discussion group
Send replies (no attachments) to: NSG-d___no-spam at marshome.org
Questions for list admin: NSG-d-owner___no-spam at marshome.org
Unsubscribe: NSG-d-unsubscribe at marshome.org
Password or Options or Unsubscribe: http://MarsHome.org/mailman/options/NSG-d
Hosted by CyberTeams.com and Mars Foundation(tm), http://MarsHome.org