[FoRK] Brownback defines science
<eugen at leitl.org> on
Sat Jun 2 04:57:11 PDT 2007
On Sat, Jun 02, 2007 at 04:20:02AM -0700, Lion Kimbro wrote:
> If we can make it out of the solar system, *my* worries are over,
The hard part is building autonomous self-rep systems in a small
package, so once you're out of this gravity well, you're golden.
We already know how to launch relativistic probes; it's just that
you need machine versions of bacteria capable of living off sunlight
and congealed star drek.
See http://www.molecularassembler.com/KSRM.htm for why this
is not prohibitively difficult.
> >I don't think the majority gives a damn about that. The vast
> >majority tries to save their own ass, to feed and clothe their
> >children, and to give them an education. Everything else is a distant
> >fourth and fifth.
> Are you sure you see so very clearly into the hearts of other people?
Yes. Both because they say so, and do so. Most people outside of
the rich west eke out a shitty life, and are trying to get out of it
by emulating us. Can't blame them for a moment for that without
being a hypocritical sack of shit.
> I think it's right that people try to save their own ass, feed and
> clothe their children, and give them an education.
I think it's their right, too; it's just that it's a zero-sum (or even
negative-sum) game, and they insist on playing.
> But I think that after they're done doing that, they go, "Oh yeah.
> Someone should ... you know... do something about the penguins."
I recommend you try stopping a few loggers in Brazil from transporting
away the timber so they can feed, clothe and educate their children.
Bring plenty of ammo, though. They have guns, and like to use them.
> And as far as I can see by reason, this is right and proper--
> We can't take care beyond our own capabilities, until we have them.
> We don't expect kids to take care of the neighborhood, first.
> Rather, we make sure that they can take care of themselves, first.
> THEN, they can take care of things beyond themselves.
I think it's a big mess, and everyone is trying to swim to shore.
Quite a few will sink, quite a few will make it to the shore. We'll
find out shortly.
> I think we, humanity, are just coming into contact with information
> and the ability to respond to each other and the environment.
I would be quite interested to see what the 3rd world will do
with smartphones, once they all get them. So far, they're dumb
phones, only good for yakking.
> >Our governments reflect the capabilities of their agents. Educated and
> >powerful entities can recognise and fix issues bottom-up, thankyouverymuch.
> Okay, I guess if I say something cryptic, I get a cryptic response,
> and that's fair.
Not cryptic at all. Organisation (and cooperation) level is a function
of agent smarts. Dumb and weak agents let themselves be pushed around.
Smart and powerful agents act instead of reacting, and do not allow
themselves to be pushed around.
Education and material wealth are a potential source of enablement, but
in practice material wealth is demotivating for all but a small fraction.
> For example, there are starving people in the world. There's more than
> enough food to feed everybody. And yet people starve.
The problem is simple: DISTRIBUTION. Just because you have the food doesn't
mean you can spirit it directly into people's bellies (and even then, quite
a few would start slitting bellies).
> Now, a lot of activist groups, they beat the ground and lash our minds,
> and shout, "It's because of your moral failings! It's because you're
> bad people!" Or, "It's because of our governments failings! It's
> because the government is rotten!"
Actually, it's a lot because local stationary and mobile bandits are in
the way. You can't bypass them.
> But I don't think that's right. I've looked into it a little bit, (and,
> admittedly, ONLY a little bit,) and what I've come away with, is that
> it may well be that the logistics of distributing food is actually HARD.
Well, no shit, Sherlock.
> That it's NOT just a matter of going into town, and saying, "Food!
> Clean water!"
Well, if you haven't forgotten an armed convoy, and lots of infrastructure,
it would work. But you would have to shoot a lot of natives, and you know
what will happen after that. Especially, after you leave.
> That politics and economics and other systems actually *matter.*
> And that we simply *don't know how* to take all those things into account.
> The problem isn't that humans are *bad,* it's that humans are
> *freaking complicated.* Answers are *not* clear.
The problem is that most humans are stupid and weak, but some of them
are ruthless and smart. I can think of some technological fixes to that,
but it's in the future, and it would result in coercive enhancements,
and probably some behaviour constraints. I'm not sure it's a good idea.
> >It's easy enough to do -- just build a human-enhancing self-replicating
> >system, and dump it into the jetstream. Do you have any of that nanofairy
> >dust, perchance?
> Eh... What?
> Okay, I don't think this one was very cryptic of me.
> I'm going to need some explaining from you on this one.
Let's see: you wake up one morning, and suddenly know a lot of things
which you've never learned, and you can learn and build on that, and
you're much, much harder to kill, and you notice that when you're trying
to kill someone it's really, really hard (especially if you can't
even think of that particular idea).
> Nanofairy dust isn't going to do it, alone.
It actually would, long-term. The minus part: we don't have
that nanofairy dust yet. Also, I'm not sure I would have the
right to do it, even if I could. It would probably require
some pilots, and some modelling.
> Machines can be just as religious as humans can be-
Micron-sized machines are not religious, they're just enhancers.
> I take it we're all programmers, right?
> We should all know better. ..!
> (Yes, this is tongue in cheek.)
> But, no, seriously-- If you do a study of "reason,"
> if you do a serious inquiry into reason, you'll find that reason
> is divergent, and there's good reason to believe that AI's would
> be just as likely to go off into religious ways of thinking
> as humans are.
I never said anything about AIs, and you can permanently inhibit
the deity circuit, you know. Not that it would be a problem, because
cooperation is a function of smarts onboard, and religious people
can cooperate just fine, too.
> sdw and I had this lovely conversation about this, (I'm still
> awaiting his response on it,) and that's where I'd point that
> one, if you're suggesting that AI and nanotech will make
> everyone into religion-free people.
These are not the robots you're looking for.
> I mean, I'll grant that it's *possible,* but, ... We may find
> something different than religion, but resembling religion,
> or oppressive like religion, on the other end of things.
I don't know why you're harping on religion; it is not that
big a problem so far.
> >It is possible to detach yourself from the supporting ecology, but
> >it's hard work. And nobody's got it done yet, so we're still
> Well, it's even harder to detach yourself from your heart.
I know a guy who did that, and he still lives. But it's not that
enjoyable, lugging that piece of equipment around. I still don't
know what it has to do with engineering closed-loop ecosystems.
The Russians and the U.S. used to work on that, but they stopped
because the space race was over.
> It's like Captain Teeg says in Pirates III:
> "Living forever's not the hard part,
> It's living with yourself all that time."
I don't listen to Hollywood hand puppets. In practice supercentenarians
are not suicidal, and living forever is just the entry ticket, not
> If we can save penguins, and other species, (and I'd rather,
> regardless of whether we need them or not,) I strongly believe
> it's going to require global effort to do so.
Yeah, and that's why you can kiss the polar bears good-bye (the penguins
are not really endangered).
> Some people will balk and complain about how they're being
> forced to care for what they don't care for, "at the point of a
> gun!" like they always say, but honestly, I can live with that.
Coercion never works. At worst it exacerbates the problem: imagine
what nuclear terrorism would do to our lifestyle.
> Oh, well-- the specific course of action is up for debate.
Consensus is not a way to validate knowledge, it's just about which
group of morons is larger. The trick is to make morons stop being
morons without them noticing.
> But the concept of "try to figure it out, and labor towards it,"
> is to me, the obvious choice to make.
I'll see you with machine guns along the logging road in Brazil.
> Granted, the devil is in the details.
Isn't she always?
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE