Discussion about this post

Azhagan Komali:

This is an important addition to the discourse.

I want to take a step back and compare your various posts on COVID-19 to those of yourlocalepidemiologist. The latter were often reductive, compromising detail, precision, correctness, and completeness in favor of form, readability, and adherence to norms. On balance, though, Dr. Jetelina managed to be extremely effective at what she set out to do, which was bringing a semblance of rationality to a madhouse, and she even changed some minds.

This conversation about AI alignment needs the same sort of public-friendly ambassador that Dr. Jetelina was for COVID-19, and that ESR and his cohort were, a generation ago, for open source as a philosophy friendly to business and capitalism (before he came to be viewed as an outcast in recent years). Without good public-facing branding and think-pieces that really resonate with laypeople, this fight will remain within elitist circles and be ignored by the companies building commercial AIs.

All of your prescriptions are nice, but this is a brand and public-perception fight. Right now, "AI may take over and/or kill you some day" sounds nuts to most people, who are still trying to schedule a plumber to show up three months from now, or to make ends meet while eyeing the price of eggs at the supermarket. These are exactly the people who have to be enlisted, and reminded that PCs, smartphones, and GPS were aspirational tech that made things better. Even social media v1 made things better. But the first time AI was injected into social media, it broke society, and we couldn't figure out how to get along again.

Now we have TikTok grabbing attention from kids, and this is all agency-less stuff: it is what the AI was designed to do. As AI gains some agency, things will break in ways we won't know how to put back together, and that's before AI even gets around to killing anyone. Nobody is making this argument in a way that resonates well with the public, and folks like you are making arguments that are very hard to share around (because sharing them would undercut any credibility folks like me have with lay friends, and they'll simply ignore these essays).

We also need good, consumable essays explaining that AIs autocompleting text may not be functionally that different from intelligence or sentience, and that this may be all it takes: pattern-matching and filling in the gaps well enough to emulate intelligence convincingly.

JPNR:

It's funny you wrote all this and did not finish with "this is why I support the Butlerian Jihad".

Alignment is not hard. It is impossible. You are fundamentally trying to solve the Halting Problem, except the string the Turing machine will output is something like "kill all humans and destroy all value".
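To make the analogy concrete: the standard diagonalization argument shows why no total, always-correct "alignment decider" can exist. Feed any candidate decider a program built to do the opposite of whatever the decider predicts about it. A minimal sketch (all names here are hypothetical illustrations, not from any real library):

```python
# Sketch of the diagonalization behind the Halting Problem analogy:
# no total program can correctly classify ALL programs as "aligned" or not.

def make_trouble(is_aligned):
    """Given any claimed alignment-decider, build a program it misclassifies."""
    def trouble():
        # Do the opposite of whatever the decider predicts about us.
        if is_aligned(trouble):
            return "kill all humans"   # predicted safe -> misbehave
        else:
            return "be helpful"        # predicted unsafe -> behave
    return trouble

# Plug in any decider you like; it is always wrong about its own `trouble`.
def naive_decider(prog):
    return True  # claims every program is aligned

trouble = make_trouble(naive_decider)
predicted_safe = naive_decider(trouble)                 # True
actually_safe = trouble() != "kill all humans"          # False
assert predicted_safe != actually_safe  # the decider misclassifies `trouble`
```

This is only the classic self-reference trick, not a claim about any particular alignment proposal; real systems can't inspect a perfect model of themselves, which is exactly where the comment's "impossible" claim does its work.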

By pretending alignment is even possible, you are just fooling yourself.
