Discussion about this post

Silvio Nardoni

The writer of an excellent article proposes that there be an “institutional” mechanism for evaluating the performance of an AI system. Nice idea, but it requires commitment to institutions, something sadly lacking in the present administration, which, to put it as nicely as possible, seems to rule (govern?) by the seat of its pants.

Longestaffe

This is a highly thought-provoking essay. However, it seems that the Eliza effect is ultimately rendered academic by the fact that, as the author notes, “the alignment is real.” When you need a tool for, say, thwarting terrorist plots, you can’t have it responding to prompts with “Wait, can you really blame people who…?”

If the problem to be solved is overreaction to the technology’s undisputed alignment, then the heart of the matter is not the psychological Eliza effect but the deliberate design of technology that simulates a personality and a will.

