Responding to Celeste Biever's "ChatGPT broke the Turing test — the race is on for new ways to assess AI"
Hell and damnation: I have stewed over this article in Nature since before sunup, so now I bring you my list of botherations. I am enormously pleased to see that visual logic problems are hard for machines to solve. I am not at all pleased at how well the damn things do on the GRE. I'm neither a fan of nor an apologist for the GRE, despite all the many enduring benefits I obtained by acing it, but nonetheless this is vexing. My rubric for admitting that there are "glimmers of reasoning," or what we think of as actual understanding, in machine results is so strict that I'm sure I'm not being objective about it. As with preschoolers and matches, we are neither smart enough nor responsible enough to be doing this work, and we ought not to be doing it. That I believe something like this is quite astonishing to me, because it seems to go against just about everything else I believe about science and technology and ambition and possibility and progress. I am amused and slightly