While I think the headline of the piece is overblown, John Herrman gets to the heart of the challenges Google and OpenAI are facing with LLM personification:
With Gemini, incredibly, Google assigned itself a literal voice, spoken by a leader-employee-assistant-naïf character pulled in so many different directions that it doesn’t act like a human at all and whose core competency is generating infinite grievances in users who were already skeptical of the company, if not outright hostile to it.
Herrman doesn’t use the word “personification” in his piece, but I just keep coming back to that word when I think about the personalities of LLMs. (A reminder from Wikipedia: “Personification is the representation of a thing or abstraction as a person.”)
Herrman, without explicitly naming it, also gets at an interesting "jobs to be done" framing. Users hire ChatGPT or Gemini for a broad, open-ended job, while the much narrower "job" they hire specialized AI applications or even Custom GPTs for makes the personification problem easier:
Specialized AI represents real products and an aggregate situation in which questions about AI bias, training data, and ideology at least feel less salient to customers and users. The “characters” performed by scoped, purpose-built AI are performing joblike roles with employeelike personae. They don’t need to have an opinion on Hitler or Elon Musk because the customers aren’t looking for one, and the bosses won’t let it have one, and that makes perfect sense to everyone in the contexts in which they’re being deployed. They’re expected to be careful about what they say and to avoid subjects that aren’t germane to the task for which they’ve been “hired.” In contrast, general-purpose public chatbots like ChatGPT and Gemini are practically begging to be asked about Hitler. After all, they’re open text boxes on the internet.