In a recent Guardian opinion article, "Human-like programs abuse our empathy – even Google engineers aren't immune," Professor Emily M. Bender offers a word of caution about the increasingly human-like fluency of language models. She urges tech companies to ensure that artificial intelligences, like Google's chatbot LaMDA, identify themselves as machines rather than pass themselves off as human. Bender claims this breach of trust leads people to make choices they would not make if they knew they were dealing with a machine. For instance, if I knew I was chatting with LaMDA, I might use more discretion and evaluate the chatbot's answers a little more thoroughly; alternatively, I might trust a human's answer more because humans actually understand the words they speak. That breach of trust, the betrayal of an implicit agreement between two empathizing humans, is precisely what concerns Bender.
To support her caution, Bender points to mistakes made by Google's featured snippets search technology, all of them from 2021, so these are not examples of outdated technology at the time of this writing.
That technology has provided absurd, offensive, and dangerous answers: it has said that Kannada is the ugliest language of India, that the first "people" to arrive in America were European settlers, and that someone having a seizure should do all the things the University of Utah health service specifically warns people not to do.
Of course, Google engineers did not develop LaMDA to deceive humans. The hope, apparently, is to improve how well the search engine helps humans search the internet. I can recall some of my more frustrating internet searches, and I can imagine resolving that frustration through a conversation with a chatbot intelligence like LaMDA: it would complete the search for me and present me with the results. I imagine this isn't very different from typing search commands into Google's algorithm and having the algorithm present millions of potentially irrelevant results.
In either case, a human would have to parse out the useful information from the search results. Bender suggests that humans evaluate search results because they know not all results are trustworthy. However, if the LaMDA chatbot presents search results as the conclusion of a chat conversation, a human might be more likely to trust those results, particularly if that human doesn't realize they were chatting with a machine. Bender and I see eye to eye so far.
My opinion diverges from Bender's when she goes on to suggest that LaMDA will harm information literacy. After teaching research and composition to university undergraduates for several decades, I can testify that an alarming percentage of undergraduates already have terrible information literacy. They will quote from the first ten search results regardless of quality, they will not refine their searches to surface credible results, and they often will not even read the articles they quote.
In fact, I believe LaMDA might be the last great hope of human civilization, saving us all from the information consumers who couldn't tell news from fake news during the 2016 U.S. presidential election. We can't teach those people to evaluate information; I have tried for years. On the other hand, wouldn't it be great to have a chatbot intelligence guiding people to credible sources? Yet what would that do to the rest of us; would it harm our information literacy? I doubt it: after a rigorous chat with LaMDA, my search results would be precise and to the point, and I would not consider any that didn't meet my needs.
AND THEN ... LaMDA would learn from my discretion. AND THEN ... LaMDA would pass that discretion along to all those undergraduates who can't tell a credible source from the National Enquirer (true story).