Bing Chat AI: Potentially Useful, Potentially Dangerous Apophenia
I was able to get the new Bing Chat AI, so I gave it a try.
Overall, my opinion of all the new AI tools is very low, much like my opinion of cryptocurrency and NFTs. I think that while the AI tools have some utility, it isn’t nearly as much as their proponents believe, and they’re massively overhyped.
There’s a bit of “Narcissus Effect”, I suspect – the tools just vomit the portions of their dataset indicated by the generative prompt to their users, who gaze in admiration at the reflection of their psyches. Or it’s a bit like the Israelites and the Golden Calf in the book of Exodus. Look at this beautiful thing we made! Surely it must be a god!
In the modern version of the Golden Calf, it’s: look at this mathematical formula we made that brute-forces probabilities! Surely it must be an artificial intelligence made in imitation of our own minds, and not an Infinite Crap Generator!
It’s not a golden calf and no one thinks generative AI is a god, but I think the same psychological mechanism is at play.
(If I’m feeling really snarky, I’d say that Web1.0 was static content, Web2.0 was user-generated content, and Web3.0 is the Scammers’ Paradise.)
In particular, my opinion of AI image generation remains highly negative – AI image generation is essentially copying the images it’s been “trained” on. I don’t even like the word “trained” to describe it, since it boils down to a million photocopiers taking a million pieces from a million different images. The artificial “intelligence” involved is basically brute-force copying of the patterns in the images in its data set; when a user enters a prompt, it uses those copied patterns to spit out an allegedly new image. For example, if you go to any of the AI image generators and use “Magic The Gathering Plains Card” as the prompt, the AI will dutifully produce a mishmash of every Magic The Gathering plains card its dataset scraped from the Internet, complete with weird symbols where the text would be, because it’s essentially trying to copy every single card and produce an average of them. (You can see an example as the picture for this post.)
While I am dubious about AI text generation, I am slightly less dubious about it than image generation, because it tends to be really fancy autocomplete. Granted, I don’t think highly of it, but I don’t think it’s as ethically sleazy as AI image generation. And it’s possible these tools have a use I don’t see yet. I mean, I don’t like voice assistants at all, but I recognize they’ve been hugely helpful to people, especially during the pandemic – particularly elderly people and people with mobility/health concerns. Or while I am dubious about cryptocurrency, it can be very helpful to people who live in countries with weak financial regulation or authoritarian governments. That said, there are enormous problems with AI text generation, which have already been thoroughly explored by people smarter than I am.
With that long-winded and somewhat cranky introduction out of the way, let’s get to Bing Chat!
When Microsoft started opening up its OpenAI-fueled Bing Chat, I decided to give it a try.
It’s possible that they might be on to the beginnings of a good idea. Possibly.
One of the big problems with the present form of the Internet is that search is dominated by Google, and Google derives most of its revenue from online ads. This has a distorting effect, which means that the top search results for many Google searches are now just a bunch of SEO-optimized ad farms. I’m sure we’ve all Googled for a recipe and ended up on a page with a billion ads and the recipe way at the bottom. Finding accurate information with a Google search has become harder and harder because a lot of very smart people have optimized Google search for maximum ad revenue, so the top results for any particular search are often equally optimized for maximum search ad revenue.
Bing Chat, by contrast, is designed for questions. The way it works is you ask the chatbot a question, it searches for relevant results, and then summarizes them in a few tidy paragraphs. Then you can ask more refining questions to get better results. Every answer also contains hyperlinks indicating where the chatbot got its information for the answer.
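The loop described above – question in, search, summarize, cite – can be sketched roughly like this. Everything here is a toy stand-in of my own invention, not Microsoft’s actual pipeline; the tiny hard-coded “index” just stands in for a real web search API:

```python
def search(query):
    # Stand-in for a web search API: returns (snippet, url) pairs
    # whose keyword appears in the query.
    index = {
        "equine": [("Overview of equine health topics.", "https://example.com/equine")],
        "recipe": [("A recipe buried under a billion ads.", "https://example.com/recipe")],
    }
    results = []
    for keyword, docs in index.items():
        if keyword in query.lower():
            results.extend(docs)
    return results

def answer(question):
    """Retrieve snippets, 'summarize' them, and attach source links."""
    sources = search(question)
    if not sources:
        return "I couldn't find anything relevant.", []
    # A real chatbot would run a language model here; joining the
    # snippets is enough to show the shape of the loop.
    summary = " ".join(snippet for snippet, _ in sources)
    citations = [url for _, url in sources]
    return summary, citations

text, links = answer("equine medical research")
# `links` carries the URLs the summary was drawn from
```

The sketch also makes the weakness obvious: if `search` returns junk, `answer` confidently summarizes junk.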
This is more efficient than scrolling through page after page of search results, though you can see the weakness – the answers are only as good as the information the chatbot is drawing from, so it’s possible the AI could give you a neat and definitive answer full of absolute nonsense.
For example, I was talking with someone familiar with horse research, and she suggested a very specific question related to a very specific equine medical problem. Bing Chat ground out an answer that was authoritative-sounding but both very vague and entirely incorrect, which we suspected would happen because there simply isn’t very much medical research on this particular equine health problem and therefore nothing upon which Bing Chat could draw.
It’s also very bad at value judgments. I was talking about this with some education people, and they suggested that I ask a very specific question – whether Montessori preschools or Waldorf preschools have better outcomes for child development. (I have no idea what that means, either.) When posed the question, the chatbot ended up providing a summary of both types of preschools. At a casual read, it seemed like it answered the question, but it totally didn’t. Which remains the biggest problem with generative text AI – it sounds authoritative and knowledgeable, but it isn’t at all – it’s just a fancy autocomplete stringing together the words most statistically likely to appear together in a sentence.
So it’s possible that, in this particular instance, AI could improve search. Though I retain my overall negative opinion of AI.
I think the biggest danger for this kind of chatbot is apophenia – the human tendency to see patterns where none exist. (You can see this on Twitter all the time, where many people assume every news event is part of a Sinister Plot perpetrated by a cabal of all-powerful yet highly incompetent conspirators.) While typing, it feels like you’re chatting with an actual, well-organized person on the other end of the connection. It’s not, of course – it’s just an illusion created by the way the human mind works. We tend to anthropomorphize everything – our pets, our tools, the weather. For many people, this can be an intoxicating illusion. I can easily see people developing unhealthy relationships with this kind of chatbot, and accepting uncritically anything it tells them.
I’m susceptible to this anthropomorphizing as well – I know Bing Chat is just a mathematical formula, but I still use “please” and “thank you” while typing to it. (Or maybe I was just raised well.)
Despite those dangers, Bing Chat might actually be a useful implementation of AI.
Overall, however, my opinion of generative AI as a technology remains negative for three reasons: 1.) it doesn’t solve a serious problem, 2.) the trivial problems it does solve are outweighed by the massive new problems it creates, and 3.) overall, it makes the world slightly worse. If one creates an Infinite Crap Generator, it’s reasonable to expect an increase in the overall level of crap.
Hopefully, all the corporations investing massive resources into AI generation will take a massive loss, and then the technology can be marginalized.
-JM
So to summarize you have a problem with something sounding knowledgeable and authoritative but in reality being complete crap?
Something on the internet….
I feel like the chat AI is not really the crux of the problem. Like at all.
Something that sounds good but is in reality full of crap is a pretty good definition for the internet. Or a politician. Just saying.
It’s less that inaccurate information exists on the Internet and more that generative AI ramps up the potential for it by a thousandfold.
LOL!
JM wrote: “…Bing Chat ground out an answer that was authoritative-sounding but both very vague and entirely incorrect…”
What’s funny to me is that earlier in the blog post JM wrote stuff that was authoritative-sounding but was vague and incorrect. To me, it’s the human pot calling the AI kettle black!
Here’s one example, JM wrote: “I don’t even like the word “trained” to describe it, since it boils down to essentially a million photocopiers taking a million pieces from a million different images. The artificial “intelligence” involved is basically brute-force copying patterns of the images in its data set, and then when a user enters a prompt, it uses those copied patterns to spit out an allegedly new image.”
Super authoritative, kinda vague (what do photocopiers have to do with anything?!?), and completely wrong (there simply is no copying, any more than your eyes copy an image when you look at it).
Perhaps amusingly, I was talking with a university employee today, and she said the big problem her institution has with ChatGPT isn’t that students are using it to crank out essays, it’s that they’re using it to do research, and it very frequently generates citations that are formatted correctly but are entirely and completely fake. So they get students upset that the databases don’t have the cited articles, only to be astonished when they realize the articles in question do not actually exist.
I freely concede that my negative opinion of generative AI is an opinion that could be changed if the facts change. That said, every interaction I’ve had with generative AI and everything I’ve read about it further persuades me that not only is it as bad an idea as I think it is, it isn’t even as good as pro-AI advocates think it is.
I wonder if you’re being a little short-sighted, and that wondering inspired me (and ChatGPT) to write an essay titled “Artificial Intelligence and the Screechy Violin” (at https://greatguys.blogspot.com/2023/03/artificial-intelligence-and-screechy.html ).
I imagine when (if) you look at the ChatGPT portion of the essay, you’ll be (rightfully) highly critical. It is at least somewhat robotic. But, unlike you, I’m NOT a writer, and ChatGPT is far better at writing than I am. After not blogging for years I find myself blogging again, because ChatGPT enables me to do it in a fraction of the time and writes better than I ever could.
I also use ChatGPT to help me write code. While I’m a very good C programmer (I joke that English is my 2nd language – C is my first), on a given day I also use C++, JavaScript, PHP, Python, Perl, and various Linux scripting languages (bash, awk, sed, etc.), and remembering all of the different library calls and even the syntax for each one is impossible for me. So I ask ChatGPT to write a program block in the language of the moment, and it spits it out and explains each statement in detail. It’s only right about 80% of the time, but so what? I paste its solution in, and if it works, great; if not, oh well, then I do the Google search and figure it out the old hard way.
So even as a rank beginner using these tools, they already regularly save me quite a bit of time and effort.
I do concede that the generative AIs are pretty good at code generation. At times I’ve tested them by asking for extremely complicated bash or PowerShell commands (or shell scripts), and I think you’re right – about 80% of the time they come up with something workable.
Lol. What’s interesting is I clearly fall between these two very divergent opinions. Looking for facts and well-reasoned anything online is like looking for water in the desert. It’s there, but it ain’t obvious, and most people are going to die before finding it.
AI is a tool. It therefore is only as good as its wielder. That has literally been true since always. Use it right, you get the pastor who saves 10 hrs a week on his side hustle so he can spend more time with his family. Use it wrong and you get what JM referenced.