Mikael Wehner


How I failed at using ChatGPT as a research assistant

During the pandemic I was researching old video mixers. I spent months trawling the internet for facts, talking to people, reading books, and looking through old video-business magazines. Some facts were really hard to find: I remember finally getting a reply from Panasonic after several attempts, having sent a Google-translated email to their headquarters in Japan.

Like many of us, I’ve been trying out OpenAI’s tools to understand how they work and what they can be used for. I learnt the importance of writing very specific prompts. I have now also learnt that while ChatGPT can be very helpful at certain tasks, at others it can behave like a lazy teenager doing homework. How did I come to this conclusion?


I came into this thinking that since OpenAI’s models have been trained on so much information, ChatGPT might find more answers, or more precise answers, than I had been able to find myself. Since I had my own research to compare against, I asked the AI the very same questions.


At first glance the output looked good and arrived fast: basically, all the work I had spent months doing was produced within a few minutes. I had asked the AI to provide references for every fact so that I could check them. But when I compared the output with my research, I found that the AI had jumped to the wrong conclusion in most cases, and the links to its sources were either dead or pointed to something else entirely.


To test ChatGPT even further, I asked it to compare the Panasonic MX20 and AVE55 mixers, which I know are very similar, basically two versions of the same product. The AI made up a bunch of differences that do not even exist.

My conclusion is that ChatGPT 4.0 is totally unreliable for researching historical facts, at least when it comes to video mixers. It jumps to conclusions and tries to pass off false information as fact while sounding really convincing. Am I surprised? Not really, but I didn’t expect the AI to behave as if it were so sure of itself. I think it would be both sad and dangerous if people did research this way and took the findings as the ultimate truth.

Please use AI responsibly!

(The image was generated by Midjourney: my first humble attempt at letting the AI create a ’90s Panasonic video mixer.)
