ChatGPT in science: Is AI suitable for texts in research?

18.04.2023 | An interview with group manager Dr. Volker Bruns

Everyone’s talking about ChatGPT. The AI-powered chatbot from US company OpenAI answers questions and produces texts – and is currently available to everyone for free. Volker Bruns, Group Manager Medical Image Processing in the Digital Health Systems department, “talked” to the bot about digital pathology. He found the results impressive, but they also caused him to wonder: What are the implications of bots like ChatGPT for scientific work? He talked to us about this in an interview.

Where did you get the idea for a “conversation” with ChatGPT?

Volker Bruns: From one of our licensees in the medical field. He spoke about his experience with the bot on LinkedIn. It sounded exciting, so I wanted to try it out.

Is ChatGPT suitable for the topic of digital pathology?

Bruns: Yes, absolutely. It was the licensee’s positive experience that gave me the idea in the first place. And it’s really remarkable how ChatGPT can answer such a range of medical questions. Of course, I was also curious about the bot’s expertise in my area, where I can judge how good the answers are and whether there are any wrong ones.

We added a bit of value to the results after the fact – for example, on the question of approved AI applications in pathology. The bot mentioned only two or three of them, but they weren’t quite right; one was just a research project. We wove our own research into the response and listed the commercial AIs we know of that are currently available in this area. But I definitely noticed that ChatGPT really understood the conversation; for example, I didn’t have to mention in every single question that we were talking about digital pathology. The bot also accepted our corrections.

Did you yourself gain any new insights?

Bruns: I didn’t gain any new knowledge about the subject, simply because this is my area of expertise. However, I’m currently doing research for an article on a topic I’m not quite as well-versed in. Here I first asked ChatGPT to give me definitions for all the terms I wasn’t sure about (“What is X?”). That produced several pages, which I read through at my leisure and thus got a good overview. The material also supplied me with a solid basis of paragraphs, sentences, or at least wording that I can then apply when doing my own writing. With official texts, however, I’d be more cautious here because of possible copyright issues and wouldn’t copy them verbatim.

What are the advantages of an AI like ChatGPT?

Volker Bruns: I believe it will be a helpful tool to increase productivity in creating texts. Here’s one everyday example: just as I occasionally use DeepL for minor translation tasks, I can imagine simply having ChatGPT open as I work and asking it to write a paragraph every now and then, which I would check and ultimately incorporate into my own writing. ChatGPT is also a good way to get acquainted with a topic: for instance, it provides you with the most important buzzwords and, if required, defines them for you, too.

Also, I can picture the bot helping when it comes to researching hard facts. At one point in my conversation with ChatGPT, I asked how many pathologists there are in Germany and what the ratio of pathologists to the general population is. However, we can’t rule out that the bot might inadvertently report incorrect figures. This highlights one of its biggest shortcomings: it doesn’t cite any scientific sources. If you ask for them, you get just a very brief and general overview of the literature it used without page references, so you still have to research and double-check every detail.

That’s why I think you have to carefully consider how to use the output skillfully and sensibly. If you just let the bot produce something and then use it basically sight unseen, you’ll quickly get burned. In any case, its responses don’t meet scientific standards.

So to use it properly, it’s best to have your own expertise and look over the results with a critical eye?

Volker Bruns: Exactly. That’s why I can understand, say, teachers who worry that students who don’t feel like doing their homework will simply accept ChatGPT’s results as plausible. At the same time, those students may lack the expertise to challenge the results at the right points.

This kind of critical reflection is also needed in research. Almost all research projects today are interdisciplinary, especially our work here in medical engineering. Ultimately, we’re computer scientists who work a lot with physicians. For example, we recently wrote a proposal for something related to brain tumors. This isn’t my area of expertise and I didn’t want to take up too much of the physicians’ time, so before I met with them again, I did some research on the subject. For this I turned again to ChatGPT. But writing parts of the proposal directly on this basis would certainly have been out of the question. How are you supposed to tell if the answers are correct, or if something important has been left out? I prefer to rely on the expertise of our clinical partners.

In the end, however, you also have to realize that, especially in research, we have many contemplative thinkers who go through life with a healthy skepticism and who thus run little risk of trusting an AI like this too much. With this in mind, I could see ChatGPT really becoming a helpful tool in academia, because users tend to be well-educated and are naturally the kind of people who question things. 

What role do you see ChatGPT playing in the near future?

Volker Bruns: I would still be rather hesitant myself and advise against relying on it too heavily. It simply isn’t clear what will happen to it now – will it stay available in this form for the time being? I imagine Microsoft and OpenAI will eventually end the testing phase. Also, a subscription model for 20 euros per month has just been launched. The free version is still available, but who knows for how long? 

Article by Lucas Westermann, Editor Fraunhofer IIS Magazine

Infobox: OpenGPT-X for Europe

To enable European companies to exploit the potential for innovation while remaining digitally independent, "OpenGPT-X" is creating a large-scale AI language model for Europe. Under the leadership of the Fraunhofer Institutes for Intelligent Analysis and Information Systems IAIS and for Integrated Circuits IIS, a consortium of ten partners from business, science and the media industry is developing the new language AI. OpenGPT-X is creating intelligent language applications that will be available to companies across Europe.

 
