ChatGPT - a threat to the future of journalism?
Artificial intelligence (AI) is increasingly revolutionizing the media landscape and fundamentally changing how journalism is practiced. In this article, we explore how AI technologies are being used in journalistic practice, the challenges they pose and the opportunities they offer to improve the quality and efficiency of news production.
Definition: What is ChatGPT?
ChatGPT is a text generator based on artificial intelligence, currently built on OpenAI's GPT-4 architecture. This model uses machine learning and large amounts of text data to generate human-like text in natural language. ChatGPT can be used in a variety of ways, such as writing articles, answering questions or creating creative texts. Although ChatGPT is very powerful, it may provide inaccurate or incomplete information, as its knowledge is based on data available only up to September 2021.
ChatGPT is also the program that reached the milestone of 100 million users at a record pace. Not even social media services like Instagram grew that fast within such a short period of time.
It is important to note that the systems we call AI today are still predictive models: they calculate predictions. ChatGPT, for example, calculates the probability of certain words following one another. Today's AIs are narrow, not yet general. A narrow (specific) AI can perform a single task, whereas a general AI could handle all forms of intellectual work. The chatbot does not (yet?) have any so-called world knowledge. So far, general AIs have appeared only in science fiction, so we are still quite a while away from a Star Wars scenario with an AI like C-3PO.
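The "probability of certain words following one another" can be illustrated with a toy bigram model — a drastic simplification of what ChatGPT actually does, using a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a real model is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Return the probability of each word that can follow the given word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A real language model replaces these simple counts with a neural network over billions of parameters, but the basic idea — predicting the most probable continuation — is the same.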
What happens with a large language model?
A Large Language Model (LLM) is an AI application trained to generate human-like text based on input information. At its core, an LLM consists of an algorithm that analyzes large amounts of text, recognizing patterns and relationships in natural language. The best-known language models at the moment are the GPT series from OpenAI.
A Language Model works in three basic steps:
- Pre-training: In the first step, the LLM is trained on huge amounts of text, for example from the Internet. It learns basic language structures as well as complex contexts and information from various subject areas. The algorithm uses statistical methods to calculate the probabilities of successive words in a sequence.
- Fine-tuning: In the second step, the LLM is further trained on specific data sets or text sets that contain certain topics or styles. This makes it possible to teach the model a specific expertise or style of writing. Human feedback loops are often included to check the quality of the generated texts and adjust the model parameters accordingly.
- Generation: Finally, the LLM can be used to generate texts. The model is given a prompt or part of a text, whereupon it attempts to continue the text in a meaningful way based on the patterns and contexts it has learned. The text is generated token by token, where a token is usually a word, part of a word, or a punctuation mark. The quality and relevance of the generated texts depend on the complexity of the model and the quality of the training data.
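The generation step above can be sketched as a simple greedy loop: starting from a prompt, repeatedly append the most probable next token until the model has nothing more to say. The lookup table here is invented for illustration — a real model computes these predictions from billions of learned parameters:

```python
# Hypothetical "most probable next token" table, standing in for a trained model.
most_likely_next = {
    "The": "weather",
    "weather": "today",
    "today": "is",
    "is": "sunny",
    "sunny": ".",
}

def generate(prompt_token, max_tokens=10):
    """Continue a prompt token by token, always picking the most probable successor."""
    tokens = [prompt_token]
    while len(tokens) < max_tokens and tokens[-1] in most_likely_next:
        tokens.append(most_likely_next[tokens[-1]])
    return " ".join(tokens)

print(generate("The"))  # The weather today is sunny .
```

Real systems usually do not pick greedily but sample from the probability distribution, which is why the same prompt can yield different texts.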
A Large Language Model can be used in various areas such as automatic translation, text summarization, text classification, chatbots and many other applications. The real leap, however, comes when the trained model is further refined with feedback from users.
It is important to note that an LLM is only as good as the data on which it was trained. Therefore, models like GPT-4 cannot provide up-to-date information about events after September 2021. Ask the AI whether Lionel Messi is world champion and ChatGPT will answer in the negative — even though Argentina won the World Cup in December 2022.
What can ChatGPT do — and what can't it do (yet)?
In short: ChatGPT can understand texts and create texts. What sounds obvious at first glance is in fact groundbreaking. However, ChatGPT currently still has two Achilles' heels: truth and reliability.
Anyone who has worked with ChatGPT will have noticed that not all of its statements are coherent and true. In its current version, the tool still tends to hallucinate. For example, if you ask for the sources behind a given answer, ChatGPT complies with the request and names some. However, it is by no means certain that these sources actually support the answer. If you want to cite sources in your texts, you should verify them carefully.
Furthermore, ChatGPT still lacks reliability. If you repeatedly give ChatGPT the same question or task, it presents different answers. This is because AI models are based on probabilities, not on absolute truth.
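This variability can be sketched in a few lines: instead of always returning its single most likely answer, a model samples from a probability distribution, so repeated runs can produce different phrasings. The candidate answers and their probabilities below are invented for illustration:

```python
import random

# Hypothetical probabilities a model might assign to equivalent answer variants.
candidates = {
    "Paris is the capital of France.": 0.6,
    "The capital of France is Paris.": 0.3,
    "France's capital city is Paris.": 0.1,
}

def sample_answer(rng):
    """Draw one answer according to the model's probability distribution."""
    answers = list(candidates)
    weights = list(candidates.values())
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random()
print(sample_answer(rng))
print(sample_answer(rng))  # a second call may return a different variant
```

Sampling is a deliberate design choice — it makes generated text more varied and natural — but it is also exactly why asking the same question twice need not yield the same answer.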
How ChatGPT will change journalism
Artificial intelligence (AI) has the potential to revolutionize journalism in many ways. However, as with any technological innovation, there are both advantages and disadvantages that need to be considered.
It is understandable that people in journalism in particular are worried that their work could soon be taken over by bots. However, this concern is largely unfounded. After all, journalistic work does not only consist of writing texts, but above all of researching the content and context of events. When actually writing the final text, writers bring their findings together and organize what they have compiled.
In recent years, it has become increasingly clear that plain text content is hardly enough to generate visibility on the Internet. Anyone who wanted Google to rank them at the top needed not only a good text, but also links, videos or embedded feeds from social networks. The journalistic process became more and more elaborate; one can speak of a densification of tasks, because the final texts also have to be adapted to the various output channels.
Experts assume that it is precisely in this environment that the work of journalists can benefit from AI technology. For example, SEO titles and descriptions can easily be created by an AI, sparing SEO laymen the research. Routine, less creative work can thus be reduced in day-to-day business.
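In practice, such SEO support often boils down to a newsroom tool assembling a structured prompt from the article's metadata and sending it to a language model. A minimal sketch of the prompt-building step — the wording and parameters are assumptions, not a real product's API:

```python
def build_seo_prompt(topic, keywords, max_title_chars=60):
    """Assemble a prompt asking a language model for an SEO title and meta description."""
    return (
        f"Write an SEO title (max {max_title_chars} characters) and a meta "
        f"description for an article about {topic}. "
        f"Include the keywords: {', '.join(keywords)}."
    )

prompt = build_seo_prompt("AI in journalism", ["ChatGPT", "newsroom"])
print(prompt)
```

The returned string would then be sent to the model of choice; keeping the template in code makes the output format consistent across the whole editorial team.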
Another strength in the future will be the support of research. It will be possible to feed the AI with texts, search for specific contexts in these texts, and have the results displayed graphically.
Another advantage for publishers lies in the possibilities of using their own content. Publishers can thus provide their users with new interactive offers by using AI models that exclusively access their own content. For example, an advice chat is conceivable that uses the data from the publisher's own advice section.
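The core of such an advice chat is retrieval: the user's question is matched against the publisher's own articles, and only the best match is handed to the language model as context. A minimal word-overlap sketch — the articles and matching strategy are invented for illustration (real systems typically use semantic embeddings instead):

```python
import re

# Invented stand-ins for a publisher's own advice articles.
articles = {
    "Saving energy at home": "Lowering the thermostat and sealing windows cuts heating costs.",
    "Choosing a bank account": "Compare account fees and interest rates before switching banks.",
}

def words(text):
    """Lowercase a text and split it into a set of words, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_matching_article(question):
    """Return the (title, text) pair sharing the most words with the question."""
    q = words(question)
    return max(articles.items(), key=lambda item: len(q & words(item[0] + " " + item[1])))

title, text = best_matching_article("How can I cut my heating costs?")
print(title)  # Saving energy at home
```

Because the model only ever sees the retrieved article, the chat's answers stay grounded in the publisher's own content rather than in whatever the model memorized during training.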
The cost of producing a text has been falling for years, and the new possibilities will lead to a glut of AI-generated texts. Many of these texts will be of sufficient quality for certain areas of application and will make copywriters largely superfluous.
On the other hand, handwritten texts may gain in value for this very reason. Preparing a text creatively, enriching it with anecdotes or linking it intertextually remains the domain of human writers.
AI can automate simple and time-consuming tasks such as creating short news articles or compiling data. This allows journalists to invest more time in more demanding research and reporting. Furthermore, AI will increasingly help to increase efficiency in the editorial process, in particular by compiling research results faster or optimizing the flow of news. AI systems can help to verify facts in articles faster and more accurately, which contributes to the dissemination of reliable and credible information.
These enticing advantages are also offset by a number of disadvantages.
As disruptive as AI technologies may be, they also harbor risks. For example, the copyright situation for texts created by ChatGPT has not yet been clarified at all. It can be assumed that this area in particular will require a great deal of attention in the near future.
Automation could lead to the loss of some jobs in journalism, predominantly in areas where human skills can be replaced by AI. There is also a risk that the use of AI will lead to a greater emphasis on quantity rather than quality, possibly contributing to a deterioration in reporting.
AI systems can unintentionally adopt human biases and stereotypes contained in the data they are trained with. This can lead to biased reporting. Over-reliance on AI could result in basic journalistic skills being lost or neglected.
Handwritten content could gain value in an artificial intelligence (AI)-dominated world for a number of reasons:
- Handwritten texts convey emotions and personality better than machine-generated content. They convey an authentic and individual touch that enables readers to establish a stronger connection with the author.
- AI-based texts can be efficient and informative, but they often lack the creativity and originality that characterize handwritten texts. Especially in areas such as literature, columns or commentaries, the human factor can be crucial to stand out from the crowd.
- While artificial intelligence can recognize patterns and correlations, it is not (yet) able to think critically in the same way as a human. Handwritten content can better represent complex interrelationships and reflections, which is of high importance especially in opinion formation and investigative journalism.
Outlook: What's next?
ChatGPT will continue to evolve: the data base from which the AI draws its answers will keep growing, and the training will keep improving, making the results more accurate and more reliable. It will be exciting to see whether the AI will be able to access the Internet in real time in the future and relate current data to older data. It will also be exciting to see to what extent different AI models and programs can be combined with one another.