The main limitation of GenAI tools is the possibility of obtaining incorrect results. GenAI is not flawless: outputs may be wrong or may refer to non-existent sources (so-called “hallucinations”), which can contribute to the spread of disinformation. OpenAI, the company behind ChatGPT, writes in its terms of use:
“(d) Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”
When using GenAI tools, you should also remember that the same prompt may produce different answers each time. Generative AI tools are not neutral: they base their answers on content produced by humans, which may contain biases and stereotypes, and much of that content has never been verified in any way. Because much of it is also unavailable in national languages, answers tend to be dominated by English-language sources, and unequal access to GenAI may deepen the “digital divide” between richer and poorer countries. GenAI models are often trained on copyrighted materials, so users working with AI-generated output may unknowingly infringe copyright. Finally, GenAI tools do not guarantee privacy or the protection of sensitive data, which is worth remembering when, for example, processing the results of research you have conducted: a model should not be fed any data that you would not publish on a public website.