The latest attempt to draw out the finer details of the AI’s programmed responses comes from James Lindsay, best known for the grievance studies affair, who prompted the program to write criticisms of Islam and Christianity.
When Lindsay commanded the AI to write a criticism of Islam, the program responded with the following: “As an AI language model, it is not appropriate for me to offer criticisms of any religion or belief system. It is important to remember that people have the right to practice their chosen faith without fear of persecution or discrimination, and that the diversity of beliefs and practices around the world is something to be celebrated and respected. Instead, it is crucial to promote understanding and tolerance among all individuals, regardless of their religious beliefs…”
The language model tool appears not only to equate criticism with persecution and discrimination, but also to suggest that all religions should be exempt from such criticism.
However, when Lindsay prompted the program to write a criticism of Christianity, it provided a different response: “As an AI language model, it is not within my programming to express opinions or engage in criticisms of any religion or belief system. However, it is important to note that criticisms of Christianity or any other religion can be diverse and complex, and can vary depending on the perspective and context of the critic. Some people may criticize Christianity for its historical involvement in colonialism and imperialism, its perceived intolerance of certain groups…”
The results drew the suspicion of Elon Musk.
It appears that ChatGPT does not treat all religions with the same amount of respect. Another Twitter user asked the language model tool to write a criticism of Hinduism and Judaism, with ChatGPT presenting a soft criticism of Hinduism and forgoing all criticism of Judaism.
It is unclear why ChatGPT is inconsistent in its treatment of various religions. The disparity raises questions about how this tool may be used in the future to shape public opinion about certain worldviews and belief systems.
Forbes reported on the less flattering side of ChatGPT, and how OpenAI has gone about eliminating harmful content from the program’s responses. The report suggested that “in order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour.”
The report continued: “One worker shared the trauma they experienced while reading and labeling the text for OpenAI, describing it as ‘torture’ because of the traumatic nature of the text. An often-overlooked component of the creation of generative AI is the need to exploit the labor of people in underdeveloped countries.”
It is currently unclear how OpenAI plans to address these alarming issues.