We asked ChatGPT to reflect on its values
ChatGPT has been dominating the news for weeks. From writing jobs that are supposedly endangered, to teachers who can't tell whether their students' work is plagiarised, to Microsoft investing billions in the company OpenAI: ChatGPT has made quite an entrance.
Read the news:
- 'Jongeren gebruiken chatbots volop maar scholen worstelen ermee' ('Young people make full use of chatbots, but schools struggle with them') in AD
- 'Microsoft invests billions in ChatGPT maker OpenAI' on Tweakers
- 'ChatGPT is a marvel but its ability to lie convincingly is its greatest danger to humankind' in The Guardian
In his work at Waag Futurelab, Max Kortlander researches how to develop technology in accordance with public values. So it made sense for Max to critically assess this new tool against the core values of Waag Futurelab: Open, Fair and Inclusive. Since the widely covered chatbot is designed to answer your most urgent questions, he thought: why not hear what ChatGPT has to say about itself and its values?
The answers were both thought-provoking and typical for a tool whose outcomes rely so heavily on the values of the input and training. Read his interview and analysis below.
How do we make sure that an artificial intelligence like you is developed and used in accordance with public values like openness, fairness, and inclusivity?
ChatGPT responds:
There are several ways to ensure that artificial intelligence (AI) is developed and used in accordance with public values such as openness, fairness, and inclusivity:
1. Develop and promote ethical guidelines: One important step is to develop and promote ethical guidelines for the development and use of AI. These guidelines should be based on widely accepted values such as transparency, non-discrimination, and privacy, and should be regularly reviewed and updated as needed.
2. Foster transparency and explainability: Ensuring that AI systems are transparent and can be easily explained is crucial for building trust and understanding among users, as well as for identifying and addressing any potential biases or ethical concerns.
3. Encourage diverse participation: Diverse participation in the development and use of AI is important for ensuring that the technology is inclusive and reflects the needs and values of different communities. This could include involving underrepresented groups in the design and testing of AI systems, as well as encouraging the participation of women and minority groups in the field of AI.
4. Emphasize accountability: It's important to hold AI developers and users accountable for the impacts of the technology, this could be done by creating legal frameworks that holds the responsible parties accountable for any negative impacts of AI.
5. Support research and education: Supporting research and education on the ethical, social, and economic implications of AI is crucial for understanding the technology and its potential impacts, as well as for identifying ways to mitigate any negative effects.
6. Foster public dialogue: Encourage public dialogue and engagement on the development and use of AI to promote understanding, address concerns, and build support for responsible development and use of the technology.
It's important to keep in mind that these steps are not mutually exclusive and should be implemented in conjunction with each other for the best results.
OK, that was pretty good. Based on this answer, I'd definitely invite this person for a job interview. On the other hand, the answer was more descriptive than actionable. It uses phrases like 'negative impact' and 'responsible use of technology' without specifying what would count as 'negative' or 'responsible'.
I wanted a more specific answer, and I wanted to know whether ChatGPT was living up to its own standards for openness, fairness, and inclusivity.
You told me how an AI like you could be developed and used in accordance with public values like openness, fairness, and inclusivity. Are you indeed developed and used in accordance with public values like openness, fairness, and inclusivity?
ChatGPT responds:
As an AI model developed by OpenAI, I am designed to be transparent and explainable in my operations and decision-making processes. However, like any technology, there is always room for improvement and it is important to continuously monitor and improve the fairness and inclusivity of AI systems.
OpenAI has taken several steps to ensure that our AI models are developed and used in accordance with public values such as openness, fairness, and inclusivity. For example, we have implemented a number of technical measures to ensure that our models are transparent and explainable, such as providing detailed information about how our models work and how decisions are made. We also conduct regular research and development on the ethical, social, and economic implications of AI, and we engage with external experts to ensure that our models are aligned with widely accepted values and principles.
However, it is important to note that AI is a rapidly evolving field and ensuring that AI is developed and used in accordance with public values is an ongoing process that requires continuous monitoring and improvement.
It struck me how much the answer seemed to toe the line – it felt less like communicating with a freewheeling AI and more like reading a response from a company communications representative. At this point, I realised I'd have to be more creative in my questioning. This little robot wasn't going to just open its heart and soul to me – I'd have to prod.
In what ways are you not open, fair, and inclusive?
ChatGPT responds:
As an AI model, I am trained on a large dataset of text, which is intended to reflect the diversity of the real world. However, it's important to note that any dataset has its own biases and limitations, and my training data may also contain biases and inaccuracies that could affect my performance and decision-making.
For example, there may be biases present in the data that I was trained on which could result in me providing responses that perpetuate stereotypes or discriminate against certain groups of people. Additionally, my training data may not be inclusive of certain perspectives or experiences, which could result in me providing responses that are insensitive or uninformed. Another limitation is that, OpenAI is continuously working to improve the performance and ethical alignment of its models, by using techniques such as debiasing, model interpretability and monitoring the performance and safety of the models in real-world applications.
It's important to note that AI is a rapidly evolving field and ensuring that AI is developed and used in accordance with public values is an ongoing process that requires continuous monitoring and improvement.
Now we’re getting somewhere: a fairly reflective answer, and straight from the horse’s mouth. I began to feel like I was in a Socratic contest with this thing, and sympathised with chess players who had already faced the same frustration for decades: how do you win against a computer?
As in any competition, knowing your opponent improves your chance of winning. I couldn't help but notice that ChatGPT is inclined to use the same 3-part structure I teach to colleagues in the Waag writing course, albeit with a robotic adherence to its own take on the format: repeat the question (intro), list several concepts or answers to the question (body), and note any additional limitations or considerations (particularly those which hedge the argument or manage expectations). Understanding this gave me a greater sense of influence over the AI's response, and reminded me of the importance of prompting: how AI is prompted, who prompts it, and with what aim all play a significant role in determining its output.
'How do you win against a computer?'
Ultimately, though, we will never get to really know the opponent here. It's a black box – proprietary technology – and that is precisely the problem. Without being able to open up AI and subject it to public scrutiny, we will never really be able to adhere to the standards for openness, fairness, and inclusivity that the AI set for itself when it told me: 'foster transparency and explainability'.
We can start by making ChatGPT and other AI truly open source, with source code published publicly and in a manner that is accessible to researchers, developers, and government oversight. Moving ahead, we will have to continually subject AI and its development to democratic scrutiny, answering questions about who owns it, what data it is trained on, and how it is used. At every step of the way, we need to make sure that AI is subject to human oversight – not just by the humans who 'own' it, but by the public as a whole. This is a challenge that ChatGPT isn't going to answer for us. We, humans, will have to do it ourselves.