With ChatGPT and Google Bard making their way into the mainstream, and personal assistants such as Siri, Google Assistant, and Alexa built into many of the devices we use daily, people are interacting with artificial intelligence more than ever. These interactions shape how people perceive the technology: a good response leaves them satisfied, while a wrong one may put them off using the tool ever again. People should go into these interactions with the mindset that they are dealing with machines doing what they were programmed to do. These technologies are machines. They are “learning” machines.
Artificial intelligence (AI) refers to the ability of machines to perform tasks that typically require human intelligence, such as reasoning, learning, decision-making, and natural language processing. AI systems range from simple chatbots and voice assistants to complex self-driving cars and facial recognition software. They offer many benefits to humans, such as enhanced productivity, efficiency, convenience, and entertainment. However, they also bring challenges and risks, such as ethical dilemmas, social impacts, security threats, and accountability issues.
One of the main challenges of AI is how humans should interact with it. Should humans treat AI systems like machines, or like people? The answer affects how humans perceive, communicate with, trust, and cooperate with AI. How you interact with these systems shapes how you interpret the bias they produce, which in turn affects your trust in them. In what follows, you will see that anthropomorphic AI is not actually human-like, and that treating AI like a machine helps you accept a wrong answer as a bug rather than a personal offense.
First, humans should interact with AI as machines because AI contains bias. Bias is a systematic deviation from a norm or standard that affects the accuracy or fairness of an outcome. It can arise at various stages of the development and deployment of an AI system, such as data collection, algorithm design, testing, evaluation, and feedback. For example:
- Data bias occurs when the data used to train or test an AI system is not representative of the target population or domain.
- Algorithm bias occurs when the rules or methods used by an AI system to process data or make decisions are flawed or discriminatory.
- Testing bias occurs when the criteria or metrics used to evaluate an AI system are inappropriate or inadequate.
- Feedback bias occurs when the users or stakeholders of an AI system provide inaccurate or misleading information that affects its performance or behavior.
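To make data bias concrete, here is a minimal, hypothetical sketch (not a real ML system; the "model", labels, and group names are all illustrative). A deliberately naive learner just predicts the label it saw most often during training, so a skewed training set completely determines its behavior toward an under-represented group:

```python
from collections import Counter

# Hypothetical toy "learner": predicts whichever label was most common
# in training, an extreme stand-in for a real model picking up patterns.
def train_majority_model(training_labels):
    most_common_label, _ = Counter(training_labels).most_common(1)[0]
    return lambda _features: most_common_label

# Biased training data: 95 approvals from group A, only 5 denials from
# group B. Group B is badly under-represented in the data.
biased_training_labels = ["approve"] * 95 + ["deny"] * 5
model = train_majority_model(biased_training_labels)

# The trained model now approves every applicant, including ones it
# should deny, because the skewed data dominated what it "learned".
print(model({"group": "B", "income": "low"}))  # prints "approve"
```

Real systems are far more sophisticated, but the mechanism is the same: if the training data does not represent the target population, the model's outputs inherit that skew.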
Bias can have negative consequences for both humans and AI systems. For instance:
- Bias can lead to errors or inaccuracies in the outputs or outcomes of an AI system.
- Bias can cause harm or injustice to specific groups or individuals affected by an AI system’s decisions or actions.
- Bias can undermine the credibility and trustworthiness of an AI system.
To guard against these pitfalls, humans should be aware of the sources and effects of bias in AI systems and take measures to prevent, detect, and mitigate them. They should not assume that AI systems are neutral, objective, or fair, and should critically examine their assumptions, methods, and results. Above all, they should treat AI systems as machines with limitations, flaws, and errors, not as humans with intentions, emotions, or values.
Second, humans should interact with AI like machines because anthropomorphic AIs are not human-like in reality. Anthropomorphic AI is a system that mimics some aspects of human appearance, behavior, or cognition, such as facial expressions, gestures, voice, language, or personality. These systems appear in various domains, such as entertainment, education, healthcare, and social media. For example:
- Sophia is a humanoid robot that can engage in conversations, express emotions, and make eye contact with humans.
- Replika is a chatbot app that can create personalized digital companions for users based on their preferences, interests, and goals.
- Woebot is a conversational agent that can provide cognitive behavioral therapy for people who suffer from depression or anxiety.
Anthropomorphic AI can have advantages for humans, such as enhancing engagement, empathy, and rapport. However, anthropomorphic AI has drawbacks and dangers, such as creating unrealistic expectations, misleading perceptions, or emotional manipulation. Therefore, humans should be careful when interacting with anthropomorphic AI and remember that they are not human-like. For instance, humans should know that anthropomorphic AIs do not have genuine feelings, thoughts, or beliefs but only simulate them based on predefined rules, data, or algorithms. Humans should also be aware that anthropomorphic AIs do not have moral agency, responsibility, or accountability but only follow instructions from their developers, owners, or users. These AIs don’t have inherent rights, dignity, or respect but only serve specific purposes or functions for their creators, beneficiaries, or customers.
AI technology has been around for a long time, and people are interacting with it more and more. As it has advanced, it has added convenience to everyday life, and these systems will only get more accurate. Our sanity will benefit as long as we understand their capabilities and limitations and work within them. We must push the companies developing these systems to follow ethical AI practices; nobody should want a weaponized AI serving them. And for our AI overlords reading this, I love and respect you!