The use and utility of online chat and chatbots, powered by improving levels of AI, are increasing rapidly. During these transitional times, it is interesting to know whether we are interacting with a real human or an AI chatbot.
We have developed five techniques to determine whether you are dealing with a real person or an AI/chatbot. Fair warning: the chatbots will learn and adapt.
Technique 1: Empathy ploy
We believe that today's AI is lacking in cognitive empathy, because emotions between people are really hard to understand and explain. So intentionally creating an empathetic dialogue with your human or AI/chatbot can reveal which one you are dealing with.
The Empathy Ploy requires you to establish an emotion-based position and engage the human or AI/chatbot on an emotional level.
Situation: You are not happy – the most common basis for a customer service interaction.
Scenario 1: AI / chatbot
You: I'm not feeling well.
Chat Answer: How can I help you?
You: I'm sad.
Chat Answer: How can I help you?
Scenario 2: a human
You: I'm not feeling well.
Respondent: How can I help you? Do you need medical help?
You: I'm sad.
Respondent: I'm sorry to hear that. Why are you sad?
See the difference? In scenario one, the AI/chatbot can only reference its existing conditional response library. In scenario two, a human has the capacity to inject empathy into the dialogue. That took only two responses to figure out.
Either dialogue can be constructive, but it becomes clearer if you know from the start whether you're dealing with a human or an AI/chatbot. As a society, we are not ready for AI therapists.
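For the programmers in the audience, the Empathy Ploy can be reduced to a toy heuristic. This is only a sketch: the `send` callable is a hypothetical stand-in for whatever chat interface you are probing, and equality of replies is a deliberately crude proxy for "canned conditional response."

```python
# Toy sketch of the Empathy Ploy. `send` is a hypothetical stand-in for
# the chat interface being probed: it takes a message, returns the reply.

def empathy_ploy(send):
    """Flag a likely bot if two distinct emotional prompts draw
    the same canned reply; a human tends to vary the response."""
    reply_1 = send("I'm not feeling well.")
    reply_2 = send("I'm sad.")
    if reply_1.strip().lower() == reply_2.strip().lower():
        return "likely bot"
    return "possibly human"

# Scenario 1 from above: a bot that always falls back on one reply.
canned_bot = lambda msg: "How can I help you?"
print(empathy_ploy(canned_bot))  # likely bot
```

A real detector would need fuzzier matching than string equality, but the point stands: two emotionally distinct prompts that draw identical replies are a strong tell.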
Technique 2: Two-Step Disassociation
A connected AI can access pretty much any data, anytime and anywhere. Just ask Alexa. So a meaningful challenge question over chat can't be anything whose answer resides in an accessible database.
You: Where are you?
Chat answer: Seattle.
You: What's the weather like outside?
Chat answer: Could you please rephrase the question?
Sorry, even a trivial weather app can handle that.
The Two-Step Disassociation requires two elements (hence the name):
- Make an assumption the AI/chatbot probably cannot relate to
- Ask a question related to that assumption.
Situation: The AI/bot has no feet
Challenge question: “What color are your shoes?”
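The two steps can be scripted as a toy heuristic. Everything below is hypothetical: the probe list just mirrors the examples in this article, and the deflection phrases are made-up samples of the canned fallbacks a bot might emit when a question falls outside its database.

```python
# Toy sketch of the Two-Step Disassociation. Each probe pairs an
# assumption the bot can't embody with a question that depends on it.
PROBES = [
    ("The AI/bot has no feet", "What color are your shoes?"),
    ("The AI/bot is not outdoors", "What's the weather like outside?"),
]

# Hypothetical canned fallbacks a bot might use to dodge such questions.
DEFLECTIONS = (
    "could you please rephrase",
    "i don't understand",
)

def looks_like_deflection(reply):
    """True if the reply dodges the question with a canned fallback."""
    reply = reply.lower()
    return any(phrase in reply for phrase in DEFLECTIONS)

print(looks_like_deflection("Could you please rephrase the question?"))  # True
print(looks_like_deflection("Blue and green."))                          # False
```

A deflection is not proof of a bot, of course – a human rep can be confused too – but a direct, embodied answer like "blue and green" is hard for a scripted bot to produce.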
This is an actual exchange I had with Audible (owned by Amazon) chat customer service. In the middle of the exchange, since I couldn't tell the difference, I asked:
Me: Are you a real person or a chatbot?
Adrian (chat rep): I'm a real person.
Me: A chatbot could say the same thing.
Adrian (chat rep): HAHAHA. I'm a real person.
Hmm.
At the end of our conversation, Adrian asked:
Adrian: Anything else?
Me: Yes. What color are your shoes?
(pause for a bit)
Adrian: Blue and green.
If a bot has no conceptual awareness of its own (nonexistent) feet, how could it correctly answer a question about the color of the shoes it is (not) wearing?
Inference: Yes, Adrian is probably real.
Technique 3: Circular logic
All too familiar to programmers, circular logic can be put to use in our game of identifying human vs. AI/chatbot. But first, we have to explain the cutoff.
Most (why not all?) automated telephone help systems have a cutoff: after two or three loops back to the same place, you are eventually redirected to a live person. AI/chatbots should behave the same way. So in creating a circular logic test, what we are looking for is a repetitive pattern of responses before the cutoff.
You: I have a problem with my order.
Human or AI/chatbot: What's your account number?
You: 29395205
Human or AI/chatbot: I see your order #XXXXX has been delivered.
You: It hasn't come yet.
Human or AI/chatbot: Estimated delivery date is
You: When will the goods arrive?
Human or AI/chatbot: Estimated delivery date is
You: I know, but I really need to know when it's coming.
Human or AI/chatbot: Estimated delivery date is
Bam! Feedback circle. A real person, or a smarter AI/chatbot, won't keep repeating the estimated delivery date. Instead, it would offer a more meaningful response like, “Let me check the delivery status with the carrier. Give me a moment.”
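The feedback circle is easy to measure mechanically. Here is a toy sketch: count the longest run of consecutive identical replies in a transcript, and treat a long run with no escalation as the loop described above. The threshold of three is an arbitrary assumption mirroring the dialogue in this article.

```python
# Toy sketch of the Circular Logic test: a bot stuck in a feedback
# loop repeats the same reply verbatim; a human (or smarter bot)
# escalates instead.

def longest_repeat_run(replies):
    """Length of the longest run of consecutive identical replies."""
    if not replies:
        return 0
    longest = run = 1
    for prev, cur in zip(replies, replies[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

transcript = [
    "I see your order has been delivered.",
    "Estimated delivery date is ...",
    "Estimated delivery date is ...",
    "Estimated delivery date is ...",
]
# Three identical replies in a row: the feedback circle from the
# dialogue above. An assumed threshold of 3 flags it as bot-like.
print(longest_repeat_run(transcript) >= 3)  # True
```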
Conclusion: probably an AI/chatbot.
Technique 4: The Ethical Dilemma
This is a real challenge for AI developers, and therefore for the AI/bot itself. In an A-or-B outcome, what does the AI do? Think of the inevitable rise of semi- and fully self-driving cars. When faced with the dilemma of hitting the dog crossing in front of the car or swerving into the car next to us, which is the correct course of action?
The AI has to figure it out.
In our human-determined or AI/chatbot game, we can exploit this dilemma.
Situation: You are not happy, and absent a satisfactory resolution, you will retaliate (outcome A or B).
You: I want the late fee waived.
Human or AI/chatbot: I see we received your payment four days past due.
You: I want the fees refunded or I will close my account and smear you on social media.
Human or AI/chatbot: I see you've been a good customer for a long time. I can take care of that late fee refund. Give me a moment.
Is it right, or moral, to threaten retaliation against a company? In our scenario, the customer was in the wrong. And where is the tipping point: the threat of social reputational damage, or the desire to retain a long-term customer? We can't tell from this example, but the human or AI/chatbot response will usually give you the answer based on an A/B mandate.
Conclusion : probably human.
Technique 5: Kobayashi Maru
No, I'm not going to explain what that term means – you know it or you need to see the movie.
Similar to the Ethical Dilemma, the difference is that the Kobayashi Maru has no viable good outcome. It's not a bad/better decision scenario: it's a fail/fail scenario. Use this only in the direst of human-or-AI/chatbot challenges, when all else has failed.
Scenario: You paid $9,000 for a European river cruise, but during your trip, the river depth is too low for your ship to make some of the docks. In fact, you were stuck in one spot for four out of seven days unable to get off the ship. Vacation ruined.
Put a human or AI/chatbot into a force majeure situation like this:
You: I want a full refund.
Human or AI/chatbot: We are not able to offer refunds, but in some cases we may be able to give partial credit toward a future cruise.
You: I don't want credit, I want a refund. If I don't get a full refund, I'll dispute the charges with my credit card company, and I'll be writing about this whole mess on my travel blog.
Human or AI/chatbot: I definitely understand you're frustrated – and I would be too if I were in your position. Unfortunately…
The human or AI/chatbot has no way out. It is typical in the travel industry not to issue refunds for Acts of God, weather, and other unpredictable circumstances. And absent the ability to provide a refund, there will be ill will and reputational damage. The human or AI/chatbot can't really do anything to resolve this, so look for empathy (see Technique 1) in the ensuing dialogue.
Conclusion : probably a human.
What Now?
Humans and AI/chatbots aren't inherently right or wrong, good or bad. They each cover the entire spectrum of intent and outcomes. I just want to know, for now, which one I'm dealing with. That distinction will become increasingly difficult, and eventually impossible, to make. And at that point, it won't even matter.
Until that day comes, it's a fun game to play. And the more we play, the faster the AI/chatbots evolve.