By Aaron Miller
As a solutions engineer here at Agent.ai, I've spent the better part of a year interacting with artificial intelligence. Looking back, a few takeaways:
1. It’s not as smart as you think (or hope). Or fear.
2. Your job is not in any immediate, nor mid-term, danger.
3. It requires careful supervision.
For now, AI is best suited to narrow interactions where a conversational interface feels natural. Keep in mind that humans are very good at judging the intelligence and abilities of other humans, so you need a way for someone impatient with a low-level chat to escalate to a real person. As tempting and as fun as it might be, the goal of chat right now shouldn't be to fool another human. The goal is to deliver basic information quickly, efficiently, and economically (What are your hours? How much does your product cost? Where is my order?) so that the time actual humans spend with customers can be devoted to the nuanced issues that require an actual human to answer.
With respect to the hype surrounding AI right now, it seems our imaginations will always outstrip our abilities. And, relative to AI, that is a unique and valuable human trait! Still, AI offers real efficiencies today, and its promise is that it can learn, so we're all hopeful it will soon be even more useful. Just don't expect to set it up and forget about it. Well, you could, but the results won't be what you desired. The problems are: A) it can't learn that quickly, B) it's not yet that smart, and, most importantly, C) your users aren't that forgiving. Nor should they be. In fact, it's quite possible that modern technologies have made us all less patient and less forgiving, given all the automation we now expect as part of the modern experience: two-day or same-day delivery, ride shares that arrive in under ten minutes, email that feels too slow, the person we're chatting with who takes sooo long to type a reply. You get the point.
Viewed this way, with an understanding of the ever-changing face of customer interactions and their (really, our) evolving expectations, we need not worry about a jobless future. Rather, much as computers and the internet did before it, AI seems likely to augment what we do for a long time to come. It's reasonable to expect that as AI gets smarter, we will too, though in less predictable ways. For now, it's better to think about which use cases benefit first and most from augmentation than to worry about being replaced by machines. Augmenting each other helps each of us do what we do well even better, just as working on teams does. Besides, it's not as if everything that needs doing is already being done. But that's a topic for another post.