There was an extraordinary demonstration at Google's latest conference yesterday. It involved a phone call to book a haircut, which may not sound like anything out of the ordinary, but it most definitely was. The customer's side of the conversation was handled not by a human but by an automated Google Assistant. Automated phone calls have been around for many years, so this may still not sound revolutionary. However, there was a significant difference from other automated systems: the voice asked exactly the right questions, paused in the right places, and even threw in the odd "mmhmm" for realism.
The most astonishing aspect of the demonstration was that the person on the other end never suspected they were talking to an AI. The conversation was so realistic that the automation went entirely undetected. This represents a victory for Google in what they set out to achieve, but it raises several questions. Is it ethical to deceive callers into believing they are talking to a human when they are not? Should people be discussing potentially sensitive information with an AI? The answer to both questions is surely no.
An easy target for hackers?
Another concern is that these systems could become a ready target for hackers, whose activity is already rising sharply without any further encouragement. Security breaches have increased in recent years, with several large businesses suffering as a result. If hackers can breach companies of that scale, they are unlikely to have much trouble with an automated assistant. However sophisticated these systems are, they cannot match human judgement, so introducing them only adds to an already troublesome situation.
Could result in more fraudulent activity
If such automated systems become commonplace for making bookings, more fraudulent activity is likely to follow. People will sense an opportunity to take advantage of the AI by pretending to be someone they are not. Once dealing with these systems becomes routine, the assistant's convincing realism will work against the businesses using it: individuals will make false claims about who they are, and the natural-sounding Google Assistant will make suspicious activity harder to detect.
An interesting technological advancement it may be, but the risk seems to outweigh the reward.