| 15th Dec 2020 | 3 min read
Artificial intelligence is widely accepted as the development that will take online IT systems to the next level, thanks to its massive potential to predict trends and reproduce user activity. Usually mentioned in the same breath are the terms machine learning and deep learning, which refer to the automated techniques by which AI systems learn from the vast amounts of information available in big data.
Sadly, though not unexpectedly, there are security concerns associated with deep learning; specifically, the emergence of the deepfake. Thanks to big data, a person's identity can sometimes be replicated so convincingly that even the victims themselves can scarcely believe it isn't them. Needless to say, deepfakes need to be taken seriously by the online verification industry.
Fraudulent identity creation of the sort represented by deepfakes is most likely to be attempted during onboarding. This is when meticulously recreated human behaviour can be generated with CGI, enough (the fraudsters may hope) to fool electronic know your customer (eKYC) software systems.
As documents themselves become ever harder to fake, hackers may now be hoping to use deep learning to attack the next stage of the process, having skipped as many of the documentary checks as possible. With a continuing arms race being fought to create unforgeable e-docs, it is at least conceivable that a fake human being could help get an application through the onboarding process.
For the more paranoid (or careful) members of the online security community, a fake video of Barack Obama talking about flying saucers will no doubt send shivers down the spine. With CGI and deep learning so highly advanced, could this represent a real threat to onboarding?
Fortunately, at least for the moment, security experts do not see deepfake techniques as a threat to online identity verification. As things stand, two factors make it unlikely that an artificially generated online human presence could fool eKYC processes.
Firstly, biometric identification technology is extremely effective. Finely tuned mathematical modelling enables security-checking software to detect whether a video stream is real-time, real-person activity rather than a pre-recorded file generated by computer. Ironically, the computer doing the detecting knows when it's watching something created by another computer.
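To make that first point concrete, here is a minimal, hypothetical sketch of one liveness signal of the kind such systems can draw on: a challenge-response check that measures whether the motion in the video actually reacts to a randomly timed prompt (for example, "turn your head now"). A pre-recorded or synthesised stream cannot respond to a prompt it has never seen. The sketch uses OpenCV's optical flow; the function names, frame-handling, and threshold are illustrative assumptions, not any vendor's real implementation, and production eKYC systems combine many stronger signals.

```python
# A minimal, illustrative liveness heuristic (assumed design, not a
# production eKYC check): compare optical-flow motion before and after
# a randomly timed challenge prompt.
import numpy as np
import cv2


def mean_flow_magnitude(prev_gray, next_gray):
    """Average optical-flow magnitude between two 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        0.5, 3, 15, 3, 5, 1.2, 0)          # standard Farneback parameters
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion strength
    return float(magnitude.mean())


def responds_to_challenge(frames_before, frames_after, ratio_threshold=2.0):
    """Return True if motion rises markedly after the challenge prompt.

    frames_before / frames_after: lists of at least two grayscale frames
    captured around the moment the user is asked to move. The 2.0 ratio
    threshold is a hypothetical placeholder for illustration only.
    """
    before = np.mean([mean_flow_magnitude(a, b)
                      for a, b in zip(frames_before, frames_before[1:])])
    after = np.mean([mean_flow_magnitude(a, b)
                     for a, b in zip(frames_after, frames_after[1:])])
    return after > ratio_threshold * max(before, 1e-6)
```

Because the prompt is issued at a random moment during the live session, a replayed or computer-generated clip shows no correlated burst of motion, which is one reason real-time checks remain hard to spoof.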
Secondly, the amount of source material needed to create a deepfake is huge. Hoaxes posted online of politicians and celebrities saying things they never said are only possible because thousands of hours of high-definition video of those individuals are available for the deepfakers to work with. For the average individual, that amount of footage simply does not exist.
As with so much else in the world of online identity verification, there is no room for complacency. Fraudsters and hackers continuously strive to find ways to fool checking systems, which in turn become ever more sophisticated.
At present, hologram detection is just one of the many hidden features that careful security software can employ to verify documents, while real-time streaming of real people during onboarding separates the genuine from the fake. For the moment at least, deepfakes are not fooling the online verification industry.