In an indictment announced last week, a now-former engineer at Google was accused of sharing proprietary technology with foreign countries, putting at risk not only the company's sensitive technological know-how but Western economic superiority more broadly. Linwei Ding, whose past participation in a Chinese government-backed tech incubator should have raised alarms when he applied for his job at Google, faces up to 15 years in prison if convicted of espionage, along with 10 years per count for the theft of trade secrets. Having worked at Google since 2019, Ding had access to sensitive knowledge related to the company's artificial intelligence (AI) programs, and specifically to the development of its AI program Gemini, currently installed on millions of Google accounts globally. The case highlights the growing concern over technological espionage: individuals with access to much-sought-after knowledge sharing it, in exchange for payment, with dangerous foreign actors who have a vested interest in acquiring such sensitive information.
The threat to American economic interests and to the security of company data stems not only from foreign citizens who have gained access to sensitive American technology but also, at times, from American citizens themselves, looking to make money with little regard for the implications. Chris Hannifin, a former American military officer and an employee at a series of tech and consulting firms, including defense contractor RSM, the veterans' financial services company USAA, SiloTech, and North South Consulting Group, is now at the center of a still-unfolding case in Texas. Although his American military credentials and past security clearance qualified him for access to sensitive technical knowledge, members of his inner circle grew alarmed when he made a series of purchases not commensurate with his regular salary. Knowing of no recent windfall that would explain this newfound wealth, they began to question the source of the funds.
How much damage Chris Hannifin actually did has yet to be determined, but observers have expressed severe concern over such irresponsible conduct by a professional in the field who should have known better. It is also unknown whether he acted alone or with accomplices, how much he was paid for his services, and by whom. Nor is it clear what drove him to commit such crimes: whether the incentive was financial gain alone, or whether he was driven by ideology. Although some have suggested the information in question was sold to a competing company, others maintain that his efforts involved a state actor and a plot almost as sinister as that of Linwei Ding at Google.
As the field of artificial intelligence continues to develop, the technological challenges that accompany it will only grow in the foreseeable future. Without the necessary safeguards in place to both preempt and address these challenges, the risks are tremendous. As noted, they include the theft of intellectual property not only by rival companies but on behalf of state actors, as OpenAI, the maker of ChatGPT, currently alleges against its Chinese competitor, DeepSeek. They also include the transfer of technological knowledge through underhanded means, whether offensive cyber capabilities or cases like those of Linwei Ding and Chris Hannifin, in which individuals are alleged to have shared sensitive technological information in exchange for payment.
Although impossible to prevent entirely, the sharing of sensitive technological knowledge with competing companies, and, more worryingly, with state actors defined as adversaries of the West, can certainly be mitigated through a number of simple steps. Comprehensive background checks should be a central part of any hiring process for positions with access to information that cannot and should not be compromised, and not only for government employees. As the Chris Hannifin story has shown, even individuals with ostensibly safe backgrounds can pose a risk, as there is no telling who may be compromised.
A further step companies can take is limiting any one employee's access to sensitive data and knowledge, so that without complementary information from a colleague, a single person's access is significantly less monetizable. Had this been the case with Linwei Ding and Chris Hannifin, the damage done by their being compromised would have been far less extensive. As similar cases arise in the future, delicate and sensitive industries will hopefully take these lessons to heart, enacting changes that mitigate such severe risks before it is too late.
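The compartmentalization principle described above can be sketched in a few lines of code. This is a minimal illustration, not a description of any real company's access-control system; the compartment names, employees, and helper functions are all hypothetical. The idea it demonstrates is simply that when no single role covers every compartment of a sensitive project, a lone insider cannot assemble the complete picture:

```python
# A minimal sketch of data compartmentalization (all names hypothetical):
# no single employee's role grants access to every compartment of a
# sensitive project, so exfiltrating the full picture would require
# collusion between colleagues.
from dataclasses import dataclass


@dataclass(frozen=True)
class Employee:
    name: str
    compartments: frozenset  # compartments this employee may read

# Hypothetical compartments of a sensitive AI project.
ALL_COMPARTMENTS = frozenset({"model-weights", "training-data", "infra-configs"})


def can_read(employee: Employee, compartment: str) -> bool:
    """Grant access only to compartments explicitly assigned."""
    return compartment in employee.compartments


def full_exposure(*employees: Employee) -> bool:
    """True if the combined access of these employees covers everything."""
    combined = frozenset().union(*(e.compartments for e in employees))
    return combined >= ALL_COMPARTMENTS


# No single engineer holds all compartments.
alice = Employee("alice", frozenset({"model-weights"}))
bob = Employee("bob", frozenset({"training-data", "infra-configs"}))

assert can_read(alice, "model-weights")
assert not can_read(alice, "training-data")  # blocked without a colleague
assert not full_exposure(alice)              # one insider: incomplete picture
assert full_exposure(alice, bob)             # only collusion covers everything
```

In practice this logic would sit behind an access-control service with auditing, but even this toy version shows why a compromised individual with a narrow compartment assignment is far less valuable to a buyer than one with unrestricted access.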