SAN FRANCISCO — There is little doubt that the Defense Department needs help from Silicon Valley’s biggest companies as it pursues work on artificial intelligence. The question is whether the people who work at those companies are willing to cooperate.
On Thursday, Robert O. Work, a former deputy secretary of defense, will announce that he is teaming up with the Center for a New American Security, an influential Washington think tank that specializes in national security, to create a task force of former government officials, academics and representatives from private industry. Their goal is to explore how the federal government should embrace A.I. technology and work better with big tech companies and other organizations.
There is a growing sense of urgency to the question of what the United States is doing in artificial intelligence. China has vowed to become the world’s leader in A.I. by 2030, committing billions of dollars to the effort. Like many other officials from government and industry, Mr. Work believes the United States risks falling behind.
“The question is how should the United States respond to this challenge?” he said. “This is a Sputnik moment.”
The military and intelligence communities have long played a big role in the technology industry and had close ties with many of Silicon Valley’s early tech giants. David Packard, Hewlett-Packard’s co-founder, even served as the deputy secretary of defense under President Richard M. Nixon.
But those relations have soured in recent years — at least with the rank and file of some better-known companies. In 2013, documents leaked by the former defense contractor Edward J. Snowden revealed the breadth of spying on Americans by intelligence services, including monitoring the users of several large internet companies.
Two years ago, that antagonism grew worse after the F.B.I. demanded that Apple create special software to help it gain access to a locked iPhone that had belonged to a gunman involved in a mass shooting in San Bernardino, Calif.
“In the wake of Edward Snowden, there has been a lot of concern over what it would mean for Silicon Valley companies to work with the national security community,” said Gregory Allen, an adjunct fellow with the Center for a New American Security. “These companies are — understandably — very cautious about these relationships.”
The Pentagon needs help on A.I. from Silicon Valley because that’s where the talent is. The tech industry’s biggest companies have been hoarding A.I. expertise, sometimes offering multimillion-dollar pay packages that the government could never hope to match.
Mr. Work was the driving force behind the creation of Project Maven, the Defense Department’s sweeping effort to embrace artificial intelligence. His new task force will include Terah Lyons, the executive director of the Partnership on AI, an industry group that includes many of Silicon Valley’s biggest companies.
Mr. Work will lead the 18-member task force with Andrew Moore, the dean of computer science at Carnegie Mellon University. Mr. Moore has warned that too much of the country’s computer science talent is going to work at America’s largest internet companies.
With tech companies gobbling up all that talent, who will train the next generation of A.I. experts? Who will lead government efforts?
“Even if the U.S. does have the best A.I. companies, it is not clear they are going to be involved in national security in a substantive way,” Mr. Allen said.
Google illustrates the challenges that big internet companies face in working more closely with the Pentagon. Google’s former executive chairman, Eric Schmidt, who is still a member of the board of directors of its parent company, Alphabet, also leads the Defense Innovation Board, a federal advisory committee that recommends closer collaboration with industry on A.I. technologies.
Last week, two news outlets revealed that the Defense Department had been working with Google to develop A.I. technology that can analyze aerial footage captured by drones. The effort was part of Project Maven, led by Mr. Work. Some employees were angered that the company was contributing to military work.
Google runs two of the world's best A.I. research labs — Google Brain in California and DeepMind in London.
Top researchers inside both Google A.I. labs have expressed concern over the use of A.I. by the military. When Google acquired DeepMind, the company agreed to set up an internal board that would help ensure that the lab’s technology was used in an ethical way. And one of the lab’s founders, Demis Hassabis, has explicitly said its A.I. would not be used for military purposes.
Google acknowledged in a statement that the military use of A.I. “raises valid concerns” and said it was working on policies around the use of its so-called machine learning technologies.
Among A.I. researchers and other technologists, there is widespread fear that today’s machine learning techniques could put too much power in dangerous hands. A recent report from prominent labs and think tanks in both the United States and Britain detailed the risks, including issues with weapons and surveillance equipment.
Google said it was working with the Defense Department to build technology for “non-offensive uses only.” And Mr. Work said the government explored many technologies that did not involve “lethal force.” But it is unclear where Google and other top internet companies will draw the line.
“This is a conversation we have to have,” Mr. Work said.