Alexander Karp, the co-founder and CEO of Palantir Technologies, a software company that specializes in big data analytics, has expressed his support for the development and use of artificial intelligence (AI) weapons for national security purposes. In an interview with Forbes, Karp said that AI weapons are “a necessary evil” to counter the threats posed by adversaries who are also pursuing AI capabilities.
Karp argued that AI weapons, such as lethal autonomous weapon systems (LAWS), which can select and apply force to targets without human intervention, are not only inevitable but also desirable for ensuring the safety and sovereignty of democratic nations. He said that AI weapons can reduce the risk of human error, bias, and casualties, and enhance the speed, accuracy, and efficiency of military operations. He also claimed that AI weapons can be designed to comply with the laws of war and ethical principles, and that human oversight and accountability can be maintained through appropriate regulations and governance mechanisms.
Karp’s views contrast with those of many experts and activists who have raised serious concerns about the humanitarian, legal, and ethical implications of AI weapons. For instance, the International Committee of the Red Cross (ICRC) has warned that AI weapons pose a frontier risk to humanity, as they could undermine human dignity, responsibility, and control over the use of force. The ICRC has called for a global ban on fully autonomous weapons that cannot ensure meaningful human control and compliance with international humanitarian law.
Furthermore, thousands of scientists, engineers, and researchers in the field of AI have signed an open letter urging the United Nations to prohibit the development and use of AI weapons, arguing that they could spark a new arms race, lower the threshold for armed conflict, and increase the risk of accidental or unauthorized use. The letter also warns that AI weapons could fall into the hands of terrorists, dictators, or hackers, who could use them for malicious purposes.
Despite the growing debate and controversy over AI weapons, there is currently no international treaty or law that specifically regulates or prohibits them. The United Nations has been holding discussions on the issue since 2014, but no concrete outcome has been reached so far. Some countries, such as China, Russia, and the United States, have been reluctant to support any binding restrictions on AI weapons, while others, such as Austria, Brazil, and Mexico, have advocated for a preventive ban.
Karp, who has described himself as a socialist and a progressive, said that he is aware of the moral dilemmas and challenges posed by AI weapons, but he believes that they are necessary to protect the values and interests of the free world. He said that Palantir, which provides data analysis software to various government agencies and private companies, including the U.S. Department of Defense, the CIA, and the FBI, is committed to ensuring that its products are used in a lawful and ethical manner. He also said that Palantir is willing to collaborate with other stakeholders to develop and implement standards and best practices for the responsible use of AI.