When ChatGPT was first released in November 2022, it fundamentally changed the world. After becoming the fastest web app in history to reach 100 million users, taking only two months, it completely reshaped the public's understanding of what generative AI is capable of. Almost instantly, a revolution began as people sought to apply it to every area of life they could, and every industry competitor scrambled to claim a piece of the growth for themselves. It feels nearly impossible for the average American to escape talk of AI as it pervades every aspect of modern technologized life. Now, we stand on the precipice of something new and unknown, but it is clear that the future of AI will greatly influence the future of all our lives. Yet AI is a tool, and its direction will be decided by those at the forefront of technological development right now: OpenAI, Google, Meta and other large tech companies, which fundamentally do not have society's best interests at heart. So long as this is the case, AI is not going to save us.

To be clear, this article is not meant to be anti-progress or anti-tech. Although AI has already had many negative effects on society, which need to be reckoned with in a genuine way, its potential for benefit is also clear.
However, technological development in a capitalist society is inextricably tied to the interests of capital, which are decidedly separate from the interests of people. To see this in play, note how funding for new AI projects and start-ups is dominated overwhelmingly by corporate tech giants, which in 2023, for example, were the source of two-thirds of the $27 billion raised by emerging AI startups, with much of the rest coming from venture capital. The most well-capitalized companies on the planet are racing to pour in the most money and partner with the most lucrative prospects to establish their position in the new AI world order. Meanwhile, a majority of Americans are more concerned than excited about AI in their daily lives, according to recent Pew Research data. When it comes to interest in AI's growth and daily application, there is clearly a base-level difference between those profiting from it and those being affected by it.
The federal government, for its part, is no better. For the past few years, the Department of Defense's spending has dominated how the federal government interacts with the sector. According to a Brookings Institution analysis of new federal agency AI contracts from 2022 to 2023, DoD spending grew exponentially, to the point that it made the combined efforts of every other federal agency look like a rounding error on the overall budget. The fact that the government's seemingly sole focus is on developing technologies that will undoubtedly be used to further surveil, covertly influence or simply kill people at home and abroad should be of serious concern to the American public.
The point is that similar material conditions lead to similar interests and outcomes. Across the tech industry, people are coming to similar conclusions because their positions ensure their priority is profit. When militarism is profitable, as it very much is (especially right now), tech companies will naturally see opportunity in tuning their products toward military uses. When invading people's privacy and surveilling them is profitable, companies will naturally move toward these ends. Then, as a result of the outsized influence money has in our government's politics, this profit motive will naturally feed back into the federal government and push it toward supporting these endeavors as well. It is a positive feedback loop that will continually push the avenues of power in this nation in certain directions. This is not an issue of specific individuals at the helm of this company or that federal agency, but a structural problem with the nature of technological development and what pushes it forward in this country.

The consequences of this problem are already clear across the world. For example, Google, one of the tech giants at the forefront of AI development, has long been selling its cutting-edge technology to the state of Israel for military purposes under a contract known as Project Nimbus, according to leaked documents obtained by The Intercept. This technology, along with others Google has allowed Israel to use, has been used to better surveil, track, identify and eliminate Palestinians for years. Although the profits have surely been significant, the use of AI for murder, genocide and apartheid cannot be accepted as a valid reason to push this technology forward.
For a more domestic focus, look no further than the DoD's desire to create new technology to better impersonate real people on social media, explicitly for the purpose of information gathering. Not only is this a dangerous game to play, given that it will accelerate the proliferation of this type of technology to other actors around the world, but it also raises the genuine, well-founded fear that it could be used to surveil and gather information on US citizens as well.
The thing is, there is the opportunity for so much more to come from this technology: more of the bad if this trend continues, but also real good when it is used for the benefit of people. In contrast to the two earlier examples, the medical field has seen genuine benefit from analytical models and their ability to help accurately diagnose and treat patients. When technological efforts are put toward these ends, that is when people truly benefit from progress.
In the same way that an airplane can both drop a bomb on a city and transport lifesaving goods, AI's impact depends on who directs it. We need to find ways to sustainably develop AI for the public good, not private wealth.
