‘Move fast and break things’: Trump’s $500 billion AI project has major risks
In one of his first moves as the 47th President of the United States, Donald Trump announced a new US$500 billion project called Stargate to accelerate the development of artificial intelligence (AI) in the US.
The project is a partnership between three large tech companies – OpenAI, SoftBank and Oracle. Trump called it “the largest AI infrastructure project by far in history” and said it would help keep “the future of technology” in the US.
Tech billionaire Elon Musk, however, had a different take, claiming without evidence on his platform X that the project’s backers “don’t actually have the money”. X, which is not part of Stargate, is also developing AI, and Musk is a rival of OpenAI CEO Sam Altman.
Alongside announcing Stargate, Trump also revoked an executive order signed by his predecessor Joe Biden that was aimed at addressing and controlling AI risks.
Seen together, these two moves embody a mentality common in tech development that can best be summed up by the phrase: “move fast and break things”.
What is Stargate?
The US is already the world’s frontrunner when it comes to AI development.
The Stargate project will significantly extend this lead over other nations.
It will see a network of data centres built across the US. These centres will house enormous computer servers necessary for running AI programs such as ChatGPT. These servers will run 24/7 and will require significant amounts of electricity and water to operate.
According to a statement by OpenAI, construction of new data centres as part of Stargate is already underway in the US state of Texas:
[W]e are evaluating potential sites across the country for more campuses as we finalise definitive agreements.
An imperfect – but promising – order
The increased investment into AI development by Trump is encouraging. It could help advance the many potential benefits of AI. For example, AI can improve cancer patients’ prognosis by rapidly analysing medical data and detecting early signs of disease.
But Trump’s simultaneous revocation of Biden’s executive order on the “safe, secure and trustworthy development and use of AI” is deeply concerning. It could mean that any potential benefits of Stargate are quickly trumped by its potential to exacerbate existing harms of AI technologies.
Yes, Biden’s order lacked important technical details. But it was a promising start towards developing safer and more responsible AI systems.
One major issue it was meant to address was tech companies collecting personal data for AI training without first obtaining consent.
AI systems collect data from all over the internet. Even if data are freely accessible on the internet for human use, that does not mean AI systems should use them for training. Also, once a photo or text is fed into an AI model, it cannot be removed. There have been numerous cases of artists suing AI art generators for unauthorised use of their work.
Another issue Biden’s order aimed to tackle was the risk of harm – especially to people from minority communities.
Most AI tools are designed to maximise accuracy for the majority of users. Without careful design, they can make extremely dangerous decisions for the minority who fall outside that group.
For example, in 2015, an image-recognition algorithm developed by Google automatically tagged pictures of black people as “gorillas”. This same issue was later found in AI systems of other companies such as Yahoo and Apple, and remains unresolved a decade later because these systems are so often inscrutable even to their creators.
This opacity makes it crucial to design AI systems correctly from the start. Problems can be deeply embedded in the AI system itself, worsening over time and becoming nearly impossible to fix.
As AI tools increasingly make important decisions, such as résumé screening, minorities are increasingly and disproportionately affected. For example, AI-powered face recognition software more commonly misidentifies black people and other people of colour, which has led to false arrests and imprisonment.
Faster, more powerful AI systems
Trump’s twin AI announcements in the first days of his second term as US president show that his main focus in terms of AI – and that of the biggest tech companies in the world – is on developing ever faster, more powerful AI systems.
If we compare an AI system with a car, this is like developing the fastest car possible while ignoring crucial safety features like seat belts or airbags in order to keep it lighter and thus faster.
For both cars and AI, this approach could mean putting very dangerous machines into the hands of billions of people around the world.
Armin Chitizadeh, Lecturer, School of Computer Science, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.