Is AI (Artificial Intelligence) a blessing or a curse?
That question will be answered by what happens in the years to come. Even though AI has been evolving, and a topic of wide-eyed speculation, for decades, we are still in the relatively early stages of the AI revolution.
Depending on how this revolution plays out, AI has the potential to be either a blessing, a curse, or both.
We first wrote about the potential for an AI revolution in a blog posted in 2012. At that time, we stated:
Deep Learning is the application of artificial intelligence and software programming through “neural networks” to develop machines that can do a wide variety of things including driving cars, working in factories, conversing with humans, translating speeches, recognizing and analyzing images and data patterns, and diagnosing complex operational or procedural problems.
And opined, “… Big Data and Deep Learning can be big deals and the bases for an American innovation and economic revolution.”
From 2012 until 2022, there was considerable progress in the areas identified in that blog. Then, near the end of 2022, OpenAI launched the chatbot ChatGPT (Chat), and Chat and other generative AI (GenAI) tools began to talk with, and do tasks for, humans. AI was transformed from a mere technological resource into a human companion and assistant.
This transformation significantly expanded the use of AI and helped accelerate the speed of the AI revolution. As Megan McArdle comments in a recent article for the Washington Post, this revolution will take a little time to unfold because “although AI may be evolving faster than any technology in history, institutions can only adapt at the same old human speeds.”
Even so, McArdle warns, in concluding her article, “We are resting in the eye of a gathering storm, and those who fail themselves now risk being swept away when the storm unleashes its full power.”
What does the full power of the AI storm look like? Here are a few forecasts.
- Drawing upon various sources, in October 2024, Katherine Haan of Forbes Advisor reported:
- The AI market is projected to reach $1,339 billion by 2030, up from a market size of $214 billion in revenue in 2024.
- AI is expected to contribute a significant 21% net increase to the United States GDP by 2030.
- The industries with the highest long-term AI adoption rates are healthcare (40%) and automotive (18%).
- The World Economic Forum, in its Future of Jobs Report (2025), projects:
- The creation of 170 million new jobs and the loss of 92 million current jobs by 2030.
- 40% of employers expect to reduce their workforce where AI can automate tasks.
- An article in the Harvard Business Review states, “In the near future, gen AI is likely to affect some 50 million jobs, automating away elements of some jobs and augmenting workers’ abilities in others.”
- Drawing upon an interview with Dario Amodei, CEO of Anthropic, an AI company, Jim VandeHei and Mike Allen report in their Axios article, “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10–20% in the next one to five years.”
If those forecasts are accurate, AI’s impact in the future will not be a perfect storm with only negative consequences. But it will be a very powerful storm that will dramatically change institutional, industrial, and individual landscapes. By doing so, AI will present numerous risks and threats.
The disruptive effect of AI on the job market is the risk or threat that has been written about most extensively, and for the longest time. We addressed this concern ourselves in a blog posted in 2019.
Other major risks or threats include, but are not limited to: a departure from objectivity and reality; reduced learning and development of cognitive and human skills; and, potentially, unmanned military weapons running amok. Let’s examine each of these in turn.
AI can’t necessarily separate fact from fiction and sometimes invents its own stories to tell. Cade Metz and Karen Weise describe this problem in their New York Times essay:
The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google, and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
AI could be a tool that diminishes the human ability to think, to reason, and to behave ethically. Much has been written about what AI can do to enhance education and learning. Recently, however, problems have been pointed out, and concerns raised, about AI’s negative impact on K-12 education, higher education, and human development in general.
In her May 14 opinion piece for the New York Times, reacting to the Trump Executive Order calling for AI education from kindergarten through 12th grade, Jessica Grose states, “If A.I. is carelessly incorporated all the way down to pre-K, it would be a horrible mistake. It could inhibit children’s critical thinking and literacy skills and damage their trust in the learning process and in one another.”
Moving up the educational ladder, two articles published in May highlight the threats AI presents in higher education. The title of James Walsh’s New York article, “Everyone Is Cheating Their Way Through College,” says it all. Apparently, it’s not just the students who are cheating. As Kashmir Hill reports in her New York Times article, “Professors Face Student Rancor Over Use of A.I.,” it is professors who are using AI to do things, such as preparing presentations, that they tell students they are not allowed to do.
This concern about “cheating” is not overstated. Inside Higher Ed’s 2025 Survey of Campus Chief Technology/Information Officers found that “…three in four — said that it (generative AI) has proven to be a moderate (59 percent) or significant (15 percent) risk.”
What has not gotten as much attention as the cheating issue in higher education is the effect AI is having on students’ development. Monica Hesse, a Washington Post columnist, stressed that issue in her podcast comments, in which she declared:
To be able to be an ordered and logical and discerning thinker is more important in this current era than it has been at other points in my lifetime. … I think that colleges can and should still be a place to produce those kinds of abilities, which are hard skills — skills that can and should be taught.
AI ends up doing an end run around a lot of those skills. And I think that that’s a gap that needs to be filled. Not because it will make us better or more productive workers, but because it’s necessary for humanity to not completely dissolve.
Finally, as Helen Thomson shows in her excellent article for The Guardian, AI can affect all of its users, regardless of age. She notes:
The fear comes, however, from the fact that automating these tasks deprives us of the opportunity to practice those skills ourselves, weakening the neural architecture that supports them. Just as neglecting our physical workouts leads to muscle deterioration, outsourcing cognitive effort atrophies neural pathways.
Thomson points out that one critical area that can be affected is “critical thinking,” adding that “studies have suggested that the use of AI for memory-related tasks may lead to a decline in an individual’s own memory capacity.”
AI makes possible autonomous military weapons, which could act inappropriately or be acquired by individuals or states who would use them to do so. As CBS’s 60 Minutes showed in its May 18 interview with Palmer Luckey, the billionaire who founded the defense technology firm Anduril Industries in 2017, his business is already producing autonomous weapons that fuse artificial intelligence with the latest hardware advances.
Those weapons include cruise missiles, a submarine (Dive XL), and a fighter jet (Fury). All of the Anduril products are designed and programmed to act independently, with no human operators involved.
When Mr. Luckey was questioned about people’s concern that an autonomous “robotic” weapon system might go rogue, he responded:
I would say that it is something to be aware of. But in the grand scheme of things, things to be afraid of, there’s things that I’m much more terrified of and I’m a lot more worried about evil people with mediocre advances in technology than AI deciding that it’s gonna wipe us all out.
During his interview, Luckey stressed that his business was different from other defense companies, stating: “The idea behind Anduril was to build not a defense contractor, but a defense products company.” He went on to explain that contractors are paid to do work, while a products company invests its own money to create and sell a “working product.”
60 Minutes reported that “By the end of this year, Anduril says it will have secured more than $6 billion in government contracts worldwide.” So it’s obvious Anduril is already selling its “working products.” And, based upon Luckey’s interview and the Anduril website, it’s also obvious the target customers are the “United States and its allies.”
That’s good news. But there must still be a concern that these autonomous weapons, produced by Anduril or other manufacturers, could somehow end up in the hands of “evil people” or enemies of the U.S. The consequences would be devastating.
In sum, the storm that might be wrought by AI should not be ignored. And as indicated at the outset of this piece, we are still only in the early stages of the AI revolution.
There is still much more to come, from both ongoing and emerging endeavors, such as the announcement by OpenAI on May 21 that it was buying IO.
Writing for the New York Times on that day, Mike Isaac and Cade Metz reported:
Sam Altman, OpenAI’s chief executive, said the company was paying $6.5 billion to buy IO, a one-year-old start-up created by Jony Ive, a former top Apple executive who designed the iPhone. The all-stock deal, which effectively unites Silicon Valley royalty, is intended to usher in what the two men call “a new family of products” for the age of artificial general intelligence, or A.G.I., which is shorthand for a future technology that achieves human-level intelligence.
Whether it is AI or AGI, this storm appears to be moving from a mere whirlwind to a tornado or hurricane, depending on where one lives and what one does.
A second factor contributing to the AI storm’s increasing power throughout the United States is the Trump administration’s positioning of AI as pivotal to the country’s future. On January 23, shortly after returning to office, President Trump signed an executive order that declared:
This Executive Order establishes the commitment of the United States to sustain and enhance America’s dominance in AI to promote human flourishing, economic competitiveness, and national security.
Three months later, on April 23, Trump signed an executive order titled “Advancing Artificial Intelligence Education for American Youth”. That order states:
It is the policy of the United States to promote AI literacy and proficiency among Americans by promoting the appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.
And in mid-May, President Trump visited the Middle East. During that visit, he signed a deal with the United Arab Emirates to build the largest AI campus outside the U.S. in Abu Dhabi, and another to allow the provision of computer chips to an AI start-up in Saudi Arabia.
These Trump executive orders and deals provide little to no control over what is done under the name of or by AI.
The January 23 order revoked an Executive Order by President Biden which imposed “government control over AI development and deployment.”
The April 23 order established a White House Task Force on Artificial Intelligence Education. That Task Force was basically charged with developing a plan to grow AI through a Challenge and the establishment of public-private partnerships. No emphasis was placed on ensuring adequate protection of individuals or institutions during implementation.
Similarly, the Middle East deals established no apparent constraints on AI development and deployment. In fact, they appeared designed primarily to benefit Saudi Arabia and the United Arab Emirates and the U.S. technology businesses, such as OpenAI and Nvidia, that would work with or be suppliers to them. Sam Altman, OpenAI’s CEO, Jensen Huang, Nvidia’s CEO, and more than two dozen other AI technology executives joined Trump during his Middle East visit.
This brings us back to the question at the beginning of this blog. Will AI be a blessing or a curse? As the foregoing discussion illustrates, in spite of the tremendous upside for AI, there is also a substantial downside.
Because of this, the federal government should put a comprehensive and detailed American AI Initiative Master Plan in place to maximize AI’s upside potential and minimize its downside risks and threats. We first advocated for such a plan in 2019, during Trump’s first term as President.
In 2025, Trump’s White House Task Force on Artificial Intelligence Education is putting a plan in place, but it addresses only one dimension of AI. Nor does that plan address any of the risks or threats that exist, or could emerge, in education because of AI.
More importantly, as mentioned above, the Trump administration is placing no controls over AI development and deployment. Due to that vacuum, Nicole Turner Lee and Josie Stewart, in a commentary written for the Brookings Institution on May 14, report:
States are rapidly introducing various bills governing the design and use of artificial intelligence (AI) technologies…Nearly 700 AI-related state bills were introduced in 2024, and this number is expected to grow in 2025.
The states are filling the AI risk-and-threat gap left open by the federal government. There is only one problem at this point in time.
That is the budget bill passed by the U.S. House on May 22, which establishes a ten-year moratorium on states enacting their own AI laws. If the Senate signs off on this aspect of the budget, AI would be unregulated under the current administration. And AI could remain unregulated for years to come if a MAGA-oriented Republican is elected president in 2028 and the Republicans retain control of the House and the Senate.
It is impossible to predict the future. In a best-case scenario, in the absence of governmental regulation or control, AI inventors and innovators might step forward to ensure that their products not only automate and perform processes but do so in a way that is safe for consumers and beneficial to the American economy and the American workforce writ large.
That scenario is highly improbable. More realistically, an unregulated AI industry will forge ahead and produce profitable products that are a blessing for a minority of U.S. businesses and Americans, and possibly a curse for the majority.
If that’s the way this unfolds, it will be up to concerned citizens to mobilize, advocate for controlling AI, and educate their fellow citizens on the potential problems associated with AI utilization. One thing those citizens could do is turn to Pope Leo XIV for support.
Pope Leo understands the dilemma AI presents. Shortly after he was inaugurated, he stated, “… developments in the field of artificial intelligence pose new challenges for the defense of human dignity, justice, and labor.”
Those citizens could ask the Pope for his prayers and blessing. That blessing will not eliminate the potential curse associated with AI. But it will provide them hope and inspiration to work together until, as he suggested, the “responsibility and discernment” is gained to deploy AI’s “immense potential” to benefit rather than degrade humankind.