El Ignaciano / December 2025

Challenges and Opportunities of Artificial Intelligence

Alfredo Romagosa

In January 2025, the Vatican published a document entitled Antiqua et nova, on Artificial Intelligence. The document was initiated by Pope Francis and published shortly before his death. As with many Vatican documents, it takes its name from the Latin version of its opening words: “With wisdom both ancient and new (cf. Mt. 13:52), we are called to reflect on the current challenges and opportunities posed by scientific and technological advancements, particularly by the recent development of Artificial Intelligence (AI).” The document was a cooperative effort of the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education. This cooperation between the two organizations is in itself interesting. The involvement of the Faith Dicastery alerts us to possible doctrinal concerns, while one of the goals of the Culture Dicastery is “the promotion of culture, pastoral activity and the enhancement of cultural heritage,” which indicates that the document also sees the possible cultural benefits of the subject. Hence the “challenges and opportunities” discussed in the document.

Our article is mostly based on this document, with some additional commentary. It is intended as the first part of an ongoing series that will try to keep the readers of this journal updated on this very important topic. References to the document take the form of the initials AN followed by the paragraph number (e.g., AN#7). As a convenience, we have set specific action recommendations in bold.

This technology is in some ways old. The term Artificial Intelligence was coined by John McCarthy in 1956, in research he later continued at the Massachusetts Institute of Technology, often in collaboration with Marvin Minsky, and it is defined as “that of making a machine behave in ways that would be called intelligent if a human were so behaving.” (AN#7) By this definition, a calculator could be considered AI, since adding and multiplying require intelligence, but in practice the term refers to advanced computer programs. The field, a sub-discipline of computer science, has gone through alternating cycles of success and disappointment. The most recent AI technology is old in another way: it applies mathematical statistics, a relatively old tool, at an unprecedented scale. AI is a subject that fascinates the press. This ensures coverage and exposure, but it is also conducive to the hype of excessive expectations. That is in itself a risk, because it can cause stock market gyrations that hurt the economy.

Many of the important developments in computer science were the results of AI research. Among these are the many techniques of what is called knowledge management, which have been helpful in organizing human knowledge. Knowledge is organized in sets of objects and the attributes that characterize them. For example, birds are objects of a class characterized as having feathers, laying eggs, etc., and this class has subclasses determined by other attributes, such as migratory, raptor, etc. The Lisp programming language, developed by John McCarthy, was optimized to work with lists of such objects. Knowledge can then be further structured into rules that are applied in solving problems. These tools often offered the capability of tracing back every step that led to a solution, thus achieving transparency and confidence. This type of work is now called “traditional” AI, and it was technically successful in applications such as agriculture and computer system configuration. But it was financially unsuccessful due to the high expense of having computer professionals structure and maintain the rules.
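The flavor of such rule-based systems can be sketched in a few lines of Python (a toy illustration only, not the Lisp systems of the era; the bird facts and the two rules are invented for the sketch). Note how every conclusion carries a trace of the rules that produced it, the transparency mentioned above:

```python
# A toy "traditional AI" knowledge base: objects with attributes,
# plus if-then rules. Every inference records the rule that produced
# it, so each conclusion can be traced back step by step.

facts = {"robin": {"has_feathers", "lays_eggs", "migratory"}}

# Each rule: (name, required attributes, attribute to conclude)
rules = [
    ("R1: feathers + eggs -> bird", {"has_feathers", "lays_eggs"}, "is_bird"),
    ("R2: bird + migratory -> flies south", {"is_bird", "migratory"}, "flies_south"),
]

def infer(obj):
    """Apply rules until no new attributes appear; keep a trace."""
    attrs, trace = set(facts[obj]), []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= attrs and conclusion not in attrs:
                attrs.add(conclusion)
                trace.append(name)   # transparency: why we concluded it
                changed = True
    return attrs, trace

attrs, trace = infer("robin")
print(attrs)   # includes 'is_bird' and 'flies_south'
print(trace)   # the rules that fired, in order
```

The expensive part in real systems was not this inference loop but the thousands of hand-written rules it had to run over, which is where the maintenance costs arose.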

The “new” AI is almost totally different from this earlier technology. It is called Large Language Model (LLM) AI, and it is the result of technical breakthroughs made in the 2010s. It has some roots in earlier technology called neural networks, but what is revolutionary is that, in contrast with traditional AI, it really does not work with what humans call knowledge. It has no internal representation of knowledge; it consists only of statistical relationships among strings of words. Strictly speaking, they are not words but “tokens,” although calling them “words” is close enough and easier to understand. The key here is the term “Large”: these models only work if they use hundreds of billions of items. Thus, they have to be run on supercomputers. When one uses the available public programs, such as ChatGPT or Google’s Gemini, one is actually being allocated time on a supercomputer without being charged for this use. The reason for not charging at present is that the developers need the free labor provided by users in exercising these programs. It is unclear how the economics of this technology will evolve.

Let us try to explain this technology a little better. A key technical development was disclosed in a 2017 research paper by Google.¹ The example of learning a language may help. A large language model (LLM) learns to write correct English by reading billions of word sequences in this language. Likewise, it learns about knowledge topics by reading large numbers of word sequences dealing with that knowledge. One can see how these models need to rely extensively on statistics. Once the model is able to read English, it can take advantage of the wealth of knowledge available on the Internet, and most of the public models do just that. Specialized applications may be trained on private knowledge libraries. Surprisingly, an LLM appears able to create new knowledge by imitating the structures on which it has been trained, for example composing a variant musical composition after being trained on examples of existing pieces. The term “generative” AI is often used to describe this capability.
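A toy version of this statistical learning can be sketched in Python (an illustration of the idea only; the training sentence is invented, and a real LLM works with subword tokens, neural networks, and vastly more data):

```python
from collections import Counter, defaultdict

# A toy statistical language model: count which word follows which
# in a training text, then predict the most likely next word.
# The underlying idea -- statistics over word sequences -- is the
# same one that LLMs apply at enormous scale.

training_text = (
    "the model reads the text and the model learns from "
    "the text and the model improves"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1   # count each adjacent word pair

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # 'model' -- seen 3 times after 'the'
```

An LLM does essentially this, except that it conditions on long contexts rather than a single preceding word, which is what the 2017 breakthrough made computationally feasible.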

One disadvantage of an LLM as compared with traditional AI is that, since it does not really structure its knowledge as traditional AI knowledge management does, or as a human being would, it is almost impossible for a human to backtrack or follow its reasoning process, making it opaque. This causes an inherent lack of demonstrability. Nevertheless, it appears that LLMs are in practice just as accurate as many humans, even though they occasionally produce weird errors that have been termed “hallucinations.” The dramatic results being achieved completely overshadow the earlier technologies.

Theological Challenges

The use of the term “ancient” in the document title may recall an ancient controversy about the nature of the human soul. The capability of available models, such as ChatGPT, is already sufficient for many tasks that previously required human reasoning. This demonstrated capability of reasoning in a machine would present a serious challenge to antiquated views of human reasoning as an immaterial property of the spiritual human soul. Fortunately, Christian thought, including the AN document, has moved beyond that view:

“The soul is not merely the immaterial ‘part’ of the person contained within the body, nor is the body an outer shell housing an intangible ‘core.’ Rather, the entire human person is simultaneously both material and spiritual. This understanding reflects the teaching of Sacred Scripture, which views the human person as a being who lives out relationships with God and others (and thus, an authentically spiritual dimension) within and through this embodied existence.” (AN#16)

I like to think of the “soul” as God’s view of the human person, that which is capable of having a relation with God and other persons. God, through evolution, has created this capability in a biological being, and, following the conjecture of physicist and theologian John Polkinghorne,² may reconstitute it in a different domain after death.

While interpersonal relationships involve reasoning, this is only a small part of the process: “Human beings are ‘ordered by their very nature to interpersonal communion,’ possessing the capacity to know one another, to give themselves in love, and to enter into communion with others.” (AN#18) One can get involved in a continuing conversation with an LLM which would appear to be a real interpersonal interchange, and there has been at least one case of a resulting suicide. The LLM does not really care, in contrast with “a friend who knows the right word to say or a person adept at managing human relationships exemplifies an intelligence that is ‘the fruit of self-examination, dialogue and generous encounter between persons.’” (AN#27) It is critical that public AI programs avoid giving the illusion of personal identity.

The AN document reflects on the proper role of human intelligence: “A proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. Instead, it involves the person’s openness to the ultimate questions of life and reflects an orientation toward the True and the Good.” (AN#29)

Another issue that may have theological or philosophical implications is consciousness. There is no pretense of current LLMs being conscious, not even in a simulated sense, but there is ongoing research on what this would mean. We will not touch this complex issue here, mostly because, as scientist and philosopher Susan Schneider admits, “we don’t even understand consciousness in humans yet.”³

The following sections deal with some of the other social challenges of LLM AI.

Technology and Moral Responsibility
“AI should not be seen as an artificial form of human intelligence but as a product of it.” (AN#35) Seen as an extension of human intelligence, AI is subject to the same ethical constraints as human intelligence. Clearly the company that makes the LLM product available has legal responsibility for it. But, pointing to the “opaque” nature of current AI technology mentioned earlier, the AN document highlights the complexity of these issues:

“Since full moral causality belongs only to personal agents, not artificial ones, it is crucial to be able to identify and define who bears responsibility for the processes involved in AI, particularly those capable of learning, correction, and reprogramming. While bottom-up approaches and very deep neural networks enable AI to solve complex problems, they make it difficult to understand the processes that lead to the solutions they adopted. This complicates accountability since if an AI application produces undesired outcomes, determining who is responsible becomes difficult. To address this problem, attention needs to be given to the nature of accountability processes in complex, highly automated settings, where results may only become evident in the medium to long term. For this, it is important that ultimate responsibility for decisions made using AI rests with the human decision-makers and that there is accountability for the use of AI at each stage of the decision-making process.” (AN#44)

“Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability.” (AN#46)

AI, the Economy, and Labor
The AN document acknowledges the positive labor potential of AI: “It has the potential to enhance expertise and productivity, create new jobs, enable workers to focus on more innovative tasks, and open new horizons for creativity and innovation.” (AN#66) But in replacing humans at some tasks, it presents the well-known problem of automation and the resulting unemployment, which has been with us since the invention of machines. While in the past it was obvious that many of the replaced tasks were boring or burdensome, the replacement of humans at reasoning and possibly creative tasks is more problematic. The AN document does not suggest solutions, but we may apply the approach that has been used in the past for responsible purchasing. Just as we can support with our purchases companies that are environmentally sensitive, in the future we can avoid companies that practice the callous replacement of human beings with AI. As an analogy, do we favor stores that have human attendants even if their prices are higher?

AI and Education
This is the AI subject that currently receives the most attention, and it offers immediate opportunities for parents to participate and advocate at their children’s schools, keeping in mind the following considerations:

“In this context, AI presents both opportunities and challenges. If used in a prudent manner, within the context of an existing teacher-student relationship and ordered to the authentic goals of education, AI can become a valuable educational resource by enhancing access to education, offering tailored support, and providing immediate feedback to students. These benefits could enhance the learning experience, especially in cases where individualized attention is needed, or educational resources are otherwise scarce.” (AN#80)

“However, instead of fostering ‘a cultivated intellect,’ which ‘brings with it a power and a grace to every work and occupation that it undertakes,’ the extensive use of AI in education could lead to the students’ increased reliance on technology, eroding their ability to perform some skills independently and worsening their dependence on screens.” (AN#81)

“Additionally, while some AI systems are designed to help people develop their critical thinking abilities and problem-solving skills, many others merely provide answers instead of prompting students to arrive at answers themselves or write text for themselves.” (AN#82)

AI, Misinformation, Deepfakes, and Abuse

As with any software technology, accidental functional errors are part of the process and need constant attention. Responsible monitoring and support will help differentiate successful companies. Then there is the further issue of individual users deliberately using the technology to create what are called “deepfakes.” The representational capability of LLMs facilitates the creation of misinformation or artificial reality, sometimes for entertainment purposes, sometimes for fraudulent purposes. This issue requires careful regulation, for “misinformation—especially through AI-controlled or influenced media—can spread unintentionally, fueling political polarization and social unrest.” (AN#88) It also calls for watchful and responsible use by individuals: “This calls for the ongoing prudence and careful discernment of all users regarding their activity online.” (AN#89)

To illustrate the capability of LLMs, as well as to list a set of useful measures, I asked ChatGPT the question “How to prevent AI deepfakes?” Because of its length, its answer is included in the Appendix to this document.

AI and the Protection of Our Common Home

As mentioned before, due to the use of supercomputers, “Current AI models and the hardware required to support them consume vast amounts of energy and water, significantly contributing to CO2 emissions and straining resources.” (AN#96) This is a complex issue that requires technical as well as economic knowledge, and it may be alleviated by continuing hardware progress. In general, society needs a better accounting system for the economics of energy usage. Meaningful and responsible discussions on this issue between industry and government are required.

Call to Action

Throughout this article we have included specific action recommendations, some of which can be carried out individually, but a fuller social consciousness and political action plans will be required: “As these applications and their social impacts become clearer, appropriate responses should be made at all levels of society, following the principle of subsidiarity. Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that AI is used for the good of all.” (AN#110)

The European Union has led the way with a comprehensive program that centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights. In the United States, a few states have passed limited regulatory legislation. For example, Illinois prohibits AI systems from independently performing therapy or making clinical decisions in behavioral health, and Utah requires an AI system to disclose prominently that it is not a human and to maintain transparency. Clearly, considerably more legislation will be required.

Concluding Reflections

“Considering the various challenges posed by advances in technology, Pope Francis emphasized the need for growth in ‘human responsibility, values, and conscience,’ proportionate to the growth in the potential that this technology brings — recognizing that ‘with an increase in human power comes a broadening of responsibility on the part of individuals and communities.’” (AN#108)

“Since a ‘person’s perfection is measured not by the information or knowledge they possess, but by the depth of their charity,’ how we incorporate AI ‘to include the least of our brothers and sisters, the vulnerable, and those most in need, will be the true measure of our humanity.’ The wisdom of the heart can illuminate and guide the human-centered use of this technology to help promote the common good, care for our ‘common home,’ advance the search for the truth, foster integral human development, favor human solidarity and fraternity, and lead humanity to its ultimate goal: happiness and full communion with God.” (AN#116)

Appendix: How to prevent AI deepfakes? Answer by ChatGPT:

Excellent question — preventing AI deepfakes is one of the biggest challenges in today’s digital world. “Deepfakes” use AI to create hyper-realistic fake images, videos, or audio that can deceive people. Combating them requires a mix of technology, policy, and personal awareness. Here’s a breakdown of the most effective strategies:

🧩 1. Develop and Use Detection Technology

  • AI-based detectors: Tech companies and research labs are building algorithms that identify digital manipulation — e.g., by detecting pixel-level inconsistencies, unnatural blinking, or mismatched shadows.
  • Metadata verification: Embedding digital “watermarks” or cryptographic signatures at creation time helps confirm whether a video or image is authentic.
  • Content provenance standards: Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) and Adobe’s Content Credentials aim to make media traceable from its source.
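[Editorial illustration, not part of ChatGPT’s answer.] The “cryptographic signatures” mentioned above can be made concrete with a short Python sketch. It uses a shared secret key (an invented value here) purely to show the principle; real provenance systems such as C2PA use public-key certificates rather than shared secrets:

```python
import hashlib
import hmac

# Toy sketch of the signature idea behind content provenance:
# the creator signs the media bytes with a key at creation time;
# any later modification makes verification fail.
CREATOR_KEY = b"creator-secret-key"   # hypothetical key for this sketch

def sign(media: bytes) -> str:
    """Produce an authenticity tag for the given media bytes."""
    return hmac.new(CREATOR_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, signature: str) -> bool:
    """Check that the media still matches its original tag."""
    return hmac.compare_digest(sign(media), signature)

original = b"original video bytes"
tag = sign(original)

print(verify(original, tag))                  # True: media untouched
print(verify(b"deepfaked video bytes", tag))  # False: content altered
```

The point is that authenticity is attached at creation time, so a deepfake cannot be passed off as the signed original without breaking the signature.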

🏛️ 2. Create and Enforce Legal and Regulatory Frameworks

  • Criminal penalties: Governments can outlaw malicious use of deepfakes — especially those that involve defamation, election interference, or nonconsensual sexual imagery.
  • Transparency laws: Require platforms to label AI-generated content clearly.
  • Election protection: Ban or restrict the use of synthetic political media during campaigns unless it carries a disclosure.

🖥️ 3. Platform-Level Responsibility

Social media and hosting services can help by:

  • Automatically scanning uploads for signs of manipulation.
  • Labeling or demoting synthetic content in feeds.
  • Giving users tools to report suspected deepfakes.
  • Sharing verified provenance data with other platforms.

🧠 4. Public Education and Media Literacy

Arguably the most powerful defense:

  • Teach people to question before they share — look for trusted sources, context, and corroboration.
  • Learn basic clues of deepfakes: unnatural lighting, mismatched reflections, overly smooth skin, or off-sync audio.
  • Schools and community programs can integrate “AI literacy” into digital education.

🔐 5. Authentication of Real Content

  • Watermark authentic media: Journalists, governments, and creators can embed verifiable metadata or blockchain records when publishing genuine material.
  • Verified accounts: Public figures and institutions can use digital certificates or platform-level verification to prove authenticity.

🧑‍💻 6. Responsible AI Development

AI developers and companies can:

  • Restrict model misuse: Limit access to powerful image/audio generators or add built-in safeguards (e.g., blocking celebrity likenesses).
  • Add invisible identifiers: Techniques like AI fingerprints (e.g., Stable Signature, DeepMind’s SynthID) tag AI-generated media without visible marks.
  • Publish transparency reports on how synthetic content is detected and managed.

👁️ 7. Individual Vigilance

For regular users:

  • Don’t trust viral or emotional media right away — cross-check with reputable outlets.
  • Use reverse image/video searches to see if the content appears elsewhere.
  • Keep your own data (voice samples, images) private to avoid being cloned.

¹ Ashish Vaswani et al., “Attention Is All You Need,” in Proceedings of the 31st International Conference on Neural Information Processing Systems (Curran Associates, Inc., 2017), 6000–6010.
² John Polkinghorne, Exploring Reality: The Intertwining of Science and Religion (New Haven: Yale University Press, 2005).
³ Quoted in John W. Farrell, “Minds Without Brains?” Commonweal, April 2021.

--------------- o0o ---------------

Alfredo Romagosa is Director of the Instituto Jesuita Pedro Arrupe. He holds degrees in Religious Studies and Engineering and has taught at the Southeastern Pastoral Institute (SEPI) and at Florida International University.