Artificial intelligence has existed in various forms since the 1950s but has emerged as a culture-shaping technology with the advent of ChatGPT and similar generative AI tools. In 2024, businesses, schools, and individuals are learning to harness the power of machines to foster innovation, growth, productivity, and profits. Students stand to benefit immensely from the wise use of this technology, and faculty are adapting pedagogy and curriculum to develop new and necessary skills.
At the same time, AI can present new challenges in the form of plagiarism and other learning shortcuts. For example, faculty express increasing concern that students are not reading—nor properly analyzing or critiquing—assigned texts, instead depending on AI-generated summaries that shortchange their educational experience and formation. In a sea of change, this powerful technology provides both exciting opportunities and various challenges to those who use it and are otherwise impacted by its influence.
To explore this timely topic, I invited three leaders from CCCU institutions to discuss AI. Dr. Joy Buchanan is an associate professor of quantitative analysis and economics at Samford University (Birmingham, AL). Dr. Andrew Lang is a senior professor and the chair of the computing and mathematics department at Oral Roberts University (Tulsa, OK). Dr. David Bourgeois is the associate dean, a professor of business analytics and artificial intelligence, and the director of the AI Lab at Biola University (La Mirada, CA). Here are their thoughts on the benefits of and concerns about artificial intelligence and what this means for students, both now and in the future.
Dr. Stan Rosenberg: Do you see in your own work that human beings are being seen more and more in terms of information? What might be good about this? What might be sacrificed?
Dr. Joy Buchanan: In the field of economics, we already have a reputation for thinking about people in a simplified, analytical way. So, human beings aren’t really being seen differently now in economics due to the rise of AI technologies via large language models (LLMs).
Human intellectual skills once seemed mystical. Now we are learning that machines can do mental labor, given enough electricity. To give credit to humans, these LLMs are training on human writing. At this stage, they are mostly recycling what I call “our” words and ideas. Still, it’s remarkable that machines could sound so intelligent. This might mean that humans have to reshuffle jobs in a painful way over the next decade.
The trend of viewing humans through the lens of data analytics does carry potential benefits. It can lead to more efficient systems, improved processes, and advancements in fields like medicine.
Humans are more than LLMs, however, and it’s often been the church that has articulated a value for humanity beyond our intellect. King Solomon, who wanted to achieve great wisdom, wrote that, ultimately, being smart is not enough for a meaningful life. Within religion, we are not primarily valued for our intelligence, but rather for our personhood.
Dr. Andrew Lang: Yes, in my own work, I do see a growing trend of viewing human beings in terms of information, especially as we develop models that replicate human decision-making. There is undeniable potential in this approach. By reducing human behavior to data, we can design more efficient systems and make significant strides in medicine, education, and beyond. However, what may be sacrificed is the recognition of the full complexity of human experience—something that transcends the data we generate or the mental tasks we perform.
While AI has made impressive progress, particularly through the pursuit of artificial general intelligence (AGI), these systems currently reflect and recycle existing human knowledge rather than generate original, creative thought. They may mimic human intellect, but they do not yet grasp the depth of human insight or creativity. What sets us apart is not just our cognitive abilities but our intrinsic personhood and the richness that machines, at present, cannot replicate.
As the race toward AGI continues, I believe we may well achieve this milestone within the next decade—a remarkable and transformative possibility. However, my deeper concern lies with the development of “Strong AI,” an intelligence that not only mirrors human reasoning but also simulates consciousness and self-awareness, potentially possessing mental states, beliefs, desires, and emotions akin to humans. This prospect, more than AGI itself, is where the true ethical and existential questions begin.
As AI tools become more powerful, they are streamlining data analysis and decision-making, allowing professionals to focus on more creative and strategic tasks.—Dr. Andrew Lang
Rosenberg: What impact do you foresee in your field due to the increasing sophistication of AI, and what kind of skills do you think your students will need to be successful?
Buchanan: AI will reshape economic analysis and modeling, making complex data processing and predictive analytics more accessible. This will lead to more sophisticated economic forecasting and policy design. Economists will become more productive, and expectations will rise accordingly. While some fields might resist change, economics will be at the forefront of AI integration.
For students aiming to succeed, it’s crucial to embrace AI tools without relying on them excessively during college. Strong fundamentals in economic theory and critical thinking remain essential, coupled with data science and programming skills.
Interdisciplinary knowledge, especially in tech and social sciences, will be valuable. Adaptability and lifelong learning are key in this evolving field. Human skills like creativity, communication, and ethical reasoning will remain crucial.
While AI will alter economics, it will also present opportunities for those who can adapt and effectively combine economic thinking with technological proficiency.
Lang: AI’s increasing sophistication is already changing many fields, including my own. As AI tools become more powerful, they are streamlining data analysis and decision-making, allowing professionals to focus on more creative and strategic tasks. For now, students will need to develop a blend of technical skills—data science, machine learning, and programming—alongside their core expertise to harness AI effectively. However, while these skills are essential in the near term, this focus may be short-sighted.
In my view, the current impact of AI is only temporary. As we move toward the development of Strong AI, the landscape will change dramatically. Human qualities like creativity and ethical reasoning will remain important, but they will be challenged in ways we can’t yet fully predict. The advent of AGI and Strong AI will likely demand a different kind of thinking—one that is not limited by the tools and techniques of today but is prepared to address the profound shifts that AI could bring to our understanding of intelligence, personhood, and the nature of work itself.
I foresee the emergence of human-level Strong AI within the next 30 years, bringing with it profound questions about “rights” and “personhood.” I anticipate that the secular world will gradually begin to advocate for “rights” for these machines, challenging us to contemplate even more deeply what it truly means to be human.
As Christians, how should we respond? Should we extend the same consideration to these entities as we might to intelligent extraterrestrial beings or self-aware animals?
Dr. David Bourgeois: Our students are already using AI to varying degrees. Employers are expecting that students will understand how to use it to improve productivity. Generative AI can increase productivity in two ways: (1) increasing the overall rate of content creation, be it website code, grant requests, or social media posts, and (2) upskilling employees by giving them the ability to complete tasks they were previously incapable of doing (such as programming or creating logos). This second productivity increase comes with a caveat: generative AI will not make us experts at anything; it will only give us capabilities deemed average (or below average) by other professionals in the field. Whether this changes is yet to be seen.
AI can never be human: it can never love; it can never have empathy; it can never be truly creative. So, besides learning how to use AI, our students need to have experiences to improve these very human skills. Finally, they need to understand the ethical and spiritual issues raised by the consistent use of these tools. Our AI Lab has recently published a set of biblical principles to guide our use of artificial intelligence.
AI can never be human: it can never love; it can never have empathy; it can never truly be creative. So, besides learning how to use AI, our students need to have experiences to improve these very human skills.—Dr. David Bourgeois
Rosenberg: Do you think humans are unique? What makes them unique? How does this impact what you teach or how you teach it?
Lang: Humanity is undeniably unique. We are created in the image of God, which grants us a dignity and purpose that far exceeds mere biological existence. Our uniqueness doesn’t just lie in our intellect or creativity, but in the fact that we are spiritual beings, intrinsically connected to the divine. This profound spiritual nature sets us apart, elevating us beyond any machine, no matter how advanced.
Unlike AI, we are not simply processors of information; we are beings capable of love, empathy, and a deep, moral consciousness. This spiritual identity influences not only what I teach but how I teach it. I emphasize to my students that technology, while powerful, cannot replace the richness of human experience or the sacredness of our existence. As AI progresses, this truth becomes more important.
Bourgeois: Like Dr. Lang, I too believe that humans are created in the image of God, which means that we are loving, relational, and creative, among many attributes. Sometimes these characteristics lie dormant or unused as we rush from screen to screen. Instead, we should be including activities in our classes that cause students to exercise these abilities and then understand how to use AI to supplement, not replace, them.
Humanity is undeniably unique. We are created in the image of God, which grants us a dignity and purpose that far exceeds mere biological existence.—Dr. Andrew Lang
Rosenberg: How has the erosion of expertise, authority, and trust impacted the work you are seeing from your students? How have you responded? What do you think should be some characteristics or components of “digital wisdom” on your campus and in your field?
Lang: The erosion of expertise, authority, and trust is profoundly affecting students, particularly as they increasingly rely on AI as a primary information source. This reliance often bypasses the critical assessment necessary to distinguish credible information from misleading content, thereby challenging the traditional roles of educators.
Buchanan: The internet-based culture that current students have inherited from millennials has failed them in some ways and is often just plain shallow. Don’t underestimate Gen Z’s ability to find out the truth for themselves. The erosion of traditional authority has led students to seek authenticity elsewhere, often bypassing conventional sources of information.
While young people today may not have a universally trusted media figure like Walter Cronkite, they have unprecedented access to information. This presents both challenges and opportunities. Don’t be surprised if there is a cultural comeback for the true classics among a generation burned out on TikTok, reminiscent of how Renaissance scholars rediscovered ancient Greek works. People want something authentic. Religious colleges are places to find wisdom that has stood the test of time, not just entertainment that’s had a viral moment. I’m in a diverse, nonreligious group chat of writers, and one Gen Z member recently remarked, “Every young person I know is reading the Bible.” The world religions will certainly be one of the places people turn to find a rock of stability while they are buffeted about by their social media feeds and 24/7 global news cycles.
Colleges need to find a way to balance two things. It is true that attention spans have gotten shorter, and we do need to make sure that assigned readings and activities are accessible to students. However, I think we can also seize this moment to help them find something they are really looking for: authenticity. Most students see overscrolling and screen addiction as a problem. A college campus is a place where we can help hold each other accountable (including faculty) to stop scrolling, reach higher, and turn our attention to media that teaches timeless wisdom.
Also, students need to have a canon of quality writing which they can compare against quickly generated LLM outputs. They need to know a core set of facts to measure against claims their computers will spout.
Rosenberg: Have you seen the power of information technology shape social and political concerns on your campus and amongst your students? Perhaps you’ve seen it at work in your own field, as well. What have been some of the benefits, and what are some of the costs?
Lang: This issue is compounded by the fact that many AI systems are developed in alignment with prevailing academic philosophies, which may conflict with Christian values and other perspectives. The “guardrails” designed to mitigate bias in AI are often rooted in “critical theory,” a dominant framework in academia, which can lead to AI systems that inadvertently marginalize certain groups, such as white, male, rural, and Christian populations. These biases are frequently underexamined, raising significant concerns about the unchecked acceptance of AI-generated content as truth in educational settings.
The broader, more pressing issue with AI, then, lies in the philosophical and ethical implications of its development. This challenge is known as the alignment problem—whether AI can be aligned with human values. What exactly are “human values”? AI alignment is occurring in ways that are, in my opinion, very concerning. For instance, the Chinese Communist Party (CCP) has explicitly mandated that AI development support the party’s broader goals and values, such as national rejuvenation and the promotion of socialism with Chinese characteristics. The CCP’s directives emphasize that these technologies contribute positively to societal development and do not contradict the established ideological norms.
Similarly, recent AI-alignment executive orders from the U.S. government exhibit an ideologically narrow focus on equity, as seen in the directive: “Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.” This approach, however, is not without its dangers. It could be just as perilous as a future government mandating that AI must always promote capitalist ideologies. Such directives risk shaping social and political opinions through AI in ways that may not always serve the best interests of society.
The critical question remains: What should we align AI to? As Christian educators, I believe we have a moral obligation to engage with and thoughtfully address the challenges posed by AI. It is essential to provide guidance to students on the responsible use of AI. Without such guidance, the potential for AI to shape the social and political opinions of our students in harmful ways becomes a significant concern.
Rosenberg: Do you think moral de-skilling—because of AI taking on more and more decisions—is an issue we should be concerned about? What impact might judgment atrophy have on your field and your students?
Bourgeois: To use AI effectively, we need to have some understanding of the types of work we are asking it to do. In the case of decision-making, our students should be learning how to make the decisions necessary to be successful in both their personal and professional lives. If machines begin making decisions for us, then we first need to understand how those decisions are made and provide oversight to them.
Lang: AI is a powerful tool that exemplifies the dual-use dilemma—it holds the potential to be harnessed for tremendous good but also for great harm. This dual-use potential raises critical concerns, as the increasing reliance on AI for decision-making may diminish our students’ ability to make ethical judgments for themselves.
As institutions of Christian higher education, it is imperative that we approach AI with the gravity it deserves. We must integrate AI literacy into our curricula, either as a distinct component or as part of broader information literacy initiatives grounded in a liberal arts education. This approach will help ensure that our students remain capable of critical thinking and ethical reasoning, even in an AI-driven world.
Our ultimate goal should be to cultivate well-rounded individuals who are not only technically proficient but also morally and ethically grounded. By nurturing both intellect and conscience, we prepare our students to navigate a new world with wisdom and moral integrity.
Buchanan: Back in 2013, economist Tyler Cowen suggested in his book Average Is Over that AI was already embedded in our lives. We trusted computers to tell us where to drive and even whom to date. Our sense of direction, literally, has decayed as we’ve handed those decisions over to our phones.
Have we been delegating moral decisions to AI? What about how we spend our time? Time is your truly scarce resource. If you are letting the social media algorithms devour too much of it, that’s an atrophy of judgment we need to fight. Attending a university in person might help provide that accountability, in putting our time toward a higher goal of acquiring learning and preparing ourselves for a meaningful future. I think we can make a moral case for everyone pursuing good scholarship.
Time is your truly scarce resource. If you are letting the social media algorithms devour too much of it, that’s an atrophy of judgment we need to fight. Attending a university in person might help provide that accountability, in putting our time toward a higher goal of acquiring learning and preparing ourselves for a meaningful future.—Dr. Joy Buchanan
Writing used to be a form of mental, and arguably, moral discipline when done well. Now, for the first time, machines can do writing for us. LLMs can provide access to valuable information, but they can also make serious mistakes, as my research shows. We still need dedicated human scholars to guide the process of knowledge creation and safeguard progress.
LLMs might enable the professionals of tomorrow to be more creative and productive. However, I think that will only happen if students gain a base of knowledge and wisdom that will help them evaluate AI output. It is important for them to develop good judgment. Even though reading core texts fell out of fashion compared to STEM skills in the past few decades, current students need a more balanced education precisely to be able to use computers at full capacity in the future.
Dr. Stan Rosenberg is the CCCU’s vice president for research and scholarship and the executive director of SCIO: Scholarship & Christianity in Oxford, the CCCU’s U.K. subsidiary.
Dr. Joy Buchanan is associate professor of quantitative analysis & economics at Samford University.
Dr. Andrew Lang is senior professor & chair of the computing and mathematics department at Oral Roberts University.
Dr. David Bourgeois is associate dean, professor of business analytics & artificial intelligence, and director of the AI Lab at Biola University.