Artificial Intelligence and the Weightier Matters of the Law

    Series: Equipping the Saints
    August 13, 2023
    Josh Preston

    Artificial Intelligence (AI) has been ubiquitous in the news cycle recently. While AI itself is not new, the relatively novel ChatGPT, with its impressive capabilities and widespread public availability, has captured the attention and imagination of companies, investors, and the general public alike. Because the technology is so new, a comprehensive analysis is impossible. What we can do is define what it is, consider what it has to teach us about God and ourselves, and assess its positive and negative potential in relation to the weightier matters of the law: justice, mercy, and faithfulness.

    What is it?

    Artificial intelligence is software or machines capable of performing tasks that typically require human intelligence (e.g., problem solving, pattern recognition, decision-making, content generation). ChatGPT, the most popular recent application of AI, is a large language model: essentially, a program that uses algorithms to process, understand, and generate natural language.[1] Much of the interest it has garnered is due to its impressive capabilities. Drawing on its vast training data and computing resources, it can produce remarkably creative and nuanced content at incredible speed. I even asked it to write a version of this article and was mildly impressed with what it produced (I wrote the article myself, in case you’re wondering).

    What can it teach us about God?

    The popular author and computer scientist Cal Newport wrote a helpful article for The New Yorker in which he attempts to mitigate some of the hysteria surrounding AI by explaining exactly how the technology works. One important insight he offers is that “a system like ChatGPT doesn’t create, it imitates.”[2] God alone creates ex nihilo (“out of nothing,” see Genesis 1). Therefore, fears that machines will gain sentience are overblown. God alone can create in his image and endow a being with a mind, a soul, and a conscience.

    What can it teach us about ourselves?

    I believe AI can teach us at least three things about ourselves. First, it reinforces the fact that we have always tried to play God (see Genesis 3) by grasping for limitless knowledge and control; systems like ChatGPT are a powerful demonstration of that. Second, they are also a powerful demonstration of the incredible creativity with which God has endowed us. This should be no surprise, since we are made in his image. Third, AI systems like ChatGPT are neither inherently good nor evil. Human beings create and employ them for both good and evil purposes, since good and evil are bound up in each of our hearts.[3]

    This reinforces the need for a higher standard, namely, the weightier matters of the law. These help us imagine and apply good uses of such technology, and discern and reduce evil ones.

    Justice

    We have previously defined justice as giving what is owed to every image bearer of God. It should be obvious, then, that AI cannot do justice, as it is not imprinted with a conscience and is therefore unanswerable to a higher moral standard. While it can certainly be helpful for many tedious administrative tasks, as a source of information it is also problematic. For one, it does not cite its source material or credit the original producers of that content. More than that, it takes away the ability to vet material based on its source. There are plenty of examples of ChatGPT producing content that is just plain wrong.[4]

    Though research can be hard work, it is worth it because it requires both author and reader to carefully consider their subject, write and read with a target audience in mind, and give credit to the sources from which borrowed ideas came. As a machine with no such obligation, ChatGPT simply gathers data and produces content according to the prompts it receives. This totally undermines the unique human interaction necessary for good writing, teaching, and learning.

    So as a primary source of information, I believe systems like ChatGPT are problematic. I believe they can be helpful, however, for reducing the amount of time spent on certain tasks when carefully monitored by someone who understands the system.[5] Various authors have imagined this arrangement as “centaur tasks,” in which a person uses AI to complete certain tasks, or as “IA” (intelligence amplifying), “suggesting people and machines will be able to do far more than AI alone.”[6]

    Mercy

    We have defined mercy as giving someone what is not required and is often surprising. An interesting application of mercy to AI relates to work. One of the primary drivers of interest in and fear of systems like ChatGPT is that they will eliminate thousands of jobs, totally upending the workforce and the economy in the process. Some of this fear may be warranted, if you consider the greed that drives some companies to pursue the bottom line at all costs. However, the reasoning above is sufficient to show that human beings are indispensable for the vast majority of work.

    One specific way mercy bears on work is that entrusting a person with a job is both a just and a merciful action. In doing so, you dignify that person with meaningful work, provide the opportunity to earn a living, and retain the human component so necessary for just and merciful work to happen in the first place. A machine cannot recognize distress and so appeal to supervisors for an exception to policy. A machine cannot recognize trends in human interactions and so revise processes to better honor human needs and desires.

    In an interview with Cal Newport, David Epstein offers an intriguing idea for how AI can be used to enable us to work more mercifully (though he doesn’t use that term). Epstein imagines ways AI might automate some administrative tasks, such as medical notes for doctors, allowing doctors and other workers “to focus more on the areas where we can uniquely add value,” that is, human interaction. For the medical example, he imagines doctors having more time to spend “understanding the context of a patient’s life” or “strategizing with [patients] about how to respond to a diagnosis.”[7] In short, if AI can increase intentional, merciful human interaction without reducing human employment, it could serve a great good. Christians, then, should think carefully about potentially redemptive uses of AI.

    Faithfulness

    We have defined faithfulness as loyalty to God’s commands. In other words, this aspect of the weightier matters of the law causes us to step back and ask, “What is the goal of life?” If the goal of life is human progress and productivity, we should exhaust every viable option for the use of AI to achieve that end. But if, as we believe, the goal is God’s glory, we must think much more carefully.

    One helpful resource for doing so is McLuhan’s Tetrad, or the Laws of Media, developed by Marshall McLuhan to analyze the effects of technological advancement.[8] The “Laws” are:

    1. What does the technology enhance? What does it amplify or intensify?
    2. What does the technology make obsolete? What does it replace or reduce?
    3. What does the technology retrieve? What does it recover that was previously lost?
    4. How does the technology reverse? How do these effects flip when the technology is pushed to its limits?

    For example, social media enhances one’s ability to connect with a large number of people. It obsolesces, or reduces, the need for letters, phone calls, or visits. It retrieves the ability to connect with those who live far away. Despite these benefits, several reversals occur. While we gain connection with a greater number of people, we may lose a level of connection with those in close proximity, due to an increased amount of time spent on our phones. Social media also supplies a far larger pool of people with whom to compare ourselves, and only a curated exposure to their lives at that.

    And so we may ask of AI: by delegating email communication to it, will we be more prone to have face-to-face interaction, or will we simply exchange ever less personal emails? By delegating tedious administrative tasks to AI, will we gain greater ability to do focused work, or will we lose a sense of our finiteness and of the brokenness of work, of which we are reminded each time we face the unpleasant reality of completing tasks we do not enjoy? By delegating creative tasks to AI, will we gain greater ability to advertise helpful resources in an engaging way, or will we lose the opportunity to bear witness to the gospel in the way we communicate and the content with which we communicate?

    There is no hard-and-fast rule for every business, company, or individual, so it is incumbent on us as Christians seeking to live faithfully to consider carefully how we might use technological advancements like AI for just and merciful purposes, and which applications we should avoid or mitigate.

    There is much more that can and should be said under the above headings. Hopefully this has provided a cursory overview of the factors to consider as we learn about and engage with AI. 

    [1] I was helped on these definitions by Dr. Gregory Lanier’s work on this topic, which can be accessed at https://riveroakschurch.com/ai-and-the-church/.

    [2] Cal Newport, “What Kind of Mind Does ChatGPT Have?” The New Yorker, April 13, 2023, https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have.

    [3] Social media provides an apt illustration of this point.

    [4] Beyond providing inaccurate information, because it imitates rather than creates, ChatGPT will occasionally, when asked to search for articles on a topic, generate citations to nonexistent articles that merely sound like something that would be written on that topic. I was alerted to this by Dr. Gregory Lanier’s study for Reformed Theological Seminary, ChatGPT, AI, & Reformed Theological Seminary, https://abwe.org/wp-content/uploads/2023/06/ChatGPT-Report-Lanier.pdf.

    [5] See Joe Carter, “Should My Church Staff Be Hesitant About Using ChatGPT?” The Gospel Coalition, June 8, 2023, https://www.thegospelcoalition.org/article/church-staff-chatgpt/.

    [6] Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering (Boston: Addison-Wesley, 1995), 7.


    [7] David Epstein, “Inside the ‘Mind’ of ChatGPT,” Range Widely, April 25, 2023, https://davidepstein.substack.com/p/inside-the-mind-of-chatgpt?utm_source=post-email-title&publication_id=1024339&post_id=116077842&isFreemail=true.

    [8] Initially introduced to me by Professor Michael Glodo, Reformed Theological Seminary, Orlando, FL, lecture.
