Deep-learning machines (Artificial Intelligence, or AI) are staking out more ground in literature. They make the work of authors and publishers easier every day. Need InDesign to check your XML formatting? No problem. Want to adjust the tone of an email? Grammarly has your back. But as machines become increasingly complex, so do the algorithms that help them understand and learn, and those algorithms are written by people. Every person has life experience, and, in the context of this article, any one person's experience should be seen as extremely limited when set against that of the other 7.75 billion people in the world. The ethics of representation in how we build and train machines to do more of our work is as important as the work the AI does itself.
Ethics, like the Humanities, weaves its way into our lives and decisions slowly, making our training in it and our experience practicing it hard to spot on any given day. Unlike more scientific pursuits such as math or engineering, which have fairly clear-cut signs of success or failure, ethics must be intentionally practiced and deliberately included in our endeavors. Those who can write code should not be the only ones inserting their ethics into machines.
Therefore, we must have social scientists working with AI engineers from the start. As Dr. Leah Henrickson said in our interview about AI in literature, “Words are the only way we can express what is in our minds to others.” Language is nuanced, subtle, and personal.
Have you ever asked someone whose primary language is not English what something in their language means? Often the answer goes something like, “Well, there isn’t really a word for that in English, but it kind of means…” The unofficial language of coding is English. Now contemplate a deep-learning machine coded by English speakers but asked to write or edit a book in Hindi. Even if that machine knows all of the words and grammatical rules of Hindi, does it have the experience of a Hindi speaker? Does it have the nuances of that language running through its electrical veins? Does this make it a translated book?
Now is the time to seek out subject matter experts to assist coders, engineers, and scientists in writing the algorithms for AI systems. In literature, that should include literary experts from a broad spectrum of languages, cultures, and genres. Imke van Heerden and Anil Bas have been researching this very thing. In their paper on AI as author, they write, “This article suggests that a network of researchers from literary studies and machine learning could work together to create a shared language between disciplines with vastly different methodologies.”
So how do we get there? In the publishing industry, we must begin to understand AI, deep-learning machines, and natural language generation (NLG). We must become curious about how these systems are built and trained, and how they learn. There is a deep commitment to equity in most of the publishing world, but just over the horizon a whole new set of partners is creeping into our industry. Will we simply stand by and let Silicon Valley decide how those partners think about the art we hold dear?