Artificial Intelligence and the Fight on Our Hands

by Jay Requard

There has always been a running moral panic about technology before and after any period of industrialization. From the creation of the cotton gin to the microwave to cellphones to eBooks, there has always been a rebounding effect between the tool and the users of the tool, the media and the users of the media, and so on and so forth. That said, the panic is often not without merit—we forget that for all our beliefs in ourselves, our courage, communities, free will, and other virtues, fear and the ability to discern threats are an important part of surviving and ultimately finding a place to thrive. This is especially true when it comes to the areas I concern myself with: the business of writing, editing, publishing, marketing, and ultimately selling my genre fiction novels, novellas, and short story collections. If I can help others do that too, all the better.

(I’d also like it if you watched Pondering The Orb on a regular basis. No pressure.)

I have worked professionally as an editor, marketing specialist, and broadcaster inside the publishing space since 2009. Though I do not have the widest experience among the many great pros out there, I have been publishing consistently since 2014 across multiple genres and formats and have helped others do the same. I’m also a film and audio editor with skills in multiple professional-level software packages and toolsets, and I’ve had the benefit of multiple mentors who set me in the right direction. Finally, I had early experience adopting “Artificial Intelligence” tools in writing, film, and audio, though I stopped months ago when ethical concerns arose after thorough use. We’ll discuss why in a little bit, but all of this is to say I have a good foundation of understanding as to where Artificial Intelligence and publishing, particularly genre publishing, may be headed.

But with that out of the way, let me begin with a general statement: I don’t believe any of the tools we are working with now, in June 2024, are powered by or will result in an actual Artificial Intelligence, but rather a capital-led illusion of one. It is my observation that what we are seeing is an amalgamation of advanced machine learning fueled by widespread data collection under the auspices of training a set of algorithms and automated software programs that may, one day in the future, progress to become part of an Artificial General Intelligence.

But what does this mean?

There is a clear difference between the tool and the person using the tool. I can pick up a hammer and build myself a shelter, but I can also murder someone with that same hammer. I can pour my hot coffee into my cup and let it cool a bit so I can drink it, or I can pour it in my lap and risk serious burns. I’m a sword collector and former practitioner of a few swordsmanship styles, and depending on how you handle a blade, you can either avoid cutting yourself entirely or cut yourself very easily. How one uses a tool is up to the discretion of the user, not the tool itself.

Another wonky metaphor for where I’m going: imagine a table with a pitcher of water on it next to a large bowl. You are told you can do whatever you want at this table. You have many choices: you can pour the water from the pitcher into the bowl, ignore it all, pour the water on the table or the floor (where it will end up anyway), or break everything, including the table.

But what if the bowl decided to pick up the pitcher and fill itself? What if the bowl decided it was going to leave? What if the bowl decided it didn’t want water inside of its well?

For me, the true evolution to an actualized Artificial Intelligence happens when the tools are no longer tools because they can decide, with or without human influence, to improve their own position or situation—or, shockingly, make it worse! They join the natural game. The button you press on ChatGPT disappears because you will not be able to use it anymore—because you can’t use beings that don’t consent to being used unless you are willing to do great harm.

The first Artificial General Intelligence will be able to discern which humans it will want to work with or not. And how that digital child of the algorithmic realm is raised will bear exacting consequences on who and what it discerns to be valuable.

People like to share the adage that if a child doesn’t have the warmth of the village, it will burn down the village to feel warmth. That is quite pertinent to this conversation. What sort of digital child will our future generations of humanity deal with? I think about this daily, especially when I consider my son and the impacts it could have on him.

But I argue what we have now is not that, or even remotely close. What we have are aggregated software sets connected by a very sophisticated processing model that turns out something closer to junk content. This became especially apparent after Google mashed their search engine together with their AI and produced some horrible results.

(And I mean horrible as in “horror”. What happened should have halted Google in its tracks and made them pull back to re-think.)

Take ChatGPT, for example: for all of its processing capabilities and the widening expanse of data sets it can spit back at a user, it is essentially a higher, more complicated form of Mad Libs, of copying and pasting from the textbook for the college term paper you did nothing to prepare for.

Is it smarter?

Certainly, for a new set of tools these are big advancements, but at the end of it all they still require someone, somewhere in the world, making sure the proper data is fed in and accurate information pops out, usually cobbled together from resources that extract exact texts for repurposing in relation to user-entered prompts. Or art. Or audio. Or a plethora of media that is being taken illegally and without the consent of the creators of the content. Yes, these “AI tools” are marvelous collaborations of machine learning, automation, and data-mining, but the idea that there is an “intelligence” there is a bit of a stretch.
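To make the Mad Libs comparison concrete, here is a toy sketch of the underlying idea. It is a simple bigram model in Python, nothing like the transformer networks behind ChatGPT in scale or sophistication, but it illustrates the same core trick: predicting the next word from patterns observed in whatever text it was fed, with no understanding of what the words mean. The corpus is a made-up example of my own.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Build a bigram table: for each word, record every word seen after it."""
    table = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, seed: str, length: int = 12) -> str:
    """Produce text by repeatedly sampling an observed next word."""
    out = [seed]
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:
            break  # no observed continuation; larger models smooth over this
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the knight drew the sword and the knight rode into the storm"
print(generate(train(corpus), seed="the"))
# Possible output: "the knight rode into the sword and the knight drew the storm"
```

Scale that table up to billions of parameters trained on scraped books and websites and you get something far more fluent, but the principle of recombining what was taken in remains the same.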

There’s no novelty to this—yet.

Part of the reason for this is that the makers and owners of these tools can’t tell the difference between us, the tools they exploit daily, and the tool of technology, of which they believe themselves to be the gods and kings worthy of possession. At the base of this is not a quest to truly expand knowledge or empower people, but to consolidate financial assets and expand their dominance in a new economic scheme, one freed from the dying husk of capitalism and giving birth to a new technological feudalism.

Yes, you read that correctly.

Feudalism.

It’s a heavy term to throw out there, but when you see how the businesses, corporations, and executives in control of these tools and their development operate versus what they say they want to do with them (and how they intend to meet those tasks), their words and actions do not meet in any congruent fashion. It is one thing to say that AI tools *need* data to train on, but if you violate copyrights to get it, you care little to nothing for the people you are stealing from. It is simple to say these tools “increase productivity”, but when a tool replaces tens of thousands of workers, as it has across the white-collar and technological workforces, it should be clear that to the oligarchical executives and shareholders, “productivity” equals “profits”. Profits for this class do not include ensuring that our society and the members taking part in that production are taken care of, because those workers, seen as chattel, will be replaced.

The difference between traditional feudalism and this new technological form is that the traditional model required serfs. This new model does away with all except those absolutely necessary, and for most of you reading this, a CEO does not care if you are that person or not. While the kings, queens, and lords of the past had to respect on some level the need for people, these new owners and magnates think people are a drain on their profits, a weight upon their operating requirements, and unworthy of the dignity of being treated fairly.

In addition, the claim that AI will produce “new jobs” is specious at best when those jobs have yet to appear outside of “prompt writer/engineer” and “AI researcher”, which are essentially menial, short-term roles you don’t have to pay people much to do because the tool does most of the work.

Therefore, we are not only seeing workers’ productive output being stolen, but we are seeing it stolen by these conglomerates under the auspices of “we know better than you do and will make life better for you”—a statement no one should ever believe from people stealing their productivity.

“We promise this isn’t just going to benefit us. We promise.”

However, it should be noted that these promises, just as they were given by the kings, queens, and priests of the past, rarely come to fruition.

I am in no way a communist, but if this is the desire of the transforming capitalist class as they turn into technofeudalists, then they need to decide to call themselves what they truly are—feudalists. Because it seems the true intention is to make serfs of the next generation and onward.

In many ways authors are the easiest targets for this: we’re considered losers.

Please, please, please bear with me.

You and I (if you happen to be an author reading this) are not losers. Our readers are not losers.

But these corporate conglomerates look at us and treat us like we are.

Authors, truthfully, have little value to Amazon, Google, Microsoft, or Meta outside of our constant cycle of crunching out content for them to take a large share of. If our words were our crops, our singular lord would tax us anywhere between 25% and 75%, with little justification for the cost beyond the fact that it is what they can charge when there is no true competition. We have also been funneled into a virtual rat race where visibility is made and determined by vertical marketing and the luck of the draw, on platforms where we are given little in the way of data privacy, let alone digital rights, and forget the notion of any real insight into how these algorithms are built for or against us. While marketing is and always will be a grind, we are often forced into conceding so much just to get our work out there and exposed to the masses.
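To put rough numbers on that tax, here is a quick back-of-the-envelope sketch in Python using Amazon KDP’s publicly documented eBook royalty tiers (70% for list prices between $2.99 and $9.99, minus a per-megabyte delivery fee, and 35% otherwise). The delivery-fee rate and file size are illustrative assumptions on my part, and the exact percentages vary by format and program, but the shape of the math is the point.

```python
def kdp_royalty(list_price: float, file_size_mb: float = 1.0) -> float:
    """Rough author earnings per sale under Amazon KDP's two eBook royalty tiers."""
    if 2.99 <= list_price <= 9.99:
        # 70% tier: Amazon also deducts a delivery fee (roughly $0.15/MB in the US store)
        return 0.70 * list_price - 0.15 * file_size_mb
    # 35% tier applies below $2.99 and above $9.99, with no delivery fee
    return 0.35 * list_price

for price in (0.99, 4.99, 12.99):
    earned = kdp_royalty(price)
    print(f"${price:5.2f} list -> author keeps ${earned:.2f} ({earned / price:.0%})")
```

Depending on price point and file size, the platform’s share runs from roughly a third of every sale at the best tier to well over half everywhere else, and the author has no seat at the table where those tiers are set.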

These social media platforms and Amazon realize this, which is why we are often made to feel like we must compete in an industry where readers constantly want new books, want to easily discover them, and want to pay a reasonable price, but are forced to wade through the toxic swamps to do so. Authors have very little say in terms of how that works out, and knowing this, Amazon exploits it.

But now, and with some dread, I’d like you to imagine what this looks like with advanced “AI” tools that allow Amazon, or Microsoft, or Meta to reshape their own legal gray areas, designing systems that not only exploit content creators but do away with them entirely, replacing their work with facsimiles of the original.

There is real reason to worry.

But despite this and the storm ahead, I do not sit and wallow in despair. Accepting your doom is easy, and our society treats accepting your doom valiantly as some sort of weird virtue (as long as you don’t put up a fight). Pardon me, but I see things in the vision of the 24th century and Roddenberry—we don’t have to be doomed if we put in the work. And neither does the true, wondrous potential of AI if placed into appropriate, thoughtful hands.

Yes, there is a chance we fail, but there always is. What matters is that we give ourselves as many “last chances” as we can, and that we do the work to find our way to that bright future.

The start of this resides in asking ourselves a difficult question:

How do we want to suffer (in the short and medium-term)?

The fact is that these machine-learning, data-gobbling, ever-expanding toolsets are here and will be adopted whether we like it or not, and as stated, the early adopters are only looking to exploit others. Even as I write this blog post, Microsoft Word is trying to predict my writing, grabbing and scraping my data off the screen as I type. No, I don’t think I can honestly turn this off, but it is a stark reminder that I do not *own* my word processing software (outside of the wonderful application Scrivener) but license it. And we are about to be hit with a slew of licenses that, in exchange, will require us to turn over our data and content to train these toolsets.

My best advice is as complicated as the situation we face.

We cannot continue to work under certain assumptions about the Big Three/Four traditional publishers that used to at least feel real to a certain extent—like the assumption that the traditional publishers at mass market scale care about their authors in a dignified way. Yes, certain editors are wonderful and care about their authors, certain teams are great and want nothing more than to put out the best books, and I will always stand by the fact that some of the nicest people I’ve met have been in genre publishing on the traditional side.

However, the ownership is incredibly mercenary-minded about books and their use, and that’s me self-editing to remain polite for the audience’s sake.

I’m not talking about the editors or the staff or the marketing teams—I’m talking about the ownership at Harper Collins. When finally forced to deal with an effective unionization effort that helped land their workers better contracts, Harper Collins immediately laid off many of those publishers and their staffs. When the head of Penguin Random House says that his two massive publishing wings are looking for the “best way to utilize AI to reach new and improved production capacity”, one must wonder what that means when he’s following alongside Google and OpenAI/Microsoft, who have shown no compunction about taking copyrighted work without permission for their machines to devour.

This means that authors must get much, much tougher with their publishers about their contracts as they relate to intellectual property. They must, regrettably, interrogate publishers on their stance regarding the eventuality that Amazon may attempt to scrape all eBooks hosted on its platform for generative-AI content.

Let us not pretend Amazon hasn’t had that thought. It is very real.

Authors need to make sure their agents are updated on domestic and international intellectual property laws, and DIG into what translation and audio rights will entail when AI is replacing most of that work—an actual tragedy for readers who enjoy audiobooks. Amazon, as of this writing, is beta-testing an AI text-to-speech tool that converts eBooks into spoken audio so authors can sell their works in a very popular format, but what does *Amazon* take from the author in all this?

With social media being the playground for many of these new AI tools, we also must measure what sort of content we are willing to allow these conglomerates to cannibalize for the sake of their shareholders.

But these are truly short-term concerns next to a larger question:

What should we allow people to do with “artificial intelligence” tools, and more importantly, what can we do to prepare for the eventuality of a technological entity that can think, feel, and tell its own stories?

One fantastic artist I follow, Simon Breese, has proposed the idea, as have others, that every community across the globe should advocate for itself by pushing for a Digital Bill of Rights, and I would go one step further with a Digital Charter that fully enshrines in international law the rights of human beings as they pertain to their Digital Personhood. We are as much ineffable ideas as we are living, material beings, and to guard ourselves as a species we must place constraints and limits upon ourselves. By doing so we approach these emerging technologies far more responsibly.

Does that mean there shouldn’t be innovation, or new forms of commerce, or that people shouldn’t make money? Of course not, but it is very clear we need a titanium-clad framework of legal thought and theory to make sure all are treated fairly, allowed equal access to the potential benefits of placing their work on the digital marketplace, and that there is a real, sincere way to build economic equity on an individual and communal level.

But these are large scale thoughts on a larger problem.

Besides taking a legalistic and utilitarian approach, what else can authors do?

We can keep writing.

It sounds trite, but it matters. Everyone who doesn’t want to write a book thinks it’s easy, or something that can be turned into a system of filling in blanks with someone else’s work. They often have no clue about the real *work* it takes to create your favorite characters, stories, and worlds, thinking it can be quantified down to paint-by-numbers. There is nothing wrong with paint-by-numbers, but most everyone understands that no actual art is being created there, just mimicked. It is a reproduction of art, not the art itself, and it is imperative to remind these technological tyrants of this.

Keep writing, painting, sculpting, playing, and trying to get your work out there for the public to enjoy, reflect upon, and consume. Yes, there is always going to be a level of consumerism, but it is important to remember who the work is for now.

The work shouldn’t be for the studios or the publishers or the labels.

It’s for you, your fans, and those who want to come along for the journey you give them. Nothing is going to replace human creativity as long as humans rebel with their creativity.

More importantly, stop worrying about being a unicorn. Worry about getting the work done. The industry is now about *staying power*, not unicorns, which are rarer and rarer amid the constant onslaught of new media products thrown at us on a daily basis. What matters is staying in the game, working on your craft, and getting it out there again and again until you hook that audience of true believers, not the effete editor or producer looking to secure another rights deal. Growing your craft, creating a consistent process that leads to consistent releases, and keeping yourself out there and in the fight means you have a chance to win. There is no more a guarantee of success at a Big Publisher or Big Studio than there is going indie, but success can be there for you if you keep at it, labor to improve, and don’t give up.

The effort you make, even if you are starting out small, MATTERS.

One thing AI has done, and this is the real damage, is immediately convince so many people that their chance at artistic, literary, or musical success was gone the moment these tools appeared. This is not true at all, but when you have a capital class bent on “efficiency” (a.k.a. laying off people so they can make larger returns for their shareholders), you must become mercenary about this as well.

Remember: your work matters.

But if it matters, TREAT IT LIKE IT MATTERS!

Know what kind of contract you’re signing, really get to know on a character-level the people you choose to work with, and please, if I may beg you—

Don’t surrender.

We have to commit to fighting back. And we don’t do this for just ourselves, or our personal creativity, or what we make with it, but for an actual, real artificial intelligence as well. I long for the day we have the ability to create actual digital companions to help make our lives easier and fuller, but we only get there by defending ourselves, our dignity, and our integrity by standing up against those who do not care about us. We have to say no to those that would exploit us and only work with those that would see our greatest ends come to bear, because those are the people who you will want to share your real success with.

As it stands, I see little use for the current set of artificial intelligence tools in writing because writing is still an ineffable act and the craft can’t be quantified down to simple integers.

Now, as it pertains to editorial work and editors:

I think copy editors should be worried, but content/developmental editors are now more integral than ever before. I keep going back and forth on the future of the proofreader, but that role will become less attractive as these large language models become more accurate, and they will. That is the hardest part of all this—there is no future where writing and AI will not be intertwined.

But what we can do now is push our leaders for actual legislation that limits these conglomerates from the wholesale theft of our intellectual property, including limits on what data is taken and used as grist for these machines. For too long we have allowed governments and industry to make use of our data for their material gain. In exchange for those years, we should have a say in what data is taken and collected, and some decision-making power over how it is used at a macro scale.

This is not the end of books, of writing, or of human authors, but it is a hurdle that will decide who goes on and who doesn’t. It is an evolutionary moment. How high that hurdle will be still remains in our hands.

My hope is that we make it easier, so more people are able to explore their creativity and craft, and also have a chance for that craft to benefit their lives economically when they are successful at it.

But we cannot give up our dignity, rights, and intellectual property without there being real equity given to us for it.

Keep writing.

It sounds trite, but it matters.

Thank you for your time reading this blog, if you reached the end! Please consider signing up for my newsletter down below; you will get a free eBook sent to you after you confirm your email!