Our AI Statement

We’ve been looking to put out a statement on AI for a while now. It’s a contentious topic in itself, but in the creative world, it’s become full-on trench warfare. We felt it was important to address our view on the topic: whether we allow it, how we see its future, and our thoughts on the impact it is having on artists. Here, for simplicity’s sake, we will use the term “Artists” to encompass all creatives, including writers.

Originally, this statement would have begun with an extensive explanation of what AI is—the technology, history, and development of LLMs—which would then have served as a backdrop for our own attitude towards it.

However, the more I engaged with both sides of this argument (detractors vs. supporters), the more it became apparent that this would be an exercise in futility, primarily because the common individual is either apathetic to the existence of AI (and as such has no opinion on the matter) or sees it as just another piece of tech that, like faxes, phones, computers, and the internet, has positives and negatives that ultimately feel inconsequential enough to shrug off. Put simply, they have no skin in the game.

Our statement, then, would be of interest primarily to those who do: individuals who seek out these statements to determine whether we are aligned with their existing opinions on the matter. Detailed explorations of the particulars of this technology won’t tell them much they don’t already know, nor will they help change their preconceived beliefs.

Our statement is this: We not only don’t have an issue with the existence or usage of AI, but we welcome it. We believe that, when used right, it can be a revolutionary tool with tremendous benefits and that its (many) problems are inherent to the system in which it operates, not the software.

Now comes the part where we explain ourselves.

I think it’s important, before anything, to understand that we entirely reject the pervasiveness of the dualistic determinism that has seemingly infected the entire online discourse. For every topic, one must either fully support it (even its most extreme propositions) or fully reject it (even its most reasonable ones).

You are free, believe it or not, to support an idea even if it causes harm to one or more individuals, as long as the collective is better off; something that has been well understood for the past six thousand years, yet now seems to be a contentious position in itself. This means that our stance on various issues can be both practical and ideological, and that the two are not mutually exclusive.

The same applies to our view of AI.

AI is being used and will continue to be used. It’s being used by all. Even if you are devoutly anti-AI and refuse to use it for any reason whatsoever, the apps you install, the movie you’re watching, the recipes at the restaurant you’re eating at, the email from customer support you’ve received, the report your government is basing its legislation upon (true story)—almost certainly, at some point, an LLM was used to generate, refine, brainstorm, or clarify that content.

You can’t escape it.

This pervasiveness means that “looking” for AI signs is pointless, especially in literature. There are, indeed, certain “tells,” such as overly descriptive language or a gap in quality between how a writer normally communicates and how their submitted work reads, but even here you’re at best guessing, and at worst dismissing someone’s work (and potentially ruining their career) on a hunch. Even so-called “AI detectors” are fundamentally flawed and regularly mislabel well-written, polished human writing as “AI-generated.”

These methods don’t work because AI is trained on an almost incomprehensible amount of written data. Written by humans. The “tells” we discussed above are, in effect, its own particular style of writing, just like you’d find in any human writer. A style that can be identified but never proven, because it will, inevitably, also be someone else’s style.

For magazines like ours, whose doors are open to unsolicited submissions, you can see how this can be an issue. Yet not in the way you’d think.

You see, this great overview of human literature is AI’s greatest strength for general use but also its greatest weakness in creative writing. The problem isn’t that we, as a publication, are scared of publishing an AI-generated work, because, by its very architecture, such a work will always be formulaic and uninspired. It does nothing new, because it “does” nothing. LLMs are predictive algorithms: based on the data they have been fed, they make a good guess at which words should appear in sequence in response to your request, but this guess will always be bounded by the data that preceded their training. They are, by definition, formulaic. The problem lies in the speed at which these “works” can be produced.

When we launched Faun by Moonlight, our goal was always to keep submissions and readership free. We initially didn’t plan to pay for stories, but as we heard from various writers that being published by a paying market (even if a token amount) would help them tremendously moving on to bigger and better-paying publications, we put our piggy banks together and decided to pay our writers for their stories from our own pockets.

All was going well. We were receiving a steady but manageable flow of submissions (10–20 a week) while preparing our website, developing the design of the magazine, reviewing said submissions, etc. But that’s when something strange happened. Three weeks after opening our “doors,” around mid-afternoon, we started receiving submission after submission after submission. It was non-stop. In about 24 hours our inbox was flooded with hundreds of them.

We were confused. We were expecting a slight boost in submissions from being listed by Submission Grinder and CLMP, but that had happened a week or so before. We finally ended up asking one of our submitters where they’d heard about us, and they told us it had been from “Author’s Publish.” We had no idea what “Author’s Publish” was, but sure enough, we had been featured in an article of theirs titled “5 Paying Literary Magazines to Submit to in October 2025.”

Our stated goal was to ALWAYS read every submission and ALWAYS provide detailed and comprehensive feedback, regardless of outcome. You can understand how a magazine composed of one full-time member (me) and a few associates, all learning this whole literary magazine business from scratch—and I mean SCRATCH, we didn’t even know how to use Photoshop or InDesign—while preparing for its first issue, would struggle to follow that goal. And struggle we did. Before the day’s end, I was sending form rejections after reading only the first few paragraphs, which was exactly what I detested. After a full day, I had managed to clear out most of my inbox and set aside the submissions I thought were of potential interest. I went to bed at 4 a.m. and woke up at 8 a.m. to go to work. My inbox was full again.

And in all of these submissions, a pervasive “style” kept reappearing, as if the same writer were submitting hundreds of stories a day: a writer with perfectly neat prose, a ton of adjectives, and strange paragraphs that read something like, “Sure! Here’s a short story about a…”, which was a very interesting stylistic choice. We had the same person submit three times, each with a different story. It was incessant, and I’d venture that at least half of all submissions were AI-generated.

So we had a very difficult decision to make. I was immediately opposed to the idea of closing submissions, for two reasons: 1. we hadn’t even published our first issue and would already be closing our doors to new writers, while remaining stuck with the slop we’d already received; and 2. we hadn’t asked for this feature, so it felt like we were damaging ourselves over something external to us.

It’s important to note that we don’t blame Author’s Publish for this situation. We had opened our doors, started paying, and so they did what they do and announced it to their readers. We just weren’t prepared.

I decided to contact the writers who had already submitted their work and explain the situation: we literally had no way to review all of the submissions while at the same time preparing our first issue. So we offered three options: 1. they could make a 5-dollar contribution; 2. they could wait until I managed to go through every submission; or 3. they could withdraw their submission.

The contribution allowed me to effectively distinguish serious submitters from AI or shotgun submitters. From then on, I also applied a 5-dollar submission fee, which later changed to the current model, where you must purchase an issue before submitting ($4.99).

Was it a perfect solution? Absolutely not. Should we have done things differently? Absolutely. Although there would never have been a good way to filter out what we already had while keeping submissions open, we should never have been in a position where this could have happened. Here, we can only blame our inexperience and naivety.

The reception we had was mixed, to say the least.

We had many who came forward and showed their support. All in all, we collected a total of $150, which has been set aside to pay our contributors. This way they can be paid quickly, without relying on our own disposable income.

Some, however, were not impressed, which is understandable. They expressed reluctance or refused outright, so we offered the option of contributing an equal or larger amount to the Palestine Children’s Relief Fund, which some did. We considered their stories just as if they had paid us.

An interesting reaction we received from a few writers was that “our problem wasn’t their problem.” Actually, let me get the exact quote from one writer for you:

As I learned many years ago from a sign permanently installed at the entrance of Rogue Music Rentals in New York City: "Your emergency is not our problem."

Which is an interesting position to hold regarding a publication you wish to be published in.

Some did reach out and explain they were unable to make any sort of payment, and we had one case where a creative writing teacher was submitting their student's work for review. To these, I gave the same level of attention as if they had paid.

Many of these writers also expressed their indignation to Author’s Publish directly. They promptly removed our listing and added the following note:

Initially the new publication Faun by Moonlight was on this list, but they started demanding a 5.00 fee to offset the influx of submissions. While it’s understandable to close to submissions early or request to be de-listed from our site, adding an additional fee and emailing people about it is not acceptable or appropriate behavior.

I contacted them to explain the situation I’ve just recounted here. They were understanding, and while they (rightly) expressed their disappointment in our actions, they were very kind to us. In simpler terms, we got a well-deserved rap on the knuckles. The note remained, which is fair.

I was so embarrassed by the whole situation that I retroactively sent all those who paid—as well as those who could not pay or donated to charity instead—a free copy of our inaugural issue and a feature in our “FBM Thanks” section.

Some who sympathized with our situation reached out and were worried about lasting reputational damage from this mistake. However, our concern is treating writers fairly, publishing excellent work, and being transparent when we fall short. We're a small team learning as we go; we will make mistakes. When we do, we'll own them and make them right, as we've tried to do here.

Institutional respectability or prestige is something we’re not particularly concerned about. We're a few people passionate about literature, doing this out-of-pocket, figuring things out in real time. If that means we'll never be taken seriously by certain corners of the literary establishment... then I don’t know what to tell you. Our responsibility is to the writers who trust us with their work, not to abstract notions of professional reputation.

Still, despite this self-inflicted disaster, the strategy worked: the number of submissions dropped dramatically, while the quality increased substantially.

This experience has defined our practical attitude towards the challenges AI poses.

To us, requiring the purchase of an issue is that solution. It means our prospective writers support and engage with the magazine in which they wish to be published. The old industry adage that “money should flow to the writer” certainly applies to large publishers, who have the means to handle this volume (mostly through form rejections and by prioritizing writers with prior publishing credits), but for magazines like ours, the better model seems to be a sharing of value between the writer and the magazine. They entrust us with their capital; we provide them with a product and detailed feedback. We find this fair and practical.

Ideologically, we believe AI to be a tool with unparalleled potential to benefit humanity, as long as its faults are addressed. These faults can be split into three: overconsumption, overreliance, and theft.

Overconsumption through the current “bigger is better” AI race, which has led these corporations to focus on scale rather than efficiency. The result is a tremendous environmental impact, because training and operating ever-larger models requires enormous computational and energy resources. Massive data centers, necessary to handle the increased workload from these giant models, consume huge amounts of electricity, which leads to higher carbon emissions and strains local power grids.

Overreliance by corporations, which have been convinced that AI is capable of replacing their workers—a delusion that will very soon lead to perhaps the biggest financial crash since the 2008 housing bubble. AI is a research tool, one that can support human staff, but it can never replace them; not because of some innate quality of human work, but because it is guessing software that outputs whatever its best “guess” is at the time. It is fundamentally incapable of independent thought and therefore inherently unreliable. This has already become apparent. Coca-Cola recently released another AI-generated commercial. It was received appallingly, and most online commenters pointed out just how terrible it looked (characters lacked continuity, the assets looked terrible, and many characters changed from scene to scene), while it still required around 100 people to prompt, edit, and prepare; roughly the number they would need to design an actual ad. The cost to produce it was, reportedly, also not much different.

Before we move on to the third and most contentious topic, intellectual property theft, I believe it’s important you understand what we see as the benefits of this technology, to contextualize our response.

There is something inherently socialist about the nature of AI. It has democratized access to knowledge in a way that can only be compared to the creation of the first libraries. Even the internet merely expanded (tremendously) access to information. AI, on the other hand, collects that data, repackages it, and provides you with the knowledge within it. So where in a library or on the internet you might find a legal document, you can now find out how it applies to you specifically.

With AI you have a chef, a book critic, a historian, a proofreader, all at your fingertips, free and ready to use. With this tool you are freed from your socioeconomic constraints. Can’t afford a lawyer? Discuss your situation with an AI: find out what resources you have available, what laws your employer may be infringing upon, whether your insurer is overcharging you. Can’t afford a business consultant to help you set up your small business? Speak to an AI to plan out the best strategies and resources, and learn how to use the various tools at your disposal. Don’t know the first thing about gardening? You don’t need expensive books or overpriced products; you can now grow your own organic, sustainable food.

This is unparalleled freedom. And it comes with all the risks such a concept naturally creates. You are now in possession of the entire body of mankind’s knowledge. Will you use it to be paralyzed by your decisions or to inform them? Will you use it to help develop your writing or to do it for you? Will you use it to research topics or write the paper for you?

It’s how an individual uses a tool, not whether they use it, that will determine their relationship with it.

We must, then, contend with how this knowledge is obtained: much of it is through unauthorized and unpaid use of copyrighted material.

Yet is the solution to be “anti” a tool with such potential? Does it make sense to argue for the destruction of a tool that offers unparalleled access to information previously out of reach for those without means?

Artists MUST be compensated for the work stolen from them to train these models. There must also be legislative solutions that prohibit this abuse and provide fair compensation to the individuals who created the work being used. Yet, more often than not, these demands appear to serve as the foundation for moral claims rather than as arguments in themselves, and are then used as attacks against those who use AI.

This is counterproductive for a few reasons: 1. The data has already been scraped. That bell can’t be unrung, and the models can’t “un-learn” this data any more than you can “unread” a book. Even if we shut down all AI corporations right now (Google, Anthropic, OpenAI), many of these models are already deployed and open source. And 2. Moralizing one’s arguments is acceptable, but when it serves as a backdrop for vilifying users, it loses the base of popular support needed to push for these changes. Seeing as there are over one billion users of AI, that’s a losing war if I’ve ever seen one.

A company called Mentava, which specializes in educational software, recently released a free e-book (a hardcover version is sold) designed to help small children, especially those with learning difficulties, learn to read. A backlash quickly erupted from the artistic community, particularly on Twitter (I feel right-wing saying X), criticizing the company for using AI-generated images in the book: cute flowers, cows, and the like. Mentava defended the book, which some testimonials claim is highly effective, by stating that the use of AI images was due to a lack of budget. They claim the book generates no profit and is primarily meant to support child development. This explanation, however, only fueled the artists’ criticism. One of them wrote the following:

“If you don't have budget, you don't do this book. It's too easy to find excuses. Do a crowdfunding, or learn to scope it properly.”

I think the position, “You shouldn’t make a book that helps children because I’m not getting paid,” isn’t one that will help the cause of digital artists. In fact, it may end up doing the exact opposite. And I fear that, at some point, these and many similar reactions will lead the common person to simply check out. Many already have.

Ideally, we should nationalize these corporations and keep this software state-run with public access. But that’s a very long conversation for another time.

Ultimately, our goal is to explain to you, our writer, that if you have used AI to brainstorm, research, or edit your work, then we’d be happy to receive it. If you used it to generate it, then we are not. We’ve outlined the reasons why above.

Faun by Moonlight's AI Policy

This document outlines the official AI statement from Faun by Moonlight literary magazine. We address our position on AI in creative writing, its practical and ideological implications, and our policies for submissions.

Our Stance

We not only accept the existence and use of AI but welcome it as a revolutionary tool. We believe its problems are not inherent to the software but to the system in which it operates. We reject dualistic online discourse and hold nuanced positions.

AI in Submissions

We welcome work that uses AI for brainstorming, research, or editing. We do not accept work *generated* by AI. AI-generated content is inherently formulaic and predictive, lacking the originality we seek. Our submission model (requiring a purchase) helps filter out low-effort, AI-generated spam.

Practical Challenges and Solutions

We share our experience with a massive influx of AI-generated submissions after being featured on a prominent list. This led us to implement our current submission model, which requires purchasing an issue. This model ensures submitters are genuinely engaged with our magazine and helps us manage submission volume fairly, allowing us to provide detailed feedback.

Ideological View

We view AI as a democratizing tool with unparalleled potential, offering free access to knowledge (legal, business, personal) previously restricted by socioeconomic barriers. We also address its faults, like overconsumption (environmental impact) and overreliance (corporate delusion of replacing workers).

Intellectual Property

We acknowledge the critical issue of AI models being trained on unauthorized copyrighted material. We believe artists MUST be compensated and that legislative solutions are necessary. However, we argue that moralizing the issue or vilifying users is counterproductive and will not achieve the desired change.

Conclusion

Our goal is transparency. Use AI as a tool to enhance your human-created work, and we will be happy to read it. Do not use it to generate the work itself.
