Can AI generated text be reliably detected?

Engineering Programming

Nope, that can’t be done.


The process of identifying text that was generated by AI is adversarial. That is to say, there are going to be people who are interested in keeping themselves hidden. This serves only to ensure that there will never be a solution that can be relied upon. There will be a few potential solutions, but none of them will be trustworthy.

There are still instances of spam email and counterfeit banknotes. Appears to be a very simple issue to resolve, but there are always some counterfeit bills and there is always some email spam that gets through detection. This is due to the fact that whenever one person discovers a method to plug an existing hole, another individual discovers a way to create a new one.

Additionally, this is the incorrect way to approach the issue at hand. It is not necessary for us to prohibit AI text; the fact that a young person on Reddit finds it amusing to operate an account that posts GPT-3 replies is not an issue in and of itself. To tell you the truth, some of the human interactions are not any more helpful than what you would get from a bot; in fact, in some instances, the opposite is true.
The significant risk is that the internet will be flooded with content that is generated by machines. every single social network, the entirety of Wikipedia, and everything else. In order to put an end to this, we need sign-up procedures that are resistant to sybil attacks. We have to make sure that the user who is administering the account is a real person and not some kind of automated system that controls a thousand other accounts. There must be evidence that a person exists, and we ought to have had this by yesterday at the very latest. At this very moment, social networking platforms like Twitter and Facebook are overrun with automated accounts.

Consider all of the interactions you’ve had in the virtual world. The positive aspects. The thought-provoking but disorganized blog post that caused you to reconsider something. Having a conversation with a person on a social network for the sole purpose of sharing each other’s work and developing a wonderful professional relationship. Direct message conversation with a friend who understands your suffering. obtaining comments from a guide or instructor. Hearing people praise your work and talk about how it has influenced them positively is a wonderful feeling. The negative aspects as well. The critical remarks made by a variety of individuals at random. People are making fun of your tweet. People have referred to your HN comment as stupid.

All of these things have an effect on us because we assume there is a person behind them, and because of this, we can deduce their purpose. In the absence of that, we have to compensate by devaluing all forms of communication. As if to exaggerate the effect of how we look at likes on social media, where something like “20,000 likes” means absolutely nothing to us.

I honestly believe that the only reason we are okay with it now is because it is simple to identify what is generated by AI, and these things are generally confined. When artificial intelligence is creating videos on Twitter, starting threads on Reddit, posting blogs on their personal site to be shared to Mastodon, and responding to other bots on HN, I doubt that we will be able to determine whether or not something is of high quality anymore.

There’s also the differentiation theory, which holds that we should act in the opposite way in order to differentiate human connection from that of bots. We are going to start injecting prompts into everything that we post, and we are also going to start being purposefully “low quality” because bots are trained to avoid that. Right now, we’re making a very specific kind of a mess of things in society.