Many enterprises have established their own rules for using ChatGPT, Copilot, and other AI tools. While most find it acceptable to use GenAI to guide the gathering of insights and information, they take a far less charitable view when employees try to pass off GenAI output as their own work. But how do we police our people when enterprise guidelines tell them GenAI must NOT be used?
That bastion of balance, the BBC, updated its guidelines in February. Even the Beeb is experimenting with GenAI, for example by turning speech from sports commentaries into rapid written reports. In doing so, it is at least adapting and reusing its own content; it does not suggest using GenAI to generate news reports from a set of facts, or to copy third-party content. “Our principles commit to harnessing the new technology to support our public mission, prioritizing talent and creativity, and being open and transparent with our audiences whenever and wherever we deploy GenAI,” says the BBC.
But how can organizations prevent content creators from simply copying and pasting chunks of ChatGPT responses and passing them off as their own work?
Many professional writers and editors can just ‘sense’ the hand of GenAI. They recognize tell-tale signs such as overly complex sentences, unusual word use, lack of sources, and vague statements that don’t add much to the narrative.
But a ‘sense’ isn’t enough to haul an offending employee over the coals or fail a student on an exam. So, where can we turn for proof?
GPTZero lets you copy and paste the text you want to check and returns a score. It operates on a freemium model, but full plagiarism detection requires the $16/month premium version. The app’s focus to date has been the education community.
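For teams that want to screen content at scale rather than pasting it into the web interface, GPTZero also exposes a REST API. The snippet below is a minimal sketch only, assuming the vendor’s v2 `predict/text` endpoint, an `x-api-key` header, and a probability field in the response; these details are taken from public documentation and may change, so verify against the current docs before relying on it.

```python
# Minimal sketch: submitting text to GPTZero's detection API.
# Endpoint, header, and response field names are assumptions based on
# public docs -- verify against the vendor's current documentation.
import requests

API_KEY = "your-gptzero-api-key"  # placeholder credential
ENDPOINT = "https://api.gptzero.me/v2/predict/text"

def check_text(text: str) -> dict:
    """Send a document to the detection endpoint and return the raw JSON verdict."""
    response = requests.post(
        ENDPOINT,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    sample = "Paste the paragraph you want to screen here."
    result = check_text(sample)
    # The response typically includes a per-document probability that the
    # text is AI-generated; treat the score as a flag for human review,
    # not as proof (the field name below is an assumption).
    doc = result.get("documents", [{}])[0]
    print(doc.get("completely_generated_prob"))
```

Whatever the exact fields, the design point stands: the score is an input to an editorial review, not a verdict in itself.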
Originality.AI takes a more comprehensive approach, encompassing readability scoring, automated fact-checking, and report sharing. It will also show you who has previously checked the same content, making it a better fit for enterprise content production processes.
Alongside GPTZero and Originality.AI, Winston AI completes the trio of tools most frequently cited as top performers across multiple review sites. Winston claims a 99.98% detection rate and a commitment to keeping pace with each new LLM iteration.
Others to consider include Sapling’s AI Detector, Writer, Copyleaks, Crossplag AI Content Detector, Content at Scale’s AI Text Detector, and AISEO.
While any and all of these tools may prove useful, none is foolproof. The Mozilla Foundation found this year that AI detection tools are not always as reliable as their developers claim. Researchers also called out a reality that anyone who prompts regularly already knows: additional prompts can steer GenAI responses to sound more human.
Our guidance is that only professional writers and content editors should apply AI detection tools, and only when their ‘Spidey sense’ tingles at patterns and oddities they spot in the content. Finally, the output of these tools should only be used to instigate further investigation, not to form hard-and-fast conclusions.