
During a recent application process, the company stated the following: “We encourage using AI tools in your day-to-day work. That said, we ask that candidates do not use AI to generate responses in this application or during interviews. We want to hear directly from you. Do you agree not to use AI-generated content in this process?”
So AI is fine to use internally: with your peers, your team, your group, the company as a whole. It is ok to use with your investors, the media, the government, clients, and prospective clients. But it is not ok to use in the interview process? That seems hypocritical.
It also displays a misunderstanding of whether AI-aided content is original content. If I ask ChatGPT to construct a summary for my resume, using the position description and my profile, it returns a starting point, not a finished product. It has provided content that teases out connections between the description and my experience. That’s grunt work. And it is completely, totally new and unique, exactly the type of work we should be using AI to produce. What follows is prompt after prompt. I review the summary and tell it to remove x, y, and z. I tell it to include p, d, and q. I tell it to rewrite with a different tone. I tell it to rewrite for a particular target persona. I tell it to include specific numbers. The finished product looks very different from the first draft, and the changes are by my hand.
With a cover letter, I tell it to look up and include the address of the company headquarters. Would my cover letter be better if I looked up the address manually? Would my product be better or more true? No, it wouldn’t. When the prompting is over, there is still editing to do on the passage or letter. By the time the content is finished, it is my content.
In the end, it makes no sense to ignore AI, and it makes no sense to stigmatize its use.
Share your opinions on this issue! Please comment below.
Finally, what does AI have to say? I asked ChatGPT, with a few prompts, and this is what it said:
“Banning candidates from using AI tools during the application and interview process may appear to promote fairness and authenticity, but in practice, such restrictions often reinforce inequity, stifle innovation, and fail to reflect the realities of a modern workplace. Today’s professionals use AI tools like ChatGPT, Grammarly, and GitHub Copilot to draft content, debug code, conduct research, and streamline workflows. If the use of these tools is considered legitimate—and even essential—on the job, it is illogical to prohibit them in the hiring process. In effect, such bans penalize candidates for using the very tools they will be expected to master once employed.[1]
More critically, restricting AI use disproportionately harms candidates without access to traditional career advantages, such as coaching, expensive resume writing services, or professional networks. AI can help level the playing field by offering accessible support for resume optimization, interview preparation, and writing assistance. A 2023 study by the MIT Sloan School of Management found that access to AI tools significantly improved the performance of lower-skilled workers, effectively narrowing skill gaps.[2] In this context, banning AI use reinforces privilege rather than merit, ultimately undermining diversity and equity in hiring.
AI use also signals a candidate’s adaptability, digital fluency, and willingness to engage with modern technologies—traits that are increasingly valuable in knowledge work. According to McKinsey’s 2023 report on workforce transformation, roles across industries are being redefined by generative AI, and employers are seeking talent who can partner with such tools effectively.[3] Punishing candidates for demonstrating these skills during the application process discourages precisely the kind of continuous learning and innovation that many organizations claim to value.
From an operational standpoint, enforcing AI bans is not only difficult but also invites flawed judgment. AI-generated content can be edited, humanized, or blended with original writing in ways that make detection unreliable. Tools for identifying AI-generated text are still evolving and frequently produce false positives or fail to detect nuanced human-AI collaboration.[4] This creates a climate of suspicion and risk that diverts attention from evaluating the actual value and quality of a candidate’s output.
Moreover, this focus on enforcement reflects an outdated mindset that values process over outcome. In knowledge work, what matters is the final product—whether that’s clean code, a persuasive pitch deck, or a well-researched report—not whether it was created with the aid of a tool. In fact, the use of AI to enhance productivity, clarity, or accuracy often mirrors real-world workflows. A recent report by PwC highlights how organizations that integrate AI effectively into workflows see significant gains in productivity and creativity.[5] Prohibiting AI in hiring screens out the very behaviors that drive performance in modern work settings.
There is also an ethical inconsistency in companies banning AI use by candidates while relying on AI internally to screen resumes, score assessments, and evaluate fit. This creates a double standard where the employer leverages automation to save time and reduce cost, yet denies the same benefit to candidates. This asymmetry erodes trust and signals a lack of transparency. The Brookings Institution has warned that such unbalanced uses of AI can undermine public confidence in hiring systems and entrench power imbalances.[6]
Rather than banning AI, the more effective and future-aligned strategy is to allow candidates to use AI while assessing how thoughtfully they use it. Can they prompt effectively? Do they review and revise AI-generated responses with discernment? Can they explain how they arrived at their conclusions? These are the types of behaviors that matter in AI-augmented workplaces. The World Economic Forum, in its 2024 Future of Jobs report, recommends that employers embrace responsible AI usage and evaluate candidates on their ability to leverage it ethically and effectively.[7]
In summary, banning candidates from using AI in hiring processes is not only impractical—it’s regressive. It punishes resourcefulness, widens inequities, and reflects a misunderstanding of how modern work gets done. A better approach is to embrace AI as a tool that, when used responsibly, can make the hiring process more equitable, efficient, and future-ready.”
Footnotes
1. World Economic Forum. “How AI Is Reshaping the Workplace.” WEF, 2023.
2. Brynjolfsson, Erik, et al. “Generative AI at Work.” MIT Sloan Management Review, 2023.
3. McKinsey & Company. “The Economic Potential of Generative AI.” McKinsey Global Institute, June 2023.
4. OpenAI. “Limitations of AI Content Detectors.” OpenAI Technical Notes, 2023.
5. PwC. “AI Jobs Barometer.” PwC UK, 2023.
6. West, Darrell M. “The Ethics of Artificial Intelligence in Hiring.” Brookings Institution, 2023.
7. World Economic Forum. “Future of Jobs Report 2024.” WEF, May 2024.