
Key takeaways:
- According to SHRM’s 2025 Talent Trends report, social media is the most-used recruiting strategy across industries, surpassing job boards and compensation improvements.
- Courts have begun treating AI vendors as “agents” of the employer. Your vendor’s algorithm is your legal problem.
- California, Colorado, and New York City have all enacted or are currently enforcing AI hiring regulations. More states are following.
Social media recruiting is no longer just a trend. According to SHRM’s 2025 Talent Trends report, it’s the single most-used recruiting strategy in the industry, ahead of compensation improvements, flexible work offers, and job board advertising. Three years ago, 70% of companies were using social platforms to research candidates. That adoption has only deepened.
But the strategy has gotten more complicated. The platforms candidates use have split by generation. AI tools have embedded themselves into the screening process. And a wave of state legislation has turned “we use software to screen applicants” into a compliance statement that needs legal review.
LinkedIn still leads, but only for some of your candidates
LinkedIn’s billion-plus member base remains the default for professional recruiting. For senior and mid-level roles, it still dominates.
What has shifted is the Gen Z side of your pipeline. Nearly half (46%) of Gen Z candidates found a job or internship on TikTok, and this generation leans toward Instagram (76%) much more heavily than LinkedIn (34%) for career content and networking. For the youngest segment of the workforce, an entertainment platform has officially overtaken the world’s largest professional network. The #CareerTok hashtag alone has over 2 billion views.
This doesn’t mean every firm needs a TikTok strategy. The platform has shown real results for high-volume roles, skilled trades, and positions where showing the actual work environment helps pre-qualify candidates. Executive and senior technical roles still perform better on other channels. But if there are roles in your mix that are ideal for Gen Z, you need to reach those candidates where they’re actually searching.
AI has changed what “social media screening” even means
Just a few years ago, recruiters used social channels to manually look up candidate profiles. That still happens, but it’s been joined by something with higher volume and higher stakes: AI-powered screening tools that process applications, score candidates, and filter pipelines automatically, often without a human reviewing individual decisions.
AI-powered hiring tools have processed millions of applications and triggered hundreds of discrimination complaints. These tools promise speed and efficiency. Many deliver both. But they also inherit the biases embedded in their training data, frequently without the employer knowing.
Stanford researchers warned in October 2025 that AI resume-screening tools rated older male candidates higher than both female and younger candidates, even when all resumes were generated from identical underlying data. A separate VoxDev study published in May 2025 found that AI hiring tools systematically favored female applicants over Black male applicants with identical qualifications. These aren’t glitches; they’re predictable outputs of tools trained on historically skewed hiring data.
Staffing firms often use these tools internally while also deploying or relying on them on behalf of clients, which creates two points of liability exposure at once.
Your vendor’s algorithm is your legal problem
If your vendor’s tool produces discriminatory outcomes, you can’t point to the vendor contract and walk away. Liability follows the decision, not the software license.
Major legal cases are moving quickly and more will follow:
- A federal judge in the high-profile Mobley v. Workday litigation determined that AI-driven screening platforms can be classified as an “agent” of the hiring organization.
- In January, Eightfold AI, a major player in the AI recruiting space, was targeted in a pioneering lawsuit filed by job applicants.
What about the regulatory response? At the federal level, the U.S. Senate voted 99–1 to strip a proposed moratorium on state AI laws from the One Big Beautiful Bill Act. The practical result is that state-level compliance requirements aren’t going away, regardless of the current administration’s posture on AI regulation:
- California regulations that took effect October 1, 2025, require employers to maintain records of automated hiring decisions for four years and prohibit deploying AI that screens out applicants based on protected characteristics.
- New York City’s Local Law 144 is already in force, requiring annual independent bias audits for any automated employment decision tool and public disclosure of audit results.
- Colorado’s AI Act takes effect in June 2026, extending similar requirements to employers with more than 50 employees.
If you’re using any tool that automatically screens, ranks, or filters candidates, including plugins inside your ATS, those tools need to be inventoried, audited, and documented. “We didn’t know the vendor used AI” won’t hold up in court.
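For context, the arithmetic at the center of most of these bias audits, including the impact ratios Local Law 144 requires auditors to report, is straightforward: compute a selection rate for each demographic group, then divide each group’s rate by the rate of the most-selected group. The Python sketch below is illustrative only; the group labels and counts are invented, and it is not a compliant audit, which has to be performed independently and published.

```python
# Minimal sketch of an impact-ratio calculation, the core arithmetic behind
# most AI hiring bias audits. All counts below are invented for illustration.

# applicants screened and advanced by an automated tool, per group (hypothetical)
screened = {"Group A": 1200, "Group B": 800, "Group C": 400}
advanced = {"Group A": 300, "Group B": 140, "Group C": 60}

# selection rate = advanced / screened, per group
selection_rates = {g: advanced[g] / screened[g] for g in screened}

# impact ratio = each group's selection rate / the highest group's selection rate
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    # the traditional "four-fifths" rule of thumb flags ratios below 0.80 for review
    flag = "  <-- review" if ratio < 0.80 else ""
    print(f"{group}: selection rate {selection_rates[group]:.2%}, "
          f"impact ratio {ratio:.2f}{flag}")
```

The point is that this math is simple enough to run internally on your own tools’ output before an independent auditor, a regulator, or a plaintiff’s expert runs it for you.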
Best practices for screening and sourcing today
1. Audit your tools
Document every system that touches candidate screening, ranking, or routing, including third-party integrations inside your ATS. Classify each by how much influence it has on outcomes. If you can’t explain why a candidate was rejected, you have a documentation gap. (A sketch of what such an inventory might look like follows this list.)
2. Match your platform strategy to your candidate demographics
Senior and professional roles still belong on LinkedIn. For high-volume, skilled trades, and early-career roles, evaluate whether your sourcing is actually reaching Gen Z candidates on the platforms where they’re searching.
3. Review your vendor contracts
Bias audit cooperation, record retention, jurisdiction-specific disclosure requirements, and data access should all be addressed explicitly. If they’re not in the contract, you’re absorbing liability that should be shared.
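To make step 1 concrete: there is no mandated format for a tool inventory, but a machine-readable record with a handful of consistent fields goes a long way. The sketch below is hypothetical; the tool name, vendor, and field choices are placeholders, not a prescribed schema.

```python
# Hypothetical sketch of a minimal AI/automation tool inventory for hiring.
# Tool names, vendors, and field choices are invented placeholders.
from dataclasses import dataclass, asdict
import json

@dataclass
class ScreeningTool:
    name: str             # e.g., an ATS ranking plug-in
    vendor: str
    function: str         # "screens", "ranks", "routes", "scores"
    influence: str        # "advisory only", "filters pipeline", "auto-rejects"
    jurisdictions: list   # where placed candidates apply or work
    last_bias_audit: str  # ISO date, or "none"
    retention_years: int  # how long decision records are kept

inventory = [
    ScreeningTool(
        name="ExampleRank",
        vendor="ExampleVendor Inc.",
        function="ranks applicants",
        influence="filters pipeline",
        jurisdictions=["NYC", "CA"],
        last_bias_audit="none",
        retention_years=4,
    ),
]

# Export as JSON so the inventory can be dated and archived alongside audits.
print(json.dumps([asdict(t) for t in inventory], indent=2))
```

A dated export like this, refreshed whenever a vendor adds an AI feature, is the documentation you want on file when a client, an auditor, or opposing counsel asks what each tool was doing and who knew about it.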
FAQ for staffing agency leaders
Q: Is social media candidate screening still legal?
A: Yes, but the legal risk has shifted. Manually researching a candidate’s public social profiles is still a common and legal practice. The greater exposure now sits with automated tools that screen or score candidates based on data that may include social signals. If a tool creates disparate impact against a protected class, the employer is liable under Title VII, the ADA, and their state equivalents, regardless of whether the decision was made by a person or an algorithm.
Q: Does my staffing firm need to comply with NYC Local Law 144 or California’s AI regulations?
A: If you’re placing candidates in New York City or using AI tools to screen applicants who work or apply in California, the answer is likely yes. These laws apply to employment agencies and staffing firms, not just direct employers. This is worth a direct conversation with employment counsel if your firm operates in either market.
Q: Which platform should we prioritize for sourcing?
A: It depends on the roles you’re filling. LinkedIn remains the strongest channel for professional and senior-level sourcing. TikTok has delivered real results for high-volume, trades, and early-career roles where showing the actual work environment resonates with younger candidates. Facebook and Instagram still perform for employer branding campaigns targeting a broader demographic. Firms operating across multiple verticals benefit from matching platform strategy to role type rather than defaulting to a single channel.
Q: What’s the difference between manual social screening and AI screening?
A: Manual screening is a recruiter looking up a candidate’s public profile, which carries its own legal risks if done inconsistently across candidate pools. AI screening refers to automated tools that process applications, score resumes, rank candidates, or filter pipelines at scale without individual human review of each decision. The legal exposure is higher with AI tools because bias is amplified across thousands of candidates simultaneously.
Q: Should we stop using AI screening tools?
A: Not necessarily. These tools can meaningfully reduce time-to-fill and recruiter workload. The goal is responsible deployment: inventory your tools, understand what they’re doing, conduct or require bias audits, and maintain documentation. The risk isn’t the technology itself; it’s using AI without oversight.



