An open experiment in detecting automation patterns on GitHub.
I didn’t set out to build this website; it came together after reading article after article and watching open source maintainers struggle with AI agents targeting their projects.
AgentScan uses an opinionated scoring system to analyze public GitHub events and classify accounts based on their latest activity. The results are indicators, not verdicts. There’s no AI involved — just event analysis looking for patterns that feel automated.
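To make the idea concrete, here is a minimal sketch of what event-based heuristics can look like, written in TypeScript against GitHub’s public events API. The signals and weights below are illustrative assumptions, not AgentScan’s actual scoring rules.

```ts
// Illustrative heuristics only; not AgentScan's real scoring system.
interface GitHubEvent {
  type: string;
  created_at: string;
}

async function fetchPublicEvents(username: string): Promise<GitHubEvent[]> {
  // GitHub's REST API exposes a user's recent public events.
  const res = await fetch(
    `https://api.github.com/users/${username}/events/public?per_page=100`,
    { headers: { Accept: "application/vnd.github+json" } },
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  return res.json() as Promise<GitHubEvent[]>;
}

function scoreAutomationSignals(events: GitHubEvent[]): number {
  if (events.length < 2) return 0;

  const timestamps = events
    .map((e) => Date.parse(e.created_at))
    .sort((a, b) => a - b);

  // Signal 1: very regular gaps between events (low variance) feel scripted.
  const gaps = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const mean = gaps.reduce((s, g) => s + g, 0) / gaps.length;
  const variance = gaps.reduce((s, g) => s + (g - mean) ** 2, 0) / gaps.length;
  const regularity = mean > 0 ? 1 - Math.min(1, Math.sqrt(variance) / mean) : 0;

  // Signal 2: activity spread across all 24 UTC hours suggests no sleep cycle.
  const activeHours = new Set(
    timestamps.map((t) => new Date(t).getUTCHours()),
  ).size;
  const roundTheClock = activeHours / 24;

  // Combine signals into a 0–100 indicator; the weights are arbitrary here.
  return Math.round((0.6 * regularity + 0.4 * roundTheClock) * 100);
}

// Usage: a higher score means more automation-like patterns, not a verdict.
// const score = scoreAutomationSignals(await fetchPublicEvents("octocat"));
```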
The scoring is not bulletproof. Sophisticated automated accounts can pass undetected, and legitimate developers can occasionally trigger false positives. That’s why AgentScan also maintains a curated list of manually verified accounts — submitted by the community, reviewed by maintainers, and merged via pull request. No account is added without human verification.
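For illustration, a verified-list entry might carry fields like the ones below. This is a hypothetical shape, not the format used in the repository; the point is that every entry traces back to a human submitter and a human reviewer.

```ts
// Hypothetical shape for a manually verified entry; the actual repository
// may store this differently.
interface VerifiedAccount {
  login: string;       // GitHub username of the verified account
  submittedBy: string; // community member who proposed it
  evidence: string;    // link or summary supporting the classification
  reviewedBy: string;  // maintainer who verified it before merging
  verifiedAt: string;  // ISO date of the human review
}

const entry: VerifiedAccount = {
  login: "example-automated-account",
  submittedBy: "a-community-member",
  evidence: "https://github.com/example-automated-account",
  reviewedBy: "an-agentscan-maintainer",
  verifiedAt: "2025-01-01",
};
```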
This is an ongoing experiment. Scores may be inaccurate. Use them as a starting point, not a conclusion.
If you’ve found a GitHub account you believe is automated, you can submit it for review.
Please only submit accounts you have reasonable evidence for. Submissions without supporting context will be closed.
If your account has been flagged and you believe it was done in error, we take wrongful classifications seriously. The goal is accuracy, not accusation.
Contributions are welcome. If you find something that doesn’t work or have an idea for something that works better, open an issue or a pull request.
For local development setup, see CONTRIBUTING.md.