See how well you use AI to code
Upload a session log from Cursor, Claude, Codex, Copilot, or Windsurf. Slait scores your session across 8 dimensions, revealing your AI coding strengths and blind spots.
How It Works
A simple three-step flow
Step 01
Upload
Drop your .md, .txt, .json, or .jsonl session log from any major AI coding tool.
Step 02
Analyze
Slait scores you across 8 dimensions (Planning, Debugging, Iteration, Tool Usage, and more), with evidence pulled directly from your transcript.
Step 03
Compete
Your scores feed into The Slate, a public leaderboard where you can track progress and see how you stack up.
Go to The Slate
Scoring Model
What we score
Planning
How well you scope and structure tasks
Debugging
How efficiently you find and fix bugs
Constraints
Clarity of requirements and boundaries
Iteration
Refining prompts based on output
Correction
Handling and recovering from errors
Tool Usage
Effective use of AI tools and features
Repetition
Avoiding redundant prompts
Understanding
Grasping AI output and identifying next steps
Teams
Want to try this for your team and candidates?
Leave your email and we'll get in touch to help you evaluate how your team and candidates use AI to code.
Feedback
Send us feedback
Tell us who you are, how to reach you, and what you think. We read every message.
