Most candidates go in hoping they'll perform. 1on1.fyi tells you exactly where you're losing points — and exactly how to fix them. Practice with rigor. Get scored with honesty.
No account. No card. Just practice.
Companies actively source engineers here. Your profile stays private unless you opt in.
The Problem
You can solve the problem in your head. You can whiteboard the system design. But in the interview room, it falls apart — because you never practiced speaking your reasoning out loud. Never got scored on clarity. Never heard exactly where your answer lost points.
1on1.fyi doesn't just give you questions. It gives you the coaching you need to understand why your answers aren't landing — and shows you exactly what a better answer looks like.
We generate questions from your resume and the job description. You answer. Speak your thinking aloud — for coding, for system design, for behavioral. It feels like the real thing.
Then we show you. Not just a score — but where you lost points, why you lost them, and what a stronger answer looks like. Point by point. Specific. Honest.
Run it again. Track your progress. Show up on interview day knowing exactly where you stand — and exactly what you need to work on.
Behavioral, technical, system design — we cover the full interview stack. Each question tailored to you, each answer evaluated honestly.
We analyze your resume and the job description. Questions aren't generic — they're designed to surface your specific experience and the gaps you need to address. Answer honestly. Get scored honestly.
No more preparing "Tell me about yourself" for every company. Your questions are built for where you're applying and what you've actually done.
Real coding environments with LeetCode/Kattis-style problems. Speak your reasoning aloud — the AI evaluates not just your code, but how clearly you think.
Most candidates fail the communication part, not the code. We score both.
Architecture questions with diagrams. Talk through your tradeoffs — the AI flags where you're oversimplifying, where you're missing scale considerations, where your explanation loses interviewers.
System design is where candidates get exposed. Not because they don't know the concepts — but because they don't communicate them well under pressure.
For Candidates
When you practice on 1on1.fyi, you build a real track record — your scores, your interview types, your progress over time. Opt in to our talent pool and companies actively sourcing engineers can find you. No applying cold — they come to you.
Your profile only shows what you want. You control visibility — and you can opt out anytime.
"I thought my system design was solid. The coaching loop flagged that I never discussed database tradeoffs — and that I'd lost 3 interviewers that way in real interviews before I found this."
"Behavioral questions always caught me off guard. 1on1.fyi showed me I was answering the 'what' but never the 'why' or the 'so what.' Fixed it, got to final round at Google."
"I was bombing live coding because I'd never practiced speaking while I coded. The real-time feedback on my communication was more valuable than the code feedback."
Start free. Upgrade when you're serious.
Try it out. See if it's for you.
For candidates who are serious about their next role.
For recruiting teams and interview coaches.
Cancel anytime. No contracts. Your data is never shared.
No. We don't share your data with anyone — not employers, not recruiters. Your interview answers are private. We don't even track which companies you're applying to unless you tell us in the job description you paste.
Both. Questions adapt to your experience level. A new grad gets different behavioral questions and different coding problems than a senior engineer — and the evaluation criteria adjust accordingly.
LeetCode tests whether you can solve problems. 1on1.fyi tests whether you can solve them under interview conditions — speaking your reasoning, handling tradeoffs, explaining your choices. It's the whole interview, not just the code.
After each answer, you see exactly where you lost points and why. Not a number — a breakdown. "You answered what but not why." "You missed the tradeoff discussion." "Here's what a stronger answer looks like." It's the feedback you wish you'd gotten before your last interview.
Behavioral interviews work on mobile. For coding and system design, we recommend desktop — you need a real keyboard and screen space for the environment. Mobile is fine for review and feedback though.
We don't have a bank of "Google questions" or "Meta questions." Instead, we generate questions based on your resume and the job description — which means you practice relevant questions for whatever role you're targeting. Most companies don't use fixed question banks anyway.
Only you, by default. If you opt in to the talent pool, companies browsing for engineers can see your scores and interview history — no company can see your data without your explicit permission. Your profile stays private unless you choose otherwise.
Yes. You choose which scores and interview types are visible. You can opt out of the talent pool at any time and your profile will be removed from company searches immediately.
For Teams & Companies
1on1.fyi gives your hiring team a structured way to evaluate candidates — with rubric-based scoring, async interviews, and honest feedback on every candidate. Stop relying on gut feel. Start hiring with rigor.
No credit card. No contracts. Full platform trial.
The Problem
You've been burned. The candidate who aced the coding challenge couldn't write production code. The one with the great stories didn't deliver when it mattered. You're not bad at hiring — you're relying on the wrong signals.
The wrong signals: gut instinct, unstructured interviews where every interviewer scores differently, candidates who prep the same generic answers, and no way to compare across people except a feeling.
Post your job. Set the skills and level you're hiring for. Candidates find you — or you send them a direct link to apply. Their profile goes straight into your pipeline.
Pick candidates. Send them a 1on1.fyi interview invite — behavioral, coding, or system design. They practice async on their own time. You get the scorecard before your first call.
Your interviewers review the AI-scored evaluation — point by point. Flag what matters. Add your own notes. See how each candidate compares across your rubric, side by side.
You're not guessing anymore. You have a structured score, specific feedback, and a team that's aligned on what "good" looks like. Make the call with confidence.
From job posting to final score — 1on1.fyi gives your team the evaluation infrastructure that makes hiring consistent, comparable, and honest.
Post jobs, collect candidates, move them through stages. Every candidate has a profile — their evaluation scores, interview history, and team notes in one place. No more spreadsheets. No more "did we evaluate them yet?"
See every candidate's journey from application to offer, with structured data at every stage.
Every interview type has a rubric. Not "they did fine" — structured criteria: communication, technical depth, tradeoff discussion, culture alignment. Every interviewer scores the same dimensions. Comparisons become real.
Your team knows exactly what "good" looks like — and everyone scores against the same ruler.
| Criterion | Score | Notes |
|---|---|---|
| Communication | 7/10 | Clear but verbose |
| Technical depth | 9/10 | Strong fundamentals |
| Tradeoff discussion | 6/10 | Missed scaling considerations |
| Culture alignment | 8/10 | Good team fit indicators |
Send an invite. Candidates answer questions on their own time — speaking their reasoning aloud, just like a real interview. No scheduling. No pressure. You get the full evaluation asynchronously, scored and broken down, before your first call.
Cut your interview loop in half. Review the evaluation before the call — so every call is a real conversation, not a first pass.
Scores are internal by default. When you're ready, release them — per candidate. Candidates can also request their score, and you approve or decline. No surprises on either side.
You control the feedback experience. Some candidates get their scores. Some don't. You decide when — and whether — to share.
Browse engineers who've been practicing on 1on1.fyi — with real scores, real feedback, real interview history. No self-reported resumes. No gut feel. You see what they've actually been evaluated on.
One view. Every candidate. Every score. Every interview type. See how they rank across your rubric dimensions. See where your team agreed and where they diverged. Export PDF reports for hiring committees or compliance.
Data-driven hiring decisions — without the data chaos.
| Candidate | Technical | Communication | Culture | Overall |
|---|---|---|---|---|
| Alex M. | 8.5 | 7.2 | 8.0 | 7.9 |
| Jordan P. | 9.1 | 6.8 | 7.5 | 7.8 |
| Sam K. | 7.4 | 8.3 | 8.2 | 8.0 |
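In this sample scorecard, the Overall column is consistent with a simple unweighted mean of the three rubric dimensions, rounded to one decimal — a minimal sketch of that arithmetic, assuming no per-dimension weighting (the actual product may weight dimensions differently):

```python
# Hypothetical illustration: reproduce the "Overall" column above as the
# unweighted mean of the three rubric scores, rounded to one decimal place.
# Assumption: equal weighting per dimension; the real scoring may differ.
candidates = {
    "Alex M.":   (8.5, 7.2, 8.0),  # technical, communication, culture
    "Jordan P.": (9.1, 6.8, 7.5),
    "Sam K.":    (7.4, 8.3, 8.2),
}

overall = {name: round(sum(scores) / len(scores), 1)
           for name, scores in candidates.items()}
# overall == {"Alex M.": 7.9, "Jordan P.": 7.8, "Sam K.": 8.0}
```

Under this assumption, Jordan P.'s strong technical score (9.1) is pulled below Sam K.'s overall by the weaker communication score — exactly the kind of tradeoff a side-by-side rubric view is meant to surface.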
"Before 1on1.fyi, we'd have three interviewers give three different scores with no way to compare them. Now everyone scores the same rubric — and the comparison data is actually useful."
"We were spending 6+ hours per candidate on interview loops. The async format cut that in half — we review the evaluation before the call, so the call is actually a conversation, not a first pass."
"The rubric made us realize our 'culture fit' questions were giving us no signal. We redesigned the whole process. Our next 3 hires have all been strong contributors."
Start free. Grow when you're ready to hire.
For small teams getting started with structured hiring.
For growing teams that need real evaluation infrastructure.
For organizations that need scale, security, and control.
Cancel anytime. No contracts. Your candidate data is never shared.
Candidates receive a link, create a free account, and complete the interview asynchronously — on their own time, from their own laptop. They get feedback on their answers immediately. The whole thing takes 30-60 minutes. No scheduling required.
Yes. Growth and Enterprise plans let you upload your own question banks or work with our team to build company-specific rubrics. You own your questions — they're never shared with other companies.
Every answer is scored across structured rubric dimensions. Not just a number — a breakdown: where they scored well, where they lost points, and why. Think of it as AI-assisted scoring that flags what to pay attention to, with your team making the final call.
We have a built-in ATS for the interview pipeline. For Enterprise customers, we offer API access and integrations with popular ATS platforms. Contact us to discuss your specific stack.
Candidate evaluations are private by default — only your team members with access can view them. We don't sell or share candidate data. Contact us for our full security posture summary and SOC 2 status.
Not automatically. Scores are internal to your company by default — you control when, and whether, to release them to any candidate. Candidates can also request their score through the platform, and you decide whether to fulfill that request. This gives your team full control over the feedback experience.