
How We Built Adaptive Learning Into a Career Platform: Spaced Retrieval Practice in the Learn Engine

The Retention Problem

Most career learning tools are glorified video playlists. A platform gives you 40 hours of content, you watch it over a few weeks, and you feel productive the entire time. Then someone asks you a technical question in an interview three weeks later and you draw a blank.

This is not a motivation problem. The learner did the work. It is a scheduling problem—specifically, a retrieval scheduling problem.

The data on this is not ambiguous. Karpicke & Roediger (2008) showed that students who studied material once and then practiced retrieving it three times significantly outperformed, on a delayed test, students who studied the same material four times. The study group felt more confident. The retrieval group actually remembered more. This asymmetry between perceived learning and actual retention is well-documented and almost completely ignored by career learning platforms.

When we started building the Learn Engine, we looked at how working professionals actually prepare for career transitions. The dominant pattern was: binge a course, forget most of it, panic before an interview, re-study the same material. That cycle wastes time and erodes confidence. We wanted to build a system that scheduled learning based on how human memory actually works, not based on how long someone sits in front of a screen.

The Research Foundation

Spaced repetition is not new. The foundational research goes back over a century, and the core findings have been replicated extensively.

Ebbinghaus (1885). Hermann Ebbinghaus published the first quantitative study of memory decay. Using himself as a subject, he memorized lists of nonsense syllables and measured how quickly he forgot them. The resulting forgetting curve showed that memory decays exponentially—roughly 50% of newly learned information is lost within the first hour, and about 70% within 24 hours, unless the material is actively reviewed. This work established that forgetting is predictable and that the timing of review matters more than the number of exposures.
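The forgetting curve can be sketched as simple exponential decay. This is a stylized model, not Ebbinghaus's exact fit: the `stability` parameter is an illustrative stand-in for memory strength, calibrated here so that half of new material is gone within an hour, and so that a review (which raises stability) flattens the curve.

```python
import math

def retention(hours: float, stability: float) -> float:
    """Stylized Ebbinghaus curve: fraction retained after `hours` without
    review. `stability` rises with each successful review, flattening
    the curve. Illustrative only; real forgetting curves are messier."""
    return math.exp(-hours / stability)

fresh = 1.0 / math.log(2)      # calibrated so half is gone after 1 hour
reviewed = 24.0 / math.log(2)  # after a review: half-life of a full day
print(round(retention(1, fresh), 2))      # 0.5
print(round(retention(24, reviewed), 2))  # 0.5
```

The point of the second line is the one that matters for scheduling: each successful review extends the half-life, so the same 50% threshold arrives a day later instead of an hour later.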

Pimsleur (1967). Paul Pimsleur proposed graduated-interval recall for language learning. His key insight was that review intervals should increase geometrically: first at 5 seconds, then 25 seconds, then 2 minutes, then 10 minutes, and so on. Each successful recall at a longer interval strengthens the memory trace and justifies an even longer gap before the next review. Pimsleur demonstrated that this approach produced better long-term retention than fixed-interval or massed practice schedules.
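Pimsleur's published sequence is a geometric series, which makes it a one-liner to generate. The base and growth factor below reproduce his numbers (5 s, 25 s, ~2 min, ~10 min, and so on); treating them as parameters is our framing, not his.

```python
def pimsleur_intervals(base: float = 5.0, factor: float = 5.0, n: int = 6):
    """Graduated-interval recall: each interval grows geometrically.
    base=5 seconds and factor=5 reproduce Pimsleur's sequence."""
    return [base * factor ** i for i in range(n)]

print(pimsleur_intervals())
# [5.0, 25.0, 125.0, 625.0, 3125.0, 15625.0]
```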

Leitner (1972). Sebastian Leitner developed a practical system for implementing spaced repetition with physical flashcards. Cards moved between numbered boxes based on whether the learner answered correctly. A correct answer moved the card to a higher-numbered box (reviewed less frequently). An incorrect answer sent it back to box one (reviewed immediately). The Leitner system translated the abstract principle of expanding intervals into a concrete protocol that anyone could follow without a computer.
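The Leitner protocol is small enough to state as code. This sketch uses five boxes, with higher-numbered boxes reviewed less often, exactly as described above; the number of boxes is a convention, not part of Leitner's definition.

```python
def leitner_update(box: int, correct: bool, n_boxes: int = 5) -> int:
    """Leitner protocol: promote one box on a correct answer,
    demote all the way to box 1 on a miss."""
    return min(box + 1, n_boxes) if correct else 1

# A card climbs with correct answers and falls back on a single mistake:
box = 1
for answer in [True, True, True, False, True]:
    box = leitner_update(box, answer)
print(box)  # 2: the miss sent it to box 1, then one correct answer
```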

The testing effect. Subsequent research by Roediger, Karpicke, and others established that the act of retrieval itself—not re-reading, not re-watching—is what strengthens memory. Testing at the point of near-forgetting produces the strongest encoding effect. This is sometimes called desirable difficulty: making the retrieval effortful (but still achievable) produces more durable learning than easy review (Bjork, 1994).

These findings are well-established in cognitive psychology. They are used in language learning tools (Pimsleur, Anki, Duolingo) and medical education (Anki is standard among medical students). But almost no career learning platform applies them systematically. Most platforms still measure engagement as time-on-video and completion as the number of modules finished.

Our Implementation

The Learn Engine in TailorMeSwiftly uses spaced retrieval practice as its core learning mechanism. Here is how the system works.

Gap identification

The learner declares their current skill set and their target role. The system identifies the gap between the two using structured skill taxonomies, drawing on O*NET occupational data and job market signals to determine which specific competencies the target role requires that the learner has not yet demonstrated.

This step is critical. Unlike a general-purpose flashcard tool, the Learn Engine does not present a static deck of content. It builds a personalized curriculum by computing the difference between where the learner is and where they need to be. Skills the learner already has are skipped entirely.
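At its core, gap identification is a set difference over a skill taxonomy. The sketch below is a minimal version of that computation; the skill names are illustrative placeholders, and the production system resolves them against structured occupational data (O*NET and job market signals) rather than flat strings.

```python
def skill_gap(current: set[str], target: set[str]) -> set[str]:
    """Competencies the target role requires that the learner has not
    yet demonstrated. Skills already held are skipped entirely."""
    return target - current

# Hypothetical profiles for illustration:
learner = {"excel", "sql"}
data_analyst = {"sql", "statistics", "data-visualization", "python"}
print(sorted(skill_gap(learner, data_analyst)))
# ['data-visualization', 'python', 'statistics']
```

Note that "sql" drops out of the curriculum automatically: the learner never reviews skills they already have.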

Curriculum generation

Once skill gaps are identified, the system generates a structured curriculum organized by competency area. Each concept within the curriculum has an associated mastery level on a 1–5 scale:

  • Level 1 — Initial exposure. The learner has seen the concept but has not been tested on it.
  • Level 2 — First successful retrieval. The learner recalled the concept correctly once.
  • Level 3 — Repeated retrieval at short intervals (1–3 days). Demonstrates early retention.
  • Level 4 — Retrieval at medium intervals (5–10 days). Demonstrates stable retention.
  • Level 5 — Retrieval at long intervals (14+ days). The concept is considered durably learned.

Mastery levels advance only through successful retrieval testing. Watching a video or reading an explanation does not increase the mastery level. The learner must demonstrate recall. This is the single most important design decision in the system: progression is mastery-based, not time-based.
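The mastery-based progression rule can be sketched as follows. The interval thresholds mirror the 1–5 scale above (1–3 days for level 3, 5–10 for level 4, 14+ for level 5), but the exact numbers and the demotion-on-miss behavior are assumptions for illustration, not the production tuning.

```python
from dataclasses import dataclass

# Minimum days since the last review required to reach each level;
# values mirror the 1-5 scale in the text but are illustrative.
LEVEL_MIN_GAP_DAYS = {2: 0, 3: 1, 4: 5, 5: 14}

@dataclass
class Concept:
    mastery: int = 1  # level 1: seen, never successfully retrieved

def record_retrieval(c: Concept, correct: bool, days_since_review: float) -> None:
    """Advance mastery only on successful retrieval at a qualifying
    interval; passive review never advances it. Demotion on a miss
    is an assumed policy, not stated in the text."""
    if not correct:
        c.mastery = max(1, c.mastery - 1)
        return
    nxt = c.mastery + 1
    if nxt <= 5 and days_since_review >= LEVEL_MIN_GAP_DAYS[nxt]:
        c.mastery = nxt

c = Concept()
record_retrieval(c, correct=True, days_since_review=0)  # level 2: first recall
record_retrieval(c, correct=True, days_since_review=2)  # level 3: short gap
record_retrieval(c, correct=True, days_since_review=2)  # too soon for level 4
print(c.mastery)  # 3
```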

Adaptive review scheduling

Review intervals adapt to individual retention curves. When a learner answers correctly, the interval before the next review increases. When they answer incorrectly, the interval resets to a shorter value.

The schedule is not uniform across concepts. Concepts the learner struggles with get tested more frequently. Concepts they recall easily get pushed further out. Over time, the system builds a per-learner, per-concept model of retention that concentrates study time where it is needed most.

The practical effect is that two learners targeting the same role will have very different review schedules after the first week. One might spend more time on statistical concepts while another spends more time on SQL syntax. The schedule reflects the learner's actual retention patterns, not a predetermined syllabus timeline.
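The divergence between learners falls out of a very simple update rule. The multiplicative growth factor and reset value below are illustrative constants; the production scheduler fits them per learner and per concept.

```python
def next_interval(current_days: float, correct: bool,
                  growth: float = 2.0, reset: float = 1.0) -> float:
    """Expand the gap after a correct recall; shrink it after a miss.
    growth and reset are illustrative, not production-tuned values."""
    return current_days * growth if correct else reset

# Two learners on the same concept diverge after one miss:
strong, weak = 1.0, 1.0
for ok_strong, ok_weak in [(True, True), (True, False), (True, True)]:
    strong = next_interval(strong, ok_strong)
    weak = next_interval(weak, ok_weak)
print(strong, weak)  # 8.0 2.0
```

After three reviews, the strong learner's next review is a week out while the weaker one is still being tested every couple of days, which is exactly the concentration of study time described above.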

What Makes This Different From Anki/Duolingo

The spaced repetition algorithm itself is not novel. SM-2 (SuperMemo), Anki's modified SM-2, and Duolingo's half-life regression model are well-documented. The contribution here is in applying spaced retrieval practice to a specific domain—career skill acquisition—in ways that general-purpose tools cannot.
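For reference, the Duolingo model mentioned above predicts recall probability as p = 2^(−Δ/h), where Δ is time since last practice and h is an estimated half-life (Settles & Meeder, 2016). The prediction itself is a one-liner; in the real system, h is learned by regression over practice-history features, which this sketch does not attempt.

```python
def recall_probability(delta_days: float, half_life_days: float) -> float:
    """Half-life regression recall model: p = 2**(-delta/h).
    Here h is given directly; the real model estimates it from features."""
    return 2.0 ** (-delta_days / half_life_days)

print(recall_probability(7, 7))  # 0.5: at exactly one half-life, 50% recall
print(recall_probability(1, 7))  # well above 0.5 shortly after practice
```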

Anki is a general flashcard tool. The user creates their own cards, defines the content, and manages the deck. It is powerful and flexible, but it places the entire burden of content creation and curriculum design on the learner. For career transitions, this means the learner has to already know what they need to learn before they can build the cards to learn it. That is a significant barrier, especially for career changers entering an unfamiliar field.

Duolingo applies spaced repetition to language learning with pre-built content. It is closer to our approach in that the curriculum is structured, but it is domain-specific to languages and uses a fixed progression for all learners targeting the same language.

The Learn Engine differs in three ways:

  • Content is generated from the gap analysis, not from a static deck. The curriculum is different for every learner because it is computed from the difference between their current skills and their target role. Two people targeting "data analyst" will get different curricula if one already knows SQL and the other already knows statistics.
  • The curriculum adapts the content, not just the schedule. In Anki, the algorithm adapts when you see a card. In the Learn Engine, the system also adapts what you study based on how your skill profile evolves. As you master foundational concepts, higher-order concepts that depend on them become available.
  • The target is occupational competency, not general knowledge. The system is anchored to structured occupational taxonomies. Mastery levels map to demonstrable skills that appear in job descriptions and interview rubrics, not to abstract knowledge categories.
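The second difference, unlocking higher-order concepts as their foundations are mastered, amounts to a prerequisite graph check. The concept names and dependency edges below are hypothetical, chosen only to make the mechanism concrete.

```python
# Hypothetical prerequisite graph; concept names are illustrative.
PREREQS = {
    "select-basics": set(),
    "joins": {"select-basics"},
    "aggregation": {"select-basics"},
    "window-functions": {"joins", "aggregation"},
}

def available(mastered: set[str]) -> set[str]:
    """Concepts whose prerequisites are all mastered
    and that are not themselves mastered yet."""
    return {c for c, pre in PREREQS.items()
            if pre <= mastered and c not in mastered}

print(sorted(available({"select-basics"})))
# ['aggregation', 'joins']
print(sorted(available({"select-basics", "joins", "aggregation"})))
# ['window-functions']
```

As the learner's skill profile evolves, the frontier of available concepts moves with it, which is the sense in which the curriculum adapts the content and not just the schedule.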

Open Questions

Several research directions remain active.

Adaptive difficulty calibration. The current system uses a binary correct/incorrect signal to adjust intervals. A more granular difficulty model—one that accounts for response latency, partial recall, and confidence level—could produce more precise scheduling. The challenge is doing this without adding friction to the learner's experience. We are investigating whether implicit signals (time-to-answer, revision patterns) can substitute for explicit confidence ratings.

Cross-engine intelligence. TailorMeSwiftly has multiple engines: the Resume Engine, the Interview Prep Engine, and the Learn Engine. Currently, these systems share a user profile but operate somewhat independently. The natural next step is bidirectional information flow: skill gaps identified by the Learn Engine should inform keyword strategy in the Resume Engine, and interview performance data should feed back into the Learn Engine's priority queue. The technical challenge is building a shared competency model that all engines can read from and write to without creating circular dependencies.

Measuring actual interview performance. The hardest open question is connecting learning outcomes to employment outcomes. We can measure mastery levels, retention curves, and study consistency within the platform. What we cannot yet measure well is whether higher mastery levels in the Learn Engine correlate with better interview performance and faster time-to-hire. This requires longitudinal data and a study design that controls for confounding variables (job market conditions, resume quality, network effects). We are working on this, but honest measurement of employment outcomes is methodologically difficult and we do not want to overstate results.

Why This Matters for Workforce Development

Career changers and first-generation professionals do not have the luxury of learning the same material twice. They are often studying while working, with limited time and no professional network to fill knowledge gaps informally. For these learners, retention efficiency is not an optimization—it is a constraint.

If a career changer spends 60 hours studying data analytics and retains 15% of it after a month, they have to spend another 40 hours re-studying before an interview. A spaced retrieval system that produces 70–80% retention over the same period gives them back those 40 hours. That is the difference between a viable career transition and one that stalls out from exhaustion.

The workforce development industry has largely adopted the MOOC model: record content, put it online, track completion rates. Completion is treated as a proxy for learning. It is not. Completion means someone watched the videos. Retention means they can actually use the knowledge under pressure—in an interview, on day one of a new job, when a manager asks a technical question.

The cognitive science on this is settled. Spaced retrieval practice produces substantially better long-term retention than massed study or passive review. The question is not whether it works but why so few career learning tools use it. Our hypothesis is that it is a product design problem: spaced repetition feels slower than binging content, even though it produces better outcomes. Learners feel less productive because they are reviewing old material instead of advancing through new modules. Building a system that uses evidence-based scheduling while keeping learners motivated despite the perceived slowness of review is a design challenge we are actively working on.

We built the Learn Engine because we believe career learning tools should be held to the same evidence standard as educational interventions. If a method produces better retention in controlled studies, career platforms should use it. If a metric (like video completion) does not correlate with actual learning, career platforms should stop optimizing for it.

References

  • Ebbinghaus, H. (1885). Memory: A Contribution to Experimental Psychology. Teachers College, Columbia University (1913 English translation).
  • Pimsleur, P. (1967). A memory schedule. The Modern Language Journal, 51(2), 73–75.
  • Leitner, S. (1972). So lernt man lernen: Der Weg zum Erfolg [Learning to learn: The road to success]. Freiburg: Herder.
  • Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). MIT Press.
  • Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968.
  • Wozniak, P. A. (1990). Optimization of learning: Application of the SuperMemo method. University of Technology in Poznan.
  • Settles, B., & Meeder, B. (2016). A trainable spaced repetition model for language learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1848–1858.
