Starting today, our AI-powered “Instant Feedback” feature is available on hundreds of coding challenges across our most popular intro courses and the Frontend Developer Career Path. This will help you learn faster, be more confident in your skills, and boost your motivation.

How it works

On some scrim timelines, you will see a purple ghost icon. This marks a coding challenge that includes “Instant Feedback.” After you submit your solution to the challenge, a Large Language Model (LLM) analyzes your code and lets you know whether you’ve passed the test.

If your solution is incorrect, you’ll be nudged in the right direction. But you won’t be given the final answer. Figuring it out yourself is an important part of learning, and we wouldn’t want to cheat you out of that!

Here is a quick video walk-through of the UX:

The main goal of this feature is to ensure that you actually grasp the learning objectives of our courses, and that you do so without wasting time. In other words, we want you to learn more in less time.

Secondly, there are always multiple ways to achieve the same goal when you code. Instant Feedback aims to give you confidence in your solution even if it is different from the teacher’s. As you progress faster and gain more confidence in your own skills, we think you will have more fun and stay motivated for longer.

Tackling Bloom’s 2 Sigma Problem

The background for this feature is an educational phenomenon called “Bloom’s 2 Sigma Problem”. Bloom found that students who get one-on-one tutoring while following a so-called Mastery Learning track outperform conventionally taught students by two standard deviations, meaning they do better than 98% of them. However, the combination of personalised tutoring and mastery learning is usually far too expensive to offer all students, hence the “problem”.

One-on-one tutoring is self-explanatory, but what exactly is mastery learning? It refers to a setup where students always have to prove that they master “learning objective A” before they are allowed to move on to “learning objective B”, typically through a skill test.

In our courses at Scrimba, we have always relied on mastery learning techniques. We carefully spread out the learning objectives, and never introduce, for example, function parameters until you’ve solved a coding challenge about functions in general. One of our golden rules is that “a coding concept hasn’t been taught unless the student has written the code themselves”.

However, we have always relied on students self-assessing whether or not they have passed the test by comparing their own solution with the teacher’s. The “Instant Feedback” feature changes this.

We are finally able to give you a clear answer as to whether you’ve passed the test and are ready to move forward in the course.

In addition to this, you will get a little bit of one-on-one tutoring through the feedback if your solution isn’t correct. The LLM has been tuned not to give away the final answer, but rather to push you in the right direction and help you figure it out on your own.

The response from our two weeks of beta testing has been unanimously positive, as you can see from the image above. While we won’t claim we’ve completely solved the “Bloom’s 2 Sigma Problem”, we feel certain this will give you a better learning experience.

LLM quirks to be aware of

While using this feature, it’s important to remember that LLMs aren’t flawless. They sometimes make mistakes or “hallucinate.” We’re doing extensive testing and prompt engineering to make “Instant Feedback” as reliable as possible, but on rare occasions, it might give you a false negative (incorrectly marking a correct solution as wrong) or a false positive (incorrectly marking an incorrect solution as right).

This is particularly annoying if you know your solution is correct, but the AI says otherwise. For these situations, we have implemented a “Dispute” feature inside the modal. Once you click on this, your dispute will be sent to the Scrimba team, and your challenge will be marked as “Completed”.

We will regularly go through the disputes and make the necessary updates to the challenges to prevent this situation from happening again.

Behind The Scenes: Constructing the Prompt

Let’s also have a look at how this feature works under the hood. In short, once you click “Check code”, we send your code to an LLM along with the necessary context, which includes the following:

  • Initial Code (snapshot of the starting state)
  • The Teacher’s Solution
  • The Challenge Description (written specifically for the LLM)
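To make this concrete, here is a simplified sketch of how that context might be assembled into a single prompt. All names here are illustrative, not our actual implementation:

```js
// Hypothetical sketch of assembling the context for the LLM request.
// buildPrompt, challenge.initialCode, etc. are illustrative names only.
function buildPrompt(challenge, studentCode) {
  return [
    "## Initial Code (snapshot of the starting state)",
    challenge.initialCode,
    "## Teacher's Solution",
    challenge.teacherSolution,
    "## Challenge Description (criteria the student must fulfil)",
    challenge.description,
    "## Student's Submission",
    studentCode,
  ].join("\n\n");
}
```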

The LLM is instructed to place greater emphasis on checking whether you’ve fulfilled the criteria in the Challenge Description than on checking whether you match the Teacher’s Solution. This is important, as there are many ways to solve coding challenges, and unless stated otherwise, you’re free to use the approach you prefer.

We’ve also spent a lot of time constructing the System Message, which is the set of underlying instructions the AI is to follow.
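To give you a flavor, a system message along these lines would capture the behavior described in this post. This is an illustrative paraphrase, not the actual prompt we use in production:

```js
// Illustrative paraphrase of a system message for this task.
// Not the actual production prompt.
const systemMessage = `
You are a coding tutor judging a student's solution to a challenge.
Judge the submission primarily against the Challenge Description,
not the Teacher's Solution; alternative approaches are valid unless
the description says otherwise. If the submission is incorrect,
nudge the student in the right direction without revealing the answer.
Respond with a JSON object: {"student_is_correct": boolean, "explanation": string}.
`;
```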

The LLM finally returns a JSON object that contains two keys: a “student_is_correct” boolean value and an “explanation” string value.
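For example, a response to an incorrect submission might look like this (the explanation text is made up for illustration):

```json
{
  "student_is_correct": false,
  "explanation": "Close! Your function builds the greeting correctly, but the challenge asks you to return it rather than log it. Look at what your function gives back to its caller."
}
```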

How We Evaluate Models

We have also implemented a thorough test suite for roughly 300 challenges. This consists of correct and incorrect solutions that the LLMs are to label accordingly when we evaluate them against each other. This enables us to instantly try out new models and see how accurate they are.
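As a rough sketch, running such an eval could look like the following. The test-case shape and the gradeSolution helper are assumptions for illustration, not our actual tooling:

```js
// Hypothetical eval loop: send every labeled solution to a model
// and measure how often its verdict matches the known label.
// gradeSolution is a stand-in for the call that builds the prompt,
// queries the given model, and parses the returned JSON verdict.
async function evaluateModel(model, testCases) {
  let matches = 0;
  for (const { challenge, solution, isCorrect } of testCases) {
    const verdict = await gradeSolution(model, challenge, solution);
    if (verdict.student_is_correct === isCorrect) matches++;
  }
  return matches / testCases.length; // accuracy across the suite
}
```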

I have personally been surprised by how much some LLMs struggle with being a “judge” and following specific criteria for evaluating students’ code. GPT-4o is the only model we’ve found capable of running our entire test suite flawlessly. And even that has required us to write Challenge Descriptions that are painstakingly specific, which I’m certain we wouldn’t have had to do if it were a human teacher on the other end.

Another example: GPT-4o-mini isn’t able to reject solutions that use traditional quotes ("") instead of template literals (``), even when the Challenge Description explicitly requires them.
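To illustrate with a concrete case: if the Challenge Description explicitly requires a template literal, the first solution below should be rejected and the second accepted, yet GPT-4o-mini tends to pass both:

```js
const name = "Ada";

// Should be rejected: uses traditional quotes and string concatenation
const greeting1 = "Hello, " + name + "!";

// Should be accepted: uses a template literal, as the description requires
const greeting2 = `Hello, ${name}!`;
```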

Check out the video below if you’re interested in seeing our evals in action:

Which courses have "Instant Feedback"?

So far, we have implemented this feature across our flagship learning pathway, the Frontend Developer Career Path. It has also been included in some of our most popular stand-alone courses.

To take full advantage of Instant Feedback, you'll need to be a Pro member. However, non-Pro users get 10 free challenges when they sign up, meaning you can try it out before deciding whether or not to upgrade.

Finally, I want to point out that this is just our very first step towards integrating AI at Scrimba. We are going to continue working on this feature along with other exciting ways of improving how you learn and build.

Stay tuned for more updates!