Do you test and evaluate your E-learning Courses?

Some time back, I raised some questions about testing and evaluating an e-learning course here.

There is another interesting post on this topic here.

It would be nice if you guys could share your ideas/suggestions on testing and evaluating an e-learning course.


Whoa! A Bug – The Importance of Testing and Evaluating an E-Learning Product

I was going through a course called Plimoth Plantation. I liked everything about the course, such as:

  • The course menu design
  • The storyline
  • The riddles which popped up at the outset of each module
  • The interactivity used, especially the one you find below, where you virtually move a magnifying glass to decipher the words in an age-old manuscript.

Note: Click on the image to view a clearer version.

The design is absolutely fantastic. The course is woven around images, text and basic interactivity models. The hallmark of the course is the design and presentation. Above all, it’s the story that engages you.

The images used represent the good old days, and you virtually get transported to that period. The course has two perspectives: one of the Wampanoag people and the other that of the English Colonists. Each of them has a story about Thanksgiving.

However, when I went through the course carefully, I found a usability issue. For example, here is the Main Menu of the course:


I clicked on the English Colonists link, which takes you to the main page of that module.

The main page of the module appears as below:


I moved my mouse across the page, and there was this cue: the hand sign that appeared over two houses.

I kept clicking on the first house, hoping that it would lead me to another page. Nothing happened. Only text appeared.

As you will notice, the first house was not meant to be a clickable object. However, the cue, that is, the hand sign, is misleading. The same cue is used for both clickable and non-clickable objects. Now how is the user to know whether there is a problem with the links or not?

Let’s look at more examples. Clicking on the right house leads you to another page representing the interiors of the house, as follows:


Here again I found lots of cues that indicate clickable objects. Some lead to new pages; some do not.
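An automated check can catch this kind of mismatch between cue and behaviour. Here is a minimal Python sketch that flags elements styled with a hand cursor but with nothing to click. The HTML snippet and file names are hypothetical stand-ins for the course pages, and a real course that sets the cursor via CSS classes or scripts would need a richer check.

```python
from html.parser import HTMLParser

class CueChecker(HTMLParser):
    """Flag elements that show a hand cursor but are not real links.

    A sketch: only inline cursor styles are detected here; cursors set
    through stylesheets or scripts would escape this simple check.
    """

    def __init__(self):
        super().__init__()
        self.misleading = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        style = attrs.get("style", "")
        has_hand_cursor = "cursor:pointer" in style.replace(" ", "")
        is_clickable = (tag == "a" and "href" in attrs) or "onclick" in attrs
        if has_hand_cursor and not is_clickable:
            self.misleading.append(tag)

# Hypothetical reconstruction of the two-houses page described above
checker = CueChecker()
checker.feed('<img style="cursor:pointer" src="house1.png">'
             '<a href="interior.html"><img src="house2.png"></a>')
print(checker.misleading)  # ['img'] - a hand cursor that goes nowhere
```

A check like this, run over every page of a digitized course, would have surfaced the first-house problem before delivery.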

Apart from this usability issue there is some confusion in the navigation design too.

This course has just 6 links, which can be treated as 6 modules. Each module presents its information on just one HTML page. Only the English Colonists module covers three HTML pages.

So when you click on a link, you enter the page and then you click Home to return to the menu.

This works for all links/modules except the English Colonists link, which covers three HTML pages.

For example, in the interiors-of-the-house page, clicking on each object leads to a page that has a short quiz, as found below.


Clicking Back leads you back to the interiors of the house.


I was expecting a Back link on this page too. I was hoping to get back to the page that had the houses. But this page didn’t have a Back link. I had to click Home, which led me to the Main Menu. Since the module had just one link and no other navigation, the designer probably excluded the Back button from that page deliberately.
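The missing-Back problem can also be checked mechanically. Below is a minimal sketch, assuming a hand-built map of the course’s navigation; the page names and links are hypothetical reconstructions of what is described above. The rule is simple: every page should link directly back to its parent, with the Home link counting as “back” for top-level modules.

```python
def missing_back_links(parents, links):
    """Flag pages that have no direct link back to their parent page.

    `parents` maps each page to the page it is reached from;
    `links` maps each page to the set of pages it links out to.
    Both are hypothetical data built by hand for this sketch.
    """
    return [page for page, parent in parents.items()
            if parent not in links.get(page, set())]

# menu -> colonists -> interior (house) -> quiz
parents = {
    "colonists.html": "menu.html",
    "interior.html": "colonists.html",
    "quiz.html": "interior.html",
}
links = {
    "colonists.html": {"interior.html", "menu.html"},  # Home works as Back here
    "interior.html": {"quiz.html", "menu.html"},       # Home only; no Back to the houses
    "quiz.html": {"interior.html"},                    # Back leads to the interiors page
}
print(missing_back_links(parents, links))  # ['interior.html']
```

Running such a rule over the navigation map during testing would flag exactly the page where I got stuck.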

These issues are trivial, yet worth noting.

This is just to show that when we test any product we do not test it completely. We do not test from a user’s perspective. We do not foresee the learner’s reaction to the course.

If the course itself is so fabulous, I am sure most of us will not notice such errors.

However, as instructional designers we need to make sure that our courses are more or less bug-free.

In the IT industry, software testing is a high priority. Most IT companies have a team of testers who try to ensure that any product that leaves the company is free of bugs. Two common approaches for software applications are White Box and Black Box testing.

Black Box testing of a software application essentially means checking all of its functionality. For example, if you double-click on a tab, the corresponding window must open. A software application may have many such functions, and all of them must behave appropriately. A Black Box tester essentially checks for functionality issues.
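For an HTML-based course, even a very simple Black Box check pays off. Here is a sketch that verifies every internal link in a set of course pages points to a page that actually exists. The page names are hypothetical, the link extraction is deliberately naive, and a real test would crawl the published course from disk or over HTTP rather than an in-memory dict.

```python
import re

def broken_links(pages):
    """Black-box check: every internal href in a set of course pages
    must point to a page that exists. `pages` maps filename -> HTML.
    (A sketch; the regex below is too crude for production HTML.)
    """
    broken = []
    for name, html in pages.items():
        for target in re.findall(r'href="([^"]+)"', html):
            if target not in pages:
                broken.append((name, target))
    return broken

# Hypothetical two-page course with one dangling link
course = {
    "menu.html": '<a href="colonists.html">English Colonists</a>',
    "colonists.html": '<a href="interior.html">Enter the house</a>',
}
print(broken_links(course))  # [('colonists.html', 'interior.html')]
```

A tester running this kind of functional sweep treats the course purely as a black box: no knowledge of the authoring tool or code is needed, only the expected behaviour of each link.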

White Box testing refers to advanced testing in which the tester examines the code for issues. This is by no means superficial testing.

I am sure you guys know the difference between evaluating a product and testing a product.

In e-learning, evaluation of a course essentially signifies measuring the effectiveness of a course.

Kirkpatrick’s Model of Evaluation is quite well-known. You might also have heard of formative and summative evaluation in e-learning. Formative evaluation happens during course development; summative evaluation is done after the course is delivered to the target audience, typically a considerable amount of time after delivery.

Kirkpatrick’s model has its own shortcomings because it does not suggest any practical methods to measure the effectiveness of an e-learning course.

The reason why I am writing this post is to raise some questions with respect to testing and evaluating e-learning products.

  • How many companies systematically test their e-learning products?
  • Do they have test cases in place? Do they know what to test for?
  • Does a testing group exist which does both White Box and Black Box testing?
  • How many companies have a systematic evaluation process?
  • Do they make attempts to talk to clients and check how helpful the e-learning product is?
  • Are Instructional Designers informed of the evaluation results?

I am not very sure about the answers for these questions.

Developing an e-learning course is similar to building a complex software application. Intense testing needs to happen at every stage, from Analysis to Design to Development and Implementation. After the course is delivered, evaluation needs to be done.

In big companies, soon after storyboarding, the storyboard undergoes an Instructional Design review, where the reviewer essentially checks for the quality and overall effectiveness of the course content and presentation.

Next, the storyboard goes for an Edit review, where language correctness is checked. After these two reviews, the course is digitized.

Now, how rigorously this digitized course is tested and evaluated is a big question mark. Tight deadlines are also a major constraint on effective testing and evaluation.

Usually, Instructional Designers are expected to do the testing. The work pressure is so high that Instructional Designers most often do only a quick round of testing. So any bug in the course is discovered only after course delivery.

When graphic designers digitize the course, they simply paste the content and add visual elements. So there is a high chance that mistakes will creep in.

Coming to evaluation, does the e-learning development team evaluate the e-learning product before it goes out? I am not sure of the answer for this too.

  • After the course is developed and digitized, does a team spend time analyzing the e-learning product to check if the course is really effective and worth delivering to the client?
  • Does the team check whether the course is bug-free, in the sense that it has no language, functionality or usability issues?

Please note that I am raising these questions so that you guys can provide some factual information. Please share how testing and evaluation work in your company. Are you satisfied with the processes in place? How would you like to improve them?

Just type in your thoughts now!