T.J. Barber, Justin Pille, Kody Jackson, A.J. Ostlund
Create a five-minute onboarding training course that could be quickly customized to a wide range of job roles and locations.
Delaware North is a staffing agency that specializes in hospitality services. They staff a large variety of locations, ranging from vendors at airports to service staff at fine dining restaurants. They wanted a short training course covering four main learning objectives: Be Ready to Serve, Create a Welcoming Environment, Personalize the Experience, and Demonstrate Gratitude. Our goal was to make something that was very quick for the user to complete and customizable to each individual taking the course.
The first objective, "Being Ready," was about dress and attitude for work. Since this applied to each job role and location in the same way, we set the scene inside the user’s home and had them swipe on examples that were and weren’t appropriate. This was designed to be very simple -- during our discovery phase the client noted that almost all users got this right regardless of training. We decided to give feedback only on incorrect choices, and each incorrect example would then be re-added to the queue at a random position. The user would need to get all ten correct to move on, and we estimated this would take less than a minute.
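The re-queue mechanic can be sketched in a few lines. This is a hypothetical Python model of the drill logic, not the actual course code (the course was built in an authoring tool); the `judge` callback stands in for the user's swipe.

```python
import random

def run_swipe_drill(examples, judge):
    """Model of the 'Being Ready' swipe drill: every example must be
    answered correctly before the user can move on.  `judge(item)`
    returns the user's guess about an example; an incorrect guess puts
    the item back into the queue at a random position.  Returns the
    total number of swipes it took."""
    queue = list(examples)
    attempts = 0
    while queue:
        item = queue.pop(0)
        attempts += 1
        if judge(item) != item["appropriate"]:
            # Feedback is shown only here, on an incorrect choice,
            # and the missed item re-enters the queue at a random spot.
            queue.insert(random.randint(0, len(queue)), item)
    return attempts
```

With ten examples and no mistakes, the drill is exactly ten swipes, which is what kept it under a minute.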
You can see the flow below, as well as an example of appropriate dress. The correct and incorrect paths refer to the accuracy of the user’s response, not the appropriateness of the outfit.
After the swipe interaction, we focused on customizing the remaining interactions so they applied to the many locations our client serviced. To this end, we created six unique scenarios (listed across the top of the Excel doc below) and eight locations to choose from.
The user would choose the location that best represented where they were being hired to work (like an airport or casino), and the grid below would then string together three of the six scenarios based on that choice (for example, someone working at an airport might be at a concession stand, in seated dining, or at a retail register). These scenarios shared the same text across all locations, but the media would adjust accordingly: if someone chose airport, the retail scenario would take place in an airport.
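The grid logic amounts to a simple lookup: scenario text is keyed by scenario alone, while media is keyed by location and scenario together. Here is a minimal Python sketch with illustrative location and scenario names and media paths; the real six-by-eight pairings lived in the client-approved Excel grid.

```python
# Illustrative names only -- the real six scenarios and eight
# locations came from the client's Excel grid.
LOCATION_GRID = {
    "airport": ["concessions", "seated_dining", "retail"],
    "gaming":  ["retail", "foodservice", "premium_dining"],
    "parks_and_resorts": ["lodging", "concessions", "retail"],
}

def build_course(location):
    """String together the three scenarios mapped to a location.
    Scenario text is shared; only the media path is location-specific."""
    return [
        {"scenario": name,
         "text_id": name,                          # same copy everywhere
         "media": f"media/{location}/{name}.png"}  # swapped per location
        for name in LOCATION_GRID[location]
    ]
```

Choosing "airport" yields the same scenario text as any other location, but every background image resolves under the airport media folder, which is why the course read as 36 bespoke situations while only six scripts were ever written.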
This was so effective that, during review, our client thought we had written 36 unique situations and that QA would take weeks. We ended up creating a cheat path for them that walked through all the scenarios in one pass for review purposes.
Below, you can see some examples of different backgrounds that would plug into the scenarios depending on the locations chosen. From left to right, top to bottom: Airport - Concessions, Gaming - Retail, Patina Rest Group - Foodservice, Other - Retail, Parks and Resorts - Lodging, and Australia/NZ - Retail.
The remaining learning objectives for the course are so entwined that we decided to handle them in each scenario through conversations with a customer or guest.
Each scenario would have three interactions as part of its flow, and each interaction would correlate to one of the objectives. We wanted the experience to feel as natural as possible for the user, so we kept on-screen prompts to a minimum. The only time we interrupt the conversation is to provide feedback on the decisions the user makes.
We wanted the user to be able to look back at everything, so the feedback doesn't cover any of the interaction but is simply appended to the bottom, and the screen automatically scrolls down to it. The user could then see their choice, the state of the conversation, and any corrective feedback by scrolling up and down on their device.
During each scenario, the customer hands something to our user -- an ID, player’s club card, or Visa -- for various purposes (e.g., purchasing alcohol, getting a tab started). We didn't want to bog down the user with unnecessary prompts about these items; however, we still wanted to test whether they could remember a customer’s name and whether the customer was of age. This allowed us to mimic the real-world interaction they would have in a very natural way.
For example, if they look at an ID and then decide to serve an underage guest alcohol in Q3, the interaction corrects them only after they pass the point of no return. We didn't want a prompt outside of the conversation asking "Is the guest of age: Yes / No," as that isn't part of the conversation they would be having in the real world.
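The delayed check can be sketched as a hypothetical Python model: the ID's birthdate is captured when the card is handed over but only evaluated once the user commits to serving. The function name, dates, and the drinking age of 21 are all illustrative assumptions, not course code.

```python
from datetime import date

DRINKING_AGE = 21  # illustrative; would vary by locale in practice

def evaluate_serve_choice(id_birthdate, chose_to_serve, today):
    """Called only after the 'point of no return': the user has
    already committed to serving (or refusing) the guest.  No
    'is the guest of age?' prompt ever interrupts the conversation."""
    # Standard age calculation: subtract a year if the birthday
    # hasn't occurred yet this year.
    age = today.year - id_birthdate.year - (
        (today.month, today.day) < (id_birthdate.month, id_birthdate.day))
    if chose_to_serve and age < DRINKING_AGE:
        return "corrective_feedback"  # appended below the conversation
    return "continue"
```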
The final feedback and conclusion for each scene is a "Yelp!"-style review of the business that references the user’s decisions. We really wanted to leave an impression here, since few new staff see the impact of customer service on their work life. We wanted to capture how these interactions (in the real world) can shape how an employee's work performance is viewed. Doing good work could result in something they could reference in job reviews and future employment opportunities.
Below you can see several examples of different customers and guests, whose attitude and posture change as they react to choices, along with the corresponding item they would hand to the user depending on the scenario.
In the end, we created a training course that covered a large amount of variability in our user base but was fast enough to do on the bus to work or while listening to music.
Branching Conversation Template
T.J. Barber, Ann Iverson
Simplify the standard branching model for a conversation flow in an interaction.
This model functions as a "choose your own adventure" style interaction with content based around a conversation with another individual. The UI of an interaction like this might look like some variation of what you see below.
In a traditional branching conversation structure, each user-selected option (represented below as "alts," or Alternative Options) branches to a new question (represented below as "cons," or Conversation Segments). When covering several tiers of content (think of these as user objectives), this model grows exponentially in the amount of content that needs to be written. In the example below, you can see that covering four tiers of content yields 31 separate cons, or "branch points," and 93 total alts for the user to choose from.
While the above model is an accurate representation of a back-and-forth conversation, in testing we found that users would rarely repeat the same interaction to explore different paths. This made the return on developing all the options very low, as users would see one path and get the idea of what they needed to accomplish. We still wanted to stay away from a strictly linear progression, as it quickly becomes apparent that the programmed responses are ignoring the user’s choices. So, I proposed reusing certain alts across the same tier and building in "correction paths" that would redirect the user down an ideal path for the content. In the redesigned model below, we end up with 12 cons and 13 alts for the user to choose from.
This greatly reduced the size and cost of development for this model, but it was not a great solution beyond its initial design: it did not scale well, and the paths could become very complicated depending on the number of correction paths added throughout the flow.
After a few iterations of the above design, I developed the flow below. Each content tier is collected into a "Question and Safety Chunk" with a correct and an incorrect path. The user starts on the correct path, and if they get off topic or answer outright incorrectly, the incorrect path gives them a single chance to readjust and salvage some points. If the interaction is scored, staying on the correct path nets 2 points, and answering correctly from the incorrect path nets 1 point. The scoring distinguishes a perfect score from an ideal (passing) score, and the passing threshold would need to be tested per interaction to set an appropriate bar.
The final iteration below gave users a unique experience on each pass through the interaction, since the paths led through a varying number of questions to cover all the objective tiers. For example, a four-tiered interaction answered perfectly would be only 4 cons long, but with a few incorrect or off-topic choices, the interaction could expand to 8 cons to support the user in getting back on topic or answering all the questions appropriately. In the end, we had a model that could support any number of tiers with 3 cons and 6 alts per tier.
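The chunk flow and scoring can be modeled in a short sketch. This is a hypothetical Python rendering, assuming each tier contributes one con on the correct path and one extra con when the user detours through the safety path.

```python
def run_interaction(responses):
    """Model of the 'Question and Safety Chunk' flow.  `responses`
    holds one (first_try_ok, recovery_ok) pair per tier.  Staying on
    the correct path nets 2 points; recovering from the incorrect path
    nets 1; missing both nets 0.  Each detour adds one safety con to
    the path length."""
    score = cons_visited = 0
    for first_try_ok, recovery_ok in responses:
        cons_visited += 1          # the tier's question con
        if first_try_ok:
            score += 2
        else:
            cons_visited += 1      # the safety/correction con
            if recovery_ok:
                score += 1
    return score, cons_visited
```

A perfect four-tier pass traverses 4 cons for 8 points, while four detours stretch the path to 8 cons, which is the 4-to-8 range the model was designed around.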
This solution solved our issue with exponentially growing content and allowed for a fluid and smooth experience for the user, all while allowing for the complexity and diversity of real-life conversations.