The sales engineering competition is the heart of NSEC. Students engage in a multi-part roleplay designed to simulate a real-life sales engineering experience. While competing, students work to solve a technical problem for a potential customer described in the case study developed with help from our industry partners. Real professional sales engineers act as buyers and judges for the competition.
The competition challenges students to build rapport with the buyers, develop customer empathy and understand their true pains, discover business opportunities, identify key decision makers and uncover the decision-making process, demonstrate value for the customer, give technical presentations and demos, and more!
The competition is a multi-part roleplay. Each part consists of a meeting with one or more stakeholders from the buying company, played by judges. Competitors and judges receive meeting instructions prior to the event.
Each meeting will be given a 45-minute time slot. We expect the first five minutes to involve everyone getting into the room, making sure their microphones and video are working, and getting acquainted with each other. Feel free to get started as soon as everyone's ready.
The roleplay itself should last up to 30 minutes, but it's not necessary to push it to the last second. It's fine to end up to five minutes early.
After the roleplay, the judges will take around five minutes to provide the competitors with feedback. After that, there are about five minutes left for judges to take a quick break before the next meeting. Competitors will only have one meeting per event, so they will be able to leave as soon as the feedback is finished.
Judges and competitors may chat for a couple of minutes outside the room to clarify which judge they will be talking to and whether anyone in the room is to be ignored during the roleplay.
Timing and roleplay begin as soon as competitors enter the room. Judges and competitors should play their role until the meeting has concluded and the feedback period has started. At that point, everyone is free to drop their roles.
The feedback period will begin with a self-reflection. The judges will ask competitors to consider how they did, what they did well, and what they could improve on. Then they will share their thoughts and suggestions.
There will be a competition debrief on each day. The purpose is to allow for group feedback and questions. It's a great opportunity to discuss the case study, unexpected challenges, and creative solutions. First, the judges share their general observations. Then competitors have the opportunity to ask questions of the judges and competition organizers.
Below is a rough outline of the meeting requirements. These requirements vary year to year, so not all of them may be fully represented below. Competitors and judges receive in-depth meeting details prior to and during the competition.
The competitors’ goal for the first meeting is to build rapport and discover the customer’s business needs and pains that have motivated them to seek a solution. The competitors must figure out who else needs to be involved to move the sale forward. This meeting should not be spent talking about the product or solution.
The competitors’ goal for the second meeting is to clarify the specific requirements for the technical solution and the customer's decision-making process. They need to dig deep into the problems that the customer is trying to solve so that by the end of this meeting they can fully articulate why the customer is looking for a solution, which attributes of the solution are critical and which are nice to have, and what precise value the solution will bring.
The competitors’ goal in the third meeting is to deliver a compelling presentation to the key decision-makers. They should focus on the specific needs of the customer, show how the solution fulfills their requirements, and establish how this will help them reach their business goals. This presentation is key to convincing the decision-makers that the competitors and their solution are worth pursuing.
The case study takes competitors from the discovery stage, through technical and business pain identification, budget and decision qualification, and a business presentation. The ultimate goal is to dig deep into the customer’s challenges, match them to appropriate solutions, and deliver a compelling presentation that persuades the customer to move forward in the purchase process. The competition does not involve final cost and terms negotiations, which are typically the realm of sales representatives or account executives.
Competitors will develop interpersonal communication and technical problem-solving skills. They will learn how to question the customer and navigate through identifying, understanding, and quantifying their pain. They will exercise a consultative sales mentality and process that challenges the customer to understand their own business better before providing a solution that is tailored to the customer. Competitors will also develop their presentation skills, which will be useful in all aspects of business communication.
For each event, each judge will meet with multiple teams. During and after the meetings, the judges will be able to take notes on the performance of the competitors. After an event, the judges will rank the teams they met with and submit these rankings. Each judge determines how well the teams they met with did relative to one another. The judges are not assigning absolute scores.
We do not use absolute scores from judges as a scoring method because they are extremely prone to judge bias and put competitors who are paired with tougher judges at a distinct disadvantage compared to those paired with easier judges. It’s practically impossible to get all the judges to agree on what a 7/10 looks like compared to a 9/10. This system eliminates that issue.
The data from all judges over the course of all three events will be gathered and entered into a matrix. At the end of the competition, an optimization algorithm will be run to determine a final ranking of the teams.
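The exact aggregation method is not specified here (the details live in Designing a Better Judging System, linked below), but the general idea of turning many judges' relative rankings into one final ordering can be illustrated. The sketch below is a hypothetical example only, not the actual NSEC algorithm: it converts each judge's ordering into pairwise "team A beat team B" counts, then fits Bradley-Terry-style strength scores with a simple iterative update. The function name and parameters are illustrative assumptions.

```python
from itertools import combinations

def aggregate_rankings(rankings, n_teams, iters=200):
    """Illustrative rank aggregation: fit Bradley-Terry strengths
    from judges' orderings via a simple MM-style update.

    rankings: list of lists; each inner list is one judge's ordering
    of team indices, best first. Teams a judge never met are absent.
    Returns team indices sorted best-first.
    """
    # Pairwise win matrix: wins[i][j] = number of judges ranking i above j.
    wins = [[0] * n_teams for _ in range(n_teams)]
    for order in rankings:
        for a, b in combinations(order, 2):  # a appeared above b
            wins[a][b] += 1

    # Iteratively re-estimate each team's latent "strength" p[i].
    p = [1.0] * n_teams
    for _ in range(iters):
        new_p = []
        for i in range(n_teams):
            total_wins = sum(wins[i])
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n_teams)
                if j != i and wins[i][j] + wins[j][i] > 0
            )
            new_p.append(total_wins / denom if denom > 0 else p[i])
        norm = sum(new_p)
        p = [x / norm for x in new_p]

    # Final ranking: strongest team first.
    return sorted(range(n_teams), key=lambda i: -p[i])
```

For example, three judges submitting the orderings `[0, 1, 2]`, `[0, 2, 1]`, and `[1, 0, 2]` would produce the final ranking `[0, 1, 2]`, since team 0 wins most of its pairwise comparisons. The appeal of this family of methods is that each judge only ever compares the teams they actually saw, so no cross-judge score calibration is needed.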
The judges determine a team's ranking based on a holistic view of the meetings. The judges will not be given a list of specific actions that the competitors need to do, but rather a list of outcomes that the competitors are attempting to achieve. In other words, the competitors have a set of WHATs to accomplish, but are not constrained by any specific HOWs.
There are hundreds of techniques and behaviors that the competitors may employ to varying levels of effectiveness, and it is pointless to try to enumerate them or judge based on any given set. The experience of the customer during the sales process, and how they feel about it, will be part of the evaluation process. Judges will consider not just whether the competitors were able to reach their desired outcomes, but also how they did it.
Judging System Reception
While this system may seem arcane, it works better than numerical ratings assigned by judges to specific, predefined categories. We did not invent it solely for NSEC; it is a modification of a system developed to judge the MIT hackathon. For more information about the fundamental theory and mathematics behind this system, read Designing a Better Judging System.
This system has been used for our last three competitions and has been well received by both judges and competitors. They generally agreed that, when they looked at the final ranking, the teams they had met with performed at a level commensurate with their position in the rankings.