Risk Insider: Paula Vene Smith

The Risk Register: Dead or Alive?

By: Paula Vene Smith | July 27, 2016

Paula Vene Smith directs the Purposeful Risk Engagement Project (PREP) and is a professor at Grinnell College. Paula consults on risk in higher education, and has written Engaging Risk: A Guide for College Leaders. She can be reached at [email protected].

If your organization practices enterprise risk management (ERM), senior leaders have probably endorsed a list of top risks that demand action.

Once ERM is in place, the list undergoes review — maybe annually. After a few cycles it’s time to ask: Are we practicing risk management or list management? A risk register can either drive change, or persist as a lifeless spreadsheet.

Use these questions to test the vitality of your institutional list:

How long is your register, and how often does it change?

In my research on higher education risk management, I spoke with leaders at small to mid-sized colleges. Institutions with lengthy risk registers (50-80 items) saw little change from year to year.

Their annual review consisted of “checking in” to make sure people were working on their risks. Organizations with shorter lists (5-15 items) saw higher turnover: they acknowledged progress, shifted gears and set new goals.

Have entire areas been overlooked?

For the initial risk assessment at Grinnell College, we brought in consultants. They interviewed senior leaders and staff to identify risks.

Remarkably, the “top seven” list they produced did not include risks related to student life. Later, it emerged that the person then serving as vice president of student affairs had projected such confidence that he convinced the consultants there were no risks in his area he couldn’t handle.

Reviewing the register internally, we agreed that student risks had to be recognized.

Does one item encompass several distinct risks?

Risk identification often starts with concerns about a broad area. When, for example, we resolved to add something about students, the initial risk description was wordy and cumbersome. By the next cycle, we had recognized multiple distinct risks: substance abuse, mental health services, Title IX processes and student retention. Separating these items helped us develop targeted mitigation strategies for each.

What happens to a risk that drops off the list?

Some of the risk leaders I interviewed found no need to drop risks from the register. They pointed out that no significant risk is ever completely mitigated. Nonetheless, we refresh our “top 10” periodically. Once significant mitigation is in place, the item can move off — but we’ll continue to monitor progress. If the risk remains well managed, we check in more briefly and less frequently.

Can risks come off the list too soon?

In the first year, we worked hard to develop a policy to protect minors. It felt great to remove that item from the register. But we still needed to conduct training, step up background checks, and assist program directors.

Should we have kept “minors on campus” on the register for another year? The time invested was not so different from that spent on top-priority items. Still, no harm was done by renaming this work as “monitoring.”

The purpose of a risk register is to drive action. A short, dynamic list serves this purpose better than a compendium that makes people feel they’re chipping away at something so massive that no amount of effort can make a difference.