What’s Wrong with Benchmarking?

March 10, 2014 (PLANSPONSOR.com) - The focus on comparison rather than outcomes is misguided, and comparing one retirement plan to another can be pointless, says Josh Itzoe, partner and managing director of Greenspring Wealth Management.

First-generation benchmarking solutions have a major weakness, according to Itzoe, who manages the institutional client group of the Towson, Maryland-based firm. “These tools are primarily comparison-focused rather than improvement-focused,” he tells PLANSPONSOR. “As an industry, I think we’ve misled plan sponsors into thinking that the only thing that really matters is whether their plan compares favorably to other plans.”

The problem with this logic is the “curse of the comparison mindset,” Itzoe says. “Imagine if I benchmark Plan A, which is dreadful, against Plan B, which is really dreadful. The fiduciaries of Plan A will probably feel pretty happy with themselves, because in the land of dreadfulness their plan reigns supreme.”

A favorable comparison can make plan sponsors feel good, Itzoe says, but he asks whether benchmarking in the traditional sense—that is, the simple comparison of one plan to other plans—really makes any difference for the company and for the lives of its workers.  

The Employee Retirement Income Security Act (ERISA) says fees need to be reasonable, Itzoe points out, but it says nothing about benchmarking. “Comparisons can lull us into a false sense of security, depending on whom or what we are measuring ourselves against,” he says. In his view, most existing tools are too complex or overwhelming, or lack the specificity plan sponsors need to turn results into action.

Both companies and employees have goals and needs surrounding a corporate retirement plan, according to Itzoe. Some are complementary, and some are not. “We’ve found that most companies care about managing risk, and most participants worry about being able to retire successfully,” he notes.

The biggest risk factor plan sponsors face is lack of a prudent process, Itzoe contends. “We think it makes sense for companies to determine whether they’ve implemented industry-standard best practices: how they structure and equip their committee, whether they follow a clearly defined and consistent investment monitoring process, whether the plan’s features and investment options are designed so it is simple for employees to save enough and invest appropriately, and whether they not only understand fees but contain those costs over time,” he says.

Four Key Variables

Itzoe feels just four variables truly impact the retirement equation for participants: what goes into the plan (i.e., total contributions), what comes out (fees, distributions, loans), rate of return, and time.
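
As a rough illustration only (this is a stylized sketch of ours, not Itzoe’s own model), the interaction of those four variables can be reduced to a simple accumulation loop; the annual compounding, constant contribution and flat fee rate are simplifying assumptions:

```python
# Stylized sketch of the four retirement variables: contributions in,
# fees out, rate of return, and time. Annual compounding, a constant
# contribution and a flat fee rate are simplifying assumptions.

def projected_balance(annual_contribution: float,
                      annual_fee_rate: float,
                      gross_return: float,
                      years: int) -> float:
    """Project an ending balance from the four variables."""
    balance = 0.0
    for _ in range(years):
        balance += annual_contribution   # what goes into the plan
        balance *= 1 + gross_return      # rate of return
        balance *= 1 - annual_fee_rate   # what comes out (fees)
    return balance

# Time and fees dominate: the same saver over 30 years at two fee levels.
for fee in (0.005, 0.015):
    print(f"{fee:.1%} fee: ${projected_balance(10_000, fee, 0.07, 30):,.0f}")
```

Even this toy model shows why fee containment matters: a one-percentage-point difference in annual fees compounds into a materially smaller ending balance over 30 years.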

For participants, it makes sense to focus on whether proven best practices that drive improvement in those areas have been implemented. Itzoe says the key drivers of plan improvement and success are: aggressive auto enrollment and auto escalation, total savings rates, thoughtfully designed investment menus, asset allocation and utilization, and structuring fees so they trend downward over time for participants.

Companies need to honestly ask themselves if they are serious about having a corporate retirement program that makes a difference, Itzoe says. “Benchmarking by itself is pretty meaningless because it doesn’t really define the destination.” Plan sponsors need to ask what an effective plan would look like, and then figure out what needs to be measured to see if they have achieved the goal.

Itzoe’s firm created the (k)larity Quotient to serve as a framework that assesses a plan and offers a checklist to improve it. The assessment and game plan are free, and plan sponsors can receive results within a week to 10 days. Plans are measured in four areas: fiduciary responsibility; plan design and performance; fees and compensation; and employee engagement. The optimal score is 100.

“I don’t consider the (k)larity Quotient to be a benchmarking tool in the traditional sense,” Itzoe says. The tool includes common comparison capabilities (number of participants, plan assets, industry type, provider and so on), but it is really a decision-making framework that helps plan fiduciaries focus on controllable factors, showing precisely what is working in a plan and laying out a game plan to fix what is not.

In the category of fiduciary responsibility, each indicator determines whether the retirement program has implemented industry-leading investment and fiduciary best practices to minimize both corporate and personal liability for plan fiduciaries. The firm assesses whether a formal committee is in place, if fiduciary training has been provided, if there is an investment policy statement (IPS), if there is a reporting process in place that actually aligns with what is specified in the IPS, and whether there is evidence that meeting minutes are consistently taken.

Fees and Compensation

For fees and compensation, a plan is scored on whether it delivers economic value to both participants and plan sponsors by aligning corporate goals and initiatives with competitive pricing and access to best-in-class investments from top-tier vendors. Greenspring Wealth Management looks at the percentage of index funds in the plan relative to the overall menu, whether there are any conflicts of interest (such as uneven compensation or proprietary requirements), the weighted average cost of the plan compared with specific thresholds (lower being better), how fees are structured (fixed vs. asset-based), the source of fees (plan sponsor vs. participants) and the overall process for ensuring that fees are reasonable.
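
For illustration, the “weighted average cost” criterion is simply an asset-weighted expense ratio; the fund lineup and figures below are made up for the example:

```python
# Hypothetical menu: (assets invested in the fund, expense ratio).
menu = [
    (4_000_000, 0.0004),  # broad index fund
    (2_500_000, 0.0065),  # actively managed fund
    (1_500_000, 0.0045),  # target-date fund
]

total_assets = sum(assets for assets, _ in menu)
weighted_cost = sum(assets * er for assets, er in menu) / total_assets
print(f"Weighted average cost: {weighted_cost:.2%}")  # lower is better
```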

When scoring plan design and performance, Greenspring determines if the plan is designed, operated and consistently measured in a way that drives successful outcomes for participants with less administrative burden for plan sponsors. The firm borrows from Shlomo Benartzi’s 90-10-90 rule regarding participation and total savings rates: a 90% participation rate, average deferrals of 10%, and 90% of participants using professional investment advice or a professionally managed fund, such as a managed account or target-date fund. It looks at whether the plan leverages automatic features (auto enrollment, auto escalation), whether the default deferral allows employees to receive the maximum employer contribution, and the plan’s total average savings rate, including employee deferrals and employer contributions.
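
A plan’s standing against Benartzi’s targets reduces to three threshold checks; a minimal sketch, with the function name and inputs our own:

```python
def meets_90_10_90(participation_rate: float,
                   avg_deferral_rate: float,
                   professionally_managed_share: float) -> bool:
    """Check Benartzi's 90-10-90 targets: 90% participation, 10% average
    deferrals, and 90% of participants in professionally managed options."""
    return (participation_rate >= 0.90
            and avg_deferral_rate >= 0.10
            and professionally_managed_share >= 0.90)

print(meets_90_10_90(0.92, 0.08, 0.95))  # False: deferrals fall short
```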

To gauge employee engagement, the (k)larity Quotient determines whether the plan provides the resources participants need to move the needle to a more financially sound retirement by combining unbiased personal guidance along with smarter, simpler and more efficient investment strategies. The plan should be simple for participants to use, and Greenspring evaluates the quality, structure and size of the core investment menu, whether participants have access to fiduciary advice and the percentage of assets in diversified options.

“We tell plan sponsors not to be too concerned with how they compare to others, even if they compare very favorably,” Itzoe says.  “Their focus should be on how they compare to the optimal (k)Q of 100, why they are falling short in each area and what specific actions they need to take to drive improvement.”

A High Bar

The bar is high, according to Itzoe, and the average score for Greenspring clients is 73, with a range from 55 to 90. He hopes that within two to three years the average (k)Q score for Greenspring clients will be 90-plus. “I think if we do that we will have made serious progress and impacted both our plan sponsors and their people in a very substantial way,” he says. “One of our mantras is measure, compare, improve, with a heavy focus on the measure and improve aspects.”

Each of the four plan dimensions is equally weighted, worth a possible 25 points, and each indicator within a dimension is scored pass/fail. Itzoe says it is simple and clear for plan sponsors to see how they’re doing compared with the recommended best practices. A section called Your Customized Game Plan provides specific recommendations to pass any failed section, the impact each action would have on the plan’s overall score, and evidence-based rationale for making the change from sources such as Morningstar, Harvard Business Review and the Journal of Public Economics.
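
Putting the pieces together, the scoring described above can be modeled as four equally weighted dimensions of pass/fail indicators. The sketch below is ours, and the indicator names are hypothetical placeholders; the article does not publish Greenspring’s full checklist:

```python
# Hypothetical indicators per dimension, drawn loosely from the criteria
# described in the article; Greenspring's actual checklist is not public.
DIMENSIONS = {
    "fiduciary responsibility": [
        "formal committee", "fiduciary training",
        "investment policy statement", "IPS-aligned reporting", "meeting minutes",
    ],
    "plan design and performance": [
        "auto enrollment", "auto escalation",
        "default captures full match", "adequate total savings rate",
    ],
    "fees and compensation": [
        "index fund availability", "no conflicts of interest",
        "weighted cost below threshold", "fee review process",
    ],
    "employee engagement": [
        "simple core menu", "access to fiduciary advice",
        "assets in diversified options",
    ],
}

def score_plan(results: dict[str, dict[str, bool]]) -> float:
    """Score out of 100: each dimension is worth 25 points, split evenly
    across its pass/fail indicators."""
    total = 0.0
    for dimension, indicators in DIMENSIONS.items():
        passed = sum(results.get(dimension, {}).get(i, False) for i in indicators)
        total += 25 * passed / len(indicators)
    return total

# A plan passing every fiduciary indicator but nothing else scores 25.
print(score_plan({"fiduciary responsibility": {i: True for i in DIMENSIONS["fiduciary responsibility"]}}))
```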

Itzoe hopes the focus of benchmarking will change to incorporate more outcome-based goals. The compiled information will be added to a growing database of benchmarking data, enabling comparisons by participant count, plan assets or industry type. Every plan Greenspring scores will strengthen the sample.

“How a plan compares to other plans is of limited value,” Itzoe says. More important is how a plan ranks on a checklist of substantive actions and features, and whether the sponsor does whatever it takes to move the plan in the right direction.
