One of the most widely cited measures of U.S. home prices has come under fire in recent weeks after an upstart firm’s critique ignited a broader discussion on what data concepts the industry should track — and how.
A public post in late January by Parcl Labs captured the imagination of real estate data insiders when it called into question the S&P CoreLogic Case-Shiller Index, a monthly price tracker considered by many to be a go-to source for home price trends.
It’s not the first time such a widely referenced industry measuring stick has been scrutinized, and it won’t be the last.
To understand these arguments, Intel examines what Case-Shiller and similar models attempt to accomplish, how to interpret them, and what blind spots other data providers are increasingly jockeying to fill in.
Read more in the full report below.
Origin of a ‘gold standard’
For decades, real estate professionals have acknowledged many problems with simply tracking raw home prices.
One of the biggest issues? The group of homes that sell in one period might not look like the group that sells in the next. A sudden mortgage rate surge, for example, might drive more buyers to a lower price tier without exerting as much downward pressure on prices within any given tier.
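To see the problem concretely, here is a toy illustration with made-up numbers: the median sale price falls purely because the mix of sales shifts toward cheaper homes, even though no individual home loses value.

```python
import statistics

# Made-up numbers: two price tiers, and prices *within* each tier never change.
period_1 = [250_000] * 5 + [500_000] * 5   # balanced mix of sales
period_2 = [250_000] * 8 + [500_000] * 2   # rate surge pushes buyers down-tier

print(statistics.median(period_1))  # 375000.0
print(statistics.median(period_2))  # 250000.0 -- median falls on mix shift alone
```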
That’s one of the problems that the Case-Shiller index was designed to solve.
Economists Karl Case, Robert Shiller and Allan Weiss formulated the index in the late 1980s around the concept of "repeat sales": instead of tracking the prices of whatever houses sold in a given period, the index tracks the price changes of individual houses across successive sales.
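To make the idea concrete, here is a minimal sketch of a repeat-sales regression in the spirit of the Bailey-Muth-Nourse framework that Case and Shiller extended, using a handful of hypothetical sale pairs. The real S&P CoreLogic Case-Shiller methodology layers interval and value weighting on top of this; everything below is illustrative.

```python
import numpy as np

# Each hypothetical record: (first-sale period, first price, second-sale period, second price)
pairs = [
    (0, 300_000, 2, 330_000),
    (0, 450_000, 3, 495_000),
    (1, 250_000, 3, 270_000),
    (1, 600_000, 2, 615_000),
]
n_periods = 4

# Dummy-variable design matrix: -1 at the first-sale period, +1 at the
# second-sale period; the response is the log of the price ratio.
X = np.zeros((len(pairs), n_periods))
y = np.zeros(len(pairs))
for i, (s, price_s, t, price_t) in enumerate(pairs):
    X[i, s] = -1.0
    X[i, t] = 1.0
    y[i] = np.log(price_t / price_s)

# Normalize the base period to an index level of 100 by fixing beta_0 = 0,
# then solve for the remaining period effects by least squares.
beta, *_ = np.linalg.lstsq(X[:, 1:], y, rcond=None)
index = 100 * np.exp(np.concatenate(([0.0], beta)))
print(np.round(index, 1))  # one constant-quality index level per period
```

Because each house serves as its own control, shifts in which homes happen to sell don't masquerade as price changes; that is the "constant-quality" property researchers prize.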
It’s far from the only measure set up this way. The Loan Performance Home Price Index, another CoreLogic data series, uses the repeat-sales pricing technique, as does the Federal Housing Finance Agency House Price Index.
In a blog post discussing the relationship between appraisal values and home price movements, FHFA’s Justin Contat and Daniel Lane wrote, “The repeat-sales index is the industry gold standard since it is ‘constant-quality’ and suffers less than mean or median values from sampling differences.”
Case-Shiller's National Home Price Index is more than a simple up-and-down gauge; over time, it has become a benchmark for both housing and the nation's broader economy. The national index and its metro-area composites are key tools that policymakers and investors use in their decisions.
While many point out one of its potential drawbacks, a two-month lag in the data, they also generally credit another time-based element for its popularity: a multi-decade time series with a rigorously tested methodology doesn't come along every day.
“What Case and Shiller put together is really the gold standard for price changes in the housing market,” Edward Glaeser, a professor of economics at Harvard University, said in an interview for The New York Times obituary of Karl Case. “It has the beauty of being both transparent and reliable.”
Taking a swing at the king
On the last Tuesday of February, as on the last Tuesday of every month for years, the S&P CoreLogic Case-Shiller Indices were released. And like clockwork, they generated headlines seconds later.
But it was another headline, published a few weeks earlier, that made a splash in data and research circles when it called into question decades of accepted price-monitoring standards.
This boldfaced shot across the standard-bearer's bow came in a January article by Parcl Labs, one of a growing number of data providers challenging the institutional order that sets its clock by indicators like the Case-Shiller release.
A spokesperson for S&P Global declined to respond in detail to a request for comment on the post, directing Intel instead to the Case-Shiller methodology page.
Parcl Labs, riding a pandemic-era shift toward digital real estate investing, offers investors the opportunity to bet on markets rather than on physical property. It focuses on measuring value and trend action daily, and in doing so, Parcl argues, it adds a novel layer of information to real estate pricing and analytics.
Parcl’s article, penned by co-founder Jason Lewris and Vice President of Strategy Lucy Ferguson, argued Case-Shiller “lacks utility for the modern housing market.”
Their list of issues with Case-Shiller was long and included the following:
- Backward-looking data that is two months old. In recent years, more data providers have moved toward offering customers daily updating reports instead of quarterly or monthly ones. Parcl’s post argues that this trend leaves Case-Shiller — which releases with a two-month delay — further behind the curve than ever.
- Utilizing only single-family repeat sales, and not even all of them, to measure home value change. In addition to excluding new-construction homes, co-ops and condominiums, the Case-Shiller methodology also discards any sale pairs that occur within six months of one another. A 2022 study by Parcl asserted that, due to these exclusions, the Case-Shiller 10-City Composite Home Price Index left out 42 percent of sales in the 10 largest metropolitan statistical areas. (A sketch of how such filters operate appears after this list.)
- Discounting older or low-turnover homes, by nearly 50 percent in some cases. While Case-Shiller does not necessarily exclude older homes or ones with long gaps between sales, the methodology's weighting adjustments greatly reduce their impact. Parcl concluded that, given what has been trading in San Francisco of late, most sales feeding that metro area's index are being discounted, some by as much as 45 percent.
- Using MSA boundaries in its 10- and 20-metro-area indices paints with too broad a brush. People live in New York City, Boston or Chicago, but as with any real estate, supply, demand and value are localized dynamics. One chart in Parcl's post, for example, illustrated the performance gap between metro San Francisco and the city proper.
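The exclusions and weighting adjustments described in the bullets above can be pictured with a short sketch. The field names, the six-month cutoff implementation and the linear decay curve below are illustrative assumptions, not the published S&P CoreLogic parameters:

```python
from datetime import date

MIN_HOLD_DAYS = 183  # pairs that trade again within ~6 months are dropped

def eligible(pair: dict) -> bool:
    """Keep only single-family repeat sales held at least six months."""
    if pair["property_type"] != "single_family":  # drops condos and co-ops
        return False
    if pair["prior_sale_date"] is None:           # new construction has no pair
        return False
    held = (pair["sale_date"] - pair["prior_sale_date"]).days
    return held >= MIN_HOLD_DAYS

def interval_weight(pair: dict) -> float:
    """Down-weight pairs with long gaps between sales, since renovation or
    neighborhood drift can masquerade as market movement. The linear decay
    and its floor here are stand-ins for the index's actual scheme."""
    years = (pair["sale_date"] - pair["prior_sale_date"]).days / 365.25
    return max(0.55, 1.0 - 0.03 * years)

sale = {
    "property_type": "single_family",
    "prior_sale_date": date(2008, 5, 1),   # a low-turnover home
    "sale_date": date(2024, 1, 15),
}
if eligible(sale):
    # ~15.7 years between sales: the pair counts at roughly half weight,
    # echoing the 45 percent discounts Parcl observed in San Francisco.
    print(round(interval_weight(sale), 2))  # 0.55
```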
A single source of truth?
While he considers Case-Shiller an imperfect data source, Lewris still sees some utility in it for now: namely, in helping the Parcl Labs team get smarter about specific market conditions and about how the index's adherents use it.
Lewris wrote in a recent blog post that the Parcl team attempted to reconstruct the Case-Shiller methodology as best it could to help “predict” how it would behave in more recent weeks.
“This report gives us insight into how markets are evolving for single-family, repeated sales homes that fall outside the definition of home flipping,” Lewris wrote.
Each time the Case-Shiller indices are released, Parcl provides a post-mortem on how close its predictions came to the results. December's release was largely on par with most months, with Parcl's estimates generally very close, even if some were directionally off.
Ultimately, though, Parcl Labs has about as bold a goal as any information provider in any industry: Its stated mission is to create a new global standard for residential real estate pricing and analytics, largely by creating a single source for home valuation.
This idea is both elegant in concept and daunting in practice. Instead of multiple systems, servers and access points, the aim is one system that integrates, interrogates, aggregates and disseminates data. Neither the concept nor the chase to produce such a data reservoir is new, and whether Parcl, or another upstart data provider, will persuade the industry it has cracked the code remains to be seen.
However, some experts believe having different sources that competently and efficiently offer different data products has worked well for decades. If something isn’t broken, they argue, there’s no need to fix it.
“We use the FHFA series, which is a repeat-sale model, and we like it. But Case-Shiller is proven, and I don’t think it’s broken,” said Ali Wolf, chief economist for Zonda. “Parcl is doing something new and different, and there’s a value to their data. But it doesn’t make Case-Shiller wrong or irrelevant.”