Sunday, June 26, 2005


My Report of Lower Industry Profits Creates Concerns... Again

I finished up a big project and then took a few days off with the family to go to Philadelphia. Our son takes American history next year, so it was an opportune time to visit the city. The fact that our beloved Mets were playing the Phillies there had nothing to do with it. Citizens Bank Park may be the finest baseball stadium we have ever visited, made especially nicer when the Mets won 4-3.

Having spent three days away from phones, e-mails, and instant messaging, I came back to some voice mails and e-mails about the profits report I wrote about on my blog on 6/13, but which did not appear on WTT until nine days later, on 6/22 (I guess they were waiting for a really slow news day), and which was fully discussed in my column that appeared on 6/24 (which I wrote on 6/20 and finished editing on 6/21). Of course, posting it on my blog resulted in nothing; the 6/22 report on WTT created a bunch of e-mails. My readership is far greater on WTT than it is here, so far at least.

I still find it funny how the Commerce Department and Census Bureau publish all kinds of data for "free" (it's really our tax dollars hard at work), and it just sits there with few people using it. I'll save that grousing for another day, though.

My Friday column will discuss the e-mails I got and my responses in more detail. There have been times in past situations when I have wondered if there isn't an element of prejudgment going on: data that support favorable conclusions must have had good methodologies, and data that say unfavorable things must be unreliable. I've had my share of both, as has anyone involved in research. Data are data, and one thing we should look at is whether or not the same methodologies are applied on a consistent basis.

One e-mailer admonished me that the data were unreliable because the report they come from has the statistician's disclaimer "sampling variability precludes characterizing change," which is basically statisticians saying "hey, don't blame us if there's a methodological issue that results in showing a change when there really was none." I've called these things the "contents may settle during shipping" clause, which I remember being on virtually every breakfast cereal box I ever saw (gee, it was full when it left the factory). Or perhaps it's more like "your results may vary" from every diet ad we've ever seen. (Or how about "adult supervision required"? With the kinds of problems adults can cause, I was never quite certain that was a good idea. It certainly doesn't apply to personal computers or iPods, which should have "teenager required" stickers on them.)

All sample-based surveys have issues, yet we still use them because we want data soon enough to act on, at a reasonable cost, and we make whatever trade-offs are needed to get them. (I worry about executives who take all research at face value yet never apply the same skepticism to their own accounting reports, which they should.)

The question is whether there has been a real change, and if so, how large that change is. Some apparent change may be the result of the respondent base, which might not represent the full marketplace. We deal with this all the time; some of you may have heard of "non-response bias" in the past, and this is it. This is one thing that "statistical confidence" is all about. A 95% confidence interval means that if you ran the survey 20 times, about one of those times your results would be quite different. So whenever you get survey data, you should have a little voice in your head that mutters "this may be that 1-in-20 time." (This is why good researchers are obsessive-compulsive personalities: if you saw my desk, you would know why I would never be happy in a research-only environment. I can't live with muttering voices and in a constant state of "statistical fear.")
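That "1 in 20" idea is easy to see with a little simulation. The numbers below are entirely made up for illustration (not from any industry report): we draw repeated samples from a population whose true mean we know, build the usual 95% confidence interval each time, and count how often the interval misses the truth.

```python
import random

# Hypothetical simulation, not real survey data: draw many samples
# from a known population and count how often the standard 95%
# confidence interval for the mean misses the true value (~1 in 20).

random.seed(42)
TRUE_MEAN, SD, N, TRIALS = 50.0, 10.0, 400, 2000

misses = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    mean = sum(sample) / N
    se = SD / N ** 0.5            # known-sigma standard error, for simplicity
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    if not (lo <= TRUE_MEAN <= hi):
        misses += 1

print(f"Missed the true mean in {misses / TRIALS:.1%} of surveys")
```

About 5% of the simulated "surveys" land outside the interval, which is exactly the muttering voice's point: one time in twenty, a perfectly well-run survey gives you a result that's off.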

I've never fully trusted sample-based research, though I use it, implement it, and am good at it. As a research user, you want to seek consistently applied, well-designed methodologies; there are unfortunately too few of them. I always augment sample-based surveys with secondary research and anything independent I can find. The other thing I've done is to use techniques such as inflation adjustments (which open up problems of their own, discussed in my column) and moving totals, always using a 4-quarter total, to minimize variances and to make sure seasonality is less of a problem.
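The 4-quarter moving total is simple enough to sketch. Using made-up quarterly revenue figures (not real industry data), each point is the sum of the latest four quarters, so every point always contains one of each season and the seasonal bump washes out:

```python
# Sketch of a 4-quarter moving total on hypothetical quarterly
# revenue figures: each output value sums the latest four quarters,
# so seasonality largely cancels out of the series.

quarterly = [90, 110, 95, 130,   # year 1 (note the Q4 seasonal bump)
             92, 112, 97, 133]   # year 2

moving_totals = [sum(quarterly[i - 3:i + 1])
                 for i in range(3, len(quarterly))]
print(moving_totals)
```

The raw series swings by 30-plus points each quarter, while the moving totals creep up smoothly, which is the whole point: the trend survives, the seasonal noise doesn't.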

Another thing to keep in mind is that revenue and sales lines don't have to change much to cause wild swings in profits. For example, and very simply (because it's never this simple), if a company has sales of 100 and its breakeven point is 90, then its profit on sales is 10. If sales go up to 105, then its profit is 15, a 50% increase off only a 5% increase in sales. This is why understanding breakeven points is so important: moving your breakeven point affects profits greatly. It's also important to understand related topics like contribution margin, which I won't go into here lest boredom set in, if it hasn't already. But when an industry is beset by increases in transportation, paper costs (not all of which can always be passed on to customers), benefits costs, and other items, plus continuing pricing pressures that are compounded by new media (just this week I saw a report that e-mail effectiveness is increasing; I'll post the link when I find it again), it's no surprise that the calculated change in profits would swing wildly from even the smallest of sales changes.
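The breakeven arithmetic above can be worked out in a few lines. The numbers are the same illustrative ones from the example (sales of 100 against a breakeven of 90), and the simplification is the same too: profit here is just sales above breakeven, ignoring contribution margin and everything else:

```python
# Illustrative leverage of a fixed breakeven point: a small sales
# change produces a much larger percentage change in profit.

def profit_swing(sales_before, sales_after, breakeven):
    """Profit is simplified to sales above the breakeven point."""
    profit_before = sales_before - breakeven
    profit_after = sales_after - breakeven
    sales_change = (sales_after - sales_before) / sales_before
    profit_change = (profit_after - profit_before) / profit_before
    return sales_change, profit_change

sales_chg, profit_chg = profit_swing(100, 105, breakeven=90)
print(f"Sales change:  {sales_chg:.0%}")   # 5%
print(f"Profit change: {profit_chg:.0%}")  # 50%
```

Run it the other direction (sales falling from 100 to 95) and profit drops 50% on the same 5% sales move, which is why a year of small revenue declines can produce the wild profit swings in the report.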

There are some data incidents I will never forget... One of them was before Print'01, when there was a press release about a speech I was giving which indicated that the industry was slowing and that buyers had the upper hand in negotiations with vendors. A vendor executive complained about what I said in the release, routing the complaint through someone else, though the conclusion was a matter of simple economics. Vendors were having slow sales and openly complaining about it; independent data showed it quite clearly. Just as there's nothing like buying a new car when Detroit is having problems, there's nothing like the deals you can get when a company is having trouble meeting its sales forecasts. I learned that early: that's how Chrysler got into trouble. Dealers learned that they could make their best deals at the end of the month, so they never ordered until then, acting as reluctant and as pained as they could, holding out until the last minute to put in their orders. Dealers told me that. Iacocca wrote about it in his book. It's practical economics, and you don't even have to take an economics course to learn it; it's something even your dumbest uncle knows. My speech was actually positive about the industry, and it may have been too positive at the time. You can download it by filling out the brief form at -- shortly after the show, it was downloaded over 1,000 times. About 500 people saw the speech. There was a recording of it at one time, but I never received it; the Q&A went on for 45 minutes with about 50 or 60 people there, and that was never recorded. It was quite lively and went well.

There may be some concern that some trade association surveys don't reflect these dour profits data. First of all, these surveys are usually of members only, so they reflect the conditions of members. (There is nothing wrong with this, no one pretends otherwise, and no one should complain about it; it is what it is, and they serve a vital role in providing that consistently applied methodology, etc., etc.) Trade association members are among the most profitable, most successful companies in our industry, and I at one time had the data to prove it (I haven't surveyed about this for a very long time, but I have seen nothing in other data since then that would refute it). I would not expect anything but that profits for the industry at large would be less than those of association members. First, association membership is a discretionary expense, so you must have profits to pay for it; companies that don't have profits often make association membership their first cut, however counterproductive that decision might be. Second, one would hope that increased knowledge about successful plant operations is something you would pick up from being an active association member. Yes, as shocking as that sounds, getting to stand around a resort swimming pool for a cocktail hour is not as big a benefit as what you can really learn from the publications, events, and networking you get through associations. So don't be surprised by any "inconsistency" in this regard.

These profits data lead me to believe that poorly run firms are getting an economic shove out of the business. If anything, the disparity between well-run firms and poorly run firms is probably widening. That's a good thing. With all of the complaints of "too many printers" and "too many presses" or "competitors who don't know their real costs" you'd think these data would be welcomed with open arms, however perverse that sounds.

Check my column this Friday. In the meantime, be sure you've signed up for my free WTT webinar this Wednesday at 2PM Eastern time, sponsored by Creo, at .
