Despite all the effort put into a fuel economy test, the results can easily be buried in the noise. Even if the technology in question promises to boost fuel economy by 2%, the calculated confidence interval of the test results (say, plus or minus 5%) could be larger than the promised fuel savings.
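The arithmetic behind that dilemma is easy to sketch. Here is a minimal illustration (all numbers hypothetical), assuming run-to-run fuel economy varies with a 5% standard deviation and using a normal-approximation 95% confidence interval:

```python
import math

# Hypothetical test: run-to-run fuel economy varies with a
# standard deviation of 5% of the mean, and we average n runs.
def ci_half_width_pct(stdev_pct, n, z=1.96):
    """95% confidence half-width, as a percent of the mean."""
    return z * stdev_pct / math.sqrt(n)

effect = 2.0  # promised improvement, percent
for n in (4, 25, 100):
    hw = ci_half_width_pct(5.0, n)
    print(f"n={n:3d} runs: +/-{hw:.1f}%  resolves a 2% gain? {hw < effect}")
```

With only a handful of runs the interval swamps the effect; it takes dozens of runs before a 2% gain even becomes resolvable, which is exactly the trap the article describes.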
This poses a real dilemma for fleets. Investing in a fuel-saving technology can be an expensive proposition, so you want to ensure the chosen technology is going to produce a return on the investment.
Trailer skirts offer a case study.
There are huge differences across the available lineup of trailer skirts, says Daryl Bear, chief operating officer and lead engineer for Mesilla Valley Transportation Solutions. The company tests and validates aerodynamic equipment, and is closely tied to Mesilla Valley Transportation, one of the fuel economy leaders in the U.S. for-hire trucking market.
The company boasts a 9 mpg (26.135 L/100 km) average over 1,650 trucks. It really is in a league of its own when it comes to fuel economy testing.
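For readers more used to one unit than the other, the metric figure follows directly from the standard conversion factors (the function name here is just for illustration):

```python
US_GAL_L = 3.785411784   # litres per US gallon
MILE_KM = 1.609344       # kilometres per mile

def mpg_to_l_per_100km(mpg):
    """Convert US miles per gallon to litres per 100 km."""
    return 100 * US_GAL_L / (mpg * MILE_KM)

print(f"{mpg_to_l_per_100km(9.0):.3f} L/100 km")  # prints: 26.135 L/100 km
```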
“You can get a trailer skirt that delivers 3% savings, and you can get some that are over 6%,” Bear says. “A lot of it comes down to size. The more surface area the skirt has, generally, the better. Positioning has some influence, too. And generally, the larger skirts — the ones that come closer to the ground — are more prone to damage. Now you’re faced with a trade-off: better fuel savings versus possibly increased maintenance costs. Will the additional fuel savings offset increased maintenance costs?”
Bear says fleets might try in-service evaluations of several different skirts, but often won't see a difference, because the evaluations themselves aren't precise enough to provide conclusive proof.
“In the end, the fleet will often pick the one that’s either the cheapest or the least maintenance,” he says. “The fuel savings aren’t factored into the financial decision because the fleet couldn’t measure it.”
Highly controlled tests
Some tests can produce statistically sound results, but they are performed under highly controlled circumstances, and data is carefully logged and interpreted by experts.
“One of the issues with in-operation tests is the number of variables that cannot be controlled the way they can be on a test track,” says Jan Michaelsen, FPInnovations’ PIT Group leader.
PIT Group testing activities under its ISO/IEC 17025 accreditation include fuel consumption testing for heavy-duty vehicles, such as testing according to SAE J1321 and TMC RP 1102A (Type II); SAE J1526 and TMC RP 1103A (Type III); EPA SmartWay test methods; and emissions testing with portable emissions measurement systems according to EPA regulations.
“We try to control every variable, removing as much as we can from driver technique, environmental variations, etc., which can all impact fuel consumption.”
All the testing is done in a stable environment, with protocols in place that stop the test if it’s too windy or raining, for example. PIT Group uses identically spec’d control and test vehicles and has specific driver procedures to ensure the trucks are driven in the same manner.
PIT Group first establishes a baseline with a test truck versus a control truck to see if there are any differences that may need to be accounted for in the analysis — right down to the tire inflation pressure, the trailer ride height, and the trailer gap.
“Neutralizing that many variables for an in-operation test is very difficult to do,” Michaelsen says. “Even if you are on the same route every day, you have different weather and traffic conditions to factor in, temperature effects, and even the mood of the driver.”
Eliminating the data noise
Two factors can work to your advantage with an in-operation test: time and a large group of test trucks. The longer you run the test, the more variables like weather and traffic will even out. Conditions may be bad one day, but not so bad the next. Over 30 or 60 days, those external influences almost disappear. Almost.
Using a large sample size of trucks helps in the same way. The anomalies experienced by one truck may not be experienced by another, or five others. The larger the test group, the better.
“You put everything together and get rid of driver influences and weather, etc.,” he says. “If you’re testing with one truck, anything under 5% is almost impossible to see. Even with a large population you will have a hard time getting anything under 2% or 3%. There’s just that much variability.”
In addition to the test trucks, you will also need a group of baseline trucks, a portion of the fleet that doesn’t get the new technology, so that you have a before and after comparison. The baseline group also helps track the environmental influences, assuming that if the test truck and the baseline truck experience similar weather, the impact on each truck should be similar.
“You should really have a baseline portion of the fleet that remains unchanged from when you install a new technology, you know, before adoption and after adoption, so that you have a portion of your fleet that you can compare to what hasn’t been changed,” he advises.
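The before-and-after comparison Michaelsen describes amounts to a difference-in-differences: weather and traffic shift both groups roughly equally, so subtracting the baseline group's change isolates the device's effect. A sketch with hypothetical numbers:

```python
# All mpg figures below are hypothetical, for illustration only.
test_before, test_after = 6.50, 6.80   # trucks fitted with the device
base_before, base_after = 6.50, 6.60   # unchanged baseline trucks

# The baseline group's change captures shared conditions (weather,
# traffic, season); subtracting it isolates the device's effect.
device_effect = (test_after - test_before) - (base_after - base_before)
print(f"apparent gain: {test_after - test_before:.2f} mpg")
print(f"after removing shared conditions: {device_effect:.2f} mpg")
```

Without the baseline group, the test fleet's 0.30 mpg apparent gain would be credited entirely to the device, when a third of it came from conditions every truck experienced.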
Because weather has such an impact on fuel testing, Michaelsen says a practical limit for an in-operation test is about two months, preferably in the late-spring to early-summer, or late-summer to early-fall, when the weather is fairly stable. Cold air, for example, is more dense than hot air, so any aerodynamic performance improvements will be greater if you test in 10- to 15-degree temperatures compared to 25- to 30-degree temperatures, when the air is less dense. It really does make a difference.
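The density effect follows from the ideal gas law. A rough sketch, assuming standard sea-level pressure and comparing the midpoints of the two Celsius temperature ranges (aerodynamic drag scales roughly linearly with air density):

```python
# Ideal-gas estimate of air density at two test temperatures.
P = 101_325.0   # Pa, assumed standard sea-level pressure
R = 287.05      # J/(kg*K), specific gas constant for dry air

def air_density(temp_c):
    """Dry-air density in kg/m^3 from the ideal gas law."""
    return P / (R * (temp_c + 273.15))

cool, warm = air_density(12.5), air_density(27.5)
print(f"{cool:.3f} vs {warm:.3f} kg/m^3 "
      f"({100 * (cool / warm - 1):.1f}% denser air in the cool test)")
```

About a 5% density difference between the two ranges, which is why an aerodynamic device looks better in a cool-weather test than a warm-weather one.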
Many fleets start out with the best intentions, but the heightened level of diligence with the test fleet is difficult to maintain. For example, basic maintenance such as tire pressure checks must be performed daily. A tire change can disqualify a truck from the test fleet. And a fairly high level of cooperation is required between operations and dispatch to ensure the trucks and drivers are kept on the same loads and routes for the duration of the test. All that is a lot to ask on top of the usual operational pressures.
Is testing worth the effort?
It would be very discouraging to test a device for two months and find no difference between the baseline group and the test equipment. The difference may be there, but it could be lost in the data noise, especially with a device offering a small improvement — say 2% to 3%. So how does a fleet make a decision on a particular fuel saving device? Have it professionally tested.
If you’re thinking you can’t afford that, you’d be dead wrong. Both the PIT Group and Mesilla Valley Transportation Solutions offer their testing expertise at a reasonable cost. In fact, the testing doesn’t cost you money; it saves you money.
“If you’re unable to measure a device with an in-service test, you have to find another way to do it, or you just leave those savings on the table,” says Bear, who is from Toronto and has a motor-racing background.
“For example, we tested a brand of aerodynamic mud-flaps a while ago and concluded they produced a 1% fuel savings. That’s a number you’d never see testing it yourself, and if you couldn’t prove it, would you buy it? We did. The test cost $10,000, but they saved us $300,000 in the first year. Fuel savings of just 1% will save more in a year than we’ll spend in five years on testing.”
If that’s not compelling enough, Bear says a recent evaluation of some SmartWay-approved fuel-efficient tires revealed a difference of eight gallons of fuel per 1,000 miles between the brands tested. That’s $2,000 a year in savings. Even a sophisticated fleet would have trouble pulling off that kind of test, but wouldn’t you like to know what the top-performing tire was?
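A back-of-envelope check on that tire gap (the annual mileage and fuel price here are assumptions for illustration, not figures from the article):

```python
# Figures from the article:
gap_gal_per_1000mi = 8.0   # fuel gap between best and worst tire brand

# Assumed values, not from the article:
annual_miles = 100_000     # assumed typical long-haul mileage
diesel_price = 2.50        # assumed USD per gallon

annual_savings = gap_gal_per_1000mi * (annual_miles / 1000) * diesel_price
print(f"${annual_savings:,.0f} per truck per year")  # prints: $2,000 per truck per year
```

Under those assumptions the per-truck figure matches the $2,000 quoted above; at different mileages or fuel prices it scales proportionally.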
MVTS does testing for fleets on a contract basis, with the cost running about $20,000 to $30,000 for up to three tests. Often, Bear says, “The device suppliers will help with the cost.”
And MVTS has probably already tested any device you might be considering. In that case, it can build a model using data from your fleet to compare against its baseline track testing, in a program called Real World Fuel Saving Analysis.
“We take the fleet’s information — their duty cycle, where they run, percentage of on-highway miles versus urban duty cycles, etc., and we can predict the results the device will get under that application,” says Bear. “It’s purely science. We’re able to get the fleet a valid real-world savings number from a four-hour test.”
PIT Group takes a different approach. Fleets can buy into the group, become partners of a consortium, and get access to much of its non-proprietary test data. Or, for additional cost, fleets can contract PIT to do in-service tests, data collection and evaluation.
The cost of joining the PIT Group is $52.50 per power unit at the highest price point. The minimum charge is based on 100 power units, or $5,250 per year. Partners can suggest technologies they would like to have tested, or just wait for data to emerge on other tests as they are completed. Once you’re a partner, you have access to the test results, good and bad.
“The top-notch fleets in Canada actually have people working in quality improvement and tracking fuel. Those people are paid a salary and the company sees some benefit from that,” Michaelsen says. “That’s what we are doing here at PIT Group, helping those fleets implement and track new technologies. For smaller fleets that can’t afford to have someone like that on staff, we try to be that person for them.”