FAQs

Does the Discrete-Choice Suite include a statistical estimation program?
No. Instead, the suite supports a number of third-party statistical estimation programs, such as Statistical Innovations’ Latent GOLD Choice, which we use as the primary estimation tool in our consulting engagements. We also support Biogeme, a free estimation program by Michel Bierlaire.

Which statistical estimation programs do you support?
Statistical Innovations’ Latent GOLD Choice
William Greene’s NLOGIT
Kenneth Train’s Gauss-based Mixed Logit program
Sawtooth Software's CBC/HB
Michel Bierlaire's Biogeme
R mlogit package
R RSGHB package

In which formats are your programs available?
Download and/or CD-ROM

Do you offer consulting support for discrete-choice projects?
We have over 15 years' experience helping Global 1000 clients with discrete-choice projects. StatWizards was born from these efforts. Contact support@statwizards.com for more information.

Is support available?
Yes, via phone at (619) 373-0008 or email at support@statwizards.com.

Are demonstration versions available?
Yes. Click here to download a demonstration version.

Are academic versions available?
Yes. We offer full-featured academic versions at steep discounts.

How many copies can I make?
You can make unlimited copies of the demonstration versions and two copies of the full version. Full versions must be authenticated before they will work.

How do I authenticate the full version of each wizard?
The first time you run a wizard after installation, you will be shown two unique codes that you must email to support@statwizards.com. You will be sent an authentication code that you must enter to proceed. From that point on, the wizard is unlocked. For more information, view our Product Activation document.

Are multiseat discounts available?
Yes. Contact George Boomer at support@statwizards.com.

What constitutes the best choice design?
This sounds like a simple question, but it is not. From our perspective, the issue is not yet settled and probably never will be.

Consider the accepted criteria for experimental designs. Classical experimental designs for linear processes are deemed efficient if they are orthogonal and balanced. These generally accepted criteria led to the development of various measures of efficiency, such as A-efficiency and D-efficiency, all intended to reduce the variance around parameter estimates. Later, Huber and Zwerina, along with others, argued that nonlinear choice models impose their own requirements and added minimal overlap and utility balance to the criteria for a good design.
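For a linear design, D-efficiency has a simple closed form: it grows with the determinant of the information matrix X′X. As a minimal illustration (the function name and the scaling convention below are ours, not part of any StatWizards product), here is a Python sketch that scores a design matrix so that a perfectly orthogonal, balanced design earns 100:

```python
import numpy as np

def d_efficiency(X):
    """D-efficiency of a model matrix X (N runs x p parameters),
    scaled so an orthogonal, balanced +/-1-coded design scores 100."""
    n, p = X.shape
    info = X.T @ X  # information matrix of the linear model
    return 100.0 * np.linalg.det(info) ** (1.0 / p) / n

# A 4-run, 3-factor, two-level orthogonal array (+/-1 coding)
X = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]], dtype=float)
print(d_efficiency(X))  # orthogonal and balanced, so essentially 100
```

Dropping any row of X breaks both orthogonality and balance, and the score falls below 100 accordingly.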

Recently, Warren Kuhfeld at SAS has accumulated an extensive design library and developed a modified Fedorov algorithm in SAS for constructing D-efficient designs. This is terrific work, probably the best around. Anyone who has access to SAS and is facile with SAS programming probably doesn’t need the Design Wizard.

However, as Warren would probably be the first to admit, even D-efficient designs are by their very nature elusive. The problem is that the covariance matrix for discrete-choice estimators depends on the values of the very parameters being estimated. In other words, you cannot calculate the efficiency of a design until after the experiment has been run. There are ways to mitigate this problem, such as Sándor and Wedel’s Bayesian approach, which relies on prior assumptions about parameter values, but that shifts the burden of design efficiency onto the quality of the priors. Efficient designs will always be to some extent a compromise—an approximation to an unattainable goal. Some approximations will be better than others, but that is the best we can say.
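The dependence on unknown parameters can be made concrete. In a multinomial-logit model, the Fisher information matrix sums, over choice sets, terms of the form X′(diag(p) − pp′)X, where the choice probabilities p themselves depend on the parameter vector β. The sketch below is our own toy illustration (not StatWizards code): the same two-set design yields different D-errors depending on what β is assumed.

```python
import numpy as np

def mnl_info(choice_sets, beta):
    """Fisher information of a multinomial-logit design.
    choice_sets: list of (J x k) attribute matrices, one per choice set."""
    k = len(beta)
    info = np.zeros((k, k))
    for X in choice_sets:
        u = X @ beta
        p = np.exp(u - u.max())   # stable softmax -> choice probabilities
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def d_error(choice_sets, beta):
    """D-error: det of the parameter covariance, normalized by dimension.
    Lower is better; it changes with the assumed beta."""
    k = len(beta)
    cov = np.linalg.inv(mnl_info(choice_sets, beta))
    return np.linalg.det(cov) ** (1.0 / k)

# Two two-alternative choice sets over two +/-1-coded attributes
sets = [np.array([[ 1.0,  1.0], [-1.0, -1.0]]),
        np.array([[ 1.0, -1.0], [-1.0,  1.0]])]

print(d_error(sets, np.array([0.0, 0.0])))   # D-error if betas are zero
print(d_error(sets, np.array([1.0, -0.5])))  # a different assumed beta gives a different D-error
```

This is exactly why the efficiency of a choice design cannot be pinned down before the data are collected: whichever β you plug in is a guess.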

Moreover, challenges to D-efficiency maximization are beginning to emerge. Hauser and Toubia at MIT (“Note on Managerially Efficient Experimental Designs,” April 26, 2005) argue that unbalanced designs may be superior to D-efficient designs when some attributes are more important to management decisions than others. They introduce the concept of M-efficiency to describe this effect. Kessels, Goos, and Vandebroek, in the August 2006 issue of the Journal of Marketing Research, argue that efficiency criteria based on response predictions are better suited than D-efficiency to choice experiments that lead to forecasts. They borrow the concepts of G-efficiency and V-efficiency from classical design theory and update them for nonlinear choice models.

It seems clear from all this that the last word on choice designs has yet to be written. The theory is still under development, so from StatWizards’ standpoint it makes no sense to embark on a multi-year development effort to embrace one of these approaches until the theory begins to stabilize.

How the Design Wizard works
Because design theory remains in a state of evolution, with even D-efficiency under attack, we decided (for better or worse) to take a simple yet workable approach to developing experimental designs.

First, we searched available design libraries and software-based design generators for the best designs we could find. We suspect that Warren Kuhfeld’s routines can generate better ones in many cases, but the ones StatWizards employs are at least orthogonal and balanced with high—though not always perfect—efficiency. The Design Wizard matches these designs to the user’s specifications and adds foldovers or replications where needed.

Next, the wizard takes another simple yet serviceable approach to minimizing overlap. It constructs an overlap index equal to the sum of squares of the number of times each level appears in each choice set. It then randomizes the design 100 times, keeping the version with the lowest overlap index. (The figure of 100 was chosen based on Monte Carlo experiments.) This approach yields designs with orthogonality, balance, and good—though not perfect—overlap characteristics. The overlap objective remains subordinate to the objectives of orthogonality and balance.
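The procedure above can be sketched in a few lines of Python. This is our reconstruction under stated assumptions, not the wizard's actual code: we assume design rows are tuples of level codes, consecutive rows form choice sets, and "randomizing" means reshuffling rows into sets (which leaves orthogonality and balance intact). All function names are ours.

```python
import random
from collections import Counter

def overlap_index(design, set_size):
    """Sum, over choice sets and attributes, of the squared count of each
    level within the set. Lower values mean less level overlap."""
    n_attrs = len(design[0])
    total = 0
    for start in range(0, len(design), set_size):
        choice_set = design[start:start + set_size]
        for attr in range(n_attrs):
            counts = Counter(row[attr] for row in choice_set)
            total += sum(c * c for c in counts.values())
    return total

def randomize_for_overlap(design, set_size, tries=100, seed=0):
    """Reshuffle rows into choice sets `tries` times; keep the lowest index.
    Row reordering preserves the design's orthogonality and balance."""
    rng = random.Random(seed)
    best = list(design)
    best_score = overlap_index(best, set_size)
    for _ in range(tries):
        candidate = list(design)
        rng.shuffle(candidate)
        score = overlap_index(candidate, set_size)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

# Worst-case pairing: identical rows share a set, so overlap is maximal
design = [(0, 0), (0, 0), (1, 1), (1, 1)]
shuffled, score = randomize_for_overlap(design, set_size=2)
print(score)  # no worse than the starting index of 16
```

Squaring the counts is what penalizes repetition: a level appearing twice in a set contributes 4 to the index, while two distinct levels contribute only 1 + 1 = 2.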

In our view, utility balance, the remaining objective, is better handled through inspection than through prior estimates of coefficients. If any choice set appears dominated by an alternative or an alternative appears excessively weak, design rows can be cut and pasted in StatWizards’ Design sheet in Excel until the condition is alleviated. The orthogonality and balance of the design are not affected, and the user can evaluate the impact on the overlap index, which is updated in real time.

While StatWizards’ design library is not comprehensive, we find that in practice it handles almost any situation. Specifications for which the library has no match can usually be tweaked to fit one, typically by increasing or reducing the number of levels in a pricing attribute. Moreover, initial attribute lists are often too long for respondents to handle and are almost always revised before going to field; it is usually a simple matter to trim them so that they remain reasonable and fit a design.

Clearly the StatWizards approach is nowhere near as comprehensive or powerful as Warren Kuhfeld’s SAS routines, the current gold standard in the field. However, it does result in good, practical designs that are quite efficient, though not always perfectly so. Practitioners who have access to, and familiarity with, the SAS design routines will not need the Design Wizard. For most applications, though, including commercial-grade ones, the Design Wizard does a reasonably good job, does it quickly, and has a flat learning curve.

Is a “reasonably good job” good enough? As a former McKinsey colleague once said, you only need enough precision to get the right answer. Concerns about D-efficiency aside, a design with a D-efficiency of 100 is better than one with a D-efficiency of 96, but not by much. We submit that other things, such as getting the specification right and having enough alternatives in each choice set, can have a much greater impact not only on the efficiency of the design but also on the outcome of a choice study.

How we priced StatWizards products
The pricing of StatWizards’ Discrete-Choice Suite plus a compatible estimation package is not low, to be sure, but the question is always: compared to what? Compared to the alternatives, our wizards are a bargain. A single-PC copy of SAS, depending on the configuration, costs around $2,000 per year, almost as much as a perpetual license for the entire Discrete-Choice Suite. Moreover, the learning curve for SAS’s command-driven interface is steep, imposing an additional economic cost on the customer.

Sawtooth Software’s prices are equally high. A CBC-plus-HB license we purchased last year cost over $7,000, twice the cost of the Discrete-Choice Suite plus any of the estimation packages it supports. Seen in this light, the Discrete-Choice Suite is simply positioned in a different niche.

On the supply side, the economics of software in such a small niche market are stark, resembling those of an electric utility: the marginal cost of production and distribution is tiny, but the fixed costs of development, marketing, and computer services are significant. For a small potential market like this one, which we roughly estimate at around 400 potential customers worldwide, the business model consists of covering average costs while providing enough internal capital to grow and develop new products. The current pricing seems to do that.

From a customer’s demand-side point of view, the pricing was set so that the time savings on a typical discrete-choice project would pay for the suite in a single project. Having conducted more than 50 discrete-choice projects for Global 1000 companies over the last 14 years, we have found that a typical engagement takes about five days of professional time; using StatWizards cuts that time by two days. At a wholesale rate of $1,500 per day and a retail rate of $2,500, the wizards pay for themselves in one project. Also bear in mind that the StatWizards license is perpetual; we don’t return every year to pick customers’ pockets for “maintenance fees.”

Our experience so far supports this view. In a recent competitive bid for a discrete-choice project with a prominent Web search-engine company, we were asked to explain why our bid was almost half of the competing ones. The answer was the time savings from using StatWizards. We ultimately won the bid and conducted a successful project.