Auction markets for evaluations.
Wilson, Bart J.
1. Introduction
As the technology of electronic exchange advances, new
opportunities emerge for developing markets for products and services
whose innate properties hamper efforts to do so in traditional settings.
Even though one such good, the product evaluation, has long been used
for durable and nondurable goods alike, the transaction costs associated
with providing and disseminating product evaluations have limited the
scope of their use. Historically, evaluations have been limited to
"word of mouth" exchanges among acquaintances and the reports
of paid critics. (1) The Internet has the potential not only to
significantly reduce the transaction costs of sharing evaluations but
also to lower the cost of operating a centralized market mechanism for
allocating them. Although numerous evaluations
are currently freely available on the Internet for everything from CDs
and books to articles and professors, these services are incomplete. (2)
For example, Amazon.com provides free book reviews voluntarily written
by other customers, but numerous books have never been reviewed.
Ratemyprofessors.com enables students to share
evaluations of their college instructors, but not every teacher has been
rated. A market for evaluations creates the incentives for individuals
to provide evaluations that might otherwise not be provided.
Avery, Resnick, and Zeckhauser (1999) provide a first step in
creating a pricing mechanism to induce the efficient provision of
evaluations. They discuss how, without a market for evaluations,
risk-neutral agents will provide a suboptimal level of evaluations because
consumption is nonrivalrous (3) and because one person's experience
might not perfectly predict another's outcome. The nonrivalrous
nature of evaluations could also result in an inefficient ordering of
the evaluators themselves, if the potential evaluators have
heterogeneous opportunity costs for producing an evaluation. The optimal
quantity can also depend on what the early evaluations reveal about the
value of the product.
Avery, Resnick, and Zeckhauser (1999) show that a market mechanism
can be used to solve these various social problems. They consider both
sequential and batch evaluation games and demonstrate that a centralized
broker who knows the pool of values and opportunity costs for the
players can set a price for providing and receiving information so as to
eliminate the social inefficiency. This paper relaxes the assumption
that a central broker holds all of the pertinent information, which, as
a point of practicality, would not hold in an actual implementation of
such a mechanism. Using the experimental method, we study how
efficiently four different market mechanisms voluntarily elicit an
evaluation in exchange for payment, without any centralized information
about the pool of values. This subjects the market mechanisms to the
challenging task of inducing the optimal person to voluntarily provide a
nonrivalrous evaluation. As an initial foray into implementing a market
for evaluations, we consider variations of and then compare the
efficiency and prices for uniform price sealed bid, discriminatory price
sealed bid, English clock auction, and Dutch clock auction.
We follow Avery, Resnick, and Zeckhauser (1999) in considering a
framework in which people have already made the decision to enter a
product market. As a simplifying assumption for this exploratory
research, the individuals in the markets have identical tastes for
whether a product is "good" or "bad" and the same
ability to discern those tastes. Individuals, however, differ in their
values for a "good" or "bad" product. Thus, unlike
Avery, Resnick, and Zeckhauser (1999), a single evaluation perfectly
reveals whether the product is "good" or "bad." This
assumption eliminates the inefficient ordering and optimal quantity
problems and makes the underprovision problem binary, thereby allowing
us to focus on the crucial issue of how the mechanisms aggregate private
information, determine prices, and affect efficiency.
In our controlled test, we find that the provision of evaluations
is markedly inefficient without a market mechanism, though not nearly as
inefficient as predicted. Additionally, we find that each of the
four mechanisms succeeds at increasing market efficiency by encouraging
the socially optimal agent to undertake the costly evaluation when no
one else is willing to do so. Finally, we observe that the four
mechanisms are behaviorally equivalent with respect to the prices
received by the evaluator. The structure of the paper is as follows.
Section 2 presents the experimental design that we consider, and section
3 outlines the treatments and procedures. Section 4 discusses our
results, and section 5 briefly concludes.
2. Experimental Design
Suppose there are two identical risk-neutral individuals
considering reading a book. A particular book could be "good,"
resulting in a payoff of g > 0, or "bad," resulting in a
payoff of b < 0, with |b| > |g|. If each outcome is equally likely,
then both individuals have a
negative expected value for reading the book and hence should opt for
their next best alternative, provided that alternative payoff c is
greater than (g + b)/2. However, it is not necessarily socially optimal
for neither person to read the book. Consider the case in which one
individual decides to read the book and then provides an evaluation to
the other person. In this case, the evaluator expects to receive (g +
b)/2, but the person receiving the evaluation can make a more informed
decision. This informed person will read the book if it is
"good" and receive a payoff of g. However, if the evaluator
reports that the book is "bad," the other person will not read
it, choosing the opportunity cost c over the bad payoff b. Hence the
person receiving the evaluation has an expected payoff of (g + c)/2, and
the social payoff from one person evaluating the book is [(g + b)/2] +
[(g + c)/2], which is greater than 2c so long as c < (2g + b)/3.
The solution to this social problem lies with the creation of a
market that allows one potential consumer to compensate another for
undertaking the costly evaluation. Avery, Resnick, and Zeckhauser (1999)
present a formal treatment of this problem and demonstrate the existence
of an equilibrium price that attains the socially efficient outcome. (4)
This price is paid to the evaluator by the person waiting to make an
informed decision. In the example presented above, the unique
equilibrium price, P, is (c - b)/4. This price is found by equating the
expected payoff of the evaluator to the expected payoff of the person
who waits: P ensures that (g + b)/2 + P = (g + c)/2 - P, thereby
making the two identical people indifferent between evaluating and
waiting.
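As a concrete check of this derivation, the following sketch computes P and verifies the indifference condition; the numerical values of g, b, and c are illustrative assumptions of ours, not parameters from the paper.

    # Illustrative check of the two-person equilibrium price P = (c - b)/4.
    # The values of g, b, and c are hypothetical, chosen only to satisfy
    # g > 0, b < 0, |b| > |g|, and (g + b)/2 < c < (2g + b)/3.
    g, b, c = 100, -120, 24

    # Neither person alone would read the book...
    assert (g + b) / 2 < c
    # ...but one evaluation is socially efficient.
    assert (g + b) / 2 + (g + c) / 2 > 2 * c

    # P equates the evaluator's and the waiter's expected payoffs:
    # (g + b)/2 + P = (g + c)/2 - P  =>  P = (c - b)/4
    P = (c - b) / 4
    assert (g + b) / 2 + P == (g + c) / 2 - P
    print(P)  # 36.0 with these illustrative numbers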
Institutions
Equipped with the theoretical foundation that a market can solve
this problem, the next step is to identify what market institutions
should be implemented in practice. The identification of the market
price in the preceding paragraph requires collective knowledge of three
pieces of information for each individual i, namely $g_i$,
$b_i$, and $c_i$, plus the probabilities that the product or
service is "good" or "bad." In practice, these
pieces of information are typically private and unobservable (or
unverifiable). Thus, one role of a functioning market is to aggregate
private information and coordinate behavior. Therefore, any market
institution must determine (i) who will be the evaluator and (ii) the
price paid or received by each agent. As discussed in Smith (1994), the
institution can significantly influence behavior and therefore market
outcomes. For example, the four most common types of private value
auctions, uniform price sealed bid, discriminatory price sealed bid,
English clock, and Dutch clock, are all theoretically revenue equivalent
under certain assumptions; yet, there is widespread evidence from the
laboratory that these distinct formats elicit different behaviors, which
affect market performance. (5) This paper takes the next step toward
constructing markets for evaluations by developing variants of these
four well-known market institutions and comparing the performance of
each. An important distinction between our environment and that of the
standard private value auction is that the evaluation is nonrivalrous.
(6) Within the controlled confines of the laboratory, these wind-tunnel
tests directly compare the institutions to one another, as well as to a
baseline case in which no market exists.
No-market Baseline
In the baseline case, a fictitious product is available for
evaluation for a limited time, T. If at any point during this time one
of the n individuals consumes and evaluates the product, then the payoff
state, good or bad, is revealed to everyone. As a simplification, if the
product is good, then everyone who waits receives his own good payoff
$g_i$, but if the product is bad, everyone who waits receives his
opportunity cost $c_i$. The evaluator also receives $g_i$ if the
product is good, but when it is bad, the evaluator receives $b_i$.
In this situation, the dominant strategy is to wait and see whether the
others evaluate, regardless of how much time is remaining, assuming that
the opportunity cost is sufficiently high. Formally, let
$\lambda_i$ denote player i's belief about the probability that
someone else will evaluate during the remaining time. The expected
payoff to waiting is $\lambda_i (g_i + c_i)/2 + (1 - \lambda_i) c_i$,
which is greater than $(g_i + b_i)/2$, the expected value of
evaluating, if $\lambda_i > 1 + (b_i - c_i)/(g_i - c_i)$. This
condition holds for any $\lambda_i \geq 0$ whenever
$c_i > (g_i + b_i)/2$ (see the sketch below). We now describe four
distinct institutions for providing an evaluation.
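First, a numerical illustration of the waiting condition above (the parameter values are hypothetical stand-ins, not the experimental parameters):

    # Sketch of the no-market waiting condition; parameter values are
    # hypothetical stand-ins, not the experimental parameters.
    def belief_threshold(g, b, c):
        # waiting beats evaluating iff lambda > 1 + (b - c)/(g - c)
        return 1 + (b - c) / (g - c)

    def waiting_beats_evaluating(g, b, c, lam):
        wait = lam * (g + c) / 2 + (1 - lam) * c
        evaluate = (g + b) / 2
        return wait > evaluate

    g, b, c = 100, -120, 24           # satisfies c > (g + b)/2
    print(belief_threshold(g, b, c))  # negative, so any belief suffices
    print(waiting_beats_evaluating(g, b, c, 0.0))  # True even at lam = 0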
Uniform Price Sealed Bid Auction
For the uniform price sealed bid auction in an independent private
value environment, each bidder privately submits a bid. The bids are
arrayed from highest to lowest, and the winners are the bidders who
submitted the highest bids. Each winner pays the same price, which is
determined by the lowest winning bid and/or the highest losing bid. In
a market for a nonrivalrous evaluation, each of the n agents submits a
single bid $\beta_i$. This bid represents the minimum amount the
person is willing to accept for evaluating the fictitious product. Once
the n bidders have submitted their bids, the bids are ranked in
ascending order. Let $\underline{\beta}$ and $\hat{\beta}$ denote,
respectively, the lowest and second-lowest submitted bids. The agent
submitting bid $\underline{\beta}$ is chosen as the evaluator, and the
price he receives for evaluating is $(\underline{\beta} + \hat{\beta})/2$.
The other n - 1 agents wait for the evaluation, and each pays an equal
portion of the price. The price paid by each individual for the
information is $(\underline{\beta} + \hat{\beta})/[2(n - 1)]$. Hence,
when entering a bid, agent i knows that $\beta_i/(n - 1)$ is the
maximum amount she will have to pay for the evaluation. In the event
that multiple agents submit the lowest bid, the evaluator is chosen
randomly from that subset of bidders.
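A minimal sketch of the allocation and pricing rule just described, under the assumption that bids are submitted simultaneously (the function and variable names are ours, not the laboratory software's):

    import random

    # Uniform price rule: the lowest bidder evaluates and is paid the
    # average of the two lowest bids; each waiter pays an equal share.
    def uniform_price_outcome(bids):
        n = len(bids)
        low = min(bids)
        candidates = [i for i, b in enumerate(bids) if b == low]
        evaluator = random.choice(candidates)  # ties broken randomly
        second = sorted(bids)[1]               # second-lowest bid
        price = (low + second) / 2             # paid to the evaluator
        per_waiter = price / (n - 1)           # paid by each waiter
        return evaluator, price, per_waiter

    # Example: agent 2 evaluates and receives (50 + 60)/2 = 55.
    print(uniform_price_outcome([60, 80, 50, 70]))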
Discriminatory Sealed Bid Auction
Like the uniform price sealed bid auction, in this auction, each
bidder privately submits a bid, and the winners are the bidders who
submit the highest bids. However, in the independent private values
environment, each winner pays a price equal to his own bid. In an
evaluations market, a bid $\beta_i$ in this institution represents
the minimum price that agent i will be paid to evaluate, and
$\beta_i/(n - 1)$ represents the maximum amount that agent i might
have to pay for the information, as in the uniform price auction. Again,
the bids are ranked from lowest to highest, and the person submitting the
lowest bid is chosen to evaluate the product. However, the price paid by
each agent who waits depends on his own bid and the evaluator's bid.
Let j denote the person who submitted the lowest bid $\beta_j$. Each
agent $i \neq j$ pays $(\beta_j + \beta_i)/[2(n - 1)]$ to the
evaluator. The total amount paid to agent j, the evaluator, is
$\beta_j/2 + \sum_{i \neq j} \beta_i/[2(n - 1)]$. Again, ties are
broken randomly.
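The discriminatory payment rule can be sketched the same way (again an illustration with names of our choosing):

    import random

    # Discriminatory rule: the lowest bidder j evaluates; each waiter i
    # pays (beta_j + beta_i)/(2(n - 1)), and the evaluator collects the sum.
    def discriminatory_outcome(bids):
        n = len(bids)
        low = min(bids)
        j = random.choice([i for i, b in enumerate(bids) if b == low])
        payments = {i: (low + b) / (2 * (n - 1))
                    for i, b in enumerate(bids) if i != j}
        total = sum(payments.values())  # equals low/2 + sum(others)/(2(n-1))
        return j, payments, total

    # Example: agent 2 (bid 50) evaluates and collects 60 in total.
    print(discriminatory_outcome([60, 80, 50, 70]))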
Descending (English) Clock Auction
An English clock auction with independent private values begins
with a price low enough that many bidders are willing to pay it. As long
as more bidders are willing to pay the current price than there are
units for sale, the price increases. Once there is no excess demand at
the current price, the auction ends and all remaining bidders buy at the
final price. An English clock auction for evaluations is also
operationalized by setting an initial price, such that multiple agents
are willing to accept the proposed price, and then moving the price in
the less favorable direction until all but one agent drops out of the
auction. Because an evaluation market is attempting to procure an
evaluation, the process works in the reverse of that in an independent
private value environment. The initial price on the clock is set
sufficiently high such that multiple agents are willing to evaluate the
fictitious product and then the price falls until all but one person has
indicated a preference to wait for the information (i.e., withdraw from
the market and not evaluate) at the current price on the clock. Once a
bidder signals to wait, he cannot re-enter the market for that
period. The number of active bidders is not publicly stated as the price
decrements and bidders withdraw. The clock price, like the bid amount in
the sealed bid institutions, refers to the amount that the evaluator
receives. Those who wait pay 1/(n - 1) of the final clock price. Again,
ties are settled randomly. Unlike the sealed bid institutions, this
mechanism requires up-front parameterization in the form of a starting
price, the amount by which the clock decrements, and the rate of time
for the decrement. Additionally, a stopping rule is imposed in the event
that the clock price reaches zero with at least two agents still in the
market. Because a clock price of zero indicates that at least two agents
are willing to undertake the evaluation for no payment, one of these
agents is randomly selected to provide the information for free to the
remainder of the group. It is worth noting that although truthful
revelation is a dominant strategy in the independent private value
environment, this is not the case in an evaluations market because the
price at which the penultimate bidder exits the market affects the price
that bidder will pay for the information.
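A stylized simulation of this descending clock, including the tie-breaking and zero-price stopping rules, might look as follows; the reservation values and the one-unit-per-tick decrement are assumptions for illustration only.

    import random

    # Descending clock sketch: reservations[i] is the lowest total payment
    # at which agent i would still evaluate rather than wait.
    def descending_clock(reservations, start_price):
        active = set(range(len(reservations)))
        for price in range(start_price, -1, -1):
            drops = {i for i in active if reservations[i] > price}  # click Wait
            stay = active - drops
            if len(stay) <= 1:
                # all but (at most) one have exited; ties broken randomly
                evaluator = stay.pop() if stay else random.choice(sorted(drops))
                return evaluator, price
            active = stay
        # clock hit zero with two or more agents still in: free evaluation
        return random.choice(sorted(active)), 0

    # Example: one of the two agents with reservation 54 evaluates at 53.
    print(descending_clock([70, 90, 54, 54], start_price=84))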
Ascending (Dutch) Clock Auction
As with an English clock auction, a Dutch clock auction also
involves systematically changing the price until a winner is declared.
With this auction, the price is initially set such that nobody is
willing to accept the transaction. The price is then gradually improved
until some agent accepts the terms. With independent private values, the
price begins high and then is lowered until a bidder accepts the current
price. However, in the market for evaluations the price is set
sufficiently low such that everyone wants to wait initially, then the
price is increased until the first agent agrees to undertake the
evaluation for the price shown on the clock. Again, the clock price
refers to the price received by the evaluator. Each of the n - 1 agents
who did not indicate a willingness to evaluate pays 1/(n - 1) of the
amount received by the evaluator. This institution also requires
additional parameters for the starting price, minimum amount of the
price increment, and rate at which the price is incremented.
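A companion sketch for the ascending clock, with the same caveats (the reservation values and tick size are ours):

    import random

    # Ascending clock sketch: the price rises from a low start until the
    # first agent accepts it; simultaneous acceptances tie randomly.
    def ascending_clock(reservations, start_price, max_price=10_000):
        for price in range(start_price, max_price + 1):
            takers = [i for i, r in enumerate(reservations) if r <= price]
            if takers:
                return random.choice(takers), price  # clicked Evaluate
        return None  # no one accepted below max_price

    # Example: an agent with reservation 54 evaluates at a clock price of 54.
    print(ascending_clock([70, 90, 54, 54], start_price=24))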
Parameters
We chose to compare the institutions with n = 4 participants in
each market. The experimental literature is rife with examples of
markets in which four sellers or four bidders can be quite
competitive, depending on other details of the environment (see, e.g.,
Cox, Roberson, and Smith 1982; Isaac and Walker 1985; Isaac and Reynolds
2002; Thomas and Wilson 2002; and Deck and Wilson 2003). Table 1 reports
the good values, bad values, and opportunity costs for each of four
participants in a market. We continue to assume that the product is good
or bad with a 50% probability.
Each group of four agents contains one type 1 agent, one type 2 agent,
and two type 3 agents. (7) Each agent has an expected value of -10 for
evaluating, which is less than the common opportunity cost of 24.
However, agents differ in their expected values from another agent
evaluating. Table 2 lists the expected social surplus depending on which
agent, if any, evaluates.
The socially efficient payoff is for one of the two type 3 agents
to undertake the evaluation, and because there are two type 3 agents, a
unique price exists that supports this outcome, assuming all agents are
risk neutral. (8) Because the type 3 agents are identical, the
equilibrium price is such that the two type 3 agents are indifferent
between evaluating and waiting. That is, the price structure satisfies
$(g_3 + b_3)/2$ + (price received for evaluating) = $(g_3 +
c_3)/2$ - (price paid for information). When the price paid for
information is 1/(n - 1) times the price received by the evaluator, the
equilibrium price that supports the efficient outcome is 54 for the
parameters in Table 1. This price prediction is applicable to the
uniform price sealed bid, descending English clock auction, and
ascending Dutch clock auction because each agent that waits pays the
same price. However, this price prediction does not hold for the
discriminatory sealed bid auction because the choices of the type 1 and
type 2 agents will affect the price the evaluator receives. Furthermore,
an explicit price prediction would depend on the beliefs bidders have
about the likely bids and the values of others. (9) Nevertheless, as an
exploratory exercise we include it in our comparison of the other three
institutions.
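The prediction of 54 follows directly from the Table 1 parameters; the sketch below simply restates the type 3 indifference condition in code, using exact arithmetic.

    from fractions import Fraction

    # Type 3 indifference: (g3 + b3)/2 + P = (g3 + c3)/2 - P/(n - 1)
    g3, b3, c3, n = 100, -120, 24, 4
    P = Fraction((g3 + c3) - (g3 + b3), 2) / (1 + Fraction(1, n - 1))
    print(P)  # 54

    # Both sides of the indifference condition equal 44 at P = 54.
    assert Fraction(g3 + b3, 2) + P == Fraction(g3 + c3, 2) - P / (n - 1)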
We chose the payoffs for the type 1 and 2 agents so that they have
the same expected value for evaluating as the type 3 agents, but if a
type 1 or 2 agent evaluates, the expected social loss is nontrivial and
dependent on which of the two evaluates. The opportunity cost of 24
satisfies the desirable property that $(b_i + g_i)/2 < 24$ for
each agent type. It also creates an essential separation between the
social payoff in the case in which no one evaluates and the efficient
case, while maintaining the property that a person who receives one good
payoff, one bad payoff, and one opportunity cost payoff will experience
nonnegative earnings. (10) Also, an opportunity cost of 24 leads to an
integer price prediction that is not a natural focal point.
As discussed above, some of the institutional treatments also
require parameterization. In the no-market baseline, the time available
for product evaluations is T = 30 seconds. Both clock institutions
require an increment and an initial price. The clock increment/decrement
is 1 and updated every second. The initial prices are set such that, a
priori, the mechanisms would last as long as the baseline no-market
treatment. In the no-market baseline, no one should evaluate and a
period lasts 30 seconds. Therefore, the starting price in the English
clock auction is 54 + 30 = 84, and the starting price in the Dutch
auction is 54 - 30 = 24 (which is conveniently equal to the opportunity
cost). For the two sealed bid institutions, subjects also have 30
seconds to enter their bids. If a subject does not enter a bid in the
allotted time, then his previous bid serves as the default. (11) For the
payoffs to be comparable across treatments, we desired that each
laboratory session consist of the same number of periods and thus each
should last approximately the same amount of time.
3. Experimental Procedures
A market consisted of four subjects who were anonymously matched
with one another throughout the 48 decision periods of the
experiment. Subjects retained the same agent type each period and never
knew the payoff parameters associated with the other subjects or even
the distribution of those parameters. Before the experiment began, each
subject was given a set of written instructions. After all subjects
completed the instructions and had the opportunity to ask questions, the
computerized experiment began.
For the first 24 periods, all subjects, regardless of institution
treatment, participated in the no-market baseline. This ensures that
before introducing the market mechanism the subjects have substantial
experience with the payoff implications for evaluating and not
evaluating with good and bad values. When making decisions in the
baseline environment, subjects knew only their own payoff parameters and
the time remaining in the period.
After each period, subjects received feedback about their own
payoff and whether the product was good or bad that period if and only
if someone evaluated. Subjects were not told who evaluated or anyone
else's payoff. At any point during the experiment, regardless of
treatment, a subject could scroll through a table that displayed for all
previous periods their own action; whether or not someone else had
evaluated; the payoff state, if revealed; and their own profit. After the
first 24 periods were completed, subjects in the baseline treatment
continued in this environment for an additional 24 periods. Subjects in
the other four treatments were given additional written instructions
about a single market mechanism that would be in place for the remaining
24 periods. After all subjects read the instructions and had the
opportunity to ask questions, the computerized experiment resumed. In
addition to the information revealed in the baseline case, subjects in
the four market treatments were told the price they paid for waiting or
the price received for evaluating. This design constitutes a nontrivial
challenge to the auction mechanisms. First, every subject is identical
in expected value terms (-10) for not evaluating, and second, subjects
have no information at all on the values of the other participants in
the market. Subjects were not told the number of periods in the
experiment or a portion thereof, nor were they informed in advance that
a mechanism would be imposed in the latter part of the session.
A total of 25 sessions were conducted, five for each of the five
treatments. We held constant across all sessions a random sequence of 48
good and bad value states; that is, one sequence was randomly determined
in advance and then employed in all sessions. This serves to reduce the
variation across sessions.
The 100 participants in this study were undergraduates from the
general student population at George Mason University, where the
experiments were conducted. For participating in the one-hour
experiment, each subject was paid $7 for showing up on time, plus his or
her salient earnings. All payoffs, parameters, and prices were denoted
in terms of experimental dollars (EXP). At the conclusion of the
experiment, a subject's cumulative profit was converted into U.S.
dollars at the rate of EXP 200 = US$1, which was stated to the subjects
before the beginning of the experiment. (12) The average earnings in the
experiment were approximately $15.25, excluding the $7 show-up fee.
4. Experimental Results
The data consist of observed behavior in 720 periods under the
no-market baseline and 120 periods under each of the four market
mechanisms. We found that each of the market mechanisms was successful
at increasing efficiency relative to the baseline by increasing the
frequency with which the optimal agent type evaluated. This result is
presented as a series of findings, each with supporting analysis that
treats each session as an independent observation. To control for
learning, the analysis focuses exclusively on data from the latter half
of the periods in a particular institution (periods 13-24 and 37-48).
The first 24 periods in each session consist solely of the baseline
situation. If observed differences in the latter part of the experiment
are attributable to the institution and not subject heterogeneity,
behavior should be similar across the groups before the auction
mechanisms are introduced. The first finding is largely a calibration result demonstrating that subject behavior is indeed similar across all
treatments before the implementation of a market mechanism. To compare
the choices of individuals, and hence performance of the five
institutions, we use the metric of average ex ante efficiency over the
appropriate periods within a session. This is a measure of the expected
social welfare conditioned on the frequency with which agents of each
type, if any, evaluated. Thus, two sessions in which the same
numbers of each type undertook the evaluation would be considered as
performing identically, even though realized surplus might vary across
the sessions depending on who evaluated when the product was good or
bad. Because each session is independent of the others, this metric
allows for a comparison of independent observational units.
FINDING 1. Ex ante efficiency is statistically indistinguishable
across all five treatments in the first half of the experiment before an
auction mechanism is introduced.
SUPPORT. For the null hypothesis that ex ante efficiency in the
initial no-market phase of each session did not differ by treatment, we
employ a Kruskal-Wallis test on the 25 average ex ante efficiency
observations (one for each session) for periods 13-24. The test
statistic, corrected for ties in the ranking, is 0.467, so the null
hypothesis cannot be rejected at any standard level of significance.
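For readers who want to reproduce this style of test, a hedged sketch using scipy follows; the efficiency numbers are made-up placeholders, not the experimental data.

    from scipy import stats

    # One average ex ante efficiency per session, grouped by treatment
    # (five sessions each). These numbers are placeholders.
    baseline       = [0.62, 0.55, 0.70, 0.58, 0.66]
    uniform        = [0.60, 0.64, 0.57, 0.68, 0.61]
    discriminatory = [0.59, 0.63, 0.66, 0.56, 0.65]
    english        = [0.61, 0.58, 0.67, 0.60, 0.64]
    dutch          = [0.57, 0.65, 0.62, 0.59, 0.68]

    # scipy's kruskal applies the tie correction automatically.
    H, p = stats.kruskal(baseline, uniform, discriminatory, english, dutch)
    print(H, p)  # a small H, like the paper's 0.467, cannot reject the null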
The frequency with which subjects choose to evaluate is nontrivial
in periods 13-24. Figure 1 illustrates when the subjects choose to
evaluate during the 30-second period without a market mechanism in
place. The hatched bar indicates how many times no one evaluated, and
the solid bars indicate how many evaluations occurred in each of the six
five-second blocks of time. Notice that when a subject evaluates, it is
most often within the first 5 or last 5 seconds of the period. This
suggests that our choice of a 30-second period is not too short because
subjects choose either to evaluate quickly or to wait until the end of
the period. Having established that the sessions do not differ before
the implementation of a market mechanism, any differences across
treatments can be attributed directly to the institutions. Therefore,
our focus now turns to the effect of the market mechanisms as observed
over the last 12 periods of each session.
[FIGURE 1 OMITTED]
FINDING 2. The introduction of a market mechanism significantly
increases efficiency.
SUPPORT. Figure 2 displays the average ex ante efficiency over the
last 12 periods by session. We use the Wilcoxon rank sum test (W
statistic) to determine whether each mechanism improves efficiency
relative to the no-market baseline. In each pairwise comparison, the W
statistic was 40, the largest value possible. Thus, the null hypothesis
of no change in efficiency can be rejected at the 99% confidence level
in favor of the alternative that the mechanism improved efficiency for
each of the four market mechanisms considered.
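The pairwise comparisons can be sketched analogously (again with placeholder data); note that scipy's ranksums reports a normal-approximation z statistic rather than the raw rank sum W of 40.

    from scipy import stats

    # Placeholder session-level efficiencies, five per treatment.
    baseline  = [0.62, 0.55, 0.70, 0.58, 0.66]
    mechanism = [0.88, 0.92, 0.85, 0.90, 0.95]

    # One-sided test that the mechanism sessions are more efficient.
    z, p = stats.ranksums(mechanism, baseline, alternative='greater')
    print(z, p)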
[FIGURE 2 OMITTED]
The above analysis clearly shows that implementing a market for
evaluations increases the expected efficiency. The next finding explores
the differences between the four market mechanisms in terms of
efficiency. A separate finding then addresses the source of the
institutions' success.
FINDING 3. The four market mechanisms are statistically
indistinguishable with respect to efficiency.
SUPPORT. Given no a priori ordering of ex ante efficiency by
institution treatment, we employ a Kruskal-Wallis test to test the null
of no difference by mechanism against the alternative that efficiency
differs for some institution. Adjusting for ties in the efficiency
rankings of individual sessions, the test statistic is 0.352, which
means that the null hypothesis cannot be rejected at standard levels of
significance.
Market mechanisms necessarily assign one person to evaluate the
fictitious product in periods 25-48. For example, in either sealed bid
auction someone must submit the lowest bid. Given the payment schemes,
participation is individually rational for each agent. This is a
distinct advantage of a market because it avoids the worst-case scenario in terms of ex ante efficiency, namely no one evaluating, which can and
does happen without a market. It is reasonable to ask whether the
greater efficiency is due simply to this aspect of the design. To answer
this question, we determine what efficiency would have been in the
no-market baseline if one person had been randomly selected as the
evaluator when no one volunteered. With this more conservative
accounting for the no-evaluation outcomes, efficiency with the four
market mechanisms is still statistically greater than efficiency in the
no-market baseline, although the increase is only 5.3 percentage points.
Formally, let $M_j$ denote the frequency over periods 37-48 with which
no one evaluated in session j in the no-market baseline groups, and let
$m_{ij}$ denote the frequency with which an agent of type i was observed
to evaluate voluntarily over the same periods in session j. For this
conservative test, we recalculate the ex ante efficiency for the
no-market baseline treatment by allocating one agent, in proportion to
the types listed in Table 1, to be the evaluator for each of the periods
in which no one voluntarily evaluated. More specifically, the frequency
of type i agents evaluating in a no-market baseline session is imputed
to be $m_{ij} + \theta_i M_j$, where $\theta_i = 0.5$ if $i = 3$ and
$\theta_i = 0.25$ for $i = 1$ and $i = 2$. Let $E_i$ denote the ex ante
efficiency when a type i agent evaluates (see Table 2). The recalculated
ex ante efficiency for baseline session j is thus
$\sum_i (m_{ij} + \theta_i M_j) E_i$.
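The recalculation amounts to a weighted sum; a sketch with a hypothetical baseline session (the frequencies below are ours, not observed data):

    # Conservative efficiency recalculation: no-evaluation periods are
    # reassigned an evaluator in proportion to the type shares.
    E = {1: 0.56, 2: 0.76, 3: 1.00}      # ex ante efficiency by type (Table 2)
    theta = {1: 0.25, 2: 0.25, 3: 0.50}  # type shares among four agents

    def recalculated_efficiency(m, M):
        """m[i]: frequency type i evaluated; M: frequency no one did."""
        return sum((m[i] + theta[i] * M) * E[i] for i in (1, 2, 3))

    # Hypothetical session: types 1, 2, 3 evaluated in 10%, 10%, and 40%
    # of periods, and no one evaluated in the remaining 40%.
    print(recalculated_efficiency({1: 0.10, 2: 0.10, 3: 0.40}, 0.40))  # 0.864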
We use a Wilcoxon rank sum test to compare the recalculated
efficiency of the baseline to the observed aggregated efficiency of the
market treatments. The null hypothesis of no effect from a market can be
rejected at the 95% confidence level in favor of the alternative
hypothesis that the market mechanisms increase ex ante efficiency (W =
38, p = 0.0331). This demonstrates that an auction market increases
efficiency by more than would be expected from merely randomly and
involuntarily assigning one person to evaluate. (13) It should be
emphasized that in the market mechanism, the subjects are volunteering
to evaluate because they each have the choice in their bid to indicate
their willingness to evaluate or wait.
Although someone will be chosen to evaluate with the market
institutions, these auctions do not always induce the optimal agent to
evaluate in each period, as shown in Figure 2. Also, it is not the case
that type 1 and 2 agents are never willing to undertake the evaluation
for free. In fact, a suboptimal agent evaluated 30% of the time on
average over the last 12 periods in the no-market baseline treatment.
Thus, the extent to which the markets succeed remains to be explained.
Our conjecture is that the performance increase is due to the mechanisms
encouraging type 3 agents to evaluate when no one else will. Finding 4
discusses this explanation formally.
FINDING 4. The auction mechanism induces the optimal agent to
undertake the evaluation when no one else is willing to do so, thereby
increasing efficiency.
SUPPORT. Figure 3 illustrates the qualitative support for this
finding. The hatched (solid) bars for each treatment indicate the
frequency with which each agent type evaluated in periods 13-24 (37-48).
The "None" category specifies how many times no agent
voluntarily evaluated and contains the same data displayed in Figure 1
with the hatched bars. In the no-market baseline treatment, all types of
agents reduce their evaluations, most noticeably the type 1's (the
least efficient evaluators). In marked contrast, the number of
evaluations by type 3 agents increases substantially for all of the
market institutions and generally by the number of times no one is
willing to evaluate in the no-market periods 13-24. This finding is
supported quantitatively by a comparison of observed market behavior
with behavior in the no-market baseline in which type 3 agents are
assumed to have evaluated if no one else volunteered. For brevity, this
requires an imputation similar to that discussed following Finding 3.
Specifically, the frequency of type i evaluating in no-market baseline
session j was recalculated as $m_{ij} + \phi_i M_j$, where
$\phi_i = 1$ if $i = 3$ and $\phi_i = 0$ otherwise. Because the
conjecture involves the likelihood that a type 3 agent evaluated, the
employed metric is the frequency with which a type 3 agent evaluated
over the last 12 periods, not ex ante efficiency. We do not suppose
that the four institutions are identical under this metric. Thus, a
Kruskal-Wallis test is used to test the null hypothesis that the
frequency of type 3 evaluations is the same across all five treatments
against the alternative hypothesis that this frequency differs for some
treatment. The tie-corrected rank test statistic is
$\chi^2_3 = 0.352$, so the null hypothesis cannot be rejected
(p = 0.9499). Hence, the mechanisms encourage the appropriate agents to
undertake the evaluations. (14)
[FIGURE 3 OMITTED]
With strictly private information on values and heterogeneous
attitudes toward risk, the market mechanism induces the socially optimal
person to evaluate the product. This is nontrivial in that no explicit
cues from the environment are driving this behavior. Rather, it is the
incentives created by the market mechanism that lead the socially
optimal agent to provide an evaluation when without a market such an
agent would not. Note that the market mechanism could also have induced
other agents to evaluate: agents who have the same risk-neutral expected
values for evaluating and for not evaluating the product. But this is
not what we find.
Many experimental studies have shown that efficiency is quite high,
generally greater than 95%, for uniform and discriminatory sealed bid
and Dutch and English clock auctions with independent private values.
Prices, however, often differ substantially by auction format, even
though all four institutions are theoretically revenue equivalent with
risk-neutral bidders (see footnote 5 for references). Our next finding
compares the prices received by the evaluator for each institution. We
employ a linear mixed effects model for analyzing the data with repeated
measures as the basis for quantitative support. (15) Sessions are
indexed by j = 1, ..., 20 and periods by t = 37, ..., 48. This
parametric estimation treats each session as a random effect
$\epsilon_j$ and each institution as a fixed effect. The dependent
variable $Price_{jkt}$ is the price received by evaluator k in
period t of session j. Within the session random effect, we also
include a random effect $e_k$ for the subject k within the session
who submits the winning bid to be the evaluator. Specifically, we
estimate the model

$Price_{jkt} = \beta_0 + \beta_1 Descending_j + \beta_2 Uniform_j
+ \beta_3 Discriminatory_j + \epsilon_j + e_k + u_{jkt}$,

where $\epsilon_j \sim N(0, \sigma^2_\epsilon)$,
$e_k \sim N(0, \sigma^2_e)$, and $u_{jkt} \sim N(0, \sigma^2_u)$. (16)
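The paper does not say which software was used; as one possible way to estimate such a specification today, here is a hedged sketch with statsmodels and synthetic stand-in data (all column names and values are our own):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: 20 market sessions x 12 periods.
    rng = np.random.default_rng(0)
    rows = []
    for j in range(20):
        treat = ["dutch", "english", "uniform", "discriminatory"][j % 4]
        for t in range(37, 49):
            rows.append({
                "Price": 54 + rng.normal(0, 20),
                "Descending": int(treat == "english"),
                "Uniform": int(treat == "uniform"),
                "Discriminatory": int(treat == "discriminatory"),
                "session": j,
                "evaluator": f"{j}-{rng.integers(4)}",  # winning bidder id
            })
    df = pd.DataFrame(rows)

    # Session random effect via groups; evaluator-within-session random
    # effect via a variance component.
    model = smf.mixedlm(
        "Price ~ Descending + Uniform + Discriminatory",
        data=df,
        groups="session",
        vc_formula={"evaluator": "0 + C(evaluator)"},
    )
    print(model.fit().summary())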
FINDING 5. The null hypothesis of identical market prices across
all four market mechanisms cannot be rejected. Additionally, the
theoretical risk-neutral price prediction lies within the confidence
interval at any standard level of significance.
SUPPORT. Table 3 reports the fixed effects parameter estimates with
the ascending Dutch clock auction serving as the basis for comparison.
The lack of statistical significance of the $\beta_1$, $\beta_2$,
and $\beta_3$ fixed effects indicates that, on average, the
descending English clock, uniform price sealed bid, and discriminatory
price sealed bid auctions each result in the same price as the
ascending Dutch clock auction. The second part of the finding is
supported by a t-test of the null hypothesis that $\beta_0 = 54$
against the two-sided alternative. The test statistic is 0.59 with 178
degrees of freedom, so the null hypothesis cannot be rejected at
standard levels of significance.
Although the average price is statistically indistinguishable under
each treatment, it is important to notice that there is considerable
variation in observed prices. Figure 4 depicts this price variability.
Over the last 12 periods, one session in each treatment has a median
price less than 35. Also, over the last 12 periods, one session in each
treatment has a median price over 75. Overall, the lowest median price
in a session is 3.25, and the highest is 109.17. Even after controlling
for variation from the random effects of sessions and the evaluators
within each session, considerable variation in the observed prices
remains, as evidenced by the size of the standard error on
$\beta_0$ reported in Table 3. This suggests that although the
theoretical price may characterize the central tendency of behavior in
these institutions, people are not behaving in strict accordance with
the prediction. This can also be gleaned from Figure 2: if each subject
behaved as theoretically predicted, then the ex ante efficiency in
sessions with an operating market institution would be 100%, which it
clearly is not.
[FIGURE 4 OMITTED]
The same heterogeneity among subjects in terms of willingness to
evaluate that explains the relatively low efficiency could also
contribute to the price variability. Of the 100 subjects in the
experiment, 12 clearly enjoyed providing the evaluation, taking on the
risk at least 50% of the time in periods 13-24, the last 12 periods
before a market mechanism was introduced. (17) Table 4 compares the
frequency with which these risk takers evaluated when a market was in
operation with the frequency of evaluation by others of the same type in
the same institution. (18) Except in two cases, subjects who frequently
volunteered to provide an evaluation for free were also more likely to
be paid for providing an evaluation than subjects in similar situations.
But we also note that of the six type 1 and type 2 (inefficient) agents
who evaluated more than 50% of the time in the premarket periods, only
one evaluated more than 50% of the time after a market mechanism was
introduced.
5. Conclusion
As demonstrated by Avery, Resnick, and Zeckhauser (1999), a pricing
mechanism for product evaluations can increase efficiency by eliciting
a voluntary evaluation that would otherwise not be provided. With a
controlled laboratory experiment, we evaluate the performance of four
market mechanisms for providing product evaluations: uniform price
sealed bid, discriminatory price sealed bid, descending (English) clock
auction, and ascending (Dutch) clock auction. Our results indicate for
this nonrivalrous information good that (i) each of these institutions
improves social welfare and (ii) the four mechanisms are behaviorally
equivalent with respect to the prices received by the evaluator. Each of
the four institutions improves efficiency by inducing the socially
optimal agent to undertake the evaluation when no one else is willing to
do so. In our design, this is not an easy feat. All of the subjects had
strictly private information on their values and the same (negative)
expected value for evaluating the good. Moreover, the subjects might
have heterogeneous attitudes toward risk, and yet, a market mechanism
leads the socially optimal agent to voluntarily provide an evaluation
when without a market such an agent would not. However, it is not the
case that the four market mechanisms achieve 100% efficiency. There is
heterogeneity in our subjects, leading some socially suboptimal agents
to evaluate. This is consistent with naturally occurring voluntary
evaluations, such as book evaluations provided (and not provided) by
Amazon.com.
These efficiency and price results contrast with the standard
experimental results of these auction formats for independent values. In
laboratory auctions with independent private values, prices clearly
separate with a relatively small variance across sessions, and
efficiency is consistently greater than 95%. In those experiments,
buyers' values and surplus are induced with certainty each period.
In the case of a nonrivalrous good such as an evaluation, there is risk
in the product value, which might be a source of the price variability that
we find here. The large heterogeneity in willingness to evaluate often
led agents that were suboptimal from an expected payoff perspective to
evaluate. Therefore, the possible efficiency improvement that the market
mechanisms could generate was limited. This might account for the
similar efficiency performance across the four institutions. Collective
or individual heterogeneity of subjects, uncertainty about the outcome,
and the nonrivalrous nature of evaluations could explain why the market
outcomes here differ from standard auction results in the laboratory. Of
course, only a systematic analysis of these differences can reveal the
explanation, but that we leave to future research.
Appendix
Experiment Instructions
This is an experiment in the economics of decision making. Various
research foundations have provided funds for this research. The
instructions are simple, and if you understand them, you may earn a
considerable amount of money that will be paid to you in CASH at the end
of the experiment. Your earnings will be determined partly by your
decisions and partly by the decisions of others. If you have questions
at any time while reading the instructions, please raise your hand and
a lab monitor will assist you.
This is what your screen will look like in the experiment. In each
period of the experiment you will be matched with three other people,
your counterparts. All four of you have a decision to make: either
evaluate a fictitious product or wait for a counterpart to do so. Your
payoff will be determined in part by the decisions you and your
counterparts make.
You and your counterparts for the period will have 30 seconds to
decide if you want to evaluate. At any point during the 30 seconds, you
or your counterparts can click on the Evaluate button. Only one of you
can evaluate during a period. If the clock expires without any of you
choosing to evaluate, then all of you have waited for that period.
How is your payoff determined? The fictitious product will either
be Good or Bad. When you and your counterparts are making your decision
to either evaluate or not, you will not know if the product is good or
bad. You and your counterparts will know whether it is good or bad only
after one of you has chosen to evaluate. There is a 50% chance that the
fictitious product will be Good and a 50% chance that it will be Bad.
Your payoff depends on whether the product is Good or Bad, if you or a
counterpart has chosen to evaluate. If any of you evaluates the product,
then your payoff is higher if the product is Good than if it is Bad.
Now we will go through an example of how to read the payoffs which
are listed in the table. The payoffs in these instructions are for
illustrative purposes only. The payoffs in the experiment will be
different from those displayed here.
Suppose that you chose to evaluate. Then your payoff would be
determined by the first row of the table. The payoffs depend on whether
the product is Good or Bad. If the product is Good, your payoff would be
140. If the product is Bad, your payoff would be -180.
Suppose that while you are waiting, your counterpart chooses to
evaluate. In this case the second row displays your payoff. If the
product is Good, your payoff would be 140. If the product is Bad, your
payoff would be 30.
Finally, if none of you decides to evaluate, then your payoff is
shown on the bottom row. Notice that your payoff will be 30, regardless
of the product being Good or Bad. None of you will know the state of the
product that period because none of you chose to evaluate it.
At the end of the period you will have 5 seconds to review the
results. At the end of that time, the clock will reappear and the next
period will begin. Your counterparts' payoffs may or may not be the
same as yours.
At the end of the experiment, your experimental dollars will be
converted into cash at the rate of 200 experimental dollars for US$1.
Any questions? If not, please raise your hand to indicate that you
have finished reading the instructions.
Uniform Price Sealed Bid Auction Instructions
For the next portion of the experiment, the way you and your
counterparts' payoffs are determined based on the fictitious good
will remain the same, but all of you will be bidding to pay to Wait and
to get paid to Evaluate. In each period of the experiment you will
continue to be matched with the same three counterparts.
This is what your screen will look like in the next portion of the
experiment. Each period there will be three people who will wait and one
person who will evaluate. Whether you wait or evaluate depends upon the
bids that you and the other three people submit.
At the beginning of each period you will submit a bid for the
"Price Received to Evaluate." This is the amount you are
willing to be paid to evaluate. One third of this bid also serves as the
price you are willing to pay to wait.
Once all of the bids have been submitted, the computer ranks the
bids from lowest to highest. The person who is chosen to evaluate is the
person who submits the lowest bid. (Any ties will be broken randomly.)
This person will receive as payment the average of her bid and the
second lowest bid. Hence, the person who evaluates will always be paid
at least as much as the bid that she submitted. This amount will be
recorded in the "Price" column and will be added to the
"'My Payoff" column for that period.
The three people who submitted the three highest bids will each pay
1/3 of the amount received by the evaluator. This amount will be
recorded in the "Price" column and will be subtracted from the
"My Payoff" column for that period. Notice that these throe people will not pay more than the amount they each submitted as the
"Price Paid to Wait."
Just as before, the evaluator will also receive a payoff depending
upon whether the product is Good or Bad.
At the end of the period you will have 5 seconds to review the
results. At the end of that time, the next period will begin. If you do
not submit a new bid before the time on the clock expires, the computer
will use last period's bid as the bid for the current period.
At the end of the experiment, your experimental dollars will also
be converted into cash at the rate of 200 experimental dollars for US$1.
Any questions? If not, please raise your hand to indicate that you
have finished reading the instructions.
Discriminatory Price Sealed Bid Auction Instructions
For the next portion of the experiment, the way you and your
counterparts' payoffs are determined based on the fictitious good
will remain the same, but all of you will be bidding to pay to Wait and
to get paid to Evaluate. In each period of the experiment you will
continue to be matched with the same three counterparts.
This is what your screen will look like in the next portion of the
experiment. Each period there will be three people who will wait and one
person who will evaluate. Whether you wait or evaluate depends upon the
bids that you and the other three people submit.
At the beginning of each period you will submit a bid for the
"Price Received to Evaluate." This is the amount you are
willing to be paid to evaluate. One third of this bid also serves as the
price you are willing to pay to wait.
Once all of the bids have been submitted, the computer ranks the
bids from lowest to highest. The person who is chosen to evaluate is the
person who submits the lowest bid. (Any ties will be broken randomly.)
The three people who submitted the three highest bids will each pay
the evaluator. The amount that each waiter pays is determined in two
steps. First, take the average of his own bid and the lowest bid. This
amount is then multiplied by 1/3 (or, equivalently, divided by 3) because
each of the three waiters pays the evaluator. This amount will be
recorded in the "Price" column and will be subtracted from the
"My Payoff" column for that period. Notice that these three
people will not pay more than the amount they each submitted as the
"Price Paid to Wait."
The evaluator will receive as payment each of the amounts paid by
the three waiters. Notice that the evaluator will always be paid at
least as much as the bid that she submitted. This amount will be
recorded in the "Price" column and will be added to the
"My Payoff" column for that period.
Just as before, the evaluator will also receive a payoff depending
upon whether the product is Good or Bad.
At the end of the period you will have 5 seconds to review the
results. At the end of that time, the next period will begin. If you do
not submit a new bid before the time on the clock expires, the computer
will use last period's bid as the bid for the current period.
At the end of the experiment, your experimental dollars will also
be converted into cash at the rate of 200 experimental dollars for US$1.
Any questions? If not, please raise your hand to indicate that you
have finished reading the instructions.
Ascending Clock Auction Instructions
For the next portion of the experiment, the way you and your
counterparts' payoffs are determined based on the fictitious good
will remain the same, but all of you will be bidding to pay to Wait and
to get paid to Evaluate. In each period of the experiment you will
continue to be matched with the same three counterparts.
This is what your screen will look like in the next portion of the
experiment. At the beginning of each period the "Price Received to
Evaluate" starts at a price of 24 and then continues to increase by
one experimental dollar each second. The "Price Received to
Evaluate" will increase until the first person clicks on the
Evaluate button. The first person who clicks on the Evaluate button will
evaluate the product and receive as payment from the three counterparts
the amount in the "Price Received to Evaluate" box. This
amount will be recorded in the "Price" column and will be
added to the "My Payoff" column for that period.
Just as before, the evaluator will also receive a payoff depending
upon whether the product is Good or Bad.
The three other people who did not click on the Evaluate button
will wait that particular period. The three waiters will each pay the
amount next to the label "Price Paid to Wait." This amount
will be subtracted from the "My Payoff" column. Notice that
because there are three waiters, the amount paid by each waiter is 1/3
of the price received by the evaluator.
At the end of the period you will have 5 seconds to review the
results. At the end of that time, the prices will again start at 24 and
will increase until one person clicks on the Evaluate button.
At the end of the experiment, your experimental dollars will also
be converted into cash at the rate of 200 experimental dollars for US$1.
Any questions? If not, please raise your hand to indicate that you
have finished reading the instructions.
Descending Clock Auction Instructions
For the next portion of the experiment, the way you and your
counterparts' payoffs are determined based on the fictitious good
will remain the same, but all of you will be bidding to pay to Wait and
to get paid to Evaluate. In each period of the experiment you will
continue to be matched with the same three counterparts.
This is what your screen will look like in the next portion of the
experiment. At the beginning of each period the "Price Received to
Evaluate" starts at a price of 84 and then continues to decrease by
one experimental dollar each second. The "Price Received to
Evaluate" will decrease until the first three people click on the
Wait button. The remaining person who has not clicked on the Wait button
will evaluate the product and receive as payment from the three
counterparts the amount in the "Price Received to Evaluate" box.
This amount will be recorded in the "Price" column and will be
added to the "My Payoff" column for that period.
Just as before, the evaluator will also receive a payoff depending
upon whether the product is Good or Bad.
The three other people who clicked on the Wait button will wait
that particular period. The three waiters will each pay the amount next
to the label "Price Paid to Wait." This amount will be
subtracted from the "My Payoff" column. Notice that because
there are three waiters, the amount paid by each waiter is 1/3 of the
price received by the evaluator.
If the "Price Received to Evaluate" falls to zero and
there are still at least two people who have not clicked on the Wait
button, then one of the people who has not clicked the button will be
randomly selected to be the evaluator and will receive zero (because by
not clicking the Wait button the person has indicated that they are
willing to receive nothing for evaluating).
At the end of the period you will have 5 seconds to review the
results. At the end of that time, the prices will again start at 84 and
will decrease until the first three people click on the Wait button.
At the end of the experiment, your experimental dollars will also
be converted into cash at the rate of 200 experimental dollars for US$1.
Any questions? If not, please raise your hand to indicate that you
have finished reading the instructions.
Table 1. Market Values

Agent i   Agent Type   Good Value (g)   Bad Value (b)   Opportunity   Expected Value if   Expected Value if
                                                        Cost (c)      Agent Evaluates     Another Agent Evaluates
1         1            320              -340            24            -10                 172
2         2            220              -240            24            -10                 122
3, 4      3            100              -120            24            -10                 62
Table 2. Expected Social Surplus by Type of Evaluator

Type of Agent   Expected Social   Efficiency [(Social Payoff - No-Evaluation Payoff)/
Evaluating      Payoff            (Maximum Social Payoff - No-Evaluation Payoff)]
None            96                0%
Type 1          236               56%
Type 2          286               76%
Type 3          346               100%
Table 3. Estimation Results for the Linear Mixed Effects Model of Price
($Price_{jkt} = \beta_0 + \beta_1 Descending_j + \beta_2 Uniform_j
+ \beta_3 Discriminatory_j + \epsilon_j + e_k + u_{jkt}$)

Parameter     Estimate   Standard Error   Degrees of Freedom   t        p
$\beta_0$     47.06      11.763           178                  4.00     <0.0001
$\beta_1$     11.84      16.647           16                   0.711    0.4871
$\beta_2$     -5.44      16.615           16                   -0.326   0.7475
$\beta_3$     4.39       16.661           16                   0.263    0.7957
Table 4. Market Behavior of Subjects Who Evaluated >= 50% of the Time in Periods 13-24

Agent   No. of        Auction            % of Periods Evaluated     % of Periods Evaluated by
Type    Individuals   Treatment          by These Agents (37-48)    All Others of the Same Type
2       1             Uniform            58                         22
2       1             Uniform            8                          22
3       1             Uniform            33                         30
2       1             Discriminatory     25                         3
3       2             Discriminatory     13                         35
2       1             Ascending clock    33                         23
3       2             Ascending clock    88                         18
1       2             Descending clock   42                         11
This paper has benefited from conversations with Kevin McCabe and
David Meyer and from comments by the Editor, two anonymous referees,
Glenn Harrison, and participants at the 2003 Summer Economic Science
Association meetings. We thank Stephen Salena for research assistance in
running these experiments. B.J.W. also thanks the Office of the Provost
at George Mason University for summer research funding and the
International Foundation for Research in Experimental Economics for
additional support. The data are available on request.
Received February 2004; accepted November 2004.
(1) Additionally, businesses have long attempted to use techniques
such as brand image and advertising as an indication of quality.
However, when new products or services are introduced within a brand
line, buyers can still face uncertainty about their value for the
product or service.
(2) Both amateur and professional reviews can suffer from a lack of
technical knowledge on the part of the reviewer and can reduce social
welfare if, given the heterogeneity of values, the actual reviewers are
not the socially optimal ones.
(3) As Avery, Resnick, and Zeckhauser (1999) point out in their
footnote 3, positive evaluations for some products (e.g., stocks or
restaurants) can increase demand and hence could be rivalrous. In such
situations, agents could have an incentive not to truthfully reveal.
(4) Avery, Resnick, and Zeckhauser (1999) show that markets can
solve much more complex problems as well, such as the case in which
someone else's positive experience only serves as a signal about
the probability that one's own experience will be positive.
However, as a first step in understanding behavior in markets for
evaluations, this study focuses exclusively on the case in which
everyone's opinion of the outcome, but not necessarily their payoff
from it, is the same.
(5) For a more comprehensive discussion, the reader is directed to
Kagel and Roth (1995). Two early studies test the strategic equivalence
of first-price and Dutch auctions. Coppinger, Smith, and Titus (1980)
and Cox, Roberson, and Smith (1982) both find that (i) prices are higher
in first-price auctions than in Dutch auctions and (ii) bidding is
consistent with risk-averse behavior. The predicted isomorphism between
English and second-price auctions also fails to be observed. Coppinger,
Smith, and Titus (1980) and Kagel, Harstad, and Levin (1987) find that
bidding in the English outcry auction conforms to the theoretical
predictions quite well, whereas in the one-shot second-price auction,
bidders consistently bid higher than the dominant strategy prediction,
even with experience in the auction mechanism (Kagel and Levin 1993).
(6) Because one bidder will necessarily be selected as the
evaluator and receive payment from the other participants, these
mechanisms can be classified as fair division games (see Guth and van
Damme 1986; Guth et al. 2002).
(7) The type 3 agents are similar to those used by Avery and
Zeckhauser (1997) in a discussion of a market for evaluations in a
similar setting.
(8) Identical expected values for evaluating generate a nontrivial
environment to test how well a market institution induces a socially
optimal evaluation by a type 3 agent.
(9) As a simplifying assumption, theoretical and experimental work
on private value auctions has assumed that values are distributed
uniformly and that this is common knowledge among the market
participants. In contrast, we chose a challenging environment that
parallels the naturally occurring economy, in which participants only
have private information on their own values, à la a typical double
auction market experiment. It is well known that the double auction
institution quickly achieves competitive outcomes (at or near 100%
efficiency), all with strictly private information on all values and
costs. In this paper, we also want to investigate a rather difficult
test of a market mechanism when the participants have only strictly
private information, as is reasonably the case in the diffuse and
impersonal world of the Internet. The reader is referred to Guth and
van Damme (1986) and Guth et al. (2002) for the derivation of the
optimal bid strategies in fair division games for agents when there is
common knowledge of the distribution of values.
(10) This positive gain property helps prevent a loss of control
over a subject's motivation because negative earnings cannot be
enforced. This is particularly important in the early stages of the
experiment when subjects are relatively inexperienced.
(11) Participants in an unpaid pilot experiment indicated that more
than 30 seconds was too long, and the 30-second time limit was rarely
binding. Also, the starting price of 84 in the English clock auction
should discourage type 1 and type 2 agents from evaluating because
risk-neutral agents of both types would prefer to exit the auction
immediately. The results of the experiments are interpreted accordingly.
(12) The use of an exchange rate allows the clock prices to adjust
over finer increments. Previous research has shown that clock speed and
increment can be significant factors that influence market prices in
standard clock auctions, which is also why we calibrated the starting
prices for the clock auctions at 54 ± 30 experimental dollars.
(13) An alternative metric for evaluating market performance is to
select an evaluator for the no-evaluation periods in the same proportion
as when there was a volunteer, that is, $\theta_i = m_{ij}$. A Wilcoxon
rank sum test with this metric leads to the same conclusion (W = 37,
p = 0.0284).
(14) The approach of conducting ex ante efficiency analysis similar
to that discussed following Finding 3 would assume that the efficient
type is assigned to evaluate (i.e., $\theta_i = 1$ if $i = 3$ and
$\theta_i = 0$ otherwise). In this case, the null hypothesis that
the market mechanisms generate the same level of efficiency as the
baseline cannot be rejected in favor of the two-sided alternative with
the Wilcoxon rank sum test (W = 71, p = 0.6832). This also suggests that
the markets induce the optimal agent to evaluate when no one volunteers.
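For readers unfamiliar with the Wilcoxon rank sum tests reported in footnotes 13 and 14, a minimal Python sketch follows. The two efficiency samples are hypothetical placeholders (the actual data are available from the authors on request), and scipy reports a normal-approximation z statistic rather than the W statistic quoted in the text.

from scipy.stats import ranksums

# Hypothetical per-session ex ante efficiency scores (placeholders).
baseline_sessions = [0.56, 0.76, 1.00, 0.56, 0.76, 1.00, 0.56, 0.76]
auction_sessions = [0.76, 1.00, 1.00, 0.76, 1.00, 0.56, 1.00, 1.00]

stat, p_value = ranksums(baseline_sessions, auction_sessions)
print(f"Wilcoxon rank sum statistic = {stat:.3f}, p = {p_value:.4f}")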
(15) See Longford (1993) for a description of this technique that
is commonly employed in experimental sciences. A linear mixed effects
model is not appropriate for the efficiency analysis because ex ante
efficiency is discrete, taking on only one of three values each period.
(16) The linear mixed effects model for repeated measures treats
each session as 1 degree of freedom with respect to the treatments.
Hence, with four parameters, the degrees of freedom for the estimates of
the institution treatment fixed effects are 16 = 20 sessions - 4
parameters. This estimation accommodates sessionwise heteroscedastic
errors when estimating the model via maximum likelihood. Adjusting the
model to include first-order autoregressive errors in $\epsilon_{jkt}$
does not significantly increase the efficiency of the estimates.
(17) No subject volunteered to evaluate more than 83% of the time
during these periods, and in many periods, no evaluations were provided.
(18) Table 4 includes only 11 of the 12 subjects who were in an
auction mechanism treatment; the 12th was in the baseline treatment.
References
Avery, Christopher, Paul Resnick, and Richard Zeckhauser. 1999. The
market for evaluations. American Economic Review 89:564-84.
Avery, Christopher, and Richard Zeckhauser. 1997. Recommender
systems for evaluating computer messages. Communications of the ACM 40:88-9.
Coppinger, Vicki M., Vernon L. Smith, and Jon A. Titus. 1980.
Incentives and behavior in English, Dutch and sealed-bid auctions.
Economic Inquiry 18:1-22.
Cox, James C., Bruce Roberson, and Vernon L. Smith. 1982. Theory
and behavior of single object auctions. In Research in experimental
economics. Volume 2, edited by Vernon L. Smith. Greenwich, CT: JAI Press, pp. 1-43.
Deck, Cary, and Bart J. Wilson. 2003. Automated pricing rules in
electronic posted offer markets. Economic Inquiry 41:208-23.
Guth, Werner, Radosveta Ivanova-Stenzel, Manfred Konigstein, and
Martin Strobel. 2002. Bid functions in auctions and fair division games:
Experimental evidence. German Economic Review 3:461-84.
Guth, Werner, and Eric van Damme. 1986. A comparison of pricing
rules for auction and fair division games. Social Choice and Welfare
3:177-98.
Isaac, R. Mark, and Stanley S. Reynolds. 2002. Two or four firms:
Does it matter? In Research in experimental economics, volume 9: Market
power in the laboratory, edited by Charles Holt and R. Mark Isaac.
Greenwich, CT: JAI Press, pp. 95-119.
Isaac, R. Mark, and James Walker. 1985. Information and conspiracy
in sealed bid auctions. Journal of Economic Behavior and Organization
6:139-59.
Kagel, John, Ronald Harstad, and Dan Levin. 1987. Information
impact and allocation rules in auctions with affiliated private values:
A laboratory study. Econometrica 55:1275-1304.
Kagel, John, and Dan Levin. 1993. Independent private value
auctions: Bidder behavior in first-, second-, and third-price auctions
with varying numbers of bidders. Economic Journal 103:868-79.
Kagel, John, and Alvin Roth. 1995. The handbook of experimental
economics. Princeton, NJ: Princeton University Press.
Longford, N. T. 1993. Random coefficient models. New York: Oxford
University Press.
Smith, Vernon L. 1994. Economics in the laboratory. Journal of
Economic Perspectives 8:113-31.
Thomas, Charles J., and Bart J. Wilson. 2002. A comparison of
auctions and multilateral negotiations. RAND Journal of Economics
33:140-55.
Cary A. Deck * and Bart J. Wilson (†)
* Department of Economics, Walton College of Business, The
University of Arkansas, Fayetteville, AR 72701, USA.
(†) Interdisciplinary Center for Economic Science, George
Mason University, 4400 University Drive, MSN 1B2, Fairfax, VA
22030-4444, USA; E-mail: [email protected]; corresponding author.