Category Archives: research

New York Computer Science and Economics Day (NYCE Day) October 3, 2008

We invite participants to the first New York Computer Science and Economics Day (NYCE Day), October 3, 2008, at the New York Academy of Sciences, 7 World Trade Center.

NYCE Day is a gathering for people in the NYC metropolitan area with interests in auction algorithms, economics, game theory, e-commerce, marketing, and business to discuss common research problems and topics in a relaxed environment. The aim is to foster collaboration and the exchange of ideas.

The program features invited speakers Asim Ansari (Columbia), Susan Athey (Harvard), Constantinos Daskalakis (MIT), and Tuomas Sandholm (CMU), and a rump session with short contributed presentations.

You can indicate your interest in the event, but official registration should go through NYAS.

Your participation and suggestions are greatly welcome. Please distribute this announcement to people and groups who may be interested.

NYCE Day Organizers
 Anindya Ghose, NYU
 S. Muthu Muthukrishnan, Google
 David Pennock, Yahoo!
 Sergei Vassilvitskii, Yahoo!

P.S. This is one week prior to, and in the same location as, the Symposium on Machine Learning.

P.P.S. For those familiar, NYCE Day is conceived as a Right Coast version of BAGT.

P.P.P.S. The New York Academy of Sciences is a spectacular venue. See for yourself.

Yoopick: A sports prediction contest on Facebook with a research twist

I’m happy to announce the public beta launch of Yoopick, a sports prediction contest with a twist.

You pick any range you think the score difference or point spread of the game will fall into, for example you might pick Pittsburgh wins by between 2 and 11 points.

[Screenshot: Yoopick’s “make your pick” slider interface]

The more your prediction is viewed as unlikely by others, and the more you’re willing to stake on your prediction, the more you stand to gain. Of course it’s all for fun: you win and lose bragging rights only.

You can play with and against your friends on Facebook.

You can settle a pick even before the game is over, much like selling a stock in the stock market. Depending on what other players have done in the interim, you may be left with a gain or loss. You gain if you were one of the first to pick a popular outcome.

If you run out of credit, you can “work off your debt” by helping to digitize old books via the reCAPTCHA project.

Those are the highlights if you want to go play the game. If you’re interested in more details, read on…

Motivation, Design, and Research Goals

There are a great many sources of sports predictions, including expert communities, statistical number crunchers, bookmakers, and betting exchanges. Many of these sources are highly accurate; however, they typically focus on predicting the outright or spread-adjusted winner of the game. Our goal is to obtain more information about the final score, including the relative likelihood of each point spread. For example, if our system is working, on average there should be more weight put on point spreads of 3 and 7 in NFL games than on 2, 4, 6, or 8.

We chose sports as a test domain to tap into the avid fan base and the armies of armchair (and Aeron chair) prognosticators out there. However, the same approach should translate well to any situation where you’d like to predict a number, for example, the vote share of a politician or the volume of sales of your company’s widget. In addition to giving you the expected value of the number, our approach gives you the confidence or variance of the prediction — in fact, it gives you the entire probability distribution, or the likelihood of every possible value of the number.

Underneath the hood, Yoopick is a type of combinatorial prediction market where the possible outcomes are the values of the point spread, and each pick is a purchase of a bundle of outcomes in a given interval. We use Hanson’s logarithmic market scoring rules market maker to price the picks — that is, to set the risk/reward ratio. This pricing mechanism also determines the gain or loss when picks are settled early.
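To make the pricing concrete, here is a minimal sketch of Hanson's logarithmic market scoring rule applied to interval picks. The liquidity parameter `b`, the 61-outcome spread range, and the function names are all illustrative assumptions, not Yoopick's actual parameters or code:

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    where q_i is the number of shares sold so far on outcome i."""
    m = max(q)  # subtract the max before exponentiating, for numerical stability
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def prices(q, b=100.0):
    """Instantaneous prices: the softmax of q / b.  They sum to 1 and can be
    read as the market's probability estimate for each outcome."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    s = sum(e)
    return [x / s for x in e]

def price_interval(q, lo, hi, shares, b=100.0):
    """Cost of buying `shares` shares of every outcome with index in [lo, hi):
    the difference in the cost function before and after the purchase."""
    q_new = [qi + shares if lo <= i < hi else qi for i, qi in enumerate(q)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# 61 outcomes standing in for point spreads -30 .. +30 (index i = spread i - 30).
q = [0.0] * 61
# A pick like "wins by 2 to 11 points" buys the 10 outcomes at indices 32..41.
cost = price_interval(q, lo=32, hi=42, shares=10)
```

The same machinery yields the full distribution mentioned above: after any sequence of picks, `prices(q)` is the market's current probability estimate for every possible point spread.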

Wins and losses on Yoopick are measured in milliyootles, a social currency useful for expressing thanks.

Our market maker can — and we expect will — lose yootles on average. Stated another way, we expect players as a whole to gain on average. At the same time, we actively work to improve our market maker to limit its losses to control inflation in the game.

Because the outcomes of a game are tied together in a unified market, picks in one region automatically affect the price of picks in other regions in a logically consistent way. Players have considerable flexibility in how and what information they can inject into the market. In particular, players can replicate the standard picks like outright winner and spread-adjusted winner if they want, or they can go beyond to pick any interval of the point spread. No matter the form of the pick, all the information flows into a single market that aggregates everything in a unified prediction. In contrast, at venues from Wall Street to Churchill Downs to High Street to Las Vegas Boulevard, markets with many outcomes are usually split into independent one-dimensional markets.

Our goal is to test whether our market design is indeed able to elicit more information than traditional methods. We hope you have fun playing in our Petri dish.

Sharad Goel
David Pennock
Daniel Reeves
Prasenjit Sarkar
Cong Yu

Call for Papers and Participation: Workshop on Prediction Markets: Chicago, July 9 2008

I am happy to announce the following prediction market workshop and solicit submissions and participants.

Call for Contributions and Participation

Third Workshop on Prediction Markets

Afternoon of July 9, 2008
Chicago, Illinois

In conjunction with the
ACM Conference on Electronic Commerce (EC’08)


We solicit research contributions, system demonstrations, and
participants for the Third Workshop on Prediction Markets, to be held
in conjunction with the Ninth ACM Conference on Electronic Commerce
(EC’08). The workshop will bring together researchers and
practitioners from a variety of relevant fields, including economics,
finance, computer science, and statistics, in both academia and
industry, to discuss the state of the art today, and the challenges
and prospects for tomorrow in the field of prediction markets.

A prediction market is a financial market designed to elicit a
forecast. For example, suppose a policymaker seeks a forecast of the
likelihood of an avian flu outbreak in 2009. She may float a security
paying $1 if and only if an outbreak actually occurs in 2009, hoping
to attract traders willing to speculate on the outcome. With
sufficient liquidity, traders will converge to a consensus price
reflecting their collective information about the value of the
security, which in this case directly corresponds to the probability
of outbreak. Empirically, prediction markets often yield better
forecasts than other methods across a diverse array of settings.

The past decade has seen a healthy growth in the field, including a
sharp rise in publications and events, and the creation of the Journal
of Prediction Markets. Academic work includes mechanism design,
experimental (laboratory) studies, field studies, and empirical
analyses. In industry, several companies including Eli Lilly, Corning,
HP, Microsoft, and Google have piloted internal prediction
markets. Other companies, including Consensus Point, Inkling Markets,
InTrade, and NewsFutures, base their business on providing public
prediction markets, prediction market software solutions, or
consulting services. The growth of the field is reflected and fueled
by a wave of popular press articles and books on the topic, most
prominently Surowiecki’s “The Wisdom of Crowds”.

Workshop topics

The area of prediction markets faces challenges regarding how best
to design, deploy, analyze, implement, and understand prediction
markets. One important research direction is designing mechanisms for
prediction markets, especially for events with a combinatorial outcome
space. Another notable issue is manipulation in prediction
markets. Understanding the effect of manipulation is especially
important for prediction markets to find their way to assist
individuals and organizations in making critical decisions. Moreover,
how to implement market mechanisms that not only are easy to use but
also facilitate information aggregation has been an important problem
for practitioners. Prediction markets face social and political
obstacles including antigambling laws and moral and ethical concerns,
both real and constructed.

Submissions of abstracts for research contributions from a rich set
of empirical, experimental, and theoretical perspectives are
invited. Topics of interest at the workshop include, but are not
limited to:

* Mechanism design
* Game-theoretic analysis of mechanisms, behaviors, and dynamics
* Decision markets
* Combinatorial prediction markets
* Market makers for prediction markets
* Manipulation and prediction markets
* Order matching algorithms
* Computational issues of prediction markets
* Liquidity and thin markets
* Laboratory experiments
* Empirical analysis
* Prediction market modeling
* Industry and field experience
* Simulations
* Policy applications and implications
* Internal corporate applications
* Legal and ethical issues

Submissions of summaries for demonstrations on prediction market
systems are invited. Systems of interest at the workshop include, but
are not limited to:

* Implemented combinatorial prediction markets
* Mature systems and commercial products of market mechanisms
* Research prototypes on prediction markets
* Other collective prediction systems

Submission instructions

Research contributions should report new (unpublished) research
results or ongoing research. We request an abstract not exceeding one
page for every research contribution.

For system demonstrations, a summary of up to two pages including
technical content to be demonstrated is requested. Please indicate if
the demonstration requires network access.

Research contributions and system demonstrations should be submitted
electronically to the organizing committee no later than midnight
Hawaii time May 23, 2008.

At least one author of each accepted research contribution and
system demonstration will be expected to attend and present or
demonstrate their work at the workshop.

Important dates

May 23, 2008: Submissions due midnight Hawaii Time

May 30, 2008: Notification of accepted research contributions and
system demonstrations

July 9, 2008: Workshop date

Organizing committee

Yiling Chen, Yahoo! Inc
David Pennock, Yahoo! Inc
Rahul Sami, University of Michigan
Adam Siegel, Inkling Markets

More information

For more information or questions, visit the workshop website:

or email the organizing committee:

Call for Papers and Participation: Workshop on Ad Auctions: Chicago, July 8-9 2008

I am happy to announce the following ad auctions workshop and solicit submissions and participants.

Call for Papers

Fourth Workshop on Ad Auctions

July 8-9, 2008
Chicago, Illinois, USA


In conjunction with the
ACM Conference on Electronic Commerce (EC’08)

We solicit submissions for the Fourth Workshop on Ad Auctions, to be
held July 8-9, 2008 in Chicago in conjunction with the ACM Conference
on Electronic Commerce. The workshop will bring together researchers
and practitioners from academia and industry to discuss the latest
developments in advertisement auctions and exchanges.

In the past decade we’ve seen a rapid trend toward automation in
advertising, not only in how ads are delivered and measured, but also
in how ads are sold. Web search advertising has led the way, selling
space on search results pages for particular queries in continuous,
dynamic “next price” auctions worth billions of dollars annually.

Now auctions and exchanges for all types of online advertising —
including banner and video ads — are commonplace, run by startups and
Internet giants alike. An ecosystem of third party agencies has grown
to help marketers manage their increasingly complex campaigns.

The rapid emergence of new modes for selling and delivering ads is
fertile ground for research from both economic and computational
perspectives. What auction or exchange mechanisms increase advertiser
value or publisher revenue? What user and content attributes
contribute to variation in advertiser value? What constraints on
supply and budget make sense? How should advertisers and publishers
bid? How can both publishers and advertisers incorporate learning and
optimization, including balancing exploration and exploitation? How do
practical constraints like real-time delivery impact design? How is
automation changing the advertising industry? How will ad auctions and
exchanges evolve in the next decade? How should they evolve?

Papers from a rich set of empirical, experimental, and theoretical
perspectives are invited. Topics of interest for the workshop include
but are not limited to:

* Web search advertising (sponsored search)
* Banner advertising
* Ad networks, ad exchanges
* Comparison shopping
* Mechanism and market design for advertising
* Ad targeting and personalization
* Learning, optimization, and explore/exploit tradeoffs in ad placement
* Ranking and placement of ads
* Computational and cognitive constraints
* Game-theoretic analysis of mechanisms, behaviors, and dynamics
* Matching algorithms: exact and inexact match
* Equilibrium characterizations
* Simulations
* Laboratory experiments
* Empirical characterizations
* Advertiser signaling, collusion
* Pay for impression, click, and conversion; conversion tracking
* Campaign optimization; bidding agents; search engine marketing (SEM)
* Local (geographic) advertising
* Contextual advertising (e.g., Google AdSense)
* User satisfaction/defection
* User incentives and rewards
* Affiliate model
* Click fraud detection, measurement, and prevention
* Price time series analysis
* Multiattribute and expressive auctions
* Bidding languages for advertising

We solicit contributions of two types: (1) research contributions,
and (2) position statements. Research contributions should report new
(unpublished) research results or ongoing research. The workshop
proceedings can be considered non-archival, meaning contributors are
free to publish their results later in archival journals or
conferences. Research contributions can be up to ten pages long, in
double-column ACM SIG proceedings format:
Position statements are short descriptions of the authors’ view of how
ad auction research or practice will or should evolve. Position
statements should be no more than five pages long. Panel discussion
proposals and invited speaker suggestions are also welcome.

The workshop will include a significant portion of invited
presentations along with presentations on accepted research
contributions. There will be time for both organized and open
discussion. Registration will be open to all EC’08 attendees.

The first three workshops on sponsored search auctions successfully
attracted a wide audience from academia and industry working on
various aspects of web search advertising. Following in the footsteps of
the previous workshops, the Fourth Workshop on Ad Auctions strives to
be a venue that helps address challenges in the broader field of
online advertising, by providing opportunities for researchers and
practitioners to interact with each other, stake out positions, and
present their latest research findings. While the first three
workshops focused on web search advertising, we have broadened the
scope this year to include auctions and exchanges for any form of
online advertising.

Submission Instructions

Research contributions should report new (unpublished) research
results or ongoing research. The workshop’s proceedings can be
considered non-archival, meaning contributors are free to publish
their results later in archival journals or conferences. Research
contributions can be up to ten pages long, in double-column ACM SIG
proceedings format:
Position papers and panel discussion proposals are also welcome.

Papers should be submitted electronically using the conference
management system:
no later than midnight Hawaii time, May 11, 2008. Authors should also
email the organizing committee ( ) to
indicate that they have submitted a paper to the system.

At least one author of each accepted paper will be expected to attend
and present their findings at the workshop.

Important Dates

May 11, 2008 Submissions due midnight Hawaii time
May 23, 2008 Notification of accepted papers
June 8, 2008 Final copy due

Organizing Committee

Susan Athey, Harvard University
Rica Gonen, Yahoo!
Jason Hartline, Northwestern University
Aranyak Mehta, Google
David Pennock, Yahoo!
Siva Viswanathan, University of Maryland

Program Committee

Gagan Aggarwal, Google
Animesh Animesh, McGill University
Moshe Babaioff, Microsoft
Tilman Borgers, University of Michigan
Max Chickering, Microsoft
Chris Dellarocas, University of Maryland
Ben Edelman, Harvard University
Jon Feldman, Google
Jane Feng, University of Florida
Slava Galperin, A9
Anindya Ghose, New York University
Kartik Hosanagar, University of Pennsylvania
Kamal Jain, Microsoft
Jim Jansen, University of Pennsylvania
Sebastien Lahaie, Yahoo!
John O. Ledyard, Caltech
Ying Li, Microsoft
Ilya Lipkind, A9
Preston McAfee, Yahoo!
Chris Meek, Microsoft
John Morgan, University of California Berkeley
Michael Ostrovsky, Stanford University
Abhishek Pani, Efficient Frontier
Martin Pesendorfer, London School of Economics
David Reiley, Yahoo!
Tim Roughgarden, Stanford University
Catherine Tucker, Massachusetts Institute of Technology
Rakesh Vohra, Northwestern University

More Information

For more information or questions, visit the workshop website:

or email the organizing committee:

The right way to implement a multi-outcome prediction market: Linear programming

There are many examples of multi-outcome prediction markets, for example election markets with more than two candidates, or sports championship markets with dozens of teams.

What is the best way to implement a multi-outcome prediction market?

The simplest way is to effectively ignore the fact that there are multiple outcomes, breaking up the market into a bunch of separate binary markets, one for each outcome. Each outcome-market is an independent instrument with its own order flow and processing.

This seems to be the most common approach, taken by for example intrade, IEM, racetracks, and most financial exchanges. IMHO, it’s the wrong way, for three reasons.

  1. Splitting up a market can hurt liquidity. In a split market, there are effectively two ways to do everything (e.g., buy outcome 1 equals sell outcomes 2 through N), so traders may not see the best price for what they want to do, and orders may not fill at the best price available. There may even be orders that together constitute an agreeable trade, yet are stuck waiting in separate queues.
  2. A split market may also slow information propagation. Price changes in one outcome do not directly affect prices of other outcomes; it’s left to arbitrageurs to propagate logical implications.
  3. Finally, a naïve implementation of a split market may limit traders’ leverage, forcing them to set aside more money than necessary to complete a set of trades. For example, on IEM, short selling one share at $0.99 requires that you have $1 in your account, even though the most you could possibly lose in this transaction is $0.01. The reason is that to short sell on IEM you must first buy the bundle of all outcomes for $1, then sell off the outcome that you don’t want.

IEM has possibly the worst implementation, suffering from all three problems.

Intrade’s implementation is slightly better: they at least handle leverage correctly.

Newsfutures is smarter still.1 They generate phantom bids to reflect the redundant ways to place bets. For example, if there are bids for outcomes 2 through N that add up to $0.80, they place a phantom ask on outcome 1 for $0.20. A trader who accepts the ask, buying outcome 1 for $0.20, actually sells outcomes 2 through N behind the scenes, an entirely equivalent transaction. Chris Hibbert has a more elaborate methodology for eking out as much liquidity as possible using phantom bids, an approach he has implemented in his Zocalo platform.
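The phantom-bid bookkeeping is a one-liner. The sketch below assumes unit ($1) payoffs; `phantom_ask` is a made-up name for the calculation, not Newsfutures’ actual code:

```python
def phantom_ask(bids_other, payoff=1.0):
    """Given the best bids on outcomes 2..N of a market whose winning
    outcome pays `payoff`, the implied (phantom) ask price on outcome 1 is
    payoff - sum(bids): selling outcome 1 at that price is equivalent to
    buying each of the other outcomes at its bid."""
    return round(payoff - sum(bids_other), 2)

# The example from the text: bids on outcomes 2..N summing to $0.80
# imply a phantom ask of $0.20 on outcome 1.
print(phantom_ask([0.30, 0.25, 0.25]))  # 0.2
```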

Yet phantom bids are a band-aid that cannot entirely heal a fractured market. Still missing is the ability to trade bundles of outcomes in a single transaction.

For example, consider the US National Basketball Association championship market, with 30 teams. A split market (possibly with phantom bids) works great for betting on individual teams one at a time, but is terribly cumbersome for betting on groups of teams. For example, betting that a Western conference team will win requires 15 separate transactions. A common fix is to open yet another market in each popular bundle; however, this limits choice and exacerbates all three problems above.

Bundling is especially useful with interval bets. For example, consider this bet on the peak price of gasoline through September 2008, broken up into intervals $3-$3.25, $3.25-$3.40, etc. In order to bet that gas prices will peak between, say, $3.40 and $4.30, you must buy all six outcomes spanning the interval, one at a time. (Moreover, you must sum the six outcome prices manually to compute a price quote.)

Fortunately, there is a trading engine that solves all three problems above and also allows bundle bets…

It’s linear programming!

Bossaerts et al. call it combined value trading. Baron & Lange, Lange & Economides and Peters et al. call it a parimutuel call market. Fortnow et al. and Chen et al. describe it in the context of combinatorial call markets.

Whatever you call it, the underlying principle is relatively straightforward, and it seems inherently the right way to implement a multi-outcome market. Yet I’ve rarely seen it done. The only example I know of is the now defunct economic derivatives markets run by Longitude, Goldman Sachs, and Deutsche Bank.

The set up of the linear program is as follows. Each order is associated with a decision variable x that ranges between 0 and 1, encoding the fraction of the order that the auctioneer can accept.2 There is one constraint per outcome that ensures that the auctioneer never loses money across all outcomes. The choice of objective function depends on the auctioneer’s goals, but something like maximizing the fill fraction makes sense.

Once the program is set up, the auctioneer solves for the x variables to determine which orders to accept in full (x=1), which to accept partially (0<x<1), and which to reject (x=0). The program can be solved either in batch mode, after waiting to collect a number of orders, or in continuous mode immediately as new orders arrive. Batch mode corresponds to a call market. Continuous mode corresponds to a continuous auction, a generalization of the continuous double auction mechanism of the stock market.
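To make the setup concrete, here is a toy sketch. It replaces the LP solver with a brute-force grid search over fill fractions — fine for three orders, hopeless at scale, where you would hand the same constraints to a real LP solver — and every order, price, and function name below is invented for illustration:

```python
from itertools import product

def worst_case_profit(orders, x):
    """Auctioneer's profit in its worst outcome when order j is filled to
    fraction x[j].  Each order is (price, qty, bundle): a bid of `price`
    per share for `qty` shares of a bundle paying $1 per share if the
    realized outcome is in the bundle.  The LP's constraints require this
    quantity to be >= 0 for every outcome."""
    outcomes = set().union(*(bundle for _, _, bundle in orders))
    collected = sum(xj * qty * price for xj, (price, qty, _) in zip(x, orders))
    return min(
        collected - sum(xj * qty for xj, (price, qty, bundle) in zip(x, orders)
                        if outcome in bundle)
        for outcome in outcomes
    )

def match(orders, grid=5):
    """Stand-in for the LP solve: maximize total fill fraction subject to
    the no-loss constraint, by brute force over a coarse grid of fills."""
    steps = [i / grid for i in range(grid + 1)]
    best, best_x = -1.0, None
    for x in product(steps, repeat=len(orders)):
        if worst_case_profit(orders, x) >= -1e-9 and sum(x) > best:
            best, best_x = sum(x), x
    return best_x

# Toy 3-outcome market with three single-outcome bids whose prices sum to
# $1.20: the auctioneer can accept all of them in full and keep $0.20
# no matter which outcome occurs.
orders = [(0.50, 1, {"A"}), (0.40, 1, {"B"}), (0.30, 1, {"C"})]
print(match(orders))  # (1.0, 1.0, 1.0)
```

Note that bundle orders cost nothing extra in this formulation: a bet on all 15 Western Conference teams is simply an order whose bundle set is larger, and it lands in the same constraint matrix as every single-team order.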

Each order consists of a price, a quantity, and an outcome bundle. Traders can just as easily bet on single outcomes, negations of outcomes, or sets of outcomes (e.g., all Western Conference NBA teams). Every order goes into the same pool of liquidity no matter how it is phrased.

Price quotes are queries to the linear program of the form “at what price p will this order be accepted in full?” (I believe that bounds on the dual variables of the LP can be interpreted as bid and ask price quotes.)

Lange & Economides and Peters et al. devise clever ways to make prices unique rather than bid-ask ranges, by injecting a small subsidy to seed the market at the outset.

Note that Hanson’s market scoring rules market maker also elegantly solves all the same problems as the LP formulation, including handling bundle bets. However, the market maker requires a patron to subsidize the market, while the LP auctioneer formulation is budget balanced — that is, can never lose money.

Also note that I am not talking about a combinatorial-outcome market here. In this post, I am imagining that the number of outcomes is tractable — small enough so that we can explicitly list, store, and compute across all of the outcomes. A true combinatorial-outcome market, on the other hand, has an exponentially large number of outcomes making it impossible to even list them all explicitly, and forcing all calculations to operate on an implicit representation of outcomes, for example Boolean combinations of base events.

1. Apparently worked out in conjunction with Brian Galebach, a mathematician and Newsfutures fan extraordinaire who runs the prediction contest.
2. Alternatively, the variables can range between 0 and q, where q is the quantity of shares ordered.

FYI 2 CFPs: WWW2008-IM & ACM EC'08

Here are two Call For P*s for upcoming academic/research conferences:

  1. Call for Participation: For the first time, the World Wide Web Conference has a track on Internet Monetization, including topics in electronic commerce and online advertising. The conference will be held in Beijing April 21-25, 2008. If the Olympics in China are all about image, then the Internet in China is all about, well, Monetization. (A lot of it, growing fast.)
  2. Call for Papers: The 2008 ACM Conference on Electronic Commerce will be held in Chicago July 8-12, 2008 in proximity to AAAI-08 and GAMES 2008. Research papers on all aspects of electronic commerce — including personal favorites prediction markets and online advertising — are due February 7, 2008.

You can signal your interest on social events calendar WWW2008 | EC’08

Hope to see some of you in either the Forbidden or Windy City, as the case may be.

Computational aspects of prediction markets: Book chapter and extended bibliography

Rahul Sami and I wrote a chapter called “Computational aspects of prediction markets” in the book Algorithmic Game Theory, Cambridge University Press, forthcoming 2007.

You can download an almost-final version of our chapter here.

Update 2007/09/19: You can now also download the entire book Algorithmic Game Theory: username agt1user, password camb2agt. If you like it, you can buy it.

In the course of writing the chapter, we compiled an extended annotated bibliography that ended up being too long to publish in its entirety in the book. So we trimmed the bibliographic notes in the book to cover only the most directly relevant citations. You can download the full extended bibliography here.

Here is the abstract of our chapter:

Prediction markets (also known as information markets) are markets established to aggregate knowledge and opinions about the likelihood of future events. This chapter is intended to give an overview of the current research on computational aspects of these markets. We begin with a brief survey of prediction market research, and then give a more detailed description of models and results in three areas: the computational complexity of operating markets for combinatorial events; the design of automated market makers; and the analysis of the computational power and speed of a market as an aggregation tool. We conclude with a discussion of open problems and directions for future research.

If you’re interested in this topic, you might also take a look at our recent paper on Betting on permutations, published after the book chapter was completed.

Finally, for a higher-level treatment, here is a pre-print version of a short letter on “Combinatorial betting” that we submitted to SIGecom Exchanges.

Thoughts from WWW2007 on web science, web history, and misc

WWW2007 LogoEarlier this month, I spent a few days in lovely Banff, Alberta, Canada, at WWW2007, the 16th International World Wide Web Conference. Here are my thoughts from the event. [See also: Yahoo! Research’s writeup.]

It’s becoming clear that other sciences beyond computer science, including economics and sociology, are necessary for understanding the web and realizing its full potential. This theme ran through both Tim Berners-Lee’s and Prabhakar Raghavan’s plenary talks. For every new advance in the web, once it reaches critical mass, the economic incentives to manipulate the system inevitably emerge. Email led to spam. Altavista led to keyword spam. Google led to link spam. Blogs led to comment and trackback spam. Folksonomies led to tag spam. Recommender systems and aggregators (e.g., Digg) led to shilling. It’s clear that a better understanding of incentives, game theory, and system equilibrium is needed, beyond just cool engineering feats. The University of Michigan calls this incentive-centered design and has a world-class research team exploring the topic; see Jeff MacKie-Mason’s blog ICD Stuff for an interesting and accessible discussion. Yahoo! Research is also betting on the importance of human incentives, building a group of economists and sociologists to complement our contingent of computer scientists.

Among conference events, nowhere was the convergence of economics and computer science more clear than at the Third Workshop on Sponsored Search Auctions. The workshop is a rare venue where terms like Nash equilibrium and NP-complete can coexist in harmony. The workshop explored the intricacies of web search advertising, a multi-billion dollar industry experiencing rapid growth. Contributions included new designs for auctioning off advertising space, new analyses of the systems currently used by search engines, new tools to help advertisers, and empirical studies of the industry. Participants included representatives from both academia and industry, including economists, computer scientists, search engine employees (including representatives from the “big three”: Google, Microsoft, and Yahoo!), and search engine marketers. Yahoo! had a large presence at the workshop: Yahoo! scientists (including me) served on the organizing committee, Yahoo! employees and interns presented six of the fourteen peer-reviewed papers, and many Yahoos attended, contributing their voice to the discussion of this emerging field.

Bradley Horowitz’s talk also emphasized the new web order, where artists are needed as much as technologists: artists who can envision, create, and orchestrate online communities can be the difference between mass adoption and a flop.

An interesting addition to the WWW program was the Web History track and the Web History Center. Some of the talks were fascinating. Hermann Maurer recounted stories of interactive TV products that proliferated in Europe in the 1970’s and that mirrored almost everything that is done on the Web today in a primitive form. [Some keywords to search for if you’re interested: PRESTEL, Teletel/Minitel (France), MUPID (Austria).] For example, one massive multiplayer game, which involved social exploration of 64 million virtual planets, each with a hidden secret, was so wildly popular that it crashed the network. The apparent winner of the contest returned his prize, admitting that he didn’t actually solve for the secrets, but rather hacked into the system and reverse engineered the code. This pre-Internet system even featured some things I’m still waiting for on today’s web, like micropayments.

CFP: Second Workshop on Prediction Markets

We’re soliciting research paper submissions and participants for the Second Workshop on Prediction Markets, to be held June 12, 2007 in San Diego, California, in conjunction with the ACM Conference on Electronic Commerce and the Federated Computing Research Conference. The workshop will have an academic/research bent, though we welcome both researchers and practitioners from academia and industry to attend to discuss the latest developments in prediction markets.

See the workshop homepage for more details and information.

You can signal your intent to attend, though official registration must go through the EC’07 conference.

The economics of attention

Here is a fluffy post for a fluffy (but important) topic: the economics of attention.

Yahoo! is in the business of monetizing attention: that’s essentially what advertising is all about. We (Yahoo!) attract users’ attention by providing content, usually free, then diverting some of that attention to our paying advertisers. Increasingly users’ attention is one of the most valuable commodities in the world. This trend will only accelerate as energy becomes cheaper and more abundant, and thus everything we derive from energy (that is, everything) becomes cheaper and more abundant, on our way to a post-scarcity society, where attention is nearly the only constrained resource.

Today, users generally accept content and entertainment in return for their attention, though likely in the future users will be more savvy in directly monetizing their own attention. I’ve heard a number of companies and organizations large and small discuss direct user compensation. Beyond advertising, the economics of attention is important for the future of communication in general.

I haven’t found much academic writing on the topic, though I haven’t looked thoroughly. John Hagel’s piece “The Economics of Attention” is a good start, and he looks to have compiled some nice resources on the topic, though I haven’t yet investigated closely.

An organization that has garnered some attention of their own (of the Web 2.0 buzz variety) is Attention Trust. I find the description on their own website vague and impenetrable. The best explainer on Attention Trust I could find is PC4Media’s, though questions remain. The basic concept is simple enough: users should be empowered to control and monetize their own attention, including the output of their attention (e.g., their click trails, personal data, etc.). Just how Attention Trust plans to hand this power to the people seems to be the hand-wavy part of their story.

Another interesting company in this space is Root Markets, whose business is to connect both sides of the attention market in an attempt to commoditize attention. Their first product is much more specific than that: an exchange for mortgage leads.

If the absence of formal models of the economics of attention is real — and not simply a matter of my own ignorance — then it may be that some economist can make a career by truly tackling the topic in a precise and thorough way.