Results and Analysis, Bargain Tournament 2009

Edward Tsang,
Computational Finance & Economic Research Lab

What makes a strategy successful?

Whenever the buyer's utility is higher than the seller's cost, there is potential for profit. Profit can only be made when the two bargainers agree on a price. Therefore, whether a strategy succeeds in a bargain depends on (a) whether it manages to make a deal in this bargaining situation, and (b) how much it gets out of the deal. The former partly demands the ability to recognize the opponent's deadline; the latter partly demands aggressiveness.
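The deal condition above can be sketched in a few lines of Python. This is a toy illustration, not tournament code; the function name `deal_profits` is ours, not from the competition.

```python
def deal_profits(cost, utility, price):
    """Return (seller_profit, buyer_profit) for a deal struck at `price`.

    A deal is only mutually acceptable when cost <= price <= utility;
    otherwise no deal is made and neither side earns anything.
    """
    if cost <= price <= utility:
        return price - cost, utility - price
    return 0, 0

# Example: cost 150, utility 250, agreed price 190.
# The profit potential of 100 is split 40 (seller) / 60 (buyer).
print(deal_profits(150, 250, 190))  # -> (40, 60)
```

The split of the profit potential between the two sides is exactly what aggressiveness fights over; failing to agree at all (the second return) is what deadline recognition is meant to avoid.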

How good is a strategy?

Performance assessment is not straightforward.

The performance of a strategy obviously depends on what the strategy does. However, it also depends on the behaviour of the other participants. For example, if there are a sufficient number of easy-going participants, who would make deals as soon as profit can be made, then an aggressive strategy could do quite well, even if it makes no attempt to recognize its opponents' deadlines. On the other hand, if most participants are aggressive and make no attempt to recognize their opponents' deadlines, they could fail to strike deals most of the time. In this case, an easy-going participant could do well.

The performance of a strategy also depends on the profit potential, i.e. utility minus cost. Some programs (especially aggressive ones) work better when the profit potential is large.


Sellers Buyers
2009 entries: jmcait_seller.plg,, jvanSeller.txt, jtbuyer.plg, jvanBuyer.txt
2004 entries: ccmusg_s.plg, lemeri_s.plg, mkern_s.plg ccmusg_b.plg, rbragg_b.plg
2001 entries:, dgjaco_sell.plg,
Control: keen_seller.plg, random_seller.plg keen_buyer.plg, random_buyer.plg

2001 entries were not given the parameters. 2004 entries were told that parameters were provided at run time. So they have an edge over 2001 entries. 2009 entries were told in advance the parameters that were used in the tournament. So they have an edge over the previous entries. Some programs were designed for the parameters given.

Results under the announced parameters

The following parameters were given the the participants: Costs were drawn from the range [101, 200]. Utilities were drawn from the range [201, 300]. Days to buy and days to sell were both drawn from the range [3, 10]. Participants were not told the distributions of these ranges. In practice, uniform distributions were used. So on average, the potential profit to be shared is 100.

Of the 2009 entries, Philip Street's seller ( and Jason Caits-Cheverst's buyer ( achieved the highest scores (on average 40.1 and 30.5 respectively per game). Overall, Robert Stacey's seller ( and Christopher Musgrave's buyer (ccmusg_b.plg) achieved the higher scores (on average 64.7 and 49.3 respectively per game).

Philip Street and Edward Tsang photo
Jason Caits-Cheverst and Edward Tsang photo
Philip Street
Best Seller
Jason Caits-Cheverst
Best Buyer

What if programs with negative profits are removed?

Two of the programs (jtbuyer.plg and jvanBuyer.txt, probably due to programming bug rather than design) in this year's entry accepted money-losing deals. One may argue that non-rational behaviour should not exist in a competitive market. However, evidence shows that even rational investors make mistakes. (Otherwise how could one explain the trading of CDOs, which values are not fully understood!?)

Nevertheless, results with lost-making traders removed were looked at. Under this situation, Christopher Musgrave's seller (ccmusg_s_plg) and buyer (ccmusg_b.plg) were both winners (on average scoring 42.8 and 49.3 respectively per game).

Using alternative parameters

Competitions were also conducted with alternative parameters. Robert Stacey's seller ( and buyer ( and Christoper Musgrave's buyer (ccmusg_b.plg) were pretty strong under various situations. For details of results, readers are referred to the Excel spreadsheet.

Maintained by: Edward Tsang; Last Updated: 2009.12.08