10,000-way donation matching

(ETA: I think this problem is impossible and in particular this mechanism is broken. Credit to Daniel Kane for pointing out the error. At least I did warn: “I feel pretty confused and this might all be wrong.”)

Suppose Alice and Bob are both interested in a public good: maybe it’s a movie they want to watch, an improvement to a public space they both use, or a project that advances their shared altruistic goals.

Alice and Bob both face a tradeoff between giving money to the public good, and spending money on themselves. If Alice and Bob make decisions separately, neither will give unless they benefit more from $1 of public good than from $1 of spending.

For example, Alice might be willing to give money to fund the most urgent improvements to public spaces. But she’ll only give until she reaches the threshold where spending $1 improving the public spaces is exactly as good (for Alice) as spending $1 to improve Alice’s private space. And Bob might do the same thing.

This is not an efficient outcome: if both Alice and Bob consumed $1 less and gave $1 more, then they would both be better off. Alice and Bob can try to fix the problem by matching donations: they each agree to give $1 if and only if the other one also gives $1. If Alice and Bob have symmetrical preferences about the public good, then this aligns their incentives with social welfare. (Unfortunately many donation matches have fishy counterfactuals, but there is real value to be had from a real match.)


For a public good that benefits 10,000 people, the gap between “socially optimal behavior” and “individually rational behavior” is even larger. To close the gap, we could again try a donation match: everyone agrees to give $1, if and only if everyone else also gives $1. This potentially increases the incentive to give by 10,000x.

But the 10,000-way match runs into a few huge problems:

  • How do you identify the 10,000 people who care about this public good? In reality different people care different amounts, so you need to figure out how much each person cares.
  • Even if you know exactly how much all 10,000 people care, some of them are going to hold out or behave erratically and the match will predictably fail.
  • Every individual actor has a strong incentive not to be included in the match, or to look like a holdout—they’d prefer that everyone else sign up for a match and fund the public good, while they personally spend nothing.

Can we overcome these problems? How much can donation matching improve the incentive to give?

(See also: dominant assurance contracts, which are a complementary mechanism for public goods provision.)

Getting O(√N) leverage

(Warning: I feel pretty confused and this might all be wrong.)

Given a public good with 10,000 comparably-sized donors, I think a donation match can increase the incentives to donate by ~20x, while having only ~15% of the funds go unmatched, using a simple but counterintuitive strategy.

Roughly speaking, we’ll divide donors into two groups and have them compete: each donor chooses how much to pledge conditioned on the match succeeding. If one group pledges substantially less than the other, then members of that group donate nothing.

More precisely (a code sketch follows this list):

  • Randomly divide all possible donors into two groups.
  • Survey each group about how much they would donate given various possible match targets. Based on their answers predict how much money the other group will donate; set the other group’s match target to be 1 standard deviation below the median prediction.
  • After deciding on a group’s match target, use the survey answers to compute how much each donor in that group pledges.
  • A group’s match is a success if the total pledged is more than the target. If the match is successful, everyone donates their pledge.
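To make the procedure concrete, here is a minimal Python sketch under simplifying assumptions: the survey is collapsed into a single pledge per donor (`survey_pledges`), and the prediction of the other group’s total is done by crude bootstrap resampling. The names and the prediction method are illustrative stand-ins, not part of the proposal itself.

```python
import random
import statistics

def run_match(survey_pledges, n_boot=2000, rng=random):
    """Minimal sketch of the two-group match. `survey_pledges` maps each donor
    to the amount they said they would pledge; the real survey asks about
    several possible targets (see the subtleties section below)."""
    donors = list(survey_pledges)
    # 1. Randomly divide all possible donors into two groups.
    shuffled = rng.sample(donors, len(donors))
    half = len(donors) // 2
    group_a, group_b = shuffled[:half], shuffled[half:]

    def target_for(group, other):
        # 2. Predict this group's total from the *other* group's answers (both
        #    groups are random halves of the same population) by resampling the
        #    other group with replacement, then set the target 1 standard
        #    deviation below the median prediction.
        predictions = [
            sum(survey_pledges[rng.choice(other)] for _ in group)
            for _ in range(n_boot)
        ]
        return statistics.median(predictions) - statistics.stdev(predictions)

    results = {}
    for name, group, other in [("A", group_a, group_b), ("B", group_b, group_a)]:
        target = target_for(group, other)
        pledges = {d: survey_pledges[d] for d in group}
        # 3. The match succeeds iff the total pledged meets the target; on
        #    success everyone in the group donates their pledge, otherwise
        #    nobody in the group gives.
        success = sum(pledges.values()) >= target
        results[name] = {"target": target, "success": success,
                         "donations": pledges if success else {}}
    return results
```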

Incentives

Call the groups A and B. Let N be the total number of donors. Consider Alice, an individual from group A. For simplicity, assume each person will donate about $1, so that the match target is about $N/2. In this case the distribution of total pledged is approximately Poisson, and the probability of landing within $1 below the target is about 0.35/√N. The probability of falling more than 1 standard deviation below the mean, and hence having the match fail, is about 16%.

Now consider Alice’s incentives, given that her donation will be matched, but ignoring the fact that her answers will be used to calculate the match for group B. On average, pledging $1 more costs Alice $0.84, since she only pays if the match succeeds (about 84% of the time). But it increases the probability of group A meeting its match target by 0.35/√N, which increases the expected donation by $N/2 * 0.35/√N ~ $(0.17√N). So Alice’s leverage is (0.17√N) / (0.84) ~ (0.20√N). For N = 10,000, this gives leverage of 20x.
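These constants all come out of the normal approximation to the Poisson total; a quick numerical check:

```python
import math

N = 10_000
sigma = math.sqrt(N / 2)                    # sd of a Poisson total with mean N/2

# Chance the total lands in the $1-wide window just below a target set
# 1 sd below the mean: the normal density at -1 sd.
density = math.exp(-0.5) / (sigma * math.sqrt(2 * math.pi))
print(density * math.sqrt(N))               # ~0.34, i.e. the ~0.35/sqrt(N) figure

# Chance of falling more than 1 sd below the mean, so the match fails.
p_fail = 0.5 * math.erfc(1 / math.sqrt(2))
print(p_fail)                               # ~0.16

# Alice's leverage: expected extra matched funds per expected dollar of cost.
expected_cost = 1 - p_fail                  # she only pays the extra $1 if the match succeeds
expected_benefit = (N / 2) * density        # ~0.17 * sqrt(N)
print(expected_benefit / expected_cost)     # ~20.3 for N = 10,000
```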

What about the fact that Alice’s answers will determine the match for group B—does this affect her incentives?

The only reason that Alice cares about the size of group B’s match is because Alice wants group B to contribute as much as possible. But the whole reason we are using Alice’s response to set group B’s match is so that we can find the matching level that generates as much revenue as possible. So our incentives here are perfectly aligned with Alice’s, and she has no incentive to mislead us.

Alice may behave “dishonestly” if she knows something we don’t about what matching level would raise the most money from group B. For example, she’ll shade upwards if she knows that we are going to underestimate the efficient match level. But in this case she’s just trying to help the match generate even more revenue, so we don’t need to worry unless Alice is predictably irrational (even after hearing this argument).

This analysis seems robust to precommitments, collusion, acausal cooperation, etc. Roughly speaking: if the donors can successfully cooperate then they would just give a large amount with or without the matching drive, so cooperation isn’t going to decrease the total giving. We’re not trying to play people off of each other, we’re just trying to help them coordinate. (But see the discussion below about agents predicting the behavior of others in their group.)

Subtleties

In order to elicit information from each group, we send out a survey with a bunch of questions of the form: “Conditioned on being offered a match with target $T, how much would you pledge?” We then find the largest target such that, given that target, the group would actually meet it. We use that as the target for the other group. We can guarantee the existence of a self-consistent target by a simple fixed point argument.
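As a sketch, picking the target amounts to something like the following, where `pledge_at_target` is a hypothetical helper that sums the group’s survey answers for a given target:

```python
def self_consistent_target(pledge_at_target, candidate_targets):
    """Return the largest candidate target T such that, if T were announced,
    the group's own survey answers say they would pledge at least T in total."""
    feasible = [T for T in candidate_targets if pledge_at_target(T) >= T]
    return max(feasible) if feasible else 0.0
```

Since a target of $0 is trivially met, a feasible target always exists, and taking the largest feasible one gives a self-consistent target.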

In practice you’d hopefully have a good sense of what the real level of donations would be, so you’d only have to ask about a few targets until you found one that was close enough. If you discover that you messed up and picked the wrong range, you could send out a new round of surveys.

The analysis only works if agents are committed to giving if the match succeeds, and committed to not giving if the match fails. Everyone is incentivized to make such a commitment in order to maximize revenue, but it may be difficult to actually make it, especially once the game is repeated across many years and many public goods. It would require a much larger post to address this issue, but I do think it’s manageable.

After collecting the surveys, we can run this process with a large number of random partitions rather than using a single partition. Each person then donates the average amount they would be asked to donate, across all of the random partitions. This reduces variance while preserving the incentive structure. And it means we don’t have to use any fancy division of all possible donors, since we can just sample random partitions of the actual donors.
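Building on the `run_match` sketch above, the averaging step might look like this (again purely illustrative):

```python
import random
import statistics

def averaged_pledges(survey_pledges, n_partitions=500, rng=random):
    """Run the single-partition mechanism many times with fresh random
    partitions, and ask each donor to donate the average amount they would
    have been asked to give across those runs."""
    asks = {d: [] for d in survey_pledges}
    for _ in range(n_partitions):
        results = run_match(survey_pledges, rng=rng)   # from the sketch above
        donated = {}
        for group in results.values():
            donated.update(group["donations"])         # empty if that group's match failed
        for d in survey_pledges:
            asks[d].append(donated.get(d, 0.0))
    return {d: statistics.mean(amounts) for d, amounts in asks.items()}
```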

By giving in this match, I influence how much people expect to be given in future matches (and maybe acausally influence how much people expect to be given in this match, or causally influence it by making a precommitment). But this doesn’t significantly affect the equilibrium as long as people’s priors are broad compared to the √N polling error. If people’s priors are very narrow compared to the polling error—for example if I know that there are 117 ± 2 people who are interested in giving to this public good—then things are back to being a complicated mess. I think this is an unusual situation: for example, if I wanted to know how many people would give to MIRI’s 2017 fundraiser, I think that “how much did a random subsample of 1/2 of the population donate to MIRI’s 2017 fundraiser” would be my best source of evidence by far.

As a special case: suppose that Alice would by default give $1 but tries to get ahead by committing to not making a donation and hoping that others in her group will pick up her slack. (This is a winning strategy in a naive formulation of the matching drive.) Now consider Bob’s decision, facing a match target of $X. There is a 50% chance that Alice was in Bob’s group, in which case his match target is effectively $1 higher, since Alice isn’t giving. But there is also a 50% chance that Alice was in the other group. If the match target is Bob’s best source of evidence about how many people in his group will donate, then Bob’s match target is effectively $1 lower since the match target understates the true propensity to donate in the other group. So to first order Alice’s precommitment won’t affect Bob’s donation size, eliminating Alice’s incentive to make such a precommitment.

When different donors give different amounts, the actual best response is complicated; the net effect seems to be to decrease the effective number of donors N. If there are just a few very large donors, who contribute most of the money, then the effective N will be quite small.

I somewhat arbitrarily set the threshold at 1 standard deviation below the mean. Really you’d want to have some responsible estimate for the optimal match, which depends on the tradeoff between incentives and unmatched funds, and use that.

If donors’ pledges depend on the target in different ways, picking the revenue-maximizing target introduces optimizer’s regret, which you should correct for. This is a bit complicated but again doesn’t seem to change the numbers much.

Conclusion

Overall the situation, and especially the proposed mechanism, feel deeply confusing and I think there is a significant probability that I’ve gotten something wrong.

I think that appropriate matching mechanisms could make it significantly easier to fund public goods, and it’s worth thinking about how to set them up. The quantitative effects are potentially very large, so while the whole area feels confusing and vaguely like cheating, I think it may turn out to be important.

I’m less sure about whether this should affect our altruistic activities. If there were two possible worlds, one with good matching and one without, should I give more in the world with a match? But the numbers are large enough that it seems worth thinking about this seriously and figuring it out.

