Could Algorithms Create a Win-Win for Both Society and Crowdfunding Platforms?

Tuck professor Prasad Vana studies how algorithms that rank lists of items can be a lever for social benefit.

Like many advances of the digital age, algorithms hold great promise, but also plenty of peril.

The Brookings Institution defines algorithms as “a set of step-by-step instructions that computers follow to perform a task,” and notes that they have “become sophisticated and pervasive tools for automated decision-making.” When you type a query into the Google search box, Google’s proprietary algorithm decides which results you see and, just as important, the order in which they appear. The convenience of Google aside, one major problem with algorithms is that they are often opaque and are known to perpetuate unjust biases and stereotypes.

In a new working paper, Tuck associate professor Prasad Vana examines the influence of algorithms on consumer choice and asks how we can use them to promote socially beneficial outcomes while protecting against their potential harm. He and co-author Anja Lambrecht of London Business School find that researchers need to be able to see the inner workings of algorithms to accurately understand their influence, and that algorithms can be designed to help those most in need without detracting from the profit imperatives that power them.

The context for their study is the crowdfunding platform DonorsChoose, which allows schoolteachers to raise money for classroom improvements and supplies. At any given time, there are tens of thousands of projects seeking funding on the platform, and the website’s algorithm determines the order in which projects appear when a user searches for them. DonorsChoose gave the researchers full access to the code of its algorithm, so they could study how changing the code affects search results and funding outcomes. 


Professor Vana teaches the core Analytics I and II courses, and the Quantitative Digital Marketing elective in the MBA Program at Tuck.

The first question the researchers ask is how well the DonorsChoose algorithm serves its twin objectives: benefiting disadvantaged groups and achieving a high rate of project completion. DonorsChoose wants to help schools in high-poverty areas, so its algorithm ranks higher those projects from schools in the highest poverty category and from schools where a large share of students receive free or reduced-price lunches. In addition, the platform only makes money when projects are fully funded, so it prefers projects that are likely to succeed. To see whether these two goals are in conflict, Vana and Lambrecht adjusted the algorithm’s code and ran simulations. First, they turned off the preferences related to poverty and free or reduced-price lunches; removing those parameters reduced contributions to those schools by 12.98 percentage points. Then they doubled the weight of those same preferences, to see whether more of those projects would be funded. Surprisingly, they found the effect to be minimal. “If you take out the poverty components, the schools suffer a lot,” Vana explains, “but if you double the components, it doesn’t really add much more. This means there is an upper bound for how much the algorithm can help these schools, and the platform is already very close to that bound.”
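One way to picture this kind of counterfactual is a simple weighted scoring function whose weights can be zeroed out or doubled. The sketch below is a minimal illustration in Python; the feature names, weights, and numbers are hypothetical assumptions for exposition, not DonorsChoose’s actual code.

```python
# Hypothetical ranking-score sketch. All features and weights are
# illustrative assumptions, not the platform's real algorithm.
from dataclasses import dataclass


@dataclass
class Project:
    poverty_level: float  # 0.0 (low) to 1.0 (highest poverty category)
    lunch_rate: float     # share of students on free or reduced-price lunch
    other_score: float    # stand-in for the algorithm's remaining components


def rank_score(p: Project, w_poverty: float = 1.0, w_lunch: float = 1.0) -> float:
    """Higher scores rank higher in search results."""
    return w_poverty * p.poverty_level + w_lunch * p.lunch_rate + p.other_score


projects = [
    Project(poverty_level=1.0, lunch_rate=0.9, other_score=0.2),
    Project(poverty_level=0.2, lunch_rate=0.3, other_score=0.8),
]

# Baseline ranking, then the two counterfactuals the study describes:
# poverty-related weights turned off, and those weights doubled.
scenarios = {
    "baseline": (1.0, 1.0),
    "poverty components off": (0.0, 0.0),
    "poverty components doubled": (2.0, 2.0),
}
for label, (wp, wl) in scenarios.items():
    ranked = sorted(projects, key=lambda p: rank_score(p, wp, wl), reverse=True)
    print(label, [round(rank_score(p, wp, wl), 2) for p in ranked])
```

In a toy example like this, zeroing the weights can flip which project ranks first, while doubling already-dominant weights leaves the ordering unchanged, which is the intuition behind the upper bound Vana describes.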


Second, Vana and Lambrecht again adjusted the algorithm’s code, this time to simulate the role of the parameters that favor projects most likely to be completed. Among other things, the algorithm ranks higher projects that have already raised most of the money they requested, or projects with small target amounts. The researchers find that turning these preferences on or off does not significantly alter the proportion of money contributed to schools with high levels of poverty. This suggests that a platform’s primary goal of maximizing the number of successfully funded projects need not come at the cost of underserving disadvantaged groups.
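In the same hypothetical spirit, the completion-related preferences can be sketched as a separate score component that the simulation switches on or off. The functional form and figures below are illustrative assumptions only.

```python
# Hypothetical completion-likelihood component; form and numbers are
# illustrative, not drawn from the platform's code.
def completion_score(raised: float, target: float, enabled: bool = True) -> float:
    """Favor projects close to their goal and projects with small targets."""
    if not enabled:
        return 0.0
    progress = raised / target                     # share of the request already funded
    small_target = 1.0 / (1.0 + target / 1000.0)   # decays as the ask grows
    return progress + small_target


# A nearly funded $500 project vs. a barely funded $5,000 project.
print(completion_score(raised=450.0, target=500.0))                 # high: ~1.57
print(completion_score(raised=500.0, target=5000.0))                # low:  ~0.27
print(completion_score(raised=450.0, target=500.0, enabled=False))  # off:  0.0
```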

As society has learned of the potential for algorithms to perpetuate biases, there have been calls for “algorithm transparency,” so that regulators and researchers can properly study how algorithms make decisions. One finding from Vana and Lambrecht’s paper is that such transparency is empirically critical if we are to accurately understand consumer preferences. Another takeaway, Vana says, is that the social goal of the fundraising and the platform’s profit motive are “orthogonal”; in other words, they are not in conflict with each other. “We show that the goal of helping those in need does not take away from the overall goal of the platform, which is to stay in business.”