Q&A: Author Kartik Hosanagar on What Makes for Responsible Algorithms

The Wharton marketing professor talks about the need for more transparency

Author Kartik Hosanagar says platforms probably won't get more transparent without regulation. Credit: Amazon

In an age where data and machine learning have been woven into our daily lives, algorithms have the quiet power to influence everything from electoral politics to mortgage applications.

In A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, University of Pennsylvania Wharton School professor Kartik Hosanagar attempts to lay down a comprehensive set of principles that define socially responsible algorithms and ways their creators should be held to account.

He boils down part of his criteria to four pillars that make up an “algorithmic bill of rights”: people have a right to know (1) which data are used in the algorithms that impact them and (2) how those models work in simple terms. Algorithm designers should also (3) offer those impacted some level of feedback-based control and (4) foresee unintended consequences.

Hosanagar spoke with Adweek about how the tech industry should be regulated, why there is such a thing as too much transparency and what tech platforms need to do to improve.

This conversation has been edited for length and clarity.

Many of your prescriptions involve more transparency from platforms, but on Facebook, for example, data is sometimes available in a form that might be overwhelming to the average person. How do you make sure that information is accessible and presented in such a way that users can easily engage with it and know what to look for?

One of the things I clarify in my book is that there is such a thing as too much transparency, at least as far as consumers are concerned. And when we talk about transparency, again, what we need to talk about is whether the transparency is for consumers or lay-users or it’s for, let’s say, auditors of the system. I think both are important, but when we talk about consumer transparency, I think of it as transparency at a very high level. Let us know what kinds of data are being used. Let us know what are the most important variables or factors that the algorithm uses in its decision making. That’s very high level and that doesn’t require anyone to understand the ins and outs of the algorithm. But it allows them to get an intuition into it and also helps us know, ‘Hey, something is potentially wrong here.’
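To picture what that kind of high-level disclosure might look like, here is a minimal, hypothetical sketch: a stand-in lending model (a scikit-learn logistic regression with made-up feature names, not anything from Hosanagar's book or a real lender) lists which inputs pushed an individual applicant's score up or down.

```python
# Hypothetical illustration only: a toy lending model surfaces the factors
# that mattered most for one applicant, the kind of high-level transparency
# Hosanagar describes. Feature names and data are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed", "social_media_score"]

# Toy training data standing in for a real lending dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """List the factors that pushed this applicant's score up or down the most."""
    contributions = model.coef_[0] * applicant  # per-feature contribution to the linear score
    ranked = sorted(zip(features, contributions), key=lambda fc: abs(fc[1]), reverse=True)
    return [f"{name}: {'raised' if c > 0 else 'lowered'} your score" for name, c in ranked[:3]]

print(explain(X[0]))
# e.g. ['income: raised your score', 'debt_ratio: lowered your score', ...]
```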

"[T]here is such a thing as too much transparency, as least as far as consumers are concerned."
-Kartik Hosanager

Sometimes algorithms are making decisions and we don’t even have the most basic form of transparency, which is even knowing that an algorithm made the decision. So maybe it starts there, but then after that it is what data is being used. And if you applied for a job or a mortgage application, and the company uses not only all the information you provided but also, let’s say, your social media posts—that’s really useful information for us. And then we know if we are OK with that, did we give consent to that or not? And that’s again a basic level of transparency that people can understand.

Are there any platforms out there that you see as doing particularly well with consumer transparency?
First of all, I’m going to say that, at least in socially consequential settings, most of the systems are not highly transparent unless regulation has forced them. And when regulation has forced them, it’s kind of in very different ways. But there are settings where companies are transparent, and it’s helpful. One that I personally use a lot and I love is Pandora, where they make recommendations you can actually click, and it’ll tell you why this music was recommended to you. And it gives you very simple stuff such as, ‘Based on your interests, we think that you like music with a lot of instrumentation.’ … And, of course, there it’s interesting but not consequential. In settings where it is consequential, I don’t really see that, because I think in those settings, companies worry a little bit that sharing too much with consumers could also get them in trouble. But I think that’s the whole point. I think that transparency helps build trust when you kind of know, ‘OK, this is the kind of data, and these are the most important factors in the decision.’

With more and more lawmakers talking about the possibility of tech regulation, are there any proposals that strike you as moves in the right direction, according to these principles?
There are multiple proposals out there. Some I feel are too extreme in terms of bringing a big, heavy hand of regulation. If you look at Senator Warren’s proposal, I think it’s informed as far as it’s thinking about the right set of issues, but I do think it comes at it with a very heavy hand of regulation. And then of course there are also [people] saying, ‘Let’s not regulate technology at all.’ You need some sort of a middle ground. … There’s this bipartisan proposal from Senator Amy Klobuchar and Senator John Kennedy that’s kind of trying to ask for some level of transparency, at least as it relates to privacy and social media. And then there’s Representative Ro Khanna, who’s also got a few proposals. I think these are the more interesting ones for me, which recognize the issues and don’t overdo it in the other direction of over-regulating.

How do your principles apply to people shaping their advertising experience?
You can say, ‘Don’t show me ads like this,’ or you can ask, ‘Why was this ad shown to me?’ That’s a good example of where transparency can go a long way in winning user trust. It also brings up another pillar in my proposed algorithmic bill of rights, which is user control. And in a lot of settings, engineers design algorithms where there’s almost this mindset that we want to make it completely autonomous so consumers don’t have to think about anything. And you kind of just use it blindly and in a very passive way. But I think all the research shows that this doesn’t help with consumer trust, nor does it help in detecting problems. So in advertising we sometimes see where we can say, ‘Don’t show me ads like this’ or ‘Show me more like this.’ That helps with trust, but it also helps vet the algorithms.


Patrick Kulp (@patrickkulp, patrick.kulp@adweek.com) is an emerging tech reporter at Adweek.
Published March 27, 2019