PRATIK SHAH

The ethics and governance of Artificial Intelligence: panel discussion at The MIT Media Lab

Joi Ito:

This is the first substantive session of the Ethics and Governance in AI class that the MIT Media Lab and the Berkman Center are doing together, and I'm co-teaching it with Professor Jonathan Zittrain, who will be speaking after me.

Joi Ito:

I thought I'd start out by talking a bit about why we're doing this class, what I hope you'll think about as we go through it, and maybe by touching on and framing some of the things we'd like to work on together. As we said at dinner yesterday, this is such a new field that by the end of this class you will know more than 99.99% of the people who think about it. It's like any new field that way; Jonathan and I are old enough to have been around in that period of the internet when there literally were only a handful of people who knew enough about each part of it to help get it started. It really is like that.

Joi Ito:

I think that AI, broadly, is a fairly large space with lots of people working in it. But this field of AI and governance, AI and ethics: I wouldn't say it's just this room, but it's a small enough number of people thinking about it in a smart way that it really is an opportunity to contribute. I hope this will kickstart some of you to make it, if not your main thing, at least a peripheral thing you're interested in.

Joi Ito:

… at least mentioned these two in the morning, but they don't know the context in which I mentioned them. So, with that, maybe we'll just go around briefly, though I might double-click on a few of the things you say and ask you to go deep. Describe roughly your work and your point of view on machine learning and ethics, and then we'll try to have a conversation with everyone.

Pratik Shah:

I'm Pratik Shah. I am fundamentally trained as a biologist: I have a PhD in microbiology, and I may be one of the few biologists who's actually in machine learning. I think it's probably fair to say that plenty of machine learning people are doing health, but there are very few people who are trained fundamentally in biology and doing machine learning. I happen to be in the reverse demographic. I'm fascinated by understanding complex systems and learning how things learn …

Pratik Shah:

… and their relationship to human cognition, and that's why I became a biologist: in biology there's a lot to understand, and you need to know how to put all that knowledge in the context of what we already know. That's what first attracted me to biology, when I was a child. Over the last two or three years I've become very fascinated with making machines cognitive like us, and by that I mean that there is machine saliency and, as Joi may have mentioned, there is human saliency.

Pratik Shah:

So when humans look at an object, they have a certain cognitive bias, or evolutionary benefit, that lets them pick out patterns and shapes. The machines we are training right now, although in these early systems they operate at what I might call a lower level of cognition than humans, have unique points of saliency. And those salient views that machines, or algorithms, have can be exploited for medical knowledge.
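To make "machine saliency" concrete: one standard way to see what an image classifier attends to is to take the gradient of its output score with respect to the input pixels. Below is a minimal sketch, assuming PyTorch; the model and image are placeholders, since the panel does not name any particular system.

```python
# Minimal sketch of gradient-based "machine saliency" (in the spirit of
# Simonyan et al., 2013). The model and input below are stand-ins only.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder for any trained classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a medical image

score = model(image).max()  # the class score the model rates highest
score.backward()            # gradient of that score w.r.t. the input pixels

# Pixels with large gradient magnitude are what the machine attends to;
# these regions often differ from where a human expert would look.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```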

Pratik Shah:

My research program, and one of the areas Joi and I have discussed, is to use machine saliency to find new medical information that humans may think is unnecessary or not useful; that's one. I'm also very passionate about how learning systems can be applied at the point of care: translating all the information we have into point-of-care medical technology, and making machine learning more accessible and deployable to help people at the bottom of the pyramid. I grew up in India, in a relatively middle-class family, and I've seen both sides of the spectrum, so I'm very passionate about that.

Pratik Shah:

And the final one is ethics, which Joi and I were discussing briefly yesterday, especially in healthcare and pharma: billions of dollars are being spent on potentially life-saving drugs, and we don't have a good idea of what the human ethics are there. As machine learning is introduced, I think we have an opportunity to build the right ethics into those learning systems.

Audience:

Thank you for passing it and not throwing it; that scared me. So: Sarah Holland, Assembler, and Google public policy. I have a question about humans in the feedback loop, specifically the end user. To what extent should the end user be in the feedback loop, and in what context? Is that collective accountability or influence? Is that representation, or is that garbage in, garbage out?

Joi Ito:

When you say feedback, can you define that a little bit more?

Audience:

A feedback loop as in constantly saying, "Yes, this was right. No, this was wrong," and vice versa. In my head I think about Smart Reply: I liked that response, I didn't like that response. But more in terms of humans gauging the "accuracy," and I put that term in air quotes, of what that means. Is that good, and in what context? Is that representative? Is that accountability, or is it sometimes, with bad data, garbage in, garbage out?
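In code, the loop being described is simply: log each end-user verdict, then fold those verdicts back in as training labels. The sketch below uses entirely hypothetical names and is not how Smart Reply actually works; it only illustrates where the "garbage in, garbage out" risk enters.

```python
# Hypothetical sketch of a "yes, this was right / no, this was wrong"
# end-user feedback loop. All names here are invented for illustration.
from collections import defaultdict

feedback_log = []  # (model_output, user_liked_it) pairs collected in production

def record_feedback(output: str, liked: bool) -> None:
    """Store one end-user judgment of a model output."""
    feedback_log.append((output, liked))

def build_training_labels() -> dict:
    """Turn user verdicts into labels for the next training round.

    The garbage-in, garbage-out risk: if the users who click are not
    representative, the majority vote below encodes their bias as truth.
    """
    votes = defaultdict(list)
    for output, liked in feedback_log:
        votes[output].append(liked)
    return {out: sum(v) > len(v) / 2 for out, v in votes.items()}

record_feedback("Sounds good, see you then!", liked=True)
record_feedback("Sounds good, see you then!", liked=True)
record_feedback("No.", liked=False)
print(build_training_labels())
```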

Joi Ito:

Can I just add, and tie that to another related piece that came up at the end of JZ's talk, one that Pratik can also work on: it ties to the explainability thing, right? If you're a doctor and you don't understand the output, that's also an interesting feedback problem. One of the things JZ showed, Pratik, you didn't see this, was a paper in medicine where machines were predicting, what was it, death by cardiac arrest? By heart attack? But there wasn't really any explanation for the relationship, or it was very complex. You're also showing, and you can talk about this a little bit more, that maybe there actually is a theory or an understanding that could be derived, just not through our current framework, and that may be worth listening to and thinking about.

Joi Ito:

So there's the simple thing of just feedback, "yes, no," but I think there's a much bigger one, where suddenly the computer gives you a whole bunch of facts that completely contradict any framework you have. What is the response of the human being, and what is the relationship with the computer? So there are two kinds, really, when the machine and the human are interacting with-

Pratik Shah:

Very quickly, can I make two other quick comments, tying in with what the class was saying and what you were saying? Machine learning, or AI, or whatever we are working on, was invented by computer scientists doing math, and now we expect machines to behave like humans even though they were essentially derived from mathematical principles. I'm a huge fan of mathematical principles, because it's math, but there are other machine learning models that start from biologically inspired systems like the brain and the neocortex, and that make more lateral connections between neurons in an algorithm. That is one way to move toward a more humanized way of looking at machines.

Pratik Shah:

That is one comment. The second is that error in machine learning is highly penalized right now: the machine cannot be wrong. In fact, if you look at computer science publications, the question is always the area under the ROC curve of your algorithm, and that is the accuracy. As humans we are usually wrong about many things, yet we hold machines to a higher moral standard. That goes to Joi's question: in healthcare this seems incredibly important. If you're diagnosing someone, you cannot make a mistake, and our entire healthcare system is set up with those paradigms, which I think are useful and should be enforced. And then, as Joi pointed out, there are many things we are discovering in my research where the policy, as we call it, that an expert or a doctor uses to treat a patient is X, and when we use machine learning, reinforcement learning, or advanced unsupervised learning techniques, the system comes up with a completely new machine policy, as we call it, that kind of defies, as Joi pointed out, what we understand about treating the disease.
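The metric being referenced is the area under the ROC curve (AUC), the standard way publications summarize how well an algorithm ranks positive cases above negative ones. A minimal sketch, assuming scikit-learn and toy values:

```python
# Minimal sketch of the ROC-AUC reporting convention mentioned above,
# assuming scikit-learn. The labels and scores are toy values.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]  # e.g., disease present (1) or absent (0)
y_score = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.3]  # model's predicted risk

auc = roc_auc_score(y_true, y_score)  # 1.0 = perfect ranking, 0.5 = chance
print(f"AUC = {auc:.3f}")
```

An AUC near 1.0 is what "cannot be wrong" looks like in a paper; the point being made here is that this single number says nothing about which errors matter ethically.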

Pratik Shah:

So I don't know the answer to the question, to be very honest. I think more research is needed, and we need to accept that machines can be wrong, but so can we. And we need to come up with new paradigms of learning ethics that can incorporate this back and forth, so that humans and machines work together without antagonizing each other.
