Civilizing the Internet of Things | Francine Berman || Radcliffe Institute

– After that introduction, I
feel like I should be taller. [LAUGHTER] Thank you so much, Alyssa,
I really appreciate it. I want to talk
with you all today about the Internet
of Things, and it might interest you
to know that today, there are many more
connected devices than there are humans on Earth. And if we think about
it, we’re really talking about
a connected world, and I want to talk a
little bit about what it is, where we’re headed, but
I really want to start with you. So this is the audience
participation part of the talk. And the question is,
are you connected? So how many of
you have a Fitbit? Raise your hand. Keep your hand up. How many of you have an
Alexa system at home? OK. How many of you have a
Roomba-connected vacuum cleaner? Very useful if
you have pet hair. How many of you know someone who has, or yourselves
have, an implantable connected medical device? A pacemaker or an insulin
pump or something like that? Maybe somebody in your family. How many of you have an iPhone? There you go. How many of you have a Tesla? Alyssa, this would be you. How many of you have ever been
caught speeding or running a red light by a
surveillance camera? You don’t have to
answer that one. [LAUGHTER] So if you think about
it, I would say most if not all of you have
connected devices, and so welcome to
your world which is the world of the
Internet of Things. The Internet of Things is
a deeply interconnected set of sensors and
cameras and smart systems and all kinds of devices
and other technologies that are connected
to one another, that share and
exchange data, that work together to make decisions,
and often operate autonomously in the background with or
without you knowing it, and that turns out to be
the rub a lot of the time. The Internet of Things has
the promise of technology that will really
enhance our environment. So the idea is that the
Internet of Things or the IoT is going to empower
people through technology and technology
through intelligence. And we’re seeing that now. At this nascent point in
the Internet of Things, we’re seeing autonomous
decision-making, we’re seeing products
that provide customization and personalization,
optimization, we’re seeing monitoring that
makes environments safer, that helps us understand
what’s going on, we’re seeing efficiencies,
we’re seeing smart technologies, we’re seeing IoTs
that run cities and drive cars, that fly
planes, that tell the grocery store when you’re out of
milk in your refrigerator, and all of these
kinds of things. So these benefits are driving
greater and greater uptake of the Internet of Things. So how prevalent is it? Well, we’re just seeing
the tip of the iceberg. So the Internet of Things will
become more ubiquitous and more unavoidable as time goes on. These days, Gartner
predicts that there are about 20 billion things
in the Internet of Things as of next year–
that’s roughly three connected devices
for every human on earth. The Internet of Things
is supposed to bring in an economic impact
of between $4 trillion and $11 trillion– trillion with a T– by 2025. The surveillance industry
alone is expected to bring in roughly $63 billion by 2022. We expect almost
all, if not all cars, to be self-driving by 2050. So the Internet
of Things is truly an environment that is going
to promise to change pretty much everything in our society. So what are the
benefits of that? And it turns out that
there are many benefits of the Internet of Things. And if you think about it,
it’s changing medicine, we already have robotic
surgery, and that means that whether your
surgeon is in the operating room with you or whether the
surgeon is hundreds of miles away, they can use
precise instruments to operate on people. We think about robots in
a lot of environments, but one really
interesting use of robots is to send them into
disaster scenarios and allow them to save humans
and other kinds of things in toxic environments. How many of you have
had an avocado lately? Here’s precision agriculture
with avocado plants, and precision
agriculture allows us to use sensors to decide
how much nitrogen is needed, how much water is needed,
how much fertilizer is needed at the resolution
of an individual plant. And so your smart tractor
and your smart combine and all of your smart
vehicles on the farm can actually apply just
the amount of resource that those plants need. For farmers that means
they have higher yield, that means that they– it supports economics
for them, but it also makes their farms
more sustainable because they don’t have to put
a lot of nitrogen in the soil if only some plants need it. So there are a lot of
really great benefits to the Internet of Things, and
enough so that it is ubiquitous and it is increasing over time. On the other hand,
there is a lot of risk in the Internet
of Things as well. So there have been
multiple articles about security problems
with baby monitors where these connected
baby monitors– intruders have hacked them,
they’ve screamed at babies, they’ve threatened parents,
pretty scary stuff. We’ve seen crashes
of self-driving cars, we’ve seen conversations
that have been taken up by Alexa being shared
inappropriately, we’ve been seeing cyber
vulnerabilities of pacemakers and other kinds of
connected devices. We’ve seen a lot
of surveillance– and there’s a real
national discussion going on as to whether that
is protective or intrusive. When is surveillance good and
when is surveillance too much? And so the question becomes,
is the IoT a future utopia or is it a future dystopia? And that’s a real
question for all of us. And the fact is,
how the IoT fulfills its promise to be
beneficial or put us at risk is really up to us,
and it’s up to us as we develop and
nurture the IoT that we’ll be seeing
in the future. So how do we embed these
amazing technologies in the larger world? How do we move the IoT towards
benefits and not towards risk? And so one thing
to think about is it may be kind of reassuring to
know that we’ve actually been here before. So not many of us are old enough
to remember the Industrial Revolution, but
what we always see is when there are
disruptive technologies, you see a tremendous
technological advance. But the social frameworks
and the ethical guidelines for sort of using
these technologies take a while to catch up. So this is an image–
thank you, Wikipedia, which I just donated to, by
the way– yesterday was Giving Tuesday, right?–
of the Iron Bridge at Shropshire.
This really happened during
the Industrial Revolution, and because it became much
easier to manufacture iron, there was– we had steam power that
was more ubiquitous, we had iron production,
and chemical manufacturing became more ubiquitous,
and it turns out that all of these goods
and services really, really changed life. But the goods and services and
the technologies in some sense might not have been the
most important changes. The most important changes were
social and economic changes. So if you think about it, it was
a time when cities really grew. Standard of living really grew. It was our first factories,
it was the first time people could get lots
of goods and services. It changed what people learned,
it changed the economy. It was the first time we saw
child labor laws and some of the first environmental laws. So if you think about the
social and the environmental and the ethical and
all of the other sort of big social ecosystem that
surrounded these technologies, they were at least as important
as the technologies themselves. And we’re in that
environment now. So we’re in the
environment where we start looking at all
these wonderful disruptive technologies, but maybe the most
disruptive part is what are we going to do with
the social framework with the planetary impacts
that are going to host them? So when I start thinking about
the Industrial Revolution, just as I think about
the Internet of Things, you know it’s reasonable
to think about, well what does success look like? What does success look like
for the Internet of Things? And if I want a
public-focused IoT utopia, I should have real
measures of success, and reasonable measures of
success are, first of all, we need to have a
planet that sustains the Internet of Things–
so the IoT should be good for the environment. And if we go a little
bit deeper on that, what does that mean? It needs
to promote sustainability, it needs to minimize
e-waste, it needs to avoid depletion of
rare materials, et cetera. And then, of course, society
lives on our planet Earth, so we want the IoT
to be good for us. And what does that mean? We need to be safe
and secure, we need to be able to have
reasonable privacy, we need to be able to
maximize the benefits and minimize the risks. And so that’s a great vision,
but how do we get there? How are we going to get
to that kind of IoT? And it turns out
that there are going to be a lot of players in that. All of us are going
to be involved. It’s going to be really
important for the government to be involved
because there’s going to need to be policy and
legislation and some sort of social framework. It’ll be important
for business to be involved so we can responsibly
design IoT products. It’ll be important for the
public to expect transparency and an understanding of what
good use– appropriate use is. It’ll even be important
for us in academia to help make people aware of
what the rewards and the risks are of the IoT and to be able
to help new innovators really deal with the kinds of problems
that we’re going to see. So it turns out that all
of this is really hard. Let me say that again–
all of this is really hard. And let’s sort of take
a little bit deeper dive into what some of these issues
are, why they’re so complex, and why they’re so hard
by looking at arguably what many people’s favorite IoT
system is– self-driving cars. So let’s think about
self-driving cars for the IoT. And we’re going to just
look at a case study on this and we’re going to ask
ourselves the question, are self-driving cars
good for the environment, are they good for us? But before they do–
before we do that, let’s level set a
little bit and let’s see, how do self-driving
cars actually work so we know what
we’re talking about. So it turns out that a
self-driving car is basically a car and stuff, right? And the stuff is components
that help the car see, and computers that
help the car analyze what those components
see, and model what the car is
supposed to do, and then tell the car to actually
actuate what it says. And self-driving cars drive
just like you and I do. So what we do is we
sense our environment, we plan what we’re going
to do, and then we act. And of course, we do
that in a fraction of a second, and the self-driving car
needs to do that as well.
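To make that sense-plan-act cycle concrete, here is a minimal sketch in Python. Everything in it– the sensor readings, the thresholds, the action format– is a hypothetical stand-in made up for illustration, not any vendor’s actual control software.

```python
import time

# A toy sense-plan-act loop. All names and numbers here are stand-ins
# invented for illustration; a real self-driving stack is vastly more complex.

def sense():
    """Gather readings from the cameras, radar, lidar, GPS, and so on."""
    return {"obstacle_ahead_m": 42.0, "lane_offset_m": 0.1}

def plan(readings):
    """Fuse the readings into a model of the scene and pick an action."""
    if readings["obstacle_ahead_m"] < 10.0:
        return {"throttle": 0.0, "brake": 0.8, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": -readings["lane_offset_m"]}

def act(action):
    """Send the chosen steering/throttle/brake commands to the actuators."""
    print(action)

# Like a human driver, the cycle repeats many times every second.
for _ in range(3):          # a real car would loop continuously, e.g. at 50 Hz
    act(plan(sense()))
    time.sleep(1 / 50)
```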
So the Society of Automotive Engineers actually grades cars in terms
of their ability to self-drive. So a level 0 and a level 1
car is the absolute bottom. So a level 0 car, none
of us have that; that’s like a Schwinn
bike that some of you had when you were a kid. It has no
system that’s really going to help you other
than your own power. But a level 1 car
is like my car. I have a 2014 Subaru, it
has a few systems, a radio, things like that. It doesn’t help me
back up, it doesn’t help me figure out
where the lanes are, it doesn’t help me do anything. And I drive it OK. So that’s a level 1 car. These days, the cars that
you can get commercially are typically level
2 and level 3 cars. And those are cars
that expect the driver to be more or less engaged. So a level 2 car
has a few systems, and as you can see on the image,
it has GPS and video camera and perhaps some other
things to help you with. Level 3, level 4, level 5
cars have a lot more stuff and we’ll talk a little bit
about that in the next slide. But the idea is that
many of these cars can do some driving or
some kinds of functions on their own, you have to
help them with the rest. Now the fact is we have
no level 5 cars yet. So a level 5 car is a car
that can drive itself, it doesn’t need a
driver, and it can do that in all possible scenarios. We have no car
that does that yet. Waymo and GM and Uber and
a bunch of other companies are working on level 4 cars
for ride-sharing, right? And these are all sort of
still in the pilot stage. The kind of cars that
you can buy, though– Alyssa bought–
is a level 3 car. So if you have a Tesla
or some other car that can do a fair amount
of the driving for you, that’s sort of the
state of the art today. And the state of the
art is changing fast.
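For reference, the SAE levels being described can be written down as a simple lookup table. The one-line summaries below are informal paraphrases keyed to the examples in this talk, not SAE’s official wording.

```python
# SAE J3016 driving-automation levels, paraphrased as a lookup table.
# The summaries are informal paraphrases tied to the talk's examples.
SAE_LEVELS = {
    0: "No automation: the human does everything (the Schwinn bike).",
    1: "Driver assistance: a few assist features (the 2014 Subaru, in the talk's example).",
    2: "Partial automation: steering plus speed control, but the driver stays engaged.",
    3: "Conditional automation: the car drives in some conditions; the driver takes over on request.",
    4: "High automation: no driver needed within a limited domain (the ride-sharing pilots).",
    5: "Full automation: no driver needed in any scenario (none exist yet).",
}

for level, summary in SAE_LEVELS.items():
    print(f"Level {level}: {summary}")
```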
So how do these self-driving cars see? And it turns out that they
sense their environment using a variety of
different kinds of sensors. They may bounce light waves
or radio waves or sound waves depending on which
component it is. And each of these components,
each of these sensor systems have different benefits, they
have different liabilities, they have different
ranges, and they have different error modes. And so the idea is to
have a number of them so that in combination, it’s
going to be fairly reliable, you’re going to get exactly
the resolution and precision that you need to drive
safely, and you’re going to be able to drive well. And as you can see
from the picture, they’re going to handle
different parts of the driving. So how the car sees turns out
to be an amazing engineering feat: you have to figure
out what the error ranges are, how these systems can
replicate one another, when you need one system
and when you need another.
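As a toy illustration of that kind of redundancy, here is a sketch that fuses distance estimates from several sensors by weighting each by a confidence value and flagging disagreement. The sensor names, weights, and thresholds are invented for the example; real perception stacks use far more sophisticated probabilistic fusion.

```python
# Toy sensor fusion: combine distance-to-obstacle estimates from several
# sensors, each with its own trust level, and flag when they disagree.
# The sensor names, weights, and threshold are invented for illustration.

def fuse(estimates):
    """estimates: list of (name, distance_m, confidence in [0, 1])."""
    total_weight = sum(conf for _, _, conf in estimates)
    fused = sum(dist * conf for _, dist, conf in estimates) / total_weight
    # If any sensor is far from the consensus, the system may need to fall
    # back to a safer behavior (slow down, hand control to the driver).
    disagreement = any(abs(dist - fused) > 2.0 for _, dist, _ in estimates)
    return fused, disagreement

readings = [
    ("lidar",  41.8, 0.9),   # precise, but degrades in heavy rain or fog
    ("radar",  43.5, 0.7),   # robust to weather, coarser resolution
    ("camera", 40.9, 0.6),   # rich detail, struggles in low light
]
distance, needs_caution = fuse(readings)
print(f"fused distance: {distance:.1f} m, caution: {needs_caution}")
```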
So the question is, OK, that’s how it sees, but what about what it sees? So this is a very cool clip of
a TED talk given by Chris Urmson from a few years ago–
he worked on the Google self-driving car project. And it shows you how complicated
this model actually is. So what you see is
pedestrians, you have to figure out whether
they’re stationary pedestrians, like law enforcement who
might be directing traffic, or whether
they’re going to be moving. You see bicyclists,
you see different kinds of cars and trucks. There’s construction there. You have to pay attention
to where the lines are and where the
obstacles might be, and who might be doing something
that’s a little unanticipated. So you want to be able
to model that as well. And just in terms of looking
at what an amazing feat it is to write a program
that can do this even a fraction of the time, it’s
a little mind-blowing, because these are incredibly
difficult models to develop, and in some sense, I’ve
read that these are actually harder than airplane models. Because of all the things
that happen on the road now, if you came
here on an airplane or you’ve ridden on
an airplane recently, a lot of what your airplane
does is autonomous, right? And the pilots are
needed for some things, but a lot of what
the airplane does doesn’t involve the pilots. Well a self-driving
car, we’re trying to get certainly to that
point, and it turns out that it’s harder. So how prevalent are
self-driving cars today? How prevalent are the
cars where you want to see this kind of a model? And it turns out today that
only a fraction of the cars that we see on the road– roughly 1% or less– are self-driving in any way,
shape, or form, basically at level 3 and up. And here’s some
statistics that might kind of help us understand that
we’re not yet prevalent at all. There were about 1.4
billion cars in 2017. There were– by 2020, we expect
there to be about 10 million– and that’s level 3 and above– self-driving cars. Americans drove more
than 3 trillion miles. Waymo’s car, which is the one
that has sort of the big reward now, travels about 8 million
miles on public roads. So again, it’s
really a fraction. But as you can see
from this graph, the predicted sales
of self-driving cars– and of course, none of our cars
now will be driving in 2050 or very few of them– make it more and
more ubiquitous. So by 2050, we’ll be selling
almost all self-driving cars and almost all self-driving
cars will be on the road. OK. So that gives us a little sense
that they’re not prevalent yet, but they’re going to be. That I basically have a car
with a whole bunch of gear, and that we’re really– this is really going to be
the future of transportation in some sort of a real way. So let’s get back to sort
of our measures of success, and let’s start with the
measure of success of is a self-driving car
good for the environment? And it turns out that
the answer is not yet. So you have both electric cars
and fuel-injected cars that are self-driving these days. Certainly better
for the environment are the electric
cars, but the fact is that to provide all of the
cameras and all of that stuff– and if you remember
from the picture of the self-driving cars, it
has stuff sticking out all over the place. And all of that stuff makes
the car less aerodynamic. So it’s heavier, it’s
less aerodynamic, you have more drag to it. So even if it’s
an electric car– and then don’t forget
that these emissions, you’re also going
to get emissions during production of the car. So you have to count not
just operation of the car, but production of
the cars as well. And so the estimate is that
you have between 3% and 20% additional emission to this– to a regular car to
provide a connective– connected autonomous vehicle. Now the fact is that
there are things we can do to reduce those
emissions, and a lot of that are in the category of what
you think of as eco-driving. So the idea is if you brake
a little bit more smoothly and you accelerate a little bit
more smoothly, if you platoon, which is we’re seeing now
trucks are starting to do that– and we’ll see that more
and more with cars, a little bit about
that in a bit– you can actually sort of
do things more efficiently. And that should give us 9%
or 10% better performance for these cars. But of course, that’s not
really the environmental impact we want. We want an environmental
impact that comes from not just these kinds of strategies,
but also better design. So one thing we can do
is posit what will cars look like in 2050, and
I fought with myself about whether to give you
a Jetsons picture for this but decided not to. In 2050, cars are not going
to resemble I think anything we’ve seen. And so here are some
designers’ ideas about what cars might look like. Today’s car is
thousands of pounds. The new right-sized pods that
we will be traveling in– and right-sized means I
order up a car and ride share and it says, how many people? And I say, oh, just
me, and they say, OK, we’re sending your pod,
and that’s hundreds of pounds. So they’ve made it lighter,
they’ve made it faster, it can go more miles an hour. They’ve jettisoned some
of the safety equipment, so now I have fewer accidents,
but maybe those accidents will be more severe, there’s
a trade-off right there. But what we’ll see in 2050 is
we’ll see more eco-driving, because all of these
cars will do that; we’ll see higher speeds; we’ll
see congestion mitigation, because all of these cars will
be connected to one another; and so a lot of the times
that traffic jams up will be handled by this
big connected car system. We’re also going to
see more mobility. So if you have to pick
up the kids from school, just send the car. If you want mom or dad to
come for Thanksgiving dinner, you might just send
the car, and that car– and you might be able to send
the car to someone hundreds of miles away because
those cars are going to be going not
tens of miles an hour, but hundreds of miles an hour. So if you think about this,
it changes everything. And this is just the cool
technology part of it, so let’s think again about
the Industrial Revolution. Remember, the most
important part arguably of the Industrial Revolution
was not the gee whiz cool manufacturing
that you could do, but how it was going
to change society. So how will these
things change society? And it turns out that they’re
going to change society in a tremendous number of
ways that we haven’t even thought of. It’s going to change
how we settle in cities. So if this car can go
hundreds of miles an hour– and I can do anything
I want in it. I can work, I can sleep,
I can watch a movie, I can meet with friends,
I can have a meal, I can do anything
I want in this car. Maybe I’m going to
live in Washington, DC and work in Boston. Or maybe I’m going to visit the
relatives in Chicago and just take a few-hour car trip or– well, I don’t know
how many hours, but the idea is that it
will really change that. Police won’t need to stop these
cars for speeding anymore because that’s going to
be handled by the system. They’ll be looking at
doing something else. When we buy insurance for a
car or think about liability for a car, we have
to figure out who’s going to be responsible
in an accident? What am I being insured for? If it’s a ride service, am I– is it like renter’s
insurance that I’m sort of insured for like–
yeah, Lexi is nodding her head. That I’m going to be– this
is one of our wonderful lawyer fellows– that I’m going to be– that I’m going to be just buying
this to make sure that I’m OK? It’ll have economic impacts in
terms of whether you own a car or whether you just
use the service. And there may be
some things that we want to do in terms of
policy and regulation to make sure that the
economics of it work for us. Maybe it will make a
difference in terms of public transportation,
and maybe that will become privatized
to a larger extent. Evolution of jobs and
services might change. There might be now new jobs when
you just swap out a battery. So you’re not going to
go to a charging station, you’ll just have an
extra battery somewhere and someone will swap it out
while you’re at your meeting because the car will
actually be roaming around. So one of the kind of most
fun things to think about is what will it do to parking. And it turns out that in
the average urban center, roughly 30% of the area
is used for parking. Now our cars spend 95% of their
time parked, right? And so if you think about
this in the business center– and now I’m talking
about street parking and underground parking, and of
course, New York introduces us to this tiered parking, and I
kind of spread all of that out, it turns out to
be 30% of a city. In Los Angeles, it’s
80% of the city. And if you think
about these cars, are they going to need to park? They’ll drop me off, they’ll
either go pick up someone else or they’ll roam
around or they’ll go off and change their
battery or they’ll park offline or they’ll do a variety
of other things. And so the interesting
thing is, now I don’t need all that room
in the city for parking. What can I use it for? Can I use it for something
new, something else I’m going to be doing in 2050? So it turns out that all of
the different legislative, all of the different public
infrastructure, all of the different land use,
all of the different places where a car touches our society
are also going to be different, and not just sort of
the coolness of having a pod. So let’s think about
this a little bit more. So we thought about
environmental aspects and societal
aspects, but society is made up of individuals. So let’s think a little
bit more about how this will impact us as people. And so let’s go just for a
moment to our IoT success measure 2– are self-driving
cars good for you? By the way, you don’t see
a steering wheel at all in the self-driving car,
because self-driving cars won’t need steering wheels in 2050. So here’s the question, here’s
the question I want to ask– are they safe, do I trust them,
do they protect my privacy? So let’s see if we can think
about that a little bit. So first of all, what can go
wrong with a self-driving car? Because I’m not going to
trust it if everything is going wrong all the time. And it turns out that
a lot can go wrong. Your systems may not work. There may be weather that’s
problematic– your camera can’t see or maybe the systems
themselves go out, they don’t work as expected. There could be
security problems, and there was a very well-known
experiment a few years ago where some hackers took over
a car from hundreds of miles away, a journalist
was driving it, and they basically
ran him off the road. He was OK. But the fact is, if there
are security vulnerabilities in every system, and if we don’t
expect robust enough systems, these are things that
can happen in your car. So we want to definitely
design our cars to be more secure,
to be safer and so that these systems will work. Now what risk are we
willing to put up with? And it turns out that
the risk of dying in a car crash
over your lifetime is 1 in 102, which
seems like a lot, but your risk of dying over
your lifetime of a heart attack is 1 in 6, so you might want
to calibrate that a little bit. Actually, if you can, you’re
better off taking a plane. Your risk of dying in an
airline crash over your lifetime is 1 in 205,000,
which is much better. So the question is, how safe do
self-driving cars have to be? And the fact is– and
I’ll just give you a heads up for the next slide, we
have no statistics yet, because we really
don’t have enough data. So here’s the data we do have. We know that there have been– to my knowledge and
Wikipedia’s knowledge– six fatal car crashes
for self-driving cars. Five of them were
for the driver, and the sixth was a
pedestrian on the road. Most of them were Teslas. That’s not necessarily because
Tesla is a worse system, but Tesla is a more
commercially available system than the other systems
that were being tested. We know that we actually measure
safety in a more robust way. We don’t just look
at fatalities, we look at serious injuries
and crashes, roadsmanship, which means how much
is your car cheating. People often cheat in
cars, they sort of go outside the lines or
something like that. And disengagement, which means
how often does the driver have to take over,
because all of these cars have drivers in them. And it turns out that the
only state that’s actually collecting data on some
piece of this– which is the disengagements, the
number of times you take over the car– is California. There is a law
in California that says, if you are piloting these
cars on public roads, you need to report,
per vehicle miles traveled, your number
of disengagements.
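That reporting makes it possible to compute a simple rate– miles driven per disengagement– as a rough comparison metric. The sketch below just shows the arithmetic; the company names and figures are placeholders, not the actual 2018 filings.

```python
# Miles per disengagement, the kind of rate California's reporting
# requirement makes it possible to compute. The figures below are
# placeholders for illustration, not any company's real 2018 numbers.
reports = {
    "Company A": {"miles": 1_200_000, "disengagements": 110},
    "Company B": {"miles": 450_000, "disengagements": 90},
}

for company, r in reports.items():
    rate = r["miles"] / r["disengagements"]
    print(f"{company}: {rate:,.0f} miles per disengagement")
```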
So of all of the cars that are traveling, Waymo and GM are
our number 1 and 2, and here is their reported
statistics for 2018. Now it turns out that
the vehicle miles traveled for all of the cars are only
roughly 2 million miles, so it may not be as
ubiquitous as it will be. But it’s sort of
interesting if you ask the question, what’s my risk
of dying in a self-driving car over my lifetime? It’s clear we do not
have enough data. We just don’t have enough data
to be able to answer that, and that’s part of
what we need to do. So one thing you would
expect is for states to expect companies
to report that. But that’s another problem,
is that legally, we have a whole patchwork of things. So I’ll tell you about
that in a little bit. So let me ask you
another question next. And the other question
is, it’s not clear how safe I am in a
self-driving car, how private are my conversations
in a self-driving car? And so here are two cars. One of them is mine and
one of them is Alyssa’s. Alyssa put on the entertainment
for the purposes of this photo, but we were not
driving and watching a movie at the same time. And my car is a level 1,
Alyssa’s is arguably a level 3. And the data collected
about you in many cars, including self-driving
cars, includes technical data, societal and crowdsourced
data, and personal data. Now my car pretty much
collects technical data, not so much crowdsourced data,
not so much personal data, because it doesn’t have
the stuff to do it, but Alyssa’s car does. So the question is, which
of our cars is more private? If we want to commit a crime
together, Alyssa and I, and we want to have that
conversation in a car, should we do it in my
car or Alyssa’s car? We should do that in my car,
I’d just like to point that out. The personal data that
self-driving cars collect may include a lot of stuff. It certainly includes where you
go, it includes where you are, it might include what
you’re doing or saying. If you want to talk
to OnStar, OnStar has to be listening
to you, right? It includes how you drive. It includes your
biometrics– are you putting your hand on the wheel? And it may figure out
other things about you. It might include how much
alcohol you consume– you’ve seen those little
insurance plug-ins that don’t let you start the car
if you’ve had too much alcohol. Avoiding drunk driving,
so that’s a good thing. It might include what
you’ve listened to. It might include your
phone calls, your tweets or the websites that
you use through the car. And the question is, what is
that information used for? In Alyssa’s car, Elon
Musk gets to decide that. Because there are no laws
preventing any of that data from leaving the
manufacturer, or being shared or exchanged with
other kinds of manufacturers. So that information
could be exchanged with law enforcement,
or insurance companies, or courts of law,
or somebody who wants you to go to their coffee
place instead of your coffee place so they can market to you,
or all kinds of other things. And without protections for us
and our privacy in these cars, something needs to
be done in order to make that a private
space like my car is. So what do the laws look like
for privacy and for safety and for data that we might get? And it turns out that the US has
a tremendous patchwork of laws. And so if you look at– and there are very few federal
laws that cover these things. So if you look at the states– and I do the proverbial
road trip between– on Route 66 between
Chicago and LA, I’m going to California
where California has a number of different
laws on self-driving cars, including reporting vehicle
miles traveled and disengagements. I’m also going through Missouri,
Kansas, Oklahoma, and New Mexico where there is very
little legislation that tells me anything about it. And that also gets us to
the harmonization problem. So what happens if
your car is crossing a state boundary and one
state has a very different law than another state? How do the vendors
deal with that? How do the drivers
deal with that? How does anybody deal with that? So it turns out that
all of these things are really important, and
that the legal system will have to provide a
framework which allows us to deal with self-driving cars. Now I don’t want to end
the self-driving car part without telling you
that as a technologist, the technologies behind
all of these things are tremendously important. And so anything that has
societal and environmental repercussions is
something that we can try to deal with in the
design and the architecting and the implementation
and the manufacturing and the operation of these cars. And it is not easy. So if I think about what
happens to the data– here’s a really good question– and the stewardship of the
data, where does it live? Does it live in my car? Does it live in the cloud? Where should it be hosted? How long do I have to retain it? What kind of metadata do I
need to think about that? What kind of access policies? Who owns it? Who can use it? What about the security
of these systems? How high does the
level of security have to be for my car
or my baby monitor or my Alexa or my
Fitbit, et cetera? Who should have
access and control? And of course, all of us
in the research environment want to be able to
design and innovate so we have new generations
of innovations. And of course, when we tried
to do science on this data, how much of it is open? How much of it is protected? How can we do
reproducible science? What do we use as
a control group? If we’re doing artificial
intelligence algorithms, which many of these
algorithms are, how do we make sure we’re not
introducing bias in them, or that the outcomes
are actually representative of the kinds
of things we’re doing? So all of these things
are really big issues for the technology community. And every time we say, let’s
do something around privacy, for example, then
you have to architect the system to have access
controls so that things really are private. Maybe they’re variable,
that in some instances things are private, other
instances they’re not. And so all of this means
that all of this stuff has to go hand in hand. I can’t just look at the
innovation of the car and have the car be
OK everywhere else. I can’t just make a
law and then assume that it’ll be easy to develop a
product that actually gives you that kind of outcome, it
has to happen hand in hand. So if you think
about it, there’s an incredible universe
of impacts of these cars. And it’s not just in
the cyber world, it’s– where it’s hardware
or software or data that I have to
worry about, I have to worry about the impact in the
natural ecosystem and the built environment
infrastructure, I have to worry about
society as a whole, and I have to worry about
humans as individuals. And the fact is,
it’s not just cars. It’s pacemakers, it’s
smart refrigerators, whatever your IoT devices are. And so as we said
in the beginning, there are many places
you can actually tune the IoT to be a more
beneficial, more utopian IoT. If we think about this for cars,
we can think about tuning it so that it has fewer emissions. That takes it more towards
the public interest. That we can make them safer. That, again, takes it more
towards the public interest. When we think about
things that use computers, we want to avoid
depleting rare materials. 70% of our batteries are lithium. Lithium and the other kinds
of materials that you see, some of them are tremendously
expensive or hard to get or will deplete them
and there’ll be no more, so that’s not a good thing. We want
to turn it towards the public interest
in terms of protections. We want to turn it towards
the public interest in terms of upgrading our devices. If I throw out my cell
phone every few years because my vendor
expects me to, I’m adding to the 40
million tons of e-waste that we add to the
earth every year. If I upgraded software more
often instead of hardware, I’m doing less of that. So there’s lots of ways
that we can turn the dials, and we haven’t even talked about
the ethics of creating systems that really support
the public good. So I know the thing that
you’re all thinking to yourself is, well that’s fine, but
shouldn’t Mark Zuckerberg just solve this problem? And the fact is,
whether you think it’s Mark Zuckerberg or Jeff
Bezos or Elon Musk or Bill Gates or any of the
titans of Silicon Valley to solve the problem, we’re
going in the wrong direction. If it is our strategy to haul
these folks to Washington and ask them why
they’re putting business interests before the
public good, we’re doomed. The fact is that it’s really
important for the public sector to sort of stand up and
really set the lead for that. So what can our government do? And it turns out that
the government can help in lots of different ways. We have an abominably
small number of privacy and security
and safety expectations for IoT products. Now I understand that we want
to let innovation go for a while because we don’t
want to really hold it back, but we have enough
experience with many products to know they need to
have better security, they need to have privacy,
they need to protect us better. And so what that
means is that we have to get more serious
about social frameworks. We have to know what
rights we can expect in the Internet of Things. We have to know what kind
of data protections we have. We need an IoT equivalent
of OSHA to keep us safe, or FDA to plan when
products can come out. We need guidelines for
ethical development. We need clarity on
who’s responsible and who’s liable
when things go wrong. And we need to regulate
the private sector. And it turns out that when we
regulate the private sector, saying you have to do this,
if you have workers that work with asbestos, they
have to wear the right gear, for example, that’s
something we get from OSHA. We can regulate
the private sector so they do things more
in the public interest. Turns out they can
do a lot of things in the public interest,
especially when the expectation is that they will. They can focus on more
responsible design. So that design would be
public-focused design, environmentally-focused design. They can focus on more
responsible practice. That means that when they look
at their own supply chain, they can do that in a way that’s
more environmentally friendly and more protective for us. And they can help us
by being transparent. So we regulate food
quality and safety, and the food companies comply,
we can do this for IoT products as well. Now these aren’t
the only folks that need to help out, because
academia can help, too. And here at Harvard,
we’re very lucky to have a number of different
courses that some of you who are students have
taken that really are in the area of public
interest technology. Many other schools throughout
the country don’t have this. And the idea is that our
job here at the university is to educate and train
responsible citizens and the leaders of tomorrow. That means they ought
to know something about how technology
touches society and how society
touches technology. And it means that through
the mechanisms that we know and love in the
academic environments, through courses, perhaps through
minors and majors, publication venues for research,
like the kind of research I’m doing here at the
Radcliffe Institute, education and training, partnerships
with the private sector or practicums, et cetera, we
can have a set of citizens so that when you all go
home for the holidays and you talk to people about
net neutrality or data privacy or the Internet of Things, they
know a little bit about what that is and what their role
is, which is really important. And it turns out that last
but not least, we can do stuff as individuals as well. Maybe not with the
kind of big impact that a new law or policy– a good new law or
policy might have, but we can do a lot of stuff. So when we buy products,
we can ask ourselves, what’s the privacy
policy of that product? What level of security
does that product have? We can protect our own data. If you’re not using two-factor
authentication for your Gmail, go home and make
that happen, we can do that. And we’re in an election year. So it’s important
for you to know: what do your candidates
think about this? What is their technology policy? The head of the country has
an enormous responsibility to set the tone for where the
US is going to be in terms of information technology. It’s important to know
that about your candidate and to ask your
candidate that question. And so there are
things we can do, too. So last but not least, I
want to thank the internet for the sources and images
for this talk– there are many of them. And most important, I want to
thank the Radcliffe Institute for providing a really
transformative year. I want to thank my fellow
Fellows, all of whom are more amazing than the next. And I want to thank my
Radcliffe research partners, here they are. Emilia and Wassim and Ali who
have been tremendously helpful in this work. And with that, I’m happy
to answer your questions. [APPLAUSE] [MUSIC PLAYING]
