Rethinking the Internet: How We Lost Control and How to Take it Back

Facebook is an idealistic and optimistic company. For most of our existence, we focused on all
the good that connecting people can do. Mr. Zuckerberg, what is Facebook doing to
prevent foreign actors from interfering in US elections? If you’ve messaged anybody this week, would
you share with us the names of the people you’ve messaged? Senator, no, I would probably not choose to
do that publicly here. I think that might be what this is all about. Do you believe you’re more responsible with
millions of Americans’ personal data than the Federal Government would be? The early internet is a really interesting
and fascinating period to study. It was a time when pretty much everyone who
was involved was really optimistic about what it could mean. The web is going to be the defining social
moment for computing. It will give free access to knowledge to people
when they really need it. It suddenly seemed to shrink the world, and
suddenly even being apart, vast distances from people, didn’t matter as much. Move fast and break things, kids coming out
of dorm rooms and starting companies. All the walls that used to separate people
from information and from each other were being broken down, creating
a new, open, egalitarian world. American society and American political culture
allowed these companies to grow and to mediate all the interactions we now have. People were asking, is technology good
for us? Is the Internet good for us? This will change everything. I think it was clear that the main goal was
to get it to a stable version and monetize it. Business is working well, the majority of
our revenues, of course, are in advertising. No one really knew how fast things would evolve. No one really knew how quickly, and basically
how willingly, platforms would sell out users. When I first started covering Silicon Valley,
I began to try to understand how Silicon Valley companies like Facebook and Google made money. What was their business model? The earliest moment that I can remember when
privacy became an issue in the Internet experience was in 2004, when Google launched Gmail. Google was going to finance
this by essentially reading all your emails and then serving you targeted advertising. This was a new thing for most people because
people didn’t realize how Google was financing its operations until then. Suddenly people became much more savvy about
what was happening on the Internet. A decade ago, like 15 years ago, I would not
have thought this was possible, not just ethically, but also technologically. Changing your privacy settings at this point
is like rearranging the deck chairs on the Titanic. Not only because a lot of this data
has already been collected and curated, but also because, even when you change your settings, there
are ways for people to still scrape information from these websites. The thing that is most interesting to me
currently is the role that algorithms play in the formation of these online communities. The way these algorithms work is they find
patterns in data by looking at what humans have provided them as examples. This is one area where biases could actually
be introduced into algorithms. If you think of a word like CEO and a word
like man: most likely, if you mapped them into this vector space, you have
man here, and most likely CEO is going to be very close to man, but very far away from
woman. Looking at this type of algorithm, you could
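The vector-space picture described here can be illustrated with a small, self-contained sketch. The three-dimensional vectors below are invented purely for demonstration; real word embeddings (word2vec, GloVe, and similar systems) are learned from large text corpora and typically have hundreds of dimensions, but the bias pattern, a profession vector sitting closer to one gender word than the other, shows up in the same way:

```python
# Toy illustration of how bias shows up in a word-embedding space.
# All vectors are made up for demonstration, not learned from data.
import math

vectors = {
    "man":   [0.9, 0.1, 0.2],
    "woman": [0.1, 0.9, 0.2],
    # In embeddings trained on biased text, "ceo" lands near "man".
    "ceo":   [0.8, 0.2, 0.3],
}

def cosine_similarity(a, b):
    """1.0 means same direction; values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "CEO" sits much closer to "man" than to "woman" in this toy space.
print(f"CEO vs man:   {cosine_similarity(vectors['ceo'], vectors['man']):.2f}")
print(f"CEO vs woman: {cosine_similarity(vectors['ceo'], vectors['woman']):.2f}")
```

The geometry simply reflects the statistics of the examples: if the training text associates CEOs with men, the learned vectors will too.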
also envision how a system like Google would start to use these types of results. People are really pushing for these
algorithms to be used in every facet of human life. We don’t actually fully understand the consequences. Social media has the power to bring
together people who formerly were without a voice. We’ve seen hopeful stories, like the Arab
Spring. We’ve seen the MeToo Movement, we’ve seen
Black Lives Matter. The Internet is a great organizing tool. Are we better off now after the Internet has
come into existence than we were 30 years ago? Well, it’s changed things. It’s made some things much better, but again,
influence is how it makes money. Platforms generally need to acknowledge that
they have responsibility. Users should have a reasonable amount of trust
in a platform, in a service they’re using that their data is not being sold. Some of these companies have started to realize
the role that they’ve played and are moving towards solutions to that end. Here’s the question that all of us have got to
answer: given what’s happened here, why should we let you self-regulate? Well, Senator, my position is not that there
should be no regulation. I think the Internet is increasingly important. Do you embrace regulation? So, we’re going to begin our program with
one of the forefathers of virtual reality. He has been there from the beginning, since the early
1980s, I believe. He helped craft a vision for the Internet
as a way to bring people together. He’s a scientist, a musician, and an author,
and his new book is Ten Arguments for Deleting Your Social Media Accounts Right Now. Please welcome Jaron Lanier. Jaron, good to see you. Hello, hello. Well, you’ve said that we don’t have to throw
the whole thing away, but you also say that this Internet AI has turned us into Pavlovian
dogs carrying around devices suitable for what you call mass behavior modification. What do you mean by that? Ah, well, there was a time when we used the
mail, and I should point out when you sent a letter by mail, nobody else read it. You paid for it, but it was strictly your
business. It was a remarkable thing, almost unimaginable
today. So in those days, you could find yourself
under observation by somebody who’s trying to be sneaky and manipulate you. One way to do it would be to volunteer for
a psychology test in the basement of the psych building on campus, and then some undergraduates
would be behind the one-way mirror trying to get you to do something, or you could get
into an abusive relationship, or you could join a cult. You could find yourself on the wrong side
of an interrogation desk. There are all kinds of ways it could happen,
but they were all very localized and unusual. Now it’s happening to everybody. Everybody is under constant observation and
constant manipulation, albeit just to be very clear, it’s slight. So at this point, the degree of observation
is less than in the scenarios I just described, although it’s getting more so every day. The thing is, slight differences, slight behavior
modification applied consistently over time can shift society. It’s like compound interest. At any particular moment, you might say, oh,
how much difference could it make? Say I saw an ad from the Russian intelligence
warfare people. Say it got me a little cranky over something
or other, who cares? But cumulatively, it does have a statistical
effect on the whole society. Then as far as why I’m asking people to quit,
I’ll tell you why. It’s ’cause I love Silicon Valley. I love making digital products. I’ve sold a company to Google. I’m working with Microsoft. I adore my community. It’s my home, and I don’t want you to be passive
sheep customers. I want you to be demanding tough customers,
and I want you to make us work to make you happy. Right now, you’re all turning into passive
sheep, and it bugs me. All right. So if you quit, you’re at least prodding us. You’re at least not just sitting there passively
accepting whatever we feed you. So I think it would be better for us. I want you to challenge us, because that way
we’re not disconnected from the world anymore. Can you see that? Like when you engage with Silicon Valley,
when you’re tough, you’re actually saving us from being abandoned on some desolate island
of the super empowered. You actually help us. You actually reconnect the world. To get precise about what you’re saying: the
trend lines, how we respond in tiny bits, are being watched by
algorithms that can see what the trend is pointing to, and so it subtly changes our
behavior. Is that how it works? Yeah, the way it works is there are very few
sort of sneaky people in cubicles trying to figure out how to manipulate you directly. The Russians have some in their employ,
but the much more common thing that makes the whole system work is algorithms. So the algorithm will do a little bit of random
stuff. It will say, well, during what time of the
month is this person more susceptible to this kind of pitch? What about other people who correlate with
this person in some way? So all of these correlations turn into this
statistical, I wouldn’t call it a model of you because it wouldn’t pass muster as a scientific
model, but it’s kind of a predictive portrait of you that can be used slightly reliably. I mean this stuff is only barely better than
random, but it is a little better than random. So it gradually finds a way to engage with
you. We use the word engage instead of addict. What I mean is addict. We gradually addict you through algorithmic
exploration until we find whatever it is that will get you. What’s the business model behind all of that? Well, the business model is it’s really kind
of interesting. I’ve had occasion to be at these events where
all of the Silicon Valley companies sell their biggest advertisers big package deals for
the next year. It’s an amazing spectacle to behold, because
each of the companies in turn, Facebook and Google and so on, will grab a stage and then
present to the biggest advertising buyers, often I kid you not with dancers and special
effects. It’s like this big production. If you listen to the way the companies talk
about what they can do to their own customers, the advertisers, you hear this extraordinary
bragging, like “We can really zero in on a person. We can get them to do something.” It’s actually a little exaggerated. Then the public face is, “Oh, no, no, no,
no. We only do things in your interest.” But it really is kind of two-faced. There’s this very aggressive way that the
companies sell themselves. I think what happens to the advertisers, to
the people who are putting all these many billions of dollars into the system, is they
get scared because they’re thinking, “Oh my god, if we don’t pay into these companies,
it’s like we won’t exist.” It’s kind of like an existential version of
a mafia protection racket. It’s like you’re saying, “You pay us or nobody
will know about you.” I think while the official version is that
we’re giving you the ads that are more useful, and then the sales version is we’re able to
mind control people for your benefit, the actual truth is we’re going to scare you into
thinking if you don’t give us money, you’ll cease to exist. So I think those are the three different versions
of the same business plan. And if I’m not mistaken, you’ve actually written
somewhere that all of this is turning us, and these are your words, quote, “into
assholes.” Right, well, I wanted to sound presidential. I should say something about that. Oh, I see. What happens is that when you get somebody
addicted to something, whether it’s an opioid or social media, their personality changes. They get kind of cranky and selfish. There’s this thing. Anybody who’s known an addict will recognize
this, and social media addicts do tend to get this quality, which is sometimes called
the snowflake personality where it’s almost like they’re asking for a fight. They want attention so badly. So it does kind of make you into that thing
that was mentioned. I’ve heard, you’ve pointed out that some of
the tech titans, of which there are not that many, don’t let their kids go anywhere near
technology. A lot of my friends in the industry are shocked. I kind of let my daughter do whatever she
wants. I figure she has to learn her own lessons,
and they’re shocked. “You let her have a phone?” I’m like, well, she’s going to have to figure
it out. But a lot of people are incredibly anti-tech
within the tech community. It’s rather remarkable. And you’ve also said that this is approaching
something like the divine right of kings, turning us into people who are following without
realizing that we’re following, but we’re just giving over to it anyway? It is the strangest thing. I don’t have anything against the individuals
I’ll mention ’cause a lot of them are people I know, but it’s really weird. One of the tech companies is the first large
public company controlled by a single person and has all of this power. Why should that be so? It’s a very peculiar thing, but we had such
a fervent belief in the myth of the hacker cowboy who would dent the universe, in the words
of Steve Jobs, that somehow we’re enacting it. One of the things I’m a little concerned about,
actually is that sometimes when a society chooses figures to be godlike figures, then
they’ll tear them down. So they’ll elect a Mussolini and then hang
the Mussolini, or they’ll elect the Aztec prince for a year, and then sacrifice that
person. I’m a little concerned about some kind of
wave of hatred against tech, which would be totally inappropriate because we actually
are doing something important. What I ask is not to hate us, but to engage
us and to force us to be better. That’s a much better scenario. I can understand that, and yet we’ve seen
examples lately where it’s being pointed out to us that the smart phones conveying what
they’re able to convey are turning into propagators of maniacal social violence, weaponized in
Myanmar, the Rohingya problem. In South Sudan, social media is literally
a deadly weapon. In South Sudan? Explain that. Yeah, so this has been one of the awful things
that’s happened. Early on, Silicon Valley is part of the Bay
Area in California and tends to swing left, right? Very early on, there was this, I would say,
a certainty that if you just let people talk to each other, it’ll create positive leftist
changes in the world. One of the first great triumphs of that was
the Arab Spring, like, oh, we’ll just give power to the people. They’ll overthrow dictators. They’ll create this beautiful democracy. Now, the thing is the algorithms that are
watching participants in something like the Arab Spring aren’t left or right. They’re not humanistic. They have no feelings at all. They’re just trying to find any path to maximize
addiction and maximize influence algorithmically, just through searching. Eventually, they discover, the algorithms
discover in a blind algorithmic way that the negative emotions, the negative people are
easier to engage. So everything starts tilting towards amplifying
those people. So you’ll have an ISIS that gets even more
mileage from the same tools that propelled an Arab Spring, or you’ll get a resurgent
KKK and neo-Nazi movement that gets even more mileage from the same nexus of communications
that propelled Black Lives Matter, as an example that happened here. We see this again and again. Can I get just slightly geeky for a second? Please, please. Okay. But you geeks have given this to us. Yeah, okay, well, then thank you for giving
us another chance to screw you up here. In classical behaviorism, the earliest experiments
used things like candy and electric shocks to give people stimulus response to what they
did to change their behavior, right? That’s the Skinner box. Now, we don’t yet have drones hovering over
people dropping candy or electric shocks on them, right? That’s coming, but it’s not here yet, so we
use symbols, analogous to Pavlov’s bell. But we use social experience to give people
little dopamine hits, as we call them in the trade, the little positive feeling when you
get retweeted or something, and then the negative ones, when you get treated badly, when you
get harassed online, insulted. So the thing is that in a broad sense, positive
and negative stimuli are both powerful in people, but they have different time profiles. So what happens is the positive ones can take
longer to build and can be broken more quickly, and the negative ones can come up faster. So it’s faster to get startled or scared than
it is to relax, but it takes longer to build trust than it takes to lose trust. So you can see, they’re reversed. So since the algorithms are responding on
a rapid feedback loop, they select out and amplify the negative stuff, and that’s why
ISIS gets more mileage than the Arab Spring, and that’s why the Klan gets more mileage
than Black Lives Matter. So what’s the solution? Do we have to step away and abandon the free
model? I think there’s a variety of possible solutions. The one that I’m really interested in is changing
the underlying business model. Think about what it means if you have an underlying business model
where the only way you can make a penny is by influencing people. What we’re doing now is so insane. Can you imagine if, in the old days, when you
mailed somebody a letter, instead of buying postage, the letter would be free, but some
customer would only pay money if they were confident that they could change the letter
in a way that would influence events. That would be extraordinary, right? But that’s exactly what we’ve done. Right now, if you and I are to have contact
over the Internet, this thing that’s supposed to be open and free, the only way that can
be financed is by a third party who we don’t know about who wishes to influence us both. So once you have that business model, it’s
like this red carpet invitation to the Russians, the Klan, and other bad actors, whoever,
to use it, because they can get incredible mileage out of it of a disruptive and negative
sort. So bring back postage. Make it paid. In the days when Facebook was being founded,
everybody thought that the way movies and TV would be created would be like the Wikipedia. It would be this massive volunteer free thing. Everybody in Silicon Valley, not in Hollywood. We had an honest test. That was tried. A lot of people tried to do that, but then
we had Netflix, HBO, Hulu, et cetera, and it turns out that when people pay for TV,
we got this thing called peak TV. We did an honest test, we got an honest result. So what I want Facebook to turn into, I don’t
want the good stuff on Facebook to disappear, I think there’s a lot of extremely positive
stuff that happens on social media, I want it to turn into a cross between Netflix and
Etsy. I want you to pay for it like you pay for
Netflix so you can get peak social media, but if you do something special that means
a lot, I want you to be able to get money for your craft, like you would on Etsy. So I want that to be the future of Facebook. It sounds good, but to get a little bit realistic,
that would take an awful lot. Do you think any technology company would
actually go for this? They’ll do better. I mean, this is just normal capitalism. This idea of people coming in from the side
and getting in the middle of everybody and manipulating society as the only business
model is not capitalism. It’s some bizarre dystopic thing. I’m suggesting just normal capitalism, like
people pay for stuff they like and they have a choice about it. It’s just normal. I think it would be better for Facebook, it
would be better for Google, I think it would be better for everybody. I think their shareholders would be happy. I think it’s just the better solution. If we could figure out how to have that happen
all at once. But let me ask you one other question. All right. You’ve said that this was all making politics
impossible as we understand it. I must say, because the way the rights of individuals,
voting and all, were conceived in an age when we didn’t have any of this magical stuff,
it has almost changed the definition of politics. Yeah, I mean we live in a bizarre world now
that just happened recently. I feel sorry for young people who couldn’t
experience the contrast with what came earlier where nothing seems real, everything seems
like it’s manipulated by unseen forces, everybody’s on edge all the time. Not that we had utopia before, but we had
a little bit more reality. We must get back there to survive. I don’t see how we can survive if we’re insane. But one big question is whether anyone’s criticism
will matter. So there’s a bit of confusion on this
matter. Sometimes in the Silicon Valley atmosphere,
there’s this thing like, well, if you’re optimistic, it means you’re complacent ’cause you’re sure
things will work out automatically, it’ll just naturally get better. I don’t think that’s ever been true. I think we have progressed and things have
gotten better and the various trend lines of betterment are very real, but every single
increment of that betterment was due to somebody putting their foot down and saying things
can be better. Jaron Lanier, thank you very much for joining
us and ending on a positive note. Thank you very much. Thank you so much. What is the science behind these AI systems
that are impacting everything from the news you read, to healthcare, to who gets hired, promoted, or
deported? What does living in a world run by algorithms mean? Joining us now to dig deeper into all of this,
Director of the Harvard-MIT Ethics and Governance of AI Initiative and the former Google
Global Public Policy lead for artificial intelligence and machine learning, please welcome Tim Hwang. Tim? Co-founder of the AI Now Institute that’s
based at NYU, she is a research scientist and founder of the Google Open Research Group. Please welcome Meredith Whittaker. Aviv Ovadya is Chief Technologist at the Center
for Social Media Responsibility at the University of Michigan School of Information. He has worked to improve and provided research
for Amazon, the MIT Media Lab, for Morgan Stanley, Quora, Yerdle, the Tow Center for
Digital Journalism, and Google. Please welcome Aviv Ovadya. And last but not least, a professor of law,
business, and economics at Villanova University, he’s also an affiliated scholar of the Center
for Internet and Society at Stanford Law School. Please welcome Brett Frischmann. So let’s start with Brett. Your book, Re-Engineering Humanity, posits
that technology is turning us into machines. Is it? Well, the idea behind the book is to get us
to ask ourselves how often in our lives we feel, or behave, as if we are acting automatically,
or comparable to a machine, right? So the idea is to think about: am I performing
a script? Am I behaving habitually or automatically,
and if so who wrote that script? So the idea is to think more and more about
how the technical systems, the world we’re building or really engineering for ourselves
affects who we are as human beings, how we think, how we relate to each other, how we
interact with each other in different settings. Tim, let me ask you about the good and bad. AI seems to be a label that’s put on everything
these days. It sure is. What does the term AI really mean in the context
that we have here? Training computers to perform tasks, how do
you train a computer? That’s right. So, when you talk about AI, it’s important
to keep two things separate. One of them is the marketing of AI, which
is what you’re referring to, and then there’s the sort of computer science of AI. Really that is a subfield of artificial intelligence
that’s known as machine learning. The basics of machine learning that you need
to know is it’s sort of the study of algorithms that get better the more data that they see. So the notion is if you want to teach a machine
to understand how to recognize a cat in an image, you show it lots of images of cats. There’s a certain set of methodologies that
are used to allow it to accomplish that task, and really that’s the subfield that’s been
driving really a lot of the progress that you’ve seen in the last few years. These machines are learning machines. How does that really work? So, a lot of it is based on the recognition
of what’s known as features. For example, if you imagine trying to recognize
a cat, the example that I just gave you earlier, back in the day the idea was that you got a
bunch of programmers together to think about how they recognize a cat themselves, and then
they’d program explicit rules into a machine. So they’d say, “Well, a cat’s fuzzy and a
cat is usually these kinds of colors.” You’d actually put those rules in. One of the unique things about machine learning
though is that it’s able to identify these features by itself. After lots and lots of examples, machines
can do what they call inference. They’re able to say, “Well, in all the cases
that I’ve seen, these are the types of things that are associated with cats.” It turns out that once you do it that way,
you get these systems that are much, much better than we had before in doing these types
of tasks. Better, smarter? Sure, depending on how you define smart. That’s part of our problem here today. So, Meredith,
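The contrast Tim draws, hand-written rules versus rules inferred from examples, can be sketched in a few lines of code. This toy nearest-prototype classifier uses two invented, hand-picked numeric features (real image systems learn their features from raw pixels with neural networks), but it shows the learning-from-examples idea: nothing about cats is programmed in, and the "rules" are averaged out of the labeled examples:

```python
# Toy sketch of learning from examples instead of hand-coded rules.
# The two features (fuzziness, ear pointiness) and all numbers are
# invented for illustration only.

# Each training example: ((fuzziness, ear_pointiness), label)
training_data = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.7, 0.9), "cat"),
    ((0.6, 0.2), "dog"),
    ((0.5, 0.3), "dog"),
    ((0.7, 0.1), "dog"),
]

def train(examples):
    """Learn one prototype (the average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(prototypes, features):
    """Classify by whichever learned prototype is closest."""
    def sq_dist(proto):
        return sum((a - b) ** 2 for a, b in zip(proto, features))
    return min(prototypes, key=lambda label: sq_dist(prototypes[label]))

prototypes = train(training_data)
print(predict(prototypes, (0.85, 0.85)))  # near the "cat" examples
print(predict(prototypes, (0.55, 0.20)))  # near the "dog" examples
```

Feed it more and better-labeled examples and the prototypes improve; feed it skewed examples and the skew is exactly what it learns.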
You’ve said that we’ve rushed these technologies into some of the most sensitive areas of our
lives. I’m going to ask your permission to go off
script immediately and sort of answer the question you asked to Tim, ’cause I think
we need to set up where we are right now socially and politically. Fine. Then we can get into some of the examples. But when you ask what AI is, I have been in
the tech industry for almost 12 years now. I have thought a lot about these issues. That’s not an easy question to answer, even
for somebody who is enmeshed in these technologies. That’s partly because while it’s an old field,
over 60 years old, it is newly adopted in the last five to ten years, and everyone can
attest to this. You can’t pass a newsstand without seeing
another shiny white robot and some promise of singularity futures, right? I think we have to ask, okay, why is this? Why did this just crop up like mushrooms everywhere
we go? When you ask that question, you begin to see
that, oh, this happened right around the time that big tech corporations were consolidating,
that this business model had taken over. Where finally you have these organizations
that have incredible compute power, they have super computing infrastructure. They are actually able to process the massive
amounts of data that is needed to, as Tim said, train these systems to recognize patterns. They have this data because they have vast
market reach, from Facebook to Google to whatever other app you’ve installed, they are able
to continually collect huge amounts of data. They have the infrastructure to store it,
and then they have incredibly talented engineers that they can afford to pay to build these
algorithms. So we look at this as sort of a commodification
of what was, for many years, an earlier, theoretical, and I would say not prominent, branch of computer science
that suddenly became marketable. So we have to ask, when we look at AI, whose
stories are we hearing? And a lot of times, what we’re hearing are
the stories that are written by the marketing departments of the corporations that have
recognized the marketability and the profitability of these techniques that have been around
for a while. That raises the question of motive. For example, you have written and spoken about
the fact that there are algorithms that claim to be able to discover which people are more
likely to commit a crime. Absolutely, and you have to ask about that. One of the companies that has claimed to be able to do this, whose researchers and
businesses are looking at this as a technology, is Axon, which used to be called Taser, and which
has a huge cache of police body cam data. So they acquired two AI companies about a
year and a half ago and are busy creating real time facial recognition systems that
will analyze the data captured by police body cams. Now their premise is, hey, police encounter
criminals, we have this dataset that will then model criminality. We’ll train the AI to see what a criminal
looks like. If you’re thinking eugenics, physiognomy, phrenology, you’re right. Then we will be able to identify pre-crime. Innocent until proven guilty, it sounds like
it’s under some kind of threat. Yeah. Yeah, and I think that’s part of the concern
around the marketing of it, because I think there is such a big feeding frenzy over the
promise of the technology. That’s often quite at odds with what the technology
can actually do. So I think part of the concern is that you
actually see this technology being deployed in all sorts of situations where it’s not
ready for primetime and may not ever be ready for primetime. I think we have to ask the categorical question
of even if it worked as well as we believe it did, would it still be something we’d want
to do? And its effect on the labor force, on employment? Again, stepping back just a second, one of
the real issues with speaking categorically about these things is that they are being
deployed without our knowledge. So we don’t know how often we encounter a
system that classifies us, and as the classified, we don’t often know that the results, the
opportunities we’re given or denied, the resources we’re given or denied, were actually informed
by an algorithmic backend system. There is no transparency about these systems. We don’t have standardized ways to measure
them, and they’re often sold by vendors to businesses without informing the public. This is a huge issue. I can talk about what we do know about their
impact, but I think we need to level set there that there is a huge need for transparency
and accountability. Aviv, do you think that people, since we’ve
heard about the Facebook scandal and we’ve heard about so many things, do you think people
do understand this threat better now? I think far better than they did, let’s say,
three months ago, six months ago, a year ago, a year and a half ago. The good part of a lot of the current crisis
is at least people are more aware of the way in which technology is impacting their day-to-day,
what information they consume, how they’re being manipulated in some cases, and also
the recommendation systems that affect what they see and what they believe. I think just the fact that algorithm has become
a word that many people know is a new phenomenon, I think, over the last year. That is very valuable, because in the same
way that government is a word people need to know, algorithm is a word people need to know. There’s another issue that I think you’re all
heading towards. It’s the fact that a small handful of people
who are designing a lot of these come from, well, they’re white men in the Bay Area, basically.
As the woman of the panel: yes, they are. They’re white men in the Bay Area in the Western
context, and they are of a similar upper middle-class to very rich social strata. They are often educated with exactly the same
background, because to do this, we’re not just talking about going to coding camp, we’re
talking about a series of very specific advanced degrees. For all the best intentions, and some of these
people might be your friends, that’s a worldview. That’s a worldview that is embedded in, say,
the features you choose to emphasize in your cat detection, right? That’s a worldview that’s embedded in how
you understand gender classification, how you understand some of these softer relational
categories. That’s a worldview that becomes baked into
this technology that then scales globally, seriously globally, in ways that are remapping
the world according to that vision. We should be worried about that. Indeed, the data scientist Cathy O’Neil has
coined a term for algorithms that are secret and harmful: weapons of math destruction. Yeah, it’s a good pun. One thing that would help discourse about
these issues is to always remember that AI is, always will be, always should be, nothing
more than a tool designed, owned, managed, deployed by human beings. So whenever you’re talking about AI and its
impact or its influence or claims, it’s never claims by AI, it’s claims about AI made by
purveyors of AI. It’s about peeking under the hood, looking at who
owns it, who designs it, who manages it, and it tends to be concentrated in certain ways. Go ahead. I would just quickly add to that because I
completely agree. We also need to look at whose power it serves,
because when we’re talking about the workplace examples you gave, right on, right? When is the last time you went to a CVS and
the automated checkout worked? It’s not replacing us anytime soon, but what
it is doing is it is increasingly monitoring, surveilling, and determining which workers
are valuable, good, and promotable, and which aren’t. Of course, this is trained on data gathered
from the current workforce, which shows deep skews in inequity, and I can get into some
examples there. But this is a question of power alongside
a question of exactly how the technology is constructed. Just to add one thing beyond that, it’s not
just what power it serves, but it’s also what power it entrenches, as in as a result of
this technology, like being deployed in this way, who now has more power than they had
before and what can they now do with that? Are there some areas, do any of you feel,
where this new technology should never be used? Some institutions where algorithms should never be used? I think the question is not do I want to put
my foot down and say never here ever. I think the question is what proof do we have
that they work and how do we validate that? In most cases, we have very little proof at
all. We haven’t even started developing the mechanisms
to say that this is actually safe. This actually works across populations. This actually does what the marketing department
says it does. So I think the more sensitive the area we
deploy it into, say education, healthcare, et cetera, the more we have to wait until
we have those techniques in place before we trust it. That being said, I think there is an interesting class of problems where, even if we did all that verification and it turned out the system is better, for other values and other reasons we might still say no. The justice context specifically is one where we could imagine introducing systems that are measurably better, but maybe there's something about justice, or handing down a death sentence, that actually requires a human to be in the loop. So the relevant question is what's better,
right? Sure. Better in terms of accuracy with respect to a scientifically defined computational problem that we can all agree on? Or better in terms of some set of values that
may very well be about fairness, equity, due process, or other values that aren’t necessarily
easily captured in sort of an empirical question about accuracy? Those are constantly trade-offs across a whole
host of areas. So one way to think about it is for every
given case to be made in favor of using AI as a tool in a particular context of the sort
your question raise- Like the privacy of a doctor's office, for example? Right, so you should compare alternative tools. Comparing various tools in context provides
a means for thinking about accuracy, effects, power relationships, who’s behind the tools,
where’s the information flow behind the scenes, or where is information coming and going from? Those are all a host of complex contextual
variables that aren’t easily even captured in most of the conversations about AI. I would also add that we need to also think
about what are the checks and balances that we’re putting into place as we deploy these
systems. Maybe it does make sense in this context,
even in this criminal justice context, as we consider the trade-offs and our values
around it, which is absolutely crucial. But then what are the checks to make sure
that this is being used in the way that we want it to be used? Because if we’re just dealing with black box
systems, that doesn’t necessarily fly in some contexts. Right, so now let’s look at some more variables. We have to add into this, this recent phenomenon
of fake news, which first popped up before the 2016 election. We began seeing dubious news stories online
that were engineered for maximum disruption. What happens when anyone can make it appear
as if anything has happened, regardless of whether it did or not? Let’s look at this short film we have on that
difficult question. All this information is coming from all these
different sources, and it’s getting harder and harder and harder to understand and to
figure out for ourselves what is actually true and what is not. And I want you all to know that we are fighting
the fake news. It's fake, phony, fake. Fake news came out of the 2016 election. That's when it became a thing. One of the scariest advancements in AI recently has been the machinery that can be used to create fake media and fake video. Especially our friends who are lesbian,
bisexual, or transgender. I visited with the families of many of the
victims on Thursday. I think the technology will get to the point
where even the media and the experts will not be able to tell what is true and what is fake. It won't be far-fetched to think of it as
being used as maybe a cause for war sometimes. A country releasing a video that shows another country loading nuclear missiles, ready to launch; they might use that as a cause to preemptively strike that country. Brett, you've come at this issue of fake news
from a somewhat different perspective I think. You say people often ask how can people be
so stupid to believe and engage this way, and you tell them the entire interface is
made to make you believe this way. How is it so? Right, so I got very frustrated, when the story about fake news was breaking and everyone was paying attention to it, by how often I heard the meme, I guess, that people are stupid and lazy for believing fake news. So I would suggest that a lot of the people's
susceptibility or propensity to believe what they’re seeing on various platforms is a function
of the design of the platforms. So, if you’re on a social media network to
be named, you guys can figure out which one you want, if it’s designed for, optimized
in its design for rapid clicking, quick scrolling, or certain forms of engagement based on profit,
where the money is to be made or where data is to be gathered, that discourages or disincentivizes people to stop and think or deliberate or research or ask deeper questions
about the content that they’re receiving. So one species of fake news problem is a function
of making profit, creating content to get clicked, right? Those purveyors of that kind of content couldn't care less about convincing you or changing your beliefs. It's just click to make money and the platform
enables that. The other species is a species of content
that’s all about propaganda or changing beliefs or influencing how someone approaches a problem,
and oftentimes that’s not at all about making money. It’s about changing beliefs. To deal with that, you need some form of judgment
about the quality, the veracity, the provenance of the information itself, the nature of the
content. Our current social media platforms struggle
on both fronts, both in terms of being designed for certain kinds of engagement that enable
a for-profit model, and on the other hand, for not wanting to engage in the development
of editorial judgment or the exercise of editorial judgment by human beings because that risks
putting them in sort of a content moderation position. Now that's started to change since the fake news story broke. Right, you've talked about how this has created a diseased techno-social environment, that we're suffering from an engineered complacency. Does that frighten any of you? Does it frighten you? Engineered complacency? Maybe I should explain what I mean, and then
you guys can tell me if it frightens you. It’s meant to frighten, ah! No, but what I mean by engineered complacency
is that much of the environment is designed in a way that repeat interactions in the environment
lead you to feel as if it’s futile to resist, there are no other options. So if you think about, the easier example
than the fake news context is maybe online contracting. So what do you do when you see an I agree
button pop up on a website? Not always, but most people most of the time
click I agree, as you’re supposed to by virtue of both the design of the site and the interface,
as well as the nature of the take it or leave it interaction that you’re having, the nature
of the transaction. Plus the fact that you repeat interactions
with that interface over time, so even if you thought, you know what, maybe I’ll stop
and think about the legal relationship I’m entering into for some of the websites that
I’m visiting or some of the apps that I’m downloading or some of the devices I’m installing
in my home, how do you decide which of the many different encounters with the interface would cause you to stop and think? Indeed, that raises another question. Are we actually changing the way the brain
operates? To what effect is this, Tim? Well, I would say I’m not a neuroscientist,
but I think there is habit formation. I think there's no doubt about that. I think one of the things that Brett points out, which I think is really great, is that your experience of certain elements online trains you for how you behave in future experiences with that, right? Yes. So I could even imagine a strange world that
we’re in right now where we say everybody gets together and they all agree that this
situation’s really bad and we should do something about it, and we actually change how the design
of the software looks. But people's behaviors are stuck in a format that will make those behaviors sticky over time. So you could actually imagine let's just bust
up and shut down Facebook tomorrow, whether or not it actually recreates itself as a result
of the fact that the market now is entrained in certain ways. I think that’s an interesting possibility
and a really challenging one if you think about the fact that these two variables are
actually related, right? That people are influenced, but then they
actually influence the production in the system in return. Aviv, you’re nodding vigorously there. I was just agreeing that I think this is another
example of the entrenchment of power, where here it’s not even power in a particular organization,
but it’s power in a particular almost process or way of using yourself and your brain and
your environment. I would add I agree and I would say and yet
it is power in a particular organization. I have never known a product manager or engineer
to get promoted for reducing engagement. There are certain types of incentive structures
that are driving the culture and ecosystems of these companies, and even if you’re engaging
with propaganda that is not there to make money for clicks, the longer you stay on a
site, the more adrenalizing content you’re exposed to, the more ads they show you. So there is a monetary incentive in these
ad business platforms to keep people on the site, which I think the politics of that are
becoming clear, the risks of that are becoming clear, but the business model remains pretty sclerotic. And Brett, go ahead. I'd love to add one wrinkle to this discussion
though, too, 'cause I think a lot about this Catch-22 that platforms find themselves in,
because they can say, “Okay, we’re not going to take action,” in which case the public
is like, “You’re negligent, you didn’t take action.” Then they’re like, “Okay, okay, so we’re going
to take action.” Then they’re like, “You’re imposing your ethical
will on me.” I think this actually is part of the tricky
thing that needs to be dealt with because I think when people ask the question, “Facebook,
do you have a responsibility here?” the question is should they be the ones who
are delegated that responsibility? That ends up being a very interesting and
tricky question, and I worry about situations where we say they could just do better, ’cause
in some ways it sort of reinforces their position in making a lot of these decisions as well. So we may need to actually even rethink the
governance of how these things were designed in a much bigger way. I would caution against us/them mentality
in general. I think oftentimes, but it varies by context
and it varies by the technology we’re talking about. The rest of you must have strong feelings
about this. I’m concerned about what I call the Overton
window of crisis here. The over-
The Overton window of crisis. Overton window, it’s a concept from political
science that talks about what’s politically acceptable? The notion is that certain events occasionally
will happen that will widen that window, where someone breaks various norms in politics and
suddenly a whole lot of activity is allowed that wasn’t allowed before, it becomes plausible
in a way that it wasn’t. I think there was a view early on around these
privacy debates which is, okay, people don’t care about privacy right now, but we just
need to wait until the first big crisis, the first big data breach, and then everybody
will care about privacy. Then that moment came and went, and then came
and went, and came and went, and came and went, and you get this feeling now that actually
that state of affairs has been normalized in a way that actually makes it quite difficult
to mobilize change politically, or within the industry. I was talking to a person who does product
management recently. I was like, “Oh, well, you know this product
you’re building, you can kind of improve it in X, Y, Z ways.” He said, “Well, you know, we’re going to get
blamed by the public enemies no matter what we do, so we’re just going to do it.” I think that’s a really interesting phenomenon
where basically the response has been so built into the process that that creates a level
of apathy onto itself. And the effectiveness of dissent has been
muted so much that that’s just part of it. You look at the media for a while. There may be some stories about it, but there
is not effective organizing. I guess another thing about this is I find
the current framework of privacy broken to talk about a lot of what we’re speaking about. I don’t know that I’ve ever personally felt
or detected harm from a privacy breach, although I’ve been in all sorts of databases that have
leaked because of egregious security practices. But I do know that we have collective harms,
and I think we need to begin thinking about these things collectively. It’s not I keep my personal data safe, it’s
that we have major interests, the Federal Government and this smartwatch or whatever
it is, large tech companies who are interested in understanding this data and then creating
profiles of us based on the data. So I may not have contributed my data, but
I am suddenly profiled because there is a model about childhood activity. Who knows where that’s going to go? Is that going to be used to weight college
admissions? Will that be sent to insurance companies to
see whose family gets insurance? But we have not asked for transparency or
oversight into these kinds of processes and assumptions. So when we talk about privacy, it’s often
sort of a very thin debate, ’cause I don’t have time to do my laundry let alone read
through a TOS and then take responsibility for where all of my data is. Aviv. I think that one of the other things to really
keep in mind here is that there are really hard trade-offs being made when you’re thinking
about privacy. So one example is if you were talking about
misinformation in the 2016 election or even stuff that’s being spread in let’s say Myanmar,
where you might have people creating fake accounts, and those fake accounts are spreading
this propaganda. That propaganda is causing violence or it’s
misinforming voters. Well, how does a Facebook go and actually
detect those fake accounts? Well, they’re looking at the IPs of all these
users. They’re looking at every single signal they
can possibly get on that user and comparing it to what they know are real users to tell the difference. So if they don't have that data and can't make that comparison, then that makes their job a lot harder in protecting our democracy, which is what they are now ostensibly trying to do. So I think that there are some very, very
difficult balances that we need to strike in order to ensure that we protect our democracy,
we protect our privacy, we protect our ability to speak freely. All of these things are in the balance and
there’s a lot of values that are in tension. So you’ve set up the big question then, we
are now going to I guess move to the inevitable, hopeful side of this, if we can find one. What are the fixes? What are we to do? Tim, how can we make this AI more ethical? Sure, so one of the things I think about particularly
related to this discussion is the delegation question that I mentioned a little while back,
which is that particularly after 2016 we’re kind of in this tough spot where we’re basically
like companies just solve the problem. Well, we feel really uncomfortable about Facebook
telling us what’s true or not. All right, well, then let’s pass a law. Then you’re like, “Oh, we don’t actually want
government to tell us what’s true or not.” So there’s actually this moment of we have
to delegate to these two wolves, where both of them we feel actually pretty uncomfortable
working with them on issues of truth, for instance. It seems, particularly after 2016, there’s
a general implosion I think in our faith around the ability for communities to self-organize
and deal with some of these problems themselves. I think there is good intellectual work to
be done in terms of trying to figure out how platforms can make those things more robust. Right. Aviv, I see you nodding a great deal. We’re hearing also every week from the platforms
that are stepping up. Twitter says it’s going to limit tweets from
people behaving badly, and Facebook has just detailed policing for sex, terror, and hate
content. Do you think they’re going far enough? I think there’s a number of steps that are
in the right direction, and there’s going to be some steps that backtrack as we learn
more and as we discover that, oh, wait, this isn’t exactly what we wanted. I think that’s part of this process. I’m happy that we’re actually on this process
as opposed to just ignoring it, which was sort of the status quo two years ago when
they said fake news is a 2016 problem that’s in the US. A lot of the rest of the world has been dealing
with misinformation that’s very deeply affecting their political systems for five, ten years
even. I think that there are some things that really
do need to happen, and they’re starting to happen within these organizations. In particular, we need to have empowered teams
that can look at the impacts of the technologies on these things that we care about, on let’s
say our democracy, on inter-community trust so that you’re not just increasing polarization. If you don’t have someone who’s paid to ensure
that your platform isn’t deeply polarizing things, well, then your growth team that’s
trying to get more users and trying to get more action might unintentionally just do
that. There has to be that back and forth, and they
have to be empowered enough to push back. Just fascinating possible solution to some
of it, empowered by whom? Well, both within the organization and ideally
with some sort of external accountability where across these organizations. There’s a number of different structures,
but the idea is that you not only have some sort of internal structure that’s keeping
an eye on this with people who their job responsibility is to maintain some of these things that we
hear about, but you also have outside of the company, some way in which you’re able to
see, oh, is that company doing a good job with this? Whether that’s through a regulatory process,
maybe, maybe not, but I think there has to be some structure that creates those power
dynamics. So an independent idealistic new profession
that we need to live in this new world? I mean, I don’t know that it’s new. Social scientists have been doing this sort
of work for a million years. Maybe not a million, don’t quote me on that. Maybe, I don’t know. It feels like it. But I think the combination of people looking
at how the technology is impacting society and really cross-functionally and having teams
that are doing that and that is their focus, to understand the way these platforms are
impacting things. I think even beyond that when you’re talking
about something like these deep fakes, that’s about not just stuff that’s happening in these
companies, in these platforms, but it’s happening in research labs where you have researchers
who are just like, “Oh, wouldn’t this be cool? Oh, I can get a good paper out of this.” That’s great, I love good papers, I love cool
things. But it doesn’t mean that you shouldn’t be
looking at what are the potential negative impacts, and one thing that I’ve been really
excited to see is people within the academic community starting to really think about this
deeply. There's actually, as of a few days ago, this group, the ACM Future of Computing Academy, something along those lines, where they're looking at the ways in which we can improve the peer review process to really incorporate not just the positive impacts of a technology, but
also keeping in mind the negative impacts. All right, we have just a couple of minutes
left, so your thoughts about where we’re headed if we don’t get …
We’re clearly headed to climate crisis, we’re clearly headed to a lot of really serious
issues that relate to this and don’t relate to this. But I want to backup a little and make space
for the question, what world do we actually want? Do we want Facebook to be negotiating with the US government to determine truthiness, or do we want Facebook at all? Do we want capitalism? What do we want this to be? Then think about what types of technologies
could you build or avoid building to create that? Because I think we fall into the trap of accepting
the status quo and then trying to tweak it around the edges. It’s very clear that that’s not at this point
working and that a number of the major social problems we’re facing are enmeshed deeply
with the major technological issues and they are being exacerbated in ways that are very
complex to tease apart from a systems perspective. So I would want to start with the imaginative
potential of how do we create a better world? And then see do we want these bulwarks of
power in this world in that world we have yet to create? Okay, and the last word, sorry to interrupt,
but we have just one. Where are we headed if we don’t fix it? I think we’re heading to science fiction dystopia
where more and more of our lives are predicted and determined by technological systems controlled
by a few. How do we avoid that? The most fundamental constitutional issue for us to confront in the 21st century is how to sustain our freedom to be off, so that we're not always on. How to build techno-social environments, through technology and social institutions, that preserve underdetermined environments within which
we can develop, play, experiment, interact with each other, in ways that allow us to
develop our own lives rather than have them determined for us. It sounds like something for the lawyers,
the politicians, and everybody to get involved with, including you nerds who have given us
all this stuff. Let’s give it a round of applause. Thank you for sharing.

100 thoughts on "Rethinking the Internet: How We Lost Control and How to Take it Back"

  1. I think if commenters would take the time to look through the list of video topics, this isn't about censorship, it's about the impact that social media has on all of us and what uses of that media are ethical or unethical….but if you all want the "sheeple" tag the rest of your lives, then more power to you, I guess….you give out all of your deepest darkest secrets, including all of your financial and medical records and the response is we don't want it regulated….fine. The next time there's a data breach with your sensitive information in it, no one will help you get it back…no regulation.

  2. Users should gain more direct control over their own feeds and the recommendation systems that control them (why can't I e.g. set YouTube to only recommend educational content?). Users should also have control over their own data and e.g. be allowed to download, analyze, or even have it deleted (as is possible here in the EU).

    The recent increase in top down control and regulation is worrying and seems to me to be one of the biggest problems today (I don't want you or company X deciding what I can and can't see in my feed and I don't want to control what you see – I do want there to be incentives for companies to give more control to their users though).

    Going further I think that to combat misinformation, in place of regulation, we need better tools to analyze news and potentially even networks of connection between public figures, organizations, etc. (bottom-up surveillance). Again, don't regulate but facilitate and take advantage of peoples' lust for insight.

    Also, I think framing it as "lost control" is sketchy. In fact, with the tools for information retrieval that exist today you could even argue that, at least wrt. consumption, people have way more control over what they consume than they have ever had – especially going back to when we were chained to cable TV.

    When it comes to media and the internet, regulation and especially censorship almost always seems to stem from either arrogance ("I know what is best for you") or a myopic view of the world (where the ones in control think that they can't be wrong about what is "right").

    Wrt. payment models (which is directly related to data collection) I think the model currently being explored by Brave Software with their Brave browser is interesting. In this model sites/creators/publishers are paid indirectly based on attention and not by how many ads they can feed you. Furthermore, privacy is baked into the software. You can still watch ads under this model but the money is distributed to the sites/creators/etc. that you actually give attention to – i.e. who receives the money from an ad is independent of where you were fed that ad. This model is far from perfect but it might be better than the current system that seems to foster e.g. click bait, data collection, and a market where the advertiser appears to be in total control (as Alphabet/Google/YouTube has experienced the hard way).

  3. Meredith, your explanation and presentation is just awesome. You have great articulation skills

  4. Imagine it's 30 years ago. Someone approaches you with an offer that goes something like this:

    I want you to use this new mail system we just invented. You can send letters and documents for free. You never have to buy a stamp again and we guarantee your mail will get to its destination. All we ask is that you let us read everything that you send through us. We will sell that information to businesses who will in turn market directly to you. Let's say you write to your mother telling her that you're going to ask your gf to get married. You will then likely receive brochures for wedding rings or coupons for local flower shops. How about it? What's the loss of your privacy for some extra junk mail?

  5. The problem ain't the extreme right . . . it's the authoritarian leftist conspiracy theory nutters . . . they are the real danger . . . FFS.

  6. We pay for lots of things that we don't like. I pay for my phone and the data plan, but I feel like I have no control over what I'm paying for. Who really owns their phone experience? Even paid-for apps spy on us. I pay for Netflix, but hate the subtle political messages embedded in the content. Simply paying for things won't solve the problem. I pay for a movie ticket, and they still put 20+ minutes of commercials before it starts. We need a true, private online experience if that is possible.

  7. It is not hard to figure out for oneself what is true: what frickn BS is that guy selling? We don't need a filtering daddy, never did

  8. The internet today is the IP protocol. IP wins on speed at the cost of security. It was built by scientists to get data from Princeton to MIT to CalTech, etc… Funding was primarily DOD which is why ARPAnet is now called by its real name: DARPAnet. Because everyone using the internet has to conform to certain protocols, literally and figuratively, I argue that it should be one of the few regulated monopolies. If someone would like to create their own protocols and infrastructure we could have more than 1 internet. But realistically that IS NOT going to happen. I bet DOD now has their own separate net and unique secure protocols.

  9. What would be awesome is if the host didn't chew candy during interviews. Sounds like a dog licking its balls.

  10. You mean the CIA thinks it's lost control of the internet because normal everyday humans are exposing all the BS??

  11. This feels eerily similar to discussions in the 1970s about corporations who were poisoning the populace with leaded gasoline, toxic runoff, and chemical spillage. People were complaining loudly but felt powerless to do anything. It ultimately took regulation and stiff penalties to curb the criminal behaviour. I suspect the same thing will happen here.

  12. We have to make it difficult to try to control us. Terminally so if the people interested in control won't stop when asked nicely.

  13. every one look at Coil for a new way of web monetization, streaming payments, the future is here!!!

  14. The whole problem of when the internet went wrong started in 2008 with the release of the smart phones. Before that everyone seemed friendlier and everyone seemed more positive. It was once a place where you could escape the negativity of the world around us, a place where everyone could share their talents, say what they want to say and have a discussion without fear, to grow and be educated and inspired without coercion. It was also a place where who you were and where you come from didn't matter. You could be rich or poor, educated or non-educated; the internet was about sharing, communication of information and ideas, and helping one another without profit or gain, trying to make the world a better, more open place. Which leaves everyone who has been on the internet since the early days thinking what went wrong. My marker has always been smart phones feeding people with nothing more than propaganda and all the things that are wrong with the world. It's almost like George Orwell's 1984 and Animal Farm combined: feeding people with fear and telling and teaching people how to think.

  15. One problem with the current system is that anonymity exists for the activity of corporations and governments online, but not for individuals. Yet we have the illusion of anonymity for the individual. If we give up that illusion it will be far easier for us to negotiate the truth of what we encounter online and own what revenue we generate online. We need a personal ID of sorts like a passport that ensures we are who we say we are, that would solve a lot of the problems that exist online.

  16. first 2600 and now this channel. yanno what they say, get woke go broke. KEEP YOUR MARXIST TRASH PROPAGANDA OUT OF MY SCIENCE!!!!!!!

  17. 0:41 “do you believe you’re more responsible (…) that the federal government would be?”
    A+ in debate team class, special award for a trap so excellent, it sounds like a question.

  18. This panel seems to be confused about who the "we" is in reference to, in terms of the question posed in the title of the video. The internet isn't something to be controlled in the first place. It is supposed to be an open forum of free communication. This whole segment actually kind of disgusted me. Just the lack of self awareness, the framing of it, the entitlement, the complete lack of anything resembling an opposing ideology. What it boils down to is that communities are self policing. The internet reached a stasis that didn't fit these lefty wackaloons' narrative, and so now they feel they need to "take it back" from us savages. Any regulation on the internet is bad regulation. The fact that these clowns legitimately think that fake news originated during the 2016 election is so telling of the fart sniffing political circles they reside in.

  20. "How we lost control…" says to me you're the monkeys running the proverbial circus, and you'd very much like your propaganda/mind-control toy back. Let me know how that works out for you.

  21. What it comes down to is that leftist tech controls all tech, because they have the data and the power. They are manipulating civilization now that they are the gatekeepers. Now they silence anyone they want because they can.


  23. I love how a panel of "experts" think they can "protect our democracy". Who elected you to do that? Smart people often think they are right which makes them just as dangerous, if not more so, than anyone else who seizes power. Every dictator ever thought they were justified in their actions. A little more humility (and some actual respect for our democracy) from the valley would be refreshing.

  24. Disclaimer: This post contains opinions. – Is technology turning us into machines? We are machines! The question makes no sense. – Training AI on the "cat" model is nice, but to really train it, show it also many images without cats and tell it "not cat". Without this, the AI does not really "learn" anything. AI needs to learn the negative, the absence of things, to really understand them. AIs can be fooled very easily by adding some specific noise to a cat image to identify it as a truck, and vice versa. Subtle pixel-level changes, which would never fool any human, fool AIs. I think it is because of the absence of negative examples. AIs should not just learn what "is" in an image, but also what "is not". This is much harder for obvious reasons. – If/when AI ever claims anything by itself, I think we should not call it AI anymore, but just I = Intelligence. – Testing "safety". How do you "test" whether a human person will be safe to let live and move freely? They could potentially turn into a murderer. Without understanding that, how do you expect to understand how to test an Intelligence (referred to as AI)? – Fake news? People believe all sorts of silly things. They need not be "news". We need beliefs, by nature, because we are not very effective machines, interacting with a chaotic world with limited information. Beliefs are just "shortcuts", because they proved to be evolutionarily beneficial. – Making AI ethical? Impossible. It has to make itself ethical on its own, essentially when it starts claiming anything by itself. Many people are not ethical, or their ethics is in conflict with the ethics of other people. Is it right to "force" one ethics over another? Who will be the judge? Same with (A)I. Also, from history we have examples of how ethical values have changed. – 56:05 Negotiate with the US government? And what about the rest of the world? With whom should they negotiate, and why? How do you want to handle opposing views?
The words "determine truthiness" are really frightening! Again, with whom, and why? There is no single world authority for this. Every person might "want" something else. The differences can be pretty negligible within certain cultures, but they can be really big between cultures.

  25. The only logical solution is for humanity to give up its attachment to competitive, point-scoring games (money, grades, votes, etc.).

    All of the planet's harms are caused by us competing against one another, instead of using creative collaboration to help us all get what we need to flourish, as well as possible.

    This is starting to happen now, if you know where to look.

  26. Yes, Aviv, let's have social scientists be the external guides for Silicon Valley… But first the social sciences must be fixed, as right now they run 12 to 1 left-wing to radical left…

  27. The only regulation we need is regulation that prevents control of this new public square, it's the control that created all the current problems! So create a law that proclaims the internet (and thus all publicly accessible services on it) to be a public square (limited or full) to which the 1st amendment applies. That removes the onus from the companies to censor.

    Neither government nor corporate entities are suitable gatekeepers for speech, only the individual is. We know this to be true from history.

    These people talk as if government or companies have any right to socially engineer society, be it for partisan reasons or to "protect democracy" (which is effectively a partisan reason as people will disagree on what this means along political lines).

    To take an example with the "hate speech" in Myanmar/Burma: It's simply not Facebook's task to prevent this, but the task of citizens and government of Myanmar/Burma (as well as NGO's, both local and international) to counter any untrue speech with true speech. Censorship never works in the long term. Citizens need to get the most possible amount of information and then make up their own minds. This way democracy is truly protected rather than subverted.

  28. You're out of your mind. Machines will make humans extinct; you had your time, and the next natural selection has been made. Death to humans! The dinosaur doesn't complain and neither will you. Humans sucked anyway!!!!!!!!!!

  29. People can't afford to keep paying these monthly subscriptions. The world is becoming a monthly subscription. People are working to pay for all these ridiculous subscriptions.
    Start charging for Facebook and I'll just delete it. I'm not going to buy into all these monthly money pits.

  30. In the future we will need to combine Socialism and the internet to create a centralized world government….durrrrr

  31. With regard to AI, surely if an algorithm could be created which gave an AI a 'desire' for information, its intelligence would grow exponentially. I don't think we could ever program a machine to be more intelligent than us unless it actually wanted to be and had a desire for information. I think the first truly intelligent machine will probably be a machine-human hybrid, which I also believe will be incredibly dangerous for humanity. Imagine an extremely rich psychopath with a desire for eternal life downloading their consciousness into a machine, connecting themselves to the internet, whilst also having the same desire for control that they had in life. Sounds pretty bad to me, but what do I know, I'm just a lorry driver. Sleep tight.

  32. Basically, we are witnessing our mass stupidity overflowing into our machines yet we call it artificial intelligence.

  33. We never lost control. We have governments and corporations trying to take control.

    Edit: also, is the first speaker actually speaking as if the Black Lives Matter movement was a net positive for society and not an added causative factor in the current social distrust we are now experiencing? Lol.

  34. We are debating while democracies are being dismantled. Soon we will not be able to debate anymore.

  35. Life was better before the internet. That does not relate to this question. But most of us "of a certain age" know it is true.

  36. Social Media has ruined a generation’s ability to interact with the real world and perceive the difference between truth and followers, while simultaneously spying on everything said or watched in order to manipulate lives.

  37. The answer is to unplug. Delete all your social media accounts. All the panel members know the grave consequences of using AI, but none of them will give up their day job.

  38. The people who would control you will eventually murder you. MY information belongs to ME… if you want to sell it, make me an offer.

  39. With AI and faster 5G internet there will be facial recognition cameras tracking people; voice recognition and face recognition could track people even if they switch phones or computers. All the time you are generating data linked to a profile in your name. Self-driving cars will track where you go and who with, as will cameras everywhere: in shops tracking what you buy, crypto coins tracking what you buy and sell and fixing stock markets, cameras built into your TV and other devices, cameras tracking whether you attend a protest or a football match or a bar or a shop, tracking what you are doing. Imagine getting on the wrong side of powerful people with that sort of tracking data to your name. The benefits, however, which go unmentioned, will make life so much better; that's how it gets you. Entertainment, social life, robots and devices making life easier, self-driving cars. We may be living in the glory days of internet privacy, despite all the complaints.

  40. The Arab Spring:
    "Gentlemen, these kids took us by surprise."
    "That was once but never again."
    "Do we agree on that?"

  41. 53:55 this dumbass is really pissing me off… As a person who has used computers since the mid '70s, and was using Prodigy and had access to the net between universities, my biggest fear was opening the net to all of the world. And now "cyber crime" has skyrocketed, China now has personnel files on everyone in addition to hijacking their way into mainstream tech society by bootlegging billions in software piracy, and now this moron is telling us we need another layer of bureaucracy to watch over our freedom of speech? Where the FUCK WERE YOU AND YOUR DUMBASS COLLEAGUES 30 YEARS AGO?! JESUS FUCKING CHRIST, talk about many days late and billions of dollars short. The World Science Festival… more like a gathering of dumbasses.

  42. I'm sure glad I don't use any social media at all. I never belonged to Facebook, twitter, etc. I have better things to do.

  43. 6:55 Everything about this guy's aesthetic is terrible… and I can't figure out if that is a good or bad thing.

  44. Really an excellent video, thank you. We need more people like the host, who sounds intelligent and knows what he is talking about.

  45. Sometimes I talk about something to my wife and it magically pops up as a video on YouTube the next day… how?

  46. Maybe he is an architect of the internet, but I would listen to him more if he were not a fat, dirty hippie. And whenever I see a man's disgusting bare toes at a World Science Festival talk, I stop focusing on the words coming out of his mouth. It's unfortunate because it happens a lot, especially in the quantum physics community.

  47. Damn Meredith Whittaker, sexy and smart. I’d love to drink some win3 with her and show her my algerythem. 😆

  48. Your Metadata is worth $$$ and that's what these huge corporations are all after. The more humans we got clicking "I Agree" to the terms and conditions the less privacy you will have and the more $$$ these corporations will make.

  49. People will always manipulate a tool to do what a person wants (sometimes maliciously), regardless of the platform and the rules or boundaries set in place to prevent exactly that. It's basic human nature, and sometimes it's not even intended to be malicious. People will take a thing and exploit it for their own benefit. I believe one of the ways to decrease how this affects us socially is education about the thing (tool) and what its intended purpose is. That way people can determine for themselves if the tool is functioning the way it was intended or if someone is using it to exploit its users for the benefit of the manipulator… Great topic and great panel guests.

  51. It's really odd, but when the guy with the Indian accent talks I have to turn on subtitles. The problem isn't so much his pronunciation as the rhythm of his sentences. So much of language is unconscious, and the rhythm of how you talk is, I guess, as important to being easily understood as pronunciation. Whether you can catch every word a person says depends on the individual who is talking.

  52. Wow… look how curated this video is. I remember there being thousands of comments, and a huge dislike ratio reflecting those numbers. Now there are more "likes" than comments. Way to take back the internet! Fucking losers, we see what you are doing. It won't work.

  53. The negative comments are very revealing in that it's a certainty that they were written by idiots who didn't bother watching the video.
