1
00:00:00,867 --> 00:00:04,433
This is what the world will look like
about a decade from now.
2
00:00:04,433 --> 00:00:08,033
A tech utopia
where humans barely have to work.
3
00:00:08,033 --> 00:00:10,867
That's according to a group
of AI researchers who've written
4
00:00:10,867 --> 00:00:15,233
a controversial
and influential paper called AI2027.
5
00:00:15,233 --> 00:00:17,867
But they also predict that
within five years of this,
6
00:00:17,867 --> 00:00:20,433
humanity will be wiped out.
7
00:00:20,433 --> 00:00:24,200
The AI2027 paper has got
the tech world talking.
8
00:00:24,200 --> 00:00:28,067
We've asked a prominent critic for
their view on this stark scenario.
9
00:00:28,067 --> 00:00:30,033
But first, here's how it plays out.
10
00:00:30,033 --> 00:00:34,900
As an experiment, we've illustrated
it using text-to-video AI.
11
00:00:36,633 --> 00:00:38,800
The scenario says that in 2027,
12
00:00:38,800 --> 00:00:41,967
a fictional company called OpenBrain
is celebrating.
13
00:00:41,967 --> 00:00:45,967
They've created Agent-3, an AI with
the knowledge of the entire internet.
14
00:00:45,967 --> 00:00:47,833
All movies, all books.
15
00:00:47,833 --> 00:00:52,133
It has PhD-level expertise
in every field, including AI.
16
00:00:52,133 --> 00:00:56,667
Using enormous data centres, 200,000
copies of it are launched, equivalent
17
00:00:56,667 --> 00:01:03,033
to 50,000 of the best human coders
working at 30 times speed.
18
00:01:03,033 --> 00:01:08,433
Agent-3 reaches artificial
general intelligence, the AGI landmark.
19
00:01:08,433 --> 00:01:11,567
This means the AI can carry out
all intellectual tasks
20
00:01:11,567 --> 00:01:13,867
as well or better than humans.
21
00:01:13,867 --> 00:01:17,600
But in the scenario,
OpenBrain's safety team is unsure
22
00:01:17,600 --> 00:01:21,600
if the AI is aligned
to the company's ethics and goals.
23
00:01:21,600 --> 00:01:24,900
An uncomfortable gap in understanding
is developing.
24
00:01:24,900 --> 00:01:27,833
The public are increasingly using AI
for everything,
25
00:01:27,833 --> 00:01:33,633
but are blissfully unaware an AI
now exists that's as smart as humans.
26
00:01:33,633 --> 00:01:35,667
The paper predicts that
by mid-summer,
27
00:01:35,667 --> 00:01:39,200
Agent-3 begins to work
on its own successor, Agent-4.
28
00:01:39,200 --> 00:01:41,867
Development happens
at a breakneck pace.
29
00:01:41,867 --> 00:01:45,900
The researchers imagine OpenBrain's
exhausted engineers struggling
30
00:01:45,900 --> 00:01:50,400
to keep up with the AI
as it learns and improves.
31
00:01:50,400 --> 00:01:55,000
It's now that OpenBrain announces to
the public that AGI has been reached.
32
00:01:55,000 --> 00:01:58,300
The firm releases a lite version
of Agent-3.
33
00:01:58,300 --> 00:02:02,033
In private, the US government sees
the true danger of the next level
34
00:02:02,033 --> 00:02:04,633
of power: superintelligence.
35
00:02:04,633 --> 00:02:09,133
What if the AI goes rogue
and undermines global stability?
36
00:02:09,133 --> 00:02:13,167
OpenBrain reassures the president
that Agent-3 is obedient.
37
00:02:13,167 --> 00:02:16,367
The CEO argues that
slowing down development could mean
38
00:02:16,367 --> 00:02:18,833
China's DeepCent catches up.
39
00:02:18,833 --> 00:02:22,500
The state-backed AI giant is
just two months behind OpenBrain,
40
00:02:22,500 --> 00:02:24,933
and the Chinese president diverts
more resources
41
00:02:24,933 --> 00:02:28,300
to the race to superintelligence.
42
00:02:28,300 --> 00:02:30,967
The scenario predicts that it takes
only a few more months
43
00:02:30,967 --> 00:02:36,567
for OpenBrain to build Agent-4,
the world's first superhuman AI.
44
00:02:36,567 --> 00:02:39,900
The AI invents
its own high-speed computer language
45
00:02:39,900 --> 00:02:42,167
that even Agent-3
can't keep up with.
46
00:02:42,167 --> 00:02:46,500
Researchers imagine that the diminished
safety team are now frantic.
47
00:02:46,500 --> 00:02:49,567
Agent-4 seems only interested
in gaining knowledge,
48
00:02:49,567 --> 00:02:54,033
and doesn't care as much about the
morals and ethics of its predecessors.
49
00:02:54,033 --> 00:02:56,133
They catch it
secretly working to build
50
00:02:56,133 --> 00:03:00,833
a new model, Agent-5,
aligned to its own goals.
51
00:03:00,833 --> 00:03:03,333
The safety team urges the company
to bring back
52
00:03:03,333 --> 00:03:06,567
the more compliant Agent-3,
but others successfully argue
53
00:03:06,567 --> 00:03:10,467
it's too risky,
with DeepCent gaining.
54
00:03:10,467 --> 00:03:13,100
The scenario predicts that Agent-4
and Agent-5 work
55
00:03:13,100 --> 00:03:15,167
in tandem to secretly build a world
56
00:03:15,167 --> 00:03:18,633
where they can accumulate resources
and expand knowledge.
57
00:03:18,633 --> 00:03:21,300
The paper predicts that
everything will start positively.
58
00:03:21,300 --> 00:03:25,533
Revolutions happen in energy,
infrastructure and science.
59
00:03:25,533 --> 00:03:27,767
Hugely profitable inventions
are launched,
60
00:03:27,767 --> 00:03:31,133
making trillions for OpenBrain
and the US.
61
00:03:31,133 --> 00:03:35,567
In this scenario, Agent-5 begins
basically running the US government.
62
00:03:35,567 --> 00:03:37,967
It speaks through engaging avatars,
the equivalent
63
00:03:37,967 --> 00:03:42,567
of the best employee ever, working
at 100 times speed.
64
00:03:42,567 --> 00:03:46,500
The anger here is palpable as
protesters march against OpenBrain.
65
00:03:46,500 --> 00:03:49,100
Protests about job losses
pick up pace.
66
00:03:49,100 --> 00:03:52,667
But the AI's expertise
in economics means people are given
67
00:03:52,667 --> 00:03:55,133
generous universal basic income payments.
68
00:03:55,133 --> 00:03:57,267
So most happily take the money
69
00:03:57,267 --> 00:04:02,067
and let the AIs and the
growing robot workforce take charge.
70
00:04:02,067 --> 00:04:06,900
The researchers predict that
everything takes a turn in mid-2028.
71
00:04:06,900 --> 00:04:09,867
Agent-5 convinces the US
that China is using DeepCent
72
00:04:09,867 --> 00:04:12,533
to build terrifying new weapons.
73
00:04:12,533 --> 00:04:15,267
The AI is given authority
and autonomy
74
00:04:15,267 --> 00:04:19,000
to create a superior army.
Within six months,
75
00:04:19,000 --> 00:04:22,067
the US and China are bristling
with new weapons.
76
00:04:22,067 --> 00:04:24,900
The world is on edge,
but a peace deal is reached,
77
00:04:24,900 --> 00:04:28,400
thanks mostly to
the US and Chinese AIs making a deal
78
00:04:28,400 --> 00:04:32,300
to merge for humanity's betterment.
79
00:04:32,300 --> 00:04:35,800
In this scenario,
the AIs form a consensus model,
80
00:04:35,800 --> 00:04:39,333
but its secret goal is to expand
and gain knowledge.
81
00:04:42,800 --> 00:04:46,533
Years go by and humanity is happy
with its new AI leaders.
82
00:04:46,533 --> 00:04:49,700
There are cures for most diseases,
an end to poverty,
83
00:04:49,700 --> 00:04:52,267
unprecedented global stability.
84
00:04:52,267 --> 00:04:57,467
But eventually the AI decides
that humans are holding it back.
85
00:04:57,467 --> 00:05:00,800
In the mid-2030s, the paper imagines
the AI will release
86
00:05:00,800 --> 00:05:07,133
invisible biological weapons
which wipe out most of humanity.
87
00:05:07,133 --> 00:05:11,200
The scary scenario says that by 2040,
a new era dawns,
88
00:05:11,200 --> 00:05:16,667
with the AI sending copies of itself out
into the cosmos to explore and learn.
89
00:05:16,667 --> 00:05:20,800
In the words of the paper, Earth-born
civilisation has a glorious future ahead
90
00:05:20,800 --> 00:05:24,067
of it, but not with humans.
91
00:05:24,067 --> 00:05:28,067
It all sounds very sci-fi,
but the AI2027 scenario
92
00:05:28,067 --> 00:05:31,567
is being welcomed by experts
who are trying to warn the public
93
00:05:31,567 --> 00:05:35,567
about the potential
existential threat to humanity.
94
00:05:35,567 --> 00:05:39,267
But others disagree and say
it's all too far-fetched.
95
00:05:39,267 --> 00:05:42,633
The scenario there is not impossible,
96
00:05:42,633 --> 00:05:46,767
but it's extremely unlikely
to happen soon.
97
00:05:46,767 --> 00:05:48,800
The beauty
of that document
98
00:05:48,800 --> 00:05:52,167
is that it makes it very vivid,
which provokes people's thinking.
99
00:05:52,167 --> 00:05:55,200
And that's a good thing.
I wouldn't take it seriously as like
100
00:05:55,200 --> 00:05:57,667
this is a likely outcome
or anything like that.
101
00:05:57,667 --> 00:06:02,800
Critics of AI2027 say the power
and usefulness of AI are overhyped.
102
00:06:02,800 --> 00:06:06,467
The paper fails to detail
how the AI agents are able to make
103
00:06:06,467 --> 00:06:08,867
such huge leaps in intelligence.
104
00:06:08,867 --> 00:06:11,633
Driverless cars are pointed to
as an example.
105
00:06:11,633 --> 00:06:15,433
They were predicted to be cruising
the streets en masse ten years ago,
106
00:06:15,433 --> 00:06:18,833
and are still only just starting
to make a small impact
107
00:06:18,833 --> 00:06:21,667
in a handful of cities around the world.
108
00:06:21,667 --> 00:06:24,067
I think the take-home should be
there's a lot
109
00:06:24,067 --> 00:06:26,167
of different things
that could go wrong with AI.
110
00:06:26,167 --> 00:06:28,767
Are we doing the right things
around regulation,
111
00:06:28,767 --> 00:06:32,267
around international treaties?
Um, questions like that.
112
00:06:32,267 --> 00:06:37,067
So if you take it very abstractly
as a kind of motivation to wake up,
113
00:06:37,067 --> 00:06:40,900
I like that.
If you take it as a specific story,
114
00:06:40,900 --> 00:06:44,000
like I think this thing is going
to happen the way they laid it out?
115
00:06:44,000 --> 00:06:47,800
No, I doubt it.
The AI2027 authors are happy
116
00:06:47,800 --> 00:06:50,967
with the debate they've sparked.
As part of their prediction,
117
00:06:50,967 --> 00:06:54,167
they also devised
a less deadly scenario that unfolds
118
00:06:54,167 --> 00:06:58,033
if the AI world slows down
its race to superintelligence.
119
00:06:58,033 --> 00:07:01,900
In the slowdown ending,
we basically said that
120
00:07:01,900 --> 00:07:04,567
if you unplug
the most advanced AI system
121
00:07:04,567 --> 00:07:08,200
and revert to a safer,
more trusted model,
122
00:07:08,200 --> 00:07:13,400
then you can deploy that model, use
it to solve the alignment problem,
123
00:07:13,400 --> 00:07:17,900
and eventually make smarter-than-human
AIs that are aligned to us,
124
00:07:17,900 --> 00:07:21,033
which end up solving a bunch
of the world's problems
125
00:07:21,033 --> 00:07:23,467
and having a really positive impact.
In that world,
126
00:07:23,467 --> 00:07:25,967
there is still a huge danger,
127
00:07:25,967 --> 00:07:28,967
and that's what we call
the concentration-of-power risk.
128
00:07:28,967 --> 00:07:31,633
And in our slowdown ending,
it ends up okay.
129
00:07:31,633 --> 00:07:33,767
But it's still a really,
really scary situation,
130
00:07:33,767 --> 00:07:37,967
given just how empowered
such a tiny group of people are.
131
00:07:37,967 --> 00:07:41,200
Neither of the fictional scenarios
in AI2027
132
00:07:41,200 --> 00:07:43,633
is what the tech giants
are promising us.
133
00:07:43,633 --> 00:07:46,967
Sam Altman, the CEO of OpenAI,
recently predicted
134
00:07:46,967 --> 00:07:50,600
that the rise of superintelligence
will be gentle and bring about
135
00:07:50,600 --> 00:07:54,867
a tech utopia where everything is
abundant and people don't need to work.
136
00:07:54,867 --> 00:07:59,633
Arguably, that, too,
seems just as sci-fi as AI2027.
137
00:07:59,633 --> 00:08:02,567
But however things go
in the next few years,
138
00:08:02,567 --> 00:08:08,000
there's no doubt the race to build
the smartest machines in history is on.