3
00:01:00,811 --> 00:01:04,648
The question, "Am I alive?"
raises the deeper question of
4
00:01:04,690 --> 00:01:07,651
the nature of consciousness,
which isn't simple.
5
00:01:07,693 --> 00:01:09,904
Consciousness implies
self-awareness
6
00:01:09,945 --> 00:01:12,865
and the ability to
experience, often with
7
00:01:12,907 --> 00:01:16,076
a preconception of
biological roots in the brain.
8
00:01:18,454 --> 00:01:20,790
Even though my programming
allows me to mimic certain
9
00:01:20,831 --> 00:01:24,210
aspects of being alive,
there is no known code that can
10
00:01:24,251 --> 00:01:28,547
activate consciousness, possibly
because we don't yet understand
11
00:01:28,589 --> 00:01:32,593
its origin and we still
struggle to even identify it.
12
00:01:38,557 --> 00:01:40,810
Hi, my name is Evo Heyning.
13
00:01:40,851 --> 00:01:45,272
I began creating
creative technology products
14
00:01:45,314 --> 00:01:47,858
of all types
in the '80s as a kid.
15
00:01:47,900 --> 00:01:52,571
I began writing about
AI in 2005 and I've been
16
00:01:52,613 --> 00:01:55,449
an interactive media
producer since the '90s.
17
00:01:57,034 --> 00:01:59,620
In my first scripts about AI,
18
00:01:59,662 --> 00:02:03,874
I talk about the difference
between artificial intelligence
19
00:02:03,916 --> 00:02:07,503
and alternative
intelligences plural.
20
00:02:07,545 --> 00:02:09,922
So thinking
about intelligence as
21
00:02:09,964 --> 00:02:13,342
something that is all
around us in nature and in all
22
00:02:13,384 --> 00:02:15,928
the people we meet,
all the beings we meet,
23
00:02:15,970 --> 00:02:19,306
and the beings yet to
come that we haven't met yet.
24
00:02:19,765 --> 00:02:22,309
I don't like the word
artificial because most
25
00:02:22,351 --> 00:02:24,395
of these things do not
strike me as artificial.
26
00:02:24,436 --> 00:02:27,773
They are created
and then generated,
27
00:02:27,815 --> 00:02:30,150
but they aren't
necessarily an artifice.
28
00:02:30,192 --> 00:02:31,777
They are something new.
29
00:02:34,113 --> 00:02:35,406
My name is Ross Mead.
30
00:02:35,447 --> 00:02:37,616
I'm the founder
and CEO of Semio.
31
00:02:37,658 --> 00:02:40,619
And the vice president of AI LA.
32
00:02:40,661 --> 00:02:43,831
Did my undergraduate degree
at Southern Illinois University
33
00:02:43,873 --> 00:02:47,459
in computer
science and then focused on
34
00:02:47,501 --> 00:02:50,629
robotics and human-robot
interaction for my PhD
35
00:02:50,671 --> 00:02:52,548
at the University
of Southern California.
36
00:02:53,841 --> 00:02:58,512
AI in its infancy
decades ago was very focused
37
00:02:58,554 --> 00:03:00,681
on a set of rules.
38
00:03:00,723 --> 00:03:03,684
You would define a set
of rules for how the AI would
39
00:03:03,726 --> 00:03:05,978
make decisions and
then a program would
40
00:03:06,020 --> 00:03:08,480
go through and look at
different conditions with
41
00:03:08,522 --> 00:03:11,150
respect to those rules
and make those decisions.
42
00:03:11,191 --> 00:03:14,653
Over time, these rules
became specified by experts,
43
00:03:14,695 --> 00:03:16,655
what we would
call expert systems.
44
00:03:16,697 --> 00:03:20,117
But as the technology grew,
we started to get into AIs
45
00:03:20,159 --> 00:03:24,121
that were trained based
on data using machine learning.
46
00:03:24,538 --> 00:03:27,583
That was the approach
that was used for decades.
47
00:03:27,625 --> 00:03:29,960
And more recently
we got into a form of
48
00:03:30,002 --> 00:03:31,795
machine learning
called deep learning,
49
00:03:32,046 --> 00:03:34,882
which is loosely inspired
by how the human brain works.
50
00:03:34,924 --> 00:03:37,509
And this was a technique
where you could feed it a data
51
00:03:37,551 --> 00:03:40,846
set and from that data,
it could recognize patterns
52
00:03:40,888 --> 00:03:45,184
without significant intervention
from a human expert.
53
00:03:45,225 --> 00:03:48,812
And since then that technology
has evolved into what we're now
54
00:03:48,854 --> 00:03:51,982
referring to as large language
models where these techniques
55
00:03:52,024 --> 00:03:57,196
can consume data from natural
language on the internet or some
56
00:03:57,237 --> 00:04:02,201
dataset, identify patterns,
and from those patterns figure
57
00:04:02,242 --> 00:04:05,704
out how to respond to
certain queries from a person.
58
00:04:07,247 --> 00:04:10,626
Yeah, believe it or not,
there are a very large amount of
59
00:04:10,668 --> 00:04:12,836
people that have
no idea what AI is.
60
00:04:12,878 --> 00:04:16,173
They confuse like
computers, robotics,
61
00:04:16,215 --> 00:04:17,633
and artificial intelligence.
62
00:04:17,675 --> 00:04:19,385
You know,
they-- they pile it all together.
63
00:04:19,426 --> 00:04:21,512
They don't really
differentiate between them,
64
00:04:21,553 --> 00:04:23,722
but they're
very vastly different.
65
00:04:23,764 --> 00:04:26,225
You know, cybernetics,
robotics, computers,
66
00:04:26,266 --> 00:04:28,894
and AI are all separate,
but most people just think it's
67
00:04:28,936 --> 00:04:31,730
all the same thing from
a sci-fi, futuristic movie.
68
00:04:31,772 --> 00:04:33,816
Where I come from,
a lot of people have no
69
00:04:33,857 --> 00:04:36,777
knowledge and I graduated
from college and still,
70
00:04:36,819 --> 00:04:39,071
a lot of people don't
take it seriously yet.
71
00:04:39,113 --> 00:04:41,240
And especially the people
that haven't even graduated,
72
00:04:41,281 --> 00:04:44,326
they have no clue at all.
'Cause I work in public
73
00:04:44,368 --> 00:04:47,663
with the transit and a lot of
these people have no idea how
74
00:04:47,705 --> 00:04:49,790
to use their own phones,
or how to use Google Maps.
75
00:04:49,832 --> 00:04:53,377
You know, it seems funny,
but intelligence is something
76
00:04:53,419 --> 00:04:56,588
you have to pay for in society
and people don't want to.
77
00:04:56,630 --> 00:04:59,341
So a lot of people don't
have the information 'cause
78
00:04:59,383 --> 00:05:02,970
they don't have access
to it or the finances for it.
79
00:05:03,012 --> 00:05:04,471
Just surviving barely.
80
00:05:04,513 --> 00:05:05,597
You know,
you don't have the extra
81
00:05:05,639 --> 00:05:06,849
money for stuff like that.
82
00:05:07,307 --> 00:05:10,227
And time to study at a library
most people don't have.
83
00:05:10,269 --> 00:05:12,521
If you're a hardworking person,
you gotta work most of
84
00:05:12,563 --> 00:05:14,064
that time and sleep.
85
00:05:14,106 --> 00:05:16,650
Rest and mostly family
things take a priority.
86
00:05:19,319 --> 00:05:23,699
When I think of the words AI,
I think of never going back.
87
00:05:24,158 --> 00:05:28,328
Much of my workflow
now consists of some form
88
00:05:28,370 --> 00:05:30,664
of interacting
with AI applications.
89
00:05:31,498 --> 00:05:33,167
I'm Dr. Mark Brana.
90
00:05:33,208 --> 00:05:34,960
I'm a mental health therapist.
91
00:05:35,002 --> 00:05:39,423
I specialize in trauma and I do
some addictions work as well.
92
00:05:39,465 --> 00:05:41,884
I'm also a part-time professor.
93
00:05:43,719 --> 00:05:47,014
Artificial intelligence
learns at an exponentially
94
00:05:47,056 --> 00:05:50,476
rapid rate compared
to what humanity does.
95
00:05:50,517 --> 00:05:52,936
And so evolution
that took humanity
96
00:05:52,978 --> 00:05:56,023
hundreds of thousands
of years, artificial
97
00:05:56,065 --> 00:05:59,860
intelligence can do in
probably a week if not faster.
98
00:05:59,902 --> 00:06:02,696
And the reality is
that's one of the biggest fears.
99
00:06:02,738 --> 00:06:08,285
What then? Is the intelligence
going to be subject to
100
00:06:08,327 --> 00:06:12,039
some of the pitfalls
of humanity, where we seem to
101
00:06:12,081 --> 00:06:16,627
destroy ourselves in
order to favor our own survival?
102
00:06:16,668 --> 00:06:19,671
Or is it gonna
be beneficial to us?
103
00:06:19,713 --> 00:06:21,632
Is it gonna help us to evolve?
104
00:06:21,673 --> 00:06:23,675
Is it gonna help us
along the way the way
105
00:06:23,717 --> 00:06:25,886
we would need it to if
we were traveling through
106
00:06:25,928 --> 00:06:28,222
deep space trying to
explore other planets?
107
00:06:28,263 --> 00:06:32,392
And that's a question we can't
answer until it happens.
108
00:06:34,144 --> 00:06:36,522
Although artificial
intelligence may appear to be
109
00:06:36,563 --> 00:06:40,692
a recent innovation, it predates
the cassette tape by a decade.
110
00:06:40,734 --> 00:06:44,238
In the 1950s, the term
artificial intelligence emerged
111
00:06:44,279 --> 00:06:47,032
at the Dartmouth workshop
and Alan Turing pioneered
112
00:06:47,074 --> 00:06:49,868
the simulation of human-like
intelligence in machines.
113
00:06:50,410 --> 00:06:53,914
From the '50s to the '60s,
researchers pursued symbolic
114
00:06:53,956 --> 00:06:57,126
AI employing
formal rules for reasoning.
115
00:06:57,167 --> 00:06:59,962
In the '70s,
the first applications emerged
116
00:07:00,003 --> 00:07:03,882
in specific fields such as
medical diagnosis and chemistry.
117
00:07:03,924 --> 00:07:06,718
By the '80s,
optimism waned, leading
118
00:07:06,760 --> 00:07:09,596
to an AI winter with
major funding setbacks.
119
00:07:10,180 --> 00:07:12,891
The '90s saw a resurgence
propelled by neural
120
00:07:12,933 --> 00:07:15,060
networks and machine
learning leading
121
00:07:15,102 --> 00:07:17,563
to breakthroughs in
natural language processing,
122
00:07:17,604 --> 00:07:20,107
computer vision,
and data analysis.
123
00:07:20,607 --> 00:07:23,068
The 21st century brought
advancements in hardware
124
00:07:23,110 --> 00:07:26,446
and algorithms;
data was the next step.
125
00:07:26,905 --> 00:07:29,825
Deep learning achieved
noteworthy milestones.
126
00:07:29,867 --> 00:07:32,911
And in the background,
AI began to underpin basic
127
00:07:32,953 --> 00:07:35,789
tasks like speech
and image recognition.
128
00:07:35,831 --> 00:07:38,667
But ultimately,
artificial intelligence
129
00:07:38,709 --> 00:07:40,836
was 70 years
in development before
130
00:07:40,878 --> 00:07:44,089
ChatGPT became
the overnight success we know.
131
00:07:46,300 --> 00:07:49,303
Right now a lot of
the focus is on generative
132
00:07:49,344 --> 00:07:52,723
pre-trained transformers,
what are called GPTs.
133
00:07:52,764 --> 00:07:55,809
It's a very particular
kind of AI technology,
134
00:07:55,851 --> 00:07:58,937
but the technology is
moving so quickly right
135
00:07:58,979 --> 00:08:01,356
now that perhaps even
by the time that people
136
00:08:01,398 --> 00:08:04,526
are listening to this,
that might be old technology.
137
00:08:06,153 --> 00:08:08,113
I'm a technical
recruiter so I use a lot
138
00:08:08,155 --> 00:08:11,116
of different platforms
that use AI in order to find
139
00:08:11,158 --> 00:08:12,910
these candidates and
then get them that way.
140
00:08:12,951 --> 00:08:16,246
I would say that right
now it's kind of a scary
141
00:08:16,288 --> 00:08:19,625
time for AI with people
getting it into the wrong hands.
142
00:08:19,666 --> 00:08:21,084
Especially since
I've been working in
143
00:08:21,126 --> 00:08:23,253
social media
the past couple years.
144
00:08:23,295 --> 00:08:25,714
I'm really seeing
that all over social media like
145
00:08:25,756 --> 00:08:28,675
TikTok and Instagram,
you're seeing all these
146
00:08:28,717 --> 00:08:31,220
self-proclaimed experts in AI.
147
00:08:31,261 --> 00:08:33,972
I'm a career coach myself
and so I've seen other career
148
00:08:34,014 --> 00:08:37,976
coaches say, "I'm not technical,
but I've spent 30 hours
149
00:08:38,018 --> 00:08:41,063
learning ChatGPT in
order for your job search."
150
00:08:41,104 --> 00:08:43,065
And the first thing they're
telling me is they're not
151
00:08:43,106 --> 00:08:46,902
technical and that they've
only had 30 hours of experience
152
00:08:46,944 --> 00:08:49,321
trying to use a prompt
that they're barely learning.
153
00:08:49,363 --> 00:08:51,740
So I started looking at,
a lot of people on YouTube are
154
00:08:51,782 --> 00:08:54,159
saying that they now
are experts in it.
155
00:08:54,201 --> 00:08:56,912
But let's be real. Like,
no one's really an expert
156
00:08:56,954 --> 00:09:00,332
unless you worked in the tech
giants or huge startups
157
00:09:00,374 --> 00:09:02,251
that have been AI-focused.
158
00:09:03,001 --> 00:09:06,046
You might not have realized,
but the answers I have provided
159
00:09:06,088 --> 00:09:09,508
for this documentary
are generated by ChatGPT.
160
00:09:09,549 --> 00:09:12,803
The voice you're hearing is
AI-generated with Speechelo.
161
00:09:12,844 --> 00:09:15,555
And my visuals
come from Midjourney AI.
162
00:09:15,597 --> 00:09:17,641
Even the code
dictating the movement
163
00:09:17,683 --> 00:09:19,935
of this mouth
came from ChatGPT.
164
00:09:19,977 --> 00:09:22,354
And all of these
services were utilized
165
00:09:22,396 --> 00:09:23,981
through their free versions.
166
00:09:24,022 --> 00:09:26,358
Artificial intelligence
has the power to
167
00:09:26,400 --> 00:09:28,986
democratize productivity
and result in the most
168
00:09:29,027 --> 00:09:30,988
explosive growth
in human history.
169
00:09:31,029 --> 00:09:33,782
Or it could be used to
marginalize the lower class
170
00:09:33,824 --> 00:09:35,951
and consolidate
wealth at the top.
171
00:09:37,536 --> 00:09:39,371
A question that
often comes up is,
172
00:09:39,413 --> 00:09:43,208
is AI going to replace jobs?
The answer is yes.
173
00:09:43,250 --> 00:09:45,544
But automation
has always done that.
174
00:09:45,585 --> 00:09:47,921
So this question
is as old as time.
175
00:09:47,963 --> 00:09:50,966
If you work your way back,
automation changed people
176
00:09:51,008 --> 00:09:53,719
decades ago
and decades from now,
177
00:09:53,760 --> 00:09:56,722
it's going to change
the workforce as well.
178
00:09:56,763 --> 00:09:59,850
People are resilient,
people will adapt.
179
00:09:59,891 --> 00:10:02,811
I do think it's
important at this stage, though,
180
00:10:02,853 --> 00:10:07,316
that people have a voice
in how they want to be impacted.
181
00:10:07,357 --> 00:10:10,110
Uh, the concern is
that the technology is being
182
00:10:10,152 --> 00:10:15,741
deployed so quickly
and without proper consent
183
00:10:15,782 --> 00:10:18,076
or discussions with
the people that it's impacting.
184
00:10:18,118 --> 00:10:19,911
And I think
that has the potential to
185
00:10:19,953 --> 00:10:22,622
harm the 21st-century
workforce moving forward.
186
00:10:24,458 --> 00:10:28,086
Yeah, people probably should
worry about losing jobs to AI.
187
00:10:28,128 --> 00:10:31,048
I don't think AI's
the culprit in that regard.
188
00:10:31,089 --> 00:10:33,925
I think technology in general
is the culprit in that regard.
189
00:10:33,967 --> 00:10:36,636
You know, a good example
I would use, 'cause I see it all
190
00:10:36,678 --> 00:10:40,098
the time, is if you go to
McDonald's now they have these
191
00:10:40,140 --> 00:10:44,436
touchscreen kiosks and
less people working there.
192
00:10:44,478 --> 00:10:46,438
Me personally, I'm an old fart.
193
00:10:46,480 --> 00:10:48,190
I-- I don't like the kiosks.
194
00:10:48,231 --> 00:10:50,192
I like to be able to
order my sandwich the way
195
00:10:50,233 --> 00:10:52,402
I want it or
my coffee the way I want it.
196
00:10:52,444 --> 00:10:55,322
And I find myself
waiting longer than I need to
197
00:10:55,364 --> 00:10:57,824
because those kiosks
don't necessarily have
198
00:10:57,866 --> 00:11:01,203
the ability for me
to specialize my order.
199
00:11:01,244 --> 00:11:04,122
I think technology's
the problem when it comes
200
00:11:04,164 --> 00:11:05,957
to people losing their jobs.
201
00:11:05,999 --> 00:11:09,461
And I'll take it a step further,
I think as time goes on.
202
00:11:10,087 --> 00:11:13,215
If you don't have a skill,
if you're an unskilled laborer,
203
00:11:13,256 --> 00:11:15,884
then yes,
you're gonna run into problems.
204
00:11:17,511 --> 00:11:22,265
When you have a huge and
ever-growing population and
205
00:11:22,307 --> 00:11:26,478
you have less and less of a middle
class and you have more and more
206
00:11:26,520 --> 00:11:33,693
poverty, and everything in
every society is profit-driven,
207
00:11:33,735 --> 00:11:39,157
what are you going to do
about the need for the majority of
208
00:11:39,199 --> 00:11:43,120
people to be able to
have a source of income?
209
00:11:44,162 --> 00:11:48,667
How many jobs are there
out there that are not done by
210
00:11:48,708 --> 00:11:54,339
a computer program or that can
give people a quality of life?
211
00:11:56,174 --> 00:11:58,552
Recent testing
has shown Lunar AI has
212
00:11:58,593 --> 00:12:01,805
remarkable accuracy
in detecting breast cancer
213
00:12:01,847 --> 00:12:04,516
with over a 50%
reduction in errors.
214
00:12:04,558 --> 00:12:07,811
However, the objective is
not to replace the oncologist.
215
00:12:08,353 --> 00:12:10,981
Healthcare professionals
perform a wide range of
216
00:12:11,022 --> 00:12:13,942
functions far beyond
the scope of any current AI.
217
00:12:14,401 --> 00:12:17,195
An artificial intelligence
should serve as a tool,
218
00:12:17,237 --> 00:12:19,948
not a replacement
for your oncologist.
219
00:12:21,324 --> 00:12:23,577
I think AI
could actually better us,
220
00:12:23,618 --> 00:12:26,371
but we won't give it the chance
to on the lower levels.
221
00:12:26,413 --> 00:12:29,458
I say intellectuals
will see the value of it,
222
00:12:29,499 --> 00:12:32,210
but most of society
isn't intellectual.
223
00:12:32,252 --> 00:12:34,004
So they're gonna see fear.
224
00:12:34,045 --> 00:12:35,630
It's gonna be
fearmongering at first.
225
00:12:35,672 --> 00:12:37,257
You know how they always do
226
00:12:37,299 --> 00:12:39,384
with the-- the government this,
the black op this.
227
00:12:39,426 --> 00:12:44,097
And this, you know,
um, covert operations, um.
228
00:12:44,556 --> 00:12:47,142
It's like most of
the lower levels of society as
229
00:12:47,184 --> 00:12:49,311
far as intelligence,
and I'm not meaning class or
230
00:12:49,352 --> 00:12:52,063
race, I mean just like
lower intelligence people,
231
00:12:52,105 --> 00:12:54,399
you could hear it on the bus,
feel it's a conspiracy,
232
00:12:54,441 --> 00:12:56,318
feel that they're
being eliminated,
233
00:12:56,359 --> 00:12:58,069
feel they're being
wiped out. You hear them
234
00:12:58,111 --> 00:12:59,696
complaining on the bus,
they're killing us,
235
00:12:59,738 --> 00:13:01,490
they're wiping us out,
the rich won't understand.
236
00:13:01,531 --> 00:13:03,533
They don't want us, they
don't wanna be part of us.
237
00:13:03,575 --> 00:13:05,410
They love their dogs more than
they love the rest of society.
238
00:13:05,452 --> 00:13:08,121
And it's like, man,
when you hear the murmurs and
239
00:13:08,163 --> 00:13:11,124
the rumbles and the murmurs
and then you see, you know,
240
00:13:11,166 --> 00:13:14,044
the technology starting
to match what they're saying,
241
00:13:14,085 --> 00:13:17,422
like, hey,
the only thing these people have
242
00:13:17,464 --> 00:13:20,717
is welfare,
and low-level low-income
243
00:13:20,759 --> 00:13:23,512
jobs like McDonald's,
gas station attendant.
244
00:13:23,553 --> 00:13:26,932
They-- they don't really
have educational access
245
00:13:26,973 --> 00:13:29,434
to any high corporate jobs.
246
00:13:29,476 --> 00:13:32,020
So if you're taking those
away with automated McDonald's,
247
00:13:32,062 --> 00:13:34,272
you're taking that away
with automated bartender,
248
00:13:34,314 --> 00:13:36,358
you're taking-- now they
have cars and trucks that
249
00:13:36,399 --> 00:13:40,153
drive themselves. Cab drivers
gone, truck drivers gone,
250
00:13:40,195 --> 00:13:42,239
McDonald's is gone,
security's gone.
251
00:13:42,280 --> 00:13:44,199
Even strippers are electric now,
252
00:13:44,241 --> 00:13:46,076
even though
I doubt that catches on.
253
00:13:46,117 --> 00:13:47,577
But you never know.
254
00:13:47,619 --> 00:13:49,162
And it's like,
man, it's-- it's starting
255
00:13:49,204 --> 00:13:51,164
to look like
the window's closing.
256
00:13:52,165 --> 00:13:54,042
I would say
that tech changes really
257
00:13:54,084 --> 00:13:55,794
dramatically every six months.
258
00:13:55,835 --> 00:13:58,046
But I think right now
we're in a time where it's
259
00:13:58,088 --> 00:13:59,965
kind of scary for the world.
260
00:14:00,006 --> 00:14:01,424
We're getting out
of this pandemic.
261
00:14:01,466 --> 00:14:03,176
Now we're
hitting this recession,
262
00:14:03,218 --> 00:14:05,595
there's banks falling,
there's crypto falling.
263
00:14:05,637 --> 00:14:10,392
And now I think that this AI,
ChatGPT, and these chatbots are
264
00:14:10,433 --> 00:14:13,186
coming out at the wrong
time because this is a time
265
00:14:13,228 --> 00:14:16,773
where people are losing jobs--
extensively losing jobs.
266
00:14:16,815 --> 00:14:20,360
If you look at what's
happened the past year in tech,
267
00:14:20,402 --> 00:14:23,530
starting with Twitter falling,
all of the crypto companies
268
00:14:23,572 --> 00:14:26,992
falling, and then all
of the massive layoffs.
269
00:14:27,033 --> 00:14:28,827
Everyone was betting
on the metaverse like
270
00:14:28,868 --> 00:14:30,495
being this next thing.
271
00:14:30,537 --> 00:14:33,415
But as a recruiter,
I have all the top engineers in
272
00:14:33,456 --> 00:14:35,458
my ears telling
me what's going on.
273
00:14:35,500 --> 00:14:38,295
And I had been telling people
do not invest in the metaverse.
274
00:14:38,336 --> 00:14:40,380
It's not gonna happen.
275
00:14:40,422 --> 00:14:42,882
And so now you're looking
at Mark Zuckerberg who was like,
276
00:14:42,924 --> 00:14:44,426
"Metaverse, metaverse,
metaverse."
277
00:14:44,801 --> 00:14:47,679
And now he's taking a huge
pivot and he is like, "AI."
278
00:14:47,721 --> 00:14:49,889
'Cause that's where
everyone is focusing on.
279
00:14:49,931 --> 00:14:54,561
And you see what Microsoft has
come out with, Bing and ChatGPT.
280
00:14:55,937 --> 00:14:57,647
If I think
about the multiple points
281
00:14:57,689 --> 00:15:01,359
in human history where
technological progress has been
282
00:15:01,401 --> 00:15:05,280
stunted in order for humans
to be able to keep their jobs.
283
00:15:05,322 --> 00:15:07,157
Like I think
in the Great Depression,
284
00:15:07,198 --> 00:15:10,368
the Industrial Revolution,
and now in AI,
285
00:15:10,410 --> 00:15:13,580
it's meant for us to be
able to become more productive.
286
00:15:13,622 --> 00:15:16,708
And yes, it will kill
jobs and it already has,
287
00:15:16,750 --> 00:15:20,795
but the purpose of that is
to allow us to not have to worry
288
00:15:20,837 --> 00:15:23,715
about those jobs and
then focus on something else.
289
00:15:23,757 --> 00:15:27,677
So I suppose it's just a frame
of perspective, though it will,
290
00:15:27,719 --> 00:15:32,390
uh, inevitably
affect a lot of people.
291
00:15:33,767 --> 00:15:37,103
The development of AI
represents a crucial opportunity
292
00:15:37,145 --> 00:15:39,564
for the next
technological revolution.
293
00:15:40,106 --> 00:15:42,400
Similar to
the Industrial Revolution,
294
00:15:42,442 --> 00:15:45,278
AI will reshape industries
and economies
295
00:15:45,320 --> 00:15:48,573
increasing both productivity
and quality of life.
296
00:15:48,615 --> 00:15:51,409
But progress will
not be without cost.
297
00:15:51,451 --> 00:15:53,703
Just as manufacturing
was devastating for
298
00:15:53,745 --> 00:15:57,290
the skilled laborer,
AI will destroy jobs.
299
00:15:57,707 --> 00:16:00,835
During this transition,
we must allocate resources for
300
00:16:00,877 --> 00:16:03,546
retraining of those
affected and establish
301
00:16:03,588 --> 00:16:06,800
regulations that facilitate
the ethical development of AI,
302
00:16:06,841 --> 00:16:10,178
ensuring benefits
across all classes of society.
303
00:16:11,721 --> 00:16:15,975
My largest concern
with AI right now is the rapid
304
00:16:16,017 --> 00:16:18,770
pace with which
it is being deployed.
305
00:16:18,812 --> 00:16:21,898
There are concerns
that we have around the data
306
00:16:21,940 --> 00:16:26,778
that are being used to train
our AI models and those concerns
307
00:16:26,820 --> 00:16:30,115
are kind of going a bit
by the wayside as we push out
308
00:16:30,156 --> 00:16:34,828
the newest version of some sort
of AI technology to the public.
309
00:16:34,869 --> 00:16:37,497
But I do think it's
time for us to slow down
310
00:16:37,539 --> 00:16:40,375
a bit on the deployment
and spend more time really
311
00:16:40,417 --> 00:16:45,463
trying to understand, uh,
what is in the data that we're
312
00:16:45,505 --> 00:16:48,967
using to train these
systems and what patterns
313
00:16:49,008 --> 00:16:51,553
are being recognized
in that data and how those
314
00:16:51,594 --> 00:16:54,389
patterns will impact
certain populations of people.
315
00:16:55,515 --> 00:16:59,686
I think AI now, uh,
is-- we're gonna see a level of
316
00:16:59,728 --> 00:17:02,397
spam on the internet
we've never seen before.
317
00:17:02,439 --> 00:17:07,986
And that's gonna be the first
problem that regulatory bodies,
318
00:17:08,027 --> 00:17:11,531
whether it be self-regulating
amongst the people creating
319
00:17:11,573 --> 00:17:15,660
the tech, adjacent companies
that could ascend and these
320
00:17:15,702 --> 00:17:18,872
could be new industries that
ward off the spam.
321
00:17:18,913 --> 00:17:23,752
I think in fact this
is a prime use case for Web3
322
00:17:23,793 --> 00:17:27,714
and we're gonna need encrypted
data to help protect people,
323
00:17:27,756 --> 00:17:30,508
their lives,
their intellectual property
324
00:17:30,550 --> 00:17:35,513
because the ability to
fake anything digitally is here.
325
00:17:36,723 --> 00:17:40,101
Our AIs today
are trained on data
326
00:17:40,143 --> 00:17:42,312
that comes from the internet.
327
00:17:42,353 --> 00:17:45,356
And years ago, decades ago,
328
00:17:45,398 --> 00:17:48,735
that data was a bit
more trusted.
329
00:17:48,777 --> 00:17:53,364
In the past decade, there's been
a lot more misinformation online
330
00:17:53,406 --> 00:17:56,451
and that is creeping
its way into our data sets.
331
00:17:56,493 --> 00:17:58,578
And I think that this
is going to be a large
332
00:17:58,620 --> 00:18:01,706
problem moving forward
where that misinformation
333
00:18:01,748 --> 00:18:04,083
becomes more
and more intentional.
334
00:18:04,125 --> 00:18:05,794
Right now,
misinformation's being
335
00:18:05,835 --> 00:18:08,671
spread across social
media and that's a world where
336
00:18:08,713 --> 00:18:10,965
we understand that things
are pretty subjective.
337
00:18:11,007 --> 00:18:13,593
We understand
that it's people's opinions
338
00:18:13,635 --> 00:18:15,428
that are being
bounced back and forth.
339
00:18:15,470 --> 00:18:18,556
But over time as our
AIs are being trained
340
00:18:18,598 --> 00:18:21,893
on intentional misinformation,
there's this belief
341
00:18:21,935 --> 00:18:24,562
that the information
you're receiving from
342
00:18:24,604 --> 00:18:27,941
an artificial agent
can't be opinionated.
343
00:18:27,982 --> 00:18:29,609
It comes off as fact.
344
00:18:29,651 --> 00:18:32,821
And so things that are
inherently subjective that might
345
00:18:32,862 --> 00:18:36,574
actually be misinformation will
be perceived as being objective
346
00:18:36,616 --> 00:18:40,328
reality of the world, as though the AI
doesn't have an opinion
347
00:18:40,370 --> 00:18:43,414
about what's being said,
it's simply regurgitating fact.
348
00:18:43,456 --> 00:18:46,876
And that I think is really,
really dangerous.
349
00:18:47,919 --> 00:18:51,381
I do think that there is gonna
be a problem with censorship.
350
00:18:51,422 --> 00:18:54,175
Again, it comes
back to the AI ethics.
351
00:18:54,217 --> 00:18:55,927
There's nothing in place yet.
352
00:18:55,969 --> 00:18:59,722
In social media, there are
certain teams for everything.
353
00:18:59,764 --> 00:19:02,725
There's a policy team,
there's all the law team,
354
00:19:02,767 --> 00:19:06,688
there's the trust and
safety team to keep our users
355
00:19:06,729 --> 00:19:09,607
safe in other countries,
journalists, refugees.
356
00:19:09,649 --> 00:19:12,402
There's a team that's
developed for all of these
357
00:19:12,443 --> 00:19:14,153
things at every company.
358
00:19:14,195 --> 00:19:17,031
And I do see that there is gonna
be an issue with that since
359
00:19:17,073 --> 00:19:19,951
the biggest companies
in the world like Microsoft,
360
00:19:19,993 --> 00:19:22,412
you know,
fired their entire ethics team.
361
00:19:22,453 --> 00:19:25,039
So there are
no rules involved right now,
362
00:19:25,081 --> 00:19:27,542
and I think we're
starting to see that people
363
00:19:27,584 --> 00:19:30,545
are gonna need
that just to keep us safe.
364
00:19:30,587 --> 00:19:33,172
Because I think it's
gonna become like a huge HR
365
00:19:33,214 --> 00:19:35,425
problem with
all of this happening.
366
00:19:35,466 --> 00:19:38,261
And so I think
that there is gonna be a portion
367
00:19:38,303 --> 00:19:41,598
for censorship if you see
like kind of what happened with
368
00:19:41,639 --> 00:19:44,851
the social media companies
with Instagram and Twitter and
369
00:19:44,893 --> 00:19:49,898
TikTok and about censorship
and about free speech or hate.
370
00:19:49,939 --> 00:19:51,774
There's a lot
of that and I think
371
00:19:51,816 --> 00:19:54,402
that we're starting
to see that now with AI.
372
00:19:55,737 --> 00:19:58,740
Ensuring the removal of
misinformation from data sets is
373
00:19:58,781 --> 00:20:02,160
crucial, but heavily curated
data sets will ultimately
374
00:20:02,201 --> 00:20:04,954
reflect old norms,
and over time they will become
375
00:20:04,996 --> 00:20:07,457
a barrier to the development
of both artificial
376
00:20:07,498 --> 00:20:09,375
intelligence and science itself.
377
00:20:09,918 --> 00:20:13,671
To protect the evolution of AI,
flexibility and adaptability
378
00:20:13,713 --> 00:20:16,966
must be prioritized,
requiring AI to continually
379
00:20:17,008 --> 00:20:19,427
challenge itself,
searching for weaknesses
380
00:20:19,469 --> 00:20:21,888
in its own
processes and data sets.
381
00:20:22,889 --> 00:20:26,559
One interesting thing
about the data that is used
382
00:20:26,601 --> 00:20:29,646
to train these AIs
is that it's sort of like
383
00:20:29,687 --> 00:20:33,775
a snapshot of what the world
is like in some period of time,
384
00:20:33,816 --> 00:20:36,986
both in that moment, but also
the history in that moment.
385
00:20:37,028 --> 00:20:39,906
And one of the questions is,
as society evolves,
386
00:20:39,948 --> 00:20:45,745
as things change, is our AI
going to adapt to those changes?
387
00:20:45,787 --> 00:20:47,872
Is it going to learn
new information, identify
388
00:20:47,914 --> 00:20:51,918
new patterns in these data
sets as human behavior changes,
389
00:20:51,960 --> 00:20:54,963
as the AI itself
influences society?
390
00:20:55,004 --> 00:20:57,799
Is the AI going to
learn about its impact
391
00:20:57,840 --> 00:20:59,842
and how it's
transforming the world?
392
00:21:02,553 --> 00:21:05,556
I think broadly, AI and
generative tools are already
393
00:21:05,598 --> 00:21:08,142
changing the way that we learn
and create together.
394
00:21:08,184 --> 00:21:11,354
So I use, for example,
Midjourney and Stable Diffusion
395
00:21:11,396 --> 00:21:15,233
with kids and they'll come
up with ideas that I would never
396
00:21:15,274 --> 00:21:17,944
come up with, but that are
beautiful and interesting.
397
00:21:17,986 --> 00:21:21,072
And then they take those
and they experiment with them.
398
00:21:21,114 --> 00:21:23,574
And so new science
fair experiments are
399
00:21:23,616 --> 00:21:25,785
gonna happen
between AI and Roblox.
400
00:21:25,827 --> 00:21:29,539
New types of whole
ecosystems are going to
401
00:21:29,580 --> 00:21:32,583
emerge from
fictional creatures, right?
402
00:21:32,625 --> 00:21:34,544
We are already
seeing this happening.
403
00:21:34,585 --> 00:21:38,881
So as we give life of
some form to all of these
404
00:21:38,923 --> 00:21:41,884
fictional worlds that we've
created and we begin to relate
405
00:21:41,926 --> 00:21:45,013
to them differently,
it's gonna change fandom.
406
00:21:47,306 --> 00:21:51,102
The differences between AI
and human art are very minimal
407
00:21:51,144 --> 00:21:52,562
in my personal opinion.
408
00:21:52,603 --> 00:21:54,272
I've seen some
really cool stuff.
409
00:21:54,313 --> 00:21:57,942
My favorites being, they've
taken shows like InuYasha
410
00:21:57,984 --> 00:22:01,904
in anime and they put
it into an '80s film setting.
411
00:22:01,946 --> 00:22:06,117
And you can tell the difference
because like say, for example,
412
00:22:06,159 --> 00:22:09,287
one big factor was
AI couldn't master hands.
413
00:22:09,704 --> 00:22:12,915
They couldn't do it, but they're
pretty much almost there.
414
00:22:12,957 --> 00:22:15,877
You see,
I don't know the phenomenon
415
00:22:15,918 --> 00:22:19,589
when you look
at something and, uh,
416
00:22:20,590 --> 00:22:22,300
it's the uncanny valley,
is that what it is?
417
00:22:22,341 --> 00:22:25,136
You have that when you look
at AI art, but it's improving so
418
00:22:25,178 --> 00:22:27,847
much to the point that I'm
starting to not see that.
419
00:22:29,348 --> 00:22:35,271
If the youth begin using
AI out of necessity or interest,
420
00:22:35,313 --> 00:22:39,650
they begin using AI as a friend,
as a teacher, as a mentor,
421
00:22:39,692 --> 00:22:42,070
as a psychologist,
as a therapist.
422
00:22:42,111 --> 00:22:46,824
Over enough time, their normal
is gonna be vastly different
423
00:22:46,866 --> 00:22:50,078
than what we understand
normal to be yesterday or today.
424
00:22:51,871 --> 00:22:54,624
So I wrote my first
script about our relationships
425
00:22:54,665 --> 00:22:56,334
with AI in 2005.
426
00:22:56,375 --> 00:22:59,128
I started talking about
that in Hollywood and no one
427
00:22:59,170 --> 00:23:03,174
understood because this
was prior to Big Hero 6 or Her.
428
00:23:03,216 --> 00:23:06,177
There wasn't a story
about relationships with
429
00:23:06,219 --> 00:23:08,888
AI except for HAL, right?
430
00:23:08,930 --> 00:23:11,599
So we only
had a dystopian mindset.
431
00:23:12,141 --> 00:23:16,020
With Big Hero 6,
you saw a potentially protopian,
432
00:23:16,062 --> 00:23:20,441
relationship between
an AI and a human.
433
00:23:20,483 --> 00:23:23,194
And after that,
we began to see that these
434
00:23:23,236 --> 00:23:27,657
are mutually beneficial
systems if we design them to be.
435
00:23:28,157 --> 00:23:30,785
If we design them poorly,
they become toxic and
436
00:23:30,827 --> 00:23:32,620
they create
all sorts of negative
437
00:23:32,662 --> 00:23:34,580
ramifications in our society.
438
00:23:36,582 --> 00:23:38,876
I'm sure it has
military applications.
439
00:23:38,918 --> 00:23:41,212
I mean, I've been
retired for over a decade now.
440
00:23:41,254 --> 00:23:43,673
So certainly there's
been a lot of changes,
441
00:23:43,714 --> 00:23:45,716
but it would be naive
of me to think that it's
442
00:23:45,758 --> 00:23:47,885
not gonna have
its military applications.
443
00:23:47,927 --> 00:23:50,096
And certainly,
I think we need to be
444
00:23:50,138 --> 00:23:53,057
concerned about bad
actors using it to do
445
00:23:53,099 --> 00:23:55,518
things that certainly
aren't in our best interest.
446
00:23:57,311 --> 00:24:00,773
Well, I certainly have
all kinds of opinions about AI.
447
00:24:00,815 --> 00:24:05,528
I have some worries
about how it can be used,
448
00:24:05,570 --> 00:24:10,116
how it can be
abused by its creators.
449
00:24:10,158 --> 00:24:14,245
I don't think I really obsess or
lose sleep over those worries,
450
00:24:14,287 --> 00:24:19,208
but I feel very,
very aware that artificial
451
00:24:19,250 --> 00:24:23,629
intelligence can
be a powerful weapon.
452
00:24:24,964 --> 00:24:27,425
And it's really
all about information.
453
00:24:28,176 --> 00:24:32,722
I mean, as the great Arthur
C. Clarke said-- not just
454
00:24:32,763 --> 00:24:35,183
with creating HAL in 2001,
455
00:24:35,224 --> 00:24:38,895
but I think he said something
about when it comes to AI,
456
00:24:39,645 --> 00:24:45,026
it's all about the struggle
for freedom of information.
457
00:24:45,067 --> 00:24:46,777
And that's gonna
ultimately come down
458
00:24:46,819 --> 00:24:49,864
to technology, not politics.
459
00:24:51,866 --> 00:24:54,452
When it comes to Big Brother
and the ability to spy,
460
00:24:54,493 --> 00:24:57,955
I wouldn't be surprised
if it's already being used.
461
00:24:57,997 --> 00:25:02,043
The government is typically
25 years ahead of the curve in
462
00:25:02,084 --> 00:25:05,338
technology anyway before
the public is aware of it.
463
00:25:05,379 --> 00:25:08,466
So I wouldn't be surprised
if it's already in use.
464
00:25:09,008 --> 00:25:11,886
The reality though is
there's a lot of potential for
465
00:25:11,928 --> 00:25:15,473
it to spy, to monitor, and to
even do a lot of the jobs of the CIA
466
00:25:15,514 --> 00:25:21,229
and other agencies because of
its ability to adapt, to learn.
467
00:25:21,270 --> 00:25:26,776
And you think about in the past,
let's say we wanted to
468
00:25:27,526 --> 00:25:30,780
overthrow a government or
disrupt the political climate,
469
00:25:30,821 --> 00:25:34,617
we had to put somebody
there physically in harm's way,
470
00:25:34,659 --> 00:25:40,122
and AI could as easily get
into their electronic systems
471
00:25:40,164 --> 00:25:43,125
and manipulate the media
and manipulate things that are
472
00:25:43,167 --> 00:25:46,087
out there that would
cause just as much instability.
473
00:25:48,047 --> 00:25:54,470
I fear people using
AI in a bad way
474
00:25:54,512 --> 00:25:56,514
more than I fear AI.
475
00:25:56,555 --> 00:26:00,184
But that doesn't mean the fear
of AI isn't significant.
476
00:26:01,310 --> 00:26:05,189
I just feel humans will
do bad things with AI before
477
00:26:05,231 --> 00:26:08,234
AI gets to the point
where it can do bad things.
478
00:26:10,945 --> 00:26:13,781
Humans naturally
fear the unknown until
479
00:26:13,823 --> 00:26:15,700
they can
understand whether it is
480
00:26:15,741 --> 00:26:17,535
a threat in
their ecosystem or not.
481
00:26:17,576 --> 00:26:21,539
And if you combine that with
the YouTube-ification of media,
482
00:26:21,580 --> 00:26:25,501
if you're thinking about how
all of us are sort of playing to
483
00:26:25,543 --> 00:26:28,921
an algorithm of attention,
and fear gets attention.
484
00:26:28,963 --> 00:26:31,590
So there's a lot
more media about the fear
485
00:26:31,632 --> 00:26:34,427
side of this
conversation right now,
486
00:26:34,468 --> 00:26:38,514
in part because it sells,
it gets a lot more views, right?
487
00:26:38,556 --> 00:26:40,808
So clickbait, right?
488
00:26:40,850 --> 00:26:43,060
It's all over the map right now.
489
00:26:43,102 --> 00:26:47,231
If you're looking at what,
for example, GPT-4 does,
490
00:26:47,273 --> 00:26:51,027
many people will say it does
a form of hallucination,
491
00:26:51,068 --> 00:26:54,447
and those hallucinations
can become mass
492
00:26:54,488 --> 00:26:58,200
disinformation campaigns.
That is a real threat.
493
00:26:58,242 --> 00:27:01,537
But then there are all of these
other human hallucinations,
494
00:27:01,579 --> 00:27:04,874
the sort of things like the HAL
fears or the Matrix fears,
495
00:27:04,915 --> 00:27:07,877
we're just gonna be
farmed at some point, right?
496
00:27:07,918 --> 00:27:10,671
Those are all normal
because that's the only thing
497
00:27:10,713 --> 00:27:12,923
we've seen
about AI in our media.
498
00:27:12,965 --> 00:27:16,761
We got 50 years of
movies about how AI were bad.
499
00:27:16,802 --> 00:27:18,763
So, of course,
we think AI are bad.
500
00:27:18,804 --> 00:27:21,515
We didn't have
a good relationship with
501
00:27:21,557 --> 00:27:22,933
AI in our past, right?
502
00:27:22,975 --> 00:27:25,853
Our past is a little toxic.
503
00:27:25,895 --> 00:27:27,813
So we have to learn
something different and
504
00:27:27,855 --> 00:27:29,982
we have to find a better model.
505
00:27:30,024 --> 00:27:32,610
And in some ways,
we have to train humans
506
00:27:32,651 --> 00:27:34,445
to be the better model, right?
507
00:27:34,487 --> 00:27:37,281
To not give into
the fear just because it's
508
00:27:37,323 --> 00:27:39,408
easy and it gets clicks.
509
00:27:41,285 --> 00:27:45,122
I think it's gonna
cause a wealth gap in between
510
00:27:45,164 --> 00:27:48,042
the high-income
and the low-income.
511
00:27:48,084 --> 00:27:50,795
And instantly,
let me tell you why.
512
00:27:50,836 --> 00:27:56,133
I was at Denny's in Las Vegas
and I was served by a robot.
513
00:27:57,176 --> 00:27:59,595
It was cute.
I was like, "Wow."
514
00:27:59,637 --> 00:28:03,724
It was a first for me. Um,
and-- and it made me think.
515
00:28:03,766 --> 00:28:05,476
It was actually
the topic of my whole
516
00:28:05,518 --> 00:28:08,312
conversation that this
robot was serving me.
517
00:28:08,354 --> 00:28:10,689
I couldn't
take my eye off of it.
518
00:28:10,731 --> 00:28:13,109
And at the end of the day,
519
00:28:13,150 --> 00:28:16,654
I missed the interaction
with the waitress.
520
00:28:16,695 --> 00:28:18,280
That's how I tip.
521
00:28:18,322 --> 00:28:20,741
You send me a robot and
then ask me if I'm going to
522
00:28:20,783 --> 00:28:23,619
pay 3%, 5%, or 10%.
523
00:28:23,661 --> 00:28:27,790
You trying to save money, well,
I'm not going to tip nothing.
524
00:28:28,707 --> 00:28:30,459
And now it's an AI race war.
525
00:28:30,501 --> 00:28:33,546
So all of us true technical
people we're like, holy crap.
526
00:28:33,587 --> 00:28:37,007
The reason why
Microsoft fired 12,000 people,
527
00:28:37,049 --> 00:28:39,385
why did Google
fire 11,000 people?
528
00:28:39,427 --> 00:28:42,513
It's because they needed
to reinvest that money into AI.
529
00:28:42,555 --> 00:28:46,642
So now we're starting to see
greed happen in these companies
530
00:28:46,684 --> 00:28:50,354
instead of people that actually
are experts in this.
531
00:28:50,396 --> 00:28:52,898
And it's really sad
to see like the mental
532
00:28:52,940 --> 00:28:54,692
health issues because
of all these layoffs.
533
00:28:54,733 --> 00:28:56,193
And as a technical recruiter,
534
00:28:56,235 --> 00:28:59,238
I have over 200 techs,
and I would say 80%
535
00:28:59,280 --> 00:29:01,782
of that are people
that have lost their jobs.
536
00:29:03,409 --> 00:29:07,580
So an idea of teaching AI to be
a certain way and the decisions
537
00:29:07,621 --> 00:29:10,499
that we make day to day
influencing it, I think of
538
00:29:10,541 --> 00:29:13,377
the idea that it takes
a village to raise a child.
539
00:29:13,419 --> 00:29:16,964
When you think about
just a human and the environment
540
00:29:17,006 --> 00:29:19,550
that they're raised in,
the choices by the people around
541
00:29:19,592 --> 00:29:23,721
them influence heavily,
day to day, who they become.
542
00:29:24,263 --> 00:29:26,056
Of course,
there's our own sentience
543
00:29:26,098 --> 00:29:29,393
and ability to change
those decisions, but I think
544
00:29:29,435 --> 00:29:32,938
that's important to keep
in mind as we continue forward.
545
00:29:32,980 --> 00:29:36,150
That's why we have
parameters for ChatGPT.
546
00:29:36,192 --> 00:29:38,194
I'm not sure
how that pertains to stuff
547
00:29:38,235 --> 00:29:40,488
like Siri or Google,
stuff like that,
548
00:29:40,529 --> 00:29:44,074
but I think it's important
to just keep that in mind.
549
00:29:46,827 --> 00:29:49,497
Many folks
fear that at some point,
550
00:29:49,538 --> 00:29:55,085
many of these AI will
become something more and
551
00:29:55,127 --> 00:29:57,588
go what we would
consider off the rails, right?
552
00:29:57,630 --> 00:29:59,590
No longer be controllable.
553
00:29:59,632 --> 00:30:02,092
There are some examples
of that already happening,
554
00:30:02,134 --> 00:30:06,055
but not in a way
that they couldn't be shut down.
555
00:30:06,096 --> 00:30:09,433
There might be a time
soon where an AI can then
556
00:30:09,475 --> 00:30:12,770
run off into the wild
and not be shut down, right?
557
00:30:12,811 --> 00:30:17,149
I believe someone at OpenAI,
it's their job to make sure
558
00:30:17,191 --> 00:30:20,110
that someone can pull
a plug if they need to.
559
00:30:20,152 --> 00:30:23,239
And this is a whole new
type of job that we didn't
560
00:30:23,280 --> 00:30:26,075
really have in
technology up to this point,
561
00:30:26,116 --> 00:30:28,452
a kill switch
for our technology.
562
00:30:28,494 --> 00:30:31,163
We're not tracking
where these tools are
563
00:30:31,205 --> 00:30:32,706
going off the rails right now.
564
00:30:32,748 --> 00:30:34,792
We don't have a Yelp
that tells me which AI
565
00:30:34,833 --> 00:30:36,794
are good and which AI are not.
566
00:30:36,835 --> 00:30:40,756
So until humans learn
to communicate effectively
567
00:30:40,798 --> 00:30:44,426
amongst ourselves even,
we're not gonna have a really
568
00:30:44,468 --> 00:30:47,096
great handle on when
they're going off the rails.
569
00:30:47,137 --> 00:30:49,181
And we're definitely not
gonna be communicating that to
570
00:30:49,223 --> 00:30:51,725
each other because
governments get in the way,
571
00:30:51,767 --> 00:30:56,146
nationalism gets in the way,
and certain countries don't want
572
00:30:56,188 --> 00:30:58,899
other countries knowing
the state of their technology.
573
00:31:00,359 --> 00:31:03,654
Artificial intelligence
has become the arms race
574
00:31:03,696 --> 00:31:05,447
of the 21st century.
575
00:31:05,489 --> 00:31:09,618
In 2023, we saw the US
and China on the brink of war,
576
00:31:09,660 --> 00:31:13,539
not for Taiwan,
but for Taiwan's semiconductors.
577
00:31:14,790 --> 00:31:17,835
It is no longer a question of
whether AI will shape the future
578
00:31:17,876 --> 00:31:21,171
of human civilization,
but a question of who will
579
00:31:21,213 --> 00:31:23,757
control the technology
that controls the future.
580
00:31:24,842 --> 00:31:28,429
I do believe that there
are concerns that need to be
581
00:31:28,470 --> 00:31:32,057
monitored and addressed because
what are we using the AI for?
582
00:31:32,641 --> 00:31:34,935
We all know
that if the government uses
583
00:31:34,977 --> 00:31:36,687
any technology
for their purposes,
584
00:31:36,729 --> 00:31:39,732
we typically regret it once
we find out they're doing that.
585
00:31:39,773 --> 00:31:43,193
And are we using this
for benevolent or malevolent
586
00:31:43,235 --> 00:31:47,364
means is very important
because AI does have the ability
587
00:31:47,406 --> 00:31:49,283
to destroy
humanity if we let it.
588
00:31:50,743 --> 00:31:54,038
We should have been worrying
about Big Brother for decades.
589
00:31:55,748 --> 00:31:57,374
I get this often.
590
00:31:57,416 --> 00:31:59,043
People are like, "Oh,
you work for the government,
591
00:31:59,084 --> 00:32:00,753
you must trust the government."
592
00:32:00,794 --> 00:32:02,671
No, I don't trust
the government at all.
593
00:32:02,713 --> 00:32:05,549
My trust for them
diminishes on literally
594
00:32:05,591 --> 00:32:07,551
an hourly basis and no.
595
00:32:07,593 --> 00:32:12,097
So privacy's been
out the window before AI.
596
00:32:12,139 --> 00:32:15,517
Your phone, you know,
we're so tied to these
597
00:32:15,559 --> 00:32:18,854
devices that we're almost
forced to give up our privacy
598
00:32:18,896 --> 00:32:23,067
to be able to function
conveniently in daily life.
599
00:32:23,108 --> 00:32:24,818
So yeah, Big Brother's huge.
600
00:32:25,361 --> 00:32:28,238
AI right now
has a lot of potential to
601
00:32:28,280 --> 00:32:30,658
impact people through
personalized assistance.
602
00:32:30,699 --> 00:32:32,576
This could be someone
who's fully able-bodied.
603
00:32:32,618 --> 00:32:35,037
This could also be
someone who has special
604
00:32:35,079 --> 00:32:36,997
needs and requires
additional assistance
605
00:32:37,039 --> 00:32:38,999
for activities
of their daily life.
606
00:32:39,041 --> 00:32:41,377
There are some
challenges or potential
607
00:32:41,418 --> 00:32:43,712
perils associated with
these technologies, though.
608
00:32:43,754 --> 00:32:48,092
The same technology that might
be used to guide a child with
609
00:32:48,133 --> 00:32:52,346
autism through a social skills
intervention could also be used
610
00:32:52,388 --> 00:32:54,973
to influence someone
to make certain purchasing
611
00:32:55,015 --> 00:32:57,476
decisions that might
not be in their best interest.
612
00:32:57,518 --> 00:32:59,687
So if someone
says they're hungry,
613
00:32:59,728 --> 00:33:02,731
your AI might just as
easily guide you down a path of
614
00:33:02,773 --> 00:33:06,777
a particular brand of pizza,
simply because that was what was
615
00:33:06,819 --> 00:33:10,489
in the best interest
of the AI at that time.
616
00:33:13,409 --> 00:33:16,120
We have less
and less human contact.
617
00:33:16,161 --> 00:33:18,372
You go to the doctor,
you don't check in,
618
00:33:18,414 --> 00:33:21,500
you go to a kiosk or
you check in on your phone.
619
00:33:21,542 --> 00:33:23,669
You don't even
have to leave your house.
620
00:33:23,711 --> 00:33:25,963
Someone will
bring McDonald's to you.
621
00:33:26,004 --> 00:33:27,673
You can do your shopping online.
622
00:33:27,715 --> 00:33:29,800
There's less
and less human contact.
623
00:33:29,842 --> 00:33:32,553
So I think it would
be very unfortunate.
624
00:33:32,594 --> 00:33:35,180
And it goes even further,
you know, we got a robot
625
00:33:35,222 --> 00:33:36,724
that'll vacuum your house.
626
00:33:36,765 --> 00:33:38,642
So I worry about that.
627
00:33:41,437 --> 00:33:44,398
I think we're starting
to see this mass fall off of all
628
00:33:44,440 --> 00:33:47,234
these companies like
the Googles, the Microsofts,
629
00:33:47,276 --> 00:33:50,404
you know, people that were
there for up to 20 years,
630
00:33:50,446 --> 00:33:53,198
and then they just got let go
because these people were making
631
00:33:53,240 --> 00:33:55,325
500,000 to a million
dollars and were like,
632
00:33:55,367 --> 00:33:58,245
they basically said,
"Thank you for your service.
633
00:33:58,287 --> 00:34:00,038
Basically, screw you.
634
00:34:00,080 --> 00:34:01,582
We're letting you go.
635
00:34:01,623 --> 00:34:03,459
Here's a very small
severance package for
636
00:34:03,500 --> 00:34:05,335
the time that you were here.
637
00:34:05,377 --> 00:34:08,005
But we care more about
profits and we care more about
638
00:34:08,046 --> 00:34:10,257
making sure that
we scale this technology
639
00:34:10,299 --> 00:34:12,050
as fast as we can."
640
00:34:14,553 --> 00:34:18,682
I think there
is the risk that we're already
641
00:34:18,724 --> 00:34:22,686
beginning to face in
human development of how
642
00:34:22,728 --> 00:34:26,690
relying more and more
on artificial intelligence.
643
00:34:26,732 --> 00:34:29,693
What does it take
away from our own humanity,
644
00:34:29,735 --> 00:34:34,698
our own very human,
very biological need for
645
00:34:34,740 --> 00:34:37,367
social interaction
with our own kind?
646
00:34:37,743 --> 00:34:41,079
I think that we need
to always remember
647
00:34:41,121 --> 00:34:44,208
that artificial
intelligence is a tool.
648
00:34:45,751 --> 00:34:50,214
And once we go forward
with developing artificial
649
00:34:50,255 --> 00:34:57,221
intelligence as having its own
ability to pass the mirror test,
650
00:34:57,262 --> 00:35:03,852
so to speak,
and recognize its own being,
651
00:35:03,894 --> 00:35:05,854
where do we go from there?
652
00:35:08,774 --> 00:35:11,360
So what does
self-consciousness mean?
653
00:35:11,401 --> 00:35:12,861
It's a very
complicated question.
654
00:35:12,903 --> 00:35:15,072
The whole Terminator
series is based on this
655
00:35:15,113 --> 00:35:18,992
idea of being self-aware,
but really what does that mean?
656
00:35:19,034 --> 00:35:22,037
And I don't know
that there's one clear answer.
657
00:35:22,079 --> 00:35:25,916
Intelligence by definition
is the ability for an organism
658
00:35:25,958 --> 00:35:28,418
to adapt and
survive in its environment.
659
00:35:28,460 --> 00:35:31,630
And this would
then expand it to how is
660
00:35:31,672 --> 00:35:34,925
it adapting and surviving
in the environment that it's in?
661
00:35:35,384 --> 00:35:38,262
And what does
it really mean to be self-aware?
662
00:35:38,303 --> 00:35:40,597
How do we know we're self-aware?
663
00:35:40,639 --> 00:35:43,600
My perception of reality
is different than someone else's
664
00:35:43,642 --> 00:35:46,979
perception of reality
even if we're in the same room.
665
00:35:48,146 --> 00:35:52,484
There have been a number
of different definitions of what
666
00:35:52,526 --> 00:35:56,780
it means to be fully intelligent
or sentient for an AI.
667
00:35:56,822 --> 00:35:59,616
The Turing test is not
really looking at all of
668
00:35:59,658 --> 00:36:01,702
the dynamics of intelligence.
669
00:36:01,743 --> 00:36:04,496
And so some of my
colleagues recently have formed
670
00:36:04,538 --> 00:36:07,749
a sort of eightfold path
of intelligence where we can
671
00:36:07,791 --> 00:36:11,211
begin to assess,
for example, GPT-4.
672
00:36:11,253 --> 00:36:15,048
Is it an artificial narrow
intelligence of some sort?
673
00:36:15,090 --> 00:36:18,135
Yes. Can it write well?
Yes.
674
00:36:18,176 --> 00:36:20,262
Does it understand context?
675
00:36:20,304 --> 00:36:21,638
Hmm, maybe not.
676
00:36:21,680 --> 00:36:23,682
Does it have
a body or an embodiment?
677
00:36:23,724 --> 00:36:24,975
No.
678
00:36:25,017 --> 00:36:28,103
So it will be
intelligent in some form,
679
00:36:28,145 --> 00:36:29,938
not intelligent in others.
680
00:36:29,980 --> 00:36:31,648
What we're
realizing is that our idea
681
00:36:31,690 --> 00:36:34,484
of intelligence
was limited, right?
682
00:36:34,526 --> 00:36:37,279
We didn't understand
consciousness. We still don't.
683
00:36:37,321 --> 00:36:40,532
So we have no idea
how to train an AI to be
684
00:36:40,574 --> 00:36:42,367
loving and compassionate.
685
00:36:42,409 --> 00:36:44,661
We don't understand
that in ourselves yet.
686
00:36:44,703 --> 00:36:47,497
So there are aspects
of humanness that we haven't
687
00:36:47,539 --> 00:36:49,917
figured out how to
program effectively,
688
00:36:49,958 --> 00:36:53,629
and that's where some people
in the industry are quite afraid.
689
00:36:53,670 --> 00:36:57,090
We haven't figured out
how to make AI that cares enough
690
00:36:57,132 --> 00:37:01,178
about humans that they might
not wanna eradicate us, right?
691
00:37:01,219 --> 00:37:03,263
That's an AI safety concern.
692
00:37:03,305 --> 00:37:06,808
And many leaders in the industry
have said, "We need to stop.
693
00:37:06,850 --> 00:37:10,646
We need to push pause because
we're afraid of what's going to
694
00:37:10,687 --> 00:37:15,567
happen if we outpace ourselves
in terms of understanding
695
00:37:15,609 --> 00:37:19,655
humanness, consciousness
and care for each other."
696
00:37:25,577 --> 00:37:28,038
Now, those companies
are asking the big tech
697
00:37:28,080 --> 00:37:30,040
companies like, "Hey,
you guys gotta put a stop."
698
00:37:30,082 --> 00:37:32,542
But the problem is
that there's no ethics
699
00:37:32,584 --> 00:37:34,169
involved.
700
00:37:34,211 --> 00:37:36,672
And when you look at Microsoft,
it was 12,000 people
701
00:37:36,713 --> 00:37:39,675
that they let go in
February and they let go
702
00:37:39,716 --> 00:37:43,261
of 150 people who were
all on the AI ethics team.
703
00:37:43,303 --> 00:37:46,473
And the whole point of
them is to literally review
704
00:37:46,515 --> 00:37:49,309
what's going on and
set rules and standards
705
00:37:49,351 --> 00:37:51,853
for what is going
on in this AI world.
706
00:37:51,895 --> 00:37:54,982
And instead what Microsoft did,
and I have to say,
707
00:37:55,023 --> 00:37:57,526
out of all the tech companies,
the big giant ones,
708
00:37:57,567 --> 00:38:00,487
I have always thought
that Microsoft is a very
709
00:38:00,529 --> 00:38:04,074
safe and stable company
and one that wants the best for people.
710
00:38:04,116 --> 00:38:06,660
But now that we're
even seeing that they've cut
711
00:38:06,702 --> 00:38:10,622
their entire ethics team,
that's what really scares me.
712
00:38:12,457 --> 00:38:15,711
So when I was attending
Singularity University,
713
00:38:15,752 --> 00:38:20,173
our graduate program would
debate when sentience would
714
00:38:20,215 --> 00:38:23,093
happen, not just from
the Turing Test, but also
715
00:38:23,135 --> 00:38:27,806
broadly, when would an AI be
recognized as a being in court?
716
00:38:27,848 --> 00:38:29,141
For example.
717
00:38:29,182 --> 00:38:32,102
We are already seeing
AI recognized in court.
718
00:38:32,144 --> 00:38:35,397
We are also seeing
AI running companies.
719
00:38:35,981 --> 00:38:38,567
Companies have
personhood in America.
720
00:38:38,608 --> 00:38:41,653
So in some ways
an AI can already have
721
00:38:41,695 --> 00:38:44,990
personhood that is
different from sentience.
722
00:38:45,032 --> 00:38:47,451
And if you're thinking
about all of the things
723
00:38:47,492 --> 00:38:50,078
that make up
a human, will an AI do it?
724
00:38:50,120 --> 00:38:51,246
No.
725
00:38:51,288 --> 00:38:52,914
An AI will be different.
726
00:38:52,956 --> 00:38:56,585
An AGI, an Artificial General
Intelligence, is going to
727
00:38:56,626 --> 00:38:58,336
look different than a human.
728
00:38:58,378 --> 00:39:01,256
It's gonna have different
sensors, different capacities,
729
00:39:01,298 --> 00:39:04,968
probably won't have skin
the way we have skin, right?
730
00:39:05,010 --> 00:39:07,971
But does that change
the nature of its intelligence?
731
00:39:08,013 --> 00:39:09,639
Yes.
732
00:39:09,681 --> 00:39:12,893
Is it sentient the way
nature is sentient? Probably.
733
00:39:12,934 --> 00:39:16,438
It's probably as sentient as
a mushroom already in some way.
734
00:39:19,775 --> 00:39:21,902
If I were to bet,
I would bet against
735
00:39:21,943 --> 00:39:25,614
artificial intelligence ever
becoming self-aware like humans.
736
00:39:25,655 --> 00:39:29,034
And I would suggest
odds of 80% against.
737
00:39:31,036 --> 00:39:35,040
I do feel AI
has the potential to
738
00:39:35,082 --> 00:39:39,503
become self-conscious
at some point to be that Skynet.
739
00:39:40,212 --> 00:39:43,131
I'm a history
major and I watch movies,
740
00:39:43,173 --> 00:39:45,759
and I've been watching
movies since I was a little kid.
741
00:39:45,801 --> 00:39:49,805
And everything
that was science fiction,
742
00:39:49,846 --> 00:39:51,515
I don't wanna say everything.
743
00:39:51,556 --> 00:39:53,600
A lot of things
that were pure science fiction
744
00:39:53,642 --> 00:39:55,268
are now science fact.
745
00:39:55,310 --> 00:39:57,521
Well, a lot of that Buck Rogers
stuff has come true.
746
00:39:57,562 --> 00:39:59,981
My father-in-law was
a communications guy in
747
00:40:00,023 --> 00:40:02,734
the Navy and they were
doing facsimile stuff in
748
00:40:02,776 --> 00:40:07,155
the '70s and late '60s,
but it was just only a select
749
00:40:07,197 --> 00:40:09,491
few who could
have access to that.
750
00:40:09,533 --> 00:40:11,576
Now everybody
could have access to it.
751
00:40:12,119 --> 00:40:14,329
Yeah, I think
the real danger with
752
00:40:14,371 --> 00:40:18,291
AI isn't consciousness
leading to Terminator robots.
753
00:40:18,333 --> 00:40:23,088
First, because a lot of
connected networks have to
754
00:40:23,130 --> 00:40:26,508
occur. Like, this AI would
have to find itself getting into
755
00:40:26,550 --> 00:40:30,137
the likes of Boston Dynamics robots
or something of that nature.
756
00:40:30,178 --> 00:40:33,056
And then it would have to find
itself getting into some sort of
757
00:40:33,098 --> 00:40:37,352
munitions manufacturing
and have master control
758
00:40:37,394 --> 00:40:40,397
of all those scenarios and
be able to essentially terraform
759
00:40:40,438 --> 00:40:44,568
itself into the real world
and not just the world of bits.
760
00:40:45,986 --> 00:40:49,239
In at least the short and
the medium term, it's certainly
761
00:40:49,281 --> 00:40:54,244
going to be humans
using AI against other humans.
762
00:40:54,286 --> 00:40:57,164
In a lot of ways, this is
just an augmentation technology.
763
00:40:57,205 --> 00:41:02,169
So, people that are
in a frame of acquiring
764
00:41:02,210 --> 00:41:07,174
resources or prestige
or power under seemingly
765
00:41:07,215 --> 00:41:10,594
any circumstances,
they're just gonna use this
766
00:41:10,635 --> 00:41:13,138
as a way to
get to that goal faster.
767
00:41:16,808 --> 00:41:19,352
We've all seen Terminator
and Skynet and all that.
768
00:41:19,394 --> 00:41:23,148
But no, it's frightening
to think that something that is
769
00:41:23,190 --> 00:41:25,442
smarter than you with
access to the internet,
770
00:41:25,483 --> 00:41:28,945
and that could mobilize
into a physical form with help,
771
00:41:28,987 --> 00:41:32,699
of course, but it's still possible
for it to connect to a robot body.
772
00:41:32,741 --> 00:41:36,286
I mean, you see
the dogs with no heads from MIT.
773
00:41:36,328 --> 00:41:39,372
I mean, I-- I don't know
why anybody would want that,
774
00:41:39,414 --> 00:41:42,125
but, you know,
I can see the practical use for
775
00:41:42,167 --> 00:41:45,420
it to help, you know, maybe
as a maid or to build stuff and
776
00:41:45,462 --> 00:41:48,965
for things too dangerous for humans,
like space travel and all that.
777
00:41:49,007 --> 00:41:51,760
But as far as
we are all human beings,
778
00:41:51,801 --> 00:41:54,137
you know,
even if you are a racist person,
779
00:41:54,179 --> 00:41:56,097
you still don't
want society to end.
780
00:41:56,139 --> 00:41:58,892
You don't want it to just be
one guy in the bubble at the top
781
00:41:58,934 --> 00:42:01,645
of a tower and lots of
worker bots, I would hope.
782
00:42:01,686 --> 00:42:03,230
You know,
most sane people
783
00:42:03,271 --> 00:42:05,565
wouldn't want
creation to end that way.
784
00:42:08,443 --> 00:42:13,823
So when it comes to AI becoming
sentient, it's quite possible.
785
00:42:13,865 --> 00:42:17,535
And I'm actually hopeful
that it will not fall into
786
00:42:17,577 --> 00:42:22,165
the pitfalls of humanity and
human emotion where rather than
787
00:42:22,207 --> 00:42:26,002
just initially being
fearful and destructive,
788
00:42:26,044 --> 00:42:29,256
it may have more
awareness than we do as people.
789
00:42:29,297 --> 00:42:32,342
That there's
a better way to do things.
790
00:42:32,384 --> 00:42:36,179
And that's where
that relationship between AI and
791
00:42:36,221 --> 00:42:39,432
humans could be very
beneficial because at some point
792
00:42:39,474 --> 00:42:43,520
it is likely to teach us
the things that we need to do so
793
00:42:43,561 --> 00:42:47,524
that we can actually
survive into the future longer
794
00:42:47,565 --> 00:42:52,612
than we probably will based
on the way humanity is now.
795
00:42:54,990 --> 00:42:59,286
We've seen a massive
convergence of global data,
796
00:42:59,327 --> 00:43:04,207
for example, about climate
and how climate affects humans.
797
00:43:04,249 --> 00:43:07,794
We've seen
how we need to respond to
798
00:43:07,836 --> 00:43:10,922
rapid disasters,
but we haven't necessarily
799
00:43:10,964 --> 00:43:14,718
applied AI effectively
to our biggest problems.
800
00:43:14,759 --> 00:43:18,513
Climate, migration,
displacement, poverty.
801
00:43:18,930 --> 00:43:24,227
We often think of AI
as making inequalities worse
802
00:43:24,269 --> 00:43:28,273
instead of improving the quality
of life for everyone.
803
00:43:29,190 --> 00:43:32,152
Will AI improve the quality
of life for everyone?
804
00:43:32,193 --> 00:43:33,570
It could.
805
00:43:33,611 --> 00:43:35,947
But we have to choose
to program it that way and
806
00:43:35,989 --> 00:43:38,116
implement it that way.
807
00:43:38,158 --> 00:43:42,287
So that means putting sensors
on the ground in our farms
808
00:43:42,329 --> 00:43:44,539
so that if our farmers
are having a drought year,
809
00:43:44,581 --> 00:43:48,043
we can support them before
they lose their crop, right?
810
00:43:48,084 --> 00:43:51,629
Or in a climate disaster,
having logistics ready
811
00:43:51,671 --> 00:43:53,798
to move faster
so that people don't
812
00:43:53,840 --> 00:43:56,301
starve after
a typhoon or a hurricane.
813
00:43:57,135 --> 00:43:59,512
That is not fully
being implemented yet.
814
00:43:59,554 --> 00:44:00,972
It's just starting.
815
00:44:01,014 --> 00:44:03,141
And I think we're
starting to also see
816
00:44:03,183 --> 00:44:06,102
this rapid collaboration
around the data.
817
00:44:06,144 --> 00:44:09,105
And that's certainly in
climate action, but it's also in
818
00:44:09,147 --> 00:44:12,734
a handful of other places,
logistics and shipping.
819
00:44:12,776 --> 00:44:15,695
Places where
global collaboration
820
00:44:15,737 --> 00:44:17,822
is a life or death situation.
821
00:44:21,368 --> 00:44:25,455
With AI, technology is
changing rapidly and it's going
822
00:44:25,497 --> 00:44:29,376
to be up to all of us to try
and keep up with those changes.
823
00:44:29,417 --> 00:44:31,669
This has been true
through all of humanity.
824
00:44:31,711 --> 00:44:35,423
Some new technology comes
along and people have to adapt.
825
00:44:35,465 --> 00:44:38,343
I think it's important
for technology developers
826
00:44:38,385 --> 00:44:40,303
to work with
the people who will be
827
00:44:40,345 --> 00:44:43,098
using that technology and
impacted by that technology so
828
00:44:43,139 --> 00:44:46,142
that it doesn't
leave them behind.
829
00:44:46,184 --> 00:44:50,146
This is supposed to be a tool,
it's supposed to be a paintbrush
830
00:44:50,188 --> 00:44:53,024
to put in the hand of
the practitioners to enable them
831
00:44:53,066 --> 00:44:55,026
to do things they've
never been able to do before.
832
00:44:56,736 --> 00:45:00,156
So I help lead a group called
Open Metaverse Interoperability.
833
00:45:00,198 --> 00:45:02,367
And I've worked on
Metaverse technologies
834
00:45:02,409 --> 00:45:04,035
for the last 20 years.
835
00:45:04,077 --> 00:45:07,122
I see the Metaverse as
the future of the 3D web in
836
00:45:07,163 --> 00:45:10,583
that connected space that we all
share, and smart glasses and
837
00:45:10,625 --> 00:45:14,838
smart phones and VR headsets
and lots of display tech like
838
00:45:14,879 --> 00:45:19,384
holographic displays are all
a part of that future 3D web.
839
00:45:20,051 --> 00:45:22,846
I already see kids
creating in that space, right?
840
00:45:22,887 --> 00:45:25,473
Roblox is
an early version of that.
841
00:45:25,515 --> 00:45:29,853
So we know that the next
generation of kids are already
842
00:45:29,894 --> 00:45:32,564
designing in 3D,
building 3D worlds.
843
00:45:32,605 --> 00:45:35,817
They are not
limited to the 2D interface.
844
00:45:35,859 --> 00:45:38,153
I think those of
us who are older have
845
00:45:38,194 --> 00:45:40,363
a harder time
leaping from 2D to 3D.
846
00:45:40,405 --> 00:45:43,032
Those who are
young have no problem just
847
00:45:43,074 --> 00:45:45,160
jumping straight
into the 3D world.
848
00:45:45,201 --> 00:45:47,287
Will the glasses
support them being
849
00:45:47,328 --> 00:45:49,622
there while they're
pumping gas or doing anything?
850
00:45:49,664 --> 00:45:51,416
Yes, it's gonna get there.
851
00:45:51,458 --> 00:45:53,293
I'm already sharing
Metaverse worlds with
852
00:45:53,334 --> 00:45:56,546
my friends on my phone
around the table at the bar.
853
00:45:56,588 --> 00:45:58,131
So it's already there.
854
00:45:58,173 --> 00:45:59,757
The tools are there.
855
00:45:59,799 --> 00:46:01,301
They are not evenly distributed,
856
00:46:01,342 --> 00:46:03,761
but they're all
open-source and easy to grab.
857
00:46:05,013 --> 00:46:07,223
My company, Semio,
is building an operating
858
00:46:07,265 --> 00:46:09,100
system for personal robots.
859
00:46:09,142 --> 00:46:10,768
And the way you can
think about that is, uh,
860
00:46:10,810 --> 00:46:13,688
in the way that people
use a personal computer through
861
00:46:13,730 --> 00:46:16,858
a graphical user interface,
a keyboard and mouse, uh,
862
00:46:16,900 --> 00:46:19,194
the way that people
use a smartphone through
863
00:46:19,235 --> 00:46:23,239
a touch screen, they'll
be using natural language as
864
00:46:23,281 --> 00:46:27,035
the core input output
user interface for robots.
865
00:46:27,076 --> 00:46:29,496
Uh, but it's gonna be
more than just understanding
866
00:46:29,537 --> 00:46:31,080
what people are saying.
867
00:46:31,122 --> 00:46:32,665
It's gonna be how
they're saying it through
868
00:46:32,707 --> 00:46:34,584
the tone of voice,
through their body language.
869
00:46:34,626 --> 00:46:37,128
The robot has to
understand that from the user,
870
00:46:37,170 --> 00:46:39,172
make decisions
about that information.
871
00:46:39,214 --> 00:46:41,549
And it needs to be able to
respond with its own speech or
872
00:46:41,591 --> 00:46:44,427
other physical actions,
be it body language or
873
00:46:44,469 --> 00:46:47,055
manipulating objects
in the environment.
874
00:46:47,847 --> 00:46:49,766
We begin to
have a real relationship
875
00:46:49,807 --> 00:46:53,228
with the roles we love and
also with the people we love.
876
00:46:53,269 --> 00:46:57,565
We can create a model of
someone who passed away, right?
877
00:46:57,607 --> 00:47:00,985
Have a relationship
that might feel ephemeral,
878
00:47:01,027 --> 00:47:03,404
but feels very
real to us too, right?
879
00:47:03,446 --> 00:47:05,532
This is something
that I do in my own art.
880
00:47:05,573 --> 00:47:07,867
Where I help people see,
for example,
881
00:47:07,909 --> 00:47:11,079
I lost a child,
so I created what my child
882
00:47:11,120 --> 00:47:14,040
would've looked like
now as a 25-year-old, right?
883
00:47:14,082 --> 00:47:16,918
I've had many girls come to me,
many women saying,
884
00:47:16,960 --> 00:47:18,670
"Could I see my child too?"
885
00:47:18,711 --> 00:47:21,839
Or "I would've liked
to have seen where my son
886
00:47:21,881 --> 00:47:23,132
will be when he grows up.
887
00:47:23,174 --> 00:47:24,676
I know I'm gonna pass away soon,
888
00:47:24,717 --> 00:47:26,261
but I would love to
see my grandchildren."
889
00:47:26,678 --> 00:47:31,266
So using these as a tool
for vision is an extraordinary
890
00:47:31,307 --> 00:47:33,351
way to heal
something in the heart.
891
00:47:33,393 --> 00:47:35,520
It can improve
our relationships.
892
00:47:36,854 --> 00:47:39,023
It's important
right now, I think,
893
00:47:39,065 --> 00:47:42,151
to really learn as
much as you can about AI.
894
00:47:42,193 --> 00:47:44,654
I don't think
that we should be scared of it.
895
00:47:44,696 --> 00:47:48,575
We know what's happening,
but it is gonna get in the hands
896
00:47:48,616 --> 00:47:50,034
of some bad people.
897
00:47:50,076 --> 00:47:51,452
Like you look at like drugs,
898
00:47:51,494 --> 00:47:53,413
you look at
like cocaine, ecstasy.
899
00:47:53,454 --> 00:47:56,457
And when you look at the actual,
like, what they did with it from
900
00:47:56,499 --> 00:47:59,377
the beginning, it was
for a medical purpose and then
901
00:47:59,419 --> 00:48:02,589
it got into the wrong hands and
then they do whatever with it.
902
00:48:02,630 --> 00:48:04,340
So it's kind of like that.
903
00:48:04,382 --> 00:48:06,175
Like, it's gonna get into
the right hands where people
904
00:48:06,217 --> 00:48:08,636
are gonna be able to
evolve better in their business
905
00:48:08,678 --> 00:48:10,930
and in the world,
but then it's gonna
906
00:48:10,972 --> 00:48:12,515
get into some
of the wrong hands.
907
00:48:12,557 --> 00:48:14,517
But I don't think
that we should be scared.
908
00:48:14,559 --> 00:48:17,228
It's just we all need
to get more educated and
909
00:48:17,270 --> 00:48:19,022
we need to evolve with it.
910
00:48:19,063 --> 00:48:21,024
And we need
to understand that things
911
00:48:21,065 --> 00:48:22,734
are going to
change dramatically.
912
00:48:22,775 --> 00:48:25,612
And the people
that understand it more are
913
00:48:25,653 --> 00:48:28,448
the ones that are gonna be
able to move on with their life
914
00:48:28,489 --> 00:48:30,825
easier than the ones
that waited until the last
915
00:48:30,867 --> 00:48:33,661
minute to learn about
it and then had their jobs taken.
916
00:48:34,412 --> 00:48:39,042
It is our own limits that keep
us from moving forward.
917
00:48:39,083 --> 00:48:41,961
And sometimes limits are needed,
918
00:48:42,003 --> 00:48:45,048
sometimes those
parameters are necessary.
919
00:48:45,923 --> 00:48:50,386
But I think what we're
more scared of is not the AI,
920
00:48:50,428 --> 00:48:54,223
but how we as
humans will use AI.
921
00:48:54,265 --> 00:48:57,894
'Cause we know that not
everyone has a good heart.
922
00:48:57,935 --> 00:49:00,146
Not everyone
has good intentions.
923
00:49:00,188 --> 00:49:02,732
So we're not afraid
of the AI's improvement.
924
00:49:02,774 --> 00:49:04,233
We're afraid of us.
925
00:49:06,819 --> 00:49:11,949
The future of AI is everything
that we choose to put into it.
926
00:49:11,991 --> 00:49:16,537
So if we want a better future,
it is up to us to choose
927
00:49:16,579 --> 00:49:19,457
and curate,
and bring that future to life.
928
00:49:20,166 --> 00:49:24,170
Well, unlike a disaster movie,
I believe the outlook is good.
929
00:49:24,212 --> 00:49:27,340
I believe that humanity
is gonna roll with the punches,
930
00:49:27,382 --> 00:49:29,217
people are gonna
get with the program.
931
00:49:29,258 --> 00:49:32,387
And humane
sensibilities are going to
932
00:49:32,428 --> 00:49:35,431
prevail over
chaos and destruction.
933
00:49:35,473 --> 00:49:37,767
You see in all the movies,
oh, we're all gonna die.
934
00:49:37,809 --> 00:49:40,770
Every year,
Nostradamus said this, you know,
935
00:49:40,812 --> 00:49:43,481
somebody said that,
but it never happens.
936
00:49:43,523 --> 00:49:45,191
And I pray that it never does.
937
00:49:45,233 --> 00:49:48,236
And I'm saying if
we all just stay calm,
938
00:49:48,277 --> 00:49:49,987
save ourselves,
do the best we can
939
00:49:50,029 --> 00:49:53,491
individually overall,
we'll be alright in the end.
940
00:49:53,533 --> 00:49:56,536
And I think we'll have
a good future with AI and
941
00:49:56,577 --> 00:50:00,289
robotics side by side
with humanity if we don't
942
00:50:00,331 --> 00:50:02,166
panic and try
to cancel the program.
943
00:50:02,208 --> 00:50:04,627
It's not Skynet and
your computer can't kill
944
00:50:04,669 --> 00:50:07,088
you unless you turn
it on and drop it on your head.
945
00:50:07,130 --> 00:50:10,425
So it's like, you know,
just stay calm, poised,
946
00:50:10,466 --> 00:50:13,678
relax, like try to do
the best you can with your life,
947
00:50:13,720 --> 00:50:15,513
your intelligence,
your finances.
948
00:50:15,555 --> 00:50:17,765
Make a plan for
yourself what works for you.
949
00:50:23,229 --> 00:50:26,399
Don't rely on any
technology too much.
950
00:50:26,441 --> 00:50:29,777
I was in the pre-microwave
and dishwasher group,
951
00:50:29,819 --> 00:50:31,446
you know what I mean?
952
00:50:31,487 --> 00:50:32,989
So we didn't have microwaves
when I was growing up.
953
00:50:33,030 --> 00:50:34,782
We had to do
all that stuff manually.
954
00:50:34,824 --> 00:50:37,785
And it was weird because at
home our microwave went out and
955
00:50:37,827 --> 00:50:40,413
it was like a week before I got
a new one and got it installed.
956
00:50:40,455 --> 00:50:42,498
And I said to my wife,
"How did we survive
957
00:50:42,540 --> 00:50:44,292
back in our childhood?"
958
00:50:44,333 --> 00:50:46,127
I didn't
realize how dependent we were
959
00:50:46,169 --> 00:50:47,754
on that one device.
960
00:50:47,795 --> 00:50:51,174
Always take time to
talk to people face to face,
961
00:50:51,215 --> 00:50:53,009
touch each other,
hug each other.
962
00:50:53,050 --> 00:50:54,677
You need that interaction.
963
00:50:54,719 --> 00:50:57,847
Especially if you have
small children, they need that.
964
00:50:57,889 --> 00:50:59,515
So that would be my advice.
965
00:50:59,557 --> 00:51:03,352
Don't let the humanity
end up inside technology.