1
00:00:00.000 --> 00:00:00.899
In this video,
2
00:00:00.899 --> 00:00:05.478
we're going to talk about your role in
using generative AI as a thought partner.
3
00:00:05.478 --> 00:00:08.691
We've talked about the importance
of critical thinking, but
4
00:00:08.691 --> 00:00:10.636
let's be a little bit more specific.
5
00:00:10.636 --> 00:00:15.079
The output from generative AI is based
on the inputs on which the models
6
00:00:15.079 --> 00:00:20.050
are trained, and it can recombine these
inputs in ways that are not valuable,
7
00:00:20.050 --> 00:00:22.558
not true, and even harmful to people.
8
00:00:22.558 --> 00:00:27.372
So I always try to do five key things
whenever I use generative AI, and
9
00:00:27.372 --> 00:00:30.945
I'll share those five
things in this short video.
10
00:00:30.945 --> 00:00:32.045
So, as we've said,
11
00:00:32.045 --> 00:00:35.910
working with large language models as
a thought partner is very valuable.
12
00:00:35.910 --> 00:00:39.978
And as these models get more powerful,
it will become even more valuable.
13
00:00:39.978 --> 00:00:43.672
But these models, at least today,
are not perfect, and
14
00:00:43.672 --> 00:00:45.634
they're often flat-out wrong.
15
00:00:45.634 --> 00:00:50.593
And so you have an important role to play
in the way that you interact with these
16
00:00:50.593 --> 00:00:51.216
models.
17
00:00:51.216 --> 00:00:53.856
I mean,
as I'm trying to depict here,
18
00:00:53.856 --> 00:00:56.959
you've got to stay in control
of the conversation, and
19
00:00:56.959 --> 00:01:00.470
you cannot just blindly accept
what comes out of these models.
20
00:01:00.470 --> 00:01:04.855
So here are five actions that I always
try to take when I'm using these models.
21
00:01:04.855 --> 00:01:06.520
Number one is to reflect:
22
00:01:06.520 --> 00:01:10.164
I do not take what comes out
of these models at face value.
23
00:01:10.164 --> 00:01:12.699
I always ask myself,
what do I think of that?
24
00:01:12.699 --> 00:01:13.774
Does that make sense to me?
25
00:01:13.774 --> 00:01:15.644
Does that match my intuition?
26
00:01:15.644 --> 00:01:20.029
Is that something that seems
true based on my experience?
27
00:01:20.029 --> 00:01:22.579
And then it's not enough that it seems true;
28
00:01:22.579 --> 00:01:25.582
number two is validating
that things are true.
29
00:01:25.582 --> 00:01:30.182
So if you're looking for factual things,
not just conceptual ideas,
30
00:01:30.182 --> 00:01:32.953
make sure that you validate those things.
31
00:01:32.953 --> 00:01:37.943
Frankly, I do not use large language
models for factual searches
32
00:01:37.943 --> 00:01:42.852
very often, unless it's with
a search engine that will give
33
00:01:42.852 --> 00:01:47.842
me a generated response based on
underlying web pages, where I can go to
34
00:01:47.842 --> 00:01:53.034
each web page and decide for myself
whether I find it credible.
35
00:01:53.034 --> 00:01:55.561
Another thing that's happening,
by the way,
36
00:01:55.561 --> 00:01:58.037
is that a lot of web pages
are being generated by AI.
37
00:01:58.037 --> 00:02:02.617
Many of these are not accurate, so even
though it looks like your search results
38
00:02:02.617 --> 00:02:06.034
are grounded in a web page
that appears authoritative,
39
00:02:06.034 --> 00:02:08.302
that web page is not necessarily trustworthy.
40
00:02:08.302 --> 00:02:12.673
As usual, it's just good practice:
evaluate your sources.
41
00:02:12.673 --> 00:02:14.903
And with a large language model,
it's hard to know what the sources are.
42
00:02:14.903 --> 00:02:17.225
So be very careful about factuality,
43
00:02:17.225 --> 00:02:21.226
and make sure that you validate
anything before you treat it as true.
44
00:02:21.226 --> 00:02:25.516
Especially if the information you're
using is going to feed into
45
00:02:25.516 --> 00:02:26.902
a high-stakes decision,
46
00:02:26.902 --> 00:02:31.457
make sure it's true before you actually
base your decision on that information.
47
00:02:31.457 --> 00:02:34.419
All right, the third big action: debate.
48
00:02:34.419 --> 00:02:38.243
Don't just be passive: if the language
model tells you something, challenge it.
49
00:02:38.243 --> 00:02:40.831
Let it challenge you; you challenge it.
50
00:02:40.831 --> 00:02:43.773
Now, one thing you'll find is that a lot
of these models will just fold.
51
00:02:43.773 --> 00:02:48.228
If you say, I disagree with that,
they'll say, yeah, you're right.
52
00:02:48.228 --> 00:02:52.829
So, try to frame your questions as
open-ended challenges so
53
00:02:52.829 --> 00:02:56.132
that it doesn't just
automatically agree with you.
54
00:02:56.132 --> 00:02:59.010
The fourth action is to filter.
55
00:02:59.010 --> 00:03:03.199
One of the things that these generative
models are really good at is generating
56
00:03:03.199 --> 00:03:04.154
lots of options.
57
00:03:04.154 --> 00:03:08.220
And one of the things I like to do is
ask it for way more than I
58
00:03:08.220 --> 00:03:12.732
need, because then I can sift through
it and pick the pieces that I like.
59
00:03:12.732 --> 00:03:17.158
So instead of saying, give me
a recommendation for how to put a title on
60
00:03:17.158 --> 00:03:22.109
top of this paragraph, I'll say, give
me five recommendations for how to put
61
00:03:22.109 --> 00:03:26.626
a title on top of this paragraph, and
then I can pick the one that I like.
62
00:03:26.626 --> 00:03:31.594
But filtering is really valuable because
(a) the generative AI model can give
63
00:03:31.594 --> 00:03:36.651
you a lot of options, and (b) the process
of filtering keeps you really engaged.
64
00:03:36.651 --> 00:03:39.706
So that, ultimately,
it is your choice
65
00:03:39.706 --> 00:03:44.836
about what you consider
putting into your point of view.
66
00:03:44.836 --> 00:03:48.436
And that kind of gets me to the final
point, which is to integrate.
67
00:03:48.436 --> 00:03:50.570
And filtering and
integrating really go together.
68
00:03:50.570 --> 00:03:55.029
So, you've got to decide what you're
going to actually integrate into your
69
00:03:55.029 --> 00:03:58.992
thinking, into your point of view,
into your company strategy,
70
00:03:58.992 --> 00:04:00.989
into your interview processes.
71
00:04:00.989 --> 00:04:04.702
Ultimately, though,
you need to be accountable for
72
00:04:04.702 --> 00:04:08.177
what you choose to integrate
into your thinking.
73
00:04:08.177 --> 00:04:12.071
And I would say that part of
accountability is to make sure that you've
74
00:04:12.071 --> 00:04:16.889
reflected on it, that you've validated it,
that you've tested it through debate,
75
00:04:16.889 --> 00:04:21.311
and that you have chosen wisely the kinds
of things that you want to integrate into
76
00:04:21.311 --> 00:04:22.235
your thinking.
77
00:04:22.235 --> 00:04:26.774
So, those are five key actions that will
help you be a better thought partner and
78
00:04:26.774 --> 00:04:29.549
get more out of the process,
and also, I think,
79
00:04:29.549 --> 00:04:32.124
help you avoid some of
the pitfalls that could
80
00:04:32.124 --> 00:04:36.850
be associated with relying too much on
generative AI models as a thought partner.