WEBVTT

1
00:00:01.830 --> 00:00:03.770
I am not Morgan Freeman.

2
00:00:03.830 --> 00:00:05.970
And what you see is not real.

3
00:00:06.470 --> 00:00:07.740
What is real?

4
00:00:07.750 --> 00:00:09.020
And what's fake?

5
00:00:09.030 --> 00:00:11.660
Working that out is increasingly important.

6
00:00:11.670 --> 00:00:16.630
Deepfakes, manipulated videos, photos and voices are becoming more common.

7
00:00:18.360 --> 00:00:20.390
AI has ushered in a whole new level of fakes

8
00:00:20.390 --> 00:00:23.380
and it's becoming more difficult to tell what's true and what's false.

9
00:00:25.332 --> 00:00:30.240
And AI also makes it much easier to create fake images and videos.

10
00:00:31.990 --> 00:00:36.630
We're confronted by a flood of images every day, and more and more of them

11
00:00:36.640 --> 00:00:38.440
have been manipulated.

12
00:00:41.840 --> 00:00:45.840
Like any tool, AI can be used and also abused.

13
00:00:45.960 --> 00:00:49.680
We look at these potentially abusive uses and try to counteract them.

14
00:00:51.560 --> 00:00:53.860
To detect AI-generated fakes,

15
00:00:53.860 --> 00:00:58.027
researchers at the Fraunhofer Institute for Applied and Integrated Security

16
00:00:58.027 --> 00:01:00.553
employ another AI technology:

17
00:01:00.553 --> 00:01:02.153
supervised machine learning.

18
00:01:02.200 --> 00:01:05.393
By studying many examples of audio and video tracks,

19
00:01:05.393 --> 00:01:09.760
it can learn to detect patterns that allow it to flag up faked content.

20
00:01:10.410 --> 00:01:12.980
This kind of AI support is crucial,

21
00:01:13.000 --> 00:01:16.640
because before long it will be effectively impossible for us humans

22
00:01:16.640 --> 00:01:19.420
to recognize deepfakes.

23
00:01:19.440 --> 00:01:23.200
And not every deepfake is as harmless as this one,

24
00:01:23.200 --> 00:01:24.956
in which former German Chancellor Angela Merkel

25
00:01:24.956 --> 00:01:27.822
appears to recite some comic verse.

26
00:01:30.633 --> 00:01:33.400
Deepfake detection is a race against time.

27
00:01:33.400 --> 00:01:36.819
And that's mainly because the aggressors are upping their game.

28
00:01:36.830 --> 00:01:40.280
AI models don't get better on their own, but because humans put effort

29
00:01:40.280 --> 00:01:41.660
into developing them.

30
00:01:41.840 --> 00:01:45.720
So we have to find better ways to uncover and detect fakes

31
00:01:45.720 --> 00:01:47.300
and push development further.

32
00:01:49.400 --> 00:01:52.840
Similar technology is also widely used in advertising.

33
00:01:52.920 --> 00:01:56.200
Our willingness to believe the unbelievable is something

34
00:01:56.200 --> 00:01:58.600
Claudia Bussjaeger has observed for many years.

35
00:02:01.100 --> 00:02:03.030
People want to believe what they see.

36
00:02:03.040 --> 00:02:05.750
And what they don't want to see, they also don't want to believe.

37
00:02:05.760 --> 00:02:06.630
They're in a bubble.

38
00:02:08.507 --> 00:02:12.720
Bussjaeger runs the first German-based platform for AI artists.

39
00:02:15.587 --> 00:02:18.387
Before that, she worked in advertising,

40
00:02:18.387 --> 00:02:22.419
an industry which often stretches the frontiers of truth.

41
00:02:26.370 --> 00:02:29.190
People have been faking it in advertising for ages.

42
00:02:29.200 --> 00:02:33.240
There are no ads that haven't been edited or had parts swapped out.

43
00:02:33.600 --> 00:02:35.230
But nobody questions that.

44
00:02:35.240 --> 00:02:36.700
Nobody ever has.

45
00:02:40.480 --> 00:02:44.000
Images can have power; we've known that for a long time.

46
00:02:44.040 --> 00:02:48.200
Just like faked texts, manipulated images can be used

47
00:02:48.200 --> 00:02:49.870
to spread misinformation.

48
00:02:49.880 --> 00:02:52.419
Everywhere, from art to politics.

49
00:02:52.760 --> 00:02:57.040
Soviet dictator Joseph Stalin notoriously had those he fell out with

50
00:02:57.040 --> 00:02:59.030
erased from photographs.

51
00:02:59.040 --> 00:03:02.680
Such manipulation is problematic for a number of reasons.

52
00:03:05.560 --> 00:03:09.327
The public has to be able to rely on the information it gets,

53
00:03:09.327 --> 00:03:13.139
because we all need to find out about the world we live in.

54
00:03:14.760 --> 00:03:16.770
That includes journalistic information,

55
00:03:16.770 --> 00:03:18.780
which plays a role in that process.

56
00:03:19.040 --> 00:03:22.360
And if we can no longer trust the information that we're given,

57
00:03:22.360 --> 00:03:23.900
we have a fundamental problem.

58
00:03:25.720 --> 00:03:29.280
That's actually quite a frightening development when you think about it.

59
00:03:31.680 --> 00:03:35.720
We can create things that are completely, or almost completely,

60
00:03:35.720 --> 00:03:40.400
detached from reality, and we can manipulate them in a targeted way,

61
00:03:40.400 --> 00:03:42.350
for our own gain.

62
00:03:42.360 --> 00:03:44.340
And that can be dangerous.

63
00:03:47.160 --> 00:03:50.153
Because if everything can theoretically be faked,

64
00:03:50.153 --> 00:03:52.270
how do we know what hasn't been?

65
00:03:52.280 --> 00:03:55.540
Images stop being a reliable source of evidence.

66
00:03:55.640 --> 00:03:58.967
And yet at the same time, these developments bring new opportunities

67
00:03:58.967 --> 00:04:00.610
and new roles.

68
00:04:00.620 --> 00:04:04.473
AI artists, for example, use the new technology to create art:

69
00:04:05.667 --> 00:04:08.577
I think there's no stopping it. Even if you say:

70
00:04:08.577 --> 00:04:10.513
"AI is not for me!"

71
00:04:10.513 --> 00:04:12.680
Sooner or later it will be.

72
00:04:12.680 --> 00:04:15.420
It's always been like that, throughout history.

73
00:04:15.720 --> 00:04:18.760
When a new technology arrives, others disappear.

74
00:04:19.200 --> 00:04:22.960
We no longer have landlines, and one day we won't have cars

75
00:04:22.960 --> 00:04:24.550
that run on fossil fuels.

76
00:04:24.560 --> 00:04:26.220
That's just how it is.

77
00:04:26.240 --> 00:04:30.320
So we need to ask: how can I use AI in a positive way?

78
00:04:30.510 --> 00:04:33.290
So that it's not twisted and used destructively?

79
00:04:36.480 --> 00:04:39.520
Take the grandparent phone scam, for example.

80
00:04:39.520 --> 00:04:43.560
Just a few words of recorded speech are now enough to clone a voice,

81
00:04:43.570 --> 00:04:46.920
which scammers can then use to trick elderly relatives

82
00:04:46.920 --> 00:04:48.180
into handing over money.

83
00:04:48.610 --> 00:04:50.890
A more positive use of the same technology

84
00:04:50.890 --> 00:04:52.779
is being developed by Google.

85
00:04:56.200 --> 00:04:59.186
It enables speech-impaired people to communicate

86
00:04:59.186 --> 00:05:01.300
using their own voices again.

87
00:05:04.550 --> 00:05:09.750
It uses the same technology that can be misused to make deepfakes.

88
00:05:09.750 --> 00:05:13.950
And that shows that, as a technology, AI itself is morally neutral.

89
00:05:13.950 --> 00:05:16.570
It's down to us humans and how we use it.

90
00:05:18.000 --> 00:05:20.760
That's key when it comes to protecting what's true

91
00:05:20.760 --> 00:05:23.250
and also to protecting credibility.

92
00:05:23.279 --> 00:05:27.390
Media companies use fact checks and multiple-source checks as safeguards

93
00:05:27.400 --> 00:05:30.720
in what is still largely an unregulated zone.

94
00:05:34.560 --> 00:05:38.240
Currently we are in the Wild West as far as AI is concerned.

95
00:05:38.560 --> 00:05:42.130
We're seeing all kinds of different players jumping on the bandwagon,

96
00:05:42.140 --> 00:05:45.300
like gold diggers who want to use it for their own ends.

97
00:05:46.760 --> 00:05:50.640
The legal framework is only now being put in place.

98
00:05:50.640 --> 00:05:55.100
And as is often the case, the law is slower to adapt;

99
00:05:55.100 --> 00:05:57.540
it lags behind technological developments.

100
00:05:59.040 --> 00:06:01.500
So politicians need to get into gear.

101
00:06:01.800 --> 00:06:04.640
They have to step up to make sure that we don't have

102
00:06:04.640 --> 00:06:06.300
these problems in future.

103
00:06:11.279 --> 00:06:15.000
Some countries have now signed a legally binding treaty aimed

104
00:06:15.000 --> 00:06:17.150
at regulating the use of AI.

105
00:06:17.160 --> 00:06:19.892
But unless and until it finally comes into effect,

106
00:06:19.892 --> 00:06:22.630
responsibility remains with companies.

107
00:06:22.640 --> 00:06:26.720
Claudia Bussjaeger has drawn up ethical guidelines for her platform.

108
00:06:29.920 --> 00:06:32.720
Clearly, we don't take people and disparage them.

109
00:06:33.400 --> 00:06:36.540
We don't imitate artists and pretend to be them.

110
00:06:37.000 --> 00:06:40.680
We have very clear ethics when it comes to dealing with property

111
00:06:40.680 --> 00:06:43.680
belonging to individuals, artists and celebrities.

112
00:06:46.753 --> 00:06:51.400
You can basically say that it's about staying real in an unreal world.

113
00:06:52.567 --> 00:06:54.560
I think that's really important.

114
00:06:58.707 --> 00:07:02.520
There are also a number of newly emerging apps and platforms

115
00:07:02.520 --> 00:07:05.600
which focus on revealing AI-generated fakes.

116
00:07:05.920 --> 00:07:09.087
The Fraunhofer Institute for Applied and Integrated Security

117
00:07:09.087 --> 00:07:15.160
offers the "Deepfake Total" website, which can be used by anyone free of charge.

118
00:07:18.680 --> 00:07:22.120
Dealing with deepfakes is partly about media literacy.

119
00:07:22.120 --> 00:07:24.433
That means questioning what you see online,

120
00:07:24.433 --> 00:07:26.630
and not just taking it at face value.

121
00:07:26.640 --> 00:07:30.600
It's also about using technology to uncover deepfakes.

122
00:07:30.600 --> 00:07:32.413
And last but not least,

123
00:07:32.413 --> 00:07:35.750
it's about implementing protective verification methods.

124
00:07:35.760 --> 00:07:38.320
You can protect websites with digital signatures,

125
00:07:38.320 --> 00:07:40.750
and you can do the same with media content.

126
00:07:40.760 --> 00:07:43.180
If we use these three building blocks,

127
00:07:43.180 --> 00:07:45.660
I think we'll be well-positioned as a society.

128
00:07:47.840 --> 00:07:51.840
We've become used to living in a world flooded with information.

129
00:07:51.840 --> 00:07:55.960
Now we also need to develop a critical eye,

130
00:07:55.960 --> 00:07:59.720
to avoid falling for increasingly deceptive deepfakes in future.