A deepfake video is one in which the facial expressions and speech of the person shown aren't real but are computer-generated using machine learning. This has profound consequences for how we validate content, accelerating the fake-news problem and even affecting the legal system in countries that accept video as evidence.
Imagine a video showing the leader of a country discussing a pre-emptive strike on another country, except that the content of the video is not real but an ultra-realistic deepfake. In the age of confirmation bias, we cannot expect people to validate everything they see or hear; if that were possible, the fake-news problem wouldn't exist in the first place.
There are ethical use cases for deepfake video, but there would be no stopping the exploitation of the technology for nefarious ends. Hence there is a need gap for building tools that can be integrated into systems for validating information.
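As one illustrative sketch (not proposed in the post itself), a validation tool could rely on provenance signing: the capture device signs a hash of the video at recording time, and any downstream system verifies that signature before trusting the content. The key name and handling below are simplified assumptions; a real system would use asymmetric keys and secure hardware.

```python
import hashlib
import hmac

# Hypothetical shared key; a real deployment would use a device-held
# private key with public-key verification, not a shared secret.
SECRET_KEY = b"device-private-key"

def sign_video(video_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the raw video bytes at capture time."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Accept the video only if it is byte-identical to what was signed."""
    return hmac.compare_digest(sign_video(video_bytes), signature)

# Any edit to the bytes -- including a deepfaked face or voice track --
# invalidates the signature.
original = b"...raw video bytes..."
tampered = b"...deepfaked video bytes..."

sig = sign_video(original)
print(verify_video(original, sig))   # True
print(verify_video(tampered, sig))   # False
```

This only proves the file is unmodified since signing; it cannot label unsigned footage as fake, which is why detection-based approaches would still be needed alongside it.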