Great discussion, going deep on the difference between a human interaction and interacting with AI that delivers only probable words, one by one, without understanding. This discussion, from 1:08 onwards, starts with this great advice from Nelson Roque on the use of AI tools: "One of the important parts of using these tools is not to create a future skill deficit by unlearning things or relying in kind of overreliance way on these AI tools."
I also think the demonstration Nelson did of people's ability to detect AI-generated images was very illuminating. You also wondered whether geospatial intelligence analysts face problems in dealing with fake satellite images. I am not in that field, but I work in weather forecasting, where we also deal with lots of satellite images, satellite radiances and other data. We have considered whether we could face doctored satellite images or other weather data in times of conflict. We concluded that it would be very difficult to pull off, because we rely on data from multiple sources and a diverse set of satellite instruments with thousands of radiance channels, and faking all of them simultaneously is basically impossible. When the satellite data get ingested into our systems, statistical quality controls are performed on each piece of data. Data are also compared against our models of nature, and implicitly against yesterday's data, which would flag and likely reject faked data due to inconsistencies. In intelligence/counter-intelligence the situation is similar: one never relies on a single channel of data, but uses multiple channels to detect inconsistencies, and the more channels used, the less likely it is that someone could have doctored all of them in a consistent way. The odd ones out would be detected.
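Roughly, the kind of statistical quality control I mean can be sketched like this (a toy example, not anyone's operational code; the function name, error values and threshold are made up for illustration). Each observation is compared against the model's background forecast, and anything that deviates by too many combined standard deviations gets flagged:

```python
import numpy as np

def background_check(obs, background, obs_err, bg_err, threshold=4.0):
    """Flag observations that disagree with the model background by more
    than `threshold` combined standard deviations. Toy simplification of
    the quality control performed when satellite data are ingested."""
    innovation = obs - background           # observation minus model value
    combined_std = np.sqrt(obs_err**2 + bg_err**2)
    z = np.abs(innovation) / combined_std   # normalised departure
    return z > threshold                    # True = suspect, rejected

# Hypothetical brightness temperatures (K) vs. model forecast values:
obs = np.array([250.1, 249.8, 275.0, 250.3])
bg = np.array([250.0, 250.0, 250.0, 250.0])
flags = background_check(obs, bg, obs_err=0.5, bg_err=0.5)
# the third observation deviates by 25 K and is flagged
```

A doctored observation has to beat not just this check for one channel, but the mutual consistency of thousands of channels and yesterday's analysis, which is why we judged spoofing to be so hard.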
Returning to AI-generated images, I was reminded that in the past people used spectral analysis to detect doctored photographs, and a quick search showed that researchers are now developing methods using spectral analysis and other techniques, including machine learning, to detect AI-generated images. There also appear to be apps doing this, or at least code on GitHub, though I have not checked further. Someone should build this kind of app, perhaps even as a browser plug-in, that very quickly analyses any image and flags it as fake or real. End of random thoughts, and thanks again for the episode.
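For the curious, the spectral-analysis idea can be sketched in a few lines (a toy, not a real detector; actual methods feed such spectral features into trained classifiers). The azimuthally averaged power spectrum of an image is one feature in which generator artefacts have been reported:

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged 2D power spectrum of a grayscale image.
    Real detectors combine features like this with machine learning."""
    f = np.fft.fftshift(np.fft.fft2(img))    # 2D FFT, DC term centred
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # radial frequency bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts                     # mean power per radial bin

# Demo on a synthetic noise image (a real check would compare the
# spectrum's shape against statistics of natural photographs):
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
spectrum = radial_power_spectrum(img)
```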
Thanks Elias! Good to hear from you and to have your attentive analysis as always! Glad not to be shouting into the void, haha. Yes, your comments on image spoofing in GEOINT and weather were great. I alluded to this in the episode, saying that delivery systems would have to be compromised, but you gave very useful extra detail on how hard that would be. Thanks!