It has become resoundingly clear that our accelerating rate of innovation has not been given the scrutiny it deserves, and that negligence will prove to be a defining issue of our time.
Of all the technological advancements of the past century, the Internet is perhaps the most revolutionary in terms of how it has influenced our day-to-day lives. As a means of consolidating, generating and accessing knowledge, it dwarfs all previous methods, carrying a potential for both good and ill hitherto undreamt of.
Let’s start with the good: as a species, we are more connected than ever (low-hanging fruit, I know). If I want to check on my uncle in Kuwait, I don’t even have to call him. I can just check his Instagram. Further, if I’m bored, I will often reflexively open the app to be greeted with an up-to-date feed of everyone I follow. The barrier to interaction is much lower for indirect communication than for direct communication, and that is where the Internet blows telephones out of the water. We don’t even have to try to be connected. It just happens.
Unfortunately, this progress is a double-edged sword. In addition to well-documented ramifications for our mental health, the ability to access such a vast slew of information has other worrying consequences for our civil discourse that are only beginning to come into view.
According to the BBC, data held by the largest online storage companies—Google, Amazon, Microsoft, and Facebook—exceeds 1,200 petabytes, equivalent to 1,200,000,000 gigabytes, or roughly 10 million iPhones. The figure is so large that the human brain isn’t even equipped to fully comprehend its magnitude.
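For readers who want to sanity-check those conversions, here is a minimal back-of-envelope sketch. It assumes the article's decimal units (1 petabyte = 1,000,000 gigabytes) and a 128 GB iPhone, which is the assumption needed to land near the "roughly 10 million" figure; both are assumptions, not claims from the BBC report.

```python
# Back-of-envelope check of the storage figures cited above.
# Assumption: 1 petabyte = 1,000,000 gigabytes (decimal units).
# Assumption: a 128 GB iPhone, chosen to match "roughly 10 million."
petabytes = 1_200
gigabytes = petabytes * 1_000_000      # 1,200,000,000 GB, as stated
iphones = gigabytes / 128              # how many 128 GB phones that fills

print(f"{gigabytes:,} GB ≈ {iphones / 1e6:.1f} million iPhones")
```

Run as written, this confirms the gigabyte conversion exactly and puts the iPhone count just under 10 million, consistent with the article's rounding.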
So, how do we go about evaluating the thousands of claims and ideas we come across on a daily basis? Well, according to Oregon State Professor of Philosophy Sharyn Clough, it may already be too late to ask this question.
“I think they’re already out of control, and things are only going to get worse with extended reality,” Clough said. “Zuckerberg and folks at Microsoft and Apple have made clear they want us off our phones in the next five to ten years, and instead interacting in virtual spaces and augmented reality. That has huge potential, but it also increases the odds of deception—active deception—and we’re really going to have to train up our truth-detection skills, including some skills we don’t often think about, such as empathy and imagination.”
This issue of deception is particularly concerning because there is more money to be made in deceiving people than in educating them. The reason so many headlines are filled with borderline falsehoods is that publishers make their money off of clicks, which flashy enticements draw more reliably than the truth.
Expanding on the issue of deception, OSU philosophy professor Jonathan Kaplan added via email, “What’s interesting is that the places where there is some public distrust of science are not places where the science is either counterintuitive or uncertain, at least in my view. So the reason that people doubt anthropogenic climate change,” for example, “is that very well-funded organizations worked very hard to sow doubt where there basically is none, and the issue became politicized.”
Indeed, when it comes to what can deceive the masses, things are just getting started.
With so-called “deep fakes” on the horizon, traditional shortcuts to establishing credibility may soon be all but useless. Figures of authority may no longer be trustworthy in the coming decades. Is that really the president speaking, or is it just a fabrication? Augmented reality may take this question beyond our screens. Soon, we may not even be able to trust our own eyes.