One of the most common uses of artificial intelligence at the moment is handling massive datasets: processing and interpreting them at scale. Tasks that would take human data analysts ages to complete, if they could complete them at all, are performed in no time, and without the risk of human error. At the same time, the average person creates an ever-larger digital footprint, leaving a trace in the form of a vast amount of personal information on the internet.
Corporations and governments, then, gather, store, and feed that information to powerful AI algorithms in order to learn as much as possible about that person for marketing (and other) purposes. All this has led to heated debates over the safety of our personal data and its potential misuse.
No doubt AI holds tremendous potential to disrupt and improve our lives, but there are some hidden traps and pitfalls that have to be discussed and overcome.
Is There Such a Thing as Too Much Data?
It depends on the point of view. Brands seem to need every single bit of information on their target audience in order to better understand their needs and preferences so that they can tailor the right marketing message.
While that goal is legitimate in itself, the rise of advanced technologies, AI included, has let this thirst for information get in the way of customers’ privacy.
Namely, before AI and big data analytics, it was impossible to properly interpret unstructured data coming from different sources and in different formats, which left a big chunk of information uninterpretable and thus unused.
But once these technologies cracked that code and turned previously illegible data into usable information, digital privacy became an issue.
In 2012, an incident showed how intimidatingly accurate data analytics can be, and what that means for an ordinary user. In an attempt to assist its customers in finding everything they might need, Target mailed coupons for cribs and baby clothes to a high school girl. Her unsuspecting father went to complain, only to find out that this wasn’t just a random mistake: the store’s algorithm had inferred the girl’s pregnancy from the products she had purchased and viewed.
Similarly, it’s possible to track and locate people with the help of their own mobile devices and wearables, which means that it’s virtually impossible to go off the radar and seclude oneself.
Voice and facial recognition additionally complicate things as these technologies are capable of completely obliterating anonymity in public places.
Although it’s somewhat comforting to know that many wrongdoings and crimes can be prevented or solved this way, the lack of regulation might put us all under surveillance. Besides, there are growing fears of misidentification and wrongful convictions. According to research studies, this technology is significantly less accurate at identifying people of color, which can have grave consequences.
The Concept of Privacy in a Digital Age
The Facebook–Cambridge Analytica scandal was just one in a long line of incidents that demonstrated how unprotected our data is and how easy it is to obtain, with almost no repercussions.
Just 20 years ago, privacy was still a concept largely confined to the offline, physical world. It was much easier to protect yourself and your personal data simply by not disclosing your credit card or Social Security number.
Today, as we use numerous online services, it’s hard to keep our data to ourselves. If you want to purchase something online, you have to provide your credit card number and authorize the transaction. Websites store this sensitive information, and a single hacker attack can expose it.
For example, the data of up to 500 million Marriott International guests was compromised in a data breach in 2018.
But, it’s not only hackers and cybercriminals that jeopardize our privacy.
It’s not a secret that many companies use social media and the internet to find out more about their potential and existing employees. This can have severe implications, as people can be (and usually are) held accountable for what they post online. Some have even lost their jobs due to certain online activities like posting a profanity-laced tweet, which is exactly what happened to a NASA intern.
Is There a Solution to This Issue?
It can’t be denied that being constantly monitored and under surveillance can be frustrating.
But it would be a shame to curb the development of such immense technological advancement because of unresolved privacy issues.
AI, big data analytics, IoT, and 5G, for example, are much maligned in some circles because they rely on gargantuan amounts of data and because they enable a massive network of interconnected devices that can be controlled remotely.
What does this mean?
It can be both a gigantic blessing and a curse. When combined, these technologies allow, for example, the possibility of remote surgery that could save millions of lives. Similarly, IoT is a network that enables remote control of cars, homes, and appliances.
On the other hand, the data generated by these technologies can be compromised or used for harmful purposes.
Another example is AI-powered chatbots that have become indispensable in numerous industries, thanks to the fact that they can improve customer engagement and juggle multiple customer queries at the same time. This way, they help customers and increase satisfaction.
They are also capable of collecting, analyzing, and storing customer information in order to personalize every subsequent customer touchpoint and offer the best and most personalized service. This way, companies can reduce operational costs and boost customer retention rates.
Luckily, there are ways to make the most of all these AI benefits without compromising users’ privacy.
A New Dawn of Digital Privacy
How are we going to achieve this win-win situation and give brands our data without any fears of it being misused?
The trick lies in combining cryptography and machine learning, so that AI can learn from data without actually seeing it.
This way, the privacy of end users will be protected, and at the same time, companies will be able to leverage that data without breaking any laws or ethical norms.
Several technologies will make this happen:
- Federated learning: This concept describes a decentralized AI framework distributed across millions of devices. Federated learning enables scientists to create, train, improve, and assess a shared prediction model while keeping all the data on the device. In a nutshell, companies won’t have access to users’ raw data, nor will they be able to label it. This approach, often described as a synergy of AI, blockchain, and IoT, keeps users’ privacy safe while still providing all the benefits of aggregated model improvement.
- Differential privacy: A number of applications, including maps or fitness and health apps, collect individual users’ data so that they can make traffic predictions or analyze users’ fitness levels and other parameters. At the moment, it’s theoretically possible to match individual contributors to their data. Differential privacy adds some randomness to the procedure, making it impossible to trace the information back to any one person. As a result, the identity of individual contributors can’t be exposed, while their data can still be collected and analyzed in aggregate.
- Homomorphic encryption: This technology allows machine learning algorithms to process and analyze encrypted data without ever accessing the underlying sensitive information. The data is encrypted and analyzed on a remote system, the results are sent back in encrypted form too, and they can be unlocked only with a unique key, so the privacy of the users whose data is being analyzed is protected end to end.
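The federated learning idea in the first bullet can be sketched in a few lines. Below is a minimal single-machine simulation, not any particular framework’s API: three hypothetical “devices” each hold private samples of the same underlying relationship, each runs a few local training steps, and the server only ever averages model parameters, never the raw data.

```python
import random

def local_update(weights, data, lr=0.1, steps=20):
    """One client's local training: a few SGD steps fitting y = w*x + b
    on that client's own data. The raw data never leaves this function."""
    w, b = weights
    for _ in range(steps):
        x, y = random.choice(data)
        error = (w * x + b) - y
        w -= lr * error * x
        b -= lr * error
    return w, b

def federated_average(client_weights):
    """Server step: average the parameters, not the data."""
    n = len(client_weights)
    w = sum(cw[0] for cw in client_weights) / n
    b = sum(cw[1] for cw in client_weights) / n
    return w, b

# Three simulated devices, each holding private samples of y = 2x + 1.
clients = [[(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(20))]
           for _ in range(3)]

global_model = (0.0, 0.0)
for _ in range(50):
    updates = [local_update(global_model, data) for data in clients]
    global_model = federated_average(updates)

# global_model now approaches the true parameters (w ≈ 2, b ≈ 1),
# even though the server only ever saw parameter tuples.
```

The data, model shape, and learning rate here are illustrative assumptions; real deployments add secure aggregation, client sampling, and weighting by dataset size.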
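The “added randomness” behind differential privacy is often the Laplace mechanism: a counting query has sensitivity 1 (one person can change the count by at most 1), so adding Laplace noise with scale 1/epsilon makes the result epsilon-differentially private. A toy sketch, with the dataset and query invented for illustration:

```python
import math
import random

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.
    A count has sensitivity 1, so the Laplace noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical data: users' ages; query: how many are 30 or older?
ages = [23, 35, 41, 29, 52, 38, 61, 27]
noisy = private_count(ages, lambda a: a >= 30, epsilon=1.0)
# `noisy` is close to the true count (5) but never exactly reveals it.
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for the guarantee that no single contributor’s presence can be confidently inferred.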
We’re still far from a complete solution to the privacy problem, but these steps help keep things under control. AI and other technologies keep evolving, so new obstacles will emerge, and scientists and security experts will have to keep pace and continually upgrade security protocols.