Big Tech knew that data was worth something long before the rest of us even knew what data was. They collected all they could and found clever ways of getting AI to decipher it and sell us things without us even realizing we were being sold to. And that's fine by us. It's only for advertising purposes. What's the problem? But when Big Data and AI are involved in predictive policing, your loan prospects, travel freedoms, and the judicial system, we should really consider how much of a data footprint we leave.
As we march like digital drones into an uncertain online future, here are some reasons why the less we reveal about ourselves, the better.
Recently, the World Economic Forum released a concept for using people's digital identity as a tool for "secure and seamless travel". The document outlines the use of "traveler managed" digital identities as a vetting tool to better control the flow of travelers through borders. Merging travel rights with your online life seems even more likely when you look at the recent US announcement that the country will soon request five years of social media history from all visa applicants. I suppose someone's online behavior over a five-year period is a good indicator of who they are and how likely they are to be dangerous. It's also open to being grossly unfair and discriminatory. The internet will become a place where everyone watches what they say, for fear of how their comments will come back to bite them with bureaucracy down the line.
Recently, the UK government ordered an independent watchdog to investigate the potential for bias in AI decision-making in criminal justice cases. The AI bias inquiry will be led by the Centre for Data Ethics and Innovation (CDEI), despite the fact that the police already use artificial intelligence to predict crimes.
In California, a new law slated to take effect in October will let AI decide on the freedom of citizens. The law requires the state's criminal justice system to replace cash bail with an algorithmic pretrial risk assessment. If someone receives a "high" risk score, that person must be detained prior to arraignment, effectively placing crucial decisions about a person's freedom into the hands of AI.
Algorithms can reinforce existing inequity in the criminal justice system. Researchers at Dartmouth University found in January that one widely used tool, COMPAS, wrongly classified black defendants as being at risk of committing a misdemeanour within two years at a rate of 40 per cent, versus 25.4 per cent for white defendants. This is hardly unexpected. AI is not neutral. AI follows the crowd and creates a feedback loop.
China plans to rank all its citizens based on their "social credit" by 2020, with people rewarded or punished according to their scores. If you drive badly or smoke in a non-smoking zone, you might find yourself barred from the best dating websites. If you behave yourself, you could get reduced energy bills. The scheme is already being piloted for millions of people across the country and will be mandatory when fully rolled out. Every Chinese citizen will have a searchable file of amalgamated data from public and private sources tracking their social credit. All this data will be scanned by AI, and Chinese society could effectively end up being run by AI.
This might seem like something that could never catch on in the Western world, but people said the same about Pokemon, so it's better to be safe than sorry. Wherever possible, restrict your data footprint. Browse in private with Mozilla, search the internet with DuckDuckGo, don't hand over any data unless you absolutely need to, and message and send files with get2Clouds. Total privacy. Total encryption.