
Alison Powell, Assistant Professor at LSE, investigates how data and algorithms affect our daily lives, from negotiating public transport and booking restaurants to the more serious issues of surveillance and privacy. She argues that greater algorithmic accountability is needed if we are to understand the impact of these systems, both positive and negative, not only on our day-to-day lives but also on citizenship and inequality in our societies.

Our social world is in the midst of a turn towards data. You might not notice it, but data underpins an increasing number of everyday experiences. In particular, data drives decisions. Some seem minor, such as the day-to-day choices that make life easier. What’s the quickest way to work via public transport? How many people have rated this restaurant four stars? But although greater access to data improves individual experiences, it also has a significant impact on other, more serious decisions. Will you get a loan? How much will your insurance cost? Are you being tracked by the police? Where will you be able to live? Or even – in the age of centrally controlled ‘smart locks’ – can you open the door to your house?

These decisions are driven by a proliferation of digital data produced by connected devices, sensors, and all of the parts of our everyday life that we share online. They are made, increasingly, with the help of automated calculation systems that can deal with volumes of data that are difficult for humans to manage. These processes – algorithms – can even learn from their previous decisions, making it ever easier to use ever more data to make ever better decisions.
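As a purely hypothetical illustration of that feedback loop, the Python sketch below shows a toy decision system whose past decisions become the data for its next ones. The features, the similarity measure and the approval rule are all invented for illustration; nothing here describes a real lender or platform.

```python
# Illustrative sketch only: a toy system that "learns" from its own past
# decisions. All feature names, values and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DecisionSystem:
    # (feature tuple, outcome) pairs from previous decisions
    history: list = field(default_factory=list)

    def decide(self, features):
        """Approve when past cases similar to this one were mostly approved."""
        if not self.history:
            return True  # hypothetical default when there is no history yet

        def similarity(a, b):
            # fraction of features that match exactly
            return sum(x == y for x, y in zip(a, b)) / len(a)

        # past approvals vote for approval, past refusals against,
        # each weighted by how closely the old case resembles the new one
        score = sum(similarity(features, past) * (1 if approved else -1)
                    for past, approved in self.history)
        return score >= 0

    def record(self, features, approved):
        # every decision becomes training data for the next one
        self.history.append((features, approved))

system = DecisionSystem()
# hypothetical features: (postcode band, owns smart device, frequent traveller)
system.record(("A", True, True), approved=True)
system.record(("B", False, False), approved=False)
print(system.decide(("A", True, False)))  # resembles the approved case -> True
```

Even in this toy version, the loop is visible: more decisions mean more data, and more data shifts the next decision, which is what makes such systems powerful and also hard to audit from the outside.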

Algorithmic Authority

But the data-based social world, and the ‘algorithmic authority’ that underpins it, throws up new problems for social scientists, as well as providing new contexts for our continuing interests in citizenship, participation, equality and justice. Many machine learning processes depend on creating correlations between large sets of often diverse data. The resulting inferences (rather than conclusions) now underpin categorisations of people, which in turn control things like the price you see for goods online (for example, if you’ve recently bought an expensive pair of headphones, some online retailers will display higher prices for trainers), or whether you are considered an insurance risk. New ‘smart’ products are constantly being developed that collect data remotely and store and manage it centrally, so the ‘smart lock’ that controls under what circumstances you can enter your house is only a step past the car that recognises its driver and configures her preferences.
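To make that mechanism concrete, here is a minimal Python sketch of how an inferred category, rather than a verified fact about a person, can drive the price a shopper is shown. The segment labels, the 200-unit threshold and the markups are all invented assumptions; real retail systems are far more complex and, crucially, opaque.

```python
# Hypothetical sketch of inference-driven pricing. Labels and numbers invented.
def infer_segment(purchase_history):
    """Infer a shopper segment from a correlation in past purchases."""
    # crude hypothetical rule: one expensive purchase -> 'price-insensitive'
    if any(price > 200 for _, price in purchase_history):
        return "price-insensitive"
    return "price-sensitive"

def displayed_price(base_price, segment):
    # invented markups; the shopper never sees the segment label itself
    markup = {"price-insensitive": 1.15, "price-sensitive": 1.0}
    return round(base_price * markup[segment], 2)

history = [("headphones", 349.99)]      # hypothetical purchase record
segment = infer_segment(history)        # an inference, not a conclusion
print(displayed_price(80.00, segment))  # the trainers are now shown at 92.0
```

The point of the sketch is that the shopper only ever sees the output price; the inference and the markup stay inside the box.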

Frank Pasquale, professor of law at the University of Maryland, argues that data and algorithmic processes have created a ‘black box society’ in which governments and corporations collect increasingly detailed information about individuals – influencing recruitment for jobs, consumer prices, and relationships with law enforcement (including, for example, appearance on lists like the UK’s domestic extremism register).

The problem with the black box society is not data or algorithms themselves, but the secrecy that results from massive corporate and government control of data and of the algorithms used to process it and make decisions. Pasquale discusses how this secrecy is embedded in the algorithms used by big technology companies: research by Latanya Sweeney showed that searches for African-American-identified names returned Google results connecting those names to arrest records, whereas searches for white-identified names did not. Even more concerning is the rampant trade in ‘runaway data’, in which brokers buy and sell packages of personal data profiling web use habits, credit card purchases, and networks of relationships formed on social media. Brokers package and resell this information, sorting consumers into very specific micro-targeting categories based on ongoing web surfing, clicks and likes.
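A minimal sketch of that ‘runaway data’ flow, assuming entirely invented records and an invented category label: separately collected datasets are joined on a shared identifier and reduced to a resellable micro-targeting label.

```python
# Hypothetical broker flow: all user ids, records and labels are invented.
browsing = {"user_123": ["running-shoes", "marathon-training"]}
purchases = {"user_123": ["energy-gels", "gps-watch"]}
social = {"user_123": ["follows: @runninggroup"]}

def build_profile(user_id):
    # join separately collected datasets on a shared identifier
    return {
        "browsing": browsing.get(user_id, []),
        "purchases": purchases.get(user_id, []),
        "social": social.get(user_id, []),
    }

def micro_target(profile):
    # derive a single resellable category label from the combined signals
    signals = sum(profile.values(), [])
    if any("running" in s or "marathon" in s for s in signals):
        return "amateur endurance athlete"  # an inferred, tradeable label
    return "uncategorised"

print(micro_target(build_profile("user_123")))
```

The join itself is trivial; as Pasquale’s argument suggests, what matters is that the inputs, the rule and the resulting label are all held out of the data subject’s sight.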

Source: blogs.lse.ac.uk