Behind and in front of algorithms: A conversation through the screen

by Elinor Carmi & Martina Mahnke



Being on the Internet means searching for information. It means digging into what is simultaneously new and old. It means searching for the known while finding the unknown. Being on the Internet means a constant negotiation with algorithms.

How do we get algorithms to do what we want them to do? Does using algorithms mean tricking “the man” behind them? Is “he” trying to help us dig through the graveyard of information, or does “he” have other intentions? Why are we using these tools? Is it because we have no other options, or because we simply can’t create our own algorithms? If Google weren’t free, would it be this big? What would happen if regulators said that nothing on the Internet can be free anymore? How do we negotiate control in relation to the way information is presented to us on the Internet?

Two media scholars started a conversation about algorithms and how to make sense of them. This is not an academic article – we do not reach the catharsis of a final conclusion. If anything, this chat digs into the hotly debated subject of the mathematical equations that organize the way we receive and interact with information on the Internet. It is an attempt to poke the black box that is our screen, but it is merely the beginning, a chat – no more, no less.

ON FACEBOOK (slightly edited and shortened version)

[22-11-2013 4:04 pm Elinor]

I think there is confusion about programmers and the ownership of software or applications. Programmers employed by Google or Facebook do not own anything. They work for a corporation and usually have little power or control over what they produce. In addition, these algorithms are the product of many other actors, such as different standardization organisations: the FCC, W3C, OSI, etc. Therefore, we have to understand that this issue is actually an amalgamation of many actors who shape algorithms.

[22-11-2013 4:35 pm Martina]

I actually like to put the emphasis on the human–algorithm interaction and understand algorithmic output as the result of an interaction between the user and the algorithms. Simply put: no user, no algorithm, no output. Further, I think we focus too much on the institutional and structural components. It’s important to talk about the individual interaction on the micro-level. Yes, we can just ‘blame’ the algorithm or, on an institutional level, ‘Google’, but this will always end up in a power fight. Who’s right? Who’s wrong? This will never change anything. Therefore, we have to ask: what can we as individual users do? In most research, users are understood as passive and non-influential. Hence, we need to start reflecting on our behavior as users.

[22-11-2013 5:26 pm Elinor]

Putting the responsibility on the user is exactly what neo-liberalism is all about. The ‘power fight’ you are talking about is exactly what is needed, IMO, and it is usually silenced and shifted onto the end user as their ‘fault’. I do agree with you that as users AND citizens we have something to do, but it does not end merely with knowing what algorithms do. Instead of thinking of some users as stupid, perhaps we should ask why programmers are so obsessed with user experience?

[22-11-2013 5:30 pm Elinor]

In other words, changing users’ knowledge of their ‘online’ behavior is only one step that users can take. Corporations and governments also have to be held accountable for the products (in this case algorithms) they provide to citizens. Not all people have the privilege of sufficient literacy in programming languages, especially since these are intentionally designed to be ‘black boxed’. So I disagree with you that it is only the users’ responsibility, since this assumes all users and citizens have the same economic and cognitive abilities.

[22-11-2013 5:54 pm Martina]

I would even go further and say we teach our children the wrong subjects … I don’t think we disagree, as I haven’t said it’s ‘just’ the user but BOTH – I just focus on the micro-level. What we might disagree on is what kind of regulation we want. It seems I’m more on the programmers’ side, wanting liberation. I don’t want some government to decide; I would rather engage in literacy and figure out the ‘black box’. Why should governmental governance be any better?

CONTINUED ON GOOGLE DOCS (slightly edited and shortened version)

[22-11-2013 Elinor]

Programmers are usually a white male elite that invented this language, and I hardly think we should automatically adopt what they think is the right and only language with which to talk about and build the Internet. If we are to take the power to decide, then I decide not to use this language. Oh, but wait, can I really? I do agree with you that governmental regulation can be problematic, but I am not sure the current situation is much better.

[25-11-2013 Martina]

I see two lines of argumentation here. One is very normative: What is the ‘right’ thing to do? Who takes on power? How do we regulate? Shouldn’t we just let the people decide? If people think it’s useful and use it, why not … The other, number two, is what we are talking about: cultural interrelations. Yes, mankind is not as free as we might hope; we are bound to our heritage. Therefore, I suggest more literacy skills. Back to: we need to teach our children different subjects.

[28-11-2013 Elinor]

I agree these questions are hard, but I also believe we have to confront them; otherwise they are answered for us rather than by us. I am not even sure that the question is ‘what is the “right” thing to do’. Perhaps there is no right answer, but are we given options? Do we have access to the decision-making procedures of such algorithms, and if not, can we at least have some kind of transparency about what they actually do? These direct questions have to be answered, mainly because algorithms have a direct influence on our digital (and material) lives: structuring what we see on the Internet, deciding what prices we get on various websites, deciding how we interact with each other, deciding how we interact with other commercial and non-human agents, etc. If such an architectural design of the online environment has such far-reaching consequences for our lives (decisions, actions, thoughts, feelings), shouldn’t we at least have some idea of what these algorithms actually do, how they organize the information we engage with, and when they decide to change their equations? THEN literacy will make sense, because we will be confronted with these algorithms and have to understand how they work – not as passive agents, but as active ones who can take control over the way their online environment is designed and shaped.

[30-11-2013 Martina]

This I think is interesting: “Do we have access to the decision-making procedures of such algorithms?” What I read out of this question is the understanding that algorithms are ‘decision-making procedures’. What does that actually mean? Starting very naively: let’s assume you drive a car and reach a junction; you have three possible ways to go and you need to choose one. The decision that follows is bound to certain conditions: maybe you’d like to take the fastest way, or the most beautiful one. This may or may not influence your choice; however, you need to select. Making a decision, then, means selecting something. You need to go this way and not the other, meaning you leave one road behind. After a while you reach another crossroads and the same scenario occurs. You need to select again, and once again you leave something behind. What is problematic about this scenario is that you can only do either/or. This seems to hold for the algorithms underlying algorithmic media as well. The news we finally see has been selected over other items, which we never get to see. From a user’s perspective this seems kind of random. Why so? Because we look at the content. We relate only to the content and ask ourselves why we see this and not that. The underlying processes of algorithmic media, however, track users’ behavior; users’ behavior is quantified. Here’s a quote from my interview material:

“And it worked in a way that we know that we gave you a list, ordered from 1 to 10. But you read, actually you clicked on item number 3 first. We inferred that you prefer the content of number 3 to number 1 and 2. And that gives us this next time, if we get any content that is very similar to one and two and content that is similar to 3, then we can assume that because you preferred it last time, you might prefer it this time and we’ll put it first. But if now again you choose the third item, then it switches back. That’s why it keeps sort of (u) what’s going on. If I give you old news at the top and it’s not interesting to you anymore, you gonna read the Johnny Depp item, then we know ok, she prefers that always and it’s always before other stuff. That’s how the only idea works behind it …”

So one could argue that the decision is actually made by the user and where she clicks. Seen this way, it makes sense when programmers say:

“We don’t filter out. We only sort it.”

“Filtering is not very clever. I mean it’s like if you have information overload and you just drop sources so you didn’t really solve the problem (…) yeah, you solve the problem but not very wise. You just give up on information.”

“It doesn’t make decisions. It gives rank for things. So it’s like mathematical formula. You give the number inside and the formula gives you a rank.”
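The re-ranking heuristic the interviewee describes – items similar to what the user clicked last time get promoted the next time – can be sketched in a few lines. This is a hypothetical illustration, not the actual system: the function names, the item structure, and the idea of matching by a simple topic tag are all assumptions made for the sketch.

```python
# Hypothetical sketch of click-based re-ranking: items whose topic the
# user clicked on before are moved to the top of the list, while the
# original (editorial) order is preserved within each group.

def rerank(items, clicked_topics):
    """Order items so those matching previously clicked topics come first.

    items          -- list of (title, topic) tuples in editorial order
    clicked_topics -- set of topics the user clicked in earlier sessions
    """
    preferred = [it for it in items if it[1] in clicked_topics]
    rest = [it for it in items if it[1] not in clicked_topics]
    return preferred + rest

feed = [("Budget vote", "politics"),
        ("Transfer rumours", "sport"),
        ("Johnny Depp interview", "celebrity")]

# The user clicked the celebrity item last time, so it moves to the top.
print(rerank(feed, {"celebrity"}))
```

Note that, exactly as the programmers say, nothing is filtered out here – every item is still in the list; only its position changes based on inferred preference.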

Maybe it’s important to think about decision: what does it mean, for whom, in which situation? I think it’s just too simple to say ‘algorithms decide’.

[11-12-2013 Elinor]

I will divide my answer according to yours :)

- First, your allegory is good but somewhat misleading, because while it is true that there must be some kind of decision-making, algorithms do not cater for one person (who drives the car); we are talking about huge numbers of people, different backgrounds, different literacy capacities, etc. This means that each decision taken here is much more crucial, because it touches so many people and their digital lives. Therefore, I see this as a design that should be transparent and clear, rather than opaque and vague as it is today. These decisions, categories, standards and options are not neutral and ‘obvious’ paths to be taken; rather, they are influenced by corporate decisions guided by profit and should thus be under much higher scrutiny and supervision. As I’ve mentioned before, programmers do not operate by their own free will; they get job tasks from people who aim at a specific user experience that will bring profit in the most frictionless way possible.

- I am not sure that as users we only think about the content. I think – though I have no data or thorough research to back this – that beyond the content we see, we also care about the user experience. And this also involves the kinds of expression tools given to us: How can I present myself on Facebook? Which privacy settings can I adjust to filter different circles in my life, and so on (I think the Pew Research Center just published something about youth being highly conscious of their settings, because they want to hide information from their parents). These are very important, and I think people notice that as well. The fact that I have to identify myself with my real name – my offline identity, if you like – already signals a very well-planned strategy that suits third-party companies rather than users, who, of course, were not asked about any of Facebook’s design changes.

- Filtering and organising information in a specific way is extremely important, and shows that these programmers want to do this job FOR the users, rather than letting them do it themselves. It somewhat resembles the special stands for specific products in supermarkets and even bookstores, which are given a more central space because the corporations that make them paid more money for better visibility. Are these the products the users think are more important? Would they choose them if they weren’t so ‘highlighted’? These are important questions, especially since we are talking about different forms of information, some of them crucial. Therefore, it is not so much like a mathematical formula, since there is an internal bias within these systems from the beginning; the process is extremely tilted towards corporations that have the resources to make certain forms of information more visible than others. And this has far-reaching consequences for the way we think and understand the world.
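Elinor’s point – that a ranking can look like a neutral mathematical formula while carrying a built-in bias – can be made concrete with a toy example. Everything here is invented for illustration: the relevance scores, the idea of a “promotion fee”, and the weight given to it are assumptions, not a description of any real system.

```python
# Toy illustration of a biased ranking formula: a hypothetical paid
# promotion fee is added to an otherwise neutral relevance score, so
# the formula is still "just math" yet tilts visibility toward whoever
# pays, like the paid-for supermarket stands in the analogy.

def score(relevance, promotion_fee, fee_weight=0.5):
    """Rank score = user relevance plus a paid visibility boost."""
    return relevance + fee_weight * promotion_fee

items = [
    {"title": "Independent report", "relevance": 0.9, "fee": 0.0},
    {"title": "Sponsored product",  "relevance": 0.4, "fee": 2.0},
]

# Sort by score, highest first: the paid item outranks the more
# relevant one (0.4 + 0.5 * 2.0 = 1.4 beats 0.9).
ranked = sorted(items, key=lambda i: score(i["relevance"], i["fee"]),
                reverse=True)
print([i["title"] for i in ranked])
```

With `fee_weight=0`, the same formula would rank purely by relevance – the bias lives in a single parameter a user never sees, which is exactly why “it’s just a formula” is not a neutral answer.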


Concluding through the screen

Google, Facebook, Netflix, Amazon, Spotify and other tech companies are a big part of our (digital) lives. They are here to stay, at least for now, and they rely on algorithms. They shape us and we shape them. They are complex, interrelating the social and the technical; many actors are involved – corporations, regulators and, most importantly, the users – influencing the mass as well as the individual. We are very much at the beginning of understanding what algorithms do and how they influence us, and even more so at the beginning of understanding how to deal with them. Is the call for transparency sufficient? Does it even lead us where we want to go? Is technology really empowering us, or is it time to step back? What does empowerment even mean? And how can we find a way for multiple voices/needs/literacies to have equal access to the main channels of knowledge production?

What science can do, and what we need to do, is develop a language that enables us to understand these complex interrelations – beyond the pure mathematics on one side and the modern hyperbole of mystical algorithms on the other. Stay tuned!
