Some thoughts on Algorithms to Live By

“I just read about this in Algorithms to Live By”, “it was mentioned in Algorithms to Live By that”, “there’s a chapter about this in Algorithms to Live By” and variants thereof have been a regular part of my vocabulary as of late. I borrowed this gem from an acquaintance on a whim some time ago, because it seemed like it might be interesting. It turned out to be pretty great for the most part. It was apparently also listed as one of the best science books of 2016 by Amazon and probably got a bunch of other such acknowledgements, but I wasn’t really aware of any of these.

Algorithms to Live By is a book about applying insights from computer science to common real-life problems – mostly insights stemming from theoretical computer science, the sort that studies questions such as “what can be computed” and “which problems can be solved efficiently (according to a specific technical definition of ‘efficiently’)”. It’s less a book about computers as physical machines, the internet, programming or other more engineer-level things that people might associate with the term ‘computer science’. Essentially, the book deals with using ideas from the field of algorithmics to make the best possible kinds of decisions in situations of uncertainty, limited resources and intractable complexity.

I imagine the book’s premise may sound preposterous to many people with a background in more human-oriented fields, like the humanities or social sciences, but that’s hardly the case. Rarely do the authors claim that they can present a method for finding the right answers. In fact, it’s a recurring theme throughout the book that those are often even theoretically unattainable (in polynomial time by a deterministic Turing Machine, at the very least), but that computer science can at least help us make the very best guesses.

One of the book’s strong points is that it’s very approachable to people from different backgrounds. Yours truly happens to be an undergraduate student in computer science (although I’ve regrettably had too little time for that subject as of late), and I was at least familiar with the basic mindset of the book and, on a surface level, with many of the main topics discussed. For me, both the book’s discussion of the theoretical material and its application to real-life situations were interesting, but I would imagine that even someone with a CS degree would find the links the authors build between the theory of algorithms and everyday life clever and insightful.

To someone coming from a purely non-CS background, I’d say this is actually one of the best currently available introductions to what algorithms are and how algorithmic decision-making works. I’ve already recommended it as such to students interested in computational social science, and I’d very much recommend it to anyone from the social science side of academia riding the current algorithm fad or considering hopping on that bandwagon.

Of the authors, Tom Griffiths is Professor of Psychology and Cognitive Science at Princeton, and Brian Christian is a writer with a background in philosophy, computer science and, interestingly, poetry. A bunch of other smart CS people appear in the book via quotations or citations.

* * *

It’s a sadly little-known fact that there exists an optimal algorithm for finding the best possible soulmate. It’s equally sadly little-known that even said algorithm cannot be guaranteed to give the best result even most of the time.

That result is related to an interesting decision-theoretic problem called the secretary problem, and its discussion forms the core of Algorithms to Live By‘s first chapter, which deals with optimal stopping. Let’s assume that you’re running a firm and want to hire a new secretary. You have n applicants for the position, and you know n; let’s say it’s one hundred. You can unambiguously rank the applicants from best to worst once you’ve seen them, and you interview them sequentially. After each interview, you can either accept an applicant or reject them – once rejected, an applicant can no longer be accepted.

Now, the problem is: how to ensure that you end up hiring the best possible applicant? The order in which the applicants come is random, so the best applicant can be the first one, the last one, or anything in between. If you pick the first, possibly very good, candidate, there’s a high probability that later on you would have met someone even better. But if you refrain from making a decision until the very end, it’s also extremely likely that you will end up passing by the best applicant.

The trick is that there is no way to be sure that you pick the best candidate, but there is an algorithm which guarantees that, as n grows, you have a 1/e or around 37% chance of choosing the best one (or, perhaps more accurately, an algorithm that, when used, will choose the best applicant 37% of the time). It’s a very simple one: reject each of the first 37% of applicants, then immediately choose the first one who’s better than any of the ones you’ve seen before. Unfortunately even this algorithm misses the best applicant almost two-thirds of the time – but given that the assumptions hold, there’s no way to beat it.
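The 37% rule is easy to check empirically. Here’s a quick simulation sketch (the function names and parameters are my own, not from the book) that runs the stop-after-37% strategy on random orderings of applicants:

```python
import random

def secretary_trial(n, lookahead_fraction=0.37, rng=random):
    """Run one round of the 37% rule on a random ordering of n applicants.

    Returns True if the single best applicant (rank n - 1) gets hired."""
    applicants = list(range(n))          # higher number = better applicant
    rng.shuffle(applicants)
    cutoff = int(n * lookahead_fraction)
    best_seen = max(applicants[:cutoff], default=-1)
    for rank in applicants[cutoff:]:
        if rank > best_seen:             # first one to beat the baseline
            return rank == n - 1
    return applicants[-1] == n - 1       # nobody beat it: forced to take the last

def success_rate(n=100, trials=5_000, seed=0):
    rng = random.Random(seed)
    wins = sum(secretary_trial(n, rng=rng) for _ in range(trials))
    return wins / trials
```

With a hundred applicants the success rate hovers around the theoretical 1/e ≈ 0.37, which you can check by calling success_rate() with different seeds.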

The problem’s formulation is curiously ill-suited to the scenario that’s usually presented, in that job applicants are not generally accepted or rejected immediately after an interview. There are, however, real-life situations which closely resemble it – such as buying an apartment, or dating. In Western societies, it’s generally (after the early stages, at least) not acceptable to date multiple people simultaneously, and if you choose to reject a potential mate or break up with them, there’s commonly no coming back. So the problem of choosing whether to settle in with a person or keep looking in the hopes that there’s someone even better around the corner is something encountered by most people who are looking to commit themselves to a serious relationship.

Of course, the exact formulation of the question does not apply here, either. Certainly, the n of potential mates is rarely known with certainty (and for many people, almost certainly does not approach infinity). Ordering the prospective partners is also probably hard to do with certainty, although with a reasonably-sized n it can probably be done with some degree of accuracy. Here, Christian and Griffiths suggest reframing the problem a bit: choose a timeframe – say, five years, or your twenties or thirties or whatever – during which you want to find a partner. Then spend the first 37% of that dating without seriously considering committing, establishing a baseline. After that, settle in with the first one whom you’d rank higher than the best one you met during your wild years (and hope that they’re not in their own baseline-estimation period, lest the assumptions of our scenario break down and they end up rejecting you instead).

This will yield a 37% probability that you settle in with the smartest, hottest, most warm-hearted one of all the people who agreed to date you. More likely you’ll pass them over and end up with whomever you met last, just before the end of your search period – someone who’s probably not as good as the One That Got Away. But at the very least you will be able to find solace in the fact that this was the best you could do. For a serial monogamist there’s no better algorithm, so you’ll need to settle for at best a 37% chance of finding the closest-to-the-right one – or maybe choose a different set of assumptions and start practicing polyamory or some other form of (hopefully ethical) non-monogamy.

* * *

Other chapters in the book deal with, for example, explore/exploit problems, Bayes’ theorem, overfitting and game theory. The first half of the book deals more closely with everyday real-life situations. For example, chapter two on explore/exploit problems begins with a discussion of the awesomely named multi-armed bandit problem: the problem of which slot machine to play at a casino, when the odds of each machine’s payoff are unknown, and the only way to gain information about a machine is to play it enough.

Again, there are real-life problems that are more familiar to those of us who do not frequent casinos, such as eating at restaurants. Each time you eat out, you have the option of exploiting your existing knowledge by going to a restaurant that you know for sure is great. But picking a safe choice every time entails accepting the risk that there’s an even better restaurant out there that you’ll never find out about because when you exploit, you can’t explore. Of course, when you explore, you’ll more likely than not end up eating at a place that’s sub-par.
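The book discusses more sophisticated strategies for this, such as the Gittins index and upper confidence bounds, but the basic tension can be illustrated with the textbook-standard epsilon-greedy strategy. This sketch is my own illustration, not necessarily the book’s recommendation, and all names in it are mine:

```python
import random

def epsilon_greedy(payoffs, pulls=10_000, epsilon=0.1, seed=0):
    """Explore a random arm with probability epsilon, otherwise exploit the
    arm with the best estimated payoff. `payoffs` holds each arm's true
    (unknown to the player) win probability. Returns the average reward."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    rewards = [0.0] * len(payoffs)
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payoffs))         # explore
        else:
            estimates = [r / c if c else float("inf")  # untried arms first
                         for r, c in zip(rewards, counts)]
            arm = estimates.index(max(estimates))      # exploit
        win = rng.random() < payoffs[arm]
        counts[arm] += 1
        rewards[arm] += win
        total += win
    return total / pulls
```

With machines paying off 30%, 50% and 80% of the time, the strategy quickly concentrates its pulls on the best machine, and the average payoff climbs toward 0.8.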

Some practical recommendations emerge. For example, the authors (quite sensibly) point out that you’d do well to prioritize exploration when you’ve just moved into a city, and exploit when you’re about to leave. The value of any information obtained is, after all, tied to how long you will be able to use it. But the chapter also contains a very interesting discussion on the ethics of running clinical trials.

In a randomized controlled trial that studies treatments for an illness, patients are split into groups and receive a different kind of treatment based on which group they were sampled into. Most likely, at least if it’s a study comparing new treatments to well-established ones, one of the treatments works better than the others. Sometimes it’s the new treatment, more often an old one. Knowledge of which treatment works best will probably emerge during the trial, but patients are generally given the same treatment for the entire duration of the study. This raises the obvious but (according to the authors at least; I don’t really know much about medicine myself) often neglected ethical problem that some patients in a study are given treatment that’s known (with some degree of certainty) to be worse than an alternative.

A possible solution, the authors suggest, is to treat clinical trials as multi-armed bandit problems and use an adaptive trial setup. This setup, introduced by the biostatistician Marvin Zelen, involves treating patients sequentially and goes roughly like this. First, pick a treatment at random for the first patient and see what happens. If the treatment is a success, increase the probability of that treatment being selected during subsequent trials; if not, increase the probability of the other treatment. (You can imagine there being two kinds of balls in an urn, one for each of the treatments, and picking a ball from it at random. At first there are two balls; each time a treatment succeeds, add one ball of the corresponding kind to the urn, and each time it fails, add one ball of the other kind.) That way, you’ll gain knowledge of the effectiveness of each treatment as the trial proceeds, and each patient will get treated according to the researchers’ best knowledge. The book discusses (classical) trials of an approach called ECMO (that’s extracorporeal membrane oxygenation for you) to treating respiratory failure in infants, which was eventually deemed better than pre-existing treatments – but not until quite a few infants receiving the ultimately inferior treatment had died during the trials. Apparently there’s been a discussion in the medical community about whether it was ethical to keep assigning patients to groups receiving inferior treatment even after preliminary evidence had surfaced that a better option was available.
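The urn scheme is simple enough to sketch in a few lines. This is my rough rendering of the play-the-winner idea as the passage describes it, not Zelen’s exact protocol, and the success probabilities are made up:

```python
import random

def zelen_urn_trial(p_a, p_b, patients=1000, seed=0):
    """Play-the-winner urn: start with one ball per treatment; a success
    adds a ball for the drawn treatment, a failure adds a ball for the
    other one. Returns the fraction of patients assigned to treatment A."""
    rng = random.Random(seed)
    urn = {"A": 1, "B": 1}
    assigned_a = 0
    for _ in range(patients):
        draw = rng.choices(["A", "B"], weights=[urn["A"], urn["B"]])[0]
        if draw == "A":
            assigned_a += 1
        success = rng.random() < (p_a if draw == "A" else p_b)
        if success:
            urn[draw] += 1                       # reward the drawn arm
        else:
            urn["A" if draw == "B" else "B"] += 1  # failure favours the other arm
    return assigned_a / patients
```

With treatment A succeeding 80% of the time and B only 40%, the urn quickly skews the allocation toward A, so most simulated patients end up receiving the better treatment even while the trial is still gathering evidence.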

I’m not sure how relevant all of this is. I kind of suppose that the authors are simplifying the issue quite a bit, and there are probably a bunch of medical scientists out there who object to this on some grounds. But I found this a fascinating idea nevertheless, and it’s an interesting direction to turn to from considering the classical multi-armed bandit problem.

Some other practical ideas contained in the book: it only makes sense to sort something if it actually saves time. Interestingly – I didn’t know this – there’s a way to sort a bookshelf alphabetically in approximately linear time, or O(n) for short, using a technique called bucket sort, provided you know beforehand how to split the books into the roughly even-sized categories that the technique needs as an intermediate step. However, just searching through an unsorted bookshelf one book at a time can always be done in linear time, and that’s the worst case – usually, the book you want to find is not the last one on the shelf. (If you notice that for some reason it often is, you could of course leverage that information in how you search your bookshelf.)
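For illustration, here’s a minimal bucket sort over book titles, bucketing by first letter. This is a simplification of my own making: the linear-time bound only holds when the buckets come out roughly even-sized and small.

```python
def bucket_sort_titles(titles):
    """Sort book titles by first binning them into one bucket per initial
    letter, then sorting each (hopefully small) bucket separately."""
    buckets = {}
    for title in titles:
        buckets.setdefault(title[0].upper(), []).append(title)
    result = []
    for letter in sorted(buckets):        # at most 26 buckets to walk through
        result.extend(sorted(buckets[letter]))
    return result
```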

I like to sort my bookshelf because I kind of enjoy having my books in a certain order – standard-sized fantasy books at the beginning, followed by the few other kinds of fiction books that I own, those followed by a motley of non-fiction books, and finally comics, video games, and so on. I do it every time I move and occasionally adjust it when I add new books – I rarely reread books, so after that, the shelf more or less remains untouched. I reserve the right to do this despite its irrationality because I like it.

But I did decide to stop caring about the order in my kitchen cabinet after reading Algorithms to Live By. Instead, I now just put the stuff that I regularly use at the front of the cupboard – things like muesli, peanuts, textured soy protein and, well, muesli again. The stuff that I use less often gets pushed to the back of the cupboard, where it remains until I need it again, after which I put it in the front again. This sounds like an excuse for messiness (and to be honest, it probably is), but at least it’s rational messiness. I spend a lot less time worrying about placing everything where it needs to be and usually find everything immediately when I need it.
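What I’m doing is, in effect, the move-to-front heuristic familiar from self-organizing lists and caching. A sketch (the cupboard contents are obviously mine):

```python
def fetch(cupboard, item):
    """Self-organizing list: take an item out and put it back at the front.
    Frequently used items drift to the front; rarely used ones sink back."""
    if item in cupboard:
        cupboard.remove(item)
    cupboard.insert(0, item)
    return cupboard

shelf = ["flour", "rice", "muesli", "peanuts"]
fetch(shelf, "muesli")
fetch(shelf, "peanuts")
# now: ["peanuts", "muesli", "flour", "rice"]
```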

* * *

Algorithms to Live By gets a lot more philosophical towards the end. The first few chapters do occasionally mention things such as the multi-armed bandit implying that life should get better as you age – after all, when you’re old, you can focus on exploiting a lifetime of wisdom and need to care less about exploring. But the latter half seemed to me to focus a lot less on practical problems like finding a free parking space, and a lot more on more fundamental topics.

A recurring theme here is the huge computational challenge of modelling a world in detail, and the folly of trying that with a puny human brain (or, indeed, any computer). The chapter on overfitting explicitly warns about thinking too much, noting that it may well lead to worse, not better, decisions than acting on instinct. For those of you not versed in the language of statistical models, overfitting happens when you build a complex model based on a set of observations, and that model ends up sucking at predicting new observations because it’s too sensitive to random variations in the training data set. Like in this example, from Wikipedia:

[Figure: an example of overfitting, from Wikipedia. Image by Ghiles, CC BY-SA 4.0.]

The blue line represents a hugely complicated model that perfectly “predicts” the placement of each black dot in the graph, much better than the very simple model represented by the black line. But it’s obvious that the black line is much better in reality – just imagine adding a new data point with x = 6, and try to figure out where the blue model and the black model would expect it to appear.
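This is easy to reproduce: fit both a line and a high-degree polynomial to noisy linear data, and compare the training error with the behaviour just outside the data. A sketch using NumPy (the specific numbers are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 10)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)   # a noisy line

line = np.polynomial.Polynomial.fit(x, y, deg=1)     # simple model
wiggle = np.polynomial.Polynomial.fit(x, y, deg=9)   # hits every point

# mean squared error on the training data
train_err = lambda m: np.mean((m(x) - y) ** 2)
print(train_err(line), train_err(wiggle))
print(line(6.0), wiggle(6.0))                        # extrapolate to x = 6
```

The degree-9 polynomial drives its training error to essentially zero, while the line, despite its larger training error, is the one you’d trust at x = 6, where the true value is around 13.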

I’m very happy that overfitting has its own chapter in this book, because the need to avoid overfitting when making decisions is something that I’ve thought a lot about myself – and often failed terribly at. Overfitting is why one probably shouldn’t be a classical utilitarian on a daily basis: figuring out exactly what to do in a situation with a huge set of factors that should theoretically be taken into account is far too overwhelming, and trying to make a decision based on, say, wanting to minimize a specific person’s suffering based on how you expect their mind to work and react in a specific situation will probably lead one completely astray. Better by far to adopt a deontological or, perhaps, a rule-based utilitarian approach. If you go by well-accepted norms, do your best to avoid killing anyone and try not to lie too much, you’ll probably do quite well. You will make mistakes, of course, but I bet that they will be less disastrous.

The chapters on randomness and relaxation go further into this, dealing more with the fundamental complexity of the world, and how finding perfect answers to computational problems is often impossible in practice. These contain some of the deepest insights this book has to offer, which happen to be some of the most fascinating ideas that I have encountered while studying computer science.

Roughly speaking, it’s this: the world we live in is in very many ways computational – a case has been made that it’s fundamentally so. The authors cite Scott Aaronson, who points out that the quantitative gaps between complexity classes in computer science are effectively qualitative gaps. For example, as Aaronson notes, a proof that P equals NP – a famous unsolved puzzle in computer science – would, in a sense, imply that there’s no fundamental difference between appreciating a great book and writing one.

Computational complexity offers one perspective on appreciating what this fundamental difference is. And it’s not immediately obvious. A few months ago, an acquaintance of mine talked about how fascinating it would be to have a library that contains every book that can be written. Sadly, I was not clever enough back then to boast that I happen to have just such a library on my computer (at least one that contains all books made up of Unicode characters that fit in approximately 16 GB of memory), and that he just needs to figure out a way to find the books he’d like to read in it.

But the book also details how it’s often surprisingly easy to find good-enough answers to problems in NP. Solving the traveling salesman problem exactly is intractable for even the most powerful number-crunchers on this planet once there are more than a few dozen cities in the salesman’s graph. But it can be solved, the book points out, if you’re satisfied with a solution that’s within less than 0.05% of the optimal one – which, I would imagine, is quite enough in most situations.
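The solvers that achieve that kind of precision are involved, but even a naive heuristic illustrates how cheap “good enough” can be. Here’s a nearest-neighbour sketch – entirely my own illustration, not the 0.05% algorithm the book refers to:

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: always travel to the closest unvisited city.
    Not optimal, but fast, and usually not disastrously far off."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour, returning to the start."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

For example, on four cities at (0, 0), (1, 0), (2, 0) and (2, 1), starting from the first, the heuristic walks along the row and hops up at the end.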

The chapter on Bayes’ rule contains some similar insights, as it focuses a lot on the Copernican Principle and details J. Richard Gott’s famous prediction that the Berlin Wall would fall before 1993 with 50% probability. Gott’s reasoning was based on him seeing the Wall in 1969, eight years after it was built, and noting that he should expect to be standing at the Wall roughly at the halfway point of its lifetime. Thus, Gott surmised, the Wall should stand for about another eight years – and, considering that the Wall might have stood for 8,000 years like its namesake in Westeros, his guess was pretty good.

Of course, someone using this logic and looking at Notre-Dame in March 2019 would have expected the cathedral to remain standing for approximately 700 more years. Still, the general principle is valid. In itself, it’s also probably already familiar to many of you. What was interesting in Algorithms to Live By’s treatment of the topic was the discussion of how the Copernican Principle behaves with different kinds of priors. The example with the Berlin Wall is based on using an uninformative prior – every lifetime for the Wall is, a priori, assumed to be equally likely. However, we probably have better priors than that in many cases. How much money films make for their producers, for example, follows a power-law distribution. To predict how much money a film makes over its lifetime, you should take its current earnings and multiply them by a constant (according to the authors, this constant happens to be 1.4). (What I should expect my h-index to grow to by this logic I dare not think of.) For a phenomenon following a normal distribution, predict the average of that distribution. And so on. As unintuitive as this logic appears at first, I think it’s quite fascinating that accurate predictions can be made with hardly any information. And yet it’s also quite intuitive: after all, if you wanted to find someone’s number in the phone book, back in the days when phone books still existed, you probably started your search by opening it roughly in the middle.
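The prediction rules themselves are one-liners. A sketch (the 1.4 multiplier for film grosses is the authors’ figure; the function names are mine):

```python
def copernican_estimate(age_so_far):
    """Uninformative prior: expect to be observing at the halfway point,
    so predict a total lifespan of twice the current age."""
    return 2 * age_so_far

def power_law_estimate(current_value, multiplier=1.4):
    """Multiplicative rule for power-law phenomena; 1.4 is the constant
    the authors report for film grosses."""
    return multiplier * current_value

# Gott at the Berlin Wall in 1969, eight years after it went up:
print(copernican_estimate(8))   # predicts ~16 years total, i.e. ~8 more
```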

* * *

I did not like all of Algorithms to Live By. For one, I think its latter half wasn’t written with the same meticulousness as its beginning. And although its more philosophical aspects were interesting, it did feel like the authors were trying to cram a lot of points into a relatively short book, and weren’t that careful with all of them. The book’s penultimate section on game theory, for instance, besides featuring a fascinating discussion of the computational intractability of finding a Nash equilibrium (strictly speaking a PPAD-complete problem rather than an NP-complete one; I wonder what economists have made of that), goes on to discuss norms and religion as solutions to game-theoretic coordination problems. It also discusses the feeling of romantic love as a way to avoid a Prisoner’s Dilemma in coupling.

This is not wrong – it highlights the fact that people willing to form a relationship need to be able to be reasonably sure that their partner will be faithful, and to be able to signal their own faithfulness as well. When dealing with a creature that can read your mind, it helps a lot in being able to signal your faithfulness if you are indeed faithful. But still, this is hardly the end of the story – as Kevin Simler and Robin Hanson point out, the fact that your conscious motivations are pure does not mean that your real motivations are, too. Even with romantic feelings in place as a coordinating mechanism, the temptation to defect is still there. This does feel a bit like computer scientists using their favourite tools to explain the world, and doing it a bit lazily. Maybe they didn’t intend it that way.

Nevertheless, I think Algorithms to Live By is a great book, and besides some minor complaints like that, I don’t have many. It’s one part handy life advice, one part good introduction to the theory of computation, one part enjoyable armchair philosophy. Regardless of your background, I believe you would be a wiser person if you read it.

From Karlsen et al. (2017)

Another paper which challenges some conventional views of online ‘echo chambers’, i.e. the idea that on the internet and on social media, people largely live in a bubble of like-minded people, leading to a mutual reinforcement of views. Based on survey data from Norway, the authors note that most people encounter people with opposing views at least ‘sometimes’, although

… this does not mean that they change their opinions. Debaters who say they are often contradicted also claim to emerge from online debates stronger in their beliefs.

While their experimental data tells us that

Both confirming and contradicting arguments affect attitude reinforcement in similar ways. This is true for both the self-reported reinforcement and attitude change reinforcement measures that we used in the study. One-sided confirming and contradicting arguments had stronger effects on reinforcement than two-sided neutral arguments. It is important to note that attitude strength is important in this picture. Effects are stronger for individuals with strong attitudes than individuals with moderate attitudes. However, this interaction effect is most consistent in the analysis based on self-reported reinforcement.

A tentative practical takeaway might be that to influence opinions, you should use extreme rhetoric if you’re dealing with moderates, and balanced two-sided arguments if you’re dealing with die-hard fundies.

Instead of echo chambers, the authors suggest another metaphor:

Together, our results indicate that if a single metaphor is to be applied to online debating, trench warfare is a more fitting description than echo chambers. People are frequently met with opposing arguments, but the result is reinforcement of their original opinions and beliefs. However, the logic of confirmation bias, which is central to the echo chamber thesis, is also central in the notion of trench warfare. The Internet provides the opportunity to interact with like-minded people and those with opposite views at the same time. Interaction with like-minded people enables debaters to stay strong in their encounter with opposing arguments.

I’ve been toying for some time with the idea that in the current media environment, our main problem is not that people do not encounter opposing opinions and outgroup members; it is that they encounter them *too often* and in questionable contexts. I’m hoping to write more on that later, but the results here fit that idea quite nicely.

Karlsen, R., Steen-Johnsen, K., Wollebæk, D., & Enjolras, B. (2017). Echo chamber and trench warfare dynamics in online debates. European Journal of Communication.

From Schmidt (2012)

Topic modeling and, specifically, Latent Dirichlet allocation, is a fancy machine learning method used in computational social science and digital humanities to explore large sets of documents. I’ve used it a bit myself.

Benjamin Schmidt, in an article that’s already five years old, has some great points about the caveats of using LDA:

The idea that topics are meaningful rests, in large part, on assumptions of their coherence derived from checking the list of the most frequent words. But the top few words in a topic only give a small sense of the thousands of the words that constitute the whole probability distribution.

He demonstrates this with a clever example, using LDA to cluster ship voyages by treating ship logs as documents and locations derived from them as a vocabulary. It would be interesting if similar problems could somehow be demonstrated with more conventional corpora as well.

Schmidt also has a few things to say about using machine learning in the humanities:

Perhaps humanists who only apply one algorithm to their texts should be using LDA. — But “one” is a funny number to choose. Most humanists are better off applying zero computer programs, and most of the remainder should not be limiting themselves to a single method.


Although quantification of textual data offers benefits to scholars, there is a great deal to be said for the sort of quantification humanists do being simple.

I think LDA is a promising method and hope to be able to explore what it’s actually useful for in the near future. But I also think Schmidt makes a good point that we should aim to work with simple methods whenever possible. That one should only turn to more complicated methods (which, by extension, tend to produce less interpretable results and be more prone to overfitting) once the possibilities of simpler methods have been exhausted seems to me to be an idea that comes rather naturally to computer scientists, but perhaps less so to those from other disciplines.

The article also accidentally invents a clever new term. Along with supervised and unsupervised machine learning, we now have ‘poorly supervised’ machine learning as well. Better be careful with that.

Schmidt, B. M. (2012). “Words Alone: Dismantling Topic Models in the Humanities”. Journal of Digital Humanities, 2(1). Retrieved from


From Flache & Macy (2011)

The conventional wisdom says that polarization can be effectively countered by increasing contact between people with different views. Here’s a very interesting paper that challenges this. The authors simulated ‘caveman graphs’, i.e. graphs with a number of tight but disconnected clusters, and show that adding new ties leads to reduced polarization when it is assumed that the valence of interaction is positive, i.e. actors can only be more or less attracted to those who are similar. However, when valence can be negative, meaning that actors are averse to those with differing views, adding random ties increases polarization.
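Out of curiosity, here’s a toy version of the mechanism as I understand it – emphatically my own simplification, not the authors’ model: agents in tight clusters with a few random bridging ties, where interaction either always attracts opinions or repels them when disagreement is large.

```python
import random

def simulate(clusters=4, size=5, extra_ties=0, negative_valence=False,
             steps=2000, seed=0):
    """Toy caveman-graph influence model. Agents hold opinions in [-1, 1];
    with positive valence they always shift a little toward a neighbour's
    view, with negative valence they shift *away* when disagreement is
    large. Returns the final opinion spread (max - min)."""
    rng = random.Random(seed)
    n = clusters * size
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    # fully connected 'caves', then a few random bridging ties
    edges = [(a, b) for c in range(clusters)
             for a in range(c * size, (c + 1) * size)
             for b in range(a + 1, (c + 1) * size)]
    for _ in range(extra_ties):
        edges.append(tuple(rng.sample(range(n), 2)))
    for _ in range(steps):
        a, b = edges[rng.randrange(len(edges))]
        diff = opinions[b] - opinions[a]
        if negative_valence and abs(diff) > 1.0:
            opinions[a] -= 0.1 * diff    # repulsion from distant views
        else:
            opinions[a] += 0.1 * diff    # attraction to similar views
        opinions[a] = max(-1.0, min(1.0, opinions[a]))
    return max(opinions) - min(opinions)
```

Playing with extra_ties and negative_valence gives a feel for why the same bridging ties can either pull opinions together or push clusters further apart.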

The authors state:

This implication of ‘‘small-world’’ theory depends on the micro-level behavioral assumption that interaction is exclusively positive in valence. This result should caution modelers of cultural dynamics against overestimating the integrative effects of greater cultural contact.

I’m not familiar with the practice or philosophy of simulation models and am quite unsure how seriously I should take it (the model naturally is highly simplified). But I am willing to, tentatively, consider the view that trying to reduce polarization by inconsiderately increasing connections may be a bad idea. Anyways, I found this paper really fascinating.

Flache, A., & Macy, M. W. (2011). Small Worlds and Cultural Polarization. Journal of Mathematical Sociology, 35(1), 146–176.

From Dibble, Drouin, Aune & Boller (2015)

This time, something very much unrelated to my own research. A topic that I’ve been quite interested in as of late is the effect of e.g. Facebook’s chat, and the algorithm that chooses which users to display at the top of the list, on people’s social and possibly romantic relationships. When discussing this theme, a friend of mine suggested a paper called Simmering on the Back Burner: Communication with and Disclosure of Relationship Alternatives.

The paper in question defines back burners as

people we are romantically and/or sexually interested in, who we’re not currently involved with, and with whom we keep in contact in the possibility that we might someday connect romantically and/or sexually. People can have back burners even if they’re already in a romantic relationship with someone else. Also, a former romantic and/or sexual partner can still count as a back burner so long as we still desire a romantic and/or sexual connection with them.

and goes on to note that most people have a number of them on their Facebook friend list, whether currently engaged in a romantic relationship or not; that most people do not tell their partners (if they have any) about them; that most people identify their “closest” back burner as a casual or close friend; et cetera. I think my back burner count is probably lower than that of the average subject in this study, but I haven’t actually gone through my list – and then again, this study was naturally performed on American college students, a group whose social life is probably somewhat different from mine.

Interesting stuff. In hindsight, it is quite obvious that this phenomenon exists, but I very much like the name chosen by the researchers for it. I’m a bit sceptical of the results, though; according to the study, there’s a significant difference (statistically and substantially) in the number of “sexually desirable alternatives” identified by subjects depending on whether they are asked about back burners specifically or about contacts that they would like to be romantically or sexually involved with in general, and I’m not sure this should be the case. Also, as said, the study was done on U.S. college students, of whom most were of Asian origin (although this is probably more likely to mean that the study underestimates the phenomenon, if Asian Americans are more conservative than the average American).

Dibble, J. L., Drouin, M., Aune, K. S., & Boller, R. R. (2015). “Simmering on the Back Burner: Communication with and Disclosure of Relationship Alternatives”. Communication Quarterly, 63(3), 329–344.

From Clark (2016)

I argue that a hashtag’s narrative logic – its ability to produce and connect individual stories – fuels its political growth. — My case study of #WhyIStayed suggests that in the initial stage, hashtags that express outrage about breaches of gender justice are likely to invite online participation, while the escalation into online collective protest depends on the nature of interaction among multiple actors and their sociopolitical contexts.

A tweet can be about something as mundane as a user’s morning cup of coffee, but when combined with the networked power of hashtags, the political fervor of digital activists, and the discursive influence of collective storytelling, online personal expressions can grow into online collective action.

One of the areas I’m working on – hashtag activism, hashtag campaigns, hashtag advocacy, or whatever you want to call it, depending on your point of view – has, curiously, been advanced primarily within feminist media studies, with the journal of the same name having published three special issues on the subject in the past few years. This is one of those Feminist Media Studies papers, and it argues that hashtags are particularly well suited to online feminist discursive activism, detailing how one particular hashtag, #WhyIStayed, was used to subvert mainstream narratives on domestic violence.

I think this is interesting stuff, even if I remain somewhat unconvinced of its practical relevance.

Clark, R. (2016). “‘Hope in a hashtag’: The discursive activism of #WhyIStayed”. Feminist Media Studies, 16(5), 787–804.

From Bruns & Stieglitz (2013)

There are three key areas of metrics which we suggest are of general use in the study of hashtag data-sets: metrics which describe the contributions made by specific users and groups of users; metrics which describe overall patterns of activity over time; and metrics that combine these aspects to examine the contributions by specific users and groups over time. Further, more specific metrics may also be established, but these soon become substantially more case-specific, and are no longer useful for a comparison of patterns across different cases. We discuss these areas in turn, and provide examples of how these metrics may be utilised for the study of individual hashtags as well as for comparative work across hashtags.

One of these days, I should just read through everything Axel Bruns and Stefan Stieglitz have published. This one, already a few years old, outlines some fairly simple but useful metrics for comparing hashtagged Twitter conversations and presents a few examples of such comparisons. Nothing mind-blowing, but it’s good that somebody’s put this stuff on paper.
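
To make the three areas concrete, here is a minimal sketch of the kind of metrics the paper describes – per-user contributions, activity over time, and the two combined – computed over a toy hashtag archive. All user names and timestamps are invented for illustration; this is not the authors’ actual tooling.

```python
from collections import Counter

# Toy stand-in for a hashtag dataset: (user, ISO timestamp) pairs.
tweets = [
    ("alice", "2013-05-01T09:15"),
    ("alice", "2013-05-01T10:40"),
    ("bob",   "2013-05-01T10:55"),
    ("carol", "2013-05-02T08:05"),
    ("alice", "2013-05-02T09:30"),
]

# Metric area 1: contributions per user (who dominates the conversation?)
per_user = Counter(user for user, _ in tweets)

# Metric area 2: overall activity over time (tweets per day;
# ts[:10] keeps just the YYYY-MM-DD part of the timestamp)
per_day = Counter(ts[:10] for _, ts in tweets)

# Metric area 3: the combination -- each user's activity per day
per_user_day = Counter((user, ts[:10]) for user, ts in tweets)

print(per_user.most_common(1))  # [('alice', 3)] -- the most active user
print(dict(per_day))
```

Because all three metric areas are just frequency counts over different keys, a plain `Counter` is enough here; the comparative work across hashtags that Bruns and Stieglitz describe would then operate on these counts (e.g. the share of tweets contributed by the most active users in each dataset).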

Bruns, A. & Stieglitz, S. (2013). “Towards more systematic Twitter analysis: Metrics for tweeting activities”. International Journal of Social Research Methodology, 16(2), 91–108. DOI: 10.1080/13645579.2012.756095.

From Feinberg & Willer (2013)

This paper presents supporting evidence for the argument that environmental issues are usually framed in ways that resonate with the moral values of liberals, but not so much with those of conservatives. Specifically, the harm/care domain of moral foundations theory, which liberals care about much more than conservatives do, is heavily emphasized in environmental rhetoric, while purity/sanctity, which is important to conservatives but hardly at all to liberals, is absent. The authors also argue that conservatives can be made to care about environmental issues if those issues are framed in the right way.

We argue that these differences result from a tendency for harm- and care-based moral arguments, bases of moral reasoning that are more compelling to liberals than to conservatives, to dominate environmental rhetoric. — Thus, we hypothesized that exposing conservatives to proenvironmental appeals based on moral concerns that uniquely resonate with them will lead them to view the environment in moral terms and be more supportive of proenvironmental efforts. — These results suggest that political polarization around environmental issues is not inevitable but can be reduced by crafting proenvironmental arguments that resonate with the values of American conservatives.

Feinberg, M., & Willer, R. (2013). “The Moral Roots of Environmental Attitudes”. Psychological Science, 24(1), 56–62.

From Poell, Abdulla, Rieder, Woltering and Zack (2015)

It has been argued that contemporary protest movements organized on social media, such as the Occupy movement, are characterized by a logic of ‘connective action’, in which the sharing of personalized ideas, images and memes by individual activists unaffiliated with established organizations is central, and in which formal leadership is absent or unimportant.

This paper critiques that view by looking at the Facebook page Kullena Khaled Said (We are all Khaled Said), which played a role in bringing about the Egyptian revolution that toppled Mubarak’s regime, and by showing that the page’s admins, in their own way, took a leadership role in the events. The authors tie this to the idea of ‘connective leadership’, which contrasts with traditional social movement leadership: it relies on social media rather than mass media, steers discussion and invites participation rather than giving orders and proclaiming views, and coordinates streams of information and networks of people rather than a formal organization.

This fits well with my own emerging views on the topic. Social media and modern technologies have not made organizations and leadership irrelevant or useless, but they have, in some cases, changed their dynamics.

Taken together, the examination, on the one hand, reaffirms Castells’ (2012), and Bennett and Segerberg’s (2012) observation that the 2011 protest wave was not initiated or coordinated by formal SMOs and prominent activist leaders. — On the other hand, our analysis complicates the idea that this was an uprising organized by the crowd through self-motivated online sharing. It suggests that the sharing of grievances, as well as more complex processes of protest mobilization and coordination, was facilitated and shaped by what has been labeled as connective leadership. — Whereas social movement leadership appears effective in motivating protest participation through mass media, connective leadership, in its focus on actively involving users in the articulation of protest, seems especially suitable for the social media age.

Poell, T., Abdulla, R., Rieder, B., Woltering, R., & Zack, L. (2015). “Protest leadership in the age of social media”. Information, Communication & Society, 1–21 (advance online publication).

From Lamba, Malik and Pfeffer (2015)

This one looks at whether bursts of online controversy on Twitter (“firestorms”) have an effect on the behaviour of their participants, applying the idea of “biographical consequences of activism”.

On the other hand, if firestorms arise from existing social ties, it would point to firestorms being a consequence rather than a cause of other action, and if there is no relation to social ties, it would be inconclusive but, as social actions are embedded in networks of social ties, it would suggest firestorms are of little importance.

Going back to our theoretical motivations, it seems that at least among the firestorms we sample, we see no evidence of the type of social change associated with action that has biographical consequences on participants. This suggests that, at least along this dimension, firestorms should not be a source of anxiety for targets nor a source of satisfaction for opponents; firestorms in general do not create the conditions to lead to larger and more long-term actions, at least among the mass of participants.

The method (comparing participants’ mention networks before, during and after a firestorm, and against the mention networks of non-participants) is a bit rudimentary, but I like the point of departure.

Lamba, H., Malik, M. M., & Pfeffer, J. (2015). “A Tempest in a Teacup? Analyzing Firestorms on Twitter”. In 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (pp. 17–24). Paris, France.