
The Age of Too Much Information and How to Deal With It


“One of the effects of living with electric information is that we live habitually in a state of information overload. There’s always more than you can cope with”, Marshall McLuhan said on The Best of Ideas on CBC Radio in 1967. As with many of McLuhan’s other quotes, this one defined the problem long before it became important (or even visible) in public discourse.

“Too much information running through my brain / Too much information driving me insane,” sang Sting back in 1981. Twelve years later, Duran Duran recorded a song with the same title (and much the same idea), ending with a call to “dilate your mind.”

In 2011 people produce and consume far more information than they did in 1967, but the overload is still there. And, as McLuhan proclaimed, it will probably remain until the end of ‘electric information’ (if indeed there ever is an end).

What produces the overload?

Philipp Rautenberg, a Ph.D. student at BCCN Munich with a background in neuro-cognitive psychology and mathematics, and the moderator of the “Structuring Data Chaos” panel at Berlin Barcamp 2011, cites three major sources of the ‘data tsunami’ (which by 2007 already amounted to 295 exabytes of stored data worldwide): new technologies, new ways of accessing data, and new methods of arranging it.

Photo: Berlin Barcamp 2011. A live example of how technology adds more data: due to an error, Rautenberg’s Linux presentation software displays two screens instead of one.

How do we handle the data tsunami?

There are three ways to avoid the tsunami: 1. escape it (physical filtering), 2. ignore it (cognitive filtering), 3. develop content blindness (‘subconscious pre-processing’ in scientific jargon).

Another possible way of structuring the data chaos would be to enable everyone to structure information for themselves. One of the participants (speaking under the Chatham House Rule) proposed:

We can learn a lot from our brain in filtering online information, but we need to understand our own brain first. We need to learn how to read and write. Most people are customers of data, of apps, etc. But people need to know how to create data, how to create apps, etc.

The counter-opinion here is that people do not need to know how to produce more information, or to master the analytical tools needed to structure it, since it is people themselves who create the tsunami. Instead, people should become more proficient at organizing data, at being ‘better librarians.’

Another problem with deeper computer literacy (call it code literacy, or superliteracy) is the issue of equal access to data: while developed countries are indeed experiencing a data tsunami, there are regions still stranded in a digital Sahara, with expensive broadband, no smartphones, and so on (some countries, ironically or not, sit in both zones simultaneously).

The inequality in this field, it seems, is unavoidable. The next level of the digital divide, as one participant suggested, will be based not only on the price of access to data but also on the ability ‘to update the skill set to appreciate the data.’

Digital democracy and growing complexity

Apart from the issue of inequality, data chaos also reshapes the mechanisms of power in democracies. In practical terms, the most powerful person in the complicated system of electronic democracy is the chief engineer: the last word usually belongs to whoever implements the technology.

This is why many countries stick to ‘paper elections’ (the European model) rather than purely ‘electronic’ ones. In the United States, electronic voting is outsourced to private companies bound by contract.

Neither of these solutions, however, seems optimal. Two measures could help address the issue: 1. open-sourcing both the data and the algorithms that process it, 2. better visualization of existing data.

If e-voting procedures and software are distributed as open source together with the election data, there are ‘enough eyeballs’ to catch any bugs that might surface. But this solution then runs up against the privacy issue. The Barcamp discussion didn’t tackle the openness/privacy dilemma, but it is clearly a crucial question for the future of e-democracy.

Can the mind be dilated?

If not better computer and programming education, what then can be done to help us cope with the ever-increasing glut of data? Should we bother at all? After all, the brain copes somehow. The question then becomes “when will humans reach the limits of their brain capacity?”, almost as Duran Duran suggested back in 1993.

Neuroscientists say that the human brain hasn’t evolved much in the last 1,000 years. If a person listens to two stories, he or she will be able to memorize only one of them in detail. What people think of as increasing their brain capacity is in fact a trade-off (whose precise form has yet to be discovered, although authors like Nicholas Carr seem confident they know the answer).

My personal view is that even though our hardware (the brain and nervous system) is more or less stable and slow to evolve (slower than we might like), we can still improve our software: our conscious and unconscious mechanisms for filtering and processing data.

 
