The following was written by the now deceased <predator> back in 2001. Most of the URLs are now broken and the technology referred to may seem old and out of date but the issues he describes and solutions proposed are still worth considering today, now more than ever.
Content: Derived in part from the cat.org.au minutes 5 Nov 2001 and comments on
the catgeek listserv, Nov 2001
Warn: Opinions expressed herein may not represent those of the host brain.
Warn: Contains unauthorised thought processes.
Status: <R> for rant.
The fine art of anonymous media
Nov 16 2001
I am <predator>, a member of the CAT collective since about 1997. CAT is
an acronym for Community Access Tecknowledgy, an anarchosyndicate
internet service provider which has its origins during the very early
1990's, when it was Community Access Television.
This file is hosted on a CAT server in Sydney, Australia, called
conway.cat.org.au. It used to be called black.cat.org.au. If you try to
resolve the black.cat address you will be directed to another CAT server,
ollie.cat.org.au. The name change came about after a major OS version
upgrade in January of this year.
CAT's web server, conway, hosts a public-contributory activist calendar
called Active Sydney. Active Sydney has counterparts in several other
Australian cities. It takes a lot of hits. It's so successful that even
the cops look at it when they want to know what street protest, community
rally (etc) is planned when and where.
In its previous incarnation as black.cat, the CAT web server was the
platform where Active Sydney's developmental programming efforts were
centred, which originated out of early Reclaim the Streets webcasts.
Active Sydney's PHP/Postgres code was written by CAT geeks, and this code
is what enables Active Sydney to function at all. The same code was also
the basis for the development efforts to build the database engine which
makes Indymedia function. So black.cat, in significant ways, is the
machine where the Indymedia open publishing phenomenon originated.
CAT member Simon Rumble has written a useful summary about the evolution
of the webcasting software at http://www.cat.org.au/cgi-bin/fom?file=36
The ollie.cat/black.cat box is a donated Cyrix-based computer, and the
webserver conway is also a donated box, a commodity-guts PentiumII which
runs Linux and is maintained by CAT volunteers. CAT is an open
organisation - whereas parliamentarians and police in Australia do not
make public archives of their email transactions to increase their
openness and transparency, the CAT lists are deliberately publically
archived. Some of us even use our real names.
By dint of their location, CAT servers fall under the influence of
the Australian legislature, and are within the jurisdiction of Australia's
court system and its various telecommunications bureaucracies. It is worth
noting that in Australia, while we have constitutional rights to a trial
by jury and freedom of religion, we have no constitutional protection of
freedom of speech. We have a federally funded government organisation
called the OFLC, the Office of Film and Literature Classification (it also
classifies software and music), which has banned the sale of at least 30,000
publications since I was born in 1971. Amusingly, some of these
books were banned decades after their publication. Misguided censorshit is
alive and well in Australia.
There is also a considerable body of common law dealing with what is called
libel, and these laws are primarily used to determine if an individual suffers
economic loss on the basis of what is claimed by various journalists in the
media. Some of us at CAT have started to feel the pinch of the libel laws,
and interestingly this is not because of anything we ourselves have ever
written. Various large mainstream media corporations, their expensive lawyers, and
the police, have during recent history rattled sabres in our direction,
requesting server logs, or demanding that particular anonymously
contributed and somewhat critical Indymedia article pages be removed. The
rattle is getting louder. They're scared. They're losing control of their
precious myth factory, losing their carefully fabricated perception of
credibility and the audience sizes which go with it, which makes companies
want to pay to advertise with them. It must somehow hurt their wallets.
Indymedia must, therefore, be doing something significantly right.
It used to be that if a website's content was hosted outside of Australia,
then the Australian laws lacked any power over the box or its contents.
Sydney Indymedia, for example, is not hosted in Sydney, or even Australia.
Most of CAT, including myself, does not even have a password to give us
access the box where it is hosted. The Sydney Housing Action Collective
site, which gives the location of abandoned buildings suitable for
conversion to autonomous housing, is hosted in Germany, for example, and
has hence exhibited glorious immunity to Australian take-down notices.
SHAC's membership is unknown, communications to it are protected by hard
cryptographic software (PGP), hence no actual people can be associated with it.
However, the jurisdiction limitation is starting to change. Whereas SHAC is
operated anonymously, Indymedia sites typically are not, since openness is
supposed to be part of their fundamental nature. What this means is that human
beings with real names become associated with possession of the means of control
of certain Indymedia sites, and as such become, correctly or not, designated as
people who can not only be made to enforce removal of particular contributions
from the site, but who can also be held responsible for failure to remove
content, in the usual expensive and protracted court procedural manner, where
justice is what you get when you run out of money.
Now, of course it has not been so bad in Australia as it has been in
Italy, where, after the Genoa protests, the cops smashed squats, raided
Indymedia premises, were physically violent to contributory independent
journos, and confiscated or destroyed computer equipment (this seems to be
an Italian tradition, even the old FidoNet nodes suffered similar
persecution over ten years ago). However, there is no reason to expect
that it will not eventually become that way. Three weeks ago the NSW
police, unable to gain access to a heavily barricaded squat at a
long-abandoned building owned by the local arm of the Anglican Church, in
the Sydney suburb of Newtown, called in the fire brigade who, with the
blessing of the owners, used hydraulic equipment to force the front door
off its hinges. Once inside they made everyone leave the premises,
sleeping rolls and all. For their uh, safety, you understand. Sleeping on
the footpath is safer than sleeping inside a building. It's tha law.
If merely taking preventative steps against homelessness is crime enough
to warrant this sort of action, surely anything perceived to be active
provocation of wealthy inflated media egos is likely to result in the same or
worse in the long term.
Whilst CAT isn't expecting the hydraulic removal of the door to its
premises any time soon, any money donated to CAT by the community, which
might be caused to be spent on defense attorneys and legal counsel, in
order to fight off harassment by Big Media, is really money we cannot
afford to spend on anything other than bandwidth and equipment. CAT takes
no corporate or government funding, after all. We are not rich, nor well
defended by lawyers on expense accounts.
The S11 aviation incidents in the US have, with careful media massaging,
given the governments of the world what they consider to be carte blanche
as regards stripping away everyone's legal rights to things like free
speech, in order to help the fight against terrorism (read: fight anything
which impinges upon wealthy corporate interests). Australians living near
where I live, who speak a different language to me, have better tans and
believe in a god other than profit, are systematically demonised every
other night on TV; important news never makes it to air while abject
trivia (sport, advertising) increasingly saturates the available airtime.
As I observe it, the mainstream media's content is increasingly discrepant
with daily reality. The only group who can counter this discrepancy is the
public, and Indymedia has successfully assisted them to do this, to the
extent that the authorities have asked to grab server logs, to find out
who these nasty people with a non-approved viewpoint were, perhaps to help
identify them for later re-education. Isn't it just too bad no such
logs are kept.
Indymedia is a very successful first step in the implementation of free
contributory media. However, as those who operate Indymedia sites know, it has
its problems. Especially now in the ever more nasty legal landscape.
As I write, new legislation is being crafted in the parliament of New
South Wales, by stealth, which, incredibly, aims to make the act of
uploading anything considered offensive by a policeman into an act
answerable in court - that is, the NSW cops (the same people who were
crowing last year that `silence is violence') on behalf of the OFLC, will
become the arbiters of what can be freely said and what cannot. It has,
unsurprisingly, received NO media coverage. It has made it through its
second parliamentary reading. If you upload anything, to anywhere, and do
it while you're in NSW, the police service will be able to arbitrarily
make you responsible for it. This is exactly the sort of institutionalised
insanity contributors to Indymedia, organisations like it, open email
lists, newsgroups, or any other public contributory media will be facing
in the immediate future.
Anonymous accounts on free web hosts allow you to do a certain amount,
but due to the architecture of the legal system, where eventually someone
is made responsible, and the architecture of the nameserver/DNS system
which points back to some kind of responsible human being, the sysadmins
become the meat in a sandwich of which they had no awareness, but of
course, under the legal system as we are subjected to it, ignorance is no
defense. The ISP administrators can always be leaned on in order to force
removal of certain content.
There are additional, larger issues.
First, centralised (server) systems tend to result in a small group of
people, typically those equipped with the technical prowess to operate a
server, becoming disproportionately responsible for the operation of a
website with increasing logistical demands, in terms of hardware,
bandwidth, accounting, administration and security requirements.
Stallman.indymedia.org is being hammered by planetwide HTTP requests, and
as such needs loads of disk space, RAM, processor grunt and data
communications capacity; the provision of these requirements falls heavily
on a small group of technical people, who are a very small proportion of
the total indymedia readership. Further, this administrative load tends to
tie them up so they can't contribute to the media content themselves.
Second, since the operation tends to be condensed around a particular box
with a group of people who run it, and who quite reasonably do not want
random anybodies using the machine for purposes other than those intended,
centralised servers tend to accrete bureaucratic overhead, which tends to
then generate all the usual problems of bureaucracies: glacial response
times, increasing self-servitude and exponentiating complexity, none of
which assist the operation of free media or the distribution of knowledge
of how to produce and distribute it.
Third, anyone who has looked at an Indy page has observed annoying contributions
in which they have no interest whatsoever. Indy server operators have been asked
to implement rating schemas, content filters and everything up to and including
censorshit in order to prevent the poor, defenseless readership from having to
suffer exposure to these postings.
In the corporate print media, contributors are paid 45 cents a word,
making this article "worth" about two grand at commercial rates. The
pay-per-word architecture has the interesting consequences that concision,
or the usage of a few complex words instead of many short ones, is less
profitable to the author. Further, paid contributors are compelled to
write for an audience - that is, they are discouraged from seriously
challenging or confronting the mindset of the readers, or seriously
upsetting them, or even writing without the persuasion of a particular
demographic as their objective.
This is because writing with insouciance, without deferring to the
sensibilities of the audience, produces articles which might eventually
reduce the total audience size. Audience size is what determines the rates
for which advertising space is sold, and because advertising is the
primary source of income for corporate publications, this audience size
must be maximised at all costs, some of which include, for example, the
quality, confrontation potential, depth of research and even word
complexity of the article. Mustn't use up saleable ad space on mere
journalism. Were this article written under such constraints, you would
not have been exposed to this particular self-referential digression, for
example, QED. I note that these constraints seem not to have prevented
certain writers or their automated grammar checkers from generating
meaningless, buzzword-laden word salad, however, perhaps because it was
found that this sort of stuff *attracts* certain types of audiences.
In any case, when contributing to Indymedia, contributors don't have to
assume that the audience wants something specific. This tends to increase
the diversity of the contributed articles, which is a good thing and a
welcome change in the mainstream media ideological monoculture. However,
the audience does not necessarily want to see every contribution. In the
present client-server model they get whatever contributions they're given,
whether they like them or not; if they don't like them, they eventually
stop asking for them, which is standard human information foraging
behaviour, and is the same reason I stopped watching TV.
However, as I see it, free media providers should NOT be responsible for
catering to the likes and dislikes of the readership. The audience must
therefore be given tools to assist them in their search for content
relevant to the requirements which they, as individuals, must specify. You
can't, however, print a program guide for a site whose content is by
definition neither programmed nor designed to be fitted into a program guide.
We need solutions to these problems. Even though the current client-server
contributory media model can be forced to behave at larger scales, a small
number of servers presents us with lessened redundancy (site redundancy
improves reliability) and increased failure and confiscation
vulnerability: One or two critical boxes can go down, and the whole system
dies. Tao.ca, which mirrors cat.org.au, exhibited this problem this year.
The idea has been floated that on Indy pages, there be installed a "Delete
this story" button. I am opposed to it on two fronts. First, this would
turn indymedia into just the sort of site nobody would want to read
because all the stories which challenged particular viewpoints could be
censored, anonymously, with nothing more than a mouse click. Second, there
would never be an equivalent requirement made for media websites operated
by large newsprint and broadcasting corporations (let alone on the content
fed into their massive transmitters)... usual story, the little operators
can't afford to fight their way through the legal morass, whereas the big
wealthy operators can tie up the court system for years, during which they
continue to broadcast as they please. One law for them, another for us.
This "delete-it-yourself" model might be a revealing short-term experiment
if, and only if, the censors had to identify themselves by name, address,
organisation, and provide reasons for their actions, but I strongly
suggest that it never, ever be implemented as a serious model for free media.
Additional ideas have been kicked around about how one might go about
implementing anonymous uncensorable media. Machines with one
anonymous group user, open-root-passwords, concealed boxes using wireless
links, large piles of read-only filesystem, etc, solve some problems and
create others, but centralisation and its attendant problems remain,
namely that an idiotic law can be written which would enable someone to:
- traceroute the box, get a warrant and confiscate the thing; Or,
- de-allocate IP name space from organisations deemed to be irksome, by
pushing the right buttons at ICANN or AUDA or whatever organisation
provides the local DNS name registry services - "Don't you
understand? These free media people are terrorists!"; Or,
- render unlocatable any IP address space owned by the free speech site,
by leaning on whoever controls the routing table entries of the
gateways; presumably this would be done by illegalising the
possession of a routing table entry (to something which would be
subsequently called a "rogue box" in the usual sensationalist way)
in much the same way as it is currently illegal in Australia to possess
certain kinds of file (porno pictures, drug synthesis instructions,
music of scatological or racist genres, anything containing creative
thinking, etc) on your harddisk. Traceroute will tell you which
machines route to which boxes, and I don't think VPNs will be immune either.
If history is anything to go by, it will only be a matter of time until
this sort of thing happens... many new and potentially democratising forms
of information distribution have been squashed or regulated to death for
the benefit of the corporate system, everything from pianola rolls and the
postal service to CB and packet radio.
So the thing to do here is to build a truly public, polynymous (many
named) box which everybody owns and everybody runs, where _users_ decide
for themselves what they want to look for and what they want to publish.
I think the way this might happen is that responsibility for the "box" be
voluntarily dispersed onto as many people as possible (n-thousands) by
adopting a model entirely different to the client-server model for serving
up media web pages. This inescapably means we must use the plural form of
the word box.
We need to codify a Distributed Independent media platform across massive
numbers of boxes. It sounds nasty. It is nasty. But it isn't like this
problem has not been solved before.
Napster's death, brought on by the RIAA and its lawyers, was the first real
example of thorough stomping-into-a-grave and pissing-upon of a freed
internet media website, and server centralisation and lack of operator
anonymity were its weaknesses. Nullsoft learned from it and generated
GNUtella. GNUcleus is the Windows peer app for it. Download it and try it.
You'll be amazed, not least by the fact that none of the machines involved
know where almost any of the music is, but also at how many terabytes of
music you can suddenly access. There's no GNUtella server, per se,
however, so how does Gnutella do it?
Each machine knows a little bit about where a requested file might be, or
it knows which machines to ask about where the data might be. It's rather
like a social network wherein, say, you might need some electrical work
done - you might not know how to do it yourself, and you might not know
anyone who can do it, but you know someone who knows someone else who
knows how to do it. Or they know someone else in turn. You can have many
many degrees of separation between requesting box and providing box, and
the search still works. It is in effect a distributed search engine and
distributed server all in one. This is called the peer-to-peer model.
GNUtella operates in an infinitely distributed manner where all people
downloading and contributing all host a little bit of the content. Beyond
bandwidth restrictions (which are negotiable by other software, or even
additional hardware which people can cheaply install and configure for
themselves - check out http://www.cat.org.au/~predator/getlaid.html and
http://air.net.au), I see no reason why it couldn't work for other kinds
of data; it's just files and code, after all.
Another, perhaps better, example is Freenet. Not only is it a distributed
peer system but it has hard cryptography embedded right into the system.
Search propagation times behave well, being proportional to log(n) for a
system with n peers. (Math: log(10) is 1, log(100) is 2, log(1000) is 3,
log(1000000) is 6. For 100 million users, a search might take 8 times
longer to propagate than for a network of 10 users, though the bandwidth
demands start to get nasty.)
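The arithmetic above can be checked directly; this trivial sketch just prints the log10 scaling:

```python
import math

# Each tenfold increase in peers adds only a constant to the search cost.
for n in (10, 100, 1000, 1000000, 100000000):
    print(f"{n:>9} peers -> relative propagation cost {math.log10(n):.0f}")
```

Ten users cost 1 unit, a hundred million cost 8: eight times slower for a network ten million times larger.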
Indy's slogan is "Everyone has a story. Everyone is a journalist." Peer to
peer systems might have as their catchphrase, everyone is a client,
everyone is a server. It's much more participatory and democratic. Want
free media? Do it yourself, anonymously, on your own box. Content
immediately becomes public property on publication in such a system: free
media (publically owned content) and copyrighted work (privately owned
work) are intrinsically opposed philosophies.
The number of different locations where a contribution exists, and
therefore the number of people involved in hosting the content, rapidly
becomes too big for the legal system to process. Contributors are
anonymous anyway, so the task of pinning them down is made even more
difficult, especially since the peer seeks out a file which might not
actually be in the same place all the time and which eventually comes from
a machine about the identity and ownership of which it cannot directly
know. All it knows is, it asked the next machine along to locate and
provide a file, and eventually the file was provided. From... somewhere.
Freenet's protocols do not even care if the peers involved do not have
permanent IP addresses; In fact, in some ways, it's better if they don't,
since no small group of people who have responsibility for these IP
numbers is named in any central name registry. Peer-to-peer suits virtual
private networks perfectly. If certain large independent uncensored
contributory media websites were coded this peer-to-peer way and the peer
application program was subsequently distributed, free media would be
permanently invulnerable to the exact sort of idiotic take-downery and
libel law crap and harassment by the TIO, et al., which has characterised
the last decade of communications law evolution in this country.
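Part of why location stops mattering in a system like Freenet is that content is requested by a key derived from the content itself, not by the address of any machine. A minimal Python sketch of that idea follows; this is not Freenet's actual key scheme, and the choice of SHA-256 and the sample text are mine:

```python
import hashlib

def content_key(data: bytes) -> str:
    """Derive a key from the content itself, so a file can be requested
    and verified no matter which peer happens to serve it."""
    return hashlib.sha256(data).hexdigest()

article = b"Anonymous report from the demonstration, 16 Nov 2001."
key = content_key(article)

# Whichever unknown peer returns bytes matching the key is, by
# construction, serving the genuine article.
served = article  # pretend this arrived from somewhere in the network
assert content_key(served) == key
print("verified under key", key[:16] + "...")
```

The requester verifies the bytes against the key, so it need not trust, or even identify, the peer that served them.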
The peer-to-peer model disperses the responsibility for media content
across the participants in communication, which is ultimately where it
should be since communication is a dynamically evolving data transfer
between two data processing entities. The model matches closely some of
the proposals in historical Anarchist academic works, notably some ideas
from Ivan Illich ("Tools for Conviviality", London, 1973) and the
`Information Routing Groups' of David Atkins (1984, "The IRG solution:
Hierarchical incompetence and how to overcome it", London, Souvenir Press).
See also: http://www.uow.edu.au/arts/sts/bmartin/pubs/98il/il02.html
The programmers of course merely (!) write the code to make it all work,
so they never say anything libellous, so they don't have to hide. A
graffiti artist sprays something on a wall, but nobody sues English
teachers, bricklayers or spraypaint manufacturers for teaching someone
how to write English, providing a canvas to write upon, or making
available the equipment to spray paint upon it, respectively, do they?
The responsibility for the nature of content lies with the contributors
and the people who read a contribution, not the people who create the
tools for communication therebetween.
Programmers, who would eventually implement this scheme, in whatever way
it is implemented, should not have to hide their identities and form
exactly the sort of undemocratic unaccountable secretive society which,
for instance, runs a typical large commercial TV station.
Further, the peer-to-peer implementation is an intrinsically difficult
substrate for a multiperson bureaucracy to grow upon. Since each
individual human operating their own peer is the entire bureaucratic node
for their box, the complexity of the internal bureaucratic processes which
might arise are confined to individual skulls where, owing to massive
processing resources and huge internal bandwidth, decisions can be made
rapidly about how to configure a human individual's particular box. The
only bureaucratic structure which could usefully exist for the peer code
is already used in the making of Indymedia code as is, namely CVS, the
Concurrent Versions System, which assists in the tracking of the
development of the actual code from which the app originates. Given
geekdom's intense problem-solving orientation and legendary distaste for
bureaucratic procedure, I suspect any problem in this realm would be short-lived.
As for the peer-to-peer-indy code I suspect it would need to be written
from scratch but I think a lot of its required functionality is already
lying around for free - PHP, postgres, mozilla, freenet; the app could
lean heavily on the code structures therein. It must, of course, be
open-source, so people will be able to know that what they compile or
execute has no onboard spyware or advertising distributorship built into
it, as has happened for some of the recent peer-to-peer apps which have
come to replace napster.
In much the same way as gnutella distributes _requests_ for particular
files of music based on what the title, artist, etc is, a distributed
indymedia peer could simply request anything with a datestamp generated
within the last few days, and fitting any newsy parameter you like...
keywords, specific languages, etc; any machine in the peer system with
content matching the request could then serve up whatever it had, to
whatever machine wanted it. No logs, of course. And since you're searching
for specific things, the amount of irrelevant content you see collapses as
your specificity increases, which returns to the readership the
responsibility for choosing what they want to look at. Your request
tightly defines what you want. You never go to google.com and ask for
"everything", do you?
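The request-matching described above might look something like this hypothetical sketch, where a peer serves up only local items fitting the requester's own parameters. The store layout and field names here are invented for illustration:

```python
from datetime import date, timedelta

# A peer's local store: each contribution carries a datestamp,
# keywords and a language tag (all field names invented here).
local_store = [
    {"title": "Squat eviction in Newtown", "date": date(2001, 11, 14),
     "keywords": {"housing", "police"}, "lang": "en"},
    {"title": "Reclaim the Streets webcast", "date": date(2001, 10, 1),
     "keywords": {"protest"}, "lang": "en"},
]

def match(request, store, today=date(2001, 11, 16)):
    """Serve only items fitting the requester's own parameters:
    a recency window, keyword overlap, and language."""
    cutoff = today - timedelta(days=request["max_age_days"])
    return [item for item in store
            if item["date"] >= cutoff
            and request["keywords"] & item["keywords"]
            and item["lang"] == request["lang"]]

req = {"max_age_days": 7, "keywords": {"housing"}, "lang": "en"}
print([i["title"] for i in match(req, local_store)])
# ['Squat eviction in Newtown']
```

The tighter the request, the less irrelevant content comes back; and since the peer keeps no logs, nothing records who asked for what.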
When the peer application program is written, news about where to
anonymously download it could, of course, be posted on indymedia as it
already exists. Of course, if there was opposition to this next step in
the evolution of independent media from the present dug-in operators of
existing Indymedia sites, this itself would serve as adequate proof of
society's requirement for the distributed peer-to-peer app, since
Indymedia would no longer be doing what it claimed.
Geeks, go to it. Getting a peerage never looked so good.
<predator> 16 Nov 2001
Re: The fine art of anonymous media
Aka Predator (Mike Carlton), he used to work closely with AUSI, Australian Universal Space Industries. He coded the engine of the first indymedia network in the world; they even made a movie about him featuring an actor playing Julian Assange.