In future, the public may never be alerted to NSA-type
revelations because surveillance is fast becoming automated
By Christopher Steiner
Kurt Vonnegut once opined: "Human beings are chimpanzees who get crazy drunk on power." That power corrupts is hardly debatable. For that reason, the evolution of espionage has run in parallel with the development of organised tribes of human beings that we now refer to as countries.
Human
nature makes it predictable that
organisations such as the NSA would be cataloguing phone
calls and other electronic interactions between humans. But
Edward Snowden's revelations also tell us how far
electronic snooping has yet to go. While the din of outrage
still resonates, we should be thankful that Snowden – a
human being – actually exists. In the future, the world may
never be alerted to such breaches of privacy because there
will be no humans involved in spying at all. Just as
algorithms have conquered our stock markets and our
musical tastes, so too will they conquer surveillance. Even
the most human of tasks, snooping, will become the province
of the bots.
While
it's true that the surveillance Snowden spotlighted is of a
new and digital variety, it still required human levers to
give it any meaning. The
NSA,
for example, using its call log data, would take an interest
in people who repeatedly dialled the phone numbers of known
troublemakers. Human agents would query the call-logging
database and find out who a prime target in Yemen might be
speaking with inside the US. The data is collected passively
and electronically, but much of the intelligence and the
methods to derive it come straight from human minds. But
what will happen when a machine makes the rules?
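The kind of human-driven query described above can be pictured as a few lines of code. This is a toy sketch with invented data and field names, not the NSA's actual schema or tooling: given a log of calls, tally who a target number has been speaking with.

```python
# Toy illustration of querying a call log for a target's contacts.
# All numbers and field names are hypothetical.
from collections import Counter

call_log = [
    {"caller": "+967-555-0101", "callee": "+1-555-0142"},
    {"caller": "+967-555-0101", "callee": "+1-555-0142"},
    {"caller": "+967-555-0101", "callee": "+1-555-0177"},
    {"caller": "+1-555-0199", "callee": "+1-555-0142"},
]

def contacts_of(target, log):
    """Count how often each number appears on a call with the target."""
    counts = Counter()
    for call in log:
        if call["caller"] == target:
            counts[call["callee"]] += 1
        elif call["callee"] == target:
            counts[call["caller"]] += 1
    return counts

print(contacts_of("+967-555-0101", call_log).most_common())
# → [('+1-555-0142', 2), ('+1-555-0177', 1)]
```

The point is that the rule itself ("repeated calls to a known number are interesting") still comes from a human mind; the machine only executes it.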
In the
late 1940s, Vonnegut observed how General Electric was
replacing human machinists with computer-operated milling
machines to cut rotors for jet engines. This passing of
duties from humans to bots led
Vonnegut to imagine a world where human chores of all
manner would cease being the labour of men and become
strictly the work of machines. Power and income, then, would
be concentrated among the few who controlled the machines.
Snowden and the teams of analysts at the
NSA,
CIA and
GCHQ
who sit in front of our stores of electronic intelligence
will hardly be necessary in 15 years. Algorithms will have
replaced them, leaving only a few humans, like
General Keith Alexander of the NSA, to watch the
house.
Underneath those top humans will be machine-learning
algorithms that dance across the data of humanity like a
spider tending a web. They won't be programmed simply to
search for call patterns or numbers; they will learn what
patterns and numbers are significant by ingesting news,
conflicts and terrorist threats in real time, comparing that
to activity seen on computer and phone networks. Algorithms
that trade stocks at the speed of light already read
specially tailored news feeds from Bloomberg and Reuters;
the intelligence world, although less lucrative than that of
Wall Street and the City of London, will not be far behind.
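To make the idea concrete, here is a deliberately crude sketch of what "learning what is significant by ingesting news in real time" might look like at its simplest. Every keyword, number and threshold here is an invented illustration, not a description of any real intelligence system:

```python
# Toy illustration: score network activity higher when it coincides
# with threat-related terms in a news feed. All data is hypothetical.
THREAT_TERMS = {"attack", "bomb", "cell"}

def significance(news_item: str, call_spike: int) -> int:
    """Crude score: threat words seen in the news, weighted by a
    simultaneous spike in call volume on the network."""
    words = set(news_item.lower().split())
    hits = len(words & THREAT_TERMS)
    return hits * call_spike

score = significance("chatter about an attack on the port", call_spike=40)
print(score)  # → 40; a nonzero score flags the window for review
```

A real system would learn those weights rather than hard-code them, but the shape is the same: events in the world tune what the machine watches for on the wire.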
Algorithms are more efficient than people; they can find
relationships within data streams that a human eye couldn't
spot in 20 years; they're indefatigable – and they're cheap.
Also on the positive side, algorithms aren't much for drama,
counter-espionage or leaking. They do their jobs and don't
ask questions. But they can make mistakes that border on
inexplicable. Just as an
algorithm belonging to Knight Capital in 2012 went
berserk and lost that firm $440m (£288m) in 45 minutes, an
NSA
algorithm could finger thousands of innocent people to be
targeted for extra surveillance, or worse.
But
these things can and do work in what would seem to be
incongruous arenas. The CIA has been using algorithms that
run on a branch of mathematics called
game theory for more than two decades. The man behind
these strings of reason and mathematics,
Bruce Bueno de Mesquita, a political science professor
at New York University, says that analyses driven strictly
by human observation are flawed by their very nature. Human
analysts, he points out, have appetites for meaningless
information such as personal gossip, backstories and tales
of failure and conquest. Algorithms couldn't care less about
these things, of course – a fact that helps them do their
job better than humans. A CIA study found that Bueno de
Mesquita's algorithms were right twice as often as its own
analysts in making predictions about future intelligence
events. The study spanned more than 1,700 predictions made
by the algorithms – a task the bots dutifully performed
without billing even one hour of overtime.
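A toy flavour of the expected-utility forecasting Bueno de Mesquita popularised: each player in a dispute gets a policy position, a measure of power and a salience (how much they care), and the simplest possible forecast is the power-and-salience-weighted average position. This is a radical simplification for illustration, not his actual model:

```python
# Toy weighted-position forecast; players and weights are invented.
players = [
    {"name": "A", "position": 0.9, "power": 0.8, "salience": 0.9},
    {"name": "B", "position": 0.2, "power": 0.5, "salience": 0.6},
    {"name": "C", "position": 0.5, "power": 0.3, "salience": 0.4},
]

def forecast(players):
    """Average each player's position, weighted by power * salience."""
    weights = [p["power"] * p["salience"] for p in players]
    total = sum(weights)
    return sum(w * p["position"] for w, p in zip(weights, players)) / total

print(round(forecast(players), 3))  # → 0.674
```

No gossip, no backstories: just positions and weights, which is exactly the indifference to colour that, the CIA study suggests, makes such models outperform human analysts.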