Research undertaken between 2008 and 2014 suggests that more than 81% of Tor clients can be ‘de-anonymised’ – their originating IP addresses revealed – by exploiting the ‘NetFlow’ technology that Cisco builds into its routers, and the similar traffic-analysis software that runs by default in other manufacturers’ hardware.
Professor Sambuddho Chakravarty, a former researcher at Columbia University’s Network Security Lab who now researches network anonymity and privacy at the Indraprastha Institute of Information Technology in Delhi, has co-published a series of papers over the last six years outlining the attack vector. He claims a 100% ‘decloaking’ success rate under laboratory conditions, and 81.4% in the actual wilds of the Tor network.
Chakravarty’s technique [PDF] involves introducing disturbances into the highly regulated environs of Onion Router protocols using a modified public Tor server running on Linux – hosted at the time at Columbia University. His work on large-scale traffic analysis attacks in the Tor environment has convinced him that a well-resourced organisation could de-anonymise Tor traffic on an ad hoc basis with very high success rates – and that one would not necessarily need the resources of a nation state to do so, since a single AS (Autonomous System) could monitor more than 39% of randomly-generated Tor circuits.
Chakravarty says: “…it is not even essential to be a global adversary to launch such traffic analysis attacks. A powerful, yet non-global adversary could use traffic analysis methods […] to determine the various relays participating in a Tor circuit and directly monitor the traffic entering the entry node of the victim connection.”
The technique depends on injecting a repeating traffic pattern – such as HTML files, the kind of traffic that makes up most Tor browsing – into the TCP connection that the server sees originating from the target exit node, and then comparing that pattern against the traffic of candidate Tor clients, as derived from routers’ flow records, to identify the victim client.
Tor is susceptible to this kind of traffic analysis because it was designed for low latency. Chakravarty explains: “To achieve acceptable quality of service, [Tor attempts] to preserve packet interarrival characteristics, such as inter-packet delay. Consequently, a powerful adversary can mount traffic analysis attacks by observing similar traffic patterns at various points of the network, linking together otherwise unrelated network connections.”
The online section of the research involved identifying ‘victim’ clients at PlanetLab locations in Texas, Belgium and Greece, and exercised a variety of techniques and configurations – some involving control of both entry and exit nodes, and others which achieved considerable success by controlling only one end or the other.
Traffic analysis of this kind does not involve the enormous expense and infrastructural effort that the NSA put into their FoxAcid Tor redirects, but it benefits from running one or more high-bandwidth, high-performance, high-uptime Tor relays.
The forensic interest in quite how the international cybercrime initiative ‘Operation Onymous’ defied Tor’s obfuscating protocols to expose hundreds of ‘dark net’ sites, including the infamous online drug marketplace Silk Road 2.0, has led many to conclude that the core approach to de-anonymising Tor clients depends on becoming a ‘relay of choice’ – a default resource when Tor-directed DDoS attacks put ‘amateur’ servers out of service.
On the Effectiveness of Traffic Analysis against Anonymity Networks Using Flow Records [PDF]
Identifying Proxy Nodes in a Tor Anonymization Circuit [PDF]
LinkWidth: A Method to measure Link Capacity and Available Bandwidth Using Single-End Probes [PDF]