Practically every image or video to be seen on the internet is based on the bitmap – a rectangular grid of thousands (potentially millions) of cells, each holding a single pixel of colour. Put enough of them together and stand back far enough, and the pixels form distinct pictures.

The principle was well understood by pointillist master Georges Seurat as early as 1884 – thirty-five years before Harry G. Bartholomew and Maynard D. McFarlane laid the foundation for the fax by sending the first ‘digitised’ images for newspapers, and seventy-three years before a National Bureau of Standards computer researcher called Russell A. Kirsch created the first storable digital image (a scan of his infant son).

Alexander Bain’s ‘Electric Printing Telegraph’ (1844)

These ideas didn’t occur in isolation. In 1844 the Scottish inventor Alexander Bain produced a delicate but functional prototype for a ‘Printing Telegraph’ (pictured above), and by the time Seurat was experimenting with pointillistic effect, the idea of building pictures from discrete dots was a fact of life in the form of halftone printing, a technology which the New York Daily Graphic had trailblazed over ten years earlier.

Nothing much has changed since those days – images are still analogous to very long Morse code messages: a stream of narrow-range decisions describing every single part of the image – yes/no, 0/1, red/green/blue, and so on. And since an image has to define its own canvas, even a completely white image needs to be described in this manner, on a per-pixel basis.
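As a rough illustration of that per-pixel cost, here is a minimal sketch (not from any particular format specification; the dimensions and the 8-bit RGB assumption are mine) of what an uncompressed, entirely white HD frame costs to describe:

```python
# A hedged sketch: the cost of describing an uncompressed bitmap pixel by
# pixel, even when every pixel is plain white. Dimensions and bit depth are
# illustrative assumptions.
width, height = 1920, 1080      # a common HD canvas
bytes_per_pixel = 3             # 8-bit red, green and blue channels

raw_size = width * height * bytes_per_pixel
print(f"Uncompressed white frame: {raw_size / 1_000_000:.1f} MB")  # ~6.2 MB
```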

The irrepressible – and practically incompressible – bitmap

Bitmap output takes a toll on bandwidth, as anyone who remembers the world-wide wait for high-res images to load back in the 56k modem era can attest. Video multiplies the problem by a factor of 24 for every single second of footage. With an Ultra HD stream consuming around 7GB of bandwidth per hour, Netflix alone accounts for 37% of internet traffic at peak times.
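To give a sense of the scale involved, here is a rough back-of-the-envelope sketch; the frame size, 8-bit RGB depth and 24fps rate are assumptions chosen for illustration, not figures taken from Netflix:

```python
# A rough, hedged calculation of why raw bitmap video is so heavy.
width, height = 3840, 2160          # Ultra HD frame
bytes_per_pixel = 3                 # 8-bit RGB, ignoring chroma subsampling
fps = 24

per_second = width * height * bytes_per_pixel * fps       # bytes per second
per_hour_gb = per_second * 3600 / 1_000_000_000           # gigabytes per hour
print(f"Raw UHD video: {per_hour_gb:,.0f} GB per hour")   # ~2,150 GB
print("Streamed UHD (heavily compressed): roughly 7 GB per hour")
```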

Bitmap’s early victories were numerous

Not only does this gargantuan glut of data lead providers to find unusual economies depending on the target market, but it all needs storing somewhere. Netflix keeps pristine high-resolution originals in the Interoperable Master Format (IMF), from which adaptable versions (mobile, medium HD, 4K etc.) are derived; and with demand for its services having grown by three orders of magnitude in eight years, the company finally abandoned its own general-use data centres for those of its arch-rival Amazon – though it still streams video via its own Open Connect partnership scheme.

So bitmap (or raster) is still the elephant in the room, nearly sixty years after the first computer-storable digitised image was created. In the form of video streaming, it’s eating up the bandwidth revolution of the last 15 years, and most of the talk about averting the disastrous ‘event horizon’ that seems to loom over streaming binges concerns the frantic expansion of infrastructure and service provision, with all the associated costs to the consumer. Twenty or thirty years of codec wars seem to have led us to… full resolution, all the time.

Without doubt, bitmap won the world over. But has it won forever?

Attack vector

Based on mathematics, vector graphics are infinitely scalable – until your graphics card or RIP system gets exhausted.

Before the Flash browser plug-in became the darling of annoying full-motion ads, a primary target for hackers and a general object of online hatred, it offered something new to the pre-broadband internet: vector-based motion graphics that delivered a video-like experience at speed, in an age when even a large image was a formidable download (never mind a video of any watchable quality).

Vector graphics get rasterised before you see them, since all the display surfaces in your life are raster displays*. However, the native dimensions of a vector graphic, still or moving, are arbitrary and mathematical. To form a simple curve, you need only two anchor points and a control point that dictates how the line bends between them. To make a raster image of the same curve you need, as usual, to start in one corner and fill in the dots until the image is finished. Effectively this is what your computer or device has to do to the vector information before showing it to you. However, the information itself needs little space to store and little bandwidth to transmit. And since vector data is essentially just text, it compresses extremely well.
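A minimal sketch of that asymmetry, with invented values: the entire curve below is described by three points (six numbers), yet rasterising it means visiting many individual pixels.

```python
# A hedged illustration: a quadratic Bézier curve is fully defined by three
# points, but a raster version of it has to be filled in pixel by pixel.
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bézier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

start, control, end = (0, 0), (50, 100), (100, 0)   # six numbers in total

# Sampling the curve densely and rounding to pixel coordinates approximates
# what rasterisation has to produce.
pixels = {tuple(round(c) for c in quadratic_bezier(start, control, end, i / 400))
          for i in range(401)}
print(f"6 numbers of vector data -> {len(pixels)} distinct pixels touched")
```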

Vector is no stranger to you

Vector is not some abstract technology; most of the popular sites you’re likely to visit today will serve you a vector image – likely some recurring logo – in the form of a Scalable Vector Graphic (SVG). Only a small subset of SVG’s remarkable capabilities is currently supported even in the most up-to-date browsers, because the format fought most of its battles before the pre-eminence of the GPU, and the local hardware limitations of the (later) smartphone revolution allowed no real ‘second wave’.
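To see how little data such a logo can involve, here is a hedged sketch – the shapes, colour and filename are invented for the example – producing a complete, infinitely scalable SVG file from a couple of hundred bytes of plain text:

```python
# A complete (if very plain) vector logo as text. Shapes, colour and filename
# are illustrative assumptions, not taken from any real site.
svg_logo = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="45" fill="#1a73e8"/>
  <path d="M30 55 L45 70 L72 35" stroke="white" stroke-width="8" fill="none"/>
</svg>
"""

with open("logo.svg", "w") as f:
    f.write(svg_logo)

size = len(svg_logo.encode("utf-8"))
print(f"{size} bytes of text, sharp at any resolution")   # roughly 200 bytes
```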

The vector information underpinning a level in Grand Theft Auto

If you’re a fan of videogames, vector is a huge part of your life, since the 3D geometry in your favourite titles is defined by vector shapes. And the fact that you can zoom this page until the text is practically large enough to be seen from space quickly reveals the primary presence of vector in your life – fonts. Finally, if you’re looking forward to new recommender systems and self-driving cars, you’re already counting on vectors to help advance AI far enough to make these happen.

Yet in the field of imagery, vector has retained the ‘cold’ image it earned early with the geometric flourishes of Disney’s vanguard 1982 CGI outing Tron, and the deliberately artificial style of modern children’s CGI movies, which was originally forged in Pixar’s experimental lamp movie ‘Luxo Jr.’ and permanently cemented by the pipeline limitations and rich imagination behind the Toy Story franchise. Modern sensibility considers such stylised CGI as ‘cartoons’ – and the vector ‘sterile’ and ‘perfect’.

Bitmaps, vectors and pin-ups

I was inspired to reconsider vector’s semi-abandoned place in the mainstream digital world by a new paper this week from Professor Xianping Li of the University of Missouri-Kansas City. Anisotropic Mesh Adaptation for Image Representation [PDF] may not have the most enticing title, but it reveals new approaches to using vectors to represent and even recreate ‘real world’, bitmap-style imagery.

The stages of Li’s vector reconstruction of the ‘Lena’ test image

The above image shows the fruits of Li’s work in reconstructing a real photograph – in this case the ‘scientist’s pin-up’, Lena Söderberg. The first image is the original photograph; the second shows the initial mapping of vector points; the third visualises how the points assemble to represent different areas of colour and detail; and the final image is a vector conversion of the 1970s image.

The vector approach to photorealism faces the same trade-off that video codecs do: where there is great perturbation and detail, such as in the trail of the feather boa, the vector points must cluster, just as codecs have to throw dense information at the problem – at the cost of file size (in both still and video imagery) and performance (in video content). Where only simple gradients need to be resolved, such as in the smooth shoulder area, only a handful of triangles is required to make an accurate rendering, and the extra effort expended on the boa ‘sells’ the low-effort gradient.

The boa cluster means that the final vector image contains plenty of information – but it’s text information, and where it has ‘little to say’ (such as in the out-of-focus background and the shoulder area) it takes up very little processing or storage capacity. The equivalent areas in a bitmap image, by contrast, get exactly the same per-pixel treatment as the most detailed areas.
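This is emphatically not Li’s anisotropic mesh adaptation, but a toy sketch of the general principle described above: spend sample points where the image has detail, and almost none where it is smooth. The function name, point budget and synthetic test image are all invented for illustration, and a greyscale NumPy array is assumed.

```python
# Toy sketch only: allocate sample points in proportion to local detail
# (gradient magnitude), so busy regions like the feather boa get many points
# and smooth regions like the shoulder get few.
import numpy as np

def adaptive_sample_points(image, budget=2000):
    """Return (row, col) points, distributed in proportion to local detail."""
    gy, gx = np.gradient(image.astype(float))
    detail = np.hypot(gx, gy) + 1e-6          # gradient magnitude per pixel
    prob = (detail / detail.sum()).ravel()    # more detail -> more likely
    idx = np.random.choice(image.size, size=budget, replace=False, p=prob)
    return np.column_stack(np.unravel_index(idx, image.shape))

# Usage sketch: a synthetic image that is smooth on the left, noisy on the right.
img = np.zeros((200, 200))
img[:, 100:] = np.random.rand(200, 100)
points = adaptive_sample_points(img)
print(f"{(points[:, 1] >= 100).mean():.0%} of the points landed in the detailed half")
```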

The point of points

No-one can reasonably suggest that it will be possible any time soon to download and automatically decompress a 10MB vector file which would ‘fold out’ into a full-length high-resolution movie (and here we’re not even considering audio). Unless Moore’s Law starts applying to weeks instead of years (not likely, it seems), the amount of processing power needed to interpret vector information at 24fps is simply not on the horizon, even in an age where the GPU has become so pivotal both in mainstream consumer activity and in scientific research.

‘Peppers’ – ironically the least spicy of the classic 1972 scientists’ picture favourites – original, wireframe and full vector artwork

Yet it is worth considering how little regard hardware manufacturers have given to vector processing in the years when broadband pushed the bitmap to pre-eminence. Even a reasonably capable laptop seems to struggle when I scroll down to a vector image of even medium complexity in a PDF. The hardware hardly ever has to deal with this problem, and it makes a meal of it when it does come up. Vector resolution has been corralled into the GPU, under the aegis of games developers.

Beyond the high-end challenge of video, it is also worth considering what fruits a more dedicated hardware and software approach to vector interpretation might yield in the field of still imagery. There seems to be potential here to replace JPEG and competing bitmapped formats either with a new iteration of photorealistic SVG or with a completely new wrapper format, delivering high-resolution imagery at a fraction of current bandwidth occupancy and storage space, and with comparable – or even superior – results.

Beyond smartphone austerity

Ultimately any possibility for progress in this matter depends on how the current tension between local and remote processing plays out in the next 5-10 years. The smartphone revolution initially reset local hardware’s economics to the levels of the early 1990s: the processing and storage budgets on portable devices are still so tightly rationed as to represent a kind of ‘digital austerity’. Because of its low footprint and compressibility, vector has a lot to offer in terms of storage; but since the smartphone sector has regarded it as dismissively as the desktop sector did, vector remains prohibitively high-cost in terms of processing and power usage.

However, the increasing interest in fog computing – combined with the coming advent of Oculus-inspired VR envy – could be a distant portent that ‘thin’ and cloud-centric devices may be driven by force majeure to deliver greater local processing power. But it’s up to the research community to continue to show that vector still has enough unrealised promise to earn a second chance at more mainstream integration.


* Vector displays – once popular in oscilloscopes, HUDs and air traffic control systems – have now given way to raster displays, despite a popular retro-fetish for their distinctive appearance. However, if you’re willing to put some effort in, you can still build your own.