Article 2(b) of Directive 95/46/EC provides that “processing of personal data (processing) shall mean any operation or set of operations which is performed upon personal data, whether or not by automatic means…”
The definition of processing is technologically neutral because it is so basic: it is meant to define the scope of the Directive rather than to identify the specifics of any technology. Once the processing falls within the framework because it is “automatic”, the technology issue drops away. The UK DPA specifies equipment operating automatically in response to instructions given for that purpose, envisaging a person issuing instructions to the equipment while at the same time pursuing the purposes listed elsewhere in the Act. It is an MS-DOS picture of interacting with a computer, if you can remember that.
In which case, how, within the data protection framework, do you deal with problems arising from the new technologies?
One answer to this question is that it is the other rules that take account of the technology, because they take account of the operations carried out “automatically” on the data. But the same rules apply to manual records. It is also difficult to see, in the mere fact of the processing being automatic, what made it necessary to protect the processed data, even though that was the point of the Directive.
The assumption must be that “automatic” wasn’t thought to be that different from familiar modes of acting on data, even though it was thought to be sufficiently different to require some special attention. This ambiguity has been exposed by the new technology.
Take the good housekeeping data principles, for instance, which remain central to any conception of data protection. Collecting and holding data on limited terms may once have been constrained by the level of the technology itself, and by the fact that the technology wasn’t pervasive. There was protection in the fact that the technology couldn’t go too far or too fast.
Today, it is like being able to buy a Formula One car off the shelf on the assumption that you will drive it like the Austin 7 you used to have. Surely you can’t claim that the difference between them doesn’t matter because they both have internal combustion engines, or because the F1 car is going down the same road as the Austin 7? Is it the same car, or the same road, any longer?
We can also observe the technology influencing the legitimate and lawful purposes, taking those purposes beyond what we might call a “natural” definition and understanding, because of the technological infrastructure. When the functional purpose and the hard data operations sit on a common technological skein, the technology runs all the way through.
The DP framework does not, however, permit a principled challenge to a technologised purpose (such as the ID card). The third principle cannot be used to dismiss the first principle, only to modify it (data must not be excessive for a lawful purpose). But how are we to think about an ICT-saturated purpose? Is Facebook just another way of meeting your friends?
More questions than answers. But the technologised lawful purpose in a world of pervasive ICTs inevitably drags high-intensity processing behind it, and what we are seeing now is the protection framework knocking on the purpose (which it was never intended to do), because it is difficult to determine where the “automatic” technology stops.