Five steps to make your privacy information easier to understand with Privacy Icons (and generate a competitive advantage)
Privacy information that is easy to understand is important for users of data-based services and service providers alike. Understandable privacy information builds trust with users and therefore provides an important competitive advantage for the provider.
Everyone agrees that “a good service” offers users significantly more advantages than disadvantages. It should therefore be self-evident that this also applies to how information about users is handled: with personal information, providers can improve their services, tailor them to the individual needs of their users or even invent completely new services. However, providers (or any third party) can also use this data to the users' disadvantage. For example, they can gain disproportionately deep insights into users' private lives (or insights that the users simply do not want), and the information may be used against them in recruitment, in judicial or private legal proceedings, or in any other context. A privacy-friendly service mitigates such risks. Privacy information should make this ratio of high benefits to low risks transparent to users in a credible way. With good privacy information, users can thus see that a service offers them significantly more benefits than risks – or at least they can judge for themselves whether the benefits are worth the risks. Good privacy is therefore an important quality feature of a data-based service and has a significant impact on how users perceive the brand and reputation of the service provider.
However, designing privacy information that is both truthful and easy for users to understand is challenging. Many complex balancing decisions have to be made, not only on legal questions but also on questions of visual design. This is why designing understandable privacy information requires combining at least two disciplines that, in current practice, have had little to do with each other: law and visual (as well as UX) design.
In our experience, the challenges of combining legal and visual (and UX) design methods culminate in the design of Privacy Icons. Privacy Icons are expected to make lengthy legal texts clear and easy to understand (see, for example, Article 12 section 7 of the EU General Data Protection Regulation (GDPR), which encourages the use of such icons). The challenge in designing Privacy Icons that work, however, lies in the almost contradictory nature of legal texts and icons: with legal texts, their designers (i.e. lawyers) try to cover every eventuality of relatively complex circumstances, to work out subtle differences and thereby to protect themselves against possible legal proceedings; with icons, on the other hand, their designers (i.e. visual designers) try to condense a statement visually to its core content so that it is as intuitively understandable as possible. In our opinion, this is why so many approaches to the design and use of Privacy Icons end up being inadequate. Attempts in practice to combine the two methods often end with lawyers producing awful visual designs or visual designers making incorrect legal assessments. Neither party can be blamed for this: how could they be expected to know the other's method? On the contrary, it is commendable that the attempt is being made at all!
To successfully combine legal and visual (as well as UX) design methods, lawyers and visual (and UX) designers therefore need to understand each other's methods, goals and terminology so that they can align them. We are far from claiming that we have already achieved this goal with the Privacy Icons and the guidance published here. Nevertheless, we think that our Privacy Icons and this guide may be helpful for some readers – whether practitioners or academics – at least as a starting point for further improvements. At some point, we may together arrive at legal texts, such as privacy policies, that are both legally correct and easy to understand. To move towards this goal, the following five steps summarise our experience from a five-year research and development process on how privacy information can be made easier to understand by using (and designing) Privacy Icons.
The first step is to explore your “design space”. Visually, the design space is determined on the one hand by the space you can use to provide your users with your privacy information and, on the other hand, by how much attention your users are likely to pay to that information in their usage context. Legally, your design space is defined by the applicable law, which determines which information you must provide and in which manner. Last but not least, the design space may also be shaped by the state of the art (SoA), i.e. the best technical and/or organisational (including visual) implementation of a legal provision known (and available) on the market. All three aspects determine which information you should show where and in what form.
Exploring the visual design space means understanding the usage context in which your users are confronted with your privacy information. For example, do you show your privacy information in a cookie banner that automatically appears in the lower right or left corner of the screen when users visit your website? Is it information to be provided when you ask your users for consent during the registration process for your service? Is it information displayed within a contextual consent form that appears when your users interact with the content of your service (e.g. with Google Maps embedded in your website)? Do you want to offer background information that users can (only) access when they actively click on a corresponding button (e.g. a privacy policy in the footer of your website)? Or would you like to inform your users via a consent agent where they can centrally manage their privacy settings long before they use the services through which their data is collected? All of these usage contexts differ in terms of how much visual space you have to provide your users with your privacy information and, equally important, how much attention your users are likely to pay to your information in that context. If you have the entire screen of the user's device at your disposal and the user's primary task in this context is reading your privacy information, it is much easier to make this information understandable than if the user is actually trying to find something else on your website (other than your tiny cookie banner interrupting their search).
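If it helps to structure this exploration, the small sketch below (in TypeScript, purely for illustration) records for each of the usage contexts mentioned above a rough rating of the available visual space and the attention users are likely to pay. The ratings are our reading of the examples above, not empirically measured values.

```typescript
// Illustrative sketch only: the context names and ratings are assumptions
// derived from the examples in the text, not validated measurements.

type Rating = "low" | "medium" | "high";

interface UsageContext {
  name: string;
  visualSpace: Rating;       // how much screen space is available
  expectedAttention: Rating; // how much attention users are likely to pay
  note: string;
}

const usageContexts: UsageContext[] = [
  { name: "cookie banner", visualSpace: "low", expectedAttention: "low",
    note: "interrupts the user's actual task on the website" },
  { name: "consent during registration", visualSpace: "medium", expectedAttention: "medium",
    note: "shown while the user signs up for the service" },
  { name: "contextual consent form", visualSpace: "medium", expectedAttention: "medium",
    note: "appears when interacting with embedded content, e.g. a map" },
  { name: "privacy policy (footer link)", visualSpace: "high", expectedAttention: "high",
    note: "only reached by users who actively look for it" },
  { name: "consent agent", visualSpace: "high", expectedAttention: "medium",
    note: "central settings managed long before the services are used" },
];
```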
Besides the visual design space, you need to determine what information you are legally obliged to provide. In the context of the GDPR, this is first of all the information required by Articles 13 and 14 GDPR. It essentially includes information about the purposes for which the data is processed and for how long; who receives the data; on what legal basis the data is processed (see Article 6 section 1 GDPR); and what data subject rights the users have with respect to the processing of their data. Additional information may be required by the transparency principle pursuant to Article 5 section 1 lit. a GDPR. Furthermore, according to Article 25 section 1 GDPR, the service provider must implement these requirements in such a way that they effectively (!) protect the users from the data processing risks (more precisely, from the risk that the data processing undermines the autonomous exercise of their fundamental rights). In this context, as mentioned above, the service provider must also take into account the SoA, i.e. the implementation that is known to be the most effective. One of the key questions that regularly arises in this assessment is how specifically the service provider must describe the purposes of the processing. On the one hand, this question is central because the processing purposes are the most important starting point for many other legal requirements under data protection law. Equally important, the specified purposes must enable users to assess the scope and consequences of the processing of their data and whether they find this appropriate or objectionable. To achieve this, it is best to first describe the purpose of the data processing from the perspective of the service provider (which usually captures the advantages of the service for the user) and then to clarify, with regard to the technical and organisational means used for this purpose, which risks to the user's fundamental rights are at stake. All of this illustrates how much information a service provider needs to give its users so that they can ultimately make their own informed decision for or against using the service (or for or against giving their consent), and how complex the underlying assessment is.
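As a rough aide-mémoire, the information duties sketched above could be collected in a simple data structure. The following TypeScript sketch is an illustrative assumption with field names of our own choosing; it does not reproduce the wording of Articles 13 and 14 GDPR and is no substitute for a legal assessment.

```typescript
// Illustrative sketch of the notice contents discussed above.
// Field names are our own shorthand and omit case-specific details.

interface ProcessingPurpose {
  description: string;      // the purpose from the provider's perspective (usually the benefit)
  benefits: string[];       // advantages of the service for the user
  risks: string[];          // risks to the user's fundamental rights, given the means used
  technicalMeans: string[]; // technical and organisational means employed
  legalBasis: "consent" | "contract" | "legitimate interests" | "other"; // cf. Art. 6(1) GDPR
  retentionPeriod: string;  // how long the data is stored
}

interface PrivacyNotice {
  controller: string;            // the service provider responsible for the processing
  purposes: ProcessingPurpose[]; // the central element of the notice
  recipients: string[];          // who receives the data
  dataSubjectRights: string[];   // e.g. access, rectification, erasure, objection
  furtherInformation?: string;   // anything else required by the transparency principle
}
```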
As mentioned above, providers of data-based services are obliged under the GDPR to take the SoA into account when implementing the transparency obligations. But even outside the scope of the GDPR, they should use the SoA as a benchmark, because this is usually the most effective way to convince users of the quality of their service. To find out whether a SoA already exists, look at the processing purpose and at the data protection risks it entails given the technical and organisational means employed. If you find comparable processing operations that have, at first glance, been made transparent in a convincing manner, check in more detail which methods were used to test the effectiveness of the respective transparency measure. Only by knowing these methods can you really assess how reliable the test results are, which implementation has been proven to be most effective and whether you should adopt that implementation yourself. Even if this may seem complex to some readers, researching the SoA is worthwhile not only because it promises the most effective implementation to adapt to your own case. Adopting an existing SoA also saves the cost and effort of testing the effectiveness of a transparency measure yourself (see more under point 4 below).
After you have explored the design space, you can begin to actually map the necessary privacy information to the different levels of your visual design space (taking into account the SoA). You should apply the so-called three-layered approach: you show your users the information that is most important to them on the first visual level; from there, they can click through to a second level for more information, and from there to a third level. Interestingly, in one of our quantitative tests on the effectiveness of cookie banners (with almost 1,000 users), only 0.6 percent clicked through to the second level and a mere 0.1 percent to the third. These results show how important it is to choose carefully what information to display on the first level. However, one should still not underestimate the importance of the second and third layers. The fact that so few users click through to them does not mean that these layers are useless. As with all other legal means in a state governed by the rule of law, the most important thing is that your users can (!) obtain the information on the second and third levels if they want to or need to, not that they actually do. To give an example from another legal area: neighbourhood law is not useless just because you don't sue your neighbour every time fruit from his trees falls onto your property, even though you could legally do so. Even if it is rarely used, neighbourhood law remains important for the cases in which legal protection is really needed. The same idea applies to data protection law. In view of the numerous qualitative tests we have conducted in recent years, you should specify at the first level the processing purposes for which you want to process your users' data. To allow your users to weigh up the benefits and risks of these purposes at a glance, you should also make the benefits and risks clear at the first level. You can then provide further information at the second level: in particular, what data is collected and what technical and organisational means are used that lead to these risks (and benefits); what protective measures you implement to protect your users from these risks; what rights or measures they can use to protect themselves; and so on. In our designs, we have often chosen to list the data recipients on the second level as well, but to provide more information about them on the third level. In detail, there are of course many possible variations. What is important is that they are tested (and at best compared) for their effectiveness and then implemented accordingly. What makes the allocation of privacy information to the different visual levels particularly challenging is that you should use as few textual and visual elements as possible. Every word and every visual element you can leave out of an icon makes it easier for users to understand it intuitively. In this context, less is indeed more! Of course, there is a minimum number of elements below which the information would become incomprehensible, and the question of how many elements can be dropped without falling below this limit is what makes the task quite difficult. Apart from that, a simple and clear layout and language should be used. For designers, both are usually a matter of course; from experience, lawyers have more difficulty achieving this because they quickly get lost in a maze of ifs and buts. The challenge here is to use simple language that nevertheless correctly reflects the legal facts.
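As a minimal sketch of how the allocation described above could be captured, consider the following TypeScript structure. The allocation of elements to layers is an assumption based on the text above and would still have to be tested (and ideally compared) for its effectiveness.

```typescript
// Minimal sketch of the three-layered approach described above.
// The allocation of elements to layers is an assumption, not a tested design.

interface LayeredNotice {
  firstLayer: {
    purposes: string[];  // the processing purposes, specified per purpose
    benefits: string[];  // so users can weigh benefits ...
    risks: string[];     // ... against risks at a glance
  };
  secondLayer: {
    dataCategories: string[];     // what data is collected
    technicalMeans: string[];     // means that lead to the risks (and benefits)
    protectiveMeasures: string[]; // safeguards implemented by the provider
    dataSubjectRights: string[];  // rights users can exercise themselves
    recipients: string[];         // listed here in many of our designs
  };
  thirdLayer: {
    recipientDetails: string[];   // more information about each recipient
    fullPrivacyPolicy: string;    // link to the complete legal text
  };
}
```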
Once you know what information you are providing to your users, at which visual level, and so on, you can finally start working on the Privacy Icons. Given the design principles described above, the challenge of this task cannot be overstated. The question is which of the textual information should be supported by an icon so that the information as a whole becomes more intuitively understandable for users. You should not cover all textual information with icons, otherwise you will overload the visual design space with elements. Instead, you have to select which pieces of information on a level are so important or suitable that they should be visually supported by an icon. Textual information should rarely be completely replaced by an icon; rather, you should combine text with icons to ensure that your users understand the specific meaning of each icon. Only if the meaning of an icon has already been made genuinely clear on a previous visual level may you replace the text completely with the corresponding icon on a subsequent level. Another interesting observation from our research and development process is that users usually understand icons only in relation to a specific usage context. This means that we constantly had to adapt our icons to a specific context, which required quite difficult judgement calls. Let's take the example of personalised advertising in the sale of consumer goods (as opposed to personalised election advertising or the personalisation of any other content on the internet). The data processing operations for the purpose of personalising advertising in the sale of consumer goods are very complex and involve many different aspects: from the types of data processed, to the technical and organisational means, to the benefits and risks for the users. Take the category of risks alone (which we consider to be the most important from a data protection perspective). Here, numerous workshops with data protection experts have revealed, among others, the following immediate risks:
- deep insights into the private life of the users due to profiling and the sharing of the generated profiles with a large number of recipients;
- risks to the users' autonomy through the manipulation of their purchasing decisions (among other things due to opacity);
- discrimination risks due to the display of different advertisements based on different personal characteristics;
- plus, where applicable, the display of different prices and the associated risk of financial disadvantages.
In order not to overburden users with a multitude of icons, we therefore focus on individual aspects. This means that the textual term "personalised advertising in the sale of consumer goods" could be abbreviated and combined with one or more icons.
Depending on the icon, you will set a different focus, replacing, supplementing or emphasising individual text elements, and so on. The more elements, the more likely you are to overwhelm the user; the fewer elements, the more incomplete the description of the legal situation. Because the visual design space varies so much depending on the context of use, we have usually had to recombine the icons, or often just some of their visual elements, to get the right focus. To sum up: finding, designing and combining the right icons is as challenging as it is crucial. Furthermore, as we have focused on two contexts in developing the Privacy Icons presented here, namely the processing of personal data when visiting websites and through digitalised building technology, our library is far from complete. Every time we have tried to apply our existing Privacy Icon library to a new context, such as the automotive sector, we have had to design new icons, especially for other types of data. For these two reasons, we have placed our Privacy Icons library under a copyleft licence, under which our icons may be edited, changed and further developed; new icons based on ours must then be published under the same licence conditions. In this way, we hope that the publicly available Privacy Icon libraries will become more and more extensive over time. In any case, you should use pre-existing icons if they work. Don't reinvent the wheel: this would be unnecessarily burdensome not only for you but also for your users, who would otherwise be confronted with different icons for the same referent across the digital space.
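To illustrate the kind of selection and combination described above, the following sketch maps the purpose from our example to one icon per risk identified in the expert workshops. The icon identifiers are hypothetical placeholders, not names from our actual library.

```typescript
// Illustrative sketch: combining an abbreviated textual term with icons that
// each stand for one of the risks identified in the expert workshops.
// The iconId values are hypothetical placeholders.

interface IconBinding {
  iconId: string;        // placeholder identifier for the icon asset
  label: string;         // short text kept next to the icon so its meaning stays clear
  replacesText: boolean; // only true once the meaning was made clear on an earlier layer
}

const personalisedAdvertisingIcons: IconBinding[] = [
  { iconId: "icon-profiling",      label: "Deep insights through profiling",            replacesText: false },
  { iconId: "icon-manipulation",   label: "Manipulation of purchasing decisions",       replacesText: false },
  { iconId: "icon-discrimination", label: "Different ads based on personal traits",     replacesText: false },
  { iconId: "icon-pricing",        label: "Different prices / financial disadvantages", replacesText: false },
];
```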
Whether icons actually improve users' understanding of privacy information needs to be empirically validated. We typically tested our designs in three steps. As a starting point, we researched common visualisations for individual textual elements (words such as "autonomy") and often created additional designs of our own. In a first step, we tested these designs in small qualitative studies to see whether laypeople understood the meaning behind an icon, or which of several designs they understood best. In a second step, we tested individual combinations of textual and visual elements, including the layout, again in small qualitative studies. Even if such qualitative tests (especially with small groups of six people) do not allow generalisable conclusions to be drawn, they are very helpful for avoiding the most obvious comprehension problems.
Only on the basis of these qualitative tests did we then test different variations for comprehensibility in larger quantitative studies (with up to 1,000 people). The criteria we used to measure comprehensibility were specific risks and benefits that we had previously attributed to specific processing purposes in dedicated qualitative workshops with data protection experts. For these tests, we expanded our research and development team to include UX design researchers (currently from psychology and behavioural economics). We believe that the methods developed in this way can be used to establish and further develop the SoA in the implementation of transparency requirements for certain processing operations. Again, these are of course not the only possible methods, but they should be a good start.
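For readers who want a simple, generic way to compare the comprehension rates of two design variants quantitatively, a two-proportion z-test is one conventional option. The sketch below is a textbook illustration with hypothetical numbers; it does not describe the specific test designs used in our studies.

```typescript
// Generic sketch: comparing the comprehension rates of two design variants
// with a two-proportion z-test (standard textbook method, not our specific design).

function twoProportionZ(
  correctA: number, totalA: number, // variant A: users who answered the comprehension items correctly
  correctB: number, totalB: number  // variant B: same measure
): number {
  const pA = correctA / totalA;
  const pB = correctB / totalB;
  const pooled = (correctA + correctB) / (totalA + totalB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / standardError;
}

// Hypothetical example: 620 of 1,000 users understood variant A correctly,
// 540 of 1,000 understood variant B. |z| > 1.96 indicates a difference that is
// statistically significant at the conventional 5% level (two-sided).
const z = twoProportionZ(620, 1000, 540, 1000);
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```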
Most readers have probably heard the phrase "Do good and talk about it." This is exactly what you should do once you have designed understandable privacy information with Privacy Icons that work. This is especially true if you have been able to demonstrate empirically that your design effectively informs your users about their benefits and risks, or even does so most effectively compared to similar services offered by your competitors. The reason for this is not only charitable but also commercial, even in a radical sense: if you can prove that your designs represent the new SoA, this pushes your competitors into the grey area of legality, because, at least within the scope of the GDPR, Article 25 section 1 forces them to take the SoA you have advanced into account in their own implementation. Your designs thus become the benchmark for the entire market. For this to work, you have to talk about it, and this in turn benefits not only your business but ultimately society as a whole.
All icons are licensed under Creative Commons Attribution-ShareAlike (CC BY-SA): this licence lets others remix, adapt and build upon our work, even for commercial purposes, as long as they credit the authors and license their new creations under identical terms. See the licence agreement at: https://creativecommons.org/licenses/by-sa/4.0/legalcode.
Julie Heumüller, Max v. Grafenstein, Isabel Kiefaber, Valentin Rupp, Otto Kolleß