Artificial intelligence calls for regulatory perceptiveness

Mattila Juri, Seppälä Timo

The regulatory approach towards artificial intelligence is currently the subject of heated debate among policy makers. This debate, however, is dominated by a one-dimensional viewpoint in which the digital forest cannot always be seen for its trees. Contrary to popular belief, artificial intelligence does not in and of itself constitute a regulatory problem.

Artificial intelligence is neither the essence of the phenomenon nor the problem

Artificial intelligence (AI) is not a new phenomenon. Various AI applications, such as machine vision, have been in use in Finland for several decades. The concept of AI itself, however, carries a certain sense of unattainability. Like a treasure chest waiting at the end of the rainbow, the definition of AI seems to recede further beyond the horizon as technology continues to advance.

Although AI has been a part of society for quite some time, never before has its regulation as a standalone phenomenon been deemed necessary. In fact, it is only since the platform giants entered the market and started using AI to combine and analyse their user data across industry boundaries and without transparency that the development of AI and the use of algorithms have been viewed as a large-scale problem.

Contrary to popular belief, though, AI cannot in fact be regarded as a separate technological phenomenon. Instead, it is inextricably linked to the broader picture of digitalisation. Indeed, the platforms, the mass data generated by them, and the AI algorithms used to analyse this data are often different manifestations of the same broader phenomenon.

In other words, AI does not seem to be at the heart of the regulatory problem that is often associated with it. Rather, its bad reputation appears to stem from the questionable ways in which platform giants sell and use the data they collect from their users. On the other hand, many other AI-enabled sectors, such as healthcare or manufacturing, do not appear to be afflicted by the same degree of negativity towards its use.

The platform is a legislatively slippery fish

How, then, should we address the wider problem of platforms when we debate the merits of AI? From a traditional law-making perspective, the alternatives are general legislation and sectoral regulation.

In terms of general law, the answer can be found in the European Union’s General Data Protection Regulation (GDPR), which also sought to curb the activities of the platform giants.

Although GDPR as a general provision has, at least for the most part, had a successful impact on controlling the corporate use of personal data, its effectiveness against platform giants has been questionable. With the resources available to the platform giants, jumping through the various legislative hoops and complying with the formal legal requirements is not an issue. However, any restrictive measures are rendered inconsequential if consumers freely consent to the use of their data. And if there is one thing the platform giants know how to do, it is persuading users to grant this consent.

Moreover, from the point of view of market competition, it is probable that GDPR has only served to strengthen the market position of the platform giants. In fact, the impact of its provisions is most painfully felt by the smaller competitors of the platform giants, whose resources to comply with the requirements are far more limited. As the regulatory jungle grows denser, the most attractive option for many companies is not to process any personal data subject to GDPR at all.

On the other hand, regulating platforms with sectoral regulation can also be problematic. The business model used by the platform giants is often based on overstepping traditional industry boundaries. For example, when Google’s reCAPTCHA system asks users to select all the tiles containing road signs in a photo depicting a traffic situation, it is highly likely that Google will use this data to fuel the supervised learning of its autonomous vehicles.

By positioning their businesses in the peripheries of traditional industry classifications, platform companies seek to leverage a competitive advantage over other players in the sector in the form of a less stringent regulatory environment. Moreover, when a platform is built across industry boundaries, the platform operators often work on the assumption that individual sectoral restrictions do not apply in the same way and, consequently, they do not constitute a barrier to the exploitation of data.

A closer inspection of industry delineations?

It is likely that the regulation of the overall phenomenon of platforms, data, and AI will require both general legislation and sector-specific, targeted interventions. Both of these approaches nevertheless pose their own problems and, as such, other possible solutions deserve consideration, too.

For example, one interesting prospect in the AI debate worthy of consideration is whether an entirely new industry classification code should be established, at both the national and European levels, for the classification of platform operators that leverage data and AI. Such an approach could potentially enable more targeted regulation of the platform giants, with less legislative collateral damage.

Another question worth considering is whether more effort should be made to develop and codify data leveraging practices in which data is analysed by AI, mobilised, and exploited across the existing industry boundaries. For example, different authentication practices have evolved over time in different industries. Should society establish general rules on how data generated through strong authentication may be combined with data generated by less strongly authenticated users, and on how reliable such a combined and processed data product should be considered in each context?

Effective regulation requires a capacity to see the bigger picture — especially in the digital age

Technology does not develop in a vacuum, detached from the rest of society. For that reason, the separate regulation of individual technological phenomena almost always opens a dangerous Pandora’s box. Despite decades of development, the ways in which AI technologies are currently applied remain quite narrow in scope. While the societal impact of AI may appear wide due to the joint effect of the various manifestations of digitalisation, it is important that the description of the problem is sufficiently broad-based when deciding on the regulatory approach towards new technologies.

That being said, technological neutrality must not become a regulatory dogma. Especially if progress begins to approach the development of strong AI, it may once again become appropriate to re-examine a technology-driven regulatory approach. In the meantime, however, the goal posts in the AI debate will still be shifted several times.


This column was drafted as a part of the BRIE-Etla 2019–2022 – Shaping the Future in the Era of Intelligent Tools: AI and Beyond research project funded by Business Finland. The BRIE-ETLA research examines the impact of new and emerging information technologies on businesses and society. Its main objectives are to examine how the deployment of new technologies and the use of digital platforms will transform business, industry and work, to identify managerial implications for businesses, and to propose techno- and socio-economic policy recommendations.