Artificial Intelligence: Trust and Excellence

No technology is universally applicable. There is no single technology that can act as a master key to unlock all potential systems solutions.

Executive summary

Modern artificial intelligence (AI) is the subject of debate among policymakers in the European Union (EU) and its member states. On 19 February 2020, the European Commission launched a public consultation on AI. Contrary to popular thought, modern AI does not in and of itself constitute an excellence (and competence) problem, a trust (and security) problem, or an ethical and regulatory problem. Today's AI is not universally applicable: there is no single technology that can act as a master key to unlock all potential systems solutions, and all solutions are a combination of different technologies. The technology is thus maturing at its own pace; if we one day start getting close to "general AI," it may be particularly appropriate to re-examine the EU's approach and strategies.

Next, eight policy implications on AI are presented. These implications are divided into three categories based on the time horizon over which they could be carried out in societies: short-term, medium-term, and long-term policy implications.

Short-term policy implications

“Define AI and AI systems and describe which technologies are included in these definitions and descriptions. Without conceptual clarity on AI, the European Commission’s AI strategy will be ineffective.”

In the European Commission’s AI white paper, however, the concept of AI itself is not defined but rather characterized as something unachievable and mysterious. We have lived with AI for decades, and its regulation as a standalone phenomenon has never been deemed necessary. Many other AI-enabled sectors, such as banking, healthcare, and manufacturing, do not appear to be afflicted by the same degree of urgency, negativity, and legislative and regulatory problems regarding AI’s use. In fact, many of these industrial sectors have already made risk-based assessments of the use of different digital technologies, and there is no reason to treat AI differently.

“Before building European data spaces, data governance, especially for cross-sectoral and different risk-level data, needs to be established.”

“AI requires both general legislation (i.e., new industry classifications), and sector- and risk-level–specific ex-ante interventions (i.e., cross-sectoral and risk-level data).”

It is likely that the regulation of the overall phenomenon of platforms, data, and AI will require both general legislation and sector-specific, targeted interventions. Both approaches nevertheless pose their own problems, so other possible resolutions should also be considered. For example, one interesting prospect in the AI debate worth considering is whether an entirely new industry classification code should be established, at both the national and European levels, for classifying the platform operators that leverage data and AI.

In addition to these three policy implications, there is another short-term policy implication, different from those already presented and more relevant to the business sector:

“Promote the insourcing of data storage, tangible and intangible compute resources, and algorithms and tools for the AI development of companies in order to reach sector- and risk-level–specific working standards and, later, gain critical mass, which then potentially unlocks economies of scope and network effects.”

What is central to the discussion regarding AI platforms is that at least three digital platform companies — Amazon, Google, and Microsoft — go beyond offering stand-alone AI-powered products and services. By bundling data storage, compute resources, and the algorithms and tools for their development, these digital platform companies provide the core of a new middleware platform to be built upon within a larger system of systems and cyber-physical systems.

Medium-term policy implications

“European-level innovation promotion arrangements should always be temporary in nature — excellence and competence centers should be co-located with local universities, as with the Finnish Center for Artificial Intelligence.”

“If new structures/institutions for the innovation promotion arrangements of the European Commission are proposed, some old structures/institutions should be removed.”

At least thus far, temporary excellence centers (e.g., the Finnish Center for Artificial Intelligence, the 6G Flagship, the Smart Machines and Manufacturing Competence Centre (SMACC), and Finland’s Artificial Intelligence Accelerator) have acted as key contributors to new AI excellence, competence, and knowledge, and as facilitators and motivators for the companies across different industrial sectors, as well as the public sector, that work in collaboration with them. Such temporary arrangements have both advantages and disadvantages.

One advantage is that excellence centers, and the exposure they provide to other companies and universities, offer a reality check on what can and cannot currently be done with AI technologies. Excellence centers also coach and facilitate the ideation and growth of innovative Finnish players in the sector, enhancing participants’ ability to make the leap from R&D to commercialization. However, excellence centers do not push companies forward unless the companies themselves commit, invest, and execute. This is not really a disadvantage, but it needs to be clear that participating in an excellence center’s work can help but will not drive any major change by itself.

Long-term policy implications

“Establish a European-level identity management method for citizens, companies, products, services, and their digital twins (cyber-physical systems) in all sectors and at all levels of risk in order to enable next-generation digital systems development.”

Electronic identification is a European and governmental intervention and initiative intended to establish trust and security. Electronic identification alone, however, is not enough for any AI system. Just as citizens already have electronic identification, it should be extended to companies and their products and services. In doing so, the government would limit itself to enacting laws laying down boundary and data-governance criteria for the arrangements between citizens, independent companies, and industrial sectors.

“Establish European-level cyber-physical systems standards for the systems’ computations, networking, and physical process integration for all levels of risks.”

Instead of setting policies and strengthening coordination on AI and AI systems, the European Commission should build technology-neutral standards for computation, networking, and physical processes based on the risk level of an application.



More information:

Researcher Timo Seppälä, +358 46 8510500,


Artificial Intelligence: Trust and Excellence 5.5.2020

Presentation about AI that researcher Timo Seppälä gave on 5 May 2020.