Friday, 30 June 2023

The Concept of a Virtual Registered Office for EU Law

 



Virginijus Bitė, Professor of Law at the Law School of Mykolas Romeris University

Ivan Romashchenko, Senior Researcher of the Legal Technology Centre at the Law School of Mykolas Romeris University

Photo credit: EmDee, via Wikimedia Commons

On 29 March 2023, the European Commission published a Proposal for a Directive as part of its initiative to upgrade digital company law. The Proposal focuses mostly on increased transparency and access to information, as well as the cross-border use of company data. These goals were earlier set out in the inception impact assessment report published on 20 July 2021. However, the inception impact assessment included one more policy option that was omitted from the Proposal: making EU company law rules and procedures fit for the digital age. Surprisingly, the virtual registered office (VRO) was not given the green light, owing to mixed feedback from stakeholders.

Before the European Commission mentioned the VRO in these documents, there was an attempt in Lithuania to enshrine the concept at the national level. A draft law on the introduction of a VRO was submitted to the Lithuanian parliament, the Seimas, in 2018. Although the Seimas generally supported the idea, the provisions on the VRO have not yet been adopted.

Despite the lack of regulation at the EU and national levels, and the apparent scarcity of academic attention to VROs, researchers and practitioners have shown enthusiasm about the opportunities that introducing a VRO might provide. According to a 2017 report by Adelė Jaškūnaitė and Raminta Olbutaitė, prepared within the ‘Create Lithuania’ programme, even in the absence of a legal framework on VROs, Lithuania possessed the technical resources necessary to ensure communication with public institutions as a basis for establishing a VRO. They concluded that there was a need to replace the physical address with a virtual one, since a physical address, as an official registered office, had not fulfilled its purposes effectively. Legal entities had often been registered at so-called ‘mass addresses’, with some addresses serving as the registered offices of hundreds of companies. If introduced properly, a VRO would reduce the financial burden on both public authorities and companies. Nor would its introduction negatively affect corporate governance, since most communication among a company’s stakeholders already takes place digitally. Inspired by the editorial of Lina Mikalonienė, in our recent research we delved into the concept of a VRO and sought to evaluate how it might best be introduced.

For a VRO to replace the registered office, it must be able to fulfil the same functions: ensuring that the applicable law and jurisdiction can be determined with respect to the legal person, and ensuring proper communication between a legal entity and its counterparties.

As far as the first function is concerned, there are reasons to conclude that applicable law and jurisdiction can be determined without knowing the exact physical location of a legal entity: information about the country where the entity is located should suffice. A VRO would be perfectly able to ensure the connection between a legal entity and the applicable law, even if we knew only the country the legal entity came from. In that case, national law would be assigned the task of connecting the entity with the proper local laws and regulations, as well as the relevant local authorities. In Lithuania, for instance, a legal entity might have a VRO with a link to Vilnius and its city authorities.

Regarding the second function, it should first be noted that the existing regulation needs to change, and to change in the direction of wider digitalisation, so that legal entities can act through a VRO instead of a physical address. While moving in this direction, care should be taken not to forget weaker parties, including consumers, some of whom might be forced to communicate by regular mail because of poor digital skills or a lack of access to electronic tools. In addition, some foreign state authorities might be prohibited from using such an electronic system and permitted to use only regular mail or the services of clerks. A link to a physical address for communication between a legal entity and its counterparties therefore seems practical for a transition period, until all players and society at large adapt to e-communication and accept it more readily.

For these reasons, it is recommended that the EU intervene in this sphere by removing any ambiguity and defining a registered office as including both a physical address and a VRO. EU intervention should also stipulate requirements for organisations that provide VRO services in Member States, and set out a legal basis for selecting a virtual address instead of a physical one and for the communication of domestic and foreign actors through a VRO. These new rules need to contain safeguards against fraudulent practices; to this end, all legal entities using a VRO should temporarily maintain a link to a physical address, for instance the address of the director or another contact person. This connection to a physical address should be viewed as a transitional compromise on the path to a full VRO, with the gradual development of improved virtual cross-border communication eventually allowing the traditional registered office to be replaced by its virtual counterpart.

 

For more information see: Bitė, V. and Romashchenko, I., 2023. The Concept of a Virtual Registered Office in EU Law: Challenges and Opportunities. Utrecht Journal of International and European Law, 38(1), pp. 25–38. DOI: https://doi.org/10.5334/ujiel.605

Friday, 2 June 2023

The UK's pro-innovation AI regulatory framework is a step in the right direction

 



Asress Adimi Gikay (PhD), Senior Lecturer in AI, Disruptive Innovation, and Law (Brunel University London) Twitter @DrAsressGikay

Photo credit: via Wikimedia Commons

The Essence of the UK's pro-innovation regulatory approach  

After several years of evaluating the available options for regulating AI technologies, and the publication of the National AI Strategy in 2021 setting out a regulatory plan, the UK government finally set out its pro-innovation regulatory framework in a white paper published in March of this year. The government is currently collecting responses to consultation questions.

The white paper specifies that the country is not ready to enact statutory law governing AI in the foreseeable future. Instead, regulators will issue guidelines implementing five principles outlined in the white paper. According to the white paper, following the initial period of implementation, and when parliamentary time allows, 'introducing a statutory duty on regulators requiring them to have due regard to the principles' is anticipated. So, an obligation to enforce the identified principles will be imposed on regulators if it is deemed necessary based on lessons learned from the non-statutory compliance experience. But this will most likely not take place for the coming two to three years, if not longer.

The UK's pro-innovation regime starkly contrasts with the upcoming European Union (EU) AI Act's risk-based regulation, which applies different legal standards to AI systems based on the risk they pose. The EU's proposed regulation bans specific AI uses, such as facial recognition technology (FRT) in publicly accessible spaces, while imposing strict standards for developing and deploying so-called high-risk AI systems, including detailed safety and security, fairness, transparency, and accountability requirements. The EU's regulatory effort aims to tackle AI risks through a single legislative instrument overseen by a single national authority in each member state.

Undoubtedly, AI poses many risks, ranging from discrimination in healthcare to reinforcing structural inequalities or perpetuating systemic racism through policing tools that use (il)literacy, race, and social background to predict a person's likelihood of committing crimes. Certain AI uses also pose risks to privacy and other fundamental rights, as well as to democratic values. However, the technology also holds tremendous potential for improving human welfare by enhancing the efficient delivery of public services such as education, healthcare, transportation, and welfare.

But is the UK's self-proclaimed pro-innovation framework, which uses a non-statutory regulatory approach to tackle the potential risks of AI technologies, appropriate?  

I contend that with additional fine-tuning, the approach taken by the UK better balances the risks and benefits of the technology, while also promoting socio-economically beneficial innovation.

Key components of the envisioned framework

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks relevant to each sector, such as privacy, data protection, consumer protection, and product liability laws, rather than implementing comprehensive AI-specific legislation. It assumes that much of the existing legislation, being technology-neutral, would apply to AI technologies.

Second, the white paper establishes five principles to be applied by each regulator in conjunction with the existing regulatory framework relevant to the sector. These principles are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Third, rather than a single regulatory authority, each regulator would implement the regulatory framework, supported by a central coordinating body that, among other things, facilitates consistent cross-sectoral implementation. As such, it is up to individual regulators to determine how they apply the fundamental principles in their sectors. This could be called a semi-sectoral approach: the principles apply to all sectors, but their implementation may differ across sectors.

Although the white paper does not envision the prohibition of certain AI technologies, some of the principles could be used to effectively prohibit certain use cases, for example unexplainable AI with potentially harmful societal impact. Regulators are given leeway, as a natural consequence of the flexibility of the approach adopted.

There will not be a single regulatory authority comparable to, for example, the Information Commissioner's Office (ICO), which enforces data protection law in all areas. Initially, no statute will require regulators to implement the principles. Actors in the AI supply chain will also have no legal obligation to comply with the principles unless the relevant principle is part of an existing legal framework.

For instance, the principle of fairness requires developing and deploying AI systems that do not discriminate against persons based on any protected characteristics. This means that a public authority must fulfil its Public Sector Equality Duty (PSED) under the Equality Act by assessing how the technology could affect different demographics. A private entity, on the other hand, has no PSED, as this obligation applies only to public authorities. Thus, private actors may avoid the obligation to comply with this particular aspect of the fairness principle unless they voluntarily choose to comply.

Why is the UK's overall approach appropriate? 

The UK’s flexible framework is generally a suitable approach to the governance of an evolving technology. Three key reasons can be provided for this.

It allows evidence-based regulation

Sweeping regulation gives the sense of preventing and addressing risks comprehensively. However, as the technology and its potential risks are not yet reasonably well understood, most AI risks identified today are a product of guesswork.

This is a significant issue in AI regulation, as insufficient and non-contextualised evidence is increasingly used to advocate for specific regulatory solutions. For instance, risks of inaccuracy and bias identified in gender classification AI systems are frequently cited to support a total ban on law enforcement use of FRT in the UK. 

Although FRT has been used by law enforcement authorities in the UK several times, no considerable risk of inaccuracy has been reported, because the context of law enforcement use of FRT, especially in the UK, is different from that of online gender classification AI systems. Law enforcement use of FRT is highly regulated, so the technology deployed is more stringently tested for accuracy, unlike an online commercial gender classification algorithm that operates in a less regulated environment. Ensuring that relevant and context-sensitive evidence is used in proposing regulatory solutions is crucial.

By augmenting existing legal frameworks with flexible principles, the UK's approach enables regulators to develop tailored frameworks in response to context-sensitive evidence of harm emerging from the real-world implementation of AI, rather than relying on mere speculation.

Better enforcement of sectoral regulation 

Scholars have long debated whether sector-specific regulations enforced by sectoral regulators are suitable for algorithmic governance. In a seminal piece, 'An FDA for Algorithms', Andrew Tutt advocated for creating a central regulatory authority for algorithms in the US, comparable to the Food and Drug Administration. The EU has adopted this approach by proposing a cross-sectoral AI Act, enforceable by a single national supervisory authority. The UK chose a different path, which is likely the more sensible way forward.

Entrusting AI oversight to a single regulator across multiple sectors could result in an inefficient enforcement system, lacking public trust. Different regulatory agencies possessing expertise in specific fields, such as transportation, aviation, drug administration, and financial oversight, are better placed to regulate AI systems used in their sectors. Centralising regulation may lead to corruption, regulatory capture, or misaligned enforcement objectives, impacting multiple sectors. In contrast, a decentralised approach allows specific regulators to set enforcement policies, goals, and strategies, preventing major enforcement failures and promoting accountability.

The ICO provides a good example. Its track record in enforcing data protection legislation is exceptionally poor, despite its opportunity to bring together all the resources and expertise needed to perform its tasks. The ICO has failed miserably, and its failure affects data protection in all sectors.

As the Center for Data Innovation asserted, “If it would be ill-advised to have one government agency regulate all human decision-making, then it would be equally ill-advised to have one agency regulate all algorithmic decision-making.”

The UK's proposed sectoral approach avoids the risk of having a single inefficient regulatory authority by distributing regulatory power across sectors.

Non-statutory approach and flexibility to address new risks

The non-statutory regulatory framework allows regulators to swiftly respond to unknown AI risks, avoiding lengthy parliamentary procedures. AI technology's rapid advancement makes it difficult to fully comprehend real-world harm without concrete evidence. 

Predicting emerging risks is also challenging, particularly regarding "AI systems that have a wide range of possible uses, both intended and unintended by the developers" (known as general purpose AI) and machine learning systems. A flexible regulatory framework can easily be adapted to the evolving nature of the technology and the new risks it produces.

But two challenges need to be addressed   

The UK's iterative, flexible, and sectoral approach can successfully balance the risks and benefits of AI technologies only if the government implements appropriate additional measures.

Serious enforcement  

The iterative regulatory approach will be effective only if the relevant principles are enforceable by regulators. There must be a legally binding obligation for the relevant regulators to incorporate these principles into their regulatory remit and to create a reasonable framework for enforcement. This means that regulators should have the power to take administrative action, while individuals should be empowered to seek redress for violations of their rights or to compel compliance with existing guidelines. If no such mechanism is implemented, the envisioned framework will not address the risks posed by AI technologies.

Without effective enforcement tools, companies such as Google, Facebook, or Clearview AI that develop and/or use AI will have no incentive to comply with non-enforceable guidelines. There is no evidence that they would comply voluntarily, and there likely never will be.

Enforcing the principles does not require changing the flexible nature of the UK's envisioned approach, as how the principles are implemented is still left to regulators. The flexibility lies largely in the fact that the overall principles can be amended without a parliamentary process, so regulators can tighten or loosen their standards depending on the context. However, a statute requiring the relevant regulators to implement and enforce the essence of those principles is necessary.

Defining the role of the central coordinating body

The white paper emphasizes the need for a central function to ensure consistent implementation and interpretation of the principles, identify opportunities and risks, and monitor developments. However, regulators must consult this office when implementing the framework and issuing guidelines.

Although the power to issue binding decisions may not need to be conferred, the central office should be mandated to issue non-binding opinions on essential issues, similar to the European Data Protection Board. Regulators should also be required to formally request an opinion on certain matters. This would facilitate cross-sectoral consistency in implementing the envisioned framework and enable early intervention to tackle potential challenges.

Conclusion 

The UK has taken a step in the right direction in adopting a flexible AI regulatory framework that fosters innovation while mitigating the risks of AI technologies. However, the framework needs to be enhanced if the UK is to maintain its leadership in AI. The lack of a credible enforcement system and a solid coordination mechanism may undermine the objectives of the envisioned framework, deterring innovation and eroding public trust and international confidence in the UK regulatory regime.