Course Details

EU Legal Foundations of E-Government

5 Credits
Total Hours: 120 (125 including assessment)
Undergraduate, Mandatory

Course Description

The module "EU Legal Foundations of E-Government" examines the theoretical and legal foundations of e-government systems in the European Union, analyzes the EU regulatory framework for e-government, and builds students' foundational knowledge of European approaches to the digitalization of public administration. Studying European e-government standards deepens understanding of the principles of digital transformation of the public sector, the mechanisms of legal regulation of digital public services, and the protection of citizens' rights in the digital environment. The module enables students to understand the role of EU legal institutions in shaping a unified digital space and the principles underlying the European approach to e-government. It covers the legal relations that arise in the digitalization of public administration in the EU, analyzes contemporary European e-government practices and their impact on the development of digital law, and develops practical skills in this field. Instruction is conducted in Uzbek, Russian, and English.

Syllabus Details (Topics & Hours)

#  | Topic Title                                                                           | Lecture (hours) | Seminar (hours) | Independent (hours) | Total (hours) | Resources
1  | Theoretical-methodological foundations of legal regulation of e-government in the EU | 2               | 2               | 5                   | 9             | Lecture text

Section 1: The Concept and Evolution of E-Government in the European Context

The concept of "e-government" (electronic government) in the European Union has evolved from a purely technical endeavor into a sophisticated legal and political doctrine. Initially, in the late 1990s, e-government was viewed primarily through a technological lens—digitizing paper forms and putting them online. This early phase, often termed "e-administration," focused on efficiency gains within the bureaucracy. However, the theoretical understanding has deepened significantly. Today, e-government in the EU is conceptualized as a transformative process that reshapes the relationship between the state, citizens, and businesses. It is no longer just about "doing things digitally" but about "digital governance"—using technology to enhance transparency, accountability, and participation. This shift necessitates a robust legal framework that goes beyond technical standards to address fundamental questions of administrative law, data protection, and digital rights (Savoldelli et al., 2014).

The definition of e-government in EU law is dynamic and functional. The European Commission defines it broadly as the use of information and communication technologies (ICTs) in public administrations combined with organizational change and new skills to improve public services and democratic processes and strengthen support to public policies. This definition highlights that technology is merely an enabler; the core is organizational and legal transformation. Theoretical models of e-government maturity, such as the widely cited Layne and Lee model, describe a progression from mere "cataloguing" (publishing information) to "transaction" (filing taxes), "vertical integration" (linking local and central systems), and finally "horizontal integration" (seamless services across different agencies). The EU's legal strategy aims to facilitate this progression towards the highest level of integration, known as the "Once-Only Principle," where citizens provide data only once, and administrations share it internally (Layne & Lee, 2001).

Historically, the EU's approach was driven by the "i2010" and "eEurope" action plans, which were soft law instruments setting political targets. The lack of binding legislation in the early stages led to fragmentation, with Member States developing incompatible systems. This "digital fragmentation" became a major barrier to the Single Market. Consequently, the theoretical foundation shifted from "coordination" to "harmonization." The legal basis for e-government moved from general administrative cooperation to the specific internal market competence (Article 114 TFEU). This shift justified the adoption of binding regulations like eIDAS (electronic identification) and the Single Digital Gateway, transforming e-government from a national administrative prerogative into a core component of the European digital single market infrastructure (Codagnone & Wimmer, 2007).

The principle of "good administration" (Article 41 of the Charter of Fundamental Rights) provides the normative bedrock for e-government. Digitization is not an end in itself but a means to achieve the right to good administration. This implies that digital services must be accessible, impartial, and fair. Theoretical debates often center on the "digital divide" and "digital exclusion." If the state moves services online, does it disenfranchise those without internet access? The EU's theoretical response is "digital by default, but inclusive by design." While digital is the preferred channel, the legal framework mandates that offline alternatives remain available for vulnerable groups, ensuring that the digital transformation does not violate the principle of equal access to public services (Helbig et al., 2009).

"Interoperability" is the central technical and legal concept in EU e-government theory. It is defined in the European Interoperability Framework (EIF) as the ability of organizations to interact towards mutually beneficial goals, involving the sharing of information and knowledge between these organizations. The EIF distinguishes between four layers of interoperability: legal, organizational, semantic, and technical. Legal interoperability is the most critical for this course. It ensures that organizations operating under different legal frameworks can work together. For instance, if a digital signature is valid in Estonia, the legal framework in France must recognize it to allow for cross-border administrative procedures. Without legal interoperability, technical connections are useless (Guijarro, 2007).

The transition from "e-Government" to "Open Government" marks a further theoretical expansion. Open Government emphasizes transparency, collaboration, and participation. The Open Data Directive (EU) 2019/1024 mandates that public sector information (e.g., weather data, traffic data) be made available for re-use. This treats government data as a public good and an economic asset. The legal theory here shifts from "secrecy" (data belongs to the state) to "openness" (data belongs to the public). This commodification of public data requires careful legal balancing with privacy rights (GDPR) and intellectual property, creating a complex new field of "public data law" (Janssen et al., 2012).

"User-centricity" is another guiding methodological principle. Traditionally, administration was "agency-centric," organized around the internal structures of ministries (silos). E-government theory demands a Copernican revolution where services are organized around "life events" of the user (e.g., birth, moving house, starting a business). The Single Digital Gateway Regulation codifies this principle, requiring procedures to be fully online and cross-border. This forces a legal re-engineering of administrative processes. The law must now reflect the user's journey rather than the bureaucrat's hierarchy, challenging the rigid structures of traditional administrative law (Bertot et al., 2010).

The "Cross-border" dimension differentiates EU e-government from national e-government. The goal is to create a "European digital administrative space." This theoretical construct envisions a seamless flow of administrative data across borders, similar to the free movement of goods. However, this clashes with the principle of "administrative sovereignty," where states are jealous guardians of their citizen registries. The EU solves this through "large-scale pilot" projects (like e-SENS) and specific regulations (SDG, eIDAS) that build bridges between national islands without creating a centralized "super-registry" in Brussels. It is a federalist model of interconnected sovereignty (Homburg, 2008).

"Trust" is the currency of e-government. Without trust, citizens will not use digital services. The legal framework builds trust through "e-Trust Services" (electronic signatures, seals, timestamps) regulated by eIDAS. This regulation creates a legal fiction where a qualified electronic signature has the equivalent legal effect of a handwritten signature. This "functional equivalence" is a key theoretical tool. It allows the law to remain technology-neutral while providing the certainty needed for digital transactions. The shift is from trust based on personal interaction to trust based on cryptographic proof and legal certification (Rotter, 2010).

The emergence of "Algorithmic Government" or "Automated Decision-Making" (ADM) presents the newest theoretical challenge. When an algorithm decides on a tax rebate or a social benefit, traditional administrative law concepts like "discretion" and "reason-giving" are strained. The GDPR (Article 22) provides a right not to be subject to solely automated decisions, but exceptions exist. The challenge for legal theory is to translate the principles of due process and the rule of law into the algorithmic code. This requires "accountability by design," ensuring that the automated state remains a human-centric state (Yeung, 2018).

"Digital Sovereignty" has recently entered the discourse. It refers to the EU's ability to act independently in the digital world. E-government relies on infrastructure (cloud, servers). If this infrastructure is owned by non-EU tech giants, is the state truly sovereign? The legal response involves initiatives like "Gaia-X" and data localization requirements for sensitive public data. The theory of e-government is thus intersecting with geopolitical strategy, where the control over the "digital public infrastructure" becomes a matter of national security and constitutional autonomy (Floridi, 2020).

Finally, the evolution is towards "GovTech" and an ecosystem approach. The state is no longer the sole developer of e-government solutions. It acts as a platform, opening APIs (Application Programming Interfaces) for private startups to build services on top of public infrastructure. This "Government as a Platform" (GaaP) model requires a new legal framework for procurement and liability. Who is liable if a third-party app accessing tax data fails? The legal boundary of the "state" becomes porous, requiring a redefinition of public liability in a networked ecosystem.

Section 2: Methodological Approaches to E-Government Regulation

The methodology of regulating e-government in the EU is characterized by a "hybridity" of instruments and a multi-level governance structure. Unlike traditional areas of law dominated by a single code, e-government regulation relies on a mix of "hard law" (Regulations, Directives) and "soft law" (Communications, Frameworks, Action Plans). This methodological choice reflects the rapid pace of technological change. Hard law provides legal certainty and enforceability for core infrastructures (like eID), while soft law allows for flexibility and experimentation in evolving areas (like AI in public services). This duality allows the EU to steer national administrations without locking them into obsolete technologies, a strategy known as "technological neutrality" (Senden, 2004).

"Technological Neutrality" is a core methodological principle. It dictates that legislation should define the objectives to be achieved and the functions to be performed, rather than prescribing specific technologies. For example, the eIDAS Regulation defines the requirements for an "advanced electronic signature" (uniquely linked to the signatory, capable of identifying them) but does not mandate the use of a specific encryption algorithm or smart card. This prevents the law from becoming outdated as technology evolves. Methodologically, this requires drafting rules in abstract, functional terms, shifting the burden of technical specification to standardization bodies (like ETSI) and implementing acts (Koops, 2006).

The "Once-Only Principle" (OOP) serves as a methodological guiding star for reducing administrative burden. It posits that citizens and businesses should provide diverse data to the public administration only once. Public administration bodies must then take action to share and reuse this data internally, in compliance with data protection rules. Implementing OOP requires a shift from a "silo-based" methodology (where each agency owns its data) to a "network-based" methodology (where data is a shared resource). This requires complex legal engineering to create gateways for data exchange while respecting the purpose limitation principle of the GDPR. The Single Digital Gateway Regulation creates the legal obligation for cross-border OOP, forcing Member States to interconnect their base registries (Kalvet et al., 2019).

"Digital by Default" is another methodological imperative. It means that public administrations should deliver services digitally as the preferred option, while keeping other channels open for those who are disconnected. This reverses the traditional presumption where paper was the default and digital was an add-on. Legislatively, this translates into mandates for online procedures. For instance, the Company Law Digitalisation Directive requires that it be possible to register a limited liability company entirely online without a physical appearance. This forces a redesign of administrative procedures, removing physical presence requirements and paper-based evidence formalities (Hansen et al., 2004).

"Interoperability by Design" is a proactive regulatory methodology. It means that when designing new public services or legislation, public administrations must consider interoperability requirements from the start. The European Interoperability Framework (EIF) provides the blueprint for this. It is not a binding law but a set of recommendations that Member States "should" follow. The methodology here is "convergence through coordination." By agreeing on common standards and architectural principles (like open standards and reusability), Member States align their systems voluntarily, avoiding the political friction of forced harmonization. The proposed Interoperability Europe Act aims to harden this soft methodology into a structural cooperation mechanism (Misuraca et al., 2010).

The "Sandboxing" methodology is gaining traction, particularly for disruptive technologies like blockchain and AI in government. A regulatory sandbox allows public administrations to test innovative technologies in a controlled environment with relaxed regulatory requirements for a limited time. This "experimental legal regime" allows regulators to learn about the risks and benefits before drafting permanent laws. It represents a shift from "ex-ante regulation" (predicting risks) to "iterative regulation" (learning by doing). The European Blockchain Services Infrastructure (EBSI) is an example of such a testbed for cross-border public services (Ranchordás, 2019).

"Impact Assessment" is a standard methodological tool in EU lawmaking, but it has specific relevance for e-government. The "SME Test" and "Digital Check" are mandatory steps in the legislative process. The Digital Check assesses whether new legislation is "digital-ready" (i.e., can it be automated? does it use clear logic? is it compatible with existing digital workflows?). This prevents the creation of "analog" laws that are impossible to implement digitally (e.g., requiring a "handwritten signature in blue ink"). This methodology ensures that the legal drafting process itself is aligned with the goals of digital transformation (Meuwese, 2008).

The "Whole-of-Government" approach is a systemic methodology. It rejects the piecemeal regulation of individual agencies. Instead, it treats the public sector as a single entity. The legal manifestation of this is the "Base Registry" concept. Base registries (population, vehicle, land, business) are the authentic sources of data. E-government law mandates that all other agencies must query the base registry rather than asking the citizen. This requires a legal hierarchy of registries and clear rules on data governance. The "Interconnection of Business Registers" (BRIS) system is a practical application of this methodology at the EU level (Christensen & Lægreid, 2007).

"Privacy by Design" and "Security by Design" are methodological obligations imposed by the GDPR and the Cybersecurity Act. They require that privacy and security features be embedded into the architecture of the e-government system from the outset, not added as an afterthought. For the legal regulation of e-government, this means that technical specifications (e.g., encryption standards, access logs) are not just IT details but legal compliance requirements. A system that is not secure by design is unlawful. This fuses legal norms with software engineering principles (Cavoukian, 2009).

The "Cross-Border" methodology focuses on overcoming semantic and organizational barriers. Legal concepts differ across Member States (e.g., the definition of a "university diploma"). To enable cross-border e-services (like applying for a study grant), the EU uses "Semantic Interoperability" assets like the Core Public Service Vocabulary. These are data models that map different national concepts to a common European definition. While technically "soft," these vocabularies act as a "lingua franca" for the digital single market, allowing divergent national laws to communicate electronically without full substantive harmonization (Peristeras et al., 2009).

"Co-creation" and "User-Centricity" influence the methodology of service design. The Tallinn Declaration on eGovernment commits Member States to engage with users in the design of public services. Legally, this pushes towards "agile" procurement and development methodologies. Instead of rigid, long-term contracts for IT systems, the public sector is encouraged to use iterative processes with user feedback loops. This challenges traditional public procurement law, which favors detailed upfront specifications. The EU innovation partnership procedure is a legal tool designed to accommodate this co-creation methodology (Bason, 2010).

Finally, the "Multi-level Governance" methodology recognizes that e-government happens locally. Most services (parking, waste, schools) are local. EU regulation must therefore penetrate to the municipal level. The "Berlin Declaration on Digital Society" emphasizes the role of local and regional authorities. The legal framework often uses "cascading" obligations: the EU regulates the national level (e.g., the Single Digital Gateway national coordinator), which in turn coordinates the regional and local entities. This ensures that the digital transformation reaches the "last mile" of public administration.

Section 3: Principles of Good Administration in the Digital Age

The digitalization of public administration does not occur in a normative vacuum; it is governed by the general principles of EU law, particularly the "Right to Good Administration" enshrined in Article 41 of the Charter of Fundamental Rights. This article guarantees the right of every person to have his or her affairs handled impartially, fairly, and within a reasonable time. In the digital age, these abstract principles acquire new, specific meanings. "Impartiality" in the context of Algorithmic Decision-Making (ADM) translates into "Algorithmic Fairness" and "Non-Discrimination." An algorithm used to detect tax fraud must not be biased against specific ethnic groups or neighborhoods. The legal regulation of e-government must therefore include mechanisms to audit and explain algorithmic logic to ensure it adheres to the principle of impartiality (Hofmann & Cisotta, 2019).

The principle of "Fairness" and the "Right to be Heard" face challenges in automated systems. Traditionally, the right to be heard implies a dialogue between the citizen and the official before an adverse decision is taken. In a fully automated system (e.g., automatic traffic fines), this dialogue is often removed or relegated to an ex-post appeal. EU law (GDPR Art. 22) and administrative jurisprudence insist that the right to be heard must be preserved. This might require "hybrid" procedures where a human reviews the automated proposal before it becomes final, or where the system is designed to accept "counter-arguments" or evidence from the citizen digitally before issuing a decision. The digital process must not become a Kafkaesque wall of code (Citron, 2007).

"Reasonable Time" is a principle that e-government is uniquely positioned to enhance. Automation drastically reduces processing times. However, the "Right to a Good Digital Administration" also implies a right to a timely digital service. If a government portal crashes on the day of a deadline (e.g., tax filing), does the citizen bear the liability? E-government law is evolving to recognize "technological force majeure." If the state mandates digital interaction, it assumes a duty to maintain the availability of the infrastructure. A failure of the digital system should not result in legal prejudice to the citizen (e.g., fines for late filing). This establishes a "Service Level Agreement" (SLA) mentality within public law (Galetta, 2019).

"Transparency" is a cornerstone of good administration, transformed by digitalization into "Openness." Article 15 TFEU grants citizens a right of access to documents. In the digital era, this evolves into a "Right to Data." The Open Data Directive requires public sector bodies to publish raw data in machine-readable formats. This allows civil society to monitor government performance (e.g., spending visualization). Furthermore, transparency in ADM requires "Explainable AI" (XAI). The administration must be able to explain why an algorithm reached a specific decision. A "black box" decision is legally void because it denies the citizen the ability to understand and challenge the reasoning, a fundamental component of the rule of law (Pasquale, 2015).

"Legal Certainty" in the digital realm requires reliable "Digital Identity" and "Trust Services." A citizen must be certain that the digital message they receive is truly from the tax authority (authenticity) and has not been altered (integrity). The eIDAS Regulation provides the legal basis for this certainty. It establishes the principle of "non-discrimination" of electronic documents: an electronic document shall not be denied legal effect solely on the grounds that it is in electronic form. This creates a "presumption of validity" for digital administrative acts, which is essential for the stability of digital legal relations (Poullet, 2009).

The principle of "Accessibility" (e-Inclusion) ensures that the digital state is a state for all. The Web Accessibility Directive (2016/2102) mandates that public sector websites and apps be accessible to persons with disabilities (e.g., screen reader compatible). This is not just a technical standard but a human rights obligation derived from the UN Convention on the Rights of Persons with Disabilities. It asserts that the digital interface of the state must not create new barriers. Failure to ensure accessibility constitutes discrimination. This principle acts as a constraint on the design of e-government interfaces, prioritizing inclusivity over aesthetic or technical complexity (Easton, 2013).

"Data Protection" and "Privacy" are not just constraints but constitutive principles of trusted e-government. The GDPR imposes "Purpose Limitation" and "Data Minimization." The state cannot use data collected for a parking permit to profile a citizen for tax audits unless there is a specific legal basis. This compartmentalization prevents the emergence of a "surveillance state" where the government knows everything about everyone. The "Once-Only Principle" must be implemented with strict safeguards: the citizen should retain control over the flow of their data between agencies (e.g., via a consent dashboard). This empowers the citizen as a "data subject" rather than a mere object of administration (Hijmans, 2016).

"Accountability" in automated administration requires clear lines of responsibility. If an AI makes a mistake, who is liable? The vendor? The official who procured it? The agency? The principle of good administration requires that the public authority always retains ultimate responsibility. It cannot outsource its liability to a software algorithm or a private contractor. This "human-in-the-loop" requirement ensures that there is always a human agent answerable for the exercise of public power, even if that power is mediated by code (Bovens & Zouridis, 2002).

"Simplification" is a policy goal that has become a legal principle. The Single Digital Gateway Regulation mandates that procedures be "user-centric" and simple. It prohibits the request of documents that the state already holds (Once-Only). This transforms "bureaucratic complexity" from an annoyance into a potential violation of EU law. The administration has a positive duty to reduce the administrative burden. E-government is the primary tool for this "administrative simplification," streamlining the interaction between the state and the citizen (Wegrich, 2009).

"Cross-border Accessibility" is a specific EU principle. A French e-government service must be accessible to a German citizen. This prohibits "digital geoblocking" in public services. For instance, an authentication portal that requires a French phone number excludes non-residents. The Single Digital Gateway Regulation explicitly bans such discriminatory obstacles. Public digital services must be "borderless by design," ensuring that EU citizenship rights can be exercised digitally across the entire Union (Kotzinos et al., 2011).

"Security" is a prerequisite for good digital administration. The administration has a duty to protect the integrity and confidentiality of citizen data against cyber threats. A breach of security that leaks tax data is a failure of good administration. The Cybersecurity Act and NIS2 Directive impose obligations on public administrations to manage cyber risks. This creates a "duty of cyber-care" for the state. Citizens entrust their digital lives to the state, and the state must warrant the safety of that digital vault.

Finally, the principle of "Digital Sovereignty" implies that the state must retain control over its critical digital infrastructure. Relying entirely on foreign cloud providers for core e-government functions (like the census or health data) creates a dependency that may undermine the autonomy of the administration. Good administration in the 21st century requires a strategy for "technological autonomy," ensuring that the public interest is not held hostage by private tech monopolies.

Section 4: The Legal Status of Data in EU E-Government

Data is the lifeblood of e-government. The legal regime governing this data is a complex matrix of ownership, protection, and openness. Theoretically, data in the public sector has a dual nature: it is both a "protection object" (when it involves personal data) and a "production factor" (when it is public sector information). The EU legal framework attempts to balance these conflicting natures through the interplay of the GDPR and the Open Data Directive. The foundational principle is "Open by Default" for non-personal data and "Protected by Default" for personal data. Navigating the boundary between these two is the central legal challenge for e-government practitioners (Janssen et al., 2012).

The Open Data Directive (2019/1024) establishes the legal regime for the re-use of Public Sector Information (PSI). It mandates that data held by public bodies (documents, databases, maps) should be made available for commercial and non-commercial re-use, ideally free of charge and in machine-readable formats (API). The theory is that public data is paid for by taxpayers and should therefore be returned to the public to spur innovation (e.g., navigation apps using public map data). This transforms the state from a "data owner" to a "data steward," holding information resources in trust for the public. Exceptions exist for sensitive data (security, IP, privacy), but the default is openness (Huijboom & Van den Broek, 2011).
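What "machine-readable" means in practice can be shown with a small sketch: the same records exported as CSV (a common legacy format) and re-serialized as JSON for programmatic re-use, as an API endpoint would deliver them. The weather-station records are invented for illustration.

```python
# Sketch of machine-readable publication under the Open Data Directive:
# tabular records parsed from CSV and re-serialized as JSON for re-use.
# The dataset contents are invented for illustration.

import csv, io, json

raw_csv = "station,temperature_c\nVienna,21.5\nTallinn,14.0\n"

rows = list(csv.DictReader(io.StringIO(raw_csv)))   # parse CSV into records
machine_readable = json.dumps(rows)                 # re-serialize for an API

parsed = json.loads(machine_readable)
assert parsed[0] == {"station": "Vienna", "temperature_c": "21.5"}
assert len(parsed) == 2
```

The legal point is format-agnostic: a scanned PDF of the same table would satisfy neither re-use nor the Directive's machine-readability preference, because no program could parse it this way.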

Personal Data is governed by the General Data Protection Regulation (GDPR). In the context of e-government, the lawful basis for processing is usually "public task" (Article 6(1)(e)) or "legal obligation" (Article 6(1)(c)), rather than consent. This reflects the mandatory nature of the state-citizen relationship (you cannot refuse to pay taxes). However, the GDPR imposes strict limits on "purpose limitation." Data collected for the census cannot be used for police surveillance without a specific legislative measure. This "siloing" of data by purpose protects citizens from the "panoptic state." Breaking these silos for the "Once-Only Principle" requires specific legislation that explicitly authorizes the data sharing and provides safeguards (Kuner et al., 2017).

"High-Value Datasets" are a special category introduced by the Open Data Directive. These are datasets with high potential for economic and social impact (e.g., geospatial, meteorological, mobility). The Commission defines these lists, and Member States must make them available for free via APIs. This creates a "federal data layer" across Europe. The legal obligation here is absolute; agencies cannot charge fees to recover costs. This prioritizes the macro-economic benefit of the data economy over the micro-economic budget of the agency, forcing a shift in the funding models of public bodies like meteorological offices (European Commission, 2019).

The concept of "Base Registries" is central to the data architecture. These are the trusted, authentic sources of basic information (people, companies, vehicles, land). E-government law increasingly mandates that these registers be the "single source of truth." If a discrepancy exists between a citizen's claim and the base registry, the registry presumes correctness until proven otherwise. The legal regulation of these registries focuses on data quality, liability for errors, and access rights. The Interconnection of Business Registers (BRIS) and the Land Registers Interconnection (LRI) create a European network of these authentic sources (Schmidt, 2020).

"Data Sovereignty" and the Data Governance Act (DGA) introduce the concept of "data altruism" and secure processing environments. The DGA facilitates the reuse of sensitive public sector data (e.g., health records) that cannot be released as open data. It creates a regime where trusted intermediaries or secure environments allow researchers to compute on the data without seeing the raw personal details. This legal innovation unlocks the value of sensitive data for the public good (e.g., cancer research) while respecting privacy. It represents a "middle way" between closed silos and open data (Micheli et al., 2020).

"MyData" or "Self-Sovereign Identity" models are influencing legal theory. These concepts propose that citizens should have direct control over their government-held data, managing permissions through a "personal data wallet." The revision of the eIDAS Regulation (eIDAS 2.0) introduces the European Digital Identity Wallet, which allows citizens to store and selectively share attested attributes (e.g., "over 18," "licensed driver") without the government tracking every transaction. This shifts the locus of control from the central registry to the citizen's device, embedding data sovereignty into the legal identity infrastructure (Verschuuren, 2020).

"Cloud Computing" raises issues of data jurisdiction. If the Estonian tax authority stores data on an AWS server in Ireland, who has jurisdiction? And if the US CLOUD Act allows US law enforcement to access that data, is EU data sovereignty violated? The "Schrems II" judgment of the CJEU invalidated the Privacy Shield, creating a crisis for international data transfers. The legal response is a push for "European Cloud" initiatives (like Gaia-X) and strict data localization requirements for "sovereign" data categories. The legal regime for e-government data is thus becoming increasingly territorial to ensure immunity from foreign surveillance (Svantesson, 2020).

"Algorithmic Data" refers to the training data used for AI in government. If this data is biased, the administrative decisions will be biased. The proposed AI Act imposes data governance requirements on high-risk AI systems used by public authorities. They must use training, validation, and testing data sets that meet quality criteria to minimize bias. This creates a "legal standard for data quality" in the public sector. Poor data is no longer just a technical problem; it is a legal liability (Hacker, 2018).

"Interoperability of Data" requires semantic standardization. If "income" is defined differently in the tax office and the social security office, data sharing fails. The Core Vocabularies (Core Person, Core Business) developed by the EU provide the semantic legal standards. While voluntary, using these standards is often a condition for EU funding. This "soft standardization" harmonizes the meaning of administrative data across Europe, creating a common administrative language essential for the Once-Only Principle.

"Data Ownership" in the public sector is a misnomer. The state does not "own" citizen data in a proprietary sense; it holds it as a custodian. However, the state does hold Intellectual Property rights over databases it creates. The Open Data Directive limits the use of the sui generis database right by public bodies to prevent them from locking away public data. The legal principle is that public data is a "commons," and IP rights should not be used to restrict its re-use unless necessary (Hugenholtz, 2013).

Finally, the "Right to Data Portability" (GDPR Art. 20) allows citizens to move their data from one provider to another. While primarily aimed at private platforms, it applies to public services if the processing is based on consent or contract. Extending this to "administrative portability" (moving your full citizen profile from France to Germany) is the frontier of EU e-government law, enabling the true free movement of the digital citizen.

Section 5: Institutions and the Future of EU E-Government

The institutional framework of EU e-government is a decentralized network of national authorities coordinated by EU bodies. There is no "EU Ministry of Digital Affairs." Instead, governance is distributed. The European Commission (DG CONNECT, DG DIGIT) acts as the strategic leader, proposing legislation and managing funding programs (Digital Europe). It provides the "interoperability solutions" (e.g., the TESTA network for secure data exchange) that form the backbone of the system. The Commission's role is that of an architect and a funder, driving convergence through the "power of the purse" and the "power of the standard" (Mergel et al., 2019).

The eGovernment Action Plans (most recently the 2016-2020 plan and its successors under the Digital Decade) set the political agenda. While non-binding, these plans are endorsed by the Council (Member States), creating a political commitment to targets like "100% of key public services available online." Monitoring is done through DESI (the Digital Economy and Society Index), which ranks Member States. This "governance by numbers" creates peer pressure ("naming and shaming"), forcing laggards to reform their e-government laws and systems to avoid low rankings. It is a powerful soft governance mechanism (Domorenok, 2019).

The Single Digital Gateway (SDG) coordination group is a key hard-law institution established by Regulation 2018/1724. It consists of national coordinators who oversee the implementation of the gateway (the "Your Europe" portal). This group manages the practical integration of national portals into the EU network, resolving technical and legal friction. It represents the "administrative federalism" of the EU, where national officials implement EU law in a coordinated network (Schmidt, 2020).

eu-LISA is the EU Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice. It manages the Schengen Information System (SIS), Eurodac, and VIS. These are the "hard" e-government systems of the EU, handling border control and asylum. eu-LISA is a unique institution: an EU agency that operates critical operational IT infrastructure for Member States. It embodies the "operationalization" of EU e-government, moving from policy to running live servers (Bigo et al., 2012).

The European Interoperability Framework (EIF) governance structure involves the ISA² (now Interoperable Europe) committee. This body maintains the EIF and funds interoperability solutions. The proposed Interoperable Europe Act aims to upgrade this governance. It proposes an "Interoperable Europe Board" to set strategic agendas and a community to share solutions. This institutionalizes the "cooperation" on IT standards, moving it from a project-based activity to a permanent structural function of the EU (Misuraca et al., 2010).

National E-Government Agencies (e.g., AgID in Italy, DINUM in France) are the implementers. Their legal status and power vary. Some are powerful centralizers; others are coordinators. The EU encourages the "CIO" (Chief Information Officer) model, where a central authority has the legal power to mandate standards across the entire government. The success of EU regulations often depends on the strength of these national counterparts. The EU network of CIOs facilitates strategic alignment (Homburg, 2008).

"Cross-Border Digital Services Infrastructures" (DSIs) are the concrete manifestations of EU e-government. These include eHealth (patient summary exchange), eProcurement, and eJustice. These are "generic services" provided by the Commission connected to "national services." The legal governance of these DSIs is complex, involving agreements on liability, data controlling, and service levels. They are the "plumbing" of the digital single market, requiring sustained legal and technical maintenance (Codagnone & Wimmer, 2007).

The European Data Protection Board (EDPB) plays a critical oversight role. It ensures the consistent application of the GDPR in e-government. It issues guidelines on topics like "video surveillance" or "connected vehicles." Its opinions effectively set the legal boundaries for what e-government technologies are permissible. The EDPB acts as a "constitutional court" for data privacy, checking the expansionist tendencies of the digital surveillance state (Hijmans, 2016).

Future Trends: The "GovTech" ecosystem is rising. The EU is establishing "GovTech Incubators" to foster a market for public sector innovation. This shifts the institutional focus from "building" systems to "buying" innovation. The legal challenge is to adapt public procurement rules (Directive 2014/24/EU) to allow for the purchase of agile, experimental solutions from startups, rather than just massive contracts from legacy vendors (Mergel, 2019).

"AI in the Public Sector" is the next frontier. The AI Act will create a "European Artificial Intelligence Board." This body will oversee the implementation of AI rules. E-government will be a primary user of AI (for tax fraud detection, welfare allocation). The governance of "Public AI" will require new institutions capable of auditing algorithms for bias and legality. The "Algorithmic Registry" (public registers of AI used by cities like Amsterdam) is a transparency institution likely to become standard across the EU (Wirtz et al., 2019).

"Digital Rights". The "European Declaration on Digital Rights and Principles for the Digital Decade" (2022) sets the normative horizon. It asserts rights to connectivity, digital education, and fair online environments. While declaratory, it signals a move towards a "Digital Constitution." Future institutions may include a "Digital Ombudsman" to handle citizen complaints about digital administration.

Finally, the "Metaverse" and Government. As virtual worlds emerge, "consular services" or "virtual town halls" may move into the metaverse. The institutional challenge will be to project "sovereignty" and "public service" values into these privatized corporate virtual spaces. The future of EU e-government institutions lies in regulating the interface between the public democratic state and the private digital infrastructure it increasingly relies upon.

Questions


Cases


References
  • Bason, C. (2010). Leading Public Sector Innovation. Policy Press.

  • Bertot, J. C., et al. (2010). Using ICTs to create a culture of transparency. Government Information Quarterly.

  • Bigo, D., et al. (2012). The EU's large-scale IT systems. CEPS.

  • Bovens, M., & Zouridis, S. (2002). From Street-Level to System-Level Bureaucracies. Public Administration Review.

  • Cavoukian, A. (2009). Privacy by Design.

  • Christensen, T., & Lægreid, P. (2007). The Whole-of-Government Approach to Public Sector Reform. Public Administration Review.

  • Citron, D. K. (2007). Technological Due Process. Washington University Law Review.

  • Codagnone, C., & Wimmer, M. A. (2007). Roadmapping eGovernment Research. European Commission.

  • Domorenok, E. (2019). The digital agenda of the European Union. Policy and Society.

  • Easton, C. (2013). Website accessibility and the European Union. International Review of Law, Computers & Technology.

  • European Commission. (2019). The European Data Strategy.

  • Floridi, L. (2020). The Fight for Digital Sovereignty. Philosophy & Technology.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration.

  • Guijarro, L. (2007). Interoperability frameworks and enterprise architectures in e-government initiatives in Europe and the United States. Government Information Quarterly.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? Verfassungsblog.

  • Hansen, H., et al. (2004). Digital government. Government Information Quarterly.

  • Helbig, N., et al. (2009). The dynamics of e-government and the digital divide. Government Information Quarterly.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Hofmann, H., & Cisotta, R. (2019). EU Administrative Law and the Digital Single Market. European Public Law.

  • Homburg, V. (2008). Understanding E-Government. Routledge.

  • Hugenholtz, B. (2013). The PSI Directive. Amsterdam Law School Research Paper.

  • Huijboom, N., & Van den Broek, T. (2011). Open data: an international comparison. Telematics and Informatics.

  • Janssen, M., et al. (2012). Benefits, Adoption Barriers and Myths of Open Data and Open Government. Information Systems Management.

  • Kalvet, T., et al. (2019). The Once-Only Principle. TOOP.

  • Koops, B. J. (2006). Should ICT Regulation be Technology-Neutral? Starting Points for ICT Regulation.

  • Kotzinos, D., et al. (2011). Cross-border e-Government services. International Journal of Electronic Government Research.

  • Kuner, C., et al. (2017). Machine learning and the GDPR. International Data Privacy Law.

  • Layne, K., & Lee, J. (2001). Developing fully functional E-government: A four stage model. Government Information Quarterly.

  • Mergel, I. (2019). Digital Transformation of the Public Sector. Public Administration Review.

  • Meuwese, A. (2008). Impact Assessment in EU Lawmaking. Kluwer.

  • Micheli, M., et al. (2020). Emerging models of data governance. JRC.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe 2030.

  • Pasquale, F. (2015). The Black Box Society. Harvard University Press.

  • Peristeras, V., et al. (2009). Semantic interoperability in pan-European e-government services. Social Science Computer Review.

  • Poullet, Y. (2009). E-Government and the Information Society. International Review of Law, Computers & Technology.

  • Ranchordás, S. (2019). Experimental Regulations for AI. William & Mary Bill of Rights Journal.

  • Rotter, P. (2010). A framework for assessing secure electronic signatures. Government Information Quarterly.

  • Savoldelli, A., et al. (2014). Understanding the e-government paradox. Government Information Quarterly.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Senden, L. (2004). Soft Law in European Community Law. Hart.

  • Svantesson, D. (2020). Data Localisation Laws and Policy. Oxford University Press.

  • Verschuuren, P. (2020). Self-Sovereign Identity. Computer Law & Security Review.

  • Wegrich, K. (2009). The administrative burden reduction policy. Better Regulation.

  • Wirtz, B. W., et al. (2019). Artificial Intelligence in the Public Sector. International Journal of Public Administration.

  • Yeung, K. (2018). Algorithmic Regulation. Regulation & Governance.

2
Regulatory framework of EU e-government
2 4 10 16
Lecture text


Section 1: Constitutional Competence and the Legal Basis for Digitalization

The regulatory framework of e-government in the European Union is built upon a complex constitutional foundation, primarily because the founding Treaties do not explicitly confer a specific competence for "e-government" or "digitalization" to the Union. Consequently, the EU legislature has historically relied on the "Internal Market" clause, Article 114 of the Treaty on the Functioning of the European Union (TFEU), as the primary legal basis for its digital legislation. This article authorizes the adoption of measures for the approximation of national provisions which have as their object the establishment and functioning of the internal market. The logic is that divergent national e-government systems create digital barriers to the free movement of goods, services, capital, and persons. Therefore, harmonizing these systems is not an interference in national administrative autonomy, but a necessary step to prevent the fragmentation of the Single Market (Savoldelli et al., 2014).

A secondary, yet increasingly important, legal basis is Article 197 TFEU, introduced by the Treaty of Lisbon. This article explicitly states that the Union may support the efforts of Member States to improve their administrative capacity to implement Union law. While Article 197 excludes the harmonization of laws, it provides a solid constitutional footing for operational cooperation, exchange of best practices, and funding programs like the "Digital Europe Programme." This creates a dual-track regulatory approach: hard harmonization based on Article 114 (e.g., eIDAS Regulation) to remove market barriers, and soft coordination based on Article 197 to enhance administrative efficiency and interoperability (Hofmann, 2019).

The principle of "Subsidiarity" (Article 5 TEU) plays a critical role in shaping the regulatory framework. Since the organization of public administration is a core competence of Member States, the EU can only intervene if the objectives cannot be sufficiently achieved by the Member States alone. In the context of e-government, the cross-border dimension is the trigger for EU action. A national e-ID card works fine domestically, but without EU intervention, it is useless in another Member State. Thus, the EU regulates the "interoperability layer"—the interface between national systems—while leaving the internal design of those systems largely to national discretion. This federalist balance prevents a centralized "EU super-state" administration while enforcing connectivity (Craig & De Búrca, 2015).

The Charter of Fundamental Rights of the European Union serves as the normative compass for the entire regulatory framework. Article 41, the "Right to Good Administration," has been reinterpreted in the digital age to include rights to digital access and digital fairness. Furthermore, Article 8 (Protection of Personal Data) acts as a "super-legality" that constrains all e-government legislation. Every digital regulation, from the Single Digital Gateway to the AI Act, must undergo a fundamental rights impact assessment to ensure compliance with the Charter. This constitutionalizes the regulatory framework, ensuring that efficiency goals do not override citizen rights (Hijmans, 2016).

The shift from "Soft Law" to "Hard Law" marks the maturation of the regulatory framework. For decades, EU e-government was governed by non-binding Action Plans, Ministerial Declarations (e.g., Malmö, Tallinn), and Recommendations. While these built political consensus, they failed to prevent digital fragmentation. The turning point was the realization that voluntary interoperability was insufficient. The modern framework is characterized by the use of Regulations—which are directly applicable in all Member States—rather than Directives. The eIDAS Regulation and the Single Digital Gateway Regulation exemplify this shift towards uniform, enforceable rules that leave little room for national deviation in cross-border matters (Senden, 2004).

The "Better Regulation" agenda also influences the framework. The EU institutions are bound by the Interinstitutional Agreement on Better Law-Making to ensure that legislation is evidence-based and proportionate. In the digital context, this implies the "Digital Check," a tool used during the legislative process to ensure that new laws are "digital-ready" and future-proof. This regulatory meta-policy aims to prevent the creation of analog rules in a digital world, ensuring that the regulatory framework facilitates rather than hinders technological adoption (Renda, 2019).

The concept of "Technological Neutrality" is a drafting principle embedded in the framework. EU regulations define legal effects and functional requirements but avoid prescribing specific technologies (e.g., blockchain or smart cards). This ensures that the law remains valid even as technology evolves. For instance, the definition of an "electronic seal" in eIDAS is broad enough to cover future cryptographic methods. This approach prevents legal lock-in to obsolete technologies and fosters innovation within the regulatory boundaries (Koops, 2006).

Administrative Law principles, particularly the right to be heard and the obligation to give reasons, are codified in the regulatory framework governing automated systems. When the EU legislates on automated decision-making (e.g., in the GDPR or the AI Act), it translates these traditional administrative guarantees into digital rights. This creates a "Digital Administrative Law" that governs the interface between the citizen and the algorithm, ensuring that the rule of law persists in the digital sphere (Galetta, 2019).

The "Once-Only Principle" (OOP) has transitioned from a political aspiration to a legal obligation. Originally a soft law recommendation, it is now codified in the Single Digital Gateway Regulation. Article 14 of that Regulation obliges the Commission and Member States to establish a technical system for the automated exchange of evidence. This demonstrates how the regulatory framework progressively hardens best practices into binding law once the technological maturity allows for it (Kalvet et al., 2019).

The external dimension of the regulatory framework involves the "Brussels Effect." By setting high standards for digital government (e.g., GDPR, AI Act), the EU influences global regulatory norms. The framework includes mechanisms for adequacy decisions and international cooperation, ensuring that data flows with third countries meet EU standards. This projects the EU's regulatory power beyond its borders, shaping the global governance of digital public administration (Bradford, 2020).

However, "Gold-Plating" by Member States remains a challenge. Even when using Regulations, national implementation can introduce additional layers of complexity. The regulatory framework tries to minimize this by using "Implementing Acts" and "Delegated Acts" (Articles 290-291 TFEU) to set technical standards centrally. This comitology process ensures that the technical details of e-government (e.g., data formats) are uniform, preventing national divergences at the technical level.

Finally, the "Digital Decade Policy Programme 2030" creates a monitoring and governance mechanism for the regulatory framework. It sets binding targets and establishes a cooperation mechanism between the Commission and Member States to ensure the targets are met. While not a regulation of specific technologies, it is a "regulation of governance," creating the institutional machinery to drive the implementation of the e-government acquis across the Union.

Section 2: The Core Infrastructure: eIDAS and the Single Digital Gateway

The eIDAS Regulation (EU) No 910/2014 is the cornerstone of the EU's e-government regulatory architecture. It repealed the 1999 Electronic Signatures Directive and established a uniform legal framework for electronic identification (eID) and trust services. Its primary objective is to enable secure and seamless electronic interactions between businesses, citizens, and public authorities. eIDAS distinguishes between two main pillars: electronic identification schemes and trust services. For eID, it creates a system of "mutual recognition." Member States are not forced to issue a specific eID card, but if they notify their national scheme to the Commission, other Member States must recognize it for accessing public services, provided it meets specific assurance levels (Low, Substantial, High). This mechanism respects national sovereignty while ensuring cross-border utility (Graux, 2015).

The concept of "Trust Services" in eIDAS covers electronic signatures, seals, time stamps, electronic delivery services, and website authentication. The Regulation grants these digital tools legal validity. A critical provision is Article 25, which states that an electronic signature shall not be denied legal effect solely because it is in electronic form. Furthermore, a "Qualified Electronic Signature" (QES) is granted the equivalent legal effect of a handwritten signature across the entire EU. This creates a "harmonized evidentiary status" for digital acts, removing the legal uncertainty that previously plagued cross-border electronic contracts and administrative filings (Dumortier, 2017).

The evolution towards eIDAS 2.0 (European Digital Identity Framework) addresses the shortcomings of the original regulation, primarily the low uptake of notified eIDs by the private sector. eIDAS 2.0 mandates that every Member State must offer a "European Digital Identity Wallet" (EUDI Wallet) to its citizens. This shifts the paradigm from a state-centric identity to a user-centric one. The regulation defines the legal status of the Wallet, requiring it to handle not just identity but also "electronic attestation of attributes" (e.g., driving licenses, university diplomas). This regulatory expansion aims to create a universal digital ID usable for both public services and private platforms (e.g., opening a bank account, renting a car), effectively creating a single digital key for the internal market (Alves et al., 2022).

The Single Digital Gateway (SDG) Regulation (EU) 2018/1724 is the second pillar of the infrastructure. It mandates the digitalization of 21 key administrative procedures (listed in Annex II), such as requesting a birth certificate, registering a car, or claiming pension benefits. The Regulation requires these procedures to be fully online, cross-border accessible, and user-centric. It acts as a "forcing mechanism" for national digitization efforts. Member States can no longer require physical presence or paper documents for these procedures, forcing a radical re-engineering of national administrative back-offices (Schmidt, 2020).

The "Your Europe" portal serves as the unified front-end for the SDG. The Regulation establishes quality criteria for the information provided on this portal. Information must be clear, accurate, up-to-date, and available in at least one other official EU language widely understood by the largest number of cross-border users (usually English). This "language regime" is a significant regulatory intervention, effectively mandating multilingualism in national public administrations to facilitate the rights of mobile EU citizens.

The Once-Only Technical System (OOTS) is the operational heart of the SDG Regulation. It operationalizes the Once-Only Principle by creating a secure network for the exchange of evidence between competent authorities. The Regulation defines the legal basis for these data transfers, overriding national secrecy laws that might prevent a tax authority in country A from sending data to a social security authority in country B. It establishes a "trust domain" where competent authorities are authenticated and authorized to request data, ensuring that the data sharing is lawful and secure (Krimmer et al., 2017).
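The "trust domain" logic of the OOTS can be sketched as a minimal simulation: only notified competent authorities may request evidence, and the exchange is triggered by the user's explicit request. The authority identifiers, evidence types, and the registry itself are invented assumptions, not the actual OOTS architecture.

```python
# Minimal simulation of the OOTS trust domain: authenticated competent
# authorities exchange evidence at the user's request. All names and the
# in-memory stores are illustrative assumptions.

TRUSTED_AUTHORITIES = {"EE-TAX", "DE-SOCIAL"}  # notified competent authorities

EVIDENCE_STORE = {("EE-TAX", "income_statement"): {"year": 2023, "income": 24000}}

def request_evidence(requester: str, provider: str, evidence_type: str,
                     user_requested: bool) -> dict:
    """Release evidence only inside the trust domain, at the user's request."""
    if requester not in TRUSTED_AUTHORITIES:
        raise PermissionError("requester is not a notified competent authority")
    if not user_requested:
        raise PermissionError("exchange requires the user's explicit request")
    return EVIDENCE_STORE[(provider, evidence_type)]

evidence = request_evidence("DE-SOCIAL", "EE-TAX", "income_statement", True)
print(evidence)  # → {'year': 2023, 'income': 24000}
```

The two guard clauses mirror the two legal preconditions the paragraph above describes: authentication of the requesting authority and the lawfulness of the specific transfer.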

The Interconnection of Business Registers (BRIS), established by Directive 2012/17/EU, is a sector-specific infrastructure integrated into the broader framework. It connects the national commercial registers, allowing for the cross-border search of company information. The regulatory framework mandates the use of unique identifiers (EUID) and defines the standard message formats for exchange. This ensures transparency in the corporate sector and facilitates the "freedom of establishment" for companies, allowing them to open branches or merge cross-border with reduced administrative friction.

e-Procurement is regulated by the Public Procurement Directive (2014/24/EU) and the e-Invoicing Directive (2014/55/EU). These laws make electronic submission of tenders (e-Submission) and electronic invoicing mandatory for public contracts above certain thresholds. The regulatory goal is to increase transparency, reduce corruption, and lower costs. The framework mandates the use of the "European Single Procurement Document" (ESPD), a standardized self-declaration form that replaces voluminous paper evidence. This demonstrates how e-government regulation acts as a lever for market efficiency (Bockting & Scheel, 2016).

The e-Justice framework, including the e-CODEX regulation (2022/850), digitalizes judicial cooperation. It provides the legal basis for the decentralized IT system that allows courts to exchange documents (e.g., European Arrest Warrants, small claims) securely. Unlike the general administrative systems, e-Justice regulation must strictly observe procedural safeguards and judicial independence. The regulation defines e-CODEX as the gold standard for secure judicial communication, ensuring the admissibility of digital evidence in cross-border proceedings (Velicogna, 2017).

Standardization plays a crucial supporting role. The regulatory framework often refers to standards (e.g., ETSI standards for electronic signatures) to define technical compliance. This "New Legislative Framework" approach allows the law to remain abstract while the detailed technical specifications are handled by standard-setting bodies. Compliance with these harmonized standards creates a "presumption of conformity" with the legal requirements, providing legal certainty to technology providers and public administrations.

Trust Lists are a specific mechanism under eIDAS. Each Member State must maintain a "Trusted List" of qualified trust service providers (QTSPs). These lists are central to the chain of trust. The Commission publishes a central list of these national lists. The legal effect is that a provider on the list is automatically recognized across the EU. This "federated trust" model avoids the need for a single central EU certification authority, respecting the institutional autonomy of Member States while ensuring EU-wide validity.
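The "list of lists" chain of trust can be sketched as a lookup across national lists: a provider found on any notified national trusted list is recognized EU-wide. The list contents below are examples only; real trusted lists are signed XML documents published under ETSI specifications, not Python sets.

```python
# Sketch of the eIDAS federated trust model: the Commission publishes a
# central list pointing to national Trusted Lists, and presence on any
# national list confers EU-wide recognition. List contents are illustrative.

NATIONAL_TRUSTED_LISTS = {
    "EE": {"Example Trust Provider EE"},
    "IT": {"Example Trust Provider IT"},
}
EU_LIST_OF_LISTS = set(NATIONAL_TRUSTED_LISTS)  # Member States with notified lists

def is_qualified(provider: str) -> bool:
    """A provider on any notified national list is recognized across the EU."""
    return any(provider in NATIONAL_TRUSTED_LISTS[ms] for ms in EU_LIST_OF_LISTS)

assert is_qualified("Example Trust Provider EE")  # recognized in every Member State
assert not is_qualified("Unlisted Corp")
```

No central EU certification authority appears anywhere in the lookup; trust is federated through the national lists, which is the institutional point the paragraph above makes.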

Finally, the regulatory framework includes Governance Bodies. The eIDAS Cooperation Network and the SDG Coordination Group are established by the respective regulations to oversee implementation, resolve disputes, and agree on technical details. These comitology committees are where the "real" regulation happens, translating legal texts into operational reality through peer pressure and technical consensus.

Section 3: Data Governance and Open Government

The regulatory framework for data in EU e-government is characterized by a dichotomy between data protection (restricting access) and open data (mandating access). The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) acts as the foundational layer. For public administrations, the lawful basis for processing is typically Article 6(1)(e) "performance of a task carried out in the public interest" or Article 6(1)(c) "legal obligation." The GDPR imposes strict limitations on the repurposing of data (the "purpose limitation" principle). Data collected for a driving license cannot be freely used for tax enforcement without a specific legal basis. This "silo-by-law" approach protects citizens from the "panoptic state" but creates legal friction for the Once-Only Principle, which requires data sharing (Kuner et al., 2017).

The Open Data Directive (Directive (EU) 2019/1024) is the primary instrument for the "production factor" side of data. It mandates that Public Sector Information (PSI)—data produced by public bodies, such as meteorological, traffic, or statistical data—must be reusable for commercial and non-commercial purposes. The Directive introduces the concept of "High-Value Datasets" (HVDs), a specific list of datasets (e.g., geospatial, earth observation) that must be available free of charge, in machine-readable formats, and via APIs. This creates a legal obligation for the state to act as a "data provider" for the digital economy, shifting the cost of data production from the user to the taxpayer (Huijboom & Van den Broek, 2011).

The Data Governance Act (DGA) (Regulation (EU) 2022/868) fills the gap between GDPR and Open Data by regulating the re-use of sensitive public sector data (e.g., health records, data protected by IP). It does not mandate openness but creates a secure legal regime for voluntary sharing. It establishes "secure processing environments" where researchers or AI developers can access sensitive data without seeing the raw personal details. The DGA also introduces the concept of "Data Altruism," creating a legal registry for organizations that collect data for the general interest. This regulation aims to unlock the value of protected government data without compromising rights (Micheli et al., 2020).

The Data Act (Regulation (EU) 2023/2854) introduces a novel B2G (Business-to-Government) data sharing obligation. It empowers public sector bodies to request data from private companies (e.g., telcos, utilities) in cases of "exceptional need," such as a public emergency (e.g., pandemic, flood). This reverses the traditional flow of data. The regulation defines strict conditions for these requests to prevent state overreach, ensuring the data is used only for the specific public interest purpose and then deleted. This creates a regulatory framework for "privately held public data" (Kerber, 2016).

Base Registries are increasingly regulated as "Critical Data Infrastructure." While the setup of registries is national, EU law imposes requirements on their connectivity and quality. The Interconnection of Insolvency Registers Regulation (2015/848) mandates that national insolvency data be accessible via the European e-Justice Portal. Similar requirements apply to criminal records (ECRIS) and vehicle registration (EUCARIS). These sector-specific regulations create a "federated data layer" where the legal validity of a data point (e.g., a company's legal status) in one registry is recognized across the Union.

The "Free Flow of Non-Personal Data" Regulation (EU) 2018/1807 prohibits data localization requirements. Member States cannot force public data to be stored within their national borders unless justified by "public security." This regulation is crucial for the uptake of cloud computing in the public sector. It allows a Polish municipality to store its non-sensitive data in a French data center. By removing data borders, it creates a single market for government cloud services, subject to the security overrides of the state (Svantesson, 2020).

Intellectual Property Rights in public data are restricted by the regulatory framework. The Open Data Directive limits the ability of public bodies to exercise sui generis database rights to prevent re-use. The principle is that data paid for by the public belongs to the public. However, third-party IP rights held by the government (e.g., software copyrights) are respected. This nuanced regime prevents the "enclosure of the digital commons" by administrative bodies seeking to monetize their data monopolies.

Interoperability of Data is addressed through semantic regulation. The Core Vocabularies (Core Person, Core Business, Core Location), developed under the ISA² programme, provide the standard data models. While technically "soft standards," they are increasingly referenced in public procurement and funding calls, effectively becoming the "soft law" standard for data modeling in EU e-government. The framework encourages the use of these vocabularies to ensure that "data travels with meaning" across borders.

Algorithmic Transparency is emerging as a data governance requirement. Although there is no standalone "Algorithmic Accountability Act," provisions in the GDPR (Articles 13-15 on the logic of processing) and the AI Act create a duty to document the data used to train public sector algorithms. Public bodies must ensure their training data is relevant, representative, and free of errors to prevent "automated discrimination." This regulates the "input" side of the e-government machine (Hacker, 2018).
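One concrete form such an input-side check can take is a representativeness audit of the training data. The sketch below is a minimal illustration, assuming a simple categorical share threshold; the 10% cut-off and the sample records are invented assumptions, not a legal standard.

```python
from collections import Counter

def representation_gaps(records, attribute, min_share=0.10):
    """Flag attribute values whose share of the training data falls below
    min_share, a crude proxy for 'representative' input data.
    The 10% threshold is illustrative only, not a legal test."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return sorted(v for v, c in counts.items() if c / total < min_share)

# Hypothetical training sample for a benefits-eligibility model.
sample = ([{"region": "urban"}] * 85
          + [{"region": "rural"}] * 12
          + [{"region": "island"}] * 3)
gaps = representation_gaps(sample, "region")
```

A public body documenting its model could run such audits across protected or geographic attributes and record the flagged gaps in the system's technical documentation.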

Data Sovereignty initiatives like Gaia-X influence the regulatory environment. While Gaia-X is an initiative rather than a legal act, it drives the development of "Rulebooks" for data spaces. The European Health Data Space (EHDS) regulation proposal is the first sector-specific data space regulation. It will create a specific legal regime for the primary use (patient access) and secondary use (research) of electronic health records, operating as lex specialis to the general GDPR rules in this specific context. This marks a shift towards "sectoral data constitutions."

Statistical Governance is regulated by Regulation (EC) No 223/2009. It ensures the professional independence of statistical authorities (Eurostat and national offices). Trusted statistics are the basis of evidence-based policy making. The regulation protects statistical data from political interference, establishing a "firewall" around the data production process. This is a critical component of the "truth infrastructure" of the state.

Finally, the Governance of the "Once-Only" Principle requires a specialized liability regime. If erroneous data is shared from Country A to Country B, leading to a wrongful denial of benefits, who is liable? The SDG Regulation allocates liability: the "authentic source" is responsible for the accuracy of the data, while the "consumer" of the data is responsible for its processing. This clear allocation of responsibility is essential for building trust in the automated exchange of administrative data.

Section 4: Interoperability and Standardization

Interoperability is the capacity of organizations to interact towards mutually beneficial goals, involving the sharing of information and knowledge. In the EU legal context, it is not just a technical feature but a regulatory objective. The European Interoperability Framework (EIF) serves as the "constitution" for this domain. Although originally soft law (a Communication), it establishes the four layers of interoperability: Legal, Organizational, Semantic, and Technical. The EIF's 12 underlying principles (e.g., openness, reusability, technological neutrality) guide national digitalization strategies. Member States align their National Interoperability Frameworks (NIFs) with the EIF to ensure coherence (Guijarro, 2007).

The Interoperable Europe Act (Regulation 2024/...) marks a paradigm shift from voluntary coordination to structured cooperation. It creates a formal governance structure, the Interoperable Europe Board, to steer the strategic agenda. It introduces mandatory "Interoperability Assessments" for any digital public service or policy with cross-border implications. This "assessment duty" forces legislators and IT architects to consider interoperability ex ante, preventing the creation of new digital barriers. The Act also establishes a mechanism for sharing "Interoperability Solutions" (software, data models) to promote reuse and reduce costs (Misuraca et al., 2010).

"Open Standards" are promoted by the regulatory framework. The EIF encourages the use of open specifications to avoid vendor lock-in. Lock-in occurs when a public administration is dependent on a single provider for its IT systems, making it impossible to switch or share data. Regulation (EU) No 1025/2012 on European Standardization allows the Commission to identify ICT technical specifications (even from non-formal bodies like W3C or OASIS) for use in public procurement. This legal mechanism bridges the gap between formal standards (ISO/CEN) and the dynamic world of internet standards, ensuring the public sector runs on modern, open protocols.

Legal Interoperability is the specific focus of the first EIF layer. It addresses the fact that different national laws can prevent data exchange. For example, if country A requires a "wet signature" by law, it cannot interoperate with country B's digital process. The regulatory framework tackles this by "screening" legislation. The Interoperable Europe Act encourages "digital-ready policymaking," ensuring that new laws do not inadvertently create interoperability barriers. It promotes the use of "regulatory sandboxes" to test the legal feasibility of cross-border solutions before full rollout.

Semantic Interoperability is regulated through the promotion of Core Vocabularies and the DCAT-AP (Data Catalog Vocabulary Application Profile). These are metadata standards that describe public services and datasets. While their use is often voluntary, EU funding programs and specific regulations (like the Open Data Directive for High-Value Datasets) increasingly mandate compliance with these semantic standards. This creates a "common grammar" for the European administration, allowing a machine in Lisbon to understand a dataset from Helsinki (Peristeras et al., 2009).
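As an illustration of this "common grammar," the snippet below builds a minimal DCAT-AP-style dataset description as JSON-LD. The dataset, publisher, and URLs are invented; only the vocabulary terms (dcat:Dataset, dct:title, dcat:distribution) follow the DCAT/DCAT-AP model, and real DCAT-AP records carry more mandatory properties than this sketch shows.

```python
import json

# Illustrative DCAT-AP-style metadata record for a hypothetical dataset.
record = {
    "@context": {"dcat": "http://www.w3.org/ns/dcat#",
                 "dct": "http://purl.org/dc/terms/"},
    "@type": "dcat:Dataset",
    "dct:title": {"@value": "Liikennemäärät 2023", "@language": "fi"},
    "dct:publisher": "Hypothetical Finnish Transport Agency",
    "dcat:distribution": [{
        "@type": "dcat:Distribution",
        "dct:format": "CSV",
        "dcat:accessURL": "https://example.org/data/traffic-2023.csv",
    }],
}

serialized = json.dumps(record, ensure_ascii=False)
```

Because the title carries a language tag and the types come from a shared vocabulary, a harvester in Lisbon can index this Finnish record without any bilateral agreement on data formats.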

The ISA² Programme (and its successor in Digital Europe) provided the financial and operational instrument for interoperability. It funded the development of key solutions like TESTA (Trans European Services for Telematics between Administrations), a secure private network for sensitive data exchange (e.g., Schengen, Europol). The existence of this "state intranet" is a critical piece of the regulatory puzzle, providing a secure channel that complies with the high security requirements of national laws.

Shared Reusable Solutions. The concept of "reusability" is legally reinforced. Public administrations are encouraged to publish their software as Open Source. The Commission's Open Source Software Strategy sets the example. By sharing code (e.g., the code for the EU Digital COVID Certificate), the EU reduces fragmentation. The Interoperable Europe Act creates a portal for these shared solutions, creating a "marketplace" for government code. This shifts the legal default from "proprietary software" to "public code."

Cross-Border Digital Services Infrastructures (DSIs). The Connecting Europe Facility (CEF) regulation provides the legal and financial basis for operational DSIs like eDelivery, eID, and eSignature. These "Building Blocks" are packaged solutions that Member States can integrate into their national systems. The regulation defines the service level agreements and governance of these blocks. Using a CEF Building Block provides a "safe harbor" for compliance with EU standards like eIDAS, incentivizing their adoption.

Standardization in Public Procurement. The Public Procurement Directive allows contracting authorities to refer to labels and standards. However, it prohibits technical specifications that mention a specific make or source (e.g., "Windows" or "Intel"), unless justified. This "neutrality obligation" forces the public sector to define requirements functionally, promoting interoperability and competition. The "European Catalogue of ICT Standards" helps procurers find the right open standards to reference.

Governance of the EIF. The National Interoperability Framework Observatory (NIFO) monitors the implementation of the EIF in Member States. It produces "factsheets" and scores. This "soft governance" via monitoring creates transparency and peer pressure. The Interoperable Europe Act hardens this by requiring Member States to designate competent coordinators and participate in the Board, creating a permanent diplomatic corps for IT standards.

The "European Interoperability Reference Architecture" (EIRA) provides a common architectural language. It defines the building blocks required to build interoperable e-government systems. Though technical, it has legal relevance as it is used to assess the eligibility of projects for EU funding. If a project does not fit the architecture, it may be deemed non-compliant with the interoperability policy.

Finally, Legacy Systems. The regulatory framework acknowledges the problem of "legacy" (old mainframes and code). It does not mandate the immediate replacement of all systems, which would be impossible. Instead, it mandates the creation of APIs (Application Programming Interfaces) to wrap legacy systems. The Data Act and the Open Data Directive push for "API-first" government, requiring that data be accessible via modern interfaces regardless of the age of the underlying database.
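The "API wrapper" pattern described above can be sketched as a thin adapter that parses fixed-width records from a hypothetical legacy export and exposes them as structured objects, so a modern JSON API can sit in front of the old system without replacing it. The field layout and sample data are invented for illustration.

```python
# Hypothetical fixed-width export from a legacy mainframe:
# cols 0-3 id, 4-23 surname, 24-31 birth date (YYYYMMDD), 32 status flag.
LEGACY_EXPORT = "\n".join([
    f"0001{'MUSTERMANN':<20}19800101A",
    f"0002{'JANSEN':<20}19751231I",
])

def parse_legacy_record(line: str) -> dict:
    """Translate one fixed-width legacy row into an API-friendly structure."""
    return {
        "id": int(line[0:4]),
        "surname": line[4:24].strip(),
        "date_of_birth": f"{line[24:28]}-{line[28:30]}-{line[30:32]}",
        "active": line[32] == "A",
    }

citizens = [parse_legacy_record(l) for l in LEGACY_EXPORT.splitlines()]
```

The legacy database is untouched; the adapter layer alone satisfies the "API-first" expectation by serving these dictionaries over a modern interface.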

Section 5: Emerging Regulations: AI, Cybersecurity, and Platforms

The regulation of Artificial Intelligence (AI) in the public sector is governed by the AI Act (Regulation 2024/...). This landmark legislation classifies AI systems based on risk. AI used for "remote biometric identification" by law enforcement is largely prohibited (unacceptable risk). AI used in critical areas of public administration—such as determining eligibility for social benefits, migration/asylum management, and justice—is classified as "High-Risk." For these systems, the regulatory framework imposes strict obligations: high-quality data governance, documentation, transparency, human oversight, and robustness. This creates a "safety engineering" regime for algorithmic government, ensuring that the automated state complies with fundamental rights ex ante (Wirtz et al., 2019).
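The Act's risk-tier logic can be sketched as a lookup from use case to obligations. The tier names track the Act's structure (prohibited, high-risk, limited, minimal), but the mapping and duty list below are a simplified illustration, not a legal classification of any real system.

```python
# Simplified, illustrative mapping of public-sector AI uses to AI Act tiers.
RISK_TIERS = {
    "real_time_remote_biometric_id_law_enforcement": "prohibited",
    "social_benefit_eligibility_scoring": "high-risk",
    "migration_and_asylum_management": "high-risk",
    "citizen_facing_chatbot": "limited-risk",      # transparency duties only
    "spam_filter_for_ministry_inbox": "minimal-risk",
}

HIGH_RISK_DUTIES = ["data governance", "technical documentation",
                    "transparency", "human oversight", "robustness"]

def duties_for(use_case: str) -> list:
    """Return the (simplified) obligations attached to a public-sector AI use."""
    tier = RISK_TIERS.get(use_case, "minimal-risk")
    if tier == "prohibited":
        raise ValueError("deployment not permitted under the AI Act")
    return HIGH_RISK_DUTIES if tier == "high-risk" else []
```

The point of the sketch is the ex-ante structure: the duty set attaches to the use case before deployment, not after harm has occurred.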

Cybersecurity is a prerequisite for e-government. The NIS2 Directive (Directive (EU) 2022/2555) repeals the first NIS Directive and expands the scope to include "public administration entities" of central and regional governments as "essential entities." This means that ministries and regions have direct legal obligations to manage cyber risks and report incidents to national CSIRTs. Failure to comply can lead to sanctions. This ends the era where government security was a matter of internal discretion; it is now a matter of harmonized EU law. The framework establishes a "baseline of cyber-hygiene" for the state.

The Cyber Resilience Act (CRA) complements NIS2 by regulating the products used by the government. It imposes cybersecurity requirements on hardware and software products with digital elements placed on the EU market. This ensures that the routers, firewalls, and software suites purchased by public administrations are secure by design. It shifts the liability for security flaws to the manufacturers, reducing the burden on the public sector to secure inherently insecure products.

The Digital Services Act (DSA) (Regulation (EU) 2022/2065) primarily regulates platforms, but it has significant implications for e-government. Public authorities often act as "Trusted Flaggers" to report illegal content (e.g., hate speech, disinformation) to platforms. The DSA formalizes this status, requiring platforms to prioritize notices from these authorities. Furthermore, the DSA regulates the use of platforms by governments for public communication ("Gov-to-Citizen"). It imposes transparency on algorithms used by platforms, which helps governments understand how public information is disseminated or suppressed.

The Digital Markets Act (DMA) (Regulation (EU) 2022/1925) regulates "Gatekeepers" (Big Tech). This is relevant for e-government because the public sector relies on gatekeeper services (cloud, app stores, search). The DMA's interoperability and non-discrimination obligations prevent Gatekeepers from locking public administrations into their ecosystems. For example, it ensures that government apps can be easily installed on mobile operating systems and access necessary hardware features (like NFC for eID apps). It protects the "technological sovereignty" of the public sector against private monopolies.

Cloud Sovereignty and the Data Act. The reliance of e-government on non-EU cloud providers (AWS, Azure) raises legal issues regarding the US CLOUD Act (extraterritorial access to data). The EU's regulatory response includes the EUCS (European Cybersecurity Certification Scheme for Cloud Services) under the Cybersecurity Act. The "sovereignty requirements" in high-assurance levels of this scheme may effectively require data localization or immunity from foreign laws for sensitive government data. This creates a "sovereign cloud" segment within the regulatory framework, reserved for critical state functions.

Algorithmic Transparency Registers. While the AI Act mandates registration of high-risk systems in an EU database, some cities (Amsterdam, Helsinki) and states have gone further, establishing public registers of algorithms. This emerging regulatory trend ("transparency by default" for algorithms) aims to allow journalists and citizens to scrutinize the "code of law." The regulatory framework is moving towards making the "source code" of public administration accessible, subject to security exceptions.

Automated Decision-Making (ADM) and GDPR Article 22. The GDPR grants a right not to be subject to a decision based solely on automated processing. Member States can authorize ADM by law, but must provide "suitable measures" to safeguard rights. The regulatory framework for e-government thus involves a patchwork of national laws authorizing ADM in specific sectors (e.g., tax), constrained by the EU requirement for a "human in the loop" or a right to obtain human intervention. The CJEU jurisprudence (e.g., Schufa) interprets "solely" broadly, tightening the screws on fully automated government.

The "European Digital Identity Wallet" (eIDAS 2.0) introduces a new paradigm for platform interaction. Large platforms (Very Large Online Platforms) will be required to accept the Wallet for user authentication. This forces a convergence between the state-issued identity and the private digital economy. It regulates the "login" button of the internet, inserting the sovereign identity into the private platform ecosystem. This reasserts the state's role as the guarantor of identity in the digital realm.

Blockchain Regulation (MiCA and DLT Pilot Regime). While primarily financial, these regulations provide legal certainty for the use of Distributed Ledger Technology (DLT). For e-government, the European Blockchain Services Infrastructure (EBSI) operates under a specific governance framework. The regulation of "smart contracts" in the Data Act (kill switches, functional equivalence) provides the legal basis for automating administrative processes via blockchain (e.g., automatic grant disbursements).

Liability for AI in Government. The proposed AI Liability Directive focuses on civil liability. However, for the state, liability is usually governed by national administrative law (State Liability). The Francovich principle of EU law implies that the state is liable for breaches of EU law, including digital law. If a government AI discriminates in violation of the GDPR or the Charter, the state is liable. The regulatory framework ensures that the "veil of the algorithm" does not shield the state from accountability.

Finally, the "Interoperable Europe" governance acts as a forum to align these various emerging regulations. The "Digital Decade" targets (e.g., 100% online provision of key public services by 2030) act as the overarching political enforcement mechanism. The regulatory framework is thus a dynamic ecosystem, constantly adapting to "absorb" new technologies (AI, Blockchain) into the rule of law, ensuring that the "Digital State" remains a "Constitutional State."

Questions


Cases


References

  • Alves, E., et al. (2022). The European Digital Identity Wallet: A Game Changer?. European Commission.

  • Bockting, S., & Scheel, H. (2016). The implementation of e-procurement in the EU. ERA Forum.

  • Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.

  • Craig, P., & De Búrca, G. (2015). EU Law: Text, Cases, and Materials. Oxford University Press.

  • Dumortier, J. (2017). The European Regulation on Trust Services (eIDAS). Digital Evidence and Electronic Signature Law Review.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration. European Public Law.

  • Graux, H. (2015). The eIDAS Regulation: A new era for eID and trust services? Computer Law & Security Review.

  • Guijarro, L. (2007). Interoperability frameworks and enterprise architectures. Government Information Quarterly.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? AI and the Public Sector. Verfassungsblog.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Hofmann, H. (2019). EU Administrative Law and the Digital Single Market. European Public Law.

  • Huijboom, N., & Van den Broek, T. (2011). Open data: an international comparison. Telematics and Informatics.

  • Kalvet, T., et al. (2019). The Once-Only Principle: The Way Forward. TOOP.

  • Kerber, W. (2016). Governance of Data: Exclusive Property vs. Access. IIC.

  • Koops, B. J. (2006). Should ICT Regulation be Technology-Neutral? Starting Points for ICT Regulation.

  • Krimmer, R., et al. (2017). The Once-Only Principle. IOS Press.

  • Kuner, C., et al. (2017). Machine learning and the GDPR. International Data Privacy Law.

  • Micheli, M., et al. (2020). Emerging models of data governance. JRC.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe 2030.

  • Peristeras, V., et al. (2009). Semantic interoperability in pan-European e-government services. Social Science Computer Review.

  • Renda, A. (2019). Single Market 2.0: The European Union as a Platform. CEPS.

  • Savoldelli, A., et al. (2014). Understanding the e-government paradox. Government Information Quarterly.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Senden, L. (2004). Soft Law in European Community Law. Hart.

  • Svantesson, D. (2020). Data Localisation Laws and Policy. Oxford University Press.

  • Velicogna, M. (2017). e-Justice in Europe: From national to cross-border. Utrecht Law Review.

  • Wirtz, B. W., et al. (2019). Artificial Intelligence in the Public Sector. International Journal of Public Administration.

3
Subjects of legal relations in the field of EU e-government
2 2 10 14
Lecture text

Section 1: The Transformation of Public Administration as a Legal Subject

The digitalization of the public sector has fundamentally reshaped the legal identity of public administration within the European Union. Traditionally, public administration was viewed through the lens of "public authority" (imperium), characterized by unilateral decision-making powers and hierarchical structures. In the e-government context, this identity is transitioning towards that of a "digital service provider." This shift is not merely terminological but carries profound legal implications. Under EU law, specifically the Single Digital Gateway Regulation (2018/1724), public authorities are legally obligated to act as facilitators of cross-border mobility, bound by duties of quality, accessibility, and user-centricity that mirror consumer protection standards in the private sector. The administration is no longer just a regulator but a regulated entity subject to strict performance indicators regarding its digital interface (Schmidt, 2020).

As a subject of legal relations, the public administration in the EU comprises a diverse array of entities ranging from EU institutions (like the European Commission) to national ministries, regional councils, and local municipalities. The principle of "institutional autonomy" traditionally allowed Member States to organize their own administrative structures. However, e-government legislation increasingly pierces this veil by imposing functional requirements that apply regardless of the internal structure. For instance, the Directive on Open Data (2019/1024) defines "public sector bodies" broadly to include not just the state but also bodies governed by public law, forcing a wide range of entities to assume the legal persona of "data providers." This functional definition harmonizes the obligations of diverse administrative subjects across the Union (Janssen, 2012).

A critical aspect of the administration's legal status in the digital age is its dual role as a "controller" and "processor" of personal data under the General Data Protection Regulation (GDPR). Unlike private entities which rely on consent or contract, public administrations primarily process data based on "public task" or "legal obligation" (Article 6 GDPR). This grants them specific privileges but also imposes heightened responsibilities. As a legal subject, the administration is strictly liable for data breaches and must appoint Data Protection Officers (DPOs). The legal personality of the administration is thus constrained by a "fiduciary duty" towards citizen data, transforming the state from a sovereign owner of information into a custodian of digital identities (Hijmans, 2016).

The concept of "interoperability" creates a new form of "networked administrative subject." Through the European Interoperability Framework (EIF) and the Interoperable Europe Act, national administrations are legally incentivized to act as nodes in a European network rather than isolated silos. This requires the legal capacity to enter into cross-border data exchange agreements and to recognize the digital credentials issued by other Member States. Consequently, the legal boundaries of the administrative subject become porous; a German tax authority becomes a functional extension of the French administration when verifying the VAT status of a French company. This "administrative federalism" redefines the territorial limits of the public subject (Guijarro, 2007).

The public administration also acts as a "procurer" of digital technologies, a role governed by the EU Public Procurement Directives. As a subject of commercial law, the state must navigate the complexities of buying software and cloud services while avoiding vendor lock-in. The legal capacity of the administration to innovate is often constrained by rigid procurement rules designed for the industrial age. New legal instruments like "Innovation Partnerships" attempt to give the public buyer more flexibility, allowing the administration to act as a co-creator with the private sector. This evolves the legal status of the administration from a passive purchaser to an active market shaper (Mergel, 2019).

In the realm of liability, the digital administration faces new challenges. If an automated system makes an error—for example, wrongly denying a social benefit—the question of attribution arises. Traditional administrative law attributes acts to the "competent authority." However, in complex digital ecosystems involving cloud providers and algorithms, locating the legal subject responsible can be difficult. EU jurisprudence maintains that the public authority retains ultimate responsibility (respondeat superior), regardless of the technological intermediaries used. The administration cannot outsource its legal liability to a software glitch; it remains the primary subject accountable to the citizen (Galetta, 2019).

The "transparency" of the administrative subject is legally mandated by the principle of open government. The administration is no longer a "black box" but a "glass house." Legal obligations to publish algorithms (algorithmic registers) and open data sets fundamentally alter the nature of administrative power. The administration effectively loses the "right to secrecy" that characterized the bureaucratic state. As a legal subject, it is under a continuous duty of disclosure, making its internal processes visible and contestable by the public. This shift creates a new legal relation of "scrutiny" between the state and civil society (Meijer et al., 2012).

Furthermore, the administration is a "security subject" under the NIS2 Directive (Network and Information Security). It is classified as an "essential entity" with direct legal obligations to manage cybersecurity risks and report incidents. Failure to do so can lead to sanctions. This integrates the administration into the national security architecture not just as a policy-maker but as a critical infrastructure operator. The legal personality of the public body is thus expanded to include the role of "cyber-defender," responsible for the resilience of the digital state (Markopoulou et al., 2019).

The emergence of "automated administrative acts" challenges the traditional definition of the administrative subject. When a decision is issued by an algorithm without human intervention, is the "decision-maker" still the administrative body? Legal theory affirms that the legal act is imputed to the authority that deployed the system. The administration provides the "legal will" even if the machine provides the "execution." This imputation is crucial for maintaining the rule of law, ensuring that there is always a legal subject capable of being sued in administrative courts (Zouridis et al., 2020).

The administration also possesses a "digital sovereignty" dimension. As a subject of international law and EU relations, the administration must assert control over its digital assets. This involves legal battles over data localization and cloud sovereignty (e.g., Gaia-X initiatives). The public administration acts as a guardian of "informational self-determination" not just for individuals but for the collective polity. The legal status of the administration includes the duty to ensure that its digital infrastructure is not subject to extraterritorial foreign laws (like the US CLOUD Act) (Floridi, 2020).

In the context of the "Once-Only Principle" (OOP), the administration becomes a "trusted source." It has the legal authority and duty to validate data for other entities. This elevates the status of administrative registers (e.g., population, land) to that of "legal truth" providers for the entire single market. The administration's declaration that a citizen resides at a certain address becomes a portable legal fact, accepted across borders. This enhances the probative value of administrative acts in the digital sphere (Kalvet et al., 2019).

Finally, the administration is a "capacity builder." Article 197 TFEU provides a legal basis for the EU to support the administrative capacity of Member States. This recognizes that the "subject" of administration requires continuous upgrading (reskilling, technical investment) to fulfill its legal duties. The legal personality of the administration is thus dynamic, engaged in a permanent process of digital transformation mandated and supported by EU law.

Section 2: The Digital Citizen: Rights and Legal Status

The "citizen" in EU e-government is not merely a passive recipient of services but an active subject endowed with specific "digital rights." This "digital citizenship" is a composite status derived from EU citizenship (Article 20 TFEU), the Charter of Fundamental Rights, and specific digital regulations. The primary legal characteristic of this subject is "user-centricity." EU law, particularly the Single Digital Gateway Regulation, mandates that digital procedures be designed around the needs of the user, not the convenience of the bureaucracy. This creates a subjective right to accessible, understandable, and efficient digital administration, transforming the citizen from a "subject" of authority into a "customer" of the state with enforceable service standards (Maggiolino, 2018).

At the core of the citizen's legal status is the "data subject" as defined by the GDPR. In the context of e-government, this grants the citizen powerful rights against the state, including the right to access their own data, the right to rectification, and the right to restrict processing. While the "right to be forgotten" (erasure) is limited in the public sector due to legal archiving obligations, the citizen retains the right to know who has accessed their data and why. This "transparency right" allows the citizen to audit the state's use of their information, reversing the traditional panopticon where the state watched the citizen; now, the citizen watches the state's data practices (Lynskey, 2015).

The "Right to Good Administration" (Article 41 of the Charter) is reinterpreted for the digital citizen as a "right to a digital procedure." This implies that citizens have the right to communicate with EU institutions and, increasingly, national administrations digitally. The eIDAS Regulation reinforces this by giving citizens the right to use their national electronic identification (eID) to access public services in other Member States. This creates a "cross-border digital personality," allowing the citizen to carry their legal identity across the Union just as they carry their physical passport. The legal subject is no longer tethered to a physical location but is mobile within the digital single market (Sullivan, 2018).

The "digital divide" poses a challenge to the universal status of the citizen. If rights are accessed digitally, are unconnected citizens disenfranchised? EU law addresses this through the principle of "non-discrimination" and "accessibility." The Web Accessibility Directive (2016/2102) creates a legal obligation for public sector bodies to make websites and apps accessible to persons with disabilities. Furthermore, the Single Digital Gateway Regulation mandates that "offline" channels remain available for those unable to use digital tools. This ensures that the "digital citizen" does not replace the "analog citizen" but complements them, preserving the universality of the legal subject (Easton, 2013).

The "Right to Explanation" in the context of Automated Decision-Making (ADM) is a critical component of the digital citizen's status. Under the GDPR (Article 22) and administrative law principles, a citizen subject to an automated decision (e.g., algorithmic tax assessment) has the right to obtain human intervention, to express their point of view, and to contest the decision. This protects the citizen's legal agency against the "black box" of the algorithm. It asserts that the legal subject is a human being who deserves a reasoned justification, not just a computer output (Wachter et al., 2017).
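The Article 22 safeguard can be modelled as a decision record that remains "solely automated" until a human official reviews it, and that the citizen can contest to trigger that review. The class and field names below are invented for illustration; real case-management systems are far richer.

```python
# Minimal sketch of an Article 22-style safeguard around an automated decision.
class AutomatedDecision:
    def __init__(self, subject: str, outcome: str):
        self.subject = subject
        self.outcome = outcome
        self.solely_automated = True   # no human has intervened yet
        self.contested = False

    def contest(self) -> None:
        """The citizen exercises the right to contest the decision."""
        self.contested = True

    def human_review(self, reviewer: str, outcome: str) -> None:
        """A human official confirms or replaces the automated outcome."""
        self.reviewer = reviewer
        self.outcome = outcome
        self.solely_automated = False  # the decision is no longer 'solely' automated

decision = AutomatedDecision("taxpayer-42", "deduction rejected")
decision.contest()
decision.human_review("case officer", "deduction granted")
```

The workflow captures the legal point: once human intervention occurs, the decision ceases to be "based solely on automated processing" and re-enters the ordinary administrative process.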

"Data Portability" (Article 20 GDPR) creates a new proprietary-like right for the citizen over their personal data. While primarily aimed at the private sector, it has implications for the public sector, particularly in health and education. It empowers the citizen to move their medical records or academic transcripts between providers or across borders. This transforms the citizen's data from a state-owned record into a personal asset that the citizen controls and can transport, enhancing their mobility and autonomy (De Hert et al., 2018).

The citizen also acts as a "co-producer" of public value in the Open Government paradigm. Through e-participation platforms and public consultations, citizens have a legal role in the policy-making process. The European Citizens' Initiative (ECI) allows one million citizens to invite the Commission to propose legislation. The digitalization of the ECI makes this right more accessible. The legal subject here is the "active citizen" or citoyen, participating in the democratic life of the Union through digital channels (Alves, 2020).

"Digital Identity Wallets" (under eIDAS 2.0) will further empower the citizen. The Wallet allows the citizen to control the disclosure of their attributes (e.g., proving age without revealing name). This implements the principle of "Self-Sovereign Identity" (SSI) within the EU legal framework. The citizen becomes the "point of integration" for their own data, deciding which parts of their legal identity to share with whom. This shifts the power dynamic from the state (issuer of identity) to the citizen (holder and manager of identity) (Verschuuren, 2020).
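The selective-disclosure idea behind the Wallet (proving age without revealing name) can be sketched with salted hash commitments: the issuer commits to each attribute separately, and the holder later reveals only the attributes a verifier needs. This is a minimal illustration of the principle, not the actual eIDAS 2.0 wallet protocol; all names and the commitment format are assumptions.

```python
import hashlib
import secrets

def commit(attr_name: str, value: str, salt: str) -> str:
    """Salted hash commitment to a single attribute (illustrative format)."""
    return hashlib.sha256(f"{salt}|{attr_name}|{value}".encode()).hexdigest()

# Issuer: commits to every attribute; in a real scheme the commitment list
# (not the raw values) is what the issuer signs.
attributes = {"name": "Ana Kovac", "birth_year": "1994", "nationality": "HR"}
salts = {k: secrets.token_hex(8) for k in attributes}
commitments = {k: commit(k, v, salts[k]) for k, v in attributes.items()}

# Holder: discloses only the attribute needed for an age check, plus its salt.
disclosed = {"birth_year": (attributes["birth_year"], salts["birth_year"])}

# Verifier: recomputes the commitment for the disclosed attribute and checks
# it against the issuer's commitment list; the name is never revealed.
value, salt = disclosed["birth_year"]
assert commit("birth_year", value, salt) == commitments["birth_year"]
print("age attribute verified without revealing name")
```

The design point is that each attribute is independently verifiable, so the holder, not the issuer, decides the granularity of disclosure.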

However, the digital citizen is also a "vulnerable subject." The complexity of digital systems and the risk of cybercrime create new vulnerabilities. EU consumer law and cybersecurity regulations recognize this by imposing "duties of care" on the providers of digital services. The citizen has a right to security and trust. If the state's digital interface is insecure and leads to identity theft, the citizen is entitled to hold the state liable and to obtain redress. The legal status of the citizen thus includes a "right to cyber-safety" in their dealings with the state.

The "Once-Only Principle" (OOP) confers a "right to silence" on the citizen regarding data already held by the administration. The citizen has the right not to be burdened with repetitive requests for information. This transforms the administrative procedure from a burden on the citizen to a duty on the state to retrieve data internally. It lightens the "administrative load" on the legal subject, recognizing their time as a valuable resource (Krimmer et al., 2017).
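The shift the Once-Only Principle describes, from burdening the citizen to obliging the state to retrieve data internally, can be sketched as a service that consults base registries first and asks the citizen only for what no registry holds. The registry names, identifiers, and fields below are invented for illustration.

```python
# Hypothetical registries standing in for authoritative "base registries".
POPULATION_REGISTRY = {"ID-123": {"name": "Jan Novak", "residence": "Tallinn"}}
TAX_REGISTRY = {"ID-123": {"income_2023": 31200}}

def handle_application(citizen_id: str, form_fields: dict) -> dict:
    """Assemble a case file: fetch already-held data internally (once-only),
    so the form only needs fields the citizen alone can supply."""
    record = dict(form_fields)                            # citizen-supplied data
    record.update(POPULATION_REGISTRY.get(citizen_id, {}))  # internal retrieval
    record.update(TAX_REGISTRY.get(citizen_id, {}))
    return record

case = handle_application("ID-123", {"requested_benefit": "housing"})
# The citizen typed one field; residence and income came from registries.
print(case)
```

The legal duty is mirrored in the code path: the administration queries its own sources before the citizen is ever asked.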

In cross-border scenarios, the citizen is a "mobile subject." The SDG Regulation ensures that a citizen moving from Poland to Ireland can register a car or apply for a pension online. This removes the "paperwork border." The legal subject is defined by their mobility rights, and the e-government infrastructure is legally mandated to follow the citizen's movement, ensuring continuity of rights and social security coverage.

Finally, the citizen has a "right to error" in some national jurisdictions (e.g., France), which is influencing EU discourse. In a complex digital system, honest mistakes (e.g., wrong click in a tax form) should not be immediately penalized. E-government systems must be designed to allow for correction and rectification. This humanizes the digital legal subject, acknowledging that while the system is binary, the user is fallible.

Section 3: The Private Sector: Businesses and GovTech Providers

Business entities are dual subjects in the EU e-government ecosystem: they are both "users" (G2B) of public services and "providers" (GovTech) of digital solutions. As users, businesses enjoy rights similar to citizens but tailored to economic activities. The Freedom of Establishment (Article 49 TFEU) and Freedom to Provide Services (Article 56 TFEU) form the constitutional basis for their digital rights. These freedoms mandate that e-government procedures (e.g., registering a company, filing VAT) must be accessible online and non-discriminatory for foreign companies. The Company Law Digitalisation Directive (2019/1151) codifies this, granting businesses the right to form a limited liability company entirely online in any Member State, creating a "digital corporate citizenship" (Schmidt, 2020).

As "data subjects," legal persons do not enjoy GDPR protection (which applies only to natural persons), but they have rights regarding "commercially sensitive information" and trade secrets. The Open Data Directive respects these rights while encouraging the sharing of non-personal data (B2G). The Data Act introduces a new obligation for businesses to share data with governments in exceptional circumstances (e.g., emergencies). This creates a "public duty" for the private sector subject, obliging them to contribute their data assets to the public good when necessary, redefining the boundary between private property and public necessity (Kerber, 2016).

The private sector's role as a "provider" of e-government solutions (GovTech) is governed by Public Procurement Law. The state rarely builds its own software; it buys it. Private companies are the architects of the digital state. The legal framework for this relationship involves complex contracts that must balance innovation with stability. "Innovation Partnerships" allow the public sector to co-develop solutions with startups. In this relationship, the private subject is a "partner" rather than just a vendor, sharing the risks and intellectual property of the new digital public service (Mergel, 2019).

Intermediaries play a crucial structural role. Banks, notaries, and telecom operators often act as "identity providers" or "trust service providers" within national eID schemes. Under eIDAS, these private entities can be "qualified" to issue digital identities that are accepted by the state. This privatizes a core sovereign function—identification. The legal status of these intermediaries is highly regulated; they are "trusted third parties" liable for the security and accuracy of the identities they manage. They act as "gatekeepers" to the digital public sphere (Dumortier, 2017).

The "Once-Only Principle" for businesses is particularly economically significant. It aims to reduce the administrative burden (red tape) that hampers competitiveness. The interconnection of business registers (BRIS) allows company data to flow between Member States. For a business subject, this means that its "legal existence" in one state is automatically visible and verifiable in another. This "interconnected legal personality" facilitates cross-border mergers, branches, and procurement, making the Single Market a tangible digital reality for the corporate subject.

e-Invoicing and e-Procurement are mandatory digital interactions for businesses supplying the state. The e-Invoicing Directive (2014/55/EU) obliges public authorities to accept electronic invoices. For businesses, this becomes a de facto requirement to digitize their own financial processes to trade with the state. The state uses its purchasing power to force the digitalization of the private sector. The business subject is thus compelled to modernize to maintain its status as a government contractor (Bockting & Scheel, 2016).

GovTech Startups face specific legal barriers (e.g., high capital requirements in procurement). The EU seeks to lower these barriers to foster a "GovTech market." Legal initiatives involve simplifying procurement procedures and creating "marketplaces" for public solutions. The startup is viewed as a distinct legal subject—agile, innovative, but resource-poor—requiring a supportive regulatory environment to compete with established IT giants for government contracts.

Big Tech Platforms (Gatekeepers) are regulated by the Digital Markets Act (DMA) to ensure they do not bottleneck the public sector's access to the cloud or app stores. If a government app is removed from the Apple App Store, public service delivery is disrupted. The DMA imposes obligations on Gatekeepers to ensure interoperability and fair access. This protects the "public sector subject" from being held hostage by the "private platform subject," ensuring the sovereignty of the state's digital distribution channels.

Public-Private Partnerships (PPPs) are a common legal vehicle for large e-infrastructure projects (e.g., broadband rollout). In a PPP, the private sector finances and operates the infrastructure while the state regulates it. The legal contract defines the division of risks and rewards. These partnerships create a hybrid legal subject—a special purpose vehicle (SPV)—that operates with a public mandate but private management logic. The governance of these entities is a key challenge for e-government law.

Corporate Digital Responsibility is an emerging concept. Beyond legal compliance, businesses are expected to act ethically in their digital dealings with the state and citizens (e.g., ethical AI). While largely soft law, this concept is hardening through ESG reporting requirements (CSRD). The corporate subject is increasingly judged not just on its financial performance but on its contribution to "digital sustainability" and inclusion.

Liability in GovTech. If private software used by the government fails (e.g., a tax algorithm), who is sued? The citizen sues the state (administrative liability). The state then seeks recourse against the private vendor (contractual liability). This "chain of liability" is often governed by complex indemnity clauses. The legal capacity of the private vendor to bear this risk (insurance) is a critical factor in the stability of the e-government ecosystem.

Finally, there is the "API economy": the state opens its data via APIs, and private businesses build value-added services on top (e.g., a weather app using public meteorological data). The business subject here is a "re-user" of public information. The legal relationship is governed by the Open Data Directive licenses. This creates a symbiotic relationship where the private sector amplifies the value of public assets, transforming the legal nature of government data from a static record into a dynamic economic input.
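The re-use pattern can be sketched in a few lines: a public data feed supplies raw facts, and a private service adds interpretation on top. The function below stubs the public API with fixed data (no network call); the service name and logic are purely illustrative.

```python
# Stand-in for a public open-data API response (in reality an HTTP GET
# against a government endpoint; stubbed here with fixed values).
def fetch_public_weather(city: str) -> dict:
    return {"city": city, "temp_c": -3.0, "wind_kmh": 22.0}

def commute_advice(city: str) -> str:
    """A private value-added service built on the public data feed."""
    w = fetch_public_weather(city)
    if w["temp_c"] < 0 and w["wind_kmh"] > 15:
        return f"{city}: icy and windy - allow extra travel time"
    return f"{city}: normal conditions"

print(commute_advice("Riga"))
```

The government asset stays a plain data feed; the economic value is created in the re-user's layer, exactly the division of roles the Open Data Directive licenses regulate.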

Section 4: Trust Service Providers and Technical Intermediaries

Trust Service Providers (TSPs) are a specialized category of legal subjects created and regulated by the eIDAS Regulation. They are the "notaries of the internet," providing the cryptographic certainty needed for digital transactions. TSPs issue electronic signatures, seals, time stamps, and website authentication certificates. The law distinguishes between "qualified" and "non-qualified" TSPs. Qualified TSPs (QTSPs) enjoy a privileged legal status: the services they provide (e.g., qualified electronic signatures) are granted a presumption of legal validity and are automatically recognized across the entire EU. To achieve this status, QTSPs must undergo rigorous audits by national supervisory bodies and be listed on the EU "Trusted List." This "status-based" regulation creates a closed market of highly trusted entities that underpin the security of the European digital space (Graux, 2015).

The legal liability of TSPs is strictly defined in Article 13 of eIDAS. TSPs are liable for damage caused intentionally or negligently to any natural or legal person due to a failure to comply with their obligations. The burden of proof lies with the TSP to show they were not negligent (for qualified services). This reversal of the burden of proof places a heavy legal responsibility on these actors. They act as "guarantors of trust." If a digital certificate is forged or misused due to the TSP's security failure, the TSP bears the cost. This liability regime is essential to give citizens and administrations the confidence to rely on digital proofs.

Identity Providers (IdPs) are intermediaries that manage digital identities. They can be public agencies, banks, or telecom operators. In federated eID schemes (like eIDAS nodes), the IdP authenticates the user and asserts their identity to the service provider. The legal relationship between the IdP, the user, and the relying party (government) is governed by a framework of "trust interoperability." The IdP must ensure the "Level of Assurance" (LoA) of the identity. If an IdP falsely asserts an identity, it disrupts the legal validity of the administrative act. The legal status of the IdP is thus that of a "gatekeeper of identity" (Sullivan, 2018).

Electronic Delivery Services (e-Delivery) providers act as "digital postmen." They provide secure channels for serving legal documents and administrative decisions. Under eIDAS, qualified electronic registered delivery services provide legal proof of sending and receiving data, and protect against the risk of loss or alteration. The legal subject here acts as a neutral conveyor. Their legal status is analogous to the traditional postal service, endowed with the power to certify the "time and content" of communication, which is crucial for meeting administrative deadlines and due process requirements.

Website Authentication Providers issue certificates (QWACs) that allow users to verify who owns a website (e.g., ensuring you are on the real tax authority site, not a phishing site). The eIDAS 2.0 reform proposes to require web browsers (like Chrome or Firefox) to recognize these EU certificates. This creates a conflict between the "legal status" of the EU TSP (certified by the state) and the "technical power" of the browser vendor (private gatekeeper). The EU aims to assert its digital sovereignty by legally mandating the recognition of its trust infrastructure within the private browser ecosystem.

The Role of Standardization Bodies (like ETSI and CEN) is legally significant. While they are private associations, their standards are referenced in EU legislation. Compliance with these standards triggers a "presumption of conformity" for TSPs. These bodies act as "quasi-legislators" for the technical layer of the law. The legal subjects in the e-government field (TSPs, vendors) must align their behavior not just with the text of the Regulation but with the technical norms produced by these standardization communities.

Supervisory Bodies are national authorities designated to oversee TSPs. They have the power to grant and revoke "qualified" status. They act as the "police" of the trust ecosystem. Their legal relationship with TSPs is one of continuous audit and supervision. If a TSP fails to meet security requirements, the Supervisory Body must remove it from the Trusted List, effectively killing its business. This strong regulatory oversight ensures that the "trust" in Trust Services is state-backed.

Validation Services act as intermediaries that check the validity of signatures and certificates. In a long-term e-government context (e.g., archiving land deeds for 50 years), the validity of the original signature must be preserved even after the signing certificate expires. "Preservation Services" offer this legal certainty. These subjects are the "archivists" of digital legality, ensuring that digital administrative acts remain enforceable over time.

Wallet Providers under eIDAS 2.0 will be a new class of intermediaries. They will provide the European Digital Identity Wallet app. These providers (likely a mix of public and private entities) will be certified to hold the citizen's most sensitive credentials. Their legal status involves strict liability for security breaches and adherence to high data protection standards. They are the "custodians" of the new user-centric identity architecture.

Open Source Communities are informal but critical subjects. Many e-government solutions rely on open source libraries. The maintenance and security of this code depend on loose communities of developers. The Cyber Resilience Act attempts to regulate the commercial use of open source, imposing liability on those who monetize it. The legal status of the "open source contributor" is shifting from a volunteer hobbyist to a potential link in the supply chain of critical digital infrastructure.

Blockchain Nodes in the European Blockchain Services Infrastructure (EBSI) act as technical intermediaries for decentralized ledgers. The legal status of a node operator involves responsibility for the integrity of the ledger. The governance framework of EBSI defines the rights and obligations of these node operators, creating a "permissioned" network where the technical actors are legally vetted public or private entities.

Finally, the global dimension: non-EU TSPs can be recognized in the EU only if there is an international agreement (Mutual Recognition Agreement). Currently, no such general agreement exists. This creates a "digital border." A US digital signature is not automatically a "Qualified" signature in the EU. This protectionist legal wall ensures that the subjects providing trust in the EU e-government space are subject to EU jurisdiction and standards.

Section 5: Artificial Intelligence and Algorithmic Systems as Quasi-Subjects

Artificial Intelligence (AI) and automated systems present a unique challenge to legal theory: are they tools, or are they emerging legal subjects? In the current EU legal framework, AI agents are not legal persons. They do not have rights or duties. They are classified as "products" or "technologies." Liability for their actions is always attributed to a human or corporate subject (the provider or the user). However, their autonomy and impact on administrative decision-making give them a "quasi-subject" status in practice. The law increasingly regulates them as if they were actors, imposing duties of "transparency," "fairness," and "accountability" directly on the system's design (Abbott, 2020).

The AI Act categorizes AI systems based on risk. AI used in essential public services (justice, welfare, migration) is "High-Risk." While the AI itself is not the subject, the Act imposes obligations on the "Provider" (developer) and the "Deployer" (public authority). The Provider must ensure the system meets quality criteria (accuracy, robustness). The Deployer must ensure human oversight and monitor for bias. This splits the legal personality of the "administrative decision-maker" into a technical creator and an administrative user, creating a complex chain of responsibility for the final administrative act (Veale & Borgesius, 2021).

"Human-in-the-loop" is a legal requirement designed to deny AI full subjectivity. Article 22 GDPR and the AI Act emphasize that significant decisions should not be made "solely" by automated means. A human must retain the authority to review and override the algorithm. This legal fiction maintains the anthropocentric nature of the administration. The human official is the "legal interface" for the AI, validating its output and assuming the legal consequences. Without this human anchor, the administrative act would be void for lack of a competent author (Zouridis et al., 2020).
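The "human anchor" requirement can be sketched as a workflow rule: an algorithmic recommendation never becomes a final administrative act until a named official confirms or overrides it. The class and field names below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftDecision:
    applicant: str
    algorithm_outcome: str            # the system's recommendation only
    confirmed_by: Optional[str] = None  # the human author of the final act

def finalise(draft: DraftDecision, official: str, approve: bool) -> str:
    """A decision becomes an administrative act only once a named human
    official confirms the recommendation or overrides it."""
    draft.confirmed_by = official
    return draft.algorithm_outcome if approve else "referred for manual review"

draft = DraftDecision("Ana K.", algorithm_outcome="benefit refused")
final = finalise(draft, official="Case officer M. Lind", approve=False)
assert draft.confirmed_by is not None   # no act without a human author
print(final)
```

The invariant that `confirmed_by` must be set before the act exists mirrors the legal fiction described above: the official, not the algorithm, is the competent author.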

Algorithmic Transparency treats the AI system as a subject of scrutiny. Citizens have a right to know "the logic involved" in automated decisions. Public registers of algorithms (as seen in Helsinki or Amsterdam) list the AI systems "employed" by the city. This personifies the AI to some extent—it has a name, a purpose, and a registered owner. The law demands that the AI "explain itself" (Explainable AI), attributing a communicative duty to the software (or its designers) to make its internal reasoning accessible to human reason (Pasquale, 2015).

Bias and Non-Discrimination. If an AI discriminates, it violates the Charter of Fundamental Rights. Since an AI cannot have mens rea (intent), the law focuses on the "input data" and "design logic." The liability falls on the administration for using a "biased instrument." The AI is treated as a "defective tool." However, the legal remedies (e.g., retraining the model) are directed at the AI's behavior. The law seeks to "rehabilitate" the algorithmic subject to ensure it complies with constitutional values (Hacker, 2018).
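A minimal form of the audit that catches a "biased instrument" is a comparison of outcome rates across groups. The sketch below uses the "four-fifths" threshold, a US-derived rule of thumb used here purely as an illustrative audit heuristic, not an EU legal standard; the data is invented.

```python
def approval_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = approval_rates(log)

# Illustrative heuristic: flag for review if one group's approval rate
# falls below 80% of another group's rate.
flagged = min(rates.values()) < 0.8 * max(rates.values())
print(rates, "audit flag:", flagged)
```

Such a check is only a first screen; a flagged disparity still requires legal analysis of whether the difference is objectively justified.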

"Electronic Personhood" is a theoretical proposal (discussed by the European Parliament in 2017) to grant AI a specific legal status to manage liability, similar to a corporation. This would allow an AI to hold assets (e.g., insurance) to pay for damages. The EU has largely rejected this for now, sticking to product liability and strict liability for operators. The fear is that granting personhood would allow humans to evade responsibility behind a "corporate veil" of code. The consensus remains that the AI is an object of law, not a subject.

Automated Agents in Procurement. In e-procurement, bots can place bids. The law recognizes these automated declarations of will. The actions of the bot are legally attributed to the principal who programmed it. If the bot makes a mistake (e.g., bids too high), the principal is bound, unless the error was due to a system malfunction. This applies the principles of agency law to software: the AI is a "digital agent" acting on behalf of a legal principal (Sartor, 2009).

Chatbots and Virtual Assistants in public service delivery act as the "face" of the administration. They provide information and guidance. The AI Act imposes transparency obligations: users must be informed they are talking to a machine. This "anti-deception" rule maintains the distinction between human and non-human subjects. The chatbot has no legal authority to make binding decisions; it is an information retrieval tool. If it gives wrong advice, the liability rests with the agency for providing misinformation ("negligent misstatement").

Smart Contracts as "Admin-Bots". A smart contract that automatically executes a subsidy payment upon hitting a trigger is an "automated administrator." The Data Act regulates these scripts. They must have a "kill switch." This asserts the supremacy of human law over code. The smart contract is a "self-executing legal act," but its validity depends on the underlying administrative decision. It is an enforcement mechanism, not a decision-maker in its own right.
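The "kill switch" requirement can be sketched as an execution script whose automated trigger is always subordinate to a human-controlled halt flag. This is a toy model of the principle; the class and method names are assumptions, and a real smart contract would run on a ledger rather than as local code.

```python
class SubsidyScript:
    """Toy auto-payment script with a human-controlled halt ('kill switch')."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.halted = False

    def halt(self) -> None:
        """The safeguard: an administrator can always stop execution."""
        self.halted = True

    def on_reading(self, value: float) -> str:
        if self.halted:                       # human law outranks code
            return "execution suspended by administrator"
        if value >= self.threshold:
            return "subsidy payment released"
        return "no action"

script = SubsidyScript(threshold=100.0)
assert script.on_reading(120.0) == "subsidy payment released"
script.halt()
assert script.on_reading(120.0) == "execution suspended by administrator"
```

The halt check comes before the trigger check, encoding the supremacy of the underlying administrative decision over the self-executing mechanism.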

The "Black Box" Defense. Public authorities sometimes argue they cannot explain a decision because the AI is a "black box" (proprietary or technically opaque). Courts in the EU (e.g., in the SyRI case in the Netherlands) have rejected this. The opacity of the tool cannot excuse the violation of the transparency principle. The legal subject (the state) must be able to account for its tools. If the tool is unexplainable, it is unlawful to use it for public decisions.

AI as a "Delegate". In some theories, the delegation of power to an AI is seen as a delegation of administrative discretion. EU law strictly limits the delegation of discretionary power to private entities (Meroni doctrine). Delegating discretion to a private AI model might violate this. Therefore, AI in government is legally confined to "bound administration" (applying clear rules) or preparatory support, protecting the monopoly of the human official over "sovereign discretion."

Finally, the Future Status. As AI becomes more autonomous (General Purpose AI), the pressure to recognize some form of distinct legal status may grow, perhaps as a "registered digital asset" with mandatory insurance. For now, EU e-government law firmly places the AI in the category of "high-risk infrastructure" to be controlled, audited, and kept on a tight legal leash by human subjects.

Questions


Cases


References
  • Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.

  • Alves, E. (2020). Digital democracy and the European Citizens' Initiative. European View.

  • Bockting, S., & Scheel, H. (2016). The implementation of e-procurement in the EU. ERA Forum.

  • De Hert, P., et al. (2018). The right to data portability in the GDPR. Computer Law & Security Review.

  • Dumortier, J. (2017). The European Regulation on Trust Services (eIDAS). Digital Evidence and Electronic Signature Law Review.

  • Easton, C. (2013). Website accessibility and the European Union. International Review of Law, Computers & Technology.

  • Floridi, L. (2020). The Fight for Digital Sovereignty. Philosophy & Technology.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration. European Public Law.

  • Graux, H. (2015). The eIDAS Regulation: A new era for eID and trust services? Computer Law & Security Review.

  • Guijarro, L. (2007). Interoperability frameworks and enterprise architectures. Government Information Quarterly.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? AI and the Public Sector. Verfassungsblog.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Janssen, M. (2012). Open Data and the Future of Public Administration. Government Information Quarterly.

  • Kalvet, T., et al. (2019). The Once-Only Principle: The Way Forward. TOOP.

  • Kerber, W. (2016). Governance of Data: Exclusive Property vs. Access. IIC.

  • Krimmer, R., et al. (2017). The Once-Only Principle. IOS Press.

  • Lynskey, O. (2015). The Foundations of EU Data Protection Law. Oxford University Press.

  • Maggiolino, M. (2018). Digital democracy and the role of the citizen. European Public Law.

  • Markopoulou, D., et al. (2019). The new EU cybersecurity framework. Computer Law & Security Review.

  • Meijer, A., et al. (2012). Open government: Connecting vision and voice. International Review of Administrative Sciences.

  • Mergel, I. (2019). Digital Transformation of the Public Sector. Public Administration Review.

  • Pasquale, F. (2015). The Black Box Society. Harvard University Press.

  • Sartor, G. (2009). Cognitive Automata and the Law. Artificial Intelligence and Law.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Sullivan, C. (2018). Digital Identity. Cambridge University Press.

  • Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU AI Act. Computer Law Review International.

  • Verschuuren, P. (2020). Self-Sovereign Identity. Computer Law & Security Review.

  • Wachter, S., et al. (2017). Counterfactual Explanations without Opening the Black Box. Harvard Journal of Law & Technology.

  • Zouridis, S., et al. (2020). Automated Discretion. Administration & Society.

Topic 4: Objects of legal relations in the field of EU e-government
Lecture: 2 h · Seminar: 2 h · Independent: 10 h · Total: 14 h
Lecture text

Section 1: Information and Data as the Primary Object

Information and data constitute the primary, foundational object of legal relations in the field of EU e-government. Unlike traditional administration, which dealt with physical files and tangible archives, digital administration operates on "dematerialized" information objects. The legal status of this information is complex, oscillating between the concepts of a "public good" and a "protected asset." The Open Data Directive (2019/1024) defines Public Sector Information (PSI) as documents held by public sector bodies. This legal definition transforms raw administrative data—such as meteorological records, traffic data, and business registers—into a reusable economic resource. The Directive establishes a "right to re-use," effectively treating public information not as the private property of the bureaucracy but as a "commons" to be exploited for the benefit of society. This legal objectification of information shifts the focus from "secrecy by default" to "openness by default," redefining the state's relationship with its own memory (Huijboom & Van den Broek, 2011).

However, not all data is an object of open exchange. Personal Data represents a distinct legal object governed by the General Data Protection Regulation (GDPR). In the e-government context, personal data (e.g., tax records, health data) is an object of "protection." It cannot be treated as a commodity. The legal relation here is one of stewardship. The public administration "processes" this object but does not "own" it in a proprietary sense. The citizen retains control rights (access, rectification) over this object. The conflict between the "openness" of PSI and the "protection" of personal data defines the legal boundary of information as an object. Techniques like anonymization are legal tools used to transform "protected personal data" (a liability) into "open non-personal data" (an asset), altering the legal status of the object itself (Kuner et al., 2017).
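The transformation described here, stripping identifiers so that a protected record can circulate, can be sketched as a pseudonymization step. Note the hedge the GDPR itself imposes: keyed hashing as shown below is pseudonymisation, and the output remains personal data; true anonymization (taking data out of GDPR scope) requires that re-identification is no longer reasonably possible. All field names and the key handling are illustrative.

```python
import hashlib

def pseudonymise(record: dict, secret: str) -> dict:
    """Replace direct identifiers with a keyed hash. Under the GDPR this is
    pseudonymisation, not anonymisation - the result is still personal data."""
    out = dict(record)
    token = hashlib.sha256((secret + record["citizen_id"]).encode()).hexdigest()
    out["citizen_id"] = token[:12]   # truncated token replaces the identifier
    del out["name"]                  # drop the direct identifier entirely
    return out

raw = {"citizen_id": "ID-123", "name": "Jan Novak",
       "district": "Kesklinn", "benefit": "housing"}
safe = pseudonymise(raw, secret="registry-key")  # illustrative key handling
assert "name" not in safe and safe["citizen_id"] != "ID-123"
print(safe)
```

Whoever holds `secret` can still link records back to individuals, which is precisely why the legal status of the output does not change until that link is irreversibly severed.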

The concept of "High-Value Datasets" (HVDs) introduces a hierarchy within information objects. Defined by the Open Data Directive, these are specific categories of data (geospatial, earth observation, statistics, companies, mobility) deemed to have significant socio-economic benefits. The law treats these HVDs as "privileged objects" that must be available for free, via APIs, and in machine-readable formats. This regulatory intervention commodifies specific subsets of public information, prioritizing their liquidity in the digital market. The legal object here is not just the content but the format and availability; a PDF scan of a map is not the same legal object as a vector file available via API, as only the latter fulfills the high-value criteria (European Commission, 2019).

"Base Registries" (or Authentic Sources) serve as the authoritative repositories of these information objects. A Base Registry (e.g., the population register) holds the "single version of the truth." The data within it has a specific legal quality: "presumption of accuracy." If the registry says a citizen is married, this data point is a legal fact that other agencies must accept. The Once-Only Principle (OOP) operationalizes this by mandating that this authoritative data object be shared between administrations rather than recollected. This transforms the data entry in a base registry from a static record into a dynamic "verifiable credential" that circulates within the e-government ecosystem (Schmidt, 2020).

The Data Governance Act (DGA) creates a new category of object: "Data with specific protection needs." This includes data protected by intellectual property or statistical confidentiality. The DGA creates a regime for the re-use of this sensitive data in secure environments. Here, the object is not "released" (like open data) but "computed upon." The legal object is the "insight" derived from the data, not the raw data itself. This nuanced approach allows the value of sensitive public data (e.g., health trends) to be extracted without compromising the confidentiality of the underlying records, creating a "safe harbor" for data utilization (Micheli et al., 2020).

"Metadata" is an often-overlooked but critical legal object. It is data about data (e.g., the date a document was created, its author, its classification). In e-government, metadata is essential for interoperability and searchability. The use of standardized metadata schemas (like DCAT-AP) is legally encouraged to ensure that information objects are discoverable across borders. Legal disputes can arise over the accuracy of metadata—for instance, if a document is misclassified as "secret" in its metadata, it is wrongfully removed from the public domain. Thus, metadata is not just a technical tag but a legal attribute determining the accessibility of the information object.
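What a DCAT-AP-style description of a dataset looks like can be sketched as a simplified JSON rendering. Real DCAT-AP is an RDF vocabulary with formal cardinality rules; the property names below follow its terms (dct:title, dcat:distribution, etc.), but the flat JSON shape and the URL are illustrative simplifications.

```python
import json

# Simplified DCAT-AP-style dataset description (JSON rendering of RDF terms).
dataset = {
    "dct:title": "Air quality measurements - hourly",
    "dct:description": "Hourly sensor readings from municipal stations.",
    "dct:publisher": "Example City Environment Office",
    "dcat:theme": "Environment",
    "dcat:distribution": [{
        "dcat:accessURL": "https://data.example.eu/air-quality",  # placeholder
        "dct:format": "CSV",
        "dct:license": "CC-BY-4.0",
    }],
}
print(json.dumps(dataset, indent=2))
```

Because catalogues across Member States use the same property names, a harvester can discover this dataset without knowing anything about the publishing portal, which is the interoperability point made above.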

"Algorithmic Data" or training sets for public sector AI constitute a new class of object. The quality of this data determines the legality of the administrative decision. If the training data is biased, the resulting "administrative act" may be discriminatory and void. The AI Act imposes governance requirements on these data sets. The legal object here is the "representative sample" of reality. The administration has a duty to ensure that this digital object accurately reflects the population it governs, turning data quality into a precondition for the rule of law (Hacker, 2018).

"Document vs. Data." Traditionally, the legal object was the "administrative document" (a static PDF or paper). EU law is shifting the focus to "structured data." The Single Digital Gateway Regulation requires the exchange of "evidence" (data elements) rather than just documents. A digital proof of residence is a set of XML data points, not necessarily a scanned certificate. This "granularization" of the legal object allows for automated processing. The law now protects the integrity of the data stream as much as the integrity of the physical document.

"Real-time Data" (dynamic data) is another evolving object. Traffic sensors or air quality monitors generate continuous streams of data. The Open Data Directive requires this to be available via API immediately. This temporal dimension changes the nature of the object from a "record" (history) to a "feed" (present). The legal regime must account for the ephemeral nature of this object, focusing on access to the stream rather than archival of every data point.

"Secret Information" remains a restricted object. Classified information (state secrets) and commercially sensitive information (trade secrets) are excluded from open data regimes. The Trade Secrets Directive protects the latter even when held by public bodies (e.g., in procurement bids). The legal challenge is defining the boundary. When does a "commercial secret" become "public interest information"? EU jurisprudence tends to interpret exceptions strictly, ensuring that the "secrecy" label is not used to hide administrative inefficiency or corruption.

"Orphan Works" and copyright-protected materials held by public libraries are cultural information objects. The Directive on Copyright in the Digital Single Market allows for the digitization and cross-border dissemination of out-of-commerce works. This transforms "forgotten" cultural objects into accessible digital heritage. The legal object is the "digital copy" which is given a new lease of life through specific copyright exceptions for public institutions.

Finally, the "Data Sovereignty" aspect treats public data as a strategic national asset. The state asserts control over the storage and processing location of this object (e.g., prohibiting the hosting of tax data on non-EU servers). This territorializes the data object, linking its legal status to the physical jurisdiction of the hardware. The "object" of e-government is thus not just bits and bytes, but a sovereign resource subject to geopolitical control.

Section 2: Digital Services and Administrative Procedures

Digital services constitute the functional object of e-government legal relations. A "digital public service" is not merely a website; it is a legally defined administrative procedure performed via electronic means. The Single Digital Gateway (SDG) Regulation defines these services as procedures that must be available fully online, from identification to the delivery of the decision. The legal object here is the "transaction" between the citizen and the state. This transaction is governed by specific service quality standards (speed, transparency, ease of use), transforming the administrative procedure into a "service product" with guaranteed performance levels (Schmidt, 2020).

The transition from paper-based to "Fully Online Procedures" changes the legal nature of the interaction. In a traditional procedure, physical presence was often a validity requirement (e.g., appearing before a registrar). The SDG Regulation abolishes this for 21 key procedures, making "virtual presence" legally equivalent to physical presence. The legal object becomes the "digital workflow." The law regulates the steps of this workflow (submission, verification, notification) to ensure that the absence of physical interaction does not compromise legal certainty or fraud detection (Wegrich, 2009).

"Cross-Border Services" are a specific subset of this object. These are services accessible to non-residents (e.g., a German applying for a study grant in France). EU law mandates that these services be non-discriminatory. The legal object must be designed to accept foreign evidence (eID, documents). If a digital service form requires a specific national ID number format that foreigners do not possess, the service itself is legally defective because it violates the principle of cross-border accessibility. The "interface" of the service is thus a regulated legal object that must be inclusive by design (Kotzinos et al., 2011).

"Proactive Services" (or automated services) represent a future trend where the service is delivered without an explicit application (e.g., automatic child benefit payment upon birth registration). Here, the legal object is the "automated administrative act." The trigger for the legal relation is a data event (birth registration) rather than a citizen's petition. This shifts the legal burden from the citizen (duty to apply) to the state (duty to deliver). The regulation of these services focuses on the accuracy of the trigger data and the right to opt-out or correct errors (Zouridis et al., 2020).

"Information Services" are the most basic form. The obligation to provide information online (e.g., rights, deadlines, fees) is a legal duty under the SDG. The information itself is the object of the service. The regulation sets quality criteria: the information must be clear, accurate, up-to-date, and available in English. This treats "legal information" as a consumer good that must meet quality standards. If the information on a government portal is wrong and a citizen misses a deadline, the state may be liable for "negligent provision of information."

The "Once-Only" Service is a composite object. It involves the service of retrieving evidence from another authority. The citizen "requests" that the administration fetch the data. This request creates a legal mandate for the data transfer. The service object here is the "inter-administrative retrieval." The law regulates the liability for this retrieval: if the retrieval fails or fetches wrong data, the service is defective. This creates a chain of responsibility for the backend processes that are invisible to the user (Kalvet et al., 2019).

"e-Procurement" is a specialized digital service where the state buys from the market. The legal object includes the "electronic tender," the "e-auction," and the "e-invoice." The e-Invoicing Directive makes the electronic invoice the standard legal object for payment. The procedure is highly regulated to ensure transparency and competition. The digital service acts as a "market platform" managed by the state. The integrity of this platform (preventing bid rigging, ensuring secrecy) is a core legal requirement (Bockting & Scheel, 2016).

"e-Justice" services involve the digitalization of court procedures (e.g., filing a small claim online). The object here is the "judicial act" (writ, judgment) in digital form. The e-CODEX regulation governs the secure transmission of these objects. Unlike administrative services, e-Justice services must respect strict procedural rights (fair trial). The digital service must ensure that the "digital court" provides the same guarantees as the physical court. The "digital file" becomes the authoritative record of the proceedings (Velicogna, 2017).

"User Feedback" is an integrated part of the digital service object under the SDG. Users have the right to rate the service. This feedback data becomes a new legal object used to monitor quality. The administration has a duty to analyze this feedback and improve the service. This introduces a "customer satisfaction" loop into administrative law, treating the user's experience as a metric for legal compliance.

"Service Availability" is a critical attribute. E-government services rely on servers. If the server is down, the citizen cannot exercise their rights. E-government law is evolving to recognize a "duty of availability." If a mandatory digital service is unavailable, deadlines should be extended. The "uptime" of the digital service becomes a legal condition for the validity of administrative deadlines.

"Assistance Services" provide the safety net. The SDG mandates the availability of problem-solving services (e.g., SOLVIT) for when digital procedures fail. These human-mediated services are part of the broader service ecosystem. They are the "fallback object" ensuring that digitalization does not lead to a denial of rights for those who get stuck in the digital process.

Finally, the "Life Event" approach packages multiple services into a single object. A "birth" life event triggers health, population, and benefit services. The legal object is the "integrated journey." The administration is required to link these disparate legal procedures into a coherent user experience. This holistic view challenges the traditional fragmentation of administrative law, creating a "meta-service" that spans across different agencies.

Section 3: Digital Identity and Trust Services

Digital Identity (eID) is the "key" to the e-government ecosystem, serving as the object that allows a subject to act within the digital sphere. Legally, an eID is not just a username; it is a "guarantee of personhood." The eIDAS Regulation defines electronic identification as the process of using person identification data in electronic form uniquely representing a natural or legal person. The eID itself is a "credential object"—a set of data (keys, certificates) issued by a trusted party. The legal value of this object depends on its "Level of Assurance" (LoA): Low, Substantial, or High. A "High" level eID is a legal object capable of proving identity with a confidence equivalent to physical presence, enabling high-risk transactions like transferring money or signing deeds (Graux, 2015).

The "European Digital Identity Wallet" (EUDI Wallet), introduced by eIDAS 2.0, transforms the eID from a state-held record into a user-held object. The Wallet is a mobile app that stores the user's identity and attributes. Legally, the Wallet is a "product" and a "service" that must be certified. It acts as a container for "Digital Credentials." The citizen "owns" the Wallet and controls the release of data from it. This objectifies identity as a portable asset, distinct from the central registry. The Wallet is a "sovereign" object of the user, protected by high security and privacy standards (Alves et al., 2022).

"Electronic Attestations of Attributes" (EAAs) are new legal objects within the Wallet. These are digital proofs of qualities: "is a doctor," "is over 18," "has a driving license." Unlike the core identity (who you are), these are functional attributes (what you are). eIDAS 2.0 gives these electronic attestations the same legal effect as paper certificates. A digital university diploma in the Wallet is a valid legal object for applying for a job in another Member State. This digitizes the "contents of the wallet," turning paper cards into verifiable digital tokens.

"Trust Services" generate specific legal objects: electronic signatures, seals, and timestamps. An Electronic Signature is data in electronic form which is attached to or logically associated with other data in electronic form and which is used by the signatory to sign. The "Qualified Electronic Signature" (QES) is the highest form. It is a legal object that carries the presumption of integrity and the identity of the signer. It is the digital equivalent of the handwritten signature. This "digital ink" allows for the execution of binding legal acts (contracts, administrative decisions) in the virtual world (Dumortier, 2017).

"Electronic Seals" are the corporate equivalent of signatures. They are issued to legal persons (e.g., a ministry or a company) to ensure the origin and integrity of a document. An automated tax assessment generated by a server is "sealed" to prove it came from the Tax Authority and hasn't been tampered with. The seal is a legal object that certifies the "institutional will," allowing machines to issue authentic administrative acts without human intervention.

"Electronic Time Stamps" attach a trusted time to a digital document. In law, deadlines are crucial. A time stamp is a legal object that provides irrefutable proof that a document existed at a certain time (e.g., a tender submitted before the deadline). It objectifies "time" in the digital space, preventing disputes about when an action occurred.

"Website Authentication Certificates" (QWACs) are objects that prove the identity of a website owner. They ensure that the citizen knows they are on the official government portal, not a phishing site. This certificate is a "trust object" displayed by the browser. The legal controversy surrounding QWACs involves whether browsers must accept them. For e-government, these certificates are essential "digital badges" of authority.

"Validation Reports" are derivative objects. They are reports issued by a Trust Service Provider confirming that a signature was valid at the time of signing. In long-term archiving (e.g., land deeds kept for 100 years), the validation report preserves the legal value of the signature even after the cryptographic keys have expired. This object ensures the "immortality" of digital legal acts.

The "Notified eID Scheme" is a status object. When a Member State notifies its eID scheme to the Commission (e.g., the German Personalausweis), it becomes a "recognized European eID." This status triggers the obligation for other states to accept it. The notification process transforms a national ID into an EU-wide legal instrument. It is an object of mutual recognition.

"Biometric Data" used in eID (fingerprints, facial scan) is a sensitive object. While used to unlock the Wallet or ID card, the legal framework (GDPR) restricts its storage. Usually, the biometric template is stored securely on the device (secure element) and never leaves it. The "match" signal is the object that is transmitted, not the biometric data itself. This distinction protects the biological identity of the user.

"Pseudonyms" are legal objects used to protect privacy. eIDAS allows the use of pseudonyms in electronic transactions. A user can interact with a service using a consistent pseudonym (e.g., for age verification) without revealing their real name. This object facilitates "transactional privacy," allowing the user to prove something about themselves without revealing everything.

Finally, the "Chain of Trust" is the systemic object. The Root CA (Certificate Authority) issues certificates to Sub-CAs, who issue them to users. This hierarchy of digital objects constitutes the "Public Key Infrastructure" (PKI). The legal validity of every digital signature depends on the integrity of this entire chain. If the Root key is compromised, all downstream objects lose their legal value. The regulation of TSPs is essentially the regulation of this invisible infrastructure of trust.

Section 4: Technical Infrastructure and Interoperability Assets

The technical infrastructure constitutes the physical and logical "substrate" of e-government legal relations. While often invisible, these infrastructures are regulated objects. "Building Blocks" (e.g., eDelivery, eID, eInvoicing) developed under the Connecting Europe Facility (CEF) are standardized software components. Legally, they are "reusable solutions." When a Member State implements a Building Block, it adopts a specific set of technical and legal specifications (Service Level Agreements). These blocks are the "bricks" of the digital single market, reducing the cost and complexity of cross-border connection (Codagnone & Wimmer, 2007).

"eDelivery" is a secure messaging infrastructure. It acts as a "digital courier." The legal object here is the "message" and its "evidence of delivery." eDelivery uses the AS4 protocol to ensure that data sent from Country A to Country B is encrypted and authenticated. The infrastructure generates "non-repudiation" tokens—proof that the sender sent it and the receiver got it. This infrastructure is the legal backbone of cross-border justice and procurement, providing the "certainty of communication" required for due process.

"Central Platforms" like the "European Commission Authentication Service" (ECAS) or the "Internal Market Information System" (IMI) are centralized infrastructure objects. The IMI is a secure online tool that allows national authorities to communicate (e.g., checking the license of a doctor). The regulation of IMI defines who can access it and what data can be exchanged. These platforms are "closed networks" or "intranets" for the administration, creating a secure space for administrative cooperation removed from the public internet.

"Base Registry Interconnections" (BRIS, LRI, ECRIS) are networked infrastructures. They do not create a central database but link existing national databases. The legal object is the "search query" and the "response." The infrastructure ensures that a query from Italy can be understood by a database in Poland. The legal regulation focuses on the "access point" and the transformation of data formats. These interconnections are the "synapses" of the European digital brain (Schmidt, 2020).

"Cloud Infrastructure" is the storage object. Governments increasingly use cloud services. The "European Cloud Federation" and initiatives like Gaia-X aim to create a "sovereign cloud" infrastructure. The legal status of data stored in the cloud depends on the jurisdiction of the provider. The Data Act and Free Flow of Non-Personal Data Regulation govern this object, preventing vendor lock-in and data localization. The cloud infrastructure is a "leased asset" where the state must retain legal control despite lacking physical possession.

"APIs" (Application Programming Interfaces) are the connectors. The Open Data Directive promotes "API-first" government. An API is a technical contract: it defines how software components talk to each other. Legally, the API is an "interface object" that must be stable and documented. By mandating APIs for High-Value Datasets, the law treats the interface as a public utility, a tap from which data flows. The availability and performance of the API are subject to legal standards.

"Semantic Assets" are abstract objects. These include the Core Vocabularies (Core Person, Core Business). They are data models that define the meaning of terms. If "Family Name" is mapped to "Surname," interoperability is achieved. These semantic assets are "soft law" objects but are critical for the Once-Only Principle. Without agreed semantic definitions, data exchange is legally risky because the meaning might be lost in translation. These assets act as the "dictionary" of the digital administration (Peristeras et al., 2009).

"Open Source Software" (OSS) is a code object. The EU encourages the sharing of e-government solutions as open source (e.g., the code for the Digital COVID Certificate). The legal regime is governed by open source licenses (like the EUPL). This treats government software not as a proprietary secret but as a "public good" to be shared with other states and citizens. The "Public Code" movement argues that code paid for by the public should be available to the public.

"Blockchain Infrastructure" (EBSI) is a distributed object. The European Blockchain Services Infrastructure is a network of nodes run by Member States. It supports use cases like notarization and diplomas. The ledger itself is the legal object—an immutable record of transactions. The regulation of EBSI defines the "governance" of this ledger: who can write to it, who can read it. It is a "trust machine" owned collectively by the Union and Member States.

"Cybersecurity Certification Schemes" (under the Cybersecurity Act) create "certified objects." A cloud service or an eID card can be certified as having a "High" level of security. This certification is a legal object that acts as a "passport" for the product, proving its compliance with EU security standards. Public administrations are often legally required to purchase only certified products, linking procurement law to the cybersecurity framework.

"Test Beds and Sandboxes" are experimental infrastructures. They allow regulators and companies to test new technologies (like AI in healthcare) in a controlled environment. The "data" and "algorithms" within the sandbox are subject to a special legal regime (exemptions from certain rules). The sandbox itself is a "regulatory object," a safe space for innovation.

Finally, the "European Interoperability Reference Architecture" (EIRA) is a meta-object. It is a blueprint that describes how all these pieces fit together. While conceptual, it has legal weight in funding decisions. Projects that do not fit the architecture may be denied EU support. It is the "zoning plan" for the digital landscape of the EU.

Section 5: Intellectual Property and Digital Rights Management

Intellectual Property (IP) in e-government relations creates a complex web of rights over software, databases, and content. The state is both a creator and a user of IP. "Database Rights" are particularly relevant. The sui generis database right protects the investment in obtaining data. Public bodies create massive databases (e.g., meteorological data). The Open Data Directive restricts public sector bodies from invoking these rights to prevent re-use. It mandates that public databases be open. This effectively "expropriates" the IP right of the agency in favor of the public domain, prioritizing the macro-economic value of data flow over the micro-economic value of agency revenue (Hugenholtz, 2013).

"Copyright in Software" affects government procurement. When the state commissions custom software, who owns the code? Traditionally, vendors tried to retain ownership ("lock-in"). Modern e-government guidelines recommend that the state acquire full ownership or broad license rights to avoid dependency. This allows the state to share the software with other administrations ("reuse"). The legal object is the "source code." Controlling the source code is a matter of "digital sovereignty" and sustainability.

"Standard Essential Patents" (SEPs) can impact interoperability. If an e-government standard relies on a patented technology (e.g., a specific compression format), the patent holder can charge royalties. The European Interoperability Framework promotes "FRAND" (Fair, Reasonable, and Non-Discriminatory) or royalty-free licensing. The legal goal is to ensure that standards remain open and accessible. A standard loaded with expensive patents acts as a barrier to entry for smaller players and citizens.

"Public Domain" materials held by cultural heritage institutions (archives, libraries) are cultural objects. The directive on Copyright in the Digital Single Market facilitates their digitization. Importantly, it states that faithful reproductions of works in the public domain cannot be subject to new copyright. This prevents museums from claiming copyright over digital photos of old paintings. It preserves the "public domain status" of the object in its digital form, ensuring that digital heritage remains a commons.

"Trademarks and Trust Marks" are semiotic objects. The EU "Trust Mark" for qualified trust services (the blue padlock symbol) is a regulated logo. Only qualified providers can use it. It signals legal certainty to the user. The misuse of this mark is a legal infringement. Similarly, the "logo" of an eID scheme acts as a brand of trust. These visual objects serve a consumer protection function in the digital market.

"Digital Rights Management" (DRM) technologies are used to control access to content. In e-government, DRM might be used to secure sensitive documents (preventing printing or forwarding). The legal status of DRM is protected by the Copyright Directive (anti-circumvention). However, the tension exists between DRM and "access rights." If DRM prevents a citizen from exercising their right to access their own file, the administrative right to good administration overrides the technical protection measure.

"Open Licenses" (e.g., Creative Commons, EUPL) are the legal instruments that operationalize open data and open source. They are "permissions attached to the object." When a dataset is released with a CC-BY license, the license travels with the data. The EUPL (European Union Public License) is a specific open source license designed to be compatible with EU law (civil law concepts, multilingual). It is the standard legal wrapper for EU government software, ensuring legal compatibility across Member States.

"User-Generated Content" in e-participation platforms (e.g., ideas submitted to a consultation) raises IP questions. Who owns the citizen's idea? Terms of use typically grant the government a license to use the content. However, the citizen retains moral rights. The legal handling of this content must respect the "authorship" of the citizen while allowing the administration to process and implement the ideas.

"Smart Contracts" can be viewed as "copyrighted code." The automated logic that executes a subsidy is a literary work. However, transparency requirements (AI Act, Freedom of Information) mandate that this code be scrutinisable. The state cannot claim "commercial secrecy" or copyright to hide the "law of the code" from the public. The logic of the law (public) overrides the logic of IP (private/exclusive) when the code acts as regulation.

"Geospatial Data" (maps) is a high-value object often encumbered by complex IP (e.g., from national mapping agencies). The Open Data Directive targets this specifically, mandating openness. This dismantles the "crown copyright" model where the state sold maps to fund itself. It treats the "digital map" as basic infrastructure, like a road, which should be free at the point of use.

"Trade Secrets" in procurement. Vendors submit detailed technical bids containing trade secrets. The contracting authority must protect this confidential information. However, this conflicts with the transparency of public spending. Losing bidders often want to see the winning bid. The legal balance involves redaction and "data rooms." The trade secret object must be protected, but not to the extent that it shields the procurement process from accountability.

Finally, "Data Altruism" forms involve a voluntary waiver of rights. A citizen or company donates data for the public good (e.g., for research). The legal instrument is a "consent form" or a deed of donation. This creates a "gift relationship" regarding the data object, moving it from the private sphere to the scientific commons. The Data Governance Act creates a register for these altruistic organizations to build trust in this transfer.

Questions


Cases


References
  • Alves, E., et al. (2022). The European Digital Identity Wallet. European Commission.

  • Bockting, S., & Scheel, H. (2016). The implementation of e-procurement. ERA Forum.

  • Codagnone, C., & Wimmer, M. A. (2007). Roadmapping eGovernment Research. European Commission.

  • Dumortier, J. (2017). The European Regulation on Trust Services. Digital Evidence Law.

  • European Commission. (2019). High Value Datasets.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? Verfassungsblog.

  • Huijboom, N., & Van den Broek, T. (2011). Open data: an international comparison. Telematics.

  • Hugenholtz, B. (2013). The PSI Directive. Amsterdam Law School Research Paper.

  • Janssen, M. (2012). Open Data and the Future of Public Administration. Government Information Quarterly.

  • Kalvet, T., et al. (2019). The Once-Only Principle. TOOP.

  • Kotzinos, D., et al. (2011). Cross-border e-Government services. International Journal of Electronic Government.

  • Kuner, C., et al. (2017). Machine learning and the GDPR. IDPL.

  • Micheli, M., et al. (2020). Emerging models of data governance. JRC.

  • Peristeras, V., et al. (2009). Semantic interoperability. Social Science Computer Review.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Velicogna, M. (2017). e-Justice in Europe. Utrecht Law Review.

  • Wegrich, K. (2009). The administrative burden reduction policy. Better Regulation.

  • Zouridis, S., et al. (2020). Automated Discretion. Administration & Society.

5
Legal relations and legal facts in EU e-government
2 2 10 14
Lecture text

Section 1: The Transformation of Administrative Legal Relations

The digitization of public administration in the European Union has fundamentally altered the structure and nature of administrative legal relations. Traditionally, these relations were characterized by a vertical, hierarchical dynamic where the state exercised unilateral authority over the citizen through paper-based commands. In the e-government context, this verticality is increasingly supplemented by horizontal, networked relations. The concept of "legal communication" replaces the notion of "unilateral decree." Under the Single Digital Gateway Regulation (2018/1724), the legal relation is initiated not by a physical visit to a government office but by a digital request via a portal. This shift dematerializes the locus of the legal relation, detaching it from the physical territory of the administrative building and relocating it to the digital "front office," where the interaction is governed by interface design as much as by administrative procedure laws (Hoffmann-Riem, 2017).

A defining feature of digital legal relations is "interactivity." Unlike the static paper form, digital platforms allow for real-time feedback loops. The legal relation becomes a dynamic process of data exchange rather than a single event of submission. For instance, in the "Once-Only" model, the legal relation is no longer bilateral (citizen-state) but multilateral (citizen-source authority-requesting authority). When a citizen requests a service, they trigger a cascade of secondary legal relations between administrative bodies that exchange evidence on their behalf. This "backend" legal relation, often invisible to the user, is strictly regulated by the Interoperable Europe Act to ensure that the data transfer is lawful and confined to the specific administrative purpose (Kalvet et al., 2019).

The "subjective right to digital interaction" is emerging as a new element of the legal relation. Citizens increasingly possess a legally enforceable right to communicate with the administration electronically. The eIDAS Regulation (910/2014) mandates the mutual recognition of electronic identities, effectively creating a cross-border legal relation where a Spanish citizen can validly identify themselves to a German authority using their national eID. This creates a "transnational administrative legal relation" that bypasses the traditional requirement of physical presence or nationality-based credentials. The legal bond is established through cryptographic verification rather than physical recognition, fundamentally changing the evidentiary basis of the relationship (Graux, 2015).

The "duty of availability" transforms the state's obligations within the legal relation. In the analog world, administrative offices had opening hours. In the digital world, the legal relation is presumed to be available 24/7. If a digital service is down due to technical failure, and a citizen misses a deadline, legal questions of "technological force majeure" arise. Emerging jurisprudence suggests that the administration bears the risk of technical failure. Therefore, the state has a positive legal duty to maintain the availability of the digital infrastructure. A server crash is no longer just a technical glitch; it is a breach of the administrative legal relation, potentially giving rise to state liability (Galetta, 2019).

The "automation of will" challenges the traditional concept of the administrative act. In automated legal relations, the "will" of the administration is encoded in software rules. When a traffic camera issues a fine, the legal relation is established by a sensor and an algorithm, not a human official. Legal theory addresses this by attributing the "electronic will" to the public authority that deployed the system. The legal relation is thus mediated by code ("Code is Law"), but the legal responsibility remains human. This requires a precise legal definition of "automated administrative acts" to ensure they carry the same presumption of validity as human decisions (Zouridis et al., 2020).

"Trust" becomes a legal constitutive element of the relation. In traditional relations, trust was institutional (trust in the seal). In digital relations, trust is technical (trust in the certificate). The legal framework regulates this trust through "Trust Services" (eIDAS). A legal relation formed via a Qualified Electronic Signature enjoys a higher evidentiary status than one formed via a simple email. The law differentiates between "trusted" and "non-trusted" digital relations, assigning different legal effects to each. This hierarchy of trust determines the binding nature of the digital interaction (Dumortier, 2017).

The "transparency" of the legal relation is mandated by the Open Government directive. The digital legal relation is legally required to be "observable." Citizens have a right to see the status of their application (tracking) and, increasingly, the logic of the decision (algorithmic transparency). This changes the power dynamic; the administration cannot hide behind bureaucratic opacity. The digital trace of the legal relation creates a permanent record of the interaction, which serves as a "legal fact" available for scrutiny by the citizen, the ombudsman, or the court (Meijer, 2012).

"Data protection" serves as a boundary condition for the digital legal relation. The processing of personal data is inherent to the relation. The GDPR imposes a duty of "lawfulness, fairness, and transparency." This means the legal relation cannot exist outside the specific legal basis for data processing. If the administration uses data collected for a tax purpose to enforce a parking fine without a legal basis, the secondary legal relation is void. The "purpose limitation" principle compartmentalizes administrative legal relations, preventing the state from treating the citizen as a single, unified data object (Hijmans, 2016).

"Consent" in administrative legal relations is complex. While private law relations are consensual, administrative relations are often mandatory. However, e-government introduces "consent-based" services (e.g., proactive notifications). The Data Governance Act introduces "data altruism," creating a voluntary legal relation where the citizen donates data for the public good. This introduces a "contractual" element into the administrative sphere, where the citizen enters a legal relation with the state not as a subject of authority but as a partner in data sharing (Micheli et al., 2020).

The "standardization" of the legal relation is driven by the European Interoperability Framework (EIF). To make cross-border relations possible, the legal concepts must be aligned (semantic interoperability). A "marriage" in the legal database of one country must map to "marriage" in another. The regulation of "Core Vocabularies" creates a standardized legal ontology. This ensures that when a legal relation travels across borders (e.g., moving residence), it retains its meaning and legal effects, preventing "semantic loss" that could invalidate the citizen's rights (Peristeras et al., 2009).

"Liability" in the networked legal relation is distributed. If a decision is based on erroneous data fetched from another authority, who is liable? The "source" of the data or the "consumer"? The Single Digital Gateway Regulation attempts to clarify this by assigning liability for data accuracy to the authentic source. This creates a "chain of trust" where the legal relation relies on the integrity of upstream data providers. The administration entering the relation with the citizen is legally entitled to rely on the "presumption of accuracy" of data from other competent authorities (Schmidt, 2020).

Finally, the "temporal dimension" of the legal relation is altered. Digital relations are instantaneous but also archived indefinitely. The "Right to be Forgotten" clashes with the administrative duty of archiving. The legal relation persists in the form of "log files" long after the transaction is finished. E-government law regulates the "lifecycle" of the digital legal relation, determining when the digital trace must be deleted to restore the citizen's privacy, thus ending the "latent" legal relation maintained by the data storage (Kuner et al., 2017).

Section 2: Electronic Documents and Digital Legal Facts

A "legal fact" (fait juridique) is an event or action that triggers legal consequences—creating, modifying, or extinguishing rights. In the e-government ecosystem, the primary legal fact is the "Electronic Document." The eIDAS Regulation (Article 46) establishes a fundamental rule: an electronic document shall not be denied legal effect and admissibility as evidence in legal proceedings solely on the grounds that it is in electronic form. This "non-discrimination principle" transforms digital data—a PDF, an XML file, a database entry—from mere information into a valid legal fact capable of proving administrative acts (e.g., a birth certificate or a tax clearance) (Mason, 2016).

The Electronic Signature is the legal fact that attributes authorship and will to a document. Under eIDAS, a "Qualified Electronic Signature" (QES) has the equivalent legal effect of a handwritten signature. It creates a presumption of integrity (the document hasn't changed) and authenticity (it really comes from the signer). This digital fact replaces the physical seal or wet signature. In administrative proceedings, the presence of a QES on a document is a "conclusive legal fact" regarding the identity of the applicant, shifting the burden of proof to anyone who challenges it. This certainty is the bedrock of digital administrative transactions (Namou, 2016).

Electronic Time Stamps create the legal fact of "existence at a specific time." In administrative law, deadlines are critical (e.g., appeals, tenders). An eIDAS-qualified electronic time stamp provides a presumption of the accuracy of the date and time it indicates and the integrity of the data bound to that time. This "objective digital time" replaces the subjective "date stamp" of the mailroom clerk. It resolves disputes about timeliness by providing an irrefutable cryptographic fact that a document existed and was submitted before the deadline, eliminating the ambiguity of network latency (Biasiotti, 2017).
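The evidentiary logic of a time stamp can be shown in a minimal sketch: a trusted time is bound to a hash of the document, so timeliness can later be checked against a deadline. Real eIDAS time stamps are signed RFC 3161 tokens issued by a qualified trust service; this stand-alone version only illustrates the binding.

```python
import hashlib
from datetime import datetime, timezone

def timestamp(document: bytes, now: datetime) -> dict:
    """Bind a document digest to a point in time (toy stand-in for a TSA token)."""
    return {
        "digest": hashlib.sha256(document).hexdigest(),  # integrity anchor
        "time": now,                                     # existence-at-time fact
    }

def submitted_in_time(token: dict, document: bytes, deadline: datetime) -> bool:
    """The token proves this exact document existed before the deadline."""
    digest_matches = hashlib.sha256(document).hexdigest() == token["digest"]
    return digest_matches and token["time"] <= deadline
```

Note that the check fails in two distinct ways: a late submission, or a document that differs from the one originally stamped, mirroring the twin presumptions of time accuracy and data integrity.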

Electronic Seals serve as the digital legal fact of "institutional origin." Unlike a signature (linked to a person), a seal is linked to a legal person (e.g., the "Ministry of Justice"). It proves that a document (e.g., a digital court judgment or an automated certificate) originated from a specific public body. This legal fact is essential for the "Once-Only Principle," as it allows a receiving authority to automatically verify the provenance of a piece of evidence fetched from a database. The seal guarantees that the data is an "authentic administrative act" rather than a forged file (Dumortier, 2017).

"Log Files" (Audit Trails) are the legal facts of "process." Every interaction in an e-government system generates logs: who accessed what data, when, and from where. In data protection law and administrative review, these logs are critical legal facts. They prove "processing operations." If a citizen claims their data was misused, the log file provides the factual basis for establishing liability. The integrity of these logs is paramount; if logs can be altered, the "factual history" of the administration is compromised. Therefore, secure logging is a legal requirement under the GDPR and NIS2 Directive (Accorsi, 2013).

"Websites" as legal facts. The publication of a law or notice on an official website constitutes "promulgation." In many Member States, the electronic official journal is now the only legally valid version. The content of the website at a specific moment is a legal fact that determines the rights and duties of citizens. "Website Authentication Certificates" (QWACs) under eIDAS provide the legal certainty that the website is genuine. This transforms the URL into a "territory" of legal validity, where the information displayed constitutes binding administrative communication.

"Verifiable Credentials" (in the European Digital Identity Wallet) are portable legal facts. A digital driving license stored in the wallet is a set of signed data attributes. When shared with a police officer or a car rental agency, this data exchange constitutes the "presentation of a legal fact." The legal value lies not in the display on the screen (which can be faked) but in the cryptographic proof exchanged in the background. This separates the "visual representation" from the "cryptographic reality" of the legal fact (Alves et al., 2022).

"Notifications" via Electronic Registered Delivery Services. The delivery of a decision triggers the appeal deadline. eIDAS regulates "electronic registered delivery services" (e-Delivery). Data sent and received using such a service enjoys the presumption of the integrity of the data, the sending by the identified sender, and the receipt by the identified addressee. The "electronic receipt" generated by the system is a legal fact proving that the administrative act successfully entered the sphere of the citizen, fulfilling the legal requirement of notification (Poullet, 2009).

"Database Entries" in Base Registries. The entry of a name in the civil registry is a "constitutive legal fact" (e.g., creating legal personality). In e-government, the database record is the legal fact. The "single version of the truth" principle means that the digital record in the base registry overrides conflicting paper documents. If the digital land registry shows ownership, that digital entry is the legal fact that proves title. The law regulates the "finality" of these digital records to ensure legal certainty (Schmidt, 2020).

"Smart Contracts" generate "automated legal facts." A smart contract on a blockchain can automatically verify a condition (e.g., "flight delayed") and execute a result (e.g., "pay compensation"). The blockchain record of this execution is an immutable legal fact. It proves that the obligation was performed. The Data Act recognizes the legal admissibility of smart contracts, treating their execution logs as valid evidence of the performance of a contract or administrative duty (De Filippi & Wright, 2018).

"Digital Archives" and Preservation Services. A digital document must remain a valid legal fact for decades. Electronic preservation services use technology to extend the trustworthiness of the qualified electronic signature beyond its technological validity period. They create a "preservation evidence" token. This legal fact ensures that a digital will or land deed signed today remains a valid proof of right fifty years from now, bridging the gap between the ephemeral nature of technology and the permanence of law.

Finally, the "Admissibility" of digital legal facts is harmonized. Courts cannot reject evidence solely because it is electronic. However, the weight of the evidence depends on its security level (e.g., Qualified vs. Advanced signature). This creates a "hierarchy of digital facts," where state-certified cryptographic proofs (Qualified) act as "irrefutable facts" (unless proven forged), while standard emails act as "simple evidence" subject to judicial assessment.

Section 3: The Automated Administrative Act

The "Automated Administrative Act" (AAA) is a decision issued by a public authority that is produced entirely by an automated system without human intervention (e.g., an automatic tax assessment or traffic fine). Legally, this object challenges the traditional definition of an administrative act, which presupposes a human will and discretion. In EU law, the legal nature of an AAA is determined by its "attribution." The decision is legally attributed to the competent authority that deployed the algorithm. The "will" of the administration is found in the programming of the rules (the code) and the decision to deploy the system. Thus, the algorithm is the "medium," but the agency remains the "author" of the legal act (Zouridis et al., 2020).

The Validity of an AAA depends on its compliance with the "principle of legality." The algorithm must accurately translate the statutory rules into code. If the code deviates from the law (e.g., by using a wrong tax rate), the resulting administrative acts are unlawful. This creates a new ground for judicial review: "coding errors." Courts must examine whether the "digital logic" matches the "legal logic." Unlike human error, a coding error affects all decisions made by the system, creating a risk of "systemic illegality" that can void thousands of administrative acts simultaneously (Coglianese & Lehr, 2017).
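The review of "coding errors" can be illustrated as a test comparing the coded rule against the statutory rule. The 19% rate and the incomes below are invented; the instructive feature is that a single wrong constant makes every automated act unlawful, which is the "systemic illegality" risk.

```python
STATUTORY_RATE = 0.19  # assumed rate prescribed by a hypothetical statute

def tax_due_correct(income: float) -> float:
    """The 'legal logic': a faithful translation of the statutory rule."""
    return round(income * STATUTORY_RATE, 2)

def tax_due_buggy(income: float) -> float:
    """The deployed 'digital logic' with a coding error: wrong rate."""
    return round(income * 0.21, 2)

def audit(coded_rule, cases) -> list:
    """Judicial-review style check: where does digital logic deviate from legal logic?"""
    return [income for income in cases
            if coded_rule(income) != tax_due_correct(income)]
```

Run against any set of incomes, the buggy rule fails on all of them, while a faithful implementation fails on none, showing why a coding error scales in a way a clerk's mistake cannot.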

Discretion presents a major hurdle. Traditional administrative law allows officials to use discretion in complex cases. Algorithms are deterministic; they follow rigid rules. Therefore, AAAs are generally permissible only for "bound administration" (where the law dictates a clear result, e.g., math-based tax calculation). For discretionary decisions (e.g., child welfare), EU law (GDPR Art 22) and national administrative codes typically prohibit fully automated decisions, requiring a "human in the loop" to exercise the necessary judgment. The AAA in discretionary fields is thus legally downgraded to a "draft" or "proposal" subject to human ratification (Citron, 2007).

The Duty to Give Reasons (motivation) is a constitutive element of a valid administrative act. The citizen must know why a decision was taken to exercise their right of appeal. For AAAs, this requires "Explainable AI" (XAI). A decision based on a "black box" neural network that cannot explain its output is legally void because it violates the duty to give reasons. The administration must be able to provide the "meaningful information about the logic involved" (GDPR Art 15). This legal requirement effectively bans the use of unexplainable "deep learning" AI for binding administrative acts (Wachter et al., 2017).

Notification of AAAs is often automated. The legal fact of notification occurs when the decision enters the citizen's digital mailbox. Some jurisdictions apply a "fiction of service": the document is deemed served 48 hours after upload, regardless of whether the citizen opened it. This shifts the burden of monitoring to the citizen. However, EU law requires that the notification method be effective. If the citizen has not consented to digital delivery, the AAA may be ineffective until served physically. The validity of the digital notification conditions the start of the appeal period (Poullet, 2009).

Correction of Errors in AAAs requires specific legal mechanisms. Since automation can scale errors rapidly (e.g., a "robo-debt" scandal), the law must provide for "automated rectification." If a bug is found, the administration has a duty to ex officio review and correct all affected decisions, not just those appealed. The principle of good administration imposes a proactive duty of correction on the user of the automated system, recognizing the structural power imbalance created by the machine.

Signatures on AAAs are often replaced by "Electronic Seals." A human official cannot sign thousands of automated decisions per second. The eIDAS Regulation allows the use of a "Qualified Electronic Seal" of the legal person (the agency). This seal provides the legal guarantee of origin and integrity required for the act to be valid. It replaces the "wet signature" of the civil servant with the "cryptographic stamp" of the institution, depersonalizing the authority of the act (Graux, 2015).

Liability for AAAs follows the regime of state liability. If an automated system causes damage (e.g., wrongful denial of benefits leading to eviction), the state is liable. The state cannot use "software failure" or "vendor error" as a defense against the citizen. This reinforces the principle that the state guarantees the functioning of its tools. The internal recourse of the state against the software vendor is a separate contractual matter. The citizen's legal relation is solely with the public authority (Galetta, 2019).

Proportionality in automated enforcement. AAAs are often used for enforcement (e.g., speed cameras, automatic tax penalties). The principle of proportionality applies to the design of the system. An algorithm that automatically issues maximum fines without considering mitigating circumstances may be unlawful. The legal framework requires that the automated system be programmed to recognize "exceptions" or "outliers" and flag them for human review, ensuring that automation does not lead to "administrative rigidity" (Brauneis & Goodman, 2018).

Constitutional Due Process. The use of AAAs must respect the right to a fair hearing. If a system automatically cuts off a benefit based on a data match (e.g., "income detected"), the citizen must have a chance to contest the data before the negative effect occurs. The "presumption of innocence" applies to administrative algorithms: the system should not presume fraud based on a statistical correlation. Due process requires that the AAA be suspended pending the resolution of a dispute.

Transparency of the Algorithm. The AI Act and Open Government laws increasingly mandate that the "source code" or the "rules" of the AAA system be public. This allows civil society to audit the legality of the administrative logic. The legal status of the algorithm shifts from "internal working document" to "public regulation," subject to the same scrutiny as a published law.

Finally, the Future of the AAA. As AI evolves, we may see "Personalized Administrative Acts" where the law is tailored to the individual situation of the citizen (micro-directives). The legal challenge will be to maintain the principle of "equality before the law" (treating like cases alike) in a system capable of infinite individualization.

Section 4: Interoperability as a Generator of Legal Relations

Interoperability is the mechanism that connects disparate legal subjects and objects, creating new legal relations. In EU law, interoperability is not just technical; it is a "legal enabler." The Interoperable Europe Act establishes a framework where the ability to share data creates a "duty to cooperate." When two administrations (e.g., in Poland and France) connect their systems, they enter into a "Cross-Border Administrative Relation." This relation is governed by Service Level Agreements (SLAs) and Memoranda of Understanding (MoUs) that define liability, data governance, and costs. These agreements are the "treaties" of the digital administrative space (Misuraca et al., 2010).

"Semantic Interoperability" creates "shared legal concepts." To exchange data, a "university degree" in Country A must legally map to a "university degree" in Country B. The Core Vocabularies provide this mapping. When this interoperability is established, it creates a "legal equivalence" between the administrative facts of different states. The interoperability layer acts as a "legal translator," converting the legal effect of a document in one jurisdiction into a valid legal effect in another. Without this semantic bridge, the cross-border legal relation is impossible (Peristeras et al., 2009).

The "Once-Only Principle" (OOP) creates a "triangular legal relation." 1. The Citizen requests a service. 2. The Requesting Authority asks the Source Authority for evidence. 3. The Source Authority sends the evidence. This replaces the traditional bilateral relation (Citizen brings paper to Authority). The legal critical point is the "mandate": the citizen must usually give explicit consent or mandate for the authorities to talk to each other. This digital mandate is a legal act that authorizes the cross-border flow of personal data, legitimized under the SDG Regulation (Krimmer et al., 2017).

"Trust Domains" are the legal spaces created by interoperability. Within a trust domain (e.g., the TESTA network), participating entities agree to recognize each other's digital credentials. This creates a "circle of trust" where legal relations are expedited. Entities outside the trust domain (e.g., private actors or non-EU states) do not enjoy this privileged legal status. Access to the trust domain is a regulated legal privilege, requiring compliance with strict security and governance rules.

"Technical Errors" in interoperability generate complex liability relations. If data is corrupted during transmission between State A and State B, who is liable to the citizen? The SDG Regulation allocates liability: the emitting authority is responsible for the accuracy of the source data; the receiving authority is responsible for the processing; the Commission (or network provider) is responsible for the availability of the hub. This "fragmented liability" requires precise technical logging to determine where the legal fault lies (Schmidt, 2020).

"License Interoperability" governs the relation between open data/software creators and users. If State A releases software under the EUPL (European Union Public License) and State B modifies it, the license terms dictate the legal relation. The EUPL is designed to be "interoperable" with other open source licenses (like GPL), ensuring that code can be legally reused and mixed across administrations. This legal interoperability of licenses prevents "copyright silos" in the public sector (Hugenholtz, 2013).

"Digital Wallets" as interoperability nodes. The EUDI Wallet will act as a universal adapter. It allows the citizen to present credentials to any relying party (public or private) that accepts the standard. This creates a "universal acceptance" obligation for large platforms and public bodies. The legal relation is no longer bilateral negotiation of credentials but a standardized "presentation" protocol. The interoperability standard itself becomes a source of law, dictating the technical terms of the legal interaction.

"Sandboxes" create "temporary legal relations." Within an interoperability sandbox, administrations can exchange data without fully complying with all standard rules (under strict supervision) to test feasibility. This creates a "probationary legal space." The legal facts generated in the sandbox do not necessarily have full binding force outside it. This allows for legal experimentation without creating permanent precedents or liabilities (Ranchordás, 2019).

"Governance Boards" (e.g., the Interoperable Europe Board) are the regulators of these relations. They decide which standards become mandatory. Their decisions (e.g., endorsing a specific data model) have downstream legal effects on national procurement and IT architecture. The relation between the Board and the Member States is one of "co-regulation," where technical experts define the practical boundaries of the legal obligations.

"Legacy Systems" act as a barrier to new legal relations. If a national registry uses an obsolete database that cannot export data, it cannot enter the "Once-Only" network. The Interoperability Act creates a "duty to modernize" or to build "wrappers" (APIs) around legacy systems. This turns technical debt into a legal compliance issue. The state has a duty to ensure its infrastructure is capable of entering into modern digital legal relations.

"Cross-sectoral Interoperability" (e.g., between Health and Tax) is the most difficult. The legal relations between these sectors are often blocked by "purpose limitation" laws. Interoperability solutions (like the Data Governance Act) provide the legal gateways to cross these silos securely. The legal relation shifts from "forbidden" to "permitted under conditions" (e.g., anonymization).

Finally, Interoperability is Sovereignty. By defining its own interoperability standards, the EU defines the "rules of the road" for its digital space. Non-EU vendors must conform to these rules to enter the market. The interoperability framework acts as a "digital border," filtering the types of technical and legal relations that can occur within the Union.

Section 5: Digital Evidence and Procedural Law

Digital Evidence is the "procedural legal fact." In administrative and judicial proceedings, facts must be proven. The traditional hierarchy of evidence favored paper originals. E-government law dismantles this hierarchy. The eIDAS Regulation (Article 46) and the e-Justice framework establish the principle of "non-discrimination" of electronic evidence. A court cannot refuse an email or a database log solely because it is digital. However, the probative value (weight) of the evidence depends on its trustworthiness (Mason, 2016).

The "Presumption of Accuracy" for Qualified Trust Services is a procedural game-changer. A document signed with a QES benefits from a reversal of the burden of proof. The party challenging the signature must prove it is forged. This makes QES-signed documents "privileged evidence," similar to a notarial deed. In e-government, this allows automated systems to generate documents (e.g., tax certificates) that are "self-authenticating," streamlining administrative procedures by removing the need for manual verification (Biasiotti, 2017).

"Data Integrity" is the core evidentiary requirement. Digital evidence is fragile; it can be altered without a trace. Electronic Time Stamps and Seals provide the legal guarantee of integrity. They freeze the "state of facts" at a specific moment. In a dispute (e.g., "I submitted my tax return on time"), the valid timestamp is the decisive legal fact. Without it, the digital file is mere hearsay. Therefore, the use of qualified timestamps is a standard of care for preserving digital evidence in the public sector.

"The Chain of Custody" must be digital. When evidence moves from the citizen to the portal, to the archive, and then to the court, its integrity must be maintained. "eDelivery" services provide this chain. The "evidence of delivery" token proves that the document sent is identical to the document received. This technical log is the "procedural glue" that holds the case together. If the chain is broken (e.g., data moved via insecure USB), the legal value of the evidence collapses (Velicogna, 2017).

"Dynamic Evidence" (e.g., a webpage) poses a challenge. A webpage changes. To use it as evidence, it must be "staticized." Web archiving services or "website preservation" trust services create a legally admissible snapshot. This transforms the fluid content of the web into a fixed "document" that can be attached to a case file. The legal fact is not the website itself, but the "authenticated snapshot" of it.

"Blockchain as Evidence". A ledger entry is a high-integrity timestamped record. Some jurisdictions (e.g., Italy, France) have passed laws explicitly recognizing the evidentiary value of distributed ledgers. The EU's DLT Pilot Regime and EBSI framework support this. The blockchain acts as a "shared truth," reducing the need for discovery and expert witnesses to verify transaction histories. The consensus mechanism provides the "procedural certainty" of the fact.

"Machine-Generated Evidence" (IoT data, sensor logs). Automated enforcement (e.g., speed cameras) relies on sensor data. This evidence is usually accorded a "presumption of regularity" if the device is type-approved and calibrated. The legal fact is the "sensor reading." The citizen can challenge it, but the burden is high. The calibration certificate (often digital) is the meta-evidence that validates the sensor data.

"Cross-Border Taking of Evidence". The Regulation on the Taking of Evidence (2020/1783) facilitates the request for digital evidence from another Member State. It encourages the use of direct digital transmission between courts. The "legal fact" gathered in Germany must be admissible in France. The principle of mutual trust implies that evidence lawfully collected in one state should generally be admitted in another, subject to fundamental rights.

"The Right to a Digital Defense". If the administration uses digital evidence against a citizen (e.g., "our database shows you earned X"), the citizen must have access to that digital evidence to contest it. The principle of "equality of arms" requires that the citizen be given the metadata and logs, not just the printout. If the evidence is "black box" output from an AI, the lack of explainability may render the evidence inadmissible as a violation of the right to a fair trial.

"Forensics". The validity of digital evidence often requires forensic analysis (hashing, metadata analysis). The "expert witness" becomes a central figure in establishing the digital legal fact. However, e-government aims to reduce reliance on experts by using "Qualified" services (QES, etc.) that create prima facie validity, allowing judges to accept the evidence without deep technical analysis.

"Data Minimization" in Evidence. The court should only see the relevant data. Technologies like "Zero-Knowledge Proofs" allow a party to prove a fact (e.g., "I have enough funds") without revealing the underlying data (bank balance). This creates a "minimalist legal fact"—a binary proof (true/false) that satisfies the evidentiary burden while maximizing privacy.

Finally, the "Digital File" (dossier) becomes the sole authentic record. Many Member States have moved to mandatory e-filing in courts. The paper file is a copy; the digital file is the original. The "legal reality" of the case exists on the server. The rules on "electronic archiving" determine the long-term survival of these legal facts, ensuring that the history of justice is preserved for future generations.

Questions


Cases


References
  • Alves, E., et al. (2022). The European Digital Identity Wallet. European Commission.

  • Accorsi, R. (2013). Secure Business Process Engineering. IEEE.

  • Biasiotti, M. A. (2017). Evidence in the digital age. Digital Evidence and Electronic Signature Law Review.

  • Bockting, S., & Scheel, H. (2016). The implementation of e-procurement in the EU. ERA Forum.

  • Citron, D. K. (2007). Technological Due Process. Washington University Law Review.

  • Coglianese, C., & Lehr, D. (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. Georgetown Law Journal.

  • De Filippi, P., & Wright, A. (2018). Blockchain and the Law. Harvard University Press.

  • Dumortier, J. (2017). The European Regulation on Trust Services. Digital Evidence Law.

  • European Commission. (2019). High Value Datasets.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration. European Public Law.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? Verfassungsblog.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Hoffmann-Riem, W. (2017). Legal Framework for the Digital Transformation. Archiv des öffentlichen Rechts.

  • Hugenholtz, B. (2013). The PSI Directive. Amsterdam Law School Research Paper.

  • Huijboom, N., & Van den Broek, T. (2011). Open data. Telematics.

  • Kalvet, T., et al. (2019). The Once-Only Principle. TOOP.

  • Kotzinos, D., et al. (2011). Cross-border e-Government services. IJEGR.

  • Krimmer, R., et al. (2017). The Once-Only Principle. IOS Press.

  • Kuner, C., et al. (2017). Machine learning and the GDPR. IDPL.

  • Mason, S. (2016). Electronic Evidence. University of London.

  • Meijer, A. (2012). Open government. IRAS.

  • Micheli, M., et al. (2020). Emerging models of data governance. JRC.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe.

  • Namou, K. (2016). The value of electronic signatures. Computer Law & Security Review.

  • Peristeras, V., et al. (2009). Semantic interoperability. Social Science Computer Review.

  • Poullet, Y. (2009). E-Government and the Information Society. International Review of Law, Computers & Technology.

  • Ranchordás, S. (2019). Experimental Regulations. William & Mary Bill of Rights Journal.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Velicogna, M. (2017). e-Justice in Europe. Utrecht Law Review.

  • Wachter, S., et al. (2017). Counterfactual Explanations. Harvard Journal of Law & Technology.

  • Wegrich, K. (2009). The administrative burden reduction policy. Better Regulation.

  • Zouridis, S., et al. (2020). Automated Discretion. Administration & Society.

Topic 6: Key institutions of EU e-government
Lecture: 2 h · Seminar: 2 h · Independent: 10 h · Total: 14 h
Lecture text

Section 1: The European Commission as the Strategic Executive

The institutional architecture of EU e-government is centered around the European Commission, which acts as the primary strategist, legislative initiator, and financial manager of digital transformation. Unlike a national government with a single "Ministry of Digital Affairs," the Commission distributes e-government responsibilities across several Directorates-General (DGs), reflecting the cross-cutting nature of the domain. The Directorate-General for Informatics (DG DIGIT) serves as the operational heart of this system. Historically responsible for the Commission's internal IT, DG DIGIT has evolved into a driver of public sector interoperability across the Union. It manages the "Interoperable Europe" initiative (formerly ISA²), fostering cross-border digital solutions and defining common standards. DG DIGIT’s role is dual: it transforms the Commission into a digital administration while simultaneously coordinating the digital convergence of Member States through technical frameworks and funding programs (Mergel, 2019).

Working alongside DIGIT is the Directorate-General for Communications Networks, Content and Technology (DG CONNECT). This DG is responsible for the broader "Digital Single Market" strategy, including connectivity, artificial intelligence, and the data economy. DG CONNECT sets the policy horizon, drafting key legislation like the Data Governance Act and the eIDAS Regulation. While DG DIGIT focuses on the "engine room" of administrative interoperability, DG CONNECT focuses on the "regulatory environment" that enables digital services to flourish. It manages large-scale funding instruments like the Digital Europe Programme, which provides the necessary capital for deploying e-government infrastructures like the European Blockchain Services Infrastructure (EBSI). The synergy between these two DGs ensures that e-government policy is integrated with the wider industrial and economic goals of the Union (Domorenok, 2019).

The Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (DG GROW) plays a critical specific role as the guardian of the Single Digital Gateway (SDG). Because e-government is legally anchored in the Internal Market (Article 114 TFEU), DG GROW oversees the implementation of the digital procedures required to facilitate the free movement of businesses and citizens. It chairs the Single Digital Gateway Coordination Group, a key governance body that harmonizes the user experience of national portals. DG GROW’s involvement ensures that e-government is not treated merely as an IT project, but as a mechanism for reducing administrative burdens and enforcing Single Market rights. This economic focus prevents digitization from becoming an end in itself, grounding it in the practical needs of cross-border users (Schmidt, 2020).

The Secretariat-General of the Commission ensures overall coherence. It oversees the "Better Regulation" agenda, mandating "digital-ready" checks for all new EU legislation. This institutional mechanism ensures that no new policy is adopted without considering its digital implementation. The Secretariat-General coordinates the "Inter-service Group on Public Administration Quality and Innovation," forcing different policy silos (health, transport, justice) to align their digital strategies. This central coordination is vital to prevent fragmentation, ensuring that the "Once-Only Principle" is applied consistently across different policy domains.

The Joint Research Centre (JRC) acts as the scientific arm of the Commission, providing evidence-based support for e-government policies. Its dedicated units research the impact of AI in the public sector, digital governance models, and data ecosystems. The JRC’s reports (e.g., on the "API State" or "Digital Government Insight") provide the intellectual foundation for legislative proposals. By translating complex technological trends into policy options, the JRC bridges the gap between academic research and bureaucratic decision-making, ensuring that EU institutions remain ahead of the technological curve (Misuraca et al., 2010).

The Commission also acts as a "platform provider." Through entities like the Publications Office of the European Union, it manages fundamental assets like the "Joinup" platform. Joinup is the central repository for interoperability solutions, open-source software, and semantic assets (Core Vocabularies). It serves as a knowledge hub where national administrations can share and reuse code, preventing the duplication of effort. The Commission’s role here is not just regulatory but operational; it provides the digital commons that national institutions rely upon to build their systems.

Financial governance is managed through executive agencies like the European Health and Digital Executive Agency (HaDEA). HaDEA manages the implementation of the Digital Europe Programme, organizing calls for tenders and grants. It acts as the financial interface between the policy-making DGs and the market of GovTech providers and national agencies. This separation of policy (DGs) and implementation (Agencies) ensures professional management of the massive funds allocated to digital transformation, allowing the Commission to focus on strategic direction.

The Commission represents the EU in international digital fora (e.g., OECD, G7). It promotes the "European model" of digital government—characterized by data protection and human-centricity—on the global stage. This external institutional role is crucial for establishing global standards that align with EU law. By speaking with one voice, the Commission ensures that the technical protocols governing the global internet (e.g., in standard-setting bodies) remain compatible with European values and legal requirements.

Within the Commission, the Chief Information Officer (CIO) of the Commission holds a strategic position. The CIO drives the internal digital transformation of the institution (EC Digital Strategy). This internal experience serves as a "living lab" for Member States. The solutions developed for the Commission’s own administration (e.g., EU Login, e-Prior for procurement) are often shared with Member States as reusable building blocks. Thus, the Commission leads by example, proving the viability of the technologies it mandates for others.

The "Legal Service" of the Commission plays a gatekeeping role. It ensures that innovative e-government proposals (like the European Digital Identity Wallet) are compatible with the Treaties and the Charter of Fundamental Rights. In the untested waters of digital law, the Legal Service provides the interpretative certainty needed to proceed. Its rigorous scrutiny ensures that the institutional push for efficiency does not override the rule of law, maintaining the constitutional integrity of the digital state.

The "Data Protection Officer" (DPO) of the Commission ensures internal compliance with data protection rules (Regulation 2018/1725). This institutional mirror to the GDPR ensures that the Commission practices what it preaches. The DPO monitors the Commission’s own large-scale IT systems (like IMI), providing accountability. This internal oversight mechanism is essential for maintaining the moral authority of the Commission as the regulator of the European digital ecosystem.

Finally, the Commission operates through "Comitology Committees." These are committees of national experts chaired by the Commission that adopt implementing acts (e.g., technical specifications for eIDAS). These bodies are the engine room of regulatory alignment, where the high-level political goals of the Commission are translated into binding technical code by national technocrats. This institutional mechanism ensures that EU e-government law remains technically feasible and politically acceptable to the Member States.

Section 2: Decentralized Agencies: eu-LISA and ENISA

While the Commission sets the policy, the operational management of critical pan-European e-government systems is delegated to specialized decentralized agencies. eu-LISA (European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice) is the most significant of these operational bodies. Based in Tallinn, Estonia (a symbolic choice reflecting Estonia's digital leadership), eu-LISA manages the "hard" infrastructure of the EU: the Schengen Information System (SIS), the Visa Information System (VIS), and Eurodac (asylum dactyloscopy database). These systems are the backbone of the borderless Schengen area, processing millions of queries daily. eu-LISA is unique because it is an IT agency with operational responsibility for critical sovereign functions, acting as the "digital border guard" of the Union (Bigo et al., 2012).

The legal mandate of eu-LISA has expanded significantly. Initially a technical operator, it now plays a proactive role in designing the future interoperability architecture of justice and home affairs systems. Under the Interoperability Regulations (2019/817 and 2019/818), eu-LISA is building the "Common Identity Repository" (CIR) and the "European Search Portal" (ESP). These massive databases will interconnect previously separated systems, creating a unified identity management infrastructure for non-EU nationals. This transforms eu-LISA into a central node of the European security state, managing the biometrics and identities of millions, a role that entails immense data protection responsibilities and technical complexity (Geyer, 2019).

ENISA (European Union Agency for Cybersecurity), based in Athens, is the institutional guardian of trust in the digital ecosystem. Its role was significantly strengthened by the Cybersecurity Act (2019), which granted it a permanent mandate. ENISA is responsible for developing "European Cybersecurity Certification Schemes" for ICT products and services (e.g., cloud services, 5G networks). In the context of e-government, these certifications are vital. They provide the "stamp of approval" that allows public administrations to trust digital tools. ENISA acts as a knowledge broker, coordinating the network of national Computer Security Incident Response Teams (CSIRTs) and organizing cyber-exercises to test the resilience of public infrastructure.

ENISA’s role in the eIDAS framework is crucial. It provides technical guidelines and security recommendations for Trust Service Providers (TSPs). While national supervisory bodies audit the TSPs, ENISA harmonizes the security standards they apply. This prevents a "race to the bottom" in security practices. ENISA also maintains the central register of "Smart Card" security certifications. By defining the "state of the art" in cybersecurity, ENISA effectively sets the technical law for e-government, determining which technologies are secure enough to be legal.

The institutional relationship between eu-LISA and ENISA is deepening. As eu-LISA builds critical databases, it relies on ENISA’s security standards to protect them. This cooperation illustrates the "ecosystem" nature of EU agencies. They are not isolated silos but interconnected nodes. However, tensions exist regarding their mandates. eu-LISA is operational and focused on efficiency and borders; ENISA is normative and focused on resilience and security. Balancing these operational imperatives with security requirements is a constant institutional negotiation.

Another relevant agency is BEREC (Body of European Regulators for Electronic Communications). While primarily a telecom regulator, BEREC is pivotal for the infrastructure layer of e-government (broadband, 5G). Without universal connectivity, e-government creates a digital divide. BEREC ensures the consistent application of Open Internet rules (Net Neutrality), guaranteeing that e-government traffic is not discriminated against by ISPs. It also advises on the regulation of digital gatekeepers under the DMA, ensuring that the underlying network infrastructure remains open and competitive for public services.

The European Labour Authority (ELA) is a newer agency with an increasing digital footprint. It manages the EURES portal (European Employment Services), a key e-government service facilitating labor mobility. ELA is also involved in the digitalization of social security coordination (EESSI system). This agency demonstrates how sectoral agencies are becoming e-government actors, digitizing specific domains of the single market. The institutional landscape is thus becoming "polycentric," with every EU agency developing a digital arm.

Cedefop (European Centre for the Development of Vocational Training) and Eurofound play a supporting role by researching the "digital skills" gap in the public sector. Their reports influence the Commission's funding strategies for digital literacy. Without a skilled workforce, the institutional infrastructure of e-government cannot function. These agencies provide the "human capital" intelligence required to staff the digital state.

The accountability of these agencies is a key legal issue. eu-LISA, managing sensitive biometric data, is subject to strict supervision by the European Data Protection Supervisor (EDPS). It must also report to the European Parliament. The "agencification" of e-government raises concerns about technocracy. Decisions about algorithm design or database interoperability are often taken by technical boards within these agencies, shielded from public debate. Institutional law mechanisms (budget discharge, annual reports) are the primary tools for democratic oversight.

These agencies also act as "hubs" for national authorities. eu-LISA’s Management Board consists of representatives from national interior ministries. This governance structure ensures that the agency remains responsive to Member State needs. It creates a "joint ownership" model where the agency is an EU body but operates as a service provider to national governments. This federalist structure helps overcome the reluctance of states to cede control over sensitive security infrastructures.

Operational cooperation is facilitated by "Liaison Officers." Agencies often place experts in Member States or other institutions to ensure smooth communication. This human network complements the technical network. For example, eu-LISA coordinates with Europol and Frontex to ensure that police and border guards can effectively use the central IT systems. This inter-agency cooperation is creating a dense web of "security interoperability" at the EU level.

Finally, the future evolution of these agencies points towards "AI readiness." Both eu-LISA and ENISA are building capabilities to manage AI in the public sector. eu-LISA is exploring the use of AI for border risk analysis, while ENISA is developing security standards for AI. These agencies are transforming from IT operators into "centers of excellence" for emerging technologies, guiding the EU public sector into the next phase of digitalization.

Section 3: Governance Boards and Cooperation Networks

The governance of EU e-government is not hierarchical but networked, relying on a web of "Cooperation Networks" and "Governance Boards." These bodies are the venues where national sovereignty meets EU harmonization. The Interoperable Europe Board (proposed under the Interoperable Europe Act) is the supreme governance body for cross-border digital public services. Replacing the ISA² Committee, this Board is co-chaired by the Commission and a Member State representative. It has a strategic mandate to set the "European Interoperability Agenda." It decides which technical specifications (like data models for invoices) become recommended standards. This Board institutionalizes the political commitment to interoperability, moving it from a project-based activity to a permanent structural function of the EU (Misuraca et al., 2010).

The Single Digital Gateway Coordination Group is the specific governance body for the "Your Europe" gateway. Composed of national coordinators from all Member States, it oversees the practical implementation of the SDG Regulation. Its work is highly granular: it agrees on the quality criteria for web pages, the technical protocols for the "Once-Only" evidence exchange, and the user feedback mechanisms. This group acts as a "problem-solving forum," resolving the friction that arises when 27 different administrative cultures try to merge into a single user interface. It exemplifies "administrative federalism," where execution is national but coordination is supranational (Schmidt, 2020).

The eIDAS Cooperation Network is the guardian of the electronic identity framework. Established by the eIDAS Regulation, it consists of representatives from national eID authorities. Its primary function is the "Peer Review" of national eID schemes. Before a German ID card can be notified and recognized EU-wide, it must be peer-reviewed by experts from other states within this network. This mechanism builds "mutual trust" through technical scrutiny. The network also discusses security breaches and suspension of schemes. It is a "club of experts" that manages the trust architecture of the Union without a central EU certification authority (Graux, 2015).

The CIO Network (Chief Information Officers Network) is an informal but influential high-level group. It brings together the top digital officials from Member States (the national CIOs). They meet to discuss strategic trends (like Cloud policy or AI adoption) and align national digital strategies with EU goals. While it has no legislative power, its "soft power" is immense. Consensus reached in the CIO Network often paves the way for formal legislation. It acts as a "strategic radar" for the EU, identifying emerging challenges and building a shared vision among the leaders of national digital administrations (Homburg, 2008).

The European Data Innovation Board, established by the Data Governance Act, advises the Commission on data sharing practices. It consists of representatives from national data authorities, the EDPB, and ENISA. Its role is to harmonize the conditions for "data altruism" and the re-use of sensitive public data. It prevents the fragmentation of the European data space by ensuring that Member States apply consistent rules to data intermediaries and data spaces. This board is the governance engine of the European Data Strategy.

The Open Data Committee assists the Commission in implementing the Open Data Directive. It plays a crucial role in defining the list of "High-Value Datasets" via implementing acts. This committee represents the Member States' interests in the "commodification" of public data. It negotiates the technical modalities (APIs, formats) that determine the economic usability of government data. Its decisions directly impact the budgets of national agencies (e.g., meteorological offices) that might lose revenue from free data requirements, making it a site of intense political negotiation (Huijboom & Van den Broek, 2011).

The Gateway Coordination Group works closely with the "Once-Only" Technical System (OOTS) Sub-group. This specialized body focuses entirely on the technical architecture of evidence exchange. It defines the "e-Delivery" profiles and the "discovery services" that allow a Finnish authority to find a birth certificate in Portugal. This group bridges the gap between legal obligations and IT reality, composed of technical architects rather than policy generalists. It is where the "code of law" is translated into "software code."
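The role of the discovery services described above can be illustrated with a minimal sketch. Everything here — the registry layout, the endpoint URLs, and the function names — is a hypothetical simplification for teaching purposes, not the actual OOTS data model or an e-Delivery profile:

```python
# Conceptual sketch of an OOTS-style evidence-broker lookup.
# Registry contents and URLs are invented illustrations.

EVIDENCE_REGISTRY = {
    ("PT", "birth_certificate"): "https://oots.example.pt/evidence/birth",
    ("FI", "degree_certificate"): "https://oots.example.fi/evidence/degree",
}

def discover_provider(country: str, evidence_type: str) -> str:
    """Return the (hypothetical) evidence-provider endpoint for a given
    Member State and evidence type, mimicking the role of the OOTS
    discovery services."""
    try:
        return EVIDENCE_REGISTRY[(country, evidence_type)]
    except KeyError:
        raise LookupError(
            f"No registered provider for {evidence_type} in {country}"
        )

# A Finnish authority locating the Portuguese source of a birth certificate:
endpoint = discover_provider("PT", "birth_certificate")
print(endpoint)
```

The point of the sketch is the indirection: the requesting authority never needs to know in advance where evidence lives; it queries a shared discovery layer first and only then contacts the national provider.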

"The European Multi-Stakeholder Platform on ICT Standardisation" advises on the identification of technical specifications for use in public procurement. It brings together Member States, industry, and standard-setting organizations (like W3C, OASIS). Its recommendations allow public authorities to cite "modern" internet standards in tenders without violating procurement neutrality rules. This body ensures that the e-government infrastructure is built on open, interoperable standards rather than proprietary vendor locks.

The Core Vocabularies Working Groups are communities of practice hosted on Joinup. While less formal, these groups of semantic experts define the data models (e.g., "Core Person," "Core Business") that underpin interoperability. Their output becomes the "soft law" standard for data exchange. These groups represent a "bottom-up" governance model, where technical consensus drives legal harmonization.
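The value of a shared data model like "Core Person" can be shown in a few lines. The field names below are simplified assumptions, not the normative Core Vocabulary specification; the sketch only demonstrates why a common minimal schema removes the need for bilateral mappings between 27 national formats:

```python
# Illustrative sketch in the spirit of the SEMIC "Core Person" vocabulary.
# Field names are simplified; the real vocabulary is richer.

CORE_PERSON_FIELDS = {"given_name", "family_name", "date_of_birth"}

def is_valid_core_person(record: dict) -> bool:
    """Check that a record carries at least the shared mandatory fields."""
    return CORE_PERSON_FIELDS.issubset(record)

record_from_portugal = {
    "given_name": "Maria",
    "family_name": "Silva",
    "date_of_birth": "1990-04-12",
    "citizenship": "PT",  # national extension fields remain permitted
}

print(is_valid_core_person(record_from_portugal))
```

Because every administration validates against the same minimal core, a record produced in one Member State is immediately interpretable in another, while national extensions travel alongside without breaking interoperability.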

National Competent Authorities play a dual role. They are domestic regulators but also members of these EU networks. This "double hat" ensures the vertical alignment of EU and national policies. For example, the national agency responsible for trust services supervises local TSPs while simultaneously sitting in the eIDAS Cooperation Network to harmonize supervision practices EU-wide. This structure relies on the "administrative capacity" of national bodies to function effectively.

The "Digital Europe Programme Committee" oversees the funding. It approves the work programs for the Digital Europe funds. This body holds the purse strings, deciding which cross-border projects (like e-Justice pilots or AI testing facilities) get funded. By controlling the investment flow, it shapes the practical development of e-government infrastructure across the Union.

Finally, the governance landscape is shifting towards "Co-creation." The Interoperable Europe Act proposes an "Interoperable Europe Community" to involve GovTech companies, open source communities, and regions. This attempts to open up the closed circle of national bureaucrats to the wider ecosystem of digital innovators. The governance of the future is envisioned as a "platform," facilitating collaboration between public and private actors rather than just inter-governmental negotiation.

Section 4: Regulatory and Oversight Bodies: EDPB and EDPS

In the ecosystem of EU e-government, the processing of personal data is ubiquitous. Consequently, the independent data protection authorities are key institutional actors, acting as the "constitutional courts" of the digital state. The European Data Protection Board (EDPB) is the overarching body, composed of the heads of all national data protection authorities (DPAs) and the European Data Protection Supervisor (EDPS). It ensures the consistent application of the GDPR across the EU. For e-government, the EDPB issues binding decisions and guidelines on issues like the use of facial recognition in public spaces or the processing of health data. Its interpretations of the law set the hard boundaries for what e-government systems can and cannot do (Hijmans, 2016).

The European Data Protection Supervisor (EDPS) has a specific dual role. First, it is the supervisory authority for the EU institutions themselves. It monitors the Commission, eu-LISA, and others to ensure they comply with data protection rules (Regulation 2018/1725). The EDPS has the power to audit these institutions, investigate complaints, and even sanction them. Second, the EDPS acts as a policy advisor to the EU legislator. It issues opinions on every new legislative proposal with a digital component (e.g., the AI Act, eIDAS 2.0). These opinions are influential; they often force the Commission to introduce stronger privacy safeguards into e-government laws before they are passed.

The EDPB’s role in "Consistency Mechanisms" is vital for cross-border e-government. If the French DPA and the German DPA disagree on the legality of a cross-border data exchange system (like the Once-Only Technical System), the EDPB settles the dispute. This ensures that a citizen moving from France to Germany does not face a fragmentation of their privacy rights. The EDPB guarantees the "unity of interpretation" of data rights, which is a prerequisite for a unified digital administrative space.

Prior Consultation is a specific oversight mechanism. Under Article 36 GDPR, public authorities must consult their DPA before deploying high-risk technologies (like profiling algorithms for tax fraud). At the EU level, the Commission consults the EDPS. This "preventive supervision" ensures that privacy risks are mitigated by design before the system goes live. The EDPS reviews the Data Protection Impact Assessments (DPIAs) of large-scale systems like the Schengen Information System, acting as a quality check on the technical architecture of the surveillance state.

The EDPB acts as a check on "Function Creep." Governments have a tendency to reuse data collected for one purpose (e.g., health) for another (e.g., law enforcement). The EDPB rigorously enforces the "purpose limitation" principle. It has issued critical opinions on the interoperability of justice and home affairs databases, warning against the creation of a "Big Brother" database. This institutional friction is a feature, not a bug; it preserves the balance of power between the executive's desire for efficiency and the citizen's right to privacy.

Guidelines and Recommendations issued by the EDPB function as soft law with heavy weight. Guidelines on "Automated Decision Making" clarify the "right to a human in the loop" in administrative procedures. Guidelines on "Video Surveillance" dictate how smart cities can use cameras. National administrations follow these guidelines to avoid liability. Thus, the EDPB effectively writes the detailed "code of conduct" for digital administration across Europe.

The EDPS also hosts the "TechSonar" initiative and monitors technology trends. It recognizes that effective oversight requires technical expertise. The EDPS employs computer scientists and auditors to inspect the algorithms and code of EU systems. This "technological competence" allows the regulator to look under the hood of e-government, moving beyond legal compliance to technical verification of privacy claims.

Coordinated Supervision of large-scale IT systems (SIS, VIS, Eurodac) involves the EDPS and national DPAs working together. Since these systems have a central unit (eu-LISA) and national units, supervision must be shared. The "Coordinated Supervision Groups" ensure that the entire data lifecycle, from the border guard's tablet in Greece to the central server in Strasbourg, is subject to unbroken oversight. This avoids "accountability gaps" in the complex federal structure of EU IT systems.

Sanctioning Power. While the EDPS usually relies on reprimands for EU institutions, it has the power to impose fines. For national administrations, the national DPAs (members of the EDPB) have the power to fine public bodies (though some Member States exempt public bodies from fines). The threat of regulatory action forces public managers to take data protection seriously. The EDPB harmonizes the calculation of these fines to ensure equal treatment.

International Transfers. The EDPB ensures that e-government data does not leak to "unsafe" third countries. Following the Schrems II judgment, the EDPB issued strict guidance on using US cloud providers. This has forced public administrations to reconsider their procurement strategies, pushing for "sovereign cloud" solutions. The EDPB effectively acts as a geopolitical actor, enforcing digital sovereignty through privacy law.

Ethics and Fundamental Rights. The EDPS actively engages with digital ethics (e.g., the Ethics Advisory Group). It argues that "legal" does not always mean "ethical." In the context of e-government, this means questioning not just how to implement facial recognition, but whether it should be used at all in a democratic society. This normative leadership shapes the "moral compass" of EU digital policy.

Finally, the Independence of these bodies is their most critical asset. They are protected by EU law from political interference. This allows them to challenge the Commission and Member States when e-government initiatives threaten fundamental rights. They represent the "counter-power" within the institutional framework, ensuring that the digital state remains a constitutional state.

Section 5: Platforms and Support Structures: Joinup and GovTech

Beyond the high-level political and regulatory bodies, the EU e-government landscape is supported by operational platforms and innovation incubators that function as the "practical institutions" of the domain. Joinup is the central collaborative platform created by the European Commission (DG DIGIT). It serves as a "common repository" for e-government solutions. Joinup hosts thousands of interoperability assets, including open-source software, semantic vocabularies, and guidelines. It operates as a "library of code" for the European public sector. Its institutional role is to facilitate reuse; instead of 27 Member States building 27 different e-procurement modules, they can download and adapt a shared component from Joinup (Codagnone & Wimmer, 2007).

Joinup is organized around "Communities" and "Collections." For example, the "OSOR" (Open Source Observatory) community on Joinup provides news and case studies on open source in government. The "SEMIC" (Semantic Interoperability Community) manages the Core Vocabularies. These communities function as "knowledge networks," connecting practitioners across borders. Joinup provides the technical infrastructure (licensing assistants, repository federation) that turns the abstract principle of "sharing and reuse" into a practical reality. It is the "GitHub of the European public sector."

The GovTech Incubator is a new institutional mechanism under the Digital Europe Programme. It addresses the difficulty public administrations face in procuring innovative solutions from startups. The Incubator is a consortium of national digitalization agencies and private actors. Its goal is to pilot new technologies (like AI or blockchain) in a cross-border context. It provides a "sandbox" environment where rules can be tested and solutions co-created. This institution shifts the EU's role from "regulator" to "venture capitalist," funding the risk of experimentation that national bureaucracies often avoid.

The Common Services Platform (CSP) is the technical heart of the GovTech Incubator. It allows the scalable deployment of successful pilots. If a pilot for "privacy-preserving analytics" works in one country, the CSP facilitates its rollout to others. This creates a "pipeline" for innovation, moving from local experiment to EU-wide infrastructure. This institutionalizes the "scale-up" process, which has historically been a weakness of EU innovation policy (Mergel, 2019).

European Digital Innovation Hubs (EDIHs) are regional contact points co-funded by the EU. While primarily aimed at SMEs, many EDIHs specialize in "Public Sector Innovation." They act as local consulting bodies, helping municipalities and regional governments adopt e-government technologies (e.g., AI, HPC, cybersecurity). They provide "test before invest" services. EDIHs bridge the gap between Brussels-based policy and the local town hall, ensuring that institutional support reaches the grassroots level of administration.

The "living-in.eu" Movement is a platform for cities and regions. It promotes the "European way of digital transformation" at the local level (smart cities). Signatories commit to principles like open standards and citizen-centricity. This "coalition of the willing" acts as a bottom-up institutional force. The EU supports it to bypass national bottlenecks and engage directly with the cities, where most e-government services are actually delivered. It represents the "municipalist turn" in EU digital governance.

The Open Data Portal (data.europa.eu) is the central access point for European open data. Managed by the Publications Office, it harvests metadata from national open data portals. It acts as a "search engine" for public sector information. This platform is the institutional manifestation of the Open Data Directive. It provides the "discoverability" infrastructure that allows a developer to find and combine data from France and Poland to build a pan-European app.
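The harvesting-and-search role described above can be sketched in a few lines. This is an illustrative toy model, not the actual data.europa.eu harvester, and the field names (`title`, `format`, `country`) are simplified assumptions:

```python
# Toy sketch of a central portal that harvests DCAT-style dataset metadata
# from national catalogues and offers cross-border keyword search.

def harvest(portals):
    """Merge dataset records from several national catalogues into one list."""
    catalogue = []
    for country, datasets in portals.items():
        for ds in datasets:
            catalogue.append({**ds, "country": country})
    return catalogue

def search(catalogue, keyword):
    """Case-insensitive keyword search over titles, across all countries."""
    kw = keyword.lower()
    return [ds for ds in catalogue if kw in ds["title"].lower()]

portals = {
    "FR": [{"title": "Air quality measurements", "format": "CSV"}],
    "PL": [{"title": "Air Quality Sensors Warsaw", "format": "JSON"}],
}
catalogue = harvest(portals)
hits = search(catalogue, "air quality")
print([(d["country"], d["title"]) for d in hits])
```

A developer querying the central catalogue thus finds matching datasets from France and Poland in a single call, which is exactly the "discoverability" function the portal institutionalizes.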

The Academy of the Digital Europe Programme provides training resources. The institutional capacity of Member States is often limited by a lack of skills. The Academy offers courses on interoperability, AI, and cybersecurity for public servants. This "educational institution" aims to upgrade the human capital of the European administration, recognizing that laws and servers are useless without skilled personnel to operate them.

Testing and Experimentation Facilities (TEFs) are specialized infrastructures for AI. They allow public sector bodies to test AI solutions (e.g., for smart cities or health) in real-world conditions with liability safeguards. These physical and virtual facilities are "institutional sandboxes," providing the technical assurance needed to deploy high-risk AI in government.

The European Blockchain Services Infrastructure (EBSI) is a "platform as an institution." It is a network of nodes run by the Commission and Member States. It allows for the verification of credentials (diplomas, notarization) across borders without a central database. EBSI represents a new institutional model: a "Distributed Autonomous Organization" (DAO) of sorts, governed by the European Blockchain Partnership. It replaces the central authority with a consensus mechanism among Member States.

The "User Centricity Principles" platform (part of the Tallinn Declaration monitoring) tracks the performance of national portals. It provides a "dashboard" of compliance with user-centric design. This monitoring platform acts as a "mirror," showing administrations their own performance gaps. It institutionalizes the "voice of the user" in the governance process.

Finally, there is the emerging GovTech "marketplace". The EU aims to create a single market for GovTech solutions. Platforms like Joinup and the GovTech Incubator are steps towards this. The goal is to break the reliance on a few large system integrators and create a diverse ecosystem of European GovTech SMEs. These platforms are the "market-makers," reducing the transaction costs for small companies to sell to foreign governments.

Questions


Cases


References
  • Bigo, D., et al. (2012). The EU's large-scale IT systems. CEPS.

  • Codagnone, C., & Wimmer, M. A. (2007). Roadmapping eGovernment Research. European Commission.

  • Domorenok, E. (2019). The digital agenda of the European Union. Policy and Society.

  • Geyer, F. (2019). Security versus Justice? CEPS.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Homburg, V. (2008). Understanding E-Government. Routledge.

  • Huijboom, N., & Van den Broek, T. (2011). Open data: an international comparison. Telematics and Informatics.

  • Mergel, I. (2019). Digital Transformation of the Public Sector. Public Administration Review.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe 2030.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Schrotter, G., & Hürzeler, C. (2020). The Digital Twin of the City of Zurich for Urban Planning. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science.

7
Special institutions and emerging technologies in EU e-government
2 2 5 9
Lecture text

Section 1: European Digital Infrastructure Consortia (EDICs): A New Institutional Vehicle

The European Digital Infrastructure Consortium (EDIC) represents a novel and specialized legal instrument introduced by the Digital Decade Policy Programme 2030 (Decision (EU) 2022/2481). Designed specifically to overcome the fragmentation of national digital efforts, the EDIC provides a flexible framework for Member States to pool funding and resources for "Multi-Country Projects" (MCPs) that no single country could execute alone. Unlike traditional EU agencies, which are top-down creations of the Union legislated by the Parliament and Council, an EDIC is a "bottom-up" institution created upon the request of at least three Member States. Once established by a Commission Decision, it acquires legal personality recognized across the entire Union. This allows it to own assets, hire staff, and procure infrastructure, acting as a special-purpose vehicle for deploying high-tech public infrastructure.

The governance structure of an EDIC is unique in EU law, characterized by a "member-driven" model. The founding Member States define the statutes, voting rights, and financial contributions, allowing for a tailored governance that reflects the specific needs of the project rather than a one-size-fits-all agency model. The European Commission holds a veto right only on matters concerning EU law compliance or when EU funds are involved, but otherwise, the EDIC operates with significant autonomy. This structure addresses the "sovereignty paradox" in e-government: Member States want shared infrastructure but refuse to cede control to a central Brussels agency. The EDIC solves this by acting as a "consortium of sovereigns," a legal shell that facilitates cooperation without centralization.

A prime example of this institution is the Alliance for Language Technologies (ALT-EDIC). Its mission is to address the scarcity of high-quality European language data for training Large Language Models (LLMs). By pooling data resources from national libraries and archives across France, Germany, and other members, ALT-EDIC aims to build "sovereign" European AI models that preserve linguistic diversity. Legally, the EDIC acts as the data controller and infrastructure operator, negotiating licenses and managing the complex intellectual property rights involved in training AI on public sector data. This illustrates how the EDIC serves as an institutional shield, protecting European cultural assets while enabling the scale required for global technological competitiveness.

Another critical instance is the LDT CitiVERSE EDIC (Networked Local Digital Twins towards the CitiVERSE). This consortium connects "Local Digital Twins"—virtual representations of cities—across the EU. By standardizing the data models and simulation tools used by cities like Valencia, Rotterdam, and Helsinki, the EDIC creates a federated market for smart city solutions. It allows a traffic simulation algorithm developed for one city to be legally and technically portable to another. The EDIC provides the legal certainty required for municipalities to procure and share these complex digital assets across borders, overcoming the procurement silos that typically trap smart city innovations at the local level.

The financing mechanism of EDICs allows for the "blending" of funds. An EDIC can combine contributions from national budgets, the EU's Digital Europe Programme, and even private investment. This financial engineering capacity is crucial for capital-intensive projects like quantum communication or blockchain networks. The legal framework ensures that these mixed funds are audited and managed under EU financial rules, providing the transparency required for public spending while allowing the agility of a project-financed entity. This capability transforms the EDIC into a "public venture capitalist" for strategic digital infrastructure.

The legal liability of an EDIC is limited to its assets, but Member States may accept unlimited liability in the statutes. This flexibility allows states to ring-fence the risks associated with experimental technologies. For instance, if an EDIC deploying experimental AI causes damage, the liability rules defined in its statutes determine whether the cost falls on the consortium or cascades back to the participating states. This "risk partitioning" is essential for encouraging governments to participate in high-risk, high-reward technology projects that would otherwise be paralyzed by risk aversion.

The establishment process involves a two-step legal check. First, the Commission verifies that the proposed EDIC meets the objectives of the Digital Decade (e.g., climate neutrality, digital skills). Second, it checks compliance with state aid rules. Because EDICs intervene in the market (e.g., by buying supercomputing time or developing software), they must not distort competition. The EDIC legal framework includes specific exemptions or streamlined state aid assessments for Multi-Country Projects, recognizing their strategic importance as "Important Projects of Common European Interest" (IPCEI) equivalents in the digital domain.

The EDIC also serves as a mechanism for "variable geometry" integration. Not all 27 Member States need to join every EDIC. This allows a "coalition of the willing" to move faster on specific technologies (like blockchain) without being held back by laggards. However, the legal framework mandates that EDICs remain open to new members on fair and reasonable terms. This "open door" policy prevents the EDIC from becoming an exclusive club, ensuring that the infrastructure it builds remains a potential public good for the entire Union.

In the context of the "Digital Sovereignty" agenda, EDICs are the institutional answer to dependence on non-EU tech giants. By creating a legal entity capable of owning and operating critical infrastructure (like a European cloud federation), the EU creates a "public alternative" to private platforms. The EDIC can enforce strict data localization and security rules in its procurement contracts, ensuring that the digital backbone of the European administration remains under European jurisdiction.

The lifespan of an EDIC can be temporary or indefinite. Some are designed to build a specific infrastructure and then dissolve or transfer it to an agency; others are intended to operate indefinitely. This temporal flexibility allows the institutional landscape of EU e-government to adapt to the lifecycle of technologies. Unlike a Treaty-based agency which is permanent and hard to reform, an EDIC can be "sun-setted" when its technology becomes obsolete, preventing institutional drift.

The EDIC interacts with national law through the "host state" principle. The EDIC has its statutory seat in one Member State (e.g., ALT-EDIC in France) and is governed by that state's law for matters not covered by the EU decision. This anchors the supranational entity in a specific national legal order for issues like employment law and contract enforcement. It requires the host state to grant the EDIC the most extensive legal capacity available to legal persons, ensuring it can operate effectively despite being a hybrid entity.

Finally, the EDIC represents a shift from "legislative harmonization" to "operational collaboration." Instead of just writing laws telling states what to do, the EU provides a legal vehicle for them to do it together. This shift acknowledges that in the field of emerging technologies, writing rules is insufficient; the public sector must build and own the infrastructure to enforce its values. The EDIC is the "builder's permit" for the European digital state.

Section 2: Blockchain Governance: The European Blockchain Services Infrastructure

The European Blockchain Services Infrastructure (EBSI) represents the EU's strategic attempt to create a "public permissioned blockchain" for administrative services. Unlike public blockchains (like Bitcoin) which are permissionless and energy-intensive, EBSI is a closed network where only authorized nodes (run by Member States and the Commission) can validate transactions. This architecture aligns blockchain technology with the legal requirements of e-government: accountability, energy efficiency, and compliance with the GDPR. The governance of this infrastructure is managed by the European Blockchain Partnership (EBP), a cooperation group of 29 countries (EU27 + Norway, Liechtenstein) established by a Ministerial Declaration in 2018.

The legal transition of EBSI is currently underway with the establishment of EUROPEUM-EDIC. This new legal entity will take over the operation of EBSI from the Commission. This move to an EDIC structure (hosted in Belgium) provides EBSI with the legal personality needed to sign contracts, hire staff, and potentially charge fees for usage. It marks the graduation of EBSI from a "pilot project" to a permanent "digital utility" for the EU public sector. EUROPEUM-EDIC acts as the "root of trust" for the blockchain, managing the onboarding of nodes and the governance of the consensus mechanism.

The primary use case for EBSI is the "Verifiable Credentials" model. In this model, the blockchain is not used to store personal data (which would violate GDPR "right to erasure" due to immutability) but to store "cryptographic proofs" (hashes) of documents. For example, a university issues a digital diploma to a student's wallet. The hash of this diploma is recorded on EBSI. When the student presents the diploma to a foreign employer, the employer checks the hash against the EBSI ledger to verify it hasn't been revoked or forged. This "off-chain data, on-chain verification" model is the legally compliant architecture for blockchain in EU government.
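The "off-chain data, on-chain verification" pattern can be sketched as follows. This is a toy in-memory model (a Python set standing in for the append-only EBSI ledger), not the real EBSI API:

```python
# The ledger stores only a hash of the diploma, never the personal data itself.
import hashlib

ledger = set()  # stands in for the append-only EBSI ledger

def issue(document: bytes) -> None:
    """Issuer (e.g., a university) anchors the document's hash on-chain."""
    ledger.add(hashlib.sha256(document).hexdigest())

def verify(document: bytes) -> bool:
    """Verifier (e.g., an employer) recomputes the hash and checks the ledger."""
    return hashlib.sha256(document).hexdigest() in ledger

diploma = b'{"holder": "Jane Doe", "degree": "LL.M.", "year": 2024}'
issue(diploma)
print(verify(diploma))                             # genuine document
print(verify(diploma.replace(b"LL.M.", b"PhD")))   # forged document fails
```

Because only the hash is on-chain, the personal data in the credential never touches the immutable ledger, which is what keeps the architecture GDPR-compatible.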

"Self-Sovereign Identity" (SSI) is the guiding legal philosophy of EBSI. It aims to give citizens control over their digital identity without relying on a central identity provider (like Facebook or Google). The EBSI infrastructure supports the use of "Decentralized Identifiers" (DIDs), which allows users to prove attributes (e.g., "I am over 18") without revealing their full identity. The legal challenge is to align this decentralized model with the eIDAS Regulation, which is traditionally centralized around state-notified IdPs. EBSI serves as the technical bridge, allowing state-issued eIDs to function within a decentralized wallet ecosystem.

The "Notarization" use case leverages the blockchain's immutability to create trusted audit trails. Administrative acts, public tenders, or grant disbursements can be "anchored" on EBSI. This creates a timestamped, tamper-proof record that serves as a "digital legal fact" in court. The legal value of these blockchain timestamps is reinforced by the eIDAS 2.0 regulation, which explicitly grants legal effect to "electronic ledgers." This harmonization ensures that an EBSI timestamp is admissible evidence in any Member State, preventing disputes about the integrity of administrative records.

Liability in a permissioned blockchain is a complex legal issue. If a node operator acts maliciously or negligently (e.g., validating a fraudulent transaction), who is liable? The EBP governance framework establishes a "Node Operator Agreement." This contract defines the duties and liabilities of the Member States running the nodes. It creates a "mutualized liability" model where the network collectively guarantees the integrity of the ledger, while individual operators retain liability for their specific breaches of security protocol. This contract law layer is essential to turn a technical network into a legally accountable infrastructure.

"Smart Contracts" on EBSI are used to automate administrative processes (e.g., automatic recognition of diplomas). The Data Act introduces "essential requirements" for smart contracts used in data sharing, such as robustness and access control. Within EBSI, smart contracts must undergo a security audit before deployment. Legally, these smart contracts are treated as "automated administrative acts." The public authority deploying the contract remains the legal author of the decision, ensuring that the "code" does not escape the "rule of law."

Regulatory Sandboxes play a crucial role in EBSI's evolution. The "European Blockchain Sandbox" allows regulators and companies to test innovative use cases (e.g., tokenization of public bonds) in a dialogue with data protection and financial authorities. This creates a "safe learning space" where legal uncertainties—such as the definition of a token under MiCA or the GDPR status of a public key—can be clarified through case studies before hard rules are enforced.

The interaction with MiCA (Markets in Crypto-Assets Regulation) is specific. While MiCA regulates private crypto-assets, EBSI focuses on public services. However, EBSI can support "utility tokens" or "access tokens" for public services (e.g., a token representing a carbon credit or a voucher). The legal status of these public tokens is derived from administrative law rather than financial law, but they benefit from the technological standards set by the broader blockchain ecosystem.

Cross-border Social Security is a pilot use case (ESSPASS, the European Social Security Pass). EBSI is used to verify social security coverage (A1 forms) for posted workers instantly. This replaces the slow exchange of paper forms or bilateral database queries. The legal object exchanged is a "Verifiable Attestation." This use case demonstrates how blockchain can reduce fraud in the welfare state by creating a "single source of truth" that is shared across borders but updated locally.

The "Right to Erasure" (GDPR) on blockchain remains the hardest legal puzzle. Since data on a blockchain cannot be deleted, EBSI uses "pruning" or encryption techniques where the personal data is stored off-chain and the on-chain link is broken (the key is destroyed). This "crypto-shredding" is accepted by many data protection authorities as equivalent to erasure. The governance rules of EBSI mandate this privacy-preserving architecture, ensuring that the "immutable ledger" does not become a "permanent record" of citizen behavior in violation of privacy rights.

Finally, EBSI represents the "Internet of Value" for the public sector. It allows the state to issue unique digital assets (credentials, permits, quotas) that cannot be copied, only transferred. This prevents the "double-spending" problem in administration (e.g., using the same grant for two projects). By creating digital scarcity and uniqueness, EBSI provides the technological substrate for a more honest and efficient digital administration.

Section 3: Governing Public AI: The AI Office and Testing Facilities

The governance of Artificial Intelligence in the EU public sector is anchored in the AI Act, the world's first comprehensive AI law. To enforce this regulation, the Commission established the European AI Office in February 2024 (Commission Decision C(2024) 390). Located within the Commission (DG CNECT), the AI Office acts as the central enforcer for "General Purpose AI" (GPAI) models and the coordinator for the national supervisory authorities. For e-government, the AI Office is the supreme interpretative authority, issuing guidelines on what constitutes "High-Risk AI" in public services. It ensures that the AI systems used by Member States for critical functions like migration control or welfare allocation comply with fundamental rights.

The AI Office is supported by scientific and advisory bodies. The Scientific Panel of Independent Experts advises on the capabilities and risks of advanced AI models. For the public sector, this panel provides the technical expertise needed to assess whether a government algorithm poses a "systemic risk." Additionally, the AI Board, composed of Member State representatives, facilitates the consistent application of the AI Act. This institutional setup mirrors the GDPR's EDPB, creating a "federal" governance structure where the central AI Office handles the most powerful models (like GPT-4), while national authorities oversee specific local deployments (like a city's traffic AI).

A critical innovation for e-government is the network of Testing and Experimentation Facilities (TEFs). Co-funded by the Digital Europe Programme, TEFs are physical and virtual sites where AI solutions can be tested in real-world conditions. "CitCom.ai" is the specific TEF for Smart Cities and Communities. It allows municipalities to test AI tools (e.g., for energy grid management or autonomous public transport) before procurement. Legally, the TEF acts as a "certification sandbox." An AI system that passes the TEF tests receives a validation report, which serves as evidence of compliance with the AI Act's requirements on robustness and accuracy, de-risking the public procurement process.

"High-Risk AI" in the public sector is subject to strict conformity assessments. This includes AI intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, or to dispatch emergency first response services. The legal regime imposes a "fundamental rights impact assessment" before deployment. The public authority must document the system's design, data governance, and human oversight measures. This creates a "bureaucracy of safety" around public AI, ensuring that the automated state is documented, auditable, and accountable.

"Transparency Obligations" for public AI are reinforced. The AI Act mandates that high-risk systems be registered in a public EU database. This transparency register allows citizens, journalists, and NGOs to see which AI systems are being used by whom and for what purpose. For e-government, this register is the "official gazette" of algorithms. It provides the necessary visibility for democratic scrutiny, preventing the deployment of "secret algorithms" in public administration.

"Real-world Testing" allows public authorities to test AI systems outside of TEFs under specific legal conditions. The AI Act permits this "regulatory sandbox" approach where informed consent is obtained or a specific public interest ground exists. This legal flexibility is crucial for adaptive regulation. It allows the administration to learn from failures in a controlled environment without facing immediate liability or public backlash, fostering a culture of innovation in the public sector.

"Frugal AI" and "Green AI" are emerging policy goals supported by the AI Office. The environmental impact of training large models is significant. The institutional framework promotes energy-efficient AI for the public sector. TEFs test not only for accuracy but for energy consumption. This aligns the digital transition with the Green Deal, creating a legal expectation that "smart government" must also be "sustainable government."

The "Algorithm Registries" emerging at the local level (e.g., Amsterdam, Helsinki) act as voluntary transparency institutions. While the EU database covers high-risk systems, these local registers often cover lower-risk algorithms (e.g., library chat bots) to build trust. The AI Office encourages this "transparency by default." These registers function as "digital plaques," explaining the tool's logic and data sources in plain language to the citizen.

"Human Oversight" governance. The AI Act requires that high-risk AI be designed so that it can be overseen by natural persons. The institution using the AI must designate competent human overseers with the authority to stop the system ("kill switch"). This creates a new role in the public service: the "AI Overseer." This official is legally responsible for the "human-in-the-loop" requirement, acting as the fail-safe against algorithmic error or bias.

"Public Procurement of AI". The Commission provides "Standard Contractual Clauses" (SCCs) for procuring AI. These clauses translate the AI Act's requirements into procurement contracts, ensuring that private vendors provide the necessary documentation and data access. This legal toolkit empowers public buyers, who often lack the technical leverage to demand transparency from big tech vendors. It standardizes the "terms of trade" for the algorithmic state.

"Coordinated Plan on AI". This policy instrument coordinates national AI strategies. It encourages Member States to focus AI investment in high-impact public sectors (health, transport). The monitoring of this plan ensures that the EU moves as a block, avoiding a "fragmented AI landscape" where some states are hyper-advanced and others lag behind. It promotes the "AI for Good" narrative in public services.

Finally, the "AI Liability Directive" (proposed) will complement the institutional oversight with private enforcement. It creates a "presumption of causality" for AI damages. If a public AI fails to comply with the AI Act (e.g., bad data governance) and causes harm, the court will presume the AI caused it. This liability threat reinforces the institutional mandate of the AI Office and TEFs: compliance is not just a bureaucratic box-ticking exercise, but a shield against costly litigation.

Section 4: Cloud and Data Sovereignty Institutions

The transition to the cloud is the infrastructure shift of the decade. However, the reliance on non-EU hyperscalers (Amazon, Microsoft, Google) raises issues of Digital Sovereignty. The US CLOUD Act allows US law enforcement to access data stored by US companies anywhere in the world. To counter this extraterritoriality and secure EU public data, the EU has established specialized institutions and frameworks. The European Cybersecurity Competence Centre (ECCC), based in Bucharest, is the funding engine. It manages cybersecurity funds from the Digital Europe and Horizon Europe programmes. Its mandate includes strengthening the EU's sovereign capacities in cloud and cyber defense, ensuring the public sector has access to secure, EU-made technologies.

The ECCC works through a Network of National Coordination Centres (NCCs). Each Member State designates an NCC to act as the gateway for funding and collaboration. This network creates a "distributed competence" across the EU. It supports SMEs and start-ups that develop sovereign cloud solutions. By funding the "supply side" of the European cloud market, the ECCC aims to create viable alternatives to foreign tech giants for public procurement.

Gaia-X is a prominent "special institution" in this domain, although it is a non-profit association under Belgian law rather than an EU agency. Initiated by France and Germany, it aims to create a "federated data infrastructure" for Europe. Gaia-X does not build a competing cloud (like an "Airbus for Cloud") but develops a framework of standards and rules (the "Gaia-X Rulebook") that ensures data sovereignty, reversibility, and transparency. Cloud providers who comply with these rules can be "Gaia-X compliant." For e-government, this provides a certification mechanism: public bodies can mandate "Gaia-X compliance" in tenders to ensure their data remains under European control.

SIMPL (Smart Middleware Platform) is the Commission's direct intervention to build the software layer for data spaces. SIMPL is an open-source middleware that enables the secure federation of cloud-to-edge infrastructure. It serves as the "operating system" for the European Data Spaces (e.g., Health, Procurement). Unlike Gaia-X (which sets standards), SIMPL provides the code. This special infrastructure project ensures that the data spaces are built on public code, preventing lock-in to proprietary platforms.

The EU Cloud Rulebook developed by the Commission creates a single compendium of all rules (GDPR, Free Flow of Data, Cybersecurity Act) applicable to cloud services. It serves as a guidance institution for public procurers. It clarifies "sovereignty requirements," such as data localization or immunity from foreign laws, which are being codified in the European Cybersecurity Certification Scheme for Cloud Services (EUCS). The EUCS proposes a "High+" level of assurance that would effectively require the cloud provider to be immune from non-EU jurisdiction (i.e., European headquarters). This is the "hard law" instrument of cloud sovereignty.

Important Projects of Common European Interest (IPCEI) on Next Generation Cloud Infrastructure and Services (IPCEI-CIS) allow Member States to subsidize massive cross-border cloud projects that the market would not fund alone. This state aid instrument allows for the creation of a "sovereign industrial cloud" capable of real-time processing (Edge Computing). For e-government, this means the future infrastructure for smart cities (traffic sensors, energy grids) will run on this sovereign Edge-Cloud continuum, reducing latency and dependency.

Data Spaces Support Centre (DSSC) is a coordination body funded by the EU to ensure all sectoral data spaces (Health, Mobility, Green Deal) are interoperable. It provides the "blueprint" for data spaces. For e-government, this ensures that the "Public Procurement Data Space" can talk to the "Green Deal Data Space." The DSSC acts as the "architect" of the European data economy, ensuring that the sovereign cloud is not a fragmented archipelago but a connected continent.

EOSC (European Open Science Cloud) is the precursor and model for other data spaces. It federates research data infrastructures. Its governance model (a tripartite partnership between the Commission, the EOSC Association, and Member States) serves as a template for other "Common European Data Spaces." It demonstrates how to govern a distributed, federated cloud infrastructure that serves a public mission (science).

"Sovereign Cloud" Labels. Some Member States (e.g., France with "SecNumCloud") have created national sovereign cloud certifications. The EUCS aims to harmonize these. The institutional tension lies between those who want strict sovereignty (data must stay in EU, operated by EU companies) and those who prioritize functionality (allowing US providers if they adhere to strict rules). The ECCC and ENISA are the arenas where this battle for the definition of "sovereign cloud" is fought.

Edge Computing shifts data processing from central servers to the "edge" (local devices). This enhances privacy (data stays local) and resilience. The EU invests in "Edge AI" hardware through the Chips Act and the Key Digital Technologies Joint Undertaking (KDT JU). This creates the hardware independence required for a truly sovereign e-government stack.

"Data Altruism Organisations" registered under the Data Governance Act are new institutional actors. They facilitate the voluntary sharing of data for public good. These distinct legal entities are subject to supervision and must maintain high ethical standards. They act as "trusted intermediaries" in the sovereign data ecosystem, allowing citizens to contribute their data to the state or research without surrendering it to big tech.

Finally, Strategic Autonomy. The overarching goal of these institutions (ECCC, Gaia-X, IPCEI) is to reduce "strategic dependencies." E-government is classified as critical infrastructure. The legal and institutional framework ensures that the state can continue to function digitally even in the event of geopolitical decoupling. It turns the "cloud" from a commercial service into a strategic national asset.

Section 5: Emerging Technologies: Quantum, Metaverse, and Digital Twins

The frontier of EU e-government lies in technologies that are currently transitioning from research to deployment. Local Digital Twins (LDTs) are the most mature of these. A Digital Twin is a virtual replica of a physical system (a city) that uses real-time data to simulate scenarios (e.g., "what if we close this road?"). The LDT CitiVERSE EDIC is the special institution driving this. It connects local twins into a European federation. The "Destination Earth" (DestinE) initiative is the planetary scale version, creating a digital twin of the Earth to monitor climate change. For public administration, this shifts governance from "reactive" to "anticipatory." Decisions are tested in the virtual twin before being implemented in the real world (Schrotter & Hürzeler, 2020).

Quantum Communication is the next leap in security. The EuroQCI (European Quantum Communication Infrastructure) initiative aims to build a secure quantum communication shield across the EU. It uses Quantum Key Distribution (QKD) to secure sensitive government data transmission against future "quantum computers" that could break current encryption. EuroQCI involves a terrestrial segment (fiber optics) and a space segment (satellites). This project is governed by a consortium of Member States and the Commission (often via EDIC or ESA). It represents the "physics-based" security layer of the future e-government, ensuring the long-term confidentiality of state secrets.

The Metaverse (or Virtual Worlds) in government is explored under the "Web 4.0 and Virtual Worlds" initiative. The Commission views the metaverse not just as a gaming space but as a venue for public services ("Citiverse"). Imagine applying for a building permit by walking through a 3D model of your house with a virtual planner. The regulatory challenge is to ensure open standards, preventing the "enclosure" of the public metaverse by private platforms. The EU promotes OpenUSD (Universal Scene Description) and other open standards to build a democratic, interoperable virtual public sphere.

"Algorithmic Registries" and "Public AI". Cities like Amsterdam and Helsinki have launched public registers of the AI algorithms they use (e.g., for parking control). This emerging institutional practice—"transparency by default"—is becoming a standard for ethical smart cities. It allows citizens to review the "rules of the code." The AI Act encourages this but does not mandate it for all low-risk systems. However, the "Convention on AI" (Council of Europe) and local charters are hardening this into a norm of good digital administration.

"GovTech Incubators" and "Sandboxes". The GovTech4ALL consortium (funded by Digital Europe) is a "special institution" that connects national GovTech labs. It facilitates the cross-border piloting of startup solutions. This creates a "single market for public innovation." Instead of each city inventing its own chatbot, they can adopt a solution piloted in the incubator. This institutionalizes "experimentation" as a mode of governance.

"Personal Information Management Systems" (PIMS) or Data Pods (like Solid) are technologies that give users control over their data. The EU supports these through NGI (Next Generation Internet) funding. In an e-government context, PIMS would allow a citizen to grant the tax office temporary access to their bank data pod, rather than sending a PDF. This "visitation" model of data access (visiting the data where it resides) replaces the "data transfer" model, enhancing privacy and sovereignty.

"Immersive Public Services". Extended Reality (XR) is being tested for training (e.g., police, surgeons) and urban planning. The legal issues involve "virtual privacy" (biometric data from headsets) and "virtual property" (ownership of digital assets). The EU's "Virtual Worlds" strategy emphasizes that fundamental rights apply fully in the metaverse. The state must guarantee that the virtual town hall is safe from harassment and surveillance.

"Post-Quantum Cryptography" (PQC). As quantum computers advance, current eIDAS signatures could be broken. ENISA and national security agencies are standardizing PQC algorithms. The transition to PQC is a massive institutional challenge, requiring the update of every eID card and server in Europe. This "crypto-agility" is now a core requirement for e-government procurement.

"Neuromorphic Computing" and "Green ICT". To run massive AI and Twins sustainably, the EU invests in new chip architectures (Chips JU). The aim is to reduce the energy footprint of e-government. Legal mandates for "Green Public Procurement" will increasingly require IT providers to prove the energy efficiency of their code and hardware.

"Digital Commons". The EU is investing in "Digital Public Goods" (DPGs)—open source software, open data, open AI models—that are non-rivalrous and non-excludable. The NGI initiative funds the development of these commons as an alternative to proprietary platforms. This institutional support for the "commons" creates a "third pillar" of e-government, alongside the state and the market.

"Future of Work in Government". The introduction of these technologies requires a "Digital Skills Academy" for civil servants. The Interoperable Europe Academy is a prototype. The institutional challenge is to reskill the bureaucracy to manage AI and Quantum contracts, moving from "administrators of paper" to "architects of digital systems."

Finally, the "Anticipatory Regulation" approach. The EU uses "Foresight" (institutionalized in the Commission Vice-Presidency for Foresight) to predict these trends. By regulating "upstream" (e.g., setting standards for the metaverse before it is fully built), the EU attempts to shape the technology to fit democratic values, rather than adapting the law after the fact. This proactive stance defines the European approach to emerging technologies in government.

Questions


Cases


References
  • Abbott, R. (2020). The Reasonable Robot: Artificial Intelligence and the Law. Cambridge University Press.

  • Bigo, D., et al. (2012). The EU's large-scale IT systems. CEPS.

  • Codagnone, C., & Wimmer, M. A. (2007). Roadmapping eGovernment Research. European Commission.

  • Domorenok, E. (2019). The digital agenda of the European Union. Policy and Society.

  • European Commission. (2024). Decision establishing the European AI Office. C(2024) 390.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Homburg, V. (2008). Understanding E-Government. Routledge.

  • Huijboom, N., & Van den Broek, T. (2011). Open data: an international comparison. Telematics and Informatics.

  • Mergel, I. (2019). Digital Transformation of the Public Sector. Public Administration Review.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe 2030.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Schrotter, G., & Hürzeler, C. (2020). The Digital Twin of the City of Zurich. PFG.

8
Legal violations and responsibility in the field of EU e-government
2 2 5 9
Lecture text

Section 1: The Transformation of State Liability in the Digital Age

The digitization of public administration fundamentally alters the traditional dogmatics of state liability. Historically, administrative liability was predicated on the "fault" of a human official (faute de service). If a civil servant lost a file or made a calculation error, the state was liable. In e-government, the error is often systemic: a software bug, a database synchronization failure, or an algorithmic bias. This shifts the legal focus from "human error" to "system failure." The principle of state liability in EU law, established in the Francovich jurisprudence, dictates that a Member State is liable for damages caused to individuals by breaches of EU law. In the digital context, this means that if a national e-government system fails to comply with EU regulations (e.g., eIDAS or the Single Digital Gateway) and causes harm (e.g., a missed deadline for a tender), the state is directly liable to the citizen (Craig & De Búrca, 2015).

The concept of "objective administrative liability" is gaining ground. Given the complexity of IT systems, proving specific negligence by a public official is often impossible for a citizen. Therefore, legal theory is moving towards a strict liability model for digital infrastructure. If the state forces citizens to use a digital portal (mandatory e-government), the state implicitly guarantees the availability and security of that portal. If the server crashes on the last day of tax filing, this "technological force majeure" cannot be used as a defense by the state to penalize the citizen. Instead, the unavailability of the system constitutes a breach of the "duty of good administration," triggering liability for any resulting fines or lost opportunities (Galetta, 2019).

Legal violations in e-government often involve "administrative silence" caused by technical interoperability failures. When a digital application is "stuck" between two agencies due to a data format mismatch, the legal effect is a failure to act. Under the Single Digital Gateway Regulation, the administration is responsible for the transmission of evidence. If the transmission fails, the legal consequences (e.g., rejection of an application) cannot be borne by the user. The administration bears the "risk of transmission." This allocation of risk is a central feature of the digital liability regime, protecting the user from the "black box" of back-office technicalities (Schmidt, 2020).

The "Right to an Effective Remedy" (Article 47 of the Charter) is challenged by automated systems. If a citizen is denied a benefit by an algorithm, they must have a viable avenue for appeal. A legal violation occurs if the system does not provide a reasoned decision (traceability) or if there is no possibility to contest the digital outcome. Courts in the Netherlands (SyRI case) have ruled that using opaque algorithms for welfare fraud detection violates the right to private life and the principle of transparency. This establishes that the deployment of "unexplainable" e-government systems is in itself a legal violation, regardless of the accuracy of the specific decision (Hacker, 2018).

State liability also extends to the "procurement of defective technology." Public administrations rely on private vendors for software. If the software is defective (e.g., leaks data), the citizen sues the state, not the private vendor. The state cannot use "vendor lock-in" or "software bug" as a shield against administrative liability. The state is the "frontend" operator responsible for the service. It may seek recourse against the vendor later, but vis-à-vis the citizen, the public authority assumes full responsibility for the digital tools it employs. This reinforces the principle that the state cannot privatize its public law liability (Mergel, 2019).

The "principle of equivalence" requires that remedies for breaches of EU digital rights be no less favorable than those for domestic claims. If a national e-government portal discriminates against a non-national user (e.g., by requiring a local phone number), this violates the non-discrimination principle of the Single Digital Gateway. The state must provide a remedy that is effective and dissuasive. In many jurisdictions, this has led to the creation of "digital ombudsmen" or specialized administrative tribunals capable of adjudicating technical failures rapidly, ensuring that the remedy keeps pace with the speed of digital transactions.

"Maladministration" in the digital era includes the failure to update. The General Data Protection Regulation (GDPR) imposes a duty of "data accuracy." If a citizen updates their address in the base registry, but the tax authority continues to send fines to the old address due to a lack of synchronization, this constitutes maladministration. The legal violation is the failure to propagate the "single version of the truth" across the system. Liability arises from the "fragmentation" of the digital state, where the citizen is punished for the state's internal incoherence.

The violation of "accessibility" standards is a discrimination offense. The Web Accessibility Directive mandates that public sector websites be accessible to persons with disabilities. Failure to comply is not just a technical flaw but a violation of the right to equal treatment. Users can file complaints and seek redress. Liability here can take the form of injunctive relief (ordering the state to fix the website) or compensation for the indignity and exclusion suffered. This frames digital exclusion as a justiciable legal injury (Easton, 2013).

"Digital sovereignty" issues can also lead to liability. If a public administration stores sensitive citizen data on a non-EU cloud provider that is subject to extraterritorial access (e.g., US CLOUD Act), this may violate the GDPR (Schrems II judgment). The state is liable for the "loss of control" over citizen data. This creates a duty for public officials to conduct rigorous "Transfer Impact Assessments" before procuring cloud services. Ignorance of the geopolitical legal risks of cloud computing is no longer a defense for public managers.

The concept of "contributory negligence" by the citizen is evolving. If a citizen loses their eID credentials due to phishing, to what extent are they liable for the resulting fraud? The Payment Services Directive 2 (PSD2) limits consumer liability for unauthorized transactions, and this logic is being transplanted into e-government. While citizens have a duty of care to protect their digital keys, the state has a duty to design systems that are resilient to common user errors. A system that allows easy identity theft due to poor design (e.g., weak password requirements) shifts the liability back to the state (Sullivan, 2018).

Administrative sanctions for e-government violations are increasingly codified. National laws now define specific "digital administrative offenses," such as the unauthorized access to public databases by civil servants. These internal disciplinary liabilities complement the external state liability. They ensure that the human agents behind the digital screen remain accountable for their interaction with the system, preventing the "diffusion of responsibility" that often accompanies automation.

Finally, the future of liability lies in "automated compensation." If an e-government system makes an error (e.g., delays a payment), the system itself could automatically calculate and disburse interest or compensation to the citizen. This "proactive liability" aligns the remedy with the technology, resolving legal violations at the speed of the digital error itself, without requiring lengthy judicial procedures.
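The "automated compensation" idea above is, at its core, a simple computation the service could run on itself. A minimal sketch, assuming simple late-payment interest with an illustrative 8% annual rate and 365-day year (neither figure is prescribed by EU law):

```python
from datetime import date

def automated_compensation(amount_eur, due, paid, annual_rate=0.08):
    """Proactive-liability sketch: if the system disburses late, it computes
    its own late-payment interest (rate and day-count are assumptions)."""
    days_late = (paid - due).days
    if days_late <= 0:
        return 0.0                      # on time: nothing owed
    return round(amount_eur * annual_rate * days_late / 365, 2)

# A 1,000 EUR benefit payment disbursed 30 days late:
print(automated_compensation(1000.0, date(2025, 1, 1), date(2025, 1, 31)))  # 6.58
```

The legal significance is not the arithmetic but the trigger: compensation is initiated by the system's own logs, not by a citizen's complaint.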

Section 2: Data Protection Violations and GDPR Liability

The General Data Protection Regulation (GDPR) establishes the most formidable liability regime in EU e-government. Unlike previous directives, the GDPR imposes direct obligations on public authorities as "controllers" of personal data. A violation of the GDPR—such as processing data without a lawful basis, failing to secure data, or ignoring data subject rights—triggers a dual liability mechanism: administrative fines (in some Member States) and civil liability for damages. Article 82 of the GDPR grants any person who has suffered "material or non-material damage" as a result of an infringement the right to receive compensation from the controller. This creates a direct cause of action for citizens against public administrations for data mishandling (Hijmans, 2016).

"Non-material damage" is a critical concept in e-government liability. It covers distress, anxiety, and loss of control over personal data. In UI v. Österreichische Post (C-300/21), the Court of Justice of the European Union (CJEU) clarified that the mere infringement of the GDPR is not sufficient to confer a right to compensation; there must be a causal link to actual damage. However, the Court also ruled that there is no "de minimis" threshold for this damage. This means that even minor anxiety caused by a data leak in a government portal can be compensated. This broad interpretation exposes public administrations to significant cumulative liability in the event of mass data breaches (Gola et al., 2021).

The imposition of administrative fines on public bodies is a matter of national discretion under Article 83(7) GDPR. Some Member States (e.g., Germany, Estonia) exempt public authorities from fines to avoid "taking money from one pocket to put it in another." Others (e.g., Italy, France) actively fine public hospitals, municipalities, and ministries for GDPR violations. For example, the Italian DPA fined the INPS (Social Security Institute) for security failings. Where fines are not possible, Data Protection Authorities (DPAs) use "corrective powers" such as bans on processing or orders to delete data. These non-monetary sanctions can be even more disruptive to e-government operations than fines, effectively shutting down illegal digital services (Lynskey, 2015).

"Data Breaches" are the most common source of liability. Article 33 GDPR requires public authorities to notify the DPA of a breach within 72 hours. Failure to notify is itself a violation. In e-government, breaches often occur due to "human error" (sending an email to the wrong recipient) or "technical vulnerability" (unpatched servers). The liability principle is "accountability." The public authority must demonstrate that it had appropriate technical and organizational measures (TOMs) in place. If the breach occurred despite state-of-the-art security, liability may be mitigated. If it occurred due to negligence (e.g., passwords stored in plain text), liability is aggravated.

The distinction between "Controller" and "Processor" is vital for liability allocation. In e-government, the public authority is usually the controller, while the IT vendor (cloud provider, software company) is the processor. Article 82(2) holds the processor liable only if it fails to comply with obligations specifically directed at processors or acts outside the controller's instructions. However, the controller (state) is liable for the damage caused by the processing. This means the citizen sues the state for a vendor's error. The state can then claim back from the vendor, but this internal recourse is governed by the procurement contract, not the GDPR.

"Joint Controllership" creates complex liability scenarios. When multiple agencies share data (e.g., tax and social security) or participate in a cross-border platform (like the Internal Market Information System), they may be joint controllers. Article 26 GDPR requires a transparent arrangement determining their respective responsibilities. Under Article 82(4), joint controllers are held "jointly and severally liable" for the entire damage. This allows the citizen to sue any of the involved agencies for full compensation, regardless of which specific agency caused the leak. This "victim-friendly" rule forces public bodies to regulate their internal liability distribution strictly.

Violations regarding "Purpose Limitation" are systemic risks in e-government. Governments have a tendency towards "function creep"—using data collected for service delivery (e.g., toll roads) for law enforcement. Unless there is a specific legislative measure complying with Article 23 GDPR (restrictions), such re-purposing is illegal. Liability arises from the "unlawful processing." The remedy is often the deletion of the illegally processed data and the prohibition of its use in administrative decisions. This "fruit of the poisonous tree" doctrine in administrative law invalidates decisions based on illegally accessed data.

The "Right to Information" violations involve transparency. If a public portal uses cookies or tracking technologies without valid notice or consent (where required), it violates the ePrivacy Directive and GDPR. Public administrations are held to a higher standard of transparency. Liability here is often reputational and corrective, but systematic tracking of citizens without legal basis can lead to "collective actions" (class actions) by privacy NGOs under Article 80 GDPR, seeking injunctions and declaratory relief against the surveillance architecture of the state.

"Security of Processing" (Article 32 GDPR) requires public bodies to implement encryption, pseudonymization, and regular testing. A violation occurs if the security measures are not "appropriate to the risk." In the landmark Bulgarian National Revenue Agency case (C-340/21), the CJEU ruled that the mere occurrence of a hack does not automatically imply the controller violated Article 32. However, the burden of proof is on the controller to show their measures were appropriate. This establishes a "rebuttable presumption" of inadequacy when a hack succeeds, placing a heavy evidentiary burden on the state to prove its cyber-resilience.

The liability of the Data Protection Officer (DPO) within the public administration is deliberately limited. The DPO is not personally liable for the non-compliance of the entity; the DPO's role is advisory, and liability rests with the "management body" of the public authority. However, the administration can itself be liable for failing to support the DPO or for dismissing a DPO for performing their duties. Protecting the independence of this internal regulator is a legal obligation of the public authority.

"International Transfers" liability. Using non-EU cloud providers without adequate safeguards (Transfer Impact Assessment, Supplementary Measures) is a violation of Chapter V GDPR. Following Schrems II, public authorities are liable for exposing citizen data to foreign surveillance laws. This has led to a "sovereign cloud" push. Liability risks here include the suspension of data flows ordered by DPAs, which can paralyze e-government services relying on foreign infrastructure (Svantesson, 2020).

Finally, the interaction with Administrative Law. A GDPR violation constitutes "illegality" in administrative law. A citizen can challenge an administrative decision (e.g., a tax assessment) not just on its merits but on the grounds that it was based on unlawful data processing. This integrates data protection liability into the general system of administrative review, making GDPR compliance a precondition for the validity of administrative acts.

Section 3: Cybersecurity and Infrastructure Liability (NIS2)

The NIS2 Directive (Directive (EU) 2022/2555) revolutionizes the liability landscape for public sector cybersecurity. It classifies public administration entities of central and regional governments as "essential entities." This classification imposes direct legal obligations to take technical, operational, and organizational measures to manage security risks. Unlike the previous NIS Directive, NIS2 explicitly introduces personal liability for management bodies. Top-level public managers (e.g., heads of agencies, senior civil servants) can be held personally liable for failing to implement cybersecurity measures. This pierces the "corporate veil" of the state, forcing leadership to take ownership of cyber risks (Markopoulou et al., 2019).

The "Duty to Report" is a central liability trigger. NIS2 mandates a multi-stage reporting obligation for significant incidents: an "early warning" within 24 hours, an "incident notification" within 72 hours, and a final report within one month. Failure to report within these strict timeframes is a separate legal violation, punishable by fines or administrative sanctions. This creates a "transparency liability." The state cannot hide cyber incidents from the competent authorities (CSIRTs). This reporting obligation ensures that the "collective defense" of the EU is not compromised by the silence of a victimized agency.

"Supply Chain Liability" is a key focus of NIS2. Public administrations are liable for the security of their supply chains. This means they must vet their software vendors and managed service providers. If a breach occurs through a third-party vendor (like the SolarWinds attack), the public administration can be held liable for "negligent procurement" or "failure to audit." This extends the state's liability perimeter beyond its own firewall to the security posture of its private contractors, forcing the integration of strict security clauses in public contracts.

The Cyber Resilience Act (CRA) complements NIS2 by regulating products. It imposes a "CE mark" for cybersecurity on software and hardware. However, the public administration remains liable as the "user" if it fails to apply patches or updates provided by the manufacturer. If a vulnerability is known and a patch is available, the failure of the administration to update its systems constitutes "gross negligence." This creates a "duty to patch" within the public sector. Ignoring updates is no longer a bureaucratic delay; it is a legal breach of the duty of care.

"Sanctions and Enforcement" under NIS2 include the power of national authorities to conduct audits, issue warnings, and impose administrative fines. For essential entities (like central government ministries), the directive allows for the temporary suspension of certification or authorization of services. In extreme cases, it allows for the temporary ban of any person discharging managerial responsibilities (e.g., the CIO) from exercising managerial functions. This "disqualification" sanction is a potent tool to enforce accountability at the highest levels of the civil service.

State Liability for Cyberattacks. Is the state liable to citizens if a cyberattack paralyzes public services (e.g., hospitals, welfare payments)? Under the principles of administrative law, the state is generally not liable for "force majeure." However, NIS2 redefines the baseline of what is foreseeable. Since cyberattacks are now a known risk, the "force majeure" defense is weakened. If the state failed to implement the "baseline security measures" mandated by NIS2 (e.g., multi-factor authentication, backups), it cannot claim force majeure. The attack becomes a "materialization of a managed risk," triggering state liability for service interruption.

Liability for "Ransomware" Payments. Paying ransom to hackers is a legal gray area. While not explicitly illegal in all jurisdictions, using public funds to pay criminals raises issues of "misappropriation of public funds." NIS2 emphasizes reporting and resilience over payment. Public audit courts may hold officials liable for the financial loss if they authorize ransom payments without exhausting all recovery options. The legal framework pushes towards a "non-payment" policy backed by robust backups ("Resilience Liability").

Cross-border Liability in shared infrastructure. If a cyber incident in one Member State propagates to another through a connected system (like the Schengen Information System), issues of transboundary liability arise. The NIS2 Directive establishes the EU-CyCLONe (Cyber Crises Liaison Organisation Network) to manage coordinated response. While NIS2 does not create a direct cross-border compensation mechanism, the general principles of state responsibility under EU law could apply if one state's negligence compromises the collective security of the Union.

"Active Cyber Defense" and Liability. Some Member States allow "active defense" (hacking back). This creates immense liability risks. If a public authority, in attempting to neutralize a botnet, accidentally damages the infrastructure of innocent third parties or another state, it faces liability under international and domestic law. EU law generally promotes defensive measures. The "duty of non-harm" restricts the state's ability to engage in offensive cyber operations that might cause collateral damage to the digital ecosystem.

The Role of CSIRTs (Computer Security Incident Response Teams). CSIRTs are not liable for the incidents they investigate, provided they act within their mandate. They enjoy a "good Samaritan" protection. However, public administrations are liable if they fail to cooperate with the CSIRT or ignore its binding instructions during a crisis. The CSIRT acts as the "emergency doctor"; ignoring the doctor's orders shifts the liability for the worsening condition to the patient (the agency).

Certification and Liability. Under the Cybersecurity Act, products can be certified. If a certified e-government product fails, does the liability lie with the certifier or the user? The regulation clarifies that certification does not shift liability away from the manufacturer or the user. However, using certified products serves as evidence of "due diligence." It acts as a liability shield, proving that the administration took reasonable steps to secure its systems.

Finally, the "Whole-of-Government" Liability. NIS2 requires a coordinated approach. A vulnerability in a small municipality can be the entry point for an attack on the national grid. The central government has a "duty of supervision" over local entities. Neglecting the cyber-hygiene of local government is a failure of the central state's responsibility to secure the national digital territory.

Section 4: Automated Decision-Making and AI Liability

The deployment of Artificial Intelligence (AI) and Automated Decision-Making (ADM) in e-government introduces the concept of "algorithmic liability." The current legal framework relies on the GDPR (Article 22), which grants individuals the right not to be subject to a decision based solely on automated processing if it produces legal effects. A violation of this right—for example, a fully automated tax audit without a legal basis or human review—renders the administrative decision void. Liability arises from the procedural defect: the denial of the human element. The remedy is the annulment of the decision and a new, human-reviewed process (Wachter et al., 2017).

The AI Act (Regulation 2024/...) categorizes AI used in essential public services (e.g., migration, justice, welfare) as "High-Risk." This classification triggers a strict compliance regime. Public authorities ("deployers") must perform a Fundamental Rights Impact Assessment (FRIA) before use. They must ensure human oversight, data quality, and transparency. Failure to comply with these ex ante obligations is an administrative offense. While the AI Act focuses on product safety and market surveillance fines, non-compliance serves as powerful evidence of negligence in civil or administrative liability claims brought by citizens.

The proposed AI Liability Directive (AILD) aimed to harmonize the civil liability rules for AI. It proposed a "presumption of causality" and a "right to disclosure of evidence." Although the directive's adoption has stalled as of 2025, the principles it articulated are influencing national courts. If a citizen suffers harm from a "black box" government algorithm, courts may shift the burden of proof to the state, requiring the administration to prove the algorithm did not discriminate or err. The opacity of the tool cannot serve as a liability shield; instead, it creates a "rebuttable presumption of fault" against the user of the opaque technology (Bertolini, 2020).

The Revised Product Liability Directive (PLD) extends strict liability to software and AI systems. This means that if an e-government service uses defective AI software (e.g., a chatbot that gives wrong legal advice causing financial loss), the software "producer" is strictly liable. However, the public administration (as the user/deployer) often customizes the AI. Under the PLD, a "substantial modification" turns the user into a producer. Thus, a government agency that fine-tunes a general-purpose AI for public service delivery assumes the strict liability of a producer for any defects in that customized system.

"Automation Bias" and Human Liability. Even when a "human in the loop" exists, they often blindly accept the AI's recommendation (automation bias). EU law requires that human oversight be "meaningful," not just a rubber stamp. If a public official approves an erroneous AI decision without critical review, the official (and the agency) is liable for "negligent oversight." The presence of a human does not absolve the state if the human was structurally incapable of challenging the machine (e.g., due to lack of time or expertise).

Algorithmic Discrimination creates liability under EU non-discrimination law. If a welfare algorithm uses "proxy variables" (e.g., zip code) that correlate with protected characteristics (e.g., ethnicity) to deny benefits, this is indirect discrimination. The state is liable for the discriminatory outcome. The "SyRI" judgment in the Netherlands demonstrated that courts will strike down algorithmic systems that disproportionately target poor neighborhoods without objective justification. The liability here is constitutional: the violation of the principle of equality (Coglianese & Lehr, 2017).

Transparency and Explanation Liability. Citizens have a "right to an explanation" of administrative decisions. If an AI system is too complex to explain (e.g., deep learning), using it may be illegal per se in the administrative context. The administration is liable for "failure to give reasons." Liability is avoided only by using "interpretable AI" or by maintaining a parallel human reasoning process. The "unexplainability" of a decision is a fatal legal defect in the public sector.

Data Governance Liability. AI systems are only as good as their data. Using "dirty data" (biased, outdated, or incomplete) to train or operate a government AI constitutes negligence. The GDPR requires data accuracy. If an AI makes a wrong decision because the underlying database was not updated, the state is liable for the breach of the data quality principle. The liability sits with the data controller for failing to curate the "input" of the automated process.

Public Procurement of AI Liability. Contracts for AI must allocate liability. Standard Contractual Clauses (SCCs) are being developed to ensure that private AI vendors indemnify public authorities for IP breaches or algorithm failures. However, vis-à-vis the citizen, the state remains primarily liable. The state cannot tell a citizen "sue Microsoft"; it must compensate the citizen and then trigger the indemnity clause. This ensures the continuity of state responsibility.

"Sandboxing" and Liability Waivers. Regulatory sandboxes allow for the testing of AI in the public sector. Within the sandbox, liability rules may be temporarily adapted to encourage innovation. However, this does not mean total immunity. Basic rights (health, safety, fundamental rights) must always be protected. If a sandbox experiment causes real-world harm, the state is liable, potentially under a specific "no-fault" compensation fund set up for the experiment.

Future of AI Liability. As AI becomes more autonomous (agentic AI), the link between human conduct and machine output weakens. Legal theory discusses "strict liability for operators" of autonomous systems, similar to the liability for keeping dangerous animals or vehicles. For the public sector, this would mean the state acts as the "insurer of last resort" for the risks posed by its automated agents, socializing the cost of algorithmic errors.

Section 5: Trust Services and Interoperability Liability

The eIDAS Regulation establishes a specific liability regime for Trust Service Providers (TSPs). Article 13 of eIDAS states that TSPs are liable for damage caused intentionally or negligently. For Qualified TSPs, the burden of proof is reversed: they are presumed negligent unless they prove otherwise. This high liability standard reflects the critical role they play. If a Certification Authority (CA) issues a digital certificate to an imposter, and that imposter uses it to defraud a government agency or a citizen, the CA is liable. This liability encourages rigorous security practices and is backed by mandatory insurance requirements for qualified TSPs (Dumortier, 2017).

Liability in the Single Digital Gateway (SDG) focuses on the exchange of evidence (Once-Only Principle). When data moves from a Source Authority (e.g., University) to a Requesting Authority (e.g., Employer), who is liable if the data is wrong? The SDG Regulation (Article 14) clarifies: the Source Authority is liable for the accuracy and veracity of the data. The Requesting Authority is liable for the lawful processing of that data. The technical intermediary (the OOTS node) is liable for the integrity of the transmission. This "segmented liability" ensures that each actor is responsible only for what they control, preventing the "blame game" in cross-border exchanges (Krimmer et al., 2017).
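
The "segmented liability" idea — each actor answering only for what it controls — can be illustrated with a minimal integrity-check sketch (function names and payload are hypothetical; the real OOTS exchanges signed evidence containers over the eDelivery network):

```python
import hashlib

def seal_evidence(payload: bytes) -> str:
    """Source Authority: compute a digest over the evidence before dispatch.
    The Source Authority answers for the *content* being accurate."""
    return hashlib.sha256(payload).hexdigest()

def verify_transmission(payload: bytes, digest: str) -> bool:
    """Intermediary / Requesting Authority: confirm the payload arrived unaltered.
    A mismatch points to a transmission-integrity failure, not a source error."""
    return hashlib.sha256(payload).hexdigest() == digest

diploma = b"Jane Doe, MSc Law, University of Tartu, 2024"
seal = seal_evidence(diploma)
print(verify_transmission(diploma, seal))         # True: intact transmission
print(verify_transmission(diploma + b"X", seal))  # False: corrupted in transit
```

The point of the sketch is the division of proof: if the digest matches but the data is still wrong, the defect lies with the source, not the channel.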

"Identity Provider" (IdP) Liability. If a national eID system (notified under eIDAS) is breached and identities are stolen, the notifying Member State may be liable. Article 11 of eIDAS makes the notifying Member State liable for damage caused to the relying party (e.g., a foreign service provider) due to a failure to ensure the technical specifications of the eID scheme. This "state-to-state" or "state-to-relying party" liability creates a powerful incentive for Member States to secure their national digital identity infrastructures. It treats the eID as a state-guaranteed product.

Software Vendor Liability in Public Procurement. When the government buys custom software, liability is governed by the contract and national civil law. However, the shift to open source and reusable solutions (Joinup) complicates this. Who is liable for a bug in open-source software used by the government? Generally, open source is provided "as is" without warranty. Therefore, the public administration assumes the risk of implementation. It becomes the "insurer" of the code it reuses. To mitigate this, administrations contract with "maintainers" or integrators who assume professional liability for the open-source stack.

Interoperability Liability. Failure to achieve interoperability can be a legal violation. If a Member State builds a system that is technically incompatible with EU standards (e.g., refusing to accept eIDAS-compliant signatures), it violates the regulation. This "failure to connect" can lead to infringement proceedings by the Commission. While this is a public law liability (State vs. EU), it can also lead to state liability claims by businesses that are blocked from the market by the lack of interoperability.

Blockchain (EBSI) Liability. In a permissioned blockchain like the European Blockchain Services Infrastructure, liability is defined by the node operator agreement. If a node inserts fraudulent data, that operator is liable. However, the "immutability" of the ledger creates a problem: if illegal content (e.g., personal data violations) is written to the chain, it cannot be deleted. All node operators might be jointly liable as "joint controllers" under the GDPR. The governance framework of EBSI attempts to solve this by mandating "off-chain" storage of sensitive data, limiting the ledger to hashes (De Filippi & Wright, 2018).
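
The hash-anchoring approach described above can be sketched in a few lines (a simplified illustration with hypothetical store names; the real EBSI involves distributed nodes and verifiable credentials):

```python
import hashlib
import json

# Personal data stays in a mutable off-chain store; the ledger records only
# the SHA-256 digest, so the immutable chain holds no erasable personal data.
off_chain_store = {}   # mutable: can honour GDPR erasure requests
ledger = []            # append-only: immutable by design

def anchor(record: dict) -> str:
    blob = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain_store[digest] = record   # deletable copy of the data
    ledger.append(digest)              # only the hash goes on-chain
    return digest

def erase(digest: str) -> None:
    """GDPR erasure: drop the off-chain data; the bare hash remains on-chain."""
    off_chain_store.pop(digest, None)

d = anchor({"holder": "Jane Doe", "credential": "MSc Law"})
erase(d)
print(d in off_chain_store, d in ledger)  # False True
```

After erasure the ledger still proves that *something* was anchored at that point in time, but the personal data itself is gone — which is exactly the compromise the EBSI governance framework aims at.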

Liability for "Digital Exclusion". If a trust service (e.g., a mandatory app) is not accessible to elderly or disabled citizens, the provider violates the Accessibility Act and the Web Accessibility Directive. Liability involves the obligation to provide "alternative means" of access. If no alternative is provided and the citizen suffers loss (e.g., cannot access a bank account), the provider is liable for discrimination. This reinforces the principle that digital trust services must be inclusive utilities.

"Relying Party" Liability. A public authority that relies on an electronic signature has a duty to verify it correctly (e.g., checking revocation lists). If the authority negligently accepts a revoked certificate and processes a fraudulent transaction, it shares liability. The "reliance" on digital trust is not passive; it requires active verification. Automated validation tools mitigate this, but the legal duty to verify validity status remains with the relying party.
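
The relying party's duty of active verification can be sketched as a minimal validity check (hypothetical data model; real validation follows the ETSI signature-validation procedures and consults CRLs or OCSP responders rather than a local set):

```python
from datetime import date

# Stand-in for a revocation list obtained from the issuing CA.
REVOKED_SERIALS = {"0xBEEF"}

def certificate_valid(serial: str, not_after: date, today: date) -> bool:
    """A relying party that skips either check acts negligently."""
    if serial in REVOKED_SERIALS:
        return False           # accepting a revoked certificate shares liability
    return today <= not_after  # expired certificates must also be rejected

print(certificate_valid("0xCAFE", date(2026, 1, 1), date(2025, 6, 1)))  # True
print(certificate_valid("0xBEEF", date(2026, 1, 1), date(2025, 6, 1)))  # False
```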

Standardization and Liability. Compliance with ETSI standards creates a "presumption of conformity" for TSPs. However, if a standard itself is flawed (e.g., a weak encryption algorithm), following it might still lead to a breach. Liability in such cases is complex. Generally, following harmonized EU standards acts as a safe harbor against negligence claims, shifting the focus to the standard-setting process itself.

Cross-Border Enforcement of Liability. If a Spanish citizen is harmed by a Polish Trust Service Provider, they can sue in Spain (Brussels I Recast, consumer forum). The applicable law is determined by Rome II (place of damage). This cross-border judicial cooperation ensures that the liability regime of eIDAS is enforceable in practice, preventing TSPs from hiding behind jurisdictional borders.

Finally, the "Systemic Risk" Liability. The concentration of trust in a few large providers (or a single national eID) creates systemic risk. A failure of a root CA affects millions. The NIS2 Directive classifies TSPs as "essential entities," subjecting them to strict supervision. Liability for systemic failure is not just financial but existential (revocation of qualified status). The legal framework treats the trust infrastructure as "too critical to fail," imposing a regime of hyper-accountability.

Questions


Cases


References
  • Bertolini, A. (2020). Artificial Intelligence and Civil Liability. European Parliament.

  • Coglianese, C., & Lehr, D. (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. Georgetown Law Journal.

  • Craig, P., & De Búrca, G. (2015). EU Law: Text, Cases, and Materials. Oxford University Press.

  • Dumortier, J. (2017). The European Regulation on Trust Services. Digital Evidence Law.

  • Easton, C. (2013). Website accessibility and the European Union. International Review of Law, Computers & Technology.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration. European Public Law.

  • Gola, P., et al. (2021). GDPR Compensation. Computer Law & Security Review.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? Verfassungsblog.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Krimmer, R., et al. (2017). The Once-Only Principle. IOS Press.

  • Lynskey, O. (2015). The Foundations of EU Data Protection Law. Oxford University Press.

  • Markopoulou, D., et al. (2019). The new EU cybersecurity framework. Computer Law & Security Review.

  • Mergel, I. (2019). Digital Transformation of the Public Sector. Public Administration Review.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Sullivan, C. (2018). Digital Identity. Cambridge University Press.

  • Svantesson, D. (2020). Data Localisation Laws and Policy. Oxford University Press.

  • Wachter, S., et al. (2017). Counterfactual Explanations Without Opening the Black Box. Harvard Journal of Law & Technology.

  • Zouridis, S., et al. (2020). Automated Discretion. Administration & Society.

9
Procedural mechanisms for implementation and protection of rights in EU e-government
2 2 5 9
Lecture text

Section 1: The Digitalization of Administrative Procedure and Article 41

The digitization of public administration in the EU is not merely a technical upgrade but a fundamental transformation of administrative procedure law. At the heart of this transformation lies Article 41 of the Charter of Fundamental Rights of the European Union, which enshrines the "Right to Good Administration." In the analog era, this right guaranteed impartial handling of affairs, the right to be heard, and the obligation to give reasons. In the digital era, these procedural guarantees are being reinterpreted. The right to have affairs handled "within a reasonable time" now implies a right to digital efficiency; delays caused by obsolete paper processes or lack of interoperability may constitute a violation of this fundamental right. Consequently, e-government initiatives like the Single Digital Gateway (SDG) are not just modernization projects but procedural mechanisms to fulfill the Charter’s mandate for a responsive administration (Hofmann & Cisotta, 2019).

The Single Digital Gateway Regulation (2018/1724) establishes the procedural framework for cross-border digital rights. It creates a subjective right for citizens and businesses to access 21 key administrative procedures fully online. This shifts the procedural burden from the user (who previously had to travel or mail documents) to the administration (which must provide a digital interface). The Regulation introduces the "Once-Only Principle" (OOP) as a procedural right: citizens should not be forced to resubmit data already held by a competent authority. This transforms the administrative procedure from a series of repetitive demands into a streamlined data exchange between authorities, legally protecting the citizen from "administrative harassment" via redundant requests (Schmidt, 2020).

"Digital-by-Default" is the guiding procedural principle, but it is balanced by the principle of "Inclusion-by-Design." Procedural justice requires that the digital channel does not become an exclusionary barrier. The SDG Regulation mandates that "offline" channels remain available for those unable to use digital tools, ensuring that the digitization of procedure does not violate the principle of non-discrimination. Furthermore, the Web Accessibility Directive acts as a procedural safeguard, ensuring that digital interfaces are usable by persons with disabilities. If a digital procedure is inaccessible (e.g., incompatible with screen readers), the administrative act itself may be procedurally defective, giving rise to a right of annulment or compensation (Easton, 2013).

The "Right to be Heard" faces specific challenges in digital procedures. Traditionally, this right involved a physical hearing or written submission before an adverse decision. In automated or semi-automated digital procedures (e.g., online tax filing), the "hearing" is often reduced to filling out boxes in a web form. To protect this right, digital procedures must be designed to allow users to provide "free text" explanations or upload supplementary evidence that falls outside standard data fields. A rigid digital form that prevents a citizen from presenting their specific case constitutes a violation of the right to be heard, rendering the subsequent decision unlawful (Galetta, 2019).

"Transparency" in digital procedure goes beyond access to documents. It includes "Process Transparency"—the ability to track the status of an application in real-time (like a parcel). The SDG Regulation mandates this "tracking" functionality. Additionally, transparency extends to the "logic of processing" when algorithms are used. Citizens have a procedural right to know how a decision was reached. If a digital system outputs a decision without an intelligible audit trail or explanation, it violates the duty to give reasons (Article 41(2)(c)), which is a constitutive element of the legality of the administrative act.

"Authentication" replaces physical identification as the procedural gatekeeper. The eIDAS Regulation provides the legal mechanism for this. The use of a notified eID scheme acts as a procedural guarantee of identity. It shifts the evidentiary burden: a transaction signed with a Qualified Electronic Signature benefits from a presumption of validity. This procedural mechanism streamlines the "verification phase" of administrative proceedings, allowing instant cross-border recognition of the applicant’s legal status without the need for notarized paper copies (Graux, 2015).

"Electronic Service of Documents" (e-Delivery) changes the calculation of procedural deadlines. In traditional law, deadlines often run from the receipt of a registered letter. In e-government, the eIDAS Regulation provides legal certainty for electronic registered delivery services. The "electronic timestamp" serves as irrefutable proof of the time of sending and receipt. This creates a precise "digital timeline" for the administrative procedure, reducing disputes about missed deadlines. However, it also imposes a duty of "digital vigilance" on the citizen to check their electronic mailboxes regularly (Dumortier, 2017).
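
The way a qualified timestamp fixes the procedural timeline can be sketched with simple date arithmetic (the 30-day appeal period is illustrative, not a rule of EU law):

```python
from datetime import datetime, timedelta, timezone

def appeal_deadline(receipt_ts: datetime, period_days: int = 30) -> datetime:
    """The electronic timestamp of receipt starts the clock; the deadline
    follows mechanically, leaving no room for disputes about delivery dates."""
    return receipt_ts + timedelta(days=period_days)

# Qualified timestamp attached by the e-delivery service at receipt.
received = datetime(2025, 3, 1, 14, 30, tzinfo=timezone.utc)
deadline = appeal_deadline(received)
print(deadline.isoformat())  # 2025-03-31T14:30:00+00:00
```

The precision cuts both ways: it protects the citizen against disputed postal dates, but it is also what creates the duty of "digital vigilance" mentioned above.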

"Interoperability" as a procedural necessity. When a procedure involves cross-border evidence exchange (e.g., recognizing a diploma), the lack of technical interoperability can lead to a denial of rights. The Interoperable Europe Act establishes a governance structure to ensure that national systems can "talk" to each other. It creates a procedural obligation for Member States to conduct "interoperability assessments" before introducing new digital services. This preventive mechanism aims to stop the creation of new digital barriers that would frustrate the implementation of EU rights (Misuraca et al., 2010).

"Assistance and Problem-Solving Services" are integrated into the procedural framework. The SDG Regulation mandates access to services like SOLVIT, which helps citizens resolve problems when public authorities breach EU rights. These services act as an "alternative dispute resolution" mechanism within the administrative procedure itself. They provide a rapid, informal remedy that can correct digital errors (e.g., a rejected eID) without the need for formal litigation, enhancing the practical effectiveness (effet utile) of digital rights.

"User Feedback" is institutionalized as a procedural feedback loop. The SDG Regulation requires the collection of user feedback on the quality of services. This data is used to monitor compliance. If a national portal consistently receives poor ratings for usability, the Commission can intervene. This transforms "user satisfaction" from a soft metric into a hard procedural requirement for Member States, linking the legitimacy of the digital administration to its actual performance as experienced by the citizen.

The "Right to Good Digital Administration" also implies "Data Protection by Default." Administrative procedures must be designed to minimize data collection. If a digital form asks for more data than necessary (violating GDPR minimization principles), the procedure itself is unlawful. The Data Protection Officer (DPO) within the public body acts as a procedural guarantor, ensuring that the digital workflow respects privacy rights ex ante, rather than waiting for a complaint ex post.

Finally, the "Liability for Technical Failure." If a digital procedure fails due to a server crash or a bug, preventing a citizen from exercising a right (e.g., applying for a subsidy before the deadline), the administration bears the risk. Emerging legal principles recognize "technological force majeure" as a valid ground for the reinstatement of deadlines (restitutio in integrum). The state has a procedural duty to ensure the availability of its digital systems; failure to do so cannot prejudice the citizen's legal position.

Section 2: Automated Decision-Making and Procedural Safeguards

The use of Automated Decision-Making (ADM) and Artificial Intelligence (AI) in public administration presents the most significant challenge to traditional procedural rights. When an algorithm decides on a tax refund, a social benefit, or a visa application, the human element of "discretion" is replaced by "code." Article 22 of the GDPR is the primary procedural safeguard. It grants the data subject the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects. This creates a "procedural prohibition" on fully automated administration unless specific exceptions (e.g., authorized by law with suitable safeguards) apply (Wachter et al., 2017).

The "Right to Human Intervention" (Human-in-the-loop) is the central remedy under Article 22(3) GDPR. Even where ADM is permitted, the citizen has the right to obtain human intervention, to express their point of view, and to contest the decision. This safeguard aims to prevent Kafkaesque situations where a citizen is trapped by a machine's logic. The human reviewer must have the authority and competence to overturn the algorithm. A "rubber-stamping" human does not satisfy this procedural requirement; the review must be meaningful and substantive (Citron, 2007).

The "Right to an Explanation" (or explanation of the logic involved) is essential for the right of defense. A citizen cannot contest a decision they do not understand. In the context of "black box" AI (e.g., neural networks), providing a traditional statement of reasons is technically difficult. However, the principle of good administration requires "explainability." If the administration cannot explain why the AI reached a specific conclusion, the decision is procedurally void for lack of reasoning. This creates legal pressure to use "interpretable AI" in the public sector or to develop "counterfactual explanations" (e.g., "you would have received the loan had your income been X") (Edwards & Veale, 2017).

"Bias and Discrimination" in ADM violate the right to equal treatment. Algorithms trained on historical data may reproduce past biases (e.g., against minorities). The AI Act classifies AI used in critical public services (migration, justice, welfare) as "High-Risk," imposing strict obligations on data quality and bias monitoring. Procedurally, this means that the administration must perform a Fundamental Rights Impact Assessment (FRIA) before deploying the system. This ex ante procedural step is designed to identify and mitigate discrimination risks before they affect citizens (Hacker, 2018).

"Legal Certainty" vs. "Algorithmic Change." Algorithms can evolve (machine learning). A decision made on Monday might differ from one made on Tuesday if the model updates. This conflicts with the principle of legal certainty and equality before the law. Administrative law requires that the "rules of the game" be fixed and public. Therefore, "continuous learning" AI systems are generally incompatible with the requirement for a stable legal basis. Procedural safeguards require "freezing" the algorithm at a specific version for the duration of the administrative procedure to ensure consistent application of the law (Zouridis et al., 2020).
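
The "freezing" requirement can be sketched as version pinning: every decision record carries the identifier of the model version that produced it, so all applicants in the same procedure are assessed under the same rules (all names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Decision:
    applicant: str
    outcome: str
    model_version: str   # recorded for audit and judicial review
    decided_on: date

# Frozen for the duration of the administrative procedure; no silent updates.
PINNED_VERSION = "benefit-model-v2.3"

def decide(applicant: str, eligible: bool) -> Decision:
    outcome = "granted" if eligible else "refused"
    return Decision(applicant, outcome, PINNED_VERSION, date(2025, 6, 1))

d1, d2 = decide("A", True), decide("B", True)
print(d1.model_version == d2.model_version)  # True: same rules applied to both
```

Recording the version in each decision is what later allows a court to reconstruct exactly which "rules of the game" were applied.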

"Transparency Obligations" under the AI Act complement the GDPR. Public authorities using high-risk AI must register the system in a public EU database. This "Algorithmic Register" allows civil society and oversight bodies to know what systems are in use. This procedural transparency is a precondition for accountability. Without a public register, citizens would not even know they are being subject to ADM, making the exercise of their rights impossible.

"Judicial Review of Algorithms." When an ADM decision is challenged in court, the judge must be able to review the legality of the algorithm. This requires "technical due process." The court may need access to the source code or training data. However, vendors often claim "trade secrets." EU law is evolving to prioritize the "right to a fair trial" over commercial secrecy in public law contexts. In the SyRI case (Netherlands), the court held that the insufficient transparency of a fraud-detection algorithm contributed to a violation of Article 8 ECHR (Right to Respect for Private Life), establishing a precedent that "secret algorithms" are unlawful in public administration (Coglianese & Lehr, 2017).

"The Right to Good Digital Administration" implies a "Duty of Care" in system design. The administration is liable for "automation bias"—the tendency of humans to trust computer outputs blindly. Procedural safeguards must include training for staff to critically evaluate algorithmic suggestions. If a case worker automatically denies a benefit based on a "red flag" from an AI without investigating, they have failed in their duty of investigation (Inquisitorial Principle), rendering the decision unlawful.

"Proactive Administration" (granting rights without application) relies on ADM. While convenient, it raises procedural risks. If the system fails to grant a right (false negative), the citizen might not know they were entitled to it. Procedural mechanisms must ensure that proactive systems notify citizens of potential rights or provide a clear mechanism to claim a benefit if the automatic trigger fails. The "silence" of an automated system should not result in the forfeiture of a right.

"Audit and Certification." Under the AI Act, high-risk systems must undergo conformity assessments. These audits act as a "procedural shield." A certified system carries a presumption of compliance. However, this does not immunize the specific decision from review. The citizen challenges the application of the tool in their specific case, not just the tool's general safety. The procedural mechanism for this challenge remains the administrative appeal.

"Class Actions" against algorithmic errors. The Representative Actions Directive allows consumer organizations to bring collective claims. In the context of e-government (e.g., a "robodebt" scandal affecting thousands), this procedural mechanism is vital. It allows victims of systemic algorithmic errors to seek redress collectively, overcoming the power asymmetry between the individual citizen and the automated state.

Finally, the "Human Contact Point." Despite automation, the "right to a human" is emerging as a meta-right. The SDG Regulation requires that assistance services be accessible. Procedural fairness dictates that when the digital path leads to a dead end or a dispute, there must be a clearly signposted "exit" to a human case worker who can resolve the issue manually. The digital procedure must never be a closed loop.

Section 3: Access to Justice and e-Justice Mechanisms

Access to justice is a fundamental right (Article 47 Charter) that is increasingly mediated by digital tools. e-Justice refers to the use of ICT to improve access to judicial procedures and the efficiency of the justice system. The EU's flagship project is the e-Justice Portal, a "one-stop shop" for citizens and lawyers. It provides information on national legal systems, helps users find lawyers, and offers forms for cross-border proceedings. Procedurally, this portal lowers the informational barriers to justice, empowering citizens to assert their rights across the Union without expensive legal research (Velicogna, 2017).

e-CODEX (e-Justice Communication via Online Data Exchange) is the technical backbone of cross-border judicial proceedings. Regulated by Regulation (EU) 2022/850, it provides a secure, decentralized network for courts to exchange documents (e.g., European Arrest Warrants, European Payment Orders). Procedurally, e-CODEX ensures the "integrity" and "authenticity" of judicial documents in transit. It replaces the slow and insecure postal service. A claim filed via e-CODEX is deemed procedurally valid and received at the moment of electronic timestamping, securing deadlines and preventing procedural defaults due to mail delays (Contini, 2020).

The European Small Claims Procedure (ESCP) is a prime example of a digitalized procedure. Designed for cross-border claims under €5,000, it is a written procedure that can be conducted entirely online. The e-Justice Portal provides dynamic forms that guide the user. The Regulation, as amended by Regulation (EU) 2015/2421, promotes the use of "distance communication technology" (videoconferencing) for oral hearings. This procedural innovation removes the need for physical travel, making justice economically accessible for small value disputes. It embodies the concept of "proportionality" in procedural justice (Kramer, 2020).

"Electronic Service of Documents" is governed by the Service Regulation (2020/1784). It establishes a decentralized IT system (based on e-CODEX) for the transmission of documents between liaison bodies. Crucially, it also allows for the direct electronic service of documents to a party in another Member State, provided they have given prior express consent (e.g., via a registered e-delivery account). This modernizes the concept of "notification." The legal effect of service—which triggers the timeline for defense—is now tied to the electronic availability of the document, accelerating the pulse of cross-border litigation.

"Taking of Evidence" by Videoconference. The Taking of Evidence Regulation (2020/1783) mandates that courts should use videoconferencing to hear witnesses or experts located in another Member State, rather than requiring their physical presence. This "digital proximity" reduces costs and delays. The Regulation establishes procedural safeguards: the witness has the right to claim privilege under the law of their own state or the forum state. The technology must ensure the "visibility and audibility" of the witness to guarantee the judge's ability to assess credibility (immediacy principle).

The "Digitization of Justice" Regulation (2023/2844) aims to make digital communication the default channel for cross-border judicial cooperation. It obliges Member States to accept electronic communication from natural and legal persons in cross-border procedures. This creates a "subjective right" to e-justice. A court cannot refuse a digitally signed PDF submission in a cross-border case. This harmonizes the "digital admissibility" of legal acts, overcoming national procedural rules that might still demand paper.

Online Dispute Resolution (ODR) platform. For consumer disputes, the EU ODR platform facilitates out-of-court settlements. While not a judicial procedure, it acts as a "pre-judicial" filtration mechanism. The platform uses translation tools and structured workflows to help parties reach an agreement. If ODR fails, the parties retain their right to go to court. This mechanism channels high-volume, low-value digital disputes away from the overburdened courts, providing a "fast track" to redress (Cortés, 2018).

"AI in the Judiciary" (predictive justice, legal analytics) raises profound procedural questions. The use of AI to assist judges (e.g., summarizing files, suggesting sentences) is debated. The European Ethical Charter on the Use of AI in Judicial Systems (CEPEJ) sets the principles: respect for fundamental rights, non-discrimination, and "user control." Procedurally, a judge cannot delegate the judicial decision to an AI. The "human judge" must remain the sole arbiter. The use of "risk assessment" algorithms in criminal justice (bail) is classified as High-Risk under the AI Act, requiring strict oversight to prevent bias.

"Digital Access to Case Law". The ECLI (European Case Law Identifier) and the semantic web make case law searchable across borders. This enhances "legal certainty" and the "equality of arms." A lawyer in Portugal can easily find relevant precedents from Germany. This transparency of judicial reasoning contributes to the formation of a common European judicial culture, harmonizing the interpretation of rights from the bottom up.
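
An ECLI is a fixed five-part identifier — ECLI:country:court:year:ordinal — which is what makes cross-border retrieval mechanical. A simplified parser sketch (the official specification defines the permitted court codes and ordinal characters in more detail than this pattern):

```python
import re
from typing import Optional

# Simplified shape of the European Case Law Identifier,
# e.g. ECLI:NL:HR:2012:BW1234 (a Dutch Supreme Court decision).
ECLI_RE = re.compile(
    r"^ECLI:(?P<country>[A-Z]{2}|EU):(?P<court>[A-Z0-9.]{1,7}):"
    r"(?P<year>\d{4}):(?P<ordinal>[A-Z0-9.]{1,25})$"
)

def parse_ecli(identifier: str) -> Optional[dict]:
    """Return the five named components, or None if the string is not an ECLI."""
    m = ECLI_RE.match(identifier)
    return m.groupdict() if m else None

print(parse_ecli("ECLI:NL:HR:2012:BW1234"))
print(parse_ecli("not-an-ecli"))  # None
```

Because the components are fixed and machine-readable, a search tool can filter by country or court without any national-format knowledge — which is the practical basis of the "equality of arms" point above.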

"Data Protection in Judicial Proceedings". The publication of judgments online must respect the "right to be forgotten" and privacy. Courts must anonymize or pseudonymize personal data in published rulings. The balance between "open justice" (public scrutiny) and "privacy" is managed through data governance rules. Procedural laws now include specific provisions on the "sanitization" of digital judgments before publication.

"Cybersecurity of the Courts". The digitization of justice creates risks of hacking or data manipulation. A cyberattack on the court registry could erase case files or alter judgments. Procedural law imposes a duty on the state to secure the "integrity of the judicial record." e-CODEX uses encryption and signatures to ensure that the "digital file" is tamper-proof. The resilience of the e-justice infrastructure is a precondition for the "validity" of digital judicial acts.

Finally, the "Interconnection of Insolvency Registers". Creditors have a procedural right to know about insolvency proceedings in other states to file claims. The interconnected registers provide this notice. This digital transparency triggers the procedural deadlines for lodging claims. It turns the "publicity" of insolvency from a local notice board into a pan-European digital alert.

Section 4: Non-Judicial Remedies: Ombudsman, SOLVIT, and DPAs

Judicial remedies are often slow and expensive. EU law provides a network of non-judicial oversight bodies that offer rapid, accessible redress for e-government failures. The European Ombudsman plays a central role in investigating "maladministration" by EU institutions. In the digital context, the Ombudsman investigates complaints about lack of transparency in algorithm use, refusal of access to digital documents (e.g., text messages), or the inaccessibility of websites for disabled persons. The Ombudsman’s decisions, while not legally binding, carry significant moral weight and often lead to systemic changes. They define the "soft law" standards of good digital administration (Hofmann, 2019).

SOLVIT is a problem-solving network for the Internal Market. It handles cross-border problems between citizens/businesses and national public authorities. If a French authority refuses to recognize a German digital diploma, the citizen can file a complaint with SOLVIT. The SOLVIT centers in both countries work together to resolve the issue within 10 weeks. This "administrative cooperation" mechanism is often more effective than court litigation for e-government glitches. It acts as a "debugger" for the practical implementation of EU digital rights (Hobbing, 2011).

Data Protection Authorities (DPAs) are the "police" of the digital sphere. They have the power to investigate complaints, conduct audits, and impose fines (or corrective measures) on public administrations for GDPR violations. If a citizen cannot access their data or suspects a breach, the DPA is the first point of redress. DPAs have a "corrective" mandate: they can order the administration to delete data, stop processing, or modify an algorithm. This direct intervention power makes them the most potent non-judicial remedy in the e-government ecosystem (Hijmans, 2016).

The European Data Protection Supervisor (EDPS) oversees EU institutions (like the Commission and eu-LISA). The EDPS handles complaints from citizens regarding the processing of their data by EU bodies. For example, if eu-LISA mishandles visa application data in the VIS system, the EDPS investigates. The EDPS also has the power to refer cases to the CJEU, acting as a bridge between administrative oversight and judicial review.

National Ombudsmen complement the European level. They investigate maladministration by national and local authorities. In the digital age, they deal with "digital exclusion" complaints—elderly citizens unable to book appointments online, or rural areas without connectivity. Their reports highlight the "human cost" of digitization and push for "offline alternatives" as a matter of fairness. They act as the guardians of the "analog right" in a digital world.

"The Right to Petition" (Article 227 TFEU) allows citizens to petition the European Parliament on matters affecting them. This includes failures in the application of EU digital law. The Parliament’s Committee on Petitions can investigate and put political pressure on the Commission or Member States. This political remedy allows citizens to raise "systemic issues" (e.g., widespread lack of interoperability) directly to the legislators.

"Alternative Dispute Resolution (ADR)" bodies for specific sectors (e.g., telecom, energy, finance) handle complaints about digital services provided by regulated entities. While often private or semi-public, these bodies are integrated into the EU's consumer protection framework. For "privatized" e-government services (e.g., electronic identity provided by a bank), these ADR bodies provide a specialized forum for redress.

"Internal Administrative Review". Before going to court or an Ombudsman, citizens usually must exhaust internal remedies (administrative appeal). E-government laws mandate that this appeal process itself be digitalized. The "click-to-appeal" functionality in administrative portals lowers the threshold for challenging automated decisions. It integrates the remedy into the service delivery flow.

"Whistleblowing" channels. The Whistleblower Directive (2019/1937) protects persons who report breaches of EU law, including data protection and public procurement rules. IT staff in public administrations who discover security flaws or illegal algorithms can report them safely. This "insider" mechanism acts as an early warning system for e-government violations, strengthening the internal accountability of the digital state.

"Audits" by the Courts of Auditors (EU and national). While not a direct remedy for individuals, auditors check the "performance" and "compliance" of e-government systems. Reports on the failure of large IT projects or the lack of cybersecurity serve as evidence of maladministration. These findings can trigger political accountability and reforms that indirectly protect citizens' rights by improving system quality.

"Collective Complaints" to DPAs. Article 80 GDPR allows NGOs (like Noyb or La Quadrature du Net) to lodge complaints on behalf of data subjects. This "representative action" is crucial for challenging complex technical violations (e.g., illegal data transfers) that individual citizens lack the resources to investigate. It socializes the cost of enforcement.

Finally, the "Network of remedies". These bodies do not operate in isolation. The European Network of Ombudsmen, the EDPB, and the SOLVIT network cooperate. A complaint filed with the Ombudsman might be referred to the EDPS or SOLVIT. This "no wrong door" policy ensures that citizens find the right redress mechanism for their specific digital grievance.

Section 5: Future of Procedural Rights: Proactivity and Personalized Law

The future of procedural rights in e-government lies in the shift towards "Proactive Public Services." Instead of the citizen applying for a right (pull), the administration grants it automatically (push) based on data it already holds. For example, upon reaching retirement age, the pension is automatically calculated and paid. Procedurally, this eliminates the "burden of application." However, it requires a new procedural safeguard: the "Right to Opt-Out" or to correct the automatic grant. If the automatic calculation is wrong, the citizen must have a simple, digital way to intervene. The procedure shifts from "initiation" to "verification" (Zouridis et al., 2020).

"Personalized Administrative Law" (Micro-Directives). AI allows the state to tailor rules and communications to the individual. A tax portal might show only the deductions relevant to that specific user. While efficient, this raises "equal treatment" concerns. If two citizens see different interfaces or get different advice based on their profile, procedural fairness is challenged. Future procedural law must guarantee "Information Neutrality"—ensuring that personalization does not hide rights or manipulate choices (nudging).

"Real-Time Administrative Proceedings". IoT sensors allow for real-time regulation (e.g., dynamic speed limits or pollution charges). The administrative act (the fine or charge) is issued instantly. The procedural right to be heard before the decision becomes impossible in nanosecond transactions. The remedy shifts to "ex-post review." Procedural law must ensure that the "logs" of these real-time events are preserved and accessible, allowing the citizen to "replay" the event and contest the sensor data.
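
The idea that logs of real-time events must be preserved and "replayable" can be illustrated with a minimal sketch. The following hash-chained, append-only event log is a hypothetical example (the event fields and class names are illustrative assumptions, not any real e-government system): because each entry's hash incorporates the previous one, any later alteration of a recorded sensor event breaks the chain and is detectable on review.

```python
# Hypothetical sketch of a tamper-evident, append-only event log that would
# let a citizen "replay" and contest sensor-based decisions after the fact.
# Field names and the EventLog class are illustrative assumptions.

import hashlib
import json

def _digest(prev_hash: str, event: dict) -> str:
    """Hash this event together with the previous entry's hash."""
    payload = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

class EventLog:
    def __init__(self):
        self.entries = []  # list of (event, chained_hash) pairs

    def append(self, event: dict) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((event, _digest(prev, event)))

    def verify(self) -> bool:
        """Recompute the chain; any altered entry breaks every later hash."""
        prev = "genesis"
        for event, stored in self.entries:
            if _digest(prev, event) != stored:
                return False
            prev = stored
        return True
```

The design choice matters legally: the administration can add events but cannot silently rewrite history, which is the technical precondition for meaningful ex-post review.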

"Algorithmic Due Process". As ADM becomes ubiquitous, a specific "due process for algorithms" will emerge. This includes the right to test the system (sandbox access for defense), the right to know the error rate, and the right to "adversarial auditing." Defense lawyers and ombudsmen will need "algorithmic forensic tools" to challenge the digital evidence of the state.

"Virtual Courts" and the Metaverse. If justice moves to virtual reality, procedural rights like the "public nature of the hearing" must be redefined. Is a virtual hearing public if anyone can stream it? Or does that violate privacy? The "digital courtroom" requires a new code of procedure that balances open justice with the protection of digital identity and data.

"Data Sovereignty as a Procedural Right". Citizens may gain the right to "host" their own administrative data (in a Personal Data Pod) and grant temporary access to the state. The procedure becomes a "data visitation." The state does not collect the file; it views it in the citizen's pod. This reverses the procedural power dynamic, making the citizen the gatekeeper of the file.

"Anticipatory Legal Remedies". AI could predict legal conflicts (e.g., a likely denial of a permit) and suggest remedies before the decision is final. "Legal recommender systems" could guide citizens through the appeal process. However, reliance on "robot lawyers" raises issues of liability and quality of counsel.

"The Right to a Human Interface". As a reaction to total automation, a "right to human contact" is being debated. This would guarantee that for every significant administrative interaction, a human channel is available without excessive waiting times. This "analog right" ensures that the digital state remains humane and accessible to the non-digital population.

"Global Procedural Standards". As digital platforms operate globally, EU procedural rights (like GDPR redress) influence global standards. The "Brussels Effect" creates a global expectation of procedural fairness in digital transactions, exporting the EU model of "digital due process."

"Code-is-Law" vs. "Law-is-Code". The ultimate challenge is the translation of procedural law into software code. If the code prevents an illegal action (e.g., a form that cannot be submitted if fields are missing), the procedure is enforced by design. However, if the code is too rigid, it denies equity. Future procedural law will be "hybrid," consisting of legal text and computer code, requiring "legal coders" to draft the digital procedure.
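
A minimal sketch can make the "enforced by design" point, and its equity problem, concrete. The field names, the `request_human_review` flag, and both functions below are hypothetical illustrations, not any real portal's interface:

```python
# Hypothetical sketch: a mandatory-field check that enforces procedure
# "by design" — an incomplete appeal simply cannot be lodged.
# All names here are illustrative assumptions.

REQUIRED_FIELDS = {"applicant_name", "national_id", "decision_reference"}

def submit_appeal(form: dict) -> str:
    """Reject the submission unless all required fields are present."""
    missing = REQUIRED_FIELDS - form.keys()
    if missing:
        # The code blocks the defective act before it happens.
        raise ValueError(f"Missing fields: {sorted(missing)}")
    return "appeal lodged"

# The rigidity problem: a citizen who has no national ID cannot proceed
# at all. A "hybrid" rule might add a human-review escape hatch:
def submit_with_equity(form: dict) -> str:
    try:
        return submit_appeal(form)
    except ValueError:
        if form.get("request_human_review"):
            return "routed to caseworker"
        raise
```

The second function shows what "hybrid" procedural law could mean in practice: the strict rule lives in code, while an equitable exception routes edge cases back to a human decision-maker.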

"Democratic Control of Code". Procedural rules are laws. Therefore, the algorithms that implement these rules must be subject to democratic oversight. Parliaments must vote on the "parameters" of the algorithms (e.g., the threshold for fraud detection), not just the abstract law. This "political control of the algorithm" is the final procedural safeguard of the digital republic.
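
What it means for a parliament to vote on the "parameters" rather than the code can be sketched as a separation of concerns. In this hypothetical example (the dataclass, threshold value, and statute reference are all invented for illustration), the politically decided threshold is isolated from the implementation that merely applies it:

```python
# Hypothetical sketch: separating a legislated parameter from the code
# that applies it. The threshold value and statute reference are
# illustrative assumptions, not real legislation.

from dataclasses import dataclass

@dataclass(frozen=True)
class LegislatedParameters:
    """Values a parliament would vote on, versioned like statute text."""
    fraud_risk_threshold: float  # scores above this trigger human review
    statute_reference: str

PARAMS = LegislatedParameters(
    fraud_risk_threshold=0.8,
    statute_reference="Hypothetical Act 2025, art. 4(2)",
)

def flag_for_review(risk_score: float,
                    params: LegislatedParameters = PARAMS) -> bool:
    # The algorithm applies, but does not choose, the political threshold.
    return risk_score > params.fraud_risk_threshold
```

Freezing the parameter object and tying it to a statute reference mirrors the legal point: changing the threshold is an act of legislation, not a routine software update.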

Questions


Cases


References
  • Citron, D. K. (2007). Technological Due Process. Washington University Law Review.

  • Coglianese, C., & Lehr, D. (2017). Regulating by Robot. Georgetown Law Journal.

  • Contini, F. (2020). Artificial Intelligence and the Transformation of Humans, Law and Technology. Law, Technology and Humans.

  • Cortés, P. (2018). The Law of Consumer Redress in an Evolving Digital Market. Cambridge University Press.

  • Dumortier, J. (2017). The European Regulation on Trust Services. Digital Evidence Law.

  • Easton, C. (2013). Website accessibility and the European Union. International Review of Law, Computers & Technology.

  • Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Duke Law & Technology Review.

  • Galetta, D. U. (2019). Algorithmic Decision-Making and the Right to Good Administration. European Public Law.

  • Graux, H. (2015). The eIDAS Regulation. Computer Law & Security Review.

  • Hacker, P. (2018). Teaching an Old Dog New Tricks? Verfassungsblog.

  • Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.

  • Hobbing, P. (2011). SOLVIT. European Parliament.

  • Hofmann, H., & Cisotta, R. (2019). EU Administrative Law and the Digital Single Market. European Public Law.

  • Kalvet, T., et al. (2019). The Once-Only Principle. TOOP.

  • Kramer, X. (2020). Service of Documents and Taking of Evidence. European Parliament.

  • Krimmer, R., et al. (2017). The Once-Only Principle. IOS Press.

  • Kuner, C., et al. (2017). Machine learning and the GDPR. IDPL.

  • Misuraca, G., et al. (2010). Envisioning Digital Europe.

  • Schmidt, J. (2020). The Single Digital Gateway Regulation. European Public Law.

  • Velicogna, M. (2017). e-Justice in Europe. Utrecht Law Review.

  • Wachter, S., et al. (2017). Counterfactual Explanations. Harvard Journal of Law & Technology.

  • Zouridis, S., et al. (2020). Automated Discretion. Administration & Society.

10
Prospects for the development of EU e-government
2 5 5 12
Lecture text


Questions


Cases


References


Total All Topics 20 25 75 120 -

Frequently Asked Questions