This article interrogates the intersection of Artificial Intelligence (AI), digital transformation and sovereignty in the African context, with particular focus on Nigeria. It critiques the growing dominance of foreign technologies in shaping the continent’s AI policies, innovation ecosystems and legal frameworks, often without commensurate local input or contextual grounding. The work warns that the unchecked proliferation of imported AI systems risks entrenching digital dependency, algorithmic inequality and policy misalignment with local constitutional values, especially the rights to dignity, privacy and non-discrimination.
The author posits that Africa’s technological renaissance must not be outsourced to external actors whose platforms may embed biases, opaque logic and extractive data practices. He advocates for a homegrown model of AI governance rooted in the principle of “Ethics by Design”, one that reclaims human dignity and aligns technological progress with constitutional and cultural realities. The study highlights the Nigeria Data Protection Act 2023 as a positive, albeit preliminary, effort toward asserting regulatory control. However, it urges a more robust framework that includes mandatory data localization, algorithmic accountability and institutional capacity-building.
The paper further calls attention to the geopolitical dimensions of digital transformation, where Africa must negotiate its place not as a passive consumer but as an active co-creator of ethical, inclusive technologies. In conclusion, the author proposes a new social contract for the AI age, one that places human dignity, data sovereignty and indigenous innovation at the center of Africa’s digital future. Without this, foreign dominance in AI may reproduce colonial power asymmetries in digital form, undermining both democratic governance and developmental autonomy.
KEYWORDS: Artificial Intelligence and Digital Transformation, Regulatory Frameworks, Data Localization, Data Sovereignty, Algorithmic Accountability, Algorithmic Transparency, Ethics by Design, Foreign Tech Dominance, Digital Colonialism.
INTRODUCTION
In situating the arguments advanced in this article, it is essential to clarify certain operative terms that recur throughout the discourse. Artificial Intelligence, digital transformation and related regulatory concepts are often deployed with varying meanings across technical, legal and policy discourses. Without clear definitional grounding, the analysis of foreign technology dominance in Africa’s innovation ecosystem risks being blurred by semantic ambiguity.
Accordingly, the following section sets out key terms as used in this study, providing not only conventional definitions but also the contextual nuances most relevant to Africa’s socio-legal environment. These definitions are drawn from authoritative international sources, comparative regulatory frameworks and scholarly discourses, and they are tailored to the themes of sovereignty, accountability and digital justice that underpin the critique of “new digital colonialism.”
Artificial Intelligence (AI)
This term refers to the field of computer science and engineering devoted to building systems capable of performing tasks that ordinarily require human intelligence, such as reasoning, learning, perception, decision-making and natural language processing (Cole Stryker and Eda Kavlakoglu, ‘What is Artificial Intelligence?’ (IBM.com, 9th August, 2024) < www.ibm.com/think/topics/artificial-intelligence > accessed on 9th September, 2025). It encompasses a broad set of techniques, including machine learning, deep learning, expert systems and natural language understanding, through which systems recognize patterns in data, build predictive models and adapt through feedback (< https://cloud.google.com/learn/what-is-artificial-intelligence > (Cloud.google.com) accessed on 9th September, 2025).
AI powers a wide range of applications: autonomous vehicles, healthcare diagnostics, financial risk analysis, e-commerce personalization and governance tools. Beyond its technical utility, AI also raises profound legal and policy questions about accountability, ethics, bias, privacy and sovereignty.
Digital Transformation
Digital Transformation is the comprehensive integration of digital technologies, particularly artificial intelligence (AI), data analytics, cloud computing and automation, into every facet of economic, social and institutional life. It goes beyond mere digitization to fundamentally reshape how businesses, governments and societies operate, create value and deliver services.
In practice, digital transformation involves rethinking business models, optimizing operations and enhancing stakeholder experiences through data-driven decision-making. AI is its central driver: by automating routine processes, enabling predictive analysis, and personalizing interactions, AI not only improves efficiency but also generates entirely new modes of production, governance, and innovation.
At the societal level, digital transformation promises economic growth, financial inclusion and more adaptive public institutions. Yet it also introduces vulnerabilities such as cyber-security threats, dependency on foreign digital infrastructures and risks of algorithmic biases. In regions like Africa, where much of the enabling infrastructure is controlled by foreign technology providers, digital transformation intersects directly with questions of sovereignty, regulatory autonomy and the equitable distribution of technological benefits.
Regulatory Frameworks (for AI and Digital Technologies)
This concept refers to the system of laws, policies, institutions and enforcement mechanisms that govern the design, deployment and use of emerging technologies. Such frameworks establish permissible uses, set technical and ethical standards, protect fundamental rights (privacy, dignity, non-discrimination) and ensure the accountability of both domestic and foreign actors operating within a jurisdiction.
In the context of AI, regulatory frameworks commonly rest on principles of algorithmic accountability, transparency, fairness, human oversight and data protection. They are meant to balance innovation with safeguards against harms such as bias, opacity, or exploitative data practices.
Comparatively, the EU’s AI Act (< https://artificialintelligenceact.eu/ > (Artificialintelligenceact.eu) accessed on 9th September, 2025) exemplifies a risk-based approach, regulating AI systems according to their potential impact on rights and society. In Nigeria, emerging efforts such as the Nigeria Data Protection Act 2023 (< https://placng.org/i/wp-content/uploads/2023/06/Nigeria-Data-Protection-Act-2023.pdf > (Placng.org) accessed on 9th September, 2025), the Nigeria Startup Act, the Advertising Regulatory Council of Nigeria (ARCON) Act, and initiatives like the National Centre for Artificial Intelligence and Robotics (NCAIR) (< https://ncair.nitda.gov.ng/ > accessed on 9th September, 2025) under the National Information Technology Development Agency (NITDA) (< https://nitda.gov.ng/ > accessed on 9th September, 2025) signal movement toward structured oversight. Together, these instruments reflect attempts to localize data control, regulate AI-related services and guide innovation in line with Nigerian values and constitutional guarantees.
For Africa, the challenge is sharper: regulatory frameworks must also contend with foreign technology dominance, ensuring that imported AI systems and platforms are adapted to local contexts, protect sovereignty and advance developmental priorities rather than replicate external power asymmetries.
Algorithmic Transparency and Accountability
These are complementary principles designed to ensure that algorithmic systems operate in ways that are both understandable and responsible. Transparency requires that the processes, logic, data inputs and decision rules shaping algorithmic outcomes be visible and interpretable to users, regulators and other affected stakeholders (< https://en.wikipedia.org/wiki/Algorithmic_transparency > (Wikipedia.org) accessed on 9th September, 2025). It is a precondition for effective oversight, enabling independent review, auditing and informed consent. While transparency alone does not guarantee fairness, it makes unfair or biased practices detectable and open to challenge. Its key components include explainability, documentation of data sources, model interpretability and disclosure of decision pathways, with global benchmarks such as the European Union’s “right to explanation” and the European Centre for Algorithmic Transparency (ECAT) illustrating its growing importance.
Accountability, on the other hand, extends beyond visibility to place direct responsibility on the organizations that design, deploy, or rely on algorithms for the outcomes they generate (< https://en.wikipedia.org/wiki/Algorithmic_accountability > (Wikipedia.org) accessed on 9th September, 2025). It encompasses proactive measures such as algorithmic impact assessments, audits and bias testing, as well as reactive mechanisms including remedies for harm, liability before regulators or courts, and obligations to correct discriminatory or harmful results.
Taken together, transparency and accountability form the backbone of ethical AI governance. They ensure not only that algorithmic systems can be scrutinized, but also that those who use them remain answerable for their consequences, thereby aligning technological innovation with legal standards, human rights, and democratic values.
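For readers unfamiliar with what “bias testing” looks like in practice, the following minimal Python sketch illustrates one common audit metric, demographic parity (the difference in favourable-outcome rates between groups). All data, group names and the review threshold here are purely hypothetical examples, not drawn from any regulatory instrument discussed in this article:

```python
# Illustrative sketch of one simple bias-testing metric: demographic parity.
# All data, group labels and the 0.1 tolerance below are hypothetical.

def selection_rate(decisions):
    """Share of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved (0.75)
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3 of 8 approved (0.375)
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")

# An auditor might flag any gap above a chosen tolerance for review.
if gap > 0.1:
    print("Flagged for further audit and bias review.")
```

Demographic parity is only one of several competing fairness metrics; a real audit under an accountability framework would combine such quantitative checks with documentation review, impact assessment and human oversight.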
ETHICS BY DESIGN
This is a proactive philosophy and operational approach that integrates ethical principles such as fairness, privacy, human dignity, non-discrimination and accountability directly into the design and development of technological systems, especially AI (Philip Brey and Brandt Dainow, ‘Ethics by Design for Artificial Intelligence’ (Springer.com, 21st September, 2023) < https://link.springer.com/article/10.1007/s43681-023-00330-4 > accessed on 9th September, 2025). Unlike “ethics as compliance,” which treats ethics as a regulatory checkbox, Ethics by Design embeds ethical impact assessments, stakeholder consultations, bias testing and data protection safeguards into the technical architecture and governance frameworks from the outset.
Its purpose is to ensure that technologies are not only efficient but also equitable and humane, preventing harms such as systemic bias, privacy violations, or opaque decision-making. Global concerns around algorithmic discrimination, data misuse, and failed digital rollouts underscore the risks of neglecting this approach. In contexts like Nigeria, Ethics by Design must go beyond code and courtrooms, extending to grassroots participation, inclusive innovation and civil society engagement to ensure that AI systems respect democratic values of dignity, autonomy and justice.
Foreign Tech Dominance
This refers to the situation in which a small number of large foreign technology firms hold disproportionate influence over infrastructure, platforms, data, algorithms, investment and policy in sectors like AI in Africa, often shaping agendas, norms and capacities, sometimes at the expense of local innovation, control or sovereignty.
This dominance can manifest through cloud services, data storage and processing, algorithmic platforms, AI model deployment, foreign-held intellectual property and imported regulatory templates.
Its implications include dependency, technology-transfer gaps, limited local capacity-building, reduced bargaining power, imported algorithmic bias, unfair contractual terms and potentially extractive data practices.
Digital Colonialism
This refers to the new forms of control, dependency and power asymmetry in the digital and AI sphere, where developing or formerly colonized societies remain subject to external influence through foreign-owned infrastructures, platforms, algorithms, investment and data flows. Like classic colonialism, which relied on railways and trade routes to extract value, digital colonialism operates through proprietary software, corporate cloud systems and centralized internet services that capture, exploit and commodify local data for external profit.
This phenomenon compromises digital sovereignty when critical infrastructural, legal, or algorithmic decisions are determined abroad, raising urgent questions about who sets global standards, whose values are embedded in AI systems, who profits from data, and whether fundamental rights (privacy, dignity, non-discrimination) are preserved. Scholars have described it as a continuation of extractive logics under new technological guises, with Big Tech corporations imposing cultural norms, business models and algorithmic biases designed to maximize profit while presenting them under the rhetoric of “progress,” “development,” or “connecting people.”
Digital colonialism frames the global digital order as one in which the Global South risks remaining a consumer and data supplier, rather than an equal co-creator of the technologies that increasingly govern economic and social life. (To be continued).
THOUGHT FOR THE WEEK
“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence”. (Ginni Rometty).
