
The Quiet Revolution That Nobody Named

Cover — a hand holds a glowing smartphone against a landscape bridging a fishing village, an African market with laptops, and a modern skyline


The critique of technology and inequality is a settled genre. You know it because you have read it and, more importantly, because you have lived it. The internet was supposed to democratize everything. Instead it produced Google, Amazon, Facebook, and TikTok — four companies whose aggregate market capitalization exceeds the GDP of most nations and whose business models rest on extracting behavioral data from the billions who use them for free. The people who promised democratization were naive optimists or sophisticated salespeople. The platforms got rich. The users got surveilled. The monopolies deepened.

The academic literature is no cheerier. Eszter Hargittai, a communication scholar who has spent two decades studying how people actually use the internet rather than how they are supposed to use it, found that access to a device does not produce equal outcomes.1 When she compared what higher-income and lower-income users do online, the pattern was consistent and damning: higher-income users engage in what she called "capital-enhancing" activities — job searching, professional learning, civic participation, news reading, career-related communication. Lower-income users default to entertainment, basic messaging, and passive consumption. This holds even when device access is equal. The divide is not primarily about hardware. It is about what the hardware is used for, and that gap tracks income and status with uncomfortable precision.2

Put these two findings together — platform monopolies extracting value upward, device access producing different outcomes depending on who holds the device — and you reach what has become the consensus view among educated people who think carefully about technology: the computer and the smartphone have not reduced inequality. They have given inequality new and more sophisticated instruments.

This view is accurate. Its component claims are well-supported. The platform critique is correct. The second-level digital divide is real. Any article that opens by dismissing these concerns is an article you should close.

But what if this account, while accurate, is fundamentally incomplete?


A Kerala fisherman standing in a wooden boat full of mackerel, phone pressed to his ear, dawn light on the water — the moment information access equalized a market

In 1997, fishermen working the Kerala coast in southern India faced a problem as old as commerce itself: too much catch, and no way to know where demand existed. When a boat returned to shore with a surplus of mackerel, the choice was stark: sell at whatever price the local market would bear, or dump the excess into the sea. Meanwhile, markets ten or fifteen kilometers up the coast might have unmet demand that same morning — buyers ready to pay, no fish to buy. The system was not broken. It was working exactly as markets work when participants cannot communicate. Buyers and sellers were separated not by distance but by information. The fish rotted in one place while people went without in another.

Between 1997 and 2001, mobile phone service rolled out across Kerala's fishing districts in a phased geographic expansion — not as a development program, but as a commercial deployment. Robert Jensen, an economist then at Harvard, recognized what was happening and used the phased rollout as a natural experiment, the closest thing to a controlled trial that economic reality allows. He tracked fish prices, waste rates, and fishermen's earnings across districts before and after mobile service arrived.

The results appeared in the Quarterly Journal of Economics in 2007.3 Price dispersion — the variation in fish prices across markets on the same day — dropped by 38 percentage points. Fish waste fell by 4.8 percentage points. Fishermen's profits rose 8 percent. Consumer prices fell 4 percent. Producer and consumer both came out ahead.

The study's specific narrative — fishermen freely calling ahead to compare prices before deciding where to land — has since been questioned. A 2016 critique noted that Kerala fishing regulations and credit arrangements with auction agents may have constrained market choice more than Jensen's account implies.4 The precise mechanism is contested. What is not contested is the outcome: information access improved market outcomes, substantially and measurably.

Jenny Aker, an economist at Tufts, then replicated the core finding in Niger — grain traders instead of fishermen, West Africa instead of South Asia, 2001 to 2006.5 When mobile service arrived in isolated markets, grain price dispersion across market pairs fell by 10 to 16 percent. The effect was largest exactly where you would expect: the most isolated markets, where information gaps inflicted the most economic damage. Same pattern. Different continent, different regulatory environment, different commodity.

What Jensen documented — whatever the precise mechanism — is what happens when people structurally excluded from a market's information suddenly gain access to it. The Kerala fisherman was not doing anything a commodity trader in London does not do every morning. He was doing exactly what a commodity trader does. He had been prevented not by his intelligence or his work ethic or the quality of his fish, but by the absence of a tool. When the tool arrived, outcomes equalized — not completely, not permanently, but measurably. Nobody planned this. Nobody named it.

From information, the pattern extends to money. Tavneet Suri and William Jack tracked 1,600 Kenyan households through five survey rounds between 2008 and 2014 to measure the effects of M-Pesa, the mobile money service launched in 2007 that allowed financial transactions — transfers, savings, payments — through a basic mobile phone. No bank account required. No credit history. No branch visit. The study, published in Science in 2016, estimated that M-Pesa access lifted 194,000 Kenyan households out of extreme poverty — a figure contested on methodological grounds.6 The broader financial inclusion story is less disputed: Kenya's financial inclusion rate rose from 26 percent in 2006 to 84 percent by 2021, driven primarily by M-Pesa. Suri and Jack's gender findings are the study's most robust element: in high-M-Pesa-density areas, 185,000 women shifted from subsistence farming to business or retail occupations. The technology reached women, changed what they could do, and that change shows up across multiple measures.

From money, the pattern scales to labor markets. The World Bank's 2023 report on online gig work found that between 2020 and 2023, job postings for online gig work grew 130 percent in Sub-Saharan Africa.7 In North America over the same period: 14 percent. That 9-to-1 ratio is the number that matters, more than any absolute count. Nigeria, Kenya, and South Africa together account for approximately 17.5 million online gig workers. The mechanism is structurally identical to Jensen's fish markets: a person previously excluded from a market by geography, credential, or capital now has access to it, because computing technology has made those barriers less permanent.

Consider what connects these cases. Jensen's fishermen used voice calls on basic handsets. M-Pesa runs on feature phones via SMS. Online gig work requires a smartphone or a laptop. Different devices — but points on a continuum, each expanding a different dimension of the same underlying equalization. The phone call equalizes information access. The SMS transfer equalizes financial access. The smartphone equalizes market access for skilled labor. The laptop equalizes production itself. The claim here is not about any single device. It is about what the computing continuum, taken together, has done to the barriers that geography and credential and capital once made permanent.

Why have you never heard about this? That is not a rhetorical question. It is the question the rest of this article is about.


The revolution's invisibility has two explanations. The first requires a concept most people who care about inequality have never encountered.

In 1999, the economist and philosopher Amartya Sen published Development as Freedom.8 Its central argument: the proper measure of human development is not income, or utility, or formal legal rights. It is capability — the actual ability of a person to achieve the life she has reason to value. Development is the expansion of substantive freedoms: the freedom to do, to be, to become. What blocks development is not primarily the absence of income but the absence of real options.

Sen's framework draws a distinction that is the load-bearing beam of any serious argument about technology and equality: capability equalization versus outcome equalization. They are not the same thing. Confusing them produces most of the muddle in debates about whether computers have made the world more or less equal.

Stated plainly: a computer gives a person in Lagos the capability to design, to publish, to earn, to organize, to diagnose, to advocate. Those capabilities were structurally inaccessible to her without it — not because she lacked intelligence or will, but because the tools required to exercise them cost millions of dollars to replicate and demanded geographic proximity to institutional gatekeepers. Whether she achieves equal outcomes with a designer in New York depends on what Sen called "complementary inputs" — bandwidth, language, hardware quality, client relationships — still unevenly distributed. The computer equalizes capability. It does not, by itself, equalize outcomes.

The question, then, is not whether the person in Lagos earns the same as the designer in New York. The question is whether she can design at all. That question has a different answer than it did in 1994. The change is more significant than the gap that remains. And this is not an abstract philosophical assertion — it is precisely the capability documented in the evidence: price information access in Kerala and Niger, financial transactions in Kenya, skilled labor markets from Nairobi to Lagos. Real capabilities, previously unavailable, now available through a device.

This distinction has not made it into popular discourse. "Digital equity" programs distribute devices. "Digital divide" coverage counts broadband subscriptions. The question of what a connected device makes structurally possible — which capabilities it equalizes, which it does not — is asked in academic journals and almost nowhere else. The political vocabulary for naming capability equalization as distinct from outcome equalization does not exist where political decisions are made.

And here is the second, smaller surprise: the political framework for naming this has already been built. Since 2012, the UN Human Rights Council has passed a series of resolutions affirming that internet access is protected under Article 19 of the Universal Declaration of Human Rights — the article guaranteeing freedom of expression.9 These are declarations of principle, not enforceable legal obligations, a distinction that matters and to which we will return. But the 2021 resolution, introduced by a core group of Brazil, Nigeria, Sweden, Tunisia, and the United States and co-sponsored by 70 nations, called on states to adopt universal access as a policy objective. Estonia declared internet access a universal service right in 2000. Finland enacted a 1 megabit per second minimum as a legal right in 2010. Costa Rica's constitutional court declared internet access a fundamental right that same year. The vocabulary exists. It has been articulated and partially enacted. What does not yet exist at scale is the operationalization — the enforcement infrastructure that would treat the naming as actionable.

That gap between articulation and operationalization is the subject of the next section.


Every powerful equalizing technology in history has gone through a long gap between its emergence and its recognition as a rights instrument. Not an accident. A pattern. And understanding the pattern tells us exactly where we are.

The 15th Amendment, ratified in 1870, prohibited states from denying the vote on the basis of race. The formal right was established. For the next 95 years it coexisted with a systematic apparatus of suppression: literacy tests designed to be impossible to pass — reciting the entire Constitution on demand was a documented requirement in some Southern counties — poll taxes, grandfather clauses, physical intimidation, violence at the ballot box. The right existed. The exercise of it did not.

The Voting Rights Act of 1965 outlawed these practices.10 Within one year, four of thirteen Southern states had more than 50 percent of Black voters registered. The technology — the ballot — had been available for 95 years. Closing the gap required not a better ballot, but the enforcement infrastructure to remove the suppression mechanisms.

Education followed a similar arc. In 1848, Horace Mann declared education "the great equalizer of the conditions of men, the balance-wheel of the social machinery."11 Massachusetts enacted compulsory attendance in 1852. And then the "great equalizer" coexisted for 58 years with Plessy v. Ferguson's doctrine of "separate but equal," which legally codified educational inequality for Black Americans. Brown v. Board of Education came in 1954: 106 years after Mann's declaration, 58 years after Plessy made Mann's claim a deliberate fiction.

The printing press moved on a longer timeline, but in a structurally identical pattern. And here the history offers a joke with a serious point.

In 1492, a German abbot named Johannes Trithemius sat down to write a book. The printing press had operated in Europe for roughly forty years, and Trithemius was worried. In his manuscript De Laude Scriptorum — "In Praise of Scribes" — he argued that monks should continue copying manuscripts by hand.12 The printed book, he warned, was made of paper and would "quickly disappear." Parchment would last. Worse, the press encouraged "sloth" — why labor at sacred copying if a machine could do it faster? The very ease of the press was its spiritual danger.

Trithemius's book was published in 1494 as a printed book.

Abbot Trithemius at his desk — quill in one hand, a printed book in the other, while through the arched window behind him a printing press runs

He used the technology he decried to distribute his complaint about the technology. The irony is perfect, and it is not accidental. The abbot's objection was not really about paper versus parchment. It was about the disruption of a practice — sacred manuscript copying — that organized the monastery's time, legitimized the monks' role, and protected the Church's monopoly on textual production. The printing press was going to equalize access to text. Trithemius, whose institution held that equalizing power, objected. The objection wore spiritual dress. Its social function was the preservation of a gatekeeping monopoly.

The "phones make us dumb" discourse is Trithemius's book in twenty-first-century costume. The cognitive fear shifts — scribal virtue becomes deep reading becomes undistracted attention — but the social function stays identical: the class whose status derives from mastering a cognitive tool resists the equalization of access to that tool by framing the resistance as concern for cognitive purity. To be fair: a 2024 meta-analysis of 33 studies found small-to-medium negative effects of smartphone proximity on working memory (d = -0.20).13 The cognitive concern is not baseless. But small real effects and the Trithemius pattern are not mutually exclusive. The printing press did disrupt scribal practice. The question is which side of the disruption deserves protection — the gatekeeping institution or the people newly gaining access.

A survey in Mathematics Teacher magazine in the mid-1970s found that 72 percent of respondents opposed giving seventh graders calculators.14 Students would lose computation skills, become reliant on machines, fail to learn from errors. Same structure as Trithemius in 1492. Same structure as Nicholas Carr's The Shallows in 2010. Calculators are now universal in classrooms. The argument was settled: they do not make students dumb.

Even technologies with no cognitive dimension at all went through the lag. Marc Levinson's The Box, published in 2006 — fifty years after Malcolm McLean's first container voyage — was the first comprehensive account to name the shipping container as the engine of post-WWII globalization.15 The Economist concluded: "Without the container, there would be no globalization." Nobody noticed for fifty years. The container was a box. It was doing something world-historical. The naming came later.

We are inside the recognition lag now. The question is whether it has to last another fifty years.


Before answering that question, intellectual honesty requires taking the counterarguments seriously. Not as a rhetorical move. Because the counterarguments are real.

The most powerful objection is an analogy: the Green Revolution.

In the 1960s and 1970s, the Green Revolution introduced high-yield variety seeds that dramatically increased agricultural productivity across Asia, Latin America, and parts of Africa. The seeds were a genuine technological leap. They worked. They fed millions of people who would otherwise have starved. But the seeds required fertilizer, pesticides, and controlled irrigation to achieve their promised yields — complementary inputs that cost money. Large commercial farmers with access to credit, irrigation infrastructure, and bulk-purchasing power captured most of the productivity gains. Smallholder subsistence farmers without those inputs saw limited benefit. In many cases they were displaced as consolidation accelerated.

The Green Revolution equalized the seed. It did not equalize the harvest, because the seed without the soil is not a complete technology. The people who had the soil captured the gains.

This is the precise structure of the computer's limitation. The device without bandwidth, skill, language, client relationships, electricity — without its complementary inputs — does not deliver its equalizing potential. And those inputs are not equally distributed.

The GSMA Mobile Gender Gap Report documents what this looks like:16 in 2024, women in low- and middle-income countries were 15 percent less likely than men to use mobile internet, a gap narrowed from 19 percent in 2023 but now stalled — suggesting early progress was easier than what remains. The barrier is partly economic: an entry-level smartphone costs 24 percent of a woman's monthly income in those countries, versus 12 percent for a man, because women earn less. It is also social, and the social dimension is not reducible to a simple policy fix. Even controlling for income, education, and employment, women remain less likely to use mobile internet, because norms accumulated over generations, safety concerns that are real rather than perceived, and cultural structures around technology ownership add layers of barrier that no legislature dissolves in a single session. Approximately 885 million women are still not using mobile internet. Two-thirds of them are in South Asia and Sub-Saharan Africa.

Then there is the question of what kind of integration the computer offers. Data annotation workers in Kenya — reviewing harmful content for platforms like Facebook and TikTok — earn between $1.50 and $2 per hour.17 This is real income, higher than many local alternatives. But it is a tiny fraction of the value the same work generates for the Northern firms training the AI systems. And worker surveys in Ghana, Kenya, and Colombia find something the wage comparison does not capture: content moderators and annotation workers report significant rates of anxiety, depression, and trauma from sustained exposure to harmful material, without the support structures — health insurance, psychological services, stable scheduling — standard for equivalent labor in the Global North. The Kenyan annotator is earning more than before. She is also absorbing costs, psychological and otherwise, that the wage figure does not include. Is this equalization, or integration into a new hierarchy with different names? And note a further layer: the entire infrastructure — the platforms, the operating systems, the cloud services, the payment rails — is owned and controlled by Northern corporations. The capability the device offers is real but contingent, existing at the discretion of platform owners who can change terms of service, adjust fee structures, or withdraw from markets. This is a dependency relationship, not a permanent redistribution.

These are genuine limits. The complementary inputs problem is real. The gender gap means the revolution is structurally withheld from hundreds of millions of women. Platform capitalism extracts real value from Global South workers while delivering real income gains. Both things are true simultaneously. Neither cancels the other.

But these limits share a feature: in each case the failure is political, not technological. The seed was not the problem in the Green Revolution — the absence of fertilizer subsidies and land reform was. The device is not the problem with the gender gap — the social norms, wage inequality, and safety conditions that make the device inaccessible to women are the problem, and while those conditions have structural and cultural dimensions that resist fast legislative solutions, they are human-made, not technologically determined. The computer is not the problem with annotation wages — the absence of labor standards, collective bargaining rights, and platform regulation is. In each case the technology has equalized a capability. The political infrastructure has failed to ensure the equalization deepens rather than being captured.

We have seen this structure before. It is exactly what the voting rights history looks like.


The 15th Amendment established the formal right. The literacy test was the suppression mechanism. The Voting Rights Act was the enforcement infrastructure. The sequence: formal right exists, suppression prevents its exercise, political action names the suppression, enforcement infrastructure is built, functional right follows.

The computer is at step two of this sequence. The formal capability exists and is growing. The suppression mechanism is operating. It has not yet been named as suppression.

Zillien and Hargittai's research, confirmed by the US Department of Education's 2024 National Educational Technology Plan, documents the mechanism.18 Lower-income users, even when given device access, systematically land in passive content consumption — entertainment, scrolling, basic messaging — rather than the capital-enhancing activities that constitute the computer's equalizing function. The 2024 DOE plan introduced the term "digital use divide" as a formal policy concept, distinguishing it from access gaps: the use divide is about whether technology enables "exploration, creation, and critical analysis" or passive consumption.

The research does not establish definitively that platform design causes this pattern rather than pre-existing skill gaps or educational differences. What it does establish is that the pattern is consistent, income-correlated, and structurally reinforced by the incentive architecture of attention-economy platforms, which maximize engagement time through entertainment, outrage, and passive scrolling. The incentives of the platform and the equalization of the user are not aligned. Whether you call this "suppression" depends on how much weight you give to intent versus effect. Unlike the literacy test — designed explicitly to prevent Black Americans from voting — platform algorithms are designed to maximize engagement for profit, and the passive-consumption concentration among lower-income users is a consequence, not an objective. The analogy to the voting rights sequence is about structural effect, not intent: capability exists, the mechanism that prevents its exercise operates at scale, the mechanism has not been named as a political problem requiring political remedy.

Andy Clark, a philosopher at the University of Edinburgh, argues in Natural-Born Cyborgs that humans are not beings who use tools but beings whose intelligence is constitutively made of tools.19 Speech, counting, writing, printing, computing — each is what Clark calls a "mindware upgrade," a point at which the effective architecture of human intelligence expands and transforms. The printed book is not outside the mind that reads it; it is part of the cognitive apparatus through which that mind operates. If this is right — even in the weak version, requiring only that tools significantly extend cognitive capacity rather than literally constitute it — then denying access to the best available cognitive tools is not a consumer welfare failure. It is a cognitive infrastructure failure. A person without access to a computer is not operating with "purer" intelligence. She is operating with less of it.

This reframes the "phones make us dumb" impulse — the sense, common among educated people who read carefully about attention and cognition, that they should use their devices less, or more intentionally, or with more discipline. That impulse is not wrong as personal practice. The small cognitive effects documented in the 2024 meta-analysis are real, and individual screen-time decisions are legitimate. But as political analysis, the impulse is Trithemius's complaint. The abbot was right that his monks' spiritual practice was being disrupted. He was wrong about which side of the disruption deserved protection. The question is not whether any individual reader uses their device more mindfully. The question is whether the political systems that govern the device treat it as the civil rights instrument it already is.

Apply the voting rights model to computers and the political argument becomes precise. Formal access is growing — devices are reaching more people. What is missing is suppression removal (the attention economy, optimized for engagement in ways that steer the device away from its equalizing function) and enforcement infrastructure (the legal and institutional apparatus that would treat computer capability access as a rights issue, not a market outcome). Formal right plus suppression removal plus enforcement infrastructure equals functional right. We have the first. We are missing the second and third.


The UN Human Rights Council passed its first resolution linking internet access to Article 19 rights in 2012. By 2021, 70 nations co-sponsored a resolution calling for universal access as a policy objective. Estonia, Finland, and Costa Rica had already moved from resolution to law. But these resolutions are non-binding declarations of principle. They do not obligate governments to provide access. They cannot punish noncompliance. The gap between a resolution calling for universal access and a legal framework that enforces it is as wide as the gap between the 15th Amendment and the Voting Rights Act. Fourteen years have passed since the first UN resolution. The vocabulary exists where principles are articulated. The operationalization does not exist where capability is actually distributed.

The recognition lag is normal. Every equalizing technology went through it. The printing press operated for centuries before Elizabeth Eisenstein analyzed what it had done to society. The ballot existed for 95 years before the suppression mechanism was named and removed. Education was declared the great equalizer in 1848 and denied to Black Americans for a century.

But the lag is not free. It has a cost, and the cost falls on specific people in specific places.

The people paying it right now are not abstractions. They are the 885 million women in low- and middle-income countries who are not using mobile internet. They are the grain traders in Niger and the fish sellers in Kerala who got their information revolution by accident — a commercial telecom rollout happened to give them a tool their economies had denied them for generations. They are the 17.5 million workers in Nigeria, Kenya, and South Africa engaged in online gig work, and the orders of magnitude more who would be if the complementary inputs were treated as a political responsibility rather than a market sorting mechanism.

The computer is structurally more powerful as an equalizer than any tool that preceded it, because it operates across all domains simultaneously. The printing press equalized one category of information. The ballot equalized one dimension of political participation. Public education equalizes one domain, and takes a generation to do it. The computer equalizes the capability to publish, to earn, to learn, to organize, to diagnose, to advocate — all at once, at near-zero marginal cost of replication, across every domain in which human intelligence operates.

The revolution is not coming. It already happened. It happened in fish markets and grain markets and mobile money networks and online freelancing platforms, mostly in places that educated people in wealthy countries do not look at when they think about where the future is being made.

Every year the recognition lag persists, billions of people who could convert their intelligence into globally exchangeable value remain locked in the local economy of their birth — not because the technology does not exist, not because the will does not exist, but because no political system has yet decided to treat the tool they are holding as the civil rights instrument it already is. The people with the power to make that decision — legislators, platform executives, voters in wealthy democracies — are not, in the main, the people paying the cost of the delay.

The device in your hand is not a phone. It is not a productivity tool or an entertainment machine or a surveillance apparatus, though it is all of those things too. It is the most powerful instrument for the redistribution of human capability ever built. The naming is the political act. The lag is the choice — and the choice belongs to those of us for whom the cost of not choosing is invisible.




All images in this article were generated by AI (OpenAI gpt-image-1).

Footnotes

  1. Eszter Hargittai, "Second-Level Digital Divide: Differences in People's Online Skills," First Monday, Vol. 7, No. 4, April 2002. https://firstmonday.org/ojs/index.php/fm/article/view/942

  2. Nicole Zillien and Eszter Hargittai, "Digital Distinction: Status-Specific Types of Internet Usage," Social Science Quarterly, Vol. 90, No. 4, 2009. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6237.2009.00617.x

  3. Robert Jensen, "The Digital Provide: Information (Technology), Market Performance, and Welfare in the South Indian Fisheries Sector," Quarterly Journal of Economics, Vol. 122, No. 3, August 2007, pp. 879–924. https://academic.oup.com/qje/article-abstract/122/3/879/1879540

  4. Steyn, "Information and Communication Technology and the Fishers of Kerala: A Critical Reappraisal," Electronic Journal of Information Systems in Developing Countries, 2016. https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.1681-4835.2016.tb00537.x

  5. Jenny C. Aker, "Information from Markets Near and Far: Mobile Phones and Agricultural Markets in Niger," American Economic Journal: Applied Economics, Vol. 2, No. 3, July 2010, pp. 46–59. https://www.aeaweb.org/articles?id=10.1257/app.2.3.46

  6. Tavneet Suri and William Jack, "The Long-Run Poverty and Gender Impacts of Mobile Money," Science, Vol. 354, Issue 6317, pp. 1288–1292, December 9, 2016. https://www.science.org/doi/10.1126/science.aah5309. For methodological critique, see: Milford Bateman, Maren Duvendack, and Nicholas Loubere, "Is Fin-Tech the New Panacea for Poverty Alleviation and Local Development?," Review of African Political Economy, 2019. https://www.tandfonline.com/doi/full/10.1080/03056244.2019.1614552

  7. World Bank Group, "Working Without Borders: The Promise and Peril of Online Gig Work," September 2023. https://www.worldbank.org/en/news/press-release/2023/09/07/demand-for-online-gig-work-rapidly-rising-in-developing-countries

  8. Amartya Sen, Development as Freedom, Anchor Books, 1999.

  9. UN Human Rights Council, Resolution A/HRC/32/L.20, June 2016; Resolution on Internet and Human Rights, July 13, 2021. See ARTICLE 19 analysis: https://www.article19.org/resources/un-human-rights-council-adopts-resolution-on-human-rights-on-the-internet/

  10. Voting Rights Act of 1965, National Archives. https://www.archives.gov/milestone-documents/voting-rights-act. See also: Brennan Center for Justice, "The Voting Rights Act, Explained." https://www.brennancenter.org/our-work/research-reports/voting-rights-act-explained

  11. Horace Mann, "Twelfth Annual Report to the Secretary of the Massachusetts State Board of Education," 1848.

  12. Johannes Trithemius, De Laude Scriptorum (In Praise of Scribes), written 1492, published 1494. https://archive.org/details/inpraiseofscribe0000trit

  13. "The Mere Presence of a Smartphone: A Meta-Analysis of 33 Studies," Technology, Mind, and Behavior (APA), 2024. https://tmb.apaopen.org/pub/7np97zr5

  14. Mathematics Teacher magazine survey, mid-1970s. Cited in: "A Historical Analysis of Attitudes Toward the Use of Calculators in Junior High and High School Math Classrooms in the United States Since 1975," ERIC ED525547.

  15. Marc Levinson, The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, Princeton University Press, 2006 (2nd ed. 2016). https://press.princeton.edu/books/paperback/9780691170817/the-box

  16. GSMA, "The Mobile Gender Gap Report," 2023/2024. https://www.gsma.com/gender-gap-2023/

  17. Brookings Institution, "Reimagining the Future of Data and AI Labor in the Global South," 2023. https://www.brookings.edu/articles/reimagining-the-future-of-data-and-ai-labor-in-the-global-south/. See also: Media@LSE, "The Perilous Future of AI Work in the Global South," November 2025. https://blogs.lse.ac.uk/medialse/2025/11/14/the-perilous-future-of-ai-work-in-the-global-south/

  18. Nicole Zillien and Eszter Hargittai, "Digital Distinction," Social Science Quarterly, 2009 (see note 2). US Department of Education, "2024 National Educational Technology Plan: A Call to Action for Closing the Digital Access, Design, and Use Divides," January 2024. https://tech.ed.gov/netp/

  19. Andy Clark, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press, 2003. https://global.oup.com/academic/product/natural-born-cyborgs-9780195177510