OpenAI has released a comprehensive set of policy proposals titled "Industrial Policy for the Intelligence Age," outlining potential economic and societal shifts anticipated with the advent of artificial intelligence, particularly superintelligence. The 13-page paper suggests new approaches to wealth distribution, labor, and risk management, aiming to initiate public discussion among policymakers, investors, and the broader public.
The release has drawn varied reactions from experts, who acknowledge the document's scope while also raising questions about its practical implementation and the company's broader advocacy efforts.
Context and Objectives: Preparing for the AI Age
OpenAI's proposals emerge amidst growing public and governmental concerns regarding AI's potential impact, including job displacement, wealth concentration, and the increasing demand for data centers. The company stated its framework is designed to inform public discourse and prepare for a future where AI systems could surpass human intellect.
Three Core Objectives
The document outlines three core objectives:
- Broader distribution of AI-driven prosperity.
- Establishing safeguards for systemic risks associated with AI.
- Ensuring wide access to AI capabilities to prevent concentrated economic power and opportunity.
The company presented these proposals as a starting point for discussion rather than a comprehensive set of recommendations. This release followed a similar policy blueprint issued by Anthropic six months prior. Notably, the release coincided with a New Yorker investigation that raised questions concerning OpenAI CEO Sam Altman’s public statements on AI safety and related matters.
Key Policy Proposals: Reshaping Economy, Labor, and Risk
The "Industrial Policy for the Intelligence Age" paper suggests a range of approaches, combining elements of public wealth funds and expanded social safety nets with a market-driven economic framework.
Economic and Tax Adjustments
- Shift in Tax Burden: Proposes shifting the tax burden from labor to capital. OpenAI suggests that as corporate profits increase and reliance on labor income decreases, AI-driven growth could diminish the tax base supporting programs like Social Security and Medicaid.
- Higher Taxation: Recommends higher taxes on corporate income, AI-driven returns, or top-tier capital gains. A "robot tax," similar to a concept proposed in 2017, was also mentioned.
- Public Wealth Fund: Suggests establishing a Public Wealth Fund to grant Americans an automatic public stake in AI companies and infrastructure, with returns distributed directly to citizens.
Labor and Benefits
- Workweek Reduction: Proposes subsidizing a four-day workweek without a loss in pay.
- Company Contributions: Recommends increasing company contributions to retirement plans and covering a larger portion of healthcare, child, or eldercare costs, framing these as corporate responsibilities.
- Portable Benefits: Suggests portable benefit accounts that could follow workers between jobs, although these would largely depend on employer contributions.
Risk Management
- Containment Plans: Proposes containment plans for dangerous AI to address risks such as misuse by governments or malicious actors, as well as AI systems operating beyond human control.
- Oversight and Safeguards: Advocates for new oversight bodies and targeted safeguards against high-risk applications, including cyberattacks and biological threats.
Infrastructure and Access
- Energy Infrastructure: Proposes expanding electricity infrastructure to meet the significant power demands of AI systems.
- Accelerated Development: Calls for accelerating AI infrastructure development through measures like subsidies, tax credits, or equity stakes.
- AI as a Utility: Suggests that AI should be treated as a utility, advocating for collaboration between industry and government to ensure it remains affordable and widely available, rather than controlled by a few entities.
OpenAI stated that the transition to superintelligence necessitates a new industrial policy agenda to ensure its benefits are widespread. The company referenced historical economic shifts, such as the Industrial Age and initiatives like the New Deal, as precedents for facilitating broader opportunity and security through new public institutions, protections, and social safety nets.
The paper concludes that this transition will demand an even more ambitious form of industrial policy, one requiring collective action from democratic societies to shape their economic future for the benefit of all.
Expert Reactions and Considerations
The policy paper has elicited diverse responses from experts in AI policy and economics.
Lucia Velasco, a senior economist and AI policy leader, acknowledged the document's value in highlighting how far governments lag in developing policy responses to AI, which she views as a structural economic shift. However, Velasco also noted OpenAI's inherent interest in shaping the conversation, arguing that the discussion should extend beyond the company that initiated it.
Soribel Feliz, an independent AI policy advisor, credited OpenAI for formalizing the discussion around AI's potential impact on U.S. institutions and safety nets. Feliz observed that many of the proposed ideas, such as sharing prosperity, mitigating risks, and democratizing access, have been central to AI governance discussions since late 2022. She highlighted a perceived gap between identifying solutions and establishing concrete mechanisms for their implementation.
Nathan Calvin, vice president of state affairs at Encode AI, considered the document an improvement over previous high-level papers, specifically praising suggestions for auditing, incident reporting, and government restrictions on certain AI uses. Conversely, Calvin raised concerns about OpenAI's lobbying activities through the Leading the Future PAC, which has reportedly campaigned against politicians advocating for policies similar to those endorsed in the paper.
Anton Leicht, a visiting scholar with the Carnegie Endowment, expressed skepticism regarding the paper's feasibility. Leicht characterized the proposed ideas as fundamental societal changes that are politically challenging to implement. He further described the document as "comms work" that could provide cover for regulatory inaction, advocating instead for the AI industry to direct its political funding and lobbying toward advancing a concrete policy agenda.
OpenAI was founded as a nonprofit with the stated goal of ensuring AI benefits all humanity, but it has since restructured as a for-profit entity, a change that has drawn scrutiny over whether its mission remains compatible with its financial obligations to shareholders. OpenAI President Greg Brockman and other tech industry figures have contributed to super PACs advocating policies characterized as "light-touch" on AI regulation.