Building the ecosystem around the rules: What AI startups actually need from policy


There is a version of the AI policy debate that treats regulation and innovation as fundamentally opposed – where every compliance obligation is a drag on startup growth, and the choice is between protecting people and building companies. That is too simple.  

Across OECD member countries, governments are building national AI strategies, startup funding programmes, and governance frameworks for AI deployment. The intent is broadly the same: foster innovation, manage risk, maintain competitiveness. The problem is not the intent. It is that these three policy streams are rarely built in conversation with each other. The result is a policy landscape that says it wants AI startups to thrive while quietly making it harder for them to do so. 

Four gaps account for the distance between the stated ambition and the operational reality. 

Frameworks calibrated to incumbents, not entrants 

AI governance frameworks tend to be calibrated to the scale of the entities that dominated the consultation process that shaped them. This is not unique to AI – it is a pattern in technology regulation more broadly. Large technology companies maintain regulatory affairs functions specifically to engage in legislative design. Startups do not. Provisions for smaller companies are bolted on to a structure built around incumbents, rather than designed in from the start. 

An early-stage AI company in any OECD country faces a compliance landscape that was not really written with it in mind. While there is an expectation that founders will navigate this landscape, the supporting infrastructure is usually limited, particularly practical guidance that a founder without a legal team can realistically use. 

Having participated in the development of the EU AI Act's General Purpose AI Code of Practice as an SME representative, I can say from direct experience that this dynamic is visible even in the most carefully designed governance processes. The commitment to include smaller companies was genuine. But the asymmetry of resources between a startup trying to engage and an incumbent doing the same made balanced representation difficult in practice. The lesson is not that the process was closed – it was not – but that ease of engagement needs to be designed in, not assumed. 

Procurement: the missing lever in every national AI strategy 

Government, particularly at the local level, is the single largest purchaser of technology in most member economies. If the policy intent is to build a generation of leading AI companies, the most powerful instrument available is not a grant scheme or a tax credit – it is the decision about what to buy and from whom. On this measure, the record is uneven across most OECD countries. Public procurement frameworks are slow, risk-averse, and oriented toward established vendors. A startup with a better product frequently gets lost in the process, while large incumbent suppliers land the big contracts. 

The result is a contradiction at the centre of most national AI strategies: one arm of government funds AI startups to build products that another arm of government declines to buy. Until governments connect these two sides of their own policy, the investment will continue to underdeliver. Some countries have made progress — the UK's GovTech Catalyst programme attempted to open procurement pathways for smaller technology companies, and the EU and Canada have tried to make accommodations through dedicated mechanisms, such as innovation procurement platforms and educating SMEs about how procurement works. But these remain exceptions. 

The compute gap nobody has properly solved 

Compute – access to the processing infrastructure on which AI systems are trained and deployed – is the area where policy frameworks across the OECD have been slowest to develop meaningful responses, and where the structural disadvantage facing startups is most acute. Large-scale compute is currently available from only a small number of hyperscale providers. 

Public compute initiatives are emerging – the EU’s AI Factories programme, the UK’s AI Research Resource, national supercomputing investments in Japan and Canada among them – but they are in early stages, variably accessible to startups, and underintegrated into the broader startup support frameworks they should be serving. Connecting public compute to the companies most constrained by its absence should be a more explicit policy objective than it currently is. 

The representation problem that shapes all the others 

The core problem with AI policy is that it is still largely developed by people who have not built AI companies, informed by consultation processes that systematically over-represent the largest players and under-represent the startups whose behaviour and needs are most relevant to getting early-stage governance right. Changing this requires active effort: funded mechanisms that lower the cost of startup participation in regulatory development, outreach that goes beyond incumbents, and evaluation of policy outcomes that pays genuine attention to whether new entrants can actually operate under the frameworks. To date, the most substantive examples include the EU's regulatory sandboxes, but no one has fully solved the problem of funded, low-friction routes for early-stage AI companies to shape regulatory design rather than react to it. 

The test that matters 

The test of whether any government's AI innovation policy is working is not the ambition of its strategy document. It is whether an early-stage AI company can navigate the compliance landscape, compete for public contracts, access the compute it needs, and be heard when the rules are written. 

Across the OECD, on each of those measures, the gap between intent and operational reality is still large enough to matter. 

Read more on the latest OECD D4SME Survey



Jennifer Woodard is Chief Product & Technology Officer at Logically, an AI-driven intelligence company, and the former co-founder and CEO of Insikt AI, an applied AI company acquired in 2024. She participated in Working Group 2 of the EU AI Act's General Purpose AI Code of Practice as an SME representative. She has spent two decades in technology and a decade in applied AI research, working across intelligence, online harms, and regulatory risk, and has contributed to policy discussions at the UN Security Council, NATO StratCom, and the UN AI for Good Global Summit.