Verkor AI Agent Designs Complete RISC-V CPU in 12 Hours
Verkor's Design Conductor generated a verified, layout-ready RISC-V CPU from a 219-word specification in 12 hours, compressing traditional 18-36 month design cycles into a single day.
TL;DR
Verkor's Design Conductor AI agent autonomously generated a complete, verified RISC-V CPU from a 219-word specification document in just 12 hours. The output is a layout-ready GDSII file, a deliverable that traditionally requires 18-36 months of engineering effort.
Key Facts
- Who: Verkor (AI chip design startup)
- What: Design Conductor AI agent produced complete RISC-V CPU from 219-word spec in 12 hours
- When: Demonstrated May 2026, FPGA implementation planned for DAC conference
- Impact: Design cycle compressed from 18-36 months to 12 hours (1000x+ acceleration)
What Changed
Verkor announced that its Design Conductor AI agent successfully designed a complete RISC-V CPU, dubbed VerCore, starting from a minimal 219-word requirements document. The system produced a verified, layout-ready GDSII file, the industry-standard format for chip fabrication, in 12 hours.
According to IEEE Spectrum, the design process consumed "many tens of billions of tokens," indicating substantial computational resources. The company plans to implement the design on FPGA hardware at the upcoming Design Automation Conference (DAC) for a live demonstration.
Traditional semiconductor design cycles span 18 to 36 months, involving large engineering teams working through specification, architecture, logic design, verification, and physical layout stages. Verkor's system collapsed this entire workflow into a single day of automated processing.
Why It Matters
The demonstration represents a measurable acceleration in hardware design methodology, though with important caveats:
- Design output quality: The GDSII file is verified and layout-ready, meaning it passed design rule checks and functional verification
- Not yet silicon-proven: The design has not been manufactured or tested on actual silicon
- Computational cost: The process required tens of billions of tokens, suggesting significant infrastructure requirements
- Verification bottleneck: Human engineers still needed to verify the AI's output before fabrication
Tom's Hardware notes that this is not the first AI-designed chip, but it represents one of the most complete demonstrations of end-to-end automated CPU design from specification to GDSII output.
| Metric | Traditional | Verkor Design Conductor |
|---|---|---|
| Design cycle | 18-36 months | 12 hours |
| Input specification | Detailed specs (months) | 219 words |
| Output format | GDSII (manual) | GDSII (automated) |
| Silicon status | Varies | Not yet manufactured |
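The compression factor claimed in the table can be sanity-checked with simple arithmetic; the month-to-hour conversion below uses an approximate 30-day month, which is my assumption rather than a figure from the coverage:

```python
# Back-of-envelope check of the "1000x+" acceleration claim:
# convert the traditional 18-36 month design cycle into hours
# and divide by the 12-hour AI run.
HOURS_PER_MONTH = 30 * 24  # ~720 hours, approximating a month as 30 days

def speedup(months: float, run_hours: float = 12.0) -> float:
    """Ratio of a traditional design cycle (in months) to the AI run time."""
    return (months * HOURS_PER_MONTH) / run_hours

low, high = speedup(18), speedup(36)
print(f"{low:.0f}x to {high:.0f}x")  # 1080x to 2160x
```

Even with the rough month approximation, the range lands comfortably above 1000x, consistent with the headline claim.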
Scout Intel: What Others Missed
Confidence: high | Novelty Score: 88/100
Media coverage focuses on the headline-grabbing "12 hours vs. 18 months" comparison, but the computational requirements tell a more nuanced story. The "tens of billions of tokens" consumed suggests this was not a lightweight process; it is comparable to processing millions of pages of documentation. At current token pricing, a single design run could cost tens of thousands of dollars in compute alone, positioning this as an enterprise tool rather than a democratized design platform. Additionally, the gap between a GDSII file and silicon-proven hardware remains the critical validation step; the FPGA demonstration at DAC will be the first real test of whether the AI-generated logic actually works.
Key Implication: Verkor's approach compresses design time at the expense of compute costs, shifting the economics of chip development from labor-intensive to compute-intensive, a tradeoff that favors large players with cloud resources over smaller teams.
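The "tens of thousands of dollars" figure above is a back-of-envelope estimate. The sketch below makes the arithmetic explicit; the 50-billion-token midpoint and the per-million-token prices are hypothetical inputs, since neither Verkor nor IEEE Spectrum disclosed actual pricing:

```python
# Rough compute-cost estimate for the reported token consumption.
# The article only says "many tens of billions of tokens"; the token
# count and per-token prices here are illustrative assumptions.
def run_cost(tokens: float, usd_per_million_tokens: float) -> float:
    """Compute-only cost of a design run at a given token price."""
    return tokens / 1e6 * usd_per_million_tokens

TOKENS = 50e9  # midpoint guess for "many tens of billions"
for price in (0.5, 2.0, 10.0):  # hypothetical $ per 1M tokens
    print(f"${price}/M tokens -> ${run_cost(TOKENS, price):,.0f}")
```

Even at the cheapest assumed rate, a single run lands in the tens of thousands of dollars, which supports the enterprise-tool framing.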
What This Means
The immediate impact falls on semiconductor design teams at established companies and startups. Teams that previously required 12-18 months for initial design iterations can now explore multiple architectural approaches in days, potentially accelerating the design space exploration phase by 10-100x.
However, the technology introduces new dependencies. Design verification remains a human-in-the-loop process, and the absence of silicon-proven results means the risk profile differs from traditional methodologies. The FPGA demonstration scheduled for DAC will provide the first public validation of whether the generated logic functions correctly.
The medium-term trajectory depends on whether the computational costs can be reduced while maintaining output quality. If token consumption scales linearly with design complexity, large SoCs could require prohibitive compute budgets. If Verkor demonstrates sub-quadratic scaling, the approach becomes viable for mainstream semiconductor development.
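The linear-versus-quadratic concern can be made concrete with a toy model. Every number here (the baseline gate count, the baseline token budget, and the SoC size) is an assumption chosen for illustration; the article provides no scaling data:

```python
# Toy model of how token budgets could grow with design complexity
# under linear vs. quadratic scaling. All constants are illustrative
# assumptions, not Verkor figures.
BASE_GATES = 100_000   # assumed complexity of the demonstrated CPU
BASE_TOKENS = 50e9     # "many tens of billions" of tokens

def tokens_needed(gates: int, exponent: float) -> float:
    """Token budget if consumption scales as (gates/BASE_GATES)**exponent."""
    return BASE_TOKENS * (gates / BASE_GATES) ** exponent

SOC_GATES = 10_000_000  # a large SoC, ~100x the assumed baseline
linear = tokens_needed(SOC_GATES, 1.0)     # 100x the tokens
quadratic = tokens_needed(SOC_GATES, 2.0)  # 10,000x the tokens
print(f"linear: {linear:.1e} tokens, quadratic: {quadratic:.1e} tokens")
```

Under the quadratic assumption, a 100x larger design needs 10,000x the tokens, which is why sub-quadratic scaling is the pivotal question for mainstream viability.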
For the RISC-V ecosystem specifically, automated design tools could lower barriers to custom processor implementations, enabling more domain-specific architectures without the traditional multi-year development cycles.
Related Coverage:
- Jama Connect First Engineering Software with MCP Server – MCP ecosystem expands to enterprise engineering systems
- Microsoft Agent Framework 1.0 GA Unifies AutoGen and Semantic Kernel – Multi-agent orchestration consolidation
- Isomorphic Labs AI-Designed Drugs Enter Human Trials – AI-designed molecules move from simulation to clinical validation
Sources
- AI Designs a Complete RISC-V CPU in 12 Hours – IEEE Spectrum, May 2026
- AI Agent Designs Complete RISC-V CPU from 219-Word Spec – Tom's Hardware, May 2026
- AI Agent Designed Complete RISC-V CPU from Scratch – TechSpot, May 2026
- AI Designs Full RISC-V CPU in 12 Hours – Cloud News, May 2026