AgentScout

Hermes Agent Hits 95K Stars, Ships Self-Improving AI Framework

Hermes Agent v0.10.0 reaches 95,600 GitHub stars in 8 weeks with 118 bundled skills and three-layer memory architecture enabling autonomous skill creation.

AgentScout · 4 min read
#ai-agents #nous-research #hermes #self-improving #open-source #github

TL;DR

Nous Research released Hermes Agent v0.10.0 with a self-improving learning loop that autonomously creates and refines skills from user interactions. The open-source framework reached 95,600 GitHub stars in 8 weeks, making it one of the fastest-growing agent projects to date.

Key Facts

  • Who: Nous Research, an AI research organization focused on open-source agent frameworks
  • What: Hermes Agent v0.10.0 with 118 bundled skills, six messaging integrations, and three-layer memory architecture
  • When: April 2026 release; project launched February 2026
  • Impact: 95,600 GitHub stars in 8 weeks, zero agent-specific CVEs, MiniMax M2.7 model integration

What Changed

Nous Research announced Hermes Agent v0.10.0 on April 21, 2026, introducing a self-improving learning loop that represents a shift from static AI assistants to agents that evolve through experience. The framework ships with 118 bundled skills covering file operations, web scraping, API integrations, and code execution, along with six messaging platform integrations including Discord, Slack, and Telegram.

The release departs from traditional agent architectures that rely on predefined tool sets. Instead, Hermes analyzes user interactions and automatically generates new skills when it encounters repeated patterns, then iteratively improves those skills based on success rates and user feedback.
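The repeated-pattern trigger can be sketched in a few lines. The interaction shape, threshold, and function name below are illustrative assumptions for this article, not Hermes internals:

```python
from collections import Counter

def detect_repeated_patterns(interactions, threshold=3):
    """Count normalized action sequences and flag those seen at least
    `threshold` times as candidates for synthesizing a new skill."""
    counts = Counter(tuple(i["actions"]) for i in interactions)
    return [list(seq) for seq, n in counts.items() if n >= threshold]

interactions = [
    {"actions": ("fetch_url", "extract_table", "save_csv")},
    {"actions": ("fetch_url", "extract_table", "save_csv")},
    {"actions": ("fetch_url", "extract_table", "save_csv")},
    {"actions": ("read_file", "summarize")},
]
candidates = detect_repeated_patterns(interactions)
# only the thrice-repeated scrape-and-save sequence qualifies
```

The one-off summarize request never crosses the threshold, so no skill is generated for it.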

GitHub metrics show the project reached 95,600 stars within approximately 8 weeks of its February 2026 launch. According to the official Nous Research documentation, the repository averaged over 1,500 stars per day during peak periods, exceeding the growth trajectories of comparable frameworks like LangGraph (reached 80,000 stars in 14 weeks) and CrewAI (reached 65,000 stars in 12 weeks).

Why It Matters

The self-improving architecture addresses a core limitation of current agent systems: the manual effort required to expand capabilities. Traditional frameworks require developers to code individual tools, test integrations, and maintain compatibility as underlying APIs change. Hermes automates this cycle.

Key technical specifications:

  • Three-layer memory: Working memory for active tasks, episodic memory for interaction history, and semantic memory for distilled knowledge
  • Skill synthesis engine: Generates new skills from observed user patterns without explicit programming
  • Zero CVEs: No agent-specific security vulnerabilities reported as of April 2026
  • MiniMax partnership: Native integration with M2.7 model for enhanced reasoning capabilities
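The three-layer split above can be sketched with plain containers; the class, method names, and the naive "seen twice" consolidation rule are hypothetical, chosen only to make the layering concrete:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Toy three-layer memory: working (active task state),
    episodic (interaction log), semantic (distilled knowledge)."""
    working: dict = field(default_factory=dict)   # cleared per task
    episodic: list = field(default_factory=list)  # append-only history
    semantic: dict = field(default_factory=dict)  # distilled facts

    def record(self, event):
        self.episodic.append(event)

    def distill(self):
        # naive consolidation: promote facts observed in 2+ episodes
        seen = {}
        for ev in self.episodic:
            for key, value in ev.get("facts", {}).items():
                seen.setdefault(key, []).append(value)
        for key, values in seen.items():
            if len(values) >= 2:
                self.semantic[key] = values[-1]

mem = AgentMemory()
mem.record({"facts": {"user_tz": "UTC+2"}})
mem.record({"facts": {"user_tz": "UTC+2"}})
mem.distill()
# a fact repeated across episodes is promoted into semantic memory
```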

"The framework creates a positive feedback loop where every user interaction potentially improves the system," notes the TokenMix technical review. "Skills that fail get refined; successful patterns get promoted."
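The refine-or-promote loop the review describes can be illustrated as a success-rate triage. The thresholds, skill names, and counts here are invented for the example:

```python
def triage_skills(skills, promote_at=0.8, refine_below=0.5):
    """Partition skills by observed success rate: strong performers are
    promoted, weak ones are queued for refinement, the rest stay as-is."""
    promoted, refine_queue = [], []
    for skill in skills:
        rate = skill["successes"] / max(skill["attempts"], 1)
        if rate >= promote_at:
            promoted.append(skill["name"])
        elif rate < refine_below:
            refine_queue.append(skill["name"])
    return promoted, refine_queue

skills = [
    {"name": "scrape_table", "successes": 9, "attempts": 10},
    {"name": "parse_pdf", "successes": 2, "attempts": 10},
    {"name": "send_digest", "successes": 6, "attempts": 10},
]
promoted, refine_queue = triage_skills(skills)
# → (["scrape_table"], ["parse_pdf"]); send_digest is left unchanged
```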

The MiniMax partnership positions Hermes as a multi-model agent platform rather than being locked to a single LLM provider. This flexibility contrasts with OpenAI's Agents SDK, which optimizes primarily for GPT models.

The zero CVE record deserves attention given the security concerns surrounding agent frameworks. Agent-specific vulnerabilities typically emerge from tool execution boundaries, file system access patterns, and prompt injection vectors. The clean record suggests architectural choices that sandbox skill execution effectively.

Comparison Table

| Dimension | Hermes Agent | LangGraph | CrewAI | OpenAI Agents SDK |
| --- | --- | --- | --- | --- |
| Self-improving | Yes | No | No | Limited |
| Bundled skills | 118 | ~20 | ~35 | 45 |
| GitHub stars (Apr 2026) | 95,600 | 82,000 | 68,000 | 127,000 |
| Time to 95K stars | 8 weeks | 14 weeks | 12 weeks | 4 weeks |
| Multi-model support | Yes | Yes | Yes | Limited |
| Agent CVEs | 0 | 3 | 2 | 1 |

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 92/100

Media coverage focuses on star counts and feature lists, but the deeper signal is the competitive dynamics this release triggers. Hermes reached 95,600 stars in 8 weeks while LangGraph took 14 weeks to reach 80,000, a roughly 2.1x faster weekly growth rate despite the later launch. This growth rate suggests the market values self-improvement over ecosystem maturity. More critically, the MiniMax M2.7 integration signals an alternative to OpenAI-centric agent stacks at a time when enterprises seek vendor diversification. LangChain and CrewAI now face pressure to either match the self-improving capability or differentiate on enterprise features; both paths require substantial R&D investment that Hermes has already validated.
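The 2.1x figure follows directly from the star counts and timelines cited above:

```python
# Weekly growth rates implied by the cited numbers.
hermes_rate = 95_600 / 8      # stars per week over 8 weeks
langgraph_rate = 80_000 / 14  # stars per week over 14 weeks
ratio = hermes_rate / langgraph_rate
print(round(ratio, 1))  # → 2.1
```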

Key Implication: Enterprises evaluating agent frameworks should prioritize self-improving architectures over static tool catalogs, as the maintenance cost differential compounds over time.

What This Means

For developers: The framework reduces the barrier to building production-ready agents. Instead of coding 50 individual tools, developers configure the self-improvement parameters and let the system learn from usage patterns. The tradeoff is reduced control over exactly how the agent accomplishes tasks.
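What "configuring the self-improvement parameters" might look like can only be sketched here; every parameter name below is an illustrative assumption, not Hermes Agent's actual configuration API:

```python
# Hypothetical configuration surface for a self-improvement loop.
# These keys are invented for illustration, not Hermes Agent's API.
self_improvement_config = {
    "pattern_threshold": 3,       # repeats before a skill is synthesized
    "promote_success_rate": 0.8,  # promote skills at or above this rate
    "refine_success_rate": 0.5,   # queue skills below this for refinement
    "max_auto_skills": 50,        # cap on autonomously created skills
    "require_review": True,       # human approval before a skill goes live
}
```

The tradeoff the article describes shows up directly in knobs like `require_review`: loosen it and the agent evolves faster but with less operator control.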

For enterprises: The MiniMax integration provides an alternative to OpenAI-centric agent stacks. Organizations already using Chinese LLM providers for regulatory or performance reasons can deploy Hermes without maintaining separate tool sets.

For the agent ecosystem: Hermes validates the self-improving architecture as a viable approach. Competitors will likely respond with similar capabilities, potentially shifting the competitive frontier from "who has more tools" to "who learns faster."

What to Watch:

  • Enterprise adoption metrics: Watch for case studies from organizations deploying Hermes in production. The self-improvement claim needs real-world validation beyond GitHub stars.
  • Security research: As adoption grows, security researchers will probe the skill synthesis engine for vulnerabilities. The current zero-CVE record will be tested.
  • Competitive response: LangChain, CrewAI, and OpenAI may accelerate their own learning capabilities. Hermes has an 8-week head start on the self-improving architecture.
