Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

I replicated David Ng's RYS method ( https://dnhkng.github.io/posts/rys/ ) on consumer AMD GPUs (RX 7900 XT + RX 6950 XT) and found something I didn't expect. Transformers appear to have discrete "reasoning circuits" — contiguous blocks of 3-4 layers that act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning pipeline twice. No weights change. No training. The model just thinks longer.

The results on standard benchmarks (lm-evaluation-harness, n=50):

Devstral-24B, layers 12-14 duplicated once:
- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded

Qwen2.5-Coder-32B, layers 7-9 duplicated once:
- Reasoning probe: 76% → 94%

The weird part: different duplication patterns create different cognitive "modes" from the same weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling (13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.

The circuit boundaries are sharp — shift by one layer and the effect disappears or inverts. Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7 layers in 72B).

Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. The whole thing — sweep, discovery, validation — took one evening. Happy to answer questions.
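
If you want to try the duplication without the repo's GGUF tooling, here is a minimal transformers-only sketch of the same idea. It assumes a Llama/Mistral-style decoder (blocks living in model.model.layers) and uses a placeholder checkpoint id; it copies the duplicated layers so the KV cache stays consistent, which means it costs extra memory, unlike the shared-weight GGUF routing described above. Treat it as a sketch, not the repo's implementation.

```python
import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your/devstral-style-checkpoint"  # placeholder, not a real model id

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

layers = model.model.layers   # nn.ModuleList of decoder blocks (Llama/Mistral-style)
n = len(layers)

# Run layers 0-14 as usual, then layers 12-14 a second time, then 15 onward.
order = list(range(0, 15)) + [12, 13, 14] + list(range(15, n))

new_layers = []
for new_idx, old_idx in enumerate(order):
    layer = copy.deepcopy(layers[old_idx])       # copy so each pass gets its own KV-cache slot
    if hasattr(layer, "self_attn") and hasattr(layer.self_attn, "layer_idx"):
        layer.self_attn.layer_idx = new_idx      # keep cache indexing consistent after reordering
    new_layers.append(layer)

model.model.layers = torch.nn.ModuleList(new_layers)
model.config.num_hidden_layers = len(new_layers)  # so downstream code sees the new depth
```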

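Scoring the modified model is the same lm-evaluation-harness flow behind the numbers above. Here is a sketch against its v0.4+ Python API, continuing from the snippet above; the task name and few-shot settings are assumptions rather than the exact config used for those runs, and limit=50 mirrors the n=50 sample size.

```python
# Continues from the sketch above (reuses `model` and `tokenizer`).
# Assumes lm-evaluation-harness v0.4+; task names are illustrative.
from lm_eval import simple_evaluate
from lm_eval.models.huggingface import HFLM

lm = HFLM(pretrained=model, tokenizer=tokenizer, batch_size=4)

results = simple_evaluate(
    model=lm,
    tasks=["gsm8k"],   # swap in the BBH / MBPP task names as configured locally
    num_fewshot=5,
    limit=50,          # 50 examples per task, matching n=50 above
)
print(results["results"])
```
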
Who it's for

Developers, product teams, and technical founders.

Key features

  • Layer-block duplication ("RYS"-style self-stacking) reproduced on consumer AMD GPUs (RX 7900 XT + RX 6950 XT), with no training and no weight changes.
  • Measured gains on Devstral-24B (BBH Logical Deduction 0.22 → 0.76, GSM8K strict 0.48 → 0.64, MBPP 0.72 → 0.78) and Qwen2.5-Coder-32B (reasoning probe 76% → 94%).
  • Different duplication patterns produce different "modes" from the same weights: double-pass, triple-pass, and interleaved doubling (sketched below).
  • Tools to locate circuits in any GGUF model and apply arbitrary layer routing.
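
For a concrete sense of those routing patterns, here is a rough sketch (not the repo's tooling; the layer indices and model depth are illustrative) that expands each pattern from the post into an explicit layer order, which could feed the duplication snippet above:

```python
# Hypothetical helper: expand a {layer: [replacement sequence]} pattern into a full
# routing order for an n-layer model.
def route(n_layers, pattern):
    order = []
    for i in range(n_layers):
        order.extend(pattern.get(i, [i]))
    return order

N = 40  # illustrative depth for a 24B-class model

# Double-pass over the 12-14 circuit: ...11, 12, 13, 14, 12, 13, 14, 15...
double_pass = route(N, {14: [14, 12, 13, 14]})

# Triple-pass over the same circuit: the block runs three times in a row.
triple_pass = route(N, {14: [14, 12, 13, 14, 12, 13, 14]})

# Interleaved doubling (13,13,14,14,15,15,16): each layer in 13-15 repeats back to back.
interleaved = route(N, {13: [13, 13], 14: [14, 14], 15: [15, 15]})
```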

Use cases

  • Review original launch sources before making adoption decisions.
  • Track community momentum from Product Hunt, GitHub, and Hacker News.
  • Inspect repository activity and release cadence for technical fit.