Cobblemon:Crzbrain
CRZbrain is a Kotlin-based Fabric mod for Minecraft/Cobblemon that gives Pokemon an AI-powered brain featuring reinforcement learning, sentiment analysis, cross-trainer knowledge sharing, and adaptive response generation to make Pokemon interaction


Cobblemon:Crzbrain 2.1.0

Release · 2 months ago

Changelog

CRZbrain v3.7 — Bugfix + Thread Safety + Performance + AI Quality

Thread Safety & Memory Leaks (Batch 1)

TickOptimizationSystem.kt

  • Fix: Replaced global lastWorldEventTick / lastRelationshipTick counters with per-player ConcurrentHashMap<UUID, Long> — previously only the first player in the loop would trigger world events each cycle, causing uneven event distribution across players
  • Fix: Added cleanupPokemon(pokemonId) method to remove stale entries from pokemonDistanceTier map (was never cleaned, leaked ~1KB/Pokemon/hour)
  • Fix: Added cleanupPlayer(playerUuid) method to remove per-player tick counters and cache on disconnect
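The per-player counter plus cleanup pattern above can be sketched as follows. This is a minimal illustration, not the mod's actual code: `shouldFireWorldEvent`, the 200-tick interval, and the map name are assumptions based on the changelog entry.

```kotlin
import java.util.UUID
import java.util.concurrent.ConcurrentHashMap

// Per-player tick counters: one ConcurrentHashMap entry per player instead
// of a single global counter shared across the whole player loop.
val lastWorldEventTick = ConcurrentHashMap<UUID, Long>()

// Fires at most once per `interval` ticks for EACH player independently,
// so every player gets world events, not just the first one in the loop.
fun shouldFireWorldEvent(player: UUID, currentTick: Long, interval: Long = 200L): Boolean {
    val last = lastWorldEventTick[player]
    if (last != null && currentTick - last < interval) return false
    lastWorldEventTick[player] = currentTick
    return true
}

// Called on disconnect so the map does not accumulate stale entries.
fun cleanupPlayer(player: UUID) {
    lastWorldEventTick.remove(player)
}
```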

AdaptiveLearning.kt

  • Fix: Wrapped LRU eviction of trainerPatterns in synchronized block with double-check pattern — concurrent modification could corrupt the map during eviction
  • Fix: Added dead zone handling for success rates between 0.3–0.6 — previously no preference was recorded for ambiguous responses, now both liked/disliked decay toward zero
  • Fix: Changed context matching from exact equals(ignoreCase=true) to fuzzy word-overlap matching (≥50% overlap) — strategies learned for "battle_wild" now also apply to "battle_wild_grass"
  • Fix: Improved duplicate strategy detection key for more reliable dedup
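The fuzzy context match described above might look like the sketch below. The tokenization on underscores/spaces and the way overlap is measured are assumptions; only the ≥50% threshold and the "battle_wild" vs "battle_wild_grass" behavior come from the changelog.

```kotlin
// Word-overlap context matching: two contexts match when at least
// `minOverlap` of the smaller context's words appear in the other.
fun contextMatches(a: String, b: String, minOverlap: Double = 0.5): Boolean {
    val wordsA = a.lowercase().split('_', ' ').filter { it.isNotBlank() }.toSet()
    val wordsB = b.lowercase().split('_', ' ').filter { it.isNotBlank() }.toSet()
    if (wordsA.isEmpty() || wordsB.isEmpty()) return false
    val overlap = wordsA.intersect(wordsB).size
    // Measuring against the smaller set lets a strategy learned for
    // "battle_wild" also apply to the more specific "battle_wild_grass".
    return overlap.toDouble() / minOf(wordsA.size, wordsB.size) >= minOverlap
}
```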

IntelligenceEvolution.kt

  • Fix: Wrapped recentResults ArrayDeque access in synchronized(stats) block — ArrayDeque is not thread-safe, concurrent addLast/removeFirst could throw ConcurrentModificationException
  • Fix: Limited word association pairs to max 8 random samples for large word lists (was O(n×5) linkConcepts calls for every message) — also capped context associations to 3×3 entries
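Guarding a shared `ArrayDeque` with a lock, as described above, can be sketched like this. The class and field names are illustrative (the changelog synchronizes on a `stats` object; a simple wrapper is shown here instead):

```kotlin
// ArrayDeque is not thread-safe: unsynchronized concurrent addLast/removeFirst
// can throw ConcurrentModificationException or corrupt the deque.
class RecentResults(private val capacity: Int = 50) {
    private val recent = ArrayDeque<Boolean>()

    fun record(success: Boolean) {
        synchronized(recent) {
            recent.addLast(success)
            if (recent.size > capacity) recent.removeFirst()
        }
    }

    fun successRate(): Double = synchronized(recent) {
        if (recent.isEmpty()) 0.0 else recent.count { it }.toDouble() / recent.size
    }
}
```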

LearningContextIntegration.kt

  • Fix: Added @Volatile annotation to contextWeightsDirty flag — without it, the JVM could cache the value in a CPU register and the save thread would never see updates from chat threads
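The dirty-flag pattern behind this fix is sketched below; the object and method names are hypothetical. Without `@Volatile` there is no happens-before edge between the chat thread's write and the save thread's read, so the save thread can legally keep seeing a stale `false` forever.

```kotlin
object ContextWeightStore {
    // @Volatile guarantees the save thread observes writes from chat threads;
    // without it the JIT may hoist the read out of the save loop entirely.
    @Volatile
    private var contextWeightsDirty = false

    // Called from chat threads whenever weights change.
    fun markDirty() {
        contextWeightsDirty = true
    }

    // Called from the save thread; runs `save` only when there is new data.
    fun saveIfDirty(save: () -> Unit) {
        if (contextWeightsDirty) {
            save()
            contextWeightsDirty = false
        }
    }
}
```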

DeepUnderstanding.kt

  • Fix: Replaced single-entry LRU eviction with bulk 20% removal when playerCommunicationPatterns exceeds 200 entries — was removing 1 entry per call, causing O(n) overhead every call after cap
  • Fix: Capped context string in learnContextualMeanings to 100 characters — unbounded joinToString was producing very long map keys, wasting memory
  • New: Added saveData() / loadData() persistence for playerCommunicationPatterns and learnedMeanings to config/CRZbrain/learning/deep_understanding.json — previously all learned communication styles and word meanings were lost on server restart
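The bulk-eviction change above can be illustrated with a generic helper. This is a sketch under the assumption that the pattern map is insertion-ordered (a `LinkedHashMap` stands in for access recency here); the cap of 200 and the 20% fraction come from the changelog.

```kotlin
// When the map crosses its cap, drop the oldest ~20% in one pass instead of
// one entry per call, which degenerated into O(n) work on every call.
fun <K, V> evictIfFull(map: LinkedHashMap<K, V>, cap: Int = 200, fraction: Double = 0.2) {
    if (map.size <= cap) return
    val toRemove = (map.size * fraction).toInt().coerceAtLeast(1)
    val iter = map.keys.iterator()  // iterates oldest-first
    repeat(toRemove) {
        if (iter.hasNext()) {
            iter.next()
            iter.remove()
        }
    }
}
```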

CRZbrainMod.kt

  • Integration: Added DeepUnderstanding.loadData() call in SERVER_STARTED handler
  • Integration: Added DeepUnderstanding.saveData() call in SERVER_STOPPING handler
  • Fix: Added TickOptimizationSystem.cleanupPlayer() call on player disconnect alongside existing invalidateCache()

PokemonLifecycleManager.kt

  • Fix: Added TickOptimizationSystem.cleanupPokemon(pokemonId) call in cleanupRamOnlyData() to remove stale distance tier entries when Pokemon are released or removed

Performance (Batch 2)

ReinforcementLearning.kt

  • Fix: Replaced O(n log n) sortByDescending in replay buffer eviction with O(n) minimum priority scan — buffer was being fully sorted on every single add just to remove one lowest-priority entry
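The eviction change reads as follows in sketch form. The `Experience` type and function name are illustrative, not the mod's actual replay-buffer API:

```kotlin
// Hypothetical replay-buffer entry: only the priority matters for eviction.
data class Experience(val id: Int, val priority: Float)

fun addWithEviction(buffer: MutableList<Experience>, e: Experience, cap: Int) {
    buffer.add(e)
    if (buffer.size > cap) {
        // Single O(n) scan for the lowest-priority entry, replacing a full
        // O(n log n) sortByDescending done just to drop one element.
        val worst = buffer.indices.minByOrNull { buffer[it].priority }!!
        buffer.removeAt(worst)
    }
}
```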

ExternalAILearning.kt

  • Fix: Removed unnecessary .copy() calls for event/emotion/topic maps in learnFromResponse() — was duplicating every LearnedResponse 5 times (one per index), now shares same object reference for nature/event/emotion/topic (species keeps copy due to different eviction lifecycle). ~60% memory reduction on learned responses
  • Fix: Changed overuse penalty from linear (count - threshold) * 0.05f to sublinear sqrt(count - threshold) * 0.03f — linear penalty was killing responses after ~20 uses, sqrt curve is much gentler (4 over → -0.06, 16 over → -0.12, 100 over → -0.3)
  • Fix: Capped feedbackScore at [-10, 10] and feedbackCount at 50 — previously accumulated without bound, causing responses with many feedbacks to dominate all quality calculations
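The sublinear penalty and the feedback caps can be sketched together. The coefficients, the [-10, 10] and 50 caps, and the worked values (4 over → 0.06, 16 over → 0.12, 100 over → 0.3) are from the changelog; the threshold of 10 and the function names are assumptions.

```kotlin
import kotlin.math.sqrt

// Sublinear overuse penalty: sqrt grows much slower than the old linear
// (count - threshold) * 0.05f, so popular responses are dampened, not killed.
fun overusePenalty(useCount: Int, threshold: Int = 10): Float {
    val over = (useCount - threshold).coerceAtLeast(0)
    return sqrt(over.toFloat()) * 0.03f
}

// Caps on accumulated feedback, so no single response dominates quality math.
fun clampFeedback(score: Float, count: Int): Pair<Float, Int> =
    score.coerceIn(-10f, 10f) to count.coerceAtMost(50)
```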

AdvancedPatternMining.kt

  • Fix: Replaced message.hashCode().toString() document ID with AtomicLong counter + System.nanoTime() — 32-bit hashCode collisions caused different messages to overwrite each other's topic distributions
  • Fix: Capped initial topic keywords to 50 on creation (was unbounded from long messages)
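The collision-free ID scheme is simple enough to show in full; the function name is illustrative. A monotonically increasing `AtomicLong` already guarantees uniqueness within a run, and the `nanoTime` suffix disambiguates across restarts:

```kotlin
import java.util.concurrent.atomic.AtomicLong

private val docCounter = AtomicLong(0)

// Unlike a 32-bit hashCode, the counter component can never collide, so two
// different messages can no longer overwrite each other's topic distributions.
fun nextDocumentId(): String =
    "${docCounter.incrementAndGet()}_${System.nanoTime()}"
```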

GlobalKnowledgeNetwork.kt

  • Fix: Replaced hash-based strategy ID "${context.hashCode()}_${strategy.hashCode()}" with string concatenation "${context}__${strategy}" — hash collisions caused different strategies to merge
  • Fix: Skip trainer diversity penalty when totalTrainers < 3 — with 1-2 players, diversity ratio was artificially low (0.3-0.5), unfairly penalizing consensus scores
  • Fix: When global knowledge map is full, evict entry with lowest consensus score instead of silently rejecting new knowledge — previously new valid knowledge was dropped once cap was reached
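The evict-lowest-consensus behavior can be sketched as below. The map shape (strategy ID to consensus score) and function name are assumptions for illustration:

```kotlin
// When the knowledge map is at capacity, the weakest entry makes room for
// stronger new knowledge instead of the new entry being silently dropped.
fun insertKnowledge(
    map: MutableMap<String, Float>,  // strategy id -> consensus score
    id: String,
    consensus: Float,
    cap: Int
) {
    if (map.size >= cap && id !in map) {
        val weakest = map.minByOrNull { it.value } ?: return
        if (weakest.value >= consensus) return  // new entry is no better
        map.remove(weakest.key)
    }
    map[id] = consensus
}
```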

AI Quality & Bug Fixes (Batch 3)

SmartResponseGenerator.kt

  • Fix: Removed dead playerMessage != null null check at line 671 — playerMessage is a non-null String parameter, check was always true (compiler warning)
  • Fix: Removed dead playerMessage != null null check at line 2266 — same issue, second occurrence. Both pre-existing compiler warnings are now resolved (0 warnings)
  • Fix: Consolidated double playerMessage.split() into single cached split, reused for both learnConcept and linkConcepts calls
  • Fix: Replaced java.time.LocalDateTime.now().hour fallback with constant 12 (noon) — game time ≠ system time, using real clock for unrecognized timeOfDay strings was producing wrong learning context
  • Fix: Added division-by-zero guard for maxHp in HP percentage calculation — maxHp == 0 (eggs, data glitches) produced Infinity which toInt() converted to Int.MAX_VALUE
  • Fix: Raised learning context quality threshold from > 0.3f to > 0.5f — low-confidence context was degrading response quality by applying unreliable enhancements
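The HP guard above is a one-liner worth spelling out, since the failure mode is subtle: dividing by `maxHp == 0` yields `Infinity`, and `Float.toInt()` saturates that to `Int.MAX_VALUE` rather than throwing. The function name is illustrative.

```kotlin
// Returns HP as a whole-number percentage; eggs and glitched data can report
// maxHp == 0, which must be guarded before dividing.
fun hpPercent(currentHp: Int, maxHp: Int): Int =
    if (maxHp <= 0) 0
    else ((currentHp.toFloat() / maxHp) * 100f).toInt().coerceIn(0, 100)
```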

MoveAnimationSystem.kt

  • Fix: Added distance check startPos.squaredDistanceTo(endPos) < 0.01 before normalize() in playSignatureAnimation — when attacker and target are at the same position, normalizing a zero vector produces NaN components, causing particle positioning to fail silently
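The zero-vector guard can be sketched with a minimal vector stand-in (in the mod this would be Minecraft's `Vec3d`; the helper name is illustrative):

```kotlin
import kotlin.math.sqrt

// Minimal 3D vector stand-in for Minecraft's Vec3d.
data class Vec3(val x: Double, val y: Double, val z: Double) {
    fun subtract(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    fun lengthSquared() = x * x + y * y + z * z
    fun normalize(): Vec3 {
        val len = sqrt(lengthSquared())
        return Vec3(x / len, y / len, z / len)  // NaN components when len == 0
    }
}

// Skip direction math when attacker and target coincide: normalizing a
// zero-length vector yields NaN and silently breaks particle positioning.
fun particleDirection(start: Vec3, end: Vec3): Vec3? {
    val delta = end.subtract(start)
    if (delta.lengthSquared() < 0.01) return null
    return delta.normalize()
}
```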

Files Modified (13)

  1. sensors/TickOptimizationSystem.kt
  2. ai/embedded/AdaptiveLearning.kt
  3. ai/embedded/IntelligenceEvolution.kt
  4. ai/learning/LearningContextIntegration.kt
  5. ai/embedded/DeepUnderstanding.kt
  6. CRZbrainMod.kt
  7. PokemonLifecycleManager.kt
  8. ai/learning/ReinforcementLearning.kt
  9. ai/learning/ExternalAILearning.kt
  10. ai/learning/AdvancedPatternMining.kt
  11. ai/learning/GlobalKnowledgeNetwork.kt
  12. ai/embedded/SmartResponseGenerator.kt
  13. combat/MoveAnimationSystem.kt

New Persistence File

  • config/CRZbrain/learning/deep_understanding.json — stores player communication patterns and per-Pokemon learned word meanings

Files

CRZbrain-2.1.0.jar (2.97 MiB) · Primary

Metadata

Release channel: Release
Version number: 2.1.0
Loaders: Fabric
Game versions: 1.21.1
Downloads: 49
Published: 2 months ago
