Cobblemon:Crzbrain
CRZbrain is a Kotlin-based Fabric mod for Minecraft/Cobblemon that gives Pokemon an AI-powered brain featuring reinforcement learning, sentiment analysis, cross-trainer knowledge sharing, and adaptive response generation that enriches Pokemon interaction.

Cobblemon:Crzbrain 2.1.0

Release · 3 months ago

Changelog

v2.2: Core AI Learning Infrastructure

  1. Adaptive Learning Rate + Epsilon Decay - RL learning rate decreases over time, exploration decays (ReinforcementLearning.kt)
  2. Experience Replay Buffer with Prioritized Sampling - RL replays high-reward episodes more often (ReinforcementLearning.kt)
  3. Bilingual Sentiment IT/EN + Negation Detection - 3D sentiment (VAD) works in Italian and English, detects negation (AdvancedPatternMining.kt)
  4. Quality Decay + Diversity Scoring - External AI responses lose quality over time, diverse responses preferred (ExternalAILearning.kt)
  5. RL-Driven Response Selection - Response parts scored by RL success history (SmartResponseGenerator.kt)
  6. Closed-Loop Feedback Attribution - Player feedback traces back to which response sources caused it (LearningContextIntegration.kt)
  7. Cross-Pokemon Knowledge Transfer - Winning strategies propagate across trainers (GlobalKnowledgeNetwork.kt + AdaptiveLearning.kt)
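Items 1–2 can be sketched in a few lines of Kotlin. The names and decay schedules here are illustrative assumptions, not the mod's actual ReinforcementLearning.kt internals:

```kotlin
import kotlin.random.Random

// Illustrative sketch: learning rate and epsilon shrink as episodes accumulate,
// and a bounded replay buffer re-samples high-reward episodes more often.
data class Episode(val context: String, val action: String, val reward: Double)

// Adaptive learning rate: decays hyperbolically with episode count.
fun learningRate(episodes: Int, base: Double = 0.1): Double =
    base / (1.0 + episodes / 100.0)

// Epsilon decay: exploration shrinks multiplicatively but never below a floor.
fun epsilon(episodes: Int, start: Double = 0.3, floor: Double = 0.05): Double =
    maxOf(floor, start * Math.pow(0.995, episodes.toDouble()))

class ReplayBuffer(private val capacity: Int = 200) {
    private val episodes = ArrayDeque<Episode>()

    fun add(e: Episode) {
        episodes.addLast(e)
        if (episodes.size > capacity) episodes.removeFirst()
    }

    // Prioritized sampling: selection probability is proportional to reward,
    // shifted so the worst episode still has a small chance of being replayed.
    fun sample(rng: Random = Random.Default): Episode? {
        if (episodes.isEmpty()) return null
        val min = episodes.minOf { it.reward }
        val weights = episodes.map { it.reward - min + 0.01 }
        var r = rng.nextDouble() * weights.sum()
        for ((i, w) in weights.withIndex()) {
            r -= w
            if (r <= 0) return episodes[i]
        }
        return episodes.last()
    }
}
```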

v2.3: Critical Bug Fixes + Exploration Overhaul

  1. Move crash fix - null-safety on displayName.string, template access, target.pokemon (PokemonDefenseSystem.kt)
  2. Quality score formula fix - Was cancelling out normalization (LearningContextIntegration.kt)
  3. Policy Gradient fix - Now advantage-based (reward - baseline) instead of broken ln(probability) (ReinforcementLearning.kt)
  4. UCB exploration - Replaces epsilon-greedy in both RL and AdaptiveLearning
  5. Bayesian Beta confidence - Strategy success rates use proper Bayesian estimation (AdaptiveLearning.kt)
  6. Strategy temporal decay - Old strategies gradually lose relevance (AdaptiveLearning.kt)
  7. Markov chain prediction - Predicts trainer's next action from transition probabilities (AdaptiveLearning.kt)
  8. Context weight persistence - Learned source weights saved/loaded as JSON (LearningContextIntegration.kt)
  9. Conflict detection - Detects contradictions between context sources (LearningContextIntegration.kt)
  10. Adaptive enhancement probabilities - contextQuality drives feature activation rates (LearningContextIntegration.kt)
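The UCB exploration in item 4 can be sketched as standard UCB1; everything here (field names, the helper functions) is illustrative, not the mod's real API:

```kotlin
import kotlin.math.ln
import kotlin.math.sqrt

// Illustrative UCB1 sketch: untried actions are explored first; otherwise the
// score is mean reward plus an exploration bonus that shrinks with visits.
data class ActionStats(var tries: Int = 0, var totalReward: Double = 0.0)

fun ucbScore(stats: ActionStats, totalTries: Int, c: Double = 1.414): Double {
    if (stats.tries == 0) return Double.MAX_VALUE
    val mean = stats.totalReward / stats.tries
    return mean + c * sqrt(ln(totalTries.toDouble()) / stats.tries)
}

fun selectAction(actions: Map<String, ActionStats>): String {
    val total = actions.values.sumOf { it.tries }
    return actions.maxByOrNull { ucbScore(it.value, total) }!!.key
}
```

Item 5's Bayesian Beta estimate would replace the raw `mean` with something like `(successes + 1) / (tries + 2)` (a Beta(1, 1) prior), though that exact formula is an assumption.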

v2.4: RL Feedback Loop + Adaptive Probabilities

  1. RL Feedback Loop closed - SRG records responses and detects player feedback (SmartResponseGenerator.kt)
  2. LCI connected to SRG - Was completely dormant, now provides context (SmartResponseGenerator.kt)
  3. Rich RL context - "event_nature_mood" instead of just "event_name" (SmartResponseGenerator.kt)
  4. 6 adaptive gates - Key feature gates use getAdaptiveProbability() instead of hardcoded values (SmartResponseGenerator.kt)
  5. RL exploration from profile - Uses learned epsilon instead of hardcoded 10% (SmartResponseGenerator.kt)
  6. Context quality drives maxParts - 15-30% chance of 2 response parts based on quality (SmartResponseGenerator.kt)
  7. RL scoring boost - Scaled by context quality, up to +25 points (SmartResponseGenerator.kt)
  8. Post-processing with LCI context - Responses enhanced with learning context (SmartResponseGenerator.kt)

v2.5: Terminology Learning

  1. LearnedTerm data class - Tracks frequency, context, decay, anti-parrot flag (LearningSystem.kt)
  2. learnPlayerTerminology() - Extracts words/expressions, filters IT/EN stop words (LearningSystem.kt)
  3. getTopPlayerTerms() - Scored by frequency + recency - decay, minimum frequency 2 (LearningSystem.kt)
  4. Terminology injection - ~20% chance in post-processing (SmartResponseGenerator.kt)
  5. Nature-aware injection - jolly=excited, timid=hesitant, sassy=meta (SmartResponseGenerator.kt)
  6. Multi-word expressions - 2-3 word phrases supported, cap 150 terms/player (LearningSystem.kt)
  7. Persistence - LearningData v4, backward compatible with v3 (LearningSystem.kt)
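The scoring in item 3 (frequency + recency − decay, minimum frequency 2) could look roughly like this; the exact weights and field names are illustrative:

```kotlin
// Hypothetical sketch of the term ranking above: score rises with frequency
// and recency and falls with age since last use.
data class LearnedTerm(val term: String, val frequency: Int, val lastUsedMs: Long)

fun termScore(t: LearnedTerm, nowMs: Long): Double {
    val daysSinceUse = (nowMs - t.lastUsedMs) / 86_400_000.0
    val recency = 1.0 / (1.0 + daysSinceUse)   // 1.0 if just used, decays toward 0
    val decay = daysSinceUse * 0.1             // old terms slowly lose relevance
    return t.frequency + recency - decay
}

fun topTerms(terms: List<LearnedTerm>, nowMs: Long, limit: Int = 5): List<String> =
    terms.filter { it.frequency >= 2 }         // minimum frequency 2, as above
        .sortedByDescending { termScore(it, nowMs) }
        .take(limit)
        .map { it.term }
```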

v2.6: Bug Fix + Dormant Code Activation

  1. LanguageGrowth || → && fix - Expression/catchphrase development was 95% blocked (LanguageGrowth.kt)
  2. LanguageGrowth typo + activation - Fixed "deplorevol mente", activated useAcquiredWord + generateEmotiveExpression (LanguageGrowth.kt)
  3. EmotionalMomentum dynamics - Dynamic momentum by nature, energy/curiosity/confidence/stress influence responses (EmotionalMomentum.kt)
  4. DeepUnderstanding 5x faster - Comprehension reaches 0.85 in 2K messages instead of 100K (DeepUnderstanding.kt)
  5. IntelligenceEvolution inference fix - Fuzzy word matching for deductive, multi-word for inductive (IntelligenceEvolution.kt)
  6. Battle dialogue probability - Nature/friendship-based: brave+max=67%, timid+min=7.5% (BattleEnhancementSystem.kt)
  7. No early return in AdvancedIntelligence - Now flows through ALL post-processing systems (SmartResponseGenerator.kt)
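For item 6, only the two endpoints (brave + max friendship ≈ 67%, timid + min ≈ 7.5%) come from the changelog; the linear blend below is a guess at how such a curve might be built:

```kotlin
// Rough sketch: dialogue chance interpolates between the two documented
// endpoints based on nature boldness (0.0 = timid, 1.0 = brave) and
// friendship (0-255). The blend itself is an assumption.
fun battleDialogueChance(natureBoldness: Double, friendship: Int): Double {
    val f = friendship.coerceIn(0, 255) / 255.0
    val low = 0.075   // timid Pokemon at zero friendship
    val high = 0.67   // brave Pokemon at max friendship
    val blend = (natureBoldness + f) / 2.0
    return low + (high - low) * blend
}
```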

v2.7: TPS Optimization

  1. PokemonDefenseSystem - Interval 5→20 ticks, 1 entity search per player instead of N*M, auto-deactivate (PokemonDefenseSystem.kt)
  2. Async File I/O - MemorySystem.batchSave + RL + PatternMining + CollectiveIntelligence on CompletableFuture.runAsync (CRZbrainMod.kt)
  3. Memory leak fix - 22 AI systems now have reset() and are registered in PokemonLifecycleManager
  4. Chat learning throttle - 2s cooldown + max 3 Pokemon per message (was unlimited) (CRZbrainMod.kt)
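The async I/O pattern in item 2 amounts to pushing the serialization and disk write onto a `CompletableFuture`; this is a minimal sketch with placeholder names, not the mod's actual batchSave:

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import java.util.concurrent.CompletableFuture

// Illustrative sketch: each named payload is written off the server thread.
fun saveAllAsync(data: Map<String, String>, dir: Path): CompletableFuture<Void> =
    CompletableFuture.runAsync {
        Files.createDirectories(dir)
        for ((name, json) in data) {
            Files.writeString(dir.resolve("$name.json"), json)
        }
    }
```

On shutdown the returned future can be bounded with `get(5, TimeUnit.SECONDS)`, which is the shape of the 5-second wait later added in v3.1.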

v2.8: Bug Fix + AI Quality Overhaul (44 fixes)

Critical (10)

  1. RL Experience Replay gradient fix - Was ln(prob), now advantage-based like main path (ReinforcementLearning.kt)
  2. SRG Feedback Loop ordering - Read previous context BEFORE overwriting (was attributing to wrong context) (SmartResponseGenerator.kt)
  3. LCI parser fix - Was parsing wrong format, always returning dummy data (LearningContextIntegration.kt)
  4. LCI Quality Score fix - Removed division by qualityFactors (was ~0.17, now sum ~1.0) (LearningContextIntegration.kt)
  5. EmbeddedAIProvider pokemonId fix - Pass UUID instead of display name (all lookups were failing) (EmbeddedAIProvider.kt)
  6. ExternalAILearning usageCount - val→var + increment (overuse penalty was dead) (ExternalAILearning.kt)
  7. ExternalAILearning 3 collections persisted - learnedByEvent/Emotion/Topic were lost on restart (ExternalAILearning.kt)
  8. GlobalKnowledgeNetwork propagation - propagateWinningStrategy now actually calls AdaptiveLearning.importStrategy (GlobalKnowledgeNetwork.kt)
  9. EmotionalMomentum loadAll() - Called at server start (states were saved but never loaded) (EmotionalMomentum.kt)
  10. SRG confidence/curiosity passthrough - Were silently dropped when calling EmotionalMomentum.applyMomentum (SmartResponseGenerator.kt)
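Item 1's advantage-based update (the same shape as the v2.3 Policy Gradient fix) can be sketched as follows; the class and window size are illustrative, though the 200-reward sliding window matches a later v2.8 note:

```kotlin
// Illustrative sketch: an action's weight moves by
// learningRate * (reward - baseline), where the baseline is the average of a
// sliding window of recent rewards rather than ln(probability).
class PolicyWeights(private val learningRate: Double = 0.1) {
    val weights = mutableMapOf<String, Double>()
    private val recent = ArrayDeque<Double>()

    fun update(action: String, reward: Double) {
        val baseline = if (recent.isEmpty()) 0.0 else recent.average()
        val advantage = reward - baseline
        weights[action] = (weights[action] ?: 0.0) + learningRate * advantage
        recent.addLast(reward)
        if (recent.size > 200) recent.removeFirst()  // sliding window
    }
}
```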

High (14)

  1. AdaptiveLearning temporal decay - Read-time only (was compounding destructively on every update) (AdaptiveLearning.kt)
  2. RL double reward removed - integrateWithAdaptiveLearning no longer calls recordConsequence (ReinforcementLearning.kt)
  3. RL UCB constant - Fixed to 1.414 (was decaying with epsilon, becoming useless) (ReinforcementLearning.kt)
  4. RL plateau detection - Sliding window of last 200 rewards (was cumulative avg, always triggering) (ReinforcementLearning.kt)
  5. LCI word-boundary matching - Feedback indicators use word boundaries (was substring: "sono"/"pokemon" triggered "no"/"ok") (LearningContextIntegration.kt)
  6. APM transition matrix - Stores raw counts, normalizes on-the-fly (AdvancedPatternMining.kt)
  7. LearningSystem action inference - Infers action type from message (question/greeting/farewell/compliment/command/chat) (LearningSystem.kt)
  8. SRG wasSuccessful - Uses real feedback tracking (was always true) (SmartResponseGenerator.kt)
  9. LanguageGrowth nature mapping - All 25 natures mapped to 4 categories + complexity 5 expressions (LanguageGrowth.kt)
  10. SRG evolution keywords - Not filtered during EVOLUTION events (SmartResponseGenerator.kt)
  11. ExternalAILearning separate .copy() - Per collection (feedback was multiplied 5x) (ExternalAILearning.kt)
  12. AdaptiveLearning preferences - Non-destructive (both liked/disliked accumulate independently) (AdaptiveLearning.kt)
  13. EmbeddedAIProvider level fix - {newLevel} off-by-one fixed (context.level is already the new level) (EmbeddedAIProvider.kt)
  14. RL thread-safe replay - Local var instead of mutating shared profile.learningRate (ReinforcementLearning.kt)
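The word-boundary fix in item 5 boils down to tokenizing the message once and checking set membership instead of substring containment; a minimal sketch with illustrative names:

```kotlin
// Split on anything that is not a letter or digit, so indicators only match
// whole words: "sono" no longer triggers "no", "pokemon" no longer triggers "ok".
val WORD_SPLIT = Regex("[^\\p{L}\\p{N}]+")

fun containsIndicator(message: String, indicators: Set<String>): Boolean {
    val words = message.lowercase().split(WORD_SPLIT).toHashSet()
    return indicators.any { it in words }
}
```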

Medium + Optimizations (20)

  1. RL prioritized sampling - Recalculates remaining priority after each selection
  2. RL sliding window baseline - For advantage computation (replaces stale cumulative avg)
  3. APM arousal baseline - 0.3 (was 0.5, wasting half the range)
  4. APM English stopwords - 35 words added to keyword extraction
  5. APM detectIntent - Word-boundary matching (was substring false-positive)
  6. DeepUnderstanding random gate - On output not analysis (detectImplicit, detectSarcasm)
  7. DeepUnderstanding emotionalReading - No longer multiplies intensity (was making emotion path unreachable)
  8. IntelligenceEvolution inference - Single-roll weighted type selection (was biased cascade)
  9. EmotionalMomentum punctuation - Regex-based period/exclamation replacement (sentence-ending only)
  10. ExternalAILearning recencyBonus - Uses creation timestamp (was rich-get-richer on lastUsed)
  11. ExternalAILearning Gson fixup - Post-load fixup for pre-v2.2 data
  12. GlobalKnowledgeNetwork diversity - Uses total trainers (was always 1.0)
  13. SRG thread-safe maps - ConcurrentHashMap for lastEventByPokemon/recentEventsByPokemon
  14. SRG reset() - Cleans all maps (was missing 2+1 maps)
  15. LanguageGrowth faster progression - Lower divisors + Math.round
  16. SRG discard reduced - ContextualResponse 40%→15%
  17. SRG bounded action space - Uses source category instead of response text
  18. APM sentiment cache - LRU eviction when full (was stopping caching at 500)
  19. EmbeddedAIProvider reset() - Added + registered in PokemonLifecycleManager (4 maps cleaned)
  20. EmotionalMomentum reset() - Deletes JSON file (was leaving orphans on disk)
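The LRU eviction in item 18 is a one-liner on the JVM: a `LinkedHashMap` in access order that drops its eldest entry when full, instead of refusing new entries at the cap. A sketch (the cap of 500 matches the limit mentioned above; the generic shape is illustrative):

```kotlin
// Access-ordered LinkedHashMap evicts the least-recently-used entry once the
// map exceeds maxSize, so the cache keeps working after it fills up.
fun <K, V> lruCache(maxSize: Int = 500): MutableMap<K, V> =
    object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>?) =
            size > maxSize
    }
```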

v2.9: Performance Optimization

  1. LCI containsWord→Set check - Split once, Set.contains(), zero Regex per call (was 60+) (LearningContextIntegration.kt)
  2. LCI 4 parser Regex pre-compiled - PARSER_MESSAGE_COUNT, PARSER_MOOD, etc. (LearningContextIntegration.kt)
  3. APM detectIntent rewrite - Set-based word matching (was 20+ Regex per call) (AdvancedPatternMining.kt)
  4. APM sentiment cache LRU - Eviction when full (was stopping caching) (AdvancedPatternMining.kt)
  5. SRG wordBoundary→Set check - Was 24 Regex per call (SmartResponseGenerator.kt)
  6. SRG removed duplicate learning - Was running 17 ops twice per response (SmartResponseGenerator.kt)
  7. SmartConversation simplified - Replaced with recordDirectInteraction only (SmartConversation.kt)
  8. DeepUnderstanding 36 Regex pre-compiled - Object-level vals (was per-message creation) (DeepUnderstanding.kt)
  9. LearningSystem VOCAB_SPLIT_REGEX - Pre-compiled, recordDirectInteraction made public (LearningSystem.kt)
  10. CRZbrainMod isDirectToMe - true for sender's own Pokemon in handlePlayerChat (CRZbrainMod.kt)
  11. LearningSystem size caps - learnedVocabulary(500), associations(300), behaviorPatterns(100), positiveResponseTriggers(50), evidence(20) (LearningSystem.kt)

Impact: ~150 Regex/message → ~5, learning ops halved for responding Pokemon


v3.0: AI Quality Overhaul (Zero CPU Cost)

Group A: Emotional + Intention Scoring

  1. EmotionalMomentum guides selection - Emotional state influences WHICH response is chosen, not just post-processing (SmartResponseGenerator.kt)
  2. DeepUnderstanding moved before selection - analyzeMessage runs BEFORE scoring, intent guides response choice (SmartResponseGenerator.kt)
  3. Intention-based scoring - QUESTION→answers, PRAISE→thanks, CRITICISM→apologies, AFFECTION→emotes (SmartResponseGenerator.kt)

Group B: Waste Reduction

  1. Score-based rejection - topScore>=20 always passes, else 75% (was 50% random discard) (SmartResponseGenerator.kt)
  2. RL boost 2.5x stronger - Formula (40+ctx20) range 0-60 (was 20+ctx5, range 0-25) (SmartResponseGenerator.kt)
  3. IQ-based comment chance - 15-35% based on IQ (was fixed 15%) (SmartResponseGenerator.kt)
  4. Pattern-based recall chance - 10-25% based on patternRecognition (was fixed 10%) (SmartResponseGenerator.kt)
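Item 1's gate fits in one expression; the thresholds come from the changelog, the function name is illustrative:

```kotlin
import kotlin.random.Random

// Score-based rejection: strong candidates (score >= 20) always pass; weaker
// ones pass 75% of the time, replacing the old flat 50% coin flip.
fun shouldRespond(topScore: Int, rng: Random = Random.Default): Boolean =
    topScore >= 20 || rng.nextDouble() < 0.75
```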

Group C: Adaptive Gates

  1. 7 more gates adaptive - species_behavior, social_comment, battle_style, proactive_comment, help_offer, rank_dialogue, inter_member (SmartResponseGenerator.kt)
  2. Terminology + DeepUnderstanding adaptive - Both injection gates use getAdaptiveProbability() (SmartResponseGenerator.kt)

Group D: Transitions + Feedback Loop

  1. Content-based transitions - combineResponsesLogically uses content analysis instead of random (SmartResponseGenerator.kt)
  2. Feature source tracking - activeFeatureSources feeds into recordResponseAttribution for feedback (SmartResponseGenerator.kt)

Group E: Complexity-Aware Depth

  1. getComplexityLevel() added - Public function in LanguageGrowth (LanguageGrowth.kt)
  2. High-complexity Pokemon depth - complexity>=4 with high abstractThinking get extra intelligent comments (SmartResponseGenerator.kt)

v3.1: Stability + Performance + Dormant AI Activation

Group A: Stability/Persistence (6 fixes)

  1. Shutdown async wait - Server waits up to 5s for async save to finish before shutdown saves (CRZbrainMod.kt)
  2. ExternalAILearning.loadData() at startup - Was missing, 3 collections not loaded (CRZbrainMod.kt)
  3. Atomic write - LearningSystem writes to .tmp then renames, prevents corruption on crash (LearningSystem.kt)
  4. isDirty timing verified - CollectiveIntelligence was already correct, added clarifying comments (CollectiveIntelligence.kt)
  5. learnedClosers persisted - Now saved/loaded as learned_closers.json (was lost every restart) (ExternalAILearning.kt)
  6. recentEpisodeRewards verified - Already persisted via Gson in RLProfile data class (ReinforcementLearning.kt)
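The atomic write in item 3 is the classic write-then-rename pattern; a sketch with illustrative names (note that `ATOMIC_MOVE` support is filesystem-dependent):

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.StandardCopyOption

// Write to a .tmp sibling in the same directory, then rename over the target,
// so a crash mid-write never leaves a half-written file behind.
fun writeAtomically(target: Path, content: String) {
    val tmp = target.resolveSibling(target.fileName.toString() + ".tmp")
    Files.writeString(tmp, content)
    Files.move(
        tmp, target,
        StandardCopyOption.REPLACE_EXISTING,
        StandardCopyOption.ATOMIC_MOVE
    )
}
```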

Group B: Performance (13 fixes)

  1. IntelligenceEvolution 17 Regex pre-compiled - detectEmotionalPattern(5), detectRequestPattern(4), categorizeMessage(8) (IntelligenceEvolution.kt)
  2. 4x removeAt(0) → ArrayDeque.removeFirst() - O(1) instead of O(n) in SRG, APM, EmbeddedAIProvider(2x)
  3. Outer map cap 200 - EmbeddedAIProvider trimOuterMapsIfNeeded() prevents unbounded growth (EmbeddedAIProvider.kt)
  4. SRG CLEAN_ACTION_REGEX - Pre-compiled Regex in scoring loop (SmartResponseGenerator.kt)
  5. AIHandler 2 retry Regex - CLAUDE_RETRY_REGEX + RETRY_DELAY_REGEX pre-compiled (AIHandler.kt)
  6. ContextualMemoryBridge 2 Regex - DIGIT_REGEX + INTENSITY_REGEX pre-compiled (ContextualMemoryBridge.kt)
  7. ExternalAILearning 3 Regex - SENTENCE_SPLIT_REGEX + WORD_SPLIT_REGEX + ASTERISK_EXPRESSION_REGEX pre-compiled (ExternalAILearning.kt)
  8. AdvancedPersonalitySystem 1 Regex - TOKEN_SPLIT_REGEX pre-compiled (AdvancedPersonalitySystem.kt)

Group C: Dormant AI Activation (3 fixes)

  1. predictNextAction activated - Markov chain predictions enrich RL context before response selection (SmartResponseGenerator.kt ← AdvancedPatternMining.kt)
  2. validatePrediction activated - Predictions validated on next player message, closes feedback loop (SmartResponseGenerator.kt ← AdvancedPatternMining.kt)
  3. Cross-Pokemon RL transfer - New Pokemon (<10 episodes) get +15 score boost from global top actions; integrateWithAdaptiveLearning syncs every 50 episodes (SmartResponseGenerator.kt ← ReinforcementLearning.kt)
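The Markov prediction activated in items 1–2 can be sketched as a transition-count table that stores raw counts and normalizes on read (the shape of the v2.8 transition-matrix fix); all names here are illustrative:

```kotlin
// Illustrative Markov chain over player actions: record observed transitions,
// then predict the most likely next action from the current one.
class TransitionModel {
    private val counts = mutableMapOf<String, MutableMap<String, Int>>()

    fun record(from: String, to: String) {
        val row = counts.getOrPut(from) { mutableMapOf() }
        row[to] = (row[to] ?: 0) + 1
    }

    // Most frequent successor, or null if this action was never observed.
    fun predictNext(current: String): String? =
        counts[current]?.maxByOrNull { it.value }?.key

    // Raw counts are normalized on the fly, never stored as probabilities.
    fun probability(from: String, to: String): Double {
        val row = counts[from] ?: return 0.0
        val total = row.values.sum()
        return if (total == 0) 0.0 else (row[to] ?: 0) / total.toDouble()
    }
}
```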

Files

CRZbrain-2.1.0.jar (2.90 MiB)
Primary

Metadata

Release channel

Release

Version number

2.1.0

Loaders

Fabric

Game versions

1.21.1

Downloads

26

Published

3 months ago
