Developmental Federated Tuning: A Cognitive-Inspired Paradigm for Efficient LLM Adaptation
Yebo Wu · Jingguang Li · Zhijiang Guo · Li Li
Abstract
Federated fine-tuning enables Large Language Models (LLMs) to adapt to downstream tasks while preserving data privacy, but its resource-intensive nature limits deployment on edge devices. In this paper, we introduce Developmental Federated Tuning (DevFT), a resource-efficient approach inspired by cognitive development that progressively builds a powerful LLM from a compact foundation. DevFT decomposes the fine-tuning process into developmental stages, each optimizing a submodel with increasing parameter capacity. Knowledge acquired in earlier stages is transferred to subsequent submodels, providing optimized initialization parameters that prevent convergence to local minima and accelerate training. This paradigm mirrors human learning, gradually constructing a comprehensive knowledge structure while refining existing skills. To efficiently build stage-specific submodels, DevFT introduces deconfliction-guided layer grouping and differential-based layer fusion to distill essential information and construct representative layers. Evaluations across multiple benchmarks demonstrate that DevFT significantly outperforms state-of-the-art methods, achieving up to $4.59\times$ faster convergence, a $10.67\times$ reduction in communication overhead, and a 9.07% average performance improvement, while maintaining compatibility with existing approaches. The code is submitted with the paper for reproducibility.
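To make the staged construction concrete, the following is a minimal single-process sketch of the developmental schedule described above: layers of a full backbone are grouped, each group is fused into one representative layer to form a compact submodel, and each stage's trained weights warm-start the next, larger submodel. The abstract does not specify the internals of deconfliction-guided grouping or differential-based fusion, so this sketch substitutes contiguous grouping and parameter averaging as hypothetical stand-ins; all dimensions and stage sizes are illustrative.

```python
# Hedged sketch of DevFT-style staged submodel construction (not the
# authors' implementation): contiguous grouping and parameter-mean
# fusion stand in for deconfliction-guided grouping and
# differential-based fusion, whose details are not in the abstract.
import copy
import torch
import torch.nn as nn

def fuse_group(layers: nn.ModuleList) -> nn.Module:
    # Build one representative layer by averaging the parameters of all
    # layers in the group (assumption: simple mean as a fusion stand-in).
    fused = copy.deepcopy(layers[0])
    with torch.no_grad():
        for name, p in fused.named_parameters():
            p.copy_(torch.stack(
                [dict(l.named_parameters())[name] for l in layers]).mean(0))
    return fused

def build_submodel(full_layers: nn.ModuleList, n_groups: int) -> nn.ModuleList:
    # Contiguous depth-wise grouping; DevFT's deconfliction-guided
    # grouping would instead choose groups to avoid conflicting updates.
    size = len(full_layers) // n_groups
    return nn.ModuleList(
        fuse_group(full_layers[i * size:(i + 1) * size])
        for i in range(n_groups))

def transfer(prev: nn.ModuleList, nxt: nn.ModuleList) -> None:
    # Initialize each layer of the larger submodel from the trained
    # previous-stage layer covering the same depth range, so later
    # stages start from the knowledge acquired earlier.
    ratio = len(nxt) // len(prev)
    for i, layer in enumerate(nxt):
        layer.load_state_dict(prev[i // ratio].state_dict())

# Toy 16-layer backbone (hypothetical size and architecture).
full = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    for _ in range(16))

stages = [4, 8, 16]  # developmental stages of increasing capacity
prev = None
for n in stages:
    sub = build_submodel(full, n)
    if prev is not None:
        transfer(prev, sub)  # warm-start from the earlier stage
    # ... federated fine-tuning of `sub` across clients would run here ...
    prev = sub
```

Because each early stage trains and communicates only a handful of representative layers, per-round client compute and upload cost shrink accordingly, which is consistent with the convergence and communication savings the abstract reports.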