Abstract: We present a framework for improving shared autonomy in teleoperated robotic systems performing complex assembly tasks. The framework addresses core challenges including operator cognitive load, discontinuous contact dynamics, and a restricted field of view. By integrating intention prediction, task decomposition, context-aware supporting actions that adapt in real time to user behaviour and the environment, and region-based modular assignment of control strategies, the system enables the robot to assist users seamlessly while prioritising their inputs and minimising disruption. In addition, we aim to generalise across users and task types, enhancing robustness, efficiency, and ease of use in teleoperated assembly scenarios.