Collective adaptive systems (CAS) often adopt cooperative operating strategies to run distributed decision-making mechanisms. Their effectiveness frequently depends heavily on the collaborative nature of individuals' behavior. Stimulating cooperation while preventing selfish and malicious behaviors is the main objective of trust and reputation models. These models are widely used in distributed, peer-to-peer environments and therefore represent an ideal framework for improving the robustness, as well as the security, of CAS. In this article, we propose a formal framework for modeling and verifying trusted CAS. From the modeling perspective, mobility, adaptiveness, and trust-based interaction are the main ingredients used to define a f...