This article addresses the problem of finding suitable agents to collaborate with for a given interaction in distributed open systems, such as multiagent and P2P systems. The agent in question is given the chance to describe its confidence in its own capabilities. However, since agents may be malicious or misinformed, or may suffer from miscommunication, one also needs to calculate how much that agent can be trusted. This article proposes a novel trust model that computes the expectation about an agent's future performance in a given context by assessing both the agent's willingness and its capability through a semantic comparison of the current context with the contexts of the agent's past similar experiences. The proposed mechanism...
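To make the idea concrete, the following is a minimal sketch of how such a similarity-weighted expectation could be computed. It is not the paper's actual model: Jaccard overlap of context descriptors stands in for the semantic comparison, the `Experience` record, the blending of evidence with the agent's self-reported confidence, and the `reliance` discount rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    context: set[str]   # descriptors of the past interaction's context (assumed representation)
    outcome: float      # observed performance in [0, 1]

def context_similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap as a crude stand-in for semantic context comparison."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def expected_performance(current: set[str],
                         history: list[Experience],
                         self_confidence: float) -> float:
    """Blend the agent's self-reported confidence with a
    similarity-weighted average of its past outcomes."""
    weights = [context_similarity(current, e.context) for e in history]
    total = sum(weights)
    if total == 0:
        # No comparable past experience: fall back on the agent's own claim.
        return self_confidence
    evidence = sum(w * e.outcome for w, e in zip(weights, history)) / total
    # Hypothetical rule: rely on past evidence in proportion to how much
    # similar experience is available, otherwise lean on the self-report.
    reliance = total / (total + 1)
    return reliance * evidence + (1 - reliance) * self_confidence

# Example: an agent claims confidence 0.8 for an English-to-French translation task.
history = [
    Experience({"translation", "english", "french"}, 0.9),
    Experience({"translation", "english", "german"}, 0.6),
]
print(expected_performance({"translation", "english", "french"}, history, 0.8))
```

The key design point this sketch illustrates is that past outcomes contribute in proportion to how semantically close their contexts are to the current one, so an agent's strong record in dissimilar tasks inflates the trust estimate far less than a comparable record in similar tasks.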