As collaborative agents are deployed within everyday environments and the workforce, user trust in these agents becomes a critical consideration. Trust affects user decision making, rendering it an essential component of successful Human-Agent Collaboration (HAC) design. The purpose of this work is to investigate the relationship between user trust and decision making, with the overall aim of providing a trust calibration methodology that achieves the goals and optimises the outcomes of HAC. Recommender systems are used as a testbed for investigation, offering insight into human-agent collaboration in dyadic decision-making domains. Four studies are conducted, comprising in-person, online, and simulation experiments. The first study provides evidence of a relationship between user perception of a collaborative agent and trust. Outcomes of the second study demonstrate that initial trust can be used to predict task outcome during HAC, with Signal Detection Theory (SDT) introduced as a method to interpret user decision making in-task. The third study provides evidence that implementing different features within a single agent's interface influences user perception and trust, subsequently impacting the outcomes of HAC. Finally, a computational trust calibration methodology harnessing a Partially Observable Markov Decision Process (POMDP) model and SDT is presented and assessed, providing an improved understanding of the mechanisms governing user trust and its relationship with decision making and collaborative task performance during HAC. The contributions of this work address important gaps in the HAC literature. The implications of the proposed methodology and its application to alternative domains are identified and discussed.
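To make the SDT framing concrete: interpreting a user's in-task decisions under SDT amounts to computing sensitivity (d′) and response criterion (c) from their accept/reject outcomes. The sketch below is a minimal, generic illustration of those standard SDT measures; the function name and the log-linear correction are illustrative choices, not taken from the studies described above.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute SDT sensitivity (d') and response criterion (c)
    from counts of a user's decisions (e.g. accepting good
    recommendations vs. accepting poor ones)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    # Log-linear correction keeps z finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)          # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive = conservative
    return d_prime, criterion
```

For example, a user who accepted 40 of 50 good recommendations but only 5 of 50 poor ones would show high sensitivity and a slightly conservative criterion, a pattern that a trust calibration method can act on.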