As one of the earliest crucial applications of Bayesian statistics, credibility theory (see Bühlmann and Gisler, 2006) was first developed for net premium calibration in insurance by optimally combining an individual's claim history with that of the whole population; over the past century, this research direction has grown into a major discipline at the interplay of actuarial science, operational research and statistics. Traditionally, for ease of calibration, the credibility formula is a linear functional of the historical observations, which greatly reduces the underlying computational complexity; its downside, however, is high sensitivity to outliers. To remedy this shortcoming, De Vylder (1976) proposed first transforming the observations, in particular by truncation, and this semi-linear approach was further investigated in Bühlmann and Gisler (2006). Gisler (1980) suggested that the L2-optimal truncation point can be determined in an ad hoc manner, but deriving a general explicit formula for it is difficult. In this talk, to strike a balance between practical usage and mathematical tractability, we focus on heterogeneous risks coming from possibly different maximum domains of attraction (MDAs) of the extreme value distributions, which suffices for most practical purposes. By incorporating the satisficing method commonly used in operational research, we close the gap by providing an explicit formula for the aforementioned optimal truncation point, up to a slowly varying function of the sample size, in an asymptotic sense. A comprehensive numerical study also shows that, with the aid of this newly obtained truncation point, the corresponding semi-linear credibility formula outperforms the classical Bühlmann model. This is joint work with K.C. Cheung (HKU) and Phillip Yam (CUHK).
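
To fix ideas, here is a minimal numerical sketch of the two estimators contrasted in the abstract: the classical Bühlmann linear credibility premium, and a De Vylder-style semi-linear variant that truncates each observation at a point M before applying the same linear formula. This is an illustration under standard textbook assumptions, not the authors' method; the function names, the nonparametric variance estimators, and the small floor on the between-risk variance are choices made here for the sketch.

```python
import numpy as np

def buhlmann_premiums(X):
    """Classical Bühlmann credibility premiums.

    X : (num_risks, num_years) array of observed claims.
    Returns one premium per risk: Z * individual mean + (1 - Z) * collective mean,
    with credibility factor Z = n / (n + s2 / a).
    """
    _, n = X.shape
    risk_means = X.mean(axis=1)
    grand_mean = risk_means.mean()
    # s2: expected within-risk (process) variance, estimated nonparametrically
    s2 = X.var(axis=1, ddof=1).mean()
    # a: variance of the hypothetical means, bias-corrected; floored to stay positive
    a = max(risk_means.var(ddof=1) - s2 / n, 1e-12)
    Z = n / (n + s2 / a)
    return Z * risk_means + (1 - Z) * grand_mean

def semilinear_premiums(X, M):
    """De Vylder-style semi-linear variant: truncate claims at M, then
    apply the same linear credibility formula to the truncated data."""
    return buhlmann_premiums(np.minimum(X, M))
```

Each premium lies between the risk's own average and the collective average, and for a truncation point above all observations the semi-linear variant coincides with the classical formula; the abstract's contribution concerns how to choose M well when heavy-tailed outliers are present.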