Abstract: Disputes over transactions on two-sided platforms are common. Traditionally, platforms rely on their customer service teams or third-party service providers to resolve such disputes. In this paper, we study crowd-judging, a novel crowdsourcing mechanism in which users (i.e., buyers and sellers) on a two-sided platform volunteer as jurors to resolve disputes. While this mechanism improves cost-efficiency and speeds up resolution, there are concerns that jurors may exhibit ingroup bias -- that is, buyer jurors favoring the buyer in a dispute and seller jurors favoring the seller. Because the vast majority of users on such platforms are buyers, such bias could systematically sway case outcomes in favor of buyers. Using a proprietary dataset from the crowd-judging centre of Taobao, this paper provides the first empirical analysis of crowd-judging, with a focus on quantifying ingroup bias. In our dataset, buyer jurors account for 88 percent of the active jury pool and cast more than 90 percent of votes. Inexperienced jurors exhibit statistically significant ingroup bias: the probability that a seller juror votes in favor of the seller is approximately 9 percent higher than that of a buyer juror with similar experience. This bias, however, disappears among experienced jurors. Under a majority-rule system in which experienced jurors cast more than 75 percent of the votes, we estimate that ingroup bias could influence the outcomes of at most 12 percent of cases.