The goal of a collaborative CE project is to create a benchmark dataset for a research field. This idea is inspired by the critical role such datasets have played in AI over the past decades—for example, ImageNet, which has greatly advanced object recognition research.
A collaborative CE project shares some similarities with adversarial collaboration but differs in key aspects:
Similarity: Like adversarial collaboration, it brings together researchers with differing perspectives to agree on a shared methodological approach.
Difference (Pre-Data Collection): Unlike adversarial collaboration, we will not define specific hypotheses or predictions beforehand. Instead, the focus will be on identifying the most informative experimental manipulations and measures. This reduces the burden of upfront theoretical specification.
Difference (Post-Data Collection): Unlike adversarial collaboration, we will not seek a mutually agreed-upon interpretation, model, or theory for the benchmark dataset. Instead, each contributor may independently develop and publish their own interpretations. This approach removes the need for premature consensus, which often discourages collaborators from tackling the most challenging questions. Moreover, after a protection period, the dataset will be made openly accessible. Therefore, researchers—including but not limited to the original contributors—can propose and test competing models or theories. Over time, the most robust solutions will emerge through fair, community-driven scrutiny—a strategy that has proven highly effective in AI and other data-intensive disciplines.
In short, this collaborative CE approach shifts from the traditional theory-driven design of experimental psychology to a data-driven one. Researchers accustomed to the former may find this unsatisfying. However, it may ultimately prove the better path, as demonstrated by ImageNet and other benchmark datasets in AI (see further discussion).
Currently, one collaborative CE project is underway: