autoFC: Automatic Construction of Forced-Choice Tests
Forced-choice (FC) response formats have gained increasing popularity
    for their resistance to faking when well designed (Cao &
    Drasgow, 2019 <doi:10.1037/apl0000414>). To establish well-designed
    FC scales, each item within a block should typically measure a different
    trait and have a similar level of social desirability (Zhang et al.,
    2020 <doi:10.1177/1094428119836486>). Recent research also suggests the
    importance of high inter-item agreement on social desirability between
    items within a block (Pavlov et al., 2021 <doi:10.31234/osf.io/hmnrc>). 
    In addition to this, FC developers may
    also need to maximize factor loading differences (Brown &
    Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) or minimize item
    location differences (Cao & Drasgow, 2019 <doi:10.1037/apl0000414>)
    depending on the scoring model. The decision of which items should be
    assigned to the same block, termed item pairing, is thus critical to
    the quality of an FC test. This pairing process is essentially an
    optimization problem that is currently carried out manually. However,
    because multiple objectives often need to be met simultaneously,
    manual pairing becomes impractical or even infeasible once the
    number of latent traits and/or the number of items per trait is
    relatively large. To address these problems, autoFC was developed as a
    practical tool for automating the construction of FC tests
    (Li et al., 2022 <doi:10.1177/01466216211051726>), freeing users
    from the burden of manual item pairing and reducing the
    computational costs and biases induced by simple ranking methods.
    Given the characteristics of each item (and item responses), FC measures can
    be constructed either automatically based on user-defined pairing criteria
    and weights, or from exact specifications of each block (i.e., a blueprint;
    see Li et al., 2024 <doi:10.1177/10944281241229784>). Users can also 
    generate simulated responses based on the Thurstonian Item Response Theory 
    (TIRT) model (Brown & Maydeu-Olivares, 2011 <doi:10.1177/0013164410375112>) and 
    predict trait scores of simulated or actual respondents from 
    an estimated model.
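
As an illustration of the automatic pairing workflow described above, the sketch below builds triplet blocks from simulated item characteristics using simulated annealing. It is a minimal sketch: the item characteristics are made up for demonstration, and the function and argument names (make_random_block(), cal_block_energy(), sa_pairing_generalized(), and the names of the returned elements) are assumed from earlier autoFC documentation; consult the package help pages for the exact interface of the installed version.

```r
library(autoFC)

set.seed(2024)

## Hypothetical characteristics for 60 items measuring 5 traits:
## trait membership, factor loading, item location, and a mean
## social desirability rating (all values simulated for illustration).
item_chars <- data.frame(
  Factor       = factor(rep(paste0("Trait", 1:5), each = 12)),
  Loading      = runif(60, 0.4, 0.9),
  Location     = rnorm(60),
  Desirability = rnorm(60, mean = 3.5, sd = 0.5)
)

## Start from a random assignment of the 60 items into 20 triplet blocks.
initial_blocks <- make_random_block(total_items = 60, item_per_block = 3)

## "Energy" (pairing quality) of the initial assignment, with one weight
## per item characteristic column.
cal_block_energy(block = initial_blocks,
                 item_chars = item_chars,
                 weights = c(1, 1, 1, 1))

## Optimize the pairing with simulated annealing.
paired <- sa_pairing_generalized(block = initial_blocks,
                                 total_items = 60,
                                 Temperature = 100,   ## assumed argument name
                                 item_chars = item_chars,
                                 weights = c(1, 1, 1, 1))

paired$block_final    ## optimized item-to-block assignment (element name assumed)
paired$energy_final   ## pairing quality of the optimized solution
```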
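
For simulating TIRT responses and scoring respondents, autoFC builds on the thurstonianIRT package (listed in Imports). The sketch below uses thurstonianIRT directly as a stand-in for autoFC's own simulation and scoring helpers, whose exact function names are not shown on this page; the design values (numbers of persons, traits, and blocks, and the loading ranges) are arbitrary.

```r
library(thurstonianIRT)

set.seed(2024)

## Simulate responses for 12 items (3 traits x 4 blocks per trait,
## triplet blocks) under the Thurstonian IRT model.
sim_data <- sim_TIRT_data(
  npersons          = 200,
  ntraits           = 3,
  nblocks_per_trait = 4,
  nitems_per_block  = 3,
  gamma             = 0,
  lambda            = c(runif(6, 0.5, 1), runif(6, -1, -0.5)),
  Phi               = diag(3)
)

## Estimate the TIRT model with lavaan and predict latent trait scores.
fit <- fit_TIRT_lavaan(sim_data)
trait_scores <- predict(fit)
head(trait_scores)
```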
| Version: | 0.2.0.1002 | 
| Depends: | R (≥ 2.10) | 
| Imports: | dplyr, irrCAC, lavaan, MASS, SimDesign, thurstonianIRT, MplusAutomation, glue, tidyr | 
| Suggests: | knitr, rmarkdown | 
| Published: | 2025-03-13 | 
| DOI: | 10.32614/CRAN.package.autoFC | 
| Author: | Mengtong Li [cre, aut], Tianjun Sun [aut], Bo Zhang [aut] | 
| Maintainer: | Mengtong Li  <ml70 at illinois.edu> | 
| BugReports: | https://github.com/tspsyched/autoFC/issues | 
| License: | GPL-3 | 
| URL: | https://github.com/tspsyched/autoFC | 
| NeedsCompilation: | no | 
| Citation: | autoFC citation info | 
| Materials: | README | 
| CRAN checks: | autoFC results | 
Linking:
Please use the canonical form https://CRAN.R-project.org/package=autoFC to link to this page.