Deviance

\[D = - 2 \log{\Lambda}= - 2 \log \left( \frac{L(\widehat{\theta}_0 | x)}{L(\widehat{\theta}_s | x)}\right) = 2 \sum_{i=1}^{M} \left( n_i y_i \log \left( \frac{y_i}{p_i} \right) + n_i(1-y_i) \log \left(\frac{1-y_i}{1-p_i} \right) \right)\]

Demonstration: …
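A sketch of one way to obtain this expression (writing \(k_i = n_i y_i\) for the number of successes in condition \(i\) of \(M\)): \(\widehat{\theta}_0\) denotes the fitted model, with predicted probabilities \(p_i\), and \(\widehat{\theta}_s\) the saturated model, which has one free probability per condition and is maximized at the observed proportions \(y_i\). Up to binomial coefficients, which cancel in the ratio,

\[\log L(\widehat{\theta}_0 | x) = \sum_{i=1}^{M} \left( k_i \log p_i + (n_i - k_i) \log (1 - p_i) \right), \quad \log L(\widehat{\theta}_s | x) = \sum_{i=1}^{M} \left( k_i \log y_i + (n_i - k_i) \log (1 - y_i) \right)\]

so that

\[D = 2 \left( \log L(\widehat{\theta}_s | x) - \log L(\widehat{\theta}_0 | x) \right) = 2 \sum_{i=1}^{M} \left( k_i \log \frac{y_i}{p_i} + (n_i - k_i) \log \frac{1 - y_i}{1 - p_i} \right),\]

which is the expression above after substituting \(k_i = n_i y_i\).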

Example

n <- 100                         # number of trials at each stimulus level
x <- c(.2, .4, .6, .8, 1)        # stimulus levels
k <- c(10, 26, 73, 94, 97)       # number of successes at each level
r <- n - k                       # number of failures
y <- k/n                         # observed proportion of successes
dat <- data.frame(x, y, k, r, n)

Using quickpsy

library(quickpsy)
fit <- quickpsy(dat, x, k, n, B = 3000)
fit$deviance
##   deviance     p
## 1 6.662395 0.088

quickpsy uses the bootstrap to calculate the p-value of the deviance. First, it uses the parametric model to generate \(B\) bootstrap samples. For each bootstrap sample, it estimates the parameters that best fit the fake data and calculates the fake deviance. From the distribution of fake deviances, it calculates the probability of obtaining a deviance larger than the one calculated for the original data.
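As a minimal sketch of that procedure (this is not quickpsy's internal code; it assumes the same probit model that is fitted with glm below, and the object names are just for illustration):

set.seed(1)
B <- 3000
m0 <- glm(cbind(k, r) ~ x, data = dat, family = binomial(probit))
p_hat <- fitted(m0)                       # predicted probabilities of the fitted model
dev_obs <- deviance(m0)                   # deviance for the original data
dev_boot <- replicate(B, {
  k_star <- rbinom(length(p_hat), dat$n, p_hat)        # generate fake data from the model
  m_star <- glm(cbind(k_star, dat$n - k_star) ~ x,
                data = dat, family = binomial(probit)) # refit to the fake data
  deviance(m_star)                                     # fake deviance
})
mean(dev_boot > dev_obs)                  # p-value: proportion of fake deviances above the observed one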

Using Wilks' theorem, which says that the deviance is asymptotically chi-squared distributed with degrees of freedom equal to the number of conditions minus the number of model parameters (here \(5 - 2 = 3\)), we can obtain a similar p-value:

1 - pchisq(fit$deviance$deviance, length(k) - 2)  # df = number of conditions - 2 parameters of the psychometric function
## [1] 0.08347343

Using glm

model <- glm(cbind(k, r) ~ x, data = dat, family = binomial(probit))
model
## 
## Call:  glm(formula = cbind(k, r) ~ x, family = binomial(probit), data = dat)
## 
## Coefficients:
## (Intercept)            x  
##      -2.279        4.572  
## 
## Degrees of Freedom: 4 Total (i.e. Null);  3 Residual
## Null Deviance:       304.4 
## Residual Deviance: 6.662     AIC: 30.9
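The same chi-square p-value as before can be obtained directly from the fitted object (a sketch, using the residual deviance and residual degrees of freedom stored in the fit):

1 - pchisq(deviance(model), df.residual(model))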

Using modelfree

library(modelfree)
ypred <- predict(model, data.frame(x = x), type = 'response')
deviance2(dat$k, dat$n, ypred)
## [1] 6.662394
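As a check, the same value can be computed directly from the formula at the top. This is just a sketch; xlogy is a hypothetical helper that guards against \(0 \log 0\) terms (which do not occur in this data set):

xlogy <- function(a, b) ifelse(a == 0, 0, a * log(a / b))  # returns 0 when a == 0
2 * sum(xlogy(dat$k, dat$n * ypred) +
        xlogy(dat$n - dat$k, dat$n * (1 - ypred)))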