11.3.2 Probability estimates in theory
For each term $t$, what would these $c_t$ numbers look like for the whole collection? (11.19) gives a contingency table of counts of documents in the collection, where $\mathrm{df}_t$ is the number of documents that contain term $t$:

(11.19)

|              | documents | relevant | nonrelevant                     | Total               |
| Term present | $x_t = 1$ | $s$      | $\mathrm{df}_t - s$             | $\mathrm{df}_t$     |
| Term absent  | $x_t = 0$ | $S - s$  | $(N - \mathrm{df}_t) - (S - s)$ | $N - \mathrm{df}_t$ |
|              | Total     | $S$      | $N - S$                         | $N$                 |

Using this, $p_t = s/S$ and $u_t = (\mathrm{df}_t - s)/(N - S)$ and

(11.20)
$$c_t = K(N, \mathrm{df}_t, S, s) = \log \frac{s/(S - s)}{(\mathrm{df}_t - s)/((N - \mathrm{df}_t) - (S - s))}$$
To avoid the possibility of zeroes (such as if every or no relevant document has a particular term) it is fairly standard to add $\frac{1}{2}$ to each of the quantities in the four center cells of (11.19), and then to adjust the marginal counts (the totals) accordingly (so, the bottom right cell totals $N + 2$). Then we have:
(11.21)
$$\hat{c}_t = K(N, \mathrm{df}_t, S, s) = \log \frac{(s + \frac{1}{2})/(S - s + \frac{1}{2})}{(\mathrm{df}_t - s + \frac{1}{2})/(N - \mathrm{df}_t - S + s + \frac{1}{2})}$$
Adding $\frac{1}{2}$ in this way is a simple form of smoothing. For trials with categorical outcomes (such as noting the presence or absence of a term), one way to estimate the probability of an event from data is simply to count the number of times the event occurred divided by the total number of trials. This is referred to as the relative frequency of the event. Estimating the probability as the relative frequency is the maximum likelihood estimate (or MLE), because this value makes the observed data maximally likely. However, if we simply use the MLE, then the probability given to events we happened to see is usually too high, whereas other events may be completely unseen; giving them their relative frequency of 0 as a probability estimate is both an underestimate and normally breaks our models, since anything multiplied by 0 is 0. Simultaneously decreasing the estimated probability of seen events and increasing the probability of unseen events is referred to as smoothing. One simple way of smoothing is to add a number $\alpha$ to each of the observed counts. These pseudocounts correspond to the use of a uniform distribution over the vocabulary as a Bayesian prior, following Equation (11.4). We initially assume a uniform distribution over events, where the size of $\alpha$ denotes the strength of our belief in uniformity, and we then update the probability based on observed events. Since our belief in uniformity is weak, we use
$\alpha = \frac{1}{2}$. This is a form of maximum a posteriori (MAP) estimation, where we choose the most likely point value for probabilities based on the prior and the observed evidence, following Equation (11.4). We will further discuss methods of smoothing estimated counts to give probability models in Section 12.2.2 (page 243); the simple method of adding $\frac{1}{2}$ to each observed count will do for now.
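To make these estimates concrete, here is a minimal Python sketch (mine, not the book's) of the smoothed weight $\hat{c}_t$ of Equation (11.21); the function name and the example counts are illustrative assumptions.

```python
from math import log

def c_hat(N, df_t, S, s, alpha=0.5):
    """Smoothed term weight of Equation (11.21).

    N: collection size; df_t: documents containing t;
    S: relevant documents; s: relevant documents containing t.
    alpha is the pseudocount added to each cell of the
    contingency table; alpha = 1/2 is the choice made above,
    while alpha = 0 gives the unsmoothed Equation (11.20).
    """
    odds_rel = (s + alpha) / (S - s + alpha)                       # term odds among relevant docs
    odds_nonrel = (df_t - s + alpha) / (N - df_t - S + s + alpha)  # term odds among nonrelevant docs
    return log(odds_rel / odds_nonrel)

# Even when every relevant document contains the term (s == S),
# the pseudocounts keep every cell positive and the weight finite:
print(c_hat(N=1000, df_t=50, S=10, s=10))
```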
11.3.3 Probability estimates in practice
Under the assumption that relevant documents are a very small percentage of the collection, it is plausible to approximate statistics for nonrelevant documents by statistics from the whole collection. Under this assumption, $u_t$ (the probability of term occurrence in nonrelevant documents for a query) is $\mathrm{df}_t/N$ and
(11.22)
$$\log[(1 - u_t)/u_t] = \log[(N - \mathrm{df}_t)/\mathrm{df}_t] \approx \log(N/\mathrm{df}_t)$$
In other words, we can provide a theoretical justification for the most frequently used form of idf weighting, which we saw in Section 6.2.1.
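As a quick numerical check (an illustrative example, not from the text), the two sides of Equation (11.22) are nearly indistinguishable whenever $\mathrm{df}_t \ll N$:

```python
from math import log

N = 1_000_000
for df_t in (10, 1_000, 100_000):
    exact = log((N - df_t) / df_t)   # log[(1 - u_t)/u_t]
    approx = log(N / df_t)           # the usual idf
    print(df_t, round(exact, 4), round(approx, 4))
# For df_t = 10 or 1,000 the two values agree to several decimals;
# only for a very common term (df_t = 100,000) is the gap visible.
```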
The approximation technique in Equation (11.22) cannot easily be extended to relevant documents. The quantity $p_t$ can be estimated in various ways:
1. We can use the frequency of term occurrence in known relevant documents (if we know some). This is the basis of probabilistic approaches to relevance feedback weighting in a feedback loop, discussed in the next subsection.
2. Croft and Harper (1979) proposed using a constant in their combination match model. For instance, we might assume that $p_t$ is constant over all terms $x_t$ in the query and that $p_t = 0.5$. This means that each term has even odds of appearing in a relevant document, and so the $p_t$ and $(1 - p_t)$ factors cancel out in the expression for RSV. Such an estimate is weak, but doesn't disagree violently with our hopes for the search terms appearing in many but not all relevant documents. Combining this method with our earlier approximation for $u_t$, the document ranking is determined simply by which query terms occur in documents, scaled by their idf weighting (see the sketch after this list). For short documents (titles or abstracts) in situations in which iterative searching is undesirable, using this weighting term alone can be quite satisfactory, although in many other circumstances we would like to do better.
3. Greiff (1998) argues that the constant estimate of $p_t$ in the Croft and Harper (1979) model is theoretically problematic and not observed empirically: as might be expected, $p_t$ is shown to rise with $\mathrm{df}_t$. Based on his data analysis, a plausible proposal would be to use the estimate $p_t = \frac{1}{3} + \frac{2}{3}\,\mathrm{df}_t/N$.
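The following sketch illustrates the idf-only ranking that falls out of option 2 above, combining $p_t = 0.5$ with $u_t \approx \mathrm{df}_t/N$; the function and the toy collection are hypothetical, not from the book.

```python
from math import log

def rsv_idf_only(query_terms, doc_terms, df, N):
    """RSV under the Croft-Harper assumptions: the constant p_t
    cancels out of the weights, and u_t ~ df_t/N, so each query
    term present in the document contributes (roughly) its idf."""
    return sum(log(N / df[t]) for t in query_terms
               if t in doc_terms and t in df)

# Hypothetical toy collection of N = 100 documents:
df = {"rare": 2, "common": 50}
doc = {"rare", "common", "other"}
print(rsv_idf_only({"rare", "common"}, doc, df, N=100))  # the rare term dominates
```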
Iterative methods of estimation, which combine some of the above ideas, are discussed in the next subsection.
11.3.4 Probabilistic approaches to relevance feedback
We can use (pseudo-)relevance feedback, perhaps in an iterative process of estimation, to get a more accurate estimate of $p_t$. The probabilistic approach to relevance feedback works as follows:
1. Guess initial estimates of $p_t$ and $u_t$. This can be done using the probability estimates of the previous section. For instance, we can assume that $p_t$ is constant over all $x_t$ in the query, in particular, perhaps taking $p_t = \frac{1}{2}$.
2. Use the current estimates of $p_t$ and $u_t$ to determine a best guess at the set of relevant documents $R = \{d : R_{d,q} = 1\}$. Use this model to retrieve a set of candidate relevant documents, which we present to the user.
3. We interact with the user to refine the model of $R$. We do this by learning from the user relevance judgments for some subset of documents $V$. Based on relevance judgments, $V$ is partitioned into two subsets: $VR = \{d \in V : R_{d,q} = 1\} \subseteq R$ and $VNR = \{d \in V : R_{d,q} = 0\}$, which is disjoint from $R$.
4. We reestimate $p_t$ and $u_t$ on the basis of known relevant and nonrelevant documents. If the sets $VR$ and $VNR$ are large enough, we may be able to estimate these quantities directly from these documents as maximum likelihood estimates:
(11.23)
$$p_t = |VR_t|/|VR|$$

(where $VR_t$ is the set of documents in $VR$ containing $x_t$). In practice, we usually need to smooth these estimates. We can do this by adding $\frac{1}{2}$ to both the count $|VR_t|$ and to the number of relevant documents not containing the term, giving:

(11.24)
$$p_t = \frac{|VR_t| + \frac{1}{2}}{|VR| + 1}$$

However, the set of documents judged by the user ($V$) is usually very small, and so the resulting statistical estimate is quite unreliable (noisy), even if the estimate is smoothed. So it is often better to combine the new information with the original guess in a process of Bayesian updating. In this case we have:

(11.25)
$$p_t^{(k+1)} = \frac{|VR_t| + \kappa\, p_t^{(k)}}{|VR| + \kappa}$$
Here $p_t^{(k)}$ is the $k$th estimate for $p_t$ in an iterative updating process and is used as a Bayesian prior in the next iteration with a weighting of $\kappa$. Relating this equation back to Equation (11.4) requires a bit more probability theory than we have presented here (we need to use a beta distribution prior, conjugate to the Bernoulli random variable $X_t$). But the form of the resulting equation is quite straightforward: rather than uniformly distributing pseudocounts, we now distribute a total of $\kappa$ pseudocounts according to the previous estimate, which acts as the prior distribution. In the absence of other evidence, and assuming that the user is perhaps indicating roughly 5 relevant or nonrelevant documents, a value of around $\kappa = 5$ is perhaps appropriate. That is, the prior is strongly weighted so that the estimate does not change too much from the evidence provided by a very small number of documents (see the sketch after this list).
5. Repeat the above process from step 2, generating a succession of approximations to $R$ and hence $p_t$, until the user is satisfied.
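A minimal sketch of the update in Equation (11.25); the function and the illustrative counts are assumptions of this example, not the book's code.

```python
def update_p_t(vr_t, vr, p_prev, kappa=5.0):
    """Bayesian update of Equation (11.25): kappa pseudocounts are
    distributed according to the previous estimate p_prev, which
    acts as the prior.

    vr_t: judged-relevant documents containing the term (|VR_t|)
    vr:   judged-relevant documents overall (|VR|)
    """
    return (vr_t + kappa * p_prev) / (vr + kappa)

# Starting from the uninformative p_t = 1/2, two relevant judgments
# that both contain the term move the estimate only gently upward:
p = update_p_t(vr_t=2, vr=2, p_prev=0.5)  # (2 + 2.5) / (2 + 5) ~ 0.64
print(p)
```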
It is also straightforward to derive a pseudo-relevance feedback version of this algorithm, where we simply pretend that $VR = V$. More briefly:
1. Assume initial estimates for $p_t$ and $u_t$ as above.
2. Determine a guess for the size of the relevant document set. If unsure, a conservative (too small) guess is likely to be best. This motivates use of a fixed-size set $V$ of highest-ranked documents.
3. Improve our guesses for $p_t$ and $u_t$. We choose from the methods of Equations (11.23) and (11.25) for re-estimating $p_t$, except now based on the set $V$ instead of $VR$. If we let $V_t$ be the subset of documents in $V$ containing $x_t$ and use add-$\frac{1}{2}$ smoothing, we get:
(11.26)
$$p_t = \frac{|V_t| + \frac{1}{2}}{|V| + 1}$$

and if we assume that documents that are not retrieved are nonrelevant, then we can update our $u_t$ estimates as:

(11.27)
$$u_t = \frac{\mathrm{df}_t - |V_t| + \frac{1}{2}}{N - |V| + 1}$$
4. Go to step 2 until the ranking of the returned results converges.
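Putting the pseudo-relevance feedback variant together, here is a schematic single round of re-estimation using Equations (11.26) and (11.27) and the $c_t$ weights of Equation (11.18); the data layout (dicts of term sets and document frequencies) and all names are placeholders, not the book's.

```python
from math import log

def reestimate(V, docs, df, N):
    """One pseudo-relevance feedback round: pretend the top-ranked
    set V is the relevant set (VR = V) and re-estimate p_t and u_t
    with Equations (11.26) and (11.27).

    docs: doc id -> set of terms; df: term -> document frequency.
    """
    p, u = {}, {}
    for t in df:
        V_t = sum(1 for d in V if t in docs[d])          # |V_t|
        p[t] = (V_t + 0.5) / (len(V) + 1)                # (11.26)
        u[t] = (df[t] - V_t + 0.5) / (N - len(V) + 1)    # (11.27)
    return p, u

def rsv(query, doc_terms, p, u):
    """RSV_d: sum of c_t (Equation (11.18)) over query terms in the doc."""
    return sum(log(p[t] / (1 - p[t])) + log((1 - u[t]) / u[t])
               for t in query if t in doc_terms)

# Tiny illustrative run with three documents and |V| = 2:
docs = {1: {"a", "b"}, 2: {"a"}, 3: {"c"}}
p, u = reestimate(V=[1, 2], docs=docs, df={"a": 2, "b": 1, "c": 1}, N=3)
print(rsv({"a", "b"}, docs[1], p, u))
```

A full implementation would iterate as in step 4: rank all documents with `rsv`, take the new top-ranked set as $V$, and re-estimate until the ranking stabilizes.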
Once we have a real estimate for $p_t$, the $c_t$ weights used in the RSV value look almost like a tf-idf value. For instance, using Equation (11.18),