Using a Hidden Markov Model to Learn User Browsing Patterns for Focused Web Crawling


Hongyu Liu
Jeannette Janssen
Evangelos Milios

Author Addresses: 

Faculty of Computer Science
Dalhousie University
6050 University Ave.
PO Box 15000
Halifax, Nova Scotia, Canada
B3H 4R2


A focused crawler traverses the Web to gather documents on a specific topic. It can be used to build domain-specific Web search portals and online personalized search tools. To be effective, a focused crawler must use information gleaned from previously crawled page sequences to estimate the relevance of newly seen URLs. In this paper, we present a new approach to predicting which links lead to relevant pages, based on a Hidden Markov Model (HMM). The system consists of three stages: user data collection, user modelling via sequential pattern learning, and focused crawling. We first collect the Web pages visited during user browsing sessions. These pages are clustered, and the link structure among pages from different clusters is used to learn, with an HMM, which page sequences are likely to lead to target pages. During crawling, links are prioritized by a learned estimate of how likely the current page is to lead to a target page. We compare performance with Context-Graph crawling and Best-First crawling; experiments show that our approach outperforms both strategies.
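To make the crawling stage concrete, the following is a minimal sketch of HMM-based link prioritization. It is not the paper's implementation: the state space, all probability values, and the two content clusters ("on-topic", "off-topic") are illustrative assumptions. States are meant to represent how far a page is from a target (state 0 = target, 1 = one hop away, 2 = farther). The forward algorithm turns the cluster sequence of the crawl path into a posterior over states, and frontier links are ranked by the probability of being at or near a target.

```python
import heapq

# Hypothetical 3-state HMM; all numbers are illustrative, not from the paper.
START = [0.2, 0.3, 0.5]                    # initial state distribution
TRANS = [[0.7, 0.2, 0.1],                  # TRANS[p][s] = P(state s | state p)
         [0.5, 0.3, 0.2],
         [0.2, 0.4, 0.4]]
# Emission probabilities: the clustering stage maps each visited page to a
# content cluster, which serves as the HMM observation symbol.
EMIT = {"on-topic":  [0.8, 0.5, 0.1],
        "off-topic": [0.2, 0.5, 0.9]}

def forward(observations):
    """Forward algorithm: posterior P(state | observed cluster sequence)."""
    alpha = [START[s] * EMIT[observations[0]][s] for s in range(3)]
    for obs in observations[1:]:
        alpha = [sum(alpha[p] * TRANS[p][s] for p in range(3)) * EMIT[obs][s]
                 for s in range(3)]
    total = sum(alpha)
    return [a / total for a in alpha]

def link_priority(path_observations):
    """Score a frontier link by the estimated probability that the crawl
    path leading to it is at or one hop from a target (states 0 or 1)."""
    posterior = forward(path_observations)
    return posterior[0] + posterior[1]

# Frontier ordered by learned priority (max-priority via negated score).
frontier = []
for url, path in [("http://example.org/a", ["off-topic", "on-topic"]),
                  ("http://example.org/b", ["off-topic", "off-topic"])]:
    heapq.heappush(frontier, (-link_priority(path), url))

best = heapq.heappop(frontier)[1]  # link whose path looks closest to a target
```

In this toy setting, the path ending in an on-topic page scores higher, so its link is crawled first; in the full system the HMM parameters would be estimated from the clustered user browsing sessions rather than fixed by hand.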

Tech Report Number:
CS-2005-05
Report Date:
June 3, 2005
CS-2005-05.pdf (2.2 MB)