JACIII Vol.14 No.4 pp. 396-401
doi: 10.20965/jaciii.2010.p0396


Extraction of Web Site Evaluation Criteria and Automatic Evaluation

Peng Li* and Seiji Yamada**

*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, J2, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

**National Institute of Informatics, SOKENDAI, 2-1-2 Hitotsubashi, Chiyoda, Tokyo 101-8430, Japan

Received: December 11, 2009
Accepted: March 8, 2010
Published: May 20, 2010
Keywords: evaluation criteria extraction, web site evaluation
Abstract: This paper proposes an automated web site evaluation method that uses machine learning to extract evaluation criteria from existing evaluation data. Web site evaluation is a significant task because evaluated web sites give users information for estimating a site's validity and popularity. Although many practical approaches have proposed possible measuring sticks for web sites, their evaluation criteria are determined manually. We developed a method that obtains evaluation criteria automatically and ranks web sites with the learned classifier. The evaluation criteria are discriminant functions learned from a set of ranking information and evaluation features collected automatically by web robots. Experiments confirmed the effectiveness of our approach and its potential for high-quality web site evaluation.
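The abstract describes learning a discriminant function from existing ranking data and then using it to rank web sites. As a minimal illustrative sketch (not the authors' implementation), a linear evaluation criterion can be learned from ranked pairs of sites with a perceptron-style update; the feature names and training data below are hypothetical:

```python
# Sketch of learning an evaluation criterion (a linear discriminant)
# from pairwise ranking data. Feature vectors and rankings are hypothetical;
# the paper's features would come from web robots crawling each site.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def learn_criterion(ranked_pairs, n_features, epochs=100, lr=0.1):
    """ranked_pairs: list of (x_better, x_worse) feature-vector pairs."""
    w = [0.0] * n_features
    for _ in range(epochs):
        updated = False
        for xb, xw in ranked_pairs:
            # Require the higher-ranked site to score at least a unit margin more.
            if dot(w, xb) - dot(w, xw) < 1.0:
                w = [wi + lr * (b - c) for wi, b, c in zip(w, xb, xw)]
                updated = True
        if not updated:  # all pairs satisfied; weights have converged
            break
    return w

# Hypothetical features per site: [content volume, inbound links, 1/load time]
sites = {
    "A": [0.9, 0.8, 0.7],
    "B": [0.4, 0.5, 0.9],
    "C": [0.2, 0.1, 0.3],
}
# Existing evaluation data says A ranks above B, and B above C.
pairs = [(sites["A"], sites["B"]), (sites["B"], sites["C"])]
w = learn_criterion(pairs, n_features=3)

# Rank sites by the learned discriminant score.
ranking = sorted(sites, key=lambda s: dot(w, sites[s]), reverse=True)
```

The learned weight vector plays the role of the extracted evaluation criterion: new sites can be scored and ranked without further manual judgment.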
Cite this article as:
P. Li and S. Yamada, “Extraction of Web Site Evaluation Criteria and Automatic Evaluation,” J. Adv. Comput. Intell. Intell. Inform., Vol.14 No.4, pp. 396-401, 2010.
References:
  [1]
  [2]
  [3] S. Ssemugabi and R. de Villiers, “A comparative study of two usability evaluation methods using a web-based e-learning application,” Proc. of 2007 Annual Research Conf. of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, pp. 132-142, 2007.
  [4] A. Aizpurua, M. Arrue, M. Vigo, and J. Abascal, “Transition of accessibility evaluation tools to new standards,” Proc. of 2009 Int. Cross-Disciplinary Conf. on Web Accessibility, pp. 36-44, 2009.
  [5] E. Velleman, C. Strobbe, J. Koch, C. A. Velasco, and M. A. Snaprud, “Unified Web Evaluation Methodology Using WCAG,” Proc. of the 4th Int. Conf. on Universal Access in Human-Computer Interaction, Vol.4556, pp. 177-184, 2007.
  [6] L. Falk, A. Prakash, and K. Borders, “Analyzing Websites for User-Visible Security Design Flaws,” Proc. of the 4th Symposium on Usable Privacy and Security, pp. 117-126, 2008.
  [7] O. Chapelle and Y. Zhang, “A dynamic bayesian network click model for web search ranking,” Proc. of the 18th Int. Conf. on World Wide Web, pp. 1-10, 2009.
  [8] S. Rajaram, A. Garg, X. S. Zhou, and T. S. Huang, “Classification approach towards ranking and sorting problems,” Proc. of the 14th European Conf. on Machine Learning, pp. 301-312, 2003.
  [9] K. Crammer and Y. Singer, “Pranking with ranking,” Advances in Neural Information Processing Systems 14, MIT Press, Vol.1, pp. 641-647, 2002.
  [10] E. L. Allwein, R. E. Schapire, and Y. Singer, “Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers,” J. of Machine Learning Research, Vol.1, pp. 113-141, 2000.
  [11] J. E. Alexander and M. A. Tate, “Web WISDOM: How to evaluate and create information quality on the Web,” Lawrence Erlbaum, Hillsdale, 1999.
  [12] M. P. Arnone and R. V. Small, “WWW motivation mining: finding treasures for teaching evaluation skills, grades 1-6,” Linworth Publishing, Worthington, 1999.
  [13] G. Velayathan and S. Yamada, “Behavior-based Web page evaluation,” J. of Web Engineering, pp. 222-243, 2007.
  [14] M. Y. Ivory, “An Empirical Approach to Automated Web Site Evaluation,” J. of Digital Information Management, Vol.1, No.2, pp. 75-102, 2003.
  [15] R. Sinha, M. Hearst, and M. Ivory, “Content or Graphics?: An Empirical Analysis of Criteria for Award-Winning Websites,” Proc. of the 7th Conf. on Human Factors and the Web, 2001.
  [16] M. Y. Ivory and M. A. Hearst, “Statistical Profiles of Highly-Rated Web Sites,” Proc. of the 20th ACM Conf. on Human Factors in Computing Systems, 2002.
  [17] W. P. Palmer, “Web Site Usability, Design, and Performance Metrics,” Information Systems Research, Vol.13, No.2, 2002.
  [18] G. Lebanon and J. Lafferty, “Cranking: Combining rankings using conditional probability models on permutations,” Proc. of the 19th Int. Conf. on Machine Learning, pp. 363-370, 2002.
  [19] H. Takashima, H. Yamagishi, and S. Hirasawa, “An Improved Method of Collaborative Filtering with Predicting Unobserved Values,” FIT Japan, A-008, 2005.
  [20] B. M. Sarwar, G. Karypis, J. A. Konstan, and J. Riedl, “Item-based collaborative filtering recommendation algorithms,” Proc. of the 10th Int. Conf. on World Wide Web, pp. 285-295, 2001.
  [21] P. Li and S. Yamada, “A Movie Recommender System Based on Inductive Learning,” Proc. of IEEE Conf. on Cybernetics and Intelligent Systems, Vol.1, pp. 318-323, 2004.
  [22] R. Kohavi and G. H. John, “Wrappers for Feature Subset Selection,” Artificial Intelligence, Vol.97, No.1-2, pp. 273-324, 1997.
