Multi-Agent Decision-Making Technology in Data Fusion Tasks
Abstract
The paper considers a technology for decision-making support based on information obtained from multiple sources. The specifics of developing such systems stem from the distributed nature of the sources and the heterogeneity of the data they contain, from the distributed mode of operation of such decision support systems, and from the distributed character of the process of developing their basic components. These features necessitate solving a number of specific technological problems. The models, methods, and architectural solutions concerning the technology for developing decision support systems based on distributed heterogeneous data sources, as well as their implementation in the form of a software toolkit, constitute the main content of this paper.
References
Городецкий В., Тулупьев А. Непротиворечивость баз знаний с интервальной вероятностной мерой неопределенности // Известия РАН. Теория и системы управления, № 5, 1997. — с. 23–32.
Городецкий В., Самойлов В. Визуальный синтез классифицирующих предикатов и их применение для мета-классификации // Труды Таганрогского радиотехнического института, 4, 2001. — с. 5–16.
Растригин Л., Эренштейн Р. Методы коллективного распознавания. Москва: Энергоиздат, 1981. — 102 с.
Ali K. and Pazzani M. Error reduction through learning multiple descriptions // Machine Learning, 24(3), 1996. — p. 173–202.
Bass T. Intrusion Detection Systems and Multisensor Data Fusion: Creating Cyberspace Situational Awareness // Communications of the ACM, vol. 43, no. 4, 2000. — p. 99–105.
Bay S. and Pazzani M. Characterizing model errors and differences // Proceedings of the 17th International Conference on Machine Learning (ICML-2000), Morgan Kaufmann, 2000.
Breiman L. Bagging predictors // Machine Learning, 24(2), 1996. — p. 123–140.
Breiman L. Stacked regressions // Machine Learning, 24(1), 1996. — p. 49–64.
Buntine W.L. A theory of learning classification rules. Ph.D. thesis, University of Technology, School of Computing Science, Sydney, Australia, 1990.
Chan P. and Stolfo S. A comparative evaluation of voting and meta-learning on partitioned data // Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, CA, 1995. — p. 90–98.
Clark P. and Niblett T. The CN2 induction algorithm // Machine Learning, 3(4), 1989. — p. 261–283.
Clemen R. Combining forecasts: A review and annotated bibliography // International Journal of Forecasting, 5, 1989. — p. 559–583.
Corkill D. and Lesser V. The use of meta-level control for coordination in a distributed problem solving network // Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), Menlo Park, CA, 1983. — p. 767–770.
Cost S. and Salzberg S. A weighted nearest neighbor algorithm for learning with symbolic features // Machine Learning, 10(1), 1993. — p. 57–78.
Dietterich T. Machine Learning Research: Four Current Directions // AI Magazine, 18(4), 1997. — p. 97–136.
Dietterich T. Ensemble Methods in Machine Learning // M. Arbib (Ed.), Handbook of Brain Theory and Neural Networks, 2nd Edition, MIT Press, 2001.
Freund Y. and Schapire R. Experiments with a new boosting algorithm // L. Saitta (Ed.), Machine Learning: Proceedings of the 13th International Conference, Morgan Kaufmann, 1996.
Gama J. and Brazdil P. Cascade generalization // Machine Learning, 41(3), 2000. — p. 315–342.
Goodman I., Mahler R., and Nguyen H. Mathematics of Data Fusion. Kluwer Academic Publishers, 1997.
Gorodetski V. Bayes' Inference and Decision Making in Artificial Intelligence Systems // Industrial Applications of Artificial Intelligence. North-Holland, 1991. — p. 276–281.
Gorodetski V. and Karsayev O. Algorithm of Rule Extraction from Learning Data // Proceedings of the 8th International Conference "Expert Systems & Artificial Intelligence" (EXPERSYS-96), 1996. — p. 133–138.
Gorodetski V., Skormin V., Popyack L., and Karsaev O. Distributed Learning in a Data Fusion System // Proceedings of the Conference of the World Computer Congress (WCC–2000) "Intelligent Information Processing" (IIP2000), Beijing, 2000. — p. 147–154.
Hashem S. Optimal linear combinations of neural networks. Ph.D. thesis, Purdue University, School of Industrial Engineering, Lafayette, IN, 1997.
Jordan M. and Jacobs R. Hierarchical mixtures of experts and the EM algorithm // Neural Computation, 6(2), 1994. — p. 181–214.
"KAON – The KArlsruhe Ontology and Semantic Web Infrastructure". http://kaon.semanticweb.org/
KDD Cup 1999 Data. http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
Mason C. and Johnson R. DATMS: A framework for distributed assumption-based reasoning // M. Huhns and L. Gasser (Eds.), Distributed Artificial Intelligence, vol. 2, Morgan Kaufmann, CA, 1989. — p. 293–318.
Merz C. Combining classifiers using correspondence analysis // Advances in Neural Information Processing Systems, 1997.
Merz C. and Murphy P. UCI Repository of Machine Learning Databases. Irvine, CA: University of California, Department of Information and Computer Science, 1997. http://www.ics.uci.edu/~mlearn/MLRepository.html
Michalski R. A Theory and Methodology of Inductive Learning // Machine Learning, vol. 1, J.G. Carbonell, R.S. Michalski, and T.M. Mitchell (Eds.), Tioga, Palo Alto, 1983. — p. 83–134.
Michalski R. and Kaufman K. The AQ19 System for Machine Learning and Pattern Discovery: A General Description and User's Guide. George Mason University, Technical Report ML01-2, P01-2, 2001.
Murthy S., Kasif S., Salzberg S., and Beigel R. OC1: Randomized induction of oblique decision trees // Proceedings of AAAI-93, AAAI Press, 1993.
Niyogi P., Pierrot J.-B., and Siohan O. Multiple classifiers by constrained minimization // Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, June 2000.
Ortega J., Koppel M., and Argamon S. Arbitrating Among Competing Classifiers Using Learned Referees // Knowledge and Information Systems, 3(4), 2001. — p. 470–490.
Perrone M. and Cooper L. When networks disagree: Ensemble methods for hybrid neural networks // R.J. Mammone (Ed.), Neural Networks for Speech and Image Processing, Chapman and Hall, 1993.
Prodromidis A., Chan P., and Stolfo S. Meta-learning in distributed data mining systems: Issues and approaches // P. Chan and H. Kargupta (Eds.), Advances in Distributed Data Mining, AAAI Press, 1999. Also available at http://www.cs.columbia.edu/~sal/hpapers/DDMBOOK.ps.gz
Quinlan J.R. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, 1993.
Rumelhart D., Hinton G., and Williams R. Learning internal representations by error propagation // D. Rumelhart and J. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.
Seewald A. and Fuernkranz J. An evaluation of grading classifiers // Proceedings of the 4th International Conference "Intelligent Data Analysis", LNCS 2189, 2001. — p. 115–124.
Semantic Web Roadmap. http://www.w3.org/DesignIssues/Semantic.html
Ting K. The characterization of predictive accuracy and decision combination // Proceedings of the 13th International Conference on Machine Learning, Morgan Kaufmann, 1996. — p. 498–506.
Ting K. and Witten I. Issues in stacked generalization // Journal of Artificial Intelligence Research, 10, 1999. — p. 271–289.
Todorovski L. and Dzeroski S. Combining classifiers with meta decision trees // D.A. Zighed, J. Komorowski, and J. Zytkow (Eds.), Proceedings of the 4th European Conference on Principles of Data Mining and Knowledge Discovery (PKDD-2000), Springer Verlag, 2000. — p. 54–64.
Todorovski L. and Dzeroski S. Combining multiple models with meta decision trees // To appear in Machine Learning Journal, 2001.
Vilalta R. and Drissi Y. A perspective view and survey of meta-learning // Submitted to Artificial Intelligence Review. http://www.research.ibm.com/people/v/vilalta/papers/jaireview01.ps
Wolpert D. Stacked generalization // Neural Networks, 5(2), 1992. — p. 241–260.
Published
2002-04-01
How to Cite
Городецкий, Карсаев, & Самойлов. (2002). Многоагентная технология принятия решений в задачах объединения данных. Труды СПИИРАН, 2(1), 12–37. https://doi.org/10.15622/sp.1.1
Section
Articles