Uso de Medidas de Complexidade em Seleção de Atributos

Date
2018-07-31
Authors
Okimoto, Lucas Chesini [UNIFESP]
Advisors
Lorena, Ana Carolina [UNIFESP]
Type
Master's dissertation
Abstract
Feature selection is an important pre-processing step, usually mandatory in data analysis by machine learning techniques. Its objective is to reduce data dimensionality by removing irrelevant and redundant features from a dataset. In this work we evaluate the use of complexity measures of classification problems in feature selection (FS). These descriptors allow estimating the intrinsic difficulty of a classification problem based on characteristics of the dataset available for learning. We propose a combined univariate-multivariate FS technique which employs two of the complexity measures: Fisher's maximum discriminant ratio and intra-extra class distances. The results are promising and reveal that the complexity measures are indeed suitable for estimating feature importance in classification datasets. Large reductions in the numbers of features were obtained, while preserving, in general, the predictive accuracy of two strong classification techniques: support vector machines and random forests.
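The univariate side of the approach described in the abstract can be illustrated with Fisher's maximum discriminant ratio, which scores each feature by how well the class means separate relative to the within-class variances. The sketch below is a minimal, hypothetical illustration of such a per-feature ranking for a binary problem (the function name, toy data, and variance smoothing term are illustrative assumptions, not the dissertation's actual implementation):

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher discriminant ratio for a two-class problem:
    (mean_1 - mean_2)^2 / (var_1 + var_2). Higher values indicate
    features whose class-conditional distributions are better separated."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    c1, c2 = np.unique(y)[:2]
    X1, X2 = X[y == c1], X[y == c2]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0) + 1e-12  # avoid division by zero
    return num / den

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([
    np.column_stack([rng.normal(0, 1, 50), rng.normal(0, 1, 50)]),
    np.column_stack([rng.normal(5, 1, 50), rng.normal(0, 1, 50)]),
])
y = np.array([0] * 50 + [1] * 50)

scores = fisher_ratio(X, y)
ranking = np.argsort(scores)[::-1]  # most discriminative feature first
```

In a univariate filter, features are then selected from the top of `ranking`; the multivariate component described in the abstract (intra-extra class distances) would instead score feature subsets jointly.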