Undergraduate Theses (Trabalhos de Conclusão de Curso)
Browsing Undergraduate Theses by author "Afonso, Bruno Klaus de Aquino [UNIFESP]"
Now showing 1 - 1 of 1
- LGCLVOAuto: correction of labels with gradient descent optimization for graph-based semi-supervised learning (Universidade Federal de São Paulo, 2020-10-10)
  Afonso, Bruno Klaus de Aquino [UNIFESP]; Berton, Lilian [UNIFESP]
  Lattes CVs: http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4266309U2; http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K8584642H3

We consider the problem of learning with noisy labels where most of the data is unlabelled. More specifically, we focus on graph-based semi-supervised learning, a setting in which many different approaches have already been proposed, such as the $\ell_1$ norm, smooth eigenbasis pursuit, and the bivariate formulation.
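As background, graph-based semi-supervised learning propagates the few known labels over a similarity graph. A minimal sketch of one classic propagator, the closed-form Local and Global Consistency (LGC) solution $F = (I - \alpha S)^{-1} Y$ with $S = D^{-1/2} W D^{-1/2}$, is given below; the toy graph, labels, and $\alpha$ value are illustrative assumptions, not the experimental setup of this work:

```python
import numpy as np

def lgc_propagate(W, Y, alpha=0.99):
    """Closed-form Local and Global Consistency: F = (I - alpha*S)^{-1} Y.

    W: (n, n) symmetric affinity matrix of the similarity graph.
    Y: (n, c) one-hot label matrix, with all-zero rows for unlabelled points.
    Returns the (n, c) soft-label matrix F; predict with F.argmax(1).
    """
    d = np.maximum(W.sum(axis=1), 1e-12)    # node degrees (guarded against 0)
    S = W / np.sqrt(np.outer(d, d))         # S = D^{-1/2} W D^{-1/2}
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F
```

On a toy graph made of two disconnected labelled/unlabelled pairs, each unlabelled point inherits the class of its labelled neighbour.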
We propose our own semi-supervised filter, named Automatic Leave-One-Out Filter based on Local and Global Consistency (LGCLVOAuto), which corrects and redistributes label information so as to minimize the leave-one-out error while remaining consistent with the random-walk process imposed by its baseline, the Local and Global Consistency (LGC) algorithm. We explore the problem of diagonal dominance in LGC solutions, its possible relation to overfitting, and how setting the diagonal to zero leads to the leave-one-out cost. We use gradient descent on the labels to minimize this cost, transferring some of the trust from the labels themselves to the propagation model. To eliminate degenerate solutions, two restrictions are put in place: labels cannot change class, and the overall contribution of each class must remain the same. The optimization requires only the relations between labels; consequently, it is suited to moderately large datasets such as MNIST, particularly when labelled data is scarce, and it requires a single parameter. In theory, it extends trivially to the more general LapRLS classifier. Results show that LGCLVOAuto significantly outperforms its baseline when label noise is present, while doing little harm in the noiseless scenario. Moreover, it is competitive with other methods that require more parameters.
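The optimization loop described above can be caricatured in a few lines. This is a toy sketch reconstructed from the abstract alone, not the thesis implementation: an LGC-style propagation matrix with a zeroed diagonal stands in for the leave-one-out model, a squared residual stands in for the actual leave-one-out cost, and simple projection steps enforce the two stated restrictions. All function names, the learning rate, and the step count are assumptions.

```python
import numpy as np

def build_loo_propagator(W, alpha=0.9):
    """LGC-style propagator (I - alpha*S)^{-1} with its diagonal zeroed,
    so each point's prediction ignores its own label (leave-one-out)."""
    d = np.maximum(W.sum(axis=1), 1e-12)
    S = W / np.sqrt(np.outer(d, d))                 # D^{-1/2} W D^{-1/2}
    P = np.linalg.inv(np.eye(len(W)) - alpha * S)
    np.fill_diagonal(P, 0.0)
    return P

def correct_labels(P_loo, Y, lr=0.05, steps=300):
    """Toy sketch: per-point label weights w are tuned by gradient descent
    on a squared leave-one-out residual; after each step they are
    (a) clipped at zero, so a label can never flip to another class, and
    (b) rescaled so every class keeps its original total contribution."""
    labelled = Y.sum(axis=1) > 0
    cls = Y.argmax(axis=1)
    w = np.ones(len(Y))
    for _ in range(steps):
        F = P_loo @ (w[:, None] * Y)                 # propagate weighted labels
        R = np.where(labelled[:, None], F - Y, 0.0)  # residual on labelled points
        g = (P_loo.T @ R)[np.arange(len(w)), cls]    # dC/dw_j for C = 0.5*||R||^2
        w = np.maximum(w - lr * g * labelled, 0.0)   # step + "no class change"
        for c in np.unique(cls[labelled]):
            m = labelled & (cls == c)
            s = w[m].sum()
            if s > 0:
                w[m] *= m.sum() / s                  # keep per-class mass fixed
    return w[:, None] * Y
```

On a toy two-cluster graph with one mislabelled point, the weight of the noisy label tends to shrink while the clean labels of the same class absorb its mass, which is the redistribution behaviour the abstract describes.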