Browsing by Keyword "Aprendizado De Máquina"
Now showing 1 - 8 of 8
- Item (Metadata only): Aprendizado De Máquina Aplicado À Odometria Visual Para Estimação De Posição De Veículos Aéreos Não Tripulados (Universidade Federal de São Paulo (UNIFESP), 2018-07-31). Roos, Daniel Rodrigues [UNIFESP]; Lorena, Ana Carolina [UNIFESP]; Universidade Federal de São Paulo (UNIFESP). To perform autonomous navigation, the system of an unmanned aerial vehicle (UAV), also known as a drone, needs, among other things, to know the position of the aircraft. Such information is commonly obtained through a Global Positioning System (GPS) together with an Inertial Navigation System (INS). Although widely used for this purpose, there are many situations where the GPS signal may become unavailable for several reasons, affecting the aircraft's navigation system. Computer vision techniques, such as visual odometry (VO), may serve as an alternative or complement to navigation systems that use GPS and INS. VO estimates the displacement and direction of the UAV's movement by extracting information from a sequence of images obtained during flight by onboard cameras. Local feature detection and description algorithms can be applied to subsequent images to find matching points, which are used to estimate camera motion. However, during the flight of the UAV, differences in flight scene and flight…
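The feature-matching motion estimate described in the abstract can be illustrated with a minimal sketch. Assuming keypoint correspondences between two consecutive frames have already been found (here simulated with synthetic points rather than a real detector such as ORB or SIFT), a robust 2-D translation estimate is simply the median displacement of the matches:

```python
import numpy as np

def estimate_translation(pts_prev: np.ndarray, pts_curr: np.ndarray) -> np.ndarray:
    """Median per-axis displacement of matched keypoints (robust to outliers)."""
    return np.median(pts_curr - pts_prev, axis=0)

rng = np.random.default_rng(0)
pts_prev = rng.uniform(0, 640, size=(200, 2))   # keypoints in frame t
true_t = np.array([12.0, -5.0])                 # simulated UAV motion between frames
noise = rng.normal(0, 0.5, size=(200, 2))       # matching noise
pts_curr = pts_prev + true_t + noise            # matched keypoints in frame t+1

t_hat = estimate_translation(pts_prev, pts_curr)
```

A full VO pipeline would also recover rotation and scale (e.g. via the essential matrix), but the median displacement already shows why matched points constrain camera motion.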
- Item (Metadata only): Construção de algoritmos de Machine Learning na Radiologia (Universidade Federal de São Paulo (UNIFESP), 2020-09-17). Kitamura, Felipe Campos [UNIFESP]; Abdala, Nitamar [UNIFESP]; Universidade Federal de São Paulo. Recent research in artificial intelligence has shown great potential to change radiology as we know it today. Tools to aid radiological diagnosis can bring numerous benefits to patients, radiologists, and referring physicians. Despite the high expectations for this technology, the path to creating clinically useful and safe tools is a huge challenge that involves several aspects. In this work, we address the ethical, regulatory, technical, and cultural considerations that must be dealt with to expand the scope of artificial intelligence algorithms in practice. Next, we present 7 projects developed by our group that address some of the challenges in the area: (1) the lack of reproducibility when reading exams, (2) the creation of optimized algorithms for each clinical problem, (3) the limited access to large volumes of quality annotated data, (4) the lack of reproducibility of artificial intelligence research, (5) the difficulty of integrating algorithms into medical practice, (6) errors in the registration of exam types, and (7) the risk of exposure of sensitive patient information.
- Item (Metadata only): Desambiguação de sentidos de palavras por meio de aprendizado semissupervisionado e word embeddings (Universidade Federal de São Paulo (UNIFESP), 2020-01-27). Sousa, Samuel Bruno Da Silva [UNIFESP]; Berton, Lilian [UNIFESP]; Universidade Federal de São Paulo. Words naturally present more than one meaning, and ambiguity is a recurrent feature of natural languages. The task of Word Sense Disambiguation (WSD) aims at computationally determining which word sense is the most adequate in a given context. WSD is one of the main problems in the field of Natural Language Processing (NLP), since many other tasks, such as Machine Translation and Information Retrieval, may have their results enhanced by accurate disambiguation systems. To solve this problem, several Machine Learning (ML) approaches have been used, including unsupervised, supervised, and semi-supervised learning. The lack of labeled data to train supervised algorithms has made models that combine labeled and unlabeled data in the learning process a potential solution. Additionally, a comparative study of semi-supervised learning (SSL) approaches for WSD had not been done before, nor had the combination of SSL algorithms with the efficient word representations known as word embeddings, which became popular in the NLP literature. Hence, the main goal of this work is to investigate the performance of several semi-supervised algorithms applied to the problem of WSD, using word embeddings as features. To do so, four graph-based SSL algorithms were compared on the main benchmark datasets for WSD. To assess the influence of the word embeddings on the final results, six different setups of the Word2Vec model were trained and employed. The experimental results show that SSL models present competitive performance against supervised approaches, reaching over 80% F1 score when only 25% of the data are labeled. Furthermore, these algorithms have the advantage of avoiding a new training step to classify new words.
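A minimal, illustrative sketch of graph-based label propagation in the spirit of the SSL setup above (not the thesis's exact algorithms): synthetic 2-D vectors stand in for word embeddings of two senses, and labels from one seed example per sense spread over an RBF affinity graph:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),     # cluster for "sense 0"
               rng.normal(3, 0.3, (20, 2))])    # cluster for "sense 1"
y = np.full(40, -1)                             # -1 = unlabeled
y[0], y[20] = 0, 1                              # one labeled seed per sense

# RBF affinity matrix, row-normalized into a transition matrix
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)

# Iterate propagation, clamping the labeled seeds at every step
F = np.zeros((40, 2))
labeled = y != -1
F[labeled, y[labeled]] = 1.0
for _ in range(100):
    F = P @ F
    F[labeled] = 0.0
    F[labeled, y[labeled]] = 1.0

pred = F.argmax(axis=1)   # predicted sense for every point
```

Because the two clusters are far apart relative to the RBF bandwidth, each seed's label dominates its own cluster, which is the intuition behind the competitive results reported with only 25% labeled data.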
- Item (Metadata only): Explorando informação temporal em aprendizado profundo: reconhecimento de ações em vídeos (Universidade Federal de São Paulo (UNIFESP), 2019-08-09). Santos, Samuel Felipe Dos [UNIFESP]; Almeida Junior, Jurandy Gomes De [UNIFESP]; Universidade Federal de São Paulo (UNIFESP). Human action recognition in videos has been a very prominent task in recent years, both for being challenging and for having applications in a wide range of areas, such as surveillance, robotics, health, video search, and human-computer interaction. Recently, many works have used deep learning to deal with several problems in computer vision, such as classification, retrieval, segmentation, and pattern recognition in videos. However, one of the main limitations faced by these works is their limited capacity to learn temporal dynamics, because the large amount of data in a video entails a high computational cost: a huge volume of data must be processed to train a model. Although videos contain a lot of information, they also carry a lot of redundancy, which makes it difficult to extract relevant information. To overcome these problems, this work proposes the Compressed Video Convolutional 3D network (CV-C3D), which exploits information from the compressed video stream, avoiding the high computational cost of fully decoding it. The speed-up in data computation enables the network to use 3D convolutions to capture temporal context efficiently. The proposed method was evaluated on two public datasets for human action recognition, UCF-101 and HMDB-51, where the network presented the lowest computational complexity among all compared methods while maintaining comparable performance.
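The temporal-context claim can be made concrete with a naive 3-D convolution sketch: a 3x3x3 kernel mixes information across three consecutive frames as well as a 3x3 spatial window. This is illustrative only; real networks learn the filters and use optimized GPU kernels:

```python
import numpy as np

def conv3d_valid(video: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive valid-mode 3-D convolution over a (frames, height, width) volume."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = (video[i:i+t, j:j+h, k:k+w] * kernel).sum()
    return out

video = np.ones((16, 8, 8))           # 16 frames of 8x8 "pixels"
kernel = np.ones((3, 3, 3)) / 27.0    # spatio-temporal averaging kernel
out = conv3d_valid(video, kernel)     # each output mixes 3 frames at once
```

Each output voxel depends on 3 consecutive frames, which is exactly the temporal coupling that purely 2-D (frame-by-frame) convolutions cannot express.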
- Item (Open Access): Projeto ForestEyes – Ciência Cidadã e Aprendizado de Máquina na Detecção de Áreas Desmatadas em Florestas Tropicais (Universidade Federal de São Paulo (UNIFESP), 2020-08-28). Jordan Rojas Dallaqua, Fernanda Beatriz [UNIFESP]; Fazenda, Alvaro Luiz [UNIFESP]; Universidade Federal de São Paulo. The conservation of tropical forests is a current issue of social and ecological relevance due to their important role in the global ecosystem. Tropical forests have a great diversity of fauna and flora, act in the regulation of climate and rainfall, absorb large amounts of carbon dioxide, and serve as home for countless indigenous peoples. Unfortunately, millions of hectares are deforested and degraded every year, requiring government or private initiative programs to monitor tropical forests. Most of these programs involve the inspection of remote sensing images by specialists, generally with the support of computational resources for automatic pattern detection. This thesis proposes a novel methodology to detect deforestation in tropical forests based on Citizen Science and Machine Learning. With this methodology, a prototype system called ForestEyes was developed. It relies on non-specialized volunteers to inspect images for the target task, interacting with them through an appropriate graphical interface hosted as a project on the well-known Citizen Science platform Zooniverse. In the performed experiments, six official campaigns were carried out, receiving more than 81,000 contributions from 644 distinct volunteers. The results were compared with the official monitoring program for the Brazilian Legal Amazon (PRODES). The volunteers, within the concept of the wisdom of crowds, achieved excellent data labeling, with efficient segmentation even for early deforestation detection, which is considered a challenge for any similar system. These labeled data were used as a training set for different Machine Learning techniques, whose results are comparable to, and often even better than, those achieved using the official monitoring program as input data. Active Learning, with a balanced initial training set, obtained results comparable to classic supervised learning while using smaller amounts of samples. New Active Learning approaches based on the entropy of the classification have been proposed and proved suitable under some conditions. The developed methodology thus shows promise and, with further improvement, can complement official monitoring systems or be applied to regions where specialists or official monitoring programs are scarce.
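The entropy-based Active Learning selection mentioned above can be sketched as follows: given a classifier's class probabilities for unlabeled samples, query the ones whose predictive distribution has the highest entropy, i.e. the ones the model is least sure about. The probabilities below are invented for illustration:

```python
import numpy as np

def entropy_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k samples with highest predictive entropy."""
    p = np.clip(probs, 1e-12, 1.0)           # guard against log(0)
    H = -(p * np.log(p)).sum(axis=1)         # Shannon entropy per sample
    return np.argsort(H)[::-1][:k]           # most uncertain first

probs = np.array([
    [0.98, 0.02],   # confident "forest" patch
    [0.55, 0.45],   # ambiguous patch -> highest entropy
    [0.90, 0.10],
    [0.60, 0.40],   # second most ambiguous
])
query = entropy_query(probs, k=2)            # patches to send to volunteers next
```

Querying high-entropy samples first is one standard way Active Learning reaches supervised-level accuracy with fewer labeled samples, as reported in the abstract.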
- Item (Open Access): Reconhecimento automático de padrões em dislexia: uma abordagem baseada em funções visuais da leitura e aprendizado de máquina (Universidade Federal de São Paulo (UNIFESP), 2019-12-16). Silva Junior, Antonio Carlos Da [UNIFESP]; Mancini, Felipe [UNIFESP]; Schor, Paulo [UNIFESP]; Gonçalves, Emanuela Cristina Ramos [UNIFESP]; http://lattes.cnpq.br/3542867700396961; http://lattes.cnpq.br/8425496220946395; http://lattes.cnpq.br/4433119488921195; http://lattes.cnpq.br/1464083566861583; Universidade Federal de São Paulo (UNIFESP). INTRODUCTION: Developmental dyslexia is a neurological disorder that affects reading ability and, when left untreated, can lead to learning problems and negatively affect vocabulary growth. The diagnosis of dyslexia is complex and made by exclusion. Some studies have evaluated eye movement data together with machine learning (ML) techniques to classify dyslexia. Another study raises the hypothesis of visual reading function (VRF) patterns for differentiating dyslexics. The study of VRF in combination with ML techniques had not been explored. GENERAL OBJECTIVE: To apply ML techniques to explore and assist the diagnosis of dyslexia from VRF. SPECIFIC OBJECTIVES: To explore dyslexic and non-dyslexic VRF data with feature extraction, and to classify dyslexic and non-dyslexic readers using ML. MATERIAL AND METHODS: This dissertation has two steps: one quantitative and exploratory, the other quantitative and correlational. The first step explored two dyslexic VRF datasets, one of 1-line (1L) text readings and the other of 3-line (3L) text readings. The self-organizing map algorithm was applied to each dataset to separate it into clusters, which were then fed to a decision tree to extract the rules characterizing each group. The second step used data from 3L readings. Outliers were selected by a specialist, and the SMOTE algorithm was applied to the remaining data. Then a feature selection technique was applied, using the best area under the ROC curve (AUC) as the target for each of the five selected algorithms. They were compared by AUC and accuracy, and all were also compared by their calibration curves. RESULTS: In the first step, the 1L evaluation resulted in 1 cluster of controls and 3 of dyslexics; only dyslexics obtained maximum reading speed (MRS) < 140.72 ppm. The 3L evaluation yielded 3 dyslexic clusters and 1 control cluster; here only dyslexics had reading speed at critical print size (RSCPS) below 112.71 ppm. In the second step, synthetic data were generated so that each group had 100 records. In feature selection, reading acuity (RA) was selected by 4 of the 5 algorithms. Logistic regression obtained the best AUC (0.999) and accuracy (99%), as well as the best calibration curve. CONCLUSION: In the first step, MRS was determinant in separating the 1L clusters and RSCPS in the 3L ones, which may indicate that the crowding effect had some impact on the 3L test. The fact that RA was selected in 4 of the 5 feature selections suggests it may be an important variable for the diagnosis and study of dyslexia. The logistic regression algorithm obtained the best results and is indicated for VRF-based dyslexia classification.
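The AUC criterion used above to compare features and algorithms can be computed directly from the rank-sum (Mann-Whitney) identity. The sketch below scores one informative and one irrelevant synthetic feature; the data are invented, not the study's VRF measurements:

```python
import numpy as np

def auc_score(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC of a single feature used as a score, via the Mann-Whitney identity."""
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)       # ranks (no ties assumed)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(7)
labels = np.array([0] * 50 + [1] * 50)                             # control / dyslexic
informative = np.concatenate([rng.normal(0, 1, 50),
                              rng.normal(2.5, 1, 50)])             # separates classes
noise = rng.normal(0, 1, 100)                                      # irrelevant feature
```

A feature like the informative one scores close to 1, while the irrelevant one stays near 0.5, which is the basis for AUC-targeted feature selection.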
- Item (Open Access): Uso De Medidas De Complexidade Em Seleção De Atributos (Universidade Federal de São Paulo (UNIFESP), 2018-07-31). Okimoto, Lucas Chesini [UNIFESP]; Lorena, Ana Carolina [UNIFESP]; Universidade Federal de São Paulo (UNIFESP). Feature selection is an important pre-processing step, usually mandatory in data analysis by machine learning techniques. Its objective is to reduce data dimensionality by removing irrelevant and redundant features from a dataset. In this work we evaluate the use of complexity measures of classification problems in feature selection (FS). These descriptors allow estimating the intrinsic difficulty of a classification problem based on characteristics of the dataset available for learning. We propose a combined univariate-multivariate FS technique that employs two of the complexity measures: Fisher's maximum discriminant ratio and intra-extra class distances. The results are promising and reveal that the complexity measures are indeed suitable for estimating feature importance in classification datasets. Large reductions in the number of features were obtained while preserving, in general, the predictive accuracy of two strong classification techniques: support vector machines and random forests.
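One of the two complexity measures named above, Fisher's maximum discriminant ratio, is straightforward to sketch for a two-class problem: per feature, f = (mu1 - mu2)^2 / (s1^2 + s2^2), with higher values indicating better class separation. The toy data below are invented, not the work's datasets:

```python
import numpy as np

def fisher_ratio(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-feature Fisher discriminant ratio for a binary classification problem."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2    # squared mean gap
    den = X0.var(axis=0) + X1.var(axis=0)             # within-class spread
    return num / den

rng = np.random.default_rng(1)
y = np.array([0] * 100 + [1] * 100)
f_good = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])  # discriminative
f_bad = rng.normal(0, 1, 200)                                            # irrelevant
X = np.column_stack([f_bad, f_good])

ratios = fisher_ratio(X, y)    # rank features by separation power
```

Ranking features by this ratio and dropping the low-scoring ones is the univariate half of the combined FS strategy described in the abstract.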
- Item (Open Access): Uso de rotinas de aprendizado de máquina em prontuário eletrônico para apoio a diagnósticos de pacientes oftalmológicos (Universidade Federal de São Paulo (UNIFESP), 2021). Alves, Lucas De Oliveira Batista [UNIFESP]; Santos, Vagner Rogerio Dos [UNIFESP]; Universidade Federal de São Paulo. Objective: To implement artificial intelligence routines through machine learning to build diagnostic prediction models with data from electronic medical records of patients of the Department of Ophthalmology of Hospital São Paulo. Method: A literature review of the main machine learning techniques and solutions for use with electronic medical records, followed by: 1. extraction, treatment, and analysis of data from the Department's medical records; 2. construction and analysis of vectorization models of related words in the context of the Hospital São Paulo database; 3. construction and validation of diagnostic prediction models. Results: The word vectorization models were able to capture the semantics of medical terms and enabled the construction of diagnostic prediction models, making the prediction model a useful tool to assist health professionals. Conclusion: The machine learning models showed potential to serve as diagnostic support tools for ophthalmologic patients.
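Step 2 of the method, word vectorization, can be illustrated with a toy co-occurrence model. The record snippets and terms below are invented, and a real pipeline would train a model such as Word2Vec on the full corpus; the point is only that terms sharing contexts end up with similar vectors:

```python
import numpy as np

# Hypothetical mini-corpus of clinical note fragments (invented terms)
docs = [
    "catarata cirurgia facoemulsificacao",
    "catarata facoemulsificacao lente",
    "glaucoma pressao intraocular",
    "glaucoma pressao colirio",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Term-by-term co-occurrence counts within each snippet
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    ws = d.split()
    for a in ws:
        for b in ws:
            if a != b:
                C[idx[a], idx[b]] += 1.0

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_related = cosine(C[idx["catarata"]], C[idx["facoemulsificacao"]])
sim_unrelated = cosine(C[idx["catarata"]], C[idx["pressao"]])
```

Terms from the same clinical context ("catarata", "facoemulsificacao") get similar co-occurrence vectors, while terms from different contexts do not; this is the kind of semantic capture the abstract attributes to the vectorization models.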