Browsing by Subject "Biometrics"
Now showing 1 - 4 of 4
Item: Chronic fatigue syndrome and its relation with absenteeism: elastic-net and stepwise applied to biochemical and anthropometric clinical measurements (2021)
Neisse, Anderson Cristiano; Oliveira, Fernando Luiz Pereira de; Oliveira, Anderson Castro Soares de; Cruz, Frederico Rodrigues Borges da; Nascimento Neto, Raimundo Marques do
Characterized by persistent fatigue, pain, cognitive impairment and sleep difficulties, Chronic Fatigue Syndrome (CFS) has become common in clinical practice. Studies indicate multiple factors contributing to CFS development: poor sleep, dehydration, psychological stress, hormonal dysfunction and nutrient deficiencies, among others. In hazardous work conditions, such as the shift work of mines, CFS significantly increases the chance of fatal accidents, and mine work environments suggest the presence of factors that increase the risk of developing CFS. Considering the severity and implications of CFS symptoms for social and professional life as well as for the economy, efforts are targeting its characterization and prevention. This study assesses the risk of CFS using cross-sectional data on the absenteeism of 621 shift workers, measuring 8 anthropometric and 11 biochemical variables as well as age and gender, amounting to 21 variables. After imputation, logistic regression was fitted with Stepwise selection, Lasso and Elastic-Net regularization. Results suggest that the models do not discriminate very well due to noise inherent in the dependent variable. However, all models agree on the effects of Sodium and Total Cholesterol on the risk of absenteeism. The Stepwise model also indicates LDL and Triglycerides as significant factors, while both Lasso and Elastic-Net show effects for LDL instead. The Elastic-Net model suggests an effect of Potassium, though this is inconclusive according to the literature.
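As a rough illustration of the regularized logistic regression described in this abstract, a minimal scikit-learn sketch is shown below; the synthetic data, penalty strengths and train/test split are illustrative assumptions and do not reproduce the study's actual configuration.

```python
# Minimal sketch: Lasso and Elastic-Net logistic regression on clinical-style data.
# The data, penalty weights and split below are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(621, 21))        # 21 anthropometric/biochemical predictors (synthetic)
y = rng.integers(0, 2, size=621)      # absenteeism indicator (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

# Elastic-Net mixes L1 and L2 penalties; l1_ratio=1.0 would recover the Lasso case.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=1.0, max_iter=5000)
enet.fit(scaler.transform(X_train), y_train)

lasso = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
lasso.fit(scaler.transform(X_train), y_train)

print("non-zero elastic-net coefficients:", int(np.sum(enet.coef_ != 0)))
print("elastic-net test accuracy:", enet.score(scaler.transform(X_test), y_test))
```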
Item: EEG time series learning and classification using a hybrid forecasting model calibrated with GVNS (2017)
Coelho, Vitor Nazário; Coelho, Igor Machado; Coelho, Bruno Nazário; Souza, Marcone Jamilson Freitas; Guimarães, Frederico Gadelha; Luz, Eduardo José da Silva; Barbosa, Alexandre Costa; Coelho, Mateus Nazario; Netto, Guilherme Gaigher; Pinto, Alysson Alves; Elias, Marcelo Eustaquio Versiani; Gonçalves Filho, Dalton Cesar de Oliveira; Oliveira, Thays Aparecida de
Brain activity can be seen as a time series; in particular, an electroencephalogram (EEG) can measure it over a specific time period. In this regard, brain fingerprints can be learned by machine learning techniques, and such models have been advocated as EEG-based biometric systems. In this study, we apply a recent Hybrid Forecasting Model, which calibrates its if-then fuzzy rules with a hybrid GVNS metaheuristic algorithm, in order to learn those patterns. Due to the stochasticity of the VNS procedure, models with different characteristics can be generated for each individual. EEG recordings from 109 volunteers, measured with 64-channel EEG at a 160 Hz sampling rate, are used as case studies. Different forecasting models are calibrated with the GVNS and used for classification, and new rules for classifying individuals using forecasting models are introduced. Computational results indicate that the proposed strategy can be improved and embedded in future biometric systems.

Item: Evaluating the use of ECG signal in low frequencies as a biometry (2014)
Luz, Eduardo José da Silva; Menotti, David; Schwartz, William Robson
Traditional strategies, such as fingerprinting and face recognition, are becoming more and more susceptible to fraud. As a consequence, new and more fraud-proof biometric modalities have been considered, one of them being the heartbeat pattern acquired by an electrocardiogram (ECG). While methods for subject identification based on the ECG signal work with signals sampled at high frequencies (>100 Hz), the main goal of this work is to evaluate the use of the ECG signal at low frequencies for that aim. In this work, the ECG signal is sampled at low frequencies (30 Hz and 60 Hz) and represented by four feature extraction methods available in the literature, which are then fed to a Support Vector Machine (SVM) classifier to perform the identification. In addition, a classification approach based on majority voting using multiple samples per subject is employed and compared to the traditional classification based on the presentation of a single sample per subject each time. Considering a database composed of 193 subjects, results show identification accuracies higher than 95% and near optimality (i.e., 100%) when the ECG signal is sampled at 30 Hz and 60 Hz, respectively, the latter being very close to the results obtained when the signal is sampled at 360 Hz (the maximum frequency available in our database). We also evaluate the impact of: (1) the number of training and testing samples for learning and identification, respectively; (2) the scalability of the biometry (i.e., an increase in the number of subjects); and (3) the use of multiple samples for person identification.
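The identification scheme described in the ECG abstract, an SVM classifier combined with majority voting over multiple samples per subject, could be sketched roughly as follows; feature extraction is stubbed out with synthetic vectors, and the kernel, feature size and number of voting samples are assumptions rather than the paper's setup.

```python
# Minimal sketch: SVM subject identification with majority voting over
# multiple ECG samples. All data and hyperparameters below are assumptions.
import numpy as np
from collections import Counter
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, train_per_subj, vote_samples, n_feat = 20, 10, 5, 64

# Synthetic stand-in for feature vectors extracted from heartbeats.
X_train = rng.normal(size=(n_subjects * train_per_subj, n_feat))
y_train = np.repeat(np.arange(n_subjects), train_per_subj)

clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

def identify(samples: np.ndarray) -> int:
    """Predict each sample individually, then return the majority-voted subject id."""
    votes = clf.predict(samples)
    return int(Counter(votes).most_common(1)[0][0])

# e.g. five heartbeat feature vectors from the same (unknown) subject
query = rng.normal(size=(vote_samples, n_feat))
print("identified subject:", identify(query))
```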
Item: Exploiting a loss and a synthetic dataset protocol for biometrics system (2022)
Silva, Pedro Henrique Lopes; Moreira, Gladston Juliano Prates; Luz, Eduardo José da Silva; Queiroz, Rafael Alves Bonfim de; Silva, Rodrigo César Pedrosa; Oliveira, Luciano Rebouças de; Santos, Thiago Oliveira dos
Biometric systems are a common part of everyday life. Efforts to increase the security of these systems grow each year because of the demand for robustness. Systems based on a single biometric modality do not perform close to perfection in non-cooperative environments, which calls for more complex approaches. For this reason, new studies are developed to improve the performance of biometrics-based systems, creating new ways of teaching a machine learning algorithm to build new representations. Currently, several researchers are directing their efforts toward developing new metric learning approaches for deep learning architectures for a wide range of problems, including biometrics. In this work, a loss function based on biometric data, called D-loss, is proposed to create deep representations for use in biometric systems. The results show the effectiveness of the proposed loss function, with lowest equal-error rates (EER) of 5.38%, 13.01% and 7.96% for MNIST-Fashion, CIFAR-10 and CASIA-V4. A different strategy to increase the robustness of a system is the fusion of two or more biometric modalities. However, it is impossible to find a dataset with every possible combination of biometric modalities. A simple solution is to create a synthetic dataset, although the methodology for building one is still an open problem in the literature. In this work, a criterion is proposed to merge two or more modalities in such a way as to create similar synthetic datasets: the Doddington Zoo criterion. Several merging strategies are evaluated: score-level fusion (minimum, multiplication and sum) and feature-level fusion (simple concatenation and metric learning). An EER close to zero is also observed using the proposed fusion criteria with score-sum fusion and the Electrocardiogram (CYBHi), eye and face (FRGC) modalities. Two datasets with more than 1,000 individuals (UFPR-Periocular and UofTDB) are used to evaluate the merging criteria together with D-loss and other metric learning loss functions. The results show the ability of the Doddington Zoo criterion to create similar datasets (small standard deviation compared with the random criterion) and the robustness of D-loss (2.50% EER against 2.17% for the triplet loss and 5.74% for the multi-similarity loss).
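The score-level fusion rules named in this abstract (minimum, multiplication and sum) can be sketched as below. The scores are synthetic and min-max normalized; nothing here reproduces the thesis' actual pipeline, datasets or the D-loss itself.

```python
# Minimal sketch: score-level fusion (min, product, sum) of two biometric
# matchers. Scores below are synthetic stand-ins, not real matcher outputs.
import numpy as np

def minmax(s: np.ndarray) -> np.ndarray:
    """Min-max normalize match scores to the [0, 1] range."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

rng = np.random.default_rng(2)
scores_modality_a = minmax(rng.normal(loc=0.6, scale=0.2, size=100))  # matcher A
scores_modality_b = minmax(rng.normal(loc=0.7, scale=0.1, size=100))  # matcher B

fused = {
    "min": np.minimum(scores_modality_a, scores_modality_b),
    "product": scores_modality_a * scores_modality_b,
    "sum": (scores_modality_a + scores_modality_b) / 2.0,
}
for rule, s in fused.items():
    print(f"{rule:7s} mean fused score: {s.mean():.3f}")
```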