DECOM - Works presented at events
Permanent URI for this collection: http://www.hml.repositorio.ufop.br/handle/123456789/581
Search Results
39 results
Item Scatter search based approach for the quadratic assignment problem (1997) Cung, Van Dat; Mautor, Thierry; Michelon, Philippe Yves Paul; Tavares, Andréa Iabrudi
Scatter search is an evolutionary heuristic, proposed two decades ago, that uses linear combinations of a population subset to create new solutions. A special operator is used to ensure their feasibility and to improve their quality. In this paper, we propose a scatter search approach to the quadratic assignment problem (QAP). The basic method is extended with intensification and diversification stages, and we present a procedure to generate good scattered initial solutions.

Item An embedded converter from RS232 to universal serial bus. (2001) Zuquim, Ana Luiza de Almeida Pereira; Coelho Júnior, Claudionor José Nunes; Fernandes, Antônio Otávio; Oliveira, Marcos Pêgo; Tavares, Andréa Iabrudi
Universal Serial Bus (USB) is a new personal computer interconnection protocol, developed to make the connection of peripheral devices to a computer easier and more efficient. It reduces the cost for the end user, improves communication speed and supports simultaneous attachment of multiple devices (up to 127). RS232, on the other hand, was designed for single-device connection, but it is one of the most widely used communication protocols. An embedded converter from RS232 to USB is therefore very attractive, since it would allow serial-based devices to enjoy the advantages of USB without major changes. This work describes the specification and development of such a converter, and it also serves as a useful guide for implementing other USB devices.
The converter specification was based on the serial communication requirements of an Engetron UPS, and its implementation uses a Cypress microcontroller with USB support.

Item An early warning system for space-time cluster detection. (2003) Assunção, Renato Martins; Tavares, Andréa Iabrudi; Kulldorff, Martin
A topic of great relevance and concern has been the design of efficient early warning systems to detect, as soon as possible, the emergence of spatial clusters. In particular, many applications involving spatial events recorded as they occur sequentially in time require this kind of analysis, such as fire spots in forest areas like the Amazon, crimes occurring in urban centers, and locations of new disease cases to prevent epidemics. We propose a statistical method to test for the presence of space-time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. It is based on scanning the three-dimensional space with a score test statistic under the null hypothesis that the point process is an inhomogeneous Poisson point process with a space-time separable first-order intensity. We discuss an algorithm to carry out the test and illustrate the method with space-time crime data from Belo Horizonte, a large Brazilian city.

Item Balancing coordination and synchronization cost in cooperative situated multi-agent systems with imperfect communication. (2004) Tavares, Andréa Iabrudi; Campos, Mário Fernando Montenegro
We propose a new Markov team decision model for the decentralized control of cooperative multi-agent systems with imperfect communication. Informational classes capture the system's communication semantics and uncertainties about transmitted information, while stochastic transmission models, including delayed and lost messages, summarize the characteristics of communication devices and protocols.
This model provides a quantitative solution to the problem of balancing coordination and synchronization cost in cooperative domains, but its exact solution is computationally infeasible. We propose a generic heuristic approach, based on an off-line centralized team plan. Decentralized decision-making relies on Bayesian dynamic system estimators and decision-theoretic policy generators. These generators use the system estimators to express an agent's uncertainty about the system state and also to quantify the expected effects of communication on local and external knowledge. Probabilities of external team behavior, a byproduct of the policy generators, are used in the system estimators to infer state transitions. Experimental results on two previously proposed multi-agent tasks are presented, including limited communication range and reliability.

Item Efficient allocation of verification resources using revision history information. (2008) Nacif, José Augusto Miranda; Silva, Thiago; Tavares, Andréa Iabrudi; Fernandes, Antônio Otávio; Coelho Júnior, Claudionor José Nunes
Verifying large industrial designs is getting harder each day. The current verification methodologies are not able to guarantee bug-free designs. Some recurrent questions during a design verification are: Which modules are most likely to contain undetected bugs? In which modules should the verification team concentrate its effort? This information is very useful, because it is better to start verifying the most bug-prone modules. In this work we present a novel approach to answer these questions. In order to identify these bug-prone modules, the revision history of the design is used. Using information from an academic experiment, we demonstrate that there is a close relationship between the bug/change history and future bugs.
Our results show that allocating modules for verification based on bugs/changes led to the coverage of 91.67% of future bugs, while a random-based strategy covered only 37.5% of them; previous work has mainly focused on software engineering techniques for predicting bugs.

Item Projeto/Reprojeto de bancos de dados relacionais: a ferramenta DB-Tool. (1997) Ferreira, Anderson Almeida; Laender, Alberto Henrique Frade; Silva, Altigran Soares da
This paper describes a tool that supports the design and redesign of relational databases. The tool produces optimized relational representations of entity-relationship (ER) schemas and is implemented using Informix as its target database management system (DBMS). The tool operates in two phases. In the first phase, it receives as input an ER schema and generates a list of commands to implement the corresponding Informix schema. In the second phase, it receives a list of redesign commands specifying changes to the ER schema and generates a redesign plan to restructure the database accordingly. An example illustrates the use of the tool.

Item SyGAR – A synthetic data generator for evaluating name disambiguation methods. (2009) Ferreira, Anderson Almeida; Gonçalves, Marcos André; Almeida, Jussara Marques de; Laender, Alberto Henrique Frade; Veloso, Adriano Alonso
Name ambiguity in the context of bibliographic citations is one of the hardest problems currently faced by the digital library community. Several methods have been proposed in the literature, but none of them provides a perfect solution for the problem. More importantly, basically all of these methods were tested in limited and restricted scenarios, which raises concerns about their practical applicability. In this work, we deal with these limitations by proposing a synthetic generator of ambiguous authorship records called SyGAR.
The generator was validated against a gold-standard collection of disambiguated records, and applied to evaluate three disambiguation methods in a relevant scenario.

Item Syntactic similarity of web documents. (2003) Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
This paper presents and compares two methods for evaluating the syntactic similarity between documents. The first method uses a Patricia tree, constructed from the original document; the similarity is computed by searching for the text of each candidate document in the tree. The second method uses the shingles concept to obtain a similarity measure for every document pair: each shingle from the original document is inserted in a hash table, where the shingles of each candidate document are then searched. Given an original document and a set of candidates, the two methods find documents that have some similarity relationship with the original document. Experimental results were obtained using a plagiarized-document generator system, applied to 900 documents collected from the Web. Considering the arithmetic average of the absolute differences between the expected and obtained similarity, the algorithm that uses shingles obtained a performance of 4.13% and the algorithm that uses the Patricia tree a performance of 7.50%.

Item Geração de impressão digital para recuperação de documentos similares na web (2004) Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio
This paper presents a mechanism for the generation of the "fingerprint" of a Web document. This mechanism is part of a system for detecting and retrieving documents from the Web that have a similarity relation to a suspicious document. The process is composed of three stages: a) generation of a fingerprint of the suspicious document, b) gathering candidate documents from the Web, and c) comparison of each candidate document with the suspicious document. In the first stage, the fingerprint of the suspicious document is used as its identification.
The fingerprint is composed of representative sentences of the document. In the second stage, the sentences composing the fingerprint are used as queries submitted to a search engine. The documents identified by the URLs returned by the search engine are collected to form a set of similarity candidate documents. In the third stage, the candidate documents are compared "in place" with the suspicious document. The focus of this work is on the generation of the fingerprint of the suspicious document. Experiments were performed using a collection of plagiarized documents constructed specially for this work. For the best fingerprint evaluated, on average 87.06% of the source documents used in the composition of the plagiarized document were retrieved from the Web.

Item Um novo retrato da web brasileira. (2005) Modesto, Marco; Pereira Junior, Álvaro Rodrigues; Ziviani, Nivio; Castilho, Carlos; Yates, Ricardo Baeza
The objective of this article is to evaluate quantitative and qualitative characteristics of the Brazilian Web, comparing current estimates with those obtained five years ago. A large part of Web content is dynamic and volatile, which makes collecting it in its entirety infeasible. The evaluation process was therefore carried out on a sample of the Brazilian Web, collected in March 2005. The results are estimated consistently, using an effective methodology already applied in similar studies of the Webs of other countries. Among the main aspects observed in this work are the distribution of page languages, the use of open versus proprietary tools for generating dynamic pages, the distribution of document formats, the distribution of domain types, and the distribution of links to external Web sites.
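The scatter search entry above describes creating new solutions as linear combinations of existing ones, followed by a repair operator that restores feasibility. For permutation-encoded QAP solutions, that idea can be sketched as below; this is a generic illustration under assumed conventions (rounded midpoint combination, simplest-possible repair), not the operator actually proposed in the paper:

```python
def combine(p1, p2):
    """Combine two permutations by a rounded element-wise midpoint,
    then repair the result so it is a valid permutation again."""
    n = len(p1)
    raw = [round((a + b) / 2) for a, b in zip(p1, p2)]
    # Repair step: keep the first occurrence of each value, mark
    # duplicates/out-of-range entries, then fill the marked positions
    # with the missing values in increasing order.
    seen, result = set(), []
    for v in raw:
        if 0 <= v < n and v not in seen:
            result.append(v)
            seen.add(v)
        else:
            result.append(None)
    missing = iter(sorted(set(range(n)) - seen))
    return [v if v is not None else next(missing) for v in result]
```

A full scatter search would apply such a combination operator to pairs drawn from a reference set, followed by a local improvement phase.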
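The shingle-based method in "Syntactic similarity of web documents" compares documents through overlapping word windows. A minimal sketch of the general shingling technique follows; the word-level tokenization, the window size of 4, and the Jaccard resemblance measure are common conventions assumed here, not details taken from the paper:

```python
def shingles(text, k=4):
    """Return the set of k-word shingles (contiguous word windows) of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def resemblance(a, b, k=4):
    """Jaccard resemblance between the shingle sets of two documents:
    |intersection| / |union|, in [0, 1]."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)
```

In practice the original document's shingles would be stored in a hash table (as the abstract describes) so each candidate document can be checked in roughly linear time.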