OpenSource Name: masud-technope/BLIZZARD-Replication-Package-ESEC-FSE2018
OpenSource URL: https://github.com/masud-technope/BLIZZARD-Replication-Package-ESEC-FSE2018
OpenSource Language: Java 100.0%

Improving IR-Based Bug Localization with Context-Aware Query Reformulation

Accepted Papers at ESEC/FSE 2018 and ICSE 2018 (Poster)
Abstract: Recent findings suggest that Information Retrieval (IR)-based bug localization techniques do not perform well if the bug report lacks rich structured information (e.g., relevant program entity names). Conversely, excessive structured information (e.g., stack traces) in the bug report might not always help the automated localization either. In this paper, we propose a novel technique, BLIZZARD, that automatically localizes buggy entities from project source using appropriate query reformulation and effective information retrieval. In particular, our technique determines whether there are excessive program entities or not in a bug report (query), and then applies appropriate reformulations to the query for bug localization. Experiments using 5,139 bug reports show that our technique can localize the buggy source documents with 7%--56% higher Hit@10, 6%--62% higher MAP@10 and 6%--62% higher MRR@10 than the baseline technique. Comparison with the state-of-the-art techniques and their variants shows that our technique can improve MAP@10 by 19% and MRR@10 by 20% over the state-of-the-art, and can improve 59% of the noisy queries and 39% of the poor queries.

Subject Systems (6)
Total Bug reports: 5,139

Materials Included

- Baseline Method
- BLIZZARD-Proposed Method
- Bug Report & Goldsets
- System Corpora & Lucene Indices
- BLIZZARD Prototype & External Dependencies
- Installing, Building and Execution
- Licensing & Others
Available Operations
Required parameters for the operations
Q.1: How to install the BLIZZARD tool?
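The original command listing was dropped when this page was scraped; as a minimal sketch, assuming the prototype ships as a prebuilt runnable JAR, getting started could look like the following (the repository URL comes from the metadata above; everything else is just standard tooling):

```bash
# Clone the replication package
git clone https://github.com/masud-technope/BLIZZARD-Replication-Package-ESEC-FSE2018.git
cd BLIZZARD-Replication-Package-ESEC-FSE2018

# The tool is 100% Java, so a JDK/JRE must be available on the PATH
java -version
```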
Q.2: How to get reformulated queries for a system?
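The exact invocation is missing from this copy of the README. The sketch below is hypothetical: the JAR name (blizzard-runner.jar), the subject-system name, and the flags -task, -repo, -bugIDFile and -queryFile are all assumptions inferred from the surrounding description, not verified options of the tool:

```bash
# Hypothetical invocation: reformulate the sample bug reports of one
# subject system. All flag names and file paths here are assumptions.
java -jar blizzard-runner.jar \
    -task reformulateQuery \
    -repo eclipse.jdt.core \
    -bugIDFile ./sample-input/sample-bugs.txt \
    -queryFile ./sample-output/reformulated-queries.txt
```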
Currently, the tool extracts raw bug reports from the "BR-Raw" folder using the sample input Bug-IDs, and then reformulates the reports.

Query File format:

```
BugID1  Reformulated-query
BugID2  Reformulated-query
BugID3  Reformulated-query
...
```

Q.3: How to collect Top-K bug localization results?
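Again, the original command was lost in scraping. A hypothetical sketch follows, reusing the assumed JAR and flag names from Q.2; of these, only -topk is actually mentioned in this README:

```bash
# Hypothetical: collect Top-10 localization results and report
# Hit@10, MRR@10 and MAP@10 for the reformulated queries.
java -jar blizzard-runner.jar \
    -task bugLocalization \
    -repo eclipse.jdt.core \
    -queryFile ./sample-output/reformulated-queries.txt \
    -topk 10 \
    -resultFile ./sample-output/localization-results.txt

# For Query Effectiveness (see Q.4), set -topk to a large number,
# e.g. -topk 100000, so every source file is ranked for each query.
```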
The above command collects the Top-10 results and reports Hit@10, MRR@10 and MAP@10 for the queries. If you want all the results rather than the Top-K only, you can set -topk to a big number (e.g., 100000); this provides the ranking of all source code files for each given query.

DISCLAIMER: Currently, the tool provides system-specific results. Hence, the performances reported in Table 5 (of the paper) can be found by averaging the results from all six subject systems.

Q.4: How to determine Query Effectiveness (QE) performances?

You can set -topk to a big number (e.g., 100000) to get all the results. This provides the ranking of all source code files for each query, which can then be used to determine the Query Effectiveness (QE).

Q.5: How to replicate the bug localization performances reported in the paper?
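The replication command itself is missing from this copy; a hypothetical sketch, in which the task name is purely an assumption and not a documented flag:

```bash
# Hypothetical: reproduce Table 5, i.e., Hit@10, MRR@10 and MAP@10
# averaged over all six subject systems. The task name is an assumption.
java -jar blizzard-runner.jar -task replicateTable5 -topk 10
```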
This command shows Hit@10, MRR@10 and MAP@10 for all six subject systems, along with their mean measures (as shown in Table 5).

Q.6: How to replicate the Query Effectiveness performances reported in the paper?
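As with Q.5, the command was dropped by the scrape; a hypothetical sketch, with the task name again an assumption:

```bash
# Hypothetical: reproduce the query improvement/worsening/preserving
# statistics of Table 9. The task name is an assumption.
java -jar blizzard-runner.jar -task replicateTable9
```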
This command shows the query improvement, query worsening and query preserving statistics across all six subject systems (as shown in Table 9).

Please cite our work as
Related Projects: ACER, STRICT, and QUICKAR

Something not working as expected? Contact: Masud Rahman ([email protected]) or create an issue from here.