We offer Artifact Evaluation for both accepted regular papers and journal-track papers.
The goal of this initiative is to promote reproducibility of published results by highlighting papers supported with open-source code. These papers will be identified by badges from the IEEE. More details are provided below. Artifact evaluation is optional, and the artifact evaluation process is separate from the peer review process.
Artifact Form Requirements
What is the Artifact Form?
If you opt for Artifact Evaluation, the Artifact Form is used to collect the necessary information. The form describes the presence or absence of computational artifacts (software, hardware, or data) that support the research presented in the paper.
Do I need to make my software open source in order to complete the Artifact Form?
No. You are not asked to make any changes to your computing environment or design process in order to complete the form. The form is meant to describe the computing environment in which you produced your results and any artifacts you wish to share. Author-created software need not be open source, unless you wish to be eligible for an IEEE badge.
Who will review my artifact form?
The Artifact Evaluation Committee (AEC) will review the information provided and will check that artifacts are indeed available at the URLs given. They will also help authors improve their forms, in a double-open arrangement. If authors select this option, their paper may be evaluated for an Artifacts Available badge. Some papers will also be evaluated for the Results Replicated badge. If your paper is chosen, the AEC will contact you with any questions regarding your artifacts.
How will review of artifacts interact with the double-blind review process?
Artifact review will not take place until after decisions on papers have been made. Reviewers will not have access to the artifact form. Authors should not include links to their repositories in their paper. The paper review process is double blind. The artifact review process is not.
Impact of Artifact Form (AF)
What’s the impact of an Artifact Form on scientific reproducibility?
Reproducibility depends, as a first step, on sharing the provenance of results with transparency, and the AF is an instrument of documentation and transparency. A good AF helps researchers document their results, and helps other researchers build from them.
The paper text explains why I believe my answers are right and shows all my work. Why do I need to provide an AF?
There are many good reasons for formalizing the artifact description and evaluation process. Standard practice varies across disciplines. Labelling the evaluation as such improves our ability to review the paper and improves reader confidence in the veracity of the results.
What are “author-created” artifacts and why make the distinction?
Author-created artifacts are the hardware, software, or data created by the paper’s authors. Only these artifacts need be made available to facilitate evaluation. Proprietary, closed-source artifacts (e.g., commercial software and CPUs) will necessarily be part of many research studies. These proprietary artifacts should be described to the best of the authors’ ability but do not need to be provided.
What about proprietary author-created artifacts?
The ideal case for reproducibility is to have all author-created artifacts publicly available with a stable identifier. Papers involving proprietary, closed-source author-created artifacts should indicate the availability of the artifacts and describe them as fully as possible. Note that results dependent on closed-source artifacts are not reproducible and are therefore ineligible for some badges.
Are the numbers used to draw our charts a data artifact?
Not necessarily. Data artifacts are the data (input or output) required to reproduce the results, not necessarily the results themselves. For example, if your paper presents a system that generates charts from datasets, then providing an input dataset would facilitate reproducibility. However, if the paper merely uses charts to elucidate results then the input data to whatever tool you used to draw those charts isn’t required to reproduce the paper’s results. The tool which drew the chart isn’t part of the study, so the input data to that tool is not a data artifact of this work.
Help! My data is HUGE! How do I make it publicly available with a stable identifier?
Use Zenodo. Contact them for information on how to upload extremely large datasets. You can easily upload datasets of 50GB or less, have multiple datasets, and there is no size limit on communities.
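Programmatic uploads can help with large or numerous datasets. As a sketch of Zenodo's documented REST deposit workflow (the access token, title, and creator names below are placeholders, not values from this page), creating a new dataset deposition might look like:

```python
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"

def build_deposition_request(token, title, description, creators):
    """Build the HTTP request that creates a new Zenodo deposition.

    The metadata fields follow Zenodo's deposition metadata schema;
    the concrete values are placeholders for your own dataset.
    """
    metadata = {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": creators,  # e.g. [{"name": "Doe, Jane"}]
        }
    }
    return urllib.request.Request(
        f"{ZENODO_API}?access_token={token}",
        data=json.dumps(metadata).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage -- requires a real Zenodo access token:
# req = build_deposition_request("MY_TOKEN", "FPGA benchmark traces",
#                                "Input datasets for our paper",
#                                [{"name": "Doe, Jane"}])
# with urllib.request.urlopen(req) as resp:
#     deposition = json.load(resp)  # response includes the reserved DOI
```

After uploading files to the deposition and publishing it, Zenodo mints a DOI, which serves as the stable identifier requested by the Artifact Form.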
What’s the impact of Artifact Evaluation on results from specialized computing platforms?
An artifact-evaluation effort can increase the trustworthiness of computational results. It can be particularly effective in the case of results obtained using specialized computing platforms, not available to other researchers. Leadership computing platforms, novel testbeds, and experimental computing environments are of keen interest to the FPGA community. Access to these systems is typically limited, however. Thus, most reviewers cannot independently check results, and the authors themselves may be unable to recompute their own results in the future, given the impact of irreversible changes in the environment (compilers, libraries, components, etc.). The various forms of Artifact Evaluation improve confidence that computational results from these special platforms are correct.
The following four badges are considered and awarded in the artifact evaluation:
Artifacts Available: Author-created artifacts relevant to this paper have been placed in a publicly accessible archival repository. A DOI or link to this repository, along with a unique identifier for the object, is provided.
Artifacts Evaluated—Functional: The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
Artifacts Evaluated—Reusable: The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated—Functional level, but, in addition, they are very carefully documented and well-structured, to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
Results Replicated: The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
Q: I only want to make my artifacts public if the paper is accepted. Is that okay?
A: Yes. Artifacts will only be examined after papers are accepted. You can wait until you hear the decision on your paper before making your artifacts public, but you must complete the form when invited, to allow enough time for later processing; information in the form will not be used until after paper acceptance.
Q: Does the artifact submission link need to be anonymous?
A: Any information in the paper itself must be anonymized. The link in the artifact form should not be anonymous; the form will not be reviewed until after paper decisions are made.
Q: When should I submit my artifacts, and what is the deadline?
A: Submit the form when invited, if you would like to take part in Artifact Evaluation. Put the artifact link in the form, but you do not need to make the artifact public until after the paper is accepted/published. No one will look at artifacts until your paper is accepted. We require the information up front so that the evaluation process can start as soon as papers are accepted, hence the requirement to submit the form at invitation time.
Q: Can I update the artifact evaluation form after paper acceptance?
A: You are free to make changes to details of the artifacts up to the start of the evaluation, in consultation with your evaluator, but the initial submission should include as many details as possible to allow the evaluation process to be organised.