News: notes from this joint CGO-PPoPP AE session are now available
online.
We would like to invite all researchers to an open CGO-PPoPP'17
Artifact Evaluation discussion on February 6 (Monday) at 17:15-17:45 (room 400/402, Hilton Austin, Texas, USA).
The program is as follows:
- Briefly presenting Artifact Evaluation results for CGO'17 and PPoPP'17
- Announcing joint CGO/PPoPP'17 distinguished artifact awards:
- A $500 cheque presented by Grigori Fursin from dividiti for the highest-ranked artifact implemented using Collective Knowledge (an open-source framework for sharing artifacts as customizable and reusable Python components with a JSON API, automating software installation and detection, and quickly prototyping cross-platform experimental workflows; see the sketch after this list).
- Discussing how to improve future AE and make it more scalable, introducing a new option of open reviews, discussing open challenges in computer engineering, and sharing knowledge about tools and techniques that enable collaborative and reproducible computer systems research.
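For those curious about what a CK-based workflow looks like in practice, below is a minimal sketch of CK's JSON-based Python API. It assumes the CK framework is installed (pip install ck); the query itself (listing all "program" components found in local CK repositories) is just an illustrative example, not part of any particular artifact.

```python
# Minimal sketch of the Collective Knowledge (CK) JSON API.
# Assumes the CK framework is installed: pip install ck
import ck.kernel as ck

# Every CK call takes a JSON-compatible dict and returns one;
# a non-zero 'return' code signals an error described in 'error'.
r = ck.access({'action': 'search',
               'module_uoa': 'program'})
if r['return'] > 0:
    raise RuntimeError(r['error'])

# Print the program components (shared artifacts) found in local CK repos.
for entry in r['lst']:
    print(entry['data_uoa'])
```

The same actions are also available from the command line (for example, "ck search program"), which is part of what makes such workflows easy to reuse and customize.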
We had a record number of artifact submissions this time: 27 versus 17 two years ago. It is great to see that researchers are now taking AE seriously, but this growth also highlighted new issues:
1) The growing number of diverse artifacts made it difficult to find AE members with the appropriate knowledge, skills, and access to rare hardware and software.
2) Ad hoc experimental setups placed a considerable burden on AE members when installing, running, and processing very complex experiments, particularly when a native environment is required (for example, for performance analysis and tuning) and Docker/VM images are not suitable.
3) It is still not clear whether we are ready to demand full validation of all experiments in a paper or should continue to allow partial validation. However, we understand that the complexity of experiments and the lack of common experimental frameworks and methodology make full validation of some experiments very challenging, if possible at all.
Note that to address some of these issues we tried "open reviewing" for the first time this year: for example, we asked the community to help us evaluate several open-source artifacts that were already publicly available at the time of submission. It turned out very well (see the links to public discussions), since we managed to find researchers with access to rare hardware and the appropriate skills. Furthermore, public comments helped authors communicate with reviewers directly (note that reviewers can still remain anonymous) and fix encountered issues immediately rather than waiting for the rebuttal.
We really want to hear your opinions and suggestions about how to solve these issues and improve AE, so we hope you will be able to join us at this discussion session! Also, do not hesitate to contact the Artifact Evaluation Steering Committee directly. Remember that new AE procedures may affect you at future conferences!
Looking forward to your participation and suggestions!