Tuesday, 17 January 2017

Artifact Evaluation discussion session at CGO/PPoPP'17

News: notes from this joint CGO-PPoPP AE session are now available online.

We would like to invite all researchers to an open CGO-PPoPP'17 Artifact Evaluation discussion on February 6 (Monday) at 17:15-17:45 (room 400/402, Hilton Austin, Texas, USA).

The program is the following:
  • Briefly presenting Artifact Evaluation results for CGO'17 and PPoPP'17

  • Announcing joint CGO/PPoPP'17 distinguished artifact awards:
    • A $500 cheque presented by Grigori Fursin from dividiti for the highest-ranked artifact implemented using Collective Knowledge (an open-source framework for sharing artifacts as customizable and reusable Python components with a JSON API, automating software installation and detection, and quickly prototyping cross-platform experimental workflows; see the sketch after this list).
  • Discussing how to improve future AE and make it more scalable, introducing a new option of open reviews, discussing open challenges in computer engineering, and sharing knowledge about tools and techniques that enable collaborative and reproducible computer systems research.
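For those unfamiliar with Collective Knowledge, below is a minimal sketch of its JSON-in/JSON-out Python API, assuming the CK framework is installed (e.g. via pip install ck); the component name 'my-program' is a hypothetical placeholder, and the exact keys may vary slightly between CK versions.

```python
# Minimal sketch of the Collective Knowledge (CK) convention: every call
# is a Python dictionary in, dictionary out (hence the JSON API).
import ck.kernel as ck

# Look up program components; 'my-program' is a hypothetical name.
r = ck.access({'action': 'search',
               'module_uoa': 'program',
               'search_string': 'my-program'})
if r['return'] > 0:
    ck.err(r)  # standard CK error handling: print the message and exit

for entry in r.get('lst', []):
    print(entry['data_uoa'])
```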
We had a record number of artifact submissions this time: 27, versus 17 two years ago. It is really great to see that researchers are now taking AE seriously, but this growth also highlighted new issues:

1) A growing number of diverse artifacts made it somewhat difficult to find AE committee members with the appropriate knowledge, skills and access to rare hardware and software.

2) Ad-hoc experimental setups placed a considerable burden on AE members and the committee when installing, running and processing very complex experiments, particularly when a native environment is required (for example, for performance analysis and tuning) and Docker/VM images are not suitable.

3) It is still not clear whether we are ready to demand full validation of all experiments from a paper or should still allow partial validation. However, we do understand that the complexity of experiments and the lack of common experimental frameworks and methodology make full validation of some experiments really challenging, if possible at all.

Note that to address some of these issues we tried "open reviewing" for the first time this year: for example, we asked the community to help us evaluate several open-source artifacts that were already publicly available at the time of submission. It turned out very well (see links to public discussions), since we managed to find researchers with access to rare hardware and the appropriate skills. Furthermore, public comments helped authors communicate with reviewers directly (reviewers can still remain anonymous) and fix encountered issues immediately rather than waiting for the rebuttal.

We really want to know your opinions and suggestions about how to solve these issues and improve AE. Therefore we hope you will be able to join us at this discussion session! Also, do not hesitate to contact the Artifact Evaluation Steering Committee directly! Remember that new AE procedures may affect you at future conferences!

Looking forward to your participation and suggestions!

Monday, 9 January 2017

Exciting internships at dividiti (deep learning, runtime adaptation, SW/HW co-design)

We wish you a very happy and successful New Year!

If you are passionate about performance analysis and optimization, run-time adaptation and SW/HW co-design, as well as collaborative and reproducible experimentation, we would like to draw your attention to several exciting internships at dividiti available for HiPEAC PhD students:
  1. Collective Knowledge on Deep Learning (apply here).
  2. Crowdtuning and runtime adaptation of open-source CPU/GPU libraries (apply here).
  3. Solving grand challenges in computer systems via knowledge sharing and crowdsourcing (apply here).
You can find general information about HiPEAC internships here. Our internships will be for 3-6 months between February and December 2017 in our fantastic office in Cambridge, UK. Please apply before 1 February 2017!

Collective Knowledge on Deep Learning

You will contribute to our growing suite of open-source tools for crowd-benchmarking and crowd-tuning of deep learning applications (CK-Caffe, CK-TensorFlow, CK-TinyDNN, CK-TensorRT, etc.), being developed in collaboration with our customers and partners. We aim to collectively grow optimisation knowledge on deep learning to meet the performance, prediction-accuracy and cost requirements for deployment on a wide range of form factors, from sensors to self-driving cars.
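As a taster, here is a hedged sketch of driving a CK-Caffe style workflow through the CK Python API; the repository URL and program name follow the public ck-caffe project, but the exact dictionary keys are assumptions that may differ between versions.

```python
# Sketch of pulling the ck-caffe repository and running its benchmark
# through the CK JSON API; the dictionaries mirror the CLI commands
# 'ck pull repo:ck-caffe --url=...' and 'ck run program:caffe'.
import ck.kernel as ck

def ck_do(request):
    """Call CK and abort on error (CK returns a dictionary with 'return')."""
    r = ck.access(request)
    if r['return'] > 0:
        ck.err(r)
    return r

# Pull the public ck-caffe repository with its dependencies.
ck_do({'action': 'pull',
       'module_uoa': 'repo',
       'data_uoa': 'ck-caffe',
       'url': 'https://github.com/dividiti/ck-caffe'})

# Run the Caffe benchmark; CK detects or installs the required
# software dependencies for the host platform.
ck_do({'action': 'run',
       'module_uoa': 'program',
       'data_uoa': 'caffe'})
```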

Sounds interesting? Please read more about our initiatives in the latest HiPEAC newsletter (1, 2), try out our Android app and... apply!

Crowdtuning and runtime adaptation of open-source CPU/GPU libraries

Several open-source libraries are readily available (e.g. OpenBLAS, MAGMA, ViennaCL, clBLAS, CLBlast). Unfortunately, in terms of performance they generally trail behind closed-source libraries (e.g. Intel's MKL, NVIDIA's cuBLAS). First, developers typically expose only a few optimization parameters ("knobs") for tuning, since tuning is a tedious, time-consuming and hardware-specific process. Second, developers have no effective means of transferring optimization knowledge between projects.
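To make the "knobs" idea concrete, here is an illustrative, self-contained sketch (not dividiti's actual tuner) that exposes a single tile-size knob of a blocked matrix multiplication and searches for its hardware-specific sweet spot:

```python
# Toy autotuner: explore one exposed knob (the tile size of a blocked
# GEMM) and keep the fastest setting for this particular machine.
import random
import time

N = 128
A = [[random.random() for _ in range(N)] for _ in range(N)]
B = [[random.random() for _ in range(N)] for _ in range(N)]

def blocked_matmul(A, B, tile):
    """Plain-Python blocked GEMM; 'tile' is the single exposed knob."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):
                Ai, Ci = A[i], C[i]
                for k in range(kk, min(kk + tile, n)):
                    a, Bk = Ai[k], B[k]
                    for j in range(n):
                        Ci[j] += a * Bk[j]
    return C

best = None
for tile in (8, 16, 32, 64, 128):  # the knob space being explored
    start = time.time()
    blocked_matmul(A, B, tile)
    elapsed = time.time() - start
    print('tile=%3d  time=%.2fs' % (tile, elapsed))
    if best is None or elapsed < best[1]:
        best = (tile, elapsed)
print('best knob setting: tile=%d' % best[0])
```

Real libraries have dozens of such knobs (tile shapes, vector widths, unrolling factors, work-group sizes), which is exactly why tuning them by hand for every device is impractical.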

You will contribute to an ambitious and exciting open-source initiative to enable library crowd-tuning via our Collective Knowledge framework and repository. This initiative will allow the community to easily compare various implementations of library routines across different data sets and diverse hardware, gradually expose more and more optimization choices, continuously crowd-tune such routines, share optimization statistics in a public repository, and automatically assemble the best and possibly adaptive solution for a given platform.
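Below is a hedged sketch of what sharing a single tuning result via CK's experiment module could look like; the 'add' action and its keys are assumptions based on CK's public documentation, and the entry name is hypothetical.

```python
# Sketch: record a tuned knob setting and its measured characteristics
# as a CK experiment entry so others can reproduce and build on it.
import ck.kernel as ck

r = ck.access({'action': 'add',
               'module_uoa': 'experiment',
               'data_uoa': 'gemm-tuning-demo',   # hypothetical entry name
               'dict': {'knobs': {'tile': 32},
                        'characteristics': {'time_s': 1.8}}})
if r['return'] > 0:
    ck.err(r)
```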

Sounds interesting? Please read more about our initiatives in the latest HiPEAC newsletter (1, 2), and apply!

Solving grand challenges in computer systems via knowledge sharing and crowdsourcing

You will contribute to solving grand challenges in computer systems research by sharing research artefacts and crowdsourcing experimentation! Please read more about our approach and our startup by following the links below, and apply!