Wednesday, 2 March 2016

Brand new GCC/LLVM crowdtuning engine has been released (including an Android app)

 
Dear colleagues,
 
We have finally released a new Collective Knowledge workflow
to crowdsource multi-objective GCC/LLVM compiler flag
optimization. The results shared by volunteers are continuously
updated and classified here:
* http://cTuning.org/crowdtuning-results-gcc
* http://cTuning.org/crowdtuning-results-llvm

If you are interested, you can participate in this collaborative
optimization in two ways:

a) Using a small Android app to crowdsource autotuning across
mobile devices: http://cTuning.org/crowdtuning-via-mobile-devices

b) Using the CK framework on your laptop, server or data center. We have
tried to make it as simple as possible; you just need a few steps:
  1. Check that you have Python >= 2.7 and Git installed
  2. Download CK from GitHub: $ git clone http://github.com/ctuning/ck ck-master
  3. Add ck-master/bin to your PATH: $ export PATH=$PWD/ck-master/bin:$PATH
  4. Pull all repos needed for crowd-tuning (one example of collaborative program optimization and machine learning): $ ck pull repo:ck-crowdtuning
  5. Start interactive experiment crowdsourcing: $ ck crowdsource experiments
  6. Start non-interactive crowdtuning for LLVM compilers: $ ck crowdtune program --quiet --llvm
  7. Start non-interactive crowdtuning for GCC compilers: $ ck crowdtune program --quiet --gcc
If you are on Windows and have the MinGW compilers installed,
you can also participate in crowdtuning via:

  $ ck crowdtune program --quiet --target_os=mingw-64

Our crowdtuning engine randomly picks publicly shared workloads
(benchmarks, kernels, data sets) in CK format from GitHub,
tunes them, applies a Pareto filter, prunes the best found optimization
solutions (leaving only influential flags in the case of compiler
crowd-tuning), and stores the results in a public CK aggregator.
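The Pareto-filtering step mentioned above can be sketched in a few lines of Python. This is only a minimal illustration of the general multi-objective technique, not the actual CK implementation; the objectives (execution time, binary size) and the sample numbers are invented for demonstration:

```python
# Illustrative sketch of multi-objective Pareto filtering, as used
# conceptually by the crowdtuning engine. NOT the actual CK code;
# the objectives and sample data below are made up for demonstration.

def pareto_filter(points):
    """Keep only non-dominated points, minimizing every objective.

    A point p is dominated if some other point q is no worse than p
    in all objectives and strictly better in at least one.
    """
    frontier = []
    for p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Each point: (execution time in s, binary size in KB)
# for one hypothetical compiler-flag combination.
results = [(1.0, 120), (0.8, 150), (1.2, 100), (0.9, 130), (0.8, 140)]
print(pareto_filter(results))  # (0.8, 150) is dominated by (0.8, 140)
```

Only the combinations on the resulting frontier are worth sharing: every other flag combination is beaten on all objectives at once by some frontier point.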

Workloads are available here:
* http://github.com/ctuning/ctuning-programs
* http://github.com/ctuning/ctuning-datasets-min
* https://github.com/ctuning/ck/wiki/Shared_repos

This new version of our framework is still in its beta phase, so we
apologize in advance for possible glitches.

However, we still hope it will be useful to compiler developers
for detecting and fixing problems with optimization heuristics using
shared workloads, to performance engineers for reusing the pool of top
optimizations for a given compiler/CPU, and to researchers working
on machine-learning-based self-tuning computing systems.

Depending on our availability and funding, we will continue making CK
more user-friendly, adding more realistic workloads and developing new
optimization scenarios (CUDA/OpenCL crowd-tuning is coming soon).
We are also improving the reproducibility of shared optimization results
by fixing the common autotuning pipeline (http://github.com/ctuning/ck-autotuning )
whenever there is a problem replaying a given experiment.

If you are interested in arranging new R&D projects based on this
technology, or simply have feedback, do not hesitate to get in touch!

Have fun,
Grigori

=========================================
Grigori Fursin, PhD
CTO, dividiti, UK

Tuesday, 1 March 2016

We were interviewed on Austrian radio!

We were interviewed on Austrian radio about our collaborative SW/HW co-design approach. If you understand German, you can listen to it here:
http://oe1.orf.at/programm/427011

Don't miss our Collective Knowledge talk and demos at DATE'16!

The program is available online (Wednesday, March 16, 2016):

http://www.date-conference.com/conference/session/8.2