
Performance tuning Twitter services with Graal and Machine Learning

Formal Metadata

Title: Performance tuning Twitter services with Graal and Machine Learning
Number of Parts: 561
License: CC Attribution 2.0 Belgium: You are free to use, adapt, copy, distribute, and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Running Twitter services on Graal has been very successful and has saved Twitter a lot of money in datacenter costs. But we would like to run even more efficiently to reduce costs further. I mean, who doesn't? To do this, we are using our Machine Learning framework, Autotune, to tune Graal inlining parameters. This talk will show how much performance improvement we got by autotuning Graal.
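
To give a flavor of what "tuning Graal inlining parameters" means in practice, the sketch below runs a simple black-box search over a few Graal inlining knobs, launching the JVM with candidate -Dgraal.* settings and keeping the configuration with the best measured throughput. Autotune itself is Twitter-internal and is not shown here; the random-search strategy, the benchmark jar path, the parameter ranges, and the expected "throughput:" output line are all illustrative assumptions, not the talk's actual setup.

    # Minimal sketch: black-box search over Graal inlining parameters.
    # Assumes a benchmark harness jar that prints "throughput: <number>".
    import random
    import re
    import subprocess

    BENCHMARK_JAR = "service-benchmark.jar"   # hypothetical benchmark harness

    # Graal compiler inlining knobs exposed as -Dgraal.* system properties.
    SEARCH_SPACE = {
        "TrivialInliningSize": range(5, 51, 5),
        "MaximumInliningSize": range(100, 501, 50),
        "SmallCompiledLowLevelGraphSize": range(100, 501, 50),
    }

    def run_benchmark(params):
        """Launch the JVM with candidate inlining parameters, return throughput."""
        flags = [f"-Dgraal.{name}={value}" for name, value in params.items()]
        cmd = ["java", "-XX:+UseJVMCICompiler", *flags, "-jar", BENCHMARK_JAR]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return float(re.search(r"throughput:\s*([\d.]+)", out).group(1))

    best_params, best_score = None, float("-inf")
    for _ in range(20):                        # fixed budget of 20 trials
        candidate = {k: random.choice(list(v)) for k, v in SEARCH_SPACE.items()}
        score = run_benchmark(candidate)
        if score > best_score:
            best_params, best_score = candidate, score

    print(f"best throughput {best_score:.1f} with {best_params}")

A production system such as Autotune would replace the random search with a smarter optimizer (for example Bayesian optimization) and measure the live service rather than an offline benchmark, but the overall loop of propose parameters, run, measure, and keep the best is the same.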