Think Parallel

By Bryce Adelstein Lelbach

By default, we think sequentially. Parallelism is often seen as challenging and complex: a tool to be used sparingly and cautiously, and only by experts.

But we must shatter these assumptions, for today we live in a parallel world. Almost every hardware platform is parallel, from the smallest embedded devices to the largest supercomputers.

We must change our mindset. Anyone who writes code has to think in parallel. Parallelism must become our default.

In this example-driven talk, we will journey into the world of parallelism. We’ll look at four algorithms and data structures in depth, comparing and contrasting different implementation strategies and exploring how they will perform both sequentially and in parallel.

During this voyage, we’ll uncover and discuss some foundational principles of parallelism, such as latency hiding, localizing communication, and efficiency vs performance tradeoffs. By the time we’re done, you’ll be thinking in parallel.
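To give a flavour of that shift in mindset, here is a minimal sketch (not taken from the talk) of the same reduction written sequentially and then in parallel, using the C++17 standard algorithms and execution policies:

    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
      std::vector<double> v(1'000'000, 1.0);

      // Sequential by default: the reduction runs on one thread.
      double seq = std::reduce(v.begin(), v.end(), 0.0);

      // The same algorithm with an execution policy, which allows the
      // implementation to split the work across threads or vector lanes.
      double par = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

      return seq == par ? 0 : 1;
    }

The algorithm and the data do not change; only the stated assumption about how the work may be scheduled does.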




