

A data-graph computation, popularized by such programming systems as Galois, Pregel, GraphLab, PowerGraph, and GraphChi, is an algorithm that performs local updates on the vertices of a graph. During each round of a data-graph computation, an update function atomically modifies the data associated with a vertex as a function of the vertex's prior data and that of adjacent vertices. A dynamic data-graph computation updates only an active subset of the vertices during a round, and those updates determine the set of active vertices for the next round.

This article introduces Prism, a chromatic-scheduling algorithm for executing dynamic data-graph computations. Prism uses a vertex coloring of the graph to coordinate updates performed in a round, precluding the need for mutual-exclusion locks or other nondeterministic data synchronization. Prism maintains the dynamic set of active vertices as an unordered set partitioned by color, using a multibag data structure.

We analyze Prism using work-span analysis. Let G = (V, E) be a degree-Δ graph colored with χ colors, and suppose that Q ⊆ V is the set of active vertices in a round. Define size(Q) = |Q| + Σ_{v∈Q} deg(v), which is proportional to the space required to store the vertices of Q using a sparse-graph layout. We show that a P-processor execution of Prism performs the updates in Q using O(χ(lg(Q/χ) + lg Δ) + lg P) span and Θ(size(Q) + P) work.

These theoretical guarantees are matched by good empirical performance. To isolate the effect of the scheduling algorithm on performance, we modified GraphLab to incorporate Prism and studied seven application benchmarks on a 12-core multicore machine. Prism executes the benchmarks 1.2 to 2.1 times faster than GraphLab's nondeterministic lock-based scheduler while providing deterministic behavior.

This article also presents Prism-R, a variation of Prism that executes dynamic data-graph computations deterministically even when updates modify global variables with associative operations. Prism-R satisfies the same theoretical bounds as Prism, but its implementation is more involved, incorporating a multivector data structure to maintain a deterministically ordered set of vertices partitioned by color. Despite its additional complexity, Prism-R is only marginally slower than Prism: on the seven application benchmarks studied, it incurs a 7% geometric-mean overhead relative to Prism.
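The chromatic-scheduling idea behind Prism can be sketched with a small serial model (illustrative Python, not the paper's implementation; the graph, coloring, and update function below are made up for the example). Vertices of the same color share no edge, so all updates within one color bucket could run in parallel without locks; here each bucket is simply processed in a loop as a stand-in, and a dictionary of per-color lists plays the role of the multibag of active vertices.

```python
# Minimal model of one round of chromatic scheduling for a dynamic
# data-graph computation. A proper coloring guarantees that vertices
# in the same bucket are non-adjacent, so their updates do not
# conflict and need no locks.

def chromatic_round(graph, color, data, active, update):
    """Run one round over the active set, bucketed by color.

    graph:  dict vertex -> list of neighbors
    color:  dict vertex -> color id (a proper vertex coloring)
    data:   dict vertex -> current value (modified in place)
    active: set of vertices to update this round
    update: function (v, data, graph) -> (new_value, vertices_to_wake)
    Returns the set of active vertices for the next round.
    """
    # "Multibag" stand-in: active vertices partitioned by color.
    buckets = {}
    for v in active:
        buckets.setdefault(color[v], []).append(v)

    next_active = set()
    for c in sorted(buckets):           # colors processed one at a time
        for v in buckets[c]:            # same-color updates are independent
            data[v], wake = update(v, data, graph)
            next_active |= wake
    return next_active

# Toy example: a path 0-1-2 with a proper 2-coloring, and an update
# that relaxes each vertex toward its neighbors' values minus one,
# waking neighbors only when its value changes.
graph = {0: [1], 1: [0, 2], 2: [1]}
color = {0: 0, 1: 1, 2: 0}
data = {0: 0.0, 1: 3.0, 2: 0.0}

def relax(v, data, graph):
    new = max([data[v]] + [data[u] - 1 for u in graph[v]])
    wake = set(graph[v]) if new != data[v] else set()
    return new, wake

nxt = chromatic_round(graph, color, data, {0, 1, 2}, relax)
# After one round: data == {0: 2.0, 1: 3.0, 2: 2.0} and nxt == {1}
```

Because no two vertices in a bucket are adjacent, the result of a round does not depend on the order in which a bucket's updates run, which is what makes the lock-free execution deterministic.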

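Prism-R's guarantee for associative global updates can also be illustrated with a small model (again illustrative Python, not the paper's multivector implementation). Rather than letting concurrent updates race on a shared accumulator, each update's contribution is recorded separately, and the contributions are combined in a fixed vertex order; any parallel interleaving then yields the same result, even for an associative operator that is not commutative.

```python
def deterministic_reduce(contribs, combine, identity):
    """Combine per-vertex contributions in a fixed vertex order.

    contribs: dict vertex -> contribution; entries may have been
              gathered in any order (e.g., by parallel workers)
    combine:  an associative binary operator
    identity: identity element of combine
    """
    acc = identity
    for v in sorted(contribs):               # deterministic order,
        acc = combine(acc, contribs[v])      # independent of scheduling
    return acc

# String concatenation is associative but not commutative, so a racy
# shared accumulator would be nondeterministic; reducing in vertex
# order is not, no matter how contribs was populated.
contribs = {2: "c", 0: "a", 1: "b"}
result = deterministic_reduce(contribs, lambda x, y: x + y, "")
# result == "abc"
```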