# GPU computational geometry algorithms

**Posted:** July 19, 2012

**Filed under:** Available Project/Thesis Topics

**Tags:** algorithms, computational-geometry, gpgpu

A Graphics Processing Unit (GPU) is a kind of computer processor originally designed to render 3D graphics for computer games. It turns out that the kinds of computations that GPUs do can be applied to many other general purpose computing problems, and in some cases GPU algorithms can be vastly faster than traditional CPU algorithms. Computational geometry is a branch of theoretical computer science concerned with algorithms and data structures for geometric objects.

The leading manufacturer of GPUs is NVIDIA. OpenCL and CUDA are two popular programming environments for writing GPU code. I have experience with OpenCL but am open to using other languages.

In addition to these low-level languages, there are emerging high-level tools such as PyOpenCL, ParFunk, and Thrust.

Project: Implement a novel computational geometry data structure and/or algorithm on the GPU. Measure its performance and determine whether it is faster than comparable CPU implementations.

Some ideas for worthwhile data structures or algorithms:

- A doubly connected edge list (DCEL) or comparable structure. This is an important one; many algorithms depend on using a DCEL structure.
- Low-dimension linear programming, convex programming, or LP-type programming.
- k-dimensional convex hulls. 2D and 3D convex hulls have already been done.
- Voronoi diagrams.
- Binary space partitions.
- Range trees.
- Quad trees.
- Planar graph separators.
- Well-separated pair decompositions.
- Anything else implemented in CGAL.
- Problems on my list of MapReduce topics might apply too.
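For any of these, a correct sequential CPU baseline is useful both for validating the GPU version and for measuring speedup. As one concrete starting point, here is a minimal sketch of Andrew's monotone chain algorithm for 2D convex hulls in plain Python (the function names are mine, not from any particular library); a GPU implementation could be checked against it on random inputs:

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive if OAB turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the convex hull in counter-clockwise order (monotone chain, O(n log n))."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each chain is the first point of the other, so drop it.
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The sort dominates the running time, which is one reason convex hulls parallelize well: GPU versions typically replace the sort with a parallel sort and the chain-building with a divide-and-conquer or QuickHull-style reduction.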

# MapReduce algorithms

**Posted:** July 19, 2012

**Filed under:** Available Project/Thesis Topics

**Tags:** algorithms, haskell, java, mapreduce, python

MapReduce is a software framework for distributed computing on huge data sets. It was created by Google and is used internally by Amazon, eBay, Facebook, Google, IBM, Nokia, Yahoo!, and many other organizations.

Disco, Holumbus, and Apache Hadoop are open source implementations of MapReduce, for Python, Haskell, and Java, respectively. Other implementations are available as well.

Project:

- Pick a computational problem that can be parallelized.
- Design a MapReduce-style algorithm to solve the problem. You will probably be able to reuse ideas from the vast literature on parallel algorithms.
- Implement the algorithm and analyze its performance.
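The programming model itself is small: a mapper emits key-value pairs, the framework groups values by key, and a reducer folds each group. Here is a hedged, single-machine sketch in Python of that pipeline, using word counting as the example problem (a real project would run the same mapper and reducer on Hadoop or Disco rather than on this toy driver):

```python
from collections import defaultdict

def map_phase(records, mapper):
    # Run the mapper over every input record, collecting emitted (key, value) pairs.
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle(pairs):
    # Group all values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: the canonical MapReduce example.
def wc_mapper(document):
    return [(word, 1) for word in document.split()]

def wc_reducer(word, counts):
    return sum(counts)

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(docs, wc_mapper)), wc_reducer)
```

Designing a MapReduce algorithm for one of the problems below mostly means deciding what the records, keys, and values should be so that the shuffle does the useful communication for you.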

Here are some ideas for important problems that might be worth investigating. If you’re interested in a specific problem, you’ll need to do a brief literature review to confirm that the problem is neither impossible, nor trivially easy, to solve with MapReduce.

- Matrix multiplication using Strassen’s algorithm
- Binary space partition
- Minimum spanning trees, possibly with a variant of Boruvka’s algorithm
- 2D arrangements
- 2D convex hulls or 3D convex hulls
- Voronoi diagrams
- Factoring using the Quadratic Sieve or General Number Field Sieve
- Graph separators
- Linear programming

# Monte Carlo Artificial Intelligence for Go

**Posted:** July 19, 2012

**Filed under:** Available Project/Thesis Topics

**Tags:** algorithms, board-games, go

Go is a classical strategic board game. It has very simple rules, with only two kinds of pieces, yet requires a depth of strategy comparable to Chess. Computing optimal Go moves is computationally difficult. However, there is a simple randomized algorithm using Monte Carlo methods that seems to work well in practice:

- Enumerate all legal moves.
- For each legal move, play out *k* random games in which both players take turns making completely random (but legal) moves, and keep track of the number of times *w* that the computer player wins.
- Choose the move with the highest win ratio *w*/*k*.

This is a nice algorithm because it is simple, the player's strength and running time can be tuned by adjusting the *k* parameter, it is embarrassingly parallel, and it seems to work well against human opponents. The basic algorithm has already been designed, implemented, and documented, but there is room for more work in this area.
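The playout loop above is independent of the game's rules, so it can be illustrated without implementing full Go. Here is a sketch in Python that applies the same k-random-playouts idea to the much simpler game of Nim (whoever takes the last object wins); only the move generator and rules would change for Go, and the names here are my own:

```python
import random

def legal_moves(piles):
    # A move removes k objects (1 <= k <= pile size) from one pile.
    return [(i, k) for i, pile in enumerate(piles) for k in range(1, pile + 1)]

def apply_move(piles, move):
    i, k = move
    new = list(piles)
    new[i] -= k
    return new

def random_playout(piles, to_move):
    """Play random legal moves to the end; return the winning player (0 or 1)."""
    player = to_move
    while any(piles):
        piles = apply_move(piles, random.choice(legal_moves(piles)))
        player = 1 - player
    # The player who just moved took the last object and wins.
    return 1 - player

def monte_carlo_move(piles, player, k=200):
    """Pick the move with the highest win ratio w/k over k random playouts."""
    best_move, best_ratio = None, -1.0
    for move in legal_moves(piles):
        wins = sum(
            random_playout(apply_move(piles, move), 1 - player) == player
            for _ in range(k)
        )
        if wins / k > best_ratio:
            best_move, best_ratio = move, wins / k
    return best_move
```

With piles `[2]`, for example, the playouts always favor taking both objects at once, since taking one hands the opponent an immediate win. A Go version replaces `legal_moves`, `apply_move`, and the terminal test with board logic, which is where most of the implementation effort lies.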

Project: Implement a Monte-Carlo Go player. The project can be extended by parallelizing the algorithm, or by developing the player on a resource-constrained mobile device.

# Compress Wikipedia

**Posted:** July 19, 2012

**Filed under:** Available Project/Thesis Topics

**Tags:** algorithms, compression, wikipedia

Wikipedia is the online encyclopedia that everyone can edit. It would be nice to be able to carry around an offline copy of Wikipedia for many reasons, such as reading on mobile devices when network service is unavailable, or as an archive in case of a prolonged Internet outage. Wikipedia stores a lot of data, so compressing it would be helpful. There is even a contest for solving this problem well.

General-purpose text compression algorithms such as DEFLATE (used by gzip) and the Burrows-Wheeler transform (used by bzip2) do a decent job, but it is possible to do even better. Standard text compressors limit themselves to roughly linear-time algorithms so that they stay practical on large inputs. However, we might be able to do better with a relatively high-order polynomial-time algorithm. One approach would be to use a polynomial-time approximation scheme for an NP-complete problem that corresponds to text compression.
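Establishing the baseline to beat is straightforward, since both standard algorithms ship in the Python standard library (`zlib` implements DEFLATE, `bz2` the Burrows-Wheeler approach). A quick sketch, using a small repetitive sample in place of a real Wikipedia dump:

```python
import bz2
import zlib

# Stand-in for a slice of a Wikipedia dump; highly repetitive, like real wikitext.
sample = b"Wikipedia is the free encyclopedia that anyone can edit. " * 200

deflated = zlib.compress(sample, level=9)        # DEFLATE, as used by gzip
bwt_compressed = bz2.compress(sample, compresslevel=9)  # Burrows-Wheeler, as in bzip2

print(f"original: {len(sample)} bytes")
print(f"DEFLATE:  {len(deflated)} bytes ({len(deflated) / len(sample):.1%})")
print(f"bzip2:    {len(bwt_compressed)} bytes ({len(bwt_compressed) / len(sample):.1%})")
```

A project in this area would report the same ratios for the candidate algorithm on a real dump, where the linear-time baselines typically reach somewhere around 25-30% of the original size.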

Project:

- Reduce hypertext compression to a known approximable NP-complete problem.
- Implement the approximation algorithm.
- Compare the compression performance to competitors.

# Prize Collecting Chinese Postman Problem

**Posted:** July 19, 2012

**Filed under:** Available Project/Thesis Topics

**Tags:** algorithms, graphs

The Chinese postman problem (CPP) is a routing problem related to the traveling salesman problem (TSP). The desired output of TSP is the shortest cycle that visits every *vertex* exactly once. By contrast, the desired output of CPP is the shortest cycle that traverses every *edge* at least once. The TSP corresponds to the problem of visiting every city (vertex) only once; the CPP corresponds to visiting every city road (edge) at least once. Unlike the TSP, the classical CPP on an undirected graph can be solved in polynomial time, but several of its variants are hard. By extension, the CPP corresponds to the problem of traversing and photographing city roads to collect data for a street view service such as Google Street View or Microsoft Bing Maps.

In the prize collecting variant, every edge is assigned a numerical reward, and the objective is to collect at least *X* points of reward. This corresponds to the (realistic) scenario in which some streets are more important than others, and the algorithm can choose to skip unimportant streets. The prize collecting CPP is NP-hard, but perhaps it could be approximated efficiently.
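To make the problem concrete, and to have exact optima to compare an approximation against, small instances can be solved by brute force. Here is a hedged sketch in Python (names and graph are my own invention) that exhaustively searches closed walks of bounded length for the cheapest one collecting at least the target prize; each edge's prize counts only on its first traversal:

```python
# Tiny undirected test graph: edges as (u, v, length, prize).
EDGES = [
    ("A", "B", 1, 5),
    ("B", "C", 2, 3),
    ("C", "A", 2, 4),
]

def build_adj(edges):
    adj = {}
    for eid, (u, v, length, prize) in enumerate(edges):
        adj.setdefault(u, []).append((v, length, prize, eid))
        adj.setdefault(v, []).append((u, length, prize, eid))
    return adj

def min_prize_tour(edges, start, target_prize, max_steps):
    """Return the length of the cheapest closed walk from `start` using at
    most `max_steps` edge traversals that collects >= target_prize."""
    adj = build_adj(edges)
    best = [float("inf")]

    def dfs(v, cost, collected, prize, steps):
        if cost >= best[0]:
            return  # prune: cannot improve on the best tour found so far
        if v == start and prize >= target_prize:
            best[0] = cost
        if steps == 0:
            return
        for u, length, p, eid in adj[v]:
            seen = eid in collected
            dfs(u, cost + length,
                collected if seen else collected | {eid},
                prize if seen else prize + p,
                steps - 1)

    dfs(start, 0, frozenset(), 0, max_steps)
    return best[0]

# On the triangle above, collecting 9 points is cheapest via the full tour
# A-B-C-A (cost 5, prize 12), beating A-B-A-C-A (cost 6, prize 9).
best_cost = min_prize_tour(EDGES, "A", 9, max_steps=6)
```

This exponential-time search is only a correctness oracle; the project itself is to design a polynomial-time scheme whose output provably approaches these exact values.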

Project: Design a polynomial-time approximation scheme (PTAS) for the prize collecting Chinese postman problem.