Commit dc3414b

Merge pull request #35 from PyDataBlog/cleandocs: Cleandocs
2 parents e29db28 + 5e053ae

17 files changed: +1588 −939 lines

.travis.yml — 2 additions & 1 deletion

```diff
@@ -5,6 +5,7 @@ os:
   - osx
 julia:
   - 1.3
+  - 1.4
   - nightly
 after_success:
   - julia -e 'using Pkg; Pkg.add("Coverage"); using Coverage; Coveralls.submit(process_folder())'
@@ -14,7 +15,7 @@ jobs:
   fast_finish: true
   include:
     - stage: Documentation
-      julia: 1.3
+      julia: 1.4
       script: julia --project=docs -e '
         using Pkg;
         Pkg.develop(PackageSpec(path=pwd()));
```

Project.toml — 2 additions & 1 deletion

```diff
@@ -13,6 +13,7 @@ julia = "1.3"
 [extras]
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
+Suppressor = "fd094767-a336-5f1f-9728-57cf17d0bbfb"
 
 [targets]
-test = ["Test", "Random"]
+test = ["Test", "Random", "Suppressor"]
```

README.md — 8 additions & 52 deletions

````diff
@@ -10,39 +10,32 @@
 _________________________________________________________________________________________________________
 
 ## Table Of Content
-
-1. [Motivation](#Motivatiion)
+1. [Documentation](#Documentation)
 2. [Installation](#Installation)
 3. [Features](#Features)
-4. [Benchmarks](#Benchmarks)
-5. [Pending Features](#Pending-Features)
-6. [How To Use](#How-To-Use)
-7. [Release History](#Release-History)
-8. [How To Contribute](#How-To-Contribute)
-9. [Credits](#Credits)
-10. [License](#License)
+4. [License](#License)
 
 _________________________________________________________________________________________________________
 
-### Motivation
-It's a funny story actually led to the development of this package.
-What started off as a personal toy project trying to re-construct the K-Means algorithm in native Julia blew up after into a heated discussion on the Julia Discourse forums after I asked for Julia optimizaition tips. Long story short, Julia community is an amazing one! Andrey Oskin offered his help and together, we decided to push the speed limits of Julia with a parallel implementation of the most famous clustering algorithm. The initial results were mind blowing so we have decided to tidy up the implementation and share with the world.
+### Documentation
+- Stable Documentation: [![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://PyDataBlog.github.io/ParallelKMeans.jl/stable)
+
+- Experimental Documentation: [![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://PyDataBlog.github.io/ParallelKMeans.jl/dev)
 
-Say hello to our baby, `ParallelKMeans`!
 _________________________________________________________________________________________________________
 
 ### Installation
 You can grab the latest stable version of this package by simply running in Julia.
 Don't forget to Julia's package manager with `]`
 
 ```julia
-pkg> add TextAnalysis
+pkg> add ParallelKMeans
 ```
 
 For the few (and selected) brave ones, one can simply grab the current experimental features by simply adding the experimental branch to your development environment after invoking the package manager with `]`:
 
 ```julia
-dev git@github.com:PyDataBlog/ParallelKMeans.jl.git
+pkg> dev git@github.com:PyDataBlog/ParallelKMeans.jl.git
 ```
 
 Don't forget to checkout the experimental branch and you are good to go with bleeding edge features and breaks!
@@ -60,43 +53,6 @@
 
 _________________________________________________________________________________________________________
 
-### Benchmarks
-
-_________________________________________________________________________________________________________
-
-### Pending Features
-- [X] Implementation of Triangle inequality based on [Elkan C. (2003) "Using the Triangle Inequality to Accelerate
-K-Means"](https://www.aaai.org/Papers/ICML/2003/ICML03-022.pdf)
-- [ ] Support for DataFrame inputs.
-- [ ] Refactoring and finalizaiton of API desgin.
-- [ ] GPU support.
-- [ ] Even faster Kmeans implementation based on current literature.
-- [ ] Optimization of code base.
-
-_________________________________________________________________________________________________________
-
-### How To Use
-
-```Julia
-
-```
-
-_________________________________________________________________________________________________________
-
-### Release History
-
-- 0.1.0 Initial release
-
-_________________________________________________________________________________________________________
-
-### How To Contribue
-
-_________________________________________________________________________________________________________
-
-### Credits
-
-_________________________________________________________________________________________________________
-
 ### License
 
 [![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2FPyDataBlog%2FParallelKMeans.jl.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2FPyDataBlog%2FParallelKMeans.jl?ref=badge_large)
````

docs/src/benchmark_image.png — 1010 KB (binary image)

docs/src/index.md — 144 additions & 1 deletion

The file is almost entirely rewritten (the former title "# ParallelKMeans.jl Documentation" becomes "# ParallelKMeans.jl Package"). The resulting content reads:

# ParallelKMeans.jl Package

```@contents
Depth = 4
```

## Motivation
It's actually a funny story that led to the development of this package. What started off as a personal toy project trying to reconstruct the K-Means algorithm in native Julia blew up after a heated discussion on the Julia Discourse forum when I asked for Julia optimization tips. Long story short, the Julia community is an amazing one! Andrey offered his help, and together we decided to push the speed limits of Julia with a parallel implementation of the most famous clustering algorithm. The initial results were mind-blowing, so we decided to tidy up the implementation and share it with the world as a maintained Julia package.

Say hello to `ParallelKMeans`!

This package aims to utilize the speed of Julia and parallelization (both CPU & GPU) to offer an extremely fast implementation of the K-Means clustering algorithm and its variations via a friendly interface for practitioners.

In short, we hope this package will eventually mature into the "one-stop shop" for everything K-Means on both CPUs and GPUs.

## K-Means Algorithm Implementation Notes
Since Julia is a column-major language, the package expects the input (design matrix) in the following format:

- A design matrix X of size n×m, where the i-th column of X (`X[:, i]`) is a single data point in n-dimensional space.
- Thus, the rows of the design matrix represent the feature space, with the columns representing all the training examples in this feature space.

One of the pitfalls of the K-Means algorithm is that it can fall into a local minimum; this implementation inherits that problem, like every implementation does. As a result, it is useful in practice to restart it several times to get the correct results.
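To make the two notes above concrete, here is a minimal sketch: a dataset of 100 points in 3-dimensional space stored one point per column, plus a simple best-of-several-restarts loop. It assumes the `totalcost` field shown in the elbow-method example later on, and `argmin(f, itr)` from Julia ≥ 1.7:

```julia
using ParallelKMeans

# One data point per COLUMN: 3 features (rows) × 100 observations (columns).
X = rand(3, 100)
first_point = X[:, 1]   # a single data point in 3-dimensional space

# Restart several times and keep the lowest-cost run to reduce the
# chance of settling in a poor local minimum.
runs = [kmeans(X, 3) for _ in 1:10]
best = argmin(r -> r.totalcost, runs)
```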
## Installation
You can grab the latest stable version of this package from the Julia registries by simply running:

*NB:* Don't forget to invoke Julia's package manager with `]`.

```julia
pkg> add ParallelKMeans
```

For the few (and select) brave ones, you can grab the current experimental features by simply adding the experimental branch to your development environment after invoking the package manager with `]`:

```julia
pkg> dev git@github.com:PyDataBlog/ParallelKMeans.jl.git
```

Don't forget to check out the experimental branch, and you are good to go with bleeding-edge features and breaks!

```bash
git checkout experimental
```
## Features
- Lightning-fast implementation of the K-Means clustering algorithm, even on a single thread, in native Julia.
- Support for a multi-threaded implementation of the K-Means clustering algorithm.
- `Kmeans++` initialization for faster and better convergence.
- Modified version of Elkan's triangle inequality to speed up the K-Means algorithm.

## Pending Features
- [X] Implementation of [Hamerly's algorithm](https://www.researchgate.net/publication/220906984_Making_k-means_Even_Faster).
- [ ] Full implementation of the triangle inequality based on [Elkan (2003), "Using the Triangle Inequality to Accelerate K-Means"](https://www.aaai.org/Papers/ICML/2003/ICML03-022.pdf).
- [ ] Implementation of [geometric methods to accelerate the k-means algorithm](http://cs.baylor.edu/~hamerly/papers/sdm2016_rysavy_hamerly.pdf).
- [ ] Support for DataFrame inputs.
- [ ] Refactoring and finalization of the API design.
- [ ] GPU support.
- [ ] Even faster K-Means implementations based on current literature.
- [ ] Optimization of the code base.
- [ ] Improved documentation.
- [ ] More benchmark tests.
## How To Use
Taking advantage of Julia's brilliant multiple dispatch system, the package exposes a very easy-to-use API.

```julia
using ParallelKMeans

# Uses all available CPU cores by default
multi_results = kmeans(X, 3; max_iters=300)

# Use only 1 core of the CPU
results = kmeans(X, 3; n_threads=1, max_iters=300)
```

The main design goal is to offer all available variations of the K-Means algorithm to end users as composable elements. By default, Lloyd's implementation is used, but users can specify other variations of the K-Means clustering algorithm via this interface:

```julia
some_results = kmeans([algo], input_matrix, k; kwargs)

# example
r = kmeans(Lloyd(), X, 3)  # same result as the default
```

### Supported KMeans algorithm variations
- [Lloyd()](https://cs.nyu.edu/~roweis/csc2515-2006/readings/lloyd57.pdf)
- [Hamerly()](https://www.researchgate.net/publication/220906984_Making_k-means_Even_Faster)
- [Geometric()](http://cs.baylor.edu/~hamerly/papers/sdm2016_rysavy_hamerly.pdf) - (Coming soon)
- [Elkan()](https://www.aaai.org/Papers/ICML/2003/ICML03-022.pdf) - (Coming soon)
- [MiniBatch()](https://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf) - (Coming soon)
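Since `Hamerly()` is already implemented (see the pending-features checklist), it can be swapped in through the same dispatch interface. A hypothetical sketch on random data, reusing the keyword arguments from the earlier examples:

```julia
using ParallelKMeans

X = rand(3, 100)  # one data point per column, as the package expects

r_default = kmeans(X, 3; max_iters=300)             # Lloyd's algorithm (default)
r_hamerly = kmeans(Hamerly(), X, 3; max_iters=300)  # Hamerly's variant
```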
### Practical Usage Examples
Some common usage examples of this package are as follows:

#### Clustering With A Desired Number Of Groups

```julia
using ParallelKMeans, RDatasets, Plots

# load the data
iris = dataset("datasets", "iris");

# features to use for clustering
features = collect(Matrix(iris[:, 1:4])');

# various artifacts can be accessed from the result, e.g. assigned labels, cost value, etc.
result = kmeans(features, 3);

# plot with the point color mapped to the assigned cluster index
scatter(iris.PetalLength, iris.PetalWidth, marker_z=result.assignments,
        color=:lightrainbow, legend=false)
```

![Image description](iris_example.jpg)
#### Elbow Method For Selecting The Optimal Number Of Clusters

```julia
using ParallelKMeans

# Single-threaded implementation of Lloyd's algorithm
b = [ParallelKMeans.kmeans(X, i; n_threads=1,
     tol=1e-6, max_iters=300, verbose=false).totalcost for i = 2:10]

# Multi-threaded implementation of Lloyd's algorithm (the default)
c = [ParallelKMeans.kmeans(X, i; tol=1e-6, max_iters=300, verbose=false).totalcost for i = 2:10]
```
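Once the per-k costs are collected, the elbow is wherever the marginal cost reduction levels off. A hypothetical sketch of reading that off from a cost vector like `b` above (here with made-up costs, since the heuristic only needs the vector):

```julia
# Total cost for k = 2:6 (illustrative values; in practice use the
# totalcost vector collected above).
b = [100.0, 60.0, 40.0, 35.0, 33.0]

# Cost reduction gained by each additional cluster; the "elbow" is
# where these drops flatten out (inspect or plot them).
drops = -diff(b)   # [40.0, 20.0, 5.0, 2.0] -> elbow around k = 4
```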
## Benchmarks
Currently, this package is benchmarked against similar implementations in both Python and Julia. All reproducible benchmarks can be found in the [ParallelKMeans/extras](https://github.com/PyDataBlog/ParallelKMeans.jl/tree/master/extras) directory. More tests in various languages are planned beyond the initial release version (`0.1.0`).

*Note*: All benchmark tests are run on the same computer to help eliminate any bias.

Currently, the benchmark speed tests are based on the search for the optimal number of clusters using the [Elbow Method](https://en.wikipedia.org/wiki/Elbow_method_(clustering)), since this is a practical use case for most practitioners employing the K-Means algorithm.

![benchmark_image.png](benchmark_image.png)

## Release History
- 0.1.0 Initial release

## Contributing
Ultimately, we see this package as potentially the one-stop shop for everything related to the K-Means algorithm and its speed-up variants. We are open to new implementations and ideas from anyone interested in this project.

Detailed contribution guidelines will be added in upcoming releases.

<!--- Insert Contribution Guidelines Below --->

```@index
```

docs/src/iris_example.jpg — 165 KB (binary image)
