A proper colouring of a graph is an assignment of colours to the vertices such that no two endpoints of an edge get the same colour.
This means that the set of vertices receiving any particular colour forms an *independent set*, i.e. there are no edges amongst vertices of the same colour.
Graph colouring is a notoriously hard problem (the decision problem is NP-complete), and *fractional colouring* is about whether the problem gets “easier” if vertices are allowed to be coloured with multiple colours instead of one.

In a technical sense, fractional colouring is a *relaxation* of proper colouring: instead of assigning a single colour to a vertex, you are allowed to assign multiple *weighted* colours such that the weights sum to at least 1. Usual colouring is the special case where weights must be 0 or 1; in fractional colouring, weights can be any number between 0 and 1.
This means that one might find fractional colourings which use “less colour” than proper colourings, so fractional colouring is easier than proper colouring in this sense.
For fractional colouring the concept of *colour* gets a bit strange, so it is preferable to keep track of weights of independent sets instead.
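To make this concrete, here is a small worked sketch in Python (a toy illustration, not part of the visualisation code): the 5-cycle has fractional chromatic number $5/2$, achieved by putting weight $1/2$ on each of its five maximum independent sets.

```python
# The 5-cycle C5: vertices 0..4, edges between consecutive vertices.
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}

# Its five maximum independent sets, each of size 2.
max_ind_sets = [frozenset(((i + 1) % 5, (i + 3) % 5)) for i in range(5)]

# Check independence: no set contains both endpoints of an edge.
assert all(not e <= S for e in edges for S in max_ind_sets)

# Weight 1/2 on each set covers every vertex with total weight exactly 1.
weights = {S: 0.5 for S in max_ind_sets}
cover = {v: sum(w for S, w in weights.items() if v in S) for v in range(5)}
print(cover)                   # every vertex is covered with weight 1.0
print(sum(weights.values()))   # total colour used: 2.5, less than chi(C5) = 3
```

Each vertex of the 5-cycle lies in exactly two of these sets, so the weights covering it sum to 1, while the total colour used is $5/2$, strictly less than the chromatic number 3.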

In terms of time complexity, it is not really true that fractional colouring is easier than proper colouring. Although the fractional chromatic number is the solution to a linear program, the number of variables in this program can be exponential in the size of the graph.
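As a sketch of that linear program (assuming SciPy is available; this solver is not part of the algorithm discussed on this page), the fractional chromatic number of a small graph can be computed directly by enumerating all independent sets, one LP variable per set:

```python
from itertools import combinations
from scipy.optimize import linprog

# C5 again: one LP variable per non-empty independent set.
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_independent(S):
    return not any(frozenset((u, v)) in edges for u, v in combinations(S, 2))

ind_sets = [S for r in range(1, n + 1) for S in combinations(range(n), r)
            if is_independent(S)]

# Minimise total weight subject to each vertex being covered with weight >= 1.
# linprog takes A_ub @ x <= b_ub, so the covering constraints are negated.
A_ub = [[-1.0 if v in S else 0.0 for S in ind_sets] for v in range(n)]
res = linprog(c=[1.0] * len(ind_sets), A_ub=A_ub, b_ub=[-1.0] * n,
              bounds=(0, 1))
print(res.fun)  # fractional chromatic number of C5: 2.5
```

Note that the number of variables, `len(ind_sets)`, already grows exponentially with the number of vertices, which is exactly the obstruction mentioned above.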

On this page, we demonstrate applications of a greedy fractional colouring algorithm found in *Graph Coloring and the Probabilistic Method* by Molloy and Reed.
Subsequently, I worked on refinements of the algorithm and defined the probabilistic notion of *local occupancy* that gives rise to many examples where the algorithm is close to best-possible.

To colour a graph $G$ the algorithm requires, for every induced subgraph $H$ of $G$, a random independent set $\mathbf{I}_H$ with the property that for every vertex $v\in H$ we have $\beta \mathbb{P}(v\in \mathbf{I}_H) + \gamma \mathbb{E}|N(v)\cap\mathbf{I}_H|\ge 1$.
To apply the algorithm you must solve an optimisation problem, showing that there are $\beta>0$ and $\gamma>0$ such that this holds; this has been done essentially optimally for various families of locally sparse graphs.
On this page we use the *hard-core model*, a nice probability distribution on independent sets for which it can be easy to find local occupancy parameters.
The distribution has a *fugacity* parameter $\lambda$ which changes the probabilities; you can experiment with this below.
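As an illustrative sketch (the values of $\beta$ and $\gamma$ below are ad hoc choices for the 5-cycle, not parameters from the literature), the hard-core marginals of a small graph can be computed by brute force and a local occupancy inequality checked directly:

```python
from itertools import combinations

def hard_core_marginals(vertices, edges, lam):
    """P(v in I) and E|N(v) ∩ I| under the hard-core model P(I) ∝ lam^|I|."""
    sets = [S for r in range(len(vertices) + 1)
            for S in combinations(sorted(vertices), r)
            if not any(frozenset((u, v)) in edges
                       for u, v in combinations(S, 2))]
    Z = sum(lam ** len(S) for S in sets)           # partition function
    p = {v: sum(lam ** len(S) for S in sets if v in S) / Z for v in vertices}
    nbr = {v: {u for e in edges if v in e for u in e if u != v}
           for v in vertices}
    # By linearity, E|N(v) ∩ I| is the sum of the neighbours' marginals.
    e = {v: sum(p[u] for u in nbr[v]) for v in vertices}
    return p, e

vertices = range(5)
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
p, e = hard_core_marginals(vertices, edges, lam=1.0)
beta, gamma = 3.0, 1.0  # ad hoc: check beta*P(v in I) + gamma*E|N(v) ∩ I| >= 1
print(all(beta * p[v] + gamma * e[v] >= 1 for v in vertices))  # True
```

For the 5-cycle at $\lambda = 1$ every marginal is $3/11$ and every vertex has expected occupied neighbourhood $6/11$, so $\beta = 3$, $\gamma = 1$ works comfortably.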

The way the algorithm works is simple. We keep track of a weight for each independent set, and in the visualisation below each independent set gets a colour. Usually there are far more independent sets than colours that are easy to distinguish visually, so we reuse colours; that’s why you might see multiple sectors of the same colour on a vertex. Taking a “step” of the algorithm increases the weights of independent sets in proportion to their probabilities in the hard-core model on the current graph, with the constant of proportionality chosen so that (a) no vertex gets weight more than 1, and (b) at least one vertex gets upgraded to weight exactly 1. Then any vertices which reach weight 1 are removed from consideration, changing the probabilities in the hard-core model. Eventually every vertex is fully coloured (i.e. given weight 1) and the process stops.
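The steps above can be sketched in a few lines of Python. This is a minimal toy version, assuming brute-force enumeration of independent sets is feasible (fine for small visualisation-sized graphs, and certainly not polynomial time):

```python
from itertools import combinations

def greedy_fractional_colouring(vertices, edges, lam=1.0):
    """Greedy fractional colouring driven by the hard-core model at fugacity lam."""
    def ind_sets(vs):
        # Non-empty independent sets of the subgraph induced by vs.
        return [frozenset(S) for r in range(1, len(vs) + 1)
                for S in combinations(sorted(vs), r)
                if not any(frozenset((u, v)) in edges
                           for u, v in combinations(S, 2))]

    weight = {v: 0.0 for v in vertices}  # colour weight accumulated on each vertex
    x = {}                               # weight assigned to each independent set
    active = set(vertices)
    while active:
        # Hard-core model on the subgraph induced by the uncoloured vertices.
        sets = ind_sets(active)
        Z = 1 + sum(lam ** len(S) for S in sets)   # +1 for the empty set
        marg = {v: sum(lam ** len(S) for S in sets if v in S) / Z
                for v in active}
        # Largest step so no vertex exceeds weight 1; at least one hits 1 exactly.
        t = min((1 - weight[v]) / marg[v] for v in active)
        for S in sets:
            x[S] = x.get(S, 0.0) + t * lam ** len(S) / Z
        for v in active:
            weight[v] += t * marg[v]
        # Fully coloured vertices leave the game, changing the model.
        active = {v for v in active if weight[v] < 1 - 1e-9}
    return x, weight

edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
x, weight = greedy_fractional_colouring(range(5), edges, lam=1.0)
print(weight)  # every vertex ends fully coloured with weight 1
```

On the 5-cycle every marginal is equal, so a single step fully colours every vertex at once; the returned `x` is a genuine fractional colouring, since every set it weights is independent and every vertex is covered with total weight 1.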

The number in each vertex below is the current weight of colour given to that vertex; it should increase as the algorithm progresses until it hits 1, and then stop. If we had the luxury of hundreds of visually distinct colours it would be possible to observe that the same colour never appears on adjacent vertices, but since we have to reuse colours this can happen in the visualisation.

On a vertex-transitive graph such as the Petersen graph, the algorithm must complete in one step. This is because the probabilities in the hard-core model are all the same (by vertex-transitivity), so if one vertex gets fully coloured, so do all the rest. If it were possible to take $\lambda = \infty$ in the visualisation, then the total colour weight used would be at most $(\omega + \Delta + 1)/2$ by the proof of the fractional version of Reed’s conjecture given in Molloy and Reed’s book. For more details on these parameters and this problem, the book is an excellent resource.
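The equal-probabilities claim can be checked by brute force: under the hard-core model on the Petersen graph, every vertex has the same marginal probability of appearing in the random independent set, whatever the fugacity. A sketch (the edge list is the standard outer-cycle, spokes, and inner-pentagram construction):

```python
from itertools import combinations

# Petersen graph: outer 5-cycle on 0..4, spokes i -- i+5, pentagram on 5..9.
edges = ({frozenset((i, (i + 1) % 5)) for i in range(5)}
         | {frozenset((i, i + 5)) for i in range(5)}
         | {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)})

lam = 2.0  # any fugacity works; the conclusion only uses vertex-transitivity
sets = [S for r in range(11) for S in combinations(range(10), r)
        if not any(frozenset((u, v)) in edges for u, v in combinations(S, 2))]
Z = sum(lam ** len(S) for S in sets)
marg = [sum(lam ** len(S) for S in sets if v in S) / Z for v in range(10)]
print(max(marg) - min(marg))  # 0 up to floating point: all marginals equal
```

Since all ten marginals agree, the step size that fully colours one vertex fully colours them all, and the algorithm terminates after a single step.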

There is no randomness in the algorithm, so the output is always the same for a given input $G$. Curiously, despite the fact that fractional colourings are “weaker” objects than list colourings, and that the list-colouring algorithm implemented here relies on the same *local occupancy* as this fractional algorithm, this algorithm is not as efficient as the list-colouring one.
It is usually the case that the number of independent sets in $G$ is exponential in the number of vertices of $G$, so there are exponentially many weight parameters to keep track of and update throughout the algorithm. Clearly this cannot be done in polynomial time!
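For instance (a brute-force sketch), even on a path the number of independent sets grows exponentially, following the Fibonacci recurrence:

```python
from itertools import combinations

def count_independent_sets(n):
    """Number of independent sets (incl. the empty set) in the path on n vertices."""
    edges = {frozenset((i, i + 1)) for i in range(n - 1)}
    return sum(1 for r in range(n + 1) for S in combinations(range(n), r)
               if not any(frozenset((u, v)) in edges
                          for u, v in combinations(S, 2)))

counts = [count_independent_sets(n) for n in range(1, 13)]
print(counts)  # 2, 3, 5, 8, 13, ...: each term is the sum of the previous two
```

Each count is the sum of the previous two (a vertex at the end of the path is either in the set or not), so the counts grow like powers of the golden ratio, exponentially in the number of vertices.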