
Supersonic FF for Railgun CLP(FD)

Posted at 2026-01-21

Introduction

We recently presented a fast constraint solver termed Railgun CLP(FD) that models attributed variables simply via ‘$ATTR’/2 compounds. We already went through an iteration that added (#\=)/2 constraints. In this instalment we present some further progress; in particular, we introduce a discount(C) = 1/k heuristic.

Railgun Versions

As a starting point we conceived a Lean CLP, as reported in early December 2025, that used static variable ordering (SVO) based on variable degree scoring, paired with an integer dif/2 constraint. In early January 2026 we released a further version of Railgun CLP(FD):

Railgun v1:

  • static variable ordering
  • integer dif/2 constraint

Railgun v7:

  • static condition filtering
  • (in)/2 constraint
  • (#=)/2 constraint

In this blog post we deal with improvements around static variable ordering (SVO) and the subsequent backtracking search for solutions. Among the newly introduced features are tie breaking, which improves upon SVO, and partial consistency, which improves upon backtracking:

Railgun v9:

  • tie breaking
  • partial consistency

Magic Squares

To benchmark the recent additions to Railgun CLP(FD) we chose the problem of magic squares. Magic squares can easily be encoded in Prolog-based constraint solving systems. In the case of a 3x3 magic square, the columns, rows and diagonals each have to sum up to 15:

[Figure Square3x3.png: a 3x3 magic square]

The sums can be expressed as constraints using (#=)/2 and the evaluable addition function (+)/2. Before we can do that, we have to create as many variables as the square has cells. Without going into too much detail, we used the following Prolog code:

magic(N, U, M) :-
   length(M, N),
   maplist(make_vec(N), M),        % square
   foldl(append, M, [], L),
   S is N^2,
   L ins 1..S,
   all_different(L),
   R is N*(S+1)//2,
   maplist(make_sum(R), M),        % rows
   transpose(M, H),
   maplist(make_sum(R), H),        % columns
   diagonal(M, 1, 1, D),           % 1st diagonal
   make_sum(R, D),
   diagonal(M, N, -1, C),          % 2nd diagonal
   make_sum(R, C),
   labeling([U], L).
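The helpers make_vec/2, make_sum/2 and diagonal/4 are not shown above. A minimal sketch of how they could look, based on how magic/3 calls them (the predicate bodies are our own reconstruction, not necessarily the original Railgun code):

```prolog
% make_vec(N, V): V is a fresh list of N variables, one square row.
make_vec(N, V) :- length(V, N).

% make_sum(R, L): post the constraint that the elements of L sum to R.
make_sum(R, L) :-
   foldl(sys_plus, L, 0, E),
   R #= E.

sys_plus(X, A, A+X).

% diagonal(M, J, D, L): L collects one cell per row of M, starting
% at column J and stepping the column index by D in every row.
diagonal([], _, _, []).
diagonal([R|M], J, D, [X|L]) :-
   nth1(J, R, X),
   K is J+D,
   diagonal(M, K, D, L).
```

With these helpers, diagonal(M, 1, 1, D) walks the main diagonal and diagonal(M, N, -1, C) the anti-diagonal, exactly as called in magic/3 above.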

We then created three magic squares that already contain clues, and used the sum of their solving times as our benchmark result. It turned out that Railgun v7 could solve it in about 3 seconds, while SWI-Prolog does it in about 30 milliseconds.

Tie Breaking

The bottom line is that we face a 100x speed-up problem in order to come close to SWI-Prolog. Our static variable ordering (SVO) basically implements a degeneracy ordering: it repeatedly removes the lowest-degree variable to find a variable ordering.

This can lead to a tie when multiple variables have the same degree, and ultimately to the idea that a tie-breaking heuristic, which assigns some additional score to variables with the same degree, could improve the subsequent backtracking search for solutions.
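The core of a degeneracy ordering with tie breaking can be sketched as follows, assuming each variable has already been paired with a Degree-Score key (this is an illustrative sketch; it omits the re-computation of degrees after each removal):

```prolog
% degeneracy(+Pairs, -Order): Pairs is a list of Key-Var entries
% with Key = Degree-Score. keysort/2 orders primarily by Degree
% and breaks ties on the secondary Score, so we repeatedly pick
% the front element as the next variable in the ordering.
degeneracy([], []).
degeneracy(Pairs, [V|Order]) :-
   keysort(Pairs, [_-V|Rest]),
   degeneracy(Rest, Order).
```

In the real algorithm, removing a variable also decrements the degrees of its neighbours before the next pick.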

Since we already had condition filtering, our first take was to count the number of remaining conditions per variable as a further signal for the static variable ordering. The routine that counts the conditions in a delayed goal list looks as follows:

sys_attr_skim(V, U, U) :- var(V), !.
sys_attr_skim([G|L], U, V) :- term_variables(G, [_]), !,
   H is U-1,
   sys_attr_skim(L, H, V).
sys_attr_skim([_|L], U, V) :-
   sys_attr_skim(L, U, V).

With this little improvement to the scoring of the variables, we saw an immediate boost in a number of examples, among them the magic square benchmark that we previously presented. Testing inside Dogelog Player 2.1.4 for Java, we found a 10-fold speed increase.

Partial Consistency

We then experimented with partial consistency. The idea here is an early consistency test that prunes the search space. A popular consistency test is the judgement whether a constraint can already be rejected as unsatisfiable. Take an equality constraint:

E #= F

We can check the following, even while not all variables are instantiated yet:

min(E) =< max(F) & min(F) =< max(E)
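Assuming variables carry interval bounds, such a check can be sketched for sum expressions as follows (sys_expr_range/3 and the range attribute are illustrative names, not the actual Railgun internals):

```prolog
% sys_expr_range(+E, -L, -U): interval bounds of an expression
% built from integers, interval variables and (+)/2.
sys_expr_range(X, L, U) :- var(X), !,
   get_attr(X, range, L-U).
sys_expr_range(N, N, N) :- integer(N), !.
sys_expr_range(A+B, L, U) :-
   sys_expr_range(A, L1, U1),
   sys_expr_range(B, L2, U2),
   L is L1+L2,
   U is U1+U2.

% sys_check_eq(+E, +F): succeeds iff E #= F cannot yet be
% rejected, i.e. min(E) =< max(F) and min(F) =< max(E).
sys_check_eq(E, F) :-
   sys_expr_range(E, L1, U1),
   sys_expr_range(F, L2, U2),
   L1 =< U2,
   L2 =< U1.
```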

While this seems like a rather routine method in constraint solving, it also leads to an improved scoring for tie breaking. Whereas a constraint can be viewed as pruning values from variable domains, a consistency test narrows the lower and upper bounds of variable domains.

So either way, whether constraint or consistency test, they reduce the domain size, and are thus related to selection strategies such as first-fail (ff), where a propagation-based constraint solver dynamically selects a variable with the smallest domain.

sys_attr_skim(V, U, U) :- var(V), !.
sys_attr_skim([G|L], U, V) :- term_variables(G, R), R \= [], !,
   length(R, J),
   H is U-1/J,
   sys_attr_skim(L, H, V).
sys_attr_skim([_|L], U, V) :-
   sys_attr_skim(L, U, V).

The above code shows our attempt to estimate the impact of constraints and consistency tests at the same time. It was found by us, and in a blind test also suggested by ChatGPT, which used the formula discount(C) = 1/k, where k is the degree of the constraint or consistency test.
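As a worked example of the discounted scoring, consider a variable whose delayed goal list holds a binary dif/2 (k = 2) and a ternary sum constraint (k = 3); note that the list tail is left as an unbound variable, which is what the first clause of sys_attr_skim/3 stops on:

```prolog
?- sys_attr_skim([dif(X,Y), X+Y+Z #= 15|_], 0, S).
```

Starting from 0, the score should come out as -(1/2 + 1/3), roughly -0.83, so each pending goal contributes 1/k of discounted constraint mass.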

Results

We first benchmarked strategies and consistency separately; that is, we separated the scoring from the availability of consistency tests. We saw that the discounted scoring, which we termed plausi, already gave us some benefit even without the pruning from consistency tests:

[Figure: benchmark of scoring strategies without consistency pruning]

Combining the discounted scoring with the presence of consistency tests gives the greatest improvement. We actually reach our goal of a 100x speed-up on the magic square example, so we no longer needed to shy away from comparing with more Prolog systems than only SWI:

[Figure: benchmark comparison across Prolog systems]

There is almost no difference among SWI-Prolog, our Dogelog Player based Railgun CLP(FD) and ECLiPSe Prolog when benchmarking with our magic square set-up. We also tested some newer Prolog systems, such as Scryer Prolog and Trealla Prolog, but they show a 10x-30x slowdown.

Conclusions

We demonstrated an additional scoring of variables with the same degree to improve the static variable ordering (SVO). It can give a 100x speed-up in a magic square example, putting Dogelog Player on an equal footing with SWI-Prolog and ECLiPSe Prolog, while recent versions of Scryer Prolog and Trealla Prolog were still 10x-30x slower.

See also:

Porting Railgun CLP(FD) to SWI-Prolog
https://medium.com/2989/e9f2ef4e6878
