Sometimes one needs to represent elements of the power set of a given set of n elements, for n not too large. For such purposes one may represent elements of the power set by integers in the range from 0 to 2^n - 1. A value m in this range represents a subset via its binary digits: the 1 bits mark those elements present in the subset.
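A minimal sketch of this encoding in Python (the original discussion uses Mathematica; the helper names here are ours):

```python
def subset_to_int(indices):
    """Encode a subset of {0, ..., n-1} as an integer bitmask."""
    m = 0
    for i in indices:
        m |= 1 << i          # set the bit for element i
    return m

def int_to_subset(m, n):
    """Decode an integer bitmask back to a sorted list of element indices."""
    return [i for i in range(n) if (m >> i) & 1]

# The subset {elements 0, 2, 3} of a 5-element set is binary 1101, i.e. 13.
assert subset_to_int([0, 2, 3]) == 13
assert int_to_subset(13, 5) == [0, 2, 3]
```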
For example, say we have a set of 100 elements. We will use integers in the range from 0 to 2^100 - 1 to represent subsets, and take a random subset of, say, 128 elements of the power set (that is, a subset whose elements are themselves subsets of the original set). Actually, there is a nonzero probability that our subset contains strictly fewer than 128 elements, but this is still okay for the examples below.
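In Python the same setup might look like this, with random.getrandbits standing in for whatever random-integer generator one uses in Mathematica:

```python
import random

random.seed(0)                     # reproducible for illustration
n = 100
# 128 random integers below 2^100, each encoding a random subset;
# storing them in a set collapses duplicates, which is why there is a
# (tiny) chance of ending up with fewer than 128 distinct subsets
subs = {random.getrandbits(n) for _ in range(128)}
assert len(subs) <= 128
assert all(0 <= s < 2**n for s in subs)
```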
A reasonable question might be: "How many elements of subs contain the third element of the original set?" To answer this we represent the elements of the original set as "bit masks", that is, integer powers of two.
So the third element of the set is masks[[3]], which is 2^2, or 4. We form the function that takes an integer and computes its bitwise AND with masks[[3]]. We Map this function over the subset list subs, then count the number of elements in the result that are positive. The bitwise operation has masked out all but the third binary digit of each member of subs, so the positive values resulting (all will be 4) come from those members that have the third binary digit set.
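A hedged Python rendering of the same computation (variable names are ours):

```python
import random

random.seed(1)
n = 100
subs = [random.getrandbits(n) for _ in range(128)]
masks = [1 << i for i in range(n)]     # masks[2] == 2**2 == 4

# count members of subs whose third binary digit is set
count = sum(1 for s in subs if s & masks[2])
# on average about half of the 128 subsets should qualify
assert 0 <= count <= len(subs)
```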
Okay, easy enough. How about "How many members of subs contain the sixth and eleventh elements of the original set, but not the fourteenth?" (one might expect on average that this would be about an eighth of the number of elements in subs). Same idea.
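The same idea in Python: build one mask for the required elements and one for the forbidden element, then test both conditions (a sketch; names are ours):

```python
import random

random.seed(2)
n = 100
subs = [random.getrandbits(n) for _ in range(128)]
masks = [1 << i for i in range(n)]

want = masks[5] | masks[10]   # must contain the 6th and 11th elements
avoid = masks[13]             # must not contain the 14th
count = sum(1 for s in subs
            if (s & want) == want and (s & avoid) == 0)
# each of the three conditions holds for about half of subs,
# so count should be roughly 128/8, i.e. around 16
assert 0 <= count <= len(subs)
```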
What about "How many elements contain at least two from among the fourth, tenth, twenty-eighth, and fifty-first elements of the original set?" As before, we first mask appropriately.
Now we simply Count all those masked values whose binary DigitCount is at least two.
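In Python, bin(x).count("1") plays the role of the binary DigitCount (a sketch; names are ours):

```python
import random

random.seed(3)
n = 100
subs = [random.getrandbits(n) for _ in range(128)]
masks = [1 << i for i in range(n)]

group = masks[3] | masks[9] | masks[27] | masks[50]
# mask, then count how many of the four designated bits survive
count = sum(1 for s in subs if bin(s & group).count("1") >= 2)
assert 0 <= count <= len(subs)
```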
It is a simple matter to perform the basic set operations on sets represented as bitvectors. Set union is obtained with BitOr, intersection with BitAnd, and complement is only a little more complicated.
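In Python these operations read the same way; note that the complement must be taken relative to the full n-element universe:

```python
n = 5
a = 0b10110        # subset {1, 2, 4}
b = 0b01100        # subset {2, 3}

union = a | b                  # corresponds to BitOr
inter = a & b                  # corresponds to BitAnd
full = (1 << n) - 1            # the whole set: all n bits set
comp = full ^ a                # complement of a within the universe

assert union == 0b11110        # {1, 2, 3, 4}
assert inter == 0b00100        # {2}
assert comp == 0b01001         # {0, 3}
```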
Application: working with a matrix modulo 2
For some number-theoretic algorithms such as factoring integers, one wants to find relations among a set of vectors modulo 2. Specifically, we suppose we have a matrix (regard it as a set of row vectors) where all elements are zeros and ones. We want to find elements of the null space. A simple way to get a generating set of this space is via NullSpace[Transpose[mat], Modulus->2].
One problem is that even with packed arrays we still require four bytes per entry. Clearly we can pack bits tighter than that! Moreover, it is not hard to see that we can avoid some work if we are willing to settle for, say, just one null vector. To this end, another method for finding these null vectors will prove useful. We augment the matrix on the right with an identity matrix and then row reduce modulo 2. The values in the augmented columns record the row operations that were done. Hence any row that has zeroes in the original columns tells us how to obtain a null vector using the information in the augmented columns. If any row suits the purpose then the last one must do so, because pivots are in increasing columns. So we simply look at that last row. Code to do this is as follows. First we set up an example. It is "typical" for the type of application mentioned in that it is approximately square.
Now we do the augmentation and row reduction. We ascertain that the original columns in the last row all have zeroes, and that the augmented part of that row gives a null vector.
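For readers without Mathematica at hand, here is a hedged Python sketch of the same augment-and-row-reduce idea, operating on explicit 0/1 lists (the function name is ours):

```python
def null_vector_mod2(mat):
    """Row reduce [mat | I] mod 2; if some row's original columns are all
    zero, its augmented part says which rows of mat sum to zero mod 2."""
    m, ncols = len(mat), len(mat[0])
    rows = [list(r) + [1 if j == i else 0 for j in range(m)]
            for i, r in enumerate(mat)]
    pivot = 0
    for col in range(ncols):
        for r in range(pivot, m):
            if rows[r][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue                      # no pivot in this column
        for r in range(m):
            if r != pivot and rows[r][col]:
                rows[r] = [(x + y) % 2 for x, y in zip(rows[r], rows[pivot])]
        pivot += 1
    for row in rows:
        if not any(row[:ncols]):          # zero in all original columns
            return row[ncols:]            # the augmented part: a null vector
    return None

mat = [[1, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]          # row0 + row1 + row2 == 0 mod 2
v = null_vector_mod2(mat)
assert v == [1, 1, 1]
# verify: v really combines the rows to zero, modulo 2
assert all(sum(v[i] * mat[i][j] for i in range(3)) % 2 == 0 for j in range(3))
```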
Let us compare to finding a null space directly.
Using NullSpace was a lot faster, by a factor of about three, which is no surprise if you think about the underlying Gaussian elimination that is done. Thus far we have neither used bit vectors nor seen an advantage to the augmented matrix row reduction method. Here it is. We can treat each vector as a bit vector, thus saving a lot of space. As we are doing Gaussian elimination modulo 2, the only operation consists of adding vectors to one another elementwise. This is easily seen to be equivalent to bitwise exclusive-or. A second area for speed improvement is that we need not do full row reduction, but only triangulate until we have a row of zeroes in the original columns (unless one wants more independent generators of the null space).
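The equivalence of elementwise addition mod 2 and exclusive-or is easy to check in Python:

```python
def to_bits(row):
    """Pack a 0/1 row into an integer, leftmost entry as the high bit."""
    m = 0
    for b in row:
        m = (m << 1) | b
    return m

r1 = [1, 0, 1, 1]
r2 = [0, 1, 1, 0]
elementwise_sum = [(a + b) % 2 for a, b in zip(r1, r2)]   # [1, 1, 0, 1]
# adding rows mod 2 is the same as XORing their packed encodings
assert to_bits(elementwise_sum) == to_bits(r1) ^ to_bits(r2)
```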
Below is code to implement this matrix triangulation method using bitvectors.
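The original code is in Mathematica; here is a hedged Python sketch of the same method, with rows packed as integers, row additions done by XOR, and the augmented identity tracked alongside (names are ours, and a production version could stop as soon as a zero row appears):

```python
def null_vector_bitwise(mat):
    """Triangulate [mat | I] mod 2 with rows packed as integers.
    Returns a 0/1 list v with v . mat == 0 mod 2, or None."""
    m, ncols = len(mat), len(mat[0])
    rows = []
    for i, r in enumerate(mat):
        bits = 0
        for b in r:
            bits = (bits << 1) | b        # pack the original columns
        rows.append([bits, 1 << i])       # [row bits, augmented tag]
    pivot = 0
    for col in range(ncols):
        colbit = 1 << (ncols - 1 - col)
        for r in range(pivot, m):
            if rows[r][0] & colbit:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue                      # no pivot in this column
        for r in range(pivot + 1, m):     # triangulate: clear below only
            if rows[r][0] & colbit:
                rows[r][0] ^= rows[pivot][0]   # row addition mod 2 == XOR
                rows[r][1] ^= rows[pivot][1]
        pivot += 1
    for bits, tag in rows:
        if bits == 0:                     # original columns all zero
            return [(tag >> i) & 1 for i in range(m)]
    return None

mat = [[1, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]          # row0 + row1 + row2 == 0 mod 2
assert null_vector_bitwise(mat) == [1, 1, 1]
```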
We illustrate on the example above. We must first translate the matrix to a list of bitvectors (nonnegative integers).
Now we obtain a null vector for this matrix and check that it is indeed such.
Still slower than using NullSpace directly. But the interesting fact is that the asymptotic complexity is better. If we double the size of the input then the NullSpace or RowReduce methods will take around eight times longer, as they are O(n^3) complexity algorithms. In contrast, for inputs in the size range up to several thousand, the bitvector formulation is only O(n^2). This is because the bitwise operations are sufficiently fast as to be treated as constant time up to a large size, so in effect other code in the loops dominates. We illustrate by multiplying the size of the example by four.
At this size the bitvector approach is substantially faster than using NullSpace. A reasonable question is whether one might in effect emulate NullSpace but using bit vectors. This can be done, but in order to be as efficient as possible one would need the ability to check or set a given bit in a bignum in constant time, and this capability is not present in version 4 of Mathematica.