RC4 is a popular encryption algorithm. The way it works is that a “Key Scheduling Algorithm” (KSA) takes your key and generates a 256-byte array, and then a “Pseudo-Random Generation Algorithm” (PRGA) uses that byte array to output an endless stream of bytes (the “key stream”), which looks like random noise unless you know what the original byte array was. To encrypt, this key stream is XORed with the plaintext, and decryption is done by XORing the ciphertext with the key stream again.
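The whole construction fits in a few lines of Python; below is a standard textbook rendition of RC4 (function and variable names are ours):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key Scheduling Algorithm (KSA): key -> 256-byte state array
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm (PRGA): state -> n keystream bytes
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # encryption and decryption are the same operation: XOR with the keystream
    ks = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Since XOR is an involution, `rc4_crypt(key, rc4_crypt(key, msg))` gives back `msg`.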
RC4 is broken in a variety of situations. If you just naively use it twice, with the same key, on two different plaintexts, then that is it, it’s broken. If you try to route around this by prepending a public random value to the key with each encryption, it’s still broken. Extracting a key given its KSA-generated state should be difficult, but Biham and Carmeli showed how to do this for a 5-byte key with an 86% success rate; Akgun, Kavak and Demirci improved this success rate to 99%, and showed how to get a 3% success rate for a 16-byte key; Basu, Maitra, Paul & Talukdar used intensive, high-time-complexity algorithms to improve the rate for a 16-byte key to 14% or so.
Before Reviewer 2 preempts the next paragraph by noting that actually we have failed to cite the latest research that has the figure up to 23 bytes and 35%, we will preempt the preemption by insisting that this isn’t the point. The point is that if we pick a novel and encrypt it with a random 12-byte RC4 key, then ask you to recover the novel’s title, you’re not going to have an easy time of it. You might object that this isn’t fair, and you’re right, it isn’t. Even the stupendously broken Hill Cipher, invented in 1929, is considered stupendously broken because you can break it given a known pair of plaintext and ciphertext — but as to breaking just a ciphertext alone, people were still cracking their heads over it as late as 2009 and 2015. What we are trying to say is that cryptography has an amazing capacity to inflict trauma on people who go around looking for problems, even near “solved” terrain, and that there’s plenty we don’t understand even about RC4 and how to break it. For the more pragmatically-minded among you, what we are also trying to say is that if someone wanted to create some perfectly serviceable ransomware based entirely on RC4 encryption, there’d be nothing stopping them.
We have a love-hate relationship with the part of cryptanalysis that pummels the problem into submission using statistics and plaintext-ciphertext pairs. By all objective measures, we’re the problem: out of the best attacks known on RC4 (cited above), exactly all of them apply probability theory to exploit statistical biases, and exactly 0 proceed with “The proof is trivial! Just biject the ciphertext to a nondegenerate residue class whose elements are clopen metric spaces”. All we can do is childishly shout “well what about 23-byte keys, when is that 35% success rate coming, huh”, and then when it inevitably does, lean against our paperback copy of Computational Complexity: A Modern Approach and weep.
One thing we’ll say for the above-mentioned known plaintext attack on the Hill Cipher is that it is beautiful. It engages with the cipher on its own linear-algebraic terms, then uses that understanding to tease out the key in one elegant motion, like untying a knot. The attack can only exist because linear algebra is as “solved” and well-understood as it is. RC4 is not built on linear algebra — it is built on transpositions and permutations. Can we somehow better understand the little mathematical corner of transposition-and-permutation-land that RC4 lives in? If that’s what we want to do, we are motivated to completely ignore the best and most well-understood attacks that use statistics to deliver the coup de grace, and instead look for much weaker attacks, or even almost-attacks, that explore that mathematical corner and maybe shed some light on it.
Unfortunately we are emphatically not equipped to engage with RC4 proper on that level; by the time you are done reading, you will understand why. It’s been said that mathematical reasoning can be like navigating a series of dark rooms, where you fumble around for the light switch until you finally find it, turn on the light and can proceed to the next room. In terms of this analogy, the rooms we can even enter are very far removed from anything resembling the grand prize. This doesn’t mean we should just give up. As Pólya said, “If you can’t solve a problem, then there is an easier problem you can solve: find it.” That is, enter the first dark room with a sigh, and work from there.
What this means in practice, and what we do below, is that first, instead of trying to launch fancy “exploratory” attacks on the full RC4, we focus on the simpler task of retrieving the key from the KSA-generated state; and second, we turn our attention to toy versions of the cipher, where a statistical attack is trivial to launch and if you only cared about “a break no matter how” there would simply be no challenge. These toy versions are the simplest versions of RC4 that we could find where the relationship between the generated KSA state and the original key is not degenerate and doesn’t have an immediate closed-form solution — so that we actually have to do some thinking, which is the point of the exercise. Below we tackle two toy RC4 variants, which according to our internal scheme we have designated “RC2zz” and “RC2pz” (see addendum if you really care where we got these names).
The mathematical claims below are preliminary and subject to possible modifications as errata are discovered. This is a diplomatic way of saying that we have programmatically verified that these results are not a hallucination and the described attacks do in fact apply to several test cases, but if you find out that actually in claim 7.5.1 we’ve neglected to mention that the function is not well-defined unless some parameter is even, or some such, please let us know.
This is one of the simplest RC-like KSAs that do not allow a completely trivial recovery of the key from the generated state:
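In a Python sketch (names are ours; the swap-with-index-0 behavior is spelled out in the next paragraph):

```python
def ksa_2zz(key):
    # RC2zz toy KSA: 256 swaps between index 0 and the index given by
    # the current key byte, cycling through the key as needed
    S = list(range(256))
    for i in range(256):
        j = key[i % len(key)]
        S[0], S[j] = S[j], S[0]
    return S
```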
The initial state is obtained by swapping 0 with the key’s 0th byte, then 0 again with the 1st byte, then 0 again with the 2nd byte, and so on, where upon running out of key bytes we loop back again to the 0th byte. Now, a good, old-fashioned attack that doesn’t care about pretty mathematics, and simply goes for the jugular, would make short work of this KSA. To see why, consider the simple, mediocre key “key”. The algorithm proceeds by swapping 0 and 107 (k), then 0 and 101 (e), then 0 and 121 (y), then 0 and 107 again, and so forth and so on until 256 swaps are performed. When the initial state is fully generated, all elements that aren’t 0, 107, 101 or 121 have not been swapped away — we will have, for example, S[5] = 5 and S[200] = 200. A simple scan of the final permutation will typically make it obvious which bytes other than 0 were a part of the key, and which were not. From there, we can start a brute-force search for keys composed of these three bytes and optionally 0, with possible repetition. There are infinitely many such keys, but only finitely many possible permutations, so it won’t be long until we hit a valid key (that is, one that generates the desired permutation) by pure chance. What all this means is that using brute force and statistics, there is barely anything to discuss here.
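A sketch of this scan-then-brute-force attack, with our own toy rendition of the KSA inlined:

```python
from itertools import product

def ksa_2zz(key):
    # the toy KSA: swap index 0 with the index given by each key byte in turn
    S = list(range(256))
    for i in range(256):
        j = key[i % len(key)]
        S[0], S[j] = S[j], S[0]
    return S

target = ksa_2zz(b"key")

# scan: any index still holding its original value was never swapped,
# so the key bytes hide among the displaced indices (plus possibly 0)
suspects = [x for x in range(256) if target[x] != x or x == 0]

def recover(target, suspects, max_len=4):
    # brute force: try ever-longer keys over the suspect alphabet
    for length in range(1, max_len + 1):
        for cand in product(suspects, repeat=length):
            if ksa_2zz(bytes(cand)) == target:
                return bytes(cand)
    return None

found = recover(target, suspects)
```

Since the suspect alphabet here has at most four bytes, the search space up to length 3 is tiny, and some valid key is found almost immediately.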
Now, let us deliberately complicate our own lives, and engage with the KSA in terms of its mathematical structure. To do that, we must first learn the mathematical language used to describe and manipulate the basic building blocks that the KSA uses — swaps and permutations. Permutations are usually studied in Group Theory 101; Group Theory studies invertible operations, and it turns out that A. swapping stuff around is an invertible operation, and B. whenever you perform an invertible operation, you are, actually, swapping stuff around (this latter fact is called “Cayley’s Theorem”). In this context, you might see a permutation P described like so:

(0 3 5 6)

This is called putting the permutation in “cyclic form”; the example above specifies P(0) = 3, P(3) = 5, P(5) = 6 and P(6) = 0 (and P(x) = x for all other x). Take a minute to convince yourself that any permutation on a finite number of elements can be put in cyclic form (simply write x, P(x), P(P(x)), … until you are back to x again, then start with the smallest element that hasn’t yet appeared (let’s say it’s 7) and write its cycle again: (7 P(7) …); keep adding these cycles until you are out of elements).
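The cyclic-form decomposition is a few lines of Python (representing a permutation as a list P with P[x] the image of x):

```python
def cyclic_form(P):
    # decompose the permutation P (a list: x -> P[x]) into cycles,
    # starting each cycle at its smallest unvisited element and
    # dropping fixed points (cycles of length 1)
    seen = set()
    cycles = []
    for start in range(len(P)):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = P[x]
        if len(cycle) > 1:
            cycles.append(tuple(cycle))
    return cycles
```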
A simple swap of two elements a and b (a “transposition”) is also a permutation, and its cyclic form is (a b). It’s handy to define “multiplication” for permutations, such that we can write τσ and mean “execute σ and then τ” (if this seems backwards to you, just recall that in algebra f(g(x)) also means “apply g first and f second”; the definition here works the exact same way, and for the same reason. If this still seems backwards to you, we apologize). Armed with this notation, we can now explicitly write the permutation state generated by the KSA above:

P = (0 k_255)(0 k_254) ⋯ (0 k_1)(0 k_0)
Where, again, the transpositions are applied from right to left, and k_i is actually shorthand for “k at the index i mod len(k)”. This is abuse of notation, but writing out the modulo operation every time is clunky, and in this case there is little danger of confusion (if the key is shorter than 256 bytes, it’s not clear what other definition the subscript operator could even have).
Using the above KSA, the key “soloing” generates a permutation with one cycle of length 4, and another of length 2, where the element 0 appears in neither. We will write this out in more concise notation: <1{4: 1, 2: 1}>, meaning: the element 0 appears in a cycle of length 1; of cycles of length 4, there is 1; of cycles of length 2, there is 1; all other cycles are of size 1. Cycles of size 1 are just elements that the permutation “doesn’t touch” — you could say that they stay put, or that they are fixed points of the permutation, or that they satisfy P(x) = x, whichever is your favorite way to see it.
For this KSA, it is particularly useful to look not at the generated permutation itself, but instead at its “general shape” — the number of cycles of each length involved, and the position of 0 within them (we will call this sum of information the permutation’s “signature”). We can obtain a variety of different permutation signatures from different keys. The key “rarer” gives a permutation with signature <2{2:2}>; “abacus” gives <1{4:1,2:1}>; and “calmly” gives <3{3:2}>. Actually, “boldly” also gives <3{3:2}>, as do “alibis” and “canine”. If you squint at this “coincidence” hard enough, a general intuitive suspicion might start to form in your mind, which can be expressed either formally or clearly. Let us first express it formally:
💡 DEFINITION 1.1 Two keys K_1, K_2 are said to be 2zz-conjugate if there exists some permutation σ such that σ(K_1) = K_2 and σ(0) = 0. Permutations are applied to keys in the obvious way, byte by byte; e.g. applying a permutation that maps k ↦ r and e ↦ n, and fixes y, to the key “key” yields “rny”.
💡 DEFINITION 1.2 Two permutations P_1, P_2 are said to be 2zz-conjugate if there exists some permutation σ such that σP_1σ⁻¹ = P_2 and σ(0) = 0.
This isn’t something we completely made up: the concept of “conjugate permutations” exists in the wide and beautiful mathematical world outside of this article, somewhere in chapter 3 or so of the aforementioned Group Theory 101 course. But that other definition is slightly different from the one we have above: σ(0) = 0 is not required there. This is basically because the 2zz-KSA in some important sense “cares” more than your usual permutation whether a number is 0 or not, seeing as the number 0 is hardcoded into the 2zz-KSA, so if we try to prove anything interesting about the 2zz-KSA using plain vanilla conjugacy, we will probably fail. To wit, two permutations can be perfectly conjugate by the normal, vanilla definition of conjugacy, while the RC2zz keys required to generate them have a completely different underlying structure. You might want to stop and convince yourself that if two permutations are 2zz-conjugate, then they are also plain vanilla conjugate per the definition from Group Theory 101 (in mathematical terms, this makes 2zz-conjugacy what’s called a “refinement” of vanilla conjugacy). With our concept of 2zz-conjugacy we are able to make the following claim:
💡 CLAIM 1.1 Let K be a key that generates a permutation P. Then there exist a key K′, a permutation σ and a permutation P′ with the following properties: σ(K′) = K and σ(0) = 0 (that is, K and K′ are 2zz-conjugate via σ); K′ generates P′; and P = σP′σ⁻¹.
What this means, in plain English, is that we can look at a key such as “sheep” and immediately disregard the specific letters s, h, e, and p being used. The 2zz-KSA will proceed in exactly the same way if the key were, let’s say, “pgjjq” instead, because there exists σ with σ(“sheep”) = “pgjjq” — the two keys are 2zz-conjugate. We take all keys conjugate to “sheep” and put them in a giant pile, which we’ll name “abccd” after the pile member that’s first alphabetically. Then it’s enough that we compute the permutation P′ generated by “abccd” once; for every key K in that same conjugate pile, we can take a shortcut: instead of running the KSA on K, compute the correct σ with σ(“abccd”) = K, then immediately obtain the permutation generated by K as σP′σ⁻¹. This is not a whitepaper and so we won’t include a proof here, but you might become convinced of this if you follow the two permutation constructions for the two keys “abccd” and “sheep” step by step, and verify for yourself that at the cyclical structure level, they proceed the same way. At a very hand-wavey level, we might say that permutation-signature-wise, the 2zz-KSA doesn’t “care” about the actual value of each byte, only whether two bytes are identical and whether any of them is 0.
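Computing each pile’s name is an easy exercise: relabel the nonzero bytes by order of first appearance, with 0, being special, staying put. A sketch:

```python
def pile_name(key: bytes) -> bytes:
    # canonical 2zz-conjugacy representative: 0 stays 0, other bytes are
    # relabeled 'a', 'b', 'c', ... in order of first appearance
    # (letter labels are for readability and assume a short, letter-y key)
    mapping = {0: 0}
    next_label = ord('a')
    out = []
    for b in key:
        if b not in mapping:
            mapping[b] = next_label
            next_label += 1
        out.append(mapping[b])
    return bytes(out)
```

For instance, both `pile_name(b"sheep")` and `pile_name(b"pgjjq")` come out as `b"abccd"`.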
If that still doesn’t sound perfectly clear, we are happy to announce that in fact you can forget all about it and instead stare at the below pretty picture, which illustrates what happens after 4 steps of the 2zz-KSA on a 4-byte key:
(⚪🔵⚪🔴): (0🔴)(0⚪)(0🔵)(0⚪) = (0🔴)(⚪🔵)
(🔵🔴🔵⚪): (0⚪)(0🔵)(0🔴)(0🔵) = (0⚪)(🔵🔴)
(🔴⚪🔴🔵): (0🔵)(0🔴)(0⚪)(0🔴) = (0🔵)(🔴⚪)
Assuming all colorful circles are nonzero and different from each other (and remember, transpositions are applied from right to left). That last row has the lexicographically minimal representative. Once you realize what is going on here and extend the result in your mind’s eye to the entire 256 iterations, you have understood everything so far, and should also not have trouble with the following convenient result:
💡 CLAIM 1.2 Let K be a key that 2zz-generates a permutation P, and let Q be conjugate to P via σ (that is, Q = σPσ⁻¹ with σ(0) = 0). Then σ(K) generates Q.
What this means is that when looking for a key that generates P, it is enough to find a key that generates a permutation conjugate to P. Again, a proof will not be included here, but the principle behind it is very similar to the one behind claim 1.1.
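Claim 1.2 is easy to spot-check numerically (our own sketch, with the toy KSA inlined):

```python
import random

def ksa_2zz(key):
    # the toy KSA: swap index 0 with the index given by each key byte
    S = list(range(256))
    for i in range(256):
        j = key[i % len(key)]
        S[0], S[j] = S[j], S[0]
    return S

random.seed(2024)
key = [random.randrange(1, 256) for _ in range(6)]

# a random sigma with sigma(0) = 0
rest = list(range(1, 256))
random.shuffle(rest)
sigma = [0] + rest

# left-hand side: the KSA applied to the conjugated key sigma(K)
left = ksa_2zz([sigma[b] for b in key])

# right-hand side: sigma o KSA(K) o sigma^-1
P = ksa_2zz(key)
right = [0] * 256
for x in range(256):
    right[sigma[x]] = sigma[P[x]]
```

The two sides agree, as the claim predicts.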
While constructing a feasible practical attack isn’t our main goal here, it’d do us well to consider it anyway so as to ground the discussion. So how can we use all the above philosophizing to our advantage when attempting to extract a key from an initial permutation state? We can construct a precomputed table that, for each permutation signature, records the shortest key that generates that signature and is lexicographically minimal among its conjugates. This is best done by iterating over every possible lexicographically minimal representative key, running the KSA, noting the result, and recording an entry for the resulting signature if none exists yet. So, for instance, we might run the KSA with the key “abcdce”, obtain the result <3{3:2}> (see above with “calmly”, “boldly”, “alibis”) and update the table — <3{3:2}> : “abcdce” — if we haven’t yet found a shorter key that generates a permutation with that signature.
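The table-building loop, sketched for very short keys (our own signature encoding: the length of 0’s cycle, plus a tally of all cycle lengths above 1):

```python
def ksa_2zz(key):
    # the toy KSA: swap index 0 with the index given by each key byte
    S = list(range(256))
    for i in range(256):
        j = key[i % len(key)]
        S[0], S[j] = S[j], S[0]
    return S

def signature(S):
    # our encoding: (length of the cycle containing 0,
    #                sorted (length, count) pairs over cycles longer than 1)
    seen, zero_len, tally = set(), 1, {}
    for start in range(256):
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x = S[x]
            n += 1
        if start == 0:
            zero_len = n
        if n > 1:
            tally[n] = tally.get(n, 0) + 1
    return (zero_len, tuple(sorted(tally.items())))

def minimal_keys(length):
    # lexicographically minimal 0-free representatives: restricted-growth
    # strings over the bytes 1, 2, 3, ...
    def rec(prefix, next_new):
        if len(prefix) == length:
            yield tuple(prefix)
            return
        for v in range(1, next_new + 1):
            # picking v == next_new introduces a fresh byte
            yield from rec(prefix + [v], next_new + (v == next_new))
    yield from rec([], 1)

table = {}
for length in range(1, 4):
    for key in minimal_keys(length):
        # keep the first (hence shortest) key seen for each signature
        table.setdefault(signature(ksa_2zz(key)), key)
```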
Alas, we have to contend with the question of how large this table is, and how long it will take us to compute. We might try to bound the table size from above using the number of different permutation signatures — but the number of conjugacy classes of the permutation group on 256 elements is a 48-bit number. That means a database weighing petabytes, more on the “extremely determined nation state” side than the “bored student” side. Thankfully, we can do a lot without scratching this bound: after all, a single brute force over all minimal keys up to a given size generates a database that can then extract all such keys from the initial permutation. So, we can limit ourselves to 128-bit (16-byte) minimal keys, and maybe get a better bound.
How much better? Given some natural number n, how many keys are there of length n that are lexicographically minimal among their conjugates? To tackle this question we steeled our hearts, recalled all the wonderful tools of combinatorics at our disposal, then promptly dismissed them and instead wrote a script to manually count the possibilities, which we then cross-referenced with the Online Encyclopedia of Integer Sequences. This search produced OEIS A024716, a double sum over Stirling numbers of the second kind. The 16th entry in the sequence, corresponding to 16-byte (128-bit) keys, is a 33-bit number. That’s very feasible.
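The script amounts to counting restricted-growth strings (assuming, as in our sketches, 0-free keys, so that a canonical representative is determined by the equality pattern of its bytes alone):

```python
def count_minimal_keys(n):
    # number of length-n keys that are lexicographically minimal among
    # their 2zz-conjugates, assuming no key byte equals 0: these are
    # exactly the restricted-growth strings, counted by the Bell numbers
    def rec(remaining, distinct_so_far):
        if remaining == 0:
            return 1
        total = 0
        # either reuse one of the bytes already seen, or introduce a fresh one
        for choice in range(distinct_so_far + 1):
            total += rec(remaining - 1,
                         distinct_so_far + (choice == distinct_so_far))
        return total
    return rec(n, 0)

# cumulative counts over all key lengths up to n
cumulative = [sum(count_minimal_keys(k) for k in range(1, n + 1))
              for n in range(1, 9)]
```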
So that’s a theoretically possible attack with some forced mathematics, but the really interesting thing would be to go all the way, and find an algorithm that retrieves a short minimal key from the permutation signature, instead of relying on a precomputed table. But it’s not immediately clear how to go about that. In fact, we can posit a generalized “RC2zz problem”: given a conjugacy class C of a permutation group on n elements (n = 256 in our case) and an integer m, does there exist a key shorter than m bytes that generates a permutation in C via a generalized RC2zz KSA that runs for n iterations instead of a fixed 256? If handed the correct key, it is straightforward to verify that a permutation is generated as desired, but how well can we mortals produce such a key from scratch? Is there a polynomial-time algorithm to do this, or is it maybe provably difficult?
As a point of interest, the insights above carry over even if we generalize the cipher a little: take two public parameters p1 and p2, pin the swaps to index p1 instead of index 0, and shift each key byte by p2 before using it. Instead of transpositions of the form (0 k_i) we now deal with transpositions of the form (p1 k_i+p2). The above analysis remains applicable, except that the “special element” being transposed with is now p1 instead of 0, and all key bytes are shifted by the constant p2.
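In the same sketch style as before, a hypothetical rendition of this generalized KSA (the exact parameter placement is our guess at the intended generalization):

```python
def ksa_2zz_general(key, p1, p2):
    # generalized toy KSA: the pinned index is p1 (instead of 0), and
    # every key byte is shifted by the public constant p2 before use
    S = list(range(256))
    for i in range(256):
        j = (key[i % len(key)] + p2) % 256
        S[p1], S[j] = S[j], S[p1]
    return S
```

With `p1 = 0` and `p2 = 0` this reduces to the plain RC2zz KSA.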
Now that we’re comfortable with permutations, cyclic forms and conjugacy, we can carefully move an inch up the ladder of complexity and address a slightly more challenging KSA. We call this variant RC2pz, and the initialization goes as follows:
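Our Python rendition of the RC2pz initialization (same sketch conventions and names as before):

```python
def ksa_2pz(key):
    # almost RC2zz, except j now accumulates key bytes (the '=' became '+=')
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + key[i % len(key)]) % 256
        S[0], S[j] = S[j], S[0]
    return S
```

For instance, `ksa_2pz([1, 255])` comes out as the identity permutation (the swaps cancel in pairs), while `ksa_2pz([1, 14])` does not.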
The sharp-eyed among you will notice that this is almost RC2zz. The one difference is that the plain assignment (=) to the running index is now an addition assignment (+=). You might rightfully wonder how much this could complicate the analysis, but the answer is, unfortunately, “very”. For one thing, with this one innocent-seeming tweak, all the convenient properties due to 2zz-conjugacy up and disappear. For example, the two keys k1 = [1, 14] and k2 = [1, 255] are 2zz-conjugate, but try to put both through the KSA and their behavior diverges very quickly — by the 2nd iteration k1 performs an actual swap, while k2 performs the “swap” of S[0] with itself, a non-operation. Let’s do what comes naturally and define a new and more suitable notion of 2pz-conjugacy:
💡 DEFINITION 2.2 Two keys K_1, K_2 are said to be 2pz-conjugate if: (1) for every two indices i and j, the running sum of K_1 is equal at i and j exactly when the running sum of K_2 is equal at i and j; and (2) for every index i, the running sum of K_1 equals 0 at i exactly when the running sum of K_2 equals 0 at i (keys, as usual, extended cyclically).
(1) means, plainly, that if the running sum of K_1 is equal at indices i and j, then the same is true for K_2, and vice-versa (all sums are modulo 256 — this is the last time we’re going to explicitly write this down). A reader with an eye for analogies might suggest we could drop condition (2) for a more “vanilla flavor” 2pz-conjugacy. The result isn’t something you find in any mathematical textbook we’re aware of, but indeed it is closer to something you might find in one, hypothetically, and the further analysis below will mainly deal with this “weak” 2pz-conjugacy for the sake of simplicity. Still, if we try to extrapolate this definition to a full attack similar to the one launched on 2zz, we run into a wall quickly. First, the naive approach to check if two keys are conjugate was straightforward for 2zz, and here it has shot up in complexity (one might toy with analyzing exactly how intractable it is, but soon we will propose a less naive approach, anyway). Second, and more importantly, the number of conjugacy classes veritably explodes. Consider that [1], [2], [3], all the way to [255], are all 2zz-conjugate keys. In contrast, [1], [2] and [4] are each in a distinct 2pz-conjugacy class already!
All we did to move from 2zz to here is a tiny gradation up the ladder of complexity, replacing one = with a +=. We didn’t even introduce any feedback that makes the swaps themselves dependent on the current state (such as j += S[i]). And yet our entire approach from before is basically useless now. All we have is maybe some better intuition about how to approach the problem. Imagine attacking the actual real-life RC4 KSA, which sits at the top of a long tower of such complexity gradations and has j = j + S[i] + k[i % len(k)], with an auto-increment of i.
Now, and this is the real beauty of the thing, if we stop forcing mathematics into it and allow an attacker to “eye it” and use statistical reasoning, RC2pz is laughably easy to break. This is because one property of swaps and permutations when put in cyclical form is that if x_1, x_2, …, x_m are all different then:

(0 x_m) ⋯ (0 x_2)(0 x_1) = (0 x_1 x_2 ⋯ x_m)

This implies that if we look at the cycles of the final generated permutation, usually a lot of them are going to contain sequences of successive running sums, whose term-to-term differences are simply key bytes — and thus if we take the sequence of differences from one term in the cycle to the next, we will run into a lot of pretty key bytes sitting in a row, waiting to be harvested. Allowing for a bit of trial and error, it’s not very complicated to proceed from there. Alas, we’re not here to shout “boom, broken”, we are here to handicap ourselves, disallow such vulgar methods, and hopefully enhance our mathematical understanding.
So what can we do? We’ve convinced ourselves that naively enumerating all the indices where the running key sum repeats isn’t a good enough approach. Perhaps we can make progress by having a better understanding of how these “repeats” behave. We can start by convincing ourselves of the following claim:
💡 CLAIM 2.1 Let K be an RC2pz key of length ℓ, and denote s_i = k_0 + k_1 + ⋯ + k_i (with the subscript-mod-ℓ shorthand from before, and all sums mod 256). Note that this implies the first swap will be (0 s_0), the second (0 s_1), and so on. Then there exists a function f such that for all i, we have s_{i + f(i mod ℓ)} = s_i, and further f(i mod ℓ) is minimal with this property. The case f(i mod ℓ) = ∞ should be understood to mean that s_j ≠ s_i for all j > i. For convenience let us generally define a function V with V_i = f(i mod ℓ).
This is less complicated than it sounds. All it is saying is that after the i-th iteration of the KSA, it is enough to consider the value of i mod ℓ to understand how many iterations the KSA will go through until the running sum reaches the same value again (it might help to note that s_i is the variable j in the initial code listing in this section). Again, we won’t do any proofs here, but as far as proofs go, this one is not so complicated. The value of s_i will repeat once the sum of key bytes from that point on is exactly 0 mod 256, and if you sum bytes starting from a given position in the key (and loop around once you run out of key), the starting position determines the exact sequence of bytes participating in the sum, which in turn determines how many of them must be collected until the sum hits 0.
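Computing the values directly from the definition is straightforward (our own sketch; `value(key, i)` returns None for the “infinite” case, where the running sum never returns):

```python
def value(key, i):
    # V_i: the least d > 0 with s_{i+d} = s_i, i.e. the least d such that
    # the d key bytes after position i sum to 0 mod 256; the running sum
    # is periodic within 256 * len(key) steps, so searching that far is enough
    ell = len(key)
    s = 0
    for d in range(1, 256 * ell + 1):
        s = (s + key[(i + d) % ell]) % 256
        if s == 0:
            return d
    return None
```

For a 1-byte key [1] the sum must wrap all the way around, so the value is 256; for [16] it is 16; for [1, 255] the two bytes cancel, so the value is 2.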
In fact, V_i is not an arbitrary number. We now take a deep breath and state
💡 CLAIM 2.2 Let K be an RC2pz key of length ℓ, and write the sum of its bytes as Σ = 2^a · b with b odd. Fix some i. Then there exists some partial sum d of consecutive key bytes (comprising r bytes, 0 ≤ r < ℓ, and allowed to loop around the key) such that 2^a divides d, and V_i = ℓ · (d / Σ) − r, with the division performed modulo 256 taking the smallest nonzero possible quotient (so, for example, 128 / 192 = 2, since 192 · 2 ≡ 128, even though larger solutions exist as well). Conversely, any such partial sum d divisible by 2^a, when evaluated in the formula above, will result in some V with s_{i+V} = s_i, though it is not guaranteed to be minimal.
The above merry jumble of symbols probably requires another illustration involving colorful circles, which we provide below. Suppose for example that for the key 🔵🔴⚪ we have V_0 = 19, that is, s_19 = s_0. This implies that if we sum 19 consecutive key elements (starting right after position 0, and looping around the key as needed), the result is 0:

🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 ≡ 0

By simple addition of ⚪ + 🔵 to both the right-hand side and the left-hand side, this implies:

🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 + 🔴 + ⚪ + 🔵 ≡ ⚪ + 🔵

Now the left-hand side is just a multiple of the sum of all key bytes:

7 × (🔴 + ⚪ + 🔵) ≡ ⚪ + 🔵

And more generally, by the same logic, every instance of s_{i+V} = s_i corresponds to some equation m × (🔴 + ⚪ + 🔵) ≡ (some partial sum of key bytes) with an actual solution m (and by reasoning backwards, it is not too difficult to see that every such equation with an actual solution produces an instance of s_{i+V} = s_i). Subsequences are allowed to loop around the end of the key and back to the beginning; actually, that funny term in the formula that appears to come out of nowhere is just some machinery to express that.
All that’s left is to apply a basic fact of number theory, which is that the equation c · x ≡ d (mod n) has solutions if and only if gcd(c, n) divides d. Claim 2.2 then immediately results, including a guarantee that the modulo-256 division has a valid result. That entire unwieldy-looking formula for V_i is just bean counting to figure out the exact number of pretty circles implied by the relationship above, and the decomposition of the key byte sum into 2^a · b with b odd is just an artifact of computing its gcd with 256.
If you don’t find this proof by hand-waving convincing enough, we offer some additional hand-waving in the form of an honest-to-god example of Claim 2.2 at work, using a 3-byte RC2pz key of ours. We computed the values for this key directly from the definition (and verified them manually); the 1st of them came out as V_1 = 154. To recover it from Claim 2.2 instead: the sum of the key bytes Σ decomposes as 2^a · b with b odd, and for the right choice of partial sum d — one divisible by 2^a — the division d / Σ modulo 256 (taking the smallest nonzero solution) yields a quotient. We multiply that quotient by the key length (3), then subtract the number of bytes participating in d, which in this case is 2, and out comes 154, the correct value (that is, we have s_155 = s_1, and 154 is minimal with this property). The same recipe, with different choices of d, reproduces the other values as well.
We can now proceed to
💡 CLAIM 2.3 Two keys are weakly 2pz-conjugate if and only if the infinite extensions of their values are equal, where the infinite extension of (V_0, …, V_{ℓ−1}) is defined to be the periodic sequence (V_0, …, V_{ℓ−1}, V_0, …, V_{ℓ−1}, …).
This is, again, not so complicated. The only reason we had to introduce this whole “infinite extension” business is that someone could have said “a-ha! I have a counterexample: here are two 2pz-conjugate keys, but one of them has a value sequence of length two, and the other of length four!”. So if you are upset about claim 2.3 not just ending with the word “equal”, blame that person.
At this point we are juggling 3 different sets of properties that can encode some information about a key. At the top is the key itself which, obviously, encodes the entire information about the key. Farther down – the key’s values. Even farther down from that — the structure of the resulting permutation after being put through the RC2pz KSA. In this list of points A, B and C, we’ve just seen how to move from A to B, and it should be straightforward to move from A to C efficiently by simply running the RC2pz KSA on the key. But what about moving from B to C, or backwards, from C to B or from B to A? If we could do all of these, it would constitute a full attack on RC2pz: look at a generated permutation, fish out a sequence of values that generates it, then investigate that sequence to recover the key.
Happily, the climb from B to A is possible (which immediately hands us B to C as well). Given that seeing is believing, recall the list of values we just used, and observe the following sleight of hand: each value is, by definition, a linear constraint on the key bytes. The 1st value of the key being 154 means, by definition, that the 154 key bytes summed starting right after position 1 add up to 0 modulo 256. Since 154 = 3 · 51 + 1, each of the three key bytes participates 51 times, and one of them once more, which gives us the first equation. Similarly for the two other ones. So, given the values, the problem of recovering possible original keys is reduced to the known-to-be-manageable problem of solving the resulting system of linear equations modulo 256 — equivalently, computing the kernel of the coefficient matrix, which contains every valid key.
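The bean counting behind each such equation is mechanical (our own helper; it counts how many times each key byte participates in the constraint induced by one value):

```python
def equation_row(ell, i, V):
    # coefficients of the key bytes in the linear equation induced by
    # value V at position i: count how many times each key position
    # occurs among the V bytes summed starting right after position i
    row = [0] * ell
    for t in range(i + 1, i + V + 1):
        row[t % ell] += 1
    return row
```

For a 3-byte key, the value 154 at position 1 yields the row [51, 51, 52].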
What about moving from C to B — that is, given a permutation signature, recovering a possible value sequence? It’s not difficult to work out some really simple cases by hand; for example, for a 1-byte key [i], the signature of the 2pz-KSA generated state can be determined directly from the GCD of the byte with 256:

| gcd(i, 256) | signature of 2pzKSA([i]) |
| --- | --- |
| 1 | <256{256: 1}> |
| 2 | <64{64: 2}> |
| 4 | <16{16: 4}> |
| 8 | <4{4: 8}> |
| 16 | <1{}> |
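The first and last rows of this table are easy to check programmatically (our own sketch, with the toy RC2pz KSA inlined):

```python
def ksa_2pz(key):
    # the RC2pz toy KSA: j accumulates key bytes
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + key[i % len(key)]) % 256
        S[0], S[j] = S[j], S[0]
    return S

def zero_cycle_length(S):
    # length of the cycle of the permutation S that contains the element 0
    n, x = 0, 0
    while True:
        x = S[x]
        n += 1
        if x == 0:
            return n
```

For the key [1] (gcd 1) the whole state is one 256-cycle, matching <256{256: 1}>; for the key [16] (gcd 16) the swaps cancel out entirely and we get the identity, matching <1{}>.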
But as the key size grows this relationship gets very complicated, very quickly. Here are some choice signatures generated by various 3-byte keys:

<9{9: 1, 23: 1, 48: 1, 24: 2}>
<59{59: 1, 37: 1}>
<23{23: 1, 83: 1, 54: 1, 53: 1}>
<8{8: 1, 64: 1, 39: 1, 26: 2}>
As with the analogous problem for RC2zz, this is probably the toughest nut to emerge out of the entire approach; we do not have a closed-form solution for it and can only route around it with precomputed tables and other such dishonorable kludges. Still, we were pleasantly surprised to learn that the values that determine the proper concept of 2pz-conjugacy are so well-behaved with respect to the key, and we’re more excited than frustrated to have stumbled upon this interesting problem.
Given that the angle of making an attack work with some probability on the full RC4, using whatever method, is relatively well-covered by prior work, we are naturally drawn to instead linger on the combinatorial questions left to us even by the above analysis of very weak RC4 variants. “Given a permutation signature, recover a valid conjugacy class representative (for RC2zz) / value sequence (for RC2pz) that will generate it via the corresponding KSA” — is this problem tractable, and by what means? Is there an even more closed form for deriving RC2pz values from the key, instead of trying all possible values of two indices and seeing which ones produce the smallest solutions? We hope that we have managed to show some of the magic of permutation groups, and to stimulate some curiosity about very weak RC4 variants. Now, excuse us, we’re off to cleanse our palate with some CTF exercises where no one cares how “properly motivated” your solution is, and getting the flag is the only thing that matters.
This is a short, informal explanation for readers who are comfortable with linear algebra and are really curious where we got “RC2pz” from. The “2” is the easiest to explain: in RC4 itself and in all variants explored above, all manipulations on the permutation state are performed using 2 variables, i and j, but in principle one could use 3 or more; this notation is there to allow for that. p and z stand for matrix types out of the following table:
| Letter | Matrix type |
| --- | --- |
| z | Zero |
| p | At most one nonzero element, equal to 1 |
| q | At most one nonzero element |
| b | All nonzero elements equal to 1 |
| i | Identity |
| c | Scalar |
| d | Diagonal |
| f | Any |
Denote the current permutation state S and the current state vector v (the vector of index variables, e.g. (i, j)). Each iteration of the KSA, a new state vector is computed as a sum of affine transforms — one on v, one on the current key byte (applying entrywise), one on S, et cetera — though in the text above we only get as far as an affine transform on v itself. This transform requires a matrix and a vector to express. For instance, in RC2pz, the next state vector is an affine function of the previous one whose ingredients are known public cipher parameters — the matrix has at most one nonzero element, which equals 1, and the added vector is the zero vector. This defines a family of ciphers that includes RC2pz as exposited in the main text above, as well as some more trivial variants that we didn’t want to mention in the main text to avoid needless complication.