> 
> > I am not clear on what your algorithm is; could you be more explicit?
> 
> Perhaps that is because my previous explanation was shoddy.
> 
> > Also, I do not see that you have given an argument about the difficulty
> > of breaking it in a reasonable length of time.
> 
> I will try to be more explicit on this also.
> 
> I should warn you that I have NOT spent my life studying encryption;
> my main expertise is in computational complexity.  I would be
> interested in your comments on my algorithm, especially if you are
> better informed about encryption than I am.
> 
> As I understand it, a good encryption or random number generation algorithm
> must be based on a problem which is computationally difficult to solve.
> Actually, I want something stronger:  it should be computationally 
> difficult to even make statistical statements about the solution other
> than obvious zero-knowledge statements.
> 
> For many NP-complete problems it is easy to approximate the solution
> or to make strong statistical statements about it (for example, 3-SAT
> is easy to approximate by simulated annealing, and it is easy to guess
> most of the variables with high probability of correctness).
> 
> I choose the following problem as my foundation:
> 
>   Solve a set of N simultaneous 2nd-degree equations in N variables,
>   mod 2.  Every variable is a bit which can take on only the values
>   0 and 1.
>   
> Here is an example problem for N = 3:
> 
>   x*y+z=1
>   x*z+y*z+z=0
>   x+y+z=0
> 
> This system has 3 solutions:
> 
>   x=0, y=1, z=1
>   x=1, y=0, z=1
>   x=1, y=1, z=0
> 
> The general problem is easily shown to be NP-complete (reduction from SAT).  
>   
> An NxN system can be regarded as an N-bit to N-bit mapping.  For example:
> 
>   x*y+z     = out1
>   x*z+y*z+z = out2
>   x+y+z     = out3
> 
> x, y, z are inputs; the out[i] are outputs.
> 
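> To make the mapping concrete, here is a minimal Python sketch (my own
> illustration, not part of the proposal) that tabulates the example
> system as a 3-bit to 3-bit mapping and then inverts it by brute force:
> 
>   from itertools import product
> 
>   def F(x, y, z):
>       # the example system above, evaluated mod 2
>       return ((x*y + z) % 2, (x*z + y*z + z) % 2, (x + y + z) % 2)
> 
>   for v in product((0, 1), repeat=3):
>       print(v, "->", F(*v))
> 
>   # Brute-force inversion takes 2^N evaluations; for outputs (1, 0, 0)
>   # it recovers the three solutions listed earlier.
>   print([v for v in product((0, 1), repeat=3) if F(*v) == (1, 0, 0)])
> 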
> The outputs behave erratically as a function of the inputs, and the
> solution (the inputs) behaves erratically as a function of the outputs.
> Approximating the general problem is also NP-hard (the proof is a bit
> more involved).  It seems that solving most systems requires 2^N time.
> The standard attacks for reducing the exponential growth factor --
> simulated annealing and branch-and-bound -- are obviously useless here
> because of the erratic behavior.  (Perhaps some algebraic attack might work?)
> 
> For a random number generator, we set up a machine with N bits of state.
> We produce N+1 dense (non-sparse) 2nd-degree mod-2 functions, perhaps
> using true physical randomness to generate the coefficients.  We seed
> the state of the machine somehow.  The machine iterates as follows:
> 
>   Fout(state1,state2,....,stateN) = output bit
>   F1(state1,state2,....,stateN) = next state1
>   F2(state1,state2,....,stateN) = next state2
>     ...
>   FN(state1,state2,....,stateN) = next stateN
> 
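> To make this concrete, here is a minimal Python sketch of the machine.
> The choice N = 32 and the use of Python's random module (standing in
> for true physical randomness) are my own illustrative assumptions:
> 
>   import random
> 
>   N = 32
> 
>   def random_quadratic(n):
>       # one dense 2nd-degree mod-2 function: a constant term, linear
>       # terms lin[i]*x[i], and quadratic terms quad[i][j]*x[i]*x[j]
>       const = random.getrandbits(1)
>       lin = [random.getrandbits(1) for _ in range(n)]
>       quad = [[random.getrandbits(1) for _ in range(n)] for _ in range(n)]
>       def f(state):
>           acc = const
>           for i in range(n):
>               if state[i]:
>                   acc ^= lin[i]
>                   for j in range(i + 1, n):
>                       if state[j]:
>                           acc ^= quad[i][j]
>           return acc
>       return f
> 
>   Fout = random_quadratic(N)                         # output function
>   F = [random_quadratic(N) for _ in range(N)]        # F1 .. FN
>   state = [random.getrandbits(1) for _ in range(N)]  # seed somehow
> 
>   def step():
>       out = Fout(state)               # emit one output bit
>       new = [f(state) for f in F]     # all next-state bits computed
>       state[:] = new                  #   from the old state at once
>       return out
> 
>   bits = [step() for _ in range(100)]
> 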
> The argument that this machine is hard to break is straightforward.
> Even if the adversary knows the state of the machine at time T, and the
> output bit it generated, he cannot solve for the previous state of the
> machine without solving an NxN system, which seems to require 2^N time.
> In fact the adversary doesn't know the state of the machine at any
> time; all he has is a long (say O(N)-bit) string of outputs it
> generated.  This information is strictly less useful for solving for
> the state at time T-1 than knowing the state at the beginning of the
> string (say at time T) would be.  He must solve for the state of the
> machine at some time in order to predict the remaining output bits.
> 
> ------------------------------------------------
> 
> For this to be a good procedure, we need that the sequence of output
> bits is not only hard to break, but random in a fairly strong sense.
> I cannot see from this that even one bit is close to equally likely
> to be 0 or 1.  Also, the functions F1-FN should come close to giving
> a 1-1 mapping.  In addition, hardness in the backward direction does
> not by itself make the generator cryptographically strong; it should
> be hard to break both forward and backward.
> 
> I only know of two methods in the literature claimed to have the
> desired properties.  The simplest is the Shamir algorithm, which
> produces pseudo-random numbers in [0, n-1].  The procedure is as follows:
> 
> Let p, q be primes and n=pq.  Let d and e be exponents; one way is to
> have de congruent to 1 mod p-1 and mod q-1, which would make them an
> encoding-decoding pair for the RSA public-key cryptosystem, but this
> is not necessary.  It is necessary that d and e be relatively prime
> to p-1 and q-1.  There are two seeds, x and y, each between 2 and n-2.
> 
> The procedure is 
> 
> 	x = x^d (mod n);
> 	y = y^e (mod n);
> 	output x+y (mod n)
> 
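> Here is a minimal Python sketch of this generator; the toy primes,
> exponents, and seeds are my own choices, far too small for real use:
> 
>   from math import gcd
> 
>   p, q = 1019, 1031
>   n = p * q
>   d, e = 7, 11                  # relatively prime to p-1 and q-1
>   assert gcd(d, (p - 1) * (q - 1)) == 1 and gcd(e, (p - 1) * (q - 1)) == 1
>   x, y = 12345, 67890           # seeds between 2 and n-2
> 
>   def step():
>       global x, y
>       x = pow(x, d, n)
>       y = pow(y, e, n)
>       return (x + y) % n        # a pseudo-random number in [0, n-1]
> 
>   outputs = [step() for _ in range(5)]
> 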
> The Blum-Blum-Shub procedure is simpler in principle, but slower.
> It has n as above, but p and q must be congruent to 3 mod 4.  It
> would be best if p and q are safe primes (this holds for the previous
> procedure also), which means that (p-1)/2 and (q-1)/2 are prime.
> The seed is one number between 2 and n-2.
> 
> The procedure is
> 
> 	x = x^2 (mod n);
> 	output the least significant bit of x
> 
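> A minimal Python sketch, again with toy parameters of my own choosing
> (real use would need large primes):
> 
>   p, q = 1019, 1187             # both 3 mod 4; (p-1)/2 and (q-1)/2 prime
>   n = p * q
>   x = 123456                    # seed between 2 and n-2
> 
>   def next_bit():
>       global x
>       x = (x * x) % n
>       return x & 1              # the least significant bit
> 
>   bits = [next_bit() for _ in range(16)]
> 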
> They produced this procedure because the Shamir procedure did not 
> produce random bits.  But let z be the output of the Shamir procedure,
> and consider w=z XOR n.  All bits of w (or z) after the leading one
> can be used.
> 
> On the problem of complexity, I have looked into the complexity of
> generating non-uniform random variables from uniform ones.  It is
> surprisingly low, as I define it.  Consider procedures of the following
> type, which I call infinite-precision: after a procedure which
> terminates with probability one, some of the bits are output; all
> other bits are found by copying random bits.  A necessary and
> sufficient condition that such a procedure exists for a given density,
> even in k dimensions, is that there is a lower semi-continuous version
> of the density.  For most common densities, the expected number of
> bits needed to carry out such a procedure (there are lots of
> procedures) is usually under 10.
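> 
> As a discrete analogue of such a procedure (my own sketch, not the
> construction described above), one can draw random bits one at a time
> to narrow down a uniform point in [0,1) and stop as soon as its bucket
> under the target distribution is determined; the expected number of
> bits consumed stays within a few bits of the entropy:
> 
>   import random
>   from fractions import Fraction
> 
>   def sample(probs):
>       # probs: exact probabilities summing to 1
>       cum = [Fraction(0)]
>       for pr in probs:
>           cum.append(cum[-1] + pr)
>       lo, width = Fraction(0), Fraction(1)
>       used = 0                          # bits consumed so far
>       while True:
>           for i in range(len(probs)):
>               # stop once [lo, lo + width) lies inside one bucket
>               if cum[i] <= lo and lo + width <= cum[i + 1]:
>                   return i, used
>           width /= 2                    # consume one more random bit
>           if random.getrandbits(1):
>               lo += width
>           used += 1
> 
>   value, bits_used = sample([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)])
> 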
> If you are interested, send me your snail-mail address.
> 

