Newsgroups: sci.math.research
From: Hoey@AIC.NRL.Navy.Mil (Dan Hoey)
Date: 1999/03/31
Subject: Re: Bin packing with bins of fixed size

George Russell writes:

> Tim Chow wrote:
> > Garey and Johnson say that the bin-packing problem with a fixed
> > bin size can be solved in polynomial time by exhaustive search.
> > This should be easy to prove but I'm having some trouble. How
> > does the proof go?
> >
> > (The problem in question is this. Fix an integer N > 0. Given as
> > input a set of integers a_1, a_2, ..., a_n, each between 1 and N
> > inclusive, and a positive integer b, determine whether or not the
> > a_i can be grouped into b or fewer sets such that for each set the
> > sum of its members is at most N. If N is not fixed but varying,
> > then it is a standard fact that this problem is NP-complete.)
>
> Here is one way. Construct a directed graph on N-tuples of integers
> such that there is an arc from (x1,x2,...) to (x1+r1,x2+r2,...) iff
> all the r_i are >= 0 and sum i*r_i <= N. Then the minimum number of
> bins is going to be the length of the shortest path from the zero
> vector to (x1,x2,...), where x_i is the number of occurrences of i
> in a1,a2,...,an. The relevant section of this graph has
> x1*x2*...*xN nodes, which is more than n^N.
                            ^^^^
You mean "not more", of course.

Your graph is essentially a nice example of using dynamic programming
to cut the exhaustive search down. I sent Tim a much more exhausting
search that was more like n^(Omega(2^N)), which, while still a
polynomial, is somewhat daunting.

It's worth noting that this is dependent on the input coding. If
instead of the integers a_i we were given as input

    {(i,x_i) : the integer i appears x_i > 0 times among the a_i}

the size of the input might be O(log n) without sufficiently
decreasing the processing time to keep the algorithm in P. Is this
recoded problem (weakly) NP-complete? (And for which N?)
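George Russell's graph is concrete enough to run. The following is my own
sketch of it (not code from the thread): a breadth-first search over count
vectors (x_1, ..., x_N), where each arc corresponds to filling one bin.

```python
from collections import deque

def min_bins(items, N):
    """Minimum number of capacity-N bins for the given item sizes,
    found as a shortest path on count vectors (x_1, ..., x_N), where
    x_i counts how many items of size i have been packed so far."""
    target = tuple(sum(1 for a in items if a == i) for i in range(1, N + 1))

    # Enumerate every way r = (r_1, ..., r_N) to fill one bin, i.e.
    # every nonzero vector with sum i*r_i <= N.
    fills = []
    def rec(i, rem, cur):
        if i > N:
            if any(cur):
                fills.append(tuple(cur))
            return
        for c in range(rem // i + 1):
            rec(i + 1, rem - i * c, cur + [c])
    rec(1, N, [])

    # BFS from the zero vector; each arc packs one more bin, so the
    # first time we reach the target its distance is the answer.
    start = tuple([0] * N)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return dist[s]
        for r in fills:
            t = tuple(s[i] + r[i] for i in range(N))
            if all(t[i] <= target[i] for i in range(N)) and t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
```

For fixed N the graph has at most (n+1)^N states, which is where the
polynomial bound for the fixed-bin-size problem comes from.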
I seem to recall that integer programming with a fixed number of
variables is in P (maybe when recoded in unary), and this problem
looks like it's related to that. Unfortunately, I've found no such
statement in Garey and Johnson, so I'm left wondering if I remember
it right.

Dan Hoey
Hoey@AIC.NRL.Navy.Mil

---

Newsgroups: sci.math
From: dhoeye...@aol.com (Dan Hoey)
Date: 1999/12/04
Subject: Re: xy=z in permutations. Existence from shapes.

mc...@cs.concordia.ca (MCKAY john) wrote:

>Spose we have a permutation on [1,2,...,n], then it has a disjoint
>cycle decomposition and the lengths of these cycles form a partition
>of n known (by me at least) as its shape.

Of course, "conjugacy class" is the usual name for this concept.
(Exercise for beginners: show that two perms f,g have the same cycle
length multiset if and only if there exists a perm p with
p^-1 f p = g. You must be able to prove at least this much to get on
this ride.)

>Now, given n and three shapes, is there a neat way to determine
>whether there is a solution (x,y,z), with these shapes, to xy=z.

You can often eliminate triples by permutation parity, which can be
deduced from the conjugacy class. Also, if we measure |x| as the
total number of elements in the cycles forming x, then the measures
satisfy the triangle inequality |x| + |y| >= |z| (and also for
reorderings of x,y,z).

The problem can certainly be solved by brute force: map the elements
mentioned in the cycles for x and y to a set of size |x| + |y| in all
possible ways, carry out the multiplications, and see if the results
match z.

If x is the product of n two-cycles and y has m cycles, then xy will
have at least m-n cycles, with equality only when the cycles of z are
formed by partitioning the set of cycles of y and gluing together the
cycles in each block. This is enough to show that the problem is
weakly NP-complete, by reduction from the knapsack problem.
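For small n the brute force described above can be coded directly. This is
my own sketch (not code from the thread); it fixes x to one canonical
permutation of its shape, which is legitimate because conjugating x, y, and
xy simultaneously preserves all three shapes.

```python
from itertools import permutations

def shape(perm):
    """Sorted cycle-length tuple of a permutation p, where p[i] is
    the image of point i."""
    n, seen, lengths = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            l, j = 0, i
            while not seen[j]:
                seen[j] = True
                j, l = perm[j], l + 1
            lengths.append(l)
    return tuple(sorted(lengths))

def canonical(sh, n):
    """One fixed permutation of [0..n-1] whose shape is sh
    (cycle lengths summing to n), built from consecutive blocks."""
    p, start = list(range(n)), 0
    for l in sh:
        for i in range(start, start + l):
            p[i] = start + (i - start + 1) % l
        start += l
    return tuple(p)

def solvable(sx, sy, sz, n):
    """Is there a pair x, y with shapes sx, sy whose product has
    shape sz?  WLOG x is canonical; y ranges over its class."""
    x = canonical(tuple(sorted(sx)), n)
    sy, sz = tuple(sorted(sy)), tuple(sorted(sz))
    for y in permutations(range(n)):
        if shape(y) == sy:
            xy = tuple(x[y[i]] for i in range(n))  # apply y, then x
            if shape(xy) == sz:
                return True
    return False
```

For instance, two transpositions on three points can multiply to a 3-cycle,
but a transposition times a 3-cycle never gives a 3-cycle, by the parity
argument above.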
I suspect there's a way of showing NP-completeness in the strong
sense--let me know if you find it.

This is my first time posting from AOL--I hope the message doesn't
get munged too badly.

Dan Hoey
[ posted and e-mailed ]
DHoeye...@AOL.Com

---

Newsgroups: rec.games.abstract
From: haoyuep@aol.com (Dan Hoey)
Date: 1999/12/10
Subject: Re: Research directions?

Oon Wee Chong wrote:

> Bruno Wolff III wrote:
>> Solitaire Yahtzee, maximizing expected score. There aren't
>> as many states as you would think, since your current score
>> (excepting the bonus part) doesn't affect the optimal strategy.

Tom Verhoeff has already done that. See his web page at
http://wwwpa.win.tue.nl/misc/yahtzee/. But there are still
possibilities for further work:

1) Different rules. You could allow scoring a non-wildcard Yahtzee
as 25 points. This is arguably not in the Yahtzee rules, but the
official PC and handheld programs allow it. Also, you could disallow
using a Yahtzee in the lower table when the corresponding upper table
is open. This is arguably disallowed by the official rules, though
the programs allow it. It might be interesting to see how far
changing these rules alters the expected score of 254.5896 that Tom
Verhoeff reports. Even verifying that his answer is correct would be
useful.

2) Different measure. Calculate the optimal *median* score instead
of the expected score. This may take trial and error. Suppose all
you want is to beat a score of N--can you make a table of the
probability of winning, say for N=200,201,...,300?

>> This problem was discussed in rec.games.board about two
>> years ago and a very rough guess was made of it taking a
>> couple of months on a PC to solve.

If it's the discussion I saw, there were a lot of rough guesses and
I doubted most of them. In particular, I'm not sure two-player
Yahtzee is out of the question.

Oon Wee Chong continues:

> I think I need more information on this one. How do you
> "solve" Yahtzee, a game of random dice rolls? ...
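As a warm-up to that question, here is a minimal backward-induction
example (my own toy sketch, far simpler than Yahtzee): a single d6 that
you may reroll a limited number of times, where "solving" means computing
the exactly optimal expected score.

```python
from fractions import Fraction

def ev_with_rerolls(rerolls):
    """Optimal expected value of one d6 when up to `rerolls` rerolls
    are allowed: keep a face iff it beats the expected value of
    continuing, then average over the six equally likely faces."""
    if rerolls == 0:
        return Fraction(21, 6)  # forced to keep: plain average of 1..6
    cont = ev_with_rerolls(rerolls - 1)
    return sum(max(Fraction(v), cont) for v in range(1, 7)) / 6
```

With one reroll the optimal policy is to keep 4 or better (since a reroll
is worth 7/2), giving an expected score of 17/4. Full Yahtzee is the same
backward induction over a vastly larger state space: chance nodes average
over roll outcomes, decision nodes maximize over keep/reroll and scoring
choices.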
Whenever you roll the dice, you calculate the outcome of your optimal
strategy on all the possible outcomes. Then you combine them
according to the probabilities of the possible outcomes.

> ... It would be a difficult task to prove that your
> strategy maximizes the expected score.

Computers do exhaustive case analysis very well, but proving the
outcome is correct involves verifying the program. How many programs
have you proved correct?

> How would a PC help to determine an optimal strategy?

Exhaustive case analysis, or maybe you call it dynamic programming.
Sounds like a good master's thesis to me. Learning dynamic
programming while you get your degree is a win-win proposition.

Dan Hoey

---

Newsgroups: sci.math
From: haoyuep@aol.com (Dan Hoey)
Date: Dec 11, 1999 12:09 AM
Subject: Re: About James Harris

Clive Tooth wrote:

>About James Harris (JHS)
>========================
>
>If you are new to sci.math and are considering responding to a
>post to or about James Harris you might wish to read this note
>first.

This is a very useful piece of information, and might be usefully
inserted every now and then--maybe every hundred posts or so--in the
JSH threads.

>James Harris thinks he has an elementary proof of Fermat's
>Last Theorem (FLT)....

Wait. He _says_ he has an elementary proof. It's questionable
whether he actually _thinks_ this nonsense is true, or whether he is
just lying. It's hard to believe someone would lie so transparently
for so long, but it's also hard to believe someone could state these
falsehoods with belief.

>For several years Harris has been posting various
>versions of his proof to sci.math. His proofs are always
>wrong. However they are virtually unintelligible so it is
>extremely tedious to find the mistakes. A lot of guesswork
>is required in order to make sense of what he is trying to
>say. He appears to be a simple attention seeker. Not caring if
>he is right or wrong, or even if he is intelligible or not.
It might be worthwhile to summarize JSH's usual proof schema:

1) Assume a counterexample to FLT,
2) Manipulate the counterexample through changes of variables,
   factorizations, etc.,
3) Arrive at a contradiction, and
4) Deduce that the counterexample of step 1 could not exist.

This method's effectiveness depends on being careless in step 2. If
you make enough random mistakes, sooner or later you always arrive at
a noticeable contradiction. The errors also have some resistance to
easy falsification, since you can't just put sample values in and
watch for the manipulation to produce an incorrect equation--the
equations in step 2 depend on the nonexistent counterexample of
step 1.

Someone suggested that his method might arrive at a proof in a few
centuries. I think they underestimate the power of random
carelessness. It seems more likely that 10^100s of years could be
wasted on the errors that JSH is so fond of than that he would
stumble on a correct proof in that time. Since he is not interested
in learning anything, it's unlikely that the errors will become any
less frequent.

Dan Hoey

---

Newsgroups: sci.math
From: haoyuep@aol.com (Dan Hoey)
Date: Dec 17, 1999 11:21 PM
Subject: Re: New (unsolved?) problem

erick@sfu.ca (Erick Bryce Wong) wrote:

>JEFFREY HARRIS wrote:
>>I have come up with a problem that I have not seen before,
>>and I was wondering if it had any references/previous results.
>>Here it is:
>>
>>For every number, do repeated applications of the function
>>f(n) = 2n+1 eventually result in a prime? For instance, 9 is
>>not prime, but 2*9+1 = 19 is. Any help would be greatly
>>appreciated.
>
>This is equivalent to asking, for every k is there a prime of the
>form k*2^n - 1?

Or perhaps we should say (k+1)2^n - 1.

>If it were true it would imply the infinitude of Mersenne
>primes, which is still unsolved. But it turns out the answer is no,
>and a simple proof can be found here (it's actually for k*2^n + 1,
>but the exact same argument applies here):
>

I can't find it.
Did you spell it right?

>I think k=9223372036854775807 will work in this case.

I'm not too sure:

    9223372036854775808 * 2^26 - 1 = 618970019642690137449562111,

which is prime (according to Mathematica, which is seldom wrong).
Perhaps I should try starting at 9223372036854775806, but my bignum
calculator is on the other computer....

At any rate, if you start at k=200883553191612792 you always get
composite numbers--they will all be divisible by 3, 5, 17, 257,
65537, 641, or 6700417, in a cycle of period 64. Is this the approach
you were using?

Dan Hoey
Posted and e-mailed
haoyuep@aol.com

---
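Hoey's covering-set claim is easy to check by machine. The sketch below is
my own (not code from the thread); it uses the fact that
3*5*17*257*65537*641*6700417 = 2^64 - 1, so his claim amounts to saying
that every iterate of f(n) = 2n+1 starting from k shares a factor with
2^64 - 1.

```python
from math import gcd

COVER = [3, 5, 17, 257, 65537, 641, 6700417]
M = 2**64 - 1  # the product of the seven covering primes

def iterates(k, steps):
    """First `steps` values of the orbit of k under f(n) = 2n + 1.
    (The j-th iterate has the closed form (k+1)*2**j - 1.)"""
    vals, n = [], k
    for _ in range(steps):
        vals.append(n)
        n = 2 * n + 1
    return vals

k = 200883553191612792
# Each iterate is astronomically larger than every covering prime,
# so a shared factor with M means the iterate is composite.
print(all(gcd(v, M) > 1 for v in iterates(k, 128)))
```

If this prints True over two full periods, the period-64 cycle of divisors
rules out a prime anywhere in the orbit of k, as claimed.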