
Game Theory of Dirty Santa December 4, 2010

Posted by eric22222 in General, Math.

I really like it when I break into some unexplored territory on the net, and I haven’t found anything on this subject online. I’ll say up front that I don’t have any calculations done here, but I’d like to write down the points that were brought up.

Dirty Santa, or White Elephant, or whatever you want to call it, is a gift exchange game. Participants each bring one gift. Order is randomly assigned. Each person may either open a gift or steal a gift that a previous player has opened. If your gift is stolen, you get another turn to either open or steal a gift. Pretty simple. In one variant, gifts are locked in place (no longer stealable) after they have been stolen so many times.

As the game was about to begin at our annual CSC Christmas party, a few of us nerds began to discuss some of the finer points of the game. Is there an optimal turn one would want? How would we go about figuring that out?

Here are some of the ideas that were brought up to consider:

  • You wouldn’t steal a valuable gift early, for fear of it being locked with someone else.
  • A player could steal a gift with the intention of having it stolen later, granting them another turn later in the game.
  • Value may not be objective, but each player should have some idea about the average value a gift may hold.

So the question becomes “given a certain steal limit per gift and a certain number of gifts, what turn would allow a player to achieve the highest value present?”
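
I don’t have numbers yet, but to make the question concrete, here’s a minimal Monte Carlo sketch of the setup. Everything in it is an assumption the discussion didn’t pin down: gift values drawn uniformly between 0 and 1, a greedy player who steals whenever the best visible gift beats the 0.5 expected value of an unopened one, and a no-immediate-steal-back house rule.

import random

def play_round(num_players, steal_limit):
    values = [random.random() for _ in range(num_players)]  # revealed when opened
    unopened = list(range(num_players))
    owner = {}   # gift index -> current holder (opened gifts only)
    steals = {}  # gift index -> times it has been stolen so far

    def take_turn(player, just_taken=None):
        # Stealable gifts: opened, still under the steal limit, and not the one
        # that was just taken from this player (no immediate steal-backs).
        stealable = [g for g in owner
                     if steals.get(g, 0) < steal_limit and g != just_taken]
        best = max(stealable, key=lambda g: values[g], default=None)
        if best is not None and values[best] > 0.5:
            # Greedy policy: steal when the best visible gift beats the 0.5
            # expected value of an unopened Uniform(0, 1) gift.
            victim = owner[best]
            owner[best] = player
            steals[best] = steals.get(best, 0) + 1
            take_turn(victim, just_taken=best)   # the victim goes again
        else:
            gift = unopened.pop(random.randrange(len(unopened)))
            owner[gift] = player

    for player in range(num_players):
        take_turn(player)
    return {holder: values[gift] for gift, holder in owner.items()}

# Estimate the average value each turn position walks away with.
random.seed(0)
players, trials = 10, 20000
totals = [0.0] * players
for _ in range(trials):
    for position, value in play_round(players, steal_limit=3).items():
        totals[position] += value
for position, total in enumerate(totals):
    print(f"turn {position + 1}: average gift value {total / trials:.3f}")

The interesting part, of course, would be swapping in smarter policies (like deliberately holding steal-bait) and seeing how the turn-position averages shift.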


The Battleship Algorithm February 4, 2010

Posted by eric22222 in General, Math.

As Stephanie has posted, I managed to win the first event of the 2010 CSC Decathlon without even being present. Here is the story.

The Battleship tournament was Friday night, but I knew I had to work. I had joked with Todd about just writing down my moves in advance, knowing that wouldn’t do the trick. That morning, I was explaining how I couldn’t make a good set of rules without some sort of conditional statements. Todd said that as long as it was easy to understand, I could give it a try. So, during my two Friday classes, I wrote out my step-by-step plan, as well as some configurations for my own pieces.

Surprisingly enough, I managed to get three consecutive victories, netting me the blue ribbon.

But enough talk. Here’s the algorithm I wrote out. It could’ve been better, but it got the job done.

BATTLESHIP BATTLE PLAN:

Opening attack: E5, F6, D4, G7, G3, D8, C3, H8, H4, C7, F2, E9, I5, B6

Cleanup 1: If two hits are in the same row/column, then beginning with the upper-left hit, aim one cell in the direction of the second hit. Use chart until a sink:

All hits that were part of the sink are considered “closed.” If two open hits are in the same row/column, repeat cleanup 1.
*Clearest line: the direction from a cell that goes farthest without a miss or border in the line. In case of a tie, a coin flip can decide the clearest line.

Second strike: B2, I9, B10, I1, A5, J6, E1, E10, A1, J10, J2, A9
If applicable, repeat cleanup 1.

Cleanup 2: Beginning with the cell A1, spiral clockwise inward looking for an open hit. Attempt a strike in the first cell in the clearest line. Use the above chart. If, after a sink, cleanup 1 is applicable, revert to it. Repeat until all hits are closed. By the completion of this step, the cruiser and battleship will have been destroyed.

Search & Destroy: Starting with A3 and working right/down diagonally, attack any cell in which four adjacent cells are open (note A3 itself fails the requirement). On a hit, use cleanup 2. Repeat Search & Destroy with starting cells C1, A7, and G1. Only one ship can possibly remain.

The Wild Hunt: Repeat Search & Destroy, but reduce the number of required adjacent open cells to 3. Proceed to two, then one.
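
For concreteness, here’s a quick sketch of the Search & Destroy scan in code. It isn’t part of the original battle plan; it just assumes letters index rows (A through J), numbers index columns (1 through 10), “open” means a cell I haven’t fired at yet, and the required count of open neighbors drops to 3, 2, then 1 for the Wild Hunt.

ROWS = "ABCDEFGHIJ"

def diagonal_targets(board, start, required=4):
    """Walk right/down diagonally from `start`, returning cells worth attacking.

    `board` maps cells like "E5" to "open", "miss", or "hit".
    """
    row, col = ROWS.index(start[0]), int(start[1:]) - 1
    targets = []
    while row < 10 and col < 10:
        cell = f"{ROWS[row]}{col + 1}"
        neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
        open_count = sum(
            1 for r, c in neighbors
            if 0 <= r < 10 and 0 <= c < 10 and board[f"{ROWS[r]}{c + 1}"] == "open"
        )
        if board[cell] == "open" and open_count >= required:
            targets.append(cell)
        row, col = row + 1, col + 1
    return targets

# On a completely fresh board, A3 itself fails the four-neighbor requirement:
board = {f"{r}{c}": "open" for r in ROWS for c in range(1, 11)}
print(diagonal_targets(board, "A3"))               # ['B4', 'C5', 'D6', 'E7', 'F8', 'G9']
print(diagonal_targets(board, "A3", required=3))   # now A3 and H10 qualify too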

Responses (optional):
When taking the lead: “That ship has a lot in common with your chance of winning. (lower voice) It’s sinking.”
1st hit after 1st sink: “I’m about to sink your boat. With math!”
1st hit after 2nd sink: “I’m about to sink this boat, too!” If opponent replies “with math again?” reply “No, with red, plastic missiles!”
Opponent takes early lead: “Math beats luck in the long run.”
After sink, far behind: “Time for the greatest comeback in Battleship history!”
In lead, one boat left to find: “Now it’s just a matter of time…”
First move is a hit: “I like where this is going.”
Starting new step: “Initiating phase <name>.”

Dobbs’s Power Summation Rule January 27, 2010

Posted by eric22222 in General, Math, Personal Favorites.

When an interesting problem comes to mind, class is over for me. I reach a point where “quiz” this and “exam” that are completely overshadowed by some Interesting Problem. Today, during a lecture about countably infinite sets, my mind went back to a previously unsolved problem I had posed to myself last semester. It goes something like this:

We know that the sum of integers from one to N can be found with the formula

\displaystyle\sum_{x=1}^N x^1 = \frac{N^2 + N}{2}

So how can we represent the sum of squares? Cubes? Is there a general function for any power? I’ve managed to find a form for the sum of squares:

\displaystyle\sum_{x=1}^N x^2 = \frac{2N^3 + 3N^2 + N}{6}

Furthermore, I can backtrack to get the sum of ones to continue the pattern in both directions:

\displaystyle\sum_{x=1}^N x^0 = N

Yes, that is a complicated formula to say you’ll get five if you add five ones together. So given these three formulas, here’s my idea: each formula is a simple polynomial of degree d with no constant term, where d is one greater than the power each term is raised to. So I can generate as many equations as I need by plugging in small values of N and matching the known partial sums. Let’s make a system of them to calculate the sum of cubes, shall we? Assume the sum is x_1 N + x_2 N^2 + x_3 N^3 + x_4 N^4, and require it to hit the partial sums 1, 9, 36, and 100 at N = 1, 2, 3, 4:

\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 2 & 4 & 8 & 16 \\ 3 & 9 & 27 & 81 \\ 4 & 16 & 64 & 256 \end{array} \right] \left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 9 \\ 36 \\ 100 \end{array} \right]

And this yields…

\left[ \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 1/4 \\ 1/2 \\ 1/4 \end{array} \right]

Ah… hmm. Not what I expected at all, really. Nothing about this fits with what I thought I’d find. That zero is especially surprising. That means the number of terms may not increase linearly. For this step, it actually stays at three. So we have:

\displaystyle\sum_{x=1}^N x^3 = \frac{N^4 + 2N^3 + N^2}{4}
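
As a quick sanity check, here are a few lines of numpy (not in my original notes) that set up the same 4-by-4 system and confirm the resulting polynomial really does reproduce the sum of cubes:

# Set up the 4x4 system above with numpy and check the fitted polynomial
# against brute-force sums of cubes.
import numpy as np

A = np.array([[n ** j for j in range(1, 5)] for n in range(1, 5)], dtype=float)
b = np.array([sum(k ** 3 for k in range(1, n + 1)) for n in range(1, 5)], dtype=float)
coeffs = np.linalg.solve(A, b)
print(coeffs)  # approximately [0, 0.25, 0.5, 0.25]

for N in range(1, 20):
    fitted = sum(c * N ** j for j, c in enumerate(coeffs, start=1))
    assert round(fitted) == sum(k ** 3 for k in range(1, N + 1))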

And for the fourth power… blech. This doesn’t look patterned at all. Here are the formulas for fourth, fifth, and sixth:

\displaystyle\sum_{x=1}^N x^4 = \frac{6N^5 + 15N^4 + 10N^3 - N}{30}

\displaystyle\sum_{x=1}^N x^5 = \frac{2N^6 + 6N^5 + 5N^4 - N^2}{12}

\displaystyle\sum_{x=1}^N x^6 = \frac{6N^7 + 21N^6 + 21N^5 - 7N^3 + N}{42}

HOLD THE PHONE. Wait, wait wait wait. Pause. If you were to distribute the denominator, that is, to split this fraction into parts, there is a pattern, albeit a complex one. I’m seeing some interesting changes in the seemingly random coefficients when I force the denominator to be one more than the power I’m summing, i.e., the degree of the resulting polynomial.

Here’s a Python script that calculates the coefficients from highest to lowest:

#!/usr/bin/python
from numpy import zeros, float64, linalg

n = 0
TOL = 10e-6
while True:
    n += 1
    # Row i of the system evaluates the candidate polynomial
    # x1*N + x2*N^2 + ... + xn*N^n at N = i + 1.
    a = zeros((n, n), dtype=float64)
    b = zeros(n, dtype=float64)
    for i in range(n):
        for j in range(n):
            a[i, j] = (i + 1) ** (j + 1)
        # Right-hand side: the running sum 1^(n-1) + 2^(n-1) + ... + (i+1)^(n-1).
        if i == 0:
            b[i] = 1
        else:
            b[i] = b[i - 1] + a[i, n - 2]
    # Solve for the polynomial coefficients.
    c = linalg.solve(a, b)
    # Print the coefficients from highest power to lowest, scaled by n,
    # zeroing anything smaller than the tolerance.
    for i in range(n):
        if abs(c[n - i - 1]) < TOL:
            c[n - i - 1] = 0
        print(round(n * c[n - i - 1], n // 2), end="\t")
    print()
    try:
        input()  # press Enter for the next row; end of input stops the loop
    except EOFError:
        break

You’re just dying to see the results, aren’t you? I can feel the enthusiasm here.

1
1 1.0
1 1.5 0.50
1 2.0 1.000 0
1 2.5 1.666 0 -0.166
1 3.0 2.500 0 -0.500 0
1 3.5 3.500 0 -1.166 0 0.166
1 4.0 4.666 0 -2.333 0 0.666 0
1 4.5 6.000 0 -4.200 0 2.000 0 -0.30
1 5.0 7.500 0 -7.000 0 5.000 0 -1.50 0
1 5.5 9.166 0 -11.00 0 11.00 0 -5.50 0 0.833
1 6.0 11.00 0 -16.50 0 22.00 0 -16.5 0 5.000 0
1 6.5 13.00 0 -23.83 0 40.86 0 -42.9 0 21.66 0 -3.290
1 7.0 15.16 0 -33.36 0 71.50 0 -100. 0 75.83 0 -23.03 0

There’s definitely a pattern in there, column by column. For example, our highest power’s coefficient is always one (admittedly, that one is by design). The second column is a linear function. The third is quadratic (it’s one-sixth of the first function in this post). The next column is zeroes. No idea why.

HOLY MOTHER OF LEIBNIZ, I’ve got it. I originally figured there was a bit of rounding error in there, but it was just weird numbers. Those terms along the diagonal were pretty suspicious, though. If you scale each column by a value that makes the diagonal the set of naturals (with non-prime evens zeroed), you get an oddly divided version of Pascal’s triangle!

1
1 2.
1 3. 3.
1 4. 6. 0
1 5. 10 0 5.00
1 6. 15 0 15.0 0
1 7. 21 0 35.0 0 7.00
1 8. 28 0 70.0 0 28.0 0
1 9. 36 0 126. 0 84.0 0 9.00
1 10 45 0 210. 0 210. 0 45.0 0
1 11 55 0 330. 0 462. 0 165. 0 11.0
1 12 66 0 495. 0 924. 0 495. 0 66.0 0
1 13 78 0 715. 0 1716 0 1287 0 286. 0 13
1 14 91 0 1001 0 3003 0 3003 0 1001 0 91 0
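
In fact, the triangle above can be rebuilt straight from binomial coefficients. This little check (again, my own aside) prints row P as C(P, j) for j = 0, j = 1, and every even j, zeroing the odd columns past the second:

# Rebuild the triangle above from binomial coefficients.
from math import comb

for P in range(1, 15):
    row = [comb(P, j) if j <= 1 or j % 2 == 0 else 0 for j in range(P)]
    print(*row)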

So now, I proudly present Dobbs’s Power Summation Rule:

For P > 1, the sum of the natural numbers from 1 to N, each raised to the power P - 1, can be given by:

\displaystyle\sum_{x=1}^N x^{P-1} = \frac{N^{P-1}}{2} + \displaystyle\sum_{k=0}^{\lfloor (P-1)/2 \rfloor} {P \choose 2k} \frac{N^{P-2k}}{P \times c(k)}

where c(k) is a rather strange function:

c(0) = 1
c(1) = 6
c(2) = -30
c(3) = 42
c(4) = -30
c(5) = 13.2
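
As a quick check of the rule itself, here’s a short script (my own sketch) that plugs in the c(k) values above, exactly as listed, and compares against brute-force sums. Only enough c(k) values are listed to cover P up to 12.

# Check Dobbs's rule against brute-force power sums for P = 2..12.
from math import comb, isclose

c = [1, 6, -30, 42, -30, 13.2]

def power_sum_rule(N, P):
    """Right-hand side of the rule for the sum of x^(P-1), x = 1..N."""
    total = N ** (P - 1) / 2
    for k in range((P - 1) // 2 + 1):
        total += comb(P, 2 * k) * N ** (P - 2 * k) / (P * c[k])
    return total

for P in range(2, 13):
    for N in range(1, 10):
        exact = sum(x ** (P - 1) for x in range(1, N + 1))
        assert isclose(power_sum_rule(N, P), exact, rel_tol=1e-9), (P, N)

print("rule matches brute-force sums for P = 2 through 12")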

A note from myself January 14, 2010

Posted by eric22222 in General, Math.

This morning, I woke up to find a sticky note next to my alarm clock. The text was sloppy, but it was definitely my handwriting. I didn’t fully remember writing anything out, but there it was: some simple equation.

\lim \limits_{p \to 0^+} 1 - (1-p)^{p^{-1}}

I have some vague memories of thinking through some stuff last night, but nothing too concrete. Okay, so p must be some probability (I always use p for that), so one minus p is the chance of something not happening. That raised to the inverse of p… that’d be the chance of something not happening 1/p times in a row. One minus that value… that’d be the chance that the thing does happen at least once in those tries.

Aha! So if p were the chance of, say, getting heads on a coin flip (50%), 1 - (1-p)^{p^{-1}} is the chance of not getting tails twice in a row. That is, the chance that heads will come up within the first two tosses. Seventy-five percent.

But why the limit?

Well, if p were only one-third… Ah, our chance of it cropping up in the first three times is 19/27, or roughly seventy percent. If we drop p to one-fourth, our chance goes down to sixty-eight percent. Oh, I see… it’s approaching some value, but what?

1/p, 1 - (1-p)^{p^{-1}}
1, 1.000
2, 0.750
3, 0.704
4, 0.684
5, 0.672
6, 0.665

What in the world is this thing approaching? This doesn’t look like anything I’m familiar with…

10^2 , 0.633967659
10^3 , 0.632304575
10^4 , 0.632138954
10^5 , 0.632122398
10^6 , 0.632120743
10^7 , 0.632120577

It’s a limit, so it should have something to do with e (2.71828), but I don’t see how- oh. Wait, now I do. This value we’re approaching is 1 - \frac{1}{e} . So the chance of success of a task of probability p within the first p^{-1} trials approaches one minus the inverse of e. Hope that’s what you were looking for, past self.
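
For good measure, a couple of lines reproducing the convergence, mirroring the 1/p column in the tables above:

# Evaluate 1 - (1 - p)^(1/p) at p = 1/n for growing n, and compare to 1 - 1/e.
from math import e

for n in (10, 100, 1000, 10000, 100000):
    print(n, 1 - (1 - 1 / n) ** n)

print("1 - 1/e =", 1 - 1 / e)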

Geometric Chord System November 15, 2009

Posted by eric22222 in General, Math.

No, not that kind of chord.

When I was in high school, I had a bulletin board in my room. Strung across three thumb tacks was a rubber band. It may have appeared to be no more than a triangle, but I had placed the tacks precisely so that strumming the three legs of the triangle created a major chord.

Yesterday, while playing with a rubber band, a thought struck me. You can change the pitch of a plucked note by changing the length of the string. So, you could represent any chord imaginable with a polygon! So here we go: Dobbs’s Geometric Chord System.

Firstly, a line. This represents our root note. For all the examples in this post, we’ll be in the key of C, so C is our root note (260 Hz).

[Image: GeometricChordNotation 1]

Next, we need to figure out what the lengths of our other notes are. They will be connected to the endpoints of our root. Let’s make a C major chord by adding E and G. First, here’s how you find the new length, where x is the number of half-steps the note sits above the root. In this case, we’ll use 1 for our length and 4 for our x.

2^{-x/12}

So our new length is 2^(-4/12) = 0.7937. For G, the length is 2^(-7/12) = 0.6674. If you already know the exact frequency of both notes, you’ll get the same results from their ratio (C/G = 260/390 = 0.667). So let’s map those on our root!

[Image: GeometricChordNotation 2]

The colored circles are of radius 0.7937 (red) and 0.6674 (blue). The point where they intersect is what we wanted to find.
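
Here’s a small sketch of that construction in code. The coordinates are my own choice for illustration: the root C runs from (0, 0) to (1, 0), and we take the upper of the two circle intersections.

# Intersect a circle of the E-string length about one endpoint of the root
# with a circle of the G-string length about the other to locate the apex.
from math import sqrt

def string_length(half_steps):
    """Length of a string sounding `half_steps` above a root of length 1."""
    return 2 ** (-half_steps / 12)

def apex(r1, r2):
    """Upper intersection of circles of radius r1 about (0,0) and r2 about (1,0)."""
    a = (1 + r1 ** 2 - r2 ** 2) / 2   # distance along the root to the foot of the apex
    return a, sqrt(r1 ** 2 - a ** 2)  # (x, y) of the apex

r_e, r_g = string_length(4), string_length(7)  # E and G above C
print(round(r_e, 4), round(r_g, 4))            # 0.7937 0.6674
print(apex(r_e, r_g))                          # roughly (0.592, 0.528)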

[Image: GeometricChordNotation 3]

Tada! This triangle represents the major triad. Before you make the same assumption I almost did, it is not a right triangle. But it would’ve been cool if right triangles exhibited some kind of interesting musical properties. Moving on: the points where our three-note chords can meet up are bounded. That is, if C is our lowest note, the other two will never escape a radius equal to C’s length. Furthermore, if those other notes are lower than the next C up, they’ll be bounded on the other side by C/2 (a string of half length produces a note one octave higher). So, here’s what we’ve got:

[Image: GeometricChordNotation 5]

The deep green center is where all three note chords will wind up. Notice the symmetry. Any chord on the left side of the area has a twin on the right side. So let’s map some actual chords to our plane:

[Image: GeometricChordNotation 6]

Vaguely diamond shaped, yeah? No. But it was my first guess. It’s actually a zig-zag. Check it out:

[Image: GeometricChordNotation 7]

And because that last one looked pretty cool, here’s a full grid of chords, in half-step increments. Notice the logarithmic behavior of the grid.

[Image: GeometricChordNotation 8]