
/math/ - Mathematics





File: stevin.png ( 123.72 KB , 1262x1779 , 1651889952353.png )

In his work which introduced decimals to Europe, Stevin wrote (converting it to modern notation) that when you divide 0.4 ÷ 0.03, the algorithm gives you infinitely many 3's. But he didn't call this infinite result an exact answer. Instead he noted 13.33⅓ and 13.333⅓ as exact answers while recommending instead truncating to 13.33 or however near the answer you require. So clearly the main idea of infinite decimals giving arbitrarily good approximations was there. But at what point did people start saying things like 0.4 ÷ 0.03 = 13.333... exactly?
>>
To give an answer you need to define what "13.333... exactly" means. If you define it as an infinite string extending to the right, an algorithm that computes it (using the rules of addition and multiplication) will print it in infinite time. The algorithm is mentioned here for intuition; in fact, if you define operations on infinite strings that way, it will do so. The nerds can refer to the axiom of choice.

(In your post you used the word "approximate". Ask yourself the question -- "approximate what?")
>>
>>214
13.333... is the real number approximated by the sequence 13.3, 13.33, 13.333, ..., or in more precise terms, its limit.
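In those terms the "exact" reading can be computed directly (my arithmetic, not the anon's): the tail is a geometric series,

```latex
13.333\ldots = \lim_{n\to\infty}\left(13 + \sum_{k=1}^{n} 3 \cdot 10^{-k}\right)
= 13 + 3 \cdot \frac{1/10}{1 - 1/10}
= 13 + \frac{1}{3} = \frac{40}{3}
```

which agrees with 0.4 ÷ 0.03 = 40/3.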
>>
When people say 1/3 = .333... or anything like that, defining the right side so that the equation becomes true is bad notation. 1/3 can't be written as a decimal expansion because the division always leaves a remainder. But you can have a number where there is always an extra 3 added to the expansion, so it would be an unending string of 3s, like .3 + .03 + .003 and so on. I would like to use an infinite series here to make it neater, but it's unfortunately defined in terms of limits, so I will avoid it here, as limits are not part of the discussion, only true values. If you do have such an unending string of 3s in the decimal expansion, it will not be equal to 1/3, since the difference between it and 1/3 is always greater than 0, no matter how long the string stretches. There are quite a few numbers that can't be written using decimal expansions; 1/3 is just one of them.
>>

File: Fire.jpg ( 28.48 KB , 282x288 , 1692557240604.jpg )

Paul Erdős constructed a proof that the Copeland-Erdős constant is normal in base 10.

Can someone help me understand the proof?


File: 25d30fdff6e5edb9cd299f196780830a.png ( 216.3 KB , 1200x899 , 1662422662595.png )

The sum of the coefficients of the expanded formula for the sum of the first n positive integer powers seems to equal the denominator.
Seems pretty cool and I can't find anything about it online
>>
Well, this can be fairly easily explained by noting that the sum of the coefficients of a polynomial equals the polynomial evaluated at 1, and that the LHS for n=1 is always 1. (btw it should be the sum from k=1 to n of k^something, not vice versa; your picture includes this typo consistently)
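A worked instance (my example) for the power 2:

```latex
\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6} = \frac{2n^3 + 3n^2 + n}{6},
\qquad 2 + 3 + 1 = 6
```

evaluating the numerator at n = 1 must give 1 times the denominator, so the coefficient sum equals the denominator.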
>>
Is there a formula for the general case, e.g. n^a?
>>
>>469
Yep! For a \in \mathbb{Z}^+:

\qquad \sum_{n=0}^{k} n^a = \frac{B_{a+1}(k+1) - B_{a+1}}{a+1} = H_k^{(-a)} = -\zeta(-a, k+1) + \zeta(-a)

where B_{a+1}(\cdot) is the Bernoulli polynomial, B_{a+1} the Bernoulli number, H_k^{(-a)} the generalized harmonic number, and \zeta the Hurwitz/Riemann zeta.
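A quick brute-force sanity check (my snippet, not from the thread) of the a = 2 and a = 3 cases, where the formula specializes to the familiar closed forms:

```c
/* Brute-force sum of a-th powers: 1^a + 2^a + ... + k^a */
unsigned long long pow_sum(unsigned long long k, int a) {
    unsigned long long s = 0;
    for (unsigned long long n = 1; n <= k; n++) {
        unsigned long long t = 1;
        for (int i = 0; i < a; i++)
            t *= n;                 /* t = n^a */
        s += t;
    }
    return s;
}
```

For k = 10 this gives 385 = 10·11·21/6 and 3025 = (10·11/2)^2, matching the closed forms.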


File: 1651334063589.jpg ( 62.08 KB , 900x900 , 1653814939932.jpg )

How to solve an equation?
8 posts omitted.
>>
What's an equation?
>>
multiply both sides by 0
>>
Linear maps
>>
By not posting tranime
>>
>>182
This is unironically an extremely good question. What is the general method for a computer to compute ANALYTICALLY the solution to any kind of equation, aka the zeroes of a function over a field taking any number of inputs?
>Newton-Raphson for R^n
>Fermat's little theorem for prime-characteristic fields
These are examples of explicit solution methods for definite classes of equations. I am talking about an oracle that solves even in non-closed-form expression.
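For the Newton-Raphson case mentioned above, a one-variable sketch (f, df and the starting point are my illustrative choices, not the anon's oracle):

```c
#include <math.h>

/* Newton-Raphson on f(x) = x^2 - 2, whose positive root is sqrt(2). */
static double f(double x)  { return x * x - 2.0; }
static double df(double x) { return 2.0 * x; }

double newton(double x0, int iters) {
    double x = x0;
    for (int i = 0; i < iters; i++)
        x -= f(x) / df(x);    /* one Newton step: x <- x - f(x)/f'(x) */
    return x;
}
```

Starting from x0 = 1, a handful of iterations already reaches double precision; this is numeric, though, which is exactly why it doesn't answer the analytic question.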


File: merkobacalc.jpg ( 81.78 KB , 1120x777 , 1683182411654.jpg )

This is a multi-line calculator I made.
Each line is associated with a variable.
You can reference variables in other lines.
It uses math.js at 64-digit precision.
Allows normal and fraction mode.
http://calculator.merkoba.com/


File: F139_3957.jpg ( 184.38 KB , 1024x683 , 1681404166674.jpg )

What's the difference between Trigonometric Analysis and Calculus? Especially as it pertains to vectors. Pic somewhat related.
>>
Are these high school math classes? If so ask your school what's the difference.
>>
>>387

I dropped out, I'm trying to figure this out for work because I had made the same comment to my boss and he called me a retard.
>>
>>388
"Trigonometric analysis" isn't a common way of calling any particular branch of maths. That's probably why. It might perhaps refer to what people usually call simply "Trigonometry".
>>
>>389

okay, so how is this different from calculus? ex: wanting to know the area of a wave within a certain domain, and then adding several (different) areas of the larger wave together into a larger area
>>
>>390
Trigonometry is about trigonometric functions like the sine or cosine.
Calculus is about analytic things like derivatives and integrals.


File: galileo.jpg ( 353.63 KB , 1230x1134 , 1673073627763.jpg )

Let g(X) be the minimal polynomial of \gamma. Then we have

\qquad \mathbb{Z}[X]/\langle g(X) \rangle \cong \mathbb{Z}[\gamma]

So then we can see that

\qquad \mathbb{Z}[\gamma]/\langle p \rangle \cong \mathbb{Z}[X] / \langle g(X), p \rangle

My question: is this a general rule of rings? Given a homomorphism

\qquad \phi : R/I \rightarrow S

with \ker\phi = J, then

\qquad R/(I + J) \cong S/J

?
>>
Yes, this is a general rule of rings. Given a homomorphism ϕ : R/I → S with ker ϕ = J, then R/(I+J) ≅ S/J.

This can be shown by considering the following two short exact sequences:

0 → J → R → R/J → 0
0 → J → S → S/J → 0

The first short exact sequence shows that R/J is isomorphic to R modulo J. The second shows that S/J is isomorphic to S modulo J. Since the map ϕ : R/I → S satisfies ker ϕ = J, it follows that R/I ≅ R/J and S ≅ S/J.

Thus, by the isomorphism theorems, we have that

\qquad R/(I+J) ≅ (R/J)/(I/J) ≅ S/J.

I hope this helps to clarify the relationship between these rings and the role of the homomorphism ϕ : R/I → S.
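For what it's worth, a shorter route, under the reading (my assumption, not stated in the thread) that ϕ is surjective and J denotes an ideal of R containing I whose image in R/I equals ker ϕ:

```latex
R \twoheadrightarrow R/I \xrightarrow{\ \phi\ } S,
\qquad \ker(R \to S) = I + J
\qquad \Longrightarrow \qquad R/(I + J) \cong S
```

by the first isomorphism theorem applied to the composite map.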
>>
thanks a lot. What books do you recommend on commutative algebra? I'm looking at Atiyah-MacDonald supplemented by Reid and the first few chapters of Bosch. I also have a copy of Matsumura's book. Does that look like a good course to you? Reid is basically a rewrite of Atiyah, but I will do Atiyah's exercises since there are multiple solution sheets online.
>>
Atiyah and Macdonald's "Introduction to Commutative Algebra" and Reid's "Undergraduate Commutative Algebra" are both excellent introductory texts on the subject of commutative algebra. They cover many of the basic concepts and results, and both have a clear and accessible writing style.

Matsumura's "Commutative Ring Theory" is also a very good text and covers more advanced topics, such as dimension theory and Cohen-Macaulay rings. It also has a more algebraic geometric flavor than the other two books.

Bosch's "Commutative Algebra" is another great reference. It provides a more geometric perspective on commutative algebra and it covers many of the same topics as Atiyah-MacDonald and Reid, but in more depth and with more geometric intuition.

Taken together, these books should provide a comprehensive introduction to the subject of commutative algebra. Additionally, there are many other resources available online such as lecture notes, videos, and problem sets that can help you understand better.


File: distracted-math.jpg ( 39.64 KB , 854x480 , 1667702420031.jpg )

I've tried writing longform posts but don't get much traction. Let's try a shortform post.
>>
I had a thought that continuous sets are not real, but rather limits of discrete sets such that the gaps between two closest points are small, but we don't really know (or don't want to know) how small they are.
>>

File: hackenbush-1.png ( 6.28 KB , 229x220 , 1668744680246.png )

>>332
Thank you for joining my friend.
What brings you to mathchan?
How can it grow?

>gaps between two closest points are small
Hackenbush is very interesting.
https://www.youtube.com/watch?v=ZYj4NkeGPdM&t=1260
https://www.goodreads.com/book/show/1293306.Winning_Ways_for_Your_Mathematical_Plays

-

>limits
\qquad |\mathbb{R}| = |\mathbb{Z}|

>muh diagonal argument
Invalid. Consider the following countably infinite list:
  First Number: 0
 Second Number: 0.1
  Third Number: 0.11
 Fourth Number: 0.111
  Fifth Number: 0.1111
  Sixth Number: 0.11111
Seventh Number: 0.111111
 Eighth Number: 0.1111111
  Ninth Number: 0.11111111
  Tenth Number: 0.111111111
  (... continues ...)

Question: Is the following number in the list?
\qquad 9^{-1}

i.e.

\qquad \frac{1}{9}

i.e.

\qquad 0.\overline{1}

i.e.

\qquad 0.111111111111\ldots


The diagonal argument claims 0.\overline{1} isn't in the list.
(Because 0.\overline{1} differs from the first number in the tenths digit, the second number in the hundredths digit, the third number in the thousandths digit, the fourth number in the ten-thousandths digit, the fifth number in the hundred-thousandths digit, and this will continue forever, then allegedly 0.\overline{1} is not in the list.)

But obviously, 0.\overline{1} is in the list.
The list is directly constructed so as to contain 0.\overline{1}.
>but muh infinite digits
The decimal number 0.\overline{1} contains countably infinite digits.
The list is a countably infinite list.

Contradiction. Therefore the diagonal argument is invalid.
P.S. For the curious, the countably infinite list which contains all real numbers is simply
0    , 0.1  , 0.2  , 0.3  , 0.4  , 0.5  , 0.6  , 0.7  , 0.8  , 0.9  ,
0.01 , 0.11 , 0.21 , 0.31 , 0.41 , 0.51 , 0.61 , 0.71 , 0.81 , 0.91 ,
0.02 , 0.12 , 0.22 , 0.32 , 0.42 , 0.52 , 0.62 , 0.72 , 0.82 , 0.92 ,
0.03 , 0.13 , 0.23 , 0.33 , 0.43 , 0.53 , 0.63 , 0.73 , 0.83 , 0.93 ,
0.04 , 0.14 , 0.24 ,
(... continues ...)
>>
>>333
>|\mathbb{R}| = |\mathbb{Z}|

I didn't really mean that real numbers are badly constructed by mathematicians, but the world is more likely based on discrete numbers than on "real" numbers, and the latter is a formal construction where the distance between numbers is "really infinitely" small.

You seem to use an axiom or principle that you can count from 0.1 to 0.(1). I probably won't disprove or confirm your theorem because it is not my focus in my journey through the land of mathematics. xp I only had the thought that you can add from 1 to +\infty with it, but it's not a limit of adding smaller values so they end up in a fixed value.
>>
>>333
>contains countably infinite digits
You're confusing the cardinality of sets with natural numbers.
Let your set of 0.1* numbers be S. Then

\qquad S \cong \mathbb{N}

by counting the ones in each decimal number.

\qquad 0.\overline{1}

has \aleph_0 digits, and \aleph_0 \notin \mathbb{N}, therefore

\qquad 0.\overline{1} \notin S.

Just because something is countably infinite doesn't mean you can reach it by counting.
A simpler refutation would be to say that by the nature of the construction of S, every number in it that is not the first has a single unique predecessor that you can follow back to 0.
The predecessor of 0.\overline{1} would be 0.\overline{1} itself, which clearly puts it outside S.
This is a lot more intuitive if you imagine a graph of S with each number connected to the next.
This forms a straight line stretching from 0 outwards.
0.\overline{1} is in a single-node graph whose only edge is to itself; it clearly can not connect to the line.
>>
test


File: r_triangle_1.jpg ( 4.38 KB , 242x208 , 1655398539385.jpg )

Trigonometric functions are used to relate the angles of a right triangle to its sides.

\qquad \sin(\text{angle}) = \frac{\text{opposite}}{\text{hypotenuse}} \quad \cos(\text{angle}) = \frac{\text{adjacent}}{\text{hypotenuse}} \quad \tan(\text{angle}) = \frac{\sin(\text{angle})}{\cos(\text{angle})} = \frac{\text{opposite}}{\text{adjacent}}


With respect to either of the two angles \alpha or \beta (the third angle of a right triangle is, of course, 90^\circ, therefore it's irrelevant), three sides can be identified with the following names: hypotenuse, opposite and adjacent. The hypotenuse is easy to identify, as it's the longest, i.e. c. The adjacent side is the one that's "touching" the angle; for angle \alpha that would be b, while for angle \beta that would be a. The opposite side is the one that's NOT "touching" the angle; for angle \alpha that would be a, while for angle \beta that would be b.

Therefore, the complete set of trigonometric functions for the triangle in the pic related is:

\qquad \sin(\alpha) = \frac{a}{c} \quad \cos(\alpha) = \frac{b}{c} \quad \tan(\alpha) = \frac{a}{b}


\qquad \sin(\beta) = \frac{b}{c} \quad \cos(\beta) = \frac{a}{c} \quad \tan(\beta) = \frac{b}{a}



Thus, just knowing the angle \alpha allows conversion of any side to any other side:

      | from a           | from b           | from c
a =   | a                | b\tan(\alpha)    | c\sin(\alpha)
b =   | a/\tan(\alpha)   | b                | c\cos(\alpha)
c =   | a/\sin(\alpha)   | b/\cos(\alpha)   | c

And similarly, just knowing the angle \beta also allows conversion of any side to any other side:

      | from a           | from b           | from c
a =   | a                | b/\tan(\beta)    | c\cos(\beta)
b =   | a\tan(\beta)     | b                | c\sin(\beta)
c =   | a/\cos(\beta)    | b/\sin(\beta)    | c

Intuition behind this is straightforward: for angles of a triangle, sine and cosine are bounded functions between zero and one (0 \leq \sin(x) \leq 1, \; 0 \leq \cos(x) \leq 1). Multiplying something by these functions should produce a smaller or equal value, while dividing something by these functions should produce a larger or equal value. Since the hypotenuse is the longest, multiplying it by sine or cosine will "reduce" it to a leg. Similarly, dividing a leg by sine or cosine will "grow" it into a hypotenuse. A leg can be converted to another leg by "growing" it into a hypotenuse first, then "reducing" it into the other leg.
7 posts omitted.
>>
Taylor series of trigonometric functions

Sine and cosine have the following Maclaurin series (which are Taylor series around x_0 = 0):

\qquad \sin(x) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n + 1)!} x^{2n + 1}

\qquad \cos(x) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} x^{2n}

This is the same as writing:

\qquad \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \frac{x^{11}}{11!} + \dots

\qquad \cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \frac{x^{10}}{10!} + \dots

By summing the Maclaurin series of \cos(x) and i\sin(x) it's possible to prove Euler's formula e^{ix} = \cos(x) + i\sin(x):

\qquad \cos(x) + i\sin(x) = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \dots = e^{ix}


Maclaurin series can be used to approximate trigonometric functions to an arbitrary degree.
For small angles (i.e. \alpha \approx 0), good approximations (often used in physics) are:

\qquad \sin(\alpha) \approx \alpha

\qquad \cos(\alpha) \approx 1

Which are just the Maclaurin series with only 1 term.
>>
Computation of trigonometric functions

Maclaurin series lend themselves to an easy way of computing these functions numerically.
We need a helper function ifac(n) that computes n!,\; n \in \mathbb{N}, and a helper function fpow(x, n) that computes x^n,\; x \in \mathbb{R}, n \in \mathbb{N}:

float ifac(int n) {
    /* returns n! as a float, since 13! already overflows a 32-bit int */
    return (n > 1) ? (n * ifac(n - 1)) : 1;
}

float fpow(float x, int n) {
    return (n < 0) ? fpow(1/x, -n)
         : (n == 0) ? 1
         : x * fpow(x, n - 1);
}

Now \sin(x) can be computed with fsin(x, termCount):

float fsin(float x, int termCount) {
    int n, sign = 1;
    float numer = x,
          denom = 1,
          result = sign * (numer / denom);

    for (n = 1; n < termCount; n++) {
        sign *= -1;
        numer = fpow(x, 2*n + 1);
        denom = ifac(2*n + 1);
        result += sign * (numer / denom);
    }

    return result;
}

While \cos(x) can be computed with fcos(x, termCount):

float fcos(float x, int termCount) {
    int n, sign = 1;
    float numer = 1,
          denom = 1,
          result = sign * (numer / denom);

    for (n = 1; n < termCount; n++) {
        sign *= -1;
        numer = fpow(x, 2*n);
        denom = ifac(2*n);
        result += sign * (numer / denom);
    }

    return result;
}


While the Maclaurin series for \sin(x) and \cos(x) converge on all of \mathbb{R}, they converge the fastest when x \in (-\pi, \pi], so it's good to bound x to that interval, considering trigonometric functions are periodic. In fact, only 5-15 iterations are usually good enough for double-precision floating-point accuracy.
>>

File: 8ce0ff601f99fdbf275b63c9b3b1944e.jpg ( 172.93 KB , 1040x1136 , 1656009606752.jpg )

>>198
You touched on what nobody actually mentions, how these functions are computed. Excellent exposition.
>>

File: 1648530726809.png ( 25.95 KB , 644x800 , 1662284901464.png )

gemmy though, thanks for the full info
>>
great thread


File: kotberet.png ( 154.62 KB , 393x403 , 1642718192578.png )

As well as "Jew" or "gay" id on /pol/. I mean, it's a three-letter word where the second letter can be written as a number (k0t) and the first and last letters aren't the same.

Now let's talk about IDs. They are 8-character sequences consisting of base64 characters (numbers, lowercase, uppercase letters, / and +). So, there can be

\qquad 10 + 26 + 26 + 2 = 64

possible characters for one position in an id and thus

\qquad K_0 = 64^8 = 2^{48} = 281\ 474\ 976\ 710\ 656

possible IDs (BIG number :o).

Now back to KOT.
You can write letters k, o and t in uppercase or lowercase and you can also write o as zero. That means you have

\qquad 2 \times 3 \times 2 = 12

combinations of base64 characters that can make up the word "k0T".

Suppose the first three letters of your ID are one of the 12 possible K0Ts. We have 5 letters of the ID left and they can be anything from the base64 alphabet.
So, there are

\qquad K_1 = 12 \times 64^5 = 12 \times 2^{30} = 12\ 884\ 901\ 888


So, it's also many, many ids. More than all cats, all humans, but less than all chickens in the world. So, maybe kot ID isn't really that rare.

Also there are 6 positions on which KoT id can happen:
KOTxxxxx
xKOTxxxx
xxKOTxxx
xxxKOTxx
xxxxKOTx
xxxxxKOT

We can assume that there are the same number of ids with KoT on the n-th position, where n = 1, ..., 6.

Remember that a KOT id can happen twice, so we must exclude duplicates:
KOTKOTxx
KOTxKOTx
KOTxxKOT
xKOTKOTx
xKOTxKOT
xxKOTKOT

Each of these duplicate positions exists in

\qquad K_2 = 12 \times 12 \times 64^2 = 144 \times 2^{12} = 589\ 824

configurations.

Because one duplicate can belong to two sets of IDs with at least one kot id (KOTxxKOT can belong to KOTxxxxx and xxxxxKOT), our equation for the number of all KOT ids must be:

\qquad 6 \times K_1 - 6 \times K_2

Let's calculate it:

\qquad 6 \times K_1 - 6 \times K_2 =
\qquad 6 \times 12 \times 2^{30} - 6 \times 144 \times 2^{12} =
\qquad 72 \times 2^{30} - 864 \times 2^{12} =
\qquad 77\ 305\ 872\ 384

For every human on this planet there are approximately 10 KOT ids. :o

Now we can calculate the probability of having a koT id:

\qquad \frac{77305872384}{K_0} = \frac{77305872384}{281474976710656} \approx 0.00027464563 \approx \frac{1}{3641}

Thus, a k0t id happens approximately once for every 3641 ids.
Now you can calculate the daily probability, because I don't know if ids on /bant/ are boardwise or just one for every poster and thread. x)
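The arithmetic above can be checked in 64-bit integer arithmetic (this snippet and its names are mine, not from the post):

```c
/* K1 = ids with k0t at one fixed position, K2 = ids with k0t at two. */
unsigned long long kot_ids(void) {
    unsigned long long K1 = 12ULL * (1ULL << 30);   /* 12 * 64^5 = 12 * 2^30 */
    unsigned long long K2 = 144ULL * (1ULL << 12);  /* 12^2 * 64^2 = 144 * 2^12 */
    return 6 * K1 - 6 * K2;   /* 6 positions, minus double-counted overlaps */
}
```

It returns 77305872384, matching the hand calculation.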
2 posts omitted.
>>
>>110
can you calculate the probability of "jak" id on /qa/ though
>>
>>126

Assuming that you can write the second 'a' of 'jak' as 4, it's the same probability.

Thoughever, you can also write it as 'jaq' or 'jac', so with 6 possibilities of writing the last letter there are:

\qquad 2 \times 3 \times 6 = 36

possible variations. The previous constants for JaK become:

\qquad K_0 = \text{the same}

\qquad K_1 = 36 \times 64^5 = 36 \times 2^{30} = 38\ 654\ 705\ 664

\qquad K_2 = 36 \times 36 \times 64^2 = 1296 \times 2^{12} = 5\ 308\ 416

The final equation for the number of jAc IDs:

\qquad 6 \times K_1 - 6 \times K_2 =
\qquad 6 \times 36 \times 2^{30} - 6 \times 1296 \times 2^{12} =
\qquad 216 \times 2^{30} - 7776 \times 2^{12} =
\qquad 231\ 896\ 383\ 488

And the probability:

\qquad \frac{231896383488}{K_0} = \frac{231896383488}{281474976710656} \approx 0.000823861453 \approx \frac{1}{1214}

So it's pretty much just 3 times bigger than "KoT". :o


/qa/ is dead though :(
>>
>>106
Let's generalize this to ids of length n (for 4chan: n = 8) with a set of q different possible characters (for base64: q = 64) and a substring of interest of length \ell\ (\ell \leq n) (for k0t: \ell = 3) with p different variations of the substring (for k0t: p = 12).

The number of all possible IDs is:

\qquad K_0 = q^n

If k0t appears in at least one place, we must consume \ell = 3 characters to assemble them into k0t, then we return the k0t into the bunch (so we now have n - \ell + 1 items in the bunch) and then we arrange the k0t in {n - \ell + 1 \choose 1} ways. We have p variations of k0t, plus n - \ell other characters which can each be q different ways (so q^{n - \ell}) (even tho I said "plus" we use \times bcoz of thing called multiplication principle but w/e):

\qquad K_1 = {n - \ell + 1 \choose 1} \cdot p \cdot q^{n - \ell}

If there are two k0ts somewhere, we must consume 2\ell = 6 characters to assemble 2 k0ts, then we return the k0ts into the bunch (we now have n - 2\ell + 2 items) and we arrange the two kots in {n - 2\ell + 2 \choose 2} ways. We have two kots which can each be p different ways (so p^2) and n - 2\ell unconsoomed characters which can each be q different ways (q^{n - 2\ell}):

\qquad K_2 = {n - 2\ell + 2 \choose 2} \cdot p^2 \cdot q^{n - 2\ell}

Therefore if we have k k0ts, the formula is the following (and I have a truly marvelous way to prove this formula by induction but I am also truly a lazy fuck):

\qquad K_k = {n - k(\ell - 1) \choose k} \cdot p^k \cdot q^{n - k\ell}

Now if we want to count all the k0ts without duplicates, we must use the inclusion-exclusion principle:

\qquad K_{\Sigma} = \sum_{k = 1}^{n - k(\ell - 1) > 0} (-1)^{k - 1} \cdot K_k

Then the probability of k0t appearing is

\qquad P = \frac{K_\Sigma}{K_0}

So if I plug in the numbers for k0t on /pol/ slash /biz/:

\qquad n = 8,\quad q = 64,\quad \ell = 3,\quad p = 12

We get the following result:

\qquad P = \frac{K_\Sigma}{K_0} = \frac{1}{q^n} \sum_{k = 1}^{n - k(\ell - 1) > 0} (-1)^{k - 1} \cdot K_k = \frac{1}{64^8} \cdot \left( {8 - 3 + 1 \choose 1} \cdot 12 \cdot 64^5 - {8 - 6 + 2 \choose 2} \cdot 12^2 \cdot 64^2 \right) = \frac{12 \cdot 64^2}{64^8} \cdot (6 \cdot 64^3 - 6 \cdot 12) = \frac{77305872384}{281474976710656} \approx 0.02746\%

Which matches OP's numbers!
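The general K_k formula can likewise be checked mechanically (snippet and names mine; binom, ipow and k_sigma are my hypothetical helpers):

```c
/* C(n, k) by the exact multiplicative formula. */
unsigned long long binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    unsigned long long r = 1;
    for (int i = 1; i <= k; i++)
        r = r * (n - k + i) / i;   /* exact: r is C(n-k+i, i) at each step */
    return r;
}

unsigned long long ipow(unsigned long long b, int e) {
    unsigned long long r = 1;
    while (e-- > 0) r *= b;
    return r;
}

/* Inclusion-exclusion sum of K_k = C(n - k(l-1), k) * p^k * q^(n - k*l). */
long long k_sigma(int n, int q, int l, int p) {
    long long total = 0, sign = 1;
    for (int k = 1; n - k * (l - 1) >= k; k++, sign = -sign)
        total += sign * (long long)(binom(n - k*(l-1), k)
                                    * ipow(p, k) * ipow(q, n - k*l));
    return total;
}
```

With n = 8, q = 64, l = 3, p = 12 it reproduces the 77305872384 from upthread, and with p = 36 the JaK count.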
>>
>>129
Nice, I treat the fact that our results are the same as a miracle, because I always get something wrong during calculations. x)

Also I wonder whether determining the number for an arbitrary word and n (or just an equation for K_k, k \in 1, ..., n) can have an elegant equation, or whether a generic brute-force algorithm is the best way.

For example the word SUS is kinda sus, because SUSUSUSx is a valid id with three occurrences of the searched word.

We can assume that there are problems when a k-long suffix of the word is also its k-long prefix, like when 'S' is a 1-long suffix and prefix of 'SuS', and I see hope there, but words like 'SSUSSUSS' have the same words of length 1, 2 and 5 as prefixes and suffixes of respective lengths.
>>
test


File: 1638294292313.png ( 363.02 KB , 1195x1240 , 1640551533576.png )

Suppose you have two baskets with one ball each. The probability that one basket has k balls at time 0 is

\qquad p_{0,k}

therefore

\qquad p_{0,1} = 1

and for k != 1

\qquad p_{0,k} = 0.

Now (You) choose a random ball from any basket (every ball has the same probability of being chosen, not dependent on which basket it is placed in). You put a new ball in the basket where this chosen ball was. :0 You repeat this t times.

In math language it is probably:

\qquad p_{t,k} = p_{t-1,k-1}\frac{k-1}{t+1} + p_{t-1,k}\frac{t-k}{t+1}

Now

The most bone-chilling, slow-burn, atmosphere-oozing thing I discovered about it:

\qquad p_{t,k} = \frac{1}{t+1}

HOLY SCIENCE!!!!!!!!1

Probability of having k balls after time t is always the same!!!!!
5 posts omitted.
>>
>Probability of having k balls after t time is always the same!!!!!
I agree. And this probability is 1/2. It either happens or it doesn't.
>>
>>254
and now consider the conditional probability of it happening or not.
>>
>>73
op is a faggot (test)
>>
>>269
Your containment board >>>/test/
>>
>>73
Your post was kind of difficult to read so I will try to summarise the problem and solution in my own words. I also think you made a typo, so my recurrence relation looks very slightly different to yours, but the solution is the same.


Problem

You have two boxes, box #1 and box #2. You also have a bag with an unlimited number of balls. At time 0, box #1 has one ball and box #2 has one ball.

In every iteration:
- Choose a ball at random from the set of balls that are in the boxes.
- Take a new ball from the bag and put it into the box where the chosen ball was.

So at time t, there are t + 2 balls in both boxes. Each box has between 1 and t + 1 balls (inclusive).

Let p_{t,k} denote the probability that box #1 has exactly k balls at time t.

So by the initial conditions:
- p_{0,k} = 1 if k = 1
- p_{0,k} = 0 otherwise

Derive a closed form expression for p_{t,k}.


Solution

We know that at time t - 1 there were (t-1) + 2 = t + 1 balls in both boxes.

At time t, how can box #1 come to contain k balls? There are two cases:
- at time t-1 it had k-1 balls and was chosen
- at time t-1 it had k balls and was not chosen

This leads to the following recurrence relation

\qquad p_{t,k} = p_{t-1,k-1}\frac{k-1}{t+1} + p_{t-1,k}\frac{t+1-k}{t+1}

which has the solution, by induction on t,

\qquad p_{t,k} = \frac{1}{t+1}

where 1 \leq k \leq t + 1.

The base case for t = 1 is trivial.

\qquad p_{t,k} = p_{t-1,k-1}\frac{k-1}{t+1} + p_{t-1,k}\frac{t+1-k}{t+1}

\qquad = \frac{1}{(t-1)+1}\frac{k-1}{t+1} + \frac{1}{(t-1)+1}\frac{t+1-k}{t+1}

by the inductive assumption, and because t \geq 1 there are no problems with dividing by 0,

\qquad = \frac{k-1}{t(t+1)} + \frac{t+1-k}{t(t+1)}

\qquad = \frac{k-1+t+1-k}{t(t+1)}

\qquad = \frac{t}{t(t+1)}

\qquad = \frac{1}{t+1}

which is what we wanted.


Now for some values of k, namely 1 and t+1, this is a somewhat obvious result, because in these cases we have to look at the probability of choosing one box repeatedly, which is just \frac{t!}{(t+1)!} = \frac{1}{t+1}, but for the other cases it is not so obvious. Drawing a tree diagram helps verify the result; I did it up to t = 3.

Cool problem overall anon :)
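The induction can also be verified mechanically by iterating the recurrence (my sketch; p_tk is a name I made up):

```c
/* p(t, k): probability that box #1 holds k balls at time t, computed by
   iterating the recurrence p_{t,k} = p_{t-1,k-1}(k-1)/(t+1)
   + p_{t-1,k}(t+1-k)/(t+1); should equal 1/(t+1) for 1 <= k <= t+1. */
double p_tk(int t, int k) {
    double prev[64] = {0}, cur[64] = {0};   /* supports t up to 62 */
    prev[1] = 1.0;                          /* p_{0,1} = 1 */
    for (int s = 1; s <= t; s++) {
        for (int j = 1; j <= s + 1; j++)
            cur[j] = prev[j-1] * (j - 1) / (s + 1.0)
                   + prev[j]   * (s + 1.0 - j) / (s + 1.0);
        for (int j = 0; j <= s + 1; j++) {
            prev[j] = cur[j];
            cur[j] = 0;
        }
    }
    return prev[k];
}
```

For instance p_tk(10, 4) comes out as 1/11, matching the closed form for a k that is neither 1 nor t+1.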