
/math/ - Mathematics





File: stevin.png ( 123.72 KB , 1262x1779 , 1651889952353.png )

In his work that introduced decimals to Europe, Stevin wrote (in modern notation) that when you divide 0.4 ÷ 0.03, the algorithm gives you infinitely many 3's. But he didn't call this infinite result an exact answer. Instead he noted 13.33⅓ and 13.333⅓ as exact answers, while recommending truncating to 13.33 or however near the answer you require. So the main idea of infinite decimals giving arbitrarily good approximations was clearly there. But at what point did people start saying things like 0.4 ÷ 0.03 = 13.333... exactly?
>>
To give an answer you first need to define what "13.333... exactly" means. If you define it as an infinite string of digits to the right, an algorithm that calculates it (using the rules of addition and multiplication) would print it in infinite time. The algorithm is mentioned only for intuition: if you define the operations on infinite strings that way, it really does produce that string. The nerds can refer to the axiom of choice.

(In your post you used the word "approximate". Ask yourself the question -- "approximate what?")
>>
>>214
13.333... is the real number approximated by the sequence 13.3, 13.33, 13.333, ..., or in more precise terms, its limit.
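This limit picture is easy to poke at with exact rational arithmetic; a quick sketch (mine, not Stevin's):

```python
from fractions import Fraction

target = Fraction(4, 10) / Fraction(3, 100)   # 0.4 / 0.03 = 40/3

def truncation(k):
    # 13.33...3 with k threes after the point
    return Fraction(13) + sum(Fraction(3, 10**i) for i in range(1, k + 1))

# Each extra digit shrinks the gap to 40/3 by a factor of 10 ...
gaps = [target - truncation(k) for k in range(1, 6)]
assert gaps == [Fraction(1, 3 * 10**k) for k in range(1, 6)]

# ... so the truncations approximate 40/3 arbitrarily well, but no finite
# truncation ever equals it: the limit is what makes "13.333..." a number.
assert all(g > 0 for g in gaps)
```

This is exactly Stevin's situation: every truncation undershoots, and the limit is the modern way of making "= 13.333... exactly" precise.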
>>
When people say 1/3 = .333... or anything like that, defining the right side so that it comes out true, it's bad notation. 1/3 can't be written as a terminating decimal expansion because the division always leaves a remainder. But you can have a number where there is always an extra 3 added to the expansion, so it would be an unending string of 3s, like .3 + .03 + .003 and so on. I would like to use an infinite series here to make it neater, but that is unfortunately defined via limits, so I will avoid it here, as limits are not part of the discussion, only true values. If you do have such an unending string of 3s in the decimal expansion, it will not be equal to 1/3, since the difference between it and 1/3 is always greater than 0, no matter how long the string stretches. There are quite a few numbers that can't be written using decimal expansions; 1/3 is just one of them.
>>

File: Fire.jpg ( 28.48 KB , 282x288 , 1692557240604.jpg )

Paul Erdős (with Arthur Copeland) proved that the Copeland-Erdős constant is normal in base 10.

Can someone help me understand the proof?


File: 25d30fdff6e5edb9cd299f196780830a.png ( 216.3 KB , 1200x899 , 1662422662595.png )

The sum of the coefficients of the expanded formula for the sum of the first n positive integer powers seems to equal the denominator.
Seems pretty cool and I can't find anything about it online
>>
Well, this can be fairly easily explained by noting that the sum of the coefficients of a polynomial equals the polynomial evaluated at 1, and the LHS for n=1 is always 1. (BTW it should be the sum from k=1 to n of k^something, not vice versa; your picture includes this typo consistently.)
>>
Is there a formula for the general case, e.g. n^a?
>>
>>469
Yep! For a \in \mathbb{Z}^+:

\qquad \sum_{n=1}^{k} n^a = \frac{B_{a+1}(k+1) - B_{a+1}}{a+1} = H_k^{(-a)} = \zeta(-a) - \zeta(-a, k+1)

where B_{a+1}(x) is the Bernoulli polynomial, B_{a+1} the Bernoulli number, H_k^{(-a)} the generalized harmonic number, and \zeta(s, q) the Hurwitz zeta function.
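The coefficient-sum observation itself can be sanity-checked directly from the closed forms for a = 1, 2, 3 (these polynomials are the standard Faulhaber formulas, not taken from the pic):

```python
from fractions import Fraction

# Closed forms for sum_{k=1}^n k^a, numerator coefficients highest-first:
#   a=1: (n^2 + n)/2
#   a=2: (2n^3 + 3n^2 + n)/6
#   a=3: (n^4 + 2n^3 + n^2)/4
faulhaber = {1: ([1, 1, 0], 2), 2: ([2, 3, 1, 0], 6), 3: ([1, 2, 1, 0, 0], 4)}

def evaluate(coeffs, n):
    # Horner evaluation of the numerator polynomial
    acc = Fraction(0)
    for c in coeffs:
        acc = acc * n + c
    return acc

for a, (coeffs, denom) in faulhaber.items():
    # Plugging n = 1 into the whole formula must give 1^a = 1, and
    # evaluating the numerator at 1 is exactly summing its coefficients:
    assert evaluate(coeffs, 1) == sum(coeffs) == denom
```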


File: she's doing integrals ok.jpg ( 21.76 KB , 480x480 , 1655484028377.jpg )

A primitive function of a function f(x) over some interval x \in [a, b] is a function F(x) whose derivative is f(x) on that interval:

\qquad \forall x\in[a, b],\quad F'(x) = f(x)


The antiderivative (aka indefinite integral) of a function f(x) is the family of its primitive functions, which differ by a constant C \in \mathbb{R}:

\qquad \int f(x)\mathrm{d}x = F(x) + C


In a nutshell, integration is the opposite of differentiation:

\qquad \frac{\mathrm{d}}{\mathrm{d}x}\int f(x)\mathrm{d}x = f(x)


\qquad \mathrm{d}\int f(x)\mathrm{d}x = f(x)\mathrm{d}x


\qquad \int\mathrm{d}F(x) = F(x) + C


Solving integrals in general is pretty hard, but there are a lot of established ways to do it. As OP I'll post some of the standard approaches, but this thread is about any kind of integration, so feel free to post integrals and their solutions.

Most basic methods are:

  1. Using the table of integrals
  2. Using the linearity property
  3. Using substitution
  4. Using partial integration
  5. Reducing a quadratic to its canonical form
  6. Partial fraction decomposition
2 posts omitted.
>>
Substitution

Let's say we want to integrate f(x) in order to obtain its primitive function F(x) over some interval [a, b]:

\qquad \int f(x)\mathrm{d}x = F(x) + C,\quad x\in[a, b]


Now let's suppose we can identify a function g(x) that is differentiable on [a, b] with a continuous derivative. We can then write:

\qquad \int f(g(x))\cdot g'(x)\mathrm{d}x = \int f(g(x))\ \mathrm{d}g(x) = F(g(x)) + C


In a nutshell, we've substituted x with g(x) and solved the integral in the usual way. The tricky part is spotting the factor g'(x) in the integrand; without it, the substitution would not be possible.


Example:
OK, the based and redpilled way to do this is to notice that the definition of \mathrm{d} in the OP gives \mathrm{d}g(x) = g'(x)\mathrm{d}x. That means we can write g'(x)\mathrm{d}x as \mathrm{d}g(x):

\qquad \int f(g(x))\cdot g'(x)\mathrm{d}x = \int f(g(x))\ \mathrm{d}g(x)


So a funny thing based engineers like to do (which makes mathfags seethe and cope) is this:

\qquad\int\sin(x)\cos(x)\mathrm{d}x = \int\sin(x)\ \mathrm{d}\sin(x) = \frac{\sin^2(x)}{2} + C


Basically, treating \sin(x) like x and using \int x\mathrm{d}x = \frac{x^2}{2} + C.

The cringe and bluepilled way of solving this, of course, is recognizing \sin(2x) = 2\sin(x)\cos(x):

\qquad\int\sin(x)\cos(x)\mathrm{d}x = \frac{1}{2}\int\sin(2x)\mathrm{d}x = \frac{-\cos(2x)}{4} + C = \frac{2\sin^2(x) - 1}{4} + C = \frac{\sin^2(x)}{2} + \underbrace{\left(-\frac{1}{4} + C\right)}_{\text{constant}} = \frac{\sin^2(x)}{2} + C
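Both routes can be spot-checked numerically: differentiate sin²(x)/2 and compare against the integrand. A minimal pure-Python sketch (my own, for illustration):

```python
import math

f = lambda x: math.sin(x) * math.cos(x)   # integrand
F = lambda x: math.sin(x) ** 2 / 2        # the engineer's antiderivative

def numeric_deriv(F, x, h=1e-6):
    # central difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

# F'(x) reproduces the integrand at arbitrary sample points
for x in [0.0, 0.3, 1.0, 2.5, -1.7]:
    assert abs(numeric_deriv(F, x) - f(x)) < 1e-8
```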
>>
Partial integration

This basically allows you to swap which factor you integrate and which you differentiate:

\qquad\int u\ \mathrm{d}v = uv - \int v\ \mathrm{d}u


Usually you use it to solve an integral of a polynomial multiplied by one of the non-polynomial functions:

\qquad \int P_n(x)\cdot \begin{matrix}e^{ax}\\\sin(ax)\\\cos(ax)\\\ln(ax)\\\arcsin(ax)\\\arccos(ax)\end{matrix}\mathrm{d}x


You usually choose u to be the non-polynomial factor, because differentiating it is probably going to be easier than integrating it. Polynomials, on the other hand, are easy to both differentiate and integrate.

Example

You integrate \int x^2\arccos(x)\mathrm{d}x by differentiating \arccos(x) and integrating x^2:

\begin{aligned} \qquad \int \arccos(x)\cdot x^2 \mathrm{d}x &= \begin{Bmatrix} u = \arccos(x) & \mathrm{d}u = -\frac{\mathrm{d}x}{\sqrt{1- x^2}}\\\mathrm{d}v = x^2\mathrm{d}x & v = \frac{x^3}{3}\end{Bmatrix} = \frac{x^3}{3}\arccos(x) + \frac{1}{3}\int\frac{x^3}{\sqrt{1-x^2}}\mathrm{d}x \\&= \frac{x^3}{3}\arccos(x) + \frac{1}{3}\left(\frac{\sqrt{(1-x^2)^3}}{3} - \sqrt{1-x^2}\right) + C \end{aligned}


As for how you solve \int\frac{x^3}{\sqrt{1-x^2}}\mathrm{d}x, you do it by substituting t = \sqrt{1-x^2},\quad x^2 = 1 - t^2,\quad x\mathrm{d}x = -t\mathrm{d}t:

\qquad \int\frac{x^3}{\sqrt{1-x^2}}\mathrm{d}x = \int\frac{1-t^2}{t}(-t)\mathrm{d}t = \int(t^2 - 1)\mathrm{d}t = \frac{t^3}{3} - t = \boxed{\frac{\sqrt{(1-x^2)^3}}{3} - \sqrt{1-x^2}}
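The final antiderivative can itself be spot-checked by differentiating it numerically (differentiating should give back x² arccos(x)); a minimal sketch:

```python
import math

f = lambda x: x * x * math.acos(x)   # integrand

def F(x):
    # antiderivative of x^2 * arccos(x), assembled from the worked example
    s = math.sqrt(1 - x * x)
    return (x**3 / 3) * math.acos(x) + (s**3 / 3 - s) / 3

def numeric_deriv(F, x, h=1e-6):
    # central difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

# F'(x) should reproduce the integrand on the interior of [-1, 1]
for x in [-0.8, -0.2, 0.1, 0.6, 0.9]:
    assert abs(numeric_deriv(F, x) - f(x)) < 1e-6
```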
>>
Quadratic trinomial

How do you solve \int\frac{\mathrm{d}x}{ax^2 + bx + c} and \int\frac{\mathrm{d}x}{\sqrt{ax^2 + bx + c}}? You write ax^2 + bx + c in the following way (hint: completing the square by adding and subtracting \frac{b^2}{4a^2} inside the parentheses):

\begin{aligned} \qquad ax^2 + bx + c &= a\left(x^2 + \frac{b}{a}x + \frac{c}{a}\right) = a\left(\left(x^2 + 2\frac{b}{2a}x + \frac{b^2}{4a^2}\right) +\left(\frac{c}{a} - \frac{b^2}{4a^2} \right)\right)\\&= a\left(\left(x + \frac{b}{2a}\right)^2 +\left(\sqrt{\frac{c}{a} - \frac{b^2}{4a^2}}\right)^2\right)\\&=a(t^2 + k^2),\quad t= x+ \frac{b}{2a},\quad k=\sqrt{\frac{c}{a}-\frac{b^2}{4a^2}} \end{aligned}

(assuming the trinomial has no real roots, so that \frac{c}{a} - \frac{b^2}{4a^2} > 0). Since \mathrm{d}t = \mathrm{d}x, the integral reduces to either

\qquad\boxed{\frac{1}{a}\int\frac{\mathrm{d}t}{t^2 + k^2} = \frac{1}{ak}\arctan{\frac{t}{k}} + C}

or

\qquad\boxed{\frac{1}{\sqrt a}\int\frac{\mathrm{d}t}{\sqrt{t^2 + k^2}} = \frac{1}{\sqrt a}\ln\left|t + \sqrt{t^2 + k^2}\right| + C}
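A quick numerical check of the arctan case, using the sample trinomial x² + 2x + 5 (so t = x + 1, k = 2; my own sketch, with k = √(c/a − b²/(4a²))):

```python
import math

a, b, c = 1.0, 2.0, 5.0                      # sample: x^2 + 2x + 5 (no real roots)
k = math.sqrt(c / a - b * b / (4 * a * a))   # completed-square constant, = 2 here
t = lambda x: x + b / (2 * a)

F = lambda x: (1 / (a * k)) * math.atan(t(x) / k)   # claimed antiderivative
f = lambda x: 1 / (a * x * x + b * x + c)           # integrand

def numeric_deriv(F, x, h=1e-6):
    # central difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [-3.0, -1.0, 0.0, 2.0]:
    assert abs(numeric_deriv(F, x) - f(x)) < 1e-8
```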
>>
Partial fraction decomposition

How do you solve \int\frac{P(x)}{Q(x)}\mathrm{d}x where \deg{P(x)} < \deg{Q(x)}?

First, as a consequence of the fundamental theorem of algebra, any real polynomial can be factored into linear and quadratic terms. We will do that with Q(x):

\qquad Q(x) = (x-a_1)^{A_1}(x-a_2)^{A_2}\dots(x - a_m)^{A_m}(x^2 + b_1x + c_1)^{B_1}(x^2 + b_2 x + c_2)^{B_2}\dots(x^2 + b_n x + c_n)^{B_n}


Now employ the partial fraction decomposition:

\qquad\frac{P(x)}{Q(x)} = \sum_{i=1}^m\sum_{j=1}^{A_i}\frac{a_{ij}}{(x-a_i)^j} + \sum_{i=1}^n\sum_{j=1}^{B_i}\frac{b_{ij}x + c_{ij}}{(x^2 + b_ix + c_i)^j}


Then just use the linearity property of the integral.

For example:

x2+1x53x4+x3+7x26x8dx=x2+1(x2)(x+1)2(x23x+4)dx=a11x2dx+a21x+1dx+a22(x+1)2dx+b11x+c11x23x+4dx\qquad \int\frac{x^2 + 1}{x^5 - 3x^4 + x^3 + 7x^2 - 6x - 8}\mathrm{d}x = \int\frac{x^2 + 1}{(x-2)(x +1)^2(x^2-3x + 4)}\mathrm{d}x = \int \frac{a_{11}}{x-2}\mathrm{d}x + \int\frac{a_{21}}{x+1}\mathrm{d}x + \int\frac{a_{22}}{(x+1)^2}\mathrm{d}x + \int\frac{b_{11} x + c_{11}}{x^2 - 3x + 4} \mathrm{d}x



The constants a_{ij}, b_{ij}, c_{ij}\in\mathbb{R} have to be found, e.g. using the Heaviside cover-up method.
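The cover-up method is one-liner material: to get the coefficient over the highest power of a linear factor, delete that factor from Q(x) and evaluate the rest at its root. A sketch for the example above, in exact rationals:

```python
from fractions import Fraction

P = lambda x: x * x + 1   # numerator from the example

# Simple root x = 2: cover up (x - 2) in Q(x) and evaluate the rest at 2
x = Fraction(2)
a11 = P(x) / ((x + 1) ** 2 * (x * x - 3 * x + 4))
assert a11 == Fraction(5, 18)

# Top coefficient at the double root x = -1: cover up (x + 1)^2, evaluate at -1
x = Fraction(-1)
a22 = P(x) / ((x - 2) * (x * x - 3 * x + 4))
assert a22 == Fraction(-1, 12)
```

Note that cover-up alone only reaches the top power of each linear factor; the remaining constants (a21, b11, c11) still need a linear system or differentiation.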
>>
cool, and thanks


File: 1651334063589.jpg ( 62.08 KB , 900x900 , 1653814939932.jpg )

How to solve an equation?
8 posts omitted.
>>
What's an equation?
>>
multiply both sides by 0
>>
Linear maps
>>
By not posting tranime
>>
>>182
This is unironically an extremely good question. What is the general method for a computer to compute ANALYTICALLY the solution to any kind of equation, i.e. the zeroes of a function over a field, taking any number of inputs?
>Newton-Raphson for R^n
>Fermat's little theorem for prime-characteristic fields
These are examples of explicit solution methods for definite classes of equations. I am talking about an oracle that solves even in non-closed-form expression.
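For reference, the Newton-Raphson mentioned above is the archetypal numeric (explicitly non-analytic) method; a one-dimensional sketch:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    # Newton-Raphson: slide down the tangent line until the step is tiny
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        if abs(step) < tol:
            return x
        x -= step
    raise RuntimeError("did not converge")

# A floating-point zero of x^2 - 2 -- a numeric answer, not the symbol sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
assert abs(root - 2 ** 0.5) < 1e-12
```

This is exactly the gap the post is pointing at: the iteration hands you digits, never the closed form.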


File: photo_2022-03-18_17-07-07.jpg ( 48.05 KB , 1280x800 , 1662141820030.jpg )

Hey /math/

I've been trying to understand spherical harmonics to grok an ML paper that represented points in euclidean space in a rotation-and-translation-invariant way (https://arxiv.org/pdf/1802.08219.pdf). I found a great textbook on SO(3) (https://www.diva-portal.org/smash/get/diva2:1334832/FULLTEXT01.pdf), but while I can kind of maybe sort of follow what's being done with them in this particular paper I fail to really get an intuition of what spherical harmonics are, and it feels like there's some pretty beautiful insight in there.

Do you have any advice or perspectives on how to intuitively grasp what these harmonics are and mean, beyond just group theory definitions?
>>
Harmonics are solutions of Laplace's equation in a given coordinate system. Spherical harmonics are the angular part of the solutions of Laplace's equation in spherical coordinates.
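One concrete way to build intuition: the zonal (m = 0) harmonics form an orthonormal basis for functions on the sphere, which is why they show up whenever you decompose angular data, as in that paper. A pure-Python quadrature sketch (the Y_l^0 formulas are the standard normalized ones, not from the thread):

```python
import math

# Standard normalized zonal harmonics Y_l^0 as functions of the polar angle:
Y = {
    0: lambda t: 0.5 / math.sqrt(math.pi),
    1: lambda t: math.sqrt(3 / (4 * math.pi)) * math.cos(t),
    2: lambda t: math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(t) ** 2 - 1),
}

def sphere_inner(f, g, n=4000):
    # <f, g> = integral of f*g over the unit sphere; m = 0 harmonics are
    # independent of phi, so the phi integral just contributes 2*pi
    h = math.pi / n
    total = sum(f((i + 0.5) * h) * g((i + 0.5) * h) * math.sin((i + 0.5) * h)
                for i in range(n)) * h
    return 2 * math.pi * total

# Orthonormal basis on the sphere: <Y_l, Y_l> = 1 and <Y_1, Y_2> = 0
for l in Y:
    assert abs(sphere_inner(Y[l], Y[l]) - 1.0) < 1e-5
assert abs(sphere_inner(Y[1], Y[2])) < 1e-9
```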
>>
>>282
https://youtu.be/Ziz7t1HHwBw


File: merkobacalc.jpg ( 81.78 KB , 1120x777 , 1683182411654.jpg )

This is a multi-line calculator I made.
Each line is associated with a variable.
You can reference variables in other lines.
It uses math.js at 64 precision.
Allows normal and fraction mode.
http://calculator.merkoba.com/


File: F139_3957.jpg ( 184.38 KB , 1024x683 , 1681404166674.jpg )

What's the difference between trigonometric analysis and calculus? Especially as it pertains to vectors. Pic somewhat related.
>>
Are these high school math classes? If so, ask your school what the difference is.
>>
>>387

I dropped out, I'm trying to figure this out for work because I had made the same comment to my boss and he called me a retard.
>>
>>388
"Trigonometric analysis" isn't a common way of calling any particular branch of maths. That's probably why. It might perhaps refer to what people usually call simply "Trigonometry".
>>
>>389

okay, so how is this different from calculus? ex: wanting to know the area of a wave within a certain domain, and then adding several (different) areas of the larger wave together into a larger area
>>
>>390
Trigonometry is about trigonometric functions like the sine or cosine.
Calculus is about analytic things like derivatives and integrals.
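For the wave-area example above: trig supplies sin(x), calculus supplies the area under it. A sketch, assuming the arch from 0 to π:

```python
import math

a, b = 0.0, math.pi

# Calculus answer: the antiderivative of sin is -cos, so area = -cos(pi) + cos(0)
exact = -math.cos(b) + math.cos(a)

# Same area without the antiderivative: a midpoint Riemann sum
n = 100_000
h = (b - a) / n
riemann = sum(math.sin(a + (i + 0.5) * h) for i in range(n)) * h

assert exact == 2.0
assert abs(riemann - exact) < 1e-8
```

Adding the areas of several arches then works by linearity: integrate each sub-interval and sum the results.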


File: galileo.jpg ( 353.63 KB , 1230x1134 , 1673073627763.jpg )

Let g(X) be the minimal polynomial of \gamma. Then we have

\qquad \mathbb{Z}[X]/\langle g(X) \rangle \cong \mathbb{Z}[\gamma]

So then we can see that

\qquad \mathbb{Z}[\gamma]/\langle p \rangle \cong \mathbb{Z}[X] / \langle g(X), p \rangle


My question: is this a general rule of rings? Given a homomorphism \phi : R/I \rightarrow S with \ker\phi = J, do we have

\qquad R/(I + J) \cong S/J

?
>>
Yes, essentially; this is the isomorphism theorems at work, though the right-hand side should be S (or more generally im φ) rather than S/J, since J is an ideal of R/I, not of S.

Write J' for the preimage of J in R, so that I ⊆ J' and J = J'/I. Then:

  • By the first isomorphism theorem, (R/I)/ker φ ≅ im φ, i.e. (R/I)/(J'/I) ≅ im φ.
  • By the third isomorphism theorem, (R/I)/(J'/I) ≅ R/J'.

Combining the two gives R/J' ≅ im φ. If you regard I and J as ideals of R, then J' = I + J, which is your statement: R/(I + J) ≅ im φ, with im φ = S when φ is surjective.

In your example, R = Z[X], I = ⟨g(X)⟩, and the kernel of the composite Z[X] → Z[γ] → Z[γ]/⟨p⟩ is ⟨g(X), p⟩, which gives Z[X]/⟨g(X), p⟩ ≅ Z[γ]/⟨p⟩.
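A concrete sanity check of the original example, with the hypothetical choices γ = i, g(X) = X² + 1, p = 5 (so both sides are 25-element rings): multiplication computed in Z[i] mod 5 agrees with multiplication computed mod (X² + 1, 5) under the map X ↦ i.

```python
# Hypothetical concrete case: gamma = i, g(X) = X^2 + 1, p = 5.
# Elements on both sides are pairs (a0, a1): a0 + a1*i, resp. a0 + a1*X.
p = 5

def mul_in_Zi(u, v):
    # multiply in Z[i] using i^2 = -1, then reduce mod p
    (a0, a1), (b0, b1) = u, v
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def mul_in_ZX(u, v):
    # multiply in Z[X], then reduce mod g(X) = X^2 + 1 and mod p
    (a0, a1), (b0, b1) = u, v
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1   # c0 + c1*X + c2*X^2
    return ((c0 - c2) % p, c1 % p)                     # X^2 = -1 mod g(X)

# The map X -> i matches the two rings element by element:
residues = [(x, y) for x in range(p) for y in range(p)]
for u in residues:
    for v in residues:
        assert mul_in_Zi(u, v) == mul_in_ZX(u, v)
```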
>>
Thanks a lot. What books do you recommend on commutative algebra? I'm looking at Atiyah-Macdonald supplemented by Reid and the first few chapters of Bosch. I also have a copy of Matsumura's book. Does that look like a good course to you? Reid is basically a rewrite of Atiyah, but I will do Atiyah's exercises, since there are multiple solution sheets online.
>>
Atiyah and Macdonald's "Introduction to Commutative Algebra" and Reid's "Undergraduate Commutative Algebra" are both excellent introductory texts on the subject of commutative algebra. They cover many of the basic concepts and results, and both have a clear and accessible writing style.

Matsumura's "Commutative Ring Theory" is also a very good text and covers more advanced topics, such as dimension theory and Cohen-Macaulay rings. It also has a more algebraic geometric flavor than the other two books.

Bosch's "Commutative Algebra" is another great reference. It provides a more geometric perspective on commutative algebra and it covers many of the same topics as Atiyah-MacDonald and Reid, but in more depth and with more geometric intuition.

Taken together, these books should provide a comprehensive introduction to commutative algebra. There are also many other resources available online, such as lecture notes, videos, and problem sets, that can deepen your understanding.


File: distracted-math.jpg ( 39.64 KB , 854x480 , 1667702420031.jpg )

I've tried writing longform posts but don't get much traction. Let's try a shortform post.
>>
I had a thought that continuous sets are not real; they are limits of discrete sets in which the gaps between the two closest points are so small that we don't really know (or don't want to know) how small they are.
>>

File: hackenbush-1.png ( 6.28 KB , 229x220 , 1668744680246.png )

>>332
Thank you for joining my friend.
What brings you to mathchan?
How can it grow?

>gaps between two closest points are small
Hackenbush is very interesting.
https://www.youtube.com/watch?v=ZYj4NkeGPdM&t=1260
https://www.goodreads.com/book/show/1293306.Winning_Ways_for_Your_Mathematical_Plays

-

>limits
|\mathbb{R}| = |\mathbb{Z}|

>muh diagonal argument
Invalid. Consider the following countably infinite list:
  First Number: 0
 Second Number: 0.1
  Third Number: 0.11
 Fourth Number: 0.111
  Fifth Number: 0.1111
  Sixth Number: 0.11111
Seventh Number: 0.111111
 Eighth Number: 0.1111111
  Ninth Number: 0.11111111
  Tenth Number: 0.111111111
  (... continues ...)

Question: Is the following number in the list?
9^{-1}, i.e. \frac{1}{9}, i.e. 0.\overline{1}, i.e. 0.111111111111\ldots

The diagonal argument claims 0.\overline{1} isn't in the list. (Because 0.\overline{1} differs from the first number in the tenths digit, the second number in the hundredths digit, the third number in the thousandths digit, the fourth number in the ten-thousandths digit, the fifth number in the hundred-thousandths digit, and so on forever, then allegedly 0.\overline{1} is not in the list.)

But obviously, 0.\overline{1} is in the list. The list is directly constructed so as to contain 0.\overline{1}.
>but muh infinite digits
The decimal number 0.\overline{1} contains countably infinite digits. The list is a countably infinite list.

Contradiction. Therefore the diagonal argument is invalid.
P.S. For the curious, the countably infinite list which contains all real numbers is simply
0    , 0.1  , 0.2  , 0.3  , 0.4  , 0.5  , 0.6  , 0.7  , 0.8  , 0.9  ,
0.01 , 0.11 , 0.21 , 0.31 , 0.41 , 0.51 , 0.61 , 0.71 , 0.81 , 0.91 ,
0.02 , 0.12 , 0.22 , 0.32 , 0.42 , 0.52 , 0.62 , 0.72 , 0.82 , 0.92 ,
0.03 , 0.13 , 0.23 , 0.33 , 0.43 , 0.53 , 0.63 , 0.73 , 0.83 , 0.93 ,
0.04 , 0.14 , 0.24 ,
(... continues ...)
>>
>>333
>|\mathbb{R}| = |\mathbb{Z}|
I didn't really mean that real numbers are badly constructed by mathematicians, but that the world is more likely based on discrete numbers than on "real" numbers, the latter being a formal construction where the distance between numbers is "really infinitely" small.

You seem to use an axiom or principle that you can count from 0.1 to 0.(1). I probably won't disprove or confirm your theorem, because it is not my focus in my journey through the land of mathematics. xp I only had the thought that you can add from 1 to +\infty with it, but it's not a limit of adding smaller values, so they don't end up at a fixed value.
>>
>>333
>contains countably infinite digits
You're confusing the cardinality of sets with natural numbers. Let your set of 0.1* numbers be S. Then S \cong \mathbb{N}, by counting the ones in each decimal number. 0.\overline{1} has \aleph_0 digits, and \aleph_0 \notin \mathbb{N}, therefore 0.\overline{1} \notin S. Just because something is countably infinite doesn't mean you can reach it by counting.
A simpler refutation would be to say that, by the nature of the construction of S, every number in it other than the first has a single unique predecessor that you can follow back to 0. The predecessor of 0.\overline{1} would be 0.\overline{1} itself, which clearly puts it outside S. This is a lot more intuitive if you imagine a graph of S with each number connected to the next: it forms a straight line stretching out from 0, while 0.\overline{1} sits in a single-node graph whose only edge is to itself, so it clearly cannot connect to the line.
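The same point in exact rationals: the k-th element of the list undershoots 1/9 by exactly (1/9)·10^-k, which is positive for every k. A sketch:

```python
from fractions import Fraction

def s(k):
    # k-th element of the list: 0.11...1 with k ones (k = 0 gives 0)
    return Fraction(10**k - 1, 9 * 10**k)

ninth = Fraction(1, 9)   # 0.111... repeating

# Every element undershoots 1/9 by exactly (1/9) * 10^-k > 0,
# so 1/9 itself is never an element, no matter how far down you go.
for k in range(100):
    assert ninth - s(k) == Fraction(1, 9 * 10**k)
    assert s(k) < ninth
```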
>>
test


File: IMG_20220714_001733.jpg ( 5.31 KB , 240x60 , 1657768669674.jpg )

-1/12

contradiction or too complex for mere mortals to understand?
6 posts and 1 image reply omitted.
>>
Doesn't work per the common definition of an infinite sum
>>
abuse of notation
>>

File: merge_from_ofoct.jpg ( 799.81 KB , 951x6024 , 1663049911610.jpg )

This physicist's treatment of the -1/12 issue, from Zee's QFT book, is the best I've seen.
>Appallingly, in an apparent attempt to make the subject appear even more mysterious than it is, some treatments simply assert that the sum is by some mathematical sleight-of-hand equal to -1/12. Even though it would have allowed us to wormhole from (1) to (3) instantly, this assertion is manifestly absurd. What we did here, however, makes physical sense.
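The "physically sensible" version can be reproduced in a few lines: damp the divergent sum with a smooth cutoff, and the cutoff-dependent divergence separates cleanly from a universal finite part, −1/12. A sketch (the exponential regulator exp(−n/N) is a standard choice, not necessarily Zee's exact one):

```python
import math

def smoothed_sum(N):
    # 1 + 2 + 3 + ... damped by a smooth cutoff exp(-n/N)
    return sum(n * math.exp(-n / N) for n in range(1, 60 * N))

# The divergence is isolated in a cutoff-dependent N^2 term;
# subtracting it leaves the universal finite part, -1/12.
N = 100
finite_part = smoothed_sum(N) - N * N
assert abs(finite_part - (-1 / 12)) < 1e-3
```

No sleight of hand: nothing here claims 1 + 2 + 3 + ... equals −1/12, only that −1/12 is the regulator-independent piece of the asymptotics.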
>>
Behold, the Barnett-Tooker-Wildberger conjecture.

Let

\qquad \hat\zeta_b(s) = \sum_{n=1}^{\hat\infty - b} n^{-s}

then

\qquad \hat\zeta_{e + \pi + \sqrt 2}(-1) = \frac{e^{i\pi}}{11 + 0.999\dots}
>>
I still don't get it.