I don't know if this is exactly the same as what I learned in high school as "integration by substitution."
A number of years after I finished school, I was in a new town without a job, and got hired to teach a freshman algebra course at the nearby Big Ten university. About halfway into teaching the class, I was struck by the realization that virtually every problem was solved in the same way, by recognizing the "form" of a problem and applying an algorithm appropriate for that form, drawn from the most recent chapter.
In TFA, the natural log in the integrand was a dead giveaway, because it only comes from one place in the standard order of topics in a calculus class.
Is this what we call intuition?
The students called this the "trick." Many of them had come from high school math under the impression that math was subjective, and was a matter of guessing the teacher's preferred trick from among the many possible.
For instance, all of the class problems involving maxima and minima involved a quadratic equation, since it was the only form with an extremum that the students had learned. Every min/max problem culminated with completing the square. I taught my students a formula that they could just memorize.
The whole affair left me with a bad taste in my mouth.
I think the difference is something like: Feynman’s trick simplifies a hard integral by introducing a parameter and differentiating the whole integral with respect to it, while substitution simplifies an integral by changing variables to undo the chain rule. But it has been so long since I've done integration manually that I'm not 100% sure that's an accurate description or the full story.
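To make the contrast concrete, here is a quick stdlib-only numerical sketch (the two example integrals are mine, not from the article): substitution rewrites one integral as another, while the parameter trick relates an integral to the derivative of a family of integrals.

```python
import math

def midpoint(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Substitution: u = x^2 (du = 2x dx) turns the integral of 2x*cos(x^2)
# over [0, 1] into the integral of cos(u) over [0, 1], i.e. sin(1);
# the substitution undoes the chain rule.
before = midpoint(lambda x: 2 * x * math.cos(x * x), 0, 1)
after = midpoint(math.cos, 0, 1)

# Parameter trick: for F(t) = integral of x^t over [0, 1] = 1/(t+1),
# differentiating under the integral sign gives
# F'(t) = integral of x^t * ln(x) = -1/(t+1)^2.
t = 1.0
feynman = midpoint(lambda x: x**t * math.log(x), 0, 1)  # should be close to -1/4
```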
The thing I hated about integration was that figuring out which approach would work, and the best option within each approach, was much more a matter of "try a lot and see what's right", and I was too lazy :).
I think it's intuitive to assume what you are being tested on is what is being taught by the book or the teacher. It's unfair otherwise.
I just finished Mathematica by David Bessis and I wish this information was presented in the way he talks about math: using words and imagery to explain what is happening, and only using the equations to prove the words are true.
I just haven’t had to use integral calculus in so many years, I don’t recall what the symbols mean and I certainly don’t care about them. That doesn’t mean I wouldn’t find the problem domain interesting, if it was expressed as such. Instead, though, I get a strong dose of mathematical formalism disconnected from anything I can meaningfully reason about. Too bad.
That's one of the things I like best about https://betterexplained.com -- it focuses on ways to gain intuition about a given math concept, using visuals and metaphors as appropriate. If only math education were always presented like that....
> So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's
This is the most important lesson I learned in grad school. Methods are so important. I really think it is the core of what we call "critical thinking" - knowing how facts are made.
My issue with both this and u-substitution is that you don't know what expression to use. There are a LOT of expressions that plausibly simplify the integral. But you have to do a bunch of algebra for each one (and not screw it up!), without really knowing whether it actually helps.
OTOH, if I'm given the expression, it's just mechanical and unrewarding.
That’s how most of math works past high school. It requires a lot of practice and intuition.
I don't know about this particular case though, I get the feeling there's a system to it that can be exploited by eg Wolfram. It's just that you're in the dark for a long time before you find the switch.
I think it just tokenizes everything and does pattern matching to find compositions it can exploit. It's not unlike compiler optimization.
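As an illustration of that flavor of rule-based pattern matching (a pure toy with an invented tuple encoding for expressions; real systems like Wolfram's, or the Risch algorithm, are far more involved):

```python
# Toy rule-based integrator. An expression is a tuple:
# ("pow", var, n) means var**n; ("sin", var) and ("cos", var) as expected.

def integrate(expr, var="x"):
    """Return (coefficient, antiderivative expression), or None if no rule matches."""
    head = expr[0]
    if head == "pow" and expr[1] == var and expr[2] != -1:
        n = expr[2]
        return (1 / (n + 1), ("pow", var, n + 1))  # power rule
    if head == "pow" and expr[1] == var and expr[2] == -1:
        return (1, ("ln", var))                    # 1/x -> ln x
    if head == "sin" and expr[1] == var:
        return (-1, ("cos", var))
    if head == "cos" and expr[1] == var:
        return (1, ("sin", var))
    return None  # no rule matched; a real CAS would try rewrites/compositions next
```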
Your intuition is right. There is a general algorithm for finding antiderivatives: https://en.wikipedia.org/wiki/Risch_algorithm Its simplified form can solve pretty much all undergrad antiderivation problems.
I'm a math major, but I consider the time spent learning the tricks for antiderivation to be kinda useless.
It starts off with a pretty major error.
I'(t) = \int_0^1 \partial/\partial t [(x^t - 1)/ln(x)] dx = \int_0^1 x^t dx = 1/(t+1), when it is actually equal to \int_0^1 x^{t-1}/ln(x) dx.
These two are definitely not always equal to each other.
No, it is correct. The integral is with respect to x, and the ordinary/partial derivatives are with respect to t. Written out fully, the derivative computation is
d/dt (x^t - 1)/ln(x) = d/dt [exp(ln(x)t) - 1]/ln(x) = ln(x)exp(ln(x)t)/ln(x) = exp(ln(x)t) = x^t.
Edit: d/dt exp(ln(x)t) = ln(x)exp(ln(x)t) by the chain rule, while d/dt (1/ln(x)) = 0 since the expression is constant with respect to t.
There are convergence considerations that were not discussed in the blog post, but the computations seem to be correct.
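A quick numerical sanity check (stdlib only; the closed form I(t) = ln(t+1) follows from I'(t) = 1/(t+1) together with I(0) = 0):

```python
import math

def I(t, n=200_000):
    """Midpoint-rule estimate of the integral of (x^t - 1)/ln(x) over [0, 1].

    Midpoints conveniently avoid the endpoints, where the integrand is
    defined only as a limit (0 at x=0, and t at x=1 by L'Hopital)."""
    h = 1.0 / n
    return h * sum((((i + 0.5) * h) ** t - 1) / math.log((i + 0.5) * h)
                   for i in range(n))

# I(t) should match ln(t+1), e.g. I(1) near ln 2 and I(5) near ln 6.
```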
Ah, yes. I don't understand how I differentiated with respect to x instead of t, but...
It’s interesting he mentions he doesn’t like contour integration, since many of these integrals can be done either way.
Feynman’s trick is equivalent to extending it into a double integral and then switching the order of integration.
Don't forget to check for the necessary measurability & integrability of the sections (f(a, y), f(x, b)) before switching the order: https://en.wikipedia.org/wiki/Fubini%27s_theorem?useskin=vec....
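Spelled out for the integral under discussion (a sketch, assuming t >= 0; the swap is safe here by Tonelli, since the inner integrand x^s is continuous and nonnegative on the region):

```latex
\frac{x^t - 1}{\ln x} = \int_0^t x^s \, ds
\;\Longrightarrow\;
I(t) = \int_0^1 \int_0^t x^s \, ds \, dx
     = \int_0^t \int_0^1 x^s \, dx \, ds
     = \int_0^t \frac{ds}{s+1}
     = \ln(t+1).
```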
This reminds me of the "snake oil method" for generating functions. It's been many years, but I remember it as adding another sigma and then swapping them.
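If I'm remembering Wilf's generatingfunctionology right, the canonical first example is S(n) = \sum_k \binom{k}{n-k}: multiply by x^n, sum over n, swap the two sigmas, and the inner sum collapses (treat the details as half-remembered):

```latex
\sum_{n \ge 0} S(n)\,x^n
= \sum_{n \ge 0} x^n \sum_k \binom{k}{n-k}
= \sum_k \sum_n \binom{k}{n-k}\,x^n
= \sum_k x^k (1+x)^k
= \frac{1}{1 - x - x^2},
```

which is the Fibonacci generating function, so S(n) = F_{n+1}.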