A list of puns related to "Rolle's theorem"
My professor gave the following statement during my math class today, where we had to answer with true or false (and give a counterexample if false):
'If a function f(x) is continuous on [0,1[, differentiable on ]0,1[ and f(0)=f(1), then there must exist a c that is an element of ]0,1[ with f'(c)=0.'
Now I know this statement is false; several counterexamples were given (for example f(x)=x when x ≠ 1 and f(x)=0 when x=1). But all of the examples given were 'artificial' functions like the one I mentioned before (sorry, I don't know the proper name for functions defined by more than one rule, i.e. piecewise; English is not my first language). I asked whether there was a 'natural' function, so to speak, that could also be given as a counterexample. He believed it was possible but couldn't think of one right away.
I have been thinking for hours now and still haven't found one; I would appreciate it if anyone here can help!
Thanks!
If f(x) is continuous on [a, b] and differentiable on (a,b) and f(a) = f(b) Then there exists some value c where f'(c)= 0
Steps
Ensure it's continuous on [a, b] and differentiable on (a, b)
Make sure plugging in a and b results in the same value
Find derivative of f(x)
Set it to 0 and solve for x
Ensure x is between a and b (a worked sketch follows below)
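A minimal sketch of these steps in Python with sympy, using a made-up example f(x) = x^2 - 2x on [0, 2] (the function and interval are my own choice for illustration, not from the post):

```python
# Illustrative only: apply the steps above to an assumed example,
# f(x) = x**2 - 2*x on [0, 2] (a polynomial, so step 1 is automatic).
import sympy as sp

x = sp.symbols('x')
f = x**2 - 2*x
a, b = 0, 2

# Step 2: plugging in a and b gives the same value.
assert sp.simplify(f.subs(x, a) - f.subs(x, b)) == 0

# Steps 3-4: differentiate and solve f'(x) = 0.
fprime = sp.diff(f, x)
candidates = sp.solve(sp.Eq(fprime, 0), x)

# Step 5: keep only the solutions strictly between a and b.
c_values = [c for c in candidates if a < c < b]
print(c_values)   # [1]
```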
Suppose you have a function f(x) = x^3 + 3x^2 + 16. How do you show that it has exactly one root?
For example, f:R->R and f(2)=f(5). So will there be c in (2,5) where f'(c)=0 ?
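Regarding the cubic f(x) = x^3 + 3x^2 + 16 above: one route is that f'(x) = 3x(x+2) vanishes only at x = -2 and x = 0, and f(-2) = 20, f(0) = 16 are both positive, so f has no root on [-2, ∞); on (-∞, -2] it is strictly increasing, so it has at most one root there, and the IVT gives existence. A small sympy sanity check of those numbers (an illustration, not a proof):

```python
# Sanity check with sympy: critical points, their values, and the real roots.
import sympy as sp

x = sp.symbols('x')
f = x**3 + 3*x**2 + 16

crit = sp.solve(sp.Eq(sp.diff(f, x), 0), x)       # [-2, 0]
print([(c, f.subs(x, c)) for c in crit])          # f(-2) = 20, f(0) = 16
print(sp.real_roots(f))                           # a single real root, x = -4
```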
Hi everyone,
Given a function f(x) like f(x) = (1/x^2) - e^x
on the open interval (0, +inf), will Rolle's theorem apply in this case? I assume it won't, since the interval is open and unbounded rather than a closed interval [a, b]; any thoughts?
Also, if we had a function that is continuous and differentiable at all points, where the interval isn't specified in the question, would Rolle's theorem apply to it too?
Thanks in advance!
let f: [2π, 4π] -> R be a function defined as f(x) = sin(x/2)
Find all numbers c that satisfy the conclusion of Rolle's theorem.
I'd be grateful if anyone could explain the question asked in the title.
Thanks.
John.
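For the sin(x/2) problem above, a hedged sympy sketch of the mechanics (check that the endpoints agree, set f'(x) = 0, keep the solutions inside the interval); the expected answer is c = 3π:

```python
# Sketch: find c in [2*pi, 4*pi] with f'(c) = 0 for f(x) = sin(x/2).
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x / 2)

# Endpoints agree: sin(pi) = sin(2*pi) = 0, so Rolle's theorem applies.
assert f.subs(x, 2*sp.pi) == 0 and f.subs(x, 4*sp.pi) == 0

fprime = sp.diff(f, x)                            # cos(x/2)/2
zeros = sp.solveset(sp.Eq(fprime, 0), x, domain=sp.S.Reals)
print(sp.Intersection(zeros, sp.Interval(2*sp.pi, 4*sp.pi)))   # expected: {3*pi}
```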
I'm trying to prove that an equation has at most one real root.
Is this statement true?
If a function has n extrema, then the function has at most n + 1 real roots.
I can show that at least one root exists by IVT on the interval.
Thus, if I prove through Rolle's theorem that there is no value of c such that f'(c) = 0 on the interval, then that means the function has no extrema on the interval, so n = 0. Therefore, the equation has at most 1 real root on the interval.
Thank you.
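For concreteness, here is a sketch of that argument with an assumed example, f(x) = x^3 + x - 1 on [0, 1] (the actual equation from the exercise isn't stated above, so this is only an illustration of the IVT + Rolle reasoning):

```python
# IVT gives at least one root; Rolle (via f' never vanishing) forbids a second.
import sympy as sp

x = sp.symbols('x')
f = x**3 + x - 1

# IVT: a sign change on [0, 1] means at least one root there.
print(f.subs(x, 0), f.subs(x, 1))                               # -1, 1

# Rolle: two roots would force f'(c) = 0 between them, but f' = 3x**2 + 1
# has no real zero, so there is at most one root.
print(sp.solveset(sp.Eq(sp.diff(f, x), 0), x, domain=sp.S.Reals))  # EmptySet
```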
Let f : [0; 1] -> R
f(0) = 0
f is differentiable on [0; 1]
Prove that there exists a c in (0; 1) such that f'(c) = -f(c)/(c-1)
I first thought of Lagrange's mean value theorem. We know f is differentiable, so there exists a c with f'(c) = [f(1)-f(0)]/(1-0), i.e. f'(c) = f(1).
No idea what to do with this further.
Then I realized that the exercise comes after Rolle's theorem in the book, so I should probably use that.
Well, Rolle's theorem tells us that if f is differentiable on [0; 1] and there are two values a, b in that interval with f(a) = f(b), then f' vanishes somewhere between a and b. But nothing here forces f(0) = f(1), and even if it did hold, where do I go from here?
Can someone give me a boost here? I'm not sure what it is I'm supposed to be doing to solve this.
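One standard trick for exercises of this shape (offered as a hint, not necessarily the book's intended route) is to apply Rolle's theorem to an auxiliary function chosen so that the target identity is exactly g'(c) = 0, for instance:

```latex
% A possible auxiliary function: g(0) = (0-1)f(0) = 0 and g(1) = 0*f(1) = 0,
% so Rolle's theorem applies to g on [0, 1].
g(x) = (x-1)\,f(x), \qquad g'(x) = f(x) + (x-1)\,f'(x).
% Rolle gives a c in (0,1) with g'(c) = 0, i.e. f'(c) = -\frac{f(c)}{c-1}.
```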
Let f(x) = ax^3-bx. Find the numbers a and b if f(2)=4 is the absolute maximum value of f on [0,4]
I'm not sure how to do this, and most videos I've looked at always happen to use a piecewise function. Other times, people tell me to graph, but you can't always easily and correctly graph complicated functions within a time constraint. Could anyone help please? It would be very much appreciated, thank you!
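One possible setup (a sketch, not necessarily what the course expects): since the absolute maximum is attained at the interior point x = 2 of [0, 4], Fermat's theorem forces f'(2) = 0, which together with f(2) = 4 gives two equations in a and b; sympy can solve the system:

```python
# Sketch: interior absolute max at x = 2 implies f'(2) = 0; also f(2) = 4.
import sympy as sp

x, a, b = sp.symbols('x a b')
f = a*x**3 - b*x

sol = sp.solve([sp.Eq(f.subs(x, 2), 4),
                sp.Eq(sp.diff(f, x).subs(x, 2), 0)], [a, b])
print(sol)   # expected: {a: -1/4, b: -3}

# Afterwards one should still compare with the endpoint values f(0) and f(4)
# to confirm that x = 2 really gives the absolute maximum on [0, 4].
```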
https://imgur.com/a/VmpL3Vr
Ok. So ignoring that Stewart never presented any of these techniques in the earlier unit, it seems that he's creating a smaller interval within the given interval.
(a, b) within [-2, 2].
He's finding "c" (denoted r) within (a, b).
And then somehow he's using a proof by contradiction to show that there can't be two roots. But hold on, that doesn't prove that there's at most 1 root. What about a scenario where there are 0 roots?
That's as far as I got within the hour. Can anyone help?
Let f: [0,1] -> [0,1] be continuous with f(0) = 0 and f(1) = 1, and suppose the left-hand derivative exists at every x in (0,1). Prove that there exists an x0 such that f'(x0) >= 1 (left-hand derivative).
I'll be writing f', but I mean the left-hand derivative by it, OK?
So it looks almost like Rolle's theorem right? Let's define g(x) = f(x) - x. g(0) = g(1) = 0.
Notice that g'(x) = f'(x) - 1, so we're trying to show that there's a point such that g'(x) >= 0.
3 cases:
Case 1: g is constant, g(x) = 0, and the theorem is true.
Case 2: there exists a point at which g > 0. g is continuous on a closed interval, so by the Weierstrass (extreme value) theorem it attains its maximum at some point x1; since g(0) = g(1) = 0 and the maximum is positive, x1 is in (0,1). Let's calculate the left-hand derivative:
lim as h -> 0- of [ g(x1 + h) - g(x1) ] / h
The numerator is <= 0 (x1 is a maximum) and the denominator is negative, so the quotient is >= 0; hence g'(x1) >= 0, so f'(x1) >= 1. Case done.
Case 3: g takes a negative value somewhere (and no positive ones). Let x2 be a point where g attains its minimum, so x2 is in (0,1). The same computation now only gives g'(x2) <= 0. What now? x2 was supposed to be a minimum, yet the left-hand derivative can be negative there. Does it follow that x2 is not a minimum, giving a contradiction? I'm not sure with one-sided derivatives. Can anyone help me?
edit1: it's not a contradiction; take |x| as a counterexample. :(
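To make that edit concrete, a tiny numeric check that the left-hand derivative of |x| at its minimum x = 0 is -1, i.e. a negative left-hand derivative at a minimum really can happen:

```python
# Left-hand difference quotient of |x| at x = 0, with h < 0.
for h in [-0.1, -0.01, -0.001]:
    print((abs(0 + h) - abs(0)) / h)   # -1.0 every time
```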
Hello, I'm proving Rolle's theorem and having a bit of trouble with what it actually means. Thought I could find some help here :)
So, from Wikipedia: "Rolle's theorem states that any real-valued differentiable function that attains equal values at two distinct points must have at least one stationary point somewhere between them; that is, a point where the first derivative (the slope of the tangent line to the graph of the function) is zero."
So in other words, if we have a differentiable function that attains equal values at two distinct points, let's call those a & b, then f(a) = f(b). And there are two ways this could be:
Either the function is constant and if it is constant then there will be a c where f'(c)=0.
The other way:
The function is not constant. Since f(a) = f(b), if the function "starts" at f(a) and has to "return" to f(b), it will have to at least once change from a positive to a negative slope or vice versa, thus "crossing" f'(c) = 0, and the theorem is again correct.
So this proof seems obvious in my mind, but I'm having trouble putting it into an "actual" proof, and I hope someone here can explain this to me. I know there is a proof on Wikipedia, but I suspect there might be a simpler way to prove it; if not, maybe someone could help explain the proof that is on there?
TIA for answers.
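For reference, the usual way to make this intuition rigorous (a condensed sketch of the standard extreme-value-theorem argument, essentially the proof on Wikipedia):

```latex
% Sketch of the standard proof of Rolle's theorem.
\text{Let } f:[a,b]\to\mathbb{R} \text{ be continuous, differentiable on } (a,b),
\text{ with } f(a)=f(b).\\
\text{By the extreme value theorem, } f \text{ attains a maximum and a minimum on } [a,b].\\
\text{If both are equal, } f \text{ is constant and } f'(c)=0 \text{ for every } c\in(a,b).\\
\text{Otherwise one of them is attained at an interior point } c\in(a,b)
\text{ (because } f(a)=f(b)\text{),}\\
\text{and Fermat's theorem on interior extrema gives } f'(c)=0.
```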
My algebra lecturer mentioned today that we are able to prove the intermediate value theorem and Rolle's theorem by examining the irreducibility of polynomials f ∈ K[X].
In detail:
Let K be a real closed field.
We are able to prove the following for f ∈ K[X]:
It's been a while since I saw the proofs of these theorems in my analysis lecture, but even after reading them again I still don't understand how to use the "algebraic way".
I wonder why Anton states the condition of Rolle's theorem as f(a) = 0 and f(b) = 0, whereas most books (including Spivak) say it must be f(a) = f(b). I find the latter more general, but I would still like to know the logic behind Anton's version. I have the 11th edition of Anton's Calculus.
Hello there, everyone. I'm a high school student. I just came across a concept in my mathematics textbook which said that the derivative of a differentiable function need not be continuous. Doesn't this then imply that a function, despite having equal values at two distinct points, could have a derivative which jumps from a positive value to a negative value without vanishing in between? I have a fairly good understanding of Rolle's theorem and have no confusion understanding it, but I'm very confused as to how both of these could be true at the same time without contradicting one another.
Thanks
Let f = dF/dx. Given F(0) = a and f(x) = x + x^3 + 2, how can we use Rolle's theorem, the mean value theorem, or the identity criterion to find F?
Trivially I can find a satisfying function, F(x) = 0.5x^2 + 0.25x^4 + 2x + a, but we have not covered integrals yet. We are just starting differentiation.
Using MVT I said fix x greater than 0. Then there exists c in (0, x) such that f(c) = ( F(x) - F(0) ) / ( x )
Then we get F(x) = f(c)*x + a so f(x) = f(c) = ( F(x) - F(0) ) / ( x ) so
F(x) = f(x)*x + F(0)
But this isn't right as it gives x^2 + x^4 + 2x + a.
Any advice on where I am going wrong?
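A quick sympy check of both candidates (just to see numerically where things go wrong; the issue is that the c from the MVT depends on x, so replacing f(c) by f(x) is the step that fails):

```python
# Differentiate both candidate antiderivatives and compare with f.
import sympy as sp

x, a = sp.symbols('x a')
f = x + x**3 + 2

F_trivial = sp.Rational(1, 2)*x**2 + sp.Rational(1, 4)*x**4 + 2*x + a
F_claimed = f*x + a            # the F(x) = f(x)*x + F(0) formula from above

print(sp.simplify(sp.diff(F_trivial, x) - f))   # 0: this one matches f
print(sp.simplify(sp.diff(F_claimed, x) - f))   # nonzero: this one does not
```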
https://imgur.com/amQYcU3
I'm not sure how I should go about proving this. Any suggestions?
For which constants c in (0,1) is the following statement true? For every continuous f:[0,1] -> R with f(0) = f(1), there exists x in [0, 1-c] such that f(x) = f(x+c).
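Not an answer to which c work, but a small numeric illustration of the statement, using an assumed sample function f(x) = sin(2πx) (which has f(0) = f(1)) and a sample c = 1/3:

```python
# Numerically search [0, 1-c] for an x with f(x) = f(x + c).
import math

def f(t):
    return math.sin(2 * math.pi * t)

c = 1 / 3
g = lambda t: f(t) - f(t + c)        # a sign change of g brackets a solution

xs = [i * (1 - c) / 1000 for i in range(1001)]
for lo, hi in zip(xs, xs[1:]):
    if g(lo) == 0 or g(lo) * g(hi) < 0:
        for _ in range(60):          # bisection
            mid = (lo + hi) / 2
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        print("x ≈", (lo + hi) / 2)  # ≈ 0.08333 = 1/12 for this f and c
        break
```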