A list of puns related to "Locally Bounded Function"
Exercise: Let f be a measurable function from R^d to C that is supported on a set of finite measure and let Ξ΅>0. Show that there exists a measurable set E of measure at most Ξ΅ such that f is locally bounded outside of E.
Solution attempt: Let n be a positive integer. Then there is a compactly supported continuous function g_n such that (in the L^1 norm) ||f - g_n|| < Ξ΅/2^n. The set E(n) = {x in B(0,n) : |f(x) - g_n(x)| > 1} has measure β«_{E(n)} 1 β‰€ β«_{E(n)} |f - g_n| < Ξ΅/2^n. Since g_n is continuous on the closed ball B(0,n) and this ball is compact, there is some real number M(n) such that |g_n(x)| < M(n) for all x in B(0,n). If x is in B(0,n) but not in E(n), then |f(x)| β‰€ |g_n(x)| + 1 < M(n) + 1.
If E is the union of all the E(n), then its measure is at most Ξ΅. Given any positive real number r, let R denote the next integer larger than r. The map f is locally bounded outside of E because |f(x)| < max{M(1), β¦, M(R)} + 1 for all x in B(0,R) outside of E.
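For readability, here is the key chain of estimates from the attempt in display form, writing g_n for the approximating function chosen at stage n:

```latex
\mu(E) \;\le\; \sum_{n=1}^{\infty} \mu\bigl(E(n)\bigr)
\;<\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} \;=\; \varepsilon,
\qquad
|f(x)| \;\le\; |g_n(x)| + 1 \;<\; M(n) + 1
\quad\text{for } x \in B(0,n)\setminus E(n).
```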
Is this a correct solution? If not, where is the problem?
Hi, I have recently been studying probability and trying to relate it to my (limited) knowledge of real/functional analysis. As described in the title, I am curious about why weak convergence of probability measures is defined via bounded continuous functions.
My question stems from some comments that weak convergence of measures can be viewed as convergence in the weak* topology via the Riesz representation theorem (such as the discussion here, which is not entirely clear to me). But in the Riesz representation theorem, it is the space of compactly supported continuous functions that is considered, not the space of bounded continuous functions.
For simplicity, I am happy to set the ground space to be the usual Euclidean space so probability measures are induced by the usual random vectors. I would appreciate any comments or suggestions!
I had this problem in the corner of my mind for some time now. Can't seem to think of any. Any help will be appreciated
Hi everyone, having a bit of trouble with this proof at the moment.
I need to adapt it to prove that the space of bounded continuous real-valued functions, equipped with the supremum norm, is complete.
I have no idea how so any advice is really appreciated- thank you! :)
βGive an example of a function defined on the closed interval [0, 1] that doesnβt have a maximum or minimum value. Can such a function be increasing?β
My first guess was a constant function, but does it technically lack max/min, or is every point a max/min?
This is a cross post from Math Overflow.
The notation being too messy for a Reddit post, Iβve typed it up on mathbin here.
The underlying idea is that we may be able to detect the higher order variations by progressively βrenormalising/zooming intoβ the function.
Ongoing remarks:
Starting with the "only if" direction, let us set k = 2 for simplicity. We wish to find a decomposition of f into g and h. We may be able to get somewhere by examining the function V(x) := the total quadratic variation of f on [0, x].
This function is monotone increasing, and so in particular of bounded variation.
I believe the singular set should be supp DV_s, the support of the singular part of the derivative of V viewed as a Radon measure.
In other words, we will define mu = DV_s up to a scaling factor. This suggests what the associated h should be: first define w(x) := β«_[0, x] f(t) dDV_s(t). The remainder f - w still has nonzero quadratic variation, but in a βsmooth wayβ. I am not sure how to continue from here.
Hey group,
I am new to Google Cloud Functions and have been tasked with refactoring some for my job.
Problem:
Attempted Solution:
Error:
Thoughts:
Help appreciated. Thanks
Hi, I just need to find such a function (if it exists), but I have not yet been able to find one. I can find sequences of real functions on [0,1] that converge to a function that is not continuous, but not ones that converge to a function that is unbounded.
We know that to every function g: [0, 1] -> R of bounded variation there corresponds a Radon measure on [0, 1], the so-called LebesgueβStieltjes measure.
What happens if g is of positive but finite (1+eps)-variation for some eps > 0? What goes wrong if we try to construct, in the same way, a measure associated with g?
I have function that has a loop. In each iteration of the loop, it asks the user for input and adds that input to a larger string. So with each iteration, the string gets bigger. After the loop is over, the function returns that string.
I have done a print statement within that function and I can confirm the string is what I want it to be.
However, in my main function, when I declare a new variable salad and assign it the result of that function call, salad is blank. But the combo string inside the function has the fruits.
Why is this happening?
My guess is, when I write that assignment statement in the main function, the salad variable is immediately getting assigned with an empty string (since that's what the combo string starts out as)? How do I make it return the string only after the loop is over?
def main():
    salad = makeSalad()
    print(salad)  # print the finished string returned by makeSalad

def makeSalad():
    combo = ""
    for x in range(0, 4):
        ingredient = input("Please enter the ingredient: ")
        combo = combo + ingredient
    # the return sits at the loop's indentation level, so it only
    # runs after all four iterations have finished
    return combo

main()
I'm interested in knowing if a function f(x) can satisfy all of the following conditions
This question was inspired by an applied modeling problem I was working on. I've toyed around with some example functions. You can look at this desmos link for plots of the functions described below.
f(x) = tanh(x) does not have compact support.
smoothstep f(x) = 3x^2-2x^3 is not infinitely differentiable. The second derivative is discontinuous at the endpoints.
The bump-inspired function f(x) = exp(-1/x)/(exp(-1/x)+exp(-1/(1-x))) is interesting in that it satisfies the first three requirements, but there is no fixed M such that |dβΏ/dxβΏ f(x)| < M for all x and all n.
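As a quick numeric sketch of that bump-inspired function (the name bump_step and the extension by 0 and 1 outside (0, 1) are my own choices, not from the post):

```python
import math

def bump_step(x):
    """The transition exp(-1/x) / (exp(-1/x) + exp(-1/(1-x))) on (0, 1),
    extended by 0 for x <= 0 and by 1 for x >= 1."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    a = math.exp(-1.0 / x)
    b = math.exp(-1.0 / (1.0 - x))
    return a / (a + b)

# The function rises smoothly from 0 to 1 across [0, 1] and is
# symmetric about x = 1/2, i.e. bump_step(x) + bump_step(1 - x) = 1:
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(x, bump_step(x))
```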
So my question is, is there any function that meets these requirements? If not, can anyone guide me toward the intuition for why this is the case? Does this result have a name?
Thank you!
Find the area bounded by the functions
x = 3 + y^2, x = 2 - y^2, y = 1, y = -2
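Since 3 + y^2 > 2 - y^2 for every y, the horizontal strip at height y has width (3 + y^2) - (2 - y^2) = 1 + 2y^2, and the area is β«_{-2}^{1} (1 + 2y^2) dy = 9. A quick numeric sanity check of that integral in plain Python (composite trapezoid rule):

```python
def width(y):
    # horizontal distance between the right curve x = 3 + y^2
    # and the left curve x = 2 - y^2; equals 1 + 2*y**2 > 0
    return (3 + y**2) - (2 - y**2)

a, b = -2.0, 1.0  # the bounding lines y = -2 and y = 1
n = 10_000
h = (b - a) / n

# composite trapezoid rule for the area integral
area = (width(a) + width(b)) / 2.0
for i in range(1, n):
    area += width(a + i * h)
area *= h

print(area)  # close to the exact value 9
```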
Does anyone know whether there exists a function that is bounded above with f' > 0 everywhere? Does one exist with f'' > 0?
Is there a function such that, if you know f(1), you could compute the derivative at x = 1 and use it to calculate nearby values, but if you wanted to calculate values sufficiently far away, there would be no correlation between the value of the function at x = 1 and its value at the sufficiently distant x? That is, outside of a certain delta-x, the function behaves like a cryptographic hash function. The transition would likely be smooth.
All examples of chaos I have read investigate how small changes in initial conditions amplify over many timesteps. After a sufficiently small number of timesteps, two sufficiently similar initial states S and S' can be related by some transformation. As additional timesteps occur, the transformation loses accuracy until it is no better than a guess. I suspect that a similar phenomenon exists for deviations in input to a function, but I have no idea what the functional form would be.
To avoid being too cryptic: I have extremely rich multimodal synesthesia, and I am looking for some sort of mathematical description of it. If I were listening to a song and I increased the bass, for example, I could imagine how that would change the shapes/colors/textures that I perceive. The song would get a lot wider, and most of the non-bass components would shrink and push towards the center of my vision. Now, as more attributes of the song change, I quickly lose the ability to predict what the song will look like afterwards. With sufficiently many changes, I have no predictive ability; I must just listen to the music. And the features that I assign to songs have no long-range meaning. For example, if I recall listening to a song that has glittering white rectangles spinning and falling down in the center, I can't reason anything about the actual music. I can't guess the genre, the gender of the singer, what musical instrument is making those shapes, etc.
Thank you very much in advance!
So there's a sort of throwaway comment about a metric on the space E = C(R) (or on other non-compact metric spaces) that mimics the sup metric in StanisΕaw Εojasiewicz's book "An Introduction to the Theory of Real Functions". Indeed, the sup metric is not defined for non-compact spaces, because there are unbounded functions. However, what if we just got around that using an "equivalent" bounded metric? By this I mean defining a metric Ο on E as follows: let d denote the (possibly infinity-valued) sup metric on E, and define Ο(x,y) = d(x,y)/(1+d(x,y)) (where, if d(x,y) = +β, we set Ο(x,y) = 1). Does this metric space (E,Ο) have any nice (or not so nice) properties? What does convergence in this topology look like?
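A small sketch of the formula Ο = d/(1+d), approximating the sup metric by sampling on a finite grid (the helper names and the grid are my own illustration, not from the book):

```python
import math

def sup_dist(f, g, xs):
    # crude approximation of the sup metric d(f, g), sampled at the points xs
    return max(abs(f(x) - g(x)) for x in xs)

def rho(f, g, xs):
    # the bounded metric rho = d / (1 + d), with rho = 1 when d is infinite
    d = sup_dist(f, g, xs)
    return 1.0 if math.isinf(d) else d / (1.0 + d)

xs = [i / 100.0 for i in range(-1000, 1001)]  # sample grid on [-10, 10]

# sup|sin - cos| = sqrt(2), so rho is about sqrt(2)/(1+sqrt(2)) ~ 0.586
print(rho(math.sin, math.cos, xs))
# a large sup distance pushes rho toward (but never past) 1
print(rho(lambda x: x * x, math.sin, xs))
```

Note that Ο is always below 1, so every pair of functions is at finite distance even when the sup distance is infinite.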
Let f be an entire function that is bounded by a polynomial p(z) of degree n, meaning that
|f(z)| β€ |p(z)| for every complex number z.
Show that f is a polynomial of degree at most n.
What are some interesting examples of locally constant functions on spaces with disconnected components (since locally constant functions are constant on connected domains)? I'm especially interested in cases where the domain is p-adic or, more generally, some kind of Stein space.
P.S.: Bonus points for L^2-functions.
Hi, I am studying for my final exam in Calculus. We got a sample test, but I cannot answer this theoretical task. As far as I know, sin(x) would be bounded and not monotonic, but it is periodic. Thank you <33
I've already told her I didn't mean it, and I'm not quite sure how to make things right.
Please help - this doghouse is only a few lightyears across and doesn't have cable.
Hi, so I know that a critical point of a function is where f'(x) = 0 or f'(x) does not exist. However, if we have a function, say y = x on [-3, 3], does that function have critical points at x = -3 and x = 3? My reasoning is that, on the one hand, these points are included in the domain of the function, so we could differentiate there (one-sidedly), which would mean these points aren't critical. On the other hand, there is a rule stating that we can't differentiate at a "spiky" (non-smooth) point of a function, because a tangent can be drawn in many different ways at such a point.
So far, what I have thought of is to pick N so that f(x) is within epsilon of the limit for all x beyond N, and to look at the compact interval [0, N]. Then, by the Boundedness Theorem, f is bounded on [0, N], and since f stays near the limit beyond N, it is bounded for all x.
So I wrote out the definition of the limit, but I don't really know how to relate this to my chosen compact interval. Any help is appreciated, and if my approach is wrong please let me know! Thanks!
Edit: would a proof by contradiction work? If f is unbounded then it can't have a limit, so maybe something could be found along these lines.
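Assuming the setup implied by the post (f continuous on [0, β) with f(x) β L as x β β), the two pieces of the approach combine like this:

```latex
% choose N from the definition of the limit, with \varepsilon = 1
x \ge N \;\Longrightarrow\; |f(x) - L| < 1 \;\Longrightarrow\; |f(x)| < |L| + 1.
% Boundedness Theorem on the compact interval [0, N]
\exists\, M:\quad |f(x)| \le M \ \text{ for all } x \in [0, N].
% combine the two bounds
|f(x)| \;\le\; \max\{\, M,\ |L| + 1 \,\} \quad \text{for all } x \ge 0.
```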
Problem: "Give an example of a bounded function on [0,1] which achieves neither an infimum nor a supremum."
I tried the function f : [0,1] -> R defined piecewise by f(x) = 1/2 for x = 0, f(x) = x for 0 < x < 1, and f(x) = 1/2 for x = 1. Then f is bounded, because 0 < f(x) < 1 for every x in [0,1], and its supremum is 1 while its infimum is 0. My textbook uses the piecewise function f(x) = 2x - 1 for 0 < x < 1 and f(x) = 0 for x = 0 or x = 1. Is my example valid as well? I am a little unsure whether 0 and 1 really are the infimum and supremum of the function I exhibited, but f(x) < 1 and f(x) > 0 for every x in [0,1], and for any epsilon > 0 I can always find some x in (0,1) with f(x) > 1 - epsilon and some (other) x with f(x) < epsilon, so I believe this shows that 1 and 0 are the supremum and infimum. Is this correct?
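As a numeric sanity check of that example (sampling on a finite grid, which of course only suggests, not proves, the sup/inf claim):

```python
def f(x):
    # the proposed piecewise example on [0, 1]
    if x == 0 or x == 1:
        return 0.5
    return x

n = 100_000
vals = [f(i / n) for i in range(n + 1)]

# the sampled values never reach 0 or 1, but they come arbitrarily
# close as the grid is refined
print(min(vals), max(vals))
```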