Maximizing a convex function in Excel with Solver

I want to know if it's possible to maximize the sum of cumulative distribution functions for independent normal distributions in Excel using Solver (or OpenSolver).

https://preview.redd.it/rlpog89qmb471.png?width=276&format=png&auto=webp&s=9a23913091e70bbfe2f268a6187c5b733037d938

where Ξ¦(β‹…) is the standard normal cdf (NORM.S.DIST in Excel). Additionally, p_i and q_i are β‰₯ 0.

Since p_i and q_i are β‰₯ 0, the arguments to Ξ¦ are β‰₯ 0, and Ξ¦ is concave on that region. Therefore I'd be maximizing a concave function (equivalently, minimizing a convex function), so it should be tractable, but I'm not sure how to model it in Excel.
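
In case it helps to sanity-check the model outside Excel first, here is a minimal sketch in Julia. It assumes a toy objective Ξ£ Ξ¦(p_i) + Ξ¦(q_i) and a hypothetical budget constraint Ξ£ (p_i + q_i) ≀ B; the real constraints are whatever the linked image specifies.

    using SpecialFunctions   # provides erf

    Ξ¦(x) = (1 + erf(x / sqrt(2))) / 2     # standard normal cdf
    Ο†(x) = exp(-x^2 / 2) / sqrt(2Ο€)       # its derivative (standard normal pdf)

    # Projected gradient ascent on Ξ£ Ξ¦(p_i) + Ξ¦(q_i) with p, q β‰₯ 0 and a
    # hypothetical budget Ξ£ (p_i + q_i) ≀ B, enforced by a crude rescaling.
    function solve_toy(n, B; steps = 5_000, Ξ· = 0.01)
        p, q = fill(B / 2n, n), fill(B / 2n, n)
        for _ in 1:steps
            p .= max.(p .+ Ξ· .* Ο†.(p), 0.0)   # ascent step, then clip to p β‰₯ 0
            q .= max.(q .+ Ξ· .* Ο†.(q), 0.0)
            s = sum(p) + sum(q)
            s > B && (p .*= B / s; q .*= B / s)  # rescale back onto the budget
        end
        p, q
    end

    solve_toy(3, 6.0)   # for this toy model the optimum is the equal split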

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/jackries
πŸ“…︎ Jun 09 2021
🚨︎ report
Optimizing convex function over non-convex sets

In convex optimization, we usually try to solve the problem of minimizing a convex function (or maximizing a concave one) over a convex set/domain. Intuitively, I think it makes sense, but I'm not able to point out the exact reason why we want the set to be convex. Most of the analysis that I've seen so far uses the convexity of the objective function rather than of the set. So my question is: why is convexity of the constraint set important, and what happens if we optimize a convex function over a non-convex set? Any insights are appreciated!
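
For a concrete toy example (made-up numbers): minimize the convex f(x) = (x βˆ’ 0.3)^2 over the non-convex set S = {x : |x| β‰₯ 1}. The global minimum over S is at x = 1, but x = βˆ’1 is a strictly worse local minimum, and a projected-gradient method lands on one or the other depending on the start — exactly the pathology that convexity of the set rules out:

    # Projected gradient descent on f(x) = (x - 0.3)^2 over the non-convex
    # set S = {x : |x| β‰₯ 1}; the projection maps interior points to Β±1.
    f(x) = (x - 0.3)^2
    βˆ‡f(x) = 2 * (x - 0.3)
    proj(x) = abs(x) >= 1 ? x : (x >= 0 ? 1.0 : -1.0)

    function pgd(x0; Ξ· = 0.1, steps = 200)
        x = x0
        for _ in 1:steps
            x = proj(x - Ξ· * βˆ‡f(x))
        end
        x
    end

    println(pgd(2.0))    # converges to 1.0  (global minimum on S)
    println(pgd(-2.0))   # converges to -1.0 (a strictly worse local minimum)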

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/tranhp129
πŸ“…︎ May 03 2021
🚨︎ report
[Question] nonparametric regression for convex functions

Update 1: I am interested in nonparametric regression where the underlying regression function has a convexity property, rather than a convexity constraint.

--

OP: What are some nonparametric regression methods for convex functions that have consistency guarantees? To clarify, I'm not necessarily looking for methods that produce convex estimates, just for consistent estimates of convex regression functions. With that said, more general approaches also qualify, but I'm surveying for methods that are more efficient on convex functions.
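
For reference, one standard baseline here (it does produce convex fits, which is stricter than required, and consistency results for it exist in the shape-constrained regression literature) is the convex least-squares estimator: a quadratic program over fitted values and subgradients. A minimal sketch in Julia, assuming Convex.jl and SCS are available (API details vary by version):

    using Convex, SCS

    # Convex least-squares regression: choose fitted values yΜ‚ and subgradients g
    # so that every point lies on or above each supporting line.
    function convex_lse(x, y)
        n = length(x)
        yΜ‚, g = Variable(n), Variable(n)
        constraints = [yΜ‚[j] >= yΜ‚[i] + g[i] * (x[j] - x[i]) for i in 1:n for j in 1:n]
        prob = minimize(sumsquares(y - yΜ‚), constraints)
        solve!(prob, SCS.Optimizer)
        evaluate(yΜ‚)
    end

    x = sort(randn(50)); y = x .^ 2 .+ 0.1 .* randn(50)
    fit = convex_lse(x, y)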

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/fool126
πŸ“…︎ Mar 05 2021
🚨︎ report
Convex Function vs. Non-Convex Function vs. Concave Function

I am trying to wrap my head around the differences between the following types of functions: convex functions, non-convex functions, and concave functions.

Based on this post over here (https://stats.stackexchange.com/questions/324561/difference-between-convex-and-concave-functions ), it would seem that:

Convex Functions: a function that has exactly one minimum (relatively easy to determine this minimum through optimization procedures)

Non-Convex Functions: a function that could have several local minima, but only one global minimum (more difficult to determine the global minimum through optimization procedures)

Concave Functions: basically the "negative" of a convex function.

Is this a correct understanding of these 3 types of functions?

Thanks

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/jj4646
πŸ“…︎ Mar 26 2021
🚨︎ report
Concave and Convex Functions

How to find out whether a function of two variables f(x, y) is concave or convex?
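
For twice-differentiable functions, the standard check is the Hessian: f is convex on a region where the Hessian is positive semidefinite, and concave where it is negative semidefinite. A small sketch in Julia (assuming ForwardDiff.jl), with f(x, y) = x^2 + y^2 as the example:

    using ForwardDiff, LinearAlgebra

    f(v) = v[1]^2 + v[2]^2                     # f(x, y) = x^2 + y^2
    H = ForwardDiff.hessian(f, [1.0, 2.0])     # Hessian at the point (1, 2)
    println(eigvals(H))                        # [2.0, 2.0]: all β‰₯ 0 β‡’ convex there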

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Apr 11 2021
🚨︎ report
[R] Logistic Q-Learning: They introduce the logistic Bellman error, a convex loss function derived from first principles of MDP theory that leads to practical RL algorithms that can be implemented without any approximation of the theory. arxiv.org/abs/2010.11151
πŸ‘︎ 145
πŸ’¬︎
πŸ‘€︎ u/hardmaru
πŸ“…︎ Oct 22 2020
🚨︎ report
How to tell if a function is convex?

Say I want to prove

a/sqrt(a^2 + 8bc) + b/sqrt(b^2 + 8ac) + c/sqrt(c^2 + 8ab) β‰₯ 1 for positive real a,b,c.

Then, since f(x) := x^(-1/2) is convex on (0, ∞), Jensen's inequality with weights a, b, c gives

[af(a^2 + 8bc) + bf(b^2 + 8ac) + cf(c^2 + 8ab)]/[a+b+c] β‰₯ f([a(a^2 + 8bc) + b(b^2 + 8ac) + c(c^2 + 8ab)]/[a+b+c]),

Rearranging, we get that

a/sqrt(a^2 + 8bc) + b/sqrt(b^2 + 8ac) + c/sqrt(c^2 + 8ab) β‰₯ (a+b+c)^(3/2) /sqrt(a^3 + b^3 + c^3 + 24abc),

so it suffices to show that (a + b + c)^3 β‰₯ a^3 + b^3 + c^3 + 24abc, which follows immediately from AM–GM applied to the six cross terms a^2 b, a^2 c, b^2 a, b^2 c, c^2 a, c^2 b.

However, how do we know which function to pick, and how do we know that function is indeed convex?
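
As for checking convexity of the chosen f: with f(x) = x^(-1/2) on (0, ∞), f'(x) = βˆ’(1/2)x^(βˆ’3/2) and f''(x) = (3/4)x^(βˆ’5/2) > 0 for all x > 0, so f is indeed convex there. As for which function to pick, the shape of each term suggests it: a/sqrt(a^2 + 8bc) = a Β· f(a^2 + 8bc), which is exactly the weighted-Jensen form.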

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/polite_linear_alg
πŸ“…︎ Jan 14 2021
🚨︎ report
convex upward function

https://preview.redd.it/u0kyv48n5lz51.png?width=627&format=png&auto=webp&s=8d7aa34901e79a9394de878355e35e7ccf97fa96

https://preview.redd.it/pgab793q5lz51.png?width=640&format=png&auto=webp&s=c2ef0ed054b34c9fce142727bed79c3bf31fa095

At the end, why do they have to mention that the same argument also works for c < x? I thought that in order to combine the two inequalities we only need the case x < c, so that we can always find some Ξ΄ with |c βˆ’ x| < Ξ΄ and |f(c) βˆ’ f(x)| < Ξ΅.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Moondoggy12345
πŸ“…︎ Nov 16 2020
🚨︎ report
"Logistic Q-Learning", Bas-Serrano et al 2020 (They introduce the logistic Bellman error, a convex loss function derived from first principles of MDP theory that leads to practical RL algorithms that can be implemented without any approximation of the theory.) arxiv.org/abs/2010.11151
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/gwern
πŸ“…︎ Oct 22 2020
🚨︎ report
Need a review for a convex function on R^1

I am trying to solve a simple problem using the "Convex" package (Convex.jl) and need some review to help me understand what I am doing wrong here.

Problem: I have to arrange a set of 1-dimensional vectors (y1, ..., ym) such that none overlaps with another, all vectors put together occupy minimum space on the number line, and every vector has to follow the interval sequence within the vector.

Example: the vector (y1) [1, 7, 9, 10] has the interval sequence (d) [0.0, 6.0, 2.0, 1.0]. In this case I expect the function to return [0.0, 6.0, 2.0, 1.0], since that is the minimum value given 0 ≀ x ≀ some max b, but it returns [6.0, 6.0, 6.0, 6.0].

using Convex, SCS   # SCS is one MOI-compatible solver choice

function get_convex(distance, b)
    # Helper that would enforce the consecutive gaps. Note: it is defined but
    # never called below, so these constraints never reach the model.
    function add_distance_constraints(d, x)
        for i in 1:length(d)-1
            v = x[i] - x[i+1]
            p.constraints += [v >= d[i+1]]
        end
    end
    s = length(distance)
    Ξ» = Variable(s)
    x = Variable(s)
    # or c' * x
    p = minimize(dot(distance, x))
    cobjects = []
    for i in 1:s-1
        cobject = Ξ»[i] * (distance[i+1] - distance[i])
        p.constraints += [x >= distance[i]]   # note: constrains the whole vector x
        push!(cobjects, cobject)
    end
    convex_combo = sum(cobjects)
    p.constraints += [x >= 0; x <= b; Ξ» >= 0; Ξ» <= 1; sum(Ξ») == 1; convex_combo == x]
    solve!(p, SCS.Optimizer)
    println(p.optval)
    print(distance)
    print(round.(evaluate(x); digits = 3))
end

get_convex([0.0, 6.0, 2.0, 1.0], 20)
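
For comparison, here is a minimal sketch of a more direct formulation (my reading of the intended model, with the gap constraints actually attached): make the positions themselves the variables, enforce each consecutive interval, and push everything toward the origin.

    using Convex, SCS

    function place_points(d, b)
        n = length(d)
        x = Variable(n)
        constraints = [x >= 0, x <= b]
        for i in 1:n-1
            # consecutive positions must be at least d[i+1] apart
            push!(constraints, x[i+1] - x[i] >= d[i+1])
        end
        p = minimize(sum(x), constraints)
        solve!(p, SCS.Optimizer)
        round.(evaluate(x); digits = 3)
    end

    place_points([0.0, 6.0, 2.0, 1.0], 20)   # expect positions β‰ˆ [0.0, 6.0, 8.0, 9.0]
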
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/adropintheriver
πŸ“…︎ Sep 07 2020
🚨︎ report
Let g(x) = (a βˆ’ x^2)^2, where a is a fixed real number. Confirm or reject the claim that g(x) is convex for any constant a. If you reject convexity everywhere, please specify for which values of x (as a function of a) g(x) is not convex.
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rotentom
πŸ“…︎ Jun 07 2020
🚨︎ report
Legendre transform - works even for functions which are not convex! desmos.com/calculator/glo…
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/arfamorish
πŸ“…︎ Jul 05 2020
🚨︎ report
For a vector x = [x1 x2]^T, let f(x) = (1 βˆ’ x1)^2 + 100(x2 βˆ’ x1^2)^2. Calculate the gradient and Hessian of f, and confirm or reject that f is convex everywhere in R^2. If the function is not convex everywhere, specify with a plot the region where convexity does not hold.

I'm new to vector calculus and I'm trying to solve this problem. How do I approach it?
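
One concrete way in (a sketch, assuming ForwardDiff.jl): compute the gradient and Hessian with automatic differentiation and inspect the Hessian's eigenvalues at a few points; any point with a negative eigenvalue disproves convexity everywhere.

    using ForwardDiff, LinearAlgebra

    f(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2   # the Rosenbrock function

    g = ForwardDiff.gradient(f, [0.0, 1.0])
    H = ForwardDiff.hessian(f, [0.0, 1.0])
    println(eigvals(H))   # one eigenvalue is -398 < 0, so f is not convex at (0, 1)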

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rotentom
πŸ“…︎ Jun 07 2020
🚨︎ report
ELI5: What are convex and non-convex functions, and how are they used in ML optimization problems (such as gradient descent)?

Thanks in advance for any answers.
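
To make the distinction concrete, here is a tiny sketch (made-up functions): gradient descent on a convex function reaches the same minimum from any start, while on a non-convex function the answer depends on where you begin.

    # Plain gradient descent with a fixed step size (a toy sketch).
    function descend(βˆ‡f, x; Ξ· = 0.01, steps = 10_000)
        for _ in 1:steps
            x -= Ξ· * βˆ‡f(x)
        end
        x
    end

    # Convex example: f(x) = x^2 has a single minimum at x = 0.
    βˆ‡fconvex(x) = 2x
    println(descend(βˆ‡fconvex, 5.0), " ", descend(βˆ‡fconvex, -5.0))        # both β‰ˆ 0

    # Non-convex example: f(x) = x^4 - 3x^2 has minima at x = Β±sqrt(3/2).
    βˆ‡fnonconvex(x) = 4x^3 - 6x
    println(descend(βˆ‡fnonconvex, 0.5), " ", descend(βˆ‡fnonconvex, -0.5))  # β‰ˆ Β±1.22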

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/19Summer
πŸ“…︎ Mar 05 2020
🚨︎ report
For a vector v = [v1 v2]^T, let p(v) = (1 βˆ’ v1)^2 + 100(v2 βˆ’ v1^2)^2. Calculate the gradient and Hessian of p, and confirm or reject that p is convex everywhere in R^2. If the function is not convex everywhere, please specify with a plot the region where convexity does not hold.

Suggested approach: compute the Hessian H_p(v). The function p(v) is convex around a point v if the eigenvalues of H_p are both positive at v. On a coarse [v1, v2] grid, compute the minimum eigenvalue. Look for regions where the minimum eigenvalue is < 0. Produce a contour plot of the minimum eigenvalue in that region, if you notice it. Carefully label your axes.
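
A sketch of that grid scan in Julia (assuming ForwardDiff.jl; the grid ranges are made up):

    using ForwardDiff, LinearAlgebra

    p(v) = (1 - v[1])^2 + 100 * (v[2] - v[1]^2)^2

    # Minimum eigenvalue of the Hessian at v; negative values flag non-convexity.
    min_eig(v) = minimum(eigvals(ForwardDiff.hessian(p, v)))

    v1s, v2s = -2:0.05:2, -1:0.05:3
    vals = [min_eig([v1, v2]) for v2 in v2s, v1 in v1s]
    println(count(<(0), vals), " of ", length(vals), " grid points are non-convex")
    # Feeding vals to e.g. Plots.jl's contour(v1s, v2s, vals) gives the requested plot.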

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rotentom
πŸ“…︎ Jun 22 2020
🚨︎ report
ConVex vs. conCave function

A convex function looks sort of like the letter V around the point where its derivative is zero. Concave functions can be remembered as "the other type of function", or by visualizing tipping the letter C over a bit.

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/another-wanker
πŸ“…︎ Oct 10 2019
🚨︎ report
Suggestions for solving maximization of convex function

Hello members, how do you solve the maximization of a convex function that is linearly constrained, with bounds on the variables, using Lagrangian relaxation? Thanks!

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/oakdoc1
πŸ“…︎ May 03 2020
🚨︎ report
What does the global minimum of a non-convex loss function look like?

For LeNet trained on MNIST with the lowest possible loss (the global minimum),

  β€’ What would the test error rate look like? Is there a benchmark for the best possible performance?
  β€’ Can we achieve the global minimum on non-convex loss functions for a classification task with a minimum number of parameters? Or, conversely, how does adding more parameters to a NN help with this?
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/liqui_date_me
πŸ“…︎ Dec 07 2019
🚨︎ report
Concave and Convex Functions With Second Derivatives

For the following functions, determine whether they are convex or concave. Perhaps some are both; which ones? Lastly, what is the sign of the second derivative for each?

a. y = 4 - 4x + x^2

b. y = 6x^(1/2), 0 < x < infinity

c. y = 18 = 12x - 6x^2 + x^3

I don't quite understand the question.
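
For what it's worth, part (a) worked out may clarify what's being asked: for y = 4 βˆ’ 4x + x^2 we get y' = βˆ’4 + 2x and y'' = 2 > 0 for all x, so the function is convex everywhere; the constant positive sign of the second derivative is exactly what certifies convexity.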

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Mar 04 2019
🚨︎ report
[R] Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond arxiv.org/abs/1906.11985
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/nimit_s
πŸ“…︎ Jul 01 2019
🚨︎ report
Need help proving a function is strictly convex[Maths applied to economics]

Hi! I'm currently struggling with part of a problem. I need to evaluate a function and then prove that it is strictly convex. I've been shown this way of proving convexity, but I've never seen an actual example with a concrete function, so I have no idea how I'm supposed to apply it: f(Ξ»x1 + (1βˆ’Ξ»)x2) < Ξ»f(x1) + (1βˆ’Ξ»)f(x2) for all x1 β‰  x2 and Ξ» ∈ (0, 1).

My function is (Ξ΄x1 ^(-r-1))/((1- Ξ΄)x2 ^(-r-1))

I guess I have no choice but to prove it using the lambda definition written above, since my Hessian matrix would be way too messy to easily tell the convexity from. I need help knowing just how I'm supposed to proceed in order to prove it this way. Thanks in advance for your time!
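
As a sanity check before attempting the proof (it can refute convexity but never prove it), you could sample the inequality numerically. A sketch with made-up parameter values Ξ΄ = 0.5, r = 0.5:

    Ξ΄, r = 0.5, 0.5                     # hypothetical values; substitute your own
    f(x) = (Ξ΄ * x[1]^(-r - 1)) / ((1 - Ξ΄) * x[2]^(-r - 1))

    # Count random violations of f(Ξ»x1 + (1-Ξ»)x2) < Ξ»f(x1) + (1-Ξ»)f(x2).
    violations = count(1:100_000) do _
        x1, x2 = rand(2) .+ 0.1, rand(2) .+ 0.1   # stay in the positive orthant
        Ξ» = rand()
        f(Ξ» .* x1 .+ (1 - Ξ») .* x2) >= Ξ» * f(x1) + (1 - Ξ») * f(x2)
    end
    println("violations: ", violations)   # any violation means not convex for these Ξ΄, r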

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/customalibi
πŸ“…︎ Feb 14 2019
🚨︎ report
Projected Bellman error in linear value function approximation is non-convex?

This is in reference to Sutton's Reinforcement Learning: An Introduction (second edition), chapter "Off-policy methods with approximation", section "Linear value-function geometry".

The linear value function space is a smaller subspace of the true value function space.

The value error projection (squared distance) seems to be convex. This is well known, I think.

Sutton suggests that the minimum projected Bellman error is not achievable via an iterative process. To me that means the geometry of this projected Bellman error is not convex, having many local minima.

Note: the Bellman error is actually the expected TD error, the direction you should take to get closer to the true value function.

The projected Bellman error is presented because, with function approximation, you cannot optimize exactly toward that target: the approximating function lives in a subspace. Even though the direction is right, you will not actually get there; you will only get as close as your subspace allows.

It seems questionable that such a thing would happen in the linear function approximation case. Or is it indeed the case that the linear function under this projected Bellman error is non-convex?

I'm asking here for confirmation of whether that is the case.

This is from Sutton's book, page 217 (second edition, final). Is it illegal to upload this, by the way?

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/phizaz
πŸ“…︎ Oct 28 2018
🚨︎ report
Why does a conCAVE function have a conVEX shape?

Concave generally means curved inwards ("like a cave"), while convex is the opposite. However, a "concave function" curves outwards (a "convex" shape), while a "convex function" curves inwards (a "concave" shape). What is the reason for the functions' nomenclature being the opposite of the common usage of these terms for shapes?

TIA!

EDIT (for visual clarity of what I mean): a concave shape: "U"; yet a "concave function" graphed is shaped like "∩". A convex shape: "∩"; yet a "convex function" on a graph: "U". For example, a "concave abdomen" in medicine means an abdomen which is compressed inwards, etc. I'm looking for the difference between the medical (or general layman's) nomenclature and what appears to be the mathematical terminology usage.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/CHL9
πŸ“…︎ Feb 01 2019
🚨︎ report
AP If the cost function of logistic regression is always convex, why do we need other methods?

For classification problems, we use logistic regression. But if the cost function of logistic regression is always convex, and we are guaranteed one local minimum which is also the global minimum, then what's the purpose of other classification algorithms?

Why not just use logistic regression on ALL classification problems?

  1. Am I right to understand that logistic regression will always reach the global minimum?

  2. Besides performance issues, since logistic regression always reaches the global minimum, will using other methods (artificial neural networks, k-nearest neighbours, support vector machines) not provide a better model?
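
For reference, the convexity claim itself can be sanity-checked: the Hessian of the logistic negative log-likelihood is Xα΅€DX with D = diag(p(1βˆ’p)), which is positive semidefinite. A small sketch in Julia with random data:

    using LinearAlgebra

    n, d = 100, 3
    X, w = randn(n, d), randn(d)
    Οƒ(z) = 1 / (1 + exp(-z))
    p = Οƒ.(X * w)

    # Hessian of the logistic negative log-likelihood at w.
    H = X' * Diagonal(p .* (1 .- p)) * X
    println(minimum(eigvals(H)))   # β‰₯ 0 (up to roundoff): the loss is convex in w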

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/BeatriceBernardo
πŸ“…︎ Feb 14 2018
🚨︎ report
Why is a function labelled concave when the set of points under it is convex?

I know it's an arbitrary naming thing, and that you can say the set of points above a convex function is convex, but this has always been unintuitive to me and has gnawed at me.

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/sumant28
πŸ“…︎ Jun 18 2018
🚨︎ report
We want to minimise a convex function f wrt g. Is it equivalent to minimising f + g?

There is f(x), which is convex. There is g(x), which is also convex. I want to minimize f wrt g.

Is this problem equivalent to minimizing f(x) + g(x)?

If yes, can you provide me a proof? The way I know of solving constrained minimization is to introduce Lagrange multipliers, equate the gradients of f and g, and then solve. But why is it solved this way?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/irecebu
πŸ“…︎ Nov 03 2018
🚨︎ report
Cost Function for Linear and Non-Convex Production Function Help.

I have done all the work deriving the input requirement solutions for both the linear and the non-convex-to-the-origin cases.

  1. Linear case: we have either 1 of 2 corner solutions or a set of solutions, depending on whether the MRTS is <, =, or > the price ratio.

  2. Non-convex case: we have 1 of 2 corner solutions, and both can be solutions, depending on whether the MRTS is <, =, or > the price ratio.

My problem is that, when articulating the cost function in a shortened form, they are both given like this.

https://imgur.com/a/M9sedGA

Can someone explain to me the meaning of this form? Because:

  1. the solution could be anywhere on the line.

  2. the solution could be at either corner.

Descriptively they are different, but they are given in the same format.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/MedStudent-96
πŸ“…︎ Oct 10 2018
🚨︎ report
AP If the cost function of logistic regression is always convex, why do we need other methods? reddit.com/r/learnmachine…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/BeatriceBernardo
πŸ“…︎ Feb 14 2018
🚨︎ report
Methods of approximating singular (convex) functions.

Let's say I have a convex function f : K β†’ R, where K is a convex subset of R^2. Furthermore, f blows up near the boundary of K, and the rate depends on exactly where you are on the boundary.

Are there any "nice" methods/tricks for numerically approximating such functions? It is quite important that the singularity is preserved. Ideally the approximation should be quite smooth, at least C^4, with easy-to-compute derivatives. In a perfect world the approximation would also be convex, but as long as it isn't too crazy, this shouldn't be a problem.

I had thought of approximating it by something of the form P(log(Q(x))), where P and Q are polynomials to be determined by some kind of least-squares argument (I know the singularity is roughly logarithmic), but this seems totally infeasible to any degree of accuracy.

For what it's worth, I can compute the function and its derivative relatively easily at a single point.

Thanks in advance for any and all help! This is one of those questions that I'm sure has a well-researched answer if you know the right keywords.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/JJ_MM
πŸ“…︎ Jan 21 2019
🚨︎ report
Is the following function concave or convex?

[x^(-1/4)+y^(-1/4)+z^(-1/4)]^(-1/2)

x^(1/4)y^(1/2)z^(1/8) + ln(y^(2)z) βˆ’ e^(x^(2)+y^(2)+z^(2))

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/bricoleor
πŸ“…︎ Feb 15 2020
🚨︎ report
Minimise a convex function wrt another convex function

Let us say I have f(x), which is convex, and I have to minimize it over a convex set G. We define G(x) to be an indicator function where

G(x) = 0, if x is in G,

G(x) = Inf, if x is not in G.

Thus G(x) is convex.

Can I simplify the problem to minimising f(x)+G(x)?

If yes, can you point me to its proof?
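
For what it's worth, the equivalence is essentially by definition: f(x) + G(x) equals f(x) when x is in G and +∞ otherwise, so

    inf over all x of [f(x) + G(x)] = inf over x in G of f(x),

and the two problems have the same minimizers (provided G is nonempty). Any convex analysis text covers this construction under "indicator function".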

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/irecebu
πŸ“…︎ Nov 03 2018
🚨︎ report
