A list of puns related to "Multivariate Polynomial"
The polynomial to factor is 6x^6 + 3xy^7 + 3xy^5 - 18. I tried reducing everything modulo 5 to get a monomial and then applying Eisenstein's criterion with p=3. Could this be a correct method? I was given a hint: there exists a homomorphism between Z[y] and Z[i] with y^2 + 1 in its kernel. So I tried reducing modulo 2 to get xy^5 * (y^2+1), but I am not sure where to go next.
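A sketch of how the hint can be put to work (only a first step, not a full factorization): group the middle terms to expose the factor y^2 + 1, which the map y ↦ i kills.

\[
6x^6 + 3xy^7 + 3xy^5 - 18 \;=\; 3xy^5\,(y^2 + 1) + 6\,(x^6 - 3)
\]

Under \(\varphi:\mathbb{Z}[x,y]\to\mathbb{Z}[i][x]\) with \(\varphi(y)=i\), the middle terms cancel (since \(i^5 = i\) and \(i^7 = -i\)), so \(\varphi\) sends the whole polynomial to \(6(x^6-3)\), and \(x^6 - 3\) is exactly where Eisenstein with p = 3 applies.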
I'm looking for a library that can compute the roots of a system of multivariate polynomial equations, something like Singular's solve or PHCpack.
That is, it should take a system of m polynomial equations in n variables (with m ≥ n), with real coefficients, and be able to check whether the associated variety is 0-dimensional, and, if so, compute all the roots (I'm only interested in the real roots).
Unfortunately, neither of the above libraries currently has Haskell bindings.
The Möller–Hillebrand algorithm used by Singular seems reasonably straightforward to implement given a basic set-up for commutative algebra (Gröbner bases, ideal dimension, ...), so if any good solution exists for that I'd also be interested.
Edit: I've just come across the computational-algebra package, which seems to have the ability to solve such zero-dimensional systems (see Algebra.Algorithms.ZeroDim). I'll give that a try!
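For readers who mainly want to see the workflow in question (Gröbner basis, zero-dimensionality check, root extraction), here is a sketch of the same pipeline in Python with SymPy; it only illustrates the idea, it is not the Haskell solution being asked for.

import sympy as sp
from sympy.solvers.polysys import solve_poly_system

x, y = sp.symbols("x y")
# A zero-dimensional system: finitely many solutions over C.
system = [x**2 + y**2 - 1, x - y]
# solve_poly_system works via Groebner bases and raises
# NotImplementedError if the system is not zero-dimensional.
sols = solve_poly_system(system, x, y)
# Keep only the real roots, as in the post.
print([s for s in sols if all(v.is_real for v in s)])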
Hello everyone!
For my master thesis I am investigating the association between adulthood weight gain, BMI, waist circumference, waist-hip ratio, waist-height ratio and colorectal tumors (colorectal cancer and colorectal adenoma) in people with Lynch syndrome. To investigate this I will fit a Cox proportional hazards regression model with these exposures categorised. But I will also fit it with the continuous exposures, using a multivariable fractional polynomial (mfp) model with Cox regression. However, there are people from the same family in my dataset, so I need to use robust variance estimates. I was able to do that for the categorised analysis by clustering the participants based on their family numbers (I used this document for my script: https://www.rdocumentation.org/packages/survival/versions/3.2-10/topics/coxph). So, here is an example of my script where I have BMI as exposure with 2 categories and the covariates are based on a DAG:
ph_finalbminew1.2 <- coxph(Surv(ftime_CRT, CRT_afterstart) ~ BMIcatnew + age + totalindex + SumOfkCal + ultraprocessed + fruit_vegetables + fish + sex, cluster = famnr, data = df_mbulbaai)
This script works. But when I try to add the cluster term in the mfp model, it doesn't work.
Script trials for the mfp model:
df_bmi <-mfp(Surv(ftime_CRT, CRT_afterstart)~ fp(BMIcenter, df=4)+fp(SumOfalcohol, df=4)+fp(SumOfkCal, df=4)+fp(fruit_vegetables, df=4)+fp(totalindex, df=4)+ fp(red_pmeat, df=4)+ fp(SSBs, df=4)+fp(ultraprocessed, df=4)+fp(agecenter, df=4)+fp(fish, df=4)+SMOKER+educat_both+sex, cluster = famnr, family = cox, data=df_mbulbaai, verbose=TRUE)
R says: unused argument (cluster = famnr)
df_bmi <-mfp(Surv(ftime_CRT, CRT_afterstart)~ fp(BMIcenter, df=4)+fp(SumOfalcohol, df=4) + fp(SumOfkCal, df=4) + fp(fruit_vegetables, df=4) + fp(totalindex, df=4) + fp(red_pmeat, df=4) + fp(SSBs, df=4)+ fp(ultraprocessed, df=4) + fp(agecenter, df=4) + fp(fish, df=4)+ SMOKER + educat_both + sex, cluster(famnr), family = cox, data=df_mbulbaai, verbose=TRUE)
R says: Error in cluster(famnr) : object 'famnr' not found
Who could help me solve this problem? Is it even possible to do this in R?
Thank you for checking. I appreciate it!
I was having fun implementing polynomials in Rust (https://github.com/apelloni/mpolynomial) for another project.
I've implemented them with all the features that I need, taking into account that many of the polynomials will be sparse.
Because of this, I end up having to use a binary search, which is the main bottleneck of the whole run. The search is used when I want to add a coefficient to my polynomial through this function:
/// Add a new coefficient to the polynomial, keeping the list of power
/// vectors sorted so that lookups can use binary search.
pub fn add(&mut self, pows: &[u8], coeff: T) -> bool {
    // Copy the requested powers into the cached key buffer to avoid
    // allocating a fresh Vec on every call.
    for (pow, c_pow) in pows.iter().zip_eq(self.cache.coeff_pows.iter_mut()) {
        *c_pow = *pow;
    }
    match self.powers.binary_search(&self.cache.coeff_pows) {
        // Monomial already present: accumulate into its coefficient.
        Ok(pos) => {
            self.coeffs[pos] = self.coeffs[pos] + coeff;
            false
        }
        // New monomial: insert at the position that keeps `powers` sorted.
        Err(pos) => {
            self.powers.insert(pos, self.cache.coeff_pows.clone());
            self.coeffs.insert(pos, coeff);
            // Track the maximum power seen in each variable.
            for (pow, max_pow) in pows.iter().zip_eq(self.max_rank.iter_mut()) {
                if *pow > *max_pow {
                    *max_pow = *pow;
                }
            }
            true
        }
    }
}
My question is now the following: if I were to switch to a dense polynomial structure, where each position is determined by the rank in each variable, would I get a significant speedup? Is there something I could easily change in the structure to avoid this slowdown?
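For what it's worth, the dense layout described above amounts to mixed-radix indexing: with maximum degree d_k in variable k, the coefficient of the monomial with exponents (i_1, ..., i_m) lives at

\[
\operatorname{idx}(i_1,\dots,i_m) \;=\; \sum_{j=1}^{m} i_j \prod_{k<j} (d_k + 1),
\]

so the O(log N) binary search becomes an O(m) index computation, at the cost of storing all \(\prod_k (d_k+1)\) coefficients, zeros included. Whether that is a net win depends on how sparse the polynomials really are.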
Hi Everyone,
I'm new to Mathematica and I have been working on some summer research revolving mostly around solving increasingly large systems of equations (like the one in my code below), but things seem to get stuck on this case, and I'm not sure if I just have too many variables (7) or equations (17) for it to work quickly. I know that there should be solutions, based on some other, less computational methods that become infeasible quickly.
I also know the norms of all the variables, so I was thinking that taking that into account might speed things up, but I'm not sure how. Knowing the norms lets me turn the conjugates into rational expressions, but in the simpler cases that actually slowed things down, and it doesn't seem to work in this case either, so I've left it in the form that ran the fastest in simpler cases. For reference, the next most complicated case is 12 equations in 4 variables and takes less than half a second to run. Please let me know if there's a way to speed this up! Thanks!
Remove["Global`*"]
nu = 9;
delta = (nu + Sqrt[nu^2 + 4])/2;
matrixZ9 =
1/(delta - 1) {{delta - 2, -1, -1, -1, -1, -1, -1, -1, -1}, {-1, -1,
a, b, c, d, c, b, a}, {-1, Conjugate[a], -1, b, e, f, f, e,
b}, {-1, Conjugate[b], Conjugate[b], -1, c, f, g, f, c}, {-1,
Conjugate[c], Conjugate[e], Conjugate[c], -1, d, f, f, d}, {-1,
Conjugate[d], Conjugate[f], Conjugate[f], Conjugate[d], -1, c, e,
c}, {-1, Conjugate[c], Conjugate[f], Conjugate[g],
Conjugate[f], Conjugate[c], -1, b, b}, {-1, Conjugate[b],
Conjugate[e], Conjugate[f], Conjugate[f], Conjugate[e],
Conjugate[b], -1, a}, {-1, Conjugate[a], Conjugate[b],
Conjugate[c], Conjugate[d], Conjugate[c], Conjugate[b],
Conjugate[a], -1}};
resultsEq1 =
Table[Sum[
matrixZ9[[Mod[i + l, nu] + 1, h + 1]]*
Conjugate[matrixZ9[[Mod[i, nu] + 1, 1 + h]]], {i, 0, nu - 1}] -
KroneckerDelta[l, 0] + KroneckerDelta[h, 0]/delta, {l, 0,
nu - 1}, {h, 0, nu - 1}];
resultsEq1 = DeleteDuplicates[Flatten[resultsEq1]];
resultsEq1 = Simplify[resultsEq1]
Reduce[resultsEq1 == 0, {a, b, c, d, e, f, g}, Complexes]
I had originally posted this question here on SO, because I had to include some mathematical equations.
I want to use power series to approximate some PDEs (see this). As a first step, I need to generate symbolic multivariate polynomials from a numpy ndarray.
Consider the polynomial below:
\[
P_1 \;=\; \sum_{i_1=0}^{d_1-1}\cdots\sum_{i_m=0}^{d_m-1} a_{i_1,\dots,i_m}\,\prod_{j=1}^{m} x_j^{i_j}
\]
I want to take an m-dimensional ndarray of shape D = [d_1, ..., d_m], where the d_j are non-negative integers, and generate a symbolic multivariate polynomial in the form of a symbolic expression. The expression consists of monomials of the form
\[
a_{i_1,\dots,i_m}\, x_1^{i_1} \cdots x_m^{i_m}.
\]
For example, if D = [2, 3] the output should be:
a_{0,0} + a_{1,0} x_1 + a_{0,1} x_2 + a_{1,1} x_1 x_2 + a_{0,2} x_2^2 + a_{1,2} x_1 x_2^2
For this specific case I could nest two for loops and add the expressions, but I don't know what to do for a D of arbitrary length. If I could generate the D-dimensional ndarrays A (coefficients) and X (monomials) without for loops, then I could use np.sum(np.multiply(A, X)) as a Frobenius inner product to get what I need.
I would appreciate it if you could help me figure out how to do this in SymPy. Thanks in advance.
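A minimal sketch of one way to do it (names are mine; it assumes the coefficient array A has shape D, with symbolic entries allowed): loop once over the Cartesian product of the index ranges instead of nesting m loops.

import itertools
import numpy as np
import sympy as sp

def make_poly(A):
    A = np.asarray(A)
    m = A.ndim
    xs = sp.symbols(f"x1:{m + 1}")  # x1, ..., xm
    expr = sp.Integer(0)
    # One loop over all index tuples (i_1, ..., i_m), whatever m is.
    for idx in itertools.product(*(range(d) for d in A.shape)):
        term = sp.sympify(A[idx])
        for xj, ij in zip(xs, idx):
            term *= xj**ij
        expr += term
    return expr

# Example with symbolic coefficients and D = [2, 3]:
A = np.array([[sp.Symbol(f"a_{i}{j}") for j in range(3)] for i in range(2)],
             dtype=object)
print(sp.expand(make_poly(A)))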
I am working on Polynomial features:
pr=PolynomialFeatures(degree=2)
pr
Z_pr=pr.fit_transform(Z)
Z.shape #Output (201,4)
Z_pr.shape #Output (201,15)
Can I know how to print Z_pr so that it looks like the expression below?
yhat = a + b1*X1 + b2*X2 + b3*X1*X2 + b4*(X1)^2 + b5*(X2)^2
I want to see what Z_pr looks like with its 15 features, so please help me with it!
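A sketch of how to see the column labels (assumes scikit-learn ≥ 1.0, where PolynomialFeatures has get_feature_names_out; older versions call it get_feature_names):

import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

Z = np.random.rand(201, 4)  # stand-in for the real data
pr = PolynomialFeatures(degree=2)
Z_pr = pr.fit_transform(Z)

# Label the 15 columns: 1, X1..X4, then all squares and cross terms.
names = pr.get_feature_names_out(["X1", "X2", "X3", "X4"])
print(pd.DataFrame(Z_pr, columns=names).head())

Each column of Z_pr is one term of the expression above (the first column is the constant 1), so any fitted coefficients line up with these names.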
For a project I'm doing (in Java), I need to be able to factor multivariate polynomials (shouldn't be more than 5 variables or total degree about 15 with all coefficients less than about 100). I know there exist some algorithms (using Cantor-Zassenhaus over some different finite fields and interpolating seems to be the most common approach), but the logistics of the project require a self-contained implementation and so I'd like a simpler method (if one exists).
Does anyone know of a simple-to-implement method to factor multivariate polynomials? It doesn't need to be terribly fast, since the polynomials I'm dealing with are quite small.
I would like to create a polynomial fit for an 8-input, 1-output data set.
I have written the following code in Matlab to carry that out:
function [Coeff,R2,RMSE] = MultiLineRegression(x_var,y_var)
% This function carries out multivariate polynomial regression (MPR) analysis.
% The following stages are matrix manipulation to find the coefficients of
% the polynomial equation.
[n, p] = size(x_var);
nv = p;     % number of input variables
np = 2;     % polynomial order: x and x^2 terms for each variable
k = nv*np;
B = repmat(eye(nv), np, 1);
F = (1:1:np);
FT = transpose(F);
FTT = zeros(k,1);
for i = 1:nv
    for j = 1:np
        FTT(i+(j-1)*nv) = FT(j);
    end
end
% power_f lists, row by row, the exponent of each variable in each term
for i = 1:k
    for j = 1:nv
        power_f(i,j) = B(i,j)*FTT(i);
    end
end
for i = 1:nv
    power_f(k+1,i) = 0;   % final row of zeros gives the constant term
end
nt = size(power_f,1);
M = ones(n,nt);
for i = 1:nt
    for j = 1:p
        M(:,i) = M(:,i).*x_var(:,j).^power_f(i,j);
    end
end
Coeff = M\y_var;
y_calc = M*Coeff(:);
% Outputs coefficients in the order of the x terms, then the x^2 terms,
% with the last coefficient being the constant (intercept) term
Coeff_fact = transpose(Coeff);
s_var = norm(y_var - y_calc);
R2 = 1 - (s_var/norm(y_var-mean(y_var)))^2;
RMSE = sqrt(mean((y_var - y_calc).^2));
I have created my input matrix and output vector, but am getting stuck trying to find the equivalent functions for repmat and eye. I am trying to use Math.NET, as I have previously managed to create a plane-fitting algorithm using this package.
If anyone would be able to point me in the right direction, or even to code that already performs this task, it would be greatly appreciated.
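Not Math.NET, but for cross-checking a port it may help to see how little the Matlab function actually computes; a compact Python/numpy restatement (a sketch, with my own names):

import numpy as np

def multi_poly_fit(x_var, y_var, order=2):
    # Least-squares fit of y on x_j, x_j^2, ..., x_j^order plus a constant.
    n, p = x_var.shape
    # Columns: x_1..x_p, then x_1^2..x_p^2, ..., constant column last,
    # matching the ordering produced by power_f in the Matlab code.
    cols = [x_var**k for k in range(1, order + 1)]
    M = np.hstack(cols + [np.ones((n, 1))])
    coeff, *_ = np.linalg.lstsq(M, y_var, rcond=None)
    y_calc = M @ coeff
    resid = y_var - y_calc
    r2 = 1 - (np.linalg.norm(resid) / np.linalg.norm(y_var - y_var.mean()))**2
    rmse = np.sqrt(np.mean(resid**2))
    return coeff, r2, rmse

In Math.NET terms, the only pieces needed are building that design matrix column by column and one least-squares solve (e.g. via its QR decomposition); repmat and eye are just a roundabout way of writing down the exponent table.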
I'm writing a paper on bioinformatics, and am now stuck at a point where I need a method of optimizing a polynomial like the following:
[; P(r) = p_c^5 \times \frac{(1-p_c)^3}{8} \times p_d \times (1-p_w) \times (1-p_d-p_x)^{10} \times p_s^7 ;]
I do not have a strong background in mathematics. I have been reading up on optimization, and I know the basics, but in all the books I have read I can't find a suitable method that lets me find the optimal values of p_c, p_d, etc. while also ensuring that all of them stay bounded between 0 and 1 (as probabilities).
Does anyone know of any methods that I can apply here? I will eventually be implementing this into a program, and so ideally any techniques used will be easily generalised to any other instances of this problem (essentially just changing all the constants).
EDIT: I should mention that I am looking for the p_c, p_d, p_w, p_x, p_s that maximize P(r).
EDIT (complication): Breaking the function up is a great idea, though my issue now lies with the following general function:
[; P(r) = p_c^\alpha \times \frac{1}{2^\beta}(1-p_c)^\beta \times p_d^\gamma \times p_x^\zeta \times \lambda p_w^\eta (1-p_w)^{\gamma-\eta} \times (1-p_d-p_x)^\mu \times p_s^\tau \times (1-p_s)^\omega ;]
I would like to break that up into 4 parts:
[; P_1(r) = p_c^\alpha \times \frac{1}{2^\beta}(1-p_c)^\beta ;]
[; P_2(r) = p_d^\gamma \times p_x^\zeta \times (1-p_d-p_x)^\mu ;]
[; P_3(r) = \lambda p_w^\eta (1-p_w)^{\gamma-\eta} ;]
[; P_4(r) = p_s^\tau \times (1-p_s)^\omega ;]
Unfortunately P_2 and P_3 are related by that one constant (gamma, [; \gamma ;]). I'm not 100% sure that this prevents them from being separated, but if it does, it requires that I find the maxima of a 4D function, which is not something I like the look of... Even if it were separable, I'm not sure how to differentiate P_2 in order to find its maxima. Any thoughts?
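A sketch of the standard numerical route (assuming SciPy is available; variables ordered pc, pd, pw, px, ps, with the exponents from the first formula): maximize the logarithm of P(r), which turns the product into a sum, under box bounds on each probability and the constraint p_d + p_x < 1 implied by the (1 - p_d - p_x) factor.

import numpy as np
from scipy.optimize import minimize

def neg_log_P(p):
    pc, pd, pw, px, ps = p
    # log of P(r) = pc^5 * (1-pc)^3/8 * pd * (1-pw) * (1-pd-px)^10 * ps^7
    return -(5*np.log(pc) + 3*np.log(1 - pc) - np.log(8)
             + np.log(pd) + np.log(1 - pw)
             + 10*np.log(1 - pd - px) + 7*np.log(ps))

bounds = [(1e-9, 1 - 1e-9)] * 5  # keep every probability strictly in (0, 1)
cons = [{"type": "ineq", "fun": lambda p: 1 - p[1] - p[3] - 1e-9}]  # pd+px < 1
res = minimize(neg_log_P, x0=[0.5, 0.3, 0.5, 0.3, 0.5],
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x, np.exp(-res.fun))

The log transform also makes the separation question concrete: the log splits into additive pieces, and pieces that share no variables can be maximized independently. A shared constant such as gamma only fixes exponents; it does not couple the variables of P_2 and P_3.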
Hello reddit,
I need advice on how to solve a constrained minimization problem. I have a polynomial in many variables, which is basically a degree-4 polynomial in one variable and a degree-2 polynomial in all the other variables. I need to minimize this polynomial with respect to a set of constraints, which have the form of nonlinear inequalities.
Is there a deterministic method, which I could use to find the solution? I'd like something like nonlinear programming, but all the methods I know work only with convex functions.
Can you list any methods that you can think of in the comments? I know that I could probably use simulated annealing, but the set of constraints is difficult to handle when generating a new solution, and I would have no guarantee of finding the global minimum.
Thanks a lot in advance for all the ideas!
Edit: I should clarify the problem. First, it is a many-variable problem: by many I mean at most 2*20*19 + 3 = 763 variables, which makes an analytic solution infeasible. That's why I need to come up with a numerical solution.
The problem itself can be written in the form:
Cost = Sum(Sum(Sum(polynomial))) + Sum(Sum(Sum(polynomial))) + Sum(Sum(Sum(polynomial))) + Sum(Sum(Sum(polynomial))).
The cost function has a lower bound of 0, because the inner polynomials can be written as squares of polynomials.
There is just one variable which is shared between all the summands, and this is the variable in which the polynomial is of the fourth degree. In all the other variables, the polynomials are of the second degree. There are two other variables, each of which is shared between two of the triple sums. The remaining variables appear in only one of the triple sums.
The constraints are all inequalities. For each variable, I know the small interval in which its value must lie. I also know that certain sums and products of the variables must fall in given intervals.
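A sketch of how this shape of problem feeds into an off-the-shelf deterministic solver (assumes SciPy; the cost and constraint below are toy stand-ins for the real ones). This only finds local minima; for certified global minima of polynomials, sum-of-squares/moment relaxations (the Lasserre hierarchy) are the standard deterministic tool, though 763 variables is far beyond their usual reach.

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

n = 5  # toy size; the real problem has ~763 variables

def cost(w):
    # stand-in: a sum of squared polynomials, so it is bounded below by 0
    return (w[0]**2 - w[1])**2 + np.sum((w[1:] - w[:-1])**2)

bounds = [(0.0, 1.0)] * n  # each variable confined to its small interval
# interval constraint on a product of variables, as in the post
prod_con = NonlinearConstraint(lambda w: w[0] * w[1], 0.1, 0.5)

res = minimize(cost, x0=np.full(n, 0.5), bounds=bounds,
               constraints=[prod_con], method="trust-constr")
print(res.x, res.fun)

Since the cost is a sum of squares, its global lower bound of 0 gives a cheap certificate: if a local solver returns a feasible point with value near 0, that local minimum is in fact global.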
So I have a function f, a hyper-matrix S (n x n x n) of coefficients, and a vector variable w.
How can I cleanly represent the Hessian matrix for the following?
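Assuming f is the cubic form built from S (an assumption, since the formula itself did not survive the post), the Hessian has a clean closed form:

\[
f(w) = \sum_{i,j,k} S_{ijk}\, w_i w_j w_k
\quad\Longrightarrow\quad
(\nabla^2 f)_{ab} = \sum_{k} \big( S_{abk} + S_{akb} + S_{kab} + S_{bak} + S_{bka} + S_{kba} \big)\, w_k,
\]

which collapses to \(6 \sum_k S_{abk} w_k\) when S is symmetric under permutation of its indices.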
I know that, given an arbitrary finite set of coordinates in the (x,y) plane, we can construct polynomials P(x) = y which pass through all those points. How do we generalize this to higher dimensions: given a finite set of arbitrary 3-D coordinates, how can we construct multivariate polynomials P(x,y) = z which pass through all those points?
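A sketch of the standard generalization (plain numpy; the points and monomial basis are made up): pick a set of monomials x^i y^j, evaluate each monomial at each data point to form a Vandermonde-style matrix, and solve the linear system for the coefficients. With as many independent monomials as points, the surface passes through the points exactly.

import numpy as np

# toy data: points (x, y, z) the surface must pass through
pts = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 5]], dtype=float)
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

# monomial basis 1, x, y, xy -- four monomials for four points
M = np.column_stack([np.ones_like(x), x, y, x * y])
coef = np.linalg.solve(M, z)  # exact interpolation
print(coef)                   # P(x, y) = c0 + c1*x + c2*y + c3*x*y

The genuinely new subtlety compared to one variable is that the matrix can be singular for unlucky point configurations, so in more than one variable the monomial basis has to be chosen with the point set in mind.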
I have a survey with 2 input variables and an output. I want to test various degrees for the function and see which one has the highest R^2 value (i.e. maybe it's a multivariable cubic function, or maybe it's multivariable linear).
Unfortunately, I can't find any calculators online to do this (except for multivariable linear, but I want to test other degrees, including quartic). Could someone help? What's the formula for multivariable cubic/quadratic regression? I could possibly solve it myself then.
Thank you so much for whoever takes the time out to help :)
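For reference, the trick that reduces every one of these fits to ordinary linear regression: expand the two inputs into monomial features up to the chosen degree, then solve ordinary least squares. For degree 2 in inputs u and v:

\[
y \;\approx\; \beta_0 + \beta_1 u + \beta_2 v + \beta_3 u^2 + \beta_4 uv + \beta_5 v^2,
\qquad
\hat{\beta} = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y,
\]

where each row of X is (1, u, v, u^2, uv, v^2) for one survey response. Cubic or quartic fits just add the higher monomial columns, and R^2 is computed from the residuals exactly as in the linear case.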
(a) y^4 + (x^4 - x^2)
(b) y^2(6x-1) + (x^3 + x^2)
I think I solved it; I'd just like to know if this is fine.
(a) The polynomial is equal to y^4 + (x-1)(x+1)x^2. Since (x-1) is prime, by Eisenstein we see that (x-1) divides the constant term (as a polynomial in y) but not the leading coefficient, and (x-1)^2 doesn't divide the constant term. This implies it's irreducible over K[x][y], and therefore irreducible over K[x,y].
(b) Think of the polynomial as being in K[x][y]. Repeat the argument above with (6x-1), which we can do since it is prime. This implies it is irreducible.
Is the proof like this?
I just discovered a quick way to solve linear systems of equations on my Voyage 200 using the rref() function (which uses Gaussian elimination/row reduction and the Data/Matrix program), and I was hoping to quickly solve resistance, voltage and current calculations with the calculator. However, in parallel circuits the resistance calculation leads to a formula Req = (1/R1 + 1/R2 + 1/R3 + ...)^-1, which cannot be handled by Gaussian elimination alone. Is there a program for the Voyage 200/TI-89 or any other graphing calculator (or even a script or computer program I could use easily; Wolfram wouldn't do the job, as I am talking about dozens of both variables and equations), or some other way I could enter a bunch of equations with dozens of variables and solve for each variable? Thanks!
(An example of a system of equations I'd need to solve:
10x + 2y = 5z,   x^-1 + y^-1 = z^-1,   x^2/y = z
The problem is that the systems I would need to solve would involve some 20 equations in 30 or more variables.)
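A sketch of one computer-side option (Python with SymPy; the three-resistor example below is made up to show the pattern, since nonlinear terms like 1/R pose no problem for a symbolic or Newton-type solver):

import sympy as sp

R1, R2, Req = sp.symbols("R1 R2 Req", positive=True)
eqs = [
    sp.Eq(1/R1 + 1/R2, 1/Req),  # parallel resistance, no linearization needed
    sp.Eq(R2, 20),
    sp.Eq(Req, 12),
]
print(sp.solve(eqs, [R1, R2, Req], dict=True))  # [{R1: 30, R2: 20, Req: 12}]

For dozens of equations and variables, the exact solver can bog down; sp.nsolve(eqs, variables, initial_guess) runs Newton iteration instead and scales much better, at the price of needing a starting point and returning one solution at a time.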
Good day, the price is going up to 0.3 USDT.
ABCMint Second Foundation
Since 2018, ABCMint Second Foundation has been the first third-party organization that focuses on post-quantum cryptography research and technology and aims to help improve the ecology of ABCMint technology.
https://abcmintsf.com/exchange
What is ABCMint?
ABCMint is a quantum resistant cryptocurrency with the Rainbow Multivariable Polynomial Signature Scheme.
Cryptocurrencies and blockchain technology have attracted a significant amount of attention since 2009. While some cryptocurrencies, including Bitcoin, are used extensively in the world, these cryptocurrencies will eventually become obsolete and be replaced once quantum computers become available. For instance, Bitcoin uses the elliptic curve signature algorithm (ECDSA). If a bitcoin user's public key is exposed on the public chain, a quantum computer would be able to reverse-engineer the private key in a short period of time. It means that should an attacker decide to use a quantum computer to break ECDSA, he/she will be able to use the bitcoin in the wallet.
The ABCMint Foundation has improved the structure of the coin core to resist quantum computers, using the Rainbow Multivariable Polynomial Signature Scheme, which is quantum resistant, as the core. This is a fundamental solution to the major threat to digital money posed by future quantum computers. In addition, the ABCMint Foundation has implemented a new form of proof of arithmetic (mining), "ABCardO", which is different from Bitcoin's arbitrary mining. This algorithm is believed to be beneficial to the development of the mathematical field of multivariate cryptography.
Rainbow Signature - the quantum resistant signature based on Multivariable Polynomial Signature Scheme
Unbalanced Oil and Vinegar (UOV) is one of the oldest and most well-researched signature schemes in the field of multivariate cryptography. It was designed by J. Patarin in 1997 and has withstood more than two decades of cryptanalysis. The UOV scheme is a very simple, small and fast signature scheme. However, the main drawback of UOV is the large public key, which is not conducive to practical use on a blockchain.
The Rainbow signature is an improvement on the Oil and Vinegar signature which increased the efficiency of Unbalanced Oil and Vinegar…
Is there a name for these: polynomials of multiple (3 in this case) variables that can be solved for any of the three variables as a function of the other two?
For example:
x = c_0 + c_1y + c_2y^2 + c_3z + c_4z^2
can be solved for x(y,z), y(x,z) and z(x,y) with simple functional solutions. Once you start mixing the polynomials (y*z terms and above), there are constraints on the coefficients to keep it solvable for each of x, y and z.
An example of this would be:
x = c0 + (c1y + c2z)^2, where the y*z term is constrained by the y^2 and z^2 coefficients.
Any direction on what to search for would be appreciated!
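For instance, the mixed example still unwinds explicitly (assuming c1 and c2 are nonzero and x >= c0):

\[
x = c_0 + (c_1 y + c_2 z)^2
\quad\Longrightarrow\quad
y = \frac{\pm\sqrt{x - c_0} - c_2 z}{c_1},
\qquad
z = \frac{\pm\sqrt{x - c_0} - c_1 y}{c_2},
\]

so each variable is an elementary function of the other two, which is the property the question is trying to name.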
The problem of whether or not a multivariable polynomial has an integral root is undecidable because one cannot establish a maximum value for the root: one can never say with certainty that the machine will halt on a given input, since it never runs out of possible integers to check. Correct?
Can any undecidable problem be viewed in such a manner, as "never running out of possibilities to check for a given condition"? I can think of sketches of a few other ones which also go like that. Has this been formalised somewhere?
What about the halting problem?
Would the third-degree Taylor polynomial of a function of 2 variables be something like this:
\[
p(x,y) = f(a,b) + (hD_x + kD_y)f\big|_{(a,b)} + (hD_x + kD_y)^2 f\big|_{(a,b)} + (hD_x + kD_y)^3 f\big|_{(a,b)}
\]
With the f|_(a,b) notation, what I mean is that I evaluate at (a,b) after the differentiations are done, with h = (x-a) and k = (y-b).
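For comparison, the standard form carries a 1/n! on each term:

\[
p(x,y) = \sum_{n=0}^{3} \frac{1}{n!}\,(hD_x + kD_y)^{n} f\Big|_{(a,b)},
\]

i.e. the quadratic term picks up a factor 1/2 and the cubic term a factor 1/6; apart from those factors, the operator expression above is exactly the right shape.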
[Edit 1:15: For anyone who noticed a hiccup in the outputs 5 minutes before kick-off, it was just a referencing problem-- I haven't heard from anyone about it, but sorry if there was confusion!]
[Edit 2: I think I hate this season. Not that other sources got any more accurate, but there have just been too many surprises these last 4 weeks! Chiefs/Titans, Bills/Jax... So not cool.]
Week 9 was a rough ride for Vegas expectations-- So it's nice to be back to a fresh week of delusion, convincing ourselves of the comfort in those nice projected points that have yet to disappoint us.
Anyway, here are the week 9 charts!
Many of you saw from my future projections that week 9 would be a rough one for D/ST streaming-- only 5 decent options. Hopefully you snagged one ahead of time. The tough week means there's more risk of going negative-- roster carefully.
[By the way, Accuracy report here.]
Now to this week's topic, based on request from some of you:
Some of you requested that I describe what "overfit" is. So here's... really, probably... more than you wanted to know! Anyway, I hope like 3 of you out there will get something from this discussion, or at least get some insight into why my models got updated after week 6.
Overfit is the issue I've been focused on during October, putting virtually all my spare hours into it alone, not to mention past years. So it is an issue I treat seriously, to try to deliver the best possible recommendations. I strongly believe my efforts to address overfit will make the second half of the season even better.
What is overfit? In my own words, an overfit is a feature in a model that causes wrong predictions-- despite the fact that the same feature describes past trends well.
Are there different kinds of overfit? Yes, I'm glad you asked. I'm not dealing with the classic example of polynomial overfit (when you try to fit a trend to y = a + bx + cx^2 + ... + zx^25 when y = mx + b would have sufficed). The type of overfit I'm weeding out is of the "multivariable" kind. When there are more variables than just "x", the issues of overfit get tricky.
Example? Here's an easy example…
I had a math course this semester on Wilfred Kaplan's Advanced Calculus book and I found the book simply fascinating. I'd say it can be the single main math reference for physics majors. It packs all the math methods into one book, while still being rigorous. The first half is a stellar, thorough course on multivariate, vector, and (fairly basic but still hard) tensor analysis and differential geometry, while the second half is about Fourier series and transforms, things like Bessel functions, Hermite and Legendre polynomials, complex variables, and PDEs. Basically all the physics and engineering math. One course cannot cover the whole book but I felt very motivated to go beyond the course and study the other parts of the book. And best of all, it's pretty small and light, and since it's a bit old, it's inexpensive - especially old versions. A PDF can be found on Google for free. I'd say the prereq is calc III and linear algebra, but if you're rusty on those things, this book will fix that. It has a first chapter reviewing LA. A strong background will really help squeeze everything out of this book, but you might not need everything, and it will surely give you all you need in your own work or study.
It's great at all difficulty levels: both for an ordinary engineer and for a very talented physics/math student. On one hand, it goes out of its way to be accessible in the beginning, devoting a chapter to developing linear algebra from the ground up, and starting Chapter 2 by defining what partial derivatives are. The language is also cordial and easy to read. On the other hand, there's plenty of extremely difficult and rigorous material to crack at, particularly the stuff on tensors, which would challenge a prodigy. Almost every theorem is proven. I had already taken real analysis before this course and still this book was a big challenge.
If you're a professor or a teaching grad student, I highly recommend teaching a course on this book if you can.