A list of puns related to "Euclidean Vector"
Note: I know that vectors are abstract mathematical objects; I have taken Linear Algebra. I am only concentrating on Euclidean vectors, with their usual "tip to tail" addition.
So what is a Euclidean vector? Is it the n-tuple that describes it? Or is it simply an arrow pointing in space, i.e. a geometric object that is completely independent of the n-tuple from R^n that describes it? Or are these two the same?
From my understanding, the n-tuple (x1, x2, ..., xn) that describes a vector in an n-dimensional Euclidean space is simply a collection of numbers and nothing more. It is not necessarily associated with a geometric arrow pointing in space; n-tuples of R^n can be used as "lists" of numbers for other things as well, for example the coefficients of a polynomial. After all, a vector in a Euclidean space can be represented in polar coordinates or (using a different basis) as a different n-tuple, and it's still the same "arrow pointing in space", i.e. the same geometric object, just described by a different system.
So that leads me to conclude that n-tuples of R^n are lists of numbers (I know they are vectors too in the abstract sense of linear algebra) and Euclidean vectors are just geometric objects, detached from the coordinate system we choose.
So an n-tuple and an arrow in space are two different types of vectors, but one is used to describe the other through a Cartesian coordinate system.
Is that correct? I would be grateful if someone could explain this to me clearly and thoroughly
Edit : When I say euclidean vector I mean the geometric object that is an arrow pointing in space
Please read the whole thing till the end if you have the time
Hello everyone, freshman mech engineer here. I'm sure this has been asked countless times but I have looked everywhere on the internet and couldn't find a satisfactory answer to the following questions :
If position vectors can represent points without any information being lost, why does a Euclidean space need to have points and not just position (and displacement) vectors? (The only reason I can think of is to have a point of reference.)
The solution to a linear system of equations with n variables is an n-tuple (x1, x2, ..., xn). Is this a position vector or a point in R^n? (Personally I think it's a vector, since you can add two solutions of different linear systems to get a new solution for the sum of those two systems. You wouldn't be able to do that with points, since points cannot be added to each other.)
...can you add two points? I always thought you added their position vectors and there is no addition between points
Someone told me "Linear Algebra is the study of points through vectors in some way". Is that correct?
Why do we say that transformations in the context of linear algebra act on position vectors whereas in multi variable calc they act on points?
So in the end why do we need points except for just reference? If these questions seem stupid please bear with me
I know vectors are displacements, and that two vectors with the same magnitude and direction are the same, so please don't give an answer based on that observation because it really won't help me. I am looking for something deeper (if there is anything) behind all this, by which I mean what mathematicians were thinking when they created these mathematical structures. My teacher told me that as an engineer I shouldn't care about that stuff as long as it works on paper, but I have always hated that point of view.
Any answer would be greatly appreciated
I have a complex matrix D, where D = Dh + iDv. I also have a column vector X
I need to show that ∇ (wrt x) of ||Dx||^(2) = 2*Real(D^(H) * D)x = 2*(Dh^(T)Dh + Dv^(T)Dv)x
Does this make sense? I'm having trouble understanding how these are derived; my linear algebra knowledge is lacking.
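A quick numerical sanity check of that identity (note it needs the conjugate transpose D^H, and the factor of 2 carries through to both closed forms). The matrices and sizes below are arbitrary, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Dh = rng.standard_normal((4, 3))
Dv = rng.standard_normal((4, 3))
D = Dh + 1j * Dv
x = rng.standard_normal(3)  # real column vector

# f(x) = ||D x||^2; for real x its gradient is 2*Re(D^H D) x,
# and Re(D^H D) = Dh^T Dh + Dv^T Dv (the cross terms are imaginary).
grad_formula = 2 * (Dh.T @ Dh + Dv.T @ Dv) @ x
grad_conj = 2 * np.real(D.conj().T @ D) @ x

# central finite-difference check of the gradient
eps = 1e-6
f = lambda v: np.linalg.norm(D @ v) ** 2
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(3)])

print(np.allclose(grad_formula, grad_conj))         # the two closed forms agree
print(np.allclose(grad_formula, grad_fd, atol=1e-4))
```

Both prints should show True, confirming the two closed forms and the finite-difference gradient all match.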
vec2f.h:
namespace vec
{
struct vec2f
{
float x, y;
vec2f();
vec2f(float x, float y);
void set(float x, float y);
vec2f& operator=(const vec2f &rhs);
vec2f operator-();
vec2f operator+=(vec2f &rhs);
vec2f operator-=(vec2f &rhs);
vec2f operator*=(vec2f &rhs);
vec2f operator/=(vec2f &rhs);
vec2f operator+(vec2f &rhs);
vec2f operator-(vec2f &rhs);
vec2f operator*(vec2f &rhs);
vec2f operator/(vec2f &rhs);
vec2f operator*(float &rhs);
vec2f operator/(float &rhs);
float length();
float dot(vec2f &rhs);
vec2f norm(vec2f &a);
};
}
vec2f.cpp:
namespace vec
{
vec2f::vec2f()
{
this->x = 0;
this->y = 0;
}
vec2f::vec2f(float x, float y)
{
this->x = x;
this->y = y;
}
void vec2f::set(float x, float y)
{
this->x = x;
this->y = y;
}
vec2f& vec2f::operator=(const vec2f &rhs)
{
if (this == &rhs) //Same object?
{
return *this; //Yes, so skip assignment, and just return *this.
}
this->x = rhs.x;
this->y = rhs.y;
return *this;
}
vec2f vec2f::operator-()
{
//this->x = -x;
//this->y = -y;
//return *this;
return vec2f(-this->x, -this->y);
}
vec2f vec2f::operator+=(vec2f &rhs)
{
return vec2f(this->x += rhs.x, this->y += rhs.y);
}
vec2f vec2f::operator-=(vec2f &rhs)
{
return vec2f(this->x -= rhs.x, this->y -= rhs.y);
}
vec2f vec2f::operator*=(vec2f &rhs)
{
return vec2f(this->x *= rhs.x, this->y *= rhs.y);
}
vec2f vec2f::operator/=(vec2f &rhs)
{
if (rhs.x == 0)
{
throw "Error: vec2f /= rhs: rhs.x divide by zero error.";
}
if (rhs.y == 0)
{
throw "Error: vec2f /= rhs: rhs.y divide by zero error.";
}
return vec2f(this->x /= rhs.x, this->y /= rhs.y);
}
vec2f vec2f::operator+(vec2f &rhs)
{
return vec2f(this->x + rhs.x, this->y + rhs.y);
}
vec2f vec2f::operator-(vec2f &rhs)
{
return vec2f(this->x - rhs.x, this->y - rhs.y);
}
vec2f vec2f::operator*(vec2f &rhs)
{
return vec2f(this->x * rhs.x, this->y * rhs.y);
}
vec2f vec2f::operator/(vec2f &rhs)
{
if (rhs.x == 0)
{
throw "Error: this->x / rhs.x: divide by zero error.";
}
if (rhs.y == 0)
{
throw "Error: this->y / rhs.y: divide by zero error.";
}
return vec2f(this->x / rhs.x, this->y / rhs.y);
}
}
I need to calculate the Euclidean distance between two vectors, which is simple enough. However, the problem I am having is that each element of the vectors I am working with represents different data. For instance: vector representing info about a building = [temp in C, building area, building cost, building weight, ..]. As you can see, the magnitudes of all of these entries differ from each other. One idea I had was simply to take the percent difference of each corresponding entry between the two vectors and compute the Euclidean distance with that, i.e. instead of sqrt(sum (xi-yi)^2) do sqrt(sum (|xi-yi|/((|xi|+|yi|)/2))^2).
Does this seem like a viable method, or is there a red flag somewhere, or is there a vastly better method to use here? Thank you for any input you may have!
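A common alternative (not necessarily better than percent difference, but standard) is to z-score each feature across your dataset and then take the plain Euclidean distance, sometimes called the standardized Euclidean distance. A sketch with made-up building numbers:

```python
import numpy as np

# Rows are buildings, columns are features in incompatible units
# (temp in C, area, cost, weight) -- all values here are illustrative.
X = np.array([
    [21.0, 1200.0, 350000.0,  900.0],
    [18.0,  800.0, 210000.0,  650.0],
    [25.0, 5000.0, 990000.0, 4000.0],
])

# z-score each column so every feature contributes on a comparable scale
mu = X.mean(axis=0)
sigma = X.std(axis=0)
Z = (X - mu) / sigma

def dist(i, j):
    """Euclidean distance between buildings i and j in standardized units."""
    return np.linalg.norm(Z[i] - Z[j])

print(dist(0, 1), dist(0, 2))  # building 2 is the outlier, so dist(0,2) is larger
```

Unlike the percent-difference trick, this also behaves sensibly when an entry is zero or negative (e.g. temperature in C), where |xi|+|yi| in the denominator can misbehave.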
I use pdist2( ) but it's not fast enough. In terms of computation, similar to A*B, d(i,j) requires an element-wise minus, an element-wise square, and a sum. I tried the following:
function y = sqdist(A, B)
[F, Sa] = size(A);
[~, Sb] = size(B);
% let's say Sa << Sb
y = zeros(Sa, Sb);
for i = 1:Sa
row = bsxfun(@minus, B, A(:, i));
y(i, :) = sum(row.^2, 1);
end
end
but it is even slower than pdist2.
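For what it's worth, the loop can be removed entirely with the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a'*b. Here is the same idea in NumPy (it translates directly back to MATLAB using sum, bsxfun, and A.'*B), with arbitrary test sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
F, Sa, Sb = 5, 4, 6
A = rng.standard_normal((F, Sa))   # columns are points, as in the MATLAB code
B = rng.standard_normal((F, Sb))

# ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a'b, computed for all pairs at once:
sqA = (A ** 2).sum(axis=0)[:, None]   # Sa x 1
sqB = (B ** 2).sum(axis=0)[None, :]   # 1 x Sb
D2 = sqA + sqB - 2.0 * A.T @ B        # Sa x Sb matrix of squared distances

# explicit loop version, for comparison only
D2_loop = np.array([[((A[:, i] - B[:, j]) ** 2).sum() for j in range(Sb)]
                    for i in range(Sa)])
print(np.allclose(D2, D2_loop))  # True
```

The whole computation becomes one matrix product plus two rank-1 broadcasts, which is exactly the kind of work BLAS is fast at; that is usually why this beats an explicit loop.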
Find all scalar C's:
i + cj + (c-1)k is in the span of i + 2j + k and 3i + 6j + 3k
My attempt:
[1,c,c-1] = r[1,2,1] + s[3,6,3]
[1,c,c-1]= (r+3s)[1,2,1]
1 = r+3s
c = 2r + 6s
c-1 = r+3s
How do I finish from here?
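The equations in the attempt already force the answer: c = 2r + 6s = 2(r + 3s) = 2, and then c - 1 = 1 = r + 3s checks out. A small numerical cross-check (the rank test and names here are just one way to phrase it):

```python
import numpy as np

v1 = np.array([1, 2, 1])
v2 = np.array([3, 6, 3])   # = 3 * v1, so span{v1, v2} = span{v1}

def in_span(c):
    """Is i + c*j + (c-1)*k in span{v1, v2}?  True iff adding it keeps the rank."""
    target = np.array([1, c, c - 1])
    M = np.stack([v1, v2])
    return np.linalg.matrix_rank(np.vstack([M, target])) == np.linalg.matrix_rank(M)

# (1, c, c-1) must be a multiple t*(1, 2, 1): t = 1, so c = 2 and c - 1 = 1.
print([c for c in range(-5, 6) if in_span(c)])  # [2]
```

The key observation is that v2 is a scalar multiple of v1, so the span is one-dimensional, which is why the three equations collapse to a single condition on c.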
Hi. Suppose you have a vector called A and you rotate the x-y axes by an angle phi. Now the new axes are x' and y'. This rotation causes the components of A to change from A_x & A_y to A_x' & A_y', and they are related by these formulae:
A_x' = A_x cos(phi) + A_y sin(phi)
A_y' = - A_x sin(phi) + A_y cos(phi).
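Those two formulae are exactly multiplication by the 2D (passive) rotation matrix, i.e. the components change because the axes rotated, not the vector. A quick NumPy check with an arbitrary angle and vector:

```python
import numpy as np

phi = 0.7                      # any angle, in radians
A = np.array([2.0, -1.0])      # components (A_x, A_y) in the old axes

# passive rotation: components of the SAME vector in axes rotated by +phi
R = np.array([[ np.cos(phi), np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])
A_new = R @ A

# the two formulae from the post, written out explicitly
Ax_new =  A[0] * np.cos(phi) + A[1] * np.sin(phi)
Ay_new = -A[0] * np.sin(phi) + A[1] * np.cos(phi)

print(np.allclose(A_new, [Ax_new, Ay_new]))                  # True
print(np.isclose(np.linalg.norm(A_new), np.linalg.norm(A)))  # length is unchanged
```

The unchanged length is the geometric signature of a rotation: the arrow itself stays put, only its description changes.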
I have read and understood (correctly, I hope) that these formulae originate from the cross product of two vectors: the x-y axes vector and the rotating-angle vector, as stated here.
My question is simple: where the hell did that rotating-angle vector come from?
It has been a long time since I took college physics and I can't recall how that (rotating-angle) vector arises. Please help; the wondering has been killing me all weekend.
Edit: replaced an HTML code for Greek letters with (phi). Edit 2: modified spacing in a paragraph to highlight my question.
I was messing around with MATLAB trying to figure out what my teacher was on about when she showed that the length of a vector x in MATLAB is the command norm(x,2) (2 meaning the Euclidean, or L2, norm). She then put in norm(x, inf) (inf meaning infinite space? infinite dimensions?). I was really confused about the point of her showing us that. Since then I have been trawling around Wikipedia looking at pages like Lp space, taxicab geometry, and Chebyshev distance. So far I've just been confusing myself. I was hoping someone could give me some insight on distances/norms in different Lp spaces.
Also, I realized that I said "non-Euclidean space" in the title, which I now understand means curved space, not multi-dimensional space.
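For what it's worth, norm(x, p) computes the p-norm, and norm(x, inf) is just the largest absolute component (the Chebyshev case), not a statement about infinite dimensions. A NumPy sketch of the three common cases:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

l1 = np.abs(x).sum()           # p = 1: taxicab / Manhattan length
l2 = np.sqrt((x ** 2).sum())   # p = 2: ordinary Euclidean length
linf = np.abs(x).max()         # limit p -> inf: largest component wins

print(l1, l2, linf)            # 8.0, ~5.099, 4.0

# these match NumPy's built-in, which mirrors MATLAB's norm(x, p)
print(np.isclose(np.linalg.norm(x, 1), l1))
print(np.isclose(np.linalg.norm(x, 2), l2))
print(np.isclose(np.linalg.norm(x, np.inf), linf))
```

As p grows, the largest |x_i| dominates the sum of p-th powers, which is why the inf-norm is the limit of the p-norms rather than something about "infinite space".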
I am trying to get at the essence of a vector space, but every answer uses examples involving (real) numbers, or functions of real numbers.
I want to see if we can construct a system, using the concepts of vectors in a vector space, and not have it directly involve numbers at all. Is this possible? Basically, can vectors/vector-spaces be used to model things outside of numbers?
I get that a vector space requires satisfying some 8 axioms, or as Wikipedia says, "a vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below."
Can you construct a system of vectors in a vector space which satisfies these properties/axioms, and yet doesn't directly revolve entirely around numbers (so isn't using the real numbers directly, or functions of real numbers, or matrices of real numbers, etc.). Is it possible?
My starting example would be something along the lines of: say we try to make the vectors be molecules. Can it be done? That way they would not be directly related to numbers.
Maybe instead of molecules, we use some other objects like light waves or colors, or human beings, etc. Can vector spaces have as their "vectors" arrays of arbitrary non-numerically-related objects like these? If so, what is a complete example?
At all costs, please don't write about something to do with numbers, the real numbers, or anything directly referencing the real numbers.
If it can't be done, why not? If it can be done, what is an example, and, if you can, what is a practical application of vectors as NOT-numbers in the real-world?
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread:
Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. For example consider which subject your question is related to, or the things you already know or have tried.
I'm not very familiar with GR, so please let me know if there is a deep flaw in the premise of the question.
Let's say there is a stretched region of space with a very strong curvature. That is to say, space expands considerably in that region, and it occurs non-negligibly over length scales small enough that objects that are quite well causally self-connected would experience it. In my ideal thought experiment, this patch of curved space just is -- i.e. there isn't some elaborate configuration of matter or negative energy or anything weird there holding the space in that geometry. I know this is unphysical, but the goal of the thought experiment is just to understand how this type of space geometry would interact with free floating matter.
Now, some large piece of solid matter (asteroid maybe?) floats into that region of space. Will the stretching of the object cause its internal energy to increase due to strain energy (e.g. you have to do work to stretch a rubber band)? And if so, does that mean that the object will feel some kind of repulsive force away from that region of space (since you would have to do work to push it into that region and stretch it), such that it would kind of be "deflected" away by that region of space?
In a previous post on UFO propulsion (https://www.reddit.com/r/UFOs/comments/p8oxb2/ufo_propulsion_tech_cracked/?utm_source=share&utm_medium=ios_app&utm_name=iossmf), I gave insights on a fundamental propulsion principle which, although coming off the shelf from general relativity, is fairly different from what we are used to with Newtonian physics. Some comments highlighted the fact that I wasn't explaining the spacetime distortion mechanism that generates the bump we use to -almost- instantly accelerate. This was left intentionally aside as the first post was already quite heavy.
In this new post, I bring together insights on how we can supply the missing part: spacetime distortion generation. That's not straightforward, so I am doing a post on the general scientific principle and will come back later with another one on engineering (how to do it for real).
1/ Discarding false tracks
Now, let's start. So we want to distort spacetime at will (like an ON/OFF switch) and locally (spacecraft size; we don't want to move planets). Up to now, the only obvious way of generating curvature is mass. But obviously it cannot be used for anything (gigantic masses to get some traction, cannot be manipulated at will, planet-size range, etc.). So we need to find something else. Let's dig a bit into the forces we know. Strong interaction: since the force carrier (the gluon boson) interacts with itself, the strong interaction is trapped inside atomic nuclei. No chance we get meter-scale effects aside from the ones we know. Also: nuclei weight is mostly given by relativistic considerations on gluon movements. So it looks like the strong interaction is somehow sitting on top of relativity. Weak interaction: aside from radioactivity, it seems pretty useless from an engineering point of view. Electromagnetism: well, it certainly raises a lot of debates (plasmoids, electrohydrodynamics, etc.) in the UFO analysis field. But the thing is: electromagnetism has relativity as an underlying layer. The simplest argument: magnetism is a pure relativistic effect of moving charges. So it looks like force effects as we know them cannot be explained without relativity, but the opposite is false. We can deduce that relativity (and therefore the gravity "force") is something more fundamental than the strong/weak interactions and even electromagnetism, something emerging from an even lower layer of physics below QFT (quantum field theory).
But solution, while not being full EM, can incl
I'm sure there is such a concept. Most of the calculus I've learned (up to vector calculus) assumes that the domain you're integrating/differentiating over is Euclidean, at least I think so. Does that change significantly if you try to do calculus say, in hyperbolic or spherical space? And are there any interesting directions you can take that?
My POV is of a physicist interested in the mathematical foundations of GR (and other metric theories of gravity) and whose knowledge of topology and real analysis is all self-taught and patchy. So I suppose "point set topology" is my only interest here, and that's what I'm asking about. I understand that pure mathematicians don't need a reason, they just like having algebraic structures to poke and prod, so if you're a pure mathematician just pretend you care about applications for a second.
I first was introduced to it as the study of "continuity of maps", and I learned how the topology 101 definition of a continuous map maps (heh) exactly to the epsilon-delta definition of continuity in real analysis. Then I went digging through real analysis and topology books and I think I've pieced together the following applied mathematician's/physicist's motivation of topology:
>With metric spaces we study the continuity of maps (and hence differentiability, which we need for physics) using a generic definition of "distance" (not necessarily Euclidean distance, or a vector inner product, or something like that). Topology is the study of continuity of maps at its most fundamental, i.e. without needing to invoke a concept of "distance", so that continuity can be studied in contexts more general than functions from R to R. In this way we can study differentiability with the absolute minimum of assumptions and extra structure.
Okay that all sounds fine, but it seems to me that any topological space we'd want to study would have a metric defined on it, and the open sets that make up a topology on a given set are usually chosen to be workable with a (generic) metric. And so it seems to me that the "minimal assumptions and minimal structure" thing is a bit misleading since we're really choosing open sets (or bases for them) with (generic) metrics in mind. So we've kind of built the metric space structure into our topologies with our choices of open sets. Or if not the full structure, the socket that it plugs into.
I'm worried that if (given a set) we constructed the open sets of a topology for it in some way not amenable to a metric space, then for that topology we'd have maps that fit the definition of "continuous" but wouldn't be what anyone would actually call continuous if presented with the map in isolation; right now this is just a vague idea in my head that I haven't pinned down.
In which case, why not just talk about metric spaces all the
I find imaginary numbers interesting. But almost always I'm told: Of course no real space could have distances along the imaginary axis.
So why not? What would it be like to leave our standard geometric space with its 3 real-number axes and move around in a traversable 3D space with (say) one axis with an i-component at right angles to an intermeshed 2-space like the one we live in? What would be the equivalent of a cube or a sphere in such a space?
NOTE: not asking about the nano-dimensions of "string theory"...
Manhattan distance
You might have heard of Euclidean or L2 distance, but have you heard of the L1 distance, also known as the Manhattan distance?
The Manhattan distance is computed by treating the geometry as if it were the streets of Manhattan: one square city block after another, so the only way to travel is along the right-angled streets. While the shortest Euclidean distance between two points has a unique path, the same is not true for the Manhattan distance, as multiple paths can have the same length.
Mathematically, the L1 distance is the sum of the absolute values of the differences of each coordinate of your point/vector, and it extends to N dimensions.
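A tiny sketch of that definition, side by side with the Euclidean distance (the two points are arbitrary):

```python
import numpy as np

p = np.array([1.0, 2.0])
q = np.array([4.0, -2.0])

manhattan = np.abs(p - q).sum()            # L1: walk along the grid, 3 + 4
euclidean = np.sqrt(((p - q) ** 2).sum())  # L2: straight line, sqrt(9 + 16)

print(manhattan, euclidean)  # 7.0 5.0
```

The grid route can never be shorter than the straight line, which is why the L1 value (7) exceeds the L2 value (5) here.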
The L1 distance, or L1 norm, is also used to regularize model parameters. Regularization penalizes model parameters to prevent over-fitting. The L1 norm forces model parameters to be sparse, shrinking the non-important features toward zero. The coefficients of the model can then be used to understand which features are more important, i.e. the features that correspond to model parameters with larger absolute values.
For example, suppose you have a linear regression model with L1 regularization that predicts the price of a house from the features ("number_of_rooms", "area", "color_of_house"). After fitting the model on your data, you see the coefficients corresponding to ("number_of_rooms", "area", "color_of_house") = (0.5, 0.6, 0.01); the model treats "number_of_rooms" and "area" as more important than "color_of_house" in determining the price, since |0.5| > |0.01| and |0.6| > |0.01|.
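The sparsity mechanism is easy to demonstrate: L1 regularization acts through soft-thresholding (the proximal operator of the L1 norm), which shrinks every coefficient by a constant and snaps the small ones to exactly zero. A minimal sketch using the coefficient values from the house example (the threshold 0.05 is an arbitrary illustration):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: shrink toward zero, clip at zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

coefs = np.array([0.5, 0.6, 0.01])   # ("number_of_rooms", "area", "color_of_house")
shrunk = soft_threshold(coefs, 0.05)
print(shrunk)   # ~[0.45, 0.55, 0] -- the weak "color_of_house" feature is zeroed
```

This is why L1 produces exact zeros while L2 (which only rescales coefficients) does not.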
---------------------------------------------------------------------------------
If you like such content and would like to steer the topics I cover, feel free to suggest topics you would like to know more about in the comments.
The codebook wasn't what the Singer expected it to be.
In her hands was a worn hardback book with actual printed paper. It was strange to realize that this was the first time she'd ever held one outside the pseudo-nostalgia of her mind. Inside the book was a collection of poems written by the Emperors.
Emperor Brycellis Gaius, the first all the way to the fifty-first. Different Emperors, but the same name. Fifty-two times. Each of them had written at least one poem, some so long they could have been a novel all their own. What surprised her was how consistent the voice was between the few poems she'd tried to read.
Poems about love, and death, and anger. There were lives and pantheons written in poetry. And yet, unlike how she would have expected, the poems seemed to string together somehow. As if the authors had handed down the same story to each generation.
There was some cleverness, the Singer thought, to use a book like this. Though no Empire ship had ever been captured by an enemy, if they were, there would be no book of ciphers to find. The codebook would be dismissed as nothing more than a personal effect of the Captain.
How the General had discovered this, she might never know. Nor why he'd bothered to investigate it if he didn't think they'd be able to use that information.
It was likely that he'd expected trouble in the final Terminals. This was why the shuttle, still clamped like a tick to the Manifest Destiny, was stuffed to the gills with Electronic Warfare Equipment.
"You're certain this will work?" Achilles grumbled from over her shoulder.
"Yes," the Singer said. "And if it doesn't, we have enough time to activate the backup in the shuttle. So if they can hear my voice, then that will definitely work." Achilles didn't reply, but she could feel his claws squeeze the back of the Captain's seat nervously.
She looked up, and the holographic display of the command module dominated her vision. Its illusionary expanse made it seem like the vectored graphics floated in the air in front of her. The tiny arrow representing Manifest Destiny faced away from the oval shape of the M'divosk Terminal.
The Singer didn't need to read the numbers floating nearby to know the ship was still accelerating -- slowing their v
Just uploaded a video on using PostgreSQL for a face recognition application. The SQL just calculates the Euclidean distance between an input and a large table containing face embedding vectors and finds the closest face in the database. I am a noob to the SQL world and am looking for more ideas on using PostgreSQL for other image processing applications. Any help is appreciated.
In (Wang and Carreira-Perpinan 2013) the goal is to find a probability vector x that is closest (w.r.t. the Euclidean metric) to some arbitrary vector y in R^n. The paper approaches this by solving the KKT conditions. The proof seems to work only because they assume that both vectors are sorted in decreasing order:
> Without loss of generality, we assume the components of y are sorted and x uses the same ordering:
>
> y[1] >= ... >= y[p] >= y[p+1] >= ... >= y[D]
>
> x[1] >= ... >= x[p] >= x[p+1] >= ... >= x[D]
>
> and that x[1] >= ... >= x[p] > 0, x[p+1] = ... = x[D] = 0
These assumptions then lead to a really simple algorithm that I think might be applicable to an optimization problem I'm trying to solve.
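For context, the sort-based projection onto the probability simplex can be sketched as below (my paraphrase of that family of algorithms, not the authors' code). Note that the sorted copy u is only used to find the threshold theta; theta is then applied to the original, unsorted y. That is the heart of the "without loss of generality": the objective and constraints are symmetric under permuting coordinates, so the ordering is purely a relabeling that gets undone at the end.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1},
    in the sort-based style of Wang & Carreira-Perpinan (2013)."""
    u = np.sort(y)[::-1]                  # components in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, len(y) + 1)
    rho = j[u + (1.0 - css) / j > 0][-1]  # number of strictly positive entries
    theta = (css[rho - 1] - 1.0) / rho    # common shift from the KKT conditions
    return np.maximum(y - theta, 0.0)     # applied to the ORIGINAL ordering

x = project_to_simplex(np.array([0.2, 1.4, -0.3]))
print(x, x.sum())   # nonnegative, sums to 1
```

Running it on a vector that is already a probability vector returns it unchanged, and on anything else it clips the small coordinates to zero, exactly the x[p+1] = ... = x[D] = 0 pattern in the quote.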
Why is there no loss of generality when we assume that the solution vector x is sorted? I understand that I can apply any transformations I want to y, because it's a parameter that's given to the algorithm, but x is unknown; how can I be sure that this assumption doesn't restrict the possible values of x I can find?
Why don't they check all possible combinations of Lagrange multipliers that satisfy the complementarity conditions x[i] * b[i] = 0? Say, if x has 3 elements, I would want to check these 8 combinations of (b[1], b[2], b[3]):
| b[1] | b[2] | b[3] |
|---|---|---|
| ==0 | ==0 | ==0 |
| ==0 | ==0 | !=0 |
| ==0 | !=0 | ==0 |
| ==0 | !=0 | !=0 |
| !=0 | ==0 | ==0 |
| !=0 | ==0 | !=0 |
| !=0 | !=0 | ==0 |
| !=0 | !=0 | !=0 |
Then solutions would look like x = (a,b,c), x = (d,e,0), x = (f,0,g) and so on, where a,b,c,d,e,f,g > 0. But the paper seeks solutions where x is sorted in decreasing order, so x = (f,0,g) won't be found.
In what cases does it make sense to assume that the solution vector is sorted? I think this has something to do with the Euclidean norm being a sum and thus order-free, so (x1 - y1)^2 + (x2 - y2)^2 + (x3 - y3)^2 is exactly the same as (x2 - y2)^2 + (x1 - y1)^2 + (x3 - y3)^2, which allows us to impose whatever order we find convenient. Thus this Euclidean norm is a symmetric function of the pairs {(x1, y1), (x2, y2), (x3, y3)}, right? The constraints x1 + x2 + x3 == 1 and x[k] >= 0 also seem to be "symmetric". Does this
Original loadout post seems to have gotten deleted, going to try here. Thanks to the excellent u/Rubinsk for posting this spreadsheet bit.ly/warzonedatabase; we now have a whole bunch of juicy data on our hands. I'm a data scientist, so the first thing I wanted to do was pop it open in Jupyter and start taking a peek around.
I'm curious how similar the guns are to each other and what categories naturally occur -- we all know how some LMGs behave more like SMGs in practice when built a certain way. Could I find these groups? Might I have a takeaway in the form of "well, if you like gun X, the statistical best version in that gun category is weapon Y"?
This is where agglomerative clustering kicks in. If you take each weapon and list its attributes as a vector, you form a numeric description of the gun. Since I didn't want to bother with replacing missing data, I took descriptive stats for each weapon on the following categories from the original spreadsheet:
Now here's where the data science gets bad. This uses many fields to describe damage compared to only a handful for movement characteristics. Even worse, recoil is not listed -- we still need a full listing, since the clustering algorithm can't have missing fields and I didn't want to infer anything and skew it. It also does not consider firing mode (auto/semi/burst) at all.
Still, curiosity persists. I ran a linkage matrix using average linkage and the Mahalanobis distance. The average-linkage method just seemed prudent to keep things from "jumping" between clusters. The Mahalanobis part is important -- since the units between fields are wildly different (ADS vs Shots to Kill), they need to be normalized per dimension. Mahalanobis is effectively Euclidean distance where each dimension measures deviation from the average (in units of the data's spread) rather than a raw value.
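The pipeline described above can be sketched in a few lines of SciPy; the numbers below are synthetic stand-ins for the spreadsheet (three columns with wildly different units, like ADS time vs damage):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(2)
# 30 "weapons" x 3 features, columns deliberately on very different scales
X = rng.standard_normal((30, 3)) * np.array([1.0, 100.0, 5000.0])

# Mahalanobis rescales (and decorrelates) via the data's own covariance,
# so no single large-unit column dominates the distance.
d = pdist(X, metric='mahalanobis')

# average linkage over the condensed distance vector, as in the post
Z = linkage(d, method='average')
print(Z.shape)  # (n - 1, 4): one merge per row
```

Feeding Z to scipy.cluster.hierarchy.dendrogram then draws the tree of merges described below.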
Anyway, the dendrogram attached is what I came up with. If you're not familiar with these, they basically show how similar things are by how they are grouped