A list of puns related to "Self organizing map"
Hi everyone!
I am working on a project which aims to modify a neural network called the Self-Organizing Map (SOM): https://en.wikipedia.org/wiki/Self-organizing_map - which is essentially a clustering/dimensionality reduction algorithm that preserves the topology of given data.
My goal is to propose several alternatives/variants of SOM which add a notion of relevance/ranking (in the context of relevance scores) so that high-scoring images appear on top while still preserving local similarity.
So in short, my dataset has 20,000 feature vectors and, for each of those vectors, relevance scores for 327 queries. I want to change the neural network so that it considers the relevance/ranking as well as the local similarity.
Do you have any ideas how I could go about this by chance? I'd really appreciate any input!
Hello all, I'm doing my PhD on numerical modelling of the atmosphere, and my director and I thought of using SOMs to relate regional circulation to local weather features. I have looked up a few papers about this technique, but I'm still trying to learn the basics. I have to prepare a presentation for my director explaining the basics of SOM, so I wanted to ask if anyone knows of a good book/paper/webpage to get more info on the matter. I would also love to learn about libraries to implement it.
Thanks all!
Every data scientist will have to use a SOM at least once in their life. So here is a Very Simplified Introduction to the Self-Organizing Map.
https://ravinduramesh.blogspot.com/2021/04/intro-to-self-organizing-map-and-self.html
I am trying to learn about a type of unsupervised neural network called a "Self Organizing Map" (https://en.wikipedia.org/wiki/Self-organizing_map). Since this is an unsupervised algorithm, it cannot be trained using the usual accuracy metrics (the concept of "accuracy" does not exist for unsupervised data, i.e. there are no "labels").
I watched different videos and read several articles on this algorithm, but I am still a bit confused. Here is my understanding so far:
1) The user decides how many "neurons" (i.e. the circular bins) they want on the map.
2) Each of these neurons is assigned a random weight (this random weight is actually a vector of weights - if the original data has 5 variables, the weight vector will be of dimension 5 x 1).
3) Take the first observation from the dataset. Calculate the Euclidean distance between this observation and the weight vector of every neuron on the map. The neuron that has the smallest Euclidean distance "wins" - the observation is placed inside that neuron.
4) The winning neuron (from the earlier step) and its neighboring neurons have their weight vectors updated.
5) Repeat steps 2) to 4) for all observations in the dataset.
6) Repeat steps 2) to 5) many times.
7) Finished.
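To make the steps above concrete, here is a rough NumPy sketch of what I think the algorithm does (a toy illustration on random data; the grid size, learning rate, and neighbourhood radius are arbitrary guesses on my part):

```python
import numpy as np

# toy data: 200 observations with 5 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# 1) choose a grid of neurons, e.g. 6 x 6
grid_h, grid_w = 6, 6
grid = np.array([[i, j] for i in range(grid_h) for j in range(grid_w)])  # neuron positions on the map

# 2) each neuron gets a random weight vector with the same dimension as the data
W = rng.normal(size=(grid_h * grid_w, X.shape[1]))

lr, sigma = 0.5, 1.5            # learning rate and neighbourhood radius (arbitrary values)
for epoch in range(20):          # 6) repeat many times
    for x in X:                  # 5) loop over all observations
        # 3) find the "winning" neuron (best matching unit) by Euclidean distance
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))
        # 4) pull the winner AND its grid neighbours towards the observation,
        #    with a strength that decays with distance on the map
        d_grid = np.linalg.norm(grid - grid[bmu], axis=1)
        h = np.exp(-(d_grid ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)
    lr *= 0.95                   # learning rate and radius usually shrink over time
    sigma *= 0.95

# mean distance of each observation to its winning neuron
qe = np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in X])
print("quantization error:", qe)
```

The last lines compute the average distance of each observation to its winning neuron; I have seen this called the "quantization error", which seems to play a role loosely analogous to reconstruction error.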
What I don't understand is the logic behind why the weights are being updated. In other unsupervised algorithms (such as Principal Component Analysis or autoencoders), there is usually some notion of "reconstruction error": we can see how similar the output of the algorithm is to the original dataset. But from what I have read, I am not sure how "updating the weights of the neurons" produces "better quality" results.
Can someone please help me understand how the SOM algorithm works? Is there some concept of reconstruction error related to the weights being updated?
Thanks
Has anyone ever heard of the SOM algorithm being used for identifying outliers? I found this link on how to identify outliers using the SOM algorithm:
https://stackoverflow.com/questions/56134313/r-som-kohonen-package-outlier-detection
Can someone please comment on the approach taken in this link? Apparently, for the data points assigned to a given neuron, points that have larger Euclidean distances to that neuron's weight vector (its "center") are considered outliers.
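If I understand the approach in that answer correctly, it would look roughly like this in code (my own NumPy sketch of the idea, not the code from the link; the SOM is assumed to be already trained, and the 95th-percentile cutoff is just an arbitrary choice for illustration):

```python
import numpy as np

# stand-ins for an already-trained SOM:
#   W - codebook (weight) vectors, shape (n_neurons, n_features)
#   X - the data, shape (n_obs, n_features)
rng = np.random.default_rng(1)
W = rng.normal(size=(25, 5))
X = rng.normal(size=(500, 5))

# distance of every observation to every neuron, then assignment to the closest one
dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
bmu = dists.argmin(axis=1)                  # index of the winning neuron per observation
bmu_dist = dists[np.arange(len(X)), bmu]    # distance of each observation to its winner

# within each neuron, flag points whose distance to the codebook vector is unusually large;
# a fixed percentile flags a fixed fraction per cluster, so a real analysis would need a
# more principled threshold
outlier = np.zeros(len(X), dtype=bool)
for k in np.unique(bmu):
    mask = bmu == k
    cutoff = np.percentile(bmu_dist[mask], 95)
    outlier[mask] = bmu_dist[mask] > cutoff

print("flagged", outlier.sum(), "of", len(X), "points")
```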
Is this mathematically legitimate and appropriate?
Thanks
I just read about this cool type of (often single-layer) neural network designed for unsupervised data called "Kohonen networks". Has anyone ever used this before on real data (i.e. not the iris dataset)? Does anyone recommend using it? Any success stories?
I implemented a Self Organizing Map on an Arty A7 board in VHDL. You can check it out on GitHub and share your thoughts. https://github.com/tuhalf/SOMvhdl
Hi everyone, I am sharing this code repo.
I wrote a PyTorch implementation of an N-dimensional Self Organizing Map (SOM) that uses dot-product similarity. I wrote it because the existing PyTorch SOM implementations I found were limited to 2-dimensional maps and did not support dot-product similarity. I also needed some practice working with PyTorch.
Hope you find it useful.
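To illustrate what I mean by dot-product similarity (a toy snippet of the general idea, not code from the repo): the best matching unit is the neuron with the largest dot product with the input, rather than the one with the smallest Euclidean distance.

```python
import torch
import torch.nn.functional as F

n_neurons, n_features = 100, 64
weights = F.normalize(torch.randn(n_neurons, n_features), dim=1)  # unit-length codebook vectors
x = F.normalize(torch.randn(n_features), dim=0)                   # unit-length input

# Euclidean SOM: winner = neuron with the smallest distance to x
bmu_euclidean = torch.argmin(torch.linalg.norm(weights - x, dim=1))

# dot-product SOM: winner = neuron with the largest similarity to x
bmu_dot = torch.argmax(weights @ x)

# for unit-norm vectors ||w - x||^2 = 2 - 2*(w . x), so the two winners coincide;
# for unnormalized data they generally differ
assert bmu_euclidean == bmu_dot
```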
From the project on GitHub:
A hierarchical self-organizing map (HSOM) is an unsupervised neural network that learns patterns from high-dimensional space and represents them in lower dimensions.
HSOM networks receive inputs and feed them into a set of self-organizing maps, each learning individual features of the input space. These maps produce sparse output vectors in which only the most responsive nodes activate, a result of competitive inhibition that restricts the number of 'winners' (i.e. active nodes) allowed at any given time.
Each layer in an HSOM network contains a set of maps that view part of the input space and generate sparse output vectors, which together form the input for the next layer in the hierarchy. Information becomes increasingly abstract as it is passed through the network and ultimately results in a low-dimensional sparse representation of the original data.
The training process results in a model that maps certain input patterns to certain labels, corresponding to high-dimensional and low-dimensional data respectively. Given that training is unsupervised, the labels have no intrinsic meaning but rather become meaningful through their repeated association with certain input patterns and their relative lack of association with others. Put simply, labels come to represent higher-dimensional patterns over time, allowing them to be distinguished from one another in a meaningful way.
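As a rough picture of the "sparse output with only the most responsive nodes activating" part described above (my own toy sketch of the idea, not the project's actual code or API):

```python
import numpy as np

def sparse_som_response(x, codebook, k=3):
    """Response of one map to input x, keeping only the k most responsive nodes."""
    response = -np.linalg.norm(codebook - x, axis=1)  # responsiveness = closeness to the node's weights
    out = np.zeros_like(response)
    winners = np.argsort(response)[-k:]               # competitive inhibition: only k winners allowed
    out[winners] = 1.0                                 # binary sparse code for simplicity
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=32)                                # one input vector
maps = [rng.normal(size=(16, 8)) for _ in range(4)]    # one layer = 4 maps of 16 nodes, each seeing 8 dims
parts = np.split(x, 4)                                 # each map views part of the input space
layer_output = np.concatenate([sparse_som_response(p, m) for p, m in zip(parts, maps)])
# layer_output has at most 4*k nonzero entries and would serve as input to the next layer
```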
https://youtu.be/LNoyg6rdnq4
Can someone explain what's going on in this video? How is the Mona Lisa being recreated using self-organizing maps?
In general, self-organizing maps seem to be an "a.i." equivalent of k-means clustering? In the end, do self-organizing maps sort observations into discrete clusters?
I was trying to find the source code for one of the articles about "improving the life of wireless sensor networks using SOM", but I couldn't find any.
I need help from anyone who has worked with SOMs to cluster sensors and find their cluster heads using the sensor coordinates.
So if you have the source code or have worked in this area before, please help me.
Any help is appreciated.
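I don't have the papers' code either, but here is roughly how I picture the basic idea, as a toy sketch (using the minisom package as one possible off-the-shelf SOM; defining the cluster head as the sensor closest to its neuron's weight vector is my own assumption about what the papers do): train a SOM on the sensor (x, y) coordinates, treat each neuron as a cluster, and pick the most central sensor of each cluster as its head.

```python
import numpy as np
from minisom import MiniSom   # one option for a ready-made SOM; any implementation would do

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(60, 2))   # toy (x, y) positions of 60 sensors

som = MiniSom(4, 4, 2, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(coords, 2000)

# cluster assignment: each sensor belongs to its best matching neuron
clusters = {}
for idx, c in enumerate(coords):
    clusters.setdefault(som.winner(c), []).append(idx)

# cluster head (assumed definition): the sensor closest to its neuron's weight vector,
# i.e. the most central sensor of the cluster
weights = som.get_weights()                  # shape (4, 4, 2)
heads = {}
for (i, j), members in clusters.items():
    d = np.linalg.norm(coords[members] - weights[i, j], axis=1)
    heads[(i, j)] = members[int(np.argmin(d))]

print(heads)
```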