Need some ideas for modifying the Self-Organizing Map (ranking awareness)

Hi everyone!

I am working on a project which aims to modify a neural network called the Self-Organizing Map (SOM): https://en.wikipedia.org/wiki/Self-organizing_map - essentially a clustering/dimensionality-reduction algorithm that preserves the topology of the given data.

My goal is to propose several alternatives/variants of the SOM that add a notion of relevance/ranking (in the context of relevance scores), so that high-scoring images appear at the top while still preserving local similarity.

In short, my dataset has 20,000 feature vectors, plus query scores for each of those vectors across 327 queries. I want to change the network so that it considers relevance/ranking as well as local similarity.
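One possible direction, purely as a sketch (this is not an established SOM variant, and names like `rank_pull` are invented for illustration): bias the BMU search so that high-scoring vectors prefer units near the top rows of the grid, while the ordinary neighborhood update still preserves local similarity. In NumPy, with toy stand-ins for the real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: 200 vectors, one query's scores in [0, 1].
X = rng.normal(size=(200, 16))
scores = rng.random(200)

grid_h, grid_w, dim = 10, 10, 16
W = rng.normal(size=(grid_h, grid_w, dim))      # prototype vectors
rows = np.arange(grid_h)[:, None]               # grid row coordinates
gy, gx = np.mgrid[0:grid_h, 0:grid_w]           # full grid coordinates

def train_step(x, score, lr=0.5, sigma=2.0, rank_pull=0.5):
    """One update: a standard SOM step, plus a vertical bias so that
    high-scoring inputs prefer winners near the top row (row 0)."""
    d = np.linalg.norm(W - x, axis=2)
    # Bias BMU selection: penalize low rows in proportion to the score.
    bias = rank_pull * score * rows / grid_h
    r, c = np.unravel_index(np.argmin(d + bias), d.shape)
    # Usual Gaussian neighborhood update around the (biased) winner.
    h = np.exp(-((gy - r) ** 2 + (gx - c) ** 2) / (2 * sigma ** 2))
    W[:] += lr * h[:, :, None] * (x - W)

for x, s in zip(X, scores):
    train_step(x, s)
```

How strongly ranking should override similarity is exactly the trade-off `rank_pull` exposes; with 327 queries you would also need to decide whether to train one map per query or fold scores into the feature vectors.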

Do you have any ideas how I could go about this by chance? I'd really appreciate any input!

👍︎ 3 · 📅︎ Jan 05 2022
Animation of a Self Organizing Map with 3D data (Description in Comments) v.redd.it/3knp2sax0s171
👍︎ 90 · 👤︎ u/suoarski · 📅︎ May 28 2021
[OC] Animation of a Self Organizing Map v.redd.it/2op81pzl5s171
👍︎ 50 · 👤︎ u/suoarski · 📅︎ May 28 2021
Beginners Guide to Self-Organizing Maps analyticsindiamag.com/beg…
👍︎ 3 · 📅︎ Sep 06 2021
[P][D] Self Organizing Maps

Hello all, I'm doing my PhD on numerical modelling of the atmosphere, and my director and I thought of using SOMs to relate regional circulation to local weather features. I looked up a few papers about this technique, but I'm still trying to learn the basics. I have to prepare a presentation for my director explaining the basics of SOMs, so I wanted to ask if anyone knows of a good book/paper/webpage to get more info on the matter. I would also love to learn about libraries for implementing them.

Thanks all!

👍︎ 4 · 📅︎ Jul 14 2021
Learning with Self-Organizing Maps youtube.com/watch?v=uiVFK…
👍︎ 3 · 👤︎ u/inboble · 📅︎ Aug 01 2021
Implementing Self-Organizing Maps with Python and TensorFlow rubikscode.net/2021/07/06…
👍︎ 10 · 📅︎ Jul 06 2021
Very simplified intro to Self Organizing Maps

Every data scientist will have to use a SOM at least once in their life, so here is a very simplified introduction to Self-Organizing Maps.

https://ravinduramesh.blogspot.com/2021/04/intro-to-self-organizing-map-and-self.html

👍︎ 4 · 📅︎ May 08 2021
[D] Understanding How a Self Organizing Map (Kohonen Network) Works

I am trying to learn about a type of unsupervised neural network called a "Self-Organizing Map" (https://en.wikipedia.org/wiki/Self-organizing_map). Since this is an unsupervised algorithm, it cannot be trained using the usual accuracy metrics (the concept of "accuracy" does not exist for unsupervised data, i.e. there are no "labels").

I watched different videos and read several articles on this algorithm, but I am still a bit confused. Here is my understanding so far:

  1. The user decides how many "neurons" (i.e. the circular bins) they want on the map.

  2. Each of these neurons is assigned a random weight (this "weight" is actually a vector of weights: if the original data has 5 variables, the weight vector will be of dimension 5 x 1).

  3. Take the first observation from the dataset. Calculate the Euclidean distance between this observation and the weight vector of every neuron on the map. The neuron with the smallest Euclidean distance "wins", and the observation is placed inside that neuron.

  4. The winning neuron (from the earlier step) and its neighboring neurons have their weight vectors updated.

  5. repeat steps 2) to 4) for all observations in the dataset.

  6. repeat steps 2) to 5) many times

  7. finished
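The steps above map almost line-for-line onto code. A minimal NumPy sketch with toy data and illustrative hyperparameters (the decay schedules are one common choice, not the only one):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((100, 5))                 # 100 observations, 5 variables

# Step 1: choose the grid of neurons.
grid_h, grid_w = 6, 6
# Step 2: random initial weight vectors, one per neuron.
W = rng.random((grid_h, grid_w, 5))
gy, gx = np.mgrid[0:grid_h, 0:grid_w]    # neuron grid coordinates

n_epochs, lr0, sigma0 = 20, 0.5, 3.0
for epoch in range(n_epochs):            # Step 6: many passes
    # Shrink learning rate and neighborhood radius over time.
    frac = epoch / n_epochs
    lr = lr0 * (1 - frac)
    sigma = sigma0 * (1 - frac) + 0.5
    for x in X:                          # Step 5: every observation
        # Step 3: Euclidean distance to every weight vector; the
        # closest neuron is the "winner" (best-matching unit).
        d = np.linalg.norm(W - x, axis=2)
        r, c = np.unravel_index(np.argmin(d), d.shape)
        # Step 4: pull the winner and its grid neighbors toward x.
        h = np.exp(-((gy - r) ** 2 + (gx - c) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (x - W)
```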

What I don't understand is the logic behind why the weights are being updated. In other unsupervised algorithms (such as Principal Component Analysis or autoencoders), there is usually something called "reconstruction error": we can see how similar the output of the algorithm is to the original dataset. But from what I have read, I am not sure how "updating the weights of neurons" produces "better quality" results.

Can someone please help me understand how the SOM algorithm works? Is there some concept of reconstruction error related to the weights being updated?
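For what it's worth, SOMs do have a close analogue of reconstruction error: the quantization error, i.e. the mean distance from each observation to its best-matching unit. The neighborhood updates tend to drive it down, even though classic SOM training does not minimize it exactly. A sketch:

```python
import numpy as np

def quantization_error(X, W):
    """Mean distance from each observation to its best-matching unit --
    the SOM analogue of reconstruction error."""
    flat = W.reshape(-1, W.shape[-1])                 # neurons as rows
    d = np.linalg.norm(X[:, None, :] - flat[None], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
X = rng.random((50, 5))
W_random = rng.random((6, 6, 5))          # an untrained (random) map
print(quantization_error(X, W_random))    # nonzero; training reduces it
```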

Thanks

👍︎ 14 · 📅︎ Feb 22 2021
[D] Using Self Organizing Maps (SOM) for Identifying Outliers

Has anyone ever heard of the SOM algorithm being used for identifying outliers? I found this link on how to identify outliers using the SOM algorithm:

https://stackoverflow.com/questions/56134313/r-som-kohonen-package-outlier-detection

Can someone please comment on the approach taken in this link? Apparently, among the data points assigned to a given neuron, points with larger Euclidean distances to that neuron's weight vector are considered outliers.

Is this mathematically legitimate and appropriate?
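The approach in the linked answer amounts to thresholding the per-point distance to the best-matching unit. A NumPy sketch of the idea (the untrained random map and the mean-plus-3-sigma threshold are illustrative choices, not something the answer mandates):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
X[0] += 10.0                              # plant an obvious outlier

W = rng.normal(size=(5, 5, 4))            # stand-in for a trained SOM
flat = W.reshape(-1, 4)

# Distance from each point to its best-matching unit.
d = np.linalg.norm(X[:, None, :] - flat[None], axis=2).min(axis=1)

# Heuristic threshold: mean + 3 standard deviations of those distances.
outliers = np.where(d > d.mean() + 3 * d.std())[0]
```

Whether this is "legitimate" depends on the data: it flags points the map represents poorly, which conflates true outliers with regions the map simply undertrained on.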

Thanks

👍︎ 13 · 📅︎ Feb 22 2021
[D] self organizing maps/kohonen networks

I just read about this cool type of (often single-layer) neural network designed for unsupervised data, called "Kohonen networks". Has anyone ever used this before on real data (i.e. not the iris data)? Does anyone recommend using it? Any success stories?

👍︎ 8 · 👤︎ u/jj4646 · 📅︎ Jan 17 2021
I built a Self-Organizing Map in VHDL

I implemented a Self Organizing Map on Arty A7 board in VHDL. You can check it out on Github and share your thoughts. https://github.com/tuhalf/SOMvhdl

Ten Color Input Test

👍︎ 10 · 👤︎ u/Tuhalf · 📅︎ Jan 12 2021
[P] N-Dimensional Self Organizing Map with dot-product similarity (PyTorch)

Hi everyone, I am sharing this code repo.

I wrote a PyTorch implementation of an N-dimensional Self-Organizing Map (SOM) that uses dot-product similarity. I wrote it because the existing PyTorch SOM implementations I found were limited to 2-dimensional maps and did not support dot-product similarity. I also needed some practice working with PyTorch.

Hope you may find it useful.
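For context, the difference from the usual Euclidean SOM is just the BMU criterion: argmin of distance versus argmax of the dot product. A minimal NumPy illustration of that distinction (this is not the linked repo's API):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8, 32))           # 2-D map, 32-dim prototypes
x = rng.normal(size=32)

# Euclidean BMU: the closest prototype.
bmu_euc = np.unravel_index(
    np.argmin(np.linalg.norm(W - x, axis=2)), (8, 8))

# Dot-product BMU: the most aligned prototype. Prototypes are usually
# kept normalized so magnitude doesn't dominate the comparison.
Wn = W / np.linalg.norm(W, axis=2, keepdims=True)
bmu_dot = np.unravel_index(np.argmax(Wn @ x), (8, 8))
```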

👍︎ 12 · 👤︎ u/bazyli-d · 📅︎ Aug 30 2020
Self-Organizing Maps. This presentation is based on: SOMs were invented by Teuvo Kohonen. They represent multidimensional… - ppt download slideplayer.com/slide/479…
👍︎ 2 · 👤︎ u/cincoutm8 · 📅︎ Nov 14 2020
[P] Hierarchical self-organizing maps for unsupervised pattern recognition

From the project on GitHub:

A hierarchical self-organizing map (HSOM) is an unsupervised neural network that learns patterns from high-dimensional space and represents them in lower dimensions.

HSOM networks receive inputs and feed them into a set of self-organizing maps, each learning individual features of the input space. These maps produce sparse output vectors in which only the most responsive nodes activate, a result of competitive inhibition that restricts the number of 'winners' (i.e. active nodes) allowed at any given time.

Each layer in an HSOM network contains a set of maps that view part of the input space and generate sparse output vectors, which together form the input for the next layer in the hierarchy. Information becomes increasingly abstract as it is passed through the network and ultimately results in a low-dimensional sparse representation of the original data.

The training process results in a model that maps certain input patterns to certain labels, corresponding to high-dimensional and low-dimensional data respectively. Given that training is unsupervised, the labels have no intrinsic meaning but rather become meaningful through their repeated association with certain input patterns and their relative lack of association with others. Put simply, labels come to represent higher-dimensional patterns over time, allowing them to be distinguished from one another in a meaningful way.
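The data flow described above can be caricatured in a few lines: each map emits a sparse winner vector, and the concatenated sparse vectors feed the next layer. A much-simplified two-layer sketch (this is not the repo's actual API; `som_forward` and the top-k inhibition are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)

def som_forward(x, W, k=2):
    """Sparse output: 1.0 at the k most responsive nodes, 0 elsewhere
    (a crude stand-in for competitive inhibition)."""
    resp = W @ x                          # dot-product responsiveness
    out = np.zeros(len(W))
    out[np.argsort(resp)[-k:]] = 1.0
    return out

x = rng.normal(size=64)                   # high-dimensional input

# Layer 1: two maps, each viewing half of the input space.
W1a = rng.normal(size=(16, 32))
W1b = rng.normal(size=(16, 32))
sparse1 = np.concatenate([som_forward(x[:32], W1a),
                          som_forward(x[32:], W1b)])

# Layer 2: one map over the concatenated sparse vectors, producing a
# lower-dimensional sparse representation of the original input.
W2 = rng.normal(size=(8, 32))
sparse2 = som_forward(sparse1, W2)
```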

👍︎ 58 · 👤︎ u/sterntree · 📅︎ Dec 19 2019
[D] self organizing maps and the mona lisa

https://youtu.be/LNoyg6rdnq4

Can someone explain what's going on in this video? How is the Mona Lisa being recreated using self-organizing maps?

In general, self-organizing maps seem to be an "AI" equivalent of k-means clustering? In the end, do self-organizing maps sort observations into discrete clusters?

👍︎ 2 · 👤︎ u/blueest · 📅︎ Oct 06 2020
Self-organizing maps in Matlab

I was trying to find the source code for one of the articles about "improving the lifetime of wireless sensor networks using SOMs", but I couldn't find any.

I need help from anyone who has worked with SOMs to cluster sensors and find their cluster heads using the sensor coordinates.

So if you have the source code or have worked in this area before, please help me.

Any help is appreciated.
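The question asks about Matlab, but the underlying recipe is language-agnostic: train a small SOM on the sensor (x, y) coordinates, assign each sensor to its best-matching unit, and take the sensor nearest each prototype as that cluster's head. A NumPy sketch of that recipe (all sizes and schedules are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
coords = rng.random((120, 2)) * 100       # sensor (x, y) positions

# Small SOM: each prototype marks one cluster-head region.
W = rng.random((3, 3, 2)) * 100
gy, gx = np.mgrid[0:3, 0:3]

for epoch in range(30):
    sigma = max(1.5 * (1 - epoch / 30), 0.3)
    lr = 0.5 * (1 - epoch / 30) + 0.01
    for p in coords:
        d = np.linalg.norm(W - p, axis=2)
        r, c = np.unravel_index(np.argmin(d), d.shape)
        h = np.exp(-((gy - r) ** 2 + (gx - c) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, :, None] * (p - W)

# Assign each sensor to its cluster; the cluster head is the sensor
# closest to that cluster's prototype.
flat = W.reshape(-1, 2)
assign = np.linalg.norm(coords[:, None] - flat[None], axis=2).argmin(1)
heads = [coords[assign == k][
             np.linalg.norm(coords[assign == k] - flat[k], axis=1).argmin()]
         for k in np.unique(assign)]
```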

👍︎ 3 · 📅︎ Jul 26 2020
Hierarchical self-organizing maps for unsupervised pattern recognition github.com/CarsonScott/HS…
👍︎ 12 · 👤︎ u/inboble · 📅︎ Dec 19 2019
Bayesian Self-Organizing Maps illustrated [OC] v.redd.it/ymooclgol9l31
👍︎ 18 · 📅︎ Sep 08 2019
Self-organizing map replicates the Mona Lisa youtu.be/LNoyg6rdnq4
👍︎ 31 · 👤︎ u/inboble · 📅︎ Jun 26 2019
Conceptual Map of Erich Jantsch's Self-Organizing Universe
👍︎ 11 · 👤︎ u/stiivi · 📅︎ Oct 15 2019
[P] Using Self-Organizing Maps to solve the Traveling Salesman Problem diego.codes/post/som-tsp/
👍︎ 64 · 👤︎ u/hardmaru · 📅︎ Jan 26 2018
Kohonen: Self-Organizing Maps medium.com/@eklavyaS/koho…
👍︎ 16 · 📅︎ Nov 01 2019
Hi, can you kindly suggest some good resources (tutorials/ blogs/ books) for learning Self Organizing Maps (SOM) in Python?
👍︎ 3
💬︎
📅︎ Sep 20 2019
🚨︎ report
