I finally figured out K's nearest neighbors...

They are J and L

πŸ‘οΈŽ 981
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/bhendel
πŸ“…οΈŽ Dec 26 2021
🚨︎ report
K-nearest neighbor

Hi everyone, I was wondering whether it is possible to create a k-NN model in an Oracle database. The algorithm is not present in DBMS_DATA_MINING. I am using the 12c version with PL/SQL.

πŸ‘οΈŽ 2
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/pourya_sh
πŸ“…οΈŽ Dec 28 2021
🚨︎ report
Question on how to calculate the test error rate for Bayes and k-nearest neighbors

I am studying for MAS 2 and looked at the fall 2019 exam. Question 38 basically asks me to calculate the error rate for the test data using both the k-nearest neighbor and Bayes classifiers and report the difference. I have looked over the source material multiple times and there is literally no example of how to do this. Since we are given an actual test data set, are we just supposed to apply both methods to the test data, so that the error rate is just the fraction of wrong choices? If I do this, though, I swear the Bayes error comes out higher than the error from the k-nearest neighbor method, which really doesn't make sense.
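
For what it's worth, that is the usual recipe: fit each classifier on the training data, predict on the test data, and take the fraction of misclassified test points as the test error rate. A rough sketch, where y_test, knn_pred, and bayes_pred are placeholder names for the true test labels and each method's predictions:

    import numpy as np

    def error_rate(y_true, y_pred):
        """Fraction of test observations that are misclassified."""
        return np.mean(np.asarray(y_true) != np.asarray(y_pred))

    # y_test, knn_pred, and bayes_pred are placeholders for the true test labels
    # and each method's predictions on the same test set.
    # difference = error_rate(y_test, knn_pred) - error_rate(y_test, bayes_pred)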

πŸ‘οΈŽ 4
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/jaarndt6153
πŸ“…οΈŽ Dec 15 2021
🚨︎ report
kNN = k Nearest Neighbors: A super simple algorithm in Machine Learning youtube.com/watch?v=9zS3a…
πŸ‘οΈŽ 112
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/Va_Linor
πŸ“…οΈŽ Sep 11 2021
🚨︎ report
Fellow NBA fans - Simple Modeling of NBA positions using the K-Nearest Neighbors Machine Learning Algorithm towardsdatascience.com/si…
πŸ‘οΈŽ 10
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/PictoChris
πŸ“…οΈŽ Nov 08 2021
🚨︎ report
K Nearest Neighbors
πŸ‘οΈŽ 3
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/help-me-grow
πŸ“…οΈŽ Nov 18 2021
🚨︎ report
K-Nearest Neighbors Algorithm From Scratch In Python youtu.be/iJfcRV4PPnY
πŸ‘οΈŽ 2
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/Snoo28889
πŸ“…οΈŽ Sep 20 2021
🚨︎ report
Using K nearest neighbors to define new features

Hello friends,

I am learning how to define new features (i.e. feature engineering) using the idea of k-nearest neighbors. Here is my idea for implementing it:

a. Suppose we choose K=10 (i.e. 10 neighbors).

b. For every data point, find what percent of its 10 closest neighbors belong to the positive class, and use this percentage as the new feature.

The above idea can work well during training. But my question is: how can I define this new feature for the test data (i.e. the unlabeled set)? Can I kindly get help here on how to do it? Thanks.

P.S. Examples and/or links to documentation/blog posts will be really appreciated.
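
One common way to handle the test set (a sketch of one option, not the only one): fit the neighbor search on the training data only, and for each test point take its K nearest training points and compute the same positive-class percentage from their known labels. A minimal scikit-learn sketch, where X_train, y_train (0/1 labels), and X_test are placeholder names:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    K = 10
    # X_train, y_train (0/1 labels), and X_test are placeholder NumPy arrays.
    nn = NearestNeighbors(n_neighbors=K + 1).fit(X_train)

    # Training feature: ask for K+1 neighbors because each training point is its
    # own nearest neighbor, then drop that first column before averaging labels.
    _, idx = nn.kneighbors(X_train)
    train_feature = y_train[idx[:, 1:]].mean(axis=1)

    # Test feature: neighbors are always looked up among the labeled training
    # points, so no test labels are needed.
    _, idx = nn.kneighbors(X_test, n_neighbors=K)
    test_feature = y_train[idx].mean(axis=1)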

πŸ‘οΈŽ 2
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/jsinghdata
πŸ“…οΈŽ Aug 27 2021
🚨︎ report
Classifying Movies With K-Nearest Neighbors youtube.com/watch?v=jw5Lh…
πŸ‘οΈŽ 118
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/vaishak2future
πŸ“…οΈŽ May 20 2021
🚨︎ report
Learning K Nearest Neighbors with Julia youtu.be/P0I-oXGlbnM
πŸ‘οΈŽ 13
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/mindaslab
πŸ“…οΈŽ Jul 31 2021
🚨︎ report
Happy birthday k-nearest neighbor

Today marks 70 years since the inception of k-nearest neighbor by Evelyn Fix and Joseph Hodges - the first true machine learning algorithm. Happy birthday, k-nearest neighbor.

πŸ‘οΈŽ 161
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/karimNanvour
πŸ“…οΈŽ Feb 03 2021
🚨︎ report
K-Nearest Neighbor from Scratch in Python youtu.be/XvjydYhoVRs
πŸ‘οΈŽ 36
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/research_pie
πŸ“…οΈŽ May 31 2021
🚨︎ report
Is this a "rustic" K-Nearest Neighbors function?

Hello fellow Rustaceans - I am still getting the hang of the language (coming as a scientist used to high-level languages like Python). Would anyone be willing to help me determine if this is an appropriately "rustic" implementation of a K-nearest neighbors function?

This is for multivariate timeseries analysis and manifold learning, where we can be dealing with tens of thousands of datapoints, so I wanted something that would be memory-lite (even if it took a little longer to run). To that end, rather than representing the full time x time distance matrix, I'm only ever storing the k-nearest neighbors in a Hash Table and updating that every time we see a new distance smaller than our kth-nearest neighbor.

I've checked it against the scipy.spatial.distance function pdist() and the numbers it returns are right. I'm just wondering if there's anything Rust-y that I might take advantage of.

EDIT - sorry there's no syntax highlighting - it's a lot easier to read in my text editor.

pub fn k_nearest_neighbors(idx: usize, x: &Array2<f64>, k: usize, distance: String) -> HashMap<usize, f64> {
    // For a multidimensional time-series, computes the K nearest neighbors of a given index.
    // x must be in processes x time format.
    // Returns a HashMap: keys (usize) correspond to the indices of the nearest neighbors to idx in x,
    // Values are the distance. 
    // Distance can be euclidean, manhatten, chebyshev, or cosine.
    let mut lookup: HashMap<usize, f64> = HashMap::with_capacity(k);
    let idx_col: Array1<f64> = x.index_axis(Axis(1), idx).to_owned();
    let mut max: f64 = 0.0;
    let mut max_idx: usize = 0;

    for i in 0..x.ncols() {
        if i != idx { // Ignore the self-distance (will always be 0, makes things difficult)
            let mut dist = 0.0;
            let i_col: Array1<f64> = x.index_axis(Axis(1), i).to_owned();

            if distance == "manhatten" { //Selecting the given distance measure.
                dist += manhatten_distance(&idx_col, &i_col);
            } else if distance == "euclidean" {
                dist += euclidean_distance(&idx_col, &i_col);
            } else if distance == "chebyshev" {
                dist += chebyshev_distance(&idx_col, &i_col);
            } else {
                dist += cosine_distance(&idx_col, &i_col);
            }
... keep reading on reddit ➡

👍︎ 7
👀︎ u/antichain
📅︎ Mar 05 2021
A Visual Guide to K Nearest Neighbors youtube.com/watch?v=jw5Lh…
πŸ‘οΈŽ 120
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/vaishak2future
πŸ“…οΈŽ Dec 25 2020
🚨︎ report
The k-Nearest Neighbors (kNN) Algorithm in Python – Real Python realpython.com/knn-python…
πŸ‘οΈŽ 24
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/endlesstrax
πŸ“…οΈŽ Apr 07 2021
🚨︎ report
Won't you be my k-nearest neighbor?
πŸ‘οΈŽ 16
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/CuriosityStrikesBack
πŸ“…οΈŽ Apr 16 2021
🚨︎ report
The Flag of Japan but it's a representation of the k-Nearest Neighbor Algorithm
πŸ‘οΈŽ 14
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/Neon244
πŸ“…οΈŽ Apr 12 2021
🚨︎ report
[TASK] Apply k-Nearest Neighbors algorithm to a set of data stored in a json file (in python)

Hello everyone!

I have a Python project that I need some help finishing. I want to create an algorithm that uses supervised learning to categorize songs into genres.

So far I have done the first part of the project, which is to analyze the songs using MFCCs and store the data in a .json file. The JSON file contains the names of the genres, the label of each song (e.g. pop is labeled as "1", so every pop song has the label "1"), and the MFCC values for each song.

Anyway, what's left now is to use the kNN method to actually train the algorithm to categorize the songs. And that's what I want you to do: create an algorithm in Python that uses the data from the JSON file and the kNN method to train the classifier.

If you don't know about MFCCs, that shouldn't be a problem: each one is just an array of (a lot of) real values, so it should be pretty easy to calculate the Euclidean distance. DM me and we can discuss the type of data in the JSON file and MFCCs a bit more if you are not sure you can do it. There is a lot of info online on both MFCCs and how to implement kNN; I just don't have the time to deal with it.
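
For reference, a minimal sketch of the kNN step being described; the file name ("data.json"), the JSON keys ("mfcc", "labels"), and the assumption that every song has the same MFCC shape are placeholders, not details from the actual project:

    import json
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    with open("data.json") as f:          # placeholder file name
        data = json.load(f)

    # Flatten each song's MFCC matrix into one feature vector (assumes equal shapes).
    X = np.array(data["mfcc"]).reshape(len(data["labels"]), -1)
    y = np.array(data["labels"])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
    knn.fit(X_train, y_train)
    print("test accuracy:", knn.score(X_test, y_test))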

$bid here and DM me with your price for your work. Payment only through PayPal, unfortunately.

If you read all of the wall of text above, thank you for your patience lol.

πŸ‘οΈŽ 3
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/Grateful-Shred
πŸ“…οΈŽ Jan 24 2021
🚨︎ report
How does k nearest neighbors work? | Machine Learning Basics youtu.be/0p0o5cmgLdE
πŸ‘οΈŽ 399
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/wstcpyt1988
πŸ“…οΈŽ Apr 26 2020
🚨︎ report
Machine Learning Concepts with Visualization and Code - K-Nearest Neighbor Algorithm v.redd.it/d8brsn1wvab41
πŸ‘οΈŽ 315
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/wstcpyt1988
πŸ“…οΈŽ Jan 17 2020
🚨︎ report
Why is K-Nearest neighbor considered to be a type of machine learning?

The only parameter in K-NN is K and that is not learned by the algorithm; it is just selected using guess and check by the researcher.

What am I missing?
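
For what it's worth, the guess-and-check for k is usually formalized as cross-validation over a grid of candidate values. A minimal sketch, with X and y as placeholder training arrays:

    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    # X, y are placeholder training features and labels.
    search = GridSearchCV(
        KNeighborsClassifier(),
        param_grid={"n_neighbors": list(range(1, 31))},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_)  # best k found by cross-validation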

πŸ‘οΈŽ 25
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/no_username_for_me
πŸ“…οΈŽ Feb 14 2020
🚨︎ report
Killer Instinct Shadow AI (k-Nearest Neighbors) youtube.com/watch?v=9yydY…
πŸ‘οΈŽ 5
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/HaskellHystericMonad
πŸ“…οΈŽ Nov 16 2020
🚨︎ report
K Nearest Neighbor in Caret (R)

Hello,

library(caret)  # provides train() and trainControl()

set.seed(522)
# Tune k over 1:30 with leave-one-out cross-validation,
# centering and scaling the predictors first.
model <- train(Class~., data = network_measures, method = 'knn',
               trControl = trainControl(method = 'LOOCV'),
               preProcess = c('center','scale'),
               tuneGrid = expand.grid(k = 1:30))
k <- model$bestTune[[1]]        # best k found by the search
accuracy <- model$results[k,2]  # LOOCV accuracy reported for that k
k
accuracy

So, as you can see, here I am searching for the best k. My dataset has 39 subjects, so k can range from 1 to 38. If I run the script with k = 1:38, I get 82% accuracy with k = 4. If I run the script with k = 1:30, I get 69% accuracy with k = 2. If I run the script with k = 1:32, I get 74% accuracy with k = 4. Is this normal? I would expect to always get 82% accuracy as long as k = 4 was included.

πŸ‘οΈŽ 5
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/anakreontas
πŸ“…οΈŽ Oct 28 2020
🚨︎ report
k-nearest neighbors from scratch in pure Python

Hey everyone,

I'm currently implementing core machine learning algorithms from scratch in pure Python. While doing so, I decided to consolidate and share my learnings via dedicated blog posts. The main goal is to explain each algorithm in an intuitive and playful way while turning the insights into code.

Today I’ve just published the second post which explains the k-nearest neighbors (k-NN) algorithm: https://philippmuens.com/k-nearest-neighbors-from-scratch/

Links to the Jupyter Notebooks can be found here: https://github.com/pmuens/lab#implementations

I hope that you enjoy it! More posts will follow in the upcoming weeks / months.
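
Not the implementation from the post (see the links above), but for a quick taste, here is a rough standard-library-only sketch of the same from-scratch idea:

    from collections import Counter
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def knn_predict(train_points, train_labels, query, k=3):
        """Majority vote among the k training points closest to the query."""
        by_distance = sorted(zip(train_points, train_labels),
                             key=lambda pair: euclidean(pair[0], query))
        top_k = [label for _, label in by_distance[:k]]
        return Counter(top_k).most_common(1)[0][0]

    # Tiny made-up example: two classes in 2-D.
    points = [(1, 1), (1, 2), (5, 5), (6, 5)]
    labels = ["a", "a", "b", "b"]
    print(knn_predict(points, labels, (5, 6), k=3))  # -> b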

πŸ‘οΈŽ 18
πŸ’¬οΈŽ
πŸ‘€οΈŽ u/pmuens
πŸ“…οΈŽ Mar 17 2020
🚨︎ report
