Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network youtu.be/b_w4eEiogaE
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/research_pie
πŸ“…︎ Nov 23 2021
Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network youtu.be/b_w4eEiogaE
πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/research_pie
πŸ“…︎ Dec 10 2021
Feedforward speed of deep neural networks

When implementing neural networks for real-time control systems, would the feedforward process ever pose a problem?

For instance, a PID controller would be nearly instantaneous, as very few calculations have to take place. But would a deep neural network consisting of hundreds or thousands of units, used in place of the PID, take too long to feed forward through and therefore cause problems?

I imagine, of course, that this would largely be a function of the neural network's size, but I am wondering where the limits would be. If anyone could share their knowledge or point me in the direction of any studies/papers, I would appreciate it.
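For a rough sense of scale, here is a plain NumPy timing sketch for a network of roughly the size described above. The layer sizes, tanh activations, and single-sample input are my own assumptions for illustration; actual latency depends heavily on hardware, framework, batch size, and scheduling jitter, but a fully connected network of this size typically completes a forward pass in well under a millisecond on a desktop CPU.

```python
# Rough, illustrative timing of a dense feedforward pass (a sketch, not a
# real-time latency guarantee; hardware, framework, and jitter all matter).
import time
import numpy as np

def forward(x, weights, biases):
    """One feedforward pass through fully connected layers with tanh activations."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)
    return a

# Hypothetical controller-sized network: 10 inputs, two hidden layers of
# 1000 units each, 1 output (assumed sizes, not taken from the post).
sizes = [10, 1000, 1000, 1]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = rng.standard_normal(sizes[0])
forward(x, weights, biases)  # warm-up

n_trials = 1000
start = time.perf_counter()
for _ in range(n_trials):
    forward(x, weights, biases)
mean_time = (time.perf_counter() - start) / n_trials
print(f"mean forward-pass time: {mean_time * 1e6:.1f} microseconds")
```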

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Cogitarius
πŸ“…︎ Nov 22 2021
Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network youtu.be/b_w4eEiogaE
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/research_pie
πŸ“…︎ Nov 23 2021
[D] [R] Spiking Neural Networks and multiplexing feedforward and feedback signals

Can anyone help me in finding examples or articles on SNNs and multiplexing feedforward and feedback signals similar to the following article?

https://www.nature.com/articles/s41593-021-00857-x

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/Waste_Screen_5803
πŸ“…︎ Sep 29 2021
Feedforward Controller based on Gaussian Process Regression or Artificial Neural Networks

Hi Everyone,

Last semester I took my first course in machine learning, called Machine Learning for Control Systems. The topics covered approximating transfer functions using Gaussian Process Regression (GPR) and Artificial Neural Networks (ANNs), and controlling systems using reinforcement learning.

The GPR and ANN solutions were very good at approximating functions. However, I don't quite understand how I can build a feedforward controller from these estimated transfer functions. Pretty much all of these transfer functions are difficult to model (because they are very non-linear). Ideally I would keep the model non-linear, so that it can correct for the nonlinearities of the true system.

The question thus remains: "How can we make a feedforward controller based on a function estimate made with a GPR or ANN?"

Is there anyone here who has done this before?

Many thanks in advance!
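One common recipe (offered here only as an illustrative sketch, not something taken from the course): train the GPR/ANN on the inverse dynamics, i.e. map the desired output trajectory to the input that produces it, then use its prediction as the feedforward signal while an ordinary feedback loop (e.g. a PID) corrects the residual error. The toy plant, window features, and scikit-learn MLP below are all assumptions of mine.

```python
# Hedged sketch: learn an inverse model (output window -> input) from logged
# data and use it to compute a feedforward signal for a reference trajectory.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical logged plant data: inputs u[k] and outputs y[k] of a toy
# nonlinear plant (a stand-in for the real, hard-to-model system).
u_log = rng.uniform(-1, 1, 2000)
y_log = np.tanh(np.convolve(u_log, [0.5, 0.3, 0.2], mode="same"))

# Inverse model: predict u[k] from a short window of outputs around step k.
X = np.stack([y_log[k - 1:k + 2] for k in range(1, len(y_log) - 1)])
y = u_log[1:-1]
inverse_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
inverse_model.fit(X, y)

# At run time, feed the *desired* output trajectory through the inverse model
# to obtain the feedforward input; feedback handles the remaining mismatch.
y_ref = np.sin(np.linspace(0, 2 * np.pi, 100))
X_ref = np.stack([y_ref[k - 1:k + 2] for k in range(1, len(y_ref) - 1)])
u_ff = inverse_model.predict(X_ref)
print(u_ff.shape)
```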

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/hidjedewitje
πŸ“…︎ Jul 11 2021
[R] Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes

Hi r/ML,

I'm writing a series of papers on a new way to think about neural networks called Tensor Programs that I'm really excited about. The first paper was published in NeurIPS 2019, but I figured it's never too late to share with the community! I'll put the paper link here and also say a few words about the content.

Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes

paper: https://arxiv.org/abs/1910.12478

code: https://github.com/thegregyang/GP4A

What is a Gaussian process? You can think of a GP as just a fancy way of saying "multivariate Gaussian distribution". Thus our result says: for a wide, randomly initialized network f and two inputs x, y, the distribution of (f(x), f(y)) looks like some 2D Gaussian. Similarly, for k inputs x_1, x_2, ..., x_k, the distribution of (f(x_1), ..., f(x_k)) looks like a k-dimensional Gaussian. The covariance of these Gaussians is the most important data associated with the GP, and is called the kernel of the GP.
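As a quick empirical illustration of that statement (my own NumPy sketch, not code from the paper or the GP4A repo), one can sample many wide, randomly initialized one-hidden-layer ReLU networks, evaluate each on the same two inputs, and check that the joint distribution of the two outputs looks like a 2D Gaussian whose empirical covariance estimates the kernel:

```python
# Draw many wide random MLPs f and look at the joint distribution of
# (f(x), f(y)) for two fixed inputs x, y.
import numpy as np

def sample_wide_mlp_outputs(inputs, width=4096, rng=None):
    """Sample one random network and return its scalar output on each input."""
    d = inputs.shape[1]
    W1 = rng.standard_normal((width, d)) / np.sqrt(d)   # 1/sqrt(fan_in) init
    b1 = rng.standard_normal(width)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    hidden = np.maximum(inputs @ W1.T + b1, 0.0)         # ReLU
    return hidden @ w2

rng = np.random.default_rng(0)
xs = np.array([[1.0, 0.0, -1.0],
               [0.5, 2.0, 0.0]])                         # the two inputs x, y

outputs = np.array([sample_wide_mlp_outputs(xs, rng=rng) for _ in range(5000)])
print("empirical mean:", outputs.mean(axis=0))           # close to 0
print("empirical kernel (covariance):")
print(np.cov(outputs.T))                                 # 2x2, approximates the GP kernel
```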

NNGP Correspondence: This correspondence between wide neural networks (NN) and Gaussian processes (GP) has a long history, starting with Radford Neal in 1994, and extended over the years (e.g. 1 2 3 4 5). Our paper shows this correspondence is architecturally universal, as the title says.

Architectural universality: This architectural universality will be a recurring pattern in this series of papers, and is one of the reasons I'm really excited about it: theoretical understanding of deep learning has always had a problem scaling up results beyond 1- or multi-layer perceptrons, and this gap grows wider by the day as mainstream deep learning moves to transformers and beyond. With tensor programs, for the first time, you really just need to show your result once and it's true for all architectures. It's like a CUDA for theory.

OK so, what is a tensor program? In a gist, it's just a sequence of computations composed of matrix multiplications and coordinatewise nonlinearities --- simple, right? It turns out that practically all modern and classical neural networks can be written in this way (this sounds stupidly obvious, but I'm hiding some details here; see the paper). This expressivity is half of the power of tensor

... keep reading on reddit ➑

πŸ‘︎ 92
πŸ’¬︎
πŸ‘€︎ u/thegregyang
πŸ“…︎ Jul 31 2020
I made software to visualize a feedforward neural network with pure Kotlin and Swing
πŸ‘︎ 55
πŸ’¬︎
πŸ‘€︎ u/longuyen2306
πŸ“…︎ Jan 11 2021
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 431
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Nov 30 2019
[D] What is the difference between deep learning and feedforward neural network?

I think you have already seen this question in the past. In short, deep learning is a type of machine learning; it is mainly used for computer vision in general.

A feedforward neural network is a type of machine learning model; it is mainly used for classification in general.

So what, then, is the difference between deep learning and a feedforward neural network?

πŸ‘︎ 5
πŸ’¬︎
πŸ“…︎ Aug 03 2020
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 375
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks by Keyulu Xu et al. deepai.org/publication/ho…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/deep_ai
πŸ“…︎ Sep 27 2020
[D] Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better.

I have been studying machine learning over the last few months, and I wanted to really understand everything that goes on in a basic neural network (excluding the many specialized architectures). Therefore, I took the time to write this long article to explain what I have learned. In particular, the post is intentionally very extensive and goes into the smaller details; this is to have everything in one place. As the site says, it is machine learning from scratch, and I share what I have learned.

The particular reason for posting here is that I hope someone else can learn from this. The goal is to share the knowledge in the most easily absorbable way possible. I tried to visualize much of the process going on in neural networks, but I also went through the math, down to the individual partial derivatives.

This was quite a journey; it took about a month to read everything, write it down, make it all make sense, and create the graphics.

Regardless, here is the link. Any constructive feedback is appreciated.

https://mlfromscratch.com/neural-networks-explained/
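For readers who want the gist in code, here is a minimal, self-contained sketch of one feedforward pass, backpropagation via the chain rule, and a gradient-descent update. It is my own condensed example, not code taken from the article.

```python
# Tiny one-hidden-layer network trained with hand-written backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))           # 8 samples, 4 features
t = rng.standard_normal((8, 1))           # regression targets

W1, b1 = rng.standard_normal((4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)
lr = 0.1

for step in range(200):
    # Feedforward
    z1 = X @ W1 + b1
    a1 = np.tanh(z1)
    y = a1 @ W2 + b2
    loss = np.mean((y - t) ** 2)

    # Backpropagation (chain rule, the partial derivatives the article walks through)
    dy = 2 * (y - t) / len(X)             # dL/dy
    dW2 = a1.T @ dy
    db2 = dy.sum(axis=0)
    da1 = dy @ W2.T
    dz1 = da1 * (1 - a1 ** 2)             # tanh derivative
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Optimization: plain gradient descent
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```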

πŸ‘︎ 136
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
Neural network controlled homing rocket, a personal learning project. The rocket has forward and rotation thrusters which are controlled by a small (feedforward) neural network taking the position, velocity etc. as inputs. An evolutionary algorithm was used to generate the NN weights. imgur.com/gallery/LtFI5Cw
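For context, the sketch below shows the bare bones of how an evolutionary algorithm can produce the weights of a small feedforward controller network. The network size, mutation scale, and the stubbed-out fitness function are placeholders of mine, not the author's actual setup (which evaluates fitness by simulating the rocket).

```python
# Minimal evolutionary search over flattened neural-network weights.
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS = 6 * 8 + 8 * 2   # e.g. 6 inputs -> 8 hidden -> 2 thruster outputs (assumed)

def fitness(weights):
    """Placeholder: run the rocket simulation with these weights and return a
    score (e.g. negative final distance to target). Stubbed with a dummy here."""
    return -np.sum(weights ** 2)

population = [rng.standard_normal(N_WEIGHTS) for _ in range(50)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                                   # keep the best 20%
    population = [p + 0.1 * rng.standard_normal(N_WEIGHTS)  # mutated offspring
                  for p in parents for _ in range(5)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```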
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/rhkibria
πŸ“…︎ Sep 09 2020
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 258
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
"Deep Neural Network from Scratch in Python | Fully Connected Feedforward Neural Network" by CodeThisCodeThat youtube.com/watch?v=b_w4e…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/aivideos
πŸ“…︎ Apr 28 2020
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 29
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
[R] On the Margin Theory of Feedforward Neural Networks

"We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for two-layer networks."

Margin theory results for multi-layer FF neural nets. A hint towards why over-parametrization is better.

I'm interested to see the feedback on this paper.

https://arxiv.org/abs/1810.05369

πŸ‘︎ 26
πŸ’¬︎
πŸ‘€︎ u/gohu_cd
πŸ“…︎ Oct 22 2018
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 41
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
Neural Networks: Feedforward and Backpropagation Explained mlfromscratch.com/neural-…
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/pmz
πŸ“…︎ Dec 08 2019
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 43
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
Where can I download handwritten digits from 0-9 for my feedforward neural network study?
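The standard answer is MNIST: 70,000 labeled 28x28 grayscale images of handwritten digits 0-9. One convenient way to fetch it, assuming TensorFlow/Keras is installed (the flattening and scaling below are just one common preprocessing choice for a feedforward network):

```python
# Load MNIST and flatten each 28x28 image into a 784-dimensional vector.
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
print(x_train.shape, y_train.shape)   # (60000, 784) (60000,)
```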
πŸ‘︎ 13
πŸ’¬︎
πŸ‘€︎ u/masterbruno11
πŸ“…︎ Nov 03 2018
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 40
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Aug 05 2019
Explaining Feedforward, Backpropagation and Optimization: The Math Explained Clearly with Visualizations. I took the time to write this long article (>5k words), and I hope it helps someone understand neural networks better. mlfromscratch.com/neural-…
πŸ‘︎ 29
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Nov 30 2019
Neural Networks Explained: Feedforward and Backpropagation. I wrote this long and detailed article, and now Towards Data Science has just published it. Thought I would share it, for anyone looking to grasp NNs.

> Link: Neural networks: Feedforward and Backpropagation

Hey r/datascience,

I wrote this long article explaining the basics of neural networks, and I just got it published on Towards Data Science. I hope that you can learn something from it and have a discussion with me. I'm always open to feedback, questions, or any other comments.

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Sep 26 2019
Neural Networks Explained: Feedforward and Backpropagation. I wrote this long and detailed article, and now Towards Data Science has just published it. Thought I would share it, for anyone looking to grasp NNs. towardsdatascience.com/ne…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/permalip
πŸ“…︎ Sep 26 2019
[D] Is the output of a feedforward neural network with bounded activations in the hidden layers, bounded?

I'm new here and on Reddit in general, so I hope I'm not making any major mistakes.

I have a simple question: consider a generic feedforward neural network. Suppose that all the hidden layers have bounded activation functions (e.g., tanh). Now, if the output layer has a bounded activation function, of course the NN has a bounded output. However, suppose that the last layer has an unbounded activation, for example a linear activation. I think the output will still be bounded:

y = f(a_1, ..., a_N) = sum_{i=1}^N w_i a_i + b

where N is the number of units in the last hidden layer, and a_1, ..., a_N are its activations, which are bounded (|a_i| <= 1 for tanh). Let w = max(|w_1|, ..., |w_N|). Then

|y| <= N*w + |b|

i.e., the output is bounded. Is this correct?
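Writing the step out, this is just the triangle inequality, with A denoting the bound on the hidden activations (A = 1 for tanh):

```latex
|y| = \Bigl|\sum_{i=1}^{N} w_i a_i + b\Bigr|
    \le \sum_{i=1}^{N} |w_i|\,|a_i| + |b|
    \le N\,w\,A + |b|,
\qquad w = \max_i |w_i|, \quad A = \sup_i |a_i|.
```

So yes: for fixed, finite weights and bounded hidden activations, the output of a linear output layer is bounded.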

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/TimelyCrazy
πŸ“…︎ Jul 23 2018
Build a Feedforward Neural Network with Backpropagation in Python enlight.ml/build-a-neural…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/samayshamdasani
πŸ“…︎ Aug 06 2017
Total newbie questions about Feedforward neural network on malware classification problem

I'm currently working on my final-year project, Malware Classification Using Neural Nets. I have performed my data gathering and labeling (into benign or malicious files), and I was planning on using the Windows API calls made by each sample to create a binary feature vector.

However, as I iterated through my entire sample set (18,000 malicious, 4,000 benign), the total list of unique API calls came to over 65,000, which raises a concern about dimensionality.

Now I was wondering: is it feasible to follow this course and create a binary feature vector of the API calls present in each sample and pass it into my neural net, or should I be looking at other features instead, or perform some form of dimensionality reduction to shrink the feature vector?

I am also clueless about the number of hidden layers and neurons per layer that I should be using... is there a principled way to determine this, or should I just experiment until I land on reasonable numbers? Initially, I was planning on two hidden layers with 500 neurons each.

Do forgive me for my total newbie questions ...
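On the dimensionality concern: one low-effort option is the hashing trick, which maps each API-call name into a fixed-size vector, so the network's input dimension no longer grows with the number of unique calls (65,000+ here). The sketch below uses scikit-learn's FeatureHasher purely as an illustration; the 4,096-dimensional target and the example API names are assumptions of mine, not a recommendation tuned to this dataset.

```python
# Hash each sample's list of API-call names into a fixed-size feature vector.
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=4096, input_type="string", alternate_sign=False)

samples = [
    ["CreateFileW", "WriteFile", "RegSetValueExA"],                # hypothetical traces
    ["VirtualAlloc", "CreateRemoteThread", "WriteProcessMemory"],
]
X = hasher.transform(samples)           # scipy sparse matrix, shape (2, 4096)
X_binary = (X > 0).astype("float32")    # presence/absence features for the net
print(X_binary.shape)
```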

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/MKazemHN
πŸ“…︎ Jul 27 2017
[R] How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/keyulu
πŸ“…︎ Oct 14 2020
