A list of puns related to "Regress Problem"
How to even respond to this one...
Hi all,
In relation to a question I asked a few days ago (linked after this paragraph), I have done a bit of digging and it turns out I was referring to the epistemological problem of infinite regress. https://www.reddit.com/r/askphilosophy/comments/lpczx1/is_the_universe_as_infinite_microscopically_as_it/?utm_medium=android_app&utm_source=share
To visualize my conundrum, please imagine a super microscope/telescope (depending on which way you look at it) zooming in from the moon to earth, to a country, to a city, to a village, to a dog, to a flea, to a flea on a flea, ad infinitum.
I have faced some denial or disinterest when discussing the notion, but the implications, although tricky to grasp, appear to be unbelievably significant: the true meaning of infinity.
Please see the epistemology and other disciplines section of the Wikipedia page I'm linking now:
https://en.m.wikipedia.org/wiki/Turtles_all_the_way_down
Johann Gottlieb Fichte on the subject: "Either there is no immediate certainty at all, and then our knowledge forms many series or one infinite series, wherein each theorem is derived from a higher one, and this again from a higher one, etc., etc. We build our houses on the earth, the earth rests on an elephant, the elephant on a tortoise, the tortoise again--who knows on what?-- and so on ad infinitum. True, if our knowledge is thus constituted, we can not alter it; but neither have we, then, any firm knowledge. We may have gone back to a certain link of our series, and have found every thing firm up to this link; but who can guarantee us that, if we go further back, we may not find it ungrounded, and shall thus have to abandon it. Our certainty is only assumed, and we can never be sure of it for a single following day."
This is admittedly quite abstract and out there, but to counteract that I would like to add that Stephen Hawking also references this in A Brief History of Time.
Tell me your thoughts on the matter. Thanks!
People have always puzzled over how the universe came into existence, because whatever the first cause was, there must have been a cause responsible for that cause, and so on. And if space and time did not exist before the Big Bang, why would anything happen at all? So it has been asked.
But there is something rather than nothing; the problem of infinite regress, for some reason, just doesn't apply. Does that mean the problem is solvable, but we just don't know how?
If you think about it, isn't the most logical explanation that this all runs on a computer? Look at all the advanced simulations available on the market now or soon: GTA V, Microsoft Flight Simulator (2019), Red Dead Redemption, Universe Sandbox, Battlefield V, Cyberpunk 2077, Half-Life: Alyx... (and the list goes on). And the arrival of the RTX 2080 and ray tracing.
Considering this, must not scientists and the military have access to even more advanced simulations? And in 10 years, what will be possible then?
We know humans can create this on a computer, so why not come to this conclusion? If we use Occam's Razor and choose the most plausible and least complex solution to a philosophical problem, we ought to choose Simulation Theory for the infinite regress problem.
I tried reading about the problem of infinite regress but I'm still no more knowledgeable than when I started.
A large number of philosophical arguments for the existence of God seem to rely upon saying something along the lines of 'since there cannot be infinite regress...' to prove their point. I don't see why it should be so obvious that infinite regress is impossible. Can someone explain?
I know this is a sophomoric topic, but from an epistemic standpoint I see no issue with it: if you're a realist, then yes, I suppose you should terminate the chain with an uncaused cause.
As an antirealist, I see no problem with it; people who decide on this termination are just building a wall and telling everyone there is nothing on the other side.
As I understand it, the basis for the Cosmological Argument is that there must be some sort of uncaused cause, which we can call God, because the idea of infinite regress is so untenable.
My question would be, why is this such a problem? Why is it that an infinite chain of causes and effects is such a problem that we need to have a prime mover to explain it away? Is it because the concepts associated with infinity are so beyond our ken as humans that we don't want to have to deal with them?
To be honest the Cosmological Argument is quite convincing, but I seem to be missing something here.
What are the arguments for the various solutions to the regress problem? Is one solution much more popular than the others among philosophers?
I am working on a difference-in-differences (DiD) regression model in which I just want to find out whether the treatment dummy has a significant positive or negative effect on the treatment area. As a beginner, I'm facing a problem: I can't tell whether I need to perform a multicollinearity test for my independent variables or not. I have run a correlation heat map and VIF, in which 3 of my important independent variables come out highly correlated, and I don't want to omit them from my DiD model.
https://preview.redd.it/cc91qn9nevd81.png?width=551&format=png&auto=webp&s=c1e3c50c4b022ec98966f4e816dbe113262105e0
https://preview.redd.it/sajsoupaevd81.png?width=748&format=png&auto=webp&s=334519f89bc0cd71845f5361f7bf34571fd28ea8
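For readers wondering what such a check might look like in practice, here is a minimal sketch with synthetic data and hypothetical column names (`treated`, `post`, `treated_post`, `control_1`): a VIF table followed by the DiD regression itself. Note that high VIF among the treatment dummy, the period dummy, and their interaction is expected by construction and is not by itself a reason to drop them.

```python
# Minimal sketch (synthetic data, hypothetical column names): VIF check
# followed by the DiD regression. The coefficient on treated_post is the
# treatment effect of interest.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),      # treatment-group dummy
    "post": rng.integers(0, 2, n),         # post-period dummy
    "control_1": rng.normal(size=n),       # an extra control variable
})
df["treated_post"] = df["treated"] * df["post"]
df["outcome"] = (1 + 0.5 * df["treated"] + 0.3 * df["post"]
                 + 0.8 * df["treated_post"] + 0.2 * df["control_1"]
                 + rng.normal(size=n))

X = sm.add_constant(df[["treated", "post", "treated_post", "control_1"]])

# Variance inflation factors for each regressor (including the constant).
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns, name="VIF")
print(vif)

# DiD regression with heteroskedasticity-robust standard errors.
model = sm.OLS(df["outcome"], X).fit(cov_type="HC1")
print(model.summary())
```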
Specifically, I'm working on an applied ML problem where I need to create a surrogate model for optimization of just one feature. Imagine my input data is a 2D physical plane with a vector field superimposed. If the vector field is highly divergent, I get a big output number, Omega. I care more that my predicted Omega has a high R-squared value than low absolute error. Now, I've been able to get a CNN to work for a simple 2D grid with reasonable results. But I need my model to handle the case where the input shape isn't always a simple 2D plane: it needs to handle holes in the physical layer the vector field is superimposed on. Sometimes there will be holes and sometimes there won't, and the holes can move.
I've briefly looked into Spatial Transformer Networks, but that work is mostly centered on computer vision. So much research in this area is for computer vision, and I am playing research-paper hopscotch trying to find something to help me. Does anyone have any recommendations for current research? Papers or keywords to search for would be appreciated. Thank you in advance.
Seriously though, I wish there were an ML community that focused on everything in ML but computer vision.
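One simple baseline for domains with holes, sketched below under assumptions that go beyond the post (PyTorch, a binary mask channel marking valid cells, a global-pooling head): zero out the field inside the holes and feed the mask as an extra input channel, so the network sees where the domain is valid.

```python
# Minimal sketch (an assumption, not the poster's model): a small CNN that
# regresses a scalar Omega from a 2D vector field, with a 0/1 mask channel
# marking holes in the physical domain.
import torch
import torch.nn as nn

class OmegaRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Input channels: u-component, v-component, and the validity mask.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling makes the head size-agnostic
        )
        self.head = nn.Linear(64, 1)

    def forward(self, field, mask):
        # field: (B, 2, H, W) vector field; mask: (B, 1, H, W), 1 = valid cell.
        x = torch.cat([field * mask, mask], dim=1)
        return self.head(self.features(x).flatten(1)).squeeze(-1)

# Usage on a dummy batch:
model = OmegaRegressor()
field = torch.randn(4, 2, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.1).float()
omega_pred = model(field, mask)    # shape (4,)
```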
This is probably a stupid question, apologies if it's been asked before.
How can a lagged target feature (i.e., the value of the target yesterday) or a rolling average (i.e., the average of the past 7 days) be used as a feature to predict a value?
For instance, when predicting 14 days into the future, we won't have yesterday's value for 13 of those days. Or if we use a train/test split, our features will contain actual information about the target in the test set, so won't that overestimate accuracy?
Just getting started in ML, so I've probably misunderstood something!
Thanks for any help!
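A minimal pandas sketch of the usual answer, with hypothetical column names: lag features are shifted by at least the forecast horizon so they are known at prediction time, and the train/test split is chronological to avoid leaking target information into the test set.

```python
# Minimal sketch (hypothetical column names): building lag features that are
# still available when forecasting 14 days ahead, and splitting by time so
# the test set never leaks into the features.
import pandas as pd

df = pd.DataFrame({"y": range(100)},
                  index=pd.date_range("2021-01-01", periods=100, freq="D"))

HORIZON = 14
# Shift by at least the horizon: to predict y at day t, only use values up to
# day t - 14, which are actually known at prediction time.
df["lag_14"] = df["y"].shift(HORIZON)
df["rolling_7_lagged"] = df["y"].shift(HORIZON).rolling(7).mean()
df = df.dropna()

# Chronological split instead of a random train_test_split.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]
```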
Sorry if this is a very disorganized question, but I am not sure how to clear up my confusion on this. In a neural network, the number of training epochs can be set manually, and you can set exact parameters and see how long it takes to minimize training error. In a maximum margin classifier like SVM, however, there can be infinitely many solutions to a linear classification problem. Is it like gradient descent, where calculating a gradient inherently shows you the right direction to move in to minimize error, or is solving the dual problem to calculate the optimal hyperplane a matter of random trial and error? Similarly, for a linear regression problem, does the program have a way of making its first guess at a best-fit line? How does it go from there to actually find the minimum OLS solution?
What I assume is that in the beginning, the initial weights are randomly guessed, and then the algorithm goes from there. It may test some predefined incremental shift of weights in a predefined number of directions, and whichever shift decreases error the most may be the one it moves to, in order to shift the weights again, and so on. In SVM, you can set parameters like gamma and C, and then the algorithm uses those values and also guesses initial vectors. How small can these increments be made in order to fine-tune the actual optimal decision boundary?
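A minimal sketch of the picture for linear regression, using NumPy and synthetic data: start from random weights and repeatedly step in the direction of the negative gradient of the squared error; no trial-and-error search over directions is needed, because the gradient itself gives the steepest-descent direction. (OLS also has a closed-form solution, shown at the end as a check; SVMs are likewise solved with convex-optimization routines rather than random guessing.)

```python
# Minimal sketch: gradient descent for ordinary least squares, starting from
# random weights. The gradient, not trial and error, gives the direction that
# reduces the squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = rng.normal(size=3)          # random initial guess
lr = 0.05
for _ in range(1000):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                          # step downhill

print(w)                                      # close to true_w
print(np.linalg.lstsq(X, y, rcond=None)[0])   # closed-form OLS as a check
```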
I am working on a regression problem where I need to get predictions from time-series data, and I would ideally like to use some sort of machine learning algorithm for this.
I've been given the task of fitting an autoencoder to a large dataset of 600-dimensional vectors. After constructing some relatively simple AEs (e.g., nn.Linear(600) --> nn.Linear(150) --> nn.Linear(5), then the reverse), I've found that it's incredibly hard to converge on a non-trivial solution. The vectors have been normalized so that their means lie around 0.50 and their values are bounded between 0 and 1. I have attached an image illustrating a few of the vectors within my dataset.
I'm finding that since these vectors essentially look like noise centered around some global mean value, any model I attempt to train collapses onto a noisy curve centered around that mean. Crucially, these vectors all represent something meaningful, so smoothing or other preprocessing tricks are out of the question. Sadly, there doesn't seem to be much structure within the data, so a CNN model is also out.
This is basically a high-dimensional regression problem with diverse data. Does anyone have any tips for training an AE model with such data? Is it even possible? Any tips or recommendations would be great!
EDIT: I realize I was vague. These vectors are the Fourier transform of the original signal, where the first 300 values are the real portion and the second 300 values are the corresponding imaginary (phase) information.
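For reference, here is a runnable sketch of the kind of autoencoder described. The 600 -> 150 -> 5 layer sizes follow the post; the ReLU nonlinearities and the Sigmoid output (since the data are bounded in [0, 1]) are assumptions, and this illustrates the architecture rather than a fix for the collapse.

```python
# Minimal sketch of the autoencoder described (600 -> 150 -> 5 -> 150 -> 600),
# written as runnable PyTorch. Nonlinearities and the Sigmoid output layer
# are assumptions.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(600, 150), nn.ReLU(),
            nn.Linear(150, 5),
        )
        self.decoder = nn.Sequential(
            nn.Linear(5, 150), nn.ReLU(),
            nn.Linear(150, 600), nn.Sigmoid(),   # outputs bounded in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AE()
x = torch.rand(32, 600)                       # stand-in for the normalized vectors
loss = nn.functional.mse_loss(model(x), x)    # reconstruction loss
loss.backward()
```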
Hi everybody! I would like to study physical systems with machine learning. In particular I have a lot of features that describe the time evolution of these systems. This time evolution is described by a sequence of arrays, and each array corresponds to the description of the system in an instant in time.
For example, let's say I can describe my system at time T0 with an array of 5 components [a1,a2,a3,a4,a5]; then I have the description at time T1, which is [b1,b2,b3,b4,b5], and so on.
Given the whole sequence ([a1,a2,a3,a4,a5],[b1,b2,b3,b4,b5],...,[z1,z2,z3,z4,z5]) I would like to predict some physical quantities that can be written, for example, in an array with 3 components [output1,output2,output3].
Which approach should I try? Can you give me some references so I can learn how it works?
Thank you very much!
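One standard approach is a recurrent sequence-to-vector regressor: read the sequence of 5-component state arrays and regress the 3 output quantities from the final hidden state. A minimal PyTorch sketch follows; everything beyond the array sizes (the LSTM, hidden size, and so on) is an assumption.

```python
# Minimal sketch: read a sequence of 5-component state vectors and regress
# 3 output quantities from the final hidden state of an LSTM.
import torch
import torch.nn as nn

class SeqRegressor(nn.Module):
    def __init__(self, n_features=5, n_outputs=3, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):             # x: (batch, time_steps, 5)
        _, (h_n, _) = self.rnn(x)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])     # (batch, 3)

model = SeqRegressor()
seq = torch.randn(8, 50, 5)           # 8 systems, 50 time steps each
pred = model(seq)                     # (8, 3)
```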
Regression testing verifies that existing functionality still works after changes and updates to the code. Defects can be identified early in the software development life cycle, keeping costs low. A regression testing strategy ensures that the regression testing process delivers optimal results to the project and good value to the business as a whole. But certain challenges can arise during the implementation of a regression testing strategy and need to be handled tactfully. In this article, you will learn about common problems with regression testing strategies and specific solutions for them.
1. Time-consuming process:
Problem: Regression testing is performed after every development stage. If it is performed manually or even semi-manually, the regression tests take days or weeks to complete, while development sprints average about two weeks. This can in turn hinder growth.
If automated regression testing is implemented but not configured properly, build times will slow down and the agile feedback loop will suffer.
Solution: Automate the entire regression test scenario; the ongoing cost of regression testing will drop significantly. Test case management tools can help. Prioritize tests and focus on the parts of the program that are most exposed to bugs.
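As one illustration of prioritization, the sketch below uses pytest markers (marker and test names are hypothetical) so that a small, high-risk subset of regression tests runs on every commit while the full suite runs on a schedule.

```python
# Minimal sketch: prioritizing regression tests with pytest markers. A small
# "critical" subset runs on every commit; the full suite runs nightly.
import pytest

def add_to_cart(cart, item):
    """Toy stand-in for an existing, previously working feature."""
    return cart + [item]

@pytest.mark.critical                 # bug-prone area: run on every commit
def test_add_to_cart_still_works():
    assert add_to_cart([], "book") == ["book"]

def test_cart_not_empty_regression():  # lower priority: nightly run only
    assert add_to_cart([], "x") != []

# Register the marker in pytest.ini:
#   [pytest]
#   markers = critical: fast regression checks run on every commit
#
# CI usage (illustrative):
#   per-commit: pytest -m critical
#   nightly:    pytest
```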
2. Value delivered is comparatively low:
Problem: Regression testing does not provide immediate value; it tests features that were built months or years ago, while generating new revenue requires focusing on new features.
Solution: Make sure managers and the team clearly understand the significance of detecting bugs early. Share specific scenarios with the team, such as basic capabilities that might stop working, and present the ROI of early bug detection in statistical form.
3. The cost factor:
Problem: Manual regression testing has a high ongoing manpower cost. When it comes to automated regression testing, tooling, troubleshooting the regressio