A list of posts related to "Limits of computation"
I have seen some mentions that tree-level diagrams are related to the classical limit (in the context of QFT, and I think string theory too).
I know that tree-level electron scattering reproduces the classical limit, but I guess this is not always the case. Is there some deeper connection?
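For context on why people say this: in the standard loop expansion, powers of ħ count loops, so only tree diagrams survive as ħ → 0. A sketch of the usual textbook counting (my summary, not something from the original post):

```latex
% Each propagator carries a factor of \hbar and each vertex a factor of 1/\hbar,
% so a connected diagram with I internal lines, V vertices and L loops scales as
\hbar^{\,I-V} = \hbar^{\,L-1}, \qquad \text{using } L = I - V + 1.
% As \hbar \to 0, the L = 0 (tree) diagrams therefore dominate the amplitude.
```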
More than a programming course, I'm looking for a way to get a deep understanding of how things work, from electrical circuits to internet server systems; I want a mastery-learning ride! Any ideas on where I could get this sort of meticulous resource?
Thanks in advance.
Hi all!
I have a strange question. As I understand it, the power of quantum computation lies in solving problems that are hard to solve but relatively easy to verify once a solution is given. But can it be that there is some limitation on checking the results obtained from a quantum computation? Here is what I mean. Imagine that, due to imperfections in the quantum computer, the output of the computation is not a pure state in the computational basis but some density matrix:
$$\rho = p\,|\text{answer}\rangle\langle\text{answer}| + (1-p)\,\rho_{\text{wrong}}, \qquad \langle\text{answer}|\,\rho_{\text{wrong}}\,|\text{answer}\rangle = 0,$$
where $p$ is the probability that the result of the quantum computation is the right answer. This seems fine if $p$ is big enough ($p \sim 1/2$ or $p \sim 1/10$, say): we just check the answer, and if the result turns out to be wrong, we run the computation again.
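To make the repeat-and-check strategy concrete (my own back-of-the-envelope arithmetic, assuming independent runs and cheap, reliable verification):

```latex
% Probability of at least one verified success in k independent runs:
P_{\text{success}}(k) = 1 - (1-p)^k, \qquad \mathbb{E}[\text{runs until success}] = \tfrac{1}{p}.
% E.g. for p = 1/10 the expected cost is 10 runs, and (1-p)^k < 10^{-6} once k \ge 132.
```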
But could there be a situation where $p$ must be very close to 1, because we cannot check whether the result is right or wrong, or because there is some limitation on such classical verification (we can verify only once or twice, but no more)? Please point me to references if such a question has been discussed before.
Thanks!
I only know the basics of each, so I'm hoping someone can explain the resource costs of IOTA and Hashgraph.
As far as I understand it, IOTA is a DAG, so in order to issue a transaction you must verify two other transactions. That basically means following their nodes back to the origin, right? Making sure they check out?
To achieve consensus on Hashgraph, one keeps a memory of recent transactions and must compute the consensus order. But it works as a DAG too.
I'm just wondering which system takes more time, or costs more computational, bandwidth, or memory resources. Can someone clear this up for me?
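To make the cost question concrete, here is a toy sketch of what "following their nodes back to the origin" could look like; the `Tx` type and `ancestryValid` function are made up for illustration and are not IOTA's actual tip-selection or validation algorithm:

```typescript
// Toy sketch (not IOTA's real algorithm): walk a transaction DAG back toward
// the genesis, checking that every ancestor of a tip is valid.
type Tx = { id: string; parents: Tx[]; valid: boolean };

function ancestryValid(tip: Tx, seen = new Set<string>()): boolean {
  if (seen.has(tip.id)) return true; // already checked via another path
  seen.add(tip.id);
  if (!tip.valid) return false;
  return tip.parents.every((p) => ancestryValid(p, seen));
}

// With the `seen` cache the walk is O(#ancestors) in time and memory; without
// it, shared ancestors in a DAG would be revisited exponentially often.
```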
I'm reading about secure multiparty computation and I'm curious what kinds of things are or aren't possible.
I read this article: https://en.wikipedia.org/wiki/Secure_multi-party_computation
But it is quite short and doesn't mention what kinds of functions or data it can be applied to. (Actually, I find most Wikipedia articles about mathematics very hard to read, much harder than those in other fields, which are written for lay people.)
Could you give me a simple overview of the state of the (public) art here?
Also, an article it links to, https://en.wikipedia.org/wiki/Yao%27s_Millionaires%27_Problem, describes the millionaires' problem (two millionaires want to see who's richer without revealing to each other their exact wealth, which each knows about themselves), but it says the protocol is exponential in time and space. This surprised me, because you would think it could be done in O(n) if someone had figured out a good protocol.
Does this mean most protocols are prohibitively expensive to actually run, and can't really be used for much in practice?
I don't know a lot of the jargon.
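For a concrete feel of what MPC protocols compute, here is a toy sketch of additive secret sharing, where three parties learn the sum of their inputs without any party revealing its own input; this is illustrative only (no networking, no security against cheating parties), and all names are made up:

```typescript
// Toy additive secret sharing over a prime modulus.
const P = 2147483647n; // a prime modulus (2^31 - 1)

const mod = (x: bigint) => ((x % P) + P) % P;
const randField = () => mod(BigInt(Math.floor(Math.random() * 2 ** 30)));

// Split a secret into n random-looking shares that sum to the secret mod P.
function share(secret: bigint, n: number): bigint[] {
  const shares = Array.from({ length: n - 1 }, randField);
  shares.push(mod(secret - shares.reduce((a, b) => a + b, 0n)));
  return shares;
}

// Each party i holds one share of every input and adds its shares locally.
// Publishing the per-party sums reveals only the total, never the inputs.
const inputs = [5n, 11n, 2n];
const allShares = inputs.map((x) => share(x, 3)); // allShares[input][party]
const partySums = [0, 1, 2].map((i) =>
  mod(allShares.reduce((acc, s) => acc + s[i], 0n))
);
const total = mod(partySums.reduce((a, b) => a + b, 0n));
console.log(total); // 18n: the sum, with no individual input in the clear
```

A comparison (as in the millionaires' problem) is genuinely harder for MPC than a sum, which is part of why early protocols for it were so expensive.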
My friend just posted this philosophy video about words and it got me thinking about how intelligent minds, and even just emotional minds, have this ability to reinterpret what someone else is saying in a way that meshes with what the mind already "knows" (collected memories).
It could be thought of as "error" or "creativity" in recording data, but it leads to the ability to model novel things without complete information. I can tweak my code for the memories of my own experiences to imagine your experiences, so that I can better model what you want to get and/or do, in a way that's both similar to and different from my own. This is necessary if I want to solve a problem where we both need to get what we want, so that I don't pick a solution that pisses you off by either denying you what you want or actively taking away what you already have and need.
This ability to change our own memories/code to bridge the gap between our own data set and the data sets of other individuals (especially humans) is what actual AI will need before it is really intelligent in the most common sense: effective problem solving in complex situations.
We are wary of computers having a "bias" like this, where they can "make shit up" instead of being totally reliable and predictable, but if we want actual intelligent, creative problem solving from them, they will need to be biased, error-prone, and just generally weird.
That's my theory, anyway.
At present, von Neumann architectures (based on Turing's or similar models of computation) are the ones we use to train Reinforcement Learning and Deep Learning algorithms, which may some day help achieve general intelligence.
There's this theory that states that almost all models of computation are capable of computing almost everything.
https://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html
Even new types of computers, like quantum computers, can be modelled as quantum Turing machines, and as such the space of computable functions remains the same.
The only thing that changes is the efficiency/capability of these different architectures, which is explained by complexity theory.
So my question is: are there any proofs, theses, etc. that show that almost all "intelligent" functions are computable in Turing's model or the lambda calculus?
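As a toy illustration of what "different models compute the same functions" means, here is Church-numeral arithmetic (pure lambda calculus, nothing but functions) computing the same answer as native arithmetic; a TypeScript sketch, with all names my own:

```typescript
// Church numerals: a number n is "apply f n times".
type Church = <T>(f: (x: T) => T) => (x: T) => T;

const zero: Church = (f) => (x) => x;
const succ = (n: Church): Church => (f) => (x) => f(n(f)(x));
const add = (m: Church, n: Church): Church => (f) => (x) => m(f)(n(f)(x));

// Convert a Church numeral back to a native number for inspection.
const toNumber = (n: Church): number => n<number>((x) => x + 1)(0);

const two = succ(succ(zero));
const three = succ(two);
console.log(toNumber(add(two, three))); // 5, same as 2 + 3
```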
What is the significance of computer technology inside the Simulation?
Think about how computer tech took off so incredibly fast after the invention of the transistor (1947). Imo, we've been witnessing centuries' worth of progress in decades or even years.
If we are living in a reality that is not "randomness and physics" but programming, design, and purpose... that adds a huge degree of context to the introduction of computer technology within the simulation.
For example, having computers and internet technology allows for all kinds of things that would otherwise not be possible. The average person these days has become much more familiar and comfortable with concepts that would have been completely foreign to their way of thinking just a few decades ago.
If you (as a simulator) want the people in the Sim to be ready to learn more about how things work, introducing the technology that manifests the sim itself is a huge step forward.
I notice it's being offered online in the Spring 2020 quarter, and it's now showing up as an elective for the post-bacc program. There aren't any reviews on Course Explorer; is the online version brand-new?
Do you identify with your body? If you have a condition today and the doctor says they might have to replace your femur with a steel rod, would you become someone else? Or let's change your hair and ask the same question. Let's do this for a few thousand cells in your brain. The point of this argument is that you, the one reading this sentence, don't exist in the same way ordinary stuff does. You might say: but wait, I feel as if I'm located between the ears and behind the eyes of my body. But is that saying anything more than your point of view, your seat in the auditorium of experience? You, whatever you are, have the best seat for the movie of your experiences. You are the result of the computation of natural processes inside and outside your body. Your existence is of a different nature than that of ordinary matter, and this is profound enough to call for real scientific study of this kind of existence. Acknowledging non-physical phenomena is the first step toward a profound understanding of reality.
The halting problem says you can't reliably know in advance whether a program will do a certain thing or not, whenever it's possible for a program to do that thing. If you prevent it from accessing files outside the sandbox, that's reliable, but if you ask whether it will end within a million cycles, it might take a million cycles to find out.
What if you only wanted to allow it a maximum of 100k cycles and a certain memory limit? If a counter starts at 100k and each op subtracts whatever that op costs, ending early if the counter would become negative (so it never does), then you could guarantee that any function call ends within the given number of cycles. If each function call also took a parameter for how much computing power to leave remaining, such as 90k, then whenever the counter would drop below 90k the call would end early, leaving at least 90k for calls lower on the stack that gave such a limit. If each thread had such a counter and extra parameter, then no individual call would be Turing-complete, but every call would halt within the given limits, and each could reduce the limit further or leave it as is, to any depth, with the whole system being only a constant factor slower.
That could of course be coded at user level in wasm by making a wasm-like emulator in wasm with that feature, or it might make a good standard option that could prevent things from "locking up".
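Here is a minimal sketch of that fuel-counter design in TypeScript; `Meter`, `OutOfFuel`, and the `reserve` parameter are made-up names for the counter and the "leave this much remaining" argument described above, not an existing WASM feature:

```typescript
class OutOfFuel extends Error {}

class Meter {
  constructor(public fuel: number) {}

  // End early if paying `cost` would drop the counter below `reserve`,
  // guaranteeing at least `reserve` units remain for frames lower on the stack.
  charge(cost: number, reserve = 0): void {
    if (this.fuel - cost < reserve) throw new OutOfFuel();
    this.fuel -= cost;
  }
}

type Op = { run: () => void; cost: number };

function run(program: Op[], meter: Meter, reserve = 0): void {
  for (const op of program) {
    meter.charge(op.cost, reserve); // pay before executing, so limits hold
    op.run();
  }
}

// Usage: 100k budget; this call must leave at least 90k for its caller.
const meter = new Meter(100_000);
const ops: Op[] = Array.from({ length: 20_000 }, () => ({ run: () => {}, cost: 1 }));
try {
  run(ops, meter, 90_000); // stops after ~10k ops
} catch (e) {
  if (!(e instanceof OutOfFuel)) throw e;
}
console.log(meter.fuel); // 90000: the reserve guarantee held
```

Because every op pays before it runs, the budget bounds total work regardless of what the program does, which is exactly why a metered call is no longer Turing-complete but always halts.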
Here's a bunch of Craig's transcripts that nChain was kind enough to make available, and here's a key from CSU explaining the meaning of the grades. Theory of Computation is the class that would have taught Craig what "Turing complete" means, and on p. 43 we can see that Craig failed that class.
Studying a bit harder in his Theory of Computation class would probably have prevented years of idiotic papers purporting to prove that Bitcoin Script is Turing complete, with titles like "Bitcoin: A Total Turing Machine."
As an aside: Craig's grades on his transcripts are, on the whole, pretty abysmal. He seems to have barely passed most of his degrees. These are very troubling transcripts, given the view many of his followers seem to hold that Craig is a multidisciplinary genius whose passion in life is education.
> "Math.sqrt is an expensive function."
I'm wondering if there is a console.log-esque function that outputs the cost of running a given piece of code.
Something like console.price(console.log(2+2)) would give the energy expenditure for a console.log(2+2) call.
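There's no such built-in console method as far as I know, but here is a sketch of the shape such a helper could take, using wall-clock time from `performance.now()` as a rough stand-in for "cost"; note it takes a callback so the work being measured runs inside the timer:

```typescript
// Hypothetical helper (not a real console API): time a callback and log it.
function price<T>(fn: () => T): T {
  const start = performance.now();
  const result = fn();
  console.log(`~${(performance.now() - start).toFixed(3)} ms`);
  return result;
}

price(() => console.log(2 + 2)); // logs 4, then the elapsed time
```

Measuring actual energy would need hardware counters (e.g. Intel's RAPL interface) rather than timing, but time is the usual proxy.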