A list of puns related to "Turing Machines"
I know the famous proof by contradiction showing that there cannot exist a halting-problem Turing machine that works on ANY arbitrary Turing machine. But more specifically, have we ever successfully built a decider (as in a machine that halts for every input) for the halting problem that works on some Turing machines, as long as they have some property X?
If so, could you please send me a link to the research paper.
I haven't personally experimented with an existing Turing Machine implementation - mainly because building my own implementations makes learning about the concept easier and more fun - but I know enough to know that TMs can simulate TMs, even if at a slower and smaller scale, which makes me wonder.
If a program exists that simulates an entire Turing machine inside another Turing machine, how many internal states would the host machine need to run such a program? Would this number vary depending on the size of the tape's alphabet, or on the number of states the simulated Turing Machine can be in?
EDIT: I found the answer, though I don't yet understand it: a 2-state, 3-letter Turing Machine can do this. I'm searching the internet for a proof that that's true.
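For anyone who, like me, wants to see what "a TM simulating a TM" even means mechanically, here is a minimal sketch of a plain TM interpreter in Python (the transition-table format and the toy machine are my own illustration, not the 2-state, 3-letter machine). A universal machine has to encode a table like `delta` on its own tape and interpret it, which is where the trade-off between states and symbols comes from.

```python
# Minimal Turing machine interpreter (illustrative sketch only).
# delta maps (state, symbol) -> (new_state, written_symbol, head_move).
def run_tm(delta, tape, state="A", blank="_", max_steps=100):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Toy machine: flip every bit until the first blank, then halt.
delta = {
    ("A", "0"): ("A", "1", "R"),
    ("A", "1"): ("A", "0", "R"),
    ("A", "_"): ("HALT", "_", "R"),
}
print(run_tm(delta, "0110"))   # -> "1001_"
```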
I've read that both Turing Machines and Busy Beavers can "reject" programs, and/or inputs to those programs. But I'm wondering if errors/exceptions are invented or discovered (much like the age-old question of whether Math is discovered or invented). Do all Turing-Complete machines share this "feature"? Is it something that depends on the machine's language, architecture and computation model, or is it universal to all TCMs?
Finished building a Turing Machine Mk II from Thonk. When I plug it into the rack and power up, the whole rack starts clicking. The power distributor is clicking, like a relay or a fuse or something. The clicking also comes through as loud pulses of audio from the output jacks.
I opened up the module and checked - ICs, voltage regulators, transistors, etc. appear to be oriented correctly. Didn't feel any intense heat anywhere, and the module still powers up on a mini USB power supply, without causing clicking but also producing almost no audio.
Any ideas?
I'm playing around with manually operated mechanical computers (think the abacus, crank calculator, etc.) and I'm wondering if there are other deterministic architectures exactly as powerful as the Turing Machine.
Edit: I've done some hours of research and discovered an online copy of Stephen Wolfram's "A New Kind of Science" that includes some mechanisms that closely match what I was looking for, such as tag systems, register machines, and substitution systems. I don't know whether any of them are Turing complete yet, but something tells me at least some of them are.
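To give a flavour of how simple these mechanisms are, here is a sketch of a 2-tag system interpreter in Python (2-tag systems are known to be Turing complete; the productions below are an arbitrary example of mine, not one from the book).

```python
# 2-tag system: at each step, look at the first symbol, delete the first two
# symbols, and append the production string for that first symbol.
def run_2tag(word, productions, halt_symbols=frozenset("H"), max_steps=50):
    word = list(word)
    for step in range(max_steps):
        if len(word) < 2 or word[0] in halt_symbols:
            break
        head = word[0]
        word = word[2:] + list(productions[head])
        print(step, "".join(word))
    return "".join(word)

# Arbitrary example productions:
run_2tag("aaa", {"a": "bc", "b": "a", "c": "aaa"})
```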
Blogs that explain this are also welcome.
I have an MCQ Theory of Computation / Computational Complexity exam.
Time: 8:45 am Jan 27th, EST.
Duration: 2 hrs.
Difficulty: medium.
I need an expert in theoretical CS and Turing Machines. Can pay well.
A short verification is needed (solving a few easy samples).
Please message me ASAP!
The universe seems to be a very complex, Turing-complete machine: the fact that we can build Turing machines implies that the universe is (at least) Turing-complete. This makes me wonder: could the universe itself be simulated on an extremely powerful Turing machine? If so, is it *actually* being simulated on such a device? How would we know?
I know it's impossible. There are various theorems I learnt (although only partially, and I don't remember them now) that some problems can be solved by Turing machines but not by PDAs.
But look at the case of the famous Game of Life. It consists of an infinite grid of cells, where the state of each cell depends on the previous state of its neighbors. It is considered to be Turing complete.
I'd argue that getting the next state in the Game of Life can be done using ONLY PDAs.
That means, if I give the current state of the cells to the PDA, it can give the next state as output, which is fed back as input to the PDA, and now we have a Turing machine running on a PDA.
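To be concrete about what computing "the next state" means, here is a sketch of the Game of Life update rule in Python on a finite, wrapped grid (a sketch only; the real Game of Life runs on an unbounded grid).

```python
# One update step of Conway's Game of Life on a finite grid with wrap-around edges.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the 8 neighbours, wrapping around the edges
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
    return new

glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
print(life_step(glider))
```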
Recently I saw some renewed interest in Algorithmic Reasoning by Petar Velickovic, essentially augmenting traditional "discrete" algorithms with the "continuous" pattern recognition of DL, and it reminded me of the Neural Turing Machine / Differentiable Neural Computer, mostly spearheaded by Alex Graves, which I believe shares the same motivation as the Algorithmic Reasoning approach.
I haven't heard much about any major new work since the Differentiable Neural Computer and was wondering why. I was (and still am) fascinated by the idea and this research, but since I am not working on the topic, I'm not sure where the challenges and pitfalls are.
I was aware of some instability in the implementation, but I thought the open-source code would have helped there.
Does anybody have insights on why this direction has not been explored more in recent years? Or is this one of those "Schmidhubered" ideas that was too far ahead of its time, and once people have squeezed all the internal memory capacity out of Transformers, this idea of external dynamic memory will bounce back?
I've gotten a Turing Machine built by someone else and am trying to fix it.
It mostly works, but the CV output is the same after the fourth or fifth step. The first few steps spit out different pitches, but then it's just producing the same tone.
I reflowed most of the joints (although 99% of them already looked good) and even exchanged the DAC chip. Nothing helped.
Has anyone had similar issues? Any idea where I should be looking?
Also, how is this related to David Hilbert's Entscheidungsproblem, Gödel's incompleteness theorems, and the Halting problem? (A quick ELI5 of these topics as well, thanks.)
Sorry if that made no sense. Basically, if a language can interpret itself, does that show that it's TC?
Here is the mini course's link, which is about the computational lens on physics, biology, and even math. Even though it is organized by a Harvard grad student, astonishingly the course is open to non-Harvard students and is accessible to a lay audience.
If possible, share with us why you wish to join the course, your learning expectations, and how you wish to contribute to others.
Please share the announcement with everyone.
This is a follow-up to my question about how many states and symbols a TM would need to simulate a TM with an arbitrarily large number of states and symbols. That was answered for me: the answer is 2 states and 3 symbols. But I can't find a proof of this, or even better, an example. Also, is there a proof that a 2-state, 3-letter TM is the simplest universal TM? Could a 2-state, 2-symbol TM simulate bigger TMs, or could a 1-state, 2-symbol TM work?
Edit: I found a source for the description of the (2,3) UTM, as well as some larger UTMs. However, the description of the Turing Machine does not prove anything to me, because it's only half the story. I'd also need to know what the tape looks like while it operates to gather anything.
E2: I've now learned that the (2,3) TM is universal because it simulates a specific 1D CA that has already been proven to be universal. However, this just pushed the question further out for me, so I looked up a proof that 1D CAs are universal, and found that they can simulate some other thing that has been proven universal, and soon enough the question was put back onto the UTM and why it is universal. Either I'm stupid or the internet is.
I've been on the lookout for a UTM description, and I've found a machine with only 2 states and 3 symbols. I thought, how neat, it's so simple! I put the description into a TM simulator, realized I had no idea how to use the rule set, and took to the internet again to figure out why. Turns out, it simulates a 1D CA, which I know to be terribly time- and space-inefficient, and very hard to understand. It makes me wonder whether there's a larger, synthetic ruleset, one with more internal states but only 3 symbols (1, 0, and an empty symbol), one that can reasonably be experimented with, and one that can be programmed more easily.
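For reference, stepping a 1D CA is itself easy to code; here is a sketch of one update step of an elementary (two-colour, nearest-neighbour) CA in Python. Rule 110 is the classic one proven universal by Matthew Cook; I'm not claiming this is the exact CA the (2,3) machine simulates, it's just to show concretely what "simulating a 1D CA" looks like.

```python
# One step of an elementary cellular automaton on a finite, wrapped row of cells.
# The rule number's bits give the new cell value for each 3-cell neighbourhood.
def ca_step(cells, rule=110):
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) |
                      (cells[i] << 1) |
                       cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 40 + [1]                  # start with a single live cell
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = ca_step(row)
```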
The ad on the page takes you to an AI project, but it doesn't go back and forth with you. Sam has an interest in AI.
Edit: the best one online is called Kuki.
https://chat.kuki.ai
It's not very good, but then we people have a couple hundred thousand years of chatting behind us, so all in all it's impressive. But when I asked it if it could do anything creative, it told me that it could probably make a tomato if it looked it up online.
One of the definitions of a TM is that it has infinite memory. Since no physical system has infinite memory, does that mean that no TMs exist?
Edit: Grammar mistake in title, can't fix. Now I sound like a bumpkin :/
Dynamic Quantizer:
https://preview.redd.it/f3gixh0tk5381.png?width=1263&format=png&auto=webp&s=b292c126848fc3232b750b40c99180f447873633
With the dynamic quantizer you can use a polyphonic signal (1-8 channels) to set the scale of a quantizer, which can then take another input signal and tune it to that scale. It's incredibly practical for playing melodies on top of chords, among other more specific uses. I spent hours looking for a module with that functionality but had no luck. If anyone knows of one or wants to design a module that does this, it would be great. Anyway, here's how to use it:
All I/O is on the top left of this selection.
1st row input: Polyphonic signal with 1-8 channels to tune quantizer
2nd row input: Trigger for quantizer to update scale according to polyphonic signal
3rd row input: Incoming signal for quantizer to tune
4th row output: Tuned signal
One more thing: you can copy the Quantum module with its cables if you'd like to quantize another signal simultaneously, and it will tune itself just like the original without any issues. I think Quantum can also handle polyphonic signals, but I haven't tried it.
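If it helps to see what the patch is doing, here is the core quantizing idea sketched in Python (the voltages, ranges and names are my own illustration, not anything read out of the actual modules): the polyphonic input defines the allowed pitches, and each incoming voltage is snapped to the nearest allowed pitch in any octave.

```python
# Snap an incoming 1 V/oct voltage to the nearest pitch in the current scale.
def quantize(v_in, scale_volts):
    # allow the scale tones in every octave within a reasonable range
    candidates = [p + octave for p in scale_volts for octave in range(-5, 6)]
    return min(candidates, key=lambda p: abs(p - v_in))

chord = [0.0, 4 / 12, 7 / 12]        # C major triad as 1 V/oct offsets
print(quantize(0.45, chord))         # snaps to the E a third above C (~0.333 V)
```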
Pseudo Turing Machine:
https://preview.redd.it/3mw2ezxmm5381.png?width=406&format=png&auto=webp&s=f606382fe91f69017f063ac8fda2247cc4b0208d
This selection is a lot simpler. Top left is the clock input, bottom right has the two semi-random outputs. Use the replace knob to change the probability of a note being replaced by the random input on each tick.
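For anyone curious what the "Turing Machine" sequencer behaviour being imitated here actually is, this is roughly the algorithm, sketched in Python (the register length and CV scaling are my own choices): a looping shift register whose recirculating bit gets randomly replaced with a probability set by the knob, so 0 means the loop repeats forever and 1 means pure randomness.

```python
import random

# Shift-register random looping sequencer, in the spirit of the module.
def turing_sequencer(length=16, p_replace=0.1, ticks=64):
    register = [random.randint(0, 1) for _ in range(length)]
    for _ in range(ticks):
        bit = register.pop(0)              # bit leaving the register
        if random.random() < p_replace:
            bit = random.randint(0, 1)     # maybe replace it with a fresh random bit
        register.append(bit)               # feed it back in
        # read the first 8 bits as a CV value between 0 and 1
        yield sum(b << i for i, b in enumerate(register[:8])) / 255

for cv in turing_sequencer():
    print(f"{cv:.3f}")
```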
Module brands used:
ML | Bog Audio | Count Modula | Computerscare | Vult
If anyone wants to download the selections let me know in the comments and I'll figure out how to upload the files and share them here.
I just finished learning about Context-Free Grammars (also the CYK algorithm, CNF, parse trees, parsing tables (LL(1)), ...), Pushdown Automata and Turing Machines (at university, Comp. Science). I need to make a project (preferably with an intention for real-life use) where I use one or more of these topics (and the algorithms that go along with them, for example the CYK algorithm with Context-Free Grammars) in my coding. I have been thinking for a while about creative applications/programs I could make (CFGs are used a lot in compilers (and lexical/syntactic analysis, ...), but I don't really want to go in that direction), but I can't think of much. I would like to ask if any of you have some ideas for programs that could be useful anywhere (on whatever topic), where you can/need to use the above-mentioned constructions and the algorithms that connect to them. Examples of projects are: syntax error repairing, or even creating your own mini-compiler (not allowed to use these). The programming part itself will probably not be a problem. Thank you in advance. I am sorry if this isn't the right subreddit for this.
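In case it helps as a starting block for whichever direction you pick, here is a small CYK membership test in Python for a grammar in Chomsky Normal Form (the toy grammar is my own example, not from any course material).

```python
# CYK membership test for a CNF grammar.
# unit_rules:   list of (A, terminal) for rules A -> terminal
# binary_rules: list of (A, B, C) for rules A -> B C
def cyk(word, start, unit_rules, binary_rules):
    n = len(word)
    # table[i][j] = set of nonterminals deriving word[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for A, a in unit_rules if a == ch}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for A, B, C in binary_rules:
                    if B in table[i][split - 1] and C in table[i + split][length - split - 1]:
                        table[i][length - 1].add(A)
    return start in table[0][n - 1]

# Toy grammar for nested parentheses: S -> LP X | LP RP, X -> S RP, LP -> '(', RP -> ')'
units    = [("LP", "("), ("RP", ")")]
binaries = [("S", "LP", "X"), ("S", "LP", "RP"), ("X", "S", "RP")]
print(cyk("(())", "S", units, binaries))   # -> True
```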