I know this sub tends away from scifi speculation, but I wanted to open one up.
So a lot of people, myself included, think it is plausible that something like a GPT successor, with a few add-ons like a long-term memory outside the weights, could be the first AGI. Is that a sensible belief, or is it just Panglossian tech enthusiasm?
Even if such a GPT successor were multimodal, there would be an interesting sense in which such an AGI represented a natural-language-understanding-led pathway to AGI. Is this plausible?
What do you see as the major qualitative gaps between GPT-3 and AGI? I would suggest some are already soluble (multimodality), whereas others are more difficult (the absence of proper long-term memory, the absence of a capacity to pre-plan before acting).
The paper introduces the first Relational #TsetlinMachine, which reasons with relations, variables, and constants. The approach is based on first-order logic and Herbrand semantics, taking the first steps toward the computing power of a universal Turing machine. The approach can take advantage of logical structures appearing in natural language, to learn rules that represent how actions and consequences are related in the real world. The outcome is a logic program of Horn clauses, bringing in a structured view of unstructured data. In closed-domain question-answering, the first-order representation produces 10× more compact knowledge bases, along with an increase in answering accuracy from 94.83% to 99.48%. The approach is further robust towards erroneous, missing, and superfluous information, distilling the aspects of a text that are important for real-world understanding. https://link.springer.com/article/10.1007/s10844-021-00682-5 #ML #AI #NLP #MachineLearning #Logic #Relational
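To make the "logic program of Horn clauses" output concrete, here is a minimal sketch of what reasoning over such clauses looks like: Horn rules over relations, variables, and constants, applied to ground facts by forward chaining until a fixed point. The relation names, rules, and facts are illustrative assumptions, not taken from the paper, and the learning step (the Tsetlin Machine itself) is omitted entirely.

```python
# Hypothetical sketch: Horn clauses with relations, variables, and constants,
# answered by forward chaining over ground facts. Uppercase args are variables,
# lowercase args are constants. Rules and facts are invented for illustration.

rules = [
    # moved_to(X, Y) :- went(X, Y)
    (("moved_to", ("X", "Y")), [("went", ("X", "Y"))]),
    # at(X, Y) :- moved_to(X, Y)
    (("at", ("X", "Y")), [("moved_to", ("X", "Y"))]),
]

facts = {("went", ("mary", "kitchen"))}

def unify(pattern, fact, binding):
    """Match one literal pattern against a ground fact, extending bindings."""
    rel, args = pattern
    frel, fargs = fact
    if rel != frel or len(args) != len(fargs):
        return None
    binding = dict(binding)
    for a, f in zip(args, fargs):
        if a.isupper():                      # variable: bind or check consistency
            if binding.setdefault(a, f) != f:
                return None
        elif a != f:                         # constant mismatch
            return None
    return binding

def forward_chain(rules, facts):
    """Derive all ground facts implied by the Horn clauses (single-literal bodies)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            (lit,) = body
            for fact in list(derived):
                b = unify(lit, fact, {})
                if b is not None:
                    rel, args = head
                    ground = (rel, tuple(b[a] if a.isupper() else a for a in args))
                    if ground not in derived:
                        derived.add(ground)
                        changed = True
    return derived

closure = forward_chain(rules, facts)
print(("at", ("mary", "kitchen")) in closure)  # True
```

A closed-domain QA query like "where is Mary?" then reduces to looking up `("at", ("mary", _))` in the derived closure, which is the structured view of unstructured text that the abstract describes.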
I was just thinking of starting with an NLP course, and these were the two options that seemed apt for someone with a little background in ML and DL techniques. I aim to work through a course and, to stay accountable, work in parallel on the ongoing essay-score-prediction task on Kaggle; I also have a good research project idea in mind for once I feel confident enough.
From what I gather from the reviews, FastAI is a good intro course but could be outdated, while Stanford's CS224U "Natural Language Understanding" course goes deeper into the workings and equips you well to create your own algorithms.
Have people taken these courses? What are your thoughts on them? Or, more broadly, how do you assess whether a course is a good fit, both in general and in this choice? TIA
https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/
Another fascinating post by Walid Saba.
Backstory, as usual with these stories: I'm playing Halo 2 Anniversary and my father is watching the cutscenes with me. The first scene where he mentions something is when Master Chief and Johnson are in their drop pods; Johnson is given new orders, so he gets out, walks past Chief's pod, and bangs on it twice, and Chief bangs back once. He told me that it can mean a lot of things, like 'good luck', 'get moving', etc. Then he watched how the marines moved in cutscenes and said he could understand their body language a fair amount, even though it was a game and they don't have much movement in most cutscenes. I knew that understanding body language becomes second nature in the military (I myself never served in the military, I just knew. Also, I have a friend in the Army and one in the Navy who have told me a few things they quickly learned, i.e. body language), and of course my father told me a couple of moments when it came with funny results or almost led to a shootout.
My father's platoon was told to hold a hill one time, and they expected to move on new orders in about a week, maybe less. Dad, being the radio operator, was always watched whenever he was speaking to command about supplies or incoming orders. This time they spent roughly three months with no new orders to move out, just supply drops every three days or so. Nearing the end of two months, he was speaking to command, writing down everything they said and repeating it back for clarification. The others in the platoon were watching, noticed how he moved, and knew they were staying longer when he drooped a bit. Once he turned around and confirmed they were staying put with a shrug of his shoulders, everyone groaned.
Another time, my father and some guys from the company were at the river washing their dirty uniforms and themselves. My father was in the river washing his shirt, with his rifle leaned up against a boulder close by. In the corner of his eye he caught movement in the trees, and when he saw more he slowly reached for his rifle. Once the rifle was in hand, he took aim in the direction of the movement, thinking it was guerrilla fighters, his finger hovering over the trigger. When it became clear it was only a civilian who had stepped out from the trees, the man noticed the rifles and MGs pointed his way and put his hands up. He was let go, and my father turned around and saw that everyone with him had their rifles or MGs up, pointing in that direction. Later on in the day one guy
Interesting paper about the limitations of current NLP models. https://arxiv.org/abs/2012.15180. From the abstract:
"Do state-of-the-art natural language understanding models care about word order - one of the most important characteristics of a sequence? Not always! We found 75% to 90% of the correct predictions of BERT-based classifiers, trained on many GLUE tasks, remain constant after input words are randomly shuffled.
...
Our work suggests that many GLUE tasks are not challenging machines to understand the meaning of a sentence."
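The probe the abstract describes can be sketched in a few lines: take a trained classifier, shuffle the words of each input, and measure how often the prediction changes. In this illustrative version the "classifier" is a toy bag-of-words scorer (a stand-in I made up, not the paper's BERT models), which is order-insensitive by construction and so shows exactly the failure mode the paper measures in real models.

```python
import random

def predict(sentence):
    """Stand-in for a trained classifier (e.g. a BERT-based GLUE model).
    This toy bag-of-words scorer ignores word order by construction."""
    positive = {"good", "great", "yes"}
    score = sum(w in positive for w in sentence.split())
    return "pos" if score > 0 else "neg"

def shuffled(sentence, seed=0):
    """Return the sentence with its words randomly permuted."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def order_sensitivity(sentences, n_shuffles=20):
    """Fraction of predictions that change when input words are shuffled."""
    changed = 0
    total = 0
    for s in sentences:
        base = predict(s)
        for seed in range(n_shuffles):
            total += 1
            if predict(shuffled(s, seed)) != base:
                changed += 1
    return changed / total

examples = ["this movie was good", "not great at all", "terrible plot"]
print(order_sensitivity(examples))  # 0.0: fully order-insensitive
```

For a bag-of-words model the sensitivity is exactly zero; the paper's finding is that fine-tuned BERT classifiers, which in principle can use order, behave almost this way on many GLUE tasks.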
https://super.gluebenchmark.com/leaderboard
Position #1 item was submitted on December 20, 2020.
Position #2 item was submitted on December 30, 2020. Version 2 of the paper for this item was submitted to arXiv on January 3, 2021.
>Basic context: These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago, but they're not meant to be perfect or complete tests of human language ability.
Another tweet from Sam Bowman:
>Anyhow, there's no reason to believe that SuperGLUE will be able to detect further progress in NLU, at least beyond a small remaining margin, and we don't have any kind of direct successor benchmark coming out soon.
Recent news has been circulating about Microsoft DeBERTa surpassing human performance on the SuperGLUE benchmark. SuperGLUE is a cluster of datasets related to NLU, or "Natural Language Understanding". These tests involve Winograd Schema, textual entailment, choice of plausible alternatives, and other common-sense-reasoning tests.
But is it time to sound the alarm bell on AGI?
Not so fast. In the words of Microsoft:
> Despite its promising results on SuperGLUE, the model is by no means reaching the human-level intelligence of NLU. Humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration. This is referred to as compositional generalization, the ability to generalize to novel compositions (new tasks) of familiar constituents (subtasks or basic problem-solving skills).
Some italics were added by me.
Some good arguments regarding why machine learning will never result in machines that understand natural language at a human level:
https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/
The learning steps of the Relational Tsetlin Machine
In this paper, we take the first steps towards increasing the computing power of Tsetlin Machines (TMs) by introducing a first order TM framework with Herbrand semantics, referred to as the Relational TM. https://arxiv.org/abs/2102.10952