Do you buy the idea that there could be a natural-language-understanding-led "path" to AGI?

I know this sub tends away from sci-fi speculation, but I wanted to open one up anyway.

So a lot of people, myself included, think it is plausible that something like a GPT successor, with a few add-ons like a long-term memory outside its weights, could be the first AGI. Is that a sensible belief, or is it just Panglossian tech enthusiasm?

Even if such a GPT successor were multimodal, there would be an interesting sense in which such an AGI represented a natural-language-understanding-led pathway to AGI. Is this plausible?

What do you see as the major qualitative gaps between GPT-3 and AGI? I would suggest some are already soluble (multimodality), whereas others are more difficult (the absence of proper long-term memory, the absence of a capacity to plan before acting).
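
To make the "long-term memory outside the weights" add-on concrete, here is a toy sketch in which past facts live in an external store and the most relevant one is retrieved at query time to prepend to the model's context. This is purely illustrative, not anyone's actual proposal; a real system would presumably use learned embeddings rather than the word-overlap similarity used here.

```python
# Toy sketch of "long-term memory outside the weights": store past
# facts externally, retrieve the most similar one at query time.
# The bag-of-words cosine similarity is a stand-in for learned
# embeddings and is purely illustrative.
import math
import re
from collections import Counter

memory = []  # list of (text, bag-of-words Counter)

def bow(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def remember(text):
    memory.append((text, bow(text)))

def recall(query):
    """Return the stored text most similar to the query (cosine)."""
    q = bow(query)
    q_norm = math.sqrt(sum(v * v for v in q.values()))

    def cosine(c):
        dot = sum(q[w] * c[w] for w in q)
        norm = q_norm * math.sqrt(sum(v * v for v in c.values()))
        return dot / norm if norm else 0.0

    return max(memory, key=lambda item: cosine(item[1]))[0]

remember("The user's dog is named Bruno")
remember("The user prefers short answers")
print(recall("What is the name of my dog?"))
# -> "The user's dog is named Bruno"
```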

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/philbearsubstack
πŸ“…︎ Dec 20 2021
🚨︎ report
[R] New paper: "A relational Tsetlin machine with applications to natural language understanding"

Relational Tsetlin Machine

The paper introduces the first Relational Tsetlin Machine, which reasons with relations, variables, and constants. The approach is based on first-order logic and Herbrand semantics, taking the first steps toward the computing power of a universal Turing machine. It can take advantage of the logical structure appearing in natural language to learn rules that represent how actions and consequences are related in the real world. The outcome is a logic program of Horn clauses, bringing a structured view to unstructured data. In closed-domain question answering, the first-order representation produces 10× more compact knowledge bases, along with an increase in answering accuracy from 94.83% to 99.48%. The approach is further robust to erroneous, missing, and superfluous information, distilling the aspects of a text that are important for real-world understanding. https://link.springer.com/article/10.1007/s10844-021-00682-5
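
To make the Horn-clause output format concrete, here is a minimal sketch of one forward-chaining step over ground facts. The rule at(X, Y) :- goes(X, Y) and the relation names are invented for illustration; they are not learned rules from the paper.

```python
# Illustrative only: one forward-chaining step for the Horn clause
#   at(X, Y) :- goes(X, Y)
# The relations ("goes", "takes", "at") are invented for this sketch,
# not learned rules from the Relational Tsetlin Machine paper.
facts = {("goes", "mary", "kitchen"), ("takes", "mary", "apple")}

def apply_rule(facts):
    """Derive at(X, Y) for every goes(X, Y) fact already known."""
    derived = {("at", x, y) for (rel, x, y) in facts if rel == "goes"}
    return facts | derived

print(apply_rule(facts))
# {('goes', 'mary', 'kitchen'), ('takes', 'mary', 'apple'),
#  ('at', 'mary', 'kitchen')}
```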

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/olegranmo
πŸ“…︎ Jan 03 2022
🚨︎ report
FastAI NLP course vs Stanford's Natural Language Understanding course

I was just thinking of starting with an NLP course, and these were the two options that seemed apt for someone with a little background in ML and DL techniques. I aim to work through a course and, as a project to keep myself accountable, work in parallel on the ongoing essay score prediction task on Kaggle; I also have a good research project idea in mind for once I feel confident enough.

From what I gather from the reviews, FastAI is a good intro course but could be outdated, while Stanford's CS224U "Natural Language Understanding" goes deeper into the inner workings and better equips you to create your own algorithms.

Have people taken these courses? What are your thoughts on them? Or, more broadly, how do you assess whether a course is a good fit, both in general and in this case? TIA

πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/mistryishan25
πŸ“…︎ Jan 09 2022
🚨︎ report
"CLUES: Few-Shot Learning Evaluation in Natural Language Understanding", Mukherjee et al 2021 arxiv.org/abs/2111.02570#…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/gwern
πŸ“…︎ Nov 15 2021
🚨︎ report
So much for understanding natural language…
πŸ‘︎ 246
πŸ’¬︎
πŸ‘€︎ u/terminalparadox
πŸ“…︎ Jul 20 2021
🚨︎ report
Why neural networks aren’t fit for natural language understanding bdtechtalks.com/2021/07/1…
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/bendee983
πŸ“…︎ Jul 12 2021
🚨︎ report
Machine Learning Won't Solve Natural Language Understanding
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Trumpet1956
πŸ“…︎ Aug 24 2021
🚨︎ report
Machine Learning Won't Solve Natural Language Understanding /r/ReplikaTech/comments/p…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Trumpet1956
πŸ“…︎ Aug 24 2021
🚨︎ report
Machine learning won't solve natural language understanding thegradient.pub/machine-l…
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/qznc_bot2
πŸ“…︎ Aug 10 2021
🚨︎ report
[R] What Will it Take to Fix Benchmarking in Natural Language Understanding? arxiv.org/abs/2104.02145
πŸ‘︎ 34
πŸ’¬︎
πŸ‘€︎ u/say_wot_again
πŸ“…︎ Apr 12 2021
🚨︎ report
A Generative Symbolic Model for More General Natural Language Understanding and Reasoning
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/loopy_fun
πŸ“…︎ May 09 2021
🚨︎ report
Understanding Body Language Becomes Second Nature

Backstory, as usual with these stories: I'm playing Halo 2 Anniversary and my father is watching the cutscenes with me. The first scene he mentions something about is when Master Chief and Johnson are in their drop pods; Johnson is given new orders, so he gets out, walks past Chief's pod, and bangs on it twice, and Chief bangs back once. He told me that it can mean a lot of things, like 'good luck', 'get moving', etc. Then he watched how the marines moved in cutscenes and said he could read their body language a fair amount, even though it's a game and there isn't much movement in most cutscenes. I knew that understanding body language becomes second nature in the military (I myself never served; I just knew. Also, I have a friend in the Army and one in the Navy who have told me a few things they quickly learned, i.e. body language), and of course my father told me about a couple of moments when it came with funny results or almost led to a shootout.

My father's platoon was told to hold a hill one time, and they were expected to move out with new orders in about a week, maybe less. Dad, being the radio operator, was always being watched whenever he was speaking to command about supplies or whether new orders had come in. This time they spent roughly three months with no new orders to move out, just supply drops every three days or so. Nearing the end of two months, he was speaking to command, writing down everything they said and repeating it back for clarification. The others in the platoon were watching, noticed how he moved, and knew they were staying longer when he drooped a bit. Once he turned around and shrugged his shoulders, everyone groaned: he had confirmed they were staying put.

Another time, my father and some guys from the company were at the river washing off some of their dirty uniforms and their bodies. My father was in the river washing his shirt, with his rifle leaning against a boulder close by. Out of the corner of his eye he caught movement in the trees, and when he saw more he slowly reached for his rifle. Once his rifle was in hand, he took aim in the direction he had seen movement; thinking it was guerrilla fighters, he had his finger hovering over the trigger. When it became clear it was only a civilian who had stepped out from the trees, the man noticed the rifles and MGs pointed his way and put his hands up. He was let go, and my father turned around and saw that everyone with him had their rifles or MGs up and pointing in the same direction. Later on in the day one guy

... keep reading on reddit ➡

πŸ‘︎ 144
πŸ’¬︎
πŸ‘€︎ u/silverwolf478
πŸ“…︎ Jan 11 2022
🚨︎ report
Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks? [Research]

Interesting paper about the limitations of current NLP models. https://arxiv.org/abs/2012.15180. From the abstract:
"Do state-of-the-art natural language understanding models care about word order - one of the most important characteristics of a sequence? Not always! We found 75% to 90% of the correct predictions of BERT-based classifiers, trained on many GLUE tasks, remain constant after input words are randomly shuffled.
...
Our work suggests that many GLUE tasks are not challenging machines to understand the meaning of a sentence."
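
A quick way to reproduce the flavour of this probe at home, as a sketch rather than the authors' setup: it assumes a stock HuggingFace sentiment pipeline standing in for their GLUE-finetuned BERT classifiers.

```python
# Sketch of the word-shuffling probe described in the abstract.
# Assumption: a stock HuggingFace sentiment classifier stands in for
# the paper's BERT-based GLUE models.
import random
from transformers import pipeline

clf = pipeline("sentiment-analysis")

def shuffled(sentence):
    """Return the sentence with its words in random order."""
    words = sentence.split()
    random.shuffle(words)
    return " ".join(words)

sentence = "The movie was surprisingly good despite the slow start"
original = clf(sentence)[0]["label"]

# Count how often the prediction survives a random word shuffle.
unchanged = sum(clf(shuffled(sentence))[0]["label"] == original
                for _ in range(20))
print(f"Prediction unchanged on {unchanged}/20 shuffles")
```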

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/jonnor
πŸ“…︎ Jan 02 2021
🚨︎ report
[N] Two models now exceed the human baselines score for the SuperGLUE natural language understanding benchmark

https://super.gluebenchmark.com/leaderboard

Position #1 item was submitted on December 20, 2020.

Position #2 item was submitted on December 30, 2020. Version 2 of the paper for this item was submitted to arXiv on January 3, 2021.

A tweet from Sam Bowman:

>Basic context: These datasets reflect some of the hardest supervised language understanding task datasets that were freely available two years ago, but they're not meant to be perfect or complete tests of human language ability.

Another tweet from Sam Bowman:

>Anyhow, there's no reason to believe that SuperGLUE will be able to detect further progress in NLU, at least beyond a small remaining margin, and we don't have any kind of direct successor benchmark coming out soon.

πŸ‘︎ 18
πŸ’¬︎
πŸ‘€︎ u/Wiskkey
πŸ“…︎ Jan 05 2021
🚨︎ report
Hype check: Microsoft DeBERTa and Natural Language Understanding

Recent news has been circulating about Microsoft DeBERTa surpassing human performance on the SuperGLUE benchmark. SuperGLUE is a cluster of datasets related to NLU, or "Natural Language Understanding". These tests involve Winograd Schema, textual entailment, choice of plausible alternatives, and other common-sense-reasoning tests.

But is it time to sound the alarm bell on AGI?

Not so fast. In the words of Microsoft:

> Despite its promising results on SuperGLUE, the model is by no means reaching the human-level intelligence of NLU. Humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-specific demonstration. This is referred to as compositional generalization, the ability to generalize to novel compositions (new tasks) of familiar constituents (subtasks or basic problem-solving skills).

Some italics were added by me.

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/moschles
πŸ“…︎ Jan 07 2021
🚨︎ report
[2104.02145] What Will it Take to Fix Benchmarking in Natural Language Understanding? arxiv.org/abs/2104.02145
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Veedrac
πŸ“…︎ Apr 13 2021
🚨︎ report
A standardised test of natural language understanding is the ARC reasoning challenge: a collection of high-school science multiple-choice questions. In one year, state of the art has gone from 72% correct to 89% correct. leaderboard.allenai.org/a…
πŸ‘︎ 64
πŸ’¬︎
πŸ‘€︎ u/no_bear_so_low
πŸ“…︎ Apr 14 2020
🚨︎ report
Can natural language understanding models learn to understand morality? arxiv.org/pdf/2008.02275.…
πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/no_bear_so_low
πŸ“…︎ Aug 06 2020
🚨︎ report
Two models now exceed the human baselines score for the SuperGLUE natural language understanding benchmark /r/MachineLearning/commen…
πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/Wiskkey
πŸ“…︎ Jan 05 2021
🚨︎ report
Blue Frog Robotics exists to create smart robots with a smile that benefit everyone around them. Using Artificial Intelligence (AI) technologies, computer vision, natural language processing, and gesture controls, we are changing the way personal robots are used in homes. By understanding how consum
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rajneesh7890
πŸ“…︎ Jan 29 2021
🚨︎ report
The Future of Natural Language Processing for Business: Demands, Needs, and the Search for Understanding lionbridge.ai/articles/na…
πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/Shirappu
πŸ“…︎ Nov 09 2020
🚨︎ report
I finally found a good thing in not understanding unspoken language
πŸ‘︎ 3k
πŸ’¬︎
πŸ‘€︎ u/Low-Bit2048
πŸ“…︎ Jan 17 2022
🚨︎ report
Machine Learning Won't Solve Natural Language Understanding

Some good arguments for why machine learning will never result in machines that understand natural language at a human level.

https://thegradient.pub/machine-learning-wont-solve-the-natural-language-understanding-challenge/

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/sshwartz
πŸ“…︎ Aug 11 2021
🚨︎ report
[R] A Relational Tsetlin Machine with Applications to Natural Language Understanding

The learning steps of the Relational Tsetlin Machine

In this paper, we take the first steps towards increasing the computing power of Tsetlin Machines (TMs) by introducing a first-order TM framework with Herbrand semantics, referred to as the Relational TM. https://arxiv.org/abs/2102.10952

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/olegranmo
πŸ“…︎ Feb 26 2021
🚨︎ report
