11 ways Cardano will scale in 2022: Parameter adjustments, improvements, enhancements and other innovations will all play their part in steadily increasing Cardano's capacity & throughput

On-chain solutions

Block size increase

The bigger the block, the more transactions it can carry. Block size was recently increased by 8KB, from 64KB to 72KB (a 12.5% increase); further increases will be applied over time based on ongoing system monitoring and overall network health.
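
As a rough illustration of what that change means for per-block capacity (the 64KB starting point follows from the stated 8KB / 12.5% increase; the average transaction size below is an assumed figure, not an official one):

    # Back-of-the-envelope view of the block size increase; illustrative only.
    OLD_BLOCK_KB = 64      # implied by "+8 KB = 12.5% increase"
    NEW_BLOCK_KB = 72
    AVG_TX_KB = 1.5        # assumed average transaction size, illustration only

    increase_pct = (NEW_BLOCK_KB - OLD_BLOCK_KB) / OLD_BLOCK_KB * 100
    print(f"block size increase: {increase_pct:.1f}%")                  # 12.5%
    print(f"approx. txs per block before: {OLD_BLOCK_KB // AVG_TX_KB:.0f}")
    print(f"approx. txs per block after:  {NEW_BLOCK_KB // AVG_TX_KB:.0f}")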

Pipelining

Improves block propagation times by coalescing validation and propagation. The goal is for blocks to be propagated to at least 95% of peers within five seconds by reducing the 'dead time' between blocks (the block propagation overhead). This provides the headroom to make more aggressive scaling changes, such as increasing block size or raising Plutus parameter limits.
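
A toy model of why overlapping validation with propagation shortens block diffusion (the hop count and per-hop times below are assumed values, not measured Cardano figures):

    # Toy comparison of serial vs pipelined block diffusion; illustrative only.
    HOPS = 5           # assumed number of relay hops to reach most of the network
    VALIDATE_S = 0.4   # assumed per-node validation time (seconds)
    FORWARD_S = 0.3    # assumed per-hop transfer time (seconds)

    # Without pipelining: each node fully validates before forwarding.
    serial = HOPS * (VALIDATE_S + FORWARD_S)

    # With pipelining: nodes forward the block while still validating it,
    # so roughly one validation remains on the critical path.
    pipelined = HOPS * FORWARD_S + VALIDATE_S

    print(f"serial: {serial:.1f}s  pipelined: {pipelined:.1f}s")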

Input Endorsers

Input endorsers improve block propagation times and throughput by allowing transactions to be separated into pre-constructed blocks. This improves the consistency of block propagation times and allows higher transaction rates.

Memory/CPU parameters for Plutus

Memory usage is more efficient across the chain. Specifically, there are memory improvements in Unspent Transaction Output (UTXO) handling, stake distribution, live stake distribution and pools, and hash representation.

Plutus script enhancements

Even more effective usage of the powerful EUTXO model through smart contract optimization, including:

  • Reference inputs (CIP-0031) – Plutus scripts can inspect transaction inputs without needing to spend them. This means that it is not necessary to create UTXOs simply to inspect the information held by an input.
  • Plutus Datums (CIP-0032) – Datums can be attached directly to outputs instead of datum hashes. This simplifies how datums are used, as a user can see the actual datum rather than having to supply the datum that matches the given hash.
  • Script sharing (CIP-0033) – Plutus script references can be associated with transaction outputs, meaning that they can be recorded on-chain for subsequent reuse. It will not be necessary to supply a copy of the script with each transaction, hugely reducing friction for developers. Reusing scripts in multiple transactions significantly reduces transaction sizes, improving throughput and reducing script execution costs (a rough size estimate is sketched after this list).
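
A rough sketch of the size saving that script sharing enables (the script and transaction sizes are assumed example values, not protocol figures):

    # Illustration of transaction-size savings from script references (CIP-0033).
    SCRIPT_KB = 4.0    # assumed Plutus script size
    BASE_TX_KB = 1.0   # assumed size of the rest of each transaction
    N_TX = 1000        # transactions that reuse the same script

    without_sharing = N_TX * (BASE_TX_KB + SCRIPT_KB)  # script shipped every time
    with_sharing = SCRIPT_KB + N_TX * BASE_TX_KB       # script stored on-chain once

    print(f"without reference scripts: {without_sharing:.0f} KB")
    print(f"with reference scripts:    {with_sharing:.0f} KB")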

Node enhancements

Improvements will help spread stake and reward computations more evenly across the epoch, thus providing greater headroom for block size increases. Also, memory usage is now more efficient. Memory compaction reduces the RSS footprint, and memory sh…


๐Ÿ‘︎ 544
๐Ÿ’ฌ︎
๐Ÿ“…︎ Jan 16 2022
๐Ÿšจ︎ report
Study: The pattern of green and black scales on an ocellated lizard can be described with the two-parameter Ising model for antiferromagnetic systems. The researchers wonder if natural selection led this species to favor its particular pattern and balance of colors. physics.aps.org/articles/…
๐Ÿ‘︎ 52
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/rustoo
๐Ÿ“…︎ Jan 28 2022
๐Ÿšจ︎ report
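
As an aside on the model named in that post, here is a minimal Metropolis sketch of a two-parameter Ising model (coupling J, field h), with J < 0 for the antiferromagnetic case. The square lattice and parameter values are simplifying assumptions for illustration; the study itself works with the lizard's roughly hexagonal lattice of scales and fitted parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    N, J, h, T, steps = 64, -1.0, 0.2, 2.0, 200_000
    s = rng.choice([-1, 1], size=(N, N))   # -1/+1 standing in for black/green scales

    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] + s[i, (j + 1) % N] + s[i, (j - 1) % N]
        dE = 2 * s[i, j] * (J * nb + h)    # energy change from flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

    print("fraction of 'green' scales:", (s == 1).mean())

With J < 0 neighbouring scales prefer opposite colours, while the field h biases the overall balance between the two colours, which is the balance the researchers ask about.
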
I recently created Fluid Type Scale, an open-source calculator that lets you customize some parameters and copy the output CSS variables for fluid font sizing. Includes a preview mode! fluid-type-scale.com/
๐Ÿ‘︎ 58
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Alex_Hovhannisyan
๐Ÿ“…︎ Jan 03 2022
๐Ÿšจ︎ report
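
The tool generates the CSS itself, but the underlying fluid-typography math is simple enough to sketch: linearly interpolate between a minimum and maximum font size across a viewport range and emit a CSS clamp(). The numbers below are assumed example values, not the site's defaults.

    # Sketch of the usual fluid type-scale math; example values only.
    def fluid_clamp(min_px, max_px, min_vw=400, max_vw=1280, root_px=16):
        slope = (max_px - min_px) / (max_vw - min_vw)        # px per viewport px
        intercept_rem = (min_px - slope * min_vw) / root_px  # rem at zero viewport width
        preferred = f"{intercept_rem:.4g}rem + {slope * 100:.4g}vw"
        return f"clamp({min_px / root_px:.4g}rem, {preferred}, {max_px / root_px:.4g}rem)"

    # A base step that grows from 16px at 400px viewports to 20px at 1280px:
    print("--font-size-base:", fluid_clamp(16, 20))
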
New video going live at 9AM PST tomorrow with updates to my parametric fretboard design that prevent Fusion from crashing, as well as adding compound radius options, consistent edge/center thickness, and a user parameter for which fret is vertical in multi-scale designs. youtube.com/watch?v=BMu3E…
๐Ÿ‘︎ 7
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Ak_Shaner
๐Ÿ“…︎ Jan 29 2022
๐Ÿšจ︎ report
White scale/patch on my halfmoon's head, just seen today. Parameters in comments reddit.com/gallery/rw55xg
๐Ÿ‘︎ 7
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/malonescig
๐Ÿ“…︎ Jan 04 2022
๐Ÿšจ︎ report
Full-scale, biologically realistic model of mouse hippocampus uncovers new mechanism for pattern separation - Using parameters measured in the mouse hippocampus, the researchers created a comprehensive network model of the brain region medicalxpress.com/news/20…
๐Ÿ‘︎ 117
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/QuantumThinkology
๐Ÿ“…︎ Dec 16 2021
๐Ÿšจ︎ report
My niece's Betta seems to be lethargic and has greying scales. Unfortunately I don't know much about the parameters as it's not my fish, and the owners don't know much about the temp etc. Wanting help pls reddit.com/gallery/rcowus
๐Ÿ‘︎ 4
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/jadethevenom
๐Ÿ“…︎ Dec 09 2021
๐Ÿšจ︎ report
Betta has popeye; does he otherwise look healthy? His scales seem kind of rough and he has had that stress stripe since I got him (despite perfect parameters and normal behaviour) v.redd.it/wn56dsheypc81
๐Ÿ‘︎ 4
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/swagneylitness
๐Ÿ“…︎ Jan 19 2022
๐Ÿšจ︎ report
Peng Cheng Laboratory (PCL) and Baidu release PCL-BAIDU Wenxin, the world's first knowledge-enhanced 100-billion-scale pretrained language model and the largest Chinese-language monolithic model with 260 billion parameters. syncedreview.com/2021/12/…
๐Ÿ‘︎ 36
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Buck-Nasty
๐Ÿ“…︎ Dec 09 2021
๐Ÿšจ︎ report
Researchers Introduce 'PERSIA': A PyTorch-Based System for Training Large Scale Deep Learning Recommendation Models up to 100 Trillion Parameters

Deep learning-based models dominate the contemporary landscape of production recommender systems. Modern recommender systems offer a plethora of real-world applications. Thanks to deep neural network models of ever-increasing size, they have made incredible progress.

However, the training of such models is challenging even within industrial-scale data centers. This challenge stems from the startling heterogeneity of the training computation: the model's embedding layer can account for more than 99.99 percent of the overall model size and is exceedingly memory-intensive, while the rest of the neural network (NN) is progressively more computation-intensive.

PERSIA (parallel recommendation training system with hybrid acceleration), an efficient distributed training system based on a revolutionary hybrid training algorithm, has been unveiled by a research team from Kwai Inc., Kuaishou Technology, and ETH Zürich. This approach provides training efficiency and accuracy for extensive deep learning recommender systems with up to 100 trillion parameters. The researchers have carefully co-designed the optimization method and the distributed system architecture.
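
A back-of-the-envelope view of the heterogeneity described above (the 99.99% embedding share comes from the text; the byte counts and precision are assumptions for illustration):

    TOTAL_PARAMS = 100e12     # 100 trillion parameters
    EMBEDDING_SHARE = 0.9999  # share of parameters in the embedding layer (from the text)
    BYTES_PER_PARAM = 4       # assumed fp32 storage

    embedding_params = TOTAL_PARAMS * EMBEDDING_SHARE
    dense_params = TOTAL_PARAMS - embedding_params

    print(f"embedding: {embedding_params:.3g} params "
          f"(~{embedding_params * BYTES_PER_PARAM / 1e12:.0f} TB, memory-bound)")
    print(f"dense:     {dense_params:.3g} params "
          f"(~{dense_params * BYTES_PER_PARAM / 1e9:.0f} GB, compute-bound)")

A split along these lines, with the huge embedding tables on memory-rich servers and the comparatively small dense network on GPUs, is the kind of co-design the paragraph refers to.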

Quick Read: https://www.marktechpost.com/2021/12/05/researchers-introduce-persia-a-pytorch-based-system-for-training-large-scale-deep-learning-recommendation-models-up-to-100-trillion-parameters/

Paper: https://arxiv.org/pdf/2111.05897.pdf

Github: https://github.com/persiaml/persia

๐Ÿ‘︎ 23
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/techsucker
๐Ÿ“…︎ Dec 05 2021
๐Ÿšจ︎ report
[INFLATION] Good evening everyone! I'm studying various models of inflation. I'm wondering how to plot the power spectra as a function of the scale k. I can't see the k dependence in the potential V (or in the slow-roll parameters). Can someone help me?
๐Ÿ‘︎ 32
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Sky_physics
๐Ÿ“…︎ Nov 18 2021
๐Ÿšจ︎ report
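
For what it is worth, in single-field slow-roll inflation the k-dependence does not sit in V(φ) directly; it enters because each mode is evaluated at its horizon-crossing time k = aH. A sketch of the textbook slow-roll relation (not specific to the poster's models) is:

    P_\zeta(k) \simeq \left. \frac{H^2}{8\pi^2 \epsilon M_{\rm Pl}^2} \right|_{k=aH}
               \approx A_s \left( \frac{k}{k_*} \right)^{n_s - 1},
    \qquad n_s - 1 \simeq 2\eta_V - 6\epsilon_V

so plotting P(k) amounts to evaluating H and the slow-roll parameters at the field value where each k crosses the horizon.
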
Are there any apps or tools that generate song parameters for you to try (e.g. make a song in this key, scale, tempo, chord progression, genre, etc.)?

I would like to have a tool that would suggest song parameters (e.g. tempo, genre, chord progression, scale, specific instruments to use, etc.) so that I could challenge myself to get out of my comfort zone when writing new music. I could just come up with a list myself but it would probably be biased and wouldn't help me become a better producer in the long run. It would be nice if it wasn't just totally random but threw out parameters that also made sense in a music genre sort of way.

The closest thing I could find was this Patch card game but it's more for modular synthesis than general electronic music production. https://www.patchtcg.com
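
A toy sketch of the kind of constrained generator described above; the genre table is a made-up example, not drawn from any real tool.

    import random

    GENRES = {  # hypothetical, loosely genre-appropriate constraints
        "lofi hip hop": {"tempo": (70, 90),   "scales": ["minor pentatonic", "dorian"]},
        "house":        {"tempo": (120, 128), "scales": ["natural minor", "mixolydian"]},
        "drum & bass":  {"tempo": (170, 178), "scales": ["natural minor", "phrygian"]},
    }
    KEYS = ["C", "D", "Eb", "F", "G", "A", "Bb"]
    PROGRESSIONS = ["i-VI-III-VII", "ii-V-I", "I-V-vi-IV", "i-iv-VII-III"]

    def prompt():
        genre, rules = random.choice(list(GENRES.items()))
        lo, hi = rules["tempo"]
        return {
            "genre": genre,
            "tempo_bpm": random.randint(lo, hi),
            "key": random.choice(KEYS),
            "scale": random.choice(rules["scales"]),
            "progression": random.choice(PROGRESSIONS),
        }

    print(prompt())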

๐Ÿ‘︎ 4
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Wunjo26
๐Ÿ“…︎ Nov 26 2021
๐Ÿšจ︎ report
Is my zebra danio okay? The fish is twice the size of my other ones and it's so big I can see the scales sticking up all over the fish. Still acting and eating fine; parameters are good
๐Ÿ‘︎ 5
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/ewm993
๐Ÿ“…︎ Nov 09 2021
๐Ÿšจ︎ report
Peng Cheng Laboratory (PCL) and Baidu release PCL-BAIDU Wenxin, the world's first knowledge-enhanced 100-billion scale pretrained language model and the largest Chinese-language monolithic model with 260 billion parameters. PCL-BAIDU Wenxin achieves state-of-the-art results on more than 60 tasks
๐Ÿ‘︎ 14
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Dr_Singularity
๐Ÿ“…︎ Dec 09 2021
๐Ÿšจ︎ report
OMFG๏ผGPT-4 will be human brain scale(One hundred trillion parameters)

GPT-4 will be human brain scale(One hundred trillion parameters)

Unfortunately, That wonโ€™t be ready for several years

https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/

๐Ÿ‘︎ 24
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Commercial_Bug_3726
๐Ÿ“…︎ Aug 25 2021
๐Ÿšจ︎ report
10 TRILLION parameters, achieves the same parameter scale at only 1% of the energy. pandaily.com/alibaba-damo…
๐Ÿ‘︎ 47
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/No-Transition-6630
๐Ÿ“…︎ Nov 08 2021
๐Ÿšจ︎ report
BIG-bench preliminary results (May 2021 WELM talk): increasing returns to scale and capability spikes >=10b-parameters youtu.be/x-9KxACAPIo?t=19…
๐Ÿ‘︎ 9
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/gwern
๐Ÿ“…︎ Nov 24 2021
๐Ÿšจ︎ report
Cerebras Systems Announces World's First Brain-Scale Artificial Intelligence Solution. Technology Breakthroughs Enable Training of 120 Trillion Parameters (human brain = ~100T) on Single CS-2 zdnet.com/article/cerebra…
๐Ÿ‘︎ 65
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/QuantumThinkology
๐Ÿ“…︎ Aug 24 2021
๐Ÿšจ︎ report
[R] Microsoft Asia's Swin Transformer V2 Scales the Award-Winning ViT to 3 Billion Parameters and Achieves SOTA Performance on Vision Benchmarks

Microsoft Research Asia has upgraded its Swin Transformer with a new version featuring three billion parameters, trained on images at resolutions up to 1,536 x 1,536, advancing the SOTA on four representative vision benchmarks.

Here is a quick read: Microsoft Asia's Swin Transformer V2 Scales the Award-Winning ViT to 3 Billion Parameters and Achieves SOTA Performance on Vision Benchmarks.

The associated code will be available on the project's GitHub. The paper Swin Transformer V2: Scaling Up Capacity and Resolution is on arXiv.

๐Ÿ‘︎ 6
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Yuqing7
๐Ÿ“…︎ Nov 22 2021
๐Ÿšจ︎ report
Block: Combining attribute and linear parameter with scale action

Hi everyone!

I'm looking to have a block where I can change an attribute within the enhanced attribute manager and have it take effect on my linear parameter.

So far I have my attribute and my linear parameter separated, which means I have to enter two different values. This tree block is only concentric circles.

I'm wondering if this is possible; otherwise I might drop the attribute and stick with the parameter. I may be overthinking this. Let me know what you guys have to say!

๐Ÿ‘︎ 7
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/OneiricGeometry
๐Ÿ“…︎ Oct 17 2021
๐Ÿšจ︎ report
If there were a way to create large-scale models commercially, what types of models, for what applications, would be a game changer? Let's say a trillion parameters.
๐Ÿ‘︎ 6
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Real_Ad_3301
๐Ÿ“…︎ Sep 25 2021
๐Ÿšจ︎ report
Hi, I am fairly new to the hobby and I have a sparkling gourami which seems to have something on his side sticking away from his body (a scale?). Water parameters are as always, no nitrite nor ammonia. He has kinda stuck to the ground the last couple of days and hides. Thank you :c reddit.com/gallery/psnay6
๐Ÿ‘︎ 2
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Art3misXX
๐Ÿ“…︎ Sep 21 2021
๐Ÿšจ︎ report
Cerebras Systems Announces World's First Brain-Scale Artificial Intelligence Solution. Technology Breakthroughs Enable Training of 120 Trillion Parameters on Single CS-2 businesswire.com/news/hom…
๐Ÿ‘︎ 56
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/QuantumThinkology
๐Ÿ“…︎ Aug 24 2021
๐Ÿšจ︎ report
What could the research parameters for an IT project be? Let's set the scale of the project to cloud, but the features should be new. How can a team define the research metrics for the project? How can one plan it out?
๐Ÿ‘︎ 2
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Dexter_4202021
๐Ÿ“…︎ Sep 21 2021
๐Ÿšจ︎ report
This Chinese Super Scale Intelligence Model, 'Wu Dao 2.0', Claims To Be Trained Using 1.75 Trillion Parameters, Surpassing All Prior Models to Achieve a New Breakthrough in Deep Learning

Deep learning is one area of technology where ambition has no barriers. According to a recent announcement by The Beijing Academy of Artificial Intelligence (BAAI) in China, yet another milestone has been achieved in the field with its "Wu Dao" AI system. GPT-3 sparked new interest among AI researchers in super-scale pre-trained models. Using this approach and 175 billion parameters, it achieved exceptional results across natural language processing (NLP) tasks. However, it lacks any form of cognitive ability or common sense; despite their size, such models cannot engage in tasks such as open dialogue, visual reasoning, and so on. With Wu Dao, the researchers plan to address this issue. This is China's first attempt at a home-grown super-scale intelligent model system.

Article: https://www.marktechpost.com/2021/06/13/this-chinese-super-scale-intelligence-model-wu-dao-2-0-claims-to-be-trained-using-1-75-trillion-parameters-surpassing-all-prior-models-to-achieve-a-new-breakthrough-in-deep-learning/

Reference: https://syncedreview.com/2021/03/23/chinas-gpt-3-baai-introduces-superscale-intelligence-model-wu-dao-1-0/

๐Ÿ‘︎ 33
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/ai-lover
๐Ÿ“…︎ Jun 14 2021
๐Ÿšจ︎ report
Cerebras' Tech Trains "Brain-Scale" AIs, 100 trillion parameters spectrum.ieee.org/cerebra…
๐Ÿ‘︎ 22
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/avturchin
๐Ÿ“…︎ Aug 25 2021
๐Ÿšจ︎ report
Microsoft announced DeepSpeed MoE, a high-performance system that supports massive-scale mixture-of-experts (MoE) models - It enables 3.5 trillion-parameter models on 512 GPUs, 8x larger than existing work microsoft.com/en-us/resea…
๐Ÿ‘︎ 38
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/QuantumThinkology
๐Ÿ“…︎ Aug 20 2021
๐Ÿšจ︎ report
"Reward is enough", Silver et al 2021 {DM} (manifesto: reward losses enough at scale (compute/parameters/tasks) to induce all important capabilities like memory/exploration/generalization/imitation/reasoning) sciencedirect.com/scienceโ€ฆ
๐Ÿ‘︎ 44
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/gwern
๐Ÿ“…︎ Jun 10 2021
๐Ÿšจ︎ report
Hey guys I need your help… my betta has 2 bulges on his side and some scales turning silver on the underside of his belly. Water parameters are all good reddit.com/gallery/odhi22
๐Ÿ‘︎ 2
๐Ÿ’ฌ︎
๐Ÿ‘ค︎ u/Lo0815
๐Ÿ“…︎ Jul 04 2021
๐Ÿšจ︎ report