• cognitive psychology, sequence learning is inherent to human ability because it is an integrated part of conscious and nonconscious learning as well as activities...
    15 KB (1,974 words) - 21:14, 25 October 2023
  • Transformer (deep learning architecture)
    to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism...
    106 KB (13,091 words) - 21:14, 29 April 2025
  • Seq2seq (redirect from Sequence-to-sequence)
    models, and text summarization. Seq2seq uses sequence transformation: it turns one sequence into another sequence. One naturally wonders if the problem of...
    19 KB (2,474 words) - 04:43, 23 March 2025
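    The Seq2seq entry above describes sequence transformation: encode one sequence, then decode another, token by token. A minimal sketch of that decode loop, with a toy stand-in "model" that reverses its input (the tokens, functions, and start/end symbols here are invented for illustration):

    ```python
    def seq2seq_decode(encode, step, src, eos="</s>", max_len=20):
        """Generic seq2seq loop: encode the source once, then emit
        output tokens one at a time until an end-of-sequence token."""
        state = encode(src)
        out = []
        tok = "<s>"  # start-of-sequence token
        for _ in range(max_len):
            tok, state = step(tok, state)
            if tok == eos:
                break
            out.append(tok)
        return out

    # toy stand-in for a trained model: "translates" a sequence into its reverse
    encode = lambda src: list(src)
    def step(prev_tok, state):
        return (state[-1], state[:-1]) if state else ("</s>", state)

    reversed_seq = seq2seq_decode(encode, step, ["a", "b", "c"])
    ```

    A trained model would replace `encode` and `step` with learned networks; the surrounding loop is the same.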
  • accurate". Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014). "Sequence to Sequence Learning with Neural Networks" (PDF). Electronic Proceedings of the Neural...
    89 KB (10,413 words) - 06:01, 17 April 2025
  • Associative sequence learning (ASL) is a neuroscientific theory that attempts to explain how mirror neurons are able to match observed and performed actions...
    9 KB (1,235 words) - 11:31, 13 April 2025
  • learning. The topic has been studied in relation to real world systems (dynamic control systems), artificial grammar learning and sequence learning most...
    28 KB (3,673 words) - 09:47, 13 August 2023
  • Attention Is All You Need
    Sutskever, Ilya; Vinyals, Oriol; Le, Quoc Viet (14 December 2014). "Sequence to sequence learning with neural networks". arXiv:1409.3215 [cs.CL]. [first version...
    15 KB (3,915 words) - 20:36, 1 May 2025
  • Deep learning
    Retrieved 2020-02-25. Sutskever, I.; Vinyals, O.; Le, Q. (2014). "Sequence to Sequence Learning with Neural Networks" (PDF). Proc. NIPS. arXiv:1409.3215....
    180 KB (17,764 words) - 08:07, 11 April 2025
  • Ilya Sutskever (category Machine learning researchers)
    Sutskever worked with Oriol Vinyals and Quoc Viet Le to create the sequence-to-sequence learning algorithm, and worked on TensorFlow. He is also one of the AlphaGo...
    27 KB (2,174 words) - 21:41, 19 April 2025
  • Mamba is a deep learning architecture focused on sequence modeling. It was developed by researchers from Carnegie Mellon University and Princeton University...
    11 KB (1,159 words) - 19:42, 16 April 2025
  • Generalization (learning) Knowledge representation and reasoning Memory Memory Encoding Merge (linguistics) Method of loci Mnemonic Sequence learning Tokens in...
    47 KB (6,096 words) - 23:59, 26 January 2025
  • Long short-term memory (category Deep learning)
    is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNN that can last...
    52 KB (5,788 words) - 14:40, 12 March 2025
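    The LSTM entry above mentions gated short-term memory as the advantage over plain RNNs. A single cell step can be sketched in NumPy; the packed-weight layout and shapes here are one common convention, chosen for illustration:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, U, b):
        """One LSTM cell step. W, U, b stack the input, forget,
        output, and candidate transforms (4*n_h rows)."""
        z = W @ x + U @ h + b
        n = h.size
        i = sigmoid(z[:n])        # input gate: how much new info to write
        f = sigmoid(z[n:2*n])     # forget gate: how much old cell state to keep
        o = sigmoid(z[2*n:3*n])   # output gate: how much cell state to expose
        g = np.tanh(z[3*n:])      # candidate cell update
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

    rng = np.random.default_rng(0)
    n_in, n_h = 3, 4
    W = rng.normal(size=(4 * n_h, n_in))
    U = rng.normal(size=(4 * n_h, n_h))
    b = np.zeros(4 * n_h)
    h, c = np.zeros(n_h), np.zeros(n_h)
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
    ```

    The additive `f * c + i * g` update is what lets gradients flow across many steps instead of vanishing.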
  • researchers at Google. It learns to represent text as a sequence of vectors using self-supervised learning. It uses the encoder-only transformer architecture...
    31 KB (3,528 words) - 01:20, 29 April 2025
  • Attention (machine learning)
    machine learning method that determines the relative importance of each component in a sequence relative to the other components in that sequence. In natural...
    36 KB (3,494 words) - 17:00, 1 May 2025
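    The attention entry above describes weighting each component of a sequence by its relevance to the others. A minimal scaled dot-product self-attention sketch in NumPy (function name and shapes are illustrative, not from the article):

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Weight each value by how relevant its key is to each query."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                # pairwise relevance
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # softmax: each row sums to 1
        return weights @ V, weights

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))  # a toy "sequence" of 4 vectors
    out, w = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
    ```

    Each output row is a convex combination of the input rows, with the softmax weights expressing relative importance.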
  • close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be...
    140 KB (15,513 words) - 11:41, 29 April 2025
  • accurate". Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014). "Sequence to Sequence Learning with Neural Networks" (PDF). Electronic Proceedings of the Neural...
    84 KB (8,626 words) - 11:12, 27 April 2025
  • Reinforcement learning
    Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs...
    64 KB (7,580 words) - 08:49, 30 April 2025
  • are frequently in the wrong sequence, errors that are similar to phonological mistakes made by young children when learning a language. As the bird ages...
    66 KB (8,038 words) - 20:11, 1 May 2025
  • In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from...
    54 KB (6,794 words) - 06:02, 19 April 2025
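    The ensemble entry above says multiple learners can outperform any single one; the simplest combiner is a plurality vote over their predictions (labels and votes below are made up for illustration):

    ```python
    from collections import Counter

    def majority_vote(predictions):
        """Combine predictions from several base models by plurality vote."""
        return Counter(predictions).most_common(1)[0][0]

    # three hypothetical base classifiers disagree; the ensemble takes the majority
    votes = ["spam", "spam", "ham"]
    combined = majority_vote(votes)
    ```

    Voting helps most when the base models make independent errors, so individual mistakes are outvoted.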
  • myriad of learning paradigms, notably unsupervised learning, supervised learning, reinforcement learning, multimodal learning, and sequence learning. In addition...
    16 KB (2,181 words) - 23:19, 4 September 2024
  • between encoding and retrieval in the domain of sequence learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 32 (1): 118–130. doi:10...
    30 KB (3,921 words) - 02:07, 3 October 2024
  • domain data is an important component of learning systems. In genomics, a typical representation of a sequence is a vector of k-mers frequencies, which...
    72 KB (8,279 words) - 04:23, 21 April 2025
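    The genomics entry above mentions representing a sequence as a vector of k-mer frequencies. A minimal sketch of that representation (the DNA string below is an invented example):

    ```python
    from collections import Counter

    def kmer_frequencies(seq, k):
        """Represent a sequence by the relative frequency of each
        length-k substring (k-mer), counted over a sliding window."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {kmer: n / total for kmer, n in counts.items()}

    freqs = kmer_frequencies("ACGACG", 2)  # 2-mers: AC, CG, GA, AC, CG
    ```

    Fixing a k-mer vocabulary turns variable-length sequences into fixed-length vectors that standard learners can consume.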
  • approach models reinforcement learning as a sequence modelling problem. Similar to Behavior Cloning, it trains a sequence model, such as a Transformer...
    12 KB (1,285 words) - 19:17, 6 December 2024
  •  1700–1709. Sutskever, I.; Vinyals, O.; Le, Q. V. (2014). "Sequence to sequence learning with neural networks" (PDF). Twenty-eighth Conference on Neural...
    89 KB (10,702 words) - 10:21, 19 April 2025
  • In machine learning, support vector machines (SVMs, also support vector networks) are supervised max-margin models with associated learning algorithms...
    65 KB (9,068 words) - 08:13, 28 April 2025
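    The SVM entry above calls them max-margin models; the training objective behind that phrase is a margin regularizer plus a hinge penalty on points inside the margin. A small subgradient-descent sketch on an invented two-point dataset:

    ```python
    import numpy as np

    def hinge_loss(w, b, X, y, C=1.0):
        """Max-margin objective: ||w||^2/2 plus hinge penalty on margin violations."""
        margins = y * (X @ w + b)
        return 0.5 * w @ w + C * np.maximum(0.0, 1.0 - margins).sum()

    def subgradient_step(w, b, X, y, lr=0.1, C=1.0):
        """One subgradient step: only points with margin < 1 contribute."""
        margins = y * (X @ w + b)
        viol = margins < 1.0
        gw = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        gb = -C * y[viol].sum()
        return w - lr * gw, b - lr * gb

    X = np.array([[2.0, 0.0], [-2.0, 0.0]])  # toy linearly separable data
    y = np.array([1.0, -1.0])
    w, b = np.zeros(2), 0.0
    before = hinge_loss(w, b, X, y)
    w, b = subgradient_step(w, b, X, y)
    after = hinge_loss(w, b, X, y)
    ```

    Practical solvers (SMO, coordinate descent) are far more refined, but optimize this same trade-off between margin width and violations.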
  • mirror neurons are trained through Hebbian or Associative learning (see Associative Sequence Learning). However, if premotor neurons need to be trained by...
    86 KB (10,278 words) - 23:31, 11 April 2025
  • Q-learning is a reinforcement learning algorithm that trains an agent to assign values to its possible actions based on its current state, without requiring...
    29 KB (3,835 words) - 15:13, 21 April 2025
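    The Q-learning entry above describes assigning values to actions from the current state without a model of the environment. The core of the algorithm is a single tabular update rule (states, actions, and parameter values below are invented for illustration):

    ```python
    def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
        """One tabular Q-learning step: move Q[s][a] toward the
        bootstrapped target reward + gamma * max_a' Q[s'][a']."""
        best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
        Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

    # toy two-state table: s1 already has a known-good action worth 1.0
    Q = {"s0": {"left": 0.0, "right": 0.0},
         "s1": {"left": 1.0, "right": 0.0}}
    q_update(Q, "s0", "right", reward=0.0, s_next="s1")
    # Q["s0"]["right"] moves to 0.5 * (0 + 0.9 * 1.0) = 0.45
    ```

    Because the target uses the max over next-state actions rather than the action actually taken, Q-learning is off-policy.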
  • Procedural memory Proximodistal trend Sequence learning Adams JA (June 1971). "A closed-loop theory of motor learning". J mot Behav. 3 (2): 111–49. doi:10...
    28 KB (3,349 words) - 05:51, 16 April 2025
  • Serial reaction time (category Learning)
    between the sequence. Because participants are not made aware of the repeated sequence, their improvement in performance suggests implicit learning is taking place...
    11 KB (1,395 words) - 03:16, 29 April 2024
  • this idea to categorization, language, motor control, sequence learning, reinforcement learning and theory of mind. At other times,[clarification needed]...
    2 KB (231 words) - 23:12, 22 October 2023