Credit Assignment, Weight Transport, and Biological Plausibility in the Neocortex – How Backpropagation and its Machine Learning Variants Possibly Model Learning in the Brain
While the overarching motivation of the discussion presented here concerns brain processes, the ‘story’ related in this paper has its starting point in machine learning. Specifically, the backpropagation learning algorithm, as first brought to prominence by Rumelhart et al. (1986), provides the ‘rubric’ or guiding structure by which learning in the brain is explored. It might be claimed that the more intuitive direction of investigation would have been to start with how the brain learns and then study machine learning in that light. However, as will become apparent in the course of this discussion, that approach would have been far more fraught with difficulty. Instead, the brain-inspired starting point is the ability of multi-layer neural networks to learn by constructing internal representations. Perhaps the most important machine learning method implementing internal representations is backpropagation in neural networks with middle-layer ‘hidden’ units, arguably the foundation of all machine learning today. However, from the earliest days of backpropagation in the 1980s, it was recognized that backpropagation appeared highly unlikely to model how the brain learns, because it seemed impossible that networks of biological neurons could do what backpropagation requires. This came to be standardly referred to as the biological implausibility of backpropagation. In the decades since Rumelhart et al. brought backpropagation to prominence, the specific problems underwriting that implausibility have been articulated in many ways. Likewise, there have been many proposals attempting to show that backpropagation may indeed be biologically plausible, and perhaps even a critical part of how the brain learns.
In essence, these attempts to establish, or ‘rehabilitate’, the biological plausibility of backpropagation aim to secure backpropagation as a model of how the brain learns.
In the discussion that follows, backpropagation and the problems it sought to address are presented in detail. Then, the factors thought to underlie the biological implausibility of backpropagation are considered, chief among them the credit assignment problem and the weight transport problem. This is followed by detailed discussion of prominent attempts to demonstrate that backpropagation may indeed be how the brain, at least in part, learns. Along the way, the discussion is brought full circle by touching upon how attempts to address the ostensible biological implausibility of backpropagation may in turn inform machine learning, in new neural network architectures which incorporate backpropagation but far surpass the original paradigm’s learning performance.
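To make concrete what backpropagation ‘requires’ of a network, it helps to preview the standard update equations (stated here in conventional textbook notation, which is an expository choice of this discussion rather than a quotation from Rumelhart et al.):

```latex
% Forward pass through layer l, with weights W^l, biases b^l, nonlinearity f:
z^{l} = W^{l} a^{l-1} + b^{l}, \qquad a^{l} = f(z^{l})

% Output-layer error for cost C, then errors propagated backward layer by layer:
\delta^{L} = \nabla_{a} C \odot f'(z^{L}), \qquad
\delta^{l} = \bigl( (W^{l+1})^{\top} \, \delta^{l+1} \bigr) \odot f'(z^{l})

% Weight update with learning rate \eta:
\Delta W^{l} = -\eta \, \delta^{l} \, (a^{l-1})^{\top}
```

The appearance of $(W^{l+1})^{\top}$ in the backward pass is the crux of the weight transport problem discussed below: the error-carrying pathway must use an exact copy of the forward synaptic weights, something no known biological mechanism straightforwardly provides.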
The problem of learning
While not the key focus of the present discussion, the problem of learning internal representations is key to understanding the motivations behind backpropagation, and thereby to seeing how backpropagation came to diverge, apparently, from biologically plausible learning procedures. “Simple two-layer associative networks” (Rumelhart et al., 1985:318), in contrast to multi-layer networks, have only input and output layers, and thus can only map sets of input patterns to sets of output patterns; having no middle layers, they have no hidden units. In turn, they lack the ability to form internal representations; as the authors put it, “the coding provided by the external world must suffice” (Rumelhart et al., 1985:318). These networks essentially work by similarity, since “they map similar input patterns to similar output patterns” (Rumelhart et al., 1985:318), where similarity is determined by the extent of overlap between the patterns. However, such a system is limited: when the outside world presents something sufficiently ‘unexpected’, these simple associative networks cannot carry out their mapping, because they run aground on the dissimilarity between the required input and output patterns. This failure would not occur if the network were capable of internal representations....
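The canonical illustration of this limitation is XOR, where heavily overlapping input patterns demand opposite outputs. The sketch below is an illustrative construction, not taken from the paper: the hidden-layer weights are hand-set rather than learned, to show that the best linear input-to-output mapping cannot fit XOR, while a single layer of hidden units re-codes the input so the mapping becomes trivial.

```python
import numpy as np

# XOR: the inputs (0,1) and (1,1) overlap heavily yet need different outputs,
# so a similarity-based direct input->output mapping fails.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Two-layer associative network: the output is a direct linear map of the
# input (plus bias). Even the best least-squares fit leaves large error.
Xb = np.hstack([X, np.ones((4, 1))])           # append a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
linear_error = np.max(np.abs(Xb @ w - y))      # the best fit predicts 0.5 everywhere

# Multi-layer network: one hidden layer re-codes the input. Hand-set weights
# make the hidden units compute OR and AND; the output reads "OR and not AND".
step = lambda z: (z > 0).astype(float)
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])        # both hidden units sum the inputs
b1 = np.array([-0.5, -1.5])                    # thresholds 0.5 (OR) and 1.5 (AND)
W2 = np.array([1.0, -2.0])                     # OR minus 2*AND
b2 = -0.5
hidden = step(X @ W1 + b1)                     # the internal representation
out = step(hidden @ W2 + b2)                   # reproduces XOR exactly
```

The hidden units here constitute precisely the kind of internal re-representation of the input that a two-layer associative network has no means of constructing.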
