Deep learning is now being used to translate between languages, predict how proteins fold, analyze medical scans, and play games as complex as Go, to name just a few applications of a technique that is now becoming pervasive. Success in those and other realms has brought this machine-learning technique from obscurity in the early 2000s to dominance today.

Although deep learning's rise to fame is relatively recent, its origins are not. In 1958, back when mainframe computers filled rooms and ran on vacuum tubes, knowledge of the interconnections between neurons in the brain inspired Frank Rosenblatt at Cornell to design the first artificial neural network, which he presciently described as a "pattern-recognizing device." But Rosenblatt's ambitions outpaced the capabilities of his era, and he knew it. Even his inaugural paper was forced to acknowledge the voracious appetite of neural networks for computational power, bemoaning that "as the number of connections in the network increases ... the burden on a conventional digital computer soon becomes excessive."

Eventually, more-powerful computers made it possible to construct networks with vastly more connections and neurons and hence greater ability to model complex phenomena. Researchers used that ability to break record after record as they applied deep learning to new tasks.

While deep learning's rise may have been meteoric, its future may be bumpy. Like Rosenblatt before them, today's deep-learning researchers are nearing the frontier of what their tools can achieve. To understand why this will reshape machine learning, you must first understand why deep learning has been so successful and what it costs to keep it that way.

Deep learning is a modern incarnation of the long-running trend in artificial intelligence that has been moving from streamlined systems based on expert knowledge toward flexible statistical models. Early AI systems were rule based, applying logic and expert knowledge to derive results.