Preface
As a participant in the Internet era, I have witnessed first-hand the rise and explosion of the Internet, and I have also watched artificial intelligence move from long stagnation and unclear direction to the explosive growth of recent years. Looking back to when I was choosing my research direction in graduate school, the field was still debating whether biological intelligence or data intelligence was the right path. Who would have thought that a few years later, the explosion of big data would carry data-driven artificial intelligence from theory into reality. Although true intelligence is still far off, this remains a promising direction and a worthwhile attempt. I have long wanted to organize and record this period of history, and now I have finally put it into action: thus the AI Chronicle was born.
This article compiles milestones in the development of modern artificial intelligence and will be updated continuously. Events are arranged in reverse chronological order by year. If you spot omissions or errors, please leave a comment to correct them.
Historical Overview (Abridged)
- 1946 The first electronic computer is born
- 1950-1970 Artificial intelligence is defined and research enters its enlightenment stage (it was briefly a hot field, but technical limitations stalled progress)
- 1970-1990 The AI winter: people questioned whether intelligence was truly achievable, and capital gradually withdrew
- 1990-2006 With the emergence of the Internet, some AI theories gained room for practice
- 2007-2019 The mobile Internet triggered a data explosion, and data-driven deep learning became the mainstream research direction of artificial intelligence
- 2020-present Large language models (LLMs) and generative AI built on data science have gradually given programs a degree of intelligence
The Era of Large Language Models (LLMs) and Generative AI
2026
- In February, ByteDance released Seedance 2.0, which has movie-level production capabilities
2025
2025 was the first year that generative AI derived from large language models was put into practical applications: AI Agent startups sprang up and began deploying in real-world scenarios.
- In January, the Chinese company DeepSeek released a model that benchmarked top-tier performance at low cost, sparking widespread discussion
- In February, the concept of AI Agent began to emerge
- In March, OpenAI and other major vendors adopted the MCP (Model Context Protocol) standard that Anthropic had proposed in late 2024
- In November, independent developer Peter Steinberger released and open-sourced OpenClaw, which became the fastest-growing open source project on GitHub; he subsequently joined OpenAI
2024
- In February, OpenAI launched the video generation model Sora
- In March, the European Union passed the Artificial Intelligence Act, the world's first comprehensive law regulating artificial intelligence
- In July, Tesla released FSD 4.0, raising onboard computing power to 720 trillion operations per second (720 TOPS)
- In October, the Nobel Prizes in Physics and Chemistry were awarded to AI-related research for the first time
2023
2023 was the first year of the large-model era; major companies released their own large models in quick succession, too many to list exhaustively here.
- In February, Meta released and open-sourced LLaMA, the first time large-model capabilities were widely disseminated in the open source community
- In February, Microsoft released Copilot
- In March, OpenAI launched the ChatGPT API, allowing developers to call the models directly
- In March, OpenAI released GPT-4, a major leap in model capability
- In March, Anthropic launched Claude
- In March, Cursor, an AI code-writing tool, was released
- In May, Microsoft embedded Copilot into Windows, and later embedded GPT-4 into the Office suite, ushering in the era of large-model commercialization
- In December, Google launched its own large model Gemini
2022
- In July, Midjourney opened its public beta, and AI painting began to enter the public eye
- In August, Stable Diffusion was released and open-sourced; able to run on ordinary consumer-grade graphics cards, it significantly lowered the barrier to AI image generation and drove the rapid worldwide popularity of AI painting
- In November, OpenAI launched ChatGPT, a conversational chatbot, which reached 100 million active users within two months
- In December, artificial intelligence-driven search engine Perplexity was released
2021
- OpenAI releases CLIP and DALL·E, demonstrating cross-modal understanding of text and images
2020
- In May, OpenAI released GPT-3, bursting into prominence and ushering in the era of large language models
The Era of Big Data and Deep Learning – the Eve of Large Models
2018
- OpenAI releases GPT-1, proposing the Generative Pre-trained Transformer language-model paradigm
- Google releases BERT, revolutionizing Natural Language Processing (NLP)
2017
- Eight Google scientists published the paper "Attention Is All You Need", proposing the Transformer architecture that became the technical foundation of modern large language models
2016
- Google DeepMind's AlphaGo defeated Go world champion Lee Sedol, and achieving artificial intelligence through deep learning became a hot topic
2012
- AlexNet, a deep convolutional neural network designed at the University of Toronto, won the ImageNet Challenge, an event widely regarded as the start of the deep learning revolution
2011
- IBM's Watson competed on the knowledge quiz show Jeopardy! and defeated two human champions
2008
- Google released the first commercial version of the open-source Android project, greatly accelerating the mobile Internet and fueling an exponential explosion of data
2007
- Apple releases the iPhone, officially opening the era of the mobile Internet and laying the groundwork for the data explosion
2006
- Professor Geoffrey Hinton of the University of Toronto proposed an effective way to train truly "deep" neural networks (deep belief networks), triggering a new wave of machine learning research
- Yahoo engineer Doug Cutting develops Hadoop, the big-data framework
2003
- Google published the distributed file system GFS (Google File System), laying the foundation for big-data storage; together with the subsequently released MapReduce and BigTable papers, it is known as the "troika" of big data
Artificial Intelligence Exploration Era
1997
- After a long period of exploration, constrained by technology and funding, the development of artificial intelligence had stagnated (the AI winter of 1970-1990). With the rise of the Internet, AI found new directions: the concepts of neural networks and deep learning began to appear and be put into practice. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, a milestone for the field
1986
- David Rumelhart, Geoffrey Hinton and Ronald Williams published the landmark paper "Learning representations by back-propagating errors" in Nature, making the training of multi-layer neural networks practical and laying the groundwork for the later development of deep learning
1956
- Scholars including John McCarthy and Marvin Minsky formally proposed the term "Artificial Intelligence" at the Dartmouth Workshop and set out the field's research agenda, marking its establishment as an independent discipline
1950
- Alan Turing published the paper "Computing Machinery and Intelligence", proposing the Turing test for judging whether a machine possesses human-level intelligence; the concept of artificial intelligence gradually took shape
1946
- ENIAC, the first general-purpose electronic computer, was completed, opening the computer age and marking the starting point of modern artificial intelligence
Original articles on this site follow the "Attribution-NonCommercial-ShareAlike 4.0 License (CC BY-NC-SA 4.0)". Please retain the following attribution when sharing or adapting:
Original author: Jake Tao. Source: "Chronicles of AI – milestones in the development of artificial intelligence (continuously updated)"