Since the public release of GPT-3 in mid-2020, AI has entered an era of foundation models, scaling, and general-purpose algorithms. As AI systems become increasingly capable, they create new public safety risks that need to be monitored. These include accident risks, as well as the risk of malicious applications that were previously impossible.
We built AI Tracker to monitor cutting-edge developments in this fast-moving field in real time, helping researchers and policy specialists better understand the AI risk landscape.
If we think a new model has important public safety or security implications, we add it to the tracker. New entries usually introduce a capability that hadn’t previously existed, or represent the proliferation of a flagged capability of concern.
Each entry includes our best assessment of the model's scale (number of parameters, dataset size, and total training compute in FLOPs), a short description of the model and its capabilities, and examples of outputs the model has generated.
Our methodology is constantly evolving. If you believe we’re omitting useful information or have any suggestions for us, please submit a correction above, or email us at firstname.lastname@example.org.
Want to stay on top of updates to AI Tracker? Subscribe below to see what models we're adding, and which capabilities and trends we're following — delivered each month, straight to your inbox.