🥟 Chao-Down #236 Airbnb turns to AI to prevent New Year house parties, Survey shows 1/3 of companies will replace employees with AI in 2024, OpenAI says Board can overrule CEO on new AI releases
Plus, the UK judicial office now allows ChatGPT to be used in legal rulings.
OpenAI wants us to know that it's taking the safety of AI systems very seriously.
The company announced its "preparedness framework," a set of guidelines for how it will review and handle the safety risks introduced by its AI models, like GPT-4 and beyond. The document details OpenAI's approach to tracking, evaluating, and protecting against risks posed by its models in areas like cyberattacks, influence campaigns, and autonomous weapons.
Using a categorized risk matrix, OpenAI will assess the risk posed by its foundation models, assigning a score (low, medium, high, or critical) to each risk category before and after applying any mitigation strategies. Only models that score "medium" or lower after mitigation can be released to the public. If a model's risk can't be brought below "critical," OpenAI will stop working on the model.
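The gating rule described above boils down to a simple decision over per-category risk scores. Here's a minimal illustrative sketch in Python; the category names, function name, and action labels are assumptions for illustration, not OpenAI's actual implementation:

```python
# Ordered from least to most severe, per the framework's four levels.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def deployment_decision(post_mitigation_scores: dict[str, str]) -> str:
    """Given post-mitigation risk scores per tracked category
    (e.g. cyberattacks, influence campaigns, autonomous weapons),
    return the action the framework prescribes.

    Hypothetical sketch: names and return strings are illustrative.
    """
    # The model is gated on its worst-scoring category.
    worst = max(post_mitigation_scores.values(), key=RISK_LEVELS.index)
    if worst == "critical":
        return "halt development"        # risk can't be brought below critical
    if worst == "high":
        return "not eligible for release"
    return "eligible for public release"  # medium or lower

decision = deployment_decision({
    "cyberattacks": "medium",
    "influence_campaigns": "low",
    "autonomous_weapons": "medium",
})
```

The key design point is that a single high-risk category blocks release regardless of how the model scores elsewhere, which is why the sketch takes the maximum over categories rather than any kind of average.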
The company is also forming a new internal advisory group to evaluate safety reports and send them to executives and board members.
It’s certainly a step in some direction towards building safer AI. The question is: is it the right one?
-Alex, your resident Chaos Coordinator.
What happened in AI? 📰
1 in 3 Companies Will Replace Employees With AI in 2024 (ResumeBuilder.com)
OpenAI Says Board Can Overrule CEO on Safety of New AI Releases (Bloomberg)
Airbnb turns to AI to help prevent house parties (bbc.com)
If AI is making the Turing test obsolete, what might be better? (Ars Technica)
Judges Given the OK to Use ChatGPT in Legal Rulings (Gizmodo)
AI job losses are rising, but the numbers don't tell the full story (CNBC)
Always be Learnin’ 📕 📖
Why Should You (Or Anyone) Become An Engineering Manager? (charity.wtf)
Year One of Generative AI: Six Key Trends (Foundation Capital)
ML system design: 300 case studies (Evidently AI)
Projects to Keep an Eye On 🛠
confident-ai/deepeval: The Evaluation Framework for LLMs (GitHub)
Fine Tuning Mistral 7B on Magic the Gathering Drafts (Substack)
The Latest in AI Research 💡
Self-Evaluation Improves Selective Generation in Large Language Models (arXiv)
Extending Context Window of Large Language Models via Semantic Compression (arXiv)
fengzhang427/LLF-LUT: Lookup Table meets Local Laplacian Filter: Pyramid Reconstruction Network for Tone Mapping (GitHub)
The World Outside of AI 🌎
US nuclear-fusion lab enters new era: achieving ‘ignition’ over and over (Nature)
For Volunteers Harmed in Clinical Trials, an Imperfect Safety Net (undark.org)
Automakers turn to hybrids in the middle of the EV transition (CNBC)
Housing market 'golden handcuffs' are very real, new data finds (Fast Company)
Surge in number of ‘extremely productive’ authors concerns scientists (Nature)
Employees are weaponizing private emails with colleagues (Fortune)